1.
Need to deploy an application on GCP.
It must run in a Debian Linux environment.
The application requires extensive configuration in order to operate correctly.
You want to ensure that you can install Debian distribution updates with minimal manual intervention whenever they become available.
Create a Debian-based Compute Engine instance.
Install and configure the application, and use OS patch management to install available updates.
OS patch management
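For example, a one-off patch job across all instances (a sketch; patch deployments can also be scheduled from the console):
gcloud compute os-config patch-jobs execute --instance-filter-all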
2.
Reduce the number of unplanned rollbacks of erroneous production deployments in your company's web hosting platform.
Improvements to the QA/Test processes accomplished an 80% reduction.
Reduce rollbacks.
3.
Deploy a stateful workload on GCP.
The workload can scale horizontally, but each instance needs to read and write to the same POSIX filesystem.
At high load, the stateful workload needs to support up to 100MB/s of writes.
Key words.
read and write to the same POSIX filesystem.
Create a Cloud Filestore instance and mount it in each instance.
Cloud Filestore is fully-managed.
Shared file system for data.
Up to 100MB/s
Key word
Cloud Filestore
4.
Track whether someone is present in a meeting room reserved for a scheduled meeting.
There are 1,000 meeting rooms across 5 offices on 3 continents.
Each room is equipped with a motion sensor that reports its status every second.
The data from the motion sensor includes only a sensor ID and several different discrete items of information.
large amounts of data.
NoSQL.
Cloud Firestore or Cloud Bigtable would be suitable for storing the sensor data.
5.
Organization:
Finance and Shopping folders.
The development team is assigned the Project Owner role on the Organization.
You want to prevent the development team from creating resources in projects in the Finance folder.
Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group Project Owner role from the Organization.
6.
Dockerfile.
deployments are taking too long.
FROM ubuntu:16.04
COPY . /src
RUN apt-get update && apt-get install -y python python-pip
RUN pip install -r requirements.txt
Optimize the Dockerfile without adversely affecting the application.
Use a slimmed-down base image like Alpine Linux.
Copy the source after package dependencies (Python and pip) are installed.
Cache dependencies: keep the requirements COPY and the install RUN together, before copying the source.
FROM alpine:3.12
COPY requirements.txt /src/requirements.txt
# py3-pip is needed on Alpine to get pip3
RUN apk add --no-cache python3 py3-pip python3-dev && \
    pip3 install --upgrade pip && \
    pip3 install --no-cache-dir -r /src/requirements.txt
COPY . /src
7.
Migrating to the cloud.
Want to analyze a data stream to optimize operations.
They do not have any existing code for this analysis.
The options require a mix of batch and stream processing:
running some hourly jobs and live-processing some data as it comes in.
Which technology should they use?
Cloud Dataflow
allows both batch and stream processing.
Processes data in real time using Apache Beam (Dataflow is the managed runner for Beam pipelines).
It integrates with BigQuery, Cloud Storage, and Cloud Pub/Sub.
Built-in support for autoscaling, fault tolerance, and data integration.
8.
A sharp increase in the number and size of Apache Spark and Hadoop jobs being run in your local datacenter.
You want to utilize the cloud to scale for this upcoming demand with the least amount of operations work and code change.
Google Cloud Dataproc:
the suitable product for scaling the upcoming Apache Spark and Hadoop demand.
A fully managed cloud service for Apache Spark and Hadoop.
Allows creation and deletion of clusters on demand, which helps scale up and down as needed.
Automatic scaling.
Integrates with other GCP services such as BigQuery and Cloud Storage.
9.
on-premises data center
require minimal user disruption.
strict security team requirements for storing passwords.
Federate authentication via SAML 2.0 to the existing Identity Provider
use their existing identity provider for authentication
minimizing user disruption.
10.
Microservices.
Any code change pushed to a remote branch should be
built and tested automatically.
If build and test succeed -> deploy automatically.
Create a Cloud Build trigger based on the development branch.
It tests the code, builds the container, and stores it in Container Registry.
Create a deployment pipeline that watches for new images and deploys the new image to the development cluster.
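A sketch of such a trigger with gcloud (repo owner/name and config file are placeholders):
gcloud builds triggers create github \
  --repo-owner=my-org --repo-name=my-app \
  --branch-pattern='^development$' \
  --build-config=cloudbuild.yaml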
11.
Receive and store the data from 500,000 requests per second.
At times the service will not receive any requests.
Keep costs low.
Cloud Run
handles a large number of requests per second.
Autoscales (down to zero).
Cloud Bigtable:
handles high-volume, low-latency data lookups.
Offers cost-efficient storage.
12.
Cloud Logging.
Want to quickly detect anomalies,
e.g. an unwanted firewall change,
a server breach.
Export logs to a Pub/Sub topic and trigger a Cloud Function with the relevant log events.
The Cloud Function can
process the events and log to Cloud Monitoring for alerting.
13.
Using the Firewall Insights feature in Network Intelligence Center.
You have several firewall rules applied to Compute Engine.
On the Firewall Insights page there are no logs.
Enable Firewall Rules Logging for the firewall rules you want to monitor.
14.
Cloud SQL MySQL.
In case of catastrophic failure <- an unrecoverable error.
Automated backups.
Binary logging.
15.
Don't expect a lot of traffic,
but it could spike occasionally.
Leverage Cloud Load Balancing?
Cost-effective?
How do you handle occasional traffic spikes? ->
Store static content such as HTML and images in a Cloud Storage bucket.
Use Cloud Functions to host the APIs and save the user data in Firestore.
16.
Restrict external IP addresses to approved instances only.
You want to enforce this requirement across all of your Virtual Private Clouds (VPCs).
Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess.
List the approved instances in the allowedValues list.
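A sketch of that policy (ORG_ID and the instance URI are placeholders):
# policy.yaml
name: organizations/ORG_ID/policies/compute.vmExternalIpAccess
spec:
  rules:
  - values:
      allowedValues:
      - projects/my-project/zones/us-central1-a/instances/approved-instance
Apply it with: gcloud org-policies set-policy policy.yaml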
17.
Each document is saved as a separate file.
Documents cannot be deleted or overwritten for the next 5 years.
Create
- a retention policy on the bucket
  for a duration of 5 years.
Create
- a lock on the retention policy.
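With gsutil (BUCKET is a placeholder):
gsutil retention set 5y gs://BUCKET    # 5-year retention policy
gsutil retention lock gs://BUCKET      # lock it (irreversible)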
18.
Integrate environment A and environment B.
A uses a VPC IP range that overlaps with B's IP range.
Cloud VPN connection
from the new VPC to the datacenter.
Create a
Cloud Router
and apply new IP addresses so there is no overlapping IP space.
19.
The database VM has an ext4-formatted disk.
The DB's storage has run out.
How do you remediate this with the least amount of downtime?
Increase the size of the persistent disk and use the
resize2fs command in Linux to match the new disk size.
(No need to detach and re-attach the disk: persistent disks can be
resized online, and resize2fs can grow a mounted ext4 filesystem.)
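Roughly (disk name, zone, and device path are placeholders):
gcloud compute disks resize db-disk --size=500GB --zone=us-central1-a
# then, inside the VM, grow the mounted ext4 filesystem:
sudo resize2fs /dev/sdb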
20.
A GKE cluster
with two microservices running,
Service Mesh,
Config Management.
Want to know which service causes the delay.
Use the Service Mesh visualization in the Cloud Console
to inspect the telemetry between the microservices,
or
the Trace viewer in the Cloud Console
to trace specific requests and identify any issues.
21.
Large data to send to the cloud.
The data set must be available 24 hrs a day.
To be used through a SQL interface.
Load the data into Google Bigquery.
Fully managed.
Cloud native.
Easy to use for people with SQL experience.
Real-time analysis.
Cloud SQL - an RDB; queries get slow and capacity would be exceeded.
Google Cloud Storage - requires extra steps before analysis.
Google Cloud Datastore - designed as a transactional data store; not suited for analytics or large datasets.
22.
Keep the PCI scope for transaction data as small as possible.
How should you design your architecture?
Create a tokenizer service and store only tokenized data.
Sensitive card data (card number, CVC, etc.) is replaced with a token.
The token can be used to track transactions and analyze trends.
The tokenized data is stored in your database.
The original sensitive data is stored securely in a PCI-compliant environment.
23.
Choose the tool that
captures errors and helps analyze historical log data.
Define the requirements.
Assess viable logging tools.
24.
Want to delete backup files older than 90 days from a bucket,
and optimize ongoing Cloud Storage spend.
Write a lifecycle management rule in JSON and push it to the bucket with gsutil.
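A minimal version of that rule (BUCKET is a placeholder):
# lifecycle.json
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 90}}]}
gsutil lifecycle set lifecycle.json gs://BUCKET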
25.
migrating a J2EE application to the cloud.
What should be considered?
Deploy a continuous integration tool with automated testing in a staging environment.
Port the application code to run on Google App Engine.
Select an automation framework to reliably provision the cloud infrastructure.
26.
Requests to a microservice-based application take a very long time.
API requests traverse many services.
Want to know which service takes the longest.
Stackdriver Trace:
breaks down the request latencies at each microservice.
27.
An app with microservices.
Each microservice needs a configurable, specific number of replicas.
You also want to be able to address a specific microservice from any other microservice in a uniform way.
Deploy each microservice as a Deployment.
Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.
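A sketch with kubectl (image and port are placeholders):
kubectl create deployment backend --image=gcr.io/my-project/backend --replicas=3
kubectl expose deployment backend --port=8080
# other microservices can now reach it at
# http://backend.default.svc.cluster.local:8080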
28.
Improve
importing and normalizing performance statistics. <- a disk-I/O-heavy workload.
Current:
MySQL on Debian,
n1-standard-8 with an 80 GB SSD.
Dynamically resize the SSD persistent disk to 500 GB
(persistent disk IOPS/throughput scale with disk size).
29.
production
24/7
acceptance
office hours.
development.
Cloud Scheduler
to trigger a Cloud Function
that stops the development and acceptance environments after office hours and starts them before office hours.
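A sketch of the stop job (function URL, schedule, and time zone are placeholders):
gcloud scheduler jobs create http stop-dev-acc \
  --schedule="0 19 * * 1-5" \
  --uri="https://REGION-PROJECT.cloudfunctions.net/stopInstances" \
  --time-zone="Europe/Amsterdam"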
30.
Under high traffic,
the RDB crashed, but the replica was never promoted to master.
Why? How do you avoid this?
Routinely schedule failovers of your db.
Use a load balancer or proxy server to route traffic.
Use a DBMS which supports automatic failover (Google Cloud SQL).
31.
A Python script prints an error that it cannot connect to Bigquery.
Create a new service account with Bigquery access and execute the script with it.
32.
Bigquery
large amount of data.
Legislation may require you to
delete such information.
(You may have to delete all of an individual's sensitive data.)
Use a unique identifier for each individual.
On a delete request, delete all rows from Bigquery with this identifier.
Grouping rows under one identifier lets you delete them all at once.
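For example, with the bq CLI (table and id are placeholders):
bq query --use_legacy_sql=false \
  'DELETE FROM `my-project.dataset.transactions` WHERE user_id = "USER_ID"'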
33.
Want to migrate Hadoop without modifying the underlying infrastructure,
minimizing effort and cost.
Dataproc:
run Hadoop jobs on a managed service
without the need to manage the underlying infrastructure.
34.
Instances in an autoscaling instance group
keep getting terminated and relaunched every minute.
They do not have public IPs.
You want to verify via curl that a proper response is returned.
Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
(If health checks cannot reach the instances, they are marked unhealthy and recreated.)
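The documented health-check source ranges, as a firewall rule (network and port are placeholders):
gcloud compute firewall-rules create allow-lb-health-checks \
  --network=default --action=ALLOW --rules=tcp:80 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16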
35.
Want to block access to a bucket from external networks.
VPC Service Controls: a perimeter that includes the project with the bucket.
An access level with the CIDR of the office network.
36.
If a zonal outage occurs, the service must be restored in another zone.
Configure the Compute Engine instances with an instance template for the application.
Use a regional persistent disk for the application data.
Whenever a zonal outage occurs,
use the instance template to spin up the application in another zone in the same region,
and use the regional persistent disk for the application data.
37.
Easily stage a new version,
then promote the staged version to production.
App Engine:
easy, automated staging and promotion of new versions.
Minimal operational overhead.
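A sketch of stage-then-promote (version name is a placeholder):
gcloud app deploy --version=v2 --no-promote            # stage, serve no traffic
gcloud app services set-traffic default --splits=v2=1  # promote to production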
38.
Compute Engine
dropping requests.
A single application process is consuming all available CPU.
No abnormal load on any other related system.
Increase the maximum number of instances in the autoscaling group.
39.
Cloud Monitoring workspace.
Site Reliability Engineer(SRE)
triage incidents.
How will you triage?
Navigate the predefined dashboards in the Cloud Monitoring workspace;
add metrics and create alert policies.
In Monitoring, using the predefined dashboards seems to be the recommended approach.
40.
click data.
6,000 clicks per minute, up to 8,500 clicks per second.
Which storage do you choose?
High-volume data: where do you store it?
Google Cloud Bigtable.
High-speed data streaming.
Ability to handle bursts of traffic.
Suits the need for long-term storage.
Scalable.
NoSQL.
41.
Updating an API.
Keep the old version of the API available.
Serve both old and new versions via the same SSL cert and DNS name.
Use separate backend pools for each API path behind the load balancer.
The load balancer routes requests to the appropriate backend service.
Each API version gets its own backend service;
the load balancer looks at the API version or path and routes to the right one.
This way SSL and DNS stay the same.
42.
The Dev team and the Network team are separate.
The Dev team runs an app with sensitive data on Compute Engine.
The Dev team needs administrative permissions on Compute Engine.
Company policy says all network resources must be managed by the Network team.
The Dev team does not want the Network team to have access to the sensitive data.
What should you do?
Shared VPC: assign the Network Admin role to the Network team.
Create a second project without a VPC and configure it as a Shared VPC service project.
Assign the Compute Admin role to the Dev team.
I.e., split into projects A and B:
A holds the Shared VPC, and the Network team has the Network Admin role there.
B creates no VPC, uses A's Shared VPC, and the Dev team has Compute Admin there.
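Roughly (project IDs are placeholders):
gcloud compute shared-vpc enable host-project-a
gcloud compute shared-vpc associated-projects add service-project-b \
  --host-project=host-project-a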
43.
A bug that testing missed was found in production.
You want to prevent this from happening again. What should you do?
Increase the load on your test env:
run overload tests.
Load testing tools can simulate many simultaneous requests.
Profiling tools show performance metrics;
check for bottlenecks and the like.
Canary release: expose to a small subset of users first.
Use different configurations: test on different OSes and hardware.
44.
A portal site on LAMP with 2 replicas,
deployed in a single region,
autoscaled, but the DB is not.
It is open to a small group and meets a 99% SLA.
It is about to be opened to the public.
Even under the additional user load, the SLA must be maintained, verified through a resiliency-testing strategy.
What do you do?
Create synthetic random user input,
replay synthetic load until autoscale logic is triggered on at least one layer,
and confirm that autoscaling happens.
Introduce chaos to the system by terminating random resources on both zones
to confirm the system can recover.
This also lets you test the system's maximum capacity.
45.
Optimize the performance of an accurate, real-time, weather-charting application.
50,000 sensors, 10 readings a second,
each a timestamp and a sensor reading.
Where should you store it?
Key words:
high-volume,
scales horizontally,
real-time.
-> Cloud Bigtable.
46.
On-premises TO GCP:
moving in-house systems to the cloud.
VM state needs to persist.
Use the --no-auto-delete flag on all persistent disks and stop the VM.
47.
After an update, latency is observed for certain users.
The problem did not exist before the update.
Roll back.
Use Stackdriver Trace and Logging to diagnose in a development/test/staging env.
48.
Users report frequent errors in a specific part of the app.
There is no logging or monitoring.
You want to analyze it, but the issue cannot be reproduced.
You want minimal disruption. What should you do?
Update the GKE cluster to use
Cloud Operations (for GKE).
Use the GKE Monitoring dashboard to investigate logs from affected Pods.
This provides logging and monitoring.
49.
You will update an app,
but you do not want running instances to be updated;
new instances should be created with the new version.
What should you do?
Rolling update:
select the Opportunistic update mode.
The update is applied only when new instances are created.
A gentle, low-disruption update.
50.
Instances cannot have a public IP.
There is no VPN between Google Cloud and the office.
You need to connect to the machines via SSH.
Configure Identity-Aware Proxy (IAP) for the instance.
Ensure that you have the IAP-secured Tunnel User role.
Use the gcloud command-line tool to SSH into the instance.
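For example (instance name and zone are placeholders):
gcloud compute ssh my-instance --zone=us-central1-a --tunnel-through-iap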
Moving an application from on-premises to the cloud.
Usage is steady.
Minimize cost.
Compute Engine
with CPU and memory options similar to the current on-premises configuration.
Install the Cloud Monitoring agent.
Load test with a normal traffic level on the application.
Follow the rightsizing recommendations in the Cloud Console.
I.e.: configure instances similar to the current setup,
install the monitoring agent,
generate a load similar to today's,
and follow the recommendation for which size to use.
Compute products
no-ops
auto scaling.
GKE with containers.
(A managed instance group still needs ops work.)
data lake
An ingestion pipeline to collect unstructured data from different sources.
After the data is stored in Google Cloud,
a recommendation engine processes it.
The structure of the data retrieved from the source systems can change at any time.
The data must be stored exactly as it was retrieved, for reprocessing purposes, in case the data structure is incompatible with the current processing pipelines.
I.e.: since the retrieved data's structure may change over time,
store exactly what is retrieved today
so it can be reprocessed later.
Store the data in a Cloud Storage bucket,
and design the processing pipelines to retrieve the data from the bucket.
Creating a GKE cluster.
The new app needs internet access,
but the company does not allow Compute Engine instances to have public IPs.
What do you do?
Create the GKE cluster as a private cluster,
with a Cloud NAT gateway for the cluster subnet.
Cloud NAT (network address translation) lets resources without external IP addresses make outbound connections to the internet.
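A sketch of the NAT setup (names and region are placeholders):
gcloud compute routers create nat-router --network=my-vpc --region=us-central1
gcloud compute routers nats create nat-config --router=nat-router \
  --region=us-central1 --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges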
Customers can range from 100 to 500,000 users.
Collect user information.
-> autoscaling + NoSQL?
server side
managed instance group.
data side
Cloud Bigtable
Cloud Datastore
GKE.
The application should handle a large amount of load.
Application latency must stay below a threshold.
Use a load-testing tool to simulate the expected number of concurrent users
and total requests to your application,
and inspect the results.
How can you tell that production deployments are linked to source code commits and fully auditable?
Make the container tag match the source code commit hash.
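For example (image path is a placeholder):
COMMIT=$(git rev-parse --short HEAD)
docker build -t gcr.io/my-project/app:$COMMIT .
docker push gcr.io/my-project/app:$COMMIT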
App Engine <- needs access to a VPC,
which is connected to the on-premises env via a Cloud VPN tunnel.
Want to connect the app in the cloud to the on-premises DB.
Configure Serverless VPC Access.
sensitive data.
workloads.
Must be stored on physically separated hardware.
Workloads from different clients must also be separated.
Create a sole-tenant node group and add a node for each client.
A sole-tenant node group
gives you hardware dedicated to a single tenant.
https://cloud.google.com/compute/docs/nodes/sole-tenant-nodes?hl=ko
Use affinity labels based on the node name
when creating Compute Engine instances, in order to host each workload on the correct node.
(Affinity labels.)
sensitive data in Bigquery.
Encryption keys must be generated outside of Google Cloud.
Import the key into Cloud KMS.
Create a dataset in Bigquery using the customer-managed key option
and select the created key.
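A sketch with the bq CLI (key path and dataset name are placeholders):
bq mk -d \
  --default_kms_key=projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key \
  my_dataset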
Large data.
Large user.
Do not want to worry about the infrastructure.
Building a CD pipeline for a project in Git.
Code changes should be verified before being deployed to production.
Have Jenkins monitor tags in the repository.
Deploy staging tags to a staging env for testing.
After testing, tag the repository for production
and deploy to the prod env.
Want to connect Compute Engine
and on-premises.
Need at least 20 Gbps.
Create a VPC.
Use Dedicated Interconnect to connect to on-premises.
All application logs must be retained for 5 years
for future analysis and legal needs.
What should you do?
Stackdriver Monitoring
export to Bigquery.
(Cloud Storage could also work; Bigquery presumably because analysis is required.)
Moving conversation data into Bigtable.
How do you sanitize this data of personally identifiable information or payment card information before initial storage?
De-identify the data
with the
Cloud Data Loss Prevention (DLP) API.
Open source.
Autoscale on demand.
Support continuous software delivery.
Run multiple segregated copies of the same application stack.
Deploy the app using dynamic templates.
Route traffic to specific services based on URL.
GKE (Google Kubernetes Engine), Helm, Jenkins.
Cloud shell.
You will be using a custom utility;
where should you store it so it is on the default execution path and persists across sessions?
~/bin
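For example (tool name is a placeholder):
mkdir -p ~/bin && cp mytool ~/bin/
# ~/bin is on Cloud Shell's default PATH, and the home directory persists across sessions.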
Team A wants to keep using its locally built custom tools in the cloud,
but you want to advocate for the adoption of Google Cloud Deployment Manager.
Name two business risks of migrating to Deployment Manager:
(what kind of question is this...)
Cloud Deployment Manager only supports automation of Google Cloud resources.
Cloud Deployment Manager can be used to permanently delete cloud resources.
IAM.
Per department, managed centrally.
A Single Organization with Folders for each department.
1.
Testing takes a long time; moving to the cloud.
How do you minimize the time tests take?
Compute Engine managed instance groups with autoscaling.
Want to back up a PostgreSQL database used for authentication.
Large updates are frequent.
Replication requires private address space communication.
What should you do?
Google Cloud Dedicated Interconnect:
provides a direct physical connection
and can transfer large volumes of data between networks.
Moving MySQL to the cloud via Cloud VPN,
but there are latency issues
and packet loss.
How do you fix it?
Configure a Google Cloud Dedicated Interconnect.
See: GCP "building high-throughput VPNs"
(building a higher-performance VPN).
Data processed in Dataproc must be encrypted with your own key:
create a key with Cloud KMS (Key Management Service)
and set the encryption key on the bucket to the Cloud KMS key.
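A sketch (key ring, key, location, and bucket are placeholders):
gcloud kms keyrings create my-ring --location=us-central1
gcloud kms keys create my-key --keyring=my-ring --location=us-central1 --purpose=encryption
gsutil kms encryption \
  -k projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key \
  gs://BUCKET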
A new network was created in the project.
A GCE (Compute Engine) instance has an open SSH port.
You want to know where the network came from.
Search for the create/insert entry in the Logging section.
3-tier
Web -> API -> DB
scales independently.
How do you configure the network?
Add tags to each tier.
Set up firewall rules to allow the desired traffic flow,
so traffic flows in one direction only.
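For example, one rule per allowed hop (ports are placeholders):
gcloud compute firewall-rules create web-to-api \
  --allow=tcp:8080 --source-tags=web --target-tags=api
gcloud compute firewall-rules create api-to-db \
  --allow=tcp:3306 --source-tags=api --target-tags=db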
Preemptible VMs:
cheap, but the capacity can be taken back at any time.
If an instance is about to be preempted, you want the application to shut down gracefully first.
Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console
when you create the new VM.
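For example (script path is a placeholder):
gcloud compute instances create my-vm --preemptible \
  --metadata-from-file shutdown-script=shutdown.sh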
10.
The cluster should scale as demand for your application changes.
How do you make the cluster autoscale as the application load changes?
Update the existing Kubernetes Engine cluster with:
gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
Pull messages from Pub/Sub and store them in Filestore,
in real time.
Autoscale the deployment based on subscription/num_undelivered_messages.
API lifecycle.
Provide stability for API consumers
in case the API makes backward-incompatible changes.
Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change.
A new Linux kernel module was installed,
and many batch runs started failing.
You want to collect the failure details and hand them to the dev team.
What should you do?
Stackdriver Logging:
use gcloud or the Cloud Console to connect to the serial console and observe the logs.
Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics.
After uploading files to Cloud Storage,
you want to confirm that the on-premises and cloud files are identical.
Use gsutil hash -c FILE_NAME to generate CRC32C hashes of all on-premises files,
use gsutil ls -L gs://[BUCKET] to collect CRC32C hashes of the uploaded files,
and compare the two sets of hashes.
Streamline IAM and expedite the analysis and audit process.
How do you review the IAM changes of the last 12 months?
(ACL: access control list.)
Enable logging export to Google Bigquery,
and use ACLs and views to scope the data shared with the auditor.
There are several departments, each with its own projects.
Each department's members share responsibility for the same projects.
You want minimal maintenance and maximum overview of IAM permissions.
How do you organize this?
Create a Google Group per department.
Add all department members to their group.
Create a folder per department.
Grant the group its IAM permissions at the folder level.
Add the projects to the folders.
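A sketch for one department (folder ID, group, and role are placeholders):
gcloud resource-manager folders add-iam-policy-binding FOLDER_ID \
  --member=group:finance-dept@example.com --role=roles/editor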
3.
long-term disaster recovery backup.
Test the analytics features available to them there.
Want to archive a large amount of data
and also analyze it.
BigQuery.
Google Cloud Storage.
2.
Using Bigquery.
on-premises env -> cloud,
using Cloud VPN.
Avoid data exfiltration by malicious insiders, compromised code, and accidental oversharing.
What will you do?
Configure VPC Service Controls
and configure Private Google Access.