01. Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way.
How should you design the process?
a) Create a scalable environment in GCP for simulating production load.
b) Use the existing infrastructure to test the GCP-based backend at scale.
c) Build stress tests into each component of your application and use resources from the already deployed production backend to simulate load.
d) Create a set of static environments in GCP to test different levels of load—for example, high, medium, and low.
02. To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform.
These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department.
Which two steps should you take?
a) Use persistent disks to store the state. Start and stop the VM as needed.
b) Use the --auto-delete flag on all persistent disks before stopping the VM.
c) Apply a VM CPU utilization label and include it in the BigQuery billing export.
d) Use BigQuery billing export and labels to relate cost to groups.
e) Store all state in local SSD, snapshot the persistent disks, and terminate the VM.
f) Store all state in Cloud Storage, snapshot the persistent disks, and terminate the VM.
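For reference, labeling a dev VM so its costs can be grouped in the BigQuery billing export, and stopping it between work sessions while its persistent disk preserves state, might look like the following sketch (VM name, zone, and label values are illustrative):

```shell
# Label the dev VM so its cost rolls up by team/environment in the
# BigQuery billing export (names and labels are illustrative).
gcloud compute instances update dev-vm-01 \
    --zone=us-central1-a \
    --update-labels=env=dev,team=payments

# State on the attached persistent disk survives stop/start, so the VM
# can be stopped outside working hours to save cost.
gcloud compute instances stop dev-vm-01 --zone=us-central1-a
gcloud compute instances start dev-vm-01 --zone=us-central1-a
```

Billing export to BigQuery is enabled in the Cloud Billing settings; once active, the exported `labels` column lets the finance department attribute cost to each group.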
03. You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database backend.
You want to store the credentials securely. Where should you store the credentials?
a) In the source code
b) In an environment variable
c) In a key management system
d) In a config file that has restricted access through ACLs
e) In a secret management system
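As a sketch of the secret-management approach, storing a database credential in Secret Manager and reading it back might look like this (the secret name and value are illustrative; each microservice's service account would need the Secret Accessor role):

```shell
# Create a secret and add the database credential as a version.
gcloud secrets create db-credentials --replication-policy=automatic
printf 's3cr3t-password' | gcloud secrets versions add db-credentials --data-file=-

# A microservice reads the current credential at startup.
gcloud secrets versions access latest --secret=db-credentials
```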
04. Your company has decided to make a major revision of their API in order to create better experiences for their developers.
They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API.
They want to keep the same SSL and DNS records in place to serve both APIs. What should they do?
a) Configure a new load balancer for the new version of the API.
b) Reconfigure old clients to use a new endpoint for the new API.
c) Have the old API forward traffic to the new API based on the path.
d) Use separate backend services for each API path behind the load balancer.
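A minimal sketch of path-based routing behind one load balancer, assuming backend services named api-v1-backend and api-v2-backend already exist (names are illustrative):

```shell
# One URL map keeps the existing SSL certificate and DNS record, while
# /v2/* traffic is routed to the new API's backend service.
gcloud compute url-maps create api-map --default-service=api-v1-backend
gcloud compute url-maps add-path-matcher api-map \
    --path-matcher-name=api-versions \
    --default-service=api-v1-backend \
    --path-rules="/v2/*=api-v2-backend"
```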
05. Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others.
Network traffic should flow through the web to the API tier, and then on to the database tier. Traffic should not flow between the web and the database tier.
How should you configure the network?
a) Add each tier to a different subnetwork.
b) Set up software-based firewalls on individual VMs.
c) Add tags to each tier and set up routes to allow the desired traffic flow.
d) Add tags to each tier and set up firewall rules to allow the desired traffic flow.
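The tag-plus-firewall-rule approach might be sketched as follows (network name, tags, and ports are illustrative):

```shell
# Allow web -> API traffic only.
gcloud compute firewall-rules create allow-web-to-api \
    --network=prod-net --allow=tcp:8080 \
    --source-tags=web --target-tags=api

# Allow API -> database traffic only.
gcloud compute firewall-rules create allow-api-to-db \
    --network=prod-net --allow=tcp:3306 \
    --source-tags=api --target-tags=db
```

No rule permits web-to-database traffic, so it is blocked by the VPC's implied deny-ingress rule.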
06. Because you do not know every possible future use for the data TerramEarth collects, you have decided to build a system that captures and stores all raw data in case you need it later.
How can you most cost-effectively accomplish this goal?
a) Have the vehicles in the field continue to dump data via FTP, and adjust the existing Linux machines to immediately upload it to Cloud Storage with gsutil.
b) Have the vehicles in the field pass the data to Cloud Pub/Sub and dump it into a Cloud Dataproc cluster that stores data in Apache Hadoop Distributed File System (HDFS) on persistent disks.
c) Have the vehicles in the field continue to dump data via FTP, adjust the existing Linux machines, and use a collector to upload them into Cloud Dataproc HDFS for storage.
d) Have the vehicles in the field stream the data directly into BigQuery.
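For illustration, the upload step on the existing Linux machines could be as simple as a parallel gsutil copy of the FTP drop directory (paths and bucket name are illustrative):

```shell
# -m runs the copy with parallel threads/processes, which matters at
# this data volume; the bucket becomes the durable raw-data archive.
gsutil -m cp -r /var/ftp/incoming gs://terramearth-raw-data/
```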
07. The database administration team has asked you to help them improve the performance of their new database server running on Compute Engine.
The database is used for importing and normalizing the company’s performance statistics. It is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with an 80 GB zonal SSD persistent disk.
What should they change to get better performance from this system in a cost-effective manner?
a) Increase the virtual machine’s memory to 64 GB.
b) Create a new virtual machine running PostgreSQL.
c) Dynamically resize the SSD persistent disk to 500 GB.
d) Migrate their performance metrics warehouse to BigQuery.
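Worth remembering here: persistent disk throughput and IOPS limits scale with provisioned size, so growing the disk raises its performance ceiling without changing the machine type. A resize sketch (disk name, zone, and device path are illustrative):

```shell
# Resize the zonal SSD persistent disk online.
gcloud compute disks resize db-data-disk --size=500GB --zone=us-central1-a

# Then grow the filesystem inside the guest, e.g. for ext4:
sudo resize2fs /dev/sdb
```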
08. Today, TerramEarth maintenance workers receive interactive performance graphs for the last 24 hours (86,400 events) by plugging their maintenance tablets into the vehicle.
The support group wants support technicians to view this data remotely to help troubleshoot problems. You want to minimize the latency of graph loads.
How should you provide this functionality?
a) Execute queries against data stored in Cloud SQL.
b) Execute queries against data indexed by vehicle_id.timestamp in Cloud Bigtable.
c) Execute queries against data stored on daily partitioned BigQuery tables.
d) Execute queries against BigQuery with data stored in Cloud Storage via BigQuery federation.
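The Bigtable option depends on row-key design: a key of the form vehicle_id#reversed_timestamp lets a single prefix scan return a vehicle's most recent events first, with no full-table scan. A small bash sketch of composing such a key (the padding bound and epoch-millisecond values are illustrative):

```shell
# Reverse the timestamp against a fixed upper bound so that newer events
# produce lexicographically smaller (earlier-sorting) row keys.
reversed_ts() { printf '%013d' $((9999999999999 - $1)); }

# Row key = vehicle_id '#' reversed, zero-padded timestamp.
row_key() { printf '%s#%s' "$1" "$(reversed_ts "$2")"; }

row_key veh42 1704067200000   # -> veh42#8295932799999
```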
09. You analyzed TerramEarth’s business requirement to reduce downtime and found that the company can achieve the majority of the time savings by reducing customers’ wait time for parts. You decided to focus on reducing the 3-week aggregate reporting time.
Which modifications to the company’s processes should you recommend?
a) Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.
b) Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.
c) Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.
d) Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.
10. Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup.
Which two steps should they take?
a) Load logs into BigQuery.
b) Load logs into Cloud SQL.
c) Import logs into Stackdriver.
d) Insert logs into Cloud Bigtable.
e) Upload log files into Cloud Storage.
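Combining the archive and analytics steps might look like the following sketch (bucket name, location, dataset, and file layout are illustrative; the storage class for the DR copy is a design choice):

```shell
# Archive the raw logs in Cloud Storage for long-term disaster recovery.
gsutil mb -c coldline -l us-central1 gs://corp-log-archive
gsutil -m cp -r /data/logs gs://corp-log-archive/

# Load the archived logs into BigQuery to try out the analytics features.
bq load --source_format=CSV logs_dataset.app_logs "gs://corp-log-archive/logs/*.csv"
```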