Google Professional Cloud DevOps Engineer (GCP-PCDE) Certification Exam Sample Questions

We have prepared Google Professional Cloud DevOps Engineer (GCP-PCDE) certification sample questions to make you aware of actual exam properties. This sample question set gives you information about the Google GCP-PCDE exam pattern, question format, difficulty level of questions, and time required to answer each question. To get familiar with the Google Cloud Platform - Professional Cloud DevOps Engineer (GCP-PCDE) exam, we suggest you try our Sample Google GCP-PCDE Certification Practice Exam in a simulated Google certification exam environment.

To test your knowledge and understanding of concepts with real-time, scenario-based questions, we strongly recommend that you prepare and practice with the Premium Google GCP-PCDE Certification Practice Exam. The premium certification practice exam helps you identify topics in which you are well prepared and topics in which you may need further training to achieve a great score in the actual Google Cloud Platform - Professional Cloud DevOps Engineer (GCP-PCDE) exam.

Google GCP-PCDE Sample Questions:

01. You are running a production application on Compute Engine. You want to monitor the key metrics of CPU, Memory, and Disk I/O time.
You want to ensure that the metrics are visible by the team and will be explorable if an issue occurs. What should you do?
(Choose 2)
a) Set up logs-based metrics based on your application logs to identify errors.
b) Export key metrics to a Google Cloud Function and then analyze them for outliers.
c) Set up alerts in Cloud Monitoring for key metrics breaching defined thresholds.
d) Create a Dashboard with key metrics and indicators that can be viewed by the team.
e) Export key metrics to BigQuery and then run hourly queries on the metrics to identify outliers.
 
02. Several teams in your company want to use Cloud Build to deploy to their own Google Kubernetes Engine (GKE) clusters.
The clusters are in projects that are dedicated to each team. The teams only have access to their own projects. One team should not have access to the cluster of another team.
You are in charge of designing the Cloud Build setup, and want to follow Google-recommended practices. What should you do?
a) Limit each team member’s access so that they only have access to their team’s clusters. Ask each team member to install the gcloud CLI and to authenticate themselves by running “gcloud init”. Ask each team member to execute Cloud Build builds by using “gcloud builds submit”.
b) Create a single project for Cloud Build that all the teams will use. List the service accounts in this project and identify the one used by Cloud Build. Grant the Kubernetes Engine Developer IAM role to that service account in each team’s project.
c) In each team’s project, list the service accounts and identify the one used by Cloud Build for each project. In each project, grant the Kubernetes Engine Developer IAM role to the service account used by Cloud Build. Ask each team to execute Cloud Build builds in their own project.
d) In each team’s project, create a service account, download a JSON key for that service account, and grant the Kubernetes Engine Developer IAM role to that service account in that project. Create a single project for Cloud Build that all the teams will use. In that project, encrypt all the service account keys by using Cloud KMS. Grant the Cloud KMS CryptoKey Decrypter IAM role to Cloud Build’s service account. Ask each team to include in their “cloudbuild.yaml” files a step that decrypts the key of their service account, and use that key to connect to their cluster.
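
The per-project setup described in option (c) can be sketched with the gcloud CLI. The project ID and project number below are placeholders: Cloud Build's default service account is named `PROJECT_NUMBER@cloudbuild.gserviceaccount.com`, and the Kubernetes Engine Developer role is `roles/container.developer`.

```shell
# Placeholders: substitute each team's own project ID and project number.
TEAM_PROJECT_ID="team-a-project"
TEAM_PROJECT_NUMBER="123456789012"

# Grant Cloud Build's default service account the Kubernetes Engine
# Developer role, scoped to that team's project only.
gcloud projects add-iam-policy-binding "${TEAM_PROJECT_ID}" \
  --member="serviceAccount:${TEAM_PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role="roles/container.developer"
```

Because each binding is made in a single team's project, one team's builds cannot deploy to another team's cluster, and no service account keys need to be exported or shared.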
 
03. Your Site Reliability Engineering team does toil work to archive unused data in tables within your application’s relational database. This toil is required to ensure that your application has a low Latency Service Level Indicator (SLI) to meet your Service Level Objective (SLO).
Toil is preventing your team from focusing on a high-priority engineering project that will improve the Availability SLI of your application.
You want to: (1) reduce repetitive tasks to avoid burnout, (2) improve organizational efficiency, and (3) follow the Site Reliability Engineering recommended practices.
What should you do?
a) Identify repetitive tasks that contribute to toil and onboard additional team members for support.
b) Identify repetitive tasks that contribute to toil and automate them.
c) Change the SLO of your Latency SLI to accommodate toil being done less often. Use this capacity to work on the Availability SLI engineering project.
d) Assign the Availability SLI engineering project to the Software Engineering team.
 
04. You are deploying an application to a Kubernetes cluster that requires a username and password to connect to another service.
When you deploy the application, you want to ensure that the credentials are used securely in multiple environments with minimal code changes.
What should you do?
a) Bundle the credentials with the code inside the container and secure the container registry.
b) Leverage a CI/CD pipeline to update the variables at build time and inject them into a templated Kubernetes application manifest.
c) Store the credentials as a Kubernetes Secret and let the application access it via environment variables at runtime.
d) Store the credentials as a Kubernetes ConfigMap and let the application access it via environment variables at runtime.
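
Option (c) can be sketched as follows. The Secret name, key names, literal values, and container image below are hypothetical; the Pod spec maps each Secret key to an environment variable with `secretKeyRef`, so the same application code works unchanged in every environment.

```shell
# Create the Secret (name and literal values are hypothetical).
kubectl create secret generic db-credentials \
  --from-literal=username=appuser \
  --from-literal=password='s3cr3t'

# Expose the Secret's keys to the container as environment variables.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: gcr.io/my-project/app:latest   # hypothetical image
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
EOF
```

A ConfigMap (option d) uses the same mechanism but is intended for non-sensitive configuration; Secrets can additionally be protected with RBAC and encryption at rest.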
 
05. You have a Compute Engine instance that uses the default Debian image. The application hosted on this instance recently suffered a series of crashes that you weren’t able to debug in real time: the application process died suddenly every time.
The application usually consumes 50% of the instance’s memory, and normally never more than 70%, but you suspect that a memory leak was responsible for the crashes. You want to validate this hypothesis.
What should you do?
a) Go to Metrics Explorer and look for the “compute.googleapis.com/guest/system/problem_count” metric for that instance. Examine its value for when the application crashed in the past.
b) In Cloud Monitoring, create an uptime check for your application. Create an alert policy for that uptime check to be notified when your application crashes. When you receive an alert, use your usual debugging tools to investigate the behavior of the application in real time.
c) Install the Cloud Monitoring agent on the instance. Go to Metrics Explorer and look for the “agent.googleapis.com/memory/percent_used” metric for that instance. Examine its value for when the application crashed in the past.
d) Install the Cloud Monitoring agent on the instance. Create an alert policy on the “agent.googleapis.com/memory/percent_used” metric for that instance to be alerted when the memory used is higher than 75%. When you receive an alert, use your usual debugging tools to investigate the behavior of the application in real time.
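
The first step of option (d), installing the monitoring agent on the Debian instance, can be sketched as below. (Google now ships the Ops Agent as the successor to the legacy Cloud Monitoring agent; it collects the guest memory metrics the question relies on.)

```shell
# On the instance: download and install the Ops Agent, which reports
# memory usage under the agent.googleapis.com/* metric namespace.
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install
```

The alert policy on `agent.googleapis.com/memory/percent_used` with a 75% threshold can then be created in the Cloud Monitoring console (or with `gcloud alpha monitoring policies create`), so you are notified above the normal 70% ceiling and can attach your debugging tools before the process dies.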
 
06. You support a website with a global audience. The website has a frontend web service and a backend database service that runs on different clusters. All clusters are scaled to handle at least ⅓ of the total user traffic.
You use 4 different regions in Google Cloud and Cloud Load Balancing to direct traffic to a region closer to the user.
You are applying a critical security patch to the backend database. You successfully patch the database in the first 2 regions, but you make a configuration error while patching Region 3. The unsuccessful patching causes 50% of user requests to Region 3 to time out.
You want to mitigate the impact of unsuccessful patching on users. What should you do?
a) Add more capacity to the frontend of Region 3.
b) Revert the Region 3 backend database and run it without the patch.
c) Drain the requests to Region 3 and redirect new requests to other regions.
d) Back up the database in the backend of Region 3 and restart the database.
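
Draining Region 3, as in option (c), can be sketched with a backend capacity scaler. The backend service, instance group, and region names below are hypothetical; setting the scaler to 0 lets in-flight requests finish while the load balancer routes all new requests to the remaining healthy regions.

```shell
# Placeholders: backend service, instance group, and region are hypothetical.
gcloud compute backend-services update-backend web-backend-service \
  --instance-group=region3-ig \
  --instance-group-region=us-east1 \
  --capacity-scaler=0.0 \
  --global
```

Since every cluster is sized for at least one third of total traffic, the other three regions can absorb Region 3's load while you fix the failed patch.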
 
07. You have an application deployed on Google Kubernetes Engine (GKE). The application logs are captured by Cloud Logging. You need to remove sensitive data before it reaches the Cloud Logging API.
What should you do?
a) Customize the GKE clusters’ Fluentd configuration with a filter rule. Update the Fluentd Config Map and Daemon Set in the GKE cluster.
b) Write the log information to the container file system. Execute a second process inside the container that will filter the sensitive information before writing to Standard Output.
c) Configure a filter in the Cloud Logging UI to exclude the logs with sensitive data.
d) Configure BigQuery as a sink for the logs from Cloud Logging, and then create a Data Loss Prevention job.
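
Option (a) amounts to adding a filter rule to the Fluentd configuration that GKE uses to ship container logs. A minimal sketch, assuming the sensitive data is a US-style SSN in a `message` field (the tag pattern, field name, and regex are hypothetical):

```
# Added to the Fluentd ConfigMap in the GKE cluster. record_transformer
# rewrites each record before it is sent to the Cloud Logging API, so
# the sensitive value never leaves the cluster.
<filter kubernetes.**>
  @type record_transformer
  enable_ruby true
  <record>
    message ${record["message"].gsub(/\d{3}-\d{2}-\d{4}/, "[REDACTED-SSN]")}
  </record>
</filter>
```

Options (c) and (d) act only after the logs have already reached Cloud Logging, which is why they do not satisfy the requirement.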
 
08. You work with a video rendering application that publishes small tasks as messages to a Cloud Pub/Sub topic. You need to deploy the application that will execute these tasks on multiple virtual machines (VMs).
Each task takes less than 1 hour to complete. The rendering is expected to be completed within a month. You need to minimize rendering costs.
What should you do?
a) Deploy the application as a managed instance group with Preemptible VMs.
b) Deploy the application as a managed instance group. Configure a Committed Use Discount for the amount of CPU and memory required.
c) Deploy the application as a managed instance group.
d) Deploy the application as a managed instance group with Preemptible VMs. Configure a Committed Use Discount for the amount of CPU and memory required.
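
Option (a) can be sketched as below. The template and group names, machine type, size, and zone are hypothetical; Preemptible VMs fit because each task takes under an hour and can simply be retried if a VM is reclaimed, while a one-month job is too short to benefit from a committed use discount.

```shell
# Hypothetical names and sizing; --preemptible selects the discounted,
# reclaimable VM class.
gcloud compute instance-templates create render-template \
  --machine-type=e2-standard-4 \
  --preemptible

gcloud compute instance-groups managed create render-mig \
  --template=render-template \
  --size=10 \
  --zone=us-central1-a
```

Unacknowledged Pub/Sub messages are redelivered, so a task interrupted by preemption is picked up again by another VM in the group.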
 
09. You support a Python application running in production on Compute Engine. You want to debug some of the application code by inspecting the value of a specific variable. What should you do?
a) Create a Cloud Debugger logpoint with the variable at a specific line location in your application's source code, and view the value in the Logs Viewer.
b) Use your local development environment and code editor to set up a breakpoint in the source code, run the application locally, and then inspect the value of the variable.
c) Modify the source code of the application to log the value of the variable, deploy to the development environment, and then run the application to capture the value in Cloud Logging.
d) Create a Cloud Debugger snapshot at a specific line location in your application's source code, and view the value of the variable in the Google Cloud Console.
 
10. Your application runs in Google Kubernetes Engine (GKE). You want to use Spinnaker with the Kubernetes Provider to perform blue/green deployments and control which version of the application receives traffic. What should you do?
a) Use a Kubernetes Replica Set and use Spinnaker to create a new service for each new version of the application to be deployed.
b) Use a Kubernetes Replica Set and use Spinnaker to update the Replica Set for each new version of the application to be deployed.
c) Use a Kubernetes Deployment and use Spinnaker to update the deployment for each new version of the application to be deployed.
d) Use a Kubernetes Deployment and use Spinnaker to create a new deployment object for each new version of the application to be deployed.

Answers:

Question: 01
Answer: c, d
Question: 02
Answer: c
Question: 03
Answer: b
Question: 04
Answer: c
Question: 05
Answer: d
Question: 06
Answer: c
Question: 07
Answer: a
Question: 08
Answer: a
Question: 09
Answer: d
Question: 10
Answer: b

Note: Please notify us by email at feedback@vmexam.com of any errors in the Google Cloud Platform - Professional Cloud DevOps Engineer (GCP-PCDE) certification exam sample questions.
