AWS DOP-C02 Certification Exam Sample Questions

We have prepared AWS DevOps Engineer Professional (DOP-C02) certification sample questions to make you aware of actual exam properties. This sample question set gives you information about the AWS-DevOps exam pattern, question format, difficulty level of the questions, and the time required to answer each question. To get familiar with the AWS Certified DevOps Engineer - Professional exam, we suggest you try our Sample AWS DOP-C02 Certification Practice Exam in a simulated AWS certification exam environment.

To test your knowledge and understanding of concepts with real-time, scenario-based AWS DOP-C02 questions, we strongly recommend that you prepare and practice with the Premium AWS-DevOps Certification Practice Exam. The premium AWS-DevOps certification practice exam helps you identify topics in which you are well prepared and topics in which you may need further training to achieve a great score in the actual AWS Certified DevOps Engineer - Professional exam.

AWS DOP-C02 Sample Questions:

01. A company is reviewing its AWS account security policies. The company has staff members in different countries and wants to monitor its AWS accounts for unusual behavior that is associated with an IAM identity. The company wants to send a notification to any staff member for whom unusual activity is detected. The company also wants to send a notification to the user’s team leader. An external messaging platform will send the notifications.
The platform requires a target user-id for each recipient. The company already has an API on AWS that the company can use to return the user-id of the staff member and the team leader from IAM user names. The company manages its AWS accounts by using AWS Organizations.
Which solution will meet these requirements?
a) Designate an account in the organization as the Amazon GuardDuty administrator. Add the company’s AWS accounts as GuardDuty member accounts that are associated with the GuardDuty administrator account. Create an AWS Lambda function to perform the user-id lookup and to send notifications to the external messaging platform. Create an Amazon EventBridge (Amazon CloudWatch Events) rule in the GuardDuty administrator account to match the Impact:IAMUser/AnomalousBehavior notification type and invoke the Lambda function.
b) Designate an account in the organization as the Amazon Detective administrator. Add the company’s AWS accounts as Detective member accounts that are associated with the Detective administrator account. Create an AWS Lambda function to perform the user-id lookup and to send notifications to the external messaging platform. Create an Amazon EventBridge (Amazon CloudWatch Events) rule in the Detective administrator account to match the Impact:IAMUser/AnomalousBehavior notification type and invoke the Lambda function.
c) Designate an account in the organization as the Amazon GuardDuty administrator. Add the company’s AWS accounts as GuardDuty member accounts that are associated with the GuardDuty administrator account. Create an AWS Lambda function to perform the user-id lookup and to send notifications to the external messaging platform. Create an Amazon Simple Notification Service (Amazon SNS) topic in the GuardDuty administrator account to match the Impact:IAMUser/AnomalousBehavior notification type and invoke the Lambda function.
d) Designate an account in the organization as the Amazon Detective administrator. Add the company’s AWS accounts as Detective member accounts that are associated with the Detective administrator account. Create an AWS Lambda function to perform the user-id lookup and to send notifications to the external messaging platform. Create an Amazon Simple Notification Service (Amazon SNS) topic in the Detective administrator account to match the Impact:IAMUser/AnomalousBehavior notification type and invoke the Lambda function.
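For context on option (a): the Impact:IAMUser/AnomalousBehavior finding type is matched by an EventBridge event pattern in the GuardDuty administrator account. The sketch below is a minimal, hypothetical Python re-implementation of that matching logic, assuming the standard GuardDuty-to-EventBridge event shape; EventBridge evaluates the JSON pattern itself, not code like this.

```python
# Hypothetical sketch of the EventBridge event pattern from option (a),
# plus a toy matcher that mimics how EventBridge would evaluate it.
EVENT_PATTERN = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"type": [{"prefix": "Impact:IAMUser/AnomalousBehavior"}]},
}

def matches(event: dict) -> bool:
    """Exact match on source and detail-type, prefix match on the
    GuardDuty finding type, mirroring EVENT_PATTERN above."""
    return (
        event.get("source") == "aws.guardduty"
        and event.get("detail-type") == "GuardDuty Finding"
        and event.get("detail", {}).get("type", "").startswith(
            "Impact:IAMUser/AnomalousBehavior"
        )
    )
```

A matching event would invoke the Lambda function, which performs the user-id lookup and calls the external messaging platform.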
 
02. A DevOps engineer is managing a legacy application on AWS. The application is a monolithic Windows program that runs on a single Amazon EC2 instance. The source code for the application is not available, so the application cannot be modified. The application has a memory leak and malfunctions when memory utilization on the EC2 instance increases to more than 90%.
The DevOps engineer has configured the unified Amazon CloudWatch agent on the EC2 instance to collect the operating system's memory utilization metrics. The DevOps engineer needs to implement a solution to prevent the application from malfunctioning.
Which combination of steps will meet these requirements with the MOST operational efficiency?
(Select TWO.)
a) Create an Amazon EventBridge (Amazon CloudWatch Events) rule that publishes to an Amazon Simple Notification Service (Amazon SNS) topic when memory utilization increases to more than 80%.
b) Create a metric filter on memory utilization in Amazon CloudWatch Logs. Create a CloudWatch alarm on the memory utilization filter. Configure the alarm to publish to an Amazon Simple Notification Service (Amazon SNS) topic when the memory utilization increases to more than 80%.
c) Create a CloudWatch alarm on the memory utilization metric. Configure the alarm to publish to an Amazon Simple Notification Service (Amazon SNS) topic when the memory utilization increases to more than 80%.
d) Configure an AWS Lambda function to restart the application by using AWS Systems Manager Run Command. Subscribe the Lambda function to the Amazon Simple Notification Service (Amazon SNS) topic.
e) Configure the EC2 instance to run a script that restarts the application. Subscribe the EC2 instance to the Amazon Simple Notification Service (Amazon SNS) topic.
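The pattern behind options (c) and (d) is an alarm at 80% (before the 90% malfunction point) that publishes to SNS, with a subscribed Lambda function issuing a Systems Manager Run Command restart. The sketch below is a hypothetical handler body that only builds the Run Command request from the SNS-delivered alarm; a real handler would pass the resulting dict to boto3's ssm.send_command, and the service name is assumed.

```python
import json

ALARM_THRESHOLD = 80  # alarm fires here, before the 90% malfunction point

def build_restart_command(sns_event: dict):
    """Parse the CloudWatch alarm delivered through SNS and build the
    Systems Manager Run Command request that would restart the app.
    Returns None unless the alarm just entered the ALARM state."""
    alarm = json.loads(sns_event["Records"][0]["Sns"]["Message"])
    if alarm.get("NewStateValue") != "ALARM":
        return None
    instance_id = alarm["Trigger"]["Dimensions"][0]["value"]
    return {
        "InstanceIds": [instance_id],
        "DocumentName": "AWS-RunPowerShellScript",  # Windows instance
        # "MyLegacyApp" is an assumed, illustrative service name
        "Parameters": {"commands": ["Restart-Service MyLegacyApp"]},
    }
```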
 
03. A company controls the source code for an application in AWS CodeCommit. The company is creating a CI/CD pipeline for the application by using AWS CodePipeline.
The pipeline must start automatically when changes occur to the main branch of the CodeCommit repository. Changes occur frequently throughout the day, so the pipeline must be as responsive as possible.
What should a DevOps engineer do to meet these requirements?
a) Configure the pipeline to periodically check the repository’s main branch for changes. Start the pipeline when changes are detected.
b) Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to detect changes to the repository’s main branch. Configure the pipeline to start in response to the changes.
c) Configure the repository to periodically run an AWS Lambda function. Configure the function to check the repository’s main branch and to start the pipeline when the function detects changes.
d) Configure the repository to publish a notification to an Amazon Simple Notification Service (Amazon SNS) topic when changes occur to the repository’s main branch. Subscribe the pipeline to the SNS topic.
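Option (b) relies on CodeCommit emitting repository state-change events to EventBridge, so the pipeline starts within seconds instead of waiting on polling. As a rough illustration, the rule's matching logic, re-implemented as a hypothetical Python predicate over the standard CodeCommit event shape, looks like this:

```python
def starts_pipeline(event: dict) -> bool:
    """Toy re-implementation of an EventBridge rule that fires only when
    the main branch of a CodeCommit repository is updated."""
    detail = event.get("detail", {})
    return (
        event.get("source") == "aws.codecommit"
        and event.get("detail-type") == "CodeCommit Repository State Change"
        and detail.get("referenceType") == "branch"
        and detail.get("referenceName") == "main"
        and detail.get("event") == "referenceUpdated"
    )
```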
 
04. A company is using AWS CodeBuild to build an application. Company policy requires all build artifacts to be encrypted at rest. The company must limit access to the artifacts to IAM users in an operations IAM group that have permission to assume an operations IAM role.
Which solution will meet these requirements?
a) Add a post-build command to the CodeBuild build specification to push build objects to an Amazon S3 bucket. Set a bucket policy that prevents upload to the bucket unless the request includes the x-amz-server-side-encryption header. Add a Deny statement for all actions with a NotPrincipal element that references the operations IAM group.
b) Add a post-build command to the CodeBuild build specification to push build objects to an Amazon S3 bucket. Configure an S3 event notification to invoke an AWS Lambda function to get the object, encrypt the object, and put the object back into the S3 bucket with a tag key of Encrypted and a tag value of True. Set a bucket policy with a Deny statement for all actions with a NotPrincipal element that references the operations IAM group. Include in the policy a Condition element that references the Encrypted tag.
c) Add a post-build command to the CodeBuild build specification to push build objects to an Amazon S3 bucket that has S3 default encryption enabled. Set a bucket policy that contains a Deny statement for all actions with a NotPrincipal element that references the operations IAM role.
d) Add a post-build command to the CodeBuild build specification to call the AWS Key Management Service (AWS KMS) Encrypt API operation and pass the artifact to AWS KMS for encryption with a specified KMS key. Push the encrypted artifact to an Amazon S3 bucket. Set up the operations IAM group as the only user for the specified KMS key.
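Option (c)'s approach pairs S3 default encryption (encryption at rest with no extra build steps) with a Deny/NotPrincipal bucket policy scoped to the operations IAM role. Below is a hypothetical sketch of such a policy built in Python; bucket and role names are illustrative, and in practice the NotPrincipal element would also list the assumed-role session ARN and the account root.

```python
def bucket_policy(bucket: str, ops_role_arn: str) -> dict:
    """Sketch of a bucket policy denying all S3 actions to every principal
    except the operations role (names are assumed, not from the source)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "NotPrincipal": {"AWS": ops_role_arn},
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }
```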
 
05. A company runs an application on Amazon EC2 instances that use the latest version of the Amazon Linux 2 AMI. When server administrators apply new security patches, the server administrators manually remove affected instances from service, patch the instances, and place the instances back into service.
A new security policy requires the company to apply security patches within 7 days after patches are released. The company’s security team must verify that all the EC2 instances are compliant with this policy. The patching must occur during a time that has the least impact on users.
Which solution will automate compliance with these requirements?
a) Configure an AWS CodeBuild project to download and apply patches to all the instances over SSH. Use an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule to run the CodeBuild project during a maintenance window.
b) Use AWS Systems Manager Patch Manager to create a patch baseline. Create a script on the EC2 instances to use the AWS CLI to pull the latest patches from Patch Manager. Create a cron job to schedule the script to run during a maintenance window.
c) Create a script to apply any available security patches. Create a cron job to schedule the script to run during a maintenance window. Install the script and cron job on the application AMI. Redeploy the application.
d) Enlist all the EC2 instances in an AWS Systems Manager Patch Manager patch group. Use Patch Manager to create a patch baseline. Configure a maintenance window to apply the patch baseline.
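The 7-day requirement at the heart of this scenario is what Patch Manager's patch baselines and compliance reporting automate. As a plain illustration of the policy itself (not of any Patch Manager API), a hypothetical compliance check might look like this:

```python
from datetime import date, timedelta
from typing import Optional

PATCH_WINDOW = timedelta(days=7)  # policy: patch within 7 days of release

def is_compliant(released: date, patched: Optional[date], today: date) -> bool:
    """An instance is compliant if the patch was applied within 7 days of
    release, or if it is unpatched but the 7-day window has not elapsed."""
    if patched is not None:
        return patched - released <= PATCH_WINDOW
    return today - released <= PATCH_WINDOW
```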
 
06. A DevOps team has an application that stores critical company assets in an existing Amazon S3 bucket. The team uses a single AWS Region. A new company policy requires the team to deploy the application to multiple Regions. The assets must always be accessible. Users must use the same endpoint to access the assets.
Which combination of steps should the team take to meet these requirements in the MOST operationally efficient way?
(Select THREE.)
a) Use AWS CloudFormation StackSets to create a new S3 bucket that has versioning enabled in each required Region. Copy the assets from the existing S3 bucket to the new S3 buckets. Create an AWS Lambda function to copy files that are added to the new S3 bucket in the primary Region to the additional Regions.
b) Use AWS CloudFormation StackSets to create a new S3 bucket that has versioning enabled in each required Region. Create multiple S3 replication rules on the new S3 bucket in the primary Region to replicate all its contents to the additional Regions. Copy the assets from the existing S3 bucket to the new S3 bucket in the primary Region.
c) Create an Amazon CloudFront distribution. Configure new origins for each S3 bucket. Create an origin group that contains all the newly created origins. Update the default behavior of the distribution to use the new origin group.
d) Create an Amazon CloudFront distribution. Configure new origins for each S3 bucket. Create a Lambda@Edge function to validate the availability of the origin and to route the viewer request to an available nearby origin.
e) Create an Amazon Route 53 alias record. Configure a failover routing policy that uses the newly created S3 buckets as a target.
f) Create an Amazon Route 53 alias record. Configure a simple routing policy that uses the Amazon CloudFront distribution as a target.
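Option (c) depends on CloudFront origin-group failover: CloudFront serves from the primary origin and retries a secondary origin only when the primary returns one of the configured failover status codes. The toy model below sketches that decision under the assumption that the failover criteria are the 5xx codes shown; the real behavior is configured on the distribution, not coded.

```python
def choose_origin(primary_status: int, secondary_status: int) -> str:
    """Toy model of CloudFront origin-group failover with assumed 5xx
    failover criteria: try the primary, fall back to the secondary."""
    failover_codes = {500, 502, 503, 504}
    if primary_status not in failover_codes:
        return "primary"
    if secondary_status not in failover_codes:
        return "secondary"
    return "error"
```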
 
07. A company uses AWS CloudTrail on all its AWS accounts and sends all trails to a centralized Amazon S3 bucket. The company sends specified events to a third-party logging tool by using S3 event notifications and an AWS Lambda function. The company has hired a security services provider to set up a security operations center.
The security services provider wants to receive the CloudTrail logs through an Amazon Simple Queue Service (Amazon SQS) queue. The company must continue to use S3 event notifications and the Lambda function to send events to the third-party logging tool.
What is the MOST operationally efficient way to meet these requirements?
a) Add an additional notification to the S3 bucket for all CreateObject events to send all objects to the SQS queue.
b) Replace the existing S3 event notification destination with an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the Lambda function and the SQS queue to the topic.
c) Replace the existing S3 event notification destination with an Amazon Kinesis data stream. Create consumers for the Lambda function and the SQS queue.
d) Configure the trail to send logs to Amazon CloudWatch Logs. Subscribe the SQS queue to the CloudWatch Logs log group.
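Option (b) is the classic SNS fan-out pattern: the S3 bucket publishes one event notification to a topic, and every subscriber (the existing Lambda function and the provider's SQS queue) receives its own copy. A toy model of the delivery semantics, purely for illustration:

```python
def fan_out(message: str, subscribers: list) -> dict:
    """Toy model of SNS fan-out: one published message is delivered to
    every subscriber independently."""
    return {name: message for name in subscribers}
```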
 
08. A development team is designing an application that has a large customer base spread across three AWS Regions. The application will use an Amazon DynamoDB table that must be available in all three Regions to deliver low-latency data access. When the table is updated in one Region, the changes must seamlessly propagate to the other Regions.
How should a DevOps engineer configure the table to meet these requirements with the LEAST operational overhead?
a) Create a DynamoDB table in each of the three Regions. Give each table the same name.
b) Configure three DynamoDB tables in each of the three Regions. Use the AWS SDK for DynamoDB to synchronize data changes among the tables.
c) Configure a multi-Region, multi-active DynamoDB global table that includes the three Regions.
d) Use DynamoDB global tables to configure a primary table in one Region and a read replica in each of the other Regions.
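A point worth remembering here: DynamoDB global tables are multi-active (every replica Region accepts writes) and resolve concurrent writes to the same item with last-writer-wins. A toy sketch of that conflict-resolution rule, for illustration only:

```python
def resolve(item_versions: list) -> dict:
    """Toy model of last-writer-wins: when the same item is written in two
    Regions concurrently, the write with the latest timestamp is kept."""
    return max(item_versions, key=lambda v: v["timestamp"])
```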
 
09. A DevOps engineer needs to implement a blue/green deployment process for an application on AWS. The DevOps engineer must gradually shift the traffic between the environments.
The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group. The application stores data on an Amazon RDS Multi-AZ DB instance. Amazon Route 53 provides external DNS.
Which combination of steps should the DevOps engineer take to meet these requirements?
(Select THREE.)
a) Create a second Auto Scaling group behind the same ALB.
b) Create a second Auto Scaling group behind a second ALB.
c) In Route 53, create a second alias record that points to the new environment. Use a failover routing policy to choose between the two records.
d) In Route 53, create a second alias record that points to the new environment. Use a weighted routing policy to choose between the two records.
e) Configure the new EC2 instances to use the primary RDS DB instance.
f) Configure the new EC2 instances to use the standby RDS DB instance.
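Option (d)'s gradual shift works because Route 53 weighted routing sends each record a share of traffic proportional to its weight. The arithmetic is simple enough to state directly:

```python
def traffic_split(blue_weight: int, green_weight: int):
    """Route 53 weighted routing: each record receives
    weight / (sum of all weights) of the traffic."""
    total = blue_weight + green_weight
    return blue_weight / total, green_weight / total
```

Shifting from (90, 10) toward (0, 100) over several updates moves traffic to the green environment gradually.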
 
10. A company has a legacy API that runs on a fleet of Amazon EC2 instances behind a public Application Load Balancer (ALB). The ALB has access logging enabled and stores the access logs in Amazon S3. The API is available through the hostname api.example.com. The company uses Amazon Route 53 to manage the hostname.
Developers have rebuilt five of the API endpoints by using a different AWS Lambda function for each endpoint. A DevOps engineer wants to test the new versions of the Lambda functions with a limited number of random customers. To ensure compatibility with an existing log processing service, the test must not affect the ALB access logs.
How should the DevOps engineer perform the test to meet these requirements?
a) Add the five Lambda functions as targets to the existing target group for the EC2 instances. Set the weight in the target group of each Lambda function target to be less than the EC2 instance targets. Amend the default rule on the ALB to enable target group-level stickiness.
b) Create a single target group that includes all the Lambda functions as individual targets. On the ALB, create a new listener rule that includes a host header condition that matches the API endpoint’s hostname. Add the target group to the listener rule. Specify a lower weight for the new target group than the weight of the default rule’s target group.
c) Create a new ALB and a new target group for each Lambda function. Create a new listener rule that includes a host header condition that matches each of the endpoints and forwards traffic to the target groups. Create a new Route 53 alias record with a weight of 10. Update the existing Route 53 record for the api.example.com hostname with a weight of 90.
d) Create a new target group for each Lambda function. On the ALB, create new listener rules that include a path condition that matches each of the different endpoints. Set the rules to be weighted between the Lambda function target group for that endpoint and the instance-based target group.
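Option (d) keeps every request flowing through the existing ALB (so access logs are unaffected) while path-based listener rules split each rebuilt endpoint's traffic between its Lambda target group and the EC2 target group. The sketch below is a hypothetical toy model of that weighted forwarding; the path and target-group names are made up for illustration.

```python
import random

def route_request(path: str, lambda_paths: dict, canary_fraction: float,
                  rng=random.random) -> str:
    """Toy model of weighted ALB listener rules: a path matching a rebuilt
    endpoint goes to its Lambda target group for canary_fraction of
    requests; everything else still reaches the EC2 target group."""
    if path in lambda_paths and rng() < canary_fraction:
        return lambda_paths[path]
    return "ec2-target-group"
```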

Answers:

Question: 01
Answer: a
Question: 02
Answer: c, d
Question: 03
Answer: b
Question: 04
Answer: c
Question: 05
Answer: d
Question: 06
Answer: b, c, f
Question: 07
Answer: b
Question: 08
Answer: c
Question: 09
Answer: b, d, e
Question: 10
Answer: d

Note: If you find any error in these AWS Certified DevOps Engineer - Professional certification exam sample questions, please write to us at feedback@vmexam.com.

Rating: 4.8 / 5 (80 votes)