01. You have an application running on an EC2 instance that allows users to download files from a private S3 bucket using a pre-signed URL.
Before generating the URL, the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely?
a) Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user, and retrieve the IAM user's credentials from the EC2 instance user data.
b) Create an IAM role for EC2 that allows list access to objects in the S3 bucket; launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata.
c) Create an IAM user for the application with permissions that allow list access to the S3 bucket; the application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
d) Use the AWS account access keys; the application retrieves the credentials from the source code of the application.
02. An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes.
The customer realizes that data corruption occurred roughly 1.5 hours ago. What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?
a) Take 15 minute DB backups stored in Glacier with transaction logs stored in S3 every 5 minutes.
b) Use synchronous database master-slave replication between two availability zones.
c) Take hourly DB backups to EC2 instance store volumes with transaction logs stored in S3 every 5 minutes.
d) Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.
03. You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be recreated in the second region?
(Choose 2 answers)
a) Route53 Record Sets
b) Launch Configurations
c) EC2 Key Pairs
d) Security Groups
e) IAM Roles
f) Elastic IP Addresses (EIP)
04. An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier that will utilize Amazon DynamoDB for storage.
When creating the CloudFormation template, which of the following would allow the application instances access to the DynamoDB tables without exposing API credentials?
a) Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and associate the Role to the application instances by referencing an instance profile.
b) Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance.
c) Use the Parameter section in the CloudFormation template to have the user input Access and Secret keys from an already created IAM user that has the permissions required to read and write from the required DynamoDB table.
d) Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt function to retrieve the Access and Secret keys and pass them to the application instance through user-data.
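The role-plus-instance-profile wiring described in options (a) and (b) can be sketched as a CloudFormation fragment. All logical names, the AMI ID, and the wildcard resource below are placeholder assumptions, not details from the question.

```yaml
Resources:
  # Role the application instances will assume (names are illustrative)
  AppRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: ec2.amazonaws.com }
            Action: sts:AssumeRole
      Policies:
        - PolicyName: DynamoDBReadWrite
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - dynamodb:GetItem
                  - dynamodb:PutItem
                  - dynamodb:Query
                  - dynamodb:UpdateItem
                Resource: "*"   # scope to the table's ARN in a real template
  # The instance profile is the container that carries the role to EC2
  AppInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref AppRole
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678            # placeholder AMI
      IamInstanceProfile: !Ref AppInstanceProfile
```

No access keys appear anywhere in the template; the instance obtains the role's temporary credentials at runtime.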
05. Your company policies require encryption of sensitive data at rest. You are considering the possible options for protecting data while storing it at rest on an EBS data volume, attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest?
(Choose 3 answers)
a) Implement third party volume encryption tools
b) Implement SSL/TLS for all services running on the server
c) Encrypt data inside your applications before storing it on EBS
d) Encrypt data using native data encryption drivers at the file system level
e) Do nothing as EBS volumes are encrypted by default
06. Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes.
To be certain that you do not drop any writes to a database hosted on AWS, which service should you use?
a) Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database.
b) Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.
c) Amazon ElastiCache to store the writes until the writes are committed to the database.
d) Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.
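The queue-buffering approach in option (a) can be sketched as a producer/consumer pair. This is a sketch under assumptions: the queue URL is a placeholder and `write_to_db` stands in for whatever database write the application performs.

```python
import json

def enqueue_write(sqs, queue_url, record):
    """Producer: capture a donation write as a message instead of hitting the DB."""
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(record))

def drain_queue(sqs, queue_url, write_to_db, batch=10):
    """Consumer: drain messages at a rate the database can sustain."""
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=batch)
    for msg in resp.get("Messages", []):
        write_to_db(json.loads(msg["Body"]))
        # Delete only after the DB write succeeds, so no write is ever lost:
        # an undeleted message reappears after its visibility timeout.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

The queue absorbs the unpredictable burst; the consumer's pace, not the traffic spike, determines database load.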
07. Company B is launching a new game app for mobile devices. Users will log into the game using their existing social media account. To streamline data capture, Company B would like to directly save player data and scoring information from the mobile app to a DynamoDB table named ScoreData.
When a user saves their game, the progress data will be stored to the GameState S3 bucket. What is the best approach for storing data to DynamoDB and S3?
a) Use Login with Amazon allowing users to sign in with an Amazon account providing the mobile app with access to the ScoreData DynamoDB table and the GameState S3 bucket.
b) Use temporary security credentials that assume a role providing access to the ScoreData DynamoDB table and the GameState S3 bucket using web identity federation.
c) Use an IAM user with access credentials assigned a role providing access to the ScoreData DynamoDB table and the GameState S3 bucket for distribution with the mobile app.
d) Use an EC2 instance that is launched with an EC2 role providing access to the ScoreData DynamoDB table and the GameState S3 bucket that communicates with the mobile app via web services.
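The federation flow in option (b) comes down to one STS call: the app trades the social provider's token for short-lived AWS credentials scoped by the role's policy. A sketch, assuming a hypothetical role ARN and token; the STS client is passed in (in practice it would come from boto3 or a mobile SDK):

```python
def game_credentials(sts, role_arn, provider_token, session_name="score-app"):
    """Exchange a web-identity token for temporary AWS credentials
    scoped to a role that can reach ScoreData and GameState only."""
    resp = sts.assume_role_with_web_identity(
        RoleArn=role_arn,
        RoleSessionName=session_name,
        WebIdentityToken=provider_token,
        DurationSeconds=3600,  # credentials expire; nothing long-lived ships in the app
    )
    return resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken
```

No IAM user keys are embedded in the mobile app; each player gets their own expiring credentials.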
08. You require the ability to analyze a customer's clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on.
This data will be used in real time to modify the page layouts as customers click through the site, to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?
a) Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce.
b) Publish web clicks by session to an Amazon SQS queue; then periodically drain these events to Amazon RDS and analyze with SQL.
c) Push web clicks by session to Amazon Kinesis, then analyze behavior using Kinesis workers.
d) Write click events directly to Amazon Redshift, and then analyze with SQL.
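The ingestion side of option (c) is a single Kinesis put per click; the important design detail is the partition key. A sketch with placeholder stream and session names:

```python
import json

def publish_click(kinesis, stream, session_id, page, ad=None):
    """Push one click event. Partitioning by session keeps a session's
    clicks in order on a single shard, so a Kinesis worker sees the
    full page/ad sequence for behavioral analysis."""
    event = {"session": session_id, "page": page, "ad": ad}
    kinesis.put_record(
        StreamName=stream,
        Data=json.dumps(event).encode(),
        PartitionKey=session_id,
    )
```

Partitioning by session rather than, say, by page is what makes the per-customer click sequence recoverable downstream.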
09. A read-only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to respond to these traffic fluctuations automatically. What AWS services should be used to meet these requirements?
a) Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch, and RDS with read replicas
b) Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch, and multi-AZ RDS
c) Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch, and RDS with read replicas
d) Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch, and multi-AZ RDS
10. You require the ability to analyze a large amount of data which is stored on Amazon S3 using Amazon Elastic MapReduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing.
Which of the below would be the most cost efficient way to reduce the runtime of the job?
a) Create fewer, larger files in Amazon S3.
b) Use smaller instances that have higher aggregate I/O performance.
c) Create more, smaller files on Amazon S3.
d) Add additional cc2.8xlarge instances by introducing a task group.
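Whichever option is most cost-efficient, the mechanism behind option (d), growing an EMR cluster with a task instance group, is a single API call. A sketch with a placeholder cluster ID; task nodes add processing capacity without hosting HDFS, so they can be added or removed freely:

```python
def add_task_group(emr, cluster_id, count, instance_type="cc2.8xlarge"):
    """Add a TASK instance group to a running EMR cluster."""
    resp = emr.add_instance_groups(
        JobFlowId=cluster_id,
        InstanceGroups=[{
            "Name": "extra-tasks",
            "InstanceRole": "TASK",     # TASK nodes run mappers/reducers only
            "InstanceType": instance_type,
            "InstanceCount": count,
        }],
    )
    return resp["InstanceGroupIds"]
```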