Amazon AWS-Certified-DevOps-Engineer-Professional Exam: AWS Certified DevOps Engineer Professional

Total Questions: 371. Last Updated: Jun 13, 2019

Precise AWS-Certified-DevOps-Engineer-Professional Free Practice Questions 2019

We provide this material in two formats: a downloadable PDF and practice-test software, to help you pass the Amazon AWS-Certified-DevOps-Engineer-Professional exam quickly and easily. The PDF format is suitable for reading and printing, so you can print it and practice as many times as you like.

Free demo questions for Amazon AWS-Certified-DevOps-Engineer-Professional Exam Dumps Below:

You work for a company that automatically tags photographs using artificial neural networks (ANNs), which run on GPUs using C++. You receive millions of images at a time, but only 3 times per day on average. These images are loaded in a batch into an AWS S3 bucket you control, and then the customer publishes a JSON-formatted manifest into another S3 bucket you control as well. Each image takes 10 milliseconds to process using a full GPU. Your neural network software requires 5 minutes to bootstrap. Image tags are JSON objects, and you must publish them to an S3 bucket.
Which of these is the best system architecture for this system?

  • A. Create an OpsWorks Stack with two Layers. The first contains lifecycle scripts for launching and bootstrapping an HTTP API on G2 instances for ANN image processing, and the second has an always-on instance which monitors the S3 manifest bucket for new files. When a new file is detected, request instances to boot on the ANN layer. When the instances are booted and the HTTP APIs are up, submit processing requests to individual instances.
  • B. Make an S3 notification configuration which publishes to AWS Lambda on the manifest bucket. Make the Lambda create a CloudFormation Stack which contains the logic to construct an auto scaling worker tier of EC2 G2 instances with the ANN code on each instance. Create an SQS queue of the images in the manifest. Tear the stack down when the queue is empty.
  • C. Deploy your ANN code to AWS Lambda as a bundled binary for the C++ extension. Make an S3 notification configuration on the manifest, which publishes to another AWS Lambda running controller code. This controller code publishes all the images in the manifest to AWS Kinesis. Your ANN code Lambda Function uses the Kinesis stream as an Event Source. The system automatically scales when the stream contains image events.
  • D. Create an Auto Scaling, Load Balanced Elastic Beanstalk worker tier Application and Environment. Deploy the ANN code to G2 instances in this tier. Set the desired capacity to 1. Make the code periodically check S3 for new manifests. When a new manifest is detected, push all of the images in the manifest into the SQS queue associated with the Elastic Beanstalk worker tier.

Answer: B

Explanation: The Elastic Beanstalk option is incorrect because it requires a constantly-polling instance, which may break and costs money.
The Lambda fleet option is incorrect because AWS Lambda does not support GPU usage.
The OpsWorks stack option both requires a constantly-polling instance and requires complex timing and capacity planning logic.
The CloudFormation option requires no polling, has no always-on instances, and allows arbitrarily fast processing by simply setting the instance count as high as needed.

Which is not a restriction on AWS EBS Snapshots?

  • A. Snapshots which are shared cannot be used as a basis for other snapshots.
  • B. You cannot share a snapshot containing an AWS Access Key ID or AWS Secret Access Key.
  • C. You cannot share unencrypted snapshots.
  • D. Snapshot restorations are restricted to the region in which the snapshots are created.

Answer: A

Explanation: Snapshots shared with other users are usable in full by the recipient, including but not limited to the ability to base modified volumes and snapshots on them.

What is the scope of an EBS volume?

  • A. VPC
  • B. Region
  • C. Placement Group
  • D. Availability Zone

Answer: D

Explanation: An Amazon EBS volume is tied to its Availability Zone and can be attached only to instances in the same Availability Zone.

You need to know when you spend $1000 or more on AWS. What's the easy way for you to see that notification?

  • A. AWS CloudWatch Events tied to API calls, when certain thresholds are exceeded, publish to SNS.
  • B. Scrape the billing page periodically and pump into Kinesis.
  • C. AWS CloudWatch Metrics + Billing Alarm + Lambda event subscription. When a threshold is exceeded, email the manager.
  • D. Scrape the billing page periodically and publish to SNS.

Answer: C

Explanation: Even if you're careful to stay within the free tier, it's a good idea to create a billing alarm to notify you if you exceed the limits of the free tier. Billing alarms can help to protect you against unknowingly accruing charges if you inadvertently use a service outside of the free tier or if traffic exceeds your expectations.
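Concretely, a billing alarm is just a CloudWatch alarm on the `EstimatedCharges` metric in the `AWS/Billing` namespace (published only in us-east-1). The sketch below builds the parameters for such an alarm at the $1000 threshold; the alarm name and SNS topic ARN are illustrative assumptions.

```python
# Sketch of CloudWatch billing-alarm parameters for a $1000 spend threshold.
# Billing metrics (AWS/Billing, EstimatedCharges) are only published in
# us-east-1; the alarm name and SNS topic ARN below are illustrative.
billing_alarm = {
    "AlarmName": "monthly-spend-over-1000-usd",
    "Namespace": "AWS/Billing",
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,                 # evaluate the metric over 6-hour windows
    "EvaluationPeriods": 1,
    "Threshold": 1000.0,             # alarm once estimated charges reach $1000
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
}

# In practice this dict would be passed to
# boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(**billing_alarm)
```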

What is true of the way that encryption works with EBS?

  • A. Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot creates an encrypted volume when specified / requested.
  • B. Snapshotting an encrypted volume makes an encrypted snapshot when specified / requested; restoring an encrypted snapshot creates an encrypted volume when specified / requested.
  • C. Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot always creates an encrypted volume.
  • D. Snapshotting an encrypted volume makes an encrypted snapshot when specified / requested; restoring an encrypted snapshot always creates an encrypted volume.

Answer: C

Explanation: Snapshots that are taken from encrypted volumes are automatically encrypted. Volumes that are created from encrypted snapshots are also automatically encrypted. Your encrypted volumes and any associated snapshots always remain protected. For more information, see Amazon EBS Encryption.

When thinking of AWS OpsWorks, which of the following is not an instance type you can allocate in a stack layer?

  • A. 24/7 instances
  • B. Spot instances
  • C. Time-based instances
  • D. Load-based instances

Answer: B

Explanation: AWS OpsWorks supports the following instance types, which are characterized by how they are started and stopped. 24/7 instances are started manually and run until you stop them. Time-based instances are run by AWS OpsWorks on a specified daily and weekly schedule. They allow your stack to automatically adjust the number of instances to accommodate predictable usage patterns. Load-based instances are automatically started and stopped by AWS OpsWorks, based on specified load metrics, such as CPU utilization. They allow your stack to automatically adjust the number of instances to accommodate variations in incoming traffic. Load-based instances are available only for Linux-based stacks.

Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic to a sustained 400 requests per second, and failure rates increased dramatically. During normal operation, your requests last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and table primary keys are designed correctly. What is the most likely issue?

  • A. Your API Gateway deployment is throttling your requests.
  • B. Your AWS API Gateway Deployment is bottlenecking on request (de)serialization.
  • C. You did not request a limit increase on concurrent Lambda function executions.
  • D. You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.

Answer: C

Explanation: AWS API Gateway by default throttles at 500 requests per second steady-state, and 1000 requests per second at spike. Lambda, by default, throttles at 100 concurrent requests for safety. At 500 milliseconds (half of a second) per request, you can expect to support 200 requests per second at 100 concurrency. This is less than the 400 requests per second your system now requires. Make a limit increase request via the AWS Support Console.
AWS Lambda: Concurrent requests safety throttle per account -> 100

You need to migrate 10 million records in one hour into DynamoDB. All records are 1.5KB in size. The data is evenly distributed across the partition key. How many write capacity units should you provision during this batch load?

  • A. 6667
  • B. 4166
  • C. 5556
  • D. 2778

Answer: C

Explanation: You need 2 units to make a 1.5KB write, since you round up. You need 20 million total units to perform this load. You have 3600 seconds to do so. Divide and round up to get 5556.
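The same calculation as code, using the figures from the question (10 million records of 1.5 KB, loaded over one hour):

```python
import math

# DynamoDB write capacity: one write capacity unit covers an item of up
# to 1 KB, so a 1.5 KB item consumes ceil(1.5) = 2 write units.
units_per_item = math.ceil(1.5 / 1.0)        # 2

records = 10_000_000
total_units = records * units_per_item       # 20,000,000 write units
seconds = 3600                               # one hour

# Provisioned write capacity units = total units / seconds, rounded up.
wcu = math.ceil(total_units / seconds)       # 5556
```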

You are designing a service that aggregates clickstream data in batch and delivers reports to subscribers via email only once per week. Data is extremely spiky, geographically distributed, high-scale, and unpredictable. How should you design this system?

  • A. Use a large RedShift cluster to perform the analysis, and a fleet of Lambdas to perform record inserts into the RedShift tables. Lambda will scale rapidly enough for the traffic spikes.
  • B. Use a CloudFront distribution with access log delivery to S3. Clicks should be recorded as querystring GETs to the distribution. Reports are built and sent by periodically running EMR jobs over the access logs in S3.
  • C. Use API Gateway invoking Lambdas which PutRecords into Kinesis, and EMR running Spark performing GetRecords on Kinesis to scale with spikes. Spark on EMR outputs the analysis to S3, and reports are sent out via email.
  • D. Use AWS Elasticsearch service and EC2 Auto Scaling groups. The Auto Scaling groups scale based on click throughput and stream into the Elasticsearch domain, which is also scalable. Use Kibana to generate reports periodically.

Answer: B

Explanation: Because you only need to batch analyze, anything using streaming is a waste of money. CloudFront is a Gigabit-Scale HTTP(S) global request distribution service, so it can handle scale, geo-spread, spikes, and unpredictability. The Access Logs will contain the GET data and work just fine for batch analysis and email using EMR.
Can I use Amazon CloudFront if I expect usage peaks higher than 10 Gbps or 15,000 RPS? Yes. Complete our request for higher limits here, and we will add more capacity to your account within two business days.

You run accounting software in the AWS cloud. This software needs to be online continuously during the day every day of the week, and has a very static requirement for compute resources. You also have other, unrelated batch jobs that need to run once per day at any time of your choosing. How should you minimize cost?

  • A. Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
  • B. Purchase a Medium Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
  • C. Purchase a Light Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
  • D. Purchase a Full Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

Answer: A

Explanation: Because the instance will always be online during the day, in a predictable manner, and there is a sequence of batch jobs to perform at any time, we should run the batch jobs while the accounting software is off. We can achieve Heavy Utilization by alternating these times, so we should purchase the reservation as such, as this represents the lowest cost. There is no such thing as a "Full" utilization level for EC2 Reserved Instance purchases.

If you're trying to configure an AWS Elastic Beanstalk worker tier for easy debugging if there are problems finishing queue jobs, what should you configure?

  • A. Configure Rolling Deployments.
  • B. Configure Enhanced Health Reporting
  • C. Configure Blue-Green Deployments.
  • D. Configure a Dead Letter Queue

Answer: D

Explanation: Elastic Beanstalk worker environments support Amazon Simple Queue Service (SQS) dead letter queues. A dead letter queue is a queue where other (source) queues can send messages that for some reason could not be successfully processed. A primary benefit of using a dead letter queue is the ability to sideline and isolate the unsuccessfully processed messages. You can then analyze any messages sent to the dead letter queue to try to determine why they were not successfully processed.
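The wiring between a source SQS queue and its dead letter queue is a single `RedrivePolicy` attribute on the source queue. The sketch below builds that attribute; the queue ARN and `maxReceiveCount` are illustrative assumptions.

```python
import json

# Sketch: the RedrivePolicy attribute that wires a source SQS queue to a
# dead letter queue. The ARN and maxReceiveCount values are illustrative;
# in practice this dict would be passed to SQS SetQueueAttributes
# (e.g. boto3's sqs.set_queue_attributes).
dead_letter_arn = "arn:aws:sqs:us-east-1:123456789012:worker-dlq"

redrive_policy = {
    "deadLetterTargetArn": dead_letter_arn,
    # After 5 failed receives, SQS moves the message to the DLQ,
    # where it can be inspected for debugging.
    "maxReceiveCount": "5",
}

# SQS expects the policy serialized as a JSON string attribute value.
queue_attributes = {"RedrivePolicy": json.dumps(redrive_policy)}
```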

For AWS CloudFormation, which stack state refuses UpdateStack calls?

  • A. <code>UPDATE_ROLLBACK_FAILED</code>
  • C. <code>UPDATE_COMPLETE</code>
  • D. <code>CREATE_COMPLETE</code>

Answer: A

Explanation: When a stack is in the UPDATE_ROLLBACK_FAILED state, you can continue rolling it back to return it to a working state (to UPDATE_ROLLBACK_COMPLETE). You cannot update a stack that is in the UPDATE_ROLLBACK_FAILED state. However, if you can continue to roll it back, you can return the stack to its original settings and try to update it again.

For AWS Auto Scaling, what is the first transition state an existing instance enters after leaving steady state in Standby mode?

  • A. Detaching
  • B. Terminating:Wait
  • C. Pending
  • D. EnteringStandby

Answer: C

Explanation: You can put any instance that is in an InService state into a Standby state. This enables you to remove the instance from service, troubleshoot or make changes to it, and then put it back into service. Instances in a Standby state continue to be managed by the Auto Scaling group. However, they are not an active part of your application until you put them back into service.

You are building out a layer in a software stack on AWS that needs to be able to scale out to react to increased demand as fast as possible. You are running the code on EC2 instances in an Auto Scaling Group behind an ELB. Which application code deployment method should you use?

  • A. SSH into new instances that come online, and deploy new code onto the system by pulling it from an S3 bucket, which is populated by code that you refresh from source control on new pushes.
  • B. Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration.
  • C. Create a Dockerfile when preparing to deploy a new version to production and publish it to S3. Use UserData in the Auto Scaling Launch configuration to pull down the Dockerfile from S3 and run it when new instances launch.
  • D. Create a new Auto Scaling Launch Configuration with UserData scripts configured to pull the latest code at all times.

Answer: B

Explanation: The bootstrapping process can be slower if you have a complex application or multiple applications to install. Managing a fleet of applications with several build tools and dependencies can be a challenging task during rollouts. Furthermore, your deployment service should be designed to do faster rollouts to take advantage of Auto Scaling.

You need CloudFormation to create a Route53 record automatically, on every launch of a template, but only when not running in production. How should you implement this?

  • A. Use a <code>Parameter</code> for <code>environment</code>, and add a <code>Condition</code> on the Route53 <code>Resource</code> in the template to create the record only when <code>environment</code> is not <code>production</code>.
  • B. Create two templates, one with the Route53 record value and one with a null value for the record. Use the one without it when deploying to production.
  • C. Use a <code>Parameter</code> for <code>environment</code>, and add a <code>Condition</code> on the Route53 <code>Resource</code> in the template to create the record with a null string when <code>environment</code> is <code>production</code>.
  • D. Create two templates, one with the Route53 record and one without it. Use the one without it when deploying to production.

Answer: A

Explanation: The best way to do this is with one template, and a Condition on the resource. Route53 does not allow null strings for records.
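A minimal CloudFormation sketch of the conditional-resource approach described in option A. The parameter, condition, and resource names, as well as the domain names, are all illustrative assumptions:

```yaml
# Minimal sketch: create the Route53 record only outside production.
# Names (Environment, IsNotProduction, DevDnsRecord) and domains are
# illustrative, not from the original question.
Parameters:
  Environment:
    Type: String
    AllowedValues: [production, staging, development]

Conditions:
  IsNotProduction: !Not [!Equals [!Ref Environment, production]]

Resources:
  DevDnsRecord:
    Type: AWS::Route53::RecordSet
    Condition: IsNotProduction   # resource is skipped when Environment is production
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: CNAME
      TTL: '300'
      ResourceRecords:
        - app-internal.example.com.
```

With a single template, every launch passes the same <code>environment</code> parameter, and CloudFormation simply omits the record resource for production stacks.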

To monitor API calls against our AWS account by different users and entities, we can use _______ to create a history of calls in bulk for later review, and use _______ for reacting to AWS API calls in real-time.

  • A. AWS Config; AWS Inspector
  • B. AWS CloudTrail; AWS Config
  • C. AWS CloudTrail; CloudWatch Events
  • D. AWS Config; AWS Lambda

Answer: C

Explanation: CloudTrail is a batch API call collection service; CloudWatch Events enables real-time monitoring of calls through the Rules object interface.
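A CloudWatch Events rule reacts to CloudTrail-recorded API calls via an event pattern match. The sketch below builds such a pattern; the specific service and event names are illustrative assumptions.

```python
import json

# Sketch of a CloudWatch Events (now EventBridge) event pattern that
# matches API calls recorded by CloudTrail in near real-time. The
# eventSource and eventName values here are illustrative examples.
event_pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["iam.amazonaws.com"],
        "eventName": ["CreateUser", "DeleteUser"],
    },
}

# A rule is created with this pattern (e.g. via the PutRule API), and a
# target such as an SNS topic or Lambda function reacts to each match.
rule_pattern_json = json.dumps(event_pattern)
```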

You need to create a simple, holistic check for your system's general availability and uptime. Your system presents itself as an HTTP-speaking API. What is the simplest tool on AWS to achieve this?

  • A. Route53 Health Checks
  • B. CloudWatch Health Checks
  • C. AWS ELB Health Checks
  • D. EC2 Health Checks

Answer: A

Explanation: You can create a health check that will run in perpetuity using Route53, in one API call, which will ping your service via HTTP every 10 or 30 seconds.
Amazon Route 53 must be able to establish a TCP connection with the endpoint within four seconds. In addition, the endpoint must respond with an HTTP status code of 200 or greater and less than 400 within two seconds after connecting.
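As a sketch, the health-check configuration described above maps to a small set of parameters in Route53's CreateHealthCheck API; the domain name and path below are illustrative assumptions.

```python
# Sketch of the HealthCheckConfig you might pass to Route53's
# CreateHealthCheck API (e.g. boto3's route53.create_health_check) for a
# simple HTTP endpoint check. The domain name and path are illustrative.
health_check_config = {
    "Type": "HTTP",
    "FullyQualifiedDomainName": "api.example.com",
    "Port": 80,
    "ResourcePath": "/health",
    # Route53 supports 10- or 30-second check intervals.
    "RequestInterval": 30,
    # Consecutive failures before the endpoint is considered unhealthy.
    "FailureThreshold": 3,
}
```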
