

AWS Certified DevOps Engineer - Professional

Last Update: 12 hours ago. Total Questions: 425

The AWS Certified DevOps Engineer - Professional content is fully updated, with all current exam questions added 12 hours ago. Including DOP-C02 practice exam questions in your study plan offers far more than basic test preparation.

Our DOP-C02 exam questions frequently feature detailed scenarios and practical problem-solving exercises that mirror real industry challenges. Working through these DOP-C02 sample sets teaches you to manage your time and pace yourself, so you can finish any AWS Certified DevOps Engineer - Professional practice test comfortably within the allotted time.

Question # 11

A company wants to use AWS CloudFormation for infrastructure deployment. The company has strict tagging and resource requirements and wants to limit the deployment to two Regions. Developers will need to deploy multiple versions of the same application.

Which solution ensures resources are deployed in accordance with company policy?

A.

Create AWS Trusted Advisor checks to find and remediate unapproved CloudFormation StackSets.

B.

Create a CloudFormation drift detection operation to find and remediate unapproved CloudFormation StackSets.

C.

Create CloudFormation StackSets with approved CloudFormation templates.

D.

Create AWS Service Catalog products with approved CloudFormation templates.
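For reference, option D's pattern of wrapping an approved CloudFormation template in an AWS Service Catalog product can be sketched with boto3. This is a minimal sketch, not a full setup; the product name, owner, and template URL are hypothetical:

```python
import boto3

sc = boto3.client("servicecatalog", region_name="us-east-1")

# Publish an approved template as a versioned Service Catalog product.
# Developers provision the product rather than raw templates, so the
# tagging rules and Region limits baked into the template always apply.
sc.create_product(
    Name="approved-web-app",                    # hypothetical
    Owner="platform-team",                      # hypothetical
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1",
        "Info": {
            "LoadTemplateFromURL": "https://example-bucket.s3.amazonaws.com/approved-template.yaml"
        },
        "Type": "CLOUD_FORMATION_TEMPLATE",
    },
)
```

In practice the product would also be added to a portfolio with granular launch permissions, which is how multiple versions of the same application stay within policy.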

Question # 12

A space exploration company receives telemetry data from multiple satellites. Small packets of data are received through Amazon API Gateway and are placed directly into an Amazon Simple Queue Service (Amazon SQS) standard queue. A custom application is subscribed to the queue and transforms the data into a standard format.

Because of inconsistencies in the data that the satellites produce, the application is occasionally unable to transform the data. In these cases, the messages remain in the SQS queue. A DevOps engineer must develop a solution that retains the failed messages and makes them available to scientists for review and future processing.

Which solution will meet these requirements?

A.

Configure AWS Lambda to poll the SQS queue and invoke a Lambda function to check whether the queue messages are valid. If validation fails, send a copy of the data that is not valid to an Amazon S3 bucket so that the scientists can review and correct the data. When the data is corrected, amend the message in the SQS queue by using a replay Lambda function with the corrected data.

B.

Convert the SQS standard queue to an SQS FIFO queue. Configure AWS Lambda to poll the SQS queue every 10 minutes by using an Amazon EventBridge schedule. Invoke the Lambda function to identify any messages with a SentTimestamp value that is older than 5 minutes, push the data to the same location as the application's output location, and remove the messages from the queue.

C.

Create an SQS dead-letter queue. Modify the existing queue by including a redrive policy that sets the Maximum Receives setting to 1 and sets the dead-letter queue ARN to the ARN of the newly created queue. Instruct the scientists to use the dead-letter queue to review the data that is not valid. Reprocess this data at a later time.

D.

Configure API Gateway to send messages to different SQS virtual queues that are named for each of the satellites. Update the application to use a new virtual queue for any data that it cannot transform, and send the message to the new virtual queue. Instruct the scientists to use the virtual queue to review the data that is not valid. Reprocess this data at a later time.
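A minimal boto3 sketch of the dead-letter-queue setup described in option C (the queue names and the existing queue URL are hypothetical):

```python
import json
import boto3

sqs = boto3.client("sqs")

# Create the dead-letter queue and look up its ARN.
dlq_url = sqs.create_queue(QueueName="telemetry-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Attach a redrive policy to the existing queue: after 1 failed receive,
# SQS moves the message to the dead-letter queue for later review.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/telemetry-queue",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "1"}
        )
    },
)
```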

Question # 13

A DevOps engineer is designing an application that integrates with a legacy REST API. The application has an AWS Lambda function that reads records from an Amazon Kinesis data stream. The Lambda function sends the records to the legacy REST API.

Approximately 10% of the records that the Lambda function sends from the Kinesis data stream have data errors and must be processed manually. The Lambda function event source configuration has an Amazon Simple Queue Service (Amazon SQS) dead-letter queue as an on-failure destination. The DevOps engineer has configured the Lambda function to process records in batches and has implemented retries in case of failure.

During testing, the DevOps engineer notices that the dead-letter queue contains many records that have no data errors and that have already been processed by the legacy REST API. The DevOps engineer needs to configure the Lambda function's event source options to reduce the number of errorless records that are sent to the dead-letter queue.

Which solution will meet these requirements?

A.

Increase the retry attempts

B.

Configure the setting to split the batch when an error occurs

C.

Increase the concurrent batches per shard

D.

Decrease the maximum age of record
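Option B refers to the BisectBatchOnFunctionError setting on the event source mapping, which retries a failed batch in halves so that only the records that actually fail end up at the on-failure destination. A minimal boto3 sketch, assuming a hypothetical mapping UUID:

```python
import boto3

lambda_client = boto3.client("lambda")

# When a batch fails, split it in two and retry each half recursively,
# isolating the bad records instead of sending the whole batch to the DLQ.
lambda_client.update_event_source_mapping(
    UUID="11111111-2222-3333-4444-555555555555",  # hypothetical
    BisectBatchOnFunctionError=True,
)
```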

Question # 14

A company has developed an AWS Lambda function that handles orders received through an API. The company is using AWS CodeDeploy to deploy the Lambda function as the final stage of a CI/CD pipeline.

A DevOps engineer has noticed there are intermittent failures of the ordering API for a few seconds after deployment. After some investigation, the DevOps engineer believes the failures are caused by database changes that have not fully propagated before the Lambda function is invoked.

How should the DevOps engineer overcome this?

A.

Add a BeforeAllowTraffic hook to the AppSpec file that tests and waits for any necessary database changes before traffic can flow to the new version of the Lambda function.

B.

Add an AfterAllowTraffic hook to the AppSpec file that forces traffic to wait for any pending database changes before allowing the new version of the Lambda function to respond.

C.

Add a BeforeAllowTraffic hook to the AppSpec file that tests and waits for any necessary database changes before deploying the new version of the Lambda function.

D.

Add a ValidateService hook to the AppSpec file that inspects incoming traffic and rejects the payload if dependent services such as the database are not yet ready.
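For context, a BeforeAllowTraffic hook (options A and C) is itself a Lambda function that reports a result back to CodeDeploy. A minimal Python sketch, where check_database_ready is a hypothetical readiness probe:

```python
import boto3

codedeploy = boto3.client("codedeploy")

def check_database_ready():
    # Placeholder: poll the database for the expected schema/version marker.
    return True

def handler(event, context):
    """Hypothetical BeforeAllowTraffic hook: verify that database changes
    have propagated before CodeDeploy shifts traffic to the new version."""
    status = "Succeeded" if check_database_ready() else "Failed"

    # Report the hook result back to CodeDeploy so the deployment
    # either proceeds to shift traffic or rolls back.
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status=status,
    )
```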

Question # 15

A company has an application that runs on AWS Lambda and sends logs to Amazon CloudWatch Logs. An Amazon Kinesis data stream is subscribed to the log groups in CloudWatch Logs. A single consumer Lambda function processes the logs from the data stream and stores the logs in an Amazon S3 bucket.

The company's DevOps team has noticed high latency during the processing and ingestion of some logs.

Which combination of steps will reduce the latency? (Select THREE.)

A.

Create a data stream consumer with enhanced fan-out. Set the Lambda function that processes the logs as the consumer.

B.

Increase the ParallelizationFactor setting in the Lambda event source mapping.

C.

Configure reserved concurrency for the Lambda function that processes the logs.

D.

Increase the batch size in the Kinesis data stream.

E.

Turn off the ReportBatchItemFailures setting in the Lambda event source mapping.

F.

Increase the number of shards in the Kinesis data stream.
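Several of these options map to single API calls. A minimal boto3 sketch of options A, B, and F (the stream name, stream ARN, and mapping UUID are hypothetical):

```python
import boto3

kinesis = boto3.client("kinesis")
lambda_client = boto3.client("lambda")

# Option A: enhanced fan-out gives the consumer dedicated read throughput
# per shard instead of sharing the 2 MB/s shard limit with other readers.
kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/log-stream",
    ConsumerName="log-processor",
)

# Option B: process more concurrent batches per shard.
lambda_client.update_event_source_mapping(
    UUID="11111111-2222-3333-4444-555555555555",
    ParallelizationFactor=4,
)

# Option F: more shards raise the stream's overall throughput.
kinesis.update_shard_count(
    StreamName="log-stream",
    TargetShardCount=4,
    ScalingType="UNIFORM_SCALING",
)
```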

Question # 16

A DevOps engineer is creating an AWS CloudFormation template to deploy a web service. The web service will run on Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). The DevOps engineer must ensure that the service can accept requests from clients that have IPv6 addresses.

What should the DevOps engineer do with the CloudFormation template so that IPv6 clients can access the web service?

A.

Add an IPv6 CIDR block to the VPC and the private subnet for the EC2 instances. Create route table entries for the IPv6 network, use EC2 instance types that support IPv6, and assign IPv6 addresses to each EC2 instance.

B.

Assign each EC2 instance an IPv6 Elastic IP address. Create a target group, and add the EC2 instances as targets. Create a listener on port 443 of the ALB, and associate the target group with the ALB.

C.

Replace the ALB with a Network Load Balancer (NLB). Add an IPv6 CIDR block to the VPC and subnets for the NLB, and assign the NLB an IPv6 Elastic IP address.

D.

Add an IPv6 CIDR block to the VPC and subnets for the ALB. Create a listener on port 443, and specify the dualstack IP address type on the ALB. Create a target group, and add the EC2 instances as targets. Associate the target group with the ALB.
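For reference, the dualstack change in option D boils down to two API calls; in a CloudFormation template this corresponds to an IPv6 CIDR on the VPC and subnets plus IpAddressType: dualstack on the ALB. A minimal boto3 sketch with hypothetical resource IDs:

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Attach an Amazon-provided IPv6 CIDR block to the VPC
# (each ALB subnet also needs an IPv6 CIDR from this block).
ec2.associate_vpc_cidr_block(
    VpcId="vpc-0123456789abcdef0",          # hypothetical
    AmazonProvidedIpv6CidrBlock=True,
)

# Switch the ALB to dualstack so it answers on both IPv4 and IPv6.
# The targets can stay IPv4-only in the private subnet, because the
# ALB terminates the client connection and forwards over IPv4.
elbv2.set_ip_address_type(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/abc123",
    IpAddressType="dualstack",
)
```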

Question # 17

A company has chosen AWS to host a new application. The company needs to implement a multi-account strategy. A DevOps engineer creates a new AWS account and an organization in AWS Organizations. The DevOps engineer also creates the OU structure for the organization and sets up a landing zone by using AWS Control Tower.

The DevOps engineer must implement a solution that automatically deploys resources for new accounts that users create through AWS Control Tower Account Factory. When a user creates a new account, the solution must apply AWS CloudFormation templates and SCPs that are customized for the OU or the account to automatically deploy all the resources that are attached to the account. All the OUs are enrolled in AWS Control Tower.

Which solution will meet these requirements in the MOST automated way?

A.

Use AWS Service Catalog with AWS Control Tower. Create portfolios and products in AWS Service Catalog. Grant granular permissions to provision these resources. Deploy SCPs by using the AWS CLI and JSON documents.

B.

Deploy CloudFormation stack sets by using the required templates. Enable automatic deployment. Deploy stack instances to the required accounts. Deploy a CloudFormation stack set to the organization’s management account to deploy SCPs.

C.

Create an Amazon EventBridge rule to detect the CreateManagedAccount event. Configure AWS Service Catalog as the target to deploy resources to any new accounts. Deploy SCPs by using the AWS CLI and JSON documents.

D.

Deploy the Customizations for AWS Control Tower (CfCT) solution. Use an AWS CodeCommit repository as the source. In the repository, create a custom package that includes the CloudFormation templates and the SCP JSON documents.
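For context, option C's trigger hinges on the CreateManagedAccount lifecycle event that AWS Control Tower emits through CloudTrail; the Customizations for AWS Control Tower solution in option D wires up this kind of trigger plus a deployment pipeline for you. A minimal sketch of the event pattern, with a hypothetical rule name:

```python
import json
import boto3

events = boto3.client("events")

# Fire whenever Account Factory finishes creating a managed account.
# A target (for example, a Step Functions state machine or Lambda
# function that applies templates and SCPs) would be added separately.
events.put_rule(
    Name="on-new-managed-account",  # hypothetical
    EventPattern=json.dumps(
        {
            "source": ["aws.controltower"],
            "detail-type": ["AWS Service Event via CloudTrail"],
            "detail": {"eventName": ["CreateManagedAccount"]},
        }
    ),
)
```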

Question # 18

A company uses Amazon RDS for Microsoft SQL Server as its primary database. The company needs high availability within and across AWS Regions, with an RPO of less than 1 minute and an RTO of less than 10 minutes. A Route 53 CNAME record points to the DB endpoint and must redirect to the standby database during failover.

Which solution meets these requirements?

A.

Deploy an Amazon RDS for SQL Server Multi-AZ DB cluster with cross-Region read replicas. Use automation to promote the replica and update Route 53.

B.

Deploy RDS Multi-AZ with snapshots copied every 5 minutes. Use Lambda to restore the snapshot and update Route 53 on failover.

C.

Deploy Single-AZ RDS and use AWS DMS to continuously replicate to another Region. Use CloudWatch alarms for failover notification.

D.

Deploy Single-AZ RDS and use AWS Backup for cross-Region backups every 30 seconds. Use automation to restore and update Route 53 during failover.
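For reference, the automation in option A would roughly consist of promoting the replica and repointing the CNAME record. A minimal boto3 sketch; the instance identifier, hosted zone ID, record name, and endpoint are all hypothetical:

```python
import boto3

rds = boto3.client("rds", region_name="us-west-2")
route53 = boto3.client("route53")

# Promote the cross-Region read replica to a standalone primary.
rds.promote_read_replica(DBInstanceIdentifier="sqlserver-replica")

# Repoint the CNAME record at the promoted instance's endpoint so
# application connections follow the failover without code changes.
route53.change_resource_record_sets(
    HostedZoneId="Z0HYPOTHETICAL",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "db.example.com",
                    "Type": "CNAME",
                    "TTL": 30,
                    "ResourceRecords": [
                        {"Value": "sqlserver-replica.abc123.us-west-2.rds.amazonaws.com"}
                    ],
                },
            }
        ]
    },
)
```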

Question # 19

A DevOps team manages infrastructure for an application. The application uses long-running processes to process items from an Amazon Simple Queue Service (Amazon SQS) queue. The application is deployed to an Auto Scaling group.

The application recently experienced an issue where items were taking significantly longer to process. The queue exceeded the expected size, which prevented various business processes from functioning properly. The application records all logs to a third-party tool.

The team is currently subscribed to an Amazon Simple Notification Service (Amazon SNS) topic that the team uses for alerts. The team needs to be alerted if the queue exceeds the expected size.

Which solution will meet these requirements with the MOST operational efficiency?

A.

Create an Amazon CloudWatch metric alarm with a period of 1 hour and a static threshold to alarm if the average of the ApproximateNumberOfMessagesDelayed metric is greater than the expected value. Configure the alarm to notify the SNS topic.

B.

Create an Amazon CloudWatch metric alarm with a period of 1 hour and a static threshold to alarm if the sum of the ApproximateNumberOfMessagesVisible metric is greater than the expected value. Configure the alarm to notify the SNS topic.

C.

Create an AWS Lambda function that retrieves the ApproximateNumberOfMessages SQS queue attribute value and publishes it as a new CloudWatch custom metric. Create an Amazon EventBridge rule that is scheduled to run every 5 minutes and that invokes the Lambda function. Configure a CloudWatch metric alarm with a period of 1 hour and a static threshold to alarm if the sum of the new custom metric is greater than the expected value.

D.

Create an AWS Lambda function that checks the ApproximateNumberOfMessagesDelayed SQS queue attribute and compares the value to a defined expected size in the function. Create an Amazon EventBridge rule that is scheduled to run every 5 minutes and that invokes the Lambda function. When the ApproximateNumberOfMessagesDelayed SQS queue attribute exceeds the expected size, send a notification to the SNS topic.
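A minimal boto3 sketch of the alarm described in option B (the queue name, threshold, and SNS topic ARN are hypothetical):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on the built-in SQS metric; no custom code or scheduled
# Lambda function is needed, since SQS publishes it automatically.
cloudwatch.put_metric_alarm(
    AlarmName="queue-backlog-too-large",   # hypothetical
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "work-items"}],
    Statistic="Sum",
    Period=3600,                           # 1-hour period, as in option B
    EvaluationPeriods=1,
    Threshold=10000,                       # the expected queue size
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:team-alerts"],
)
```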

Question # 20

A company has application code in an AWS CodeConnections-compatible Git repository. The company wants to configure unit tests to run when pull requests are opened. The company wants to ensure that the test status is visible in pull requests when the tests are completed. The company wants to save output data files that the tests generate to an Amazon S3 bucket after the tests are finished.

Which combination of solutions will meet these requirements? (Select THREE.)

A.

Create an IAM service role to allow access to the resources that are required to run the tests.

B.

Create a pipeline in AWS CodePipeline that has a test stage. Create a trigger to run the pipeline when pull requests are created or updated. Add a source action to report test results.

C.

Create an AWS CodeBuild project to run the tests. Enable webhook triggers to run the tests when pull requests are created or updated. Enable build status reporting to report test results.

D.

Create a buildspec.yml file that has a reports section to upload output files when the tests have finished running.

E.

Create a buildspec.yml file that has an artifacts section to upload artifacts when the tests have finished running.

F.

Create an appspec.yml file that has a files section to upload output files when the tests have finished running.
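For context, option C's webhook can be created with one API call once the CodeBuild project exists; enabling build status reporting on the project's source is what surfaces the pass/fail result on the pull request, and a reports or artifacts section in buildspec.yml handles the output files. A minimal boto3 sketch with a hypothetical project name:

```python
import boto3

codebuild = boto3.client("codebuild")

# Run the project's tests whenever a pull request is created or updated.
# The project's source configuration (not shown) would point at the
# connected Git repository with build status reporting enabled.
codebuild.create_webhook(
    projectName="unit-tests",  # hypothetical
    filterGroups=[
        [
            {"type": "EVENT", "pattern": "PULL_REQUEST_CREATED,PULL_REQUEST_UPDATED"},
        ]
    ],
)
```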
