Question # 4

You are managing an application that runs in Compute Engine. The application uses a custom HTTP server to expose an API that is accessed by other applications through an internal TCP/UDP load balancer. A firewall rule allows access to the API port from 0.0.0.0/0. You need to configure Cloud Logging to log each IP address that accesses the API by using the fewest number of steps. What should you do?

A.

Enable Packet Mirroring on the VPC

B.

Install the Ops Agent on the Compute Engine instances.

C.

Enable logging on the firewall rule

D.

Enable VPC Flow Logs on the subnet

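For reference, Firewall Rules Logging is enabled on an existing rule with a single command; this is an illustrative sketch that assumes a hypothetical rule name allow-api-port:

    # Hypothetical rule name; enable Firewall Rules Logging on the rule that allows the API port.
    gcloud compute firewall-rules update allow-api-port \
        --enable-logging \
        --logging-metadata=include-all

Each allowed connection is then logged with its source IP address and can be viewed in Cloud Logging.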
Question # 5

Your organization recently adopted a container-based workflow for application development. Your team develops numerous applications that are deployed continuously through an automated build pipeline to a Kubernetes cluster in the production environment. The security auditor is concerned that developers or operators could circumvent automated testing and push code changes to production without approval. What should you do to enforce approvals?

A.

Configure the build system with protected branches that require pull request approval.

B.

Use an Admission Controller to verify that incoming requests originate from approved sources.

C.

Leverage Kubernetes Role-Based Access Control (RBAC) to restrict access to only approved users.

D.

Enable binary authorization inside the Kubernetes cluster and configure the build pipeline as an attestor.

Question # 6

Your company follows Site Reliability Engineering principles. You are writing a postmortem for an incident, triggered by a software change, that severely affected users. You want to prevent severe incidents from happening in the future. What should you do?

A.

Identify engineers responsible for the incident and escalate to their senior management.

B.

Ensure that test cases that catch errors of this type are run successfully before new software releases.

C.

Follow up with the employees who reviewed the changes and prescribe practices they should follow in the future.

D.

Design a policy that will require on-call teams to immediately call engineers and management to discuss a plan of action if an incident occurs.

Question # 7

Your company runs an e-commerce business. The application responsible for payment processing has structured JSON logging with the following schema:

Capture and access of logs from the payment processing application is mandatory for operations, but the jsonPayload.user_email field contains personally identifiable information (PII). Your security team does not want the entire engineering team to have access to PII. You need to stop exposing PII to the engineering team and restrict access to security team members only. What should you do?

A.

Apply a jsonPayload.user_email exclusion filter to the _Default bucket.

B.

Apply the conditional role binding resource.name.extract("locations/global/buckets/(bucket)/") == "_Default" to the _Default bucket.

C.

Apply a jsonPayload.user_email restricted field to the _Default bucket. Grant the Log Field Accessor role to the security team members.

D.

Modify the application to toggle inclusion of user_email when the log_user_email environment variable is set to true. Restrict the engineering team members who can change the production environment variable by using the CODEOWNERS file.

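As an illustrative sketch of field-level access control (project ID and group address are placeholders), the PII field can be restricted on the _Default bucket and the field accessor role granted only to the security team:

    # Restrict the PII field on the _Default log bucket.
    gcloud logging buckets update _Default --location=global \
        --restricted-fields="jsonPayload.user_email"
    # Allow only the security team to read the restricted field.
    gcloud projects add-iam-policy-binding my-project \
        --member="group:security-team@example.com" \
        --role="roles/logging.fieldAccessor"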
Question # 8

You support a large service with a well-defined Service Level Objective (SLO). The development team deploys new releases of the service multiple times a week. If a major incident causes the service to miss its SLO, you want the development team to shift its focus from working on features to improving service reliability. What should you do before a major incident occurs?

A.

Develop an appropriate error budget policy in cooperation with all service stakeholders.

B.

Negotiate with the product team to always prioritize service reliability over releasing new features.

C.

Negotiate with the development team to reduce the release frequency to no more than once a week.

D.

Add a plugin to your Jenkins pipeline that prevents new releases whenever your service is out of SLO.

Question # 9

Your company runs applications in Google Kubernetes Engine (GKE) that are deployed following a GitOps methodology.

Application developers frequently create cloud resources to support their applications. You want to give developers the ability to manage infrastructure as code, while ensuring that you follow Google-recommended practices. You need to ensure that infrastructure as code reconciles periodically to avoid configuration drift. What should you do?

A.

Install and configure Config Connector in Google Kubernetes Engine (GKE).

B.

Configure Cloud Build with a Terraform builder to execute plan and apply commands.

C.

Create a Pod resource with a Terraform docker image to execute terraform plan and terraform apply commands.

D.

Create a Job resource with a Terraform docker image to execute terraform plan and terraform apply commands.

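For context, Config Connector can be enabled as a GKE add-on (it requires Workload Identity); a minimal sketch with placeholder cluster and zone names:

    # Enable the Config Connector add-on on an existing cluster (placeholder names).
    gcloud container clusters update my-cluster \
        --zone us-central1-a \
        --update-addons ConfigConnector=ENABLED
    # Config Connector then continuously reconciles the declared Google Cloud resources,
    # correcting configuration drift the same way other Kubernetes controllers do.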
Question # 10

You are building the CI/CD pipeline for an application deployed to Google Kubernetes Engine (GKE). The application is deployed by using a Kubernetes Deployment, Service, and Ingress. The application team asked you to deploy the application by using the blue/green deployment methodology. You need to implement the rollback actions. What should you do?

A.

Run the kubectl rollout undo command

B.

Delete the new container image, and delete the running Pods

C.

Update the Kubernetes Service to point to the previous Kubernetes Deployment

D.

Scale the new Kubernetes Deployment to zero

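To illustrate a Service-based rollback (option C), a sketch that assumes the Deployments are labeled version: blue and version: green and the Service is named web:

    # Point the Service selector back at the previous (blue) Deployment.
    kubectl patch service web \
        -p '{"spec":{"selector":{"app":"web","version":"blue"}}}'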
Question # 11

You support a trading application written in Python and hosted on App Engine flexible environment. You want to customize the error information being sent to Stackdriver Error Reporting. What should you do?

A.

Install the Stackdriver Error Reporting library for Python, and then run your code on a Compute Engine VM.

B.

Install the Stackdriver Error Reporting library for Python, and then run your code on Google Kubernetes Engine.

C.

Install the Stackdriver Error Reporting library for Python, and then run your code on App Engine flexible environment.

D.

Use the Stackdriver Error Reporting API to write errors from your application to ReportedErrorEvent, and then generate log entries with properly formatted error messages in Stackdriver Logging.

Question # 12

You are deploying an application that needs to access sensitive information. You need to ensure that this information is encrypted and the risk of exposure is minimal if a breach occurs. What should you do?

A.

Store the encryption keys in Cloud Key Management Service (KMS) and rotate the keys frequently

B.

Inject the secret at the time of instance creation via an encrypted configuration management system.

C.

Integrate the application with a Single sign-on (SSO) system and do not expose secrets to the application

D.

Leverage a continuous build pipeline that produces multiple versions of the secret for each instance of the application.

Question # 13

You are currently planning how to display Cloud Monitoring metrics for your organization's Google Cloud projects. Your organization has three folders and six projects:

You want to configure Cloud Monitoring dashboards to only display metrics from the projects within one folder. You need to ensure that the dashboards do not display metrics from projects in the other folders. You want to follow Google-recommended practices. What should you do?

A.

Create a single new scoping project

B.

Create new scoping projects for each folder

C.

Use the current app-one-prod project as the scoping project

D.

Use the current app-one-dev, app-one-staging and app-one-prod projects as the scoping project for each folder

Question # 14

You are responsible for creating and modifying the Terraform templates that define your Infrastructure. Because two new engineers will also be working on the same code, you need to define a process and adopt a tool that will prevent you from overwriting each other's code. You also want to ensure that you capture all updates in the latest version. What should you do?

A.

• Store your code in a Git-based version control system.
• Establish a process that allows developers to merge their own changes at the end of each day.
• Package and upload code to a versioned Cloud Storage bucket as the latest master version.

B.

• Store your code in a Git-based version control system.
• Establish a process that includes code reviews by peers and unit testing to ensure integrity and functionality before integration of code.
• Establish a process where the fully integrated code in the repository becomes the latest master version.

C.

• Store your code as text files in Google Drive in a defined folder structure that organizes the files.
• At the end of each day, confirm that all changes have been captured in the files within the folder structure.
• Rename the folder structure with a predefined naming convention that increments the version.

D.

• Store your code as text files in Google Drive in a defined folder structure that organizes the files.
• At the end of each day, confirm that all changes have been captured in the files within the folder structure and create a new .zip archive with a predefined naming convention.
• Upload the .zip archive to a versioned Cloud Storage bucket and accept it as the latest version.

Question # 15

You need to define Service Level Objectives (SLOs) for a high-traffic multi-region web application. Customers expect the application to always be available and have fast response times. Customers are currently happy with the application performance and availability. Based on current measurement, you observe that the 90th percentile of latency is 120ms and the 95th percentile of latency is 275ms over a 28-day window. What latency SLO would you recommend to the team to publish?

A.

90th percentile – 100ms; 95th percentile – 250ms

B.

90th percentile – 120ms; 95th percentile – 275ms

C.

90th percentile – 150ms; 95th percentile – 300ms

D.

90th percentile – 250ms; 95th percentile – 400ms

Question # 16

Your team has recently deployed an NGINX-based application into Google Kubernetes Engine (GKE) and has exposed it to the public via an HTTP Google Cloud Load Balancer (GCLB) ingress. You want to scale the deployment of the application's frontend using an appropriate Service Level Indicator (SLI). What should you do?

A.

Configure the horizontal pod autoscaler to use the average response time from the Liveness and Readiness probes.

B.

Configure the vertical pod autoscaler in GKE and enable the cluster autoscaler to scale the cluster as pods expand.

C.

Install the Stackdriver custom metrics adapter and configure a horizontal pod autoscaler to use the number of requests provided by the GCLB.

D.

Expose the NGINX stats endpoint and configure the horizontal pod autoscaler to use the request metrics exposed by the NGINX deployment.

Question # 17

You are writing a postmortem for an incident that severely affected users. You want to prevent similar incidents in the future. Which two of the following sections should you include in the postmortem? (Choose two.)

A.

An explanation of the root cause of the incident

B.

A list of employees responsible for causing the incident

C.

A list of action items to prevent a recurrence of the incident

D.

Your opinion of the incident’s severity compared to past incidents

E.

Copies of the design documents for all the services impacted by the incident

Question # 18

You support a high-traffic web application that runs on Google Cloud Platform (GCP). You need to measure application reliability from a user perspective without making any engineering changes to it. What should you do?

Choose 2 answers

A.

Review current application metrics and add new ones as needed.

B.

Modify the code to capture additional information for user interaction.

C.

Analyze the web proxy logs only and capture response time of each request.

D.

Create new synthetic clients to simulate a user journey using the application.

E.

Use current and historic Request Logs to trace customer interaction with the application.

Question # 19

Your team is designing a new application for deployment into Google Kubernetes Engine (GKE). You need to set up monitoring to collect and aggregate various application-level metrics in a centralized location. You want to use Google Cloud Platform services while minimizing the amount of work required to set up monitoring. What should you do?

A.

Publish various metrics from the application directly to the Stackdriver Monitoring API, and then observe these custom metrics in Stackdriver.

B.

Install the Cloud Pub/Sub client libraries, push various metrics from the application to various topics, and then observe the aggregated metrics in Stackdriver.

C.

Install the OpenTelemetry client libraries in the application, configure Stackdriver as the export destination for the metrics, and then observe the application's metrics in Stackdriver.

D.

Emit all metrics in the form of application-specific log messages, pass these messages from the containers to the Stackdriver logging collector, and then observe metrics in Stackdriver.

Question # 20

Your team uses Cloud Build for all CI/CD pipelines. You want to use the kubectl builder for Cloud Build to deploy new images to Google Kubernetes Engine (GKE). You need to authenticate to GKE while minimizing development effort. What should you do?

A.

Assign the Container Developer role to the Cloud Build service account.

B.

Specify the Container Developer role for Cloud Build in the cloudbuild.yaml file.

C.

Create a new service account with the Container Developer role and use it to run Cloud Build.

D.

Create a separate step in Cloud Build to retrieve service account credentials and pass these to kubectl.

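A minimal sketch of granting the role to the default Cloud Build service account (project ID and number are placeholders):

    # The default Cloud Build service account is PROJECT_NUMBER@cloudbuild.gserviceaccount.com.
    gcloud projects add-iam-policy-binding my-project \
        --member="serviceAccount:123456789012@cloudbuild.gserviceaccount.com" \
        --role="roles/container.developer"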
Question # 21

You have deployed a fleet of Compute Engine instances in Google Cloud. You need to ensure that monitoring metrics and logs for the instances are visible in Cloud Logging and Cloud Monitoring by your company's operations and cybersecurity teams. You need to grant the required roles for the Compute Engine service account by using Identity and Access Management (IAM) while following the principle of least privilege. What should you do?

A.

Grant the logging.editor and monitoring.metricWriter roles to the Compute Engine service accounts.

B.

Grant the logging.admin and monitoring.editor roles to the Compute Engine service accounts.

C.

Grant the logging.logWriter and monitoring.editor roles to the Compute Engine service accounts.

D.

Grant the logging.logWriter and monitoring.metricWriter roles to the Compute Engine service accounts.

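For illustration, granting only the write-scoped roles to the instances' service account might look like this (project and service account email are placeholders):

    for role in roles/logging.logWriter roles/monitoring.metricWriter; do
      gcloud projects add-iam-policy-binding my-project \
          --member="serviceAccount:vm-sa@my-project.iam.gserviceaccount.com" \
          --role="$role"
    done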
Question # 22

You have an application running in Google Kubernetes Engine. The application invokes multiple services per request but responds too slowly. You need to identify which downstream service or services are causing the delay. What should you do?

A.

Analyze VPC flow logs along the path of the request.

B.

Investigate the Liveness and Readiness probes for each service.

C.

Create a Dataflow pipeline to analyze service metrics in real time.

D.

Use a distributed tracing framework such as OpenTelemetry or Stackdriver Trace.

Question # 23

Your application runs on Google Cloud Platform (GCP). You need to implement Jenkins for deploying application releases to GCP. You want to streamline the release process, lower operational toil, and keep user data secure. What should you do?

A.

Implement Jenkins on local workstations.

B.

Implement Jenkins on Kubernetes on-premises

C.

Implement Jenkins on Google Cloud Functions.

D.

Implement Jenkins on Compute Engine virtual machines.

Question # 24

You need to enforce several constraint templates across your Google Kubernetes Engine (GKE) clusters. The constraints include policy parameters, such as restricting the Kubernetes API. You must ensure that the policy parameters are stored in a GitHub repository and automatically applied when changes occur. What should you do?  

A.

Set up a GitHub action to trigger Cloud Build when there is a parameter change. In Cloud Build, run a gcloud CLI command to apply the change.

B.

When there is a change in GitHub, use a webhook to send a request to Cloud Service Mesh, and apply the change.

C.

Configure Config Sync with the GitHub repository. When there is a change in the repository, use Config Sync to apply the change.

D.

Configure Config Connector with the GitHub repository. When there is a change in the repository, use Config Connector to apply the change.

Question # 25

You are leading a DevOps project for your organization. The DevOps team is responsible for managing the service infrastructure and being on-call for incidents. The Software Development team is responsible for writing, submitting, and reviewing code. Neither team has any published SLOs. You want to design a new joint-ownership model for a service between the DevOps team and the Software Development team. Which responsibilities should be assigned to each team in the new joint-ownership model?

A.

Option A

B.

Option B

C.

Option C

D.

Option D

Question # 26

You currently store the virtual machine (VM) utilization logs in Stackdriver. You need to provide an easy-to-share interactive VM utilization dashboard that is updated in real time and contains information aggregated on a quarterly basis. You want to use Google Cloud Platform solutions. What should you do?

A.

1. Export VM utilization logs from Stackdriver to BigQuery.
2. Create a dashboard in Data Studio.
3. Share the dashboard with your stakeholders.

B.

1. Export VM utilization logs from Stackdriver to Cloud Pub/Sub.
2. From Cloud Pub/Sub, send the logs to a Security Information and Event Management (SIEM) system.
3. Build the dashboards in the SIEM system and share them with your stakeholders.

C.

1. Export VM utilization logs from Stackdriver to BigQuery.
2. From BigQuery, export the logs to a CSV file.
3. Import the CSV file into Google Sheets.
4. Build a dashboard in Google Sheets and share it with your stakeholders.

D.

1. Export VM utilization logs from Stackdriver to a Cloud Storage bucket.
2. Enable the Cloud Storage API to pull the logs programmatically.
3. Build a custom data visualization application.
4. Display the pulled logs in a custom dashboard.

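As a sketch of the export step (dataset, project, and filter are illustrative), a Cloud Logging sink can stream the VM utilization logs into BigQuery, where Data Studio can read them directly:

    # Create a BigQuery dataset and a log sink that exports GCE instance logs into it.
    bq mk --dataset my-project:vm_utilization
    gcloud logging sinks create vm-utilization-sink \
        bigquery.googleapis.com/projects/my-project/datasets/vm_utilization \
        --log-filter='resource.type="gce_instance"'
    # Remember to grant the sink's writer identity BigQuery Data Editor access on the dataset.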
Question # 27

You are responsible for the reliability of a high-volume enterprise application. A large number of users report that an important subset of the application’s functionality – a data-intensive reporting feature – is consistently failing with an HTTP 500 error. When you investigate your application’s dashboards, you notice a strong correlation between the failures and a metric that represents the size of an internal queue used for generating reports. You trace the failures to a reporting backend that is experiencing high I/O wait times. You quickly fix the issue by resizing the backend’s persistent disk (PD). Now you need to create an availability Service Level Indicator (SLI) for the report generation feature. How would you define it?

A.

As the I/O wait times aggregated across all report generation backends

B.

As the proportion of report generation requests that result in a successful response

C.

As the application’s report generation queue size compared to a known-good threshold

D.

As the reporting backend PD throughput capacity compared to a known-good threshold

Question # 28

You are designing a system with three different environments: development, quality assurance (QA), and production. Each environment will be deployed with Terraform and has a Google Kubernetes Engine (GKE) cluster created so that application teams can deploy their applications. Anthos Config Management will be used and templated to deploy infrastructure-level resources in each GKE cluster. All users (for example, infrastructure operators and application owners) will use GitOps. How should you structure your source control repositories for both Infrastructure as Code (IaC) and application code?

A.

Cloud Infrastructure (Terraform) repository is shared: different directories are different environments.
GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments.
Application (app source code) repositories are separated: different branches are different features.

B.

Cloud Infrastructure (Terraform) repository is shared: different directories are different environments.
GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different branches are different environments.
Application (app source code) repositories are separated: different branches are different features.

C.

Cloud Infrastructure (Terraform) repository is shared: different branches are different environments.
GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments.
Application (app source code) repository is shared: different directories are different features.

D.

Cloud Infrastructure (Terraform) repositories are separated: different branches are different environments.
GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different overlay directories are different environments.
Application (app source code) repositories are separated: different branches are different features.

Question # 29

You have an application deployed to Cloud Run. A new version of the application has recently been deployed using the canary deployment strategy. Your Site Reliability Engineering (SRE) teammate informs you that an SLO has been exceeded for this application. You need to make the application healthy as quickly as possible. What should you do first?

A.

Configure traffic splitting to send 100% of the traffic to the latest revision.

B.

Configure traffic splitting to send 100% of the traffic to the previous revision.

C.

Create a new revision using the last known good version of the application.

D.

Identify the cause of the latency by using Cloud Trace.

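For reference, shifting all traffic back to an earlier revision is a single traffic-splitting command; the service, region, and revision names below are placeholders:

    # Send 100% of traffic back to the last known good revision.
    gcloud run services update-traffic payment-app \
        --region us-central1 \
        --to-revisions payment-app-00041-xyz=100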
Question # 30

Your company allows teams to self-manage Google Cloud projects, including project-level Identity and Access Management (IAM). You are concerned that the team responsible for the Shared VPC project might accidentally delete the project, so a lien has been placed on the project. You need to design a solution to restrict Shared VPC project deletion to those with the resourcemanager.projects.updateLiens permission at the organization level. What should you do?

A.

Enable VPC Service Controls for the container.googleapis.com API service.

B.

Revoke the resourcemanager.projects.updateLiens permission from all users associated with the project.

C.

Enable the compute.restrictXpnProjectLienRemoval organization policy constraint.

D.

Instruct teams to only perform IAM permission management as code with Terraform.

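A sketch of enforcing the boolean constraint at the organization level (the organization ID is a placeholder):

    # Only principals with resourcemanager.projects.updateLiens at the org level
    # can then remove the lien that protects the Shared VPC host project.
    gcloud resource-manager org-policies enable-enforce \
        compute.restrictXpnProjectLienRemoval \
        --organization=123456789012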
Question # 31

You are managing an application that exposes an HTTP endpoint without using a load balancer. The latency of the HTTP responses is important for the user experience. You want to understand what HTTP latencies all of your users are experiencing. You use Stackdriver Monitoring. What should you do?

A.

• In your application, create a metric with a metricKind set to DELTA and a valueType set to DOUBLE.
• In Stackdriver's Metrics Explorer, use a Stacked Bar graph to visualize the metric.

B.

• In your application, create a metric with a metricKind set to CUMULATIVE and a valueType set to DOUBLE.
• In Stackdriver's Metrics Explorer, use a Line graph to visualize the metric.

C.

• In your application, create a metric with a metricKind set to GAUGE and a valueType set to DISTRIBUTION.
• In Stackdriver's Metrics Explorer, use a Heatmap graph to visualize the metric.

D.

• In your application, create a metric with a metricKind set to METRIC_KIND_UNSPECIFIED and a valueType set to INT64.
• In Stackdriver's Metrics Explorer, use a Stacked Area graph to visualize the metric.

Question # 32

You are creating a CI/CD pipeline to perform Terraform deployments of Google Cloud resources. Your CI/CD tooling is running in Google Kubernetes Engine (GKE) and uses an ephemeral Pod for each pipeline run. You must ensure that the pipelines that run in the Pods have the appropriate Identity and Access Management (IAM) permissions to perform the Terraform deployments. You want to follow Google-recommended practices for identity management. What should you do?

Choose 2 answers

A.

Create a new Kubernetes service account, and assign the service account to the Pods. Use Workload Identity to authenticate as the Google service account.

B.

Create a new JSON service account key for the Google service account, store the key as a Kubernetes secret, inject the key into the Pods, and set the GOOGLE_APPLICATION_CREDENTIALS environment variable.

C.

Create a new Google service account, and assign the appropriate IAM permissions.

D.

Create a new JSON service account key for the Google service account, store the key in the secret management store for the CI/CD tool, and configure Terraform to use this key for authentication.

E.

Assign the appropriate IAM permissions to the Google service account associated with the Compute Engine VM instances that run the Pods.

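To illustrate how options A and C fit together, a Workload Identity sketch with placeholder project, namespace, role, and service account names:

    # Google service account with the permissions Terraform needs (the role shown is illustrative).
    gcloud iam service-accounts create terraform-ci
    gcloud projects add-iam-policy-binding my-project \
        --member="serviceAccount:terraform-ci@my-project.iam.gserviceaccount.com" \
        --role="roles/editor"
    # Let the Kubernetes service account used by the pipeline Pods impersonate it.
    gcloud iam service-accounts add-iam-policy-binding \
        terraform-ci@my-project.iam.gserviceaccount.com \
        --role="roles/iam.workloadIdentityUser" \
        --member="serviceAccount:my-project.svc.id.goog[ci/pipeline-ksa]"
    kubectl annotate serviceaccount pipeline-ksa --namespace ci \
        iam.gke.io/gcp-service-account=terraform-ci@my-project.iam.gserviceaccount.com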
Question # 33

You recently configured an App Hub application. You are able to see the managed instance group, backend service, and URL map listed in App Hub, but you do not see the forwarding rule. You must ensure that the forwarding rule is listed. What should you do?

A.

Attach the project containing the forwarding rule as an App Hub service project.

B.

Enable the App Hub API in the project containing the forwarding rule.

C.

Configure the forwarding rule to forward to the correct target proxy.

D.

Register the forwarding rule as a service in the application configuration.

Question # 34

You deploy a new release of an internal application during a weekend maintenance window when there is minimal user traffic. After the window ends, you learn that one of the new features isn't working as expected in the production environment. After an extended outage, you roll back the new release and deploy a fix. You want to modify your release process to reduce the mean time to recovery so you can avoid extended outages in the future. What should you do?

Choose 2 answers

A.

Before merging new code, require 2 different peers to review the code changes.

B.

Adopt the blue/green deployment strategy when releasing new code via a CD server.

C.

Integrate a code linting tool to validate coding standards before any code is accepted into the repository.

D.

Require developers to run automated integration tests on their local development environments before release.

E.

Configure a CI server. Add a suite of unit tests to your code and have your CI server run them on commit and verify any changes.

Question # 35

You are using Stackdriver to monitor applications hosted on Google Cloud Platform (GCP). You recently deployed a new application, but its logs are not appearing on the Stackdriver dashboard.

You need to troubleshoot the issue. What should you do?

A.

Confirm that the Stackdriver agent has been installed in the hosting virtual machine.

B.

Confirm that your account has the proper permissions to use the Stackdriver dashboard.

C.

Confirm that port 25 has been opened in the firewall to allow messages through to Stackdriver.

D.

Confirm that the application is using the required client library and the service account key has proper permissions.

Question # 36

You are configuring connectivity across Google Kubernetes Engine (GKE) clusters in different VPCs. You notice that the nodes in Cluster A are unable to access the nodes in Cluster B. You suspect that the workload access issue is due to the network configuration. You need to troubleshoot the issue but do not have execute access to workloads and nodes. You want to identify the layer at which the network connectivity is broken. What should you do?

A.

Install a toolbox container on the node in Cluster A. Confirm that the routes to Cluster B are configured appropriately.

B.

Use Network Connectivity Center to perform a Connectivity Test from Cluster A to Cluster B.

C.

Use a debug container to run the traceroute command from Cluster A to Cluster B and from Cluster B to Cluster A. Identify the common failure point.

D.

Enable VPC Flow Logs in both VPCs and monitor packet drops

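As an illustrative sketch (instance names and project are placeholders), a Connectivity Test can analyze the configured network path without any access to the workloads themselves:

    # Analyze the path between a node in Cluster A and a node in Cluster B.
    gcloud network-management connectivity-tests create cluster-a-to-b \
        --source-instance=projects/my-project/zones/us-central1-a/instances/gke-cluster-a-node-1 \
        --destination-instance=projects/my-project/zones/us-east1-b/instances/gke-cluster-b-node-1 \
        --protocol=TCP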
Question # 37

You use Terraform to manage an application deployed to a Google Cloud environment. The application runs on instances deployed by a managed instance group. The Terraform code is deployed by using a CI/CD pipeline. When you change the machine type on the instance template used by the managed instance group, the pipeline fails at the terraform apply stage with the following error message:

You need to update the instance template and minimize disruption to the application and the number of pipeline runs. What should you do?

A.

Delete the managed instance group and recreate it after updating the instance template

B.

Add a new instance template, update the managed instance group to use the new instance template, and delete the old instance template.

C.

Remove the managed instance group from the Terraform state file, update the instance template, and reimport the managed instance group.

D.

Set the create_before_destroy meta-argument to true in the lifecycle block on the instance template.

Question # 38

You manage several production systems that run on Compute Engine in the same Google Cloud Platform (GCP) project. Each system has its own set of dedicated Compute Engine instances. You want to know how much it costs to run each of the systems. What should you do?

A.

In the Google Cloud Platform Console, use the Cost Breakdown section to visualize the costs per system.

B.

Assign all instances a label specific to the system they run. Configure BigQuery billing export and query costs per label.

C.

Enrich all instances with metadata specific to the system they run. Configure Stackdriver Logging to export to BigQuery, and query costs based on the metadata.

D.

Name each virtual machine (VM) after the system it runs. Set up a usage report export to a Cloud Storage bucket. Configure the bucket as a source in BigQuery to query costs based on VM name.

Question # 39

You are building and running client applications in Cloud Run and Cloud Functions. Your client requires that all logs must be available for one year so that the client can import the logs into their logging service. You must minimize required code changes. What should you do?

A.

Update all images in Cloud Run and all functions in Cloud Functions to send logs to both Cloud Logging and the client's logging service. Ensure that all the ports required to send logs are open in the VPC firewall.

B.

Create a Pub/Sub topic, subscription, and logging sink. Configure the logging sink to send all logs into the topic. Give your client access to the topic to retrieve the logs.

C.

Create a storage bucket and appropriate VPC firewall rules. Update all images in Cloud Run and all functions in Cloud Functions to send logs to a file within the storage bucket.

D.

Create a logs bucket and logging sink. Set the retention on the logs bucket to 365 days. Configure the logging sink to send logs to the bucket. Give your client access to the bucket to retrieve the logs.

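For illustration (bucket, sink, and project names are placeholders), option D translates roughly into:

    # Log bucket that keeps entries for one year.
    gcloud logging buckets create client-logs --location=global --retention-days=365
    # Route logs into that bucket.
    gcloud logging sinks create client-logs-sink \
        logging.googleapis.com/projects/my-project/locations/global/buckets/client-logs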
Question # 40

Your organization is starting to containerize with Google Cloud. You need a fully managed storage solution for container images and Helm charts. You need to identify a storage solution that has native integration into existing Google Cloud services, including Google Kubernetes Engine (GKE), Cloud Run, VPC Service Controls, and Identity and Access Management (IAM). What should you do?

A.

Use Docker to configure a Cloud Storage driver pointed at the bucket owned by your organization.

B.

Configure Container Registry as an OCI-based container registry for container images.

C.

Configure Artifact Registry as an OCI-based container registry for both Helm charts and container images.

D.

Configure an open source container registry server to run in GKE with a restrictive role-based access control (RBAC) configuration.

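A minimal sketch of creating a Docker-format Artifact Registry repository (names and location are placeholders); the same format stores both container images and Helm charts as OCI artifacts:

    gcloud artifacts repositories create app-images \
        --repository-format=docker \
        --location=us-central1 \
        --description="Container images and Helm charts"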
Question # 41

You are configuring Cloud Logging for a new application that runs on a Compute Engine instance with a public IP address. A user-managed service account is attached to the instance. You confirmed that the necessary agents are running on the instance but you cannot see any log entries from the instance in Cloud Logging. You want to resolve the issue by following Google-recommended practices. What should you do?

A.

Add the Logs Writer role to the service account.

B.

Enable Private Google Access on the subnet that the instance is in.

C.

Update the instance to use the default Compute Engine service account.

D.

Export the service account key and configure the agents to use the key.

Question # 42

You have a CI/CD pipeline that uses Cloud Build to build new Docker images and push them to Docker Hub. You use Git for code versioning. After making a change in the Cloud Build YAML configuration, you notice that no new artifacts are being built by the pipeline. You need to resolve the issue following Site Reliability Engineering practices. What should you do?

A.

Disable the CI pipeline and revert to manually building and pushing the artifacts.

B.

Change the CI pipeline to push the artifacts to Container Registry instead of Docker Hub.

C.

Upload the configuration YAML file to Cloud Storage and use Error Reporting to identify and fix the issue.

D.

Run a Git compare between the previous and current Cloud Build Configuration files to find and fix the bug.

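For reference, comparing the current Cloud Build configuration against the last known working version is a small Git operation; the file name follows the question, and the commit reference is illustrative:

    # Show what changed in the build configuration since the previous commit.
    git diff HEAD~1 -- cloudbuild.yaml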
Question # 43

You created a Stackdriver chart for CPU utilization in a dashboard within your workspace project. You want to share the chart with your Site Reliability Engineering (SRE) team only. You want to ensure you follow the principle of least privilege. What should you do?

A.

Share the workspace Project ID with the SRE team. Assign the SRE team the Monitoring Viewer IAM role in the workspace project.

B.

Share the workspace Project ID with the SRE team. Assign the SRE team the Dashboard Viewer IAM role in the workspace project.

C.

Click "Share chart by URL" and provide the URL to the SRE team. Assign the SRE team the Monitoring Viewer IAM role in the workspace project.

D.

Click "Share chart by URL" and provide the URL to the SRE team. Assign the SRE team the Dashboard Viewer IAM role in the workspace project.

Question # 44

Your application images are built and pushed to Google Container Registry (GCR). You want to build an automated pipeline that deploys the application when the image is updated while minimizing the development effort. What should you do?

A.

Use Cloud Build to trigger a Spinnaker pipeline.

B.

Use Cloud Pub/Sub to trigger a Spinnaker pipeline.

C.

Use a custom builder in Cloud Build to trigger a Jenkins pipeline.

D.

Use Cloud Pub/Sub to trigger a custom deployment service running in Google Kubernetes Engine (GKE).

Question # 45

You are the Operations Lead for an ongoing incident with one of your services. The service usually runs at around 70% capacity. You notice that one node is returning 5xx errors for all requests. There has also been a noticeable increase in support cases from customers. You need to remove the offending node from the load balancer pool so that you can isolate and investigate the node. You want to follow Google-recommended practices to manage the incident and reduce the impact on users. What should you do?

A.

1. Communicate your intent to the incident team.
2. Perform a load analysis to determine if the remaining nodes can handle the increase in traffic offloaded from the removed node, and scale appropriately.
3. When any new nodes report healthy, drain traffic from the unhealthy node, and remove the unhealthy node from service.

B.

1. Communicate your intent to the incident team.
2. Add a new node to the pool, and wait for the new node to report as healthy.
3. When traffic is being served on the new node, drain traffic from the unhealthy node, and remove the old node from service.

C.

1. Drain traffic from the unhealthy node and remove the node from service.
2. Monitor traffic to ensure that the error is resolved and that the other nodes in the pool are handling the traffic appropriately.
3. Scale the pool as necessary to handle the new load.
4. Communicate your actions to the incident team.

D.

1. Drain traffic from the unhealthy node and remove the old node from service.
2. Add a new node to the pool, wait for the new node to report as healthy, and then serve traffic to the new node.
3. Monitor traffic to ensure that the pool is healthy and is handling traffic appropriately.
4. Communicate your actions to the incident team.

Question # 46

Your company runs services by using Google Kubernetes Engine (GKE). The GKE clusters in the development environment run applications with verbose logging enabled. Developers view logs by using the kubectl logs command and do not use Cloud Logging. Applications do not have a uniform logging structure defined. You need to minimize the costs associated with application logging while still collecting GKE operational logs. What should you do?

A.

Run the gcloud container clusters update --logging=SYSTEM command for the development cluster.

B.

Run the gcloud container clusters update --logging=WORKLOAD command for the development cluster.

C.

Run the gcloud logging sinks update _Default --disabled command in the project associated with the development environment.

D.

Add the severity >= DEBUG resource.type="k8s_container" exclusion filter to the _Default logging sink in the project associated with the development environment.

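For reference, option A keeps only system (operational) logs flowing to Cloud Logging; the cluster name and zone below are placeholders:

    # Collect only GKE system logs; workload logs stay out of Cloud Logging,
    # but kubectl logs continues to work because it reads logs from the nodes directly.
    gcloud container clusters update dev-cluster \
        --zone us-central1-a \
        --logging=SYSTEM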
Question # 47

You are performing a semi-annual capacity planning exercise for your flagship service. You expect a service user growth rate of 10% month-over-month for the next six months. Your service is fully containerized and runs on a Google Kubernetes Engine (GKE) standard cluster across three zones with cluster autoscaling enabled. You currently consume about 30% of your total deployed CPU capacity, and you require resilience against the failure of a zone. You want to ensure that your users experience minimal negative impact as a result of this growth or as a result of zone failure while you avoid unnecessary costs. How should you prepare to handle the predicted growth?

A.

Verify the maximum node pool size, enable a Horizontal Pod Autoscaler, and then perform a load test to verify your expected resource needs.

B.

Because you deployed the service on GKE and are using a cluster autoscaler, your GKE cluster will scale automatically regardless of growth rate.

C.

Because you are only using 30% of deployed CPU capacity, there is significant headroom, and you do not need to add any additional capacity for this rate of growth.

D.

Proactively add 80% more node capacity to account for six months of 10% growth, and then perform a load test to ensure that you have enough capacity.

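To illustrate option A, a Horizontal Pod Autoscaler can be created with one command before load testing; the deployment name and thresholds are illustrative:

    # Scale the frontend Deployment on CPU utilization; the cluster autoscaler
    # adds nodes (up to the node pool maximum) when pending Pods need them.
    kubectl autoscale deployment flagship-frontend \
        --cpu-percent=60 --min=9 --max=30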
Question # 48

You are ready to deploy a new feature of a web-based application to production. You want to use Google Kubernetes Engine (GKE) to perform a phased rollout to half of the web server pods.

What should you do?

A.

Use a partitioned rolling update.

B.

Use Node taints with NoExecute.

C.

Use a replica set in the deployment specification.

D.

Use a stateful set with parallel pod management policy.

Question # 49

Your organization is using Helm to package containerized applications. Your applications reference both public and private charts. Your security team flagged that using a public Helm repository as a dependency is a risk. You want to manage all charts uniformly, with native access control and VPC Service Controls. What should you do?

A.

Store public and private charts in OCI format by using Artifact Registry

B.

Store public and private charts by using GitHub Enterprise with Google Workspace as the identity provider

C.

Store public and private charts by using a Git repository. Configure Cloud Build to synchronize contents of the repository into a Cloud Storage bucket. Connect Helm to the bucket by using https://[bucket].storage.googleapis.com/[helmchart] as the Helm repository.

D.

Configure a Helm chart repository server to run in Google Kubernetes Engine (GKE) with Cloud Storage bucket as the storage backend

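As a sketch (repository, project, and chart names are placeholders), Helm 3 can push and pull charts as OCI artifacts in Artifact Registry, which provides IAM-based access control and works inside a VPC Service Controls perimeter:

    # Authenticate Helm against Artifact Registry and push a packaged chart.
    gcloud auth print-access-token | helm registry login -u oauth2accesstoken \
        --password-stdin https://us-central1-docker.pkg.dev
    helm push mychart-0.1.0.tgz oci://us-central1-docker.pkg.dev/my-project/charts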
Question # 50

Your company is migrating its production systems to Google Cloud. You need to implement site reliability engineering (SRE) practices during the migration to minimize customer impact from potential future incidents. Which two SRE practices should you implement?

Choose 2 answers

A.

Ensure that full autonomy and permissions are only granted to the on-call team.

B.

Automate common tasks to analyze key impact information and intelligently suggest mitigating actions for the on-call team.

C.

Ensure that all teams can modify the production environment to resolve issues.

D.

Create an alerting mechanism for your SRE team based on your system's internal behavior.

E.

Create up-to-date playbooks with instructions for debugging and mitigating issues.

Question # 51

You are running a web application deployed to a Compute Engine managed instance group. Ops Agent is installed on all instances. You recently noticed suspicious activity from a specific IP address. You need to configure Cloud Monitoring to view the number of requests from that specific IP address with minimal operational overhead. What should you do?

A.

Configure the Ops Agent with a logging receiver. Create a logs-based metric.

B.

Create a script to scrape the web server log. Export the IP address request metrics to the Cloud Monitoring API.

C.

Update the application to export the IP address request metrics to the Cloud Monitoring API

D.

Configure the Ops Agent with a metrics receiver

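For illustration, once the Ops Agent's logging receiver ships the web server access logs, a logs-based metric can count requests from the suspicious address; the metric name, log field, and IP address are illustrative and depend on the log format:

    # Counter metric over access-log entries that match the suspicious client IP,
    # which can then be charted or alerted on in Cloud Monitoring.
    gcloud logging metrics create requests_from_suspect_ip \
        --description="Requests from the suspicious IP address" \
        --log-filter='jsonPayload.remote_ip="203.0.113.7"'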
Question # 52

You are working with a government agency that requires you to archive application logs for seven years. You need to configure Stackdriver to export and store the logs while minimizing costs of storage. What should you do?

A.

Create a Cloud Storage bucket and develop your application to send logs directly to the bucket.

B.

Develop an App Engine application that pulls the logs from Stackdriver and saves them in BigQuery.

C.

Create an export in Stackdriver and configure Cloud Pub/Sub to store logs in permanent storage for seven years.

D.

Create a sink in Stackdriver, name it, create a bucket on Cloud Storage for storing archived logs, and then select the bucket as the log export destination.

Question # 53

You are developing reusable infrastructure as code modules. Each module contains integration tests that launch the module in a test project. You are using GitHub for source control. You need to Continuously test your feature branch and ensure that all code is tested before changes are accepted. You need to implement a solution to automate the integration tests. What should you do?

A.

Use a Jenkins server for CI/CD pipelines. Periodically run all tests in the feature branch.

B.

Use Cloud Build to run the tests. Trigger all tests to run after a pull request is merged.

C.

Ask the pull request reviewers to run the integration tests before approving the code.

D.

Use Cloud Build to run tests in a specific folder. Trigger Cloud Build for every GitHub pull request.

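A minimal sketch of option D using a GitHub pull-request trigger (repository owner, name, and paths are placeholders):

    # Run the module's integration tests on every pull request against main.
    gcloud builds triggers create github \
        --name="module-integration-tests" \
        --repo-owner="my-org" --repo-name="infra-modules" \
        --pull-request-pattern="^main$" \
        --build-config="modules/network/cloudbuild.yaml"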
Question # 54

You need to introduce postmortems into your organization. You want to ensure that the postmortem process is well received. What should you do?

Choose 2 answers

A.

Create a designated team that is responsible for conducting all postmortems.

B.

Encourage new employees to conduct postmortems to learn through practice.

C.

Ensure that writing effective postmortems is a rewarded and celebrated practice.

D.

Encourage your senior leadership to acknowledge and participate in postmortems.

E.

Provide your organization with a forum to critique previous postmortems.

Question # 55

You have an application that runs on Cloud Run. You want to use live production traffic to test a new version of the application while you let the quality assurance team perform manual testing. You want to limit the potential impact of any issues while testing the new version, and you must be able to roll back to a previous version of the application if needed. How should you deploy the new version?

Choose 2 answers

A.

Deploy the application as a new Cloud Run service.

B.

Deploy a new Cloud Run revision with a tag and use the --no-traffic option.

C.

Deploy a new Cloud Run revision without a tag and use the --no-traffic option.

D.

Deploy the new application version and use the --no-traffic option. Route production traffic to the revision's URL.

E.

Deploy the new application version and split traffic to the new version.

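To illustrate option B, a sketch with placeholder service, image, region, and tag names:

    # Deploy the new version as a revision that receives no production traffic
    # but is reachable at a dedicated tag URL for QA testing.
    gcloud run deploy shop-frontend \
        --image=us-central1-docker.pkg.dev/my-project/app/shop-frontend:v2 \
        --region=us-central1 \
        --no-traffic --tag=qa
    # Later, shift a small share of live traffic to the tagged revision, or back to 0 to roll back.
    gcloud run services update-traffic shop-frontend --region=us-central1 --to-tags=qa=5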
Question # 56

You support an application deployed on Compute Engine. The application connects to a Cloud SQL instance to store and retrieve data. After an update to the application, users report errors showing database timeout messages. The number of concurrent active users remained stable. You need to find the most probable cause of the database timeout. What should you do?

A.

Check the serial port logs of the Compute Engine instance.

B.

Use Stackdriver Profiler to visualize the resources utilization throughout the application.

C.

Determine whether there is an increased number of connections to the Cloud SQL instance.

D.

Use Cloud Security Scanner to see whether your Cloud SQL is under a Distributed Denial of Service (DDoS) attack.

Question # 57

Your company recently migrated to Google Cloud. You need to design a fast, reliable, and repeatable solution for your company to provision new projects and basic resources in Google Cloud. What should you do?

A.

Use the Google Cloud console to create projects.

B.

Write a script by using the gcloud CLI that passes the appropriate parameters from the request. Save the script in a Git repository.

C.

Write a Terraform module and save it in your source control repository. Copy and run the apply command to create the new project.

D.

Use the Terraform repositories from the Cloud Foundation Toolkit. Apply the code with appropriate parameters to create the Google Cloud project and related resources.
