
Google Cloud Certified - Professional Cloud Database Engineer (Professional-Cloud-Database-Engineer) Questions and Answers

Question 4

You are building an Android game that needs to store data on a Google Cloud serverless database. The database will log user activity, store user preferences, and receive in-game updates. The target audience resides in developing countries that have intermittent internet connectivity. You need to ensure that the game can synchronize game data to the backend database whenever an internet connection is available. What should you do?

Options:

A. Use Firestore.
B. Use Cloud SQL with an external (public) IP address.
C. Use an in-app embedded database.
D. Use Cloud Spanner.

Question 5

You are designing for a write-heavy application. During testing, you discover that the write workloads are performant in a regional Cloud Spanner instance but slow down by an order of magnitude in a multi-regional instance. You want to make the write workloads faster in a multi-regional instance. What should you do?

Options:

A. Place the bulk of the read and write workloads closer to the default leader region.
B. Use staleness of at least 15 seconds.
C. Add more read-write replicas.
D. Keep the total CPU utilization under 45% in each region.

Question 6

Your company is shutting down their data center and migrating several MySQL and PostgreSQL databases to Google Cloud. Your database operations team is severely constrained by ongoing production releases and the lack of capacity for additional on-premises backups. You want to ensure that the scheduled migrations happen with minimal downtime and that the Google Cloud databases stay in sync with the on-premises data changes until the applications can cut over.

What should you do? (Choose two.)

Options:

A. Use an external read replica to migrate the databases to Cloud SQL.
B. Use a read replica to migrate the databases to Cloud SQL.
C. Use Database Migration Service to migrate the databases to Cloud SQL.
D. Use a cross-region read replica to migrate the databases to Cloud SQL.
E. Use replication from an external server to migrate the databases to Cloud SQL.

Question 7

Your digital-native business runs its database workloads on Cloud SQL. Your website must be globally accessible 24/7. You need to prepare your Cloud SQL instance for high availability (HA). You want to follow Google-recommended practices. What should you do? (Choose two.)

Options:

A. Set up manual backups.
B. Create a PostgreSQL database on-premises as the HA option.
C. Configure single zone availability for automated backups.
D. Enable point-in-time recovery.
E. Schedule automated backups.
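
For context, automated backups and point-in-time recovery are enabled on an existing instance with a command along the following lines (a sketch; the instance name and backup window are hypothetical, and MySQL instances use --enable-bin-log where PostgreSQL uses --enable-point-in-time-recovery):

# Turn on daily automated backups and PITR for a PostgreSQL instance.
gcloud sql instances patch prod-sql-01 \
    --backup-start-time=02:00 \
    --enable-point-in-time-recovery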

Question 8

You are managing two different applications: Order Management and Sales Reporting. Both applications interact with the same Cloud SQL for MySQL database. The Order Management application reads and writes to the database 24/7, but the Sales Reporting application is read-only. Both applications need the latest data. You need to ensure that the performance of the Order Management application is not affected by the Sales Reporting application. What should you do?

Options:

A. Create a read replica for the Sales Reporting application.
B. Create two separate databases in the instance, and perform dual writes from the Order Management application.
C. Use a Cloud SQL federated query for the Sales Reporting application.
D. Queue up all the requested reports in Pub/Sub, and execute the reports at night.
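
For reference, a read replica of an existing Cloud SQL instance is created with a command like the following (a sketch; the instance names and region are hypothetical):

# Create a replica that the reporting workload can query without touching the primary.
gcloud sql instances create sales-reporting-replica \
    --master-instance-name=order-mgmt-primary \
    --region=us-central1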

Question 9

You are managing a Cloud SQL for MySQL environment in Google Cloud. You have deployed a primary instance in Zone A and a read replica instance in Zone B, both in the same region. You are notified that the replica instance in Zone B was unavailable for 10 minutes. You need to ensure that the read replica instance is still working. What should you do?

Options:

A. Use the Google Cloud Console or gcloud CLI to manually create a new clone database.
B. Use the Google Cloud Console or gcloud CLI to manually create a new failover replica from backup.
C. Verify that the new replica is created automatically.
D. Start the original primary instance and resume replication.

Question 10

You want to migrate your on-premises PostgreSQL database to Compute Engine. You need to migrate this database with the minimum downtime possible. What should you do?

Options:

A. Perform a full backup of your on-premises PostgreSQL, and then, in the migration window, perform an incremental backup.
B. Create a read replica on Cloud SQL, and then promote it to a read/write standalone instance.
C. Use Database Migration Service to migrate your database.
D. Create a hot standby on Compute Engine, and use PgBouncer to switch over the connections.

Question 11

You manage a meeting booking application that uses Cloud SQL. During an important launch, the Cloud SQL instance went through a maintenance event that resulted in a downtime of more than 5 minutes and adversely affected your production application. You need to immediately address the maintenance issue to prevent any unplanned events in the future. What should you do?

Options:

A. Set your production instance's maintenance window to non-business hours.
B. Migrate the Cloud SQL instance to Cloud Spanner to avoid any future disruptions due to maintenance.
C. Contact Support to understand why your Cloud SQL instance had a downtime of more than 5 minutes.
D. Use Cloud Scheduler to schedule a maintenance window of no longer than 5 minutes.
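
For context, a maintenance window is configured on an existing instance with a command along these lines (a sketch; the instance name, day, and hour are hypothetical):

# Restrict maintenance to early Sunday morning.
gcloud sql instances patch prod-booking-db \
    --maintenance-window-day=SUN \
    --maintenance-window-hour=3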

Question 12

You host an application in Google Cloud. The application is located in a single region and uses Cloud SQL for transactional data. Most of your users are located in the same time zone and expect the application to be available 7 days a week, from 6 AM to 10 PM. You want to ensure regular maintenance updates to your Cloud SQL instance without creating downtime for your users. What should you do?

Options:

A. Configure a maintenance window during a period when no users will be on the system. Control the order of updates by setting non-production instances to update earlier and production instances to update later.
B. Create your database with one primary node and one read replica in the region.
C. Enable maintenance notifications for users, and reschedule maintenance activities to a specific time after notifications have been sent.
D. Configure your Cloud SQL instance with high availability enabled.

Question 13

You need to migrate existing databases from Microsoft SQL Server 2016 Standard Edition on a single Windows Server 2019 Datacenter Edition to a single Cloud SQL for SQL Server instance. During the discovery phase of your project, you notice that your on-premises server peaks at around 25,000 read IOPS. You need to ensure that your Cloud SQL instance is sized appropriately to maximize read performance. What should you do?

Options:

A. Create a SQL Server 2019 Standard on Standard machine type with 4 vCPUs, 15 GB of RAM, and 800 GB of solid-state drive (SSD).
B. Create a SQL Server 2019 Standard on High Memory machine type with at least 16 vCPUs, 104 GB of RAM, and 200 GB of SSD.
C. Create a SQL Server 2019 Standard on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 4 TB of SSD.
D. Create a SQL Server 2019 Enterprise on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 500 GB of SSD.

Question 14

You are using Compute Engine on Google Cloud and your data center to manage a set of MySQL databases in a hybrid configuration. You need to create replicas to scale reads and to offload part of the management operation. What should you do?

Options:

A. Use external server replication.
B. Use Database Migration Service.
C. Use Cloud SQL for MySQL external replica.
D. Use the mysqldump utility and binary logs.

Question 15

Your organization has a production Cloud SQL for MySQL instance. Your instance is configured with 16 vCPUs and 104 GB of RAM and runs at between 90% and 100% CPU utilization for most of the day. You need to scale up the database and add vCPUs with minimal interruption and effort. What should you do?

Options:

A. Issue a gcloud sql instances patch command to increase the number of vCPUs.
B. Update a MySQL database flag to increase the number of vCPUs.
C. Issue a gcloud compute instances update command to increase the number of vCPUs.
D. Back up the database, create an instance with additional vCPUs, and restore the database.
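
For reference, the command referred to in option A generally looks like the following (a sketch; the instance name and target sizes are hypothetical, and changing the machine type restarts the instance briefly):

# Resize the instance to a larger custom machine type.
gcloud sql instances patch prod-mysql-01 \
    --cpu=32 \
    --memory=208GiB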

Question 16

Your team recently released a new version of a highly consumed application to accommodate additional user traffic. Shortly after the release, you received an alert from your production monitoring team that there is consistently high replication lag between your primary instance and the read replicas of your Cloud SQL for MySQL instances. You need to resolve the replication lag. What should you do?

Options:

A. Identify and optimize slow-running queries, or set parallel replication flags.
B. Stop all running queries, and re-create the replicas.
C. Edit the primary instance to upgrade to a larger disk, and increase vCPU count.
D. Edit the primary instance to add additional memory.

Question 17

Your organization operates in a highly regulated industry. Separation of concerns (SoC) and security principle of least privilege (PoLP) are critical. The operations team consists of:

Person A is a database administrator.

Person B is an analyst who generates metric reports.

Application C is responsible for automatic backups.

You need to assign roles to team members for Cloud Spanner. Which roles should you assign?

Options:

A. roles/spanner.databaseAdmin for Person A, roles/spanner.databaseReader for Person B, roles/spanner.backupWriter for Application C
B. roles/spanner.databaseAdmin for Person A, roles/spanner.databaseReader for Person B, roles/spanner.backupAdmin for Application C
C. roles/spanner.databaseAdmin for Person A, roles/spanner.databaseUser for Person B, roles/spanner.databaseReader for Application C
D. roles/spanner.databaseAdmin for Person A, roles/spanner.databaseUser for Person B, roles/spanner.backupWriter for Application C
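
For context, Cloud Spanner roles are granted at the database level with commands along these lines (a sketch; the instance, database, and member are hypothetical, and the role shown is illustrative rather than an answer):

# Grant read-only access to a single Spanner database.
gcloud spanner databases add-iam-policy-binding reports-db \
    --instance=prod-spanner \
    --member='user:person-b@example.com' \
    --role='roles/spanner.databaseReader'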

Question 18

Your organization is migrating 50 TB Oracle databases to Bare Metal Solution for Oracle. Database backups must be available for quick restore. You also need to have backups available for 5 years. You need to design a cost-effective architecture that meets a recovery time objective (RTO) of 2 hours and recovery point objective (RPO) of 15 minutes. What should you do?

Options:

A. Create the database on a Bare Metal Solution server with the database running on flash storage. Keep a local backup copy on all flash storage. Keep backups older than one day stored in Actifio OnVault storage.
B. Create the database on a Bare Metal Solution server with the database running on flash storage. Keep a local backup copy on standard storage. Keep backups older than one day stored in Actifio OnVault storage.
C. Create the database on a Bare Metal Solution server with the database running on flash storage. Keep a local backup copy on standard storage. Use the Oracle Recovery Manager (RMAN) backup utility to move backups older than one day to a Coldline Storage bucket.
D. Create the database on a Bare Metal Solution server with the database running on flash storage. Keep a local backup copy on all flash storage. Use the Oracle Recovery Manager (RMAN) backup utility to move backups older than one day to an Archive Storage bucket.

Question 19

You work for a large retail and ecommerce company that is starting to extend their business globally. Your company plans to migrate to Google Cloud. You want to use platforms that will scale easily, handle transactions with the least amount of latency, and provide a reliable customer experience. You need a storage layer for sales transactions and current inventory levels. You want to retain the same relational schema that your existing platform uses. What should you do?

Options:

A. Store your data in Firestore in a multi-region location, and place your compute resources in one of the constituent regions.
B. Deploy Cloud Spanner using a multi-region instance, and place your compute resources close to the default leader region.
C. Build an in-memory cache in Memorystore, and deploy to the specific geographic regions where your application resides.
D. Deploy a Bigtable instance with a cluster in one region and a replica cluster in another geographic region.
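
For reference, a multi-region Cloud Spanner instance is created with a command like the following (a sketch; the instance name, configuration, and capacity are hypothetical):

# nam3 is one of the North American multi-region configurations.
gcloud spanner instances create retail-spanner \
    --config=nam3 \
    --description="Global retail transactions" \
    --nodes=3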

Question 20

You are designing a database strategy for a new web application. You plan to start with a small pilot in one country and eventually expand to millions of users in a global audience. You need to ensure that the application can run 24/7 with minimal downtime for maintenance. What should you do?

Options:

A. Use Cloud Spanner in a regional configuration.
B. Use Cloud Spanner in a multi-region configuration.
C. Use Cloud SQL with cross-region replicas.
D. Use highly available Cloud SQL with multiple zones.

Question 21

Your organization has a ticketing system that needs an online marketing analytics and reporting application. You need to select a relational database that can manage hundreds of terabytes of data to support this new application. Which database should you use?

Options:

A. Cloud SQL
B. BigQuery
C. Cloud Spanner
D. Bigtable

Question 22

Your company wants to migrate an Oracle-based application to Google Cloud. The application team currently uses Oracle Recovery Manager (RMAN) to back up the database to tape for long-term retention (LTR). You need a cost-effective backup and restore solution that meets a 2-hour recovery time objective (RTO) and a 15-minute recovery point objective (RPO). What should you do?

Options:

A. Migrate the Oracle databases to Bare Metal Solution for Oracle, and store backups on tapes on-premises.
B. Migrate the Oracle databases to Bare Metal Solution for Oracle, and use Actifio to store backup files on Cloud Storage using the Nearline Storage class.
C. Migrate the Oracle databases to Bare Metal Solution for Oracle, and back up the Oracle databases to Cloud Storage using the Standard Storage class.
D. Migrate the Oracle databases to Compute Engine, and store backups on tapes on-premises.

Question 23

Your company wants to move to Google Cloud. Your current data center is closing in six months. You are running a large, highly transactional Oracle application footprint on VMware. You need to design a solution with minimal disruption to the current architecture and provide ease of migration to Google Cloud. What should you do?

Options:

A. Migrate applications and Oracle databases to Google Cloud VMware Engine (VMware Engine).
B. Migrate applications and Oracle databases to Compute Engine.
C. Migrate applications to Cloud SQL.
D. Migrate applications and Oracle databases to Google Kubernetes Engine (GKE).

Question 24

Your organization deployed a new version of a critical application that uses Cloud SQL for MySQL with high availability (HA) and binary logging enabled to store transactional information. The latest release of the application had an error that caused massive data corruption in your Cloud SQL for MySQL database. You need to minimize data loss. What should you do?

Options:

A. Open the Google Cloud Console, navigate to SQL > Backups, and select the last version of the automated backup before the corruption.
B. Reload the Cloud SQL for MySQL database using the LOAD DATA command to load data from CSV files that were used to initialize the instance.
C. Perform a point-in-time recovery of your Cloud SQL for MySQL database, selecting a date and time before the data was corrupted.
D. Fail over to the Cloud SQL for MySQL HA instance. Use that instance to recover the transactions that occurred before the corruption.
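
For context, a Cloud SQL point-in-time recovery clones the instance to a timestamp before the corruption, along these lines (a sketch; the instance names and timestamp are hypothetical):

# Create a new instance reflecting the database state at the given time.
gcloud sql instances clone prod-mysql-01 prod-mysql-01-recovered \
    --point-in-time '2024-11-20T09:30:00.000Z'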

Question 25

You are managing a Cloud SQL for PostgreSQL instance in Google Cloud. You need to test the high availability of your Cloud SQL instance by performing a failover. You want to use the gcloud command.

What should you do?

Options:

A. Use gcloud sql instances failover .
B. Use gcloud sql instances failover .
C. Use gcloud sql instances promote-replica .
D. Use gcloud sql instances promote-replica .
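
For reference, a manual failover of an HA-enabled instance is triggered against the primary instance, along these lines (a sketch; the instance name is hypothetical):

# Force the primary to fail over to its standby in another zone.
gcloud sql instances failover prod-postgres-primary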

Question 26

You plan to use Database Migration Service to migrate data from a PostgreSQL on-premises instance to Cloud SQL. You need to identify the prerequisites for creating and automating the task. What should you do? (Choose two.)

Options:

A. Drop or disable all users except database administration users.
B. Disable all foreign key constraints on the source PostgreSQL database.
C. Ensure that all PostgreSQL tables have a primary key.
D. Shut down the database before the Database Migration Service task is started.
E. Ensure that pglogical is installed on the source PostgreSQL database.
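
For context, the pglogical prerequisite typically means loading the extension on the source server and creating it in each database to be migrated; a rough sketch (the host, user, and database names are hypothetical):

# In postgresql.conf on the source (requires a restart):
#   shared_preload_libraries = 'pglogical'
#   wal_level = logical
# Then create the extension in every database that will be migrated.
psql -h source-db.example.com -U migration_user -d appdb \
    -c "CREATE EXTENSION IF NOT EXISTS pglogical;"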

Question 27

Your online delivery business that primarily serves retail customers uses Cloud SQL for MySQL for its inventory and scheduling application. The required recovery time objective (RTO) and recovery point objective (RPO) must be in minutes rather than hours as a part of your high availability and disaster recovery design. You need a high availability configuration that can recover without data loss during a zonal or a regional failure. What should you do?

Options:

A. Set up all read replicas in a different region using asynchronous replication.
B. Set up all read replicas in the same region as the primary instance with synchronous replication.
C. Set up read replicas in different zones of the same region as the primary instance with synchronous replication, and set up read replicas in different regions with asynchronous replication.
D. Set up read replicas in different zones of the same region as the primary instance with asynchronous replication, and set up read replicas in different regions with synchronous replication.

Question 28

Your organization needs to migrate a critical, on-premises MySQL database to Cloud SQL for MySQL. The on-premises database is on a version of MySQL that is supported by Cloud SQL and uses the InnoDB storage engine. You need to migrate the database while preserving transactions and minimizing downtime. What should you do?

Options:

A. Use Database Migration Service to connect to your on-premises database, and choose continuous replication. After the on-premises database is migrated, promote the Cloud SQL for MySQL instance, and connect applications to your Cloud SQL instance.
B. Build a Cloud Data Fusion pipeline for each table to migrate data from the on-premises MySQL database to Cloud SQL for MySQL. Schedule downtime to run each Cloud Data Fusion pipeline. Verify that the migration was successful. Re-point the applications to the Cloud SQL for MySQL instance.
C. Pause the on-premises applications. Use the mysqldump utility to dump the database content in compressed format. Run gsutil -m to move the dump file to Cloud Storage. Use the Cloud SQL for MySQL import option. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
D. Pause the on-premises applications. Use the mysqldump utility to dump the database content in CSV format. Run gsutil -m to move the dump file to Cloud Storage. Use the Cloud SQL for MySQL import option. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
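
For context, the dump-and-import path described in option C generally looks like the following (a sketch; the database, bucket, and instance names are hypothetical, and the Cloud SQL import documentation lists additional mysqldump flags depending on the source):

# Dump, compress, stage in Cloud Storage, then import into Cloud SQL.
mysqldump --databases appdb --single-transaction --set-gtid-purged=OFF | gzip > appdb.sql.gz
gsutil -m cp appdb.sql.gz gs://my-migration-bucket/
gcloud sql import sql target-mysql-01 gs://my-migration-bucket/appdb.sql.gz --database=appdb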

Question 29

Your organization works with sensitive data that requires you to manage your own encryption keys. You are working on a project that stores that data in a Cloud SQL database. You need to ensure that stored data is encrypted with your keys. What should you do?

Options:

A. Export data periodically to a Cloud Storage bucket protected by Customer-Supplied Encryption Keys.
B. Use Cloud SQL Auth proxy.
C. Connect to Cloud SQL using a connection that has SSL encryption.
D. Use customer-managed encryption keys with Cloud SQL.
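
For reference, a customer-managed encryption key is attached when the Cloud SQL instance is created, along these lines (a sketch; the project, region, key ring, and key names are hypothetical, and the key must reside in the same region as the instance):

# Create an instance whose storage is encrypted with a Cloud KMS key you manage.
gcloud sql instances create cmek-postgres-01 \
    --database-version=POSTGRES_15 \
    --region=us-central1 \
    --tier=db-custom-2-8192 \
    --disk-encryption-key=projects/my-project/locations/us-central1/keyRings/sql-keyring/cryptoKeys/sql-key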

Question 30

Your organization is running a MySQL workload in Cloud SQL. Suddenly you see a degradation in database performance. You need to identify the root cause of the performance degradation. What should you do?

Options:

A. Use Logs Explorer to analyze log data.
B. Use Cloud Monitoring to monitor CPU, memory, and storage utilization metrics.
C. Use Error Reporting to count, analyze, and aggregate the data.
D. Use Cloud Debugger to inspect the state of an application.

Question 31

You support a consumer inventory application that runs on a multi-region instance of Cloud Spanner. A customer opened a support ticket to complain about slow response times. You notice a Cloud Monitoring alert about high CPU utilization. You want to follow Google-recommended practices to address the CPU performance issue. What should you do first?

Options:

A. Increase the number of processing units.
B. Modify the database schema, and add additional indexes.
C. Shard data required by the application into multiple instances.
D. Decrease the number of processing units.
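
For context, Spanner compute capacity is adjusted in place, along these lines (a sketch; the instance name and capacity are hypothetical):

# Scale the instance; 1,000 processing units equal one node.
gcloud spanner instances update inventory-spanner \
    --processing-units=2000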

Question 32

Your company is using Cloud SQL for MySQL with an internal (private) IP address and wants to replicate some tables into BigQuery in near-real time for analytics and machine learning. You need to ensure that replication is fast and reliable and uses Google-managed services. What should you do?

Options:

A. Develop a custom data replication service to send data into BigQuery.
B. Use Cloud SQL federated queries.
C. Use Database Migration Service to replicate tables into BigQuery.
D. Use Datastream to capture changes, and use Dataflow to write those changes to BigQuery.

Question 33

You are managing multiple applications connecting to a database on Cloud SQL for PostgreSQL. You need to be able to monitor database performance to easily identify applications with long-running and resource-intensive queries. What should you do?

Options:

A. Use log messages produced by Cloud SQL.
B. Use Query Insights for Cloud SQL.
C. Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.
D. Use Cloud SQL instance monitoring in the Google Cloud Console.
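
For reference, Query Insights is switched on per instance, along these lines (a sketch; the instance name is hypothetical, and the flag names follow the gcloud insights-config group):

# Enable Query Insights and record application tags with each sampled query.
gcloud sql instances patch prod-postgres-01 \
    --insights-config-query-insights-enabled \
    --insights-config-record-application-tags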

Question 34

You are working on a new centralized inventory management system to track items available in 200 stores, which each have 500 GB of data. You are planning a gradual rollout of the system to a few stores each week. You need to design an SQL database architecture that minimizes costs and user disruption during each regional rollout and can scale up or down on nights and holidays. What should you do?

Options:

A. Use Oracle Real Application Clusters (RAC) databases on Bare Metal Solution for Oracle.
B. Use sharded Cloud SQL instances with one or more stores per database instance.
C. Use a Bigtable cluster with autoscaling.
D. Use Cloud Spanner with a custom autoscaling solution.

Question 35

Your ecommerce website captures user clickstream data to analyze customer traffic patterns in real time and support personalization features on your website. You plan to analyze this data using big data tools. You need a low-latency solution that can store 8 TB of data and can scale to millions of read and write requests per second. What should you do?

Options:

A. Write your data into Bigtable and use Dataproc and the Apache HBase libraries for analysis.
B. Deploy a Cloud SQL environment with read replicas for improved performance. Use Datastream to export data to Cloud Storage and analyze with Dataproc and the Cloud Storage connector.
C. Use Memorystore to handle your low-latency requirements and for real-time analytics.
D. Stream your data into BigQuery and use Dataproc and the BigQuery Storage API to analyze large volumes of data.

Question 36

You are designing a payments processing application on Google Cloud. The application must continue to serve requests and avoid any user disruption if a regional failure occurs. You need to use AES-256 to encrypt data in the database, and you want to control where you store the encryption key. What should you do?

Options:

A. Use Cloud Spanner with a customer-managed encryption key (CMEK).
B. Use Cloud Spanner with default encryption.
C. Use Cloud SQL with a customer-managed encryption key (CMEK).
D. Use Bigtable with default encryption.

Question 37

You need to provision several hundred Cloud SQL for MySQL instances for multiple project teams over a one-week period. You must ensure that all instances adhere to company standards such as instance naming conventions, database flags, and tags. What should you do?

Options:

A. Automate instance creation by writing a Dataflow job.
B. Automate instance creation by setting up Terraform scripts.
C. Create the instances using the Google Cloud Console UI.
D. Create clones from a template Cloud SQL instance.

Question 38

Your organization stores marketing data such as customer preferences and purchase history on Bigtable. The consumers of this database are predominantly data analysts and operations users. You receive a service ticket from the database operations department citing poor database performance between 9 AM and 10 AM every day. The application team has confirmed no latency from their logs. A new cohort of pilot users that is testing a dataset loaded from a third-party data provider is experiencing poor database performance. Other users are not affected. You need to troubleshoot the issue. What should you do?

Options:

A. Isolate the data analysts and operations user groups to use different Bigtable instances.
B. Check the Cloud Monitoring table/bytes_used metric from Bigtable.
C. Use Key Visualizer for Bigtable.
D. Add more nodes to the Bigtable cluster.

Question 39

You are starting a large CSV import into a Cloud SQL for MySQL instance that has many open connections. You checked memory and CPU usage, and sufficient resources are available. You want to follow Google-recommended practices to ensure that the import will not time out. What should you do?

Options:

A. Close idle connections or restart the instance before beginning the import operation.
B. Increase the amount of memory allocated to your instance.
C. Ensure that the service account has the Storage Admin role.
D. Increase the number of CPUs for the instance to ensure that it can handle the additional import operation.

Exam Name: Google Cloud Certified - Professional Cloud Database Engineer
Last Update: Nov 25, 2024
Questions: 132