
Databricks Certified Generative AI Engineer Associate (Databricks-Generative-AI-Engineer-Associate) Questions and Answers

Question 4

A small and cost-conscious startup in the cancer research field wants to build a RAG application using Foundation Model APIs.

Which strategy would allow the startup to build a good-quality RAG application while being cost-conscious and able to cater to customer needs?

Options:

A.

Limit the number of relevant documents available for the RAG application to retrieve from

B.

Pick a smaller LLM that is domain-specific

C.

Limit the number of queries a customer can send per day

D.

Use the largest LLM possible because that gives the best performance for any general queries
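
For background on the cost trade-off behind options B and D, here is a minimal sketch of querying a pay-per-token Foundation Model API endpoint from Python; the endpoint name, prompt, and token cap are placeholders, and choosing a smaller, domain-appropriate model plus capping output length are two simple levers for per-query cost.

```python
import mlflow.deployments

# Pay-per-token Foundation Model APIs avoid provisioning dedicated GPU capacity.
# The endpoint name below is a placeholder for whichever smaller,
# domain-appropriate chat model the startup selects.
client = mlflow.deployments.get_deploy_client("databricks")

response = client.predict(
    endpoint="databricks-meta-llama-3-1-8b-instruct",  # illustrative smaller model
    inputs={
        "messages": [
            {"role": "user",
             "content": "Summarize the retrieved oncology abstracts for a researcher."}
        ],
        "max_tokens": 256,  # capping output length also caps per-query cost
    },
)
print(response["choices"][0]["message"]["content"])
```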

Question 5

A Generative AI Engineer just deployed an LLM application at a digital marketing company that assists with answering customer service inquiries.

Which metric should they monitor for their customer service LLM application in production?

Options:

A.

Number of customer inquiries processed per unit of time

B.

Energy usage per query

C.

Final perplexity scores for the training of the model

D.

HuggingFace Leaderboard values for the base LLM
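
As an illustration of the throughput metric named in option A, the sketch below counts inquiries handled per minute from request logs. The log schema here is invented; in production the rows would come from the serving endpoint's inference or request logs.

```python
import pandas as pd

# Toy request log: one row per customer inquiry handled by the deployed endpoint.
logs = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-11-21 09:00:05", "2024-11-21 09:00:40",
        "2024-11-21 09:01:10", "2024-11-21 09:02:30",
    ]),
    "request_id": ["a1", "a2", "a3", "a4"],
})

# Inquiries processed per minute: a simple operational throughput metric.
per_minute = logs.set_index("timestamp").resample("1min")["request_id"].count()
print(per_minute)
```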

Question 6

A Generative AI Engineer is tasked with deploying an application that takes advantage of a custom MLflow Pyfunc model to return some interim results.

How should they configure the endpoint to pass the secrets and credentials?

Options:

A.

Use spark.conf.set()

B.

Pass variables using the Databricks Feature Store API

C.

Add credentials using environment variables

D.

Pass the secrets in plain text
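
To illustrate the environment-variable approach in option C: the pyfunc model reads the credential from os.environ, and the Model Serving endpoint maps that variable to a Databricks secret with the {{secrets/<scope>/<key>}} syntax so the value never appears in code or config in plain text. The variable, scope, key, and model names below are placeholders.

```python
import os
import mlflow.pyfunc

class InterimResultsModel(mlflow.pyfunc.PythonModel):
    """Custom pyfunc model that needs a credential at serving time."""

    def load_context(self, context):
        # Read the credential from an environment variable; never hard-code it.
        self.api_token = os.environ["API_TOKEN"]  # variable name is illustrative

    def predict(self, context, model_input):
        # ...call a downstream service with self.api_token and build interim results...
        return model_input

# Served-entity configuration for the serving endpoint: the environment variable
# is populated from a Databricks secret rather than passed in plain text.
served_entity_config = {
    "entity_name": "main.default.interim_results_model",   # placeholder model
    "entity_version": "1",
    "workload_size": "Small",
    "environment_vars": {
        "API_TOKEN": "{{secrets/my_scope/my_api_token}}",  # placeholder scope/key
    },
}
```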

Question 7

A Generative AI Engineer has created a RAG application to look up answers to questions about a series of fantasy novels that are being asked on the author’s web forum. The fantasy novel texts are chunked and embedded into a vector store with metadata (page number, chapter number, book title), retrieved with the user’s query, and provided to an LLM for response generation. The Generative AI Engineer used their intuition to pick the chunking strategy and associated configurations but now wants to choose the best values more methodically.

Which TWO strategies should the Generative AI Engineer take to optimize their chunking strategy and parameters? (Choose two.)

Options:

A.

Change embedding models and compare performance.

B.

Add a classifier for user queries that predicts which book will best contain the answer. Use this to filter retrieval.

C.

Choose an appropriate evaluation metric (such as recall or NDCG) and experiment with changes in the chunking strategy, such as splitting chunks by paragraphs or chapters. Choose the strategy that gives the best performance metric.

D.

Pass known questions and best answers to an LLM and instruct the LLM to provide the best token count. Use a summary statistic (mean, median, etc.) of the best token counts to choose chunk size.

E.

Create an LLM-as-a-judge metric to evaluate how well previous questions are answered by the most appropriate chunk. Optimize the chunking parameters based upon the values of the metric.
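
As a concrete version of the metric-driven experimentation described in option C, the sketch below compares chunking configurations by mean recall@k over a labeled question set. The configuration dictionaries and the build_index/retrieve helpers are stand-ins for the team's own indexing and retrieval code (for example, a Databricks Vector Search index).

```python
def recall_at_k(retrieved_chunks, relevant_passage, k=5):
    """1.0 if the known answer passage appears in the top-k retrieved chunks, else 0.0."""
    return float(any(relevant_passage in chunk for chunk in retrieved_chunks[:k]))

def evaluate_chunking(configs, labeled_questions, build_index, retrieve):
    """configs: e.g. [{"split_by": "paragraph"}, {"split_by": "chapter"}].
    build_index(config) rebuilds the vector store for that chunking strategy;
    retrieve(index, question) returns a ranked list of chunk texts.
    Both helpers are assumed to exist in the team's codebase."""
    results = {}
    for config in configs:
        index = build_index(config)
        scores = [
            recall_at_k(retrieve(index, q["question"]), q["answer_passage"])
            for q in labeled_questions
        ]
        results[str(config)] = sum(scores) / len(scores)
    # Pick the chunking configuration with the best mean recall.
    return max(results, key=results.get), results
```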

Question 8

A Generative AI Engineer is designing a RAG application for answering user questions on technical regulations as they learn a new sport.

What are the steps needed to build this RAG application and deploy it?

Options:

A.

Ingest documents from a source –> Index the documents and save to Vector Search –> User submits queries against an LLM –> LLM retrieves relevant documents –> Evaluate model –> LLM generates a response –> Deploy it using Model Serving

B.

Ingest documents from a source –> Index the documents and save to Vector Search –> User submits queries against an LLM –> LLM retrieves relevant documents –> LLM generates a response –> Evaluate model –> Deploy it using Model Serving

C.

Ingest documents from a source –> Index the documents and save to Vector Search –> Evaluate model –> Deploy it using Model Serving

D.

User submits queries against an LLM –> Ingest documents from a source –> Index the documents and save to Vector Search –> LLM retrieves relevant documents –> LLM generates a response –> Evaluate model –> Deploy it using Model Serving
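
For reference, here is a minimal sketch of the query-time portion of such a pipeline (retrieve from Vector Search, then generate with a served LLM), assuming the regulation documents have already been ingested and indexed. The endpoint, index, and column names are placeholders.

```python
from databricks.vector_search.client import VectorSearchClient
import mlflow.deployments

vsc = VectorSearchClient()
index = vsc.get_index(
    endpoint_name="my_vs_endpoint",                    # placeholder endpoint
    index_name="catalog.schema.regulations_index",     # placeholder index
)
llm = mlflow.deployments.get_deploy_client("databricks")

def answer(question: str) -> str:
    # Retrieve the most relevant regulation chunks for the user's question.
    hits = index.similarity_search(
        query_text=question,
        columns=["text"],
        num_results=3,
    )
    context = "\n\n".join(row[0] for row in hits["result"]["data_array"])
    # Generate a grounded response with a chat model served on Model Serving.
    response = llm.predict(
        endpoint="databricks-meta-llama-3-1-8b-instruct",  # placeholder model endpoint
        inputs={
            "messages": [
                {"role": "system", "content": "Answer using only the provided context."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ]
        },
    )
    return response["choices"][0]["message"]["content"]
```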

Question 9

A Generative AI Engineer is tasked with improving the RAG quality by addressing its inflammatory outputs.

Which action would be most effective in mitigating the problem of offensive text outputs?

Options:

A.

Increase the frequency of upstream data updates

B.

Inform the user of the expected RAG behavior

C.

Restrict access to the data sources to a limited number of users

D.

Curate the upstream data properly, including a manual review, before it is fed into the RAG system
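
As an illustration of the curation step described in option D, here is a minimal pre-indexing filter that routes potentially inflammatory documents to human reviewers. The blocklist is a placeholder; a real pipeline might use a moderation or toxicity-classification model instead of simple keyword matching.

```python
# Flag upstream documents for manual review before they are indexed for RAG.
FLAGGED_TERMS = {"offensive_term_1", "offensive_term_2"}  # placeholder blocklist

def needs_manual_review(document_text: str) -> bool:
    text = document_text.lower()
    return any(term in text for term in FLAGGED_TERMS)

def curate(documents: list[str]) -> tuple[list[str], list[str]]:
    """Split documents into those safe to index and those held for human review."""
    approved, held_for_review = [], []
    for doc in documents:
        (held_for_review if needs_manual_review(doc) else approved).append(doc)
    return approved, held_for_review
```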

Question 10

A team wants to serve a code generation model as an assistant for their software developers. It should support multiple programming languages. Quality is the primary objective.

Which of the Databricks Foundation Model APIs, or models available in the Marketplace, would be the best fit?

Options:

A.

Llama2-70b

B.

BGE-large

C.

MPT-7b

D.

CodeLlama-34B

Question 11

A Generative AI Engineer is designing a chatbot for a gaming company that aims to engage users on its platform while its users play online video games.

Which metric would help them increase user engagement and retention for their platform?

Options:

A.

Randomness

B.

Diversity of responses

C.

Lack of relevance

D.

Repetition of responses
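
One common, simple way to quantify the diversity named in option B is a distinct-n score: the ratio of unique n-grams to total n-grams over a sample of chatbot responses. A minimal sketch, with invented example responses:

```python
def distinct_n(responses: list[str], n: int = 2) -> float:
    """Ratio of unique n-grams to total n-grams across a sample of responses."""
    ngrams = []
    for response in responses:
        tokens = response.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

# A higher distinct-2 score suggests less repetitive, more engaging replies.
print(distinct_n(["Nice shot, try the sniper next!", "Nice shot, try flanking left!"]))
```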

Question 12

A Generative AI Engineer is tasked with developing a RAG application that will help a small internal group of experts at their company answer specific questions, augmented by an internal knowledge base. They want the best possible quality in the answers, and neither latency nor throughput is a huge concern given that the user group is small and they’re willing to wait for the best answer. The topics are sensitive in nature and the data is highly confidential, so, due to regulatory requirements, none of the information is allowed to be transmitted to third parties.

Which model meets all the Generative AI Engineer’s needs in this situation?

Options:

A.

Dolly 1.5B

B.

OpenAI GPT-4

C.

BGE-large

D.

Llama2-70B

Question 13

When developing an LLM application, it’s crucial to ensure that the data used for training the model complies with licensing requirements to avoid legal risks.

Which action is NOT appropriate to avoid legal risks?

Options:

A.

Reach out to the data curators directly before you have started using the trained model to let them know.

B.

Use any available data you personally created that is completely original, since you can decide what license to apply.

C.

Only use data explicitly labeled with an open license and ensure the license terms are followed.

D.

Reach out to the data curators directly after you have started using the trained model to let them know.

Exam Name: Databricks Certified Generative AI Engineer Associate
Last Update: Nov 21, 2024
Questions: 45