
CCDAK Confluent Certified Developer for Apache Kafka Certification Examination Questions and Answers

Question 4

You need to explain the best reason to implement the ConsumerRebalanceListener consumer callback interface before a consumer group rebalance takes place.

Which statement is correct?

Options:

A.

Partitions assigned to a consumer may change.

B.

Previous log files are deleted.

C.

Offsets are compacted.

D.

Partition leaders may change.
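
For reference, a minimal sketch (topic, group ID, and bootstrap address are hypothetical) of registering a ConsumerRebalanceListener so a consumer can react when its partition assignment changes during a group rebalance:

    import java.time.Duration;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RebalanceAwareConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption
            props.put("group.id", "example-group");           // hypothetical group
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("example-topic"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // Called before partitions are taken away, e.g. to commit offsets
                    // or flush in-flight work for the partitions being revoked.
                    System.out.println("Revoked: " + partitions);
                }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // Called after the new assignment has been received.
                    System.out.println("Assigned: " + partitions);
                }
            });

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            records.forEach(r -> System.out.println(r.value()));
            consumer.close();
        }
    }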

Question 5

You need to configure a sink connector to write records that fail into a dead letter queue topic. Requirements:

    Topic name: DLQ-Topic

    Headers containing error context must be added to the messages

Which three configuration parameters are necessary?

(Select three.)

Options:

A.

errors.tolerance=all

B.

errors.deadletterqueue.topic.name=DLQ-Topic

C.

errors.deadletterqueue.context.headers.enable=true

D.

errors.tolerance=none

E.

errors.log.enable=true

F.

errors.log.include.messages=true
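
For illustration only, a hedged sketch of how the dead letter queue properties above might be assembled in Java before being submitted as a connector configuration (the connector itself and the surrounding Connect setup are omitted):

    import java.util.HashMap;
    import java.util.Map;

    public class DlqConfigSketch {
        public static void main(String[] args) {
            // Sink connector error-handling fragment: tolerate record failures,
            // route failed records to a dead letter queue topic, and include
            // error context in the record headers.
            Map<String, String> config = new HashMap<>();
            config.put("errors.tolerance", "all");
            config.put("errors.deadletterqueue.topic.name", "DLQ-Topic");
            config.put("errors.deadletterqueue.context.headers.enable", "true");
            // Optionally also log failures and the failed message content.
            config.put("errors.log.enable", "true");
            config.put("errors.log.include.messages", "true");
            config.forEach((k, v) -> System.out.println(k + "=" + v));
        }
    }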

Question 6

Which statement describes the storage location for a sink connector’s offsets?

Options:

A.

The __consumer_offsets topic, like any other consumer

B.

The topic specified in the offsets.storage.topic configuration parameter

C.

In a file specified by the offset.storage.file.filename configuration parameter

D.

In memory, which is then periodically flushed to a RocksDB instance

Question 7

Which two statements about Kafka Connect Single Message Transforms (SMTs) are correct?

(Select two.)

Options:

A.

Multiple SMTs can be chained together and act on source or sink messages.

B.

SMTs are often used to join multiple records from a source data system into a single Kafka record.

C.

Masking data is a good example of an SMT.

D.

SMT functionality is included within Kafka Connect converters.
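
For illustration, a hedged sketch of a connector configuration fragment chaining two SMTs, using the built-in MaskField and InsertField transforms; the transform aliases and field names are hypothetical:

    import java.util.HashMap;
    import java.util.Map;

    public class SmtChainSketch {
        public static void main(String[] args) {
            // Two SMTs applied in the order listed in "transforms".
            Map<String, String> config = new HashMap<>();
            config.put("transforms", "mask,addTimestamp");
            // Mask a hypothetical "ssn" field in the record value.
            config.put("transforms.mask.type", "org.apache.kafka.connect.transforms.MaskField$Value");
            config.put("transforms.mask.fields", "ssn");
            // Insert the record timestamp into a hypothetical "ts" field.
            config.put("transforms.addTimestamp.type", "org.apache.kafka.connect.transforms.InsertField$Value");
            config.put("transforms.addTimestamp.timestamp.field", "ts");
            config.forEach((k, v) -> System.out.println(k + "=" + v));
        }
    }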

Question 8

An application is consuming messages from Kafka.

The application logs show that partitions are frequently being reassigned within the consumer group.

Which two factors may be contributing to this?

(Select two.)

Options:

A.

The consumer application is slow at processing messages.

B.

The number of partitions does not match the number of application instances.

C.

There is a storage issue on the broker.

D.

An instance of the application is crashing and being restarted.
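
For context, a hedged sketch of the consumer settings commonly reviewed when diagnosing frequent rebalances; the values shown are illustrative, not recommendations:

    import java.util.Properties;

    public class ConsumerTimeoutSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // If processing a batch takes longer than max.poll.interval.ms,
            // the consumer leaves the group and a rebalance is triggered.
            props.put("max.poll.interval.ms", "300000"); // illustrative value
            props.put("max.poll.records", "500");        // smaller batches per poll
            // If heartbeats stop for session.timeout.ms (e.g. an instance
            // crashes and restarts), the member is evicted and a rebalance occurs.
            props.put("session.timeout.ms", "45000");    // illustrative value
            props.put("heartbeat.interval.ms", "3000");
            props.forEach((k, v) -> System.out.println(k + "=" + v));
        }
    }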

Question 9

A stream processing application is consuming from a topic with five partitions. You run three instances of the application. Each instance has num.stream.threads=5.

You need to identify the number of stream tasks that will be created and how many will actively consume messages from the input topic.

Options:

A.

5 created, 1 actively consuming

B.

5 created, 5 actively consuming

C.

15 created, 5 actively consuming

D.

15 created, 15 actively consuming
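
For context, a hedged sketch (application ID, topic, and bootstrap address are hypothetical) of how num.stream.threads is set per application instance; the stream tasks themselves are derived from the partitions of the input topics and are then distributed across the available threads:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class StreamsThreadsSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");        // hypothetical
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumption
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            // Processing threads per application instance; tasks are assigned to these threads.
            props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 5);

            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("input-topic").foreach((k, v) -> System.out.println(v)); // hypothetical topic

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
        }
    }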

Question 10

The producer code below features a Callback class with a method called onCompletion().

In the onCompletion() method, when the request is completed successfully, what does the value metadata.offset() represent?
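
The original code listing is not reproduced here; the following is a minimal, hypothetical sketch of a producer send that uses such a Callback (topic, key, and value are illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerCallbackSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                ProducerRecord<String, String> record =
                    new ProducerRecord<>("example-topic", "key", "value"); // hypothetical
                producer.send(record, new Callback() {
                    @Override
                    public void onCompletion(RecordMetadata metadata, Exception exception) {
                        if (exception == null) {
                            // metadata describes where the record was written.
                            System.out.println("partition=" + metadata.partition()
                                    + " offset=" + metadata.offset());
                        } else {
                            exception.printStackTrace();
                        }
                    }
                });
            }
        }
    }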

Options:

A.

The sequential ID of the message committed into a partition

B.

Its position in the producer’s batch of messages

C.

The number of bytes that overflowed beyond a producer batch of messages

D.

The ID of the partition to which the message was committed

Question 11

You are writing a producer application and need to ensure proper delivery. You configure the producer with acks=all.

Which two actions should you take to ensure proper error handling?

(Select two.)

Options:

A.

Use a callback argument in producer.send() where you check delivery status.

B.

Check that producer.send() returned a RecordMetadata object that is not null.

C.

Surround the call of producer.send() with a try/catch block to catch KafkaException.

D.

Check the value of ProducerRecord.status().
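
A hedged sketch (topic and value are hypothetical) combining a try/catch around producer.send() with a callback that inspects the delivery result:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.KafkaException;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerErrorHandlingSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumption
            props.put("acks", "all");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            try {
                producer.send(new ProducerRecord<>("example-topic", "value"), // hypothetical
                    (metadata, exception) -> {
                        if (exception != null) {
                            // Asynchronous delivery failure reported through the callback.
                            exception.printStackTrace();
                        }
                    });
            } catch (KafkaException e) {
                // Synchronous failures thrown directly by send(), e.g. serialization errors.
                e.printStackTrace();
            } finally {
                producer.close();
            }
        }
    }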

Question 12

You are writing to a topic with acks=all.

The producer receives acknowledgments but you notice duplicate messages.

You find that timeouts due to network delay are causing resends.

Which configuration should you use to prevent duplicates?

Options:

A.

enable.auto.commit=true

B.

retries=2147483647

max.in.flight.requests.per.connection=5

enable.idempotence=true

C.

retries=0

max.in.flight.requests.per.connection=5

enable.idempotence=true

D.

retries=2147483647

max.in.flight.requests.per.connection=1

enable.idempotence=false
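
For reference, a hedged sketch of an idempotent producer configuration fragment; the keys are standard producer properties and the values mirror the style of the options above, shown for illustration only:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class IdempotentProducerConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // With idempotence enabled, retries stay high but the broker
            // de-duplicates resends using the producer ID and sequence numbers.
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
            props.put(ProducerConfig.ACKS_CONFIG, "all");
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
            props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");
            props.forEach((k, v) -> System.out.println(k + "=" + v));
        }
    }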

Question 13

Your application is consuming from a topic, using a consumer configured with a deserializer.

It needs to be resilient to badly formatted records ("poison pills"). You surround the poll() call with a try/catch for RecordDeserializationException.

You need to log the bad record, skip it, and continue processing.

Which action should you take in the catch block?

Options:

A.

Log the bad record, no other action needed.

B.

Log the bad record and seek the consumer to the offset of the next record.

C.

Log the bad record and call the consumer.skip() method.

D.

Throw a runtime exception to trigger a restart of the application.
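
A hedged sketch of the poison-pill handling pattern (consumer configuration and subscription are assumed to exist): log the failing position reported by the exception, then seek past it so the next poll() resumes at the following record:

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.errors.RecordDeserializationException;

    public class PoisonPillSketch {
        // 'consumer' is assumed to be an already-configured, subscribed KafkaConsumer.
        static void pollLoop(KafkaConsumer<String, String> consumer) {
            while (true) {
                try {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    records.forEach(r -> System.out.println(r.value()));
                } catch (RecordDeserializationException e) {
                    TopicPartition tp = e.topicPartition();
                    // Log the bad record's position, then skip it by seeking to the next offset.
                    System.err.println("Skipping bad record at " + tp + " offset " + e.offset());
                    consumer.seek(tp, e.offset() + 1);
                }
            }
        }
    }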

Question 14

You are sending JSON-formatted messages to a Kafka cluster and want to add more information related to each message:

    Format of the message payload

    Message creation time

    A globally unique identifier that allows the message to be traced through the system

Where should this additional information be set?

Options:

A.

Header

B.

Key

C.

Value

D.

Broker
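
A hedged sketch (topic, payload, and header names are hypothetical) of attaching such metadata as record headers, leaving the key and value untouched:

    import java.nio.charset.StandardCharsets;
    import java.util.UUID;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class HeaderSketch {
        public static void main(String[] args) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("example-topic", "key", "{\"amount\": 42}"); // hypothetical
            // Metadata about the message travels in headers alongside the payload.
            record.headers()
                  .add("content-type", "application/json".getBytes(StandardCharsets.UTF_8))
                  .add("created-at", Long.toString(System.currentTimeMillis()).getBytes(StandardCharsets.UTF_8))
                  .add("trace-id", UUID.randomUUID().toString().getBytes(StandardCharsets.UTF_8));
            System.out.println(record);
        }
    }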

Question 15

Match the testing tool with the type of test it is typically used to perform.

Options:

Question 16

Your configuration parameters for a Source connector and Connect worker are:

    offset.flush.interval.ms=60000

    offset.flush.timeout.ms=500

    offset.storage.topic=connect-offsets

    offset.storage.replication.factor=-1

Which four statements match the expected behavior?

(Select four.)

Options:

A.

The connector will wait 60000ms before trying to commit offsets for tasks.

B.

The connector will wait 500ms for offset data to be committed.

C.

The connector will commit offsets to a topic called connect-offsets.

D.

The offsets topic will use the broker default replication factor.

Question 17

Which tool can you use to modify the replication factor of an existing topic?

Options:

A.

kafka-reassign-partitions.sh

B.

kafka-recreate-topic.sh

C.

kafka-topics.sh

D.

kafka-reassign-topics.sh

Question 18

Where are source connector offsets stored?

Options:

A.

offset.storage.topic

B.

storage.offset.topic

C.

topic.offset.config

D.

offset.storage.partitions
