You need to explain the best reason to implement the consumer callback interface ConsumerRebalanceListener prior to a consumer group rebalance.
Which statement is correct?
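For context, a minimal sketch of the listener's classic use, assuming offsets should be committed before partition ownership moves (broker address, group, and topic are illustrative):

import java.util.Collection;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RebalanceAwareConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        consumer.subscribe(List.of("orders"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Invoked before partitions are reassigned: commit progress so
                // the next owner of these partitions does not reprocess records.
                consumer.commitSync();
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Invoked after reassignment; e.g. seek to externally stored offsets.
            }
        });
        // ... poll loop would follow ...
    }
}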
You need to configure a sink connector to write records that fail into a dead letter queue topic. Requirements:
Topic name: DLQ-Topic
Headers containing error context must be added to the messages
Which three configuration parameters are necessary?
(Select three.)
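For reference, a sketch of Kafka Connect's built-in dead letter queue properties relevant to these requirements (the rest of the sink connector's configuration is omitted):

errors.tolerance=all
errors.deadletterqueue.topic.name=DLQ-Topic
errors.deadletterqueue.context.headers.enable=true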
Which two statements about Kafka Connect Single Message Transforms (SMTs) are correct?
(Select two.)
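For context, a sketch of how an SMT is attached to a connector, using the bundled InsertField transform (the alias addSource and the field values are illustrative):

transforms=addSource
transforms.addSource.type=org.apache.kafka.connect.transforms.InsertField$Value
transforms.addSource.static.field=data_source
transforms.addSource.static.value=orders-db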
An application is consuming messages from Kafka.
The application logs show that partitions are frequently being reassigned within the consumer group.
Which two factors may be contributing to this?
(Select two.)
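For context, the consumer properties most often implicated in frequent reassignment; the values shown are the client defaults in recent Kafka releases, listed for illustration:

# exceeded when per-poll processing takes too long, forcing the member out of the group
max.poll.interval.ms=300000
# lowering this shortens each processing cycle between polls
max.poll.records=500
# expires when heartbeats are missed, triggering a rebalance
session.timeout.ms=45000
# typically kept at about one third of session.timeout.ms
heartbeat.interval.ms=3000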
A stream processing application is consuming from a topic with five partitions. You run three instances of the application. Each instance has num.stream.threads=5.
You need to identify how many stream tasks will be created and how many of them will actively consume messages from the input topic.
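As a worked sketch, assuming the standard Kafka Streams rule that one task is created per input-topic partition:

tasks created     = 5 (one per partition of the input topic)
threads available = 3 instances x 5 threads each = 15
tasks consuming   = 5, each bound to one thread; the other 10 threads stay idle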
The producer code below features a Callback class with a method called onCompletion().
In the onCompletion() method, when the request is completed successfully, what does the value metadata.offset() represent?
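A minimal sketch of a send with such a callback (broker address, topic, and payload are illustrative); on success, metadata.offset() is the offset the broker assigned to the record within the partition it was written to:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CallbackDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "id-1", "payload"),
                    (metadata, exception) -> {
                        if (exception == null) {
                            // offset() = the offset assigned to this record
                            // within the partition it was written to
                            System.out.printf("%s-%d @ offset %d%n",
                                    metadata.topic(), metadata.partition(),
                                    metadata.offset());
                        } else {
                            exception.printStackTrace();
                        }
                    });
        }
    }
}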
You are writing a producer application and need to ensure proper delivery. You configure the producer with acks=all.
Which two actions should you take to ensure proper error handling?
(Select two.)
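For context, a sketch of the two usual ways a producer surfaces send failures, either by blocking on the returned Future or by inspecting the exception passed to a Callback (class and method names are illustrative):

import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ErrorHandlingSketch {
    static void sendChecked(Producer<String, String> producer,
                            ProducerRecord<String, String> record)
            throws ExecutionException, InterruptedException {
        // Option 1: block on the Future; a failed send is rethrown here.
        RecordMetadata md = producer.send(record).get();

        // Option 2: register a Callback and inspect the exception argument.
        producer.send(record, (metadata, e) -> {
            if (e != null) {
                e.printStackTrace();  // log, retry, or divert to a failure path
            }
        });
    }
}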
You are writing to a topic with acks=all.
The producer receives acknowledgments, but you notice duplicate messages.
You find that timeouts caused by network delay are triggering resends.
Which configuration should you use to prevent duplicates?
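For reference, the producer settings aimed at exactly this failure mode; with idempotence enabled, the broker deduplicates retried batches using the producer ID and per-partition sequence numbers:

# the broker deduplicates resends using the producer ID and sequence numbers
enable.idempotence=true
# required by idempotence, and already in use here
acks=all
# must not exceed 5 while idempotence is enabled
max.in.flight.requests.per.connection=5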
Your consumer application is configured with a deserializer for the topic it reads from.
It needs to be resilient to badly formatted records ("poison pills"), so you surround the poll() call with a try/catch for RecordDeserializationException.
You need to log the bad record, skip it, and continue processing.
Which action should you take in the catch block?
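A minimal sketch of that pattern, assuming an already-subscribed consumer; RecordDeserializationException exposes the failing partition and offset, so the consumer can seek past the bad record:

import java.time.Duration;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.RecordDeserializationException;

public class PoisonPillSkipper {
    // consumer is assumed to be subscribed already; String types are illustrative
    static void pollLoop(Consumer<String, String> consumer) {
        while (true) {
            try {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.println(r.value()));  // normal processing
            } catch (RecordDeserializationException e) {
                TopicPartition tp = e.topicPartition();
                System.err.printf("Skipping bad record at %s offset %d%n",
                        tp, e.offset());
                consumer.seek(tp, e.offset() + 1);  // step past the poison pill
            }
        }
    }
}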
You are sending messages to a Kafka cluster in JSON format and want to add more information related to each message:
Format of the message payload
Message creation time
A globally unique identifier that allows the message to be traced through the system
Where should this additional information be set?
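For context, a sketch using record headers, which carry per-message metadata without altering the JSON payload (topic, key, and header names are illustrative):

import java.nio.charset.StandardCharsets;
import java.util.UUID;
import org.apache.kafka.clients.producer.ProducerRecord;

public class HeaderExample {
    static ProducerRecord<String, String> withMetadata() {
        ProducerRecord<String, String> record =
                new ProducerRecord<>("orders", "key-1", "{\"amount\": 42}");
        record.headers()
              .add("content-type",
                      "application/json".getBytes(StandardCharsets.UTF_8))
              .add("created-at",
                      Long.toString(System.currentTimeMillis())
                              .getBytes(StandardCharsets.UTF_8))
              .add("trace-id",
                      UUID.randomUUID().toString()
                              .getBytes(StandardCharsets.UTF_8));
        return record;
    }
}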
Your configuration parameters for a Source connector and Connect worker are:
offset.flush.interval.ms=60000
offset.flush.timeout.ms=500
offset.storage.topic=connect-offsets
offset.storage.replication.factor=-1
Which four statements match the expected behavior?
(Select four.)
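For reference, the same properties annotated with the behavior Kafka Connect documents for each (distributed-mode worker assumed):

# source offsets are committed on a 60-second interval
offset.flush.interval.ms=60000
# each flush must finish within 500 ms, or it is cancelled and retried at the next interval
offset.flush.timeout.ms=500
# offsets are persisted to this internal, compacted topic
offset.storage.topic=connect-offsets
# -1 means use the broker's default replication factor for that topic
offset.storage.replication.factor=-1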