Apache Kafka Practice Exam 2025 – The Complete All-in-One Guide for Exam Success!

Question: 1 / 400

When is data considered safe in Kafka?

When it is stored on a single replica

When it is available in-memory

When it is on enough replicas and written to disk

When the system is under low load

Data is considered safe in Kafka when it is stored on enough replicas and written to disk. This is primarily due to Kafka's design principles that prioritize data durability and high availability.

When data is written to Kafka, it is not only stored on the broker's local disk but also replicated across multiple brokers according to the defined replication factor. This means that even if one broker fails, the data persists on another broker. Writing to disk ensures that the data is not lost in case of a power failure or crash, as in-memory data could be lost if the server goes down unexpectedly.
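As a concrete illustration of the replication factor, here is a minimal sketch that creates a topic with three replicas using Kafka's Java AdminClient. The broker address and the topic name ("orders") are assumptions for the example, not part of the question.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed broker address for illustration
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 3: every partition's data
            // is stored on three different brokers
            NewTopic topic = new NewTopic("orders", 3, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```

With this layout, losing a single broker still leaves two full copies of each partition on disk elsewhere in the cluster.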

Having replicas increases the fault tolerance of the system. For example, if a message is produced with a replication factor of three, three copies of that message will exist across different brokers. Depending on the producer's acknowledgement configuration (acks) and the topic's min.insync.replicas setting, the message is only considered committed once the required number of in-sync replicas have written it. This guarantees that even in the case of node failures, the data remains accessible and consistent.
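To show how the acknowledgement configuration is applied in practice, the sketch below configures a producer with acks=all, again assuming the illustrative "orders" topic and a local broker. Combined with a topic-level min.insync.replicas of 2, a send is only reported successful once at least two in-sync replicas have the record.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class DurableProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed broker address for illustration
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Wait until all in-sync replicas have the record before
        // treating the send as successful
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Retry transient failures instead of silently dropping the record
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "created"),
                    (metadata, exception) -> {
                        // The callback fires only after the broker's acknowledgement
                        if (exception != null) {
                            exception.printStackTrace();
                        }
                    });
            producer.flush();
        }
    }
}
```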

Other options do not provide the same level of safety:

- Storing data on a single replica does not protect against data loss if that broker fails.

- Depending solely on in-memory data does not provide any durability, as it can be lost in a crash.

- Low system load has no bearing on durability; safety depends on replication and acknowledgement settings, not on traffic volume.


