Replay Infrastructure
Concepts covered: Replay, Time Travel
Replay is the streaming-world equivalent of backfill: reprocessing events from a known offset or timestamp to correct downstream state. Replay is harder than batch backfill because there is no clean partition to overwrite, and easier because the source is often retained in a log that supports random access. Designing for replay requires three pieces of infrastructure: a retained source, addressable positions, and idempotent downstream consumers. Without all three, replay is a manual recovery operation. With them, it is a feature.

What Replay Is For

The Source Retention Requirement

Replay assumes the source still has the events to replay. Kafka retains messages by time (default seven days, often configured to thirty or more) or by total size. Kinesis retains messages for 24 hours by default, extendable up to 365 days.
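Addressable positions are what turn replay from a manual recovery operation into a routine one. As a minimal sketch, assuming a Kafka source and the confluent-kafka Python client, the snippet below translates a replay timestamp into per-partition offsets and repositions a consumer there; the broker address, topic, consumer group, and process() function are hypothetical placeholders.

from datetime import datetime, timezone

from confluent_kafka import Consumer, TopicPartition

TOPIC = "orders"  # hypothetical topic
REPLAY_FROM = datetime(2024, 1, 15, tzinfo=timezone.utc)


def process(msg):
    # Placeholder for an idempotent downstream write (e.g., an upsert keyed
    # on a stable event id), so re-delivered events do not double-count.
    print(msg.topic(), msg.partition(), msg.offset())


consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "orders-replay",   # separate group: don't disturb live consumers
    "enable.auto.commit": False,   # commit only after downstream writes land
})

# Discover every partition of the topic.
metadata = consumer.list_topics(TOPIC, timeout=10)
partition_ids = metadata.topics[TOPIC].partitions

# Ask the broker for the earliest offset at or after the replay timestamp.
# offsets_for_times() takes TopicPartitions whose offset field holds a
# millisecond timestamp and returns them with real offsets filled in.
ts_ms = int(REPLAY_FROM.timestamp() * 1000)
wanted = [TopicPartition(TOPIC, p, ts_ms) for p in partition_ids]
found = consumer.offsets_for_times(wanted, timeout=10)

# Partitions with no message at or after the timestamp come back with
# offset -1; skip them, reposition the consumer, and let events re-flow.
consumer.assign([tp for tp in found if tp.offset >= 0])

while True:
    msg = consumer.poll(1.0)
    if msg is None:
        break  # caught up; a production replay job would track end offsets
    if msg.error():
        continue
    process(msg)

consumer.close()

Note how the sketch leans on the other two pieces of infrastructure: the timestamp lookup only succeeds while the source retains the events, and offsets are committed only after the (idempotent) downstream write, so a crashed or repeated replay re-delivers rather than corrupts.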