Reconciliation Passes
Concepts covered: Reconciliation
A reconciliation pass is a periodic batch job that reads the canonical source data over a closed historical window and overwrites the corresponding rows of the streaming output. The streaming pipeline serves real-time consumers with approximate-but-fast output; the reconciliation pass serves audit-grade consumers with eventually-correct output. Both write to the same target. The contract is twofold: the latest writer wins per partition, and the reconciliation runs late enough that even long-tail late events have settled before it begins.

What a Reconciliation Pass Does

The pass is straightforward in concept. The complexity is in the contract with the streaming pipeline: the streaming pipeline must be designed to be overwritten without breaking real-time consumers, and the reconciliation must be timed so that the window it recomputes is truly closed before it runs.
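The mechanics above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the dicts stand in for a real target table and canonical event store, and the function names (`streaming_write`, `reconcile`) and the two-day settlement delay are hypothetical choices for the example.

```python
from datetime import date, timedelta

def streaming_write(target, day, approx_total):
    """Fast path: approximate output served to real-time consumers."""
    target[day] = {"total": approx_total, "source": "streaming"}

def reconcile(target, events, day):
    """Audit path: recompute one closed partition from the canonical
    events and overwrite the streaming row (latest writer wins)."""
    exact = sum(e["amount"] for e in events if e["day"] == day)
    target[day] = {"total": exact, "source": "reconciliation"}

# Canonical source: one event arrived too late for the streaming pass.
events = [
    {"day": date(2024, 1, 1), "amount": 10},
    {"day": date(2024, 1, 1), "amount": 5},  # long-tail late event
]

target = {}
streaming_write(target, date(2024, 1, 1), 10)  # undercounts: late event missing

# Run reconciliation only once the window is safely closed
# (here, an assumed two-day settlement delay).
today = date(2024, 1, 3)
if today >= date(2024, 1, 1) + timedelta(days=2):
    reconcile(target, events, date(2024, 1, 1))

print(target[date(2024, 1, 1)])
```

Note that `reconcile` overwrites the whole partition rather than patching individual rows; that is what makes the latest-writer-wins contract safe to apply blindly, since a re-run of the pass produces the same final state.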
About This Interactive Section
This section is part of the Schema Evolution and Late Data: Advanced lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.