Late Data: Rerun Last 7 Days
Concepts covered: paLateDataRerun, paIdempotency
The simplest workable fix for late data in a batch pipeline is also the most common: every day, don't compute only today; recompute the last several days as well. The size of the window depends on how late events tend to arrive. Seven days is a typical default because it covers nearly all of the mobile SDK retry tail without making the daily run prohibitively expensive.
Why a Rerun Window Works
If today's run also recomputes the last seven days, then any event whose event_time falls within the last seven days and whose ingestion_time is today will be picked up in the right bucket. The dashboard for last Tuesday gets corrected today. The cost is computational: instead of processing one day's events, the pipeline processes eight. The benefit is that history corrects itself within the window.
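A minimal sketch of this pattern, assuming a simple in-memory event list with event_time and ingestion_time fields (the function name rerun_window and the daily-count metric are illustrative, not part of any specific framework). Each day's result is rebuilt from scratch and overwritten, which is what makes the rerun idempotent:

```python
from datetime import date, timedelta

def rerun_window(events, run_date, window_days=7):
    """Recompute daily counts for run_date plus the previous window_days days.

    Each day's result is fully rebuilt from events bucketed by event_time,
    so rerunning is idempotent: overwriting a day's partition with the
    recomputed value yields the same answer for the same input every time.
    """
    start = run_date - timedelta(days=window_days)
    results = {}
    for offset in range(window_days + 1):  # today plus the prior 7 days = 8 runs
        day = start + timedelta(days=offset)
        # Bucket by event_time, not ingestion_time: late-arriving events
        # land in the day they actually happened.
        results[day] = sum(1 for e in events if e["event_time"] == day)
    return results

events = [
    {"event_time": date(2024, 5, 7), "ingestion_time": date(2024, 5, 7)},
    # Late event: happened on the 7th, but only ingested on the 10th.
    {"event_time": date(2024, 5, 7), "ingestion_time": date(2024, 5, 10)},
]

counts = rerun_window(events, run_date=date(2024, 5, 10))
# The bucket for May 7 now holds both events, correcting history.
```

Note the design choice: the per-day partition is overwritten rather than appended to. An append-based rerun would double-count events already processed; full overwrite per partition is what lets the window rerun safely every day.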
About This Interactive Section
This section is part of the Schema Evolution and Late Data: Beginner lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.
How DataDriven Lessons Work
DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.