When Batch Outgrows Itself

Concepts covered: paBatchOutgrowsItself, paIncrementalTransforms

The exercise below walks through a real-shaped scenario: a pipeline that started as nightly batch, grew, and stopped meeting its freshness target. The redesign is not a wholesale switch to streaming; it is a careful examination of which dimension is failing and the smallest change that fixes it. Most batch-to-streaming migrations in production look like this exercise, not like a rewrite.

The Starting Pipeline

An e-commerce company's nightly pipeline reads orders from a Postgres database, joins them with a product catalog, computes daily aggregates by category and region, and writes a fact_daily_orders table consumed by the executive dashboard. The pipeline runs at 2am, finishes by 5am, and the dashboard is fresh by 6am Pacific. Volume is 4 million orders per day. Compute cost is rough
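As a minimal sketch of the batch transform described above, the snippet below joins in-memory order rows to a product catalog and rolls up revenue and order counts by (category, region) for one day. All table and column names here are illustrative assumptions, not the company's actual schema; in the real pipeline the rows would come from Postgres rather than Python lists.

```python
from collections import defaultdict
from datetime import date

# Hypothetical order rows and product catalog; stand-ins for the
# Postgres orders table and the product catalog join.
orders = [
    {"order_id": 1, "product_id": "p1", "amount": 20.0,
     "order_date": date(2024, 5, 1), "region": "us-west"},
    {"order_id": 2, "product_id": "p2", "amount": 35.0,
     "order_date": date(2024, 5, 1), "region": "us-east"},
    {"order_id": 3, "product_id": "p1", "amount": 15.0,
     "order_date": date(2024, 5, 1), "region": "us-west"},
]
catalog = {"p1": "electronics", "p2": "apparel"}

def daily_aggregates(orders, catalog, day):
    """Full-recompute batch step: join orders to the catalog and
    aggregate revenue and order counts by (category, region)."""
    agg = defaultdict(lambda: {"revenue": 0.0, "order_count": 0})
    for o in orders:
        if o["order_date"] != day:
            # An incremental variant would instead read only the
            # partitions that arrived since the last run.
            continue
        key = (catalog[o["product_id"]], o["region"])
        agg[key]["revenue"] += o["amount"]
        agg[key]["order_count"] += 1
    return dict(agg)

rows = daily_aggregates(orders, catalog, date(2024, 5, 1))
# rows[("electronics", "us-west")] -> {"revenue": 35.0, "order_count": 2}
```

The full-recompute shape is what makes the nightly job grow linearly with history and volume; the incremental-transform fix keeps the same aggregation logic but narrows the input scan to new data only.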

About This Interactive Section

This section is part of the Batch vs Streaming: Intermediate lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.

How DataDriven Lessons Work

DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.