
Micro-Batch: The Middle Ground

Concepts covered: paMicroBatchVsTrue

Most production pipelines that look like streaming are not pure streaming. They are micro-batch: very small batches, often one every few seconds or every minute, processed by an engine that exposes a streaming API on top. Spark Structured Streaming is the most widely deployed example, with a tunable trigger interval that controls how often a batch runs; Flink, by contrast, is a true record-at-a-time streaming engine, though it can also execute the same job in batch mode. The pattern exists because pure streaming is expensive to build and operate, while pure batch cannot meet sub-15-minute freshness requirements. Micro-batch sits in the middle: latency low enough to feel real-time, throughput high enough to amortize per-batch overhead, and operational complexity well below that of full streaming.

How Micro-Batch Works

The Spark Structured Streaming Pattern

The trigger argument is the key knob. Setting processingTime to one minute tells Spark to process whatever has accumulated in the source since the last trigger as one small batch, roughly once per minute.
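A minimal PySpark sketch of the pattern is below. It uses the built-in "rate" test source as a stand-in for a real Kafka or file source, and the application name, row rate, and console sink are illustrative choices, not requirements; the part that makes it micro-batch is the processingTime trigger on the write side.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import window

    spark = SparkSession.builder.appName("micro_batch_sketch").getOrCreate()

    # The built-in "rate" source continuously generates rows with
    # (timestamp, value) columns; it stands in for Kafka or files here.
    events = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

    # Count events per 1-minute event-time window.
    counts = events.groupBy(window("timestamp", "1 minute")).count()

    # The trigger is the key knob: processingTime="1 minute" tells Spark to
    # drain whatever has accumulated since the last trigger and process it
    # as one small batch, roughly once per minute.
    query = (
        counts.writeStream
        .outputMode("complete")
        .format("console")
        .trigger(processingTime="1 minute")
        .start()
    )

    query.awaitTermination()

Lengthening the trigger interval raises latency but amortizes per-batch overhead over more rows; shortening it does the opposite, which is exactly the latency-versus-throughput trade the section describes.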

About This Interactive Section

This section is part of the Batch vs Streaming: Intermediate lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.

How DataDriven Lessons Work

DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.