Two Ways Data Can Move
Concepts covered: paBatchVsStreaming
Data moves through a pipeline in one of two basic rhythms. The first rhythm is scheduled: data piles up for a while, then a job wakes up, processes everything that has accumulated since the last run, and goes back to sleep. The second rhythm is continuous: each new event flows through the pipeline as it arrives, with no waiting for a scheduled wake-up. Almost every production pipeline fits one of these two rhythms, or a hybrid that explicitly mixes them. Naming the rhythm is the first useful skill, because every other architectural choice (compute shape, cost profile, failure handling, freshness expectation) depends on it.
The Two Rhythms
Both rhythms produce the same end result if the inputs and the logic are the same. A daily count of signups by country can be computed by summing all of yesterday's events in one scheduled job, or by incrementing a running counter as each signup arrives.
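The equivalence of the two rhythms can be sketched in a few lines of Python. This is a minimal illustration, not production pipeline code: the event list, field names, and function names are invented for the example. One function processes an accumulated pile of events in a single pass (batch); the other updates a running counter one event at a time (streaming). Given the same inputs and the same logic, they land on the same counts.

```python
from collections import Counter

# Illustrative signup events; in a real pipeline these would come
# from a table (batch) or a message queue (streaming).
events = [
    {"country": "US"}, {"country": "DE"},
    {"country": "US"}, {"country": "IN"},
]

def batch_count(accumulated_events):
    """Batch rhythm: process everything accumulated since the last run."""
    return Counter(e["country"] for e in accumulated_events)

def stream_count(event_iter):
    """Streaming rhythm: update a running counter as each event arrives."""
    running = Counter()
    for e in event_iter:
        running[e["country"]] += 1
    return running

# Same inputs, same logic -> same end result.
assert batch_count(events) == stream_count(iter(events))
```

What differs between the two is not the answer but everything around it: when the compute runs, how fresh the counts are, and what happens when a run fails partway through.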
About This Interactive Section
This section is part of the Batch vs Streaming: Beginner lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.
How DataDriven Lessons Work
DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.