Streaming: Picture, Rhythm, Example

Concepts covered: Stream Processing

Stream processing is the second basic rhythm. A streaming pipeline runs continuously: each new event arrives at the source and flows through the transforms within milliseconds or seconds. There is no concept of a chunk and no concept of a scheduled wake-up. The pipeline is a long-running service, more like a web server than a script. The shape is more recent than batch in mainstream use, dating roughly from the rise of Apache Kafka in the early 2010s and the stream processors that grew up around it: Spark Streaming, Flink, Kafka Streams, and Beam.

The Shape of a Streaming Pipeline

The Live Event Feed

The canonical example is a live event feed. A user clicks a button on a website. The click is recorded as an event, sent to a Kafka topic, and within a few seconds shows up on an internal dashboard.
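To make the rhythm concrete, here is a minimal sketch of such a consumer in Python using the kafka-python client. The topic name "clicks", the broker address, and the event fields are illustrative assumptions, not part of the lesson; any real deployment would differ in the details.

```python
# A minimal sketch of a long-running streaming consumer, assuming a local
# Kafka broker and a hypothetical "clicks" topic carrying JSON click events.
# Requires kafka-python (pip install kafka-python).
import json
from collections import Counter

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clicks",                            # hypothetical topic name
    bootstrap_servers="localhost:9092",  # assumed broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

clicks_by_button = Counter()

# The pipeline is a long-running service: this loop blocks waiting for new
# events and processes each one within moments of its arrival. There is no
# chunk and no scheduled wake-up, just one event at a time.
for message in consumer:
    event = message.value                # e.g. {"button": "signup", "user_id": 42}
    clicks_by_button[event["button"]] += 1
    print(clicks_by_button)              # stand-in for updating a live dashboard
```

Note the contrast with a batch script: nothing here ever "finishes". The for loop over the consumer is the whole program, and state like the running counter lives in memory for as long as the service runs.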

About This Interactive Section

This section is part of the Batch vs Streaming: Beginner lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.

How DataDriven Lessons Work

DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.