One Source, Two Different Consumers
Concepts covered: Shared Raw Layer, Consumer-Specific Transforms
A common architecture problem is one rich source feeding two consumers with different needs. The example here is a single Kafka topic of user activity events read by two consumers: a daily executive dashboard and a machine learning feature store that powers churn prediction. The same event stream, two completely different shapes at the edge.

The Source
Each event is small, semi-structured, and produced at roughly five thousand events per second at peak. The Kafka topic has thirty-day retention and twelve partitions. The pipeline reads continuously and lands raw events in S3, partitioned by hour. From there, the same raw data feeds two very different transforms.

Consumer 1: The Executive Dashboard
The executive dashboard wants daily active users, broken out by acquisition channel a
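The shared-raw-layer pattern above can be sketched in a few lines of Python. This is a minimal, self-contained illustration, not the platform's reference solution: the event schema (`user_id`, `channel`, `ts` as a Unix epoch) is assumed, and in-memory dictionaries stand in for the Kafka consumer and the S3 writes. The point is the shape: one function lands raw events into hourly partitions, and each consumer applies its own transform to the same raw data.

```python
import json
from collections import defaultdict
from datetime import datetime, timezone

def partition_key(event):
    """Hourly S3-style partition path for a raw event (assumed `ts` field)."""
    ts = datetime.fromtimestamp(event["ts"], tz=timezone.utc)
    return ts.strftime("raw/events/%Y/%m/%d/%H/")

def land_raw(events):
    """Shared raw layer: group serialized events by hourly partition.

    A stand-in for the continuous Kafka -> S3 landing job; in production
    each key would be an object prefix written once per hour.
    """
    partitions = defaultdict(list)
    for e in events:
        partitions[partition_key(e)].append(json.dumps(e))
    return partitions

def daily_active_users_by_channel(events, day):
    """Consumer 1: DAU by acquisition channel, computed from the raw events."""
    seen = defaultdict(set)
    for e in events:
        d = datetime.fromtimestamp(e["ts"], tz=timezone.utc).strftime("%Y-%m-%d")
        if d == day:
            seen[e["channel"]].add(e["user_id"])
    return {channel: len(users) for channel, users in seen.items()}
```

A second consumer (the feature store) would read the same `land_raw` output but produce a per-user feature table instead of a daily aggregate; neither transform touches the other's output, which is what keeps the two shapes independent at the edge.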
About This Interactive Section
This section is part of the What a Data Pipeline Is: Intermediate lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.