The Four Roles in Any Pipeline
Concepts covered: Pipeline Roles
Every pipeline, no matter how complex, can be described in terms of four roles. A source produces data. A transform reshapes it. Storage holds it for later. A consumer reads it for some purpose. Real pipelines often have many of each, chained together, but the roles themselves do not change. Naming the four roles is the single most useful skill a new data engineer can develop, because once they are named, every architecture diagram becomes legible.

Role 1: Source

A source is wherever data originates: the system that produced the data in the first place. The source is upstream of everything else. A pipeline does not own its sources; it consumes from them. This distinction matters because a source can change without warning, and the pipeline must absorb that change. Common sources include operational databases.
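To make the four roles concrete, here is a minimal Python sketch. The function names (source, transform, load, consumer), the in-memory list standing in for storage, and the sample records are all hypothetical, chosen only to show how the roles hand data to one another; a real pipeline would read from an operational database, write to a warehouse table, and serve a report or dashboard.

```python
from typing import Iterator


def source() -> Iterator[dict]:
    """Source: where data originates. The pipeline consumes it but does not own it."""
    # Hypothetical raw events, standing in for rows from an operational database.
    yield {"user_id": 1, "amount_cents": "1999"}
    yield {"user_id": 2, "amount_cents": "500"}


def transform(records: Iterator[dict]) -> Iterator[dict]:
    """Transform: reshape the data -- here, cast strings to numbers and derive a field."""
    for record in records:
        amount = int(record["amount_cents"])
        yield {"user_id": record["user_id"], "amount_dollars": amount / 100}


# Storage: holds transformed data for later. A plain list stands in for a warehouse table.
storage: list[dict] = []


def load(records: Iterator[dict]) -> None:
    """Write transformed records into storage."""
    storage.extend(records)


def consumer() -> float:
    """Consumer: reads stored data for some purpose -- here, total revenue for a report."""
    return sum(row["amount_dollars"] for row in storage)


if __name__ == "__main__":
    load(transform(source()))
    print(f"Total revenue: ${consumer():.2f}")  # Total revenue: $24.99
```

Chaining the calls, load(transform(source())), mirrors how an architecture diagram reads left to right: data flows from the source through the transform into storage, and the consumer only ever touches storage.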