
Many Sources, One Curated Layer

Concepts covered: paLayeredArchitecture, paMedallion

A first pipeline is one source, one transform, one consumer. The vocabulary is small enough to fit on a napkin. A real production environment has many of each, and the question changes from "what should this pipeline do" to "how do these pipelines fit together so each one does not solve the same problem in a slightly different way?" The answer is almost always a shared middle layer that every pipeline writes to and reads from. Without that shared layer, the same data ends up extracted three times, cleaned three different ways, and reconciled in spreadsheets at quarter end.

The Combinatorial Problem

Five sources times ten consumers is fifty potential point-to-point pipelines. Five sources writing once to a shared curated layer, plus ten consumers reading from it, is fifteen. The reduction is not about the raw count alone: every point-to-point pipeline is another place where extraction and cleaning logic can quietly diverge, while a curated layer gives each source exactly one extraction path and one cleaning path.
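A minimal sketch of both ideas, with illustrative names and data that are not from the lesson: two functions compare the edge counts of point-to-point wiring versus a shared layer, and a toy "silver" step shows the one shared cleaning path (normalize, cast, dedupe) that every consumer would otherwise reimplement.

```python
def point_to_point_edges(n_sources: int, n_consumers: int) -> int:
    """Every consumer pulls directly from every source."""
    return n_sources * n_consumers

def layered_edges(n_sources: int, n_consumers: int) -> int:
    """Each source writes once to the curated layer; each consumer reads from it."""
    return n_sources + n_consumers

print(point_to_point_edges(5, 10))  # 50 potential pipelines
print(layered_edges(5, 10))         # 15 pipelines

# Hypothetical bronze rows: raw records as landed, messy casing and string types.
bronze = [
    {"user": " Alice ", "amount": "10"},
    {"user": "alice",   "amount": "10"},  # duplicate after normalization
]

def to_silver(rows):
    """One shared cleaning step: trim/lowercase names, cast amounts, drop dupes."""
    seen, out = set(), []
    for r in rows:
        key = (r["user"].strip().lower(), float(r["amount"]))
        if key not in seen:
            seen.add(key)
            out.append({"user": key[0], "amount": key[1]})
    return out

print(to_silver(bronze))  # [{'user': 'alice', 'amount': 10.0}]
```

Because `to_silver` runs once in the curated layer, all ten consumers inherit the same normalization rules instead of each encoding their own.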

About This Interactive Section

This section is part of the What a Data Pipeline Is: Intermediate lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.

How DataDriven Lessons Work

DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.