
Declarative vs. Imperative Pipelines

Concepts covered: paPipelineAsCode, paDeclarativeOrchestration, paImperativeOrchestration

Pipelines used to be Python scripts that called other Python scripts. Modern pipeline tooling has moved toward two distinct philosophies: declarative, where the code describes the desired state of data assets, and imperative, where the code describes the steps to take. dbt and Dagster software-defined assets sit on the declarative side. Airflow operators sit on the imperative side. The choice is not a tool preference; it is a workload fit, and the wrong choice produces the kind of pipeline that works but resists every kind of change.

The Two Models in One Sentence Each

An imperative pipeline answers the question "what should run, in what order, when." A declarative pipeline answers the question "what data assets should exist, derived from what other assets, and how fresh." Imperative is pr
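The contrast can be sketched in plain Python, without any orchestrator installed. This is an illustrative toy, not Airflow or Dagster code: all names (`run_imperative`, `ASSETS`, `run_declarative`) are hypothetical. The imperative version writes the run order by hand; the declarative version only states which assets exist and what each depends on, then derives the order from the dependency graph.

```python
# Toy contrast between the two orchestration styles.
# Hypothetical names throughout; no orchestrator library is used.

from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# --- Imperative: the pipeline IS the ordered list of steps ---
def run_imperative(log):
    log.append("extract")
    log.append("transform")
    log.append("load")

# --- Declarative: the pipeline is a set of assets plus their
#     upstream dependencies; the run order is derived, not written ---
ASSETS = {
    "raw_events": [],                  # no upstream dependencies
    "clean_events": ["raw_events"],    # derived from raw_events
    "daily_report": ["clean_events"],  # derived from clean_events
}

def run_declarative(assets, log):
    # TopologicalSorter turns the dependency graph into a valid order
    for asset in TopologicalSorter(assets).static_order():
        log.append(asset)

imperative_log, declarative_log = [], []
run_imperative(imperative_log)
run_declarative(ASSETS, declarative_log)
print(imperative_log)   # ['extract', 'transform', 'load']
print(declarative_log)  # ['raw_events', 'clean_events', 'daily_report']
```

The practical difference shows up under change: adding a new asset to `ASSETS` requires only declaring its dependencies, while the imperative function must be re-edited to slot the new step into the right position by hand.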

About This Interactive Section

This section is part of the Pipeline Operations: Advanced lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.

How DataDriven Lessons Work

DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.