The Lakehouse: ACID on Object Storage

Concepts covered: paLakehouse, paTableFormats, paIceberg, paDeltaLake

The lakehouse is a marketing term that names a real architectural shift: a metadata layer added on top of files in object storage that supplies the consistency guarantees a database has and a folder of files lacks. Iceberg, Delta Lake, and Apache Hudi are three implementations of the same idea. The data files are still Parquet (or ORC). The folders look mostly the same. The difference is a small set of metadata files that turn a directory of Parquet into a transactional table.

What the Metadata Layer Adds

How an Iceberg Table Is Laid Out

Each metadata.json file describes the table at a moment in time: schema, partition spec, and a pointer to the current snapshot. Each snapshot points to a list of manifest files. Each manifest file lists the data files that are part of that snapshot.
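The metadata-to-data-file chain can be sketched in plain Python. This is a toy, in-memory model (the dict shapes and file names below are hypothetical, not the real Iceberg spec's JSON layout), but the resolution path is the same: metadata points to the current snapshot, the snapshot points to manifests, and the manifests list the data files.

```python
# Toy model of the Iceberg metadata chain. All structures and file names
# here are illustrative, not the actual spec's JSON/Avro formats.
metadata = {
    "schema": {"fields": [{"name": "id", "type": "long"},
                          {"name": "event_ts", "type": "timestamp"}]},
    "partition-spec": [{"source": "event_ts", "transform": "day"}],
    "current-snapshot-id": 2,
    "snapshots": {
        # Snapshot 1: the table after the first commit.
        1: {"manifests": ["manifest-a.avro"]},
        # Snapshot 2: a later commit added a second manifest.
        2: {"manifests": ["manifest-a.avro", "manifest-b.avro"]},
    },
}

# Each manifest lists the data files it tracks.
manifests = {
    "manifest-a.avro": ["data/day=2024-01-01/part-000.parquet"],
    "manifest-b.avro": ["data/day=2024-01-02/part-001.parquet"],
}

def current_data_files(meta, manifest_store):
    """Resolve metadata -> current snapshot -> manifests -> data files."""
    snapshot = meta["snapshots"][meta["current-snapshot-id"]]
    files = []
    for manifest in snapshot["manifests"]:
        files.extend(manifest_store[manifest])
    return files

print(current_data_files(metadata, manifests))
# -> ['data/day=2024-01-01/part-000.parquet',
#     'data/day=2024-01-02/part-001.parquet']
```

Note what makes this transactional in spirit: a reader only ever follows the pointer chain from one metadata file, so it sees a single consistent snapshot, and a writer commits by atomically swapping which metadata file is current.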

About This Interactive Section

This section is part of the Storage Layers and Table Formats: Advanced lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.

How DataDriven Lessons Work

DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.