10TB Versus 100GB: A Worked Example
Concepts covered: paStorageOptimization, paBytesScanned, paLayoutComposition
The four levers (columnar layout, partitioning, compression, predicate pushdown) compose, and a worked example shows how the savings multiply.

The Workload

The setup is a real-shaped clickstream table at moderate scale: eighteen months of mobile events, twenty-eight columns wide, around two trillion rows in total. The query is unremarkable: a daily count of unique users for one country over the last seven days. The same SQL runs three ways across three table layouts, and the bytes scanned change by two and a half orders of magnitude.

Setup 1: CSV, No Partitioning

The engine has no way to skip files, because there are no partitions. It has no way to skip columns, because CSV is row-oriented. It has no way to use min/max statistics, because CSV does not embed any. Every byte gets read.
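The claim that the levers compose can be sketched as a back-of-envelope model: each lever keeps only a fraction of the remaining bytes, and the fractions multiply. All of the factors below are illustrative assumptions for this sketch, not the lesson's actual measurements.

```python
# Back-of-envelope model: each lever is a "fraction of bytes still scanned,"
# and the fractions multiply. All factors are assumed for illustration.

raw_bytes = 10 * 1024**4  # ~10 TiB full CSV scan (Setup 1: nothing can be skipped)

columnar_fraction = 4 / 28      # assume the query touches 4 of 28 columns
partition_fraction = 7 / 548    # assume 7 days survive pruning out of ~18 months
compression_fraction = 1 / 3    # assume ~3x compression on the columns read
pushdown_fraction = 1 / 2       # assume min/max stats skip ~half the row groups

optimized_bytes = (raw_bytes
                   * columnar_fraction
                   * partition_fraction
                   * compression_fraction
                   * pushdown_fraction)

print(f"Full CSV scan: {raw_bytes / 1024**4:.1f} TiB")
print(f"Optimized scan: {optimized_bytes / 1024**3:.2f} GiB")
print(f"Reduction: {raw_bytes / optimized_bytes:,.0f}x")
```

The point of the model is the multiplication: no single lever gets you multiple orders of magnitude, but four modest factors compounded do.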
About This Interactive Section
This section is part of the Storage Layers and Table Formats: Intermediate lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.
How DataDriven Lessons Work
DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.