
Compression: Bytes Versus CPU

Concepts covered: paCompression, paSplittability, paZstdSnappy

Compression is the lever that trades CPU cycles for bytes. Smaller files mean fewer bytes read from disk or network, which usually wins; they also mean more CPU spent decompressing on read and compressing on write. In modern analytical workloads the tradeoff is rarely close: I/O is slow and getting slower relative to CPU, so the bytes saved are almost always worth the cycles spent. The interesting choice is which codec, not whether to compress.

The Codecs in Use

ZSTD has become the modern default in many stacks because it offers GZIP-like compression ratios at near-Snappy speeds. Spark, Iceberg, Delta, and Trino all support ZSTD natively. Snappy remains the historical default for Parquet because it was the original choice when Parquet emerged at Twitter and Cloudera around 2013, and changing a long-established default is disruptive across a large ecosystem.

About This Interactive Section

This section is part of the Storage Layers and Table Formats: Intermediate lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.

How DataDriven Lessons Work

DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.