The Small Files Problem

Concepts covered: paSmallFiles, paCompaction, paZOrdering

A streaming job with a 30-second trigger writes one file per partition every 30 seconds. That is 2,880 files per partition per day. A daily batch ingestion that writes one large file per partition produces exactly one. Both designs answer the same SQL over the same logical data, but the streaming version is dramatically slower because every read has to open thousands of tiny files instead of a few large ones. This is the small files problem, and it is the single most common operational headache in lake and lakehouse environments. The fix is compaction: periodically rewriting many small files into fewer large files, with no change to the table's logical contents.

Why Small Files Hurt

The Numbers

The sweet spot is files between 128 MB and 1 GB after compression. At 256 MB, a typical Spark task processes the file in one stage, with the fixed per-file cost of opening and closing it amortized over a large amount of useful work.
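To make compaction concrete, here is a minimal PySpark sketch that rewrites one partition of a plain Parquet table into roughly 256 MB files. The bucket paths and table layout are illustrative assumptions, not part of the lesson; on a real table format (Delta Lake, Iceberg, Hudi) you would use the format's own transactional compaction instead.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("compact-partition").getOrCreate()

    # Hypothetical paths -- substitute your own layout.
    partition_path = "s3://my-bucket/events/dt=2024-06-01"
    scratch_path = "s3://my-bucket/_compaction/events/dt=2024-06-01"

    TARGET_FILE_BYTES = 256 * 1024 * 1024  # the 256 MB sweet spot discussed above

    # Size the output by listing the partition's files through Hadoop's
    # FileSystem API (reachable via the JVM gateway in any PySpark session).
    jpath = spark._jvm.org.apache.hadoop.fs.Path(partition_path)
    fs = jpath.getFileSystem(spark._jsc.hadoopConfiguration())
    total_bytes = sum(s.getLen() for s in fs.listStatus(jpath) if s.isFile())
    num_files = max(1, int(total_bytes // TARGET_FILE_BYTES))

    # Rewrite many small files into a few large ones. The table's logical
    # contents are unchanged; only the physical layout improves.
    (spark.read.parquet(partition_path)
         .repartition(num_files)
         .write.mode("overwrite")
         .parquet(scratch_path))

Swapping scratch_path into place is deliberately left out of the sketch: on plain Parquet that final step is not atomic, which is exactly why lakehouse table formats run compaction inside a transaction. On Delta Lake, for example, the whole job above collapses to an OPTIMIZE statement, optionally with ZORDER BY to co-locate related rows while compacting.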

About This Interactive Section

This section is part of the Storage Layers and Table Formats: Advanced lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems are validated against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.

How DataDriven Lessons Work

DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.