The Cache That Ate the Cluster
A medium Spark interview practice problem on DataDriven. Write and execute real Spark code with instant grading.
- Domain
- spark
- Difficulty
- medium
- Seniority
- senior
Problem
An iterative ML feature-engineering pipeline reads a 200 GB base DataFrame and runs 8 sequential enrichment steps. Each step joins against a different dimension table and adds columns. A previous engineer cached the base DataFrame to speed up the repeated reads, but after step 4 executors start dying with OOM errors: the cache consumes so much memory that later steps have no room for shuffle data. Fix the caching strategy so the pipeline completes without OOM.
Practice This Problem
Solve this Spark problem with real code execution. DataDriven runs your solution and grades it automatically.