Spark Execution Model
Concepts: SparkExecutionModel
What They Want to Hear

'Spark splits work across a cluster. The driver is the coordinator: it plans the work, divides it into tasks, and sends those tasks to executors. Executors are the workers: each one processes a partition of the data in parallel. The key insight is that Spark is lazy: it builds a plan (the DAG) but does not execute anything until you call an action like .count() or .write().'

That is the answer: the driver plans, the executors execute, and nothing happens until an action triggers it.

The Vocabulary to Use
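The driver/executor split and the lazy DAG can be sketched in plain Python. This is not Spark's actual API, just a toy model under the assumptions above: transformations (`map`, `filter`) only record steps in a plan, each "executor" applies that plan to its own partition, and only an action (`count`) triggers execution.

```python
# Toy model of Spark's lazy execution (not the real PySpark API):
# transformations append to a plan; nothing runs until an action is called.

class LazyDataset:
    def __init__(self, partitions, plan=None):
        self.partitions = partitions          # list of lists, one per "executor"
        self.plan = plan or []                # recorded transformations (the "DAG")

    def map(self, fn):                        # transformation: lazy, returns a new plan
        return LazyDataset(self.partitions, self.plan + [("map", fn)])

    def filter(self, pred):                   # transformation: lazy
        return LazyDataset(self.partitions, self.plan + [("filter", pred)])

    def _run_partition(self, part):           # an "executor" applies the plan to one partition
        for op, fn in self.plan:
            if op == "map":
                part = [fn(x) for x in part]
            else:
                part = [x for x in part if fn(x)]
        return part

    def count(self):                          # action: triggers execution of the whole plan
        return sum(len(self._run_partition(p)) for p in self.partitions)

ds = LazyDataset([[1, 2, 3], [4, 5, 6]])      # two partitions, as two executors would hold
evens = ds.map(lambda x: x * 10).filter(lambda x: x % 20 == 0)
# At this point nothing has executed; `evens` just holds a plan.
print(evens.count())                          # the action runs the plan -> 3
```

The point to notice is that building `evens` does no work at all; calling `.count()` is what makes each partition run the recorded steps, mirroring how a Spark action triggers the DAG.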