Lambda to Kappa Worked Example
Concepts covered: paLambdaToKappaMigration
The synthesis exercise walks through a real-shaped migration: a workload originally designed as Lambda, redesigned as Kappa, with explicit notes on what changes in code, in storage, and in operations. The example is a streaming media company's content engagement pipeline. The exercise shows that the migration is not a rewrite; it is a careful retirement of the batch layer and a tightening of the streaming layer, with the immutable event log surviving as the architectural anchor.
The Lambda Starting Point
The original architecture has three layers. A batch layer, written in Spark, recomputes daily content engagement aggregates (views, likes, completion rate) from the full event log every night. A speed layer, written in Storm, processes the live event stream and produces near-real-time approximations of the same aggregates. A serving layer merges the two, answering queries from the authoritative batch views and patching in the speed layer's recent approximations until the next nightly recompute supersedes them.
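To make the Kappa endpoint concrete, here is a minimal sketch of the idea that replaces the batch layer: one code path consumes the immutable event log, and "recomputation" is just replaying that log from the beginning rather than running a separate nightly Spark job. The event shapes (`content_id`, `type`) and names here are illustrative, not the exercise's actual schema.

```python
from dataclasses import dataclass


@dataclass
class EngagementAggregate:
    """Per-content engagement counters maintained by the stream processor."""
    views: int = 0
    likes: int = 0
    completions: int = 0

    @property
    def completion_rate(self) -> float:
        return self.completions / self.views if self.views else 0.0


def apply_event(aggregates: dict, event: dict) -> None:
    """Fold a single event into the aggregate state.

    The same function serves live processing and historical replay,
    which is the core simplification Kappa buys over Lambda.
    """
    agg = aggregates.setdefault(event["content_id"], EngagementAggregate())
    if event["type"] == "view":
        agg.views += 1
    elif event["type"] == "like":
        agg.likes += 1
    elif event["type"] == "complete":
        agg.completions += 1


def replay(log: list) -> dict:
    """Rebuild all aggregates by replaying the immutable log from offset 0.

    In Kappa this replaces the nightly batch recompute: there is no second
    codebase, only a re-run of the streaming logic over retained history.
    """
    aggregates: dict = {}
    for event in log:
        apply_event(aggregates, event)
    return aggregates


# Hypothetical slice of the event log for one piece of content.
log = [
    {"content_id": "ep1", "type": "view"},
    {"content_id": "ep1", "type": "view"},
    {"content_id": "ep1", "type": "complete"},
    {"content_id": "ep1", "type": "like"},
]

aggs = replay(log)
print(aggs["ep1"].views, aggs["ep1"].likes, aggs["ep1"].completion_rate)
# → 2 1 0.5
```

In a production Kappa deployment the dictionary would be a state store in a stream processor (e.g. Flink or Kafka Streams state) and `replay` would be a consumer seeking back to the earliest retained offset, but the structural point is the same: live updates and backfills run through identical code.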
About This Interactive Section
This section is part of the Batch vs Streaming: Advanced lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.
How DataDriven Lessons Work
DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.