Cost Attribution
Concepts covered: paCostAttribution, paQueryTags
Most data teams do not know what their pipelines cost until somebody asks. The bill arrives as a single number from Snowflake, BigQuery, or Databricks; it does not break down by pipeline. Without attribution, the cost conversation is impossible: nobody can say which pipelines should be optimized, which can be retired, or which are growing fastest. The fix is query tagging, which threads a pipeline identifier through every query the warehouse runs. The pattern is universal across cloud warehouses, with small mechanical differences: a Snowflake QUERY_TAG, a BigQuery job label, a Databricks tag, a Redshift query group. The point is not the field name but the discipline of carrying a stable identifier through every query and rolling it up after the fact.
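As a minimal sketch of that discipline, the Python below encodes pipeline metadata as a JSON tag (a hypothetical convention; field names like `pipeline` and `run_id` are assumptions, not a warehouse standard) and rolls up cost per pipeline from query-history rows. The `query_tag` and `credits` fields here stand in for whatever your warehouse's query-history view actually exposes.

```python
import json
from collections import defaultdict

def build_query_tag(pipeline: str, task: str, run_id: str) -> str:
    """Encode pipeline metadata as a JSON string suitable for a query tag.

    A hypothetical convention: the exact fields are up to your team,
    but they must be stable so the rollup can group on them later.
    """
    return json.dumps({"pipeline": pipeline, "task": task, "run_id": run_id})

def attribute_cost(query_history: list) -> dict:
    """Roll up warehouse cost by pipeline from tagged query-history rows.

    Each row is assumed to carry a 'query_tag' (the JSON string above)
    and a numeric 'credits' field; untagged or malformed tags fall into
    an 'untagged' bucket rather than being silently dropped.
    """
    totals = defaultdict(float)
    for row in query_history:
        try:
            pipeline = json.loads(row.get("query_tag") or "")["pipeline"]
        except (json.JSONDecodeError, KeyError):
            pipeline = "untagged"
        totals[pipeline] += row.get("credits", 0.0)
    return dict(totals)
```

In a real pipeline the tag would be set once per session before any work runs (for example, issuing `ALTER SESSION SET QUERY_TAG = '<tag>'` on Snowflake), and the rollup would be a GROUP BY over the warehouse's query-history view instead of a Python loop; the structure of the attribution is the same either way.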
About This Interactive Section
This section is part of the Pipeline Operations: Intermediate lesson on DataDriven, a free data engineering interview prep platform. Each section includes explanations, worked examples, and hands-on code challenges that execute in real time. SQL queries run against a live PostgreSQL database. Python runs in a sandboxed Docker container. Data modeling problems validate against interactive schema canvases. All content is framed around what data engineering interviewers actually test at companies like Meta, Google, Amazon, Netflix, Stripe, and Databricks.
How DataDriven Lessons Work
DataDriven combines four interview rounds (SQL, Python, Data Modeling, Pipeline Architecture) with adaptive difficulty and spaced repetition. Easy problems get harder as you improve. Weak concepts resurface until you master them. Your readiness score tracks progress across every topic interviewers test. Every lesson section ends with problems you solve by writing and running real code, not by picking multiple-choice answers.