-
Continue reading →: Deploying Lakeflow Jobs with Databricks Asset Bundles
Databricks Lakeflow Jobs provide a powerful way to orchestrate notebooks and data processes directly inside Databricks without relying on external orchestration tools like Azure Data Factory, Airflow, or Dagster. A key requirement for modern data engineering is keeping job definitions as code and deploying them consistently across environments. This is…
-
Continue reading →: Databricks CLI Explained: The Power of Automation Beyond the UI
Databricks provides a rich user interface that makes it easy to interact with notebooks, jobs, clusters, and data objects. But as your platform grows, teams mature, and automation becomes a requirement, the Databricks Command Line Interface (CLI) becomes an indispensable tool. In this blog, we’ll explore what the Databricks CLI…
-
Continue reading →: Key Practices That Make Databricks DE Life Easy
Focusing on performance is important, but that doesn't mean running a data team comes cheap. As requirements grow more complex, you need skilled data engineers, and that naturally increases cost. One of the most effective ways to reduce that cost is to keep your code simple. Databricks gives us several built-in features…
-
Continue reading →: Clustering by Z-order demystified
Clustering is one of the best-known techniques in big data systems, especially in lakehouse architecture. It is a data layout optimization that arranges data on disk so that, when querying, instead of reading all files in the lakehouse, only a limited number of files will be read using file metadata stats…
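The core idea behind Z-ordering is bit interleaving (a Morton code): rows sorted by the interleaved key keep records that are close in several columns close together on disk. Here is a toy pure-Python sketch of that idea, not Delta Lake's actual implementation; the function name and the sample points are illustrative only.

```python
def z_order_key(x: int, y: int, bits: int = 8) -> int:
    """Interleave the bits of x and y into a single Morton code.

    Sorting rows by this key clusters points that are close in
    both dimensions, which is the layout idea behind Z-ordering.
    """
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # even bit positions come from x
        key |= ((y >> i) & 1) << (2 * i + 1)  # odd bit positions come from y
    return key

points = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (0, 2)]
ordered = sorted(points, key=lambda p: z_order_key(*p))
# ordered -> [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (0, 2)]
```

Notice that (1, 1) sorts before (2, 0): a plain sort on x alone would scatter y values, while the interleaved key keeps both dimensions skippable.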
-
Continue reading →: Spark Bucketing Demystified
When working with massive data, shuffling is one of the costliest operations, and engineers make every effort to reduce it. This is achieved through two data-skipping pillars: partitioning and clustering. Clustering is preferred for high-cardinality field searches, and lakehouse architectures like Delta Lake, Iceberg, and Hudi…
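Bucketing avoids the join-time shuffle by hashing the join key into a fixed number of buckets at write time: matching keys from both tables always land in the same bucket, so the join only pairs bucket i with bucket i. A toy sketch in plain Python (Spark uses Murmur3 for bucketing; `zlib.crc32` stands in here, and the tables are made up):

```python
import zlib
from collections import defaultdict

NUM_BUCKETS = 4

def bucket_of(key, num_buckets=NUM_BUCKETS):
    # Stable hash of the join key -> bucket id.
    return zlib.crc32(str(key).encode()) % num_buckets

def write_bucketed(rows, key_index):
    # Simulates writing a table pre-hashed into NUM_BUCKETS buckets.
    buckets = defaultdict(list)
    for row in rows:
        buckets[bucket_of(row[key_index])].append(row)
    return buckets

orders = [("o1", "alice"), ("o2", "bob"), ("o3", "alice")]
users = [("alice", "US"), ("bob", "DE")]

order_buckets = write_bucketed(orders, key_index=1)
user_buckets = write_bucketed(users, key_index=0)

# Join bucket-by-bucket: no cross-bucket comparison, i.e. no shuffle.
joined = [(o, u) for b in range(NUM_BUCKETS)
          for o in order_buckets.get(b, [])
          for u in user_buckets.get(b, [])
          if o[1] == u[0]]
```

Because both sides were hashed with the same function and bucket count, every matching pair is guaranteed to sit in the same bucket, which is exactly the property bucketed joins exploit.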
-
Continue reading →: Clustering options in delta-lake
When working with large-scale data in a lakehouse architecture, performance matters — a lot. One of the most effective ways to boost query performance is through a concept called data skipping. In simple terms, data skipping helps the query engine avoid scanning unnecessary data. Instead of reading every record, it…
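Data skipping works because each data file carries min/max statistics per column; the engine compares the query predicate against those stats and opens only the files whose range could contain a match. A minimal sketch of the mechanism (file names and stats are invented for illustration):

```python
# Each file's footer/transaction-log entry records min/max per column.
files = [
    {"path": "part-0.parquet", "min_id": 0,   "max_id": 99},
    {"path": "part-1.parquet", "min_id": 100, "max_id": 199},
    {"path": "part-2.parquet", "min_id": 200, "max_id": 299},
]

def files_to_scan(files, value):
    # Keep only files whose [min, max] range can contain the value.
    return [f["path"] for f in files
            if f["min_id"] <= value <= f["max_id"]]

files_to_scan(files, 150)  # only part-1.parquet needs to be read
```

With a query like `WHERE id = 150`, two of the three files are skipped without ever being opened; clustering makes this effective by keeping each file's min/max range narrow.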
-
Continue reading →: 3 Proven OLAP Query Concepts That Boost Efficiency
Not every performance gain comes from fancy techniques like broadcasting, partitioning, or caching. Sometimes, the right way of querying makes all the difference — and that’s where strong data skills and a deep understanding of technology fundamentals come in. Here are some sample scenarios where queries can perform efficiently regardless…
-
Continue reading →: Hive Partitioning Unlocked
Data skipping is a crucial performance optimization technique, especially in OLAP (Online Analytical Processing) environments. One of the most effective ways to enable data skipping is through partitioning — a technique widely used in lakehouse architecture and other storage systems. Key principle: Instead of scanning the entire dataset in…
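Hive-style partitioning encodes the partition value in the directory name (e.g. `sales/date=2024-01-01/part-0.parquet`), so a filter on the partition column prunes entire directories before any file is opened. A toy sketch of that pruning step in plain Python; the paths are illustrative:

```python
paths = [
    "sales/date=2024-01-01/part-0.parquet",
    "sales/date=2024-01-01/part-1.parquet",
    "sales/date=2024-01-02/part-0.parquet",
]

def prune(paths, column, value):
    # A predicate on the partition column maps directly to a
    # directory-name match: no file content is ever inspected.
    token = f"{column}={value}"
    return [p for p in paths if token in p.split("/")]

prune(paths, "date", "2024-01-02")
# -> ["sales/date=2024-01-02/part-0.parquet"]
```

This is why partitioning only pays off on low-cardinality columns that appear in filters: the pruning happens at the directory level, before any I/O.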
-
Continue reading →: Repartition and coalesce in spark
In Spark, repartition and coalesce are two options used to rebalance DataFrame partitions for better performance and data management. The key technical differences are shown below: At first glance, coalesce seems more efficient and is often preferred. However, in certain situations repartition can be much more effective. When to Prefer…
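The difference between the two can be sketched in plain Python. This is a toy model, not Spark code: coalesce merges existing partitions without moving rows between them (narrow, cheap, can only reduce the count, and preserves skew), while repartition performs a full shuffle that rebalances every row (wide, costly, but even):

```python
import itertools

def coalesce(partitions, n):
    # Narrow transformation: glue existing partitions together.
    # Cheap, but an oversized input partition stays oversized.
    groups = [[] for _ in range(n)]
    for i, part in enumerate(partitions):
        groups[i % n].extend(part)
    return groups

def repartition(partitions, n):
    # Wide transformation: full shuffle; every row may move,
    # but the output partitions come out evenly balanced.
    rows = list(itertools.chain.from_iterable(partitions))
    out = [[] for _ in range(n)]
    for i, row in enumerate(rows):
        out[i % n].append(row)
    return out

skewed = [[1, 2, 3, 4, 5, 6], [7], [8]]
coalesce(skewed, 2)     # -> [[1, 2, 3, 4, 5, 6, 8], [7]]  (still skewed)
repartition(skewed, 2)  # -> [[1, 3, 5, 7], [2, 4, 6, 8]]  (balanced)
```

The skewed input shows the trade-off: coalesce keeps the fat partition fat, so downstream tasks stay unbalanced, which is exactly the situation where paying for repartition's shuffle is worth it.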