- Demystifying Apache Spark: Jobs, Stages, and Tasks
The Spark driver program is a JVM process that runs the Spark application's main program and coordinates jobs, stages, and tasks. A Spark executor is a JVM process launched on a worker node that executes the tasks. A job can contain multiple stages, and each stage can in turn contain multiple tasks. But how does…
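To make the job/stage/task breakdown concrete, here is a minimal PySpark sketch (the dataset and bucketing logic are illustrative, not from the post) showing how one action becomes one job and a shuffle adds a stage boundary:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("JobsStagesTasks").getOrCreate()

# A shuffling transformation (groupBy) splits the job into two stages;
# the action (collect) is what actually submits the job to the scheduler.
df = spark.range(1_000_000)                                 # narrow work: stage 1
counts = df.groupBy((df.id % 10).alias("bucket")).count()   # shuffle boundary: stage 2
counts.collect()                                            # action: triggers one job

# Each stage runs one task per partition; the Spark UI (default
# http://localhost:4040) shows the job, its stages, and their tasks.
```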
- How Spark Saves Time & Cost by Being Lazy
Understanding Spark internals is important because it directly impacts how effectively you can utilize Spark for performance, scalability, and cost efficiency. One key aspect to note in Spark is the concept of lazy evaluation. Before diving into the main topic, let’s first take a quick look at what actions and…
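A minimal sketch of lazy evaluation in PySpark; the file and column names below are assumptions for illustration:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("LazyEval").getOrCreate()

df = spark.read.csv("events.csv", header=True)    # transformation: nothing read yet
filtered = df.filter(df["status"] == "ACTIVE")    # transformation: still just a plan
projected = filtered.select("user_id", "status")  # transformation: plan grows

# Only the action below forces Spark to optimize the accumulated plan and
# execute it; until this line, no data has actually been touched.
print(projected.count())
```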
- Spark Performance Pitfalls: The Hidden Cost of Misplaced Unions
Using the union operation in the wrong place can be costly in Spark. For example, if you read data from a source, process it, then split it into two DataFrames based on certain filter conditions, perform additional operations on each DataFrame separately, and finally union them back together, Spark will…
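One common mitigation, sketched below with hypothetical paths and column names, is to cache the shared parent DataFrame before the split so its lineage is computed once rather than once per branch:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("UnionPitfall").getOrCreate()

src = spark.read.parquet("/data/orders")  # hypothetical source
processed = src.withColumn("amount_usd", F.col("amount") * F.col("fx_rate"))

# Caching before the split means the expensive lineage runs once;
# without it, each filtered branch (and the final union) recomputes it.
processed.cache()

high = processed.filter(F.col("amount_usd") >= 1000).withColumn("tier", F.lit("high"))
low  = processed.filter(F.col("amount_usd") <  1000).withColumn("tier", F.lit("low"))

result = high.union(low)  # both branches now share the cached parent
result.write.parquet("/data/orders_tiered")
```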
- Handling Orphan Data
Data integrity and consistency are the foundation of data reliability. Without them, every model, report, or insight becomes a guess at best — and a risk at worst. Orphan records are a commonly encountered issue in data management, and effectively handling them is crucial to maintaining high-quality, usable datasets. Unlike…
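As one example of what handling orphans can look like in Spark, a left anti join isolates the orphan rows; the table names, column names, and quarantine path below are assumptions:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("OrphanRecords").getOrCreate()

# Illustrative tables: orders referencing customers by customer_id.
orders = spark.table("orders")
customers = spark.table("customers")

# Orphans: order rows whose customer_id has no match in customers.
orphans = orders.join(customers, on="customer_id", how="left_anti")
orphans.write.mode("overwrite").parquet("/quarantine/orphan_orders")

# Keep only referentially intact rows for downstream processing.
valid_orders = orders.join(customers, on="customer_id", how="left_semi")
```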
- Parameterize an ADF Linked Service for Multiple Uses
Parameterization is an important best practice in data pipeline design, as in any system design, to keep code clean and easy to maintain. Parameterizing linked services in Azure Data Factory is crucial for reusability and flexibility across different environments or functions. It enables dynamic connections by allowing values…
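For illustration, a parameterized linked service definition might look like the JSON sketch below; the names and the connection string shape are assumptions, and the authentication block is omitted:

```json
{
  "name": "LS_SqlDb_Generic",
  "properties": {
    "type": "AzureSqlDatabase",
    "parameters": {
      "serverName": { "type": "String" },
      "databaseName": { "type": "String" }
    },
    "typeProperties": {
      "connectionString": "Server=tcp:@{linkedService().serverName}.database.windows.net,1433;Database=@{linkedService().databaseName};"
    }
  }
}
```

A pipeline can then supply serverName and databaseName at runtime, so a single linked service can serve every environment.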
- Unique Key in a Spark DataFrame
Creating a unique key within a data pipeline is essential for reliably identifying individual records, especially in scenarios where the source dataset lacks a natural primary key and where record traceability is required in later stages of processing. In distributed processing frameworks like Apache Spark, which operate in-memory and leverage…
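Two common approaches, sketched in PySpark with illustrative data:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("UniqueKey").getOrCreate()
df = spark.createDataFrame(
    [("alice", "2024-01-01"), ("bob", "2024-01-02")],
    ["name", "event_date"],
)

# Option 1: monotonically_increasing_id gives unique (not consecutive)
# 64-bit ids; values encode the partition id, so they change if the
# data is repartitioned or the job is rerun.
with_id = df.withColumn("row_id", F.monotonically_increasing_id())

# Option 2: a deterministic surrogate key hashed from business columns,
# stable across reruns as long as the input columns are stable.
with_key = df.withColumn(
    "row_key", F.sha2(F.concat_ws("||", "name", "event_date"), 256)
)
```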
- Different Ways of Removing Duplicates in Spark
Removing duplicates is essential in any data processing system, and like other systems, Spark has some good ways to get rid of them. We will look into the different ways of removing duplicates in Spark and their applications. Distinct & drop duplicates: distinct and dropDuplicates are the most common ways and…
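A quick PySpark sketch of both, with toy data:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Dedup").getOrCreate()
df = spark.createDataFrame(
    [(1, "a"), (1, "a"), (1, "b"), (2, "c")], ["id", "val"]
)

# distinct(): drops rows that are duplicates across ALL columns.
df.distinct().show()              # keeps (1,"a"), (1,"b"), (2,"c")

# dropDuplicates(subset): de-duplicates on the given columns only,
# keeping an arbitrary surviving row per key.
df.dropDuplicates(["id"]).show()  # one row per id
```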
- Union vs UnionAll in Spark
Unlike in traditional SQL databases, the difference between union and unionAll in Spark is unusual and not very intuitive. Below is the exercise: two DataFrames created with some duplicate values. In any traditional database, UNION removes the duplicates across both datasets (i.e., tables) and returns only unique…
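A minimal PySpark sketch of the behavior described:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("UnionVsUnionAll").getOrCreate()
df1 = spark.createDataFrame([(1,), (2,), (2,)], ["id"])
df2 = spark.createDataFrame([(2,), (3,)], ["id"])

# Unlike SQL's UNION, DataFrame.union does NOT de-duplicate:
df1.union(df2).show()             # 5 rows, duplicates kept
# unionAll is just a deprecated alias for union in recent Spark versions.

# To get SQL UNION semantics, de-duplicate explicitly:
df1.union(df2).distinct().show()  # 3 rows: 1, 2, 3
```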
- Stock Price Streaming using Apache Kafka
In today’s fast-paced and highly volatile financial markets, having access to real-time stock quotes is crucial for making informed and precise decisions. Traditional methods of obtaining stock quotes often involve delays, which can lead to missed opportunities. This project aims to develop a robust and scalable pipeline that ingests live…
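As a rough sketch of the ingestion side, assuming the kafka-python client, a localhost broker, a "stock-quotes" topic, and a placeholder quote function (none of which are confirmed by the post):

```python
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def fetch_quote(symbol):
    # Placeholder: the real pipeline would call a market-data API here.
    return {"symbol": symbol, "price": 123.45, "ts": time.time()}

for _ in range(10):  # stream a few ticks for the sketch
    producer.send("stock-quotes", fetch_quote("AAPL"))
    time.sleep(1)
producer.flush()
```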
- What is Machine Language?
Is it something that machines speak? Are C, C++, and Java machine languages? The language a machine understands is machine language. So, what does a machine understand? Obviously, 0s and 1s. Machines understand only digital values. So, if we need to interact with a machine, the only way to communicate is…