Create a Spark aggregator to retrieve the schema of a JSON string in a column
on 2021-10-05
To transform a dataframe with a column containing a JSON string into a typed dataframe, we have to know exactly what the schema of our JSON string is. This blog post presents a method to infer a global schema from a column containing different JSON strings, using a user-defined aggregate function.
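The post builds a user-defined aggregate function for this; as a minimal sketch of the underlying idea (not the post's aggregator), a global schema can also be inferred by re-reading the string column as a JSON dataset. The "payload" column and sample data below are hypothetical:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical dataframe with a "payload" column holding JSON strings of varying shapes.
val df = Seq(
  """{"a": 1}""",
  """{"a": 2, "b": "x"}"""
).toDF("payload")

// Infer one global schema over all rows by reading the column as a JSON dataset.
val inferredSchema = spark.read.json(df.select("payload").as[String]).schema

// Use the inferred schema to turn the string column into a typed struct column.
val typed = df.select(from_json(col("payload"), inferredSchema).as("parsed"))
typed.printSchema()
```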
Data Definition Language (DDL) for defining Spark Schema
on 2021-10-04
If you want to transform a Spark dataframe schema into a String, you have two schema string representations available: JSON and DDL. DDL stands for Data Definition Language and provides a very concise way to represent a Spark schema. But how do we represent a Spark schema in DDL?
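As a quick sketch of what this looks like (made-up columns and path, not the full post), a schema can be rendered to DDL with StructType.toDDL, parsed back with StructType.fromDDL, or passed directly to a reader:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.StructType

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical dataframe; its schema rendered as DDL is "id INT,name STRING".
val df = Seq((1, "alice")).toDF("id", "name")
val ddl: String = df.schema.toDDL

// The DDL string can be parsed back into a StructType...
val schema: StructType = StructType.fromDDL("id INT, name STRING")

// ...or passed directly to a reader (the path here is hypothetical).
val reread = spark.read.schema("id INT, name STRING").json("/tmp/people.json")
```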
Aggregate to a Map in Spark
on 2021-03-30
A small code snippet to aggregate two columns of a Spark dataframe into a map, grouped by a third column.
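For reference, one way to do this (a sketch with made-up column names, not necessarily the exact snippet from the post) combines collect_list, struct and map_from_entries:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{collect_list, map_from_entries, struct}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical data: build a map from "key" to "value" for each "id".
val df = Seq((1, "a", 10), (1, "b", 20), (2, "a", 30)).toDF("id", "key", "value")

val aggregated = df
  .groupBy("id")
  .agg(map_from_entries(collect_list(struct($"key", $"value"))).as("kv"))

aggregated.show(truncate = false)
// id = 1 -> Map(a -> 10, b -> 20), id = 2 -> Map(a -> 30)
```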
Pyspark setup for IntelliJ IDEA
on 2021-01-24
Simple configuration of a new Python IntelliJ IDEA project with working Pyspark. I was inspired by the "Pyspark on IntelliJ" blog post by Gaurav M Shah; I just removed all the parts about deep learning libraries. I assume that you have a working IntelliJ IDEA IDE with the Python plugin installed, and Python 3 installed on your machine. We will create a Python project in IntelliJ IDEA, change its Python SDK to a virtualenv-based Python SDK, install Pyspark in this virtualenv, and finally test it with a small Pyspark hello world.
Pyspark gotchas for Scala Spark developers
on 2021-01-22
Apache Spark is developed in Scala. However, its Python API is more and more popular as Python becomes the main language of data science. Although the Python and Scala APIs are very close, there are some differences that can prevent a developer used to one API from smoothly using the other. This article lists those small differences, from the point of view of a Scala Spark developer wanting to use PySpark.
Spark custom aggregator behavior on ordered window with duplicates
on 2020-12-06
User-defined aggregate functions are a powerful tool in Spark: you can avoid a lot of useless computation by crafting aggregate functions that do exactly what you want. However, sometimes their behavior can be surprising. For instance, be careful when using a custom aggregator over a window ordered by a column that contains duplicate values: the buffer is not flushed at each row but only when the value in the ordering column changes.
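The same surprise can be reproduced with a built-in aggregate, because the default frame of an ordered window is RANGE-based and groups tied rows together; a minimal sketch with made-up data:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.sum

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// "ord" contains a duplicate value (two rows with ord = 1).
val df = Seq(("a", 1, 10), ("a", 1, 20), ("a", 2, 30)).toDF("key", "ord", "value")

// The default frame of an ordered window is RANGE BETWEEN UNBOUNDED PRECEDING
// AND CURRENT ROW, so rows that tie on "ord" all get the same aggregated value.
val w = Window.partitionBy("key").orderBy("ord")
df.withColumn("running_sum", sum($"value").over(w)).show()
// Both ord = 1 rows get running_sum = 30 (10 + 20), not 10 then 30.
```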
Option versus nullable: which type Spark deserializes faster
on 2020-11-12
Recently, I was wondering about Spark's deserialization performance, and especially about this question: when you have a nullable column in a dataframe, is it better to deserialize it to an Option or to a nullable type? Let's answer this question in this blog post. To do so, I define the following benchmark: I create simple input data and read it with three Spark applications that select a column, replace its null values with a default value, and write the result to parquet.
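As a rough illustration of the two strategies being compared (the record types, paths and default value below are hypothetical, not the post's benchmark code):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Two ways to model the same nullable double column.
case class WithOption(value: Option[Double])
case class WithNullable(value: java.lang.Double)

// Option-based deserialization: null becomes None, replaced via getOrElse.
val viaOption = spark.read.parquet("/tmp/input") // hypothetical path
  .as[WithOption]
  .map(_.value.getOrElse(0.0))
viaOption.write.mode("overwrite").parquet("/tmp/output_option")

// Nullable-based deserialization: null is replaced with an explicit check.
val viaNullable = spark.read.parquet("/tmp/input")
  .as[WithNullable]
  .map(r => if (r.value == null) 0.0 else r.value.doubleValue())
viaNullable.write.mode("overwrite").parquet("/tmp/output_nullable")
```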
Reading parquets with different schemas in Spark
on 2020-10-25
Yesterday, I ran into a behavior of Spark's DataFrameReader when reading Parquet data that can be misleading. If we have several parquet files with different schemas in the same data directory, and if we neither provide a schema nor use the mergeSchema option, the inferred schema depends on the order of the parquet files in the data directory. In this setup, I am reading data stored in Parquet format.
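A minimal way to reproduce this kind of behavior (the directory path and columns below are made up):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val dir = "/tmp/parquet_mixed" // hypothetical directory

// Two appends with different schemas into the same directory.
Seq((1, "alice")).toDF("id", "name").write.mode("append").parquet(dir)
Seq((2, "bob", true)).toDF("id", "name", "flag").write.mode("append").parquet(dir)

// Without a user-provided schema or mergeSchema, the inferred schema depends on
// which parquet file the reader picks first; "flag" may or may not show up.
spark.read.parquet(dir).printSchema()

// With mergeSchema, the schemas of all files are merged and "flag" is always present.
spark.read.option("mergeSchema", "true").parquet(dir).printSchema()
```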