June 8, 2022

Spark Execution

Spark provides an API and an engine; the engine is responsible for analyzing the code and performing several optimizations. But how does this work? We can do two kinds of operations with Spark: transformations and actions. Transformations are operations on top of the data that modify it but do not yield a result directly, because they are all lazily evaluated: you can add new columns, filter rows, or perform some computations, and none of it will be executed immediately. Read more
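A minimal PySpark sketch of this laziness (the dataframe and column names here are illustrative, not from the post): the transformations only build up a plan, and the action at the end is what triggers execution.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lazy-eval-demo").getOrCreate()

df = spark.range(1_000_000)  # a dataframe with a single `id` column

# Transformations: lazily evaluated, nothing runs yet.
transformed = (
    df.withColumn("doubled", F.col("id") * 2)  # add a new column
      .filter(F.col("id") % 2 == 0)            # filter rows
)

# An action: triggers the whole plan to actually execute.
print(transformed.count())
```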

June 7, 2022

Spark Architecture

Spark works on top of a cluster supervised by a cluster manager. The latter is responsible for:

- Tracking resource allocation across all applications running on the cluster.
- Monitoring the health of all the nodes.

Inside each node there is a node manager, which is responsible for tracking that node's health and resources and informing the cluster manager.

[Diagram: a Cluster Manager supervising several Node Managers]

When we run a Spark application we generate processes inside the cluster, where one node will act as the Driver and the rest will be Workers. Here there are two main points: Read more
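As a rough illustration of where these pieces fit, here is a hypothetical SparkSession setup; the resource values and settings are assumptions, not from the post (in practice they are usually passed at spark-submit time):

```python
from pyspark.sql import SparkSession

# The driver process runs the application's main logic; the executor
# processes, spread across the worker nodes, do the distributed work.
spark = (
    SparkSession.builder
    .appName("architecture-demo")
    .config("spark.executor.instances", "4")  # executors across workers
    .config("spark.executor.memory", "2g")
    .config("spark.driver.memory", "1g")
    .getOrCreate()
)
```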

May 31, 2022

Faker with PySpark

I’m preparing a small blog post about some tweaks I’ve made to a delta table, but I want to dig into the Spark UI differences first. Since this was done as part of my work, I’m reproducing the problem with some generated data. I didn’t know about Faker, and boy, it is really simple and easy. In this case, I want to generate a small dataset for a product dimension table including its id, category, and price. Read more
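A minimal sketch of what this could look like, assuming a plain Faker instance and illustrative category names (the actual generators used in the post may differ):

```python
from faker import Faker
from pyspark.sql import SparkSession

fake = Faker()
spark = SparkSession.builder.appName("faker-demo").getOrCreate()

# Hypothetical product dimension: id, category, and price.
categories = ("clothes", "shoes", "accessories")
rows = [
    (
        fake.uuid4(),                              # product id
        fake.random_element(elements=categories),  # product category
        fake.pyfloat(right_digits=2, min_value=1, max_value=100),  # price
    )
    for _ in range(100)
]

df = spark.createDataFrame(rows, schema="id string, category string, price double")
df.show(5)
```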

March 21, 2022

Git 101

From time to time I get to the same place: telling people about git, what it solves, and some basic usage. Since I’ve done it a lot recently, I wanted to write down a post and enjoy it. What is git? Git is a gift from the gods for use cases like the following: “My laptop broke! I need the data, there is a whole month of work there!” Read more

February 7, 2022

Sbt tests

Lately at work I’ve been using delta a lot for some dimension tables, and these tables perform partial updates of rows to replicate the business logic. This leads to several tests that reproduce a given state of the table and perform the relevant updates to check all the flows, and therefore to an execution overhead for that kind of test that ends up being exhausting. One of the proposed solutions was to include a parameter in the builds to skip the test execution step. That is legitimate, but at least to me it feels somewhat arbitrary. Looking for another consensus we arrived at this: on pull requests all the tests will run, and on the rest of the builds (manual or automatic branch builds) these tests will be excluded, so that when experimenting or during branch integrations we are not accumulating time on tests that have already been validated. Read more

November 11, 2021

Multiplying rows in Spark

Earlier this week I reviewed a Pull Request that has bothered me since the first time I saw it. Let’s say we work for a bank, and we are going to give cash to our clients if they get some people to join our bank. And we have an advertising campaign definition like this:

| campaign_id | inviter_cash | receiver_cash |
|-------------|--------------|---------------|
| FakeBank001 | 50           | 30            |
| FakeBank002 | 40           | 20            |
| FakeBank003 | 30           | 20            |

And then our BI team defines the schema they want for their dashboards. Read more
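The BI schema isn't shown in this excerpt, but assuming the goal is one row per cash type, here is a hedged PySpark sketch of the "multiplying rows" idea using stack:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("multiply-rows").getOrCreate()

campaigns = spark.createDataFrame(
    [("FakeBank001", 50, 30), ("FakeBank002", 40, 20), ("FakeBank003", 30, 20)],
    schema="campaign_id string, inviter_cash int, receiver_cash int",
)

# `stack` turns each campaign row into two rows, one per cash type.
unpivoted = campaigns.select(
    "campaign_id",
    F.expr(
        "stack(2, 'inviter', inviter_cash, 'receiver', receiver_cash) "
        "as (role, cash)"
    ),
)
unpivoted.show()
```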

November 7, 2021

The horrible azure devops ui

Disclaimer: I read the docs. I know this is just complaining and not giving feedback, but man, this UI is still horrible. So… let me set the scene: there was a connection update between DevOps and Bitbucket, and suddenly most of our pipelines stopped working. They told me to change the connection in the yaml file, and that didn’t work. I know for sure that there are three parts involved in a pipeline: Read more

November 5, 2021

Regex 101

“You will spend your whole life relearning regex; there is a beginning, but never an end.” Last year I participated in some small code problems and practised some regex. I got used to it and felt quite good at it. And today I had to use it again. I had the following dataframe:

| product | attributes             |
|---------|------------------------|
| 1       | (SIZE-36)              |
| 2       | (COLOR-RED)            |
| 3       | (SIZE-38, COLOR-BLUE)  |
| 4       | (COLOR-GREEN, SIZE-39) |

A wonderful set of strings merged with properties that could vary. And we wanted one column for each: Read more
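The post's actual solution isn't shown in this excerpt; as one hedged way to get a column per attribute, a PySpark sketch with regexp_extract (the patterns are my assumption):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("regex-demo").getOrCreate()

df = spark.createDataFrame(
    [(1, "(SIZE-36)"), (2, "(COLOR-RED)"),
     (3, "(SIZE-38, COLOR-BLUE)"), (4, "(COLOR-GREEN, SIZE-39)")],
    schema="product int, attributes string",
)

# regexp_extract returns an empty string when the pattern does not match,
# which conveniently handles rows where an attribute is absent.
result = df.select(
    "product",
    F.regexp_extract("attributes", r"SIZE-(\d+)", 1).alias("size"),
    F.regexp_extract("attributes", r"COLOR-(\w+)", 1).alias("color"),
)
result.show()
```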
