Designing a "low-effort" ELT system using Stitch and dbt

With the advent of powerful data warehouses like Snowflake, BigQuery, and Redshift Spectrum, which separate storage from compute, it has become very economical to store data in the warehouse and transform it as required. This post goes over how to design such an ELT system using Stitch and dbt. The main objective is to keep code complexity and server management low while automating as much as possible.
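The core ELT idea above can be made concrete with a small, runnable sketch. In the post's stack, Stitch would handle the extract/load into a warehouse such as Snowflake and dbt would run the SQL transforms; here `sqlite3` stands in for the warehouse so the example runs anywhere, and all table and column names are hypothetical.

```python
import sqlite3

# A minimal ELT sketch. sqlite3 stands in for the warehouse; the raw_orders
# and paid_orders tables are made-up examples, not from the post.
conn = sqlite3.connect(":memory:")

# "EL": land the source rows as-is -- everything is text, nothing cleaned.
conn.execute("CREATE TABLE raw_orders (id TEXT, amount TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [("1", "19.99", "paid"), ("2", "5.00", "refunded"), ("3", "42.50", "paid")],
)

# "T": transform inside the warehouse with SQL, the way a dbt model would.
conn.execute("""
    CREATE TABLE paid_orders AS
    SELECT CAST(id AS INTEGER) AS id, CAST(amount AS REAL) AS amount
    FROM raw_orders
    WHERE status = 'paid'
""")

total = conn.execute("SELECT SUM(amount) FROM paid_orders").fetchone()[0]
print(round(total, 2))  # 62.49
```

The point of the pattern is visible even at this scale: the load step is dumb and cheap, and all cleaning logic lives in versionable SQL that runs where the data already sits.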

3 Key Techniques to Optimize Your Apache Spark Code

This post covers key techniques for optimizing your Apache Spark code. You will learn what distributed data storage and distributed data processing systems are, how they operate, and how to use them efficiently. Go beyond basic syntax and learn three powerful strategies to drastically improve the performance of your Apache Spark project.
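The excerpt does not list the three strategies, so as a hedged illustration here is one classic Spark optimization, the broadcast (map-side) join, sketched in plain Python with made-up data. In PySpark you would wrap the small DataFrame in `pyspark.sql.functions.broadcast()` to get the same effect.

```python
# Broadcast-join idea: when one side of a join is small, ship it whole to
# every worker and probe it locally instead of shuffling both sides by key.
# The data below is illustrative, not from the post.

# Large fact "table": (user_id, amount) pairs, imagined as partitioned rows.
orders = [(1, 10.0), (2, 5.5), (1, 3.25), (3, 8.0)]

# Small dimension table: user_id -> country. This is what gets "broadcast".
users = {1: "DE", 2: "US", 3: "FR"}

# Each "worker" joins its partition against the local copy -- a hash lookup
# per row, with no shuffle of the large side.
joined = [(uid, amount, users[uid]) for uid, amount in orders if uid in users]
print(joined)
```

In real Spark the same shape avoids the expensive sort/shuffle exchange that a default join plan would introduce, which is why broadcast joins are a common first optimization.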

Change Data Capture Using Debezium, Kafka, and Postgres

A change data capture tutorial using Debezium, Kafka, and Postgres. Change data capture (CDC) is a software design pattern used to capture changes to data and take corresponding action based on those changes. A change is usually a create, update, or delete, and the corresponding action typically occurs in another system in response to the change made in the source system.
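The pattern can be sketched in a few lines: Debezium-style change events (op codes `"c"`/`"u"`/`"d"` with `before`/`after` payloads, a simplification of Debezium's real event envelope) are applied to an in-memory dict standing in for the downstream system. In production the events would arrive via Kafka; the table contents here are made up.

```python
# Downstream replica, keyed by primary key. In a real pipeline this would be
# another database, a cache, a search index, etc.
replica = {}

def apply_change(event):
    """Apply one CDC event to the replica (simplified Debezium envelope)."""
    op = event["op"]
    if op in ("c", "u"):          # create or update: take the "after" image
        row = event["after"]
        replica[row["id"]] = row
    elif op == "d":               # delete: remove by the "before" image's key
        replica.pop(event["before"]["id"], None)

events = [
    {"op": "c", "before": None, "after": {"id": 1, "email": "a@example.com"}},
    {"op": "u", "before": {"id": 1, "email": "a@example.com"},
     "after": {"id": 1, "email": "b@example.com"}},
    {"op": "c", "before": None, "after": {"id": 2, "email": "c@example.com"}},
    {"op": "d", "before": {"id": 2, "email": "c@example.com"}, "after": None},
]
for event in events:
    apply_change(event)

print(replica)  # {1: {'id': 1, 'email': 'b@example.com'}}
```

Replaying the event stream in order leaves the replica consistent with the source table, which is exactly the guarantee the Debezium/Kafka pipeline in the post provides at scale.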

Advantages of Using dbt (Data Build Tool)

In this article we go over the reasoning behind why someone might want to use dbt. If you are interested in learning dbt, check out this article. Some common questions from data engineers about dbt are:

“It is not very clear to me why I would use dbt instead of running SQL queries on Airflow.”

Review: Building a Real-Time Data Warehouse

Many data engineers coming from traditional batch processing frameworks have questions about real-time data processing systems, such as:

“What kind of data model did you implement, for real-time processing?”