Project - Build a data warehouse using SQL

Build a modern data warehouse with PostgreSQL/Snowflake/DuckDB, including ETL processes, data modelling, and analytics.

ETL architecture Source: Data With Baraa

Prerequisites and Setup

We use an Ubuntu 20.04 LTS machine (AWS EC2 or GCP) for this project.

We need Git, Make, and Docker.

To make things easier, I have scripted these prerequisites. Just clone my repo and run the instructions below.

sudo apt update && sudo apt install git make -y
git clone https://github.com/deepakramani/sql-data-warehouse-project.git
cd sql-data-warehouse-project
make install_docker
source ~/.bashrc

Log out and log back in to the instance. To test whether Docker is working, run

docker run --rm hello-world # should return "Hello from Docker!" without errors

Set environment variables:

export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=postgres
export POSTGRES_HOST=postgres
export POSTGRES_DB=sql_dwh_db

Now we're ready to start our project.

cd ~/sql-data-warehouse-project
make up # creates the database and schemas for the medallion architecture
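
Under the hood, make up creates the medallion schemas inside the sql_dwh_db database. A minimal sketch of what that amounts to, assuming the schemas are simply named bronze, silver, and gold:

-- one schema per medallion layer (schema names assumed for illustration)
CREATE SCHEMA IF NOT EXISTS bronze;
CREATE SCHEMA IF NOT EXISTS silver;
CREATE SCHEMA IF NOT EXISTS gold;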

Extract-Transform-Load (ETL) Process

We begin the ETL process of the data pipeline with the bronze layer. This layer is usually called the staging layer, as raw data from various sources is dumped into temporary tables. We choose a full extract instead of an incremental load here. The raw data comes as CSV files.

Bronze Layer

Setup bronze layer and populate raw data to tables

cd ~/sql-data-warehouse-project
make setup_bronze_tables
make populate_bronze_tables

Now the bronze layer tables have raw data populated.
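
The populate step is essentially a full load: each bronze table is truncated and re-filled from its CSV file. A minimal sketch, with the table name and file path assumed for illustration (they are not necessarily the ones used in the repo's scripts):

-- full load: wipe the staging table and bulk-copy the raw CSV back in
-- (bronze.crm_cust_info and the file path are assumed names for this example)
TRUNCATE TABLE bronze.crm_cust_info;
COPY bronze.crm_cust_info
FROM '/data/source_crm/cust_info.csv'
WITH (FORMAT csv, HEADER true);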

Silver Layer

Setup silver layer table

(The script is designed to run only once, at the beginning, since the gold layer tables downstream depend on the silver tables.)

cd ~/sql-data-warehouse-project
make setup_silver_tables

In the silver layer, data cleansing and transformation are carried out on the bronze layer tables.

Data Cleansing

This step includes the following (see the example query after this list):

  1. Removing duplicate entries
  2. Data filtering
  3. Handling missing/invalid data
  4. Handling unwanted white spaces
  5. Data type casting
  6. Outlier detection
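
To give a flavour of these steps, here is a sketch of a cleansing query. The table and column names (bronze.crm_cust_info, cst_id, cst_firstname, cst_lastname, cst_create_date) are assumptions for illustration only, not necessarily the ones used in the repo's scripts:

-- deduplicate on the key, trim white space, cast types, drop rows with a missing key
INSERT INTO silver.crm_cust_info (cst_id, cst_firstname, cst_lastname, cst_create_date)
SELECT
    cst_id,
    TRIM(cst_firstname),                  -- remove unwanted white space
    TRIM(cst_lastname),
    CAST(cst_create_date AS DATE)         -- data type casting
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY cst_id ORDER BY cst_create_date DESC) AS rn
    FROM bronze.crm_cust_info
    WHERE cst_id IS NOT NULL              -- handle missing/invalid keys
) t
WHERE rn = 1;                             -- keep only the latest record per key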

Data Transformation

This step includes the following (see the example queries after this list):

  1. Data enrichment
  2. Data integration
  3. Deriving new columns
  4. Data aggregations
  5. Applying business rules and logic
  6. Data normalisation and standardisation
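
Again as an illustration only, transformation queries might derive new columns from business rules and standardise coded values. All names below (bronze.crm_sales_details, sls_quantity, sls_price, cst_gndr, and so on) are assumptions, not the repo's actual columns:

-- derive a new column via a business rule
SELECT
    sls_ord_num,
    sls_quantity,
    ABS(sls_price)                 AS sls_price,   -- business rule: price must be positive
    sls_quantity * ABS(sls_price)  AS sls_sales    -- derived column
FROM bronze.crm_sales_details;

-- standardisation: map coded values to readable ones
SELECT
    cst_id,
    CASE UPPER(TRIM(cst_gndr))
        WHEN 'F' THEN 'Female'
        WHEN 'M' THEN 'Male'
        ELSE 'n/a'
    END AS cst_gndr
FROM bronze.crm_cust_info;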

Populate Silver layer tables

cd ~/sql-data-warehouse-project
make populate_silver_tables

As the data integration image (shown in the Gold layer section below) indicates, we design the silver tables to reflect how the source tables relate to one another.

Gold Layer (Business Logic Layer)

The image below shows how data flows downstream from the sources to its destination in the gold layer views.

Data Flow Source: Data With Baraa

Before the dimension and fact tables are created, it is important to know the relationships between tables. The data integration image below shows how each table is related to the others. This diagram helps when joining tables on the appropriate keys.

Data Integration

cd ~/sql-data-warehouse-project
make setup_gold_layer

Data Mart

In the image below, we can see how the gold layer dimension and fact tables/views (data marts) are created.

Data Mart
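
As a rough sketch of how such a view could be built, the query below joins cleaned silver tables and adds a surrogate key. Table, column, and view names are assumptions for illustration, not necessarily the ones used in the repo:

-- hypothetical dimension view: join cleaned silver tables and generate a surrogate key
CREATE OR REPLACE VIEW gold.dim_customers AS
SELECT
    ROW_NUMBER() OVER (ORDER BY ci.cst_id)  AS customer_key,   -- surrogate key
    ci.cst_id                               AS customer_id,
    ci.cst_firstname                        AS first_name,
    ci.cst_lastname                         AS last_name,
    la.cntry                                AS country
FROM silver.crm_cust_info ci
LEFT JOIN silver.erp_loc_a101 la
       ON ci.cst_id = la.cid;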

Testing - Data quality checks

Testing data quality is an integral part of the ETL process. It ensures bad data doesn't reach stakeholders or business users, avoiding potentially dire consequences.

Here we check data integrity, consistency, and correctness: primary key uniqueness, standardization, referential integrity, and the logical correctness of business rules.

Silver - data quality check

cd ~/sql-data-warehouse-project
make check_silver_data_quality
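
The silver checks boil down to queries that should return zero rows. Two hedged examples, with table and column names assumed for illustration:

-- expect no rows: the key must be unique and non-null
SELECT cst_id, COUNT(*)
FROM silver.crm_cust_info
GROUP BY cst_id
HAVING COUNT(*) > 1 OR cst_id IS NULL;

-- expect no rows: no leading/trailing white space should survive cleansing
SELECT cst_firstname
FROM silver.crm_cust_info
WHERE cst_firstname <> TRIM(cst_firstname);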

Gold - data quality check

cd ~/sql-data-warehouse-project
make check_gold_data_quality
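
The gold checks focus on referential integrity between facts and dimensions, again written so that a clean model returns zero rows. The view and column names below are assumptions:

-- expect no rows: every fact row must match a customer dimension row
SELECT f.*
FROM gold.fact_sales f
LEFT JOIN gold.dim_customers c
       ON f.customer_key = c.customer_key
WHERE c.customer_key IS NULL;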

Data Catalog

Gold layer views are usually consumed by business users. Helping them understand what each row in a table or view represents is important, so I provide a catalog file with the metadata of the views created.

Data Catalog

Future developments

  1. Take historical data into account
  2. Do incremental data loads rather than a full load each time
  3. Use an orchestration tool such as Dagster to orchestrate the ETL process
  4. Do EDA on the gold layer and derive reports for business users

Deleting resources

To bring down all containers and return to the original state, run the following:

make down
