An end-to-end data pipeline that extracts Divvy bikeshare data from the web, loads it into a data lake and data warehouse, transforms it with dbt, and surfaces the results in a Looker Studio dashboard. The pipeline is orchestrated with Prefect.
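As a rough illustration of what such an orchestration could look like, here is a minimal Prefect flow sketch; the URL, file path, and task bodies are hypothetical stand-ins for the project's actual extract, load, and dbt steps.

```python
# Minimal sketch of a Prefect-orchestrated ELT flow (hypothetical names/paths).
import subprocess
import requests
from prefect import flow, task

@task
def extract(url: str) -> bytes:
    # Download a raw trip-data archive from the web.
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    return resp.content

@task
def load_to_lake(payload: bytes, path: str) -> str:
    # Write the raw file to the data lake (a local path here for illustration).
    with open(path, "wb") as f:
        f.write(payload)
    return path

@task
def run_dbt() -> None:
    # Hand off to dbt to transform the warehouse tables.
    subprocess.run(["dbt", "run"], check=True)

@flow
def divvy_elt(url: str = "https://example.com/divvy-tripdata.zip") -> None:
    raw = extract(url)
    load_to_lake(raw, "lake/divvy-tripdata.zip")
    run_dbt()

if __name__ == "__main__":
    divvy_elt()
```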
🌄📈📉 A data engineering project 🌈 that implements an ELT data pipeline using Dagster, Docker, dbt, Polars, Snowflake, and PostgreSQL. Data sourced from the Kaggle website 🔥
🛸 This project showcases an Extract, Load, Transform (ELT) pipeline built with Python, Apache Spark, Delta Lake, and Docker. The objective of the project is to scrape UFO sighting data from NUFORC and process it through the Medallion architecture to create a star schema in the Gold layer that is ready for analysis.
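A compressed sketch of that Medallion flow, under assumed conditions (a local Spark session with the Delta Lake package installed; the paths, table layout, and column names are hypothetical):

```python
# Sketch: Bronze -> Silver -> Gold Medallion layers with Spark + Delta Lake.
# Assumes delta-spark is configured; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ufo-medallion").getOrCreate()

# Bronze: land the raw scraped sightings as-is.
raw = spark.read.json("lake/raw/nuforc_sightings.json")
raw.write.format("delta").mode("overwrite").save("lake/bronze/sightings")

# Silver: clean, type, and de-duplicate.
silver = (
    spark.read.format("delta").load("lake/bronze/sightings")
    .withColumn("sighted_at", F.to_timestamp("sighted_at"))
    .dropDuplicates(["sighting_id"])
)
silver.write.format("delta").mode("overwrite").save("lake/silver/sightings")

# Gold: a fact table that joins to dimension tables, forming the star schema.
fact = silver.select("sighting_id", "sighted_at", "city", "state", "shape")
fact.write.format("delta").mode("overwrite").save("lake/gold/fact_sightings")
```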
This is an ELT data pipeline set up to track the activities of an e-commerce website based on orders, reviews, deliveries, and shipment dates. The project uses technologies such as Airflow, AWS RDS (PostgreSQL), and Python.
This project was created as part of an assessment for DigitalXC AI. It demonstrates a cloud-based ELT pipeline using AWS MWAA, Airflow, dbt, PostgreSQL, and Superset. The pipeline automates data ingestion from S3, transformation with dbt, and visualization through Superset, following modern data engineering practices on a scalable AWS architecture.
Custom ELT pipeline for scraping job listings from 'Welcome to the Jungle' (France), transforming and cleaning the data, and visualizing it for job market analysis.
Enterprise ELT framework using Airbyte, dbt, Prefect, and Power BI for seamless data extraction, transformation, and visualization. This project showcases a scalable pipeline integrating SQL Server, GCP, and tabular models in Power BI for real-time analytics and business intelligence. Ideal for data engineers and analysts seeking efficient ETL/ELT workflows.
A YAML-based data pipeline framework that runs both locally and fully in-browser, designed for data engineers, ML teams, and SaaS developers who need flexible, SQL-powered pipelines.
This repository demonstrates an end-to-end ELT (Extract, Load, Transform) pipeline that extracts data from a source PostgreSQL database, loads it into a destination PostgreSQL database, and performs data transformations using dbt (Data Build Tool).
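A minimal sketch of the extract-and-load half of such a pipeline, assuming psycopg2 and hypothetical connection strings, table names, and columns; dbt then runs its transformations inside the destination database:

```python
# Sketch: copy rows from a source Postgres table into a destination Postgres
# table, then hand off to dbt for in-warehouse transformation.
# Connection strings, tables, and columns are hypothetical.
import subprocess
import psycopg2

SRC_DSN = "postgresql://user:pass@source-host:5432/source_db"
DST_DSN = "postgresql://user:pass@dest-host:5432/dest_db"

with psycopg2.connect(SRC_DSN) as src, psycopg2.connect(DST_DSN) as dst:
    with src.cursor() as read_cur, dst.cursor() as write_cur:
        read_cur.execute("SELECT id, name, created_at FROM users;")
        rows = read_cur.fetchall()
        write_cur.executemany(
            "INSERT INTO raw_users (id, name, created_at) VALUES (%s, %s, %s)"
            " ON CONFLICT (id) DO NOTHING;",
            rows,
        )
    dst.commit()

# Transform: let dbt build models on top of the raw tables in the destination.
subprocess.run(["dbt", "run"], check=True)
```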
Extract, Load, and Transform (ELT) refers to the process of extracting data from source systems, loading it into the data warehouse environment, and then transforming it using in-database operations such as SQL. It relies on having the capacity to store large volumes of raw data up front.
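To make the "transform in the warehouse" step concrete, here is a small illustrative example, assuming a warehouse reachable via psycopg2 and a hypothetical raw_orders table that the load step has already populated:

```python
# Illustration of the ELT "T": raw data is already loaded in the warehouse,
# and the transformation runs as in-database SQL rather than in an external
# engine. The connection details, table, and columns are hypothetical.
import psycopg2

WAREHOUSE_DSN = "postgresql://user:pass@warehouse-host:5432/analytics"

TRANSFORM_SQL = """
CREATE TABLE IF NOT EXISTS daily_order_totals AS
SELECT
    order_date::date AS order_day,
    COUNT(*)         AS order_count,
    SUM(amount)      AS total_amount
FROM raw_orders
GROUP BY order_date::date;
"""

with psycopg2.connect(WAREHOUSE_DSN) as conn:
    with conn.cursor() as cur:
        cur.execute(TRANSFORM_SQL)
    # The connection context manager commits the transaction on exit.
```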