Flipkart Data Analysis Using PySpark on Databricks


The project focuses on building an end-to-end data engineering pipeline using PySpark to address real-world business scenarios. Key steps include exploring and understanding the dataset structure, performing data cleaning to handle inconsistencies, and applying transformations to prepare the data for analysis. The workflow involves simulating practical use cases such as organizing product information, calculating metrics, and generating insights to meet business requirements. By leveraging PySpark's capabilities within the Databricks environment, the project demonstrates the implementation of a scalable and efficient data pipeline, providing a hands-on approach to solving data engineering challenges.

Link to the project: https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/19652298897236/3492530066299206/4655662666255799/latest.html

Steps Involved:

Data Exploration

  • Inspecting the dataset to understand its structure, including key columns like product ID, title, and ratings.
  • Referring to the data dictionary to interpret column meanings and verify the dataset.

Data Cleaning

  • Handling missing or invalid entries in the dataset.
  • Removing irrelevant rows to ensure data consistency.

Data Processing and Analysis

Performing operations such as:
  • Filtering data based on defined criteria (e.g., valid product ratings).
  • Aggregating data to calculate totals and averages for key metrics.
  • Analyzing patterns in product performance.

Project Insights

  • Identifying products with the highest ratings and consistent performance trends.
  • Analyzing key product categories contributing to overall trends.

Snapshots:

[Screenshot: 2024-11-23]


Credits: Be a programmer
