



Projects — Iyanuvicky22

My central hub for data science, ETL, and automation projects — all open for review, reuse, and contributions.


Project Showcase

Below is a curated overview of the projects in this repository, grouped by theme:

Automation & Cloud Pipelines

| Project | Overview |
| --- | --- |
| data_epic_capstone | AI agents directory with EC2/S3 automation (Bash, PowerShell, GitHub Actions). Part of a larger ETL orchestration setup. |
| etl_pipeline | End-to-end ETL workflow: extract data, transform it with Pandas, and load it into PostgreSQL, with a FastAPI layer on top. See the sketch below. |
| aws_project | Utilities and scripts for AWS automation, covering areas such as EC2/S3 management, IAM, and infrastructure provisioning. |
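
To give a flavor of the etl_pipeline workflow, here is a minimal sketch of an extract-transform-load step with Pandas and SQLAlchemy. The CSV path, connection string, and table name are placeholders, not the project's actual values.

```python
# Minimal ETL sketch: extract a CSV, clean it with Pandas, load it into PostgreSQL.
# The file name, connection string, and table name below are placeholders.
import pandas as pd
from sqlalchemy import create_engine


def run_etl(csv_path: str = "sales.csv") -> None:
    # Extract: read raw data from disk (could equally be an API response or S3 object).
    raw = pd.read_csv(csv_path)

    # Transform: drop duplicates, normalise column names, stamp the load time.
    df = raw.drop_duplicates()
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df["loaded_at"] = pd.Timestamp.now(tz="UTC")

    # Load: write the cleaned frame into a PostgreSQL table.
    engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/etl_db")
    df.to_sql("sales_clean", engine, if_exists="replace", index=False)


if __name__ == "__main__":
    run_etl()
```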

Data Processing & API Services

| Project | Overview |
| --- | --- |
| data-processing-api | API for ingesting and analyzing an e-commerce dataset using Pandas, Polars, and FastAPI. |
| ecommerce-api | Cleans and enriches sales data, offering analytics on top products and regional sales via FastAPI and SQLAlchemy. |
| web-scraping-api | Weather data scraper (BeautifulSoup + Requests), cleaned with Pandas and exposed via FastAPI. See the sketch below. |
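
In the spirit of web-scraping-api, the sketch below shows a scrape-and-serve endpoint built with Requests, BeautifulSoup, and FastAPI. The target URL, the `temp` CSS class, and the `/weather/{city}` route are hypothetical placeholders, not the project's real ones.

```python
# Sketch of a scrape-and-serve endpoint: fetch a page, parse it, return JSON.
# The URL and the HTML structure it assumes are hypothetical.
import requests
from bs4 import BeautifulSoup
from fastapi import FastAPI, HTTPException

app = FastAPI(title="weather-scraper-sketch")


@app.get("/weather/{city}")
def get_weather(city: str) -> dict:
    resp = requests.get(f"https://example.com/weather/{city}", timeout=10)
    if resp.status_code != 200:
        raise HTTPException(status_code=502, detail="Upstream site unavailable")

    soup = BeautifulSoup(resp.text, "html.parser")
    # Assume the page exposes the temperature in an element with class "temp".
    temp_tag = soup.find(class_="temp")
    if temp_tag is None:
        raise HTTPException(status_code=404, detail="Temperature not found")

    return {"city": city, "temperature": temp_tag.get_text(strip=True)}
```

If the file is saved as `weather_sketch.py`, it can be served locally with `uvicorn weather_sketch:app --reload`.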

ETL & Database Engineering

| Project | Overview |
| --- | --- |
| ETL Pipeline (Movies Dataset) | Combines multiple movie datasets pulled via RapidAPI, transforms them with Python and SQLAlchemy, and serves insights through APIs backed by PostgreSQL. See the sketch below. |
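
As a rough illustration of the fetch-and-persist half of the movies pipeline, here is a sketch that pulls records from a RapidAPI-hosted endpoint and stores them with SQLAlchemy. The API host, response shape, table schema, and connection string are all illustrative assumptions rather than the project's actual definitions.

```python
# Sketch: fetch movie records from a RapidAPI-style endpoint, persist with SQLAlchemy.
# The host, response fields, schema, and connection string are illustrative only.
import requests
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Movie(Base):
    __tablename__ = "movies"
    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)
    year = Column(Integer)


def load_movies(api_key: str) -> None:
    headers = {
        "X-RapidAPI-Key": api_key,
        "X-RapidAPI-Host": "example-movies.p.rapidapi.com",  # placeholder host
    }
    resp = requests.get(
        "https://example-movies.p.rapidapi.com/titles", headers=headers, timeout=10
    )
    resp.raise_for_status()

    engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/movies_db")
    Base.metadata.create_all(engine)
    with Session(engine) as session:
        # Assume the payload looks like {"results": [{"title": ..., "year": ...}, ...]}.
        for item in resp.json().get("results", []):
            session.add(Movie(title=item["title"], year=item.get("year")))
        session.commit()
```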

Team & Soft-Skills Projects

| Project | Overview |
| --- | --- |
| team_agile | Demonstrates agile team-collaboration principles, such as documentation, role-play exercises, and process design. |

Why These Projects Matter

  • Automation-first mindset: Multiple projects integrate CI/CD, scripting, and cloud orchestration, reflecting my ambition to build scalable and maintainable pipelines.
  • Data science at the core: From e-commerce analytics to scraping and transformation, I have put data insights front and center.
  • Modern tooling and architecture: I work with FastAPI, SQLAlchemy, PostgreSQL, and libraries like Polars, all central to today's data workflows.
  • Full-stack capability: I build both the backend data pipelines and the APIs that expose their analytics, showing the breadth of my data science practice.

How to Explore These Projects

  1. Navigate to the project folder you're interested in.
  2. Check the README.md inside each (if available) for detailed setup, tools, and usage.
  3. Look for Dockerfile, GitHub Actions, or deployment configs to see how automation is set up.
  4. Run sample data inputs or call the API endpoints to explore functionality (see the example request below).
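
Once a project's API is running locally (for example with `uvicorn main:app --reload`, assuming the app lives in `main.py`), a quick request is enough to confirm it responds. The port and route below are illustrative, not fixed by any particular project.

```python
# Quick smoke test against a locally running FastAPI service.
# The port and endpoint path are illustrative placeholders.
import requests

response = requests.get("http://127.0.0.1:8000/weather/lagos", timeout=10)
print(response.status_code)
print(response.json())
```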

What's Next (Growth Plan)

  • Expand data_epic_capstone with an ETL dashboard and make it deployable via Netlify or another cloud provider.
  • Add README badges per project—for CI status, main language, or live API links.
  • Enhance docs with architecture diagrams (Mermaid or PNG) to visualize workflows.

Connect with Me

Feel free to reach out — I'd love to discuss data engineering, automation workflows, or cloud-native ETL practices. [iyanuvicky@gmail.com]


⭐ From Iyanuvicky22
