
Machine Learning Notes

This repository contains my personal notes on Machine Learning, collected from my studies at Ca' Foscari University (Data Science program) and from independent learning. The notes are organized into different topics and stored in Markdown notebook files. They are a work in progress and will be updated as I continue studying and exploring new concepts.

Note: These notes are intended for personal learning and reference. They may contain errors or incomplete sections.


Contents

1. Supervised and Unsupervised Learning

2. Fundamentals of Neural Networks

  • Core concepts of neural networks, including architecture, activation functions, and optimization
  • Model evaluation, overfitting, underfitting, and regularization techniques
  • Notebook: Fundamentals of Neural Networks

3. NLP and Large Language Models (LLMs)

  • NLP tasks: text classification, sentiment analysis, NER, POS tagging, translation, summarization, question answering, and text generation
  • Core concepts: tokenization, embeddings, language modeling, attention, seq2seq models, transformers
  • Large Language Models (LLMs), generative models, and applications
  • Tools and frameworks for NLP and LLMs
  • Notebook: NLP and LLM

4. Generative AI (GenAI)

  • Types of generative models: GANs, VAEs, diffusion models, autoregressive models
  • Neural networks and deep learning concepts for generative models
  • Training, fine-tuning, and prompt engineering
  • Large Language Models (LLMs) and multi-modal AI
  • Retrieval-Augmented Generation (RAG), vector storage, and chunking
  • Tools, frameworks, and platforms for GenAI
  • Notebook: Generative AI


Mathematical Background

To fully understand these notes, a solid grasp of the following mathematical and statistical concepts is recommended:

1. Linear Algebra

  • Vectors, matrices, and operations (addition, multiplication)
  • Dot product, cross product
  • Eigenvalues and eigenvectors
  • Matrix decomposition (SVD, LU)
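
As a quick illustration of the last two points, both eigendecomposition and SVD can be computed with NumPy (NumPy itself is an assumption here; the notes may use other tools):

```python
import numpy as np

# A small symmetric matrix; the values are illustrative only.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# Eigendecomposition of a symmetric matrix (eigenvalues in ascending order).
eigenvalues, eigenvectors = np.linalg.eigh(A)

# Singular Value Decomposition: A = U @ diag(S) @ Vt.
U, S, Vt = np.linalg.svd(A)

# Multiplying the factors back together recovers the original matrix.
A_reconstructed = U @ np.diag(S) @ Vt
print(np.allclose(A, A_reconstructed))  # True
```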

2. Calculus

  • Derivatives and gradients
  • Partial derivatives
  • Chain rule
  • Gradient descent and optimization basics
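
To connect derivatives and optimization, here is a minimal gradient-descent sketch on the toy loss f(x) = (x - 3)^2, whose derivative is f'(x) = 2(x - 3); the learning rate and step count are arbitrary illustrative choices:

```python
def gradient_descent(lr=0.1, steps=100):
    """Minimize f(x) = (x - 3)**2 by stepping against the gradient."""
    x = 0.0  # arbitrary starting point
    for _ in range(steps):
        grad = 2 * (x - 3)  # derivative of the loss at the current x
        x -= lr * grad      # update rule: x <- x - lr * f'(x)
    return x

print(gradient_descent())  # converges toward the minimum at x = 3
```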

3. Probability and Statistics

  • Probability distributions (normal, binomial, Poisson)
  • Expectation, variance, covariance
  • Conditional probability and Bayes’ theorem
  • Sampling, estimation, hypothesis testing
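
A worked example of Bayes' theorem, P(A|B) = P(B|A)·P(A) / P(B), with made-up numbers for a diagnostic test (99% sensitivity, 5% false-positive rate, 1% prevalence):

```python
p_a = 0.01             # prior P(A): prevalence of the condition
p_b_given_a = 0.99     # likelihood P(B|A): sensitivity of the test
p_b_given_not_a = 0.05 # false-positive rate P(B|not A)

# Law of total probability: P(B) over both ways a positive can occur.
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: posterior probability of the condition given a positive.
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 3))  # about 0.167 -- far below the 0.99 sensitivity
```

The point of the example: with a low prior, even an accurate test yields a modest posterior.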

4. Additional Concepts

  • Logarithms and exponentials
  • Norms (L1, L2)
  • Basic combinatorics (for probabilistic reasoning)
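
For the norms, a one-line NumPy sketch (again assuming NumPy, since these norms appear constantly in regularization):

```python
import numpy as np

v = np.array([3.0, -4.0])

l1 = np.linalg.norm(v, ord=1)  # L1 norm: sum of absolute values
l2 = np.linalg.norm(v)         # L2 norm: Euclidean length (default ord=2)

print(l1, l2)  # 7.0 5.0
```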

These concepts are not explained in the notes.


How to Use These Notes

  • Each Markdown file is self-contained and can be read independently.
  • These notebooks are also available on my Kaggle profile.
  • These notes are personal and intended as a study guide for ML concepts.
  • They are a work in progress and will be updated as I continue learning.
