This repository covers the mathematical foundations and a practical implementation of supervised learning models, with a focus on understanding and applying Gradient Descent from scratch.
The core objective of this project is to build a clear understanding of how supervised learning models work under the hood—without relying on external machine learning libraries.
- Implements the math behind linear regression and cost functions
- Derives and applies Gradient Descent manually
- Demonstrates training from scratch using NumPy (see the sketch after this list)
- Provides clean and readable code to reinforce learning
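As a rough illustration of the approach (not the repository's exact code), here is a minimal sketch of single-feature linear regression trained with batch gradient descent using only NumPy; the synthetic data and names such as `w`, `b`, and `learning_rate` are assumptions made for the example:

```python
import numpy as np

# Synthetic single-feature data (assumption for this sketch; the repo's data may differ).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=100)
y = 3.0 * X + 2.0 + rng.normal(0, 1, size=100)

# Model: y_hat = w * x + b, trained by minimizing the mean squared error.
w, b = 0.0, 0.0
learning_rate = 0.01
iterations = 1000
n = len(X)

for _ in range(iterations):
    y_hat = w * X + b
    error = y_hat - y
    # Gradients of MSE = (1/n) * sum((y_hat - y)^2) with respect to w and b.
    dw = (2.0 / n) * np.dot(error, X)
    db = (2.0 / n) * np.sum(error)
    w -= learning_rate * dw
    b -= learning_rate * db

print(f"learned w = {w:.3f}, b = {b:.3f}")  # should approach the true values 3 and 2
```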
The goal is to reinforce the mathematical intuition behind supervised learning and gradient-based optimization by:
- Coding models from first principles
- Understanding the impact of learning rate, iterations, and convergence (illustrated in the sketch after this list)
- Visualizing the learning process
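One simple way to make the effect of the learning rate concrete is to record the cost at every iteration and plot the curves for a few rates side by side. The sketch below is an assumed illustration with synthetic data and a hypothetical `train` helper, not the repository's plotting code:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic data and settings (assumptions for this sketch).
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=100)
y = 3.0 * X + 2.0 + rng.normal(0, 1, size=100)
n = len(X)

def train(learning_rate, iterations=200):
    """Run batch gradient descent and record the MSE cost at each step."""
    w, b = 0.0, 0.0
    history = []
    for _ in range(iterations):
        y_hat = w * X + b
        error = y_hat - y
        history.append(np.mean(error ** 2))
        w -= learning_rate * (2.0 / n) * np.dot(error, X)
        b -= learning_rate * (2.0 / n) * np.sum(error)
    return history

# Compare convergence for a few learning rates.
for lr in (0.001, 0.01, 0.02):
    plt.plot(train(lr), label=f"lr = {lr}")

plt.xlabel("Iteration")
plt.ylabel("MSE cost")
plt.legend()
plt.show()
```

A rate that is too small converges slowly, while one that is too large can overshoot and diverge, which is exactly the trade-off this kind of plot makes visible.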
The project is built with:
- Python
- NumPy
- Matplotlib (for visualization)
Key concepts covered:
- Supervised Learning
- Linear Regression
- Cost Functions (MSE)
- Gradient Descent Optimization (update rule shown after this list)
- Model Training Without ML Libraries
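For reference, the cost and update rules that drive the training loop can be written as follows, using $w$ and $b$ for a single-feature model and $\alpha$ for the learning rate (the repository's notation may differ slightly):

$$J(w, b) = \frac{1}{n} \sum_{i=1}^{n} \left(w x_i + b - y_i\right)^2$$

$$w \leftarrow w - \alpha \frac{\partial J}{\partial w} = w - \alpha \cdot \frac{2}{n} \sum_{i=1}^{n} \left(w x_i + b - y_i\right) x_i$$

$$b \leftarrow b - \alpha \frac{\partial J}{\partial b} = b - \alpha \cdot \frac{2}{n} \sum_{i=1}^{n} \left(w x_i + b - y_i\right)$$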