## 2.7 Training linear regression: Normal equation

Slides

Notes

To obtain predictions as close as possible to the target values $y$, we need to solve for the weight vector $w$ in the general linear regression equation $Xw = y$. The feature matrix $X$ is usually not square, so it has no inverse and the system has no exact solution in general. Instead, an approximate (least-squares) solution can be obtained using the Gram matrix, the product of the transpose of the feature matrix and the feature matrix itself ($X^T X$), which is square and, when the features are linearly independent, invertible. The weight vector $w$ obtained with this formula is the closest possible solution to the linear regression system.

Normal Equation:

$w = (X^T X)^{-1} X^T y$

Where:

$X^T X$ is the Gram matrix
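The normal equation above can be sketched in a few lines of NumPy. The toy matrix `X` and target `y` below are illustrative values, not data from the course:

```python
import numpy as np

# Toy feature matrix X (3 examples, 2 features) and target vector y.
# These values are illustrative only.
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0]])
y = np.array([5.0, 4.0, 11.0])

# Normal equation: w = (X^T X)^{-1} X^T y
XTX = X.T @ X                     # Gram matrix (2 x 2, square)
w = np.linalg.inv(XTX) @ X.T @ y  # least-squares weight vector

predictions = X @ w               # predictions as close as possible to y
```

In practice, `np.linalg.solve(XTX, X.T @ y)` is preferred over explicitly inverting the Gram matrix, since it is faster and numerically more stable.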

The entire code of this project is available in this Jupyter notebook.

⚠️ The notes are written by the community.
If you see an error here, please create a PR with a fix.
