path: personal/ml/cost_function_linear_regression.md
- **fileName**: cost_function_linear_regression
- **Created on**: 2025-05-02 03:21:03
# Cost Function for Linear Regression
In linear regression, a line that fits every data point exactly is rarely achievable with real-world data. Instead, we measure the errors between the predicted values and the actual values and try to minimize them. The discrepancy between a predicted value $\hat{y}$ and the true value $y$ is captured by the cost function, also known as the loss function.
## Mean Squared Error (MSE)
In linear regression, we commonly use the Mean Squared Error (MSE) as the cost function. It is the average of the squared differences between the predicted values $\hat{y}_i$ and the actual values $y_i$.
The linear equation for prediction is:

$$\hat{y}_i = \theta_1 + \theta_2 x_i$$

where $\theta_1$ is the intercept and $\theta_2$ is the slope.
The MSE cost function is defined as:

$$J(\theta_1, \theta_2) = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2$$
Where:

- $\hat{y}_i$ is the predicted value
- $y_i$ is the actual value
- $n$ is the number of data points
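
As a quick sanity check of the formula, here is a minimal NumPy sketch; the function name `mse` and the toy numbers are illustrative, not from the note:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: the average of the squared residuals."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_pred - y_true) ** 2)

# Toy check: residuals are 0.1, -0.1, 0.2,
# so MSE = (0.01 + 0.01 + 0.04) / 3 = 0.02
y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([1.1, 1.9, 3.2])
print(mse(y, y_hat))  # ~0.02
```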
## Optimization with Gradient Descent
To minimize the error, we apply gradient descent to update the parameters $\theta_1$ and $\theta_2$ iteratively.
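
Concretely, for the MSE above the gradients take the standard form below; the learning-rate symbol $\alpha$ is introduced here for the update rule and does not appear elsewhere in the note:

$$\frac{\partial J}{\partial \theta_1} = \frac{2}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i),
\qquad
\frac{\partial J}{\partial \theta_2} = \frac{2}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)\, x_i$$

$$\theta_j \leftarrow \theta_j - \alpha \, \frac{\partial J}{\partial \theta_j}$$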
At each step, the gradient of the cost function is computed and the parameters are moved in the opposite direction, reducing the cost. Because the MSE of a linear model is a convex function of its parameters, this process converges to the global minimum, which gives the best fit.
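
A minimal sketch of these updates in NumPy; the helper name `fit_line`, the starting point, the learning rate, and the iteration count are illustrative choices, not from the note:

```python
import numpy as np

def fit_line(x, y, lr=0.01, n_iters=1000):
    """Fit y_hat = theta1 + theta2 * x by gradient descent on the MSE."""
    theta1, theta2 = 0.0, 0.0            # illustrative starting point
    n = len(x)
    for _ in range(n_iters):
        y_hat = theta1 + theta2 * x      # current predictions
        error = y_hat - y                # residuals
        grad1 = (2.0 / n) * error.sum()        # dJ/d(theta1)
        grad2 = (2.0 / n) * (error * x).sum()  # dJ/d(theta2)
        theta1 -= lr * grad1             # step against the gradient
        theta2 -= lr * grad2
    return theta1, theta2

# Data generated from y = 1 + 2x, so the result should approach (1.0, 2.0)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(fit_line(x, y))
```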
The final result is a regression line that minimizes the squared
errors and best represents the relationship in the data.
continue:[[]]
before:./linear_regression.md