We’ll look at an example of linear regression – movie budgets and revenues – to help you grasp it.

Please don’t be confused: our real data for the example won’t look like the figure above; that figure is only plotted to aid understanding.

Our linear regression will get two kinds of data.

It will get our film production budgets and it will get our film revenues.

The budgets will be our feature, also called the independent variable, and the revenues are what we are trying to estimate – that will be our target.

What the linear regression will do is try and represent the relationship between the budget and the revenue as a straight line.

But here’s the question. What kind of line?

Let’s think back to high school math class and let’s think about what describes a line.

From our math classes,

we know that we can plot y as a function of x, and that’s a line.

And if the line cuts the y-axis at 10, then we say that our line has an intercept of 10. And if every time x increases by 2, y increases by 1, then we say that the line has a slope equal to one half.

In that case our equation would look something like this: y = 1/2 x + 10

And that means that the generic equation for line would be something like this.

It would be y = mx + c, where m is the slope and c is the constant – the intercept.
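As a quick sketch in Python (the function name `line` is just for illustration), the line from the example looks like this:

```python
# A line in slope-intercept form: y = m*x + c
def line(x, m, c):
    """Evaluate the line with slope m and intercept c at x."""
    return m * x + c

# The example from above: intercept 10, slope 1/2
print(line(0, 0.5, 10))  # 10.0 -- where the line cuts the y-axis
print(line(2, 0.5, 10))  # 11.0 -- x went up by 2, y went up by 1
```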

So let me ask you this.

What part of the equation for the line would tell you about how strong the relationship is between x and y?

In this case the slope is the key.

The slope tells us how much y will change for a given change in x – the larger the value of the slope, the steeper the line becomes.

Let’s take a look at an example where there is no relationship between x and y. If there is no relationship, then we would simply have a flat, horizontal line – a slope of zero.

But if there is a relationship between the two then the slope would be quite steep and the stronger the relationship the steeper the slope.

But here’s the thing.

There’s a big difference between machine learning and pure mathematics. In machine learning, we don’t actually know the true relationship, and that’s why we refer to the slope and the intercept as parameters. These parameters have to be estimated by our linear regression.

In fact, we even use a different notation.

We replace the c for the constant with theta zero, and the slope coefficient is written as theta one. We also change the order in the equation, so the constant comes first and then the slope. And instead of writing y, what you’ll often see is h theta of x, where h stands for hypothesis: h_theta(x) = theta_0 + theta_1 x. This kind of notation is very popular in machine learning, and even though it can look quite intimidating when you first see it, all you’re looking at here is the equation for a simple line.
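To see that the new notation really is still just a line, here is a minimal sketch (the function name `h` mirrors the hypothesis notation and is purely illustrative):

```python
# Machine-learning notation for the same line:
#   h_theta(x) = theta_0 + theta_1 * x
# theta_0 is the intercept (formerly c), theta_1 is the slope (formerly m)
def h(x, theta0, theta1):
    return theta0 + theta1 * x

# Same line as before: intercept 10, slope 1/2
print(h(4, 10, 0.5))  # 12.0 -- identical to y = (1/2)*4 + 10
```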

But at this point we still haven’t talked about where the line ultimately comes from.

How do we know which line to draw? Looking at the data, all we have are data points.

And as a matter of fact, you can draw a whole bunch of different lines through the same set of data points. So, which line is best?
Which line would you choose?
Which line has the best possible theta zero and the best possible theta one? If our dataset looked just like this, our job would be easy. All we would have to do is connect all the data points with a straight line.

And this also seems like the best option because we would know that in this case our estimates for theta zero and theta one would be very accurate.

With real data, though, the points won’t sit exactly on a line, so there will be a difference between the actual data point and the point on the line.

The point on the line here, that’s called the fitted value or the predicted value.

But let’s talk more about these gaps because it’s these gaps that will help us choose the best possible intercept and the best possible slope for our line.

These white lines are actually called residuals.

Now, why will the residuals help us choose the best possible line for our data?

Let me show you another line that we can draw through this data.

So now we have a measure by which to compare the different lines we can draw through the data: all we have to do is look at the size of the residuals and choose the line with the smallest residuals.
And that’s great because now our algorithm has a very clear objective.
The goal of our linear regression is going to be to calculate the line that minimizes these residuals.
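As a rough sketch with made-up data points (the budget and revenue numbers here are purely hypothetical), comparing two candidate lines by the total size of their residuals might look like this, using the sum of absolute residuals as one simple measure of size:

```python
# Made-up (budget, revenue) pairs -- purely illustrative
data = [(1, 12), (2, 13), (3, 13), (4, 15)]

def total_abs_residual(theta0, theta1):
    # Add up the absolute gap between each actual y and the line's prediction
    return sum(abs(y - (theta0 + theta1 * x)) for x, y in data)

# Two candidate lines: the one with the smaller total residual fits better
print(total_abs_residual(10, 1.0))  # 3.0  -- candidate A
print(total_abs_residual(5, 0.5))   # 28.0 -- candidate B, a much worse fit
```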
But how exactly should that work?

That first residual is going to be the difference between the actual value, y1, and the predicted value, which is the one on the line. The second residual would likewise just be the difference between the actual value, in white here, and the fitted value, in green. The same is true for that third data point.

Now suppose we have actually calculated the values of these residuals, and they come out to 10, negative 6, and 4.
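Taking those three residual values, a quick sketch shows why simply adding them up can be misleading: the positive and negative gaps partly cancel. Squaring each residual (a common choice, though not yet introduced at this point in the lecture) keeps every gap positive:

```python
# The three residuals from the example
residuals = [10, -6, 4]

print(sum(residuals))                  # 8 -- the -6 partly cancels the 10 and 4
print(sum(r ** 2 for r in residuals))  # 152 -- 100 + 36 + 16, no cancellation
```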