Scatter chart with simple linear regression - Highcharts

I have a scatter chart with multiple points, and I want to add a linear fit for those points (a simple linear regression). My question is: should I calculate the regression with the formula, build the data array, and populate it as a line here (which is very complicated), or is there a function I can use directly to get the linear regression line?

In case someone else stumbles upon this:
A user (not me) created a Highcharts plugin that calculates various trend lines, including linear regression. It works fairly well, but the documentation is a bit lacking.
HighCharts Regression Plugin (Linear + Non-Linear)

Like this on the Highcharts demo page?
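If you do decide to compute the fit yourself, the least-squares arithmetic is only a few lines. A minimal sketch in Python for readability (the same arithmetic ports directly to a JavaScript function that builds a two-point line series next to your scatter series); the function name and sample data are made up for illustration:

```python
def linear_fit(points):
    """Least-squares fit of y = a + b*x; returns the line's two endpoints."""
    n = len(points)
    sum_x = sum(x for x, _ in points)
    sum_y = sum(y for _, y in points)
    sum_xy = sum(x * y for x, y in points)
    sum_xx = sum(x * x for x, _ in points)
    b = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)  # slope
    a = (sum_y - b * sum_x) / n                                   # intercept
    xs = [x for x, _ in points]
    return [(min(xs), a + b * min(xs)), (max(xs), a + b * max(xs))]

# Made-up scatter data; the returned pair of points becomes the line series.
print(linear_fit([(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]))
```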

Related

Find linear trend as a single number in Google Sheets

I have 20 years of data. I want to find the linear trend of the %s as a single number, e.g. if you were to plot the linear trend, the coefficient by which the line increases or decreases over time.
Google Sheets has a TREND function, but it's used for creating new data by extrapolating the trend.
Your question is too vague to answer clearly and precisely. Are you looking for the formula for the trend line? Just the correlation coefficient? A future value predicted from the data? The slope of the trend line?
What you have described is linear regression. I would suggest browsing the Insert drop-down menu for Formulas > Statistics. There are functions for each piece of info you want to draw (for example, SLOPE for the trend coefficient and CORREL for the correlation coefficient), except creating the formula for you.
An easy, if superficial, way of obtaining the correlation coefficient and the actual formula (and thus the slope of a linear trend line) is to use Excel. Copy your data table into Excel and create a scatter plot from it. Go into the settings for the scatter plot and check the box for "Trendline". Then go into the trendline settings for the plot, where you can select which type of regression you want Excel to use; you want linear. Towards the bottom of that menu, check the boxes that say "show formula on chart" and "show R coefficient", or something along those lines. Excel will then print your formula and coefficient in a text box on the chart. Your slope will be the coefficient of the x variable.
Hope this helps! Regression is a wormhole. I’d love to get more in depth if you’re interested!
NOTE: The outlier for year 2003 will have a significant impact on a linear regression line. Consider removing it from the data to create a line that will be more accurate for future predictions.
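The single number being asked about is the slope of the least-squares line, which is what the SLOPE function returns. For a sanity check outside of Sheets, here is a small sketch in Python (the yearly percentages are made up for illustration):

```python
import numpy as np

# Made-up example: 20 years of percentages with a slight upward drift.
years = np.arange(2000, 2020)
pcts = 50 + 0.3 * (years - 2000) + np.random.default_rng(0).normal(0, 1, 20)

# A degree-1 fit returns (slope, intercept); the slope is the single trend
# number: the average change in percentage points per year.
slope, intercept = np.polyfit(years, pcts, 1)
print(f"linear trend: {slope:+.3f} points per year")
```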

Difference between linear regression and gradient descent

I have an assignment and I chose machine learning as my topic. I'm still in high school, so I don't know much about calculus.
My end goal is to try using a machine learning algorithm to predict stock values. But I want to understand what I'm doing without copying and analyzing existing codes that perform my required function.
This also isn't strictly programming-related; it mostly concerns the theory behind it. I read through articles on linear regression and watched the lecture Stanford has on its YouTube channel, but I don't get it. These are my main confusions:
Are linear regression and gradient descent different algorithms or a set of algorithms used together to predict or classify stuff?
Are y = mx + c and f(x) = θ_0 + θ_1 x the same? What can I calculate with this?
This equation (the update rule) is shown in the linear regression part, so what exactly does it do?
I will try to answer all three questions you asked.
First, let me classify ML into some categories.
Regression - Predicting continuous valued output (example, stock prediction)
Classification - Predicting discrete valued output (example, spam classification)
Now, regression can be further classified into linear regression and polynomial regression.
Linear Regression is the simplest one. This is how it works.
Suppose I have some data: house prices plotted against the size of the house. Now I want a straight line that can best fit this data, so maybe I will try some line.
And I will try more and more lines to see which one actually fits the data best. Now, to obtain different lines I will vary the parameters a and b in y = a + bx. This answers your second question: y = mx + c and f(x) = θ_0 + θ_1 x are the same equation written with different symbols, and both represent the straight line you are trying to fit to the data.
But how will I decide whether one line fits better than another? I will calculate some value which represents the error my line makes in predicting the y values for all the x values in my data. This is called the cost function. I can choose a cost function like this squared-error one, where m is the number of data points (ignore it if it doesn't make sense):

J(a, b) = \frac{1}{2m} \sum_{i=1}^{m} (a + b x_i - y_i)^2
But basically I want my cost function (the error-representing value) to be minimal, and Gradient Descent is one such algorithm that can minimize it. Gradient Descent can minimize any general differentiable function, so it is not exclusive to Linear Regression, but it is still popular for linear regression. This answers your first question: linear regression is the model, and gradient descent is an optimization algorithm used to fit it.
The next step is to know how Gradient Descent works. This is the algorithm, repeated until convergence (α is the learning rate, and both parameters are updated simultaneously):

a := a - \alpha \frac{\partial J(a, b)}{\partial a}, \qquad b := b - \alpha \frac{\partial J(a, b)}{\partial b}

This is what you asked about in your third question: it is the line of code which actually adjusts your fitting line (called the hypothesis) while minimizing the cost function.
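To tie the three questions together, here is a minimal sketch of gradient descent fitting y = a + bx, with made-up house data; the cost being minimized is the squared-error J(a, b) above:

```python
import numpy as np

# Made-up data: house sizes (in hundreds of square meters) and prices.
x = np.array([0.5, 0.8, 1.0, 1.2, 1.5])
y = np.array([150.0, 230.0, 285.0, 340.0, 425.0])

a, b = 0.0, 0.0      # parameters of the fitting line y = a + b*x
alpha = 0.1          # learning rate
m = len(x)

for _ in range(5000):
    h = a + b * x                     # current predictions of the line
    grad_a = (h - y).sum() / m        # dJ/da for the squared-error cost
    grad_b = ((h - y) * x).sum() / m  # dJ/db
    a -= alpha * grad_a               # the update from question 3:
    b -= alpha * grad_b               # step against the gradient, both at once

print(f"fitted line: y = {a:.1f} + {b:.1f}*x")  # close to the least-squares fit
```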

Which power of the feature should I train with? (regression)

I have an explanatory variable x and a response variable y. I am trying to find which power of the feature I should train with. You can ignore the colors for my question; the scatter data is from the sensor, and the line plot is the theoretical curve from the lab, which you can also ignore.
For this answer I understand you want to obtain some polynomial curve going through the croissant-shaped zone where the points are dense.
I also assume that the independent variable is on the horizontal axis and the dependent one on the vertical axis; otherwise, as you can see from the blue line, there is no function that could give you this.
Now, to select the degree of the polynomial you can use stepwise regression.
This means running the regression while adding or removing one feature at a time (i.e., increasing or decreasing the degree of the polynomial in this case) and calculating a score such as AIC, BIC, or even adjusted R², to assess whether adding or removing that feature is worth it.
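A rough sketch of that procedure in Python, fitting increasing polynomial degrees and comparing a Gaussian-likelihood AIC (the data here is made up; with real sensor data you would also want a held-out validation set):

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up data that roughly follows a cubic, for illustration only.
x = np.linspace(0.0, 3.0, 120)
y = 1 + 2 * x - 0.8 * x**3 + rng.normal(0.0, 1.5, x.size)

n = x.size
for degree in range(1, 7):
    coeffs = np.polyfit(x, y, degree)
    rss = float(((y - np.polyval(coeffs, x)) ** 2).sum())
    k = degree + 1                      # number of fitted parameters
    aic = n * np.log(rss / n) + 2 * k   # AIC up to an additive constant
    print(f"degree {degree}: AIC = {aic:7.1f}")
# Stop increasing the degree once the score stops improving.
```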

How does scikit-learn implement line search?

In this section of the documentation on gradient boosting, it says
Gradient Boosting attempts to solve this minimization problem numerically via steepest descent: The steepest descent direction is the negative gradient of the loss function evaluated at the current model F_{m-1}, which can be calculated for any differentiable loss function:

F_m(x) = F_{m-1}(x) - \gamma_m \sum_{i=1}^{n} \nabla_F L(y_i, F_{m-1}(x_i))

where the step length \gamma_m is chosen using line search:

\gamma_m = \arg\min_\gamma \sum_{i=1}^{n} L\left(y_i, F_{m-1}(x_i) - \gamma \frac{\partial L(y_i, F_{m-1}(x_i))}{\partial F_{m-1}(x_i)}\right)
I understand the purpose of the line search, but I don't understand the algorithm itself. I read through the source code, but it's still not clicking. An explanation would be much appreciated.
The implementation depends on which loss function you choose when initializing a GradientBoostingClassifier instance (using the classifier as the example; the regression part should be similar). The default loss function is 'deviance', and the corresponding optimization algorithm is implemented here. In the _update_terminal_region function, a simple Newton iteration is implemented with only one step.
Is this the answer you want?
I suspect the thing you find confusing is this: you can see where scikit-learn computes the negative gradient of the loss function and fits a base estimator to that negative gradient. It looks like the _update_terminal_region method is responsible for figuring out the step size, but you can't see anywhere it might be solving the line search minimization problem as written in the documentation.
The reason you can't find a line search happening is that, for the special case of decision tree regressors, which are just piecewise constant functions, the optimal solution is usually known. For example, if you look at the _update_terminal_region method of the LeastAbsoluteError loss function, you see that the leaves of the tree are given the value of the weighted median of the difference between y and the predicted value for the examples for which that leaf is relevant. This median is the known optimal solution.
To summarize what's happening, for each gradient descent iteration the following steps are taken:
Compute the negative gradient of the loss function at the current prediction.
Fit a DecisionTreeRegressor to the negative gradient. This fitting produces a tree with good splits for decreasing the loss.
Replace the values at the leaves of the DecisionTreeRegressor with values that minimize loss. These are usually computed from some simple known formula that takes advantage of the fact that the decision tree is just a piecewise constant function.
This method should be at least as good as what is described in the docs, but I think in some cases it might not be identical to it.
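A quick numerical check of the median claim, using plain (unweighted) residuals; this is not scikit-learn's code, just a demonstration that the median is the constant minimizing total absolute error:

```python
import numpy as np

rng = np.random.default_rng(1)
# Residuals y - prediction for the samples falling into one leaf (made up).
residuals = rng.normal(0.0, 1.0, 50)

# Brute-force the constant c minimizing sum(|r - c|) over a fine grid...
grid = np.linspace(residuals.min(), residuals.max(), 10001)
losses = np.abs(residuals[None, :] - grid[:, None]).sum(axis=1)

# ...and compare with the closed-form optimum: the median.
print(grid[losses.argmin()], np.median(residuals))  # agree up to grid spacing
```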
From your comments it seems that it's the algorithm itself that is unclear, not the way scikit-learn implements it.
Notation in the Wikipedia article is slightly sloppy: one does not simply differentiate by a function evaluated at a point. Once you replace F_{m-1}(x_i) with \hat{y}_i and replace the partial derivative with a partial derivative evaluated at \hat{y} = F_{m-1}(x), things become clearer:

\gamma_m = \arg\min_\gamma \sum_{i=1}^{n} L\left(y_i, \hat{y}_i - \gamma \left.\frac{\partial L(y_i, \hat{y})}{\partial \hat{y}}\right|_{\hat{y} = \hat{y}_i}\right)

This also removes x_i (sort of) from the minimization problem and shows the intent of line search: to optimize depending on the current prediction, not on the training set. Now, notice that \hat{y}_i and the evaluated derivative are fixed numbers for the current iteration, so writing r_i = -\left.\frac{\partial L(y_i, \hat{y})}{\partial \hat{y}}\right|_{\hat{y} = \hat{y}_i}, you're just minimizing a function of the single scalar \gamma:

g(\gamma) = \sum_{i=1}^{n} L(y_i, \hat{y}_i + \gamma r_i)
So line search simply optimizes one degree of freedom you have (once you've found the right gradient direction) - the step size.
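As a concrete illustration, here is a sketch of that one-dimensional minimization for squared loss using a generic scalar optimizer (made-up data; as discussed above, scikit-learn uses closed-form leaf values instead):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, 100)   # made-up targets
y_hat = np.zeros(100)           # current prediction, i.e. F_{m-1}(x_i)
r = y - y_hat                   # negative gradient of half squared loss

def g(gamma):
    # Total loss after stepping gamma along the negative gradient.
    return ((y - (y_hat + gamma * r)) ** 2).sum()

result = minimize_scalar(g)     # the line search over the single scalar gamma
print(result.x)                 # ~1.0: for this loss the optimal step is 1
```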

Using a straight line in ScatterPlot (Core Plot, iOS)

I'm trying to draw a graph using the Core Plot library. I'm looking for a way to change the dataLineStyle of the graph so that all the dots are connected with straight lines, without any playful turns. If needed, I can provide more information.
Is there any way to achieve this?
[EDIT]
I have included a picture to better illustrate what I'm talking about. I would not like the graph line to go above or below the data points.
Regression lines aren't built into Core Plot. You can use one scatterplot to draw the data points with just plot symbols and no data line. Use a second scatter plot to draw the regression line. It only needs two data points, one for each end of the line. You'll have to compute the regression coefficients yourself.
The lines connecting the data points are controlled by the interpolation property. The default is CPTScatterPlotInterpolationLinear which is what you want.
