I am new to Maxima. I have a set of data points (x, y, error) and I want to fit a straight line to them. I found some examples in "Maxima by Example, Chapter 5: 2D Plots and Graphics using qdraw", but honestly I don't know how to download and use the "qdraw" package.
Can anyone help?
I see that qdraw.mac is linked from the page you mentioned. Maybe you can search for qdraw.mac on that page.
Maxima has some capability to work with linear regression models, but other packages which are specifically devoted to statistics might be more suitable. Have you tried R? (http://www.r-project.org)
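If stepping outside Maxima is acceptable, a weighted straight-line fit for (x, y, error) triples is a one-liner in, say, NumPy. A minimal sketch (the data values below are made up, and weighting by 1/error is a common convention, not something from your question):

    # Weighted linear fit of (x, y, error) data with NumPy; illustrative data only.
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 3.9, 6.2, 7.8])
    err = np.array([0.2, 0.1, 0.3, 0.2])

    # np.polyfit multiplies the weights into the residuals, so pass 1/sigma
    m, b = np.polyfit(x, y, 1, w=1.0 / err)
    print(m, b)  # slope and intercept of the best-fit line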
I'm studying deep learning and TensorBoard, and almost every example uses summaries.
I wonder why I need to use variable summaries.
There are many types of statistics to summarize, like min, max, mean, variance, etc.
What should I use in a typical situation?
How do I analyze these summary graphs, and what can I get from them?
thank you :D
There is an awesome video tutorial (https://www.youtube.com/watch?v=eBbEDRsCmv4) on Tensorboard that describes almost everything about Tensorboard (Graph, Summaries etc.)
Variable summaries (scalar, histogram, image, text, etc.) help track your model through the learning process. For example, tf.summary.scalar('v_loss', validation_loss) will add one point to the loss curve each time you call the summary op, thus giving you a rough idea of whether the model has converged and when to stop.
It depends on your variable type. For values like loss, tf.summary.scalar shows the trend across epochs; for variables like the weights in a layer, it is better to use tf.summary.histogram, which shows how the entire distribution of the weights changes; I typically use tf.summary.image and tf.summary.text to check the images/texts my model generates over different epochs.
The graph shows your model structure and the size of the tensors flowing through each op. I found it hard at the beginning to organise ops nicely in the graph presentation, and I learnt a lot about variable scope from that. The other answer provides a link to a great tutorial for beginners.
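For a concrete picture, here is a minimal TF 1.x-style sketch combining a scalar and a histogram summary (the names v_loss, w_dist, and the ./logs directory are just examples):

    # TF 1.x-style sketch: scalar + histogram summaries; names are illustrative.
    import tensorflow as tf

    loss = tf.placeholder(tf.float32, name='loss')
    weights = tf.Variable(tf.random_normal([128, 64]), name='weights')

    tf.summary.scalar('v_loss', loss)        # one point per step -> loss curve
    tf.summary.histogram('w_dist', weights)  # weight distribution per step
    merged = tf.summary.merge_all()

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        writer = tf.summary.FileWriter('./logs', sess.graph)  # logs the graph too
        for step in range(100):
            # real training would go here; feed a dummy loss for illustration
            s = sess.run(merged, feed_dict={loss: 1.0 / (step + 1)})
            writer.add_summary(s, step)
        writer.close()

Run tensorboard --logdir ./logs and the scalar shows up under Scalars, the histogram under Histograms/Distributions, and the model graph under Graphs.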
The Objective-C math library seems pretty basic.
I'm looking for statistical analysis functions, like Excel's LINEST, to retrieve the quadratic or polynomial regression of a data set with a given order.
Is there any function similar to LINEST for Objective-C? Or a known statistics library/framework?
I have a hard time believing I'm the first person to stumble upon this problem in iOS.
I spent several days working through the math and getting it into code because I couldn't find a math library for iOS with the function I needed. I wouldn't recommend anyone do that again; it wasn't a walk in the park, so I published my solution on my GitHub. You can find it here:
https://github.com/KingIsulgard/iOS-Polynomial-Regression
It's easy to use: just give it the x values and y values of the data and the order of polynomial you want, and voila, you've got it.
Hope this might help some people. Feel free to improve if you can. I'm just happy it finally worked.
The standard math library in general only gives you an interface to the elementary mathematical operations that are implemented in the FPU part of a CPU.
For linear regression you need either your own algorithm (it is not that complicated to implement in a handful of loops) or a dedicated library, most likely a statistics library.
Writing your own algorithm for higher order or general regression is simple if a QR decomposition algorithm is available, for instance via bindings for LAPACK or similar. Then to solve
minimize  sum_k ( b[0]*f[0](x[k]) + ... + b[n]*f[n](x[k]) - y[k] )^2
one just has to construct the matrix [X|Y], where X[k,j] = f[j](x[k]) is the matrix of values of the ansatz functions and Y[k] = y[k] is the column vector of the values to approximate. Apply the QR algorithm to [X|Y], extract the R factor from its result, and solve for b in
R * [b | -1]' = 0
via back-substitution, ignoring the last row of R (its remaining entry is the residual norm). The sign is -1 because [X|Y] * [b, -1]' = X*b - Y is exactly the residual vector being minimized.
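Here is a small NumPy sketch of exactly that recipe, with monomials as the ansatz functions (the function name fit_poly_qr is my own, not from any library):

    # QR-based least-squares polynomial fit, following the recipe above.
    import numpy as np

    def fit_poly_qr(x, y, order):
        # X[k, j] = f_j(x_k); here f_j(x) = x**j (monomial basis)
        X = np.vander(x, order + 1, increasing=True)
        A = np.column_stack([X, y])        # the augmented matrix [X|Y]
        R = np.linalg.qr(A, mode='r')      # only the R factor is needed
        n = order + 1
        # R @ [b | -1]' = 0 with the last row (residual norm) dropped
        # reduces to the triangular system R[:n, :n] @ b = R[:n, n]
        return np.linalg.solve(R[:n, :n], R[:n, n])

    # Example: recover y = 1 + 2x + 3x^2 from noisy samples
    x = np.linspace(0.0, 1.0, 50)
    y = 1 + 2 * x + 3 * x**2 + np.random.normal(0, 0.01, x.size)
    print(fit_poly_qr(x, y, 2))  # approximately [1, 2, 3]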
I have a large image (5400x3600) that has multiple CCTVs that I need to detect.
The detection takes a lot of time (4-7 minutes) with rotation, and it still fails to find certain CCTVs.
What is the best method to match a template like this?
I am using skimage - OpenCV is not an option for me, but I am open to suggestions on that too.
For example: in the images below, the template is correctly matched in the second image, but not in the first - I guess due to the noise created by the text "BLDG...".
Template:
Source image:
Match result:
The fastest method is probably a cascade of boosted classifiers trained with several variations of your logo, possibly a few rotations, and some negative examples too (non-logos). You have to roughly scale your overall image so that the test and training examples are approximately matched in scale. Unlike SIFT or SURF, which spend a lot of time searching for interest points and creating descriptors for both learning and searching, binary classifiers shift most of the burden to the training stage, so your testing or search will be much faster.
In short, the cascade runs in such a way that the very first test discards a large portion of the image. If the first test passes, the others follow and refine. They are super fast, consisting of just a few intensity comparisons on average around each point. Only a few locations will pass the whole cascade, and those can be verified with additional tests such as your rotation-correlation routine.
Thus, the classifiers are effective not only because they quickly detect your object but because they can also quickly discard non-object areas. To read more about boosted classifiers, see the corresponding section of the OpenCV documentation.
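For illustration, applying an already-trained OpenCV cascade looks roughly like this (training happens offline with opencv_traincascade; logo_cascade.xml and plan.png are hypothetical file names):

    # Sketch: running a pre-trained cascade; file names are placeholders.
    import cv2

    cascade = cv2.CascadeClassifier('logo_cascade.xml')
    img = cv2.imread('plan.png', cv2.IMREAD_GRAYSCALE)
    # scale the image roughly to the training scale, as noted above
    hits = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in hits:
        cv2.rectangle(img, (x, y), (x + w, y + h), 255, 2)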
This problem in general is addressed by Logo Detection. See this for similar discussion.
There are many robust methods for template matching. See this or google for a very detailed discussion.
But from your example, I can guess that the following approach would work.
Create a feature for your search image. It essentially has a rectangle enclosing the word "CCTV". So the width, height, angle, and individual character features could be a suitable choice for matching the textual information. (Or you may also use the image containing "CCTV"; in that case the method will not be scale invariant.)
Now, when searching, first detect rectangles. Then use the angle to prune your search space, and also use an image transformation to align the rectangles parallel to the axes (this should take care of the need for rotation). Then, according to the feature chosen in step 1, match the text content. If you use individual character features, your template matching step is essentially a classification step. Otherwise, if you use an image for matching, you may use cv::matchTemplate; a scikit-image sketch of this matching step follows below.
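Since OpenCV is not an option for the asker, here is a rough scikit-image equivalent of a rotation-tolerant matching step (a sketch only; the angle step and correlation threshold are assumptions to tune):

    # Rotation-tolerant template matching with scikit-image; parameters are guesses.
    import numpy as np
    from skimage.feature import match_template
    from skimage.transform import rotate

    def match_with_rotation(image, template, angles=range(0, 360, 15), thresh=0.7):
        hits = []
        for angle in angles:
            rt = rotate(template, angle, resize=True)   # rotated template
            result = match_template(image, rt)
            # collect correlation peaks above the threshold
            for y, x in zip(*np.where(result >= thresh)):
                hits.append((result[y, x], x, y, angle))
        return sorted(hits, reverse=True)  # best correlations first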
Hope it helps.
Symbol spotting is more complicated than logo spotting because interest points hardly work on document images such as architectural plans. Many conferences deal with pattern recognition, and each year there are many new algorithms for symbol spotting, so giving you the best method is not possible. You could check the IAPR conferences: ICPR, ICDAR, DAS, GREC (Workshop on Graphics Recognition), etc. These researchers focus on this topic: M. Rusiñol, J. Lladós, S. Tabbone, J.-Y. Ramel, M. Liwicki, etc. They work on several techniques for improving symbol spotting, such as vectorial signatures, graph-based signatures, and so on (check Google Scholar for more papers).
An easy way to start a new approach is to work with simple shapes such as lines, rectangles, and triangles instead of matching everything at once.
Your example can be recognized by shape matching (contour matching), much faster than 4 minutes.
For a good match, you need decent preprocessing and denoising.
Examples can be found at http://www.halcon.com/applications/application.pl?name=shapematch
I have a set of X-Y values (i.e. a scatter plot) and I want a Pascal routine to generate the coefficients of a Nth order polynomial that fits those points, in the same way that Excel does.
I used David J. Taylor's Polyfit example (curvefit.zip), which implements a least-squares curve fitting algorithm (also known as linear regression). David's site is here, but keep reading, because my version is better. (See below.)
The algorithms David is using originate from a book of scientific math for Pascal programmers: Allen Miller's curve fitting routine from "Pascal Programs for Scientists and Engineers", typed and submitted to MTPUG in Oct. 1982 by Juergen Loewner, and corrected and adapted for Turbo Pascal by Jeff Weiss.
You can grab curvefit.zip directly from Bitbucket here. (You can clone the source code with Mercurial/TortoiseHG, or download a ZIP from Bitbucket.)
hg clone https://bitbucket.org/wpostma/curvefit curvefit
It runs in any Delphi version from 5 up, Unicode or not, even Delphi 10 Berlin. It has a little chart in the demo, added by me. I also added a way to force the result through the origin, a common technique where you want a best fit on all values other than the constant term, which is forced either to zero or to some experimentally derived average. A forced "blank subtraction", set equal to the average of a series of analytical "zero samples", is common in certain types of analytical chemistry with certain types of instrumentation, and in other scientific cases where it can be more useful than an unconstrained best fit, because you may wish to minimize error around the origin more than across the part of the curve farthest from the origin.
I should also clarify that for purposes of linear regression, a "curve" may also be a line, which is the case I needed for analytical chemistry purposes; the equation of a straight line (y = mx + b) is also called the "calibration curve". A first-order curve fit is a line (y = mx + b), and a second-order curve fit (shown in the picture) is a parabola (y = nx^2 + mx + b). As you might guess, the algorithm scales from first order up to any level you might wish, though I haven't tested it above 8 terms.
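To make the "force through the origin" idea concrete, here is a small NumPy sketch of the same two fits (nothing Delphi-specific; the data is made up). Dropping the constant column from the design matrix is all it takes:

    # Ordinary line fit vs. a fit forced through the origin, in NumPy.
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.1, 2.1, 3.9, 6.2, 7.9])   # roughly y = 2x

    # ordinary first-order fit: design-matrix columns [x, 1] -> m and b
    A = np.column_stack([x, np.ones_like(x)])
    (m, b), *_ = np.linalg.lstsq(A, y, rcond=None)

    # forced through the origin: drop the constant column, solve for m only
    (m0,), *_ = np.linalg.lstsq(x[:, None], y, rcond=None)
    print(m, b, m0)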
Here's a screenshot:
Bitbucket project link:
https://bitbucket.org/wpostma/curvefit/overview
Try TPMath (http://tpmath.sourceforge.net/) - I've been using it for years for fitting a Hill regression and can recommend it.
Check the functions in TurboPower's SysTools library, which is now open source; it includes math functions in the unit StStat.
Even though you've already awarded an answer, for completeness, I thought I'd add this:
We use SDL Components' Math pack and have been very happy with it.
http://www.lohninger.com/delfcomp.html
It's well thought out, and does exactly what we need.
He's got a variety of other interesting tools on his site.
XlXtrFun is the best curve fitting tool I know and use, but it is for Excel:
http://www.xlxtrfun.com/XlXtrFun/XlXtrFun.htm
Does anyone know the particular algorithm for Probabilistic Hough Transform in the OpenCV's implementation? I mean, is there a reference paper or documentation about the algorithm?
To get the idea, I can certainly look into the source code, but I wonder if there is any documentation about it - it's not in the source code's comments (OpenCV 1.0).
Thank you!
-Jin
The OpenCV documentation states that the algorithm is based on "Robust detection of lines using the progressive probabilistic Hough transform" by J. Matas et al. This is quite different from the RHT described on Wikipedia.
The paper does not seem to be freely available on the internet, but you can purchase it from Elsevier.
The source code for HoughLinesProbabilistic in OpenCV 2.4.4 contains inline comments that explain the various steps involved.
https://github.com/Itseez/opencv/blob/master/modules/imgproc/src/hough.cpp
Section 6 of the article "Line Detection by Hough Transformation" could be useful.
Here is a fairly concise paper by Matas et al. that describes the approach, and as others mentioned, it is indeed quite different from the Randomized Hough Transform:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.2186&rep=rep1&type=pdf
(Not sure for how long this link is going to be valid though. It's on/from citeseer, wouldn't expect it to just vanish tomorrow, but who knows...)
I had a quick look at the implementation of icvHoughLinesProbabilistic() in hough.cpp, because I'll be using it :-) It seems fairly straightforward; anyway, my primary interest was whether it does some least-squares line fitting at the end - it doesn't, which is fine. It just means that if accurate line segments are desired, one may want to use the start/end points and implied line parameters returned by OpenCV to select related points from the overall point set. I'd use a fairly conservative distance threshold in the first place, then run RANSAC/MSAC on those points with a smaller threshold. Finally, fit a line to the inlier set as usual, e.g. using OpenCV's cvFitLine().
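In OpenCV's Python bindings, the refinement pipeline sketched above might look like this (my own pipeline, not OpenCV internals; cv2.fitLine with DIST_HUBER stands in for a proper RANSAC/MSAC stage, and input.png is a placeholder):

    # Refine HoughLinesP segments by refitting nearby edge points.
    import cv2
    import numpy as np

    img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=30, maxLineGap=5)

    ys, xs = np.nonzero(edges)
    pts = np.column_stack([xs, ys]).astype(np.float32)

    for x1, y1, x2, y2 in (segments[:, 0] if segments is not None else []):
        # distance of every edge point to the infinite line through the segment
        d = np.abs((y2 - y1) * (pts[:, 0] - x1) - (x2 - x1) * (pts[:, 1] - y1))
        d /= np.hypot(x2 - x1, y2 - y1)
        inliers = pts[d < 2.0]               # conservative threshold, as above
        vx, vy, x0, y0 = cv2.fitLine(inliers, cv2.DIST_HUBER, 0, 0.01, 0.01)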
Here's an article about the Randomized Hough Transform, which I believe to be the same as the "probabilistic Hough transform" used in OpenCV:
http://en.wikipedia.org/wiki/Randomized_Hough_Transform
Basically, you don't fill up the accumulator for all points, but choose a subset of points according to a certain criterion.
The consequence is that sometimes you could miss the actual line if there weren't enough points to start with. I guess you'd want to use this if you have somewhat linear structures, so that most points would be redundant.
Reference no. 2: L. Xu, E. Oja, and P. Kultanen, "A new curve detection method: Randomized Hough transform (RHT)", Pattern Recognition Letters 11, 1990, pp. 331-338.
I also read about some quite different approaches where the algorithm takes two points and computes the point in the middle of them; if that midpoint is an edge point, the bin for the corresponding line is accumulated. This is apparently extremely fast, but it assumes a somewhat non-sparse edge image, as you could easily miss lines if there weren't enough edge points to start with.