I want to implement a convolutional sparse coding procedure similar to the one described in this paper:
http://cs.nyu.edu/~ylan/files/publi/koray-nips-10.pdf
I have tried different frameworks (Caffe, EBLearn, Torch), but there seems to be a lack of tutorials/support for unsupervised feature learning procedures such as this one. The authors say this particular paper was done using EBLearn, but I found no unsupervised learning procedure there. Has anyone tried to implement this kind of algorithm, and if so, which libraries/frameworks did you use?
Thanks
I'm trying to do the same. So far I have found a MATLAB toolbox available at http://www.matthewzeiler.com/software/ (download link at the bottom). What he calls 'deconvolution' is essentially convolutional sparse coding. The toolbox works, but you have to modify a bit of the code, since some MATLAB functions have been renamed.
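Independent of any particular toolbox, it may help to see the core inference step these methods build on. Below is a minimal sketch of plain (non-convolutional) sparse coding via ISTA in Python/NumPy; in the convolutional setting of the paper the matrix products become sums of convolutions, and a feed-forward predictor is trained on top. All names and numbers here are illustrative, not taken from the paper or the toolbox.

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of the l1 norm."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(x, D, lam=0.1, n_iter=100):
        """Minimize 0.5 * ||x - D z||^2 + lam * ||z||_1 over the code z."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        z = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ z - x)
            z = soft_threshold(z - grad / L, lam / L)
        return z

    # Toy usage: a random 64-atom dictionary for 16-dimensional signals.
    rng = np.random.default_rng(0)
    D = rng.normal(size=(16, 64))
    D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
    x = rng.normal(size=16)
    z = ista(x, D)
    print(np.count_nonzero(z), "nonzero coefficients out of", z.size)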
Is anyone aware whether someone has produced a cheat sheet, preferably a summary table, of various machine learning techniques (e.g. kNN, regression trees, Naive Bayes, linear regression, neural nets, etc.) along with the types of dependent and independent variables they accept (continuous, categorical, binary, etc.)?
I realize there can be a lot of grey area here, but a general guide of some sort would be helpful for becoming familiar with these tools. I've done a lot of googling but haven't turned up anything like this yet.
Cheers
Check out http://ml-cheatsheet.readthedocs.io/en/latest/
It covers basic regression as well as popular neural net architectures.
Also check out this compact infographic: https://blogs.sas.com/content/subconsciousmusings/2017/04/12/machine-learning-algorithm-use/
A nice one as well: https://learn.microsoft.com/en-us/azure/machine-learning/studio/algorithm-cheat-sheet
I have been fascinated by a machine learning concept that SethBling demonstrated with his MarI/O program: https://youtu.be/qv6UVOQ0F44
I have a decent amount of general programming experience in a number of different languages and have read a lot about machine learning and neural networks.
What I'm looking for is a good set of references that could teach me how to apply neural networks in code, rather than just as a mathematical treatment, which is most of what I have seen so far.
Thanks in advance!
Sentdex (https://www.youtube.com/user/sentdex) has incredible tutorials on YouTube and walks through teaching a model to play GTA.
It may seem daunting at first, but the rewards of overcoming such a challenging task will be worth it.
You might want to look at the JavaScript library Neataptic to see how they implemented neural networks for Agar.io, for example.
You might also want to check out the NeuroEvolution of Augmenting Topologies paper for a basic understanding of neuro-evolution.
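If you want something to play with in code before (or alongside) the NEAT paper, here is a minimal, purely illustrative sketch of neuro-evolution in Python/NumPy: it evolves only the weights of a fixed 2-3-1 network to fit XOR, whereas NEAT also evolves the topology. Nothing here comes from Neataptic or MarI/O.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)             # XOR targets

    def forward(genome, x):
        """A 2-3-1 network whose weights and biases live in one flat genome."""
        W1, b1 = genome[:6].reshape(2, 3), genome[6:9]
        W2, b2 = genome[9:12], genome[12]
        h = np.tanh(x @ W1 + b1)
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output

    def fitness(genome):
        preds = np.array([forward(genome, x) for x in X])
        return -np.mean((preds - y) ** 2)               # higher is better

    population = rng.normal(size=(50, 13))              # 50 random genomes
    for generation in range(300):
        scores = np.array([fitness(g) for g in population])
        parents = population[np.argsort(scores)[-10:]]  # keep the 10 fittest
        children = parents[rng.integers(0, 10, size=40)] + \
            rng.normal(scale=0.3, size=(40, 13))        # mutated copies
        population = np.vstack([parents, children])

    best = max(population, key=fitness)
    print([round(float(forward(best, x)), 2) for x in X])  # ideally close to [0, 1, 1, 0]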
For some time I have been using OpenCV. It has satisfied all my needs for feature extraction, matching, clustering (k-means so far) and classification (SVM). Recently I came across Apache Mahout, but most of the machine learning algorithms it offers are already available in OpenCV as well. Are there any advantages to using Mahout over OpenCV if the work relates to videos and images?
This question might be put on hold since it is opinion based. I still want to add a basic comparison.
OpenCV is capable of handling pretty much anything in vision and ML that has been researched or invented. The vision literature is built on it, and it develops along with the literature. Even brand-new algorithms, like TLD (http://www.tldvision.com/), which originated in MATLAB, can be implemented using OpenCV (http://gnebehay.github.io/OpenTLD/) with some effort.
Mahout is capable too, and it is specific to ML. It includes not only the well-known ML algorithms but also more specialized ones. Say you come across a paper, "Processing Apples with K-means Orientation Filtering". You can find OpenCV implementations of this paper all around the web; the actual algorithm might even be open source and developed using OpenCV. With OpenCV it might take, say, 500 lines of code, but with Mahout the paper might already be implemented as a single method, which makes everything easier.
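For scale, here is roughly what the core k-means call itself looks like in OpenCV's Python API, on synthetic data (purely illustrative; the hypothetical 500 lines above would be everything around a call like this).

    import numpy as np
    import cv2

    # 200 random 2-D points; cv2.kmeans requires float32 input.
    data = np.random.rand(200, 2).astype(np.float32)

    # Stop after 100 iterations or once the centres move less than 1e-4.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-4)
    compactness, labels, centers = cv2.kmeans(
        data, 3, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

    print(centers)  # the 3 cluster centres found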
An example of this is canopy clustering (http://en.wikipedia.org/wiki/Canopy_clustering_algorithm), which is harder to implement using OpenCV right now.
Since you are going to work with image data sets, you will need to learn about HIPI (the Hadoop Image Processing Interface), too.
To sum up, here is a simple pro/con comparison:
know-how (learning curve): OpenCV is easier, since you already know it. Mahout + HIPI will take more time.
examples: the literature and the vision community commonly use OpenCV; open-source algorithms are mostly written against OpenCV's C++ API.
ML algorithms: Mahout is specific to ML, whereas OpenCV is more generic; still, OpenCV covers the basic ML algorithms.
development: Mahout is easier to work with in terms of coding, and probably also in terms of time complexity (I am not sure about the latter, but I reckon it is).
I'm working on a classification problem where I have data for only one class, so I want to classify that "target" class against all other possibilities (the "outlier" class) in an incremental learning setting. I have found some libraries, but none of them supports updating an existing classifier.
Do you know of any library that supports one-class classification with updating of a pre-existing classifier, especially in Java or MATLAB?
I can't think of any full pre-existing solution to your question. However, I can suggest two approaches:
Neural networks have been used for various types of anomaly detection (e.g. see here, with the problem framed as "novelty detection"). Depending on the nature of your problem, this might be a suitable solution, as NNs can be incrementally trained and are supported by several widely used libraries. The right one to use would be highly dependent on your problem framing and the network architecture chosen.
Although most SVM libraries do not support incremental training, there are some with such support (e.g. see Can an SVM learn incrementally?). However, as far as I can see, neither of the two libraries suggested in that reference supports one-class classification. You could still try basing a tailored solution on one of them (their source code seems to be freely available).
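Not Java or MATLAB, but as an illustration of the kind of API to look for: scikit-learn's SGDOneClassSVM is a one-class classifier that can be updated batch by batch via partial_fit. A minimal sketch (the data and parameters are made up):

    import numpy as np
    from sklearn.linear_model import SGDOneClassSVM

    rng = np.random.default_rng(0)
    clf = SGDOneClassSVM(nu=0.1, random_state=0)

    # Update the model incrementally as batches of "target"-class data arrive.
    for _ in range(10):
        batch = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
        clf.partial_fit(batch)

    # predict() returns +1 for the target class and -1 for outliers.
    targets = rng.normal(size=(5, 5))
    outliers = rng.normal(loc=6.0, size=(5, 5))
    print(clf.predict(targets))    # expected: mostly +1
    print(clf.predict(outliers))   # expected: mostly -1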
PS: if you find one of these (or any other) solutions to work, please post it as an answer as well :)
I want to learn a general SVM implementation that solves the QP problem directly for training. Initially I do not want to learn Sequential Minimal Optimization (SMO)-style algorithms, which work around the QP matrix size issue. Can anyone please give me some references for a plain, general SVM implementation in a programming language like C, C++ or Java, so that I can understand the basic issues in SVMs? It will also help me learn other optimized SVM algorithms.
This blog post by Mathieu Blondel explains how to solve the SVM problem both with and without kernels using a generic QP solver in Python (in this case he is using CVXOPT).
The source code is published on this gist and is very simple to understand thanks to the numpy array notation for n-dimensional arrays (in this case, mostly 2D matrices and 1D vectors).
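For reference, here is a minimal sketch of what solving the soft-margin SVM dual with a generic QP solver looks like in Python with CVXOPT. The variable names, the linear kernel and the toy data are mine, not taken from the blog post or the gist.

    import numpy as np
    from cvxopt import matrix, solvers

    def svm_dual_qp(X, y, C=1.0):
        """Soft-margin SVM dual with a linear kernel, solved as a generic QP.

        QP form: minimize 0.5 * a^T P a + q^T a  s.t.  G a <= h,  A a = b,
        with P_ij = y_i y_j <x_i, x_j>, q = -1, box constraints 0 <= a_i <= C
        encoded in (G, h), and the equality sum_i a_i y_i = 0 encoded in (A, b).
        """
        n = X.shape[0]
        K = X @ X.T                                     # linear-kernel Gram matrix
        P = matrix((np.outer(y, y) * K).astype(float))
        q = matrix(-np.ones(n))
        G = matrix(np.vstack([-np.eye(n), np.eye(n)]))  # -a_i <= 0 and a_i <= C
        h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
        A = matrix(y.reshape(1, -1).astype(float))
        b = matrix(0.0)

        solvers.options['show_progress'] = False
        sol = solvers.qp(P, q, G, h, A, b)
        a = np.array(sol['x']).ravel()                  # Lagrange multipliers

        # Recover w from all support vectors and the bias from the on-margin ones.
        w = ((a * y)[:, None] * X).sum(axis=0)
        on_margin = (a > 1e-6) & (a < C - 1e-6)
        b0 = np.mean(y[on_margin] - X[on_margin] @ w)
        return w, b0, a

    # Toy usage on two linearly separable 2-D blobs.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, size=(20, 2)), rng.normal(2, 1, size=(20, 2))])
    y = np.hstack([-np.ones(20), np.ones(20)])
    w, b0, a = svm_dual_qp(X, y, C=10.0)
    print(np.mean(np.sign(X @ w + b0) == y))            # training accuracy, ideally 1.0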
You could check some of the resources mentioned here. It is also advisable to have a look at the existing code. One of the most popular implementations, LIBSVM, is open-source, so you can study the implementation.