Online time series algorithms implemented in R/Python/MOA

I am looking for implemented online-learning time series algorithms. Do R, Python, MOA, or any other tools have these kinds of algorithms implemented?
TIA!

It's a little bit late, but in case someone is looking for the answer, I will share what I know:
PYTHON: sklearn clustering algorithms.
MiniBatchKMeans and Birch: both implementations expose a partial_fit method that lets you stream data through them in incremental updates, i.e., online learning.
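For illustration, here is a minimal sketch of that streaming pattern; the chunk size, random data, and cluster count are made up for the example:

    # Minimal online-clustering sketch with scikit-learn's partial_fit.
    # The random data and chunking loop are illustrative stand-ins for a real stream.
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    model = MiniBatchKMeans(n_clusters=3, random_state=0)

    for _ in range(100):                      # pretend each iteration receives a new chunk
        chunk = np.random.rand(50, 2)         # stand-in for the next batch of points
        model.partial_fit(chunk)              # incremental update: no need to hold all data

    labels = model.predict(np.random.rand(5, 2))  # assign new points to the learned clusters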
JAVA: MOA framework.
A lot of well-known stream clustering algorithms are implemented there (CluStream, DenStream, etc.). You can use them via:
terminal
user interface (see clustering demo)
code (Java API)
See the 'downloads' section on the MOA website, or check the source code directly on GitHub.
R: streamMOA: an R package that acts as a wrapper for the MOA [Java] classes. See the manual.

Is there a native library written in Julia for Machine Learning?

I have started using Julia. I read that it is faster than C.
So far I have seen some libraries like Knet and Flux, but both are for deep learning.
There is also a package, PyCall, to use Python inside Julia.
But I am interested in classical machine learning too, so I would like to use SVM, random forest, kNN, XGBoost, etc., but in Julia.
Is there a native library written in Julia for Machine Learning?
Thank you
A lot of algorithms are available directly through dedicated packages, like BayesNets.jl.
For "classical machine learning" there is MLJ.jl, a pure-Julia machine learning framework written by the Alan Turing Institute and under very active development.
For neural networks, Flux.jl is the way to go in Julia. It is also very active, GPU-ready, and allows all the exotic combinations that exist in the Julia ecosystem, like DiffEqFlux.jl, a package that combines Flux.jl and DifferentialEquations.jl.
Also keep an eye on Zygote.jl, a source-to-source automatic differentiation package that will serve as a backend for Flux.jl.
Of course, if you're more comfortable with Python ML tools you still have TensorFlow.jl and ScikitLearn.jl, but the OP asked for pure Julia packages, and those are just Julia wrappers around Python packages.
Have a look at this kNN implementation and this one for XGBoost.
There are SVM implementations, but they are outdated and unmaintained (search for SVM.jl). But, really, think about other algorithms for much better prediction quality and model construction performance. Have a look at the OLS (orthogonal least squares) and OFR (orthogonal forward regression) algorithm family. You will easily find detailed algorithm descriptions, easy to code in any suitable language. However, there is currently no Julia implementation I am aware of. I found only MATLAB implementations and made my own Java implementation some years ago. I have plans to port it to Julia, but that currently has no priority and may take some years. Meanwhile, why not code it yourself? You won't find any other language making it easier to code a prototype and turn it into a highly efficient production algorithm running heavy load on a CUDA-enabled GPGPU. A rough sketch of the core selection loop follows after the reference below.
To start with, I recommend this fairly recent publication: Nonlinear identification using orthogonal forward regression with nested optimal regularization.
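Since no Julia implementation seems to exist yet, here is a rough, didactic sketch of the greedy OFR/FROLS-style selection loop in Python (porting it to Julia is almost line-for-line); the function name and tolerance are my own choices, and this is not a reference implementation:

    # Didactic sketch of orthogonal forward regression (FROLS-style term selection).
    # Greedily picks the regressor with the largest error reduction ratio (ERR),
    # orthogonalizing each candidate against the already-selected terms.
    import numpy as np

    def ofr_select(X, y, n_terms):
        n, m = X.shape
        selected, Q = [], []            # chosen column indices and their orthogonalized versions
        total_energy = y @ y
        for _ in range(n_terms):
            best = (-1.0, None, None)   # (ERR, column index, orthogonalized column)
            for j in range(m):
                if j in selected:
                    continue
                q = X[:, j].astype(float).copy()
                for qk in Q:            # Gram-Schmidt against already-selected terms
                    q -= (qk @ X[:, j]) / (qk @ qk) * qk
                energy = q @ q
                if energy < 1e-12:      # numerically dependent on selected terms
                    continue
                g = (q @ y) / energy
                err = g * g * energy / total_energy  # fraction of output energy explained
                if err > best[0]:
                    best = (err, j, q)
            if best[1] is None:
                break
            selected.append(best[1])
            Q.append(best[2])
        coef, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)  # refit on selected terms
        return selected, coef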

Predictive Analytics - "why" factor & model interpretability

I have data that contains tons of X variables that are mainly categorical/nominal, and my target variable is a multi-class label. I was able to build a couple of models to predict the multi-class variable and compare how each of them performed. I have training and testing data, and both gave me good results.
Now, I am trying to find out "why" the model predicted a certain Y variable. Meaning, if I have weather data: X variables: city, state, zip code, temp, year; Y variable: rain, sun, cloudy, snow. I want to find out "why" the model predicted rain, sun, cloudy, or snow, respectively. I used classification algorithms like multinomial classifiers, decision trees, etc.
This may be a broad question, but I need somewhere to start researching. I can predict "what", but I can't see "why" something was predicted as the rain, sun, cloudy, or snow label. Basically, I am trying to find the links between the variables that led the model to predict a given label.
So far I have thought of using a correlation matrix and principal component analysis (that happened during the model building process), at least to see which variables are good predictors and which are not. Is there a way to figure out the "why" factor?
Model interpretability is a hyper-active and hyper-hot area of current research (think holy grail, or something), which has been brought to the fore lately not least due to the (often tremendous) success of deep learning models in various tasks, plus the necessity of algorithmic fairness & accountability...
Apart from the intense theoretical research, there have been some toolboxes & libraries on a practical level lately, both for neural networks as well as for other general ML models; here is a partial list which arguably should keep you busy for some time:
The Layer-wise Relevance Propagation (LRP) toolbox for neural networks (paper, project page, code, TF Slim wrapper)
FairML: Auditing Black-Box Predictive Models, by Cloudera Fast Forward Labs (blog post, paper, code)
LIME: Local Interpretable Model-agnostic Explanations (paper, code, blog post, R port)
Black Box Auditing and Certifying and Removing Disparate Impact (authors' Python code)
A recent (November 2017) paper by Geoff Hinton, Distilling a Neural Network Into a Soft Decision Tree, with various independent PyTorch implementations
SHAP: A Unified Approach to Interpreting Model Predictions (paper, authors' Python code, R package; see the usage sketch after this list)
Interpretable Convolutional Neural Networks (paper, authors' Matlab code)
Lucid, a collection of infrastructure and tools for research in neural network interpretability by Google (code; papers: Feature Visualization, The Building Blocks of Interpretability)
Transparency-by-Design (TbD) networks (paper, code, demo)
SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability (paper, code, Google blog post)
TCAV: Testing with Concept Activation Vectors (ICML 2018 paper, TensorFlow code)
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization (paper, authors' Torch code, TensorFlow code, PyTorch code, Keras example notebook)
Network Dissection: Quantifying Interpretability of Deep Visual Representations, by MIT CSAIL (project page, Caffe code, PyTorch port)
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks, by MIT CSAIL (project page, with links to paper & code)
Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions (paper, code)
Anchors: High-Precision Model-Agnostic Explanations (paper, code)
Diverse Counterfactual Explanations (DiCE) by Microsoft (paper, code, blog post)
Axiom-based Grad-CAM (XGrad-CAM): Towards Accurate Visualization and Explanation of CNNs, a refinement of the existing Grad-CAM method (paper, code)
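Before moving on to the broader frameworks, here is a minimal, hedged sketch of how one of the model-agnostic tools above (SHAP) is typically used to answer exactly the OP's "why" question; the dataset and model here are illustrative assumptions, not part of the linked project:

    # Minimal SHAP usage sketch (assumes the shap and scikit-learn packages).
    # The data and model are placeholders; any fitted tree ensemble works similarly.
    import shap
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)   # exact, fast SHAP values for tree models
    shap_values = explainer.shap_values(X)  # per-sample, per-feature attributions

    # For a multi-class model this yields one attribution array per class, i.e.
    # how much each feature pushed each prediction toward that class.
    shap.summary_plot(shap_values, X)       # global view: which features matter most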
As interpretability moves toward the mainstream, there are already frameworks and toolboxes that incorporate more than one of the algorithms and techniques mentioned and linked above; here is an (again, partial) list for Python stuff:
The ELI5 Python library (code, documentation)
The What-If tool by Google, a brand new (September 2018) feature of the open-source TensorBoard web application, which lets users analyze an ML model without writing code (project page, code, blog post)
tf-explain - interpretability methods as TensorFlow 2.0 callbacks (code, docs, blog post)
InterpretML by Microsoft (homepage, code still in alpha, paper)
Captum by Facebook AI - model interpretability for PyTorch (homepage, code, intro blog post)
Skater, by Oracle (code, docs)
Alibi, by SeldonIO (code, docs)
AI Explainability 360, by IBM (code, blog post)
See also:
Interpretable Machine Learning, an online Gitbook by Christoph Molnar with R code available
Explanatory Model Analysis, another online book by Przemyslaw Biecek and Tomasz Burzykowski, with both R & Python code snippets
A Twitter thread, linking to several interpretation tools available for R.
A short (4 hrs) online course by Dan Becker at Kaggle, Machine Learning Explainability, and the accompanying blog post
... and a whole bunch of resources in the Awesome Machine Learning Interpretability repo
NOTE: I no longer keep this answer updated; for updates, see my answer in the AI SE thread Which explainable artificial intelligence techniques are there?

Implementation of convolutional sparse coding in deep networks frameworks

I wanted to implement a convolutional sparse coding procedure similar to the one described in this paper:
http://cs.nyu.edu/~ylan/files/publi/koray-nips-10.pdf
I tried different frameworks (Caffe, EBLearn, Torch), but there seems to be a lack of tutorials/support for unsupervised feature learning procedures such as this one. The authors say that this particular article's work was done using EBLearn, but I found no unsupervised learning procedure there. Has anyone tried to implement these kinds of algorithms, and if so, which libraries/frameworks did they use?
thx
I'm trying to do the same. So far I have found a MATLAB toolbox available at http://www.matthewzeiler.com/software/ (download link at the bottom). He calls 'convolutional sparse coding' 'deconvolution'. The toolbox works, but you have to modify a little bit of code, since some MATLAB functions have been renamed.
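Until better framework support appears, the inference half of convolutional sparse coding is short enough to write directly. Here is a hedged PyTorch sketch of ISTA-style inference; the shapes, step size, and penalty are illustrative assumptions, and dictionary learning itself is not shown:

    # ISTA sketch for convolutional sparse coding inference:
    #   minimize over z:  0.5 * ||x - sum_k d_k * z_k||^2 + lam * ||z||_1
    # x: (1, 1, H, W) image; D: (K, 1, k, k) filter bank with odd k (assumed shapes).
    import torch
    import torch.nn.functional as F

    def conv_sparse_code(x, D, n_iter=200, lam=0.1, step=0.05):
        pad = D.shape[-1] // 2
        z = torch.zeros(1, D.shape[0], x.shape[2], x.shape[3])  # one feature map per filter
        for _ in range(n_iter):
            x_hat = F.conv_transpose2d(z, D, padding=pad)       # reconstruction sum_k d_k * z_k
            grad = -F.conv2d(x - x_hat, D, padding=pad)         # gradient of the data term
            z = z - step * grad                                 # gradient step
            z = torch.sign(z) * torch.clamp(z.abs() - step * lam, min=0.0)  # L1 prox (soft-threshold)
        return z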

GMM adaptation to new data

I have been using the GMM cluster package by Bouman, for which I did not find any adaptation module online. Before I start reading up on GMM adaptation theory and implementing it myself, I would like to know if there are any other open-source GMM projects online that do all of training, testing, and adaptation to new data.
It might be late to answer this now, but for future reference, I suggest the Bob library (specifically bob.bio.gmm), which provides a wide range of functionality to manipulate Gaussian mixture models for speech-related applications, including MAP adaptation and UBM generation.
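Also for future reference, the classic means-only MAP adaptation used with UBMs is only a few lines. Below is a hedged numpy/scikit-learn sketch following the usual Reynolds-style recipe; the function name, relevance factor, and means-only restriction are my choices, not any particular library's API:

    # Means-only MAP adaptation of a trained GMM/UBM to new data (didactic sketch).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def map_adapt_means(ubm, X, relevance=16.0):
        """Return MAP-adapted component means; weights/covariances stay untouched."""
        resp = ubm.predict_proba(X)                    # (n, K) posterior responsibilities
        n_k = resp.sum(axis=0)                         # soft count of points per component
        x_bar = (resp.T @ X) / np.maximum(n_k, 1e-10)[:, None]  # data mean per component
        alpha = (n_k / (n_k + relevance))[:, None]     # how much to trust the new data
        return alpha * x_bar + (1.0 - alpha) * ubm.means_  # interpolate toward the UBM

    # Usage sketch: train a UBM on background data, then adapt to new data.
    ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
    ubm.fit(np.random.randn(2000, 13))                 # stand-in for background features
    adapted_means = map_adapt_means(ubm, np.random.randn(200, 13))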

OpenCV vs Mahout for Computer Vision based Machine Learning?

For some time, I have been using OpenCV. It has satisfied all my needs for feature extraction, matching, clustering (k-means so far), and classification (SVM). Recently, I came across Apache Mahout. But most of the machine learning algorithms are already available in OpenCV as well. Are there any advantages to using Mahout over OpenCV if the work relates to videos and images?
This question might be put on hold since it is opinion based. I still want to add a basic comparison.
OpenCV is capable of almost anything in vision and ML that has been researched or invented. The vision literature is based on it, and it develops along with the literature. Even newly published algorithms, like TLD (which originated in MATLAB, http://www.tldvision.com/), can be implemented using OpenCV (http://gnebehay.github.io/OpenTLD/) with some effort.
Mahout is capable too, and it is specific to ML. It includes not only the well-known ML algorithms but also more specialized ones. Say you come across a paper, "Processing Apples with K-means Orientation Filtering". You might find OpenCV implementations of it around the web, and the original algorithm might even be open source and developed using OpenCV. With OpenCV, say it takes 500 lines of code; with Mahout, the paper might already be implemented as a single method, making everything easier.
An example of this is the canopy clustering algorithm (http://en.wikipedia.org/wiki/Canopy_clustering_algorithm), which is currently harder to implement using OpenCV.
Since you are going to work with image data sets you will need to learn about HIPI, too.
To sum up, here is a simple pro-con table:
know-how (learning curve): OpenCV is easier, since you already know about it. Mahout + HIPI will take more time.
examples: The literature and the vision community commonly use OpenCV. Open-source algorithms are mostly created with the C++ API of OpenCV.
ml algorithms: Mahout is only about ML, whereas OpenCV is more generic. Still, OpenCV includes the basic ML algorithms (see the sketch after this list).
development: Mahout is easier to work with in terms of coding and time complexity (I am not sure about the latter, but I reckon it is).
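As a concrete example of the 'ml algorithms' point above, here is a minimal OpenCV k-means sketch that color-quantizes an image; the file name and cluster count are illustrative:

    # Color quantization with OpenCV's built-in k-means (illustrative input file).
    import cv2
    import numpy as np

    img = cv2.imread("apples.jpg")                    # hypothetical image path
    data = img.reshape(-1, 3).astype(np.float32)      # one row per pixel, BGR features
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    # args: samples, K, bestLabels, criteria, attempts, flags
    compactness, labels, centers = cv2.kmeans(data, 4, None, criteria, 3,
                                              cv2.KMEANS_RANDOM_CENTERS)
    quantized = centers[labels.flatten()].reshape(img.shape).astype(np.uint8)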
