I'm doing time series analysis and I want to build an ARIMAX model for my data. I was just curious whether someone could give me a recommendation on using the System Identification Toolbox or the Econometrics Toolbox in MATLAB. Which one would you prefer for general time series analysis?
jjepsuomi,
You get what you pay for. What you are trying to do is really complicated. I would suggest not trying to reinvent the wheel, but buying software that can handle the complexities of denominator structure on your causal variables, plus outliers such as pulses, level shifts, changes in trend, and changes in seasonality. I would recommend looking at SAS, Autobox, or SPSS. We developed Autobox.
As of MATLAB 2013b, you cannot compile armax() of the System Identification Toolbox into a standalone application. As this was a no-go for us, we moved to arima() of the Econometrics Toolbox.
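If you want to prototype the model outside MATLAB before committing to a toolbox, a rough ARIMAX sketch with Python's statsmodels looks like this (the series, the exogenous regressor, and the (1,1,1) order below are placeholders, not recommendations):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Placeholder data: y is the response series, X holds the exogenous regressors.
rng = np.random.default_rng(0)
X = pd.DataFrame({"x1": rng.normal(size=200)})
y = pd.Series(0.5 * X["x1"] + rng.normal(size=200)).cumsum()

# ARIMAX = ARIMA(p, d, q) with exogenous inputs; the order here is illustrative only.
model = SARIMAX(y, exog=X, order=(1, 1, 1))
fit = model.fit(disp=False)
print(fit.summary())

# Forecasting requires future values of the exogenous variables.
future_X = pd.DataFrame({"x1": [0.0]})
print(fit.forecast(steps=1, exog=future_X))
```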
I am new to the field of machine learning. I am planning to use Python as the programming language for implementing algorithms and Java for the system architecture.
As far as I understand, machine learning is more about modeling data specific to the domain, visualizing the data, and choosing appropriate models and parameters; implementing the models/algorithms is the last and relatively easy step.
Matlab seems to have everything for machine learning, but it is too expensive and requires learning a new language.
What tools, other than a programming language, do I need in general for machine learning on enterprise projects? Things like data modeling, visualization, etc.
After a couple of years of trial and error, I would suggest going directly with Python, possibly with scikit-learn or TensorFlow (if you want to go hardcore :).
I also tried R in the past, and while it is a very valid language it has some limitations: it is single-threaded by default, and although there are solutions for that, they are not as clean as Python's.
Also, Python seems to be THE language for machine learning. It is easy to learn and fast (depending on the interpreter implementation, of course), and there is huge support for it: lots of tutorials and documentation, and, more importantly, the libraries are actively developed and supported.
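To give a feel for how little code a basic workflow takes, here is a minimal scikit-learn sketch (the iris dataset and the random-forest choice are just placeholders for your own data and model):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder dataset; swap in your own feature matrix X and labels y.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a model and evaluate it on held-out data.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```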
Finally, I recommend considering Spyder as an IDE for data science. I also tried Rodeo, but it does not seem as mature and stable as Spyder.
Hope this helps.
Can somebody explain the main pros and cons of the best-known open-source data mining tools?
Everywhere I read that RapidMiner, Weka, Orange, and KNIME are the best ones.
Could somebody give a quick technical comparison in a small bullet list?
My needs are the following:
It should support classification algorithms (Naive Bayes, SVM, C4.5, kNN).
It should be easy to implement in Java.
It should have understandable documentation.
It should have reference production projects or use cases it is used in.
Some additional benchmark comparisons, if possible.
Thanks!
Firstly, there are pros and cons for each tool on your list. However, out of your list I would suggest Weka: from my personal experience it is incredibly simple to embed in your own Java application using the Weka jar file, and it has its own self-contained tools for data mining.
RapidMiner seems to be a commercial product offering an end-to-end solution; however, most of the notable examples of external implementations for RapidMiner are in Python and R script, not Java.
Orange offers tools that seem to be targeted primarily at people who need fewer custom integrations into their own software but want an easier time with user interaction. It is written in Python, the source is available, and user add-ons are supported.
KNIME is another commercial platform offering end-to-end solutions for data mining and analysis, providing all the tools required. It has various good reviews around the internet, but I haven't used it enough to advise you or anyone on its pros and cons.
See here for KNIME vs Weka
Best data mining tools
As I said, Weka is my personal favorite as a software developer, but I'm sure other people have varying reasons and opinions on why to choose one over the other. Hope you find the right solution for you.
Also, per your requirements, Weka supports the following:
Naive Bayes (NaiveBayes)
SVM (SMO)
C4.5 (J48)
kNN (IBk)
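If you also want to sanity-check those algorithm families outside your Java code, the same four are available in scikit-learn (Python). This is only a cross-checking sketch with a placeholder dataset, not a replacement for the Weka/Java integration you asked about:

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier   # CART, a close relative of C4.5
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)  # placeholder data
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("SVM", SVC()),
                  ("Decision tree", DecisionTreeClassifier()),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```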
I have tried Orange and Weka with a 15K-record database and found problems with Weka's memory management: it needed more than 16 GB of RAM, while Orange managed the database without using that much. Once Weka reaches its maximum amount of memory it crashes, even if you set more memory in the ini file telling the Java virtual machine to use more.
I recently evaluated many open source projects, comparing and contrasting them with regard to the decision tree machine learning algorithm. Weka and KNIME were included in that evaluation. I covered the differences in algorithm, UX, accuracy, and model inspection. You might choose one or the other depending on which features you value most.
I have had positive experience with RapidMiner:
a large set of machine learning algorithms
machine learning tools - feature selection, parameter grid search, data partitioning, cross validation, metrics
a large set of data manipulation algorithms - input, transformation, output
applicable to many domains - finance, web crawling and scraping, NLP, images (very basic)
extensible - one can send and receive data to and from other technologies: R, python, groovy, shell
portable - can be run as a java process
developer friendly (to some extent, could use some improvements) - logging, debugging, breakpoints, macros
I would have liked to see something like RapidMiner in terms of user experience, but with the underlying engine based on Python technologies: pandas, scikit-learn, spaCy, etc. Preferably, something that would allow moving back and forth between GUI and code.
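For what it's worth, much of the list above (feature selection, parameter grid search, cross-validation, metrics) maps onto a fairly small scikit-learn pipeline. A rough sketch, with the dataset and parameter grid as placeholders:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pipeline: scaling -> univariate feature selection -> classifier.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif)),
    ("clf", SVC()),
])

# Grid search with cross-validation over feature count and SVM regularisation.
grid = GridSearchCV(pipe, {"select__k": [5, 10, 20], "clf__C": [0.1, 1, 10]}, cv=5)
grid.fit(X_train, y_train)
print("best params:", grid.best_params_)
print("held-out accuracy:", grid.score(X_test, y_test))
```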
I'm doing a project to detect (classify) human activities using an ARM Cortex-M0 microcontroller (Freedom KL25Z) with an accelerometer. I intend to predict the activity of the user using machine learning.
The problem is, the Cortex-M0 is not capable of running training or prediction algorithms, so I would probably have to collect the data, train a model on my computer, and then embed it somehow, which I don't really know how to do.
I saw some posts on the internet saying that you can generate a matrix of weights and embed it in a microcontroller, so that prediction becomes a straightforward function of the data you provide to it. Would that be the right way of doing it?
Anyway, my question is: how could I embed a classification algorithm in a microcontroller?
I hope you guys can help me and give some guidance, I'm kind of lost here.
Thank you in advance.
I've been thinking about doing this myself to solve a problem that I've had a hard time developing a heuristic for by hand.
You're going to have to write your own machine-learning methods, because there aren't any machine learning libraries out there suitable for low-end MCUs, as far as I know.
Depending on how hard the problem is, it may still be possible to develop and train a simple machine learning algorithm that performs well on a low-end MCU. After all, some of the older/simpler machine learning methods were used with satisfactory results on hardware with similar constraints.
Very generally, this is how I'd go about doing this:
Get the (labelled) data to a PC (through UART, SD-card, or whatever means you have available).
Experiment with the data and a machine learning toolkit (scikit-learn, weka, vowpal wabbit, etc). Make sure an off-the-shelf method is able to produce satisfactory results before moving forward.
Experiment with feature engineering and selection. Try to get the smallest feature set possible to save resources.
Write your own machine learning method that will eventually be used on the embedded system. I would probably choose perceptrons or decision trees, because these don't necessarily need a lot of memory. Since you have no FPU, I'd use only integers and fixed-point arithmetic (see the sketch after this list).
Do the normal training procedure. I.e. use cross-validation to find the best tuning parameters, integer bit-widths, radix positions, etc.
Run the final trained predictor on the held-out testing set.
If the performance of your trained predictor was satisfactory on the testing set, move your relevant code (the code that calculates the predictions) and the model you trained (e.g. weights) to the MCU. The model/weights will not change, so they can be stored in flash (e.g. as a const array).
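To make the "write your own method" and "move the weights to the MCU" steps concrete, here is a rough sketch, assuming a linear perceptron is adequate for your activity classes and Q8.8 fixed point gives enough precision: train with scikit-learn on the PC, quantise the weights, and check that an integer-only prediction routine, the kind you would port to the M0, agrees with the float model.

```python
import numpy as np
from sklearn.linear_model import Perceptron

# Placeholder training data: rows of accelerometer features, integer class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = Perceptron().fit(X, y)

# Quantise weights and bias to Q8.8 fixed point (8 fractional bits).
FRAC_BITS = 8
w_fx = np.round(clf.coef_[0] * (1 << FRAC_BITS)).astype(np.int32)
b_fx = int(round(clf.intercept_[0] * (1 << FRAC_BITS)))

def predict_fixed_point(x_fx):
    """Integer-only prediction, mirroring what the MCU code would do.
    x_fx: feature vector already converted to Q8.8 integers."""
    acc = b_fx << FRAC_BITS              # keep the accumulator in Q16.16
    for wi, xi in zip(w_fx, x_fx):
        acc += int(wi) * int(xi)         # Q8.8 * Q8.8 -> Q16.16
    return 1 if acc > 0 else 0

# Sanity check against the floating-point model.
x = X[0]
x_fx = np.round(x * (1 << FRAC_BITS)).astype(np.int32)
print(predict_fixed_point(x_fx), clf.predict(x.reshape(1, -1))[0])

# The weights can then be pasted into the firmware as a const array in flash.
print("const int16_t W[] = {", ", ".join(str(v) for v in w_fx), "};")
```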
I think you may be limited by your hardware, and you may want to get something a little more powerful. For your project you've chosen the M-series processor from ARM. This is the simplest platform that they offer, and its architecture doesn't lend itself to the kind of processing you're trying to do. ARM has three basic classifications as follows:
M - microcontroller
R - real-time
A - applications
You want to get something that has strong hardware support for these complex calculations. Your starting point should be an A-series for this. If you need to do floating-point arithmetic, you'll definitely need to start with the A-series and probably get one with a NEON FPU.
ST's Discovery series is a nice place to start, or maybe just use the Raspberry Pi (at least for the development part)?
However, if you insist on using the M0, I think you might be able to pull it off using something lightweight like ROS-C. I know there are packages with ROS that can do it; even though it's mainly for robotics, you may be able to adapt it to what you're doing.
Dependency Free ROS
Neural Networks and Machine Learning with ROS
I have an fMRI dataset for classifying normal controls versus Alzheimer's patients. As a newbie, I'm unable to extract features from my dataset. I want to extract activation patterns, GM/WM/CSF volumetric measures, and haemodynamics in numerical form. Please guide me on where to start, and please suggest some easy and efficient software for my work. I'd be obliged.
Take a look at the software packages called FSL (FMRIB Software Library) and SPM (Statistical Parametric Mapping).
Each of them can do the kind of analyses you're asking about. However, be warned that none of these analyses are trivial. You should probably read up a bit on the subject, first. The Handbook of Functional MRI Data Analysis is a great place to start for beginners.
Like @WeirdAlchemy says, these are many analyses you want to carry out, and all of them are non-trivial. You typically learn to do these over weeks at a relevant intensive course, or over months during a neuroscience Master's programme. To answer your question very explicitly:
GM, WM & CSF volumetric measures - You can do this with FSL SIENA, SPM VBM, AFNI 3Dclust, among others.
"Extract activation patterns" is too vague. In all probability, you likely have task-related BOLD fMRI data and want to perform a general linear model (GLM) analysis. FSL FEAT, SPM fMRI, AFNI and others support this. However, without knowing the experimental design, the nature of the data, and what you want to learn from it, it's hard to be more specific about which tool is appropriate.
"Haemodynamics in numerical form" This can mean a number of things, but if you are thinking about the amount of haemodynamic signal modulation (e.g. Condition led to a 2% change in BOLD signal), you get that out of the GLM analysis mentioned above.
I want to recommend items that are tagged and categorized into three price categories (cheap, regular, and expensive). I know that recommendation can be achieved with Mahout, but here's why I don't know how to use it:
Mahout is based on other users' opinions, but all of the items I want to recommend are new ones that don't have any preferences set yet.
Is Mahout the right tool for this? Is this content-based (which Mahout doesn't support yet?), or should I use classification?
Thanks!
Since I've never built a recommender system, don't take this answer very seriously (no one has answered it yet, so I'll try).
A recommendation system has to be built on some already known (or partially known) data. If you have only new (unseen) items, the only possibility is to use a clustering algorithm to build some clusters.
And if those clusters turn out to be OK, they can be used to train a recommendation system.
Mahout is just a tool which implements various ML methods. You can use other tools like Weka, R, ...
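Since your items already carry tags and a price category, one simple content-based starting point (independent of Mahout) is to build a tag vector per item and recommend the nearest items by cosine similarity. A rough sketch with made-up items:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up catalogue: each item is described by its tags plus its price category.
items = {
    "item_a": "shoes leather running cheap",
    "item_b": "shoes leather formal expensive",
    "item_c": "shirt cotton casual regular",
}
names = list(items)

# One column per tag; cosine similarity between tag vectors measures item similarity.
vectors = CountVectorizer().fit_transform(items.values())
similarity = cosine_similarity(vectors)

# Recommend the most similar other items for a given item.
query = names.index("item_a")
ranked = sorted((score, names[j]) for j, score in enumerate(similarity[query]) if j != query)
print([name for _, name in reversed(ranked)])
```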
If you have no data at all about a new user, there's really nothing you can do to make recommendations, no matter what you do. There is zero input that would differentiate the person from anyone else.
Good systems should however be able to do something reasonable after the first input is available.
This is not a classifier problem by nature, no. It is also not a clustering problem, other answers notwithstanding.
The price categories are not core to any rec process you would use. You presumably have other data; what is it? That's important.
Finally whether or not to use Mahout depends on taste. You would use it if you want to use Java and Hadoop. And in turn you would only consider Hadoop if you had very large input, and few people have that much data (like >10M data points at least).
(Well, not quite -- my recommender pieces in Mahout pre-date Hadoop and are for on-line, smaller-scale applications. You might indeed be interested in this, if you are working in Java.)