I'm building a new rule engine using PMML.
I have read quite a bit and would like to know if anyone can give me some pros/cons for using OpenScoring (http://openscoring.io/)? Or some performance statistics?
I have both real-time and batch processes that will query these models for scores.
Thanks,
The datapoint shared in https://news.ycombinator.com/item?id=13821217 suggests that OpenScoring is reasonably scalable. To quote -
Dataset size in the hundred millions of examples with hundreds of features.
Handled thousands of requests per second, no problem.
(Somewhat tangential to the question, but hopefully useful) - I concur with the author that PMML itself is somewhat limiting in expressive power.
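For what it's worth, the real-time path usually boils down to one HTTP call per record. A minimal sketch in Python - the endpoint path and payload shape here are assumptions from memory of the Openscoring README, so verify them against the version you actually deploy:

    # Minimal sketch of a real-time scoring call against an Openscoring-style
    # REST service. The endpoint path and payload shape are assumptions - check
    # the Openscoring docs for the deployment you run.
    import requests

    MODEL_URL = "http://localhost:8080/openscoring/model/MyModel"  # hypothetical deployment

    record = {
        "id": "record-001",                                   # optional request identifier
        "arguments": {"feature_a": 1.5, "feature_b": "red"},  # model inputs keyed by field name
    }

    response = requests.post(MODEL_URL, json=record, timeout=2)
    response.raise_for_status()
    print(response.json())  # should contain the predicted target/output fields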
[I'm approaching this as an outsider to machine learning. It just seems like a classification problem which I should be able to solve with fairly good accuracy with machine learning.]
Training Dataset:
I have millions of URLs, each tagged with a particular category. There are a limited number of categories (50-100).
Now, given a fresh URL, I want to categorize it into one of those categories. The category can be determined from the URL using conventional methods, but doing so would require a huge, unmanageable mess of pattern matching.
So I want to build a box where INPUT is URL, OUTPUT is Category. How do I build this box driven by ML?
As much as I would love to understand the basic fundamentals of how this would work out mathematically, right now I'm much more focused on getting it done, so a conceptual understanding of the systems and processes involved is what I'm looking to get. I suppose machine learning is at a point where you can approach reasonably straightforward problems in that manner.
If you feel I'm wrong and I need to understand the foundations deeply in order to get value out of ML, do let me know.
I'm building this inside an AWS ecosystem so I'm open to using Amazon ML if it makes things quicker and simpler.
I suppose machine learning is at a point where you can approach reasonably straightforward problems in that manner.
It is not. Building an effective ML solution requires an understanding of both the problem scope and its constraints (in your case: new categories over time? Runtime requirements? Execution frequency? Latency requirements? Cost of errors? And more!). These constraints will then shape what types of feature engineering/processing you look at, and what types of models you consider. Your particular problem may also involve non-i.i.d. data, whereas most ML methods assume i.i.d. data. This would affect how you evaluate the accuracy of your model.
If you want to learn enough ML to do this problem, you might want to start by looking at work done on malicious URL classification; an example can be found here. While you could "hack" your way to something without learning more about ML, I would not personally trust any solution built in that manner.
If you feel I'm wrong and I need to understand the foundations deeply in order to get value out of ML, do let me know.
Okay, I'll bite.
There are really two schools of thought currently related to prediction: "machine learners" versus statisticians. The former group focuses almost entirely on practical and applied prediction, using techniques like k-fold cross-validation, bagging, etc., while the latter group is focused more on statistical theory and research methods. You seem to fall into the machine-learning camp, which is fine, but then you say this:
As much as I would love to understand the basic fundamentals of how this would work out mathematically, right now I'm much more focused on getting it done, so a conceptual understanding of the systems and processes involved is what I'm looking to get.
While a "conceptual understanding of the systems and processes involved" is a prerequisite for doing advanced analytics, it isn't sufficient if you're the one conducting the analysis (it would be sufficient for a manager, who's not as close to the modeling).
With just a general idea of what's going on, say, in a logistic regression model, you would likely throw all statistical assumptions (which are important) to the wind. Do you know whether certain features or groups shouldn't be included because there aren't enough observations in that group for the test statistic to be valid? What can happen to your predictions and hypotheses when you have high variance-inflation factors?
These are important considerations when doing statistics, and oftentimes people see how easy it is to do from sklearn.svm import SVC or something like that and run wild. That's how you get caught with your pants around your ankles.
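To make that concrete: checking variance-inflation factors before trusting a fitted model takes only a few lines. A rough sketch using statsmodels, with made-up feature names and data:

    # Quick multicollinearity check: variance-inflation factor (VIF) per feature.
    # The feature names and data below are made up; pandas and statsmodels assumed.
    import pandas as pd
    from statsmodels.stats.outliers_influence import variance_inflation_factor
    from statsmodels.tools.tools import add_constant

    X = pd.DataFrame({
        "path_depth": [1, 2, 2, 3, 4, 1, 5, 3],
        "num_digits": [0, 3, 2, 5, 8, 1, 9, 4],
        "url_length": [20, 45, 40, 60, 95, 22, 110, 70],
    })
    X = add_constant(X)  # VIF is normally computed with an intercept column present

    for i, col in enumerate(X.columns):
        if col == "const":
            continue
        print(col, variance_inflation_factor(X.values, i))
    # Rule of thumb: VIF above roughly 5-10 signals problematic collinearity.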
How do I build this box driven by ML?
You don't seem to have even a rudimentary understanding of how to approach machine/statistical learning problems. I would highly recommend that you take an "Introduction to Statistical Learning"- or "Intro to Regression Modeling"-type course in order to think about how you translate the URLs you have into meaningful features that have significant power for predicting URL class. Think about how you can decompose a URL into individual pieces that might give some information about the class to which a certain URL pertains. If you're classifying espn.com domains by sport, it'd be pretty important to parse nba out of http://www.espn.com/nba/team/roster/_/name/cle, don't you think?
Good luck with your project.
Edit:
To nudge you along, though: every ML problem boils down to some function mapping input to output. Your outputs are URL classes. Your inputs are URLs. However, machines only understand numbers, right? URLs aren't numbers (AFAIK). So you'll need to find a way to translate information contained in the URLs to what we call "features" or "variables." One place to start, there, would be one-hot encoding different parts of each URL. Think of why I mentioned the ESPN example above, and why I extracted info like nba from the URL. I did that because, if I'm trying to predict to which sport a given URL pertains, nba is a dead giveaway (i.e. it would very likely be highly predictive of sport).
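To sketch what that might look like in practice, here is a minimal Python version of the "URL in, category out" box: split each URL into tokens, one-hot/bag-of-words encode them, and fit a linear classifier. The URLs, categories, and parameter choices below are invented for illustration, not a production recipe:

    # Minimal sketch: tokenize URLs, one-hot encode the tokens, fit a classifier.
    # Data and category names are made up for illustration.
    import re
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    urls = [
        "http://www.espn.com/nba/team/roster/_/name/cle",
        "http://www.espn.com/nfl/scoreboard",
        "http://www.espn.com/nba/standings",
    ]
    categories = ["basketball", "football", "basketball"]

    def url_tokens(url):
        # Crude tokenizer: split on anything that isn't a letter or digit,
        # so "espn", "nba", "roster", etc. become individual features.
        return [t for t in re.split(r"[^a-zA-Z0-9]+", url.lower()) if t]

    model = make_pipeline(
        CountVectorizer(tokenizer=url_tokens, binary=True),  # binary=True gives one-hot style features
        LogisticRegression(max_iter=1000),
    )
    model.fit(urls, categories)
    print(model.predict(["http://www.espn.com/nba/playoffs"]))  # expect "basketball"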
I'm a programmer who is interested in processing and analyzing time-series data. I know basic statistics and math, but I'm afraid that's all.
Can you please recommend good books and/or articles that does not require Ph.D. to understand them?
As for my concrete tasks - I want to be able to spot trends, eliminate outliers, make predictions, and calculate stats over a range of values. We have quite a lot of events coming off our systems.
I started reading "Introduction to Time Series and Forecasting" by Brockwell and Davis - and I'm completely lost in math.
Update on outliers: by outliers I mean data points that don't necessarily make sense, e.g. the exchange rate is $1.50 (±10 cents) for a pound on average, but a guy around the corner offers $1.09 and says he's completely legit.
I've found the NIST Engineering Statistics Handbook's chapter on time series to be a simple and clear introduction to basic time series modeling. It discusses exponential smoothing, auto-regressive, moving average, and eventually ARMA time series modeling. These can be used for trend analysis and possibly prediction, subject to validation.
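As a taste of how little code the simplest of these needs, here is a rough sketch of simple exponential smoothing written out by hand (the alpha value is arbitrary):

    # Simple exponential smoothing: each smoothed value is a weighted blend of the
    # new observation and the previous smoothed value.
    # alpha near 1 reacts quickly; alpha near 0 smooths more aggressively.
    def exponential_smoothing(values, alpha=0.3):
        smoothed = [values[0]]                     # seed with the first observation
        for x in values[1:]:
            smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
        return smoothed

    series = [10, 12, 11, 13, 30, 12, 11]          # made-up data with one spike
    print(exponential_smoothing(series))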
Outlier/anomaly detection is a much different task; the NIST book doesn't have much on this. It would be helpful to know what kind of outliers you are trying to detect.
I've gone through numerous books and articles and here are my findings. Maybe they will help others like me.
Regarding theory - I found the article "An Introductory Study on Time Series Modeling and Forecasting" very well written. That doesn't mean I understood all of its contents, but it's a really good overview of available time series models.
If you're like me and like to see some actual code - there's an article series on QuantStart. The examples are in R, but I guess many of them are portable to Python.
I can highly recommend the QuantStart blog by Michael Halls-Moore; I found the articles easy to read, and the author has done a great job trying not to overwhelm a reader with math. I also read Michael's first book and it's a good one for a beginner in the space like me.
Textbooks on the topic are extremely hard for me to read. I tried Time Series Analysis by Hamilton, but haven't gotten far.
Regarding outlier detection I mentioned - I've found this question on SO and its stats counterpart. By the looks of it, it's not something you can study and implement in a couple of evenings, at least not for me.
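Still, for the exchange-rate example above, a crude first pass is to flag points that sit far from the statistics of a trailing window. A sketch (a starting point only, not a substitute for proper anomaly detection):

    # Crude outlier flagging for the exchange-rate example: mark points that sit
    # more than k standard deviations from the mean of a trailing window.
    import statistics

    def flag_outliers(values, window=20, k=3.0):
        flags = []
        for i, x in enumerate(values):
            history = values[max(0, i - window):i]
            if len(history) < 5:                   # not enough history to judge yet
                flags.append(False)
                continue
            mu = statistics.mean(history)
            sigma = statistics.stdev(history) or 1e-9
            flags.append(abs(x - mu) > k * sigma)
        return flags

    rates = [1.50, 1.52, 1.48, 1.51, 1.49, 1.53, 1.09, 1.50]  # 1.09 is the guy around the corner
    print(flag_outliers(rates, window=5, k=3.0))   # only the 1.09 point gets flagged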
The problem statement is kind of vague, but I am looking for directions; because of privacy policy I can't share exact details, so please help out.
We have a problem at hand where we need to increase the efficiency of equipment, or in other words decide at which values across multiple parameters the machines should operate to produce optimal outputs.
My query is whether it is possible to come up with such numbers using linear regression or multinomial logistic regression algorithms; if not, can you please specify which algorithms would be more suitable? Also, can you please point me to some active research on this kind of problem that is available in the public domain?
Does the type of problem I am asking about fall within the area of machine learning?
Lots of unknowns here but I’ll make some assumptions.
What you are attempting to do could probably be achieved with multiple linear regression. I have zero familiarity with the Amazon service (I didn’t even know it existed until you brought this up, it’s not available in Europe). However, a read of the documentation suggests that the Amazon service would be capable of doing this for you. The problem you will perhaps have is that it’s geared to people unfamiliar with this field and a lot of its functionality might be removed or clumped together to prevent confusion. I am under the impression that you have turned to this service because you too are somewhat unfamiliar with this field.
Something that may suit your needs better is Response Surface Methodology (RSM), which I have applied to industrial optimisation problems that I think are similar to what you suggest. RSM works best if you can obtain your data through an experimental design such as a Central Composite Design or Box-Behnken design. I suggest you spend some time Googling these terms to get your head around them, I don’t think it’s an unmanageable burden to learn how to apply these with no prior experience in this area. Because your question is vague, only you can determine if this really is suitable. If you already have the data in an unstructured format, you can still generate an RSM but it is less robust. There are plenty of open-access articles using these techniques but Science Direct is conveniently down at the moment!
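To illustrate the regression side of this, here is a rough sketch that fits a second-order (quadratic) surface to two hypothetical machine parameters and then searches a grid for the settings with the highest predicted output. The data, parameter names, and ranges are invented; a real RSM workflow would use a designed experiment as described above:

    # Sketch of the regression side of RSM: fit a quadratic model of output vs.
    # two machine parameters, then grid-search for the best predicted settings.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    # Columns: [temperature, pressure]; y: measured output/efficiency (all invented)
    X = np.array([[150, 30], [150, 50], [170, 30], [170, 50], [160, 40],
                  [160, 30], [160, 50], [150, 40], [170, 40]])
    y = np.array([72, 75, 78, 74, 82, 77, 79, 74, 80])

    surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surface.fit(X, y)

    # Evaluate the fitted surface on a grid and take the best predicted setting.
    temps = np.linspace(150, 170, 41)
    pressures = np.linspace(30, 50, 41)
    grid = np.array([[t, p] for t in temps for p in pressures])
    best = grid[np.argmax(surface.predict(grid))]
    print("best predicted settings (temperature, pressure):", best)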
Minitab is a software package that will do all the regression and RSM for you. Its strength is that it has a robust GUI that partially resembles Excel, so it is far less daunting to get into than something like R. It also has plenty of guides online. They offer a 30-day free trial, so it might be worth doing some background reading, collecting the tutorials you need, and developing a plan of action before downloading the trial.
Hope that is some help.
I'd like to take a shot at characterizing incoming documents in my app as either "well" or "poorly" written. I realize this is no easy task, but even a rough idea would be useful. I feel like the way to do this would be via a naïve Bayes classifier with two classes, but I'm open to suggestions. So two questions:
1. Is this method the optimal (taking simplicity into account) way to do this, assuming a large enough training DB?
2. Are there libraries in Ruby (or anything integratable via JRuby or whatever) that I can plug into my Rails app to make this happen with little fuss?
Thanks!
You might try using vocabulary vector analysis. Covered some here:
http://en.wikipedia.org/wiki/Semantic_similarity
Basically you build up a corpus of texts that you deem "well-written" or "poorly-written" and count the frequency of certain words. Make a normalized vector for each, and then compute the distance from those to the vector of each incoming document. I am not a statistician, but I'm told it's similar to Bayesian filtering, though it seems to deal with misspellings and outliers better.
This is not perfect, by any means. Depending on how accurate you need it to be, you will probably still need humans to make the final judgement. But we've had good luck using it as a pre-filter to reduce the number of reviewers.
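To illustrate the idea, here is a toy version with hand-rolled normalized term-frequency vectors and cosine similarity (sketched in Python for brevity; the same approach ports to Ruby, and the corpora are obviously made up):

    # Toy vocabulary-vector comparison: normalized term-frequency vectors for each
    # corpus, then cosine similarity against an incoming document.
    from collections import Counter
    from math import sqrt

    def tf_vector(text):
        counts = Counter(text.lower().split())
        norm = sqrt(sum(c * c for c in counts.values())) or 1.0
        return {w: c / norm for w, c in counts.items()}

    def cosine(a, b):
        return sum(weight * b.get(word, 0.0) for word, weight in a.items())

    well_vec = tf_vector("clearly structured prose with varied precise vocabulary")
    poor_vec = tf_vector("lol u no wut i mean kinda stuff and stuff and stuff")

    doc = tf_vector("a precise and clearly structured report")
    label = "well" if cosine(doc, well_vec) >= cosine(doc, poor_vec) else "poorly"
    print(label)  # lands closer to the "well-written" profile in this toy example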
Another simple algorithm to check out is the Flesch-Kincaid readability metric. It is quite widely used and should be easy to implement. I assume one of the Ruby NLP libraries has syllable methods.
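For reference, the Flesch Reading Ease score is straightforward to sketch; the formula is standard, but the naive syllable counter below is only a rough stand-in for a proper NLP library:

    # Sketch of the Flesch Reading Ease score with a very naive syllable counter.
    import re

    def naive_syllables(word):
        # Count groups of consecutive vowels as syllables; at least one per word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(naive_syllables(w) for w in words)
        return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

    print(flesch_reading_ease("The cat sat on the mat. It was happy."))
    # Higher scores mean easier reading; very low scores suggest dense, difficult text.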
You may find this paper by Burstein, Chodorow, and Leacock on the Criterion essay evaluation system interesting; it gives a pretty good high-level overview of how one particular system did essay evaluation as well as style correction.
This question does not have a single "right" answer.
I'm interested in running MapReduce algorithms, on a cluster, on terabytes of data.
I want to learn more about the running time of said algorithms.
What books should I read?
I'm not interested in setting up MapReduce clusters, or running standard algorithms. I want rigorous theoretical treatments of running time.
EDIT: The issue is not that MapReduce changes running time. The issue is that most algorithms do not distribute well to MapReduce frameworks. I'm interested in algorithms that run well on the MapReduce framework.
Technically, there's no real difference in the runtime analysis of MapReduce in comparison to "standard" algorithms - MapReduce is still an algorithm just like any other (or specifically, a class of algorithms that occur in multiple steps, with a certain interaction between those steps).
The runtime of a MapReduce job is still going to scale the way normal algorithmic analysis would predict, once you factor in the division of tasks across multiple machines and then find the maximum individual machine time required for each step.
That is, if you have a task which requires M map operations, and R reduce operations, running on N machines, and you expect that the average map operation will take m time and the average reduce operation r time, then you'll have an expected runtime of ceil(M/N)*m + ceil(R/N)*r time to complete all of the tasks in question.
Predicting the values of M, R, m, and r is something that can be accomplished with normal analysis of whatever algorithm you're plugging into MapReduce.
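Plugging hypothetical numbers into that estimate, just to show the arithmetic (this ignores shuffle, stragglers, and scheduling overhead):

    # Hypothetical figures for the estimate above: ceil(M/N)*m + ceil(R/N)*r.
    from math import ceil

    M, R, N = 10_000, 500, 100   # map tasks, reduce tasks, machines
    m, r = 2.0, 8.0              # average seconds per map / reduce task

    estimate = ceil(M / N) * m + ceil(R / N) * r
    print(estimate)              # 100*2 + 5*8 = 240 seconds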
There are only two books that I know of that are published, but there are more in the works:
Pro Hadoop and Hadoop: The Definitive Guide
Of these, Pro Hadoop is more of a beginners book, whilst The Definitive Guide is for those that know what Hadoop actually is.
I own The Definitive Guide and think it's an excellent book. It provides good technical details on how HDFS works, as well as covering a range of related topics such as MapReduce, Pig, Hive, HBase, etc. It should also be noted that this book was written by Tom White, who has been involved with the development of Hadoop for a good while and now works at Cloudera.
As far as analysis of algorithms on Hadoop goes, you could take a look at the TeraByte sort benchmarks. Yahoo have done a write-up of how Hadoop performs for this particular benchmark: TeraByte Sort on Apache Hadoop. This paper was written in 2008.
More details about the 2009 results can be found here.
There is a great book about data mining algorithms applied to the MapReduce model.
It was written by two Stanford professors and it is available for free:
http://infolab.stanford.edu/~ullman/mmds.html