Dear community,
How much has the SpanCategorizer improved your models?
I am curious. I have been using textcat to categorize text, with a recall of about 85%, and I wonder how much of a difference applying a span categorizer could make.
I am trying to predict whether a question in a questionnaire will elicit confidential (personally identifiable) information, such as a name, telephone number, address, or social security number. Some questions can be very long, and then the textcat gets confused. I expect that being able to catch key terms should improve the prediction, but I wonder how much improvement others have seen in their models. Many thanks for your answers!
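For reference, here is a minimal sketch of how the two setups differ in spaCy v3; the labels and the spans key are invented for illustration:

```python
import spacy

nlp = spacy.blank("en")

# Document-level: one label per question, as in the current textcat approach.
textcat = nlp.add_pipe("textcat")
textcat.add_label("CONFIDENTIAL")
textcat.add_label("NOT_CONFIDENTIAL")

# Span-level: label the key terms that make a question sensitive, so a long
# question no longer drowns out the signal.
spancat = nlp.add_pipe("spancat", config={"spans_key": "pii_terms"})
for label in ("NAME", "PHONE", "ADDRESS", "SSN"):
    spancat.add_label(label)
```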
Related
I am working on classifying customer chat messages into 5 categories. Examples of categories are Login, SSL, etc. For instance, if a customer is having login issues, the message may read something like "I am having a login issue" or "my login is not working". We have to take into account misspellings, messages that mention multiple classified keywords (e.g. "I just upgraded my SSL but now I am having an issue with login"), etc.
Are there models/APIs out there that I can use to solve this problem?
I think your question is quite broad, because your problem is essentially text classification, which the literature has attacked with most NLP classification algorithms, so there are many more options than deep learning (and maybe better ones in your case). But if you want to use deep learning, you need to consider not only the architecture (simple multilayer, convolutional, LSTM, etc.) but also the amount of labeled data you need for good training (and what about unsupervised algorithms for text classification?).
Then, regardless of the approach you choose, I strongly recommend you look at word embedding algorithms (pretrained or built from your own data), especially those similar to fastText, because subword embeddings will let you deal with misspelled words.
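As a rough sketch with the fastText library (the training file and its labels here are purely illustrative):

```python
import fasttext

# train.txt holds one labeled message per line, e.g.:
#   __label__Login I am having a login issue
#   __label__SSL I just upgraded my SSL certificate
# Character n-grams (minn/maxn) are what make fastText robust to misspellings:
# "logn" and "login" share most of their trigrams.
model = fasttext.train_supervised(input="train.txt", minn=3, maxn=6, epoch=25)

print(model.predict("my logn is not working"))  # still likely to predict Login
```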
I hope this helps.
[I'm approaching this as an outsider to machine learning. It just seems like a classification problem that I should be able to solve with fairly good accuracy using machine learning.]
Training Dataset:
I have millions of URLs, each tagged with a particular category. There is a limited number of categories (50-100).
Now, given a fresh URL, I want to categorize it into one of those categories. The category can be determined from the URL using conventional methods, but that would require a huge, unmanageable mess of pattern matching.
So I want to build a box where INPUT is URL, OUTPUT is Category. How do I build this box driven by ML?
As much as I would love to understand the basic fundamentals of how this would work out mathematically, right now I'm much more focused on getting it done, so a conceptual understanding of the systems and processes involved is what I'm looking to get. I suppose machine learning is at a point where you can approach reasonably straightforward problems in that manner.
If you feel I'm wrong and I need to understand the foundations deeply in order to get value out of ML, do let me know.
I'm building this inside an AWS ecosystem so I'm open to using Amazon ML if it makes things quicker and simpler.
I suppose machine learning is at a point where you can approach reasonably straightforward problems in that manner.
It is not. Building an effective ML solution requires an understanding of the problem's scope and constraints (in your case: new categories over time? runtime requirements? execution frequency? latency requirements? cost of errors? and more!). These constraints then shape what kinds of feature engineering/processing and what types of models you should look at. Your particular problem may also have issues with non-i.i.d. data, which violates an assumption of most ML methods; this would affect how you evaluate the accuracy of your model.
If you want to learn enough ML to do this problem, you might want to start by looking at work done on malicious URL classification. An example can be found here. While you could "hack" your way to something without learning more about ML, I would not personally trust any solution built in that manner.
If you feel I'm wrong and I need to understand the foundations deeply in order to get value out of ML, do let me know.
Okay, I'll bite.
There are really two schools of thought currently related to prediction: "machine learners" versus statisticians. The former group focuses almost entirely on practical and applied prediction, using techniques like k-fold cross-validation, bagging, etc., while the latter group is focused more on statistical theory and research methods. You seem to fall into the machine-learning camp, which is fine, but then you say this:
As much as I would love to understand the basic fundamentals of how this would work out mathematically, right now I'm much more focused on getting it done, so a conceptual understanding of the systems and processes involved is what I'm looking to get.
While a "conceptual understanding of the systems and processes involved" is a prerequisite for doing advanced analytics, it isn't sufficient if you're the one conducting the analysis (it would be sufficient for a manager, who's not as close to the modeling).
With just a general idea of what's going on in, say, a logistic regression model, you would likely throw all the statistical assumptions (which are important) to the wind. Do you know whether certain features or groups shouldn't be included because there aren't enough observations in that group for the test statistic to be valid? What can happen to your predictions and hypotheses when you have high variance-inflation factors?
These are important considerations when doing statistics, and oftentimes people see how easy it is to do from sklearn.svm import SVC or something like that and run wild. That's how you get caught with your pants around your ankles.
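To make the variance-inflation-factor point concrete, checking is only a few lines (a sketch assuming statsmodels and a numeric feature matrix; the data here is synthetic):

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

# A synthetic (100, 4) design matrix whose last column is nearly collinear
# with the first -- exactly the situation that inflates VIFs.
X = np.random.default_rng(0).normal(size=(100, 3))
X = np.column_stack([X, X[:, 0] + 0.01 * X[:, 1]])

# A VIF well above ~10 signals multicollinearity that destabilizes coefficients.
for i in range(X.shape[1]):
    print(i, variance_inflation_factor(X, i))
```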
How do I build this box driven by ML?
You don't seem to have even a rudimentary understanding of how to approach machine/statistical learning problems. I would highly recommend that you take an "Introduction to Statistical Learning"- or "Intro to Regression Modeling"-type course in order to think about how to translate the URLs you have into meaningful features with significant power for predicting URL class. Think about how you can decompose a URL into individual pieces that might give some information as to which class a certain URL belongs. If you're classifying espn.com domains by sport, it'd be pretty important to parse nba out of http://www.espn.com/nba/team/roster/_/name/cle, don't you think?
Good luck with your project.
Edit:
To nudge you along, though: every ML problem boils down to some function mapping input to output. Your outputs are URL classes. Your inputs are URLs. However, machines only understand numbers, right? URLs aren't numbers (AFAIK). So you'll need to find a way to translate the information contained in the URLs into what we call "features" or "variables." One place to start would be one-hot encoding different parts of each URL. Think of why I mentioned the ESPN example above, and why I extracted info like nba from the URL. I did that because, if I'm trying to predict to which sport a given URL pertains, nba is a dead giveaway (i.e., it would very likely be highly predictive of sport).
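A sketch of that one-hot idea (the URLs and labels below are made up for illustration):

```python
import re

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def url_tokens(url):
    # "http://www.espn.com/nba/..." -> ["http", "www", "espn", "com", "nba", ...]
    return [t for t in re.split(r"[/.:\-_?=&]+", url.lower()) if t]

urls = [
    "http://www.espn.com/nba/team/roster/_/name/cle",
    "http://www.espn.com/nba/scoreboard",
    "http://www.espn.com/mlb/standings",
    "http://www.espn.com/mlb/team/roster/_/name/nyy",
]
labels = ["basketball", "basketball", "baseball", "baseball"]

# binary=True makes the token counts a one-hot encoding of URL pieces.
model = make_pipeline(
    CountVectorizer(tokenizer=url_tokens, token_pattern=None, binary=True),
    LogisticRegression(),
)
model.fit(urls, labels)
print(model.predict(["http://www.espn.com/nba/playoffs"]))  # likely "basketball"
```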
I have a large question bank and students. The goal is to select questions for an exam for a student.
Questions have various properties:
Grade Level
Subjects (could be multiple: fractions, word problems, addition)
How other students did on this question (percent right, percent wrong, etc.)
Has the student seen this question before or those like it?
So I want to choose questions for a student based on how the student is doing. My feedback for whether or not it's a "good" exam is the following:
Human feedback. A person can review the exam and reject certain questions for qualitative reasons
How the student does on the exam: if they got 100% right, that's bad; if they got 20% right, that's also bad. We want to target 75%.
Qualitative feedback on the exam as a whole from the teacher
I feel that a neural network is a possible solution here, but I'm not sure how. Any thoughts?
Thanks in advance.
I'm not sure whether neural networks are the best way to go here. They could be, but I thought almost instantly of something else.
Given the information in your question, you might want to look at a statistical approach here, with techniques like PCA or, more broadly, multivariate analysis.
If I understand the question correctly, you want to learn whether the pairing of a question and a student is "good" or "bad". That gives you a binary classification problem, where the input is a feature vector combining the question's features with the student's.
You can always throw this at a network and see how it does. I'm guessing you don't have too many questions and students, but since you're classifying pairs, your dataset size does increase, which is nice.
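A minimal sketch of that pair setup (all feature values and names below are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row concatenates question features with student features:
# [question_grade_level, pct_students_correct, student_grade_level, student_recent_avg]
X = np.array([
    [5, 0.60, 5, 0.75],  # on-level question, moderate difficulty
    [8, 0.20, 5, 0.75],  # far too hard for this student
    [3, 0.95, 5, 0.75],  # far too easy
    [5, 0.70, 5, 0.55],
])
y = np.array([1, 0, 0, 1])  # 1 = reviewer kept the question, 0 = rejected

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[5, 0.65, 5, 0.70]])[:, 1])  # P(good pairing)
```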
I would suggest probabilistic modeling, since your real data has some noise introduced by the human evaluation: two annotators would surely not give the same "qualitative feedback" about the same exam.
It's best to have a model that accounts for uncertainty: a Bayesian approach! If you have little knowledge of this area, I point you to Bishop's Pattern Recognition and Machine Learning book, freely available online, and you can use a library like Stan or Edward. There's also a course on probabilistic modeling on Coursera, where the first chapters treat an example very close to your use case.
One more comment about your suggestion of using a NN: since you don't have a lot of features (6, as you mentioned), a NN will easily overfit unless you have millions of data points. This is a fairly easy problem in terms of model complexity, and you don't need hidden layers to get a good result.
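As a hedged illustration of that Bayesian idea, here is a sketch using PyMC instead of Stan or Edward (the data is synthetic, and the priors are just a starting point):

```python
import numpy as np
import pymc as pm

# Stand-in pair features and labels, purely for illustration
X = np.random.default_rng(1).normal(size=(50, 4))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

with pm.Model():
    w = pm.Normal("w", mu=0.0, sigma=1.0, shape=4)
    b = pm.Normal("b", mu=0.0, sigma=1.0)
    p = pm.math.sigmoid(pm.math.dot(X, w) + b)
    pm.Bernoulli("obs", p=p, observed=y)
    idata = pm.sample(1000, tune=1000)  # the posterior carries the uncertainty

# The spread of the posterior on w tells you how sure the model is about
# each feature, which is exactly what noisy human labels call for.
```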
I hope this will help.
Also try looking at ranking algorithms. You could train one on (student, question) combinations to score such combinations, or to learn an ordering over them.
I don't have much experience with them, but it may be worth a try.
The problem statement is kind of vague, but I am looking for directions; because of privacy policy I can't share exact details, so please help out.
We have a problem at hand where we need to increase the efficiency of equipment, or in other words, decide at which values across multiple parameters the machines should operate to produce optimal outputs.
My query is whether it is possible to come up with such numbers using linear regression or multinomial logistic regression; if not, can you please specify which algorithms would be more suitable? Also, can you please point me to some active research on this kind of problem that is available in the public domain?
Does the type of problem I am asking suggestions for fall within the area of machine learning?
Lots of unknowns here but I’ll make some assumptions.
What you are attempting to do could probably be achieved with multiple linear regression. I have zero familiarity with the Amazon service (I didn't even know it existed until you brought this up; it's not available in Europe). However, a read of the documentation suggests that the Amazon service would be capable of doing this for you. The problem you will perhaps have is that it's geared to people unfamiliar with this field, and a lot of its functionality might be removed or clumped together to prevent confusion. I am under the impression that you have turned to this service because you too are somewhat unfamiliar with this field.
Something that may suit your needs better is Response Surface Methodology (RSM), which I have applied to industrial optimisation problems that I think are similar to what you describe. RSM works best if you can obtain your data through an experimental design such as a Central Composite Design or a Box-Behnken design. I suggest you spend some time Googling these terms to get your head around them; I don't think it's an unmanageable burden to learn how to apply them with no prior experience in this area. Because your question is vague, only you can determine if this really is suitable. If you already have the data in an unstructured format, you can still fit an RSM, but it is less robust. There are plenty of open-access articles using these techniques, but Science Direct is conveniently down at the moment!
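To give a feel for the core of RSM, here is a rough sketch in Python (the settings and measurements are invented; a real study would collect them through a proper design such as the CCD mentioned above):

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical runs: two machine settings (temperature, speed) and measured yield
X = np.array([[160, 1.0], [180, 1.0], [160, 1.4], [180, 1.4],
              [170, 1.2], [170, 1.0], [170, 1.4], [160, 1.2], [180, 1.2]])
y = np.array([72.0, 75.5, 74.1, 71.8, 78.9, 74.8, 75.2, 73.5, 74.4])

# The classic RSM model: a full quadratic (second-order) response surface
surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

# Search the tested region for the settings that maximize the fitted surface
res = minimize(lambda s: -surface.predict([s])[0],
               x0=[170, 1.2], bounds=[(160, 180), (1.0, 1.4)])
print(res.x)  # candidate optimal settings
```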
Minitab is a software package that will do all the regression and RSM for you. Its strength is that it has a robust GUI and partially resembles Excel, so it is far less daunting to get into than something like R. It also has plenty of guides online. It offers a 30-day free trial, so it might be worth doing some background reading, collecting the tutorials you need, and developing a plan of action before downloading the trial.
Hope that is some help.
I am interested in doing some collective intelligence programming, but I wonder how it can work.
It is said to give accurate predictions: the O'Reilly Programming Collective Intelligence book, for example, says that the collective actions of traders can actually predict future prices (of corn, for example) better than an expert can.
Now we also know from statistics class that if a room of 40 students takes an exam, 3 to 5 students will get an "A" grade, maybe 8 will get a "B", 17 a "C", and so on. That is, basically, a bell curve.
So from these two standpoints, how can a collection of "B" and "C" answers give a better prediction than the answer that got an "A"?
Note that the corn price, for example, is the accurate price factoring in weather, demand from food companies using corn, etc., rather than a "self-fulfilling prophecy" (more people buy corn futures, the price goes up, and more people buy the futures again). It is actually predicting supply and demand accurately enough to give an accurate future price.
How is it possible?
Update: can we say collective intelligence won't work during stock market euphoria and panic?
The Wisdom of Crowds Wikipedia page offers a good explanation.
In short, you don't always get good answers. A few conditions need to hold for it to work.
Well, you might want to think of the following "model" for a guess:
guess = right answer + error
If we ask a lot of people a question, we'll get lots of different guesses. But if, for some reason, the distribution of errors is symmetric around zero (actually it just has to have zero mean) then the average of the guesses will be a pretty good predictor of the right answer.
Note that the guesses don't necessarily have to be good -- i.e., the errors could indeed be large (grade B or C, rather than A) as long as there are grade B and C answers distributed on both sides of the right answer.
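A tiny simulation makes the point (the numbers are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 100.0

# 1,000 mediocre guesses: each is off by about 12 on average, but unbiased
guesses = true_value + rng.normal(loc=0.0, scale=15.0, size=1000)

print(np.mean(np.abs(guesses - true_value)))  # typical individual error: ~12
print(abs(guesses.mean() - true_value))       # error of the crowd average: <1
```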
Of course, there are cases where this is a terrible model for our guesses, so collective intelligence won't always work...
Crowd wisdom techniques, like prediction markets, work well in some situations and poorly in others, just as other approaches (experts, for instance) have their strengths and weaknesses. The optimal arenas, therefore, are ones where no other approach does very well and prediction markets can. Some examples include predicting public elections, estimating project completion dates, and predicting the prevalence of epidemics. These are areas where information is spread around sparsely and experts haven't found effective models that reliably predict.
The general idea is that market participants make up for one another's weaknesses. The expectation isn't that the markets will always predict every outcome correctly, but that, because people notice other people's mistakes, they won't miss crucial information as often, and that over the long haul they'll do better. In cases where the experts actually know the answer, they'll be able to influence the outcome. Different experts can weigh in on different questions, so each has more influence where they have the most knowledge. And as markets continue over time, each participant gets feedback from their gains and losses that makes them better informed about which kinds of questions they actually understand and which ones they should stay away from.
In a classroom, people are often graded on a curve, so the distribution of grades doesn't tell you much about how good the answers were. Prediction markets calibrate all the answers against actual outcomes. This public record of successes and failures does a lot to reinforce the mechanism, and is missing in most other approaches to forecasting.
Collective intelligence is really good at tackling problems with complex behavior behind them, because it can combine multiple sources of opinions/attributes to determine the end result. With a setup like this, training helps optimize the end result of the process.
The fault is in your analogy: the two kinds of opinions are not equal. Traders predict direct profit for their transactions (the little part of the market they have an overview of), while the expert tries to predict the overall field.
In other words, the overall traders' position is pieced together like a jigsaw puzzle from a large number of small opinions, each covering its owner's piece of the pie (where they are assumed to be experts).
A single mind can't process that kind of detail, which is why the overall position MIGHT overshadow the real expert. Note that this phenomenon is usually restricted to a fairly static market, not periods of turmoil. Experts usually do better then, since they are often better trained and motivated to avoid going with the general sentiment (which, in times of turmoil, is often comparable to that of a lemming).
The problem with the class analogy is that the grading system doesn't assume that the students are masters in their (difficult to predict) terrain, so it is not comparable.
P.S. Note that the base axiom depends on all players being experts in a small piece of the field. One can debate whether this requirement actually carries over to a Web 2.0 environment.