I am interested in doing some Collective Intelligence programming, but I wonder how it can actually work.
It is said to be able to give accurate predictions: the O'Reilly Programming Collective Intelligence book, for example, says that the collective actions of traders can actually predict future prices (of corn, for example) better than an expert can.
Now, we also learn in statistics class that if a room of 40 students takes an exam, maybe 3 to 5 students will get an "A" grade, 8 might get a "B", 17 a "C", and so on. That is, basically, a bell curve.
So from these two standpoints, how can a collection of "B" and "C" answers give a better prediction than the answer that got an "A"?
Note that the corn price, for example, is the accurate price that factors in weather, demand from food companies that use corn, etc., rather than a "self-fulfilling prophecy" (more people buy corn futures, the price goes up, and more people buy the futures again). It is actually predicting supply and demand accurately enough to give an accurate future price.
How is it possible?
Update: can we say Collective Intelligence won't work during stock market euphoria and panic?
The Wisdom of Crowds wiki page offers a good explanation.
In short, you don't always get good answers. A few conditions need to hold for it to work.
Well, you might want to think of the following "model" for a guess:
guess = right answer + error
If we ask a lot of people a question, we'll get lots of different guesses. But if, for some reason, the distribution of errors is symmetric around zero (actually it just has to have zero mean) then the average of the guesses will be a pretty good predictor of the right answer.
Note that the guesses don't necessarily have to be good -- i.e., the errors could indeed be large (grade B or C, rather than A) as long as there are grade B and C answers distributed on both sides of the right answer.
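To see why, here is a minimal simulation of that model (assuming normally distributed, zero-mean errors, which is exactly the assumption that can fail in practice):

```python
import numpy as np

rng = np.random.default_rng(42)

right_answer = 100.0          # e.g. the "true" future corn price
n_people = 1000

# Each guess = right answer + error; individual errors are large but have zero mean.
errors = rng.normal(loc=0.0, scale=25.0, size=n_people)
guesses = right_answer + errors

print("typical individual error:", np.mean(np.abs(errors)))                  # roughly 20
print("error of the average guess:", abs(np.mean(guesses) - right_answer))   # typically well under 2
```

The specific numbers don't matter; the point is that averaging 1000 unbiased guesses shrinks the error by roughly a factor of sqrt(1000), but only as long as the errors really are unbiased.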
Of course, there are cases where this is a terrible model for our guesses, so collective intelligence won't always work...
Crowd Wisdom techniques, like prediction markets, work well in some situations and poorly in others, just as other approaches (experts, for instance) have their strengths and weaknesses. The optimal arenas, therefore, are ones where no other approaches do very well and prediction markets can do well. Some examples include predicting public elections, estimating project completion dates, and predicting the prevalence of epidemics. These are areas where information is spread around sparsely, and experts haven't found effective models that reliably predict.
The general idea is that market participants make up for one another's weaknesses. The expectation isn't that the markets will always predict every outcome correctly, but that, due to people noticing other people's mistakes, they won't miss crucial information as often, and that over the long haul, they'll do better. In cases where the experts actually know the answer, they'll be able to influence the outcome. Different experts can weigh in on different questions, so each has more influence where they have the most knowledge. And as markets continue over time, each participant gets feedback from their gains and losses that makes them better informed about which kinds of questions they actually understand and which ones they should stay away from.
In a classroom, people are often graded on a curve, so the distribution of grades doesn't tell you much about how good the answers were. Prediction markets calibrate all the answers against actual outcomes. This public record of successes and failures does a lot to reinforce the mechanism, and is missing in most other approaches to forecasting.
Collective intelligence is really good at tackling problems that have complex behavior behind them, because it is able to take multiple sources of opinions/attributes into account to determine the end result. With a setup like this, training helps to optimize the end result of the process.
The fault is in your analogy: the two opinions are not equal. Traders predict direct profit for their own transactions (the little part of the market they have an overview of), while the expert tries to predict the overall field.
In other words, the overall traders' position is pieced together like a jigsaw puzzle from a large number of small opinions, each about its own piece of the pie (where each trader is assumed to be an expert).
A single mind can't process that kind of detail, which is why the overall position MIGHT overshadow the real expert. Note that this particular phenomenon is usually restricted to a fairly static market, not periods of turmoil. Experts usually do better then, since they are often better trained and motivated to avoid going with the general sentiment (which is often comparable to that of a lemming in times of turmoil).
The problem with the class analogy is that the grading system doesn't assume that the students are masters in their (difficult to predict) terrain, so it is not comparable.
P.S. Note that the base axiom depends on all players being experts in a small piece of the field. One can debate whether this requirement actually carries over well to a Web 2.0 environment.
I have a large question bank and students. The goal is to select questions for an exam for a student.
Questions have various properties:
Grade Level
Subjects (could be multiple: fractions, word problems, addition)
How other students did on this question (percent right, wrong, etc)
Has the student seen this question before or those like it?
So I want to choose questions for a student based on how the student is doing. My feedback for whether or not it's a "good" exam is the following:
Human feedback. A person can review the exam and reject certain questions for qualitative reasons
How the student does on the exam. If they got 100% right, that's bad; if they got 20% right, that's bad. We want to target roughly 75%
Qualitative feedback on the exam as a whole from the teacher
I feel that a neural network is a possible solution here, but I'm not sure how. Any thoughts?
Thanks in advance.
I'm not sure whether neural networks are the best way to go here. They could be, but I thought almost instantly of something else.
Given the information in your question, you might want to look at a statistical approach here, with techniques like PCA or, more broadly, multivariate analysis.
If I understand the question correctly, you would have to learn whether the relation between a question and a student is "good" or "bad". That gives you a binary classification problem, where the input is a feature vector combining the question's features with those of the student.
You can always throw this into a network and see how it does. I'm guessing you don't have too many questions or students, but since you're classifying pairs, your dataset size does increase, which is nice.
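As an illustration, a minimal sketch of that pair-classification setup might look like this (the feature names, sizes, and labels are hypothetical placeholders; a small neural network could be dropped in where the logistic regression sits):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical question features: [grade_level, pct_correct_by_others, times_seen]
question_features = rng.random((50, 3))
# Hypothetical student features: [student_grade_level, student_avg_score]
student_features = rng.random((30, 2))

# Build one row per (student, question) pair by concatenating their features.
pairs, labels = [], []
for s in student_features:
    for q in question_features:
        pairs.append(np.concatenate([s, q]))
        labels.append(rng.integers(0, 2))   # placeholder "good"/"bad" label

X, y = np.array(pairs), np.array(labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```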
I would suggest probabilistic modeling, since your real data has some noise introduced by the human evaluation. Two annotators would almost certainly not give the same "qualitative feedback" about the same exam.
It's best to have a model that takes uncertainty into account; a Bayesian approach! If you have little knowledge of this area, I'd point you to Bishop's Pattern Recognition and Machine Learning book (freely available online), and you can use libraries like Stan or Edward. There's also a course about probabilistic modeling on Coursera, where the first chapters treat an example very close to your use case.
One more comment about your suggestion of using an NN: since you don't have a lot of features (6, as you mentioned), an NN will easily overfit unless you have millions of data points. This is a relatively easy problem in terms of model complexity, and you may not need hidden layers to get a good result.
I hope this will help.
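As a minimal sketch of that Bayesian approach, a Bayesian logistic regression over the (student, question) feature vectors could look roughly like this (using PyMC v4+ here rather than Stan or Edward, and purely placeholder data):

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)

# Placeholder data: one row per (student, question) pair, 6 features,
# label = 1 if the pairing was judged "good" by a reviewer.
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)

with pm.Model() as model:
    w = pm.Normal("w", mu=0.0, sigma=1.0, shape=X.shape[1])
    b = pm.Normal("b", mu=0.0, sigma=1.0)
    p = pm.Deterministic("p", pm.math.sigmoid(pm.math.dot(X, w) + b))
    pm.Bernoulli("obs", p=p, observed=y)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# Posterior uncertainty on the weights reflects the annotator noise in the labels.
print(trace.posterior["w"].mean(dim=("chain", "draw")).values)
```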
Also try looking at ranking algorithms. You can train one on (student, question) combinations to score such pairs or to produce an ordered ranking.
I don't have much experience with them, but it may be worth trying.
I'm looking for some help modelling this machine learning problem.
A hand consists of three rows (containing 3, 5, and 5 cards respectively). Your goal is to build a hand that scores the most points. You receive the cards in intervals called streets, five cards in the first street, and three in the next four streets (you must discard one of the cards in the final four streets). Cards can't be moved once you place them. More details on scoring.
My goal is to build a system that, given a set of streets, plays the hand similar to our best players. It seem pretty clear that I'll need to build a neural network for each street, using features based on the existing hand and the set of cards in the street. I've got plenty of data (streets, placements, and final score), but I'm a little unsure how to model the problem given that the possible outputs are unique on the set of cards (although there are less than 3^5 placements in the first street, and 3^3 after). I've previously only dealt with classification problems with fixed categories.
Does anyone have an example of a similar problem or suggestions how to prepare the training data when you have unique outputs?
A vague question gives a vague answer (which is my excuse for being too lazy to code ;-).
You wrote that you have a lot of data, and it seems you want to map the game onto experience gained with supervised learning. But that is not the way game optimization works. One usually does not perform supervised learning, but rather reinforcement learning. The differences are subtle, but reinforcement learning (with Markov decision processes as its theoretical basis) offers a more local view -- optimize the decision given a specific state. Supervised learning corresponds more to optimizing several decisions at once.
Another showstopper for the usual supervised-learning approach is that even if you have a lot of data, it will almost surely be too little, and it will not cover the "required paths".
The usual approach, at least since Tesauro's backgammon player, is rather: set up the basic rules of the game, possibly introduce human knowledge as heuristics, and then let the program play against itself as often as possible -- this is how Google DeepMind built a master Go player, for example. See also this interesting video.
In your case, the task should in principle not be that hard, as there is a comparatively small number of game states and, importantly, any issues involving psychology, like bluffing, consistent playing, and so on, are completely absent.
So again: build a bot which can play against itself. One common basis is a function Q(S,a) which assigns to any game state and possible action of the player a value -- this is called Q-learning. And this function is often implemented as a neural network ... although I would think it does not need to be that sophisticated here.
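As a rough sketch of the tabular Q-learning update (the state encoding, actions, and rewards here are placeholders, not the actual card game):

```python
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate
Q = defaultdict(float)                   # Q[(state, action)] -> estimated value

def choose_action(state, actions):
    """Epsilon-greedy policy over the current Q estimates."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, next_actions):
    """One Q-learning step: move Q(s, a) toward reward + gamma * max_a' Q(s', a')."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

In self-play you would call q_update after every placement, with a reward of 0 until the hand is scored at the end.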
I'll stay that vague for now. But I would be glad to assist you further if necessary.
Let me just start by saying I only took the undergrad AI class at school so I know just enough to be dangerous.
Here's the problem I'm looking to solve: accurate credit scoring is a key part of the success of my business. Currently we rely on a team of actuaries and statistical analysis to suss out patterns in the few dozen variables we track about each individual that indicate they may be a low or high credit risk. As I understand it, this is exactly the type of job that neural nets are great at solving, that is, finding high-order relationships across many inputs that a human would likely never spot and then rendering a decision or output that is on average more accurate than what a trained human could do. In short, I want to be able to input your name, address, marital status, what car you drive, where you work, hair color, favorite food, etc., and get a credit score back.
My question is what type of architecture for a neural network would be best for this particular problem. I've done a bit of research, and it seems I'm generating questions faster than I'm finding answers at this point. The best I've been able to come up with is some kind of generative deep neural network with multiple hidden layers, where each layer is able to abstract one level beyond the previous one. I'm assuming it's going to be feed-forward just because that seems to be the default. We have historical data on all previous customers, including the information we used to make the initial score, as well as data on what type of credit risk they actually turned out to be. This would seem to lend itself to unsupervised learning. Where I'm lost is in the number of layers, how the layers differ from each other, the size of each layer, the connectedness of each of the perceptrons, and so on. The more I dig, the more I'm getting into research papers that are over my head, so I just need some smart person to point me in the right direction.
Does anyone have any ideas? Again, I don't need a thorough explanation just a general area I should focus on.
This is supervised learning since you have actual data that can be labelled. It's also feedforward since you're not predicting time series but assigning scores. Further, you should probably just prepare your data (assigning credit scores manually or with some rough heuristic) and start experimenting with some tools before you invest time into implementing state-of-the-art architectures. A multi-layer-perceptron (MLP) with 1 hidden layer is a sufficient starting point for such a problem. From there on, you can train the network to generalize your credit assignment heuristic you began with.
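As a minimal sketch of that starting point (scikit-learn here, with purely placeholder data; your real customer features and observed outcomes would go where the random arrays are):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: rows are customers, columns are the few dozen variables you
# already track; y is the observed outcome (1 = turned out to be a bad risk).
X = rng.normal(size=(1000, 30))
y = (rng.random(1000) < 0.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# One hidden layer is plenty to start with; scale the inputs first.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```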
You should know that most "new" architectures you probably read about while researching are dealing with much more difficult problems than credit scoring (speech/image/character recognition/detection). There is a collection of papers on the scenario of credit scoring / risk classification, so I'd recommend shifting your focus from architectures to actual case studies (see e.g. this paper). Just pick a recent paper with MLPs and apply their parameters. Start simple and improve the system incrementally (as @roganjosh stated).
Let's say we have 10-D data from a class of students. The data involves parameters like Name, Grades, Courses, No. of hours of lectures, etc., for all the students in the class. Now, we want to analyze the impact of No. of hours of lectures on Grades.
If we look closely at our parameters, the Name of the student has nothing to do with Grades, but the Courses taken by the student "might" have an impact on Grades.
So, there could be parameters that are dependent on each other, while others can be totally independent. My question is: how do we decide which parameters have an impact on our classification/regression problem and which don't?
PS: I am not looking for exact solutions. If someone can just show me the right direction or keywords for google search, that should be sufficient.
Thanks.
The technique you're looking for is called dimension reduction. The Stanford machine learning class goes over one method (principal component analysis).
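As a quick sketch of what that looks like in practice (scikit-learn, with placeholder data standing in for your 10-D student records):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Placeholder: 200 students, 10 numeric features (hours of lectures, grades, etc.),
# after dropping non-numeric columns such as Name.
X = rng.normal(size=(200, 10))

pca = PCA(n_components=10)
pca.fit(X)

# Fraction of the variance each principal component explains -- a steep drop-off
# suggests the data effectively lives in fewer dimensions than it appears to.
print(pca.explained_variance_ratio_)
```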
This is the problem of independent component analysis. ICA is a family of methods for finding the statistically independent components of data sets. This is a difficult problem, and there exists a large variety of algorithms for finding good solutions. A popular algorithm is FastICA.
There are also related concepts of whitening and decorrelation.
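A minimal FastICA sketch with scikit-learn, on synthetic mixed signals rather than your real data:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Hypothetical mixed data: each observed column is a linear mixture
# of statistically independent, non-Gaussian sources.
sources = rng.laplace(size=(1000, 3))
mixing = rng.normal(size=(3, 3))
X = sources @ mixing.T                 # what we actually observe

ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(X)       # estimated independent components
print(recovered.shape)
```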
I have a set of data very similar to the Motley Fool CAPS system, where individual users enter BUY and SELL recommendations on various equities. What I would like to do is show each recommendation and somehow rate it (1-5) as to whether it was a good predictor (i.e., correlation coefficient = 1) of the future stock price (or EPS or whatever), a horrible predictor (i.e., correlation coefficient = -1), or somewhere in between.
Each recommendation is tagged to a particular user, so that can be tracked over time. I can also track market direction (bullish / bearish) based on something like the S&P 500 price. The components I think would make sense in the model are:
user
direction (long/short)
market direction
sector of stock
The thought is that some users are better in bull markets than in bear markets (and vice versa), and some are better at shorts than longs -- and then there are combinations of the above. I can automatically tag the market direction and sector (based on the market at the time and the equity being recommended).
The thought is that I could present a series of screens that allow me to rank each individual recommendation by displaying the available data: absolute, market, and sector outperformance over a specific time period. I would follow a detailed checklist for ranking the stocks so that the ranking is as objective as possible. My assumption is that a single user is right no more than 57% of the time -- but who knows.
I could load the system and say "Let's rank the recommendation as a predictor of stock value 90 days forward", and that would represent a very explicit set of rankings.
NOW here is the crux: I want to create some sort of machine learning algorithm that can identify patterns over time, so that as recommendations stream into the application we maintain a ranking for that stock (i.e., something like a correlation coefficient) of the likelihood that the recommendation (in addition to the past series of recommendations) will affect the price.
Now here is the super crux. I have never taken an AI class or read an AI book, never mind anything specific to machine learning. So I am looking for guidance -- a sample or description of a similar system I could adapt, a place to look for info, or any general help. Or even just a push in the right direction to get started...
My hope is to implement this with F# and impress my friends with a new skill set in F#, with an implementation of machine learning, and potentially end up with something (application / source) I can include in a tech portfolio or blog.
Thank you for any advice in advance.
I have an MBA, and teach data mining at a top grad school.
The term project this year was to predict stock price movements automatically from news reports. One team had 70% accuracy, on a reasonably small sample, which ain't bad.
Regarding your question, a lot of companies have made a lot of money on pair trading (find a pair of assets that normally correlate, and buy/sell the pair when they diverge). See the writings of Ed Thorp, author of Beat the Dealer. He's accessible and kinda funny, if not curmudgeonly. He ran a good hedge fund for a long time.
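As a rough sketch of the core pair-trading signal (hypothetical, randomly generated price series; the window and thresholds are illustrative assumptions, not trading advice):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical prices of two normally correlated assets.
base = np.cumsum(rng.normal(0, 1, 500)) + 100
a = pd.Series(base + rng.normal(0, 0.5, 500), name="asset_a")
b = pd.Series(base + rng.normal(0, 0.5, 500), name="asset_b")

# Z-score of the spread: how far apart the pair currently is,
# relative to its own recent history.
spread = a - b
z = (spread - spread.rolling(60).mean()) / spread.rolling(60).std()

signal = pd.Series(0, index=z.index)
signal[z > 2] = -1    # spread unusually wide: short a, long b
signal[z < -2] = 1    # spread unusually narrow: long a, short b
print(signal.value_counts())
```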
There is probably some room for using data mining to predict companies that will default (be unable to make debt payments), shorting† them, and using the proceeds to buy shares in companies less likely to default. Look into survival analysis. Search Google Scholar for "predict distress" etc. in finance journals.
Also, predicting companies that will lose value after an IPO (and shorting them. edit: Facebook!). There are known biases, in academic literature, that can be exploited.
Also, look into capital structure arbitrage. This is when the value of the stocks in a company suggest one valuation, but the value of the bonds or options suggest another value. Buy the cheap asset, short the expensive one.
Techniques include survival analysis, sequence analysis (Hidden Markov Models, Conditional Random Fields, Sequential Association Rules), and classification/regression.
And for the love of God, please read Fooled By Randomness by Taleb.
†shorting a stock usually involves calling your broker (that you have a good relationship with) and borrowing some shares of a company. Then you sell them to some poor bastard. Wait a while, hopefully the price has gone down, you buy some more of the shares and give them back to your broker.
My Advice to You:
There are several Machine Learning/Artificial Intelligence (ML/AI) branches out there:
http://www-formal.stanford.edu/jmc/whatisai/node2.html
I have only tried genetic programming, but in the "learning from experience" branch you will find neural nets. GP/GA and neural nets seem to be the most commonly explored methodologies for the purpose of stock market predictions, but if you do some data mining on Predict Wall Street, you might be able to utilize a Naive Bayes classifier to do what you're interested in doing.
Spend some time learning about the various ML/AI techniques, get a small data set and try to implement some of those algorithms. Each one will have its strengths and weaknesses, so I would recommend that you try to combine them using a Naive Bayes classifier (or something similar).
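As one hedged sketch of that kind of combination, here is a simple stacking setup on placeholder data (the base models and features are arbitrary stand-ins, not a recommended trading model):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                            # placeholder market features
y = (X[:, 0] + rng.normal(0, 1, 500) > 0).astype(int)    # placeholder up/down label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a couple of base models, then combine their predicted probabilities
# with a Naive Bayes classifier on top.
base_models = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=3)]
for m in base_models:
    m.fit(X_train, y_train)

stack_train = np.column_stack([m.predict_proba(X_train)[:, 1] for m in base_models])
stack_test = np.column_stack([m.predict_proba(X_test)[:, 1] for m in base_models])

combiner = GaussianNB().fit(stack_train, y_train)
print("combined accuracy:", combiner.score(stack_test, y_test))
```

In practice you would use out-of-fold predictions for the stacking features to avoid leakage.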
My Experience:
I'm working on the problem for my Masters Thesis so I'll pitch my results using Genetic Programming: www.twitter.com/darwins_finches
I started live trading with real money on 09/09/09... yes, it was a magical day! I post the GP's predictions before the market opens (i.e., the timestamps on Twitter) and I also place the orders before the market opens. The profit for this period has been around 25%; we've consistently beaten the Buy & Hold strategy, and we're also outperforming the S&P 500 with stocks that are under-performing it.
Some Resources:
Here are some resources that you might want to look into:
Max Dama's blog: http://www.maxdama.com/search/label/Artificial%20Intelligence
My blog: http://mlai-lirik.blogspot.com/
AI Stock Market Forum: http://www.ai-stockmarketforum.com/
Weka is a data mining tool with a collection of ML/AI algorithms: http://www.cs.waikato.ac.nz/ml/weka/
The Chatter:
The general consensus amongst "financial people" is that Artificial Intelligence is a voodoo science: you can't make a computer predict stock prices, and you're sure to lose your money if you try. Nonetheless, the same people will tell you that just about the only way to make money on the stock market is to build and improve your own trading strategy and follow it closely.
The idea of AI algorithms is not to build Chip and let him trade for you, but to automate the process of creating strategies.
Fun Facts:
RE: monkeys can pick better than most experts
Apparently rats are pretty good too!
I understand monkeys can pick better than most experts, so why not an AI? Just make it random and call it an "advanced simian Mersenne twister AI" or something.
Much more money is made by the sellers of "money-making" systems than by the users of those systems.
Instead of trying to predict the performance of companies over which you have no control, form a company yourself and fill some need by offering a product or service (yes, your product might be a stock-predicting program, but something a little less theoretical is probably a better idea). Work hard, and your company's own value will rise much quicker than any gambling you'd do on stocks. You'll also have plenty of opportunities to apply programming skills to the myriad of internal requirements your own company will have.
If you want to go down this long, dark, lonesome road of trying to pick stocks you may want to look into data mining techniques using advanced data mining software such as SPSS or SAS or one of the dozen others.
You'll probably want to use a combination of technical indicators and fundamental data. The data will more than likely be highly correlated, so a feature reduction technique such as PCA will be needed to reduce the number of features.
Also keep in mind your data will constantly have to be updated, trimmed, shuffled around because market conditions will constantly be changing.
I've done research with this for a grad level class and basically I was somewhat successful at picking whether a stock would go up or down the next day but the number of stocks in my data set was fairly small (200) and it was over a very short time frame with consistent market conditions.
What I'm trying to say is what you want to code has been done in very advanced ways in software that already exists. You should be able to input your data into one of these programs and using either regression, or decision trees or clustering be able to do what you want to do.
I have been thinking of this for a few months.
I am thinking about Random Matrix Theory/Wigner's distribution.
I am also thinking of Kohonen self-learning maps.
These comments on speculation and past performance apply to you as well.
I recently completed my masters thesis on deep learning and stock price forecasting. Basically, the current approach seems to be LSTM and other deep learning models. There are also 10-12 technical indicators (TIs) based on moving average that have been shown to be highly predictive for stock prices, especially indexes such as SP500, NASDAQ, DJI, etc. In fact, there are libraries such as pandas_ta for computing various TIs.
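For instance, a couple of common moving-average-based indicators can be computed in a few lines of plain pandas (pandas_ta wraps this kind of computation; the price series here is a random placeholder):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, 300)), name="close")

sma_20 = close.rolling(20).mean()                  # simple moving average
ema_20 = close.ewm(span=20, adjust=False).mean()   # exponential moving average
macd = close.ewm(span=12, adjust=False).mean() - close.ewm(span=26, adjust=False).mean()

features = pd.DataFrame({"close": close, "sma_20": sma_20, "ema_20": ema_20, "macd": macd})
print(features.tail())
```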
I represent a group of academics that are trying to predict stocks in a general form that can also be applied to anything, even the rating of content.
Our algorithm, which we describe as truth seeking, works as follows.
Basically, each participant has their own credence rating: the higher your credence or credibility, the more your vote counts. Credence is worked out by how close each vote is to the credence-weighted average. In other words, you get a better credence value the closer you get to the average vote, where that average has already been adjusted for credence.
For example, let's say that everyone is predicting what a stock's value will be in 30 days' time (a futures option). People who predict near the average get better credence. The key here is that the individual doesn't know what the average is; only the system does. The system is tweaked further by weighting the guesses, so that the target spot that generates the best credence is defined by the votes that are already endowed with more credence. So the smartest people (the historically accurate ones) set the sweet spot that is then used to decide who gets more credence.
The system can also be improved to adjust over time. For example, when you find out the actual value, the people who guessed it can be rewarded with higher credence. In cases where you can't know the future outcome, you can still take into account whether the weighted average changes in the future, and people can be rewarded even more if they spotted the trend early. The point is that we don't even need to know the future outcome; the fact that the weighted rating changed later is enough to reward people who bet early on the sweet spot.
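Here is a minimal sketch of the credence-weighted core of such a scheme (the update rule and constants are illustrative assumptions, not necessarily the exact algorithm described above):

```python
import numpy as np

def weighted_consensus(votes, credence):
    """Credence-weighted average of the participants' votes."""
    return float(np.average(np.asarray(votes, dtype=float),
                            weights=np.asarray(credence, dtype=float)))

def update_credence(votes, credence, target, learning_rate=0.1):
    """Reward votes close to the target (the consensus or, later, the realized
    outcome) and penalize votes far from it."""
    votes = np.asarray(votes, dtype=float)
    closeness = 1.0 / (1.0 + np.abs(votes - target))   # in (0, 1], 1 = spot on
    return np.asarray(credence, dtype=float) * (1 - learning_rate) + learning_rate * closeness

votes = [102.0, 98.0, 110.0, 100.0]
credence = [1.0, 1.0, 0.5, 2.0]

consensus = weighted_consensus(votes, credence)
credence = update_credence(votes, credence, target=consensus)
print(consensus, credence)
```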
Such a system can be used to rate anything from stock prices, currency exchange rates or even content itself.
One such implementation asks people to vote with two parameters. One is their actual vote, and the other is an assurity percentage, which basically means how confident a particular participant is in their vote. In this way, a person with high credence does not need to risk downgrading their credence when they are not sure of their bet, but at the same time the bet can still be incorporated; it just won't sway the sweet spot as much if a low assurity is used. In the same vein, if the guess lands directly on the sweet spot with a low assurity, they won't gain as much benefit as they would have with a high assurity.