Determining past tweet frequency / time interval for a trending keyword - twitter

I'm doing some work with trend detection on Twitter, and I'm looking to get data on tweet frequency for a time period before and after a topic becomes trending.
I know I can use the streaming API to collect real-time data, but that would require a priori knowledge of whether a certain keyword will trend, which isn't very reliable.
Let's say I knew that a certain keyword "foo" became trending on September 3rd (just an arbitrary date). I want to access data on tweet frequency n hours before it is marked as trending and n hours after it's marked as trending.
There probably isn't a way to directly access historical data, but I figured there's no harm in asking.
Thanks in advance!

Related

Legality on data usage

So I'm working on a project for my university where our current plan is to use the YouTube API and do some data analysis. We have some ideas from looking at the Terms of Service and the Developer Policies, but we're not entirely sure about a few things.
Our project does not focus on monetary gain, predicting a video's estimated income, or anything of that nature, nor on trying to determine user data such as passwords/usernames, etc. It's much more about the content and statistics of the videos than anything else.
Our current ideas that we want to be sure would be OK to do and use:
Determine the category of a video given its title
Determine the category of a video given its tags
Determine the category of a video given its description
Determine the category of a video given its thumbnail
Some combination of the above to create an ensemble model
Clustering videos by category/view counts
Sentiment analysis on comments
Trending topics over time
This is just a vague list for now, but I would love to reach out more to figure out what we're allowed to use the data for.
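As a rough illustration of the first idea, here is a minimal sketch of a title-to-category classifier with scikit-learn. The (title, category) pairs below are invented placeholders; in practice they would come from metadata collected via the YouTube API:

    # Minimal sketch: predict a video's category from its title.
    # Training data here is made up for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    titles = ["10 minute home workout", "Top 5 gaming laptops",
              "Easy 15 minute pasta recipe", "Unboxing the new console"]
    categories = ["Fitness", "Tech", "Cooking", "Tech"]

    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(titles, categories)
    print(model.predict(["Best budget laptop for students"]))

The same pipeline shape would apply to tags or descriptions; the thumbnail idea would need an image model instead of TF-IDF.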

Storing any number series data in a time-series database

I would like to use the time-series database InfluxDB to store data points indexed by another number instead of the time that every data point is normally stored against, so I can take advantage of all the features for a series of data points against this number.
For example, I have a rocket doing multiple launches, with several sensors recording temperature, air pressure, fuel level, etc. I want to graph these data points against elevation, not time.
I realise I could store elevation itself against time, then for, say, a temperature reading work out the elevation from the time and project the results - but that computation would lose the performance characteristics of simply querying data points indexed by elevation. Also, third-party tools that use the time-series database, e.g. Grafana, won't be able to simply fetch these data points against elevation rather than time to graph them, without me putting something in between to marry the data up.
One idea I had was to have a fake time where meters = seconds and store against this. I would then need to make that a composite with something else to differentiate rocket launches, e.g. incrementing the year by 1 starting at year 0, so that every launch doesn't appear to start at the same elevation and the "number-series" stay separate from each other - though I guess I would have that problem anyway, and the proper way to handle it would be through tags.
What makes you believe that this approach would be more efficient than storing the elevation jointly with your other sensor data? Fetching data is pretty cheap, so the performance gain might be slight compared to the added complexity of your keys. Not to mention that you would still need to make time part of your elevation timestamp; otherwise you will end up with duplicate pseudo-timestamps and therefore incomplete data, as most time-series databases do not allow multiple values at the same timestamp for a given series.
I would encourage you to also have a look at other time-series databases that include elevation as part of their standard data model. Check out Warp 10 for that matter (standard disclaimer: I am the co-founder of SenX, maker of Warp 10).
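To make the joint-storage suggestion concrete, here is a minimal sketch using the official influxdb-client Python library; the URL, token, org, bucket, measurement, and field names are all placeholders:

    # Keep time as the index and store elevation as just another field,
    # with a tag to separate launches. Connection details are placeholders.
    from influxdb_client import InfluxDBClient, Point
    from influxdb_client.client.write_api import SYNCHRONOUS

    client = InfluxDBClient(url="http://localhost:8086",
                            token="my-token", org="my-org")
    write_api = client.write_api(write_options=SYNCHRONOUS)

    point = (Point("telemetry")
             .tag("launch", "3")
             .field("elevation_m", 1250.0)
             .field("temperature_c", -12.4)
             .field("fuel_pct", 78.2))
    write_api.write(bucket="rockets", record=point)

Graphing temperature against elevation then becomes a query-side transform (select two fields, plot one against the other) rather than a timestamp trick.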

Augmenting forecasts with knowledge of some future events

When using AWS Forecast, is there some way to augment our model with "partial future information" in order to improve forecasts?
I have been getting quite solid looking predictions from AWS Forecast so far, but suspect that I could improve the predictions somewhat substantially if I could provide some information about known future events.
I'm very new to forecasting and machine learning and by "partial future information", I mean:
I am trying to predict how the time-series of variable X will behave in the future
I am training a model with past time-series information for many different variables, including X
I would like to also provide known future time-series information for a subset of these variables because 1) they should have a significant impact on predictions and 2) this would give me the ability to perform "what-if" analysis
To be more concrete:
I am trying to predict future revenue from past revenue, web traffic volume, advertising spending, and promotional discounts
AWS Forecast has been providing me with good forecasts so far (I hold back several months of known data from the model, and its predictions about the "future" match the known data quite well)
However, I would really like to also tell AWS Forecast about, for example, a significant advertising campaign that is planned for the near future
I would also really like to be able to vary some future variable or variables and see how they affect the outcome ("what if I spend $Z on advertising next month?")
Currently, I am providing all of our past revenue, web traffic volume, advertising spending, and promotional discount information to AWS Forecast as a "Target Time Series" in the format of a single CSV file with 3 columns (metric name, timestamp, metric value); approximately 15 distinct values of metric name; and about 11,000 total rows of data (several years' worth of daily values of 15 variables = ~ 2 * 365 * 15 = ~ 11,000 rows). Every metric is provided over the same time interval (for instance, all of the metrics are provided between 2017-10-01 and 2019-11-25).
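For reference, that long format looks something like this (the values are invented):

    metric_name,timestamp,metric_value
    revenue,2017-10-01,12345.67
    web_traffic,2017-10-01,8910
    ad_spend,2017-10-01,250.00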
I'd like to provide some additional, partial data that highlights known future significant events (spending on advertising, promotional discounts) to improve our predictions even further.
For example:
Revenue from 2017-10-01 to 2019-11-25
Web traffic from 2017-10-01 to 2019-11-25
Ad spend from 2017-10-01 to 2019-11-25
Promotional discounts from 2017-10-01 to 2019-11-25
plus planned ad spend for 2019-11-26 to 2020-02-01
Can someone please help me with some of the terminology and the "how-to" mechanics of this?
In general, to use a variable in your historical data, you need a forecast of it in the future as well. It would be like trying to forecast electrical usage with historical temperatures in the data set: if you don't have a forecast of future temperatures, that information hasn't done you any good. I now know the effect of an extra degree of temperature on electrical usage, but what do I do with that if I have no idea what the temperature will be tomorrow?
In your case you have 1 metric you want to forecast (revenue) and three supporting pieces of data: traffic, ad spend, discount. It's great that you have future ad spend, but without the other two, you're a bit out of luck (per the prior paragraph).
However, you can still do something here; you'll just have to make some assumptions. What I would do is choose a fixed value for each of those variables and set it for all future dates. Perhaps appropriate values would be discount at zero (full-price items) and web traffic at, say, 1K per day (I'm making that up). Now you have full data sets for past and future.
With that set up you could now answer the question, albeit with a caveat. The forecast you get out is now saying...
Here's how much revenue we can expect given our planned ad spend, if we offer no discounts and we get 1K people to the website every day.
Perhaps you could improve that by inputting future traffic values that are the same as a year prior. In which case, you could now say ...
Here's how much revenue we can expect given our planned ad spend, if we offer no discounts and the website gets the same traffic as this time last year.
You can take that to variations such as "traffic goes up 10%" or you can take a guess at what the discounts will be or, like before, you could replicate your discounts and traffic from a year prior and say...
Here's how much revenue we can expect given our planned ad spend, if we offer discounts just like last year and see website traffic just like last year.
I suspect you get the idea, so I'll stop the variations there. These are, of course, really just future forecasts of those data; however, it's worth noting that "creating a forecast" of discounts or web traffic doesn't have to be complicated and fancy. "The same as last year" is a perfectly valid "forecast" of what's to come.
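On terminology: in AWS Forecast, variables that are known into the future generally belong in a "related time series" dataset rather than the target time series, and for most of its algorithms a related series has to extend through the whole forecast horizon - which is exactly why you need the filled-in future values described above. Here is a minimal pandas sketch of the "same as last year" fill; the file and column names are hypothetical:

    # Extend a related series (web traffic) through the forecast horizon
    # by copying values from 364 days earlier, which keeps the day of
    # week aligned. File and column names are placeholders.
    import pandas as pd

    history = (pd.read_csv("web_traffic.csv", parse_dates=["timestamp"])
                 .set_index("timestamp"))

    horizon = pd.date_range("2019-11-26", "2020-02-01", freq="D")
    future = history["value"].reindex(horizon - pd.Timedelta(days=364))
    future.index = horizon  # shift last year's values forward

    full_series = pd.concat([history["value"], future])
    full_series.to_csv("web_traffic_extended.csv")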

How to do a Sending Time Optimization Model

I have a question about optimizing the sending time of messages (email/push/text, etc.) to our subscribers. The desired output will be a time interval within each day for each person.
We have the history of the times when a person opened/clicked our messages, their demographic information, and some other browsing history. But I am not sure whether this should be a machine learning model, since each individual behaves so differently and I don't have many good predictors.
Should I just summarize the best reaching time for each person from the historical data, or could it be a machine learning model?
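Either way, a summary of the historical data makes a good baseline to compare any model against. A minimal sketch of the summarization approach (the file and column names are hypothetical):

    # Per user, pick the hour of day at which they most often opened
    # messages. Column names are invented placeholders.
    import pandas as pd

    events = pd.read_csv("opens.csv", parse_dates=["opened_at"])
    events["hour"] = events["opened_at"].dt.hour

    best_hour = (events.groupby("user_id")["hour"]
                       .agg(lambda h: h.mode().iloc[0]))
    print(best_hour.head())

Users with too little history can fall back to the global modal hour; if a model with demographic and browsing features can't beat this baseline, that answers the question.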

Using Artificial Intelligence (AI) to predict Stock Prices

Given a set of data very similar to the Motley Fool CAPS system, where individual users enter BUY and SELL recommendations on various equities, what I would like to do is show each recommendation and somehow rate it (1-5) as to whether it was a good predictor (i.e. correlation coefficient = 1) of the future stock price (or EPS or whatever), a horrible predictor (i.e. correlation coefficient = -1), or somewhere in between.
Each recommendation is tagged to a particular user, so that can be tracked over time. I can also track market direction (bullish/bearish) based on something like the S&P 500 price. The components I think would make sense in the model are:
user
direction (long/short)
market direction
sector of stock
The thought is that some users are better in bull markets than bear markets (and vice versa), and some are better at shorts than longs - and then combinations of the above. I can automatically tag the market direction and sector (based on the market at the time and the equity being recommended).
The thought is that I could present a series of screens that allow me to rank each individual recommendation by displaying the available data: absolute, market, and sector outperformance for a specific time period out. I would follow a detailed checklist for ranking the stocks so that the ranking is as objective as possible. My assumption is that a single user is right no more than 57% of the time - but who knows.
I could load the system and say "Let's rank each recommendation as a predictor of stock value 90 days forward", and that would represent a very explicit set of rankings.
NOW here is the crux: I want to create some sort of machine learning algorithm that can identify patterns over time, so that as recommendations stream into the application we maintain a ranking for that stock (i.e. similar to a correlation coefficient) of the likelihood that the recommendation (in addition to the past series of recommendations) will affect the price.
Now here is the super crux. I have never taken an AI class / read an AI book / never mind anything specific to machine learning. So I am looking for guidance - a sample or description of a similar system I could adapt, places to look for info, or any general help. Or even a push in the right direction to get started...
My hope is to implement this in F# and be able to impress my friends with a new skill set in F# with an implementation of machine learning, and potentially something (application / source) I can include in a tech portfolio or blog space.
Thank you for any advice in advance.
I have an MBA, and teach data mining at a top grad school.
The term project this year was to predict stock price movements automatically from news reports. One team had 70% accuracy, on a reasonably small sample, which ain't bad.
Regarding your question, a lot of companies have made a lot of money on pair trading (find a pair of assets that normally correlate, and buy/sell the pair when they diverge). See the writings of Ed Thorp, author of Beat the Dealer. He's accessible and kinda funny, if not curmudgeonly. He ran a good hedge fund for a long time.
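As a toy illustration of the pair-trading idea (not Thorp's actual method), here is a sketch that flags trades when the spread between two synthetic, normally-correlated price series diverges past two standard deviations:

    # Toy pair-trading signal on synthetic data: long the cheap leg and
    # short the expensive one when the spread z-score exceeds 2.
    import numpy as np

    rng = np.random.default_rng(0)
    common = np.cumsum(rng.normal(size=500))   # shared price driver
    a = 100 + common + rng.normal(scale=0.5, size=500)
    b = 100 + common + rng.normal(scale=0.5, size=500)

    spread = a - b
    z = (spread - spread.mean()) / spread.std()
    signals = np.where(z > 2, -1, np.where(z < -2, 1, 0))
    print("days with a signal:", np.count_nonzero(signals))

A real implementation would use a rolling window and an estimated hedge ratio rather than a full-sample z-score.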
There is probably some room for using data mining to predict companies that will default (be unable to make debt payments), shorting† them, and using the proceeds to buy shares in companies less likely to default. Look into survival analysis. Search Google Scholar for "predict distress" etc. in finance journals.
Also, predicting companies that will lose value after an IPO (and shorting them. edit: Facebook!). There are known biases, in academic literature, that can be exploited.
Also, look into capital structure arbitrage. This is when the value of the stocks in a company suggest one valuation, but the value of the bonds or options suggest another value. Buy the cheap asset, short the expensive one.
Techniques include survival analysis, sequence analysis (Hidden Markov Models, Conditional Random Fields, Sequential Association Rules), and classification/regression.
And for the love of God, please read Fooled By Randomness by Taleb.
† Shorting a stock usually involves calling your broker (one you have a good relationship with) and borrowing some shares of a company. Then you sell them to some poor bastard. Wait a while; hopefully the price has gone down, then you buy the same number of shares back and return them to your broker.
My Advice to You:
There are several Machine Learning/Artificial Intelligence (ML/AI) branches out there:
http://www-formal.stanford.edu/jmc/whatisai/node2.html
I have only tried genetic programming, but in the "learning from experience" branch you will find neural nets. GP/GA and neural nets seem to be the most commonly explored methodologies for the purpose of stock market predictions, but if you do some data mining on Predict Wall Street, you might be able to utilize a Naive Bayes classifier to do what you're interested in doing.
Spend some time learning about the various ML/AI techniques, get a small data set, and try to implement some of those algorithms. Each one will have its strengths and weaknesses, so I would recommend that you try to combine them using a Naive Bayes classifier (or something similar).
My Experience:
I'm working on the problem for my Masters Thesis so I'll pitch my results using Genetic Programming: www.twitter.com/darwins_finches
I started live trading with real money on 09/09/09... yes, it was a magical day! I post the GP's predictions before the market opens (i.e. the timestamps on Twitter) and I also place the orders before the market opens. The profit for this period has been around 25%; we've consistently beaten the Buy & Hold strategy and we're also outperforming the S&P 500 with stocks that are underperforming it.
Some Resources:
Here are some resources that you might want to look into:
Max Dama's blog: http://www.maxdama.com/search/label/Artificial%20Intelligence
My blog: http://mlai-lirik.blogspot.com/
AI Stock Market Forum: http://www.ai-stockmarketforum.com/
Weka is a data mining tool with a collection of ML/AI algorithms: http://www.cs.waikato.ac.nz/ml/weka/
The Chatter:
The general consensus amongst "financial people" is that Artificial Intelligence is a voodoo science: you can't make a computer predict stock prices, and you're sure to lose your money if you try. Nonetheless, the same people will tell you that just about the only way to make money on the stock market is to build and improve your own trading strategy and follow it closely.
The idea of AI algorithms is not to build Chip and let him trade for you, but to automate the process of creating strategies.
Fun Facts:
I understand monkeys can pick better than most experts, so why not an AI? Just make it random and call it an "advanced simian Mersenne twister AI" or something.
RE: monkeys can pick better than most experts - apparently rats are pretty good too!
Much more money is made by the sellers of "money-making" systems than by the users of those systems.
Instead of trying to predict the performance of companies over which you have no control, form a company yourself and fill some need by offering a product or service (yes, your product might be a stock-predicting program, but something a little less theoretical is probably a better idea). Work hard, and your company's own value will rise much quicker than any gambling you'd do on stocks. You'll also have plenty of opportunities to apply programming skills to the myriad of internal requirements your own company will have.
If you want to go down this long, dark, lonesome road of trying to pick stocks you may want to look into data mining techniques using advanced data mining software such as SPSS or SAS or one of the dozen others.
You'll probably want to use a combination of technical indicators and fundamental data. The data will more than likely be highly correlated, so a feature reduction technique such as PCA will be needed to reduce the number of features.
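A minimal sketch of that reduction step with scikit-learn, using random stand-in data for the correlated feature matrix:

    # Reduce correlated indicator features with PCA before modeling.
    # The feature matrix here is random stand-in data.
    import numpy as np
    from sklearn.decomposition import PCA

    X = np.random.rand(200, 40)       # 200 stocks x 40 indicators
    pca = PCA(n_components=0.95)      # keep 95% of the variance
    X_reduced = pca.fit_transform(X)
    print(X.shape, "->", X_reduced.shape)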
Also keep in mind your data will constantly have to be updated, trimmed, shuffled around because market conditions will constantly be changing.
I've done research with this for a grad level class and basically I was somewhat successful at picking whether a stock would go up or down the next day but the number of stocks in my data set was fairly small (200) and it was over a very short time frame with consistent market conditions.
What I'm trying to say is what you want to code has been done in very advanced ways in software that already exists. You should be able to input your data into one of these programs and using either regression, or decision trees or clustering be able to do what you want to do.
I have been thinking of this for a few months.
I am thinking about Random Matrix Theory/Wigner's distribution.
I am also thinking of Kohonen self-organizing maps.
These comments on speculation and past performance apply to you as well.
I recently completed my master's thesis on deep learning and stock price forecasting. Basically, the current approach seems to be LSTMs and other deep learning models. There are also 10-12 technical indicators (TIs) based on moving averages that have been shown to be highly predictive for stock prices, especially for indexes such as the S&P 500, NASDAQ, DJI, etc. In fact, there are libraries such as pandas_ta for computing various TIs.
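For example, a few of those moving-average-based indicators can be computed in a couple of lines with pandas_ta (the input file and column names below are placeholders):

    # Compute some common technical indicators from daily closes.
    import pandas as pd
    import pandas_ta as ta

    df = pd.read_csv("prices.csv", parse_dates=["date"], index_col="date")
    df["sma_10"] = ta.sma(df["close"], length=10)
    df["ema_20"] = ta.ema(df["close"], length=20)
    df["rsi_14"] = ta.rsi(df["close"], length=14)
    print(df.tail())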
I represent a group of academics that are trying to predict stocks in a general form that can also be applied to anything, even the rating of content.
Our algorithm, which we describe as truth seeking, works as follows.
Basically, each participant has their own credence rating: the higher your credence or credibility, the more your vote counts. Credence is worked out by how close each vote is to the credence-weighted average - you get a better credence value the closer you get to the average vote that has already been adjusted for credence.
For example, let's say that everyone is predicting that a stock's value will be at value X in 30 days' time (a futures option). People who predict close to the average get better credence. The key here is that the individual doesn't know what the average is; only the system does. The system is tweaked further by weighting the guesses so that the target spot that generates the best credence is set by the votes that are already endowed with more credence. So the smartest (historically accurate) people project the sweet spot that will be used for further defining who gets more credence.
The system can also be improved to adjust over time. For example, when you find out the actual value, the people who guessed it can be rewarded with higher credence. In cases where you can't know the future outcome, you can still account for whether the average weighted credence changes in the future. People can be rewarded even more if they spotted the trend early. The point is that we don't even need to know the outcome; the fact that the weighted rating changed in the future is enough to reward people who bet early on the sweet spot.
Such a system can be used to rate anything from stock prices, currency exchange rates or even content itself.
One such implementation asks people to vote with two parameters. One is their actual vote and the other is an assurity percentage, which basically means how much a particular participant is assured or confident of their vote. In this way, a person with high credence does not need to risk downgrading their credence when they are not sure of their bet; at the same time the bet can still be incorporated, it just won't sway the sweet spot as much if a low assurity is used. In the same vein, a guess that lands directly on the sweet spot with low assurity won't gain the benefits it would have with high assurity.
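Here is a toy sketch of how such credence-and-assurity weighting might work; the update rule below is my own guess at one plausible formulation, not the group's actual algorithm:

    # Each vote is weighted by credence * assurity; credence is then
    # nudged by how close the vote landed to the weighted "sweet spot".
    # The update rule is a hypothetical illustration.
    def sweet_spot(votes, credence, assurity):
        weights = [c * a for c, a in zip(credence, assurity)]
        return sum(v * w for v, w in zip(votes, weights)) / sum(weights)

    def update_credence(votes, credence, assurity, spot, lr=0.1):
        spread = (max(votes) - min(votes)) or 1.0
        return [c + lr * a * (1 - 2 * abs(v - spot) / spread)
                for v, c, a in zip(votes, credence, assurity)]

    votes    = [105.0, 98.0, 120.0]   # predicted value in 30 days
    credence = [1.0, 2.0, 0.5]        # historical accuracy weights
    assurity = [0.9, 0.6, 1.0]        # self-reported confidence
    spot = sweet_spot(votes, credence, assurity)
    print(spot, update_credence(votes, credence, assurity, spot))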
