I'm a relatively new programmer. I'm creating a web-based GIS tool where users can upload custom datasets ranging from 10 rows to 1 million. The datasets can have variable columns and data types. How do you manage these user-submitted datasets?
Is creating a table per dataset a bad idea? (BTW, I'll be using PostgreSQL as the database.)
My apologies if this is already answered somewhere, but my search did not turn up any good results. I may be using bad keywords in my search.
Thanks!
Creating a table per dataset is not a 'bad' idea at all. Swivel.com was a very similar app to what you are describing; we used a table per dataset and it worked very well for generating graphs from user-uploaded datasets and for comparing data across datasets using joins. We had over 10k datasets and close to a million graphs, and some datasets were very large.
You also get a lot of free functionality out of your ORM layer. For instance, we could use ActiveRecord for working with a dataset (each dataset is a generated model class with its table set to the actual table).
Pitfall-wise, you have to do a LOT of joins if you have any kind of cross-dataset calculations.
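Our stack was Rails/ActiveRecord, but as a rough sketch of the same table-per-dataset idea in Python: something like the function below could generate a table from an uploaded file. psycopg2, pandas, the naming scheme, and the type mapping are all assumptions for illustration, not what we actually ran.

```python
# Hypothetical sketch: create one PostgreSQL table per uploaded dataset.
import pandas as pd
import psycopg2
from psycopg2 import sql

# Illustrative pandas-dtype -> PostgreSQL-type mapping; everything else becomes TEXT.
PG_TYPES = {"int64": "BIGINT", "float64": "DOUBLE PRECISION",
            "bool": "BOOLEAN", "datetime64[ns]": "TIMESTAMP"}

def create_dataset_table(conn, dataset_id: int, df: pd.DataFrame) -> str:
    """Create a dedicated table whose columns mirror the uploaded DataFrame."""
    table_name = f"dataset_{dataset_id}"          # generated name, never user-supplied
    columns = [
        sql.SQL("{} {}").format(
            sql.Identifier(col),                  # safely quote user column names
            sql.SQL(PG_TYPES.get(str(dtype), "TEXT")),
        )
        for col, dtype in df.dtypes.items()
    ]
    query = sql.SQL("CREATE TABLE {} ({})").format(
        sql.Identifier(table_name), sql.SQL(", ").join(columns)
    )
    with conn.cursor() as cur:
        cur.execute(query)
    conn.commit()
    return table_name
```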
My coworkers and I recently tackled a similar problem where we had a poor data model in MySQL and were looking for better ways to implement it. We weighed a few different options, including MongoDB, and ended up using the entity-attribute-value (EAV) model. The EAV model is essentially a 3-column model; it allowed us to use a single model to represent a variable number of columns and data types.
You can read a little about our problem here, but it sounds like it might be a good fit for you too.
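For illustration only, a minimal sketch of what an EAV layout can look like; the table name, columns, and helper below are assumptions, not the schema from that post.

```python
# Hypothetical EAV sketch: one narrow table holds every dataset, whatever its columns.
EAV_DDL = """
CREATE TABLE dataset_values (
    entity_id  BIGINT NOT NULL,   -- identifies one uploaded row
    attribute  TEXT   NOT NULL,   -- the user's column name
    value      TEXT   NOT NULL,   -- stored as text, cast on read
    PRIMARY KEY (entity_id, attribute)
);
"""

def rows_to_eav(rows, start_id=1):
    """Flatten a list of dicts (one per uploaded row) into (entity, attribute, value) triples."""
    return [
        (start_id + i, attribute, str(value))
        for i, row in enumerate(rows)
        for attribute, value in row.items()
    ]

# Two rows with different columns still fit the same three-column table.
print(rows_to_eav([{"name": "Park", "area_km2": 4.2}, {"name": "Lake", "depth_m": 30}]))
```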
I would like to know which model I should choose to forecast monthly sales. Should I go for regression approaches or time-series methods, given that I have only 1.5 years of data?
One of the first steps I would take is to clearly determine how many features you have.
In the case of univariate forecasting (observations in time of a single variable), you would most likely resort to statistical approaches such as ARIMA/SARIMA (I assume the concept of seasonality is known; if not, please read up on the properties of time series here: https://www.dummies.com/programming/big-data/data-science/key-properties-of-a-time-series-in-data-analysis/).
If you have multiple features (observations in time of multiple variables), you could first try a VAR (vector autoregression).
Try these models first, and only then proceed to more complicated ones such as LSTMs/CNNs.
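As a rough sketch of those two starting points in Python with statsmodels; the sales series below is synthetic and the model orders are placeholders you would tune, not recommended settings.

```python
# Hypothetical example: SARIMA for a single monthly series, VAR for several series.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.api import VAR

# Univariate case: 18 months of (synthetic) sales with yearly (12-month) seasonality.
sales = pd.Series(np.random.rand(18) * 100,
                  index=pd.date_range("2020-01-01", periods=18, freq="MS"))
sarima = SARIMAX(sales, order=(1, 1, 1), seasonal_order=(1, 0, 0, 12)).fit(disp=False)
print(sarima.forecast(steps=3))                 # forecast the next three months

# Multivariate case: sales plus another variable, modelled jointly with a VAR.
data = pd.DataFrame({"sales": sales, "ad_spend": np.random.rand(18) * 10})
var = VAR(data).fit(maxlags=2)
print(var.forecast(data.values[-2:], steps=3))  # forecast both series three steps ahead
```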
Supporting @Nicolae Petridean's point, the principle of Occam's razor should always be applied: start with simple models, and only after having tried several simpler ones should you progress to deep learning techniques.
Also, bear in mind that in the case of the latter, you will need much more data compared to simpler statistical/mathematical models or even classical machine learning ones.
Depending on the data that you have, either one or the other might work, or other techniques entirely. Try two simple models, one from each of the two approaches, and validate them against a common validation dataset; this way you will have your answer. Nobody can answer your question without good insight into the data you have for training. My gut feeling is that I would probably start with a regression, but in the end I assume you will end up using something else. It is always a good option to start with simple models first to better understand the problem, and then progressively fine-tune or try other tricks and more complicated models, depending on what the models you already have do or do not learn.
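A rough sketch of that "two simple models, one common validation set" idea, with the last three months held out; the data is synthetic and the lag-feature regression is just one simple choice among many.

```python
# Hypothetical comparison: lag-feature regression vs. SARIMA on the same holdout months.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from statsmodels.tsa.statespace.sarimax import SARIMAX

sales = pd.Series(np.random.rand(18) * 100,
                  index=pd.date_range("2020-01-01", periods=18, freq="MS"))
train, valid = sales[:-3], sales[-3:]           # last 3 months form the common validation set

# Model 1: linear regression on a one-month lag (one-step-ahead predictions).
frame = pd.DataFrame({"y": sales, "lag1": sales.shift(1)}).dropna()
reg = LinearRegression().fit(frame.loc[train.index[1:], ["lag1"]],
                             frame.loc[train.index[1:], "y"])
reg_pred = reg.predict(frame.loc[valid.index, ["lag1"]])

# Model 2: SARIMA fitted on the training months only.
sarima_pred = SARIMAX(train, order=(1, 1, 1)).fit(disp=False).forecast(steps=3)

print("regression MAE:", mean_absolute_error(valid, reg_pred))
print("SARIMA MAE:    ", mean_absolute_error(valid, sarima_pred))
```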
Have a look at this Kaggle competition: https://www.kaggle.com/c/competitive-data-science-predict-future-sales
Check several notebooks from there and you may understand more about what works or does not work in this kind of prediction.
Link to notebooks: https://www.kaggle.com/c/competitive-data-science-predict-future-sales/notebooks
In the regression problem I'm working with, there are five independent columns and one dependent column. I cannot share the dataset details directly due to privacy, but one of the independent variables is an ID field that is unique for each example.
I feel like I should not be using the ID field when estimating the dependent variable, but this is just a gut feeling; I have no strong reason for it.
What shall I do? Is there any way to decide which variables to use and which to ignore?
Well, I agree with @desertnaut. The ID attribute does not seem relevant when creating a model and provides no help in prediction.
The term you are looking for is feature selection. Since it is a broad topic, I will just mention the methods most commonly used by data scientists.
For regression problems you can try a correlation heatmap to find the features that are highly correlated with the target:
import seaborn as sns
sns.heatmap(df.corr())  # pairwise correlations between the features and the target
There are several other ways too, like PCA or using the built-in feature selection of tree-based models to find the right features for your model.
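A hedged sketch of the tree-based option, assuming your data is already in a DataFrame df like the one above; "target" and "id" are placeholder names for your actual dependent variable and ID column.

```python
# Hypothetical example: rank features by random-forest importance.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

X = df.drop(columns=["target", "id"])    # placeholder column names: adjust to your data
y = df["target"]

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importances = pd.Series(forest.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))   # keep the top-ranked features
```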
You can also try James Phillips' method. This approach is limited, since model training time will increase linearly with the number of features, but in your case, where you have only four features to compare, you can try it out. You can compare the regression model trained with all four features against models trained with only three features, dropping one of the four features in turn. This would mean training four regression models and comparing them.
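Continuing with the X and y from the sketch above, that drop-one-feature comparison could look roughly like this; cross-validated R² is just one possible metric.

```python
# Hypothetical example: compare the full model with each leave-one-feature-out model.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

baseline = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
print("all features:", round(baseline, 3))

for dropped in X.columns:
    score = cross_val_score(LinearRegression(), X.drop(columns=[dropped]), y,
                            cv=5, scoring="r2").mean()
    print(f"without {dropped}: {round(score, 3)}")
```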
According to you, the ID variable is unique for each example, so the model won't be able to learn anything from it: every example comes with a new ID, and since each ID occurs only once there are no general patterns to learn.
Regarding feature elimination, it depends. If you have domain knowledge, you can engineer or remove features based on that alone. If you don't know much about the domain, you can try basic techniques like backward selection or forward selection via cross-validation to get the model with the best value of the metric that you're working with.
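As a non-authoritative sketch, scikit-learn's SequentialFeatureSelector does exactly this kind of forward/backward selection with cross-validation; X and y are assumed to be your predictor DataFrame and target Series.

```python
# Hypothetical example: backward selection with 5-fold cross-validation.
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

selector = SequentialFeatureSelector(
    LinearRegression(),
    n_features_to_select=3,      # or "auto" to let the CV score decide
    direction="backward",        # use "forward" for forward selection
    cv=5,
)
selector.fit(X, y)
print(X.columns[selector.get_support()])   # the features kept by backward selection
```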
My task is to extract information from various web pages of a particular site. The information to be extracted can take the form of a product name, product ID, price, etc., and it is given as natural-language text. I have been asked to extract that information using some machine learning algorithm. I thought of using NER (Named Entity Recognition) and training it on custom training data (which I can prepare from the scraped data by manually labeling the data as required). I wanted to know if a model can even work this way.
Also, let me know if I can improve this question further.
You say a particular site. I am assuming that means you have some fair idea of what the structure of the web pages is: whether the data is in table form or free text, and how the website generally looks. In that case, a simple regex (for prices, IDs, etc.), supported by a POS tagger to extract product names, is enough for you. A supervised approach is definitely overkill and might underperform the simpler regex.
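A minimal sketch of the regex-first idea; the patterns below are assumptions about how prices and product IDs might look on the pages, so they would need adjusting to the actual site.

```python
# Hypothetical example: pull prices and product IDs out of scraped text with regexes.
import re

text = "Acme Widget Pro, product id: AB-10432, now only $19.99 (was $24.99)"

price_pattern = re.compile(r"\$\s?\d+(?:\.\d{2})?")     # matches $19.99, $ 24.99, ...
id_pattern = re.compile(r"\b[A-Z]{2}-\d{4,6}\b")        # matches AB-10432-style IDs

print(price_pattern.findall(text))   # ['$19.99', '$24.99']
print(id_pattern.findall(text))      # ['AB-10432']
# Product names could then be picked out with a POS tagger (e.g. noun chunks)
# rather than training a full NER model.
```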
I have a college assignment to build a data warehouse for product inventory management, which should help inventory management understand in-hand value and, using historical data, predict when to bring in new inventory. I have been reading to find the best way to do this using cubes or data marts. My question is: do I have to create a data warehouse first and build the cube/data mart on top of it, or can I extract transactional data directly into a cube/data mart?
Next, is it mandatory to build a star schema (or another DW schema) for this assignment? After reading multiple articles, my understanding is that an OLAP cube can have multiple facts surrounded by dimensions.
Your question is far bigger than you know!
As a general principle, you would have one or more staging databases which land the data from one or more OLTP systems. The staging database(s) would then feed data into a data warehouse (DWH). On top of the DWH, a number of marts would be built; these are typically subject-area specific.
There are several DWH methodologies:
Kimball Star Schema - you mention a star schema above; this broadly is the Kimball star schema, proposed by Ralph Kimball. I would also include here snowflake schemas, which are a variation on star schemas.
Inmon Model - proposed by Bill Inmon.
Data Vault - proposed by Dan Linstedt. It has a large user base in the Benelux countries. There are variations on the Data Vault.
It's important not to get confused between a DWH methodology and the technology used to implement a DWH, though some technologies lend themselves to particular methodologies. For example, OLAP cubes work easily with Kimball star schemas. There is no particular need to use relational technology for every database in the chain; some NoSQL databases (like Cassandra) lend themselves well to staging databases.
To answer your specific questions:
"Do I have to create a Data warehouse first and on top of that build a Cube / Data mart, or can I directly extract transactional data into a Cube / Data Mart?"
OLAP cubes are optional if you have a specific mart that is tailored to your reporting, but it depends on your reporting and analysis requirements and the speed of access you need.
A data mart could actually be built using only an OLAP cube, fed straight from the DWH.
Specifically on inventory management, all of these DWH methodologies would be suitable.
I can't answer your last question, as that seems to be the point of the assignment and you haven't given enough information, but you need to do some research into dimensional modelling, so I hope this has pointed you in the right direction!
The answer is yes: a star model will always support better analysis, but it is relational, while a cube is multidimensional (it performs all the data crossings) and often uses star models as its data source (recommended).
OLAP cubes are generally used for fast analysis and summaries of data.
So, as a standard approach, I recommend you build all the star models you need and then generate the OLAP cubes for your analysis.
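Purely as an illustration of such a star model for the inventory case (every table and column name below is an assumption for the assignment, not a prescribed design):

```python
# Hypothetical star schema: one periodic-snapshot fact table surrounded by dimensions.
STAR_SCHEMA_DDL = """
CREATE TABLE dim_product (
    product_key    INT PRIMARY KEY,
    product_name   VARCHAR(100),
    category       VARCHAR(50),
    subcategory    VARCHAR(50)
);

CREATE TABLE dim_warehouse (
    warehouse_key  INT PRIMARY KEY,
    warehouse_name VARCHAR(100),
    region         VARCHAR(50)
);

CREATE TABLE dim_date (
    date_key       INT PRIMARY KEY,   -- e.g. 20240131
    full_date      DATE,
    month          INT,
    year           INT
);

-- One row per product, per warehouse, per day: in-hand quantity and value.
CREATE TABLE fact_inventory (
    date_key          INT REFERENCES dim_date (date_key),
    product_key       INT REFERENCES dim_product (product_key),
    warehouse_key     INT REFERENCES dim_warehouse (warehouse_key),
    quantity_on_hand  INT,
    inventory_value   NUMERIC(12, 2)
);
"""
```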
As this is a 'homework' question, I would guess that the lecturer is looking for the pros/cons between Kimball and Inmon, which are the two 'default' designs for end-user reporting. In the real world, Data Vault can also be applied as part of the DWH strategy, but it plays a different role and is not recommended for end-user consumption.
Data Vault is a design pattern for bringing data in from source systems unmolested. Data will inevitably need to be cleaned before being presented to the end-user solution, and DV allows the DWH ETL process to be re-run if any issues are found or the business requirements change, especially if the granularity level goes down (e.g. the original fact table was for sales and the dimension requirements were for salesman and product category; now they want fact-sales by sales round and salesman for product subcategory and category. Without DV you do not have the granular data to replay the historical information and rebuild the DWH).
So I have a data set which consists of tweets from various news organizations. I've loaded it into RapidMiner, tokenized it, and produced some n-grams of it. Now I want to be able to have RapidMiner automatically classify my data into various categories based on the topic of the tweets.
I'm pretty sure RapidMiner can do this, but according to the research I've done, I need a training data set to show RapidMiner how I want things classified. So I need a training data set, and given the categories I want to classify things into, I might have to create my own.
So my questions are these:
1) Is there a publicly available training data set for Twitter data that focuses on the topic of the tweet, as opposed to sentiment analysis?
2) If there isn't one publicly available, how can I create my own? My idea was to go through the tweets themselves and associate the tokens and n-grams with the categories I want. My concern with that is that I won't be able to manually classify enough tweets to create a training data set comprehensive enough to give the automatic classifier a good accuracy rate.
3) Any general advice for topical classification of text data would be great. This is the first time that I've done a project like this, and I'm sure there are things I could improve on. :)
There may be training corpora that work for you, but you need to say what your topics or categories are in order to identify one. The fact that this is Twitter may be relevant, but the data source is likely to matter much less for the classification accuracy you will achieve than the topic does. So if you take the infamous 20 Newsgroups data set, it is likely to work on Twitter as well, but only if the categories you are after are the 20 categories from that data set. If you want to classify cats vs. dogs or Android vs. iPhone, you need to find a data set for that.
In most cases you will have to create initial labels manually, which is, as you say, a lot of work. One workaround might be to start with something simpler, like a keyword search, to create subsets of your tweets that you know deal with a particular category. Then you build the model on top of that and hope that it generalizes enough to identify the same categories even where the original keywords do not occur.
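A sketch of that keyword-bootstrap idea, shown with scikit-learn rather than RapidMiner just to keep it short; the categories, seed keywords, and tweets are made up for illustration.

```python
# Hypothetical example: keyword matches give rough labels, a classifier learns from them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

KEYWORDS = {"politics": ["election", "senate"], "sports": ["match", "league"]}

def weak_label(tweet):
    """Assign a rough category if any seed keyword occurs, otherwise None."""
    lowered = tweet.lower()
    for category, words in KEYWORDS.items():
        if any(word in lowered for word in words):
            return category
    return None

tweets = ["Senate passes new election bill", "Election results expected soon",
          "Late goal decides the league match", "Big match tonight in the league",
          "Something else entirely"]
labeled = [(t, weak_label(t)) for t in tweets if weak_label(t) is not None]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit([t for t, _ in labeled], [c for _, c in labeled])
print(model.predict(["Early election results trickle in"]))   # most likely 'politics'
```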
Alternatively, depending on your application (and if you actually want to build an application), you may as well start with only a small data set and accept that the classification will be poor at first. Then you generate classifications, show them to the users of your app, and collect some form of explicit or implicit feedback on them (e.g. users can flag tweets as incorrectly classified). This way you improve your training corpus and periodically update your model.
Finally, if you do not know what your topics are and you want RapidMiner to identify the topics, you may want to try clustering as opposed to classification. Just create a few clusters and look at the top words for each cluster. They may well be quite dissimilar and describe what the respective clusters are about.
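A quick sketch of the clustering alternative (again with scikit-learn for brevity; the tweets are placeholders):

```python
# Hypothetical example: cluster tweets and inspect each cluster's heaviest terms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["Senate passes new election bill", "Election results expected soon",
          "Late goal decides the league match", "Big match tonight in the league"]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = np.array(vectorizer.get_feature_names_out())
for i, center in enumerate(km.cluster_centers_):
    top = terms[np.argsort(center)[::-1][:3]]        # three heaviest terms per cluster
    print(f"cluster {i}: {', '.join(top)}")
```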
I believe your third question may be a bit broad for Stack Overflow and is probably better answered by a textbook.