A basic query about data mining [closed] - machine-learning

Using data mining, we can find useful patterns in a large data set with techniques such as correlation analysis, and presumably there are open-source tools for this (what are some examples?).
Is this pull-based or push-based? That is, do we provide both a data set and specific queries as input to the data mining engine, which then returns the answers (as in SQL)? Or do we supply only the large data set, and the engine finds patterns on its own (patterns we never knew existed, or could not have formulated queries for), so that instead of us pulling answers to specific queries, it pushes the patterns to us?
A quick read of the Wikipedia article did not clear this up for me.

For an open-source option, have a look at Weka.
As for the push-pull question: it's a bit of both, but it's not quite that simple. You must be looking for something. For example, if you are looking for clusters, there are unsupervised algorithms that will give you an answer with minimal guidance.
In practice, the results are more meaningful if you know the data you are analysing and look for regularities and patterns that make sense.
Playing with Weka will give you a better idea of the range of possibilities.

Python and R are two other popular open-source tools in the data mining area.

A great tool that I have used recently is scikit-learn.
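To make the push/pull distinction concrete, here is a minimal pandas sketch of the "push" style: you hand over a table with no specific query, and the tool reports every pairwise correlation, including ones you never thought to ask about. The columns and the hidden relationship are invented for illustration.

    import numpy as np
    import pandas as pd

    # A toy dataset with made-up columns; in practice this would be your raw data.
    rng = np.random.default_rng(0)
    n = 500
    age = rng.integers(18, 70, n)
    income = age * 800 + rng.normal(0, 5000, n)   # hidden pattern: income grows with age
    visits = rng.poisson(3, n)                    # an unrelated noise column
    df = pd.DataFrame({"age": age, "income": income, "visits": visits})

    # No specific query: just ask for all pairwise correlations and let the
    # strong ones surface on their own ("push").
    print(df.corr().round(2))

A targeted SQL-style question about age and income would be the "pull" counterpart; here the strong age/income correlation is simply pushed out along with everything else.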

ML or rule based [closed]

I already get 85% accuracy with my sklearn text classifier. What are the advantages and disadvantages of building a rule-based system instead? It might save me doing double the work. Could you provide sources and evidence for each side, so that I can make the decision based on my circumstances? In short: when is a rule-based approach favorable, and when is an ML-based approach favorable? Thanks!
Here is an idea:
Instead of going one way or the other, you can set up a hybrid model. Look at the typical errors your machine learning classifier makes, and see if you can come up with a set of rules that capture those errors. Then run these rules on your input; if they apply, finish there, and if not, pass the input on to the classifier.
In the past I did this with a probabilistic part-of-speech tagger. It's difficult to tune a probabilistic model, but it's easy to add a few pre- or post-processing rules to capture some consistent errors.
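A minimal sketch of that hybrid layout, assuming a pre-trained sklearn classifier and vectorizer; the rules themselves are invented placeholders:

    import re

    # Hypothetical hand-written rules that capture errors the classifier makes.
    # Each rule returns a label, or None if it does not apply.
    RULES = [
        lambda text: "spam" if re.search(r"\bfree money\b", text, re.I) else None,
        lambda text: "ham" if text.strip().endswith("?") else None,
    ]

    def hybrid_predict(text, clf, vectorizer):
        # 1. Try the rules first; the first one that fires wins.
        for rule in RULES:
            label = rule(text)
            if label is not None:
                return label
        # 2. Otherwise fall back to the trained classifier.
        return clf.predict(vectorizer.transform([text]))[0]

The ordering matters: the rules act as a cheap, interpretable pre-filter, and the classifier only sees inputs that no rule claimed.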
https://www.linkedin.com/feed/update/urn:li:activity:6674229787218776064?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A6674229787218776064%2C6674239716663156736%29
Yoel Krupnik (CTO & co-founder | smrt - AI For Accounting) writes:
I think it really depends on the specific problem. Some problems can be completely solved with rule based logic, some require machine learning (often in combination with rule based logic before or after).
Advantages of the rule-based approach are that it doesn't require labeled training data, it might quickly provide decent results to be used as a benchmark, and it helps you better understand the problem for the future labeling / text manipulations required by the ML algorithm.

How come a small dataset has a high variance? [closed]

Why does a small dataset lead to high variance? Our professor said this once, and I did not understand why. Any help would be greatly appreciated.
Thanks in advance.
If your data set is small and you train your model to fit it, it is easy to run into overfitting problems. If your data set is big enough, a little overfitting may not be a big problem, but on a small data set it is.
Every single one of us, by the time we enter our professional careers, has been exposed to a larger visual dataset than the largest dataset available to AI researchers. On top of this, we have sound, smell, touch, and taste data coming in from our external senses. In summary, humans have a lot of context on the human world. We have a general common-sense understanding of human situations. When analyzing a dataset, we combine the data itself with our past knowledge in order to come up with an analysis.
The typical machine learning algorithm has none of that: it has only the data you show it, and that data must be in a standardized format. If a pattern isn't present in the data, there is no way for the algorithm to learn it. That is why, given a small dataset, it is more prone to error.
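One way to see the variance claim numerically: draw many small samples and many large samples from the same distribution, and compare how much a simple estimate (here, the sample mean) jumps around. A quick numpy sketch, with invented sizes:

    import numpy as np

    rng = np.random.default_rng(42)

    def spread_of_sample_means(sample_size, trials=2000):
        # Draw `trials` datasets of the given size and record the mean of each.
        means = [rng.normal(0.0, 1.0, sample_size).mean() for _ in range(trials)]
        return np.std(means)

    print("n=10   ->", spread_of_sample_means(10))     # wide spread: high variance
    print("n=1000 ->", spread_of_sample_means(1000))   # narrow spread: low variance

For the sample mean the spread shrinks like 1/sqrt(n), which is one precise sense in which estimates made from small datasets have high variance.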

How to write a program that outputs source code [closed]

This might not be the right place to ask, but I am interested in artificial neural networks and want to learn more.
How do you design a network and train it on source code so it can come up with programs for, for example, easy number theory problems?
What's the general name of this research field?
This is a hugely interesting, and very hard, problem area. It will probably take you months of reading to even understand how to attack the problem. Here are a few things to get you started, and they are more to show the problems you will face than to provide solutions. Start with this post on character-level recurrent networks:
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
Then read this, and related papers:
https://arxiv.org/pdf/1410.5401v2.pdf (the Neural Turing Machines paper)
Next, you probably want to read the classic papers in program synthesis and generation at the parse tree/AST level (mostly out of MIT, I think, in the early 90s.)
Best of luck. This is not trivial.
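To give a flavor of the first step (learning the surface form of code, which is far short of solving number theory problems), here is a minimal character-level language model in PyTorch, in the spirit of the char-rnn post above. The corpus file name and all hyperparameters are placeholders:

    import torch
    import torch.nn as nn

    # Assumed: some source files concatenated into one text file.
    text = open("corpus.py").read()
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

    class CharModel(nn.Module):
        def __init__(self, vocab, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x):
            h, _ = self.rnn(self.embed(x))
            return self.head(h)

    model = CharModel(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=3e-3)
    loss_fn = nn.CrossEntropyLoss()
    seq_len, batch = 100, 32

    for step in range(1000):
        # Random windows of text; the target is the input shifted by one character.
        ix = torch.randint(0, len(data) - seq_len - 1, (batch,)).tolist()
        x = torch.stack([data[i:i + seq_len] for i in ix])
        y = torch.stack([data[i + 1:i + seq_len + 1] for i in ix])
        loss = loss_fn(model(x).reshape(-1, len(chars)), y.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step % 100 == 0:
            print(step, loss.item())

A model like this will produce code-shaped text; making it produce correct programs is the open research problem (program synthesis) that the papers above tackle.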

What is the difference between feature engineering and feature extraction? [closed]

I am struggling to find the difference between the two concepts. From what I understand, both refer to turning raw data into more useful features that describe the problem at hand. Are they the same thing? If not, could anyone please provide examples of both?
Feature extraction is usually used when the original data was of a very different form, in particular when you could not have used the raw data directly.
E.g. the original data were images. You extract the redness value, or a description of the shape of an object in the image. It's lossy, but at least you get some result now.
Feature engineering is the careful preprocessing of data into more meaningful features, even if you could have used the old data.
E.g. instead of using the variables x, y, z you decide to use log(x)-sqrt(y)*z instead, because your engineering knowledge tells you that this derived quantity is more meaningful to solve your problem. You get better results than without it.
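To make the distinction concrete, here is a small numpy sketch of both; the redness feature and the log(x)-sqrt(y)*z formula come straight from the examples above, while the array shapes and values are invented:

    import numpy as np

    # Feature EXTRACTION: the raw data (an RGB image) is unusable as-is,
    # so we reduce it to a number a model can consume -- lossy but workable.
    image = np.random.rand(64, 64, 3)     # stand-in for a real photo
    redness = image[:, :, 0].mean()       # average of the red channel

    # Feature ENGINEERING: x, y, z were already usable, but domain knowledge
    # says this derived quantity is more meaningful for the problem.
    x, y, z = 4.0, 9.0, 2.0
    engineered = np.log(x) - np.sqrt(y) * z

    print(redness, engineered)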
Feature engineering: transforming raw data into features/attributes that better represent the underlying structure of your data, usually done by domain experts.
Feature extraction: transforming raw data into the desired form.

What are some ways to have fun with a large amount of data? (ie, the Twitter, del.icio.us etc. APIs) [closed]

Twitter, Google, Amazon, del.icio.us etc. all give you a lot of data to play with, all for free. There's also a lot of textual data available through initiatives like Project Gutenberg. And that, it seems, is just the tip of the iceberg.
I have been wondering how you could use this data for fun. I'm a first-year IT student, so I have no knowledge of statistics, machine learning, collaborative filtering, etc. My interest in this area was piqued by the book Programming Collective Intelligence by Toby Segaran, and now I want to take a deeper look at what you can do with data. I don't know where to start. Any ideas?
I have also been pondering whether I should go and buy something like Paradigms of Artificial Intelligence Programming. Is it worth the trip across the city?
Try firing books in different styles from Gutenberg through a Markov chain generator - there's one in Perl here to get you started.
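If Perl isn't your thing, here is a minimal word-level Markov generator in Python to the same effect; the corpus file name is a placeholder for any Gutenberg text:

    import random
    from collections import defaultdict

    # Build a first-order word-level Markov chain from a text file.
    words = open("gutenberg_book.txt").read().split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

    # Walk the chain: pick each next word among those that followed
    # the current word somewhere in the original text.
    word = random.choice(words)
    out = [word]
    for _ in range(50):
        followers = chain[word]
        word = random.choice(followers) if followers else random.choice(words)
        out.append(word)
    print(" ".join(out))

Train it on two books with very different styles and compare the output; blending the two chains is a fun follow-up exercise.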
Visualizations, do them, share them.
You can use some of that data to make money (if you're really good!)
http://www.netflixprize.com/ Netflix has made an anonymized dataset available and is asking for better algorithms to predict customer choices.
If you're familiar with Python, try playing around with NLTK. It has tons of libraries for text mining and even machine learning in general. Try working your way through the NLTK book.
If you want to start off with an easy AI problem, you might try clustering.
http://en.wikipedia.org/wiki/Data_clustering
You could use it to group flickr images together by tag, or something cool like that (see the sketch below).
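A sketch of that idea with scikit-learn: represent each image by its set of tags, binarize the tags into a matrix, and cluster. The image names, tags, and cluster count are all invented:

    from sklearn.cluster import KMeans
    from sklearn.preprocessing import MultiLabelBinarizer

    # Hypothetical (image, tags) pairs as you might get from the flickr API.
    images = {
        "img1": ["sunset", "beach", "ocean"],
        "img2": ["beach", "ocean", "surf"],
        "img3": ["city", "night", "lights"],
        "img4": ["night", "city", "skyline"],
    }

    mlb = MultiLabelBinarizer()
    X = mlb.fit_transform(list(images.values()))   # one binary column per tag

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    for name, cluster in zip(images, labels):
        print(name, "-> cluster", cluster)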
You could make puzzles like hangman games, or build a mashup, or try Yahoo Pipes to join information from different sources.
Predict future stock market trends from the data. Profit!
