I am going to do a research project that involves predicting imminent failure of an engine using time data obtained from sensors. The data basically contains the readings of various embedded sensors, taken every 10 minutes over many months. Such data is available for about 100 different units (all the same engine model), along with the time of failure.
While I do have a reasonably good understanding of Machine Learning, I am at a loss as to how to approach this. I have done a few projects that involved static datasets (using SVMs, Neural Nets, Logistic Regression etc.) and even one on predicting time series. But this is quite different. While the project involves time data, it is hardly a matter of predicting future values. Rather, it is a case of anomaly detection on sequential time data.
Please could you give some ideas as to how I could approach it?
I'm particularly interested in Neural Networks/ Deep Learning, so any ideas on using them for this task would also be welcome. I would prefer to use Python or R, although I would be open to using something else if it was particularly geared for this sort of task.
Also, could you give me some formal terms I could use to search for relevant literature?
Thanks
As a general comment, try hard to express everything that you know about the physical system in a model, then use that model for inference. I worked on such problems in my dissertation: Unified Prediction and Diagnosis in Engineering Systems by means of Distributed Belief Networks (see chapter 6). I can say more if you provide additional details about your problem domain.
Don't expect general machine learning models (neural networks, SVM, etc) to figure out the structure of the problem for you. Having the right form of the model is much, much more important than having a general model + lots of data -- this is the summary of my experience.
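To make that concrete, here is one common way to give the problem structure as a supervised learning task (a sketch of my own, not something from the answer above): cut each unit's sensor history into fixed-length windows and label each window by whether the unit fails within some horizon afterwards. The column names, window length, and horizon are illustrative assumptions.

```python
# Minimal sketch: "imminent failure" as window classification.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

WINDOW = 36      # 36 readings at 10-minute intervals = 6 hours of history
HORIZON = 144    # label = 1 if failure occurs within the next 24 hours

def make_windows(df):
    """df is assumed to have columns: unit_id, timestamp, sensor_1..sensor_k,
    time_to_failure (in number of readings)."""
    X, y, groups = [], [], []
    sensors = [c for c in df.columns if c.startswith("sensor_")]
    for unit, g in df.sort_values("timestamp").groupby("unit_id"):
        vals = g[sensors].to_numpy()
        ttf = g["time_to_failure"].to_numpy()
        for end in range(WINDOW, len(g)):
            X.append(vals[end - WINDOW:end].ravel())   # flatten the window
            y.append(int(ttf[end] <= HORIZON))
            groups.append(unit)
    return np.array(X), np.array(y), np.array(groups)

# X, y, groups = make_windows(sensor_df)
# clf = GradientBoostingClassifier()
# # split by unit, never by row, so a unit's own future doesn't leak into training
# print(cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5)))
```

Grouping the cross-validation folds by unit is the important design choice here; literature on this framing usually appears under terms such as predictive maintenance, remaining useful life, and condition monitoring.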
To be more specific, the traditional chatbot framework consists of 3 components:
NLU (1.intent classification 2. entity recognition)
Dialogue Management (1. DST 2. Dialogue Policy)
NLG.
I am just confused: if I use a deep learning model (seq2seq, LSTM, transformer, attention, BERT…) to train a chatbot, does it cover all those 3 components? If so, could you explain more specifically how it relates to those 3 parts? If not, how can I combine them?
For example, I have built a closed-domain chatbot, but it is only task-oriented and cannot handle other parts like greetings… And it can't handle the problem of Coreference Resolution (it seems it doesn't have Dialogue Management).
It seems like your question can be split into two smaller questions:
What is the difference between machine learning and deep learning?
How does deep learning factor into each of the three components of chatbot frameworks?
For #1, deep learning is an example of machine learning. Think of your task as a graphing problem. You transform your data so it has an n-dimensional representation on a plot. The goal of the algorithm is to create a function that represents a line drawn on the plot that (ideally) cleanly separates the points from one another. Each sector of the graph represents whatever output you want (be it a class/label, related words, etc). Basic machine learning creates a line on a 'linearly separable' problem (i.e. it's easy to draw a line that cleanly separates the categories). Deep learning enables you to tackle problems where the line might not be so clean by creating a really, really, really complex function. To do this, you need to be able to introduce multiple dimensions to the mapping function (which is what deep learning does). This is a very surface-level look at what deep learning does, but that should be enough to handle the first part of your question.
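As a toy illustration of that separating-line picture (my own sketch, not part of the original answer), the two-moons dataset is not linearly separable, so a linear classifier plateaus while a small neural network learns the curved boundary:

```python
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# two interleaving half-circles: no straight line cleanly separates them
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LogisticRegression().fit(X_tr, y_tr)
nonlinear = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                          random_state=0).fit(X_tr, y_tr)

print("linear model accuracy:   ", linear.score(X_te, y_te))
print("nonlinear model accuracy:", nonlinear.score(X_te, y_te))  # typically noticeably higher
```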
For #2, a good quick answer for you is that deep learning can be a part of each component of the chatbot framework depending on how complex your task is. If it's easy, then classical machine learning might be good enough to solve your problem. If it's hard, then you can begin to look into deep learning solutions.
Since it sounds like you want the chatbot to go a bit beyond simple input-output matching and handle complicated semantics like coreference resolution, your task seems sufficiently difficult and a good candidate for a deep learning solution. I wouldn't worry so much about identifying a specific solution for each of the chatbot framework steps because the tasks involved in each of those steps blend into one another with deep learning (e.g. a deep learning solution wouldn't need to classify intent and then manage dialogue, it would simply learn from hundreds of thousands of similar situations and apply a variation of the most similar response).
I would recommend handling the problem as a translation problem - but instead of translating from one language to another, you're translating from the input query to the output response. Translation frequently needs to resolve coreference, and the solutions people have used for that might be an ideal course of action for you.
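As a rough sketch of that "responses as translation" idea (my own illustration; the model name and prompt are placeholders, and the pretrained model would still need fine-tuning on (query, response) pairs exactly as you would fine-tune on sentence pairs for translation), an encoder-decoder from the transformers library is driven like a translation model:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# generic encoder-decoder; fine-tune it on (user query, bot response) pairs
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

query = "respond: I'd like to book a table for two tonight."
inputs = tokenizer(query, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```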
Here are some excellent resources to read up on in order to frame your problem and how to solve it:
Google's Neural Machine Translation
Fine Tuning Tasks with BERT
There is always a trade-off between using traditional machine learning models and using deep learning models.
Deep learning models require large amounts of data to train, and both training and testing times increase. But they usually give better results.
Traditional ML models work well with less data, with comparatively moderate performance. Inference time is also lower.
For Chatbots, latency matters a lot. And the latency depends on the application/domain.
If the domain is banking or finance, people are okay with waiting a few seconds, but they are not okay with wrong results. On the other hand, in the entertainment domain, you need to deliver results as quickly as possible.
The decision depends on the application domain + the data size you are having + the expected precision.
RASA is something worth looking into.
Suppose you're trying to use machine learning for a classification task like, let's say, looking at photographs of animals and distinguishing horses from zebras. This task would seem to be within the state of the art.
But if you take a bunch of labelled photographs and throw them at something like a neural network or support vector machine, what happens in practice is that zebras are so much rarer than horses that the system just ends up learning to say 'always a horse' because this is actually the way to minimize its error.
That may minimize error, but it's also not a very useful result. What is the recommended way to tell the system 'I want the best guess at which photographs are zebras, even if this does create some false positives'? There doesn't seem to be a lot of discussion of this problem.
One of the things I usually do with imbalanced classes (or skewed data sets) is simply generate more data. I think this is the best approach. You could go out in the real world and gather more data of the imbalanced class (e.g. find more pictures of zebras). You could also generate more data by simply making copies or duplicating it with transformations (e.g. flip horizontally).
You could also pick a classifier that uses an alternate evaluation (performance) metric over the one usually used - accuracy. Look at precision/recall/F1 score.
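A minimal sketch of both ideas together (my own example, with the rare 'zebra' class assumed to be label 1 and X, y assumed to be NumPy arrays): oversample the rare class in the training split and report precision/recall/F1 on an untouched test split.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

def train_balanced(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    rare_X, rare_y = X_tr[y_tr == 1], y_tr[y_tr == 1]
    # oversample the rare class (with replacement) until the training set is balanced
    rare_X_up, rare_y_up = resample(rare_X, rare_y,
                                    n_samples=(y_tr == 0).sum(), random_state=0)
    X_bal = np.vstack([X_tr[y_tr == 0], rare_X_up])
    y_bal = np.concatenate([y_tr[y_tr == 0], rare_y_up])

    clf = SVC(class_weight="balanced").fit(X_bal, y_bal)    # belt and braces
    print(classification_report(y_te, clf.predict(X_te)))   # precision/recall/F1 per class
    return clf
```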
Week 6 of Andrew Ng's ML course talks about this topic: link
Here is another good web page I found on handling imbalanced classes: link
With this type of unbalanced data problem, it is a good approach to learn the patterns associated with each class, as opposed to simply comparing classes - this can be done via unsupervised learning first (such as with autoencoders). A good article on this is available at https://www.r-bloggers.com/autoencoders-and-anomaly-detection-with-machine-learning-in-fraud-analytics/amp/. Another suggestion - after running the classifier, the confusion matrix can be used to determine where additional data should be pursued (i.e. many zebra errors).
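For reference, here is a minimal Python/Keras sketch of that autoencoder idea (the linked article uses R; the layer sizes and the error threshold below are illustrative assumptions): train on the majority class only, then flag examples whose reconstruction error is unusually high.

```python
import numpy as np
from tensorflow import keras

def build_autoencoder(n_features):
    inputs = keras.Input(shape=(n_features,))
    encoded = keras.layers.Dense(16, activation="relu")(inputs)
    encoded = keras.layers.Dense(4, activation="relu")(encoded)
    decoded = keras.layers.Dense(16, activation="relu")(encoded)
    decoded = keras.layers.Dense(n_features, activation="linear")(decoded)
    model = keras.Model(inputs, decoded)
    model.compile(optimizer="adam", loss="mse")
    return model

# ae = build_autoencoder(X_majority.shape[1])
# ae.fit(X_majority, X_majority, epochs=50, batch_size=256, validation_split=0.1)
# errors = np.mean((X_all - ae.predict(X_all)) ** 2, axis=1)
# flagged = errors > np.percentile(errors, 99)   # tune the threshold on a validation set
```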
I'm looking at the SAS Viya machine learning demo. It races some machine learning algorithms against each other on a given dataset. All models produce almost equally good "lift", as shown in the lift diagrams in the output.
If you tweak the learning to run on a smaller subset of the data (only 0.002% of the total data set: proc partition data=&casdata partition samppct=0.002;), most algorithms get into problems producing lift.
But the neural network is still performing very well. Feature or bug? I could imagine that the script does not re-initialize the network, but it is hard to guess from the calls alone.
I got good answers over at the SAS Community, posted by BrettWujek and Xinmin:
Mats - the short answer without running some studies of my own is that neural networks are highly adaptive and can train very accurate models with far fewer observations than many other techniques. The tree-based models are going to be quite unstable with very few observations. In this case you sampled all the way down to around 20 observations...even that might be sufficient for a neural network if the space is not overly nonlinear.
As for your last comment - it seems you are referring to what is known as warm start, where a previously trained model can be used as a starting point and refined by providing new observations. That is NOT what is happening here, as that capability is only becoming available in our upcoming release, which is just over a month away.
Brett
And I've got some detail on this from Xinmin:
Mats, PROC NNET initializes weights randomly; if you specify a seed in the TRAIN statement, the initial weights are repeatable. NNET training is powered by a sophisticated nonlinear optimization solver; if the log shows a "converged" status, it means the model is fit very well.
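The same reproducibility point, illustrated outside SAS with a small Python example (my own sketch): fixing the seed is what makes two networks start from identical random initial weights and follow identical training trajectories.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# tiny synthetic regression problem, just to have something to fit
X = np.random.RandomState(0).rand(50, 3)
y = X.sum(axis=1)

net_a = MLPRegressor(hidden_layer_sizes=(10,), random_state=42, max_iter=500).fit(X, y)
net_b = MLPRegressor(hidden_layer_sizes=(10,), random_state=42, max_iter=500).fit(X, y)
print(np.allclose(net_a.coefs_[0], net_b.coefs_[0]))   # True: same seed, same weights
```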
I want to teach myself enough machine learning so that I can, to begin with, understand enough to put to use available open source ML frameworks that will allow me to do things like:
1. Go through the HTML source of pages from a certain site and "understand" which sections form the content, which the advertisements, and which form the metadata (neither the content nor the ads - e.g. TOC, author bio etc.)
2. Go through the HTML source of pages from disparate sites and "classify" whether the site belongs to a predefined category or not (a list of categories will be supplied beforehand).
3. ... similar classification tasks on text and pages.
As you can see, my immediate requirements are to do with classification on disparate data sources and large amounts of data.
As far as my limited understanding goes, will taking the neural net approach take a lot more training and maintenance than putting SVMs to use?
I understand that SVMs are well suited to (binary) classification tasks like mine, and that open source frameworks like libSVM are fairly mature?
In that case, what subjects and topics does a computer science graduate need to learn right now, so that the above requirements can be solved, putting these frameworks to use?
I would like to stay away from Java, if possible, and I have no language preferences otherwise. I am willing to learn and put in as much effort as I possibly can.
My intent is not to write code from scratch but, to begin with, to put the various available frameworks to use (I do not know enough to decide which, though), and I should be able to fix things should they go wrong.
Recommendations on learning specific portions of statistics and probability theory would not be unexpected on my side, so do suggest them if required!
I will modify this question if needed, depending on all your suggestions and feedback.
"Understanding" in machine learn is the equivalent of having a model. The model can be for example a collection of support vectors, the layout and weights of a neural network, a decision tree, or more. Which of these methods work best really depends on the subject you're learning from and on the quality of your training data.
In your case, learning from a collection of HTML sites, you will want to preprocess the data first; this step is also called "feature extraction". That is, you extract information out of the page you're looking at. This is a difficult step, because it requires domain knowledge and you'll have to extract useful information, otherwise your classifiers will not be able to make good distinctions. Feature extraction will give you a dataset (a matrix with features for each row) from which you'll be able to create your model.
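As a rough sketch of what such a feature-extraction step could look like for the "content vs. ads vs. metadata" task (my own illustration; the features chosen are only guesses, and picking good ones is exactly the domain-knowledge part):

```python
from bs4 import BeautifulSoup

def block_features(html):
    """Turn one HTML page into a list of feature dicts, one per block element."""
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for block in soup.find_all(["div", "p", "td", "span"]):
        text = block.get_text(" ", strip=True)
        links = block.find_all("a")
        rows.append({
            "tag": block.name,
            "text_len": len(text),
            "n_links": len(links),
            "link_density": sum(len(a.get_text()) for a in links) / (len(text) + 1),
            "has_img": int(block.find("img") is not None),
        })
    return rows   # label these blocks by hand to build a training set
```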
Generally in machine learning it is advised to also keep a "test set" that you do not train your models with, but that you will use at the end to decide on the best method. It is of extreme importance that you keep the test set hidden until the very end of your modeling step! The test data basically gives you a hint of the "generalization error" that your model is making. Any model with enough complexity and learning time tends to learn exactly the information that you train it with. Machine learners say that the model "overfits" the training data. Such overfitted models seem good, but this is just memorization.
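A minimal illustration of that held-out test set (my own example, assuming the feature matrix and hand-made labels from the previous step): split once, fit on the training part only, and look at the test score a single time at the very end.

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def fit_and_evaluate(X, y):
    # X = feature matrix from the preprocessing step, y = hand-made labels
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=0)
    model = LinearSVC().fit(X_train, y_train)
    print("train accuracy:", model.score(X_train, y_train))  # optimistic
    print("test accuracy: ", model.score(X_test, y_test))    # estimate of generalization
    return model
```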
While software support for preprocessing data is very sparse and highly domain dependent, as adam mentioned, Weka is a good free tool for applying different methods once you have your dataset. I would recommend reading several books. Vladimir Vapnik, the inventor of SVMs, wrote "The Nature of Statistical Learning Theory". You should get familiar with the process of modeling, so a book on machine learning is definitely very useful. I also hope that some of the terminology might be helpful to you in finding your way around.
Seems like a pretty complicated task to me; step 2, classification, is "easy" but step 1 seems like a structure learning task. You might want to simplify it to classification on parts of HTML trees, maybe preselected by some heuristic.
The most widely used general machine learning library (freely) available is probably WEKA. They have a book that introduces some ML concepts and covers how to use their software. Unfortunately for you, it is written entirely in Java.
I am not really a Python person, but it would surprise me if there aren't also a lot of tools available for it as well.
For text-based classification right now Naive Bayes, Decision Trees (J48 in particular I think), and SVM approaches are giving the best results. However they are each more suited for slightly different applications. Off the top of my head I'm not sure which would suit you the best. With a tool like WEKA you could try all three approaches with some example data without writing a line of code and see for yourself.
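Since the asker would rather avoid Java, here is the same "try all three and compare" experiment sketched with scikit-learn instead of WEKA (my own sketch; the data variables are placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def compare_classifiers(texts, labels):
    """texts = list of page/document strings, labels = list of category labels."""
    for name, clf in [("Naive Bayes", MultinomialNB()),
                      ("Decision Tree", DecisionTreeClassifier()),
                      ("Linear SVM", LinearSVC())]:
        pipe = make_pipeline(TfidfVectorizer(), clf)
        scores = cross_val_score(pipe, texts, labels, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")
```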
I tend to shy away from Neural Networks simply because they can get very very complicated quickly. Then again, I haven't tried a large project with them mostly because they have that reputation in academia.
Probability and statistics knowledge is only required if you are using probabilistic algorithms (like Naive Bayes). SVMs are generally not used in a probabilistic manner.
From the sound of it, you may want to invest in an actual pattern classification textbook or take a class on it in order to find exactly what you are looking for. For custom/non-standard data sets it can be tricky to get good results without having a survey of existing techniques.
It seems to me that you are now entering the machine learning field, so I'd really like to suggest having a look at this book: not only does it provide a deep and vast overview of the most common machine learning approaches and algorithms (and their variations), but it also provides a very good set of exercises and links to scientific papers. All of this is wrapped in insightful language, together with a minimal yet useful compendium on statistics and probability.
What is machine learning?
What does machine learning code do?
When we say that the machine learns, does it modify its own code, or does it modify history (a database) which will contain the experience of the code for a given set of inputs?
What is machine learning?
Essentially, it is a method of teaching computers to make and improve predictions or behaviors based on some data. What is this "data"? Well, that depends entirely on the problem. It could be readings from a robot's sensors as it learns to walk, or the correct output of a program for certain input.
Another way to think about machine learning is that it is "pattern recognition" - the act of teaching a program to react to or recognize patterns.
What does machine learning code do?
Depends on the type of machine learning you're talking about. Machine learning is a huge field, with hundreds of different algorithms for solving myriad different problems - see Wikipedia for more information; specifically, look under Algorithm Types.
When we say the machine learns, does it modify its own code, or does it modify history (a database) which will contain the experience of the code for a given set of inputs?
Once again, it depends.
One example of code actually being modified is Genetic Programming, where you essentially evolve a program to complete a task (of course, the program doesn't modify itself - but it does modify another computer program).
Neural networks, on the other hand, modify their parameters automatically in response to prepared stimuli and expected response. This allows them to produce many behaviors (theoretically, they can produce any behavior because they can approximate any function to an arbitrary precision, given enough time).
I should note that your use of the term "database" implies that machine learning algorithms work by "remembering" information, events, or experiences. This is not necessarily (or even often!) the case.
Neural networks, which I already mentioned, only keep the current "state" of the approximation, which is updated as learning occurs. Rather than remembering what happened and how to react to it, neural networks build a sort of "model" of their "world." The model tells them how to react to certain inputs, even if the inputs are something that it has never seen before.
This last ability - the ability to react to inputs that have never been seen before - is one of the core tenets of many machine learning algorithms. Imagine trying to teach a computer driver to navigate highways in traffic. Using your "database" metaphor, you would have to teach the computer exactly what to do in millions of possible situations. An effective machine learning algorithm would (hopefully!) be able to learn similarities between different states and react to them similarly.
The similarities between states can be anything - even things we might think of as "mundane" can really trip up a computer! For example, let's say that the computer driver learned that when a car in front of it slowed down, it had to slow down too. For a human, replacing the car with a motorcycle doesn't change anything - we recognize that the motorcycle is also a vehicle. For a machine learning algorithm, this can actually be surprisingly difficult! A database would have to store information separately about the case where a car is in front and where a motorcycle is in front. A machine learning algorithm, on the other hand, would "learn" from the car example and be able to generalize to the motorcycle example automatically.
Machine learning is a field of computer science, probability theory, and optimization theory which allows complex tasks to be solved for which a logical/procedural approach would not be possible or feasible.
There are several different categories of machine learning, including (but not limited to):
Supervised learning
Reinforcement learning
Supervised Learning
In supervised learning, you have some really complex function (mapping) from inputs to outputs, you have lots of examples of input/output pairs, but you don't know what that complicated function is. A supervised learning algorithm makes it possible, given a large data set of input/output pairs, to predict the output value for some new input value that you may not have seen before. The basic method is that you break the data set down into a training set and a test set. You have some model with an associated error function which you try to minimize over the training set, and then you make sure that your solution works on the test set. Once you have repeated this with different machine learning algorithms and/or parameters until the model performs reasonably well on the test set, you can attempt to use the result on new inputs. Note that in this case, the program does not change, only the model (data) is changed. One could, theoretically, output a different program, but that is not done in practice, as far as I am aware. An example of supervised learning is the digit recognition system used by the post office, which maps the pixels to labels in the set 0...9, using a large set of pictures of digits that were labeled by hand as being in 0...9.
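Here is a small end-to-end version of that procedure (my own sketch, using scikit-learn's bundled digits dataset as a stand-in for the post-office example):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()                              # 8x8 pixel images, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
                                                    test_size=0.25, random_state=0)
model = SVC(gamma=0.001).fit(X_train, y_train)      # minimize error on the training set
print("held-out accuracy:", model.score(X_test, y_test))  # check it generalizes
```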
Reinforcement Learning
In reinforcement learning, the program is responsible for making decisions, and it periodically receives some sort of reward/utility for its actions. However, unlike in the supervised learning case, the results are not immediate; the algorithm could prescribe a large sequence of actions and only receive feedback at the very end. In reinforcement learning, the goal is to build up a good model such that the algorithm will generate the sequence of decisions that leads to the highest long-term utility/reward. A good example of reinforcement learning is teaching a robot how to navigate by giving it a negative penalty whenever its bump sensor detects that it has bumped into an object. If coded correctly, it is possible for the robot to eventually correlate its range finder sensor data with its bumper sensor data and the directions it sends to the wheels, and ultimately choose a form of navigation that results in it not bumping into objects.
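A toy tabular Q-learning sketch of this idea (my own illustration; the five-state corridor environment is made up for the example): the agent only ever sees a reward at the goal, yet learns which action to take in every state.

```python
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right; goal at state 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action choice
        a = rng.integers(n_actions) if rng.random() < epsilon else Q[s].argmax()
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0            # reward only at the goal
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))   # best action per state; every non-terminal state learns "right" (1)
```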
More Info
If you are interested in learning more, I strongly recommend that you read Pattern Recognition and Machine Learning by Christopher M. Bishop or take a machine learning course. You may also be interested in reading, for free, the lecture notes from CIS 520: Machine Learning at Penn.
Machine learning is a scientific discipline that is concerned with the design and development of algorithms that allow computers to evolve behaviors based on empirical data, such as from sensor data or databases. Read more on Wikipedia
Machine learning code records "facts" or approximations in some sort of storage, and uses its algorithms to calculate different probabilities.
The code itself will not be modified when a machine learns, only the database of what "it knows".
Machine learning is a methodology to create a model based on sample data and use the model to make a prediction or strategy. It belongs to artificial intelligence.
Machine learning is simply a generic term for a variety of learning algorithms that learn from examples (labeled or unlabeled). The actual accuracy/error is largely determined by the quality of the training/test data you provide to your learning algorithm, and can be assessed by its convergence rate. The reason you provide examples is that you want the learning algorithm of your choice to be able to generalize from that guidance. The algorithms can be classed into two main areas: supervised learning (classification) and unsupervised learning (clustering) techniques. It is extremely important that you make an informed decision on how you plan to separate your training and test data sets, as well as on the quality of the data you provide to your learning algorithm. When providing data sets you also want to be aware of things like overfitting, and maintain a healthy sense of bias in your examples. The algorithm then essentially learns from the generalizations it achieves over the data you provided for training, and in testing you try to get your learning algorithm to handle new examples on the basis of your targeted training. In clustering there is very little informative guidance; the algorithm basically tries to build related sets of clusters by measuring patterns between data points, e.g. k-means / k-nearest neighbors.
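As a tiny illustration of that "very little informative guidance" point about clustering (my own example): k-means groups points purely from the distances between them, with no labels provided.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # true labels are ignored
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(clusters[:10])   # cluster id assigned to each point, learned without labels
```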
some good books:
Introduction to ML (Nilsson/Stanford),
Gaussian Process for ML,
Introduction to ML (Alpaydin),
Information Theory Inference and Learning Algorithms (very useful book),
Machine Learning (Mitchell),
Pattern Recognition and Machine Learning (the standard ML course book at Edinburgh and various other universities, but relatively heavy reading, with math),
Data Mining and Practical Machine Learning with Weka (work through the theory using weka and practice in Java)
For Reinforcement Learning there is a free book online you can read:
http://www.cs.ualberta.ca/~sutton/book/ebook/the-book.html
IR, IE, Recommenders, and Text/Data/Web Mining in general use a lot of Machine Learning principles. You can even apply Metaheuristic/Global Optimization Techniques here to further automate your learning processes, e.g. apply an evolutionary technique like a GA (genetic algorithm) to optimize your neural network based approach (which may use some learning algorithm). You can approach it purely in the form of a probabilistic machine learning approach, for example Bayesian learning. Most of these algorithms make very heavy use of statistics. Concepts of convergence and generalization are important to many of these learning algorithms.
Machine learning is the study in computing science of making algorithms that are able to classify information they haven't seen before, by learning patterns from training on similar information. There are all sorts of "learners" in this sense. Neural networks, Bayesian networks, decision trees, k-means clustering algorithms, hidden Markov models and support vector machines are examples.
Based on the learner, they each learn in different ways. Some learners produce human-understandable frameworks (e.g. decision trees), and some are generally inscrutable (e.g. neural networks).
Learners are all essentially data-driven, meaning they save their state as data to be reused later. They aren't self-modifying as such, at least in general.
I think one of the coolest definitions of machine learning that I've read is from this book by Tom Mitchell. Easy to remember and intuitive.
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E
Shamelessly ripped from Wikipedia: Machine learning is a scientific discipline that is concerned with the design and development of algorithms that allow computers to evolve behaviors based on empirical data, such as from sensor data or databases.
Quite simply, machine learning code accomplishes a machine learning task. That can be a number of things from interpreting sensor data to a genetic algorithm.
I would say it depends. No, modifying code is not normal, but it is not outside the realm of possibility. I would also not say that machine learning always modifies a history. Sometimes we have no history to build off of. Sometimes we simply want to react to the environment, but not actually learn from our past experiences.
Basically, machine learning is a very wide-open discipline that contains many methods and algorithms that make it impossible for there to be 1 answer to your 3rd question.
Machine learning is a term taken from the real world of a person learning, and applied to something that can't actually learn in that sense - a machine.
To add to the other answers - machine learning will not (usually) change the code, but it might change its execution path and decisions based on previous data or newly gathered data, hence the "learning" effect.
There are many ways to "teach" a machine - you give weights to many parameters of an algorithm, and then have the machine solve it for many cases. Each time you give it feedback about the answer, and the machine adjusts the weights according to how close its answer was to your answer, according to the score you gave its answer, or according to some result-checking algorithm.
This is one way of learning and there are many more...