I have seen that in Machine Learning, the terms "feature" and "label" are used to refer to what I think of as "independent variable" and "dependent variable" (along with other synonyms listed on Wikipedia).
The Wikipedia page describing the term "feature" appears to describe independent variables. This discussion also seems to support the idea they are equivalent.
I would like to know if the terms are equivalent and can be used interchangeably. If they are not, what is the difference?
Historical background of the terms would be especially welcome.
"Feature" and "independent variable" are different terms for the same thing. "Feature" is more common in machine learning, whereas "independent variable" is more common in statistics.
Some more mostly equivalent terms are "covariate", "predictor", and "regression input".
They can be used interchangeably.
This is based on my earlier question, "Can I use machine learning algorithms to help me with understanding sentences?"
(I will use an example closely related to the one in my previous question.) For example, I want my algorithm/code to start a program based on what the user says. For instance, if the user says "turn on the program," then the algorithm should do that. However, if the user says "turn on the car," the computer shouldn't turn on the program, obviously. (But how would the computer know that?) I am sure there are hundreds of different ways to say "start" or "turn on the program." What I am asking is: how can a computer differentiate between "program" and "car"? How can the algorithm know that in the first sentence it has to start the program, but not in the second one? Is there a way for the algorithm to know what the sentence is talking about?
Could I use an unsupervised learning algorithm for this, that is, one that can learn what the sentence is about?
Thanks
Natural language understanding is a very hard problem, and many researchers are working on it. To begin with, basic natural language understanding systems start off rule-based: you manually write down rules that are matched against an input, and if a match is found, you fire the corresponding action. So you restrict the format of your input and come up with rules, keeping them as general as possible. For example, instead of matching the exact statement "turn on the program", you can have a rule such as: unless the word "program" occurs in the command, don't start the program; or, ignore every sentence unless it contains "program". You then combine your rules to develop more complex "understanding". How to write/represent rules is another tough problem; you can start off with regular expressions.
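To make the rule-based idea concrete, here is a minimal Python sketch; the phrasings, the rule set, and the action names are invented for the example, not taken from any real system:

```python
import re

# Tiny rule base: each rule is a regex paired with an action name.
# The patterns are purely illustrative; a real system would need many more.
RULES = [
    (re.compile(r"\b(start|turn on|launch|begin)\b.*\bprogram\b", re.IGNORECASE), "start_program"),
    (re.compile(r"\b(stop|turn off|quit|exit)\b.*\bprogram\b", re.IGNORECASE), "stop_program"),
]

def interpret(command):
    """Return the first action whose rule matches, or None if nothing matches."""
    for pattern, action in RULES:
        if pattern.search(command):
            return action
    return None

print(interpret("please turn on the program"))  # -> start_program
print(interpret("turn on the car"))             # -> None (ignored)
```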
Regarding the various ways of expressing the action of "start"ing something, you are going to look at synonyms for "start", e.g. "begin". These can be obtained from a thesaurus; a commonly used resource for such tasks is WordNet.
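For example, a quick way to pull synonyms of "start" out of WordNet is the NLTK interface (this assumes the nltk package is installed and the WordNet corpus has been downloaded with nltk.download('wordnet')):

```python
from nltk.corpus import wordnet as wn

# Collect the lemma names of every verb synset of "start".
synonyms = set()
for synset in wn.synsets("start", pos=wn.VERB):
    for lemma in synset.lemma_names():
        synonyms.add(lemma.replace("_", " "))

print(sorted(synonyms))  # includes e.g. 'begin', 'commence', 'initiate', ...
```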
You need to figure out exactly what information you want to extract from the sentence. Most natural language techniques are task-specific; there isn't a general one-size-fits-all natural language understanding tool.
No machine learning algorithm can learn without enough input information. If there is enough information about a car versus a program, then a learning algorithm may be able to differentiate them. Machine learning groups things that have similar properties and separates things with different properties into different groups.
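As a rough illustration of that point, a tiny bag-of-words classifier can separate "program" commands from "car" commands once it has seen enough labelled examples of each. This is only a sketch: it assumes scikit-learn is installed, and the training sentences and labels are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data: commands about the program vs. commands about the car.
train_sentences = [
    "turn on the program", "please start the program", "launch the program now",
    "turn on the car", "start the car engine", "turn the car on",
]
train_labels = ["program"] * 3 + ["car"] * 3

# Bag-of-words features plus a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_sentences, train_labels)

print(model.predict(["could you turn on the program"]))  # expected: ['program']
print(model.predict(["turn on the car"]))                # expected: ['car']
```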
While my research area is in Machine Learning (ML), I am required to take a project in Programming Languages (PL). Therefore, I'm looking to find a project that is inclined towards ML.
One intersection I know of between the two fields is Natural Language Processing (NLP), but I couldn't find concrete papers on that topic that are related to PL; perhaps due to my poor choice of keywords in the search query.
The main topics in the PL course are: Syntax & Semantics, Static Program Analysis, Functional Programming, and Concurrency and Logic Programming.
If you could suggest papers or keywords that are Machine Learning enthusiast friendly, that would be highly appreciated!
Another very important intersection in these fields is probabilistic programming languages, which provide probabilistic inference over models specified as actual computer programs. It's a growing research field, including a recently started DARPA program on this topic.
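To give a flavour of the idea: in a probabilistic programming language the model is just a program that makes random choices, and inference is a generic procedure applied to that program. The toy sketch below hand-rolls this with crude rejection sampling in plain Python; real probabilistic programming systems provide far smarter, general-purpose inference.

```python
import random

def model():
    # The "program": pick a coin bias, then flip the coin 10 times.
    bias = random.random()
    flips = [random.random() < bias for _ in range(10)]
    return bias, flips

def infer_bias(observed_heads, samples=100_000):
    # Rejection sampling: keep only runs whose outcome matches the observation.
    accepted = []
    for _ in range(samples):
        bias, flips = model()
        if sum(flips) == observed_heads:
            accepted.append(bias)
    return sum(accepted) / len(accepted)

print(infer_bias(8))  # posterior mean bias given 8 heads out of 10 (approx. 0.75)
```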
If you are interested in NLP, then I would focus on two aspects of the listed PL disciplines:
Syntax & Semantics - this is incredibly closely related to the NLP field, where in most cases the understanding is based on various language grammars. Searching for papers on language modeling, information extraction, or deep parsing would yield dozens of great research topics that are heavily related to syntax/semantics problems.
Logic programming - "in the good old years" people believed that this was the future of AI; even though that is not (currently) true, it is still quite widely used for reasoning in some fields. In particular, Prolog is a good example of a language that can be used to reason (for example, spatial-temporal reasoning) or even parse language (due to its "grammar like" productions).
If you wish to tackle a more ML-related problem rather than NLP, then you could focus on concurrency (parallelism), as it is a very hot topic - making ML models more scalable, more efficient, "bigger, faster, stronger" ;) Just look up keywords like GPU machine learning, large-scale machine learning, scalable machine learning, etc.
I also happen to know that there's a project at the University of Edinburgh on using machine learning to analyse source code. Here's the first publication that came out of it.
What are real-world examples of NFAs and epsilon-NFAs, i.e., practical examples other than their use in designing compilers?
Any time anyone uses a regular expression, they're using a finite automaton. And if you don't know a lot about regexes, let me tell you they're incredibly common -- in many ecosystems, it's the first tool most people try to apply when faced with getting structured data out of strings. Understanding automata is one way to understand (and reason about) regexes, and a quite viable one at that if you're mathematically inclined.
Actually, today's regex engines have grown beyond these mathematical concepts and added features that permit doing more than an FA allows. Nevertheless, many regexes don't use these features, or use them in such a limited way that it's feasible to implement them with FAs.
Now, I only spoke of finite automata in general before. An NFA is a specific FA, just like a DFA is, and the two can be converted into one another (technically, any DFA already is an NFA). So while you can just substitute "finite automaton" with "NFA" in the above, be aware that it doesn't have to be an NFA under the hood.
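For a feel of how directly an NFA can be executed, here is a minimal sketch that simulates one by tracking the set of states it could currently be in. The automaton below, which accepts strings whose second-to-last symbol is 'a', is a standard textbook example where the NFA is much smaller than the equivalent DFA.

```python
# Transition table: state -> {symbol: set of successor states}.
NFA = {
    0: {"a": {0, 1}, "b": {0}},
    1: {"a": {2}, "b": {2}},
    2: {},
}
START, ACCEPT = {0}, {2}

def accepts(word):
    current = set(START)
    for symbol in word:
        # Follow every possible transition from every state we might be in.
        current = {t for state in current
                     for t in NFA[state].get(symbol, set())}
    return bool(current & ACCEPT)

print(accepts("abab"))  # True  (second-to-last symbol is 'a')
print(accepts("abb"))   # False
```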
As delnan explained, automata are often used in the form of regular expressions. However, they are used for a bit more than just that. Automata are often used to model hardware and software systems and to verify certain properties of them. You can find more information by looking at model checking. One really, really simplified motivating example can be found in the introduction of Introduction to Automata Theory, Languages, and Computation.
And let's not forget Markov chains, which are basically based on finite automata as well. In combination with the hardware and software modelling that bellpeace mentioned, a very powerful tool.
If you are wondering why epsilon-NFAs are considered a variation of NFAs, then I don't think there is a good reason. They are interpreted in the same way, except that a step may no longer take a unit of time (i.e. consume an input symbol), though an NFA was not really bound to that either.
A somewhat obscure but effective example would be the Aho-Corasick algorithm, which uses a finite automaton to search for multiple strings within a text.
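Here is a rough sketch of the idea (a compact, illustrative implementation rather than a production one): build a trie of the patterns, add failure links with a BFS, then scan the text once, following failure links on mismatches.

```python
from collections import deque

def build_automaton(patterns):
    # Trie nodes: transitions, a failure link, and the patterns ending here.
    trie = [{"next": {}, "fail": 0, "out": []}]
    for pat in patterns:
        node = 0
        for ch in pat:
            if ch not in trie[node]["next"]:
                trie.append({"next": {}, "fail": 0, "out": []})
                trie[node]["next"][ch] = len(trie) - 1
            node = trie[node]["next"][ch]
        trie[node]["out"].append(pat)
    queue = deque(trie[0]["next"].values())      # BFS to fill in failure links
    while queue:
        node = queue.popleft()
        for ch, child in trie[node]["next"].items():
            queue.append(child)
            fail = trie[node]["fail"]
            while fail and ch not in trie[fail]["next"]:
                fail = trie[fail]["fail"]
            trie[child]["fail"] = trie[fail]["next"].get(ch, 0)
            trie[child]["out"] += trie[trie[child]["fail"]]["out"]
    return trie

def find_all(text, patterns):
    trie, node, hits = build_automaton(patterns), 0, []
    for i, ch in enumerate(text):
        while node and ch not in trie[node]["next"]:
            node = trie[node]["fail"]            # mismatch: follow failure links
        node = trie[node]["next"].get(ch, 0)
        for pat in trie[node]["out"]:            # every pattern ending at position i
            hits.append((i - len(pat) + 1, pat))
    return hits

print(find_all("she sells seashells", ["she", "sea", "sells", "he"]))
```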
I have spent some time occupying myself with motion planning for robots, and have for a while wanted to explore the possibilities that the "potential field" method offers. My challenge is to prevent the robot from getting trapped in a "local minimum" when using the "potential field" method. Instead of using a "random walk" approach to avoid the robot getting trapped, I have been wondering whether it is possible to implement a variation of A* that could act as a sort of guide, precisely to prevent the robot from getting trapped in a "local minimum".
Does anyone have experience of this kind, or can anyone point me to literature that avoids local minima more effectively than the "random walk" approach?
A* and potential fields are both search strategies. The problem you are experiencing is that some search strategies are more "greedy" than others, and more often than not, algorithms that are too greedy get trapped in local minima.
There are some alternatives in which the tension between greediness (the main cause of getting trapped in local minima) and diversity (trying new alternatives that don't seem to be a good choice in the short term) is parameterized.
A few years ago I researched ant algorithms a bit (search for Marco Dorigo, ACS, ACO); they are a family of search algorithms that can be applied to pretty much anything, and they let you control the greediness vs. exploration of your search space. In one of their papers, they even compared the search performance on the TSP (the canonical traveling salesman problem) against genetic algorithms, simulated annealing and others. Ant won.
I've solved the TSP in the past using genetic algorithms, and I still have the source code in Delphi if you'd like.
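For a flavour of how an ant-colony search trades off greediness against exploration, here is a very small sketch in the spirit of Dorigo's ACO; the city layout and all parameter values are invented purely for illustration:

```python
import math, random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(12)]
n = len(cities)
dist = [[math.dist(a, b) or 1e-9 for b in cities] for a in cities]

ALPHA, BETA, RHO, ANTS, ITERS = 1.0, 3.0, 0.5, 20, 100
pheromone = [[1.0] * n for _ in range(n)]

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def build_tour():
    tour = [random.randrange(n)]
    unvisited = set(range(n)) - set(tour)
    while unvisited:
        cur = tour[-1]
        # ALPHA weights pheromone (learned preference), BETA weights short edges (greed).
        weights = [(c, pheromone[cur][c] ** ALPHA * (1.0 / dist[cur][c]) ** BETA)
                   for c in unvisited]
        r = random.random() * sum(w for _, w in weights)
        for c, w in weights:                 # roulette-wheel selection
            r -= w
            if r <= 0:
                tour.append(c)
                break
        else:
            tour.append(weights[-1][0])      # numerical safety net
        unvisited.discard(tour[-1])
    return tour

best = None
for _ in range(ITERS):
    tours = [build_tour() for _ in range(ANTS)]
    for i in range(n):                       # pheromone evaporation
        for j in range(n):
            pheromone[i][j] *= (1 - RHO)
    for tour in tours:                       # deposit proportional to tour quality
        L = tour_length(tour)
        if best is None or L < tour_length(best):
            best = tour
        for i in range(n):
            a, b = tour[i], tour[(i + 1) % n]
            pheromone[a][b] += 1.0 / L
            pheromone[b][a] += 1.0 / L

print("best tour length:", round(tour_length(best), 3))
```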
Use harmonic function path planning. Harmonic functions are potential functions that describe fluid flow and other natural phenomena. If they are set up correctly using boundary conditions, then they have no local minima. They have been in use since the early 90s by Rod Grupen and Chris Connolly. These functions have been shown to be a specific form of optimal control that minimizes collision probabilities. They can be computed efficiently in low-dimensional spaces using difference equations (i.e. Gauss-Seidel, successive over-relaxation, etc.).
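A minimal sketch of the grid version of this idea (the grid size, obstacle layout and iteration count are invented for illustration): obstacles and walls are held at a high potential, the goal at zero, and Gauss-Seidel sweeps make every free cell the average of its neighbours. Simple steepest descent on the converged field should then reach the goal without getting stuck in a local minimum.

```python
ROWS, COLS = 10, 10
HIGH = 1.0
obstacles = {(4, c) for c in range(2, 8)}           # a wall with a gap near the edges
goal = (8, 8)

# Boundary and obstacles fixed at HIGH, goal fixed at 0, free cells relaxed.
u = [[HIGH] * COLS for _ in range(ROWS)]
u[goal[0]][goal[1]] = 0.0

for _ in range(2000):                                # Gauss-Seidel sweeps
    for r in range(1, ROWS - 1):
        for c in range(1, COLS - 1):
            if (r, c) == goal or (r, c) in obstacles:
                continue
            u[r][c] = 0.25 * (u[r - 1][c] + u[r + 1][c] + u[r][c - 1] + u[r][c + 1])

# Follow the field downhill: always move to the lowest-valued neighbour.
pos, path = (1, 1), [(1, 1)]
for _ in range(ROWS * COLS):                         # safety bound on path length
    if pos == goal:
        break
    r, c = pos
    pos = min([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)],
              key=lambda p: u[p[0]][p[1]])
    path.append(pos)
print(path)
```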
I am asking this question because I know there are a lot of well-read CS types on here who can give a clear answer.
I am wondering if an AI exists (or is being researched/developed) that writes programs by generating and compiling code all on its own and then progresses by learning from former iterations. I am talking about working to make us, programmers, obsolete. I'm imagining something that learns what works and what doesn't in a programming language by trial and error.
I know this sounds pie-in-the-sky so I'm asking to find out what's been done, if anything.
Of course even a human programmer needs inputs and specifications, so such an experiment has to have carefully defined parameters. Like if the AI was going to explore different timing functions, that aspect has to be clearly defined.
But with a sophisticated learning AI I'd be curious to see what it might generate.
I know there are a lot of human qualities computers can't replicate, like our judgement, tastes and prejudices. But my imagination likes the idea of a program that spits out a website after a day of thinking and lets me see what it came up with. Even then I would often expect it to be garbage, but maybe once a day I could give it feedback and help it learn.
Another avenue of this thought: it would be nice to give a high-level description like "menued website" or "image tools" and have it generate code with enough depth to be useful as a code-completion module for me to then code in the details. But I suppose that could be envisioned as a non-intelligent, static, hierarchical code-completion scheme.
How about it?
Such tools exist. They are the subject of a discipline called Genetic Programming. How you evaluate their success depends on the scope of their application.
They have been extremely successful (orders of magnitude more efficient than humans) at designing optimal programs for the management of industrial processes, automated medical diagnosis, or integrated circuit design. Those processes are well constrained, with an explicit and immutable success measure and a great amount of "universe knowledge", that is, a large set of rules about what is a valid, working program and what is not.
They have been totally useless for building mainstream programs that require user interaction, because the main thing a learning system needs is an explicit "fitness function", i.e. an evaluation of the quality of the current solution it has come up with.
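To make the role of the fitness function concrete, here is a bare-bones genetic-programming sketch: candidate programs are little expression trees, fitness is the error against sampled data, and the fittest survive and are mutated (crossover is left out to keep the sketch short). The target function and all parameters are invented for illustration.

```python
import random
random.seed(1)

TARGET = lambda x: x * x + x + 1                  # the measurable "specification"
SAMPLES = [i / 2.0 for i in range(-10, 11)]
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_tree(depth=3):
    # A leaf is either the variable "x" or a random constant.
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.uniform(-2, 2)])
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):                                 # lower is better: sum of squared errors
    return sum((evaluate(tree, x) - TARGET(x)) ** 2 for x in SAMPLES)

def mutate(tree):
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return random_tree(2)                      # replace this subtree entirely
    op, left, right = tree
    return (op, mutate(left), mutate(right))

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness)                   # selection by the fitness function
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=fitness)
print("best fitness:", round(fitness(best), 3))
print("best program:", best)
```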
Another domain that deals with "program learning" is Inductive Logic Programming, although it is more used for automatic demonstration or language/taxonomy learning.
Disclaimer: I am neither a native English speaker nor an expert in the field; I am an amateur - expect imprecisions and/or errors in what follows. So, in the spirit of stackoverflow, don't be afraid to correct and improve my prose and/or my content. Note also that this is not a complete survey of automatic programming techniques (code generation (CG) from Model-Driven Architectures (MDAs) merits at least a passing mention).
I want to add more to what Varkhan answered (which is essentially correct).
The Genetic Programming (GP) approach to Automatic Programming conflates, with its fitness functions, two different problems ("self-compilation" is conceptually a no-brainer):
self-improvement/adaptation - of the synthesized program and, if so desired, of the synthesizer itself; and
program synthesis.
W.r.t. self-improvement/adaptation, refer to Jürgen Schmidhuber's Goedel machines: self-referential universal problem solvers making provably optimal self-improvements. (As a side note: his work on artificial curiosity is also interesting.) Also relevant for this discussion are Autonomic Systems.
W.r.t. program synthesis, I think it is possible to classify 3 main branches: stochastic (probabilistic - like the above-mentioned GP), inductive and deductive.
GP is essentially stochastic, because it explores the space of candidate programs with heuristics such as crossover, random mutation, gene duplication, gene deletion, etc. (it then tests programs with the fitness function and lets the fittest survive and reproduce).
Inductive program synthesis is usually known as Inductive Programming (IP), of which Inductive Logic Programming (ILP) is a sub-field. That is, in general the technique is not limited to logic program synthesis or to synthesizers written in a logic programming language (nor are both limited to "automatic demonstration or language/taxonomy learning").
IP is often deterministic (but there are exceptions): it starts from an incomplete specification (such as example input/output pairs) and uses it to constrain the search space of candidate programs satisfying that specification, and then either tests them (the generate-and-test approach) or directly synthesizes a program by detecting recurrences in the given examples, which are then generalized (the data-driven or analytical approach). The process as a whole is essentially statistical induction/inference - i.e. deciding what to include in the incomplete specification is akin to random sampling.
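A toy generate-and-test flavour of this: enumerate short compositions of primitive functions and keep the first one consistent with the given input/output examples. The primitives and examples below are invented for illustration.

```python
from itertools import product

PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}

def synthesize(examples, max_length=3):
    # Generate candidate programs (compositions of primitives) of increasing length
    # and test each against the incomplete specification.
    for length in range(1, max_length + 1):
        for names in product(PRIMITIVES, repeat=length):
            def program(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(program(i) == o for i, o in examples):
                return names
    return None

# Incomplete specification: just two input/output pairs.
print(synthesize([(2, 9), (3, 16)]))   # -> ('inc', 'square'), i.e. (x + 1)**2
```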
Generate-and-test and data-driven/analytical§ approaches can be quite fast, so both are promising (even if only small synthesized programs have been demonstrated publicly so far), but generate-and-test (like GP) is embarrassingly parallel, so notable improvements (scaling to realistic program sizes) can be expected. Note, though, that Incremental Inductive Programming (IIP)§, which is inherently sequential, has been demonstrated to be orders of magnitude more effective than non-incremental approaches.
§ These links are directly to PDF files: sorry, I am unable to find an abstract.
Programming by Demonstration (PbD) and Programming by Example (PbE) are end-user development techniques known to leverage inductive program synthesis practically.
Deductive program synthesis starts with a (presumed) complete (formal) specification (logic conditions) instead. One of the techniques leverages automated theorem provers: to synthesize a program, it constructs a proof of the existence of an object meeting the specification; then, via the Curry-Howard-de Bruijn isomorphism (the proofs-as-programs and formulae-as-types correspondences), it extracts a program from the proof. Other variants include the use of constraint solving and the deductive composition of subroutine libraries.
In my opinion, inductive and deductive synthesis are in practice attacking the same problem from two somewhat different angles, because what constitutes a complete specification is debatable (besides, a complete specification today can become incomplete tomorrow - the world is not static).
When (if) these techniques (self-improvement/adaptation and program synthesis) mature, they promise to raise the amount of automation provided by declarative programming (whether such a setting should be considered "programming" is sometimes debated): we will concentrate more on Domain Engineering and Requirements Analysis and Engineering than on manual software design and development, manual debugging, manual system performance tuning and so on (possibly with less accidental complexity compared to that introduced by current manual, non-self-improving/adapting techniques). This will also promote a level of agility yet to be demonstrated by current techniques.