The Little MLer - Good training for F#?

I want to get up to speed on F# and was wondering if the book "The Little MLer" would help, since F# is based on OCaml, which is a derivative of ML. Or is ML too different from F# to be of any help?
Thanks.

If you read it without typing in the examples, it will familiarize you with the core concepts. But the OCaml-derived syntax F# uses differs enough from the book's ML that half the examples won't work without (trivial) changes.
That said, as I remember it, "The Little MLer" is very basic and aimed more at someone who knows no programming language at all.

Related

Tips for writing an algorithm for paraphrasing sentences (machine learning)

I am doing a project at university and need to train an algorithm to rephrase sentences. What can you advise for the implementation? Would it work to translate the sentence into another language and back to get a paraphrase? I also want to use Word2Vec: is that a bad idea?
This kind of broad-advice question, about a very tough problem (paraphrasing text) that is still an active research area, would be better answered by surveying the research literature.
A great site for searching relevant papers, and then finding other related papers once you've flagged a few positive examples, is http://www.arxiv-sanity.com/.
Searching for [paraphrasing] or [summarization] would give you a running start in seeing the major techniques and their limitations. And once you start bookmarking papers via the little 'disk' icon, it can suggest important related papers, so even if your first few finds are tangential or not directly useful, it can lead you to the seminal papers and the prevailing cutting-edge algorithms/libraries pretty quickly.

Machine Learning for repetitive form filling

I'm trying to use machine learning algorithms for repetitive form filling.
Here is a picture to illustrate that a little bit.
If you enter values in fields A and B, I would like the system to suggest a value for field C.
For this case I would really like to implement a machine learning algorithm, so that the system stays flexible and makes suggestions only from the knowledge it has built up.
I've already started reading Programming Collective Intelligence and Artificial Intelligence: A Modern Approach. I've also started to play around with Weka a little and found a pretty good Microsoft Research paper on my problem. But my main difficulty is that I can't identify which family of algorithms I should use. I'm primarily looking at decision trees like C4.5, but I'm not sure whether that's the right way. Could you please give me any suggestions?
It looks like you're starting out... good luck.
For a quick solution, go for a Huffman tree / genetic algorithm randomizer.
For a more complex solution, implement everything you can think of, use an external efficacy classifier to figure out what to use in the next iteration, and randomize something along the way.
Decision trees are incredibly inflexible when it comes to this type of stuff. Try fuzzy logic algorithms.
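Whichever family you pick, the decision-tree route the question mentions can be prototyped in a few lines. Here is a minimal sketch with scikit-learn (the field names and values are made up for illustration, and scikit-learn's tree learner is CART rather than C4.5, but the workflow is the same): previously submitted forms serve as training data, and the model suggests a value for field C from fields A and B.

from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Historical forms: (field A, field B) -> field C  (hypothetical values)
history = [
    (("Germany", "Retail"), "EUR"),
    (("Germany", "Wholesale"), "EUR"),
    (("USA", "Retail"), "USD"),
    (("USA", "Wholesale"), "USD"),
]
X_raw = [list(ab) for ab, _ in history]
y = [c for _, c in history]

encoder = OrdinalEncoder()                  # turn categorical fields into numbers
X = encoder.fit_transform(X_raw)

model = DecisionTreeClassifier().fit(X, y)  # CART, close in spirit to C4.5

new_form = encoder.transform([["Germany", "Retail"]])
print("Suggestion for field C:", model.predict(new_form)[0])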

The intersection of Machine Learning and Programming Languages fields

While my research area is in Machine Learning (ML), I am required to take a project in Programming Languages (PL). Therefore, I'm looking to find a project that is inclined towards ML.
One intersection I know of between the two fields is Natural Language Processing (NLP), but I couldn't find concrete papers on that topic related to PL, perhaps due to my poor choice of search keywords.
The main topics in the PL course are: Syntax & Semantics, Static Program Analysis, Functional Programming, and Concurrency and Logic Programming.
If you could suggest papers or keywords that are friendly to a machine learning enthusiast, that would be highly appreciated!
Another very important intersection in these fields is probabilistic programming languages, which provide probabilistic inference over models specified as actual computer programs. It's a growing research field, including a recently started DARPA program on this topic.
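To give a flavour of the idea, here is a tiny hand-rolled sketch (my own illustration, not the API of any particular probabilistic programming language): the model is an ordinary program that makes random choices, and inference is performed over its executions, here by crude rejection sampling.

import random

def model():
    # Prior: is the coin biased?  Likelihood: 10 flips of that coin.
    biased = random.random() < 0.5
    p_heads = 0.9 if biased else 0.5
    flips = [random.random() < p_heads for _ in range(10)]
    return biased, flips

def posterior_biased(observed_heads, samples=100_000):
    # Keep only the executions whose simulated data matches the observation.
    accepted = []
    for _ in range(samples):
        biased, flips = model()
        if sum(flips) == observed_heads:
            accepted.append(biased)
    return sum(accepted) / len(accepted)

print("P(biased | 9 heads out of 10) ~", posterior_biased(9))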
If you are interested in NLP, then I would focus on two of the listed PL disciplines:
Syntax & Semantics - this is closely related to the NLP field, where in most cases understanding is based on various language grammars. Searching for papers on language modeling, information extraction, or deep parsing would yield dozens of great research topics that are heavily related to syntax/semantics problems.
Logic Programming - "in the good old years" people believed this was the future of AI. Even though that is not (currently) true, it is still quite widely used for reasoning in some fields. In particular, Prolog is a good example of a language that can be used for reasoning (for example spatio-temporal reasoning) or even for parsing language (due to its "grammar-like" productions).
If you wish to tackle a more ML-related problem rather than NLP, then you could focus on concurrency (parallelism), as it is a very hot topic: making ML models more scalable and more efficient, "bigger, faster, stronger" ;) Just look up keywords like GPU machine learning, large-scale machine learning, scalable machine learning, etc.
I also happen to know that there's a project at the University of Edinburgh on using machine learning to analyse source code. Here's the first publication that came out of it.

Machine Learning in practice: Writing algorithms yourself or using Weka?

I've been asking myself whether most people normally code machine learning algorithms themselves or whether they tend to use existing solutions like Weka or R packages.
Of course it depends on the problem, but let's say I want to use a common technique like a neural network. Is there still a reason to code it myself, to understand the mechanism better and adapt it? Or is it more important to rely on standardized solutions?
This is not a good question for Stack Overflow. It's an opinion question, not a programming problem.
Nevertheless, here is my take:
It depends on what you want to do.
If you want to find which algorithm works best for your data problem at hand, try ELKI, Weka, R, Matlab, SciPy, whatever. Try out all the algorithms you can find, and spend even more time on preprocessing your data.
If you know which algorithm you need and have to get it into production, many of these tools will not perform well enough or be easy enough to integrate. Instead, check whether you can find low-level libraries such as libSVM that provide the functionality you need. If these don't exist, roll your own optimized code.
If you want to do research in this domain, you are best off extending the existing tools. ELKI and Weka have APIs that you can plug into to provide extensions. R doesn't really have an API (CRAN is a mess...), so people just dump their code somewhere and (hopefully) add a manual on how to use it. Extending these frameworks can save you a lot of effort: you have comparison methods ready to use, and you can reuse a lot of their code. ELKI, for example, has a lot of index structures to accelerate algorithms, and most of the time the index acceleration is much harder to write than the actual algorithm. So if you can reuse the existing indexes, your algorithms will be much faster too (and you will also benefit from future enhancements to these frameworks).
If you want to learn about existing algorithms, you had better implement them yourself. You'll be surprised how much more there is to optimizing some algorithms than what is taught in class. Take APRIORI: the basic idea is quite simple, but getting all the pruning details right is hard; I'd say maybe 1 out of 20 students gets them right. If you implement APRIORI, benchmark it against a known good implementation, and try to understand why yours is much slower, you'll actually discover the subtle details of the algorithm. And don't be surprised to see a factor-of-100 performance difference between ELKI, R, Weka, etc.: it can still be the same algorithm, just implemented more or less efficiently when it comes to the actual data structures used, memory layout, etc.
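To see how much is hiding behind "the basic idea is quite simple", here is a deliberately naive Apriori sketch (my own, over made-up transactions). A serious implementation differs mainly in how candidate generation, pruning, and support counting are organised with real data structures, which is exactly where those factor-of-100 differences come from.

from itertools import combinations

def apriori(transactions, min_support):
    # Minimal Apriori: find all itemsets with support >= min_support (absolute count).
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}
    frequent = {}
    k = 1
    while current:
        frequent.update({s: support(s) for s in current})
        # Join step: union of two frequent k-itemsets gives a (k+1)-candidate.
        candidates = {a | b for a in current for b in current if len(a | b) == k + 1}
        # Prune step: every k-subset of a candidate must itself be frequent.
        candidates = {c for c in candidates
                      if all(frozenset(s) in current for s in combinations(c, k))}
        current = {c for c in candidates if support(c) >= min_support}
        k += 1
    return frequent

txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(apriori(txns, min_support=3))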

Intelligent code-completion? Is there AI to write code by learning?

I am asking this question because I know there are a lot of well-read CS types on here who can give a clear answer.
I am wondering whether an AI exists (or is being researched/developed) that writes programs by generating and compiling code all on its own and then improves by learning from former iterations. I am talking about working to make us programmers obsolete. I'm imagining something that learns what works and what doesn't in a programming language by trial and error.
I know this sounds pie-in-the-sky so I'm asking to find out what's been done, if anything.
Of course even a human programmer needs inputs and specifications, so such an experiment has to have carefully defined parameters. Like if the AI was going to explore different timing functions, that aspect has to be clearly defined.
But with a sophisticated learning AI I'd be curious to see what it might generate.
I know there are a lot of human qualities computers can't replicate, like our judgement, tastes, and prejudices. But my imagination likes the idea of a program that spits out a website after a day of thinking and lets me see what it came up with. Even then I would often expect it to be garbage, but maybe once a day I could give it feedback and help it learn.
Another avenue of this thought: it would be nice to give a high-level description like "menued website" or "image tools" and have it generate code with enough depth to be useful as a code-completion module, so I could then code in the details. But I suppose that could also be envisioned as a non-intelligent, static, hierarchical code-completion scheme.
How about it?
Such tools exist. They are the subject of a discipline called Genetic Programming. How you evaluate their success depends on the scope of their application.
They have been extremely successful (orders of magnitude more efficient than humans) at designing optimal programs for the management of industrial processes, automated medical diagnosis, or integrated circuit design. Those processes are well constrained, with an explicit and immutable success measure and a great amount of "universe knowledge", that is, a large set of rules on what is a valid, working program and what is not.
They have been totally useless for building mainstream programs that require user interaction, because the main thing a learning system needs is an explicit "fitness function", an evaluation of the quality of the current solution it has come up with.
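As a rough illustration of how such a fitness function drives the search, here is a minimal genetic-programming-style sketch (my own toy example, not from any particular GP system; mutation and selection only, crossover omitted for brevity). It evolves small arithmetic expressions toward a target behaviour specified purely through input/output cases.

import random, operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def random_expr(depth=3):
    # An expression is the variable "x", an integer constant,
    # or a tuple (operator, left subtree, right subtree).
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(-5, 5)
    return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(expr, cases):
    # The explicit, immutable success measure: squared error against the target.
    return sum((evaluate(expr, x) - y) ** 2 for x, y in cases)

def mutate(expr):
    # Random mutation: replace a random subtree with a freshly generated one.
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expr(2)
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(cases, pop_size=200, generations=60):
    population = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda e: fitness(e, cases))   # lower error is fitter
        survivors = population[: pop_size // 4]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=lambda e: fitness(e, cases))

# Target behaviour: f(x) = x*x + 2*x + 1, given only as input/output cases.
cases = [(x, x * x + 2 * x + 1) for x in range(-5, 6)]
best = evolve(cases)
print(best, "error:", fitness(best, cases))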
Another domain that deals with "program learning" is Inductive Logic Programming, although it is used more for automated theorem proving or language/taxonomy learning.
Disclaimer: I am not a native English speaker nor an expert in the field; I am an amateur, so expect imprecisions and/or errors in what follows. So, in the spirit of Stack Overflow, don't be afraid to correct and improve my prose and/or my content. Note also that this is not a complete survey of automatic programming techniques (code generation (CG) from Model-Driven Architectures (MDAs) merits at least a passing mention).
I want to add more to what Varkhan answered (which is essentially correct).
The Genetic Programming (GP) approach to Automatic Programming conflates, with its fitness functions, two different problems ("self-compilation" is conceptually a no-brainer):
self-improvement/adaptation - of the synthesized program and, if so desired, of the synthesizer itself; and
program synthesis.
w.r.t. self-improvement/adaptation, refer to Jürgen Schmidhuber's Goedel machines: self-referential universal problem solvers making provably optimal self-improvements. (As a side note, his work on artificial curiosity is also interesting.) Also relevant to this discussion are Autonomic Systems.
w.r.t. program synthesis, I think it is possible to identify three main branches: stochastic (probabilistic, like the above-mentioned GP), inductive, and deductive.
GP is essentially stochastic because it explores the space of likely programs with heuristics such as crossover, random mutation, gene duplication, gene deletion, etc. (it then tests programs with the fitness function and lets the fittest survive and reproduce).
Inductive program synthesis is usually known as Inductive Programming (IP), of which Inductive Logic Programming (ILP) is a sub-field. That is, in general the technique is not limited to logic program synthesis or to synthesizers written in a logic programming language (nor is either limited to "automated theorem proving or language/taxonomy learning").
IP is often deterministic (but there are exceptions): it starts from an incomplete specification (such as example input/output pairs) and uses it either to constrain the search space of likely programs satisfying that specification, which are then tested (the generate-and-test approach), or to directly synthesize a program by detecting recurrences in the given examples, which are then generalized (the data-driven or analytical approach). The process as a whole is essentially statistical induction/inference: choosing what to include in the incomplete specification is akin to random sampling.
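As a toy sketch of the generate-and-test flavour (my own illustration; the primitive set and the examples are made up), one can enumerate ever-longer pipelines of primitive functions and keep the first one consistent with the example input/output pairs.

from itertools import product

PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "neg":    lambda x: -x,
}

def synthesize(examples, max_length=3):
    # Return the shortest pipeline of primitives matching all (input, output) pairs.
    for length in range(1, max_length + 1):
        for names in product(PRIMITIVES, repeat=length):
            def run(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return names
    return None

# Incomplete specification: just three examples of the intended behaviour.
print(synthesize([(1, 4), (2, 16), (3, 36)]))   # finds ('double', 'square')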
Generate-and-test and data-driven/analytical§ approaches can be quite fast, so both are promising (even if only small synthesized programs have been demonstrated in public so far), but generate-and-test (like GP) is embarrassingly parallel, so notable improvements (scaling to realistic program sizes) can be expected. Note, however, that Incremental Inductive Programming (IIP)§, which is inherently sequential, has been demonstrated to be orders of magnitude more effective than non-incremental approaches.
§ These links are directly to PDF files: sorry, I am unable to find an abstract.
Programming by Demonstration (PbD) and Programming by Example (PbE) are end-user development techniques known to leverage inductive program synthesis practically.
Deductive program synthesis starts with a (presumed) complete (formal) specification (logic conditions) instead. One of the techniques leverages automated theorem provers: to synthesize a program, it constructs a proof of the existence of an object meeting the specification and then, via the Curry-Howard-de Bruijn isomorphism (the proofs-as-programs and formulae-as-types correspondences), extracts a program from the proof. Other variants include the use of constraint solving and the deductive composition of subroutine libraries.
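As a toy illustration of the constraint-solving variant (my own example, using the Z3 SMT solver through its z3 Python bindings): here the "program" is just a linear function a*x + b, and its parameters are derived from a complete logical specification rather than from examples.

from z3 import Int, ForAll, And, Solver, sat

a, b, x = Int("a"), Int("b"), Int("x")
# Complete specification: f(0) = 3 and f(x+1) = f(x) + 2 for all x, with f(x) = a*x + b.
spec = ForAll([x], And(a * 0 + b == 3,
                       a * (x + 1) + b == a * x + b + 2))

s = Solver()
s.add(spec)
assert s.check() == sat
m = s.model()
print("synthesized f(x) =", m[a], "* x +", m[b])   # the unique solution is 2*x + 3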
In my opinion, inductive and deductive synthesis are in practice attacking the same problem from two somewhat different angles, because what constitutes a complete specification is debatable (besides, a complete specification today can become incomplete tomorrow; the world is not static).
When (if) these techniques (self-improvement/adaptation and program synthesis) mature, they promise to raise the amount of automation provided by declarative programming (whether such a setting should still be considered "programming" is sometimes debated): we will concentrate more on Domain Engineering and Requirements Analysis and Engineering than on manual software design and development, manual debugging, manual system performance tuning, and so on (possibly with less accidental complexity compared to that introduced by current manual, non-self-improving/adapting techniques). This will also promote a level of agility yet to be demonstrated by current techniques.

Resources