When was im2col first invented for use in CNNs? - image-processing

This one's a bit of a historical question, but I don't know where would be better to ask it than here.
Title's fairly self-explanatory - when was im2col first used for CNNs? From my scouring of the internet, the earliest I can date im2col itself is 2006: the MathWorks documentation for im2col states that the function was provided at some point before the R2006a release, without giving any further specifics. Searching MathWorks' release history gives no further clues.
I'm more just curious than anything else - does anyone else have any idea?

So I think this is an interesting question that deserves to remain. While it's being debated whether it stays, I'll post my findings here.
It appears the earliest known reference to 'unrolling' convolutional operations into matrix multiplies for CNNs specifically was in 'High Performance Convolutional Neural Networks for Document Processing', by several Microsoft researchers, way back in 2006. It is very clear from the figures provided that this is the im2col transform, although it was not specifically named as such - just 'unrolling'. Some of their terminology is also a bit old-fashioned by today's standards - strides are referred to in that paper as sub-sampling - so it seems pretty churlish to deny them the accolade based on not using the right name. Caffe developers wrongly claim that using im2col is the 'Caffe trick' - it is clearly not unique to Caffe, they didn't come up with it, and in fact this paper is referenced on their site.
It's worth noting that this was by no means the first usage of mat-muls for speeding up convolutions. The paper above specifically states "Simple unfolding of convolution is a well known technique. It is commonly implemented in signal processing and communications applications." It's pretty clear that the first usage of mat-muls for convolutions is likely to be much, much older than this, although the history on that remains a bit murky.
gemm as a routine was only formally introduced with BLAS level 3 in 1990, so it's possible that convolutions using mat-muls would have picked up considerably from then onwards due to the portability provided by BLAS. Of course, it's entirely possible - and likely - that earlier implementations would have used dedicated, non-standard matrix-multiply routines.
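For anyone unfamiliar with the transform being discussed, here is a minimal NumPy sketch of im2col (my own toy illustration, not code from the 2006 paper): each kernel-sized patch is flattened into a column, after which the whole convolution collapses into a single matrix multiply.

```python
import numpy as np

def im2col(x, kh, kw, stride=1):
    """Unroll each kh x kw patch of a 2-D image into a column of a matrix."""
    H, W = x.shape
    out_h = (H - kh) // stride + 1
    out_w = (W - kw) // stride + 1
    cols = np.empty((kh * kw, out_h * out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            cols[:, i * out_w + j] = patch.ravel()
    return cols, (out_h, out_w)

# Convolution (really cross-correlation, as in most CNN frameworks) as one mat-mul:
x = np.arange(16, dtype=float).reshape(4, 4)
k = np.array([[1.0, 0.0], [0.0, -1.0]])
cols, (oh, ow) = im2col(x, 2, 2)
y = (k.ravel() @ cols).reshape(oh, ow)  # (1, kh*kw) @ (kh*kw, oh*ow)
print(y)  # same result as sliding k over x
```

Frameworks then hand the resulting matrix to a tuned GEMM, which is exactly where the speed-up over naive loops comes from.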

Related

Tips for writing an algorithm for paraphrasing sentences (machine learning)

I am doing a project at university and need to train an algorithm to rephrase sentences. What can you advise for the implementation? Is it possible to use a translator to translate into another language and back, so as to end up with a paraphrased sentence? Also, I want to use Word2Vec - or is that a bad idea?
This kind of broad-advice question - about a very tough problem, paraphrasing text, which is still a very active research area - would be better answered by surveying the research literature.
A great site for searching relevant papers – and then finding other related papers once you've set some positive examples – is http://www.arxiv-sanity.com/.
Searching for [paraphrasing] or [summarization] would give you a running start in seeing the major techniques and their limitations. And once you start bookmarking papers via the little 'disk' icon, it can auto-suggest important related papers... so even if your first few finds are tangential or not that useful, it can lead you to the seminal papers, and the prevailing cutting-edge algorithms/libraries, pretty quickly.

How to extract features from fMRI?

I have an fMRI dataset for classifying normal controls versus Alzheimer's patients. As a newbie, I'm unable to extract features from my dataset. I want to extract activation patterns, GM/WM/CSF volumetric measures, and haemodynamics in numerical form. Please guide me on how and where to start, and please suggest some easy and efficient software for my work... I'll be obliged...
Take a look at the software packages called FSL (FMRIB Software Library) and SPM (Statistical Parametric Mapping).
Each of them can do the kind of analyses you're asking about. However, be warned that none of these analyses are trivial. You should probably read up a bit on the subject, first. The Handbook of Functional MRI Data Analysis is a great place to start for beginners.
As @WeirdAlchemy says, these are many analyses you want to carry out, and all of them are non-trivial. You would typically learn to do these over weeks at a relevant intensive course, or over months during a neuro Masters programme. To answer your question very explicitly:
GM, WM & CSF volumetric measures - You can do this with FSL SIENA, SPM VBM, AFNI 3Dclust, among others.
"Extract activation patterns" is too vague. In all probability, you likely have task-related BOLD fMRI data and want to perform a general linear model (GLM) analysis. FSL FEAT, SPM fMRI, AFNI and others support this. However, without knowing the experimental design, the nature of the data, and what you want to learn from it, it's hard to be more specific about which tool is appropriate.
"Haemodynamics in numerical form" This can mean a number of things, but if you are thinking about the amount of haemodynamic signal modulation (e.g. Condition led to a 2% change in BOLD signal), you get that out of the GLM analysis mentioned above.

Increasing the efficiency of equipment using Amazon Machine Learning

The problem statement is kind of vague, but I am looking for directions; because of a privacy policy I can't share exact details, so please help out.
We have a problem at hand where we need to increase the efficiency of equipment - in other words, decide at which values, across multiple parameters, the machines should operate to produce optimal outputs.
My query is whether it is possible to come up with such numbers using Linear Regression or Multinomial Logistic Regression algorithms; if not, can you please specify which algorithms would be more suitable? Also, can you please point me to some active research on this kind of problem that is available in the public domain?
Does the type of problem I am asking about fall within the area of machine learning?
Lots of unknowns here but I’ll make some assumptions.
What you are attempting to do could probably be achieved with multiple linear regression. I have zero familiarity with the Amazon service (I didn’t even know it existed until you brought this up, it’s not available in Europe). However, a read of the documentation suggests that the Amazon service would be capable of doing this for you. The problem you will perhaps have is that it’s geared to people unfamiliar with this field and a lot of its functionality might be removed or clumped together to prevent confusion. I am under the impression that you have turned to this service because you too are somewhat unfamiliar with this field.
Something that may suit your needs better is Response Surface Methodology (RSM), which I have applied to industrial optimisation problems that I think are similar to what you suggest. RSM works best if you can obtain your data through an experimental design such as a Central Composite Design or Box-Behnken design. I suggest you spend some time Googling these terms to get your head around them, I don’t think it’s an unmanageable burden to learn how to apply these with no prior experience in this area. Because your question is vague, only you can determine if this really is suitable. If you already have the data in an unstructured format, you can still generate an RSM but it is less robust. There are plenty of open-access articles using these techniques but Science Direct is conveniently down at the moment!
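As a rough sketch of the idea (with entirely made-up data for two machine parameters), the core of RSM is fitting a second-order polynomial model to the response and locating its optimum; here in Python with scikit-learn:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Made-up data: output as a function of two (coded) machine parameters.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))
y = 5 - (X[:, 0] - 0.3)**2 - 2 * (X[:, 1] + 0.2)**2 + rng.normal(0, 0.05, 30)

# Second-order (quadratic) response surface: the standard RSM model.
poly = PolynomialFeatures(degree=2)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Locate the predicted optimum on a grid (analytic solutions exist too).
g = np.stack(np.meshgrid(np.linspace(-1, 1, 101),
                         np.linspace(-1, 1, 101)), -1).reshape(-1, 2)
pred = model.predict(poly.transform(g))
best = g[np.argmax(pred)]
print(f"predicted optimum near parameter values {best[0]:.2f}, {best[1]:.2f}")
```

The experimental designs mentioned above (Central Composite, Box-Behnken) are ways of choosing the rows of X so that this quadratic fit is as reliable as possible.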
Minitab is a software package that will do all the regression and RSM for you. Its strength is that it has a robust GUI and partially resembles Excel, so it is far less daunting to get into than something like R. It also has plenty of guides online. They offer a 30-day free trial, so it might be worth doing some background reading, collecting the tutorials you need, and developing a plan of action before downloading the trial.
Hope that is some help.

Can the Hough Transform be used in commercial software?

I mean, it is one of those things that seem research-only and unstable. You would not put it in commercial compositing software, for example, and have the user rely on it at all times.
Any opinions?
Thanks
The Hough transform has been in use in commercial and industrial applications all over the world for years, decades even. From the Wikipedia page you can see that it was first developed in 1972, based on earlier ideas from 1962. That means it is older than the CCD you use to capture the images you feed into your compositing software.
Given that it "seems research only and unstable" to you, I would suggest you spend some time learning various computer vision and image analysis algorithms and techniques, and get a good mathematical basis in the field in general before you implement the Hough transform in commercial compositing software.
And when you are done studying I'd suggest you use a well tested open source implementation.
Yes. In fact, I've written Hough transform code for a piece of commercial software that wasn't meant to be a research tool like MATLAB. I did put a lot of time into tuning its robustness for a specific application, but it worked great.
The Hough transform by itself can sometimes be unreliable in applications where you have some level of noise, such as in webcams, or when there are distortions in the shape you need to extract. This may be what you are seeing. In that case you may need to do a little more tuning towards your application, or try some basic image preprocessing.
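As a concrete example of that tuning, here is roughly what a cv2.HoughCircles call looks like in practice (the file name and parameter values are placeholders you'd tune per application); the blur beforehand is exactly the kind of basic preprocessing mentioned above:

```python
import cv2
import numpy as np

img = cv2.imread("coins.png", cv2.IMREAD_GRAYSCALE)  # placeholder image

# Preprocessing matters: a blur suppresses the sensor noise that makes
# the accumulator peaks unstable.
blurred = cv2.medianBlur(img, 5)

circles = cv2.HoughCircles(
    blurred,
    cv2.HOUGH_GRADIENT,
    dp=1,          # accumulator resolution (same as the input image)
    minDist=40,    # minimum distance between detected centres
    param1=100,    # upper Canny threshold used internally
    param2=30,     # accumulator threshold: lower = more (false) detections
    minRadius=10,
    maxRadius=80,
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"circle at ({x}, {y}) with radius {r}")
```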
I'm a bit annoyed by the condescending tone of both the comment on the question (by High Performance Mark) and the accepted answer here.
Firstly, the fact that programming libraries/frameworks provide an implementation of an algorithm does not mean it is used in, or rather suited for, commercial applications (i.e. getting the job done, robustly, under less pristine conditions). The Hough transform is a well-defined algorithm (with possible uses and limitations) which is simple enough to understand, and very commonly taught in introductory image processing courses. Not surprisingly, it has been implemented in general-purpose libraries such as those of MATLAB, Octave and OpenCV. I don't believe the question was intended to discuss the robustness of an implementation or the possibility of inclusion in commercial image processing frameworks, but rather whether the algorithm itself is well suited for end-user software (an application that counts circles, or whatnot).
The accepted answer, as it stands, is "The algorithm is very old. Here is a book on image processing, here is a link to an image processing library that has implemented it". The other answer, with zero score, seems to be on topic (i.e. discussing possible applications), though it isn't very specific ("worked for me").
So, why do some people get the impression that the Hough transform is unreliable for shape detection? Here is a good example: Unreliable results with cv2.HoughCircles
The input seems to be very well-defined circles. However, the suggested working solution, which is more robust, doesn't use the Hough transform. I've had a similar experience with my own projects. Usually, the more robust way is some kind of object segmentation, distance transform, watershed and peak localization (a rough sketch of that pipeline follows below). Have I ever used the Hough transform with good results? No. I think it could be useful in some cases - in particular if the shapes of the imaged objects are perfectly defined and partially occluded.
In other words, I'm also curious about commercial applications that ended up benefiting from the Hough transform. That's how I came across this question, and I was subsequently disappointed by the "you wouldn't ask that question if you understood the subject better" responses.
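For completeness, here is a rough OpenCV sketch of that alternative pipeline (values such as the 0.6 peak threshold and the file name are arbitrary placeholders; real images need tuning):

```python
import cv2
import numpy as np

# Assumes bright blobs on a dark background; the image name is a placeholder.
img = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Distance transform: each foreground pixel gets its distance to the background.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

# Peaks of the distance map approximate object centres; keep the strongest as markers.
_, sure_fg = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)

# Label the markers, mark the uncertain band as 0, and let watershed split touching objects.
n_labels, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1                      # reserve 0 for the 'unknown' region
unknown = cv2.subtract(binary, sure_fg)
markers[unknown == 255] = 0
markers = cv2.watershed(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), markers)

print("objects found:", n_labels - 1)
```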

Intelligent code-completion? Is there AI to write code by learning?

I am asking this question because I know there are a lot of well-read CS types on here who can give a clear answer.
I am wondering whether an AI exists (or is being researched/developed) that writes programs by generating and compiling code all on its own, and then progresses by learning from former iterations. I am talking about working to make us, programmers, obsolete. I'm imagining something that learns what works and what doesn't in a programming language by trial and error.
I know this sounds pie-in-the-sky so I'm asking to find out what's been done, if anything.
Of course even a human programmer needs inputs and specifications, so such an experiment has to have carefully defined parameters. Like if the AI was going to explore different timing functions, that aspect has to be clearly defined.
But with a sophisticated learning AI I'd be curious to see what it might generate.
I know there are a lot of human qualities computers can't replicate, like our judgement, tastes and prejudices. But my imagination likes the idea of a program that spits out a website after a day of thinking and lets me see what it came up with. Even then I would often expect it to be garbage; but maybe once a day I could give it feedback and help it learn.
Another avenue of this thought: it would be nice to give a high-level description like "menued website" or "image tools" and have it generate code with enough depth to be useful as a code-completion module that I could then fill in with the details. But I suppose that could be envisioned as a non-intelligent, static, hierarchical code-completion scheme.
How about it?
Such tools exist. They are the subject of a discipline called Genetic Programming. How you evaluate their success depends on the scope of their application.
They have been extremely successful (orders of magnitude more efficient than humans) at designing optimal programs for the management of industrial processes, automated medical diagnosis, or integrated circuit design. Those processes are well constrained, with an explicit and immutable success measure and a great amount of "universe knowledge", that is, a large set of rules about what is a valid, working program and what is not.
They have been totally useless at building mainstream programs that require user interaction, because the main thing a learning system needs is an explicit "fitness function": an evaluation of the quality of the current solution it has come up with.
Another domain that can be seen as dealing with "program learning" is Inductive Logic Programming, although it is used more for automated theorem proving or language/taxonomy learning.
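To make the "fitness function" point tangible, here is a toy evolutionary search in Python (deliberately simplified: fixed-length strings and mutation only, whereas real GP evolves program trees with crossover). It only works because the target, and hence the fitness measure, is explicit and immutable - exactly what interactive mainstream programs lack:

```python
import random

TARGET = "print('hello')"          # stand-in for "the program we want"
CHARS = "abcdefghijklmnopqrstuvwxyz()' _"

def fitness(candidate):
    # The explicit, immutable success measure: without something
    # like this, the search has nothing to climb.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(200)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:50]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]   # reproduction with mutation

print(f"generation {generation}: best = {population[0]!r}")
```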
Disclaimer: I am not a native English speaker nor an expert in the field; I am an amateur - expect imprecisions and/or errors in what follows. So, in the spirit of Stack Overflow, don't be afraid to correct and improve my prose and/or my content. Note also that this is not a complete survey of automatic programming techniques (code generation (CG) from Model-Driven Architectures (MDAs) merits at least a passing mention).
I want to add more to what Varkhan answered (which is essentially correct).
The Genetic Programming (GP) approach to Automatic Programming conflates, with its fitness functions, two different problems ("self-compilation" is conceptually a no-brainer):
self-improvement/adaptation - of the synthesized program and, if so desired, of the synthesizer itself; and
program synthesis.
w.r.t. self-improvement/adaptation, refer to Jürgen Schmidhuber's Gödel machines: self-referential universal problem solvers making provably optimal self-improvements. (As a side note, his work on artificial curiosity is also interesting.) Also relevant to this discussion are Autonomic Systems.
w.r.t. program synthesis, I think it is possible to identify 3 main branches: stochastic (probabilistic, like the above-mentioned GP), inductive and deductive.
GP is essentially stochastic because it explores the space of likely programs with heuristics such as crossover, random mutation, gene duplication, gene deletion, etc. (it then tests programs with the fitness function and lets the fittest survive and reproduce).
Inductive program synthesis is usually known as Inductive Programming (IP), of which Inductive Logic Programming (ILP) is a sub-field. That is, in general the technique is not limited to logic program synthesis or to synthesizers written in a logic programming language (nor is either limited to "automated theorem proving or language/taxonomy learning").
IP is often deterministic (but there are exceptions): it starts from an incomplete specification (such as example input/output pairs) and uses it either to constrain the search space of likely programs satisfying that specification, which are then tested (the generate-and-test approach), or to directly synthesize a program by detecting recurrences in the given examples, which are then generalized (the data-driven or analytical approach). The process as a whole is essentially statistical induction/inference - i.e. deciding what to include in the incomplete specification is akin to random sampling.
Generate-and-test and data-driven/analytical§ approaches can be quite fast, so both are promising (even if only small synthesized programs have been demonstrated in public so far), but generate-and-test (like GP) is embarrassingly parallel, so notable improvements (scaling to realistic program sizes) can be expected. Note, however, that Incremental Inductive Programming (IIP)§, which is inherently sequential, has been demonstrated to be orders of magnitude more effective than non-incremental approaches.
§ These links are directly to PDF files: sorry, I am unable to find an abstract.
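As a (very) toy illustration of the generate-and-test flavour described above - with a made-up primitive set, nothing like a real IP system's typed program language - enumerate compositions of primitives and keep the first one consistent with all the input/output examples:

```python
from itertools import product

# Hypothetical primitive set; real IP systems use far richer (typed) languages.
PRIMITIVES = {
    "double": lambda x: x * 2,
    "inc":    lambda x: x + 1,
    "square": lambda x: x * x,
    "neg":    lambda x: -x,
}

# The incomplete specification: input/output examples.
EXAMPLES = [(1, 5), (2, 17), (3, 37)]   # hidden target: (2x)^2 + 1

def synthesize(max_depth=3):
    # Generate: enumerate all compositions up to max_depth...
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(x, names=names):
                for name in names:     # apply primitives left to right
                    x = PRIMITIVES[name](x)
                return x
            # ...and test each one against every example.
            if all(program(i) == o for i, o in EXAMPLES):
                return names
    return None

print(synthesize())   # -> ('double', 'square', 'inc')
```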
Programming by Demonstration (PbD) and Programming by Example (PbE) are end-user development techniques known to leverage inductive program synthesis practically.
Deductive program synthesis instead starts with a (presumably) complete (formal) specification (logic conditions). One of the techniques leverages automated theorem provers: to synthesize a program, it constructs a proof of the existence of an object meeting the specification; then, via the Curry-Howard-de Bruijn isomorphism (the proofs-as-programs and formulae-as-types correspondences), it extracts a program from that proof. Other variants include the use of constraint solving and the deductive composition of subroutine libraries.
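The proofs-as-programs correspondence is easy to see in miniature in a proof assistant; in Lean 4, for example, a proof that implications compose is literally the function-composition program (a standalone illustration, not how an industrial synthesizer works):

```lean
-- A proof term for "if A implies B and B implies C, then A implies C"
-- is exactly the composition function: extract the proof, get the program.
example (A B C : Prop) : (A → B) → (B → C) → (A → C) :=
  fun f g a => g (f a)
```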
In my opinion, inductive and deductive synthesis are in practice attacking the same problem from two somewhat different angles, because what constitutes a complete specification is debatable (besides, a complete specification today can become incomplete tomorrow - the world is not static).
When (if) these techniques (self-improvement/adaptation and program synthesis) mature, they promise to raise the amount of automation provided by declarative programming (whether such a setting should be considered "programming" at all is sometimes debated): we will concentrate more on Domain Engineering and Requirements Analysis and Engineering than on manual software design and development, manual debugging, manual system performance tuning and so on (possibly with less accidental complexity than is introduced by current manual, non-self-improving/adapting techniques). This would also promote a level of agility yet to be demonstrated by current techniques.