Erlang bindings for CUDA or OpenCL

I have found this post on Erlang and CUDA; it is rather old, so I would like to learn whether anything has changed since the question was posted. Is there any implementation of CUDA/OpenCL bindings for Erlang?
More generally, I am investigating whether it is possible to scale an Erlang program vertically onto the GPU, using CUDA/OpenCL to process a data stream.

OpenCL is here: https://github.com/tonyrog/cl
(You should use the nif branch if that isn't merged to master yet)

I'd wait for this talk http://erlang-factory.com/conference/SFBay2011/speakers/KevinSmith (they will upload video & slides after the conference)

I gave the talk Yurii mentioned and I'm not sure when the videos will be available. The code I demoed is available here: http://github.com/kevsmith/pteracuda. It's minimal but should illustrate what's possible with CUDA and NIFs. I'm hoping to improve it further once my machine arrives back home from SF.
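For readers who haven't written a NIF before, here is a minimal sketch of the general shape of a CUDA-backed NIF (the gpu_demo module and everything else here are hypothetical illustrations, not pteracuda's actual API; error handling and scheduler concerns are omitted):
// Hypothetical CUDA-backed NIF: scale(BinaryOfFloats, Factor) -> Binary.
// Compile with nvcc into a shared library; not pteracuda's actual code.
#include <erl_nif.h>
#include <cuda_runtime.h>

__global__ void scale_kernel(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

static ERL_NIF_TERM scale_nif(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[]) {
    ErlNifBinary in;
    double factor;
    if (!enif_inspect_binary(env, argv[0], &in) ||
        !enif_get_double(env, argv[1], &factor))
        return enif_make_badarg(env);

    int n = in.size / sizeof(float);
    float *dev;
    cudaMalloc(&dev, in.size);                                  // copy the binary to the GPU
    cudaMemcpy(dev, in.data, in.size, cudaMemcpyHostToDevice);
    scale_kernel<<<(n + 255) / 256, 256>>>(dev, (float)factor, n);

    ERL_NIF_TERM result;
    unsigned char *out = enif_make_new_binary(env, in.size, &result);
    cudaMemcpy(out, dev, in.size, cudaMemcpyDeviceToHost);      // copy the result back
    cudaFree(dev);
    return result;
}

static ErlNifFunc funcs[] = {{"scale", 2, scale_nif}};
ERL_NIF_INIT(gpu_demo, funcs, NULL, NULL, NULL, NULL)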

You should also look at https://github.com/vascokk/NumEr
I've been using bits from both this project and Smith's project.

Related

how to read a video file and split it into frames for android

My goal is as follows: I have to read in a video that is stored on the SD card, process it frame by frame, doing image processing on each frame, and then store the result in a new file on the SD card again.
At first I wanted to use OpenCV for Android, but I did not seem to be able to read the video, as discussed here.
I am guessing you already know that doing this on a mobile device, or any compute-limited device, is not ideal, simply because video manipulation is very computationally intensive, which translates into slow execution and heavy battery usage on many devices. If you do have the option to do the processing on the server side, it is definitely worth considering.
Assuming that for your use case you need to do it on the mobile device: OpenCV on Android will not allow you to read in a video and access each frame; @StephenG mentions this in his answer to the question you refer to above.
In the past, functionality like this did not get ported to the Android OpenCV port, as the guidance was to use ffmpeg for frame grabbing on Android devices.
According to more recent documentation, however, this should be available for Android now using the VideoCapture class (note I have not used this myself...):
http://docs.opencv.org/java/2.4.11/org/opencv/highgui/VideoCapture.html
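For reference, the frame-by-frame loop looks like this in OpenCV's C++ interface, which the Java wrapper mirrors (a sketch only: the paths are hypothetical, and I have not checked which codecs the Android build can actually open):
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main() {
    cv::VideoCapture cap("/sdcard/input.mp4");              // hypothetical input path
    if (!cap.isOpened()) return 1;

    cv::Size size((int)cap.get(CV_CAP_PROP_FRAME_WIDTH),
                  (int)cap.get(CV_CAP_PROP_FRAME_HEIGHT));
    cv::VideoWriter out("/sdcard/output.avi",               // hypothetical output path
                        CV_FOURCC('M', 'J', 'P', 'G'),
                        cap.get(CV_CAP_PROP_FPS), size);

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::GaussianBlur(frame, frame, cv::Size(5, 5), 0);  // stand-in for your per-frame processing
        out.write(frame);
    }
    return 0;
}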
It is worth noting that the OpenCV Android examples are all currently based around Eclipse, and if you want to use Android Studio, getting things up and running initially can be quite tricky. The following worked for me recently, but as both Studio and OpenCV change over time, you may find you have to do some forum hunting if it does not work for you:
https://stackoverflow.com/a/35135495/334402
Taking a different approach, you can use ffmpeg itself, wrapped in an Android library, for tasks like this.
The advantage of the wrapper approach is that you can use all the usual command line syntax and there is a lot of info on the web to help you get the right parameters.
The disadvantage is that ffmpeg was not really designed to be wrapped in this way, so you do sometimes see issues. Having said that, it is a common approach now, and so long as you choose a well-used wrapper library, you should at least have a good community to discuss any issues you come across. I have used this approach in a hand-crafted way in the past, but if I were doing it again I would use one of the popular examples, such as:
https://github.com/WritingMinds/ffmpeg-android-java

how useful is the Cling C++ JIT interpreter developed at CERN?

I recently watched a great Google Talks presentation about Cling, the C++ language interpreter. But I wonder whether anyone except the people at CERN (where it is developed) is using Cling, and how good it is from a non-collider-physics-scientist point of view. Can you write desktop apps with it?
There are some videos of use cases outside High Energy Physics: http://www.youtube.com/results?search_query=cling+c%2B%2B (I think the first couple are the relevant ones).
It has the potential to be very useful, but it is very young. There is no documentation that I could find, no dedicated mailing list, no online tutorials. I was able to get small toy code to run, but couldn't figure out how to use it productively on a large library yet.
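To give a flavor of it, a toy session looks roughly like this (the prompt and the printed output are approximate and vary between Cling versions):
$ cling
[cling]$ #include <vector>
[cling]$ std::vector<int> v {1, 2, 3};
[cling]$ v.size()      // expressions without a trailing semicolon print their value
(unsigned long) 3
[cling]$ .q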
The Cling project is a well-established one. You can find more information on their official website, cling. They also have a forum.
Thanks

OpenCV and Computer Vision, where do we stand now?

I want to do a project involving Computer Vision, mostly object detection/identification. After some research, I keep coming back to OpenCV. But all of the tutorials are from 2008 (I guess it was big for a bit then). The Python bindings apparently don't compile on the Mac. I'm using the C++ framework right out of Xcode, but none of the tutorials work because they're outdated, and the documentation sucks from what I can parse.
Is there a better solution for what I'm doing, and does anyone have any suggestions on learning how to use OpenCV?
Thanks
I have had similar problems getting started with OpenCV and from my experience this is actually the biggest hurdle to learning it. Here is what worked for me:
This book: "OpenCV 2 Computer Vision Application Programming Cookbook." It's the most up-to-date book and has examples on how to solve different Computer Vision problems (You can see the table of contents on Amazon with "Look Inside!"). It really helped ease me into OpenCV and get comfortable with how the library works.
As others have said, the samples are very helpful. For things that the book skips or covers only briefly, you can usually find more detailed examples by looking through the samples. You can also find different ways of solving the same problem between the book and the samples. For example, for finding keypoints/features, the book shows an example using FAST features:
vector<KeyPoint> keypoints;
FastFeatureDetector fast(40); // FAST detector with a threshold of 40
fast.detect(image, keypoints);
But in the samples you will find a much more flexible way (if you want to have the option of choosing which keypoint detection algorithm to use):
vector<KeyPoint> keypoints;
// The factory lets you swap "FAST" for "ORB", "MSER", etc. to choose the algorithm at runtime
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("FAST");
featureDetector->detect(image, keypoints);
From my experience things eventually start to click and for more specific questions you start finding up-to-date information on blogs or right here on StackOverflow.
Let me add a couple of things. First, I can assure you that the Python bindings to OpenCV work on a Mac. I use them every day.
Many people like OpenCV for many reasons:
The license is good, friendly to integration into commercial products, etc.
It is quite good from a technical standpoint. It gives you a reference implementation of state-of-the-art algorithms.
It tends to be quite fast compared to the alternatives (Matlab I'm looking at you).
Like everything in life, it is not perfect:
It is a good example of a software library that is a moving target. I have a 300-line Python program that uses OpenCV, and every few months when a new version of OpenCV is released I have to change it to adapt to the new function names/calling conventions, etc. The library does advance, a lot; however, it is a pain to have to change the same program three times per year.
It has a learning curve; like computer vision itself, it is quite technical and not easy to learn.
There are alternatives (with other pros and cons); MATLAB with the Image Processing Toolbox is one such example.
The simplest answer that comes to mind is to read the example code with a bit of understanding, and to try out whether your ideas work. The API does change, and most of the tutorials were written for the first versions of OpenCV; it looks like nobody has bothered to rewrite them. Nevertheless, the core ideas behind the library are not changing, so if you find a tutorial that answers your questions but is written against the old API, just look in the documentation for the modern replacements of the functions it uses. It's not easy and quick, but it works.
If you use the newest (currently 2.3) version, I suggest using both the 2.1 documentation and the 2.3 docs + tutorials. You should also look into the samples, which should have been installed alongside the library; there are lots of hints there about how to use certain structures, and tricks that aren't mentioned in the documentation. Finally, don't be afraid to look inside the code of the library itself (if you compiled it on your own). Unfortunately, that's the only source I know of to check, for example, which code corresponds to which type of Mat object.
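For example, here is the kind of translation I mean, from a 1.x-era tutorial call to its 2.x C++ replacement (a sketch; the image path is just an illustration):
#include <opencv2/highgui/highgui.hpp>

int main() {
    // Old C API seen in 2008-era tutorials:
    //   IplImage *img = cvLoadImage("lena.jpg");
    //   cvNamedWindow("preview"); cvShowImage("preview", img);
    cv::Mat img = cv::imread("lena.jpg");   // modern replacement for cvLoadImage
    if (img.empty()) return 1;
    cv::imshow("preview", img);             // modern replacement for cvShowImage
    cv::waitKey(0);
    return 0;
}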

What's best for your Video Tracking? Why?

Best as in reliable, maintainable and fast.
Considering Processing, VVVV or OpenFrameworks?
I know Processing doesn't handle big video frames very well.
VVVV (Nodes use OpenCV) is just for Windows.
OpenFrameworks (OpenCV) is more complicated than the above.
You can try to implement your app in Processing and see if it fits your needs and is fast enough. It should be a little easier and faster to write Java instead of C++.
Here can you find how to setup with processing with examples: http://ubaa.net/shared/processing/opencv/
If you don't want to code anything, you can try VVVV; it should be a little faster, but it is Windows-only, as you mentioned.
If your Processing app runs too slowly, you can try openFrameworks.
Download the new OF 007 from http://www.openframeworks.cc/ and check out the setup guide.
Once you have done the install, you can play around with the OpenCV examples in:
<your-OF-folder>/apps/addonsExamples/opencvExample
<your-OF-folder>/apps/addonsExamples/opencvHaarFinderExample/
Personally I prefer OF, because you can build anything custom with the best performance, but it's good to make your prototype in Processing to see if it works, and then implement it again in OF.
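To give a sense of the OF route, here is a minimal sketch of background-subtraction blob tracking with the ofxOpenCv addon (written against the OF 007-era API; names and signatures vary between OF releases, so compare it with the opencvExample shipped with your version):
#include "ofMain.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber grabber;
    ofxCvColorImage color;
    ofxCvGrayscaleImage gray, background, diff;
    ofxCvContourFinder contours;

    void setup() {
        grabber.initGrabber(320, 240);
        color.allocate(320, 240);
        gray.allocate(320, 240);
        background.allocate(320, 240);
        diff.allocate(320, 240);
    }

    void update() {
        grabber.update();
        if (!grabber.isFrameNew()) return;
        color.setFromPixels(grabber.getPixels(), 320, 240);
        gray = color;                        // convert to grayscale
        diff.absDiff(background, gray);      // difference against the stored background
        diff.threshold(30);
        contours.findContours(diff, 20, 320 * 240 / 3, 10, false);  // blobs between 20 px and a third of the frame
    }

    void draw() {
        diff.draw(0, 0);
        contours.draw(0, 0);                 // draws a box around each tracked blob
    }

    void keyPressed(int key) {
        if (key == ' ') background = gray;   // press space to capture the background
    }
};

int main() {
    ofSetupOpenGL(320, 240, OF_WINDOW);
    ofRunApp(new ofApp());
}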
As far as I can see from your question, VVVV and OF are the options you're looking at; you prefer VVVV's node-based programming over OF, but you aren't happy that VVVV is Windows-only.
Have you considered other alternatives like MaxMSPJitter or PureData ?
Both are similar to VVVV or the other way around :)
MaxMSP has a package for optimized matrix operations (3D/video) called Jitter.
For Jitter there is cv.jit, a free collection of external objects, and its samples/tutorials are great.
Similarly PureData has an add-on called Gem, which is similar to Max's Jitter package.
I haven't tried it with PureData, but there are OpenCV bindings for it, through Gem.
cv.jit
pdp OpenCV PureData Bindings - via Piksel.no
MaxMSP uses QuickTime on OS X and can use DirectX on Windows, but it's commercial.
PureData runs on Windows/OS X/Linux; it's free and open source.
HTH

Partial forking of OpenCV

I am currently developing an image processing application using OpenCV's cxcore as the basic structure (the matrix class and its functions are very convenient). However, I don't really use the image processing capabilities provided by OpenCV (cv and cvaux). All I need is the cxcore module, and some highgui for debugging purposes.
Is it possible to fork OpenCV's cxcore into my own project, legally and technically? Has anyone ever done this before? My intention is for my application to be compilable on any system without having to install OpenCV as an intermediate step.
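For illustration, this is the kind of cxcore-only code I mean (a minimal sketch using the 1.x C API, compiled as C++):
#include <opencv/cxcore.h>
#include <cstdio>

int main() {
    CvMat *m = cvCreateMat(3, 3, CV_32FC1);        // 3x3 single-channel float matrix
    cvSetIdentity(m);
    cvConvertScale(m, m, 2.0, 0.0);                // m = 2 * I
    std::printf("m(0,0) = %f\n", cvmGet(m, 0, 0));
    cvReleaseMat(&m);
    return 0;
}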
Thanks :-) ,
Andree
P.S.: I have posted the same question in OpenCV's mailing list.
Since OpenCV is licensed under the BSD license, you should be able to do anything with the code, regardless of whether your application is proprietary or free software. "Anything" includes using only part of the code in your application.
That being said, sharing won't hurt you and it's nice and polite :)
