Samples are often distributed with SDKs to provide real-world details on how to use the product. Some samples are complete applications, some are bare-bones console applications, and some are just plain-text tutorials with code snippets.
How do you like SDK samples to be presented?
All of the formats you list can be helpful to different people and at different times.
Personally, I like to see a lot of deep textual treatment of the relevant subjects and lots of small, focused code snippets demonstrating salient aspects of the SDK.
A 'toy' application used to demonstrate both the breadth and the depth of the SDK can be very useful as well.
I have read the documentation for the RoboForth environment from ST Robotics and found it a nice way to program a robot. What I missed is a sophisticated software library with predefined motion primitives: for example, for picking up an object, for regrasping, or for changing a tool.
In other programming languages such as Python or C++, a library is a convenient way to program repetitive tasks and to store expert knowledge in machine-readable files. A library is also a good way for less experienced programmers to get access to higher-level functions. In my opinion Forth is the perfect language for implementing such an API, but I couldn't find any information about one. Where should I search? Are there any examples out there?
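To make concrete what I mean, here is a rough Python sketch of such a motion-primitive layer (every name in it is hypothetical; it only illustrates the kind of API I am looking for):

# Hypothetical motion-primitive API; none of these names exist in
# RoboForth or, to my knowledge, in any published library.
class Arm:
    def __init__(self, driver):
        self.driver = driver  # low-level connection to the robot controller

    def pick_up(self, pose, approach_height=50):
        # Approach from above, close the gripper, and retract with the object.
        self.driver.move_to(pose.raised_by(approach_height))
        self.driver.move_to(pose)
        self.driver.close_gripper()
        self.driver.move_to(pose.raised_by(approach_height))

    def change_tool(self, tool_name):
        # Dock the current tool in the rack, then attach the requested one.
        self.driver.move_to(self.driver.rack_pose(tool_name))
        self.driver.release_tool()
        self.driver.attach_tool(tool_name)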
I am the author of RoboForth, and you make a good point. I have approached the problem of starting off new users with videos on YouTube; see How to... (a playlist with 6 items, e.g. "ST Robotics How-to number 1 - getting started"), which covers the basics and indeed tool changing.
I never wrote any starter programs, because the physical positions (coordinates) differ from one user to the next. However, I think it can be done, and I will do it. Thanks for the heads-up.
I'm developing a project in which I'm trying to create a distributed version of TensorFlow (the current open-source version is single-node), where the cluster is composed entirely of mobile devices (e.g. smartphones).
In your opinion, what is a possible application or use case where this could be useful? Can you give me an example, please?
I know that this is not a "standard" Stack Overflow question, but I didn't know where else to post it (if you know a better place, please let me know). Thanks so much for your help!
http://www.google.com.hk/search?q=teonsoflow+android
TensorFlow can be used for image identification, and there is an Android example that uses the camera.
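For a rough sense of what the inference side looks like, here is a minimal sketch of classifying one image with a pre-trained frozen graph (the file name and the tensor names follow the classic Inception demo and are assumptions; other models will differ):

import tensorflow as tf

# Load a frozen GraphDef from disk (the file name is a placeholder).
with tf.gfile.GFile("inception_frozen.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    image_data = tf.gfile.GFile("photo.jpg", "rb").read()
    # Feed the raw JPEG bytes and fetch the class probabilities.
    preds = sess.run("softmax:0", {"DecodeJpeg/contents:0": image_data})
    print(preds.argmax())  # index of the most likely class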
There could be many distributed uses for this: face recognition, or 3D scene construction from 2D images.
TensorFlow can be used for a chatbot. I am working towards using it for a personal assistant. The AI on one phone could communicate with the AI on other phones.
It could use vision and GPS to 'reserve' a lane for you on the road. Intelligent, crowd-planned roads and intersections would be safer.
I am also interested in using it for distributed mobile. Please contact me at my username on Gmail or Skype.
https://boinc.berkeley.edu
I think all my answers could run on individual phones with communication between them. If you want them to act like a cluster, as @Yaroslav pointed out, there are SETI@home and other projects running on the BOINC client.
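For what it's worth, once TensorFlow's distributed runtime is available, wiring machines into a cluster looks roughly like this (a minimal sketch using tf.train.ClusterSpec and tf.train.Server; the worker addresses are placeholders):

import tensorflow as tf

# Two hypothetical devices reachable over the network.
cluster = tf.train.ClusterSpec({"worker": ["10.0.0.1:2222", "10.0.0.2:2222"]})
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Pin part of the graph to the other worker; the runtime ships the
# computation to that device transparently.
with tf.device("/job:worker/task:1"):
    a = tf.constant(3.0)
    b = tf.constant(4.0)
    product = a * b

with tf.Session(server.target) as sess:
    print(sess.run(product))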
TensorFlow could also be combined with a game engine. You could have a procedurally generated, AI-driven augmented-reality game that generates its story as multiple players interact with it. I have seen research papers on each of these components.
I recently watched a great Google talk about Cling, the C++ language interpreter. But I wonder whether anyone except the people at CERN (where it is developed) is using Cling, and how good it is from a non-collider-physics point of view. Can you write desktop apps with it?
There are some videos of use cases outside high-energy physics: http://www.youtube.com/results?search_query=cling+c%2B%2B (I think the first couple are the relevant ones).
It has the potential to be very useful, but it is very young. There is no documentation that I could find, no dedicated mailing list, no online tutorials. I was able to get small toy code to run, but couldn't figure out how to use it productively on a large library yet.
The Cling project is a well-established one. You can find more information on the official cling website, and they also have a forum.
I want to do a project involving computer vision, mostly object detection/identification. After some research, I keep coming back to OpenCV. But all of the tutorials are from 2008 (I guess it was big for a bit then). It apparently doesn't compile in Python on the Mac. I'm using the C++ framework right out of Xcode, but none of the tutorials work because they're outdated, and the documentation sucks from what I can parse.
Is there a better solution for what I'm doing, and does anyone have any suggestions on learning how to use OpenCV?
Thanks
I have had similar problems getting started with OpenCV and from my experience this is actually the biggest hurdle to learning it. Here is what worked for me:
This book: "OpenCV 2 Computer Vision Application Programming Cookbook." It's the most up-to-date book and has examples on how to solve different Computer Vision problems (You can see the table of contents on Amazon with "Look Inside!"). It really helped ease me into OpenCV and get comfortable with how the library works.
Like others have said, the samples are very helpful. For things that the book skips or covers only briefly, you can usually find more detailed examples by looking through the samples. You can also find different ways of solving the same problem between the book and the samples. For example, for finding keypoints/features, the book shows an example using FAST features:
// OpenCV 2.x: detect FAST corners with a detection threshold of 40
vector<KeyPoint> keypoints;
FastFeatureDetector fast(40);
fast.detect(image, keypoints);
But in the samples you will find a much more flexible way (if you want to have the option of choosing which keypoint detection algorithm to use):
// Create the detector by name through the FeatureDetector factory, so the
// algorithm can be swapped by changing the string (e.g. "SURF" or "MSER")
vector<KeyPoint> keypoints;
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("FAST");
featureDetector->detect(image, keypoints);
From my experience, things eventually start to click, and for more specific questions you start finding up-to-date information on blogs or right here on Stack Overflow.
Let me add a couple of things. First, I can assure you that the Python bindings to OpenCV work on a Mac. I use them every day.
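A minimal smoke test, assuming the OpenCV 2.x-era cv2 bindings (the file name is a placeholder):

import cv2

# Load an image, convert it to grayscale, and detect FAST keypoints,
# mirroring the C++ snippets in the answer above.
img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
fast = cv2.FastFeatureDetector(40)
keypoints = fast.detect(gray, None)
print(len(keypoints))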
Many people like OpenCV for many reasons:
The license is good, friendly to integration into commercial products, etc.
It is quite good from a technical standpoint. It gives you a reference implementation of state-of-the-art algorithms.
It tends to be quite fast compared to the alternatives (MATLAB, I'm looking at you).
Like everything in life, it is not perfect:
It is a good example of a software library that is a moving target. I have a 300-line Python program that uses OpenCV, and every few months, when a new version of OpenCV is released, I have to change it to adapt to the new function names, calling conventions, etc. The library does advance, a lot, but it is a pain to have to change the same program three times per year.
It has a learning curve; like computer vision itself, it is quite technical and not easy to learn.
There are alternatives (with their own pros and cons); MATLAB with the Image Processing Toolbox is one such example.
The simplest answer that comes to mind is to read the example code with a bit of understanding, and to try out whether your ideas work. The API does change, and most of the tutorials were written for the first versions of OpenCV; it looks like nobody has bothered to rewrite them. Nevertheless, the core ideas behind the library are not changing. So if you find a tutorial that answers your questions but is written against the old API, just look in the documentation for modern replacements of the functions it uses. It's not easy or quick, but it works. If you use the newest version (currently 2.3), I suggest using both the 2.1 documentation and the 2.3 docs + tutorials.

You should also look into the samples, which should have been installed alongside the library. There are lots of hints there about how to use certain structures, and tricks that aren't mentioned in the documentation. Finally, don't be afraid to look inside the code of the library itself (if you compiled it on your own). Unfortunately, that's the only source I know of to check, for example, which code corresponds to which type of Mat object.
This is something I've pondered/struggled with, and I would love to hear some opinions on it. I have a good deal of familiarity with the iOS SDK, but not so much with its OpenGL-related aspects, and not really any with the various SDKs, especially game SDKs built to work on iOS.
If I want to create 2D games for iPhone/iPad, is it easy/better/practical to use a simple collection of iOS SDK objects such as UIImageViews to build a plethora of sprites interacting on the screen, or is it much better to go with an SDK for that? I'm assuming that going with GL is overkill for 2D requirements, but please voice any dissent if I'm wrong there.
I'm mainly interested in the quickest route to getting things done, combined with the smallest ramp-up on new technologies. Obviously, if an SDK is well worth it simply because it is cross-platform for other OSs, that is reasonable to mention.
Using a framework on top of OpenGL can greatly increase productivity and maintainability and reduce programming errors.
Personally, I work with cocos2d-for-iphone. It's written in Objective-C and built on top of OpenGL. It was created with the aim of making 2D games, so unlike UIKit or QuartzCore it is designed for exactly that. It provides a lot of convenience APIs to manage scenes and sprites, create animations, and so on, and even libraries for sound, for example.
There is a very good article which describes some open-source game engines available on the iPhone here. It could help you in your search.