Programming a drone to fly indoors using OpenCV

I am a newbie with drones. I would like to develop a program that uses OpenCV to manage a drone flying indoors over a line.
I have looked at a lot of libraries, but almost all of them are GPS based. I saw there is an alternative, called SLAM, that determines position using the sensors.
I have a line on the floor and a camera on my drone. I will be using a Parrot AR.Drone, but I would like the approach to work with any drone. I like Mission Planner, but I am not sure it is the best choice.
What SDK would you recommend for managing the drone using relative locations or SLAM rather than GPS points?

Well, you have the Parrot API, and a couple of wrappers in different languages: Node-AreDrone for Node.js, PyArdrone for Python, and a wrapper coded in C#, AR.Drone, which I have used. It has a good user interface in which you can see both cameras, record and replay videos, control the drone by clicking buttons, view the drone's metrics and configuration, and send commands in a queue. Because I love C# and those features already come in a user interface, I prefer it. Most of the wrappers are much the same, since underneath they all use the Parrot API by sending UDP messages. I haven't tried the others, so with so many out there, somebody else may be able to say which is best. For Mission Planner I couldn't find a good solution for indoors. So, for anyone who is lost and doesn't know where to start, as I was: pick the language you want and search for the corresponding wrapper. If you like C# as I do, AR.Drone is a good choice.
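For context, the protocol all of these wrappers sit on is very simple: plain-text AT commands sent over UDP. Here is a minimal Python sketch, assuming the documented AR.Drone 1.0 protocol (default IP 192.168.1.1, command port 5556, and the REF takeoff/land bitfields from Parrot's developer guide; verify them against your firmware before relying on this):

    import socket
    import time

    DRONE_IP = "192.168.1.1"   # default AR.Drone access-point address
    AT_PORT = 5556             # UDP port the drone listens on for AT commands

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 1  # every AT command carries an increasing sequence number

    def send_at(command):
        """Send one AT command to the drone and bump the sequence number."""
        global seq
        sock.sendto(command.format(seq=seq).encode("ascii"), (DRONE_IP, AT_PORT))
        seq += 1

    # Take off, hover for a few seconds, then land. In a real program you
    # must keep sending commands regularly or the drone's watchdog kicks in.
    send_at("AT*REF={seq},290718208\r")   # takeoff (base bitfield + takeoff bit)
    time.sleep(5)
    send_at("AT*REF={seq},290717696\r")   # land (base bitfield only)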
Also, if you want to do something with OpenCV, Copterface is a good example. You could implement it in any language that has OpenCV bindings.
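For the line-following part of the original question, a minimal OpenCV sketch might look like the following (the function name and thresholds are mine, purely illustrative): threshold each camera frame, find the line's centroid, and derive a steering signal from its horizontal offset.

    import cv2

    def line_offset(frame):
        """Return the horizontal offset of a dark floor line from the image
        centre, normalised to [-1, 1], or None if no line is visible."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Assumes a dark line on a light floor; invert for the opposite case.
        _, mask = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)
        m = cv2.moments(mask)
        if m["m00"] == 0:
            return None                    # no line pixels found
        cx = m["m10"] / m["m00"]           # x coordinate of the line centroid
        half = frame.shape[1] / 2.0
        return (cx - half) / half          # < 0: steer left, > 0: steer right

    cap = cv2.VideoCapture(0)              # the drone's video stream in practice
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        offset = line_offset(frame)
        if offset is not None:
            print("steer command proportional to", offset)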

Related

Robotics library in Forth?

I have read the documentation for the RoboForth environment from ST Robotics and recognized that it is a nice way to program a robot. What I missed is a sophisticated software library with predefined motion primitives, for example for picking up an object, regrasping, or changing a tool.
In other programming languages like Python or C++, a library is a convenient way to program repetitive tasks and to store expert knowledge in machine-readable files. A library is also a good way for less experienced programmers to get access to higher-level functions. In my opinion Forth is the perfect language for implementing such an API, but I didn't find any information about one. Where should I search? Are there any examples out there?
I am the author of RoboForth, and you make a good point. I have approached the problem of getting new users started with videos on YouTube; see my How-to playlist (6 items, e.g. "ST Robotics How-to number 1 - getting started"), which covers the basics and indeed tool changing.
I never wrote any starter programs because the physical positions (coordinates) would differ from one user to the next; however, I think it can be done, and I will do it. Thanks for the heads-up.
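To illustrate what such a motion-primitive library might look like at the API level, here is a hypothetical Python sketch (not RoboForth; every name in it is invented). The idea is simply to wrap low-level moves in named, parameterised routines that encode the expert knowledge the question asks about:

    class Arm:
        """Hypothetical robot-arm wrapper exposing motion primitives."""

        def __init__(self, driver):
            self.driver = driver  # low-level interface to the controller

        def move_to(self, pose):
            self.driver.goto(pose)

        def pick(self, grasp_pose, approach_height=50):
            """Approach from above, close the gripper, then retreat."""
            self.move_to(grasp_pose.raised(approach_height))
            self.move_to(grasp_pose)
            self.driver.close_gripper()
            self.move_to(grasp_pose.raised(approach_height))

        def change_tool(self, rack_pose):
            """Drop the current tool at the rack and latch the next one."""
            self.move_to(rack_pose)
            self.driver.release_tool()
            self.driver.latch_tool()

The same layering (low-level moves below, named primitives above) maps naturally onto Forth words, which is presumably why the questioner considers Forth a good fit.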

Possible use case/real application for mobile distributed version of Tensorflow?

I'm developing a project in which I'm trying to create a distributed version of TensorFlow (the current open-source version is single node) where the cluster is composed entirely of mobile devices (e.g. smartphones).
In your opinion, what is a possible application or use case where this could be useful? Can you give me some examples, please?
I know this is not a "standard" Stack Overflow question, but I didn't know where else to post it (if you know a better place, please let me know). Thanks so much for your help!
http://www.google.com.hk/search?q=tensorflow+android
TensorFlow can be used for image identification, and there is an example that uses the camera on Android.
There could be many distributed uses for this: face recognition, or 3D space construction from 2D images.
TensorFlow can be used for a chatbot. I am working towards using it for a personal assistant. The AI on one phone could communicate with the AI on other phones.
It could use vision and GPS to 'reserve' a lane for you on the road. Intelligent, crowd-planned roads and intersections would be safer.
I am also interested in using it for distributed mobile. Please contact me at my user name at gmail or on Skype.
https://boinc.berkeley.edu
I think all my suggestions could run on individual phones with communication between them. If you want them to act like a cluster, as @Yaroslav pointed out, there are SETI@home and other projects running in the BOINC client.
TensorFlow could be combined with a game engine. You could have a procedurally generated, AI-driven augmented reality game that generates the story as multiple players interact with it. I have seen research papers for each of these components.
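For reference, distributed support later shipped in the open-source TensorFlow tree as the tf.train.ClusterSpec / tf.train.Server API. A minimal parameter-server sketch in the TF 1.x style looks like this (the addresses are placeholders, and this is the desktop API, shown only as a reference point for a mobile port):

    import tensorflow as tf

    # Describe the cluster: one parameter server and two workers. On phones
    # these addresses would be the devices' reachable IPs.
    cluster = tf.train.ClusterSpec({
        "ps":     ["192.168.0.10:2222"],
        "worker": ["192.168.0.11:2222", "192.168.0.12:2222"],
    })

    # Each process starts one server for its role in the cluster.
    server = tf.train.Server(cluster, job_name="worker", task_index=0)

    with tf.device("/job:ps/task:0"):
        w = tf.Variable(tf.zeros([10]))      # variables live on the PS

    with tf.device("/job:worker/task:0"):
        loss = tf.reduce_sum(tf.square(w))   # compute runs on the worker
        train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

    with tf.Session(server.target) as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(train_op)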

webgl viewer/game engine for low poly turntables?

I am looking for the best tool to achieve something like this (this is Blender's game engine; no real reflections, etc.) in a WebGL viewer.
http://youtu.be/9-n12ZH5O6k
The idea is to prepare several basic scenes like this and then let the user upload his design and have it previewed on a car (or other, far more basic objects).
While p3d is nice, I don't think it does the job. There's no API for these cases yet. What are some options to pull this off? The requirement is a library without too large a footprint, since the feature/product is planned for the Asian market, so internet speed has to be considered.
You should look into three.js or babylon.js. You won't build an app like that in a finger snap, so read the tutorials as well, but these libraries will ease your task considerably.

How to start with Augmented reality to create my own framework (Not AR App)

I have been working with Augmented Reality for quite a few months. I have used third-party tools like Unity/Vuforia to create augmented reality applications for Android.
I would like to create my own framework with which I will create my own AR apps. Can someone point me to the right tutorials/links to achieve my goal? At a higher level, my plan is to create an application that can recognize multiple markers and match them with cloud-stored models.
That seems like a massive undertaking: model recognition is not an easy task. I recommend looking at OpenCV (which has some standard algorithms you can use as a starting point) and then at a good computer vision book (e.g., Richard Szeliski's book, or Hartley and Zisserman).
But you are going to run into a host of practical problems. Consider that systems like Vuforia provide camera calibration data for most Android devices, and it's hard to do computer vision without it. Then, of course, there's efficiently managing the whole pipeline, which (again) companies like Qualcomm and Metaio invest huge amounts of money in.
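As a concrete starting point for the marker-recognition half, OpenCV's aruco module provides detection and, given calibration data, pose estimation. A minimal sketch using the classic cv2.aruco API (the module layout varies between OpenCV versions, and the intrinsics below are placeholders; in practice they come from cv2.calibrateCamera):

    import cv2
    import numpy as np

    # Placeholder camera intrinsics; obtain real ones via cv2.calibrateCamera.
    camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    dist_coeffs = np.zeros(5)

    dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_250)
    params = cv2.aruco.DetectorParameters_create()

    frame = cv2.imread("frame.png")
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary, parameters=params)

    if ids is not None:
        # 0.05 is the marker side length in metres, needed for a metric pose.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, 0.05, camera_matrix, dist_coeffs)
        for i, marker_id in enumerate(ids.flatten()):
            print(marker_id, tvecs[i])      # translation of each detected marker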
I'm working on a project that does frame-marker tracking, and I've started exporting bits of it into a project I'm calling OpenAR. Right now I'm pulling out unpublishable pieces and making the Vuforia and OpenCV versions of marker tracking interchangeable. You're certainly welcome to check out the work as it progresses; you can see videos of some of the early work on my YouTube channel.
The hard part is getting performance as good as Vuforia's.

Carmen Robotics

I have been working with Carmen (http://carmen.sourceforge.net/) for a while now, and I really like the software, but I need to make some changes inside the source code.
I am therefore interested in student reports/projects that have worked with Carmen, or in any documentation of the source code.
I have been reading the documentation on Carmen's web page, but with all due respect I think the literature there is a bit outdated and insufficient.
ROS is the new hot navigation toolkit for robotics. It has a professional development group and a very active community. The documentation is okay, but it's the best I've seen for robot operating systems.
There are a lot of student project teams using it.
Check it out at www.ros.org
I'll be more specific on why ROS is awesome:
- Built-in visualizer/simulator, rviz.
- A record function that captures all of the messages nodes publish. This lets you take in a lot of raw data, store it in a "ros bag", and play it back later when you need to test your AI but would rather sit in your bed.
- Built-in navigation capabilities: all you have to do is write the publishers of data for your sensors (see the sketch after this list) and fill out the standard messages so the navigation stack has enough information.
- An Extended Kalman Filter, which is pretty awesome because I didn't want to write one. I'm currently implementing it; I'll let you know how that turns out.
- Built-in message levels: you can change which severity of print messages is printed at runtime, which is fairly handy for debugging.
- A robot monitor node you can publish your sensors' status to; it bundles all of that information into a GUI for your viewing pleasure.
- Some basic drivers already written. For example, SICK lidars are supported right out of the box.
- A built-in transform function to help you move everything into the right coordinate system.
- ROS was made to run across multiple computers, but it can work on just one. Data transfer is handled over TCP ports.
I hope that's more helpful.
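To give a feel for the "just write publishers for your sensors" point above, a minimal rospy node (ROS 1) publishing a range reading might look like this (the topic name, frame id, and rate are arbitrary, and read_sensor is a stub standing in for real hardware I/O):

    import rospy
    from sensor_msgs.msg import Range

    def read_sensor():
        return 1.0  # stub value; replace with the actual driver call

    def publish_range():
        rospy.init_node("range_sensor")
        pub = rospy.Publisher("sonar", Range, queue_size=10)
        rate = rospy.Rate(10)                 # publish at 10 Hz
        while not rospy.is_shutdown():
            msg = Range()
            msg.header.stamp = rospy.Time.now()
            msg.header.frame_id = "sonar_link"
            msg.radiation_type = Range.ULTRASOUND
            msg.min_range, msg.max_range = 0.02, 4.0
            msg.range = read_sensor()
            pub.publish(msg)
            rate.sleep()

    if __name__ == "__main__":
        publish_range()

Once a node like this is publishing, the rest of the stack (rviz, rosbag recording, the navigation stack) can consume the topic without further glue code.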
