I am confused about Drake's Simulator. Do you need to use the Simulator class with a real robot arm?
What does Drake's Simulator do?
Drake's Simulator class is used for advancing time for a Drake System. The System may contain a simulated robot, or may communicate directly with a real robot. Please consider looking through the overview material on the Drake web site and at some of the examples that come with Drake to get the big picture.
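As a minimal sketch in pydrake (Drake's Python bindings), here is a trivial diagram being advanced in time; the specific blocks are placeholders chosen so the result is easy to check:

```python
# Minimal sketch: build a tiny Drake diagram and advance it with Simulator.
from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.primitives import ConstantVectorSource, Integrator

builder = DiagramBuilder()
source = builder.AddSystem(ConstantVectorSource([1.0]))  # constant input u = 1
integrator = builder.AddSystem(Integrator(1))            # xdot = u
builder.Connect(source.get_output_port(0), integrator.get_input_port(0))
diagram = builder.Build()

simulator = Simulator(diagram)
simulator.AdvanceTo(5.0)  # advance simulated time to t = 5 s

# Integrating a constant 1.0 for 5 s should leave the state near 5.0.
context = integrator.GetMyContextFromRoot(simulator.get_context())
print(context.get_continuous_state_vector().CopyToVector())  # -> [5.]
```

The same Simulator call drives the diagram whether the systems inside it model a robot or talk to real hardware.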
I was wondering whether I could leverage the modularity Drake gives to test Visual SLAM algorithms on real-time data. I would like to create three blocks that output acceleration, angular speed, and RGBD data. The blocks should pull information from real sensors. Another block would process the data and produce the current transform of the camera and a global map. Effectively, I would like to cast my problem into a "Systems" framework so I can easily add filters where I need them.
My question is: given other people's experience with this library, is Drake the right tool for the job for this use case? Specifically, can I use this library to process real-time information in a production setting?
Visual SLAM is not a use case I've implemented myself, but I believe the Drake Systems framework should be up to the task, depending on what you mean by "real time".
We definitely ship RGBD data through the framework often.
We haven't made any attempt to support running Drake in hard real time, but it can certainly run at high rates. If you were to hit a performance bottleneck, we tend to be pretty responsive and would welcome PRs.
As for "production-level": it is certainly our intention for the code and process to be mature enough for that setting, and numerous teams already use Drake that way.
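To make the "blocks" idea concrete, here is a hypothetical sketch (the class name and the IMU-reading callable are mine, not from the question) of wrapping a real sensor as a Drake LeafSystem source block in pydrake; downstream filter blocks would then connect to its output port through a DiagramBuilder like any built-in system:

```python
# Hypothetical sketch: a Drake LeafSystem that polls a real IMU.
import numpy as np
from pydrake.systems.framework import BasicVector, LeafSystem


class ImuSource(LeafSystem):
    """Publishes the latest accelerometer reading on a 3-vector output port."""

    def __init__(self, read_accel):
        super().__init__()
        self._read_accel = read_accel  # user-supplied callable -> length-3 array
        self.DeclareVectorOutputPort("acceleration", BasicVector(3),
                                     self._calc_accel)

    def _calc_accel(self, context, output):
        # Pull the newest sample from the hardware each time the port is evaluated.
        output.SetFromVector(np.asarray(self._read_accel()))
```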
I have the following products:
drone iris+
Pixhawk
For my final-year project I want to process images from the drone in real time and control the drone based on them.
I can't figure out which product would be best for me... is it the Raspberry Pi, or maybe something else I'm not familiar with?
Thanks
Any embedded Linux computer should work. The Odroid series has more computing power than a Raspberry Pi, which will be helpful here. See this article for setup instructions: http://dev.ardupilot.com/wiki/odroid-via-mavlink/
Regarding software: I would suggest using the OpenCV computer vision library for your image-processing needs. It has a nice built-in interface for camera input that works well from both Python and C++. Depending on your experience writing software, I would recommend Python (higher level, possibly slower, portable) or C++ (a fighter jet: harder to use, but with a higher performance ceiling). C++ might be appropriate for the speed necessary to operate a drone. I would check the docs to see if the library serves your needs before diving in, for instance with a capture loop like the one below.
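A minimal OpenCV capture-and-process loop in Python looks like this sketch (the device index and the grayscale conversion are placeholders for your own processing):

```python
# Minimal sketch: grab frames from a camera with OpenCV and process them.
import cv2

cap = cv2.VideoCapture(0)  # device 0; adjust for your camera
if not cap.isOpened():
    raise RuntimeError("Could not open camera")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # placeholder processing step
    cv2.imshow("frame", gray)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```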
Regarding hardware: Consider using Arduino to interface with peripheral hardware, but I'm definitely not experienced with this sort of thing.
Have fun!
I'm currently working on a project that aims to recognize objects in an indoor, household-type environment and to roughly represent the locations of these objects on a map. The hope is that this can all be done using a single Kinect camera.
So far I have managed to implement a basic object recognition system using OpenCV's SURF features. I have followed and used techniques similar to those described in "OpenCV 2 Computer Vision Application Programming Cookbook"; a rough sketch of the pipeline is below.
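For anyone following along, that kind of SURF matching pipeline looks roughly like this in Python (the book uses C++, but the API is parallel; file names are placeholders, and in modern OpenCV builds SURF lives in the non-free opencv-contrib xfeatures2d module):

```python
# Rough sketch of SURF keypoint matching (requires opencv-contrib).
import cv2

img_object = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)  # placeholder images
img_scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img_object, None)
kp2, des2 = surf.detectAndCompute(img_scene, None)

# Brute-force matching with Lowe's ratio test to keep distinctive matches.
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches")
```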
I'm now slowly shifting focus to the mapping portion of this project. I have looked into RGBDSLAM as a method to create 3D maps and represent any objects found. However, I can't seem to find a way to do this. I have already asked a question about this at http://answers.ros.org/question/37595/create-semantic-maps-using-rgbdslam-octomaps/ with no luck so far.
I have also briefly researched GMapping and MonoSLAM but I'm finding it difficult to assess whether these are suitable since I've only just started learning about SLAM.
So, any advice on these SLAM techniques would be much appreciated!
I'm also open to alternatives to what I've talked about. If you know of any other methods of creating semantic maps of environment then please feel free to share!
Cheers.
I have used A. J. Davison's MonoSLAM method, and it is only suitable for small environments like a desktop or a small room (using a fish-eye lens). Try PTAMM (by Dr. Robert Castle) instead; it's much more robust, and the source code is free for academic use.
I'd like to programmatically do some signal processing on a live sound feed.
Specifically I'd like to be able to isolate certain bands of frequencies and play around with phase shifting.
I've not worked in this area before from a purely software perspective, and a quick Google search turned up very little useful information.
Does anyone know of any good information resources for this topic area?
Matlab is a good starting point. It has the necessary toolboxes and functions to capture audio signals, run different kinds of filters over them, and write them to WAV files. The UI is easy to navigate, and it's simple to generate plots and visualize results.
http://www.mathworks.com/products/signal/
If, however, you're looking to develop real-world applications, then Python can come in handy. It has toolkits like SciPy, NumPy, and Audiolab that offer the same functionality as Matlab.
http://www.scipy.org
http://scikits.appspot.com/audiolab
In a nutshell, Matlab is good for testing ideas and prototyping; Python is good for testing as well as real-world application development. And Python is free, while Matlab may cost you if you're not a student anymore.
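As a concrete Python starting point for the original question (isolating a band and playing with phase), here is a minimal SciPy sketch; the file name and band edges are placeholders:

```python
# Minimal sketch: bandpass-filter a WAV file and make a phase-shifted copy.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, hilbert, sosfiltfilt

rate, data = wavfile.read("input.wav")  # placeholder file name
if data.ndim > 1:
    data = data[:, 0]  # keep one channel for simplicity
data = data.astype(np.float64)

# 4th-order Butterworth bandpass, 300-3000 Hz, as second-order sections.
sos = butter(4, [300.0, 3000.0], btype="bandpass", fs=rate, output="sos")
band = sosfiltfilt(sos, data)  # zero-phase filtering (no phase distortion)

# A 90-degree phase-shifted copy of the band via the Hilbert transform.
shifted = np.imag(hilbert(band))

wavfile.write("band.wav", rate, band.astype(np.int16))
```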
http://www.dspguide.com/
This is a superb reference on digital signal processing techniques in general. It's not a programming guide, per se, but it covers the techniques and the theory clearly and simply, and it provides pseudocode and examples so you can implement them in the language of your choice. You'll be hard-pressed to find a more complete reference, and you can download it for free online!
I am soon to start with Microsoft Robotics Studio.
My question to all the MSRS gurus: can simple simulations (such as obstacle avoidance and wall following) be done without any hardware?
Does MSRS have 3-dimensional as well as 2-dimensional rendering? As of now I do not have any hardware and I am only interested in simulation; when I have the robot hardware I may try to interface with it!
Sorry for a silly question; I am an MSRS noob, but I have previous robotics hardware and software experience.
Other than MSRS and the Player Project (Player/Stage/Gazebo), is there any other software that simulates robots effectively?
MSRS tackles several key areas. One of them is simulation. The 3D engine is based on the AGEIA PhysX engine and can simulate not only your robot and its sensors but also a reasonably complex environment.
The demo I saw had a Pioneer with a SICK lidar running around a cluttered apartment living room, with tables, chairs, and so on.
The idea is that your code doesn't even need to know if it's running on the simulator or the real robot.
Edit:
A few links as requested:
Start here: http://msdn.microsoft.com/en-us/library/dd939184.aspx
(screenshot: http://i.msdn.microsoft.com/Dd939184.image001(en-us,MSDN.10).jpg)
Then go here: http://msdn.microsoft.com/en-us/library/dd939190.aspx
(screenshot: http://i.msdn.microsoft.com/Dd939190.image008(en-us,MSDN.10).jpg)
Then take a look at some more samples: http://msdn.microsoft.com/en-us/library/cc998497.aspx
(screenshot: http://i.msdn.microsoft.com/Cc998496.Sumo1(en-us,MSDN.10).jpg)
The simple answer is yes: the MRDS simulator and Player/Stage have very similar capabilities. MRDS uses a video-game-quality physics engine under the hood, so you can do collisions and some basic physics on your robots, but it's not going to match the accuracy of a MATLAB simulation (on the flip side, it's real time and easier to develop with). You can do a lot in MRDS without any hardware.
MRDS uses some pretty advanced programming abstractions, so it can be a bit intimidating at first, but do the tutorials and the "Software Engineering for Robotics" course that has been posted to CodePlex, and you will be fine: http://swrobotics.codeplex.com/