I am a beginner in robotics and I am trying to use Google Cartographer to make my simulated Turtlebot autonomously build a map of its environment.
I have already done all the ROS tutorials, and I can make the robot build the map using teleoperation, but I don't know how to make it build the map by itself. I want to use Google Cartographer for this.
I can, however, run the demo provided by Google, and it works (it builds the map of a museum).
# Launch the 2D depth camera demo.
roslaunch cartographer_turtlebot demo_depth_camera_2d.launch bag_filename:=${HOME}/Downloads/cartographer_turtlebot_demo.bag
The questions:
How can I run it on my own world instead of the bag file of that museum?
Does it need a YAML map like the one I built with teleoperation? What is the command to make it use my YAML map instead of a bag file?
Can I use a .png image with its accompanying YAML metadata?
Could it use the Gazebo simulated worlds (.sdf files)? If so, what is the command to pass one in?
These are the specifications I have:
Ubuntu Xenial
ROS Kinetic
Gazebo 7
Turtlebot2
Google Cartographer for turtlebot
Thanks a lot! I understand the concepts, but I don't know how to link things together to make them work.
So I want to navigate the file system from within a Fiji script and create files. macOS uses / as a file separator while Windows uses \. I can't for the life of me find an easy way to do this, given the poor searchability of the documentation.
For example, in MATLAB I can use the built-in variable filesep, or ispc() / ismac(), to find out.
Is there a similar function in ImageJ?
Thanks
I just found the answer. Using Fiji, you can use File.separator.
See further documentation here
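As a minimal sketch of the idea: Fiji runs Python scripts under Jython, so `java.io.File` is importable there; the `os` fallback below (an addition of mine, not from the thread) lets the same snippet also run in plain Python outside Fiji. The file names in the example path are made up.

```python
# Inside Fiji, Python scripts run under Jython, so Java classes are importable.
try:
    from java.io import File
    sep = File.separator          # "/" on macOS/Linux, "\" on Windows
except ImportError:
    import os                     # fallback when run outside Fiji
    sep = os.sep                  # same value, from the Python stdlib

# Build a path without hard-coding "/" or "\".
path = sep.join(["results", "run01", "stack.tif"])
```

In plain Python (and in Jython via `os.path.join`) you can usually avoid touching the separator at all and let the library join path components for you.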
I'm looking into using the HighCharts Dart library to build visualisations for my Flutter app. Based on the example here, I'm not too sure how to integrate it into the Flutter app.
After looking around, it seems like there isn't a chart library for Flutter other than building one from scratch. Do you know if this issue has been addressed or is still in progress?
Thanks for your time!
I know you're looking for direct integration, but you can use the Node.js export server.
https://www.highcharts.com/blog/news/256-introducing-the-highcharts-node-js-export-server/
How I use it: I post to Highcharts on the server, get the SVG back, and display that in Flutter. It's not as nice as a local library, but it works for a good number of use cases.
The same technique can be applied using any server-side SVG chart renderer.
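The post-and-render round trip described above can be sketched in Python (the thread's client is Flutter/Dart, but the HTTP shape is the same). The `EXPORT_SERVER` URL and port are assumptions for a locally running export server; the `infile`/`type` fields follow the export server's documented request options.

```python
import json
from urllib import request

EXPORT_SERVER = "http://localhost:7801"  # assumed local node export server


def build_payload(chart_options):
    # The export server takes the usual Highcharts config under "infile";
    # asking for "svg" keeps the result easy to embed in a client app.
    return {"infile": chart_options, "type": "svg"}


def export_svg(chart_options, server=EXPORT_SERVER):
    # POST the chart config as JSON and return the rendered SVG text.
    data = json.dumps(build_payload(chart_options)).encode("utf-8")
    req = request.Request(server, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

On the client you then hand the returned SVG string to whatever SVG widget your framework provides.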
Is it possible to port my existing Viz3d-based visualisation to iOS? It is using VTK under the hood, so in theory it should be doable, since VTK can run on iOS.
If yes, is there a working example of this, or can you provide one?
I'm going to do the intrinsic calibration for my Kinect camera, and I've followed this tutorial: http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration
However, when I run rostopic list, I only see two topics:
/rosout
/rosout_agg
but not the topics I need:
/camera/camera_info
/camera/image_raw
I've installed camera_calibration package, and I'm using ROS Indigo.
In my case I use a Kinect with the freenect driver, so the problem was that I needed to run roslaunch detection freenect.launch first; then I could see the topics I wanted. The image topic is /camera/rgb/image_color, and I pass camera:=/camera/rgb to the calibrator.
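The symptom is easy to check programmatically: if nothing under /camera shows up in the output of rostopic list, the camera driver launch has not been run yet. A small sketch in Python (the topic names are the ones from this thread; adjust them to whatever your driver actually publishes):

```python
def missing_camera_topics(published,
                          required=("/camera/rgb/image_color",
                                    "/camera/rgb/camera_info")):
    """Return the calibration topics that are not currently published."""
    return [t for t in required if t not in published]


# Only the default ROS topics are up -> the driver launch has not run yet.
before = ["/rosout", "/rosout_agg"]
# After launching the freenect driver, the camera topics appear.
after = before + ["/camera/rgb/image_color", "/camera/rgb/camera_info"]
```

You would feed it the real list, e.g. the lines printed by rostopic list.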
I want to train OpenCV on a server and send the XML it generates to an iOS device, where an app will recognize faces using the XML trained on the server. I will use OpenCV in both apps, but the server runs Windows (training) and the device runs iOS (recognition).
So my main question is very simple:
Can the XML generated by the Windows version of OpenCV be used by the iOS version without any trouble? Has somebody done something similar who can give me some tips?
On Windows I will use .NET.
I think there won't be any trouble, because they are the same library (OpenCV), so I suppose they use the same internal algorithms, but I want to be sure before starting the project.
Best Regards and thanks for your time
There is no problem, but you must train with images taken from your devices. It is normal to have multiple XML sets depending on your different cameras. Normally you ship these with the binary rather than as a download, but still...