I am trying to do localization using only a lidar. I use the map generated by cartographer_ros and do localization with mrpt_localization (http://wiki.ros.org/mrpt_localization).
When I run 'roslaunch mrpt_localization demo.launch', it plays back an existing map from a bag file, similar to 'rosbag play'.
This is the launch file:
However, after the mapping finishes, the lidar location (tf) disappears. I want to locate the lidar in real time on a generated/existing map. Even after I launch rplidar_s1.launch it does not work. Any help is appreciated!
The 'roslaunch mrpt_localization demo.launch' launch file already includes a map for demo purposes.
Try making a copy of that launch file and removing the part that publishes the map (the demo_rosbag part). Then make sure that all the TF frame and topic names in the mrpt_localization section of the launch file match your setup. Without more details on your files, TFs, etc., it is hard to give more help, I guess!
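For illustration, a stripped-down copy could look roughly like this. The package and map names are placeholders, and serving the map with map_server is a swapped-in option (not what the demo itself does) that assumes you exported the cartographer map as a standard YAML/PGM pair:

<launch>
  <!-- Replaces the demo_rosbag part: serve your cartographer-built map -->
  <node pkg="map_server" type="map_server" name="map_server"
        args="$(find your_package)/maps/your_map.yaml" />

  <!-- Copy the mrpt_localization node section from demo.launch here unchanged,
       then adjust its TF frame and topic names (scan topic, base/odom/map
       frames) to match what rplidar_s1.launch and your robot publish. -->
</launch>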
I am working on a ROS project and struggling with this problem.
Currently, I am using aruco_ros.
https://github.com/pal-robotics/aruco_ros
I have a bag file that contains raw images, and I want to get the topics that aruco_ros publishes based on that file. So I modified the remappings in single.launch.
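The change was a pair of topic remaps along these lines (a representative sketch; the exact lines in my file may have differed slightly):

<!-- inside single.launch: point the aruco node at the topics recorded in the bag -->
<remap from="/camera_info" to="/camera/color/camera_info" />
<remap from="/image"       to="/camera/color/image_raw" />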
Then I ran:
$ roslaunch aruco_ros single.launch
However, I receive nothing on any of the published topics. From rviz I saw that the image from aruco_ros is published at image/debug, so I changed the path to "/camera/color/image_raw" after running:
$ rosbag play -l my_bag.bag
In rviz, the image is displayed correctly, but the published topics are still empty.
I have tried several things but have no idea how to make this work. Could you help me out?
I'm working hard to get up to speed with OpenMapTiles. The quickstart.sh script usually runs to completion so I've preferred it as a source of truth over the sometimes inconsistent documentation. Time to evolve.
What is the most efficient way to build an MBTiles file that contains, say, planet-level data for zooms 0-6 and bounded data for zooms 7-13, ideally for multiple bounded areas (e.g., a handful of metro areas)? This seems like a common use case during development. Can it be done with the existing Docker tools?
Did you try downloading an OSM file from http://download.geofabrik.de/index.html and placing it in the /data folder, as described in QUICKSTART.md (https://github.com/openmaptiles/openmaptiles/blob/master/QUICKSTART.md)?
Placing the osm.pbf file in your /data folder and adjusting the .env and openmaptiles.yaml files to your preferred zoom range should get you to the next step.
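For example, the zoom range for the quickstart build lives in .env (variable names as in the version I have checked out; yours may differ):

# .env: zoom range used when generating tiles
QUICKSTART_MIN_ZOOM=0
QUICKSTART_MAX_ZOOM=7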
I'm not sure what you mean by the bounds, though.
Following the developer support page, I downloaded the offline map using the demo app that ships with the SDK.
Then I moved the package to the right path inside SKMaps.bundle, the PreinstalledMaps/v1 folder.
Now I'm lost: how do I tell the system to use the offline map instead of another one, or instead of downloading the map for the current position?
thanks
UPDATE
From what I understand, once we set:
[SKMapsService sharedInstance].connectivityMode = SKConnectivityModeOffline;
the application will automatically pick up the maps that are present offline; the others will not show details. As a final test I downloaded one city and it worked; then I added a second one. In this scenario I can still see the first city, but once I move to the second city I don't see any details. Am I missing something?
UPDATE/2
Once I moved to a real device, I can see the details of both cities.
I've got my camera saving images fine to the app path, but my client wants to be able to access the photos from the Photos directory (the path that images appear in when you use the native camera app on the iPad). I get a "File I/O Error" if I try to save to the "///var/mobile/Media/Photos/" directory, which seems to be the location. Is this possible, or is there some security restriction involved?
I've never used it, but you want the CameraRoll class. It should do what you want, and it seems straightforward to use based on its simple, short API.
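A minimal sketch (untested, based on the API docs; assumes you already have the image as a BitmapData called myBitmapData):

import flash.media.CameraRoll;
import flash.display.BitmapData;
import flash.events.Event;
import flash.events.ErrorEvent;

if (CameraRoll.supportsAddBitmapData) {
    var roll:CameraRoll = new CameraRoll();
    roll.addEventListener(Event.COMPLETE, onSaved);        // photo written to the device library
    roll.addEventListener(ErrorEvent.ERROR, onSaveError);  // e.g. permission or I/O failure
    roll.addBitmapData(myBitmapData);                      // queues the save; completion is async
}

function onSaved(e:Event):void { trace("Saved to camera roll."); }
function onSaveError(e:ErrorEvent):void { trace("Save failed: " + e.text); }

That saves into the device's photo library without touching /var/mobile/Media/Photos/ directly, which the iOS sandbox won't let you write to.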
Here's an Adobe cookbook example that uses the CameraUI class as well.
Does anyone have some sample code demonstrating how to make a "file browser" view? I'd like to be able to navigate through directories, drill down into sub-directories, and see the files located within the various folders. I want the user to be able to create new directories/files and even select an existing file. Is there sample code already available to do this?
I don't know about sample code, but this wouldn't be too complicated to achieve using NSFileManager and a UITableView.
You can obtain arrays of directory contents using the subpathsOfDirectoryAtPath:error: and associated methods of a file manager. These arrays in turn can populate a UITableView. It would be fairly easy to put together a navigation controller that could display a series of table views showing a file hierarchy.
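A minimal sketch of that approach (hypothetical class name; error handling trimmed):

#import <UIKit/UIKit.h>

// One instance of this controller per directory level.
@interface FileListViewController : UITableViewController
@property (nonatomic, copy) NSString *path;     // directory shown by this level
@property (nonatomic, strong) NSArray *entries; // names of its children
@end

@implementation FileListViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.title = [self.path lastPathComponent];
    // Shallow listing of this directory only; subpathsOfDirectoryAtPath:error:
    // would instead flatten the whole tree into one array.
    self.entries = [[NSFileManager defaultManager]
                    contentsOfDirectoryAtPath:self.path error:NULL];
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return self.entries.count;
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"cell"];
    if (!cell) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                      reuseIdentifier:@"cell"];
    }
    cell.textLabel.text = self.entries[indexPath.row];
    return cell;
}

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
    NSString *child = [self.path stringByAppendingPathComponent:self.entries[indexPath.row]];
    BOOL isDir = NO;
    [[NSFileManager defaultManager] fileExistsAtPath:child isDirectory:&isDir];
    if (isDir) {
        // Drill down by pushing another instance for the subdirectory.
        FileListViewController *next = [[FileListViewController alloc] initWithStyle:UITableViewStylePlain];
        next.path = child;
        [self.navigationController pushViewController:next animated:YES];
    }
}

@end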
Bear in mind, however, that you'll only be able to access the directories inside your application sandbox, unless you're running on a jailbroken device.
The iOS programming guide says that
You should never present users with the list of files in this directory and ask them to decide what to do with those files. Instead, sort through the files programmatically and add files without prompting.
This assumes you are trying to implement a file-browsing feature for your Documents directory.
I'm the author of FileExplorer, a file browser for iOS that fulfills most of your requirements.
Here are some of the features of my control:
Possibility to choose files and/or directories if needed
Possibility to remove files and/or directories if needed
Built-in search functionality
Built-in viewing of audio, video, image, and PDF files
Possibility to add support for any file type.
You can find my control here.