I'm doing some research on navigation algorithms in ROS, and so far I have been using amcl for localization. Now I want to try different localization algorithms like hector_slam, and I'm a bit confused. When I run hector_slam, does it publish the map-to-odom transform like amcl does, or does it publish its own odom, so that I still need amcl for the map-to-odom transform?
Thanks
hector_mapping DOES publish odometry. It can be enabled by setting the "pub_odometry" parameter to "true". It also publishes the "map" frame to "odom" frame transform.
You can use hector_mapping instead of amcl, but only while simultaneously mapping the environment, not on a pre-made map as you would with amcl.
If you wish to exploit the fact that hector_mapping gives good odometry, you have the option of publishing odometry from the node and then using amcl with it, as explained in this Q&A.
It is a straightforward process; a minimal launch-file sketch follows below.
Remap the map frame and topic name in the hector launch file to something else.
Set pub_odometry to true.
Set pub_map_odom_transform to false (amcl will publish this transform instead).
If your robot base also publishes odom, remap it to something else so it doesn't interfere with hector's.
Give the odom topic to amcl.
I've tested it on my robot and it works great, a lot better than odometry from wheel encoders, which gets inaccurate when surface friction changes.
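Something along these lines in the hector_mapping launch file should do it. This is only a sketch: the remapped name hector_map is a placeholder, and you should check the hector_mapping wiki for the exact name of its odometry topic in your version.
<node pkg="hector_mapping" type="hector_mapping" name="hector_mapping" output="screen">
  <!-- publish hector's scan-matcher odometry so amcl can consume it -->
  <param name="pub_odometry" value="true"/>
  <!-- let amcl publish the map->odom transform instead of hector -->
  <param name="pub_map_odom_transform" value="false"/>
  <!-- keep hector's map frame and topic out of amcl's way -->
  <param name="map_frame" value="hector_map"/>
  <remap from="map" to="hector_map"/>
</node>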
hector_slam itself is only launch files; the actual node is hector_mapping. On its wiki page, the following topics are listed as published:
map_metadata (nav_msgs/MapMetaData)
map (nav_msgs/OccupancyGrid)
slam_out_pose (geometry_msgs/PoseStamped)
poseupdate (geometry_msgs/PoseWithCovarianceStamped)
amcl topics are:
amcl_pose (geometry_msgs/PoseWithCovarianceStamped)
particlecloud (geometry_msgs/PoseArray)
tf (tf/tfMessage)
Not sure if I understood your question, but you need an additional tf transform for hector if you want to publish the map relative to a different frame.
I want to create a web app where a user enters certain data via a form and then receives a custom rendered image. The image comes from a smart object in a PSD. It's kind of like a mock-up, which definitely needs some Photoshop filters to be properly rendered.
This should all happen in real time, and from my understanding that should be doable, since rendering a single image doesn't need much computing power.
I've done some research and haven't really found a solution that matches my problem. Is it necessary to run Photoshop on a server, remotely run a Photoshop script, and then upload the generated image somewhere else?
I've used Dataclay's Templater plugin for After Effects in the past, which offers similar functionality, but for video.
Looking forward to hearing your ideas.
Thanks
You can use the Dataclay plugin to handle still image exports out of After Effects. Make a single-frame duration composition in After Effects and rig the layers with the Templater plugin. Then use the PNG Sequence output module to render out a single frame.
From Dataclay's forums:
Exporting
A few extra steps are required to correctly render a project file as a PNG sequence using Templater. By default, a file rendered as a PNG sequence will have the frame number appended to the end of the file name, i.e.:
filename.png00000, filename.png00001, filename.png00002, etc.
In order to designate where in the filename the frame number should be added, we’ll need to use the output column. First, add a column named output to your data source. Next, add a filename with a set of brackets with five # signs to designate where the frame numbering should be added. For example:
filename[#####] would result in filename00001.png
or
[#####]filename would result in 00001filename.png
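For instance, a row of a hypothetical CSV data source (the headline column is just for illustration; output is the special column described above) might look like:
headline,output
Spring Sale,mockup-[#####]
which would render the frame as mockup-00001.png.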
What is the Direct2D analog of OpenGL's SwapBuffers, or of Direct3D's
d3ddev->Present(NULL, NULL, NULL, NULL);
I am using Direct2D in a VCL environment such as Delphi and C++Builder. Thanks
There are a couple of ways you can do the equivalent in Direct2D. The simplest way is to create an ID2D1HwndRenderTarget. See http://msdn.microsoft.com/en-us/library/windows/desktop/dd371275(v=vs.85).aspx for details. You'll be interested in the D2D1_HWND_RENDER_TARGET_PROPERTIES parameter. This has a D2D1_PRESENT_OPTIONS field, which can be set to different values depending on the behavior you want. See http://msdn.microsoft.com/en-us/library/windows/desktop/dd368144(v=vs.85).aspx for details. With this in place, the rough equivalent of SwapBuffers is ID2D1RenderTarget::EndDraw.
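As a minimal sketch of a typical frame, assuming pRenderTarget is an ID2D1HwndRenderTarget created earlier with ID2D1Factory::CreateHwndRenderTarget:
pRenderTarget->BeginDraw();
pRenderTarget->Clear(D2D1::ColorF(D2D1::ColorF::White));
// ... issue 2D drawing commands here ...
HRESULT hr = pRenderTarget->EndDraw(); // presents the frame, roughly like SwapBuffers
if (hr == D2DERR_RECREATE_TARGET)
{
    // the render target's device was lost; release and recreate it
}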
The other option is using Direct3D interop. In this case you create a DXGI surface render target. (I'd post a link to the docs, but I don't have enough StackOverflow reputation to post more than two hyperlinks. Google "ID2D1Factory::CreateDxgiSurfaceRenderTarget" for the docs). This allows you to use Direct2D to issue 2D rendering commands to the surface, but then present using Direct3D/DXGI. This is more complicated but gives you more flexibility.
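A rough sketch of that flow, assuming pSwapChain and pD2DFactory already exist (and that the underlying D3D device was created with BGRA support, which Direct2D interop requires):
IDXGISurface *pSurface = NULL;
pSwapChain->GetBuffer(0, __uuidof(IDXGISurface), (void**)&pSurface);
D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,
    D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED));
ID2D1RenderTarget *pRT = NULL;
pD2DFactory->CreateDxgiSurfaceRenderTarget(pSurface, &props, &pRT);
pRT->BeginDraw();
// ... 2D drawing commands ...
pRT->EndDraw();
pSwapChain->Present(1, 0); // here presentation is explicit, like SwapBuffers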
I have been trying for two and a half weeks so far to get a local copy of OpenStreetMap running on a server. I have downloaded the planet file and imported it into a PostGIS database called 'osm'. I have used OSM Mapnik tools to generate an XML stylesheet for Mapnik to use. I have used TileLite to prove that Mapnik can render OSM tiles from the database. The tiles even look the way that I want them to look.
My problem now is that I cannot get TileCache to work with Mapnik. I have a MapServer instance installed that I am using to serve Shapefiles. This works with TileCache. The default 'basic' layer in the TileCache configuration file works as well. Please help with my OSM layer:
[osm]
type=Mapnik
mapfile=/var/maps/bin/mapnik/osm.xml
spherical_mercator=true
bbox=-16697000,8610000,-16667000,8640000
maxResolution=156543.0339/4
levels=18
srs=EPSG:900913
I have read every last blog post, forum post, and tutorial I can find. Any help would be appreciated. I suspect I have either missed something or I am doing something stupid.
Nik,
I can understand the potential difficulties here and that you've tried a number of things. You did not say exactly what problems you ran into, however, so I'll guess that this is your problem:
You are using OpenLayers to test that the tiles are being produced correctly, but things don't line up when you connect to the tiles generated by TileCache.
That it? If not, please provide a bit more detail.
If that is the problem, then what you likely need to do is make sure to use a "TMS" layer type in OpenLayers and match it with your tilecache.cfg layer params. "TMS" is very similar to the OSM tile scheme except that the y value is flipped, as sketched below.
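At a given zoom level, the two row numberings are related by a simple flip (a quick JavaScript sketch):
var tms_y = Math.pow(2, zoom) - 1 - osm_y; // convert between OSM/XYZ and TMS rows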
Anyway, something like this should work:
tilecache.cfg
[osm]
type=Mapnik
mapfile=/full/path/to/osm.xml
spherical_mercator=true
OpenLayers Layer
var tms = new OpenLayers.Layer.TMS("TileCache TMS Layer","http://localhost:8000/",
{ serviceVersion: "1.0.0", layername: "osm", type: "png" });
map.addLayers([tms]);
I pulled this from an old example of mine from the first time I got this working: http://mapnik-utils.googlecode.com/svn/example_code/tilecache/openlayers_osm.html
I am trying to understand the blobtrack.cpp code provided as a sample with OpenCV. This code uses a class named CvBlobTrackerAuto. I tried to find documentation for this class, but what I found does not provide a detailed explanation.
I am particularly interested in
CvBlobTrackerAuto::Process(IplImage *pImg, IplImage *pMask = NULL). What does this function do, and what is the purpose of the mask used here?
Thank you in advance
I've been working with CvBlobTrackerAuto for the last few weeks. Here are some of the things I have figured out.
CvBlobTrackerAuto::Process is used to process the last captured image in order to update the tracking information (blob ids and positions). Actually, CvBlobTrackerAuto is an abstract class since it doesn't provide an implementation for CvBlobTrackerAuto::Process. The only concrete implementation there is (as far as I can tell) is CvBlobTrackerAuto1, which can be found in blobtrackingauto.cpp.
What CvBlobTrackerAuto1::Process does is to implement the following pipeline:
Foreground detection: this produces a binary mask corresponding to the foreground.
Blob tracking: updates the position of blobs. It may use mean shift, particle filters or a combination of these.
Post processing: (I'm not sure what this step does.)
Blob deletion: it is "experimental and simple" according to a comment in there. It deletes blobs which have been too small or near the image borders in the last frames.
Blob detection: detects new blobs. See enteringblobdetection.cpp.
Trajectory generation: (not sure what it does.)
Track analysis: (not sure what it does either, but I do remember reading the code and deciding that it had no influence on the blob tracking, so I disabled it.)
In this particular implementation of CvBlobTrackerAuto::Process, the pMask parameter is used for nothing at all. It has a default value of NULL and it is assigned to a variable once, only to be overwritten some lines later.
The OpenCV sample found in samples/c/blobtrack_sample.cpp is built around this CvBlobTrackerAuto1 class, providing different options for each module in the pipeline.
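For reference, here is a minimal sketch of how the class is typically driven, loosely modeled on that sample; the module choices below (FGD foreground detector, connected-component entering-blob detector, CCMSPF tracker) are just one combination among those the legacy cvaux API offers:
#include <cvaux.h> // legacy OpenCV blob-tracking API

CvBlobTrackerAutoParam1 param;
memset(&param, 0, sizeof(param));
param.FGTrainFrames = 10; // frames used to train the foreground detector
param.pFG = cvCreateFGDetectorBase(CV_BG_MODEL_FGD, NULL); // foreground detection
param.pBD = cvCreateBlobDetectorCC(); // detection of entering blobs
param.pBT = cvCreateBlobTrackerCCMSPF(); // blob tracking
CvBlobTrackerAuto* pTracker = cvCreateBlobTrackerAuto1(&param);

// for every captured frame:
pTracker->Process(pFrame); // pMask can be omitted; it is ignored anyway
for (int i = pTracker->GetBlobNum(); i > 0; i--)
{
    CvBlob* pBlob = pTracker->GetBlob(i - 1);
    // pBlob->x, pBlob->y, pBlob->w, pBlob->h and CV_BLOB_ID(pBlob) are now updated
}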
I hope it helps.
I was directed to this link when I posted the same question to the OpenCV mailing group. The document explains OpenCV's blob tracker and its modules.
Hope this helps anyone interested.
I'm running Ant with output fed to a log file:
ant -logfile file.txt target-name
I'd also like to print some simple progress information to the console though. The answer seems to be a BuildEvent listener that writes to the console every time a new target is hit, but the documentation explicitly states:
A listener must not access System.out and System.err directly since output on these streams is redirected by Ant's core to the build event system.
Did I miss something? Is there a way to do this?
Ant replaces the System.out & System.err streams to remap messages printed there through its own logging system.
That said, you can still get access to the ACTUAL OS streams by using java.io.FileDescriptor#out
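For example, a minimal sketch (the message text is just for illustration):
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.PrintStream;

// wraps the real stdout file descriptor, bypassing Ant's redirected System.out
PrintStream console = new PrintStream(new FileOutputStream(FileDescriptor.out));
console.println("Entering target: compile");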
Actually, the answer is Log4jListener.
There is a sample log4j configuration for logging into both console and file shown in the above link. You can then use an <echo> task with an appropriate level parameter to selectively decide what gets printed to the console.
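For example (the property name here is purely illustrative):
<echo level="info" message="Compiling module ${module.name}..."/>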
Thanks for the answers! I'm slow, but this is still something that I'd like to get right.
I've managed to get something working more or less like I want using carej's suggested approach with the java.io.FileDescriptor#out stream and an Ant scriptdef like this:
<scriptdef name="progress-text" language="javascript">
    var output = new java.io.PrintStream(new java.io.FileOutputStream(java.io.FileDescriptor.err));
    output.println(self.text);
</scriptdef>
Now I'm just left wondering how wise this approach is. Is there inherent risk in using the underlying OS streams directly?
EDIT:
Two points which might be useful to anyone else with a similar question:
This article has a very good description of the Ant I/O system: http://codefeed.com/blog/?p=68
java.lang.System itself does something very similar when it sets up System.out and System.err in the first place.
All of this gave me a little more confidence in this approach.