I am trying to understand the blobtrack.cpp code provided as a sample with OpenCV. This code uses a class named CvBlobTrackerAuto. I tried to find documentation about this class, but what I found does not provide a detailed explanation.
I am particularly interested in the
CvBlobTrackerAuto::Process(IplImage *pImg, IplImage *pMask = NULL) function. What does it do, and what is the task of the mask used here?
Thank you in advance
I've been working with CvBlobTrackerAuto for the last few weeks. Here are some of the things I have figured out.
CvBlobTrackerAuto::Process is used to process the last captured image in order to update the tracking information (blob ids and positions). Actually, CvBlobTrackerAuto is an abstract class since it doesn't provide an implementation for CvBlobTrackerAuto::Process. The only concrete implementation there is (as far as I can tell) is CvBlobTrackerAuto1, which can be found in blobtrackingauto.cpp.
What CvBlobTrackerAuto1::Process does is to implement the following pipeline:
Foreground detection: this produces a binary mask corresponding to the foreground.
Blob tracking: updates the position of blobs. It may use mean shift, particle filters or a combination of these.
Post processing: (I'm not entirely sure of what this section does, but the available post-processing modules appear to smooth the tracked blob positions; one of them is a Kalman filter.)
Blob deletion: it is "experimental and simple" according to a comment in there. It deletes blobs which have been too small or near the image borders in the last frames.
Blob detection: detects new blobs. See enteringblobdetection.cpp.
Trajectory generation: (as far as I can tell, it only records blob trajectories, e.g. to a file, and has no effect on the tracking itself).
Track analysis: (not sure of what it does. But I do remember having read the code and deciding that it had no influence on blob tracking, so I disabled it.)
In this particular implementation of CvBlobTrackerAuto::Process, the pMask parameter is not used at all. It has a default value of NULL, and it is assigned to a variable once, only to be overwritten a few lines later.
The OpenCV sample found in samples/c/blobtrack_sample.cpp is built around this CvBlobTrackerAuto1 class, providing different options for each module in the pipeline.
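For reference, here is a minimal sketch of how the class is driven (legacy C API; the header path and the specific module constructors below are from my setup and may differ in your OpenCV version):

#include <opencv2/legacy/blobtrack.hpp> // older versions: cvaux.h
#include <cstdio>

// Sketch only: the modules chosen here (FGD foreground detector, simple
// entering-blob detector, CCMSPF tracker) are examples; blobtrack_sample.cpp
// lets you pick others via command-line options.
CvBlobTrackerAuto* CreateTracker()
{
    CvBlobTrackerAutoParam1 param = {0}; // unused modules stay NULL
    param.FGTrainFrames = 5;                                   // frames to train the FG detector
    param.pFG = cvCreateFGDetectorBase(CV_BG_MODEL_FGD, NULL); // foreground detection
    param.pBD = cvCreateBlobDetectorSimple();                  // new-blob detection
    param.pBT = cvCreateBlobTrackerCCMSPF();                   // blob tracking
    return cvCreateBlobTrackerAuto1(&param);
}

void OnNewFrame(CvBlobTrackerAuto* pTracker, IplImage* pImg)
{
    pTracker->Process(pImg, NULL); // pMask is ignored by CvBlobTrackerAuto1
    for (int i = pTracker->GetBlobNum(); i > 0; i--)
    {
        CvBlob* pBlob = pTracker->GetBlob(i - 1);
        printf("blob %d at (%.0f, %.0f)\n", CV_BLOB_ID(pBlob), pBlob->x, pBlob->y);
    }
}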
I hope it helps.
I was directed to this link when I posted the same question to the OpenCV mailing group. The doc explains the OpenCV blob tracker and its modules.
Hope this helps anyone interested.
I want to create a web app where a user enters certain data via a form and then receives a custom rendered image. The image comes from a smart object in a PSD. It's kind of like a mock-up, which definitely needs some Photoshop filters to be rendered properly.
This should all happen in real time, and from my understanding it should be doable, since rendering a single image doesn't need much computing power.
I've done some research and haven't really found a solution that matches my problem. Is it necessary to run Photoshop on a server, remotely run a Photoshop script, and then upload the generated image somewhere else?
I've used the After Effects plugin Templater by Dataclay in the past, which offers similar functionality, but for video.
Looking forward to hearing your ideas.
Thanks
You can use the Dataclay plugin to handle still image exports out of After Effects. Make a single-frame duration composition in After Effects and rig the layers with the Templater plugin. Then use the PNG Sequence output module to render out a single frame.
From Dataclay's forums:
Exporting
A few extra steps are required to correctly render a project file as a PNG sequence using Templater. By default, a file rendered as a PNG sequence will have the frame number appended to the end of the file name, i.e.:
filename.png00000, filename.png00001, filename.png00002, etc.
In order to designate where in the filename the frame number should be added, we’ll need to use the output column. First, add a column named output to your data source. Next, add a filename with a set of brackets with five # signs to designate where the frame numbering should be added. For example:
filename[#####] would result in filename00001.png
or
[#####]filename would result in 00001filename.png
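As a concrete (hypothetical) example, a CSV data source using the output column could look like this, where headline stands for whatever template fields you actually use:

headline,output
Hello world,render-[#####]

With a single-frame composition, that row would render to render-00001.png.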
I need a solution for correcting and running this code for Mask R-CNN, please.
As commented by Gtomika, please provide the error trace in text format. Also provide more details about the repo you got the model from and any other information that you think is relevant.
Based on my past experience, I'm pretty sure that you are using Matterport's Mask R-CNN and your issue is due to a class count mismatch: you are trying to load the weights of a model that was trained on a different class count. You should exclude some layers such as 'mrcnn_bbox_fc' and 'mrcnn_class_logits'.
The fix is to change model.load_weights(COCO_MODEL_PATH, by_name=True) to model.load_weights(COCO_MODEL_PATH, by_name=True, exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"]).
Refer to this issue for more information.
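In context, a minimal sketch of what that looks like with Matterport's implementation (NAME, the class count, and the paths are placeholders for your own setup):

import mrcnn.model as modellib
from mrcnn.config import Config

class MyConfig(Config):
    NAME = "my_dataset"   # placeholder
    NUM_CLASSES = 1 + 3   # background + your own number of classes
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

COCO_MODEL_PATH = "mask_rcnn_coco.h5"  # pretrained COCO weights
model = modellib.MaskRCNN(mode="training", config=MyConfig(), model_dir="./logs")
# Load the COCO weights by name, skipping the layers whose shapes depend
# on the number of classes:
model.load_weights(COCO_MODEL_PATH, by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])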
I'm trying to use the new 'evaluation' action after inference to generate some metrics for my output. However, the .csv files just show scores of 0 for average_distance and 1 for Jaccard and Dice for each of my data volumes. I can't seem to find any documentation for the evaluation action, so I'm not sure what I'm doing wrong. Also, the --dataset_to_infer=Validation option doesn't seem to work: both inference and evaluation are being applied to all data rather than just the validation set.
Thanks!
For the evaluation issue, we're working on the documentation. The dataset_to_infer option is only tested for the applications in NiftyNet/niftynet/application; applications from the model zoo have not been upgraded to support it yet (please file an issue with more details at https://github.com/NifTK/NiftyNet/issues if you believe it's a bug).
For the time being, pointing directly to the inference result in the config.ini worked for me.
e.g.
[inferred]
csv_file = model_dir/save_seg_dir/inferred.csv
I believe this file is currently not found, and evaluation then defaults to comparing labels to labels. See the issue on GitHub.
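In case it helps, this is how the action is invoked (a sketch; it assumes the standard net_segment entry point and that your config.ini already contains the [inferred] section above):

net_segment evaluation -c path/to/config.ini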
I have been looking through the threads at the Qualcomm Forums, but no luck, since I don't know exactly how to search for what I want.
I'm working with the ImageTargets sample for iOS, and I want to change the teapot to another image (a text, rather) that I have.
I already have the render, and I got the .h file using an OpenGL library, but I can't figure out what I need to change to make this work. Since this is the very basic step and I haven't been able to make it work, I really haven't ventured to try anything else.
Could anyone please help me out?
I would paste code here, but it's a whole project, so I don't know exactly what to include; if needed, please let me know.
If the question is still relevant, here's what you have to do:
get header file for 3D object
get texture image for this object
in EAGLView.mm make these changes:
#import "yourobject3d.h"
add your texture to the textureFilenames array (this should be at the beginning of EAGLView.mm)
eventually take care of kObjectScale (by default it was about 3.0f; for one object I had to increase it up to 120.0f)
in the setup3dObjects method, assign the proper arrays of vertices/normals/texture coords (check the "yourobject3d.h" file for the proper arrays and naming) to the Object3D *object (see the sketch after this list)
make this change in renderFrameQCAR:
//glDrawElements(GL_TRIANGLES, obj3D.numIndices, GL_UNSIGNED_SHORT, (const GLvoid*)obj3D.indices);
glDrawArrays(GL_TRIANGLES, 0, obj3D.numVertices);
I believe that is all... if something is unclear, take a look at Vuforia's forum, e.g. here: https://developer.vuforia.com/node/2047669
NOTE: the default teapot.h does (!) have indices, which are not present in banana.h (from the comment below), so take care of that too
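For the setup3dObjects step, the assignment looks roughly like this (a sketch from memory, so treat the Object3D property names as approximate; the banana* names stand for whatever your generated header actually defines):

// In EAGLView.mm, inside setup3dObjects (hypothetical banana.h names):
Object3D *obj3D = [[Object3D alloc] init];
obj3D.numVertices = bananaNumVerts;   // vertex count from your header
obj3D.vertices = bananaVerts;         // vertex array
obj3D.normals = bananaNormals;        // normal array
obj3D.texCoords = bananaTexCoords;    // texture coordinate array
obj3D.numIndices = 0;                 // banana.h has no indices (see NOTE above)
obj3D.indices = NULL;
obj3D.texture = [textures objectAtIndex:0];
[objects3D addObject:obj3D];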
Have a look at the EAGLView.mm file. There you'll have to load the textures (images) and 3D objects (you'll need to import your .h instead of teapot.h and modify setup3dObjects accordingly).
They are finally rendered by calling the renderFrameQCAR function.
Actually, the teapot is not an image. It's a 3D model stored in .h format, which includes vertices, normals, and texture coordinates. You should have a good knowledge of OpenGL ES to understand that code in the sample app.
An easier way to change the 3D model to whatever you want is to use a rendering engine, which facilitates the drawing and rendering stuff so that you don't need to bother with OpenGL APIs. I've done it with jPCT-AE for the Android platform, but for iOS there is a counterpart called openFrameworks. It has some addons to load 3DS or MD2 files, and since it's written in C++ you can easily integrate it with QCAR.
This is a short video of my result with jPCT and QCAR:
Qualcomm Vuforia + jPCT-AE test video
I have been trying for two and a half weeks so far to get a local copy of OpenStreetMap running on a server. I have downloaded the planet file and imported it into a PostGIS database called 'osm'. I have used OSM Mapnik tools to generate an XML stylesheet for Mapnik to use. I have used TileLite to prove that Mapnik can render OSM tiles from the database. The tiles even look the way that I want them to look.
My problem now is that I cannot get TileCache to work with Mapnik. I have a MapServer instance installed that I am using to serve Shapefiles. This works with TileCache. The default 'basic' layer in the TileCache configuration file works as well. Please help with my OSM layer:
[osm]
type=Mapnik
mapfile=/var/maps/bin/mapnik/osm.xml
spherical_mercator=true
bbox=-16697000,8610000,-16667000,8640000
maxResolution=156543.0339/4
levels=18
srs=EPSG:900913
I have read every last blog post, forum post, and tutorial I can find. Any help would be appreciated. I suspect I have either missed something or I am doing something stupid.
Nik,
I can understand the potential difficulties here, and I see that you've tried a number of things. You did not say what exact problems you ran into, however, so I'll guess that this is your problem:
You are using OpenLayers to test that the tiles are being produced correctly, but things don't line up when you connect to the tiles generated by TileCache.
Is that it? If not, please provide a bit more detail.
If that is the problem, then likely what you need to do is make sure to use a "TMS" layer type in OpenLayers and to match that with your tilecache.cfg layer params. "TMS" is very similar to the OSM tile scheme, except that the y value is flipped.
Anyway, something like this should work:
tilecache.cfg
[osm]
type=Mapnik
mapfile=/full/path/to/osm.xml
spherical_mercator=true
OpenLayers Layer
var tms = new OpenLayers.Layer.TMS("TileCache TMS Layer","http://localhost:8000/",
{ serviceVersion: "1.0.0", layername: "osm", type: "png" });
map.addLayers([tms]);
I pulled this from an old example of mine from the first time I got this working: http://mapnik-utils.googlecode.com/svn/example_code/tilecache/openlayers_osm.html