openFrameworks Colour Tracking - OpenCV

I'm working with OpenCV within openFrameworks in order to track a certain colour. My question may be difficult if you are not familiar with the colour tracking code, but I'll try to explain the best I can.
What the code does now is follow a certain colour with a red circle. I am working to create a line that does basically the same thing, but with each point stored, so that it becomes a squiggly kind of drawing application. Right now it's a straight line that you can pull.
I'll post more code if necessary. Any advice would really help. Thanks!
testApp.cpp
void testApp::draw(){
    ofSetColor(255,255,255);

    // draw coloured cv image
    rgb.draw(0,0);
    contours.draw(0,480);

    // draw line that follows the blobs
    for (int i = 0; i < contours.nBlobs; i++) {
        ofSetColor(0);
        ofLine( contours.blobs[i].pos.x, contours.blobs[i].pos.y, contours.blobs[i].lastpos.x, contours.blobs[i].lastpos.y );
    }
}
ofxCvContourFinder.cpp
for( int i = 0; i < MIN(nConsidered, (int)cvSeqBlobs.size()); i++ ) {
    blobs.push_back( ofxCvBlob() );

    float area = cvContourArea( cvSeqBlobs[i], CV_WHOLE_SEQ, bFindHoles ); // oriented=true for holes
    CvRect rect = cvBoundingRect( cvSeqBlobs[i], 0 );
    cvMoments( cvSeqBlobs[i], myMoments );

    blobs[i].area                = bFindHoles ? fabs(area) : area; // only return positive areas
    blobs[i].length              = cvArcLength(cvSeqBlobs[i]);
    blobs[i].boundingRect.x      = rect.x;
    blobs[i].boundingRect.y      = rect.y;
    blobs[i].boundingRect.width  = rect.width;
    blobs[i].boundingRect.height = rect.height;
    blobs[i].centroid.x          = (myMoments->m10 / myMoments->m00);
    blobs[i].centroid.y          = (myMoments->m01 / myMoments->m00);

    blobs[i].pos.x     = 0;
    blobs[i].pos.y     = 0;
    blobs[i].lastpos.x = blobs[i].pos.x;
    blobs[i].lastpos.y = blobs[i].pos.y;
    blobs[i].pos.x     = (myMoments->m10 / myMoments->m00);
    blobs[i].pos.y     = (myMoments->m01 / myMoments->m00);
}

A complete example of what you are trying to achieve can be found here:
https://github.com/kylemcdonald/ofxCv/tree/master/example-contours-color
It uses ofxCv, an add-on that lets you integrate openFrameworks with OpenCV in a clean way. Moreover, you can use native OpenCV calls without any kind of wrapper. The internet is full of similar examples that use pure OpenCV code, and with this add-on you can use those snippets as they are, without any problem. It also gives you an easy way to go from OpenCV data structures to openFrameworks ones and vice versa. It is great!
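For the squiggly-line part specifically, here is a minimal sketch (my own, not taken from the linked example) that stores each blob centroid in an ofPolyline so the trail accumulates into a drawing. It assumes the contours member from your question; trail is a new member name I made up.

// in testApp.h - a new member to hold the accumulated points:
ofPolyline trail;

// in testApp::update(), after the contour finder has run:
if (contours.nBlobs > 0) {
    // append the tracked colour's current centroid to the stored path
    trail.addVertex(contours.blobs[0].centroid.x, contours.blobs[0].centroid.y);
}

// in testApp::draw(), instead of the single ofLine:
ofSetColor(0);
trail.draw(); // draws the whole accumulated squiggle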

Related

Matching problems when using OpenCV's matchShapes function

I'm trying to find an object in a larger picture with the findContours/matchShapes functions (the object can vary, so it's not possible to go by colour or something similar; feature detectors like SIFT also don't work because the object could be symmetric).
I have written the following code:
Mat scene = imread...
Mat Template = imread...
Mat imagegray1, imagegray2, imageresult1, imageresult2;
int thresh = 80;
double ans = 0, result = 0;

// Preprocess pictures
cvtColor(scene, imagegray1, CV_BGR2GRAY);
cvtColor(Template, imagegray2, CV_BGR2GRAY);
GaussianBlur(imagegray1, imagegray1, Size(5,5), 2);
GaussianBlur(imagegray2, imagegray2, Size(5,5), 2);
Canny(imagegray1, imageresult1, thresh, thresh*2);
Canny(imagegray2, imageresult2, thresh, thresh*2);

vector<vector<Point> > contours1;
vector<vector<Point> > contours2;
vector<Vec4i> hierarchy1, hierarchy2;

// Template
findContours(imageresult2, contours2, hierarchy2, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
// Scene
findContours(imageresult1, contours1, hierarchy1, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
imshow("template", Template);

double helper = INT_MAX;
int idx_i = 0, idx_j = 0;

// Match all contours with each other
for(int i = 0; i < contours1.size(); i++)
{
    for(int j = 0; j < contours2.size(); j++)
    {
        ans = matchShapes(contours1[i], contours2[j], CV_CONTOURS_MATCH_I1, 0);
        // find the best matching contour
        if(ans < helper)
        {
            idx_i = i;
            helper = ans;
        }
    }
}

// draw the best contour
drawContours(scene, contours1, idx_i, Scalar(255,255,0), 3, 8, hierarchy1, 0, Point());
When I use a scene in which only the template is located, I get a good matching result.
But when there are more objects in the picture, I have trouble detecting the object.
I hope someone can tell me what the problem with my code is. Thanks
You have a huge number of contours in the second image (almost one per letter).
Since matchShapes compares scale-invariant Hu moments (http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gab001db45c1f1af6cbdbe64df04c4e944), even a very small contour may fit the shape you are looking for.
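For reference, the CV_CONTOURS_MATCH_I1 metric from the linked documentation is

$$I_1(A,B) = \sum_{i=1}^{7} \left| \frac{1}{m_i^A} - \frac{1}{m_i^B} \right|, \qquad m_i = \operatorname{sign}(h_i) \cdot \log h_i$$

where $h_i$ are the seven Hu moments. Nothing in this metric penalizes size, so a tiny blob whose normalized shape happens to match will score well.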
Furthermore, the original shape is not distinguished properly, as can be seen when excluding all contours with an area smaller than 50:
if(contourArea(contours1[i]) > 50)
    drawContours(scene, contours1, i, Scalar(255, 255, 0), 1);
In other words, there is no problem with your code; the contour simply cannot be detected very well. I would suggest having a look at approxPolyDP and convexHull and trying to close the contour that way, or improving the use of Canny in some way.
Then you could use a priori knowledge to restrict the size (and maybe rotation?) of the contour you are looking for. A sketch combining these ideas follows.
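Here is a hedged sketch of those suggestions (my own, with hypothetical names best/bestIdx), assuming the contours1/contours2 vectors from the question:

double best = DBL_MAX;
int bestIdx = -1;
for (size_t i = 0; i < contours1.size(); i++)
{
    // a priori size restriction: skip tiny letter-sized contours
    if (contourArea(contours1[i]) < 50)
        continue;

    // close gaps that the Canny edges may have left in the contour
    vector<Point> hull;
    convexHull(contours1[i], hull);

    for (size_t j = 0; j < contours2.size(); j++)
    {
        double d = matchShapes(hull, contours2[j], CV_CONTOURS_MATCH_I1, 0);
        if (d < best)
        {
            best = d;
            bestIdx = (int)i;
        }
    }
}
if (bestIdx >= 0)
    drawContours(scene, contours1, bestIdx, Scalar(255, 255, 0), 3);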

OpenCV MultiBandBlender doesn't work

I am trying to blend my images into a panorama with MultiBandBlender, but it returns a black pano. FeatherBlender works fine. What am I doing wrong?
cv::Mat blendImages(const std::vector<cv::Point> &corners, std::vector<cv::Mat> images)
{
    std::vector<cv::Size> sizes;
    for(int i = 0; i < images.size(); i++)
        sizes.push_back(images[i].size());

    float blend_strength = 5;
    cv::Size dst_sz = cv::detail::resultRoi(corners, sizes).size();
    float blend_width = sqrt(static_cast<float>(dst_sz.area())) * blend_strength / 100.f;

    cv::Ptr<cv::detail::Blender> blender = cv::detail::Blender::createDefault(cv::detail::Blender::MULTI_BAND);
    //cv::detail::FeatherBlender* fb = dynamic_cast<cv::detail::FeatherBlender*>(blender.get());
    //fb->setSharpness(1.f/blend_width);
    cv::detail::MultiBandBlender* mb = dynamic_cast<cv::detail::MultiBandBlender*>(blender.get());
    mb->setNumBands(static_cast<int>(ceil(log(blend_width)/log(2.)) - 1.));

    blender->prepare(corners, sizes);
    for(int i = 0; i < images.size(); i++)
    {
        cv::Mat image_s;
        images[i].convertTo(image_s, CV_16SC3);
        blender->feed(image_s, cv::Mat::ones(image_s.size(), CV_8UC1), corners[i]);
    }

    cv::Mat pano;
    cv::Mat panoMask = cv::Mat::ones(dst_sz, CV_8UC1);
    blender->blend(pano, panoMask);
    return pano;
}
Three possible causes:
1. Try keeping all image_s and masks in a vector, and feed with the following structure:
for (int i = 0; i < images_s.size(); ++i)
    blender->feed(images_s[i], masks[i], corners[i]);
2. Don't initialize panoMask to ones before blending.
3. Make sure the corners are well defined.
Actually, I can't compile your code with OpenCV 2.4 because of the blender.get function; there is no such function in my build of OpenCV 2.4.
Anyway, if you wish to make a panorama, you'd better not use the resultRoi function. You need boundingRect. I suppose it is really hard to get all horizontally aligned images for one panorama.
Also, look at my answer here; it demonstrates how to use MultiBandBlender.
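A rough sketch of that boundingRect idea (my own interpretation, not the answerer's code), assuming the corners and sizes vectors from the question:

cv::Rect dst(corners[0], sizes[0]);
for (size_t i = 1; i < corners.size(); i++)
    dst |= cv::Rect(corners[i], sizes[i]); // union of all warped image rects
// dst.size() is the panorama canvas; dst.tl() is its top-left offset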
Hey, I was getting the same black pano while using the MultiBand blender in OpenCV. The issue was actually resolved by changing
cv::Mat::ones(image_s.size(), CV_8UC1)
to
cv::Mat::ones(image_s.size(), CV_8UC1)*255
This is because Mat::ones initializes all the pixels to a numerical value of 1, so we need to multiply it by 255 in order to get a pure black-and-white mask.
And thanks, your issue solved my problem :)
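Applied to the question's feed loop, the fix looks like this (a sketch of the corrected loop, not a new API):

for(int i = 0; i < images.size(); i++)
{
    cv::Mat image_s;
    images[i].convertTo(image_s, CV_16SC3); // the blender expects CV_16SC3 input
    // full-intensity (255) mask: every pixel of the image participates in the blend
    cv::Mat mask = cv::Mat::ones(image_s.size(), CV_8UC1) * 255;
    blender->feed(image_s, mask, corners[i]);
}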

opencv sliding window

Is there any built-in functionality for sliding a window (of custom size) over an image in OpenCV version 2.x?
I tried to write the algorithm myself but I found it very painful and probably error-prone.
I need to slide over an image and create a histogram for the input of an SVM.
There is one for the HOG descriptor, which calculates HOG features, but I have my own feature set, so I just need an algorithm that lets me slide over an image.
You can define a Region of Interest (ROI) on a cv::Mat object, which gives you a new Mat object referring to the sub-window. This does not copy the underlying data; it merely creates a new header with the appropriate metadata.
cv::Mat::operator()
See also this other question:
OpenCV C++, getting Region Of Interest (ROI) using cv::Mat
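A minimal sketch of using that operator ("image.png" is a placeholder filename):

cv::Mat img = cv::imread("image.png");
cv::Rect roi(10, 10, 60, 60); // x, y, width, height
cv::Mat window = img(roi);    // header only: window shares pixels with img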
Basic code can look like the following; I hope the code is described well enough.
This is a single-scale sliding window of 60x60 with a step of 30.
The result of this simple example is the ROI.
You can also visit this basic tutorial: Tutorial Here.
// Parameters of your sliding window
int windows_n_rows = 60;
int windows_n_cols = 60;
// Step of each window
int StepSlide = 30;

for (int row = 0; row <= LoadedImage.rows - windows_n_rows; row += StepSlide)
{
    for (int col = 0; col <= LoadedImage.cols - windows_n_cols; col += StepSlide)
    {
        // Rect takes (x, y, width, height), so columns come first
        Rect windows(col, row, windows_n_cols, windows_n_rows);
        Mat Roi = LoadedImage(windows);
    }
}
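To get from each ROI to the SVM input the question mentions, a hedged sketch (hypothetical names grey/hist; calcHist parameters as in the OpenCV 2.x API) might look like:

cv::Mat grey;
cv::cvtColor(LoadedImage, grey, CV_BGR2GRAY);

int channels[] = {0};
int histSize[] = {32};        // 32 bins per window
float range[] = {0, 256};
const float* ranges[] = {range};

for (int row = 0; row <= grey.rows - windows_n_rows; row += StepSlide)
{
    for (int col = 0; col <= grey.cols - windows_n_cols; col += StepSlide)
    {
        cv::Mat roi = grey(cv::Rect(col, row, windows_n_cols, windows_n_rows));
        cv::Mat hist;
        cv::calcHist(&roi, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
        // hist.reshape(1, 1) would be one row of the SVM training matrix
    }
}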

Vertices behind are being drawn in front - XNA

Well, I am creating a Minecraft-like terrain thing just for the heck of it. The problem I have is that if you look at a certain angle, some faces which are actually behind others are drawn in front of them, thus hiding the faces which should be visible. Like Minecraft, I have the terrain separated into regions, and when the vertex buffer is built, only the "outside" faces are included. For some reason it is also not drawing the very top or the leftmost blocks. In addition to all of these problems, faces seem to be overlapping, i.e. only half of a face can be seen.
Here is what it looks like (note I am only drawing the top faces because I have to fix the others up): http://s1100.photobucket.com/albums/g420/darestium/?action=view&current=minecraftliketerrain.png
I am also drawing all the regions in one go with the following method (I'm using Riemer's effect file until I can write my own :):
public void Draw(Player player, World world)
{
    effect.CurrentTechnique = effect.Techniques["TexturedNoShading"];
    effect.Parameters["xWorld"].SetValue(Matrix.Identity);
    effect.Parameters["xProjection"].SetValue(player.Camera.ProjectionMatrix);
    effect.Parameters["xView"].SetValue(player.Camera.ViewMatrix);
    effect.Parameters["xCamPos"].SetValue(player.Camera.Position);
    effect.Parameters["xTexture"].SetValue(world.TextureAlias.SheetTexture);
    effect.Parameters["xCamUp"].SetValue(player.Camera.UpDownRotation);

    for (int x = 0; x < world.regions.GetLength(0); x++)
    {
        for (int y = 0; y < world.regions.GetLength(1); y++)
        {
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();

                Region region = world.regions[x, y];
                if (player.Camera.BoundingFrustum.Contains(region.BoundingBox) != ContainmentType.Disjoint)
                {
                    device.SetVertexBuffer(region.SolidVertexBuffer);
                    //device.Indices = region.SolidIndices;

                    device.DrawPrimitives(PrimitiveType.TriangleList, 0, region.VertexCount);
                    //device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, region.VertexCount, 0, region.SolidIndices.IndexCount / 3);
                }
            }
        }
    }
}
Help would be much appreciated thanks :)
Looks like the Z-buffer is disabled. Try setting:
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
The issue you mention with the top/left blocks not being drawn is probably a problem in your vertex buffer generation code (if so, try asking another question specifically about that issue).

Flicker removal using OpenCV?

I am a newbie to OpenCV. I have installed the OpenCV library on an Ubuntu system, compiled it, and am looking into some image/video processing apps in OpenCV to understand more.
I am interested to know whether the OpenCV library has any algorithm/class for removing flicker in captured videos. If yes, which documents or code should I look deeper into?
If OpenCV does not have it, are there any standard implementations in some other video processing library/SDK/Matlab etc. which provide algorithms for flicker removal from video sequences?
Any pointers would be useful.
Thank you.
-AD.
I don't know of any standard way to deflicker a video.
But VirtualDub is video processing software which has a filter for deflickering video. You can find its filter source and documents (probably including a description of the algorithm) here.
I wrote my own deflicker C++ function; here it is. You can cut and paste this code as is - no headers are needed other than the usual OpenCV ones.
Mat deflicker(Mat, int);
Mat prevdeflicker;

// deflicker - compares each pixel of the frame to a previously stored frame,
// and throttles small changes in pixels (flicker)
Mat deflicker(Mat Mat1, int strengthcutoff = 20){
    if (prevdeflicker.rows){ // check if we stored a previous frame. if not, there's nothing we can do: clone and exit
        int i, j;
        uchar* p;
        uchar* prevp;
        for( i = 0; i < Mat1.rows; ++i)
        {
            p = Mat1.ptr<uchar>(i);
            prevp = prevdeflicker.ptr<uchar>(i);
            for ( j = 0; j < Mat1.cols; ++j){
                Scalar previntensity = prevp[j];
                Scalar intensity = p[j];
                int strength = abs(intensity.val[0] - previntensity.val[0]);
                if(strength < strengthcutoff){ // the strength of the stimulus must be greater than a certain point, else we do not want to allow the change
                    // value 25 works well for medium+ light. anything higher creates too much blur around moving objects.
                    // in low light however this makes it worse, since low light seems to increase contrasts in flicker - some flickers go from 0 to 255 and back. :(
                    // I need to write a way to track large group movements vs small pixels, and only filter out the small pixel stuff. maybe blur first?
                    if(intensity.val[0] > previntensity.val[0]){ // use the previous frame's value, changed by +1 - slow enough to not be noticeable flicker
                        p[j] = previntensity.val[0] + 1;
                    }else{
                        p[j] = previntensity.val[0] - 1;
                    }
                }
            }
        } // end for
    }
    prevdeflicker = Mat1.clone(); // clone the current frame as the old one for next time
    return Mat1;
}
Call it as src_grey = deflicker(src_grey). It needs a loop and a greyscale image, like so:
for(;;){
    cap >> frame; // get a new frame from the camera
    cvtColor( frame, src_grey, CV_RGB2GRAY ); // convert to greyscale - simplifies everything
    src_grey = deflicker(src_grey); // this is the function call
    imshow("grey video", src_grey);
    if(waitKey(30) >= 0) break;
}
