Is it possible to eliminate the space between non-stroked polygons in Processing? If so, how?

I'm working on a Processing project to simulate very basic hard shadows. For the most part I've got it working: each edge of each object checks whether its back is facing the light. If it is, a shadow polygon is added using that edge, with its other vertices projected directly away from the light.
However, when I tried to shift from solid shadows to transparent ones I ran into a problem: because the shadows are made of multiple shapes, their borders overlap, making those areas darker than everywhere else:
I disabled the stroke on the shadows, which improved the effect but left thin lines between the shadow pieces, despite the polygons sharing identical edges:
Is there a way to eliminate this artifact? If so, how?

The solution is not to draw the shadows as separate pieces, but to draw the combined outline of all the shadow pieces as one polygon.
Here's a little example that exhibits your problem:
void setup() {
  size(500, 500);
}

void draw() {
  background(255);
  noStroke();
  fill(0);
  ellipse(mouseX, mouseY, 10, 10);

  // First shadow triangle: mouse to bottom-left to bottom-right
  fill(128, 128, 128, 128);
  beginShape();
  vertex(mouseX, mouseY);
  vertex(0, height);
  vertex(width, height);
  endShape();

  // Second shadow triangle: mouse to bottom-right to top-right
  fill(128, 128, 128, 128);
  beginShape();
  vertex(mouseX, mouseY);
  vertex(width, height);
  vertex(width, 0);
  endShape();
}
Notice the white line between the two polygons:
But if I instead draw the two polygons as one:
void setup() {
  size(500, 500);
}

void draw() {
  background(255);
  noStroke();
  fill(0);
  ellipse(mouseX, mouseY, 10, 10);

  // One quad covering both shadow regions: no internal seam to double-blend
  fill(128, 128, 128, 128);
  beginShape();
  vertex(mouseX, mouseY);
  vertex(0, height);
  vertex(width, height);
  vertex(width, 0);
  endShape();
}
Then the white line goes away:

Related

How to improve Hough Circle Transform to detect a circle made up of scattered points

I have a very basic piece of code that uses the standard HoughCircles function in OpenCV to detect a circle. However, my problem is that my data (images) are generated by an algorithm (for the purpose of data simulation) that plots points for all 360 degrees using the equation of a circle, at a radius randomly perturbed within ±15% of r, where r is the radius of the circle and is itself randomly generated as a real number between 5 and 10. (A sample image is attached.)
http://imgur.com/a/iIZ1N
Now, using the Hough circle function, I was able to detect a circle of approximately the right radius by manually playing with the parameters (via trackbars, inspired by a GitHub project of the same nature), but I want to automate the process, as I have over 1000 images to run this on. Is there a better way to do that? Any suggestions would be highly appreciated, as I am a beginner in image processing with a physics background rather than a CS one.
A rough sample of my code (without trackbars etc.) is below:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("C:\\Users\\walee\\Documents\\MATLAB\\plot_2 .jpeg", 0);
    Mat cimg, copy;
    copy = img; // note: shallow copy, shares the pixel data with img
    medianBlur(img, img, 5);
    GaussianBlur(img, img, Size(1, 5), 1.1, 0);
    cvtColor(img, cimg, COLOR_GRAY2BGR);
    vector<Vec3f> circles;
    HoughCircles(img, circles, HOUGH_GRADIENT, 1, 10, 94, 57, 120, 250);
    for (size_t i = 0; i < circles.size(); i++)
    {
        Vec3i c = circles[i];
        circle(cimg, Point(c[0], c[1]), c[2], Scalar(0, 0, 255), 1, LINE_AA);
        circle(cimg, Point(c[0], c[1]), 2, Scalar(0, 255, 0), 1, LINE_AA);
    }
    imshow("detected circles", cimg);
    waitKey();
    return 0;
}
If all images have the same nature (black axes and points forming circles), I would suggest the following (see the sketch after this list):
1) remove the axes by finding the black line elements and replacing them with the background
2) invert the colours to get a black background
3) perform a morphological closing to fill the gaps and create more solid points
4) (optional) if the density of the points is high, you can try another morphological operation, erosion, to make the ring of points thinner
5) apply the Hough circle transform
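
A minimal OpenCV sketch of that pipeline, assuming the images really are near-black axes and points on a white background; the axis removal here uses morphological opening with long line kernels (one concrete way to "find black elements"), and the kernel sizes and Hough parameters are placeholders that would need tuning:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("plot.jpeg", IMREAD_GRAYSCALE); // hypothetical path

    // 1) find the axes as long horizontal/vertical runs of dark pixels
    //    and paint them with the white background
    Mat dark, horiz, vert;
    threshold(img, dark, 100, 255, THRESH_BINARY_INV);
    morphologyEx(dark, horiz, MORPH_OPEN, getStructuringElement(MORPH_RECT, Size(40, 1)));
    morphologyEx(dark, vert, MORPH_OPEN, getStructuringElement(MORPH_RECT, Size(1, 40)));
    img.setTo(255, horiz | vert);

    // 2) invert so the points become white on black
    bitwise_not(img, img);

    // 3) close the gaps between the scattered points
    morphologyEx(img, img, MORPH_CLOSE, getStructuringElement(MORPH_ELLIPSE, Size(7, 7)));

    // 4) (optional) thin the ring if the points are dense
    erode(img, img, getStructuringElement(MORPH_ELLIPSE, Size(3, 3)));

    // 5) run the Hough transform on the cleaned image
    std::vector<Vec3f> circles;
    HoughCircles(img, circles, HOUGH_GRADIENT, 1, img.rows / 4, 100, 30, 0, 0);
    return 0;
}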

iOS - OpenCV Video processing optimization

I'm working on an iOS project. I need to detect colored circles on live video.
I'm using CvVideoCameraDelegate. Here is my code:
-(void)processImage:(cv::Mat &)image{
    // 'circles' is presumably an instance variable (std::vector<cv::Vec3f>)
    cv::medianBlur(image, image, 3);
    Mat hvs;
    cvtColor(image, hvs, COLOR_BGR2HSV);

    // Two ranges because red wraps around the ends of the hue axis
    Mat lower_red;
    Mat upper_red;
    inRange(hvs, Scalar(0, 100, 100), Scalar(10, 255, 255), lower_red);
    inRange(hvs, Scalar(160, 100, 100), Scalar(179, 255, 255), upper_red);

    Mat red_hue;
    addWeighted(lower_red, 1, upper_red, 1, 0, red_hue);
    GaussianBlur(red_hue, red_hue, cv::Size(9, 9), 2, 2);
    HoughCircles(red_hue, circles, CV_HOUGH_GRADIENT, 1, red_hue.rows/8, 100, 20, 0, 0);
    if(circles.size() != 0){
        for(cv::String::size_type current = 0; current < circles.size(); ++current){
            cv::Point center(std::round(circles[current][0]), std::round(circles[current][1]));
            int radius = std::round(circles[current][2]);
            cv::circle(image, center, radius, cv::Scalar(0, 255, 0), 5);
        }
    }
}
It works fine but takes a lot of time, and the video is a bit laggy.
I wanted to move my code onto another queue, but then I started getting EXC_BAD_ACCESS on this line: cv::medianBlur(image, image, 3);.
I started using Objective-C just for this project, so it is a bit hard for me to understand what is going on behind the scenes, but I realized that the image parameter is a reference to that Mat (at least that is what my C++ knowledge tells me), so by the time my block gets around to executing the code, the Mat no longer exists. (Am I right?)
Then I tried to get around that problem. I added this
Mat m;
image.copyTo(m);
before my queue. But this caused a memory leak. (Why isn't it released automatically? Again, not much Objective-C knowledge.)
Then I had one last idea. I added this line, Mat m = image;, as the first line of the queued block. This way I started getting EXC_BAD_ACCESS from inside cv::Mat, and it was still lagging. Here is how my code looks now:
-(void)processImage:(cv::Mat &)image{
    //First attempt
    //Mat m;
    //image.copyTo(m);
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        Mat m = image; // second attempt
        cv::medianBlur(m, m, 3);
        Mat hvs;
        cvtColor(m, hvs, COLOR_BGR2HSV);
        Mat lower_red;
        Mat upper_red;
        inRange(hvs, Scalar(0, 100, 100), Scalar(10, 255, 255), lower_red);
        inRange(hvs, Scalar(160, 100, 100), Scalar(179, 255, 255), upper_red);
        Mat red_hue;
        addWeighted(lower_red, 1, upper_red, 1, 0, red_hue);
        GaussianBlur(red_hue, red_hue, cv::Size(9, 9), 2, 2);
        HoughCircles(red_hue, circles, CV_HOUGH_GRADIENT, 1, red_hue.rows/8, 100, 20, 0, 0);
        if(circles.size() != 0){
            for(cv::String::size_type current = 0; current < circles.size(); ++current){
                cv::Point center(std::round(circles[current][0]), std::round(circles[current][1]));
                int radius = std::round(circles[current][2]);
                cv::circle(m, center, radius, cv::Scalar(0, 255, 0), 5);
            }
        }
    });
}
I would appreciate any help, or maybe a tutorial about video processing on iOS, because everything I've found either used other environments or wasn't expensive enough to need optimization.
OK, for those who have the same problem, I've managed to figure out the solution. My second attempt was very close; the problem (I think) was that I tried to process every frame, so I copied each of them into memory, and because the processing takes much more time than a frame interval, the copies stacked up and filled the memory. So what I did was modify the code to process one frame at a time and skip (show without processing) all the others.
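
A minimal sketch of that skip-frames pattern, written here in plain C++ with std::thread rather than the dispatch API; processFrame stands in for CvVideoCameraDelegate's per-frame callback, and the medianBlur line is a placeholder for the heavy HSV/Hough pipeline:

#include <atomic>
#include <thread>
#include <opencv2/opencv.hpp>

std::atomic<bool> busy{false};

void processFrame(cv::Mat &image) {
    bool expected = false;
    // If the previous frame is still being processed, show this one untouched.
    if (!busy.compare_exchange_strong(expected, true)) return;

    cv::Mat copy = image.clone(); // deep copy: safe to use off the camera thread
    std::thread([copy]() mutable {
        cv::medianBlur(copy, copy, 3); // ...heavy processing goes here
        busy = false;                  // ready to accept the next frame
    }).detach();
}

Since at most one copy is in flight at a time, memory stays bounded and the preview keeps running at full frame rate.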

poly opencv semi-transparent

My question is for OpenCV experts. I've detected the road lines (left and right), and I'm aiming to paint the road area with semi-transparent blue. So I used:
cv::fillPoly(image, ppt, npt, 1, CV_RGB(0, 0,200), lineType);
ppt contains the points for the right and left lines,
npt is the number of points.
But what I got was an opaque filled area over the road, which is not my aim.
So, my question is: is there any way to paint the road area semi-transparently? I was told to manipulate a channel, like:
cv::Mat channel[3];
split(image, channel);
channel[0] = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);
merge(channel, 3, image);
cv::imshow("kkk", image);
But the thing is, that changes the whole image, and I want only the road area. Any ideas appreciated!!
thanks
Try this code (couldn't test it, I'm on mobile):
cv::Mat polyImage = cv::Mat(image.rows, image.cols, CV_8UC3, cv::Scalar(0, 0, 0));
cv::fillPoly(polyImage, ppt, npt, 1, CV_RGB(0, 0, 200), lineType);
float transFactor = 0.5f; // the bigger, the more transparent the overlay
for (int y = 0; y < image.rows; ++y)
    for (int x = 0; x < image.cols; ++x)
    {
        // blend only where the polygon was drawn
        if (polyImage.at<cv::Vec3b>(y, x) != cv::Vec3b(0, 0, 0))
            image.at<cv::Vec3b>(y, x) = transFactor * image.at<cv::Vec3b>(y, x)
                                      + (1.0f - transFactor) * polyImage.at<cv::Vec3b>(y, x);
    }
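
For what it's worth, the same effect can be had without the per-pixel loop by drawing the polygon on a copy of the frame and blending the two images with cv::addWeighted; outside the polygon the copy equals the original, so only the road area visibly changes. A sketch, assuming a 3-channel BGR image and the ppt/npt/lineType variables from the question:

cv::Mat overlay = image.clone();
cv::fillPoly(overlay, ppt, npt, 1, CV_RGB(0, 0, 200), lineType);
// 0.5/0.5 weights give a half-transparent blue fill over the original pixels
cv::addWeighted(image, 0.5, overlay, 0.5, 0.0, image);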

Face detection after background subtraction using OpenCV

I'm trying to improve face detection from a camera capture, so I thought it would be better to remove the background from the image before running face detection.
I'm using BackgroundSubtractorMOG, and a CascadeClassifier with lbpcascade_frontalface for the face detection.
My question is: how can I grab the foreground image in order to use it as the input to face detection? This is what I have so far:
while (true) {
    capture.retrieve(image);
    mog.apply(image, fgMaskMOG, training ? LEARNING_RATE : 0);
    if (counter++ > LEARNING_LIMIT) {
        training = false;
    }
    // I think something should be done HERE to 'apply' the foreground mask
    // to the original image before passing it to the classifier..
    MatOfRect faces = new MatOfRect();
    classifier.detectMultiScale(image, faces);
    // draw faces rect
    for (Rect rect : faces.toArray()) {
        Core.rectangle(image, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(255, 0, 0));
    }
    // show capture in JFrame
    frame.update(image);
    frameFg.update(fgMaskMOG);
    Thread.sleep(1000 / FPS);
}
Thanks
I can answer in C++ using the BackgroundSubtractorMOG2:
You can either use erosion or pass a higher threshold value to the MOG background subtractor to remove the noise. In order to completely get rid of the noise and false positives, you can also blur the mask image and then apply a threshold:
// Blur the mask image
blur(fgMaskMOG2, fgMaskMOG2, Size(5,5), Point(-1,-1));
// Remove the shadow parts and the noise
threshold(fgMaskMOG2, fgMaskMOG2, 128, 255, 0);
Now you can easily find the rectangle bounding the foreground region and pass this area to the cascade classifier:
// Find the foreground bounding rectangle
Mat fgPoints;
findNonZero(fgMaskMOG2, fgPoints);
Rect fgBoundRect = boundingRect(fgPoints);
// Crop the foreground ROI
Mat fgROI = image(fgBoundRect);
// Detect the faces
vector<Rect> faces;
face_cascade.detectMultiScale(fgROI, faces, 1.3, 3, 0|CV_HAAR_SCALE_IMAGE, Size(32, 32));
// Display the face ROIs
for(size_t i = 0; i < faces.size(); ++i)
{
    Point center(fgBoundRect.x + faces[i].x + faces[i].width*0.5, fgBoundRect.y + faces[i].y + faces[i].height*0.5);
    circle(image, center, faces[i].width*0.5, Scalar(255, 255, 0), 4, 8, 0);
}
In this way, you will reduce the search area for the cascade classifier, which not only makes it faster but also reduces the false positive faces.
If you have the input image and the foreground mask, this is straightforward.
In C++, I would simply add (just where you put your comment): image.copyTo(fgimage,fgMaskMOG);
I'm not familiar with the Java interface, but this should be quite similar. Just don't forget to correctly initialize fgimage and reset it each frame.
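
A minimal C++ sketch of that step, with fgimage as a Mat that persists across frames, per the note above:

cv::Mat fgimage; // allocated once, same size and type as the camera frames
// ...then, for every frame:
fgimage.setTo(cv::Scalar::all(0)); // reset so last frame's pixels don't linger
image.copyTo(fgimage, fgMaskMOG);  // copy only where the mask is non-zero
// fgimage now holds just the foreground; pass it to detectMultiScale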

How do I draw thousands of squares with GLKit, OpenGL ES 2?

I'm trying to draw up to 200,000 squares on the screen, or a lot of squares, basically. I believe I'm just making way too many draw calls, and it's crippling the performance of the app. The squares only update when I press a button, so I don't necessarily have to update them every frame.
Here's the code I have now:
- (void)glkViewControllerUpdate:(GLKViewController *)controller
{
    //static float transY = 0.0f;
    //float y = sinf(transY)/2.0f;
    //transY += 0.175f;
    GLKMatrix4 modelview = GLKMatrix4MakeTranslation(0, 0, -5.f);
    effect.transform.modelviewMatrix = modelview;
    //GLfloat ratio = self.view.bounds.size.width/self.view.bounds.size.height;
    GLKMatrix4 projection = GLKMatrix4MakeOrtho(0, 768, 1024, 0, 0.1f, 20.0f);
    effect.transform.projectionMatrix = projection;
    _isOpenGLViewReady = YES;
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    if(_model.updateView && _isOpenGLViewReady)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        [effect prepareToDraw];
        int pixelSize = _model.pixelSize;
        if(!_model.isReady)
            return;
        //NSLog(@"UPDATING: %d, %d", _model.rows, _model.columns);
        for(int i = 0; i < _model.rows; i++)
        {
            for(int ii = 0; ii < _model.columns; ii++)
            {
                ColorModel *color = [_model getColorAtRow:i andColumn:ii];
                CGRect rect = CGRectMake(ii * pixelSize, i * pixelSize, pixelSize, pixelSize);
                //[self drawRectWithRect:rect withColor:c];
                GLubyte squareColors[] = {
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255
                };
                //NSLog(@"Drawing color with red: %d", color.red);
                int xVal = rect.origin.x;
                int yVal = rect.origin.y;
                int width = rect.size.width;
                int height = rect.size.height;
                GLfloat squareVertices[] = {
                    xVal, yVal, 1,
                    xVal + width, yVal, 1,
                    xVal, yVal + height, 1,
                    xVal + width, yVal + height, 1
                };
                glEnableVertexAttribArray(GLKVertexAttribPosition);
                glEnableVertexAttribArray(GLKVertexAttribColor);
                glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, squareVertices);
                glVertexAttribPointer(GLKVertexAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, squareColors);
                glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
                glDisableVertexAttribArray(GLKVertexAttribPosition);
                glDisableVertexAttribArray(GLKVertexAttribColor);
            }
        }
        _model.updateView = YES;
    }
}
First, do you really need to draw 200,000 squares? Your viewport only has 786,432 pixels total (768 × 1024). You might be able to reduce the number of drawn objects without significantly impacting the overall quality of your scene.
That said, if these are smaller squares, you could draw them as points with a pixel size large enough to cover your square's area. That would require setting gl_PointSize in your vertex shader to the appropriate pixel width. You could then generate your coordinates and send them all to be drawn at once as GL_POINTS. That should remove the overhead of the extra geometry of the triangles and the individual draw calls you are using here.
Even if you don't use points, it's still a good idea to calculate all of the triangle geometry you need first, then send it all in a single draw call; a sketch of this follows below. That will significantly reduce your OpenGL ES API call overhead.
One other thing you could look into would be to use vertex buffer objects to store this geometry. If the geometry is static, you can avoid sending it on each drawn frame, or only update a part of it that has changed. Even if you just change out the data each frame, I believe using a VBO for dynamic geometry has performance advantages on the modern iOS devices.
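
A rough C++/GL ES 2 sketch of the batching idea: build one interleaved vertex array for all squares, two triangles each, and issue a single draw call. The attribute indices 0 and 2 correspond to GLKit's GLKVertexAttribPosition and GLKVertexAttribColor; rows, columns, pixelSize, and the rgb array stand in for the question's model and are illustrative:

#include <vector>
#include <OpenGLES/ES2/gl.h>

struct Vertex { GLfloat x, y; GLubyte r, g, b, a; };

// rgb holds 3 bytes per square, row-major
static void drawGrid(int rows, int columns, float pixelSize, const std::vector<GLubyte> &rgb)
{
    std::vector<Vertex> verts;
    verts.reserve((size_t)rows * columns * 6);
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < columns; ++j) {
            size_t c = 3 * ((size_t)i * columns + j);
            GLubyte r = rgb[c], g = rgb[c + 1], b = rgb[c + 2];
            GLfloat x = j * pixelSize, y = i * pixelSize, s = pixelSize;
            Vertex quad[6] = { // two triangles per square
                {x, y, r, g, b, 255}, {x + s, y, r, g, b, 255}, {x, y + s, r, g, b, 255},
                {x + s, y, r, g, b, 255}, {x + s, y + s, r, g, b, 255}, {x, y + s, r, g, b, 255}
            };
            verts.insert(verts.end(), quad, quad + 6);
        }
    glEnableVertexAttribArray(0); // GLKVertexAttribPosition
    glEnableVertexAttribArray(2); // GLKVertexAttribColor
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), &verts[0].x);
    glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), &verts[0].r);
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());
}

That's one draw call instead of rows × columns of them; moving verts into a VBO with glBufferData would additionally avoid re-sending the data every frame.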
Can you not try to optimize it somehow? I'm not terribly familiar with graphics programming, but I'd imagine that if you are drawing 200,000 squares, the chances that all of them are actually visible seem low. Could you not add some sort of isVisible flag to your mySquare class that determines whether the square you want to draw is actually visible, and then modify your draw function so that it skips squares that aren't?
Or are you asking someone to improve the current code you have? Because if your performance is as bad as you say, I don't think making small changes to the above code will solve your problem; you'll have to rethink how you're doing your drawing.
It looks like what your code is actually trying to do is take a _model.rows × _model.columns 2D image and draw it upscaled by _model.pixelSize. If -[ColorModel getColorAtRow:andColumn:] is retrieving 3 bytes at a time from an array of color values, then you may want to consider uploading that array of color values into an OpenGL texture as GL_RGB/GL_UNSIGNED_BYTE data and letting the GPU scale up all of your pixels at once (a short sketch of the upload follows).
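
A C++/GL ES 2 sketch of that upload, with illustrative names; GL_NEAREST filtering keeps the hard square edges when the quad is scaled up:

#include <OpenGLES/ES2/gl.h>

// colors: rows * columns * 3 bytes of RGB data, row-major
GLuint uploadColorGrid(const GLubyte *colors, int rows, int columns)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows of 3-byte texels may not be 4-byte aligned
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, columns, rows, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, colors);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    return tex; // draw one textured quad with this instead of rows*columns quads
}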
Alternatively, if scaling up the contents of your ColorModel is the only reason you're using OpenGL ES and GLKit, you might be better off wrapping your color values in a CGImage and letting UIKit and Core Animation do the drawing for you. How often do the color values in the ColorModel get updated?