Approximating a contour with rotated rectangles - emgucv

After some color detection and binary thresholding, I use the following code to find the contours and draw them onto the image:
using (MemStorage stor = new MemStorage())
{
    Contour<Point> contours = img.FindContours(
        Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
        Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_LIST,
        stor);

    for (; contours != null; contours = contours.HNext)
    {
        Contour<Point> currentContour = contours.ApproxPoly(contours.Perimeter * poly, stor);
        img.Draw(currentContour, new Bgr(255, 255, 255), 1);

        Rectangle currentrect = currentContour.BoundingRectangle;
        img.Draw(currentrect, new Bgr(255, 255, 255), 2);
    }
}
My problem is, as I expected, that if the contour is a rectangle but is rotated in the image, the bounding rectangle does not change its orientation to fit the rotation. Is there another way to accomplish this? Any help would be greatly appreciated.

Yes, there is another way to accomplish this. You can use
contour.GetConvexHull(ORIENTATION.CV_CLOCKWISE);
Using moments, you can then easily get the orientation and adjust the rectangle accordingly.
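For example, a rough sketch of the moments part (EmguCV 2.x API assumed; the exact accessor names are from memory and may need adjusting for your version), dropped into the loop from the question:
// Sketch only: estimate the contour's orientation from its second-order central moments
// (EmguCV 2.x assumed; GetMoments/GetCentralMoment names may differ in other versions).
MCvMoments moments = currentContour.GetMoments();
double mu20 = moments.GetCentralMoment(2, 0);
double mu02 = moments.GetCentralMoment(0, 2);
double mu11 = moments.GetCentralMoment(1, 1);
// Angle of the principal axis, in radians
double angle = 0.5 * Math.Atan2(2 * mu11, mu20 - mu02);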

The method you are looking for is:
PointCollection.MinAreaRect(points);
A worked example is here:
http://www.emgu.com/wiki/index.php/Minimum_Area_Rectangle_in_CSharp
The complete documentation (which adds little beyond the above) is located here:
http://www.emgu.com/wiki/files/2.4.0/document/html/0d5fd148-0afb-fdbf-e995-6dace8c8848d.htm
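As a minimal sketch based on the linked example (EmguCV 2.x API assumed), dropped into the loop from the question in place of the axis-aligned BoundingRectangle:
// Sketch only (EmguCV 2.x): fit a rotated rectangle to the contour points.
PointF[] pts = Array.ConvertAll(currentContour.ToArray(), p => new PointF(p.X, p.Y));
MCvBox2D box = PointCollection.MinAreaRect(pts);
// Draw the rotated box instead of the axis-aligned bounding rectangle.
img.Draw(box, new Bgr(255, 255, 255), 2);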

Related

aruco::detectMarkers is not finding true edges of markers

I'm using ArUco markers to correct perspective and calculate sizes in an image. In this image I know the exact distance between the outer edges of the markers and am using that to calculate the sizes of the black rectangles.
My problem is that aruco::detectMarkers doesn't always identify the true edges of the markers (as shown in the detail image). When I correct the perspective based on the corners of the markers, it causes distortion that affects the size calculations of the objects in the image.
Is there a way to improve the edge detection accuracy of aruco::detectMarkers?
Here's a scaled-down photo of the entire board:
Here's the detail of the lower-left marker showing the inaccuracy of the edge detection:
Here's the detail of the upper-right marker showing an accurate edge detection of the same marker ID:
It's hard to see in this shrunken image but the upper-left marker is accurate and the lower-right marker is inaccurate.
My function that calls detectMarkers:
bool findMarkers(const Mat image, Point2d outerMarkerCoordinates[], Point2d innerMarkerCoordinates[], Size2d *boardSize) {
    Ptr<aruco::Dictionary> theDictionary = aruco::getPredefinedDictionary(aruco::DICT_4X4_1000);
    vector<vector<Point2f> > markers;
    vector<int> ids;

    aruco::detectMarkers(image, theDictionary, markers, ids);
    aruco::drawDetectedMarkers(image, markers, ids);

    return true; //There's actually more code here that makes sure there are four markers.
}
Examination of the optional detectorParameters argument to detectMarkers showed a parameter called doCornerRefinement. Its description is "do subpixel refinement or not". Since the error I'm seeing is larger than a pixel, I didn't think this was applicable to my situation. I gave it a try anyway and experimented with the cornerRefinementWinSize value and found that it did indeed solve my problem. Now I'm thinking that "pixel" in the ArUco sense is the size of one of the squares within the marker, not an image pixel.
The modified call to detectMarkers:
bool findMarkers(const Mat image, Point2d outerMarkerCoordinates[], Point2d innerMarkerCoordinates[], Size2d *boardSize) {
    Ptr<aruco::Dictionary> theDictionary = aruco::getPredefinedDictionary(aruco::DICT_4X4_1000);
    vector<vector<Point2f> > markers;
    vector<int> ids;

    Ptr<aruco::DetectorParameters> detectorParameters = new aruco::DetectorParameters;
    detectorParameters->doCornerRefinement = true;
    detectorParameters->cornerRefinementWinSize = 11;

    aruco::detectMarkers(image, theDictionary, markers, ids, detectorParameters);
    aruco::drawDetectedMarkers(image, markers, ids);

    return true; //There's actually more code here that makes sure there are four markers.
}
Success!

Finding largest blob in image

I am having some issues extracting a blob from an image using EmguCV. Everything I see online uses the Contours object, but I guess that was removed in EmguCV 3.0? I get an exception every time I try to use it, and I haven't found many recent/relevant SO topics that aren't out of date.
Basically, I have a picture of a leaf. The background might be white, green, black, etc. I want to essentially remove the background so that I can perform operations on the leaf without interference with the background. I'm just not sure where I'm going wrong here:
Image<Bgr, Byte> Original = Core.CurrentLeaf.GetImageBGR;
Image<Gray, Byte> imgBinary = Original.Convert<Gray, Byte>();
imgBinary.PyrDown().PyrUp(); // Smoothen a little bit
imgBinary = imgBinary.ThresholdBinaryInv(new Gray(100), new Gray(255)); // Apply inverse suppression

// Now, copy pixels from original image that are black in the mask, to a new Mat. Then scan?
Image<Gray, Byte> imgMask;
imgMask = imgBinary.Copy(imgBinary);
CvInvoke.cvCopy(Original, imgMask, imgBinary);

VectorOfVectorOfPoint contoursDetected = new VectorOfVectorOfPoint();
CvInvoke.FindContours(imgBinary, contoursDetected, null, Emgu.CV.CvEnum.RetrType.List, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);

var contoursArray = new List<VectorOfPoint>();
int count = contoursDetected.Size;
for (int i = 0; i < count; i++)
{
    using (VectorOfPoint currContour = contoursDetected[i])
    {
        contoursArray.Add(currContour);
    }
}
With this, I get a black image with a tiny bit of white lines. I've racked my brain back and forth and haven't been able to come up with something. Any pointers would be much appreciated!
I think you need to find which contour has the largest area, using ContourArea on each of the contours.
After you find the largest contour you need to fill it (because the contour is just the outline of the blob, not all the pixels inside it) using FillPoly, and create a mask that has the leaf pixels set to 1 and everything else set to 0.
Finally, use the mask to extract the leaf pixels from the original image.
I am not so proficient in C#, so I attach code in Python with OpenCV to give you some help.
The resulting image:
Hope this will be helpful enough.
import cv2
import numpy as np

# Read image
Irgb = cv2.imread('leaf.jpg')
# OpenCV loads images in BGR channel order
B, G, R = cv2.split(Irgb)

# Do some denoising on the red channel (the red channel gave a better result
# than the gray image because it has more contrast)
Rfilter = cv2.bilateralFilter(R, 25, 25, 10)

# Threshold image using Otsu's method
ret, Ithres = cv2.threshold(Rfilter, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Find the largest contour and extract it (OpenCV 3.x returns three values here)
im, contours, hierarchy = cv2.findContours(Ithres, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
maxContour = 0
for contour in contours:
    contourSize = cv2.contourArea(contour)
    if contourSize > maxContour:
        maxContour = contourSize
        maxContourData = contour

# Create a mask from the largest contour
mask = np.zeros_like(Ithres)
cv2.fillPoly(mask, [maxContourData], 1)

# Use mask to crop data from original image (kept in BGR order for imshow)
finalImage = np.zeros_like(Irgb)
finalImage[:, :, 0] = np.multiply(B, mask)
finalImage[:, :, 1] = np.multiply(G, mask)
finalImage[:, :, 2] = np.multiply(R, mask)

cv2.imshow('final', finalImage)
cv2.waitKey(0)
I recommend you look into Otsu thresholding. It gives you a threshold which you can use to divide the image into two classes (background and foreground). Using OpenCV's threshold method you can then create a mask if necessary.
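For example, a small sketch with the EmguCV 3.x CvInvoke API (to match the question's version; it reuses the Original variable from the question):
// Sketch only (EmguCV 3.x): let Otsu's method pick the threshold automatically.
Image<Gray, Byte> gray = Original.Convert<Gray, Byte>();
Image<Gray, Byte> mask = new Image<Gray, Byte>(gray.Size);
// The threshold value of 0 is ignored when the Otsu flag is set; it is computed from the image.
CvInvoke.Threshold(gray, mask, 0, 255,
    Emgu.CV.CvEnum.ThresholdType.BinaryInv | Emgu.CV.CvEnum.ThresholdType.Otsu);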

Draw minimum enclosing circle around a contour in EmguCV

How do I draw a minimum enclosing circle around a contour in EmguCV? The FindContours() method returns a set of Points, but to build the circle it asks for PointF. Is there a workaround for this? Thanks.
Yes, there is a way. You just have to convert the contour into an array of PointF like this:
for (var contour = binaryImage.FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
         RETR_TYPE.CV_RETR_CCOMP, mem);
     contour != null;
     contour = contour.HNext)
{
    //assuming you have a way of getting the index of
    //point you wanted to extract from the current contour
    var index = 0;

    PointF[] pointfs = Array.ConvertAll(contour.ToArray(),
        input => new PointF(input.X, input.Y));

    //just an example
    var circle = new CircleF(pointfs[index], 2.0f);
}
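To round this out, a small sketch (EmguCV 2.x API assumed) that feeds the converted points to PointCollection.MinEnclosingCircle and draws the result on the image:
// Sketch only (EmguCV 2.x): fit and draw the minimum enclosing circle of the contour.
CircleF minCircle = PointCollection.MinEnclosingCircle(pointfs);
binaryImage.Draw(minCircle, new Gray(255), 2);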

how to embed a watermark on an image using edge in matlab?

In a school project I would like to do the following steps to produce a watermarked image in MATLAB:
extract the edges from an image
insert a mark on this edge
reconstruct the image
extract the mark
Could someone give me a link to get a good idea of how to do it, or help me do it?
Thank you in advance.
You want to add a watermark to an image? Why not just overlay the whole thing.
If you have an image:
img = imread('myimage.jpg');
wm = imread('watermark.jpg');
You can just resize the watermark to the size of the image:
wm_rs = imresize(wm, [size(img,1) size(img,2)], 'lanczos2');
img_wm = img;
img_wm(wm_rs ~= 0) = wm_rs(wm_rs ~= 0); % This sets non-black pixels to be the watermark. (You'll have to slightly modify this for color images)
If you want to put it on the edges of the image, you can extract them like this:
edges = edge(rgb2gray(img),'canny');
Then you can set the pixels where the edges exist to be watermark pixels
img_wm = img;
img_wm(edges ~= 0) = wm_rs(edges~=0);
Instead of direct assignment you can play around with using a mix of the img and wm_rs pixel values if you want transparency.
You'll probably have to adjust some of what I said to color images, but most should be the same.
Here is a nice and simple example of how you can embed watermarks using MATLAB (in the spatial domain): http://imageprocessingblog.com/digital-watermarking/
See the example below (R2017b or later release):
% your params
img = imread('printedtext.png');
Transparency = 0.6;
fontColor = [1,1,1]; % RGB, range [0,1]
position = [700,200];

%% add watermark
mask = zeros(size(img),'like',img);
outimg = insertText(mask,position,'china', ...
    'BoxOpacity',0, ...
    'FontSize',200, ...
    'TextColor', 'white');
bwMask = imbinarize(rgb2gray(outimg));
finalImg = labeloverlay(img,bwMask, ...
    'Transparency',Transparency, ...
    'Colormap',fontColor);
imshow(finalImg)

How can I perform clipping on rotated rectangles?

So I have this Panel class. It's a little like a Window where you can resize, close, add buttons, sliders, etc. Much like the status screen in Morrowind if any of you remember. The behavior I want is that when a sprite is outside of the panel's bounds it doesn't get drawn and if it's partially outside only the part inside gets drawn.
So what it does right now is first get a rectangle that represents the bounds of the panel, and a rectangle for the sprite; it finds the rectangle of intersection between the two, then translates that intersection to the local coordinates of the sprite rectangle and uses that for the source rectangle. It works, and as clever as I feel the code is, I can't shake the feeling that there's a better way to do this. Also, with this setup I cannot utilize a global transformation matrix for my 2D camera; everything in the "world" must be passed a camera argument to draw. Anyway, here's the code I have:
for the Intersection:
public static Rectangle? Intersection(Rectangle rectangle1, Rectangle rectangle2)
{
    if (rectangle1.Intersects(rectangle2))
    {
        if (rectangle1.Contains(rectangle2))
        {
            return rectangle2;
        }
        else if (rectangle2.Contains(rectangle1))
        {
            return rectangle1;
        }
        else
        {
            int x = Math.Max(rectangle1.Left, rectangle2.Left);
            int y = Math.Max(rectangle1.Top, rectangle2.Top);
            int height = Math.Min(rectangle1.Bottom, rectangle2.Bottom) - Math.Max(rectangle1.Top, rectangle2.Top);
            int width = Math.Min(rectangle1.Right, rectangle2.Right) - Math.Max(rectangle1.Left, rectangle2.Left);
            return new Rectangle(x, y, width, height);
        }
    }
    else
    {
        return null;
    }
}
and for actually drawing on the panel:
public void DrawOnPanel(IDraw sprite, SpriteBatch spriteBatch)
{
    Rectangle panelRectangle = new Rectangle(
        (int)_position.X,
        (int)_position.Y,
        _width,
        _height);

    Rectangle drawRectangle = new Rectangle();
    drawRectangle.X = (int)sprite.Position.X;
    drawRectangle.Y = (int)sprite.Position.Y;
    drawRectangle.Width = sprite.Width;
    drawRectangle.Height = sprite.Height;

    if (panelRectangle.Contains(drawRectangle))
    {
        sprite.Draw(
            spriteBatch,
            drawRectangle,
            null);
    }
    else if (Intersection(panelRectangle, drawRectangle) == null)
    {
        return;
    }
    else if (Intersection(panelRectangle, drawRectangle).HasValue)
    {
        Rectangle intersection = Intersection(panelRectangle, drawRectangle).Value;
        if (Intersection(panelRectangle, drawRectangle) == drawRectangle)
        {
            sprite.Draw(spriteBatch, intersection, intersection);
        }
        else
        {
            sprite.Draw(
                spriteBatch,
                intersection,
                new Rectangle(
                    intersection.X - drawRectangle.X,
                    intersection.Y - drawRectangle.Y,
                    intersection.Width,
                    intersection.Height));
        }
    }
}
So I guess my question is, is there a better way to do this?
Update: Just found out about the ScissorRectangle property. This seems like a decent way to do this; it requires a RasterizerState object to be created and passed into the SpriteBatch.Begin overload that accepts it, but it might be the best bet. There's also the Viewport, which I can apparently change around. Thoughts? :)
There are several ways to limit drawing to a portion of the screen. If the area is rectangular (which seems to be the case here), you could set the viewport (see GraphicsDevice) to the panel's surface.
For non-rectangular areas, you can use the stencil buffer or use some tricks with the depth buffer. Draw the shape of the surface in the stencil buffer or the depth buffer, set your render state to draw only pixels located in the shape you just rendered in the stencil/depth buffer, finally render your sprites.
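For the rectangular case, here is a minimal XNA 4.0 sketch of the ScissorRectangle approach mentioned in the question's update (the sort mode and null states are just placeholders):
// Sketch only (XNA 4.0): clip every draw in this batch to the panel's rectangle.
RasterizerState scissorState = new RasterizerState { ScissorTestEnable = true };
spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, scissorState);
spriteBatch.GraphicsDevice.ScissorRectangle = panelRectangle;
// ... draw the panel's sprites here; anything outside panelRectangle is clipped ...
spriteBatch.End();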
One way of doing this is simple per-pixel collision. Although this is a bad idea if the sprites are large or numerous, this can be a very easy and fast way to get the job done with small sprites. First, do a bounding circle or bounding square collision check against the panel to see if you even need to do per-pixel detection.
Then, create a contains method that checks if the position, scale, and rotation of the sprite put it so far inside the panel that it must be totally enclosed by the panel, so you don't need per-pixel collision in that case. This can be done pretty easily by just creating a bounding square that has the width and height of the length of the sprite's diagonal, and checking for collision with that.
Finally, if both of these fail, we must do per-pixel collision. Go through and check every pixel in the sprite to see if it is within the bounds of the panel. If it isn't, set the alpha value of the pixel to 0.
That's it.
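A rough sketch of that last step, assuming an unrotated, unscaled sprite whose texture data has already been read into a Color[] array (the spriteData and texture names here are placeholders):
// Sketch only: zero the alpha of sprite pixels that fall outside the panel.
// Assumes spriteData was filled via texture.GetData(spriteData).
for (int y = 0; y < sprite.Height; y++)
{
    for (int x = 0; x < sprite.Width; x++)
    {
        Point screenPos = new Point((int)sprite.Position.X + x, (int)sprite.Position.Y + y);
        if (!panelRectangle.Contains(screenPos))
            spriteData[y * sprite.Width + x] = Color.Transparent;
    }
}
texture.SetData(spriteData);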
