Draw a border around the bright part of the image - opencv

I have an image with a bright center but dark edges. I need to draw a rectangle around the region above a certain brightness, so that the darker areas remain outside, or get the coordinates of this rectangle. Preferably using OpenCV; the programming language is not important.
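If it helps, here is a minimal sketch of one possible approach, assuming the OpenCV Python API, a grayscale input, and an arbitrary brightness threshold of 200 (the file names are placeholders): threshold the image, then take the bounding rectangle of the remaining bright pixels.

import cv2

# hypothetical input file; adjust the brightness threshold (200 here) as needed
img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
_, bright = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)

# bounding rectangle of all sufficiently bright pixels
# (assumes at least one pixel passes the threshold)
x, y, w, h = cv2.boundingRect(cv2.findNonZero(bright))
cv2.rectangle(img, (x, y), (x + w, y + h), 255, 2)
cv2.imwrite('output.png', img)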

Related

Metal - How to overlap textures based on color

I'm trying to use a render pass descriptor to draw two grayscale textures. I am drawing a black square first, then a light gray square after. The second square partially covers the first.
With this setup, the light gray square always appears in front of the black square because it was drawn most recently in the render pass. However, I would like to know if there is a way to draw the black square above the light gray one based on its brightness. Since the squares only partially overlap, is there a way to still have the black square appear on top simply because it has a darker pixel value?
Currently it looks something like this, where the gray square is drawn second so it appears on top.
What I would like is to be able to still draw the gray square second, but have it appear underneath based on the pixel brightness, like so:
I think MTLBlendOperationMin will do what you want: https://developer.apple.com/documentation/metal/mtlblendoperation/mtlblendoperationmin?language=objc

Is there a way to detect near-rectangle in opencv?

I want to find the shapes that look most like rectangles. The first image is the original, containing shapes that could roughly be rectangles but are not exact ones. The green rectangles in the second image are what I want. Is there a way to do this with OpenCV? I've tried Hough lines, but the results are not good.
The source image:
And what I want is to find the most rectangle-like shapes among these, like the rectangles in green.
What I want:
A very simple approach: once you have a bounding box around your shape, count the percentage of pixels inside the box that are white.
The higher the percentage of white pixels, the closer to a rectangle it is.
To get the bounding boxes, take a look at either findContours from OpenCV or a blob-extraction algorithm; you will find plenty of questions about those.
Edit:
Maybe you should first get the Minimum bounding rectangles of the shapes and then do this kind of heuristic:
Shrink the rectangle dimensions until the white-pixel percentage inside the rectangle reaches some threshold defined by you (like 90% of white pixels inside the rectangle).
To get the Minimum bounding rectangle (the smallest rectangle which contains the whole shape), you might check this tutorial:
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html
One thing that might also help is computing the difference in size between the minimum bounding rectangle and the maximum inner rectangle (the biggest rectangle you can fit inside the white shape). The smaller the difference between those rectangles' properties (width, height, area, center coordinates), the closer the shape is to a rectangle.
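A rough sketch of the white-pixel-percentage heuristic described above, assuming a binary input image with white shapes on a black background and the OpenCV 4.x findContours signature; the 0.9 threshold and the file name are placeholders:

import cv2

binary = cv2.imread('shapes.png', cv2.IMREAD_GRAYSCALE)   # hypothetical binary image
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)              # upright bounding box of the shape
    roi = binary[y:y + h, x:x + w]
    white_ratio = cv2.countNonZero(roi) / float(w * h)
    if white_ratio >= 0.9:                          # mostly white -> close to a rectangle
        cv2.rectangle(binary, (x, y), (x + w, y + h), 127, 2)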

Retrieve Circle Points

In OpenCV, I know how to draw circles, but is there a way to get back all the points that make up the circle? I hope I don't have to go through calculating contours.
Thanks
If you know how to draw the circle,
Create a black image of the same size as the original image
Then draw the circle on the black image with white color
Now in this black image, check the points which are white
If you are using the Python API, you can do it as follows:
import numpy as np
import cv2

# black (all-zero) image of the same size as the original
img = np.zeros((500,500),np.uint8)
# draw the circle outline in white
cv2.circle(img,(250,250),100,255)
# (row, col) coordinates of every white pixel, i.e. the circle points
points = np.transpose(np.where(img==255))
You can do something similar to the Python answer above in C/C++:
If you know how to draw the circle,
Create a black image of the same size as the original image
Then draw the circle on the black image with white color
Now, instead of checking which pixels have a certain value, you can find a contour (represented as a vector of points) along the circle's edge.
To do this you can use OpenCV's findContours function, which will give you the points on the circle's edge.
The background doesn't actually have to be black and the circle white, but the background should be plain and the circle should have a different color from the background.
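A minimal sketch of that contour-based variant, reusing the Python snippet above (assuming OpenCV 4.x, where findContours returns two values):

import numpy as np
import cv2

img = np.zeros((500, 500), np.uint8)
cv2.circle(img, (250, 250), 100, 255)

# each contour is a vector of points along the circle's edge
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
circle_points = contours[0].reshape(-1, 2)   # (x, y) pairs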

How to overlay an picture with a given mask

I want to overlay an image on a given image. I have created a mask with an area where I can place this picture:
http://img560.imageshack.us/img560/1381/roih.jpg
The problem is that the white area contains a black area where I can't place objects.
How can I efficiently calculate where the subimage must be placed? I know about functions like PointPolygonTest, but it takes very long.
EDIT:
The overlay image must be placed somewhere in the white area.
For example, at the location of the blue rectangle.
http://img513.imageshack.us/img513/5756/roi2d.jpg
If I understood correctly, you would like to put an image in a region (as big as the image) that is completely white in the mask.
In this case, in order to get valid regions, I would apply an erosion to the mask using a kernel of the same size as the image to be inserted. After erosion, all valid regions will be white.
The image you show, however, has no 200*200 region that is entirely white, so I may have misunderstood...
But if you want to calculate the region with the least black in the mask, you could apply a blur instead of an erosion and look for the maximal-intensity pixel in the blurred mask.
In both cases you want to insert the sub-image so that its centre is at the position of the maximal-intensity pixel of the eroded/blurred mask.
Edit:
If you are interested in finding the region that would be the most distant from any black pixel to put the sub-image, you can define its centre as the maximal value of the distance transform of the mask.
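A sketch of both variants, assuming a recent OpenCV, a binary 8-bit mask where the allowed region is white, and a hypothetical 200x200 sub-image (the mask file name is a placeholder):

import cv2
import numpy as np

mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)   # hypothetical mask file
sub_h, sub_w = 200, 200                               # size of the image to insert

# Variant 1: erode with a kernel the size of the sub-image;
# any remaining white pixel is a valid centre position.
kernel = np.ones((sub_h, sub_w), np.uint8)
valid_centres = cv2.erode(mask, kernel)

# Variant 2: distance transform; its maximum is the point
# farthest from any black pixel.
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
_, _, _, centre = cv2.minMaxLoc(dist)                 # centre = (x, y) of the best spot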
Good luck,

what is the relationship between image edges and gradient?

Can anybody help me interpret:
"Edge points may be located by the maxima of the
module of the gradient, and the direction of edge contour is orthogonal to the direction of the gradient."
Paul R has given you an answer, so I'll just add some images to help make the point.
In image processing, when we refer to a "gradient" we usually mean the change in brightness over a series of pixels. You can create gradient images using software such as GIMP or Photoshop.
Here's an example of a linear gradient from black (left) to white (right):
The gradient is "linear" meaning that the change in intensity is directly proportional to the distance between pixels. This particular gradient is smooth, and we wouldn't say there is an "edge" in this image.
If we plot the brightness of the gradient vs. X-position (left to right), we get a plot that looks like this:
Here's an example of an object on a background. The edges are a bit fuzzy, but this is common in images of real objects. The pixel brightness does not change from black to white from one pixel to the next: there is a gradient that includes shades of gray. This is not obvious since you typically have to zoom into a photo to see the fuzzy edge.
In image processing we can find those edges by looking at sharp transitions (sharp gradients) from one brightness to another. If we zoom into the upper left corner of that box, we can see that there is a transition from white to black over just a few pixels. This transition is a gradient, too. The difference is that the gradient is located between two regions of constant color: white on the left, black on the right.
The red arrow shows the direction of the gradient from background to foreground: pixels are light on the left, and as we move in the +x direction the pixels become darker. If we plot the brightness sampled along the arrow, we'll get something like the following plot, with red squares representing the brightness for a specific pixel. The change isn't linear, but instead will look like one side of a bell curve:
The blue line segment is a rough approximation of the slope of the curve at its steepest. The "true" edge point is the point at which slope is steepest along the gradient corresponding to the edge of an object.
Gradient magnitude and direction can be calculated using horizontal and vertical Sobel filters. You can then calculate the direction of the gradient as:
gradientAngle = arctan(gradientY / gradientX)
The gradient will be steepest when it is perpendicular to the edge of the object.
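For illustration, a small sketch computing gradient magnitude and direction with Sobel filters (assuming a grayscale input; the file name is a placeholder, and arctan2 is used so the full angle range is recovered):

import cv2
import numpy as np

gray = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)

# horizontal and vertical Sobel derivatives
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)

magnitude = np.sqrt(gx ** 2 + gy ** 2)
direction = np.arctan2(gy, gx)   # gradient direction; edges run perpendicular to it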
If you look at some black and white images of real scenes, you can zoom in, look at individual pixel values, and develop a good sense of how these principles apply.
Object edges typically result in a step change in intensity. So if you take the derivative of intensity it will have a large (positive or negative) value at edges and a smaller value elsewhere. If you can identify the direction of steepest gradient then this will be at right angles to (orthogonal to) the object edge.
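A tiny numeric illustration of that point, with made-up intensity values: the derivative of a step-like profile is large only where the step occurs.

import numpy as np

intensity = np.array([10, 10, 10, 12, 80, 200, 200, 200])  # 1-D profile across an edge
derivative = np.diff(intensity)
print(derivative)   # the large entries (68, 120) mark the edge; elsewhere it is near zero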
