How to add an inner buffer to these polygons

I created a buffer zone around the polygons with a specific distance.
Now I want to do the same processing, but this time inside the polygon (an inner polygon) with a distance of 3 meters, as in these pictures:

You can try a negative distance for an inside buffer on the buffer screen.
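In tools built on GEOS (QGIS, shapely, PostGIS), a negative buffer distance shrinks the polygon inward. A minimal sketch with shapely, assuming the geometry is in a metric CRS so the distance is in meters:

```python
from shapely.geometry import Polygon

# A 20 m x 20 m square, assumed to be in a metric CRS
poly = Polygon([(0, 0), (20, 0), (20, 20), (0, 20)])

# A negative distance buffers inward: a 3 m inner buffer
inner = poly.buffer(-3)

print(poly.area)   # 400.0
print(inner.area)  # 196.0 (a 14 m x 14 m square remains)
```

Note that a too-large negative distance can collapse the polygon to an empty geometry, so it is worth checking `inner.is_empty` afterwards.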

Related

OpenCV: Distance between fixed camera and an object

I have a fixed camera, so I know the distance from the camera to the ground, as well as the distance between the camera and the bottom line in the image (which is the floor).
I have an object in the image and I need to calculate the distance to it. However, the actual dimensions of the object are not available.
In the first image, the distance to the object is 75cm. In the second image the distance is 33cm.
How can I calculate the distances using the fixed camera? I found a few tutorials that use the focal length and the width of the object, but I cannot use that approach here.
I can detect the object and have a bounding box around it.
Thanks
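One way that needs no object dimensions is to use the ground plane: since the camera height and orientation are fixed, the pixel row where the object meets the floor determines its distance. A sketch under assumed calibration values (the camera height, downward tilt, focal length in pixels, and principal-point row below are all hypothetical numbers, not from the question):

```python
import math

def ground_distance(v, H, theta_deg, fy, cy):
    """Horizontal distance from a fixed camera to the point where the
    object touches the floor, using only the pixel row v of the bounding
    box bottom. Assumes a pinhole camera at height H (meters), pitched
    down by theta_deg below horizontal, with focal length fy (pixels)
    and principal-point row cy."""
    # Angle of the viewing ray below the horizontal
    ray_angle = math.radians(theta_deg) + math.atan2(v - cy, fy)
    # Intersect the ray with the floor plane
    return H / math.tan(ray_angle)

# Illustrative numbers: camera 1 m high, tilted 30 degrees down,
# fy = 800 px, principal-point row cy = 240
print(round(ground_distance(v=240, H=1.0, theta_deg=30.0, fy=800.0, cy=240.0), 3))  # 1.732
```

Rows lower in the image (larger v) give steeper ray angles and therefore shorter distances, which matches the intuition that nearby objects sit lower in the frame.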

Coordinates of bounding box in an image

I am doing object detection in order to count penguins on a UAV georeferenced dataset, so for practical reasons let's say they appear as dots on the images. After running the object detection model, it returns inferred images with the corresponding bounding boxes for each penguin detected.
I need to extract the coordinates of the center of each bounding box (something like x,y); since the image is georeferenced, I can then convert the b.box center coordinates into GPS coordinates.
This picture is a good example. Here, the authors are counting banana plants, and after detecting the plants of the same regions in 3 differently-treated pictures of the same area, they see that up to three boxes appear around some of the plants (left). So, in order to count each plant once even when it has up to 3 bboxes, this is what they do (quoted from the original article):
1. Collect bounding boxes of detection from each ROI tiles.
2. Calculate centroid of each bounding box.
3. Add the tile number information on x and y-value of centroids to overlay them on original ROI image.
And this is exactly what I am looking for: how to calculate the centroid of each bbox and obtain its x,y coordinates, so that I can transform those coordinates into real-world ones (as the image is georeferenced) and then display each real coordinate on a mosaic.
Thank you very much in advance.
You could use the Intersection over Union algorithm to select one of the boxes and then use the coordinates of the selected box to plot the output circle or box over detected objects.
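The centroid is just the mean of the box corners, and IoU follows from the overlap area. A minimal pure-Python sketch (the `(x_min, y_min, x_max, y_max)` box format is an assumption; detectors differ in how they report boxes):

```python
def bbox_center(x_min, y_min, x_max, y_max):
    """Centroid of an axis-aligned bounding box in pixel coordinates."""
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def iou(a, b):
    """Intersection over Union of two boxes (x_min, y_min, x_max, y_max)."""
    ix_min, iy_min = max(a[0], b[0]), max(a[1], b[1])
    ix_max, iy_max = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix_max - ix_min), max(0.0, iy_max - iy_min)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(bbox_center(10, 20, 30, 60))          # (20.0, 40.0)
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.1429 (25 / 175)
```

To deduplicate, keep only one box from each group whose pairwise IoU exceeds some threshold; the pixel centroid can then be pushed through the image's geotransform (e.g. with rasterio or GDAL) to obtain real-world coordinates.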

How to find euclidean distance from specific camera instead of perpendicular distance from base line in stereo triangulation?

I want to calculate the depth of an image pixel in world units using the stereo triangulation principle. Stereo triangulation gives the perpendicular distance from the baseline connecting the left and right cameras, but I want to calculate the depth from the left camera instead. I am using a single camera to capture two images: the camera is mounted on a moving vehicle and I take two consecutive images. As the vehicle is moving, the baseline between the two views is not horizontal to the viewing direction. I calculate the distance of a particular image pixel using the triangulation principle, selecting the corresponding pixel in the left and right images manually with a mouse click.
Let (x1,y1) and (x2,y2) be the corresponding pixels from the left and right images respectively, and let 'CamDistance' be the capturing distance (baseline) between the two views. From the disparity I get the depth of the pixel in world units, but the obtained depth is the perpendicular distance from the baseline. I want to calculate the depth from the left or right camera. Please suggest what changes I should make in the formula below to calculate depth from the camera instead of the baseline.
I tried the following function for calculating the depth of a pixel:
def PixDistance(x1, y1, x2, y2):
    # Horizontal disparity between the corresponding pixels
    Disparity = x1 - x2
    # Depth = baseline * focal length / disparity
    Depth = CamDistance * f / Disparity
    AbsDepth = abs(Depth)
    return AbsDepth
Thanks in advance !!
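The triangulated depth Z is the component perpendicular to the baseline. To get the ray length from the left camera instead, back-project the pixel into camera coordinates and take the Euclidean norm. A sketch assuming the principal point (cx, cy) and focal length f in pixels are known from calibration (all numbers below are illustrative, not from the question):

```python
import math

def pix_distance_from_camera(x1, y1, x2, cam_distance, f, cx, cy):
    """Euclidean distance of a pixel's 3-D point from the LEFT camera
    center, rather than its perpendicular distance from the baseline.
    cam_distance is the baseline B, f the focal length in pixels, and
    (cx, cy) the principal point, all assumed known from calibration."""
    disparity = x1 - x2
    z = abs(cam_distance * f / disparity)    # perpendicular depth, as before
    x = (x1 - cx) * z / f                    # lateral offset in world units
    y = (y1 - cy) * z / f                    # vertical offset in world units
    return math.sqrt(x * x + y * y + z * z)  # ray length from the camera

# Illustrative numbers: B = 0.5 m, f = 700 px, principal point (320, 240)
print(round(pix_distance_from_camera(400, 240, 380, 0.5, 700.0, 320.0, 240.0), 3))  # 17.614
```

For a pixel at the principal point the two distances coincide, and they diverge towards the image edges. Note this still assumes rectified views; with a moving vehicle and a non-horizontal baseline, the images should be rectified (or the relative pose estimated) before applying the disparity formula at all.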

Display 2-D GIS polygon in 3-D street-level view

I would like to drape some 2-D GIS polygons onto a '3-D' street-level view.
As a picture is worth a thousand words, please check this:
So I have the azimuth angle, the view angle, and the position where the image was taken, along with the field limit in 2-D, and I would like to have the field plotted on my street-level image (as in the top image).
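A starting point is to project each polygon vertex into the image using the camera azimuth and field of view. The sketch below handles only the horizontal placement under a pinhole model; true draping also needs the camera height and tilt (ignored here), and all parameter values are illustrative:

```python
import math

def column_for_point(cam_xy, pt_xy, azimuth_deg, hfov_deg, image_width):
    """Horizontal pixel column of a ground point in a street-level photo.
    cam_xy and pt_xy are (easting, northing) in the same metric CRS;
    azimuth_deg is the viewing direction (clockwise from north) and
    hfov_deg the horizontal field of view. Pinhole sketch: the vertical
    placement would additionally need camera height and tilt."""
    dx = pt_xy[0] - cam_xy[0]
    dy = pt_xy[1] - cam_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))                 # clockwise from north
    offset = (bearing - azimuth_deg + 180.0) % 360.0 - 180.0   # signed view offset
    # Pinhole projection: tan(offset) maps linearly onto the image plane
    half_w = image_width / 2.0
    return half_w + half_w * math.tan(math.radians(offset)) / math.tan(math.radians(hfov_deg / 2.0))

# A point straight ahead of the camera lands in the image center
print(column_for_point((0, 0), (0, 100), azimuth_deg=0, hfov_deg=60, image_width=1000))  # 500.0
```

A point whose bearing offset equals half the field of view lands exactly on the image edge; points behind the camera (offset beyond 90 degrees) must be culled before projecting.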

How to merge 2 CvRects with minimal distance that are result of cvContour

In my project I use cvFindContours to detect objects.
With the result(s), I want to mark the ROIs of the input image (if the distance between the detected blobs is high, I want to iterate the tagging of the ROIs).
My problem is that a few rects from the found blobs overlap or are part of a bigger blob.
Is there a fast solution to remove inner blobs and merge blobs within a minimal distance?
For example:
You can check if rectangles are overlapping using operator& of cv::Rect:
cv::Rect a(x1,y1,w1,h1);
cv::Rect b(x2,y2,w2,h2);
cv::Rect intersect = a & b; // if intersect is non-empty, the rects overlap
As for your "minimal distance", there is no standard OpenCV function for that. You have to decide what the "distance" between two rectangles means: the distance between their centers (not recommended)? The distance between their borders? Also keep in mind that you have two dimensions. You can do it, but you have to code it yourself.
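The merge itself is short to code. A sketch in Python (the same logic ports directly to C++, where the union is cv::Rect's operator|): it measures border-to-border distance, so inner and overlapping rects have distance 0 and are absorbed, and rects closer than max_gap are unioned until nothing changes. The (x, y, w, h) rect format mirrors cv::Rect.

```python
import math

def rect_gap(a, b):
    """Border-to-border distance of two rects (x, y, w, h);
    0 when they touch, overlap, or one contains the other."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    dx = max(bx - (ax + aw), ax - (bx + bw), 0)
    dy = max(by - (ay + ah), ay - (by + bh), 0)
    return math.hypot(dx, dy)

def union(a, b):
    """Smallest rect containing both a and b (cv::Rect operator| in C++)."""
    x = min(a[0], b[0])
    y = min(a[1], b[1])
    return (x, y, max(a[0] + a[2], b[0] + b[2]) - x,
            max(a[1] + a[3], b[1] + b[3]) - y)

def merge_rects(rects, max_gap):
    """Repeatedly merge any pair of rects closer than max_gap;
    inner rects (gap 0) are absorbed automatically."""
    rects = list(rects)
    merged = True
    while merged:
        merged = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if rect_gap(rects[i], rects[j]) <= max_gap:
                    rects[i] = union(rects[i], rects[j])
                    del rects[j]
                    merged = True
                    break
            if merged:
                break
    return rects

# An inner rect and a nearby rect collapse into one merged ROI
print(merge_rects([(0, 0, 10, 10), (2, 2, 3, 3), (12, 0, 5, 5)], max_gap=3))  # [(0, 0, 17, 10)]
```

The quadratic restart loop is fine for the handful of blobs a contour pass typically yields; for thousands of rects a spatial index would be worth adding.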
