Get Bounding Polygon from contour images - opencv

I have a dataset of contour images.
In my dataset, each image contains a SINGLE object (on a black background) that corresponds to a contour image, i.e. an image of a particular contour detected earlier.
I just have these images, and no other contour information.
I need to get the contour polygon (height, width, polygon coordinates) for each image so that I can use this dataset for training TensorFlow models.
Would running cv2.findContours() make sense (since each image is already a single contour), or is there a faster way to extract the bounding polygon from these contour images?
Thank you so much in advance.
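cv2.findContours() is the natural fit here, and on a clean single-object mask it is cheap. A minimal sketch, assuming OpenCV 4 (where findContours returns two values) and 8-bit grayscale inputs; the function name and the epsilon_frac parameter are illustrative choices, not a fixed API:

```python
import cv2

def polygon_from_contour_image(path, epsilon_frac=0.01):
    """Extract width, height, and an approximate polygon from a single-contour image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Anything brighter than the black background counts as the object
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep the largest contour in case stray noise pixels create extras
    contour = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(contour)
    # Simplify the contour into a polygon; epsilon controls the tolerance
    epsilon = epsilon_frac * cv2.arcLength(contour, True)
    polygon = cv2.approxPolyDP(contour, epsilon, True)
    return w, h, polygon.reshape(-1, 2)
```

Since each mask traces a single shape, RETR_EXTERNAL with CHAIN_APPROX_SIMPLE keeps the work minimal, and cv2.approxPolyDP reduces the point count to something suitable for training labels.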

Related

Calculate the Correct Translation and Scaling Transformations for Segmentation Polygons

I have an image with a list of segmentation polygons. Getting the bounding box from each polygon is easy enough, and I used the bounding boxes to crop out the objects of interest.
Now, I want to scale the polygons so that they wrap around the cropped object. I scaled the polygons using the ratio of the original image size to the crop size. However, I do not know how to add the correct offsets to the bounding boxes. As a result, the polygon points incorrectly wrap around the object.
For object crops, what is the correct transformation for scaling and translating arbitrary points within the crops?
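The usual pitfall is scaling before translating. Below is a minimal sketch of the point mapping, assuming crop boxes given as (xmin, ymin, xmax, ymax) in original-image pixels; polygon_to_crop is a hypothetical helper name:

```python
import numpy as np

def polygon_to_crop(polygon, crop_box, out_size=None):
    """Map (x, y) polygon points from original-image coordinates into crop coordinates."""
    xmin, ymin, xmax, ymax = crop_box
    pts = np.asarray(polygon, dtype=np.float64)
    # 1. Translate first: move the origin to the crop's top-left corner
    pts = pts - [xmin, ymin]
    # 2. Scale second, and only if the crop was resized afterwards,
    #    using the ratio of the output size to the CROP size
    if out_size is not None:
        out_w, out_h = out_size
        pts = pts * [out_w / (xmax - xmin), out_h / (ymax - ymin)]
    return pts
```

The key is the order of operations: subtract the crop origin, then scale by output-size / crop-size (not original-size / crop-size).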

Coordinates of bounding box in an image

I am doing object detection in order to count penguins on a UAV georeferenced dataset, so for practical reasons let's say they appear as dots on the images. After running the object detection model, it returns inferred images with the corresponding bounding boxes for each penguin detected.
I need to extract the coordinates of the center of each bounding box (an x, y pair); since the images are georeferenced, I could then convert the box-center coordinates into GPS coordinates.
This picture is a good example. Here, the authors are counting banana plants, and after detecting the plants of the same regions in 3 differently-treated pictures of the same area, they see that up to three boxes appear around some of the plants (left). So, in order to count each plant once even when it has up to 3 bounding boxes, this is what they do (quoted from the original article):
1. Collect the bounding boxes of the detections from each ROI tile.
2. Calculate the centroid of each bounding box.
3. Add the tile number information to the x and y values of the centroids to overlay them on the original ROI image.
Step 3 is exactly what I am looking for: how do I calculate the centroid of each bounding box and obtain its x, y coordinates? Since the image is georeferenced, I could then transform those coordinates into real-world ones and display each point on a mosaic.
Thank you very much in advance.
You could use the Intersection over Union (IoU) metric to select one of the overlapping boxes, and then use the coordinates of the selected box to plot the output circle or box over the detected objects.
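Steps 2 and 3 reduce to a few lines once the boxes and the tile grid layout are known. A minimal sketch, assuming boxes given as (xmin, ymin, xmax, ymax) in tile-local pixels and tiles laid out on a regular grid; all function names here are illustrative:

```python
def bbox_centroid(box):
    """Centroid of an axis-aligned box given as (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = box
    return (xmin + xmax) / 2.0, (ymin + ymax) / 2.0

def centroid_in_mosaic(box, tile_col, tile_row, tile_w, tile_h):
    """Shift a tile-local centroid into full ROI-image coordinates (step 3)."""
    cx, cy = bbox_centroid(box)
    return cx + tile_col * tile_w, cy + tile_row * tile_h

def iou(a, b):
    """Intersection over Union of two (xmin, ymin, xmax, ymax) boxes,
    useful for merging duplicate detections of the same plant."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```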

How to extract the paper contours in this image (opencv)?

I'm trying to extract the geometries of the papers in the image below, but I'm having some trouble grabbing the contours. I don't know which thresholding algorithm to use (here I used a static threshold of 10, which is probably not ideal).
And as you can see, I can detect the correct number of papers, but I can't get the proper bounds using this method.
Simply applying Otsu doesn't work either; it fails to capture the geometries.
I assume I need some edge detection, but I'm not sure what to do after applying Canny or another edge detector.
I also tried Sobel in both directions (positive and negative in x and y), but I'm unsure how to extract the contours from those results.
How do I grab these contours?
Below are some previews of the intermediate images leading up to the final convex-hull results.
[Image previews: Original Image · Sharpened · Dilate, Sharpen, Erode, Sharpen · Convex hulls of approximated polygons (which doesn't fully capture the desired regions)]
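One common approach for scenes like this is to let Canny find the paper edges, close the gaps morphologically, and only then extract contours. A rough sketch, assuming OpenCV 4; the filename and all thresholds are placeholders to tune:

```python
import cv2

img = cv2.imread("papers.jpg")  # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
# Edge detection instead of a global threshold; 50/150 are starting values
edges = cv2.Canny(blurred, 50, 150)
# Close small gaps so each paper outline becomes one closed contour
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Discard contours too small to be a sheet of paper
papers = [c for c in contours if cv2.contourArea(c) > 1000]
hulls = [cv2.convexHull(c) for c in papers]
```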

Should I gray scale the image?

I'm categorizing 30 types of clothes from images using the R-CNN Object Detection library from TensorFlow: https://github.com/tensorflow/models/tree/master/research/object_detection
Does color matter when we collect images for training and testing?
If I put only purple and blue shirts, I guess it won't recognize red shirts?
Should I grayscale all images to detect the types of clothes? :)
Yes, colour does matter. The underlying visual feature extraction is based on a convolutional neural network, pre-trained to perform image recognition on colour images in the ImageNet dataset.
The R-CNN repository's instructions on bringing in your own dataset ask for RGB images.
Dataset Requirements
For every example in your dataset, you should have the following information:
An RGB image for the dataset encoded as jpeg or png.
A list of bounding boxes for the image. Each bounding box should contain:
Bounding box coordinates (with the origin in the top-left corner), defined by 4 floating-point numbers [ymin, xmin, ymax, xmax]. Note that the normalized coordinates (x / width, y / height) are stored in the TFRecord dataset.
The class of the object in the bounding box.
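The normalization note is the part that most often trips people up when writing TFRecords. A minimal sketch of the conversion, assuming pixel-space corners in the [ymin, xmin, ymax, xmax] order the instructions use; normalize_box is a hypothetical helper:

```python
def normalize_box(ymin, xmin, ymax, xmax, img_w, img_h):
    """Convert pixel corners to the normalized [0, 1] values stored in the
    TFRecord: x-coordinates are divided by width, y-coordinates by height."""
    return ymin / img_h, xmin / img_w, ymax / img_h, xmax / img_w

# e.g. a 100x200 box at (x=20, y=10) in a 640x480 image:
# normalize_box(10, 20, 110, 220, img_w=640, img_h=480)
# -> (0.0208..., 0.03125, 0.2291..., 0.34375)
```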

Creating contour and then perform pixel analysis (OpenCV)

I have an RGB image and a binary mask (one channel), and I want to create contours for the RGB image based on the connected pixels of the binary mask. After that, I want to compare the pixel values (e.g. check whether each pixel inside the contours has a blue value > 150). How can I implement this using OpenCV?
Thanks a lot!
Assuming the images are the same size and shape, simply scan over the pixels in the binary image looking for the contours, and check the pixel values at the same row/column in the colour image.
See Fastest way to extract individual pixel data? for details
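An alternative to a manual pixel scan is to let OpenCV do the region bookkeeping: find the contours in the mask, rasterize each one into a filled region, and index the colour image with it. A minimal sketch, assuming OpenCV 4 and same-sized images; the filenames are placeholders and the 150 threshold comes from the question:

```python
import cv2
import numpy as np

rgb = cv2.imread("image.png")   # placeholder filenames
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for i, contour in enumerate(contours):
    # Rasterize this contour's interior so it can index the RGB image
    region = np.zeros(mask.shape, dtype=np.uint8)
    cv2.drawContours(region, [contour], -1, 255, thickness=cv2.FILLED)
    pixels = rgb[region == 255]             # (N, 3) array of BGR values
    # OpenCV loads images as BGR, so channel 0 is blue
    print(i, "all blue > 150:", bool(np.all(pixels[:, 0] > 150)))
```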
