Creating contours and then performing pixel analysis (OpenCV) - image-processing

I have an RGB image and a single-channel binary mask, and I want to create contours for the RGB image based on the connected pixels of the binary mask. After that I want to check the pixel values inside each contour (e.g. whether each pixel has a blue value > 150). How can I implement this with OpenCV?
Thanks a lot!

Assuming the images are the same size and shape, simply scan over the pixels in the binary image looking for the contours, and check the pixel values at the same row/column in the color image.
See Fastest way to extract individual pixel data? for details
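If you'd rather let OpenCV do the scanning, something like the following (untested) sketch should work, assuming OpenCV 4's findContours signature; the file names are placeholders:

```python
import cv2
import numpy as np

# Placeholder file names; any same-sized BGR image and binary mask will do.
image = cv2.imread("image.png")                       # 3-channel BGR image
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # 1-channel binary mask

# Find the contours of the connected regions in the binary mask.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for i, contour in enumerate(contours):
    # Rasterise this contour into its own mask so we can index the color image.
    region = np.zeros(mask.shape, dtype=np.uint8)
    cv2.drawContours(region, [contour], -1, 255, thickness=cv2.FILLED)

    # OpenCV stores images as BGR, so channel 0 is blue.
    blue = image[:, :, 0][region == 255]
    print(f"contour {i}: {np.count_nonzero(blue > 150)} of {blue.size} pixels have blue > 150")
```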

Related

Fitting a 2d Gaussian with data points on python

I have a 2048x2048 image/array with some blobs, and I am trying to fit a 2D Gaussian to each blob to find its width. Image with blob
I know the position of the blobs, their intensity, and an approximation of the blob radii.
What can I do?
All the code that I found needs the Gaussian width...
Thanks
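One common approach (not specific to any particular library code) is to fit an isotropic 2D Gaussian with scipy.optimize.curve_fit, using the known position and intensity as the initial guess and leaving the width (sigma) as a free parameter. A rough sketch, with a hypothetical fit_blob helper and an arbitrary window size:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, amplitude, x0, y0, sigma, offset):
    # Isotropic 2D Gaussian; curve_fit expects a flat 1-D output.
    x, y = coords
    g = offset + amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return g.ravel()

def fit_blob(image, x0, y0, amplitude, half_size=15):
    # Crop a window around the known blob position so neighbouring blobs don't interfere.
    patch = image[y0 - half_size:y0 + half_size, x0 - half_size:x0 + half_size]
    y, x = np.mgrid[y0 - half_size:y0 + half_size, x0 - half_size:x0 + half_size]

    # Initial guess: the known position/intensity, a rough sigma, and the local background.
    p0 = (amplitude, x0, y0, 3.0, float(patch.min()))
    popt, _ = curve_fit(gaussian_2d, (x, y), patch.ravel(), p0=p0)
    return popt  # amplitude, x0, y0, sigma, offset -- sigma is the fitted width
```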

Blurring image with RGB values without convolving it with a kernel

I'm using an app for face redaction that doesn't give access to its source code; it only lets me pass pixel values for the red, green, and blue channels, and it then fills every pixel of the ROI with those same values. For example, if I pass Red=32, Blue=123, and Green=233, it assigns those RGB values to every pixel of the ROI and draws a colored patch over the face.
So I was wondering: is there a general combination of RGB values for a pixel that distorts it and makes it look as if it's blurred? I can also set the opacity value in the app.
Thanks.
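A single RGB triple can only produce a flat patch, not a true blur, but passing the ROI's own mean color (perhaps combined with a lower opacity) is the closest you can get with that interface. A minimal numpy sketch of computing that mean, assuming an RGB array and placeholder ROI coordinates:

```python
import numpy as np

def roi_mean_rgb(image, roi):
    # roi = (row_start, row_end, col_start, col_end); placeholder face-region coordinates.
    r0, r1, c0, c1 = roi
    mean = image[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)
    r, g, b = (int(round(v)) for v in mean)
    return r, g, b  # values to hand to the app so the flat patch matches the face tone
```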

Normalize an image using the mean pixel value in a ROI

I want to normalize several images in ImageJ using the mean pixel value in a ROI, so that after normalization the mean in this ROI has the same value in all the images. How can I do it? Thanks
It is hard to say without a particular example, but a priori I would select the ROI and press Ctrl+M to measure the region. If it is a greyscale image you should obtain the mean of the grey pixels. You can then use this value to divide all the pixels in your image with the Divide function under the Process > Math menu. If you calculate the mean for each image and use that value to divide the corresponding image, the ROI should end up with the same mean value in all of your pictures.
I hope this helps!
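Outside ImageJ, the same idea is only a few lines of numpy, if that helps to sanity-check the result (the ROI coordinates below are placeholders):

```python
import numpy as np

def normalize_by_roi_mean(image, roi):
    # roi = (row_start, row_end, col_start, col_end); placeholder coordinates.
    r0, r1, c0, c1 = roi
    roi_mean = image[r0:r1, c0:c1].mean()
    # Dividing by the ROI mean makes that region's mean equal to 1 in every image,
    # so all normalized images share the same ROI mean.
    return image.astype(np.float64) / roi_mean
```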

Get Bounding Polygon from contour images

I have a dataset of contour images.
In my dataset, each image contains a SINGLE object (on a black background) that corresponds to a contour image, i.e. an image generated earlier from a particular detected contour.
I just have these images, and no other contour information.
I need to get the contour polygon (height, width, polygon coordinates) for each image so that I can use this dataset for training TensorFlow models.
Will running cv2.findContours() make sense (since each image is already a single contour), or is there a faster way to extract the bounding polygon from the contour images?
Thank you so much in advance.
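For what it's worth, cv2.findContours() handles this case directly. A rough sketch, assuming OpenCV 4 and that each file is already a binary image containing a single white object:

```python
import cv2

def bounding_polygon(path):
    # Placeholder loader: each file holds a single white object on a black background.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Each image contains a single object, so take the largest contour found.
    contour = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(contour)

    # Simplify the contour into a polygon; epsilon controls how aggressively it is simplified.
    epsilon = 0.01 * cv2.arcLength(contour, True)
    polygon = cv2.approxPolyDP(contour, epsilon, True).reshape(-1, 2)
    return w, h, polygon
```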

Analyzing an Image's histogram

I have a photo editing app that is built around Brad Larson's amazing GPUImage framework.
I need a way to analyze an image's histogram so I can return its range, meaning the real range of "activity".
I need this to improve a tool I have in the app that controls the RGB composite curve.
Is there a way to analyze or retrieve hard numbers from the histogram filter in GPUImage? Or any other way to do it?
As I document in the Readme.md for the GPUImageHistogramFilter:
The output of this filter is a 3-pixel-high, 256-pixel-wide image with
the center (vertical) pixels containing pixels that correspond to the
frequency at which various color values occurred. Each color value
occupies one of the 256 width positions, from 0 on the left to 255 on
the right. This histogram can be generated for individual color
channels (kGPUImageHistogramRed, kGPUImageHistogramGreen,
kGPUImageHistogramBlue), the luminance of the image
(kGPUImageHistogramLuminance), or for all three color channels at once
(kGPUImageHistogramRGB).
To get the numerical values for the histogram, have the GPUImageHistogramFilter output to a GPUImageRawDataOutput and grab bytes from that. The result will be a 256x3 (width, height) array of 0-255 values indicating the intensity at each color component. You can ignore the first and last rows, as the values are only present in the center row.
From there, you can analyze the histogram obtained by this operation.
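For the "range of activity" itself, once the 256 per-channel histogram values have been copied out of the GPUImageRawDataOutput, finding the first and last bins with meaningful counts is straightforward. The sketch below shows the idea in numpy rather than Objective-C; the threshold is an arbitrary cut-off to tune:

```python
import numpy as np

def active_range(histogram, threshold=0.001):
    # histogram: the 256 values for one channel, copied out of the raw data output.
    # A bin counts as "active" once it holds more than `threshold` of the total mass.
    hist = np.asarray(histogram, dtype=np.float64)
    hist /= hist.sum()
    active = np.nonzero(hist > threshold)[0]
    if active.size == 0:
        return 0, 255
    return int(active[0]), int(active[-1])  # darkest and brightest "active" levels
```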
