I'm processing depth images from a Kinect sensor using OpenCV with the Emgu wrapper, for motion detection based on the background subtraction technique. In frames from the Kinect I've noticed areas with white spots, which I would like to filter out, i.e. recolor to match the background. Which OpenCV technique/function should be used for this purpose?
The white areas are shown in this picture:
inpaint will do that. For this:
Create a mask corresponding to the region to be filled; since the spots are bright, a binary threshold with a high value creates a mask that is non-zero exactly where the spots are (inpaint fills the pixels where the mask is non-zero).
Now apply inpaint on the source with the above mask, and adjust inpaintRadius until you get a better result.
You can also apply an erosion filter after thresholding.
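Here is a minimal sketch of that pipeline in OpenCV C++ (the equivalent calls exist in the Emgu wrapper), assuming the depth frame has already been converted to an 8-bit image; the threshold value (240) and inpaintRadius (5) are illustrative and should be tuned:

    #include <opencv2/imgproc.hpp>
    #include <opencv2/photo.hpp>

    // Fill bright spots in an 8-bit depth frame via inpainting.
    cv::Mat fillWhiteSpots(const cv::Mat& depth8u)
    {
        // Mask is non-zero exactly at the bright spots to be filled.
        cv::Mat mask;
        cv::threshold(depth8u, mask, 240, 255, cv::THRESH_BINARY);

        // Optional: erode to trim noisy single-pixel responses.
        cv::erode(mask, mask, cv::Mat());

        cv::Mat result;
        cv::inpaint(depth8u, mask, result, 5.0, cv::INPAINT_TELEA);
        return result;
    }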
Even though I am using OpenCVForUnity, I don't think the problem is about the wrapper, but about OpenCV in general.
I basically just use this script (which is based on the native C++ ArUco implementation) to detect contours by calling the FindRectangularContours function. If the markers are aligned with the camera, i.e. parallel to the image border, most of the rectangles are not detected, but when I rotate the camera they are detected.
The grayscale image is just there to show the contours; the found contours are outlined in green.
I would like to use GPUImage's Histogram Equalization filter (link to .h) (link to .m) for a camera app. I'd like to use it in real time and present it as an option to be applied on the live camera feed. I understand this may be an expensive operation and cause some latency.
I'm confused about how this filter works. When selected in GPUImage's example project (Filter Showcase), the filter shows a very dark image that is biased toward red and blue, which does not seem to be the way equalization should work.
Also what is the difference between the histogram types kGPUImageHistogramLuminance and kGPUImageHistogramRGB? Filter Showcase uses kGPUImageHistogramLuminance but the default in the init is kGPUImageHistogramRGB. If I switch Filter Showcase to kGPUImageHistogramRGB, I just get a black screen. My goal is an overall contrast optimization.
Does anyone have experience using this filter? Or are there current limitations with this filter that are documented somewhere?
Histogram equalization of RGB images is done using the luminance, as equalizing the RGB channels separately would corrupt the colour information.
You basically convert RGB to a colour space that separates colour from intensity information, equalize the intensity channel, and finally convert back to RGB.
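To illustrate the idea (this is an OpenCV C++ sketch of the same approach using the YCrCb colour space, not GPUImage's actual code):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Equalize only the luminance channel so colour information is preserved.
    cv::Mat equalizeLuminance(const cv::Mat& bgr)
    {
        cv::Mat ycrcb;
        cv::cvtColor(bgr, ycrcb, cv::COLOR_BGR2YCrCb);

        std::vector<cv::Mat> channels;
        cv::split(ycrcb, channels);
        cv::equalizeHist(channels[0], channels[0]); // channel 0 = luminance
        cv::merge(channels, ycrcb);

        cv::Mat result;
        cv::cvtColor(ycrcb, result, cv::COLOR_YCrCb2BGR);
        return result;
    }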
According to the documentation: http://oss.io/p/BradLarson/GPUImage
GPUImageHistogramFilter: This analyzes the incoming image and creates an output histogram with the frequency at which each color value occurs. The output of this filter is a 3-pixel-high, 256-pixel-wide image with the center (vertical) pixels containing pixels that correspond to the frequency at which various color values occurred. Each color value occupies one of the 256 width positions, from 0 on the left to 255 on the right. This histogram can be generated for individual color channels (kGPUImageHistogramRed, kGPUImageHistogramGreen, kGPUImageHistogramBlue), the luminance of the image (kGPUImageHistogramLuminance), or for all three color channels at once (kGPUImageHistogramRGB).
I'm not very familiar with the programming language used, so I can't tell if the implementation is correct. But in the end, colours should not change too much; pixels should just become brighter or darker.
I am trying to detect ROI for a fixed repetitive pattern in an image using opencv C++.
The ROI I am trying to find is outlined with a red boundary in the picture:
I tried Canny edge detection after blurring, but it detects the edges of the vertical/horizontal black and white lines, which is not what I am trying to detect.
What is the best approach to my problem?
Since you're starting with a binary image, you could use findContours() to get the contours for the individual strips. Since there are a couple of solitary pixels from noise, you should then filter by size using contourArea(contour) and merge the points of all contours meeting your size criteria into a combined contour. Then get the bounding box for the combined contour with boundingRect(combinedContour).
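A sketch of that pipeline in OpenCV C++; the minimum area (100) is an illustrative value to tune for your image:

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Find the bounding box of all sufficiently large contours combined.
    cv::Rect findPatternROI(const cv::Mat& binaryImage)
    {
        cv::Mat work = binaryImage.clone(); // findContours may modify its input
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(work, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);

        // Merge the points of every contour that passes the size filter.
        std::vector<cv::Point> combined;
        for (const auto& contour : contours)
            if (cv::contourArea(contour) > 100.0)
                combined.insert(combined.end(), contour.begin(), contour.end());

        return cv::boundingRect(combined);
    }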
I have a random-shape bitmap cut out by the user. I want to fade out its borders, i.e. its contours, to make it appear smooth. What should I do? To get the borders and the colour of every pixel in the bitmap, I am traversing it pixel by pixel. It takes a long time, but I am OK with that. Is OpenCV my only option? If yes, can anybody point me towards a tutorial or suggest a logical approach?
You can just run a smoothing filter on your shape.
In OpenCV you can use the blur function or GaussianBlur. Look at http://docs.opencv.org/2.4/doc/tutorials/imgproc/gausian_median_blur_bilateral_filter/gausian_median_blur_bilateral_filter.html.
You don't have to use OpenCV, but I think it would be easier and faster. If you still don't want to, you can use any other code that implements image smoothing.
In case you just want to affect the border pixels, do the following (see the sketch after this list):
1. Make a copy of the original image.
2. Filter the entire image.
3. Extract the border pixels using OpenCV's findContours.
4. Copy the pixels on the border and in their neighborhood from the blurred image into the copy you made in step 1.
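A sketch of these steps in OpenCV C++, assuming you also have a binary mask of the cut-out shape (shapeMask below is a hypothetical input); the border width (5 px) and kernel size (9) are illustrative:

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Blur only the border region of a cut-out shape.
    cv::Mat smoothBorder(const cv::Mat& shape, const cv::Mat& shapeMask)
    {
        // Step 1: copy of the original image.
        cv::Mat result = shape.clone();

        // Step 2: filter the entire image.
        cv::Mat blurred;
        cv::GaussianBlur(shape, blurred, cv::Size(9, 9), 0);

        // Step 3: extract the shape's border with findContours.
        cv::Mat work = shapeMask.clone(); // findContours may modify its input
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(work, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);

        // Step 4: build a mask covering the border and its neighborhood,
        // then copy the blurred pixels there into the result.
        cv::Mat borderMask = cv::Mat::zeros(shape.size(), CV_8UC1);
        cv::drawContours(borderMask, contours, -1, cv::Scalar(255), 5);
        blurred.copyTo(result, borderMask);

        return result;
    }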
I'm required to create a map of galaxies based on the following image:
http://www.nasa.gov/images/content/690958main_p1237a1.jpg
Basically I need to smooth the image first using a mean filter then apply thresholding to the image.
However, I'm also asked to detect only the large galaxies in the image. So should I adjust the smoothing mask or the thresholding to achieve that goal?
Both: by smoothing the picture first, the pixels around smaller galaxies will "blend" with the black space and, thus, shift to a lower intensity value. This lower intensity can then be thresholded, leaving only the white centres of bigger galaxies.
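For illustration, a minimal OpenCV C++ sketch of that two-step idea; the kernel size (15) and threshold (200) are assumed values, where a larger kernel and/or a higher threshold suppresses smaller galaxies:

    #include <opencv2/imgproc.hpp>

    // Mean-filter the image, then threshold: small galaxies blend into the
    // dark background during smoothing and fall below the threshold.
    cv::Mat detectLargeGalaxies(const cv::Mat& gray)
    {
        cv::Mat smoothed;
        cv::blur(gray, smoothed, cv::Size(15, 15)); // mean (box) filter

        cv::Mat galaxyMap;
        cv::threshold(smoothed, galaxyMap, 200, 255, cv::THRESH_BINARY);
        return galaxyMap;
    }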