I am new to OpenCV and I have a project where I need to detect bright white blobs on a grey background. Could someone suggest a possible solution, ideally using OpenCV?
Original Image:
Blobs to be detected:
I'm trying to use pytesseract to recognize text from this image, but I'm unable to get satisfactory results.
I've tried a number of things to make it easier for Tesseract to recognize the text. My Tesseract version is 5.0.
Removed color from the image, leaving only black and white
Converted it to grayscale before reading it
Applied a Gaussian blur
Upscaled the image so it could be read more reliably
Applied an inverse threshold to make the text stand out more, but still no positive outcome.
Image
Black and White Image
Black and White Binary image
I have a label over part of a UIImageView. If the image is too bright, the white text cannot be read. Is there any easy way to detect how bright a portion of the image is?
There is no "easy" way of doing it, at least not for the processor. For the developer, the easiest approach is to get access to the raw RGBA buffer of the image data and work out the average color. Then convert that color to HSV and check the value channel to determine the brightness. You can even use the GPU to make things a bit quicker; OpenGL should be perfect for that.
But before you get too far: the result will most likely not be what you are looking for. There are always cases that will make your text unreadable no matter what color it is. Suppose you have white text and switch it to black once the image is too bright, but the image consists purely of black and white stripes, so that every other letter sits over a white stripe and the rest over black. The text is simply unreadable.
I suggest you try a stroke, a drop shadow, or a background instead. For instance, you can put white text on a label and use a semi-transparent black background color with some layer corner radius. The background will be barely visible on all but the brightest images, and the text will always be readable.
I'm looking to write a script to look over a series of images that are essentially white canvas with a few black rectangles on them.
My question is this: what's the best modus operandi that would identify each of the black rectangles in turn.
Obviously I'd scan the image pixel by pixel and work out whether its colour is black or white. So far so good. Identifying and isolating each rectangle - now that's the tricky part :) A pointer in the right direction would be a great help, thank you.
I'm trying to do Floodfill on UIImage and ended up using OpenCV framework. I can replace the color with a solid color by defining the color as cv::Scalar(255,0,0). However I want the floodfill selection to be transparent.
I don't know how to define a transparent color in OpenCV, and to the best of my knowledge it isn't possible; the only option seems to be merging the image onto a transparent background. But it doesn't make much sense to floodfill with a solid color and then merge with a transparent layer, as the result would just be the original image with solid color in the fill areas.
Please correct me if I'm wrong.
Much appreciate your help in solving this.
Cheers
You cannot define a transparent color in OpenCV, since floodfill does not operate on an alpha channel.
However, there is a workaround. You may first refer to this question to create a mask of your floodfill area. Then you can easily calculate the alpha channel from this mask.
I need to process some images in a real-time situation. I am receiving the images from a camera using OpenCV. The language I use is C++. An example of the images is attached. After applying some threshold filters I have an image like this. Of course there may be some pixel noise here and there, but not much.
I need to detect the center and the rotation of the squares, and the center of the white circles. I'm totally clueless about how to do it, as it needs to be really fast. The number of the squares can be predefined. Any help would be great, thanks in advance.
Is the following straightforward approach too slow?
Binarize the image so that the originally green background is black and the rest (black squares and white dots) is white.
Use cv::findContours.
Get the centers.
Binarize the image so that everything except the white dots is black.
Use cv::findContours.
Get the centers.
Assign every dot contour to the square contour that contains it.
Calculate each square's rotation from the angle of the line between its center and the center of its dot.