Straightening Jagged/Pixelated Edges - image-processing

Suppose I have a simple black and white image with some pixelated edges, and I want to make them look clean and straight. I have attached an example of what I mean. I want to make all the edges straight instead of having the pixelated, "sawtooth" look to them. What is the easiest way to go about this, short of editing pixel by pixel with a paintbrush (which would be difficult to do without breaking the tessellation and without making the shapes non-uniform)? The image is jagged because it is based on a low-resolution image I got off the internet.

First idea: apply a Gaussian blur filter and then binarize the image. This only softens the sharp corners a little, but it is easy to implement.

Related

Image Processing - Film negative cutting

I'm trying to figure out how to automatically cut some images like the one below (this is a negative film), basically, I want to remove the blank parts at the top and at the bottom. I'm not looking for complete code for it, I just want to understand a way to do it. The language is not important at this point, but I think this kind of thing normally is accomplished with Python.
I think there are several ways to do that, ranging from simple to complex. You can see the problem either as detecting white rectangles or as segmenting the image.
I can suggest OpenCV (which is available in several languages, including Python); you can have a look at the image-processing examples here.
First we need to find the white part, then remove it.
Finding the white part
Thresholding
Let's start with an easy one: thresholding
Thresholding means dividing the image into two parts (usually black and white). You do that by selecting a threshold (in your case, the threshold would be towards white, or black if you invert the image). By doing so, however, you may also threshold some parts of the image itself (for example the chickens and the white part above the kid). Do you have information about the position of the white stripes? Are they always at the top and bottom of the image? If so, you can apply the thresholding operation only to the top 25% and bottom 25% of the image. You would then most likely not "ruin" the image.
Finding the white rectangle
If that does not work, or you would like to try something else, you can see the white stripes as rectangles and try to find their contours. You can see how in this tutorial. In this case you do not get a binary image but a bounding box of each white area. You will most likely find the chickens in this case too, but by looking at the bounding boxes it is easy to tell which ones are correct and which are not. You can also check this by calculating the area of each bounding box (width * height) and keeping only the big ones.
Removing the part
Once you have the binary image (white part vs. non-white part) or the bounding boxes, you have to crop the image. This too can be done in several ways, but I think the easiest one is to just keep the central part of the image. For example, if the image has H pixels vertically, you would keep only the pixels from H1 (the height of the first white stripe) to H - H2 (where H2 is the height of the second white stripe). There is no tutorial on cropping, but there are several questions here on SO, for example this one.
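The central-crop step is then just array slicing; `H1` and `H2` here are hypothetical stripe heights measured in the previous step:

```python
import numpy as np

img = np.zeros((200, 300), np.uint8)   # stand-in for the scanned negative
H1, H2 = 30, 25                        # hypothetical stripe heights
cropped = img[H1:img.shape[0] - H2]    # keep only the central part
```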
Additional notes
You could use more advanced segmentation algorithms such as watershed, or even learn and use advanced techniques such as machine learning (here is an article); as you can see, the rabbit hole is pretty deep in this case. But I believe that would be overkill, and the easy techniques above should already give you some results in this case.
Hope this was helpful and have fun!

Find best rectangular fit for segmented contour

After segmentation of objects in noisy data, I need to fit the best possible rectangular fit.
Currently I just use OpenCV findContours and minAreaRect, which gives me a rectangle around the whole contour. I know that those objects will always be horizontal in the image, with at most a small angle, like in this image.
This can be seen as the green rectangle in the images; however, I would need something like the red drawn rectangles, or even just the middle line (blue), since that's what I need in the end.
Further, I also do have some conjunctions, like seen in this image:
Here I want to only detect the horizontal part and maybe know that there could be a junction.
Any idea how to solve this problem? I need a fast approach and have not found anything feasible yet.
I got much better results using a distance transform (as mentioned by @Micka) on the masked image, afterwards finding the line with the biggest distance as the middle of the rectangle (using some filters, cutting off the curve), and in the end fitting a line on the middle estimate.

How to do a perspective transformation of an image which is missing corners using opencv java

I am trying to build a document scanner using OpenCV, and I am trying to auto-crop an uploaded image. I have a few use cases where there is a gap in the border because the document is partly out of frame in the captured image.
Ex image
Below is the canny edge detection of the given image.
The borders are missing here and findContours does not return me proper results due to this.
How can I handle such images?
Both automatic Canny edge detection and dilation fail in such cases, because dilation can only join small gaps in the edges.
Also, a few documents might have only 2 or 3 sides captured by the camera; how can we crop the other areas which are not required?
Example Image:
Is there any specific technique for handling such documents?
Please suggest a few ideas.
Your problem is unusual. One way to solve it that comes to my mind is to:
Add white borders around image.
https://docs.opencv.org/3.4/dc/da3/tutorial_copyMakeBorder.html
Find lines in edges
http://www.robindavid.fr/opencv-tutorial/chapter5-line-edge-and-contours-detection.html
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
Run probabilistic Hough lines (HoughLinesP)
Crop the image by these lines. That will surely work for an image like the 1st one.
For an image like the 2nd one you can use perpendicular and parallel lines.
For sure your algorithm must be pretty complex to work well. The easiest way is to take a picture of the whole document, if that is possible.
Good luck!

Opencv hough circles perform better if I resize image to double the size

I have an image like below :
I am trying to detect the circles via HoughCircles function.
Prior to detection, I threshold the image, and blur it via gaussian technique. The result is like follows :
The inverted image is larger because I found out that if I don't resize the image (keeping the same aspect ratio), the Hough circles algorithm goes nuts and finds either very few circles or a very wrong set of circles. I do understand the Hough transform algorithm to an extent. I use this snippet to detect the circles:
circles = cv2.HoughCircles(invertedBlurredImg, cv2.HOUGH_GRADIENT, 1, 30, param1=100, param2=23, minRadius=7, maxRadius=20)
I tried a lot of different dp values ranging from 1 to 2. I think that as it gets close to 2, the sensitivity drops and it becomes somewhat more possible to find circles in a bad-quality image. However, even if I don't enlarge the inverted image, I think the circles are quite clear, and I don't understand why it cannot find all of them unless I enlarge the image.
Here are the detected circle in case of the original sized image, and enlarged image, respectively.
What positive effect do I get from enlarging the image?
Does it work somewhat like dilation, because of the interpolation that happens during resizing to a larger image?
Thanks
You had a problem and you solved it in a different way. Your real problem is the parameters of HoughCircles: they are too high for your small circles. Instead of changing them, you changed the image size; that gave you good results because the enlarged image happens to suit the old parameters.
The solution is to tune your HoughCircles parameters until you get good results on the original image. I am pretty sure it is minRadius that needs to be decreased a bit.

How to remove the black portion from the top of the given image and make it the same as the rest of the background?

I am using the OpenCV library. Histogram equalization or normalization will not give a good output, and the sharpness of the bone also goes down.
I need an output that has sharp bone without the black area at the top. Please help.
Also, if my question is not clear, please give me feedback so that I can make it clearer. Thank you for your support and suggestions.
Picture link is here
This is caused by the different illumination in different parts of the image; it is common in X-ray images.
I think you need an adaptive approach for better results.
Try dividing the image into 2 parts, the centre and the borders. Apply the equalization in the centre and do the same in the border region.
You can also do it in blocks; this preserves the information within small tiles.
Look at this:
Equalization
