What would be a good method to remove this type of noise from an image?
Morphological closing (dilation then erosion) with a structuring element larger than the stroke achieves this:
It won't be possible to completely get rid of the residual irregularities.
Please do not accept this answer, and if you vote for it, please also vote for Yves's answer since I am merely illustrating how to implement his method and the credit is due to him.
So, you can just use ImageMagick at the command-line to do as Yves suggested like this:
convert ~/Desktop/utRjy.jpg -morphology close octagon:13 result.png
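For anyone working outside ImageMagick, roughly the same closing can be sketched with OpenCV in Python; the filename and the 13-pixel kernel are assumptions carried over from the command above, and the element has to stay larger than the stroke width:

import cv2

# Load the image as greyscale (filename taken from the command above).
img = cv2.imread("utRjy.jpg", cv2.IMREAD_GRAYSCALE)

# Structuring element larger than the stroke; an ellipse stands in for
# ImageMagick's octagon kernel here, and the size may need tuning.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (13, 13))

# Closing = dilation followed by erosion; it fills dark features smaller
# than the structuring element, i.e. the thin strokes.
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

cv2.imwrite("result.png", closed)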
I'm working on a text-extraction algorithm and I need some assistance with thresholding an image. My development platform is LabVIEW 2015 and I'm using "AutoBThreshold2.vi" from Vision Development Module 2015. I decided to go with Otsu's algorithm for thresholding, which is available as the "Inter Class Variance" method. Now, the problem is that I need to specify the "Look for" option to extract the text! Unfortunately, my input images will not always be the same.
Kindly refer to the attached source code and sample images. My question is: is there any way to find whether the image has dark or bright objects on a dark or bright background? Meanwhile, I'm also playing with the histogram to work out the background and foreground types!
I'd really appreciate your help...
With the help of the NI forum, I was able to solve this problem.
https://forums.ni.com/t5/LabVIEW/Auto-Thresholding-an-image-for-text-extraction/m-p/3904533#M1108133
Use the Equalize VI before thresholding to solve this problem; see the image below to find it.
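I can't paste the VIs here, but the same idea, equalize first and then let Otsu pick the threshold, looks roughly like this in Python/OpenCV; the polarity heuristic at the end is my own assumption for the "Look for" choice, not the exact NI forum solution:

import cv2
import numpy as np

# Example filename; any of the sample images would do.
img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)

# The "Equalize" step: spread the intensities before thresholding.
eq = cv2.equalizeHist(img)

# Otsu ("Inter Class Variance") picks the threshold automatically.
thresh, binary = cv2.threshold(eq, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Crude polarity check: text normally covers far fewer pixels than the
# background, so treat the minority side of the threshold as the objects.
dark_fraction = np.mean(eq < thresh)
look_for_dark_objects = dark_fraction < 0.5

# Flip so the text ends up white regardless of the background type.
if look_for_dark_objects:
    binary = cv2.bitwise_not(binary)

cv2.imwrite("binary.png", binary)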
I decided I wanted to change my gravatar to be circular. I have it circular on my blog with CSS and prefer the effect, so I decided to use a bit of ImageMagick to give my image a circular alpha channel (so I could have it on SO etc. as well). A couple of quick commands later I was sorted:
# crop to square 422x422 with 0x0 offset
convert mike_gravatar.jpeg -crop 422x422+0+0 mike_gravatar_square.jpeg
# give circular alpha channel around it NOTE this is for a 422x422 image (note the 211x211)
convert mike_gravatar_square.jpeg \( +clone -threshold -1 -negate -fill white -draw "circle 211,211,211,0" \) -alpha off -compose copy_opacity -composite mike_gravatar_circle.png
Brilliant. Now we just upload this to Gravatar, I'll have a nice circular cropped image, and all will be well with the world.
As you have probably guessed from the question, all is not well:
OK, so I must have messed up my ImageMagick commands and not checked that the before and after images are the same. I reopen the images next to one another and see that they are indeed the same. I try uploading to Gravatar again and notice that they seem to process the images after the "cropping" stage; here is what it looks like in the browser after the file upload (before the cropping messes it up):
Alright, let's do some digging; someone else must have stumbled upon this before. So I have a look around, and one lone soul in a desolate forum wasteland cries out. There is no response to it, but the relevant text is here:
It seems that if a photo or picture uploaded to Gravatar's cropper doesn't have jet black, it will auto-level the nearest grey to black and darken the whole image, including darkening whites into greys. Can confirm that this occurs with any PNG image that has a grey background or has a large enough proportion of it, whether or not it has 255 whites and regardless if it has alpha-blending or not.
So it seems like I can fix this by putting in a single black pixel. That sounds alright, so I try adding a black pixel, then a single black and a single white pixel. Result:
So basically now I'm out of ideas:
Does anyone have any idea what post-processing Gravatar does, so I can undo it or counteract its effects with pre-processing?
Is this "feature" documented anywhere? Can it be turned off or worked around?
I think it would be quite cool to preprocess the image to counteract the darkening they would do to it, but that would require knowing exactly what they do, and it obviously might not be possible (it depends on the relative movement of each colour, I suppose).
EDIT:
I tried making an inverse image to see whether the processing was based on the average or the extreme values; that was also darkened, so it seems more likely to be the average:
Alright, I've got a solution that "worked for me". Unfortunately it is purely empirical and merely "good enough" (I'd quite like to check what's actually happening, but I don't have any more time to devote to being nerd-sniped). I'm going to post what I did, what I think might have happened, and how I think this should be solved if someone has enough time.
First off, what I did: I simply tried lightening the image by various amounts, and what ended up working was a gamma of around 2:
convert mike_gravatar_circle.png -gamma 2 mike_gravatar_circle_light_2.png
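For what it's worth, ImageMagick's -gamma 2 (as I understand it) maps each normalised channel value through out = in ** (1/2), which lifts the midtones while leaving pure black and pure white alone. A minimal NumPy/Pillow sketch of the same pre-lightening, with filenames reused from the commands above:

import numpy as np
from PIL import Image

gamma = 2.0  # the value that happened to look right above

img = np.asarray(Image.open("mike_gravatar_circle.png").convert("RGBA")).astype(np.float32)

# out = in ** (1/gamma) on the colour channels; the alpha is left untouched.
rgb = img[..., :3] / 255.0
img[..., :3] = np.clip(rgb ** (1.0 / gamma), 0.0, 1.0) * 255.0

Image.fromarray(img.astype(np.uint8)).save("mike_gravatar_circle_light_2.png")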
Here is what the picture looks like both before and after processing by Gravatar:
I feel it's pretty ridiculous that I need to clobber my picture like I do on the left to make it look normal, so I'm going to leave this question open to see if anyone can show me a better/cleaner way of doing this.
EDIT: I forgot to mention my (completely unfounded) guesses as to how this should be solved. My guess is that Gravatar might try to make the average colour of the image some kind of midrange value (as that might seem sensible, I suppose) and picks up the alpha as being all white. Trying some experiments to pin this down could be interesting, but only if they had an API to automate uploading and downloading the images; otherwise it would be a painful effort. I'm looking forward to any suggestions as to what people think is happening.
I am working on a project in C#/Emgu CV, but an answer in any language with OpenCV should be fine.
I have the following image: http://i42.tinypic.com/2z89h5g.jpg
Or it might look like this: http://i43.tinypic.com/122iwsk.jpg
I am trying to do automatic calibration and I would like to know how to find the corners of the field. They are marked by LEDs, but I would prefer to find them by the colour tags. If needed, I can replace all tags with tags of the same colour. (Note that the light in the room changes, so the colours might be a bit different next time.)
Edge detection might be OK too, but I am afraid that I would not find the corners correctly.
Please help.
Thank you.
Edit:
Thanks aardvarkk for the advice, but I think I need to give you a little more info.
I am already able to detect and identify the robots in the field and get their position and rotation, but for that I first have to set the corners of the field manually. So I was looking for an automatic way, but I was worried I would not be able to distinguish the colour tags from the background because the light in the room changes quite often.
And as for the camera angle: the point is that the camera can be at a different (reasonable) angle each time.
I would start by searching for the colours. The LEDs won't be much help to you, as they're not much brighter than anything else in the scene. I would look for the rectangular pieces of coloured tape. Try segmenting the image based on colour; that may allow you to retrieve the corner tape pieces directly without needing to know their exact colour in advance. After that, you could look for pairs of same-coloured blobs that lie close to each other to define the corners where the two pieces of tape match. Knowing what kinds of camera angles you will have to handle is also very important: if you need this to work when viewing from the side, then this approach certainly won't work, but if it's almost top-down, it would probably be a good start. Nobody will be able to provide you with a start-to-finish solution, but this might be a good base to begin with.
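Not a full solution, but the colour-segmentation part of this could be sketched with OpenCV in Python; the HSV range below is a placeholder for one tape colour and would need tuning (and possibly re-tuning when the room lighting changes):

import cv2
import numpy as np

img = cv2.imread("field.jpg")  # example filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Placeholder range for one tag colour (roughly green); in practice keep
# one range per colour and make them generous to cope with lighting drift.
lower = np.array([40, 80, 80])
upper = np.array([80, 255, 255])

mask = cv2.inRange(hsv, lower, upper)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))

# Each remaining blob is a candidate piece of tape; its centroid is a
# candidate corner marker. Same-coloured blobs that lie close together
# would then be paired up into one corner, as described above.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
corners = []
for c in contours:
    if cv2.contourArea(c) < 50:  # ignore tiny noise blobs
        continue
    m = cv2.moments(c)
    corners.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

print(corners)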
I'm trying to convert hundreds of images that
Have an unknown subject centered in the image
Have a white background
I've used ImageMagick's convert utility in the following way:
convert ORIGINAL.jpg -fuzz 2% -matte -transparent "#FFFFFF" TRANSPARENT.png
The problem is that some of my subjects fall within the "white" range, so, just like a weatherman wearing a green tie, some of my subjects seem to be disintegrating.
Is there any way to solve this via ImageMagick? Are there any alternative solutions? Scripting GIMP?
As you said, GIMP has a magic wand tool that can be used to select contiguous areas of the same colour, and so it can avoid the "green tie syndrome". The problem is that it may run into trouble if there is something like a human hair crossing the image (which will separate some of the white areas). Another common problem, especially with pictures of people, is when they put a hand next to the body and a small hole is left between the hand and the body.
Basically, it is not too hard to create a GIMP script that opens in batch many images, uses the magic wand to select the pixel at some corner (or if desired, in several known fixed places, not just one) and then removes the selection.
If it's hard to find a white area at a fixed spot, it is possible to search inwards, meaning the script looks for a white pixel on the borders and gradually moves inside in a spiral until it finds one. But this is very inefficient in the basic scripting engine, so I hope you don't need it.
If any of the options suggested above is OK, tell me and I'll create a GIMP script for it. It would be even better if you could post some sample images, but I'll try to help even without them.
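If GIMP scripting ends up being awkward, the same "magic wand from the corners" idea can be sketched with OpenCV's flood fill in Python; the filenames match the question, but the tolerance value and the corner seeding are my own assumptions, not the GIMP script offered above:

import cv2
import numpy as np

img = cv2.imread("ORIGINAL.jpg")
h, w = img.shape[:2]

# The flood-fill mask must be 2 pixels larger than the image.
mask = np.zeros((h + 2, w + 2), np.uint8)

# Seed from all four corners so a subject touching one edge does not block
# the fill; loDiff/upDiff act like the magic-wand tolerance (~2% of 255).
tol = (6, 6, 6)
flags = 4 | (255 << 8) | cv2.FLOODFILL_MASK_ONLY
for seed in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
    cv2.floodFill(img, mask, seed, (0, 0, 0),
                  loDiff=tol, upDiff=tol, flags=flags)

# Everything the fill reached is background; make only that transparent.
background = mask[1:-1, 1:-1] > 0
rgba = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
rgba[background, 3] = 0
cv2.imwrite("TRANSPARENT.png", rgba)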
I was wondering how you would print an image scaled to three times its original size without making it look like crap. If you change the DPI to 300 and print, it'll look like crap. Is there a way to convert it gracefully?
You may have the problem of trying to add detail that isn't there. Hopefully you're aware of this.
The best way to enlarge an image that I know of is to use bicubic interpolation. If it's any help, Photoshop recommends using 'bicubic smoother' for enlargement.
Also, be careful with DPI vs PPI.
This is called supersampling or interpolation. There's no 'perfect' algorithm, since that would imply generating new information where there was none ('between' the pixels), but some methods are better than others at fooling the eye/brain into filling the voids, or at least at not producing big square boxes.
Start with the Wikipedia articles on nearest-neighbor, bilinear, and bicubic interpolation (the three offered by Photoshop). A few more, such as tricubic interpolation and Lanczos resampling, could be of interest; also check the theory and comparison links.
In short, this isn't a cut-and-dried issue but an active field of investigation, full of subjectivity and practical trade-offs.
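To make the trade-offs concrete, here is a minimal Python/OpenCV sketch that enlarges an image to three times its size with the interpolation methods mentioned above (the filename is just a placeholder):

import cv2

img = cv2.imread("photo.jpg")
scale = 3  # three times the original size

methods = {
    "nearest": cv2.INTER_NEAREST,   # big square boxes, no invented detail
    "bilinear": cv2.INTER_LINEAR,   # smoother, but slightly blurry
    "bicubic": cv2.INTER_CUBIC,     # the usual recommendation for enlarging
    "lanczos": cv2.INTER_LANCZOS4,  # sharper, can ring around hard edges
}

for name, flag in methods.items():
    big = cv2.resize(img, None, fx=scale, fy=scale, interpolation=flag)
    cv2.imwrite("photo_3x_" + name + ".png", big)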
You should vectorize your image, scale it, and, if you wish, convert it back to the original format (JPG, GIF, PNG...). However, this works best for simple images.
Do you know how to vectorize? There are some sites that do it online; just do some Google research and you'll find them.
Changing the DPI won't matter if you don't have enough pixels in your image for the size you are printing. In the biz it's called GIGO (Garbage In, Garbage Out).
If your image is in HTML, create a media="print" stylesheet and feed a high-res image that way.