How does gravatar adjust the colour in your images automatically? - imagemagick

I decided I wanted to change my gravatar to be circular. I have it circular on my blog with CSS and prefer the effect, so I decided to use a bit of ImageMagick to give my image a circular alpha channel (so I could have it on SO etc. as well). A couple of quick commands later I was sorted:
# crop to square 422x422 with 0x0 offset
convert mike_gravatar.jpeg -crop 422x422+0+0 mike_gravatar_square.jpeg
# give it a circular alpha channel. NOTE this is for a 422x422 image (hence the 211,211 circle centre)
convert mike_gravatar_square.jpeg \( +clone -threshold -1 -negate -fill white -draw "circle 211,211,211,0" \) -alpha off -compose copy_opacity -composite mike_gravatar_circle.png
Brilliant. Now we just upload this to Gravatar, I'll have a nice circular cropped image, and all will be well with the world.
As you have probably guessed from the question, all is not well:
OK, I must clearly have messed up my ImageMagick commands and not checked that the before and after images were the same. I reopen the images next to one another and see that they are indeed the same. I try uploading to Gravatar again and notice that they seem to process the image after the "cropping" stage; here is what it looks like in the browser after the file upload (before the cropping messes it up):
Alright, let's do some digging; someone else must have stumbled upon this before. So I have a look around, and one lone soul in a desolate forum wasteland cries out. There is no response to it, but the relevant text is here:
It seems that if a photo or picture uploaded to Gravatar's cropper
doesn't have jet black, it will auto-level the nearest grey to black
and darken the whole image, including darkening whites into greys. Can
confirm that this occurs with any PNG image that has a grey background
or has a large enough proportion of it, whether or not it has 255
whites and regardless if it has alpha-blending or not
So it seems like I can fix this by putting in a single black pixel. That sounds alright, so I try adding a black pixel, then a single black and a single white pixel. Result:
So basically now I'm out of ideas:
Does anyone have any idea what post-processing Gravatar does, so I can undo it or counteract its effects with pre-processing?
Is this "feature" documented anywhere, and can it be turned off or worked around?
I think it would be quite cool to preprocess the image to counteract the darkening they would apply, but that would require knowing exactly what they do in order to change things, and it obviously might not be possible (it depends on the relative movement of each colour, I suppose).
EDIT:
I tried making an inverse image to see whether the processing was based on the average or on the extreme values, and that was also darkened, so it seems more likely to be the average:

Alright, I've got a solution that "worked for me". Unfortunately it is purely empirical and just "good enough" (I'd quite like to check what's actually happening, but I don't have any more time to devote to being nerd sniped). I'm going to post what I did, what I think might have happened, and how I think this should be solved if someone has enough time.
First off, what I did: I simply tried lightening the image by random amounts, and what ended up working was a gamma of around 2:
convert mike_gravatar_circle.png -gamma 2 mike_gravatar_circle_light_2.png
Here is what the picture looks like both before and after processing by Gravatar:
I feel it's pretty ridiculous that I need to clobber my picture like I do on the left just to make it look normal, so I'm going to leave this question open to see if anyone can show me a better/cleaner way of doing this.
EDIT: I forgot to mention my (completely unfounded) guesses as to how this should be solved. My guess is that Gravatar might try to make the average colour of the image some kind of midrange value (as that might seem sensible... I guess, I don't know) and picks up the alpha as being all white. Trying some experiments to pin this down could be interesting, but only if they had an API to automate uploading and downloading the images; otherwise it would be a painful effort. I'm looking forward to any suggestions as to what people think is happening.
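If anyone wants to poke at that guess without the upload round-trip, ImageMagick can at least report the kind of averages Gravatar might be keying off. This is only a sketch: the fx expressions are standard ImageMagick, but which statistic (if any) Gravatar actually uses is pure speculation on my part.
# report the overall mean intensity of the image (0.0 = black, 1.0 = white)
convert mike_gravatar_circle.png -format "%[fx:mean]\n" info:
# the same, but flattened against white first, i.e. treating the transparent area as white
convert mike_gravatar_circle.png -background white -flatten -format "%[fx:mean]\n" info:
Comparing those numbers before and after the -gamma 2 trick might at least show whether the "average gets pulled to midrange" theory holds water.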

Related

How to remove background of a low contrast image using imagemagick

I am new to ImageMagick and am trying to work out a script I can use to remove the background from a number of images.
The problem is that some images (see sample below) involve objects whose main color is very close to the background.
Can someone help by pointing me to the right approach and/or providing real-life examples I can play with?
Thanks a lot!
Sample image: http://dev.gmce.com.br/foto2-small.png
convert test.jpeg -white-threshold 90% -transparent white output.png
Just play with the threshold percentage.
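Since the question mentions a number of images, the same command drops straight into a shell loop. This is only a sketch with placeholder filenames; the threshold value is the one from the answer above and will still need tuning per image set.
# apply the same background removal to every JPEG in the current directory
for f in *.jpeg; do
  convert "$f" -white-threshold 90% -transparent white "${f%.jpeg}.png"
done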

How to recognize Text-Presence pattern in a scanned image and crop it?

Smart Cropping for Scanned Docs
Recently I took over a preservation project of old books/manuscripts. They are huge in quantity, almost 10,000 pages. I had to scan them manually with a portable scanner as they were not in a condition to be scanned in an automated book scanner.
The real problem shows up when I start editing them in Photoshop. Note that all of them are basically documents (in JPG format) and that there are absolutely no images in those documents. They are in a different language (Oriya), for which I am sure there won't be any OCR software available in the near future. (If there is, please let me know.)
To make those images (docs) look clean and elegant I have to crop them, position them, increase the contrast a bit, clean unnecessary spots with the eraser, et cetera. I was able to automate most of these processes in Photoshop, but cropping is the point where I am getting stuck. I can't automate cropping because the software can't recognize the presence of text or content in a certain area of the image (doc); it just applies the value given to it for cropping.
I want a solution to automate this cropping process. I have figured out an idea for it; I don't know if it's practical enough to implement, and as far as I know there's no software on the market that does this kind of thing.
The possible solution to this: it might work if a tool could recognize the presence of text in an image (that's not very critical, as all of them are normal document images: no pictures in them, no patterns, just plain rectangles) and crop right up to the border of that text on each side, outputting a document image without any margin. After this, the rest of the tasks can be automated in Photoshop, such as adding white space for margins and tweaking the contrast and colour to make it more readable.
Here is an album link to the gallery. I can post more sample images if it would be useful - just let me know.
http://imageshack.us/g/1/9800204/
Here is one example from the bigger sample of images available through above link:
Using the sample from tinypic,
with ImageMagick I'd construct an algorithm along the following lines:
Contrast-stretch the original image
Values of 1% for the black-point and 10% for the white-point seem about right.
Command:
convert \
http://i46.tinypic.com/21lppac.jpg \
-contrast-stretch 1%x10% \
contrast-stretched.jpg
Result:
Shave off some border pixels to get rid of the dark scanning artefacts there
A value of 30 pixels on each edge seems about right.
Command:
convert \
contrast-stretched.jpg \
-shave 30x30 \
shaved.jpg
Result:
De-speckle the image
No further parameters here. Repeat the process three times for better results.
Command:
convert \
shaved.jpg \
-despeckle \
-despeckle \
-despeckle \
despeckled.jpg
Result:
Apply a threshold to make all pixels either black or white
A value of roughly 50% seems about right.
Command:
convert \
despeckled.jpg \
-threshold 50% \
b+w.jpg
Result:
Re-add the shaved-off pixels
Running identify -format '%Wx%H' 21lppac.jpg established that the original image had dimensions of 1536x835 pixels.
Command:
convert \
b+w.jpg \
-gravity center \
-extent 1536x835 \
big-b+w.jpg
Result:
(Note, this step is optional. Its purpose is to get back to the original image dimensions, which you may want in case you go on from here and overlay the result with the original, or whatever...)
De-Skew the image
A threshold of 40% (the default) seems to work here too.
Command:
convert \
big-b+w.jpg \
-deskew 40% \
deskewed.jpg
Result:
Remove from each edge all rows and columns of pixels which are purely white
This can be achieved by simply using the -trim operator.
Command:
convert \
deskewed.jpg \
-trim \
trimmed.jpg
Result:
As you can see, the result is not yet perfect:
there remain some random artefacts on the bottom edge of the image, and
the final trimming didn't remove all white-space from the edges because of other minimal artefacts;
also, I didn't (yet) attempt to apply a distortion correction to the image in order to fix (some of) the distortion. (You can get an idea about what it could achieve by looking at this answer to "Understanding Perspective Projection Distortion ImageMagick".)
Of course, you can easily achieve even better results by playing with a few of the parameters used in each step.
And of course, you can easily automate this process by putting each command into a shell or batch script.
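To illustrate that last point, here is what such a script could look like. It is only a sketch that strings together the commands shown above (the input name and the final output name are placeholders), so all the same parameters still want per-batch tuning.
#!/bin/sh
# rough cleanup pipeline for one scanned page: stretch contrast, shave edges,
# despeckle, threshold, pad back to the original size, deskew and trim
in=21lppac.jpg
size=$(identify -format '%Wx%H' "$in")
convert "$in" \
  -contrast-stretch 1%x10% \
  -shave 30x30 \
  -despeckle -despeckle -despeckle \
  -threshold 50% \
  -gravity center -extent "$size" \
  -deskew 40% \
  -trim \
  cleaned.jpg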
Update
Ok, so here is a distortion to roughly rectify the deformation.
Command:
convert \
trimmed.jpg \
-distort perspective '0,0 0,0 1300,0 1300,0 0,720 0,720 1300,720 1300,770' \
distort.jpg
Result: (once more with the original underneath, to make direct visual comparison easier)
There is still some barrel-like distortion left in the image, which can probably be removed by applying the -distort BarrelInverse operator -- we'd just need to find the fitting parameters.
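For reference, the call would look something like the following; the four coefficients here are made-up placeholders (they should sum to roughly 1.0), and finding values that actually fit the page curvature is exactly the open problem mentioned above.
# hypothetical coefficients A B C D -- these need to be fitted to the real curvature
convert distort.jpg -distort BarrelInverse '0.0 0.0 -0.02 1.02' undistorted.jpg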
We addressed many "smart cropping" issues in our open-source DjVu->PDF converter. The converter also allows you to load a set of scanned images instead of a DjVu file (just press SHIFT with the Open command) and output a resulting set of images instead of a PDF.
It is a free cross-platform GUI tool, written in Java.
One technique to segment text from the background is the Stroke Width Transform. You'll find several posts here on Stack Overflow about it, including this one:
Stroke Width Transform (SWT) implementation (Java, C#...)
If the text shown in the Wikipedia page is representative of written Oriya, then I'm confident that the SWT (or your customized version of it) will perform well. You may still have to do some manual tweaking after you review an image, but an SWT-based method should do a lot of the work for you.
Although the SWT may not identify every single stroke, it should give you a good estimate of the dimensions of the space occupied by strokes (and characters). The simplest method would then be to crop to the bounding box of the detected strokes, plus whatever margin you want to keep.
A newish approach that might work for you is "content-aware resizing", such as "seam carving", which automatically removes paths of pixels with low information content (e.g. background pixels). Here's a video about seam carving:
http://www.youtube.com/watch?v=qadw0BRKeMk
There's a seam carving plugin ("Liquid Rescale") for GIMP:
http://liquidrescale.wikidot.com/
This blog post reports a plugin for Photoshop:
http://wordpress.brainfight.com/195/photoshop-cs5-content-aware-aka-seam-carving-aka-liquid-resize-fun-marketing/
For an overview of OCR techniques, I recommend the book Character Recognition Systems by Cheriet, Kharma, Liu, and Suen. The references in that book could keep you busy for quite some time.
http://www.amazon.com/Character-Recognition-Systems-Students-Practitioners/dp/0471415707
Finally, consider joining the Optical Character Recognition group on LinkedIn to post more specific questions. There are academics, researchers, and engineers in the industry who can answer questions in great detail, and you might also be able to make contact via email with researchers in India who are developing OCR for languages similar to Oriya, though they may not have published the software yet.

ImageMagick - Transparent background - Act like Photoshop's "Magic wand"

I'm trying to convert hundreds of images that
Have an unknown subject centered in the image
Have a white background
I've used ImageMagick's convert utility in the following way:
convert ORIGINAL.jpg -fuzz 2% -matte -transparent "#FFFFFF" TRANSPARENT.png
The problem is, some of my subjects fall within the "white" range, so, just like the weatherman wearing a green tie, some of my subjects seem to be disintegrating.
Is there any way to solve this via ImageMagick? Are there any alternative solutions? Scripting GIMP?
As you said, GIMP has a magic wand tool that can be used to select contiguous areas of the same color, so it can avoid the "green tie syndrome". The problem is that it may run into trouble if there is something like a human hair crossing the image (which will split some of the white areas apart). Another common problem, especially with pictures of people, is when they put a hand next to the body and a small hole is left between the hand and the body.
Basically, it is not too hard to create a GIMP script that opens many images in batch, uses the magic wand to select the pixel at some corner (or, if desired, at several known fixed places, not just one) and then deletes the selected area.
If it's hard to find a white area at a fixed spot, it is possible to search inwards - meaning the script looks for a white pixel on the borders and gradually spirals inwards until it finds one. But this is very inefficient in the basic scripting engine, so I hope you don't need it.
If any of the options suggested above is OK, tell me and I'll create a GIMP script for it. It would be even better if you could post some sample images, but I'll try to help even without them.
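If GIMP scripting turns out to be more machinery than you want, a rough ImageMagick equivalent of the corner flood-fill idea is the floodfill draw primitive, which only knocks out background that is actually connected to the chosen corner (and so avoids the "green tie" problem of a plain -transparent). This is a sketch: the fuzz value and the corner coordinate are guesses you would tune per image set.
# make pixels connected to the top-left corner transparent, within a 5% colour tolerance
convert ORIGINAL.jpg -alpha set -fuzz 5% -fill none -draw "matte 0,0 floodfill" TRANSPARENT.png
It shares the weaknesses described above, though: enclosed white areas (between an arm and the body, say) stay opaque unless you flood-fill from extra seed points.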

Best way to get photoshop to optimise 35 related pictures for fast transmission

I have 35 pictures taken from a stationary camera aimed at a lightbox in which an object is placed, rotated at 10 degrees in each picture. If I cycle through the pictures quickly, the image looks like it is rotating.
If I wished to 'rotate' the object in a browser but wanted to transmit as little data as possible, I thought it might be a good idea to split the set into 36 pictures: 1 picture holding whatever background the images have in common, and 35 pictures with the background removed, showing only the things that have changed.
Do you think this approach will work? Is there a better route? How would I achieve this in photoshop?
Hmm, you'd probably have to take a separate picture of just the background, then, in the remaining pictures, use Photoshop to remove the background and keep only the object. If those object-only pictures have transparency where the background was, this could work.
How are you planning to "rotate" this? Flash? JavaScript? CSS+HTML? Is this supposed to be interactive or just a repeating movie? Do you have a sample of how this has already been done? Sounds kinda cool.
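If you would rather not do the masking by hand in Photoshop, ImageMagick's ChangeMask compose method can build the 35 "difference" frames automatically by making every pixel that matches the shared background transparent. A sketch only: the background shot, the frame naming and the fuzz tolerance are all assumptions.
# for each frame, turn pixels that match the common background transparent
for f in frame_*.jpg; do
  convert "$f" background.jpg -fuzz 5% -compose ChangeMask -composite "${f%.jpg}_delta.png"
done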
If you create a multiple-frame animated GIF in Photoshop you can control the quality of the final output, including an optimization that automatically converts the whole sequence to indexed color. The result is that your backgrounds, though varied, will share most of the same color space, and should be optimized such that it won't matter if they differ slightly from frame to frame. (Unless your backgrounds are highly varied between photos, which, given your use of a light box, they shouldn't be.) Photoshop will let you control the overall output resolution and the color remapping, both of which will affect the final size.
Update: Adobe discontinued ImageReady in Photoshop CS3+. I am still using CS2, so I wasn't aware of this until someone pointed it out.
Unless the background is much bigger than the GIF in the foreground, I doubt you would benefit greatly from using separate transparent images. Even if they are smaller in size, would the difference be large enough to improve the speed, taking into consideration the average speed with which pages are loaded?

How do you scale an image for print without degrading the quality?

I was wondering how you would print an image scaled to three times its original size without making it look like crap. If you just change the DPI to 300 and print, it'll look like crap. Is there a way to convert it gracefully?
You may have the problem of trying to add detail that isn't there. Hopefully you're aware of this.
The best way to enlarge an image that I know of is to use bicubic interpolation. If it's any help, Photoshop recommends using 'bicubic smoother' for enlargement.
Also, be careful with DPI vs PPI.
This is called supersampling or interpolation. There's no 'perfect' algorithm, since that would imply generating new information where there was none ('between' the pixels); but some methods are better than others at fooling the eye/brain into filling the voids, or at least at not producing big square boxes.
Start with the Wikipedia articles on nearest-neighbor, bilinear and bicubic interpolation (the three offered by Photoshop). A few more, such as tricubic interpolation and Lanczos resampling, could be of interest; also check the theory and comparison links.
In short, this isn't a cut-and-dried issue, but an active field of investigation, full of subjectivity and practical trade-offs.
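If you end up doing the enlargement from the command line rather than in Photoshop, ImageMagick exposes the same family of choices. A sketch only: the Lanczos filter, the 300% factor and the 300 DPI tag are illustrative values, not a recommendation.
# enlarge 3x with a higher-quality resampling filter and tag the file as 300 DPI for print
convert input.jpg -filter Lanczos -resize 300% -density 300 -units PixelsPerInch output.png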
You could vectorize your image, scale it, and, if you wish, convert it back to the original format (JPG, GIF, PNG...). However, this works best for simple images.
Do you know how to vectorize? There are some sites that do it online; just do some Google research and you'll find them.
Changing the DPI won't matter if you don't have enough pixels in your image for the size you are printing. In the biz it's called GIGO (Garbage In, Garbage Out).
If your image is in HTML then create a media="print" stylesheet and feed a high-res image that way.
