Does anyone know how to adjust the brightness of an image using RMagick? RMagick has a number of different functions available, including ones to adjust levels and the hue/saturation/lightness values, but I need to adjust the old-fashioned brightness/contrast levels.
There are functions that let me adjust each color channel (RGBA) individually, but I'm not sure how to use levels to adjust the overall brightness; messing with the individual channels has yielded color-altered images. In GIMP, the functionality I want is under 'Output Levels' in the Levels dialog: by dragging the output white point below 255, I can achieve a darkening effect. Is there an equivalent in RMagick for controlling output levels? I don't see a channel for it.
Examples:
THIS IS ORIGINAL IMAGE:
THIS IS WHAT I WANT:
THIS IS WHAT HAPPENS WHEN I ADJUST LIGHTNESS (RMagick's modulate)
I think this should do what you need.
require 'rmagick'

img = Magick::Image.read('bT9xc.png')
img.first.level(-Magick::QuantumRange * 0.25, Magick::QuantumRange * 1.25, 1.0).write('out.png')
This sets the black point and the white point 'further away' from the range found in the image, which has the effect of making the brightest white in the source image darker, and the darkest black in the source image lighter.
If you want to make it darker overall, just increase the second factor to Magick::QuantumRange * 1.5 or higher.
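For reference, the 'Output Levels' darkening in GIMP is just a linear remap of pixel values, which is easy to verify outside RMagick. A minimal PIL sketch (the image contents and the 200 white point are arbitrary assumptions, not values from the question):

```python
from PIL import Image

# Hedged illustration (not RMagick): lowering the output white point from
# 255 to 200 scales every channel value by 200/255.
im = Image.new('RGB', (4, 4), (255, 128, 0))  # stand-in for the real image
darker = im.point(lambda v: v * 200 // 255)
print(darker.getpixel((0, 0)))  # (200, 100, 0)
```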
I think you can use the modulate method: http://www.imagemagick.org/RMagick/doc/image2.html#modulate
So to increase the brightness by 50% it would be something like:
img.modulate(1.5)
I am looking for a good way to do batch image processing of various scanned images. The images are quite large (A4 at 300 dpi), but processing time should not be a concern.
The general problem is this: The image may be colored but doesn't necessarily have to be, some of them are black and white. They have the quite typical fold-mark in the middle that comes from scanning books. What I would like to have is an image cleaned of that fold mark, which looks like a gradient, and (as an added bonus) color-shifted so that the actual background appears white / light gray. An example image is this (source is Haeckel's Kunstformen der Natur, for anyone interested):
I pondered doing it in Python with an adaptive contrast filter, but haven't really come up with a good solution yet; pretty much any input for any framework / tool / language could help me.
Here's a quick hack using vips. It does a huge blur (gaussian with sigma 200) to get the background, then scales up the image by the amount it's dark by.
#!/usr/bin/env python
import sys
import gi
gi.require_version('Vips', '8.0')
from gi.repository import Vips
image = Vips.Image.new_from_file(sys.argv[1])
# huge gaussian blur to remove all high frequencies ... we will just be left with
# the changes in the background
smooth = image.gaussblur(200)
# scale up the darker areas of the original ... we scale up by the proportion they
# are darker than white by
image *= smooth.max() / smooth
# that'll make rather a dazzling white ... knock it back a bit
image *= 0.9
image.write_to_file(sys.argv[2])
Run like this:
$ time ./remove_gradient.py ~/Desktop/orig.jpg x.jpg
real 0m16.369s
user 0m55.704s
sys 0m0.218s
There's still some vignetting, but it seems reduced, to me.
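The background-division step in the script (`image *= smooth.max() / smooth`) can be illustrated with plain NumPy, using toy numbers rather than real image data:

```python
import numpy as np

# Toy illustration of the vips step: pixels sitting on a darker background
# get scaled up proportionally more, flattening the illumination gradient.
img = np.array([[100.0, 200.0],
                [150.0, 250.0]])
smooth = np.array([[200.0, 250.0],
                   [200.0, 250.0]])  # pretend blurred background
corrected = img * (smooth.max() / smooth)
print(corrected)  # left column brightened, right column unchanged
```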
We are developing an app where we need to crop an image according to a selected object area. The user will draw a line, and we need to select the object and crop it. The crop needs to work like it does in the app YourMoji.
So far we have tried to get the color of the pixels along the line, compare those with the color of every pixel in the image, and build a path from that to clip the image. But that approach is going almost nowhere.
Is it possible to crop an image this way, or are we going about it the wrong way? Can anyone provide a way to do this, or suggest how to modify our approach so far? Any advice and suggestions will be greatly appreciated!
Thanks in advance.
I guess what you want is the image segmentation algorithm called Graph Cut.
Here are two Github repositories, hope these would help:
GraphCut
GrabCutIOS
I'm not exactly clued up on image manipulation, but the first algorithm that comes to mind is something like this:
Take the average of the pixels in the line (as you have)
Since you appear to want faces, you might want to weight reds and blues over green. Not much green in faces of any skin tone.
For each pixel, if the colour is within a given threshold outside of your selected average, remove it / make transparent.
Perhaps the closer to the original line (or centroid), the less strict the threshold becomes.
I'd then provide the user with some tools for:
Sensitivity: how large the threshold is
Eraser: to remove parts of the image that your algorithm missed
Paintbrush: to replace parts of the image that your algorithm incorrectly removed.
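The steps above could be sketched roughly like this in Python (the helper names, the 0.5 green weight, and the threshold values are made up for illustration):

```python
import math

def average_color(pixels):
    """Average (r, g, b) of the pixels sampled along the user's line."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def within_threshold(pixel, avg, threshold, weights=(1.0, 0.5, 1.0)):
    """Weighted colour distance; green is down-weighted, since faces
    contain little green regardless of skin tone."""
    dist = math.sqrt(sum(w * (p - a) ** 2
                         for w, p, a in zip(weights, pixel, avg)))
    return dist <= threshold
```

The `threshold` parameter maps directly onto the "Sensitivity" tool suggested above; making it a function of distance from the drawn line would give the loosening behaviour described in the last bullet.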
I have a .tiff video file with growing fibers that look like the image below
Now imagine that this fiber will constantly grow and shrink in a straight line. I'd like to somehow crop out the region of the video that contains just the fiber, with, for example, a black background image.
Now when I play the video I'd like to just see the growing fiber region of the video with the black background everywhere else.
Question: Is there a way to perform a "custom" crop of irregularly shaped objects in ImageJ?
If ImageJ can't do this sort of image processing, any other software options are welcome.
Thanks for any help
Yes, you can do this in ImageJ. If you can find a threshold method that captures your fiber, you can turn that into a selection (ROI), and then Clear Outside to turn everything else black:
Image > Adjust > Threshold and choose the threshold, or use one of the automatic methods. But don't apply the threshold!
Edit > Selection > Create Selection (turns the thresholded area into an ROI)
Edit > Clear Outside (makes the background black -- assuming you have set your background color to black)
If you want to make the window smaller, you can do Image > Crop with the selection active. This will crop the image to the rectangular bounding box of the ROI. But this size will vary according to the size of the fiber. So you might want to do this when the fiber is at its largest.
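Outside ImageJ, the same "threshold, then Clear Outside" idea is easy to sketch in NumPy (assuming a grayscale frame where the fiber is brighter than the background; the threshold value is arbitrary):

```python
import numpy as np

def clear_outside(frame, threshold):
    """Keep only pixels at/above the threshold; everything else black."""
    mask = frame >= threshold       # the thresholded 'selection' (ROI)
    result = np.zeros_like(frame)   # black background everywhere else
    result[mask] = frame[mask]
    return result

frame = np.array([[10, 200],
                  [50, 255]])       # toy frame: bright 'fiber' on the right
print(clear_outside(frame, 100))
```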
I have a project to customize clothes ,let say a t-shirt, that have following features:
change colors.
add a few lines of text (<= 4) and change the font from a list.
add image or photo to the t-shirt.
rotate the t-shirt to customize the back side.
rotate the image and zoom in/out.
save the result as a project locally and send it to a web service (I'm thinking of using NSDictionary/JSON).
save as an image.
So my question is:
Should I use multiple images to simulate color changes, or should I use QuartzCore (I'm not an expert in QuartzCore, but if I have to use it I'll learn)? Or is there a better approach for this?
Thank you.
The simple way to do this is to render the T-Shirt image into a CGContext, then walk the rows and columns and change pixels showing a "strong" primary color to the desired tint. You would take a photo of a person wearing a bright red (or other primary color) t-shirt, then in your code only change pixels where the red color has a high luminance and saturation (i.e. the "r" value is over some threshold and the b and g components are low).
The modified image is then going to look a bit flat, as when you change the pixels to one value (the new tint) there will be no variation in luminance. To make this more realistic, you would want to make each pixel have the same luminance as it did before. You can do this by converting back and forth from RGB to a color space like HCL. Apple has a great doc on color (in the Mac section) that explains color spaces (google 'site:developer.apple.com "Color Spaces"')
To reach your goal, you will have to tackle these technologies:
create a CGContext and render an image into it using Quartz
figure out how to read each pixel (pixels can have alpha and different orderings)
figure out a good way to identify the proper pixels (test by making these black or white)
for each pixel you want to change, convert the RGB to HCL to get its luminance
replace the pixel with a pixel of a different Color and Hue but the same Luminance
use the CGContext to make a new image
If all this seems too difficult, then you'll have to have different images for every color you want.
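As a language-neutral sketch of the per-pixel rule described above (using Python's colorsys rather than Quartz; the saturation/lightness thresholds are arbitrary assumptions):

```python
import colorsys

def retint_pixel(r, g, b, target_hue, sat_min=0.5, light_min=0.2):
    """Re-hue saturated, bright red pixels to target_hue, keeping their
    lightness; leave all other pixels untouched."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    is_red = (h < 0.05 or h > 0.95) and s >= sat_min and l >= light_min
    if not is_red:
        return (r, g, b)
    nr, ng, nb = colorsys.hls_to_rgb(target_hue, l, s)
    return (round(nr * 255), round(ng * 255), round(nb * 255))

print(retint_pixel(255, 0, 0, 2 / 3))  # pure red -> pure blue: (0, 0, 255)
```

Keeping `l` (lightness) from the original pixel is what preserves the fabric shading mentioned above; only the hue is swapped.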
My usual method of 100% contrast and some brightness adjustment to tweak the cutoff point usually works reasonably well to clean up photos of small sub-circuits or equations for posting on E&R.SE; however, sometimes it's not quite that great, like with this image:
What other methods besides contrast (or instead of) can I use to give me a more consistent output?
I'm expecting a fairly general answer, but I'll probably implement it in a script (that I can just dump files into) using ImageMagick and/or PIL (Python) so if you have anything specific to them it would be welcome.
Ideally a better source image would be nice, but I occasionally use this on other folk's images to add some polish.
The first step is to equalize the illumination differences in the image while taking into account the white balance issues. The theory here is that the brightest part of the image within a limited area represents white. By blurring the image beforehand we eliminate the influence of noise in the image.
from PIL import Image
from PIL import ImageFilter
im = Image.open(r'c:\temp\temp.png')
white = im.filter(ImageFilter.BLUR).filter(ImageFilter.MaxFilter(15))
The next step is to create a grey-scale image from the RGB input. By scaling to the white point we correct for white balance issues. By taking the max of R,G,B we de-emphasize any color that isn't a pure grey such as the blue lines of the grid. The first line of code presented here is a dummy, to create an image of the correct size and format.
grey = im.convert('L')
width,height = im.size
impix = im.load()
whitepix = white.load()
greypix = grey.load()
for y in range(height):
    for x in range(width):
        # integer division keeps the result a valid 8-bit value under Python 3
        greypix[x, y] = min(255, max(255 * impix[x, y][0] // whitepix[x, y][0],
                                     255 * impix[x, y][1] // whitepix[x, y][1],
                                     255 * impix[x, y][2] // whitepix[x, y][2]))
The result of these operations is an image that has mostly consistent values and can be converted to black and white via a simple threshold.
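That final threshold step might look like this in PIL (the 128 cutoff and the toy 2x2 image are arbitrary, for illustration only):

```python
from PIL import Image

# Hypothetical last step: binarize the corrected grey image with a fixed
# cutoff -- pixels at/above 128 become white, the rest black.
grey = Image.new('L', (2, 2), 50)
grey.putpixel((0, 0), 200)
bw = grey.point(lambda v: 255 if v >= 128 else 0)
print(bw.getpixel((0, 0)), bw.getpixel((1, 1)))  # 255 0
```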
Edit: It's nice to see a little competition. nikie has proposed a very similar approach, using subtraction instead of scaling to remove the variations in the white level. My method increases the contrast in the regions with poor lighting, and nikie's method does not; which method you prefer will depend on whether there is information in the poorly lit areas that you wish to retain.
My attempt to recreate this approach resulted in this:
for y in range(height):
    for x in range(width):
        greypix[x, y] = min(255, max(255 + impix[x, y][0] - whitepix[x, y][0],
                                     255 + impix[x, y][1] - whitepix[x, y][1],
                                     255 + impix[x, y][2] - whitepix[x, y][2]))
I'm working on a combination of techniques to deliver an even better result, but it's not quite ready yet.
One common way to remove uneven background illumination is to calculate a "white image" from the source, using a morphological dilation followed by an erosion, which removes the dark details such as the lines.
In this sample Octave code, I've used the blue channel of the image, because the lines in the background are least prominent in this channel (EDITED: using a circular structuring element produces less visual artifacts than a simple box):
src = imread('lines.png');
blue = src(:,:,3);
mask = fspecial("disk",10);
opened = imerode(imdilate(blue,mask),mask);
Result:
Then subtract the source from this background image:
background_subtracted = opened-blue;
(contrast enhanced version)
Finally, I'd just binarize the image with a fixed threshold:
binary = background_subtracted < 35;
How about detecting edges? That should pick up the line drawings.
Here's the result of Sobel edge detection on your image:
If you then threshold the image (using either an empirically determined threshold or Otsu's method), you can clean up the image using morphological operations (e.g. dilation and erosion). That will help you get rid of broken/double lines.
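A small NumPy sketch of what the Sobel step computes, namely the gradient magnitude from the two 3x3 kernels (applied here to a toy image, not the one above):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the horizontal/vertical Sobel kernels
    (border pixels left at zero for brevity)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

img = np.zeros((3, 4))
img[:, 2:] = 255.0  # a vertical step edge
print(sobel_magnitude(img)[1, 2])  # strong response on the edge: 1020.0
```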
As Lambert pointed out, you can pre-process the image using the blue channel to get rid of the grid lines if you don't want them in your result.
You will also get better results if you light the page evenly before you image it (or just use a scanner), because then you don't have to worry as much about global vs. local thresholding.