I have a grayscale photo that I am trying to colour programmatically so that it looks 'real', with user input 'painting' the colour (e.g. red). It feels like it should be simple, but I've been stuck trying a few ways that don't look right, so I thought I'd ask the community in case I've missed something obvious. I've tried the following:
Converting to HSV, and combining the hue and saturation from the colour selected by the user with the value from the image (see the sketch after this list).
Building a colour transformation matrix to multiply the BGR values (i.e. R = 0.8R + 1.1G + 1.0B). This works well for 'tinting' and adds a nice pastel effect, but doesn't really keep the depth or boldness of colour I want.
(Favourite so far; see the answers.) Multiplying the RGB of the chosen colour by the RGB of the image.
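For concreteness, here is roughly what the HSV attempt looks like as a minimal Python/OpenCV sketch (the file name and the chosen red are just placeholders):

import cv2
import numpy as np

# Placeholder inputs: a grayscale photo and a user-chosen BGR colour.
gray = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)
user_bgr = np.uint8([[[0, 0, 200]]])  # e.g. a red picked by the user

# Hue and saturation come from the chosen colour, value comes from the photo.
user_hsv = cv2.cvtColor(user_bgr, cv2.COLOR_BGR2HSV)[0, 0]
hsv = np.zeros((*gray.shape, 3), dtype=np.uint8)
hsv[..., 0] = user_hsv[0]  # hue from the user colour
hsv[..., 1] = user_hsv[1]  # saturation from the user colour
hsv[..., 2] = gray         # value from the grayscale photo

colourised = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imwrite("colourised.png", colourised)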
To add to the comment by user Alexander Reynolds, the question that you're asking is a known open research problem in the field of computer graphics, because the problem is under-constrained without using statistical priors of some sort. The state of the art in the CG community is found here, presented at SIGGRAPH 2016.
http://hi.cs.waseda.ac.jp/~iizuka/projects/colorization/en/
Also see:
http://richzhang.github.io/colorization/
I've had another think and a play with Photoshop, and implemented a multiply blend mode in BGR space to get an OK result.
Implemented in Java with OpenCV:
Mat multiplyBlend(Mat values, Mat colours) { // values: 1 channel, colours: 3 channels
    // Simulates the Photoshop "Multiply" blend mode in BGR space.
    ArrayList<Mat> splitColours = new ArrayList<Mat>();
    Core.split(colours, splitColours);
    // Multiply each colour channel by the value channel, scaling back into 0-255.
    Core.multiply(values, splitColours.get(0), splitColours.get(0), 1 / 255.0f);
    Core.multiply(values, splitColours.get(1), splitColours.get(1), 1 / 255.0f);
    Core.multiply(values, splitColours.get(2), splitColours.get(2), 1 / 255.0f);
    Mat ret = new Mat();
    Core.merge(splitColours, ret);
    return ret;
}
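If you'd rather do the same thing in Python with NumPy, a rough equivalent of the multiply blend might look like this (assuming values is a single-channel uint8 image and colour is a BGR triple):

import numpy as np

def multiply_blend(values, colour):
    # Multiply a single-channel uint8 image by a BGR colour, scaled by 1/255,
    # mirroring the Java function above.
    values = values.astype(np.float32)[..., None]   # H x W x 1
    colour = np.asarray(colour, dtype=np.float32)   # (B, G, R)
    blended = values * colour / 255.0
    return blended.clip(0, 255).astype(np.uint8)

For a red tint you would call it with something like multiply_blend(gray, (0, 0, 200)).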
I am building a "Curves" editor for an image and would like to split out each color channel to run through the CIToneCurve filter before compositing them back together into a single color image. (I'm aware of the CIColorCurves filter, but that doesn't give me the control I want.)
I am able to separate the channels using three separate CIColorCube filters to generate the 3 separate color channels, but I'm not sure how to put them back together to form a single color image.
Using the maximum and minimum compositing filters works, but when I run the individual color channels through the tone curve, adjusting the highs or the lows (depending on which compositing filter I used) messes up the colors.
You could do this with Accelerate.vImage.
Apple has an article that discusses converting an interleaved image to separate planar buffers: https://developer.apple.com/documentation/accelerate/optimizing_image_processing_performance
...and there's an article that discusses vImage / Core Image interoperability using CIImageProcessorKernel: https://developer.apple.com/documentation/accelerate/reading_from_and_writing_to_core_video_pixel_buffers. I can't remember if CIImageProcessorKernel supports single channel 8-bit images such as R8.
...also, this Apple sample code project may be of interest: Applying Tone Curve Adjustments to Images.
I ended up using the suggestion posted by Frank Schlegel and using simple additive compositing. I had to write my own CIFilter to do that, but it was quite simple.
half4 rgbaComposite(sample_h redColor, sample_h greenColor, sample_h blueColor, sample_h alphaColor) {
    // Take one channel from each single-channel input and pack them into one RGBA pixel.
    return half4(redColor.r, greenColor.g, blueColor.b, alphaColor.a);
}
This is for a Metal-backed CIFilter. Each input is assumed to contain only a single color channel.
Summary:
I've been having issues correcting white balance and color casts on images, specifically when testing on an iPhone 7. So far everything runs fine on an iPhone 6, but the camera on a 7 sometimes creates a purple, yellow, or blue tint in any image taken with the flash on. The app I'm developing relies heavily on color detection using OpenCV, so I'm trying different methods to correct the color cast. The scenario I'm running into is this: the user has a piece of paper and some items on the paper to identify by color, but when an iPhone 7 is used close to the paper with the flash on, the entire image takes on a tint. The paper is used to make it easier to separate the items from the background, as well as for white balance, to know which part of the image should be white and potentially fix white-balance/color-cast problems.
Details:
I'm able to correct slight color tints using a background-adjustment method with OpenCV:
- (void)adjustBackground:(Mat &)inputROI image:(Mat &)imageInBG {
    // Mean colour of the ROI (the region that should be white paper).
    Scalar meanCol = mean(inputROI);

    Mat labFloat, ROIfloat;
    std::vector<Mat> planes(3);

    // Convert to float for precision, then to Lab and split into planes.
    inputROI.convertTo(ROIfloat, CV_32FC3, 1.0 / 255.0f);
    cvtColor(ROIfloat, labFloat, CV_BGR2Lab);
    split(labFloat, planes);

    // Convert the mean colour to Lab and shift the a/b planes back towards neutral.
    double l_v, a_v, b_v;
    rgb2lab(meanCol(2), meanCol(1), meanCol(0), l_v, a_v, b_v);
    add(planes[1], -a_v, planes[1]);
    add(planes[2], -b_v, planes[2]);

    // Merge, convert back to BGR, and write the result into the ROI.
    merge(planes, labFloat);
    cvtColor(labFloat, ROIfloat, CV_Lab2BGR);
    ROIfloat.convertTo(inputROI, CV_8UC3, 255.0f);

    planes.clear();
    labFloat.release();
    ROIfloat.release();
}
Here rgb2lab does just what the name implies, converting RGB to the Lab color space. I also convert the image to float for better precision. This corrects small color casts, but if the image is heavily tinted it still leaves slightly tinted colors, and color detection with OpenCV still picks up too much of the tint color.
What I tried next was a more direct adjustment of the camera settings, which I feel is a better approach: fix the problem at capture time rather than after the fact with post-processing color correction. I found some documentation for modifying the camera's temperature and tint values, but it just results in the user having to manually adjust sliders to get the desired white-balanced image:
Class captureDeviceClass = NSClassFromString(@"AVCaptureDevice");
if (captureDeviceClass != nil) {
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if ([device isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeLocked]) {
        if ([device lockForConfiguration:nil]) {
            // Convert the slider values into device white-balance gains.
            AVCaptureWhiteBalanceTemperatureAndTintValues temperatureAndTint = {
                .temperature = tempVal,
                .tint = tintVal,
            };
            AVCaptureWhiteBalanceGains wbGains = [device deviceWhiteBalanceGainsForTemperatureAndTintValues:temperatureAndTint];
            // Only lock the white balance if every gain is within [1.0, maxWhiteBalanceGain].
            // (NSLocationInRange works on integer ranges, so this is a coarse check.)
            if ((NSLocationInRange(wbGains.redGain, NSMakeRange(1, (device.maxWhiteBalanceGain - 1.0)))) &&
                (NSLocationInRange(wbGains.greenGain, NSMakeRange(1, (device.maxWhiteBalanceGain - 1.0)))) &&
                (NSLocationInRange(wbGains.blueGain, NSMakeRange(1, (device.maxWhiteBalanceGain - 1.0))))) {
                NSLog(@"Good values");
                [device setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:wbGains completionHandler:^(CMTime syncTime) {
                }];
            }
            else {
                NSLog(@"Bad values");
            }
            [device unlockForConfiguration];
        }
    }
}
Where tempVal and tintVal are inputs from sliders.
Is there any way to turn off auto adjustments on the iPhone camera, or is there a better way in OpenCV to adjust for more extreme color casts?
Edit:
Here are some examples. Disregard the graphs in the middle; I was trying something with histograms. One image shows a blue tint over the entire image, and the other shows my color-cast correction working in the ROI in the middle, but it changes the colors in the image too much (I need the color bands on the resistors to be as accurate as possible).
http://i.imgur.com/jlc4MDa.jpg and http://i.imgur.com/PG81pAl.jpg
In case this helps anyone, I found a decent solution for adjusting the color cast. It turns out the way I was adjusting in the first place with OpenCV was pretty close; I just needed to adjust the color ranges I was looking for to match the kinds of colors I was getting after adjustment. I also stopped using the rgb2lab function I found and instead transformed the Mat itself directly from BGR (float, for precision) to Lab. This seems to get closer to truly white-balanced or color-corrected images, but it could just be the adjusted color ranges I came up with that really made the difference.
The rest is pretty much the same: split the Mat into Lab planes, find the mean of the a and b channels, and shift them "back to center". Here is the code for using float Lab Mats and getting the offsets in a and b:
Mat labFloat, ROIfloat;
std::vector<Mat>lab_planes(3);
input.convertTo(ROIfloat, CV_32FC3, 1.0/255.0f);
cvtColor(ROIfloat,labFloat,CV_BGR2Lab);
split(labFloat,lab_planes);
Scalar meanVals = mean(labFloat);
// valA is the mean of the A channel (offset to add/subtract)
// valB is the mean of the B channel (offset to add/subtract)
double valA = meanVals(1);
double valB = meanVals(2);
EDIT:
I just want to add that I also started color-cast correcting only the spot areas that I needed instead of the entire image. Different areas of an image have a different cast based on lighting etc., so it makes sense to correct small areas at a time. This turned out to be a lot more accurate and gave better results.
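As a rough illustration of that idea in Python/OpenCV (the function name and ROI coordinates are hypothetical), the cast is measured and removed inside just one sub-rectangle rather than across the whole frame:

import cv2
import numpy as np

def correct_cast_roi(img_bgr, x, y, w, h):
    # Neutralise the colour cast inside one ROI by re-centring its a/b channels.
    roi = img_bgr[y:y + h, x:x + w].astype(np.float32) / 255.0
    lab = cv2.cvtColor(roi, cv2.COLOR_BGR2Lab)
    l, a, b = cv2.split(lab)
    a -= a.mean()  # shift the a channel back to neutral (0 in float Lab)
    b -= b.mean()  # shift the b channel back to neutral (0 in float Lab)
    corrected = cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_Lab2BGR)
    img_bgr[y:y + h, x:x + w] = np.clip(corrected * 255.0, 0, 255).astype(np.uint8)
    return img_bgr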
I have a picture with a black background and gray objects, and I want to filter out the objects. But the uneven lighting makes it impossible to simply look for a single color: it either misses part of the object or picks up the background as well.
If someone could give a hint or an example in C#, that would be nice.
Sorry, I'm using VB.NET, but I hope I can help you.
Dim myBitmap As New Bitmap("Grapes.jpg")
' X and Y are the coordinates of the pixel you want to test.
Dim pixelColor As Color = myBitmap.GetPixel(X, Y)
If pixelColor.ToArgb() = Color.Black.ToArgb() Then
    myBitmap.SetPixel(X, Y, Color.Blue)
End If
If the objects themselves contain black, I guess you'll have to find another way to pick out the object, for example by saving the object's area.
Without an example image it is hard to guess, but for a start you should look into histogram equalization. This method balances the lighting in the image.
Wikipedia has a nice example of what histogram equalization is capable of.
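If it helps, histogram equalization is essentially a one-liner in OpenCV; a quick Python sketch (OpenCV also has C# wrappers, e.g. Emgu CV, that expose the same functionality, and the file name here is a placeholder):

import cv2

gray = cv2.imread("objects.png", cv2.IMREAD_GRAYSCALE)
equalized = cv2.equalizeHist(gray)  # global histogram equalization
# CLAHE (local equalization) often behaves better under uneven lighting:
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
local_eq = clahe.apply(gray)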
I have a project to customize clothes (let's say a t-shirt) that has the following features:
change colors.
add a few lines of text (<= 4) and change the font from a list.
add image or photo to the t-shirt.
rotate the t-shirt to customize the back side.
rotate the image and zoom in/out.
save the result as a project locally and send it to a web service (I'm thinking of using NSDictionary/JSON).
save as an image.
So my question is:
Should I use multiple images to simulate color changes, or should I use QuartzCore (I am not an expert in QuartzCore, but if I have to use it I'll learn), or is there a better approach for this?
Thank you.
The simple way to do this is to render the T-Shirt image into a CGContext, then walk the rows and columns and change pixels showing a "strong" primary color to the desired tint. You would take a photo of a person wearing a bright red (or other primary color) t-shirt, then in your code only change pixels where the red color has a high luminance and saturation (i.e. the "r" value is over some threshold and the b and g components are low).
The modified image is then going to look a bit flat, as when you change the pixels to one value (the new tint) there will be no variation in luminance. To make this more realistic, you would want to make each pixel have the same luminance as it did before. You can do this by converting back and forth from RGB to a color space like HCL. Apple has a great doc on color (in the Mac section) that explains color spaces (google 'site:developer.apple.com "Color Spaces"')
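The per-pixel idea is platform-independent. As a rough sketch (in Python/NumPy rather than Quartz, with made-up thresholds, and using the HSV value channel as a stand-in for the luminance-preserving colour space discussed above):

import cv2
import numpy as np

def retint_shirt(img_bgr, new_bgr, sat_thresh=0.4, val_thresh=0.2):
    # Replace strongly saturated red pixels with a new tint while keeping each
    # pixel's original brightness, so the shading of the fabric survives.
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = hsv[..., 0], hsv[..., 1] / 255.0, hsv[..., 2] / 255.0

    # "Strong red": hue near 0/180 in OpenCV's 0-179 hue range, decent saturation and value.
    red_hue = (h < 10) | (h > 170)
    mask = red_hue & (s > sat_thresh) & (v > val_thresh)

    new_hsv = cv2.cvtColor(np.uint8([[new_bgr]]), cv2.COLOR_BGR2HSV)[0, 0]
    hsv[..., 0][mask] = new_hsv[0]  # new hue
    hsv[..., 1][mask] = new_hsv[1]  # new saturation
    # hsv[..., 2] (brightness) is deliberately left untouched.
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)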
To reach your goal, you will have to tackle these technologies:
create a CGContext and render an image into it using Quartz
figure out how to read each pixel (pixels can have alpha and different orderings)
figure out a good way to identify the proper pixels (test by making these black or white)
for each pixel you want to change, convert the RGB to HCL to get its luminance
replace the pixel with one of a different color and hue but the same luminance
use the CGContext to make a new image
If all this seems too difficult, then you'll have to have a different image for every color you want.
My usual method of 100% contrast plus some brightness adjustment to tweak the cutoff point works reasonably well for cleaning up photos of small sub-circuits or equations for posting on E&R.SE; however, sometimes it's not quite that great, as with this image:
What other methods besides contrast (or instead of) can I use to give me a more consistent output?
I'm expecting a fairly general answer, but I'll probably implement it in a script (that I can just dump files into) using ImageMagick and/or PIL (Python) so if you have anything specific to them it would be welcome.
Ideally a better source image would be nice, but I occasionally use this on other folks' images to add some polish.
The first step is to equalize the illumination differences in the image while taking into account the white balance issues. The theory here is that the brightest part of the image within a limited area represents white. By blurring the image beforehand we eliminate the influence of noise in the image.
from PIL import Image
from PIL import ImageFilter

im = Image.open(r'c:\temp\temp.png')
# Estimate the local white point: blur to suppress noise, then take the local maximum.
white = im.filter(ImageFilter.BLUR).filter(ImageFilter.MaxFilter(15))
The next step is to create a grey-scale image from the RGB input. By scaling to the white point we correct for white balance issues. By taking the max of R,G,B we de-emphasize any color that isn't a pure grey such as the blue lines of the grid. The first line of code presented here is a dummy, to create an image of the correct size and format.
grey = im.convert('L')
width, height = im.size
impix = im.load()
whitepix = white.load()
greypix = grey.load()
for y in range(height):
    for x in range(width):
        # Scale each channel by the local white point and keep the strongest channel;
        # int() keeps this working under Python 3's true division.
        greypix[x, y] = min(255, int(max(255 * impix[x, y][0] / whitepix[x, y][0],
                                         255 * impix[x, y][1] / whitepix[x, y][1],
                                         255 * impix[x, y][2] / whitepix[x, y][2])))
The result of these operations is an image that has mostly consistent values and can be converted to black and white via a simple threshold.
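For completeness, the final threshold step might look like this (the cutoff of 200 is just a starting point to tune by eye, and the output path is arbitrary):

bw = grey.point(lambda v: 255 if v > 200 else 0, mode='1')
bw.save(r'c:\temp\cleaned.png')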
Edit: It's nice to see a little competition. nikie has proposed a very similar approach, using subtraction instead of scaling to remove the variations in the white level. My method increases the contrast in the regions with poor lighting, and nikie's method does not - which method you prefer will depend on whether there is information in the poorly lighted areas which you wish to retain.
My attempt to recreate this approach resulted in this:
for y in range(height):
    for x in range(width):
        # Subtractive variant: offset by the local white point instead of scaling by it.
        greypix[x, y] = min(255, max(255 + impix[x, y][0] - whitepix[x, y][0],
                                     255 + impix[x, y][1] - whitepix[x, y][1],
                                     255 + impix[x, y][2] - whitepix[x, y][2]))
I'm working on a combination of techniques to deliver an even better result, but it's not quite ready yet.
One common way to remove uneven background illumination is to estimate a "white image" from the image with a morphological filter: a dilation followed by an erosion removes the dark lines and leaves an estimate of the bright background.
In this sample Octave code, I've used the blue channel of the image, because the lines in the background are least prominent in this channel (EDITED: using a circular structuring element produces fewer visual artifacts than a simple box):
src = imread('lines.png');
blue = src(:,:,3);
% Circular (disk-shaped) structuring element.
mask = fspecial("disk", 10) > 0;
% Dilate then erode: removes the dark lines, leaving an estimate of the background.
opened = imerode(imdilate(blue, mask), mask);
Result:
Then subtract this from the source image:
background_subtracted = opened-blue;
(contrast enhanced version)
Finally, I'd just binarize the image with a fixed threshold:
binary = background_subtracted < 35;
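If you'd rather stay in Python, the same pipeline translates fairly directly to OpenCV (the structuring-element size and the threshold of 35 are carried over from above):

import cv2
import numpy as np

src = cv2.imread('lines.png')
blue = src[:, :, 0]  # OpenCV loads images as BGR, so index 0 is the blue channel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (21, 21))
closed = cv2.morphologyEx(blue, cv2.MORPH_CLOSE, kernel)  # dilate then erode
background_subtracted = cv2.subtract(closed, blue)
binary = np.uint8(background_subtracted < 35) * 255  # white for paper, black for the dark lines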
How about detecting edges? That should pick up the line drawings.
Here's the result of Sobel edge detection on your image:
If you then threshold the image (using either an empirically determined threshold or Otsu's method), you can clean up the image using morphological operations (e.g. dilation and erosion). That will help you get rid of broken/double lines.
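A short OpenCV (Python) sketch of that pipeline (the file name, the Otsu threshold, and the 3x3 closing kernel are placeholders to tune):

import cv2
import numpy as np

gray = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
magnitude = np.uint8(np.clip(cv2.magnitude(gx, gy), 0, 255))
# Otsu picks the threshold automatically from the gradient-magnitude histogram.
_, edges = cv2.threshold(magnitude, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# A small morphological closing joins broken/double lines.
kernel = np.ones((3, 3), np.uint8)
cleaned = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)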
As Lambert pointed out, you can pre-process the image using the blue channel to get rid of the grid lines if you don't want them in your result.
You will also get better results if you light the page evenly before you image it (or just use a scanner), because then you don't have to worry about global vs. local thresholding as much.