OpenLayers 3: put one layer in grayscale without changing other layers

I have a tricky question concerning the possibility of putting layers in grayscale with OpenLayers 3.
I have already managed to put the whole map in grayscale using the capabilities of the canvas element, as can be seen in the examples proposed in the following discussion: OpenLayers 3 - tile canvas context
But my need is slightly different from these examples: I want to give users the possibility to put layers in grayscale one by one, without changing the colour of the other layers. The idea is, for instance, to have one background layer in grayscale with other data in colour on top of it.
Does anyone know how this can be achieved?
Thanks

ol.source.Raster is what you are looking for. Here is an example.
var raster = new ol.source.Raster({
  sources: [new ol.source.Stamen({
    layer: 'watercolor'
  })],
  operation: function(pixels, data) {
    // convert the pixel to grayscale using the usual luminance weights
    var pixel = pixels[0];
    var luma = 0.299 * pixel[0] + 0.587 * pixel[1] + 0.114 * pixel[2];
    pixel[0] = luma;
    pixel[1] = luma;
    pixel[2] = luma;
    return pixel;
  }
});
This enables you to manipulate the pixel data of arbitrary sources on a per-layer basis; display the raster source through an ol.layer.Image and the other layers keep their original colours.

Related

CIFilters for compositing Red, Green, and Blue color channels

I am building a "Curves" editor for an image and would like to split out each color channel to run through the CIToneCurve filter before compositing them back together into a single color image. (I'm aware of the CIColorCurves filter, but that doesn't give me the control I want.)
I am able to separate the channels using three separate CIColorCube filters to generate the 3 separate color channels, but I'm not sure how to put them back together to form a single color image.
Using the maximum and minimum compositing filters (CIMaximumCompositing / CIMinimumCompositing) works, but when I run the individual color photos through the tone curve, adjusting the highs or the lows (depending on which compositing filter I used) messes up the colors.
You could do this with Accelerate.vImage.
Apple has an article that discusses converting an interleaved image to separate planar buffers: https://developer.apple.com/documentation/accelerate/optimizing_image_processing_performance
...and there's an article that discusses vImage / Core Image interoperability using CIImageProcessorKernel: https://developer.apple.com/documentation/accelerate/reading_from_and_writing_to_core_video_pixel_buffers. I can't remember if CIImageProcessorKernel supports single channel 8-bit images such as R8.
...also, this Apple sample code project may be of interest: Applying Tone Curve Adjustments to Images.
Ended up following the suggestion posted by Frank Schlegel and using simple additive compositing. I had to write my own CIFilter to do that, but it was quite simple.
half4 rgbaComposite(sample_h redColor, sample_h greenColor, sample_h blueColor, sample_h alphaColor) {
    // take one channel from each single-channel input and reassemble an RGBA pixel
    return half4(redColor.r, greenColor.g, blueColor.b, alphaColor.a);
}
This is for a Metal-backed CIFilter. Each input is assumed to contain only a single color channel.

How do I recolour a photo using OpenCV?

I have a grayscale photo that I am trying to colour programmatically so that it looks 'real', with user input 'painting' the colour (e.g. red). It feels like it should be simple, but I've been stuck trying a few ways that don't look right, so I thought I'd ask the community in case I've missed something obvious. I've tried the following:
Converting to HSV, and combining the hue and saturation from the colour selected by the user with the value from the image.
Building a colour transformation matrix to multiply the BGR values (i.e. R = 0.8R + 1.1G + 1.0B). This works well for 'tinting' and adds a nice pastel effect, but doesn't really keep the depth or boldness of colour I want.
(favourite so far - see answers) Multiplying the RGB of the chosen colour by the RGB of the image.
To add to the comment by user Alexander Reynolds, the question that you're asking is a known open research problem in the field of computer graphics, because the problem is under-constrained without using statistical priors of some sort. The state of the art in the CG community is found here, presented at SIGGRAPH 2016.
http://hi.cs.waseda.ac.jp/~iizuka/projects/colorization/en/
Also see:
http://richzhang.github.io/colorization/
I've had another think and a play with Photoshop, and implemented a multiply blend mode in BGR space to get an OK result.
Implemented in Java:
Mat multiplyBlend(Mat values, Mat colours) { // values: 1 channel, colours: 3 channels
    // simulates BGR Multiply blend mode
    ArrayList<Mat> splitColours = new ArrayList<Mat>();
    Core.split(colours, splitColours);
    Core.multiply(values, splitColours.get(0), splitColours.get(0), 1 / 255.0f);
    Core.multiply(values, splitColours.get(1), splitColours.get(1), 1 / 255.0f);
    Core.multiply(values, splitColours.get(2), splitColours.get(2), 1 / 255.0f);
    Mat ret = new Mat();
    Core.merge(splitColours, ret);
    return ret;
}
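For anyone working from Python rather than Java, a rough NumPy equivalent of the same multiply blend might look like this (the function name and the explicit clipping are my own choices):
import numpy as np

def multiply_blend(values, colours):
    # values: single-channel uint8 image; colours: 3-channel BGR uint8 image
    scale = values.astype(np.float32) / 255.0
    blended = colours.astype(np.float32) * scale[:, :, np.newaxis]
    return np.clip(blended, 0, 255).astype(np.uint8)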

How can I change the hue of a UIImage programmatically in only a few parts?

How can I change the hue of a UIImage programmatically in only a few parts? I have followed this link:
How to programmatically change the hue of UIImage?
and used the same code in my application. It works fine, but the hue of the entire image gets changed. What I need is to change only the colour of the tree in the snapshot above. How can I do that?
This is a specific case of a more general problem of using masking. I assume you have some way of knowing what pixels are in the "tree" part, and which ones are not. (If not, that's a whole other question/problem).
If so, first draw the original to the result context, then create a mask (see here: http://mobiledevelopertips.com/cocoa/how-to-mask-an-image.html), and draw the changed-hue version with the mask representing the tree active.
I recommend you take a look at the Core Image API and the CIColorCube or CIColorMap filter in particular. How to define the color cube or color map is where the real magic lies: you'll need to transform tree tones (browns, etc.), though this will obviously transform all browns, not just your tree.

What processing steps should I use to clean photos of line drawings?

My usual method of 100% contrast and some brightness adjustment to tweak the cutoff point works reasonably well to clean up photos of small sub-circuits or equations for posting on E&R.SE; however, sometimes it's not quite that great, as with this image:
What other methods, besides (or instead of) contrast, can I use to get a more consistent output?
I'm expecting a fairly general answer, but I'll probably implement it in a script (that I can just dump files into) using ImageMagick and/or PIL (Python), so anything specific to them would be welcome.
Ideally a better source image would be nice, but I occasionally use this on other folks' images to add some polish.
The first step is to equalize the illumination differences in the image while taking into account the white balance issues. The theory here is that the brightest part of the image within a limited area represents white. By blurring the image beforehand we eliminate the influence of noise in the image.
from PIL import Image
from PIL import ImageFilter

im = Image.open(r'c:\temp\temp.png')
# blur to suppress noise, then take a local maximum to estimate the white level
white = im.filter(ImageFilter.BLUR).filter(ImageFilter.MaxFilter(15))
The next step is to create a grey-scale image from the RGB input. By scaling to the white point we correct for white balance issues. By taking the max of R,G,B we de-emphasize any color that isn't a pure grey such as the blue lines of the grid. The first line of code presented here is a dummy, to create an image of the correct size and format.
grey = im.convert('L')
width, height = im.size
impix = im.load()
whitepix = white.load()
greypix = grey.load()
for y in range(height):
    for x in range(width):
        greypix[x, y] = int(min(255, max(255 * impix[x, y][0] / whitepix[x, y][0],
                                         255 * impix[x, y][1] / whitepix[x, y][1],
                                         255 * impix[x, y][2] / whitepix[x, y][2])))
The result of these operations is an image that has mostly consistent values and can be converted to black and white via a simple threshold.
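For completeness, that final threshold might look like this in PIL (the cutoff of 200 is an assumed value to tune per image):
# assumed cutoff; tune per image
bw = grey.point(lambda v: 255 if v > 200 else 0).convert('1')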
Edit: It's nice to see a little competition. nikie has proposed a very similar approach, using subtraction instead of scaling to remove the variations in the white level. My method increases the contrast in regions with poor lighting and nikie's does not; which method you prefer will depend on whether there is information in the poorly lit areas that you wish to retain.
My attempt to recreate this approach resulted in this:
for y in range(height):
    for x in range(width):
        greypix[x, y] = min(255, max(255 + impix[x, y][0] - whitepix[x, y][0],
                                     255 + impix[x, y][1] - whitepix[x, y][1],
                                     255 + impix[x, y][2] - whitepix[x, y][2]))
I'm working on a combination of techniques to deliver an even better result, but it's not quite ready yet.
One common way to remove the varying background illumination is to calculate a "white image" from the photo with a morphological close (a dilation followed by an erosion), which estimates the bright background.
In this sample Octave code, I've used the blue channel of the image, because the lines in the background are least prominent in this channel (EDITED: using a circular structuring element produces fewer visual artifacts than a simple box):
src = imread('lines.png');
blue = src(:,:,3);
mask = fspecial("disk", 10);                   % circular structuring element
opened = imerode(imdilate(blue, mask), mask);  % dilate, then erode, to estimate the background
Result:
Then subtract the source image from it:
background_subtracted = opened - blue;
(contrast enhanced version)
Finally, I'd just binarize the image with a fixed threshold:
binary = background_subtracted < 35;
How about detecting edges? That should pick up the line drawings.
Here's the result of Sobel edge detection on your image:
If you then threshold the image (using either an empirically determined threshold or Otsu's method), you can clean up the image using morphological operations (e.g. dilation and erosion). That will help you get rid of broken/double lines.
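A rough sketch of that pipeline in Python with OpenCV (the 3x3 kernel and the choice of a closing operation are assumptions to tune):
import cv2
import numpy as np

img = cv2.imread('lines.png', cv2.IMREAD_GRAYSCALE)

# Sobel gradient magnitude picks up the drawn lines as edges
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# Otsu's method chooses the threshold automatically
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# a morphological closing joins broken/double lines
kernel = np.ones((3, 3), np.uint8)
cleaned = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)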
As Lambert pointed out, you can pre-process the image using the blue channel to get rid of the grid lines if you don't want them in your result.
You will also get better results if you light the page evenly before you photograph it (or just use a scanner), because then you don't have to worry about global vs. local thresholding as much.

overlaying images when displaying in OpenCV

I have two images that I want to display on top of each other: one is a single-channel image and the second is an RGB image, but with most of its area transparent.
The two images are generated in different functions. I know that to display images on top of each other I can use the same window name when calling cvShowImage(), but this doesn't work when they are drawn from different functions. When I tried this, I used cvCvtColor() to convert the binary image from single-channel to RGB and then displayed the second image from the other function, but it didn't work. Both images have the same dimensions, depth and number of channels (after conversion).
I want to avoid passing one image into the second function and drawing them there, so I'm looking for a quick and dirty trick to display these two images overlapped.
Thank you
I don't think that's possible. You'll have to create a new image or modify an existing one. Here's an article that shows how to do this: Transparent image overlays in OpenCV
There is no way to "overlay" images. cvShowImage() displays a single image from memory. You'll need to blend/combine them together. There are several ways to do this.
You can copy one into one or two channels of the other; you can use logical operations like AND, OR, or XOR; or you can use arithmetic operations like Add, Multiply, and MultiplyScale (these operations saturate values larger than 255). All of these can also be done with an optional mask image, like your blob image.
Naturally, you may want to do this into a third buffer so as not to overwrite your originals.
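As an illustration, a minimal sketch with the modern OpenCV Python bindings (the file names are hypothetical, and the mask is assumed to mark the opaque area of the overlay):
import cv2

background = cv2.imread('background.png', cv2.IMREAD_GRAYSCALE)  # single-channel image
overlay = cv2.imread('overlay.png')                              # 3-channel image
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)              # non-zero where the overlay is opaque

# blend into a third buffer so the originals stay untouched
combined = cv2.cvtColor(background, cv2.COLOR_GRAY2BGR)
combined[mask > 0] = overlay[mask > 0]

cv2.imshow('combined', combined)
cv2.waitKey(0)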
Apparently it can now be done using OpenCV 2.1:
http://opencv.willowgarage.com/documentation/cpp/highgui_qt_new_functions.html#cv-displayoverlay
