Desaturate tile layer in OpenLayers 3

I have an OL3 map with one tile layer and one vector layer. Since the features on the vector layer don't stand out enough against the tile layer in the background, I want to desaturate the tile layer.
I'm aware of the Hue/Saturation Example, but this approach works only with WebGL. WebGL in turn does not support vector layers.
How can I desaturate an OpenLayers 3 tile layer when using the canvas renderer?
NOTE: I cannot desaturate the tiles on the server, because I don't control the server that hosts the tiles.

The comment by @tsauerwein pointed me in the right direction; here is the solution.
The OL3 color manipulation example nicely shows how color manipulation can be applied to a source. The important piece of the puzzle is the Raster source: a kind of proxy source that loads data from another source and can apply manipulations before rendering.
var rasterSource = new ol.source.Raster({
  sources: [
    // original source here, e.g. ol.source.WMTS
  ],
  operation: (pixels, data) => {
    var pixel = pixels[0];
    var lightness = pixel[0] * 0.3 + pixel[1] * 0.59 + pixel[2] * 0.11;
    return [lightness, lightness, lightness, pixel[3]];
  }
});
Here, the operation works on each pixel that is about to be rendered. It combines the R, G, and B components of the pixel into a lightness value. It then returns a new RGBA pixel by using the lightness for RGB (resulting in some grayscale value) and copying the alpha value from the original pixel.

Related

Can't determine document edges from camera with OpenCV

I need to find the edges of a document that is in the user's hands.
1) Original image from camera:
2) Then I convert the image to grayscale:
3) Then I apply a blur:
4) Then I find edges in the image using Canny:
5) And apply dilate:
As you can see in the last image, the contour around the map is torn and cannot be determined. What is my mistake, and how do I solve the problem so that the outline of the document is determined completely?
This is the code I use to do it:
final Mat mat = new Mat();
sourceMat.copyTo(mat);
// convert the image to grayscale
Imgproc.cvtColor(mat, mat, Imgproc.COLOR_BGR2GRAY);
// blur to improve edge detection
Imgproc.GaussianBlur(mat, mat, new Size(5, 5), 0);
if (isClicked) saveImageFromMat(mat, "blur", "blur");
// detect edges with Canny (produces an 8-bit binary edge map)
int thresh = 128;
Imgproc.Canny(mat, mat, thresh, thresh * 2);
// dilate helps to connect nearby line segments
Imgproc.dilate(mat, mat,
        Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3)),
        new Point(-1, -1),
        2,              // iterations
        1,              // borderType (BORDER_REPLICATE)
        new Scalar(1)); // borderValue
This answer is based on my above comment. If someone is holding the document, you cannot see the edge that is behind the user's hand. So, any method for detecting the outline of the document must be robust to some missing parts of the edge.
I suggest using a variant of the Hough transform to detect the document. The Wikipedia article about the Hough transform makes it sound quite scary (as Wikipedia often does with mathematical subjects), but don't be discouraged; it is actually not too difficult to understand or implement.
The original Hough transform detected straight lines in images. As explained in this OpenCV tutorial, any straight line in an image can be defined by 2 parameters: an angle θ and a distance r of the line from the origin. So you quantize these 2 parameters, and create a 2D array with one cell for every possible line that could be present in your image. (The finer the quantization you use, the larger the array you will need, but the more accurate the position of the found lines will be.) Initialize the array to zeros. Then, for every pixel that is part of an edge detected by Canny, you determine every line (θ,r) that the pixel could be part of, and increment the corresponding bin. After processing all pixels, you will have, for each bin, a count of how many pixels were detected on the line corresponding to that bin. Counts which are high enough probably represent real lines in the image, even if parts of the line are missing. So you just scan through the bins to find bins which exceed the threshold.
OpenCV contains Hough detectors for straight lines and circles, but not for rectangles. You could either use the line detector and check for 4 lines that form the edges of your document, or you could write your own Hough detector for rectangles, perhaps using the paper Jung 2004 for inspiration. Rectangles have at least 5 degrees of freedom (2D position, scale, aspect ratio, and rotation angle), and the memory requirement for a 5D array obviously grows quickly. But since the range of each parameter is limited (i.e., the document's aspect ratio is known, and you can assume the document will be well centered and not rotated much), it is probably feasible.
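For illustration, here is a minimal Python/OpenCV sketch of the line-detector route, using the probabilistic Hough transform on the Canny output. The file name and all thresholds (Canny thresholds, vote threshold, minimum line length, maximum gap) are assumptions that will need tuning for real camera frames.
import cv2
import numpy as np

# Hedged sketch: detect long straight-line candidates for the document edges.
img = cv2.imread("document.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 128, 256)

# Each (rho, theta) bin accumulates votes; strong lines survive even when a
# hand hides part of the edge.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
From the surviving lines you would then look for a set of 4 that form a plausible rectangle.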

How to achieve adaptive threshold filter with color

I'm looking for an algorithm similar to adaptive thresholding, but one that keeps color. I'm trying to take an image like this:
And make it look like this:
If it matters, I'm working on iOS.
Here's a CIKernel that works well on your sample image
kernel vec4 coreImageKernel (sampler i)
{
    vec2 dc = destCoord();
    // center pixel color
    vec4 c = unpremultiply(sample(i, samplerTransform(i, dc + vec2(0.0, 0.0))));
    // for a whiteboard, the max of a neighborhood is likely to be the color
    // of the whiteboard
    vec4 cmax = c;
    cmax = max(unpremultiply(sample(i, samplerTransform(i, dc + vec2(10.0, 0.0)))), cmax);
    cmax = max(unpremultiply(sample(i, samplerTransform(i, dc + vec2(-10.0, 0.0)))), cmax);
    cmax = max(unpremultiply(sample(i, samplerTransform(i, dc + vec2(0.0, 10.0)))), cmax);
    cmax = max(unpremultiply(sample(i, samplerTransform(i, dc + vec2(0.0, -10.0)))), cmax);
    // normalize the center color according to the whiteboard color
    vec4 r = c / cmax;
    return premultiply(r);
}
So how does this work? Well, the first part of the kernel, the part that calculates cmax, computes the local color of the whiteboard. This is the tricky part. Basically, it determines (approximately) the color the whiteboard would be if there were no markings on it. To do this, the kernel makes three key assumptions:
the whiteboard color does not vary much locally
the markers subtract from the whiteboard color, and
for each pixel, either it or a nearby pixel (10 pixels N, S, E, or W) has no markings. In effect, the kernel assumes that marked lines are thinner than 10 pixels (though that constant could be adjusted).
Here's what the output of cmax looks like:
Once the local whiteboard color is approximated, it is just a matter of dividing the current pixel by the local background. This is similar to how a color cast is removed from an image.
This algorithm is similar to the Haze Removal example from the WWDC13 Core Image presentation. In that example a local min is subtracted to make blacker blacks. In this case a local max is divided to make whiter whites.
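For readers not on Core Image, here is a rough OpenCV/Python equivalent of the same idea (a sketch, not the CIKernel itself): approximate the local whiteboard color with a max filter, then divide the image by it. The file name and the 21-pixel window size are assumptions.
import cv2
import numpy as np

img = cv2.imread("whiteboard.jpg").astype(np.float32)  # hypothetical input
# Dilation with a rectangular kernel takes the per-channel local maximum,
# which approximates the local whiteboard color.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 21))
background = cv2.dilate(img, kernel)
# Dividing by the local background whitens the board and keeps marker hues.
normalized = np.clip(img / np.maximum(background, 1.0) * 255.0, 0, 255)
result = normalized.astype(np.uint8)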
Thresholding always results in a binary mask, i.e. pixels that are below the (local adaptive) threshold and pixels that are above. If you have that mask you can of course keep the color information of the original image.
Therefore a simple approach would result in the following workflow:
Start from the image with red, green, blue values
Produce a grayscale image by adding red + green + blue
Create a mask by applying a local adaptive threshold to the grayscale image
Apply the mask to the original image with red, green, blue values
Alternatively:
Start from the image with red, green, blue values
Create three masks for the image using only the red (or green or blue, respectively) values
Combine all three masks (logical AND) to obtain a single mask
Apply the mask to the original image with red, green, blue values
These two approaches might not be ideal, but they probably already work for a large number of cases, including the example in the question.
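As a rough illustration of the first workflow, here is a Python/OpenCV sketch: build the mask with an adaptive threshold on the grayscale image, then keep the original colors only where the mask marks ink. The file name, block size, and constant are assumptions to adjust.
import cv2

img = cv2.imread("whiteboard.jpg")  # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# THRESH_BINARY_INV makes the dark strokes white in the mask.
mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY_INV, 31, 10)
# Keep the original colors where the mask is set, white everywhere else.
result = cv2.bitwise_and(img, img, mask=mask)
result[mask == 0] = (255, 255, 255)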

Converting matches from 8-bit 4 channels to 64-bit 1 channel in OpenCV

I have a vector of Point2f which has color space CV_8UC4, and I need to convert them to CV_64F. Is the following code correct?
points1.convertTo(points1, CV_64F);
More details:
I am trying to use this function to calculate the essential matrix (rotation/translation) through the 5-point algorithm, instead of using the findFundamentalMat included in OpenCV, which is based on the 8-point algorithm:
https://github.com/prclibo/relative-pose-estimation/blob/master/five-point-nister/five-point.cpp#L69
As you can see, it first converts the image to CV_64F. My input image is a CV_8UC4 (BGRA) image. When I tested the function, both BGRA and greyscale images produced mathematically valid matrices, but if I pass a greyscale image instead of a color one, it takes much longer to compute, which makes me think I'm not doing something correctly in one of the two cases.
I read around that when the change in color space is not linear (which I suppose is the case when you go from 4 channels to 1 like in this case), you should normalize the intensity value. Is that correct? Which input should I give to this function?
Another note, the function is called like this in my code:
vector<Point2f> imgpts1, imgpts2;
for (vector<DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it)
{
    imgpts1.push_back(firstViewFeatures.second[it->queryIdx].pt);
    imgpts2.push_back(secondViewFeatures.second[it->trainIdx].pt);
}
Mat mask;
Mat E = findEssentialMat(imgpts1, imgpts2, [camera focal], [camera principal_point], CV_RANSAC, 0.999, 1, mask);
The fact I'm not passing a Mat, but a vector of Point2f instead, seems to create no problems, as it compiles and executes properly.
Is it the case I should store the matches in a Mat?
I am not sure what you mean by a vector of Point2f in some color space, but if you want to convert a vector of points into a vector of points of another type, you can use any standard C++/STL function like copy(), assign(), or insert(). For example:
doublePoints.resize(floatPoints.size()); // the destination must already have room
copy(floatPoints.begin(), floatPoints.end(), doublePoints.begin());
or
doublePoints.insert(doublePoints.end(), floatPoints.begin(), floatPoints.end());
No, it is not. A std::vector<cv::Point2f> cannot make use of the OpenCV convertTo function.
I think you really mean that you have a cv::Mat points1 of type CV_8UC4. Note that those are RxCx4 values (R and C being the number of rows and columns), while a CV_64F matrix holds RxC values only. So, you need to be clearer about how you want to transform those values.
You can do points1.convertTo(points1, CV_64FC4) to get an RxCx4 matrix.
Update:
Some remarks after you updated the question:
Note that a vector<cv::Point2f> is a vector of 2D points that is not associated with any particular color space; they are just coordinates in the image axes. So, they represent the same 2D points in a grey, RGB, or HSV image. Therefore, the execution time of findEssentialMat does not depend on the image color space. Getting the points might, though.
That said, I think your input to findEssentialMat is OK (the function takes care of the vectors and converts them into its internal representation). In cases like this, it is very useful to draw the points on your images to debug the code.
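For what it's worth, here is a small Python sketch of that debugging step (the equivalent calls exist in C++): overlay the matched points on both frames before calling findEssentialMat so bad matches stand out. The file names and coordinates are placeholders for your own data.
import cv2
import numpy as np

img1 = cv2.imread("frame1.png")  # hypothetical frames
img2 = cv2.imread("frame2.png")
# Placeholder points; in practice these come from your matcher.
imgpts1 = np.array([[100.5, 200.0], [320.0, 240.0]], dtype=np.float32)
imgpts2 = np.array([[110.0, 205.0], [330.0, 245.0]], dtype=np.float32)

for (x1, y1), (x2, y2) in zip(imgpts1, imgpts2):
    cv2.circle(img1, (int(x1), int(y1)), 4, (0, 0, 255), -1)
    cv2.circle(img2, (int(x2), int(y2)), 4, (0, 255, 0), -1)
cv2.imwrite("debug_pts1.png", img1)
cv2.imwrite("debug_pts2.png", img2)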

Convert image to grayscale with custom luminosity formula

I have images containing gray gradations and one other color. I'm trying to convert an image to grayscale with OpenCV, and I also want the colored pixels in the source image to become rather light in the output grayscale image, independently of the color itself.
The common luminosity formula is something like 0.299R + 0.587G + 0.114B according to the OpenCV docs, so it gives very different luminosity to different colors.
I think the solution is to set custom weights in the luminosity formula.
Is it possible in OpenCV? Or maybe there is a better way to perform such selective desaturation?
I use Python, but it doesn't matter.
This is the perfect case for the transform() function. You can treat grayscale conversion as applying a 1x3 matrix transformation to each pixel of the input image. The elements in this matrix are the coefficients for the blue, green, and red components, respectively, since OpenCV images are BGR by default.
import cv2
import numpy as np

im = cv2.imread(image_path)
coefficients = [1, 0, 0]  # gives the blue channel all the weight
# for standard gray conversion, coefficients = [0.114, 0.587, 0.299]
m = np.array(coefficients).reshape((1, 3))
blue = cv2.transform(im, m)
So you have a custom formula.
Load the source:
Mat src = imread(fileName, 1);
Create a gray image:
Mat gray(src.size(), CV_8UC1, Scalar(0));
Now, in a loop, access the BGR pixels of the source like this:
Vec3b bgrPixel = src.at<cv::Vec3b>(y, x); // BGR vector of type cv::Vec3b, accessed in row, column order
// bgrPixel[0] = blue
// bgrPixel[1] = green
// bgrPixel[2] = red
Calculate the new gray pixel value using your custom equation.
Finally, set the pixel value on the gray image:
gray.at<uchar>(y, x) = customIntensityValue; // your computed value, in row, column order

OpenCV haar training with images that have transparency

I'll be using OpenCV's cascade training functions.
But before that I need to prepare training data.
I just want to know whether OpenCV supports positive samples that have transparency. For example, if I want the classifier to learn how a vehicle looks, can I supply positive sample images that have vehicles standing on a transparent background?
As mentioned in the comments above, the Haar features are only computed on the grayscale image. As you mentioned, this might pose a problem when the transparent background defaults to 0 and causes the "wheels" to lose contrast. You can probably "standardize" the transparent color rather than have it default to 0.
First, load all 4 channels (including the alpha channel) and then use the alpha channel to set the transparent parts to a certain value.
Python version
import cv2

I = cv2.imread("image.png", cv2.CV_LOAD_IMAGE_UNCHANGED)  # use a format with an alpha channel, e.g. PNG
alpha = I[:, :, 3]
G = cv2.cvtColor(I, cv2.COLOR_BGRA2GRAY)
G[alpha == 0] = 125  # set the transparent region to 125; change to suit your needs
C++
vector<cv::Mat> channels;
cv::split(I, channels);
cv::Mat alpha = channels[3];
alpha = 255 - alpha; // invert the mask so we select the transparent regions
cv::Mat G;
cv::cvtColor(I, G, cv::COLOR_BGRA2GRAY);
G.setTo(cv::Scalar(125), alpha); // set the transparent region to 125
As a note of caution, I think you might have to be careful about some of the operations above, e.g., loading an image with alpha and "alpha = 255 - alpha;". I believe they are only available in later versions of OpenCV. I'm using OpenCV 2.4.7 and it works (for the Python version; I haven't tried the C++ version, but it should be the same). So if things don't work, check whether these operations are supported by your version of OpenCV. If not, there are ways to get around them.
