I am trying to detect the white shapes in an object and can successfully do it for one video.
// Create and display a new matrix for triangles
triangles = src.clone();
GaussianBlur(triangles, triangles, Size(5, 5), 0, 0);
inRange(triangles, Scalar(150,150,150), Scalar(255, 255, 255), triangles);
imshow("triangles", triangles);
This gives me the result
http://s8.postimg.org/o9xg284jp/triangles.png
However, if I use a different video - then the scalar value of 150 may not be appropriate (for example if it is a light environment... everything gets detected)
http://s8.postimg.org/m09brgvlx/bad_triangles.png
For this video I would need to change the minimum scalar to be around 190-200 for it to work properly. My question: is there a good way to determine the correct scalar value to use? I know it sounds simple to some, but I've got a headache because of it!
http://colorizer.org/
If you check here you can see what your problem is: RGB = (255, 155, 155) is probably not "white", but your inRange call gives a true output for it.
Try the HSL color space instead. Lightness above 90% is white for sure, no matter what the H and S channel values are. Use a BGR2HLS conversion, then use inRange with the L channel between 90% and 100%.
Actually, for color detection problems, the most commonly used color spaces are HSV and HSL, not RGB!
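A minimal sketch of this approach (my own illustration, not the asker's code; "frame.png" is a placeholder). Note that in OpenCV's 8-bit HLS representation the channels are ordered H, L, S and L spans 0-255, so 90% lightness corresponds to roughly 230:
import cv2
import numpy as np
frame = cv2.imread("frame.png")  # hypothetical input frame
hls = cv2.cvtColor(frame, cv2.COLOR_BGR2HLS)
# Channel order is H, L, S: accept any hue/saturation, require L >= ~90%
mask = cv2.inRange(hls, np.array([0, 230, 0]), np.array([255, 255, 255]))
cv2.imshow("white regions", mask)
cv2.waitKey(0)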
There is probably no way to automatically determine a threshold that works for all kinds of videos. But to make it less dependent on the overall lighting of the video, you could make it depend on the mean or median pixel value of the image.
Or if you know how big your object appears in the image, you could choose the threshold accordingly.
Another approach could be to normalize the brightness of the video.
But which approach is best depends strongly on your exact situation and requirements.
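For instance, a rough sketch of the mean/median idea (the +60 offset and the file name are made-up starting points you would tune for your videos):
import cv2
import numpy as np
frame = cv2.imread("frame.png")  # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Tie the threshold to the overall brightness instead of hard-coding 150
baseline = np.median(gray)       # or gray.mean()
thresh_value = min(baseline + 60, 255)
_, mask = cv2.threshold(gray, thresh_value, 255, cv2.THRESH_BINARY)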
Related
I apologize in advance if a question like this has already been answered. All of my searches for adding filters turned up results on how to add dog faces; I wasn't sure what the proper terminology is.
What techniques do phone apps (such as Snapchat's text overlay or a QR code program for Android) use to "darken" a section of the image? I am looking to replicate this functionality with OpenCV. Is it possible to do this with other colors? (Such as adding a blue accent.)
Example: Snapchat text overlay
https://i.imgur.com/9NHfiBY.jpg
Another Example: Google Allo QR code search
https://i.imgur.com/JnMzvWT.jpg
Any questions or comments would be appreciated
In General:
A change of brightness can be achieved via addition/subtraction.
If you want to brighten your image, you can add a specific amount (e.g. 20) to each channel of the image. The other way around (subtraction) darkens it.
If you subtract 50 from each channel of the image, you get:
To darken in a pixel-dependent way you can also use division. This is how a division by 1.5 changes the image:
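A small sketch of these operators in Python (my own illustration; the file name is a placeholder), clamping the results to the valid 0-255 range:
import cv2
import numpy as np
img = cv2.imread("input.jpg").astype(np.int16)   # hypothetical input image
brighter = np.uint8(np.clip(img + 20, 0, 255))   # addition brightens
darker = np.uint8(np.clip(img - 50, 0, 255))     # subtraction darkens
divided = np.uint8(img // 1.5)                   # division darkens pixel-dependently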
Another way is to use the exponential operator. This operator takes the value of each channel and calculates pixel^value. The resulting value is then scaled back to the 0-255 range (for 8-bit RGB) by finding the highest resulting value and calculating the scaling factor as 255 / highest value.
If you use it with values > 1, it will darken the image, because after rescaling, mid-range values end up proportionally closer to 0 while the brightest values stay near 255.
Here is a chart of how the exponential operator changes the value of each pixel. As you can see, operator values above 1 darken the image (the channels are shifted towards lower values), whilst values below 1 shift all pixels towards white and thus increase brightness.
Here is an example image for application of the operator using the value 0.5, meaning you take each pixel^0.5 and scale it back to the range of 0-255:
For a value of 2 you get:
Sadly I cannot help you further, because I am not familiar with OpenCV, but it should be easy enough to implement yourself.
For your question about tinting: yes, that is also possible. Instead of shifting towards white, you would shift the values of each pixel towards the respective color. I recommend reading up on blending.
Original image taken from here
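Here is a sketch of the exponential operator as described above (my own illustration; the file name is a placeholder):
import cv2
import numpy as np
def exponential_operator(img, value):
    # Raise each channel to the given power, then rescale so the
    # highest resulting value maps back to 255
    powered = img.astype(np.float64) ** value
    return np.uint8(powered * (255.0 / powered.max()))
img = cv2.imread("input.jpg")              # hypothetical input image
darker = exponential_operator(img, 2.0)    # values > 1 darken
brighter = exponential_operator(img, 0.5)  # values < 1 brighten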
Update: I was able to darken an image by blending an image matrix with a black matrix. After that, it was just a matter of darkening certain parts of the image to replicate an overlay.
The lower the alpha value is, the darker the image.
Result
void ApplyFilter(cv::Mat &inFrame, cv::Mat &outFrame, double alpha)
{
    // An all-black matrix of the same size and type as the input frame
    cv::Mat black = cv::Mat(inFrame.rows, inFrame.cols, inFrame.type(), 0.0);
    double beta = (1.0 - alpha);
    // outFrame = alpha * inFrame + beta * black, so a lower alpha darkens more
    cv::addWeighted(inFrame, alpha, black, beta, 0.0, outFrame);
}
https://docs.opencv.org/2.4/doc/tutorials/core/adding_images/adding_images.html
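For the blue-accent part of the question, the same addWeighted idea works with a colored matrix instead of a black one. A rough sketch in Python (the coordinates, color and file name are made-up examples):
import cv2
import numpy as np
img = cv2.imread("photo.jpg")  # hypothetical input image
roi = img[300:360, :]          # hypothetical overlay band
tint = np.zeros_like(roi)
tint[:] = (255, 0, 0)          # solid blue in BGR
alpha = 0.4                    # lower alpha = stronger tint
img[300:360, :] = cv2.addWeighted(roi, alpha, tint, 1 - alpha, 0)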
Thank you for the help everyone!
I would like to subtract one color from another. For example, I have two 100x100 pixel images, one with the color R:236 G:226 B:43 and another with R:63 G:85 B:235. I would like to subtract the color R:236 G:226 B:43 from R:63 G:85 B:235. But I know it can't be done as a plain per-channel subtraction (R: 236-63, G: 226-85, B: 43-235), because values below 0 or above 255 are undefined.
I found another color space, the RYB color space, but I don't know how it really works.
Thank you for your help.
You cannot actually subtract colors. But you surely can detect their difference. I suppose this is what you need, anyway.
Here are some thoughts and remarks:
Convert your images to the HSV colorspace, which transforms RGB values to Hue, Saturation and Brightness (Value).
All your images should be around a yellowish color (near 60 deg. on the Hue circle), so they should all have about the same Hue with minor differences.
Typically, if all images are taken under constant lighting conditions, they should have the same Value (brightness).
Saturation, which corresponds to the mixture of white in a color, typically represents how intense you perceive a color to be. This would typically be about the same for all your images under constant lighting conditions.
According to your first description, the main difference should be detected in the Hue channel.
A good thing about HSV is that H (hue) is represented by a counterclockwise circle and colors are just positions on this circle, so positive and negative values all make sense (search google for a description of HSV colorspace to get a view of how it looks and works).
You may either detect differences by a subtraction, which gives you a positive or negative value, or by taking the absolute value of the subtraction, which just gives a measure of the difference between the two Hue values (but without any information on the direction of the difference). If you need the direction of the difference, stick to a plain subtraction.
For example:
Hue_1 - Hue_2 = Hue_3 (typically a small value for your problem)
If Hue_3 > 0, Hue_1 is a bit towards Green.
If Hue_3 < 0, Hue_1 is a bit towards Red.
Of course you may also need to take a look at the differences in the other channels, S and V, to see if colors are more saturated or brighter, but I cannot be sure you need to do this since we haven't seen any images here.
Of course you can do a lot more sophisticated things, like applying clustering or classification techniques to the detected hues and classifying them into classes according to your problem's needs...
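As a sketch of the Hue subtraction (assuming OpenCV's 8-bit HSV, where hue runs 0-179, so the difference has to wrap around the circle; the file names are placeholders):
import cv2
import numpy as np
hue1 = cv2.cvtColor(cv2.imread("color1.png"), cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.int16)
hue2 = cv2.cvtColor(cv2.imread("color2.png"), cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.int16)
# Wrap the signed difference onto [-90, 90] so the sign still gives the direction
diff = (hue1 - hue2 + 90) % 180 - 90
print("mean hue difference:", diff.mean())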
I'm writing an Android app in OpenCV to detect blobs. One task is to threshold the image to differentiate the foreground objects from the background (see image).
It works fine as long as the image is known and I can manually pass a threshold value to threshold()--in this particular image say, 200. But assuming that the image is not known, with the only knowledge being that there is a dark solid background and lighter foreground objects, how can I dynamically figure out the threshold value?
I've come across the histogram, with which I can compute the intensity distribution of the grayscale image. But I couldn't find a method to analyze the histogram and choose the value where the objects of interest (the lighter ones) lie. That is, I want to distinguish the obviously dark background spikes from the lighter foreground spikes--in this case above 200, but in another case it could be, say, 100 if the objects are grayish.
If all your images are like this, or can be brought to this style, I think cv2.THRESH_OTSU, i.e. Otsu's thresholding algorithm, is a good shot.
Below is a sample using Python in command terminal :
>>> import cv2
>>> import numpy as np
>>> img2 = cv2.imread('D:\Abid_Rahman_K\work_space\sofeggs.jpg',0)
>>> ret,thresh = cv2.threshold(img2,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
>>> ret
122.0
ret is the threshold value, which is automatically calculated; we just pass 0 as the threshold value in this case.
I got 124 in GIMP (which is comparable to the result we got). And it also removes the noise. See the result below:
If you say that the background is dark (black) and the foreground is lighter, then I recommend using the YUV color space (or any other YXX-style space like YCrCb, etc.), because the first component of such color spaces is luminance (brightness).
So after the Y channel is extracted (via the extractChannel function), we need to analyse the histogram of this channel (image):
See the first (left) hump? It represents the dark areas (the background in your situation) in your image. So our aim now is to find a segment (on the abscissa; it's the red part in the image) that contains this hump. Obviously the left point of this segment is zero. The right point is the first point where:
the (local) maximum of the histogram lies to the left of the point
the value of the histogram is less than some small epsilon (you can set it to 10)
I drew a green vertical line to show the location of the right point of the segment in this histogram.
And that's it! This right point of the segment is the needed threshold. Here's the result (epsilon is 10 and the calculated threshold is 50):
I think that it's not a problem for you to delete the noise in the image above.
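A rough sketch of this histogram walk (my own reading of the answer; epsilon, the file name, and the assumption that the background hump sits in the dark half are all placeholders):
import cv2
import numpy as np
img = cv2.imread("blobs.png")
y = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)[:, :, 0]
hist = cv2.calcHist([y], [0], None, [256], [0, 256]).ravel()
epsilon = 10
peak = int(np.argmax(hist[:128]))  # assume the background hump is in the dark half
# First bin after the peak where the histogram drops below epsilon
threshold = next((i for i in range(peak, 256) if hist[i] < epsilon), 255)
_, mask = cv2.threshold(y, threshold, 255, cv2.THRESH_BINARY)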
The following is a C++ implementation of Abid's answer that works with OpenCV 3.x:
// Convert the source image to a 1-channel grayscale image:
Mat gray;
cvtColor(src, gray, COLOR_BGR2GRAY);
// Apply the threshold function with the THRESH_OTSU flag as well.
// You can skip having it return the value, but I include it to show
// the result from Otsu's method.
double thresholdValue = threshold(gray, gray, 0, 255, THRESH_BINARY | THRESH_OTSU);
// Print the threshold value
printf("Threshold value: %f\n", thresholdValue);
Running this against the original image, I get the following:
OpenCV calculated a threshold value of 122 for it, close to the value Abid found in his answer.
Just to verify, I altered the original image as seen here:
And produced the following, with a new threshold value of 178:
I'm attempting to convert some effects created in Photoshop into code for use with php/imagemagick. Right now I'm specifically interested in how to recreate Photoshop's RGB levels feature. I'm not really familiar with the Photoshop interface, but this is the info that I am given:
RGB Level Adjust
Input levels: Shadow 0, Midtone 0.92, Highlight 255
Output levels: Shadow 0, Highlight 255
What exactly are the input levels vs. the output levels? How would I translate this into ImageMagick? Below you can see what I have tried, but it does not correctly render the desired effect (converting Photoshop's 0-255 scale to 0-65535):
$im->levelImage(0, 0.92, 65535);
$im->levelImage(0, 1, 65535);
This was mostly a stab in the dark, since the parameter names don't line up and, for output levels, the number of parameters doesn't even match. Basically I don't understand exactly what is going on when Photoshop applies the adjustment. I think that's my biggest hurdle right now. Once I get that, I'll need to find corresponding effects in ImageMagick.
Can anyone shed some light on what's going on in Photoshop and how to replicate that with ImageMagick?
Shadows, Midtones and Highlights are colors that fall within a certain range of luminosity. For example, shadows are the lower range of the luminosity histogram, midtones are colors in the middle and highlights are the ones up high. However - you can't use a hard limit on these values, which is why you will have to use curves like these that weight the histogram (a pixel may lie in multiple ranges at the same time).
To adjust shadows, midtones and highlights separately, you will need to create a weighted sum per pixel that uses the current shadow, midtone and highlight values to create a resultant value.
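For illustration, one possible shape for such weights and the per-pixel sum (these triangle-shaped curves are my own made-up stand-ins, not Photoshop's actual ones):
import numpy as np
lum = np.linspace(0, 1, 256)
w_shadow = np.clip(1 - lum / 0.5, 0, 1)         # strongest near black
w_highlight = np.clip((lum - 0.5) / 0.5, 0, 1)  # strongest near white
w_midtone = 1 - w_shadow - w_highlight          # peaks in the middle
def adjust(lum_img, shadow_amt, midtone_amt, highlight_amt):
    # lum_img is a float image normalized to [0, 1]; each pixel's
    # luminosity picks its weights, which blend the three amounts
    idx = np.uint8(np.clip(lum_img, 0, 1) * 255)
    delta = (w_shadow[idx] * shadow_amt
             + w_midtone[idx] * midtone_amt
             + w_highlight[idx] * highlight_amt)
    return np.clip(lum_img + delta, 0, 1)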
I don't think you can do this directly using the ImageMagick APIs - perhaps you could simply write it as a filter.
Hope this helps.
So I stumbled across this website: http://www.fmwconcepts.com/imagemagick/levels/index.php
Based on the information there, I was able to come up with the following PHP, which seems pretty effective at simulating what Photoshop does with input and output levels and all that.
function levels($im, $inshadow, $midtone, $inhighlight, $outshadow, $outhighlight, $channel = self::CHANNEL_ALL) {
    // Stretch the input shadow/highlight range onto the full scale and apply the midtone gamma
    $im->levelImage($inshadow, $midtone, $inhighlight, $channel);
    // A second pass with out-of-range black/white points compresses the result into the output range
    $im->levelImage(-$outshadow, 1.0, 255 + (255 - $outhighlight), $channel);
}
This assumes that the parameters to levelImage for blackpoint and whitepoint are on a scale of 0-255. They might actually be 0-65535 on your system. If that's the case, it's easy enough to adjust. You can also check what value your setup uses with $im->getQuantumRange(), which returns an array with a string version and a long version of the quantum. From there it should be easy enough to normalize the values or just use the new range.
See the documentation: The first value is the black point (shadow) input value, the middle is a gamma (which I'm guessing is the same as Photoshop's midpoint), and the last is the white point (highlight) input value.
The output values are fixed at the quantum values of the image type, there's no need to specify them.
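Putting it together, here is a sketch of the full Photoshop-style levels mapping in one function (my own illustration on a 0-255 scale; it treats the midtone as a gamma exponent, which matches the two-pass levelImage trick above):
import numpy as np
def levels(img, in_shadow, midtone, in_highlight, out_shadow, out_highlight):
    x = img.astype(np.float64)
    # Stretch the input shadow/highlight range onto 0-1
    x = np.clip((x - in_shadow) / (in_highlight - in_shadow), 0, 1)
    # The midtone slider acts as a gamma
    x = x ** (1.0 / midtone)
    # Compress onto the output shadow/highlight range
    return np.uint8(x * (out_highlight - out_shadow) + out_shadow)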
I need robust motion detection and tracking in a webcam's video frames. The background is always the same. The aim is to identify the position of the object, if possible without the shadows, though removing shadows is not so urgent. I've tried the OpenCV algorithm for background subtraction and thresholding, but this depends on only one image as the background. What if the background changes a little bit in brightness (or the camera auto-focuses)? I need the algorithm to be robust to small changes such as brightness or some shadows.
Robust methods for tracking are part of broad research interests that are being developed all around the world...
Here are maybe some keys to solve your problem, which is very interesting but wide and open.
First, a lot of them assume brightness constancy (therefore what you ask is difficult to achieve). For instance:
Lucas-Kanade
Horn-Schunck
Block-matching
These are widely used for tracking but assume brightness constancy.
Then other interesting ones could be meanshift or camshift tracking, but you need a projection to follow... However, you can use a back-projection computed according to a certain threshold to fit your needs for robustness...
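A minimal camshift sketch along those lines (the webcam index, initial window and key handling are made-up examples):
import cv2
import numpy as np
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
x, y, w, h = 200, 150, 80, 80  # hypothetical initial object window
roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
# Hue histogram of the object, used for back-projection
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    box, window = cv2.CamShift(backproj, window, term)
    cv2.polylines(frame, [np.int32(cv2.boxPoints(box))], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:  # Esc to quit
        break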
I'll post later about that,
Julien,
When you try the thresholding in OpenCV are you doing this with RGB (red,green,blue) or HSV (hue,saturation,value) colour formats? From personal experience, I find the HSV encoding to be far superior for tracking coloured objects in video footage when used in conjunction with OpenCV for thresholding and cvBlobsLib for identifying the blob location.
HSV is easier since it has the advantage of only needing a single number to detect the colour (the "hue"), in spite of the very real probability of there being several shades of that colour, ranging from light to darker shades. (The amount of colour and the brightness of the colour are handled by the "saturation" and "value" parameters respectively.)
I threshold the HSV reference image ('imgHSV') to obtain a binary (black and white) image using a call to the cvInRangeS() OpenCV API:
cvInRangeS( imgHSV,
cvScalar( 104, 178, 70 ),
cvScalar( 130, 240, 124 ),
imgThresh );
In the above example, the two cvScalar parameters are the lower and upper bounds of HSV values that represent hues that are blueish in colour. In my own experiments I was able to obtain some suitable max/min values by grabbing screenshots of the object(s) I was interested in tracking and observing the kinds of hue/saturation/value combinations that occurred.
More detailed descriptions with a code sample can be found on this blog posting.
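For reference, a rough modern-API (cv2) equivalent of the legacy call above, with the same blueish bounds (the input file is a placeholder):
import cv2
import numpy as np
imgHSV = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2HSV)
imgThresh = cv2.inRange(imgHSV, np.array([104, 178, 70]), np.array([130, 240, 124]))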
Adrian has a cool tutorial: http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/
I followed it and had a good experimental test:
https://youtu.be/HJBOOZVefXA
I used a static image as well.
# Difference between the first (background) frame and the current gray frame
frameDelta = cv2.absdiff(firstFrame, gray)
# Keep only pixels that changed by more than 25
thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]
# Dilate to fill in holes, then find the outlines of the moving regions
thresh = cv2.dilate(thresh, None, iterations=2)
(cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
These four lines of code detect motion well.
good luck