I'm making a simple camera app for iOS and Mac. After the user snaps a picture it generates a UIImage on iOS (an NSImage on Mac). I want to be able to highlight the areas in the image that are overexposed; basically, the overexposed areas would blink when the image is displayed.
Does anybody know an algorithm for telling which parts of the image are overexposed? Do I just add up the R, G, B values at each pixel, and if the total at a pixel is greater than a certain amount, start blinking that pixel, and do that for all pixels?
Or do I have to do some complicated math from outer space to figure it out?
Thanks
Roughly: you will have to traverse the image; depending on your desired accuracy and precision, you can combine skipping and averaging pixels to come up with a smooth region.
It will depend on the details of your color space, but imagine YUV space (because you only need to look at one value, the Y, or luminance):
if 240/255 is considered white, then a greater value, say 250/255, would be overexposed and you could mark it, then display it in an overlay.
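A minimal sketch of that thresholding idea in Python with NumPy (the Rec. 601 luma weights and the 250/255 cutoff are assumptions to tune; on iOS/Mac you would do the same per-pixel test in your own pipeline):

import numpy as np
from PIL import Image

def overexposure_mask(path, threshold=250):
    """Return a boolean mask that is True where a pixel's luma exceeds `threshold`."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    # Rec. 601 luma: Y = 0.299 R + 0.587 G + 0.114 B
    luma = rgb @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    return luma > threshold

The mask can then be drawn as a blinking overlay on top of the displayed image.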
I'm trying to blindly detect signals in a spectrum.
One way that came to my mind is to detect rectangles in the waterfall (a 2D matrix that can be interpreted as an image).
Is there any fast way (on the order of 0.1 seconds) to find the center and width of all of the horizontal rectangles in an image? (The heights of the rectangles do not matter to me.)
An example image is shown below. (Note that I know all rectangles are horizontal.)
I would appreciate any other suggestions for this purpose.
For example, I want the algorithm to give me 9 centers and 9 widths for the above image.
Since the rectangles are aligned, you can do this quite easily and efficiently (this is not the case with unaligned rectangles, since they are not cleanly separated). The idea is to first compute the average color of each row and each column. You should get something like this:
Then you can subtract the background color (blue), compute the luminance and apply a threshold. You can remove some artefacts with a median filter or a blur beforehand.
Then you can scan the resulting 1D arrays of binary values to locate where each rectangle starts and stops. The center of each rectangle is ((x_start+x_end)/2, (y_start+y_end)/2).
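As a rough sketch in Python/NumPy, assuming a bright-on-dark waterfall image and a hand-tuned threshold, the column scan could look like this (the row scan is analogous):

import numpy as np

def rectangle_spans(img, threshold=30):
    """Return (center_x, width) for each bright horizontal span (sketch only).

    `img` is an H x W x 3 uint8 array; `threshold` is an assumed luminance
    distance from the background and needs tuning for real data. The image
    is assumed to start and end on background columns.
    """
    col_mean = img.astype(np.float32).mean(axis=0)             # average color per column (W x 3)
    background = np.median(col_mean, axis=0)                   # estimate the background color
    luma = np.abs(col_mean - background) @ np.array([0.299, 0.587, 0.114])
    mask = luma > threshold                                    # binary 1D profile
    edges = np.flatnonzero(np.diff(mask.astype(np.int8)))      # indices where the mask flips
    starts, ends = edges[::2] + 1, edges[1::2]                 # inclusive start/end columns
    return [((s + e) / 2, e - s + 1) for s, e in zip(starts, ends)]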
I apologize in advance if a question like this was already answered. All of my searches for adding filters resulted in how to add dog faces. I wasn't sure what the proper terminology is.
What techniques do phone apps (such as Snapchat's text overlay or a QR code program for Android) use to "darken" a section of the image? I am looking to replicate this functionality with OpenCV. Is it possible to do this with other colors? (Such as adding a blue accent.)
Example: Snapchat text overlay
https://i.imgur.com/9NHfiBY.jpg
Another Example: Google Allo QR code search
https://i.imgur.com/JnMzvWT.jpg
Any questions or comments would be appreciated
In general:
A change of brightness can be achieved via addition/subtraction.
If you want to brighten your image, you can add a specific amount (e.g. 20) to each channel of the image; to darken, you subtract instead.
If you subtracted 50 from each channel of the image, you would get:
To darken in a pixel-dependent way, you could also use division. This is how dividing by 1.5 would change the image:
Another way would be to use the exponential operator. This operator takes the value of each channel and calculates pixel^value. The result is then scaled back to the 0-255 range (for 8-bit RGB) by finding the highest resulting value and computing the scaling factor as 255/highest value.
If you use it with values > 1, it will darken the image. This is because raising the channel values to a power greater than 1 stretches the bright values out much faster than the dark and mid ones, so after rescaling to 0-255 everything except the very brightest pixels ends up proportionally lower.
Here is a chart of how the exponential operator changes the value of each pixel. As you can see, operator values above 1 darken the image (the channels are shifted towards lower values), whilst values below 1 shift all pixels towards white and thus increase brightness.
Here is an example image for application of the operator using the value 0.5, meaning you take each pixel^0.5 and scale it back to the range of 0-255:
For a value of 2 you get:
Sadly I cannot help you further, because I am not familiar with OpenCV, but it should be easy enough to implement yourself.
For your question about tinting: yes, that is also possible. Instead of shifting towards white, you would have to shift the values of each pixel towards the respective color. I recommend reading up on blending.
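As a rough illustration of the operations above (not the answerer's code; a NumPy sketch on an 8-bit RGB array, with the exponential operator spelled out since it is the least obvious one):

import numpy as np

def exponential_operator(img, value):
    """Raise each channel to `value`, then rescale so the maximum maps back to 255.

    Values > 1 darken the image, values < 1 brighten it.
    """
    out = img.astype(np.float64) ** value
    out = out * (255.0 / out.max())     # scaling factor = 255 / highest resulting value
    return out.astype(np.uint8)

# Addition/subtraction and division are even more direct, e.g.:
# darker = np.clip(img.astype(np.int16) - 50, 0, 255).astype(np.uint8)
# dimmer = (img / 1.5).astype(np.uint8)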
Original image taken from here
Update: I was able to darken an image by blending an image matrix with a black matrix. After that, it was just a matter of darkening certain parts of the image to replicate an overlay.
The lower the alpha value is, the darker the image.
Result
void ApplyFilter(cv::Mat &inFrame, cv::Mat &outFrame, double alpha)
{
    // Blend the input frame with an all-black image of the same size and type.
    cv::Mat black = cv::Mat(inFrame.rows, inFrame.cols, inFrame.type(), 0.0);
    double beta = (1.0 - alpha);
    // outFrame = alpha * inFrame + beta * black; a lower alpha gives a darker result.
    cv::addWeighted(inFrame, alpha, black, beta, 0.0, outFrame);
}
https://docs.opencv.org/2.4/doc/tutorials/core/adding_images/adding_images.html
Thank you for the help everyone!
I'm trying to show students how the RGB color model works to create a particular color (or moreover to convince them that it really does). So I want to take a picture and convert each pixel to an RGB representation so that when you zoom in, instead of a single colored pixel, you see the RGB colors.
I've done this but for some very obvious reasons the converted picture is either washed out or darker than the original (which is a minor inconvenience but I think it would be more powerful if I could get it to be more like the original).
Here are two pictures "zoomed out":
Here is a "medium zoom", starting to show the RGB artifacts in the converted picture:
And here is a picture zoomed in to the point that you can clearly see individual pixels and the RGB squares:
You'll notice the constant color surrounding the pixels; that is the average RGB of the picture. I put that there so that you could see individual pixels (otherwise you just see rows/columns of shades of red/green/blue). If I take that space out completely, the image is even darker and if I replace it with white, then the image looks faded (when zoomed out).
I know why displaying this way causes it to be darker: a "pure red" will come with a completely black blue and green. In a sense if I were to take a completely red picture, it would essentially be 1/3 the brightness of the original.
So my question is:
1: Are there any tools available that already do this (or something similar)?
2: Any ideas on how to get the converted image closer to the original?
For the 2nd question, I could of course just increase the brightness of each "RGB pixel" (the three horizontal stripes in each square), but by how much? I certainly can't just multiply the RGB ints by 3 (in apparent compensation for what I said above). I wonder if there is some way to adjust my background color to compensate? Or would it just be something that needs to be fiddled with for each picture?
You were correct to assume you could retain the brightness by multiplying everything by 3. There's just one small problem: the RGB values in an image use gamma correction, so the intensity is not linear. You need to de-gamma the values, multiply, then gamma correct them again.
You also need to lose the borders around each pixel. Those borders take up 7/16 of the final image which is just too much to compensate for. I tried rotating every other pixel by 90 degrees, and while it gives the result a definite zig-zag pattern it does make clear where the pixel boundaries are.
When you zoom out in an image viewer you might see the gamma problem too. Many viewers don't bother to do gamma correction when they resize. For an in-depth explanation see Gamma error in picture scaling, and use the test image supplied at the end. It might be better to forgo scaling altogether and simply step back from the monitor.
Here's some Python code and a crop from the resulting image.
from PIL import Image

im = Image.open(filename)   # `filename` is the path to the source image
im2 = Image.new('RGB', (im.size[0]*3, im.size[1]*3))
ld1 = im.load()
ld2 = im2.load()
for y in range(im.size[1]):
    for x in range(im.size[0]):
        rgb = ld1[x,y]
        rgb = [(c/255)**2.2 for c in rgb]                # undo gamma (convert to linear intensity)
        rgb = [min(1.0,c*3) for c in rgb]                # compensate for splitting into 3 stripes
        rgb = tuple(int(255*(c**(1/2.2))) for c in rgb)  # re-apply gamma
        x2 = x*3
        y2 = y*3
        if (x+y) & 1:
            # horizontal stripes for alternating pixels
            for x3 in range(x2, x2+3):
                ld2[x3,y2] = (rgb[0],0,0)
                ld2[x3,y2+1] = (0,rgb[1],0)
                ld2[x3,y2+2] = (0,0,rgb[2])
        else:
            # vertical stripes for the others (rotated 90 degrees)
            for y3 in range(y2, y2+3):
                ld2[x2,y3] = (rgb[0],0,0)
                ld2[x2+1,y3] = (0,rgb[1],0)
                ld2[x2+2,y3] = (0,0,rgb[2])
Don't waste so much time on this. You cannot make two images look the same if one of them carries less information. On top of that, your computer will subsample the image in weird ways while zooming out.
Just pass a magnifying glass around the class so they can see for themselves on their phones or other screens, or show pictures of a screen at different magnification levels.
If you want to stick to software, triple the resolution of your image, don't use empty rows and columns (or at least make them black to increase contrast), and scale the RGB components to the full range.
Why don't you keep the magnified image as the background? This will make the two images look identical when zoomed out, while the RGB strips will remain clearly visible when zoomed in.
If not, use the average color over the whole image to keep a similar intensity, but the washing effect will remain.
An intermediate option is to apply a strong lowpass filter to the image to smooth out all details and use that as the background, but I don't see a real advantage over the first approach.
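A minimal PIL sketch of these background options (reusing `filename` from the earlier snippet; the blur radius is an arbitrary choice):

from PIL import Image, ImageFilter

im = Image.open(filename)                        # `filename` as in the earlier snippet
# First option: the magnified original itself as the background.
background = im.resize((im.size[0]*3, im.size[1]*3), Image.NEAREST)
# Intermediate option: low-pass (blur) it so only the coarse colors remain.
blurred_bg = background.filter(ImageFilter.GaussianBlur(radius=6))
# The RGB stripes would then be drawn on top of `background` (or `blurred_bg`)
# instead of a constant fill color.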
A light field captures the scene from slightly different viewpoints. This means I would have two images of the same scene with a slight shift, as shown in the following figure:
Assume the red squares in the images above are pixels. I know that the spatial difference between those two pixels is a shift. Nevertheless, what other information do these two pixels give us in terms of scene radiance? I mean, is there a way to find (or compute) the difference in image irradiance values between those two points?
Look for color space representations other than RGB. Some of them have explicit channel(s) carrying luminance information of a pixel.
A variant of the same idea is to convert to a black-and-white image and examine the pixel values.
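For instance, a small Python sketch of that second suggestion (the filenames and pixel coordinates are placeholders, and the values are raw pixel intensities, not calibrated irradiance):

import numpy as np
from PIL import Image

# Convert both views to a single luminance channel and compare the two pixels.
y_left  = np.asarray(Image.open("left_view.png").convert("L"),  dtype=np.float32)
y_right = np.asarray(Image.open("right_view.png").convert("L"), dtype=np.float32)

(xl, yl), (xr, yr) = (120, 80), (124, 80)    # the two shifted pixel positions
delta = y_left[yl, xl] - y_right[yr, xr]     # difference in recorded brightness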
When we look at a photo of a group of trees, we are able to identify that the photo is predominantly green and brown, or for a picture of the sea we are able to identify that it is mostly blue.
Does anyone know of an algorithm that can be used to detect the prominent color or colours in a photo?
I can envisage a 3D clustering algorithm in RGB space or something similar. I was wondering if someone knows of an existing technique.
Convert the image from RGB to a color space with brightness and saturation separated (HSL/HSV)
http://en.wikipedia.org/wiki/HSL_and_HSV
Then find the dominating values of the hue component. Make a histogram of the hue values of all pixels and analyze which angle region the peaks fall in. A large peak in the quadrant between 180 and 270 degrees means there is a large portion of blue in the image, for example.
There can be several difficulties in determining one dominant color. Pathological example: an image whose left half is blue and right half is red. Also, the hue will not deal very well with grayscales obviously. So a chessboard image with 50% white and 50% black will suffer from two problems: the hue is arbitrary for a black/white image, and there are two colors that are exactly 50% of the image.
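A small sketch of that histogram idea in Python (the saturation cutoff and bin count are assumptions; low-saturation pixels are skipped to avoid the grayscale problem mentioned above):

import numpy as np
from PIL import Image

hsv = np.asarray(Image.open("photo.jpg").convert("HSV"))    # H, S, V channels in 0-255
h, s = hsv[..., 0].ravel(), hsv[..., 1].ravel()
hues = h[s > 40]                                            # ignore near-gray pixels
hist, edges = np.histogram(hues, bins=36, range=(0, 256))   # roughly 10-degree bins
peak = hist.argmax()
dominant_hue_deg = (edges[peak] + edges[peak + 1]) / 2 * 360 / 256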
It sounds like you want to start by computing an image histogram or color histogram of the image. The predominant color(s) will be related to the peak(s) in the histogram.
You might want to convert the image from RGB to indexed; then you could use a regular histogram and detect the peaks (MATLAB does this with rgb2ind(), as you probably already know), and the problem would be reduced to the regular "finding peaks in an array".
Then
n = hist(Y,nbins) bins the elements in vector Y into nbins equally spaced containers and returns the number of elements in each container as a row vector.
The values in n tell you how many elements fall into each bin. Then it's just a matter of fiddling with the number of bins to make them wide enough, deciding how many elements a bin needs before you count it as a predominant color, taking the bins that meet that threshold, computing the index that corresponds to their middle, and converting it back to RGB.
Whatever you're using for your processing probably has similar functions to those.
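In Python, for instance, a comparable route (PIL's palette quantization standing in for rgb2ind(); the 32-color palette is an arbitrary choice) might be:

from PIL import Image

im = Image.open("photo.jpg").convert("RGB")
indexed = im.quantize(colors=32)                      # rough analogue of rgb2ind()
counts = sorted(indexed.getcolors(), reverse=True)    # (count, palette_index) pairs, largest first
palette = indexed.getpalette()                        # flat [r, g, b, r, g, b, ...] list
top_count, top_index = counts[0]
dominant_rgb = tuple(palette[top_index*3 : top_index*3 + 3])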
1. Average all pixels in the image.
2. Remove all pixels whose color is farther from the average than one standard deviation.
3. GOTO 1 with the remaining pixels, until arbitrarily few are left (1, or maybe 1%).
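A sketch of those three steps in Python/NumPy (interpreting "standard deviation" as the standard deviation of the color distances, and stopping at 1% of the pixels):

import numpy as np

def dominant_color(img, stop_fraction=0.01):
    """Iteratively trim color outliers until ~stop_fraction of the pixels remain.

    `img` is assumed to be an H x W x 3 array.
    """
    pixels = img.reshape(-1, 3).astype(np.float64)
    target = max(1, int(len(pixels) * stop_fraction))
    while len(pixels) > target:
        mean = pixels.mean(axis=0)                       # step 1: average the remaining pixels
        dist = np.linalg.norm(pixels - mean, axis=1)
        keep = dist <= dist.std()                        # step 2: drop pixels beyond one std dev
        if keep.all():                                   # nothing left to trim; stop early
            break
        pixels = pixels[keep]                            # step 3: repeat with the survivors
    return pixels.mean(axis=0)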
You might also want to pre-process the image, for example by applying a high-pass filter (removing only the very low frequencies) to even out the lighting in the photo; see http://en.wikipedia.org/wiki/Checker_shadow_illusion