What is the theory behind the Light Glow effect of "After Effects"?

I want to use GLSL to implement it. Even if I only get close to the theory behind it, I could replicate it.

I've recently been implementing something similar. My render pipeline looks like this:
1. Render the scene to a texture (full screen).
2. Filter the scene ("bright pass") to isolate the high-luminance, shiny parts.
3. Down-sample the output of (2) to a smaller texture (for performance) and apply a horizontal Gaussian blur.
4. Apply a vertical Gaussian blur to (3).
5. Blend the output of (4) with the output of (1).
6. Display to screen.
With some parameter tweaking, you can get it looking pretty nice. Google terms like "bright pass", Gaussian blur, and FBOs (Frame Buffer Objects). Effects like "bloom" and "HDR" also have a wealth of information about different ways of doing each of these steps. I tried about four different ways of doing a Gaussian blur before settling on my current one.
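The pipeline above can be sketched in a few lines of NumPy (a single-channel sketch of the idea rather than the GLSL version; the threshold, sigma, and kernel radius are arbitrary illustration values):

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Normalized 1-D Gaussian kernel of width 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def bloom(image, threshold=0.7, sigma=2.0):
    """Simplified bloom: bright pass, separable Gaussian blur, additive blend.
    `image` is a single-channel float array in [0, 1], shape (H, W)."""
    # Steps 1-2: bright pass, keep only high-luminance pixels.
    bright = np.where(image > threshold, image, 0.0)
    # Steps 3-4: separable Gaussian blur, horizontal pass then vertical pass.
    k = gaussian_kernel_1d(sigma, radius=int(3 * sigma))
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, bright)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    # Step 5: additive blend with the original, clamped to [0, 1].
    return np.clip(image + blurred, 0.0, 1.0)
```

A real implementation would do the blur at a reduced resolution, as in step 3, but the structure is the same.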

Look at how shadow volumes are made; instead of stenciling out a shadow, you could run a multi-pass blur on the volume, set its material to a very emissive, additively blended shader, and I imagine you'd get a similar effect.
Alternatively, you could follow the GPU Gems implementation.

I will answer my own question in case someone else reaches the same point. With full precision (actually 100% precision) I reproduced After Effects' glow exactly. The way it works is:
1. Apply a Gaussian blur to the original image.
2. Extract the luma of this blurred image.
3. As in After Effects, you have two colors (A and B). The secret is to build a gradient map between these colors, according to the desired "Color Looping". If you don't know, a gradient map is an interpolation between colors (A and B in this case). Following the same vocabulary as After Effects, you need to loop X times over the "Color Looping" you chose... that is, if you are using a Color Looping like A->B->A, it counts as one loop over your image (you can try this in Photoshop).
4. Take the luma you extracted in step 2 and use it as the parameter of your gradient map... in other words: luma = (0%, 50%, 100%) maps to colors (A, B, A) respectively, and the mid points are interpolated.
5. Blend this image with the original image according to the desired "Glow Operation" (Add, Multiply, etc.).
This procedure matches After Effects for every single pixel. The other details of the glow can easily be built on top of this basic procedure... things like "Glow Intensity" and "Glow Threshold" need to be calibrated in order to get the same results with the same parameters.
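The gradient-map step (steps 3-4) can be sketched in NumPy. This is my own reading of the description: `loops` is the "Color Looping" repeat count, and a triangle wave is one way to get the A->B->A cycle:

```python
import numpy as np

def gradient_map(luma, color_a, color_b, loops=1):
    """Map luma in [0, 1] through an A->B->A color loop.
    One loop means luma 0 -> A, 0.5 -> B, 1 -> A; `loops` repeats the cycle."""
    # Position within the repeated A->B->A cycle.
    t = (np.asarray(luma, dtype=np.float64) * loops) % 1.0
    # Triangle wave: 0 at t=0, 1 at t=0.5, back to 0 at t=1.
    w = 1.0 - np.abs(2.0 * t - 1.0)
    a = np.asarray(color_a, dtype=np.float64)
    b = np.asarray(color_b, dtype=np.float64)
    # Per-pixel linear interpolation between A and B.
    return (1.0 - w)[..., None] * a + w[..., None] * b
```

The result would then be blended with the original according to the chosen "Glow Operation".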

Related

Add a darkening filter to an image OpenCV

I apologize in advance if a question like this was already answered. All of my searches for adding filters resulted in how to add dog faces. I wasn't sure what the proper terminology is.
What techniques do phone apps (such as Snapchat's text overlay or a QR code program for Android) use to "darken" a section of the image? I am looking to replicate this functionality with OpenCV. Is it possible to do this with other colors, such as adding a blue accent?
Example: Snapchat text overlay
https://i.imgur.com/9NHfiBY.jpg
Another Example: Google Allo QR code search
https://i.imgur.com/JnMzvWT.jpg
Any questions or comments would be appreciated
In General:
A change of brightness can be achieved via addition/subtraction.
If you want to brighten your image, you can add a specific amount (e.g. 20) to each channel of the image. Darkening works the other way around (subtraction).
If you subtracted 50 from each channel of the image, you would get:
To darken in a pixel-dependent way you could also use division. This is how dividing by 1.5 would change the image:
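As a sketch, the addition/subtraction and division variants might look like this in NumPy (the function name and defaults are mine):

```python
import numpy as np

def adjust_brightness(image, offset=0, divisor=1.0):
    """Brighten/darken an 8-bit image by adding an offset to each channel
    (e.g. +20 to brighten, -50 to darken) and/or dividing each channel
    (e.g. by 1.5) for pixel-dependent darkening."""
    result = image.astype(np.float64) / divisor + offset
    # Clamp back into the valid 8-bit range.
    return np.clip(result, 0, 255).astype(np.uint8)
```

Note that the clamp matters: without it, 250 + 20 would wrap around in uint8 arithmetic instead of saturating at 255.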
Another way is to use the exponential operator. This operator takes the value of each channel and calculates pixel^value. The result is then scaled back to the 0-255 range (for 8-bit RGB) by finding the highest resulting value and computing the scaling factor as 255/highest value.
If you use it with values > 1, it will darken the image: after rescaling, each channel becomes 255 * (p/p_max)^value, and raising a fraction to a power above 1 makes it smaller.
Here is a chart of how the exponential operator changes the value of each pixel. As you can see, operator values above 1 darken the image (the channels are shifted towards lower values), whilst values below 1 shift all pixels towards white and thus increase brightness.
Here is an example image for application of the operator using the value 0.5, meaning you take each pixel^0.5 and scale it back to the range of 0-255:
For a value of 2 you get:
Sadly I cannot help you further, because I am not familiar with OpenCV, but it should be easy enough to implement yourself.
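A minimal NumPy sketch of the exponential operator as described above (the rounding choice is mine):

```python
import numpy as np

def exponential_operator(image, value):
    """Raise each channel to `value`, then rescale so that the highest
    resulting value maps back to 255 (8-bit range).
    value > 1 darkens the image, value < 1 brightens it."""
    powered = image.astype(np.float64) ** value
    # Scaling factor: 255 / highest resulting value.
    scale = 255.0 / powered.max()
    return np.rint(powered * scale).astype(np.uint8)
```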
For your question about tinting: yes, that is also possible. Instead of shifting towards white, you shift the values of each pixel towards the respective color. I recommend reading up on blending.
Original image taken from here
Update: I was able to darken an image by blending an image matrix with a black matrix. After that, it was just a matter of darkening certain parts of the image to replicate an overlay.
The lower the alpha value is, the darker the image.
Result
void ApplyFilter(cv::Mat &inFrame, cv::Mat &outFrame, double alpha)
{
    // Blend the input with an all-black image of the same size and type:
    // outFrame = alpha * inFrame + (1 - alpha) * black
    cv::Mat black = cv::Mat(inFrame.rows, inFrame.cols, inFrame.type(), 0.0);
    double beta = (1.0 - alpha);
    cv::addWeighted(inFrame, alpha, black, beta, 0.0, outFrame);
}
https://docs.opencv.org/2.4/doc/tutorials/core/adding_images/adding_images.html
Thank you for the help everyone!

Detect triangles, ellipses and rectangles from an image

I am trying to detect the regions of traffic signs. Using OpenCV, my approach is as follows:
The color image:
Using TanTriggs preprocessing, get rid of the illumination variance:
Equalize the histogram:
And binarize (Cv2.Threshold(blobs, blobs, 127, 255, ThresholdTypes.BinaryInv)):
Iterate over each blob using ConnectedComponents and get the mean color value, using the blob as a mask. If it is red, it may be a red sign.
Then get the contours of this blob using FindContours.
Simplify the contours using ApproxPolyDP and check the points of each contour:
If 3 points then triangle shape is acceptable --> candidate for triangle sign
If 4 points then shape is acceptable --> candidate
If more than 4 points, BBox dimensions are acceptable and most of the points are on the ellipse fitted (FitEllipse) --> candidate
This approach works for separated blobs in the binary image, like the circular 100 km sign in my example. However, if there is a connection to outside objects, like the bottom-left triangle part in the binary image, it fails, because the mean value of that blob ends up far from red!
Using erosion helps in some cases, but makes it worse in many other images.
Using different threshold values for the binarization also works for some images, but, like the erosion, fails on many others.
Using HoughCircle is just very slow, and I couldn't manage to get good results by playing with the parameters.
I have tried using matchShapes but couldn't get good results.
Can anybody show me another way to achieve what I want (with a reasonable computational time)?
Any information, or code in any language, is welcome.
Edit:
Using the circularity measure (C = P^2/4πA) or the approach I described above, triangle and ellipse shapes can be found when they are separated. However, when the contour looks like this, for example:
I could not find a robust way to extract the triangle piece. If I could, I would check the mean color and decide whether it is a red sign candidate.
Sorry, I don't have the kudos to comment, but can't you use the red colour?
import cv2
import numpy as np

img = cv2.imread("ms0QB.png")
grey = np.zeros(img.shape[:2], np.uint8)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = np.logical_or(hsv[:,:,0] > 160, hsv[:,:,0] < 10)
grey[mask] = 255
cv2.imshow("160<hue<182", grey)
cv2.waitKey()
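The red-mask idea in that snippet, plus the vertex-count test from the question, can be sketched without the OpenCV calls (a NumPy sketch; the hue bounds follow the snippet above, and `classify_polygon` is a hypothetical helper for the ApproxPolyDP step):

```python
import numpy as np

def red_hue_mask(hsv, low=10, high=160):
    """Red wraps around 0 in OpenCV's 0-179 hue range, so a "red" mask has
    to combine hue > high with hue < low."""
    hue = hsv[..., 0].astype(np.int32)
    return np.logical_or(hue > high, hue < low)

def classify_polygon(num_vertices):
    """Shape candidate from the vertex count of an approximated contour."""
    if num_vertices == 3:
        return "triangle"
    if num_vertices == 4:
        return "rectangle"
    return "ellipse-candidate"
```

Masking by hue before binarizing avoids the problem of a red blob merging with a non-red neighbor and dragging the mean color away from red.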

Perlin noise, how to detect bright/dark areas?

I need some help with Perlin noise.
I want to create a random terrain generator. I am using Perlin noise to determine where the mountains and sea should go. From the random noise, I get something like this:
http://prntscr.com/df0rqp
Now, how can I actually detect where the brighter and darker areas are?
I tried using display.colorSample which returns the RGB color and alpha of the pixel, but this doesn't really help me much.
If it were only white and red, I could easily detect where the bright areas are (white would be a really big number, red a small one) and vice versa.
However, since I have red, GREEN AND BLUE, this is a hard job.
To sum up, how can I detect where the white and red areas are?
You have a fundamental misunderstanding here. The Perlin noise function really only maps (x, y) -> p (it also works in higher dimensions). What you are seeing is just your library being nice: the noise function maps two reals to one, and the library is being helpful by mapping the single result value p to a color gradient. But that is only for visualization; p is not a color, just another number. Use it directly! If p < 0 you might place water.
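A sketch of that idea, using the raw noise values directly (the thresholds are arbitrary illustration values; `p` would be whatever your Perlin function returns, roughly in [-1, 1]):

```python
import numpy as np

def classify_terrain(p):
    """Classify raw noise values directly instead of inspecting the colors
    of a rendered preview. Thresholds here are arbitrary."""
    terrain = np.full(p.shape, 'plains', dtype=object)
    terrain[p < 0.0] = 'water'      # low noise values -> sea
    terrain[p > 0.5] = 'mountain'   # high noise values -> mountains
    return terrain
```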
I would suggest this:
1. Shift the hue of the image towards red, like here.
2. Use the red channel to retrieve a mask.
3. Optional: scale the min/max brightness into the 0-255 range.

Are the GPUImageBilateralFilter parameters of the GPUImage iOS Library the equivalent of Photoshop Surface Blur parameters?

I understand that a Surface Blur in Photoshop is a bilateral filter.
In the iOS Library GPUImage there is a filter GPUImageBilateralFilter with parameters of texelSpacingMultiplier and distanceNormalizationFactor.
Would these match up directly to the Photoshop Surface Blur options of radius and threshold (respectively)? And would the values to these parameters be the same?
Thanks!
Not exactly.
With your standard Gaussian blurs, you typically specify a pixel radius (or sigma value in pixels) to define the blur strength. Before the recent overhaul of the blurs, you couldn't specify something like this in GPUImage. The blurs instead used a fixed number of samples, with a fixed value of sigma. You could expand them slightly by adjusting the pixel spacing between samples (the texelSpacingMultiplier, which is 1.0 by default).
With my recent revamp of the blurs, you now can specify a true pixel radius for the blur, which in the case of a Gaussian blur sets the size of sigma. When you do this, it generates a shader on the fly that works over the appropriate number of pixels to yield a blur of that strength. The use of sigma here matches Core Image's behavior exactly, although I haven't tested it against Photoshop.
However, the bilateral blur is the last of the blurs that I've needed to update to bring it in line with the others. I haven't done so yet, so you don't have a great way to match Photoshop's behavior at this point. This is on my to-do list, but that list is fairly lengthy. You're welcome to take a look at how I converted over something like the box blur and try to adapt the bilateral blur math to enable this for that filter.
The distanceNormalizationFactor isn't related to the blur size, but it works to weight the way that pixel colors are processed from the sampled pixels around the central one. A bilateral blur works by blurring the central color only with surrounding colors that are close enough to the central one (thus preserving edges). This weighting value controls how close a color needs to be to the central pixel's color in order for the center pixel to be blended with it.
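The weighting described can be sketched as follows. This is a schematic single-channel sketch, not GPUImage's actual shader code; `sigma_range` plays roughly the role of the inverse of distanceNormalizationFactor:

```python
import numpy as np

def bilateral_weight(center, neighbor, dist2, sigma_spatial, sigma_range):
    """Weight of one sampled pixel in a bilateral blur: a spatial Gaussian
    (as in an ordinary blur) multiplied by a range term that falls off as
    the neighbor's color moves away from the center's, preserving edges.
    `dist2` is the squared spatial distance between the two pixels."""
    spatial = np.exp(-dist2 / (2 * sigma_spatial**2))
    rng = np.exp(-((neighbor - center) ** 2) / (2 * sigma_range**2))
    return spatial * rng
```

The blurred center pixel is then the weighted average of the samples, normalized by the sum of the weights.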

Recognize pattern in images

I am looking for a fast idea/algorithm that lets me find squares (used as mark points) in an image file. It shouldn't be much of a challenge, however...
I started by converting the source image to greyscale and scanning each line of the image, looking for the two or three longest lines (pixel by pixel).
Then, having an array of "lines", I look for elements that may form the desired square.
A better idea would be to find a pattern with known traits, like: it is a square, outside the square there is no distortion (just white space), etc.
The goal is to analyze a 5000 x 5000 px image in less than 1-2 s.
Is it possible?
One of the OpenCV samples, squares.cpp, does just this; see here for the code. Alternatively, you could look up the Hough transform to detect all lines in your image, and then test for two lines intersecting at right angles.
There are also a number of resources on this site which may help you:
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
Are there any opencv function like "cvHoughCircles()" for square detection?
square detection, image processing
I'm sure there are others, these are just the first few I came across.
See scale-invariant feature transform, template matching, and the Hough transform. A quick and inaccurate approach would be to make a histogram of the colors and compare it. If the image is complicated enough, you might be able to distinguish between several sets of images.
To keep things simple, assume we have three buckets, for R, G, and B. A completely white image would have (100%, 100%, 100%) for (R, G, B). A completely red image would have (100%, 0%, 0%). A complicated image might have something like (23%, 53%, 34%). If you take the distance between points in that (R, G, B) space, you can compare which one is "closer".
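The three-bucket comparison can be sketched like this (a simplified version that uses per-channel mean fractions as the buckets; names are mine):

```python
import numpy as np

def rgb_histogram_distance(img_a, img_b):
    """Euclidean distance between two images' mean (R, G, B) fractions,
    i.e. the three-bucket comparison described above."""
    a = img_a.reshape(-1, 3).mean(axis=0) / 255.0
    b = img_b.reshape(-1, 3).mean(axis=0) / 255.0
    return float(np.linalg.norm(a - b))
```

A white image is at (1, 1, 1) in that space and a pure-red image at (1, 0, 0), so their distance is sqrt(2); identical images are at distance 0.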
I guess the links by chris solved the question :)
