Is there a way to get a Motion Blur effect on a UIImage?
I tried GPUImage, Filtrr, and Core Image on iOS, but all of these only offer regular blurs, not motion blur.
I also tried UIImage-DSP, but its motion blur is barely visible. I need something much stronger.
As I commented on the repository, I just added motion and zoom blurs to GPUImage, in the form of the GPUImageMotionBlurFilter and GPUImageZoomBlurFilter classes.
For the motion blur, I do a 9-hit Gaussian blur over a single direction. This is achieved using the following vertex and fragment shaders:
Vertex:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;

uniform highp vec2 directionalTexelStep;

varying vec2 textureCoordinate;
varying vec2 oneStepBackTextureCoordinate;
varying vec2 twoStepsBackTextureCoordinate;
varying vec2 threeStepsBackTextureCoordinate;
varying vec2 fourStepsBackTextureCoordinate;
varying vec2 oneStepForwardTextureCoordinate;
varying vec2 twoStepsForwardTextureCoordinate;
varying vec2 threeStepsForwardTextureCoordinate;
varying vec2 fourStepsForwardTextureCoordinate;

void main()
{
    gl_Position = position;

    textureCoordinate = inputTextureCoordinate.xy;
    oneStepBackTextureCoordinate = inputTextureCoordinate.xy - directionalTexelStep;
    twoStepsBackTextureCoordinate = inputTextureCoordinate.xy - 2.0 * directionalTexelStep;
    threeStepsBackTextureCoordinate = inputTextureCoordinate.xy - 3.0 * directionalTexelStep;
    fourStepsBackTextureCoordinate = inputTextureCoordinate.xy - 4.0 * directionalTexelStep;
    oneStepForwardTextureCoordinate = inputTextureCoordinate.xy + directionalTexelStep;
    twoStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 2.0 * directionalTexelStep;
    threeStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 3.0 * directionalTexelStep;
    fourStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 4.0 * directionalTexelStep;
}
Fragment:
precision highp float;

uniform sampler2D inputImageTexture;

varying vec2 textureCoordinate;
varying vec2 oneStepBackTextureCoordinate;
varying vec2 twoStepsBackTextureCoordinate;
varying vec2 threeStepsBackTextureCoordinate;
varying vec2 fourStepsBackTextureCoordinate;
varying vec2 oneStepForwardTextureCoordinate;
varying vec2 twoStepsForwardTextureCoordinate;
varying vec2 threeStepsForwardTextureCoordinate;
varying vec2 fourStepsForwardTextureCoordinate;

void main()
{
    lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18;
    fragmentColor += texture2D(inputImageTexture, oneStepBackTextureCoordinate) * 0.15;
    fragmentColor += texture2D(inputImageTexture, twoStepsBackTextureCoordinate) * 0.12;
    fragmentColor += texture2D(inputImageTexture, threeStepsBackTextureCoordinate) * 0.09;
    fragmentColor += texture2D(inputImageTexture, fourStepsBackTextureCoordinate) * 0.05;
    fragmentColor += texture2D(inputImageTexture, oneStepForwardTextureCoordinate) * 0.15;
    fragmentColor += texture2D(inputImageTexture, twoStepsForwardTextureCoordinate) * 0.12;
    fragmentColor += texture2D(inputImageTexture, threeStepsForwardTextureCoordinate) * 0.09;
    fragmentColor += texture2D(inputImageTexture, fourStepsForwardTextureCoordinate) * 0.05;

    gl_FragColor = fragmentColor;
}
As an optimization, I calculate the step size between texture samples outside of the fragment shader by using the angle, blur size, and the image dimensions. This is then passed into the vertex shader, so that I can calculate the texture sampling positions there and interpolate across them in the fragment shader. This avoids dependent texture reads on iOS devices.
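For reference, the host-side calculation can be done along these lines. This is a minimal Objective-C sketch, not the exact code from GPUImage: the helper name and the math are illustrative, though setPoint:forUniformName: follows GPUImage's convention for feeding uniforms.

#import <GPUImage/GPUImage.h>

// Hypothetical helper: converts a blur angle (degrees) and blur extent (pixels)
// into a per-sample step in normalized texture coordinates, then hands it to
// the shader's directionalTexelStep uniform.
static void updateDirectionalTexelStep(GPUImageFilter *filter, CGSize imageSize,
                                       CGFloat angleInDegrees, CGFloat blurSizeInPixels)
{
    CGPoint texelStep;
    texelStep.x = blurSizeInPixels * cos(angleInDegrees * M_PI / 180.0) / imageSize.width;
    texelStep.y = blurSizeInPixels * sin(angleInDegrees * M_PI / 180.0) / imageSize.height;
    [filter setPoint:texelStep forUniformName:@"directionalTexelStep"];
}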
The zoom blur is much slower, because I still do these calculations in the fragment shader. No doubt there's a way I can optimize this, but I haven't tried yet. The zoom blur uses a 9-hit Gaussian blur where the direction and per-sample offset distance vary as a function of the placement of the pixel vs. the center of the blur.
It uses the following fragment shader (and a standard passthrough vertex shader):
varying highp vec2 textureCoordinate;

uniform sampler2D inputImageTexture;

uniform highp vec2 blurCenter;
uniform highp float blurSize;

void main()
{
    // TODO: Do a more intelligent scaling based on resolution here
    highp vec2 samplingOffset = 1.0 / 100.0 * (blurCenter - textureCoordinate) * blurSize;

    lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + samplingOffset) * 0.15;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + (2.0 * samplingOffset)) * 0.12;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + (3.0 * samplingOffset)) * 0.09;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + (4.0 * samplingOffset)) * 0.05;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - samplingOffset) * 0.15;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - (2.0 * samplingOffset)) * 0.12;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - (3.0 * samplingOffset)) * 0.09;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - (4.0 * samplingOffset)) * 0.05;

    gl_FragColor = fragmentColor;
}
Note that both of these blurs are hardcoded at 9 samples for performance reasons. This means that at larger blur sizes, you'll start to see artifacts from the limited samples here. For larger blurs, you'll need to run these filters multiple times or extend them to support more Gaussian samples. However, more samples will lead to much slower rendering times because of the limited texture sampling bandwidth on iOS devices.
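To answer the original question about UIImage input: either filter can be applied to a still image through the standard GPUImage pipeline. A minimal sketch, assuming a recent version of the framework (the blurSize and blurAngle properties control strength and direction; older releases captured output with imageFromCurrentlyProcessedOutput instead of the calls shown here):

#import <GPUImage/GPUImage.h>

UIImage *inputImage = [UIImage imageNamed:@"source.jpg"];

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageMotionBlurFilter *motionBlurFilter = [[GPUImageMotionBlurFilter alloc] init];
motionBlurFilter.blurSize = 2.5;   // blur strength
motionBlurFilter.blurAngle = 45.0; // blur direction, in degrees

[stillImageSource addTarget:motionBlurFilter];
[motionBlurFilter useNextFrameForImageCapture];
[stillImageSource processImage];

UIImage *blurredImage = [motionBlurFilter imageFromCurrentFramebuffer];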
Core Image has a motion blur filter. It's called CIMotionBlur: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Reference/CoreImageFilterReference/Reference/reference.html#//apple_ref/doc/filter/ci/CIMotionBlur
Note that this links to the Mac filter reference; CIMotionBlur only became available on iOS with iOS 9.
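Once it's available on your deployment target, usage follows the standard CIFilter pattern; a quick sketch (inputRadius controls the blur length and inputAngle its direction, per the filter reference):

#import <CoreImage/CoreImage.h>

UIImage *inputImage = [UIImage imageNamed:@"source.jpg"];
CIImage *inputCIImage = [CIImage imageWithCGImage:inputImage.CGImage];

CIFilter *motionBlur = [CIFilter filterWithName:@"CIMotionBlur"];
[motionBlur setValue:inputCIImage forKey:kCIInputImageKey];
[motionBlur setValue:@(20.0) forKey:kCIInputRadiusKey]; // blur length
[motionBlur setValue:@(0.0) forKey:kCIInputAngleKey];   // blur direction

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef outputCGImage = [context createCGImage:motionBlur.outputImage
                                         fromRect:inputCIImage.extent];
UIImage *blurredImage = [UIImage imageWithCGImage:outputCGImage];
CGImageRelease(outputCGImage);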
How can I apply a mask to get an effect such as bokeh? I need to blur the edges in the mask and apply it to the image texture. How do I do that?
Vertex shader:
attribute vec4 a_Position;

void main()
{
    gl_Position = a_Position;
}
Fragment shader:
precision lowp float;

uniform sampler2D u_Sampler; // texture sampler
uniform sampler2D u_Mask;    // mask sampler
uniform vec3 iResolution;

// Box blur in a cross pattern: averages 51 samples along the horizontal
// axis plus 51 along the vertical axis through uv.
vec4 blur(sampler2D source, vec2 size, vec2 uv) {
    vec4 C = vec4(0.0);
    float width = 1.0 / size.x;
    float height = 1.0 / size.y;
    float divisor = 0.0;
    for (float x = -25.0; x <= 25.0; x++)
    {
        C += texture2D(source, uv + vec2(x * width, 0.0));
        C += texture2D(source, uv + vec2(0.0, x * height));
        divisor++;
    }
    C *= 0.5; // two samples are accumulated per loop iteration
    return vec4(C.r / divisor, C.g / divisor, C.b / divisor, 1.0);
}
void main()
{
    vec2 uv = gl_FragCoord.xy / iResolution.xy;
    vec4 videoColor = texture2D(u_Sampler, uv);
    vec4 maskColor = texture2D(u_Mask, uv);
    vec4 blurColor = blur(u_Sampler, iResolution.xy, uv);

    // Where the mask is white, show the sharp video; where it is black, show the blur.
    gl_FragColor = mix(blurColor, videoColor, maskColor.r);
}
But FYI, it's not common to blur in one pass like this. It's more common to blur in one direction (horizontally), then blur the result of that vertically, and only then mix the blurred texture, the video texture, and the mask, as in the final line of main above.
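A minimal sketch of that separable approach, assuming a u_Direction uniform selects the axis (the uniform names are illustrative): render the video into an intermediate framebuffer with u_Direction set to (1.0, 0.0), run the same shader over that intermediate with (0.0, 1.0), and then do the mix shown above with the two-pass result as blurColor.

precision lowp float;

uniform sampler2D u_Source;  // first pass: the video; second pass: the first pass's output
uniform vec3 iResolution;
uniform vec2 u_Direction;    // (1.0, 0.0) for the horizontal pass, (0.0, 1.0) for the vertical pass

void main()
{
    vec2 uv = gl_FragCoord.xy / iResolution.xy;
    vec2 texelSize = 1.0 / iResolution.xy;
    vec4 C = vec4(0.0);

    // 51-tap box blur along a single axis
    for (float x = -25.0; x <= 25.0; x++)
    {
        C += texture2D(u_Source, uv + x * texelSize * u_Direction);
    }
    gl_FragColor = vec4(C.rgb / 51.0, 1.0);
}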
In Brad Larson's excellent GPUImage, there is a halftone filter, which also turns the picture black and white. I just want the halftone effect without the black and white, and I was wondering if anyone can tell me what to change in the following code to achieve this. I have been playing around with it, but I have virtually no experience with OpenGL and am not sure what to eliminate.
NSString *const kGPUImageHalftoneFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;

 uniform highp float fractionalWidthOfPixel;
 uniform highp float aspectRatio;
 uniform highp float dotScaling;

 const highp vec3 W = vec3(0.2125, 0.7154, 0.0721);

 void main()
 {
     highp vec2 sampleDivisor = vec2(fractionalWidthOfPixel, fractionalWidthOfPixel / aspectRatio);
     highp vec2 samplePos = textureCoordinate - mod(textureCoordinate, sampleDivisor) + 0.5 * sampleDivisor;
     highp vec2 textureCoordinateToUse = vec2(textureCoordinate.x, (textureCoordinate.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
     highp vec2 adjustedSamplePos = vec2(samplePos.x, (samplePos.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
     highp float distanceFromSamplePoint = distance(adjustedSamplePos, textureCoordinateToUse);
     lowp vec3 sampledColor = texture2D(inputImageTexture, samplePos).rgb;
     highp float dotScaling = 1.0 - dot(sampledColor, W);
     lowp float checkForPresenceWithinDot = 1.0 - step(distanceFromSamplePoint, (fractionalWidthOfPixel * 0.5) * dotScaling);
     gl_FragColor = vec4(vec3(checkForPresenceWithinDot), 1.0);
 }
);
You can change the last line to
gl_FragColor = vec4(checkForPresenceWithinDot * sampledColor, 1.0);
This will make the effect have color instead of black and white only.
I want a non-horizontal GPUImageTiltShiftFilter, rotated to an arbitrary angle. I also want the filter to be fast enough that it can be rotated interactively with a UIRotationGestureRecognizer.
How do I do this?
Ah, figured it out!
Instead of modifying GPUImageTiltShiftFilter, make a new GPUImageFilterGroup as a modified version of GPUImageGaussianSelectiveBlurFilter to add rotation.
I added:
uniform highp float rotation;
within kGPUImageSMTiltShiftFragmentShaderString, and I added the distanceFromCenter line below to the main of GPUImageGaussianSelectiveBlurFilter, turning it into a tilt shift with rotation:
void main()
{
    lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);
    highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));

    // Project the aspect-corrected offset from the center onto the rotated axis:
    // the perpendicular distance from the in-focus line through excludeCirclePoint.
    highp float distanceFromCenter = abs((textureCoordinate2.x - excludeCirclePoint.x) * aspectRatio * cos(rotation) + (textureCoordinate2.y - excludeCirclePoint.y) * sin(rotation));

    gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
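Driving the rotation from a UIRotationGestureRecognizer is then just a matter of updating the uniform and reprocessing. A rough sketch; rotationTiltShiftFilter and stillImageSource are placeholders for however you've wired up the filter group, and setFloat:forUniformName: is GPUImage's generic uniform setter:

// Hypothetical wiring: 'rotationTiltShiftFilter' is the modified selective-blur
// filter inside the group, 'stillImageSource' is the GPUImagePicture feeding it.
- (void)handleRotation:(UIRotationGestureRecognizer *)gestureRecognizer
{
    [rotationTiltShiftFilter setFloat:(GLfloat)gestureRecognizer.rotation
                       forUniformName:@"rotation"];
    [stillImageSource processImage]; // re-run the chain for live feedback
}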
I re-implemented Apple's GLImageProcessing with OpenGL ES 2.0 shaders. The effects are perfect, but the performance of the sharpness filter is not as good: it runs at only 20 FPS.
The shader code is simple:
Pass 0 for horizontal blur.
Pass 1 for vertical blur.
Pass 2 to mix the blur texture with the original texture.
Basically, the texture mix in Pass 2 is the cause of slowness since Pass 0 and Pass 1 are only done once and do not contribute to the bad performance.
How can I improve the performance?
Vertex shader:
attribute vec4 a_position;
attribute vec2 a_texCoord;

varying highp vec2 v_texCoord;
varying highp vec2 v_texCoord1;
varying highp vec2 v_texCoord2;
varying highp vec2 v_texCoord1_;
varying highp vec2 v_texCoord2_;

uniform mat4 u_modelViewProjectionMatrix;
uniform lowp int u_pass;

const highp float blurSizeH = 1.0 / 320.0;
const highp float blurSizeV = 1.0 / 480.0;

void main()
{
    v_texCoord = a_texCoord;
    if (u_pass == 0) {
        v_texCoord1 = a_texCoord + vec2(1.3846153846 * blurSizeH, 0.0);
        v_texCoord1_ = a_texCoord - vec2(1.3846153846 * blurSizeH, 0.0);
        v_texCoord2 = a_texCoord + vec2(3.2307692308 * blurSizeH, 0.0);
        v_texCoord2_ = a_texCoord - vec2(3.2307692308 * blurSizeH, 0.0);
    } else if (u_pass == 1) {
        v_texCoord1 = a_texCoord + vec2(0.0, 1.3846153846 * blurSizeV);
        v_texCoord1_ = a_texCoord - vec2(0.0, 1.3846153846 * blurSizeV);
        v_texCoord2 = a_texCoord + vec2(0.0, 3.2307692308 * blurSizeV);
        v_texCoord2_ = a_texCoord - vec2(0.0, 3.2307692308 * blurSizeV);
    }
    gl_Position = u_modelViewProjectionMatrix * a_position;
}
Fragment shader:
varying highp vec2 v_texCoord;
varying highp vec2 v_texCoord1;
varying highp vec2 v_texCoord2;
varying highp vec2 v_texCoord1_;
varying highp vec2 v_texCoord2_;

uniform lowp int u_pass;
uniform sampler2D u_texture;
uniform sampler2D u_degenTexture;
uniform mediump mat4 u_filterMat;

void main()
{
    if (u_pass == 0) {
        gl_FragColor = texture2D(u_texture, v_texCoord) * 0.2270270270;
        gl_FragColor += texture2D(u_texture, v_texCoord1) * 0.3162162162;
        gl_FragColor += texture2D(u_texture, v_texCoord1_) * 0.3162162162;
        gl_FragColor += texture2D(u_texture, v_texCoord2) * 0.0702702703;
        gl_FragColor += texture2D(u_texture, v_texCoord2_) * 0.0702702703;
    } else if (u_pass == 1) {
        gl_FragColor = texture2D(u_degenTexture, v_texCoord) * 0.2270270270;
        gl_FragColor += texture2D(u_degenTexture, v_texCoord1) * 0.3162162162;
        gl_FragColor += texture2D(u_degenTexture, v_texCoord1_) * 0.3162162162;
        gl_FragColor += texture2D(u_degenTexture, v_texCoord2) * 0.0702702703;
        gl_FragColor += texture2D(u_degenTexture, v_texCoord2_) * 0.0702702703;
    } else {
        gl_FragColor = u_filterMat * texture2D(u_texture, v_texCoord) + (mat4(1.0) - u_filterMat) * texture2D(u_degenTexture, v_texCoord);
    }
}
Before you continue with this, may I suggest you take a look at my open source GPUImage project? I have several hand-optimized sharpening effects in there, including the unsharp mask you're attempting here. I also make it reasonably easy to pull in image and video sources.
To your specific question, there are a couple of reasons why your shader is running slower than expected. The first is that you are using branching within your fragment shader. This kills performance on iOS devices, and should be avoided if at all possible. If you really need to have different conditions for different passes, split these apart into separate shader programs and swap the programs out as needed rather than using a uniform for control flow.
I'm also not sure that writing to gl_FragColor repeatedly is the fastest thing you can do here. I'd use a lowp or mediump intermediate color variable, add your Gaussian components to that, and then write the final result to gl_FragColor when done.
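For example, the horizontal pass could be split into its own program and rewritten along these lines (a sketch of the two suggestions combined, not code from GPUImage):

varying highp vec2 v_texCoord;
varying highp vec2 v_texCoord1;
varying highp vec2 v_texCoord2;
varying highp vec2 v_texCoord1_;
varying highp vec2 v_texCoord2_;

uniform sampler2D u_texture;

void main()
{
    // Accumulate in a local variable instead of writing gl_FragColor repeatedly
    lowp vec4 accumulatedColor = texture2D(u_texture, v_texCoord) * 0.2270270270;
    accumulatedColor += texture2D(u_texture, v_texCoord1) * 0.3162162162;
    accumulatedColor += texture2D(u_texture, v_texCoord1_) * 0.3162162162;
    accumulatedColor += texture2D(u_texture, v_texCoord2) * 0.0702702703;
    accumulatedColor += texture2D(u_texture, v_texCoord2_) * 0.0702702703;

    gl_FragColor = accumulatedColor;
}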
I do see that you've moved your sampling offset calculations to the vertex shader, and then passed those offsets into the fragment shader, which is a good thing that people usually miss. Once you implement the above tweaks (or give my framework a try to see how I handle this), you should get much better results from your filtering.
It turns out that it is really simple. The cause of bad performance is matrix-vector multiplication:
varying highp vec2 v_texCoord;
uniform sampler2D u_texture;
uniform sampler2D u_degenTexture;
uniform mediump mat4 u_filterMat;

void main()
{
    gl_FragColor = u_filterMat * texture2D(u_texture, v_texCoord) + (mat4(1.0) - u_filterMat) * texture2D(u_degenTexture, v_texCoord);
}
I initially wrote my code using a matrix this way so that all my filters could share the same color-mixing code. Having learned my lesson, I went back to writing filter-specific code and now use scalar operations as much as possible:
varying highp vec2 v_texCoord;
uniform sampler2D u_texture;
uniform sampler2D u_degenTexture;
uniform lowp float u_filterValue;

void main()
{
    gl_FragColor = u_filterValue * texture2D(u_texture, v_texCoord) + (1.0 - u_filterValue) * texture2D(u_degenTexture, v_texCoord);
}
Now it runs at a smooth 60 FPS!
I never thought it would be such a simple issue, but it was.
How can I perform the following image processing tasks using OpenGL ES 2.0 shaders?
Colorspace transforms (RGB/YUV/HSL/Lab)
Swirling of the image
Converting to a sketch
Converting to an oil painting
I just added filters to my open source GPUImage framework that perform three of the four processing tasks you describe (swirling, sketch filtering, and converting to an oil painting). While I don't yet have colorspace transforms as filters, I do have the ability to apply a matrix to transform colors.
As examples of these filters in action, the answer's screenshots show a sepia tone color conversion, a swirl distortion, a sketch filter, and, finally, an oil painting conversion.
Note that all of these filters were done on live video frames, and all but the last filter can be run in real time on video from iOS device cameras. The last filter is pretty computationally intensive, so even as a shader it takes ~1 second or so to render on an iPad 2.
The sepia tone filter is based on the following color matrix fragment shader:
varying highp vec2 textureCoordinate;

uniform sampler2D inputImageTexture;
uniform lowp mat4 colorMatrix;
uniform lowp float intensity;

void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 outputColor = textureColor * colorMatrix;

    gl_FragColor = (intensity * outputColor) + ((1.0 - intensity) * textureColor);
}
with a matrix of
self.colorMatrix = (GPUMatrix4x4){
    {0.3588, 0.7044, 0.1368, 0},
    {0.2990, 0.5870, 0.1140, 0},
    {0.2392, 0.4696, 0.0912, 0},
    {0, 0, 0, 0},
};
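In GPUImage this matrix is applied through GPUImageColorMatrixFilter (GPUImageSepiaFilter is essentially that filter preloaded with the matrix above). A minimal sketch of setting a custom matrix, assuming the filter's colorMatrix and intensity properties map to the uniforms in the shader:

#import <GPUImage/GPUImage.h>

GPUImageColorMatrixFilter *colorFilter = [[GPUImageColorMatrixFilter alloc] init];
colorFilter.intensity = 1.0;
colorFilter.colorMatrix = (GPUMatrix4x4){
    {0.3588, 0.7044, 0.1368, 0.0},
    {0.2990, 0.5870, 0.1140, 0.0},
    {0.2392, 0.4696, 0.0912, 0.0},
    {0.0, 0.0, 0.0, 1.0},  // pass alpha through unchanged
};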
The swirl fragment shader is based on this Geeks 3D example and has the following code:
varying highp vec2 textureCoordinate;

uniform sampler2D inputImageTexture;

uniform highp vec2 center;
uniform highp float radius;
uniform highp float angle;

void main()
{
    highp vec2 textureCoordinateToUse = textureCoordinate;
    highp float dist = distance(center, textureCoordinate);
    textureCoordinateToUse -= center;
    if (dist < radius)
    {
        highp float percent = (radius - dist) / radius;
        highp float theta = percent * percent * angle * 8.0;
        highp float s = sin(theta);
        highp float c = cos(theta);
        textureCoordinateToUse = vec2(dot(textureCoordinateToUse, vec2(c, -s)), dot(textureCoordinateToUse, vec2(s, c)));
    }
    textureCoordinateToUse += center;

    gl_FragColor = texture2D(inputImageTexture, textureCoordinateToUse);
}
The sketch filter is generated using Sobel edge detection, with edges shown in varying grey shades. The shader for this is as follows:
varying highp vec2 textureCoordinate;

uniform sampler2D inputImageTexture;

uniform mediump float intensity;
uniform mediump float imageWidthFactor;
uniform mediump float imageHeightFactor;

const mediump vec3 W = vec3(0.2125, 0.7154, 0.0721);

void main()
{
    mediump vec3 textureColor = texture2D(inputImageTexture, textureCoordinate).rgb;

    mediump vec2 stp0 = vec2(1.0 / imageWidthFactor, 0.0);
    mediump vec2 st0p = vec2(0.0, 1.0 / imageHeightFactor);
    mediump vec2 stpp = vec2(1.0 / imageWidthFactor, 1.0 / imageHeightFactor);
    mediump vec2 stpm = vec2(1.0 / imageWidthFactor, -1.0 / imageHeightFactor);

    mediump float i00 = dot(textureColor, W);
    mediump float im1m1 = dot(texture2D(inputImageTexture, textureCoordinate - stpp).rgb, W);
    mediump float ip1p1 = dot(texture2D(inputImageTexture, textureCoordinate + stpp).rgb, W);
    mediump float im1p1 = dot(texture2D(inputImageTexture, textureCoordinate - stpm).rgb, W);
    mediump float ip1m1 = dot(texture2D(inputImageTexture, textureCoordinate + stpm).rgb, W);
    mediump float im10 = dot(texture2D(inputImageTexture, textureCoordinate - stp0).rgb, W);
    mediump float ip10 = dot(texture2D(inputImageTexture, textureCoordinate + stp0).rgb, W);
    mediump float i0m1 = dot(texture2D(inputImageTexture, textureCoordinate - st0p).rgb, W);
    mediump float i0p1 = dot(texture2D(inputImageTexture, textureCoordinate + st0p).rgb, W);

    mediump float h = -im1p1 - 2.0 * i0p1 - ip1p1 + im1m1 + 2.0 * i0m1 + ip1m1;
    mediump float v = -im1m1 - 2.0 * im10 - im1p1 + ip1m1 + 2.0 * ip10 + ip1p1;

    mediump float mag = 1.0 - length(vec2(h, v));
    mediump vec3 target = vec3(mag);

    gl_FragColor = vec4(mix(textureColor, target, intensity), 1.0);
}
Finally, the oil painting look is generated using a Kuwahara filter. This particular filter is from the outstanding work of Jan Eric Kyprianidis and his fellow researchers, as described in the article "Anisotropic Kuwahara Filtering on the GPU" within the GPU Pro book. The shader code from that is as follows:
varying highp vec2 textureCoordinate;

uniform sampler2D inputImageTexture;
uniform int radius;

precision highp float;

const vec2 src_size = vec2(768.0, 1024.0);

void main(void)
{
    vec2 uv = textureCoordinate;
    float n = float((radius + 1) * (radius + 1));

    vec3 m[4];
    vec3 s[4];
    for (int k = 0; k < 4; ++k) {
        m[k] = vec3(0.0);
        s[k] = vec3(0.0);
    }

    for (int j = -radius; j <= 0; ++j) {
        for (int i = -radius; i <= 0; ++i) {
            vec3 c = texture2D(inputImageTexture, uv + vec2(i, j) / src_size).rgb;
            m[0] += c;
            s[0] += c * c;
        }
    }

    for (int j = -radius; j <= 0; ++j) {
        for (int i = 0; i <= radius; ++i) {
            vec3 c = texture2D(inputImageTexture, uv + vec2(i, j) / src_size).rgb;
            m[1] += c;
            s[1] += c * c;
        }
    }

    for (int j = 0; j <= radius; ++j) {
        for (int i = 0; i <= radius; ++i) {
            vec3 c = texture2D(inputImageTexture, uv + vec2(i, j) / src_size).rgb;
            m[2] += c;
            s[2] += c * c;
        }
    }

    for (int j = 0; j <= radius; ++j) {
        for (int i = -radius; i <= 0; ++i) {
            vec3 c = texture2D(inputImageTexture, uv + vec2(i, j) / src_size).rgb;
            m[3] += c;
            s[3] += c * c;
        }
    }

    float min_sigma2 = 1e+2;
    for (int k = 0; k < 4; ++k) {
        m[k] /= n;
        s[k] = abs(s[k] / n - m[k] * m[k]);

        float sigma2 = s[k].r + s[k].g + s[k].b;
        if (sigma2 < min_sigma2) {
            min_sigma2 = sigma2;
            gl_FragColor = vec4(m[k], 1.0);
        }
    }
}
Again, these are all built-in filters within GPUImage, so you can just drop that framework into your application and start using them on images, video, and movies without having to touch any OpenGL ES. All the code for the framework is available under a BSD license, if you'd like to see how it works or tweak it.
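For instance, running the swirl filter on live camera video takes only a few lines. A minimal sketch using GPUImage's camera and view classes; filterView stands in for a GPUImageView you've placed in your interface:

#import <GPUImage/GPUImage.h>

GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSwirlFilter *swirlFilter = [[GPUImageSwirlFilter alloc] init];

[videoCamera addTarget:swirlFilter];
[swirlFilter addTarget:filterView]; // filterView: a GPUImageView in your UI

[videoCamera startCameraCapture];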
You could start by checking out this list of shaders here. If you want to dig in a bit more, I'd recommend you check out the orange book found here.