How can I do these image processing tasks using OpenGL ES 2.0 shaders?

How can I perform the following image processing tasks using OpenGL ES 2.0 shaders?
Colorspace transform (RGB/YUV/HSL/Lab)
Swirling of the image
Converting to a sketch
Converting to an oil painting

I just added filters to my open source GPUImage framework that perform three of the four processing tasks you describe (swirling, sketch filtering, and converting to an oil painting). While I don't yet have colorspace transforms as filters, I do have the ability to apply a matrix to transform colors.
As examples of these filters in action, here are a sepia tone color conversion, a swirl distortion, a sketch filter, and finally an oil painting conversion (the example images are omitted here).
Note that all of these filters were done on live video frames, and all but the last filter can be run in real time on video from iOS device cameras. The last filter is pretty computationally intensive, so even as a shader it takes roughly a second to render a frame on an iPad 2.
The sepia tone filter is based on the following color matrix fragment shader:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform lowp mat4 colorMatrix;
uniform lowp float intensity;
void main()
{
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 outputColor = textureColor * colorMatrix;
gl_FragColor = (intensity * outputColor) + ((1.0 - intensity) * textureColor);
}
with a matrix of
self.colorMatrix = (GPUMatrix4x4){
    {0.3588, 0.7044, 0.1368, 0.0},
    {0.2990, 0.5870, 0.1140, 0.0},
    {0.2392, 0.4696, 0.0912, 0.0},
    {0.0, 0.0, 0.0, 1.0}, // pass alpha through unchanged
};
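For the colorspace-transform part of the question, the same color-matrix shader can handle any linear transform, such as RGB to YUV. As a sketch (these are the standard BT.601 coefficients, not values taken from GPUImage), a matrix for the shader above could be written in GLSL like this; note that the mat4 constructor is column-major, so with the textureColor * colorMatrix multiply each group of four values becomes the coefficients for one output channel:
// Assumed sketch: RGB -> YUV (BT.601) for the color-matrix shader above.
// With v * M, output channel j is dot(v, column j), so the first group
// of four values produces Y, the second U, the third V.
const mediump mat4 rgbToYuv = mat4(
     0.299,    0.587,    0.114,    0.0,  // Y
    -0.14713, -0.28886,  0.436,    0.0,  // U
     0.615,   -0.51499, -0.10001,  0.0,  // V
     0.0,      0.0,      0.0,      1.0); // pass alpha through
HSL and Lab, on the other hand, are nonlinear transforms, so those need dedicated shader code rather than a matrix.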
The swirl fragment shader is based on this Geeks 3D example and has the following code:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp vec2 center;
uniform highp float radius;
uniform highp float angle;
void main()
{
highp vec2 textureCoordinateToUse = textureCoordinate;
highp float dist = distance(center, textureCoordinate);
textureCoordinateToUse -= center;
if (dist < radius)
{
highp float percent = (radius - dist) / radius;
highp float theta = percent * percent * angle * 8.0;
highp float s = sin(theta);
highp float c = cos(theta);
textureCoordinateToUse = vec2(dot(textureCoordinateToUse, vec2(c, -s)), dot(textureCoordinateToUse, vec2(s, c)));
}
textureCoordinateToUse += center;
gl_FragColor = texture2D(inputImageTexture, textureCoordinateToUse );
}
The sketch filter is generated using Sobel edge detection, with edges shown in varying grey shades. The shader for this is as follows:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform mediump float intensity;
uniform mediump float imageWidthFactor;
uniform mediump float imageHeightFactor;
const mediump vec3 W = vec3(0.2125, 0.7154, 0.0721);
void main()
{
mediump vec3 textureColor = texture2D(inputImageTexture, textureCoordinate).rgb;
mediump vec2 stp0 = vec2(1.0 / imageWidthFactor, 0.0);
mediump vec2 st0p = vec2(0.0, 1.0 / imageHeightFactor);
mediump vec2 stpp = vec2(1.0 / imageWidthFactor, 1.0 / imageHeightFactor);
mediump vec2 stpm = vec2(1.0 / imageWidthFactor, -1.0 / imageHeightFactor);
mediump float i00 = dot( textureColor, W);
mediump float im1m1 = dot( texture2D(inputImageTexture, textureCoordinate - stpp).rgb, W);
mediump float ip1p1 = dot( texture2D(inputImageTexture, textureCoordinate + stpp).rgb, W);
mediump float im1p1 = dot( texture2D(inputImageTexture, textureCoordinate - stpm).rgb, W);
mediump float ip1m1 = dot( texture2D(inputImageTexture, textureCoordinate + stpm).rgb, W);
mediump float im10 = dot( texture2D(inputImageTexture, textureCoordinate - stp0).rgb, W);
mediump float ip10 = dot( texture2D(inputImageTexture, textureCoordinate + stp0).rgb, W);
mediump float i0m1 = dot( texture2D(inputImageTexture, textureCoordinate - st0p).rgb, W);
mediump float i0p1 = dot( texture2D(inputImageTexture, textureCoordinate + st0p).rgb, W);
mediump float h = -im1p1 - 2.0 * i0p1 - ip1p1 + im1m1 + 2.0 * i0m1 + ip1m1;
mediump float v = -im1m1 - 2.0 * im10 - im1p1 + ip1m1 + 2.0 * ip10 + ip1p1;
mediump float mag = 1.0 - length(vec2(h, v));
mediump vec3 target = vec3(mag);
gl_FragColor = vec4(mix(textureColor, target, intensity), 1.0);
}
Finally, the oil painting look is generated using a Kuwahara filter. This particular filter is from the outstanding work of Jan Eric Kyprianidis and his fellow researchers, as described in the article "Anisotropic Kuwahara Filtering on the GPU" within the GPU Pro book. The shader code from that is as follows:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform int radius;
precision highp float;
const vec2 src_size = vec2 (768.0, 1024.0);
void main (void)
{
vec2 uv = textureCoordinate;
float n = float((radius + 1) * (radius + 1));
vec3 m[4];
vec3 s[4];
for (int k = 0; k < 4; ++k) {
m[k] = vec3(0.0);
s[k] = vec3(0.0);
}
for (int j = -radius; j <= 0; ++j) {
for (int i = -radius; i <= 0; ++i) {
vec3 c = texture2D(inputImageTexture, uv + vec2(i,j) / src_size).rgb;
m[0] += c;
s[0] += c * c;
}
}
for (int j = -radius; j <= 0; ++j) {
for (int i = 0; i <= radius; ++i) {
vec3 c = texture2D(inputImageTexture, uv + vec2(i,j) / src_size).rgb;
m[1] += c;
s[1] += c * c;
}
}
for (int j = 0; j <= radius; ++j) {
for (int i = 0; i <= radius; ++i) {
vec3 c = texture2D(inputImageTexture, uv + vec2(i,j) / src_size).rgb;
m[2] += c;
s[2] += c * c;
}
}
for (int j = 0; j <= radius; ++j) {
for (int i = -radius; i <= 0; ++i) {
vec3 c = texture2D(inputImageTexture, uv + vec2(i,j) / src_size).rgb;
m[3] += c;
s[3] += c * c;
}
}
float min_sigma2 = 1e+2;
for (int k = 0; k < 4; ++k) {
m[k] /= n;
s[k] = abs(s[k] / n - m[k] * m[k]);
float sigma2 = s[k].r + s[k].g + s[k].b;
if (sigma2 < min_sigma2) {
min_sigma2 = sigma2;
gl_FragColor = vec4(m[k], 1.0);
}
}
}
Again, these are all built-in filters within GPUImage, so you can just drop that framework into your application and start using them on images, video, and movies without having to touch any OpenGL ES. All the code for the framework is available under a BSD license, if you'd like to see how it works or tweak it.

You could start by checking out existing collections of GLSL shaders. If you want to dig in a bit more, I'd recommend the "orange book" (OpenGL Shading Language).

Related

How to blend an image mask?

How can I apply such a mask to get an effect such as bokeh? I need to blur the edges of the mask and apply it to the image texture. How do I do that?
Vertex shader:
attribute vec4 a_Position;
void main()
{
gl_Position = a_Position;
}
Fragment shader:
precision lowp float;
uniform sampler2D u_Sampler; // textureSampler
uniform sampler2D u_Mask; // maskSampler
uniform vec3 iResolution;
vec4 blur(sampler2D source, vec2 size, vec2 uv) {
vec4 C = vec4(0.0);
float width = 1.0 / size.x;
float height = 1.0 / size.y;
float divisor = 0.0;
for (float x = -25.0; x <= 25.0; x++)
{
C += texture2D(source, uv + vec2(x * width, 0.0));
C += texture2D(source, uv + vec2(0.0, x * height));
divisor++;
}
C*=0.5;
return vec4(C.r / divisor, C.g / divisor, C.b / divisor, 1.0);
}
void main()
{
vec2 uv = gl_FragCoord.xy / iResolution.xy;
vec4 videoColor = texture2D(u_Sampler, uv);
vec4 maskColor = texture2D(u_Mask, uv);
gl_FragColor = blur(u_Sampler, iResolution.xy, uv);
}
To get the masked blur, mix the blurred color with the original video color using the mask, instead of overwriting gl_FragColor with the blur alone:
vec4 blurColor = blur(u_Sampler, iResolution.xy, uv);
gl_FragColor = mix(blurColor, videoColor, maskColor.r);
But FYI, it's not common to blur in one pass like you have. It's more common to blur in one direction (horizontally), then blur the result of that vertically, and then mix the results (blurredTexture, videoTexture, mask), as sketched below.
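A minimal sketch of that separable approach, assuming a u_Direction uniform set to vec2(1.0, 0.0) for the horizontal pass and vec2(0.0, 1.0) for the vertical pass (the uniform names are made up for this example, not part of any particular API):
precision mediump float;
uniform sampler2D u_Source;  // texture to blur: the video, then the first pass's output
uniform vec2 u_Resolution;   // size of the texture in pixels
uniform vec2 u_Direction;    // (1,0) for the horizontal pass, (0,1) for the vertical pass
void main() {
    vec2 uv = gl_FragCoord.xy / u_Resolution;
    vec2 texelStep = u_Direction / u_Resolution;
    vec4 sum = vec4(0.0);
    // 51-tap box blur along one axis; run the shader once per direction.
    for (float x = -25.0; x <= 25.0; x++) {
        sum += texture2D(u_Source, uv + x * texelStep);
    }
    gl_FragColor = vec4(sum.rgb / 51.0, 1.0);
}
After the two passes, you mix the blurred result with the unblurred video using the mask, exactly as in the single-pass version above.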

Applying displacement mapping and specular mapping

I am trying to apply both displacement mapping and specular mapping for the earth and only displacement mapping for the moon.
I was able to convert the height map to a normal map, but if I use the same height map to apply displacement mapping, it does not work as I expected.
Here is an example image:
You can see the bumpiness around the earth and the moon, but there are no actual height differences.
If I apply the specular map to the earth, the earth becomes like this:
I want only the ocean of the earth to shine, but my code turns the earth almost entirely black; I can only see some white dots on it.
These textures are from this site.
Here are both my vertex shader code and fragment shader code:
"use strict";
const loc_aPosition = 3;
const loc_aNormal = 5;
const loc_aTexture = 7;
const VSHADER_SOURCE =
`#version 300 es
layout(location=${loc_aPosition}) in vec4 aPosition;
layout(location=${loc_aNormal}) in vec4 aNormal;
layout(location=${loc_aTexture}) in vec2 aTexCoord;
uniform mat4 uMvpMatrix;
uniform mat4 uModelMatrix; // Model matrix
uniform mat4 uNormalMatrix; // Transformation matrix of the normal
uniform sampler2D earth_disp;
uniform sampler2D moon_disp;
//uniform float earth_dispScale;
//uniform float moon_dispScale;
//uniform float earth_dispBias;
//uniform float moon_dispBias;
uniform bool uEarth;
uniform bool uMoon;
out vec2 vTexCoord;
out vec3 vNormal;
out vec3 vPosition;
void main()
{
float disp;
if(uEarth)
disp = texture(earth_disp, aTexCoord).r; //Extracting the color information from the image
else if(uMoon)
disp = texture(moon_disp, aTexCoord).r; //Extracting the color information from the image
vec4 displace = aPosition;
float displaceFactor = 2.0;
float displaceBias = 0.5;
if(uEarth || uMoon) //Using Displacement Mapping
{
displace += (displaceFactor * disp - displaceBias) * aNormal;
gl_Position = uMvpMatrix * displace;
}
else //Not using displacement mapping
gl_Position = uMvpMatrix * aPosition;
// Calculate the vertex position in the world coordinate
vPosition = vec3(uModelMatrix * aPosition);
vNormal = normalize(vec3(uNormalMatrix * aNormal));
vTexCoord = aTexCoord;
}`;
// Fragment shader program
const FSHADER_SOURCE =
`#version 300 es
precision mediump float;
uniform vec3 uLightColor; // Light color
uniform vec3 uLightPosition; // Position of the light source
uniform vec3 uAmbientLight; // Ambient light color
uniform sampler2D sun_color;
uniform sampler2D earth_color;
uniform sampler2D moon_color;
uniform sampler2D earth_bump;
uniform sampler2D moon_bump;
uniform sampler2D specularMap;
in vec3 vNormal;
in vec3 vPosition;
in vec2 vTexCoord;
out vec4 fColor;
uniform bool uIsSun;
uniform bool uIsEarth;
uniform bool uIsMoon;
vec2 dHdxy_fwd(sampler2D bumpMap, vec2 UV, float bumpScale)
{
vec2 dSTdx = dFdx( UV );
vec2 dSTdy = dFdy( UV );
float Hll = bumpScale * texture( bumpMap, UV ).x;
float dBx = bumpScale * texture( bumpMap, UV + dSTdx ).x - Hll;
float dBy = bumpScale * texture( bumpMap, UV + dSTdy ).x - Hll;
return vec2( dBx, dBy );
}
vec3 pertubNormalArb(vec3 surf_pos, vec3 surf_norm, vec2 dHdxy)
{
vec3 vSigmaX = vec3( dFdx( surf_pos.x ), dFdx( surf_pos.y ), dFdx( surf_pos.z ) );
vec3 vSigmaY = vec3( dFdy( surf_pos.x ), dFdy( surf_pos.y ), dFdy( surf_pos.z ) );
vec3 vN = surf_norm; // normalized
vec3 R1 = cross( vSigmaY, vN );
vec3 R2 = cross( vN, vSigmaX );
float fDet = dot( vSigmaX, R1 );
fDet *= ( float( gl_FrontFacing ) * 2.0 - 1.0 );
vec3 vGrad = sign( fDet ) * ( dHdxy.x * R1 + dHdxy.y * R2 );
return normalize( abs( fDet ) * surf_norm - vGrad );
}
void main()
{
vec2 dHdxy;
vec3 bumpNormal;
float bumpness = 1.0;
if(uIsSun)
fColor = texture(sun_color, vTexCoord);
else if(uIsEarth)
{
fColor = texture(earth_color, vTexCoord);
dHdxy = dHdxy_fwd(earth_bump, vTexCoord, bumpness);
}
else if(uIsMoon)
{
fColor = texture(moon_color, vTexCoord);
dHdxy = dHdxy_fwd(moon_bump, vTexCoord, bumpness);
}
// Normalize the normal because it is interpolated and not 1.0 in length any more
vec3 normal = normalize(vNormal);
// Calculate the light direction and make its length 1.
vec3 lightDirection = normalize(uLightPosition - vPosition);
// The dot product of the light direction and the orientation of a surface (the normal)
float nDotL;
if(uIsSun)
nDotL = 1.0;
else
nDotL = max(dot(lightDirection, normal), 0.0);
// Calculate the final color from diffuse reflection and ambient reflection
vec3 diffuse = uLightColor * fColor.rgb * nDotL;
vec3 ambient = uAmbientLight * fColor.rgb;
float specularFactor = texture(specularMap, vTexCoord).r; //Extracting the color information from the image
vec3 diffuseBump;
if(uIsEarth || uIsMoon)
{
bumpNormal = pertubNormalArb(vPosition, normal, dHdxy);
diffuseBump = min(diffuse + dot(bumpNormal, lightDirection), 1.1);
}
vec3 specular = vec3(0.0);
float shiness = 12.0;
vec3 lightSpecular = vec3(1.0);
if(uIsEarth && nDotL > 0.0)
{
vec3 v = normalize(-vPosition); // EyePosition
vec3 r = reflect(-lightDirection, bumpNormal); // Reflect from the surface
specular = lightSpecular * specularFactor * pow(dot(r, v), shiness);
}
//Update Final Color
if(uIsEarth)
fColor = vec4( (diffuse * diffuseBump * specular) + ambient, fColor.a); // Specular
else if(uIsMoon)
fColor = vec4( (diffuse * diffuseBump) + ambient, fColor.a);
else if(uIsSun)
fColor = vec4(diffuse + ambient, fColor.a);
}`;
Could you tell me what I need to check?
If it were me, I'd first strip the shader down to the simplest thing and see if I get what I want. You want a specular shine, so do you get a specular shine with only the specular calculations in your shaders?
Trimming your shaders down to just flat Phong shading showed why they didn't produce the correct results. This line
fColor = vec4( (diffuse * specular) + ambient, fColor.a);
needed to be
fColor = vec4( (diffuse + specular) + ambient, fColor.a);
You add the specular, not multiply by it.
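That is the standard Phong combination, in which the terms are summed:
finalColor = ambient + diffuse + specular
with the specular term scaled by the specular map where one is used.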
"use strict";
const loc_aPosition = 3;
const loc_aNormal = 5;
const loc_aTexture = 7;
const VSHADER_SOURCE =
`#version 300 es
layout(location=${loc_aPosition}) in vec4 aPosition;
layout(location=${loc_aNormal}) in vec4 aNormal;
uniform mat4 uMvpMatrix;
uniform mat4 uModelMatrix; // Model matrix
uniform mat4 uNormalMatrix; // Transformation matrix of the normal
out vec3 vNormal;
out vec3 vPosition;
void main()
{
gl_Position = uMvpMatrix * aPosition;
// Calculate the vertex position in the world coordinate
vPosition = vec3(uModelMatrix * aPosition);
vNormal = normalize(vec3(uNormalMatrix * aNormal));
}`;
// Fragment shader program
const FSHADER_SOURCE =
`#version 300 es
precision highp float;
uniform vec3 uLightColor; // Light color
uniform vec3 uLightPosition; // Position of the light source
uniform vec3 uAmbientLight; // Ambient light color
in vec3 vNormal;
in vec3 vPosition;
out vec4 fColor;
void main()
{
vec2 dHdxy;
vec3 bumpNormal;
float bumpness = 1.0;
fColor = vec4(0.5, 0.5, 1, 1);
// Normalize the normal because it is interpolated and not 1.0 in length any more
vec3 normal = normalize(vNormal);
// Calculate the light direction and make its length 1.
vec3 lightDirection = normalize(uLightPosition - vPosition);
// The dot product of the light direction and the orientation of a surface (the normal)
float nDotL;
nDotL = max(dot(lightDirection, normal), 0.0);
// Calculate the final color from diffuse reflection and ambient reflection
vec3 diffuse = uLightColor * fColor.rgb * nDotL;
vec3 ambient = uAmbientLight * fColor.rgb;
float specularFactor = 1.0;
bumpNormal = normal;
vec3 specular = vec3(0.0);
float shiness = 12.0;
vec3 lightSpecular = vec3(1.0);
vec3 v = normalize(-vPosition); // EyePosition
vec3 r = reflect(-lightDirection, bumpNormal); // Reflect from the surface
specular = lightSpecular * specularFactor * pow(dot(r, v), shiness);
fColor = vec4( (diffuse + specular) + ambient, fColor.a); // Specular
}`;
function main() {
const m4 = twgl.m4;
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) { return alert('need webgl2'); }
const prgInfo = twgl.createProgramInfo(gl, [VSHADER_SOURCE, FSHADER_SOURCE]);
const verts = twgl.primitives.createSphereVertices(1, 40, 40);
// calls gl.createBuffer, gl.bindBuffer, gl.bufferData for each array
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
aPosition: verts.position,
aNormal: verts.normal,
aTexCoord: verts.texcoord,
indices: verts.indices,
});
// calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer for each attribute
twgl.setBuffersAndAttributes(gl, prgInfo, bufferInfo);
twgl.resizeCanvasToDisplaySize(gl.canvas);
gl.clearColor(0, 0, 0, 1);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.enable(gl.DEPTH_TEST);
gl.enable(gl.CULL_FACE);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.useProgram(prgInfo.program);
const fov = 60 * Math.PI / 180;
const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
const near = 0.1;
const far = 20.0;
const mat = m4.perspective(fov, aspect, near, far);
m4.translate(mat, [0, 0, -3], mat);
// calls gl.activeTexture, gl.bindTexture, gl.uniform
twgl.setUniforms(prgInfo, {
uMvpMatrix: mat,
uModelMatrix: m4.identity(), // Model matrix
uNormalMatrix: m4.identity(), // Transformation matrix of the normal
uLightColor: [1, 1, 1], // Light color
uLightPosition: [2, 2, 10], // Position of the light source
uAmbientLight: [0, 0, 0], // Ambient light color
});
// calls gl.drawArrays or gl.drawElements
twgl.drawBufferInfo(gl, bufferInfo);
}
main();
body { margin: 0 }
canvas { display: block; width: 100vw; height: 100vh; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
So now we can add in the specular map
"use strict";
const loc_aPosition = 3;
const loc_aNormal = 5;
const loc_aTexCoord = 7;
const VSHADER_SOURCE =
`#version 300 es
layout(location=${loc_aPosition}) in vec4 aPosition;
layout(location=${loc_aNormal}) in vec4 aNormal;
layout(location=${loc_aTexCoord}) in vec2 aTexCoord;
uniform mat4 uMvpMatrix;
uniform mat4 uModelMatrix; // Model matrix
uniform mat4 uNormalMatrix; // Transformation matrix of the normal
out vec3 vNormal;
out vec3 vPosition;
out vec2 vTexCoord;
void main()
{
gl_Position = uMvpMatrix * aPosition;
// Calculate the vertex position in the world coordinate
vPosition = vec3(uModelMatrix * aPosition);
vNormal = normalize(vec3(uNormalMatrix * aNormal));
vTexCoord = aTexCoord;
}`;
// Fragment shader program
const FSHADER_SOURCE =
`#version 300 es
precision highp float;
uniform vec3 uLightColor; // Light color
uniform vec3 uLightPosition; // Position of the light source
uniform vec3 uAmbientLight; // Ambient light color
uniform sampler2D specularMap;
in vec3 vNormal;
in vec3 vPosition;
in vec2 vTexCoord;
out vec4 fColor;
void main()
{
vec2 dHdxy;
vec3 bumpNormal;
float bumpness = 1.0;
fColor = vec4(0.5, 0.5, 1, 1);
// Normalize the normal because it is interpolated and not 1.0 in length any more
vec3 normal = normalize(vNormal);
// Calculate the light direction and make its length 1.
vec3 lightDirection = normalize(uLightPosition - vPosition);
// The dot product of the light direction and the orientation of a surface (the normal)
float nDotL;
nDotL = max(dot(lightDirection, normal), 0.0);
// Calculate the final color from diffuse reflection and ambient reflection
vec3 diffuse = uLightColor * fColor.rgb * nDotL;
vec3 ambient = uAmbientLight * fColor.rgb;
float specularFactor = texture(specularMap, vTexCoord).r; //Extracting the color information from the image
bumpNormal = normal;
vec3 specular = vec3(0.0);
float shiness = 12.0;
vec3 lightSpecular = vec3(1.0);
vec3 v = normalize(-vPosition); // EyePosition
vec3 r = reflect(-lightDirection, bumpNormal); // Reflect from the surface
specular = lightSpecular * specularFactor * pow(dot(r, v), shiness);
fColor = vec4( (diffuse + specular) + ambient, fColor.a); // Specular
}`;
function main() {
const m4 = twgl.m4;
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) { return alert('need webgl2'); }
const prgInfo = twgl.createProgramInfo(gl, [VSHADER_SOURCE, FSHADER_SOURCE]);
const verts = twgl.primitives.createSphereVertices(1, 40, 40);
// calls gl.createBuffer, gl.bindBuffer, gl.bufferData for each array
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
aPosition: verts.position,
aNormal: verts.normal,
aTexCoord: verts.texcoord,
indices: verts.indices,
});
const specularTex = twgl.createTexture(gl, {src: 'https://i.imgur.com/JlIJu5V.jpg'});
function render(time) {
twgl.resizeCanvasToDisplaySize(gl.canvas);
gl.clearColor(0, 0, 0, 1);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.enable(gl.DEPTH_TEST);
gl.enable(gl.CULL_FACE);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
// calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer for each attribute
twgl.setBuffersAndAttributes(gl, prgInfo, bufferInfo);
gl.useProgram(prgInfo.program);
const fov = 60 * Math.PI / 180;
const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
const near = 0.1;
const far = 20.0;
const mat = m4.perspective(fov, aspect, near, far);
m4.translate(mat, [0, 0, -3], mat);
const model = m4.rotationY(time / 1000);
m4.multiply(mat, model, mat);
// calls gl.activeTexture, gl.bindTexture, gl.uniform
twgl.setUniforms(prgInfo, {
uMvpMatrix: mat,
uModelMatrix: model, // Model matrix
uNormalMatrix: model, // Transformation matrix of the normal
uLightColor: [1, 1, 1], // Light color
uLightPosition: [2, 2, 10], // Position of the light source
uAmbientLight: [0, 0, 0], // Ambient light color
specularMap: specularTex,
});
// calls gl.drawArrays or gl.drawElements
twgl.drawBufferInfo(gl, bufferInfo);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
}
main();
body { margin: 0 }
canvas { display: block; width: 100vw; height: 100vh; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
Then, you should arguably not use lots of boolean conditionals in your shader. Either make different shaders, or find a way to do it without the booleans. For example, we don't need
uniform sampler2D earth_disp;
uniform sampler2D moon_disp;
uniform sampler2D sun_color;
uniform sampler2D earth_color;
uniform sampler2D moon_color;
uniform sampler2D earth_bump;
uniform sampler2D moon_bump;
uniform bool uIsSun;
uniform bool uIsEarth;
uniform bool uIsMoon;
we can just have
uniform sampler2D displacementMap;
uniform sampler2D surfaceColor;
uniform sampler2D bumpMap;
Then we can set the displacementMap and the bumpMap to a single pixel 0,0,0,0 texture and there will be no displacement and no bump.
As for the different lighting for the sun: given that the sun uses neither the bump map nor the displacement map, nor even lighting at all, it would arguably be better to use a different shader. But we can also just add a maxDot value like this:
uniform float maxDot;
...
nDotL = max(dot(lightDirection, normal), maxDot);
If maxDot is zero, we get a normal dot product. If maxDot is one, nDotL is always one, so the surface is fully lit and the lighting has no visible effect.
"use strict";
const loc_aPosition = 3;
const loc_aNormal = 5;
const loc_aTexture = 7;
const VSHADER_SOURCE =
`#version 300 es
layout(location=${loc_aPosition}) in vec4 aPosition;
layout(location=${loc_aNormal}) in vec3 aNormal;
layout(location=${loc_aTexture}) in vec2 aTexCoord;
uniform mat4 uMvpMatrix;
uniform mat4 uModelMatrix; // Model matrix
uniform mat4 uNormalMatrix; // Transformation matrix of the normal
uniform sampler2D displacementMap;
out vec2 vTexCoord;
out vec3 vNormal;
out vec3 vPosition;
void main()
{
float disp;
disp = texture(displacementMap, aTexCoord).r;
vec4 displace = aPosition;
float displaceFactor = 0.1;
float displaceBias = 0.0;
displace.xyz += (displaceFactor * disp - displaceBias) * aNormal;
gl_Position = uMvpMatrix * displace;
// Calculate the vertex position in the world coordinate
vPosition = vec3(uModelMatrix * aPosition);
vNormal = normalize(mat3(uNormalMatrix) * aNormal);
vTexCoord = aTexCoord;
}`;
// Fragment shader program
const FSHADER_SOURCE =
`#version 300 es
precision highp float;
uniform vec3 uLightColor; // Light color
uniform vec3 uLightPosition; // Position of the light source
uniform vec3 uAmbientLight; // Ambient light color
uniform sampler2D surfaceColor;
uniform sampler2D bumpMap;
uniform sampler2D specularMap;
uniform float maxDot;
in vec3 vNormal;
in vec3 vPosition;
in vec2 vTexCoord;
out vec4 fColor;
vec2 dHdxy_fwd(sampler2D bumpMap, vec2 UV, float bumpScale)
{
vec2 dSTdx = dFdx( UV );
vec2 dSTdy = dFdy( UV );
float Hll = bumpScale * texture( bumpMap, UV ).x;
float dBx = bumpScale * texture( bumpMap, UV + dSTdx ).x - Hll;
float dBy = bumpScale * texture( bumpMap, UV + dSTdy ).x - Hll;
return vec2( dBx, dBy );
}
vec3 pertubNormalArb(vec3 surf_pos, vec3 surf_norm, vec2 dHdxy)
{
vec3 vSigmaX = vec3( dFdx( surf_pos.x ), dFdx( surf_pos.y ), dFdx( surf_pos.z ) );
vec3 vSigmaY = vec3( dFdy( surf_pos.x ), dFdy( surf_pos.y ), dFdy( surf_pos.z ) );
vec3 vN = surf_norm; // normalized
vec3 R1 = cross( vSigmaY, vN );
vec3 R2 = cross( vN, vSigmaX );
float fDet = dot( vSigmaX, R1 );
fDet *= ( float( gl_FrontFacing ) * 2.0 - 1.0 );
vec3 vGrad = sign( fDet ) * ( dHdxy.x * R1 + dHdxy.y * R2 );
return normalize( abs( fDet ) * surf_norm - vGrad );
}
void main()
{
vec2 dHdxy;
vec3 bumpNormal;
float bumpness = 1.0;
fColor = texture(surfaceColor, vTexCoord);
dHdxy = dHdxy_fwd(bumpMap, vTexCoord, bumpness);
// Normalize the normal because it is interpolated and not 1.0 in length any more
vec3 normal = normalize(vNormal);
// Calculate the light direction and make its length 1.
vec3 lightDirection = normalize(uLightPosition - vPosition);
// The dot product of the light direction and the orientation of a surface (the normal)
float nDotL;
nDotL = max(dot(lightDirection, normal), maxDot);
// Calculate the final color from diffuse reflection and ambient reflection
vec3 diffuse = uLightColor * fColor.rgb * nDotL;
vec3 ambient = uAmbientLight * fColor.rgb;
float specularFactor = texture(specularMap, vTexCoord).r; //Extracting the color information from the image
vec3 diffuseBump;
bumpNormal = pertubNormalArb(vPosition, normal, dHdxy);
diffuseBump = min(diffuse + dot(bumpNormal, lightDirection), 1.1);
vec3 specular = vec3(0.0);
float shiness = 12.0;
vec3 lightSpecular = vec3(1.0);
vec3 v = normalize(-vPosition); // EyePosition
vec3 r = reflect(-lightDirection, bumpNormal); // Reflect from the surface
specular = lightSpecular * specularFactor * pow(dot(r, v), shiness);
//Update Final Color
fColor = vec4( (diffuse * diffuseBump + specular) + ambient, fColor.a); // Specular
}`;
function main() {
const m4 = twgl.m4;
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) { return alert('need webgl2'); }
const prgInfo = twgl.createProgramInfo(gl, [VSHADER_SOURCE, FSHADER_SOURCE]);
const verts = twgl.primitives.createSphereVertices(1, 40, 40);
// calls gl.createBuffer, gl.bindBuffer, gl.bufferData for each array
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
aPosition: verts.position,
aNormal: verts.normal,
aTexCoord: verts.texcoord,
indices: verts.indices,
});
const textures = twgl.createTextures(gl, {
zero: { src: new Uint8Array([0, 0, 0, 0])},
earthSpecular: { src: 'https://i.imgur.com/JlIJu5V.jpg' },
earthColor: { src: 'https://i.imgur.com/eCpD7bM.jpg' },
earthBump: { src: 'https://i.imgur.com/LzFNOP8.jpg' },
sunColor: { src: 'https://i.imgur.com/gl8zBLI.jpg', },
moonColor: { src: 'https://i.imgur.com/oLiU4fm.jpg', },
moonBump: { src: 'https://i.imgur.com/bDnjW8C.jpg', },
});
function render(time) {
// calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer for each attribute
twgl.setBuffersAndAttributes(gl, prgInfo, bufferInfo);
twgl.resizeCanvasToDisplaySize(gl.canvas);
gl.enable(gl.DEPTH_TEST);
gl.enable(gl.CULL_FACE);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.useProgram(prgInfo.program);
const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
const fov = 60 * Math.PI / 180 / aspect;
const near = 0.1;
const far = 20.0;
const viewProjection = m4.perspective(fov, aspect, near, far);
m4.translate(viewProjection, [0, 0, -6], viewProjection);
draw([0, 0, 0], {
displacementMap: textures.earthBump,
bumpMap: textures.earthBump,
surfaceColor: textures.earthColor,
specularMap: textures.earthSpecular,
maxDot: 0,
uAmbientLight: [0, 0, 0],
});
draw([-2.2, 0, 0], {
displacementMap: textures.zero,
bumpMap: textures.zero,
surfaceColor: textures.sunColor,
specularMap: textures.zero,
maxDot: 1,
uAmbientLight: [0, 0, 0],
});
draw([2.2, 0, 0], {
displacementMap: textures.moonBump,
bumpMap: textures.moonBump,
surfaceColor: textures.moonColor,
specularMap: textures.zero,
maxDot: 0,
uAmbientLight: [0, 0, 0],
});
function draw(translation, uniforms) {
const model = m4.translation(translation);
m4.rotateY(model, time / 1000, model);
// calls gl.activeTexture, gl.bindTexture, gl.uniform
twgl.setUniforms(prgInfo, {
uMvpMatrix: m4.multiply(viewProjection, model),
uModelMatrix: model, // Model matrix
uNormalMatrix: m4.transpose(m4.inverse(model)), // Transformation matrix of the normal
uLightColor: [1, 1, 1], // Light color
uLightPosition: [2, 2, 10], // Position of the light source
uAmbientLight: [1, 0, 0], // Ambient light color
});
twgl.setUniforms(prgInfo, uniforms);
// calls gl.drawArrays or gl.drawElements
twgl.drawBufferInfo(gl, bufferInfo);
}
requestAnimationFrame(render);
}
requestAnimationFrame(render);
}
main();
body { margin: 0 }
canvas { display: block; width: 100vw; height: 100vh; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
As for the displacement: displacement only works on vertices, so you need a lot of vertices in your sphere to be able to see any displacement.
There was also a bug related to the displacement. You're passing in normals as vec4, and this line
displace += (displaceFactor * disp - displaceBias) * aNormal;
ends up adding a vec4 displacement. In other words, let's say you started with an aPosition of vec4(1,0,0,1), which would be on the left side of the sphere. Because you declared aNormal as a vec4, it is probably vec4(1,0,0,1) as well: assuming you're actually passing in vec3 normal data via attributes from your buffer, the default value for w is 1. Let's assume disp is 1, displaceFactor is 2 and displaceBias is 0.5, which is what you had. You end up with
displace = vec4(1,0,0,1) + (2 * 1 - 0.5) * vec4(1,0,0,1)
displace = vec4(1,0,0,1) + (1.5) * vec4(1,0,0,1)
displace = vec4(1,0,0,1) + vec4(1.5,0,0,1.5)
displace = vec4(2.5,0,0,2.5)
But you don't want w to be 2.5. One fix is to just use the xyz part of the normal.
displace.xyz += (displaceFactor * disp - displaceBias) * aNormal.xyz;
The more normal fix would be to only declare the normal attribute as vec3
in vec3 aNormal;
displace.xyz += (displaceFactor * disp - displaceBias) * aNormal;
In my example above the spheres have a radius of only 1, so we only want to adjust this displacement a little. I set displaceFactor to 0.1 and displaceBias to 0.

WEBGL Fluid simulation

I am trying to get a fluid simulation to work in WebGL, using http://meatfighter.com/fluiddynamics/GPU_Gems_Chapter_38.pdf as a resource. I have implemented everything, but I feel like there are multiple things that aren't working correctly. I added boundaries, but it seems like they are having no effect, which makes me suspicious about how well pressure and advection are working. When I display the divergence, I get very little around where I am moving the object, as well as where the velocity hits the edge (boundary), but the pressure I get is completely empty. I calculate pressure using the diffusion shader, as described in the linked resource.
I know the code I am posting is a little confusing due to the nature of what it is about. I can supply any pictures/links to the simulation if that would help.
--EDIT--
After some more investigation, I believe the problem (or at least a problem) is related to my advection function. I am unsure how to fix it, though.
Instead of posting all of my code, the general process I follow is:
advect velocity
diffuse velocity
add velocity
calculate divergence
compute pressure
subtract gradient
For diffusing velocity and computing pressure I only do 10 iterations, because that's all my computer can handle with my implementation (I will optimize once I get it working), but I feel like computing the pressure and subtracting the gradient are not having any effect.
Here are the shaders I am using:
//advection
uniform vec2 res;//The width and height of our screen
uniform sampler2D velocity;//input velocity
uniform sampler2D quantity;//quantity to advect
void main() {
vec2 pixel = gl_FragCoord.xy / res.xy;
float i0, j0, i1, j1;
float x, y, s0, s1, t0, t1, dxt0, dyt0;
float dt = 1.0/60.0;
float Nx = res.x -1.0;
float Ny = res.y -1.0;
float i = pixel.x;
float j = pixel.y;
dxt0 = dt ;
dyt0 = dt ;
x = gl_FragCoord.x - dxt0 * (texture2D(velocity, pixel).x );
y = gl_FragCoord.y - dyt0 * (texture2D(velocity, pixel).y );
i0=x-0.5;
i1=x+0.5;
j0=y-0.5;
j1=y+0.5;
s1 = x-i0;
s0 = 1.0-s1;
t1 = y-j0;
t0 = 1.0-t1;
float p1 = (t0 * texture2D(quantity, vec2(i0,j0)/res.xy).r);
float p2 = (t1 * texture2D(quantity, vec2(i0,j1)/res.xy).r);
float p3 = (t0 * texture2D(quantity, vec2(i1,j0)/res.xy).r);
float p4 = (t1 * texture2D(quantity, vec2(i1,j1)/res.xy).r);
float total1 = s0 * (p1 + p2);
float total2 = s1 * (p3 + p4);
gl_FragColor.r = total1 + total2;
p1 = (t0 * texture2D(quantity, vec2(i0,j0)/res.xy).g);
p2 = (t1 * texture2D(quantity, vec2(i0,j1)/res.xy).g);
p3 = (t0 * texture2D(quantity, vec2(i1,j0)/res.xy).g);
p4 = (t1 * texture2D(quantity, vec2(i1,j1)/res.xy).g);
total1 = s0 * (p1 + p2);
total2 = s1 * (p3 + p4);
gl_FragColor.g = total1 + total2;
}
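For comparison, the GPU Gems-style advection step can be much simpler if the quantity texture uses linear (GL_LINEAR) filtering, because the hardware then performs the bilinear interpolation for you. A sketch, assuming the velocity is stored in texels per second:
//advection (simplified sketch)
uniform vec2 res;//The width and height of our screen
uniform sampler2D velocity;//input velocity, in texels per second
uniform sampler2D quantity;//quantity to advect
void main() {
    vec2 pixel = gl_FragCoord.xy / res.xy;
    float dt = 1.0/60.0;
    // Semi-Lagrangian step: trace backwards along the velocity field...
    vec2 prevPos = gl_FragCoord.xy - dt * texture2D(velocity, pixel).xy;
    // ...and sample the quantity there; linear filtering interpolates.
    gl_FragColor = texture2D(quantity, prevPos / res.xy);
}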
//diffusion shader starts here
uniform vec2 res;//The width and height of our screen
uniform sampler2D x;//Our input texture
uniform sampler2D b;
uniform float alpha;
uniform float rBeta;
void main() {
float xPixel = 1.0/res.x;
float yPixel = 1.0/res.y;
vec2 pixel = gl_FragCoord.xy / res.xy;
gl_FragColor = texture2D( b, pixel );
vec4 leftColor = texture2D(x,vec2(pixel.x-xPixel,pixel.y));
vec4 rightColor = texture2D(x,vec2(pixel.x+xPixel,pixel.y));
vec4 upColor = texture2D(x,vec2(pixel.x,pixel.y-yPixel));
vec4 downColor = texture2D(x,vec2(pixel.x,pixel.y+yPixel));
gl_FragColor.r = (gl_FragColor.r * alpha +leftColor.r + rightColor.r + upColor.r + downColor.r) * rBeta;
gl_FragColor.g = (gl_FragColor.g * alpha +leftColor.g + rightColor.g + upColor.g + downColor.g)* rBeta;
gl_FragColor.b = (gl_FragColor.b * alpha +leftColor.b + rightColor.b + upColor.b + downColor.b)* rBeta;
}
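For reference, GPU Gems chapter 38 uses this same Jacobi shader for both solves, just with different parameters. The update it computes is
x'(i,j) = (x(i-1,j) + x(i+1,j) + x(i,j-1) + x(i,j+1) + alpha * b(i,j)) * rBeta
For viscous diffusion, x and b are both the velocity texture, with alpha = (dx)^2 / (nu * dt) and rBeta = 1.0 / (4.0 + alpha); for the pressure solve, x is the pressure, b is the divergence, alpha = -(dx)^2 and rBeta = 0.25 (dx being the grid cell size). Running the pressure passes with diffusion-style parameters would leave the pressure looking empty, so that is worth checking.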
//gradient
uniform vec2 res;//The width and height of our screen
uniform sampler2D velocity;//Our input velocity
uniform sampler2D pressure;//Our input pressure
void main() {
float xPixel = 1.0/res.x;
float yPixel = 1.0/res.y;
vec2 pixel = gl_FragCoord.xy / res.xy;
vec4 leftColor = texture2D(pressure, vec2(pixel.x-xPixel,pixel.y));
vec4 rightColor = texture2D(pressure, vec2(pixel.x+xPixel,pixel.y));
vec4 upColor = texture2D(pressure, vec2(pixel.x,pixel.y-yPixel));
vec4 downColor = texture2D(pressure, vec2(pixel.x,pixel.y+yPixel));
vec2 gradient = xPixel/2.0 * vec2((rightColor.x - leftColor.x), (upColor.y - downColor.y));
//Diffuse equation
gl_FragColor = texture2D(velocity, pixel) ;
gl_FragColor.xy -= gradient;
}
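//divergence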
uniform vec2 res;//The width and height of our screen
uniform sampler2D velocity;//Our input texture
void main() {
float xPixel = 1.0/res.x;
float yPixel = 1.0/res.y;
vec2 pixel = gl_FragCoord.xy / res.xy;
vec4 leftColor = texture2D(velocity, vec2(pixel.x-xPixel,pixel.y));
vec4 rightColor = texture2D(velocity, vec2(pixel.x+xPixel,pixel.y));
vec4 upColor = texture2D(velocity, vec2(pixel.x,pixel.y-yPixel));
vec4 downColor = texture2D(velocity, vec2(pixel.x,pixel.y+yPixel));
float div = xPixel/2.0 * ((rightColor.x - leftColor.x) + (upColor.y - downColor.y));
//Diffuse equation
gl_FragColor = vec4(div);
}

Remove black and white effect from halftone filter

In Brad Larson's excellent GPUImage, there is a halftone filter which also turns the picture black and white. I just want the halftone effect without the black and white, and I was wondering if anyone can tell me what I can remove from the following code to fix this? I have been playing around with it, but I have virtually no experience in OpenGL and am not sure what to eliminate.
NSString *const kGPUImageHalftoneFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp float fractionalWidthOfPixel;
uniform highp float aspectRatio;
uniform highp float dotScaling;
const highp vec3 W = vec3(0.2125, 0.7154, 0.0721);
void main()
{
highp vec2 sampleDivisor = vec2(fractionalWidthOfPixel, fractionalWidthOfPixel / aspectRatio);
highp vec2 samplePos = textureCoordinate - mod(textureCoordinate, sampleDivisor) + 0.5 * sampleDivisor;
highp vec2 textureCoordinateToUse = vec2(textureCoordinate.x, (textureCoordinate.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
highp vec2 adjustedSamplePos = vec2(samplePos.x, (samplePos.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
highp float distanceFromSamplePoint = distance(adjustedSamplePos, textureCoordinateToUse);
lowp vec3 sampledColor = texture2D(inputImageTexture, samplePos ).rgb;
highp float dotScaling = 1.0 - dot(sampledColor, W);
lowp float checkForPresenceWithinDot = 1.0 - step(distanceFromSamplePoint, (fractionalWidthOfPixel * 0.5) * dotScaling);
gl_FragColor = vec4(vec3(checkForPresenceWithinDot), 1.0);
}
);
You can change the last line to
gl_FragColor = vec4(checkForPresenceWithinDot * sampledColor, 1.0);
This will make the dots take on the sampled color of each cell, instead of black and white only.

Motion Blur effect on UIImage on iOS

Is there a way to get a Motion Blur effect on a UIImage?
I tried GPUImage, Filtrr and iOS Core Image, but all of these have regular blur, no motion blur.
I also tried UIImage-DSP, but its motion blur is almost invisible. I need something much stronger.
As I commented on the repository, I just added motion and zoom blurs to GPUImage. These are the GPUImageMotionBlurFilter and GPUImageZoomBlurFilter classes. (An example image of the zoom blur is omitted here.)
For the motion blur, I do a 9-hit Gaussian blur over a single direction. This is achieved using the following vertex and fragment shaders:
Vertex:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
uniform highp vec2 directionalTexelStep;
varying vec2 textureCoordinate;
varying vec2 oneStepBackTextureCoordinate;
varying vec2 twoStepsBackTextureCoordinate;
varying vec2 threeStepsBackTextureCoordinate;
varying vec2 fourStepsBackTextureCoordinate;
varying vec2 oneStepForwardTextureCoordinate;
varying vec2 twoStepsForwardTextureCoordinate;
varying vec2 threeStepsForwardTextureCoordinate;
varying vec2 fourStepsForwardTextureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
oneStepBackTextureCoordinate = inputTextureCoordinate.xy - directionalTexelStep;
twoStepsBackTextureCoordinate = inputTextureCoordinate.xy - 2.0 * directionalTexelStep;
threeStepsBackTextureCoordinate = inputTextureCoordinate.xy - 3.0 * directionalTexelStep;
fourStepsBackTextureCoordinate = inputTextureCoordinate.xy - 4.0 * directionalTexelStep;
oneStepForwardTextureCoordinate = inputTextureCoordinate.xy + directionalTexelStep;
twoStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 2.0 * directionalTexelStep;
threeStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 3.0 * directionalTexelStep;
fourStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 4.0 * directionalTexelStep;
}
Fragment:
precision highp float;
uniform sampler2D inputImageTexture;
varying vec2 textureCoordinate;
varying vec2 oneStepBackTextureCoordinate;
varying vec2 twoStepsBackTextureCoordinate;
varying vec2 threeStepsBackTextureCoordinate;
varying vec2 fourStepsBackTextureCoordinate;
varying vec2 oneStepForwardTextureCoordinate;
varying vec2 twoStepsForwardTextureCoordinate;
varying vec2 threeStepsForwardTextureCoordinate;
varying vec2 fourStepsForwardTextureCoordinate;
void main()
{
lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18;
fragmentColor += texture2D(inputImageTexture, oneStepBackTextureCoordinate) * 0.15;
fragmentColor += texture2D(inputImageTexture, twoStepsBackTextureCoordinate) * 0.12;
fragmentColor += texture2D(inputImageTexture, threeStepsBackTextureCoordinate) * 0.09;
fragmentColor += texture2D(inputImageTexture, fourStepsBackTextureCoordinate) * 0.05;
fragmentColor += texture2D(inputImageTexture, oneStepForwardTextureCoordinate) * 0.15;
fragmentColor += texture2D(inputImageTexture, twoStepsForwardTextureCoordinate) * 0.12;
fragmentColor += texture2D(inputImageTexture, threeStepsForwardTextureCoordinate) * 0.09;
fragmentColor += texture2D(inputImageTexture, fourStepsForwardTextureCoordinate) * 0.05;
gl_FragColor = fragmentColor;
}
As an optimization, I calculate the step size between texture samples outside of the fragment shader by using the angle, blur size, and the image dimensions. This is then passed into the vertex shader, so that I can calculate the texture sampling positions there and interpolate across them in the fragment shader. This avoids dependent texture reads on iOS devices.
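As a sketch of that precalculation (written in GLSL syntax for consistency; the names are assumptions for illustration, not GPUImage's actual properties), the per-sample step can be derived from the blur angle and size like this:
// Hypothetical helper: convert a blur angle (radians) and blur size
// (in pixels) into a per-sample step in normalized texture coordinates.
vec2 texelStepForBlur(float blurAngle, float blurSize, vec2 imageSize)
{
    return blurSize * vec2(cos(blurAngle) / imageSize.x,
                           sin(blurAngle) / imageSize.y);
}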
The zoom blur is much slower, because I still do these calculations in the fragment shader. No doubt there's a way I can optimize this, but I haven't tried yet. The zoom blur uses a 9-hit Gaussian blur where the direction and per-sample offset distance vary as a function of the placement of the pixel vs. the center of the blur.
It uses the following fragment shader (and a standard passthrough vertex shader):
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp vec2 blurCenter;
uniform highp float blurSize;
void main()
{
// TODO: Do a more intelligent scaling based on resolution here
highp vec2 samplingOffset = 1.0/100.0 * (blurCenter - textureCoordinate) * blurSize;
lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18;
fragmentColor += texture2D(inputImageTexture, textureCoordinate + samplingOffset) * 0.15;
fragmentColor += texture2D(inputImageTexture, textureCoordinate + (2.0 * samplingOffset)) * 0.12;
fragmentColor += texture2D(inputImageTexture, textureCoordinate + (3.0 * samplingOffset)) * 0.09;
fragmentColor += texture2D(inputImageTexture, textureCoordinate + (4.0 * samplingOffset)) * 0.05;
fragmentColor += texture2D(inputImageTexture, textureCoordinate - samplingOffset) * 0.15;
fragmentColor += texture2D(inputImageTexture, textureCoordinate - (2.0 * samplingOffset)) * 0.12;
fragmentColor += texture2D(inputImageTexture, textureCoordinate - (3.0 * samplingOffset)) * 0.09;
fragmentColor += texture2D(inputImageTexture, textureCoordinate - (4.0 * samplingOffset)) * 0.05;
gl_FragColor = fragmentColor;
}
Note that both of these blurs are hardcoded at 9 samples for performance reasons. This means that at larger blur sizes, you'll start to see artifacts from the limited samples here. For larger blurs, you'll need to run these filters multiple times or extend them to support more Gaussian samples. However, more samples will lead to much slower rendering times because of the limited texture sampling bandwidth on iOS devices.
CoreImage has a Motion Blur filter.
It's called CIMotionBlur... http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Reference/CoreImageFilterReference/Reference/reference.html#//apple_ref/doc/filter/ci/CIMotionBlur
