Quality loss (blurriness) in shader - WebGL

I am trying to make a shader that either passes through an image unaltered or displays a tiled texture, depending on some conditions. It more or less works, but I noticed that the tiled texture doesn't quite look right, so for testing I simplified the shader so it only shows the tiled image:
precision highp float;
uniform sampler2D uSampler;
varying vec2 vTextureCoord;
varying vec4 vColor;
varying vec2 vFilterCoord;
uniform vec2 dimensions;
uniform vec4 filterArea;
uniform sampler2D selector;
uniform vec2 selectorSize;
uniform sampler2D alternate;
uniform vec2 alternateSize;
vec2 mapCoord( vec2 coord )
{
    coord *= filterArea.xy;
    coord += filterArea.zw;
    return coord;
}
vec2 unmapCoord( vec2 coord )
{
    coord -= filterArea.zw;
    coord /= filterArea.xy;
    return coord;
}
void main()
{
    vec2 coord = vTextureCoord;
    coord = mapCoord(coord);
    // sample the alternate:
    vec2 av = mod( coord, alternateSize ) / (alternateSize - 1.0);
    vec4 alt = texture2D(alternate, av);
    gl_FragColor = alt;
}
I am not quite sure what's going on. The original image is 100x100, and the repeating area is 100x100. The pattern looks the same, but it's slightly blurred in the shader (see screenshots below). Does this have to do with retina? (I haven't done anything special to set up retina.) Mipmaps? Something else?
UPDATE: As suggested by @danieltran, I tried setting the texture to GL_NEAREST (in pixi this is done by passing PIXI.SCALE_MODES.NEAREST to the texture constructor). It made no difference, so I then just tried making a sprite from the texture and displaying that, and it has the same problem, so I think this is either something related to retina or something pixi-specific.
Original texture is taken from this image:
Here's what the output of the shader looks like:

Change the texture filter to GL_NEAREST and it will solve the issue.
To be specific, the problem is that when the GPU looks up the texel for a fragment, instead of taking the colour from a single texel it interpolates between nearby texels as well, which makes the picture look blurry.
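If changing the texture's filter/scale mode isn't convenient (for example when the framework manages the texture object), an alternative is to snap the sampling coordinate to the centre of the nearest texel in the shader itself, so linear filtering has nothing to blend. This is only a minimal sketch against the question's own uniforms (alternate, alternateSize), assuming alternateSize is the tile size in pixels:
// Snap to the nearest texel centre before sampling, so GL_LINEAR
// filtering always lands exactly on one texel and cannot blur.
vec2 texelIndex = floor(mod(coord, alternateSize)); // integer texel index within the tile
vec2 av = (texelIndex + 0.5) / alternateSize;       // centre of that texel in [0,1]
vec4 alt = texture2D(alternate, av);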

Related

Position scaled video texture over image texture background

I have a working scaled, masked video texture rendered over an image texture background. However, it is positioned in the bottom-left corner. I tried some tricks multiplying the coords but it doesn't seem to make much difference. I'll probably have to make a lot of the values changeable uniforms, but hardcoded is OK for now.
What values can be used to change the video texture coords so it displays in the top-right or bottom-right corner?
The video is a webcam stream with bodypix data providing the mask.
The alpha in the mix comes from the BodyPix data and needs to be multiplied by 255 to display properly.
Fragment example
precision mediump float;
uniform sampler2D background;
uniform sampler2D frame;
uniform sampler2D mask;
uniform float texWidth;
uniform float texHeight;
void main(void) {
    vec2 texCoord = gl_FragCoord.xy / vec2(texWidth, texHeight);
    vec2 frameuv = texCoord * vec2(texWidth, texHeight) / vec2(200.0, 200.0);
    vec4 texel0 = texture2D(background, texCoord);
    vec4 frameTex = texture2D(frame, frameuv.xy);
    vec4 maskTex = texture2D(mask, frameuv.xy);
    gl_FragColor = mix(texel0, frameTex, step(frameuv.x, 1.0) * step(frameuv.y, 1.0) * maskTex.a * 255.);
}
https://jsfiddle.net/danrossi303/82tpoy94/3/
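For illustration, one way to move the overlay is to subtract a pixel offset from gl_FragCoord before dividing by the frame size, so the 200x200 window can start at any corner. A rough sketch, assuming gl_FragCoord's origin is at the bottom-left and keeping the question's hard-coded 200.0 frame size (frameOrigin is a hypothetical value, not part of the original code):
// ...inside main():
// Bottom-left corner of the 200x200 window, in pixels.
// For the top-right corner of the screen:
vec2 frameOrigin = vec2(texWidth - 200.0, texHeight - 200.0);
// For the bottom-right corner it would be vec2(texWidth - 200.0, 0.0).
vec2 frameuv = (gl_FragCoord.xy - frameOrigin) / vec2(200.0, 200.0);
// Only blend the video where frameuv lies inside [0,1] on both axes.
float inFrame = step(0.0, frameuv.x) * step(0.0, frameuv.y)
              * step(frameuv.x, 1.0) * step(frameuv.y, 1.0);
vec4 frameTex = texture2D(frame, frameuv);
vec4 maskTex = texture2D(mask, frameuv);
gl_FragColor = mix(texel0, frameTex, inFrame * maskTex.a * 255.);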

OpenGL ES 2.0 draw Fullscreen Quad very slow

When I render my content onto an FBO with a texture bound to it and then render that bound texture to a fullscreen quad using a basic shader, the performance drops ridiculously.
For example:
Render to screen directly (with basic shader):
And when rendering to a texture first, then rendering that texture with a fullscreen quad (same basic shader; normally this would be something like blur or bloom):
Does anyone have an idea how to speed this up? The current performance is not usable. Also, I'm using GLKit for the basic OpenGL stuff.
You need to use precision qualifiers where they're needed:
lowp - for colors, texture coords, normals, etc.
highp - for matrices and vertices/positions
Quick reference: check the ranges of the precision qualifiers on page 3, under "Qualifiers".
// BasicShader.vsh
precision mediump float;
attribute highp vec2 position;
attribute lowp vec2 texCoord;
attribute lowp vec4 color;
varying lowp vec2 textureCoord;
varying lowp vec4 textureColor;
uniform highp mat4 projectionMat;
uniform highp mat4 worldMat;
void main() {
    highp mat4 worldProj = worldMat * projectionMat;
    gl_Position = worldProj * vec4(position, 0.0, 1.0);
    textureCoord = texCoord;
    textureColor = color;
}
// BasicShader.fsh
precision mediump float;
varying lowp vec2 textureCoord;
varying lowp vec4 textureColor;
uniform sampler2D sampler;
void main() {
    lowp vec4 Color = texture2D(sampler, textureCoord);
    gl_FragColor = Color * textureColor;
}
This is very likely caused by poorly performing OpenGL ES API calls.
You should attach a real device and do an OpenGL ES frame capture. (It really does need a real device; the frame-capture option isn't available with a simulator.)
The frame capture will indicate memory and other warnings along with suggestions to fix them alongside each API call. Step through these and fix each. The performance should improve considerably.
Here are a couple of references to get this done:
Debugging openGL ES frame
Xcode tools overview

How do I modify GPUImageGaussianSelectiveBlurFilter to operate over a rectangular focus (e.g. Instagram) instead of a circular focus to move the blurred area? [duplicate]

I have used the GPUImage framework for a blur effect similar to that of the Instagram application, where I have made a view for picking a picture from the photo library, and then I apply an effect to it.
One of the effects is a selective blur in which only a small part of the image is clear; the rest is blurred. The GPUImageGaussianSelectiveBlurFilter leaves a circular region of the image unblurred.
How can I alter this to make the sharp region be rectangular in shape instead?
Because Gill's answer isn't exactly correct, and since this seems to be getting asked over and over, I'll clarify my comment above.
The fragment shader for the selective blur by default has the following code:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform lowp float excludeCircleRadius;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;
void main()
{
    lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);
    highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
    highp float distanceFromCenter = distance(excludeCirclePoint, textureCoordinateToUse);
    gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
This fragment shader takes in a pixel color value from both the original sharp image and a Gaussian blurred version of the image. It then blends between these based on the logic of the last three lines.
The first and second of these lines calculate the distance from the center coordinate that you specify ((0.5, 0.5) in normalized coordinates by default for the dead center of the image) to the current pixel's coordinate. The last line uses the smoothstep() GLSL function to smoothly interpolate between 0 and 1 when the distance from the center point travels between two thresholds, the inner clear circle, and the outer fully blurred circle. The mix() operator then takes the output from the smoothstep() and fades between the blurred and sharp color pixel colors to produce the appropriate output.
If you just want to modify this to produce a square shape instead of the circular one, you need to adjust the two center lines in the fragment shader to base the distance on linear X or Y coordinates, not a Pythagorean distance from the center point. To do this, change the shader to read:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform lowp float excludeCircleRadius;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;
void main()
{
    lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);
    highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
    textureCoordinateToUse = abs(excludeCirclePoint - textureCoordinateToUse);
    highp float distanceFromCenter = max(textureCoordinateToUse.x, textureCoordinateToUse.y);
    gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
The lines that Gill mentions are just input parameters for the filter, and don't control its circularity at all.
I leave modifying this further to produce a generic rectangular shape as an exercise for the reader, but this should provide a basis for how you could do this and a bit more explanation of what the lines in this shader do.
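As a starting point for that exercise, here is a rough sketch of a rectangular (rather than square) clear region: normalise the per-axis distance by a half-width and half-height before taking the max. The excludeRectHalfSize uniform below is hypothetical; it is not part of the stock GPUImage filter and would have to be added and set from the host code:
// Hypothetical uniform: half-width and half-height of the sharp rectangle,
// in the same normalised, aspect-corrected space as excludeCirclePoint.
uniform lowp vec2 excludeRectHalfSize;

// ...inside main(), replacing the distance calculation:
textureCoordinateToUse = abs(excludeCirclePoint - textureCoordinateToUse);
// 1.0 on the rectangle's edge, less than 1.0 inside, greater outside.
highp float distanceFromCenter = max(textureCoordinateToUse.x / excludeRectHalfSize.x,
                                     textureCoordinateToUse.y / excludeRectHalfSize.y);
gl_FragColor = mix(sharpImageColor, blurredImageColor,
                   smoothstep(1.0 - excludeBlurSize, 1.0, distanceFromCenter));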
Did it ... the code for the rectangular effect is just in these 2 lines
blurFilter = [[GPUImageGaussianSelectiveBlurFilter alloc] init];
[(GPUImageGaussianSelectiveBlurFilter*)blurFilter setExcludeCircleRadius:80.0/320.0];
[(GPUImageGaussianSelectiveBlurFilter*)blurFilter setExcludeCirclePoint:CGPointMake(0.5f, 0.5f)];
// [(GPUImageGaussianSelectiveBlurFilter*)blurFilter setBlurSize:0.0f]; [(GPUImageGaussianSelectiveBlurFilter*)blurFilter setAspectRatio:0.0f];

How can I take advantage of lookup tables in my Blinn-Phong lighting shader?

I'm experimenting with some lighting techniques on iOS and I've been able to produce some effects that I'm pleased with by taking advantage of iOS' OpenGL ES extensions for depth lookup textures and a relatively simple Blinn-Phong shader:
The screenshot above shows 20 Suzanne monkeys being rendered at full-screen retina with multi-sampling and the following shader. I'm doing multi-sampling because it only adds 1ms per frame. My current average render time is 30ms total (iPad 3), which is far too slow for 60fps.
Vertex shader:
//Position
uniform mat4 mvpMatrix;
attribute vec4 position;
uniform mat4 depthMVPMatrix;
uniform mat4 vpMatrix;
//Shadow out
varying vec3 ShadowCoord;
//Lighting
attribute vec3 normal;
varying vec3 normalOut;
uniform mat3 normalMatrix;
varying vec3 vertPos;
uniform vec4 lightColor;
uniform vec3 lightPosition;
void main() {
    gl_Position = mvpMatrix * position;
    //Used for handling shadows
    ShadowCoord = (depthMVPMatrix * position).xyz;
    ShadowCoord.z -= 0.01;
    //Lighting calculations
    normalOut = normalize(normalMatrix * normal);
    vec4 vertPos4 = vpMatrix * position;
    vertPos = vertPos4.xyz / vertPos4.w;
}
Fragment shader:
#extension GL_EXT_shadow_samplers : enable
precision lowp float;
uniform sampler2DShadow shadowTexture;
varying vec3 normalOut;
uniform vec3 lightPosition;
varying vec3 vertPos;
varying vec3 ShadowCoord;
uniform vec4 fillColor;
uniform vec3 specColor;
void main() {
    vec3 normal = normalize(normalOut);
    vec3 lightDir = normalize(lightPosition - vertPos);
    float lambertian = max(dot(lightDir, normal), 0.0);
    vec3 reflectDir = reflect(-lightDir, normal);
    vec3 viewDir = normalize(-vertPos);
    float specAngle = max(dot(reflectDir, viewDir), 0.0);
    float specular = pow(specAngle, 16.0);
    gl_FragColor = vec4((lambertian * fillColor.xyz + specular * specColor) * shadow2DEXT(shadowTexture, ShadowCoord), fillColor.w);
}
I've read that it is possible to use textures as lookup tables to reduce computation in the fragment shader; however, the linked example seems to be doing full Phong lighting rather than Blinn-Phong (I'm not doing anything with surface tangents). Furthermore, when running the sample the lighting seemed fairly banded (the background on mine, which is a solid color + Phong shading, looks slightly banded as a result of compression - it looks far smoother on the device). Is it possible to use a lookup texture in my case, or am I going to have to move down to 30fps (which I can just about achieve), turn off multi-sampling and limit Phong shading to the monkeys rather than the full screen? In a real-world (i.e. game) scenario, am I going to need to be doing Phong shading across the entire screen anyway?
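For what it's worth, the most obvious candidate for a lookup table in this shader is the pow() in the specular term: a small texture can store pow(x, 16.0) for x in [0,1], and the fragment shader replaces the pow() call with a single fetch. A rough sketch, assuming a hypothetical 256x1 lookup texture specLookup whose red channel has been pre-filled with pow(i/255.0, 16.0) on the CPU:
uniform sampler2D specLookup;  // hypothetical 256x1 LUT: red channel = pow(x, 16.0)

// ...inside main(), replacing the pow() call:
float specAngle = max(dot(reflectDir, viewDir), 0.0);
float specular = texture2D(specLookup, vec2(specAngle, 0.5)).r;
Whether this is actually faster than pow() depends on the GPU; on many chips a texture fetch is no cheaper than the ALU work it replaces, so it is worth profiling both versions.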
