Position scaled video texture over image texture background - webgl

I have a working scaled, masked video texture over an image texture background. However, it is positioned in the bottom-left corner. I tried some tricks multiplying the coords, but it doesn't seem to make much difference. I'll probably have to make a lot of the values changeable uniforms, but hardcoded is OK for now.
What values can be used to change the video texture coords so it displays in the top-right or bottom-right corner?
The video is a webcam stream, with BodyPix data providing the mask.
The alpha in the mix() comes from the BodyPix data and needs to be scaled by 255 to display properly.
Fragment shader example:
precision mediump float;

uniform sampler2D background;
uniform sampler2D frame;
uniform sampler2D mask;
uniform float texWidth;
uniform float texHeight;

void main(void) {
    vec2 texCoord = gl_FragCoord.xy / vec2(texWidth, texHeight);
    // Map the bottom-left 200x200 pixels of the canvas onto the video frame.
    vec2 frameuv = texCoord * vec2(texWidth, texHeight) / vec2(200.0, 200.0);
    vec4 texel0 = texture2D(background, texCoord);
    vec4 frameTex = texture2D(frame, frameuv.xy);
    vec4 maskTex = texture2D(mask, frameuv.xy);
    // Show the video only where frameuv is inside [0, 1]; scale the mask alpha by 255.
    gl_FragColor = mix(texel0, frameTex, step(frameuv.x, 1.0) * step(frameuv.y, 1.0) * maskTex.a * 255.0);
}
https://jsfiddle.net/danrossi303/82tpoy94/3/
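One way to answer this (my own sketch, not from the original post): offset gl_FragCoord by the inset's bottom-left pixel position before dividing by the inset size, then clamp the blend factor on all four edges, since the inset no longer starts at (0, 0). The 200.0 inset size is the hardcoded value from the question; insetOrigin is a hypothetical value you would set per corner (or pass in as a uniform later). Note gl_FragCoord has its origin at the bottom-left in WebGL.

precision mediump float;

uniform sampler2D background;
uniform sampler2D frame;
uniform sampler2D mask;
uniform float texWidth;
uniform float texHeight;

void main(void) {
    vec2 texCoord = gl_FragCoord.xy / vec2(texWidth, texHeight);

    // Hypothetical: bottom-left pixel of the 200x200 inset.
    // Bottom-left corner:  vec2(0.0, 0.0)
    // Bottom-right corner: vec2(texWidth - 200.0, 0.0)
    // Top-right corner:    vec2(texWidth - 200.0, texHeight - 200.0)
    vec2 insetOrigin = vec2(texWidth - 200.0, texHeight - 200.0);

    vec2 frameuv = (gl_FragCoord.xy - insetOrigin) / vec2(200.0, 200.0);

    vec4 texel0 = texture2D(background, texCoord);
    vec4 frameTex = texture2D(frame, frameuv);
    vec4 maskTex = texture2D(mask, frameuv);

    // Inside the inset only when both components of frameuv are in [0, 1].
    float inset = step(0.0, frameuv.x) * step(frameuv.x, 1.0)
                * step(0.0, frameuv.y) * step(frameuv.y, 1.0);

    gl_FragColor = mix(texel0, frameTex, inset * maskTex.a * 255.0);
}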

Related

Quality loss (blurriness) in shader

I am trying to make a shader that either passes through an image unaltered or displays a tiled texture, depending on some conditions. It more or less works, but I noticed that the tiled texture doesn't quite look right, so I simplified the shader for testing so it would only show the tiled image:
precision highp float;

uniform sampler2D uSampler;
varying vec2 vTextureCoord;
varying vec4 vColor;
varying vec2 vFilterCoord;

uniform vec2 dimensions;
uniform vec4 filterArea;

uniform sampler2D selector;
uniform vec2 selectorSize;

uniform sampler2D alternate;
uniform vec2 alternateSize;

vec2 mapCoord( vec2 coord )
{
    coord *= filterArea.xy;
    coord += filterArea.zw;
    return coord;
}

vec2 unmapCoord( vec2 coord )
{
    coord -= filterArea.zw;
    coord /= filterArea.xy;
    return coord;
}

void main()
{
    vec2 coord = vTextureCoord;
    coord = mapCoord(coord);

    // sample the alternate:
    vec2 av = mod( coord, alternateSize ) / (alternateSize - 1.0);
    vec4 alt = texture2D(alternate, av);

    gl_FragColor = alt;
}
I am not quite sure what's going on. The original image is 100x100, and the repeating area is 100x100. The pattern looks the same, but it's slightly blurred in the shader (see screenshots below). Does this have to do with retina? (I haven't done anything special to set up retina.) Mipmaps? Something else?
UPDATE: As suggested by @danieltran, I tried setting the texture to GL_NEAREST (in Pixi, this is done by passing Pixi.SCALE_MODES.NEAREST to the texture constructor). It made no difference, so I tried making a sprite from the texture and displaying that, and it has the same problem, so I think this is either something related to retina or something Pixi-specific.
(Screenshots omitted: the original 100x100 source texture, and the slightly blurred output of the shader.)
Change the texture filter to GL_NEAREST and it will solve the issue.
To be specific, the problem here is that when the GPU looks up the colour for a fragment, instead of taking it from a single texel it computes it from nearby texels as well (linear filtering), which makes the picture look blurry.
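If changing the filter mode is awkward in a given setup, the same effect can be approximated inside the shader by snapping the lookup to texel centers before sampling. A sketch replacing the two sampling lines in main() above, under the assumption that alternateSize is the tile size in pixels; note it also normalizes by alternateSize rather than alternateSize - 1.0, which maps texel indices onto the full 0..1 range:

    // Snap to the centre of the nearest texel so that linear filtering
    // never blends neighbouring texels (assumes alternateSize is the
    // texture size in pixels).
    vec2 texel = floor(mod(coord, alternateSize));
    vec2 av = (texel + 0.5) / alternateSize;
    vec4 alt = texture2D(alternate, av);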

How do I modify GPUImageGaussianSelectiveBlurFilter to operate over a rectangular focus (e.g. Instagram) instead of a circular focus? [duplicate]

I have used the GPUImage framework for a blur effect similar to that of the Instagram application: I made a view for getting a picture from the photo library, and then I put an effect on it.
One of the effects is a selective blur effect in which only a small part of the image is clear; the rest is blurred. The GPUImageGaussianSelectiveBlurFilter chooses the circular part of the image to not be blurred.
How can I alter this to make the sharp region be rectangular in shape instead?
Because Gill's answer isn't exactly correct, and since this seems to be getting asked over and over, I'll clarify my comment above.
The fragment shader for the selective blur by default has the following code:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;

uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform lowp float excludeCircleRadius;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;

void main()
{
    lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);

    highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
    highp float distanceFromCenter = distance(excludeCirclePoint, textureCoordinateToUse);

    gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
This fragment shader takes in a pixel color value from both the original sharp image and a Gaussian blurred version of the image. It then blends between these based on the logic of the last three lines.
The first and second of these lines calculate the distance from the center coordinate that you specify ((0.5, 0.5) in normalized coordinates by default for the dead center of the image) to the current pixel's coordinate. The last line uses the smoothstep() GLSL function to smoothly interpolate between 0 and 1 when the distance from the center point travels between two thresholds, the inner clear circle, and the outer fully blurred circle. The mix() operator then takes the output from the smoothstep() and fades between the blurred and sharp color pixel colors to produce the appropriate output.
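As a concrete illustration (my own numbers, not from the answer): with excludeCircleRadius = 0.25 and excludeBlurSize = 0.05, a pixel at distanceFromCenter = 0.22 blends as follows:

    // smoothstep(0.20, 0.25, 0.22):
    //   t = clamp((0.22 - 0.20) / (0.25 - 0.20), 0.0, 1.0) = 0.4
    //   result = t * t * (3.0 - 2.0 * t) = 0.352
    // so the output is about 35% blurred and 65% sharp; anything at
    // distance <= 0.20 is fully sharp, anything >= 0.25 fully blurred.
    lowp float blend = smoothstep(0.20, 0.25, 0.22); // = 0.352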
If you just want to modify this to produce a square shape instead of the circular one, you need to adjust the two center lines in the fragment shader to base the distance on linear X or Y coordinates, not a Pythagorean distance from the center point. To do this, change the shader to read:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;

uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform lowp float excludeCircleRadius;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;

void main()
{
    lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);

    highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
    textureCoordinateToUse = abs(excludeCirclePoint - textureCoordinateToUse);
    highp float distanceFromCenter = max(textureCoordinateToUse.x, textureCoordinateToUse.y);

    gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
The lines that Gill mentions are just input parameters for the filter, and don't control its circularity at all.
I leave modifying this further to produce a generic rectangular shape as an exercise for the reader, but this should provide a basis for how you could do this and a bit more explanation of what the lines in this shader do.
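For what it's worth, here is one hedged sketch of that generalization (my own code, not part of the original answer): replace the single radius with a hypothetical excludeHalfSize uniform holding the rectangle's half-width and half-height, normalize each axis by its own half-extent so the rectangle border sits at distance 1.0, and blur outward from there (excludeBlurSize is reinterpreted here as a fraction of the half-extent).

varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;

uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
// Hypothetical replacement for excludeCircleRadius: the half-width and
// half-height of the sharp rectangle in aspect-corrected coordinates.
uniform lowp vec2 excludeHalfSize;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;

void main()
{
    lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
    lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);

    highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));

    // Per-axis distance, normalized so the rectangle border is at 1.0.
    highp vec2 d = abs(excludeCirclePoint - textureCoordinateToUse) / excludeHalfSize;
    highp float distanceFromCenter = max(d.x, d.y);

    gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(1.0 - excludeBlurSize, 1.0, distanceFromCenter));
}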
Did it ... the code for the rectangular effect is just in these 2 lines:
blurFilter = [[GPUImageGaussianSelectiveBlurFilter alloc] init];
[(GPUImageGaussianSelectiveBlurFilter*)blurFilter setExcludeCircleRadius:80.0/320.0];
[(GPUImageGaussianSelectiveBlurFilter*)blurFilter setExcludeCirclePoint:CGPointMake(0.5f, 0.5f)];
// [(GPUImageGaussianSelectiveBlurFilter*)blurFilter setBlurSize:0.0f];
// [(GPUImageGaussianSelectiveBlurFilter*)blurFilter setAspectRatio:0.0f];

Computing texture coordinates in fragment shader (iOS/OpenGL ES 2.0)

I am finding that in my fragment shader, these 2 statements give identical output:
// #1
// pos is set from gl_Position in vertex shader
highp vec2 texc = ((pos.xy / pos.w) + 1.0) / 2.0;
// #2 - equivalent?
highp vec2 texc2 = gl_FragCoord.xy/uWinDims.xy;
If this is correct, could you please explain the math? I understand #2, which is what I came up with, but saw #1 in a paper. Is this an NDC (normalized device coordinate) calculation?
The context is that I am using the texture coordinates with an FBO the same size as the viewport. It's all working, but I'd like to understand the math.
Relevant portion of vertex shader:
attribute vec4 position;
attribute vec4 aColor;

uniform mat4 modelViewProjectionMatrix;

varying lowp vec4 vColor;
// transformed position
varying highp vec4 pos;

void main()
{
    gl_Position = modelViewProjectionMatrix * position;
    // for fragment shader
    pos = gl_Position;
    vColor = aColor;
}
Relevant portion of fragment shader:
// transformed position - from vsh
varying highp vec4 pos;
// viewport dimensions
uniform highp vec2 uWinDims;

void main()
{
    highp vec2 texc = ((pos.xy / pos.w) + 1.0) / 2.0;
    // equivalent?
    highp vec2 texc2 = gl_FragCoord.xy / uWinDims.xy;
    ...
}
(pos.xy / pos.w) is the coordinate value in normalized device coordinates (NDC). This value ranges from -1 to 1 in each dimension.
(NDC + 1.0)/2.0 changes the range from (-1 to 1) to (0 to 1) (0 on the left of the screen, and 1 on the right, similar for top/bottom).
Alternatively, gl_FragCoord gives the coordinate in pixels, so it ranges from (0 to width) and (0 to height).
Dividing this value by width and height (uWinDims), gives the position again from 0 on the left side of the screen, to 1 on the right side.
So yes, the two are equivalent.
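To spell the equivalence out a little further (my own worked step, assuming glViewport(0, 0, width, height) with uWinDims = (width, height)):

    // The rasterizer derives the window position p that gl_FragCoord.xy
    // reports from the clip-space position as:
    //   ndc = pos.xy / pos.w                (range -1..1)
    //   p   = (ndc * 0.5 + 0.5) * uWinDims  (range 0..uWinDims)
    // Dividing the second line by uWinDims:
    //   p / uWinDims = ndc * 0.5 + 0.5 = (ndc + 1.0) / 2.0
    // which is exactly texc, so texc == texc2 for every fragment.
    highp vec2 texc  = ((pos.xy / pos.w) + 1.0) / 2.0;
    highp vec2 texc2 = gl_FragCoord.xy / uWinDims.xy;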

Minimum size of rendered object using GL_LINES in iOS Open GL ES

I have just completed the first version of my iOS app, Corebox, and am now working on some new features.
One of the new features is a "small" tweak to the OpenGL rendering to force some objects to never be drawn smaller than a minimum size. All of the objects needing this treatment are simple 2 point lines drawn with GL_LINES.
This annotated screenshot explains what I'm after. Ignore the grey lines, the only objects I'm interested in altering are the yellow wider lines.
I have googled this extensively, and it seems that what I need to do is alter the geometry of the lines using a vertex shader. I'm quite new to GLSL, and most shader examples I can find deal with applying lighting and other effects, e.g. the GLSL Heroku Editor and the KicksJS shader editor.
My current vertex shader is extremely basic:
// GL_LINES vertex shader
uniform mat4 Projection;
uniform mat4 Modelview;

attribute vec4 Position;
attribute vec4 SourceColor;

varying vec4 DestinationColor;

void main(void) {
    DestinationColor = SourceColor;
    gl_Position = Projection * Modelview * Position;
}
As is my fragment shader:
// GL_LINES fragment shader
varying lowp vec4 DestinationColor;

void main(void) {
    gl_FragColor = DestinationColor;
}
My guess as to what is required:
Determine the distance between the viewer (camera position) and the object.
Determine how big the object is on screen, based on its size and distance from the camera.
If the object would be too small, adjust its vertices so that it becomes large enough to see easily on screen.
Caveats and other notes:
But if you zoom out, won't this cause the model to become just a blob of orange on the screen? Yes, that is exactly the effect I'm after.
Edit: Here is the final working version, implementing the suggestions by mifortin:
uniform mat4 Projection;
uniform mat4 Modelview;
uniform float MinimumHeight;

attribute vec4 Position;
attribute vec4 ObjectCenter;
attribute vec4 SourceColor;

varying vec4 DestinationColor;

void main(void) {
    // screen-space position of this vertex
    vec4 screenPosition = Projection * Modelview * Position;

    // screen-space mid-point of the object this vertex belongs to
    vec4 screenObjectCenter = Projection * Modelview * ObjectCenter;

    // Z should be 0 by this time and the projective transform in w.
    // scale so w = 1 (these two should be in screen-space)
    vec2 newScreenPosition = screenPosition.xy / screenPosition.w;
    vec2 newObjectCenter = screenObjectCenter.xy / screenObjectCenter.w;

    float d = distance(newScreenPosition, newObjectCenter);

    if (d < MinimumHeight && d > 0.0) {
        // Direction of this object, this really only makes sense in the context
        // of a line (eg: GL_LINES)
        vec2 towards = normalize(newScreenPosition - newObjectCenter);

        // Shift the center point then adjust the vertex position accordingly
        // Basically this converts: *--x--* into *--------x--------*
        newObjectCenter = newObjectCenter + towards * MinimumHeight;
        screenPosition.xy = newObjectCenter.xy * screenPosition.w;
    }

    gl_Position = screenPosition;
    DestinationColor = SourceColor;
}
Note that I didn't test the code, but it should illustrate the solution.
If you want to use shaders, add in another uniform vec4 that is the center position of your line. Then you can do something similar to (note center could be precomputed on the CPU once):
uniform float MIN;   // Minimum size of blob on-screen
uniform vec4 center; // Center of the line / blob
...
vec4 screenPos = Projection * Modelview * Position;
vec4 screenCenter = Projection * Modelview * center;

// Z should be 0 by this time and the projective transform in w.
// scale so w = 1 (these two should be in screen-space)
vec2 nScreenPos = screenPos.xy / screenPos.w;
vec2 nCenter = screenCenter.xy / screenCenter.w;

float d = distance(nScreenPos, nCenter);

if (d < MIN && d > 0.0)
{
    vec2 towards = normalize(nScreenPos - nCenter);
    nCenter = nCenter + towards * MIN;
    screenPos.xy = nCenter.xy * screenPos.w;
}

gl_Position = screenPos;
Find where on the screen the vertex would be drawn, then from the center of the blob stretch it if needed to ensure a minimum size.
This example is for round objects. For corners, you could make MIN an attribute so the distance from the center varies on a per-vertex basis.
If you just want something more box-like, check the minimum distance of the x and y coordinates separately, as sketched below.
On the CPU, you could compute the coordinates in screen-space and scale accordingly before submitting to the GPU.
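A sketch of that box-like variant (my own adaptation of the snippet above, reusing its names): enforce the minimum extent per axis instead of on the Euclidean distance.

    vec2 delta = nScreenPos - nCenter;
    // Grow each component to at least MIN from the center, keeping its
    // sign; components that are already large enough are unchanged, and
    // components exactly on the center line (delta == 0.0) stay put.
    vec2 pushed = max(abs(delta), vec2(MIN)) * sign(delta);
    screenPos.xy = (nCenter + pushed) * screenPos.w;
    gl_Position = screenPos;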
