I want to achieve a smooth merge effect of the image at the centre cut. I achieved the centre cut with the code below.
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main(){
vec4 CurrentColor = vec4(0.0);
if(textureCoordinate.y < 0.5){
CurrentColor = texture2D(videoFrame,vec2(textureCoordinate.x,(textureCoordinate.y-0.125)));
} else{
CurrentColor = texture2D(videoFrame,vec2(textureCoordinate.x,(textureCoordinate.y+0.125)));
}
gl_FragColor = CurrentColor;
}
The above code gives the effect shown in the images below.
Actual:
Centre cut:
Desired Output:
What I want is for the sharp cut not to be there; instead there should be a smooth gradient merging the two halves.
Do you want an actual blur there, or just a linear blend? Blurring involves a blurring kernel, whereas a blend is a simple interpolation between the two samples, depending on the y coordinate.
This is the code for a linear blend.
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main(){
float steepness = 20.0; /* controls the width of the blending zone; larger values => sharper transition */
vec4 a = texture2D(videoFrame,vec2(textureCoordinate.x,(textureCoordinate.y-0.125)));
vec4 b = texture2D(videoFrame,vec2(textureCoordinate.x,(textureCoordinate.y+0.125)));
/* blend factor: 0 below the transition zone, 1 above it, ramping linearly in between */
float t = clamp((textureCoordinate.y - 0.5) * steepness + 0.5, 0.0, 1.0);
vec4 final = mix(a, b, t); /* for a softer S-shaped transition, use smoothstep(0.0, 1.0, t) as the factor */
gl_FragColor = final;
}
Doing an actual blur is a bit more complicated, as you have to apply that blurring kernel. Basically it involves two nested loops, iterating over the neighbouring texels and summing them up according to some distribution (most flexibly by supplying that distribution through an additional texture, which also lets you add some bokeh).
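For example, here is a quick sketch of what such a blur could look like, applied as a second pass over the output of the centre-cut shader. It uses a fixed, hardcoded 3x3 kernel instead of a distribution texture, and texelSize is an assumed uniform of (1/width, 1/height) that your app would have to supply; it is not part of the code above.

precision highp float;
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;   /* here: the already-cut image from the previous pass */
uniform vec2 texelSize;         /* assumed to be (1.0/width, 1.0/height), set by the app */

void main() {
    vec4 sum = vec4(0.0);
    /* 3x3 kernel with weights 1-2-1 / 2-4-2 / 1-2-1, normalized by 16 */
    for (int t = -1; t <= 1; t++) {
        for (int s = -1; s <= 1; s++) {
            float w = (s == 0 ? 2.0 : 1.0) * (t == 0 ? 2.0 : 1.0) / 16.0;
            sum += texture2D(videoFrame, textureCoordinate + vec2(float(s), float(t)) * texelSize) * w;
        }
    }
    gl_FragColor = sum;
}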
Related
I have a requirement to implement an iOS UIImage filter / effect which is a copy of Photoshop's Distort Wave effect. The wave has to have multiple generators and repeat in a tight pattern within a CGRect.
Photos of steps are attached.
I'm having problems creating the GLSL code to reproduce the sine wave pattern. I'm also trying to smooth the edge of the effect so that the transition to the area outside the rect is not so abrupt.
I found some WebGL code that produces a water ripple. The waves produced before the center point look close to what I need, but I can't seem to get the math right to remove the water ripple (at center point) and just keep the repeating sine pattern before it:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp float time;
uniform highp vec2 center;
uniform highp float angle;
void main() {
highp vec2 cPos = -1.0 + 2.0 * gl_FragCoord.xy / center.xy;
highp float cLength = length(cPos);
highp vec2 uv = gl_FragCoord.xy/center.xy+(cPos/cLength)*cos(cLength*12.0-time*4.0)*0.03;
highp vec3 col = texture2D(inputImageTexture,uv).xyz;
gl_FragColor = vec4(col,1.0);
}
I have to process two Rect areas, one at the top and one at the bottom, so being able to process both Rect areas in one pass would be ideal. Plus the edge smoothing.
Thanks in advance for any help.
I've handled this in the past by generating an offset table on the CPU and uploading it as an input texture. So on the CPU, I'd do something like:
for (i = 0; i < tableSize; i++)
{
table [ i ].x = amplitude * sin (i * frequency * 2.0 * M_PI / tableSize + phase);
table [ i ].y = 0.0;
}
You may need to add in more sine waves if you have multiple "generators". Also, note that the above code offsets the x coordinate of each pixel. You could do Y instead, or both, depending on what you need.
Then in the GLSL, I'd use that table as an offset for sampling. So it would be something like this:
uniform sampler2DRect table;
uniform sampler2DRect inputImage;
//... rest of your code ...
// Get the offset from the table
vec2 coord = gl_TexCoord[0].xy;
vec2 newCoord = coord + texture2DRect(table, coord).xy;
// Sample the input image at the offset coordinate
gl_FragColor = texture2DRect(inputImage, newCoord);
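Note that sampler2DRect and gl_TexCoord come from desktop GLSL; they aren't available in OpenGL ES 2.0, which is what the question targets. A rough ES-style equivalent would use a plain sampler2D for the table and normalized coordinates. Something like the sketch below, where all the names are placeholders rather than variables from my code above:

varying highp vec2 textureCoordinate;   // normalized [0..1] coordinates from the vertex shader
uniform sampler2D offsetTable;          // 1-pixel-high texture holding the precomputed offsets
uniform sampler2D inputImageTexture;

void main()
{
    // An unsigned-normalized texture can't hold negative values directly, so the table is
    // assumed to be stored as (offset * 0.5 + 0.5), pre-scaled into normalized texture space.
    highp vec2 encoded = texture2D(offsetTable, vec2(textureCoordinate.x, 0.5)).xy;
    highp vec2 offset = (encoded - 0.5) * 2.0;
    gl_FragColor = texture2D(inputImageTexture, textureCoordinate + offset);
}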
NOTE: Right now I'm testing this in the simulator, but the idea is to get acceptable performance on, say, an iPhone 4S. (I know, I should be testing on the device, but I won't have one for a few days.)
I was playing around with making a convolution shader that can convolve an image with a filter of support 3x3, 5x5, or 7x7, with the option of multiple passes. The shader itself works, I guess, but I notice the following:
A simple 3x3 box filter, single-pass, barely blurs the image, so to get a more noticeable blur I have to do either a 2-pass 3x3 or a 5x5.
The simplest case (the 3x3, 1-pass) is already slow enough that it couldn't be used at, say, 30 fps.
I tried two approaches so far (this is for some OpenGL ES 2.0-based plugins I'm writing for iPhone, hence the Objective-C methods):
- (NSString *)vertexShader
{
return SHADER_STRING
(
attribute vec4 aPosition;
attribute vec2 aTextureCoordinates0;
varying vec2 vTextureCoordinates0;
void main(void)
{
vTextureCoordinates0 = aTextureCoordinates0;
gl_Position = aPosition;
}
);
}
- (NSString *)fragmentShader
{
return SHADER_STRING
(
precision highp float;
uniform sampler2D uTextureUnit0;
uniform float uKernel[49];
uniform int uKernelSize;
uniform vec2 uTextureUnit0Offset[49];
uniform vec2 uTextureUnit0Step;
varying vec2 vTextureCoordinates0;
void main(void)
{
vec4 outputFragment = vec4(0.0);
for (int i = 0; i < uKernelSize; i++) {
outputFragment += texture2D(uTextureUnit0, vTextureCoordinates0 + uTextureUnit0Offset[i] * uTextureUnit0Step) * uKernel[i];
}
gl_FragColor = outputFragment;
}
);
}
The idea in this approach is that both the filter values and the offset coordinates used to fetch texels are precomputed once on the client/app side and then set as uniforms, so the shader program always has them available whenever it is used. Mind you, the uniform arrays are sized 49 because I could potentially do up to a 7x7 kernel.
This approach takes 0.46 s per pass.
Then I tried the following approach:
- (NSString *)vertexShader
{
return SHADER_STRING
(
// Default pass-thru vertex shader:
attribute vec4 aPosition;
attribute vec2 aTextureCoordinates0;
varying highp vec2 vTextureCoordinates0;
void main(void)
{
vTextureCoordinates0 = aTextureCoordinates0;
gl_Position = aPosition;
}
);
}
- (NSString *)fragmentShader
{
return SHADER_STRING
(
precision highp float;
uniform sampler2D uTextureUnit0;
uniform vec2 uTextureUnit0Step;
uniform float uKernel[49];
uniform float uKernelRadius;
varying vec2 vTextureCoordinates0;
void main(void)
{
vec4 outputFragment = vec4(0., 0., 0., 0.);
int kRadius = int(uKernelRadius);
int kSupport = 2 * kRadius + 1;
for (int t = -kRadius; t <= kRadius; t++) {
for (int s = -kRadius; s <= kRadius; s++) {
int kernelIndex = (s + kRadius) + ((t + kRadius) * kSupport);
outputFragment += texture2D(uTextureUnit0, vTextureCoordinates0 + (vec2(s,t) * uTextureUnit0Step)) * uKernel[kernelIndex];
}
}
gl_FragColor = outputFragment;
}
);
}
Here, I still pass the precomputed kernel into the fragment shader via a uniform, but I now compute the texel offsets and even the kernel indices in the shader. I'd expect this approach to be slower, since I not only have two for loops but am also doing a bunch of extra computations for every single fragment.
Interestingly enough, this approach takes 0.42 s. It is actually faster...
At this point, the only other thing I can think of doing is breaking the convolution into two passes by treating the 2D kernel as two separable 1D kernels. I haven't tried it out yet.
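Roughly what I have in mind for one of those 1D passes is something like the sketch below (untested; uKernel1D would be a new uniform holding the 1D weights, and the same shader would run twice, once with a horizontal step and once with a vertical one):

precision highp float;
uniform sampler2D uTextureUnit0;
uniform vec2 uTextureUnit0Step;   // (1/width, 0) for the horizontal pass, (0, 1/height) for the vertical pass
uniform float uKernel1D[7];       // 1D weights; should sum to 1.0
varying vec2 vTextureCoordinates0;

void main(void)
{
    vec4 outputFragment = vec4(0.0);
    // Fixed 7-tap loop (radius 3); constant bounds also keep the GLSL ES loop restrictions happy.
    for (int i = 0; i < 7; i++) {
        outputFragment += texture2D(uTextureUnit0, vTextureCoordinates0 + float(i - 3) * uTextureUnit0Step) * uKernel1D[i];
    }
    gl_FragColor = outputFragment;
}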
Just for comparison, and aware that the following example is a specific implementation of box filtering that is (a) pretty much hardcoded and (b) doesn't really adhere to the theoretical definition of a classic nxn linear filter (it is not a matrix and doesn't add up to 1), I tried this approach from the OpenGL ES 2.0 Programming Guide:
- (NSString *)fragmentShader
{
return SHADER_STRING
(
// Default pass-thru fragment shader:
precision mediump float;
// Input texture:
uniform sampler2D uTextureUnit0;
// Texel step:
uniform vec2 uTextureUnit0Step;
varying vec2 vTextureCoordinates0;
void main() {
vec4 sample0;
vec4 sample1;
vec4 sample2;
vec4 sample3;
float step = uTextureUnit0Step.x;
sample0 = texture2D(uTextureUnit0, vec2(vTextureCoordinates0.x - step, vTextureCoordinates0.y - step));
sample1 = texture2D(uTextureUnit0, vec2(vTextureCoordinates0.x + step, vTextureCoordinates0.y + step));
sample2 = texture2D(uTextureUnit0, vec2(vTextureCoordinates0.x + step, vTextureCoordinates0.y - step));
sample3 = texture2D(uTextureUnit0, vec2(vTextureCoordinates0.x - step, vTextureCoordinates0.y + step));
gl_FragColor = (sample0 + sample1 + sample2 + sample3) / 4.0;
}
);
}
This approach takes 0.06s per pass.
Mind you, the above is my adaptation where I made the step pretty much the same texel offset I was using in my implementation. With this step, the result is very similar to my implementation, but the original shader in the OpenGL guide uses a larger step which blurs more.
So with all of the above being said, my question is really two-fold:
I'm computing the step / texel offset as vec2(1 / image width, 1 / image height). With this offset, like I said, a 3x3 box filter is barely noticeable. Is this correct, or am I misunderstanding the computation of the step or something else?
Is there anything else I could do to try and get the "convolution in the general case" approach to run fast enough for real-time? Or do I necessarily need to go for a simplification like the OpenGL example?
If you run those through the OpenGL ES Analysis tool in Instruments or the Frame Debugger in Xcode, you'll probably see a note about dependent texture reads -- you're calculating texcoords in the fragment shader, which means the hardware can't fetch texel data until it gets to that point in evaluating the shader. If texel coordinates are known going into the fragment shader, the hardware can prefetch your texel data in parallel with other tasks, so it's ready to go by the time the fragment shader needs it.
You can speed things up greatly by precomputing texel coordinates in the vertex shader. Brad Larson has a good example of doing this in his answer to a similar question.
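As a rough illustration of the idea (a sketch in that spirit, not Brad Larson's exact code; uTexelStep and the varying names are made up here), a 3x3 version would precompute the nine sample coordinates in the vertex shader and hand them to the fragment shader as plain varyings:

// Vertex shader: compute all nine 3x3 sample coordinates up front
attribute vec4 aPosition;
attribute vec2 aTextureCoordinates0;
uniform vec2 uTexelStep;   // assumed to be (1/width, 1/height)
varying vec2 vCenterCoord;
varying vec2 vLeftCoord;
varying vec2 vRightCoord;
varying vec2 vTopCoord;
varying vec2 vBottomCoord;
varying vec2 vTopLeftCoord;
varying vec2 vTopRightCoord;
varying vec2 vBottomLeftCoord;
varying vec2 vBottomRightCoord;

void main(void)
{
    gl_Position = aPosition;
    vCenterCoord      = aTextureCoordinates0;
    vLeftCoord        = aTextureCoordinates0 + vec2(-uTexelStep.x, 0.0);
    vRightCoord       = aTextureCoordinates0 + vec2( uTexelStep.x, 0.0);
    vTopCoord         = aTextureCoordinates0 + vec2(0.0, -uTexelStep.y);
    vBottomCoord      = aTextureCoordinates0 + vec2(0.0,  uTexelStep.y);
    vTopLeftCoord     = aTextureCoordinates0 + vec2(-uTexelStep.x, -uTexelStep.y);
    vTopRightCoord    = aTextureCoordinates0 + vec2( uTexelStep.x, -uTexelStep.y);
    vBottomLeftCoord  = aTextureCoordinates0 + vec2(-uTexelStep.x,  uTexelStep.y);
    vBottomRightCoord = aTextureCoordinates0 + vec2( uTexelStep.x,  uTexelStep.y);
}

// Fragment shader: no coordinate math here, so none of the texture reads are dependent
precision highp float;
uniform sampler2D uTextureUnit0;
uniform float uKernel[9];   // 3x3 weights, row-major
varying vec2 vCenterCoord;
varying vec2 vLeftCoord;
varying vec2 vRightCoord;
varying vec2 vTopCoord;
varying vec2 vBottomCoord;
varying vec2 vTopLeftCoord;
varying vec2 vTopRightCoord;
varying vec2 vBottomLeftCoord;
varying vec2 vBottomRightCoord;

void main(void)
{
    gl_FragColor =
          texture2D(uTextureUnit0, vTopLeftCoord)     * uKernel[0]
        + texture2D(uTextureUnit0, vTopCoord)         * uKernel[1]
        + texture2D(uTextureUnit0, vTopRightCoord)    * uKernel[2]
        + texture2D(uTextureUnit0, vLeftCoord)        * uKernel[3]
        + texture2D(uTextureUnit0, vCenterCoord)      * uKernel[4]
        + texture2D(uTextureUnit0, vRightCoord)       * uKernel[5]
        + texture2D(uTextureUnit0, vBottomLeftCoord)  * uKernel[6]
        + texture2D(uTextureUnit0, vBottomCoord)      * uKernel[7]
        + texture2D(uTextureUnit0, vBottomRightCoord) * uKernel[8];
}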
I don't have answers to your precise questions, but you should take a look at the GPUImage framework, which implements several box blur filters (see this SO question), among which is a 2-pass 9x9 filter. You can also see this article for real-time FPS figures for different approaches: vImage vs GPUImage vs CoreImage.
I'm trying to combine two textures using shaders in OpenGL ES 2.0.
As you can see in the screenshot, I am trying to create a reflection of the needle on the object behind it using dynamic environment mapping.
However, the reflection of the needle looks semi-transparent and is blended with my environment map.
Here is my fragment shader:
varying highp vec4 R;
uniform samplerCube cube_map1;
uniform samplerCube cube_map2;
void main()
{
mediump vec3 output_color1;
mediump vec3 output_color2;
output_color1 = textureCube(cube_map1 , R.xyz).rgb;
output_color2 = textureCube(cube_map2 , R.xyz).rgb;
gl_FragColor = mix(vec4(output_color1,1.0),vec4(output_color2,1.0),0.5);
}
but, "mix" method cause a blending two textures.
I'm also checked Texture Combiners examples but it didn't help either.
is there any way to combine two textures without blend each other.
thanks.
Judging from the comments, my guess is that you want to draw the needle on top of the landscape picture. I'd simply render it as an overlay, but since you want to do it in a shader, maybe this would work:
void main()
{
mediump vec3 output_color1;
mediump vec3 output_color2;
output_color1 = textureCube(cube_map1 , R.xyz).rgb;
output_color2 = textureCube(cube_map2 , R.xyz).rgb;
if ( length( output_color1 ) > 0.0 )
gl_FragColor = vec4(output_color1,1.0);
else
gl_FragColor = vec4(output_color2,1.0);
}
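Note this relies on the needle cube map being pure black (zero) everywhere except the needle itself. If your needle map carries an alpha channel instead, the same idea can be written as a masked mix; a sketch, assuming cube_map2 holds the needle with alpha:

void main()
{
    mediump vec4 background = textureCube(cube_map1, R.xyz);
    mediump vec4 needle = textureCube(cube_map2, R.xyz);
    // Use the needle's alpha as a mask: 0 keeps the background, 1 shows the needle.
    gl_FragColor = vec4(mix(background.rgb, needle.rgb, needle.a), 1.0);
}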
I have just completed the first version of my iOS app, Corebox, and am now working on some new features.
One of the new features is a "small" tweak to the OpenGL rendering to force some objects to never be drawn smaller than a minimum size. All of the objects needing this treatment are simple two-point lines drawn with GL_LINES.
This annotated screenshot explains what I'm after. Ignore the grey lines; the only objects I'm interested in altering are the wider yellow lines.
I have googled this extensively and it seems what I need to do is alter the geometry of the lines using a vertex shader. I'm quite new to GLSL, and most shader examples I can find deal with applying lighting and other effects, e.g. the GLSL Heroku Editor and KicksJS shader editor.
My current vertex shader is extremely basic:
// GL_LINES vertex shader
uniform mat4 Projection;
uniform mat4 Modelview;
attribute vec4 Position;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
void main(void) {
DestinationColor = SourceColor;
gl_Position = Projection * Modelview * Position;
}
As is my fragment shader:
// GL_LINES fragment shader
varying lowp vec4 DestinationColor;
void main(void) {
gl_FragColor = DestinationColor;
}
My guess as to what is required:
Determine the distance between the viewer (camera position) and the object
Determine how big the object is on the screen, based on its size and distance from camera
If the object will be too small, adjust its vertices so that it becomes large enough to see easily on the screen.
Caveats and other notes:
But if you zoom out won't this cause the model to be just a blob of orange on the screen? Yes, this is exactly the effect I'm after.
Edit: Here is the final working version implementing suggestions by mifortin
uniform mat4 Projection;
uniform mat4 Modelview;
uniform float MinimumHeight;
attribute vec4 Position;
attribute vec4 ObjectCenter;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
void main(void) {
// screen-space position of this vertex
vec4 screenPosition = Projection * Modelview * Position;
// screen-space mid-point of the object this vertex belongs to
vec4 screenObjectCenter = Projection * Modelview * ObjectCenter;
// Z should be 0 by this time and the projective transform in w.
// scale so w = 1 (these two should be in screen-space)
vec2 newScreenPosition = screenPosition.xy / screenPosition.w;
vec2 newObjectCenter = screenObjectCenter.xy / screenObjectCenter.w;
float d = distance(newScreenPosition, newObjectCenter);
if (d < MinimumHeight && d > 0.0) {
// Direction of this object, this really only makes sense in the context
// of a line (eg: GL_LINES)
vec2 towards = normalize(newScreenPosition - newObjectCenter);
// Shift the center point then adjust the vertex position accordingly
// Basically this converts: *--x--* into *--------x--------*
newObjectCenter = newObjectCenter + towards * MinimumHeight;
screenPosition.xy = newObjectCenter.xy * screenPosition.w;
}
gl_Position = screenPosition;
DestinationColor = SourceColor;
}
Note that I didn't test the code, but it should illustrate the solution.
If you want to use shaders, add another uniform vec4 that is the center position of your line. Then you can do something similar to the following (note that the center could be precomputed on the CPU once):
uniform float MIN; //Minimum size of blob on-screen
uniform vec4 center; //Center of the line / blob
...
vec4 screenPos = Projection * Modelview * Position;
vec4 screenCenter = Projection * Modelview * center;
//Z should be 0 by this time and the projective transform in w.
//scale so w = 1 (these two should be in screen-space)
vec2 nScreenPos = screenPos.xy / screenPos.w;
vec2 nCenter = screenCenter.xy / screenCenter.w;
float d = distance(nScreenPos, nCenter);
if (d < MIN && d > 0.0)
{
vec2 towards = normalize(nScreenPos - nCenter);
nCenter = nCenter + towards * MIN;
screenPos.xy = nCenter.xy * screenPos.w;
}
gl_Position = screenPos;
Find where on the screen the vertex would be drawn, then from the center of the blob stretch it if needed to ensure a minimum size.
This example is for round objects. For corners, you could make MIN an attribute so the distance from the center varies on a per-vertex basis.
If you just want something more box-like, check the minimum distance of the x and y coordinates separately.
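Something along these lines, reusing the names from the snippet above (again untested, just to show the shape of the check):

vec2 delta = nScreenPos - nCenter;
// Push each axis out independently so the shape stays box-like rather than round.
if (abs(delta.x) < MIN && delta.x != 0.0)
    delta.x = sign(delta.x) * MIN;
if (abs(delta.y) < MIN && delta.y != 0.0)
    delta.y = sign(delta.y) * MIN;
screenPos.xy = (nCenter + delta) * screenPos.w;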
On the CPU, you could compute the coordinates in screen-space and scale accordingly before submitting to the GPU.
It seems this should be easy but I'm having a lot of difficulty using part of a texture with a point sprite. I have googled around extensively and turned up various answers but none of these deal with the specific issue I'm having.
What I've learned so far:
Basics of point sprite drawing
How to deal with point sprites rendering as solid squares
How to alter orientation of a point sprite
How to use multiple textures with a point sprite (getting closer here)
That point sprites + sprite sheets have been done before, but are only possible in OpenGL ES 2.0 (not 1.0)
Here is a diagram of what I'm trying to achieve
Where I'm at:
I have a set of working point sprites all using the same single square image. Eg: a 16x16 image of a circle works great.
I have an Objective-C method which generates a 600x600 image containing a sprite-sheet with multiple images. I have verified this is working by applying the entire sprite sheet image to a quad drawn with GL_TRIANGLES.
I have used the above method successfully to draw parts of a sprite sheet onto quads. I just can't get it to work with point sprites.
Currently I'm generating texture coordinates pointing to the center of the sprite on the sprite sheet I'm targeting. Eg: Using the image at the bottom; star: 0.166,0.5; cloud: 0.5,0.5; heart: 0.833,0.5.
Code:
Vertex Shader
uniform mat4 Projection;
uniform mat4 Modelview;
uniform float PointSize;
attribute vec4 Position;
attribute vec2 TextureCoordIn;
varying vec2 TextureCoord;
void main(void)
{
gl_Position = Projection * Modelview * Position;
TextureCoord = TextureCoordIn;
gl_PointSize = PointSize;
}
Fragment Shader
varying mediump vec2 TextureCoord;
uniform sampler2D Sampler;
void main(void)
{
// Using my TextureCoord just draws a grey square, so
// I'm likely generating texture coords that texture2D doesn't like.
gl_FragColor = texture2D(Sampler, TextureCoord);
// Using gl_PointCoord just draws my whole sprite map
// gl_FragColor = texture2D(Sampler, gl_PointCoord);
}
What I'm stuck on:
I don't understand how to use the gl_PointCoord variable in the fragment shader. What does gl_PointCoord contain initially? Why? Where does it get its data?
I don't understand what texture coordinates to pass in. For example, how does the point sprite choose what part of my sprite sheet to use based on the texture coordinates? I'm used to drawing quads, which effectively have four sets of texture coordinates (one for each vertex); how is this different (clearly it is)?
A colleague of mine helped with the answer. It turns out the trick is to utilize both the size of the point (in OpenGL units) and the size of the sprite (in texture units, (0..1)) in combination with a little vector math to render only part of the sprite-sheet onto each point.
Vertex Shader
uniform mat4 Projection;
uniform mat4 Modelview;
// The radius of the point in OpenGL units, eg: "20.0"
uniform float PointSize;
// The size of the sprite being rendered. My sprites are square
// so I'm just passing in a float. For non-square sprites pass in
// the width and height as a vec2.
uniform float TextureCoordPointSize;
attribute vec4 Position;
attribute vec4 ObjectCenter;
// The top left corner of a given sprite in the sprite-sheet
attribute vec2 TextureCoordIn;
varying vec2 TextureCoord;
varying vec2 TextureSize;
void main(void)
{
gl_Position = Projection * Modelview * Position;
TextureCoord = TextureCoordIn;
TextureSize = vec2(TextureCoordPointSize, TextureCoordPointSize);
// This is optional, it is a quick and dirty way to make the points stay the same
// size on the screen regardless of distance.
gl_PointSize = PointSize / Position.w;
}
Fragment Shader
varying mediump vec2 TextureCoord;
varying mediump vec2 TextureSize;
uniform sampler2D Sampler;
void main(void)
{
// This is where the magic happens. Combine all three factors to render
// just a portion of the sprite-sheet for this point
mediump vec2 realTexCoord = TextureCoord + (gl_PointCoord * TextureSize);
mediump vec4 fragColor = texture2D(Sampler, realTexCoord);
// Optional, emulate GL_ALPHA_TEST to use transparent images with
// point sprites without worrying about z-order.
// see: http://stackoverflow.com/a/5985195/806988
if(fragColor.a == 0.0){
discard;
}
gl_FragColor = fragColor;
}
Point sprites are composed of a single position. Therefore any "varying" values will not actually vary, because there's nothing to interpolate between.
gl_PointCoord is a vec2 value whose XY components are in the range [0, 1]. They represent the location within the point. In OpenGL ES 2.0, (0, 0) is the top-left of the point and (1, 1) is the bottom-right.
So you want to map (0, 0) to the top-left of your sprite and (1, 1) to the bottom-right. To do that, you need to know certain things: the size of the sprites (assuming they're all the same size), the size of the texture (because the texture fetch functions take normalized texture coordinates, not pixel locations), and which sprite is currently being rendered.
The latter can be set via a varying. It can just be a value that's passed as per-vertex data into the varying in the vertex shader.
You use that plus the size of the sprites to determine where in the texture you want to pull data for this sprite. Once you have the texel coordinates you want to use, you divide them by the texture size to produce normalized texture coordinates.
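Concretely, the fragment shader ends up doing something like the sketch below, where SpriteOrigin, SpriteSize, and TextureSize (all in pixels) are placeholders for whatever you pass in, not variables from the code above:

varying mediump vec2 SpriteOrigin;   // top-left corner of this sprite in the sheet, in pixels (per-vertex data)
uniform mediump vec2 SpriteSize;     // size of one sprite, in pixels
uniform mediump vec2 TextureSize;    // size of the whole sprite sheet, in pixels
uniform sampler2D Sampler;

void main(void)
{
    // Map gl_PointCoord (0..1 across the point) into this sprite's pixel rectangle,
    // then normalize by the sheet size to get texture coordinates.
    mediump vec2 texelCoord = SpriteOrigin + gl_PointCoord * SpriteSize;
    gl_FragColor = texture2D(Sampler, texelCoord / TextureSize);
}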
In any case, point sprites, despite the name, aren't really meant for sprite rendering. It would be easier to use quads/triangles for that, as you can have more assurance over exactly what positions everything has.