I have one base image and have created layers for it; the layers are PNG images.
I can use canvas.drawImage to composite the layers with the base image, but how can I apply a texture to a layer image without changing the layer's size?
In other words, I want to render one image onto another image in WebGL.
I don't know exactly what effect you want to achieve, but I think a fragment shader will help you with that. Try putting this code into the Shadertoy editor:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    fragColor = texture2D(iChannel1, uv) * texture2D(iChannel0, uv);
}
And don't forget to assign some textures to iChannel0 and iChannel1.
Oh, and I forgot about these articles; they should also help you a lot:
WebGL image processing and WebGL using 2 or more textures
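For a plain WebGL setup (outside Shadertoy), a fragment shader doing the same multiply might look like the sketch below; the uniform and varying names (u_baseImage, u_texture, v_texCoord) are placeholders I chose for illustration, and the articles above show how to wire them up:

precision mediump float;

uniform sampler2D u_baseImage;  // the base/layer image
uniform sampler2D u_texture;    // the texture to apply on top of it
varying vec2 v_texCoord;        // interpolated from the vertex shader

void main() {
    vec4 base = texture2D(u_baseImage, v_texCoord);
    vec4 tex = texture2D(u_texture, v_texCoord);
    // component-wise multiply, same as the Shadertoy example above
    gl_FragColor = base * tex;
}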
I'm implementing a paint app by using OpenGL/GLSL.
There is a feature where the user draws a "mask" using a brush with a pattern image; meanwhile the background changes according to the brush position. Take a look at the video to understand: video
I used CALayer's mask (an iOS feature) to achieve this effect (shown in the video), but that implementation is very costly and the FPS is pretty low, so I decided to use OpenGL instead.
For the OpenGL implementation, I use the stencil buffer for masking, i.e.:
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
// Draw mask (brush pattern)
glStencilFunc(GL_EQUAL, 1, 255);
// Draw gradient background
// Display the buffer
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
The problem: the stencil buffer doesn't work with alpha, which is why I can't use semi-transparent patterns for the brushes.
The question: How can I achieve the effect from the video using OpenGL/GLSL, but without the stencil buffer?
Since your background is already generated (per the comments), you can simply use two textures in the shader to draw each of the segments. You will need to redraw all of them until the user lifts their finger, though.
So assume you have a texture with a white footprint and an alpha channel, footprintTextureID, and a background texture, backgroundTextureID. You need to bind both textures using glActiveTexture (texture units 1 and 2) and pass the two as sampler uniforms to the shader.
Now, in your vertex shader, you will need to generate the relative texture coordinates from the position. There should be a line similar to gl_Position = computedPosition;, so you need to add another varying value:
backgroundTextureCoordinates = vec2((computedPosition.x+1.0)*0.5, (computedPosition.y+1.0)*0.5);
or if you need to flip vertically
backgroundTextureCoordinates = vec2((computedPosition.x+1.0)*0.5, (-computedPosition.y+1.0)*0.5);
(The reason for this equation is that the output vertices are in interval [-1,1] but the textures use [0,1]: [-1,1]+1 = [0,2] then [0,2]*0.5 = [0,1]).
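Put together, a minimal vertex shader along these lines could look like the sketch below; the attribute and varying names are illustrative rather than taken from your project, and computedPosition stands in for whatever transform you already apply:

attribute vec4 position;
attribute vec2 inputTextureCoordinate;

varying vec2 footprintTextureCoordinate;
varying vec2 backgroundTextureCoordinates;

void main() {
    vec4 computedPosition = position; // replace with your existing transform
    gl_Position = computedPosition;

    // the footprint keeps its own texture coordinates
    footprintTextureCoordinate = inputTextureCoordinate;

    // derive the background coordinates from clip space: [-1, 1] -> [0, 1]
    backgroundTextureCoordinates = vec2((computedPosition.x + 1.0) * 0.5,
                                        (computedPosition.y + 1.0) * 0.5);
}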
OK, so assuming you have bound all of these correctly, you now only need to multiply the colors in the fragment shader to get the blended color:
uniform sampler2D footprintTexture;
varying lowp vec2 footprintTextureCoordinate;

uniform sampler2D backgroundTexture;
varying lowp vec2 backgroundTextureCoordinates;

void main() {
    lowp vec4 footprintColor = texture2D(footprintTexture, footprintTextureCoordinate);
    lowp vec4 backgroundColor = texture2D(backgroundTexture, backgroundTextureCoordinates);
    gl_FragColor = footprintColor * backgroundColor;
}
If you wanted, you could multiply by the alpha value from the footprint instead, but that only loses flexibility. As long as the footprint texture is white it makes no difference, so it is your choice.
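For completeness, that alpha-based variant would replace the last line of the fragment shader above with something like this (same uniforms and varyings as before):

// scale the background by the footprint's alpha instead of multiplying by its RGB
gl_FragColor = backgroundColor * footprintColor.a;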
Stencil is a boolean on/off test, so as you say it can't cope with alpha.
The only GL technique which works with alpha is blending, but due to the color change between frames you can't simply flatten this into a single layer in a single pass.
To my mind it sounds like you need to maintain multiple independent layers in off-screen buffers, and then blend them together per frame to form what is shown on screen. This gives you complete independence for how you update each layer per frame.
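As a sketch of that per-frame composite step (the uniform names here are assumptions, not from the question), a fragment shader could combine the off-screen brush layer with the regenerated background, treating the brush layer's alpha as coverage:

precision mediump float;

uniform sampler2D backgroundLayer; // the gradient background, re-rendered each frame
uniform sampler2D brushLayer;      // off-screen buffer of accumulated brush strokes
varying vec2 texCoords;

void main() {
    vec4 background = texture2D(backgroundLayer, texCoords);
    float coverage = texture2D(brushLayer, texCoords).a;
    // reveal the background only where the brush has painted;
    // semi-transparent strokes reveal it proportionally
    gl_FragColor = vec4(background.rgb, background.a * coverage);
}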
I have started experimenting with the info-beamer software for the Raspberry Pi. It appears to have support for displaying PNGs, text, and video, but when I see GLSL primitives, my first instinct is to draw a texture-mapped polygon.
Unfortunately, I can't find the documentation that would allow me to draw so much as a single triangle using the shaders. I have made a few toys using GLSL, so I'm familiar with the pipeline of setting transform matrices and drawing triangles that are filtered by the vertex and fragment shaders.
I have grepped around in info-beamer-nodes on GitHub for examples of GL drawing, but the relevant examples have so far escaped my notice.
How do I use info-beamer's GLSL shaders on arbitrary UV mapped polygons?
Based on the comment by the author of info-beamer it is clear that functions to draw arbitrary triangles are not available in info-beamer 0.9.1.
The specific effect I was going to attempt was a rectangle that faded to transparent at the margins. Fortunately, the 30c3-room/ example in the info-beamer-nodes sources illustrates a technique where we draw an image as a rectangle that is filtered by the GL fragment shader. The 1x1 white PNG is a perfectly reasonable template whose color can be replaced by the shader's calculations in my application.
While arbitrary triangles are not available, UV-mapped rectangles (and rotated rectangles) are supported and are suitable for many use cases.
I used the following shader:
uniform sampler2D Texture;
varying vec2 TexCoord;
uniform float margin_h;
uniform float margin_v;

void main()
{
    float q = min((1.0 - TexCoord.s) / margin_h, TexCoord.s / margin_h);
    float r = min((1.0 - TexCoord.t) / margin_v, TexCoord.t / margin_v);
    float p = min(q, r);
    gl_FragColor = vec4(0, 0, 0, p);
}
and this Lua in my node.render():
y = phase * 30 + center.y
shader:use {
    margin_h = 0.03;
    margin_v = 0.2;
}
white:draw(x - 20, y - 20, x + 700, y + 70)
shader:deactivate()
font:write(x, y, "bacon "..(phase), 50, 1, 1, 0, 1)
I need to determine whether a point lies inside a shape. If the shape is a circle, it's easy:
highp vec2 textureCoordinateToUse = vec2(textureCoordinate.x, (textureCoordinate.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
highp float dist = distance(center, textureCoordinateToUse);
textureCoordinateToUse = textureCoordinate;

if (dist < radius) {
    ...
}
But what if my shape is a star, a hexagon, a spiral, etc.? Does anybody know a fast way to do it? Can I use images with alpha channels as shapes? How would I do that?
UPDATE: I have just understood that the best option now is to pass another texture to the shader. How can I do that? Right now the shader has two properties: varying highp vec2 textureCoordinate; and uniform sampler2D inputImageTexture;. I want to pass another texture and check its alpha channel inside the shader code.
UPDATE 2: I have tried to load the shape into the shader (I think). I'm using the GPUImage framework, so I set a sampler2D uniform with my shape and tried to check its alpha channel there. Is that okay? On my iPhone 5s it looks very good, but what about performance?
A shader on its own won't give you anything, because the result of a shader's routine is an image.
With an image-based approach, the problem needs to be reformulated. Let's say you have a grayscale image with rendered shapes, where white and gray pixels define shapes and black pixels define nothing. You must know the center and the bounding circle of each shape. Note that the bounding circles of the shapes must not intersect each other.
Then you can probe a point against the shapes, first by the bounding circles (this probe is necessary to distinguish the shapes, because by peeking at a pixel from the image you can only know whether your point intersects some shape), and second by peeking at the corresponding pixel. If both probes are positive, then your point is inside that shape.
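As a sketch of that two-stage probe in a fragment shader (the uniform names shapeMask, center, and boundingRadius are assumptions for illustration, not from the question):

precision highp float;

uniform sampler2D shapeMask;  // grayscale image: non-black pixels belong to a shape
uniform vec2 center;          // center of the shape being tested
uniform float boundingRadius; // its non-intersecting bounding circle
varying vec2 textureCoordinate;

void main() {
    bool inside = false;

    // probe 1: bounding circle, to know which shape the point could belong to
    if (distance(textureCoordinate, center) < boundingRadius) {
        // probe 2: peek at the mask pixel; black means no shape here
        inside = texture2D(shapeMask, textureCoordinate).r > 0.0;
    }

    gl_FragColor = inside ? vec4(1.0) : vec4(0.0);
}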
If you can have an analytic shape representation, such as a circle, all you need to find is an equation that describes that shape.
If you have a pre-drawn shape, you can pack it into a texture and do that as well. All you need is to treat the object as a rectangle (the whole texture image) and do a rectangle check just as you did for the circle, then fetch the colour of that texture and do a colour check. What to check in the colour is really up to you: it can be black and white, the alpha channel... anything, really.
If you have a complex drawn object, such as a 3D model, you need to get a projection of the model (its silhouette), which can be drawn to a framebuffer object and again used as a texture; or, better yet, try to draw it directly to the scene using an additional buffer such as the stencil buffer, which you can then check for a specific value in the fragment shader.
For an arbitrary polygonal shape:
1. Triangulate your shape (for example, using Delaunay triangulation).
2. Check your point against every triangle; this is trivial (see the sketch after this list).
3. Improve performance by using bounding shapes around original polygonal shapes and space partitioning for triangles.
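A sketch of the per-triangle check from step 2, using the usual same-side (cross product sign) test; the function name and parameter layout are just illustrative:

// true if point p lies inside triangle (a, b, c), regardless of winding order
bool pointInTriangle(vec2 p, vec2 a, vec2 b, vec2 c) {
    float d1 = (p.x - b.x) * (a.y - b.y) - (a.x - b.x) * (p.y - b.y);
    float d2 = (p.x - c.x) * (b.y - c.y) - (b.x - c.x) * (p.y - c.y);
    float d3 = (p.x - a.x) * (c.y - a.y) - (c.x - a.x) * (p.y - a.y);
    bool hasNeg = (d1 < 0.0) || (d2 < 0.0) || (d3 < 0.0);
    bool hasPos = (d1 > 0.0) || (d2 > 0.0) || (d3 > 0.0);
    // inside (or on an edge) when all three cross products share a sign
    return !(hasNeg && hasPos);
}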
I'm trying to achieve the following blending when the texture at one vertex merges with another:
Here's what I currently have:
I've enabled blending and am specifying the blending function as:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I can see that the image drawn in the paper app is made up of a small circle that merges with the same texture before and after it, and has some blending effect on the color and the alpha.
How do I achieve the desired effect?
UPDATE:
What I think is happening is that, in the region where the two textures intersect, the alpha channel is being modified (additively or by some other custom function) while the texture itself is not drawn again there; the rest of the region has the rest of the texture drawn. Like so:
I'm not entirely sure of how to achieve this result, though.
You shouldn't need blending for this (and it won't work the way you want).
I think as long as you define your texture coordinate in the screen space, it should be seamless between two separate circles.
To do this, instead of using a texture coordinate passed through the vertex shader, just use the position of the fragment to sample the texture, plus or minus some scaling:
vec2 texcoord = gl_FragCoord.xy / vec2(xresolution_in_pixels, yresolution_in_pixels);
gl_FragColor = texture2D(papertexture, texcoord);
If you don't have access to GLSL, you could do something instead with the stencil buffer. Just draw all your circles into the stencil buffer, use the combined region as a stencil mask, and then draw a fullscreen quad of your texture. The color will be seamlessly deposited at the union of all the circles.
You can achieve this effect with max blending for the alpha channel, or manually (with blending off) in the shader (OpenGL ES 2.0):
#extension GL_EXT_shader_framebuffer_fetch : require

precision highp float;

uniform sampler2D texture;
uniform vec4 color;
varying vec2 texCoords;

void main() {
    float res_a = gl_LastFragData[0].a;
    float a = texture2D(texture, texCoords).a;
    res_a = max(a, res_a);
    gl_FragColor = vec4(color.rgb * res_a, res_a);
}
Result:
I'm trying to implement the technique described in Compositing Images with Depth.
The idea is to use an existing texture (loaded from an image) as a depth mask, to basically fake 3D.
The problem I face is that glDrawPixels is not available in OpenGL ES. Is there a way to accomplish the same thing on the iPhone?
The depth buffer is more obscured than you think in OpenGL ES; not only is glDrawPixels absent but gl_FragDepth has been removed from GLSL. So you can't write a custom fragment shader to spool values to the depth buffer as you might push colours.
The most obvious solution is to pack your depth information into a texture and to use a custom fragment shader that does a depth comparison between the fragment it generates and one looked up from a texture you supply. Only if the generated fragment is closer is it allowed to proceed. The normal depth buffer will catch other cases of occlusion and — in principle — you could use a framebuffer object to create the depth texture in the first place, giving you a complete on-GPU round trip, though it isn't directly relevant to your problem.
Disadvantages are that drawing will cost you an extra texture unit and textures use integer components.
EDIT: for the purposes of keeping the example simple, suppose you were packing all of your depth information into the red channel of a texture. That'd give you a really low precision depth buffer, but just to keep things clear, you could write a quick fragment shader like:
void main()
{
    // write a value to the depth map
    gl_FragColor = vec4(gl_FragCoord.w, 0.0, 0.0, 1.0);
}
To store depth in the red channel. So you've partially recreated the old depth texture extension — you'll have an image that has a brighter red in pixels that are closer, a darker red in pixels that are further away. I think that in your question, you'd actually load this image from disk.
To then use the texture in a future fragment shader, you'd do something like:
uniform sampler2D depthMap;

void main()
{
    // read a value from the depth map
    // (gl_FragCoord.xy is in pixels; divide by the viewport size if your depth
    // map is addressed with normalized [0, 1] coordinates)
    lowp vec3 colourFromDepthMap = texture2D(depthMap, gl_FragCoord.xy).rgb;

    // discard the current fragment if it is less close than the stored value
    if (colourFromDepthMap.r > gl_FragCoord.w) discard;

    ... set gl_FragColor appropriately otherwise ...
}
EDIT 2: you can see a much smarter mapping from depth to an RGBA value here. To tie in directly with that document, OES_depth_texture definitely isn't supported on the iPad or on the third-generation iPhone. I've not run a complete test elsewhere.
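For reference, a commonly used fract-based packing (this is the generic scheme, not necessarily the exact mapping in the linked article) spreads a depth value in [0, 1) across all four 8-bit channels and recovers it with a dot product:

// pack a depth value in [0, 1) into RGBA, one byte of precision per channel
vec4 packDepth(float depth) {
    const vec4 bitShift = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
    const vec4 bitMask = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
    vec4 comp = fract(depth * bitShift);
    comp -= comp.xxyz * bitMask; // subtract the bits already stored in higher channels
    return comp;
}

// recover the depth value from a packed RGBA sample
float unpackDepth(vec4 comp) {
    const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
    return dot(comp, bitShift);
}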