Texture and Lines not blending in WebGL

I'm writing a 2D program with WebGL. I draw a picture over the canvas as a texture, and on top of it I also draw lines with WebGL. The problem is that the two use different shader programs, and whenever I draw the lines the output gets messed up. Can anyone help?
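For reference, a draw loop that alternates between two programs usually looks roughly like the sketch below; the names (texProgram, lineProgram, quadBuffer, lineBuffer) are placeholders, not from the question. The most common cause of one pass "messing up" the other is state that does not follow the program object, such as bound buffers, attribute pointers and enabled attribute arrays, which have to be re-set after every useProgram call:

    // Minimal sketch: draw a textured quad, then lines on top, with two programs.
    function drawFrame(gl, texProgram, lineProgram, quadBuffer, lineBuffer, texture, lineVertexCount) {
      gl.clear(gl.COLOR_BUFFER_BIT);

      // Pass 1: textured background quad (interleaved x, y, u, v floats)
      gl.useProgram(texProgram);
      gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
      var aPos = gl.getAttribLocation(texProgram, 'a_position');
      var aUV  = gl.getAttribLocation(texProgram, 'a_texcoord');
      gl.enableVertexAttribArray(aPos);
      gl.enableVertexAttribArray(aUV);
      gl.vertexAttribPointer(aPos, 2, gl.FLOAT, false, 16, 0);
      gl.vertexAttribPointer(aUV,  2, gl.FLOAT, false, 16, 8);
      gl.activeTexture(gl.TEXTURE0);
      gl.bindTexture(gl.TEXTURE_2D, texture);
      gl.uniform1i(gl.getUniformLocation(texProgram, 'u_image'), 0);
      gl.drawArrays(gl.TRIANGLES, 0, 6);

      // Pass 2: lines on top, with their own program, buffer and attributes.
      // Disable the arrays from pass 1 so no stale state leaks into this pass.
      gl.disableVertexAttribArray(aPos);
      gl.disableVertexAttribArray(aUV);
      gl.useProgram(lineProgram);
      gl.bindBuffer(gl.ARRAY_BUFFER, lineBuffer);
      var aLinePos = gl.getAttribLocation(lineProgram, 'a_position');
      gl.enableVertexAttribArray(aLinePos);
      gl.vertexAttribPointer(aLinePos, 2, gl.FLOAT, false, 0, 0);
      gl.drawArrays(gl.LINES, 0, lineVertexCount);
    }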

Related

WebGL full screen blending slowdown

I've tried to make an "overlay" effect in a 3D scene. After drawing things to the buffer, I tried to draw a full screen quad with blending enabled and the depth test disabled. On some Android devices this seems to have caused a slowdown.
I found this link:
The particularly slow point is the point where the drawing of a pixel needs to check what the color behind it was.
So instead of drawing a single full screen quad, I divided it into tiles and rendered them with multiple draw calls, which seems to have brought some gain.
What may be happening here, and how can this be profiled in WebGL, i.e. how does one come to the conclusion in the quote above?
I guess that to profile it, you simply have to test with several blending functions, with and without blending enabled, etc.
Blending is not a trivial operation, and indeed we can assume that blending functions which need to read the pixel already in the buffer can induce a performance loss, like all "read" operations in OpenGL, because they can stall the pipeline. I guess most modern desktop GPUs have some specific design to optimize this, but on mobile phones it may be more problematic.
Anyway, if you are about to draw a full screen quad, why not render your quad directly from the two source textures and blend them in the fragment shader with a custom equation? That way you don't need blending at all and you avoid any back-buffer read problem.
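A rough GLSL sketch of what that single-pass blend might look like; the uniform names and the blend equation here are just examples, not a specific recommendation:

    precision mediump float;
    varying vec2 v_uv;
    uniform sampler2D u_scene;    // what would otherwise sit in the back buffer
    uniform sampler2D u_overlay;  // the overlay image
    uniform float u_opacity;      // overlay strength
    void main() {
      vec4 scene   = texture2D(u_scene, v_uv);
      vec4 overlay = texture2D(u_overlay, v_uv);
      // classic "source over" compositing done by hand, no glBlendFunc needed
      gl_FragColor = vec4(mix(scene.rgb, overlay.rgb, overlay.a * u_opacity), 1.0);
    }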

iOS: how to replace a color from the camera preview with a texture

I would like to do something like this:
Have the camera on and tap on the screen to get the color of that area, then replace that color with a texture. I have done something similar by replacing the color on screen with another color (though that is still not working right), but replacing it with a texture is more complex, I think.
So please, can somebody give me a hint on how to do this, and on how to create the texture?
Thank you,
Alex
Basically, you will want to do this with a boolean test in the fragment shader.
You'll need to feed two textures to the shader: one is the camera image, the other is the replacement image. Then you need a function that determines whether the per-fragment color from the camera texture is within a certain color range (which you choose), and depending on that shows either the camera texture or the other texture.
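In GLSL, that test might look roughly like the sketch below; the uniform names, the RGB distance test and the threshold are illustrative, not from any particular tutorial:

    precision mediump float;
    varying vec2 v_uv;
    uniform sampler2D u_camera;       // live camera frame
    uniform sampler2D u_replacement;  // replacement texture
    uniform vec3 u_keyColor;          // color picked by the tap
    uniform float u_threshold;        // how close a pixel must be to the key color
    void main() {
      vec4 cam = texture2D(u_camera, v_uv);
      // simple Euclidean distance in RGB; a distance in YUV or HSV is usually more robust
      float d = distance(cam.rgb, u_keyColor);
      if (d < u_threshold) {
        gl_FragColor = texture2D(u_replacement, v_uv);
      } else {
        gl_FragColor = cam;
      }
    }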
More generally, your question is a bit vague; you should try to break it down into smaller problems. The tricky part, if you haven't done this before, is getting the OpenGL boilerplate code right.
You need to know:
how to write, compile and use basic GLSL shaders
how to load images into OpenGL textures and use them in your shaders (search for sampler2D)
A good first step is to do the following:
figure out how to show a texture as a flat fullscreen image using 2D geometry. You'll need to render two triangles and map the texture's coordinates (UV) onto the triangle points.
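A minimal sketch of that first step, written as WebGL-style JavaScript for consistency with the rest of this page (the GL ES 2.0 calls on iOS are direct equivalents); aPos and aUV are assumed to be the attribute locations of an already compiled shader program:

    function drawFullScreenQuad(gl, aPos, aUV) {
      // Two triangles covering clip space; each vertex is x, y, u, v
      var quad = new Float32Array([
        -1, -1, 0, 0,
         1, -1, 1, 0,
        -1,  1, 0, 1,
        -1,  1, 0, 1,
         1, -1, 1, 0,
         1,  1, 1, 1
      ]);
      var buf = gl.createBuffer();
      gl.bindBuffer(gl.ARRAY_BUFFER, buf);
      gl.bufferData(gl.ARRAY_BUFFER, quad, gl.STATIC_DRAW);
      gl.enableVertexAttribArray(aPos);
      gl.enableVertexAttribArray(aUV);
      gl.vertexAttribPointer(aPos, 2, gl.FLOAT, false, 16, 0);  // position
      gl.vertexAttribPointer(aUV,  2, gl.FLOAT, false, 16, 8);  // texture coords
      gl.drawArrays(gl.TRIANGLES, 0, 6);
    }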
If you follow this tutorial you'll be able to do the thing you want:
http://www.raywenderlich.com/70208/opengl-es-pixel-shaders-tutorial

Drawing a background image in iOS with OpenGL has poor quality

When using OpenGL to draw background images, you typically have to use a texture and draw a quad. Whenever I've done that, I don't get a pixel-perfect representation of my image; things like fine text get blended badly. In the past I've relied on canvas-type drawing to make that nicer. Is there a way to get the pixel-perfect drawing of canvas-style bit blits with OpenGL ES?
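For what it's worth, the usual first things to check are that the texture is exactly the size of the framebuffer in pixels (on retina screens that means points multiplied by the content scale factor), that the quad is drawn at exactly 1:1 scale with no fractional offset, and that filtering is not resampling the image. A rough sketch of the texture setup, shown with WebGL-style calls for consistency with the rest of this page (the GL ES 2.0 equivalents are glTexImage2D and glTexParameteri), with placeholder names:

    function setupBackgroundTexture(gl, image) {
      var tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
      // NEAREST instead of LINEAR: each screen pixel takes exactly one texel,
      // which keeps fine text crisp as long as the quad is drawn at native size
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      return tex;
    }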

Optimise OpenGL ES 2.0 2D drawing using dirty rectangles

Is it possible to optimise OpenGL ES 2.0 drawing by using dirty rectangles?
In my case, I have a 2D app that needs to draw a background texture (full screen on iPad), followed by the contents of several VBOs on each frame. The problem is that these VBOs can potentially contain millions of vertices, taking anywhere up to a couple of seconds to draw everything to the display. However, only a small fraction of the display would actually be updated each frame.
Is this optimisation possible, and how (or perhaps more appropriately, where) would this be implemented? Would some kind of clipping plane need to be passed into the vertex shader?
If you set an area with glViewport, clipping is adjusted accordingly. This however happens after the vertex shader stage, just before rasterization. Since the GL cannot know the result of your own vertex program, it cannot discard any vertex before running that program; only after that does it clip. How efficiently it does so depends on the actual GPU.
Thus, for full performance, you have to sort and split your objects into smaller (e.g. rectangularly bounded) tiles and test them against the field of view yourself.
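One way to combine that per-tile culling with a dirty rectangle is the scissor test, which restricts all drawing (including clears) to a window-space rectangle; it is not what the answer above describes and it does not save vertex shader work by itself, so it only complements the CPU-side culling. A rough sketch with placeholder names, shown as WebGL-style calls (the GL ES 2.0 equivalents are glEnable(GL_SCISSOR_TEST) and glScissor):

    function redrawDirtyRect(gl, rect, tiles, drawTile) {
      gl.enable(gl.SCISSOR_TEST);
      gl.scissor(rect.x, rect.y, rect.width, rect.height);  // pixels outside stay untouched
      gl.clear(gl.COLOR_BUFFER_BIT);
      for (var i = 0; i < tiles.length; i++) {
        // CPU-side culling: skip tiles whose bounding box misses the dirty rect
        if (intersects(tiles[i].bounds, rect)) {
          drawTile(tiles[i]);
        }
      }
      gl.disable(gl.SCISSOR_TEST);
    }

    function intersects(a, b) {
      return a.x < b.x + b.width && b.x < a.x + a.width &&
             a.y < b.y + b.height && b.y < a.y + a.height;
    }

Note that redrawing only part of the screen also assumes the framebuffer contents are preserved between frames (retained backing on iOS), which is not the default.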

Distortion/Water in WebGL

I'm relatively new to WebGL, and to OpenGL too for that matter, but in recent days I've filled up most of my time writing a little game for it. However, when I wanted to implement something like heat waves, or any sort of distortion, I was left stuck.
Now, I can make a texture ripple using the fragment shader, but I feel like I'm missing something when it comes to distorting the content behind an object. Is there any way to grab the color of a pixel that's already been rendered within the fragment shader?
I've tried rendering to a texture and then using that texture on the object, but it appears that if you render your scene to a texture, you cannot also render it to the screen. And beyond that, if you want to render to a texture, that texture must be a power of two (which many screen resolutions do not fit).
Any help would be appreciated.
You're going to have to render to a texture and draw that texture onto the screen while distorting it. Also, there's no requirement that framebuffer objects must be of a power-of-two size in OpenGL ES 2.0 (which is the graphics API WebGL uses). But non-power-of-two textures can't have mipmapping or texture-wrapping.
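A minimal sketch of that approach, assuming the rest of the usual WebGL boilerplate (program setup, full-screen quad) is already in place; the names and the sine-based offset in the shader are illustrative:

    function createRenderTarget(gl, width, height) {
      var tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
      // NPOT-safe settings: no mipmaps, clamp instead of repeat
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

      var fbo = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
      gl.bindFramebuffer(gl.FRAMEBUFFER, null);
      return { framebuffer: fbo, texture: tex };
    }

    // Per frame: render the scene into the target, then draw it to the canvas
    // through a fragment shader that perturbs the texture coordinates, e.g.:
    //
    //   precision mediump float;
    //   varying vec2 v_uv;
    //   uniform sampler2D u_scene;
    //   uniform float u_time;
    //   void main() {
    //     vec2 offset = vec2(sin(v_uv.y * 40.0 + u_time) * 0.01, 0.0);
    //     gl_FragColor = texture2D(u_scene, v_uv + offset);
    //   }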
I believe you can modify individual canvas pixels directly. That might be a good way to ripple a small area, but it might not be GPU-accelerated.
