Distortion/Water in WebGL

I'm relatively new to WebGL, and OpenGL too for that matter, but in recent days I've filled up most of my time writing a little game for it. However, when I wanted to implement something like heat waves, or any sort of distortion, I was left stuck.
Now, I can make a texture ripple using the fragment shader, but I feel like I'm missing something when it comes to distorting the content behind an object. Is there any way to grab the color of a pixel that's already been rendered within the fragment shader?
I've tried rendering to a texture and then using that texture on the object, but it appears that if you choose to render your scene to a texture, you cannot also render it to the screen. And beyond that, if you want to render to a texture, that texture must be a power of two (which many screen resolutions do not fit).
Any help would be appreciated.

You're going to have to render to a texture and draw that texture onto the screen while distorting it. Also, there's no requirement that framebuffer objects must be a power-of-two size in OpenGL ES 2.0 (the graphics API WebGL is based on). However, non-power-of-two textures can't use mipmapping and must use CLAMP_TO_EDGE wrapping.
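For concreteness, here is a minimal sketch of that setup, assuming an existing WebGLRenderingContext `gl`; the helper name and filtering choices are placeholders rather than anything from the question:

```typescript
// Sketch: a screen-sized (non-power-of-two) render target in WebGL.
// Assumes an existing WebGLRenderingContext `gl`; names are placeholders.
function createRenderTarget(gl: WebGLRenderingContext, width: number, height: number) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // NPOT textures in WebGL 1 / ES 2.0 must clamp and cannot use mipmaps.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

  const framebuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { texture, framebuffer };
}
```

You would render the scene with the framebuffer bound, unbind it, and then draw a fullscreen quad with a distortion shader that samples the texture. A hypothetical heat-wave fragment shader (all names and constants are illustrative) might look like this:

```typescript
// Offset the texture lookup with a sine wave to fake a heat shimmer.
const distortFrag = `
  precision mediump float;
  uniform sampler2D u_scene; // the scene rendered into the texture above
  uniform float u_time;
  varying vec2 v_uv;
  void main() {
    vec2 offset = vec2(sin(v_uv.y * 40.0 + u_time) * 0.005, 0.0);
    gl_FragColor = texture2D(u_scene, v_uv + offset);
  }
`;
```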

I believe you can modify individual canvas pixels directly. That might be a good way to ripple a small area, but it might not be GPU-accelerated.
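If the 2D canvas API is what's meant here, a minimal sketch of that CPU-side approach could look like this; the region size and sine parameters are arbitrary, and note that getImageData is only available on a 2D context, not on a WebGL canvas:

```typescript
// CPU-side ripple on a small region of a 2D canvas (not GPU-accelerated).
function rippleRegion(ctx: CanvasRenderingContext2D, x0: number, y0: number, size: number) {
  const src = ctx.getImageData(x0, y0, size, size);
  const out = ctx.createImageData(size, size);
  for (let y = 0; y < size; y++) {
    // Shift each row horizontally by a small sine offset to fake a ripple.
    const shift = Math.round(Math.sin(y * 0.4) * 3);
    for (let x = 0; x < size; x++) {
      const sx = Math.min(size - 1, Math.max(0, x + shift));
      const s = (y * size + sx) * 4;
      const d = (y * size + x) * 4;
      for (let c = 0; c < 4; c++) out.data[d + c] = src.data[s + c];
    }
  }
  ctx.putImageData(out, x0, y0);
}
```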

Related

What is the fastest way to render subsets of a single texture onto a WebGL canvas?

If you have a single power-of-two texture (say 2048x2048) and you want to blit scaled and translated subsets of it (say 64x92 tiles, scaled down) as quickly as possible onto another texture (used as a buffer so it can be cached when not dirty), then draw that texture onto a WebGL canvas, with no other requirements: what is the fastest strategy?
Is it first loading the source texture, binding an empty texture to a framebuffer, rendering the source with drawElementsInstancedANGLE to the framebuffer, then unbinding the framebuffer and rendering to the canvas?
I don't know much about WebGL and I'm trying to write a non-stateful version of https://github.com/kutuluk/js13k-2d (that just uses draw() calls instead of sprites that maintain state, since I would have millions of sprites). Before I get too far into the weeds, I'm hoping for some feedback.
There is no single fastest way in general. The fastest approach differs by GPU and by the specifics of what you're drawing.
Are you drawing lots of things the same size?
Are the parts of the texture atlas the same size?
Will you be rotating or scaling each instance?
Can their movement be based on time alone?
Will their drawing order change?
Do the textures have transparency?
Is that transparency all-or-nothing (alpha of 0 or 1), or does it take values in between?
I'm sure there's tons of other considerations. For every consideration I might choose a different approach.
In general your idea of using drawElementsInstancedANGLE seems fine, but without knowing exactly what you're trying to do and on which devices, it's hard to say.
Here are some tests of drawing lots of stuff.
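As a reference point for the drawElementsInstancedANGLE route mentioned above, a rough sketch of one instanced draw call might look like this; all of the buffers, attribute locations and counts are hypothetical arguments rather than anything from the linked project:

```typescript
// One instanced draw, assuming WebGL 1 plus the ANGLE_instanced_arrays extension.
function drawTilesInstanced(
  gl: WebGLRenderingContext,
  instanceBuffer: WebGLBuffer,   // per-instance data: [x, y, u, v] per tile (4 floats = 16 bytes)
  quadIndexBuffer: WebGLBuffer,  // 6 indices describing one quad
  offsetLoc: number,             // attribute location for the tile position
  uvOffsetLoc: number,           // attribute location for the atlas UV offset
  tileCount: number,
) {
  const ext = gl.getExtension('ANGLE_instanced_arrays');
  if (!ext) throw new Error('ANGLE_instanced_arrays not supported');

  // Per-instance attributes advance once per instance rather than once per vertex.
  gl.bindBuffer(gl.ARRAY_BUFFER, instanceBuffer);
  gl.enableVertexAttribArray(offsetLoc);
  gl.vertexAttribPointer(offsetLoc, 2, gl.FLOAT, false, 16, 0);
  ext.vertexAttribDivisorANGLE(offsetLoc, 1);
  gl.enableVertexAttribArray(uvOffsetLoc);
  gl.vertexAttribPointer(uvOffsetLoc, 2, gl.FLOAT, false, 16, 8);
  ext.vertexAttribDivisorANGLE(uvOffsetLoc, 1);

  // One call draws every tile quad.
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, quadIndexBuffer);
  ext.drawElementsInstancedANGLE(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0, tileCount);
}
```

Whether this beats simply batching all quads into one big vertex buffer depends on the considerations listed above, so measure on your target devices.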

iOS: how to replace a color from the camera preview with a texture

I would like to do something like this:
Have the camera on and tap on the screen to get the color of that area, and then replace that color with a texture. I have done something similar by replacing the color on the screen with another color (though that is still not working right), but replacing it with a texture is more complex, I think.
So please, can somebody give me a hint on how I can do this?
Also, how to create the texture.
Thank you,
Alex
Basically, you will want to do this with a boolean test in the fragment shader.
You'll need to feed two textures to the shader: one is the camera image, the other is the replacement image. Then you need a function that determines whether the per-fragment color from the camera texture is within a certain color range (which you choose) and, depending on that, either shows the camera texture or the replacement texture.
Your question is a bit vague, so you should try to break it down into smaller problems. The tricky part, if you haven't done this before, is getting the OpenGL boilerplate code right.
You need to know:
how to write, compile and use basic GLSL shaders
how to load images into OpenGL textures and use them in your shaders (search for sampler2D)
A good first step is to do the following:
figure out how to show a texture as a flat fullscreen image using 2D geometry. You'll need to render two triangles and map the texture's coordinates (UV) onto the triangle vertices.
If you follow this tutorial you'll be able to do what you want:
http://www.raywenderlich.com/70208/opengl-es-pixel-shaders-tutorial
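For concreteness, the color test described above might boil down to a fragment shader like this (GLSL ES, shown here as a string constant in the same style as the WebGL snippets above; the same source compiles under OpenGL ES 2.0 on iOS, and the uniform names and threshold are made up):

```typescript
// Hypothetical color-replacement fragment shader (GLSL ES).
const replaceColorFrag = `
  precision mediump float;
  uniform sampler2D u_camera;      // live camera frame
  uniform sampler2D u_replacement; // replacement texture
  uniform vec3 u_targetColor;      // color picked by tapping the screen
  uniform float u_threshold;       // how close a pixel must be to count as a match
  varying vec2 v_uv;
  void main() {
    vec4 cam = texture2D(u_camera, v_uv);
    // If the camera pixel is close enough to the picked color, show the
    // replacement texture instead; otherwise show the camera pixel.
    if (distance(cam.rgb, u_targetColor) < u_threshold) {
      gl_FragColor = texture2D(u_replacement, v_uv);
    } else {
      gl_FragColor = cam;
    }
  }
`;
```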

Loading texture in segments

I'm working on an OpenGL app that uses one particularly large texture (2250x1000). Unfortunately, OpenGL ES 2.0 doesn't support textures larger than 2048x2048. When I try to draw my texture, it appears black. I need a way to load and draw the texture in two segments (left, right). I've seen a few questions that touch on libpng, but I really just need a straightforward solution for drawing large textures in OpenGL ES.
First of all, the maximum texture size depends on the device; I believe the iPad 3 supports 4096x4096, but set that aside. On most devices there is no way to push all that data onto a single texture as-is. First you should ask yourself whether you really need such a large texture: will it really make a difference if you resample it down to fit within 2048? If downsampling is not acceptable, you will need to break the texture up at some point. You could cut it in half by width and append one of the halves to the bottom of the other, resulting in a 1125x2000 texture, or simply create two or more textures and upload a portion of the image to each. In either case you might have trouble with texture coordinates, but this all depends heavily on what you are trying to do and what is on that texture (a single image or parts of a sophisticated model; color mapping or data you cannot interpolate; whether you create it at load time or modify it as you go...). Some more information could help us address your situation more specifically.
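If you happen to be working in WebGL (the question reads like iOS/OpenGL ES, but the idea is identical), the "two or more textures" option can be sketched roughly like this; the function name and the use of a scratch 2D canvas to crop the halves are my own assumptions:

```typescript
// Crop each half of a large image onto a scratch 2D canvas and upload the
// halves as two separate textures.
function uploadHalf(gl: WebGLRenderingContext, image: HTMLImageElement, left: boolean): WebGLTexture {
  const half = Math.ceil(image.width / 2); // 1125 for a 2250-wide image
  const canvas = document.createElement('canvas');
  canvas.width = half;
  canvas.height = image.height;
  const ctx = canvas.getContext('2d')!;
  // Copy either the left or the right half of the source image.
  ctx.drawImage(image, left ? 0 : half, 0, half, image.height, 0, 0, half, image.height);

  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Both halves are non-power-of-two, so clamp and skip mipmaps.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
  return texture;
}
// Then draw the geometry in two pieces, mapping 0..1 UVs onto each half:
// const leftTex = uploadHalf(gl, image, true);
// const rightTex = uploadHalf(gl, image, false);
```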

Rendering to a Texture vs Rendering to CAEAGLLayer-backed view?

I have a very loose grasp of the OpenGL environment, so this is something I'm trying to understand. Could someone explain in lay terms what the difference is between these two styles of rendering? Primarily, I would like to understand what it means to render to a texture, and when would it be appropriate to do that?
If you render to a texture then the image you've rendered is processed in such a manner as to make it immediately usable as a texture, which in practice usually means with negligible or zero effort from the CPU. You normally render to texture as part of a more complicated rendering pipeline.
Shadow buffers are the most obvious example; they're one way of rendering shadows. You position the camera where the light source would be and render the scene from there so that the final depth information ends up in a texture. You don't show that to the user. For each pixel you do intend to show to the user, you work out its distance from the light and where it would appear in the depth map, then check whether it is closer to or further from the light than whatever was left in the depth map. Hence, with some effort expended on precision issues, you check whether each pixel is 'visible' from the light and hence whether it is lit.
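To make that depth comparison concrete, the core of a shadow-test fragment shader might look something like this (GLSL ES, shown as a string constant in the style of the earlier snippets; the names and the bias value are illustrative only):

```typescript
// Hypothetical shadow-test fragment shader (GLSL ES). It assumes the pass from the
// light's point of view wrote depth into the red channel of u_shadowMap, and that
// the vertex shader supplies the fragment's position in light space.
const shadowFrag = `
  precision mediump float;
  uniform sampler2D u_shadowMap;
  varying vec4 v_lightSpacePos;
  void main() {
    // Project into the shadow map's [0, 1] texture space.
    vec3 p = v_lightSpacePos.xyz / v_lightSpacePos.w * 0.5 + 0.5;
    float storedDepth = texture2D(u_shadowMap, p.xy).r;
    // The small bias works around the precision issues mentioned above.
    float lit = (p.z - 0.005 > storedDepth) ? 0.3 : 1.0;
    gl_FragColor = vec4(vec3(lit), 1.0);
  }
`;
```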
Rendering to a CAEAGLLayer-backed view is a way of producing OpenGL output that UIKit knows how to composite to the screen. So it's the means by which iOS allows you to present your final OpenGL output to the user, within the hierarchy of a normal Cocoa Touch display.

Example code for Resizing an image using DirectX

I know it is possible, and a lot faster than using GDI+. However, I haven't found any good example of using DirectX to resize an image and save it to disk. I have implemented this over and over in GDI+; that's not difficult. However, GDI+ does not use any hardware acceleration, and I was hoping to get better performance by tapping into the graphics card.
You can load the image as a texture, texture-map it onto a quad and draw that quad at any size on the screen. That will do the scaling. Afterwards you can grab the pixel data from the screen, store it in a file or process it further.
It's easy. The basic texturing DirectX examples that come with the SDK can be adjusted to do just this.
However, it is slow. Not the rendering itself, but the transfer of pixel data from the screen to a memory buffer.
IMHO it would be much simpler and faster to just write a little code that resizes an image using bilinear scaling from one buffer to another.
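For reference, bilinear resampling between two raw RGBA buffers only takes a few lines. This sketch is in TypeScript rather than C++ purely to illustrate the arithmetic, and the buffer layout (width * height * 4 bytes, row-major) is an assumption:

```typescript
// CPU-side bilinear resize between raw RGBA buffers.
function resizeBilinear(
  src: Uint8ClampedArray, srcW: number, srcH: number,
  dstW: number, dstH: number,
): Uint8ClampedArray {
  const dst = new Uint8ClampedArray(dstW * dstH * 4);
  for (let y = 0; y < dstH; y++) {
    const fy = Math.max(0, (y + 0.5) * srcH / dstH - 0.5);
    const y0 = Math.floor(fy);
    const y1 = Math.min(srcH - 1, y0 + 1);
    const wy = fy - y0;
    for (let x = 0; x < dstW; x++) {
      const fx = Math.max(0, (x + 0.5) * srcW / dstW - 0.5);
      const x0 = Math.floor(fx);
      const x1 = Math.min(srcW - 1, x0 + 1);
      const wx = fx - x0;
      for (let c = 0; c < 4; c++) {
        // Blend the four surrounding source pixels per channel.
        const top = src[(y0 * srcW + x0) * 4 + c] * (1 - wx) + src[(y0 * srcW + x1) * 4 + c] * wx;
        const bot = src[(y1 * srcW + x0) * 4 + c] * (1 - wx) + src[(y1 * srcW + x1) * 4 + c] * wx;
        dst[(y * dstW + x) * 4 + c] = Math.round(top * (1 - wy) + bot * wy);
      }
    }
  }
  return dst;
}
```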
Do you really need to use DirectX? GDI+ does the job well for resizing images. In DirectX you don't really need to resize images, as most likely you'll be displaying them as textures. Since textures can only be applied to 3D objects (triangles/polygons/meshes), the size of the 3D object and the viewport determines the actual displayed image size. If you need to scale your texture within the 3D object, just adjust the texture coordinates or the texture matrix.
To manipulate the texture, you can use alpha blending, masking and all sorts of texture manipulation techniques, if that's what you're looking for. To manipulate individual pixels like GDI+ does, I still think GDI+ is the way to go; DirectX was never meant for image manipulation.
