OpenGL ES 2.0 plane morph/distortion effect GPUImage iOS

I was playing a bit with the awesome GPUImage framework and was able to reproduce some "convex"-like effects with fragment shaders.
However, I'm wondering if it's possible to get some more complex plane curving in 3D using GPUImage or any other OpenGL render-to-texture approach.
The effect I'm trying to achieve looks like this one. Is there any chance I can get something similar using the depth buffer and a vertex shader, or do I just need to develop a more sophisticated fragment shader that emulates the Z coordinate?
This is what I get now using only a fragment shader and some periodic bulging.
Thanks
Another thought: maybe it's possible to prototype a curved surface in some 3D modeling app and somehow map the texture to it?
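For illustration, the vertex-shader route might look roughly like the sketch below. It assumes a tessellated quad mesh and GPUImage-style attribute names (position, inputTextureCoordinate); the modelViewProjectionMatrix and time uniforms are hypothetical and would have to be supplied by the host app.

```glsl
// Minimal sketch: curve a tessellated plane in 3D by displacing Z in the
// vertex shader. Assumes GPUImage-style attribute names; the matrix and
// time uniforms are hypothetical and must come from the host app.
attribute vec4 position;
attribute vec4 inputTextureCoordinate;

uniform mat4 modelViewProjectionMatrix; // assumed, not part of GPUImage's stock shaders
uniform float time;                     // animation driver

varying vec2 textureCoordinate;

void main()
{
    vec4 p = position;
    // Bend the plane: displace Z as a function of X (a simple cylinder-like curve).
    p.z = 0.3 * sin(p.x * 3.14159 + time);
    gl_Position = modelViewProjectionMatrix * p;
    textureCoordinate = inputTextureCoordinate.xy;
}
```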

Related

Correct Normal Map format to use for SceneKit content - DirectX or OpenGL?

I wonder which format for normal maps is the correct one to use within SceneKit content for iOS, as referenced here: DirectX vs. OpenGL normal maps.
OpenGL or DirectX? Or does it not matter?
I had to figure it out by testing the OpenGL and DirectX normal map types side by side on planes. This gave me the following results:
This means that if you have the choice between an OpenGL or a DirectX normal map, you are better off choosing OpenGL.
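If you are stuck with a DirectX-style map, the conversion is just a flip of the green (Y) channel, since the two conventions only differ in the sign of Y. A minimal fragment-shader sketch; the sampler and varying names are illustrative:

```glsl
// Convert a DirectX-style normal map to the OpenGL convention by flipping
// the green (Y) channel. Sketch only; names are illustrative.
precision mediump float;

uniform sampler2D normalMap;
varying vec2 texCoord;

void main()
{
    vec4 n = texture2D(normalMap, texCoord);
    n.g = 1.0 - n.g;   // DirectX Y-down -> OpenGL Y-up
    gl_FragColor = n;
}
```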

How to make custom camera lens effects in iOS

I am not an iOS developer, but my client wants me to make an iPhone app like
https://itunes.apple.com/us/app/trippy-booth-amazing-filterswarps/id448037560?mt=8
I have seen some custom library like
https://github.com/BradLarson/GPUImage
but I do not find any camera lens customization example.
Any kind of suggestion would be helpful.
Thanks in advance
You can do it with a custom shader written in OpenGL (or Metal, which is iOS-only), and then apply your shader to do interesting stuff like the image in the above link.
I suggest you take a look at how to use the OpenGL framework on iOS.
Basically the flow would look like this:
Use whatever framework to capture an image (even in real time).
Use some framework to modify the image. (The magic occurs here.)
Use something else to present the image.
You should learn how to obtain an OpenGL context, draw an image on it, write a custom shader, apply the shader, and get the output in order to "distort the image". Honestly, the hardest part is creating that "effect" in your mind by describing it with a formula.
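To give a taste of what such a shader looks like, here is a minimal GLSL ES sketch of a bulge distortion, similar in spirit to GPUImage's bulge filter; all uniform names are illustrative:

```glsl
// Minimal bulge-distortion sketch. All uniform names are illustrative.
precision highp float;

uniform sampler2D inputImageTexture;
varying vec2 textureCoordinate;

uniform vec2 center;      // bulge center in texture coordinates, e.g. (0.5, 0.5)
uniform float radius;     // bulge radius
uniform float scale;      // positive = bulge out, negative = pinch in

void main()
{
    vec2 coord = textureCoordinate;
    float dist = distance(center, coord);
    if (dist < radius)
    {
        // Pull sample coordinates toward/away from the center inside the radius.
        float percent = 1.0 - ((radius - dist) / radius) * scale;
        coord = center + (coord - center) * percent;
    }
    gl_FragColor = texture2D(inputImageTexture, coord);
}
```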
This is quite similar to the Photoshop mesh warp (Edit->Transform->Warp). Basically you treat your image as a texture and then render it onto a mesh (a Bezier patch): a grid that has been distorted into Bezier curves, while you leave the texture coordinates as if it were still a regular grid. This has the effect of "pulling" the image towards the nodes of the patch. You can use OpenGL (GL_PATCHES) for this; I imagine Metal or SceneKit might work as well.
I can't tell from the screenshots, but it's possible that the examples you reference are actually placing their mesh based on facial recognition. Core Image has basic facial recognition that gives you mouth and eye positions, which you could use to control some of the nodes in your mesh.
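To make the mesh idea concrete, here is a minimal vertex-shader sketch that pulls grid vertices toward a single control point while leaving the texture coordinates untouched. It assumes positions are already in clip space; all names are illustrative:

```glsl
// Vertex shader: distort the grid positions, keep texture coordinates as-is,
// so the image appears "pulled" toward the warp node. Sketch only; assumes
// positions are already in clip space, and all names are illustrative.
attribute vec4 position;
attribute vec2 texCoordIn;

uniform vec2 controlPoint;   // a warp node, in the same space as position.xy
uniform float strength;      // how strongly vertices are pulled toward it

varying vec2 texCoordOut;

void main()
{
    vec2 p = position.xy;
    vec2 toNode = controlPoint - p;
    // Pull vertices toward the node, with a Gaussian-like falloff over distance.
    p += toNode * strength * exp(-dot(toNode, toNode) * 4.0);
    gl_Position = vec4(p, 0.0, 1.0);
    texCoordOut = texCoordIn; // unchanged, as if the grid were still regular
}
```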

gl_LastFragData for blurring the entire scene

So I have a cocos2d iOS app which uses OpenGL ES 2.0. I've got a fragment shader where I'm currently just grabbing vec4 lastFragColor = gl_LastFragData[0]; and manipulating it.
But what I'm really wondering is if/how I can access the neighboring fragments of the current one, so that I can do convolution-type effects like a Gaussian blur.
The answer is no. You can't grab neighbouring fragments using this GL extension.
Render to an FBO and use that as an input texture to render Gaussian blurs instead.
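A minimal sketch of the horizontal pass of a separable 9-tap Gaussian blur over the FBO's color texture; run it a second time with a vertical texel offset for the full blur. Uniform names are illustrative:

```glsl
// One horizontal pass of a separable 9-tap Gaussian blur over an FBO texture.
// Sketch only; run again with a vertical texelOffset for the full blur.
precision mediump float;

uniform sampler2D sceneTexture;   // the FBO color attachment
uniform vec2 texelOffset;         // e.g. (1.0 / textureWidth, 0.0)
varying vec2 textureCoordinate;

void main()
{
    float weights[5];
    weights[0] = 0.2270270270;
    weights[1] = 0.1945945946;
    weights[2] = 0.1216216216;
    weights[3] = 0.0540540541;
    weights[4] = 0.0162162162;

    vec4 color = texture2D(sceneTexture, textureCoordinate) * weights[0];
    for (int i = 1; i < 5; i++)
    {
        vec2 offset = texelOffset * float(i);
        color += texture2D(sceneTexture, textureCoordinate + offset) * weights[i];
        color += texture2D(sceneTexture, textureCoordinate - offset) * weights[i];
    }
    gl_FragColor = color;
}
```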

iOS: Render a purely pixel-based fractal effect using OpenGL ES?

I am new to Objective-C and OpenGL, so please be patient.
I'm building an app that is mainly based on a full-screen 2D pixel buffer that is filled and animated using mathematical formulas (similar to fractals), mostly using sin, cos, atan, etc.
I have already optimized sin and cos by using lookup tables, which gave quite an FPS boost. However, while the framerate is fine in the Simulator on a Mac Mini (around 30 fps), I get a totally ridiculous 5 fps on an actual device (a non-retina iPad Mini).
As I see no further ways to optimize the pixel loops, would it be possible to implement the effects using, say, an OpenGL shader, and then just draw a fullscreen quad with a texture on it?
As I said, the effects are really simple and just iterate over all pixels in a nested x/y loop, using basic math and trig functions. The way I blit to the screen is already optimal for the device while staying outside OpenGL, and gives like a million FPS if I leave out the actual math.
Thanks!
If you implement this as an OpenGL shader you will get a ridiculously massive increase in performance. The shader runs on the graphics chip, which is designed to be massively parallel and is optimized for exactly this kind of math.
You don't make a texture so much as define a shader for the surface. Your shader code would be invoked for every rendered pixel on that surface.
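For instance, a plasma-style pattern of the sin/cos/atan flavor described in the question takes only a few lines of GLSL. A sketch; the resolution and time uniforms are assumed to be fed by the host app:

```glsl
// A plasma-style animated pattern computed per pixel, using the sin/cos/atan
// math the question describes. Sketch only; uniforms are assumed to be
// supplied by the host app.
precision highp float;

uniform vec2 resolution;  // viewport size in pixels
uniform float time;       // seconds since start

void main()
{
    vec2 p = gl_FragCoord.xy / resolution;   // normalized pixel position
    float v = sin(p.x * 10.0 + time)
            + cos(p.y * 10.0 + time)
            + sin(length(p - 0.5) * 20.0 - time * 2.0)
            + atan(p.y - 0.5, p.x - 0.5);
    // Map the scalar field to a cycling RGB palette.
    gl_FragColor = vec4(0.5 + 0.5 * sin(v * 3.14159 + vec3(0.0, 2.094, 4.188)), 1.0);
}
```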
I would start by seeing if you can hack together a shader here: http://glsl.heroku.com/
Once you have something working, you can research how to get an OpenGL context working with your shader on iOS, and you shouldn't have to change the actual shader much to get it working.

Is it worth it to use HLSL shaders for 2D drawing

I was wondering if it is worth it to use shaders to draw a 2D texture in XNA. I am asking because with OpenGL it is much faster if you use GLSL.
Everything on a modern GPU is drawn using a shader.
For the old immediate-style rendering (i.e., glBegin/glVertex), that will get converted to something approximating buffers and shaders somewhere in the driver. This is why using GLSL is "faster": because it's closer to the metal, you're not going through a conversion layer.
For a modern API, like XNA, everything is already built around "buffers and shaders".
In XNA, SpriteBatch provides its own shader. The source code for the shader is available here. The shader itself is very simple: The vertex shader is a single matrix multiplication to transform the vertices to the correct raster locations. The pixel shader simply samples from your sprite's texture.
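The HLSL source isn't reproduced here, but a rough GLSL equivalent of that shader pair would look like the sketch below (not XNA's actual code; all names are illustrative):

```glsl
// GLSL equivalent sketch of SpriteBatch's shader pair (not XNA's actual HLSL).

// --- Vertex shader (compiled separately): a single matrix multiplication.
attribute vec4 position;
attribute vec2 texCoordIn;
attribute vec4 colorIn;

uniform mat4 matrixTransform;  // the batch's combined transform

varying vec2 texCoord;
varying vec4 tint;

void main()
{
    gl_Position = matrixTransform * position;
    texCoord = texCoordIn;
    tint = colorIn;
}

// --- Fragment shader (compiled separately): sample the sprite texture and tint it.
uniform sampler2D spriteTexture;
varying vec2 texCoord;
varying vec4 tint;

void main()
{
    gl_FragColor = texture2D(spriteTexture, texCoord) * tint;
}
```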
You can't really do much to make SpriteBatch's shader faster - there's almost nothing to it. There are some things you can do to make the buffering behaviour faster in specific circumstances (for example: if your sprites don't change between frames) - but this is kind of advanced. If you're experiencing performance issues with SpriteBatch, be sure you're using it properly in the first place. For what it does, SpriteBatch is already extremely well optimised.
For more info on optimisation, see this answer.
If you want to pass a custom shader into SpriteBatch (e.g., for a special effect), use this overload of Begin and pass in an appropriate Effect.
