Correct Normal Map format to use for SceneKit content - DirectX or OpenGL? - iOS

I wonder which normal map format is the correct one to use within SceneKit content for iOS, as referenced here: DirectX vs. OpenGL normal maps.
OpenGL or DirectX? Or does it not matter?

I had to figure it out by testing the OpenGL and the DirectX normal map variants side by side on planes, which gave me the following result:
If you have the choice between an OpenGL and a DirectX normal map, you should choose OpenGL.
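If your asset only ships in the DirectX convention, you can convert it to the OpenGL convention by inverting the green (Y) channel. A minimal sketch using Core Image (the function name is just a placeholder):

import CoreImage

// Convert a DirectX-style normal map to the OpenGL convention by
// inverting the green (Y) channel: g' = 1 - g.
func convertDirectXNormalMapToOpenGL(_ input: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: "CIColorMatrix") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    // Negate the green channel...
    filter.setValue(CIVector(x: 0, y: -1, z: 0, w: 0), forKey: "inputGVector")
    // ...and add 1 back so the channel stays in [0, 1].
    filter.setValue(CIVector(x: 0, y: 1, z: 0, w: 0), forKey: "inputBiasVector")
    return filter.outputImage
}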

Related

How to make custom camera lens effects in iOS

I am not an iOS developer, but my client wants me to make an iPhone app like
https://itunes.apple.com/us/app/trippy-booth-amazing-filterswarps/id448037560?mt=8
I have seen some custom libraries like
https://github.com/BradLarson/GPUImage
but could not find any camera lens customization example.
Any kind of suggestion would be helpful.
Thanks in advance
You can do this with a custom shader written in OpenGL (or Metal, which is iOS-only); you can then apply your shader to do interesting stuff like the image in the link above.
I suggest you take a look at how to use the OpenGL framework in iOS.
Basically the flow would look like this:
Use whatever framework to capture an image (even in real time).
Use some framework to modify the image. (The magic happens here.)
Use another framework to present the image.
You should learn how to obtain an OpenGL context, draw an image onto it, write a custom shader, apply the shader, and get the output in order to "distort the image". Honestly, the hardest part is creating the "effect" you have in mind by describing it with a formula.
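If you don't want to drop down to raw OpenGL right away, Core Image already ships distortion filters that can serve as the "modify the image" step above. A minimal sketch using the built-in CIBumpDistortion filter (the parameter values are only illustrative):

import CoreImage
import CoreGraphics

// Apply a simple "lens bulge" to an image using Core Image's
// built-in CIBumpDistortion filter.
func bulge(_ input: CIImage, center: CGPoint, radius: CGFloat, scale: CGFloat) -> CIImage? {
    guard let filter = CIFilter(name: "CIBumpDistortion") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(CIVector(x: center.x, y: center.y), forKey: kCIInputCenterKey)
    filter.setValue(radius, forKey: kCIInputRadiusKey)   // size of the bulged area, in pixels
    filter.setValue(scale, forKey: kCIInputScaleKey)     // positive bulges out, negative pinches in
    return filter.outputImage
}

// Example: bulge the middle of a 1080x1920 frame.
// let distorted = bulge(frame, center: CGPoint(x: 540, y: 960), radius: 300, scale: 0.5)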
This is quite similar to Photoshop's mesh warp (Edit > Transform > Warp). Basically you treat your image as a texture and render it onto a mesh (a Bézier patch): a grid that has been distorted into Bézier curves, while you leave the texture coordinates as if it were still a regular grid. This has the effect of "pulling" the image towards the nodes of the patch. You can use OpenGL (GL_PATCHES) for this; I imagine Metal or SceneKit might work as well.
I can't tell from the screenshots, but it's possible that the examples you reference are actually placing their mesh based on facial recognition. Core Image has basic facial recognition that gives you mouth and eye positions, which you could use to control some of the nodes in your mesh.
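As a rough sketch of that last idea, Core Image's face detector can hand you eye and mouth positions; how you map them onto warp-mesh nodes is up to your own implementation (the function below is illustrative only):

import CoreImage
import CoreGraphics

// Detect the first face in an image and return eye/mouth positions
// (in the image's coordinate space) that could drive warp-mesh nodes.
func facialLandmarks(in image: CIImage) -> (leftEye: CGPoint, rightEye: CGPoint, mouth: CGPoint)? {
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    guard let face = detector?.features(in: image).first as? CIFaceFeature,
          face.hasLeftEyePosition, face.hasRightEyePosition, face.hasMouthPosition
    else { return nil }
    return (face.leftEyePosition, face.rightEyePosition, face.mouthPosition)
}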

How to make an "Azimuthal Equidistant Projection" in iOS

I'm thinking about creating an iOS app that transforms a 3D sphere into a 2D image using the azimuthal equidistant projection. Here is a nice sample of this type of projection.
Azimuthal Map, Anywhere
I'm new to 3D programming, so I would like to ask for advice on which framework or tool is a good fit in this case. These are the options that I know of:
Unity (+ OpenGL?)
SceneKit (+ CoreGraphics?)
Processing + Processing.js (inside WebView)
Please tell me if there are other solutions. I'd also be glad if you could point me to sample code or an open-source library for this projection.
Thanks in advance!
This can easily be done using shaders and does not require external libraries.
See http://rogerallen.github.io/webgl/2014/01/27/azimuthal-equidistant-projection/
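For reference, the core of such a shader is just the forward azimuthal equidistant projection formula. Here is a sketch of that math in Swift (it ports directly to GLSL), assuming angles in radians and a projection centered at (lat0, lon0):

import Foundation

// Forward azimuthal equidistant projection.
// Maps a point (lat, lon) on the unit sphere to plane coordinates (x, y),
// with the projection centered at (lat0, lon0). All angles are in radians.
func azimuthalEquidistant(lat: Double, lon: Double,
                          lat0: Double, lon0: Double) -> (x: Double, y: Double) {
    let dLon = lon - lon0
    // Angular distance c between the center and the point (spherical law of cosines).
    let cosC = sin(lat0) * sin(lat) + cos(lat0) * cos(lat) * cos(dLon)
    let c = acos(min(max(cosC, -1), 1))
    // Scale factor; at the center (c == 0) the limit of c/sin(c) is 1.
    let k = c < 1e-9 ? 1.0 : c / sin(c)
    let x = k * cos(lat) * sin(dLon)
    let y = k * (cos(lat0) * sin(lat) - sin(lat0) * cos(lat) * cos(dLon))
    return (x, y)
}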
I would highly recommend using C++ 3D libraries such as GXmap and VES/VTK.
GXmap is a small virtual globe and map program. Apart from showing an ordinary globe view of the Earth, it can also generate azimuthal equidistant projection maps suitable for amateur radio use.
VES is the VTK OpenGL ES Rendering Toolkit, a C++ library for mobile devices using OpenGL ES 2.0.

What is the subset of OpenGL ES 3.0 that SceneKit supports?

In the documentation of SCNView it is stated that:
SceneKit supports OpenGL ES 3.0, but some features are disabled when rendering in an OpenGL ES 3.0 context
I could not find anywhere which features are disabled. I wanted to use my own shader with SceneKit (assigning an SCNProgram to my material) and tried to use a 3D texture, but I got the following error:
SceneKit: error, C3DBaseTypeFromString: unknown type name 'sampler3D'
So I'm guessing that 3D textures are among the disabled features, but I could not find confirmation anywhere. Do I have to give up on SceneKit and do all my rendering with OpenGL manually just to use 3D textures?
Bonus question: why would Apple support only a subset of OpenGL ES 3.0 in SceneKit when iOS has full support for it?
Some features of SceneKit don't work in an ES3 context. You should still be able to use all ES3 features in your OpenGL code.
This looks like an error in SceneKit detecting the uniform declaration for use with its higher-level APIs... so you won't be able to, say, bind an SCNMaterialProperty to that uniform with setValue:forKey:. However, you should still be able to use the shader program -- you'll just have to bind the texture yourself with glActiveTexture/glBindTexture instead (inside a block you set up with handleBindingOfSymbol:usingBlock:).
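A hedged sketch of what that could look like in Swift, assuming you created the 3D texture yourself with glTexImage3D; the uniform name volumeData and the texture ID are placeholders:

import SceneKit
import OpenGLES

// Bind a 3D texture to a custom SCNProgram uniform manually,
// since SceneKit's key-value binding doesn't recognize sampler3D.
// `textureID` is assumed to be a texture you created yourself with glTexImage3D.
func bindVolumeTexture(to material: SCNMaterial, textureID: GLuint) {
    material.handleBinding(ofSymbol: "volumeData") { _, _, _, _ in
        glActiveTexture(GLenum(GL_TEXTURE0))
        glBindTexture(GLenum(GL_TEXTURE_3D), textureID)
    }
}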

Developing Shaders With SpriteKit

I've read that one of the downfalls of SpriteKit is that you're unable to develop your own shaders when using it.
However, I read a post here on SO that suggest otherwise:
How to apply full-screen SKEffectNode for post-processing in SpriteKit
Can you develop your own shaders if you decide to use SpriteKit?
Thanks
It is not supported in iOS 7, but iOS 8 will support custom shaders. For more information, view the pre-release documentation of SKShader.
An SKShader object holds a custom OpenGL ES fragment shader. Shader objects are used to customize the drawing behavior of many different kinds of nodes in Sprite Kit.
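For illustration, here is a minimal sketch of the iOS 8 SKShader route; the shader source and asset name are only examples:

import SpriteKit

// A tiny fragment shader that desaturates a sprite.
// SpriteKit provides u_texture and v_tex_coord to custom shaders.
let desaturateSource = """
void main() {
    vec4 color = texture2D(u_texture, v_tex_coord);
    float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(gray), color.a);
}
"""

let sprite = SKSpriteNode(imageNamed: "example")   // "example" is a placeholder asset
sprite.shader = SKShader(source: desaturateSource)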
Sprite Kit does not provide an interface for using custom OpenGL shaders. The SKEffectNode class lets you use Core Image filters to post-process parts of a Sprite Kit scene, though. Core Image provides a number of built-in filters that might do some of what you're after, and on OS X you can create custom filter kernels using a language similar to GLSL.

OpenGL ES 2.0 plane morph/distortion effect GPUImage iOS

I was playing a bit with the awesome GPUImage framework and was able to reproduce some "convex"-like effects with fragment shaders.
However, I'm wondering if it's possible to get some more complex plane curving in 3D using GPUImage or any other OpenGL render-to-texture approach.
The effect I'm trying to reach looks like this one - is there any chance I can get something similar using a depth buffer and a vertex shader, or do I need to develop a more sophisticated fragment shader that emulates the Z coordinate?
This is what I get now using only a fragment shader and some periodic bulging.
Thanks
Another thought: maybe it's possible to prototype a curved surface in some 3D modeling app and somehow map the texture onto it?
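That last idea doesn't necessarily require a separate modeling app: if SceneKit is an option, you can texture a subdivided plane with the image and displace its vertices in a geometry shader modifier. A rough sketch (the node setup and the bulge formula are only illustrative):

import SceneKit
import UIKit

// Texture a plane with the image and curve it in 3D by displacing
// vertices in a geometry shader modifier (GLSL-style snippet).
func curvedImageNode(image: UIImage) -> SCNNode {
    let plane = SCNPlane(width: 4, height: 3)
    plane.widthSegmentCount = 100     // enough vertices for a smooth curve
    plane.heightSegmentCount = 100
    plane.firstMaterial?.diffuse.contents = image
    plane.firstMaterial?.shaderModifiers = [
        // Push vertices along Z as a function of X to bend the plane.
        .geometry: "_geometry.position.z += 0.3 * sin(_geometry.position.x);"
    ]
    return SCNNode(geometry: plane)
}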
