SceneKit - How to use different blend modes - e.g. additive blending? [duplicate] - ios

I can't see an obvious way to change the blending function (glBlendFunc) for a SceneKit node or geometry - it doesn't seem to be part of the material, and it isn't obvious from the SceneKit documentation how it organises render passes.
Do I need to make a render delegate for the node which just changes the GL blending mode, or do I need to somehow set up different render passes? (It's not obvious from the documentation how I even control things like render passes.)

Will It Blend? - SceneKit
Yes! In iOS 9 and OS X 10.11 (currently in beta), blendMode is an attribute on materials, so you can render any SceneKit content with additive, multiplicative, or other kinds of blending.
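A minimal example of that API (Swift spelling assumed; 'glowTexture' is a placeholder for whatever image or texture you're using):
import SceneKit
let material = SCNMaterial()
material.diffuse.contents = glowTexture   // placeholder contents
material.blendMode = .add                 // other modes include .alpha, .subtract, .multiply, .screen, .replace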
But while you're still supporting earlier OS versions... SceneKit in iOS 8.x and OS X 10.8 through 10.10 doesn't offer API for blend modes.
There are a couple of options you can look at for working around this.
1. Set the GL state yourself
If you call glBlendFunc and friends before SceneKit draws, SceneKit will render using the blend state you've selected. The trick is setting the state at an appropriate time for drawing your blended content and leaving the state as SceneKit expects for un-blended content.
If you set your GL state in renderer:willRenderScene:atTime: and unset it in renderer:didRenderScene:atTime:, you'll apply blending to the entire scene. Probably not what you want. And you can't use a node renderer delegate for only the node you want blended because then SceneKit won't render your node content.
If you can find a good way to wedge those calls in, though, they should work. Try rendering related nodes with a custom program and set your state in handleBindingOfSymbol:usingBlock:, maybe?
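Sketched out, that last idea might look roughly like this (untested; you must supply your own shaders with a custom SCNProgram, and "u_blendHack" is a hypothetical uniform assumed to be declared in them):
import SceneKit
import OpenGLES
let program = SCNProgram()
program.vertexShader = myVertexShaderSource       // placeholder shader sources
program.fragmentShader = myFragmentShaderSource
program.handleBinding(ofSymbol: "u_blendHack") { _, _, _, _ in
    // change the GL blend state just before this program's node is drawn
    glEnable(GLenum(GL_BLEND))
    glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE))   // additive
}
glowMaterial.program = program                    // 'glowMaterial' is the SCNMaterial of the node to blend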
2. Use Programmable Blending (iOS only)
The graphics hardware in iOS devices supports reading the color value of a destination fragment in the shader. You can combine this value with the color you intend to write in any number of ways — for example, you can create Photoshop-style blend modes.
In SceneKit, you can use this with a fragment shader modifier — read from gl_LastFragData and write to _output. The example here uses that to do a simple additive blend.
#pragma transparent
#extension GL_EXT_shader_framebuffer_fetch : require
#pragma body
_output.color = gl_LastFragData[0] + _output.color;

From what I can tell after several hours of experimenting, there is no way to actually set the blend mode used to render a piece of geometry, or to control the overall blend mode used to render a pass using SCNTechnique.
SceneKit appears to have only two blending modes: one where blending is off, used when it considers the material opaque, and a "transparent" blending mode (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) used when it considers a material transparent. This is bad news if you want to render things like glows, because there doesn't seem to be any way to get the (GL_ONE, GL_ONE) blend mode you'd want for rendering light beams or glows.
However, I've found a hack to get around this which doesn't give you proper control over blending, but which works if you're wanting to render glowing things like light beams:
Because SceneKit uses the GL_ONE, GL_ONE_MINUS_SRC_ALPHA blending mode, all you should have to do is render your geometry with an alpha channel of 0. Unfortunately, it's not that simple, because the default SceneKit shader discards fragments with an alpha channel of 0, so nothing will actually get rendered. A quick-and-dirty workaround is to use a diffuse colour map whose alpha channel is 1 (the smallest non-zero value in an 8-bit-per-channel map, where values run from 0 to 255). Because the alpha channel is nearly 0, pretty much all of the background image will show through. This mostly works, but because the alpha isn't quite zero it will still produce noticeable artefacts in bright areas.
So to work around this problem you can instead use a standard texture map with a solid alpha channel, but attach a shader modifier at the "SCNShaderModifierEntryPointFragment" entry point which simply sets the alpha channel of the output colour to zero. This works because fragment shader modifiers run after the zero-alpha culling.
Here's that shader modifier in its entirety:
#pragma transparent
#pragma body
_output.color.a = 0;
note the "#pragma transparent" declaration in the first line - this is necessary to force SceneKit to use its transparent blending mode even when it otherwise wouldn't.
This is not a complete solution, because it's not real control over blending - it's only a useful hack for producing light beam glows etc - and the shading process certainly isn't as optimal as it could be, but it works well for this case.

Related

Fading a 3D object into the background, using D3D9, SH3 & HLSL

I have a simple program that renders a couple of 3D objects, using Direct3D 9 and HLSL. I'm just starting off with HLSL, I have no experience with 3D rendering.
I am able to change the texture & color of the models and fade between two textures without problems, however I was wondering what the best way to simply fade a 3D object (blend it with the background) would be. I would assume that it wouldn't be done as fading between two textures (using lerp), since I want the object faded to the entire background, so there would be many different textures behind it.
I'm using the LPD3DXEFFECT as my effect class, DrawIndexedPrimitive as the drawing function in each pass, and I only have a single pass. I'm also using Shader Model 3, as this is an older project.
The only way that I thought it possible would be to simply get the color of the pixel before you apply any changes, and then do calculations on it with the color of the texture of the model to attain a faded pixel. However, after looking over the internet, it does not appear that it's actually possible to get the color of a pixel before doing anything to it with HLSL.
Is it even possible to do something like this using HLSL? Am I missing something that could assist me here?
Any help is appreciated!
Forgive me if I'm misunderstanding, but it sounds like you're trying to simulate transparency instead of using built-in transparency.
If you're trying to get the color of the pixels behind the object and want to avoid using transparency, I'd start by trying to use the last rendered frame as a texture, then reference that texture in your current shader. There may be some way to do it within the same frame - to force all other rendering to go first, then handle the one object - but I don't know it.
After a long grind, I finally found a very good workaround for my problem, and I will try to explain my understanding of it for anyone else who has a similar issue. Thanks to Alexander Stewart for suggesting that there may be a built-in way to do it.
Method Description
Instead of taking care of the background fade in the HLSL pixel shader, there is another way to do it, using a method called Frame Buffer Alpha Blending (full MS Docs documentation: https://learn.microsoft.com/en-us/windows/win32/direct3d9/frame-buffer-alpha).
The basic idea behind this method is to provide a simple way of blending a given pixel that is to be rendered, with the existing pixel on the screen. There is a formula that is followed: FinalColor = ObjectPixelColor * SourceBlendFactor + BackgroundPixelColor * DestinationBlendFactor, all of these "variables" being groups of 4 float values, in the format (R, G, B, A).
How I Implemented it
Before doing anything with the actual shaders, in my Visual Studio C++ file I have to pass a few flags to my render device (I used LPDIRECT3DDEVICE9 as my device class). I had to set render states for both D3DRS_SRCBLEND and D3DRS_DESTBLEND, which correspond to SourceBlendFactor and DestinationBlendFactor respectively in the formula above. These are the factors that multiply the object and background pixel colours. There are many possible values that can be assigned to D3DRS_SRCBLEND and D3DRS_DESTBLEND (the full list is in the MS Docs link above), but to achieve what I wanted (simply a way to fade an object into the background with an alpha value going from 0 to 1), I figured out the flags should be: SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA); SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);. With those factors the formula reduces to FinalColor = Alpha * ObjectPixelColor + (1 - Alpha) * BackgroundPixelColor.
After setting these flags, before passing through my shaders and rendering, I just needed to set one more flag: SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);. I was also able to toggle this between TRUE and FALSE without changing anything else and without rendering problems (although my project was very simple, so it may cause issues in larger projects). You can then pass any arguments you want, such as the alpha value, to the HLSL shader as a global variable (I did it using SetValue()).
Going back to my HLSL shader: after these changes, taking the colour float4 from the tex2D() call in my pixel shader and giving it an alpha value between 0 and 1 yielded the correct fade, provided there aren't other issues (another issue I had, but hadn't realised at the time, was that my transparent object was actually rendering before the background, so I can only recommend checking the rendering order when working on rendering projects).
I'm sure there could have probably been a better way of implementing this with the latest DirectX, but my compiler only supports Shader Model 3 and lower.

Render an SCNGeometry as a wireframe

I'm using SceneKit on iOS and I have a geometry I want to render as a wireframe. So basically I want to draw only the lines, so no textures.
I figured out that I could use the shaderModifiers property of the used SCNMaterial to accomplish this. Example of a shader modifier:
material.shaderModifiers = [
SCNShaderModifierEntryPointFragment: "_output.color.rgb = vec3(1.0) - _output.color.rgb;"
]
This example apparently simply inverts the output colors. I know nothing about this 'GLSL' language I have to use for the shader fragment.
Can anybody tell me what code I should use as the shader fragment to only draw near the edges, to make the geometry look like a wireframe?
Or maybe there is a whole other approach to render a geometry as a wireframe. I would love to hear it.
Try setting the material fillMode to .lines (iOS 11+ and macOS 10.13+):
sphereNode.geometry?.firstMaterial?.fillMode = .lines
Now it is possible (at least in Cocoa) with:
gameView.debugOptions.insert(SCNDebugOptions.showWireframe)
or you can do it interactively by enabling the statistics overlay with:
gameView.showsStatistics = true
(gameView is an instance of SCNView)
This is not (quite) an answer, because this a question without an easy answer.
Doing wireframe rendering entirely in shader code is a lot more difficult than it seems like it should be, especially on mobile where you don't have a geometry shader. The problem is that the vertex shader (and subsequently the fragment shader) just doesn't have the information needed to know where polygon edges are.
I know nothing about this 'GLSL' language I have to use for the shader fragment.
If you really want to tackle this problem, you'll need to learn some more about GLSL (the OpenGL Shading Language). There are loads of books and tutorials out there for that.
Once you've got some GLSL under your belt, take a look at some of the questions (like this one pulled from the Related sidebar) and other stuff people have written about the problem. (Note that when you're looking for mobile-specific limitations, OpenGL ES has the same limitations as WebGL on the desktop.)
With SceneKit, you have the additional wrinkle that you probably don't have a barycentric-coordinates vertex attribute (aka SCNGeometrySource) for the geometry you're working with, and you probably don't want to do the hard work of generating one. In OS X, you can use an SCNProgram with a geometryShader to add barycentric coordinates before the vertex/fragment shaders run — but then you have to do your own shading (i.e. you can't piggyback on the SceneKit shading like you can with shader modifiers). And that isn't available in iOS — the hardware there doesn't do geometry shaders. You might be able to fake it using texture coordinates if those happen to be lined up right in your geometry.
It might be easier to just draw the object using lines — try making a new SCNGeometry from the sources and elements of your original (solid) geometry, but when recreating the SCNGeometryElement, use SCNPrimitiveTypeLine.
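A rough sketch of that last approach (untested; it simply reinterprets each element's existing index buffer as line pairs, which approximates a wireframe rather than extracting true triangle edges):
import SceneKit

func wireframeCopy(of geometry: SCNGeometry) -> SCNGeometry {
    let lineElements = geometry.elements.map { element -> SCNGeometryElement in
        // Assumes the original element is made of triangles (3 indices each).
        let indexCount = element.primitiveCount * 3
        return SCNGeometryElement(data: element.data,
                                  primitiveType: .line,
                                  primitiveCount: indexCount / 2,
                                  bytesPerIndex: element.bytesPerIndex)
    }
    return SCNGeometry(sources: geometry.sources, elements: lineElements)
}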

GLPaint based OpenGL ES Blending Issue

I'm working on an app based on Apple's GLPaint sample code. I've changed the clear color to transparent black and have added an opacity slider; however, when I mix colors together with a low opacity setting they don't mix the way I'm expecting. They seem to mix the way light mixes, not the way paint mixes. Here is an example of what I mean:
The "Desired Result" was obtained by rendering each color separately and using glReadPixels to merge it with the previously rendered image (i.e. using Apple's default blending).
However, mixing each frame with the previous one is too time-consuming to be done on the fly. How can I get OpenGL to blend the colors properly? I've been researching online for quite a while and have yet to find a solution that works for me. Please let me know if you need any other info to help!
From the looks of it, with your current setup, there is no easy solution. For what you are trying to do, you need custom shaders, which is not possible using just GLKit.
Luckily you can mix GLKit and OpenGL ES.
My recommendation would be to:
Stop using GLKit for everything except setting up your rendering surface with GLKView (which is tedious without GLKit).
Use an OpenGL program with custom shaders to draw to a texture that is backing an FBO.
Use a second program with custom shaders that does post-processing (after drawing the above texture to a quad, which is then rendered to the screen).
A good starting point would be to load up the OpenGL template that comes with Xcode and start modifying it. Be warned: if you don't understand shaders, the code there will make little sense. It draws two cubes, one using GLKit and one without, using custom shaders.
References to start learning:
Intro to shaders
Rendering to a Texture
Shader Toy - This should help you experiment with your post processing frag shader.
GLEssentials example - This shows how to render to texture using OpenGL (a bit outdated).
Finally, if you are really serious about using OpenGL ES to its full potential, you really should invest the time to read through the OpenGL ES 2.0 Programming Guide. Even though it is 6 years old, it is still relevant and the only book I've found that explains all the concepts correctly.
Your "Current Result" is additive color, which is how OpenGL is supposed to work. To work like mixing paint would be substractive color. You don't have control over this with OpenGL ES 1.1, but you could write a custom fragment shader for OpenGL ES 2.0 that would do substractive color. If you are blending textures images from iOS, you need to know if the image data has been premultiplied by alpha or not, in order to do blending. OpenGL ES expects the non-premultiplied format.
You need to put this code in the function that is called on each color change, and set the blend function each time:
CGFloat red, green, blue;
// set red, green, blue to the desired color combination
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(red * kBrushOpacity,
          green * kBrushOpacity,
          blue * kBrushOpacity,
          kBrushOpacity);
To do more with blend functions, use this link.
Please let me know whether it works - it works for me.

Replicating UIView drawRect in OpenGL ES

My iOS application draws into a bitmap (same size as my view) using Core Graphics. I want to push updated regions of the bitmap to the screen. (I've used the standard UIView drawRect method but I have some good reasons to switch to OpenGL).
I just want to replicate the same behavior as UIView/CALayer drawRect but in an OpenGL view. Essentially I would like to update dirty rectangles on my OpenGL view. Nothing more.
So far I've been able to create an OpenGL ES 1.1 view and push my entire bitmap on screen using a single quad (texture on a vertex array) for each update of my bitmap. Of course, this is pretty inefficient since I only need to refresh the dirty rectangle, not the whole view.
What would be the most efficient way to do that in OpenGL ES? Should I use a lattice of quads and update the texture of the quads that intersect with my dirty rectangle? (If I were to use that method, should I use VBO?) Is there a better way to do that?
FYI (just in case), I won't need rotation but will need to scale the entire OpenGL view.
UPDATE:
This method indeed works. However, there's a bug in iOS 5.x on Retina display devices that produces an artifact when using single buffering. The problem has been fixed in iOS 6. I don't yet have a workaround.
You could simply update part of the texture using TexSubImage, and redraw your standard full-screen quad, but with the scissor rect being set (glScissor) to the "dirty" part. GL will then not draw any fragments outside this rect.
For this to work, you must of course use single buffering.
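A rough sketch of that approach (Swift calling the GL C API; 'pixels' is assumed to hold the dirty region's bytes, tightly packed row by row, and the flipped-y difference between UIKit and GL coordinates is ignored here):
import CoreGraphics
import OpenGLES

func redrawDirtyRect(_ dirty: CGRect, texture: GLuint, pixels: UnsafeRawPointer) {
    // Upload only the changed rectangle of the backing texture.
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0,
                    GLint(dirty.origin.x), GLint(dirty.origin.y),
                    GLsizei(dirty.width), GLsizei(dirty.height),
                    GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)

    // Clip the redraw of the full-screen quad to the same rectangle.
    glEnable(GLenum(GL_SCISSOR_TEST))
    glScissor(GLint(dirty.origin.x), GLint(dirty.origin.y),
              GLsizei(dirty.width), GLsizei(dirty.height))
    // ... draw the usual full-screen textured quad here ...
    glDisable(GLenum(GL_SCISSOR_TEST))
}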

iOS: Smooth button Glow effect by blending between images

I am creating a custom button that needs to be able to glow to a varying degree.
How would I use these pictures to make a button that 'glows' the diamond when it is pressed, and have this glow gradually fade back to inert state?
I want to churn out several different colours of diamond as well... I am hoping to generate all different coloured diamonds from the same stock images presented here.
I would like to get my head around the basic methods available, in enough detail that I can see each one through and make a decision which path to take...
My tangled efforts so far... (I will delete all of this, or move it into possibly several answers, as a solution unfolds...)
I can see 3 potential solution paths:
GL
it looks as though GL has everything it takes to get complete fine-grained control over the process, although the functions exposed by core graphics come tantalisingly close, and using those would save several hundred lines of code spread over a bunch of source files, which seems a bit ridiculous for such a basic task.
core graphics, and core animation to accomplish the blending
documentation goes on to say
Anything underneath the unpainted samples, such as the current fill color or other drawing, shows through.
so I can chroma-key mask the left image, setting {0,0,0} i.e. black as the key.
this at least secures a transparent background; now I have to work on making it yellow instead of grey.
so maybe I could have started instead with setting a yellow back colour for my image context, then use some CGContextSetBlendMode(...) to imprint the diamond on the yellow, THEN use chroma-key masking to get a transparent background
ok, this covers at least getting the basic unlit image on-screen
now I could overlay the sparkly image, using some blend mode, maybe I could keep it in its current greyscale state, and that would just boost the colours of the original
only problem with this is that it is a lot of heavy real-time blending
so maybe I could pre-calculate every image in the animation... this is looking increasingly mucky...
Cocos2D
if this allows me to set the blend mode to additive blending then I could just composite the glowing image over the original image with an appropriate Alpha setting.
After digging through a lot of documentation, the optimal solution seems to be to use core graphics functions to get the source images into a single 2-component GL texture, and then use GL to blend between them.
I will need to pass a uniform value glow_factor into the shader
The obvious solution might seem to simply use
r,g,b = in_r,g,b * { (1 - glow_factor) * inertPixel + glow_factor * shinyPixel }
(where inertPixel is the appropriate pixel of the inert diamond etc)...
it looks like I would also do well to manufacture my own sparkles and add them over the top; a gem should sparkle white irrespective of its characteristic colour.
After having looked at this problem a little more, I can see several solutions
Solution A -- store the transition from glow=0 to glow=1 as 60 frames in memory, then load the appropriate frame into a GL texture every time it is required.
this has an obvious benefit that a graphic designer could construct the entire sequence and I could load it in as a bunch of PNG files.
another advantage is that these frames wouldn't need to be played in sequence... the appropriate frame can be chosen on-the-fly
however, it has the potential drawback of sending a lot of data from RAM to VRAM
this can be optimised by using glTexSubImage2D; several frames can be sent simultaneously and then unpacked from within GL... in fact maybe the entire sequence. if this is so, then it would make sense to use PVRTC texture compression.
iOS: playing a frame-by-frame greyscale animation in a custom colour
Solution B -- load glow=0 and glow=1 images as GL textures, and manually write shader code that takes in the glow factor as a uniform and performs the blend
this has the advantage that it is close to the wire and can be tweaked in all sorts of ways, and it is going to be very efficient. The disadvantage is that it is a big extra slice of code to maintain.
Solution C -- set the blend mode (glBlendFunc) to perform additive blending.
then draw the glow=0 image, setting e.g. alpha=0.2 on each vertex.
then draw the glow=1 image, setting e.g. alpha=0.8 on each vertex.
this has the advantage that it can be achieved with a more generic code structure -- i.e. a very general 'draw textured quad / sprite' class.
disadvantage is that without some sort of wrapper it is a bit messy... in my game I have a couple of dozen diamonds -- at any one time maybe 2 or 3 are likely to be glowing. So on the first pass I would render everything (just setting the alpha appropriately for anything that is glowing), and then on the second pass I would draw the glowing sprites again with the appropriate alpha.
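A quick sketch of Solution C with raw GL calls (Swift; drawSprite(_:alpha:) is an assumed helper from that generic sprite class, and glowFactor is the current 0...1 glow value):
import OpenGLES
glEnable(GLenum(GL_BLEND))
glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE))   // additive blending
drawSprite(inertTexture, alpha: 1 - glowFactor)     // the glow=0 image
drawSprite(shinyTexture, alpha: glowFactor)         // the glow=1 image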
it is worth noting that if I pursue solution A, this would involve creating some sort of real-time movie player object, which could be a very useful reusable code component.
