Multisampling on iOS: can't get a depth texture?

I have set up rendering to a framebuffer with color and depth textures on iOS, and it all works fine. I then tried to add multisampling via the APPLE extensions (I used the code from Rendering to texture on iOS OpenGL ES—works on simulator, but not on device), but apparently there's a catch.
After resolving the multisampled buffer into my original framebuffer (which I use for post-processing effects), only the color buffer is resolved. glResolveMultisampleFramebufferAPPLE() apparently does not touch my depth texture at all, so if I use multisampling I have to give up my depth-texture effects. Is there no way to get the depth texture if I use multisampling? I know how multisampling works; I just want a depth texture alongside the color texture.

The spec for APPLE_framebuffer_multisample says that glResolveMultisampleFramebufferAPPLE resolves only the color attachment. This means you will have to write depth into a color renderbuffer in an additional render pass and resolve that to get the depth information.
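A hedged sketch of that workaround (not from the extension spec; depthPassFBO, depthPassColorRB, resolveDepthFBO and the dimensions are placeholder names): draw the scene again into a second multisampled FBO whose fragment shader packs gl_FragCoord.z into gl_FragColor, then resolve that FBO like any other color buffer.

// Second multisampled FBO: depth values will land in its color attachment.
glBindFramebuffer(GL_FRAMEBUFFER, depthPassFBO);
glBindRenderbuffer(GL_RENDERBUFFER, depthPassColorRB);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, depthPassColorRB);
// (also attach a multisampled depth renderbuffer so depth testing still works for this pass)

// ... draw the scene with a fragment shader that writes (packed) gl_FragCoord.z to gl_FragColor ...

glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, depthPassFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolveDepthFBO); // its color attachment is a regular texture
glResolveMultisampleFramebufferAPPLE(); // the "depth" now arrives as a resolved color texture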

Ok, so after a couple more days of looking into this matter, I got my answer.
So the APPLE extension exists (and is different from the EXT one) precisely because it only resolves color. The GL ES 3.0 standard (probably coming to iOS 7.1) and desktop GL say that to resolve color or depth you use glBlitFramebuffer, which copies and resolves attachments. I tried it with desktop GL 4.2 and blitting the depth buffer works.
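For reference, a minimal sketch of that blit path (not from the original post; msaaFBO, resolveFBO and the dimensions are placeholders), valid on desktop GL and GL ES 3.0:

// Resolve color and depth from a multisampled FBO into a single-sampled one.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST); // depth blits require GL_NEAREST; whether the
                                                    // depth actually resolves is implementation-dependent (see the edit below)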
I also went back to my DirectX 11 renderer and tried the same thing with a GPU that supports DirectX 11.1-level features, and I was surprised that leading-edge hardware can't do that in a single resolve call. ID3D11DeviceContext::ResolveSubresource throws errors when you try to resolve textures bound as depth. The workaround is either to have a special shader pass that does the depth resolve, which implies using Texture2DMS (a DirectX 10_1-level feature), or to ping-pong the texture through two separate textures that are not bound as depth (which implies one resolve call and two full depth-texture copies).
The GL counterpart of that approach is to use multisample textures created with glTexImage2DMultisample instead of multisampled renderbuffers (core in desktop OpenGL 3.2) and then to use a sampler2DMS in the fragment shader for the actual texel fetch.
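A short desktop-GL sketch of that setup (not from the original post; msDepthTex, width and height are placeholders):

// 4x multisampled depth texture attached to an FBO instead of a renderbuffer.
GLuint msDepthTex;
glGenTextures(1, &msDepthTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msDepthTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_DEPTH_COMPONENT24,
                        width, height, GL_TRUE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D_MULTISAMPLE, msDepthTex, 0);
// In the shader it is declared as a sampler2DMS and read per sample with
// texelFetch(msDepth, ivec2(gl_FragCoord.xy), sampleIndex), with no filtering involved.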
EDIT: Whether or not glBlitFramebuffer resolves depth seems to be implementation-dependent. On desktop GL (HD 7850) it works; on GL ES 3 it still doesn't resolve it.

Related

24bit depth buffer with WebGL 1.0

I've found that there's no equivalent of GL_DEPTH_COMPONENT24_OES renderbuffer storage in WebGL. But there's the WEBGL_depth_texture extension, which I can use to make a 24/8 or 32-bit depth texture. Are those the equivalent of DEPTH_COMPONENT24/DEPTH_COMPONENT32 renderbuffers? Would they work the same as the screen buffer?
Is this the reason why they didn't make 24/32-bit depth renderbuffers for WebGL?
There's a way to allocate a 24-bit depth buffer in WebGL 1: DEPTH_STENCIL. Here's a test: http://jsbin.com/tadikibidi/edit?js,output. That isn't actually guaranteed, though: while the spec doesn't explicitly specify the sizes of the components of a DEPTH_STENCIL buffer, the conformance tests suggest that the only requirement is >= 16 bits for DEPTH and >= 8 bits for STENCIL. It seems that the only way (at least without extensions) is to allocate a DEPTH_STENCIL attachment and check the DEPTH_BITS parameter of the framebuffer it's attached to.
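Written out against the GL ES 2.0 C API that WebGL 1 wraps (a hedged sketch; in WebGL the same calls live on the context object, and DEPTH_STENCIL is usually backed by GL_DEPTH24_STENCIL8_OES from OES_packed_depth_stencil; rb, fbo, width and height are placeholders):

GLuint rb, fbo;
GLint depthBits = 0;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rb);
glBindRenderbuffer(GL_RENDERBUFFER, rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, width, height);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rb);
glGetIntegerv(GL_DEPTH_BITS, &depthBits); // how many depth bits did we really get?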
AFAIK, 32-bit depth buffers are harder to get on mobile devices, which is why WebGL doesn't have them (only with extensions).
What the spec also doesn't say is that DEPTH_STENCIL is just an abstraction. WebGL implementations, especially older ones, can implement DEPTH_STENCIL by bundling one depth buffer and one stencil buffer under the hood. So a DEPTH_STENCIL renderbuffer can be anything: 16/8 (if 24 is not supported), 24/8 (with the extension), and so on. The conformance tests only imply that the depth buffer is at least 16 bits.
They came up with this abstraction and didn't even give a simple anatomy of what it is. Just this line pops out of nowhere:
WebGL adds DEPTH_STENCIL renderbuffer
The mailing list archive.

sRGB on iOS OpenGL ES 2.0

From the very few related topics I could find, I gather that the exponentiation step needed for proper lighting computations perhaps has to be done in the final fragment shader on an iOS app.
I have been profiling with the latest and greatest Xcode 5 OpenGL debugger, and the exponentiation of the fragment color accounts for a significant amount of computation. It was the line that took the longest in the entire shader (the rest of the time went to the various norm calls needed for point lights).
glEnable(GL_FRAMEBUFFER_SRGB); unfortunately does not work as GL_FRAMEBUFFER_SRGB is not declared.
Of course the actual enum I should be using for GL ES may be different.
According to Apple:
The following extensions are supported for the SGX 543 and 554
processors only:
EXT_color_buffer_half_float
EXT_occlusion_query_boolean
EXT_pvrtc_sRGB
EXT_shadow_samplers
EXT_sRGB
EXT_texture_rg
OES_texture_half_float_linear
Well, that's nice, the newest device that does not have a 543 or 554 is the iPhone 4.
From the extension's text file it looks like I can pass SRGB8_ALPHA8_EXT as the internalformat parameter of RenderbufferStorage, but nothing is said about how to get the normal final framebuffer to apply sRGB for us for free.
Now the sRGB correction seems like the missing step to get the correct colors. What I've been doing in my app to deal with the horrible "underexposed" colors is manually applying gamma correction like this in the fragment shader:
mediump float gammaf = 1.0/1.8; // this line declared outside of `main()`
// it specifies a constant 1.8 gamma
mediump vec4 gamma = vec4(gammaf, gammaf, gammaf, 1.0);
gl_FragColor = pow(color, gamma); // last line of `main()`
Now I recognize that the typical render pipeline involves one or more renders to a texture followed by a fullscreen-quad draw, which will afford me the opportunity to make use of an SRGB8_ALPHA8_EXT renderbuffer, but what am I supposed to do without one? Am I SOL?
If that is the case, the pow call is sucking up so much time that it almost seems like I could squeeze some more performance out of it by building a 1D texture to sample and use as a gamma lookup table. That texture could then be used to tweak the output color intensities in custom ways (and get a much better approximation of sRGB than raw exponentiation). But it all seems kind of wrong, because supposedly sRGB is free.
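For what it's worth, a hedged sketch of that lookup-table idea (not from the original post; GL ES 2.0 has no 1D textures, so the ramp is stored as a 256x1 2D texture, gammaLUT is a placeholder name, and <math.h> plus the GL ES 2.0 headers are assumed):

// Build a 256-entry gamma ramp for 1/1.8 and upload it as a 256x1 luminance texture.
GLubyte ramp[256];
for (int i = 0; i < 256; ++i)
    ramp[i] = (GLubyte)(255.0f * powf(i / 255.0f, 1.0f / 1.8f) + 0.5f);

GLuint gammaLUT;
glGenTextures(1, &gammaLUT);
glBindTexture(GL_TEXTURE_2D, gammaLUT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 256, 1, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, ramp);
// In the fragment shader each channel is then remapped with a lookup such as
// texture2D(u_gammaLUT, vec2(color.r, 0.5)).r, replacing the pow() call.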
Also somewhat alarming is the absence of any mention of the string "srgb" anywhere in the GL ES 2.0 spec. According to the makers of GLM, GL ES simply ignores sRGB entirely.
I know that I have used my code to render textures (I made a basic OpenGL-powered image viewer that renders PVRTC textures) and they did not get "dimmed". I think what happens there is that, due to GL ES 2's lack of sRGB awareness, the textures are loaded as if they were in linear space and written back out the same way. Since no lighting gets applied (all colors get multiplied by 1.0), nothing bad happens to the results.
iOS 7.0 adds the new color format kEAGLColorFormatSRGBA8, which you can set instead of kEAGLColorFormatRGBA8 (the default value) for the kEAGLDrawablePropertyColorFormat key in the drawableProperties dictionary of a CAEAGLLayer. If you’re using GLKit to manage your main framebuffer for you, you can get GLKView to create a sRGB renderbuffer by setting its drawableColorFormat property to GLKViewDrawableColorFormatSRGBA8888.
Note that the OpenGL ES version of EXT_sRGB behaves as if GL_FRAMEBUFFER_SRGB is always enabled. If you want to render without sRGB conversion to/from the destination framebuffer, you’ll need to use a different attachment with a non-sRGB internal format.
I think you are getting confused between the EXT_sRGB and ARB_framebuffer_sRGB extensions. EXT_sRGB is the more recent extension, and is the one supported by iOS devices. It differs from ARB_framebuffer_sRGB in one important way: it is not necessary to call glEnable(GL_FRAMEBUFFER_SRGB) on the framebuffer to enable gamma correction; it is always enabled. All you need to do is create the framebuffer with an sRGB internal format and render linear textures to it.
This is not hugely useful on its own, as textures are rarely in a linear colour space. Fortunately the extension also includes the ability to convert sRGB textures to linear space: by uploading your textures with an sRGB internal format, they will be converted into linear space for free when sampled in a shader. This allows you to use sRGB textures with a better perceptually encoded colour range, blend in higher-precision linear space, then encode the result back to sRGB in the renderbuffer, with accurate gamma correction and no shader cost.
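A minimal GL ES 2.0 sketch of both halves, assuming EXT_sRGB is available (colorRB, diffuseTex, the dimensions and pixels are placeholders; the enums come from EXT_sRGB, with the unsized GL_SRGB_ALPHA_EXT form used for glTexImage2D):

// sRGB destination: linear shader output is gamma-encoded on write, for free.
glBindRenderbuffer(GL_RENDERBUFFER, colorRB);
glRenderbufferStorage(GL_RENDERBUFFER, GL_SRGB8_ALPHA8_EXT, width, height);

// sRGB source texture: decoded to linear automatically when sampled.
glBindTexture(GL_TEXTURE_2D, diffuseTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB_ALPHA_EXT, texWidth, texHeight, 0,
             GL_SRGB_ALPHA_EXT, GL_UNSIGNED_BYTE, pixels);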
Here are my test results. My only iOS7 device is an A7-powered iPad5, and in order to test fillrate I had to tweak my test app a bit to enable blending. That was sufficient on iOS 6.1 to prevent fragment-discarding optimizations on opaque geometry, but for iOS 7 I also needed to write gl_FragColor.w != 1.0 in the shader. Not a problem.
Using GLKViewDrawableColorFormatSRGBA8888 does indeed appear to be free, or close to free, in terms of performance. I do not have a proper timedemo-style benchmark set up, so I am just testing "similar" scenes; removing the pow shaved around 2 ms off the frame time (which would e.g. take 43 ms to 41 ms, 22 fps to 24 fps). Then, setting the sRGB framebuffer color format did not introduce a noticeable increase in the frame time as reported by the debugger, but this isn't very scientific and it certainly could have slowed it by half a millisecond or so. I can't actually tell whether it is completely free (i.e. fully utilizing a hardware path for the final sRGB transformation) without building more benchmarking software first, but I already have the problem solved, so more rigorous testing will have to wait.

OpenGL photoshop overlay blend mode

I'm trying to implement a particle system (using OpenGL ES 2.0), where each particle is made up of a quad with a simple texture;
the red pixels are transparent. Each particle will have a random alpha value from 50% to 100%.
Now the tricky part is that I'd like each particle to have a blend mode much like Photoshop's "Overlay". I tried many different combinations with glBlendFunc(), but without luck.
I don't understand how I could implement this in a fragment shader, since I need information about the current color of the fragment, so that I can calculate a new color based on the current and texture colors.
I also thought about using a framebuffer object, but I guess I would need to re-render my framebuffer object into a texture for each particle, every frame, since I need the calculated fragment color when particles overlap each other.
I've found the math and other information regarding the Overlay calculation, but I have a hard time figuring out which direction I could take to implement this.
http://www.pegtop.net/delphi/articles/blendmodes/
Photoshop blending mode to OpenGL ES without shaders
I'm hoping to get an effect like this:
You can get information about the current fragment color in the framebuffer on an iOS device. Programmable blending has been available through the EXT_shader_framebuffer_fetch extension since iOS 6.0 (on every device supported by that release). Just declare that extension in your fragment shader (by putting the directive #extension GL_EXT_shader_framebuffer_fetch : require at the top) and you'll get current fragment data in gl_LastFragData[0].
And then, yes, you can use that in the fragment shader to implement any blending mode you like, including all the Photoshop-style ones. Here's an example of a Difference blend:
// compute srcColor earlier in shader or get from varying
gl_FragColor = abs(srcColor - gl_LastFragData[0]);
You can also use this extension for effects that don't blend two colors. For example, you can convert an entire scene to grayscale -- render it normally, then draw a quad with a shader that reads the last fragment data and processes it:
mediump float luminance = dot(gl_LastFragData[0], vec4(0.30,0.59,0.11,0.0));
gl_FragColor = vec4(luminance, luminance, luminance, 1.0);
You can do all sorts of blending modes in GLSL without framebuffer fetch, but that requires rendering to multiple textures and then drawing a quad with a shader that blends the textures. Compared to framebuffer fetch, that's an extra draw call and a lot of schlepping pixels back and forth between shared and tile memory, so framebuffer fetch is a lot faster.
On top of that, there's no saying that framebuffer data has to be color... if you're using multiple render targets in OpenGL ES 3.0, you can read data from one and use it to compute data that you write to another. (Note that the extension works differently in GLSL 3.0, though. The above examples are GLSL 1.0, which you can still use in an ES3 context. See the spec for how to use framebuffer fetch in a #version 300 es shader.)
I suspect you want this configuration (sketched in GL calls below):
Source: GL_SRC_ALPHA
Destination: GL_ONE.
Equation: GL_ADD
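That is (a minimal sketch; these are standard GL ES 2.0 calls):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE); // source scaled by its alpha, added onto the destination
glBlendEquation(GL_FUNC_ADD);      // GL_FUNC_ADD is the default equation anyway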
If not, it might be helpful if you could explain the math of the filter you're hoping to get.
[EDIT: the answer below is true for OpenGL and OpenGL ES pretty much everywhere except iOS since 6.0. See rickster's answer for information about EXT_shader_framebuffer_fetch which, in ES 3.0 terms, allows a target buffer to be flagged as inout, and introduces a corresponding built-in variable under ES 2.0. iOS 6.0 is over a year old at the time of writing so there's no particular excuse for my ignorance; I've decided not to delete the answer because it's potentially valid to those finding this question based on its opengl-es, opengl-es-2.0 and shader tags.]
To confirm briefly:
the OpenGL blend modes are implemented in hardware and occur after the fragment shader has concluded;
you can't programmatically specify a blend mode;
you're right that the only workaround is to ping pong, swapping the target buffer and a source texture for each piece of geometry (so you draw from the first to the second, then back from the second to the first, etc).
Per Wikipedia and the link you provided, Photoshop's Overlay mode is defined so that, for a background value a and a foreground value b, the output is f(a, b) = 2ab if a < 0.5, and 1 - 2(1 - a)(1 - b) otherwise.
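Restating that per channel as C, just to make the shape of it concrete (a and b are in [0, 1]; overlay is a hypothetical helper name):

// Photoshop "Overlay" for one channel: a = background (destination), b = foreground (source).
static float overlay(float a, float b)
{
    return (a < 0.5f) ? 2.0f * a * b
                      : 1.0f - 2.0f * (1.0f - a) * (1.0f - b);
}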
So the blend mode changes per pixel depending on the colour already in the colour buffer. And each successive draw's decision depends on the state the colour buffer was left in by the previous.
So there's no way you can avoid writing that as a ping pong.
The closest you're going to get without all that expensive buffer swapping is probably, as Sorin suggests, to try to produce something similar using purely additive blending. You could juice that a little by adding a final ping-pong stage that converts all values from their linear scale to the S-curve that you'd see if you overlaid the same colour onto itself. That should give you the big variation where multiple circles overlap.

Can iOS use different filtering parameters for different texture units in the same context?

In iOS, I have an input image, which I am rendering to an intermediate texture (using a framebuffer) and then rendering that texture to the iOS-supplied renderbuffer (also using a framebuffer, of course). This is 2D, so I'm just drawing a quad each time.
No matter what I've tried, I can't seem to get the second rendering operation to use GL_LINEAR on the texture (I'm using GL_NEAREST on the first). The only way I've seen something filtered is if both textures use GL_LINEAR. Extremely similar code (at least, the OpenGL bits) works fine on Android. Am I just doing something wrong, or does this not work on iOS?
The answer is "yes, you can use different filtering options with different texture units in the same context". I had different issues with my program.

Artifacts when enabling 4x MSAA anti-aliasing on iPad / iOS

I've enabled 4x MSAA on my iPad OpenGL ES 2.0 app using the example on Apple's website. On the simulator this works well and the image is nice and smooth however on the device there are colored artifacts on the edges where it should be antialiased. This exists on the iPad/iPad2 and iPhone4 but not in the simulator. I've attached a picture below of what the artifact looks like. Anyone know what this could be?
It looks very much like your shader is the culprit, but you didn't post the shader so I can't be sure. See, when you turn on MSAA, it becomes possible for the shader to get executed for samples that are inside the pixel area but outside the triangle area. Without MSAA, that pixel would not have caused a fragment shader execution at all, but now that you have turned on MSAA, the fragment shader must be executed for that pixel if one of its samples is active.
The link I posted explains the issue in greater depth. It also gives you ways to avoid this issue, but I don't know if OpenGL ES 2.0 provides access to centroid sampling. If it does not, then you will have to disable multisampled rendering for those things that cause artifacts with glDisable(GL_MULTISAMPLE). You can re-enable it when you need multisampling active.
