I've enabled 4x MSAA in my iPad OpenGL ES 2.0 app using the example on Apple's website. On the simulator this works well and the image is nice and smooth; however, on the device there are colored artifacts along the edges that should be antialiased. This happens on the iPad/iPad 2 and iPhone 4 but not in the simulator. I've attached a picture below of what the artifact looks like. Does anyone know what this could be?
It looks very much like your shader is the culprit, but you didn't post it so I can't be sure. When you turn on MSAA, the fragment shader can be executed for pixels that are only partially covered: some of the pixel's samples lie inside the triangle, but the point where the varyings are interpolated can lie outside of it. Without MSAA such a pixel would not have caused a fragment shader execution at all, but with MSAA the shader must run for that pixel if at least one of its samples is covered, and the extrapolated values it receives can produce exactly this kind of colored fringe.
The link I posted explains the issue in greater depth. It also gives you ways to avoid this issue, but I don't know if OpenGL ES 2.0 provides access to centroid sampling. If it does not, then you will have to disable multisampled rendering for those things that cause artifacts with glDisable(GL_MULTISAMPLE). You can re-enable it when you need multisampling active.
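For reference, GLSL ES 1.00 (the shading language of OpenGL ES 2.0) has no centroid qualifier, but GLSL ES 3.00 and desktop GLSL do. A minimal sketch of what centroid sampling looks like there, with invented varying/texture names (the matching vertex shader output must also be declared centroid out):

#version 300 es
precision mediump float;
// "centroid" makes the interpolation point lie inside the covered part of the
// triangle, so vTexCoord is never extrapolated past the triangle's edge.
centroid in vec2 vTexCoord;
uniform sampler2D uTexture;
out vec4 fragColor;

void main() {
    fragColor = texture(uTexture, vTexCoord);
}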
I've tried to create an "overlay" effect in a 3D scene. After drawing things to the buffer, I tried to draw a full-screen quad with blending enabled and the depth test disabled. On some Android devices this seems to have caused a slowdown.
I found this link:
The particularly slow point is the point where the drawing of a pixel needs to check what the color behind it was.
So instead of drawing a single full-screen quad, I divided it up into tiles and rendered it with multiple draw calls, which seems to have given some improvement.
What might be happening here, and how can this be profiled with WebGL, i.e. how does one arrive at the conclusion quoted above?
I guess that to profile it, you simply have to test with several blending functions, with and without blending enabled, and so on.
Blending is not a trivial operation, and blending functions that need to read the pixel already in the buffer can indeed hurt performance, like any "read" operation in OpenGL, because they can stall the pipeline. Most modern desktop GPUs presumably have hardware designed to optimize this, but on mobile GPUs it can be more of a problem.
Anyway, if you are drawing a full-screen quad, why not render it from two source textures and blend them directly in the fragment shader with a custom equation, as in the sketch below? That way you don't need fixed-function blending at all, and you avoid any back-buffer reading problem.
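For example, a minimal sketch of such a shader, assuming the scene was first rendered into a texture (the uniform and varying names are invented):

precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D uScene;    // what was previously rendered into an FBO texture
uniform sampler2D uOverlay;  // the full-screen overlay image

void main() {
    vec4 scene   = texture2D(uScene, vTexCoord);
    vec4 overlay = texture2D(uOverlay, vTexCoord);
    // Any custom equation can go here; this one is plain "source over" compositing.
    gl_FragColor = vec4(mix(scene.rgb, overlay.rgb, overlay.a), scene.a);
}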
I'm using Brad Larson's GPUImage framework. However, when I try to apply a Kuwahara filter with a filter radius of 5.0, I get artifacts on an iPhone 4S (it works fine on higher-performance devices).
Source image size was 2048x2048px.
By reading the original developer's comments, I understand that there's a kind of watchdog timer which fires when something takes too long to run on the GPU.
So my question is: what is the maximum resolution at which I can apply a Kuwahara filter with a radius of 5.0 on an iPhone 4S without getting artifacts?
The Kuwahara filter produces square artefacts and is computationally very expensive.
You can use the generalised Kuwahara filter instead (e.g. with 8 segments).
You can also generate the shader by hand, without loops, for a chosen radius. To reduce the number of texture reads, you can use a trick (see the sketch below):
Generate the shader for one constant radius.
Make the pixel offsets depend on the ratio of the current radius to that constant radius.
You still get some artefacts, but they look artistic (like a canvas texture), and the Kuwahara filter will be faster.
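A minimal sketch of the offset-scaling part of that trick; only a 4-tap average is shown rather than a full unrolled Kuwahara kernel, and the uniform names are invented:

precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D uTexture;
uniform float uRadius;           // radius requested at run time
uniform vec2 uTexelSize;         // 1.0 / texture size in pixels
const float kBakedRadius = 5.0;  // radius the unrolled shader was generated for

void main() {
    // Offsets are written for kBakedRadius and rescaled by uRadius / kBakedRadius,
    // so a single generated shader covers a range of radii.
    vec2 s = uTexelSize * (uRadius / kBakedRadius);
    vec4 sum = texture2D(uTexture, vTexCoord + vec2(-1.0, -1.0) * s)
             + texture2D(uTexture, vTexCoord + vec2( 1.0, -1.0) * s)
             + texture2D(uTexture, vTexCoord + vec2(-1.0,  1.0) * s)
             + texture2D(uTexture, vTexCoord + vec2( 1.0,  1.0) * s);
    gl_FragColor = sum * 0.25;
}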
There really isn't a hard limit. The tiling artifacts you are seeing are due to the OpenGL ES watchdog timer aborting the scene rendering after it takes too long. If you have a single frame that takes longer than approximately 2 seconds to render, your frame rendering will be killed in this manner.
The exact time it takes is a function of hardware capabilities, system load, shader complexity, and iOS version. In GPUImage, you pretty much only see this with the Kuwahara filter because of the ridiculously unoptimized shader I use for that. It's drawn from a publication that was doing this using desktop GPUs, and is about the worst case operation for a mobile GPU like these. Someone contributed a fixed-radius version of this which is significantly faster, but you'll need to create your own optimized version if you want to use this with large images on anything but the latest devices.
I'm trying to implement a particle system (using OpenGL ES 2.0) where each particle is made up of a quad with a simple texture:
The red pixels are transparent. Each particle will have a random alpha value between 50% and 100%.
Now the tricky part is that I'd like each particle to have a blend mode much like Photoshop's "Overlay". I've tried many different combinations with glBlendFunc(), but without luck.
I don't understand how I could implement this in a fragment shader, since I need information about the color currently in the framebuffer so that I can calculate a new color based on it and the texture color.
I also thought about using a framebuffer object, but I guess I would then need to re-render the framebuffer object into a texture for every particle, every frame, since I need the already-calculated fragment color wherever particles overlap each other.
I've found the math and other information regarding the Overlay calculation, but I'm having a hard time figuring out which direction to take to implement it.
http://www.pegtop.net/delphi/articles/blendmodes/
Photoshop blending mode to OpenGL ES without shaders
I'm hoping to achieve an effect like this:
You can get information about the current fragment color in the framebuffer on an iOS device. Programmable blending has been available through the EXT_shader_framebuffer_fetch extension since iOS 6.0 (on every device supported by that release). Just declare that extension in your fragment shader (by putting the directive #extension GL_EXT_shader_framebuffer_fetch : require at the top) and you'll get current fragment data in gl_LastFragData[0].
And then, yes, you can use that in the fragment shader to implement any blending mode you like, including all the Photoshop-style ones. Here's an example of a Difference blend:
// compute srcColor earlier in shader or get from varying
gl_FragColor = abs(srcColor - gl_LastFragData[0]);
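Putting the pieces together, a complete ES 2.0 fragment shader for that Difference blend might look like this (a sketch; the varying and texture names are invented):

#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;
varying vec2 vTexCoord;          // from the vertex shader
uniform sampler2D uParticleTex;  // the particle's texture

void main() {
    vec4 srcColor = texture2D(uParticleTex, vTexCoord);
    // gl_LastFragData[0] holds whatever the framebuffer currently contains here.
    gl_FragColor = abs(srcColor - gl_LastFragData[0]);
}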
You can also use this extension for effects that don't blend two colors. For example, you can convert an entire scene to grayscale -- render it normally, then draw a quad with a shader that reads the last fragment data and processes it:
mediump float luminance = dot(gl_LastFragData[0], vec4(0.30,0.59,0.11,0.0));
gl_FragColor = vec4(luminance, luminance, luminance, 1.0);
You can do all sorts of blending modes in GLSL without framebuffer fetch, but that requires rendering to multiple textures, then drawing a quad with a shader that blends the textures. Compared to framebuffer fetch, that's an extra draw call and a lot of schlepping pixels back and forth between shared and tile memory -- this method is a lot faster.
On top of that, there's no saying that framebuffer data has to be color... if you're using multiple render targets in OpenGL ES 3.0, you can read data from one and use it to compute data that you write to another. (Note that the extension works differently in GLSL 3.0, though. The above examples are GLSL 1.0, which you can still use in an ES3 context. See the spec for how to use framebuffer fetch in a #version 300 es shader.)
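For what it's worth, my reading of the extension spec is that under #version 300 es the framebuffer value is exposed by declaring the fragment output inout instead of via gl_LastFragData. Roughly, with invented names:

#version 300 es
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;

in vec2 vTexCoord;
uniform sampler2D uParticleTex;
inout vec4 fragColor;  // reading this yields the current framebuffer value

void main() {
    vec4 srcColor = texture(uParticleTex, vTexCoord);
    fragColor = abs(srcColor - fragColor);  // the same Difference blend as above
}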
I suspect you want this configuration:
Source: GL_SRC_ALPHA
Destination: GL_ONE
Equation: GL_ADD
If not, it might be helpful if you could explain the math of the filter you're hoping to get.
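If you do end up on the framebuffer-fetch route from the answer above, that same additive configuration is a one-liner in the shader. A sketch, with invented texture/varying names:

#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D uParticleTex;

void main() {
    vec4 src = texture2D(uParticleTex, vTexCoord);
    // Equivalent of glBlendFunc(GL_SRC_ALPHA, GL_ONE) with equation GL_FUNC_ADD.
    gl_FragColor = src * src.a + gl_LastFragData[0];
}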
[EDIT: the answer below is true for OpenGL and OpenGL ES pretty much everywhere except iOS since 6.0. See rickster's answer for information about EXT_shader_framebuffer_fetch which, in ES 3.0 terms, allows a target buffer to be flagged as inout, and introduces a corresponding built-in variable under ES 2.0. iOS 6.0 is over a year old at the time of writing so there's no particular excuse for my ignorance; I've decided not to delete the answer because it's potentially valid to those finding this question based on its opengl-es, opengl-es-2.0 and shader tags.]
To confirm briefly:
the OpenGL blend modes are implemented in hardware and occur after the fragment shader has concluded;
you can't programmatically specify a blend mode;
you're right that the only workaround is to ping pong, swapping the target buffer and a source texture for each piece of geometry (so you draw from the first to the second, then back from the second to the first, etc).
Per Wikipedia and the link you provided, Photoshop's Overlay mode is defined so that, for a background value a and a foreground value b, the output f(a, b) is 2ab if a < 0.5, and 1 - 2(1 - a)(1 - b) otherwise.
So the blend mode changes per pixel depending on the colour already in the colour buffer. And each successive draw's decision depends on the state the colour buffer was left in by the previous.
So there's no way you can avoid writing that as a ping pong.
The closest you're going to get without all that expensive buffer swapping is probably, as Sorin suggests, to try to produce something similar using purely additive blending. You could juice that a little by adding a final ping-pong stage that converts all values from their linear scale to the S-curve that you'd see if you overlaid the same colour onto itself. That should give you the big variation where multiple circles overlap.
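For reference, the per-channel maths above written as one ping-pong pass might look something like this sketch; the texture names are invented, and in a real renderer the backdrop would be sampled with screen-space coordinates rather than the quad's own:

precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D uBackdrop;  // the buffer you last rendered into, bound as a texture
uniform sampler2D uParticle;  // the particle texture

// Photoshop-style Overlay, per channel: 2ab where the backdrop is dark,
// 1 - 2(1 - a)(1 - b) where it is bright.
vec3 overlay(vec3 a, vec3 b) {
    vec3 dark = 2.0 * a * b;
    vec3 light = 1.0 - 2.0 * (1.0 - a) * (1.0 - b);
    return mix(dark, light, step(0.5, a));
}

void main() {
    vec3 a = texture2D(uBackdrop, vTexCoord).rgb;
    vec4 b = texture2D(uParticle, vTexCoord);
    // The particle's 50-100% alpha fades the overlay effect in and out.
    gl_FragColor = vec4(mix(a, overlay(a, b.rgb), b.a), 1.0);
}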
I have set up rendering to a framebuffer with color and depth textures on iOS, and it all works OK. I then tried to add multisampling via the APPLE extensions (I used the code from Rendering to texture on iOS OpenGL ES—works on simulator, but not on device), but apparently there's a catch.
After resolving the multisampled buffer into my original framebuffer (which I use for post-processing effects), only the color buffer is resolved. glResolveMultisampleFramebufferAPPLE() apparently does not touch my depth texture at all, so if I use multisampling I have to give up my depth texture effects. Is there no way to get the depth texture if I use multisampling? I know how multisampling works; I just want a depth texture alongside the color texture.
The spec for APPLE_framebuffer_multisample says that glResolveMultisampleFramebufferAPPLE() resolves only the color attachment. This means you will have to write depth into a color renderbuffer in an additional render pass (see the sketch below) and resolve that to get the depth information.
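A minimal sketch of such a pass, writing window-space depth as a grey value into the color target so it survives the color-only resolve (for real use you would pack the depth across the RGBA channels for more precision):

precision mediump float;

void main() {
    // gl_FragCoord.z is this fragment's window-space depth in [0, 1].
    float depth = gl_FragCoord.z;
    gl_FragColor = vec4(depth, depth, depth, 1.0);
}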
Ok, so after a couple more days of looking into this matter, I got my answer.
So the APPLE extension exists (and is different from the EXT one) simply because it only resolves color. The GL ES 3.0 standard (probably coming in iOS 7.1) and desktop GL say that in order to resolve color or depth you use glBlitFramebuffer, which copies and resolves the buffers. I tried it with desktop GL 4.2, and blitting the depth buffer works.
I also went back to my DirectX 11 renderer and tried the same thing with a GPU that supports DirectX 11.1-level features, and I was surprised that leading-edge hardware can't do this in a single resolve call either. ID3D11DeviceContext::ResolveSubresource throws errors when you try to resolve textures bound as depth. The workaround is either a special shader pass that does the depth resolve, which implies using Texture2DMS (a DirectX 10_1-level feature), or ping-ponging the texture through two separate textures that are not bound as depth (which implies one resolve call and two full depth texture copies).
The GL counterpart of that approach is to use glTexImage2DMultisample textures instead of multisampled renderbuffers (core since desktop OpenGL 3.2) and then use a sampler2DMS in the fragment shader for the actual texel fetch, as in the sketch below.
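A sketch of such a manual resolve pass in desktop GLSL; uDepthMS and uSamples are invented names, and averaging is just one choice (taking a single sample, or the min/max, may make more sense for depth):

#version 150
uniform sampler2DMS uDepthMS;  // multisampled texture from glTexImage2DMultisample
uniform int uSamples;          // number of samples the texture was created with
out vec4 fragColor;

void main() {
    ivec2 coord = ivec2(gl_FragCoord.xy);
    float depth = 0.0;
    for (int i = 0; i < uSamples; ++i)
        depth += texelFetch(uDepthMS, coord, i).r;
    fragColor = vec4(vec3(depth / float(uSamples)), 1.0);
}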
EDIT: Whether or not glBlitFramebuffer resolves depth seems to be implementation dependent. On desktop GL (an HD 7850) it works; on GL ES 3 it still doesn't resolve it.
Due to performance bottlenecks in Core Graphics, I'm trying to use OpenGL ES on iOS to draw a 2D scene, but OpenGL is rendering my images at an incredibly low resolution.
Here's part of the image I'm using (in Xcode):
And OpenGL's texture rendering (in simulator):
I'm using Apple's Texture2D to create a texture from a PNG, and then to draw it to the screen. I'm using an Ortho projection to look straight down on the scene as was recommended in Apress' Beginning iPhone Development book. The image happens to be the exact size of the screen and is drawn as such (I didn't take the full image in the screenshots above). I'm not using any transforms on the model, so drawing the image should cause no sub-pixel rendering.
I'm happy to post code examples, but thought I'd start without them in case there's a simple explanation.
Why would the image lose so much quality with this method? Am I missing a step in my environment setup? I briefly read about textures performing better at power-of-two sizes -- does this matter on the iPhone? I also read about multisampling, but I wasn't sure whether that was related.
Edit: Updated screen shots so as to alleviate confusion.
I still don't fully understand why I was having quality issues with my code, but I managed to work around the problem. I was originally using the OpenGLES2DView class provided by the makers of the Apress book "Beginning iPhone 4 Development", and I suspect the problem lay somewhere within that code. I then came across this tutorial, which suggested I use GLKit instead. GLKit allowed me to do exactly what I wanted!
A few lessons learned:
Using GLKit and the sprites class in the tutorial linked above, my textures don't need to be powers of two. GLKit seems to handle this fine.
Multisampling was not the solution to the problem. With GLKit, using multisampling seemingly has no effect (to my eye) when rendering 2D graphics.
Thanks all who tried to help.