Broken texture in iOS

I'm using OpenTK in MonoTouch to render some textures in iOS, and some of the textures come up broken. This is a closeup of an iPad screenshot showing one correctly rendered texture (the top one) and two broken ones below:
I'm not doing anything weird. I'm loading the texture from a semitransparent PNG using CGImage->CGBitmapContext->GL.TexImage2D. I'm rendering each sprite with two triangles, and my fragment shader just reads the texel from the sampler with texture2D() and multiplies it by a uniform vec4 to color the texture.
The files themselves seem to be okay, and the Android port of the same application (built with Mono for Android and the exact same binary resources) renders them perfectly. As you can see, other transparent textures work fine.
If it helps, pretty much every texture is broken when I run the program in the simulator. Also this problem persists even if I rebuild the program.
Any ideas on how to figure out what is causing this problem?
Here's my vertex shader:
attribute vec4 spritePosition;
attribute vec2 textureCoords;
uniform mat4 projectionMatrix;
uniform vec4 color;
varying vec4 colorVarying;
varying vec2 textureVarying;
void main()
{
    gl_Position = projectionMatrix * spritePosition;
    textureVarying = textureCoords;
    colorVarying = color;
}
Here's my fragment shader:
varying lowp vec4 colorVarying;
varying lowp vec2 textureVarying;
uniform sampler2D spriteTexture;
void main()
{
    gl_FragColor = texture2D(spriteTexture, textureVarying) * colorVarying;
}
I'm loading the image like this:
using (var bitmap = UIImage.FromFile(resourcePath).CGImage)
{
    IntPtr pixels = Marshal.AllocHGlobal(bitmap.Width * bitmap.Height * 4);
    using (var context = new CGBitmapContext(pixels, bitmap.Width, bitmap.Height, 8, bitmap.Width * 4,
        bitmap.ColorSpace, CGImageAlphaInfo.PremultipliedLast))
    {
        context.DrawImage(new RectangleF(0, 0, bitmap.Width, bitmap.Height), bitmap);

        int[] textureNames = new int[1];
        GL.GenTextures(1, textureNames);
        GL.BindTexture(TextureTarget.Texture2D, textureNames[0]);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)All.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)All.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)All.ClampToEdge);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)All.ClampToEdge);
        GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, bitmap.Width, bitmap.Height, 0,
            PixelFormat.Rgba, PixelType.UnsignedByte, pixels);

        CurrentResources.Add(resourceID, new ResourceData(resourcePath, resourceType, 0,
            new TextureEntry(textureNames[0], bitmap.Width, bitmap.Height)));
    }
}
and in my onRenderFrame, I have this:
GL.ClearColor(1.0f, 1.0f, 1.0f, 1.0f);
GL.Clear(ClearBufferMask.ColorBufferBit);
GL.Enable(EnableCap.Blend);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
GL.UseProgram(RenderingProgram);
GL.VertexAttribPointer((int)ShaderAttributes.SpritePosition, 2, VertexAttribPointerType.Float, false, 0, squareVertices);
GL.VertexAttribPointer((int)ShaderAttributes.TextureCoords, 2, VertexAttribPointerType.Float, false, 0, squareTextureCoords);
GL.EnableVertexAttribArray((int)ShaderAttributes.SpritePosition);
GL.EnableVertexAttribArray((int)ShaderAttributes.TextureCoords);
//...
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, textureEntry.TextureID);
GL.Uniform1(Uniforms[(int)ShaderUniforms.Texture], 0);
// ...
GL.DrawArrays(BeginMode.TriangleStrip, 0, 4);
That triangle strip is made out of two triangles that cover the sprite, with the vertex and texture coordinates set to where I want to show it. projectionMatrix is a simple orthographic projection matrix.
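For reference, a matrix like that can be built with OpenTK's Matrix4 helpers, roughly as below; viewportWidth and viewportHeight are placeholder names, and the flipped y assumes a top-left origin:

// Sketch only: an orthographic projection in pixel units with a top-left origin.
// viewportWidth/viewportHeight are placeholders, not variables from the code above.
Matrix4 projection = Matrix4.CreateOrthographicOffCenter(
    0.0f, viewportWidth,     // left, right
    viewportHeight, 0.0f,    // bottom, top (swapped so y grows downwards)
    -1.0f, 1.0f);            // near, far
// The result is uploaded to the projectionMatrix uniform with GL.UniformMatrix4.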
As you can see, I'm not trying to do anything fancy here. This is all pretty standard code, and it works for some textures, so I think that in general the code is okay. I'm also doing pretty much the same thing in Mono for Android, and it works pretty well without any texture corruption.
Corrupted colors like that smell like uninitialized variables somewhere, and seeing it happen only on the transparent part leads me to believe that I'm having uninitialized alpha values somewhere. However, GL.Clear(ClearBufferMask.ColorBufferBit) should clear my alpha values, and even so, the background texture has an alpha value of 1, and with the current BlendFunc, should set the alpha for those pixels to 1. Afterwards, the transparent textures have alpha values ranging from 0 to 1, so they should blend properly. I see no uninitialized variables anywhere.
...or... this is all the fault of CGBitmapContext. Maybe by doing DrawImage, I'm not blitting the source image but drawing it with blending instead, and the garbage data comes from the memory I got with AllocHGlobal. That doesn't explain why it consistently happens with just these two textures, though... (I'm tagging this as core-graphics so maybe one of the quartz people can help.)
Let me know if you want to see some more code.

Okay, it is just as I had expected. The memory I get with Marshal.AllocHGlobal is not initialized to anything, and CGBitmapContext.DrawImage just renders the image on top of whatever is in the context, which is garbage.
So the way to fix this is simply to insert a context.ClearRect() call before I call context.DrawImage().
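In other words, the loading code above only needs one extra line before the draw:

using (var context = new CGBitmapContext(pixels, bitmap.Width, bitmap.Height, 8, bitmap.Width * 4,
    bitmap.ColorSpace, CGImageAlphaInfo.PremultipliedLast))
{
    // Wipe the uninitialized AllocHGlobal memory before drawing into it.
    context.ClearRect(new RectangleF(0, 0, bitmap.Width, bitmap.Height));
    context.DrawImage(new RectangleF(0, 0, bitmap.Width, bitmap.Height), bitmap);
    // ... GL.TexImage2D and the rest as before ...
}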
I don't know why it worked fine with other (larger) textures. Maybe in those cases I'm requesting a large block of memory, so the iOS (or Mono) memory manager hands me a freshly zeroed block, while for the smaller textures I'm reusing previously freed memory that has not been zeroed.
It would be nice if this memory were initialized to something like 0xBAADF00D when using a debug heap, the way LocalAlloc does in the Windows API.
Two other somewhat related things to remember:
In the code I posted, I'm not releasing the memory requested with AllocHGlobal. This is a bug. GL.TexImage2D copies the texture to VRAM, so it is safe to free it right there.
context.DrawImage draws the image into a new context (rather than handing me the image's raw pixels), and Core Graphics only works with premultiplied alpha (which I find idiotic). So a texture loaded this way will always end up with premultiplied alpha. This means that I must also change the alpha blending function to GL.BlendFunc(BlendingFactorSrc.One, BlendingFactorDest.OneMinusSrcAlpha), and make sure that all crossfading code works over the entire RGBA value, not just the alpha. Both changes are sketched below.
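A rough sketch of both adjustments against the loading and rendering code above:

// After uploading, the driver has its own copy, so the temporary buffer can be freed right away.
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, bitmap.Width, bitmap.Height, 0,
    PixelFormat.Rgba, PixelType.UnsignedByte, pixels);
Marshal.FreeHGlobal(pixels);

// And in the render loop, blend premultiplied-alpha textures with One instead of SrcAlpha.
GL.BlendFunc(BlendingFactorSrc.One, BlendingFactorDest.OneMinusSrcAlpha);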

Related

Dissolve SKShader works as expected on simulator, strange behaviour on actual device

I encountered weird behaviour when trying to create a dissolve shader for iOS SpriteKit. I have this basic shader that, for now, only changes the texture's alpha depending on the darkness of a noise texture:
let shader = SKShader(source: """
    void main() {
        vec4 colour = texture2D(u_texture, v_tex_coord);
        float noise = texture2D(noise_tex, v_tex_coord).r;
        gl_FragColor = colour * noise;
    }
    """, uniforms: [
    SKUniform(name: "noise_tex", texture: spriteSheet.textureNamed("dissolve_noise"))
])
Note that this code is called in the spriteSheet preload callback.
On the simulator this consistently gives the expected result, i.e. a texture with different alpha values all over the place. On an actual device (iOS 14.5.1) the result varies:
1. Applied directly to an SKSpriteNode, it makes the whole texture semi-transparent with a single value.
2. Applied to an SKEffectNode with the SKSpriteNode as its child, I see a miniaturized part of the whole spritesheet.
3. Same as 2, but the texture is created from an image outside the spritesheet: it works as on the simulator (and as expected).
Why does it behave like this? Considering this needs to work on iOS 9 devices, I'm worried that 3 won't work everywhere. So I'd like to understand why it happens and, ideally, find a sure way to force 1 (or at least 2) to work on all devices.
After some more testing I finally figured out what is happening. On devices, the textures the shader receives are the whole spritesheet rather than separate sub-textures, so the coordinates go all over the place (which actually makes more sense than the simulator behaviour, now that I think about it).
So depending on whether I want 1 or 2, I need to apply different maths. 2 is easier: since the displayed texture is first rendered into a buffer, v_tex_coord covers the full [0.0, 1.0] range, so all I need is the noise texture's rect in the atlas to do the appropriate transform. For 1 I additionally need to provide the sprite's own texture rect, first map v_tex_coord into [0.0, 1.0] myself, and then apply the noise rect to those coordinates (a sketch for case 2 is below).
This works whether the shader gets textures from the spritesheet or from separate images; in the latter case it just does some unnecessary calculations.
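A minimal sketch of the case-2 approach, assuming iOS 10+ for the vectorFloat4 uniform; the uniform name u_noise_rect is just illustrative, and spriteSheet is the atlas from the snippet above:

import SpriteKit
import simd

let noiseTex = spriteSheet.textureNamed("dissolve_noise")
let r = noiseTex.textureRect()   // sub-rect of the atlas, in [0, 1] atlas coordinates

let shader = SKShader(source: """
    void main() {
        vec4 colour = texture2D(u_texture, v_tex_coord);
        // Remap the effect node's [0, 1] coordinates into the noise sub-rect of the atlas.
        vec2 noiseCoord = u_noise_rect.xy + v_tex_coord * u_noise_rect.zw;
        float noise = texture2D(noise_tex, noiseCoord).r;
        gl_FragColor = colour * noise;
    }
    """, uniforms: [
    SKUniform(name: "noise_tex", texture: noiseTex),
    SKUniform(name: "u_noise_rect",
              vectorFloat4: vector_float4(Float(r.origin.x), Float(r.origin.y),
                                          Float(r.size.width), Float(r.size.height)))
])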

WebGL feedback loop formed between Framebuffer and active Texture

I have a WebGL project set up that uses 2-pass rendering to create effects on a texture.
Everything was working until recently, when Chrome started throwing this error:
[.WebGL-0000020DB7FB7E40] GL_INVALID_OPERATION: Feedback loop formed between Framebuffer and active Texture.
This just started happening even though I didn't change my code, so I'm guessing a new update caused this.
I found this answer on SO, stating the error "happens any time you read from a texture which is currently attached to the framebuffer".
However, I've combed through my code 100 times and I don't believe I am doing that. So here is how I have things set up.
Create a fragment shader with a uniform sampler.
uniform sampler2D sampler;
Create 2 textures
var texture0 = initTexture(); // This function does all the work to create a texture
var texture1 = initTexture(); // This function does all the work to create a texture
Create a Frame Buffer
var frameBuffer = gl.createFramebuffer();
Then I start the "2 pass processing" by uploading an HTML image to texture0 and binding texture0 to the sampler.
I then bind the frame buffer & call drawArrays:
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture1, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
To clean up I unbind the frame buffer:
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
Edit:
After adding breakpoints to my code, I found that the error is not actually thrown until I bind the null frame buffer. So the drawArrays call isn't causing the error; it's binding the null frame buffer afterwards that sets it off.
Since version 83, Chrome performs conservative checks for feedback loops between the framebuffer and active textures. These checks are likely too conservative and affect usage that should actually be allowed.
In these new checks, Chrome seems to disallow a render target being bound to any texture slot, even if that slot is not used by the program.
In your 2-pass rendering you likely have something like:
1. Initialize a render target: create a texture and attach it to a framebuffer.
2. Render to the target.
In step 1 you likely bind the texture with gl.bindTexture(gl.TEXTURE_2D, yourTexture). Before step 2, you then need to unbind it with gl.bindTexture(gl.TEXTURE_2D, null); otherwise Chrome will fail, because the render target is bound as a texture even though that texture is never sampled by the program. A sketch of the order that keeps Chrome happy follows.
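A minimal sketch with the names from the question (assuming initTexture leaves the newly created texture bound, which is a guess about that helper):

var texture0 = initTexture();           // source texture, later filled from the HTML image
var texture1 = initTexture();           // render target; assumed to still be bound here
gl.bindTexture(gl.TEXTURE_2D, null);    // unbind it so it is not attached to any unit

// First pass: sample texture0, render into texture1 through the framebuffer.
gl.bindTexture(gl.TEXTURE_2D, texture0);
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture1, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); // texture1 is only written here, never sampled
gl.bindFramebuffer(gl.FRAMEBUFFER, null);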

Profiling OpenGL ES app on iOS

I'm looking at a game I'm working on with the "OpenGL ES Driver" template in Instruments. The sampler is showing that I'm spending nearly all my time in a function called gfxIODataGetNewSurface, with a call tree that looks like this:
gfxIODataGetNewSurface
  gliGetNewIOSurfaceES
    _ZL29native_window_begin_iosurfaceP23_EAGLNativeWindowObject
      usleep
        __semwait_signal
The game is only getting about 40 FPS (on an iPhone 4) under what I don't believe is a heavy workload, which makes me think I'm doing something pathological with my OpenGL code.
Does anyone know what gliGetNewIOSurfaceES/gfxIODataGetNewSurface is doing, and what it indicates is happening in my app? Is it constantly creating new renderbuffers or something?
EDIT: New info...
I've discovered that with the following pixel shader:
varying vec2 texcoord;
uniform sampler2D sampler;
const vec4 color = vec4(...);

void main()
{
    gl_FragColor = color * texture2D(sampler, texcoord);
}
If I change the const 'color' to a #define, the Renderer Utilization drops from 75% to 35% when drawing a full-screen (960x640) sprite. Really I want this color to be an interpolated 'varying' quantity from the vertex shader, but if making it a global constant kills performance this much, I can't imagine the 'varying' version would be any better.
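For clarity, the faster variant looks roughly like this; the actual colour values are elided in the original, so the vec4 below is a placeholder:

precision mediump float;

#define COLOR vec4(1.0, 1.0, 1.0, 1.0)   // placeholder values, not the real constant

varying vec2 texcoord;
uniform sampler2D sampler;

void main()
{
    gl_FragColor = COLOR * texture2D(sampler, texcoord);
}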

Re-use of texture unit for different texture types breaks in chrome

Check out the following test:
http://binks.knobbits.org/webgl/texture3.html
It's a simple test of cube textures. It also has a 2D texture in there for good measure.
I discovered that in some browsers (so far, Chrome) the image is not displayed properly if I re-use the same texture unit for the cube texture as for the 2D texture.
There is a checkbox at the bottom marked "Use separate texture units for the cube texture on the sphere and the 2D texture on the floor" that shows this.
Is this a bug in chrome or in my code?
I don't see anything wrong with your code, but:
1) You can't use the same texture for two different targets. In other words, you can't do this:
tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex);
2) You can't use both a TEXTURE_2D and a CUBE_MAP on a texture unit AT THE SAME TIME.
You can assign both, but when you render you're only allowed to reference one of them in your shaders. In other words:
gl.activeTexture(gl.TEXTURE0);
tex1 = gl.createTexture();
tex2 = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex1);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex2);
This is okay, but a shader that tried to use both textures from texture unit 0 would fail.
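For example, a fragment shader along these lines (the sampler names are made up) would fail on a draw call if both uniforms were set to texture unit 0:

precision mediump float;

uniform sampler2D floorTexture;     // e.g. gl.uniform1i(floorTextureLoc, 0)
uniform samplerCube sphereTexture;  // also set to 0: two targets on one unit

varying vec2 v_texcoord;
varying vec3 v_normal;

void main()
{
    // Sampling both targets from the same unit makes the draw call generate INVALID_OPERATION.
    vec4 floorColor = texture2D(floorTexture, v_texcoord);
    vec4 sphereColor = textureCube(sphereTexture, normalize(v_normal));
    gl_FragColor = floorColor * sphereColor;
}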
I have reordered the code of the drawing functions a bit and now they are working.
Square:
TexturedSquare.prototype.draw = function() {
    gl.bindBuffer(gl.ARRAY_BUFFER, this.v);
    gl.enableVertexAttribArray(gl.va_vertex);
    gl.enableVertexAttribArray(gl.va_normal);
    gl.enableVertexAttribArray(gl.va_tex1pos);
    gl.vertexAttribPointer(gl.va_vertex, 4, gl.FLOAT, false, 10*4, 0);
    gl.vertexAttribPointer(gl.va_normal, 4, gl.FLOAT, false, 10*4, 4*4);
    gl.vertexAttribPointer(gl.va_tex1pos, 2, gl.FLOAT, false, 10*4, 4*8);

    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, this.texture);
    gl.bindTexture(gl.TEXTURE_CUBE_MAP, null);

    gl.uniform1i(shader.textures, 1);
    gl.uniform1i(shader.texture1, 0);
    gl.uniform1i(shader.cube_textures, 0);
    gl.uniform1i(shader.cubeTexture0, 1);

    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, this.e);
    gl.drawElements(gl.TRIANGLES, this.l, gl.UNSIGNED_SHORT, 0);
    gl.disableVertexAttribArray(gl.va_tex1pos);
}
Sphere:
GLHTexturedSphere.prototype.draw = function() {
    gl.bindBuffer(gl.ARRAY_BUFFER, this.vbuf);
    gl.enableVertexAttribArray(gl.va_vertex);
    gl.enableVertexAttribArray(gl.va_normal);
    gl.enableVertexAttribArray(this.va_cubetex0pos);
    gl.vertexAttribPointer(gl.va_vertex, 4, gl.FLOAT, false, 8*4, 0);
    gl.vertexAttribPointer(gl.va_normal, 4, gl.FLOAT, false, 8*4, 4*4);
    gl.vertexAttribPointer(this.va_cubetex0pos, 4, gl.FLOAT, false, 8*4, 4*4);

    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, null);
    gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture);

    gl.uniform1i(shader.textures, 0);
    gl.uniform1i(shader.texture1, 1);
    gl.uniform1i(shader.cube_textures, 1);
    gl.uniform1i(shader.cubeTexture0, 0);

    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, this.ebuf);
    gl.drawElements(gl.TRIANGLES, this.l, gl.UNSIGNED_SHORT, 0);
    gl.disableVertexAttribArray(gl.va_cubetex0pos);
}
Both of them now use TEXTURE0. Please check the WebGL state and uniform values.
The original code is a bit hard for me to follow, sorry, but I think the problem is that the texture1 and cubeTexture0 uniforms are being set to the same value.

Is it possible to use a pixel shader inside a sprite?

Is it possible to use a pixel shader inside a sprite?
I have created a simple pixel shader that just writes red, for testing.
I have surrounded my Sprite.DrawImage(tex, ...) call with effect.Begin(...), BeginPass(0), and EndPass(), End(), but my shader does not seem to be used: my texture is drawn normally.
I am not sure what language you are using. I will assume this is an XNA question.
Is it possible to use a pixel shader inside a sprite?
Yes, you can load a shader file (HLSL, up to and including Shader Model 3 in XNA) and draw with SpriteBatch using it.
If you post sample code it would be easier for us to see if anything isn't set up properly. However, it looks like you have things in the right order. I would check the shader code.
Your application code should look something like this:
Effect effect;
effect = Content.Load<Effect>("customeffect"); // load "customeffect.fx"
effect.CurrentTechnique = effect.Techniques["customtechnique"];

effect.Begin();
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Begin();
    spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
    spriteBatch.Draw(texture, Vector2.Zero, null, Color.White, 0, new Vector2(20, 20), 1, SpriteEffects.None, 0);
    spriteBatch.End();
    pass.End();
}
effect.End();
