Scaling issues with LUMINANCE_ALPHA - iOS

I'm currently extending my OpenGL UI system. For this I rewrote the font part and ran into an issue that appears when using mipmapping. Since images say more than a thousand words:
As you can see, the font's transparency fades out (the text should be displayed 8 times!). This happens only when using LUMINANCE_ALPHA textures. The code that loads the textures is basically the same; they differ only in the formats used. This is what LUMINANCE_ALPHA uses:
TexImageInternalFormat.LUMINANCE_ALPHA, TexImageFormat.LUMINANCE_ALPHA, TexImagePixelType.UNSIGNED_BYTE
Linear filtering is enabled and wrapping is set to GL_CLAMP_TO_EDGE. To me it looks like a mipmapping issue, but I've tried a lot of different settings without success, and, as already said, RGBA textures work without any issues. The application also runs on iOS, so using a LUMINANCE_ALPHA texture saves a lot of RAM compared to RGBA.
What could cause this, and how can I solve it?

As it turned out, the ImageFormat settings were wrong:
LA8 = new ImageFormat("LA8", TexImageInternalFormat.LUMINANCE_ALPHA, TexImageFormat.LUMINANCE_ALPHA, TexImagePixelType.UNSIGNED_BYTE, 4);
The last number indicates the number of bytes per pixel for this format and should be 2 in the case of LUMINANCE_ALPHA. The PVR reader doesn't complain about the missing image data and no exception is thrown. Changing the 4 to 2 solves the problem.
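For anyone hitting the same thing, here is roughly what the fixed upload boils down to in raw GL ES 2.0. This is a sketch only; the ImageFormat class and PVR reader above belong to the asker's own engine, so the helper below is illustrative:

#include <GLES2/gl2.h>

// Upload a LUMINANCE_ALPHA texture; 'pixels' must hold width * height * 2
// bytes -- the "2" that the ImageFormat entry above mistakenly had as 4.
GLuint uploadLuminanceAlpha(const unsigned char* pixels, int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // 2-byte pixels mean a row may not be a multiple of 4 bytes; a tight
    // unpack alignment keeps GL from reading past the end of each row.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, width, height, 0,
                 GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, pixels);

    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    return tex;
}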

Related

Decompressing a PVRTC compressed image format?

I'm trying to open some textures from an iPhone game that I believe use the PVRTC format (pictured below):
PVRTC image format?
However, everything I've tried with regard to opening it has failed. PVRTexTool won't decompress it; the program only opens files with the extension .PVR and doesn't recognise this one. I've also tried TexturePacker, but it doesn't recognise it either. It's been baffling me for a few days; any help towards decompressing the file would be appreciated, thanks.
I can only offer some suggestions.
iOS restricts PVRTC textures to square, power-of-2 sizes, and they will be either 2bpp or, more likely, 4bpp. If we initially assume no MIP mapping, there can thus be only a few possible sizes for the raw data; from that you might be able to deduce the size of any header and strip it off. I think the PowerVR SDK from Imagination Technologies provides decoder source code in C (or at least it did the last time I checked, though admittedly that was a few years ago) if you have the raw data. Also, the data might be in Morton order.
If MIP mapping is used, then I think you'll need to include the entire MIP map chain in your size calculation, but note that the small levels will each be rounded up to at least 8 bytes.
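To illustrate the size calculation, here is a sketch based on the rules above (real files may carry extra metadata beyond a fixed header, so treat the numbers as candidates to match against the file size):

#include <cstdio>
#include <cstddef>
#include <algorithm>

// Bytes used by one PVRTC level of the given edge length and bit rate,
// applying the round-up-to-8-bytes rule for the small levels.
size_t pvrtcLevelSize(size_t edge, int bpp)
{
    return std::max<size_t>(edge * edge * bpp / 8, 8);
}

// Total raw data size, optionally including the full MIP chain.
size_t pvrtcDataSize(size_t edge, int bpp, bool mipmapped)
{
    size_t total = 0;
    do {
        total += pvrtcLevelSize(edge, bpp);
        edge /= 2;
    } while (mipmapped && edge >= 1);
    return total;
}

int main()
{
    // fileSize minus one of these candidates = probable header size.
    for (size_t edge = 32; edge <= 2048; edge *= 2)
        std::printf("%4zu px: 4bpp=%8zu  4bpp+mips=%8zu\n", edge,
                    pvrtcDataSize(edge, 4, false),
                    pvrtcDataSize(edge, 4, true));
    return 0;
}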

Unity 5 iOS font distortion issue

SOLVED:
Before handling the video RGBA data and pushing it to the texture, I was setting the unpack alignment to 4: glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
Simply discarding this, or setting it back to 1 after handling the video frame memory, fixes the issue:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
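In other words, treat the unpack alignment as shared state and restore it around your own upload. A minimal sketch, assuming a raw GL ES 2.0 texture upload running alongside Unity's renderer:

#include <GLES2/gl2.h>

void uploadVideoFrame(GLuint tex, const void* rgba, int width, int height)
{
    // Remember whatever alignment the engine is currently using.
    GLint previousAlignment = 0;
    glGetIntegerv(GL_UNPACK_ALIGNMENT, &previousAlignment);

    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4); // tightly packed RGBA rows
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba);

    // Put the state back; leaving it at 4 corrupts Unity's subsequent
    // single-channel font atlas uploads, hence the mangled glyphs.
    glPixelStorei(GL_UNPACK_ALIGNMENT, previousAlignment);
}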
- END UPDATE -
Update: This only happens when I am uploading a texture to a mesh, which is done with OpenGL ES 2.0. That mesh is in 3D space, and it does not overlap the 2D UI text even after the 2D and 3D are composed together. Merely disabling the plane mesh entirely fixes this. Any indication as to why, or how to fix this, would be greatly appreciated.
Original Post:
I would appreciate any insight as to why the font looks so odd in the image below and how to fix it. This does not happen in the editor, only on device.
I have tried every suggestion that I have seen out there: the script from http://answers.unity3d.com/questions/900663/why-are-my-unity-ui-fonts-rendering-incorrectly.html, for instance, as well as rebuilding the font with a much smaller subset of characters, since someone suggested the font atlas gets full and drops glyphs to make room for dynamic character changes.
Here is an image:
This DOES NOT happen when the text is already entered, only after a field is updated, such as a score change or the like. I have tried both 32- and 64-bit builds, and it happens on new and old iPads. I have also tried multiple fonts, including Arial.

Strange rendering behavior with transparent texture in WebGL

I've been writing a little planet generator using Haxe + Away3D, deploying to HTML5/WebGL, but I'm having a strange issue when rendering my clouds. I have the planet mesh, and then the cloud mesh, slightly bigger, in the same position.
I'm using a perlin noise function to generate the planetary features and the cloud formations, writing them to a bitmap and applying the bitmap as the texture. Now, strangely, when I deploy this to iOS or C++/OSX, it renders exactly how I wanted it to:
Now, when I deploy to WebGL, it generates an identical diffuse map, but renders as:
(The above was at a much lower resolution, due to how often I was reloading the page. The problem persisted at higher resolutions.)
The clouds are there, and the edges look alright, wispy and translucent, but the inside is opaque and seemingly rendered differently (each pixel is the same color; only the alpha channel changes).
I realize this likely has something to do with how the code is ultimately compiled/generated in Haxe, but I'm hoping it's something simple like a render setting or blending mode I'm not setting. Since I'm not even sure exactly what is happening, I don't know where to look.
Here's the diffuse map being produced. I overlaid it on red so the clouds would be viewable.
BitmapData.perlinNoise does not work on HTML5.
You should implement it yourself, or use a pre-rendered image.
public function perlinNoise (baseX:Float, baseY:Float, numOctaves:UInt, randomSeed:Int, stitch:Bool, fractalNoise:Bool, channelOptions:UInt = 7, grayScale:Bool = false, offsets:Array<Point> = null):Void {
    openfl.Lib.notImplemented ("BitmapData.perlinNoise");
}
https://github.com/openfl/openfl/blob/c072a98a3c6699f4d334dacd783be947db9cf63a/openfl/display/BitmapData.hx
Also, WebGL-Inspector is very useful for debugging WebGL apps. Have you used it?
http://benvanik.github.io/WebGL-Inspector/
Well then, did you upload that image from a ByteArray?
Lime once allowed accessing a ByteArray with the array index operator, even though that shouldn't work on JS. This is fixed in the latest version of Lime to avoid mistakes.
I used the __get and __set methods instead of [] to access a byte array.
Away3D itself might be the cause of this issue too, because the backend code is generated from different source files depending on the target you use.
For example, the byteArrayOffset parameter of Texture.uploadFromByteArray is supported on HTML5, but not on native.
If Away3D is the cause of the problem, which part of the code is causing it? I'm not sure for now.
EDIT: I've also experienced a problem with OpenFL's latest WebGL backend (I think legacy OpenFL doesn't have this problem): OpenFL's sprite renderer was changing colorMask (and possibly other OpenGL render states) without my knowledge! This occurred because my code and OpenFL's sprite renderer were actually using the same OpenGL context. I got rid of the problem by manually disabling OpenFL's sprite renderer.
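If you can't disable the other renderer, a workaround is to defensively re-assert the states your pass depends on each frame. A sketch in GL ES 2.0 terms (which states you must reset depends on what the sprite renderer actually touches):

#include <GLES2/gl2.h>

// Call at the start of your own render pass: another renderer sharing the
// context may have changed these states since your last frame.
void beginTranslucentPass()
{
    // The sprite renderer was observed changing colorMask; force it back.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    // Re-assert the blending that translucent clouds rely on.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // Translucent geometry: test against depth but don't write it.
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);
}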

How to prevent pixel bleeding from rendering sprite-sheet generated with Zwoptex on older iOS device?

I packed several individual sprites into a big 2048*2048 sprite sheet with Zwoptex, then scale it down to match each iOS device: 2048*2048 for iPad HD, 512*512 for iPhone, etc.
I found out that the "Spacing Pixel" option in Zwoptex affects how the sprites render on device. That value is the space (in pixels) between the individual sprites packed into the sheet. If I set the value too low, pixel bleeding is more likely to occur, on newer and better devices as well as older ones. If I increase it, the chance drops, and for a value that is high enough, pixel bleeding (hopefully) won't happen at all.
Anyway, I set the value to around 17-20, which is really high and consumes valuable space on the sprite sheet. Even so, on the iPhone simulator there's still a problem.
We can only restrict the game from installing on certain iOS versions, but an iPhone 3GS can still update to the newest version, so I need to solve this problem.
So I want to know how to prevent the pixel bleeding problem across all iOS devices, from iPhone to iPad (retina included).
It would be great to know any best practice or practical solution for choosing a "Spacing Pixel" value that makes the problem go away.
If only the Simulator shows those artifacts, then by all means ignore them! None of your users will ever run your app in the Simulator, will they? The Simulator isn't perfect.
A spacing of 2 pixels around each texture atlas sprite frame is enough (and generally recommended) to kill all artifacts caused by pixel bleeding. If you still see artifacts, they're not a direct result of too little spacing. They can't be.
I'm not sure about Zwoptex; do you actually have to create each scaled-down version of the texture atlas manually? You may be doing something wrong there. Try TexturePacker; I wouldn't be surprised if the artifacts go away just like that.
For example, one type of artifact is caused by not placing objects at integer positions. You may see a gap (usually a black line) between two objects if their positions are something like (1.23545, 10.0) and (41.23545, 10.0); using the integer coordinates (1, 10) and (41, 10) would fix it. The difficulty is that this goes all the way up the hierarchy: if the objects' parent node is also at a non-integer position, you can still get this line-gap artifact. A sketch of the fix follows below.
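As an illustration, snapping a final world-space position to whole device pixels looks roughly like this (a generic sketch; snapToPixels is a hypothetical helper, not a cocos2d API):

#include <cmath>

struct Vec2 { float x, y; };

// Round a position to device-pixel boundaries. Rounding happens in pixel
// units rather than points so retina (content scale 2) still lands on
// whole device pixels.
Vec2 snapToPixels(Vec2 worldPos, float contentScaleFactor)
{
    return {
        std::round(worldPos.x * contentScaleFactor) / contentScaleFactor,
        std::round(worldPos.y * contentScaleFactor) / contentScaleFactor
    };
}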
If you search around you'll find numerous cause-and-effect discussions of cocos2d artifacts. One thing to keep in mind: do not use the CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL macro. It's not a fix; it doesn't even come close. It kind of fixes the non-integer-position artifact and introduces another (much worse, IMHO): aliasing/flickering during movement.

Can Direct3D 11 do offscreen-only rendering (no swap chain)?

Is it possible to use Direct3D 11 for rendering to textures only, i.e. without creating a swap chain and without creating any window? I have tried that and all my API calls succeed. The only problem is that the picture I am downloading from a staging texture is black.
I finally managed to capture a full stream using PIX (Parallel Nsight does not seem to work at all). PIX shows that my render target is black, too, although I clear it to blue.
Is it possible at all what I intend to do? If so, how would one do it?
Actually, the whole thing works as intended if you initialise the device correctly.
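For reference, a minimal windowless setup looks roughly like this. This is a sketch only; the usual culprits for an all-black readback are a missing render-target binding or an unset viewport:

#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

bool renderOffscreen(int width, int height)
{
    // No HWND and no swap chain: use D3D11CreateDevice rather than
    // D3D11CreateDeviceAndSwapChain.
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> context;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                 0, nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, &context)))
        return false;

    // Create the offscreen render target.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET;
    ComPtr<ID3D11Texture2D> target;
    device->CreateTexture2D(&desc, nullptr, &target);

    ComPtr<ID3D11RenderTargetView> rtv;
    device->CreateRenderTargetView(target.Get(), nullptr, &rtv);

    // Bind target and viewport; forgetting either leaves the output black.
    context->OMSetRenderTargets(1, rtv.GetAddressOf(), nullptr);
    D3D11_VIEWPORT vp = { 0.0f, 0.0f, (float)width, (float)height, 0.0f, 1.0f };
    context->RSSetViewports(1, &vp);

    const float blue[4] = { 0.0f, 0.0f, 1.0f, 1.0f };
    context->ClearRenderTargetView(rtv.Get(), blue);
    // ... draw calls would go here ...

    // Copy into a staging texture for CPU readback.
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    ComPtr<ID3D11Texture2D> staging;
    device->CreateTexture2D(&desc, nullptr, &staging);
    context->CopyResource(staging.Get(), target.Get());

    D3D11_MAPPED_SUBRESOURCE mapped;
    if (FAILED(context->Map(staging.Get(), 0, D3D11_MAP_READ, 0, &mapped)))
        return false;
    // mapped.pData now holds the pixels; rows are mapped.RowPitch bytes apart.
    context->Unmap(staging.Get(), 0);
    return true;
}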
