Core Video texture cache and minification issue - iOS

I use the Core Video texture cache for my OpenGL textures, and I have an issue with rendering such textures under minification. The GL_TEXTURE_MIN_FILTER parameter has no effect: minification always uses the same interpolation as GL_TEXTURE_MAG_FILTER. The interesting fact is that everything works fine when I create the pixel buffer with the CVPixelBufferCreateWithBytes function. The problem appears when I use CVPixelBufferCreate.
Environment:
iOS 7
OpenGL ES 2.0
iPad mini, iPad 3, iPad 4.
I've developed a simple application which demonstrates this issue: https://github.com/Gubarev/iOS-CVTextureCache. The demo application can render a checkerboard texture (cell size 1x1) in three modes:
Regular OpenGL texture (ok).
Core Video texture, pixel buffer created with CVPixelBufferCreate (problem).
Core Video texture, pixel buffer created with CVPixelBufferCreateWithBytes (ok).
The texture is rendered twice with slight minification (achieved by using an OpenGL viewport smaller than the texture):
Left image rendered with minification filter GL_NEAREST, magnification filter GL_NEAREST.
Right image rendered with minification filter GL_LINEAR, magnification filter GL_NEAREST.
The image below demonstrates proper minification in the case of a regular OpenGL texture. It's clearly visible that the minification filter setting takes effect. The same results can be obtained when the "CVPixelBufferCreateWithBytes" approach is used. The problem appears with the "CVPixelBufferCreate" approach: both images are minified with the magnification filter setting (GL_NEAREST in particular).
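For context, here is a minimal sketch (not code taken from the demo project) of the problematic path described above: an empty, IOSurface-backed pixel buffer made with CVPixelBufferCreate is wrapped in an OpenGL ES texture through the texture cache, and the filters are then set on it. The 256x256 size, the BGRA format, and the textureCache argument are assumptions.

#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// `textureCache` is assumed to have been created earlier with
// CVOpenGLESTextureCacheCreate() for the current EAGLContext.
static void CreateCheckerboardTexture(CVOpenGLESTextureCacheRef textureCache,
                                      CVPixelBufferRef *outBuffer,
                                      CVOpenGLESTextureRef *outTexture)
{
    // 1. Empty, IOSurface-backed pixel buffer (the "problem" path).
    NSDictionary *attrs = @{ (__bridge id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
    CVPixelBufferCreate(kCFAllocatorDefault, 256, 256, kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attrs, outBuffer);

    // (Fill the buffer with the checkerboard between CVPixelBufferLockBaseAddress()
    //  and CVPixelBufferUnlockBaseAddress() here.)

    // 2. Wrap the buffer in an OpenGL ES texture through the cache.
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                                 *outBuffer, NULL, GL_TEXTURE_2D,
                                                 GL_RGBA, 256, 256, GL_BGRA_EXT,
                                                 GL_UNSIGNED_BYTE, 0, outTexture);

    // 3. Set the filters. With this path GL_TEXTURE_MIN_FILTER appears to be
    //    ignored, while the CVPixelBufferCreateWithBytes path honours it.
    glBindTexture(CVOpenGLESTextureGetTarget(*outTexture),
                  CVOpenGLESTextureGetName(*outTexture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}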

Related

"Template Rendering is not supported in texture atlases" SpriteKit and Xcode

I am making a game in SpriteKit and I am using a texture atlas for all of my textures. I am using the setting
Scale Factors: Single Vector
Xcode is giving me a warning for this TextureAtlas:
Template rendering is not supported in texture atlases.
I am unsure what this means.
In your assets folder, one of your textures is marked Render As: Template Image. Texture atlases do not support this render mode, so you need to change it back to Default.

WebGL: Asynchronous texImage2D?

I am updating some textures of the scene all the time with new images.
The problem is that uploading is synchronous and texImage2D takes ~100 ms. It takes that long even if the texture is not used during rendering of the next frame, or if rendering is switched off.
I am wondering, is there any way to upload texture data asynchronously?
Additional conditions:
I should mention that there is an old texture which can stay active until uploading of the new one to the GPU is finished.
The solution is to use texSubImage2D and upload the image to the GPU in small portions. Once uploading is finished, activate your new texture and delete the old one.
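A rough sketch of that idea (assumptions: gl is a WebGLRenderingContext and the new image is already available as an RGBA Uint8Array called pixels; rowsPerFrame controls how much work each frame does):

function beginChunkedUpload(gl, width, height, pixels, rowsPerFrame) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Allocate the full-size storage once, without uploading any data yet.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  let y = 0;
  return {
    texture,
    // Call once per frame; keep rendering with the old texture until it returns true.
    uploadNextChunk() {
      if (y >= height) return true;
      const rows = Math.min(rowsPerFrame, height - y);
      gl.bindTexture(gl.TEXTURE_2D, texture);
      gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, y, width, rows,
                       gl.RGBA, gl.UNSIGNED_BYTE,
                       pixels.subarray(y * width * 4, (y + rows) * width * 4));
      y += rows;
      return y >= height;
    },
  };
}

Once uploadNextChunk() reports completion, switch your draw calls to the new texture and call gl.deleteTexture on the old one.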
is there any way to upload texture data asynchronously?
No, not in WebGL 1.0. There might be in WebGL 2.0, but that's not out yet.
Some things you might try:
Make it smaller
What are you uploading? Video? Can you make it smaller?
Have you tried different formats?
WebGL converts from whatever format the image is stored in to the format you request. For example, if you load a .JPG the browser might decode it to an RGB image. If you then upload it with gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, it has to convert the image to RGBA before uploading (more time).
Do you have UNPACK_FLIP_Y_WEBGL set to true?
If so, WebGL has to flip your image before uploading it.
Do you have UNPACK_COLORSPACE_CONVERSION_WEBGL set to BROWSER_DEFAULT_WEBGL?
If not, WebGL may have to re-decompress your image.
Do you have UNPACK_PREMULTIPLY_ALPHA_WEBGL set to false or true?
Depending on how the browser normally stores images, it might have to convert the image to the format you're requesting.
Images have to be decompressed.
Are you sure your time is spent in "uploading" vs "decompressing"? If you switch to uploading a TypedArray of the same dimensions, does it speed up? (See the sketch below.)
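A small sketch of the checks above (gl, width, height, and rgbaPixels are placeholders for your own objects):

// Unpack settings that avoid extra per-upload conversion work when uploading
// from an image element (assumes your shader can handle un-flipped,
// non-premultiplied data).
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, false);
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, false);
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.BROWSER_DEFAULT_WEBGL);

// Uploading a raw TypedArray of the same dimensions skips image decoding
// entirely, which separates "decode" time from "upload" time when profiling.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, rgbaPixels);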

iOS GLKit texture blurry on retina display

I am working with OpenGL ES 2.0 and GLKit for iOS.
My app only needs to run at a resolution of 480 by 320, just like the pre-iPhone 4 displays, as it uses retro-style graphics.
The texture graphics are made according to this resolution and a GLKit projection matrix of (0, 480, 0, 320).
This all looks fine on the 3GS, but on later models OpenGL (understandably) does some sort of resizing in order to stretch the scene. This resizing results in an undesirable blurring/smoothing of the graphics, probably using some sort of default interpolation scheme.
Is it possible to affect the way this resizing is done by OpenGL, preferably setting it to no interpolation so that the pixels are just directly enlarged?
You need to set the scaling filters on the view's layer, like this:
self.layer.magnificationFilter = kCAFilterNearest;
self.layer.minificationFilter = kCAFilterNearest;
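If the GL content lives in a GLKView managed by a GLKViewController rather than a custom UIView subclass, the same two lines would go on that view's layer, for example (a sketch only; assumes QuartzCore is linked):

#import <GLKit/GLKit.h>
#import <QuartzCore/QuartzCore.h>

// In a GLKViewController subclass:
- (void)viewDidLoad
{
    [super viewDidLoad];
    GLKView *glView = (GLKView *)self.view;
    // Scale the low-resolution content up with nearest-neighbour sampling
    // instead of the default (blurring) linear filter.
    glView.layer.magnificationFilter = kCAFilterNearest;
    glView.layer.minificationFilter  = kCAFilterNearest;
}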

Core Image Auto Adjustments Rendering MUCH Too Slowly on CPU

Understandably, rendering on the CPU takes much more time than rendering on the GPU. However, photos taken with the iPhone 4's camera are too large to render on the GPU, so they must be rendered on the CPU. This works well for Core Image filters, except for the filters returned from autoAdjustmentFiltersWithOptions:. When rendering a CIImage modified with these filters, it takes 40+ seconds, as opposed to a split second on the GPU.
Steps to Reproduce:
Create a CIImage with an image larger than 2048x2048 on an iPhone 4, or 4096x4096 on iPhone 4S.
Call the method autoAdjustmentFiltersWithOptions: on the CIImage.
Apply the returned filters to the CIImage.
Render the CIImage to a CGImageRef.
Expected Results:
The image takes a few seconds longer to render than it would when using the GPU.
Actual Results:
It takes upwards of 40 seconds to render.
Notes:
The Photos app can enhance large photos much faster than this method, which shows that the iPhone 4/4S's hardware is capable of this, regardless of whether the Photos app uses private APIs.
Anyone have any advice?
autoAdjustmentFiltersWithOptions: uses the CPU to determine the filters to apply. Try downscaling the image before calling it, then apply the returned filters to the original image. Also, consider turning off red-eye detection if you don't need it.
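A minimal sketch of that approach (the 0.25 scale factor and the fullImage name are arbitrary; kCIImageAutoAdjustRedEye is the option key for disabling red-eye detection):

#import <CoreImage/CoreImage.h>

// `fullImage` stands in for your full-resolution CIImage.
CIImage *AutoEnhancedImage(CIImage *fullImage)
{
    // 1. Analyze a downscaled copy; per the note above, the analysis step is
    //    what burns CPU time, and it needs far less resolution than the output.
    CIImage *smallImage =
        [fullImage imageByApplyingTransform:CGAffineTransformMakeScale(0.25, 0.25)];
    NSArray *filters =
        [smallImage autoAdjustmentFiltersWithOptions:@{ kCIImageAutoAdjustRedEye : @NO }];

    // 2. Apply the returned filters to the original, full-resolution image.
    CIImage *adjusted = fullImage;
    for (CIFilter *filter in filters) {
        [filter setValue:adjusted forKey:kCIInputImageKey];
        adjusted = filter.outputImage;
    }
    return adjusted; // render with your existing CIContext as before
}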

Rendering large textures on iOS OpenGL

I'm developing an iPad 2 app which will overlay panoramic views on top of physical space using Cinder.
The panorama images are about 12900x4000 pixels; they are being loaded from the web.
Right now the line to load the image is:
mGhostTexture = gl::Texture( loadImage( loadUrl( "XXX.jpg" ) ) );
It works fine for small images (e.g. 500x500), but not so well for the full images (the rendered texture becomes a large white box).
I assume I'm hitting a size limit. Does anyone know a way to render or split up large images in OpenGL and/or Cinder?
For OpenGL ES 2.0:
"The maximum 2D or cube map texture size is 2048 x 2048. This is also the maximum renderbuffer size and viewport size."
Also, it seems a solution may be present here:
Using libpng to "split" an image into segments
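Alternatively, the splitting could be done in Cinder itself rather than with libpng. A rough sketch, assuming the pre-0.9 Surface / gl::Texture API that the question's code uses (tile size, variable names, and the lack of error handling are all simplifications):

#include "cinder/Surface.h"
#include "cinder/Area.h"
#include "cinder/Rect.h"
#include "cinder/ImageIo.h"
#include "cinder/Url.h"
#include "cinder/gl/gl.h"
#include "cinder/gl/Texture.h"
#include <algorithm>
#include <vector>

using namespace ci;

std::vector<gl::Texture> mTiles;
std::vector<Area>        mTileAreas;

// Called once (e.g. from setup()): cut the panorama into GPU-sized tiles.
void loadPanoramaTiles()
{
    Surface panorama( loadImage( loadUrl( "XXX.jpg" ) ) );

    GLint maxSize = 0;
    glGetIntegerv( GL_MAX_TEXTURE_SIZE, &maxSize ); // 2048 on the iPad 2

    for( int32_t y = 0; y < panorama.getHeight(); y += maxSize ) {
        for( int32_t x = 0; x < panorama.getWidth(); x += maxSize ) {
            Area area( x, y,
                       std::min<int32_t>( x + maxSize, panorama.getWidth() ),
                       std::min<int32_t>( y + maxSize, panorama.getHeight() ) );
            mTiles.push_back( gl::Texture( panorama.clone( area ) ) );
            mTileAreas.push_back( area );
        }
    }
}

// Called every frame (e.g. from draw()): draw each tile at its original offset.
void drawPanoramaTiles()
{
    for( size_t i = 0; i < mTiles.size(); ++i ) {
        const Area &a = mTileAreas[i];
        gl::draw( mTiles[i], Rectf( (float)a.x1, (float)a.y1,
                                    (float)a.x2, (float)a.y2 ) );
    }
}

Note that this keeps the full-size Surface in CPU memory while the tiles are built (roughly 150-200 MB at 12900x4000), so on an iPad 2 it may be worth downsampling the panorama or streaming tiles from disk instead.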

Resources