Rendering large textures on iOS OpenGL - iPad

I'm developing an iPad 2 app which will overlay panoramic views on top of physical space using Cinder.
The panorama images are about 12900x4000 pixels; they are being loaded from the web.
Right now the line to load the image is:
mGhostTexture = gl::Texture( loadImage( loadUrl( "XXX.jpg" ) ) );
This works fine for small images (e.g. 500x500), but not so well for the full-size images: the rendered texture becomes a large white box.
I assume I'm hitting a size limit. Does anyone know a way to render or split up large images in OpenGL and/or Cinder?

For OpenGL ES 2.0:
"The maximum 2D or cube map texture size is 2048 x 2048. This is also the maximum renderbuffer size and viewport size."
Also, a possible solution seems to be described here:
Using libpng to "split" an image into segments
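If you would rather do the split inside Cinder than with libpng, one option is to cut the decoded Surface into tiles with Surface::clone and upload each tile as its own gl::Texture, then draw each tile at its own offset. Here is a rough, untested sketch (it assumes a Cinder 0.8.x-style gl::Texture, a valid GL context, and that the roughly 200 MB decoded panorama fits in memory; the function name is made up for the example):

    #include "cinder/Surface.h"
    #include "cinder/ImageIo.h"
    #include "cinder/Url.h"
    #include "cinder/gl/gl.h"
    #include "cinder/gl/Texture.h"
    #include <vector>
    #include <algorithm>

    using namespace ci;

    std::vector<gl::Texture> loadPanoramaTiles( const std::string &url )
    {
        // Ask the GPU for its per-texture limit (2048 on iPad 2).
        GLint maxSize = 0;
        glGetIntegerv( GL_MAX_TEXTURE_SIZE, &maxSize );

        // Decode the whole panorama into CPU memory once.
        Surface full( loadImage( loadUrl( url ) ) );

        // Cut it into maxSize x maxSize chunks, each small enough to upload.
        std::vector<gl::Texture> tiles;
        for( int y = 0; y < full.getHeight(); y += maxSize ) {
            for( int x = 0; x < full.getWidth(); x += maxSize ) {
                Area tileArea( x, y,
                               std::min<int>( x + maxSize, full.getWidth() ),
                               std::min<int>( y + maxSize, full.getHeight() ) );
                tiles.push_back( gl::Texture( full.clone( tileArea ) ) );
            }
        }
        return tiles; // draw each one offset by its (x, y) origin
    }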

Related

Supplying the right image size when not knowing what the size will be at runtime

I am displaying a grid of images (3 rows x 3 columns) in a collection view. Each image is a square and its width is determined to be 1/3 of the collectionView's width. The collection view is pinned to the left and right margins of the mainView.
I do not know what the image height and width will be at runtime, because of the different screen sizes of various iPhones. For example, each image will be 100x100 display pixels on a 5S, but 130x130 on a 6 Plus. I was advised to supply images that exactly match the size on screen. Bigger images tend to become pixelated and over-sharp when downsized. How does one tackle such a problem?
The usual solution is to supply three versions, for single-, double-, and triple-resolution screens, and downsize in real time by redrawing with drawInRect into a graphics context when the image is first needed.
I do not know what the image height and width will be at runtime, because of the different screen sizes of various iPhones. For example, each image will be 100x100 display pixels on a 5S, but 130x130 on a 6 Plus
Okay, so your first sentence is a lie. The second sentence proves that you do know what the size is to be on the different screen sizes. Clearly, if I tell you the name of a device, you can tell me what you think the image size should be. So, if you don't want to downscale a larger image at runtime because you don't like the resulting quality, simply supply actual images at the correct size and resolution for every device, and use the correct image on the actual device type you find yourself running on.
If your images are photos or raster-type images created using a raster drawing tool, then somewhere you will have to scale the original to the sizes you want. You can either do this while running on iOS, or create the sets up front using a tool that can give you better scaling results. Unfortunately, the only perfect image will be the original, with everything else being a distortion of the truth.
For icons, the only accurate rendering solution is to use vector graphics. Tools like Adobe Illustrator will let you create images which you can scale to different sizes without losing clarity. Unfortunately this still leaves you generating images up front. You can script this generation using most tools, and given you said your images were all square, the total number needed is not huge. At most you need 3 for iPhone (4/5 are the same width, 6 and 6+) and 2 for iPad (#1 for mini/iPad 1 and #2 for retina).
Although iOS has no direct support I know of for vector image rendering, there are some third-party tools. http://www.paintcodeapp.com/ is an example which seems to let you import or draw vector images and then generate drawing code to run in your app. This kind of tool would give you what you want, as the images are now vector drawings drawn at the scale you choose at run time. $99 though.
There is also SVGKit (https://github.com/SVGKit/SVGKit), but I'm not sure how good or bad it is. It seems to let you simply load and render directly from SVG files. Might be worth trying.
So in summary, I think you either generate the relatively small set of images up front using a tool whose output you can control, take the hit on iOS and let it scale the images, or use a third-party vector-to-image rendering kit which would give you what you want.

GLKTextureLoader not loading jpg on "The new iPad"

I'm trying to create a cube map from six jpg files from the web in GLKit. It works great on my iPhone 6+, but when I run the same code on "The new iPad" the cube map is just black when applied to an object. If I try the same thing with png files it works. Is there anything specific that needs to be done to load jpgs correctly on certain hardware?
The error from cubeMapWithContentsOfFiles is nil, so it appears GLKit thinks it loaded the texture properly.
Here is a demo project: http://s.swic.name/Yw8F
If the dimensions of the textures you're generating are themselves determined by the device's display dimensions (e.g. rendering a full-screen UIView to a texture), then the resulting cube map could easily fall within the maximum texture size (GL_MAX_TEXTURE_SIZE) on some devices but exceed it on larger devices. What are the pixel dimensions of your cube map on the iPhone 6 Plus vs. the iPad 4th generation? If they exceed 4096 in either dimension you could be in trouble.
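If you want to rule that out on-device, you can query the limits directly; a minimal sketch (plain C-style GL calls, assuming a current OpenGL ES 2.0 context):

    #include <OpenGLES/ES2/gl.h>
    #include <stdio.h>

    // Log the device's texture limits so the cube map face size can be
    // compared against them (call with a current EAGLContext).
    static void logTextureLimits( void )
    {
        GLint maxTexture = 0, maxCubeFace = 0;
        glGetIntegerv( GL_MAX_TEXTURE_SIZE, &maxTexture );
        glGetIntegerv( GL_MAX_CUBE_MAP_TEXTURE_SIZE, &maxCubeFace );
        printf( "max 2D texture: %d, max cube map face: %d\n",
                (int)maxTexture, (int)maxCubeFace );
    }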

What's the best way to use big textures (2048*1536) in Unity3d with NGUI on iOS?

I'm using Unity3d (4.3.1) and NGUI to create a 2D iOS (iPad) app. I also need to use a lot of full-screen images (about 100 images at 2048x1536), for a gallery, for example.
Right now I'm importing them with the GUI texture type, with an override for iPhone (max size 2048, compression quality: normal), and I'm using a UITexture with the Unlit/Transparent shader to show them.
However, after about 40 images are in the project, Xcode reports the app was terminated due to a memory error. So the question is: what type of images do I need, and with which import settings, to make them work?
I'm using an iPad 3 as a test device with Xcode 5.1.1. I'll be thankful for any help!
I also need to use a lot of full-screen images (about 100 images at 2048x1536), for a gallery, for example.
I think your 2048x2048 images use a huge amount of memory. A single uncompressed 2048x2048 true-color texture uses about 16 MB (2048 x 2048 x 4 bytes), so this case needs about 1600 MB of memory! A normal application shouldn't use much more than about 200 MB.
So I think you need to reduce memory use:
Remember that this texture is going to be expanded to 2048x2048 by Unity (http://www.opengl.org/wiki/NPOT_Texture). So if you reduce the image to 1500x1000, your application still uses a 2048x2048 texture. But if you can reduce it to 1024x1024, do it: a 1024x1024 image uses just 4 MB of memory.
If you can use texture compression, use it. PVRTC 4-bit compression (https://docs.unity3d.com/Documentation/Manual/ReducingFilesize.html) makes the file size about 1/8 of true color, and the memory size is also reduced (maybe to half).
If your application doesn't display all the images at once, load them dynamically and use thumbnails.
Good luck:D
If you want to make a gallery-like app to render photos, maybe you can try a different approach:
Create two large editable textures and fill their texels with image data (they must be editable, otherwise you won't have access to write image data directly into them).
If you still have memory issues, or if you want to use less memory, you can use several smaller textures as tiles and render the image parts into each smaller texture. Remember to configure the texture borders correctly, or don't use the border texels, to avoid wrapping problems (see the sketch after this list).
The best way is to use a smaller texture. On an iPad you will need a magnifying glass to really appreciate the difference between 1024x1024 and larger textures. Remember that an iPad screen is smaller (7"~10") than a computer's, and with filtering enabled it is really hard to tell the difference.
If you still need to manage such a large texture for some other reason (zooming or similar), I recommend one of the following approaches:
Split the texture into layers with an alpha channel (transparency): backgrounds can usually be rendered at lower resolutions.
Also split the texture into blocks: most textures have repeating patterns.
Use compression.
Always avoid using such large textures if possible.
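For reference, in raw OpenGL ES terms (Unity exposes this through its texture import settings rather than GL calls), the border/wrapping point above amounts to clamping each tile so sampling at its edges doesn't wrap around to the opposite side and create visible seams. A minimal sketch, assuming tileTex is a texture ID that already contains one tile:

    #include <OpenGLES/ES2/gl.h>

    // Clamp a tile's edges so adjacent tiles don't show wrapping seams.
    static void configureTileSampling( GLuint tileTex )
    {
        glBindTexture( GL_TEXTURE_2D, tileTex );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    }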

Is there a maximum image width in iOS (image I have is 25020 x 238)? Image works when resized

This image (http://imgur.com/TyPtrxy) will not load in the simulator, although when I scale it to half the size it loads just fine. When trying to load the full image I just get a black box where it should be.
Yes, there is a maximum image size (number of pixels). The limit depends in part on the hardware, but it is generally in the range of 5 to 10 million pixels. This limit is related to the maximum size of textures that can be sent to the graphics card; therefore, it only applies to images that are drawn.
From the documentation:
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
It might be that you are hitting some limits on the maximum size of the CALayer (which in turn is dependent on the maximum OpenGL texture size supported by the hardware) that is backing the view. If you're exceeding the maximum size, a message like CoreAnimation: surface <size> is too large will be logged. It's also possible that the decompressed image may be too large to fit in memory. You should use CATiledLayer to display content of that size to ensure that it stays within the resource constraints of the device.
Just to expand a bit on the other answers.
The UIImage documentation (as of iOS 10) no longer seems to mention size limitations, although if you use a UIImageView with images whose dimensions are larger than the maximum texture size* supported by the device you happen to be using, you do get very large memory consumption at render time.
(The memory consumption I see in Instruments seems to indicate that the entire image is put into a 32-bits-per-pixel buffer when the CA::Layer is rendered.)
If your app doesn't get killed by the OS due to memory usage, the UIImageView does still end up displaying the image, though.
Given this, you'll still need strategies to deal with very large images.
* You can check the maximum texture size using something like glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);. Just make sure you've set the EAGLContext's current context to something non-nil before querying OpenGL, otherwise you'll get zero.

Core Video texture cache and minification issue

I use the Core Video texture cache for my OpenGL textures. I have an issue with rendering such textures when they are minified: the GL_TEXTURE_MIN_FILTER parameter has no effect, and interpolation for minification is always the same as GL_TEXTURE_MAG_FILTER. The interesting fact is that everything works fine when I create the pixel buffer with the CVPixelBufferCreateWithBytes function. The problem appears when I use CVPixelBufferCreate.
Environment:
iOS 7
OpenGL ES 2.0
iPad mini, iPad 3, iPad 4.
I've developed a simple application which demonstrates this issue: https://github.com/Gubarev/iOS-CVTextureCache. The demo application can render a checkerboard texture (cell size 1x1) in three modes:
Regular OpenGL texture (ok).
Core Video texture, pixel buffer created with CVPixelBufferCreate (problem).
Core Video texture, pixel buffer created with CVPixelBufferCreateWithBytes (ok).
The texture is rendered twice with slight minification (achieved by using an OpenGL viewport smaller than the texture):
Left image rendered with minification filter GL_NEAREST, magnification filter GL_NEAREST.
Right image rendered with minification filter GL_LINEAR, magnification filter GL_NEAREST.
The image below demonstrates proper minification in the case of a regular OpenGL texture: it's clearly visible that the minification filter setting takes effect. The same results are obtained when the "CVPixelBufferCreateWithBytes" approach is used. The problem appears with the "CVPixelBufferCreate" approach: both images are minified with the magnification filter setting (GL_NEAREST in particular).
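For reference, this is roughly the setup being described: applying the filters to a texture obtained from the texture cache. It is a sketch of the setup rather than a fix, and it assumes cvTexture is a CVOpenGLESTextureRef previously returned by CVOpenGLESTextureCacheCreateTextureFromImage:

    #include <CoreVideo/CVOpenGLESTexture.h>
    #include <OpenGLES/ES2/gl.h>

    // Bind the cache-backed texture and set its filters. With pixel buffers
    // made by CVPixelBufferCreate, the MIN filter set here is the one that
    // appears to be ignored in the demo project.
    static void applyFilters( CVOpenGLESTextureRef cvTexture )
    {
        GLenum target = CVOpenGLESTextureGetTarget( cvTexture );
        GLuint name   = CVOpenGLESTextureGetName( cvTexture );

        glBindTexture( target, name );
        glTexParameteri( target, GL_TEXTURE_MIN_FILTER, GL_LINEAR );  // minification
        glTexParameteri( target, GL_TEXTURE_MAG_FILTER, GL_NEAREST ); // magnification
    }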
