Apple's docs state:
You should avoid creating UIImage objects that are greater than 1024 x
1024 in size. Besides the large amount of memory such an image would
consume, you may run into problems when using the image as a texture
in OpenGL ES or when drawing the image to a view or layer. This size
restriction does not apply if you are performing code-based
manipulations, such as resizing an image larger than 1024 x 1024
pixels by drawing it to a bitmap-backed graphics context. In fact, you
may need to resize an image in this manner (or break it into several
smaller images) in order to draw it to one of your views.
I assume this means that if we are working with non-square images, we should break them into smaller images? Is there any specific documentation or explanation of this, or does anyone have any tips from experience?
Thanks for reading.
On the pre-A5 iOS devices, the maximum OpenGL ES texture size was 2048x2048 (Apple's documentation is incorrect in this regard by saying it's 1024x1024). What that means is that you can't have an image larger than that in either dimension. The newer iOS devices (iPhone 4S, iPad 2, iPad 3) have a maximum texture size of 4096x4096.
It does not mean that you have to have square images, just that an image must not have its width or height exceed 2048 (again, 4096 on newer devices). If you try to do so, I believe your image will just render as black.
This used to be a limitation for all UIViews not backed by a CATiledLayer, but I believe they now do tiling on large enough views automatically. If you need to work with an image larger than 2048x2048, you'll need to host it in a CATiledLayer or the like.
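If you do go the CATiledLayer route, a minimal sketch of a tile-backed view looks something like the following (in Swift; TiledImageView and its image property are made-up names for illustration, and a real implementation would crop the source per tile rather than redraw the whole image):

```swift
import UIKit

// A minimal sketch of a view backed by CATiledLayer, so a large image
// is drawn tile by tile on demand instead of as one huge texture.
// TiledImageView and its image property are made-up names for illustration.
class TiledImageView: UIView {
    var image: UIImage?

    // Back this view with a CATiledLayer instead of a plain CALayer.
    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        (layer as? CATiledLayer)?.tileSize = CGSize(width: 256, height: 256)
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
    }

    // draw(_:) is invoked once per tile; Core Animation clips each call
    // to that tile's rect, so only visible tiles get rendered.
    override func draw(_ rect: CGRect) {
        image?.draw(in: bounds)
    }
}
```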
The memory cautions are worth paying attention to, though. Images are stored in their uncompressed form in memory, no matter their source, so you're looking at 16,777,216 bytes per 2048x2048 image (4 bytes per pixel for RGBA). That can add up pretty quickly, if you're not careful.
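If you just need to get under the texture limit, the "code-based manipulation" the docs quote mentions boils down to drawing the oversized image into a bitmap-backed graphics context at a smaller size. A rough Swift sketch, assuming the 2048-pixel cap of the older devices:

```swift
import UIKit

// Downscale an image so that neither dimension exceeds maxDimension,
// by drawing it into a bitmap-backed graphics context.
func downscale(_ image: UIImage, maxDimension: CGFloat = 2048) -> UIImage {
    let largestSide = max(image.size.width, image.size.height)
    guard largestSide > maxDimension else { return image }

    let scale = maxDimension / largestSide
    let newSize = CGSize(width: image.size.width * scale,
                         height: image.size.height * scale)

    // UIGraphicsImageRenderer manages the bitmap context for us.
    let renderer = UIGraphicsImageRenderer(size: newSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
}
```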
Related
I am designing a game that makes use of large backgrounds. These are illustrated backgrounds, currently sitting at around 4.5 MB, and as backgrounds they remain in the scene for the entirety of the game.
First, I am not sure whether this would cause memory usage to ramp up, but I imagine it would, given that there are also other textures overlaid on the screen. That is my first question: can it cause memory issues?
Second, if I have a background that is 2048 x 1536 at 300 dpi and I compress/optimise this image, would that reduce memory/CPU usage? Is there documentation on how best to optimise these kinds of images?
There are several techniques to do that. It depends on how you're going to use the images.
If it's a background in motion, you can split it into tiles so that you render smaller images.
It depends on the format. Most people only know PNG and JPEG, but there are other projects/formats you can use; some of them produce smaller files but have slower read/write, so it's up to you how to use them, e.g. https://github.com/onevcat/APNGKit
If your background doesn't need an alpha channel, use JPEG over PNG and you'll save some space.
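If you want to check what the format change actually buys you, here is a quick Swift sketch using UIKit's standard encoders (the 0.8 quality is just an example value). Keep in mind this only shrinks the file on disk or in your bundle; once decoded, the image still occupies width x height x 4 bytes in memory.

```swift
import UIKit

// Compare encoded sizes for a background with no meaningful alpha.
// "background" is a placeholder for your own image.
func reportEncodedSizes(for background: UIImage) {
    if let png = background.pngData() {
        print("PNG:  \(png.count) bytes")
    }
    if let jpeg = background.jpegData(compressionQuality: 0.8) {
        print("JPEG: \(jpeg.count) bytes")
    }
}
```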
I am working on a project to recognize text in business cards and map it to the appropriate fields. I am using OpenCV for image processing, and I need to feed the preprocessed image to the Tesseract-OCR engine for text recognition. This link
states that images should have a DPI of at least 300. My image is 2560x1536 pixels at 72 DPI.
How do I increase the DPI to 300?
It is also said that it is beneficial to resize the image. How do I resize my image optimally for good OCR results?
"Tesseract works best on images which have a DPI of at least 300 dpi, so it may be beneficial to resize images." What does 'so' imply here? What is the relation between resizing an image and DPI?
For OCR, what really matters is the resolution in pixels, because the physical characters can range from tiny to huge independently of the DPI of the acquisition device.
As a rule of thumb, a stroke width of around 3 pixels is a good start. If it's lower, resizing might not help because the information is simply missing; if it's much higher, the running time might be excessive (or the OCR function may not be tailored to deal with it).
Also check that the package does not attempt to resize internally when there is a mismatch between its own assumption of stroke width and the DPI info stored in the header.
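To make the stroke-width rule concrete, here is a rough sketch in Swift (the same idea applies with OpenCV's resize); the measured stroke width is an assumed input from your own preprocessing, not something this code detects:

```swift
import UIKit

// Rescale an image so the typical character stroke lands near 3 px.
// measuredStrokeWidth is an assumed input, e.g. from your own analysis
// of the binarized image; this code does not detect it.
func rescaleForOCR(_ image: UIImage,
                   measuredStrokeWidth: CGFloat,
                   targetStrokeWidth: CGFloat = 3.0) -> UIImage {
    let scale = targetStrokeWidth / measuredStrokeWidth
    let newSize = CGSize(width: image.size.width * scale,
                         height: image.size.height * scale)
    let renderer = UIGraphicsImageRenderer(size: newSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
}
```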
I am just confused about this topic.
Resizing images has a couple of costs, including:
Scaling images is an imperfect process. A 3x image scaled down to 1x pixel density can have visual artifacts or fail to communicate its purpose to the user effectively.
Scaling images requires loading the full-size image, and on older devices with less memory and computing power those operations can lead to slower performance, especially during animations such as scrolling (see the sketch below).
As always, consider whether these are concerns for your specific use case and act accordingly.
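One common mitigation for the loading cost, sketched in Swift and assuming the image lives at a file URL, is to let ImageIO decode a downsampled version directly, so the full-size bitmap never has to be decoded into memory:

```swift
import UIKit
import ImageIO

// Decode a downsampled image straight from disk, rather than decoding
// the full-size bitmap first and scaling it afterwards.
func downsampledImage(at url: URL, maxPixelSize: Int) -> UIImage? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
        return nil
    }
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ]
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(
        source, 0, options as CFDictionary) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```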
For any given file data size, I want to be able to resize (or compress) a UIImage to fit within that data limit. This question is NOT about how to resize, or how to check file sizes... it is about an algorithm for getting this done in a performant way.
Searching here already, I found this thread, which talks about stepping the JPEG quality down with a linear or binary search. That isn't very performant, taking dozens of seconds at best.
I am working on iOS so images can be close to 10MB (from iPhone 4S). My target, although variable, is currently 3145728 bytes.
I am currently using UIImageJPEGRepresentation to compress a little, but to get to my low target it appears I would have to lose much quality for such a large photo. Is there a relation between UIImage size and NSData size? Is there some function where I can say something like:
area * X = dataSize
...and solve for a scale factor so I can resize in one shot?
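For concreteness, here is a sketch of what I mean, assuming compressed size scales roughly with pixel area at a fixed quality (which may well be wrong; that's what I'm asking):

```swift
import UIKit

// Estimate a single resize that should land near targetBytes, assuming
// JPEG size is roughly proportional to pixel area at a fixed quality.
// That proportionality is the assumption in question, not a guarantee.
func resizeToApproximateDataSize(_ image: UIImage,
                                 targetBytes: Int,
                                 quality: CGFloat = 0.8) -> Data? {
    guard let current = image.jpegData(compressionQuality: quality) else {
        return nil
    }
    if current.count <= targetBytes { return current }

    // If dataSize ~ area * X, halving each linear dimension quarters
    // the size, so scale the sides by sqrt(target / current).
    let scale = sqrt(CGFloat(targetBytes) / CGFloat(current.count))
    let newSize = CGSize(width: image.size.width * scale,
                         height: image.size.height * scale)
    let resized = UIGraphicsImageRenderer(size: newSize).image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
    return resized.jpegData(compressionQuality: quality)
}
```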
One idea I just had after looking at the thread you linked to: compressing a 10MB image is going to be relatively slow. How about resizing it to be much smaller (so that compression is much faster), then running the compression algorithm (from the link) on that, and using the result as a guide for compressing the 10MB image? The idea is that the compression ratio should be similar for the same image, independent of its size.
Let's say 1000x1000 pixels compressed is 10MB, target size is 3MB.
Then say a smaller 100x100-pixel version (for example), compressed at the same quality, is C MB. Perform the binary search algorithm on the 100x100 image until its size = C * (3/10), then use that compression quality for the 1000x1000 image to get an image of roughly 3MB.
Note: I have no idea how well this will work - it's just a suggestion. What size to pick for the smaller image (I've used 100x100) is also just a guess and something that would need to be experimented with.
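A rough Swift sketch of that suggestion (the 100x100 proxy and the 8 search iterations are the same guesses as above; jpegData(compressionQuality:) is the modern Swift spelling of UIImageJPEGRepresentation):

```swift
import UIKit

// Binary-search a JPEG quality on a small proxy image, then reuse that
// quality for the full-size image. The proxy size and iteration count
// are the guesses mentioned above; experiment with both.
func qualityForTarget(image: UIImage, targetBytes: Int) -> CGFloat {
    // Build a small proxy so each compression trial is cheap.
    let proxySize = CGSize(width: 100, height: 100)
    let proxy = UIGraphicsImageRenderer(size: proxySize).image { _ in
        image.draw(in: CGRect(origin: .zero, size: proxySize))
    }

    guard let fullData = image.jpegData(compressionQuality: 1.0),
          let proxyData = proxy.jpegData(compressionQuality: 1.0) else {
        return 1.0
    }

    // Scale the target down by the same ratio, i.e. C * (3/10) above.
    let proxyTarget = Int(Double(proxyData.count) *
                          Double(targetBytes) / Double(fullData.count))

    var low: CGFloat = 0.0
    var high: CGFloat = 1.0
    for _ in 0..<8 {
        let mid = (low + high) / 2
        let size = proxy.jpegData(compressionQuality: mid)?.count ?? 0
        if size > proxyTarget { high = mid } else { low = mid }
    }
    return low
}
```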
I'm making a Worms-style bitmap destructible terrain game using OpenGL. I'd like to know where the limitations are, in terms of video memory, for the size of the worlds.
Currently, I use blocks of 512*512 RGBA textures for the terrain.
How much memory, very roughly, can I expect such a 512*512 RGBA texture to take up?
Is there any internal, automatic compression going on?
How much video memory can I expect most user's computers to have free?
How much memory, very roughly, can I expect such a 512*512 RGBA texture to take up?
Not enough information. You should always use sized OpenGL image formats (GL_RGBA8, GL_RGBA16); with an unsized format, the driver picks the precision for you.
GL_RGBA8 takes up 32 bits per pixel, which is 4 bytes. Therefore, 512*512*4 = 1MB.
Is there any internal, automatic compression going on?
No.
How much video memory can I expect most user's computers to have free?
How much are you using currently?
OpenGL will page image data in and out according to the available space. If you run out of GPU memory, OpenGL will happily allocate system memory and upload the images as needed.
But to be honest, your little Worms game isn't going to actually cost anything in terms of memory size. Maybe 64MB when you're done, tops. It's nothing you need to be concerned about.
I would not worry about that very much. Even with an 8192*2048 world (4 screens wide and 2 screens tall, which is very big for a Worms-style game) you would require only 8*2*4 = 64MB; add mipmaps, other textures, and the framebuffer, and you should still fit within a 128MB budget. As far as I know, even older GPUs have that much memory (we're not talking about GeForce4 cards, right?).
Older GPUs may have limitations on how big each texture can be, but since you already split your world into 512x512 chunks, that won't be a problem.
If video memory becomes an issue, you could allow users to use half-sized textures (i.e. downsample the world to 4096*1024 and 256x256 chunks) and fetch new / discard unused regions on demand. A rough memory estimate follows below.
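To put rough numbers on it (the 4/3 mipmap factor is the usual geometric-series estimate, not an exact figure):

```swift
// Back-of-the-envelope VRAM math for an RGBA8 world. The 4/3 mipmap
// factor is the geometric series 1 + 1/4 + 1/16 + ..., an estimate.
let bytesPerPixel = 4
let fullWorld = 8192 * 2048 * bytesPerPixel  // 67,108,864 bytes = 64 MiB
let withMipmaps = fullWorld * 4 / 3          // ~85 MiB
let halfSized = 4096 * 1024 * bytesPerPixel  // 16 MiB at half resolution
print(fullWorld, withMipmaps, halfSized)
```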
With 32-bpp (4 bytes) you get 4*512*512 = 1 MB
See this regarding texture compression: http://www.oldunreal.com/editing/s3tc/ARB_texture_compression.pdf
Again, this depends on your engine, but if I were you I would do this:
Since your terrain texture will probably reuse some mosaic-like tile textures, and you need to know whether each pixel is present or destroyed, then given mosaic textures no larger than 256x256 you could get away with a GL_RG8 internal format: each component is a tile-texture coordinate that you map from [0, 255] to [0.0, 1.0], with some special value reserved to indicate that the terrain is destroyed. That makes every 512x512 block take up 0.5MB.
It's tempting to add an extra byte to indicate terrain presence, but a 3-byte format wouldn't cache well.
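For what it's worth, here's a sketch of the per-cell encoding that format implies (all names are hypothetical; the sentinel is whatever value you reserve):

```swift
// Sketch of the per-cell encoding: two 8-bit tile coordinates plus
// one reserved sentinel value meaning "destroyed".
struct TerrainCell {
    static let destroyedSentinel: (UInt8, UInt8) = (255, 255)

    var u: UInt8 // tile-space x coordinate, 0...255
    var v: UInt8 // tile-space y coordinate

    // The [0, 255] -> [0.0, 1.0] mapping a shader would sample.
    var normalized: (Float, Float) {
        (Float(u) / 255.0, Float(v) / 255.0)
    }

    var isDestroyed: Bool {
        (u, v) == TerrainCell.destroyedSentinel
    }
}
```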