3D Float16 Texture Sizes - Metal

I am rendering signed distance fields into a 3D rgba16Float texture and then rendering the contents of that texture via ray marching.
Right now on my M1 Max, the largest rgba16Float texture I can allocate is around 700x700x700; anything much larger and the allocation fails.
Apple's documentation gives the maximum 3D texture dimension as 2048.
I have 64 GB of unified memory on my M1 Max and would love to be able to create and render larger textures.
Is the limitation I'm currently hitting a hardware limitation, or something driver/software related that could increase one day?
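
For scale, here is a quick back-of-the-envelope sketch (my own addition, not part of the question), assuming 8 bytes per texel for rgba16Float (4 channels x 16 bits) and ignoring any driver padding or alignment:

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    // rgba16Float: 4 channels x 2 bytes = 8 bytes per texel.
    for (int edge : {512, 700, 1024, 2048}) {
        double bytes = double(edge) * edge * edge * 8.0;
        std::printf("%4d^3 -> %6.1f GiB\n", edge, bytes / (1024.0 * 1024.0 * 1024.0));
    }
    // ~700^3 is already ~2.6 GiB in a single resource; the documented
    // 2048^3 maximum would need 64 GiB, the entire unified memory pool here.
    return 0;
}
```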

Related

Why is a Texture in Stage3D limited to 2048 × 2048?

The biggest texture you can create in Starling, which uses the Stage3D API, is limited to a maximum size of 2048 × 2048. Why is there such a size limit? I've read somewhere that a texture of this size should only consume ~16-17 MB of memory, which doesn't seem like a lot to me. Is this limit based on the capabilities of the most common devices (impractically slow to load if any larger), or is it a lower-level technological limitation (it cannot get any higher even with the best GPU)?
You can use 4096x4096 in Starling, but your application will then only be supported by some devices. 2048x2048 is generally the best target for a good range of cross-platform support. If you are targeting desktop or web you should be OK to use bigger textures. You should also avoid using multiple spritesheets; it is possible to fit an entire game into 1 or 2 2048x2048 spritesheets. You could also look at using videos in Starling (see Starling 1.6) if you want detailed animations.

DirectX9 and Incompatible Texture size

I'm working with DirectX9 and I'm having problems with texture creation.
I'm using the functions CreateTexture and LoadSurfaceFromMemory with D3DFMT_DXT1 compression. I checked the device caps of my graphics card, and D3DPTEXTURECAPS_POW2 and D3DPTEXTURECAPS_NONPOW2CONDITIONAL are both off; I think this means my graphics card supports non-power-of-two textures, i.e. I can use textures of any size.
My problem is that most of the textures work fine (and their sizes aren't powers of two), but some cases don't, like 1228 x 453; if I resize to 1228 x 452 the texture works fine.
What's going on?
Sorry for my English!
Thanks.
The BCn texture formats are block-based. Each block packs pixels into a group of 4x4 texels, so the texture dimensions must be multiples of 4 for these formats.
Unfortunately, this is a graphics card issue. Even if the card claims support for non-power-of-two textures, that support is often buggy or limited.
You could pad the texture and use a subtexture, but the best approach is to build a texture atlas (in general you should be doing this anyway to conserve memory bandwidth).
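
To make the block-alignment point concrete, here is a small sketch (my own illustration, not from the answers above) that pads dimensions up to the 4x4 block grid DXT1/BCn formats require:

```cpp
#include <cstdio>

// Round a texture dimension up to the next multiple of 4 (the BCn block size).
unsigned alignToBlock(unsigned dim) {
    return (dim + 3u) & ~3u;
}

int main() {
    unsigned w = 1228, h = 453;   // the failing size from the question
    std::printf("%ux%u -> padded %ux%u\n", w, h, alignToBlock(w), alignToBlock(h));
    // 1228 is already a multiple of 4, but 453 is not (453 / 4 = 113.25),
    // which is why 1228x452 loads fine while 1228x453 fails.
    return 0;
}
```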

Paint a very high resolution textured object (sphere) in OpenGL ES

I'm drawing planets in OpenGL ES, and running into some interesting performance issues. The general question is: how best to render "hugely detailed" textures on a sphere?
(the sphere is guaranteed; I'm interested in sphere-specific optimizations)
Base case:
Window is approx. 2048 x 1536 (e.g. iPad3)
Texture map for globe is 24,000 x 12,000 pixels (an area half the size of USA fits the full width of screen)
Globe is displayed at everything from zoomed in (USA fills screen) to zoomed out (whole globe visible)
I need a MINIMUM of 3 texture layers (1 for the planet surface, 1 for day/night differences, 1 for the user interface (highlighting different regions))
Some of the layers are animated (i.e. they have to load and drop their texture at runtime, rapidly)
Limitations:
top-end tablets are limited to 4096x4096 textures
top-end tablets are limited to 8 simultaneous texture units
Problems:
In total, it's naively 500 million pixels of texture data
Splitting into smaller textures doesn't work well because devices only have 8 texture units; with a single texture layer I could split the map across all 8 units and keep every texture under 4096x4096, but that only allows one layer
Rendering the layers as separate geometry works poorly because they need to be blended using fragment-shaders
...at the moment, the only idea I have that sounds viable is:
split the sphere into NxM "pieces of sphere" and render each one as separate geometry
use mipmaps to render low-res textures when zoomed out
...rely on simple culling to cut out most of them when zoomed in, and mipmapping to use small(er) textures when they can't be culled
...but it seems there ought to be an easier way / better options?
It seems there is no way to fit such huge textures into the memory of a mobile GPU, not even the iPad 3's.
So you have to stream texture data. The technique you need is called a clipmap (popularized by id Software with its MegaTexture technology).
Please read about it here; the page links to docs describing the technique: http://en.wikipedia.org/wiki/Clipmap
This is not easily done in ES, as there is no virtual texture extension (yet). You basically need to implement virtual texturing yourself (some ES devices implement ARB_texture_array) and stream in the lowest resolution possible (view-dependent) for your sphere. That way it is possible to do it all in a fragment shader; no geometry subdivision is required. See this presentation (and the paper) for details on how this can be implemented.
If you do the math, it is simply impossible to stream 1 GB (24,000 x 12,000 pixels x 4 B) in real time. And it would be wasteful, too, as the user will never get to see it all at the same time.
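
As a rough illustration of the tiling and memory math behind the "split the sphere into NxM pieces" idea (my own sketch, not from the answers, assuming 4096 as the per-texture limit and RGBA8 storage):

```cpp
#include <cstdio>

const int MAP_W = 24000, MAP_H = 12000;   // equirectangular planet map
const int TILE  = 4096;                   // max texture size on the target tablets

int main() {
    // Number of tiles needed to cover one texture layer.
    int tilesX = (MAP_W + TILE - 1) / TILE;   // 6
    int tilesY = (MAP_H + TILE - 1) / TILE;   // 3
    std::printf("%d x %d = %d tiles per layer\n", tilesX, tilesY, tilesX * tilesY);

    // Bytes for one full-resolution RGBA8 layer; mipmaps add roughly another third.
    double baseBytes = double(MAP_W) * MAP_H * 4.0;
    std::printf("one layer: %.2f GiB (+ ~33%% for mipmaps)\n",
                baseBytes / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```

Only the handful of tiles that survive culling, and only at the mip level the current zoom needs, would ever have to be resident at once.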

OpenGL: Texture size and video memory

I'm making a Worms-style bitmap destructible terrain game using OpenGL. I'd like to know where the limitations are, in terms of video memory, for the size of the worlds.
Currently, I use blocks of 512*512 RGBA textures for the terrain.
How much memory, very roughly, can I expect such a 512*512 RGBA texture to take up?
Is there any internal, automatic compression going on?
How much video memory can I expect most user's computers to have free?
How much memory, very roughly, can I expect such a 512*512 RGBA texture to take up?
Not enough information. You should always use sized OpenGL image formats (GL_RGBA8, GL_RGBA16).
GL_RGBA8 takes up 32 bits per pixel, which is 4 bytes. Therefore, 512*512*4 = 1 MB.
Is there any internal, automatic compression going on?
No.
How much video memory can I expect most user's computers to have free?
How much are you using currently?
OpenGL will page image data in and out according to the available space. If you run out of GPU memory, OpenGL will happily allocate system memory and upload the images as needed.
But to be honest, your little Worms game isn't going to actually cost anything in terms of memory size. Maybe 64MB when you're done, tops. It's nothing you need to be concerned about.
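
As a minimal sketch of the sized-internal-format advice (my own illustration, not code from the answer; it assumes a desktop GL context and a loader such as glad or GLEW already initialized):

```cpp
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// Explicit sized internal format: GL_RGBA8 is 4 bytes per texel,
// so one 512x512 terrain block costs 512 * 512 * 4 = 1 MB.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);   // allocate now, upload later
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```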
I would not worry about that very much. Even with an 8192*2048 world (4 screens wide and 2 screens tall, which is very big for a Worms-style game) you would require only 8K * 2K * 4 B = 64 MB; add mipmaps, other textures and the framebuffer and you should still fit within 128 MB. As far as I know even older GPUs have that kind of memory (we're not talking about GeForce4 cards, right?).
Older GPUs may have limitations on how big each texture can be, but since you already split your world into 512x512 chunks it won't be a problem.
If video memory becomes an issue you could allow users to use half-sized textures (i.e. downsample the world to 4096*1024 and 256x256 chunks) and fetch new / discard unused regions on demand.
With 32-bpp (4 bytes) you get 4*512*512 = 1 MB
See this regarding texture compression: http://www.oldunreal.com/editing/s3tc/ARB_texture_compression.pdf
Again, this depends on your engine, but if I were you I would do this:
Since your terrain texture will probably be reusing some mosaic-like textures, and you need to know whether each pixel is present or destroyed, then (given you are using mosaic textures no larger than 256x256) you could get away with a GL_RG8 internal format for your terrain texture: each component would be a texture coordinate that you map from [0, 255] -> [0.0, 1.0], with one special value reserved to indicate that the terrain is destroyed. That makes every 512x512 block take up 0.5 MB.
Although it's tempting to add an extra byte to indicate terrain presence, a 3-byte format wouldn't cache well.
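
A minimal sketch of that two-channel index texture (my own illustration; the sentinel value and the GL 3.x context are assumptions, not part of the answer):

```cpp
// Each texel stores two 8-bit mosaic texture coordinates; one reserved
// value marks destroyed terrain.
const unsigned char DESTROYED = 255;   // hypothetical sentinel value

GLuint terrainIndex;
glGenTextures(1, &terrainIndex);
glBindTexture(GL_TEXTURE_2D, terrainIndex);
// GL_RG8: 2 bytes per texel, so a 512x512 block costs 512 * 512 * 2 = 0.5 MB.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG8, 512, 512, 0,
             GL_RG, GL_UNSIGNED_BYTE, nullptr);
```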

iPad OpenGL ES: On Memory Texture Size

I am loading an RGBA texture which is 1024 x 1024. I expected the in-memory texture size to be 1024 x 1024 x 4 => 4 MB. But when I print the memory consumption I can see that the texture is taking around 7-8 MB, almost double. I was wondering whether the iPad is converting every channel from byte to half-float.
So is there any way to specify that every pixel should take 4 bytes and not 8 bytes?
The easiest way to specify it is by using a sized internal format (like GL_RGBA8 instead of GL_RGBA), although I'm not sure whether those are supported in ES. But I would be surprised if an ES device stored a standard RGBA texture with more than 8 bits per channel.
How do you determine the GPU memory consumption? I would rather guess that the additional memory is due to other important GPU resources, like VBOs, not to forget the framebuffer itself (the memory you render into), which takes a reasonable amount of memory. And remember, when using mipmaps these additionally require around 33% of the base texture's memory.
And if you're talking about the size of the CPU data you create the texture from, then this doesn't have anything to do with the texture's size anyway and only depends on the size of your own data.
You have to specify the type and internal format of your texture when you create it using glTexImage2D.
Yours is probably set to GL_FLOAT or something.
Look up the documentation here: http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
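
A minimal ES 2.0-style upload illustrating that advice (my own sketch; `pixels` is assumed to point to the app's 1024x1024 RGBA8 image data):

```cpp
// 8 bits per channel -> 4 bytes per texel, so 1024 * 1024 * 4 = 4 MB before mipmaps.
glTexImage2D(GL_TEXTURE_2D,
             0,                  // mip level
             GL_RGBA,            // ES 2.0 uses the unsized internal format name
             1024, 1024, 0,
             GL_RGBA,            // source format
             GL_UNSIGNED_BYTE,   // 8 bits per channel, not GL_FLOAT / half-float
             pixels);
```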
