How do I interpret the texture memory information output by the deviceQuery sample to determine the texture memory size?
Here is the texture-related output for my device:
Max Texture Dimension Size (x,y,z): 1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
Max Layered Texture Size (dim) x layers: 1D=(16384) x 2048, 2D=(16384, 16384) x 2048
It is a common misconception, but there is no such thing as dedicated "texture memory" on CUDA GPUs. There are only textures: global memory allocations accessed through dedicated hardware that provides built-in caching, filtering and addressing, and whose addressing limitations lead to the size limits you see reported in the documentation and by deviceQuery. So the limit is either roughly the amount of free global memory (allowing for padding and alignment in CUDA arrays) or the dimensional limits you already quoted.
The output shows that the maximum texture dimensions are:
For 1D textures: 65536
For 2D textures: 65536 * 65535
For 3D textures: 2048 * 2048 * 2048
If you want an upper bound in bytes, multiply the number of elements by the maximum number of channels (4) and the maximum channel size (4 bytes).
(For layered textures, multiply the relevant dimensions by the maximum number of layers reported.)
However, this is the maximum size for a single texture, not the available memory for all textures.
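If it helps, here is a minimal host-side sketch using the CUDA runtime API (cudaGetDeviceProperties; device index 0 and the 4-channel x 4-byte worst case are assumptions) that reads the same limits programmatically and turns the 2D limit into a byte figure. Whether such a texture actually fits still depends on how much global memory is free.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // device 0 assumed

    printf("1D max width:              %d\n", prop.maxTexture1D);
    printf("2D max (w, h):             %d x %d\n",
           prop.maxTexture2D[0], prop.maxTexture2D[1]);
    printf("3D max (w, h, d):          %d x %d x %d\n",
           prop.maxTexture3D[0], prop.maxTexture3D[1], prop.maxTexture3D[2]);
    printf("2D layered (w, h, layers): %d x %d x %d\n",
           prop.maxTexture2DLayered[0], prop.maxTexture2DLayered[1],
           prop.maxTexture2DLayered[2]);

    // Worst-case bytes for one maximal 2D texture:
    // width * height * 4 channels * 4 bytes per channel.
    size_t maxBytes2D = (size_t)prop.maxTexture2D[0] * prop.maxTexture2D[1] * 4 * 4;
    printf("Single 2D texture upper bound: %zu bytes\n", maxBytes2D);
    return 0;
}
```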
I am trying to understand the digitization of sound and images.
As far as I know, both need to convert an analog signal to a digital signal, and both should use sampling and quantization.
Sound: we have amplitude on the y axis and time on the x axis. What is on the x and y axes during image digitization?
Is there a standard sample rate for image digitization? 44.1 kHz is used for CDs (sound digitization). How exactly is a sample rate used for images?
Quantization: for sound we use bit depth, which means the number of amplitude levels. Images also use bit depth, but does it mean how many intensities we are able to distinguish? (Is that true?)
What are the other differences between sound and image digitization?
Acquisition of images can be summarized as spatial sampling and conversion/quantization steps. The spatial sampling in (x, y) is due to the pixel size. The data (on the third axis, z) is the number of electrons generated by the photoelectric effect on the chip. These electrons are converted to ADU (analog-to-digital units) and then to bits. What is quantized is the light intensity in levels of gray; for example, data on 8 bits would give 2^8 = 256 levels of gray.
An image loses information both due to the spatial sampling (resolution) and the intensity quantization (levels of gray).
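As an illustration of the quantization step only, here is a small sketch, assuming the intensity has already been normalized to [0, 1]; the function name is just a placeholder:

```cpp
#include <cmath>
#include <cstdio>

// Quantize a normalized intensity (0.0 .. 1.0) to an n-bit gray level.
unsigned quantize(double intensity, unsigned bits) {
    unsigned levels = 1u << bits;                            // e.g. 8 bits -> 256 levels
    return (unsigned)std::floor(intensity * (levels - 1) + 0.5);  // round to nearest level
}

int main() {
    printf("%u\n", quantize(0.5, 8));   // 128: mid gray with 256 levels
    printf("%u\n", quantize(0.5, 2));   // 2:   only 4 levels, much coarser
    return 0;
}
```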
Unless you are talking about videos, images are not sampled in units of Hz (1/time) but in 1/distance. What is important is to satisfy the Shannon-Nyquist criterion to avoid aliasing. The spatial frequencies you are able to capture depend directly on the optical design. The pixel size must be chosen with respect to this design to avoid aliasing.
EDIT: In the example below I plotted a sine function (white/black stripes). In the left part the signal is correctly sampled; in the right part it is undersampled by a factor of 4. It is the same signal, but with bigger pixels (a lower sampling rate) you get aliasing of your data. Here the stripes are horizontal, but you get the same effect for vertical ones.
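Since the original plot is not reproduced here, a small numeric sketch of the same effect in 1D (the stripe period and sampling step are arbitrary choices for illustration):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    const double period = 6.0;   // true stripe period: 6 pixels
    // Sampling every 4 pixels gives a rate of 1/4, below the Nyquist rate
    // of 2 * (1/6) = 1/3 for this pattern, so the stripes alias.
    for (int x = 0; x < 24; x += 4) {
        double sample = std::sin(2.0 * pi * x / period);
        printf("x=%2d  value=% .2f\n", x, sample);
    }
    // The printed values oscillate with an apparent period of ~12 pixels,
    // twice the true period: the pattern has aliased to a lower frequency.
    return 0;
}
```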
There is no common standard for the spatial axis for image sampling. A 20 megapixel sensor or camera will produce images at a completely different spatial resolution in pixels per mm, or pixels per degree angle of view than a 2 megapixel sensor or camera. These images will typically be rescaled to yet another non-common-standard resolution for viewing (72 ppi, 300 ppi, "Retina", SD/HDTV, CCIR-601, "4k", etc.)
For audio, 48 ksps is starting to become more common than 44.1 ksps (on iPhones, etc.).
("a nice thing about standards is that there are so many of them")
Amplitude scaling in raw format also has no single standard. When converted or requantized to storage format, 8-bit, 10-bit, and 12-bit quantizations are the most common for RGB color separations. (JPEG, PNG, etc. formats)
Channel formats are different between audio and image.
X, Y, where X is time and Y is amplitude, is only good for mono audio. Stereo usually needs T, L, R for time, left, and right channels. Images are often X, Y, R, G, B (five values per sample), where X, Y are spatial location coordinates and R, G, B are color intensities at that location. The image intensities can be somewhat related (depending on gamma correction, etc.) to the number of incident photons per shutter duration, in certain visible EM frequency ranges, per incident solid angle to some lens.
A low-pass filter for audio, and a Bayer filter for images, are commonly used to make the signal closer to bandlimited so it can be sampled with less aliasing noise/artifacts.
It's kind of interesting how much documentation avoids disambiguating what WebGLRenderingContext#getParameter(WebGLRenderingContext.MAX_TEXTURE_SIZE) means. "Size" is not very specific.
Is it the maximum storage size of textures in bytes, implying lowering bit-depth or using fewer color channels increases the maximum dimensions? Is it the maximum diameter in pixels of textures, implying you are much more limited in terms of addressable-area if your textures are highly rectangular? Is it the maximum number of pixels?
As it says in the WebGL spec, section 1.1:
The remaining sections of this document are intended to be read in conjunction with the OpenGL ES 2.0 specification (2.0.25 at the time of this writing, available from the Khronos OpenGL ES API Registry). Unless otherwise specified, the behavior of each method is defined by the OpenGL ES 2.0 specification
The OpenGL ES 2.0.25 spec, section 3.7.1, says:
The maximum allowable width and height of a two-dimensional texture image must be at least 2^(k−lod) for image arrays of level zero through k, where k is the log base 2 of MAX_TEXTURE_SIZE and lod is the level-of-detail of the image array.
It's the largest width and/or height you can specify for a texture. Note that this has nothing to do with memory, as @Strilanc points out. So while you can probably create a 1 x MAX_TEXTURE_SIZE or a MAX_TEXTURE_SIZE x 1 texture, you probably cannot create a MAX_TEXTURE_SIZE x MAX_TEXTURE_SIZE texture, as you'd run out of memory.
It's the maximum diameter in pixels. If M is the maximum texture size, then you can create textures of size M x M, M/2 x M/4, M x 1, and so on; but you can't make a texture of size 2M x 2M or 1 x 2M.
Consider that the largest MAX_TEXTURE_SIZE reported in this opengl capability report is 16384 (2^14). If that were the maximum number of pixels (never mind bytes), instead of the maximum diameter, you'd be unable to create 256x256 textures. Which is really small.
(Note that limits besides MAX_TEXTURE_SIZE apply. For example, my machine returns a maximum texture size of 2^15. But a 2^15 x 2^15 texture with rgba float pixels would take 16 gibibytes of space. It wouldn't fit in the available memory.)
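For reference, here is a sketch of the equivalent query in plain OpenGL / OpenGL ES (to which the WebGL spec defers); the header name varies by platform, and the worst-case arithmetic simply mirrors the note above:

```cpp
#include <cstdio>
// Assumes an OpenGL (ES) context is already current; header name varies by platform.
#include <GLES2/gl2.h>

void printMaxTextureInfo() {
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);   // largest width/height, not bytes
    printf("GL_MAX_TEXTURE_SIZE = %d\n", maxSize);

    // Worst case for a square RGBA float texture: 4 channels * 4 bytes = 16 bytes/pixel.
    // For maxSize = 32768 (2^15): 2^15 * 2^15 * 16 bytes = 16 GiB, which is why a
    // maximum-sized square texture usually cannot actually be allocated.
    double bytes = (double)maxSize * maxSize * 16.0;
    printf("worst-case square RGBA32F texture: %.1f GiB\n",
           bytes / (1024.0 * 1024.0 * 1024.0));
}
```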
I'm working on a volume rendering program using DirectX 11.
I render both to a window ( HWND ) and to a texture ( ID3D11Texture2D ).
While the rendering to the HWND always looks correct, my ID3D11Texture2D looks corrupted for render sizes smaller than 64x64.
I wonder whether there is a minimum size limit for textures in DirectX 11.
Unfortunately, I was only able to find information about the maximum texture size limit.
There is no minimum texture size; 1x1x1 is valid.
It looks to me like you've mapped the texture and are extracting the data while ignoring the "RowPitch" returned. On textures that are sufficiently small (or of unusual dimensions), the rows of texels are not necessarily stored contiguously: each row begins "RowPitch" bytes after the start of the previous one, which can be more than the row's width in bytes.
See D3D11_MAPPED_SUBRESOURCE
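A rough sketch of reading back a mapped texture while honouring RowPitch; the staging texture, format and dimensions here are assumptions, the point being that each row must be read starting at pData plus y * RowPitch rather than from one contiguous block:

```cpp
#include <d3d11.h>
#include <cstring>
#include <vector>

// Copy the texels of a mapped staging texture into a tightly packed buffer.
// 'width'/'height' are the texture dimensions, 'bytesPerPixel' e.g. 4 for RGBA8.
std::vector<unsigned char> ReadTexture(ID3D11DeviceContext* ctx,
                                       ID3D11Texture2D* staging,
                                       UINT width, UINT height, UINT bytesPerPixel)
{
    std::vector<unsigned char> out(width * height * bytesPerPixel);

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(ctx->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
    {
        const unsigned char* src = static_cast<const unsigned char*>(mapped.pData);
        for (UINT y = 0; y < height; ++y)
        {
            // Each source row starts RowPitch bytes after the previous one; for
            // small or oddly sized textures RowPitch > width * bytesPerPixel.
            std::memcpy(&out[y * width * bytesPerPixel],
                        src + y * mapped.RowPitch,
                        width * bytesPerPixel);
        }
        ctx->Unmap(staging, 0);
    }
    return out;
}
```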
I use imread to read an image (about 1.5 GB), but image.data[x] is always 0.
Are there any parameters I can use?
Or is there another API I can use to load this image?
Thanks
Ubuntu 12.04
Image information:
XXX.tif
1.21 GB
Dimensions: 34795 * 37552
Horizontal resolution: 150 dpi
Vertical resolution: 150 dpi
Bit depth: 8
A cv::Mat is a single contiguous memory allocation. It might be almost impossible to allocate 1.5 GB of contiguous memory. Try splitting the file either spatially or based on its different channels.
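Either way, check that imread actually succeeded before indexing data. If a lower-resolution copy is acceptable, newer OpenCV builds (3.x and later) can also decode a downsampled version directly; a sketch (the filename is the placeholder from the question):

```cpp
#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    // Always check whether the load succeeded: on failure imread returns an
    // empty Mat, and indexing its data gives meaningless results.
    cv::Mat img = cv::imread("XXX.tif", cv::IMREAD_UNCHANGED);
    if (img.empty()) {
        // If the full-resolution image cannot be allocated as one contiguous
        // block, OpenCV 3.x+ can decode a downsampled copy instead:
        img = cv::imread("XXX.tif", cv::IMREAD_REDUCED_GRAYSCALE_8);  // ~1/8 size
    }
    if (img.empty()) {
        std::fprintf(stderr, "could not load the image at any resolution\n");
        return 1;
    }
    std::printf("%d x %d, %d channel(s)\n", img.cols, img.rows, img.channels());
    return 0;
}
```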
Suppose I have a texture which is naturally not square (for example, a photographic texture of something with a 4:1 aspect ratio). And suppose that I want to use PVRTC compression to display this texture on an iOS device, which requires that the texture be square. If I scale up the texture so that it is square during compression, the result is a very blurry image when the texture is viewed from a distance.
I believe this is caused by mipmapping. Since the mipmap filter sees the new, larger stretched dimension, it uses that to choose a low mip level, which is actually not correct, since those pixels were just stretched to that size. If it looked at the other dimension, it would choose a higher-resolution mip level.
This theory is confirmed (somewhat) by the observation that if I leave the texture in a format that doesn't have to be square, the mipmap versions look just dandy.
There is a LOD Bias parameter, but the docs say that is applied to both dimensions. It seems like what is called for is a way to bias the LOD but only in one dimension (that is, to bias it toward more resolution in the dimension of the texture which was scaled up).
Other than chopping up the geometry to allow the use of square subsets of the original texture (which is infeasible, given our production pipeline), does anyone have any clever hacks they've used to deal with this issue?
It seems to me that you have a few options, depending on what you can do with, say, the vertex UVs.
[Hmm Just realised that in the following I'm assuming that the V coordinates run from the top to the bottom... you'll need to allow for me being old school :-) ]
The first thing that comes to mind is to take your 4N*N (X*Y) source texture and repeat it 4x vertically to give a 4N*4N texture, and then adjust the V coordinates on the model to be 1/4 of their current values. This won't save you much in terms of memory (since it effectively means a 4bpp PVRTC becomes 4x larger) but it will still save bandwidth and cache space, since the other parts of the texture won't be accessed. MIP mapping will also work all the way down to 1x1 textures.
Alternatively, if you want to save a bit of space and you have a pair of 4N*N textures, you could try packing them together into a "sort of" 4N*4N atlas. Put the first texture in the top N rows, then repeat its top N/2 rows below it. Then pack the bottom N/2 rows of the second texture, followed by the second texture itself, and then its top N/2 rows. Finally, add the bottom N/2 rows of the first texture. For the UVs that access the first texture, do the same divide-by-4 for the V parameter. For the second texture, you'll need to divide by 4 and add 0.5.
This should work fine until the MIP map level is so small that the two textures are being blended together... but I doubt that will really be an issue.
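A sketch of the V remapping for the two approaches above; how you apply it to your vertex data is obviously pipeline-specific, so the types and function names are placeholders:

```cpp
// Remap V coordinates after repacking a 4N x N texture into a 4N x 4N atlas.
// Approach 1 (texture repeated 4x vertically): just scale V.
// Approach 2 (two textures packed with wrap-around padding): scale, then offset
// the second texture's V by 0.5 so it lands in rows 2N..3N of the atlas.
struct UV { float u, v; };

UV remapRepeated(UV uv) {
    return { uv.u, uv.v * 0.25f };           // first approach: V / 4
}

UV remapAtlas(UV uv, bool secondTexture) {
    float v = uv.v * 0.25f;                  // both textures: V / 4
    if (secondTexture) v += 0.5f;            // second texture: + 0.5
    return { uv.u, v };
}
```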