When I bind an array to a texture in CUDA, (1) is the array copied to a dedicated texture space, or (2) is the array simply referenced as a texture?
If the answer is (1), then I can bind the texture and safely fetch data from texture memory while I write results to the same array, which is allocated in global memory.
If the answer is (2), then is texture memory just a region of global memory whose reads are cached and spatially optimized?
I'd like to clear this up, as I've seen several related questions and the answer still isn't clear to me.
Thanks in advance.
The answer is the second option, but from there things get a little more complex. There is no such thing as "texture memory", just global memory that is accessed via dedicated hardware, which includes an on-GPU read cache (6-8 KB per multiprocessor depending on the card; see Table F-2 in Appendix F of the CUDA Programming Guide) and a number of hardware-accelerated filtering/interpolation operations. There are two ways the texture hardware can be used in CUDA:
Bind linear memory to a texture and read from it in a kernel using the 1D fetch API. In this case the texture hardware really just acts as a read-through cache, and (IIRC) no filtering operations are available.
Create a CUDA array, copy the contents of linear memory to that array, and bind it to a texture. The resulting CUDA array contains a spatially ordered version of the linear source, stored in global memory along some sort of (undocumented) space-filling curve. The texture hardware provides cached access to that array, including simultaneous memory reads with hardware-accelerated filtering (see the sketch after this list).
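To make the second option concrete, here is a minimal, hedged sketch using the texture object API (introduced in CUDA 5.0, which supersedes the older texture reference binding this answer describes); all kernel and variable names are illustrative:

    #include <cuda_runtime.h>

    // Copy linear data into a CUDA array, create a texture object over it, and
    // sample it with hardware filtering while results go to ordinary global memory.
    __global__ void sampleKernel(cudaTextureObject_t tex, float* out, int w, int h)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < w && y < h)
            out[y * w + x] = tex2D<float>(tex, x + 0.5f, y + 0.5f);
    }

    void runExample(const float* h_src, float* d_out, int w, int h)
    {
        // Copy the linear (host) data into a CUDA array.
        cudaArray_t cuArray;
        cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
        cudaMallocArray(&cuArray, &desc, w, h);
        cudaMemcpy2DToArray(cuArray, 0, 0, h_src, w * sizeof(float),
                            w * sizeof(float), h, cudaMemcpyHostToDevice);

        // Describe the array as a texture resource with linear filtering.
        cudaResourceDesc resDesc = {};
        resDesc.resType = cudaResourceTypeArray;
        resDesc.res.array.array = cuArray;

        cudaTextureDesc texDesc = {};
        texDesc.addressMode[0] = cudaAddressModeClamp;
        texDesc.addressMode[1] = cudaAddressModeClamp;
        texDesc.filterMode     = cudaFilterModeLinear;   // hardware interpolation
        texDesc.readMode       = cudaReadModeElementType;

        cudaTextureObject_t tex = 0;
        cudaCreateTextureObject(&tex, &resDesc, &texDesc, nullptr);

        dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
        sampleKernel<<<grid, block>>>(tex, d_out, w, h);

        cudaDestroyTextureObject(tex);
        cudaFreeArray(cuArray);
    }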
You might find the overview of the GT200 architecture written by David Kanter worth reading to get a better idea of how the actual architecture implements the memory hierarchy the APIs expose.
I was watching the Metal API overview video, and it was mentioned that command buffers can have multiple command encoders, and command encoders define render targets, which are frame-buffer specific. But the thing I do not understand is how command buffers are executed with respect to frame buffers. What is the hierarchy?
To quote from @Dan Hulme's response to "What is the difference in overlay and framebuffer?" on the Computer Graphics Beta Stack Exchange community, found here (accessed 3/7/21):
As you've [the asker] understood, the framebuffer [emphasis added] is an array in memory that holds all the pixels to display on the screen
The frame buffer acts as a storage for pixel data that the video hardware uses to display things to the screen.
Render targets refer to the output textures of the entire rendering process and are sometimes said to be attached to the frame buffer. They are where all of the work done in your fragment shaders (and possibly compute kernels) is stored. They are simply textures in system memory, or they can reside entirely in tile memory (memory on-chip in the GPU) if you set their storage mode to MTLStorageMode.memoryless in certain circumstances.
A command buffer is an abstraction that represents an encoding of a sequence of commands to be run by the GPU. Because GPU hardware can be quite different, it would be impractical to have to go down to driver-level implementations each time you wanted to support a new GPU and do work with it. So Metal provides us with a way to manage the process a GPU follows without having to worry about what GPU we are working with. I applaud the wonderful software design Metal implements, and I think that we should all study its power. Since we are working with the GPU, buffers are used pretty much everywhere, so I suppose the name "command buffer" fit naturally within the lexicon.
I am also still learning about graphics and all of the jargon that comes with it. Metal adds a whole other set of terms to learn, but I can assure you that it will start to feel natural once you do more things in Metal (and more research, of course).
I am completely new to CUDA programming and I want to make sure I understand some basic memory related principles, as I was a little confused with some of my thoughts.
I am working on a simulation that uses billions of single-use random numbers in the range 0 to 50.
After cuRAND fills a huge array with random floats between 0.0 and 1.0, I run a kernel that converts all this float data to the desired integer range. From what I learned, I had the feeling that storing 5 of these values in one unsigned int, using just 6 bits each, would be better because of the comparatively low bandwidth of global memory. So I did it.
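A simplified sketch of that packing scheme (helper names are just illustrative, not my exact code):

    // Pack five values in 0..50 into one unsigned int, 6 bits per value
    // (5 * 6 = 30 bits), and extract value i again.
    __device__ unsigned int pack5(const unsigned char v[5])
    {
        unsigned int p = 0;
        for (int i = 0; i < 5; ++i)
            p |= (unsigned int)(v[i] & 0x3F) << (6 * i);
        return p;
    }

    __device__ unsigned char unpack5(unsigned int p, int i)
    {
        return (p >> (6 * i)) & 0x3F;
    }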
Now I have to store somewhere around 20,000 read-only yes/no values that will be accessed randomly with, let's say, equal probability, based on the random values feeding the simulation.
First I thought about shared memory. Using one bit per value looked great until I realized that the more information sits in one bank, the more collisions there will be. So the solution seems to be to use an unsigned short (2 bytes each, 40 KB total) to represent one yes/no value, using the maximum available memory and thereby minimizing the probability of different threads reading the same bank.
Another thought came from using constant memory and the L1 cache. Here, from what I learned, the approach would be exactly the opposite of shared memory: having more threads read the same location is now desirable, so packing 32 values into one 4-byte word is now optimal.
So based on the overall probabilities I should decide between shared memory and the cache, with shared memory probably being better, as with so many yes/no values and just tens or hundreds of threads per block there will not be many bank collisions.
But am I right in my general understanding of the problem? Are the processors really so fast compared to memory that optimizing memory accesses is crucial, and worrying about the extra instructions for extracting data with operations like <<, >>, |=, and &= is not important?
Sorry for the wall of text. There are a million ways I can make the simulation work, but only a few ways of making it work right. I just don't want to repeat some stupid mistake over and over only because I understand something badly.
The first two optimization priorities for any CUDA programmer are:
Launch enough threads (i.e. expose enough parallelism) that the machine has sufficient work to be able to hide latency
Make efficient use of memory.
The second item above will have various guidelines based on the types of GPU memory you are using, such as:
Strive for coalesced access to global memory
Take advantage of caches and shared memory to optimize the reuse of data
For shared memory, strive for non-bank-conflicted access
For constant memory, strive for uniform access (all threads within a warp read from the same location, in a given access cycle)
Consider use of texture caching and other special GPU features.
Your question is focused on memory:
Are the processors really so fast compared to memory that optimizing memory accesses is crucial, and worrying about the extra instructions for extracting data with operations like <<, >>, |=, and &= is not important?
Optimizing memory access is usually quite important for code that runs fast on a GPU. Most of the recommendations/guidelines I gave above are not focused on the idea that you should pack multiple data items per int (or some other word quantity); however, it's not out of the question to do so for a memory-bound code. Many programs on the GPU end up being memory-bound (i.e. their usage of memory is ultimately the performance limiter, not their usage of the compute resources on the GPU).
An example of a domain where developers are looking at packing multiple data items per word quantity is the deep learning space, specifically with convolutional neural networks on GPUs. Developers have found, in some cases, that they can make their training codes run faster by the following steps (sketched in code after the list):
store the data set as packed half (16-bit float) quantities in GPU memory
load the data set in packed form
unpack (and convert) each packed set of 2 half quantities to 2 float quantities
perform computations on the float quantities
convert the float results to half and pack them
store the packed quantities
repeat the process for the next training cycle
(See this answer for some additional background on the half datatype.)
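Here is a minimal, hedged sketch of the pack/unpack/compute steps listed above, using the __half2 type from cuda_fp16.h; the kernel name and the "scale" computation are placeholders, not anyone's actual training code:

    #include <cuda_fp16.h>

    // Each thread loads one __half2 (two packed 16-bit floats), unpacks to
    // float, computes in float precision, then repacks and stores.
    __global__ void scalePackedHalf(const __half2* in, __half2* out, float s, int n2)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n2) {
            float2 v = __half22float2(in[i]);   // unpack 2 halves -> 2 floats
            v.x *= s;                           // compute in float precision
            v.y *= s;
            out[i] = __float22half2_rn(v);      // convert + repack to 2 halves
        }
    }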
The point is that efficient use of memory is a high priority for making codes run fast on GPUs, and this may include using packed quantities in some cases.
Whether or not it's sensible for your case is probably something you would have to benchmark and compare to determine the effect on your code.
Some other comments on reading your question:
Here, from what I learned, the approach would be exactly opposite from shared memory.
In some ways, the use of shared memory and constant memory are "opposite" from an access-pattern standpoint. Efficient use of constant memory involves uniform access, as already stated. In the case of uniform access, shared memory should also perform pretty well, because it has a broadcast feature. You are right to be concerned about bank conflicts in shared memory, but shared memory on newer GPUs has a broadcast feature that negates the bank-conflict issue when the accesses take place from the same actual location, as opposed to just the same bank. These nuances are covered in many other questions about shared memory on SO, so I'll not go into it further here.
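For illustration only, a toy kernel (assumed names and sizes) where every thread of a block reads the same shared-memory word, which the hardware serves as a single broadcast rather than as serialized bank conflicts:

    __global__ void broadcastRead(const unsigned int* flags, unsigned int* out, int n)
    {
        __shared__ unsigned int s_flags[64];   // assumes flags[] has >= 64 entries

        // Stage the flags into shared memory cooperatively.
        for (int j = threadIdx.x; j < 64; j += blockDim.x)
            s_flags[j] = flags[j];
        __syncthreads();

        int i = blockIdx.x * blockDim.x + threadIdx.x;
        // Every thread in the block reads the same shared-memory location,
        // so the access is a conflict-free broadcast.
        if (i < n)
            out[i] = s_flags[blockIdx.x % 64];
    }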
Since you are packing 5 integer quantities (0..50) into a single int, you might also consider (maybe you are doing this already) having each thread perform the computations for 5 quantities. This would eliminate the need to share that data across multiple threads, or have multiple threads read the same location. If you're not doing this already, there may be some efficiencies in having each thread perform the computation for multiple data points.
If you wanted to consider switching to storing 4 packed quantities (per int) instead of 5, there might be some opportunities to use the SIMD intrinsics, depending on what exactly you are doing with those packed quantities.
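For example (a hedged sketch, not your code), with four 8-bit values packed per 32-bit word, a device intrinsic such as __vadd4() operates on all four lanes at once:

    __global__ void addPacked4(const unsigned int* a, const unsigned int* b,
                               unsigned int* c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = __vadd4(a[i], b[i]);   // four independent 8-bit additions
    }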
I am trying to implement an algorithm for a system in which the camera captures 1000 fps, and I need to read the value of each pixel in all the images and perform calculations on the evolution of pixel[i][j] over N images, for all the pixels in the images. I have the data as (unsigned char *ptr); I want to transfer it to the GPU and start implementing the algorithm, but I am not sure what would be the best option for real-time processing.
My system:
CPU: Intel Xeon X5660 2.8 GHz (2 processors)
GPU: NVIDIA Quadro 5000
I have the following questions:
Do I need to add any image processing library in addition to CUDA? If yes, what do you suggest?
Can I create a matrix for pixel[i,j] containing its values across images [1:n], for each pixel in the image? For example, for 1000 images of size 200x200 I would end up with 40,000 matrices, each containing 1000 values for one pixel. Does CUDA give me options like OpenCV does to have matrices? Or vectors?
1 - Do I need to add any image processing library in addition to CUDA?
Apples and oranges. Each has a different purpose. An image processing library like OpenCV offers a lot more than simple accelerated matrix computations. Maybe you don't need OpenCV for the processing in this project, as you seem to prefer using CUDA directly. But you could still keep OpenCV around to make it easier to load and write different image formats from disk.
2 - Does CUDA give me options like OpenCV does to have matrices?
Absolutely. Some time ago I wrote a simple (educational) application that used OpenCV to load an image from disk and CUDA to convert it to a grayscale version. The project is named cuda-grayscale. I haven't tested it with CUDA 4.x, but the code shows how to do the basics when combining OpenCV and CUDA.
It sounds like you will have 40000 independent calculations, where each calculation works only within one (temporal) pixel. If so, this should be a good task for the GPU. Your 352 core Fermi GPU should be able to beat your 12 hyperthreaded Xeon cores.
Is the algorithm you plan to run a common operation? It sounds like it might not be, in which case you will likely have to write your own kernels.
Yes, you can have arrays of elements of any type in CUDA.
Having this be a "streaming-oriented" approach is good for a GPU implementation in that it maximizes the number of calculations relative to transfers over the PCIe bus. It might also introduce difficulties in that, if you want to process the 1000 values for a given pixel in a specific order (oldest to newest, for instance), you will probably want to avoid continuously shifting all the frames in memory (to make room for the newest frame). It will slightly complicate your addressing of the pixel values, but the best approach, to avoid shifting the frames, may be to overwrite the oldest frame with the newest frame each time a new frame is added. That way, you end up with a "stack of frames" that is fairly well ordered but has a discontinuity between old and new frames somewhere within it.
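A hedged sketch of that ring-buffer layout (the frame count, pixel type, and the per-pixel computation below are all illustrative): "head" is the index of the oldest frame, and the newest frame overwrites it when a new frame arrives.

    // Compute the temporal mean of each pixel over nFrames frames stored in one
    // fixed allocation; (head + t) % nFrames walks the frames oldest to newest.
    __global__ void temporalMean(const unsigned char* frames, float* mean,
                                 int width, int height, int nFrames, int head)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        float sum = 0.0f;
        for (int t = 0; t < nFrames; ++t) {
            int f = (head + t) % nFrames;   // t = 0 is the oldest frame
            sum += frames[(size_t)f * width * height + y * width + x];
        }
        mean[y * width + x] = sum / nFrames;
    }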
Do I need to add any image processing library in addition to CUDA?
If yes, what do you suggest?
Disclosure: my company develops and markets CUVILib.
There are very few options when it comes to GPU-accelerated imaging libraries that also offer general-purpose functionality. CUVILib is one of those options, and it offers the following, which is well suited to your specific needs:
A CuviImage object, which holds your image data and represents the image as a 2D matrix
You can write your own GPU function and use CuviImage as a 2D GPU matrix.
CUVILib already provides a rich set of imaging functionality like Color Operations, Image Statistics, Feature Detection, Motion Estimation, FFT, Image Transforms, etc., so chances are you will find the functionality you need.
As for the question of whether GPUs are suited for your application: Yes! Imaging is one of those domains which are ideal for parallel computation.
Links:
CUVILib: http://www.cuvilib.com
TunaCode: http://www.tunacode.com
I am using Assimp to import some 3d models.
Assimp is great, but it stores everything in a non-interleaved vertex format.
According to the Apple OpenGL ES Programming Guide, interleaved vertex data is preferred on ios: https://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/TechniquesforWorkingwithVertexData/TechniquesforWorkingwithVertexData.html#//apple_ref/doc/uid/TP40008793-CH107-SW8
I am using VertexArrays to consolidate all the buffer related state changes - is it still worth the effort to interleave all the vertex data?
Because interleaved vertex data increases the locality of vertex data, it allows the GPU to cache much more efficiently and generally to be a lot lighter on memory bandwidth at that stage in the pipeline.
How much difference it makes obviously depends on a bunch of other factors — whether memory access is a bottleneck (though it usually is, since texturing is read intensive), how spaced out your vertex data is if not interleaved and the specifics of how that particular GPU does fetching and caching.
Uploading multiple vertex buffers and bundling them into a vertex array would in theory allow the driver to perform this optimisation behind your back (either by duplicating memory, or once it becomes reasonably confident that the buffers in the array aren't generally in use elsewhere), but I'm not confident that it will. The other way to look at it is that you should be able to make the optimisation yourself at the very end of your data pipeline, so you needn't plan for it in advance or change your toolset. It's an optimisation, so if it's significant work to implement then the general rule against premature optimisation applies: wait until you have hard performance data.
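For concreteness, a hedged sketch of what an interleaved layout looks like in OpenGL ES 2.0 on iOS. The struct fields and the attribute indices 0-2 are assumptions, and a VBO holding an array of Vertex structs is assumed to be bound:

    #include <stddef.h>           /* offsetof */
    #include <OpenGLES/ES2/gl.h>  /* iOS OpenGL ES 2.0 header */

    /* One struct per vertex: all attributes adjacent in memory, one buffer,
     * stride = sizeof(Vertex). */
    typedef struct {
        GLfloat position[3];
        GLfloat normal[3];
        GLfloat texCoord[2];
    } Vertex;

    static void setInterleavedPointers(void)
    {
        GLsizei stride = sizeof(Vertex);
        /* Each attribute points into the same interleaved buffer. */
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride,
                              (const GLvoid*)offsetof(Vertex, position));
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride,
                              (const GLvoid*)offsetof(Vertex, normal));
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride,
                              (const GLvoid*)offsetof(Vertex, texCoord));
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);
        glEnableVertexAttribArray(2);
    }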
I have to apply a convolution filter on each row of many images. The classic case is 360 images of 1024x1024 pixels. In my use case it is 720 images of 560x600 pixels.
The problem is that my code is much slower than what is advertised in articles.
I have implemented the naive convolution, and it takes 2 min 30 s. I then switched to an FFT using FFTW. I used complex-to-complex transforms, filtering two rows in each transform. I'm now at around 20 s.
The thing is that articles advertise around 10 s, and even less, for the classic case.
So I'd like to ask the experts here if there could be a faster way to compute the convolution.
Numerical Recipes suggests avoiding the reordering done in the DFT and adapting the frequency-domain filter function accordingly, but there is no code example of how this could be done.
Maybe I lose time copying data. With a real-to-real transform I wouldn't have to copy the data into complex values, but I have to pad with zeros anyway.
EDIT: see my own answer below for progress feedback and further information on solving this issue.
Question (precise reformulation):
I'm looking for an algorithm or piece of code to apply a very fast convolution to a discrete non-periodic function (512 to 2048 values). Apparently the discrete Fourier transform is the way to go. However, I'd like to avoid copying the data, converting it to complex, and the butterfly reordering.
FFT is the fastest technique known for convolving signals, and FFTW is the fastest free library available for computing the FFT.
The key for you to get maximum performance (outside of hardware ... the GPU is a good suggestion) will be to pad your signals to a power of two. When using FFTW use the 'patient' setting when creating your plan to get the best performance. It's highly unlikely that you will hand-roll a faster implementation than what FFTW provides (forget about N.R.). Also be sure to be using the Real version of the forward 1D FFT and not the Complex version; and only use single (floating point) precision if you can.
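A hedged sketch of that advice: a single-precision real-to-complex 1D plan created with FFTW_PATIENT. The row length is assumed to already be padded to a power of two, and the function name is purely illustrative:

    #include <fftw3.h>

    void make_row_plan(int rowLength)
    {
        float*         in  = fftwf_alloc_real(rowLength);
        fftwf_complex* out = fftwf_alloc_complex(rowLength / 2 + 1);

        /* FFTW_PATIENT spends extra planning time searching for the fastest
         * algorithm; create the plan once and reuse it for every row. */
        fftwf_plan p = fftwf_plan_dft_r2c_1d(rowLength, in, out, FFTW_PATIENT);

        for (int i = 0; i < rowLength; ++i)
            in[i] = 0.0f;               /* copy one padded row here instead */
        fftwf_execute(p);

        fftwf_destroy_plan(p);
        fftwf_free(in);
        fftwf_free(out);
    }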
If FFTW is not cutting it for you, then I would look at Intel's (very affordable) IPP library. They have hand-tuned FFTs for Intel processors, optimized for images with various bit depths.
Paul
CenterSpace Software
You may want to add image processing as a tag.
But this article may be of interest, especially with the assumption that the image size is a power of 2. You can also see where they optimize the FFT. I expect that the articles you are looking at made some assumptions and then optimized the equations for those.
http://www.gamasutra.com/view/feature/3993/sponsored_feature_implementation_.php
If you want to go faster you may want to use the GPU to actually do the work.
This book may be helpful for you, if you go with the GPU:
http://www.springerlink.com/content/kd6qm361pq8mmlx2/
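If you do go the GPU route, here is a hedged sketch of a batched 1D real-to-complex transform with cuFFT, one transform per image row. The frequency-domain multiply and the inverse transform are left out, and d_rows is assumed to hold numRows contiguous rows of rowLength floats on the device:

    #include <cufft.h>
    #include <cuda_runtime.h>

    void fftRowsOnGpu(float* d_rows, int rowLength, int numRows)
    {
        cufftComplex* d_spectra;
        cudaMalloc(&d_spectra,
                   sizeof(cufftComplex) * (rowLength / 2 + 1) * numRows);

        // One real-to-complex plan covering all rows as a single batch.
        cufftHandle plan;
        cufftPlan1d(&plan, rowLength, CUFFT_R2C, numRows);
        cufftExecR2C(plan, d_rows, d_spectra);

        // ... multiply d_spectra by the filter's spectrum, then run a batched
        //     C2R plan to transform the filtered rows back ...

        cufftDestroy(plan);
        cudaFree(d_spectra);
    }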
This answer is to collect progress report feedback on this issue.
Edit (11 Oct):
The execution time I measured doesn't reflect the effective time of the FFT. I noticed that when my program ends, the CPU is still busy, with system time up to 42%, for about 10 s. When I wait until the CPU is back to 0% before restarting my program, I then get a 15.35 s execution time, which comes from the GPU processing. I get the same time if I comment out the FFT filtering.
So the FFT is in fact currently faster than the GPU and was simply hindered by a competing system task. I don't know yet what this system task is. I suspect it results from the allocation of a huge heap block where I copy the processing result before writing it to disk. For the input data I use a memory map.
I'll now change my code to get an accurate measurement of the FFT processing time. Making it faster is still relevant because there is room to optimize the GPU processing, for instance by pipelining the transfer of the data to process.