Add the same image with different offsets to the accumulating image on GPU - image-processing

As the title states, I am trying to add the same image, shifted by different offsets stored in a list, to an accumulator image.
The current implementation performs this on a CPU, and with some intrinsics it can be quite fast.
However, with larger images (2048x2048) and many offsets in the list (~10000), the performance is not satisfactory.
My question is, can the accumulation of the image with different offsets be efficiently implemented on a GPU?

Yes, you can, and the result will likely be much faster than on the CPU. The trick is not to send the data for each addition, and not even to launch a new kernel for each addition: the kernel should do a decent number of offset additions at once, at least 16 but possibly a few hundred, depending on your typical list size (and you can of course use more than one kernel).
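A minimal sketch of such a kernel, assuming single-channel float images and offsets passed as an int2 array (names and layout are illustrative, not the asker's code):

    __global__ void accumulateOffsets(const float* __restrict__ src,
                                      float* accum,
                                      const int2* offsets, int numOffsets,
                                      int width, int height)
    {
        // one thread per accumulator pixel; each thread applies a whole batch of offsets
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        float sum = accum[y * width + x];
        for (int i = 0; i < numOffsets; ++i) {
            int sx = x - offsets[i].x;      // source pixel that lands on (x, y) after the shift
            int sy = y - offsets[i].y;
            if (sx >= 0 && sx < width && sy >= 0 && sy < height)
                sum += src[sy * width + sx];
        }
        accum[y * width + x] = sum;         // one global write per pixel per launch
    }

Launched with, say, 16x16 blocks covering the image, one launch covers the whole offset batch, so the image is transferred once and the per-addition launch overhead disappears.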

Related

TensorFlow for image recognition, size of images

How can the size of an image affect training the model for this task?
My current training set holds images that are 2880 x 1800, but I am worried this may be too large to train. In total my sample size will be about 200-500 images.
Would this just mean that I need more resources (GPU, RAM, distribution) when training my model?
If this is too large, how should I go about resizing? I want to mimic real-world photo resolutions as closely as possible for better accuracy.
Edit:
I would also be using TFRecord format for the image files
Your memory and processing requirements will be proportional to the pixel size of your image. Whether this is too large for you to process efficiently will depend on your hardware constraints and the time you have available.
With regard to resizing the images there is no single answer: you have to consider how best to preserve the information your algorithm needs to learn from your data while removing information that won't be useful. Reducing the size of your input images won't necessarily be a negative for accuracy. Consider two cases:
Handwritten digits
Here the images can be reduced considerably in size while maintaining all the structural information necessary for correct identification. Have a look at the MNIST data set: its images are distributed at 28 x 28 resolution and are identifiable to 99.7%+ accuracy.
Identifying Tree Species
Imagine a set of images of trees where individual leaves could help identify the species. Here you might find that reducing the image size reduces small-scale detail of leaf shape in a way that's detrimental to the model, but you might also find that a tight crop (which preserves individual leaves) gives a similar result to an image resize. If this is the case, creating multiple crops from the same image gives you an augmented data set for training that considerably improves results (something to consider, if possible, given your training set is very small).
Deep learning models are achieving results around human level on many image classification tasks: if you struggle to identify your own images, it's less likely you'll be able to train an algorithm to do so. This is often a useful starting point when considering the level of scaling that might be appropriate.
If you are using GPUs to train, this will definitely affect your training time. TensorFlow does most of the GPU allocation, so you don't have to worry about that. But with big photos you will experience long training times even though your dataset is small. You should consider data augmentation.
You could complement your resizing with data augmentation: resize to equal dimensions and then perform reflections and translations (i.e., geometric transformations).
If your images are too big, your GPU might run out of memory before it can start training, because it has to store the convolution outputs in its memory. If that happens, you can do some of the following things to reduce memory consumption:
resize the image
reduce batch size
reduce model complexity
To resize your images, there are many scripts just one Google search away, but I will add that in your case 1440 x 900 is probably a sweet spot.
Higher-resolution images will result in longer training time and increased memory consumption (mainly GPU memory).
Depending on your concrete task, you might want to reduce the image size so that a reasonable batch size of, say, 32 or 64 fits on the GPU, which helps keep learning stable.
Your accuracy is probably affected more by the size of your training set, so instead of focusing on image size you might want to aim for 500-1000 sample images. Recent publications like SSD (Single Shot MultiBox Detector) achieve high accuracy values, such as an mAP of 72% on the Pascal VOC dataset, while using "only" 300x300 image resolution.
Resizing and augmentation: SSD, for instance, simply scales every input image down to 300x300, independent of the aspect ratio, and that does not seem to hurt. You could also augment your data by mirroring, translating, etc. (and I assume there are built-in methods in TensorFlow for that).

Efficiently analyze dominant color in UIImage

I am trying to come up with an efficient algorithm to query the top k most dominant colors in a UIImage (JPG or PNG). By dominant color I mean the color that is present in the largest number of pixels. The use case is mostly geared toward finding the top-1 (single most dominant) color to figure out an image's background. Below I document the algorithm I am currently trying to implement and wanted some feedback.
First, my algorithm takes an image and draws it into a bitmap context; I do this so I have control over how many bytes per pixel will be present in my image data for processing/parsing.
Second, after drawing the image, I loop through every pixel in the image buffer. I quickly realized that looping through every pixel would not scale; it became a huge performance bottleneck. To optimize this, I realized I need not analyze every pixel in the image. I also noticed that as I scaled an image down, the dominant colors became more prominent, and I ended up looping through far fewer pixels. So the second step is actually to loop through every pixel in the resized image buffer.
Third, as I loop through each pixel, I build up a CountedSet of UIColor objects; this basically keeps a histogram of color counts.
Finally, once I have my counted set, it is very easy to loop through it and respond to top-k dominant color queries.
So my algorithm, in short, is to resize the image (scale it down by some function proportional to the image's size), draw it into a bitmap context buffer once and cache it, loop through all the data in the buffer, and build a histogram.
My question to the Stack Overflow community is: how efficient is my algorithm, and are there any gains or further optimizations I can make? I just need something that gives me reasonable performance, and after doing some performance testing this seemed to work pretty darn well. Furthermore, how accurate will this be? The rescale operation in particular worries me. Am I trading a significant amount of accuracy for performance here? At the end of the day this will mostly just be used to determine the background color of an image.
Ideas for potential performance improvements:
1) When analyzing only the single most dominant color, do some math to figure out whether I have already found the most dominant color based on the number of pixels analyzed, and exit early.
2) For the top-k query, answer it quickly by leveraging a binary-heap data structure (typical top-k query algorithm), as sketched below.
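A rough sketch of idea (2), written in C++ rather than Swift (the unordered_map stands in for the CountedSet; types and names are illustrative): keep a size-k min-heap of (count, color) pairs, so the query costs O(m log k) over m distinct colors instead of a full sort.

    #include <algorithm>
    #include <cstdint>
    #include <functional>
    #include <queue>
    #include <unordered_map>
    #include <vector>

    std::vector<std::pair<size_t, uint32_t>>            // (count, packed RGBA) pairs
    topK(const std::unordered_map<uint32_t, size_t>& histogram, size_t k)
    {
        // min-heap: the least frequent of the current top-k sits on top and gets evicted
        std::priority_queue<std::pair<size_t, uint32_t>,
                            std::vector<std::pair<size_t, uint32_t>>,
                            std::greater<>> heap;
        for (const auto& [color, count] : histogram) {
            heap.emplace(count, color);
            if (heap.size() > k) heap.pop();
        }
        std::vector<std::pair<size_t, uint32_t>> result;
        while (!heap.empty()) { result.push_back(heap.top()); heap.pop(); }
        std::reverse(result.begin(), result.end());     // most frequent first
        return result;
    }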
You can use some performance tweaks to avoid the downscaling altogether. As we do not see your implementation, it is hard to say where the bottleneck really is, so here are some pointers on what to look for, check, or improve. Keep in mind I do not code for your environment, so treat this with extreme prejudice:
pixel access
Most pixel access functions I have seen are SLOW, especially those called putpixel, getpixel, pixels, ..., because each single pixel access performs too many sanity/safety checks and color/space/address conversions. Instead, use direct pixel access. Most of the image interfaces I have seen offer some kind of ScanLine[] access, which gives you a direct pointer to a single line in the image. So if you fill your own array of pointers with it, you obtain direct pixel access without any slowdowns. This usually speeds up the algorithm by a factor of 100 to 10000 on most platforms (depending on usage).
To check for this, try to read or fill a 1024x1024 32-bit image and measure the time. On a standard PC it should take a few [ms] or less. If your access is slow, it can take even seconds. For more info see Display an array of color in C.
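A small plain-C++ illustration of the ScanLine[] idea, assuming a packed 32-bit pixel buffer (the real scan-line accessor depends on your image class); timing the fill loop below against the same fill through getpixel/putpixel-style calls shows where the overhead goes:

    #include <cstdint>
    #include <vector>

    int main()
    {
        const int width = 1024, height = 1024;           // the 1024x1024 32-bit test image
        std::vector<uint32_t> buffer(size_t(width) * height);

        // build the row-pointer table once: one direct pointer per scan line
        std::vector<uint32_t*> scanLine(height);
        for (int y = 0; y < height; ++y)
            scanLine[y] = buffer.data() + size_t(y) * width;

        // fill test: raw pointer writes, no per-pixel API call, no per-pixel checks
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                scanLine[y][x] = 0x00FF00FFu;
        return 0;
    }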
Dominant color
If #1 is still not fast enough, you can take advantage of the fact that the dominant color has the highest probability of occurring in the image. So in theory you do not need to sample the whole image; instead you could:
sample every n-th pixel (which is downscaling with a nearest-neighbor filter) or use randomized pixel positions for sampling. Both approaches have their pros and cons, but if you combine them you can get much better results with far fewer pixels to process than the whole image. Of course this will occasionally lead to wrong results (when you miss many of the dominant pixels), which is improbable but possible.
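A hedged sketch of combining the two: a regular stride plus a random jitter inside each cell (the count callback is a placeholder for whatever histogram update you use):

    #include <cstdint>
    #include <cstdlib>
    #include <functional>

    void sampleImage(uint32_t* const* scanLine, int width, int height,
                     int step, const std::function<void(uint32_t)>& count)
    {
        for (int y = 0; y < height; y += step)
            for (int x = 0; x < width; x += step) {
                int jx = x + std::rand() % step;     // randomized position within the cell
                int jy = y + std::rand() % step;
                if (jx < width && jy < height)
                    count(scanLine[jy][jx]);         // feed the sampled color to the histogram
            }
    }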
histogram structure
For a low color count, up to 16 bit, you can use bucket-sort-style histogram acquisition, which is fast and can be done in O(n), where n is the number of pixels; no searching is needed. So if you reduce colors from true color to 16 bit, you can significantly boost the speed of the histogram computation, because you hugely lower the constant factor and the complexity also goes from O(n*m) to O(n), which for a high color count m is a really big difference. See my C++ histogram example; it is in HSV, but in RGB it is almost the same...
In case you need true color, you have 16.7M colors, which is not practical for the bucket-sort style. So you need to use binary search and a dictionary to speed up the color search in the histogram. If you do not have this, then this is your slowdown.
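For illustration, a hedged sketch of the bucket style: quantize each pixel to RGB565 so the histogram is a flat 65536-entry array and counting is one increment per pixel, no searching (the 0xAARRGGBB packing is an assumption):

    #include <cstdint>
    #include <vector>

    std::vector<uint32_t> histogram565(const uint32_t* pixels, size_t pixelCount)
    {
        std::vector<uint32_t> buckets(1u << 16, 0);       // one bucket per 16-bit color
        for (size_t i = 0; i < pixelCount; ++i) {
            uint32_t c = pixels[i];                       // assumed 0xAARRGGBB packing
            uint32_t r = (c >> 16) & 0xFF, g = (c >> 8) & 0xFF, b = c & 0xFF;
            uint32_t key = ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3);   // RGB565
            ++buckets[key];                               // O(1) update, O(n) total
        }
        return buckets;
    }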
histogram sort
How did you sort the histogram? If you have the wrong sort implementation, it can take a lot of time for big color counts. I usually use bubble sort in my examples because it is less code to write and usually enough, but I have seen bubble sort wrongly implemented here on SO too many times, always hitting the worst-case time T(n^2) (and even I sometimes do it). For time-sensitive code I use quicksort. See bubble sort in C++.
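For reference, a small sketch that sidesteps the hand-written sort entirely: gather the non-empty buckets from the 16-bit histogram above and let std::sort (introsort, O(m log m)) order them by count:

    #include <algorithm>
    #include <cstdint>
    #include <utility>
    #include <vector>

    std::vector<std::pair<uint32_t, uint32_t>>            // (count, 16-bit color key)
    sortHistogram(const std::vector<uint32_t>& buckets)
    {
        std::vector<std::pair<uint32_t, uint32_t>> entries;
        for (uint32_t key = 0; key < buckets.size(); ++key)
            if (buckets[key] != 0)
                entries.emplace_back(buckets[key], key);  // skip empty buckets
        std::sort(entries.begin(), entries.end(),
                  [](const auto& a, const auto& b) { return a.first > b.first; });
        return entries;                                   // most frequent color first
    }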
Also, your task really resembles color quantization (or is it just me?), so take a look at: Effective gif/image color quantization?
Downscaling an image requires looking at each pixel so you can pick a new pixel that is closest to the average color of some group of neighbors. The reason this appears to happen so fast compared to your implementation of iterating through all the pixels is that CoreGraphics hands the scaling task off to the GPU, whereas your approach uses the CPU to iterate through each pixel, which is much slower.
So what you need to do is write some GPU-based code to scan through your original image and look at each pixel, tallying up the color counts as you go. This has the advantage not only of being very fast, but also of giving you an accurate count of colors. Downsampling, as I mentioned, produces pixels that are color averages, so you won't end up with reliably correct color counts that correlate with your original image (unless you happen to be downscaling solid colors, but in the typical case you'll end up with something other than what you started with).
I recommend looking into Apple's Metal framework for an API that lets you write code directly for the GPU. It'll be a challenge to learn, but I think you'll find it interesting and when you're done your code will scan original images extremely fast without having to go through any extra downsampling effort.
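The answer recommends Metal; purely to illustrate the per-pixel tally pattern, here is the same idea sketched in CUDA (not Apple's API): quantizing to RGB565 keeps the histogram small enough for atomic increments, and names and layout are assumptions.

    __global__ void tallyColors(const uchar4* __restrict__ pixels, int pixelCount,
                                unsigned int* histogram)  // 65536 bins, zeroed beforehand
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= pixelCount) return;
        uchar4 p = pixels[i];                             // assumed RGBA byte order
        unsigned int key = ((p.x >> 3) << 11) | ((p.y >> 2) << 5) | (p.z >> 3);
        atomicAdd(&histogram[key], 1u);                   // each thread tallies one pixel
    }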

Fast image segmentation algorithms for automatic object extraction

I need to segment a set of unknown objects (books, cans, toys, boxes, etc.) standing on top of a surface (table top, floor…). I want to extract a mask (either binary or probabilistic) for each object on the scene.
I do know what the appearance of the surface is (a color model). The size, geometry, number, and appearance of the objects are arbitrary, and they could be texture-less as well. Multiple views might be available as well. No user interaction is available.
I have been struggling to pick the best kind of algorithm for this scenario (graph-based, cluster-based, super-pixels, etc.). This comes, naturally, from a lack of experience with different methods. I'd like to know how they compare to one another.
I have some constraints:
I can't use libraries (it's a legal constraint), except for OpenCV, so any algorithm must be implemented by me. I'd therefore like to choose an algorithm that is simple enough to implement in a reasonably short period of time.
Performance is VERY important. There will be many other processes running at the same time, so I can’t afford to have a slow method.
A fast and simple method with lower resolution is much preferred over something complex and slow that provides better results.
Any suggestion on some approach suitable for this scenario would be appreciated.
For speed, I'd quickly segment the image into surface and non-surface ("stuff"). This at least gets you from 24 bits (color?) down to 8 bits, or even one bit if you are brave.
From there you need to aggregate and filter the regions into blobs of appropriate size. For that, I'd try a morphological (or linear) filter keyed to a disk that would just fit inside the smallest object of interest. This would be an opening followed by a closing. Perhaps start with smaller radii for better results.
From there, you should have an image of blobs that can be found and discriminated. Each blob or region should designate the objects of interest.
Note that if you can get down to a 1-bit image, things can go VERY fast. However, efficient tools that can make use of this data form (8 pixels per byte) are often not forthcoming.
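Since OpenCV is allowed here, a hedged sketch of the pipeline above (the surface colour range, disk radius, and area threshold are all placeholders to be tuned):

    #include <iostream>
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat bgr = cv::imread("scene.png");                  // hypothetical input image

        // 1) surface vs non-surface: threshold against the known surface colour model
        cv::Mat surface, stuff;
        cv::inRange(bgr, cv::Scalar(90, 60, 40), cv::Scalar(140, 110, 90), surface);
        cv::bitwise_not(surface, stuff);                        // "stuff" = candidate objects

        // 2) opening + closing with a disk that just fits inside the smallest object
        cv::Mat disk = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
        cv::morphologyEx(stuff, stuff, cv::MORPH_OPEN, disk);
        cv::morphologyEx(stuff, stuff, cv::MORPH_CLOSE, disk);

        // 3) each remaining connected blob is one object mask
        cv::Mat labels, stats, centroids;
        int n = cv::connectedComponentsWithStats(stuff, labels, stats, centroids);
        for (int i = 1; i < n; ++i)                             // label 0 is the background
            if (stats.at<int>(i, cv::CC_STAT_AREA) > 500)
                std::cout << "object " << i << ", area "
                          << stats.at<int>(i, cv::CC_STAT_AREA) << "\n";
        return 0;
    }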

Convoluting a large filter in GPGPU

I wish to apply a certain 2D filter to 2D images; however, the filter size is huge. Image dimensions are about 2000x2000 and the filter size is about 500x500.
No, I cannot do this in the frequency domain, so FFT is a no-go. I'm aware of normal GPU convolution and the use of shared memory for coalesced memory access; however, shared memory doesn't seem feasible since the space needed by the filter is large and would therefore have to be divided, which might prove very complex to write.
Any ideas?
I think you can easily manage filtering for images of this size. You can transfer hundreds of megabytes to video memory, so data of this size will work fine.
You can use byte matrices to transfer the image data and then run your filter on it.
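For example, a hedged sketch of the brute-force variant, with no shared memory at all: one thread per output pixel, filter taps read straight from global memory. With a 2000x2000 image and a 500x500 filter this is roughly 10^12 multiply-adds, so it is simple rather than fast, but it fits comfortably in device memory.

    __global__ void convolve2D(const float* __restrict__ img, int width, int height,
                               const float* __restrict__ kernel, int kSize,
                               float* out)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        int half = kSize / 2;
        float sum = 0.0f;
        for (int ky = 0; ky < kSize; ++ky)
            for (int kx = 0; kx < kSize; ++kx) {
                int ix = x + kx - half;                  // skip out-of-bounds taps
                int iy = y + ky - half;
                if (ix >= 0 && ix < width && iy >= 0 && iy < height)
                    sum += img[iy * width + ix] * kernel[ky * kSize + kx];
            }
        out[y * width + x] = sum;
    }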

How to manage large 2D FFTs in cuda

I have successfully written some CUDA FFT code that does a 2D convolution of an image, as well as some other calculations.
How do I go about figuring out the largest FFTs I can run? It seems that a plan for a 2D R2C convolution takes 2x the image size, and another 2x the image size for the C2R. This seems like a lot of overhead!
Also, it seems like most of the benchmarks and such are for relatively small FFTs. Why is this? It seems like for large images, I am going to run out of memory quickly. How is this typically handled? Can you perform an FFT convolution on a tile of an image and combine those results, and expect it to be the same as if you had run a 2D FFT on the entire image?
Thanks for answering these questions
CUFFT plans a different algorithm depending on your image size. If the data can't fit in shared memory and the size is not a power of 2, then CUFFT plans an out-of-place transform, while smaller images of the right size are more amenable to the software.
If you're set on FFTing the whole image and need to see what your GPU can handle, my best answer would be to guess and check with different image sizes, as the CUFFT planning is complicated.
See the documentation : http://developer.download.nvidia.com/compute/cuda/1_1/CUFFT_Library_1.1.pdf
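One way to do the guess-and-check without actually allocating anything: ask CUFFT for its estimated work-area size and compare it against free device memory (the 4096x4096 padded size below is just an example):

    #include <cstdio>
    #include <cuda_runtime.h>
    #include <cufft.h>

    int main()
    {
        const int nx = 4096, ny = 4096;                   // candidate padded transform size

        size_t r2c = 0, c2r = 0;
        cufftEstimate2d(nx, ny, CUFFT_R2C, &r2c);         // forward transform work area
        cufftEstimate2d(nx, ny, CUFFT_C2R, &c2r);         // inverse transform work area

        size_t freeMem = 0, totalMem = 0;
        cudaMemGetInfo(&freeMem, &totalMem);

        // the data itself: real grid plus the packed complex spectrum
        size_t data = sizeof(cufftReal) * size_t(nx) * ny
                    + sizeof(cufftComplex) * size_t(nx) * (ny / 2 + 1);

        std::printf("estimated ~%zu MB needed, %zu MB free\n",
                    (r2c + c2r + data) >> 20, freeMem >> 20);
        return 0;
    }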
I agree with Mark and say that tiling the image is the way to go for convolution. Since convolution amounts to just computing many independent integrals you can simply decompose the domain into its constituent parts, compute those independently, and stitch them back together. The FFT convolution trick simply reduces the complexity of the integrals you need to compute.
I expect that your GPU code should outperform MATLAB by a large factor in all situations unless you do something weird.
It's not usually practical to run an FFT on an entire image. Not only does it take a lot of memory, but the image typically has to be padded to a power of 2 in width and height for good performance, which places an awkward constraint on your input.
Cutting the image into tiles is perfectly reasonable. The size of the tiles will determine the frequency resolution you're able to achieve. You may want to overlap the tiles as well.
