I'm looking for a way to determine whether two CALayers turn on any pixels (to any color at all) in common.
I can set one layer as the mask of the other and get a result that has only such pixels. But then, is there a way of determining whether any of those pixels has alpha > 0? The operation would have to be extremely fast; it's equivalent to computing the maximum (or minimum) pixel value.
I'd rather not have to create an image and walk all the pixels by hand on the CPU. It would seem that CIFilter ought to be able to do this, so it could run on the GPU, but all of the filters I can find produce more images (I want a single boolean). Can GPUImage do this? Metal?
Just checking for anyone's experience before I go down these rabbit holes...
I was wrong: it is fast enough for my purposes (meaning it keeps the UI responsive) to do the intersection on the CPU, without having to resort to GPU programming. I initially thought it wasn't, because I was using a library that called an Objective-C method in the inner loop to determine whether each pixel was on or off. That was slow. When I redid it in C++ using an inline function, it was plenty fast.
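For reference, here is a minimal sketch of that CPU approach. It assumes you have already rendered the masked layer into a premultiplied RGBA8 bitmap (e.g. via CGBitmapContextCreate and renderInContext:); only the scan with the inline test is shown, and the function names are illustrative.

```cpp
#include <cstdint>
#include <cstddef>

// Inline per-pixel test: a pixel is "on" if its alpha byte is non-zero.
inline bool pixelOn(const uint8_t* px) {
    return px[3] != 0;  // alpha channel of an RGBA8 pixel
}

// Scan the rendered intersection bitmap, bailing out at the first lit pixel.
bool anyPixelOn(const uint8_t* rgba, size_t width, size_t height,
                size_t bytesPerRow) {
    for (size_t y = 0; y < height; ++y) {
        const uint8_t* row = rgba + y * bytesPerRow;
        for (size_t x = 0; x < width; ++x)
            if (pixelOn(row + 4 * x))
                return true;  // early out: the layers share a lit pixel
    }
    return false;
}
```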
I've written an app with an object detection model that processes images when an object is detected. The problem I'm running into is that an object is sometimes detected with 99% confidence even though the frame I'm processing is very blurry.
I've considered analyzing the frame and attempting to detect blurriness, or detecting device movement and skipping frames when the device is moving a lot.
Do you have any other suggestions for processing only sharp photos, or solutions other than the ones I've proposed? Thanks
You might have issues detecting "movement" when, for instance, driving in a car. In that case, looking at something inside the car is not considered movement, while looking at something outside is (unless it's far away). There can be many other cases like this.
I would start by checking whether the camera is in focus. That is not the same as checking whether a frame is blurry, but it might be very close.
The other option I can think of is simply checking 2 or more sequential frames and seeing if they are roughly the same. To do something like that, it is best to define a grid, for instance 16x16, on which you evaluate similar values. You would need to mipmap your photos, which done manually means resizing by half until you get to a 16x16 image (e.g. 2000x1500 would go to 1024x1024 -> 512x512 -> 256x256 ...). Then grab those 16x16 pixels and store them. Once you have enough frames (at least 2) you can start comparing these values. The GPU is perfect for the resizing, but those 16x16 values are probably best evaluated on the CPU. What you need to do is basically find the average pixel difference between 2 sequential 16x16 buffers, and then use that to decide whether detection should be enabled; see the sketch below.
This procedure may still not be perfect, but it should be quite feasible from a performance perspective. There may be shortcuts, as some tools already do resizing, so you may not need to halve the image manually. From a theoretical perspective you are creating sectors and computing their average color. If all the sectors have almost the same color across 2 or more frames, there is a high chance the camera did not move much in that time and the image should not be motion-blurred. Still, if the camera is not in focus you can have multiple sequential frames that are exactly the same and yet all blurry; the same limitation applies to detecting phone movement.
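A minimal sketch of that comparison step, assuming both frames have already been reduced to 16x16 grayscale buffers (the GPU resize is not shown, and the threshold is a made-up starting point):

```cpp
#include <cstdint>
#include <cstdlib>

// Average absolute difference between two 16x16 (256-entry) buffers of
// downsampled frames; a small value suggests the camera held still.
double averageDifference(const uint8_t a[256], const uint8_t b[256]) {
    int total = 0;
    for (int i = 0; i < 256; ++i)
        total += std::abs(int(a[i]) - int(b[i]));
    return total / 256.0;
}

// Enable detection only when sequential frames are nearly identical.
bool cameraIsSteady(const uint8_t prev[256], const uint8_t curr[256]) {
    return averageDifference(prev, curr) < 4.0;  // threshold needs tuning
}
```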
I am trying to come up with an efficient algorithm to query the top k most dominant colors in a UIImage (jpg or png). By dominant color I mean the color that is present in the largest number of pixels. The use case is mostly geared toward finding the top-1 (single most dominant) color to figure out an image's background. Below I document the algorithm I am currently trying to implement and would like some feedback.
First, my algorithm takes an image and draws it into a bitmap context. I do this so I have control over how many bytes per pixel will be present in my image data for processing/parsing.
Second, after drawing the image, I loop through every pixel in the image buffer. I quickly realized looping through every pixel would not scale; this became a huge performance bottleneck. To optimize it, I realized I need not analyze every pixel in the image. I also realized that as I scaled an image down, the dominant colors became more prominent and there were far fewer pixels to loop through. So the second step is actually to loop through every pixel in the resized image buffer.
Third, as I loop through each pixel I build up a CountedSet of UIColor objects; this is basically a histogram of color counts.
Finally, once I have my counted set, it is very easy to loop through it and answer top-k dominant-color queries.
So my algorithm in short is to resize the image (scale it down by some function proportional to the image's size), draw it into a bitmap context buffer once and cache it, loop through all the data in the buffer, and build out a histogram.
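For concreteness, here is a language-agnostic sketch of that pipeline in C++ (the resize and bitmap-context drawing are assumed to have already produced the rgba buffer; the CountedSet becomes a hash map keyed on the packed pixel value):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <utility>
#include <vector>

// Returns the k most frequent colors (packed RGBA8) with their counts.
std::vector<std::pair<uint32_t, int>>
topKColors(const uint8_t* rgba, size_t pixelCount, size_t k) {
    // Histogram: packed pixel value -> occurrence count.
    std::unordered_map<uint32_t, int> histogram;
    for (size_t i = 0; i < pixelCount; ++i) {
        uint32_t packed;
        std::memcpy(&packed, rgba + 4 * i, 4);  // one RGBA8 pixel as the key
        ++histogram[packed];
    }

    // Partial sort: only the top k entries need to be ordered.
    std::vector<std::pair<uint32_t, int>> entries(histogram.begin(),
                                                  histogram.end());
    k = std::min(k, entries.size());
    std::partial_sort(entries.begin(), entries.begin() + k, entries.end(),
                      [](const auto& a, const auto& b) {
                          return a.second > b.second;
                      });
    entries.resize(k);
    return entries;
}
```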
My question to the Stack Overflow community is: how efficient is my algorithm, and are there any gains or further optimizations I can make? I just need something that gives me reasonable performance, and after doing some performance testing this seemed to work pretty darn well. Furthermore, how accurate will it be? The rescale operation in particular kinda worries me. Am I trading a significant amount of accuracy for performance here? At the end of the day this will mostly just be used to determine the background color of an image.
Ideas for potential performance improvements:
1) When analyzing for a single dominant color, do some math to figure out if I have already found the most dominant color based on the number of pixels analyzed, and exit early.
2) For the top-k query, answer it quickly by leveraging a binary-heap data structure (the typical top-k query algorithm).
You can use some performance tweaks to avoid the downscaling altogether. As we cannot see your implementation, it is hard to say where the bottleneck really is. So here are some pointers on what to check or improve. Bear in mind that I do not code for your environment, so take this with a grain of salt:
pixel access
Most pixel access functions I have seen are SLOOOW, especially ones called putpixel, getpixel, pixels, .... That's because each single pixel access performs too many sanity/safety checks and color/space/address conversions. Instead, use direct pixel access. Most image interfaces I have seen have some kind of ScanLine[] access which gives you a direct pointer to a single line in the image. So if you fill your own array of pointers with it, you get direct pixel access without any slowdown. This usually speeds up the algorithm by a factor of 100 to 10000 on most platforms (depending on the usage).
To check for this, try to read or fill a 1024*1024 32-bit image and measure the time. On a standard PC it should take a few [ms] or less. If you have slow access it could even take seconds. For more info see Display an array of color in C
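A sketch of that benchmark in C++: fill a 1024x1024 32-bit image through a raw scanline-pointer table and time it (a few milliseconds is healthy; seconds indicate slow per-pixel access):

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const int w = 1024, h = 1024;
    std::vector<uint32_t> image(size_t(w) * h);

    // Scanline table: one raw pointer per row, like a ScanLine[] interface.
    std::vector<uint32_t*> scanline(h);
    for (int y = 0; y < h; ++y)
        scanline[y] = image.data() + size_t(y) * w;

    auto t0 = std::chrono::steady_clock::now();
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            scanline[y][x] = 0x00FF00FFu;  // direct write, no per-pixel calls
    auto t1 = std::chrono::steady_clock::now();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0);
    std::printf("fill took %lld ms\n", (long long)ms.count());
}
```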
Dominant color
if #1 is still not fast enough, you can take advantage of the fact that the dominant color has the highest probability of occurring in the image. So in theory you do not need to sample the whole image; instead you could:
sample every n-th pixel (which is downscaling with a nearest-neighbor filter) or use randomized pixel positions for sampling. Both approaches have their pros and cons, but if you combine them you can get much better results with far fewer pixels to process than the whole image. Of course this will lead to wrong results on some occasions (when you miss many of the dominant pixels), which is improbable but possible.
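A sketch of both sampling strategies combined, over a buffer of packed 32-bit pixels (the stride and sample budget are arbitrary starting points):

```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <unordered_map>

// Tally a strided pass plus an equal budget of random positions, so a
// periodic image pattern is less likely to hide the dominant color.
void sampleHistogram(const uint32_t* pixels, size_t count,
                     std::unordered_map<uint32_t, int>& histogram) {
    const size_t n = 16;  // stride: every 16th pixel
    for (size_t i = 0; i < count; i += n)
        ++histogram[pixels[i]];

    std::mt19937 rng(12345);  // fixed seed keeps results reproducible
    std::uniform_int_distribution<size_t> pick(0, count - 1);
    for (size_t i = 0; i < count / n; ++i)
        ++histogram[pixels[pick(rng)]];
}
```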
histogram structure
for a low color count, up to about 16 bits, you can use bucket-sort-style histogram acquisition, which is fast and can be done in O(n) where n is the number of pixels. No searching needed. So if you reduce colors from true color to 16-bit, you can significantly boost the speed of the histogram computation: you lower the constant factor hugely, and the complexity also goes from O(n*m) to O(n), which for a high color count m is a really big difference. See my C++ histogram example; it is in HSV, but in RGB it is almost the same...
If you need true color, you have 16.7M colors, which is not practical for the bucket-sort style. So you need to use a binary search or a dictionary to speed up the color lookup in the histogram. If you do not have this, then this is your slowdown.
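A sketch of the 16-bit bucket approach: reduce each RGBA8 pixel to RGB565 so the histogram is a fixed array of 65536 bins, with no lookup at all (for full 24-bit color, a hash map plays the role of the dictionary instead):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// O(n) bucket histogram over colors quantized to 16-bit RGB565.
std::vector<uint32_t> histogramRGB565(const uint8_t* rgba, size_t pixelCount) {
    std::vector<uint32_t> bins(65536, 0);
    for (size_t i = 0; i < pixelCount; ++i) {
        const uint8_t* p = rgba + 4 * i;
        uint16_t c = uint16_t(((p[0] >> 3) << 11) |  // 5 bits of red
                              ((p[1] >> 2) << 5)  |  // 6 bits of green
                              (p[2] >> 3));          // 5 bits of blue
        ++bins[c];
    }
    return bins;
}
```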
histogram sort
How did you sort the histogram? If you have a wrong sort implementation, it can take a lot of time for big color counts. I usually use bubble sort in my examples because it is less code to write and usually enough, but I have seen too many bubble sorts here on SO wrongly implemented so they always hit the worst-case O(n^2) time (and even I sometimes do it). For time-sensitive code I use quick-sort. See bubble sort in C++.
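In C++ the standard library already gives you a quick-sort-based sort, so ordering even a large histogram by count is a one-liner:

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Sort (color, count) entries by descending count; std::sort is introsort
// (quick-sort based), so the worst case stays O(n log n).
void sortHistogramDescending(std::vector<std::pair<uint32_t, int>>& hist) {
    std::sort(hist.begin(), hist.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });
}
```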
Also, your task really resembles color quantization (or is it just me?), so take a look at: Effective gif/image color quantization?
Downscaling an image requires looking at each pixel so you can pick a new pixel closest to the average color of some group of neighbors. The reason this appears to happen so fast compared to your implementation of iterating through all the pixels is that CoreGraphics hands the scaling task off to the GPU, whereas your approach uses the CPU to iterate through each pixel, which is much slower.
So the thing you need to do is write some GPU-based code to scan through your original image, looking at each pixel and tallying up the color counts as you go. This has the advantage not only of being very fast, but also of giving you an accurate count of colors. Downsampling, as I mentioned, produces pixels that are color averages, so you won't end up with color counts that reliably correlate to your original image (unless you happen to be downscaling solid colors, but in the typical case you'll end up with something other than what you started with).
I recommend looking into Apple's Metal framework for an API that lets you write code directly for the GPU. It'll be a challenge to learn, but I think you'll find it interesting and when you're done your code will scan original images extremely fast without having to go through any extra downsampling effort.
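As a starting point, here is what such a kernel might look like in the Metal Shading Language (itself a C++ dialect). This is a hypothetical sketch, not Apple sample code: it quantizes each pixel to RGB565 and atomically bumps one of 65536 bins in a device buffer; the host-side setup (pipeline, textures, buffer allocation) is omitted.

```cpp
#include <metal_stdlib>
using namespace metal;

// One thread per pixel: quantize to RGB565 and tally into a shared
// histogram buffer of 65536 atomic counters.
kernel void tallyColors(texture2d<float, access::read> src [[texture(0)]],
                        device atomic_uint* bins [[buffer(0)]],
                        uint2 gid [[thread_position_in_grid]]) {
    if (gid.x >= src.get_width() || gid.y >= src.get_height())
        return;
    float4 c = src.read(gid);
    uint bin = (uint(c.r * 31.0f) << 11) |
               (uint(c.g * 63.0f) << 5)  |
                uint(c.b * 31.0f);
    atomic_fetch_add_explicit(&bins[bin], 1u, memory_order_relaxed);
}
```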
I am using a GPUImage edge detector to create edge boundaries for text. I then want to convert those text elements to solids rather than outlines. The first way I can think of doing that is to flood-fill the region.
Is there a way to flood-fill using GPUImage, or perhaps a better way to achieve the result I want?
GPUImage is based around the application of vertex and fragment shaders to input image data. Unfortunately, standard flood fill algorithms aren't a great fit for these shaders.
Flood fills are generally calculated in a way where the result for one pixel depends on results from other pixels. This makes them difficult to calculate in a parallel manner within a fragment shader. Maybe you could rig up something as an iterative calculation, but this might be an operation that's best handled on the CPU side.
Beyond that, if your goal is to highlight text and hide the rest, there might be other ways of achieving this. I've seen adaptive thresholds (which threshold based on a box blur of a large area of surrounding pixels) do this, and you might be able to help that by running a bilateral blur beforehand (a bilateral blur can smooth an image while preserving sharp edges, enhancing those edges as boundaries).
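GPUImage ships an adaptive threshold filter you can try directly; to make the parameters concrete, here is the same idea sketched with OpenCV's C++ API (file names and constants are illustrative): each pixel is compared against the mean of a large surrounding neighborhood, which turns text strokes into solid regions.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat gray = cv::imread("text.png", cv::IMREAD_GRAYSCALE);
    cv::Mat solidText;
    cv::adaptiveThreshold(gray, solidText, 255,
                          cv::ADAPTIVE_THRESH_MEAN_C,  // threshold vs. local mean
                          cv::THRESH_BINARY_INV,       // text becomes solid white
                          25,                          // neighborhood size (odd)
                          10);                         // offset below the mean
    cv::imwrite("solid.png", solidText);
}
```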
Depending on your specific needs, this might be a broader language-independent image processing question that the folks over at Signal Processing may be able to help with.
I am working on a project that requires me to:
Look at images that contain relatively well-defined objects, e.g.
and pick out the color of the n most prominent objects (it's generic; n could be 1, 2, 3, etc...) in some space (whether it be RGB, HSV, whatever) and return it.
I am looking into ways to segment images like this into the independent objects. Once that's done, I'm under the impression that it won't be particularly difficult to find the contours of the segments and analyze them for average or centroid color, etc...
I looked briefly into the Watershed algorithm, which seems like it could work, but I was unsure of how to generate the marker image for an indeterminate number of blobs.
What's the best way to segment such an image, and if it's using Watershed, what's the best way to generate the corresponding marker image of integers?
Check out this possible approach:
Efficient Graph-Based Image Segmentation
Pedro F. Felzenszwalb and Daniel P. Huttenlocher
Here's what it looks like on your image:
I'm not an expert but I really don't see how the Watershed algorithm can be very useful to your segmentation problem.
From my limited experience/exposure to this kind of problem, I would think that the way to go would be a sliding-window approach to segmentation. Basically this entails walking the image using a window of a set size and attempting to determine whether the window encompasses background vs. an object; see the sketch below. You will want to try different window sizes and steps.
Doing this should allow you to detect the object in the image, presuming that the images contain relatively well-defined objects. You might also attempt to perform segmentation after converting the image to black and white with a certain threshold that gives good separation of background vs. objects.
Once you've identified the object(s) via the sliding window you can attempt to determine the most prominent color using one of the methods you mentioned.
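A bare-bones version of that window walk, using OpenCV in C++. The background-vs-object test here is a deliberate placeholder (local intensity variance against an arbitrary threshold); in practice you would swap in whatever classifier fits your images:

```cpp
#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat gray = cv::imread("objects.png", cv::IMREAD_GRAYSCALE);
    const int win = 64, step = 16;  // window size and stride: tune both

    for (int y = 0; y + win <= gray.rows; y += step) {
        for (int x = 0; x + win <= gray.cols; x += step) {
            cv::Mat patch = gray(cv::Rect(x, y, win, win));
            cv::Scalar mean, stddev;
            cv::meanStdDev(patch, mean, stddev);
            // Placeholder rule: flat patches are background, busy ones are not.
            if (stddev[0] > 12.0)
                std::printf("object candidate at (%d, %d)\n", x, y);
        }
    }
}
```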
UPDATE
Based on your comment, here's another potential approach that might work for you:
If you believe the objects will have mostly uniform color you might attempt to process the image to:
remove noise;
map the original image to a reduced color space (e.g. 256 or even 16 colors)
detect connected components based on pixel color and determine which ones are large enough
You might also benefit from re-sampling the image to a lower resolution (e.g. if the image is 1024 x 768 you might reduce it to 256 x 192) to help speed up the algorithm.
The only thing left to do would be to determine which component is the background. This is where it might make sense to also attempt to do the background removal by converting to black/white with a certain threshold.
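Putting those steps together, here is a rough OpenCV C++ sketch (the file name, color reduction, and minimum-area rule are all placeholder choices): denoise, quantize the colors, then keep the large connected components of each quantized color.

```cpp
#include <cstdint>
#include <cstdio>
#include <opencv2/opencv.hpp>
#include <set>

int main() {
    cv::Mat img = cv::imread("objects.png");
    cv::resize(img, img, cv::Size(256, 192));  // lower resolution for speed
    cv::medianBlur(img, img, 5);               // noise removal

    // Reduced color space: keep the top 3 bits per channel (512 colors max).
    cv::Mat reduced = img & cv::Scalar(0xE0, 0xE0, 0xE0);

    // Collect the distinct reduced colors actually present.
    std::set<uint32_t> colors;
    for (int y = 0; y < reduced.rows; ++y)
        for (int x = 0; x < reduced.cols; ++x) {
            cv::Vec3b p = reduced.at<cv::Vec3b>(y, x);
            colors.insert(p[0] | (p[1] << 8) | (p[2] << 16));
        }

    // Keep only components covering at least 1% of the image.
    const int minArea = int(reduced.total() / 100);
    for (uint32_t c : colors) {
        cv::Scalar bgr(c & 0xFF, (c >> 8) & 0xFF, (c >> 16) & 0xFF);
        cv::Mat mask, labels, stats, centroids;
        cv::inRange(reduced, bgr, bgr, mask);  // binary mask for this color
        int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);
        for (int i = 1; i < n; ++i)            // label 0 is the mask background
            if (stats.at<int>(i, cv::CC_STAT_AREA) >= minArea)
                std::printf("component: color=%06x area=%d\n",
                            c, stats.at<int>(i, cv::CC_STAT_AREA));
    }
}
```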
Suppose I have an image file/URL, and I want my software to search it within a set of up to 100 images (or at least in that order of magnitude). The target image that the software should find should be the "same" image as the given image, but it should still be able to "forgive" slight processing on either of them (the two images may have been cropped differently, or they were compressed differently).
The question is: is this a feasible task, given that I won't have any of the images before the search takes place (i.e., there won't be any indexing prior to the search)? Is it likely to work in sub-second time (remember that the compare set is quite small)? And if it is feasible, which tools can I use for this task? These could be software components or even an online service (I can live with that for a proof of concept). Can OpenSURF help me here?
To focus my question further - I'm not asking which algorithms to use, at this point I would rather use an existing tool/API/service.
The target image that the software should find should be the "same" image as the given image, but it should still be able to "forgive" slight processing on either of them.
If "slight processing" doesn't involve rotation, but only "cropping", then simple cross-correlation should work, if there could be perspective correction, rotation, lens distortion correction, then things are more complicated.
I think this method is quite forgiving to slight color corrections. Anyway, you can always convert both images to grayscale and compare grayscale versions if you want.
To focus my question further - I'm not asking which algorithms to use, at this point I would rather use an existing tool/API/service.
You can start with cvMatchTemplate from the OpenCV library (the link points to the C version of the API, but it's also available for C++ and Python). Use the cropped image as a template, and look for it in all your images.
If the images you compare have dark features on light backgrounds, you may benefit from using CV_TM_CCOEFF or CV_TM_CCOEFF_NORMED methods. They both subtract the average over the template area from both images. Normalized methods (CV_TM_*_NORMED) generally work better but are slower than their non-normalized counterparts.
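In OpenCV's C++ API the same call is cv::matchTemplate; a minimal sketch, with placeholder file names:

```cpp
#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat image = cv::imread("candidate.png");  // one of the ~100 images
    cv::Mat templ = cv::imread("query.png");      // the (possibly cropped) query

    cv::Mat result;
    cv::matchTemplate(image, templ, result, cv::TM_CCOEFF_NORMED);

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    // For TM_CCOEFF_NORMED, maxVal near 1.0 means a strong match at maxLoc.
    std::printf("best score %.3f at (%d, %d)\n", maxVal, maxLoc.x, maxLoc.y);
}
```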
You may consider doing some preprocessing of the images before the cross-correlation. If you normalize them first, the cross-correlation will be less sensitive to slight brightness/contrast modifications. If you detect edges first, as suggested by @misha, you'll lose the color/lightness information, but the results for contour overlap will be much better.
jetxee set you off on the right track. However, if you simply use template matching, you can run into problems where the background interferes with your template matching result. For example, if your template is a building and your background is primarily light (e.g. desert sand), then the template matching will fail because the lighter background will always return a higher cross-correlation than the darker template. Here is an example of this problem.
The way you solve it is the same as what is in the link:
Perform edge-detection on both your template and the target image.
Throw the original template and image away
Perform template detection using the edge-detected template and edge-detected target image
As far as forgiving slight processing, the edge detection step will take care of that. As long as the edges in the two images are not modified significantly (blurred, optically distorted), the approach will work.
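A sketch of those three steps with OpenCV in C++ (Canny thresholds and file names are placeholders that need tuning for real images):

```cpp
#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat image = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);
    cv::Mat templ = cv::imread("template.png", cv::IMREAD_GRAYSCALE);

    // Step 1: edge-detect both template and target.
    cv::Mat imageEdges, templEdges;
    cv::Canny(image, imageEdges, 50, 150);
    cv::Canny(templ, templEdges, 50, 150);

    // Steps 2-3: ignore the originals and match on the edge maps only.
    cv::Mat result;
    cv::matchTemplate(imageEdges, templEdges, result, cv::TM_CCORR_NORMED);

    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
    std::printf("edge-match score %.3f at (%d, %d)\n",
                maxVal, maxLoc.x, maxLoc.y);
}
```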
I know you are not looking specifically for algorithms, but nonetheless, let me suggest the following which can accomplish exactly what you are trying to do, very efficiently...
For cropped versions of the same image, including rotation, the Fourier-Mellin transform or a log-polar transform (watch out for the artsy semi-nude drawing - a good source however) will give you the translation, rotation, and scale coefficients between the two images, allowing you to determine what operations were needed to go from one to the other.
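The core trick is that in log-polar space, rotation and scaling become plain translations, which phase correlation can recover. A hedged OpenCV C++ sketch (it assumes both grayscale images are the same size; cv::logPolar is deprecated in newer OpenCV in favor of cv::warpPolar but still works):

```cpp
#include <cmath>
#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat a = cv::imread("a.png", cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread("b.png", cv::IMREAD_GRAYSCALE);
    a.convertTo(a, CV_32F);  // phaseCorrelate needs float input
    b.convertTo(b, CV_32F);

    cv::Point2f center(a.cols / 2.0f, a.rows / 2.0f);
    double m = a.cols / std::log(a.cols / 2.0);  // log-polar magnitude scale
    cv::Mat lpA, lpB;
    cv::logPolar(a, lpA, center, m, cv::INTER_LINEAR);
    cv::logPolar(b, lpB, center, m, cv::INTER_LINEAR);

    // In log-polar space: x-shift encodes log(scale), y-shift encodes rotation.
    cv::Point2d shift = cv::phaseCorrelate(lpA, lpB);
    double scale = std::exp(shift.x / m);
    double rotationDeg = 360.0 * shift.y / lpA.rows;
    std::printf("scale %.3f, rotation %.1f deg\n", scale, rotationDeg);
}
```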