Getting locations of objects inside a picture - iOS

So here is a question that is sure to stump some people.
Here is my scenario. I want a user to take a picture of something; in this case it will just be a black rectangle with white circles on it. I don't care about the size of the circles, but I want to know how many circles there are and where they are located with respect to the photo. Then the user will enter the width and height of the photo they just took, and I will be able to tell how far apart the circles are from each other.
Does anyone have any clue how I could do this?

I don't think you will get a straightforward answer, but below is my approach.
Take the image and get its pixel data using CGBitmapContext (reference).
Now search the pixel array for the white pixels (i.e. pixels whose color value is greater than 240/255).
Then find the center of each white circle using some algorithm (reference).
Store those centers in an array, and later, when the user enters the real width and height, convert the pixel distances between centers accordingly.
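A minimal sketch of the pixel-scanning part of that approach, assuming the photo arrives as a UIImage, that "white" simply means every channel above 240, and that touching white pixels belong to the same circle (the 4-way grouping and the brightness threshold are illustrative choices, not the only option):

import UIKit

// A sketch, not a drop-in solution: render the photo into an RGBA8 bitmap,
// collect "white" pixels (all channels above the threshold), group touching
// white pixels together with a 4-way walk, and return each group's centroid
// as a candidate circle center.
func whiteCircleCenters(in image: UIImage, threshold: UInt8 = 240) -> [CGPoint] {
    guard let cgImage = image.cgImage else { return [] }
    let width = cgImage.width, height = cgImage.height

    // Draw into a bitmap context with a known pixel layout (RGBA, 8 bits each),
    // so the buffer can be indexed directly instead of fetching pixels one by one.
    guard let context = CGContext(data: nil, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return [] }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let data = context.data else { return [] }
    let pixels = data.assumingMemoryBound(to: UInt8.self)

    func isWhite(_ x: Int, _ y: Int) -> Bool {
        let i = (y * width + x) * 4
        return pixels[i] > threshold && pixels[i + 1] > threshold && pixels[i + 2] > threshold
    }

    var visited = [Bool](repeating: false, count: width * height)
    var centers: [CGPoint] = []

    for y in 0..<height {
        for x in 0..<width where isWhite(x, y) && !visited[y * width + x] {
            // Walk this blob of touching white pixels and average its coordinates.
            var stack = [(x, y)]
            visited[y * width + x] = true
            var sumX = 0, sumY = 0, count = 0
            while let (px, py) = stack.popLast() {
                sumX += px; sumY += py; count += 1
                for (nx, ny) in [(px + 1, py), (px - 1, py), (px, py + 1), (px, py - 1)]
                where nx >= 0 && nx < width && ny >= 0 && ny < height
                    && isWhite(nx, ny) && !visited[ny * width + nx] {
                    visited[ny * width + nx] = true
                    stack.append((nx, ny))
                }
            }
            centers.append(CGPoint(x: CGFloat(sumX) / CGFloat(count),
                                   y: CGFloat(sumY) / CGFloat(count)))
        }
    }
    return centers
}

The returned centers are in pixel coordinates; multiplying the differences between them by (real width / pixel width) gives the real-world distances the question asks about.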

Related

Comparing Images via pixels

I was playing around with images and came across a little game I tried to create.
You see an image (for simplicity let's say a circle) and you have to redraw the circle as exactly as possible on top of it.
All of that works in my little project already.
I want to be able to tell what percentage of the image was recreated correctly (and again, for simplicity, no colours are needed; it's always black and white).
Could I just count the black pixels overlaying the image, subtract the ones that are not, and divide by the number of black pixels in the original?
I guess that would look like this:
ratio = (correctPixelCount - wrongPixelCount) / originalPixelCount
If yes, how would I go about getting each pixel and comparing them?
If no, what else could I do?
PS: I already tried an image-comparison CocoaPod called AIImageCompare.
Unfortunately, it crashes for some unknown reason.
Thank you!
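A rough sketch of that pixel-by-pixel comparison, assuming both drawings are rendered at the same size, that "black" simply means a dark red channel, and applying the ratio from the question directly:

import UIKit

// A rough sketch of the proposed ratio, assuming both images share the same
// dimensions. Returns roughly -1...1, where 1.0 means every original black
// pixel was hit and nothing extra was drawn.
func recreationScore(original: UIImage, drawing: UIImage, darkThreshold: UInt8 = 128) -> Double? {
    // Render an image into a plain RGBA8 byte buffer we can index directly.
    func rgbaBytes(of image: UIImage) -> (bytes: [UInt8], width: Int, height: Int)? {
        guard let cg = image.cgImage else { return nil }
        let w = cg.width, h = cg.height
        var buffer = [UInt8](repeating: 0, count: w * h * 4)
        let ok = buffer.withUnsafeMutableBytes { raw -> Bool in
            guard let ctx = CGContext(data: raw.baseAddress, width: w, height: h,
                                      bitsPerComponent: 8, bytesPerRow: w * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return false }
            ctx.draw(cg, in: CGRect(x: 0, y: 0, width: w, height: h))
            return true
        }
        return ok ? (buffer, w, h) : nil
    }

    guard let a = rgbaBytes(of: original), let b = rgbaBytes(of: drawing),
          a.width == b.width, a.height == b.height else { return nil }

    var originalBlack = 0, correct = 0, wrong = 0
    for i in stride(from: 0, to: a.bytes.count, by: 4) {
        let originalIsBlack = a.bytes[i] < darkThreshold
        let drawingIsBlack = b.bytes[i] < darkThreshold
        if originalIsBlack { originalBlack += 1 }
        if drawingIsBlack && originalIsBlack { correct += 1 }   // recreated pixel
        if drawingIsBlack && !originalIsBlack { wrong += 1 }    // stray pixel
    }
    guard originalBlack > 0 else { return nil }

    // ratio = (correctPixelCount - wrongPixelCount) / originalPixelCount
    return Double(correct - wrong) / Double(originalBlack)
}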

Trim transparency of a UIImage

I was wondering what would be the best way to trim the "canvas" of a UIImage (pretty much like any image editor out there allows).
Now, the previous example is not a single UIImage. It's actually two UIViews. So clipping the superview against the blue box would do the trick, but I am looking for the best possible way to do this, given that there could be several blue boxes in the "canvas".
Is there a faster way than going through every pixel?
Thanks!
Thinking about it algorithmically, I would say no. You need to find the pixel that extends furthest to the left, right, top and bottom. Unless you look at every pixel from each direction you could miss non-transparent pixels.
You could speed things up if you figure out how to map your image into memory and then index into memory directly rather than using a high level function that fetches pixels. I would suggest searching from the top down (which would be sequential memory accesses) until you find a non-clear pixel. Then search from the end of the image backwards, which would give you the bottom-most pixel.
You would then want to limit your search from each side to only look starting at the first non-transparent pixel from the top and ending at the last non-transparent pixel on the bottom.
For anything other than a very large image this should take a fraction of a second.
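A minimal sketch of that search, assuming RGBA8 pixel data where "clear" means an alpha of zero; the top and bottom passes walk rows sequentially, and the left/right passes only cover the rows in between:

import UIKit

// A sketch of the top/bottom-first bounding-box search described above,
// assuming RGBA8 pixel data where "transparent" means alpha == 0.
func opaqueBounds(of image: UIImage) -> CGRect? {
    guard let cg = image.cgImage else { return nil }
    let w = cg.width, h = cg.height
    var pixels = [UInt8](repeating: 0, count: w * h * 4)
    let drawn = pixels.withUnsafeMutableBytes { raw -> Bool in
        guard let ctx = CGContext(data: raw.baseAddress, width: w, height: h,
                                  bitsPerComponent: 8, bytesPerRow: w * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        ctx.draw(cg, in: CGRect(x: 0, y: 0, width: w, height: h))
        return true
    }
    guard drawn else { return nil }

    func alpha(_ x: Int, _ y: Int) -> UInt8 { pixels[(y * w + x) * 4 + 3] }
    func rowHasContent(_ y: Int) -> Bool { (0..<w).contains { alpha($0, y) > 0 } }

    // Scan from the top and bottom first (sequential memory access).
    guard let top = (0..<h).first(where: rowHasContent),
          let bottom = (0..<h).reversed().first(where: rowHasContent) else { return nil }

    // Only search the rows between top and bottom for the left/right edges.
    func columnHasContent(_ x: Int) -> Bool { (top...bottom).contains { alpha(x, $0) > 0 } }
    guard let left = (0..<w).first(where: columnHasContent),
          let right = (0..<w).reversed().first(where: columnHasContent) else { return nil }

    return CGRect(x: left, y: top, width: right - left + 1, height: bottom - top + 1)
}

Note the returned rect is in pixel coordinates of the backing CGImage, so it may need to be divided by the image's scale before cropping.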
Ok, I was being dumb. The union of the subviews is all I really needed, so it's just a simple loop over the subviews, doing a CGRect union against their frames.
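For the record, that loop collapses to a single reduce over the subviews (canvasView here is just a stand-in for whatever container view holds the boxes):

// Union of every subview's frame; .null is the identity element for union,
// so an empty canvas yields CGRect.null rather than a spurious rect at the origin.
let contentBounds = canvasView.subviews.reduce(CGRect.null) { $0.union($1.frame) }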

Cropping image by selecting object and color matching

We are developing an app where we need to crop an image according to the selected object's area. The user will draw a line, and we need to select the object and crop it. The crop needs to work like it does in the app YourMoji.
So far we have tried getting the color of the pixels along the line, comparing those with the color of every pixel in the image, and making a path from the matches to clip the image. But this is going almost nowhere.
Is it possible to crop an image this way, or are we going about it the wrong way? Can anyone provide a way to do this, or suggest how to modify what we have done so far? Any advice and suggestions will be greatly appreciated!
Thanks in advance.
I guess what you want is the image segmentation algorithm called Graph Cut.
Here are two Github repositories, hope these would help:
GraphCut
GrabCutIOS
I'm not exactly clued up on image manipulation, but the first algorithm that comes to mind is something like this:
Take the average of the pixels in the line (as you have)
Since you appear to want faces, you might want to weight reds and blues over green. Not much green in faces of any skin tone.
For each pixel, if the colour falls outside a given threshold around your selected average, remove it / make it transparent (see the sketch after this list).
Perhaps the closer to the original line (or centroid), the less strict the threshold becomes.
I'd then provide the user with some tools for:
Sensitivity: how large the threshold is
Eraser: to remove parts of the image that your algorithm missed
Paintbrush: to replace parts of the image that your algorithm incorrectly removed.
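A minimal sketch of that thresholding step, assuming the average colour along the user's line has already been computed (averageRed/Green/Blue, 0-255 each) and that the red/blue-over-green preference is expressed as simple per-channel weights; the distance-dependent threshold and the eraser/paintbrush tools are not shown:

import UIKit

// A sketch of the thresholding step: given the average colour sampled along
// the user's line and a sensitivity, clear every pixel whose weighted
// distance from that average exceeds the threshold. The default weights
// (red and blue counting more than green) are an assumption, not a rule.
func maskPixels(in image: UIImage,
                averageRed: Double, averageGreen: Double, averageBlue: Double,
                threshold: Double,
                weights: (r: Double, g: Double, b: Double) = (1.0, 0.5, 1.0)) -> UIImage? {
    guard let cg = image.cgImage else { return nil }
    let w = cg.width, h = cg.height
    guard let ctx = CGContext(data: nil, width: w, height: h,
                              bitsPerComponent: 8, bytesPerRow: w * 4,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    ctx.draw(cg, in: CGRect(x: 0, y: 0, width: w, height: h))
    guard let data = ctx.data else { return nil }
    let p = data.assumingMemoryBound(to: UInt8.self)

    for i in stride(from: 0, to: w * h * 4, by: 4) {
        // Weighted per-channel distance from the selected average colour.
        let d = weights.r * abs(Double(p[i])     - averageRed)
              + weights.g * abs(Double(p[i + 1]) - averageGreen)
              + weights.b * abs(Double(p[i + 2]) - averageBlue)
        if d > threshold {
            // Outside the tolerance: make the pixel transparent.
            p[i] = 0; p[i + 1] = 0; p[i + 2] = 0; p[i + 3] = 0
        }
    }
    guard let result = ctx.makeImage() else { return nil }
    return UIImage(cgImage: result)
}

The sensitivity tool would simply adjust the threshold parameter before re-running the mask.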

Trim and find position of result with RMagick

I'm working on a jigsaw puzzle webapp, and one of the requirements is automatically generating puzzle pieces from any image. I'm using RMagick for the image processing. I've got some sets of blank puzzle pieces to use as masks, and I can handle that part, but then I need to trim the whitespace (er, transparentspace) out of the resulting images.
Now, I know I can use trim for this - I might have to put a one-pixel border on it to make sure all four corners are the right color, but that's easy and I can just subtract one pixel from the final number. The only problem is that I also need to record the position of the piece. According to the documentation on trim, the function will "retain the offset information", which sounds like exactly what I need. But I can't find anything about how to retrieve the offset information! Does anyone know how to do that?
If worst comes to worst, I suppose I could always just look through pixel-by-pixel, find the boundaries myself, and use crop to trim the picture, but that wouldn't exactly be good for performance.
Aha, found it. image.page.x and image.page.y give the upper left corner, and then image.rows and image.columns have the height and width.

Flood fill performance issue on iPad

I am using the 4-way flood-fill algorithm.
I have a transparent image with a black outline.
That is the starting-point image (without color).
And after filling the color in this image, it looks like this.
Please help me and let me know what I can do to get a proper fill.
I have used and implemented flood fill myself in other projects; the algorithm goes through the whole drawing, looking for closed spaces, and then fills inside (or outside) them.
Your problem happens with every tool in the world that fills a drawing, and the cause is the same: the spaces are not 100% closed.
The flood-fill algorithm goes pixel by pixel, and when it detects a black pixel it stops. For example, if the outline of the scuba diver's arm is not thick enough or has holes in it, the flood fill manages to go through it instead of treating it as a closed space.
Nobody here can tell you exactly why without taking your project and analysing it, so the best I can offer is a guideline about where your error could be.
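To make the leak concrete, here is a bare-bones 4-way flood fill over an RGBA8 buffer (how the buffer is obtained from the UIImage is assumed, and this is only a sketch of the technique, not the library used in the question). Pixels darker than outlineTolerance act as the boundary; a single-pixel gap in that boundary lets the stack walk straight through it:

// A bare-bones 4-way flood fill over an RGBA8 buffer (width * height * 4 bytes).
// Pixels darker than outlineTolerance count as outline and stop the fill; if
// the outline has even a one-pixel gap, the fill escapes through it, which is
// exactly the leak described above.
func floodFill(pixels: UnsafeMutablePointer<UInt8>, width: Int, height: Int,
               startX: Int, startY: Int,
               fill: (r: UInt8, g: UInt8, b: UInt8),
               outlineTolerance: UInt8 = 40) {
    guard startX >= 0, startX < width, startY >= 0, startY < height else { return }

    func isOutline(_ x: Int, _ y: Int) -> Bool {
        let i = (y * width + x) * 4
        return pixels[i] < outlineTolerance
            && pixels[i + 1] < outlineTolerance
            && pixels[i + 2] < outlineTolerance
            && pixels[i + 3] > 0
    }

    var visited = [Bool](repeating: false, count: width * height)
    var stack = [(startX, startY)]
    visited[startY * width + startX] = true

    while let (x, y) = stack.popLast() {
        guard !isOutline(x, y) else { continue }   // hit the black line: stop here
        let i = (y * width + x) * 4
        pixels[i] = fill.r; pixels[i + 1] = fill.g; pixels[i + 2] = fill.b; pixels[i + 3] = 255

        // Push the 4 neighbours; an unclosed outline lets this spill outside the shape.
        for (nx, ny) in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        where nx >= 0 && nx < width && ny >= 0 && ny < height && !visited[ny * width + nx] {
            visited[ny * width + nx] = true
            stack.append((nx, ny))
        }
    }
}

Raising outlineTolerance makes anti-aliased grey edge pixels count as boundary too, which is the same idea as the andTolerance parameter mentioned in the next answer.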
I tried the code with an image that has a very precisely defined border around it (from here) and it seems to work OK with that image. I suspect that if you zoom into your image there is some grey aliasing around the edges which won't get filled. Perhaps the algorithm has a threshold function that can be tweaked?
Try setting the andTolerance value (I tried 4, which seemed to improve my example):
//Call function to flood fill and get new image with filled color
UIImage *image1 = [self.image floodFillFromPoint:tpoint withColor:newcolor andTolerance:4];
