Comparing images via pixels - iOS

I was playing around with images and came across a little game I tried to create.
You see an image (for simplicity, let's say a circle) and you have to redraw the circle as exactly as possible on top of it.
All of that works in my little project already.
I want to be able to tell what percentage of the image was recreated correctly (and again, for simplicity, no colours needed; it's always black and white).
Could I just count the black pixels overlaying the image, subtract the ones that are not, and divide by the number of black pixels in the original?
I guess it would look like this:
ratio = (correctPixelCount - wrongPixelCount) / originalPixelCount
If yes, how would I go about getting each pixel and comparing them?
If no, what else could I do?
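To make the counting idea concrete, here is a rough, untested sketch of how those counts could be computed, assuming both images are first rendered into equal-sized RGBA bitmaps (all names here are illustrative, not from any library):

#import <UIKit/UIKit.h>

// Render a UIImage into a freshly allocated w*h RGBA byte buffer.
// The caller must free() the result.
static uint8_t *RGBABytes(UIImage *image, size_t w, size_t h) {
    uint8_t *bytes = calloc(w * h * 4, 1);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(bytes, w, h, 8, w * 4, space,
                                             kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image.CGImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    return bytes;
}

// ratio = (correct - wrong) / originalBlackCount, as in the formula above.
static double MatchRatio(UIImage *original, UIImage *drawing, size_t w, size_t h) {
    uint8_t *a = RGBABytes(original, w, h);
    uint8_t *b = RGBABytes(drawing, w, h);
    double originalBlack = 0, correct = 0, wrong = 0;
    for (size_t i = 0; i < w * h * 4; i += 4) {
        // "Black" here means visible (alpha high) and dark; for pure
        // black-and-white images, checking one colour channel is enough.
        BOOL blackA = a[i + 3] > 127 && a[i] < 128;
        BOOL blackB = b[i + 3] > 127 && b[i] < 128;
        if (blackA)            originalBlack += 1;
        if (blackA && blackB)  correct       += 1;
        if (!blackA && blackB) wrong         += 1;
    }
    free(a);
    free(b);
    return originalBlack > 0 ? (correct - wrong) / originalBlack : 0.0;
}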
PS: I already tried an image-comparison CocoaPod called AIImageCompare.
Unfortunately, it crashes for some unknown reason.
Thank you!

Related

SpriteKit SKTileMapNode vertical line glitch

I am making a 2D platformer and I decided to use multiple SKTileMapNodes as my backgrounds. Even with one tile map, I get these vertical or horizontal lines that appear and disappear when I'm moving the player around the screen. See image below:
My tiles are 256x256 and I'm storing them in a tile set .sks file. I'm not exactly sure why I'm getting this or how to get rid of it, and it is quite annoying. Wondering if others experience this as well.
I'm considering not using the tile maps, but I would prefer to use them if I can.
Thanks for any help with this!!!
I had the same issue and was able to solve it by "extruding" the tiled image a couple of pixels. This provides a little cushion of pixels to use when the floating-point issue occurs, instead of displaying nothing (hence the gap). This video sums it up pretty well:
Unity: extruding tile map images
If you're using TexturePacker to generate your sprite atlases, there is an option to add this automatically, without having to do it to your tile images yourself.
Hope that helps!
Sort of like the "extruding" suggested by @cheaze, I simply make the tile size in the drawing code a tiny amount larger than the required tile size. This means the assets themselves do not have to be changed.
E.g. if your assets are sized 256 x 256 and all of your calculations are based on that, draw the textures as 256.02 x 256.02 pixels in size:
// Draw each tile a fraction larger than its logical 256 x 256 size
SKSpriteNode *tile = [SKSpriteNode spriteNodeWithTexture:texture size:CGSizeMake(256.02, 256.02)];
Adding only 0.02 of a pixel per side will overlap your tiles automatically and remove the line glitches, depending on your camera speed and frame rate.
If the problem is really bad, you can even go so far as to add half a pixel (+0.5) or an entire pixel to remove the glitches, and the user will not be able to see the difference (since a one-pixel difference on a Retina screen is hard to distinguish).

Trim transparency of a UIImage

I was wondering what would be the best way to trim the "canvas" of a UIImage (pretty much like any image editor out there allows).
Now, the previous example is not a single UIImage; it's actually 2 UIViews. So clipping the superview against the blue box would do the trick, but I am looking for the best possible way to do this, given that there could be several blue boxes in the "canvas".
Is there a faster way than going through every pixel?
Thanks!
Thinking about it algorithmically, I would say no. You need to find the pixels that extend furthest to the left, right, top, and bottom. Unless you look at every pixel from each direction, you could miss non-transparent pixels.
You could speed things up if you figure out how to map your image into memory and then index into that memory directly, rather than using a high-level function that fetches pixels. I would suggest searching from the top down (which means sequential memory accesses) until you find a non-clear pixel, then searching from the end of the image backwards, which gives you the bottom-most non-transparent pixel.
You would then want to limit the searches from each side to rows between the first non-transparent pixel from the top and the last non-transparent pixel at the bottom.
For anything other than a very large image this should take a fraction of a second.
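A sketch of that scan, assuming the image has already been drawn into a w x h RGBA buffer as described (the helper names are made up for illustration):

#import <UIKit/UIKit.h>

static inline BOOL OpaqueAt(const uint8_t *p, size_t w, size_t x, size_t y) {
    return p[(y * w + x) * 4 + 3] != 0; // alpha byte of an RGBA pixel
}

// Tightest rectangle containing all non-transparent pixels of a w x h
// RGBA buffer; returns CGRectNull if the image is fully transparent.
static CGRect OpaqueBounds(const uint8_t *p, size_t w, size_t h) {
    size_t top = 0, bottom = h; // bottom is exclusive
    while (top < h) { // top-down: sequential memory access
        BOOL hit = NO;
        for (size_t x = 0; x < w; x++)
            if (OpaqueAt(p, w, x, top)) { hit = YES; break; }
        if (hit) break;
        top++;
    }
    if (top == h) return CGRectNull; // fully transparent
    while (bottom > top + 1) { // bottom-up for the last non-clear row
        BOOL hit = NO;
        for (size_t x = 0; x < w; x++)
            if (OpaqueAt(p, w, x, bottom - 1)) { hit = YES; break; }
        if (hit) break;
        bottom--;
    }
    // Left/right scans limited to rows [top, bottom), as described above.
    size_t left = w - 1, right = 0;
    for (size_t y = top; y < bottom; y++) {
        for (size_t x = 0; x < w; x++)
            if (OpaqueAt(p, w, x, y)) { if (x < left) left = x; break; }
        for (size_t x = w; x-- > 0; )
            if (OpaqueAt(p, w, x, y)) { if (x > right) right = x; break; }
    }
    return CGRectMake(left, top, right - left + 1, bottom - top);
}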
OK, I was being dumb. The union of the subviews is all I really needed, so it's just a simple loop over the subviews, taking the CGRect union of their frames.
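(That loop is just something like the following sketch, with `canvas` standing in for the superview:)

CGRect content = CGRectNull; // CGRectNull is the identity for CGRectUnion
for (UIView *box in canvas.subviews) {
    content = CGRectUnion(content, box.frame);
}
// `content` is now the tightest rect containing every subview.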

Cropping an image by selecting an object and color matching

We are developing an app where we need to crop an image according to the selected object's area. The user will draw a line, and we need to select the object and crop it. This crop needs to work like the app YourMoji.
So far we have tried to get the color of the pixels along the line, compare those with the color of every pixel in the image, and build a path from the matches to clip the image. But this is going almost nowhere.
Is it possible to crop an image this way, or are we going about it the wrong way? Can anyone provide a way to do this, or suggest how to modify what we have done so far? Any advice and suggestions will be greatly appreciated!
Thanks in advance.
I guess what you want is the image segmentation algorithm called Graph Cut.
Here are two Github repositories, hope these would help:
GraphCut
GrabCutIOS
I'm not exactly clued up on image manipulation, but the first algorithm that comes to mind is something like this (a rough code sketch follows the lists below):
Take the average of the pixels in the line (as you have)
Since you appear to want faces, you might want to weight reds and blues over green; there's not much green in faces of any skin tone.
For each pixel, if the colour falls outside a given threshold around your selected average, remove it / make it transparent.
Perhaps the closer to the original line (or centroid), the less strict the threshold becomes.
I'd then provide the user with some tools for:
Sensitivity: how large the threshold is
Eraser: to remove parts of the image that your algorithm missed
Paintbrush: to replace parts of the image that your algorithm incorrectly removed.
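A rough sketch of the thresholding step above, assuming the image is in a w x h RGBA buffer and `avg` is the average colour sampled along the user's line; the channel weights and the distance metric are guesses to be tuned:

#include <math.h>
#include <stdint.h>
#include <stddef.h>

typedef struct { uint8_t r, g, b; } RGBColor;

// Make every pixel transparent whose colour falls outside `threshold`
// of the average colour sampled along the user's line.
static void MaskByColor(uint8_t *pixels, size_t w, size_t h,
                        RGBColor avg, double threshold) {
    for (size_t i = 0; i < w * h * 4; i += 4) {
        double dr = pixels[i]     - (double)avg.r;
        double dg = pixels[i + 1] - (double)avg.g;
        double db = pixels[i + 2] - (double)avg.b;
        // Weight red and blue over green, per the skin-tone remark above.
        double dist = sqrt(dr * dr + 0.5 * dg * dg + db * db);
        if (dist > threshold) {
            pixels[i + 3] = 0; // outside the threshold: make transparent
        }
    }
}

The "Sensitivity" tool above would then simply scale `threshold`.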

How do I "parse" image locations on a minimap with OpenCV (or other tool)?

I've been trying to work on a small hobby project that involves plotting players' positions from a game onto a heatmap, to see where the most active areas are at various points in time.
I'm a bit new to OpenCV and its tools, but I've managed to successfully run some text matching and extraction on the scoreboard and timers in the game, and am now trying to extract the characters' positions from the in-game minimap.
It looks like this; this is the highest-resolution image I'm able to get (about 185x185):
I'm trying to obtain the positions of only two things: the characters (big circles) and "wards", which are represented by these icons:
So, given the assets for them, I thought that because there was too much "noise" in the source image, I'd try subtracting the background of the in-game minimap from its image, and then pattern matching the original character and ward images against the result (which is meant to be the minimap minus its background). But that didn't even get close to working, as you can see:
Even if that did work, I wouldn't really be sure how to handle cases where the icons partially cover each other, or how I could obtain the positions of those little ward markers.
I'd really appreciate some help, as I've been searching the Internet and banging my head against this for a few days and haven't gotten anywhere. I've tried a bunch of different techniques, read guides and articles, and experimented with a few GUI tools, but haven't gotten any closer to a working method.
Please help me with what techniques I could or should be using instead to get the locations of all the characters and wards.
I'm not an OpenCV user, but I can speak to some general problems.
First and foremost, you goofed in subtracting the background map. It appears that you did a straight, arithmetic subtraction of the map's RGB values. For instance, the blue-team icons in the lower-left corner are roughly #99FFFF, and you're subtracting the grayish background of maybe #D0D0FF. This leaves you with #002F00, a very dark green.
Also note that you're subtracting the original map, not the part that shows. Paths beyond view are shaded, but you appear to subtract the original value.
What you need to subtract is a masked background. Unfortunately, building that mask means that you have to find the icons. Masking won't work well at this stage.
Back to the subtraction: don't just blindly subtract. Rather, look for a match in hue, and when you find one, simply set that pixel to 0. You have a special case to watch: icons on a background of their own colour, especially for the blue team. In this case, you need to define the region boundaries.
Start from a pixel that's an exact match to the original background. It won't be shaded, since all such problem pixels are in plain sight of an icon. Expand from that pixel so long as you have an exact match to the original background colour. That will give you the region you can blank out.
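I'm sketching the hue-match zeroing here without any particular library; `HueOf` and the tolerance are illustrative, and both buffers are assumed to be w x h RGBA:

#include <math.h>
#include <stdint.h>
#include <stddef.h>

// Hue of an RGB pixel in degrees [0, 360); 0 is returned for greys.
static double HueOf(uint8_t r, uint8_t g, uint8_t b) {
    double mx = r > g ? (r > b ? r : b) : (g > b ? g : b);
    double mn = r < g ? (r < b ? r : b) : (g < b ? g : b);
    if (mx == mn) return 0.0; // achromatic: no meaningful hue
    double hue;
    if (mx == (double)r)      hue = fmod((g - b) / (mx - mn), 6.0);
    else if (mx == (double)g) hue = (b - r) / (mx - mn) + 2.0;
    else                      hue = (r - g) / (mx - mn) + 4.0;
    return fmod(hue * 60.0 + 360.0, 360.0);
}

// Zero every minimap pixel whose hue matches the corresponding
// background-map pixel within `tolDegrees`.
static void BlankBackground(uint8_t *mini, const uint8_t *bg,
                            size_t w, size_t h, double tolDegrees) {
    for (size_t i = 0; i < w * h * 4; i += 4) {
        double hm = HueOf(mini[i], mini[i + 1], mini[i + 2]);
        double hb = HueOf(bg[i],   bg[i + 1],   bg[i + 2]);
        double d  = fabs(hm - hb);
        if (d > 180.0) d = 360.0 - d; // hue wraps around
        if (d < tolDegrees) {
            mini[i] = mini[i + 1] = mini[i + 2] = 0;
        }
    }
}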
Your next problem is to identify icons. You should now have a map with only icons, many of which are fully revealed. Those are easy matches; identify and subtract them, one key icon at a time.
You now have a map of partial icons. Switch the match algorithm: a key icon is now a match to either the exact color, or to black (indicating it was previously covered). Iterate until you have no more matches.
This does still leave you with one problem: an icon that no longer has enough pixels showing to identify. These will be icons that were either entirely covered, or covered except for a small portion that is not unique, such as a few pixels of a red circular border.
For this, a general approach is to keep track of game progress to a small extent: from an earlier time, you know where the icon used to be. Track each icon as a software object. If other icons cover it, assume it's still there until you discover otherwise.
This will handle most cases. You'll still have some problems with minions or sensors that get shot out from underneath a legend's icon, but I trust that your heat map application is not so fragile as to take modelling damage from that situation. The legend will move soon enough, revealing the small item's death. A moving minion isn't covered by a legend for long; they don't move with the same intelligence.

flood fill performance issue on iPad

I am using the 4-way flood fill algorithm.
I have a transparent image with a black outline.
That is the starting-point image (without color).
And after filling the color, the image looks like this.
Please help me and let me know what I can do to get a proper fill.
I have used and implemented flood fill myself in other projects; the algorithm goes through the whole drawing, looking for closed spaces, and then draws inside (or outside) them.
Your problem happens with every tool in the world that fills a drawing, and the cause is the same: the spaces are not 100% closed.
The flood fill algorithm goes pixel by pixel, and when it detects a black pixel, it stops. For example, the arm of the scuba diver is not thick enough, or it has holes in it, and the flood fill manages to go through it instead of stopping at it.
Nobody here can tell you why unless we take your project and analyse it, so the best I can offer is a guideline about where your error could be.
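For reference, the stopping behaviour described above is the heart of the algorithm. A rough, library-agnostic sketch of a 4-way flood fill over a grayscale buffer looks like this; `barrier` plays the same role as the tolerance discussed below:

#include <stdint.h>
#include <stdlib.h>
#include <stddef.h>

// 4-way flood fill over a w*h grayscale buffer, spreading from (sx, sy)
// and stopping at pixels darker than `barrier`. A single anti-aliased
// grey outline pixel brighter than `barrier` is enough for the fill to
// leak through, which is exactly the failure described above.
// Assumes `fill` differs from the pixels being filled.
static void FloodFill4(uint8_t *gray, size_t w, size_t h,
                       size_t sx, size_t sy,
                       uint8_t barrier, uint8_t fill) {
    // Explicit stack; each pixel is pushed at most four times.
    size_t *stack = malloc((4 * w * h + 1) * 2 * sizeof(size_t));
    size_t count = 1;
    stack[0] = sx; stack[1] = sy;
    while (count > 0) {
        count--;
        size_t x = stack[count * 2], y = stack[count * 2 + 1];
        uint8_t *p = &gray[y * w + x];
        if (*p <= barrier || *p == fill) continue; // outline, or already filled
        *p = fill;
        if (x + 1 < w) { stack[count * 2] = x + 1; stack[count * 2 + 1] = y;     count++; }
        if (x > 0)     { stack[count * 2] = x - 1; stack[count * 2 + 1] = y;     count++; }
        if (y + 1 < h) { stack[count * 2] = x;     stack[count * 2 + 1] = y + 1; count++; }
        if (y > 0)     { stack[count * 2] = x;     stack[count * 2 + 1] = y - 1; count++; }
    }
    free(stack);
}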
I tried the code with an image that has a very precisely defined border around it (from here) and it seems to work OK with that image. I suspect that if you zoom into your image there is some grey aliasing around the edges, which won't get filled. Perhaps the algorithm has a threshold function that can be tweaked?
Try setting the andTolerance value (I tried 4, which seemed to improve my example):
//Call function to flood fill and get new image with filled color
UIImage *image1 = [self.image floodFillFromPoint:tpoint withColor:newcolor andTolerance:4];
