How do I "parse" image locations on a minimap with OpenCV (or other tool)? - opencv

I've been trying to work on a small hobby project that involves plotting players' positions from a game onto a heatmap, to see where the most active areas are at various points in time.
I'm a bit new to OpenCV and its tools, but I've managed to successfully run some text matching and extraction on the scoreboard and timers in the game, now trying to take the characters' positions from the in-game minimap.
It looks like this; it's the highest-resolution image I'm able to get (about 185x185):
I'm trying to obtain the positions of only two things: the characters (big circles) and "wards", which are represented by these icons:
So, given the assets for them, I thought that because there was too much "noise" in the source image, I'd try subtracting the background of the in-game minimap from the screenshot, and then pattern-matching the original character and ward images against the result (which is meant to be the minimap minus its background). But that didn't even come close to working, as you can see:
Even if that did work, I wouldn't be really sure how to handle cases where the icons are partially covering each other, or how I could obtain the positions of those little ward markers.
I'd really appreciate some help, as I've been searching the Internet and banging my head against this for a few days and haven't gotten anywhere. I've tried a bunch of different techniques, read guides and articles, and tried a few GUI tools to experiment with, but haven't gotten any closer to a method that works.
Please help me with which techniques I could or should be using instead to get the locations of all the characters and wards.

I'm not an OpenCV user, but I can speak to some general problems.
First and foremost, you goofed in subtracting the background map. It appears that you did a straight, arithmetic subtraction of the map's RGB values. For instance, the blue-team icons in the lower-left corner are roughly #99FFFF, and you're subtracting the grayish background of maybe #D0D0FF. This leaves you with #002F00, a very dark green.
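For illustration, the straight per-channel subtraction (clamped at zero, as image-subtraction routines normally do) really does land on that dark green:

    # A blue-team icon pixel minus the map pixel underneath it, channel by channel.
    icon = (0x99, 0xFF, 0xFF)   # icon colour (R, G, B)
    bg   = (0xD0, 0xD0, 0xFF)   # background colour underneath
    diff = tuple(max(a - b, 0) for a, b in zip(icon, bg))
    print(tuple(hex(c) for c in diff))   # ('0x0', '0x2f', '0x0') -> #002F00, a very dark green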
Also note that you're subtracting the original map, not what is actually shown: paths that are out of view are shaded on the live minimap, but you appear to be subtracting the original, unshaded values.
What you need to subtract is a masked background. Unfortunately, building that mask means that you have to find the icons. Masking won't work well at this stage.
Back to the subtraction: don't just blindly subtract. Rather, look for a match in hue. When you find a hue match, simply set that pixel to 0. There is one special case to watch: icons sitting on a background of their own colour, especially for the blue team. In this case, you need to define the region boundaries.
Start from a pixel that's an exact match to the original background. It won't be shaded, since all such problem pixels are in plain sight of an icon. Expand from that pixel so long as you have the exact match to the original background colour. That will give you the region you can blank out.
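A minimal OpenCV sketch of those two steps, assuming a clean reference image of the map and a live minimap crop of the same size (the file names, the hue tolerance of 6, and the seed point are all placeholders):

    import cv2
    import numpy as np

    background = cv2.imread("map_reference.png")   # clean minimap background
    frame = cv2.imread("minimap_frame.png")        # live minimap crop, same size

    # Compare hues rather than raw RGB so that fog-of-war shading
    # (same hue, lower brightness) still counts as background.
    bg_hue = cv2.cvtColor(background, cv2.COLOR_BGR2HSV)[:, :, 0].astype(int)
    fr_hue = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)[:, :, 0].astype(int)
    hue_diff = np.abs(bg_hue - fr_hue)
    hue_diff = np.minimum(hue_diff, 180 - hue_diff)   # hue wraps around in OpenCV (0..179)

    cleaned = frame.copy()
    cleaned[hue_diff < 6] = 0        # blank pixels whose hue matches the background

    # Region-grow from a pixel that exactly matches the reference background,
    # for the special case of an icon sitting on a patch of its own colour:
    # expand while the colour stays identical, then blank that connected region.
    seed = (10, 10)                  # placeholder pixel known to be plain background
    flood = frame.copy()
    mask = np.zeros((frame.shape[0] + 2, frame.shape[1] + 2), np.uint8)
    cv2.floodFill(flood, mask, seed, (0, 0, 0), loDiff=(0, 0, 0), upDiff=(0, 0, 0))
    cleaned[mask[1:-1, 1:-1] == 1] = 0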
Your next problem is to identify icons. You should now have a map with only icons, many of which are fully revealed. Those are easy matches; identify and subtract them, one key icon at a time.
You now have a map of partial icons. Switch the match algorithm: a key icon is now a match to either the exact color, or to black (indicating it was previously covered). Iterate until you have no more matches.
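A rough sketch of that identify-and-subtract loop using OpenCV template matching (the template file names and the 0.8 score threshold are assumptions, and this only does the plain match, not the colour-or-black variant for partially covered icons):

    import cv2
    import numpy as np

    minimap = cv2.imread("minimap_cleaned.png")   # background already blanked out
    templates = {
        "ward": cv2.imread("ward_icon.png"),
        "champion": cv2.imread("champion_icon.png"),
    }

    found = []
    for name, tmpl in templates.items():
        h, w = tmpl.shape[:2]
        while True:
            scores = cv2.matchTemplate(minimap, tmpl, cv2.TM_CCOEFF_NORMED)
            scores = np.nan_to_num(scores, nan=-1.0)   # guard against blanked, zero-variance windows
            _, best_score, _, best_loc = cv2.minMaxLoc(scores)
            if best_score < 0.8:                       # no more confident matches
                break
            found.append((name, best_loc))
            # "Subtract" the matched icon so the next pass can reveal anything
            # that was partially covered underneath it.
            x, y = best_loc
            minimap[y:y + h, x:x + w] = 0

    print(found)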
This does still leave you with one problem: an icon that no longer has enough pixels showing to identify. These will be icons that were either entirely covered, or covered except for a small portion that is not unique, such as a few pixels of a red circular border.
For this, a general approach is to keep track of game progress to a small extent: from an earlier time, you know where the icon used to be. Track each icon as a software object. If other icons cover it, assume it's still there until you discover otherwise.
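A minimal sketch of that bookkeeping (the class, the names, and the 15-pixel jump threshold are all invented for illustration):

    import math

    class TrackedIcon:
        """Remembers where an icon was last seen and whether it is currently visible."""
        def __init__(self, name, position):
            self.name = name
            self.position = position          # (x, y) on the minimap
            self.visible = True

    def update_tracks(tracks, detections, max_jump=15):
        """detections: list of (name, (x, y)) found in the current frame."""
        matched = set()
        for name, pos in detections:
            # Attach the detection to the nearest unmatched track of the same kind.
            candidates = [t for t in tracks if t.name == name and id(t) not in matched]
            if candidates:
                nearest = min(candidates, key=lambda t: math.dist(t.position, pos))
                if math.dist(nearest.position, pos) <= max_jump:
                    nearest.position, nearest.visible = pos, True
                    matched.add(id(nearest))
                    continue
            # Otherwise it's an icon we haven't seen before.
            t = TrackedIcon(name, pos)
            tracks.append(t)
            matched.add(id(t))
        # Tracks with no detection this frame are assumed still there, just covered.
        for t in tracks:
            if id(t) not in matched:
                t.visible = False
        return tracks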
This will handle most cases. You'll still have some problems with minions or sensors that get shot out from underneath a legend's icon, but I trust that your heat map application is not so fragile as to take modelling damage from that situation. The legend will move soon enough, revealing the small item's death. A moving minion isn't covered by a legend for long; they don't move with the same intelligence.

Related

iOS: How can you change the color of a part of UIImage?

In the Nike app on iOS, if you customize a shoe, they somehow let you select different colors for different sections of the shoe (midsole red, or tongue green, for example). I am wondering how you do this on iOS. The shapes are pretty irregular, so making a Bezier curve seems unlikely. The image doesn't move either, so it doesn't seem like they are swapping images. Does anyone have any clue about a way to do this?
One thing I did notice is that the image is fuzzy when you make a change and then comes into focus after a delay, but I'm not sure if that is a clue or not. :shrug:
Here is an example
grey toe
black toe
They are swapping images; in fact, they are all fetched when entering this screen. So they are all present in memory when swapping between them, which is why it seems that they are changing colors.

Comparing Images via pixels

I was playing around with images and came across a little game I tried to create.
You see an image (for simplicity, let's say a circle) and you have to redraw the circle on top of it as exactly as possible.
All of that works in my little project already.
I want to be able to tell what percentage of the image was recreated correctly (and again, for simplicity, no colours needed; it's always black and white).
Could I just count the black pixels that overlay the original, subtract the ones that don't, and divide that by the number of black pixels in the original?
This would look like this I guess:
ratio = (correctPixelCount - wrongPixelCount) / originalPixelCount
If yes, how would I go about getting each pixel and comparing them?
If no, what else could I do?
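If the formula is sound, here is a rough sketch of it with numpy, purely to illustrate the counting; the boolean arrays stand in for however the pixels end up being read on iOS:

    import numpy as np

    def recreation_ratio(original, drawing):
        """original, drawing: same-shape boolean arrays, True meaning a black pixel."""
        correct = np.count_nonzero(original & drawing)   # black in both images
        wrong = np.count_nonzero(~original & drawing)    # black only in the redrawing
        total = np.count_nonzero(original)               # black pixels in the original
        return (correct - wrong) / total if total else 0.0

    original = np.array([[True, True], [False, False]])
    drawing  = np.array([[True, False], [True, False]])
    print(recreation_ratio(original, drawing))           # (1 - 1) / 2 = 0.0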
PS: I already tried an image-comparison CocoaPod called AIImageCompare.
Unfortunately, it crashes for unknown reasons.
Thank you!

handling finger detection on small objects

The application I am working on requires a bar 4px in height with a full-screen width. I need to be able to select this 4px bar and move it around. I also cannot change the size of this bar; it has to be 4px in height.
This wouldn't be that big of an issue if I weren't using OpenGL to create the object. OpenGL obviously does not have its own selection features, so I need to program my own.
Initially, after some research, I built a color selector to identify the object. How my color selector works: whatever x and y my finger touch returns from touchesBegan: is the pixel I grab from a screenshot of the OpenGL view. The issue with this is that finger location is not precise at all. If I use the mouse, it works perfectly...
I decided to maybe loop through a buffer zone around the selected x and y, but unfortunately antialiasing happens to the screenshot of the OpenGL view when it's stored in memory, and the buffer returns several shades of my object's color. I could possibly do a comparative color lookup to see if it's in the range of colors, but that seems overly complicated on top of how much I have already had to do. Plus, cycling through the buffer zone isn't quick.
I've also thought about just remembering the location of my line on the screen, and if my finger is close to that location, assuming that's the one I want to select and moving it around.
In the future this application could have up to 4 lines just like this, so I want something more robust than just knowing the location of where it is in memory.
What better way is there out there of handling selection of small objects?
How about maintaining an array of frames for the four objects, but expanding the heights to something more manageable (8px or bigger)? Then, a touch within the larger region could be compared against the array (using CGRectContainsPoint). If you get a hit, then "snap to" the center point of the smaller (4px) rectangle before beginning the drag.
I do something like this by maintaining a list of "drop targets" for drag & drop, where it snaps to the drop target when it gets pretty close. Don't know if I'm conveying the idea very well, but it ought to work.
If the four 4px rectangles are going to be contiguous or very close together, you'll have to be able to make the selected one stand out or the user won't be able to tell which they're dragging -- but you could do that by making it bigger (maybe 6-8 px) then bringing it to the front so it overlays its adjacent neighbors.
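A language-agnostic sketch of the idea, with invented numbers (on iOS the containment test would be CGRectContainsPoint over an array of CGRects):

    # Each bar keeps its real 4px frame plus an enlarged hit rectangle.
    bars = [
        {"frame": (0, 100, 320, 4), "hit": (0, 96, 320, 12)},   # (x, y, width, height)
        {"frame": (0, 200, 320, 4), "hit": (0, 196, 320, 12)},
    ]

    def contains(rect, point):
        x, y, w, h = rect
        px, py = point
        return x <= px <= x + w and y <= py <= y + h

    def bar_at(touch):
        """Return the first bar whose enlarged hit rect contains the touch, else None."""
        for bar in bars:
            if contains(bar["hit"], touch):
                return bar
        return None

    hit = bar_at((150, 203))
    if hit:
        # "Snap to" the centre line of the real 4px rectangle before dragging.
        x, y, w, h = hit["frame"]
        drag_y = y + h / 2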
More of an idea than an answer I guess.
John,
I would suggest a different approach. As you've discovered, touches in iOS are very imprecise. Apple usually suggests that the "hit box" for your controls be at least 40x40 points. I've gone as small as 30x30 points, but that starts to get hard.
What I would suggest you do is to factor your code so the app knows where the line is, and keeps track of it as a logical object. Then in your touch handler, interpret touches based on a large "buffer area" around the things you want the user to be able to move. If you just have a single horizontal bar, this should work great. Where you'll get into trouble is if you have multiple, thin horizontal bars that are close together. In that case you might need to rethink your app design and find another way to solve the problem.
As for the implementation details, you might add a pan gesture recognizer to your OpenGL view, and have it notify the OpenGL view of touch and drag actions. Then your OpenGL view can use knowledge of where your draggable objects are to decide how to interpret the touches.

iOS: Smooth button Glow effect by blending between images

I am creating a custom button that needs to be able to glow to a varying degree
How would I use these pictures to make a button that 'glows' the diamond when it is pressed, and have this glow gradually fade back to inert state?
I want to churn out several different colours of diamond as well... I am hoping to generate all different coloured diamonds from the same stock images presented here.
I would like to get my head around the basic methods available, in enough detail that I can see each one through and make a decision which path to take...
My tangled efforts so far... ( I will delete all of this, or move it into possibly several answers as a solution unfolds... )
I can see 3 potential solution paths:
GL
it looks as though GL has everything it takes to get complete fine-grained control over the process, although functions exposed by core graphics come tantalisingly close, and that would save several hundred lines of code spread over a bunch of source files, which seems a bit ridiculous for such a basic task.
core graphics, and core animation to accomplish the blending
documentation goes on to say
Anything underneath the unpainted samples, such as the current fill color or other drawing, shows through.
so I can chroma-key mask the left image, setting {0,0,0} ie Black as the key.
this at least secures a transparent background, now I have to work on making it yellow instead of grey.
so maybe I could have started instead with setting a yellow back colour for my image context, then use some CGContextSetBlendMode(...) to imprint the diamond on the yellow, THEN use chroma-key masking to get a transparent background
ok, this covers at least getting the basic unlit image on-screen
now I could overlay the sparkly image, using some blend mode, maybe I could keep it in its current greyscale state, and that would just boost the colours of the original
only problem with this is that it is a lot of heavy real-time blending
so maybe I could pre-calculate every image in the animation... this is looking increasingly mucky...
Cocos2D
if this allows me to set the blend mode to additive blending then I could just composite the glowing image over the original image with an appropriate Alpha setting.
After digging through a lot of documentation, the optimal solution seems to be to use core graphics functions to get the source images into a single 2-component GL texture, and then use GL to blend between them.
I will need to pass a uniform value glow_factor into the shader
The obvious solution might seem to simply use
out_rgb = in_rgb * ((1 - glow_factor) * inertPixel + glow_factor * shinyPixel)
(where inertPixel is the appropriate pixel of the inert diamond etc)...
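As a sanity check of that blend outside the shader, here is a small numpy sketch (the array names are placeholders; in the real thing this would run per fragment in GLSL, with glow_factor passed in as the uniform):

    import numpy as np

    def blend_glow(tint_rgb, inert, shiny, glow_factor):
        """inert, shiny: 2D greyscale arrays in 0..1; tint_rgb: the diamond's characteristic colour."""
        grey = (1.0 - glow_factor) * inert + glow_factor * shiny
        # Multiply the blended greyscale by the tint to get the coloured diamond.
        return grey[..., None] * np.asarray(tint_rgb, dtype=float)

    inert = np.random.rand(64, 64)    # stand-ins for the two stock images
    shiny = np.random.rand(64, 64)
    half_glow_yellow = blend_glow((1.0, 0.9, 0.2), inert, shiny, glow_factor=0.5)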
it looks like I would also do well to manufacture my own sparkles and add them over the top; a gem should sparkle white irrespective of its characteristic colour.
After having looked at this problem a little more, I can see several solutions
Solution A -- store the transition from glow=0 to glow=1 as 60 frames in memory, then load the appropriate frame into a GL texture every time it is required.
this has an obvious benefit that a graphic designer could construct the entire sequence and I could load it in as a bunch of PNG files.
another advantage is that these frames wouldn't need to be played in sequence... the appropriate frame can be chosen on-the-fly
however, it has a potential drawback of a lot of sending data RAM->VRAM
this can be optimised by using glTexSubImage2D; several frames can be sent simultaneously and then unpacked from within GL... in fact maybe the entire sequence. if this is so, then it would make sense to use PVRT texture compression.
(see also: iOS: playing a frame-by-frame greyscale animation in a custom colour)
Solution B -- load glow=0 and glow=1 images as GL textures, and manually write shader code that takes in the glow factor as a uniform and performs the blend
this has the advantage that it is close to the wire and can be tweaked in all sorts of ways. Also, it is going to be very efficient. The disadvantage is that it is a big extra slice of code to maintain.
Solution C -- set glBlendMode to perform additive blending.
then draw the glow=0 image, setting e.g. alpha=0.2 on each vertex.
then draw the glow=1 image, setting e.g. alpha=0.8 on each vertex.
this has an advantage that it can be achieved with a more generic code structure -- ie a very general ' draw textured quad / sprite ' class.
the disadvantage is that without some sort of wrapper it is a bit messy... in my game I have a couple of dozen diamonds -- at any one time maybe 2 or 3 are likely to be glowing. So on the first pass I would render EVERYTHING (just setting the alpha appropriately for everything that is glowing), and then on the second pass I could draw the glowing sprite again with the appropriate alpha for everything that IS glowing.
it is worth noting that if I pursue solution A, this would involve creating some sort of real-time movie player object, which could be a very useful reusable code component.

Partial re-colorizing a Bitmap at runtime

I'm drawing some cars. They're Bitmaps, loaded from PNGs in the library. I need to be able to color the cars -- red ones and green ones and blue ones, whatever. However, when you paint the car green, the tires should stay black, and the windows should stay window-color.
I know of two ways to handle this, neither of which makes me happy. First, I could have two bitmaps for each car: one underneath for the body color, and one on top for the detail bits. The underneath bitmap gets its transform.colorTransform set to turn the white car body into whatever color I need. Not great, because I end up with twice as many Bitmaps running around on screen at runtime.
Second, I could programmatically search-and-replace "white" with "car-body" color when I load the bitmap for each car. Not great either, because the amount of memory I take up multiplies by however many colors I need.
What I would LIKE would be a way to say "draw this Bitmap with JUST THE WHITE PARTS turned into this other color" at runtime. Is there anything like this available? I will be less than surprised if the answer is "no," but I figure it's worth asking.
You might have answered the question yourself.
I think your first approach would need only two transparent images: one with the pixels of the parts that need to change colour, and one with the rest of the image. You would use colorTransform or a ColorMatrix filter as appropriate. It might even work to have the pixels that need the colour change covered by a Sprite with a flat colour set to overlay blend mode?
The downside would be that you will need to create a 'colour map'/set of pixels to replace for each different item that will need colour replacement.
For the second approach:
You might isolate the areas using something like threshold().
For speed, you might want to store the indices of the pixels you need to replace in a Vector.<int> object that could be used in conjunction with BitmapData's getVector() method. (You would loop once to fetch the pixel indices that need to be replaced.)
Since you will use the same image (same dimensions) to fill the same content with a different colour, you'll always loop through the same pixels. Also keep in mind that you will gain a bit of speed by using lock() before your setPixel() loop and unlock() after the loop.
Alternatively, you could use Pixel Bender and try some green-screen/background-subtraction techniques. It should be fast and wouldn't delay the execution of the rest of your AS3 code, as Pixel Bender code runs in its own thread.
Also check out Lee's Pixel Bender subtraction technique.
Although it's a bit old now, you can use some knowledge from Quasimondo's article too.
HTH
I'm a little confused about where you see the difference between your second approach and the one you would like to have. You can go over your loaded bitmap pixel by pixel and read out the color; if it turns out to be white, replace it with another color. I do not see where the multiplied memory consumption would occur.
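Purely to illustrate that per-pixel idea (numpy here rather than AS3, and the file name and replacement colour are made up; in Flash it would be getPixel/setPixel or getVector over the BitmapData, as described above):

    import numpy as np
    from PIL import Image

    car = np.array(Image.open("car_body_white.png").convert("RGBA"))
    body_colour = np.array([30, 160, 60, 255], dtype=np.uint8)   # the new body colour

    # A pixel counts as "white body" if all three colour channels are near 255;
    # tyres, windows and other detail pixels are left alone.
    is_white = np.all(car[:, :, :3] > 240, axis=-1)
    recoloured = car.copy()
    recoloured[is_white] = body_colour

    Image.fromarray(recoloured).save("car_body_green.png")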
You might want to try my selective color transform: http://www.quasimondo.com/archives/000614.php - it's from 2006, so some parts of it could probably be replaced by a pixel bender filter now.
Why not just load the pieces separately, perform the color transform on the one you want to change, then do a BitmapData.copyPixels() with the result? The blit routine runs in machine code, so is wicked fast. Doing it pixel by pixel in ActionScript would be glacially slow in comparison.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/BitmapData.html#copyPixels()
