Fill image with different color by detecting the different parts - ios

I have an image of a landscape which I need to fill with different colors.
When I select a color from the palette and start scrubbing on any particular part, only that part should get the color, even if I accidentally move my finger outside of that part of the image.
So basically I need to detect which part of the image I have tapped, so that only that part takes the color.
I am developing this app in Cocos2dx, but any help with the logic would be a good starting point.
Here is an example of what I want.
Note: I know I could achieve this by using separate images and detecting touches on each, but that increases the app size by a lot of MBs.

I am guessing the user will only be able to draw on the white parts of the image.
If that is true, then in your touchesMoved method, check whether any black (non-white) pixel lies between the previous touch point and the current touch point.
If there is no such black pixel, draw; otherwise don't.
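
A minimal sketch of that check, assuming the line-art image is available as a raw RGBA8888 pixel buffer (the buffer and helper names here are hypothetical and would need to be adapted to your project):

// Returns YES if every sampled pixel between p1 and p2 is (near) white.
// 'pixels' is assumed to be the RGBA8888 buffer of the outline image,
// 'width'/'height' its dimensions in pixels.
static BOOL lineIsClear(const uint8_t *pixels, size_t width, size_t height,
                        CGPoint p1, CGPoint p2)
{
    CGFloat dx = p2.x - p1.x, dy = p2.y - p1.y;
    int steps = (int)fmax(fabs(dx), fabs(dy));
    for (int i = 0; i <= steps; i++) {
        CGFloat t = (steps == 0) ? 0 : (CGFloat)i / steps;
        int x = (int)(p1.x + t * dx);
        int y = (int)(p1.y + t * dy);
        if (x < 0 || y < 0 || x >= (int)width || y >= (int)height) continue;
        const uint8_t *px = pixels + (y * width + x) * 4;
        // Treat anything darker than roughly 90% white as part of the black outline.
        if (px[0] < 230 || px[1] < 230 || px[2] < 230) {
            return NO;
        }
    }
    return YES;
}

In touchesMoved you would then only draw the stroke when lineIsClear returns YES for the segment between the previous and the current touch location.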

Related

ios frosted glass logic

Frosted Glass Effect
I'm thinking of how to approach this logically.
So we take the background image (for example).
Then, we want to add our frosted glass button to this image. Here's how it should look.
Now I know I cannot programmatically blur the background image of the button, so I'll try to do it with two images: Background.png and Background_Blurred.png.
Now, the frosted glass effect will happen on animated objects. So, as they move across the screen, it should appear that they are blurring the background image behind them; however, to achieve this I can only think of one way, and doing it is beyond my current capability.
It would have to be a background_blurred image for the UIButton, for example, not scaled in any way and exactly the same size as the normal background. Then I would have to take the button's relative position on the normal background and offset the background_blurred image of the button to suit.
My first question; is this possible?
Second question; is there an easier approach?
Lastly, I've added an image to make sense of the relative position theory.
Check out the FXBlur library; it will let you blur images/views. I've used it successfully and it sounds like it will do what you want.
I think having two images for these assets may be easier, but blurring the views may be better in the long run, as you wouldn't have to worry about updating the images for different resolutions in the future or care about how big the button is or will be. Also, if you want to do this with more images it will turn into a mess with all the different images to manage. The library is simple to use; with one call you'll have a blurred image/view.
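
If you do go with the two-image route, the relative-position idea from the question boils down to cropping Background_Blurred.png to the button's frame. A rough sketch, assuming the blurred image matches the on-screen background point-for-point (the button variable is illustrative):

// Crop the pre-blurred background to the button's frame and use it as the
// button's background image, so the button appears to frost what is behind it.
UIImage *blurred = [UIImage imageNamed:@"Background_Blurred"];
CGFloat scale = blurred.scale;
CGRect cropRect = CGRectMake(button.frame.origin.x * scale,
                             button.frame.origin.y * scale,
                             button.frame.size.width * scale,
                             button.frame.size.height * scale);
CGImageRef croppedRef = CGImageCreateWithImageInRect(blurred.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                       scale:scale
                                 orientation:UIImageOrientationUp];
CGImageRelease(croppedRef);
[button setBackgroundImage:cropped forState:UIControlStateNormal];

For the animated objects this would have to be repeated whenever the object's frame changes, which is exactly why blurring the views themselves can end up being the easier route.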

CGContextDrawImage Blending

I am drawing, or should I say "stamping", an image using the CGContextDrawImage method in Objective-C. The image gets drawn at points that are determined by touch movements. Basically I'm stamping an image to create a "brush" effect. It looks something like this:
I am happy with the results; however, when the touch movement slows down, the image gets drawn on top of itself and ruins the alpha value I want. Is there a blend technique in which the opacity of the stamps would not stack on top of each other? Or should I just look at changing my points so that they are not so close together when the movement slows down?
Thanks in advance.
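
For context, the stamping described in the question usually looks something like the following in touchesMoved, here using UIKit's drawInRect:blendMode:alpha: wrapper around CGContextDrawImage (canvasImageView and brushImage are illustrative property names):

// Stamp the brush image at the current touch point on top of the existing canvas.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self.canvasImageView];
    CGSize size = self.canvasImageView.bounds.size;

    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [self.canvasImageView.image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    // Draw the stamp centered on the touch point with a reduced alpha.
    CGRect stampRect = CGRectMake(p.x - self.brushImage.size.width / 2,
                                  p.y - self.brushImage.size.height / 2,
                                  self.brushImage.size.width,
                                  self.brushImage.size.height);
    [self.brushImage drawInRect:stampRect blendMode:kCGBlendModeNormal alpha:0.3];
    self.canvasImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}

One common way to deal with the stacking when movement slows down is to remember the last stamped point and skip any stamp that falls closer to it than a minimum spacing.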

flood fill performance issue on iPad

I am using the 4-way flood fill algorithm.
I have a transparent image with a black outline.
That is the starting point image (without color).
And after filling the color in this image, it looks like this.
Please help me and let me know what I can do to get a proper fill.
I have used and implemented flood fill myself in other projects; the algorithm goes through the whole drawing, looking for closed spaces, and then fills inside (or outside) them.
Your problem happens with every tool in the world that fills a drawing, and the cause is the same: the spaces are not 100% closed.
The flood fill algorithm goes pixel by pixel, and when it detects a black pixel, it stops. For example, the arm of the scuba diver is not thick enough or has holes in it, so the flood fill manages to leak through it instead of treating it as a boundary.
Nobody here can tell you why unless we take your project and analyse it, so the best I can offer is a guideline about where your error could be.
I tried the code with an image that has a very precisely defined border around it (from here) and it seems to work OK with that image. I suspect that if you zoom into your image there is some grey aliasing around the edges which won't get filled. Perhaps the algorithm has a threshold that can be tweaked?
Try setting the andTolerance value (I tried 4, which seemed to improve my example):
//Call function to flood fill and get new image with filled color
UIImage *image1 = [self.image floodFillFromPoint:tpoint withColor:newcolor andTolerance:4];
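
For reference, a plain stack-based 4-way flood fill with a tolerance check looks roughly like the sketch below. It operates on an RGBA8888 pixel buffer; the function and parameter names are illustrative, and a production version would normally use a scanline variant for speed:

// 4-way flood fill on an RGBA8888 buffer. Pixels whose colour is within
// 'tolerance' of the colour at the seed point are replaced with newColor
// (packed as 0xRRGGBBAA).
static void floodFill4(uint8_t *pixels, int width, int height,
                       int seedX, int seedY, uint32_t newColor, int tolerance)
{
    if (seedX < 0 || seedY < 0 || seedX >= width || seedY >= height) return;

    uint8_t *seed = pixels + (seedY * width + seedX) * 4;
    uint8_t targetR = seed[0], targetG = seed[1], targetB = seed[2];

    NSMutableArray *stack = [NSMutableArray arrayWithObject:
                             [NSValue valueWithCGPoint:CGPointMake(seedX, seedY)]];
    while (stack.count > 0) {
        CGPoint point = [[stack lastObject] CGPointValue];
        [stack removeLastObject];
        int x = (int)point.x, y = (int)point.y;
        if (x < 0 || y < 0 || x >= width || y >= height) continue;

        uint8_t *px = pixels + (y * width + x) * 4;
        uint32_t current = ((uint32_t)px[0] << 24) | ((uint32_t)px[1] << 16) |
                           ((uint32_t)px[2] << 8) | px[3];
        if (current == newColor) continue;                      // already filled
        if (abs(px[0] - targetR) > tolerance ||
            abs(px[1] - targetG) > tolerance ||
            abs(px[2] - targetB) > tolerance) continue;         // hit the outline

        px[0] = (newColor >> 24) & 0xFF;
        px[1] = (newColor >> 16) & 0xFF;
        px[2] = (newColor >> 8) & 0xFF;
        px[3] = newColor & 0xFF;

        [stack addObject:[NSValue valueWithCGPoint:CGPointMake(x + 1, y)]];
        [stack addObject:[NSValue valueWithCGPoint:CGPointMake(x - 1, y)]];
        [stack addObject:[NSValue valueWithCGPoint:CGPointMake(x, y + 1)]];
        [stack addObject:[NSValue valueWithCGPoint:CGPointMake(x, y - 1)]];
    }
}

The tolerance is what lets the fill treat the light grey anti-aliased pixels near the outline as fillable instead of stopping at them, which is why raising it helps when the lines are thin or slightly broken.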

xcode custom overlay capture

I am working on an OCR recognition app and I want to give the user the option to manually select the area (during camera capture) on which to perform the OCR. The issue I face is that I draw a rectangle on the camera screen by simply overriding the - (void)drawRect:(CGRect)rect method; however, despite there being a rectangle, the camera tries to focus on the entire captured area rather than just within the specified rectangle.
In other words, I do not want the entire picture to be sent for processing, but rather only the part of the captured image inside the rectangle. I have managed to draw the rectangle, but it has no functionality yet. I do not want the entire screen area to be processed, only the area under the rectangle.
I hope this makes sense, since I have tried my best to explain it.
Thanks and let me know
Stream the camera's image to a UIScrollView using an AVCaptureOutput, then allow the user to pinch/pull/pan the camera into the proper place. Now use a UIGraphics image context to take a "screenshot" of this area and send that UIImage's CGImage in for processing.
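
A rough sketch of that last step, snapshotting just the overlay rectangle and handing the result on for OCR. Here previewView and cropRect are illustrative names, and the sketch assumes the camera frames are drawn into an ordinary view (for example a UIImageView fed from the capture output), since renderInContext: will not capture an AVCaptureVideoPreviewLayer:

// Render only the region under the overlay rectangle into an image.
// cropRect is the overlay rectangle in previewView's coordinate space.
UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, 0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Shift the context so that cropRect's origin maps to (0, 0).
CGContextTranslateCTM(ctx, -cropRect.origin.x, -cropRect.origin.y);
[previewView.layer renderInContext:ctx];
UIImage *croppedShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// croppedShot.CGImage is what you would send on for OCR processing.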

Coloring Shapes in iOS app

We are building an iPad kids' application in which a kid is asked to color different shapes with specific colors. For example, consider an image with a sky, trees, etc., all overlapping. If the kid has selected a color, for example "Blue", and then taps the sky, the sky should turn blue; otherwise the app should say "wrong color".
My questions are:
1- How to implement coloring only the sky with the selected color. We have implemented a Cocos2d flood fill but it is too slow.
2- How to tie each part of the image to a specific correct color. We have considered loading a fully colored image in a background layer and testing it at the tap point, but how do we implement it?
Thanks
Are the shapes originally vector art? If so, a solution would be to work directly with them as vectors, parsing them into Core Animation shapes.
You can give SVGKit a try, or get some inspiration from it. You'll get CAShapeLayers whose fillColor property you can change.
I believe this approach would be much more responsive (and the app much lighter) than doing tricks with images ;-)
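
As a rough illustration of the CAShapeLayer approach, assuming the SVG has already been parsed into a CGPath per shape (skyLayer, skyPath, selectedColor and expectedSkyColor below are hypothetical properties of the view controller):

// One CAShapeLayer per shape; changing fillColor recolours it instantly.
- (void)setupShapes
{
    self.skyLayer = [CAShapeLayer layer];
    self.skyLayer.path = self.skyPath;          // CGPathRef for the sky shape
    self.skyLayer.fillColor = [UIColor whiteColor].CGColor;
    [self.view.layer addSublayer:self.skyLayer];
}

// On tap: check whether the touched shape expects the currently selected colour.
- (void)handleTap:(UITapGestureRecognizer *)tap
{
    CGPoint p = [tap locationInView:self.view];
    if (CGPathContainsPoint(self.skyLayer.path, NULL, p, false)) {
        if ([self.selectedColor isEqual:self.expectedSkyColor]) {
            self.skyLayer.fillColor = self.selectedColor.CGColor;  // right colour: fill it
        } else {
            NSLog(@"wrong color");              // trigger the "wrong color" feedback here
        }
    }
}

With one layer per shape you would loop over the layers' paths in the tap handler; hit-testing a CGPath is fast enough that this stays responsive even with many shapes.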
