UIImage masking with gesture - ios

I'm trying to implement a selective color feature in iOS. My idea is to first draw a shape with a finger gesture and convert that into a mask, but it has to happen in real time: the effect should appear as I move my finger across the grayscale image. Can anyone point me in the right direction?
Sample app : https://itunes.apple.com/us/app/color-splash/id304871603?mt=8
Thanks.

You can position two UIImageViews over each other, the color version in the background and the black&white version in the foreground.
Then you can use touchesBegan, touchesMoved and so on to track user input. In touchesMoved you can "erase" the path the user moved the finger along, like this (self.foregroundDrawView is the black & white UIImageView):
UIGraphicsBeginImageContext(self.foregroundDrawView.frame.size);
[self.foregroundDrawView.image drawInRect:CGRectMake(0, 0, self.foregroundDrawView.frame.size.width, self.foregroundDrawView.frame.size.height)];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(context, kCGBlendModeClear);
CGContextSetAllowsAntialiasing(context, TRUE);
CGContextSetLineWidth(context, 85);
CGContextSetLineCap(context, kCGLineCapRound);
CGContextSetRGBStrokeColor(context, 1, 0, 0, 1.0);
// Soft edge ... 5.0 works ok, but we try some more
CGContextSetShadowWithColor(context, CGSizeMake(0.0, 0.0), 13.0, [UIColor redColor].CGColor);
CGContextBeginPath(context);
CGContextMoveToPoint(context, touchLocation.x, touchLocation.y);
CGContextAddLineToPoint(context, currentLocation.x, currentLocation.y);
CGContextStrokePath(context);
self.foregroundDrawView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The important part is CGContextSetBlendMode(context, kCGBlendModeClear). This erases the traced part from the image; afterwards the image is set back as the image of the foreground image view.
When the user is done you should be able to combine the two images or use the black&white image as a mask.
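For completeness, the touch tracking that drives the snippet above might look like this (a sketch; eraseFromPoint:toPoint: is a hypothetical helper wrapping the code above):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    // Previous and current finger positions in the foreground view's coordinates
    CGPoint touchLocation = [touch previousLocationInView:self.foregroundDrawView];
    CGPoint currentLocation = [touch locationInView:self.foregroundDrawView];
    [self eraseFromPoint:touchLocation toPoint:currentLocation];
}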

Related

ios - How to improve draw and update large image performance?

I'm developing a selective color app in iOS (there are many similar apps in the App Store, for example: https://itunes.apple.com/us/app/color-splash/id304871603?mt=8). My idea is simple: use two UIViews, one for the foreground (black and white image) and one for the background (color image). I use touchesBegan, touchesMoved and so on to track user input in the foreground. In touchesMoved, I use kCGBlendModeClear to erase the path along which the user moved the finger. Finally, I combine the images in the two UIViews to get the result, which is displayed in the foreground view.
To implement this idea, I have written two different versions. They work well with small images but are very slow with large images (> 3 MB).
In first version, I use two UIImageViews (imgForegroundView and imgBackgroundView).
Here is code to get the image result when user moved finger from point p1 to point p2. This code will be called from touchesMoved event:
- (UIImage *)getImageWithPoint:(CGPoint)p1 andPoint:(CGPoint)p2 {
    UIGraphicsBeginImageContext(originalImg.size);
    [self.imgForegroundView.image drawInRect:CGRectMake(0, 0, originalImg.size.width, originalImg.size.height)];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(context, kCGBlendModeClear);
    CGContextSetAllowsAntialiasing(context, TRUE);
    CGContextSetLineWidth(context, brushSize);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetRGBStrokeColor(context, 1, 0, 0, 1.0);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, p1.x, p1.y);
    CGContextAddLineToPoint(context, p2.x, p2.y);
    CGContextStrokePath(context);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
After getting the image result, I replace imgForegroundView.image with it.
In version 2, I use the idea from http://www.effectiveui.com/blog/2011/12/02/how-to-build-a-simple-painting-app-for-ios/. The background is still a UIImageView, but the foreground is a subclass of UIView. In the foreground view, I use a cache context to store the image. When the user moves a finger, I draw on the cache context, then update the view by overriding the drawRect: method. In drawRect:, I get the image from the cache context and draw it into the current context.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGImageRef cacheImage = CGBitmapContextCreateImage(cacheContext);
    CGContextDrawImage(context, self.bounds, cacheImage);
    CGImageRelease(cacheImage);
}
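The touch-handling side (drawing the erased segment into the cache context) isn't shown in the post; a minimal sketch, assuming cacheContext is a CGBitmapContext created once for the view and ignoring the coordinate flip between UIKit and the bitmap context:
// Hypothetical helper, called from touchesMoved with the previous and current points
- (void)eraseSegmentFromPoint:(CGPoint)p1 toPoint:(CGPoint)p2 {
    CGContextSetBlendMode(cacheContext, kCGBlendModeClear);
    CGContextSetLineWidth(cacheContext, brushSize);
    CGContextSetLineCap(cacheContext, kCGLineCapRound);
    CGContextBeginPath(cacheContext);
    CGContextMoveToPoint(cacheContext, p1.x, p1.y);
    CGContextAddLineToPoint(cacheContext, p2.x, p2.y);
    CGContextStrokePath(cacheContext);
    [self setNeedsDisplay]; // triggers the drawRect: above
}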
Then, in the same way as in the first version, I get the image from the foreground and combine it with the background.
With small images (<= 2 MB), both versions work well. But with larger images it is terrible: after the user moves the finger, it takes a long time (3-5 seconds, depending on the image size) before the image updates.
I want my app to get close to the real-time speed of the example app above, but I don't know how. Can anyone give me some suggestions?
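One common optimization for the cache-context version (a sketch, not from the original post) is to invalidate only the small rectangle the new segment touches, instead of redrawing the whole view:
// In the erase helper, replace setNeedsDisplay with a dirty-rect invalidation:
// the union of the two points, padded by the brush radius.
CGRect dirty = CGRectUnion(CGRectMake(p1.x, p1.y, 1, 1),
                           CGRectMake(p2.x, p2.y, 1, 1));
dirty = CGRectInset(dirty, -brushSize / 2 - 1, -brushSize / 2 - 1);
[self setNeedsDisplayInRect:dirty];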

Create Scalable Path CGContextPath

I'm new to iOS development.
I have a problem when drawing with Core Graphics/UIKit.
I want to implement a function like the shape tool in Paint on Windows.
I use this source: https://github.com/JagCesar/Simple-Paint-App-iOS, and add a new function.
On touchesMoved I draw a shape based on the point recorded in touchesBegan and the current touch point, so every intermediate shape ends up drawn into the image.
- (void)drawInRectModeAtPoint:(CGPoint)currentPoint
{
    UIGraphicsBeginImageContext(self.imageViewDrawing.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.currentColor setFill];
    [self.imageViewDrawing.image drawInRect:CGRectMake(0, 0, self.imageViewDrawing.frame.size.width, self.imageViewDrawing.frame.size.height)];
    // Triangle from the start point to the current point, mirrored across the start point
    CGContextMoveToPoint(context, self.beginPoint.x, self.beginPoint.y);
    CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
    CGContextAddLineToPoint(context, self.beginPoint.x * 2 - currentPoint.x, currentPoint.y);
    CGContextAddLineToPoint(context, self.beginPoint.x, self.beginPoint.y);
    CGContextFillPath(context);
    self.currentImage = UIGraphicsGetImageFromCurrentImageContext();
    self.imageViewDrawing.image = self.currentImage;
    UIGraphicsEndImageContext();
}
In other words, I want to create only one shape: on touchesBegan the app records the point, on touchesMoved the shape is rescaled to follow the touch, and on touchesEnded the final shape is drawn into the image context.
Hope you can give me some tips on how to do that.
Thank you.
You probably want to extract this functionality away from the context. As you are using an image, use an image view. At the start of the touches, create the image and the image view. Set the image view frame to the touch point with a size of {1, 1}. As the touch moves, move / scale the image view by changing its frame. When the touches end, use the start and end points to render the image into the context (which should be the same as the final frame of the image view).
Doing it this way means you don't add anything to the context which would need to be removed again when the next touch update is received. The above method would work similarly with a CALayer instead of an image view. You could also look at a solution using a transform on the view.
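A rough sketch of the image-view approach described above (shapeView, startPoint and shapeImage are hypothetical properties; imageViewDrawing is the backing image view from the question):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    self.startPoint = [[touches anyObject] locationInView:self];
    self.shapeView = [[UIImageView alloc] initWithImage:self.shapeImage];
    self.shapeView.frame = CGRectMake(self.startPoint.x, self.startPoint.y, 1, 1);
    [self addSubview:self.shapeView];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    // Only the preview view's frame changes; nothing is committed to the image yet
    self.shapeView.frame = CGRectMake(MIN(self.startPoint.x, p.x),
                                      MIN(self.startPoint.y, p.y),
                                      fabs(p.x - self.startPoint.x),
                                      fabs(p.y - self.startPoint.y));
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Commit: render the shape into the image at its final frame, drop the preview
    UIGraphicsBeginImageContext(self.imageViewDrawing.frame.size);
    [self.imageViewDrawing.image drawInRect:self.imageViewDrawing.bounds];
    [self.shapeImage drawInRect:self.shapeView.frame];
    self.imageViewDrawing.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self.shapeView removeFromSuperview];
    self.shapeView = nil;
}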

How to draw an image along a curve

I have drawn a curve:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextMoveToPoint(context, firstPoint.x, firstPoint.y);
CGContextAddCurveToPoint(context, cpx1, cpy1, cpx2, cpy2, finalPoint.x, finalPoint.y);
And now I want to draw an image multiple times along the path of the curve.
Is it possible to do that? If so, How?
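One possible approach (a sketch, not a confirmed answer): evaluate the cubic Bézier yourself and stamp the image at sampled parameter values. Here image and the sample count N are assumptions; note that evenly spaced t values are not evenly spaced along the arc:
// B(t) for a cubic Bezier with endpoints p0/p3 and control points p1/p2
static CGPoint bezierPoint(CGFloat t, CGPoint p0, CGPoint p1, CGPoint p2, CGPoint p3) {
    CGFloat u = 1.0 - t;
    return CGPointMake(u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x,
                       u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y);
}

// Inside drawRect: (or any current context), stamp the image centered on each sample
for (int i = 0; i <= N; i++) {
    CGPoint p = bezierPoint((CGFloat)i / N, firstPoint,
                            CGPointMake(cpx1, cpy1), CGPointMake(cpx2, cpy2),
                            finalPoint);
    [image drawAtPoint:CGPointMake(p.x - image.size.width / 2,
                                   p.y - image.size.height / 2)];
}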

why CGContextSaveGState is not required even after several modifications to the current context?

I have really been struggling with Quartz 2D for more than 10 days; please help me understand a few concepts, I will be really grateful. Please look at this code and the screenshot URL.
This code draws an image with a border, writes text on it, and the result becomes a whole new image with the border and text.
//part 1
CGSize cgs = CGSizeMake(250.0, 400.0);
UIGraphicsBeginImageContext(cgs);
CGRect rectangle = CGRectMake(0,0,cgs.width,cgs.height);
CGRect imageRect = CGRectInset(rectangle, 5.4, 5.4);
imageRect.size.height -= 100;
UIImage *myImage = [UIImage imageNamed:@"BMW.jpg"];
[myImage drawInRect:imageRect];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, 10.0);
CGContextSetRGBStrokeColor(context, 0.0, 0.0, 1.0, 1.0);
CGContextStrokeRect(context, rectangle);
//
//part 2
1. CGRect contextRect = rectangle;
2. CGContextTranslateCTM(context, 0, contextRect.size.height);
3. CGContextScaleCTM(context, 1, -1);
4. float w, h;
5. w = contextRect.size.width;
6. h = contextRect.size.height;
7. CGContextSelectFont(context, "Helvetica-Bold", 25, kCGEncodingMacRoman);
8. CGContextSetCharacterSpacing (context, 5);
9. CGContextSetRGBFillColor(context, 0.0, 1.0, 1.0, 1.0);
10. CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);
11. CGContextShowTextAtPoint(context, 45, 50, "Quartz 2D", 9);
//
//part 3
UIImage *testImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[testImg drawAtPoint:CGPointMake(35, 10)];
//
http://i40.tinypic.com/140aptv.png
Part 1 and part 3 of the code are very clear to me; my problem is with part 2.
On lines 2 and 3, the coordinates are transformed so the text does not display upside down. But UIImage already takes care of this internally, so why isn't the image flipped upside down? Why is it still displayed in the correct position after the transform is applied to the same context? If the UIImage's coordinates are already adjusted, shouldn't this coordinate transform flip the image upside down again?
On lines 9 and 10, the fill-color and stroke-color methods are called. The fill color changes the text color, but the stroke color does nothing to the text. Why? And why, without CGContextSaveGState, did it modify the color of the text and not the border color?
Behind both points the common confusion is: why does this work perfectly, and why doesn't this code need CGContextSaveGState and CGContextRestoreGState? How is it possible that the context is modified without affecting the previously drawn items, like the blue border in this case, or without the coordinate transformation affecting anything but the text?
Please correct me if I am failing in any way to make my points understood.
Thanks in advance,
Regards.
Quartz 2D uses the "painter's model." That means, you draw one thing, and it's done. Then you draw another thing, and it goes on top of what you drew before. Then you draw another thing and that goes on top, etc. If I pick up a stamp, dip it in paint and press it to paper, then turn it over and do it again to another part of the paper, the first stamped image doesn't flip over just because I flipped the stamp.
Every time you see "stroke" or "draw," you're modifying the final image. Later changes to the context don't affect that.
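A minimal sketch of that behavior: state changes only affect drawing that happens after them, so earlier output never needs CGContextSaveGState to protect it:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, 10.0);
CGContextSetRGBStrokeColor(context, 0.0, 0.0, 1.0, 1.0);    // blue
CGContextStrokeRect(context, CGRectMake(10, 10, 100, 100)); // painted blue, done
CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);    // red
CGContextStrokeRect(context, CGRectMake(10, 130, 100, 100)); // painted red
// The first rectangle stays blue: changing the stroke color does not
// reach back and repaint anything already on the canvas.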

iOS: draw circle like in calendar app

I want to have custom table cells in my table view that contain a colored circle, as seen in the Calendar app on iOS devices (the screen where you select the displayed calendars).
Right now I have a custom cell containing a UIView subclass (called CircleView), whose drawRect looks like this:
CGContextRef contextRef = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(contextRef, 0, 0, 1.0, 0.1); // color components are 0..1, not 0..255
CGContextFillEllipseInRect(contextRef, CGRectMake(10.0, 10.0, 10.0, 10.0));
This basically draws a circle in the color and place I want. However, I can't seem to find a way to give the circle a dark, 1-point-thick border and fill the rest of it with a lighter color. Do I have to draw multiple circles and overlay them in some way?
Example of how it should look like: http://i56.tinypic.com/svrkmf.png
[edit] Managed to get a good-looking solution with this code:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, self.color);
CGContextSetAlpha(context, 0.5); // lighter fill; note this alpha also applies to the stroke below
CGContextFillEllipseInRect(context, CGRectMake(10.0, 10.0, 10.0, 10.0));
CGContextSetStrokeColorWithColor(context, self.color);
CGContextStrokeEllipseInRect(context, CGRectMake(10.0, 10.0, 10.0, 10.0));
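If the border should stay fully opaque while only the fill is lightened, one variation (a sketch, not from the original post) bakes the alpha into the fill color instead of the whole context:
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect circleRect = CGRectMake(10.0, 10.0, 10.0, 10.0);
CGColorRef fillColor = CGColorCreateCopyWithAlpha(self.color, 0.5); // 50%-alpha copy of the color
CGContextSetFillColorWithColor(context, fillColor);
CGColorRelease(fillColor);
CGContextFillEllipseInRect(context, circleRect);
CGContextSetLineWidth(context, 1.0); // 1-point border
CGContextSetStrokeColorWithColor(context, self.color); // opaque border color
CGContextStrokeEllipseInRect(context, circleRect);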
