Re-draw image using touch method - iOS

I use the code below for masking an image, removing a part of it using touch. This works properly using CGContextSetBlendMode.
Now I want to re-draw that image using the same touch events. Can you help me re-draw the erased part of the image?
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    currentTouch = [touch locationInView:Second_IMG];
    CGFloat brushSize = 35;
    CGColorRef strokeColor = [UIColor whiteColor].CGColor;

    UIGraphicsBeginImageContext(Second_IMG.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [Second_IMG.image drawInRect:CGRectMake(0, 0, Second_IMG.frame.size.width, Second_IMG.frame.size.height)];
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, brushSize);
    CGContextSetStrokeColorWithColor(context, strokeColor);
    CGContextSetBlendMode(context, kCGBlendModeClear);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, lastTouch.x, lastTouch.y);
    CGContextAddLineToPoint(context, currentTouch.x, currentTouch.y);
    CGContextStrokePath(context);
    Second_IMG.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    lastTouch = [touch locationInView:Second_IMG];
}

"I use the code below for masking an image"
No, you're not masking at all. You are drawing a new image that lacks the touched area (because you "erased" that part of the image using kCGBlendModeClear). And you are then replacing the image of Second_IMG with this new partially erased image. So there is no "erased part" to "redraw" - you have thrown the information away.
Thus, to do what you are asking to do, you will need first to have access to a copy of the original Second_IMG.image. If you then replace the partially erased Second_IMG.image with the original, all the erased material will magically reappear - because you will return to the image with nothing erased.
But let's say that that isn't your goal: you don't want to bring back all the erased material, but only the material most recently erased. Then you would need to save off each generated intermediate image. You don't want to do this on every touchesMoved:, obviously, because you would end up with thousands of intermediate images. But if you save off the current state of the image on touchesEnded:, for example, then if you want to go back to before the most recent touchesMoved: sequence, you'll be able to, because you saved it on the previous occasion.
It would be simpler, however, if you were really using masking! In other words, if each stroke were expressed as an actual mask layer sitting on top of the image view, then the stroke could be removed simply by removing that mask layer. You would thus be working with mask layers expressing the strokes, rather than replacing images as you are doing now. You will find, in any case, that replacing the image repeatedly does not hold up well on the device; it's a very inefficient way to proceed.
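A minimal sketch of that layer-per-stroke idea, assuming a hypothetical strokeLayers NSMutableArray property; removing the last layer undoes the last stroke without ever touching the underlying image:
- (void)beginStrokeAtPoint:(CGPoint)point
{
    // One CAShapeLayer per stroke, added above the image. Paint in whatever
    // color stands in for "erased" instead of destroying pixels.
    CAShapeLayer *stroke = [CAShapeLayer layer];
    stroke.strokeColor = [UIColor whiteColor].CGColor;
    stroke.fillColor = nil;
    stroke.lineWidth = 35.0;
    stroke.lineCap = kCALineCapRound;
    UIBezierPath *path = [UIBezierPath bezierPath];
    [path moveToPoint:point];
    stroke.path = path.CGPath;
    [Second_IMG.layer addSublayer:stroke];
    [self.strokeLayers addObject:stroke];
}

- (void)extendStrokeToPoint:(CGPoint)point
{
    CAShapeLayer *stroke = [self.strokeLayers lastObject];
    UIBezierPath *path = [UIBezierPath bezierPathWithCGPath:stroke.path];
    [path addLineToPoint:point];
    stroke.path = path.CGPath;
}

- (void)removeLastStroke
{
    // The image itself was never modified, so "re-drawing" is just this:
    [[self.strokeLayers lastObject] removeFromSuperlayer];
    [self.strokeLayers removeLastObject];
}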

Related

iOS - How to improve draw and update large image performance?

I'm developing a selective color app for iOS (there are many similar apps in the App Store, for example: https://itunes.apple.com/us/app/color-splash/id304871603?mt=8). My idea is simple: use two UIViews, one for the foreground (black and white image) and one for the background (color image). I use the touchesBegan, touchesMoved, and related events to track user input in the foreground. In touchesMoved, I use kCGBlendModeClear to erase the path along which the user moved their finger. Finally, I combine the two images in the UIViews to get the result, which is displayed in the foreground view.
To implement this idea, I have written two different implementations. They work well with small images but are very slow with large images (> 3 MB).
In the first version, I use two UIImageViews (imgForegroundView and imgBackgroundView).
Here is the code that produces the resulting image when the user moves a finger from point p1 to point p2. It is called from the touchesMoved event:
- (UIImage *)getImageWithPoint:(CGPoint)p1 andPoint:(CGPoint)p2
{
    UIGraphicsBeginImageContext(originalImg.size);
    [self.imgForegroundView.image drawInRect:CGRectMake(0, 0, originalImg.size.width, originalImg.size.height)];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(context, kCGBlendModeClear);
    CGContextSetAllowsAntialiasing(context, TRUE);
    CGContextSetLineWidth(context, brushSize);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetRGBStrokeColor(context, 1, 0, 0, 1.0);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, p1.x, p1.y);
    CGContextAddLineToPoint(context, p2.x, p2.y);
    CGContextStrokePath(context);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
After getting the resulting image, I replace imgForegroundView.image with it.
In version 2, I use the idea from http://www.effectiveui.com/blog/2011/12/02/how-to-build-a-simple-painting-app-for-ios/. The background is still a UIImageView, but the foreground is a subclass of UIView. In the foreground, I use a cache context to store the image. When the user moves a finger, I draw into the cache context, then update the view by overriding drawRect:. In drawRect:, I get the image from the cache context and draw it into the current context.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGImageRef cacheImage = CGBitmapContextCreateImage(cacheContext);
    CGContextDrawImage(context, self.bounds, cacheImage);
    CGImageRelease(cacheImage);
}
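(The question never shows how cacheContext is created; here is a minimal sketch of one plausible setup, assuming a CGContextRef cacheContext ivar sized to the view. Note that a bitmap context uses Core Graphics' bottom-left origin, so touch coordinates may need flipping.)
- (void)createCacheContext
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    cacheContext = CGBitmapContextCreate(NULL,
                                         (size_t)self.bounds.size.width,
                                         (size_t)self.bounds.size.height,
                                         8,          // bits per component
                                         0,          // let CG pick bytes per row
                                         colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
}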
Then, in the same way as in the first version, I get the image from the foreground and combine it with the background.
With small images (<= 2 MB), both versions work well. With larger images, though, it is terrible: after the user moves a finger, it takes a long time (3-5 seconds, depending on image size) before the image updates.
I want my app to achieve near-real-time speed like the example app above, but I don't know how. Can anyone give me some suggestions?

Masking two images

I want to do selective masking between two images in iOS, similar to the mask function in Blender. There are two images, 1 and 2 (resized to the same dimensions). Initially only image 1 is visible, but wherever the user touches an area of image 1, it becomes transparent and image 2 becomes visible in those regions.
I created a mask-like image using Core Graphics on touch move. It is basically a full black image with white portions wherever I touched; the alpha is 1.0 throughout. I could use this image as a mask and do the necessary work with my own image-processing methods, iterating over each pixel, checking it, and setting values accordingly. But this would be called inside touch move, so it might slow the entire process down (especially with 8 MP camera images).
I want to know how this can be achieved using Quartz Core or Core Graphics efficiently enough to work on big images.
The code I have so far:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    mouseSwiped = YES;
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:staticBG];

    UIGraphicsBeginImageContext(staticBG.frame.size);
    [maskView.image drawInRect:CGRectMake(0, 0, maskView.frame.size.width, maskView.frame.size.height)];
    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 20.0);
    CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1.0, 1.0, 1.0, 0.0);
    CGContextBeginPath(UIGraphicsGetCurrentContext());
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
    CGContextStrokePath(UIGraphicsGetCurrentContext());
    maskView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    lastPoint = currentPoint;
    mouseMoved++;
    if (mouseMoved == 10)
        mouseMoved = 0;
    staticBG.image = [self maskImage:staticBG.image withMask:maskView.image];
    //maskView.hidden = NO;
}
- (UIImage *)maskImage:(UIImage *)baseImage withMask:(UIImage *)maskImage
{
    CGImageRef imgRef = [baseImage CGImage];
    CGImageRef maskRef = [maskImage CGImage];
    CGImageRef actualMask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                              CGImageGetHeight(maskRef),
                                              CGImageGetBitsPerComponent(maskRef),
                                              CGImageGetBitsPerPixel(maskRef),
                                              CGImageGetBytesPerRow(maskRef),
                                              CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask(imgRef, actualMask);
    UIImage *result = [UIImage imageWithCGImage:masked];
    // Balance the Create calls.
    CGImageRelease(actualMask);
    CGImageRelease(masked);
    return result;
}
The maskImage method is not working, as it creates a masked image depending upon alpha values.
I went through this link: Creating Mask from Path, but I cannot understand the answer.
First of all, I will mention a few things that I hope you know already:
- Masking works by taking the alpha values only.
- Creating an image with Core Graphics on each touch move is a pretty huge overhead; you should try to avoid it or find some other way of doing things.
- Try to use a static mask image.
I would like to propose that you look at this from an inverted point of view: instead of trying to make a hole in the top image to see the bottom one, why not place the bottom image on top and mask it, so that it shows through at the user's touch points, covering up the top view in specific parts?
I've done an example for you to get the idea over here: http://goo.gl/Zlu31T
Good luck, and do post back if anything is not clear. I do believe there are much better and more optimised ways of doing this, though.
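A minimal sketch of that inverted approach, assuming hypothetical topImageView (showing image 2), maskLayer, and revealPath properties:
- (void)setupReveal
{
    self.maskLayer = [CAShapeLayer layer];
    self.maskLayer.strokeColor = [UIColor blackColor].CGColor; // any opaque color reveals
    self.maskLayer.fillColor = nil;
    self.maskLayer.lineWidth = 20.0;
    self.maskLayer.lineCap = kCALineCapRound;
    self.revealPath = [UIBezierPath bezierPath];
    self.topImageView.layer.mask = self.maskLayer; // topImageView sits above image 1
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self.topImageView];
    if (self.revealPath.isEmpty) {
        [self.revealPath moveToPoint:p];
    } else {
        [self.revealPath addLineToPoint:p];
    }
    // Updating the path is cheap; no bitmap is redrawn per move.
    self.maskLayer.path = self.revealPath.CGPath;
}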
Since you're doing this in real time, I suggest you fake it while editing; if you need to output the image later on, you can mask it for real, since that might take some time (not much, just not fast enough to do in real time). By faking it I mean putting image1 as the background and the hidden image2 on top of it. Once the user touches a point, set the frame of the image2 UIImageView to
CGRect rect = CGRectMake(touch.x - desiredRect.size.width / 2,
                         touch.y - desiredRect.size.height / 2,
                         desiredRect.size.width,
                         desiredRect.size.height);
and make it visible.
desiredRect would be the portion of image2 that you want to show. Upon lifting the finger, you can just hide the image2 UIImageView so that image1 is fully visible. It's the fastest way I can think of right now, if your goal isn't to output the image at that very moment.
Use this code; it will help with masking the two UIImages:
CGSize newSize = CGSizeMake(320, 377);
UIGraphicsBeginImageContext(newSize);

// Use existing opacity as is
[backGroundImageView.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
// Apply supplied opacity
[self.drawImage.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];

UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageData = UIImagePNGRepresentation(newImage);

Create Scalable Path CGContextPath

I'm new to developing on iOS, and I have a problem when drawing with Core Graphics/UIKit. I want to implement a function like the shape tool in Paint on Windows.
I use this source: https://github.com/JagCesar/Simple-Paint-App-iOS, and add a new function.
On touchesMoved, I draw a shape based on the point recorded at touchesBegan and the current touch point. It draws the whole shape each time.
- (void)drawInRectModeAtPoint:(CGPoint)currentPoint
{
    UIGraphicsBeginImageContext(self.imageViewDrawing.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.currentColor setFill];
    [self.imageViewDrawing.image drawInRect:CGRectMake(0, 0, self.imageViewDrawing.frame.size.width, self.imageViewDrawing.frame.size.height)];

    CGContextMoveToPoint(context, self.beginPoint.x, self.beginPoint.y);
    CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
    CGContextAddLineToPoint(context, self.beginPoint.x * 2 - currentPoint.x, currentPoint.y);
    CGContextAddLineToPoint(context, self.beginPoint.x, self.beginPoint.y);
    CGContextFillPath(context);

    self.currentImage = UIGraphicsGetImageFromCurrentImageContext();
    self.imageViewDrawing.image = self.currentImage;
    UIGraphicsEndImageContext();
}
I mean, I want to create only one shape: when touchesBegan, the app records the point; while the touches move, the shape is scaled by the touch; and when the touches end, the shape is drawn into the image context.
Hope you can give me some tips to do that.
Thank you.
You probably want to extract this functionality away from the context. As you are using an image, use an image view. At the start of the touches, create the image and the image view. Set the image view frame to the touch point with a size of {1, 1}. As the touch moves, move / scale the image view by changing its frame. When the touches end, use the start and end points to render the image into the context (which should be the same as the final frame of the image view).
Doing it this way means you don't add anything to the context which would need to be removed again when the next touch update is received. The above method would work similarly with a CALayer instead of an image view. You could also look at a solution using a transform on the view.
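A minimal sketch of that suggestion, assuming hypothetical shapeView, shapeImage (a pre-rendered image of the shape), and beginPoint properties, with imageViewDrawing from the question as the backing image:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    self.beginPoint = [[touches anyObject] locationInView:self];
    self.shapeView = [[UIImageView alloc] initWithImage:self.shapeImage];
    self.shapeView.frame = CGRectMake(self.beginPoint.x, self.beginPoint.y, 1, 1);
    [self addSubview:self.shapeView];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Scale by moving/resizing the view; nothing is committed to the image yet.
    CGPoint p = [[touches anyObject] locationInView:self];
    self.shapeView.frame = CGRectMake(MIN(self.beginPoint.x, p.x),
                                      MIN(self.beginPoint.y, p.y),
                                      fabs(p.x - self.beginPoint.x),
                                      fabs(p.y - self.beginPoint.y));
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Commit once: render into the backing image at the view's final frame.
    UIGraphicsBeginImageContext(self.imageViewDrawing.frame.size);
    [self.imageViewDrawing.image drawInRect:self.imageViewDrawing.bounds];
    [self.shapeImage drawInRect:self.shapeView.frame];
    self.imageViewDrawing.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self.shapeView removeFromSuperview];
    self.shapeView = nil;
}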

Performance Issues When Using Many CALayer Masks

I am trying to use CAShapeLayer to mask a CALayer in my iOS app, as it takes a fraction of the CPU time to mask an image compared with manually masking one in a bitmap context.
However, when I have several dozen or more images layered over each other, the CAShapeLayer-masked UIImageView is slow to move to my touch.
Here is some example code:
UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"SomeImage.jpg" ofType:nil]];
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddEllipseInRect(path, NULL, CGRectMake(0.f, 0.f, image.size.width * .25, image.size.height * .25));

for (int i = 0; i < 200; i++) {
    SLTUIImageView *imageView = [[SLTUIImageView alloc] initWithImage:image];
    imageView.frame = CGRectMake(arc4random_uniform(CGRectGetWidth(self.view.bounds)),
                                 arc4random_uniform(CGRectGetHeight(self.view.bounds)),
                                 image.size.width * .25,
                                 image.size.height * .25);

    CAShapeLayer *shape = [CAShapeLayer layer];
    shape.path = path;
    imageView.layer.mask = shape;

    [self.view addSubview:imageView];
    [imageView release];
}
CGPathRelease(path);
With the above code, imageView is very laggy. However, it reacts instantly if I mask it manually in a bitmap context:
UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"3.0-Pad-Classic0.jpg" ofType:nil]];
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddEllipseInRect(path, NULL, CGRectMake(0.f, 0.f, image.size.width * .25, image.size.height * .25));

for (int i = 0; i < 200; i++) {
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(image.size.width * .25, image.size.height * .25), NO, [[UIScreen mainScreen] scale]);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextAddPath(ctx, path);
    CGContextClip(ctx);
    [image drawInRect:CGRectMake(-(image.size.width * .25), -(image.size.height * .25), image.size.width, image.size.height)];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    SLTUIImageView *imageView = [[SLTUIImageView alloc] initWithImage:finalImage];
    imageView.frame = CGRectMake(arc4random_uniform(CGRectGetWidth(self.view.bounds)),
                                 arc4random_uniform(CGRectGetHeight(self.view.bounds)),
                                 finalImage.size.width,
                                 finalImage.size.height);
    [self.view addSubview:imageView];
    [imageView release];
}
CGPathRelease(path);
By the way, here is the code for SLTUIImageView; it's just a simple subclass of UIImageView that responds to touches (for anyone who was wondering):
- (id)initWithImage:(UIImage *)image
{
    self = [super initWithImage:image];
    if (self) {
        self.userInteractionEnabled = YES;
    }
    return self;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self.superview bringSubviewToFront:self];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    self.center = [touch locationInView:self.superview];
}
Is it possible to somehow optimize how the CAShapeLayer masks the UIImageView so that performance improves? I have tried to find the bottleneck using the Time Profiler in Instruments, but I can't tell exactly what is causing it.
I have tried setting shouldRasterize to YES both on layer and on layer.mask, but neither seems to have any effect. I'm not sure what to do.
Edit:
I have done more testing and find that if I use just a regular CALayer to mask another CALayer (layer.mask = someOtherLayer) I have the same performance issues. It seems that the problem isn't specific to CAShapeLayer—rather it is specific to the mask property of CALayer.
Edit 2:
So after learning more about the Core Animation tool in Instruments, I learned that the view is being rendered offscreen each time it moves. Setting shouldRasterize to YES when the touch begins and turning it off when the touch ends makes the view stay green in Instruments (thus keeping the cache), but performance is still terrible. I believe this is because even though the view is being cached, if it isn't opaque, then it still has to be re-rendered with each frame.
One thing to emphasize is that if there are only a few views being masked (say even around ten) the performance is pretty good. However, when you increase that to 100 or more, the performance lags. I imagine this is because when one moves over the others, they all have to be re-rendered.
My conclusion is this: I have one of two options.
First, there must be some way to permanently mask a view (render it once and call it good). I know this can be done via the graphics or bitmap context route, as I show in my example code, but when a layer masks its view, it happens instantly, whereas doing it in a bitmap context as shown is slow beyond comparison.
Second, there must be some faster way to do it via the bitmap context route. If there is an expert in masking images or views, their help would be very much appreciated.
You've gotten pretty far along, and I believe you are almost at a solution. What I would do is simply an extension of what you've already tried. Since you say many of these layers end up in final positions that remain constant relative to the other layers and the mask, simply render all those "finished" layers into a single bitmap context. That way, every time you write a layer out to that single context, you'll have one less live mask slowing down the animation/rendering process.
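A minimal sketch of that flattening idea (flattenView: is a hypothetical helper; renderInContext: captures the layer's mask for simple shape masks):
- (void)flattenView:(UIView *)view
{
    // Render the masked view once into a bitmap...
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, [[UIScreen mainScreen] scale]);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *flat = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // ...then swap in a plain image view so no live mask remains.
    UIImageView *flatView = [[UIImageView alloc] initWithImage:flat];
    flatView.frame = view.frame;
    [view.superview insertSubview:flatView aboveSubview:view];
    [view removeFromSuperview];
    [flatView release];
}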
Quartz (drawRect:) is slower than Core Animation for many reasons (see "CALayer vs CGContext drawRect vs CALayer"), but it is necessary to use it correctly.
In the documentation you can find some advice: ImprovingCoreAnimationPerformance.
If you want high performance, maybe you can try using AsyncDisplayKit. This framework lets you create smooth and responsive apps.

Smoother freehand drawing experience (iOS)

I am making a math-related activity in which the user can draw with their fingers for scratch work as they try to solve the math question. However, I notice that when I move my finger quickly, the line lags noticeably behind it. I was wondering whether there is some performance area I have overlooked, or whether touchesMoved simply doesn't fire often enough (it is perfectly smooth and wonderful if you don't move fast). I am using UIBezierPath. First I create it in my init method like this:
myPath=[[UIBezierPath alloc]init];
myPath.lineCapStyle=kCGLineCapSquare;
myPath.lineJoinStyle = kCGLineJoinBevel;
myPath.lineWidth=5;
myPath.flatness = 0.4;
Then in drawRect:
- (void)drawRect:(CGRect)rect
{
    [brushPattern setStroke];
    if (baseImageView.image) {
        CGContextRef c = UIGraphicsGetCurrentContext();
        [baseImageView.layer renderInContext:c];
    }
    CGBlendMode blendMode = self.erase ? kCGBlendModeClear : kCGBlendModeNormal;
    [myPath strokeWithBlendMode:blendMode alpha:1.0];
}
baseImageView is what I use to save the result so that I don't have to draw many paths (gets really slow after a while). Here is my touch logic:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *mytouch = [[touches allObjects] objectAtIndex:0];
    [myPath moveToPoint:[mytouch locationInView:self]];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *mytouch = [[touches allObjects] objectAtIndex:0];
    [myPath addLineToPoint:[mytouch locationInView:self]];
    [self setNeedsDisplay];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0f);
    CGContextRef c = UIGraphicsGetCurrentContext();
    [self.layer renderInContext:c];
    baseImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [myPath removeAllPoints];
    [self setNeedsDisplay];
}
This project is going to be released as an enterprise app, so it will only be installed on iPad 2. Target iOS is 5.0. Any suggestions about how I can squeeze a little more speed out of this would be appreciated.
Of course you should start by running it under Instruments and looking for your hotspots. Then you need to make changes and re-evaluate to see their impact; otherwise you're just guessing. That said, some notes from experience:
- Adding lots of elements to a path can get very expensive. I would not be surprised if your addLineToPoint: turns out to be a hotspot. It has been for me.
- Rather than backing your system with a UIImageView, I would probably render into a CGLayer. CGLayers are optimized for rendering into a specific context.
- Why accumulate the path at all, rather than just rendering it into the layer at each step? That way your path would never be more than two elements (move, addLine); see the sketch after these notes. Typically the two-stage approach is used so you can handle undo or the like.
- Make sure that you're turning off any UIBezierPath features you don't want. In particular, look at the section "Accessing Draw Properties" in the docs. You may consider switching to CGMutablePath rather than UIBezierPath. It's not actually faster when configured the same, but its default settings turn more things off, so by default it's faster. (You're already setting most of these; you'll want to experiment a little in Instruments to see what impact they make.)
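A minimal sketch of that segment-at-a-time CGLayer idea, assuming hypothetical drawingLayer (CGLayerRef) and lastPoint (CGPoint, set in touchesBegan) ivars:
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    if (drawingLayer == NULL) {
        // Created once, relative to the destination context.
        drawingLayer = CGLayerCreateWithContext(ctx, self.bounds.size, NULL);
        // If strokes come out mirrored vertically on your setup, flip the
        // layer's context once here with CGContextTranslateCTM/ScaleCTM.
    }
    // Just blit the accumulated layer; no path replay.
    CGContextDrawLayerInRect(ctx, self.bounds, drawingLayer);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[[touches allObjects] objectAtIndex:0] locationInView:self];
    if (drawingLayer != NULL) {
        CGContextRef layerCtx = CGLayerGetContext(drawingLayer);
        CGContextSetStrokeColorWithColor(layerCtx, [UIColor blackColor].CGColor);
        CGContextSetLineWidth(layerCtx, 5.0);
        CGContextSetLineCap(layerCtx, kCGLineCapSquare);
        CGContextBeginPath(layerCtx);
        CGContextMoveToPoint(layerCtx, lastPoint.x, lastPoint.y);
        CGContextAddLineToPoint(layerCtx, p.x, p.y);
        CGContextStrokePath(layerCtx);
    }
    lastPoint = p;
    [self setNeedsDisplay]; // or setNeedsDisplayInRect: around the new segment
}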
http://mobile.tutsplus.com/tutorials/iphone/ios-sdk_freehand-drawing/
This link shows exactly how to make a curve smoother, step by step. It simply tells us how to add intermediate points (in the touchesMoved method) to our curves to make them smoother.
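A minimal sketch of the midpoint technique that tutorial describes, assuming a hypothetical previousPoint ivar set in touchesBegan and the myPath from the question:
static CGPoint midPoint(CGPoint a, CGPoint b)
{
    return CGPointMake((a.x + b.x) * 0.5, (a.y + b.y) * 0.5);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint current = [[[touches allObjects] objectAtIndex:0] locationInView:self];
    // Curve toward the midpoint, with the raw sample as control point;
    // this rounds off the corners that straight segments leave behind.
    [myPath addQuadCurveToPoint:midPoint(previousPoint, current)
                   controlPoint:previousPoint];
    previousPoint = current;
    [self setNeedsDisplay];
}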
