I'm developing a selective color app for iOS (there are many similar apps on the App Store, for example: https://itunes.apple.com/us/app/color-splash/id304871603?mt=8). My idea is simple: use two UIViews, one for the foreground (black and white image) and one for the background (color image). I use touchesBegan, touchesMoved, and related events to track user input on the foreground. In touchesMoved, I use kCGBlendModeClear to erase the path the user's finger traced. Finally, I combine the two images from the UIViews to get the result, which is displayed in the foreground view.
To implement this idea, I have written two different versions. They work well with small images but are very slow with large images (> 3 MB).
In the first version, I use two UIImageViews (imgForegroundView and imgBackgroundView).
Here is the code that produces the result image when the user moves a finger from point p1 to point p2. It is called from the touchesMoved event:
- (UIImage *)getImageWithPoint:(CGPoint)p1 andPoint:(CGPoint)p2 {
    UIGraphicsBeginImageContext(originalImg.size);

    // Redraw the current foreground image into the new context
    [self.imgForegroundView.image drawInRect:CGRectMake(0, 0, originalImg.size.width, originalImg.size.height)];

    // Erase along the stroke from p1 to p2
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(context, kCGBlendModeClear);
    CGContextSetAllowsAntialiasing(context, TRUE);
    CGContextSetLineWidth(context, brushSize);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetRGBStrokeColor(context, 1, 0, 0, 1.0);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, p1.x, p1.y);
    CGContextAddLineToPoint(context, p2.x, p2.y);
    CGContextStrokePath(context);

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
After getting the result image, I replace imgForegroundView.image with it.
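For reference, the touchesMoved handler in this version looks roughly like this (a sketch; lastPoint is assumed to be an instance variable updated on every touch event, and if the image size differs from the view size the points would also have to be scaled into image coordinates):

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // Get the new finger position and erase the segment from the previous one
    CGPoint currentPoint = [[touches anyObject] locationInView:self.imgForegroundView];
    self.imgForegroundView.image = [self getImageWithPoint:lastPoint andPoint:currentPoint];
    lastPoint = currentPoint;
}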
In the second version, I use the idea from http://www.effectiveui.com/blog/2011/12/02/how-to-build-a-simple-painting-app-for-ios/. The background is still a UIImageView, but the foreground is a subclass of UIView. In the foreground view, I use a cache context to store the image. When the user moves a finger, I draw on the cache context, then update the view by overriding the drawRect method. In drawRect, I get the image from the cache context and draw it into the current context.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Copy the cached drawing into the view's context
    CGImageRef cacheImage = CGBitmapContextCreateImage(cacheContext);
    CGContextDrawImage(context, self.bounds, cacheImage);
    CGImageRelease(cacheImage);
}
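In that version, touchesMoved only draws the new segment into the cache context and invalidates the small rectangle around it, roughly like this (a sketch; cacheContext and brushSize are the names used above, lastPoint is an assumed instance variable):

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint currentPoint = [[touches anyObject] locationInView:self];

    // Erase the new stroke segment in the persistent cache context
    CGContextSetBlendMode(cacheContext, kCGBlendModeClear);
    CGContextSetLineWidth(cacheContext, brushSize);
    CGContextSetLineCap(cacheContext, kCGLineCapRound);
    CGContextBeginPath(cacheContext);
    CGContextMoveToPoint(cacheContext, lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(cacheContext, currentPoint.x, currentPoint.y);
    CGContextStrokePath(cacheContext);

    // Redraw only the rectangle that actually changed
    CGRect dirty = CGRectMake(MIN(lastPoint.x, currentPoint.x) - brushSize,
                              MIN(lastPoint.y, currentPoint.y) - brushSize,
                              fabs(currentPoint.x - lastPoint.x) + brushSize * 2,
                              fabs(currentPoint.y - lastPoint.y) + brushSize * 2);
    [self setNeedsDisplayInRect:dirty];
    lastPoint = currentPoint;
}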
Then, in the same way as the first version, I get the image from the foreground and combine it with the background.
With small images (<= 2 MB), both versions work well. With larger images, though, performance is terrible: after the user moves a finger, it takes a long time (3 - 5 seconds, depending on image size) before the image is updated.
I want my app to run at near real-time speed like the example app above, but I don't know how. Can anyone give me some suggestions?
Related
I'm trying to achieve a selective color feature in iOS. My idea is to first draw a shape using a finger gesture and convert that into a mask. But at the same time it needs to be real time: it should work as I move my finger across the grayscale image. Can anyone point me down the correct path?
Sample app : https://itunes.apple.com/us/app/color-splash/id304871603?mt=8
Thanks.
You can position two UIImageViews over each other, the color version in the background and the black&white version in the foreground.
Then you can use touchesBegan, touchesMoved, and related events to track user input. In touchesMoved you can "erase" the path the user's finger moved along, like this (self.foregroundDrawView is the black&white UIImageView):
UIGraphicsBeginImageContext(self.foregroundDrawView.frame.size);
[self.foregroundDrawView.image drawInRect:CGRectMake(0, 0, self.foregroundDrawView.frame.size.width, self.foregroundDrawView.frame.size.height)];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(context, kCGBlendModeClear);
CGContextSetAllowsAntialiasing(context, TRUE);
CGContextSetLineWidth(context, 85);
CGContextSetLineCap(context, kCGLineCapRound);
CGContextSetRGBStrokeColor(context, 1, 0, 0, 1.0);
// Soft edge ... 5.0 works ok, but we try some more
CGContextSetShadowWithColor(context, CGSizeMake(0.0, 0.0), 13.0, [UIColor redColor].CGColor);
CGContextBeginPath(context);
CGContextMoveToPoint(context, touchLocation.x, touchLocation.y);
CGContextAddLineToPoint(context, currentLocation.x, currentLocation.y);
CGContextStrokePath(context);
self.foregroundDrawView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The important part is CGContextSetBlendMode(context, kCGBlendModeClear);. This erases the traced part from the image; afterwards, the image is set back as the image of the foreground image view.
When the user is done, you should be able to combine the two images or use the black&white image as a mask.
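Combining them at the end can be as simple as drawing the color image first and the partially erased black&white image on top of it (a rough sketch; self.backgroundImageView is an assumed name for the color image view):

// Rough sketch of the final combine step
UIGraphicsBeginImageContextWithOptions(self.backgroundImageView.image.size, NO, 0.0);
CGRect fullRect = CGRectMake(0, 0,
                             self.backgroundImageView.image.size.width,
                             self.backgroundImageView.image.size.height);
[self.backgroundImageView.image drawInRect:fullRect];  // color layer underneath
[self.foregroundDrawView.image drawInRect:fullRect];   // black&white layer with the erased holes
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();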
I'm new to developing for iOS.
I have a problem drawing with Core Graphics/UIKit.
I want to implement a shape-drawing function like the one in Paint on Windows.
I am using this source: https://github.com/JagCesar/Simple-Paint-App-iOS, and adding a new function.
In touchesMoved, I draw a shape based on the point recorded in touchesBegan and the current touch point. The problem is that it ends up drawing all of the intermediate shapes.
- (void)drawInRectModeAtPoint:(CGPoint)currentPoint
{
    UIGraphicsBeginImageContext(self.imageViewDrawing.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.currentColor setFill];

    // Redraw the existing image, then fill a triangle spanned by the
    // begin point and the current touch point
    [self.imageViewDrawing.image drawInRect:CGRectMake(0, 0, self.imageViewDrawing.frame.size.width, self.imageViewDrawing.frame.size.height)];
    CGContextMoveToPoint(context, self.beginPoint.x, self.beginPoint.y);
    CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
    CGContextAddLineToPoint(context, self.beginPoint.x * 2 - currentPoint.x, currentPoint.y);
    CGContextAddLineToPoint(context, self.beginPoint.x, self.beginPoint.y);
    CGContextFillPath(context);

    self.currentImage = UIGraphicsGetImageFromCurrentImageContext();
    self.imageViewDrawing.image = self.currentImage;
    UIGraphicsEndImageContext();
}
To clarify: I want to create only one shape. In touchesBegan the app records the point, in touchesMoved the shape is scaled by the touch, and only in touchesEnded is the shape drawn into the image context.
I hope you can give me some tips on how to do that.
Thank you.
You probably want to extract this functionality away from the context. As you are using an image, use an image view. At the start of the touches, create the image and the image view. Set the image view frame to the touch point with a size of {1, 1}. As the touch moves, move / scale the image view by changing its frame. When the touches end, use the start and end points to render the image into the context (which should be the same as the final frame of the image view).
Doing it this way means you don't add anything to the context which would need to be removed again when the next touch update is received. The above method would work similarly with a CALayer instead of an image view. You could also look at a solution using a transform on the view.
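A rough sketch of that flow, under the assumption that the shape is rendered once into shapeImage and previewed in shapeImageView (both names are hypothetical, not from the linked project):

// Preview the shape with an image view while dragging, commit it only on touch end
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    startPoint = [[touches anyObject] locationInView:self];
    shapeImageView = [[UIImageView alloc] initWithFrame:CGRectMake(startPoint.x, startPoint.y, 1, 1)];
    shapeImageView.image = shapeImage;   // pre-rendered shape image (hypothetical)
    [self addSubview:shapeImageView];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // Only the frame changes while dragging; nothing is baked into the context yet
    CGPoint p = [[touches anyObject] locationInView:self];
    shapeImageView.frame = CGRectMake(MIN(startPoint.x, p.x), MIN(startPoint.y, p.y),
                                      fabs(p.x - startPoint.x), fabs(p.y - startPoint.y));
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Commit: render the shape into the backing image at the final frame,
    // then remove the preview view ([self drawShapeInRect:] is a hypothetical helper)
    [self drawShapeInRect:shapeImageView.frame];
    [shapeImageView removeFromSuperview];
    shapeImageView = nil;
}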
I'm trying to zoom and translate an image on the screen.
Here's my drawRect:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextSetShouldAntialias(context, NO);
    CGContextScaleCTM(context, senderScale, senderScale);
    [self.image drawAtPoint:CGPointMake(imgposx, imgposy)];
    CGContextRestoreGState(context);
}
When senderScale is 1.0, moving the image (imgposx/imgposy) is very smooth. But if senderScale has any other value, performance takes a big hit and the image stutters when I move it.
The image I am drawing is a UIImage object. I create it with
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
and draw a simple UIBezierPath (stroked) into it:
self.image = UIGraphicsGetImageFromCurrentImageContext();
Am I doing something wrong? Turning off the anti-aliasing did not improve things much.
Edit:
I tried this:
rectImage = CGRectMake(0, 0, self.frame.size.width * senderScale, self.frame.size.height * senderScale);
[image drawInRect:rectImage];
but it was just as slow as the other method.
If you want this to perform well, you should let the GPU do the heavy lifting by using Core Animation instead of drawing the image in your -drawRect: method. Try creating a view and doing:
myView.layer.contents = self.image.CGImage;
Then zoom and translate it by manipulating the UIView relative to its superview. If you draw the image in -drawRect: you're making it do the hard work of blitting the image for every frame. Doing it via Core Animation only blits once, and then subsequently lets the GPU zoom and translate the layer.
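For example, a rough sketch of that approach (imageHostView is an assumed name for a plain UIView in your hierarchy; imgposx, imgposy, and senderScale are the values from the question):

// Host the image in a layer once; Core Animation handles the compositing
imageHostView.layer.contents = (__bridge id)self.image.CGImage;

// Zoom and translate by transforming the view instead of redrawing in drawRect:
CGAffineTransform t = CGAffineTransformMakeTranslation(imgposx, imgposy);
t = CGAffineTransformScale(t, senderScale, senderScale);
imageHostView.transform = t;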
I need to draw a line chart from values that come in every half second. I've come up with a custom CALayer for this graph which stores all the previous lines and, every two seconds, redraws all previous lines and adds one new line. I find this solution suboptimal because only one additional line needs to be drawn to the layer; there is no reason to redraw potentially thousands of previous lines.
What do you think would be the best solution in this case?
Use your own bitmap context (CGBitmapContextCreate) or UIImage as a backing store. Whenever new data comes in, draw to this context and set your layer's contents property to the context's image.
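A minimal sketch of that idea, assuming a persistent CGContextRef ivar called backingContext and a CALayer ivar called graphLayer (both names are assumptions):

// Draw only the newest segment into a persistent bitmap context,
// then hand the bitmap to the layer; nothing older is ever redrawn.
- (void)appendSegmentFromPoint:(CGPoint)from toPoint:(CGPoint)to {
    if (backingContext == NULL) {
        size_t width  = (size_t)graphLayer.bounds.size.width;
        size_t height = (size_t)graphLayer.bounds.size.height;
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        backingContext = CGBitmapContextCreate(NULL, width, height, 8, 0, space,
                                               kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);
    }

    CGContextSetStrokeColorWithColor(backingContext, [UIColor blueColor].CGColor);
    CGContextSetLineWidth(backingContext, 2.0);
    CGContextMoveToPoint(backingContext, from.x, from.y);
    CGContextAddLineToPoint(backingContext, to.x, to.y);
    CGContextStrokePath(backingContext);

    // The layer retains the snapshot, so we can release our reference right away
    CGImageRef snapshot = CGBitmapContextCreateImage(backingContext);
    graphLayer.contents = (__bridge id)snapshot;
    CGImageRelease(snapshot);
}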
I am looking at an identical implementation. The graph updates every 500 ms. Similarly, I felt uncomfortable drawing the entire graph each iteration. I implemented a solution 'similar' to what Nikolai Ruhe proposed, as follows:
First some declarations:
#define TIME_INCREMENT 10
@property (nonatomic) UIImage *lastSnapshotOfPlot;
and then the drawLayer:inContext: method of my CALayer delegate:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    // Restore the image of the layer from the last time through, if it exists
    if (self.lastSnapshotOfPlot)
    {
        // For some reason the image is being redrawn upside down!
        // This block of code adjusts the context to correct it.
        CGContextSaveGState(ctx);
        CGContextTranslateCTM(ctx, 0, layer.bounds.size.height);
        CGContextScaleCTM(ctx, 1.0, -1.0);
        // Now we can redraw the image right side up but shifted over a little bit
        // to allow space for the new data
        CGRect r = CGRectMake(-TIME_INCREMENT, 0, layer.bounds.size.width, layer.bounds.size.height);
        CGContextDrawImage(ctx, r, self.lastSnapshotOfPlot.CGImage);
        // And finally put the context back the way it was
        CGContextRestoreGState(ctx);
    }

    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextBeginPath(ctx);

    // This next section is where I draw the line segment on the extreme right end
    // which matches up with the stored graph on the image. This part of the code
    // is application specific and I have only left it here for
    // conceptual reference. Basically I draw a tiny line segment
    // from the last value to the new value at the extreme right end of the graph.
    CGFloat ppy = layer.bounds.size.height - _lastValue / _displayRange * layer.bounds.size.height;
    CGFloat cpy = layer.bounds.size.height - self.sensorData.currentvalue / _displayRange * layer.bounds.size.height;
    CGContextMoveToPoint(ctx, layer.bounds.size.width - TIME_INCREMENT, ppy); // Move to the previous point
    CGContextAddLineToPoint(ctx, layer.bounds.size.width, cpy);               // Draw to the latest point
    CGContextStrokePath(ctx);

    // Finally save the entire current layer to an image. This will include our latest
    // drawn line segment
    UIGraphicsBeginImageContext(layer.bounds.size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    self.lastSnapshotOfPlot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Is this the most efficient way?
I have not been programming in Objective-C long enough to know, so all suggestions and improvements are welcome.
I'm using this code to colorize some images of a UIButton subclass:
UIImage *img = [self imageForState:controlState];
// begin a new image context, to draw our colored image onto
UIGraphicsBeginImageContextWithOptions(img.size, NO, 0.0f);
// get a reference to that context we created
CGContextRef context = UIGraphicsGetCurrentContext();
// set the fill color
[self.buttonColor setFill];
CGContextSetAllowsAntialiasing(context, true);
CGContextSetShouldAntialias(context, true);
// translate/flip the graphics context (for transforming from CG* coords to UI* coords)
CGContextTranslateCTM(context, 0, img.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// set the blend mode to screen, then draw the original image
CGContextSetBlendMode(context, kCGBlendModeScreen);
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
CGContextDrawImage(context, rect, img.CGImage);
// set a mask that matches the shape of the image, then draw the colored image
CGContextClipToMask(context, rect, img.CGImage);
CGContextAddRect(context, rect);
CGContextDrawPath(context,kCGPathFill);
// generate a new UIImage from the graphics context we drew onto
UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//return the colored image
[self setImage:coloredImg forState:controlState];
But the images come out with rough edges. I've tried using the screen, lighten, and plusLighter blend modes, because some of the images have white parts that I want to stay white; the only parts I want colorized are the black areas. I've attached the original button images and the colorized results. I can't get the edges to look good. When I used white images colorized with the multiply blend mode, it looked much better, but I want to use black so I can use one method for colorizing images both with and without white in them. I tried enabling anti-aliasing, but that didn't help either; it looks like it simply isn't anti-aliasing. I haven't worked with Core Graphics enough to know what's going on.
EDIT
Here's what the original PNGs look like:
and here's what it should look like:
and here's what it does look like:
The size is different, but you can see the bad quality around the edges.
Maybe your original icons (PNGs?) are just "too sharp"? Could you show us? You draw the image at its original size without resizing, so the problem could be there right from the start.
I'm not sure what you are trying to accomplish here. Are you trying to round the edges of the images? If so, you are better off changing the corner radius property of the UIButton's layer. Since UIButton is a subclass of UIView, you can access its layer property, change the edge (border) color, and round its corners.
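For instance, a quick sketch of the layer-based approach (independent of the poster's colorizing code; requires QuartzCore, i.e. #import <QuartzCore/QuartzCore.h>):

// Round the corners and add a border via the layer instead of redrawing the image
button.layer.cornerRadius = 8.0;
button.layer.borderWidth = 1.0;
button.layer.borderColor = [UIColor darkGrayColor].CGColor;
button.layer.masksToBounds = YES;   // clip the button's content to the rounded shape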