iOS: draw an image in a view

I have this code to color in a view:
UITouch *touch = [touches anyObject];
CGPoint currentPoint = [touch locationInView:drawImage];
UIGraphicsBeginImageContext(drawImage.frame.size);
[drawImage.image drawInRect:CGRectMake(0, 0, drawImage.frame.size.width, drawImage.frame.size.height)];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), size);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), r, g, b, a);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
My problem is that I don't want to color with a plain point/stroke; instead I want to paint with a particular image that is repeated (a PNG image).
Is it possible?

It's easy to load a single UIImage and draw it:
UIImage *brushImage = [UIImage imageNamed:@"brush.png"];
[brushImage drawAtPoint:CGPointMake(currentPoint.x-brushImage.size.width/2, currentPoint.y-brushImage.size.height/2)];
This will draw the image just once per cycle, not a continuous line. If you want solid lines of your brush picture, see Objective C: Using UIImage for Stroking.
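For instance, a rough sketch of stamping the brush along the segment between lastPoint and currentPoint, inside the same image context as the question's code (the spacing value and the loop are my own guesses, not code from the linked answer):
// Sketch: stamp the brush at interpolated points so the stroke looks continuous.
CGFloat dx = currentPoint.x - lastPoint.x;
CGFloat dy = currentPoint.y - lastPoint.y;
CGFloat distance = sqrt(dx * dx + dy * dy);
CGFloat spacing = MAX(1.0, brushImage.size.width / 4.0); // assumed spacing between stamps, tune to taste
NSInteger steps = MAX(1, (NSInteger)ceil(distance / spacing));
for (NSInteger i = 0; i <= steps; i++) {
    CGFloat t = (CGFloat)i / (CGFloat)steps;
    CGPoint p = CGPointMake(lastPoint.x + dx * t, lastPoint.y + dy * t);
    [brushImage drawAtPoint:CGPointMake(p.x - brushImage.size.width / 2, p.y - brushImage.size.height / 2)];
}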
This could end up loading the image file every time this method is called, making it slow. While the results of [UIImage imageNamed:] are often cached, my code above could be improved by storing the brush for later reuse.
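One simple way to do that, as a sketch (brushImage here is an assumed ivar or property, not something from the question):
// Sketch: load the brush once and reuse it on every touch.
if (brushImage == nil) {
    brushImage = [UIImage imageNamed:@"brush.png"];
}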
Speaking of performance, test this on older devices. As written, it worked on my second generation iPod touch, but any additions could make it stutter on second and third generation devices. Apple's GLPaint example, recommended by @Armaan, uses OpenGL for fast drawing, but much more code is involved.
Also, you seem to be doing your interaction (touchesBegan:, touchesMoved:, touchesEnded:) in a view that then draws its contents to drawImage, presumably a UIImageView. It is possible to store the painting image, then set this view's layer contents to be that image. This will not alter performance, but the code will be cleaner. If you continue using drawImage, access it through a property instead of using its ivar.
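A sketch of that layer-contents idea, assuming a paintingImage UIImage property that holds the accumulated drawing (QuartzCore is needed for direct layer access):
// Sketch: show the accumulated painting via the view's layer instead of a UIImageView.
self.layer.contents = (__bridge id)self.paintingImage.CGImage; // __bridge cast needed under ARC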

You can start with Apple's sample code: GLPaint.

Related

Drawing line on iPhone X

I have drawing functionality in my app, over a photo. It's working on every device except iPhone X. On iPhone X the lines fade and move upwards with each finger movement. The upper 10-20 percent area of the view works fine. Following is the code to draw a line.
- (void)drawLineNew{
UIGraphicsBeginImageContext(self.bounds.size);
[self.viewImage drawInRect:self.bounds];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetStrokeColorWithColor(UIGraphicsGetCurrentContext(), self.selectedColor.CGColor);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), _lineWidth);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), previousPoint.x, previousPoint.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
self.viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setNeedsDisplay];
}
Following is the sample drawing screenshot
After hours of trial and error I made it work. The code I wrote is correct and should be working. The only issue was the frame size of the drawing view (self).
The height of the view was a fractional float, and that was making the drawing drift upwards with each stroke. I applied the lroundf function to it and it's working.
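For what it's worth, a sketch of the kind of rounding that fixed it, with drawingView as a stand-in name for the view whose frame had fractional dimensions:
// Sketch: snap the drawing view's size to whole points before drawing into it.
CGRect frame = drawingView.frame;
frame.size.width = lroundf(frame.size.width);
frame.size.height = lroundf(frame.size.height);
drawingView.frame = frame;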
Happy coding.

iOS paint features to my app

I'm trying to add a drawing feature to my app. I have two UIImageViews... the bottom one contains a picture, let's say a photograph, and the second one on top of it is the one I want to paint on.
- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
UIView *tappedView = [gesture.view hitTest:[gesture locationInView:gesture.view] withEvent:nil];
CGPoint currentPoint = [gesture locationInView:_paintOverlay];
UIGraphicsBeginImageContext(_paintOverlay.frame.size);
[_paintOverlay.image drawInRect:CGRectMake(0, 0, _paintOverlay.frame.size.width, _paintOverlay.frame.size.height)];
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), 5, 5);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), brush );
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), red, green, blue, 1.0);
CGContextSetBlendMode(UIGraphicsGetCurrentContext(),kCGBlendModeNormal);
CGContextStrokePath(UIGraphicsGetCurrentContext());
_paintOverlay.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSLog(@"Touch event on view: %@", [tappedView class]);
}
This simply isn't working. I can't find any tutorials to help me with this; the one I did find (which this code is derived from) wasn't very understandable.
Your code, such as it is, works well enough. I didn't have an image so I omitted that, but by supplying values for brush and so forth I found that tapping on the view made a line appear.
It's a terrible way to paint (paint by tapping? only making lines coming from a single point???), but it does make lines.
However, I naturally configured my tap gesture recognizer and views correctly. You don't say what you did, so who knows? Did you hook the tap gesture recognizer to its action handler? Did you add it to the view? Did you remember to turn on the view's userInteractionEnabled? Did you tap as your gesture? A lot of things can go wrong; you need to debug, see what's happening, and tell us more about it.
Try the touches methods (touchesBegan:, touchesMoved:, touchesEnded:) instead of a tap gesture; since the user moves a finger around to draw, touches are a better fit.
Refer to the sample at: https://www.raywenderlich.com/18840/how-to-make-a-simple-drawing-app-with-uikit
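In the meantime, a rough sketch of the touches-based version (lastPoint is an assumed ivar, and drawLineFrom:to: is a hypothetical helper wrapping the Core Graphics code from the question):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    lastPoint = [[touches anyObject] locationInView:_paintOverlay];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint currentPoint = [[touches anyObject] locationInView:_paintOverlay];
    [self drawLineFrom:lastPoint to:currentPoint]; // hypothetical helper containing the stroke code above
    lastPoint = currentPoint;
}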

UIImage masking with gesture

I'm trying to achieve a selective-color feature in iOS. My idea is to first draw a shape using a finger gesture and convert that into a mask, but at the same time it should be real time: it should work as I move my finger across the grayscale image. Can anyone direct me down the correct path?
Sample app : https://itunes.apple.com/us/app/color-splash/id304871603?mt=8
Thanks.
You can position two UIImageViews over each other, the color version in the background and the black&white version in the foreground.
Then you can use touchesBegan, touchesMoved and so on events to track user input. In touches moved you can "erase" a path that the user moved the finger along like this (self.foregroundDrawView is the black&white UIImageView):
UIGraphicsBeginImageContext(self.foregroundDrawView.frame.size);
[self.foregroundDrawView.image drawInRect:CGRectMake(0, 0, self.foregroundDrawView.frame.size.width, self.foregroundDrawView.frame.size.height)];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(context, kCGBlendModeClear);
CGContextSetAllowsAntialiasing(context, TRUE);
CGContextSetLineWidth(context, 85);
CGContextSetLineCap(context, kCGLineCapRound);
CGContextSetRGBStrokeColor(context, 1, 0, 0, 1.0);
// Soft edge ... 5.0 works ok, but we try some more
CGContextSetShadowWithColor(context, CGSizeMake(0.0, 0.0), 13.0, [UIColor redColor].CGColor);
CGContextBeginPath(context);
CGContextMoveToPoint(context, touchLocation.x, touchLocation.y);
CGContextAddLineToPoint(context, currentLocation.x, currentLocation.y);
CGContextStrokePath(context);
self.foregroundDrawView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The important part is CGContextSetBlendMode(context, kCGBlendModeClear). This erases the traced part from the image; afterwards the image is set back as the image of the foreground image view.
When the user is done you should be able to combine the two images or use the black&white image as a mask.
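Combining them at the end could look roughly like this (backgroundColorView is an assumed name for the colour UIImageView underneath):
// Sketch: flatten the colour background and the partially erased foreground into one image.
UIGraphicsBeginImageContextWithOptions(self.backgroundColorView.bounds.size, NO, 0.0);
[self.backgroundColorView.image drawInRect:self.backgroundColorView.bounds];
[self.foregroundDrawView.image drawInRect:self.foregroundDrawView.bounds];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();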

Masking two images

I want to do a selective masking between two images in iOS similar to the mask function in Blender. There are two images 1 and 2 (resized to same dimensions). Initially only image 1 will be visible but wherever user touches any area upon image1, it becomes transparent and image 2 becomes visible in those regions.
I created a mask-like image using Core Graphics with touch move. It is basically a fully black image with white portions wherever I touched. The alpha is set to 1.0 throughout. I can use this image as a mask and do the rest by implementing my own image-processing methods, which would iterate over each pixel, check it and set values accordingly. But this method would be called inside touchesMoved, so it might slow the entire process down (especially for 8 MP camera images).
I want to know how this can be achieved by using Quartz Core or Core Graphics which will be efficient enough to run in big images.
The code I have so far :
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
mouseSwiped = YES;
UITouch *touch = [touches anyObject];
CGPoint currentPoint = [touch locationInView:staticBG];
UIGraphicsBeginImageContext(staticBG.frame.size);
[maskView.image drawInRect:CGRectMake(0, 0, maskView.frame.size.width, maskView.frame.size.height)];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 20.0);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1.0, 1.0, 1.0, 0.0);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
maskView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
lastPoint = currentPoint;
mouseMoved++;
if (mouseMoved == 10)
mouseMoved = 0;
staticBG.image = [self maskImage:staticBG.image withMask:maskView.image];
//maskView.hidden = NO;
}
- (UIImage *)maskImage:(UIImage *)baseImage withMask:(UIImage *)maskImage
{
CGImageRef imgRef = [baseImage CGImage];
CGImageRef maskRef = [maskImage CGImage];
CGImageRef actualMask = CGImageMaskCreate(CGImageGetWidth(maskRef),
CGImageGetHeight(maskRef),
CGImageGetBitsPerComponent(maskRef),
CGImageGetBitsPerPixel(maskRef),
CGImageGetBytesPerRow(maskRef),
CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask(imgRef, actualMask);
UIImage *result = [UIImage imageWithCGImage:masked];
// Release the Core Graphics images created above so they are not leaked.
CGImageRelease(actualMask);
CGImageRelease(masked);
return result;
}
The maskImage method is not working as it creates a mask image depending upon alpha values.
I went through this link : Creating Mask from Path but I cannot understand the answer.
First of all I will mention a few things that I hope you know already:
masking works by taking only the alpha values.
creating an image with Core Graphics on every touchesMoved is a pretty huge overhead, and you should try to avoid it or find some other way of doing things.
try to use a static mask image.
I would like to propose that you look at this from an inverted point of view.
That is, instead of trying to make a hole in the top image to see the bottom one, why not place the bottom image on top and mask it so that it shows through at the user's touch points, covering up the top image at specific parts?
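One possible sketch of that inverted setup, using a CAShapeLayer mask on the image view that holds image 2 (image2View and revealPath are assumed names, and this is just one way to do it, not the code from the example linked below):
// Sketch: reveal image 2 only along the path the user has traced.
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = image2View.bounds;
maskLayer.lineWidth = 40.0; // assumed brush size
maskLayer.lineCap = kCALineCapRound;
maskLayer.strokeColor = [UIColor whiteColor].CGColor; // any opaque colour works for a mask
maskLayer.fillColor = nil;
maskLayer.path = revealPath.CGPath; // UIBezierPath accumulated from the touch points
image2View.layer.mask = maskLayer;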
I've done an example for you to get an idea over here > http://goo.gl/Zlu31T
Good luck & do post back if anything is not clear. I do believe there are much better and more optimised ways of doing this, though. "_"
Since you're doing this in real time, I suggest you fake it while editing; if you need to output the image later on, you can mask it for real, since that might take some time (not much, just not fast enough to do in real time). By faking I mean putting image1 as the background and the hidden image2 on top of it. Once the user touches a point, set the frame of the image2 UIImageView to
CGRect rect= CGRectMake(touch.x - desiredRect.size.width/2,
touch.y - desiredRect.size.height/2,
desiredRect.size.width,
desiredRect.size.height);
and make it visible.
desiredRect would be the portion of image2 that you want to show. Upon lifting the finger, you can just hide the image2 UIImageView so that image1 is fully visible again. It is the fastest way I can think of right now if your goal isn't to output the image at that very moment.
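A sketch of that idea wrapped in the touches methods, assuming this lives in a view controller and image2View is a hypothetical outlet for the hidden image 2 view:
// Sketch: show a window of image 2 under the finger, hide it again when the finger lifts.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint touch = [[touches anyObject] locationInView:self.view];
    self.image2View.frame = CGRectMake(touch.x - desiredRect.size.width/2,
                                       touch.y - desiredRect.size.height/2,
                                       desiredRect.size.width,
                                       desiredRect.size.height);
    self.image2View.hidden = NO;
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    self.image2View.hidden = YES; // image 1 is fully visible again
}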
Use this code; it will help with masking the two UIImages:
CGSize newSize = CGSizeMake(320, 377);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[ backGroundImageView.image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity
[self.drawImage.image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageData = UIImagePNGRepresentation(newImage);

Create Scalable Path CGContextPath

I'm new to developing on iOS.
I have a problem when drawing with Core Graphics/UIKit.
I want to implement a shape-drawing function like the one in Windows Paint.
I'm using this source: https://github.com/JagCesar/Simple-Paint-App-iOS, and adding a new function.
In touchesMoved I draw a shape based on the point recorded in touchesBegan and the current touch point. It draws all of the intermediate shapes.
- (void)drawInRectModeAtPoint:(CGPoint)currentPoint
{
UIGraphicsBeginImageContext(self.imageViewDrawing.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[self.currentColor setFill];
[self.imageViewDrawing.image drawInRect:CGRectMake(0, 0, self.imageViewDrawing.frame.size.width, self.imageViewDrawing.frame.size.height)];
CGContextMoveToPoint(context, self.beginPoint.x, self.beginPoint.y);
CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
CGContextAddLineToPoint(context, self.beginPoint.x * 2 - currentPoint.x, currentPoint.y);
CGContextAddLineToPoint(context, self.beginPoint.x, self.beginPoint.y);
CGContextFillPath(context);
self.currentImage = UIGraphicsGetImageFromCurrentImageContext();
self.imageViewDrawing.image = self.currentImage;
UIGraphicsEndImageContext();
}
What I mean is that I want to create only one shape: when touchesBegan fires, the app records the point; while touchesMoved fires, the shape is scaled by the touches; and when touchesEnded fires, the shape is drawn into the image context.
Hope you can give me some tips on how to do that.
Thank you.
You probably want to extract this functionality away from the context. As you are using an image, use an image view. At the start of the touches, create the image and the image view. Set the image view frame to the touch point with a size of {1, 1}. As the touch moves, move / scale the image view by changing its frame. When the touches end, use the start and end points to render the image into the context (which should be the same as the final frame of the image view).
Doing it this way means you don't add anything to the context which would need to be removed again when the next touch update is received. The above method would work similarly with a CALayer instead of an image view. You could also look at a solution using a transform on the view.
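A sketch of that image-view approach, assuming self is the canvas view handling the touches, self.beginPoint exists as in the question, and shapeImage/shapeView are hypothetical properties:
// Sketch: scale a temporary image view while the finger moves, render once at the end.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    self.beginPoint = [[touches anyObject] locationInView:self];
    self.shapeView = [[UIImageView alloc] initWithImage:self.shapeImage];
    self.shapeView.frame = CGRectMake(self.beginPoint.x, self.beginPoint.y, 1, 1);
    [self addSubview:self.shapeView];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    self.shapeView.frame = CGRectMake(MIN(self.beginPoint.x, p.x),
                                      MIN(self.beginPoint.y, p.y),
                                      fabs(p.x - self.beginPoint.x),
                                      fabs(p.y - self.beginPoint.y));
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Draw the final shape into the image context once, then drop the temporary view.
    [self drawInRectModeAtPoint:[[touches anyObject] locationInView:self]];
    [self.shapeView removeFromSuperview];
    self.shapeView = nil;
}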
