Adding iOS paint features to my app

I'm trying to add a drawing feature to my app. I have two UIImageViews... the bottom one contains a picture, let's say a photograph, and the second one on top of it is the one I want to paint on.
- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
    UIView *tappedView = [gesture.view hitTest:[gesture locationInView:gesture.view] withEvent:nil];
    CGPoint currentPoint = [gesture locationInView:_paintOverlay];

    UIGraphicsBeginImageContext(_paintOverlay.frame.size);
    [_paintOverlay.image drawInRect:CGRectMake(0, 0, _paintOverlay.frame.size.width, _paintOverlay.frame.size.height)];
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), 5, 5);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), brush);
    CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), red, green, blue, 1.0);
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeNormal);
    CGContextStrokePath(UIGraphicsGetCurrentContext());
    _paintOverlay.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSLog(@"Touch event on view: %@", [tappedView class]);
}
This simply isn't working. I can't find any tutorials to help me with this; the one I did find (which this code is derived from) wasn't very understandable.

Your code, such as it is, works well enough. I didn't have an image, so I omitted that; but after supplying values for brush and so forth, I found that tapping on the view made a line appear.
It's a terrible way to paint (paint by tapping? lines that all start from a single fixed point?), but it does make lines.
However, I naturally configured my tap gesture recognizer and views correctly. You don't say what you did, so who knows? Did you hook the tap gesture recognizer to its action handler? Did you add it to the view? Did you remember to turn on the view's userInteractionEnabled? Did you actually tap as your gesture? A lot of things can go wrong; you need to debug, see what's happening, and tell us more about it.

Try the touches methods instead of a tap gesture: since the user moves a finger around to draw, touches are the better fit.
Refer to the sample at: https://www.raywenderlich.com/18840/how-to-make-a-simple-drawing-app-with-uikit

Related

Drawing line on iPhone X

I have drawing functionality over a photo in my app. It works on every device except iPhone X. On iPhone X the lines fade and drift upwards with each finger movement; only the upper 10-20 percent of the view works fine. Following is the code that draws the line.
- (void)drawLineNew
{
    UIGraphicsBeginImageContext(self.bounds.size);
    [self.viewImage drawInRect:self.bounds];
    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    CGContextSetStrokeColorWithColor(UIGraphicsGetCurrentContext(), self.selectedColor.CGColor);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), _lineWidth);
    CGContextBeginPath(UIGraphicsGetCurrentContext());
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), previousPoint.x, previousPoint.y);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
    CGContextStrokePath(UIGraphicsGetCurrentContext());
    self.viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self setNeedsDisplay];
}
(A sample drawing screenshot followed here.)
After hours of trial and error I made it work. The code I wrote is correct and should work; the only issue was the frame size of the drawing view (self).
The view's height was a fractional floating-point value, which made the drawing drift upwards a little with every pass. I applied the lroundf function to the height and it works.
Happy coding.

Drawing geometry (circle or rect) on zoomable CGPDF document using QuartzCore

I am trying to draw on touches (mouse taps) on a PDF document rendered via a CGContextRef from my resource bundle. I started from Apple's ZoomingPDFViewer sample,
where I have a UIScrollView and the PDF is drawn by CGContextDrawPDFPage(context, pdfPage);
I have also added a CATiledLayer on the scroll view, which is redrawn in layoutSubviews each time the UIScrollView is zoomed. I am a bit confused whether I should draw the touch points into the scroll view's CGContext or into the tiled layers.
Moving ahead, I want to add a rect/circle/point to the PDF document where the user taps. Just to start with, I am drawing this into the current context:
CGContextMoveToPoint(context, 5, 5);
CGContextSetLineWidth(context, 12.0f);
CGContextSetStrokeColorWithColor(context, [[UIColor redColor] CGColor]);
CGContextAddRect(context, CGRectMake(50, 50, 20, 50));
CGContextAddLineToPoint(context, 25, 5);
CGContextStrokePath(context);
I plan to draw similarly when the user taps. But when I zoom and the CATiledLayer is redrawn to match the zoomed content, the drawn rects/circles disappear. If I am not wrong, the CATiledLayer is being drawn on top of the rects/circles in the current context. The end functionality is quite similar to the Maps app, where tile layers are added but dropped pins stay at exactly the same location even after the map is zoomed. I am quite lost after reading many related posts on drawing over a scroll view and PDF iOS viewers. Does anyone know how I can draw geometry (rects/points) on the PDF document and keep its location exactly the same when the PDF view zooms in and out? Should I convert the PDF into an image, or is there another way to do this?
After searching and trying some things with Core Animation and CALayer, I came up with this. If you want to add anything on a zoomable PDF viewer:
1) Add a UITapGestureRecognizer to handle single taps:
UITapGestureRecognizer *singleTapGestureRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleSingleTapOnScrollView:)];
singleTapGestureRecognizer.numberOfTapsRequired = 1;
singleTapGestureRecognizer.delegate = self;
[self addGestureRecognizer:singleTapGestureRecognizer];
[singleTapGestureRecognizer requireGestureRecognizerToFail:doubleTapGestureRecognizer]; // only needed if you also have a UITapGestureRecognizer for double taps
2) Simply add the new views (circle, rect, any geometry) as layers on the tiled PDF view:
[self.titledPdfView.layer addSublayer:view1.layer];
where you can subclass your geometry views to draw with the current CGContext. The layers are then repositioned automatically when the PDF is zoomed in/out.
I had been trying to add the views by drawing into the context in drawLayer:(CALayer *)layer inContext:(CGContextRef)context of the titledPdfView class, which gave a positional error.
You can also implement the touchesMoved: and touchesEnded: methods if you want to draw a resizable text view, arrow, or line.
I hope this helps someone who needs to draw custom objects on Apple's ZoomingPDFViewer.
Further to add (if anyone arrives here): replace the Apple code
TiledPdfView *titledPDFView = [[TiledPdfView alloc] initWithFrame:pageRect scale:pdfScale];
[titledPDFView setPage:pdfPage];
[self addSubview:titledPdfView];
[titledPDFView setNeedsDisplay];
[self bringSubviewToFront:titledPdfView];
self.titledPdfView = titledPDFView;
with:
titledPdfView = [[TiledPdfView alloc] initWithFrame:pageRect scale:pdfScale];
[titledPdfView setPage:pdfPage];
[self addSubview:titledPdfView];
[self bringSubviewToFront:titledPdfView];
I don't know why they added the view like that (assigning a local object to a class member), which prevents -(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context from being called before zooming. I tried setNeedsDisplay with no effect, then made this replacement and it works. This is especially annoying if you want to draw annotations on your tiled layer and nothing appears. Hope this helps!

Create Scalable Path CGContextPath

I'm new to iOS development.
I have a problem when drawing with Core Graphics/UIKit.
I want to implement a function like the shape tool in Windows Paint.
I am using this source: https://github.com/JagCesar/Simple-Paint-App-iOS, and adding a new function.
In touchesMoved I draw a shape based on the point where touchesBegan fired and the current touch point, redrawing the whole shape each time:
- (void)drawInRectModeAtPoint:(CGPoint)currentPoint
{
    UIGraphicsBeginImageContext(self.imageViewDrawing.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.currentColor setFill];
    [self.imageViewDrawing.image drawInRect:CGRectMake(0, 0, self.imageViewDrawing.frame.size.width, self.imageViewDrawing.frame.size.height)];
    CGContextMoveToPoint(context, self.beginPoint.x, self.beginPoint.y);
    CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
    CGContextAddLineToPoint(context, self.beginPoint.x * 2 - currentPoint.x, currentPoint.y);
    CGContextAddLineToPoint(context, self.beginPoint.x, self.beginPoint.y);
    CGContextFillPath(context);
    self.currentImage = UIGraphicsGetImageFromCurrentImageContext();
    self.imageViewDrawing.image = self.currentImage;
    UIGraphicsEndImageContext();
}
I mean, I want to create only one shape: when touchesBegan fires, the app records the point; while touchesMoved fires, the shape is scaled by the touches; and when touchesEnded fires, the shape is drawn into the image context.
Hope you can give me some tips on how to do that.
Thank you.
You probably want to extract this functionality away from the context. As you are using an image, use an image view. At the start of the touches, create the image and the image view. Set the image view frame to the touch point with a size of {1, 1}. As the touch moves, move / scale the image view by changing its frame. When the touches end, use the start and end points to render the image into the context (which should be the same as the final frame of the image view).
Doing it this way means you don't add anything to the context which would need to be removed again when the next touch update is received. The above method would work similarly with a CALayer instead of an image view. You could also look at a solution using a transform on the view.

How can I draw lines on big images using gesture recognizer without memory issues

I am working on an app where, at some point, the user needs to draw something over an image.
The code I wrote works just fine for images around 1500x1500 and smaller, but once the images get bigger, the problems start.
When images get too big, the drawing takes more time and the gesture recognizer fires less often.
Here's how I did it: there are two classes, a UIScrollView subclass called DrawView and a UIImageView subclass called MyPen. DrawView has a UIPanGestureRecognizer that sends messages to MyPen every time it is recognized (I get the [recognizer state] and, depending on it, start or move a line). DrawView has two UIImageView objects in its subviews, one for the background image and one for the drawings (the pen).
here's what I do in MyPen:
- (void)beginLine:(CGPoint)currentPoint
{
    previousPoint = currentPoint;
}

- (void)moveLine:(CGPoint)currentPoint
{
    self.image = [self drawLineFromPoint:previousPoint toPoint:currentPoint image:self.image];
    previousPoint = currentPoint;
}

- (UIImage *)drawLineFromPoint:(CGPoint)fromPoint toPoint:(CGPoint)toPoint image:(UIImage *)image
{
    CGSize screenSize = self.frame.size;
    UIGraphicsBeginImageContext(screenSize);
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    [image drawInRect:CGRectMake(0, 0, screenSize.width, screenSize.height)];
    CGContextSetLineCap(currentContext, kCGLineCapRound);
    CGContextSetLineWidth(currentContext, _thickness);
    CGContextSetStrokeColorWithColor(currentContext, _color);
    CGContextBeginPath(currentContext);
    CGContextMoveToPoint(currentContext, fromPoint.x, fromPoint.y);
    CGContextAddLineToPoint(currentContext, toPoint.x, toPoint.y);
    CGContextStrokePath(currentContext);
    UIImage *ret = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return ret;
}
When the image is big enough, Core Graphics takes some time to render it, so the gesture recognizer fires less often and sends fewer points to the pen.
The question is: is there a way to optimize the drawing? Should I use another thread for it? Is there another way around it?
Any help greatly appreciated.
You should split the foreground (the lines) and the background (the image the lines are drawn on) into two views. Then you only have to update the foreground view every time the user moves a finger, and you can compose the two views into one image when the user finishes drawing.
Drawing an image inside a CGContext is very expensive, especially when it is large.

iOS: draw an image in a view

I have this code to color in a view:
UITouch *touch = [touches anyObject];
CGPoint currentPoint = [touch locationInView:drawImage];

UIGraphicsBeginImageContext(drawImage.frame.size);
[drawImage.image drawInRect:CGRectMake(0, 0, drawImage.frame.size.width, drawImage.frame.size.height)];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), size);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), r, g, b, a);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
My problem is that I don't want to paint with a plain point; I want to use a particular image (a PNG) that is repeated along the stroke.
Is that possible?
It's easy to load a single UIImage and draw it:
UIImage *brushImage = [UIImage imageNamed:@"brush.png"];
[brushImage drawAtPoint:CGPointMake(currentPoint.x - brushImage.size.width / 2, currentPoint.y - brushImage.size.height / 2)];
This will draw the image just once per cycle, not as a continuous line. If you want solid lines made of your brush picture, see Objective C: Using UIImage for Stroking.
This could end up loading the image file every time the method is called, making it slow. While the results of [UIImage imageNamed:] are often cached, the code above could be improved by storing the brush for later reuse.
Speaking of performance, test this on older devices. As written, it worked on my second-generation iPod touch, but any additions could make it stutter on second- and third-generation devices. Apple's GLPaint example, recommended by @Armaan, uses OpenGL for fast drawing, but much more code is involved.
Also, you seem to be doing your interaction (touchesBegan:, touchesMoved:, touchesEnded:) in a view that then draws its contents into drawImage, presumably a UIImageView. It is possible to store the painting image and then set that view's layer contents to the image. This will not alter performance, but the code will be cleaner. If you continue using drawImage, access it through a property instead of using its ivar.
You can start with Apple's sample code: GLPaint.
