I've subclassed UIView and overridden the touchesBegan, touchesMoved, touchesEnded, and drawRect methods to create an app that lets the user draw by touching the screen. I'm using Quartz 2D to do the drawing.
In touchesBegan, touchesMoved, and touchesEnded, I keep track of the current and previous locations of the touch event. Each time the touch moves and when the touch ends, I call setNeedsDisplayInRect using the smallest rectangle that contains the previous and current location of the touch in order to preserve underlying drawings. (I want all drawings to add on to one another like layers.)
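That smallest-containing-rectangle computation can be sketched as follows (a minimal sketch; padding by half the stroke width is my own addition, so the line's edges aren't clipped at the rect boundary):

```swift
import Foundation

// Smallest rectangle containing the previous and current touch points,
// expanded by half the stroke width so the drawn line isn't clipped.
func dirtyRect(from previous: CGPoint, to current: CGPoint, strokeWidth: CGFloat) -> CGRect {
    let rect = CGRect(x: min(previous.x, current.x),
                      y: min(previous.y, current.y),
                      width: abs(current.x - previous.x),
                      height: abs(current.y - previous.y))
    // A negative inset grows the rect on all four sides.
    return rect.insetBy(dx: -strokeWidth / 2, dy: -strokeWidth / 2)
}
```

You would pass the result straight to setNeedsDisplayInRect: from touchesMoved and touchesEnded.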
I've noticed a strange artifact: when a new drawing overlaps an existing one, Quartz redraws the rectangle passed to setNeedsDisplayInRect but erases everything beneath that rectangle.
I suspect the issue is the blend mode; however, I experimented with a number of different blend modes and none seemed to do the trick.
How do I draw a path that preserves the underlying content?
Quartz is not a canvas model; it does not keep track of what was drawn in previous passes. Every time drawRect: is called, it is your responsibility to deal with every pixel in the rectangle you are passed. By default (controlled by UIView's clearsContextBeforeDrawing property), Quartz will clear the rectangle for you, but it's your job to draw the content every time.
If you want to layer things, you either need to put each thing in its own CALayer or UIView, or you need to redraw any overlaps each time drawRect: is called.
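If you take the second route, the idea is to keep a model of every stroke and, in drawRect:, replay each stroke whose bounds intersect the dirty rect. The model side of that can be sketched framework-free (the Stroke type and function names are illustrative, not from the question):

```swift
import Foundation

// A stroke is just the points the user dragged through, plus a line width.
struct Stroke {
    let points: [CGPoint]
    let width: CGFloat

    // Bounding box of the stroke, padded by half the line width.
    var bounds: CGRect {
        guard let first = points.first else { return .zero }
        var rect = CGRect(origin: first, size: .zero)
        for p in points.dropFirst() {
            rect = rect.union(CGRect(origin: p, size: .zero))
        }
        return rect.insetBy(dx: -width / 2, dy: -width / 2)
    }
}

// In drawRect:, you would stroke only the strokes that touch the dirty rect.
func strokesToRedraw(_ strokes: [Stroke], dirtyRect: CGRect) -> [Stroke] {
    return strokes.filter { $0.bounds.intersects(dirtyRect) }
}
```

Filtering by bounds keeps the per-frame cost proportional to what actually overlaps the invalidated area rather than to the total number of strokes.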
Related
I'm working on a drawing app where the user should be able both to fill locked areas and to simply draw lines as the finger moves.
The locked areas are provided as SVGs (paths), so I'm using the SVGKit library to render them on screen (as CAShapeLayers within a view). I then set fillColor on the appropriate layer to fill it on touch.
For line drawing, however, Core Graphics comes into play (CGContextStrokePath), and the lines are always drawn below everything contained in the CALayer hierarchy, so below the filled areas.
What I'm trying to achieve is a system where the most recently applied drawing is always on top: applying a fill overrides any lines in the area, and a line drawn afterwards shows above the filled zone.
It seems that a CGLayer's z-index is below any CALayer's, so I need some other approach for my goal...
CAShapeLayer is designed to hold CGPath instances and render them, so I'd add a CAShapeLayer at the point in your layer hierarchy where you want it to appear and modify the path property on that.
I'm building a medical app which, at some point, enables the user to draw. Because the drawing is minimal at most points, I'd been using vector graphics, but there is a case where a full-screen note must be drawn.
What's the most appropriate way to handle touchesMoved?
The point is basically to have antialiased lines drawn so the user can see them while drawing. But should I call [UIView setNeedsDisplay] each time touchesMoved is called, redrawing the entire full-screen drawing?
Or should I keep the untouched UIImage in a cache, draw onto a new UIImage while the user is drawing, and merge the two when touchesEnded is called?
How should an "eraser" be implemented then? (The background is not white; it may contain a body PNG, in which case simply overwriting with white won't work.)
Thanks a lot.
I'm using a tiled layer in a scroll view to display a custom view. The contents of the custom view change periodically, and I know the rectangle in which those changes occur. However, I have found that if I call setNeedsDisplay, only one tile gets redrawn. How can I tell the CATiledLayer to redraw only the affected tiles?
If you call setNeedsDisplay on the CATiledLayer, it redraws all of its tiles. If you use setNeedsDisplayInRect: instead, it should redraw only the tiles that intersect the rectangle you specify. Note, however, that it will redraw each whole tile, not just the part that intersects the rectangle.
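To see which tiles a given rectangle touches, you can compute the tile-grid coverage yourself. This is a framework-free sketch of the arithmetic CATiledLayer effectively performs; the function name and a 256-point tile size are illustrative assumptions:

```swift
import Foundation

// Indices (col, row) of every tile that intersects `rect`,
// for a layer divided into tiles of `tileSize`.
func tilesIntersecting(_ rect: CGRect, tileSize: CGSize) -> [(col: Int, row: Int)] {
    let firstCol = Int((rect.minX / tileSize.width).rounded(.down))
    let lastCol  = Int((rect.maxX / tileSize.width).rounded(.up)) - 1
    let firstRow = Int((rect.minY / tileSize.height).rounded(.down))
    let lastRow  = Int((rect.maxY / tileSize.height).rounded(.up)) - 1
    guard lastCol >= firstCol, lastRow >= firstRow else { return [] }
    var tiles: [(col: Int, row: Int)] = []
    for row in firstRow...lastRow {
        for col in firstCol...lastCol {
            tiles.append((col: col, row: row))
        }
    }
    return tiles
}
```

A rect that straddles a tile boundary covers two tiles, which is why the whole of each of those tiles gets redrawn even for a small invalidated rectangle.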
Note also the redraw bug in CATiledLayer that occurs when you call setNeedsDisplay while it is in the process of drawing. More on that can be found in this question.
I have been trying for a long time but haven't found a solution yet. I have to implement painting and erasing on iOS. I successfully implemented the painting logic using UIBezierPath. The problem is erasing: I implemented the same logic as for painting, using kCGBlendModeClear, but I can't redraw over the erased area, because on each pass through drawRect: I have to stroke both the painting and the erasing paths. Is there any way to subtract the erasing path from the drawing path and stroke the resulting path? I am very new to Core Graphics and look forward to your replies and comments, or to any other way to implement this. I can't use the background color as an eraser because my background is textured.
You don't need to stroke the path every time; in fact, doing so is a huge performance hit. I guarantee that if you try it on an iPad 3, you will be met with a nearly unresponsive screen after a few strokes. You only need to add and stroke the path once; after that, it is stored as pixel data. So don't keep track of your strokes: just add them, stroke them, and get rid of them. Also look into using a CGLayer (you can draw to it outside the main loop and only render it into your rect in the main loop, which saves a lot of time).
These are the steps that I use, and I am doing the exact same thing (I use a CGPath instead of UIBezierPath, but the idea is the same):
1) In touchesBegan, store the touch point and set the context to either erase or draw, depending on what the user has selected.
2) In touchesMoved, if the point is a certain arbitrary distance away from the last point, move to the last point (CGContextMoveToPoint) and draw a line to the new point (CGContextAddLineToPoint) in my CGLayer. Calculate the rectangle that changed (i.e., the one containing the two points) and call setNeedsDisplayInRect: with that rectangle.
3) In drawRect:, render the CGLayer into the current window context (UIGraphicsGetCurrentContext()).
On an iPad 3 (the device that gives everyone the most trouble due to its enormous pixel count), this process takes between 0.05 ms and 0.15 ms per render (depending on how fast you swipe). There is one caveat, though: if you don't take the proper precautions, the entire frame rectangle will be redrawn even if you only use setNeedsDisplayInRect:. My hacky way to combat this (thanks to the dev forums) is described in my self-answered question here. Otherwise, if your view takes a long time to draw the entire frame (mine took an unacceptable 150 ms), you will get a short stutter under certain conditions while the view buffer is recreated.
EDIT: With the new info from your comments, it seems that the answer to this question will benefit you: Use a CoreGraphic Stroke as Alpha Mask in iPhone App
Hi, here is the code for painting, erasing, undo, redo, and saving as a picture. You can check the sample code and implement it in your project.
Here
I'm working on an app that lets the user stack graphics on top of each other.
The graphics are instantiated as UIImageViews and are transparent outside the actual graphic. I'm also using pan gestures to let the user drag them around the screen.
So when you have a bunch of graphics of different sizes and shapes on top of one another, you may have the illusion that you are touching a lower view, but you're actually touching the top one, because some transparent part of it is hovering over your touch point.
I was wondering if anyone had ideas on how to listen for the pan gesture only on the solid part of the image view, or anything else that would tighten up the user experience so that whatever the user touches is what gets selected. Thanks
Create your own subclass of UIImageView. In your subclass, override the pointInside:withEvent: method to return NO if the point is in a transparent part of the image.
Of course, you need to determine if a point is in a transparent part. :)
If you happen to have a CGPath or UIBezierPath that outlines the opaque parts of your image, you can do it easily using CGPathContainsPoint or -[UIBezierPath containsPoint:].
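For a polygonal outline, the containment test itself is simple enough to sketch without CoreGraphics. This is a ray-casting point-in-polygon check, a framework-free stand-in for CGPathContainsPoint (the function name and polygon representation are assumptions, and it only handles simple polygons):

```swift
import Foundation

// Ray casting: count how many polygon edges a horizontal ray from `p`
// crosses; an odd count means the point is inside.
func polygon(_ vertices: [CGPoint], contains p: CGPoint) -> Bool {
    var inside = false
    var j = vertices.count - 1
    for i in 0..<vertices.count {
        let a = vertices[i], b = vertices[j]
        // Edge a-b straddles p's y, and the crossing lies to the right of p.
        if (a.y > p.y) != (b.y > p.y),
           p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x {
            inside.toggle()
        }
        j = i
    }
    return inside
}
```

In a pointInside:withEvent: override, you'd convert the touch point into the image's coordinate space first and then run a test like this against the outline.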
If you don't have a handy path, you will have to examine the image's pixel data. There are many answers on stackoverflow.com already that explain how to do that. Search for "get pixel CGImage" or "get pixel UIImage".
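Once you have the image's raw bytes (the answers mentioned above cover extracting them from a CGImage), the hit test reduces to an alpha lookup. A minimal sketch, assuming 8-bit RGBA with no row padding; the function name and the alpha threshold are arbitrary choices:

```swift
import Foundation

// True if the pixel at (x, y) is effectively opaque, given raw 8-bit RGBA
// data (4 bytes per pixel, rows packed with no padding).
func isOpaque(x: Int, y: Int, width: Int, rgba: [UInt8], threshold: UInt8 = 10) -> Bool {
    let alphaIndex = (y * width + x) * 4 + 3   // alpha is the 4th channel
    guard alphaIndex < rgba.count else { return false }
    return rgba[alphaIndex] > threshold
}
```

Your pointInside:withEvent: override would return the result of a check like this, so touches on transparent pixels fall through to the views underneath.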