Drawing on UIView in iOS

I am developing an iPhone application in which I have a UIView, and I want to draw over a particular region of that view as the user drags a finger across it. Can anybody suggest how to implement the touch-move event handling so the region is drawn while the user's finger moves?

What you need to do is (a sketch follows this list):
1. Create a subclass of UIView.
2. Implement the touch-handling methods: touchesBegan:withEvent:, touchesMoved:withEvent:, and so on.
3. Use drawRect:, UIGraphicsBeginImageContextWithOptions, and UIBezierPath to draw based on the touch locations.
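A minimal sketch of those steps in Swift (the class name is illustrative; for simplicity it strokes the accumulated path directly in draw(_:) rather than caching an image):

```swift
import UIKit

// Minimal sketch (class name is illustrative): accumulate touch locations
// into a UIBezierPath and stroke it whenever the view redraws.
class DrawingView: UIView {
    private let path = UIBezierPath()

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        path.move(to: point)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        path.addLine(to: point)
        setNeedsDisplay()  // redraw so the stroke follows the finger
    }

    override func draw(_ rect: CGRect) {
        UIColor.black.setStroke()
        path.lineWidth = 3
        path.stroke()
    }
}
```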
There are many tutorials on the internet. I find this one quite good: Advanced Freehand Drawing Techniques

Related

Swift: Drawing a UIBezierPath based on touch in a UIView

I've been looking at this thread as I'm trying to implement the same thing. However, I see that the Canvas class is implemented as a subclass of UIImageView. I'm trying to do the same thing except in a UIView. How will using a UIView rather than UIImageView affect the implementation of this solution? I see self.image used a couple times, but I don't know how I'd change that since I don't think that is available in a generic UIView.
Yes, you can implement this as a UIView subclass. Your model should hold the locations of the touch events (or the paths constructed from those locations), and the drawRect of the view can then render those paths. Alternatively, you can create CAShapeLayer objects for those paths. Both approaches work fine.
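If you go the CAShapeLayer route, a minimal sketch might look like this (class and property names are illustrative); each stroke gets its own layer, so no custom drawRect is needed:

```swift
import UIKit

// Minimal sketch of the CAShapeLayer approach (names are illustrative):
// each gesture produces a path rendered by its own shape layer.
class ShapeLayerCanvasView: UIView {
    private var currentPath: UIBezierPath?
    private var currentLayer: CAShapeLayer?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        let path = UIBezierPath()
        path.move(to: point)

        let shapeLayer = CAShapeLayer()
        shapeLayer.strokeColor = UIColor.black.cgColor
        shapeLayer.fillColor = nil
        shapeLayer.lineWidth = 3
        layer.addSublayer(shapeLayer)

        currentPath = path
        currentLayer = shapeLayer
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self), let path = currentPath else { return }
        path.addLine(to: point)
        currentLayer?.path = path.cgPath  // the layer redraws itself; no drawRect needed
    }
}
```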
Note, there is some merit to the approach of taking snapshots (saved as UIImage objects) that you either show in a UIImageView or draw manually in the drawRect of your UIView subclass. As your drawings get more complicated, you will start to suffer performance problems if your drawRect has to redraw all of the path segments (which can grow to thousands of points surprisingly quickly, because a single on-screen gesture generates a lot of touches) upon every touch.
That said, the other answer you reference goes too far in my opinion, making a new snapshot upon every touchesMoved. At full resolution on a Retina iPad or an iPhone 6 Plus, that is a large image to snapshot on every touch event. I personally adopt a hybrid approach: my drawRect or CAShapeLayer renders the current path associated with the current gesture (i.e., the collection of touchesMoved events between touchesBegan and touchesEnded), and only when the gesture finishes does it create a new snapshot.
In the answer to that question, self.image is drawn into the drawing context first, then drawing is applied on top, then finally the image is updated to be the old image with new content drawn on top.
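For the snapshot step of that hybrid approach, a hedged sketch (the function name is an assumption, and it uses the modern UIGraphicsImageRenderer rather than UIGraphicsBeginImageContextWithOptions) could flatten the finished stroke onto the previous snapshot when the gesture ends:

```swift
import UIKit

// Hedged sketch (names are illustrative): flatten a finished stroke onto the
// previous snapshot so later redraws don't replay every path segment.
func snapshot(bounds: CGRect, previous: UIImage?, finishedStroke: UIBezierPath) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: bounds)
    return renderer.image { _ in
        previous?.draw(in: bounds)   // old snapshot first
        UIColor.black.setStroke()
        finishedStroke.stroke()      // then the newly finished stroke on top
    }
}
```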
Since you just want to add a UIBezierPath, I'd create a CAShapeLayer, give it your bezier path, and place it on top of your view's backing layer (self.view.layer). There's no need to do anything with drawRect:.

Using Quartz for small graphic project

I'm developing an iOS project that involves drawing small graphics (lines and paths) on the screen.
I initially chose Quartz over OpenGL because I only need to display some basic shapes and update them every 5 seconds, so Quartz seemed better suited and easier.
I found out that I can't simply draw in a view; instead, I have to subclass UIView and draw in its drawRect method.
In my project, the user should be able to pinch-zoom the graphics, so I planned to add a pinch gesture recognizer to the view, but I am unsure how to redraw everything after the pinch. Do I have to erase everything and re-add the subviews so that drawRect is triggered, or is there a better way to do this?
Thanks a lot.
When using Quartz, you technically don't have to subclass the view and override drawRect, but it is probably best practice. When you want to redraw your view, just call [self setNeedsDisplay]; (if calling from the subclassed view, or [self.view setNeedsDisplay]; if doing it from the view controller). This results in your drawRect method being called, and the system takes care of the rest for you.
See the setNeedsDisplay documentation for more information.
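To make this concrete, here is a minimal sketch (class and handler names are assumptions): a pinch recognizer updates a scale factor and calls setNeedsDisplay, and the next drawRect pass redraws the shapes at the new scale; no subviews need to be removed or re-added:

```swift
import UIKit

// Minimal sketch (names are illustrative): a pinch updates a scale factor,
// and setNeedsDisplay() schedules draw(_:) to redraw at the new scale.
class GraphView: UIView {
    private var scale: CGFloat = 1.0

    override init(frame: CGRect) {
        super.init(frame: frame)
        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:)))
        addGestureRecognizer(pinch)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        scale *= gesture.scale
        gesture.scale = 1.0   // reset so each callback reports an incremental scale
        setNeedsDisplay()     // invalidate: draw(_:) runs on the next display cycle
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.scaleBy(x: scale, y: scale)
        context.setStrokeColor(UIColor.systemBlue.cgColor)
        context.stroke(CGRect(x: 20, y: 20, width: 60, height: 60))  // placeholder shape
    }
}
```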

Intercepting touches on MKOverlay border

One of the functions of my program is to select a piece of the map. I do this using MKAnnotations and an MKPolygonView (with just the border visible) to connect the "dots".
However, I'm trying to find a mechanism by which users can add new pins. This should happen when the user presses on part of the MKPolygonView's border, after which a new pin is added in the middle of that border segment.
In order to do this, I have to intercept touches, probably using a UIGestureRecognizer. I have looked at Touch events on MKMapView's overlays, which gave me a good lead. The only problem is that this intercepts touches inside the MKPolygonView as well; I just need the border.
Is there any way to achieve this kind of behavior?
This is an old question, but anyway: one possible workaround is to use an MKPolyline at the same time. You could add an MKPolyline that matches the MKPolygon's border and detect taps on the polyline.
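For illustration, here is a hedged sketch of the tap test (the function name and tolerance are assumptions): convert the tap to an MKMapPoint and measure its distance to each polyline segment. Note the tolerance is in meters, so the effective touch target shrinks as the user zooms out; deriving it from the current zoom level may work better in practice.

```swift
import MapKit
import UIKit

// Hedged sketch (function name and tolerance are assumptions): test whether
// a tap lands within `tolerance` meters of any segment of the polyline.
func isTap(_ tap: CGPoint, onBorderOf polyline: MKPolyline,
           in mapView: MKMapView, tolerance: CLLocationDistance = 20) -> Bool {
    let coordinate = mapView.convert(tap, toCoordinateFrom: mapView)
    let tapPoint = MKMapPoint(coordinate)
    let points = polyline.points()

    for i in 0..<(polyline.pointCount - 1) {
        let a = points[i]
        let b = points[i + 1]
        // Project the tap onto the segment a-b, clamped to the endpoints.
        let dx = b.x - a.x
        let dy = b.y - a.y
        let lengthSquared = dx * dx + dy * dy
        var t = 0.0
        if lengthSquared > 0 {
            t = max(0, min(1, ((tapPoint.x - a.x) * dx + (tapPoint.y - a.y) * dy) / lengthSquared))
        }
        let closest = MKMapPoint(x: a.x + t * dx, y: a.y + t * dy)
        if closest.distance(to: tapPoint) <= tolerance {
            return true
        }
    }
    return false
}
```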

What are the differences between a UIView and a CALayer?

Both have most of the same attributes, both support various kinds of animation, and yet they represent different things.
What are the differences between a UIView and a CALayer?
On iOS, every UIView is backed by a Core Animation CALayer, so you are dealing with CALayers when using a UIView, even though you may not realize it. Unlike NSViews on the Mac, which evolved before Core Animation existed, UIViews are intended to be lightweight wrappers around these CALayers.
As I describe in the similar question "When to use CALayer on the Mac/iPhone?", working directly with CALayers doesn't give you significant performance advantages over UIViews. One of the reasons you might want to build a user interface element with CALayers instead of UIViews is that it can be very easily ported to the Mac. UIViews are very different from NSViews, but CALayers are almost identical on the two platforms. This is why the Core Plot framework lays out its graphs using CALayers instead of other UI elements.
One thing UIViews provide over CALayers is built-in support for user interaction. They handle hit-testing on touches and other related actions that you would need to build yourself if managing a hierarchy of CALayers. It's not that hard to implement this yourself, but it is extra code you'd need to write when building a CALayer-only interface.
You will often need to access the underlying layers for a UIView when performing more complex animations than the base UIView class allows. UIView's animation capabilities have grown as the iOS SDK has matured, but there are still a few things that are best done by interacting with the underlying CALayer.
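A classic example is animating a layer property such as cornerRadius, which for a long time was not covered by UIView's block-based animations; a brief sketch of dropping down to Core Animation on the view's underlying layer:

```swift
import UIKit

// Brief sketch: animate the backing layer's cornerRadius directly with
// Core Animation, then set the model value so the change persists.
let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))

let animation = CABasicAnimation(keyPath: "cornerRadius")
animation.fromValue = 0
animation.toValue = 25
animation.duration = 0.5
view.layer.add(animation, forKey: "cornerRadius")
view.layer.cornerRadius = 25  // update the model layer so the radius sticks
```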
From the Ray Wenderlich blog (Tutorial):
"CALayers are simply classes representing a rectangle on the screen with visual content. “But wait a darn minute,” you may say, “that’s what UIViews are for!” That’s true, but there’s a trick to that: every UIView contains a root layer that it draws to!"
Simply speaking, UIView inherits from UIResponder and handles events from users; it contains a CALayer, which inherits from NSObject and focuses mainly on rendering and animation.
UIView is a container for CALayers, built on UIKit.
CALayer is where the content is drawn, using Core Graphics.
If you are building custom-control-style features, it is often better to use a single view containing multiple layers, since CALayers are more lightweight than UIViews and give accurate native rendering.
To create a common skeleton for Mac and iOS, design your app around CALayers, since they are available on both platforms.
UIView offers touch handling through responder methods such as -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event and -(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event, along with other UIKit features.
Working with CALayers requires Core Graphics knowledge; for any simple view rendering, UIView is enough.
UIView: Views have more complex hierarchy layouts. They can receive user interactions such as taps, pinches, clicks, and more. Working with UIViews happens on the main thread, which means it uses CPU power.
CALayer: Layers, on the other hand, have a simpler hierarchy. That means they are faster to resolve and quicker to draw on the screen. There is no responder-chain overhead, unlike with views. Layers are drawn directly on the GPU, on a separate thread, without burdening the CPU.
For more details: https://medium.com/@fassko/uiview-vs-calayer-b55d932ff1f5
The big difference is that UIView is designed for Cocoa Touch on mobile devices. It adds event handling that CALayer does not provide.

Doodles... Open GL?

I am making an app where one feature is the ability to place pre-made doodles on the page. Users will be able to zoom the doodles in and out and place them where they wish. This is all for the iPad. Do I need to use OpenGL for this, or is there a better/easier way? (New to iOS programming.)
You should be able to achieve this using CALayers. You will need to add the QuartzCore framework for this to work. The idea is to represent each doodle as a single CALayer. If your doodles are images, you can use the layer's contents property to assign the doodle to the layer. It expects a CGImageRef, which you can easily retrieve via the CGImage property of a UIImage object.
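A minimal sketch of that setup (the function and asset names are hypothetical):

```swift
import UIKit

// Minimal sketch (asset name is hypothetical): wrap a doodle image in a
// CALayer by assigning its CGImage to the layer's contents.
func makeDoodleLayer(imageNamed name: String, frame: CGRect) -> CALayer {
    let doodleLayer = CALayer()
    doodleLayer.frame = frame
    doodleLayer.contents = UIImage(named: name)?.cgImage
    doodleLayer.contentsGravity = .resizeAspect
    return doodleLayer
}

// Usage, assuming a `drawingBoard` view:
// drawingBoard.layer.addSublayer(
//     makeDoodleLayer(imageNamed: "doodle1",
//                     frame: CGRect(x: 20, y: 20, width: 120, height: 120)))
```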
You will need a view to serve as your drawing board. Since you want to be able to move and resize the doodles, attach a UIPanGestureRecognizer for moving the layers and a UIPinchGestureRecognizer for zooming them in and out. Because recognizers can only be attached to a view, not to layers, the non-trivial part when the gesture handlers are called is identifying which sublayer of the view is being manipulated. You can get the touch location using locationInView: for the pan gesture and locationOfTouch:inView: for the pinch gesture, where the view argument is the view the gesture is being performed on (available via gesture.view). Once you identify the layer in focus, use translationInView: from the pan gesture to move the layer, and the scale property of the pinch gesture to transform it.
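Here is a hedged sketch of that gesture handling (class and method names are assumptions; for simplicity both handlers hit-test with the gesture's centroid via location(in:)): find the frontmost sublayer under the touch, then move or scale it.

```swift
import UIKit

// Hedged sketch (names are assumptions): hit-test the gesture's location
// against the board view's sublayers, then move or scale the matched layer.
class DoodleBoardViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        view.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
        view.addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
    }

    private func doodleLayer(at point: CGPoint) -> CALayer? {
        // Walk the sublayers back-to-front and return the frontmost hit.
        return view.layer.sublayers?.reversed().first(where: { $0.frame.contains(point) })
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let layer = doodleLayer(at: gesture.location(in: gesture.view)) else { return }
        let translation = gesture.translation(in: gesture.view)
        CATransaction.begin()
        CATransaction.setDisableActions(true)  // skip implicit animations while dragging
        layer.position = CGPoint(x: layer.position.x + translation.x,
                                 y: layer.position.y + translation.y)
        CATransaction.commit()
        gesture.setTranslation(.zero, in: gesture.view)  // report incremental deltas
    }

    @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        guard let layer = doodleLayer(at: gesture.location(in: gesture.view)) else { return }
        layer.setAffineTransform(layer.affineTransform().scaledBy(x: gesture.scale, y: gesture.scale))
        gesture.scale = 1.0  // reset so the scale is incremental per callback
    }
}
```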
While CALayer objects are lightweight, you could face problems when there are just too many of them, so stress-test your application. Another roadblock is that images are usually memory hogs, so you might not be able to fit in many doodles.
