Doodles... OpenGL? - iOS

I am making an app where one feature is the ability to place pre-made doodles on the page. Users will be able to zoom the doodles in and out and place them where they wish. This is all for the iPad. Do I need to use OpenGL for this, or is there a better / easier way? (I'm new to iOS programming.)

You should be able to achieve this using CALayers. You will need to add the QuartzCore framework for this to work. The idea would be to represent each doodle as a single CALayer. If your doodles are images, you can use the contents property to assign the doodle to the layer. You will need to assign a CGImageRef object, which you can easily retrieve using the CGImage property of a UIImage object.
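For example, a minimal Swift sketch (the asset name "doodle1" and the makeDoodleLayer helper are placeholders used only for illustration):

```swift
import UIKit
import QuartzCore

// Wraps a doodle image in a CALayer. "doodle1" below is a hypothetical asset name.
func makeDoodleLayer(named name: String, at position: CGPoint) -> CALayer? {
    guard let image = UIImage(named: name) else { return nil }
    let layer = CALayer()
    layer.contents = image.cgImage                        // the CGImage backing the doodle
    layer.bounds = CGRect(origin: .zero, size: image.size)
    layer.position = position                             // centre point in the superlayer
    return layer
}

// Usage on the drawing-board view:
// if let doodle = makeDoodleLayer(named: "doodle1", at: CGPoint(x: 200, y: 200)) {
//     canvasView.layer.addSublayer(doodle)
// }
```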
You will need a view to act as your drawing board. Since you want to be able to move and resize the doodles, attach a UIPanGestureRecognizer for moving the layers and a UIPinchGestureRecognizer for zooming the doodles in and out. Since recognizers can only be attached to a view, not to layers, the non-trivial part when the gesture handlers are called is identifying which sublayer of the view is being manipulated. You can get the touch location using locationInView: for the pan gesture and locationOfTouch:inView: for the pinch gesture, with the view argument being the view the gesture is being performed on (available via gesture.view). Once you have identified the layer in focus, use translationInView: of the pan gesture to move the layer and the scale property of the pinch gesture to transform it.
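A rough sketch of how the drawing-board view and its gesture handlers could fit together, assuming each doodle is a direct sublayer and hit-testing is done against the sublayer frames (class and method names here are illustrative, not the only way to do it):

```swift
import UIKit

// Sketch of a drawing-board view: each doodle is a sublayer, and the gesture
// handlers first work out which sublayer is under the finger(s).
class DoodleBoardView: UIView {

    private var activeLayer: CALayer?

    override init(frame: CGRect) {
        super.init(frame: frame)
        addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
        addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Topmost doodle layer under the given point (the point is in the view's
    // coordinate space, which matches the backing layer's coordinate space).
    private func doodleLayer(at point: CGPoint) -> CALayer? {
        return layer.sublayers?.last { $0.frame.contains(point) }
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        if gesture.state == .began {
            activeLayer = doodleLayer(at: gesture.location(in: self))
        }
        guard let doodle = activeLayer else { return }
        let translation = gesture.translation(in: self)
        CATransaction.begin()
        CATransaction.setDisableActions(true)          // move instantly, no implicit animation
        doodle.position.x += translation.x
        doodle.position.y += translation.y
        CATransaction.commit()
        gesture.setTranslation(.zero, in: self)        // reset so the next delta is incremental
    }

    @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        if gesture.state == .began {
            activeLayer = doodleLayer(at: gesture.location(ofTouch: 0, in: self))
        }
        guard let doodle = activeLayer else { return }
        CATransaction.begin()
        CATransaction.setDisableActions(true)
        doodle.transform = CATransform3DScale(doodle.transform, gesture.scale, gesture.scale, 1)
        CATransaction.commit()
        gesture.scale = 1                              // reset so scaling is incremental too
    }
}
```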
While CALayer objects are lightweight, you could face problems when there are simply too many of them, so stress-test your application. Another roadblock is that images are usually memory hogs, so you might not be able to fit in a lot of doodles.

Related

User Interaction at runtime with MPAndroidChart

First, I should say MPAndroidChart is awesome; you have brought in a lot of functionality and customization, which makes it really cool.
I'm looking to add user interaction to MPAndroidChart.
My requirement is:
In a combined chart (line chart and bubble chart), I want the user to be able to drag and move a data point in the x,y coordinate space of the chart.
I want the user to drag the data shown.
How can I achieve this, and which classes should be subclassed to do it?
To add interactivity to the graph data:
- You need a view with a pan gesture.
- Subclass CombinedChartView, LineChartRenderer and BubbleChartRenderer.
- Create the renderer objects; these need to replace the renderers created inside CombinedChartView, which is why we subclass it and swap in our subclassed renderers.
- In the CombinedChartView subclass, override the getter/setter of the data variable. While the data is being assigned, create one UIView (with a pan gesture) per draggable point and keep a reference to its corresponding ChartDataEntry in that UIView. In the UIView's pan-handler function, change this data entry's xIndex etc. and call setNeedsDisplay. (The UIView's frame position is not known yet, so keep it at the origin for now.)
- In the renderer subclass (of LineChartRenderer or BubbleChartRenderer), override the drawDataSet function. This is where the actual drawing happens, so it is the place where we can get the exact x,y coordinates for our UIView; here we finally set the frame positions of the views created in CombinedChartView.
That's it: when we pan the UIView, the view moves and in turn updates the xIndex of its ChartDataEntry.
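Very roughly, the pan-handle part of this idea could look like the sketch below. It assumes the Swift port of the library (danielgindi/Charts), where recent versions expose x/y on ChartDataEntry rather than xIndex; the dataValue(forScreenPoint:in:) helper is a placeholder you would implement with the pixel-to-value transformer of the Charts version you are using:

```swift
import UIKit
import Charts   // the iOS/Swift port of MPAndroidChart (danielgindi/Charts)

// A handle view overlaid on one data point. The renderer subclass sets its frame
// each time the chart draws; the pan gesture mutates the underlying entry.
final class EntryHandleView: UIView {

    var entry: ChartDataEntry?           // the data point this handle drags
    weak var chart: CombinedChartView?   // assigned by the chart subclass when data is set

    override init(frame: CGRect) {
        super.init(frame: frame)
        addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let chart = chart, let entry = entry else { return }
        let translation = gesture.translation(in: chart)

        // Keep the handle under the finger.
        center.x += translation.x
        center.y += translation.y
        gesture.setTranslation(.zero, in: chart)

        // Map the handle's new screen position back to data space, update the entry,
        // then redraw; the overridden drawDataSet repositions the handle on the moved point.
        let value = dataValue(forScreenPoint: center, in: chart)
        entry.x = value.x
        entry.y = value.y
        chart.setNeedsDisplay()
    }

    // Placeholder conversion: implement this with the pixel-to-value transformer of the
    // Charts version you use (older versions expose xIndex on ChartDataEntry instead).
    private func dataValue(forScreenPoint point: CGPoint, in chart: CombinedChartView) -> (x: Double, y: Double) {
        return (Double(point.x), Double(point.y))
    }
}
```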

Swift: Drawing a UIBezierPath based on touch in a UIView

I've been looking at this thread as I'm trying to implement the same thing. However, I see that the Canvas class is implemented as a subclass of UIImageView. I'm trying to do the same thing except in a UIView. How will using a UIView rather than UIImageView affect the implementation of this solution? I see self.image used a couple times, but I don't know how I'd change that since I don't think that is available in a generic UIView.
Yes, you can implement this as a UIView subclass. Your model should hold the locations of the touch events (or the paths constructed from those locations), and then the drawRect of the view can render these paths. Or you can create CAShapeLayer objects associated with those paths instead. Both approaches work fine.
Note, there is some merit to the approach of making snapshots (saved as UIImage objects) that you either show in a UIImageView or draw manually in the drawRect of your UIView subclass. As your drawings get more and more complicated, you'll start to suffer performance issues if your drawRect has to redraw all of the path segments (it can become thousands of locations surprisingly quickly, because there are a lot of touches associated with a single screen gesture) upon every touch.
IMHO, the other answer you reference goes too far, making a new snapshot upon every touchesMoved. When you look at a full-resolution image for a Retina iPad or an iPhone 6 Plus, that's a large snapshot to create upon every touch event. I personally adopt a hybrid approach: my drawRect or CAShapeLayer renders the current path associated with the current gesture (i.e. the collection of touchesMoved events between touchesBegan and touchesEnded), but when the gesture finishes, it creates a new snapshot.
In the answer to that question, self.image is drawn into the drawing context first, then drawing is applied on top, then finally the image is updated to be the old image with new content drawn on top.
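In Swift, that accumulation step looks roughly like this (canvasImage and currentPath are illustrative names, not code from that answer):

```swift
import UIKit

// Draw the existing canvas image first, stroke the new path on top,
// and return the combined result as the new canvas image.
func updatedCanvasImage(size: CGSize, existing canvasImage: UIImage?, adding currentPath: UIBezierPath) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(size, false, 0)    // 0 = use the device scale
    defer { UIGraphicsEndImageContext() }

    canvasImage?.draw(in: CGRect(origin: .zero, size: size))  // old content first
    UIColor.black.setStroke()
    currentPath.stroke()                                      // new strokes on top

    return UIGraphicsGetImageFromCurrentImageContext()
}
```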
Since you just want to add a UIBezierPath, I'd just create a CAShapeLayer into which you place your bezier path, and place it on top of your view's backing layer (self.view.layer). There's no need to do anything with drawRect.
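A minimal sketch of that CAShapeLayer approach in a plain UIView subclass (single-finger strokes only; the class name, stroke colour and line width are arbitrary choices):

```swift
import UIKit

// A plain UIView that draws the finger's path with a CAShapeLayer; no drawRect needed.
class PathView: UIView {

    private let shapeLayer = CAShapeLayer()
    private let path = UIBezierPath()

    override init(frame: CGRect) {
        super.init(frame: frame)
        shapeLayer.strokeColor = UIColor.black.cgColor
        shapeLayer.fillColor = nil
        shapeLayer.lineWidth = 3
        layer.addSublayer(shapeLayer)        // sits on top of the view's backing layer
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        path.move(to: point)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        path.addLine(to: point)
        shapeLayer.path = path.cgPath        // re-assigning the path redraws the layer
    }
}
```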

drawing on UIView in iOS

I am developing an iPhone application. I have a UIView and I want to draw a particular region on the view during the user's touch-move events. Can anybody suggest how to do this, i.e. how to implement the touch-move handling so the region is drawn as the user moves their finger?
What you need to do is:
- Create a subclass of UIView.
- Implement the touch methods: touchesBegan:, touchesMoved:, etc.
- Use drawRect:, UIGraphicsBeginImageContextWithOptions and UIBezierPath to draw an image based on the touch locations.
There are many tutorials on the internet. I find this one quite good: Advanced Freehand Drawing Techniques
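Putting those steps together, a rough sketch might look like the following; the CanvasView name, stroke colour and line width are arbitrary choices:

```swift
import UIKit

// Accumulate touch points in a UIBezierPath, flatten finished strokes into an image,
// and draw both in draw(_:).
class CanvasView: UIView {

    private var path = UIBezierPath()
    private var canvasImage: UIImage?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        path.move(to: point)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        path.addLine(to: point)
        setNeedsDisplay()                       // redraw with the extended path
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Flatten the finished stroke into the canvas image so draw(_:) stays cheap.
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0)
        canvasImage?.draw(in: bounds)
        UIColor.black.setStroke()
        path.lineWidth = 3
        path.stroke()
        canvasImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        path = UIBezierPath()
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        canvasImage?.draw(in: bounds)           // previously committed strokes
        UIColor.black.setStroke()
        path.lineWidth = 3
        path.stroke()                           // the stroke currently being drawn
    }
}
```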

CABasicAnimation speed -- Keeping up with user input

Update: it really was as simple as not animating the UI element while tracking touches. It now follows touches perfectly with no lag.
I'm currently attempting to implement a UI feature by implementing a CALayer subclass inside of a UIView subclass. I receive touch events in the custom UIView's corresponding view controller and notify the UIView about the touches, which in turn notifies the CALayer in order to animate the UI elements drawn in the layer.
It all works, but I have noticed that when there is a big delta in movement (as in when quickly scrolling a finger), the CABasicAnimation lags behind. Ideally I want the animation to stay perfectly aligned with the user's finger.
I've come up with a hacky way of just setting the animation's speed arbitrarily high as in
anim.speed = 10.0f;
which essentially keeps up with the user's finger, but I feel that this is a total hack and not a shippable solution. Should I be artificially limiting how many touch events are processed in order to solve this problem? Is there some sort of calculation I should be doing for the speed/duration of the animation that I'm not aware of?
Thanks for any help with this!
During the continuous gesture, one shouldn’t animate movements, but rather just move directly to the gesture’s location. When the gesture finishes, if you want it to settle in some other position, then animate that final, post-gesture, destination. But don’t animate during the gesture itself.
In rare cases, where rendering of a single frame is incredibly slow, there can still be perceived lagginess. Obviously, one should optimize the draw(_:) process so that it isn't slow (or take a snapshot and animate the snapshot view rather than the complicated view). But during the gesture, you can also use "predictive touches," where the OS estimates where the user's gesture is going to be in the future. For example, you can implement touchesMoved(_:with:) and then call predictedTouches(for:). By moving the view to the predicted touch location, you reduce perceived lagginess.
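A small sketch combining both points: position the layer directly (no CABasicAnimation) while the gesture is in flight, using the predicted touch if the system offers one, and only animate once the gesture ends. The TrackingView and knobLayer names are illustrative:

```swift
import UIKit

// During the gesture: set the position directly. Only animate once the gesture has ended.
class TrackingView: UIView {

    let knobLayer = CALayer()   // the element that should follow the finger

    override init(frame: CGRect) {
        super.init(frame: frame)
        knobLayer.bounds = CGRect(x: 0, y: 0, width: 44, height: 44)
        knobLayer.backgroundColor = UIColor.blue.cgColor
        layer.addSublayer(knobLayer)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }

        // Use the predicted location if the system provides one, to hide residual latency.
        let target = event?.predictedTouches(for: touch)?.first ?? touch
        let point = target.location(in: self)

        // Move immediately; disabling implicit actions keeps the layer glued to the finger.
        CATransaction.begin()
        CATransaction.setDisableActions(true)
        knobLayer.position = point
        CATransaction.commit()
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Once the gesture is over, animating to a final resting place is fine.
        let destination = CGPoint(x: bounds.midX, y: bounds.midY)
        let settle = CABasicAnimation(keyPath: "position")
        settle.fromValue = NSValue(cgPoint: knobLayer.position)
        settle.toValue = NSValue(cgPoint: destination)
        settle.duration = 0.2
        knobLayer.position = destination        // update the model value
        knobLayer.add(settle, forKey: "settle")
    }
}
```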

hacking the iOS UI responder chain

I'm facing a delicate problem handling touch events. This is probably not a usual thing to do, but I think it is possible. I just don't know how...
I have a main view containing SubView(A) and SubView(B), each with a lot of subviews 1, 2, 3, 4, 5, ...
MainView
    SubView(A)
        1
        2
        3
    SubView(B)
        1
        2
        3
Some of these sub-subviews (1, 2, 4) are scroll views.
I want to switch between A and B with a two-finger pan.
I have tried attaching a UIPanGestureRecognizer to MainView, but the scroll views cancel the touches and it only works sometimes.
I need a consistent way to capture the touches first, detect whether it is a two-finger pan, and only then decide whether to pass the touches down (or up... I'm not sure) the responder chain.
I tried to create a top-level view to handle that, but I can't get the touches to pass through that view.
I have found a lot of people with similar problems, but couldn't work out a solution to my problem from theirs.
If anyone could shed some light on this, that would be great, as I'm already getting desperate.
You can create a top-level view to capture the touches and their coordinates, and then check whether each touch point lies inside one of the subviews. You can do that using
BOOL CGRectContainsPoint(CGRect rect, CGPoint point)
where rect is the frame of a view and point is the location of the touch.
Please note that frames and touch locations are relative to their superviews, so you need to convert them to the coordinate system of the app window.
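As a rough Swift sketch of that idea (TouchOverlayView and mainView are placeholder names; CGRectContainsPoint corresponds to CGRect.contains(_:) in Swift):

```swift
import UIKit

// A top-level overlay that inspects touches before deciding what to do with them.
// Frames and touch points are both converted to window coordinates before comparing,
// because each is otherwise relative to its own superview.
class TouchOverlayView: UIView {

    weak var mainView: UIView?   // root of the SubView(A) / SubView(B) hierarchy

    // Which direct subview of mainView (if any) does this touch fall inside?
    func subview(under touch: UITouch) -> UIView? {
        guard let window = window, let mainView = mainView else { return nil }
        let pointInWindow = touch.location(in: window)

        for subview in mainView.subviews {
            let frameInWindow = subview.convert(subview.bounds, to: window)
            if frameInWindow.contains(pointInWindow) {    // CGRectContainsPoint in Objective-C
                return subview
            }
        }
        return nil
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        if (event?.allTouches?.count ?? 0) >= 2 {
            // Two fingers down: treat this as the A/B switching pan.
        } else if let hit = subview(under: touch) {
            // A single touch belonging to a subview: decide whether to forward it.
            print("Touch belongs to \(hit)")
        }
    }
}
```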
Or maybe this will be more helpful:
Receiving touch events on more than one UIView simultaneously
