Zoomable Graphics - iOS

I am trying to draw zoomable graphics onto the screen. I currently have a UIView inside of a UIScrollView, and I'm wondering what the best way is to handle/implement zooming of the graphics I've drawn on the screen.

You'll probably want to use something along the lines of what I describe in my answer here.
During the pinch-zooming event, a transform will be applied to your UIView, which will zoom the content but will lead to blurring. Once the zooming event is finished, use the -scrollViewDidEndZooming:withView:atScale: delegate method to determine the new scale factor and resize and re-render your UIView appropriately. If you're doing your drawing using Core Graphics within the -drawRect: method of your UIView, this should be pretty easy to manage.
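A rough sketch of that flow is below, assuming a contentView property that holds the Core Graphics-backed view plus baseSize/currentScale bookkeeping (those names are only for illustration):

    // Let the scroll view zoom the drawing view with a transform while the pinch is active.
    - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
    {
        return self.contentView; // the UIView that draws with Core Graphics in -drawRect:
    }

    - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                           withView:(UIView *)view
                            atScale:(CGFloat)scale
    {
        // Accumulate the scale the user has zoomed to so far.
        self.currentScale *= scale;

        // Drop the interpolated (blurry) zoom, resize the view to its new size,
        // and redraw it sharply. minimumZoomScale/maximumZoomScale must allow 1.0.
        scrollView.zoomScale = 1.0;
        CGSize newSize = CGSizeMake(self.baseSize.width * self.currentScale,
                                    self.baseSize.height * self.currentScale);
        self.contentView.frame = CGRectMake(0.0, 0.0, newSize.width, newSize.height);
        scrollView.contentSize = newSize;
        [self.contentView setNeedsDisplay];
    }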

Related

iOS, UIView animation, animate own content

I have a subclass of UIView that displays its own content, and I'd like to animate that content.
The content is drawn by the view itself in its drawRect:, and I wonder what possibilities there are to animate it. The content consists of graphical shapes that change their form.
I don't see a way to construct the content out of subviews that could then be animated themselves.
Is there a way to use a UIView animation block?
Are there other possibilities? I would rather not animate this with OpenGL ES; that would be my last choice.
Thanks for any hints
Torsten
Are you sure you can't use CALayer? Layers are made for this! You can create complex frames/textures (you wrote that you have geometric shapes) and apply animated transforms to them.
Consider that even if your shapes don't fit into the (really wide) range of shapes a layer can represent, you can basically draw any line using a CALayer: create a layer of the proper length and width, then simply translate and rotate it as needed (and of course translation and rotation are animatable).
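For example, a minimal sketch of the line-as-a-layer idea (the geometry here is arbitrary, and this assumes it runs inside a view controller):

    #import <QuartzCore/QuartzCore.h>

    // A thin CALayer stands in for a line segment.
    CALayer *lineLayer = [CALayer layer];
    lineLayer.backgroundColor = [UIColor blackColor].CGColor;
    lineLayer.bounds = CGRectMake(0.0, 0.0, 120.0, 2.0);   // length x thickness
    lineLayer.position = CGPointMake(160.0, 240.0);
    [self.view.layer addSublayer:lineLayer];

    // Rotating and moving the layer animates smoothly without any -drawRect: work.
    [CATransaction begin];
    [CATransaction setAnimationDuration:0.5];
    lineLayer.transform = CATransform3DMakeRotation(M_PI_4, 0.0, 0.0, 1.0);
    lineLayer.position = CGPointMake(200.0, 200.0);
    [CATransaction commit];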

Using a panning gesture to animate CGRect resize

I'm drawing a few circles, each filled with an image. When the user pans I'd like to scale/resize the circles. So I called drawRect again and again, redrawing every CGRect until the gesture was completed - of course the animation was very choppy. In my case a UIScrollView doesn't fit my needs, because I don't want to scroll, but to scale the circles while the user is panning.
Is there any way except using OpenGL ES to implement this functionality?
Do you really need custom drawing for this? You can easily clip an image to a circle without drawRect.
Without -drawRect:
Using Core Animation you can set the corner radius of a layer. If all you want is to show an image inside a circle, then you can put the image in an image view with a square frame and set the corner radius of the image view's layer to half the width of the frame.
Now each time the user drags, you can change the bounds and the corner radius of the image view's layer. This will make it look like the circle becomes bigger/smaller.
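Something along these lines (a sketch only; the method names are made up):

    #import <QuartzCore/QuartzCore.h>

    // Create a square image view and clip it to a circle via the layer's corner radius.
    - (UIImageView *)circleViewWithImage:(UIImage *)image diameter:(CGFloat)diameter
    {
        UIImageView *circleView = [[UIImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, diameter, diameter)];
        circleView.image = image;
        circleView.layer.cornerRadius = diameter / 2.0;
        circleView.layer.masksToBounds = YES;   // clip the image to the rounded layer
        return circleView;
    }

    // Call this from the pan handler to grow/shrink the circle as the finger moves.
    - (void)resizeCircleView:(UIImageView *)circleView toDiameter:(CGFloat)diameter
    {
        circleView.bounds = CGRectMake(0.0, 0.0, diameter, diameter);
        circleView.layer.cornerRadius = diameter / 2.0;
    }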
If you require custom drawing
Maybe you are doing some custom shadows or blending that can only be done with Core Graphics. If so, you could apply a scale transform and stretch the image while the user is dragging their finger, and only redraw once the finger lifts from the screen. That will be much, much cheaper and is also very easy to implement. Just create a scale transform (CGAffineTransformMakeScale(xScale, yScale)) and set it as the transform on the view with the circle (this will only work if each circle is its own view).
Note: You can still use the same trick (scaling while dragging and then redrawing) with the corner radius approach if you need the extra performance.
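A rough sketch of the "stretch while dragging, redraw when the finger lifts" idea, assuming each circle is its own view (circleView and baseDiameter are illustrative names):

    - (void)handlePan:(UIPanGestureRecognizer *)pan
    {
        // Arbitrary mapping from the vertical drag distance to a scale factor.
        CGFloat scale = 1.0 + [pan translationInView:self.view].y / 200.0;

        if (pan.state == UIGestureRecognizerStateChanged) {
            // Cheap: just stretch the already-rendered content.
            self.circleView.transform = CGAffineTransformMakeScale(scale, scale);
        } else if (pan.state == UIGestureRecognizerStateEnded) {
            // Do the expensive work once: reset the transform, resize, and redraw sharply.
            self.circleView.transform = CGAffineTransformIdentity;
            CGFloat diameter = self.baseDiameter * scale;
            self.circleView.bounds = CGRectMake(0.0, 0.0, diameter, diameter);
            [self.circleView setNeedsDisplay];
        }
    }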

How to resize a custom UIView and maintain its custom draw in proportion?

I have a custom view with some drawing on it.
I want to resize it to a new proportion, and I want the pattern I drew in its drawRect: to also be resized by the same proportion.
Is there any way I can accomplish this without refreshing and redrawing everything?
This should be happening for you automatically with the default contentMode, which is UIViewContentModeScaleToFill. contentMode determines how to adjust the cached bitmap without forcing a new call to drawRect:. Also see contentStretch which allows you to control which part of the view is scaled.
You will have to redraw it for the new proportion.
For that you have to store the points that made the CGPath, scale the points according to the new proportion, and render it again.
Redrawing a CGPath needs some attention.
If you have only used simple calls like CGPathMoveToPoint / CGPathAddLineToPoint, you can do it just by storing the points in an array; you can scale them and redraw later.
If you have used functions like CGPathAddCurveToPoint etc., storing points in an array won't work; a general-purpose way is needed. For that you have to use the CGPathApply function. You can see an example of it here: http://www.mlsite.net/blog/?p=1312
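The idea looks roughly like this (a sketch only; the scale factor is hard-coded for brevity):

    // Applier that rebuilds each path element at a new scale. The destination
    // path is passed through the info pointer.
    static void ScalePathElement(void *info, const CGPathElement *element)
    {
        CGMutablePathRef scaledPath = (CGMutablePathRef)info;
        const CGFloat s = 2.0;                    // illustrative scale factor
        CGPoint *p = element->points;

        switch (element->type) {
            case kCGPathElementMoveToPoint:
                CGPathMoveToPoint(scaledPath, NULL, p[0].x * s, p[0].y * s);
                break;
            case kCGPathElementAddLineToPoint:
                CGPathAddLineToPoint(scaledPath, NULL, p[0].x * s, p[0].y * s);
                break;
            case kCGPathElementAddQuadCurveToPoint:
                CGPathAddQuadCurveToPoint(scaledPath, NULL, p[0].x * s, p[0].y * s,
                                          p[1].x * s, p[1].y * s);
                break;
            case kCGPathElementAddCurveToPoint:
                CGPathAddCurveToPoint(scaledPath, NULL, p[0].x * s, p[0].y * s,
                                      p[1].x * s, p[1].y * s, p[2].x * s, p[2].y * s);
                break;
            case kCGPathElementCloseSubpath:
                CGPathCloseSubpath(scaledPath);
                break;
        }
    }

    // Usage:
    //   CGMutablePathRef scaled = CGPathCreateMutable();
    //   CGPathApply(originalPath, scaled, ScalePathElement);

On iOS 5 and later, CGPathCreateCopyByTransformingPath can produce the scaled path in a single call.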
If you only need to zoom and no interaction is needed, you can take a screenshot and zoom the image.

iOS: need inputs on developing an efficient (performance-wise) drawing app

I have this app using which one can draw basic shapes like rectangles, ellipses, circles, text, etc.
I also allow free-form drawing, which is stored as a set of points, on the canvas.
Also a user can resize and move around these objects by operating on the selection handles that appear when an object is selected.
In addition the user should be able to zoom and pan the canvas.
I need some inputs on how to efficiently implement this drawing functionality.
I have the following things in mind:
Use UIView's setNeedsDisplayInRect: (invalidate rect) and drawRect:
Have a UIView for the main canvas; for each inserted object, invalidate the corresponding rect and, in the view's drawRect:, redraw all the objects that intersect that rect.
Have a UIView and use CALayers?
Everyone keeps mentioning CALayer. I don't have much of an idea about it, and before I venture into this I wanted quick input on whether this route is worth taking.
For example: https://developer.apple.com/library/ios/#qa/qa1708/_index.html
Have a UIImageView as the canvas, and when drawing each object do this:
i) Draw the object into an offscreen CGContext: create a new context using UIGraphicsBeginImageContext, draw the shape, extract the image out of that context, and use it as the source of the UIImageView's image property. But here, how do I invalidate only a part of the UIImageView so that only that area gets refreshed?
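Roughly, what I have in mind for the offscreen step is this (canvasImageView and drawInContext: are placeholders):

    UIGraphicsBeginImageContextWithOptions(self.canvasImageView.bounds.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [shape drawInContext:context];                // hypothetical per-shape drawing method
    UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    self.canvasImageView.image = rendered;
    // Assigning a new image refreshes the whole image view; UIImageView has no
    // built-in partial invalidation of its image.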
Could you please suggest what is the best approach?
Is there any other efficient way to get this done?
Thanks.
Using a UIImage is more efficient for rendering multiple objects, but using a CALayer is more efficient when moving and modifying a single object because you don't have to touch the other objects. So I think the best approach is to use a UIImage for general drawing and a CALayer for the shape that is being modified. In other words:
use a CALayer to draw the shape being added or modified, but don't draw it on the UIImage
use a UIImage to draw the other shapes
OpenGL is still the most efficient solution, but don't bother with it unless you have a very large number of objects to draw.
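A rough sketch of that split: committed shapes stay flattened in the canvas image, and only the shape being edited gets its own layer (activeShapeLayer, canvasView, and shape.path are illustrative names):

    #import <QuartzCore/QuartzCore.h>

    - (void)beginEditingShape:(Shape *)shape
    {
        self.activeShapeLayer = [CAShapeLayer layer];
        self.activeShapeLayer.path = shape.path;                  // hypothetical CGPath property
        self.activeShapeLayer.strokeColor = [UIColor blackColor].CGColor;
        self.activeShapeLayer.fillColor = NULL;
        [self.canvasView.layer addSublayer:self.activeShapeLayer];
    }

    - (void)editedShapeDidChange:(Shape *)shape
    {
        // Moving/resizing only updates the layer; the flattened image is untouched.
        self.activeShapeLayer.path = shape.path;
    }

    - (void)endEditingShape:(Shape *)shape
    {
        [self.activeShapeLayer removeFromSuperlayer];
        self.activeShapeLayer = nil;
        // Now re-render the flattened canvas image once, including this shape.
    }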
If you want to draw polygons, you'll have to use the Quartz framework and base your drawing methods on CALayer. It doesn't really matter which view you put your CALayers in, UIImageView or UIView. I'd say UIView, since you won't be needing UIImageView's properties or methods for drawing.

Possible to ignore pan gestures on transparent parts of UIImageViews?

I'm working on an app that lets the user stack graphics on top of each other.
The graphics are instantiated as UIImageViews and are transparent outside of the actual graphic. I'm also using pan gestures to let the user drag them around the screen.
So when you have a bunch of graphics of different sizes and shapes on top of one another, you may have the illusion that you are touching a sub-indexed view, but you're actually touching the top one, because some transparent part of it is hovering over your touch point.
I was wondering if anyone had ideas on how to accomplish ONLY listening to the pan gesture on the solid part of the image view, or anything else that would tighten up the user experience so that whatever the user touches is what gets selected. Thanks
Create your own subclass of UIImageView. In your subclass, override the pointInside:withEvent: method to return NO if the point is in a transparent part of the image.
Of course, you need to determine if a point is in a transparent part. :)
If you happen to have a CGPath or UIBezierPath that outlines the opaque parts of your image, you can do it easily using CGPathContainsPoint or -[UIBezierPath containsPoint:].
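For the path case, the subclass could look roughly like this (opaquePath is an assumed property holding that outline):

    @interface OpaqueHitImageView : UIImageView
    @property (nonatomic, strong) UIBezierPath *opaquePath;
    @end

    @implementation OpaqueHitImageView

    - (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
    {
        if (self.opaquePath == nil) {
            return [super pointInside:point withEvent:event];
        }
        // Only report a hit (and so receive pan gestures) inside the opaque outline.
        return [self.opaquePath containsPoint:point];
    }

    @end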
If you don't have a handy path, you will have to examine the image's pixel data. There are many answers on stackoverflow.com already that explain how to do that. Search for get pixel CGImage or get pixel UIImage.
