Drawing on a zoomable view - iOS

I'm working on a small drawing application that has a basic requirement of supporting zoom in/out. I have two main issues:
1. Drawing doesn't appear crisp and clear when the view is zoomed/transformed. Is there a better approach, or is there a way to improve the drawing when the view is zoomed?
2. Drawing performance is slow on a 1200 x 1200 pt canvas (on iPhone). Is there any chance I can improve it for large canvas sizes?
Zooming Code:
- (void)scale:(UIPinchGestureRecognizer *)gestureRecognizer {
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    UIView *canvas = [gestureRecognizer view];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan ||
        [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        // Calculate the drawing view's size
        CGSize drawingViewSize = ...;
        // Calculate the minimum allowed transform size.
        // Developer's note: I actually wanted to use a size 1/4th of the view,
        // but self.view.frame.size doesn't return the correct (actual) width and height.
        // It returns these values inverted, i.e. width as height and vice versa,
        // because the view appears to be transformed (90 degrees).
        // Since there's no workaround for this, I'm just using fixed values for now.
        CGSize minTransformSize = CGSizeMake(100.0f, 100.0f);
        if ((minTransformSize.width > drawingViewSize.width) && (minTransformSize.height > drawingViewSize.height)) {
            minTransformSize = drawingViewSize;
        }
        // Transform the view, provided:
        // 1. It won't scale above the original size of the background image
        // 2. It won't scale below the minimum possible transform
        CGSize transformedSize = CGSizeMake(canvas.frame.size.width * [gestureRecognizer scale],
                                            canvas.frame.size.height * [gestureRecognizer scale]);
        if ((transformedSize.width <= drawingViewSize.width) && (transformedSize.height <= drawingViewSize.height) &&
            (transformedSize.width >= minTransformSize.width) && (transformedSize.height >= minTransformSize.height)) {
            canvas.transform = CGAffineTransformScale([canvas transform],
                                                      [gestureRecognizer scale],
                                                      [gestureRecognizer scale]);
        }
        [gestureRecognizer setScale:1.0];
    } else if ([gestureRecognizer state] == UIGestureRecognizerStateEnded) {
        // Recenter the container view, if required (piece is smaller than the view and it's not aligned)
        CGSize viewSize = self.view.bounds.size;
        if ((canvas.frame.size.width < viewSize.width) ||
            (canvas.frame.size.height < viewSize.height)) {
            canvas.center = CGPointMake(viewSize.width / 2, viewSize.height / 2);
        }
        // Adjust the x/y coordinates, if required (piece is larger than the view and it's moving outwards from the view)
        if (((canvas.frame.origin.x > 0) || (canvas.frame.origin.y > 0)) &&
            ((canvas.frame.size.width >= viewSize.width) && (canvas.frame.size.height >= viewSize.height))) {
            canvas.frame = CGRectMake(0.0,
                                      0.0,
                                      canvas.frame.size.width,
                                      canvas.frame.size.height);
        }
        canvas.frame = CGRectIntegral(canvas.frame);
    }
}
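Side note on the developer's note above: once a view carries a non-identity transform, UIKit documents frame as undefined, but bounds.size is unaffected by the transform. A minimal sketch of the original quarter-size intent, under that assumption:

// Hedged sketch: read bounds, not frame, when the view may be transformed.
CGSize viewSize = self.view.bounds.size; // unaffected by the transform
CGSize minTransformSize = CGSizeMake(viewSize.width / 4.0f, viewSize.height / 4.0f);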
Drawing Code:
- (void)draw {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    if (self.fillColor) {
        [self.fillColor setFill];
        [self.path fill];
    }
    if ([self.strokeColor isEqual:[UIColor clearColor]]) {
        [self.path strokeWithBlendMode:kCGBlendModeClear alpha:1.0];
    } else if (self.strokeColor) {
        [self.strokeColor setStroke];
        [self.path stroke];
    }
    CGContextRestoreGState(context);
}
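One way to keep strokes crisp while the view is zoomed, offered as a sketch rather than anything from the question: raise the drawing view's contentScaleFactor to match the current zoom once the gesture ends, so the layer re-renders its content at the higher effective resolution. Here drawingView is a hypothetical reference to the view whose draw method is shown above, and the zoom factor is read from the canvas transform built in the pinch handler.

// Hedged sketch: after zooming ends, re-render at the new effective resolution.
CGFloat zoom = canvas.transform.a; // current horizontal scale of the canvas
drawingView.contentScaleFactor = [UIScreen mainScreen].scale * zoom;
[drawingView setNeedsDisplay];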

This is a pretty complicated problem that I have struggled a lot with. I ended up converting the drawings to vectors:
1. Draw all lines in one layer and all fills in another.
2. Convert the line drawings to vectors using potrace (http://potrace.sourceforge.net/).
3. Draw the vectors using SVGKit (https://github.com/SVGKit/SVGKit) and hide the layer drawn in step 1.
It works pretty well and is fairly fast, but it requires a lot of work. We have an app in our company that applies this technique:
https://itunes.apple.com/us/app/ideal-paint-hd-mormor/id569450492?mt=8.
If your only problem is performance, try taking a look at CATiledLayer (also used in the app mentioned above). It will increase performance tremendously. You can find a pretty good tutorial here: http://www.cimgf.com/2011/03/01/subduing-catiledlayer/.
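A minimal sketch of the CATiledLayer approach, assuming the canvas is a custom UIView subclass (CanvasView, the tile size, and the detail levels are illustrative choices, not taken from the app above):

#import <QuartzCore/QuartzCore.h>

@interface CanvasView : UIView
@end

@implementation CanvasView
// Back the view with a CATiledLayer so it renders in small tiles,
// asynchronously and per zoom level, instead of one huge bitmap.
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(256, 256);
        tiledLayer.levelsOfDetail = 4;     // extra detail levels when zoomed out
        tiledLayer.levelsOfDetailBias = 4; // extra detail levels when zoomed in, keeps strokes crisp
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    // Called once per tile, possibly on background threads;
    // draw only the paths that intersect `rect`.
}
@end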
Good luck! :)

First of all, you are transforming the view, and that is not the best way to zoom. A transform is fine for simply increasing or decreasing the size of a UIView, but for zooming you can do this:
1) Get a picture of the screen using this code:
UIGraphicsBeginImageContext(self.drawingView.bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
//[self.view.layer renderInContext:context];
[self.layerContainerView.layer renderInContext:context];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // balance the begin call above
[scaleLabel setHidden:FALSE];
return screenShot;
2) Then put it on a UIImageView and perform zooming on that image:
scrollView.userInteractionEnabled = TRUE;
self.scrollView.minimumZoomScale = 1.0;
self.scrollView.maximumZoomScale = 30.0f;
[self centerScrollViewContents];
CGFloat newZoomScale = self.scrollView.zoomScale / 1.5f;
newZoomScale = MAX(newZoomScale, self.scrollView.minimumZoomScale);
[self.scrollView setZoomScale:newZoomScale animated:YES];
3) Then set the zoomed image back as the UIView's background.
This works perfectly for me; I hope it also works for you.
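For step 2 to work, the scroll view also needs a delegate that tells it which view to zoom; a minimal sketch, where self.imageView is a hypothetical UIImageView holding the screenshot from step 1:

// Hedged sketch: the UIScrollViewDelegate method that enables zooming.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
    return self.imageView;
}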

Related

Resize UIImageView with finger iOS

I have a UIImageView (and a UITextView), and I am changing the height and width of them both using plus and minus buttons. However, I wanted to do what you can do in most programs, where a box appears around my views and the user can drag a corner of it to resize. Only one corner needs to be dragged; the opposite one is fixed. How do you do this?
Another way is with a gesture recognizer. In one task I resized an image like this:
- (void)resizeImage:(UIPinchGestureRecognizer *)recognizer
{
    if ([recognizer state] == UIGestureRecognizerStateBegan)
        previousScale = [recognizer scale];
    UIView *viewToResize = recognizer.view;
    if ([recognizer state] == UIGestureRecognizerStateChanged)
    {
        CGFloat currentScale = [[viewToResize.layer valueForKeyPath:@"transform.scale"] floatValue];
        CGFloat newScale = 1 - (previousScale - [recognizer scale]);
        newScale = MIN(newScale, MAX_SCALE / currentScale);
        newScale = MAX(newScale, MIN_SCALE / currentScale);
        CGAffineTransform transform = CGAffineTransformScale([viewToResize transform], newScale, newScale);
        viewToResize.transform = transform;
        previousScale = [recognizer scale];
    }
}
I have never done this before, but I think you would have to override the touchesBegan and touchesMoved methods of the view. In touchesBegan, make sure you are touching the correct view (the image view or the UITextView) and set a boolean in the view stating imTouchingOneOrTheOther. Then, when you hit touchesMoved, you can adjust the size of the frame accordingly. I would try adjusting the frame first with UIView block-based animations, and if that doesn't look right, I would play around with Core Animation. Let me know how it works out.
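A minimal sketch of that idea, assuming the draggable handle is a fixed 44-point region in the view's bottom-right corner (the handle size and the trackingCorner ivar are assumptions, not from the answer):

// Hedged sketch: resize a view by dragging its bottom-right corner;
// the opposite (top-left) corner stays fixed.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    CGRect corner = CGRectMake(self.bounds.size.width - 44, self.bounds.size.height - 44, 44, 44);
    trackingCorner = CGRectContainsPoint(corner, p); // hypothetical BOOL ivar
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    if (!trackingCorner) return;
    CGPoint p = [[touches anyObject] locationInView:self.superview];
    CGRect frame = self.frame;
    frame.size.width  = MAX(44, p.x - frame.origin.x);
    frame.size.height = MAX(44, p.y - frame.origin.y);
    self.frame = frame; // only width/height change, so the top-left corner stays put
}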

How do I accurately zoom in on a specific point on pinch gesture (with multiple pinches)?

I am trying to zoom a quad in my iOS application. It needs to zoom not based on the center of the quad, but based on the centroid of the pinch.
I am able to do this correctly, but only for the first pinch gesture. On subsequent pinch gestures it works, but it drifts a little and doesn't quite seem accurate. I am unable to figure out what to do.
There are a few SO questions around this, and I've been through most, if not all of them. None of them accurately address my problem.
Also note that I'm scaling and translating a quad (which is rendered into a GLKView), and not the view itself. Most solutions I've seen deal with transforming the views directly.
Here's the code for the pinch gesture and handling:
First in viewDidLoad:
UIPinchGestureRecognizer *pinchRecognizer = [[UIPinchGestureRecognizer alloc]
    initWithTarget:self action:@selector(respondToPinchGesture:)];
pinchRecognizer.cancelsTouchesInView = YES;
pinchRecognizer.delaysTouchesEnded = NO;
[glView addGestureRecognizer:pinchRecognizer];
Where glView is a GLKView object.
And the handler:
- (IBAction)respondToPinchGesture:(UIPinchGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateEnded || [recognizer numberOfTouches] < 2) return;
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        point = [recognizer locationInView:glView];
        point.x *= glView.contentScaleFactor;
        point.y *= glView.contentScaleFactor;
        point.y = height - point.y;
        anchor = GLKVector3Make(point.x, point.y, 0);
        lastScale = 1.0;
    }
    if (fabs(recognizer.scale - lastScale) > 0.01) {
        GLfloat scale = 1.0 - (lastScale - recognizer.scale);
        lastScale = recognizer.scale;
        new_anchor_point = anchor;
        new_anchor_point = GLKVector3MultiplyScalar(new_anchor_point, scale);
        GLKVector3 translate = GLKVector3Subtract(anchor, new_anchor_point);
        path.transform = GLKMatrix4TranslateWithVector3(path.transform, translate);
        path.transform = GLKMatrix4Scale(path.transform, scale, scale, 0);
        cumulative_translate = GLKVector3Add(cumulative_translate, translate);
    }
}
Any pointers appreciated. I am 2 days into this and even a vague suggestion might be helpful.
You have to:
1. remember the previous transformation matrix upon UIGestureRecognizerStateBegan,
2. construct your new transformation matrix for the pinch zoom, assuming the view or object has not been transformed before.
Then concatenate the two matrices together. This will be your final matrix for transforming your object or view.
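A minimal sketch of that recipe with GLKit matrices, reusing the question's anchor and path names (initialTransform is a hypothetical ivar, not from the answer):

// Hedged sketch: remember the transform at gesture start, build the pinch
// matrix fresh each time, then concatenate the two.
if (recognizer.state == UIGestureRecognizerStateBegan) {
    initialTransform = path.transform;
}
GLKMatrix4 pinch = GLKMatrix4TranslateWithVector3(GLKMatrix4Identity, anchor);
pinch = GLKMatrix4Scale(pinch, recognizer.scale, recognizer.scale, 1.0f);
pinch = GLKMatrix4TranslateWithVector3(pinch, GLKVector3Negate(anchor));
path.transform = GLKMatrix4Multiply(pinch, initialTransform);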
I managed to solve this using:
- (GLKVector3)get_touch_point_on_view:(UIGestureRecognizer *)recognizer {
    CGRect bounds = [glView bounds];
    CGPoint point = [recognizer locationInView:glView];
    point.y = bounds.size.height - point.y;
    return GLKVector3Make((point.x * glView.contentScaleFactor - total_translation.x) / total_scale,
                          (point.y * glView.contentScaleFactor - total_translation.y) / total_scale, 0);
}

- (void)respondToPinchGesture:(UIPinchGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        lastScale = 1.0;
    }
    [self get_touch_point_on_view:recognizer];
    if (fabs(recognizer.scale - lastScale) > 0.01) {
        GLfloat scale = 1.0 - (lastScale - recognizer.scale);
        lastScale = recognizer.scale;
        total_scale *= scale;
        path.transform = GLKMatrix4TranslateWithVector3(path.transform, centroid);
        path.transform = GLKMatrix4Scale(path.transform, scale, scale, 0);
        path.transform = GLKMatrix4TranslateWithVector3(path.transform, GLKVector3Negate(centroid));
        total_translation = [self get_total_translation];
    }
}

Detecting collisions between rotated UIViews

I have two UIViews, one of which is rotated every .01 second using the following code:
self.rectView.transform = CGAffineTransformRotate(self.rectView.transform, .05);
Now, I want to be able to tell if another UIView, called view, intersects rectView. I used this code:
if (CGRectIntersectsRect(self.rectView.frame, view.frame)) {
    //Intersection
}
There is a problem with this, however, as you probably know. Here is a screenshot:
In this case, a collision is detected, even though obviously they are not touching. I have looked around, but I cannot seem to find real code to detect this collision. How can this be done? Working code for detecting the collision in this case would be greatly appreciated! Or maybe would there be a better class to be using other than UIView?
When you rotate a view, its bounds won't change, but its frame will.
So, for my view with a blue backgroundColor, the initial frame I set was
frame = (30, 150, 150, 35);
bounds = {{0, 0}, {150, 35}};
but after rotating by 45 degrees, the frame changed to
frame = (39.5926, 102.093, 130.815, 130.815);
bounds = {{0, 0}, {150, 35}};
because frame always returns the smallest enclosing rectangle of the view.
So, in your case, even though it looks like the two views are not intersecting, their frames intersect.
To solve it, you can use the separating axis test. If you want to learn more about it, follow the link here.
I tried to solve it and finally got the solution. If you'd like to check it, the code is below. Copy-paste it into an empty project to try it out.
In the .m file:
@implementation ViewController {
    UIView *nonRotatedView;
    UIView *rotatedView;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    nonRotatedView = [[UIView alloc] initWithFrame:CGRectMake(120, 80, 150, 40)];
    nonRotatedView.backgroundColor = [UIColor blackColor];
    [self.view addSubview:nonRotatedView];
    rotatedView = [[UIView alloc] initWithFrame:CGRectMake(30, 150, 150, 35)];
    rotatedView.backgroundColor = [UIColor blueColor];
    [self.view addSubview:rotatedView];
    CGAffineTransform t = CGAffineTransformMakeRotation(M_PI_4);
    rotatedView.transform = t;
    CAShapeLayer *layer = [CAShapeLayer layer];
    [layer setFrame:rotatedView.frame];
    [self.view.layer addSublayer:layer];
    [layer setBorderColor:[UIColor blackColor].CGColor];
    [layer setBorderWidth:1];
    // Compute the rotated view's four corners in the superview's coordinates
    CGPoint p = CGPointMake(rotatedView.bounds.size.width / 2, rotatedView.bounds.size.height / 2);
    p.x = -p.x; p.y = -p.y;
    CGPoint tL = CGPointApplyAffineTransform(p, t);
    tL.x += rotatedView.center.x;
    tL.y += rotatedView.center.y;
    p.x = -p.x;
    CGPoint tR = CGPointApplyAffineTransform(p, t);
    tR.x += rotatedView.center.x;
    tR.y += rotatedView.center.y;
    p.y = -p.y;
    CGPoint bR = CGPointApplyAffineTransform(p, t);
    bR.x += rotatedView.center.x;
    bR.y += rotatedView.center.y;
    p.x = -p.x;
    CGPoint bL = CGPointApplyAffineTransform(p, t);
    bL.x += rotatedView.center.x;
    bL.y += rotatedView.center.y;
    // Check the non-rotated rect's edges: if all four rotated corners lie
    // outside any one edge, there is no intersection
    BOOL contains = YES;
    CGFloat value = nonRotatedView.frame.origin.x;
    if (tL.x < value && tR.x < value && bR.x < value && bL.x < value)
        contains = NO;
    value = nonRotatedView.frame.origin.y;
    if (tL.y < value && tR.y < value && bR.y < value && bL.y < value)
        contains = NO;
    value = nonRotatedView.frame.origin.x + nonRotatedView.frame.size.width;
    if (tL.x > value && tR.x > value && bR.x > value && bL.x > value)
        contains = NO;
    value = nonRotatedView.frame.origin.y + nonRotatedView.frame.size.height;
    if (tL.y > value && tR.y > value && bR.y > value && bL.y > value)
        contains = NO;
    if (contains == NO) {
        NSLog(@"no intersection 1");
        return;
    }
    // Check the rotated view's edges
    CGPoint rotatedVertexArray[] = {tL, tR, bR, bL, tL, tR};
    CGPoint nonRotatedVertexArray[4];
    nonRotatedVertexArray[0] = CGPointMake(nonRotatedView.frame.origin.x, nonRotatedView.frame.origin.y);
    nonRotatedVertexArray[1] = CGPointMake(nonRotatedView.frame.origin.x + nonRotatedView.frame.size.width, nonRotatedView.frame.origin.y);
    nonRotatedVertexArray[2] = CGPointMake(nonRotatedView.frame.origin.x + nonRotatedView.frame.size.width, nonRotatedView.frame.origin.y + nonRotatedView.frame.size.height);
    nonRotatedVertexArray[3] = CGPointMake(nonRotatedView.frame.origin.x, nonRotatedView.frame.origin.y + nonRotatedView.frame.size.height);
    NSInteger i, j;
    for (i = 0; i < 4; i++) {
        CGPoint first = rotatedVertexArray[i];
        CGPoint second = rotatedVertexArray[i + 1];
        CGPoint third = rotatedVertexArray[i + 2];
        CGPoint mainVector = CGPointMake(second.x - first.x, second.y - first.y);
        CGPoint selfVector = CGPointMake(third.x - first.x, third.y - first.y);
        BOOL sign;
        sign = [self crossProductOf:mainVector withPoint:selfVector];
        for (j = 0; j < 4; j++) {
            CGPoint otherPoint = nonRotatedVertexArray[j];
            CGPoint otherVector = CGPointMake(otherPoint.x - first.x, otherPoint.y - first.y);
            BOOL checkSign = [self crossProductOf:mainVector withPoint:otherVector];
            if (checkSign == sign)
                break;
            else if (j == 3)
                contains = NO;
        }
        if (contains == NO) {
            NSLog(@"no intersection 2");
            return;
        }
    }
    NSLog(@"intersection");
}

- (BOOL)crossProductOf:(CGPoint)point1 withPoint:(CGPoint)point2 {
    if ((point1.x * point2.y - point1.y * point2.x) >= 0)
        return YES;
    else
        return NO;
}
Hope this helps.
This can be done much more efficiently and easily than what has been suggested... and both the black and blue views can be rotated if need be.
Just convert the 4 corners of the rotated blueView to their locations in the superview, then convert those points to their locations in the rotated blackView, and check whether those points are within the blackView's bounds.
UIView *superview = blueView.superview;//Assuming blueView.superview and blackView.superview are the same...
CGPoint blueView_topLeft_inSuperview = [blueView convertPoint:CGPointMake(0, 0) toView:superview];
CGPoint blueView_topRight_inSuperview = [blueView convertPoint:CGPointMake(blueView.bounds.size.width, 0) toView:superview];
CGPoint blueView_bottomLeft_inSuperview = [blueView convertPoint:CGPointMake(0, blueView.bounds.size.height) toView:superview];
CGPoint blueView_bottomRight_inSuperview = [blueView convertPoint:CGPointMake(blueView.bounds.size.width, blueView.bounds.size.height) toView:superview];
CGPoint blueView_topLeft_inBlackView = [superview convertPoint:blueView_topLeft_inSuperview toView:blackView];
CGPoint blueView_topRight_inBlackView = [superview convertPoint:blueView_topRight_inSuperview toView:blackView];
CGPoint blueView_bottomLeft_inBlackView = [superview convertPoint:blueView_bottomLeft_inSuperview toView:blackView];
CGPoint blueView_bottomRight_inBlackView = [superview convertPoint:blueView_bottomRight_inSuperview toView:blackView];
BOOL collision = (CGRectContainsPoint(blackView.bounds, blueView_topLeft_inBlackView) ||
                  CGRectContainsPoint(blackView.bounds, blueView_topRight_inBlackView) ||
                  CGRectContainsPoint(blackView.bounds, blueView_bottomLeft_inBlackView) ||
                  CGRectContainsPoint(blackView.bounds, blueView_bottomRight_inBlackView));
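One hedged caveat, added here rather than part of the answer above: corner containment in one direction misses the case where a blackView corner lies inside blueView while no blueView corner is inside blackView, so the same check can be run in reverse as well:

// Hedged sketch: symmetric check, with only one corner shown for brevity.
CGPoint blackView_topLeft_inBlueView = [blackView convertPoint:CGPointMake(0, 0) toView:blueView];
collision = collision || CGRectContainsPoint(blueView.bounds, blackView_topLeft_inBlueView);
// ...repeat for blackView's other three corners...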

iOS UIImage Scale and Crop, scaling up actually making image smaller?

I'm using someone else's pinch gesture code for scaling, which works perfectly, and it scales the image in my photo-editing app. Once the user presses Done, I need the changes to be reflected and saved; in other words, I need the image to actually be zoomed in and cropped if someone used pinch to scale. I figured I could use the amount they scaled times the frame size for UIGraphicsBeginImageContext, but that strategy is not working: when the user scales the image and hits the Done button, the image gets saved smaller, because this now very large size is getting squeezed into the view, when what I really want is to crop off any leftovers and not do any fitting.
- (IBAction)pinchGest:(UIPinchGestureRecognizer *)sender {
    if (sender.state == UIGestureRecognizerStateEnded
        || sender.state == UIGestureRecognizerStateChanged) {
        NSLog(@"sender.scale = %f", sender.scale);
        CGFloat currentScale = self.activeImageView.frame.size.width / self.activeImageView.bounds.size.width;
        CGFloat newScale = currentScale * sender.scale;
        if (newScale < .5) {
            newScale = .5;
        }
        if (newScale > 4) {
            newScale = 4;
        }
        CGAffineTransform transform = CGAffineTransformMakeScale(newScale, newScale);
        self.activeImageView.transform = transform;
        scalersOfficialChange = newScale;
        sender.scale = 1;
    }
}
- (IBAction)doneMoverViewButtonPressed:(UIButton *)sender {
    // turn off ability to move & scale
    moverViewActive = NO;
    NSLog(@"%f %f", dragOfficialChange.x, dragOfficialChange.y);
    NSLog(@"%f", rotationOfficialChange);
    NSLog(@"%f", scalersOfficialChange);
    // problem area below...
    CGSize newSize = CGSizeMake(self.activeImageView.bounds.size.width * scalersOfficialChange,
                                self.activeImageView.bounds.size.height * scalersOfficialChange);
    UIGraphicsBeginImageContext(newSize);
    [self.activeImageView.image drawInRect:CGRectMake(dragOfficialChange.x, dragOfficialChange.y, self.layerContainerView.bounds.size.width, self.layerContainerView.bounds.size.height)];
    self.activeImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self hideMoveViewerAnimation];
    // resets activeImageView coords
    CGRect myFrame = self.layerContainerView.bounds;
    myFrame.origin.x = 0;
    myFrame.origin.y = 0;
    self.activeImageView.frame = myFrame;
    // reset changes values
    dragOfficialChange.x = 0;
    dragOfficialChange.y = 0;
    rotationOfficialChange = 0;
    scalersOfficialChange = 0;
}
First of all, can you make your question clearer? I gather you want to draw your image into a rect without squeezing it, am I right?
Then let's try this method:
// The method drawInRect:(CGRect) will scale your pic and squeeze it to fit the rect area,
// so if you don't want to scale your pic, you can use the method below instead:
[image drawAsPatternInRect:_rect];
// This method will not scale your image; it cuts off the needless part
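If the goal from the question is to actually crop off the leftovers after scaling, a minimal sketch using CGImageCreateWithImageInRect (the crop rect here is a placeholder; in the question's code it would be derived from scalersOfficialChange and dragOfficialChange):

// Hedged sketch: crop a UIImage to a rect given in image-pixel coordinates.
CGRect cropRect = CGRectMake(0, 0, 600, 600); // hypothetical crop region
CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:image.scale
                                      orientation:image.imageOrientation];
CGImageRelease(croppedRef);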

How to move UIImageView in order to always keep image's head forward?

Assume I have a UIImageView in a ViewController's view, and this UIImageView contains an image.
As an example, let it be a car image with the car's head pointing north by default.
I want to do some rotation and movement on the UIImageView (containing the car image).
I use the CGAffineTransformRotate function to rotate it:
CGAffineTransform newTransform = CGAffineTransformRotate(_ImageView.transform, angle);
Then I assign values to tx and ty of newTransform.
Since the values of tx and ty are in the UIImageView's coordinate system, that coordinate system won't be rotated.
My question is: is there any easy way to get the values of tx and ty such that the UIImageView moves straight in the direction of the image's head?
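A hedged sketch of one way to get this, not from the original post: CGAffineTransformTranslate concatenates its translation in the view's own (already rotated) coordinate system, so translating along -y always moves toward the image's head; equivalently, in the superview's coordinates, tx = d * sin(angle) and ty = -d * cos(angle) for a car that points north at angle 0.

// Hedged sketch: move the car forward by `distance` points (hypothetical variable).
// Option 1: translate in the view's own rotated coordinates; -y is the head direction.
_ImageView.transform = CGAffineTransformTranslate(_ImageView.transform, 0, -distance);

// Option 2: the equivalent move expressed in the superview's coordinates,
// where `angle` is the accumulated rotation (hypothetical variable).
CGFloat tx = distance * sinf(angle);
CGFloat ty = -distance * cosf(angle);
_ImageView.center = CGPointMake(_ImageView.center.x + tx, _ImageView.center.y + ty);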
Use this method to move the image view:
- (void)move:(id)sender {
    CGPoint translatedPoint = [(UIPanGestureRecognizer *)sender translationInView:self];
    if ([(UIPanGestureRecognizer *)sender state] == UIGestureRecognizerStateBegan) {
        _firstX = [self center].x;
        _firstY = [self center].y;
    }
    translatedPoint = CGPointMake(_firstX + translatedPoint.x, _firstY + translatedPoint.y);
    [self setCenter:translatedPoint];
    [self showOverlayWithFrame:self.frame];
}
