I have an image view that is rotated by π/4 radians every time it is tapped.
That works fine with this code:
- (void)handleTap:(UITapGestureRecognizer *)tapRecognize
{
    if (tapRecognize == tapRecognizer)
    {
        // Rotate a further 45° relative to the current transform
        CGAffineTransform transform = CGAffineTransformRotate(imageview.transform, (M_PI / 4));
        [imageview setTransform:transform];
    }
}
The tapRecognizer is assigned to the image view.
Now, I want to check if the imageview has been rotated. This is my code:
// rot45 was created with CGAffineTransformMakeRotation(M_PI / 4)
if (CGAffineTransformEqualToTransform(imageview.transform, rot45))
{
    NSLog(@"Rotated");
}
That works fine for the first tap, when it has been rotated 45°. But I want to be able to check when it has been tapped two times, which means it has been rotated 90°, and so on. I want a different action for each rotation angle. How can I check that?
Sorry if the question is unclear.
Devise a scheme that maps the view's tag property to its rotation: on every rotation, update the tag value, then act on the tag instead of comparing transforms.
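For example, a minimal sketch of that scheme, reusing the imageview and tapRecognizer ivars from the question (the modulo-8 wrap and the switch are just one way to react to each angle):

- (void)handleTap:(UITapGestureRecognizer *)tapRecognize
{
    if (tapRecognize == tapRecognizer)
    {
        imageview.transform = CGAffineTransformRotate(imageview.transform, M_PI / 4);

        // Count taps in the tag; 8 taps of 45° make a full turn, so wrap back to 0
        imageview.tag = (imageview.tag + 1) % 8;

        switch (imageview.tag)
        {
            case 1:
                NSLog(@"Rotated 45°");
                break;
            case 2:
                NSLog(@"Rotated 90°");
                break;
            default:
                NSLog(@"Rotated %ld°", (long)(imageview.tag * 45));
                break;
        }
    }
}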
This is what I currently have:
func handlePinching(recognizer : UIPinchGestureRecognizer) {
    self.layer.transform = CATransform3DMakeAffineTransform(CGAffineTransformScale(self.transform, recognizer.scale, recognizer.scale))
    recognizer.scale = 1.0
}
Using self.view.transform for that also makes everything bigger. I want it to zoom "internally" (I really don't know how to explain it).
So I'm not 100% sure I understand the question, but I think I get it.
For example, you want to zoom in on an image without zooming the other elements of the UI (nav bar, buttons, ...), right?
I guess in your example you're in a view controller, which means that when you change the scale of self.view you'll zoom everything. You have to apply the scale to the specific view that you want to zoom.
In the example below, the image is inside a UIImageView, and this imageView is a subview of self.view. You just apply the transform to this imageView.
Moreover, I think you're a little confused about how to zoom. Assuming the view you want to zoom is imageView, you just need to do:
imageView.transform = CGAffineTransformMakeScale(recognizer.scale,recognizer.scale)
I hope this answers your question; let me know if anything is unclear.
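If it helps, here is an Objective-C sketch of the same idea as a complete handler (handlePinch: and the imageView property are assumed names; the key point is that the scale is applied to the image view only, and recognizer.scale is reset so each callback is incremental):

- (void)handlePinch:(UIPinchGestureRecognizer *)recognizer
{
    // Scale only the image view, so the nav bar, buttons, etc. stay untouched
    self.imageView.transform = CGAffineTransformScale(self.imageView.transform,
                                                      recognizer.scale,
                                                      recognizer.scale);

    // Reset so the next callback delivers only the change since this one
    recognizer.scale = 1.0;
}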
I have a UIScrollView with a UIImageView (plus other subviews) inside it, and I need to rotate the whole content, so I took this approach:
imageView.transform = CGAffineTransformMakeRotation([self.orientation floatValue]);
Good, works perfectly.
Since the imageView is inside a scroll view, I also need to set zoomScale in order to resize the image inside it, which I do like this:
- (void)updateZoom {
    const float minZoom = MIN(self.view.bounds.size.width / self.imageView.image.size.width,
                              self.view.bounds.size.height / self.imageView.image.size.height);
    if (minZoom > 1) {
        return;
    }
    self.scrollView.minimumZoomScale = minZoom;
    self.scrollView.zoomScale = minZoom;
}
updateZoom has the effect of "resetting" the initial transformation, so the image comes back to its original orientation.
In general, each time I modify the zoomScale property, the orientation is reset.
How can I keep both the orientation and the zoomScale?
I suppose I need to do something in the scroll view delegate:
- (void)scrollViewDidZoom:(UIScrollView *)scrollView;
I just put a repo on GitHub that might help you. It is in Swift, but it should do the trick:
PhotoSlideShow-Swift
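Short of adopting that project, one possible workaround (only a sketch, and it assumes imageView is the view you return from viewForZoomingInScrollView:) is to keep the rotation and the zoom on different views, so setting zoomScale no longer overwrites the rotation:

// Rotate the scroll view (or a plain container view wrapped around it) instead of the image view.
// zoomScale only rewrites the transform of the zooming view (imageView here), so the two
// transforms no longer fight over the same property.
self.scrollView.transform = CGAffineTransformMakeRotation([self.orientation floatValue]);
[self updateZoom]; // still only adjusts imageView through zoomScale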
I'm using UIKit Dynamics to push a UIView off screen, similar to how Tweetbot performs it in their image overlay.
I use a UIPanGestureRecognizer, and when they end the gesture, if they exceed the velocity threshold it goes offscreen.
[self.animator removeBehavior:self.panAttachmentBehavior];
CGPoint velocity = [panGestureRecognizer velocityInView:self.view];
if (fabs(velocity.y) > 100) {
    self.pushBehavior = [[UIPushBehavior alloc] initWithItems:@[self.scrollView] mode:UIPushBehaviorModeInstantaneous];
    [self.pushBehavior setTargetOffsetFromCenter:centerOffset forItem:self.scrollView];
    self.pushBehavior.active = YES;
    self.pushBehavior.action = ^{
        CGPoint lowestPoint = CGPointMake(CGRectGetMinX(self.imageView.bounds), CGRectGetMaxY(self.imageView.bounds));
        CGPoint convertedPoint = [self.imageView convertPoint:lowestPoint toView:self.view];
        if (!CGRectIntersectsRect(self.view.bounds, self.imageView.frame)) {
            NSLog(@"outside");
        }
    };
    CGFloat area = CGRectGetWidth(self.scrollView.bounds) * CGRectGetHeight(self.scrollView.bounds);
    CGFloat UIKitNewtonScaling = 5000000.0;
    CGFloat scaling = area / UIKitNewtonScaling;
    CGVector pushDirection = CGVectorMake(velocity.x * scaling, velocity.y * scaling);
    self.pushBehavior.pushDirection = pushDirection;
    [self.animator addBehavior:self.pushBehavior];
}
I'm having an immense amount of trouble detecting when my view actually completely disappears from the screen.
My view is set up rather simply. It's a UIScrollView with a UIImageView within it. Both are just within a UIViewController. I move the UIScrollView with the pan gesture, but want to detect when the image view is off screen.
In the action block I can monitor the view as it moves, and I've tried two methods:
1. Each time the action block is called, find the lowest point in y for the image view. Convert that to the view controller's reference point, and I was just trying to see when the y value of the converted point was less than 0 (negative) for when I "threw" the view upward. (This means the lowest point in the view has crossed into negative y values for the view controller's reference point, which is above the visible area of the view controller.)
This worked okay, except the x value I gave to lowestPoint really messes everything up. If I choose the minimum x, that is, the furthest to the left, it will only tell me when the bottom-left corner of the UIView has gone off screen. Often, since the view can be rotating depending on where the user pushes from, the bottom right may go off screen after the left, making it detect too early. If I choose the middle x, it will only tell me when the middle of the bottom edge has gone off, etc. I can't seem to figure out how to tell it "just get me the absolute lowest y value".
2. I tried CGRectIntersectsRect as shown in the code above, and it never says it's outside, even seconds after it went shooting outside of any visible area.
What am I doing wrong? How should I be detecting it no longer being visible?
If you take a look at the UIDynamicItem protocol properties, you can see they are center, bounds and transform. So UIDynamicAnimator actually modifies only these three properties. I'm not really sure what happens with the frame during the Dynamics animations, but from my experience I can tell that its value inside the action block is not always reliable. Maybe it's because the frame is actually calculated by CALayer based on center, transform and bounds, as described in this excellent blog post.
But you can certainly make use of center and bounds in the action block. The following code worked for me in a case similar to yours:
CGPoint parentCenter = CGPointMake(CGRectGetMidX(self.view.bounds), CGRectGetMidY(self.view.bounds));
self.pushBehavior.action = ^{
    CGFloat dx = self.imageView.center.x - parentCenter.x;
    CGFloat dy = self.imageView.center.y - parentCenter.y;
    CGFloat distance = sqrtf(dx * dx + dy * dy);
    if (distance > MIN(parentCenter.y + CGRectGetHeight(self.imageView.bounds), parentCenter.x + CGRectGetWidth(self.imageView.bounds))) {
        NSLog(@"Off screen!");
    }
};
I want to rotate a UIImageView, and I am using this code:
- (IBAction)rotateImageView:(id)sender {
    photoView.transform = CGAffineTransformMakeRotation(M_PI); // rotation in radians
}
The code works: when I press the button, the image rotates 180 degrees, but if I press it again it doesn't rotate back to its original position; it just stays still. Why?
Because you are applying the same transform to the view again, without taking into account any existing transform.
Try this:
photoView.transform = CGAffineTransformRotate(photoView.transform, M_PI);
You have to rotate the imageView relative to its current rotation to bring it back to the original orientation.
Use this code:
photoView.transform = CGAffineTransformRotate(photoView.transform, M_PI);
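Put together, a minimal sketch of the button action (same photoView outlet as in the question), which toggles between 0° and 180° on each press:

- (IBAction)rotateImageView:(id)sender
{
    // Rotate relative to the current transform; two presses add up to 2π, i.e. back to the start
    photoView.transform = CGAffineTransformRotate(photoView.transform, M_PI);
}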
I'm working on a drawing app.
I want the user to be able to "drop" a shape on the screen and then move, resize or rotate it as desired.
The problem is with the rotation. I have the moving and resizing working fine.
I did this before with a rather complex and memory/processor-intensive process, which I am now trying to improve.
I've searched and searched but haven't found an answer similar to what I'm trying to do.
Basically, let's say the user drops a square on the "surface". Then, they tap it and get some handles. They can touch anywhere and pan to move the square around (working already), touch and drag on a resize handle to resize the square (working already), or grab the rotation handle to have the square rotate around its center.
I've looked into drawing the square using UIBezierPath or just having it be a subclass of UIView that I fill.
In either case, I'm trying to rotate the UIView itself, not some contents inside. Every time I try to rotate the view, either nothing happens, the view vacates the screen or it rotates just a little bit and stops.
Here's some of the code I've tried (this doesn't work, and I've tried a lot of different approaches to this):
- (void)rotateByAngle:(CGFloat)angle
{
    CGPoint cntr = [self center];
    CGAffineTransform move = CGAffineTransformMakeTranslation(-1 * cntr.x, -1 * cntr.y);
    [[self path] applyTransform:move];
    CGAffineTransform rotate = CGAffineTransformMakeRotation(angle * M_PI / 180.0);
    [[self path] applyTransform:rotate];
    [self setNeedsDisplay];
    CGAffineTransform moveback = CGAffineTransformMakeTranslation(cntr.x, cntr.y);
    [[self path] applyTransform:moveback];
}
In case it isn't obvious, the thinking here is to move the view to the origin (0,0), rotate around that point, and then move it back.
In case you're wondering, "angle" is calculated correctly. I've also wrapped the code above in a [UIView beginAnimations:nil context:NULL]/[UIView commitAnimations] block.
Is it possible to rotate a UIView a "custom" amount? I've seen/done it before where I animate a control to spin, but in those examples, the control always ended up "square" (i.e., it rotated 1 or more full circles and came back to its starting orientation).
Is it possible to perform this rotation "real-time" in response to UITouches? Do I need to draw the square as an item in the layer of the UIView and rotate the layer instead?
Just so you know, what I had working before was a shape drawn by a set of lines or UIBezierPaths. I would apply a CGAffineTransform to the data and then call the drawRect: method, which would re-draw the object inside of a custom UIView. This UIView would host quite a number of these items, all of which would need to be re-drawn anytime one of them needed it.
So, I'm trying to make the app more performant by creating a bunch of UIView subclasses, which will only get a command to re-draw when the user does something with them. Apple's Keynote for the iPad seems to accomplish this using UIGestureRecognizers, since you have to use two fingers to rotate an added shape. Is this the way to go?
Thoughts?
Thanks!
- (void)rotate {
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:1.0];
    [UIView setAnimationBeginsFromCurrentState:YES];
    myView.alpha = 1;
    CGAffineTransform transform = CGAffineTransformRotate(myView.transform, 0.5 * M_PI);
    [myView setUserInteractionEnabled:YES];
    myView.transform = transform;
    [UIView commitAnimations];
}
This might help.
For just a simple rotation you might leave out the animation:
- (void)rotate:(CGFloat)angle
{
    CGAffineTransform transform = CGAffineTransformRotate(myView.transform, angle * M_PI / 180.0);
    myView.transform = transform;
}
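Since the question also asks about rotating in real time from touches, here is a minimal sketch using a UIRotationGestureRecognizer, roughly the two-finger rotation the question mentions seeing in Keynote (shapeView is a placeholder for whatever view holds the dropped shape):

// During setup, attach the recognizer to the shape's view
UIRotationGestureRecognizer *rotationRecognizer =
    [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(handleRotation:)];
[shapeView addGestureRecognizer:rotationRecognizer];

- (void)handleRotation:(UIRotationGestureRecognizer *)recognizer
{
    // Rotate the touched view itself; the transform pivots around the view's center by default
    recognizer.view.transform = CGAffineTransformRotate(recognizer.view.transform, recognizer.rotation);

    // Reset so the next callback reports only the rotation since this one
    recognizer.rotation = 0.0;
}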
I use a simple NSTimer to control an animation.
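For what it's worth, a minimal sketch of that approach, with hypothetical names, driving the rotate: method above from a repeating NSTimer:

// Somewhere during setup (rotationTimer is a hypothetical property)
self.rotationTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 60.0
                                                      target:self
                                                    selector:@selector(tick:)
                                                    userInfo:nil
                                                     repeats:YES];

- (void)tick:(NSTimer *)timer
{
    [self rotate:1.0]; // one degree per tick, via the rotate: helper above
}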