I have two UIViews, one of which is rotated every 0.01 seconds using the following code:
self.rectView.transform = CGAffineTransformRotate(self.rectView.transform, .05);
Now, I want to be able to tell if another UIView, called view, intersects rectView. I used this code:
if(CGRectIntersectsRect(self.rectView.frame, view.frame)) {
//Intersection
}
There is a problem with this, however, as you probably know. Here is a screenshot:
In this case, a collision is detected, even though obviously they are not touching. I have looked around, but I cannot seem to find real code to detect this collision. How can this be done? Working code for detecting the collision in this case would be greatly appreciated! Or maybe would there be a better class to be using other than UIView?
When you rotate a view, its bounds won't change, but its frame does.
So, for my view with a blue backgroundColor,
the initial frame I set was
frame = (30, 150, 150, 35);
bounds = {{0, 0}, {150, 35}};
but after rotating it by 45 degrees, the frame changed to
frame = (39.5926, 102.093, 130.815, 130.815);
bounds = {{0, 0}, {150, 35}};
This is because the frame always returns the smallest axis-aligned rectangle that encloses the view.
So, in your case, even though the two views do not look like they intersect, their frames do.
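You can see this for yourself with a quick check (my own snippet, not from the original answer; the numbers correspond to the 150 x 35 view above):
// Demonstrates that `frame` grows to the enclosing box after a rotation,
// while `bounds` stays the same.
UIView *v = [[UIView alloc] initWithFrame:CGRectMake(30, 150, 150, 35)];
NSLog(@"frame before: %@", NSStringFromCGRect(v.frame));   // {{30, 150}, {150, 35}}
v.transform = CGAffineTransformMakeRotation(M_PI_4);       // rotate 45 degrees
NSLog(@"frame after:  %@", NSStringFromCGRect(v.frame));   // roughly {{39.6, 102.1}, {130.8, 130.8}}
NSLog(@"bounds after: %@", NSStringFromCGRect(v.bounds));  // still {{0, 0}, {150, 35}}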
To solve this, you can use the separating axis test.
If you want to learn more about it, there is a link here.
I tried to solve it and finally got a working solution.
If you would like to check it, the code is below.
Copy and paste it into an empty project to try it out.
In the .m file:
@implementation ViewController{
UIView *nonRotatedView;
UIView *rotatedView;
}
- (void)viewDidLoad
{
[super viewDidLoad];
nonRotatedView =[[UIView alloc] initWithFrame:CGRectMake(120, 80, 150, 40)];
nonRotatedView.backgroundColor =[UIColor blackColor];
[self.view addSubview:nonRotatedView];
rotatedView =[[UIView alloc] initWithFrame:CGRectMake(30, 150, 150, 35)];
rotatedView.backgroundColor =[UIColor blueColor];
[self.view addSubview:rotatedView];
CGAffineTransform t=CGAffineTransformMakeRotation(M_PI_4);
rotatedView.transform=t;
CAShapeLayer *layer =[CAShapeLayer layer];
[layer setFrame:rotatedView.frame];
[self.view.layer addSublayer:layer];
[layer setBorderColor:[UIColor blackColor].CGColor];
[layer setBorderWidth:1];
CGPoint p=CGPointMake(rotatedView.bounds.size.width/2, rotatedView.bounds.size.height/2);
p.x = -p.x;p.y=-p.y;
CGPoint tL =CGPointApplyAffineTransform(p, t);
tL.x +=rotatedView.center.x;
tL.y +=rotatedView.center.y;
p.x = -p.x;
CGPoint tR =CGPointApplyAffineTransform(p, t);
tR.x +=rotatedView.center.x;
tR.y +=rotatedView.center.y;
p.y=-p.y;
CGPoint bR =CGPointApplyAffineTransform(p, t);
bR.x +=rotatedView.center.x;
bR.y +=rotatedView.center.y;
p.x = -p.x;
CGPoint bL =CGPointApplyAffineTransform(p, t);
bL.x +=rotatedView.center.x;
bL.y +=rotatedView.center.y;
//check for edges of nonRotated Rect's edges
BOOL contains=YES;
CGFloat value=nonRotatedView.frame.origin.x;
if(tL.x<value && tR.x<value && bR.x<value && bL.x<value)
contains=NO;
value=nonRotatedView.frame.origin.y;
if(tL.y<value && tR.y<value && bR.y<value && bL.y<value)
contains=NO;
value=nonRotatedView.frame.origin.x+nonRotatedView.frame.size.width;
if(tL.x>value && tR.x>value && bR.x>value && bL.x>value)
contains=NO;
value=nonRotatedView.frame.origin.y+nonRotatedView.frame.size.height;
if(tL.y>value && tR.y>value && bR.y>value && bL.y>value)
contains=NO;
if(contains==NO){
NSLog(#"no intersection 1");
return;
}
//check for roatedView's edges
CGPoint rotatedVertexArray[]={tL,tR,bR,bL,tL,tR};
CGPoint nonRotatedVertexArray[4];
nonRotatedVertexArray[0]=CGPointMake(nonRotatedView.frame.origin.x,nonRotatedView.frame.origin.y);
nonRotatedVertexArray[1]=CGPointMake(nonRotatedView.frame.origin.x+nonRotatedView.frame.size.width,nonRotatedView.frame.origin.y);
nonRotatedVertexArray[2]=CGPointMake(nonRotatedView.frame.origin.x+nonRotatedView.frame.size.width,nonRotatedView.frame.origin.y+nonRotatedView.frame.size.height);
nonRotatedVertexArray[3]=CGPointMake(nonRotatedView.frame.origin.x,nonRotatedView.frame.origin.y+nonRotatedView.frame.size.height);
NSInteger i,j;
for (i=0; i<4; i++) {
CGPoint first=rotatedVertexArray[i];
CGPoint second=rotatedVertexArray[i+1];
CGPoint third=rotatedVertexArray[i+2];
CGPoint mainVector =CGPointMake(second.x-first.x, second.y-first.y);
CGPoint selfVector =CGPointMake(third.x-first.x, third.y-first.y);
BOOL sign;
sign=[self crossProductOf:mainVector withPoint:selfVector];
for (j=0; j<4; j++) {
CGPoint otherPoint=nonRotatedVertexArray[j];
CGPoint otherVector = CGPointMake(otherPoint.x-first.x, otherPoint.y-first.y);
BOOL checkSign=[self crossProductOf:mainVector withPoint:otherVector];
if(checkSign==sign)
break;
else if (j==3)
contains=NO;
}
if(contains==NO){
NSLog(#"no intersection 2");
return;
}
}
NSLog(#"intersection");
}
//Returns the sign of the 2D cross product: YES if point2 lies on the non-negative side of point1.
-(BOOL)crossProductOf:(CGPoint)point1 withPoint:(CGPoint)point2{
return (point1.x*point2.y-point1.y*point2.x)>=0;
}
Hope this helps.
This can be done much more efficiently and easily than what has been suggested... and both the black and blue views can be rotated if need be.
Just convert the 4 corners of the rotated blueView to their locations in the superview, then convert those points to their locations in the (possibly rotated) blackView, and finally check whether any of those points fall within blackView's bounds.
UIView *superview = blueView.superview;//Assuming blueView.superview and blackView.superview are the same...
CGPoint blueView_topLeft_inSuperview = [blueView convertPoint:CGPointMake(0, 0) toView:superview];
CGPoint blueView_topRight_inSuperview = [blueView convertPoint:CGPointMake(blueView.bounds.size.width, 0) toView:superview];
CGPoint blueView_bottomLeft_inSuperview = [blueView convertPoint:CGPointMake(0, blueView.bounds.size.height) toView:superview];
CGPoint blueView_bottomRight_inSuperview = [blueView convertPoint:CGPointMake(blueView.bounds.size.width, blueView.bounds.size.height) toView:superview];
CGPoint blueView_topLeft_inBlackView = [superview convertPoint:blueView_topLeft_inSuperview toView:blackView];
CGPoint blueView_topRight_inBlackView = [superview convertPoint:blueView_topRight_inSuperview toView:blackView];
CGPoint blueView_bottomLeft_inBlackView = [superview convertPoint:blueView_bottomLeft_inSuperview toView:blackView];
CGPoint blueView_bottomRight_inBlackView = [superview convertPoint:blueView_bottomRight_inSuperview toView:blackView];
BOOL collision = (CGRectContainsPoint(blackView.bounds, blueView_topLeft_inBlackView) ||
CGRectContainsPoint(blackView.bounds, blueView_topRight_inBlackView) ||
CGRectContainsPoint(blackView.bounds, blueView_bottomLeft_inBlackView) ||
CGRectContainsPoint(blackView.bounds, blueView_bottomRight_inBlackView));
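One small aside (my own note, not part of the original answer): convertPoint:toView: can convert directly between any two views in the same window, so the intermediate superview step can be collapsed when blueView and blackView share a window, for example:
// Hypothetical shortcut: convert a blueView corner straight into blackView's coordinate space
CGPoint blueView_topLeft_inBlackView = [blueView convertPoint:CGPointMake(0, 0) toView:blackView];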
I need to calculate the visible CGRect of a UIView subview, in the coordinates of the original view. I've got it working if the scale is 1, but if one of the superviews or the view itself is scaled (pinch), the visible CGRect origin is offset slightly.
This works when the scale of the views is 1 or the view is a subview of the root view:
// return the part of the passed view that is visible
// TODO: figure out why result origin is wrong for scaled subviews
//
- (CGRect)getVisibleRect:(UIView *)view {
// get the root view controller (and its view is vc.view)
UIViewController *vc = UIApplication.sharedApplication.keyWindow.rootViewController;
// get the view's frame in the root view's coordinate system
CGRect frame = [vc.view convertRect:view.frame fromView:view.superview];
// get the intersection of the root view bounds and the passed view frame
CGRect intersection = CGRectIntersection(vc.view.bounds, frame);
// adjust the intersection coordinates thru any nested views
UIView *loopView = view;
do {
intersection = [loopView convertRect:intersection fromView:loopView.superview];
loopView = loopView.superview;
} while (loopView != vc.view);
return intersection; // may be same as the original view frame
}
When a subview is scaled, the size of the resultant view is correct, but the origin is offset by a small amount. It appears that the convertRect does not calculate the origin properly for scaled subviews.
I tried adjusting the origin relative to the X/Y transform scale but I could not get the calculation correct. Perhaps someone can help?
To save time, here is a complete test ViewController.m, where a box with an X is drawn on the visible part of the views - just create a reset button in the Main.storyboard and connect it to the reset method:
//
// ViewController.m
// VisibleViewDemo
//
// Copyright © 2018 ByteSlinger. All rights reserved.
//
#import "ViewController.h"
CG_INLINE void drawLine(UIView *view,CGPoint point1,CGPoint point2, UIColor *color, NSString *layerName) {
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:point1];
[path addLineToPoint:point2];
CAShapeLayer *shapeLayer = [CAShapeLayer layer];
shapeLayer.path = [path CGPath];
shapeLayer.strokeColor = color.CGColor;
shapeLayer.lineWidth = 2.0;
shapeLayer.fillColor = [UIColor clearColor].CGColor;
shapeLayer.name = layerName;
[view.layer addSublayer:shapeLayer];
}
CG_INLINE void removeShapeLayers(UIView *view,NSString *layerName) {
if (view.layer.sublayers.count > 0) {
for (CALayer *layer in [view.layer.sublayers copy]) {
if ([layer.name isEqualToString:layerName]) {
[layer removeFromSuperlayer];
}
}
}
}
CG_INLINE void drawXBox(UIView *view, CGRect rect,UIColor *color) {
NSString *layerName = #"xbox";
removeShapeLayers(view, layerName);
CGPoint topLeft = CGPointMake(rect.origin.x,rect.origin.y);
CGPoint topRight = CGPointMake(rect.origin.x + rect.size.width,rect.origin.y);
CGPoint bottomLeft = CGPointMake(rect.origin.x, rect.origin.y + rect.size.height);
CGPoint bottomRight = CGPointMake(rect.origin.x + rect.size.width, rect.origin.y + rect.size.height);
drawLine(view,topLeft,topRight,color,layerName);
drawLine(view,topRight,bottomRight,color,layerName);
drawLine(view,topLeft,bottomLeft,color,layerName);
drawLine(view,bottomLeft,bottomRight,color,layerName);
drawLine(view,topLeft,bottomRight,color,layerName);
drawLine(view,topRight,bottomLeft,color,layerName);
}
@interface ViewController ()
@end
@implementation ViewController
UIView *view1;
UIView *view2;
UIView *view3;
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
CGFloat width = [UIScreen mainScreen].bounds.size.width / 2;
CGFloat height = [UIScreen mainScreen].bounds.size.height / 4;
view1 = [[UIView alloc] initWithFrame:CGRectMake(width / 2, height / 2, width, height)];
view1.backgroundColor = UIColor.yellowColor;
[self.view addSubview:view1];
[self addGestures:view1];
view2 = [[UIView alloc] initWithFrame:CGRectMake(width / 2, height / 2 + height + 16, width, height)];
view2.backgroundColor = UIColor.greenColor;
[self.view addSubview:view2];
[self addGestures:view2];
view3 = [[UIView alloc] initWithFrame:CGRectMake(10, 10, width / 2, height / 2)];
view3.backgroundColor = [UIColor.blueColor colorWithAlphaComponent:0.5];
[view1 addSubview:view3]; // this one will behave differently
[self addGestures:view3];
}
- (void)viewWillLayoutSubviews {
[super viewWillLayoutSubviews];
[self checkOnScreen:view1];
[self checkOnScreen:view2];
[self checkOnScreen:view3];
}
- (IBAction)reset:(id)sender {
view1.transform = CGAffineTransformIdentity;
view2.transform = CGAffineTransformIdentity;
view3.transform = CGAffineTransformIdentity;
[self.view setNeedsLayout];
}
- (void)addGestures:(UIView *)view {
UIPanGestureRecognizer *panGestureRecognizer = [[UIPanGestureRecognizer alloc]
initWithTarget:self action:@selector(handlePan:)];
[view addGestureRecognizer:panGestureRecognizer];
UIPinchGestureRecognizer *pinchGestureRecognizer = [[UIPinchGestureRecognizer alloc]
initWithTarget:self action:@selector(handlePinch:)];
[view addGestureRecognizer:pinchGestureRecognizer];
}
// return the part of the passed view that is visible
- (CGRect)getVisibleRect:(UIView *)view {
// get the root view controller (and its view is vc.view)
UIViewController *vc = UIApplication.sharedApplication.keyWindow.rootViewController;
// get the view's frame in the root view's coordinate system
CGRect frame = [vc.view convertRect:view.frame fromView:view.superview];
// get the intersection of the root view bounds and the passed view frame
CGRect intersection = CGRectIntersection(vc.view.bounds, frame);
// adjust the intersection coordinates thru any nested views
UIView *loopView = view;
do {
intersection = [loopView convertRect:intersection fromView:loopView.superview];
loopView = loopView.superview;
} while (loopView != vc.view);
return intersection; // may be same as the original view
}
- (void)checkOnScreen:(UIView *)view {
CGRect visibleRect = [self getVisibleRect:view];
if (CGRectIsNull(visibleRect)) {
visibleRect = CGRectZero;
}
drawXBox(view,visibleRect,UIColor.blackColor);
}
//
// Pinch (resize) an image on the ViewController View
//
- (IBAction)handlePinch:(UIPinchGestureRecognizer *)recognizer {
static CGAffineTransform initialTransform;
if (recognizer.state == UIGestureRecognizerStateBegan) {
[self.view bringSubviewToFront:recognizer.view];
initialTransform = recognizer.view.transform;
} else if (recognizer.state == UIGestureRecognizerStateEnded) {
} else {
recognizer.view.transform = CGAffineTransformScale(initialTransform,recognizer.scale,recognizer.scale);
[self checkOnScreen:recognizer.view];
[self.view setNeedsLayout]; // update subviews
}
}
- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer {
static CGAffineTransform initialTransform;
if (recognizer.state == UIGestureRecognizerStateBegan) {
[self.view bringSubviewToFront:recognizer.view];
initialTransform = recognizer.view.transform;
} else if (recognizer.state == UIGestureRecognizerStateEnded) {
} else {
//get the translation amount in x,y
CGPoint translation = [recognizer translationInView:recognizer.view];
recognizer.view.transform = CGAffineTransformTranslate(initialTransform,translation.x,translation.y);
[self checkOnScreen:recognizer.view];
[self.view setNeedsLayout]; // update subviews
}
}
@end
So you need to know the real visible frame of a view that is somehow derived from bounds+center+transform and calculate everything else from that, instead of the ordinary frame value. This means you'll also have to recreate convertRect:fromView: to be based on that. I always sidestepped the problem by using transform only for short animations where such calculations are not necessary. Thinking about coding such a -getVisibleRect: method makes me want to run away screaming ;)
What is a frame?
The frame property is derived from center and bounds.
Example:
center is (60,50)
bounds is (0,0,100,100)
=> frame is (10,0,100,100)
Now you change the frame to (10,20,100,100). Because the size of the view did not change, this results only in a change to the center. The new center is now (60,70).
How about transform?
Say you now transform the view by scaling it to 50%.
=> the view is now half its previous size, while still keeping the same center. It looks like the new frame is (35,45,50,50). However, the real values are:
center is still (60,50): this is expected
bounds is still (0,0,100,100): this should be expected too
frame is still (10,20,100,100): this is somewhat counterintuitive
frame is a calculated property, and it doesn't care at all about the current transform. This means that the value of frame is meaningless whenever the transform is not the identity transform. This is even documented behaviour: Apple calls the value of frame "undefined" in this case.
Consequences
This has the additional consequence that methods such as convertRect:fromView: do not work properly when non-identity transforms are involved, because all of these methods rely on either the frame or the bounds of views, and those break as soon as transforms are involved.
What can be done?
Say you have three views:
view1 (no transform)
view2 (scale transform 50%)
view3 (no transform)
and you want to know the coordinates of view3 from the point of view of view1.
From the point of view of view2, view3 has frame view3.frame. Easy.
From the point of view of view1, view2 does not have frame view2.frame; its visible frame is a rectangle with size view2.bounds.size / 2 and center view2.center.
To get this right you need some basic linear algebra (matrix multiplications). (And don't forget the anchorPoint.) A rough sketch of the idea is below.
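As a minimal sketch (my own illustration, not from the original answer, and assuming the default layer anchorPoint of (0.5, 0.5)), the rectangle a transformed view really occupies in its superview can be computed from bounds, center and transform instead of frame:
// Sketch only: the bounding box a transformed view occupies in its superview,
// ignoring the unreliable `frame` property. Assumes anchorPoint is (0.5, 0.5).
CGRect visibleFrameOfView(UIView *v) {
    CGRect b = v.bounds;
    // move the bounds' center to the origin, apply the view's transform,
    // then move the result so its center lands on v.center
    CGAffineTransform t = CGAffineTransformMakeTranslation(-CGRectGetMidX(b), -CGRectGetMidY(b));
    t = CGAffineTransformConcat(t, v.transform);
    t = CGAffineTransformConcat(t, CGAffineTransformMakeTranslation(v.center.x, v.center.y));
    return CGRectApplyAffineTransform(b, t);
}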
I hope it helps..
What can be done for real?
In your question you said that there is an offset. Maybe you can just calculate the error now? The error should be something like 0.5 * (1 - scale) * bounds.size. If you can calculate the error, you can subtract it and call it a day :)
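For a pure scale transform, that correction might look roughly like this (my assumption of how to apply the formula above; the scale factors are read from the transform, and the sign may need flipping depending on which way the origin is off):
// Assumes the view only has a scale transform (no rotation),
// so transform.a / transform.d hold the x / y scale factors.
CGFloat scaleX = view.transform.a;
CGFloat scaleY = view.transform.d;
intersection.origin.x -= 0.5 * (1.0 - scaleX) * view.bounds.size.width;
intersection.origin.y -= 0.5 * (1.0 - scaleY) * view.bounds.size.height;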
Thanks to @Michael for putting in so much effort in his answer. It didn't solve the problem, but it made me think some more and try some other things.
And voila, I tried something that I'm certain I had tried before, but this time I started with my latest code. It turns out a simple solution did the trick: the built-in UIView convertRect:fromView: and convertRect:toView: worked as expected when used together.
I apologize to anyone who has spent time on this; I'm humbled by how much time I spent on it myself. I must have made a mistake somewhere when I tried this before, because it didn't work then. But this works very well now:
// return the part of the passed view that is visible
- (CGRect)getVisibleRect:(UIView *)view {
// get the root view controller (and it's view is vc.view)
UIViewController *vc = UIApplication.sharedApplication.keyWindow.rootViewController;
// get the view's frame in the root view's coordinate system
CGRect rootRect = [vc.view convertRect:view.frame fromView:view.superview];
// get the intersection of the root view bounds and the passed view frame
CGRect rootVisible = CGRectIntersection(vc.view.bounds, rootRect);
// convert the rect back to the initial view's coordinate system
CGRect visible = [view convertRect:rootVisible fromView:vc.view];
return visible; // may be same as the original view frame
}
If someone uses the ViewController.m from my question, just replace the getVisibleRect: method with this one and it will work very nicely.
NOTE: I tried rotating the view and the visible rect is rotated too because I displayed it on the view itself. I guess I could reverse whatever the view rotation is on the shape layers, but that's for another day!
I want to drag a UIView to the bottom of the screen with a pan gesture, and the view's alpha should scale down to zero as it reaches the bottom of the screen.
And vice versa: when I drag the view back upwards, the alpha should scale back up to 1.
But the problem is that the view's alpha reaches zero after panning only half the screen, or sometimes sooner when I drag the view more slowly.
Initially I set the UIView's background color to black.
I need to scale the view's alpha down gradually; any idea or suggestion would be helpful.
UIPanGestureRecognizer * panner = nil;
panner = [[UIPanGestureRecognizer alloc] initWithTarget: self action:@selector(handlePanGesture:)];
[self.view addGestureRecognizer:panner ];
[panner setDelegate:self];
[panner release];
CGRect frame = CGRectMake(0, 0, 320, 460);
self.dimmer = [[UIView alloc] initWithFrame:frame];
[self.dimmer setBackgroundColor:[UIColor blackColor]];
[self.view addSubview:dimmer];
-(IBAction) handlePanGesture:(UIPanGestureRecognizer *) sender {
static CGPoint lastPosition = {0};
CGPoint nowPosition; float alpha = 0.0;
float new_alpha = 0.0;
nowPosition = [sender translationInView: [self view]];
alpha = [dimmer alpha] -0.0037;
dimmer.alpha -=alpha;
}
I would look at the point on the screen you are currently at inside handlePanGesture:. Find the percentage of the view's height you are at, CGFloat percentage = nowPosition.y / self.view.frame.size.height;, then set the alpha from that: dimmer.alpha = 1.0 - percentage;. This way, no matter how you move, you are setting the alpha according to how close to the bottom you are.
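A minimal sketch of that approach (my own phrasing of the suggestion above; it assumes the gesture is attached to self.view and uses locationInView: for the current point rather than the translation):
-(IBAction) handlePanGesture:(UIPanGestureRecognizer *) sender {
    // current touch location, not the translation since the gesture began
    CGPoint nowPosition = [sender locationInView:self.view];
    // 0 at the top of the view, 1 at the bottom
    CGFloat percentage = nowPosition.y / self.view.frame.size.height;
    dimmer.alpha = 1.0 - percentage;
}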
You aren't scaling relative to your gesture; you're setting dimmer.alpha = 0.0037 every time handlePanGesture: executes, regardless of pan direction or distance.
-(IBAction) handlePanGesture:(UIPanGestureRecognizer *) sender {
static CGPoint lastPosition = {0};
CGPoint nowPosition;
float alpha = 0.0;
float new_alpha = 0.0; // Unused!!
nowPosition = [sender translationInView: [self view]]; // Unused!!
alpha = [dimmer alpha] - 0.0037;
dimmer.alpha -= alpha; // === dimmer.alpha = dimmer.alpha - (dimmer.alpha - 0.0037)
// === dimmer.alpha = 0.0037 !!!
}
A better implementation might look something like this:
-(IBAction) handlePanGesture:(UIPanGestureRecognizer *) sender {
CGPoint nowPosition = [sender translationInView: [self view]];
CGFloat alpha = dimmer.alpha - ([sender translationInView: [self view]].y)/320.0;
dimmer.alpha = MAX(0, MIN(1, alpha));
}
I have a UIScrollView with a number of rectangular subviews of equal size lined up inside it. I need to be able to pass a CGPoint to that UIScrollView and have it give me the rectangular subview that contains that point. That's basically hitTest:withEvent:, except that hitTest:withEvent: doesn't work with a UIScrollView once the CGPoint goes beyond the UIScrollView's bounds, and it doesn't look into the actual content.
What's everyone been doing about this? How to "hit test" on a UIScrollView content view?
Here's some code to illustrate the problem:
NSArray *rectangles = [self getBeautifulRectangles];
CGFloat rectangleLength;
rectangleLength = 100;
// add some rectangle subviews
for (int i = 0; i < rectangles.count; i++) {
UIView *rectangle = [rectangles objectAtIndex:i];
[rectangle setFrame:CGRectMake(i * rectangleLength, 0, rectangleLength, rectangleLength)];
[_scrollView addSubview:rectangle];
}
[_scrollView setContentSize:CGSizeMake(rectangleLength * rectangles.count, rectangleLength)];
// add scroll view to parent view
UIView *containerView = [[UIView alloc] initWithFrame:CGRectMake(0,0,320, rectangleLength)];
[containerView addSubview:_scrollView];
// compute CGPoint to center of first rectangle
CGPoint number1RectanglePoint = CGPointMake(0 * rectangleLength + 50, 50);
// compute CGPoint to center of fifth rectangle
CGPoint number5RectanglePoint = CGPointMake(4 * rectangleLength + 50, 50);
UIView *firstSubview = [containerView hitTest:number1RectanglePoint withEvent:nil];
UIView *fifthSubview = [containerView hitTest:number5RectanglePoint withEvent:nil];
if (firstSubview) NSLog(@"first rectangle OK");
if (fifthSubview) NSLog(@"fifth rectangle OK");
output: first rectangle OK
You should be able to loop through the scroll view's subviews:
+(UIView *)touchedViewIn:(UIScrollView *)scrollView atPoint:(CGPoint)touchPoint {
CGPoint actualPoint = CGPointMake(touchPoint.x + scrollView.contentOffset.x, scrollView.contentOffset.y + touchPoint.y);
for (UIView * subView in scrollView.subviews) {
if(CGRectContainsPoint(subView.frame, actualPoint)) {
NSLog(#"THIS IS THE ONE");
return subView;
}
}
//Nothing touched
return nil;
}
I guess you are passing the wrong CGPoint coordinate to the hitTest:withEvent: method, which causes wrong behavior when the scroll view is scrolled.
The coordinate you pass to this method must be in the target view's coordinate system. I guess your coordinate is in the UIScrollView's superview's coordinate system.
You can convert the coordinate prior to using it for the hit test using CGPoint hitPoint = [scrollView convertPoint:yourPoint fromView:scrollView.superview].
In your example you let the container view perform the hit testing, but the container can only see and hit the visible portion of the scroll view, and thus your hit fails.
In order to hit subviews of the scroll view which are outside of the visible area you have to perform the hit test on the scroll view directly:
UIView *firstSubview = [_scrollView hitTest:number1RectanglePoint withEvent:nil];
UIView *fifthSubview = [_scrollView hitTest:number5RectanglePoint withEvent:nil];
//UPDATE
The updated code that works as I expected has been added; see the didSimulatePhysics method in the updated code below. In my case, I only care about moving a character left or right on the x axis, where 0 on the x axis is the absolute left and the right edge is a configurable value. The Apple Adventure sample game really helped a lot too.
//ORIGINAL POST BELOW
I'm working with Apple SpriteKit and I'm struggling to implement a camera that behaves the way I'd like. In the code below I load a sprite character, two buttons, and a red box that starts off to the right, outside of the view. What I'd like is to move the character with the buttons, and once the player reaches the middle or end of the screen, the camera should re-adjust to reveal what couldn't previously be seen in the view. So moving to the right should eventually show the red box once the player gets there. However, with the code I'm using below, I'm unable to get the camera to follow and adjust the coordinates to the main character at all. I've looked at Apple's advanced scene processing doc as well as a few other Stack Overflow posts, but can't seem to get it right. If anyone could offer some advice, it would be appreciated.
#define cameraEdge 150
-(id)initWithSize:(CGSize)size
{
if (self = [super initWithSize:size])
{
/* Setup your scene here */
//320 568
self.backgroundColor = [SKColor whiteColor];
myWorld = [[SKNode alloc] init];
[self addChild:myWorld];
mainCharacter = [SKSpriteNode spriteNodeWithImageNamed:@"0"];
mainCharacter.physicsBody.dynamic = YES;
mainCharacter.name = #"player";
mainCharacter.position = CGPointMake(20, 20);
CGRect totalScreenSize = CGRectMake(0, 0, 800, 320);
SKSpriteNode *box = [SKSpriteNode spriteNodeWithColor:[SKColor redColor] size:CGSizeMake(60, 60)];
SKSpriteNode *boxTwo = [SKSpriteNode spriteNodeWithColor:[SKColor greenColor] size:CGSizeMake(60, 60)];
SKSpriteNode *boxThree = [SKSpriteNode spriteNodeWithColor:[SKColor blueColor] size:CGSizeMake(60, 60)];
boxThree.position = CGPointMake(40, 50);
[myWorld addChild:boxThree];
boxTwo.position = CGPointMake(1100, 50);
box.position = CGPointMake(650, 50);
[myWorld addChild:box];
[myWorld addChild:boxTwo];
self.physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:totalScreenSize];
self.physicsWorld.gravity = CGVectorMake(0, -5);
mainCharacter.name = #"mainCharacter";
mainCharacter.physicsBody.linearDamping = 0;
mainCharacter.physicsBody.friction = 0;
mainCharacter.physicsBody.restitution = 0;
mainCharacter.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:mainCharacter.size];
[myWorld addChild:mainCharacter];
[self addChild:[self buildLeftButton]];
[self addChild:[self buildRightButton]];
}
return self;
}
- (void)didSimulatePhysics
{
SKSpriteNode *hero = mainCharacter;
if(hero)
{
CGPoint heroPosition = hero.position;
CGPoint worldPosition = myWorld.position;
NSLog(#"%f", heroPosition.x);
CGFloat xCoordinate = worldPosition.x + heroPosition.x;
if(xCoordinate < cameraEdge && heroPosition.x > 0)
{
worldPosition.x = worldPosition.x - xCoordinate + cameraEdge;
self.worldMovedForUpdate = YES;
}
else if(xCoordinate > (self.frame.size.width - cameraEdge) && heroPosition.x < 2000)
{
worldPosition.x = worldPosition.x + (self.frame.size.width - xCoordinate) - cameraEdge;
self.worldMovedForUpdate = YES;
}
myWorld.position = worldPosition;
}
}
-(SKSpriteNode *)buildLeftButton
{
SKSpriteNode *leftButton = [SKSpriteNode spriteNodeWithImageNamed:#"left"];
leftButton.position = CGPointMake(20, 20);
leftButton.name = #"leftButton";
leftButton.zPosition = 1.0;
return leftButton;
}
-(SKSpriteNode *)buildRightButton
{
SKSpriteNode *leftButton = [SKSpriteNode spriteNodeWithImageNamed:#"right"];
leftButton.position = CGPointMake(60, 20);
leftButton.name = #"rightButton";
leftButton.zPosition = 1.0;
return leftButton;
}
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [touches anyObject];
CGPoint location = [touch locationInNode:self];
SKNode *node = [self nodeAtPoint:location];
if([node.name isEqualToString:#"leftButton"])
{
[mainCharacter.physicsBody applyImpulse:CGVectorMake(-120, 0)];
}
else if([node.name isEqualToString:#"rightButton"])
{
[mainCharacter.physicsBody applyImpulse:CGVectorMake(120, 10)];
}
}
If you want the view to always be centered on your player's position, modify your code with these points in mind:
1) Create a SKNode and call it myWorld, worldNode or any other name like that.
2) Add the worldNode: [self addChild:worldNode];
3) Add all other nodes to the worldNode, including your player.
4) In the didSimulatePhysics method, add this code:
worldNode.position = CGPointMake(-(player.position.x-(self.size.width/2)), -(player.position.y-(self.size.height/2)));
Your view will now always be centered on your player's position.
Update May 2015:
If you are using a map created with Tiled Map Editor, you can use the free SKAToolKit framework. Features include player camera auto follow, test player, test HUD and sprite buttons.
I am trying to rotate a rectangle around a circle. So far, after putting together some code I found in various places (mainly here: https://stackoverflow.com/a/4657476/861181), I am able to rotate the rectangle around its own center axis.
How can I make it rotate around the circle?
Here is what I have:
OverlaySelectionView.h
#import <QuartzCore/QuartzCore.h>
@interface OverlaySelectionView : UIView {
@private
UIView* dragArea;
CGRect dragAreaBounds;
UIView* vectorArea;
UITouch *currentTouch;
CGPoint touchLocationpoint;
CGPoint PrevioustouchLocationpoint;
}
@property CGRect vectorBounds;
@end
OverlaySelectionView.m
#import "OverlaySelectionView.h"
@interface OverlaySelectionView()
@property (nonatomic, retain) UIView* vectorArea;
@end
@implementation OverlaySelectionView
@synthesize vectorArea, vectorBounds;
@synthesize delegate;
- (void) initialize {
self.userInteractionEnabled = YES;
self.multipleTouchEnabled = NO;
self.backgroundColor = [UIColor clearColor];
self.opaque = NO;
self.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;
UIPanGestureRecognizer *panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(rotateVector:)];
panRecognizer.maximumNumberOfTouches = 1;
[self addGestureRecognizer:panRecognizer];
}
- (id) initWithCoder: (NSCoder*) coder {
self = [super initWithCoder: coder];
if (self != nil) {
[self initialize];
}
return self;
}
- (id) initWithFrame: (CGRect) frame {
self = [super initWithFrame: frame];
if (self != nil) {
[self initialize];
}
return self;
}
- (void)drawRect:(CGRect)rect {
if (vectorBounds.origin.x){
UIView* area = [[UIView alloc] initWithFrame: vectorBounds];
area.backgroundColor = [UIColor grayColor];
area.opaque = YES;
area.userInteractionEnabled = NO;
vectorArea = area;
[self addSubview: vectorArea];
}
}
- (void)rotateVector: (UIPanGestureRecognizer *)panRecognizer{
if (touchLocationpoint.x){
PrevioustouchLocationpoint = touchLocationpoint;
}
if ([panRecognizer numberOfTouches] >= 1){
touchLocationpoint = [panRecognizer locationOfTouch:0 inView:self];
}
CGPoint origin;
origin.x=240;
origin.y=160;
CGPoint previousDifference = [self vectorFromPoint:origin toPoint:PrevioustouchLocationpoint];
CGAffineTransform newTransform =CGAffineTransformScale(vectorArea.transform, 1, 1);
CGFloat previousRotation = atan2(previousDifference.y, previousDifference.x);
CGPoint currentDifference = [self vectorFromPoint:origin toPoint:touchLocationpoint];
CGFloat currentRotation = atan2(currentDifference.y, currentDifference.x);
CGFloat newAngle = currentRotation- previousRotation;
newTransform = CGAffineTransformRotate(newTransform, newAngle);
[self animateView:vectorArea toPosition:newTransform];
}
-(CGPoint)vectorFromPoint:(CGPoint)firstPoint toPoint:(CGPoint)secondPoint
{
CGPoint result;
CGFloat x = secondPoint.x-firstPoint.x;
CGFloat y = secondPoint.y-firstPoint.y;
result = CGPointMake(x, y);
return result;
}
-(void)animateView:(UIView *)theView toPosition:(CGAffineTransform) newTransform
{
[UIView setAnimationsEnabled:YES];
[UIView beginAnimations:nil context:NULL];
[UIView setAnimationCurve:UIViewAnimationCurveLinear];
[UIView setAnimationBeginsFromCurrentState:YES];
[UIView setAnimationDuration:0.0750];
vectorArea.transform = newTransform;
[UIView commitAnimations];
}
@end
Here is an attempt to clarify: I am creating the rectangle from coordinates on a map. Here is the function that creates that rectangle in the main view; essentially it is in the middle of the screen:
overlay is the view created with the above code.
- (void)mapView:(MKMapView *)mapView didUpdateUserLocation:(MKUserLocation *)userLocation
{
if (!circle){
circle = [MKCircle circleWithCenterCoordinate: userLocation.coordinate radius:100];
[mainMapView addOverlay:circle];
CGPoint centerPoint = [mapView convertCoordinate:userLocation.coordinate toPointToView:self.view];
CGPoint upPoint = CGPointMake(centerPoint.x, centerPoint.y - 100);
overlay = [[OverlaySelectionView alloc] initWithFrame: self.view.frame];
overlay.vectorBounds = CGRectMake(upPoint.x, upPoint.y, 30, 100);
[self.view addSubview: overlay];
}
}
Here is the sketch of what I am trying to achieve:
Introduction
A rotation is always done around (0,0).
What you already know:
To rotate around the center of the rectangle you translate the rect to origin, rotate and translate back.
Now for your question:
To rotate around the center point of a circle, simply translate so that the circle's center is at (0,0), then rotate, and translate back.
Start by positioning the rectangle at 12 o'clock, with its center line at 12.
1) As explained, a rotation is always around (0,0), so first move the center of the circle to (0,0):
CGAffineTransform trans1 = CGAffineTransformMakeTranslation(-circ.x, -circ.y);
2) Rotate by the angle:
CGAffineTransform transRot = CGAffineTransformMakeRotation(angle); // or -angle, try it out.
3) Move back:
CGAffineTransform transBack = CGAffineTransformMakeTranslation(circ.x, circ.y);
Concatenate these 3 transforms into one combined matrix, and apply it to the rectangle:
CGAffineTransform tCombo = CGAffineTransformConcat(trans1, transRot);
tCombo = CGAffineTransformConcat(tCombo, transBack);
Apply:
rectangle.transform = tCombo;
You should probably also read the chapter about transformation matrices in the Quartz 2D documentation.
This code was written in a text editor only, so double-check it before use.