I am currently trying to implement the following: while building a compass, I would like to draw arrows (circles) at set locations and rotate them around my view so they display on the compass.
I can use the storyboard to create UIImageViews and the like and parent them to one another.
I am now trying to do this in code, so that when the program receives a new location it can display the new point on the compass.
I have already worked out the code for the rotation.
Ideally, my code flow should be as follows:
For i = 1 to 5:
Draw empty square view[i]
Draw a circle and position it within square[i] at coordinate (x, y) (essentially at the north point of the compass)
Parent the circle to the square
Rotate square[i] to x degrees
Next i
My question is: how do I programmatically draw these views, and how do I parent them so that I can rotate one with the other around a fixed point?
Thanks.
This is not an exact answer, but it may help you. Just play with the value of i (the loop index):
- (void)rotateOn360Degree
{
    CGFloat x, y;
    CGFloat radius = 30;
    for (int i = 1; i <= 360; i++)
    {
        // Place each dot i degrees around a circle of the given radius.
        x = radius * cos((i * M_PI) / 180);
        y = radius * sin((i * M_PI) / 180);

        UIView *tmpView = [[UIView alloc] init];
        [tmpView setBackgroundColor:[UIColor greenColor]];
        [tmpView.layer setCornerRadius:5];
        tmpView.frame = CGRectMake(x, y, 10, 10);
        [self.view addSubview:tmpView];
    }
}
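For the parent-and-rotate flow described in the question (a circle parented to a container that is then rotated), a minimal, untested sketch might look like the following; the 200-point compass size and the 10-point marker are assumptions:

- (void)addMarkerAtDegrees:(CGFloat)degrees
{
    CGFloat side = 200.0f; // assumed compass diameter
    UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, side, side)];
    container.backgroundColor = [UIColor clearColor];
    container.center = self.view.center;

    // Circle marker placed at the container's "north" edge.
    UIView *circle = [[UIView alloc] initWithFrame:CGRectMake((side - 10) / 2, 0, 10, 10)];
    circle.backgroundColor = [UIColor redColor];
    circle.layer.cornerRadius = 5;
    [container addSubview:circle]; // parent the circle to the container

    // Rotating the container carries the circle around the compass centre.
    container.transform = CGAffineTransformMakeRotation(degrees * M_PI / 180.0);

    [self.view addSubview:container];
}

Calling something like this once per received bearing reproduces the "For i = 1 to 5" loop from the question.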
I've never had this happen before and can't figure out what's going on. I suspect it might be auto-layout, but I don't see how. I have a "Compass" view that has several subviews it manages itself (not part of auto layout). Here's an example of their setup:
- (ITMView *)compass {
    if (!_compass) {
        _compass = [ITMView new];
        _compass.backgroundColor = [UIColor blueColor];
        _compass.layer.anchorPoint = CGPointMake(.5, .5);
        _compass.translatesAutoresizingMaskIntoConstraints = NO;
        _compass.frame = self.bounds;
        __weak ITMCompassView *_self = self;
        _compass.onDraw = ^(CGContextRef ref, CGRect frame) { [_self drawCompassWithFrame:frame]; };
        [self addSubview:_compass];
    }
    return _compass;
}
I need to rotate the compass in response to heading changes:
- (void)setCurrentHeading:(double)currentHeading {
    _currentHeading = fmod(currentHeading, 360);
    double rad = (M_PI / 180) * _currentHeading;
    self.compass.transform = CGAffineTransformMakeRotation(rad);
}
The problem is that it looks like it's rotating in 3D rather than simply turning flat around the z-axis:
I'm not manipulating layer transforms on any other views. Does anyone have any idea why this is occurring?
Update
I checked the transform for all superviews. Every superview has an identity transform.
I logged the transform of the compass view before and after it was set for the first time. Before it was set, the transform was at identity, which is expected. After I set the transform to rotate 242.81 degrees (4.24 rad) I get:
[
-0.47700124155378526, -0.87890262006444575,
0.87890262006444575, -0.47700124155378526,
0, 0
]
Update 2
I checked CATransform3DIsAffine and it always returns YES. I double checked the layer transform and for a rotation of 159.7 (degrees) I get:
[
-0.935, 0.356, 0, 0,
-0.356, -0.935, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1
]
That looks correct to me.
All of the transforms are correct but it's still not displaying correctly on screen.
Update 3
I removed my drawing code from the view and set the view background to blue. The view is definitely being rotated, squeezed, or something:
Some things to note:
The view displays correctly at 90, 180, 270 & 0 degrees.
The view disappears (turned on edge) at 45, 135, 225 & 315 degrees.
The view looks like it's being rotated in 3D as it progresses from 0 to 360 degrees.
I'm not sure why @matt withdrew his answer, but he was correct: the compass view had its frame reset every time I made a rotation, in the layoutSubviews method of my containing superview. I wasn't expecting this, thinking that a rotation wouldn't trigger layoutSubviews. The frame value never changed, but the applied transform distorted the result when the frame was re-applied to the view. What threw me was that the result really looked like the view was being rotated in 3D, which led me down that particular rabbit hole. At least I know what to look for now.
Something I want to point out: the apparent 3D rotation was very particular. It appeared to rotate around each diagonal combination of {x, y} sequentially, one per 90° quadrant of the unit circle. This makes sense if you think about how the frame would distort over those intervals.
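For context, here is an illustrative sketch (not from the original post) of why the frame distorts: frame is just the bounding box of the transformed bounds, so it grows as the view rotates, and Apple documents its value as undefined once the transform is not the identity.

// A 100x100 view rotated 45 degrees reports an enlarged bounding-box frame.
// Writing a frame back while such a transform is applied is documented as
// undefined behaviour and produces the squeeze seen in the screenshots.
UIView *v = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
NSLog(@"before: %@", NSStringFromCGRect(v.frame));    // {0, 0, 100, 100}
v.transform = CGAffineTransformMakeRotation(M_PI_4);  // 45 degrees
NSLog(@"after:  %@", NSStringFromCGRect(v.frame));    // roughly {-20.7, -20.7, 141.4, 141.4}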
The solution is simple enough: store and remove the transform before setting the subview frame, then reapply the transform. However, because the rotation is applied very, very frequently (inside an animation block, no less), I added an early exit to help reduce the load:
- (void)layoutSubviews {
    [super layoutSubviews];
    if (!CGRectEqualToRect(_lastLayout, self.bounds)) {
        CGRect frame = SquareRectAndPosition(self.bounds, CGRectXCenter, CGRectYCenter);

        CGAffineTransform t;

        t = self.compass.transform;
        self.compass.transform = CGAffineTransformIdentity;
        self.compass.frame = frame;
        self.compass.transform = t;

        t = self.target.transform;
        self.target.transform = CGAffineTransformIdentity;
        self.target.frame = frame;
        self.target.transform = t;
    }
    _lastLayout = self.bounds;
}
I'm using UIKit Dynamics to push a UIView off screen, similar to how Tweetbot performs it in their image overlay.
I use a UIPanGestureRecognizer, and when they end the gesture, if they exceed the velocity threshold it goes offscreen.
[self.animator removeBehavior:self.panAttachmentBehavior];

CGPoint velocity = [panGestureRecognizer velocityInView:self.view];

if (fabs(velocity.y) > 100) {
    self.pushBehavior = [[UIPushBehavior alloc] initWithItems:@[self.scrollView] mode:UIPushBehaviorModeInstantaneous];
    [self.pushBehavior setTargetOffsetFromCenter:centerOffset forItem:self.scrollView];
    self.pushBehavior.active = YES;
    self.pushBehavior.action = ^{
        CGPoint lowestPoint = CGPointMake(CGRectGetMinX(self.imageView.bounds), CGRectGetMaxY(self.imageView.bounds));
        CGPoint convertedPoint = [self.imageView convertPoint:lowestPoint toView:self.view];

        if (!CGRectIntersectsRect(self.view.bounds, self.imageView.frame)) {
            NSLog(@"outside");
        }
    };

    CGFloat area = CGRectGetWidth(self.scrollView.bounds) * CGRectGetHeight(self.scrollView.bounds);
    CGFloat UIKitNewtonScaling = 5000000.0;
    CGFloat scaling = area / UIKitNewtonScaling;

    CGVector pushDirection = CGVectorMake(velocity.x * scaling, velocity.y * scaling);
    self.pushBehavior.pushDirection = pushDirection;

    [self.animator addBehavior:self.pushBehavior];
}
I'm having an immense amount of trouble detecting when my view actually completely disappears from the screen.
My view is set up rather simply. It's a UIScrollView with a UIImageView inside it, and both sit inside a plain UIViewController. I move the UIScrollView with the pan gesture, but I want to detect when the image view is off screen.
In the action block I can monitor the view as it moves, and I've tried two methods:
1. Each time the action block is called, find the lowest point in y for the image view. Convert that into the view controller's coordinate space, and check whether the y value of the converted point is less than 0 (negative) for the case where I "threw" the view upward. (This means the lowest point in the view has crossed into negative y values in the view controller's coordinate space, which is above the visible area of the view controller.)
This worked okay, except the x value I gave to lowestPoint really messes everything up. If I choose the minimum x, i.e. the furthest to the left, it will only tell me when the bottom-left corner of the UIView has gone off screen. Oftentimes, since the view can be rotating depending on where the user pushes from, the bottom-right may go off screen after the left, making it detect it too early. If I choose the middle x, it will only tell me when the bottom middle has gone off, and so on. I can't seem to figure out how to tell it "just get me the absolute lowest y value" (one way to do this is sketched after this question).
2. I tried CGRectIntersectsRect as shown in the code above, and it never says it's outside, even seconds after it went shooting outside of any visible area.
What am I doing wrong? How should I be detecting it no longer being visible?
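One way to get the "absolute lowest y value" from approach 1 regardless of rotation is to convert all four corners of the image view and take the largest y; this is only a sketch of that idea, not code from the post:

CGRect b = self.imageView.bounds;
CGPoint corners[4] = {
    CGPointMake(CGRectGetMinX(b), CGRectGetMinY(b)),
    CGPointMake(CGRectGetMaxX(b), CGRectGetMinY(b)),
    CGPointMake(CGRectGetMinX(b), CGRectGetMaxY(b)),
    CGPointMake(CGRectGetMaxX(b), CGRectGetMaxY(b))
};
CGFloat lowestY = -CGFLOAT_MAX;
for (NSUInteger i = 0; i < 4; i++) {
    // convertPoint:toView: accounts for the view's current transform.
    CGPoint p = [self.imageView convertPoint:corners[i] toView:self.view];
    lowestY = MAX(lowestY, p.y);
}
if (lowestY < 0) {
    NSLog(@"Every corner is above the visible area");
}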
If you take a look at the UIDynamicItem protocol properties, you can see they are center, bounds and transform. So UIDynamicAnimator actually modifies only these three properties. I'm not really sure what happens with the frame during Dynamics animations, but from my experience its value inside the action block is not always reliable. Maybe that's because the frame is actually calculated by CALayer based on center, transform and bounds, as described in this excellent blog post.
But you can certainly make use of center and bounds in the action block. The following code worked for me in a case similar to yours:
CGPoint parentCenter = CGPointMake(CGRectGetMidX(self.view.bounds), CGRectGetMidY(self.view.bounds));

self.pushBehavior.action = ^{
    CGFloat dx = self.imageView.center.x - parentCenter.x;
    CGFloat dy = self.imageView.center.y - parentCenter.y;
    CGFloat distance = sqrtf(dx * dx + dy * dy);

    if (distance > MIN(parentCenter.y + CGRectGetHeight(self.imageView.bounds), parentCenter.x + CGRectGetWidth(self.imageView.bounds))) {
        NSLog(@"Off screen!");
    }
};
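A related variation (my sketch, not part of the answer above): since the frame is what proved unreliable, CGRectIntersectsRect can instead be fed a conservative box rebuilt from center and bounds, which contains the view at any rotation:

self.pushBehavior.action = ^{
    CGFloat w = CGRectGetWidth(self.imageView.bounds);
    CGFloat h = CGRectGetHeight(self.imageView.bounds);
    CGFloat halfExtent = (w + h) / 2.0f; // at least half the diagonal, so any rotation fits
    CGRect conservativeBox = CGRectMake(self.imageView.center.x - halfExtent,
                                        self.imageView.center.y - halfExtent,
                                        2.0f * halfExtent,
                                        2.0f * halfExtent);
    if (!CGRectIntersectsRect(self.view.bounds, conservativeBox)) {
        NSLog(@"Definitely off screen");
    }
};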
I want to draw a pentagon in iOS with UISliders as the circumradii (a set of five UISliders going in different directions and originating from the center of a pentagon). Currently I have rotated five UISliders using a common anchorPoint (via setAnchorPoint), but they don't seem to originate from a common point.
How can I go about this? Do I need to start working on a custom control? What more from Quartz2D can I use?
Some specific indicators would be useful (please don't just say "Use Quartz2D")
Here is what I have achieved, and here is what I want to achieve (Imgur screenshots in the original post).
I defined the various sliders as an IBOutletCollection, so I can access them as an array in the controller. Then I run the following code in viewWillAppear:
CGPoint viewCenter = self.view.center;
for (NSInteger i = 0, count = self.sliders.count; i < count; ++i) {
    UISlider *slider = self.sliders[i];
    CGFloat angle = 2.0f * M_PI / count;
    CALayer *layer = slider.layer;
    layer.anchorPoint = CGPointMake(0.0f, 0.5f);
    layer.position = viewCenter;
    layer.transform = CATransform3DMakeRotation(i * angle - M_PI_2, 0.0f, 0.0f, 1.0f);
}
Here's a screenshot of the result:
Is this what you want?
I would try simply rotating the sliders. The following code should help:
firstSlider.transform = CGAffineTransformRotate(firstSlider.transform, 72.0 / 180 * M_PI);
secondSlider.transform = CGAffineTransformRotate(secondSlider.transform, 144.0 / 180 * M_PI);
// and so on
The 72.0 / 180 * M_PI expression converts degrees to radians.
Hope it helps.
P.S. I don't have a Mac nearby to check whether it works.
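If the goal is still the common origin from the question, this rotation would typically be combined with the anchor-point trick shown in the answer above; a rough, untested sketch:

// Pin the left end of each slider's track to the view centre before rotating,
// so all sliders fan out from one point (mirrors the loop in the answer above).
firstSlider.layer.anchorPoint = CGPointMake(0.0f, 0.5f);
firstSlider.layer.position = self.view.center;
firstSlider.transform = CGAffineTransformRotate(firstSlider.transform, 72.0 / 180 * M_PI);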
For example, say we want to draw a 100 x 100 point circle on the main view, which covers the whole screen on the iPad. Instead of using initWithFrame alone, I use the following two steps in viewDidLoad:
UINodeView *nodeView = [[UINodeView alloc] initWithFrame:
    CGRectMake(x, y, NSNodeWidth, NSNodeHeight)];
nodeView.center = CGPointMake(x, y);
This is because x and y read more elegantly as self.view.bounds.size.width / 2 to horizontally center the circle, instead of self.view.bounds.size.width / 2 - NSNodeWidth / 2. Is initializing with a frame first and then resetting the center a good approach, or is there a better way, for example something like an initWithCenterAndSize?
That's a fine way of doing it :)
I would have gone for generating the positioned frame first to avoid the extra method call but that's just a matter of personal preference :)
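The positioned-frame version mentioned above might look like this (a sketch reusing the question's x, y and NSNodeWidth / NSNodeHeight):

UINodeView *nodeView = [[UINodeView alloc] initWithFrame:
    CGRectMake(x - NSNodeWidth / 2, y - NSNodeHeight / 2, NSNodeWidth, NSNodeHeight)];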
If you're using this a lot in your app you could make a category on UIView that implements this (warning, untested code :)
- (id)initWithCenter:(CGPoint)point size:(CGSize)size {
    CGRect frame = CGRectMake(point.x - size.width / 2,
                              point.y - size.height / 2,
                              size.width,
                              size.height);
    return [self initWithFrame:frame];
}
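Hypothetical usage of that category (assuming it is added to UIView, which UINodeView presumably subclasses):

UINodeView *nodeView = [[UINodeView alloc] initWithCenter:self.view.center
                                                     size:CGSizeMake(NSNodeWidth, NSNodeHeight)];
[self.view addSubview:nodeView];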
I usually do this:
UINodeView *nodeView = [[UINodeView alloc] initWithFrame:
    CGRectMake(0, 0, NSNodeWidth, NSNodeHeight)];
nodeView.center = CGPointMake(x, y);
It looks nice and clear.
Twitter for iPad implements a fancy "pinch to expand paper fold" effect. A short video clip here.
http://www.youtube.com/watch?v=B0TuPsNJ-XY
Can this be done with CATransform3D, without OpenGL? A working example would be appreciated.
Update: I was interested in the approach or implementation of this animation effect. That's why I offered a bounty on this question. – srikar
Here's a really simple example using a gesture recognizer and CATransform3D to get you started. Simply pinch to rotate the gray view.
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // ...
    CGRect rect = self.window.bounds;
    view = [[UIView alloc] initWithFrame:CGRectMake(rect.size.width/4, rect.size.height/4,
                                                    rect.size.width/2, rect.size.height/2)];
    view.backgroundColor = [UIColor lightGrayColor];
    [self.window addSubview:view];

    CATransform3D transform = CATransform3DIdentity;
    transform.m34 = -1/500.0; // this allows perspective
    self.window.layer.sublayerTransform = transform;

    UIPinchGestureRecognizer *rec = [[UIPinchGestureRecognizer alloc] initWithTarget:self
                                                                              action:@selector(pinch:)];
    [self.window addGestureRecognizer:rec];
    [rec release];

    return YES;
}
- (void)pinch:(UIPinchGestureRecognizer *)rec
{
    CATransform3D t = CATransform3DIdentity;
    t = CATransform3DTranslate(t, 0, -self.view.bounds.size.height/2, 0);
    t = CATransform3DRotate(t, rec.scale * M_PI, 1, 0, 0);
    t = CATransform3DTranslate(t, 0, -self.view.bounds.size.height/2, 0);
    self.view.layer.transform = t;
}
Essentially, this effect is comprised of several different steps:
Gesture recognizer to detect when a pinch-out is occurring.
When the gesture starts, Twitter is likely creating a graphics context for the top and bottom portion, essentially creating images from their layers* (a sketch of this step follows the list).
Attach the images as subviews on the top and bottom.
As the fingers flex in and out, use a CATransform3D to add perspective to the images.
Once the view has 'fully stretched out', make the real subviews visible and remove the graphics context-created images.
To collapse the views, do the inverse of the above.
*Because these views are relatively simple, they may not need to be rendered to a graphics context.
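A hedged sketch of step 2 above, using the era-appropriate UIGraphicsBeginImageContextWithOptions and -renderInContext: APIs (the helper name is mine):

// Render one rectangular slice of a view into a UIImage so the top and bottom
// halves can be attached as separate, foldable subviews.
- (UIImage *)imageOfView:(UIView *)view rect:(CGRect)rect
{
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, -rect.origin.x, -rect.origin.y); // keep only `rect`
    [view.layer renderInContext:ctx];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

Calling it with the top and bottom halves of the pinched cell's bounds yields the two images that are then folded with CATransform3D.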
The effect is basically just a view rotating about the X axis: when you drag a tweet out of the list, there's a view that starts out parallel to the X-Z plane. As the user un-pinches, the view rotates around the X axis until it comes fully into the X-Y plane. The documentation says:
The CATransform3D data structure defines a homogenous three-dimensional transform (a 4 by 4 matrix of CGFloat values) that is used to rotate, scale, offset, skew, and apply perspective transformations to a layer.
Furthermore, we know that CALayer's transform property is a CATransform3D structure, and that it's also animatable. Ergo, I think it's safe to say that the folding effect in question is do-able with Core Animation.
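As a rough illustration of that X-axis fold (my sketch, not from the answer; the foldLayer name and the 500-point perspective distance are assumptions):

// Swing a snapshot layer from edge-on (parallel to the X-Z plane) into the X-Y plane.
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / 500.0;                        // assumed perspective distance
foldLayer.superlayer.sublayerTransform = perspective;

foldLayer.anchorPoint = CGPointMake(0.5, 0.0);         // hinge along the top edge
// (in real code, re-set the layer's position after changing anchorPoint so it doesn't jump)

CABasicAnimation *fold = [CABasicAnimation animationWithKeyPath:@"transform.rotation.x"];
fold.fromValue = [NSNumber numberWithDouble:-M_PI_2];  // starts edge-on
fold.toValue = [NSNumber numberWithDouble:0.0];        // ends flat in the X-Y plane
fold.duration = 0.3;
[foldLayer addAnimation:fold forKey:@"unfold"];
foldLayer.transform = CATransform3DIdentity;           // final model value matches toValue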