I am a beginner programmer and this is my first app (I am still learning). I have overlaid a polygon onto a map view and set its fill color to an image, because I am trying to match an image to a satellite picture. I want to rotate the polygon so that its contents line up with the map. Is it possible to rotate the image? If not, is there an easier way to overlay an image onto a map view that I could use?
Here is my code:
- (MKOverlayView *)mapView:(MKMapView *)mapView viewForOverlay:(id<MKOverlay>)overlay {
    MKPolygonView *polyView = [[MKPolygonView alloc] initWithOverlay:overlay];
    polyView.strokeColor = [UIColor whiteColor];
    polyView.fillColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"Campus-map labels.jpg"]];
    return polyView;
}
Here's what I'm trying to do, if it helps:
http://i.stack.imgur.com/x53HU.jpg
The road which is circled in red should match up. I know that the polygon isn't in the right position -- this is to illustrate how the polygon needs to be rotated.
You can modify the transform property of the polyView object. For example:
polyView.transform = CGAffineTransformMakeRotation(M_PI_4);
will rotate the polygon by pi/4 radians (45 degrees), in a clockwise direction.
You might also need to adjust the view's center property to get the effect you want; the rotation applied by transform takes place around the view's center point.
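Putting the two together, a minimal sketch of the suggestion (the angle and offsets are illustrative values you would tune against your map, not taken from the question):
// In mapView:viewForOverlay:, after creating polyView:
// The rotation angle is illustrative; tune it until the image lines up.
polyView.transform = CGAffineTransformMakeRotation(-30.0 * M_PI / 180.0);
// If the rotated content sits slightly off, nudge the view's center.
// These offsets are hypothetical values found by trial and error.
polyView.center = CGPointMake(polyView.center.x + 10.0, polyView.center.y - 5.0);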
I am developing a program that identifies a rectangle in an image and draws a path along the border of the identified rectangle. Now I want to be able to reposition that path in case it is not in exactly the right place. For an example, look at this image.
In cases like this I need to drag the corners of the path to reposition it so that it fits the rectangle.
To draw the path I used CAShapeLayer and UIBezierPath. Here is the code I used to draw the path.
// imgView is the UIImageView which contains the image with the rectangle
let line: CAShapeLayer = CAShapeLayer();
line.frame = imgView.bounds;
let linePath: UIBezierPath = UIBezierPath();
linePath.moveToPoint(CGPointMake(x1, y1));
linePath.addLineToPoint(CGPointMake(x2, y2));
linePath.addLineToPoint(CGPointMake(x3, y3));
linePath.addLineToPoint(CGPointMake(x4, y4));
linePath.addLineToPoint(CGPointMake(x1, y1));
linePath.closePath();
line.lineWidth = 5.0;
line.path = linePath.CGPath;
line.fillColor = UIColor.clearColor().CGColor;
line.strokeColor = UIColor.blueColor().CGColor;
imgView.layer.addSublayer(line);
The thing is, I tried to attach a gesture recognizer to the UIBezierPath, but as far as I can tell there is no such API, and I couldn't find anything on this. Can someone please suggest a way to get this done? Any help would be highly appreciated.
You are right that there is no way to attach a gesture recognizer to a UIBezierPath. Gesture recognizers attach to UIView objects, and a UIBezierPath is not a view object.
There is no built-in mechanism to do this; you need to build it yourself. I would suggest a small family of classes to handle it: a rectangle view class that uses a bezier path internally, places 4 corner point views on the vertexes, and installs a pan gesture recognizer on each corner point view (a sketch follows below).
Note that Cocoa rectangles (CGRects) can't be rotated. You'll need to use a series of connected line segments and write logic that forces it to stay square.
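A minimal sketch of the corner-handle idea in Objective-C (all names here are hypothetical, and the owning rectangle view is assumed to rebuild its CAShapeLayer path whenever the callback fires):
// A draggable corner handle; the owner rebuilds its path in the onDrag block.
@interface CornerHandleView : UIView
@property (nonatomic, copy) void (^onDrag)(CGPoint newCenter);
@end

@implementation CornerHandleView
- (instancetype)initWithCenter:(CGPoint)center {
    self = [super initWithFrame:CGRectMake(0, 0, 30, 30)];
    if (self) {
        self.center = center;
        self.backgroundColor = [UIColor blueColor];
        [self addGestureRecognizer:[[UIPanGestureRecognizer alloc]
            initWithTarget:self action:@selector(didPan:)]];
    }
    return self;
}

- (void)didPan:(UIPanGestureRecognizer *)pan {
    // Move the handle by the pan delta, then reset the recognizer's translation.
    CGPoint delta = [pan translationInView:self.superview];
    self.center = CGPointMake(self.center.x + delta.x, self.center.y + delta.y);
    [pan setTranslation:CGPointZero inView:self.superview];
    if (self.onDrag) self.onDrag(self.center); // owner recomputes its UIBezierPath
}
@end
The rectangle view would create four of these, one per vertex, and in each onDrag callback rebuild the UIBezierPath from the four handle centers and reassign it to the shape layer's path.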
I am creating a UI where we have a deck of cards that you can swipe off the screen.
What I had hoped to be able to do was create a subclass of UIView which would represent each card and then to modify the transform property to move them back (z-axis) and a little up (y-axis) to get the look of a deck of cards.
Reading up on it I found I needed to use a CATransformLayer instead of the normal CALayer in order for the z-axis to not get flattened. I prototyped this by creating a CATransformLayer which I added to the CardDeckView's layer, and then all my cards are added to that CATransformLayer. The code looks a little bit like this:
In init:
// Initialize the CATransformSublayer
_rootTransformLayer = [self constructRootTransformLayer];
[self.layer addSublayer:_rootTransformLayer];
In constructRootTransformLayer (the angle method is redundant; I was going to angle the deck but later decided not to):
CATransformLayer* transformLayer = [CATransformLayer layer];
transformLayer.frame = self.bounds;
// Angle the transform layer so we can see all of the cards
CATransform3D rootRotateTransform = [self transformWithZRotation:0.0];
transformLayer.transform = rootRotateTransform;
return transformLayer;
Then the code to add the cards looks like:
// Set up a CardView as a wrapper for the contentView
RVCardView* cardView = [[RVCardView alloc] initWithContentView:contentView];
cardView.layer.cornerRadius = 6.0;
if (cardView != nil) {
    [_cardArray addObject:cardView];
    //[self addSubview:cardView];
    [_rootTransformLayer addSublayer:cardView.layer];
    [self setNeedsLayout];
}
Note that what I originally wanted was to simply add the RVCardView directly as a subview - I want to preserve touch events, which adding just the layer doesn't do. Unfortunately, what ends up happening is the following:
If I add the cards to the rootTransformLayer I end up with the right look which is:
Note that I tried overriding layerClass on the root view (CardDeckView), which looks like this:
+ (Class)layerClass
{
    return [CATransformLayer class];
}
I've confirmed that the root layer type is now CATransformLayer but I still get the flattened look. What else do I need to do in order to prevent the flattening?
When you use views, you see a flat scene because there is no perspective set in place. To make a comparison with 3D graphics, like OpenGL, in order to render a scene you must set the camera matrix, the one that transforms the 3D world into a 2D image.
This is the same: the sublayers' content is transformed in 3D space using CATransform3D, but when the parent CALayer displays them, by default it projects them onto the x-y plane, ignoring the z coordinate.
See Adding Perspective to Your Animations in Apple's documentation. This is the code you are missing:
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / eyePosition; // ...on the z axis
myParentDeckView.layer.sublayerTransform = perspective;
Note that for this you don't need CATransformLayer; a simple CALayer would suffice.
Here is the transformation applied to the subviews in the picture (eyePosition = -0.1):
// (from ViewController's viewDidLoad)
for (UIView *v in self.view.subviews) {
    CGFloat dz = (float)(arc4random() % self.view.subviews.count);
    CATransform3D t = CATransform3DRotate(CATransform3DMakeTranslation(0.f, 0.f, dz),
                                          0.02,
                                          1.0, 0.0, 0.0);
    v.layer.transform = t;
}
The reason for using CATransformLayer is pointed out in this question. CALayer "rasterizes" its transformed sublayers and then applies its own transformation, while CATransformLayer preserves the full hierarchy and draws each sublayer independently; it is useful only if you have more than one level of 3D-transformed sublayers. In your case, the scene tree has only one level: the deck view (which itself has the identity matrix as transformation) and the card views, the children (which are instead moved in the 3D space). So CATransformLayer is superfluous in this case.
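To illustrate when it does matter, a small hypothetical sketch with two levels of 3D transforms:
// Hypothetical two-level hierarchy where CATransformLayer matters.
CATransformLayer *parent = [CATransformLayer layer]; // preserves children's z values
parent.transform = CATransform3DMakeRotation(M_PI_4, 0, 1, 0); // outer 3D rotation

CALayer *child = [CALayer layer];
child.frame = CGRectMake(0, 0, 100, 100);
child.backgroundColor = [UIColor redColor].CGColor;
child.transform = CATransform3DMakeTranslation(0, 0, 50); // inner z offset
[parent addSublayer:child];
// Had parent been a plain CALayer, the child's z offset would be flattened
// before the parent's own rotation was applied.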
I'm using CoreGraphics in my UIView to draw a graph and I want to be able to interact with the graph using touch input. Since touches are received in device coordinates, I need to transform it into user coordinates in order to relate it to the graph, but that has become an obstacle since CGContextConvertPointToUserSpace doesn't work outside of the graphics drawing context.
Here's what I've tried.
In drawRect:
CGContextScaleCTM(ctx,...);
CGContextTranslateCTM(ctx,...); // transform graph to fit the view nicely
self.ctm = CGContextGetCTM(ctx); // save for later
// draw points using user coordinates
In my touch event handler:
CGPoint touchDevice = [gesture locationInView:self]; // touch point in device coords
CGPoint touchUser = CGPointApplyAffineTransform(touchDevice, self.ctm); // doesn't give me what I want
// CGContextConvertPointToUserSpace(touchDevice) <- what I want, but doesn't work here
Using the inverse of ctm doesn't work either. I'll admit I'm having trouble getting my head around the meaning and relationships between device coordinates, user coordinates, and the transformation matrix. I think it's not as simple as I want it to be.
EDIT: Some background from Apple's documentation (iOS Coordinate Systems and Drawing Model).
"A window is positioned and sized in screen coordinates, which are defined by the coordinate system for the display."
"Drawing commands make reference to a fixed-scale drawing space, known as the user coordinate space. The operating system maps coordinate units in this drawing space onto the actual pixels of the corresponding target device."
"You can change a view’s default coordinate system by modifying the current transformation matrix (CTM). The CTM maps points in a view’s coordinate system to points on the device’s screen."
I discovered that the CTM already included a transformation to map view coordinates (with origin at the top left) to screen coordinates (with origin at the bottom left). So (0,0) got transformed to (0,800), where the height of my view was 800, and (0,2) mapped to (0,798) etc. So I gather there are 3 coordinate systems we're talking about: screen coordinates, view/device coordinates, user coordinates. (Please correct me if I am wrong.)
The CGContext transform (CTM) maps from user coordinates all the way to screen coordinates. My solution was to maintain my own transform separately which maps from user coordinates to view coordinates. Then I could use it to go back to user coordinates from view coordinates.
My Solution:
In drawRect:
CGAffineTransform scale = CGAffineTransformMakeScale(...);
CGAffineTransform translate = CGAffineTransformMakeTranslation(...);
self.myTransform = CGAffineTransformConcat(translate, scale);
// draw points using user coordinates
In my touch event handler:
CGPoint touch = [gesture locationInView:self]; // touch point in view coords
CGPoint touchUser = CGPointApplyAffineTransform(touch, CGAffineTransformInvert(self.myTransform)); // this does the trick
Alternate Solution:
Another approach is to manually setup an identical context, but I think this is more of a hack.
In my touch event handler:
#import <QuartzCore/QuartzCore.h>
CGPoint touch = [gesture locationInView:self]; // view coords
CGSize layerSize = [self.layer frame].size;
UIGraphicsBeginImageContext(layerSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// as in drawRect:
CGContextScaleCTM(...);
CGContextTranslateCTM(...);
CGPoint touchUser = CGContextConvertPointToUserSpace(context, touch); // now it gives me what I want
UIGraphicsEndImageContext();
The effect I am trying to achieve is a circle of light in an area of darkness, similar to the Pokemon games where you are in a dark cave and have only limited vision surrounding you. From what I have tried and read, I have been unable to create a mask over nodes in Sprite Kit that has intermediate alpha levels; the masks I manage to create all have a hard edge and basically just crop. The Apple developer page for SKCropNode, which has the maskNode property, says: "If the pixel in the mask has an alpha value of less than 0.05, the image pixel is masked out." This unfortunately sounds like the pixels will either be completely masked out or completely included, with no alpha values in between. If what I am trying to say has been hard to follow, here is an image of what I have achieved:
https://www.dropbox.com/s/y5gbk8qvuq4ynh0/iOS%20Simulator%20Screen%20shot%20Jan%2020%2C%202014%201.06.23%20PM.png
and here is an image of what I would like to achieve:
https://www.dropbox.com/s/wtwfdi1mjs2n8e6/iOS%20Simulator%20Screen%20shot%20Jan%2020%2C%202014%201.05.54%20PM.png
The way I managed to get the above result was to mask out the hard-edged circle and then add an image with a gradient going from black on the outside to transparent on the inside. The reason this approach doesn't work is that I need multiple circles, and with that method, when the circles intersect, the darkness outside one circle's transparent region shows over the other.
In conclusion, what I need is a circle that starts dark in the center and then fades out, such that where the circle is dark, the image behind it can be seen, and where the circle is transparent, the image behind it cannot be seen. Again, sorry if this is difficult to follow. Here is the code I am using; some of it was found in other posts.
SKSpriteNode *background = [SKSpriteNode spriteNodeWithColor:[SKColor redColor] size:CGSizeMake(500, 500)];
background.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
SKCropNode *cropNode = [[SKCropNode alloc] init];
SKNode *area = [[SKNode alloc] init];
int x = 65; //radius of the circle
_circleMask = [[SKShapeNode alloc ]init];
CGMutablePathRef circle = CGPathCreateMutable();
CGPathAddArc(circle, NULL, 0, 0, x/2, 0, M_PI*2, YES);
_circleMask.path = circle;
_circleMask.lineWidth = x*2;
_circleMask.strokeColor = [SKColor whiteColor];
_circleMask.name = @"circleMask";
_circleMask.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
//Here is where I just added in the gradient circle To give the desired appearance, but this isn't necessary to the code
//_circleDark = [SKSpriteNode spriteNodeWithImageNamed:@"GradientCircle"];
//_circleDark.position = [cropNode convertPoint:_circleMask.position fromNode:area];
[area addChild:_circleMask];
[cropNode setMaskNode:area];
[cropNode addChild:background];
//[cropNode addChild:_circleDark];
[self addChild:cropNode];
This method has also allowed me to move the circles around, revealing different parts of the image behind them, which is what I want. To do this, I just change _circleMask.position when the user taps the screen.
Also, just to make this clear in case anyone was confused, the black is just the background color of the scene, the picture is on top of that, and then the circle is part of the mask node.
A very simple (and maybe less... or more performant) version of this would be to simply add an SKSpriteNode on top which has your vignette on a transparent background. In other words, if viewed in Photoshop, you would see a decreasing amount of checkerboard visible in the circle as you go from the center out, eventually reaching solid black. When the PNG image is used in your app, this transparency will be preserved when the two sprites are composited.
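A quick sketch of that idea (the asset name is hypothetical):
// Overlay a pre-rendered vignette PNG; its alpha gradient survives compositing.
SKSpriteNode *vignette = [SKSpriteNode spriteNodeWithImageNamed:@"Vignette"];
vignette.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
vignette.zPosition = 100; // keep it above the scene content
[self addChild:vignette];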
I have an idea that I hope will help.
Make a PNG with the gradient you want from white to black with no transparency.
Use a separate sprite node with the PNG for each light you want and add them all to an SKEffectNode or SKCropNode. It doesn't matter which, since both render their children in a separate context. Set each sprite node to screen blend mode.
Then, when adding the parent SKEffectNode or SKCropNode to the scene, set it to multiply blend mode.
In the end, the screening will merge the "lights" together nicely, while the multiply will make the white area transparent.
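A hedged sketch of that setup (the gradient asset name is hypothetical):
// Container rendered in its own context, then multiplied over the scene.
SKEffectNode *lights = [[SKEffectNode alloc] init];
lights.blendMode = SKBlendModeMultiply; // white areas end up fully transparent

// Each light is a white-to-black radial gradient PNG, screen-blended so
// overlapping lights merge smoothly instead of showing hard edges.
SKSpriteNode *light1 = [SKSpriteNode spriteNodeWithImageNamed:@"RadialGradient"];
light1.blendMode = SKBlendModeScreen;
[lights addChild:light1];

SKSpriteNode *light2 = [SKSpriteNode spriteNodeWithImageNamed:@"RadialGradient"];
light2.blendMode = SKBlendModeScreen;
light2.position = CGPointMake(80, 40);
[lights addChild:light2];

[self addChild:lights];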
Possible Duplicate:
Rotate CGPath without changing its position
I searched and tested a variety of code for a couple of hours and I can't get this to work.
I am adding an arbitrary UIBezierPath at a random location to a CAShapeLayer which gets added to a view. I need to rotate the path so that I can handle device rotations. I can rotate the layer instead of the path. I just need the result to be rotated.
I already have methods to handle transforming the bezier path by scaling and translation. It works great, but now I need to simply rotate 90 degrees left or right.
Any recommendations on how to do this?
Basic code:
UIBezierPath *path = <create arbitrary path>
CAShapeLayer *layer = [CAShapeLayer layer];
[self addPathToLayer:layer
fromPath:path];
// I could get the center of the box but where is the box center for the view it is in?
// CGRect box = CGPathGetPathBoundingBox(path.CGPath);
// layer.anchorPoint = ? How to find the center of the box for the anchor point?
// Rotating here appears to rotate around 0,0 of the view
layer.transform = CATransform3DMakeRotation(DegreesToRadians(-90), 0.0, 0.0, 1.0);
I see the following post:
BezierPath Rotation in a UIView
I suppose I could rotate as-is and then translate the path back into place. I just need to figure out what the translation values would be.
I should also state that what I am seeing after I try to rotate is that the image moves off-screen somewhere. I tried rotating 25 degrees to see the movement, and it pivots around the view's origin of (0,0), so that if I rotate 90 degrees the image goes off-screen. I am running these tests WITHOUT rotating the device - just to see how rotation works.
UPDATE #1 - 12/4/2012: For some bizarre reason, if I set the position to a value I found empirically, it moves the rotated bezier path into the correct position after rotation:
layer.position = CGPointMake(280, 60);
These values are guesses from starting/stopping the app and making adjustments. I have no idea why I need to adjust the position on rotation; the anchor point should be in the center of the layer. However, I did find that both the frame and position of a CAShapeLayer are all zero even though the path is set, and the path is in the correct position within the view. The (280, 60) position shifts the path into what would be the center of the path's bounding box when a rotation of +90 is made. If I change the rotation value I need to adjust the position. I should not have to make this manual adjustment.
I think a last resort is to somehow convert the bezier path to an image and then add it. I found that if I set the layer content to an image, then rotate, it rotates about its center point with no positional adjustment needed. Not so with setting the path.
UPDATE #2 12/4/2012 - I tried setting the frame and with fiddling I get it to center as follows:
CGRect box = CGPathGetPathBoundingBox(path.CGPath);
CGRect rect = CGRectMake(0, 0, box.origin.x + (3.5 * box.size.width), box.origin.y + (3.5 * box.size.height));
layer.frame = rect;
layer.transform = CATransform3DMakeRotation(DegreesToRadians(90), 0.0, 0.0, 1.0);
Why multiply by 3.5? I have no clue. I found that adding the box origin to about 3.5 times the size of the box shifts the rotated CAShapeLayer path to about where it should be.
There must be a better way to do this. This is a better solution than my previous post since the frame size does not depend on the rotation angle. I just don't know why the frame needs to be set to the value I am setting it to. I THOUGHT it should be
CGRectMake(0, 0, box.origin.x + (box.size.width / 2), box.origin.y + (box.size.height / 2));
However, it shifts the image to the left too much.
Another clue I found is that if I set the frame to [self view].frame (the frame of the entire parent view, which is the screen of the iPhone), then rotate, the rotation point is the center of the screen, and the path/image orbits around this center point. This is why I tried shifting the frame to what the center of the path should be, so that it orbits around the box center.
UPDATE #3 12/4/2012 - I tried to render the layer as an image. However, it appears that just setting the path of a layer does not make it an "image" in the layer, since the result is empty:
CGRect box = CGPathGetPathBoundingBox(path.CGPath);
layer.frame = box;
UIImage *image = [ImageHelper imageFromLayer:layer]; // ImageHelper library I created
CAShapeLayer *newLayer = [CAShapeLayer layer];
newLayer.frame = CGRectMake(box.origin.x, box.origin.y, image.size.width, image.size.height);
newLayer.contents = (id) image.CGImage;
It appears that rotating the layer with its path set is no different than simply rotating the bezier path itself. I will go back to rotating the bezier path and see if I can fiddle with the position elements or something. There's got to be a solution to this.
Goal: Rotate a UIBezierPath around its center point within the view it was originally created in.
UPDATE #4 12/4/2012 - I ran a series of tests measuring the values needed for translation in order to place a UIBezierPath in its previous center location.
CGAffineTransform rotate = CGAffineTransformMakeRotation(DegreesToRadians(-15));
[path applyTransform:rotate];
// CGAffineTransform translate = CGAffineTransformMakeTranslation(-110, 70); // -45
CGAffineTransform translate = CGAffineTransformMakeTranslation(-52, -58); // -15
[path applyTransform:translate];
However, the ratios of the x/y translations do not correspond, so I cannot extrapolate the required translation from the angle. It appears that CGAffineTransformMakeRotation uses some arbitrary anchor point for the rotation, which at the moment appears to be maybe (viewWidth / 2, 0). I am making this much harder than it needs to be. There is something I am missing to make a simple rotation that maintains the center point. I just need to "spin" the path 90 degrees left or right.
UPDATE #5 12/4/2012 - After running additional tests, it appears that the anchor point for rotating a UIBezierPath is the origin from which all of the points were drawn. In this case the origin is (0,0) and all of the points are relative to it. Therefore, when a rotation is applied, it occurs around that origin, which is why the path shifts up-right on -90 and up-left on 90. I need to somehow set the anchor point for the rotation to the center of the bounding box so the path "spins" around its center rather than the original origin point. 12 hours spent on this one issue.
After some detailed analysis and graphing the bounding box on paper, I confirmed my assertion that the (0,0) origin is the pivot.
A solution to this problem is to translate the path (the underlying matrix) so that the center of its bounding box sits at the origin, rotate, and then translate the path back to its original location.
Here's how to rotate a UIBezierPath 90 degrees:
CGRect box = CGPathGetPathBoundingBox(path.CGPath);
// Move the center of the bounding box to the origin.
CGAffineTransform translate = CGAffineTransformMakeTranslation(-1 * (box.origin.x + (box.size.width / 2)), -1 * (box.origin.y + (box.size.height / 2)));
[path applyTransform:translate];
// Rotate around the origin.
CGAffineTransform rotate = CGAffineTransformMakeRotation(DegreesToRadians(90));
[path applyTransform:rotate];
// Move the path back to its original location.
translate = CGAffineTransformMakeTranslation((box.origin.x + (box.size.width / 2)), (box.origin.y + (box.size.height / 2)));
[path applyTransform:translate];
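The same three steps can also be concatenated into a single transform; a sketch, assuming the same DegreesToRadians macro as above:
// Build translate-to-origin, rotate, translate-back as one matrix.
// Operations are applied to points in reverse of the order appended here.
CGPoint center = CGPointMake(CGRectGetMidX(box), CGRectGetMidY(box));
CGAffineTransform spin = CGAffineTransformIdentity;
spin = CGAffineTransformTranslate(spin, center.x, center.y);
spin = CGAffineTransformRotate(spin, DegreesToRadians(90));
spin = CGAffineTransformTranslate(spin, -center.x, -center.y);
[path applyTransform:spin];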
Plug in -90 degrees to rotate in the other direction.
This formula can be used when rotating the device from portrait to landscape and vice/versa.
I still don't think this is the ideal solution but the result is what I need for now.
If anyone has a better solution for this please post.
UPDATE 12/7/2012 - I found what I think is the best solution, and it is very simple, as I thought it would be. Rather than using rotate, translate, and scale methods on the bezier path, I instead extract the array of points as CGPoint objects and scale/translate them as needed based on the view size and orientation. I then create a new bezier path and set the layer's path to it.
The result is perfect scaling, translation, and rotation.
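A hedged sketch of that final approach, assuming the original points are kept as NSValue-wrapped CGPoints (the method name and parameters are hypothetical):
// Rebuild the path from stored model points, scaled and translated for the
// current view size/orientation; the rotation is baked into the point mapping.
- (UIBezierPath *)pathFromPoints:(NSArray *)points
                           scale:(CGFloat)scale
                          offset:(CGPoint)offset {
    UIBezierPath *path = [UIBezierPath bezierPath];
    for (NSUInteger i = 0; i < points.count; i++) {
        CGPoint p = [points[i] CGPointValue];
        CGPoint q = CGPointMake(p.x * scale + offset.x,
                                p.y * scale + offset.y);
        if (i == 0) {
            [path moveToPoint:q];
        } else {
            [path addLineToPoint:q];
        }
    }
    [path closePath];
    return path;
}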