Understand convertRect:toView:, convertRect:fromView:, convertPoint:toView: and convertPoint:fromView: methods - ios
I'm trying to understand the functionalities of these methods. Could you provide me with a simple use case to understand their semantics?
From the documentation, for example, convertPoint:fromView: method is described as follows:
Converts a point from the coordinate system of a given view to that of the receiver.
What does the coordinate system mean? What about the receiver?
For example, does it make sense using convertPoint:fromView: like the following?
CGPoint p = [view1 convertPoint:view1.center fromView:view1];
Using NSLog utility, I've verified that p value coincides with view1's center.
Thank you in advance.
EDIT: For those interested, I've created a simple code snippet to understand these methods.
UIView* view1 = [[UIView alloc] initWithFrame:CGRectMake(100, 100, 150, 200)];
view1.backgroundColor = [UIColor redColor];
NSLog(@"view1 frame: %@", NSStringFromCGRect(view1.frame));
NSLog(@"view1 center: %@", NSStringFromCGPoint(view1.center));
CGPoint originInWindowCoordinates = [self.window convertPoint:view1.bounds.origin fromView:view1];
NSLog(@"convertPoint:fromView: %@", NSStringFromCGPoint(originInWindowCoordinates));
CGPoint originInView1Coordinates = [self.window convertPoint:view1.frame.origin toView:view1];
NSLog(@"convertPoint:toView: %@", NSStringFromCGPoint(originInView1Coordinates));
In both cases self.window is the receiver, but there is a difference. In the first case, the point passed to convertPoint: is expressed in view1's coordinates. The output is the following:
convertPoint:fromView: {100, 100}
In the second one, instead, the point is expressed in the coordinates of the superview (self.window). The output is the following:
convertPoint:toView: {0, 0}
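The two results mirror each other: convertPoint:fromView: adds view1's offset in the window, and convertPoint:toView: subtracts it. A tiny arithmetic sketch of this (plain Swift, no UIKit; the variable names are mine, not API):

```swift
// view1 sits at (100, 100) in the window (its frame.origin).
let view1OriginInWindow = (x: 100.0, y: 100.0)

// convertPoint:fromView: converts view1 coordinates to window
// coordinates, i.e. it adds the offset:
let boundsOriginInWindow = (x: 0.0 + view1OriginInWindow.x,
                            y: 0.0 + view1OriginInWindow.y)

// convertPoint:toView: converts window coordinates to view1
// coordinates, i.e. it subtracts the offset:
let frameOriginInView1 = (x: 100.0 - view1OriginInWindow.x,
                          y: 100.0 - view1OriginInWindow.y)

print(boundsOriginInWindow)  // the window sees view1's (0, 0) at (100, 100)
print(frameOriginInView1)    // view1 sees its own frame origin at (0, 0)
```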
Each view has its own coordinate system - with an origin at 0,0 and a width and height. This is described in the bounds rectangle of the view. The frame of the view, however, will have its origin at the point within the bounds rectangle of its superview.
The outermost view of your view hierarchy has its origin at (0,0), which corresponds to the top left of the screen in iOS.
If you add a subview at 20,30 to this view, then a point at 0,0 in the subview corresponds to a point at 20,30 in the superview. This conversion is what those methods are doing.
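Under the hood this is plain offset arithmetic. Here is a minimal sketch of the 20,30 example (a plain Swift struct standing in for UIKit, so the names are illustrative, not the real API):

```swift
// A view's frame.origin tells you where its own (0, 0) sits in its superview.
struct SimpleView {
    var originInSuperview: (x: Double, y: Double)
}

// Sketch of [superview convertPoint:p fromView:subview]: add the subview's origin.
func convertToSuperview(_ p: (x: Double, y: Double),
                        from v: SimpleView) -> (x: Double, y: Double) {
    (x: p.x + v.originInSuperview.x, y: p.y + v.originInSuperview.y)
}

let subview = SimpleView(originInSuperview: (x: 20, y: 30))
let origin = convertToSuperview((x: 0, y: 0), from: subview)
// The subview's (0, 0) is the superview's (20, 30).
print(origin)
```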
Your example above is pointless (no pun intended) since it converts a point from a view to itself, so nothing changes. More commonly you would find out where some point of a view is in relation to its superview, for example to test whether a view is moving off the screen:
CGPoint originInSuperview = [superview convertPoint:CGPointZero fromView:subview];
The "receiver" is a standard Objective-C term for the object that is receiving the message (methods are also known as messages), so in my example here the receiver is superview.
I always find this confusing so I made a playground where you can visually explore what the convert function does. This is done in Swift 3 and Xcode 8.1b:
import UIKit
import PlaygroundSupport
class MyViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
// Main view
view.backgroundColor = .black
view.frame = CGRect(x: 0, y: 0, width: 500, height: 500)
// Red view
let redView = UIView(frame: CGRect(x: 20, y: 20, width: 460, height: 460))
redView.backgroundColor = .red
view.addSubview(redView)
// Blue view
let blueView = UIView(frame: CGRect(x: 20, y: 20, width: 420, height: 420))
blueView.backgroundColor = .blue
redView.addSubview(blueView)
// Orange view
let orangeView = UIView(frame: CGRect(x: 20, y: 20, width: 380, height: 380))
orangeView.backgroundColor = .orange
blueView.addSubview(orangeView)
// Yellow view
let yellowView = UIView(frame: CGRect(x: 20, y: 20, width: 340, height: 100))
yellowView.backgroundColor = .yellow
orangeView.addSubview(yellowView)
// Let's try to convert now
var resultFrame = CGRect.zero
let randomRect: CGRect = CGRect(x: 0, y: 0, width: 100, height: 50)
/*
func convert(CGRect, from: UIView?)
Converts a rectangle from the coordinate system of another view to that of the receiver.
*/
// The following line converts a rectangle (randomRect) from the coordinate system of yellowView to that of self.view:
resultFrame = view.convert(randomRect, from: yellowView)
// Try also one of the following to get a feeling of how it works:
// resultFrame = view.convert(randomRect, from: orangeView)
// resultFrame = view.convert(randomRect, from: redView)
// resultFrame = view.convert(randomRect, from: nil)
/*
func convert(CGRect, to: UIView?)
Converts a rectangle from the receiver’s coordinate system to that of another view.
*/
// The following line converts a rectangle (randomRect) from the coordinate system of yellowView to that of self.view
resultFrame = yellowView.convert(randomRect, to: view)
// Same as what we did above, using "from:"
// resultFrame = view.convert(randomRect, from: yellowView)
// Also try:
// resultFrame = orangeView.convert(randomRect, to: view)
// resultFrame = redView.convert(randomRect, to: view)
// resultFrame = orangeView.convert(randomRect, to: nil)
// Add an overlay with the calculated frame to self.view
let overlay = UIView(frame: resultFrame)
overlay.backgroundColor = UIColor(white: 1.0, alpha: 0.9)
overlay.layer.borderColor = UIColor.black.cgColor
overlay.layer.borderWidth = 1.0
view.addSubview(overlay)
}
}
var ctrl = MyViewController()
PlaygroundPage.current.liveView = ctrl.view
Remember to show the Assistant Editor (⎇⌘⏎) in order to see the views, it should look like this:
Feel free to contribute more examples here or in this gist.
Here's an explanation in plain English.
Use this when you want to convert the frame of a subview (aView, a subview of [aView superview]) into the coordinate space of another view (self).
// So here I want to take some subview and put it in my view's coordinate space
_originalFrame = [[aView superview] convertRect: aView.frame toView: self];
Every view in iOS has a coordinate system. A coordinate system is just like a graph, with an x axis (horizontal line) and a y axis (vertical line). The point at which the lines intersect is called the origin. A point is represented as (x, y). For example, (2, 1) means the point is 2 points to the right of the origin and 1 point down.
You can read up more about coordinate systems here - http://en.wikipedia.org/wiki/Coordinate_system
But what you need to know is that, in iOS, every view has its OWN coordinate system, where the top-left corner is the origin. The x axis increases to the right, and the y axis increases downward.
For the converting points question, take this example.
There is a view, called V1, which is 100 points wide and 100 points high. Inside it there is another view, called V2, with frame (10, 10, 50, 50), which means that (10, 10) is the point in V1's coordinate system where the top-left corner of V2 is located, and (50, 50) is V2's width and height. Now take a point INSIDE V2's coordinate system, say (20, 20). What would that point be in V1's coordinate system? That is what these methods are for (of course you could calculate it yourself, but they save you the extra work). For the record, the point in V1 would be (30, 30).
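The V1/V2 arithmetic can be checked in a couple of lines (plain Swift, no UIKit; this is just the math the convert methods do for you):

```swift
let v2OriginInV1 = (x: 10.0, y: 10.0)   // V2's frame origin inside V1
let pointInV2 = (x: 20.0, y: 20.0)      // a point in V2's own coordinate system

// Sketch of [V1 convertPoint:pointInV2 fromView:V2]: add V2's origin.
let pointInV1 = (x: pointInV2.x + v2OriginInV1.x,
                 y: pointInV2.y + v2OriginInV1.y)
print(pointInV1)  // x: 30, y: 30, as stated above

// And the inverse, [V1 convertPoint:... toView:V2], subtracts instead:
let backInV2 = (x: pointInV1.x - v2OriginInV1.x,
                y: pointInV1.y - v2OriginInV1.y)
print(backInV2)  // back to x: 20, y: 20
```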
Hope this helps.
Thank you all for posting the question and your answers: It helped me get this sorted out.
My view controller has its normal view.
Inside that view there are a number of grouping views that do little more than give their child views a clean interaction with auto layout constraints.
Inside one of those grouping views I have an Add button that presents a popover view controller where the user enters some information.
view
--groupingView
----addButton
During device rotation the view controller is alerted via the UIPopoverControllerDelegate callback popoverController:willRepositionPopoverToRect:inView:
- (void)popoverController:(UIPopoverController *)popoverController willRepositionPopoverToRect:(inout CGRect *)rect inView:(inout UIView *__autoreleasing *)view
{
*rect = [self.addButton convertRect:self.addButton.bounds toView:*view];
}
The essential part that comes from the explanation given by the first two answers above was that the rect I needed to convert from was the bounds of the add button, not its frame.
I haven't tried this with a more complex view hierarchy, but I suspect that by using the view supplied in the method call (inView:) we get around the complications of multi-tiered leaf view kinds of ugliness.
I used this post in my own case; hope it helps another reader in the future.
A view can only see its immediate children and its parent view. It can't see its grandparent or its grandchild views.
So, in my case, I have a grand parent view called self.view, in this self.view I have added subviews called self.child1OfView, self.child2OfView. In self.child1OfView, I have added subviews called self.child1OfView1, self.child2OfView1.
Now if I physically move self.child1OfView1 outside the boundary of self.child1OfView to another spot on self.view, then to calculate the new position of self.child1OfView1 within self.view:
CGPoint newPoint = [self.view convertPoint:self.child1OfView1.center fromView:self.child1OfView];
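Conversions across more than one level just accumulate the offsets along the superview chain. A sketch with made-up frames (hypothetical numbers, not taken from the answer above; plain Swift, no UIKit):

```swift
// Suppose child1OfView sits at (50, 60) inside view, and child1OfView1's
// center is (30, 40) in child1OfView's coordinate system.
let child1OfViewOrigin = (x: 50.0, y: 60.0)
let grandchildCenter = (x: 30.0, y: 40.0)

// Sketch of [self.view convertPoint:center fromView:self.child1OfView]:
// one hop up the chain, so one offset is added.
let centerInView = (x: grandchildCenter.x + child1OfViewOrigin.x,
                    y: grandchildCenter.y + child1OfViewOrigin.y)
print(centerInView)  // x: 80, y: 100
```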
The code below shows how it actually works.
let scrollViewTemp = UIScrollView.init(frame: CGRect.init(x: 10, y: 10, width: deviceWidth - 20, height: deviceHeight - 20))
override func viewDidLoad() {
super.viewDidLoad()
scrollViewTemp.backgroundColor = UIColor.lightGray
scrollViewTemp.contentSize = CGSize.init(width: 2000, height: 2000)
self.view.addSubview(scrollViewTemp)
let viewTemp = UIView.init(frame: CGRect.init(x: 100, y: 100, width: 150, height: 150))
viewTemp.backgroundColor = UIColor.green
self.view.addSubview(viewTemp)
let viewSecond = UIView.init(frame: CGRect.init(x: 100, y: 300, width: 300, height: 300)) // y: 300, so the printed results below line up
viewSecond.backgroundColor = UIColor.red
self.view.addSubview(viewSecond)
let convertedFrame = self.view.convert(viewTemp.frame, from: scrollViewTemp)
// Treats viewTemp.frame's values as scrollViewTemp coordinates, so the
// result is shifted by scrollViewTemp's origin (10, 10).
print(convertedFrame)
/* Take the point (10, 10) in viewTemp's coordinate system and express it
   in viewSecond's coordinate system. */
let point = viewSecond.convert(CGPoint(x: 10, y: 10), from: viewTemp)
//output: (10.0, -190.0)
print(point)
/* Take the point (10, 10) in viewSecond's coordinate system and express it
   in viewTemp's coordinate system. */
let point1 = viewSecond.convert(CGPoint(x: 10, y: 10), to: viewTemp)
//output: (10.0, 210.0)
print(point1)
/* Take the rect (10, 10, 20, 20) in viewSecond's coordinate system and
   express it in viewTemp's coordinate system. */
let rect1 = viewSecond.convert(CGRect(x: 10, y: 10, width: 20, height: 20), to: viewTemp)
//output: (10.0, 210.0, 20.0, 20.0)
print(rect1)
/* Take the rect (10, 10, 20, 20) in viewTemp's coordinate system and
   express it in viewSecond's coordinate system. */
let rect = viewSecond.convert(CGRect(x: 10, y: 10, width: 20, height: 20), from: viewTemp)
//output: (10.0, -190.0, 20.0, 20.0)
print(rect)
}
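The printed values follow from the same offset arithmetic, given viewTemp at (100, 100) and viewSecond at (100, 300) in self.view. A plain-Swift check (no UIKit needed):

```swift
let viewTempOrigin = (x: 100.0, y: 100.0)    // viewTemp.frame.origin in self.view
let viewSecondOrigin = (x: 100.0, y: 300.0)  // viewSecond.frame.origin in self.view
let p = (x: 10.0, y: 10.0)

// viewSecond.convert(p, from: viewTemp):
// viewTemp coords -> self.view coords -> viewSecond coords
let fromResult = (x: p.x + viewTempOrigin.x - viewSecondOrigin.x,
                  y: p.y + viewTempOrigin.y - viewSecondOrigin.y)
print(fromResult)  // x: 10.0, y: -190.0

// viewSecond.convert(p, to: viewTemp): the opposite direction
let toResult = (x: p.x + viewSecondOrigin.x - viewTempOrigin.x,
                y: p.y + viewSecondOrigin.y - viewTempOrigin.y)
print(toResult)    // x: 10.0, y: 210.0
```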
I read the answers and understand the mechanics, but I think the final example is not correct. According to the API docs, the center property of a view contains the center point of the view in the superview's coordinate system.
If this is the case, then it would not make sense to ask the superview to convert the center of a subview FROM the subview's coordinate system, because the value is not in the subview's coordinate system. What would make sense is the opposite, i.e. converting from the superview's coordinate system to that of a subview...
You can do it in two ways (both should yield the same value):
CGPoint centerInSubview = [subview convertPoint:subview.center fromView:subview.superview];
or
CGPoint centerInSubview = [subview.superview convertPoint:subview.center toView:subview];
Am I way off in understanding how this should work?
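Both forms should indeed agree; they perform the same subtraction. A quick sketch with hypothetical numbers (plain Swift, not the UIKit calls themselves):

```swift
// A subview at origin (10, 10), size (50, 50) in its superview,
// so subview.center is (35, 35) in SUPERVIEW coordinates.
let subviewOrigin = (x: 10.0, y: 10.0)
let center = (x: 35.0, y: 35.0)

// Sketch of [subview convertPoint:center fromView:superview]
// and, equivalently, [superview convertPoint:center toView:subview]:
// subtract the subview's origin.
let centerInSubview = (x: center.x - subviewOrigin.x,
                       y: center.y - subviewOrigin.y)
print(centerInSubview)  // x: 25, y: 25, the middle of the subview's own bounds
```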
One more important point about using these APIs. Be sure that the parent view chain is complete between the rect you are converting and the to/from view.
For example - aView, bView, and cView -
aView is a subview of bView
we want to convert aView.frame to cView
If we try to execute the method before bView has been added as a subview of cView, we will get back a bogus response. Unfortunately, there is no protection built into the methods for this case. This may seem obvious, but it is something to be aware of when the conversion goes through a long chain of parents.