Whatever size I give it at allocation time, it displays at a fixed size only. Is it possible to increase it?
Code:
activityIndicator = [[UIActivityIndicatorView alloc] initWithFrame:
CGRectMake(142.00, 212.00, 80.0, 80.0)];
[[self view] addSubview:activityIndicator];
[activityIndicator sizeToFit];
activityIndicator.autoresizingMask = (UIViewAutoresizingFlexibleLeftMargin |
UIViewAutoresizingFlexibleRightMargin |
UIViewAutoresizingFlexibleTopMargin |
UIViewAutoresizingFlexibleBottomMargin);
activityIndicator.hidesWhenStopped = YES;
activityIndicator.activityIndicatorViewStyle = UIActivityIndicatorViewStyleWhiteLarge;
The following will create an activity indicator 15 points wide (the default 20-point gray style, scaled by 0.75):
#import <QuartzCore/QuartzCore.h>
...
UIActivityIndicatorView *activityIndicator = [[[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleGray] autorelease];
activityIndicator.transform = CGAffineTransformMakeScale(0.75, 0.75);
[self addSubview:activityIndicator];
While I understand the sentiment of TechZen's answer, I don't think adjusting the size of a UIActivityIndicatorView by a relatively small amount is really a violation of Apple's standardized interface idioms - whether an activity indicator is 20 points or 15 points won't change a user's interpretation of what's going on.
Swift 3.0 & Swift 4.0
self.activityIndi.transform = CGAffineTransform(scaleX: 3, y: 3)
The size is fixed by the style. It's a standardized interface element, so the API doesn't let you fiddle with it.
However, you could probably apply a scaling transform to it. I'm not sure how that would affect it visually, however.
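For example, a minimal sketch of such a scaling transform in Swift (the spinner name is hypothetical):
// Scale the indicator to 150% of its styled size. The transform changes
// only the rendered size; the view's frame property is unaffected.
spinner.transform = CGAffineTransform(scaleX: 1.5, y: 1.5)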
Just from a UI design perspective, it's usually better to leave these common standardized elements alone. Users have been taught that certain elements appear at a certain size and that they mean specific things. Altering the standard appearance alters the interface grammar and confuses the user.
It is possible to resize a UIActivityIndicatorView.
CGAffineTransform transform = CGAffineTransformMakeScale(1.5f, 1.5f);
activityIndicator.transform = transform;
The original scale is 1.0f; increase or decrease the size from there accordingly.
Swift3
let activityIndicator = UIActivityIndicatorView(activityIndicatorStyle: UIActivityIndicatorViewStyle.gray)
activityIndicator.frame = CGRect(x: 0, y: 0, width: 50, height: 50)
let transform: CGAffineTransform = CGAffineTransform(scaleX: 1.5, y: 1.5)
activityIndicator.transform = transform
activityIndicator.center = self.view.center
activityIndicator.startAnimating()
self.view.addSubview(activityIndicator)
Here is an extension that works with Swift 3.0 and checks to prevent zero scaling (or whatever value you want to prohibit):
extension UIActivityIndicatorView {
func scale(factor: CGFloat) {
guard factor > 0.0 else { return }
transform = CGAffineTransform(scaleX: factor, y: factor)
}
}
Call it like so to scale to 40 pts (2x):
activityIndicatorView.scale(factor: 2.0)
There are also lots of other useful CGAffineTransform tricks you can play with. For more details, please see the Apple Developer Library reference:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Reference/CGAffineTransform/Reference/reference.html
Good luck!
The best you can do is use the whiteLarge style.
let i = UIActivityIndicatorView(activityIndicatorStyle: UIActivityIndicatorViewStyle.whiteLarge)
Increasing the frame of a UIActivityIndicatorView does not change the size of the indicator proper, as you can see in these pictures.
activityIndicator.transform = CGAffineTransform(scaleX: 1.75, y: 1.75);
This worked for me to transform the size of the indicator.
Yes, as already answered, the visible size of a UIActivityIndicatorView can be changed using the transform property. To allow setting/getting the exact indicator size, I added a simple extension:
extension UIActivityIndicatorView {
var imageSize: CGSize {
let imgView = subviews.first { $0 is UIImageView }
return imgView?.bounds.size ?? .zero
}
var radius: CGFloat {
get {
imageSize.width * scale / 2.0
}
set {
let w = imageSize.width
scale = (w == 0.0) ? 0 : newValue * 2.0 / w
}
}
var scale: CGFloat {
get {
// just return the x scale component, as this property is only meaningful
// when the transform is a pure scale with equal x and y components
transform.a
}
set {
transform = CGAffineTransform(scaleX: newValue, y: newValue)
}
}
}
With this extension you can simply write, for example:
indicatorView.radius = 16.0
It is also useful when you need to set exact spacing between the indicator and some other view, since the scale transform does not change the UIActivityIndicatorView's frame.
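For instance, a sketch of using radius to space a neighboring view (statusLabel is a hypothetical name), relying on the fact that center is unaffected by the scale transform:
// Position a label so it sits just below the indicator's visible edge.
// The scale transform leaves center where it was, so offset from the
// center by the visual radius plus some padding.
let padding: CGFloat = 8.0
statusLabel.center = CGPoint(
x: indicatorView.center.x,
y: indicatorView.center.y + indicatorView.radius + padding
)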
I need to animate a CATextLayer's bounds.size.height, position, and fontSize. When I add them to a CAAnimationGroup, the text jitters during the animation, just like this:
https://youtu.be/HfC1ZX-pbyM
The jittering of the text's tracking values (spacing between characters) seems to occur while animating fontSize with bounds.size.height AND/OR position. I've isolated fontSize, and it performs well on its own.
How can I prevent the text from jittering in CATextLayer if I animate bounds and font size at the same time?
EDIT
I've moved on from animating bounds. Now, I only care about fontSize + position. Here are two videos showing the difference.
fontSize only (smooth): https://youtu.be/FDPPGF_FzLI
fontSize + position (jittery): https://youtu.be/3rFTsp7wBzk
Here is the code for that.
let startFontSize: CGFloat = 16
let endFontSize: CGFloat = 30
let startPosition: CGPoint = CGPoint(x: 40, y: 100)
let endPosition: CGPoint = CGPoint(x: 20, y: 175)
// Initialize the layer
textLayer = CATextLayer()
textLayer.string = "Hello how are you?"
textLayer.font = UIFont.systemFont(ofSize: startFontSize, weight: UIFont.Weight.semibold)
textLayer.fontSize = startFontSize
textLayer.alignmentMode = kCAAlignmentLeft
textLayer.foregroundColor = UIColor.black.cgColor
textLayer.contentsScale = UIScreen.main.scale
textLayer.isWrapped = true
textLayer.backgroundColor = UIColor.lightGray.cgColor
textLayer.anchorPoint = CGPoint(x: 0, y: 0)
textLayer.position = startPosition
textLayer.bounds.size = CGSize(width: 450, height: 50)
view.layer.addSublayer(textLayer)
// Animate
let damping: CGFloat = 20
let mass: CGFloat = 1.2
var animations = [CASpringAnimation]()
let fontSizeAnim = CASpringAnimation(keyPath: "fontSize")
fontSizeAnim.fromValue = startFontSize
fontSizeAnim.toValue = endFontSize
fontSizeAnim.damping = damping
fontSizeAnim.mass = mass
fontSizeAnim.duration = fontSizeAnim.settlingDuration
animations.append(fontSizeAnim)
let positionAnim = CASpringAnimation(keyPath: "position.y")
positionAnim.fromValue = textLayer.position.y
positionAnim.toValue = endPosition.y
positionAnim.damping = damping
positionAnim.mass = mass
positionAnim.duration = positionAnim.settlingDuration
animations.append(positionAnim)
let animGroup = CAAnimationGroup()
animGroup.animations = animations
animGroup.duration = fontSizeAnim.settlingDuration
animGroup.isRemovedOnCompletion = true
animGroup.autoreverses = true
textLayer.add(animGroup, forKey: nil)
My device is running iOS 11.0.
EDIT 2
I've broken down each animation (fontSize only, and fontSize + position) frame-by-frame. In each video, I'm progressing 1 frame at a time.
In the fontSize only video (https://youtu.be/DZw2pMjDcl8), each frame yields an increase in fontSize, so there's no choppiness.
In the fontSize + position video (https://youtu.be/_idWte92F38), position is updated in every frame, but fontSize is not. fontSize increases in only about 60% of frames, meaning it isn't animating in sync with position, which causes the perceived choppiness.
So maybe the right question is: why does fontSize animate in each frame when it's the only animation added to a layer, but not when added as part of CAAnimationGroup in conjunction with the position animation?
Apple DTS believes this issue is a bug. A report has been filed.
In the meantime, I'll be using CADisplayLink to synchronize the redrawing of CATextLayer.fontSize to the refresh rate of the device, which will redraw the layer with the appropriate fontSize in each frame.
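A rough sketch of that CADisplayLink approach (the property names and the linear easing here are my own, not final code):
// Drive fontSize manually so the layer is redrawn with a new size on
// every frame. textLayer, startFontSize, and endFontSize are assumed
// to be properties of the view controller.
var displayLink: CADisplayLink?
var animationStartTime: CFTimeInterval = 0

func beginFontSizeAnimation() {
animationStartTime = CACurrentMediaTime()
displayLink = CADisplayLink(target: self, selector: #selector(step(_:)))
displayLink?.add(to: .main, forMode: .common)
}

@objc func step(_ link: CADisplayLink) {
let duration: CFTimeInterval = 0.5
let t = min((CACurrentMediaTime() - animationStartTime) / duration, 1.0)
// Disable implicit animations so the layer adopts the new size immediately.
CATransaction.begin()
CATransaction.setDisableActions(true)
textLayer.fontSize = startFontSize + (endFontSize - startFontSize) * CGFloat(t)
CATransaction.commit()
if t >= 1.0 { link.invalidate() }
}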
Edit
After tinkering with CADisplayLink for a day or so, drawing to the correct fontSize proved difficult, especially when paired with a custom timing function. So, I'm giving up on CATextLayer altogether and going back to UILabel.
In WWDC '17's Advanced Animations with UIKit, Apple recommends "view morphing" to animate between two label states — that is, the translation, scaling, and opacity blending of two views. UIViewPropertyAnimator provides a lot of flexibility for this, like blending multiple timing functions and scrubbing. View morphing is also useful for transitioning between 2 text values without having to fade out the text representation, changing text, and fading back in.
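A minimal sketch of that morphing idea with UIViewPropertyAnimator, assuming two pre-laid-out labels (smallLabel, bigLabel) showing the same text at different sizes:
// Cross-fade while the two labels scale toward each other.
bigLabel.alpha = 0
bigLabel.transform = CGAffineTransform(scaleX: 0.5, y: 0.5)

let animator = UIViewPropertyAnimator(duration: 0.4, dampingRatio: 0.8) {
self.smallLabel.alpha = 0
self.smallLabel.transform = CGAffineTransform(scaleX: 2.0, y: 2.0)
self.bigLabel.alpha = 1
self.bigLabel.transform = .identity
}
animator.startAnimation()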
I do hope Apple can beef up CATextLayer support for non-interactive animations, as I'd prefer using one view to animate the same text representation.
Good day, I am trying to use an SKEmitterNode in Swift, but I can't seem to change its width, so the particles only cover half of the screen.
My code:
if let particles = SKEmitterNode(fileNamed: "Snow.sks") {
particles.position = CGPoint(x: frame.size.width/2, y: frame.size.height)
particles.targetNode = self.scene
particles.zPosition = 999
addChild(particles)
}
How can I make the particles cover the whole screen width?
After looking at the so-called "emitter editor", as suggested by @Knight0fDragon, I was able to find the right parameter: particlePositionRange.
if let particles = SKEmitterNode(fileNamed: "Snow.sks") {
particles.position = CGPoint(x: frame.size.width/2, y: frame.size.height)
particles.targetNode = self.scene
// frame.size.width covers the full width of the screen.
particles.particlePositionRange = CGVector(dx: frame.size.width, dy: frame.size.height)
particles.zPosition = 999
addChild(particles)
}
Together, position and particlePositionRange get you to your goal.
From the documentation:
particlePositionRange
Declaration
var particlePositionRange: CGVector { get set }
Discussion
The default value is (0.0, 0.0). If a component is non-zero, the same component of a particle's position is randomly determined and may vary by plus or minus half of the range value.
I've encountered a problem with code I'd written to cut off the corners of a UILabel (or, indeed, any UIView-derived object to which you can add sublayers) -- I do have to thank Kurt Revis for his answer to Use a CALayer to add a diagonal banner/badge to the corner of a UITableViewCell that pointed me in this direction.
I don't have a problem if the corner overlays a solid color -- it's simple enough to make the cut-off corner match that color. But if the corner overlays an image, how would you let the image show through?
I've searched SO for anything similar to this problem, but most of those answers have to do with cells in tables and all I'm doing here is putting a label on a screen's view.
Here's the code I use:
-(void)returnChoppedCorners:(UIView *)viewObject
{
NSLog(#"Object Width = %f", viewObject.layer.frame.size.width);
NSLog(#"Object Height = %f", viewObject.layer.frame.size.height);
CALayer* bannerLeftTop = [CALayer layer];
bannerLeftTop.backgroundColor = [UIColor blackColor].CGColor;
// or whatever color the background is
bannerLeftTop.bounds = CGRectMake(0, 0, 25, 25);
bannerLeftTop.anchorPoint = CGPointMake(0.5, 1.0);
bannerLeftTop.position = CGPointMake(10, 10);
bannerLeftTop.affineTransform = CGAffineTransformMakeRotation(-45.0 / 180.0 * M_PI);
[viewObject.layer addSublayer:bannerLeftTop];
CALayer* bannerRightTop = [CALayer layer];
bannerRightTop.backgroundColor = [UIColor blackColor].CGColor;
bannerRightTop.bounds = CGRectMake(0, 0, 25, 25);
bannerRightTop.anchorPoint = CGPointMake(0.5, 1.0);
bannerRightTop.position = CGPointMake(viewObject.layer.frame.size.width - 10.0, 10.0);
bannerRightTop.affineTransform = CGAffineTransformMakeRotation(45.0 / 180.0 * M_PI);
[viewObject.layer addSublayer:bannerRightTop];
}
I'll be adding similar code to do the BottomLeft and BottomRight corners, but, right now, those are the corners that overlay an image. bannerLeftTop and bannerRightTop are actually squares rotated over the corners against a black background. Making them clear only lets the underlying UILabel background color show through, not the image. The same goes for using the z property. Is masking the answer? Or should I be working with the underlying image instead?
I'm also encountering a problem with the Height and Width being passed to this method -- they don't match the constrained Height and Width of the object. But we'll save that for another question.
What you need to do, instead of drawing an opaque corner triangle over the label, is mask the label so its corners aren't drawn onto the screen.
Since iOS 8.0, UIView has a maskView property, so we don't actually need to drop to the Core Animation level to do this. We can draw an image to use as a mask, with the appropriate corners clipped. Then we'll create an image view to hold the mask image, and set it as the maskView of the label (or whatever).
The only problem is that (in my testing) UIKit won't resize the mask view automatically, either with constraints or autoresizing. We have to update the mask view's frame “manually” if the masked view is resized.
I realize your question is tagged objective-c, but I developed my answer in a Swift playground for convenience. It shouldn't be hard to translate this to Objective-C. I didn't do anything particularly “Swifty”.
So... here's a function that takes an array of corners (specified as UIViewContentMode cases, because that enum includes cases for the corners), a view, and a “depth”, which is how many points each corner triangle should measure along its square sides:
func maskCorners(corners: [UIViewContentMode], ofView view: UIView, toDepth depth: CGFloat) {
In Objective-C, for the corners argument, you could use a bitmask (e.g. (1 << UIViewContentModeTopLeft) | (1 << UIViewContentModeBottomRight)), or you could use an NSArray of NSNumbers (e.g. #[ #(UIViewContentModeTopLeft), #(UIViewContentModeBottomRight) ]).
Anyway, I'm going to create a square, 9-slice resizable image. The image will need one point in the middle for stretching, and since each corner might need to be clipped, the corners need to be depth by depth points. Thus the image will have sides of length 1 + 2 * depth points:
let s = 1 + 2 * depth
Now I'm going to create a path that outlines the mask, with the corners clipped.
let path = UIBezierPath()
So, if the top left corner is clipped, I need the path to avoid the top left point of the square (which is at 0, 0). Otherwise, the path includes the top left point of the square.
if corners.contains(.TopLeft) {
path.moveToPoint(CGPoint(x: 0, y: 0 + depth))
path.addLineToPoint(CGPoint(x: 0 + depth, y: 0))
} else {
path.moveToPoint(CGPoint(x: 0, y: 0))
}
Do the same for each corner in turn, going around the square:
if corners.contains(.TopRight) {
path.addLineToPoint(CGPoint(x: s - depth, y: 0))
path.addLineToPoint(CGPoint(x: s, y: 0 + depth))
} else {
path.addLineToPoint(CGPoint(x: s, y: 0))
}
if corners.contains(.BottomRight) {
path.addLineToPoint(CGPoint(x: s, y: s - depth))
path.addLineToPoint(CGPoint(x: s - depth, y: s))
} else {
path.addLineToPoint(CGPoint(x: s, y: s))
}
if corners.contains(.BottomLeft) {
path.addLineToPoint(CGPoint(x: 0 + depth, y: s))
path.addLineToPoint(CGPoint(x: 0, y: s - depth))
} else {
path.addLineToPoint(CGPoint(x: 0, y: s))
}
Finally, close the path so I can fill it:
path.closePath()
Now I need to create the mask image. I'll do this using an alpha-only bitmap:
let colorSpace = CGColorSpaceCreateDeviceGray()
let scale = UIScreen.mainScreen().scale
let gc = CGBitmapContextCreate(nil, Int(s * scale), Int(s * scale), 8, 0, colorSpace, CGImageAlphaInfo.Only.rawValue)!
I need to adjust the coordinate system of the context to match UIKit:
CGContextScaleCTM(gc, scale, -scale)
CGContextTranslateCTM(gc, 0, -s)
Now I can fill the path in the context. The use of white here is arbitrary; any color with an alpha of 1.0 would work:
CGContextSetFillColorWithColor(gc, UIColor.whiteColor().CGColor)
CGContextAddPath(gc, path.CGPath)
CGContextFillPath(gc)
Next I create a UIImage from the bitmap:
let image = UIImage(CGImage: CGBitmapContextCreateImage(gc)!, scale: scale, orientation: .Up)
If this were in Objective-C, you'd want to release the bitmap context at this point, with CGContextRelease(gc), but Swift takes care of it for me.
Anyway, I convert the non-resizable image to a 9-slice resizable image:
let maskImage = image.resizableImageWithCapInsets(UIEdgeInsets(top: depth, left: depth, bottom: depth, right: depth))
Finally, I set up the mask view. I might already have a mask view, because you might have clipped the view with different settings already, so I'll reuse an existing mask view if it is an image view:
let maskView = view.maskView as? UIImageView ?? UIImageView()
maskView.image = maskImage
Finally, if I had to create the mask view, I need to set it as view.maskView and set its frame:
if view.maskView != maskView {
view.maskView = maskView
maskView.frame = view.bounds
}
}
OK, how do I use this function? To demonstrate, I'll make a purple background view, and put an image on top of it:
let view = UIImageView(image: UIImage(named: "Kaz-256.jpg"))
view.autoresizingMask = [ .FlexibleWidth, .FlexibleHeight ]
let backgroundView = UIView(frame: view.frame)
backgroundView.backgroundColor = UIColor.purpleColor()
backgroundView.addSubview(view)
XCPlaygroundPage.currentPage.liveView = backgroundView
Then I'll mask some corners of the image view. Presumably you would do this in, say, viewDidLoad:
maskCorners([.TopLeft, .BottomRight], ofView: view, toDepth: 50)
Here's the result:
You can see the purple background showing through the clipped corners.
If I were to resize the view, I'd need to update the mask view's frame. For example, I might do this in my view controller:
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
self.cornerClippedView.maskView?.frame = self.cornerClippedView.bounds
}
Here's a gist of all the code, so you can copy and paste it into a playground to try out. You'll have to supply your own adorable test image.
UPDATE
Here's code to create a label with a white background, and overlay it (inset by 20 points on each side) on the background image:
let backgroundView = UIImageView(image: UIImage(named: "Kaz-256.jpg"))
let label = UILabel(frame: backgroundView.bounds.insetBy(dx: 20, dy: 20))
label.backgroundColor = UIColor.whiteColor()
label.font = UIFont.systemFontOfSize(50)
label.text = "This is the label"
label.lineBreakMode = .ByWordWrapping
label.numberOfLines = 0
label.textAlignment = .Center
label.autoresizingMask = [ .FlexibleWidth, .FlexibleHeight ]
backgroundView.addSubview(label)
XCPlaygroundPage.currentPage.liveView = backgroundView
maskCorners([.TopLeft, .BottomRight], ofView: label, toDepth: 50)
Result:
I'm looking to perform a perspective transform on a UIView (such as seen in coverflow)
Does anyone know if this is possible?
I've investigated using CALayer and have run through all the Pragmatic Programmer Core Animation podcasts, but I'm still no clearer on how to create this kind of transform on an iPhone.
Any help, pointers or example code snippets would be really appreciated!
As Ben said, you'll need to work with the UIView's layer, using a CATransform3D to perform the layer's rotation. The trick to get perspective working, as described here, is to directly access one of the matrix cells of the CATransform3D (m34). Matrix math has never been my thing, so I can't explain exactly why this works, but it does. You'll need to set this value to a negative fraction for your initial transform, then apply your layer rotation transforms to that. You should also be able to do the following:
Objective-C
UIView *myView = [[self subviews] objectAtIndex:0];
CALayer *layer = myView.layer;
CATransform3D rotationAndPerspectiveTransform = CATransform3DIdentity;
rotationAndPerspectiveTransform.m34 = 1.0 / -500;
rotationAndPerspectiveTransform = CATransform3DRotate(rotationAndPerspectiveTransform, 45.0f * M_PI / 180.0f, 0.0f, 1.0f, 0.0f);
layer.transform = rotationAndPerspectiveTransform;
Swift 5.0
if let myView = self.subviews.first {
let layer = myView.layer
var rotationAndPerspectiveTransform = CATransform3DIdentity
rotationAndPerspectiveTransform.m34 = 1.0 / -500
rotationAndPerspectiveTransform = CATransform3DRotate(rotationAndPerspectiveTransform, 45.0 * .pi / 180.0, 0.0, 1.0, 0.0)
layer.transform = rotationAndPerspectiveTransform
}
which rebuilds the layer transform from scratch for each rotation.
A full example of this (with code) can be found here, where I've implemented touch-based rotation and scaling on a couple of CALayers, based on an example by Bill Dudney. The newest version of the program, at the very bottom of the page, implements this kind of perspective operation. The code should be reasonably simple to read.
The sublayerTransform you refer to in your response is a transform that is applied to the sublayers of your UIView's CALayer. If you don't have any sublayers, don't worry about it. I use the sublayerTransform in my example simply because there are two CALayers contained within the one layer that I'm rotating.
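If you do have sublayers, a minimal sketch of sharing one perspective matrix via sublayerTransform (containerView and childLayer are hypothetical names):
// Give every sublayer of a container the same perspective, instead of
// baking m34 into each child's transform.
var perspective = CATransform3DIdentity
perspective.m34 = 1.0 / -500
containerView.layer.sublayerTransform = perspective

// Each child then only needs its own rotation:
childLayer.transform = CATransform3DMakeRotation(45.0 * .pi / 180.0, 0, 1, 0)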
You can only use Core Graphics (Quartz, 2D only) transforms directly applied to a UIView's transform property. To get the effects in coverflow, you'll have to use CATransform3D, which are applied in 3-D space, and so can give you the perspective view you want. You can only apply CATransform3Ds to layers, not views, so you're going to have to switch to layers for this.
Check out the "CovertFlow" sample that comes with Xcode. It's mac-only (ie not for iPhone), but a lot of the concepts transfer well.
Swift 5.0
func makeTransform(horizontalDegree: CGFloat, verticalDegree: CGFloat, maxVertical: CGFloat, rotateDegree: CGFloat, maxHorizontal: CGFloat) -> CATransform3D {
var transform = CATransform3DIdentity
transform.m34 = 1 / -500
let xAnchor = (horizontalDegree / (2 * maxHorizontal)) + 0.5
let yAnchor = (verticalDegree / (-2 * maxVertical)) + 0.5
let anchor = CGPoint(x: xAnchor, y: yAnchor)
setAnchorPoint(anchorPoint: anchor, forView: self.imgView)
let hDegree = (CGFloat(horizontalDegree) * .pi) / 180
let vDegree = (CGFloat(verticalDegree) * .pi) / 180
let rDegree = (CGFloat(rotateDegree) * .pi) / 180
transform = CATransform3DRotate(transform, vDegree, 1, 0, 0)
transform = CATransform3DRotate(transform, hDegree, 0, 1, 0)
transform = CATransform3DRotate(transform, rDegree, 0, 0, 1)
return transform
}
func setAnchorPoint(anchorPoint: CGPoint, forView view: UIView) {
var newPoint = CGPoint(x: view.bounds.size.width * anchorPoint.x, y: view.bounds.size.height * anchorPoint.y)
var oldPoint = CGPoint(x: view.bounds.size.width * view.layer.anchorPoint.x, y: view.bounds.size.height * view.layer.anchorPoint.y)
newPoint = newPoint.applying(view.transform)
oldPoint = oldPoint.applying(view.transform)
var position = view.layer.position
position.x -= oldPoint.x
position.x += newPoint.x
position.y -= oldPoint.y
position.y += newPoint.y
print("Anchor: \(anchorPoint)")
view.layer.position = position
view.layer.anchorPoint = anchorPoint
}
You only need to call the function with your degrees. For example:
let transform = makeTransform(horizontalDegree: 20.0, verticalDegree: 25.0, maxVertical: 25, rotateDegree: 20, maxHorizontal: 25)
imgView.layer.transform = transform
You can get an accurate carousel effect using the iCarousel SDK.
You can get an instant Cover Flow effect on iOS by using the marvelous and free iCarousel library. You can download it from https://github.com/nicklockwood/iCarousel and drop it into your Xcode project fairly easily by adding a bridging header (it's written in Objective-C).
If you haven't added Objective-C code to a Swift project before, follow these steps:
Download iCarousel and unzip it
Go into the folder you unzipped, open its iCarousel subfolder, then select iCarousel.h and iCarousel.m and drag them into your project navigator – that's the left pane in Xcode. Just below Info.plist is fine.
Check "Copy items if needed" then click Finish.
Xcode will prompt you with the message "Would you like to configure an Objective-C bridging header?" Click "Create Bridging Header"
You should see a new file in your project, named YourProjectName-Bridging-Header.h.
Add this line to the file: #import "iCarousel.h"
Once you've added iCarousel to your project you can start using it.
Make sure you conform to both the iCarouselDelegate and iCarouselDataSource protocols.
Swift 3 Sample Code:
override func viewDidLoad() {
super.viewDidLoad()
let carousel = iCarousel(frame: CGRect(x: 0, y: 0, width: 300, height: 200))
carousel.dataSource = self
carousel.type = .coverFlow
view.addSubview(carousel)
}
func numberOfItems(in carousel: iCarousel) -> Int {
return 10
}
func carousel(_ carousel: iCarousel, viewForItemAt index: Int, reusing view: UIView?) -> UIView {
let imageView: UIImageView
if view != nil {
imageView = view as! UIImageView
} else {
imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 128, height: 128))
}
imageView.image = UIImage(named: "example")
return imageView
}
I'm trying to understand the functionalities of these methods. Could you provide me with a simple use case to understand their semantics?
From the documentation, for example, convertPoint:fromView: method is described as follows:
Converts a point from the coordinate system of a given view to that of the receiver.
What does the coordinate system mean? What about the receiver?
For example, does it make sense using convertPoint:fromView: like the following?
CGPoint p = [view1 convertPoint:view1.center fromView:view1];
Using NSLog utility, I've verified that p value coincides with view1's center.
Thank you in advance.
EDIT: For those interested, I've created a simple code snippet to understand these methods.
UIView* view1 = [[UIView alloc] initWithFrame:CGRectMake(100, 100, 150, 200)];
view1.backgroundColor = [UIColor redColor];
NSLog(#"view1 frame: %#", NSStringFromCGRect(view1.frame));
NSLog(#"view1 center: %#", NSStringFromCGPoint(view1.center));
CGPoint originInWindowCoordinates = [self.window convertPoint:view1.bounds.origin fromView:view1];
NSLog(#"convertPoint:fromView: %#", NSStringFromCGPoint(originInWindowCoordinates));
CGPoint originInView1Coordinates = [self.window convertPoint:view1.frame.origin toView:view1];
NSLog(#"convertPoint:toView: %#", NSStringFromCGPoint(originInView1Coordinates));
In both cases self.window is the receiver. But there is a difference. In the first case the convertPoint parameter is expressed in view1 coordinates. The output is the following:
convertPoint:fromView: {100, 100}
In the second one, instead, convertPoint is expressed in superview (self.window) coordinates. The output is the following:
convertPoint:toView: {0, 0}
Each view has its own coordinate system - with an origin at 0,0 and a width and height. This is described in the bounds rectangle of the view. The frame of the view, however, will have its origin at the point within the bounds rectangle of its superview.
The outermost view of your view hierarchy has its origin at 0,0, which corresponds to the top left of the screen in iOS.
If you add a subview at 20,30 to this view, then a point at 0,0 in the subview corresponds to a point at 20,30 in the superview. This conversion is what those methods are doing.
Your example above is pointless (no pun intended) since it converts a point from a view to itself, so nothing will happen. You would more commonly find out where some point of a view was in relation to its superview - to test if a view was moving off the screen, for example:
CGPoint originInSuperview = [superview convertPoint:CGPointZero fromView:subview];
The "receiver" is a standard objective-c term for the object that is receiving the message (methods are also known as messages) so in my example here the receiver is superview.
I always find this confusing so I made a playground where you can visually explore what the convert function does. This is done in Swift 3 and Xcode 8.1b:
import UIKit
import PlaygroundSupport
class MyViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
// Main view
view.backgroundColor = .black
view.frame = CGRect(x: 0, y: 0, width: 500, height: 500)
// Red view
let redView = UIView(frame: CGRect(x: 20, y: 20, width: 460, height: 460))
redView.backgroundColor = .red
view.addSubview(redView)
// Blue view
let blueView = UIView(frame: CGRect(x: 20, y: 20, width: 420, height: 420))
blueView.backgroundColor = .blue
redView.addSubview(blueView)
// Orange view
let orangeView = UIView(frame: CGRect(x: 20, y: 20, width: 380, height: 380))
orangeView.backgroundColor = .orange
blueView.addSubview(orangeView)
// Yellow view
let yellowView = UIView(frame: CGRect(x: 20, y: 20, width: 340, height: 100))
yellowView.backgroundColor = .yellow
orangeView.addSubview(yellowView)
// Let's try to convert now
var resultFrame = CGRect.zero
let randomRect: CGRect = CGRect(x: 0, y: 0, width: 100, height: 50)
/*
func convert(CGRect, from: UIView?)
Converts a rectangle from the coordinate system of another view to that of the receiver.
*/
// The following line converts a rectangle (randomRect) from the coordinate system of yellowView to that of self.view:
resultFrame = view.convert(randomRect, from: yellowView)
// Try also one of the following to get a feeling of how it works:
// resultFrame = view.convert(randomRect, from: orangeView)
// resultFrame = view.convert(randomRect, from: redView)
// resultFrame = view.convert(randomRect, from: nil)
/*
func convert(CGRect, to: UIView?)
Converts a rectangle from the receiver’s coordinate system to that of another view.
*/
// The following line converts a rectangle (randomRect) from the coordinate system of yellowView to that of self.view
resultFrame = yellowView.convert(randomRect, to: view)
// Same as what we did above, using "from:"
// resultFrame = view.convert(randomRect, from: yellowView)
// Also try:
// resultFrame = orangeView.convert(randomRect, to: view)
// resultFrame = redView.convert(randomRect, to: view)
// resultFrame = orangeView.convert(randomRect, to: nil)
// Add an overlay with the calculated frame to self.view
let overlay = UIView(frame: resultFrame)
overlay.backgroundColor = UIColor(white: 1.0, alpha: 0.9)
overlay.layer.borderColor = UIColor.black.cgColor
overlay.layer.borderWidth = 1.0
view.addSubview(overlay)
}
}
let ctrl = MyViewController()
PlaygroundPage.current.liveView = ctrl.view
Remember to show the Assistant Editor (⎇⌘⏎) in order to see the views; it should look like this:
Feel free to contribute more examples here or in this gist.
Here's an explanation in plain English.
Use this when you want to convert the rect of a subview (aView, a subview of [aView superview]) to the coordinate space of another view (self):
// So here I want to take some subview and put it in my view's coordinate space
_originalFrame = [[aView superview] convertRect: aView.frame toView: self];
Every view in iOS has a coordinate system. A coordinate system is just like a graph, with an x axis (horizontal line) and a y axis (vertical line). The point at which the lines intersect is called the origin. A point is represented by (x, y). For example, (2, 1) means the point is 2 points to the right of the origin and 1 point down.
You can read up more about coordinate systems here - http://en.wikipedia.org/wiki/Coordinate_system
But what you need to know is that, in iOS, every view has its OWN coordinate system, where the top left corner is the origin. The x axis goes on increasing to the right, and the y axis goes on increasing downward.
For the converting points question, take this example.
There is a view, called V1, which is 100 points wide and 100 points high. Now inside that, there is another view, called V2, at (10, 10, 50, 50), which means that (10, 10) is the point in V1's coordinate system where the top left corner of V2 is located, and (50, 50) is the width and height of V2. Now, take a point INSIDE V2's coordinate system, say (20, 20). What would that point be inside V1's coordinate system? That is what the methods are for (of course you could calculate it yourself, but they save you extra work). For the record, the point in V1 would be (30, 30).
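A quick Swift sketch of that V1/V2 example:
// V2's frame places its top-left corner at (10, 10) in V1's coordinates.
let v1 = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
let v2 = UIView(frame: CGRect(x: 10, y: 10, width: 50, height: 50))
v1.addSubview(v2)

// A point at (20, 20) in V2's coordinate system is (30, 30) in V1's.
let p = v2.convert(CGPoint(x: 20, y: 20), to: v1)
print(p) // (30.0, 30.0)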
Hope this helps.
Thank you all for posting the question and your answers: It helped me get this sorted out.
My view controller has its normal view.
Inside that view there are a number of grouping views that do little more than give their child views a clean interaction with auto layout constraints.
Inside one of those grouping views I have an Add button that presents a popover view controller where the user enters some information.
view
--groupingView
----addButton
During device rotation, the view controller is alerted via the UIPopoverControllerDelegate call popoverController:willRepositionPopoverToRect:inView:
- (void)popoverController:(UIPopoverController *)popoverController willRepositionPopoverToRect:(inout CGRect *)rect inView:(inout UIView *__autoreleasing *)view
{
*rect = [self.addButton convertRect:self.addButton.bounds toView:*view];
}
The essential part that comes from the explanation given by the first two answers above was that the rect I needed to convert from was the bounds of the add button, not its frame.
I haven't tried this with a more complex view hierarchy, but I suspect that by using the view supplied in the method call (inView:) we get around the complications of converting through multiple tiers of views.
I used this post to work things out in my own case. Hope this will help another reader in the future.
A view can only see its immediate children and its parent view. It can't see its grandparent or its grandchild views.
So, in my case, I have a grandparent view called self.view; to this self.view I have added subviews called self.child1OfView and self.child2OfView. To self.child1OfView, I have added subviews called self.child1OfView1 and self.child2OfView1.
Now if I physically move self.child1OfView1 outside the boundary of self.child1OfView to another spot on self.view, then to calculate the new position of self.child1OfView1 within self.view:
CGPoint newPoint = [self.view convertPoint:self.child1OfView1.center fromView:self.child1OfView];
You can study the code below to understand how it actually works.
let scrollViewTemp = UIScrollView.init(frame: CGRect.init(x: 10, y: 10, width: deviceWidth - 20, height: deviceHeight - 20))
override func viewDidLoad() {
super.viewDidLoad()
scrollViewTemp.backgroundColor = UIColor.lightGray
scrollViewTemp.contentSize = CGSize.init(width: 2000, height: 2000)
self.view.addSubview(scrollViewTemp)
let viewTemp = UIView.init(frame: CGRect.init(x: 100, y: 100, width: 150, height: 150))
viewTemp.backgroundColor = UIColor.green
self.view.addSubview(viewTemp)
let viewSecond = UIView.init(frame: CGRect.init(x: 100, y: 700, width: 300, height: 300))
viewSecond.backgroundColor = UIColor.red
self.view.addSubview(viewSecond)
self.view.convert(viewTemp.frame, from: scrollViewTemp)
print(viewTemp.frame)
/* Take the point CGPoint(x: 10, y: 10) in viewTemp's coordinate system
and express it in viewSecond's coordinate system.
*/
let point = viewSecond.convert(CGPoint(x: 10, y: 10), from: viewTemp)
// output: (10.0, -590.0)
print(point)
/* Take the point CGPoint(x: 10, y: 10) in viewSecond's coordinate system
and express it in viewTemp's coordinate system.
*/
let point1 = viewSecond.convert(CGPoint(x: 10, y: 10), to: viewTemp)
// output: (10.0, 610.0)
print(point1)
/* Take the rect CGRect(x: 10, y: 10, width: 20, height: 20) in viewSecond's
coordinate system and express it in viewTemp's coordinate system.
*/
let rect1 = viewSecond.convert(CGRect(x: 10, y: 10, width: 20, height: 20), to: viewTemp)
// output: (10.0, 610.0, 20.0, 20.0)
print(rect1)
/* Take the rect CGRect(x: 10, y: 10, width: 20, height: 20) in viewTemp's
coordinate system and express it in viewSecond's coordinate system.
*/
let rect = viewSecond.convert(CGRect(x: 10, y: 10, width: 20, height: 20), from: viewTemp)
// output: (10.0, -590.0, 20.0, 20.0)
print(rect)
}
I read the answers and understand the mechanics, but I think the final example is not correct. According to the API docs, the center property of a view contains the center point of the view in the superview's coordinate system.
If this is the case, then I think it would not make sense to ask the superview to convert the center of a subview FROM the subview's coordinate system, because the value is not in the subview's coordinate system. What would make sense is to do the opposite, i.e. convert from the superview's coordinate system to that of a subview...
You can do it in two ways (both should yield the same value):
CGPoint centerInSubview = [subview convertPoint:subview.center fromView:subview.superview];
or
CGPoint centerInSubview = [subview.superview convertPoint:subview.center toView:subview];
Am I way off in understanding how this should work?
One more important point about using these APIs. Be sure that the parent view chain is complete between the rect you are converting and the to/from view.
For example, take aView, bView, and cView, where:
aView is a subview of bView
we want to convert aView.frame to cView
If we try to execute the method before bView has been added as a subview of cView, we will get back a bogus response. Unfortunately, there is no protection built into the methods for this case. This may seem obvious, but it is something to be aware of in cases where the conversion goes through a long chain of parents.
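Since the methods offer no such protection, one cheap sanity check is to confirm both views are installed in the same window before converting; a sketch (the helper is my own, not a UIKit API):
// Returns nil instead of a meaningless rect when the two views are not
// yet part of the same window's hierarchy.
func safeConvert(_ rect: CGRect, from source: UIView, to target: UIView) -> CGRect? {
guard let window = source.window, window === target.window else { return nil }
return target.convert(rect, from: source)
}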