UITableView same frame regardless of orientation - ios

In the debugger, I get the following output regardless of the orientation...
print self.tableView.frame
(CGRect) $R2 = (origin = (x = 0, y = 0), size = (width = 1000, height = 1000))
Why doesn't the size change depending on the orientation?
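For what it's worth, a frame read before the first layout pass (in viewDidLoad, for example, or in the debugger before layout has run) can still be a placeholder value. A minimal sketch that logs the frame only after layout, so orientation changes show up (assuming a standard UIViewController subclass):
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    // Frames are only final after a layout pass; rotation triggers a new one.
    NSLog(@"table frame: %@", NSStringFromCGRect(self.tableView.frame));
}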


iOS 13 Animating View Not Changing Frame

I have an app compiled with Xcode 10, running on the iOS 13 simulator. One view contains a "tray" view that shows from the bottom when tapped. On iOS 12 it works perfectly; on iOS 13 the tap calls the method, but the changes to the frame are not saved. I have included debugger output in comments so you can see the frame values:
- (void)userClickActivityTray:(UITapGestureRecognizer *)gestureRecognizer {
    if (self.activityTrayShown) {
        /*
        (lldb) po self.activityTrayContainerView.frame
        (origin = (x = 0, y = 792), size = (width = 414, height = 104))
        */
        [self hideActivityTray];
    } else {
        if (!self.activityTrayViewInitialFrameComputed) {
            self.activityTrayViewInitialFrameComputed = YES;
            self.activityTrayInitialFrame = self.activityTrayContainerView.frame;
        }
        /*
        (lldb) po self.activityTrayContainerView.frame
        (origin = (x = 0, y = 638), size = (width = 414, height = 224))
        (lldb) po self.activityTrayInitialFrame
        (origin = (x = 0, y = 792), size = (width = 414, height = 104))
        */
        [UIView animateWithDuration:0.25 animations:^{
            self.activityTrayContainerView.frame = CGRectMake(self.view.bounds.origin.x,
                                                              self.bottomView.frame.origin.y - self.activityTrayViewController.maximumHeight,
                                                              self.view.bounds.size.width,
                                                              self.activityTrayViewController.maximumHeight);
            self.activityTrayBackgroundView.alpha = 1.0;
            self.bottomView.alpha = self.dotsProgressView.alpha = 0;
        } completion:^(BOOL finished) {
            self.activityTrayShown = YES;
            /*
            (lldb) po self.activityTrayContainerView.frame
            (origin = (x = 0, y = 557), size = (width = 414, height = 305))
            (origin = (x = 0, y = 792), size = (width = 414, height = 104))
            */
        }];
    }
}
The layout system in iOS 13 is different. We had the same issue, and in our case switching the layout from Automatic to Translates Mask Into Constraints fixed it.
You can use yourview.layer.frame instead of yourview.frame.
It worked for me.
I had the same issue when moving to iOS 13.
In my case, the sizing constraints were overriding any change I made to the frame, which was not the case on iOS 12.
For my app to work correctly, I also had to update the NSLayoutConstraint just before the "animateWithDuration" block, so that the new frame coordinates stay compatible with the layout constraints.
Hope that helps
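A minimal sketch of that approach, assuming a hypothetical trayHeightConstraint outlet (not in the original question) pinning the tray's height; updating the constraint and letting Auto Layout drive the frame keeps the animation and the constraints in agreement:
// trayHeightConstraint is a hypothetical NSLayoutConstraint outlet.
self.trayHeightConstraint.constant = self.activityTrayViewController.maximumHeight;
[UIView animateWithDuration:0.25 animations:^{
    // Animates the frame change implied by the updated constraint.
    [self.view layoutIfNeeded];
}];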
Setting the view's .translatesAutoresizingMaskIntoConstraints to YES worked for me.
Hope that helps
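If I read that right, the idea is to opt the tray view out of Auto Layout so that direct frame assignments stick; roughly (whether this is appropriate depends on how the view's constraints were set up):
// Translate the autoresizing mask into constraints so manual
// frame changes are honored instead of being overwritten.
self.activityTrayContainerView.translatesAutoresizingMaskIntoConstraints = YES;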

When a UIView's `bounds` origin is increased by positive numbers, why do the subviews shift in the negative direction?

This must be something really simple, and my basic math knowledge may be lacking. This is clear (from this question):
A view's frame determines its location in its superview. A view's bounds determines its subviews' locations. That means that if you change a view's bounds, its location won't change, but all of its subviews' locations will.
The view controller, after starting a Single View App:
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let v1 = UIView(frame: CGRect(x: 100, y: 100, width: 200, height: 300))
        v1.backgroundColor = UIColor.blue
        let v2 = UIView(frame: v1.bounds.insetBy(dx: 50, dy: 50))
        v2.backgroundColor = UIColor.red
        self.view.addSubview(v1)
        v1.addSubview(v2)
    }
}
Checking on the LLDB console, this is completely clear too:
(lldb) p v1.frame
(CGRect) $R0 = (origin = (x = 100, y = 100), size = (width = 200, height = 300))
(lldb) p v1.bounds
(CGRect) $R1 = (origin = (x = 0, y = 0), size = (width = 200, height = 300))
(lldb) p v2.frame
(CGRect) $R2 = (origin = (x = 50, y = 50), size = (width = 100, height = 200))
(lldb) p v2.bounds
(CGRect) $R3 = (origin = (x = 0, y = 0), size = (width = 100, height = 200))
Adding v1.bounds.origin.x += 50 (or v1.bounds.origin.x = 50 for that matter) after v1.addSubview(v2) results in:
(lldb) p v1.frame
(CGRect) $R0 = (origin = (x = 100, y = 100), size = (width = 200, height = 300))
(lldb) p v1.bounds
(CGRect) $R1 = (origin = (x = 50, y = 0), size = (width = 200, height = 300))
(lldb) p v2.frame
(CGRect) $R2 = (origin = (x = 50, y = 50), size = (width = 100, height = 200))
(lldb) p v2.bounds
(CGRect) $R3 = (origin = (x = 0, y = 0), size = (width = 100, height = 200))
The LLDB console output still fits my current understanding, but what actually renders is the red subview shifted 50 points to the left, flush with the blue view's left edge.
Why? I tried to reason about it (see below), and I understand that the views' coordinate systems are relative to each other, but if 50 is added to v1's bounds origin.x, the subview's effective frame origin is supposed to be (x = 50 + 50, y = 50).
I found a satisfying answer in Matt Neuburg's Programming iOS 11 book with a similar example:
/* ... */
let v2 = UIView(frame:v1.bounds.insetBy(dx: 10, dy: 10))
/* ... */
v1.bounds.origin.x += 10
v1.bounds.origin.y += 10
Nothing happens to the superview's size or position. But the subview has moved up and to the left so that it is flush with its superview's top-left corner. Basically, what we've done is to say to the superview, "Instead of calling the point at your upper left (0.0,0.0), call that point (10.0,10.0)." Because the subview's frame origin is itself at (10.0,10.0), the subview now touches the superview's top-left corner. The effect of changing a view's bounds origin may seem directionally backward — we increased the superview's origin in the positive direction, but the subview moved in the negative direction — but think of it this way: a view's bounds origin point coincides with its frame's top left.
Therefore it seems modifying the bounds origin is more like a mapping operation than a coordinate-system transformation. This would also explain why the results are the same for += 50 and = 50: the bounds origin starts at (0, 0).
By adjusting v1's bounds origin.x, you are shifting which part of v1's own coordinate space falls inside its visible rectangle. (This is exactly how a UIScrollView works: scrolling just changes the bounds origin.)
If you instead modify the frame's origin.x, you will, I believe, see results more in line with your expectations.
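A minimal Objective-C sketch of the contrast, using the same geometry as the question:
UIView *v1 = [[UIView alloc] initWithFrame:CGRectMake(100, 100, 200, 300)];

// Moves v1 itself 50 pt right within its superview; subviews ride along.
CGRect f = v1.frame;
f.origin.x += 50;
v1.frame = f;

// Relabels v1's own coordinate space: the point formerly called (0, 0)
// is now called (50, 0), so subviews appear to shift 50 pt left.
CGRect b = v1.bounds;
b.origin.x += 50;
v1.bounds = b;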

convertRect:toView: function has different result in iOS 9 and iOS 10

I have sourceFrame and destinationFrame, which are calculated by the following code:
CGRect sourceFrame = [sourceView convertRect:sourceView.bounds toView:self.animationContainerView];
CGRect destinationFrame = [destinationView convertRect:destinationView.bounds toView:self.animationContainerView];
In iOS 9, the result is
sourceFrame = (origin = (x = 0, y = 386), size = (width = 375, height = 45))
destinationFrame = (origin = (x = 48, y = 28), size = (width = 319, height = 32))
While in iOS 10, the result is
sourceFrame = (origin = (x = 0, y = 386), size = (width = 375, height = 45))
destinationFrame = (origin = (x = -139.5, y = -281), size = (width = 319, height = 32))
destinationFrame's origin is totally different in iOS 9 and iOS 10.
I don't know why. My guess is that maybe in iOS 10 Apple changed the implementation of convertRect:toView:, which produces different results.
Does anyone have an idea why?
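One common cause (an assumption, since the question doesn't say where this code runs): iOS 10 lays views out later than iOS 9 did, so a conversion performed in viewDidLoad or viewWillAppear can run against views that haven't been positioned yet. A sketch that defers the calculation until after layout:
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    // destinationView/animationContainerView stand in for the question's
    // views; by this point both have real geometry.
    CGRect destinationFrame = [self.destinationView convertRect:self.destinationView.bounds
                                                         toView:self.animationContainerView];
    // ... use destinationFrame ...
}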

iOS: Detecting the actual screen size of iPhone

I'm not looking for the scale of the screen; I'm trying to get the actual size of the screen on an iPhone 5. When I try to get the size on an iPhone 5, I get the size of the iPhone 4/4s.
Here is my code:
CGRect myScreen=[[UIScreen mainScreen] bounds];
But the size I get is the following:
po myScreen
(origin = (x = 0, y = 0), size = (width = 480, height = 320))
But if I use the following line of code:
UIScreen *mainScreen = [UIScreen mainScreen];
po mainScreen
<UIScreen: 0x16d834b0; bounds = {{0, 0}, {480, 320}}; mode = <UIScreenMode: 0x16e68040; size = 640.000000 x 960.000000>>
But if I use:
po mainScreen.bounds.size
(width = 480, height = 320)
My question to you guys is: how can I access the size "size = 640.000000 x 960.000000" using UIScreen?
I really appreciate your help.
Simply use nativeBounds instead of bounds.
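A minimal sketch (nativeBounds is available from iOS 8; on older systems, multiplying the point bounds by the screen's scale gives the pixel size):
UIScreen *screen = [UIScreen mainScreen];

// Physical pixels, always reported in portrait-up orientation (iOS 8+).
CGRect native = screen.nativeBounds;

// Equivalent pixel size computed from points * scale.
CGSize pixels = CGSizeMake(screen.bounds.size.width * screen.scale,
                           screen.bounds.size.height * screen.scale);
NSLog(@"native: %@, computed: %@",
      NSStringFromCGRect(native), NSStringFromCGSize(pixels));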

Convert scroll view frame to window coordinates

I am trying to convert scroll view coordinates to window coordinates. However, the resulting frame seems to be shifted by the status bar height; what's confusing is that the height stays the same, which doesn't seem right.
CGRect visibleBounds = CGRectMake(0, 0, CGRectGetWidth(self.scrollView.frame), CGRectGetHeight(self.scrollView.frame));
CGRect scrollViewFrame = [self.scrollView convertRect:visibleBounds toView:nil];
lldb log:
Printing description of visibleBounds:
(CGRect) visibleBounds = (origin = (x = 0, y = 0), size = (width = 320, height = 568))
Printing description of scrollViewFrame:
(CGRect) scrollViewFrame = (origin = (x = 0, y = 20), size = (width = 320, height = 568))
It turns out the scroll view's bounds can be used to calculate the scroll view's frame in window coordinates; even though the bounds origin may look odd (negative) while scrolled, the produced frame is correct anyway.
[self.scrollView convertRect:self.scrollView.bounds toView:nil];
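For what it's worth, the 20 pt shift in the question is expected rather than a bug: a rect at (0, 0) in the scroll view's coordinate space maps to wherever the view actually sits in the window. A minimal sketch, assuming the scroll view starts just below a 20 pt status bar:
// A rect at the scroll view's local origin...
CGRect local = CGRectMake(0, 0,
                          CGRectGetWidth(self.scrollView.bounds),
                          CGRectGetHeight(self.scrollView.bounds));
// ...lands at the view's position in the window after conversion.
CGRect inWindow = [self.scrollView convertRect:local toView:nil];
// inWindow.origin.y == 20 when the view sits below the status bar.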
