Problems with Swift Declaration - ios

I'm wondering why I can't do something like this directly in my class declaration in Swift:
let width = 200.0
let height = 30.0
let widthheight = width - height
I can't create a constant from two other constants. If I use the same code inside a function/method, everything works fine.
Thanks

When you write let widthheight = width - height at class scope, it implicitly means let widthheight = self.width - self.height. In Swift, you're simply not allowed to use self until all of the instance's stored properties have been initialised, and here that includes widthheight itself.
You have a little bit more flexibility in an init method, though, where you can write things like this:
class Rect {
    let width = 200.0
    let height = 30.0
    let widthheight: Double
    let widthheightInverse: Double

    init() {
        // widthheightInverse = 1.0 / widthheight // error: widthheight not usable yet
        widthheight = width - height
        widthheightInverse = 1.0 / widthheight // works
    }
}

This is a good candidate for a computed property:
class Foo {
    let width = 200.0
    let height = 30.0
    var widthheight: Double { return width - height }
}
You might object that it is then computed on every access; it is unlikely that your application's performance will hinge on a single repeated subtraction, but if it ever does become an issue, set widthheight once in init().
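Another option is a lazy stored property: its initializer expression runs only once, on first access, at which point the instance is fully initialised and may be referenced. The trade-off is that it must be declared var. A minimal sketch:

class Rect {
    let width = 200.0
    let height = 30.0
    // Evaluated once, on first access; the instance is fully
    // initialised by then, so referring to self is allowed.
    lazy var widthheight: Double = self.width - self.height
}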

For things like that, you could make use of class variables. The code would look like this:
class var width = 200.0
class var height = 30.0
class var widthheight = width - height
But when you try it, you will see a compiler error:
Class variables are not yet supported
I guess they haven't implemented that feature yet. But there is a workaround for now: just move your declarations outside the class declaration, like the following:
let width = 200.0
let height = 30.0
let widthheight = width - height
class YourClass { ...
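For reference, that error predates Swift 1.2; current Swift does support stored type properties (spelled static, which also works in classes), and since type properties are initialised lazily on first access, they may reference one another. A sketch:

class Rect {
    static let width = 200.0
    static let height = 30.0
    // Type properties are initialised lazily, so this can
    // safely refer to the other type properties.
    static let widthheight = width - height
}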

Related

Getting the default height for a UITabbar without creating an instance?

Is there a way to get the default height for UITabBars? I want to avoid hardcoding the value, but I would also like to avoid creating an instance of one only to get the height. Below is how I'm currently getting the height, but it seems like there should be a more efficient way that doesn't involve creating a controller.
extension UITabBar {
    private static var storedHeight: CGFloat?

    @objc static var height: CGFloat {
        get {
            if let height = storedHeight { return height }
            storedHeight = UITabBarController().tabBar.frame.size.height
            return storedHeight ?? 0
        }
    }
}
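One possible lighter-weight alternative (a sketch, untested here): ask a bare UITabBar for its preferred size with sizeThatFits(_:). This still creates a view, but avoids instantiating a whole view controller; whether the returned default height matches every device configuration is an assumption worth verifying.

import UIKit

extension UITabBar {
    // Sketch: a fresh UITabBar reports its default height from
    // sizeThatFits(_:) without a surrounding controller.
    static var defaultHeight: CGFloat {
        UITabBar().sizeThatFits(.zero).height
    }
}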

How do I scale a UIView designed for iPad, for use with an iPhone?

I built an iPad game and am now trying to port it to the iPhone. One of the interfaces in the game has buttons laid out in specific places on top of an image, so I'd like to simply scale what I built for the iPad down to iPhone size.
So far, I have had a little success defining my buttons in terms of their iPad positions and sizes, then scaling each button proportionally in code.
//
//  ExerciseMenuView.swift
//  Reading Expressway
//
//  Created by Montana Burr on 7/29/16.
//  Copyright © 2016 Montana. All rights reserved.
//

import UIKit
import QuartzCore

@IBDesignable @objc class ExerciseMenuView: UIView {

    override func layoutSubviews() {
        super.layoutSubviews()

        // Adjust subviews according to device screen size.
        let iPadProWidth = CGFloat(1024)
        let iPadProHeight = CGFloat(768)
        let deviceWidth = UIScreen.main.bounds.width
        let deviceHeight = UIScreen.main.bounds.height

        for subview in self.subviews {
            let subviewWidthMultiplier = subview.frame.width / iPadProWidth
            let subviewHeightMultiplier = subview.frame.height / iPadProHeight
            let subviewCenterXMultiplier = subview.frame.origin.x / iPadProWidth
            let subviewCenterYMultiplier = subview.frame.origin.y / iPadProHeight
            let subviewHeight = subviewHeightMultiplier * deviceHeight
            let subviewWidth = deviceWidth * subviewWidthMultiplier
            let subviewX = subviewCenterXMultiplier * deviceWidth
            let subviewY = subviewCenterYMultiplier * deviceHeight
            subview.frame = CGRect(x: subviewX, y: subviewY, width: subviewWidth, height: subviewHeight)
        }
    }
}
This is roughly what the screen looks like now, on iPad:
This is what I want it to look like on an iPhone 8:
This is what it actually looks like on an iPhone 8 simulator:
I think layoutSubviews is called two or more times, which is why your result ends up very small.
You can set the frames in an init method instead,
or add a flag to check whether this is the first call:
var isFirstTime = true

override func layoutSubviews() {
    super.layoutSubviews()
    if isFirstTime {
        isFirstTime = false
        // Adjust subviews according to device screen size.
        ...
    }
}
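A related approach, as a sketch rather than part of the original answer: capture each subview's designed frame on the first pass and always scale from those stored values, so repeated layout passes cannot compound the shrinking.

import UIKit

class ExerciseMenuView: UIView {
    // The size the layout was designed for (the iPad layout above).
    private let designSize = CGSize(width: 1024, height: 768)
    // Each subview's original, designed frame, captured once.
    private var designFrames: [UIView: CGRect] = [:]

    override func layoutSubviews() {
        super.layoutSubviews()
        let scaleX = UIScreen.main.bounds.width / designSize.width
        let scaleY = UIScreen.main.bounds.height / designSize.height
        for subview in subviews {
            let design = designFrames[subview] ?? subview.frame
            designFrames[subview] = design
            // Always scale from the stored design frame, never from
            // the already-scaled current frame.
            subview.frame = CGRect(x: design.origin.x * scaleX,
                                   y: design.origin.y * scaleY,
                                   width: design.width * scaleX,
                                   height: design.height * scaleY)
        }
    }
}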

ScaleAspectFit blank spaces, imageView.image nil

I have a UIImageView, where the image is set with a given url. Then, I set the content mode to Scale Aspect Fit. This works fine, but there is a ton of blank space before and after the image, when the image is supposed to be directly at the top of the screen.
What I would like to do is rescale the UIImage size (maybe the frame?) to match the new size created when Aspect Fit is applied (this seems to be the suggestion most people received).
The problem is that whenever I try the previous solutions, I get a nil error. Specifically:
import UIKit
import AVFoundation

class OneItemViewController: UIViewController {

    @IBOutlet weak var itemImage: UIImageView!
    @IBOutlet weak var menuButton: UIBarButtonItem!
    @IBOutlet weak var titleText: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()

        let imageURL: NSURL? = NSURL(string: "https://upload.wikimedia.org/wikipedia/commons/d/d5/Pic_de_neige_cordier_Face_E.jpg")
        if imageURL != nil {
            itemImage.sd_setImageWithURL(imageURL)
            itemImage.contentMode = UIViewContentMode.ScaleAspectFit
            AVMakeRectWithAspectRatioInsideRect(itemImage.image!.size, itemImage.bounds)
            /**
            let imageSize: CGSize = onScreenPointSizeOfImageInImageView(itemImage)
            var imageViewRect: CGRect = itemImage.frame
            imageViewRect.size = imageSize
            itemImage.frame = imageViewRect
            **/
        }
        if self.revealViewController() != nil {
            menuButton.target = self.revealViewController()
            menuButton.action = "revealToggle:"
            self.view.addGestureRecognizer(self.revealViewController().panGestureRecognizer())
        }
        self.titleText.text = "Title: " + "Earl and Countess of Derby with Edward, their Infant Son, and Chaplain"
        // Do any additional setup after loading the view.
    }

    /**
    func onScreenPointSizeOfImageInImageView(imageV: UIImageView) -> CGSize {
        var scale: CGFloat
        if (imageV.frame.size.width > imageV.frame.size.height) {
            if (imageV.image!.size.width > imageV.image!.size.height) {
                scale = imageV.image!.size.height / imageV.frame.size.height
            } else {
                scale = imageV.image!.size.width / imageV.frame.size.width
            }
        } else {
            if (imageV.image!.size.width > imageV.image!.size.height) {
                scale = imageV.image!.size.width / imageV.frame.size.width
            } else {
                scale = imageV.image!.size.height / imageV.frame.size.height
            }
        }
        return CGSizeMake(imageV.image!.size.width / scale, imageV.image!.size.height / scale)
    }
    **/
}
I tried two things here to get rid of the blank space.
The first attempt is the call to AVMakeRectWithAspectRatioInsideRect.
The second attempt is the two chunks of code in the /** **/ comments (the onScreenPointSizeOfImageInImageView function and the calls to it in viewDidLoad).
But I can't tell whether either works, because itemImage.image!.size is causing an error.
So two questions:
1) Why is itemImage.image!.size giving me a nil while unwrapping?
2) Has anyone found a faster solution to removing blank spaces caused by AspectFit?
imageView.widthAnchor.constraint(equalTo: imageView.heightAnchor, multiplier: image.size.width / image.size.height).isActive = true
This answer is written programmatically in UIKit with Swift 5.
As mentioned by @Ignelio, an NSLayoutConstraint will do the work for a UIImageView.
The reasoning is that you want to maintain the aspect ratio. Using
// let UIImage be whatever you decide to name it
UIImage.contentMode = .scaleAspectFit
makes the UIImage inside the UIImageView fit back to its natural aspect ratio given the width. However, as mentioned in Apple's documentation, that leaves the remaining area as transparent spacing. Hence, what you want to tackle is the UIImageView's size/frame.
--
With this method, you constrain the UIImageView's width to its height multiplied by the UIImage's aspect ratio, scaling it back perfectly with respect to its parent's width constraint (whatever that may be).
// let UIImageView be whatever you name
UIImageView.widthAnchor.constraint(equalTo: UIImageView.heightAnchor, multiplier: UIImage.size.width / UIImage.size.height).isActive = true

Property-like closures for local variables

I couldn't find an answer to my question in the Swift book.
Is it possible to create a property-like closure for a local variable in Swift? I mean something like the following snippet:
func someFunc() {
    // here goes our closure
    var myRect: CGRect {
        var x = 10
        var y = 20
        var width = 30
        var heigth = 40
        myRect = CGPointMake(x, y, width, heigth)
    }
}
I have some complex evaluation of UI element positions, and this trick would make my code much more readable.
This is called a read-only computed property, and you can omit the get keyword to simplify the declaration:
var myRect: CGRect {
    let x: CGFloat = 10
    let y: CGFloat = 20
    let width: CGFloat = 30
    let height: CGFloat = 40
    return CGRectMake(x, y, width, height)
}
Read-Only Computed Properties
A computed property with a getter but no setter is known as a read-only computed property. A read-only computed property always returns a value, and can be accessed through dot syntax, but cannot be set to a different value.
NOTE
You must declare computed properties, including read-only computed properties, as variable properties with the var keyword, because their value is not fixed. The let keyword is only used for constant properties, to indicate that their values cannot be changed once they are set as part of instance initialization.
You can simplify the declaration of a read-only computed property by removing the get keyword and its braces.
(From the Swift documentation, Properties.)
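Note that the same syntax also works for a local variable inside a function, which is exactly what the question asked for. A minimal sketch in current Swift:

import UIKit

func someFunc() {
    // A read-only computed *local* variable: the body runs on each access.
    var myRect: CGRect {
        CGRect(x: 10, y: 20, width: 30, height: 40)
    }
    print(myRect) // (10.0, 20.0, 30.0, 40.0)
}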
Why not try this way?
func someFunc() {
    let myRect = { () -> CGRect in
        let x: CGFloat = 10.0
        let y: CGFloat = 20.0
        let width: CGFloat = 30.0
        let height: CGFloat = 40.0
        return CGRectMake(x, y, width, height)
    }
    myRect() // Call it
}
EDIT: If you need to calculate things like point positions (as with maxElement), using a closure is a good way to save writing several small functions.

swift global constants: cannot use another constant for initialization

Here is what I am trying to do:
class ViewController: UIViewController {
    let screenRect: CGRect = UIScreen.mainScreen().bounds
    let screenWidth = screenRect.width
    let screenHeight = screenRect.height
    let screenX = screenRect.origin.x
    let screenY = screenRect.origin.y

    override func viewDidLoad() {
        ...and so on
Swift allows me to declare screenRect.
However, it does not allow me to declare any of the other constants using it. It shows me the error: 'ViewController.Type' does not have a member named 'screenRect'.
How do I define these constants, and why doesn't Swift allow me to use one constant to define another?
The constants do not know about the other constants because those are not yet initialized at that point. What you can do instead is use computed properties:
class ViewController: UIViewController {
    var screenRect: CGRect { return UIScreen.mainScreen().bounds }
    var screenWidth: CGFloat { return self.screenRect.width }
}
This only works with vars, due to the nature of computed properties. Whether that is acceptable depends on the use case: if you call it very often, it might not be the wisest choice performance-wise.
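If the repeated computation is a concern, a lazy stored property (the same idea as in the first question above) computes the value only once, on first access, when self is already fully initialised. A sketch using the question's Swift API names:

class ViewController: UIViewController {
    let screenRect: CGRect = UIScreen.mainScreen().bounds
    // Evaluated once, on first access; self is available by then.
    lazy var screenWidth: CGFloat = self.screenRect.width
}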
It's not possible to access self before the initialization process has completed, which is why it gives an error.
What probably happens is that during instantiation and initialization it's not guaranteed that properties are initialized in the same order as you have defined them in the code, so you cannot initialize a property with a value retrieved from another property.
My suggestion is to move the property initializations into the init() method.
Addendum 1: as suggested by @Yatheesha, it is correct to say that self is not available until all properties have been initialized; see the "Two-Phase Initialization" paragraph under "Initialization" in the Swift book.
In this line (as well as in the ones after it):
let screenWidth = screenRect.width
you are implicitly using self.
So the correct way is to use init() as follows:
let screenRect: CGRect
let screenWidth: CGFloat
let screenHeight: CGFloat
let screenX: CGFloat
let screenY: CGFloat

init(coder aDecoder: NSCoder!) {
    let bounds = UIScreen.mainScreen().bounds
    screenRect = bounds
    screenWidth = bounds.width
    screenHeight = bounds.height
    screenX = bounds.origin.x
    screenY = bounds.origin.y
    super.init(coder: aDecoder)
}
Of course, if there is more than one designated init(), you have to replicate that code in each. In that case it might be better to use @Andy's approach with computed properties, if you prefer to avoid code duplication.
Addendum 2
Another obvious way (and probably the best one) is to avoid using self at all:
let screenRect: CGRect = UIScreen.mainScreen().bounds
let screenWidth = UIScreen.mainScreen().bounds.width
let screenHeight = UIScreen.mainScreen().bounds.height
let screenX = UIScreen.mainScreen().bounds.origin.x
let screenY = UIScreen.mainScreen().bounds.origin.y
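In current Swift, the same idea is often written as a caseless enum used as a namespace; static stored properties are initialised lazily and may reference one another. A sketch (the Screen name is just illustrative):

import UIKit

enum Screen {
    static let rect = UIScreen.main.bounds
    static let width = rect.width
    static let height = rect.height
    static let x = rect.origin.x
    static let y = rect.origin.y
}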
