Why does NSDecimalNumber.notANumber.intValue return 9?

I found a bug in my code that is caused by NSDecimalNumber.notANumber.intValue returning 9, while I would expect NaN (as floatValue or doubleValue return). Does anybody know why?

As mentioned by Joakim Danielson and noted in the Apple Developer Documentation:
... Because numeric types have different storage capabilities, attempting to initialize with a value of one type and access the value of another type may produce an erroneous result ...
And since Swift's Int struct cannot represent NaN values, you get this erroneous result.
Instead, you could use Int's failable initializer init(exactly:), which converts your NSDecimalNumber to an Int? that will either contain its value or be nil if it is not representable as an Int.
let strangeNumber = NSDecimalNumber.notANumber // nan
let integerRepresentation = Int(exactly: strangeNumber) // nil
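A minimal follow-on sketch, assuming a fallback of 0 is acceptable in your context, using nil-coalescing to handle the nil case:
import Foundation

// init(exactly:) returns nil for NaN, so ?? supplies a default rather than an erroneous 9
let fallbackRepresentation = Int(exactly: NSDecimalNumber.notANumber) ?? 0 // 0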

Related

Can this Int32 initializer ever return nil?

Objective C:
NSInteger x = // some value...
NSString* str = [NSString stringWithFormat:@"%d", (int)x];
// str is passed to swift
Swift:
let string:String = str!
let x = Int32(string)! // crash!
Sorry for the disjointed code, this is from a crash reported in a large existing codebase. I don't see how it's possible for the int->string->int32 conversion to fail. NSInteger can be too big for int32, but I would expect the explicit (int) to prevent that case (it will give the wrong value, but still shouldn't crash).
I have been unable to reproduce this, so I'm trying to figure out if my understanding is completely wrong.
Edit: obviously it is theoretically possible for it to return nil in the sense that the spec says so. I'm asking if/how it can in this specific situation.
Since you are using Int32, the initializer can return nil if the value supplied to it is outside the range an Int32 can represent. In your specific case this can easily happen since, as the documentation of NSInteger states, it holds 64-bit values in 64-bit applications (which is the only supported configuration since iOS 11).
The documentation of Int32.init(_: String) clearly states the cases in which the failable initializer fails:
If description is in an invalid format, or if the value it denotes in base 10 is not representable, the result is nil. For example, the following conversions result in nil:
Int(" 100") // Includes whitespace
Int("21-50") // Invalid format
Int("ff6600") // Characters out of bounds
Int("10000000000000000000000000") // Out of range

Why, in Swift, when I convert from a Double to an Int is it subtracting 1?

I have some very simple code that does a calculation and converts the resulting double to an int.
let startingAge = (Double(babyAge/2).rounded().nextDown)
print(startingAge)
for each in 0..<allQuestions.count {
    if allQuestions[each] == "\(Int(startingAge))" {
        // ...
    }
}
The first print of startingAge gives me the correct answer, for example 5.0. But when it converts to an Int, it gives me an answer of 4. When the Double is 6.0, the int is 5.
I'm feeling stupid, but can't figure out what I'm doing wrong.
When you call rounded(), you round your value to the nearest integer.
When you call .nextDown, you get the next possible value less than the existing value, which means you now have the highest value that's less than the nearest integer to your original value. This still displays as the integer when you print it, but that's just rounding; it's really slightly less than the integer. So if it's printing as "4.0", it's really something like 3.9999999999999 or some such.
When you convert the value to an Int, it keeps the integer part and discards the part to the right of the decimal. Since the floating-point value is slightly less than the integer you rounded to thanks to .nextDown, the integer part is going to be one less than that integer.
Solution: Get rid of the .nextDown.
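A small sketch of the effect described above (the starting value is just an illustration):
let value = (10.0 / 2).rounded() // 5.0
let below = value.nextDown // the greatest Double strictly less than 5.0
let asInt = Int(below) // 4 – Int(_:) truncates toward zero, so the .999… is dropped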
When you cast, you lose precision.
In your case this line returns a Double; assume babyAge is 9, then startingAge is roughly 3.999999:
let startingAge = (Double(babyAge/2).rounded().nextDown)
and when you print it as an Int, your answer becomes 3:
print("\(Int(startingAge))")
To fix this use this line instead:
let startingAge = (Double(babyAge/2).rounded().nextDown).rounded()
This is what nextDown does: it does not round; it returns the next representable value below the current one, so a floating-point number becomes slightly less. If the value were an integer, it would presumably become 1 less.

Optional properties in optional classes vs. optional values in optional dictionaries

I noticed some interesting behavior when trying to access values of optional properties in optional classes vs. trying to access the values of optional values in optional dictionaries.
It seems that in the former case you only need to unwrap once to access the value. However, in the latter case you have to unwrap twice to access the value. I was wondering why this is, and was hoping someone could provide me with some insight!
Below is an example accessing the value of an optional property in an optional class
class Cat {
    var food: Food?
}

class Food {
    var amount: Int?
}

var meowzer = Cat()
meowzer.food = Food()
meowzer.food?.amount = 10

var catFoodAmt = meowzer.food?.amount
print("\(catFoodAmt)")

if let catFoodCount = meowzer.food?.amount {
    print("\(catFoodCount)")
}
The result of the first print statement is:
Optional(10)
The result of the second print statement (after unwrapping) is:
10
Below is an example of accessing the value of an optional value in an optional dictionary
var dog: [String: Int?]?
dog = ["treat_count": 10]

var dogTreatAmt = dog?["treat_count"]
print("\(dogTreatAmt)")

if let dogTreatCount = dog?["treat_count"], let dogTreatCountFinal = dogTreatCount {
    print("\(dogTreatCount)")
    print("\(dogTreatCountFinal)")
}
The result of the first print statement is:
Optional(Optional(10))
The result of the second print statement (after unwrapping once) is:
Optional(10)
The result of the third print statement (after unwrapping twice) is:
10
Why do I need to unwrap twice to access the desired value in the second case but not the first?
My guess is it has to do with the fact that if I had used a key other than "treat_count" (like "count", for example), then the value for that key would have been nil. However, I haven't been able to find an iOS "rule" or a better explanation of why this is. Any help would be much appreciated.
The difference is that Cat.food?.amount returns Int? while Dictionary<String, Int?>.subscript(String) returns Int??.
This is because Dictionary's subscript(_:) returns Value?, and Value here is Int?. Thus you're getting one extra level of Optional.
Optional chaining just removes an extra Optional wrapping at the point that it's used. It doesn't collapse all Optionals. In one case you have two, collapsed to 1 (for one ?) and in the second you have three, collapsed to 2 (for one ?).
As vadian suggests, this would be a crazy Dictionary, so it's unlikely to come up in good Swift, but it can be reasoned about. If it weren't like this, you would lose information.
It actually makes sense: simply count the number of ?s :).
In your first example, the property is an optional, whereas in your second example you have an optional dictionary whose keys are Strings and whose values are optional Ints.
So if you want to access the value of the second example, you have to unwrap the dictionary, then unwrap whatever there is in that dictionary for a given key.
This is a good question. Let's look at the type definition of Dictionary:
struct Dictionary<Key : Hashable, Value>
And the signature of the subscript function:
public subscript (key: Key) -> Value?
Which can also be thought of as:
public subscript (key: Key) -> Optional<Value>
Armed with this, let's look at your dictionary, which is typed as:
Dictionary<String, Int?>
Or more explicitly
Dictionary<String, Optional<Int>>
Where Value is of type Optional<Int>, and Key is of type String
So if we substitute in the key, the subscript definition is:
subscript (key: String) -> Optional<Value>
And if we substitute in the Value, we get what you are seeing:
subscript (key: String) -> Optional<Optional<Int>>
Now, let's break down your code to make everything fit together:
var dogTreatAmt = dog?["treat_count"]
Since dog is optional, no matter what you call on it the result will also be wrapped in an Optional, which we will temporarily think of as Optional(FunctionReturn).
Now let's take a look at FunctionReturn, which is the subscript function. We have already determined that this returns Optional<Optional<Int>>.
Which means you're really getting Optional<Optional<Optional<Int>>>; however, as noted by Rob Napier,
"Optional chaining removes an extra Optional wrapping at the point that it's used".

cannot be applied to operands of type 'UITextField' and 'Int'

I am trying to populate a Label with a text field input * 365
I keep getting the message:
Binary operator '*' cannot be applied to operands of type 'UITextField' and 'Int'
var hours = (hoursTextField.text as NSString).doubleValue
var hoursInAYear = hoursTextField * 365
Your first line is calculating the doubleValue of what's entered into the text field, but you're not using that hours variable. Perhaps you want:
var hoursInAYear = hours * 365
The error you are getting is telling you that you're trying to use the * operator between a variable whose type is UITextField and a value whose type is Int (which is what your 365 literal is interpreted as).
This error will come up any time we try to use an operator between two types for which the operator does not have an overload. It is particularly common when one operand's type is implicitly determined because we're using a literal somewhere. To resolve the issue, we must double-check how our operands are created and make sure they're of types for which our operator has an overload.
If they are not, then we should either change how we create these variables so they have the right type, or find some way of converting them when we use them with the operator.
When we change our mistaken variable from the text field to the Double we just calculated, Swift is able to compute this correctly. Despite previously claiming that 365 was an Int, being a literal it can be interpreted as several different types, one of which is Double.
When we use * between a variable of type Double and a literal number, the literal is treated as a Double, and we get the overload of the * operator that accepts two Doubles (and returns a Double).
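A short sketch of that overload resolution (the names are illustrative, not from a real project):
let hours: Double = 8
let hoursInAYear = hours * 365 // 365 is inferred as Double here, so Double * Double applies: 2920.0
// let broken = hoursTextField * 365 // does not compile: no * overload accepts (UITextField, Int)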
You're trying to multiply hoursTextField by 365. Did you mean to write:
var hours = (hoursTextField.text as NSString).doubleValue
var hoursInAYear = hours * 365 // hours, not hoursTextField.
I think it is basically just a typo or copy-paste mistake on your part, since you already calculate the hours variable correctly and don't use it afterwards. Simply change your second line to
var hoursInAYear = hours * 365

2048 casted to BOOL returns 0

Consider this code
NSInteger q = 2048;
BOOL boolQ = q;
NSLog(#"%hhd",boolQ);
After execution boolQ is equal 0. Could someone explain why is this so?
BOOL is probably implemented as char or uint8_t/int8_t, as "hh" prints half of the half of an integer, which typically is a byte.
Converting to char keeps only the lowest 8 bits of 2048 (= 0x800), which gives you 0.
The proper way to convert any integer to a boolean value is:
NSInteger q = some-value;
BOOL b = !!q;
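An analogous sketch in Swift, assuming BOOL behaves like the 8-bit integer type described above:
let q = 2048
let truncated = Int8(truncatingIfNeeded: q) // 0 – only the low 8 bits of 0x800 survive
let asBool = (q != 0) // true – comparing against zero instead of truncating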
Casting an integer value to a type too small to represent the value being converted is undefined behaviour in C (C11 standard Annex J.2), and therefore also in the part of Objective-C which deals with C-level matters. Since it's undefined behaviour it can represent the result however it wants, expected value or not.
As per 6.3.1.4, any integer can be used as a boolean value without casting, in which case it will show the expected behaviour (0 is 0, everything else is 1), giving rise to the !! idiom suggested by alk; perhaps counterintuitively, you convert the value by not explicitly converting the value (instead, the conversion is correctly handled by the implicit conversion operation inserted by the ! operator).
