Objective-C:
NSInteger x = // some value...
NSString* str = [NSString stringWithFormat:@"%d", (int)x];
// str is passed to swift
Swift:
let string: String = str!
let x = Int32(string)! // crash!
Sorry for the disjointed code; this is from a crash reported in a large existing codebase. I don't see how it's possible for the int -> String -> Int32 conversion to fail. NSInteger can be too big for Int32, but I would expect the explicit (int) cast to prevent that case (it will give the wrong value, but still shouldn't crash).
I have been unable to reproduce this, so I'm trying to figure out if my understanding is completely wrong.
Edit: obviously it is theoretically possible for it to return nil in the sense that the spec says so. I'm asking if/how it can in this specific situation.
Since you are using Int32, the initializer can return nil if the value supplied is outside the range Int32 can represent. In your specific case this can easily happen since, as the documentation of NSInteger states, it can hold 64-bit values in 64-bit applications (which is the only supported configuration since iOS 11).
The documentation of Int32.init(_:String) clearly states the cases in which the failable initializer can fail:
If description is in an invalid format, or if the value it denotes in base 10 is not representable, the result is nil. For example, the following conversions result in nil:
Int(" 100") // Includes whitespace
Int("21-50") // Invalid format
Int("ff6600") // Characters out of bounds
Int("10000000000000000000000000") // Out of range
Related
I found a bug in my code that is caused by NSDecimalNumber.notANumber.intValue returning 9, while I would expect NaN (as floatValue or doubleValue return). Does anybody know why?
As mentioned by Joakim Danielson and noted in the Apple Developer Documentation:
... Because numeric types have different storage capabilities, attempting to initialize with a value of one type and access the value of another type may produce an erroneous result ...
And since Swift's Int struct cannot represent NaN values, you get this erroneous result.
Instead you could use Int's failable initializer init(exactly:), which converts your NSDecimalNumber to an Int? that will either contain its value or be nil if it is not representable by an Int.
let strangeNumber = NSDecimalNumber.notANumber // nan
let integerRepresentation = Int(exactly: strangeNumber) // nil
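A short usage sketch of the unwrapping pattern this enables (the printed text is illustrative):
if let value = Int(exactly: strangeNumber) {
    print("usable integer: \(value)")
} else {
    print("not representable as Int")  // this branch runs for .notANumber
}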
I have to deal with a pretty long integer that comes to me as a String. Calling Int64(String) works fine on 64-bit devices, but I see crashes on 32-bit devices. What is the reason for this?
Here is the code:
let predicateBarcode = NSPredicate(format: "barcode = %ld", Int64(searchTerm)!)
I cannot tell you anything about the searchTerm; it comes from the barcode scanner and is an EAN-13. I also cannot reproduce the crash, as this is only happening to my customers.
This is not a problem with Int64.init(_:) but with the format string given to NSPredicate.
The length specifier l means its argument needs to be long or unsigned long, which are equivalent to Int and UInt in Swift.
String Format Specifiers
If you want to use an Int64 value as a format argument, the right length specifier is ll, meaning long long, which is equivalent to Int64 in Swift.
let predicateBarcode = NSPredicate(format: "barcode = %lld", Int64(searchTerm)!)
You may need to fix some other parts, but I cannot tell, since you are hiding them. (And as far as I tested, I could not make my test app crash.) In addition, are you 100% sure that Int64(searchTerm)! can never crash?
Anyway, the format string needs to be fixed at least.
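A sketch of the failure mode and the fix, using a hypothetical EAN-13 value: on a 32-bit device, long is only 32 bits wide, so %ld reads just half of the 64-bit argument, while %lld consumes the full Int64:
import Foundation

let searchTerm = "4006381333931"   // hypothetical 13-digit EAN-13
let barcode = Int64(searchTerm)!   // any valid 13-digit number fits in Int64
// Wrong on 32-bit devices, where "%ld" expects a 32-bit long:
// let bad = NSPredicate(format: "barcode = %ld", barcode)
let good = NSPredicate(format: "barcode = %lld", barcode)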
I'm creating an extension for String and I'm trying to decide what proper/expected/good behavior would be for a subscript operator. Currently, I have this:
// Will crash on 0 length strings
subscript(kIndex: Int) -> Character {
    var index = kIndex
    index = index < 0 ? 0 : index
    index = index >= self.length ? self.length-1 : index
    let i = self.startIndex.advancedBy(index)
    return self.characters[i]
}
This causes all values outside the range of the string to be capped to the edge of the string. While this reduces crashing from passing a bad index to the subscript, it doesn't feel like the right thing to do. I am unable to throw an exception from a subscript, and not checking the index causes an EXC_BAD_INSTRUCTION error if it is out of bounds. The only other option I can think of is to return an optional, but that seems awkward. Weighing the options, what I have seems to be the most reasonable, but I don't think anybody using this would expect a bad index to return a valid result.
So, my question is: what is the "standard" expected behavior of the subscript operator and is returning a valid element from an invalid index acceptable/appropriate? Thanks.
If you're implementing a subscript on String, you might want to first think about why the standard library chooses not to.
When you call self.startIndex.advancedBy(index), you're effectively writing something like this:
var i = self.startIndex
for _ in 0..<index { i = i.successor() }
This occurs because String.CharacterView.Index is not a random-access index type. See docs on advancedBy. String indices aren't random-access because each Character in a string may be any number of bytes in the string's underlying storage — you can't just get character n by jumping n * characterSize into the storage like you can with a C string.
So, if one were to use your subscript operator to iterate through the characters in a string:
for i in 0..<string.characters.count {
    doSomethingWith(string[i])
}
... you'd have a loop that looks like it runs in linear time, because it looks just like an array iteration: each pass through the loop should take the same amount of time, because each one just increments i and uses a constant-time access to get string[i], right? Nope. The advancedBy call in the first pass through the loop calls successor once, the next calls it twice, and so on... if your string has n characters, the last pass through the loop calls successor n times (even though that generates a result that was already produced in the previous pass, which called successor n-1 times). In other words, you've just made an O(n²) operation that looks like an O(n) operation, leaving a performance-cost bomb for whoever else uses your code.
This is the price of a fully Unicode-aware string library.
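For contrast, the idiomatic linear-time way to visit every character is to iterate the character view directly (Swift 2-era API, matching the code above):
for character in string.characters {
    doSomethingWith(character)
}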
Anyhow, to answer your actual question — there are two schools of thought for subscripts and domain checking:
Have an optional return type: subscript(index: Index) -> Element?
This makes sense when there's no sensible way for a client to check whether an index is valid without performing the same work as a lookup — e.g. for a dictionary, finding out if there's a value for a given key is the same as finding out what the value for a key is.
Require that the index be valid, and make a fatal error otherwise.
The usual case for this is situations where a client of your API can and should check for validity before accessing the subscript. This is what Swift arrays do, because arrays know their count and you don't need to look into an array to see if an index is valid.
The canonical test for this is precondition: e.g.
subscript(index: Index) -> Element {
    precondition(isValid(index), "index must be valid")
    // ... do lookup ...
}
(Here, isValid is some operation specific to your class for validating an index, e.g. making sure it's >= 0 and < count.)
In just about any use case, it's not idiomatic Swift to return a "real" value in the case of a bad index, nor is it appropriate to return a sentinel value — separating in-band values from sentinels is the reason Swift has Optionals.
Which of these is more appropriate for your use case is... well, since your use case is problematic to begin with, it's sort of a wash. If you precondition that index < count, you still incur an O(n) cost just to check that (because a String has to examine its contents to figure out which sequences of bytes constitute each character before it knows how many characters it has). If you make your return type optional, and return nil after calling advancedBy or count, you still incur that O(n) cost.
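If you do choose the optional return despite that cost, a minimal sketch might look like this (Swift 2-era API to match the question; advancedBy(_:limit:) clamps at endIndex, which we treat as out of bounds):
extension String {
    subscript(index: Int) -> Character? {
        guard index >= 0 else { return nil }
        // Walk forward at most `index` steps, stopping at endIndex
        let i = self.startIndex.advancedBy(index, limit: self.endIndex)
        guard i != self.endIndex else { return nil }
        return self.characters[i]
    }
}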
Consider the struct below:
typedef struct _Index {
    NSInteger category;
    NSInteger item;
} Index;
If I use this struct as a property:
@property (nonatomic, assign) Index aIndex;
When I access it without any initialization, right after a view controller alloc init, LLDB prints it as:
(lldb) po vc.aIndex
(category = 0, item = 0)
(lldb) po &_aIndex
0x000000014e2bcf70
I am a little confused: the struct already has a valid memory address, even before I wanted to allocate one. Does Objective-C initialize structs automatically? If it were an NSObject, I would have to do alloc init to get a valid object, but for a C struct I get a valid struct before I even tried to initialize it.
Could somebody explain? And is it OK like this, without manually initializing it?
To answer the subquestion of why you cannot assign to a structure component returned from a getter:
(As motivation: I have read this question several times.)
A. This has nothing to do with Objective-C. It is behavior specified by the C standard. You can check it with plain C code:
NSMakeSize( 1.0, 2.0 ).width = 3.0; // Error
B. No, it is not an enhancement added by the compiler. If it were, the result would be a warning, not an error. A compiler developer does not have the liberty to decide what is an error. (There are some cases in which they do have that liberty, but those are explicitly mentioned in the standard.)
C. The reason for this error is quite simple:
An assignment to the expression
NSMakeSize( 1.0, 2.0 ).width
would be legal if that expression were an l-value. The . operator's result is an l-value if the structure is an l-value:
A postfix expression followed by the . operator and an identifier designates a member of a structure or union object. The value is that of the named member,82) and is an lvalue if the first expression is an lvalue.
ISO/IEC 9899:TC3, 6.5.2.3
Therefore it would be assignable if the expression
NSMakeSize( 1.0, 2.0 )
were an l-value. It is not. The reason is a little more involved. To understand it, you have to know how ., ->, and & are linked:
In contrast to ., the result of -> is always an l-value.
A postfix expression followed by the -> operator and an identifier designates a member of a structure or union object. The value is that of the named member of the object to which the first expression points, and is an lvalue. 83)
Therefore (this is what footnote 83 explains) ->, &, and . are linked:
If you can calculate the address of a structure S having a component C with the & operator, the expression (&S)->C is equivalent to S.C. This requires that you can calculate the address of S. But you can never do that with a return value, even if it is a simple integer …
int f(void)
{
    return 1;
}
f()=5; // Error
… or a pointer …
int *f(void)
{
    return NULL;
}
f()=NULL; // Error
You always get the same error: it is not assignable, because it is an r-value. This is obvious, because it is not clear
a) how the compiler returns the value, especially whether it does so in addressable storage, and
b) when the lifetime of the returned value ends.
Going back to the structure, this means that the return value is an r-value. Therefore the result of the . operator on it is an r-value, too. You are not allowed to assign a value to an r-value.
D. The solution
There is a way to assign to a "returned structure"; one may decide whether it is good or not. Since the result of -> is always an l-value, you can return a pointer to the structure. Dereferencing this pointer with the -> operator always yields an l-value, so you can assign a value to it:
// obj.aIndex returns a pointer
obj.aIndex->category = 1;
You do not need @public for that (which really would be a bad idea).
The semantics of the property are to copy the struct, so it doesn't need to be allocated and initialized like an Objective-C object would be. It's given its own space, just like a primitive type.
You will need to be careful updating it, as this won't work:
obj.aIndex.category = 1;
Instead you will need to do this:
Index index = obj.aIndex;
index.category = 1;
obj.aIndex = index;
This is because the property getter will return a copy of the struct and not a reference to it (the first snippet is like the second snippet, without the last line that assigns the copy back to the object).
So you might be better off making it a first class object, depending on how it will be used.
The first definition below produces the warning in the title when compiled with F# 3.0 and the warning level set to 5. The second definition compiles cleanly. I wonder if someone could please explain just what the compiler worries I might accidentally mutate, or how splitting the expression with a let binding helps avoid that. Many thanks.
let ticks_with_warning () : int64 =
    System.DateTime.Now.Ticks

let ticks_clean () : int64 =
    let t = System.DateTime.Now
    t.Ticks
I cannot really explain why the compiler emits this warning in your particular case - I agree with @ildjarn that you can safely ignore it, because the compiler is probably just being overly cautious.
However, I can give you an example where the warning might actually give you a useful hint that something might not go as you would expect. If we had a mutable struct like this:
[<Struct>]
type Test =
    val mutable ticks : int64
    member x.Inc() = x.ticks <- x.ticks + 1L
    new (init) = { ticks = init }
Now, the Inc method mutates the struct (and you can also access the mutable field ticks). We can try writing a function that creates a Test value and mutates it:
let foo () =
    let t = Test(1L)
    t.Inc() // Warning: The value has been copied to ensure the original is not mutated
    t
We did not mark the local value t as mutable, so the compiler tries to make sure the value is not mutated when we call Inc. It does not know whether Inc mutates the value or not, so the only safe thing is to create a copy - and thus foo returns the value Test(1L).
If we mark t as mutable, then the compiler does not have to worry about mutating it as a result of a call and so it does not give the warning (and the function returns Test(2L)):
let foo () =
    let mutable t = Test(1L)
    t.Inc()
    t
I'm not really sure what is causing the warning in your example, though. Perhaps the compiler thinks (as a result of some intermediate representation) that the Ticks operation could mutate the left-hand-side value (System.DateTime.Now and t, respectively) and it wants to prevent that.
The odd thing is that if you write your own DateTime struct in F#, you get a warning in both cases unless you mark the variable t as mutable (which is what I'd expect), but the behaviour with standard DateTime is different. So perhaps the compiler knows something about the standard type that I'm missing...