I have to deal with a pretty long Int that comes to me as a String. Calling Int64(String) works fine on 64-bit devices, but I see crashes on 32-bit devices. What is the reason for this?
Here is the code:
let predicateBarcode = NSPredicate(format: "barcode = %ld", Int64(searchTerm)!)
I cannot tell anything about the search term; it comes from the barcode scanner and is an EAN-13. I also cannot reproduce the crash, as it only happens to my customers.
This is not a problem with Int64.init(_:) but with the format string given to NSPredicate.
The length specifier l means its argument needs to be long or unsigned long, which are equivalent to Int or UInt in Swift (see Apple's String Format Specifiers documentation).
If you want to use an Int64 value as a format argument, the right length specifier is ll, meaning long long, which is equivalent to Int64 in Swift.
let predicateBarcode = NSPredicate(format: "barcode = %lld", Int64(searchTerm)!)
You may need to fix some other parts as well, but I cannot tell since you are not showing them. (And as far as I tested, I could not make my test app crash.) In addition, are you 100% sure that Int64(searchTerm)! cannot crash?
Anyway, the format string needs to be fixed at least.
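For completeness, here is a minimal sketch that combines both points; the function name is just for illustration, and using optional binding instead of Int64(searchTerm)! also removes the other possible crash:

import Foundation

func barcodePredicate(for searchTerm: String) -> NSPredicate? {
    // Int64.init(_:) is failable; bail out instead of force-unwrapping a bad scan
    guard let barcode = Int64(searchTerm) else { return nil }
    // %lld matches a 64-bit (long long) argument on both 32-bit and 64-bit devices
    return NSPredicate(format: "barcode = %lld", barcode)
}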
Related
I am trying to convert an NSString to long but I am getting a garbage value. Below is my code:
long t1 = [[jsonDict valueForKeyPath:@"detail.amount"] doubleValue] * 1000000000000000000;
long t2 = [[jsonDict valueForKeyPath:@"detail.fee"] doubleValue] * 10000000000000000;
NSLog(@"t1: %ld", t1);
NSLog(@"t2: %ld", t2);
detail.amount = 51.74
detail.fee = 2.72
Output:
t1: 9223372036854775807 (Getting Garbage value here)
t2: 27200000000000000 (Working fine)
Thanks in advance.
Each numeric type (int, long, double, float) has limits. For your 64-bit long (because your device is 64-bit), the upper limit is 9,223,372,036,854,775,807 (see here: https://en.wikipedia.org/wiki/9,223,372,036,854,775,807).
In your case, 51.74 * 1,000,000,000,000,000,000 =
51,740,000,000,000,000,000
While a 64-bit long only has a maximum of
9,223,372,036,854,775,807
So an overflow happens at 9,223,372,036,854,775,808 and above, which is what your calculation evaluates to.
Also note that what you are doing will cause problems even if you only cater for the 64-bit long range: what happens when your app runs on a 32-bit device (like the iPhone 5c or earlier)?
It is generally a bad idea to use such large numbers unless you are doing complex maths. If exact accuracy is not critical, consider simplifying the number, e.g. 51,740G (G = giga).
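To make the magnitudes concrete, here is a quick check written in Swift (the question's code is Objective-C, so treat this only as an illustration of the arithmetic):

let product = 51.74 * 1_000_000_000_000_000_000.0  // 5.174e19 as a Double
print(product > Double(Int64.max))                  // true: larger than 9,223,372,036,854,775,807
print(Int64(exactly: product) as Any)               // nil: the value cannot be represented as a 64-bit integer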
It's because you're storing the product in long-typed variables t1 and t2.
Use either float or double, and you'll get the correct answer.
Based on C's data types:
Long signed integer type. Capable of containing at least the [−2,147,483,647, +2,147,483,647] range; thus, it is at least 32 bits in size.
Ref: https://en.wikipedia.org/wiki/C_data_types
9223372036854775807 is the maximum value of a 64-bit signed long. I deduce that [[jsonDict valueForKeyPath:@"detail.amount"] doubleValue] * 1000000000000000000 is larger than the maximum long value, so when you cast it to long, you get the closest value that long can represent.
As you have read, this is not possible with long. Since it looks like you are doing finance math, you should use NSDecimalNumber instead of double to solve this problem.
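A minimal sketch of the NSDecimalNumber approach, written in Swift for brevity (the question's code is Objective-C, and "51.74" simply stands in for detail.amount):

import Foundation

let amount = NSDecimalNumber(string: "51.74")
let scale = NSDecimalNumber(mantissa: 1, exponent: 18, isNegative: false)  // 10^18
let scaled = amount.multiplying(by: scale)
print(scaled)  // 51740000000000000000, with no overflow and no lost precision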
I have a library from zendesk for iOS and I use a number they give me to sort help desk items by category. This is how I tell it what category I want:
hcConfig.groupIds = [115000159351]
However, Xcode is throwing the error
Integer literal '115000159351' overflows when stored into 'Int'
OK, I understand. Probably because the number is more than 32 bits. But I have another app I made that uses an equally long number, and that one builds just fine with no errors. Exact same code, except a slightly different number.
hcConfig.groupIds = [115000158052]
Why will one project build but the other will not?
For reference here is their instructions:
https://developer.zendesk.com/embeddables/docs/ios_support_sdk/help_center#filter-articles-by-category
The Swift Int is a 32-bit or 64-bit integer, depending on the platform.
To create an NSNumber (array) from a 64-bit literal on all platforms, use
let groupIds = [NSNumber(value: 115000159351 as Int64)]
or
let groupIds = [115000159351 as Int64 as NSNumber]
When both integers are converted to binary, they each need about 37 bits:
1101011000110100010110010110001110111 = 115000159351
1101011000110100010110010011101100100 = 115000158052
So it seems that the project which works is being compiled as a 64-bit app, whereas the one that fails is being compiled as a 32-bit app.
Please verify this.
Please refer to "How to convert Xcode 32 bit app into 64 bit Xcode app" to convert your app from 32-bit to 64-bit.
To use large numbers in NSNumber, use the following method:
NSNumber *value = [NSNumber numberWithLongLong:115000159351];
Furthermore, the following code works fine for me on 32-bit as well:
var groupIds = [NSNumber]();
groupIds = [115000158052, 115000158053, 115000158054]
groupIds = [115000158052 as NSNumber, 115000158053 as NSNumber, 115000158054 as NSNumber]
groupIds = [115000158052 as Int64 as NSNumber, 115000158053 as Int64 as NSNumber, 115000158054 as Int64 as NSNumber]
I think the elements of groupIds are not NSNumber but Int.
Objective C:
NSInteger x = // some value...
NSString* str = [NSString stringWithFormat:@"%d", (int)x];
// str is passed to swift
Swift:
let string:String = str!
let x = Int32(string)! // crash!
Sorry for the disjointed code; this is from a crash reported in a large existing codebase. I don't see how it's possible for the int -> string -> Int32 conversion to fail. NSInteger can be too big for Int32, but I would expect the explicit (int) cast to prevent that case (it will give the wrong value, but still shouldn't crash).
I have been unable to reproduce this, so I'm trying to figure out if my understanding is completely wrong.
Edit: obviously it is theoretically possible for it to return nil in the sense that the spec says so. I'm asking if/how it can in this specific situation.
Since you are using Int32, the initializer can return nil if the value supplied to it is outside the range Int32 can represent. In your specific case this can easily happen, since, as the documentation of NSInteger states, it can take 64-bit values in 64-bit applications (which is the only supported configuration since iOS 11).
The documentation of Int32.init(_:String) clearly states the cases in which the failable initializer can fail:
If description is in an invalid format, or if the value it denotes in base 10 is not representable, the result is nil. For example, the following conversions result in nil:
Int(" 100") // Includes whitespace
Int("21-50") // Invalid format
Int("ff6600") // Characters out of bounds
Int("10000000000000000000000000") // Out of range
I am receiving a creation date for an object in a database as milliseconds (# of milliseconds since epoch or whatever) and would like to convert it to/from a string in Swift!
I think I'd need a data type of CUnsignedLong?
I am trying something like this but it outputs the wrong number:
var trial: CUnsignedLong = 1397016000000
println(trial) //outputs 1151628800 instead!
I'm guessing this is the wrong data type, so what would you all advise in a situation like this?
In Java I was using long which worked.
Thanks!
func currentTimeMillis() -> Int64 {
    let nowDouble = NSDate().timeIntervalSince1970
    return Int64(nowDouble * 1000)
}
This works fine.
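And to go to and from a string, assuming the function above:

let millis = currentTimeMillis()  // e.g. 1397016000000
let asString = String(millis)     // "1397016000000"
let roundTrip = Int64(asString)   // Optional(1397016000000)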
On 32-bit platforms, CUnsignedLong is a 32-bit integer, which is not large enough to hold the number 1397016000000. (This is different from Java, where long is generally a 64-bit integer.)
You can use UInt64 or NSTimeInterval (a type alias for Double), which is what the NSDate methods use.
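A small sketch of that approach, assuming the database hands you the milliseconds as a string (the variable names are only for illustration):

import Foundation

let millisString = "1397016000000"
if let millis = Int64(millisString) {   // Int64 is 64-bit on all platforms
    let date = Date(timeIntervalSince1970: TimeInterval(millis) / 1000)
    print(millis, date)                 // 1397016000000 2014-04-09 04:00:00 +0000
}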
I had code in my app that looked like the following. I got some feedback about a bug, and to my horror, when I put a debugger on it I found that the MAX of -5 and 0 is -5!
NSString *test = @"short";
int calFailed = MAX(test.length - 10, 0); // returns -5
After looking at the MAX macro, I see that it requires both parameters to be of the same type. In my case, "test.length" is an unsigned int and 0 is a signed int. So a simple cast (for either parameter) fixes the problem.
NSString *test = @"short";
int calExpected = MAX((int)test.length - 10, 0); // returns 0
This seems like a nasty and unexpected side effect of this macro. Is there another built-in method in iOS for performing MIN/MAX where the compiler would have warned about mismatched types? It seems like this SHOULD have been a compile-time issue and not something that required a debugger to figure out. I can always write my own, but wanted to see if anybody else had similar issues.
Enabling -Wsign-compare, as suggested by FDinoff's answer, is a good idea, but I thought it might be worth explaining the reason behind this in some more detail, as it's quite a common pitfall.
The problem isn't really with the MAX macro in particular, but with a) subtracting from an unsigned integer in a way that leads to an overflow, and b) (as the warning suggests) with how the compiler handles the comparison of signed and unsigned values in general.
The first issue is pretty easy to explain: When you subtract from an unsigned integer and the result would be negative, the result "overflows" to a very large positive value, because an unsigned integer cannot represent negative values. So [@"short" length] - 10 will evaluate to 4294967291.
What might be more surprising is that even without the subtraction, something like MAX([@"short" length], -10) will not yield the correct result (it would evaluate to -10, even though [@"short" length] would be 5, which is obviously larger). This has nothing to do with the macro, something like if ([@"short" length] > -10) { ... } would lead to the same problem (the code in the if-block would not execute).
So the general question is: What happens exactly when you compare an unsigned integer with a signed one (and why is there a warning for that in the first place)? The compiler will convert both values to a common type, according to certain rules that can lead to surprising results.
Quoting from Understand integer conversion rules [cert.org]:
If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
(emphasis mine)
Consider this example:
int s = -1;
unsigned int u = 1;
NSLog(@"%i", s < u);
// -> 0
The result will be 0 (false), even though s (-1) is clearly less than u (1). This happens because both values are converted to unsigned int, as int cannot represent all values that can be contained in an unsigned int.
It gets even more confusing if you change the type of s to long. Then you'd get the same (incorrect) result on a 32-bit platform (iOS), but in a 64-bit Mac app it would work just fine! (Explanation: long is a 64-bit type there, so it can represent all 32-bit unsigned int values.)
So, long story short: Don't compare unsigned and signed integers, especially if the signed value is potentially negative.
You probably don't have enough compiler warnings turned on. If you turn on -Wsign-compare (which can be turned on with -Wextra), you will get a warning that looks like the following:
warning: signed and unsigned type in conditional expression [-Wsign-compare]
This lets you place casts in the right places where necessary, and you shouldn't need to rewrite the MAX or MIN macros.