Overflow when converting from UInt32 to Int - iOS

I'm having a really weird issue with Swift/Xcode (not really sure where the source lies, to be honest).
I have the following code:
extension Int {
    public static func random(min : Int = 0, max : Int = Int(UInt32.max - 1)) -> Int {
        return min + Int(arc4random_uniform(UInt32(max - min + 1)))
    }
}
When I build this code in Xcode, it works perfectly fine. When I try to build it using xcodebuild though, the compiler gives me the following error:
integer overflows when converted from 'UInt32' to 'Int'
public static func random(min : Int = 0, max : Int = Int(UInt32.max - 1)) -> Int {
Which is weird, since the values of Int.max and UInt32.max are nowhere close.
I'm using Xcode 7.0 beta 5 for compilation, if that's any help... 'cause I'm absolutely stumped.

That error occurs if you compile for a 32-bit device (e.g. iPhone 5),
because Int is then a signed 32-bit integer, and UInt32.max - 1 is
outside of its range.
Another problem is the calculation of UInt32(max - min + 1), which can
crash at runtime due to an overflow, e.g. if you call
random(min: Int.min, max: Int.max)
See How can I generate large, ranged random numbers in Swift? for a possible solution to avoid
overflows when generating random numbers for arbitrary ranges.
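For illustration, here is a variant that compiles for 32-bit targets by capping the default upper bound at Int32.max - 1 (a minimal sketch; it still traps at runtime for extreme spans such as min: Int.min, max: Int.max, for which see the linked answer):
import Foundation

extension Int {
    public static func random(min : Int = 0, max : Int = Int(Int32.max - 1)) -> Int {
        // Int32.max - 1 fits in a signed 32-bit Int, so this default
        // also compiles on 32-bit devices. The conversion still traps
        // if max - min + 1 overflows Int or exceeds UInt32.max.
        return min + Int(arc4random_uniform(UInt32(max - min + 1)))
    }
}

// Int.random(min: 1, max: 6)   // uniform in 1...6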

Related

Modifying too many bytes of Data object crashes Swift app

I am creating an app where I need to modify specific bytes in a Data object, but it crashes when I modify too many bytes at one time.
I thought the problem might be because my app was hanging for too long, but I put my code in a DispatchGroup and it didn’t help.
// amount and interval are Ints
var pos: Int = 1
let count = data.count
var tempData: Data = data
while pos < count {
    // adding amount to a byte traps if the result exceeds UInt8.max
    tempData[pos - 1] = tempData[pos - 1] + UInt8(amount)
    pos += interval
}
This code crashes my app when I provide it with a large Data object, but works fine with small ones.
I found my problem. Since I was adding two UInt8s together, there was a chance that the result would not fit in a UInt8 (greater than 255), which resulted in a crash. I fixed this by changing my + to &+ so the UInt8 addition wraps around modulo 256 instead of trapping.
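For illustration, the difference between + and &+ on UInt8 (hypothetical values):
let a: UInt8 = 250
let b: UInt8 = 10
// a + b          // overflow: a compile-time error for constants, a runtime trap for dynamic values
print(a &+ b)     // prints 4: the sum wraps around modulo 256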

Int to format %.2f returns unexpected number on iPhone 5

I'm passing 0 as an argument to String(format: "%.2f"). It works as expected on iPhone 5s, SE, 6, 6s, etc. However, it stopped working on iPhone 5. I guessed that it was a problem of 32-bit vs 64-bit systems, because %f formats a 64-bit floating-point number. Wrapping the 0 in Double(0) fixed it; the result was 0.00.
Can someone explain it in more detail?
String(format:) uses the same conversion specifications as
printf
(with some additions like %@ for objects). In particular, the %f
conversion expects a Double on the argument list, and passing
anything else causes undefined behaviour: it may produce unexpected
output or crash.
On a 64-bit platform, passing 0 may work by chance because then
Int is a 64-bit integer and thus has the same size as a Double.
But even that is not guaranteed to work:
passing an integer argument instead of the expected floating-point
number is still undefined behaviour.
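A minimal sketch of the difference (assuming Foundation is imported):
import Foundation

let n: Int = 0
// String(format: "%.2f", n)        // undefined behaviour: %f expects a Double
String(format: "%.2f", Double(n))   // "0.00" on 32-bit and 64-bit alike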
You can also wrap Swift's built-in round function in a small helper for more consistent behavior:
import Foundation

// Round the given value to a specified number of decimal places
func round(_ value: Double, toDecimalPlaces places: Int) -> Double {
    let divisor = pow(10.0, Double(places))
    return round(value * divisor) / divisor
}
Example:
round(52.3761, toDecimalPlaces: 3) // 52.376
round(52.3761, toDecimalPlaces: 2) // 52.38

How are two UInt8 variables added in iOS Swift?

Say there are two variables:
let number1: UInt8 = 100
let number2: UInt8 = 100
You add and print them
print(number1 + number2) //This prints 200
Now define one more
let number3: UInt8 = 200
And try to add now
print(number1 + number3) // Throws execution was interrupted
I understand that the sum of number1 and number3 would be out of range for UInt8, but explicit casting also does not help; for example, the following line gives the same error:
print(UInt8(number1 + number3)) // Throws execution was interrupted
The way I found was to do the following:
print(Int(number1) + Int(number3))
Is there a better way of adding UInt8 number when their sum goes out of range?
Girish K Gupta,
UInt8 has the range 0 to 255, which you can check using UInt8.min and UInt8.max. Basically 0 to 2^8 - 1.
The issue is that number1 + number3 would be 300, and 300 is greater than 255, hence the crash.
When you add two UInt8 values, the result is by default a UInt8 as well, hence the crash.
Finally, when you write Int(number1) + Int(number3), you are explicitly converting number1 and number3 to Int.
When you use Int, the range of its values depends on the platform you are running on, either 32-bit or 64-bit. For example, its range is -2,147,483,648 to 2,147,483,647 on 32-bit.
When you add an Int to an Int, the result is an Int. And believe me, 300 is way inside that range :)
As per your question, is there a better way to do it :)
Apple's docs clearly instruct you to use Int rather than UInt8, UInt32, or even UInt64, unless using one of them is absolutely essential.
Here is the quote from Apple's doc :)
“Use UInt only when you specifically need an unsigned integer type
with the same size as the platform’s native word size. If this is not
the case, Int is preferred, even when the values to be stored are
known to be non-negative. A consistent use of Int for integer values
aids code interoperability, avoids the need to convert between
different number types, and matches integer type inference,”
Excerpt From: Apple Inc. “The Swift Programming Language (Swift 2.2).”
iBooks. https://itun.es/in/jEUH0.l
So the best thing for you to do :) follow Apple's instruction :) change number1, number2 and number3 to Int :) Problem solved :)
Hence no crash :)
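For illustration, a minimal sketch of that advice:
let number1 = 100   // inferred as Int, not UInt8
let number3 = 200
print(number1 + number3)   // 300, comfortably within Int's range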
As you've said, converting both UInt8 variables to Int avoids the default overflow trap, as the resulting Int now has room to fit the sum.
To avoid casting the variables for every operation we would like to overload the operator like this:
func + (left: UInt8, right: UInt8) -> Int {
    return Int(left) + Int(right)
}
However, this will give us a compiler error, as the + operator is already defined for adding two UInt8s.
What we could do instead is define a custom operator, say ^+, to mean addition of two UInt8s, but add them as Ints, like so:
infix operator ^+ { associativity left precedence 140 }

func ^+ (left: UInt8, right: UInt8) -> Int {
    return Int(left) + Int(right)
}
Then we can use it in our algorithms:
print(number1 ^+ number3) // Prints 300
If, however, you want the result to just overflow, you can use the overflow operators from the standard library:
print(number1 &+ number3) // Prints 44

iOS Swift - EXC_BAD_INSTRUCTION on certain devices

I'm very new to Swift and iOS development but I've come across a bug that is causing my app to crash when running on the following devices:
iPhone 4S
iPhone 5
iPad 2
iPad Retina
Here is the code that is being flagged up:
// bin2dec - converts a binary string into a decimal string
func bin2dec(input: String) -> String {
    var counter = countElements(input)
    var digit: Character
    var power = 1
    var result = 0
    while counter > 0 {
        digit = input[advance(input.startIndex, counter - 1)]
        switch digit {
        case "0":
            result += 0
        case "1":
            result += 1 * power
        default:
            power = power / 2
            break
        }
        counter--
        power *= 2
    }
    return "\(result)"
}
and the error is:
Thread 1: EXC_BAD_INSTRUCTION(code=EXC_I386_INVOP,subcode=0x0)
Any help would be appreciated, thanks!
iPhone 4S, iPhone 5, iPad 2, iPad Retina are 32-bit devices, where Int
is a 32-bit integer. Therefore starting with
var power = 1
and then calling
power *= 2
32 times will overflow and cause an exception. In Swift, integer arithmetic does not silently "wrap around" as in (Objective-)C,
unless you explicitly use the "overflow operators" &*, &+ etc.
Possible solutions:
Use Int64 instead of Int.
Avoid the final multiplication of power (whose result is not
needed).
Note that there are simpler methods to convert a string of binary
digits to a number, see for example How to convert a binary to decimal in Swift?.
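For illustration, a sketch along the lines of the second suggestion, accumulating from the most significant digit so no final power multiplication is needed (it assumes the input contains only binary digits and fits in the platform's Int, i.e. at most 31 digits on a 32-bit device):
func bin2dec(input: String) -> String {
    var result = 0
    for digit in input {
        // doubling first, then adding the digit, never overflows
        // for inputs that fit in Int
        result = result * 2 + (digit == "1" ? 1 : 0)
    }
    return "\(result)"
}

// In current Swift the standard library can also parse it directly:
// Int("100101", radix: 2)   // Optional(37)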

Bug in fminf and fmaxf on iOS 64-bit processors

I came across a bug with 64-bit processors that I wanted to share.
CGFloat test1 = 0.58;
CGFloat test2 = 0.40;
CGFloat value;
value = fmaxf( test1, test2 );
The result would be:
value = 0.5799999833106995
This is obviously a rounding issue, but if you needed to check which value was picked, you would get an erroneous result.
if (test1 == value) {
    // do something
}
However, if you use either MIN(A, B) or MAX(A, B), it works as expected.
I thought this was worth sharing.
Thanks
This has nothing to do with a bug in fminf or fmaxf. There is a bug in your code.
On 64-bit systems, CGFloat is typedef'd to double, but you're using the fmaxf function, which operates on float (not double), which causes its arguments to be rounded to single precision, thus changing the value. Don't do that.
On 32-bit systems, this doesn't happen because CGFloat is typedef'd to float, matching the argument and return type of fmaxf; no rounding occurs.
Instead, either include <tgmath.h> and use fmax without the f suffix, or use float instead of CGFloat.
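For reference, a Swift rendering of the same pitfall (a sketch; fmaxf takes Floats, so routing CGFloat values through it rounds them on 64-bit platforms):
import CoreGraphics
import Foundation

let test1: CGFloat = 0.58
let test2: CGFloat = 0.40

// Forcing the values through Float reproduces the bug:
let bad = CGFloat(fmaxf(Float(test1), Float(test2)))
bad == test1    // false on 64-bit: 0.58 was rounded to Float precision

// max keeps full precision, matching the MIN/MAX behaviour above:
let good = max(test1, test2)
good == test1   // true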
