iOS Swift - EXC_BAD_INSTRUCTION on certain devices

I'm very new to Swift and iOS development but I've come across a bug that is causing my app to crash when running on the following devices:
iPhone 4S
iPhone 5
iPad 2
iPad Retina
Here is the code that is being flagged up:
// bin2dec - converts binary string into decimal string
func bin2dec(input: String) -> String {
    var counter = countElements(input)
    var digit: Character
    var power = 1
    var result = 0
    while counter > 0 {
        digit = input[advance(input.startIndex, counter-1)]
        switch digit {
        case "0":
            result += 0
        case "1":
            result += 1 * power
        default:
            power = power / 2
            break
        }
        counter--
        power *= 2
    }
    return "\(result)"
}
and the error is:
Thread 1: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
Any help would be appreciated, thanks!

iPhone 4S, iPhone 5, iPad 2, iPad Retina are 32-bit devices, where Int
is a 32-bit integer. Therefore starting with
var power = 1
and then calling
power *= 2
32 times will overflow and cause an exception. In Swift, integer arithmetic does not silently "wrap around" as in (Objective-)C,
unless you explicitly use the "overflow operators" &*, &+ etc.
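For illustration, a minimal example of the difference (using Int32 so the behaviour is the same on 32-bit and 64-bit devices):
let x = Int32.max
// x + 1 would overflow: rejected at compile time when constant-folded,
// or trapping at runtime (the EXC_BAD_INSTRUCTION seen above) otherwise.
let wrapped = x &+ 1     // &+ wraps around to Int32.min, as C would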
Possible solutions:
Use Int64 instead of Int.
Avoid the final multiplication of power (whose result is not
needed).
Note that there are simpler methods to convert a string of binary
digits to a number, see for example How to convert a binary to decimal in Swift?.
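For instance, a minimal sketch of such a simpler method in modern Swift (assuming the input contains only binary digits and the value fits into Int):
func bin2dec(input: String) -> String {
    // Int(_:radix:) returns nil for invalid or out-of-range input;
    // falling back to "0" is just an arbitrary choice for this sketch.
    let value = Int(input, radix: 2) ?? 0
    return "\(value)"
}

bin2dec(input: "1010")   // "10"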

Related

Swift array accessors 5 times slower than native arrays - which are the recommended ones?

I'm doing loops on big arrays (images), and through Instruments I found out that the major bottleneck was Array.subscript.nativePinningMutableAddressor, so I made these unit tests to compare:
// average: 0.461 seconds (iPhone 6, iOS 10.2) ~5.8 times slower than native arrays
func testArrayPerformance() {
    self.measure {
        var array = [Float](repeating: 1, count: 2048 * 2048)
        for i in 0..<array.count {
            array[(i+1)%array.count] = Float(i)
        }
    }
}

// average: 0.079 seconds
func testNativeArrayPerformance() {
    self.measure {
        let count = 2048 * 2048
        let array = UnsafeMutablePointer<Float>.allocate(capacity: count)
        for i in 0..<count {
            array[(i+1)%count] = Float(i)
        }
        array.deallocate(capacity: count)
    }
}
As you can see, the native array is much faster. Is there any other way to access the array faster? "Unsafe" doesn't sound "safe", but what would you guys do in this situation? Is there any other type of array that wraps a native one?
For a more complex example, you can follow the comments in this article: Rendering Text in Metal with Signed-Distance Fields
I re-implemented that example in Swift, and the original implementation took 52 seconds to start up, https://github.com/endavid/VidEngine/tree/textprimitive-fail
After switching to native arrays, I went down to 10 seconds, https://github.com/endavid/VidEngine/tree/fontatlas-array-optimization
Tested on Xcode 8.3.3.
Edit1:
The timings for this test are in Debug configuration, but the timings for the Signed Distance Fields example are in Release configuration. Thanks for the micro-optimizations (count, initialization) for the unit tests in the comments, but in the real world example those are negligible and the memory buffer solution is still 5 times faster on iOS.
Edit2:
Here are the timings (Instruments session on an iPhone 6) of the most expensive functions in the Signed Distance Fields example, first using Swift arrays and then using memory buffers.
Edit3: Apart from performance issues, I had severe memory problems using Swift arrays. NSKeyedArchiver would run out of memory and crash the app. I had to use the byte buffer instead and store it in an NSData. Ref. commit: https://github.com/endavid/VidEngine/commit/6c1822523a2b18759f294def3188755eaaf98b41
So I guess the answer to my question is: for big arrays of numeric data (e.g. images), better use memory buffers.
Simply caching the count improved the speed from 0.2s to 0.14s, which is still twice the time the pointer-based code takes. This is entirely expected, given that the array-based code preinitializes all elements to 1.
I decided to test the uninitialized Array performance on my 2014 MacBook Pro:
// average: 0.315 seconds (macOS Sierra 10.12.5)
func testInitializedArrayPerformance() {
    self.measure {
        var array = [Float](repeating: 1, count: 2048 * 2048)
        for i in 0..<array.count {
            array[(i+1)%array.count] = Float(i)
        }
    }
}

// average: 0.043 seconds (macOS Sierra 10.12.5)
func testUninitializedArrayPerformance() {
    self.measure {
        var array: [Float] = []
        array.reserveCapacity(2048 * 2048)
        array.append(0)
        for i in 0..<(2048 * 2048) {
            array.append(Float(i))
        }
        array[0] = Float(2048 * 2048 - 1)
    }
}

// average: 0.077 seconds (macOS Sierra 10.12.5)
func testNativeArrayPerformance() {
    self.measure {
        let count = 2048 * 2048
        let array = UnsafeMutablePointer<Float>.allocate(capacity: count)
        for i in 0..<count {
            array[(i+1)%count] = Float(i)
        }
        array.deallocate(capacity: count)
    }
}
This confirms that the array initialization is causing a big performance hit.
As mentioned by Alexander, UnsafeMutablePointer is not a native array, it's just a pointer operation.
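As a side note, newer Swift versions (5.1 and later) provide Array(unsafeUninitializedCapacity:initializingWith:), which should avoid both the up-front fill and the append bookkeeping; a hedged sketch (not measured here):
let count = 2048 * 2048
let array = [Float](unsafeUninitializedCapacity: count) { buffer, initializedCount in
    // Fill the raw buffer directly; no element is initialized twice.
    for i in 0..<count {
        buffer[i] = Float(i)
    }
    initializedCount = count
}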
Testing on iPhone 7+/iOS 10.3.2, in equivalent conditions (both initialized) with a Release build:
// 0.030, 0.027, 0.017, 0.027, 0.024 -> avg 0.025
func testArrayPerformance2() {
    self.measure {
        let count = 2048 * 2048
        var array = [Float](repeating: 1, count: count)
        for i in 0..<count {
            array[(i+1)%count] = Float(i)
        }
    }
}

// 0.021, 0.022, 0.011, 0.021, 0.021 -> avg 0.0192
func testPointerOpPerformance2() {
    self.measure {
        let count = 2048 * 2048
        let array = UnsafeMutablePointer<Float>.allocate(capacity: count)
        array.initialize(to: 1, count: count)
        for i in 0..<count {
            array[(i+1)%count] = Float(i)
        }
        array.deinitialize(count: count)
        array.deallocate(capacity: count)
    }
}
Not a big difference: less than 2 times (about 1.3 times).
Generally, the Swift optimizer works well for Arrays that are:
Block-local variables
Private properties
Whole Module Optimization may also have an effect, but I have not tested it.
If your more complex example takes 5 times longer to start up, it may be written in a hard-to-optimize manner. (Please pick out the core parts affecting the performance and include them in your question.)
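As an additional note on the original question ("is there any other way to access the array faster?"): Array also provides withUnsafeMutableBufferPointer, which keeps the array's ownership and memory management but exposes pointer-style element access inside a closure. A minimal sketch:
var array = [Float](repeating: 1, count: 2048 * 2048)
array.withUnsafeMutableBufferPointer { buffer in
    // Inside the closure, buffer[i] avoids Array's copy-on-write
    // uniqueness check on every write.
    for i in 0..<buffer.count {
        buffer[(i + 1) % buffer.count] = Float(i)
    }
}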

Int to format %.2f returns unexpected number on iPhone 5

I'm passing 0 as an argument to String(format: "%.2f"). It works as expected on the iPhone 5s, SE, 6, 6s, etc. However, it stopped working on the iPhone 5. I guessed that it was a problem of 32-bit vs 64-bit systems, because %f formats a 64-bit floating-point number. Wrapping 0 with Double(0) worked; the result was 0.00.
Can someone explain this in more detail?
String(format:) uses the same conversion specifications as printf (with some additions like %@ for objects). In particular, the %f conversion expects a Double in the argument list, and passing anything else causes undefined behaviour: it may produce unexpected output or crash.
On a 64-bit platform, passing 0 may work by chance, because there Int is a 64-bit integer and thus has the same size as a Double. But even that is not guaranteed to work: passing an integer argument instead of the expected floating-point number is still undefined behaviour.
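A minimal illustration of the fix described above (the value 0 here is just an example):
let n = 0
let s = String(format: "%.2f", Double(n))   // "0.00" on both 32-bit and 64-bit devices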
Alternatively, you can wrap Swift's built-in round function for more consistent behavior:
// Rounds the given value to a specified number of decimal places
func round(_ value: Double, toDecimalPlaces places: Int) -> Double {
    let divisor = pow(10.0, Double(places))
    return round(value * divisor) / divisor
}
Example:
round(52.3761, toDecimalPlaces: 3) // 52.376
round(52.3761, toDecimalPlaces: 2) // 52.38

Getting weird value in Double

Hello, I made a "clicker" as a first project while learning Swift. I have an automated timer that is supposed to subtract some numbers from other numbers, but sometimes I get values like 0.600000000000001 and I have no idea why.
Here is my "attack" function that removes 0.2 from the health of a zombie:
let fGruppenAttackTimer = NSTimer.scheduledTimerWithTimeInterval(1, target: self, selector: Selector("fGruppenAttackTime"), userInfo: nil, repeats: true)

func fGruppenAttackTime() {
    zHealth -= 0.2
    if zHealth <= 0 {
        zHealth = zSize
        pPengar += pPengarut
    }
    ...
}
And here is my attackZ button action that is supposed to remove 1 from the health of the zombie:
@IBAction func attackZ(sender: UIButton) {
    zHealth -= Double(pAttack)
    fHunger -= 0.05
    fGruppenHunger.progress = Float(fHunger / 100)
    Actionlbl.text = ""
    if zHealth <= 0 {
        zHealth = zSize
        pPengar += pPengarut
    }
}
Lastly, here are the variables' values:
var zHealth = 10.0
var zSize = 10.0
var pAttack = 1
var pPengar = 0
var pPengarut = 1
When the timer is on and the function is running and I click the button, I sometimes get weird values like 0.600000000000001, and if I change the 0.2 in the function to 0.25 I sometimes get 0.0999999999999996. I wonder why this happens and what I can do about it.
In trojanfoe's answer, he shares a link that describes the source of the problem regarding rounding of floating point numbers.
In terms of what to do, there are a number of approaches:
You can shift to integer types. For example, if your existing values can all be represented with a maximum of two decimal places, multiply those by 100 and then use Int types everywhere, excising the Double and Float representations from your code.
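For instance, a hedged sketch of this integer approach applied to the question's clicker, scaling health by 100 so 0.2 becomes 20 (zHealth is the question's variable; attackPerTick is a hypothetical name for the per-tick damage):
var zHealth = 1000                // 10.00 health, stored in hundredths
let attackPerTick = 20            // 0.20 per timer tick, in hundredths
zHealth -= attackPerTick          // exact integer arithmetic, no drift
let display = String(format: "%.2f", Double(zHealth) / 100.0)   // "9.80"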
You can simply deal with the very small variations that Double type introduces. For example:
If displaying the results in the UI, use NumberFormatter to convert the Double value to a String using a specified number of decimal places.
let formatter = NumberFormatter()
formatter.maximumFractionDigits = 2
formatter.minimumFractionDigits = 0 // or you might use `2` here, too
formatter.numberStyle = .decimal
print(formatter.string(for: value)!)
By the way, NumberFormatter enjoys another benefit, too: it honors the user's localization settings. For example, if the user lives in Germany, where the decimal separator is a , rather than a ., NumberFormatter will use the user's native number formatting.
When testing to see whether a number is equal to some value, rather than just using the == operator, look at the difference between the two values and see whether it falls within some permissible rounding threshold.
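A minimal sketch of such a tolerance-based comparison (the threshold 0.0001 is an arbitrary choice for illustration):
let a = 0.1 + 0.2
let b = 0.3
let epsilon = 0.0001
let isEqual = abs(a - b) < epsilon   // true, even though a == b is false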
You can use Decimal/NSDecimalNumber, which doesn't suffer from rounding issues when dealing with decimals:
var value = Decimal(string: "1.0")!
value -= Decimal(string: "0.9")!
value -= Decimal(string: "0.1")!
Or:
var value = Decimal(1)
value -= Decimal(sign: .plus, exponent: -1, significand: 9)
value -= Decimal(sign: .plus, exponent: -1, significand: 1)
Or:
var value = Decimal(1)
value -= Decimal(9) / Decimal(10)
value -= Decimal(1) / Decimal(10)
Note that I explicitly avoid using any Double values, such as Decimal(0.1), because creating a Decimal from a fractional Double only captures whatever imprecision the Double entails, whereas the three examples above avoid that entirely.
It's because of floating point rounding errors.
For further reading, see What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Squeezing infinitely many real numbers into a finite number of bits
requires an approximate representation. Although there are infinitely
many integers, in most programs the result of integer computations can
be stored in 32 bits. In contrast, given any fixed number of bits,
most calculations with real numbers will produce quantities that
cannot be exactly represented using that many bits. Therefore the
result of a floating-point calculation must often be rounded in order
to fit back into its finite representation. This rounding error is the
characteristic feature of floating-point computation.

Overflow when converting from UInt32 to Int

I'm having a really weird issue with Swift/Xcode (not really sure where the source lies, to be honest).
I have the following code:
extension Int {
    func random(min: Int = 0, max: Int = Int(UInt32.max - 1)) -> Int {
        return min + Int(arc4random_uniform(UInt32(max - min + 1)))
    }
}
When I build this code in Xcode, it works perfectly fine. When I try to build it using xcodebuild though, the compiler gives me the following error:
integer overflows when converted from 'UInt32' to 'Int'
public static func random(min : Int = 0, max : Int = Int(UInt32.max - 1)) -> Int {
Which is weird, since the values of Int.max and UInt32.max are nowhere close.
I'm using Xcode 7.0 beta 5 for compilation, if that is any help... because I'm absolutely stumped.
That error occurs if you compile for a 32-bit device (e.g. the iPhone 5), because Int is then a signed 32-bit integer, and UInt32.max - 1 is outside of its range.
Another problem is the calculation of UInt32(max - min + 1),
which can crash at runtime due to an overflow, e.g. if you call
random(min : Int.min, max : Int.max)
See How can I generate large, ranged random numbers in Swift? for a possible solution to avoid
overflows when generating random numbers for arbitrary ranges.
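As an aside, a minimal sketch of how this can be written in modern Swift (4.2 and later), where Int.random(in:) handles the range arithmetic without overflow:
extension Int {
    static func random(min: Int = 0, max: Int = Int.max - 1) -> Int {
        // Int.random(in:) works for the full Int range, so no UInt32 conversion is needed.
        return Int.random(in: min...max)
    }
}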

Is there an API in Swift to determine processor register size of iOS device?

I am displaying to the user whether their device is 32-bit or 64-bit. Currently, I am determining that based on the value of UInt.max:
let cpuRegisterSize = count(String(UInt.max, radix: 2))
It feels a little hacky, so I was wondering if there was an API in Swift that returns that value instead. UIDevice doesn't seem to hold that information, from what I can tell.
MemoryLayout<Int>.size == MemoryLayout<Int32>.size ? 32 : 64
On 32-bit devices CGFloat is Float; on 64-bit devices CGFloat is Double. So you can use CGFLOAT_IS_DOUBLE to detect the current architecture.
let bit = 32 * (CGFLOAT_IS_DOUBLE + 1)
You can also use sizeof(Int):
let bit = sizeof(Int) == sizeof(Int64) ? 64 : 32
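A minimal sketch wrapping the MemoryLayout approach into a small helper (the name registerBitWidth is hypothetical):
let registerBitWidth = MemoryLayout<Int>.size * 8   // 32 on 32-bit devices, 64 on 64-bit devices
In Swift 4 and later, Int.bitWidth reports the same number directly.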
