Say there are two variables:
let number1: UInt8 = 100
let number2: UInt8 = 100
You add and print them:
print(number1 + number2) // This prints 200
Now define one more:
let number3: UInt8 = 200
And try to add them:
print(number1 + number3) // Error: execution was interrupted
I understand that the sum of number1 and number3 is out of range for UInt8, but explicit casting does not help either; for example, the following line gives the same error:
print(UInt8(number1 + number3)) // Error: execution was interrupted
The way I found was to do the following:
print(Int(number1) + Int(number3))
Is there a better way of adding UInt8 numbers when their sum goes out of range?
Girish K Gupta,
UInt8 has the range 0 to 255, which you can check using UInt8.min and UInt8.max. Basically 0 to 2^8 - 1.
The issue with print(number1 + number3) is that the sum would be 300, and 300 is greater than 255, hence the crash.
When you add two UInt8 values, the result is by default also a UInt8, hence the crash.
Finally, when you write Int(number1) + Int(number3), you are explicitly converting number1 and number3 to Int.
The range of Int depends on the platform you are running on, 32-bit or 64-bit; for example, on a 32-bit platform it is -2,147,483,648 to 2,147,483,647.
When you add an Int to an Int, the result is an Int, and believe me, 300 is well inside that range :)
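A minimal sketch of the behaviour described above (the variable names a and b are mine):
let a: UInt8 = 100
let b: UInt8 = 200
print(UInt8.min, UInt8.max) // 0 255
// a + b is computed as a UInt8, so 300 cannot be represented and the program traps.
// Converting each operand to Int first gives the addition enough room:
print(Int(a) + Int(b)) // 300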
As for your question, is there a better way to do it?
Apple's docs clearly instruct you to use Int rather than UInt8, UInt32, or even UInt64, unless using one of them is absolutely essential.
Here is the quote from Apple's documentation:
“Use UInt only when you specifically need an unsigned integer type
with the same size as the platform’s native word size. If this is not
the case, Int is preferred, even when the values to be stored are
known to be non-negative. A consistent use of Int for integer values
aids code interoperability, avoids the need to convert between
different number types, and matches integer type inference,”
Excerpt From: Apple Inc. “The Swift Programming Language (Swift 2.2).”
iBooks. https://itun.es/in/jEUH0.l
So the best thing for you to do is follow Apple's instruction: change number1, number2 and number3 to Int. Problem solved, and no crash :)
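In other words, a minimal sketch of the suggested fix (same variable names as the question):
let number1 = 100 // inferred as Int
let number2 = 100 // inferred as Int
let number3 = 200 // inferred as Int
print(number1 + number3) // 300, no overflow and no crash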
As you've said, converting both UInt8 variables to Int avoids the overflow trap, since the resulting Int has room to fit the sum.
To avoid converting the variables for every operation, we would like to overload the + operator like this:
func + (left: UInt8, right: UInt8) -> Int {
return Int(left) + Int(right)
}
However, this gives us a compiler error, as the + operator is already defined for adding two UInt8 values.
What we could do instead is define a custom operator, say ^+, that adds two UInt8 values but returns the sum as an Int, like so:
infix operator ^+ { associativity left precedence 140 }
func ^+ (left: UInt8, right: UInt8) -> Int {
return Int(left) + Int(right)
}
Then we can use it in our algorithms:
print(number1 ^+ number3) // Prints 300
If, however, you want the result to simply wrap around, you can use the overflow operators from the standard library:
print(number1 &+ number3) // Prints 44
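For completeness, a short sketch of how the wrapping addition behaves at the boundary (example values of my own):
let x: UInt8 = 255
print(x &+ 1) // 0 -- wraps around past UInt8.max
print(x &+ 2) // 1
// 100 &+ 200 == 44 because 300 mod 256 == 44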
Related
I'm passing 0 as an argument to String(format: "%.2f"). It works as expected on iPhone 5s, SE, 6, 6s, etc. ... However, it stopped working on iPhone 5. I guessed that it was a problem of 32-bit vs. 64-bit systems, because %f formats a 64-bit floating-point number. Wrapping the 0 in Double(0) made it work; the result was 0.00.
Can someone explain it in more detail?
String(format:) uses the same conversion specifications as printf (with some additions, such as %@ for objects). In particular, the %f conversion expects a Double on the argument list, and passing anything else causes undefined behaviour: it may produce unexpected output or crash.
On a 64-bit platform, passing 0 may work by chance, because there Int is a 64-bit integer and thus has the same size as a Double. But even that is not guaranteed to work: passing an integer argument instead of the expected floating-point number is still undefined behaviour.
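So the safe form on any platform is to pass a Double explicitly; a small sketch (not from the original answer):
import Foundation

let value = 0
// Convert to Double so the argument matches the %f conversion on both 32-bit and 64-bit platforms.
let text = String(format: "%.2f", Double(value))
print(text) // "0.00"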
For more predictable behaviour you can also do the rounding in Swift itself, with a small helper built on the standard round function:
import Foundation

// Round the given value to a specified number of decimal places.
func round(_ value: Double, toDecimalPlaces places: Int) -> Double {
    let divisor = pow(10.0, Double(places))
    return round(value * divisor) / divisor
}
Example:
round(52.3761, toDecimalPlaces: 3) // 52.376
round(52.3761, toDecimalPlaces: 2) // 52.38
I'm having a really weird issue with Swift/Xcode (not really sure where the source lies, to be honest).
I have the following code:
import Foundation

extension Int {
    static func random(min: Int = 0, max: Int = Int(UInt32.max - 1)) -> Int {
        return min + Int(arc4random_uniform(UInt32(max - min + 1)))
    }
}
When I build this code in Xcode, it works perfectly fine. When I try to build it using xcodebuild though, the compiler gives me the following error:
integer overflows when converted from 'UInt32' to 'Int'
public static func random(min : Int = 0, max : Int = Int(UInt32.max - 1)) -> Int {
Which is weird, since the values of Int.max and UInt32.max are nowhere close.
I'm using Xcode 7.0 beta 5 for compilation, if that helps... because I'm absolutely stumped.
That error occurs if you compile for a 32-bit device (e.g. the iPhone 5), because Int is then a signed 32-bit integer, and UInt32.max - 1 is outside of its range.
Another problem is the calculation UInt32(max - min + 1), which can crash at runtime due to an overflow, e.g. if you call
random(min: Int.min, max: Int.max)
See How can I generate large, ranged random numbers in Swift? for a possible solution to avoid overflows when generating random numbers for arbitrary ranges.
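If a newer toolchain is an option, Swift 4.2 and later provide Int.random(in:), which handles arbitrary ranges without any manual UInt32 arithmetic; a minimal sketch:
// No risk of the UInt32(max - min + 1) overflow: the standard library does the range handling.
let anywhere = Int.random(in: Int.min...Int.max)
let dice = Int.random(in: 1...6)
print(anywhere, dice)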
Let's say I have a number like 134658 and I want the 3rd digit (hundreds place) which is "6".
What's the shortest length code to get it in Objective-C?
This is my current code:
int theNumber = 204398234;
int theDigitPlace = 3; // hundreds place
int theDigit = (int)floorf((float)((10)*((((float)theNumber)/(pow(10, theDigitPlace)))-(floorf(((float)theNumber)/(pow(10, theDigitPlace)))))));
// Returns "2"
There are probably better solutions, but this one is slightly shorter:
int theNumber = 204398234;
int theDigitPlace = 3; // hundreds place
int theDigit = (theNumber/(int)(pow(10, theDigitPlace - 1))) % 10;
In your case, it divides the number by 100 to get 2043982 and then "extracts"
the last decimal digit with the "remainder operator" %.
Remark: The solution assumes that the result of pow(10, theDigitPlace - 1) is
exact. This works because double has about 16 significant decimal digits and int on iOS
is a 32-bit number and has at most 10 decimal digits.
How about good old C?
int theNumber = 204398234;
char output[20]; // Create a string bigger than any number we might get.
sprintf(output, "%d", theNumber);
int theDigit = output[strlen(output)-3]-'0'; // 3rd digit from the right; the index is zero-based.
That's really only 2 executable lines.
Yours is only 1 line, but that's a nasty, hard-to-understand expression you've got there, and it uses very slow transcendental math.
Note: Fixed to take the 3rd digit from the right instead of the 3rd from the left. (Thanks @Maddy for catching my mistake.)
Another solution that uses integer math, and a single line of code:
int theNumber = 204398234;
int result = (theNumber/100) % 10;
This is likely the fastest solution proposed yet.
It shifts the hundreds place down into the 1s place, then uses modulo arithmetic to get rid of everything but the lowest-order decimal digit.
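For comparison, the same integer arithmetic expressed in Swift, wrapped in a small helper of my own (not part of the original answers):
// Returns the digit at the given 1-based decimal place (1 = ones, 2 = tens, 3 = hundreds, ...).
func digit(of number: Int, atPlace place: Int) -> Int {
    var shifted = number
    for _ in 1..<place {
        shifted /= 10 // integer division shifts the wanted digit toward the ones place
    }
    return shifted % 10
}

print(digit(of: 204398234, atPlace: 3)) // 2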
I came across a bug with 64-bit processors that I wanted to share.
CGFloat test1 = 0.58;
CGFloat test2 = 0.40;
CGFloat value;
value = fmaxf( test1, test2 );
The result would be:
value = 0.5799999833106995
This is obviously a rounding issue, but if you needed to check which value was picked you would get an erroneous result.
if( test1 == value ){
// do something
}
However, if you use either MIN(A, B) or MAX(A, B) it works as expected.
I thought this was worth sharing.
Thanks
This has nothing to do with a bug in fminf or fmaxf. There is a bug in your code.
On 64-bit systems, CGFloat is typedef'd to double, but you're using the fmaxf function, which operates on float (not double); this causes its arguments to be rounded to single precision, changing the value. Don't do that.
On 32-bit systems, this doesn't happen because CGFloat is typedef'd to float, matching the argument and return type of fmaxf; no rounding occurs.
Instead, either include <tgmath.h> and use fmax without the f suffix, or use float instead of CGFloat.
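For comparison, a small Swift sketch (not from the original answer) showing that taking the maximum at the matching precision compares equal to the original value:
import CoreGraphics

let test1: CGFloat = 0.58
let test2: CGFloat = 0.40

// Swift's generic max(_:_:) works on CGFloat directly, so nothing is narrowed to Float.
let value = max(test1, test2)

if test1 == value {
    print("picked test1") // this branch is taken: value is exactly test1
}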
What is the maximum value for a UInt32?
Is there a way I can use the sizeof operator to get the maximum value (as it is unsigned)? So I don't end up with #defines or magic numbers in my code.
There's a macro UINT32_MAX defined in stdint.h which you can use:
#include <stdint.h>
uint32_t max = UINT32_MAX;
More about the relevant header <stdint.h>:
http://pubs.opengroup.org/onlinepubs/009695299/basedefs/stdint.h.html
The maximum value for UInt32 is 0xFFFFFFFF (or 4294967295 in decimal).
sizeof(UInt32) would not return the maximum value; it would return 4, the size in bytes of a 32 bit unsigned integer.
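For reference, the same facts are easy to check from Swift (a small sketch; MemoryLayout assumes a Swift 3 or later toolchain):
print(UInt32.max) // 4294967295, i.e. 0xFFFFFFFF
print(MemoryLayout<UInt32>.size) // 4 -- the size in bytes, not the maximum value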
The portable way:
std::numeric_limits<uint32_t>::max()
Just set the max using standard hexadecimal notation and then check it against whatever you need. 32 bits is 8 hexadecimal digits, so it'd be like this:
let myMax: UInt32 = 0xFFFFFFFF
if myOtherNumber > myMax {
// resolve problem
}
4_294_967_295 is the maximum value, or 0xFFFFFFFF in hexadecimal.
An alternative for any unsigned in C or C++ is:
anUnsigned = -1;
This is useful since it works for all of them, so if you change from unsigned int to unsigned long you don't need to go through your code. You will also see it used in a lot of bit-fiddling code:
anUnsigned |= -(aBoolOrConditionThatWhenTrueCausesAnUnsignedToBeSetToAll1s)
anUnsigned |= -(!aValueThatWhenZeroCausesAnUnsignedToBeSetToAll1s)
anUnsigned |= -(!!aValueThatWhenNonZeroCausesAnUnsignedToBeSetToAll1s)
The downside is that it looks odd, assigning a negative number to an unsigned!
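For completeness, the closest equivalents in Swift (a small sketch of my own, not from the original answer; Swift does not accept the -1 trick directly):
// Swift traps instead of converting, so `let x: UInt32 = -1` does not even compile.
let a = UInt32.max // 4294967295
let b: UInt32 = ~0 // bitwise NOT of zero: 0xFFFFFFFF
let c = UInt32(bitPattern: Int32(-1)) // reinterpret the two's-complement bit pattern
assert(a == b && b == c)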