What is the maximum value for a UInt32?
Is there a way I can use the sizeof operator to get the maximum value (since it is unsigned)? That way I don't end up with #defines or magic numbers in my code.
There's a macro UINT32_MAX defined in <stdint.h> which you can use:
#include <stdint.h>
uint32_t max = UINT32_MAX;
More about the relevant header <stdint.h>:
http://pubs.opengroup.org/onlinepubs/009695299/basedefs/stdint.h.html
The maximum value for UInt32 is 0xFFFFFFFF (or 4294967295 in decimal).
sizeof(UInt32) would not return the maximum value; it would return 4, the size in bytes of a 32 bit unsigned integer.
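To illustrate the difference, here is a minimal C sketch (function names are illustrative, not from the question):

```c
#include <stddef.h>
#include <stdint.h>

/* sizeof(uint32_t) is 4 -- the size in bytes, not the maximum value. */
size_t uint32_size(void) { return sizeof(uint32_t); }

/* The maximum can be taken from the <stdint.h> macro... */
uint32_t max_from_macro(void) { return UINT32_MAX; }

/* ...or derived without a macro by setting every bit of the type. */
uint32_t max_all_bits(void) { return ~(uint32_t)0; }
```

Both approaches give 0xFFFFFFFF; sizeof only tells you the storage size.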
The portable way:
std::numeric_limits<uint32_t>::max()
Just set the max using standard hexadecimal notation and then check it against whatever you need. 32 bits is 8 hexadecimal digits, so it'd be like this:
let myMax: UInt32 = 0xFFFFFFFF
if myOtherNumber > myMax {
    // resolve problem
}
4_294_967_295 is the maximum value, or 0xFFFFFFFF in hexadecimal.
An alternative for any unsigned type in C or C++ is:
anUnsigned = -1;
This is useful since it works for all of them, so if you change from unsigned int to unsigned long you don't need to go through your code. You will also see this used in a lot of bit-fiddling code:
anUnsigned |= -(aBoolOrConditionThatWhenTrueCausesAnUnsignedToBeSetToAll1s)
anUnsigned |= -(!aValueThatWhenZeroCausesAnUnsignedToBeSetToAll1s)
anUnsigned |= -(!!aValueThatWhenNonZeroCausesAnUnsignedToBeSetToAll1s)
The downside is that it looks odd, assigning a negative number to an unsigned!
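A minimal C sketch of both idioms (the conversion of -1 to an unsigned type is well-defined: it wraps modulo 2^N, giving all 1s):

```c
#include <limits.h>

/* Assigning -1 to an unsigned type yields its maximum value. */
unsigned int unsigned_max(void) {
    unsigned int u = -1;
    return u; /* equals UINT_MAX */
}

/* Branch-free mask: ORs in all 1s when the condition is true,
   leaves the value unchanged otherwise. */
unsigned int apply_mask(unsigned int value, int condition) {
    value |= -(unsigned int)(condition != 0);
    return value;
}
```

The same two lines work unchanged if you switch to unsigned long or uint64_t.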
Related
Say there are two variables:
let number1 : UInt8 = 100;
let number2 : UInt8 = 100;
You add and print them
print(number1 + number2) //This prints 200
Now define one more
let number3 : UInt8 = 200;
And try to add now
print(number1 + number3) // Throws execution was interrupted
I understand that the sum of number1 and number3 would be out of the range of UInt8, but explicit conversion does not help either; for example, the following line gives the same error:
print(UInt8(number1 + number3)) // Throws execution was interrupted
The way I found was to do the following:
print(Int(number1) + Int(number3))
Is there a better way of adding UInt8 number when their sum goes out of range?
Girish K Gupta,
UInt8 has the range 0 to 255, which you can check using UInt8.min and UInt8.max. Basically 0 to 2^8 − 1.
The issue is that number1 + number3 would be 300, and 300 is greater than 255, hence the crash.
When you add two UInt8 values, the result is by default a UInt8, hence the crash.
Finally, when you write Int(number1) + Int(number3), you are explicitly converting number1 and number3 to Int.
When you use Int, the range of its values depends on the platform you are running on, either 32-bit or 64-bit. For example, its range is -2,147,483,648 to 2,147,483,647 on 32-bit.
When you add an Int to an Int, the result is an Int. And believe me, 300 is well inside that range :)
As for your question, is there a better way to do it? :)
Apple's docs clearly instruct you to use Int rather than UInt8, UInt32, or even UInt64, unless using UInt8, UInt32, or UInt64 is absolutely essential.
Here is the quote from Apple's doc :)
“Use UInt only when you specifically need an unsigned integer type
with the same size as the platform’s native word size. If this is not
the case, Int is preferred, even when the values to be stored are
known to be non-negative. A consistent use of Int for integer values
aids code interoperability, avoids the need to convert between
different number types, and matches integer type inference,”
Excerpt From: Apple Inc. “The Swift Programming Language (Swift 2.2).”
iBooks. https://itun.es/in/jEUH0.l
So the best thing for you to do :) is follow Apple's instruction :) Change number1, number2, and number3 to Int :) Problem solved :)
Hence no crash :)
As you've said, converting both UInt8 variables to Int avoids the default trap on overflow, as the resulting Int now has room to fit the sum.
To avoid casting the variables for every operation we would like to overload the operator like this:
func + (left: UInt8, right: UInt8) -> Int {
    return Int(left) + Int(right)
}
However, this will give us a compiler error, as the + operator is already defined for adding two UInt8's.
What we could do instead is define a custom operator, say ^+, to mean addition of two UInt8's but adding them as Int's, like so:
infix operator ^+ { associativity left precedence 140 }
func ^+ (left: UInt8, right: UInt8) -> Int {
    return Int(left) + Int(right)
}
Then we can use it in our algorithms:
print(number1 ^+ number3) // Prints 300
If, however, you want the result to simply overflow (wrap around), you can use the overflow operators from the standard library:
print(number1 &+ number3) // Prints 44
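The wrap-around behaviour of &+ is plain modulo-256 arithmetic; the same computation can be sketched in C (function names here are illustrative):

```c
#include <stdint.h>

/* uint8_t addition in C wraps modulo 256, like Swift's &+ operator. */
uint8_t wrapping_add_u8(uint8_t a, uint8_t b) {
    return (uint8_t)(a + b); /* 100 + 200 -> 300 mod 256 = 44 */
}

/* Widening first avoids the wrap, like converting to Int in Swift. */
int widening_add_u8(uint8_t a, uint8_t b) {
    return (int)a + (int)b; /* 100 + 200 -> 300 */
}
```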
Let's say I have a number like 134658 and I want the 3rd digit (hundreds place) which is "6".
What's the shortest length code to get it in Objective-C?
This is my current code:
int theNumber = 204398234;
int theDigitPlace = 3;//hundreds place
int theDigit = (int)floorf((float)((10)*((((float)theNumber)/(pow(10, theDigitPlace)))-(floorf(((float)theNumber)/(pow(10, theDigitPlace)))))));
//Returns "2"
There are probably better solutions, but this one is slightly shorter:
int theNumber = 204398234;
int theDigitPlace = 3;//hundreds place
int theDigit = (theNumber/(int)(pow(10, theDigitPlace - 1))) % 10;
In your case, it divides the number by 100 to get 2043982 and then "extracts"
the last decimal digit with the "remainder operator" %.
Remark: The solution assumes that the result of pow(10, theDigitPlace - 1) is
exact. This works because double has about 16 significant decimal digits and int on iOS
is a 32-bit number and has at most 10 decimal digits.
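If you want to avoid pow entirely, the divisor can be built with an integer loop; a small C sketch (the function name is made up for illustration):

```c
/* Extract the digit at the given 1-based place (1 = ones, 2 = tens, ...)
   using integer arithmetic only. */
int digit_at_place(int number, int place) {
    for (int i = 1; i < place; i++)
        number /= 10; /* shift the wanted digit into the ones place */
    return number % 10;
}
```

For 204398234 and place 3 this divides by 100 to get 2043982 and returns 2, matching the pow-based version.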
How about good old C?
int theNumber = 204398234;
char output[20]; //Create a string bigger than any number we might get.
sprintf(output, "%d", theNumber);
int theDigit = output[strlen(output)-3]-'0'; //3rd digit from the right; index is zero-based.
That's really only 2 executable lines.
Yours is only 1 line, but that's a nasty, hard-to-understand expression you've got there, and uses very slow transcendental math.
Note: Fixed to take the 3rd digit from the right instead of the 3rd from the left. (Thanks @Maddy for catching my mistake.)
Another solution that uses integer math, and a single line of code:
int theNumber = 204398234;
int result = (theNumber/100) % 10;
This is likely the fastest solution proposed yet.
It shifts the hundreds place down into the 1s place, then uses modulo arithmetic to get rid of everything but the lowest-order decimal digit.
I am making an app that downloads a 32-bit integer from a server, and uses the first 16 bits and second 16 bits for different purposes...
I am responsible for the second 16 bits, which should be used to form an int. I know I should use bitwise operations to do this, but I have been unable to get it working. Below is the code I am using; please give me more information.
//CID is a 32bit integer, in nslog it shows as 68913219 - its different for every user
Byte lowByte = (CID>>16)&0xFF; //the 3rd byte
Byte highByte = (CID>>24)&0xFF; //the 4th byte
uint16_t value = lowByte & highByte; //combine, but the result is 0.
uint16_t value = lowByte & highByte; //combine, but the result is 0.
This is not how you combine two bytes into a single uint16_t: you are ANDing them in place, while you need to shift the high byte, and OR it with the low byte:
uint16_t value = lowByte | (((uint16_t)highByte) << 8);
However, this is suboptimal in terms of readability: if you start with a 32-bit integer, and you need to cut out the upper 16 bits, you could simply shift by 16, and mask with 0xFFFF - i.e. the same way that you cut out the third byte, but with a 16-bit mask.
You shouldn't need to do this byte-by-byte. Try this:
//CID is a 32bit integer, in nslog it shows as 68913219 - its different for every user
uint16_t value = (CID>>16)&0xffff;
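Both routes give the same result; a C sketch using the CID value from the question (function names are illustrative):

```c
#include <stdint.h>

/* Combine the top two bytes of a 32-bit value into a uint16_t,
   byte-by-byte (as in the question, but with OR instead of AND). */
uint16_t upper16_bytewise(uint32_t cid) {
    uint8_t lowByte  = (cid >> 16) & 0xFF; /* 3rd byte */
    uint8_t highByte = (cid >> 24) & 0xFF; /* 4th byte */
    return (uint16_t)(lowByte | ((uint16_t)highByte << 8));
}

/* Or cut out the upper 16 bits in one shift-and-mask. */
uint16_t upper16_direct(uint32_t cid) {
    return (uint16_t)((cid >> 16) & 0xFFFF);
}
```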
You can also use NSData.
If you have this integer:
int i = 1;
You can use :
NSData *data = [NSData dataWithBytes: &i length: sizeof(i)];
and then, to get the second 16bit do:
int16_t recover;
[data getBytes: &recover range: NSMakeRange(2, 2)];
Then you have your 16 bits on "recover".
Maybe this way is not as efficient as bit operations, but it is clearer. (Note that which bytes land in that range depends on the platform's byte order.)
:D
The UITableView data source method numberOfSectionsInTableView: has a return type of NSInteger. However, a UITableView cannot have a negative amount of rows; it has 0 or greater rows, so why is the return type of NSInteger? Doesn't that allow for crashes relating to a negative integer being returned?
You can't do the check if (var < 0) return; with an unsigned integer, since the comparison is always false. That is the standard reason for preferring a signed type. Really, the only reason to use an unsigned integer is if you need the extra room for larger values, and you can guarantee the input will never be less than zero.
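A hedged C sketch of why the signed type keeps the error detectable (function names are made up for illustration):

```c
#include <limits.h>

/* If a negative value reaches an unsigned parameter, it silently wraps
   to a huge positive number -- there is nothing left to range-check. */
unsigned int receive_count(int n) {
    unsigned int count = (unsigned int)n; /* -5 becomes UINT_MAX - 4 */
    return count;
}

/* With a signed type the bad input is still visible and can be rejected. */
int valid_count(long n) {
    return n >= 0;
}
```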
It seems I cannot make C++/CLI structures be aligned with less than 8 bytes. I have a struct of two Int32s, allocate a million of them, and voilà: 16 MB of memory according to ".NET Memory Profiler" (plus the list data). I set the compiler option to /Zp4 (I also tried /Zp1), to Minimize Size (/O1) and Small Code (/Os), and just to make sure, I additionally put a "#pragma pack(1)" into my code, to no avail. My struct still takes up 16 bytes. I changed it to a class, still the same.
Why that?
How to change?
Ciao,
Eike
using namespace System;

#pragma pack(1)
ref struct myStruct
{
    Int32 a;
    Int32 b;
};

int main(array<System::String ^> ^args)
{
    System::Collections::Generic::List<myStruct^> list;
    for (int i = 0; i < 1000000; i++)
    {
        list.Add(gcnew myStruct());
    }
    // avoid optimization
    Console::WriteLine(list[333333]->a);
    return 0;
}
You need to use value types to be able to control the layout. Beyond that, I'm not sure this is the best way to measure it: reference types carry some small built-in per-object overhead, and your List stores handles to the objects rather than the structs themselves. Try value struct/value class instead.