Do you know what the maximum value of a process ID is on both 32-bit and 64-bit QNX systems, and whether there is a header where it is defined?
Thank you.
The maximum value for a pid_t in QNX is the largest positive integer representable in that type. For QNX 6.x, where pid_t == int32_t, that would be INT_MAX from <limits.h>. I don't have QNX 7.x handy to check, so you'll have to check the definition (try <sys/target_nto.h>) to find out what's being used there.
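For example, a minimal check you could compile on the target (assuming pid_t really is a plain 32-bit signed integer, as on QNX 6.x; on other versions the sizeof line will tell you what you actually have):

#include <limits.h>
#include <stdio.h>
#include <sys/types.h>   /* pid_t */

int main(void)
{
    /* On QNX 6.x pid_t is an int32_t, so its largest value is INT_MAX. */
    printf("sizeof(pid_t) = %zu bytes\n", sizeof(pid_t));
    printf("INT_MAX       = %d\n", INT_MAX);
    return 0;
}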
Swift's Int type has different maximum and minimum values that can be assigned, depending on whether the execution environment is 32-bit or 64-bit.
In a 32-bit execution environment, the range would be -2_147_483_648 to 2_147_483_647.
This is the same range as the Int32 type.
In a 64-bit execution environment, the range is -9_223_372_036_854_775_808 to 9_223_372_036_854_775_807.
This is the same range as the Int64 type.
I am currently working on an app that targets iOS 13 or later.
According to my research, all iPhones, iPod touches, and iPads that can install iOS 13 are 64-bit execution environments.
Apple Silicon Macs that can run iOS apps are also 64-bit environments.
Then can I write a program that assumes that the range of type Int is the same as that of type Int64?
Specifically, can I freely assign values that would crash in a 32-bit environment (for example, values larger than 2_147_483_647) to Int variables?
Or should I not write such a program?
(I used a translation tool to ask this question.)
Require iOS 13 and just use Ints. To assert the range of Int is the same as the range of Int64:
assert(Int.max == Int64.max && Int.min == Int64.min)
In iOS 11 and later, all apps use the 64-bit architecture.
AFAICT wchar_t is always 32 bits wide on Apple targets.
What is the sign of wchar_t on:
x86 Apple Darwin (32-bit MacOSX)
x86_64 Apple Darwin (64-bit MacOSX)
ARM iOS (32-bit)
AArch64 iOS (64-bit)
?
ISO/IEC 9899:2017 §7.20.3/4:
If wchar_t (see 7.19) is defined as a signed integer type, the value of WCHAR_MIN shall be no greater than −127 and the value of WCHAR_MAX shall be no less than 127; otherwise, wchar_t is defined as an unsigned integer type, and the value of WCHAR_MIN shall be 0 and the value of WCHAR_MAX shall be no less than 255.
So looking at WCHAR_MIN will tell you.
iOS ABI Function Call Guide:
In iOS, as with other Darwin platforms, both char and wchar_t are signed types.
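A quick way to confirm this on any given target is to look at WCHAR_MIN, as noted above; a small sketch using only the standard <wchar.h> macros:

#include <stdio.h>
#include <wchar.h>   /* WCHAR_MIN, WCHAR_MAX */

int main(void)
{
    /* Per the standard, WCHAR_MIN is 0 exactly when wchar_t is unsigned. */
    if (WCHAR_MIN == 0)
        printf("wchar_t is unsigned, max = %lu\n", (unsigned long)WCHAR_MAX);
    else
        printf("wchar_t is signed, range = %ld .. %ld\n",
               (long)WCHAR_MIN, (long)WCHAR_MAX);
    return 0;
}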
I'm writing a library on top of OpenCV, and I have a question about cross-platform support.
My question is: does OpenCV run if the int size is something other than 32 bits, say 16, 64, or 128? If it does, I'd like to support those platforms; otherwise it would simplify my high-level interfaces. I didn't find any information, and I'm not familiar enough with C++ to solve this conundrum myself by looking at the sources.
The int type is assumed to be 4 bytes wide: https://github.com/opencv/opencv/blob/master/modules/core/include/opencv2/core/hal/interface.h
Bit shifts and masks will definitely break with a 16-bit int.
The generic 128-bit SIMD C++ implementation assumes a 128-bit register = 4 x int values.
Some algorithms use the SoftFloat library with the assumption that int = int32_t (example).
The type identifier for int matrices is named CV_32S.
...?
So the answer is "No".
Thanks to #mshabunin
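As a side note, if you want your own library to fail early on such a platform rather than trip over those assumptions at runtime, a compile-time guard along these lines (just a sketch, not something OpenCV itself provides) would do:

#include <limits.h>

/* OpenCV's code base assumes a 32-bit int; refuse to build otherwise. */
_Static_assert(sizeof(int) * CHAR_BIT == 32,
               "a 32-bit int is required, matching OpenCV's assumption");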
I am trying to comprehend how development is affected when developing for both 32-bit and 64-bit architectures. From what I have researched thus far, I understand an int is always 4 bytes regardless of the architecture of the device running the app. But an NSInteger is 4 bytes on a 32-bit device and 8 bytes on a 64-bit device. I get the impression NSInteger is "safer" and recommended but I'm not sure what the reasoning is for that.
My question is, if you know the possible value you're using is never going to be large (maybe you're using it to index into an array of 200 items or store the count of objects in an array), why define it as an NSInteger? That's just going to take up 8 bytes when you won't use it all. Is it better to define it as an int in those cases? If so, in what case would you want to use an NSInteger (as opposed to int or long etc)? Obviously if you needed to utilize larger numbers, you could with the 64-bit architecture. But if you needed it to also work on 32-bit devices, would you not use long long because it's 8 bytes on 32-bit devices as well? I don't see why one would use NSInteger, at least when creating an app that runs on both architectures.
Also, I cannot think of a method that takes or returns a primitive int rather than NSInteger, and I am wondering if there is more to it than just the size of the values. For example, - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section. I'd like to understand why this is the case. Assuming it's possible to have a table with 2,147,483,647 rows, what would occur on a 32-bit device when you add one more - does it wrap around to -2,147,483,648? And on a 64-bit device it would be 2,147,483,648. (Why return a signed value? I'd think it should be unsigned, since you can't have a negative number of rows.)
Ultimately, I'd like to obtain a better understanding of actual use of these number data types, perhaps some code examples would be great!
I personally think that 64-bit is the real reason NSInteger and NSUInteger exist; they did not exist before 10.5. The two are simply defined as long in 64-bit builds and as int in 32-bit builds.
NSInteger/NSUInteger are defined as *dynamic typedefs* to one of these types, like this:
#if __LP64__ || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
Thus, use them in place of the more basic C types when you want the 'bit-native' size.
I suggest you read this link thoroughly.
CocoaDev has some more info.
For the proper format specifier to use for each of these types, see the String Programming Guide's section on Platform Dependencies.
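The guide's advice boils down to casting to long (or unsigned long) and using the long format specifiers, since NSInteger may be either int or long depending on the architecture. A minimal sketch in plain C terms, with MyInteger standing in for NSInteger:

#include <stdio.h>

/* Stand-in for NSInteger: long on LP64 (64-bit) builds, int on 32-bit builds. */
#if defined(__LP64__)
typedef long MyInteger;
#else
typedef int MyInteger;
#endif

int main(void)
{
    MyInteger rowCount = 42;
    /* Cast to long and use %ld so the same format string works on both. */
    printf("rowCount = %ld\n", (long)rowCount);
    return 0;
}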
I remember from attending an iOS developer conference that you have to pay attention to data types as of iOS 7. For example, if you use NSInteger on a 64-bit device and save it to iCloud, then sync to an older device (say, a 2nd-generation iPad), your app will not behave the same, because that device treats NSInteger as 4 bytes rather than 8, and your calculations will be wrong.
But so far I use NSInteger, because my apps mostly don't use iCloud or don't sync, and because it avoids compiler warnings.
Apple uses int where a variable is only a loop control variable (used just to control the loop iterations), because the int data type is fine there, both in size and in the values it can hold for your loop. There is no need for a platform-dependent data type; for a loop control variable, even a 16-bit int will do most of the time.
Apple uses NSInteger for a function return value or a function argument because in that case the data type size matters: a function communicates and passes data to other programs or other pieces of code.
Apple uses NSInteger (or NSUInteger) when passing a value as an argument to a function or returning a value from a function.
The only thing I would use NSInteger for is passing values to and from an API that specifies it. Other than that it has no advantage over an int or a long. At least with an int or a long you know what format specifiers to use in a printf or similar statement.
As a continuation of Irfan's answer:
sizeof(NSInteger)
equals the processor's word size. It is much simpler and faster for the processor to operate on whole words.
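To see that on a concrete machine, a small check (just a sketch; long is used as a stand-in because NSInteger is a long on LP64 Apple targets and an int on 32-bit ones):

#include <stdio.h>

int main(void)
{
    /* On 64-bit (LP64) Apple targets long and pointers are 8 bytes, matching
       NSInteger; on 32-bit targets they are 4 bytes, matching int. */
    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(long)  = %zu\n", sizeof(long));
    printf("sizeof(void*) = %zu  (pointer/word size)\n", sizeof(void *));
    return 0;
}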
What is the maximum value of an int in ChucK? Is there a symbolic constant for it?
New in the latest version!
<<<Math.INT_MAX>>>;
For reference though, it uses the "long" keyword in C++ to represent integers.
So on 32-bit computers the max should be 0x7FFFFFFF, or 2147483647.
On 64-bit computers it will be 0x7FFFFFFFFFFFFFFF, or 9223372036854775807.
Answer from Kassen and Stephen Sinclair on the chuck-users mailing list.
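Since ChucK's int is backed by a C++ long, the ceiling is whatever <limits.h> reports for long on your build; a tiny check in C (nothing ChucK-specific, just an illustration):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* ChucK's int maps to a C/C++ long: 2147483647 on 32-bit builds,
       9223372036854775807 on LP64 64-bit builds. */
    printf("LONG_MAX = %ld\n", LONG_MAX);
    return 0;
}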
The ChucK API reference uses the C int type, so the maximum value would depend on your local machine (2^31-1, around two billion on standard 32-bit x86). I don't see any references to retrieving limits, but if ChucK is extensible using C you could add a function that returns INT_MAX.