AFAICT wchar_t is always 32 bits wide on Apple targets.
What is the sign of wchar_t on the following targets?
- x86 Apple Darwin (32-bit Mac OS X)
- x86_64 Apple Darwin (64-bit Mac OS X)
- ARM iOS (32-bit)
- AArch64 iOS (64-bit)
ISO/IEC 9899:2017 §7.20.3/4:
If wchar_t (see 7.19) is defined as a signed integer type, the value of WCHAR_MIN shall be no greater than −127 and the value of WCHAR_MAX shall be no less than 127; otherwise, wchar_t is defined as an unsigned integer type, and the value of WCHAR_MIN shall be 0 and the value of WCHAR_MAX shall be no less than 255.
So looking at WCHAR_MIN will tell you.
iOS ABI Function Call Guide:
In iOS, as with other Darwin platforms, both char and wchar_t are signed types.
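If you want to check a particular target yourself, here is a minimal C sketch (my own, not from the quoted sources) that reports the signedness and width of wchar_t:

#include <stdio.h>
#include <wchar.h>   /* defines WCHAR_MIN and WCHAR_MAX */

int main(void)
{
    /* Per the quoted standard text, WCHAR_MIN == 0 exactly when wchar_t is unsigned. */
    if (WCHAR_MIN == 0)
        printf("wchar_t is unsigned, %zu bytes wide\n", sizeof(wchar_t));
    else
        printf("wchar_t is signed, %zu bytes wide\n", sizeof(wchar_t));
    return 0;
}

On the Darwin targets listed above, the ABI documentation quoted here implies this would report a signed, 4-byte wchar_t.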
Does anyone know the maximum value of a process ID on both 32-bit and 64-bit QNX systems, and whether there is a header where it is defined?
Thank you.
The maximum value for a pid_t in QNX is the largest positive integer representable in that type. For QNX 6.x, where pid_t == int32_t, that would be INT_MAX from <limits.h>. I don't have QNX 7.x handy to check, so you'll have to check the definition (try <sys/target_nto.h>) to find out what's being used there.
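As a quick sanity check on a system you can actually run code on, here is a small C sketch (mine, not from the answer above; it assumes the POSIX <sys/types.h> definition of pid_t):

#include <limits.h>      /* INT_MAX */
#include <stdio.h>
#include <sys/types.h>   /* pid_t */

int main(void)
{
    /* The answer above says pid_t == int32_t on QNX 6.x; verify that the
       type really is a signed, int-sized integer before trusting INT_MAX. */
    if (sizeof(pid_t) == sizeof(int) && (pid_t)-1 < 0)
        printf("pid_t looks like a signed int; maximum value is %d\n", INT_MAX);
    else
        printf("pid_t is %zu bytes; check the system headers for its limits\n",
               sizeof(pid_t));
    return 0;
}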
Swift's Int type has different maximum and minimum values depending on whether the execution environment is 32-bit or 64-bit.
In a 32-bit execution environment, the range is -2_147_483_648 to 2_147_483_647, the same as the Int32 type.
In a 64-bit execution environment, the range is -9_223_372_036_854_775_808 to 9_223_372_036_854_775_807, the same as the Int64 type.
I am currently working on an app that targets iOS 13 or later.
According to my research, all iPhones, iPod touches, and iPads that can install iOS 13 are 64-bit execution environments.
Also, Apple Silicon Macs that can run iOS apps are also 64-bit environments.
So, can I write a program that assumes that the range of Int is the same as that of Int64?
Specifically, can I simply assign values that would crash in a 32-bit environment (for example, values larger than 2_147_483_647) to Int variables?
Or should I not write such a program?
(I used a translation tool to ask this question.)
Require iOS 13 and just use Ints. To assert the range of Int is the same as the range of Int64:
assert(Int.max == Int64.max && Int.min == Int64.min)
In iOS 11 and later, all apps use the 64-bit architecture.
On 64-bit platforms, Int is the same size as Int64, and on 32-bit platforms, Int is the same size as Int32.
Can this behavior be changed, i.e. can Int be forced to be 32 bits wide (the same as Int32) on 64-bit platforms?
The idea behind Int is that it reflects the native word size (32 bits on a 32-bit system and 64 bits on a 64-bit system).
If you really want a 32-bit int no matter what platform you're on then you use Int32.
If you really want a 64-bit int no matter what platform you're on then you use Int64.
To solve your problem be explicit and just use Int32 instead of Int.
There are multiple data types available in Swift for defining integers:
- Int, Int8, Int16, Int32, Int64
- UInt, UInt8, UInt16, UInt32, UInt64
You can use any of the above as per your requirement independent of whether you're using a 32-bit or 64-bit platform.
I have the following piece of code:
inc(integer(DestPixel), DestDelta); //DestPixel: PColorRGB; DestDelta: integer;
This works fine on 32-bit platforms, but if I switch the target platform to 64-bit, the compiler emits this error:
E2064 Left side cannot be assigned to
The problem seems to be in the integer() typecast. How can I fix the problem?
On the 64-bit platform, DestPixel is 8 bytes wide and Integer is 4 bytes, so the typecast is invalid. You can fix this problem by using NativeInt instead:
inc(NativeInt(DestPixel), DestDelta);
The NativeInt type is the same size as a pointer, so it is 4 or 8 bytes wide depending on the target platform.
Having said that, I personally would typecast with PByte, because that more correctly describes the operation you are performing:
inc(PByte(DestPixel), DestDelta);
I'd like to know how many bytes each of the following takes up in Delphi, and whether it is generally the same in most languages:
- 32-bit integer
- ASCII character (char in C++?)
- Pointer (4 bytes?)
- Short
- Float
Also, do the data types mentioned above have a constant size? I mean, are the integers 0, 4, 123 and 32231 all of the same size?
A 32-bit integer is ALWAYS four bytes, because 1 byte = 8 bits.
An Integer is a signed 32-bit integer, and a Cardinal is an unsigned 32-bit integer. These thus always occupy four bytes, irrespective of the value they represent. (In fact, it is an extremely important fact that simple types do have fixed widths -- low-level programming really depends on this! It is even a cornerstone part of how computers work.)
Smaller integer types are Smallint (16-bit signed), Word (16-bit unsigned) and Byte (8-bit unsigned). Larger integer types are Int64 (64-bit signed) and UInt64 (64-bit unsigned).
Char was a 1-byte AnsiChar prior to Delphi 2009; now it is a 2-byte WideChar.
Pointer is always 4 bytes, because Delphi currently creates 32-bit applications only. When it supports 64-bit applications, Pointer will become 8 bytes.
There are three common floating-point types in Delphi. These are Single, Double (=Real), and Extended. These occupy 4, 8, and 10 bytes, respectively.
To investigate the size of a given type, e.g. ShortInt, simply try
ShowMessage(IntToStr(SizeOf(ShortInt)))
Reference:
http://docwiki.embarcadero.com/RADStudio/en/Simple_Types
In C/C++, sizeof(char) is 1 byte, as required by the C and C++ standards.
In Delphi, SizeOf(Char) is version-dependent (1 byte for non-Unicode versions, 2 bytes for Unicode versions), so Char in Delphi is more like TCHAR in C/C++.
It may be different for different machines, so you can use the following code to determine the size of an integer (for example):
std::cout << "Integer size: " << sizeof(int);
I don't want to confuse you too much, but there's also an alignment issue: if you define a record like this, its layout will depend on the compiler settings:
type
  Test = record
    A: Byte;
    B: Pointer;
  end;
If compiled with {$A1}, SizeOf(Test) will end up as 5, while compiling it with {$A4} would give you 8 (at least, on current 32-bit Delphi versions, that is!).
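The same effect exists in C: as a rough analog of the {$A1}/{$A4} directives (my sketch, not part of the original answer), #pragma pack controls how an equivalent struct gets padded:

#include <stdio.h>

/* Default alignment: padding is inserted after 'a' so that the pointer
   member 'b' is naturally aligned. */
struct padded {
    char  a;
    void *b;
};

/* Packed to 1-byte boundaries, similar in spirit to Delphi's {$A1}. */
#pragma pack(push, 1)
struct packed {
    char  a;
    void *b;
};
#pragma pack(pop)

int main(void)
{
    printf("padded: %zu bytes, packed: %zu bytes\n",
           sizeof(struct padded), sizeof(struct packed));
    return 0;
}

On a 32-bit target this would typically print 8 and 5, mirroring the Delphi figures above.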
There are all sorts of little gotchas here, so I'd advise ignoring this for now and reading an article like this when the need arises ;-)