Is it possible on iOS to detect the device's current processor?
The project I am working on requires programmatically checking whether the processor is armv7 or armv7s.
I don't know for certain that this will provide the information you'd like, but I would try sysctl:
#include <sys/sysctl.h>   // for sysctlbyname()

int32_t value = 0;
size_t length = sizeof(value);
sysctlbyname("hw.cpusubtype", &value, &length, NULL, 0);
The values for the subtype are defined in mach/machine.h:
/*
* ARM subtypes
*/
#define CPU_SUBTYPE_ARM_ALL ((cpu_subtype_t) 0)
#define CPU_SUBTYPE_ARM_V4T ((cpu_subtype_t) 5)
#define CPU_SUBTYPE_ARM_V6 ((cpu_subtype_t) 6)
#define CPU_SUBTYPE_ARM_V5TEJ ((cpu_subtype_t) 7)
#define CPU_SUBTYPE_ARM_XSCALE ((cpu_subtype_t) 8)
#define CPU_SUBTYPE_ARM_V7 ((cpu_subtype_t) 9)
#define CPU_SUBTYPE_ARM_V7F ((cpu_subtype_t) 10) /* Cortex A9 */
#define CPU_SUBTYPE_ARM_V7K ((cpu_subtype_t) 12) /* Kirkwood40 */
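For example, a minimal sketch of an armv7s check built on that call might look like the following. (CPU_SUBTYPE_ARM_V7S is not shown in the excerpt above and may be missing from older SDK headers; its raw value is assumed to be 11 here, so verify it against your SDK.)
#include <sys/sysctl.h>
#include <mach/machine.h>

#ifndef CPU_SUBTYPE_ARM_V7S
#define CPU_SUBTYPE_ARM_V7S ((cpu_subtype_t) 11)   /* assumed value; check your SDK headers */
#endif

/* Returns 1 if the running CPU reports the armv7s subtype, 0 otherwise. */
static int isArmV7s(void)
{
    cpu_subtype_t subtype = 0;
    size_t length = sizeof(subtype);
    if (sysctlbyname("hw.cpusubtype", &subtype, &length, NULL, 0) != 0) {
        return 0;   /* sysctl failed; treat as "not v7s" */
    }
    return subtype == CPU_SUBTYPE_ARM_V7S;
}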
I'm not aware of any API to check this, but you could perhaps write one yourself by providing v7 and v7s assembler implementations for the same symbol that simply return true or false as required.
Assuming that the v7s implementation will be used if and only if the processor supports v7s, it should work.
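A C-level sketch of that idea, relying on the loader picking the best-matching slice of a fat binary and on clang's predefined architecture macro __ARM_ARCH_7S__ (an assumption worth verifying on your toolchain), could look like this:
/* arch_probe.c -- compiled into both the armv7 and armv7s slices.
   Whichever slice the loader selects determines which branch was built in,
   so this answers "which slice am I running from", not "what can the CPU do". */
#if defined(__ARM_ARCH_7S__)
int runningFromArmV7sSlice(void) { return 1; }
#else
int runningFromArmV7sSlice(void) { return 0; }
#endif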
This approach:
sysctlbyname("hw.cpusubtype", &value, &length, NULL, 0);
is good, but there is an easier way:
#include <mach-o/ldsyms.h>
int processorType = _mh_execute_header.cputype;
int processorSubtype = _mh_execute_header.cpusubtype;
Note, however, that this reports the CPU type and subtype the current binary slice was compiled for, not necessarily the CPU of the device it is running on.
You don't.
That is, it is not advisable to programmatically detect ARM architecture version support at runtime on iOS. This is the answer I got from an Apple engineer when I specifically asked this question, and given the pitfalls (e.g. what do you do when you get an unexpected value?), I can believe them.
(Yes, sometimes the correct answer to a "How do I?" question is "You don't.")
Related
Here is the full ShaderTypes.h file. I am going to break it down into the parts that I can't understand.
#ifndef ShaderTypes_h
#define ShaderTypes_h
//Part 1 Compiler flags
#ifdef __METAL_VERSION__
#define NS_ENUM(_type, _name) enum _name : _type _name; enum _name : _type
#define NSInteger metal::int32_t
#else
#import <Foundation/Foundation.h>
#endif
#include <simd/simd.h>
//Part 2 buffer index
typedef NS_ENUM(NSInteger, BufferIndex)
{
BufferIndexMeshPositions = 0,
BufferIndexMeshGenerics = 1,
BufferIndexUniforms = 2
};
//Part 3 vertex attribute and position
typedef NS_ENUM(NSInteger, VertexAttribute)
{
VertexAttributePosition = 0,
VertexAttributeTexcoord = 1,
};
//Part 4 texture index color
typedef NS_ENUM(NSInteger, TextureIndex)
{
TextureIndexColor = 0,
};
//Part 5 uniforms
typedef struct
{
matrix_float4x4 projectionMatrix;
matrix_float4x4 modelViewMatrix;
} Uniforms;
#endif /* ShaderTypes_h */
Part 1
#ifdef __METAL_VERSION__
#define NS_ENUM(_type, _name) enum _name : _type _name; enum _name : _type
#define NSInteger metal::int32_t
#else
#import <Foundation/Foundation.h>
#endif
//A bunch of definitions go after this
#endif
My confusion is mostly about why we are doing all of this. It seems to be checking whether we are compiling for Metal, but then what is this NS_ENUM it is defining? Why is it defining a Metal type? Why is Foundation conditionally imported?
Part 2
typedef NS_ENUM(NSInteger, BufferIndex)
{
BufferIndexMeshPositions = 0,
BufferIndexMeshGenerics = 1,
BufferIndexUniforms = 2
};
I'm not exactly sure what this is doing, especially since I can't find these values referenced explicitly anywhere.
Part 3
typedef NS_ENUM(NSInteger, VertexAttribute)
{
VertexAttributePosition = 0,
VertexAttributeTexcoord = 1,
};
This one has a somewhat clearer usage, because it is referenced in the .metal file
typedef struct
{
float3 position [[attribute(VertexAttributePosition)]];
float2 texCoord [[attribute(VertexAttributeTexcoord)]];
} Vertex;
as well as in the attribute section of the vertex descriptor code
mtlVertexDescriptor.attributes[VertexAttribute.position.rawValue].format = MTLVertexFormat.float3
mtlVertexDescriptor.attributes[VertexAttribute.position.rawValue].offset = 0
mtlVertexDescriptor.attributes[VertexAttribute.position.rawValue].bufferIndex = BufferIndex.meshPositions.rawValue
...
It appears to keep track of the indices of the various vertex attributes, rather like the buffer indices above.
Part 4
This one I sort of get, as it is referenced in the renderer here
renderEncoder.setFragmentTexture(colorMap, index: TextureIndex.color.rawValue)
as well as the shader here
fragment float4 fragmentShader(..., texture2d<half> colorMap [[ texture(TextureIndexColor) ]]){
I sort of get this one (minus the NS_ENUM part), but I don't see how it is good practice to do this for just one value.
Part 5
This one is actually the only one I understand: it looks like a struct for all the uniform components, which makes a lot of sense, as it stores the actual types of the uniforms and exposes the struct to both the shader and the renderer.
Ultimately I am wondering why this approach was taken and why Apple suggests it as a best practice. I suppose it sort of makes sense to do things this way, except that Metal appears to play better with Objective-C even though it looks like Swift.
This is a header that's meant to be shared by Metal shader code and app code (either Objective-C or, via bridging, Swift). Having to share it between those two languages requires a bit of care.
The #ifdef __METAL_VERSION__ test determines which language it's being compiled in. For Metal, a couple of macros are defined. For Objective-C, it imports Foundation. (Foundation can't, of course, be imported when compiling Metal code.)
In their (Objective-)C headers, Apple routinely declare enums using the NS_ENUM() macro. That macro is defined within the Foundation headers, so it can be used when this header is being compiled into app code. It's not defined in the Metal headers, so this header needs to define it for itself if it wants to use it (as it does). The Foundation version of the macro is useful because it detects compiler capabilities to use typed enums if available and conventional enums if not. In this header, Apple is implementing that same macro for the Metal shader language. Since there's no question whether the compiler supports typed enums (it does), there's only one definition of the macro which takes advantage of that feature.
Later in the header, they plan on using the NSInteger type. That's defined by the Foundation headers for app code. Since they want to use the same code for Metal but that type isn't defined there, they need to define it in this header. They make a couple of weird choices, though. First, they use a macro instead of a typedef. Second, they define it to be equivalent to int32_t (in the metal namespace). That's weird because the app code is going to be 64-bit (Metal isn't available to 32-bit apps) causing NSInteger to be a 64-bit integer type (equivalent to int64_t). So, the two worlds are going to disagree about the size of NSInteger and therefore all of the enums based on it. That's kind of bad, but probably doesn't cause a real issue given the ways these enums are actually used.
It would probably have been better to simply base the enums on int, which is 32-bit in all Apple environments.
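For illustration, a sketch of what that could look like (using a hypothetical BufferIndexAlt name so it doesn't clash with the real enum, and reusing the header's NS_ENUM machinery):
// Sketch only, not Apple's actual template: int is 32-bit in both the Metal
// and the 64-bit app environment, so both sides agree on the enum's size.
typedef NS_ENUM(int, BufferIndexAlt)
{
    BufferIndexAltMeshPositions = 0,
    BufferIndexAltMeshGenerics  = 1,
    BufferIndexAltUniforms      = 2
};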
Parts 2, 3, and 4 are all similar. It's generally good practice to use "symbolic" constants rather than just magic numbers (i.e. integer literals within the code). It is less error-prone and improves readability. These parts are simply defining some such symbolic constants for use by the Metal code and app code to share. The fact that some of these names aren't used in the particular sample project you're examining suggests that Apple uses this same header for multiple sample projects, or something like that.
I was looking at Apple's Lister (for Apple Watch, iOS, and OS X) sample. The sample performs a test for iOS and OS X:
#import <TargetConditionals.h>
#if (TARGET_OS_IPHONE || TARGET_IPHONE_SIMULATOR)
@import ListerKit;
#elif TARGET_OS_MAC
@import ListerKitOSX;
#endif
However, there is no test for TARGET_OS_WATCH or similar. Grepping for watch in TargetConditionals.h delivers no hits:
$ cat /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS7.1.sdk/usr/include/TargetConditionals.h | grep -i watch
$
From TargetConditionals.h, I know there are:
These conditionals specify in which Operating System the generated code will
run. The MAC/WIN32/UNIX conditionals are mutually exclusive. The EMBEDDED/IPHONE
conditionals are variants of TARGET_OS_MAC.
TARGET_OS_MAC - Generate code will run under Mac OS
TARGET_OS_WIN32 - Generate code will run under 32-bit Windows
TARGET_OS_UNIX - Generate code will run under some non Mac OS X unix
TARGET_OS_EMBEDDED - Generate code will run under an embedded OS variant
of TARGET_OS_MAC
TARGET_OS_IPHONE - Generate code will run under iPhone OS which
is a variant of TARGET_OS_MAC.
TARGET_IPHONE_SIMULATOR - Generate code for running under iPhone Simulator
Question: Is there a preprocessor for Apple's watch?
I'm tagging with ios, but I'm not sure that's the correct OS for this question.
The list below was compiled from iPhone's TargetConditionals.h. The Simulator and OS X are similar (they just have different bits set to 1):
#define TARGET_OS_MAC 1
#define TARGET_OS_WIN32 0
#define TARGET_OS_UNIX 0
#define TARGET_OS_EMBEDDED 1
#define TARGET_OS_IPHONE 1
#define TARGET_IPHONE_SIMULATOR 0
Questions: Does the watch use TARGET_OS_EMBEDDED? Does the watch omit TARGET_OS_IPHONE?
You can find all kinds of target conditionals in TargetConditionals.h (Cmd + Shift + O and type TargetConditionals.h).
In that file you will find a list like the one below and many more useful defines.
It does contain TARGET_OS_WATCH as of watchOS 2. For watchOS 1 it was not needed, because it was not possible to run custom code on the watch; everything ran on the phone itself.
#define TARGET_OS_MAC 1
#define TARGET_OS_WIN32 0
#define TARGET_OS_UNIX 0
#define TARGET_OS_IPHONE 1
#define TARGET_OS_IOS 0
#define TARGET_OS_WATCH 1
#define TARGET_OS_TV 0
#define TARGET_OS_SIMULATOR 0
#define TARGET_OS_EMBEDDED 1
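For example, here is a minimal sketch of how these are typically tested from (Objective-)C code:
#include <TargetConditionals.h>

#if TARGET_OS_WATCH
    // watchOS-only code
#elif TARGET_OS_IOS
    // iOS-only code
#else
    // everything else (macOS, tvOS, ...)
#endif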
Swift Addition
#if os(watchOS)
[Watch code]
#else
[Code for iOS, appleTV, or any else clause]
#endif
Some other valid values: iOS, OSX, tvOS
A small explanation about this and more: http://nshipster.com/swift-system-version-checking/
At the bottom of this document, under the section 'Build Configurations', you can find a (hopefully) up-to-date list of the values that are currently available:
https://developer.apple.com/library/ios/documentation/Swift/Conceptual/BuildingCocoaApps/InteractingWithCAPIs.html#//apple_ref/doc/uid/TP40014216-CH8-XID_15#Build Configurations
As of watchOS 2.0, you can run native code on the watch, so this is a more relevant question.
I'm using the first early beta of watchOS 2, so this may change, but right now, TARGET_OS_WATCH is set to 1 on watchOS.
(Also, be careful: TARGET_OS_IPHONE is also set to 1 on watchOS, though TARGET_OS_IOS is 0.)
There is no WatchKit or app extension target conditional. But you can create your own per-target conditionals that you use in the same way.
If you look in the "Build Settings" section for any target, there's a section called "Other C Flags". Add an entry for the WatchKit target. If you add something like -DMY_WATCHKIT_FLAG=1, you can then do #if MY_WATCHKIT_FLAG in code.
Make your custom flag, well, custom. It's not impossible that Apple might add a flag in the future called something like TARGET_WATCH_APP or whatever. Use a prefix on the flag name to make it specific to you.
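A quick sketch of how such a flag would then be used in code (MY_WATCHKIT_FLAG is just the made-up name from above):
// "Other C Flags" for the WatchKit target only: -DMY_WATCHKIT_FLAG=1
#if MY_WATCHKIT_FLAG
    // code compiled only into the WatchKit target
#else
    // code for every other target
#endif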
With the current WatchKit SDK, all code in a Watch application runs on the phone it’s paired with, so there’s no point at which your preprocessor is going to encounter code that’s going to run on the Watch and thus not much use for a macro to tell it what to do when it does. The code in the ListerWatch target of the sample you linked to will run as an extension on the iPhone and talk to its watch UI via WatchKit.
I recently updated Xcode to version 5.1. After the update, my project runs fine on all simulators except iOS 7.1, where it gives a Mach-O link error. Moreover, there is only a 64-bit architecture option in the 'Build Settings' tab. As far as I can tell, that is the cause of all the problems and errors. Does anybody know the reason for this and how to solve it?
Here are some of the warnings and errors I've got:
Warning:
Values of type 'NSInteger' should not be used as format arguments; add an explicit cast to 'long' instead
Error:
Symbol(s) not found for architecture x86_64
Thanks.
Xcode 5.1 changed the standard project settings. Among other things, it now includes the arm64 architecture for the project, but a lot of third-party libraries don't support that yet, so I recommend removing it from the Architectures setting again (leaving you with armv7 and armv7s). Keep it in the Valid Architectures setting, though; that specifies which architectures it may run on.
It also enabled some extra warnings by default, like the typecast-to-long warning. The same applies here: when you use other libraries (as source code) you might get lots of warnings you can't (or don't want to) do much about. You can disable those warnings again for the affected projects, or choose not to update your project settings right away.
Go to Build Settings -> Architectures
You probably have Standard Architectures set, right?
As of Xcode 5.1 Standard Architectures includes arm64, which you are not ready to support.
Select Other...
Double-click $(ARCHS_STANDARD) and change it to $(ARCHS_STANDARD_32_BIT).
Note: This is a temporary fix. You are probably using some static library that didn't come with a 64-bit slice. See if there is one available and then switch Architecture back to Standard Architectures.
Indeed, Xcode now includes the arm64 architecture.
NSInteger is now something quite different, as it is defined in NSObjCRuntime.h:
#if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
To deal with it, you should improve your codebase. First of all, be really consistent: assign NSInteger values only to NSInteger variables, not to int. Avoid anything like
int i = [aString integerValue]; (as integerValue returns an NSInteger)
and instead write
NSInteger i = [aString integerValue]; (then, whether NSInteger is 32-bit or 64-bit, you won't have any trouble).
Moreover, another issue you might run into is when you want to create a string from such a value.
What you could do is something like:
#define cL(v) (long)(v)
#define cUL(v) (unsigned long)(v)
NSLog(@"array.count: %lu", cUL(anArray.count));
array.count returns an unsigned int under armv7(s) and an unsigned long under arm64. By always casting to unsigned long (and using the matching %lu specifier) you will not face the warning anymore and, more importantly, will not get any bugs.
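If you'd rather not introduce macros, the equivalent inline cast (shown with a hypothetical array) is simply:
NSArray *anArray = @[@1, @2];   // hypothetical example array
NSLog(@"array.count: %lu", (unsigned long)anArray.count);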
This "logic" was presented by Apple itself in one of its Tech Talks videos:
https://developer.apple.com/tech-talks/videos/ (the "Architecting Modern iOS Games" video, around 10m00s).
I just upgraded Xcode + iOS SDK to the latest versions (5.1/7.1) and am now getting a bunch of errors about implicit conversions losing precision (NSInteger to int etc).
Does anybody know if there is a compiler flag or something that allows me to tell the compiler to treat those as warnings rather than errors again? I couldn't find anything so far. I really don't want to go through the code and add explicit casts everywhere as this would be in a lot of places.
This is an error for good reason. Using NSInteger consistently throughout your codebase ensures the code is handled consistently whether you compile it for 32-bit or 64-bit iOS devices. In the 32-bit world NSInteger and int were the same size, but with the advent of the iPhone 5s and the iPad Air, iOS is no longer 32-bit only.
As others have said, there really is no way around this if you don't want trouble with modern devices.
If you just want to get back to work and deal with this problem later, then you need to remove arm64 from the 'Valid Architectures' and 'Architectures' items in your project's Build Settings.
As others have said, you really should fix the warnings, but this was a nasty little surprise by Apple (that is, adding the arm64 architecture to Build Settings in Xcode 5.1), so I can totally understand wanting to put these warnings away for a while and get back to whatever you were working on before you decided to upgrade.
You should go and cast them the way they ought to be cast. Apple isn't complaining for no reason; doing so also ensures that you don't get any weird, unexpected behaviour later down the line. I suggest you cast them all. It's extreme, but it's clean.
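A small sketch of the kind of explicit casts meant here (the variable names are made up):
NSString *someString = @"42";                  // hypothetical input
NSInteger value = [someString integerValue];   // NSInteger is 64-bit under arm64
int narrow = (int)value;                       // explicit, intentional narrowing cast
NSLog(@"value = %ld", (long)value);            // cast to long to match the %ld specifier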
Hey there,
I have the following piece of code:
#if USE_CONST == 1
__constant__ double PNT[ SIZE ];
#else
__device__ double *PNT;
#endif
and a bit later I have:
#if USE_CONST == 0
cudaMalloc((void **)&PNT, sizeof(double)*SIZE);
cudaMemcpy(PNT, point, sizeof(double)*SIZE, cudaMemcpyHostToDevice);
#else
cudaMemcpyToSymbol(PNT, point, sizeof(double)*SIZE);
#endif
where point is defined somewhere earlier in the code. With USE_CONST=1 everything works as expected, but without it, it doesn't. I access the array in my kernel function via
PNT[ index ]
Where's the problem between the both variants?
Thanks!
The correct usage of cudaMemcpyToSymbol prior to CUDA 4.0 is:
cudaMemcpyToSymbol("PNT", point, sizeof(double)*SIZE)
or alternatively:
double *cpnt;
cudaGetSymbolAddress((void **)&cpnt, "PNT");
cudaMemcpy(cpnt, point, sizeof(double)*SIZE, cudaMemcpyHostToDevice);
which might be a bit faster if you are planning to access the symbol from the host API more than once.
EDIT: I misunderstood the question. For the global-memory version, do something similar to the second constant-memory version above:
double *gpnt;
cudaGetSymbolAddress((void **)&gpnt, "PNT");
cudaMemcpy(gpnt, point, sizeof(double)*SIZE, cudaMemcpyHostToDevice);
Although this is an old question, I'm adding this for future googlers:
The problem is here:
cudaMalloc((void **)&PNT, sizeof(double)*SIZE);
cudaMemcpy(PNT, point, sizeof(double)*SIZE, cudaMemcpyHostToDevice);
cudaMalloc writes to the host-side copy of PNT, which is actually a device variable that must not be accessed from the host. The correct approach is to allocate memory, copy the resulting address to the device symbol, and then copy the data to the memory pointed to by that symbol:
void* memPtr;
cudaMalloc(&memPtr, sizeof(double)*SIZE);
cudaMemcpyToSymbol(PNT, &memPtr, sizeof(memPtr));
// In other places you'll need an additional:
// cudaMemcpyFromSymbol(&memPtr, PNT, sizeof(memPtr));
cudaMemcpy(memPtr, point, sizeof(double)*SIZE, cudaMemcpyHostToDevice);
Easier would be:
#if USE_CONST == 1
__constant__ double PNT[ SIZE ];
#else
__device__ double PNT[ SIZE ];
#endif
// No #if required anymore:
cudaMemcpyToSymbol(PNT, point, sizeof(double)*SIZE);