Will CoreGraphics support Metal on Apple Devices? - ios

I have read that Core Graphics is based on OpenGL ES and uses the Quartz drawing engine on Apple devices (iOS, OS X).
However, with the upcoming deprecation of OpenGL ES in favor of Metal, will Core Graphics be updated to support Metal and/or software rendering on coming iOS/OS X devices?

First, Core Graphics doesn't "use" Quartz. "Core Graphics" and "Quartz" are just two names for the same thing. They are equivalent.
Second, Apple doesn't promise what technology Core Graphics uses under the hood. They've occasionally touted the acceleration they were able to accomplish by using some particular technology, but that's marketing — marketing to developers, but marketing nonetheless — not a design contract. They reserve the right and ability to change how Core Graphics is implemented, and have done so often. Any developer who writes code which depends on the specific implementation is risking having their code break with future updates to the OS. Developers should only rely on the design contract in the documentation and headers.
It is very likely that Core Graphics is already using Metal under the hood. It makes no difference to you as a developer or user whether it is or isn't.
Finally, Core Graphics has not been deprecated. That means that there's no reason to expect it to go away, break, or lose functionality any time soon.
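For illustration, here is a minimal Swift sketch of ordinary Core Graphics drawing through the documented API (using UIGraphicsImageRenderer from UIKit); nothing in it knows or cares which backend Quartz uses underneath, which is exactly the contract you should rely on.

```swift
import UIKit

// A minimal sketch: ordinary Core Graphics drawing through the documented API.
// Nothing here exposes, or depends on, whatever backend Quartz uses underneath.
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 100, height: 100))
let image = renderer.image { context in
    let cg = context.cgContext
    cg.setFillColor(UIColor.systemBlue.cgColor)
    cg.fillEllipse(in: CGRect(x: 10, y: 10, width: 80, height: 80))
}
```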

Related

Metal 2 API features on older devices

According to the documentation (https://developer.apple.com/documentation/metal/gpu_features/understanding_gpu_family_4), "On A7- through A10-based devices, Metal doesn't explicitly describe this tile-based architecture". In the same article I saw "Metal 2 on the A11 GPU" and got confused, because I couldn't find any more information about Metal 2 support in the Metal Shading Language specification. For example, I found the table "Attributes for fragment function tile input arguments" and the note "iOS: attributes in Table 5.5 are supported since Metal 2.0."
Is Metal 2 support specific to a GPU family?
Not all features are supported by all devices: newer devices generally support more features, while older devices might not support newer ones.
Several different factors determine this support.
First, each MTLDevice has a set of MTLGPUFamily values it supports, which you can query with the supportsFamily method. Some documentation articles mention which family a device needs to support to use a given feature, but generally you can find that info in the Metal Feature Set Tables. Family support varies with the chip itself: how much memory and which hardware units it has available; the chips are grouped into families based on those characteristics.
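As a rough illustration, here is a hedged Swift sketch (assuming iOS 13 / macOS 10.15 or later, where supportsFamily(_:) is available) that checks a few of the families listed in the Metal Feature Set Tables:

```swift
import Metal

// A minimal sketch (assumes iOS 13 / macOS 10.15+, where supportsFamily(_:) exists):
// check which GPU families the default device belongs to.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("Metal is not supported on this device")
}

let families: [(String, MTLGPUFamily)] = [
    ("apple4", .apple4),   // roughly A11-class GPUs
    ("apple5", .apple5),   // roughly A12-class GPUs
    ("apple6", .apple6),   // roughly A13-class GPUs
    ("apple7", .apple7),   // roughly A14/M1-class GPUs
    ("common3", .common3),
]

for (name, family) in families where device.supportsFamily(family) {
    print("Device supports MTLGPUFamily.\(name)")
}
```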
There are other supports* queries on an MTLDevice, though, that don't depend on the family of the device but on the device itself, for example the supportsRaytracing query. These are also based on the GPU itself, but are kept separate, probably because they don't fall neatly into any of the "families".
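For example, a short Swift sketch (assuming a deployment target of iOS 14 / macOS 11 or later, where these properties exist) of a couple of per-device queries that aren't expressed as a GPU family:

```swift
import Metal

// A minimal sketch (assumes a deployment target of iOS 14 / macOS 11+):
// per-device capability queries that aren't expressed as a GPU family.
if let device = MTLCreateSystemDefaultDevice() {
    print("Ray tracing supported:       \(device.supportsRaytracing)")
    print("Function pointers supported: \(device.supportsFunctionPointers)")
}
```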
The third kind of support is based on the OS version. Newer OS versions may ship new APIs or extensions to existing APIs. Those are marked with API_AVAILABLE macros in the headers and may only be used on OS versions that are the same or higher. To query support for these, you use the availability macros and @available checks in Objective-C, or the #available syntax in Swift. Here, availability isn't so much affected by the GPU itself as by having a newer OS and the drivers that go with it.
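A hedged Swift sketch of this kind of OS-version gating (MTLGPUFamily.metal3 is only declared from iOS 16 / macOS 13, so the call has to sit behind an availability check):

```swift
import Metal

// A minimal sketch: gate an OS-version-dependent API behind an availability check.
// MTLGPUFamily.metal3 only exists from iOS 16 / macOS 13 onward.
func supportsMetal3(_ device: MTLDevice) -> Bool {
    if #available(iOS 16.0, macOS 13.0, *) {
        return device.supportsFamily(.metal3)
    }
    return false
}
```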
The last kind of "support" that limits some features is the Metal Shading Language version. It's tied to the OS version, and it's what those notes in the Metal Shading Language specification you mentioned refer to. Here, feature availability is a mix of compiler-version limits (not everyone uses the latest and greatest spec; I think most production game engines target something like Metal 2.1, at least the games that aren't on the latest engine versions) and device limits. For example, tile shaders are gated by a compiler version, but they are also limited to Apple Silicon GPUs.
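To make the compiler-version side concrete, here is a hedged Swift sketch (the kernel source is a throwaway placeholder) that pins the Metal Shading Language version via MTLCompileOptions.languageVersion when compiling at runtime:

```swift
import Metal

// A minimal sketch: pin the Metal Shading Language version at runtime compilation.
// The kernel source below is a throwaway placeholder.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void noop(uint tid [[thread_position_in_grid]]) {}
"""

let device = MTLCreateSystemDefaultDevice()!
let options = MTLCompileOptions()
options.languageVersion = .version2_1   // compile against MSL 2.1

do {
    let library = try device.makeLibrary(source: source, options: options)
    print("Compiled with MSL 2.1:", library.functionNames)
} catch {
    print("Compilation failed:", error)
}
```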
So there are different types of support at play when you use Metal in your application. It's easy to confuse them, but it's important to know each one.

OpenGL ES 3.1+ support on iOS through Vulkan wrapper

Now that a Vulkan-to-Metal wrapper is officially supported by Khronos (MoltenVK), and OpenGL-to-Vulkan wrappers have begun to appear (glo), would it be technically possible to use OpenGL ES 3.1 or even 3.2 (so even with support for OpenGL compute shaders) on modern iOS versions/hardware by chaining these two technologies? Has anybody tried this combination?
I'm not much interested in the performance drop (that would obviously be there due to the two additional layers of abstraction), but only on the enabling factor and cross-platform aspect of the solution.
In theory, yes :).
MoltenVK doesn't support every bit of Vulkan (see the Vulkan Portable Subset section), and some of those features might be required by OpenGL ES 3.1. Triangle fans are an obvious one; full texture swizzle is another. MoltenVK has focused on things that could translate directly; if the ES-on-Vulkan translator were willing to accept extra overhead, it could fake some or all of these features.
The core ANGLE team is working on both OpenGL ES 3.1 support and a Vulkan backend, according to their README and recent commits. They have a history of emulating features (like triangle fans) needed by ES that weren't available in D3D.
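As an aside, that kind of emulation is usually mechanical. Here is a hedged Swift sketch (the helper name is made up) of how a translator can rewrite a triangle fan, which the Metal/MoltenVK path doesn't expose, into an indexed triangle list:

```swift
// A minimal sketch (hypothetical helper): rewrite a triangle fan as an indexed
// triangle list. Fan vertices v0, v1, ..., v(n-1) become the triangles
// (v0, v1, v2), (v0, v2, v3), ..., (v0, v(n-2), v(n-1)).
func fanToTriangleListIndices(vertexCount: Int) -> [UInt16] {
    guard vertexCount >= 3 else { return [] }
    var indices: [UInt16] = []
    indices.reserveCapacity((vertexCount - 2) * 3)
    for i in 1..<(vertexCount - 1) {
        indices.append(0)
        indices.append(UInt16(i))
        indices.append(UInt16(i + 1))
    }
    return indices
}

// A 5-vertex fan becomes 3 triangles sharing vertex 0:
print(fanToTriangleListIndices(vertexCount: 5))   // [0, 1, 2, 0, 2, 3, 0, 3, 4]
```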

Easiest way to display text in OpenGL ES 2.0

I'm creating a simple Breakout style game and would like a simple way to display the score.
I've been doing some research and found several ways to do text in OpenGL ES, but most methods look fairly complicated.
This looks like it would do the trick, but I couldn't get it to work.
I've looked into FTGL and FreeType, but they look complicated.
I've also read that one can display a UILabel over the EAGLContext, but I'm not sure how that would fare in the performance department.
I could probably get any of these options to work; I'm just wondering what the best solution is for this situation.
For simple use cases like you're describing, on even vaguely modern hardware (i.e. iPhone 3GS and later, I think), the compositing penalty for layering UIKit/CoreAnimation content on top of OpenGL ES content is negligible. (You can see this if you run your app in Instruments with the "Color OpenGL ES fast path blue" option turned on.)
They say premature optimization is the root of all evil — it's pretty easy to try UILabel, see if it makes a significant difference to your app's performance, and look into third-party libraries and more complicated solutions only if it does.
(Also, it sounds like you might be trying to manage your own CAEAGLLayer. For common use cases, it's a lot easier to use GLKView, plus GLKViewController for animation.)
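To make that concrete, here is a hedged Swift sketch (class and label names are made up; it assumes a GLKit-based setup on the old OpenGL ES 2.0 stack) of a score label composited over a GLKView:

```swift
import GLKit
import OpenGLES
import UIKit

// A minimal sketch (hypothetical class and names): a UIKit score label composited
// over a GLKView that draws the game with OpenGL ES 2.0.
final class BreakoutViewController: GLKViewController {
    private let scoreLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        let glkView = view as! GLKView
        glkView.context = EAGLContext(api: .openGLES2)!

        scoreLabel.text = "Score: 0"
        scoreLabel.textColor = .white
        scoreLabel.frame = CGRect(x: 16, y: 40, width: 200, height: 24)
        view.addSubview(scoreLabel)   // Core Animation composites this on top of the GL content
    }

    override func glkView(_ view: GLKView, drawIn rect: CGRect) {
        glClearColor(0, 0, 0, 1)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        // ... draw the paddle, ball, and bricks here ...
    }
}
```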
I'd recommend checking out the Print3D functionality of the PowerVR SDK's PVRTools framework. Print3D is free to use, cross-platform (iOS, Android, Linux, Windows, OS X etc.) and it efficiently renders text within OpenGL ES 1.x, 2.0 & 3.0 applications. The SDK includes an example application with source that demonstrates how to use the Print3D framework (IntroducingPrint3D).
The PowerVR Graphics SDK can be downloaded for free from Imagination's website: http://www.imgtec.com/powervr/insider/sdkdownloads/index.asp
An overview of the source included in the SDK can be found here: http://www.imgtec.com/powervr/insider/sdkdownloads/learn_more.asp

General GPU programming on iPhone [duplicate]

With the push towards multimedia-enabled mobile devices, this seems like a logical way to boost performance on these platforms while keeping general-purpose software power efficient. I've been interested in the iPad hardware as a development platform for UI and data display/entry usage, but I'm curious how much processing capability the device itself is capable of. OpenCL would make it a JUICY hardware platform to develop on, even though the licensing seems like it kinda stinks.
OpenCL is not yet part of iOS.
However, the newer iPhones, iPod touches, and the iPad all have GPUs that support OpenGL ES 2.0. 2.0 lets you create your own programmable shaders to run on the GPU, which would let you do high-performance parallel calculations. While not as elegant as OpenCL, you might be able to solve many of the same problems.
Additionally, iOS 4.0 brought with it the Accelerate framework which gives you access to many common vector-based operations for high-performance computing on the CPU. See Session 202 - The Accelerate framework for iPhone OS in the WWDC 2010 videos for more on this.
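For a flavor of what Accelerate gives you, here is a minimal Swift sketch of an element-wise vector multiply with vDSP (the original answer predates Swift; the same vDSP call exists in C):

```swift
import Accelerate

// A minimal sketch: an element-wise vector multiply on the CPU with vDSP.
let a: [Float] = [1, 2, 3, 4]
let b: [Float] = [10, 20, 30, 40]
var result = [Float](repeating: 0, count: a.count)

// result[i] = a[i] * b[i], with stride 1 through each array
vDSP_vmul(a, 1, b, 1, &result, 1, vDSP_Length(a.count))
print(result)   // [10.0, 40.0, 90.0, 160.0]
```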
Caution! This question is ranked as the 2nd result by Google. However, most answers here (including mine) are out of date. People interested in OpenCL on iOS should visit more up-to-date entries like this one: https://stackoverflow.com/a/18847804/443016.
http://www.macrumors.com/2011/01/14/ios-4-3-beta-hints-at-opencl-capable-sgx543-gpu-in-future-devices/
The iPad 2's GPU, the PowerVR SGX543, is capable of OpenCL.
Let's wait and see which iOS release will bring OpenCL APIs to us. :)
Following from nacho4d:
There is indeed an OpenCL.framework in iOS 5's private frameworks directory, so I would suppose iOS 6 is the one to watch for OpenCL.
Actually, I've seen it in OpenGL-related crash logs for my iPad 1, although that could just be the CPU (implementing parts of the graphics stack perhaps, like on OS X).
You can compile and run OpenCL code on iOS using the private OpenCL framework, but you probably won't get the app into the App Store (Apple doesn't want you to use private frameworks).
Here is how to do it:
https://github.com/linusyang/opencl-test-ios
OpenCL? Not yet.
A good way of guessing the next public frameworks in iOS is to look at the private frameworks directory.
If you see what you are looking for there, then there's a chance.
If not, then wait for the next release and look again in the private stuff.
I guess CoreImage is coming first because OpenCL is too low level ;)
Anyway, this is just a guess.

ExEn (XNA -> iOS, Android) and accelerometers, etc

I've been reading about Andrew Russell's ExEn project and I'm wondering what the flow would be like for creating a WP7 accelerometer-based game and then porting it to another platform, say iOS. Here's what I hope would happen:
1. Create a fully functional game in XNA, avoiding dependence on device-specific items like the 'back' button.
2. Run the project through ExEn (I have no idea how this would happen), creating a fully functional iOS game.
3. Run the game on an iPhone.
Sorry for that pitiful outline, but I just don't have a solid high-level view after reading about it.
Also, being a software conversion, surely it wouldn't work perfectly. How would you iron out the wrinkles? I assume you'd have to know iOS or Android fairly well to pin it down.
Anyway, if anyone can move me one step closer I would appreciate it.
ExEn is an implementation of a subset of the XNA API that runs on different platforms (including iOS and Android). Put simply, it makes the classes and methods that you use when writing XNA code available to you on these other platforms. (Plus appropriate instructions, examples, etc.)
When using ExEn, the bulk of your code should simply "just work". However in most real-world cases you will need to write at least some platform-specific code (and probably provide some platform-specific assets). In particular to support different device resolutions, and also in cases where you use XNA features not available in ExEn.
At time of writing, ExEn does not implement the XNA/WP7 APIs for accelerometer support. At some time in the future they may be added (either by me or anyone who wants to contribute a patch). ExEn is distributed as source code, so you could even add the necessary support yourself.
The alternative would be to write platform-specific code for the parts of your game that query the accelerometer. Using ExEn does not prevent you from also using the APIs of the underlying platform.
ExEn (on iOS and Android) runs on top of Xamarin's MonoTouch and Mono for Android products. These two products provide C# bindings for the underlying platform APIs. Also, much like ExEn implements the XNA APIs, Mono implements the .NET APIs. These products also provide you with the tools you need (IDE, compiler, debugger, etc).
So the iOS API that you would use is UIAccelerometer (doc). This is exposed in C# via MonoTouch.UIKit.UIAccelerometer (doc). I'll leave looking up the Android equivalents as an exercise.
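Purely as an illustration of the platform-specific piece (the answer's context is C#/MonoTouch, and UIAccelerometer has since been deprecated in favor of Core Motion in native code), here is a hedged Swift sketch of reading raw accelerometer samples:

```swift
import CoreMotion

// A minimal sketch: read raw accelerometer samples with Core Motion.
// (UIAccelerometer, mentioned above, was the older API for the same data.)
let motionManager = CMMotionManager()

if motionManager.isAccelerometerAvailable {
    motionManager.accelerometerUpdateInterval = 1.0 / 60.0
    motionManager.startAccelerometerUpdates(to: .main) { data, _ in
        guard let acceleration = data?.acceleration else { return }
        // Feed x/y/z into the game's tilt controls here.
        print(acceleration.x, acceleration.y, acceleration.z)
    }
}
```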
You can't expect:
- to port a game to another platform without modifying it;
- to port a game with special platform-inherent abilities to another platform that lacks those abilities, or vice versa.

Resources