OpenGL ES 3.1+ support on iOS through a Vulkan wrapper

Now that a Vulkan-to-Metal wrapper is officially supported by Khronos (MoltenVK), and OpenGL-to-Vulkan wrappers have begun to appear (glo), would it be technically possible to use OpenGL ES 3.1 or even 3.2 (and thus OpenGL compute shaders) on modern iOS versions/hardware by chaining these two technologies? Has anybody tried this combination?
I'm not much interested in the performance drop (which would obviously be there due to the two additional layers of abstraction), only in the enabling factor and the cross-platform aspect of the solution.

In theory, yes :).
MoltenVK doesn't support every bit of Vulkan (see the Vulkan Portable Subset section), and some of those features might be required by OpenGL ES 3.1. Triangle fans are an obvious one; full texture swizzle is another. MoltenVK has focused on things that translate directly; if the ES-on-Vulkan translator were willing to accept extra overhead, it could fake some or all of these features.
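To give a flavor of what "faking" a missing feature looks like, a translation layer can rewrite a triangle fan as an indexed triangle list at draw time. A minimal sketch; the helper name is hypothetical, not from MoltenVK or any real translator:

```javascript
// Emulate GL_TRIANGLE_FAN on a backend that only offers triangle lists:
// a fan (c, v1, v2, v3, ...) becomes triangles (c, v1, v2), (c, v2, v3), ...
// The translator would feed the rewritten indices to a plain indexed draw.
function fanToTriangleList(fanIndices) {
  const out = [];
  for (let i = 1; i < fanIndices.length - 1; i++) {
    out.push(fanIndices[0], fanIndices[i], fanIndices[i + 1]);
  }
  return out;
}
```

The cost is an extra index buffer and roughly 3× the index count, which is exactly the kind of overhead the answer above alludes to.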
The core ANGLE team is working on both OpenGL ES 3.1 support and a Vulkan backend, according to their README and recent commits. They have a history of emulating features (like triangle fans) needed by ES that weren't available in D3D.


Will CoreGraphics support Metal on Apple Devices?

I have read that Core Graphics is based on OpenGL ES and uses the Quartz drawing engine on Apple devices (iOS, OS X).
However, with the upcoming deprecation of OpenGL ES in favor of Metal, will Core Graphics be updated to support Metal and/or software rendering on coming iOS/OS X devices?
First, Core Graphics doesn't "use" Quartz. "Core Graphics" and "Quartz" are just two names for the same thing. They are equivalent.
Second, Apple doesn't promise what technology Core Graphics uses under the hood. They've occasionally touted the acceleration they were able to accomplish by using some particular technology, but that's marketing — marketing to developers, but marketing nonetheless — not a design contract. They reserve the right and ability to change how Core Graphics is implemented, and have done so often. Any developer who writes code which depends on the specific implementation is risking having their code break with future updates to the OS. Developers should only rely on the design contract in the documentation and headers.
It is very likely that Core Graphics is already using Metal under the hood. It makes no difference to you as a developer or user whether it is or isn't.
Finally, Core Graphics has not been deprecated. That means that there's no reason to expect it to go away, break, or lose functionality any time soon.

WebGL Compute Shaders and VBOs/UBOs

AFAIK, the compute shader model is very limited in WebGL, and the documentation on it is even scarcer. I'm having a hard time finding answers to my questions.
Is it possible to execute a compute shader on one or multiple VBOs/UBOs and alter their values?
Update: On April 9, 2019, the Khronos Group released a draft standard for compute shaders in WebGL 2.
Original answer:
In this press release, the Khronos group stated that they are working on an extension to WebGL 2 to allow for compute shaders:
What’s next? An extension to WebGL 2.0 providing compute shader support is under development, which will bring many leading-edge graphics algorithms to the web. Khronos is also beginning work on the next generation of WebGL, to bring the enhanced performance of the new generation of explicit 3D APIs to the web. Stay tuned for more news!
Your best bet is to wait a year or two for it to arrive on a limited number of GPU + browser combinations.
2022 UPDATE
It has been declared here (in red) that the WebGL 2.0 Compute specification has instead been moved into the new WebGPU spec and is deprecated for WebGL 2.0.
WebGPU has nowhere near global coverage across browsers yet, whereas WebGL 2.0 reached global coverage as of Feb 2022. WebGL 2.0 Compute is implemented only in Google Chrome (Windows, Linux) and Microsoft Edge Insider Channels and will not be implemented elsewhere.
This is obviously a severe limitation for those wanting compute capability on the web. But it is still possible to do informal compute by other means, such as regular graphics shaders plus the expanded input and output buffer functionality provided by WebGL 2.0.
I would recommend Amanda Ghassaei's gpu-io for this. It does all the work for you in wrapping regular GL calls to give compute capability that "just works" (in either WebGL or WebGL 2.0).
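To give a flavor of the "graphics shaders as compute" approach: input data is typically packed into a float texture, and each fragment of a full-screen draw into a framebuffer acts as one compute invocation. A minimal sketch of the layout step, with illustrative names (this is not gpu-io's actual API):

```javascript
// Pack a 1-D array of `count` floats into the smallest square RGBA texture,
// so a fragment shader can read the data with texelFetch and write results
// via render-to-texture. One RGBA texel holds `channels` values (4 here).
function textureSizeFor(count, channels = 4) {
  const texels = Math.ceil(count / channels); // texels needed to hold the data
  const side = Math.ceil(Math.sqrt(texels));  // smallest square that fits them
  return { width: side, height: side };
}
// Drawing a full-screen quad into a framebuffer of this size then runs the
// fragment shader once per texel, i.e. once per group of 4 input values.
```

Libraries like gpu-io hide exactly this kind of bookkeeping behind a compute-style interface.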

What is the difference between prefixes of WebGL extensions

What is the difference between the prefixes of WebGL extensions?
There are several prefixes for WebGL extensions, like ANGLE, OES, WEBGL, or EXT. What is the actual difference between them?
Taken from here.
WebGL API extensions may derive from many sources, and the naming of each extension reflects its origin and intent.
More about each tag:
The ANGLE tag should be used if the ANGLE library is used.
The OES tag should be used for mirroring functionality from OpenGL ES or OpenGL API extensions approved by the respective architecture review boards.
The EXT tag should be used for mirroring other OpenGL ES or OpenGL API extensions. If only small differences in behavior compared to OpenGL ES or OpenGL are specified for a given extension, the original tag should be maintained.
The WEBGL tag should be used for WebGL-specific extensions which are intended to be compatible with multiple web browsers. It should also be used for extensions which originated with the OpenGL ES or OpenGL APIs but whose behavior has been significantly altered.
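Because the same functionality can appear under different tags across browsers and over time, lookup code often probes several prefixed names before giving up. A hedged sketch (the prefix list and helper name are illustrative, not part of the WebGL API):

```javascript
// Try an extension name under each known WebGL vendor tag, returning the
// first extension object found, or null if none is available.
function getExtensionWithPrefixes(gl, name) {
  const prefixes = ["", "ANGLE_", "OES_", "EXT_", "WEBGL_"];
  for (const p of prefixes) {
    const ext = gl.getExtension(p + name); // standard WebGL lookup call
    if (ext) return ext;
  }
  return null;
}
```

For example, `getExtensionWithPrefixes(gl, "texture_float")` would find `OES_texture_float` on contexts that expose it under that tag.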

DirectX, GDI+ or SDI on Windows XP?

If I want to do scaling and compositing of 2D anti-aliased vector and bitmap images in real-time on Windows XP and later versions of Windows, making the best use of hardware acceleration available, should I be using GDI+ or DirectX 9.0c? (Actually, Windows XP and Windows 7 are important but we're not concerned about performance on Vista.)
Is there any merit in using SDL, given that the application is not cross-platform (and never will be)? I wonder if SDL might make it easier to switch to whichever underlying drawing API gives better performance…
Where can I find the documentation for doing scaling and compositing of 2D images in DirectX 9.0c? (I found the documentation for DirectDraw but read that it is deprecated after DirectX 7. But Direct2D is not available until DirectX 10.)
Can I reasonably expect scaling and compositing to be hardware accelerated on Windows XP on a mid- to low-spec PC (i.e. integrated graphics)? If not then does it even matter whether I use GDI+ or DirectX 9.0c?
Do not use GDI+. It does everything in software, and it has a rendering model that is not good for performance in software. You'd be better off with just about anything else.
Direct3D or OpenGL (which you can access via SDL if you want a more complete API that is cross-platform) will give you the best performance on hardware that supports it. Direct2D is in the same boat but is not available on Windows XP. My understanding is that, at least in the case of Intel's integrated GPUs, the hardware is able to do simple operations like transforming and compositing, and that most of the problems with these GPUs are with games that have high demands for features and performance and are optimized for ATI/Nvidia cards. If you somehow find a machine where Direct3D is not supported by the video card and is falling back to software, then you might have a problem.
I believe SDL uses DirectDraw on Windows for its non-OpenGL drawing. Somehow I got the impression that DirectDraw does all its operations in software in modern releases of Windows (and given what DirectDraw is used for, it hasn't really mattered since the Win9x era), but I'm not able to verify that.
The ideal would be a cross-platform vector graphics library that can make use of Direct3D or OpenGL for rendering, but AFAICT no such thing is available. The Cairo graphics library lacks acceleration on Windows, and Mozilla has started a project called Azure that apparently has that but doesn't appear to be designed for use outside of their projects.
I just found this: 2D Rendering in DirectX 8.
It appears that since Microsoft removed DirectDraw after DirectX 7 they expected all 2D drawing to be done using the 3D API. This would explain why I totally failed to find the documentation I was looking for.
The article looks promising so far.
Here's another: 2D Programming in a 3D World

Will OpenGL ES 1.1 become obsolete in iOS?

I am at a crossroads between using OpenGL ES 1.1 and 2.0 for iPhone development. I plan on creating 2D applications and simple 3D applications. In the interest of simplicity, should I use 1.1? Or will this be discontinued at some point in iOS? I would like to know if the shaders in 2.0 are significantly more beneficial in making simple programs than the shaders in 1.1. Please tell me the advantages of each. Thanks.
I would say the definitive reason to use 2.0 starting out is complex effects. 2.0 is a wild card in this matter: shaders are little bits of software that run inside the graphics card (OK, not always true) and can affect individual pixels at run time. In 1.1 there's a lot you can do, but most of the time you're affecting all the pixels in a texture; to affect individual pixels you have to combine textures, and after a while there's just stuff you can't do in 1.1.
Now, if you don't need complex effects, you can start with 1.1, but let me show you your journey:
In the beginning 1.1 will be easier; glTranslatef, glRotatef, glScalef, etc. do save you time and let you start manipulating objects easily.
But as you progress and start to do more complex things, you learn about matrix manipulation. Say there's a routine that's a bit slow because you're doing a lot of translates, rotates, etc. So you read up on the subject and learn you can combine all those operations into one matrix, so you start writing your own matrix-calculation functions and using glMultMatrixf more. After a while it's just easier to always use glMultMatrixf, because you can later add stuff without having to rewrite code.
At this point you have no reason to use 1.1; going from glMultMatrixf to 2.0 is a very small step, and you get that whole new world of shaders.
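The combining step described above can be sketched as follows: build one column-major 4x4 matrix on the CPU (the convention OpenGL uses) and hand it to GL in a single call (glMultMatrixf in 1.1, or a shader uniform in 2.0) instead of chaining glTranslatef/glScalef. The helper names here are our own, not part of any GL API:

```javascript
function mat4Identity() {
  return [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];
}
function mat4Translate(tx, ty, tz) {
  const m = mat4Identity();
  m[12] = tx; m[13] = ty; m[14] = tz; // translation lives in the 4th column
  return m;
}
function mat4Scale(sx, sy, sz) {
  const m = mat4Identity();
  m[0] = sx; m[5] = sy; m[10] = sz;   // scale on the diagonal
  return m;
}
// c = a * b in column-major storage (b is applied to a point first, then a).
function mat4Multiply(a, b) {
  const c = new Array(16).fill(0);
  for (let col = 0; col < 4; col++)
    for (let row = 0; row < 4; row++)
      for (let k = 0; k < 4; k++)
        c[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
  return c;
}
// One combined matrix replaces a glTranslatef followed by a glScalef:
const model = mat4Multiply(mat4Translate(1, 2, 3), mat4Scale(2, 2, 2));
```

In 1.1 you would pass `model` to glMultMatrixf; in 2.0 you upload it with glUniformMatrix4fv and apply it in the vertex shader, which is why the jump between the two is so small once you work this way.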
So, if you don't need big effects and are itching to go, all right, use 1.1.
Otherwise go straight to 2.0.
Disclaimer
Yes, this is a simplification, but it's a journey I've made myself.
2.0 has a programmable pipeline
This doesn't seem like very much information (when I first started I was like, "uh, OK?"), but it really means a lot: it allows you to take control of the transformations of vertices.
Now, in the smaller things I have done (3D), the transformations could easily be managed by 1.1 automatically, but it is cool to have total control over them in 2.0.
If you're learning OpenGL ES on the iDevices, I suggest at first making programs that support both 1.1 and 2.0 so you can get even more experience with and understanding of OpenGL ES.
If you need full control over the vertices, then 2.0 is the way to go; otherwise 1.1 should be fine.
