It's my first time trying Native Client. I read an article and found each sample about 3D graphics used OpenGL ES 2.0. Can I port a Direct3D game to Native Client, or do I have to rewrite my code with OpenGL ES?
You would have to rewrite your D3D code for OpenGL ES 2.0 (or introduce a runtime translation layer that translates DirectX calls into OpenGL ES calls).
Native Client is designed to be portable across operating systems (currently Windows, Linux, Mac OS, and Chrome OS), so you cannot use anything that is specific to one operating system. In Native Client you can think of the Pepper API (PPAPI) as your system call interface for accessing capabilities like graphics, audio, networking, etc. And for hardware accelerated graphics specifically, Native Client supports OpenGL ES 2.0.
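As a rough illustration of what one corner of such a translation layer would look like, here is a minimal sketch that maps D3D primitive topologies onto the draw modes OpenGL ES expects. The enum values below mirror the real headers, but the type names are simplified stand-ins, not the actual d3d9.h/GLES2 declarations:

```cpp
#include <cstdint>

// Hypothetical, simplified stand-ins for the real D3D9 and GLES enums --
// the numeric values match the actual headers, but the types are our own.
enum D3DPrimitiveType {
    D3DPT_TRIANGLELIST  = 4,
    D3DPT_TRIANGLESTRIP = 5,
    D3DPT_TRIANGLEFAN   = 6
};
enum GLenum_ : std::uint32_t {
    GL_TRIANGLES      = 0x0004,
    GL_TRIANGLE_STRIP = 0x0005,
    GL_TRIANGLE_FAN   = 0x0006
};

// One tiny piece of such a translation layer: map a D3D primitive
// topology onto the GLES draw mode that glDrawArrays/glDrawElements take.
GLenum_ translatePrimitive(D3DPrimitiveType t) {
    switch (t) {
        case D3DPT_TRIANGLELIST:  return GL_TRIANGLES;
        case D3DPT_TRIANGLESTRIP: return GL_TRIANGLE_STRIP;
        case D3DPT_TRIANGLEFAN:   return GL_TRIANGLE_FAN;
    }
    return GL_TRIANGLES; // fallback for unhandled topologies
}
```

A real layer would of course also have to translate resource creation, shaders (HLSL to GLSL ES), and state management, which is where most of the work lies.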
Now that a Vulkan-to-Metal wrapper is officially supported by Khronos (MoltenVK), and OpenGL-to-Vulkan wrappers have begun to appear (glo), would it be technically possible to use OpenGL ES 3.1 or even 3.2 (so even with support for OpenGL compute shaders) on modern iOS versions/hardware by chaining these two technologies? Has anybody tried this combination?
I'm not much interested in the performance drop (which would obviously be there due to the two additional layers of abstraction), but only in the enabling factor and the cross-platform aspect of the solution.
In theory, yes :).
MoltenVK doesn't support every bit of Vulkan (see the Vulkan Portable Subset section), and some of those features might be required by OpenGL ES 3.1. Triangle fans are an obvious one, full texture swizzle is another. MoltenVK has focused on things that could translate directly; if the ES-on-Vulkan translator was willing to accept extra overhead, it could fake some or all of these features.
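For example, faking triangle fans on a backend that lacks them usually means rewriting the fan as an indexed triangle list at draw time. A minimal sketch of that index rewrite, independent of any particular API:

```cpp
#include <cstdint>
#include <vector>

// Emulating GL_TRIANGLE_FAN on an API without fan support (e.g. Vulkan
// via MoltenVK): rewrite the fan as an indexed triangle list. A fan over
// vertices v0..v(n-1) becomes the triangles (v0, v[i], v[i+1]).
std::vector<std::uint32_t> fanToTriangleList(std::uint32_t vertexCount) {
    std::vector<std::uint32_t> indices;
    if (vertexCount < 3) return indices; // a fan needs at least 3 vertices
    for (std::uint32_t i = 1; i + 1 < vertexCount; ++i) {
        indices.push_back(0);     // fan centre
        indices.push_back(i);
        indices.push_back(i + 1);
    }
    return indices;
}
```

A fan of n vertices yields n - 2 triangles, so the translator pays 3·(n - 2) indices instead of n vertices; that extra bandwidth is exactly the kind of overhead MoltenVK chose not to hide.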
The core ANGLE team is working on both OpenGL ES 3.1 support and a Vulkan backend, according to their README and recent commits. They have a history of emulating features (like triangle fans) needed by ES that weren't available in D3D.
The DXGI Overview on MSDN says that the Direct3D API (10, 11 and 12) sits on top of DXGI, whereas DXGI sits on top of the hardware, as illustrated by the following picture:
The article further mentions that the tasks of DXGI are basically enumerating adapters and presenting images on the screen. Now, if Direct3D sits on top of DXGI, how are all the math-related tasks invoked on the actual hardware (GPU)? Or is the architectural overview wrong, and does Direct3D also access the hardware directly?
This diagram is a logical map, not an explicit map of how everything actually gets implemented. In reality, Direct3D and DXGI are more 'side-by-side', and the layer that includes the User Mode Driver (UMD) and the Kernel Mode Driver (KMD) is the Windows Display Driver Model (WDDM), which uses the Device Driver Interface (DDI) to communicate with kernel mode, which in turn communicates with the hardware. The various versions of Direct3D are also 'lofted' together to use the same DDI in most cases (i.e. Direct3D 9 and Direct3D 10 legacy applications end up going through the same Direct3D 11 code paths where possible).
Since "DXGI" means "DirectX Graphics Infrastructure" this diagram is lumping the DXGI APIs with WDDM and DDI.
The purpose of the DXGI API was to separate video hardware/output enumeration as well as swapchain creation/presentation from Direct3D. Back in Direct3D 9 and earlier, these were all lumped together. In theory DXGI was not supposed to change much between Direct3D versions, but in practice it has evolved at basically the same pace, with a lot of the changes dealing with the CoreWindow swapchain model for Windows Store apps / Universal Windows Platform apps.
Many of the DXGI APIs are really for internal use, particularly when dealing with surface creation. You need to create Direct3D resources with the Direct3D APIs and not try to create them directly with DXGI, but you can use QueryInterface in places to get a DXGI surface for doing certain operations like inter-API surface sharing. With Direct3D 11.1 or later, most of the device sharing behavior has been automated so you don't have to deal with DXGI to use Direct2D/DirectWrite with Direct3D 11.
The real question is: Why does it matter to you?
See DirectX Graphics Infrastructure (DXGI): Best Practices and Surface Sharing Between Windows Graphics APIs
Under Windows, there are two main 3D libraries. I am wondering which one WebGL uses. Is it configurable? Is it configurable per browser?
Google Chrome and Firefox will by default use the ANGLE wrapper to convert OpenGL ES API calls to Direct3D 9.0 (to achieve better compatibility with most hardware). Users can change this default behavior, but doing so is quite inconvenient (currently it's not possible to change this setting programmatically).
All other major browsers (on Windows) will use OpenGL.
I am wondering WebGL use which?
Depends on the browser and the OS.
is it configurable?
Depends on the browser.
Is it configurable per browser?
If you mean from JavaScript: no.
But why do you care?
Chrome and Firefox use ANGLE so that they work out of the box on a Windows system with only the default drivers supplied by Microsoft installed. To get a proper OpenGL implementation, the user needs to have downloaded and installed the original drivers from the hardware vendor. If not, all you get is a rather poor OpenGL 1.4 implementation/emulation built on top of Direct3D 9.
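If you want to see which backend a given browser/driver combination actually ended up with, the renderer string exposed through WebGL's WEBGL_debug_renderer_info extension (the UNMASKED_RENDERER_WEBGL parameter) starts with "ANGLE (...)" when ANGLE is in use, e.g. "ANGLE (Intel(R) HD Graphics Direct3D9Ex vs_3_0 ps_3_0)". A trivial sketch of that check, written here in C++ for illustration (in a page you would run the same test in JavaScript on the string returned by gl.getParameter):

```cpp
#include <string>

// ANGLE-backed WebGL contexts report renderer strings that begin with
// "ANGLE", followed by the wrapped Direct3D device in parentheses.
// Classify a renderer string by that naming convention.
bool usesAngle(const std::string& renderer) {
    return renderer.rfind("ANGLE", 0) == 0; // true if it starts with "ANGLE"
}
```

Note that some browsers mask this extension for fingerprinting reasons, in which case no renderer string is available at all.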
Can I use the plain C++ version of OpenGL in my iOS app? I want to write a basic wrapper, then connect my C++ code with this wrapper and the app. Or must I use only OpenGL ES, with GLKit? Please describe all the options.
iOS supports OpenGL ES only. Currently supported devices offer exclusively ES 2.0 and 3.0, which are both programmable pipelines; older devices offered ES 1.1, which was the fixed pipeline.
ES is integrated at the Core Animation level. Prior to GLKit you were required to create a layer — the simplest thing that the compositor can display — and build that into a view hierarchy. CADisplayLink is the iOS 3.0+ way of tying in to the device's [virtual] vertical sync.
GLKit is separate and aims to:
provide easy view-level wrappings, creating and tying together a GL context, a layer, a view and a display link;
provide shaders equivalent to the old fixed-functionality pipeline so that ES 2.0+ can be used just as easily as 1.1 was for the same set of purposes.
It's up to you whether you use it.
One of the languages supported by LLVM is Objective-C++. That's C++ and Objective-C code intermingled, each able to call the other. You could easily create a single Objective-C++ file that exposes an ordinary C++ class for all of the rest of your ordinary C++ code but which internally makes appropriate calls to bridge into the Objective-C world. So you'd probably have a few hundred lines of Objective-C dealing with the OS stuff and exposing stuff you care about for C++ actors.
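A minimal sketch of that bridging pattern, using the pimpl idiom so that no Objective-C types leak into the C++-facing header. All names here are hypothetical, and the Impl body is a plain C++ stand-in for what would really live in an Objective-C++ (.mm) file holding an EAGLContext/GLKView:

```cpp
#include <memory>
#include <string>

// --- GLBridge.h : pure C++, safe to include from any .cpp file ---
class GLBridge {
public:
    GLBridge();
    ~GLBridge();
    std::string backendName() const; // e.g. which GL version was obtained
private:
    struct Impl;                // defined only in the Objective-C++ file
    std::unique_ptr<Impl> impl_;
};

// --- GLBridge.mm : Objective-C++; could #import <GLKit/GLKit.h> ---
struct GLBridge::Impl {
    // In a real .mm file: EAGLContext *context; GLKView *view; ...
    std::string backend = "OpenGL ES 2.0 (stand-in)";
};

GLBridge::GLBridge() : impl_(std::make_unique<Impl>()) {}
GLBridge::~GLBridge() = default;
std::string GLBridge::backendName() const { return impl_->backend; }
```

Because Impl is only a forward declaration in the header, the compiler never needs to see any Objective-C when compiling ordinary C++ translation units; only the .mm file is built as Objective-C++.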
iOS doesn't support OpenGL at all. You must use OpenGL ES for iOS devices.
You can use OpenGL ES 1.1 and 2.0 on every single iOS device (strictly, only OpenGL ES 1.1 on the iPhone 3G, but recent iOS versions don't support the iPhone 3G at all).
You can also use OpenGL ES 3.0 on devices with an Apple A7 or A8 GPU, such as the iPhone 5s and iPhone 6.
See the Apple document for more details.
All you need to use OpenGL ES on iOS is CAEAGLLayer and EAGLContext. GLKit just provides useful wrapper classes around those.
After setting up those classes, you can use the OpenGL ES API as in any other environment.
By the way, this project https://code.google.com/p/gl-wes-v2/ provides some OpenGL 2.0 APIs on OpenGL ES 2.0 environment. It seems it isn't compatible with iOS, but you might be able to use some code from the project.
If I want to do scaling and compositing of 2D anti-aliased vector and bitmap images in real-time on Windows XP and later versions of Windows, making the best use of hardware acceleration available, should I be using GDI+ or DirectX 9.0c? (Actually, Windows XP and Windows 7 are important but we're not concerned about performance on Vista.)
Is there any merit in using SDL, given that the application is not cross-platform (and never will be)? I wonder if SDL might make it easier to switch to whichever underlying drawing API gives better performance…
Where can I find the documentation for doing scaling and compositing of 2D images in DirectX 9.0c? (I found the documentation for DirectDraw but read that it is deprecated after DirectX 7. But Direct2D is not available until DirectX 10.)
Can I reasonably expect scaling and compositing to be hardware accelerated on Windows XP on a mid- to low-spec PC (i.e. integrated graphics)? If not then does it even matter whether I use GDI+ or DirectX 9.0c?
Do not use GDI+. It does everything in software, and it has a rendering model that is not good for performance in software. You'd be better off with just about anything else.
Direct3D or OpenGL (which you can access via SDL if you want a more complete cross-platform API) will give you the best performance on hardware that supports it. Direct2D is in the same boat but is not available on Windows XP. My understanding is that, at least in the case of Intel's integrated GPUs, the hardware is able to do simple operations like transforming and compositing, and that most of the problems with these GPUs come from games that have high demands for features and performance and are optimized for ATI/Nvidia cards. If you somehow find a machine where Direct3D is not supported by the video card and falls back to software, then you might have a problem.
I believe SDL uses DirectDraw on Windows for its non-OpenGL drawing. Somehow I got the impression that DirectDraw does all its operations in software in modern releases of Windows (and given what DirectDraw is used for, that has never really mattered since the Win9x era), but I'm not able to verify that.
The ideal would be a cross-platform vector graphics library that can make use of Direct3D or OpenGL for rendering, but AFAICT no such thing is available. The Cairo graphics library lacks acceleration on Windows, and Mozilla has started a project called Azure that apparently has that but doesn't appear to be designed for use outside of their projects.
I just found this: 2D Rendering in DirectX 8.
It appears that when Microsoft removed DirectDraw after DirectX 7, they expected all 2D drawing to be done through the 3D API. This would explain why I totally failed to find the documentation I was looking for.
The article looks promising so far.
Here's another: 2D Programming in a 3D World
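The core trick those articles describe is drawing each 2D image as a textured quad of pre-transformed, screen-space vertices (what Direct3D calls XYZRHW vertices), so scaling is just arithmetic on the quad's corners and compositing is texturing with blending enabled. A rough sketch, with a simplified stand-in for a real D3D vertex format:

```cpp
#include <array>

// Simplified stand-in for a pre-transformed D3D vertex (XYZRHW + UV);
// a real program would declare this via an FVF or vertex declaration.
struct Vertex2D {
    float x, y, z, rhw; // screen-space position (z and rhw fixed for 2D)
    float u, v;         // texture coordinates
};

// Build a screen-space quad for a sprite at (x, y), scaled to w x h
// pixels, in triangle-strip order: top-left, top-right, bottom-left,
// bottom-right. Drawing it textured performs the scale on the GPU.
std::array<Vertex2D, 4> makeQuad(float x, float y, float w, float h) {
    return {{
        { x,     y,     0.0f, 1.0f, 0.0f, 0.0f },
        { x + w, y,     0.0f, 1.0f, 1.0f, 0.0f },
        { x,     y + h, 0.0f, 1.0f, 0.0f, 1.0f },
        { x + w, y + h, 0.0f, 1.0f, 1.0f, 1.0f },
    }};
}
```

With alpha blending enabled, drawing such quads back-to-front gives hardware-accelerated compositing, which is exactly the capability the question was after on XP-era integrated graphics.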