Difference between "webgl" and "experimental-webgl" - webgl

Some sites say that you should initialize WebGL the following way:
var gl = c.getContext("webgl") || c.getContext("experimental-webgl");
if (!gl)
    alert("This browser doesn't support WebGL!");
What's the difference between webgl and experimental-webgl? The only description I could find was on MDN:
getContext(in DOMString contextId) RenderingContext Returns a drawing context on the canvas, or null if the context ID is not supported. A drawing context lets you draw on the canvas. Calling getContext with "2d" returns a CanvasRenderingContext2D object, whereas calling it with "experimental-webgl" (or "webgl") returns a WebGLRenderingContext object. This context is only available on browsers that implement WebGL.
However, this makes the two seem like they're the same. Is there a difference?

TL;DR: "experimental-webgl" = beta, it was used by browsers before WebGL 1.0 shipped to try to indicate this is not final. When WebGL 1.0 shipped and a browser passed all the conformance tests then that browser would start accepting "webgl" and "experimental-webgl" would be just be a synonym and deprecated.
Long version:
Browser vendors used to prefix things that were not yet standardized or complete. The hope was that developers would try them out, and when the standard was finalized the prefix would be removed and everyone would use only the unprefixed version. The prefix "experimental-" is left over from that era. Browser vendors figured out that prefixes were a bad idea, because thousands of websites would use the prefix in production and then the browsers could not remove it without breaking those sites.
Browser vendors have generally agreed not to do this anymore. Instead they put new features behind flags and ask developers to test them; when the vendors and standards committees are reasonably sure everything is good and stable, they allow the new feature to run without the flag. WebGL2 was done this new way, so there is no "experimental-webgl2", just browser flags.
The only browser that still needs "experimental-webgl" is Edge, and only because Edge still does not implement the entire WebGL spec. For all other browsers "experimental-webgl" is exactly the same as "webgl"; it's just left over from that old era.
Personally I no longer use "experimental-webgl" at all. That means my code won't work in Edge until it is actually standards compliant. IMO they need the pressure of websites not working to get them to spend resources fixing their implementation, as it's been broken for years now.
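For completeness, here is a minimal sketch of the context creation I use now, assuming a canvas element with id "c" (the id and the message text are just placeholders, not part of any spec):
var canvas = document.getElementById("c");
var gl = canvas.getContext("webgl");   // no "experimental-webgl" fallback
if (!gl) {
  // Either the browser has no WebGL 1.0 support at all, or (like Edge at the
  // time of writing) it only answers to the deprecated "experimental-webgl" name.
  alert("This browser doesn't support WebGL 1.0.");
}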

Related

Is there a programmatic way to see what graphics API a game is using?

For games like DOTA 2, which can be run with different graphics APIs such as DX9, DX11, and Vulkan, I have not been able to come up with a viable way of checking which of the APIs it is currently using. I want to do this to correctly inject a DLL in order to display images over the game.
I have looked into manually checking which DLLs the game has loaded,
this tool for example: https://learn.microsoft.com/en-us/sysinternals/downloads/listdlls
however, in the case of DOTA, it loads both the d3d9.dll and d3d11.dll libraries if none is specified in the launch options on Steam. Does anyone have any other ideas as to how to determine the graphics API actually in use?
In Vulkan, a clean way would be to implement a Vulkan layer that does the overlay. It is slightly cleaner than outright injecting DLLs, and it could work on multiple platforms.
In DirectX, screen-capture software typically does this, and some software adds FPS counters and similar overlays. There are open-source projects with similar goals, e.g. https://github.com/GPUOpen-Tools/OCAT. I believe the conventional method is to intercept (i.e. "hook", in Win32 API terminology) all the appropriate API calls.
As for simple detection: if it calls D3D12CreateDevice then it is likely Direct3D 12. Then again, the app could create devices for all the APIs and proceed not to use them. But I think API detection is not particularly important if you only want to make an overlay; just intercept all the Present calls and draw your stuff on top.

SharpDX numbered classes, where do i find their respective documentation/responsibilities?

I'm having a hard time every time I look at SharpDX code and try to follow the DirectX documentation. Is there a place that clearly lays out what each of the numbered classes maps to and why it exists?
I'm talking about things like:
DXGI.Device
DXGI.Device1
DXGI.Device2
DXGI.Device3
DXGI.Device4
SharpDX.Direct3D11.Device
SharpDX.Direct3D11.Device1
SharpDX.Direct3D11.Device11On12
SharpDX.Direct3D11.Device2
SharpDX.Direct3D11.Device3
SharpDX.Direct3D11.Device4
SharpDX.Direct3D11.Device5
SharpDX.Direct3D11.DeviceContext
SharpDX.Direct3D11.DeviceContext1
SharpDX.Direct3D11.DeviceContext2
SharpDX.Direct3D11.DeviceContext3
SharpDX.Direct3D11.DeviceContext4
Every time I start from code I find, the version numbers seem to be picked by black magic and I have no idea where to go from there. For example, I'm using this (from code I found) and I have no idea why it's Device3 and Factory3 going with a SwapChain1, on which we QueryInterface a SwapChain2:
using (DXGI.Device3 dxgiDevice3 = this.device.QueryInterface<DXGI.Device3>())
using (DXGI.Factory3 dxgiFactory3 = dxgiDevice3.Adapter.GetParent<DXGI.Factory3>())
{
    DXGI.SwapChain1 swapChain1 = new DXGI.SwapChain1(dxgiFactory3, this.device, ref swapChainDescription);
    this.swapChain = swapChain1.QueryInterface<DXGI.SwapChain2>();
}
If a full explanation is too large for the scope of an answer here, any link to get me started on figuring out which C++ DirectX interface maps to which numbered object, and why, is most welcome.
In case it matters, I'm only interested in DX >= 11, and I'm using SharpDX within a UWP project.
SharpDx is a pretty thin wrapper around DirectX, and pretty much everything in DirectX is expressed in SharpDx as a pass-through with some naming and calling conventions to accommodate the .NET world.
Real documentation on SharpDx is essentially nonexistent, so you will have to do what everybody else does. If you are starting with something in SharpDx then look directly at the SharpDx API listings and the header files to understand what underlying DirectX functions are being expressed. Once you have the name of the DirectX function, you can read the MSDN documentation to understand how that function works. If you are starting with something in DirectX, then look first at MSDN to understand how it works and how it's named, and then go to the SharpDx API and header files to find out how that function is wrapped (named and exposed) in SharpDx.
For the specific question you ask, SharpDx device numbering identifies the Direct3D version that is being wrapped.
Direct3D 11.1 device ==> ID3D11Device1 ==> SharpDX.Direct3D11.Device1
Direct3D 11.2 device ==> ID3D11Device2 ==> SharpDX.Direct3D11.Device2
Direct3D 11.3 device ==> ID3D11Device3 ==> SharpDX.Direct3D11.Device3
and so on.
Naturally each version has a slightly different ("improved") interface. Lower version numbers will work pretty much anywhere, and higher version numbers include additional functionality that may require something specific from your video card and/or your operating system. You can read about the API for each version in sections found here.
For example, the description of the new methods added to the ID3D11Device5 interface (i.e., what's new since ID3D11Device4) is here. In this case, Device5 adds the ability to create a fence object and to open a handle for a shared fence.
When example code uses a specific device number, it's usually because the code requires some functionality that wasn't there in a previous version of Direct3D. In general you want to use the lowest numbered device (and factory, etc.) possible, because that will permit your code to run on the widest variety of machines and video cards.
If you find example code that creates a SharpDX.Direct3D11.Device1 but doesn't appear to use any methods beyond those in SharpDX.Direct3D11.Device, it's probably for one of two reasons. First, the author may know that a later example will require a method or field that doesn't exist before Direct3D 11.1. Second, the author may know that every video card and operating system capable of running the example at all will be capable of running Direct3D 11.1.
For a person just starting out, I would suggest you just stick with Direct3D (and Direct2D) version 11.1, thus DXGI.Device1, SharpDX.Direct3D11.Device1 and SharpDX.Direct3D11.DeviceContext1. These are likely to run on any machine you'll encounter. Only increase the version number if you actually need some functionality that doesn't appear in that version.
One additional hint: if you read a thread about some Direct3D or Direct2D functionality and you can't seem to find it anywhere in SharpDx, look at the Direct3D API to see what version number first contains that functionality. Then go through the SharpDx API (or better yet the header files) for that version until you see a similarly named element. It may be wrapped in an unexpected way, but AFAIK it's all exposed, even when you have a hard time finding it.
Here you can find documentation for all SharpDX objects; specifically for DXGI you can look here, where you can see that Device maps to IDXGIDevice.
Note that the word IDXGIDevice is a hyperlink that references the documentation for the C++ interface, and the same goes for Device1, Device2, etc.
You can see that there is a very simple logic here: SharpDX splits the name of the C++ interface into a namespace and a class name.
For example, instead of IDXGIDevice you get namespace DXGI and class name Device.
In the documentation for each C++ interface you can find a Requirements section,
which details the operating systems on which you can use the interface.
The higher the number, the newer the operating system required.
For example, IDXGIDevice1 works under Windows 7, whereas IDXGIDevice3 requires Windows 8.1 or higher.

If WebGL isn't supported by my graphics card by default, why should I use it?

I am looking to make a web-based game. I saw lots of cool libraries that used WebGL (three.js, pixi.js, and more). The problem is that when I run the examples, I get the message "Your video card does not support Web GL."
Now, I could update my video card driver, but I'm not going to do that. Why not? Because the users of my game would never do that. Asking casual users (my target audience) to update their video card driver is a complete non-starter.
Is there a way to use WebGL without alienating a ton of users?
PS: In case you are wondering, my laptop is no slouch. I bought it at Christmas '13 and it runs games like BioShock Infinite flawlessly.
There are generally two reasons a video card driver may be blacklisted for WebGL by browsers:
because it has bugs that would allow malicious web content to exploit the driver/GPU to attack the system, or
because it has bugs or limitations that would result in severely incorrect rendering.
Because of the first, it's unlikely that, as a content author, there's anything you could do to enable WebGL, as that would constitute a vulnerability.
And this is why you can run BioShock but not WebGL: since the game is a regular application, by running it you've already trusted it with your system, so there's little benefit in preventing it from abusing the GPU/driver. Web pages, on the other hand, are not assumed to be trusted in this way. (If you ask me, it's a long-standing architectural mistake that regular applications have such free run, but that's a rant for another day.)
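You can't force WebGL on from the page, but you can at least detect the failure and show a friendlier message; the WebGL spec fires a "webglcontextcreationerror" event on the canvas when creation fails. A rough sketch (the fallback logic is left as a placeholder):
var canvas = document.createElement("canvas");
canvas.addEventListener("webglcontextcreationerror", function (e) {
  // statusMessage may hold the browser's reason for refusing a context; it can also be empty.
  console.log("WebGL context creation failed:", e.statusMessage || "(no details)");
}, false);
var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
if (!gl) {
  // Fall back to a 2D-canvas renderer, static images, or a "please upgrade" notice.
}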

Rendering canvas from Ruby

I have a chart in my current web app that I’ve implemented in canvas. I’d like this to work in IE8, but excanvas doesn’t seem to support translucency or composite operations. My fallback solution is to render a chart on the server as an image and send that out to IE8 instead of rendering it client-side.
I’d assumed there’d be a canvas gem that I could use with a direct port of my JS code to Ruby, but I can’t find anything. Has no-one done this? If not, what would people recommend? It’s not a particularly complex drawing, but I’d like to keep the amount of duplication to a minimum.
(It’s worth pointing out that I’ve considered using a headless Webkit to render and return a data URI, but I expect this would be fairly slow to spin up. Another possibility is to pre-render all the possible charts – somewhere around 120K of them – but that feels like a last resort!).
I haven't found such an implementation.
There is at least one canvas implementation for Node.js. You could use it to write a small Node program that generates the images using the exact same code you're using on the client. It wouldn't be the most efficient solution, but I'd guess it'd be better than using PhantomJS or the like.
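As a rough sketch of that approach, using the node-canvas package (published on npm as "canvas"; treat the exact API as something to verify against its README), the chart could be rendered to a PNG like this:
var fs = require("fs");
var createCanvas = require("canvas").createCanvas;

var canvas = createCanvas(400, 200);
var ctx = canvas.getContext("2d");

// The same 2D drawing calls used in the browser should mostly work here,
// including the translucency and compositing that excanvas can't handle.
ctx.globalAlpha = 0.5;
ctx.fillStyle = "#3366cc";
ctx.fillRect(20, 40, 150, 120);
ctx.globalCompositeOperation = "lighter";
ctx.fillStyle = "#cc3333";
ctx.fillRect(80, 20, 150, 120);

// Write the chart out; the Ruby app can then serve this file to IE8.
fs.writeFileSync("chart.png", canvas.toBuffer("image/png"));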

How do I check if a browser supports HTML5 and CSS3 features using Ruby?

I need to make an if statement in Ruby that checks whether the client's browser supports HTML5 or not.
Short version: you won't be able to, nor should you.
Long version: It may be possible, if you do some user-agent sniffing, to identify whether or not the user's browser supports HTML5. But this would take a fair amount of effort to get right. The better solution is to use something like Modernizr (http://www.modernizr.com/) to do your feature detection on the client side.
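For example, once the Modernizr script is included on the page, each detected feature is exposed as a boolean, so the check lives in JavaScript rather than Ruby (a minimal sketch):
if (Modernizr.canvas && Modernizr.localstorage) {
  // Safe to use <canvas> and localStorage.
} else {
  // Show a fallback UI or an upgrade notice instead.
}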
It's possible to read the browser info from the HTTP_USER_AGENT string, but, as mentioned above and in many other places, it's also really easy to spoof that info. On the server side we only cared because it gave us an overall view of the client browsers being used to access our sites.
Reacting to the browser on the backend and presenting different content was tried by sites for a while, but it fails because browsers spoof other browsers while not sharing the same bugs.
As @Stephen Orr said, CSS is a better way of dealing with it. Sure, it's hell and still error-prone, but it's better than sniffing the browser's signature. We used to cuss every release of IE because it broke the previous fixes. Luckily things seem to be getting better as the vendors creep toward standards compliance.
Most features can be detected (with JavaScript), but some, like the date field on forms, are a problem: http://united-coders.com/matthias-reuter/user-agent-sniffing-is-back
It is possible to do feature detection on HTML5, detecting individual HTML5 features as you need them. There is, however, no way to detect whether a browser supports HTML5 as one big chunk, as there is no "official" way to tell if a browser supports all of HTML5 or just parts of it.
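Hand-rolled, per-feature checks look something like the sketch below (this is essentially what Modernizr bundles up for you; the variable names are just illustrative):
// Canvas support: unsupported browsers have no getContext on the element.
var supportsCanvas = !!document.createElement("canvas").getContext;

// Date inputs: browsers without a native date picker silently fall back
// to type="text", which is why this feature is awkward to detect.
var input = document.createElement("input");
input.setAttribute("type", "date");
var supportsDateInput = (input.type === "date");

if (!supportsCanvas) {
  // Load a fallback such as excanvas or a server-rendered image.
}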
Another trick is to put the upgrade notice inside the HTML5 element itself (a canvas or video tag, for example), so that browsers which don't support the element render the fallback content instead:
< [html5 element] id="somethingtobedazzledby">
Upgrade your browser
</ [html5 element] >
