Why is WebGL Acting as a Wrapper for Direct3D? [duplicate]

Under Windows, there are two main 3D libraries. I am wondering which one WebGL uses. Is it configurable? Is it configurable per browser?

Google Chrome and Firefox will by default make use of the ANGLE wrapper to convert OpenGL API calls to Direct3D 9.0 (to achieve better compatibility with most hardware). Users can change this default behavior, but overriding it is quite inconvenient (currently it's not possible to change this setting programmatically).
All other major browsers (on Windows) will use OpenGL.
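You cannot choose the backend from page JavaScript, but you can usually find out which one is in use. The following is a minimal sketch, assuming the browser exposes the optional WEBGL_debug_renderer_info extension (some browsers mask it); on an ANGLE-backed browser the unmasked renderer string typically mentions ANGLE and Direct3D:

```javascript
// Minimal sketch: ask the browser which renderer backs its WebGL implementation.
// The WEBGL_debug_renderer_info extension is optional and may be masked,
// so treat a missing extension as "unknown".
var canvas = document.createElement('canvas');
var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
if (gl) {
  var dbg = gl.getExtension('WEBGL_debug_renderer_info');
  var renderer = dbg ? gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL) : 'unknown';
  // On Chrome/Firefox under Windows this string usually contains "ANGLE (...)"
  // when the Direct3D-backed path is in use.
  console.log('WebGL renderer:', renderer);
} else {
  console.log('WebGL is not available in this browser');
}
```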

I am wondering which one WebGL uses?
Depends on the browser and the OS.
Is it configurable?
Depends on the browser.
Is it configurable per browser?
You mean from JavaScript? No.
But why do you care?
Chrome and Firefox use ANGLE so that they work out of the box on a Windows system with only the default drivers supplied by Microsoft installed. For a proper OpenGL implementation, the user needs to have downloaded and installed the original drivers from the hardware vendor. If not, all you get is a rather poor OpenGL 1.4 implementation/emulation built on top of Direct3D 9.

Related

iOS: HTML5 with Three.js development

I am planning to develop HTML5 pages with Three.js so that I can build 3D screens and easily support mobile platforms in a cross-platform way. I read that Three.js uses WebGL, but that iOS and Android browsers don't support it, so I cannot develop such apps.
Could someone please advise whether or not I can develop HTML5 pages with Three.js to achieve 3D interactions on iOS and Android as a web app? If not, please give me official links where it is stated that this is not supported.
Thank you.
Three.js will work fine without WebGL. You can use the Canvas renderer -- it's described in the Three.js documentation, and I'm sure you can find it on GitHub using Google or even Bing. It's not as fast as using WebGL, but one doesn't race Monte Carlo in a Nissan Sentra.
You should not expect high-performance 3D on any mobile platform -- mostly because JavaScript performance can be numbingly slow. Try accessing some of the various Three.js examples using mobile devices (both with and without WebGL) to get some idea of performance.
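As a rough illustration of the fallback described above, here is a minimal sketch for older Three.js releases that still shipped THREE.CanvasRenderer in the core (in later versions it was moved out of the core and has to be included separately):

```javascript
// Sketch for an older Three.js build that still includes THREE.CanvasRenderer.
// Uses WebGL when the browser supports it; otherwise falls back to the
// (slower) canvas renderer.
function createRenderer() {
  var canvas = document.createElement('canvas');
  var hasWebGL = false;
  try {
    hasWebGL = !!(window.WebGLRenderingContext &&
                  (canvas.getContext('webgl') ||
                   canvas.getContext('experimental-webgl')));
  } catch (e) {
    // leave hasWebGL as false
  }
  return hasWebGL ? new THREE.WebGLRenderer({ antialias: true })
                  : new THREE.CanvasRenderer();
}

var renderer = createRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
```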
I tried to run the Three.js Canvas demos on my iPad Air and it was awful. Usually Safari crashes, and even when rendering works it runs at 2-3 fps. So it seems very rough on iOS devices right now.
iOS Safari and Android Chrome have supported WebGL for several years now.
The issue you might be referring to is that Android's WebView or iOS's UIWebView (the control you can put in a native app to display a webpage) may or may not support WebGL. WebGL is apparently supported in WebViews as of Android 5.0, although people have run into issues.
Another option is to use something like CocoonJS. They apparently provide custom WebView implementations for Android 4.0+ and iOS 8+.
As for performance, three.js will perform just fine on simple scenes on mobile. There are plenty of examples. 3D on mobile in general, though, requires optimized assets (lower-polygon models, smaller textures, simpler shaders, fewer lights, etc.) relative to desktop, particularly with three.js.
iOS and Android browsers don't support WebGL as well as desktop browsers do; sometimes you can reduce the rendering effects to make it work, and that usually does the trick. For apps, the Android WebView's rendering ability is low; iOS has a better one.
You can also plug a rendering engine into your app, such as XWalkView. Here is a post of mine on how to use it to render 3D models with three.js in my Android 4.2.0 app. For iOS there are also relevant solutions.
The chosen answer to this question recommends using the Three.js CanvasRenderer, but as shown here in the Three.js documentation, CanvasRenderer has been deprecated:
NOTE: The Canvas renderer has been deprecated and is no longer part of the Three.js core.
It is important to note that it can still be used for simple 3D scenes and can be found on GitHub here.
WebGL is supported in Android's WebView v36 and above, as seen here.
Alternatively, with older Android Chrome browsers, WebGL can be enabled by typing chrome://flags into the browser and selecting the Enable link that appears under the Enable WebGL header, or by updating WebView from the Google Play Store.
WebGL has been supported on Mobile Safari since iOS 8.0b.
At mobilehtml5.org you can see what mobile device versions support specific HTML5 features:
Sony Xperia Android 2.3 also supports WebGL.

DirectX, GDI+ or SDI on Windows XP?

If I want to do scaling and compositing of 2D anti-aliased vector and bitmap images in real-time on Windows XP and later versions of Windows, making the best use of hardware acceleration available, should I be using GDI+ or DirectX 9.0c? (Actually, Windows XP and Windows 7 are important but we're not concerned about performance on Vista.)
Is there any merit in using SDL, given that the application is not cross-platform (and never will be)? I wonder if SDL might make it easier to switch to whichever underlying drawing API gives better performance…
Where can I find the documentation for doing scaling and compositing of 2D images in DirectX 9.0c? (I found the documentation for DirectDraw but read that it is deprecated after DirectX 7. But Direct2D is not available until DirectX 10.)
Can I reasonably expect scaling and compositing to be hardware accelerated on Windows XP on a mid- to low-spec PC (i.e. integrated graphics)? If not then does it even matter whether I use GDI+ or DirectX 9.0c?
Do not use GDI+. It does everything in software, and its rendering model is not well suited to fast software rendering either. You'd be better off with just about anything else.
Direct3D or OpenGL (which you can access via SDL if you want a more complete API that is cross-platform) will give you the best performance on hardware that supports it. Direct2D is in the same boat but is not available on Windows XP. My understanding is that, at least in the case of Intel's integrated GPUs, the hardware is able to do simple operations like transforming and compositing, and that most of the problems with these GPUs are with games that have high demands for features and performance and are optimized for ATI/Nvidia cards. If you somehow find a machine where Direct3D is not supported by the video card and is falling back to software, then you might have a problem.
I believe SDL uses DirectDraw on Windows for its non-OpenGL drawing. Somehow I got the impression that DirectDraw does all its operations in software in modern releases of Windows (and given what DirectDraw is used for it never really mattered since the win9x era), but I'm not able to verify that.
The ideal would be a cross-platform vector graphics library that can make use of Direct3D or OpenGL for rendering, but AFAICT no such thing is available. The Cairo graphics library lacks acceleration on Windows, and Mozilla has started a project called Azure that apparently has that but doesn't appear to be designed for use outside of their projects.
I just found this: 2D Rendering in DirectX 8.
It appears that since Microsoft removed DirectDraw after DirectX 7 they expected all 2D drawing to be done using the 3D API. This would explain why I totally failed to find the documentation I was looking for.
The article looks promising so far.
Here's another: 2D Programming in a 3D World

Is Blackberry WebWorks a good development choice?

This is kind of a dumb question, but I'm aware of classic-style JDE development for BlackBerry and I've never tried using WebWorks. The BlackBerry website says that it's possible to build applications for both smartphones (OS 6.0+) and tablets - sounds fantastic, but what's the price?
Is there anyone using WebWorks on a daily basis who can describe the pros and cons?
Thanks in advance
I would suggest using it if you have built webOS applications before. It makes porting to the BlackBerry a breeze.
Use WebWorks if you know HTML5, CSS3, and JavaScript better than Java and C++.
I haven't run into any issues with WebWorks; I ported two applications without any problems. It's the standard HTML5, CSS3, and JavaScript you love, with BlackBerry APIs.
WebWorks is a good development choice, particularly as it allows easy migration from earlier BB OSes to BB10. It's mostly standard web technologies (HTML5, CSS3, etc.), and the team seems focused on making it perform well (e.g. hardware-accelerated WebGL graphics) while at the same time providing BlackBerry-specific APIs to make WebWorks apps capable and with good UX (e.g. you can make it look like a native app).
For native apps, you should look into Cascades. This is a modern development environment with good tooling, accelerated graphics, and APIs for building snazzy apps. It's the option that will feel most like a "BlackBerry app".
AIR remains an option, but I would recommend WebWorks over AIR, as even Adobe is migrating from Flash to web technologies. Likewise, you can develop Android apps on BB10, but unless you are keen on Java programming, you will get more cross-platform support from WebWorks (or even AIR) so there's no particular reason to go the Android route.
The WebWorks API is limited; for example, it does not have sockets, so you cannot port a VNC client (UltraVNC, TightVNC, ...) to it, but you can with the JDE.
For UI, WebWorks allowed me to write a UI of acceptable quality quickly and easily, something I never managed with the JDE.
Still on the UI side, I can make use of multi-touch (PlayBook); I don't think that is possible with the JDE.
So depending on your needs you should go with either WebWorks or native. I have heard that Java may not be supported in BB10, and AIR may not be future-proof (Adobe favors HTML5 over Flash). Android applications have some lag on startup when run on the PlayBook, and some customers are sensitive to even a one-time slow initial response.
I'm a huge proponent of WebWorks. Ever since I started using it, it quickly became the default option for my apps going forward. Especially for someone like me who is just writing a few apps on the side, I don't have the time to do it in C++.
The apps I'm writing revolve around home automation. They are client/server based from the get go.
Here's why I like it:
First and foremost, native API support. I can very easily create my own Active Frames and import invocations from other apps (think camera, stuff like that). I can export portions of my WebWorks app as an invocation card! That means I can write, say, three unique apps (in this case home automation: lights, thermostat, security cameras) and very easily pull features from each app into the others. Maybe I want to turn my lights on in the living room; I can also import the camera card from my IP-cam app and view the results, without having to add that code into my lights app and maintain two separate code lines.
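As a rough sketch of what invoking another app's card looks like from BlackBerry 10 WebWorks JavaScript (the target id and data below are made-up placeholders; a real app declares its own invocation targets in its config.xml):

```javascript
// Hedged sketch of the BB10 WebWorks invocation API.
// "com.example.ipcam.viewer" is a hypothetical target id for illustration only.
blackberry.invoke.invoke({
  target: 'com.example.ipcam.viewer',
  action: 'bb.action.VIEW',
  data: 'camera=livingroom'   // payload format is app-specific
}, function onSuccess() {
  console.log('Invocation delivered');
}, function onError(err) {
  console.log('Invocation failed: ' + err);
});
```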
Rapid design. Since I've been dabbling in HTML since I was a kid, it's now very easy for me to whip up an appealing UI in little time. Because web engines these days offer good graphics performance, I can also make apps that behave very fluidly.
Considering the time it takes to make something beautiful, it's hard for me to leave WebWorks and go for something in C++. The other big plus is that the apps I'm making are often intended for multiple devices, namely an app on my phone that is also hosted on my personal website. By maintaining two slightly different CSS files, most of the time I need no code changes, just a different CSS file depending on whether it's a phone or a PC (exactly what you'd do if you were developing a regular old website).
For that matter, I actually don't put my code on the device; I host all of my HTML, JavaScript, images, etc. on my server. The WebWorks app is just a config.xml pointing its source to my server, plus an icon. It's a glorified website bookmark on the home screen; the only difference is that I can use the native APIs and there's no browser bar in the app.
Also, this way I can still continue to edit the same single codeline on my server, and instantly apply changes to the in-browser app and the on-device app.
This is especially cool if you're designing an app where all of its data is out in the "cloud", say you work for a publication and you want to write a magazine app which pulls content from your servers on the net.

Porting Windows demo apps to WinCE/XP Embedded

We have a range of PC demonstration programs for our microcontroller products. The programs typically connect to a USB HID chip on the microcontroller board. The USB chip acts as a communications bridge, allowing the programs to communicate with the micros over SPI/I2C/UART. The programs can configure the micros, and get back status information to display to the user.
We are now looking to build some standalone demonstrations using single board PCs. We would like to reuse as much as possible of our existing demo app source code. Ideally, we could just run them as-is.
Does anybody have any advice on the best way forward? The basic options seem to be WinCE or XP Embedded boards. WinCE boards seem to pull less power, which would be an advantage from a battery life point of view.
Our existing demos are built either in C++ under Borland Builder, or in Delphi.
Thanks in advance.
EDIT: see my answer below with info from a board vendor.
Free Pascal/Lazarus can compile some forms of Delphi apps to WinCE/ARM, even visual ones.
There isn't a Delphi version for WinCE, so you would need to rewrite the applications. The same applies to Borland Builder's control libraries. Only if you have used the plain Win32 API would you be able to port your application to WinCE easily. You may also encounter problems with the hardware-access part. The serial port driver may not work as-is. Also, you need to find a WinCE board that can act as a USB host and provides HID drivers (this isn't very common).
In conclusion, I believe that you would be better off with Windows XP Embedded boards. These should run your applications as they are.
As an update, and for future reference, I thought I'd post the results of our discussions with a WinCE board vendor here. Caveat: I haven't actually tried any of this.
The bottom line is that there isn't a straightforward way to do what we were hoping for (i.e., re-compile our existing demo applications to run under WinCE). The reason is that the generic HID drivers and standard APIs that exist in desktop flavours of Windows just aren't there in WinCE.
To talk to HID devices in WinCE you need to implement a custom HID driver. This needs to support an interface allowing user mode applications to communicate with the driver, and to construct HID reports to be sent to the physical device. As this interface would itself be custom, application code needs to be updated accordingly.
WinCE application development is generally done using Visual Studio and the Microsoft compilers. The approach recommended to us was:
Create a custom HID class driver. This could be based on, for instance, the Microsoft keyboard HID driver.
Create an API for talking to the driver.
Use .NET to create our GUI applications, and use P/Invoke to actually talk to the API.
The end result of all this head-scratching is that to avoid the time and learning curve associated with this approach, we're going to go for a board running XP. We can then use our existing demo applications straight out of box. The trade-off is that we'll have to live with substantially reduced battery life.

Anyone ever tried to develop in C or C++ for Blackberry platforms?

Every indication I have, based on my experience in embedded computing is that doing something like this would require expensive equipment to get access to the platform (ICE debuggers, JTAG probes, I2C programmers, etc, etc), but I've always wondered if some ambitious hacker out there has found a way to load native code on a Blackberry device. Anyone?
Edit: I'm aware of the published SDK and its attendant restrictions. I'm curious if anyone has attempted to get around them, and if so, how far they got.
I've seen this question pop up in a number of different forums over time. The original BlackBerries were programmable in C++, but I think RIM ran up against the problems of trying to implement a secure platform in the C/C++ compile-to-native paradigm.
The devices do have JTAG ports, but unless one could get hold of the RIM code as a place to start, the problem is enormous.
I also have to wonder how useful a BlackBerry with a replacement FOSS operating system would be, since it would not likely have the protocols to connect to BES or BIS, send PINs, etc. If one were simply looking for the power of a handheld computing platform, I suspect there are many more likely candidates available.
No, C++ is no longer a supported RIM development tool, as they phased it out a number of years ago. Client applications can be developed in Java (or one of a few 5GL frameworks), and web and server-side apps can be developed using standard tools.
For those looking for updated information: the new PlayBook OS, also known as QNX, also known as BlackBerry 10 (or it will be when the phones running it come out), is in fact C/C++ based, also using QML and a C++ add-on called Cascades.
Unfortunately the official SDK website only seems to mention Java. According to Wikipedia, different versions of the BlackBerry use different processors. Combined with the fact that RIM uses a proprietary operating system for the devices, it becomes pretty difficult to develop native code without official tools. There is also a partial API-level security restriction which would further prohibit advanced tinkering.
Just randomly searching for an answer to this, I came across http://supportforums.blackberry.com/t5/Tablet-OS-SDK-for-Adobe-AIR/Native-C-C-SDK/td-p/778009 which mentions that BB intends to release a C/C++ SDK soon; more details will be provided at the 2011 Game Developer Conference.