Is there any generic way to enable frame-sequential stereo with WebGL content?
(I'm not talking about hard-coding this into the presentation/game.)
I can use shutter glasses with anything that can display two images in sync with the monitor's refresh rate (e.g. Bino, the Blender Game Engine). These applications use standard OpenGL quad-buffering.
How would one go about getting stereo to work with something like, say, the fish tank demo?
I understand that on Windows one can emulate WebGL on top of DirectX and somehow force stereoscopy that way. I'm a casual Linux user and therefore have no access to this. There shouldn't be a need for hacks like that just to get stereo.
Would it even be possible to write, say, an add-on for Firefox that hijacks the camera in any WebGL scene to enable this functionality?
I'm working on an application where the concept is that you can 'select' objects before actually placing them. What I wanted to do was display some low-quality objects on a shelf or something similar. When the user selects an object, he can then tap to place the high-quality version of it in his area for further viewing.
I was wondering if this is possible with Vuforia. I wanted to use this platform since, from what I could tell, it works well, and it's cross-platform (the application needs to run on Android and the HoloLens).
I have set up the basic application where you can place a capsule in the area. Now I want to place the object (in this case a capsule) automatically once Vuforia has detected a ground plane. From what I could see, the plane finder has events that fire when an input is detected, but I couldn't find an event that fires when the ground plane itself is detected. Is this still possible with Vuforia? I know it's doable on the HoloLens, but I would like to know whether it's possible on Android or other mobile devices. I really don't know where to start looking, so I hope someone can point me in the right direction.
Let me know if I need to include more information!
The Vuforia PlaneFinderBehaviour (see the documentation here) has the event OnAutomaticHitTest, which fires every frame while a ground plane is detected.
So you can use it to automatically spawn an object.
You have to add your method to the On Automatic Hit Test list of the "Plane Finder" instead of the On Interactive Hit Test list.
I've heard that Vuforia Fusion does not yet support ARCore (it does support ARKit), so it uses an internal implementation to simulate ARCore's functionality; they are waiting for a final release of ARCore before supporting it. Many users have reported that their objects move even when they use an ARCore-supported device.
Is it possible to use Vuforia without a camera for image tracking?
Basically, I would like a function I could call with an image as an input parameter that returns the coordinates of an image target as the result. Does that exist?
It is unfortunately not possible. I've looked for such an option myself several times while working on a Moodstocks (image recognition SDK) / Vuforia mashup (see these 2 blog posts if you are interested in it), but the Vuforia SDK prevents the use of any source other than the camera.
I guess the main reason for this is that camera management is handled fully internally by the Vuforia SDK, probably to make it easier to use, since managing the camera ourselves is at best a boring task (lines and lines of code to repeat in each project...) and at worst a huge pain in the ass (especially on Android, where some devices don't behave as expected).
By the way, it looks to me like the Vuforia SDK is not the best solution you can find for your use case: it is mainly an augmented-reality SDK, focused on real-time tracking, which implies working with a camera stream... so using it for "simple" image recognition is really overkill!
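That said, if all you need is the coordinates of an image target within a still image, a plain computer-vision library can do it. As a rough, hypothetical sketch (this is OpenCV, not anything from the Vuforia SDK), feature matching plus a homography gives you the target's corners in the scene:

```cpp
// Sketch: locate an image target inside another image with OpenCV.
// This is an assumed alternative to Vuforia, not Vuforia's API.
#include <opencv2/opencv.hpp>
#include <vector>

// Returns the four corners of `target` as found in `scene`,
// or an empty vector if matching fails.
std::vector<cv::Point2f> findTarget(const cv::Mat& target, const cv::Mat& scene) {
    auto orb = cv::ORB::create(1000);
    std::vector<cv::KeyPoint> kpT, kpS;
    cv::Mat descT, descS;
    orb->detectAndCompute(target, cv::noArray(), kpT, descT);
    orb->detectAndCompute(scene, cv::noArray(), kpS, descS);

    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<cv::DMatch> matches;
    matcher.match(descT, descS, matches);
    if (matches.size() < 4) return {};   // a homography needs at least 4 points

    std::vector<cv::Point2f> ptsT, ptsS;
    for (const auto& m : matches) {
        ptsT.push_back(kpT[m.queryIdx].pt);
        ptsS.push_back(kpS[m.trainIdx].pt);
    }
    cv::Mat H = cv::findHomography(ptsT, ptsS, cv::RANSAC);
    if (H.empty()) return {};

    // Project the target's corners into scene coordinates.
    std::vector<cv::Point2f> corners = {
        {0, 0}, {(float)target.cols, 0},
        {(float)target.cols, (float)target.rows}, {0, (float)target.rows}};
    std::vector<cv::Point2f> projected;
    cv::perspectiveTransform(corners, projected, H);
    return projected;
}
```

If matching succeeds, the returned quadrilateral is the target's position in the scene image, which is essentially the function asked for above.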
I need to simultaneously display a video that is playing in my application full screen on a larger monitor. On some video cards this is called Theater Mode, and it is configured using a tool that the card manufacturer supplies.
I would like to do this with only software. Can I do this with DirectX?
My idea is to take the video currently playing via DirectShow and repaint it on a second display (as configured by the user) in full-screen mode.
What technologies or methods would I use for this?
The straightforward way is to split the still-encoded video into two branches and use two video renderers, each set to present video on a different monitor. One renderer could be part of your application UI; the other could expand full screen on the large secondary monitor.
Splitting the encoded video gives you the option to still leverage hardware-assisted decoding (DXVA) where available. You might prefer to use a software-only decoder and split the already-decoded video instead - this will also work.
You might additionally want to implement a filter that can temporarily disable one renderer or the other, for example by stopping media samples from passing through.
Another thing you can do is use bridging to control the renderers even more flexibly and to be able to detach them from the media source.
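For illustration, here is a trimmed C++ sketch of that graph, using the stock Infinite Pin Tee filter. It is only an outline: error handling, COM cleanup, and the per-renderer window positioning are omitted, the file path is a placeholder, and where exactly you branch (before or after the decoder) depends on the formats involved.

```cpp
// Sketch: one source split through the Infinite Pin Tee into two
// video renderers inserted by Render(). Not production code.
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Return the first unconnected pin of the requested direction.
IPin* GetUnconnectedPin(IBaseFilter* filter, PIN_DIRECTION wanted) {
    IEnumPins* pins = nullptr;
    filter->EnumPins(&pins);
    IPin* pin = nullptr;
    while (pins->Next(1, &pin, nullptr) == S_OK) {
        PIN_DIRECTION dir;
        pin->QueryDirection(&dir);
        IPin* peer = nullptr;
        if (dir == wanted && pin->ConnectedTo(&peer) == VFW_E_NOT_CONNECTED) {
            pins->Release();
            return pin;                       // caller releases
        }
        if (peer) peer->Release();
        pin->Release();
    }
    pins->Release();
    return nullptr;
}

int main() {
    CoInitialize(nullptr);

    IGraphBuilder* graph = nullptr;
    CoCreateInstance(CLSID_FilterGraph, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void**)&graph);

    IBaseFilter* source = nullptr;
    graph->AddSourceFilter(L"C:\\path\\to\\video.avi", L"Source", &source);

    IBaseFilter* tee = nullptr;
    CoCreateInstance(CLSID_InfTee, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&tee);
    graph->AddFilter(tee, L"Tee");

    // Intelligent Connect inserts whatever parser/decoder is required.
    graph->Connect(GetUnconnectedPin(source, PINDIR_OUTPUT),
                   GetUnconnectedPin(tee, PINDIR_INPUT));

    // Render each tee branch; the graph builder adds one video renderer
    // per branch. Afterwards, query each renderer filter for IVideoWindow
    // to place one window in your UI and the other fullscreen on the
    // secondary monitor.
    graph->Render(GetUnconnectedPin(tee, PINDIR_OUTPUT));
    graph->Render(GetUnconnectedPin(tee, PINDIR_OUTPUT));

    IMediaControl* control = nullptr;
    graph->QueryInterface(IID_IMediaControl, (void**)&control);
    control->Run();
    // ... message loop here; then stop the graph and release everything.
    return 0;
}
```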
I want to write something like this: http://www.youtube.com/watch?v=5S4KpCkHDqM I mean, I want a 2D gaming space that is stylized as 3D, so my characters will move on a surface but will have a nice 3D effect. I wonder if Flash/ActionScript would do? Any other suggestions?
Flash and ActionScript can definitely accomplish this. There are at least two ways to achieve the 3D look in 2D space.
The easiest is to do as Blender said in the comments: render some 3D images and bring them into Flash. Flash has easy tools for creating animated sprites, including the native MovieClip class, which has a timeline for playing back frame-based animation.
But there is also full 3D in Flash. You can bring low-polygon 3D models into Flash easily using free and open-source libraries such as Away3D (away3d.org) and Papervision3D (papervision3d.org). Presently, Flash Player 10 runs slowly when using these libraries.
But Adobe is about to release a new version of the player (version 11) that supports OpenGL for 3D and brings significant performance improvements.
Away3D and Papervision3D have already developed versions of their libraries to support the new beta player and OpenGL.
So to summarize: yes, Flash can make a game like that. It is currently the best way to develop games intended to be played in a browser, since at least for the time being it has the most widespread support and is stable across platforms and browsers.
Your example is pretty much entirely 2D: it just uses effects like shadows, animation and parallax scrolling between layers to achieve a (mildly) 3D effect.
As Plastic Sturgeon and Blender have pointed out, Blender might help for creating your assets - but it has a pretty steep learning curve, and you might be more comfortable 'faking it' in Adobe Illustrator or Photoshop if you've used those before.
Once you've created your assets, you need a platform to put together your gameplay: Flash is one possibility, but you could also look at Unity3D, which has good support for 2D and 3D, and has a browser plug-in if you want to make your game web-based.
If you're looking for a Java-based solution, you could try Processing, which is cross-platform and can export to JavaScript for web deployment. It's not exactly designed as a gaming environment, but it might do the trick - and it's free.
Hope this helps.
I want to make something like they have at US DMVs, where you sit down and it takes your picture - maybe like Photo Booth.
I want to connect a high-end camera via USB, fire the camera, and get the picture.
There's the Picture Transfer Protocol (http://en.wikipedia.org/wiki/Picture_Transfer_Protocol), a nasty little thing. All the cameras I've held in my hands so far that claimed proper PTP support failed it somewhere. But in theory one can use PTP to remote-control a camera, i.e. trigger the shutter, retrieve the picture, and so on.
Rather than reimplementing the whole thing, I recommend you get a readily usable PTP library. There are some open-source ones listed at http://ptp.sourceforge.net.
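As an illustration, libgphoto2 is one widely used open-source library that speaks PTP (among other protocols) - a different library than the ones listed on that page, but the idea is the same. A minimal tethered-capture sketch, assuming the first detected camera supports remote capture, could look like this:

```cpp
// Minimal tethered-capture sketch with libgphoto2 (speaks PTP, among
// other protocols). Assumes the first detected camera supports remote
// capture; error handling is omitted for brevity.
// Build (Linux): g++ capture.cpp $(pkg-config --cflags --libs libgphoto2)
#include <gphoto2/gphoto2.h>

int main() {
    GPContext* context = gp_context_new();
    Camera* camera = nullptr;
    gp_camera_new(&camera);
    gp_camera_init(camera, context);      // autodetects the first camera

    // Trigger the shutter; the camera reports where it stored the file.
    CameraFilePath path;
    gp_camera_capture(camera, GP_CAPTURE_IMAGE, &path, context);

    // Download the picture from the camera and save it locally.
    CameraFile* file = nullptr;
    gp_file_new(&file);
    gp_camera_file_get(camera, path.folder, path.name,
                       GP_FILE_TYPE_NORMAL, file, context);
    gp_file_save(file, "capture.jpg");

    gp_file_unref(file);
    gp_camera_exit(camera, context);
    gp_camera_unref(camera);
    return 0;
}
```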
The easiest method is probably to use OpenCV: http://opencv.willowgarage.com/wiki/
If you need a high-end camera: most digital SLRs have a tethered mode where you can control the camera, fire the shutter, and retrieve the image data. Each camera maker has a proprietary (but normally free) SDK.
For a webcam-type camera: these normally run in video mode, and you simply grab an image out of the video stream - as PaulR says, use OpenCV.
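For the webcam case, grabbing a single frame out of the stream with OpenCV is only a few lines (the device index 0 below is an assumption; adjust it for your setup):

```cpp
// Grab a single still image out of a webcam's video stream with OpenCV.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);          // device index 0 is an assumption
    if (!cap.isOpened()) return 1;    // no camera found
    cv::Mat frame;
    cap >> frame;                     // pull one frame from the stream
    if (frame.empty()) return 1;
    cv::imwrite("snapshot.png", frame);
    return 0;
}
```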