Why can't virtual reality sets (HTC Vive/Oculus) play standard games?

Visually speaking, the "displayed image" (in the Steam/Vive window) looks very similar to any other game being rendered on the desktop, e.g. Counter-Strike, WoW, etc.
Question: Why is it, then, that these games don't "feel" like being in a VR environment?
Also, programmatically speaking (image rendering, camera angles, depth of field, etc.):
Question: Can a non-VR game work with the VR sets as long as you map the controls to the headset and wands? E.g. headset = joystick; wand buttons = menu; etc.
Thank you.
Edit: Please let me know if you have any reading recommendations on this subject.

Non-VR games simply weren't made for VR. A headset needs two views rendered from slightly offset eye positions and updated with low-latency head tracking, while a desktop game renders a single view for a flat screen.
That said, there are hacks that make non-VR games semi-work in VR. You can check out VorpX for the Oculus Rift, but I don't know of anything similar for the Vive. Expect very big issues and headaches, though.
A lot of things will look wrong, such as missing graphics: almost all games take shortcuts and don't render what you aren't supposed to see. For example, there is no sky in RTS games, and the map ends just beyond the edge of the scrollable area. Or when you're driving a car in a racing game, there is probably nothing more to the car than the dashboard (no seats, no back of the car, etc.). No one was supposed to see those things, so no one made them.
It's even worse with these games' user interfaces: no one had depth in mind when designing them, so you'll get things like an ammo counter that makes you cross your eyes.
I could go on and on; these issues are just the tip of the iceberg.

Related

How can you make an object disappear in ROBLOX over a certain interval?

I'm making a game in ROBLOX which has a cutscene at the start. At the end of the cutscene, the camera zooms in on the character and you spawn in. However, when I spawn in, I can see the dummy I used for the cutscene. How can you make that dummy disappear after a certain interval?
Does the dummy just need to become invisible? If so, every physical object in ROBLOX (more formally, a Part) has a .Transparency property that ranges from 0 (fully opaque) to 1 (fully transparent, in other words invisible). I don't know what your dummy looks like in the object hierarchy, but let's say it is a Model located at workspace.dummy, with a head, torso, left arm, etc. located at workspace.dummy.Head, workspace.dummy.Torso, workspace.dummy.LeftArm, and so on. To make the dummy's Parts invisible once the cutscene's interval has elapsed, you would have code that looks like this:
task.wait(5) -- wait out the cutscene; replace 5 with your cutscene's length in seconds
workspace.dummy.Head.Transparency = 1
workspace.dummy.Torso.Transparency = 1
workspace.dummy.LeftArm.Transparency = 1
...
And so on. This, however, will make the dummy invisible to all players. If you are making a single-player game, that will not matter; in a multiplayer game, though, it could be a problem: making the dummy opaque again to play the cutscene for a new player would make it reappear for every player. If this is a problem for you, there are two things you could do that I know of:
The first and easiest way would be to just have the cutscene take place at a location very far away from where your game occurs; for example, you could shift everything in your cutscene 10,000 studs in the X direction. This would ensure the objects in the cutscene would be out of the render distance of the players playing the actual game, so only the players whose cameras are being manipulated to carry out the cutscene would see it.
The second, more complicated, and not future-proof option involves a very useful bug that is frequently taken advantage of, but that is subject to being fixed at any time since it is not an official feature. This bug is the exploitation of a Camera (or, less commonly, a Message, which is deprecated) to create what are called local parts: Parts visible only to a certain player. How to create local parts, and the benefits and consequences of using them, is a little complicated and beyond the scope of this answer; see the ROBLOX Wiki if you'd like to learn more. Taken directly from the wiki at the time of writing:
Local parts are in no way supported by Roblox. They exploit unspecified replication behaviour - at any given moment, the development team could release an update that changes how Camera and Message instances behave, preventing you from making local parts.

iOS Heavy image switching

I'm developing an app that will showcase products. One of the features of this app is that you will be able to "rotate" the product using your finger (a pan gesture).
I was thinking of implementing this by taking photos of the product from different angles, so when you drag the image, all I have to do is switch the image accordingly. If you drag a little, I switch only one image; if you drag a lot, I switch them in quick succession, making it look like a movie. But I have a concern and a probable solution:
Is this performant? Since it's an art/museum product showcase, the photos will be quite large in size/resolution, and loading/switching when dragging a lot might be a problem because it would cause flickering. My solution would be: instead of loading the images one by one, I would put them all inside one massive sheet and work through them as if they were a sprite sheet.
Is that a good idea? Or should I stick with the image-by-image rotation?
Edit 1: There's a complication: the user will be able to zoom in/out and rotate the product on any axis (X, Y, and Z).
My personal opinion: I don't think this will work the way you hope, or the performance and/or aesthetics will not be what you want.
1) Taking individual shots that you then try to keyframe based on touch events won't work well, because inevitable inconsistencies in framing the shots will keep the playback from being smooth.
2) The best way to do this, I suspect, is to shoot video, using a rig that keeps the camera fixed while rotating the object.
3) I'm pretty sure this is how most professional-grade product-carousel presentations work.
4) Even then you will have more image frames than you need. I'm not sure whether you plan to embed the image files in the app or download them on demand, but that is also a consideration in terms of how much downsampling you'll need to do to reduce the frame count/file size.
Suggestion
Look at shooting these as video (somewhat as described above), then downsampling and removing excess frames in a video editor. You could then use AVFoundation for playback and use your gestures to "scrub" through the video frames. I worked on something like this for HTML playback at a large company, and I can assure you it was done with video.
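If you'd rather script that downsampling step than do it by hand in a video editor, here is a minimal sketch in Python using OpenCV; the file name, keep ratio, and target width are placeholder assumptions to adjust:

import cv2
import os

VIDEO_PATH = "turntable.mp4"  # hypothetical input video of the rotating product
OUT_DIR = "frames"            # hypothetical output directory for the kept frames
KEEP_EVERY = 4                # keep one out of every four frames
TARGET_WIDTH = 1024           # downsample each kept frame to this width

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)
index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    if index % KEEP_EVERY == 0:
        h, w = frame.shape[:2]
        resized = cv2.resize(frame, (TARGET_WIDTH, int(h * TARGET_WIDTH / w)))
        cv2.imwrite(os.path.join(OUT_DIR, "frame_%04d.jpg" % saved), resized)
        saved += 1
    index += 1
cap.release()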
Alternatively, if video won't work for you, your sprite-sheet solution might work (consider using SpriteKit). But keep in mind what I said about trying to keyframe one-off camera shots together: it just won't work well. A compromise might be to shoot static images, but do so by fixing the camera and rotating the object in very specific increments. That could work as well, I suppose, but you will need to be very careful about lighting and other atmospherics. It doesn't take much variation at all to be detectable to the human eye, making the whole presentation seem strange. Good luck.
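If you do go the sprite-sheet route, here is a hedged sketch for packing those frames into a single sheet, assuming Pillow is installed, all frames share the same dimensions, and the directory name matches the extraction step above:

import math
import os
from PIL import Image

FRAME_DIR = "frames"  # hypothetical directory of same-sized frame images
names = sorted(os.listdir(FRAME_DIR))
w, h = Image.open(os.path.join(FRAME_DIR, names[0])).size
cols = math.ceil(math.sqrt(len(names)))  # lay frames out in a near-square grid
rows = math.ceil(len(names) / cols)

sheet = Image.new("RGB", (cols * w, rows * h))
for i, name in enumerate(names):
    with Image.open(os.path.join(FRAME_DIR, name)) as img:
        sheet.paste(img, ((i % cols) * w, (i // cols) * h))
sheet.save("sheet.png")

One caveat: GPUs cap texture dimensions (often 4096 px on older iOS hardware), so a truly massive sheet may have to be split into several smaller ones.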
A coder at my company did something like this before using 360 images of an object, and it worked just great, but it didn't have zoom. Maybe you could add zoom with a pinch-gesture recognizer, placing the image view inside a scroll view to zoom in on the static image.
This scenario sounds like what you really need is a simple 3D model loader library, or to write it in OpenGL yourself. Pan and zoom behavior is really basic once you make the jump to 3D, so it should be easy to find lots of examples.
All depends on your situation and time constraints :)

XNA game on Xbox screen dimming

I have made a game for the Xbox 360 using XNA, and while testing the game the screen seems to dim every 30 seconds. The way it dims is as if I had been away from the Xbox for a while. If I press the Xbox Guide button, it goes back to normal. I've tried googling this issue and found a few people who have had the same problem, but I couldn't find any replies on those posts. If anyone knows what the issue is and how I could fix it, that would be a great help, as this is the last kink I'm trying to resolve in my game.
Just to convert my comment into an answer:
Many modern TVs and some monitors have a "dynamic contrast" feature where, if the displayed image is predominantly black, the intensity of the backlight will be reduced.
(Often it's really annoying, just making high-contrast black-background scenes go dark for no reason.)
The backlight can also be turned down as a power-saving feature, kind of like a screen-saver.
You could test on a monitor without this feature, disable this feature in your TV settings, or use a scene that isn't so dark.

Show an application on top of a DirectX game?

Is it possible to build an application that displays itself on top (TopMost) even while a game is running (Quake, Far Cry, Black Ops, or any DirectX-driven game)?
I would like to be able to record my key presses while I play a game, for video recording.
It must be possible, because FRAPS displays the FPS on top of everything that uses DirectX, including video players.
Any thoughts?
First of all, it isn't that easy. FRAPS works with API injection, inserting some of its own code into the programs' drawing steps, and it has to take the many different versions of DirectX and OpenGL into account. I found a link where this is explained in a little more detail: Case Study: Fraps.
Maybe a simpler solution is to run the game in windowed mode and capture the input with global hooks, but I have never tried anything in this direction. If you want to work with API hooks, maybe this link will be useful: Direct3DHook.
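If capturing the key presses globally is all you need, rather than drawing on top of the game, here is a minimal sketch using the third-party pynput library (an assumption; any global-hook library would do). Note that games reading raw input directly can bypass such hooks:

from datetime import datetime
from pynput import keyboard

def on_press(key):
    # Append each key press with a timestamp to a log file
    with open("keylog.txt", "a") as f:
        f.write("%s %s\n" % (datetime.now().isoformat(), key))

# The listener installs a system-wide keyboard hook, so it sees key
# presses even while the game window has focus.
with keyboard.Listener(on_press=on_press) as listener:
    listener.join()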

OpenCV tracking people from overhead view

I have a broad but interesting OpenCV question and I'm wondering where to start.
I am looking for any strategies or white papers that might help.
I need to get the position of people sitting at a conference table from a fixed overhead view. Ideally, I will assign a persistent ID to each person, and maintain a list of people with ID and coordinates. This problem could be easy in a specific case - for example, if designed for a single conference room table - but it gets harder in the general case, especially with people entering and leaving the scene.
My first question: is it a detection or a motion tracking problem? Or some combination of the two?
Well, it seems like both to me. I would think you would need to take a long-running average of the visible area, which becomes the background. Then, based on that background model, you can track the movement of other objects.
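To make that concrete, here is a rough sketch of the background-subtraction idea using OpenCV in Python (4.x return signatures); the camera index and blob-area threshold are assumptions to tune for your overhead rig:

import cv2

cap = cv2.VideoCapture(0)  # assumed overhead camera
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # pixels that differ from the learned background
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    mask = cv2.medianBlur(mask, 5)  # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 2000:  # ignore blobs too small to be a person
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("overhead tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

From there, a simple persistent ID could come from matching each blob to the nearest blob in the previous frame (centroid tracking), though merging and splitting, as noted next, will still confuse it.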
Assigning an ID may become difficult if objects merge together (at least as far as the camera is concerned) and then separate again, say, when someone removes a hat, places it down, and puts it back on.
With all that in mind, though, it is possible, even if it presents a challenge. I once saw a similar project tracking people in a train station using a similar approach (it was presented in a lecture, so I can't provide a link, sorry).
