Is it possible to build an application that displays itself (TopMost) even when a game is running (Quake, Far Cry, Black Ops, any DirectX-driven game)?
I would like to be able to record my key presses while I play a game for video recording.
It must be possible, because FRAPS displays the FPS on top of everything that uses DirectX, including video players.
Any thoughts?
First of all, it isn't that easy. FRAPS works with API injection: it inserts code into the drawing steps of the target program, taking the many different versions of DirectX and OpenGL into account. I found a link where this is explained in a little more detail: Case Study Fraps.
It may be easier to run the game in windowed mode and capture the input with global hooks, but I have never tried anything in this direction. If you want to work with API hooks, this link may be useful: Direct3DHook
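If you only need the key presses (and not an in-game overlay), a global hook is enough. Here is a minimal sketch in C#, assuming a small console app that installs a low-level keyboard hook via P/Invoke and logs every key press system-wide, including while a full-screen game has focus:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Windows.Forms;

// Minimal sketch: installs a global low-level keyboard hook and logs key presses.
class KeyPressLogger
{
    private const int WH_KEYBOARD_LL = 13;
    private const int WM_KEYDOWN = 0x0100;

    private delegate IntPtr LowLevelKeyboardProc(int nCode, IntPtr wParam, IntPtr lParam);

    // Keep a reference to the delegate so the garbage collector doesn't free it.
    private static readonly LowLevelKeyboardProc _proc = HookCallback;
    private static IntPtr _hookId = IntPtr.Zero;

    [DllImport("user32.dll", SetLastError = true)]
    private static extern IntPtr SetWindowsHookEx(int idHook, LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool UnhookWindowsHookEx(IntPtr hhk);

    [DllImport("user32.dll")]
    private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll")]
    private static extern IntPtr GetModuleHandle(string lpModuleName);

    static void Main()
    {
        using (Process process = Process.GetCurrentProcess())
        using (ProcessModule module = process.MainModule)
        {
            _hookId = SetWindowsHookEx(WH_KEYBOARD_LL, _proc, GetModuleHandle(module.ModuleName), 0);
        }

        Application.Run();               // a message loop is required for the hook to fire
        UnhookWindowsHookEx(_hookId);
    }

    private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
        {
            int vkCode = Marshal.ReadInt32(lParam);      // first field of KBDLLHOOKSTRUCT
            Console.WriteLine((Keys)vkCode);             // log or timestamp the key press here
        }
        return CallNextHookEx(_hookId, nCode, wParam, lParam);
    }
}
```

This only captures the keys; drawing them on top of a full-screen DirectX game still needs the overlay/injection approach above, whereas in windowed mode a simple TopMost form is usually enough.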
I'm working on an application where the concept is that you can 'select' objects before actually placing them. What I wanted to do was have some low-quality objects on a shelf or something similar. When the user selects an object, they can then tap to place the high-quality version of it in their area for further viewing.
I was wondering if this is possible with Vuforia. I wanted to use this platform since it works well from what I could tell and it's cross-platform (the application needs to run on Android and the HoloLens).
I have set up the basic application where you can place a capsule in the area. Now I want to automatically place the object (in this case the capsule) once Vuforia has detected a ground plane. From what I could see, the plane finder has events that fire when an input is detected, but I couldn't find an event that fires when the ground plane itself is detected. Is this still possible with Vuforia? I know it's doable with the HoloLens, but I would like to know if it's possible on Android or other mobile devices. I really don't know where to start looking, so I hope someone can point me in the right direction.
Let me know if I need to include more information!
The Vuforia PlaneFinderBehaviour (see doc here) has the event OnAutomaticHitTest, which fires on every frame in which a ground plane is detected.
So you can use it to automatically spawn an object.
You have to add your method to the On Automatic Hit Test list (instead of the On Interactive Hit Test list) of the "Plane Finder", as in the sketch below.
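A minimal sketch of such a method, assuming the class and member names from the Vuforia Ground Plane sample for Unity (they may differ slightly between SDK versions; the AutoPlaceOnGroundPlane name is just illustrative):

```csharp
using UnityEngine;
using Vuforia;

// Hypothetical helper script: attach it next to the PlaneFinderBehaviour and wire
// its OnAutomaticHitTest method into the "On Automatic Hit Test" event list.
public class AutoPlaceOnGroundPlane : MonoBehaviour
{
    // Drag the Ground Plane Stage's ContentPositioningBehaviour here in the inspector.
    public ContentPositioningBehaviour contentPositioning;

    private bool placed;

    // Called by Vuforia on every frame in which the ground plane is detected.
    public void OnAutomaticHitTest(HitTestResult result)
    {
        if (placed) return;                              // spawn the content only once
        contentPositioning.PositionContentAtPlaneAnchor(result);
        placed = true;
    }
}
```

The placed flag simply prevents the content from being re-anchored every frame, since OnAutomaticHitTest keeps firing for as long as the plane is visible.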
One caveat: I've heard that Vuforia Fusion does not yet support ARCore (it does support ARKit), so it uses an internal implementation to simulate ARCore functionality, and they are waiting for a final release of ARCore before supporting it. Many users have reported that their objects move even on ARCore-supported devices.
Visually speaking, the "displayed image" (in the Steam/Vive window) looks very similar to any other game being rendered on the desktop, e.g. Counter-Strike, WoW, etc.
Question: Why is it, then, that these games don't "feel" like being in a VR environment?
Also, programmatically speaking (image rendering, camera angles, depth of field, etc.):
Question: Can a non-VR game work with the VR sets as long as you configure the controls for the headset and wands? E.g. headset = joystick; wand buttons = menu; etc.
Thank you.
Edit: Please let me know if you have any reading recommendations on this subject.
The non-VR games simply weren't made for VR.
That said, there are hacks that make non-VR games semi-work in VR. You can check out Vorpx for Oculus, but I don't know of anything for Vive. There will be very big issues and headaches, though.
A lot of things will look bad, like missing graphics, because almost all games take shortcuts and don't render what you are not supposed to see. For example, there is no sky in RTS games, and the map ends just past the edge of the scrollable space. Or, when you're driving a car in a racing game, there probably isn't any more to the car than the dashboard (no seats, no back of the car, etc.). No one should see those parts, so no one made them.
It's even worse with the user interfaces of these games: no one had depth in mind when designing them, so you'll get things like an ammo counter that makes you cross your eyes.
I could go on and on with the issues, as this is just the tip of the iceberg.
I have made a game for the Xbox 360 using XNA, and while testing it the screen seems to dim every 30 seconds. The way it dims is as if I had been away from the Xbox for a while; if I press the Xbox guide button, it goes back to normal. I've tried googling this issue and found a few people who have had the same problem, but I couldn't find any replies on those posts. If anyone knows what the issue is and how I could fix it, that would be a great help, as this is the last kink I'm trying to resolve in my game.
Just to convert my comment into an answer:
Many modern TVs and some monitors have a "dynamic contrast" feature where, if the displayed image is predominantly black, the intensity of the backlight will be reduced.
(Often it's really annoying, just making high-contrast black-background scenes go dark for no reason.)
The backlight can also be turned down as a power-saving feature, kind of like a screen-saver.
You could test on a monitor without this feature, disable this feature in your TV settings, or use a scene that isn't so dark.
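If you want to rule your own code out first, one quick diagnostic (a sketch assuming the standard XNA Game class, and a test rather than a fix) is to clear the frame to a brighter colour in a test build and see whether the dimming still happens:

```csharp
protected override void Draw(GameTime gameTime)
{
    // Diagnostic only: clear to a mid gray instead of black. If the periodic
    // dimming stops, the TV's dynamic-contrast / power-saving backlight is
    // reacting to the mostly-black frame, not to anything in your game code.
    GraphicsDevice.Clear(Color.Gray);

    base.Draw(gameTime);
}
```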
I am asking this because I couldn't find the answer anywhere, at least with the keywords I could think of.
The most relevant question/answer I've found is: Create interactive videos in iPad - An app for product demo. The user Jano replied:
The easiest way to create interactive videos for iOS is to use Apple's HTTP Live Streaming technology. You have to create a video, embed metadata, play it using MPMoviePlayerController or AVPlayerItem, and then display clickable areas in response to metadata notifications.
Metadata should contain coordinates for the element you are tracking, e.g. a dress, and an identifier for the product. You overlay this info with a clickable subview that reveals more information about the product. There are several applications of this kind in iTunes, here is one.
Once you get a working product and weeks' worth of videos, the most difficult part is to perform motion tracking with the least possible human interaction. One approach is to use Adobe After Effects, another is to code your own solution based on OpenCV.
The example I've found concerning this technology (http://vimeo.com/16455248) showed the automatic addition of NSButtons when the video reaches the embedded meta-tags. My client wants an interactive video of the human body that pauses at a specific time (maybe using the meta-tags) and reacts to the user tapping an element in the video (e.g. imagine a pill inside the stomach; tapping this pill triggers another pre-rendered video, in a way not transparent to the user). I have thought about animations using Cocos2D or OpenGL ES, but I lack people who master these technologies.
I didn't quite understand the "motion tracking" reference in the quote above. Jano mentions Adobe After Effects and OpenCV. Is this motion tracking something like a UIGestureRecognizer? Does it track parts of the video itself, or motions initiated by the user, such as taps?
I hope I've stated the question as clearly as possible. Thank you in advance.
This question is a year old, but I can give you insight into the After Effects question. AE has a feature where you can define an area in a video frame and the software will track that area across the timeline, logging the coordinates at specific intervals. For example, in a video of a person riding a mountain bike, you could select an area around their helmet and AE will log coordinates of the helmet throughout the timeline.
Since Flash was the most likely target for interactive video, the typical workflow would encode this coordinate data into a Flash video as cue point events (this is the only method I have personally experienced). According to some googling, the data is stored in key frames and can be extracted using scripts.
More info: http://helpx.adobe.com/after-effects/using/tracking-stabilizing-motion-cs5.html
Here's a manual method for extracting the data:
In the timeline panel, select the footage and press the U key; all track point keyframes will show up. Here's the magic: select the Feature Center property of each track point and copy it (Cmd+C for Mac or Ctrl+C for PC).
Now open any text editor such as TextMate or Notepad and paste the data (Cmd+V for Mac or Ctrl+V for PC).
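Once you have that pasted text, you still need to turn it into coordinates your app can use. As a rough sketch (assuming the pasted block ends up as tab- or space-separated lines beginning with a numeric frame index followed by the X and Y pixel values; the exact header lines vary between After Effects versions), a small C# parser could pull out the samples and skip everything else:

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

// Turns a pasted After Effects keyframe block into (frame, x, y) samples.
// Header lines and the trailing "End of Keyframe Data" marker are skipped
// because they don't start with a numeric frame index.
class TrackPointParser
{
    static IEnumerable<(int Frame, double X, double Y)> Parse(string pastedText)
    {
        foreach (var rawLine in pastedText.Split('\n'))
        {
            var parts = rawLine.Split(new[] { '\t', ' ' }, StringSplitOptions.RemoveEmptyEntries);
            if (parts.Length < 3) continue;

            if (int.TryParse(parts[0], NumberStyles.Integer, CultureInfo.InvariantCulture, out var frame) &&
                double.TryParse(parts[1], NumberStyles.Float, CultureInfo.InvariantCulture, out var x) &&
                double.TryParse(parts[2], NumberStyles.Float, CultureInfo.InvariantCulture, out var y))
            {
                yield return (frame, x, y);
            }
        }
    }

    static void Main(string[] args)
    {
        // Example: parse a file saved from the text editor and print the samples.
        foreach (var sample in Parse(File.ReadAllText(args[0])))
            Console.WriteLine($"{sample.Frame}\t{sample.X}\t{sample.Y}");
    }
}
```

From there you can convert the frame numbers to timestamps and embed the coordinates as the per-product metadata described in the quoted answer.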
I need to simultaneously display a video that is playing in my application, full screen on a larger monitor. On some video cards this is called Theater mode and is configured using a tool that the card manufacturer supplies.
I would like to do this with only software. Can I do this with DirectX?
My idea is to take the video currently playing via DirectShow and repaint it on a second display (as configured by the user) in full-screen mode.
What technologies or methods would I use for this?
The straightforward way is to split the still-encoded video into two branches and use two video renderers set to present video on different monitors. One renderer could be part of your application UI; the other could expand full screen on the large secondary monitor.
Splitting the encoded video gives you the option to still leverage hardware-assisted decoding (DXVA) where available. You might instead prefer to use a software-only decoder and split the already decoded video; this will also work.
You might additionally want to implement a filter that can temporarily disable one renderer or the other, for example by stopping media samples from passing through.
Another thing you can do is use bridging to control the renderers even more flexibly and to be able to detach them from the media source.
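To make the two-branch idea concrete, here is a rough sketch using the DirectShowLib .NET wrapper (an assumption on my part; the same graph can be built directly against the DirectShow COM API in C++). It assumes the source filter's first output pin carries the video stream, tees it off with the stock Infinite Pin Tee, and lets intelligent connect add a decoder and renderer per branch; depending on the source and splitter in use, you may instead need to place the tee after the splitter or after the decoder, as discussed above:

```csharp
using System;
using DirectShowLib;   // assumption: the DirectShowLib .NET wrapper for DirectShow

class DualMonitorPlayback
{
    // CLSID of the stock "Infinite Pin Tee" filter, which duplicates one stream to several outputs.
    static readonly Guid InfTeeClsid = new Guid("F8388A40-D5BB-11D0-BE5A-0080C706568E");

    static void BuildAndRun(string path)
    {
        var graph = (IGraphBuilder)new FilterGraph();

        // 1. Source filter for the file; here we assume its first output pin is the video stream.
        IBaseFilter source;
        DsError.ThrowExceptionForHR(graph.AddSourceFilter(path, "Source", out source));
        IPin videoOut = DsFindPin.ByDirection(source, PinDirection.Output, 0);

        // 2. The tee splits that stream into two branches.
        var tee = (IBaseFilter)Activator.CreateInstance(Type.GetTypeFromCLSID(InfTeeClsid));
        DsError.ThrowExceptionForHR(graph.AddFilter(tee, "Tee"));
        DsError.ThrowExceptionForHR(
            graph.Connect(videoOut, DsFindPin.ByDirection(tee, PinDirection.Input, 0)));

        // 3. Intelligent connect builds a decoder chain and a video renderer for each branch,
        //    so DXVA-assisted decoding can stay available per branch.
        DsError.ThrowExceptionForHR(graph.Render(DsFindPin.ByDirection(tee, PinDirection.Output, 0)));
        // The Infinite Pin Tee creates a fresh output pin once the first one is connected.
        DsError.ThrowExceptionForHR(graph.Render(DsFindPin.ByDirection(tee, PinDirection.Output, 1)));

        // 4. To place the windows: enumerate the graph's filters (IEnumFilters), query
        //    IVideoWindow on each renderer, then use put_Owner / SetWindowPosition to embed
        //    one renderer in the application UI and stretch the other across the secondary
        //    monitor's desktop coordinates.

        DsError.ThrowExceptionForHR(((IMediaControl)graph).Run());
    }
}
```

The per-renderer IVideoWindow calls are where the "Theater mode" behaviour comes from: one renderer stays inside your UI, the other is simply positioned over the second monitor's area of the virtual desktop.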