I need to capture frames from an application that runs in direct mode into an ID3D11Texture2D. I was doing this by hooking Present() or Present1(), but now some apps (e.g. SteamVR games, OVR games, etc.) output frames in direct mode (NVIDIA and AMD have opened this feature up for VR).
Does anyone have any ideas?
To solve the problem I dove into the Output-Merger stage of the graphics pipeline. I found that everything I need is in the render target(s). Note that multiple render targets may be in play to implement multi-buffering.
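For anyone attempting the same thing, the capture itself can be fairly small. Here is a minimal sketch, assuming ID3D11DeviceContext::OMSetRenderTargets has already been detoured with your hooking framework of choice (the call through to the original function is handled by the hooking layer and elided here); the function name and overall structure are illustrative, not part of any real capture API:

```cpp
#include <d3d11.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")
using Microsoft::WRL::ComPtr;

// Called from a detour on ID3D11DeviceContext::OMSetRenderTargets.
void OnOMSetRenderTargets(ID3D11DeviceContext* ctx, UINT numViews,
                          ID3D11RenderTargetView* const* rtvs)
{
    if (numViews == 0 || rtvs == nullptr || rtvs[0] == nullptr)
        return;

    // Get the texture behind the first bound render target view.
    ComPtr<ID3D11Resource> resource;
    rtvs[0]->GetResource(&resource);
    ComPtr<ID3D11Texture2D> tex;
    if (FAILED(resource.As(&tex)))
        return;

    D3D11_TEXTURE2D_DESC desc = {};
    tex->GetDesc(&desc);

    ComPtr<ID3D11Device> device;
    ctx->GetDevice(&device);

    // Make a copy we own, with the same size/format as the target.
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = 0;
    ComPtr<ID3D11Texture2D> copy;
    if (SUCCEEDED(device->CreateTexture2D(&desc, nullptr, &copy)))
        ctx->CopyResource(copy.Get(), tex.Get());
}
```

A real implementation would cache the copy texture and recreate it only when the description changes, rather than allocating one per call.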
I am using a proprietary RTSP server (I don't have access to the source code) running on a Linaro-based embedded system. I connect to the device over WiFi and use VLC player to watch the stream. Every so often, VLC player's window resizes to a different size.
Is this normal behavior in an RTSP stream (resizing the video)?
- If yes, what is causing this change? Is it my WiFi bandwidth?
- If not, what are the suggested steps to find the root cause of this problem?
Thank you
Ahmad
Is this normal behavior in an RTSP stream (resizing the video)?
Yes, the RTSP DESCRIBE Request should give info about the resolution. (See this discussion)
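For reference, the resolution travels in the SDP body of the DESCRIBE response. A hypothetical exchange (addresses and values made up) might carry it in a 3GPP-style a=framesize attribute; note that many H.264 servers instead embed the dimensions in the SPS inside sprop-parameter-sets, so the exact attribute varies by server:

```
DESCRIBE rtsp://192.168.1.10/stream RTSP/1.0
CSeq: 2

RTSP/1.0 200 OK
CSeq: 2
Content-Type: application/sdp

...
m=video 0 RTP/AVP 96
a=rtpmap:96 H264/90000
a=framesize:96 1280-720
```

If the server announces a different size (or the decoder picks up a new SPS mid-stream), a player configured to match the native video size will resize its window accordingly.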
- If yes, what is causing this change? Is it my WiFi bandwidth?
Most probably not. However, more info on your bandwidth and network setup would be needed to say for sure.
- If not, what are the suggested steps to find the root cause of this problem?
Option 1: Try to disable (uncheck) VLC's preference to resize the interface to native video size, and see what happens.
Also see the following post over at Super User discussing automatic resizing options.
Option 2: Enable VLC's verbose mode (console log) and see what errors or messages come up. This often helps and points you in new directions to look for solutions.
Option 3: It could be a problem with how information is encoded in the stream concerning the resolution. You would need to get in touch with the vendor of your RTSP server software in order to dig deeper.
Open VLC and press Ctrl+P, or go to
Tools -> Preferences -> Interface (look for the options below):
Integrated video in interface [check]
Resize interface to video size [uncheck]
You must close and reopen VLC for the change to take effect.
Is it possible to use NetStream to continuously publish the stage to an FMS?
I have tried attaching a camera to the NetStream, which works perfectly. However, I want to publish a stream showing the stage and all its elements/objects, including the case where a user interacts with the elements and changes their position/appearance.
Thank you very much.
As far as I know, it's not possible this way.
You can't feed the NetStream a custom input to encode.
You have the following options:
if you can reproduce the same elements on the other side, create an API that only passes the interactions (e.g. drawLine(startX,startY,endX,endY), loadImage(url), etc.). This way everything is shown on both PCs, with much less data traffic and CPU usage
if you have a very complex stage and it's somehow impossible to reproduce on the other side, you can take bitmap snapshots, JPEG-encode them, and send them through FMS (not too nice)
use a webcam splitter that grabs the stage and exposes it as a webcam source (not too nice)
In my app I'm using Direct2D to write to a shared (D3D11/D3D10) texture. This is the only kind of render target used in my app. Since devices can be lost in Direct2D (D2DERR_RECREATE_TARGET), lots of code exists to abstract and/or recreate device-dependent resources. However, I have yet to see this situation actually occur, and am curious whether I am wasting effort. Can the render target actually be lost in this scenario, or am I protected since the texture is created via D3D11 (though shared with D3D10)? If so, does anyone know a reproducible, simple way to cause the render target to be lost, so I can at least test the code that handles this condition?
It’s not a wasted effort. Many scenarios may cause device loss to occur. A simple way to induce this for testing purposes is to update your graphics driver. Your application should handle this gracefully. It can also happen behind the scenes if your graphics driver crashes or Windows Update installs a new version in the background. There are other cases but those are probably the most common.
You can use the Device Manager to roll back and update your driver quickly.
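The usual pattern, per the Direct2D documentation, is to check the HRESULT from EndDraw each frame and rebuild device-dependent resources on D2DERR_RECREATE_TARGET. A minimal sketch, where DiscardDeviceResources() is a hypothetical helper:

```cpp
#include <d2d1.h>
#pragma comment(lib, "d2d1.lib")

// Hypothetical helper: releases the render target and everything created
// from it (brushes, bitmaps, layers), to be recreated before the next frame.
void DiscardDeviceResources();

HRESULT RenderFrame(ID2D1RenderTarget* renderTarget)
{
    renderTarget->BeginDraw();
    // ... issue Direct2D drawing calls here ...
    HRESULT hr = renderTarget->EndDraw();

    if (hr == D2DERR_RECREATE_TARGET)
    {
        // The device behind the target was lost (driver update/crash, etc.).
        // Drop all device-dependent resources; recreate them next frame.
        DiscardDeviceResources();
        hr = S_OK; // not fatal, just render again
    }
    return hr;
}
```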
A D2D window render target will always be lost when another program uses any version of the D3D API to go fullscreen and back (exclusive mode, not the new windowed mode supported since D3D10/11). In D3D11, I think you have to cause a resolution change for the D2D render target to be lost.
So if you do not get the D2DERR_RECREATE_TARGET HRESULT in this case when presenting to your texture render target, then maybe you do not need to re-create the render target, but I would still handle D2DERR_RECREATE_TARGET. To test it, you could simply replace the texture render target with a window render target during development.
We need to drive 8 to 12 monitors from one PC, all rendering different views of a single 3D scene graph, so we have to use several graphics cards. We're currently running on DX9, so we are looking to move to DX11 in the hope that this makes things easier.
Initial investigations seem to suggest that the obvious approach doesn't work: performance is lousy unless we drive each card from a separate process. Web searches are turning up nothing. Can anybody suggest the best way to utilise several cards simultaneously from a single process with DX11?
I see that you've already come to a solution, but I thought it'd be good to throw in my own recent experiences for anyone else who comes across this question...
Yes, you can drive any number of adapters and outputs from a single process. Here's some information that might be helpful:
In DXGI and DX11 (a consolidated sketch follows these steps):
Each graphics card is an "Adapter". Each monitor is an "Output". See here for more information about enumerating them.
Once you have pointers to the adapters that you want to use, create a device (ID3D11Device) for each adapter using D3D11CreateDevice. You may want a separate thread for interacting with each of your devices; giving each thread a specific processor affinity may help speed things up.
Once each adapter has its own device, create a swap chain and render target for each output. You can create your depth-stencil view for each output while you're at it, too.
Creating a swap chain requires your windows to be set up: one window per output. I don't think there is much benefit in driving your rendering from the window that contains the swap chain; you can just create the windows as hosts for your swap chains and then forget about them entirely afterwards.
For rendering, you will need to iterate through each output of each device. For each output, bind the render target that you created for it using OMSetRenderTargets. Again, you can run each device on a different thread if you'd like, so each thread/device pair will have its own iteration through outputs for rendering.
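Putting the steps above together, a condensed sketch (error handling mostly elided; CreateOutputWindow() is a hypothetical helper that creates a plain Win32 host window for each swap chain):

```cpp
#include <d3d11.h>
#include <dxgi.h>
#include <vector>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "dxgi.lib")
using Microsoft::WRL::ComPtr;

HWND CreateOutputWindow(); // hypothetical: creates a plain host window

struct OutputTarget {
    ComPtr<IDXGISwapChain>         swapChain;
    ComPtr<ID3D11RenderTargetView> rtv;
};

struct AdapterDevice {
    ComPtr<ID3D11Device>        device;
    ComPtr<ID3D11DeviceContext> context;
    std::vector<OutputTarget>   outputs;
};

void EnumerateAndCreate(std::vector<AdapterDevice>& devices)
{
    ComPtr<IDXGIFactory1> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a)
    {
        AdapterDevice ad;
        // With an explicit adapter, the driver type must be UNKNOWN.
        D3D11CreateDevice(adapter.Get(), D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                          nullptr, 0, D3D11_SDK_VERSION,
                          &ad.device, nullptr, &ad.context);

        ComPtr<IDXGIOutput> output;
        for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o)
        {
            OutputTarget target;

            DXGI_SWAP_CHAIN_DESC scd = {};
            scd.BufferCount       = 2;
            scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            scd.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
            scd.OutputWindow      = CreateOutputWindow();
            scd.SampleDesc.Count  = 1;
            scd.Windowed          = TRUE;
            factory->CreateSwapChain(ad.device.Get(), &scd, &target.swapChain);

            ComPtr<ID3D11Texture2D> backBuffer;
            target.swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
            ad.device->CreateRenderTargetView(backBuffer.Get(), nullptr, &target.rtv);

            ad.outputs.push_back(target);
        }
        devices.push_back(ad);
    }
}

// Per frame (one thread per device if you like): bind and present each output.
void RenderAll(AdapterDevice& ad)
{
    for (OutputTarget& t : ad.outputs)
    {
        ad.context->OMSetRenderTargets(1, t.rtv.GetAddressOf(), nullptr);
        // ... draw this output's view of the scene ...
        t.swapChain->Present(1, 0);
    }
}
```

Note that when you pass an explicit adapter to D3D11CreateDevice, the driver type must be D3D_DRIVER_TYPE_UNKNOWN or the call fails with E_INVALIDARG.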
Here are a bunch of links that might be of help when going through this process:
Display Different images per monitor directX 10
DXGI and 2+ full screen displays on Windows 7
http://msdn.microsoft.com/en-us/library/windows/desktop/ee417025%28v=vs.85%29.aspx#multiple_monitors
Good luck!
Maybe you don't need to upgrade DirectX.
See this article.
Enumerate the available adapters with IDXGIFactory, create an ID3D11Device for each, and then feed them from different threads. It should work fine.
We're working on an application that displays information through a Direct3D visualisation. A late client request is the ability to view this application via some Remote Desktop solution.
Has anyone done anything similar? What options are available / unavailable? I'm thinking RDC, VNC, Citrix...
Any advice?
I think you can still use all of the normal D3D tools, but you won't be able to render to a surface associated with the screen. You'll have to render to a DIB (or some such) and blit it with GDI to a normal window HDC. RDC/VNC/Citrix should all work with this technique.
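To make the idea concrete, here is a minimal sketch of that readback path in D3D11 terms, assuming a DXGI_FORMAT_B8G8R8A8_UNORM offscreen render target (which matches GDI's 32-bit BGRX layout); in real code the staging texture would be created once and reused:

```cpp
#include <d3d11.h>
#include <windows.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")
using Microsoft::WRL::ComPtr;

void PresentViaGDI(ID3D11Device* device, ID3D11DeviceContext* ctx,
                   ID3D11Texture2D* renderTarget, HWND hwnd,
                   UINT width, UINT height)
{
    // Copy the GPU render target into a CPU-readable staging texture.
    D3D11_TEXTURE2D_DESC desc = {};
    renderTarget->GetDesc(&desc);
    desc.Usage          = D3D11_USAGE_STAGING;
    desc.BindFlags      = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags      = 0;

    ComPtr<ID3D11Texture2D> staging;
    if (FAILED(device->CreateTexture2D(&desc, nullptr, &staging)))
        return;
    ctx->CopyResource(staging.Get(), renderTarget);

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (FAILED(ctx->Map(staging.Get(), 0, D3D11_MAP_READ, 0, &mapped)))
        return;

    // Describe the mapped pixels as a top-down 32-bit DIB. Using
    // RowPitch / 4 as the DIB width makes the strides match; only
    // 'width' columns are actually blitted.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = (LONG)(mapped.RowPitch / 4);
    bmi.bmiHeader.biHeight      = -(LONG)height; // negative = top-down
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC hdc = GetDC(hwnd);
    SetDIBitsToDevice(hdc, 0, 0, width, height, 0, 0, 0, height,
                      mapped.pData, &bmi, DIB_RGB_COLORS);
    ReleaseDC(hwnd, hdc);

    ctx->Unmap(staging.Get(), 0);
}
```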
Performance will definitely suffer - but that's going to be the case over remote desktop anyway. In fact, if I were you, I would mock up a VERY simple prototype and demonstrate the performance before committing to it.
Good luck!
I think Windows 7 has D3D remoting support - it probably requires both client and server to be running Windows 7, though.
The built-in Remote Desktop works (you don't have to do anything special),
but it is extremely slow, because when in doubt it just sends the contents of a window as a bitmap.