In my app I'm using Direct2D to write to a shared (D3D11/D3D10) texture. This is the only kind of render target used in my app. Since devices can be lost in Direct2D (D2DERR_RECREATE_RENDER_TARGET), lots of code exists to abstract and/or recreate device-dependent resources. However, I have yet to see this situation actually occur, and I am curious whether I am wasting effort. Can the render target actually be lost in this scenario, or am I protected because the texture is created via D3D11 (though shared with D3D10)? If so, does anyone know a reproducible, simple way to cause the render target to be lost, so I can at least test the code that handles this condition?
It's not wasted effort. Many scenarios can cause device loss. It can happen behind the scenes if your graphics driver crashes or if Windows Update installs a new driver version in the background; there are other cases, but those are probably the most common, and your application should handle all of them gracefully. A simple way to induce device loss for testing purposes is to update your graphics driver.
You can use the Device Manager to roll back and update your driver quickly.
A D2D window render target will always be lost when another program uses any version of the D3D API to go fullscreen and back (true exclusive mode, not the windowed fullscreen mode supported since D3D10/11). With D3D11, I think the switch has to cause a resolution change for the D2D render target to be lost.
So if you do not get the D2DERR_RECREATE_RENDER_TARGET HRESULT in this case when presenting your texture render target, then maybe you do not need to re-create the render target, but I would still handle D2DERR_RECREATE_RENDER_TARGET anyway. To test that code path, you could simply swap the texture render target for a window render target during development.
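If you do keep the handling code, the check belongs right after EndDraw(). Here is a minimal sketch of the usual pattern in C++; DeviceResources, Discard() and the drawing calls are placeholders for your own code, and note that the SDK header actually spells the HRESULT D2DERR_RECREATE_TARGET:

    // Minimal sketch of the standard Direct2D device-loss handling around EndDraw().
    // DeviceResources is a placeholder that owns the render target and everything
    // created from it (brushes, bitmaps, ...).
    #include <d2d1.h>
    #include <d2d1helper.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    struct DeviceResources
    {
        ComPtr<ID2D1RenderTarget>    target; // e.g. created from the shared DXGI surface
        ComPtr<ID2D1SolidColorBrush> brush;  // every device-dependent resource lives here

        void Discard() { brush.Reset(); target.Reset(); }
        // A matching Create() would rebuild the target from the shared texture
        // and then rebuild the brush from the new target.
    };

    HRESULT RenderFrame(DeviceResources& res)
    {
        res.target->BeginDraw();
        res.target->Clear(D2D1::ColorF(D2D1::ColorF::CornflowerBlue));
        // ... issue drawing commands using res.brush etc. ...
        HRESULT hr = res.target->EndDraw();

        if (hr == D2DERR_RECREATE_TARGET)
        {
            // The device behind the target was lost (driver update, driver crash, TDR, ...).
            // Throw away everything created from the target; re-create it on the next frame.
            res.Discard();
            hr = S_OK; // not a fatal error
        }
        return hr;
    }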
When using a texture with a purgeable state of volatile, my app crashes with this error:
"MTLDebugCommandBuffer lockPurgeableObjects]:2103: failed assertion `MTLResource is in volatile or empty purgeable state at commit"
It works perfectly fine when I run the app by itself (not using the play button in Xcode, just launching the built app directly) and also works when testing on iOS. The problem started recently, after updating to a newer version of Xcode. Is this something I can turn off so that the command buffers don't lock purgeable objects?
It's working as intended. Let me explain.
First, the reason you are not seeing this problem when the app runs on its own is that, by default, apps launched from Xcode run with the Metal Validation Layer enabled. This is an API layer that sits between the actual API and your app and verifies that all objects are in a consistent state, meet the required preconditions, and so on. Apps run outside of Xcode don't have this layer enabled by default, because all that validation has a cost you don't want to pass on to your users; the Validation Layer exists to be used during development. You can learn more about it by typing man MetalValidation in your terminal. You can also run your app with validation enabled outside of Xcode, by prepending MTL_DEBUG_LAYER=1 to the invocation in the terminal.
The fact that the app is not actually crashing and seems to work fine without validation layer does not necessarily mean that it will work in every case and on every platform. Some drivers may be more strict, some less. That's why Validation Layer exists.
Second, let's address what the actual problem is. Purgeable state exists so that Metal has the option of discarding some resources when memory pressure on the system gets too high, instead of jetsamming your app. Only resources that are marked volatile can be discarded in this way. But you can't just "set it and forget it". It's intended for infrequently used resources that are fairly big and can be discarded safely. The general pattern is described in this WWDC video, starting at around minute 39. Basically, if you are going to use a volatile resource, you need to make sure that it wasn't already discarded, and you need to make it non-volatile. You do this by explicitly calling setPurgeableState with the nonVolatile state and checking whether it returns empty (setPurgeableState returns the state the resource was in before the call). If it does return empty, the resource was discarded and you need to regenerate or reload it. If it doesn't, the resource is still there; you can safely use it in a command buffer, for example, and then set it back to volatile in a completion handler.
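To illustrate, here is a rough sketch of that pattern in C++ using Apple's metal-cpp wrapper (the texture, the command buffer and RegenerateTexture() are placeholders for your own objects; the calls mirror the Objective-C API, where setPurgeableState returns the previous state):

    // Sketch only: pin a volatile resource before encoding work that uses it, regenerate
    // it if the system already discarded it, and return it to volatile once the GPU is done.
    #include <Metal/Metal.hpp>

    void RegenerateTexture(MTL::Texture* texture); // placeholder: reload/redraw the contents

    void UseVolatileTexture(MTL::Texture* texture, MTL::CommandBuffer* commandBuffer)
    {
        // Make it non-volatile first; the call returns the state it was in *before* the call.
        MTL::PurgeableState previous = texture->setPurgeableState(MTL::PurgeableStateNonVolatile);

        if (previous == MTL::PurgeableStateEmpty)
        {
            // The contents were discarded under memory pressure: re-create them.
            RegenerateTexture(texture);
        }

        // ... encode work that reads the texture into commandBuffer ...

        // Only when the GPU has finished may it become volatile again.
        commandBuffer->addCompletedHandler([texture](MTL::CommandBuffer*) {
            texture->setPurgeableState(MTL::PurgeableStateVolatile);
        });
        commandBuffer->commit();
    }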
I would suggest watching that part of the video, because it goes more in depth.
Also refer to the article Reducing the Memory Footprint of Metal Apps, the WWDC video Debug GPU-side errors in Metal, and the documentation page for setPurgeableState.
Is there any way of knowing the render mode of an application at runtime on an iOS device?
I need to get the render mode of my running application and branch to different logic depending on which render mode (CPU, GPU, or Direct) I get at runtime, but I am struggling to find any API or method that can do this.
Any suggestions ?
Thanks,
Ken
From a pure AS3 tack, you'd be limited to wmodeGPU (which still doesn't give you quite what you want); however, with AIR you have access to the NativeWindow classes. That said, everything I've read seems to indicate this is an init-time-only property, and not something you can read back out of your NativeWindow.
Try stage.nativeWindow.renderMode?
Good afternoon all!
I'm experiencing a rather annoying issue with one of my current projects. I'm working with a hardware library (the NVAPI Pascal header translation by Andreas Hausladen), which allows me to retrieve information from an NVIDIA GPU. I'm using it to retrieve temperatures, and with the help of FireMonkey's TAnimateFloat I'm adjusting the angle of a custom-made dial to indicate the temperature.
As FMX defaults to Direct2D on Windows, I can monitor the FPS with any of the various "gamer" tools out there (MSI Afterburner, FRAPS, etc.).
The issue I'm having is that when I put the system into sleep mode (suspend to RAM/S3) and then wake it up again, the interface of my application is blacked out (partially or completely), and nothing on the UI visibly refreshes. I'm calling the initialization for the NVAPI library regularly and checking the result via a timer, but this doesn't fix the issue. I'm also calling ProcessMessages and Repaint on the parent dial and its child controls (since I can't seem to find a repaint for the form, or even an equivalent).
I tried various versions of the library, and each one presents the same issue. (As the next paragraph shows, the library was in fact NOT the issue; it's actually the renderer that's at fault.)
I have one solution, but I want to know if there's something more... elegant available. The solution I have involves adding FMX.Types.GlobalUseDirect2D := False; before Application.Initialize in my project's source. However, this forces FMX to use GDI+ rather than Direct2D. It works, of course, but I'd like to keep D2D open as an option if I can. I can use FindCmdLineSwitch to toggle this on/off depending on parameters, but that still requires restarting the application to change from D2D to GDI+ or vice versa.
What's weird about it is that the FPS counter (from FRAPS in my case) indicates that there's still activity happening in the UI (as the value changes as would be expected), but the UI itself isn't visibly refreshing.
Is this an issue related to Direct2D, or a bug in FireMonkey's implementation? More importantly, is there a better way to fix it than disabling D2D? Lastly, and related: is it possible to "reinitialize" an application without terminating it first (so that I could let the user switch between GDI+ and D2D without needing to restart the application)?
This may be one of the issues with FM prior to the Update 4 hotfix (26664 / QC 104210), which fixes a FireMonkey HD form being unresponsive after user unlock. Installing it might resolve the issue for you.
The update should be part of your registered user downloads from the EDN (direct link http://cc.embarcadero.com/item/28881).
We need to drive 8 to 12 monitors from one PC, all rendering different views of a single 3D scene graph, so we have to use several graphics cards. We're currently running on DX9 and are looking to move to DX11 in the hope that this makes things easier.
Initial investigations seem to suggest that the obvious approach doesn't work: performance is lousy unless we drive each card from a separate process. Web searches are turning up nothing. Can anybody suggest the best way to go about using several cards simultaneously from a single process with DX11?
I see that you've already come to a solution, but I thought it'd be good to throw in my own recent experiences for anyone else who comes across this question...
Yes, you can drive any number of adapters and outputs from a single process. Here's some information that might be helpful:
In DXGI and DX11:
Each graphics card is an "Adapter". Each monitor is an "Output". See here for more information about enumerating through these.
Once you have pointers to the adapters that you want to use, create a device (ID3D11Device) using D3D11CreateDevice for each of the adapters. Maybe you want a different thread for interacting with each of your devices. This thread may have a specific processor affinity if that helps speed things up for you.
Once each adapter has its own device, create a swap chain and render target for each output. You can also create your depth stencil view for each output as well while you're at it.
The process of creating a swap chain will require your windows to be set up: one window per output. I don't think there is much benefit in driving your rendering from the window that contains the swap chain. You can just create the windows as hosts for your swap chain and then forget about them entirely afterwards.
For rendering, you will need to iterate through each Output of each Device. For each output change the render target of the device to the render target that you created for the current output using OMSetRenderTargets. Again, you can be running each device on a different thread if you'd like, so each thread/device pair will have its own iteration through outputs for rendering.
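To make the shape of this concrete, here is a rough sketch of the setup and the per-frame loop; CreateHostWindow() and the drawing code are placeholders for your own code, and the factory is assumed to come from CreateDXGIFactory1():

    // Rough sketch: one ID3D11Device per adapter, one swap chain + render target view
    // per output, then OMSetRenderTargets per output each frame.
    #include <windows.h>
    #include <d3d11.h>
    #include <dxgi.h>
    #include <vector>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    struct OutputTarget
    {
        ComPtr<IDXGISwapChain>         swapChain;
        ComPtr<ID3D11RenderTargetView> rtv;
    };

    struct AdapterNode
    {
        ComPtr<ID3D11Device>        device;
        ComPtr<ID3D11DeviceContext> context;
        std::vector<OutputTarget>   outputs;
    };

    HWND CreateHostWindow(const DXGI_OUTPUT_DESC& outputDesc); // placeholder: one window per output

    void BuildAdapterNodes(IDXGIFactory1* factory, std::vector<AdapterNode>& nodes)
    {
        ComPtr<IDXGIAdapter1> adapter;
        for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a)
        {
            AdapterNode node;
            D3D_FEATURE_LEVEL fl;
            D3D11CreateDevice(adapter.Get(), D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                              nullptr, 0, D3D11_SDK_VERSION,
                              &node.device, &fl, &node.context);

            ComPtr<IDXGIOutput> output;
            for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o)
            {
                DXGI_OUTPUT_DESC outputDesc;
                output->GetDesc(&outputDesc);

                DXGI_SWAP_CHAIN_DESC sd = {};
                sd.BufferCount       = 1;
                sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
                sd.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
                sd.OutputWindow      = CreateHostWindow(outputDesc);
                sd.SampleDesc.Count  = 1;
                sd.Windowed          = TRUE;

                OutputTarget target;
                factory->CreateSwapChain(node.device.Get(), &sd, &target.swapChain);

                ComPtr<ID3D11Texture2D> backBuffer;
                target.swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
                node.device->CreateRenderTargetView(backBuffer.Get(), nullptr, &target.rtv);

                node.outputs.push_back(target);
            }
            nodes.push_back(node);
        }
    }

    // Per frame (optionally one thread per AdapterNode):
    void RenderAll(std::vector<AdapterNode>& nodes)
    {
        for (auto& node : nodes)
            for (auto& out : node.outputs)
            {
                node.context->OMSetRenderTargets(1, out.rtv.GetAddressOf(), nullptr);
                // ... draw this output's view of the scene ...
                out.swapChain->Present(1, 0);
            }
    }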
Here are a bunch of links that might be of help when going through this process:
Display Different images per monitor directX 10
DXGI and 2+ full screen displays on Windows 7
http://msdn.microsoft.com/en-us/library/windows/desktop/ee417025%28v=vs.85%29.aspx#multiple_monitors
Good luck!
Maybe you do not need to upgrade DirectX. See this article.
Enumerate the available adapters with IDXGIFactory, create an ID3D11Device for each, and then feed them from different threads. Should work fine.
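For the threading part, a minimal sketch (PerCard and RenderLoop() are hypothetical names; each card's device and immediate context are only ever touched from its own thread):

    // One render thread per graphics card, each driving its own ID3D11Device.
    #include <d3d11.h>
    #include <functional>
    #include <thread>
    #include <vector>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    struct PerCard
    {
        ComPtr<ID3D11Device>        device;
        ComPtr<ID3D11DeviceContext> context;
    };

    void RenderLoop(PerCard& card); // placeholder: create swap chains, render, present

    void RunAllCards(std::vector<PerCard>& cards)
    {
        std::vector<std::thread> threads;
        for (auto& card : cards)
            threads.emplace_back(RenderLoop, std::ref(card)); // one thread per GPU
        for (auto& t : threads)
            t.join();
    }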
We're working on an application that displays information through a Direct3D visualisation. A late client request is the ability to view this application via some Remote Desktop solution.
Has anyone done anything similar? What options are available / unavailable? I'm thinking RDC, VNC, Citrix...
Any advice?
I think you can still use all of the normal D3D tools, but you won't be able to render to a surface associated with the screen. You'll have to render to a DIB (or some such) and Blt it with GDI to a normal window HDC. RDC/VNC/Citrix should all work with this technique.
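For what it's worth, here is a rough sketch of that technique against D3D11 (the question doesn't say which D3D version, so this is only to illustrate the idea): copy the offscreen render target into a CPU-readable staging texture, map it, and push the pixels to the window with GDI.

    // Sketch only: assumes the render target is DXGI_FORMAT_B8G8R8A8_UNORM so GDI can take
    // the pixels directly; error handling omitted, and a real app would create the staging
    // texture once and reuse it rather than per frame.
    #include <windows.h>
    #include <d3d11.h>

    void PresentViaGDI(ID3D11Device* device, ID3D11DeviceContext* ctx,
                       ID3D11Texture2D* renderTarget, HWND hwnd)
    {
        D3D11_TEXTURE2D_DESC desc;
        renderTarget->GetDesc(&desc);
        desc.Usage          = D3D11_USAGE_STAGING;
        desc.BindFlags      = 0;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
        desc.MiscFlags      = 0;

        ID3D11Texture2D* staging = nullptr;
        device->CreateTexture2D(&desc, nullptr, &staging);
        ctx->CopyResource(staging, renderTarget); // GPU -> CPU-readable copy

        D3D11_MAPPED_SUBRESOURCE mapped;
        ctx->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);

        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth       = (LONG)desc.Width;
        bmi.bmiHeader.biHeight      = -(LONG)desc.Height; // negative = top-down rows
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        HDC hdc = GetDC(hwnd);
        // Note: assumes mapped.RowPitch == desc.Width * 4; otherwise copy row by row first.
        SetDIBitsToDevice(hdc, 0, 0, desc.Width, desc.Height, 0, 0, 0, desc.Height,
                          mapped.pData, &bmi, DIB_RGB_COLORS);
        ReleaseDC(hwnd, hdc);

        ctx->Unmap(staging, 0);
        staging->Release();
    }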
Performance will definitely suffer - but that's going to be the case over remote desktop anyway. In fact, if I were you, I would mock up a VERY simple prototype and demonstrate the performance before committing to it.
Good luck!
I think Windows 7 has D3D remoting stuff - probably requires both client and server to be W7 though.
The built-in Remote Desktop works. (You don't have to do anything special.)
But it is extremely slow, because when in doubt, it just sends the contents of a window as a bitmap.