I have searched a lot over the internet for a solution to this, but I can't find one. For one of my projects to work, I need to increase the refresh rate of DirectX 11 running on Windows Server 2012. All the solutions I found address the same problem, but for DirectX 9 or lower. Can anyone tell me how to increase the refresh rate of DirectX 11 on Windows Server 2012?
You can turn off VSync, which has the effect of decoupling your frame rate from your display device. Call Present(0, 0) to get unlimited frame rates.
If you leave VSync turned on, you cannot Present frames any faster than your display device allows, likely 60 Hz. You're free to set your display device to whatever refresh rate you like, but that has nothing to do with DirectX 11. If you're using an LCD monitor, it's likely only capable of 60 Hz.
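For reference, a minimal sketch of the Present call with the sync interval set to 0, assuming swapChain is an IDXGISwapChain* you have already created (the variable name is a placeholder):

// SyncInterval = 0: present immediately without waiting for the vertical blank (VSync off).
HRESULT hr = swapChain->Present(0, 0);   // frame rate is now uncapped
// SyncInterval = 1: wait for one vertical blank (VSync on, capped at the display's refresh rate).
// hr = swapChain->Present(1, 0);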
I am using Metal to render live video frames plus a custom control (a circular slider) for zooming, which I implemented using the Quartz 2D API. When I run the app in the debugger, I see the FPS drop from 30 to sometimes 11, and zooming is not smooth on older devices such as the iPad Mini 2. I then ran the code in Time Profiler and, surprisingly, there is no FPS drop there; the app runs smoothly under the profiler. How do I find out what is causing the FPS drop in debug?
It's probably the Metal Validation layer, which is active for your debug scheme. It's not surprising that programs generally perform worse when debugging (due to lack of optimizations, asserts being enabled, etc.).
If you want similar Metal performance when debugging, you can try disabling Metal Validation in the scheme settings. But then, of course, you lose the benefit of having your use of Metal validated.
I'm almost finished with my iOS game written in Swift + SpriteKit.
It's quite a simple game, 30-32 nodes at most. Only one thing has physics. The rest is a few animated clouds (around 6). CPU usage is around 2-3% and maximum RAM usage is 75-80 MB.
On top of that, I also get frame drops when changing from one scene to another. Why could that be?
(I'm pre-loading all the textures and sounds during game init, not in the scenes.)
When I use the simulator, from the 5S up to the 6S Plus, I don't see any frame drops there. So that's weird. It looks like it's not my game but my iPhone 6S?
Now, I also have other games from different developers installed on the same device, and I frequently get random frame drops in those too. It lags for 2-3 seconds and then comes back to 60 fps.
Does anyone know if this is something that started happening after some iOS update? I was even thinking this may be some kind of background service that's killing my phone. Call it Facebook, WhatsApp, Messenger, etc.
Is there any way I could possibly check on what's going on?
Could this be caused by newer versions of SpriteKit defaulting to the Metal render mode instead of OpenGL? For example, do your problems go away when PrefersOpenGL=YES is added to Info.plist? I covered a bit of this performance issue in my blog post about a SpriteKit repeat shader. Note that you should only be testing on an actual iOS device, not the simulator.
I am having some trouble grasping why my app's performance is at 24 fps as opposed to the usual 30 fps.
The frame time for both CPU and GPU varies between 6-18 ms (which should naively allow roughly 55-166 fps), and GPU utilization never surpasses 55%. Doesn't this mean that the frame rate should be higher?
When I use 'Analyze Performance', Xcode tells me:
Your performance is not limited by the OpenGL ES commands issued. Use the Instruments tool to investigate where your application is bottlenecked.
I am a beginner at this, so can someone explain to me how the frame time can be so low, yet the frame rate also so low? (The device is not the issue.)
Edit 4/2/2013
New development:
This frame rate drop only occurs when I run the app from Xcode (I know this because when the app's performance is poor, the sound is out of sync and the accelerometer sensitivity is lowered). When I stop running from Xcode and launch the app directly on my iPod, the frame rate is perfect. Now I am wondering whether there really are performance issues with my app at all. Is there any way that Xcode could be impeding the app's performance by running tests or monitoring the device?
You should set the frame rate on your AVCaptureConnection (e.g. via its videoMinFrameDuration). Then it should be OK.
Currently I am working on an AIR app for iOS and Android, targeting AIR 3.5.
Performance on the iPhone 4/4S has been acceptable overall, after a lot of optimizing: GPU rendering, StageQuality.LOW, avoiding vectors as much as possible, etc. I really put a lot of effort into boosting performance.
Still, every once in a while, the app becomes very slow. There is no precise point in time, or action, or combination of actions after which this occurs. Sometimes it doesn't occur for days. But when it does occur, only killing the app and launching it again helps, because the app stays slow after that. So I am not talking about minor hiccups that pass on their own.
The problem occurs only on (some) iPhone 4 and 4S devices. Not on the iPad 3 or 4, the iPhone 5, or any Android device...
Has anyone had similar experiences and pointers as to where a solution might be found?
What happens when GPU memory fills up? Or device memory? Could this be involved?
Please don't expect Adobe AIR to have the performance of native apps; I am developing an app with Adobe AIR as well.
From the sound of your development experience, I think it has to do with a memory issue, because performance is not too bad at the beginning stage but gets bad over time (so you have to kill the app). I suggest looking into memory leaks.
Hopefully my experience can help you.
I had a similar problem where, sometime during gameplay, the frame rate would drop from 30 fps to an unrecoverable 12 fps. At first I thought I was running out of GPU memory and it was falling back to rendering with the CPU.
Using Adobe Scout, I found that when this occurred, the rendering time was ridiculously high.
After updating to AIR 3.8, I fixed the problem by limiting the number of bitmaps being rendered and held in memory at once. I would only create new instances of backgrounds for the appropriate levels, then flag them for garbage collection when the level ended, wait a few seconds, and then move on to the next level.
What might solve your problem is reducing the number of textures you have in memory at one time, only showing the ones you need. If you want to swap out active textures for new ones, set all the objects holding that texture data to null:
testMovieClip = null; // drop the reference so the garbage collector can reclaim it
and remove all listeners from it so that garbage collection will pick it up.
Next, you can force garbage collection with AIR:
System.gc();
Instantiate the new texture you want to render a few frames after calling System.gc(). Monitor resources with Scout and the iOS companion app to confirm that it's working.
You could also try to detect when the frame rate drops, set some objects to null, and then force garbage collection. In my case, if I moved my game to an empty frame for a few seconds with garbage collection, the frame rate would recover and the game would resume rendering with the GPU.
Hope this helps!
I'm looking to create a hardware-accelerated DirectX (9 at the moment) window on a secondary screen. This screen is connected to the same graphics adapter as the primary screen (at least at the moment).
Currently, when I open the window on the secondary screen, whether by setting its position or by dragging it there, CPU usage jumps by about 10%, which seems to indicate that Windows is switching to a software fallback rather than hardware acceleration.
The machine is Windows XP running an NVIDIA graphics card (the cards vary, as this runs on several machines) with the latest driver. It's also running CUDA at the same time to produce the images, if that matters. The programming language is C++, with manual window and message-queue creation; no toolkit is used at the moment to manage the GUI.
Thanks
When you call CreateDevice, make sure to use the index of the adapter for the monitor you are targeting. The standard D3DADAPTER_DEFAULT value is just 0, which is the primary monitor. DirectX is a bit kludgy that way: if the window is on a different monitor than the one specified in CreateDevice, it will silently render into a framebuffer targeting the first monitor and then buffer-copy to a framebuffer on the second monitor via the OS window manager.
So the quick and dirty solution is to use CreateDevice(1, ...) instead, since that is almost always how a dual-monitor setup is indexed.
A more robust solution is to use MonitorFromWindow(hwnd) to find the monitor that the window covers the most, then iterate through the available D3D adapters looking for the one whose GetAdapterMonitor() returns the same monitor handle. If you have a system with more than two monitors, or if you don't know in advance which monitor you want and only have an HWND, you need this longer method.
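Here's a minimal sketch of that longer method, assuming d3d is a valid IDirect3D9* and hwnd is your window (both names are placeholders):

// Find the D3D9 adapter whose monitor hosts the given window.
UINT FindAdapterForWindow(IDirect3D9* d3d, HWND hwnd)
{
    // Monitor that the window currently covers the most.
    HMONITOR monitor = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
    for (UINT i = 0; i < d3d->GetAdapterCount(); ++i)
    {
        if (d3d->GetAdapterMonitor(i) == monitor)
            return i; // this adapter drives the window's monitor
    }
    return D3DADAPTER_DEFAULT; // fall back to the primary adapter
}

Pass the returned index as the first argument to CreateDevice so that rendering targets the correct monitor directly.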