Monogame - How to properly cap FPS in Monogame? - xna

I want to cap my game's frame rate at 30 FPS conditionally when it is built for Windows Phone. It doesn't need to run at 60 FPS there, and I've heard from many people that capping it on a mobile device is better because of battery drain.
I used the same snippet of code that XNA uses for Windows Phone 7:
//FrameRate is 30fps by default for WindowsPhone.
TargetElapsedTime = TimeSpan.FromTicks(333333);
But while it does its job of capping the FPS, it affects everything else too, causing stuttering and sound issues. Because of this, I suppose I'm doing something wrong.
Anything that would help would be great, as I was not able to find anything on the internet about this issue (most people want quite the opposite :D).

To fix your sound issues, look into multi-threading and running your sound system on a separate, uncapped thread (see the sketch after the update code below). For the game code, specifically the code that updates your assets, your method should work, but personally I do it differently.
// in your Game1 class variable definitions
private const float timeToNextUpdate = 1.0f / 30.0f;
private float timeSinceLastUpdate;

// in your Game1 Update method (note: Game.Update is protected, not public)
protected override void Update(GameTime gameTime)
{
    timeSinceLastUpdate += (float)gameTime.ElapsedGameTime.TotalSeconds;
    if (timeSinceLastUpdate >= timeToNextUpdate)
    {
        // update game (entities, physics, etc.)
        timeSinceLastUpdate = 0;
    }
    // systems you don't want to limit would be updated here

    base.Update(gameTime);
}
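To illustrate the separate-audio-thread suggestion, here is a minimal sketch, assuming your sound engine exposes some per-tick update call; "pumpAudio" below is a hypothetical stand-in for whatever that call is in your engine.
using System;
using System.Threading;

// Hypothetical sketch: run the sound system on its own background thread so it
// is not throttled by the 30 FPS game update. "pumpAudio" stands in for
// whatever per-tick call your sound engine needs (e.g. audioSystem.Update()).
public sealed class AudioThread : IDisposable
{
    private readonly Thread thread;
    private volatile bool running = true;

    public AudioThread(Action pumpAudio)
    {
        thread = new Thread(() =>
        {
            while (running)
            {
                pumpAudio();     // update/mix sound here, uncapped by the game loop
                Thread.Sleep(1); // yield to avoid burning a full core; tune as needed
            }
        })
        { IsBackground = true, Name = "Audio" };
        thread.Start();
    }

    public void Dispose()
    {
        running = false;
        thread.Join();
    }
}
Create one of these in your game's setup code and dispose it on exit; the 30 FPS cap in Update then never delays the audio work.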

Related

iOS - AVAudioPlayerNode.play() execution is very slow

I'm using AVAudioEngine for audio in an iOS game application. An issue I've encountered is that AVAudioPlayerNode.play() takes a long time to execute, which can be a problem in real-time applications such as games.
play() just activates the player node - you don't have to call it every time you play a sound. As such, it doesn't have to be called that often, but it does have to be called occasionally, such as to activate the player initially, or after it's been deactivated (which happens in some situations). Even if only called occasionally, the long execution times can be a problem, especially if you need to call play() on multiple players at once.
The execution time for play() seems to be proportional to the value of AVAudioSession.ioBufferDuration, which you can request to be changed using AVAudioSession.setPreferredIOBufferDuration(). Here's some code I'm using to test this:
import AVFoundation
import UIKit

class ViewController: UIViewController {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let ioBufferSize = 1024.0 // Or 256.0

    override func viewDidLoad() {
        super.viewDidLoad()

        let audioSession = AVAudioSession.sharedInstance()
        try! audioSession.setPreferredIOBufferDuration(ioBufferSize / 44100.0)
        try! audioSession.setActive(true)

        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: nil)
        try! engine.start()

        print("IO buffer duration: \(audioSession.ioBufferDuration)")
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        if player.isPlaying {
            player.stop()
        } else {
            let startTime = CACurrentMediaTime()
            player.play()
            let endTime = CACurrentMediaTime()
            print("\(endTime - startTime)")
        }
    }
}
Here are some sample timings for play() that I got using a buffer size of 1024 (which I believe is the default):
0.0218
0.0147
0.0211
0.0160
0.0184
0.0194
0.0129
0.0160
Here are some sample timings using a buffer size of 256:
0.0014
0.0029
0.0033
0.0023
0.0030
0.0039
0.0031
0.0032
As you can see above, for a buffer size of 1024, execution times tend to be in the 15-20 ms range (around a full frame at 60 FPS). With a buffer size of 256, it's around 3 ms - not as bad, but still costly when you only have ~17 ms per frame to work with.
This is on an iPad Mini 2 running iOS 12.4.2. This is obviously an old device, but the results I see on the simulator seem similarly proportional, so it seems to have more to do with the buffer size and the behavior of the function itself than with the hardware being used. I don't know what's going on under the hood, but it seems possible that play() blocks until the beginning of the next audio cycle, or something like that.
Requesting a lower buffer size seems like a partial solution, but there are some potential drawbacks. According to the documentation here, lower buffer sizes can mean more disk access when streaming from a file, and irrespective of that, the request may not be honored at all. Also, here, someone reports playback problems related to low buffer sizes. Taking all this into account, I'm disinclined to pursue this as a solution.
That leaves me with execution times for play() in the 15-20 ms range, which typically means a missed frame at 60 FPS. If I arrange things so that only one call to play() is made at a time, and only infrequently, maybe it won't be noticeable, but it's not ideal.
I've searched for information and asked about this in other places, but it seems either not many people are encountering this behavior in practice, or it isn't an issue for them.
AVAudioEngine is intended for use in real-time applications, so if I'm right that AVAudioPlayerNode.play() blocks for a significant amount of time proportional to the buffer size, that seems like a design issue. I realize this probably isn't an issue many are dealing with, but I'm posting here to ask if anyone has encountered this specific issue with AVAudioEngine, and if so, if there's any insight, suggestions, or workarounds anyone can offer.
I've investigated this fairly thoroughly. Here are my findings.
Having now tested the behavior on a variety of devices and iOS versions (including the latest version at the time of this writing, 13.2), and having had others test it as well, my current conclusion is that the long execution times for AVAudioPlayerNode.play() are inherent and that there's no obvious workaround. As noted in my original post, the execution times can be reduced by requesting a lower buffer duration, but as discussed earlier, this doesn't seem like a viable solution.
I heard from a credible source that calling play() on a background thread (e.g. using Grand Central Dispatch) should be safe, and indeed this would be one way to solve the problem. However, although it may technically be safe to call play() (or other AVAudioEngine-related functions) on different threads, I'm skeptical as to whether this is a good idea (further explanation below).
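As a rough illustration of that background-thread idea, here is a hedged Xamarin.iOS C# sketch (C# to match the other threads in this document; the DispatchQueue.DefaultGlobalQueue, AVAudioEngine.Running, and AVAudioPlayerNode.Play binding names are assumed to be the standard ones, and the Running check is only a best-effort guard):
using AVFoundation;
using CoreFoundation;

// Hedged sketch only: move the potentially blocking Play() call off the main
// queue. Assumes "engine" is an already-configured, started AVAudioEngine and
// "player" an attached AVAudioPlayerNode; the Running check is a best-effort
// guard against calling Play() on a stopped engine (see the caveats below).
public static class BackgroundPlay
{
    public static void PlayAsync(AVAudioEngine engine, AVAudioPlayerNode player)
    {
        DispatchQueue.DefaultGlobalQueue.DispatchAsync(() =>
        {
            if (engine.Running)   // assumption: standard Xamarin.iOS binding name
            {
                player.Play();
            }
        });
    }
}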
The documentation doesn't state this as far as I can tell, but AVAudioEngine will throw NSExceptions under various circumstances, which, without special handling, will result in application termination in Swift.
One of the things that will cause an NSException to be thrown is if you call AVAudioPlayerNode.play() while the engine is not running. Obviously if you only have your own code to worry about, you can take steps to ensure this doesn't occur.
However, iOS itself will sometimes stop the engine of its own accord, for example when an audio interruption occurs. If you call play() subsequent to that and before restarting the engine, an NSException will be thrown. It's fairly easy to avoid this mistake if all your calls to play() are on the main thread, but multithreading complicates the issue and seems like it could introduce the risk of accidentally calling play() after the engine has been stopped. Although there may be ways to work around this, multithreading seems to introduce undesirable complexity and fragility, so I've opted not to pursue it.
My current strategy is as follows. For the reasons discussed earlier, I'm not using multithreading. Instead, I'm doing everything I can to reduce the number of calls to play(), both overall and per-frame. This includes, among other things, only supporting stereo audio (for various reasons, supporting both mono and stereo can lead to more calls to play(), which is undesirable).
Lastly, I also investigated alternatives to AVAudioEngine. OpenAL is still supported on iOS, but is deprecated. A custom implementation using low-level APIs such as Audio Queue Services or Audio Units would be a possibility, but would be non-trivial. I've also looked at some open-source solutions, but the options I looked at use AVAudioEngine under the hood themselves and therefore suffer from the same problems, and/or have other shortcomings or limitations of their own. Of course there are also commercial options available, which may provide a solution for some developers.

MonoTouch and CATiledLayer

One of my old questions had to do with viewing PDF files in MonoTouch (I managed to accomplish this): Port of the iOS pdf viewer for xamarin
My issue is as follows: if I start closing and opening a PDF view (a view with a CATiledLayer) really fast and often, my app crashes with:
Got a SIGSEGV while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries
used by your application.
After researching around the internet for a few days, I found a post saying something along the lines of: the image backing store is being cleaned up and this is causing the error.
Edit:
OK, I have come to the conclusion that my app is cleaning up memory and my pointers are turning into nulls. I called GC.Collect() a couple of times and this seems to be the root of the problem.
I have removed all my calls to GC.Collect(), and I am currently running a stress test; I will update as I identify the issue.
After running some more tests, this is what I found out:
The error seems to originate from the TiledLayerDelegate : CALayerDelegate class.
The app only crashes if the Dispose method from CALayerDelegate is called; overriding the method as empty seems to prevent the app from crashing.
Running the app no longer seems to cause any issue whatsoever. It is apparent that something is going really wrong in the Dispose method of the CALayerDelegate.
Last finding: running the app like a monkey tends to heat up the device a good bit. I assume this is due to the intensive rendering of PDF pages (they are huge sheets, about 4,000 x 3,000 px).
protected override void Dispose (bool disposing)
{
    try {
        view = null;
        GC.Collect (2);
        //base.Dispose (disposing);
    } catch (Exception e) {
        //System.Console.Write(e);
    }
}
Now, more than anything, I am just wondering whether the phone heating up is, as I assume, nothing more than the CPU rendering the sheets and therefore normal. Does anyone have any ideas on how best to deal with the Dispose override?
Last edit: for anyone wanting to prevent crashes, this is what the last version of my layer view class looks like.
public class TiledPdfView : UIView
{
    CATiledLayer tiledLayer;

    public TiledPdfView (CGRect frame, float scale)
        : base (frame)
    {
        tiledLayer = Layer as CATiledLayer;
        tiledLayer.LevelsOfDetail = 4;
        tiledLayer.LevelsOfDetailBias = 4;
        tiledLayer.TileSize = new CGSize (1024, 1024);
        // here we still need to implement the delegate
        tiledLayer.Delegate = new TiledLayerDelegate (this);
        Scale = scale;
    }

    public CGPDFPage Page { get; set; }

    public float Scale { get; set; }

    public override void Draw (CGRect rect)
    {
        // empty (on purpose, so the delegate will draw)
    }

    [Export ("layerClass")]
    public static Class LayerClass ()
    {
        // instruct that we want a CATiledLayer (not the default CALayer) for the Layer property
        return new Class (typeof (CATiledLayer));
    }

    protected override void Dispose (bool disposing)
    {
        Cleanup ();
        base.Dispose (disposing);
    }

    private void Cleanup ()
    {
        InvokeOnMainThread (() => {
            tiledLayer.Delegate = null;
            this.RemoveFromSuperview ();
            this.tiledLayer.RemoveFromSuperLayer ();
        });
    }
}
Apple's sample code around that is not really great. Looking at the source of your tiled view I do not see a place where you set the layer delegate to nil. Under the hood, CATiledLayer creates a queue to call the tiled rendering in the background. This can lead to races and one way to work around this is explicitly nilling the delegate. Experiments showed that this can sometimes block, so expect some performance degradation. Yes, this is a bug and you should file a feedback - I did so years ago.
I'm working on a commercial PDF SDK (and we have a pretty popular Xamarin wrapper) and we moved away from CATiledLayer years ago. It's a relatively simple solution, but the nature of PDF is that to render a part, one has to traverse the whole render tree - it's not always easy to figure out what is on screen and what is not. Apple's renderer does an ok-ish job on that and performance is okay, but you'll get better performance if you render into one image and then move that around/re-render as the user scrolls. (Of course, this is trickier and harder to get right with memory, especially on retina screens.)
If you don't have the time to move away from CATiledLayer, some people go with the nuclear option and also manually remove the layer from the view. See e.g. this question for more details.

Timing accuracy with swift using GCD dispatch_after

I'm trying to create a metronome for iOS in Swift. I'm using a GCD dispatch queue to time an AVAudioPlayer. The variable machineDelay is being used to time the player, but it's running slower than the time I'm asking for.
For example, if I ask for a delay of 1 sec, it plays at 1.2 sec; 0.749 sec plays at about 0.92 sec, and 0.5 sec plays at about 0.652 sec. I could try to compensate by adjusting for this discrepancy, but I feel like there's something I'm missing here.
If there's a better way to do this altogether, please give suggestions. This is my first personal project so I welcome ideas.
Here are the various functions that should apply to this question:
// note: despite the name, this returns the number of seconds per beat for a given BPM
func milliseconds(beats: Int) -> Double {
    let ms = (60 / Double(beats))
    return ms
}

func audioPlayerDidFinishPlaying(player: AVAudioPlayer, successfully flag: Bool) {
    if self.playState == false {
        return
    }
    playerPlay(playerTick, delay: NSTimeInterval(milliseconds(bpm)))
}

func playerPlay(player: AVAudioPlayer, delay: NSTimeInterval) {
    let machineDelay: Int64 = Int64((delay - player.duration) * Double(NSEC_PER_SEC))
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, machineDelay), dispatch_get_main_queue(), { () -> Void in
        player.play()
    })
}
I have never really done anything with sound on iOS but I can tell you why you are getting those inconsistent timings.
What happens when you use dispatch_after() is that some timer is set somewhere in the OS and at some point soon after it expires, it puts your block on the queue. "at some point after" is going to be short, but depending on what the OS is doing, it will almost certainly not be close to zero.
The main queue is serviced by the main thread using the run loop. This means your task to play the sound is competing for use of the CPU with all the UI functionality. This means that the chance of it playing the sound immediately is quite low.
Finally, the completion handler will fire at some short time after the sound finishes playing but not necessarily straight away.
All of these little delays add up to the latency you are seeing. Unfortunately, depending on what the device is doing, that latency can vary. This is never going to work for something that needs precise timings.
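If you want to see that variability for yourself, a quick measurement loop makes it obvious. Here is a minimal C# analogue (plain .NET Task.Delay rather than GCD, purely to illustrate that OS-level timer callbacks fire late by a varying amount):
using System;
using System.Diagnostics;
using System.Threading.Tasks;

// Minimal analogue (plain .NET, not GCD): request a 500 ms delay repeatedly and
// print how late the callback actually fires. The exact numbers depend on load,
// but the callback is essentially never exactly on time, and the error varies.
class TimerJitter
{
    static async Task Main()
    {
        for (int i = 0; i < 10; i++)
        {
            var sw = Stopwatch.StartNew();
            await Task.Delay(500); // ask for 500 ms
            sw.Stop();
            Console.WriteLine($"requested 500 ms, got {sw.ElapsedMilliseconds} ms");
        }
    }
}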
There are, I think, a couple of ways to achieve what you want. However, audio programming is beyond my area of expertise. You probably want to start by looking at Core Audio. My five minutes of research suggests either Audio Queue Services or OpenAL, but those five minutes are literally everything I know about sound on iOS.
dispatch_after is not intended for sample-accurate callbacks.
If you are writing audio applications, there is no way to escape it: you need to implement some Core Audio code in one way or another.
Core Audio will "pull" specific counts of samples from your code, so you can schedule beats by counting samples instead of relying on timers. Do the math (figuratively ;)
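To make that math concrete, here is a tiny sketch (the 44.1 kHz sample rate and 120 BPM tempo are just example assumptions) of turning a tempo into a sample count that a render callback could count against:
using System;

// Tiny sketch of the "do the math" step: at a fixed sample rate, a beat interval
// is an exact number of samples, which a render callback can count instead of
// relying on wall-clock timers. The 44.1 kHz / 120 BPM values are just examples.
class BeatMath
{
    static void Main()
    {
        const double sampleRate = 44100.0; // assumed output sample rate
        const double bpm = 120.0;          // example tempo

        double secondsPerBeat = 60.0 / bpm;                                 // 0.5 s
        long samplesPerBeat = (long)Math.Round(sampleRate * secondsPerBeat);

        Console.WriteLine($"{bpm} BPM at {sampleRate} Hz = {samplesPerBeat} samples per beat");
        // prints: 120 BPM at 44100 Hz = 22050 samples per beat
    }
}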

Introducing delay in live stream using opencv boost

I am trying to create a delay in a live stream obtained from a webcam. I am using OpenCV. However, I am unable to generate the desired delay. I am confused about how to set and handle the FPS and the delay. Below is my code.
I am using a constant value for the FPS at the moment, but I am not sure if we can do that.
Currently, the stream is shown with some initial delay while the queue is being filled, but after that there is no delay in the stream.
fps = 15;
wait = (1000.0 / fps);
queue<cv::Mat> _buffer;
while (1)
{
    int size_x = 0;
    // grab a frame from the video camera
    boost::unique_lock<boost::mutex> lock(mutex, boost::defer_lock);
    bool read = cap.read(image);
    if (!read)
        break;
    locked = lock.try_lock();
    if (locked) {
        if (image.data) {
            _buffer.push(image);
            waitKey(wait);
            if ((int)_buffer.size() > (buffer_lenght))
            {
                popped_img = _buffer.front();
                _buffer.pop();
                imshow("VideoCaptureTutorial", popped_img);
            }
        }
        lock.unlock();
    }
}
I found two problems in your code.
1) Try decreasing the waitKey value; with that long a waiting period, OpenCV might skip frames when it's a live stream. This isn't directly related to your question, but I think it might be helpful.
waitKey(30);
The above line might be good enough.
2) You have to push Mat.clone(); I assume this might solve your problem in this case.
_buffer.push(image.clone());
OpenCV's get/set FPS won't work on a live stream. If you want to get the FPS of a live feed, you have to use your own counter, as I would do if I were you.
VideoCapture cap(0);
double counter = 0;
clock_t t1 = clock();
Mat frame;
while (1)
{
    counter++;
    cap.read(frame);
    if (counter >= 100) // measure over a fixed number of frames, then stop
        break;
}
// clock() returns processor ticks; divide by CLOCKS_PER_SEC to get seconds
double elapsedSeconds = (clock() - t1) / (double)CLOCKS_PER_SEC;
double fps = counter / elapsedSeconds;
A completely different way around it is to save the live feed as a video file using VideoWriter and then use the get-FPS call on that file to learn the FPS.

Where is my AVPlayer's memory, and how do I get it back?

I'm playing heaps of videos at the same time with AVPlayer. To reduce loading times, I'm storing the corresponding views in an NSCache.
This works fine until reaching a certain number of videos, from which the videos simply stop playing, or even appearing.
There's no error, log or memory warning. In particular, I'm listening to UIApplicationDidReceiveMemoryWarningNotification to clear the cache but this is never received.
If I remove the cache, all the videos play at expense of worse performance.
This makes me suspect that AVPlayer is using memory from a different process (which one?). And when that memory reaches a certain limit, new players cease to work.
Is this correct?
If so, is there a way to be notified when this magic limit is reached to take the appropriate measures (e.g., clear the cache) to ensure playback of other media?
Good news and bad news - the good news is that you can probably fix the problem; the bad news is that it takes work and is somewhat complex.
Root Problem
The reason you don't get notified early is that iOS does not find out that your app has exceeded its memory budget until it's almost too late, and then it immediately kills it. The problem has to do with the way iOS (and OS X) manage the file system cache. Normally, when files get opened, as you read the data, the file data gets transferred into a buffer in the Unified Buffer Cache (a term you can google for more info) - I'll call it UBC from now on.
So suppose you have 10 open files, and you have read every file to the end, but have not closed the files. Well, all that data is sitting in the UBC. Now, if you close the files, the buffers are all freed. And technically, the OS can purge these buffers too - only it seems that by the time it realizes memory is tight, it chooses to blow the app away first (and there may be valid reasons for it to do this). So imagine that your app is showing videos, and the videos get loaded through the file system: the number of free buffers starts dropping. At some point iOS notices this, tracks down whom most of them belong to (your app), and wham, kills your app ASAP.
I hit this problem myself in an open source project I support, PhotoScrollerNetwork. Users started complaining that their project was getting terminated by the system, like yours, without any notification. I tried in vain to monitor the UBC (there are APIs on OS X to do so, but not on iOS). In the end I found a solution using a heuristic - monitor all your memory usage, including the UBC, and don't exceed 50% of the total available iOS memory pool.
So (you might ask) - what is the Apple-approved way to solve this problem? Well, there is none. How do I know that? Because I had a half-hour-long discussion at WWDC 2012 with the Director of Core iOS in one of the labs (after getting ping-ponged around by others who had no idea what I was talking about). In the end, after I explained the above heuristic, he told me directly that the solution was probably as good as any he could think of. Without an API to directly monitor the UBC, you can only approximate its usage and adjust accordingly.
But, you say, I'm using NSCache - why doesn't the system account for the AVPlayer memory there? The reason is undoubtedly the UBC - an AVPlayer instance probably only consumes a few thousand K of memory itself; it's the open file to the video that is not accounted for by iOS.
Possible Solutions
1) If you can load the videos directly into an NSData object, and keep that in the NSCache, you can most likely totally avoid the UBC issues mentioned above. [I don't know enough about the AV system to know if you can do this.] In this case the system should be capable of purging memory when it needs to.
2) Continue using your original code, but add memory management to it. That is, when you create an AVPlayer instance, you will need to account for the size of the video in bytes, and keep a running tally of all this memory. When you approach 50% of total device free memory, start purging old AVPlayers.
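To make option 2 concrete, here is a hedged sketch of the running-tally idea in plain C# (not from PhotoScrollerNetwork; the names are hypothetical): track bytes per cached item and evict the oldest entries once usage exceeds the chosen budget.
using System.Collections.Generic;

// Hypothetical sketch of option 2: keep a running byte tally for cached players
// and evict the oldest entries once usage exceeds the chosen budget (e.g. 50%
// of free memory measured at startup). "TPlayer" stands in for whatever object
// wraps an AVPlayer in your cache.
public sealed class ByteBudgetCache<TPlayer>
{
    private readonly long budgetBytes;
    private readonly LinkedList<(string Key, TPlayer Player, long Bytes)> entries =
        new LinkedList<(string Key, TPlayer Player, long Bytes)>();
    private long usedBytes;

    public ByteBudgetCache(long budgetBytes)
    {
        this.budgetBytes = budgetBytes;
    }

    public void Add(string key, TPlayer player, long videoSizeBytes)
    {
        entries.AddLast((key, player, videoSizeBytes));
        usedBytes += videoSizeBytes;

        // Evict oldest items until we are back under budget.
        while (usedBytes > budgetBytes && entries.Count > 1)
        {
            var oldest = entries.First.Value;
            entries.RemoveFirst();
            usedBytes -= oldest.Bytes;
            // Here you would also tear down the evicted AVPlayer / close its file.
        }
    }
}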
Code
For completeness, I've provided the relevant code from PhotoScrollerNetwork below. If you want more details, you can peruse the project - however, it's quite complex, so expect to spend some time (it's doing JPEG decoding on the fly for massive images and writing tiles to the file system as the decode proceeds).
// Data Structure
typedef struct {
    size_t freeMemory;
    size_t usedMemory;
    size_t totlMemory;
    size_t resident_size;
    size_t virtual_size;
} freeMemory;
Early on in your app:
// ubc_threshold_ratio defaults to 0.5f
// Take a big chunk of either free memory or all memory
freeMemory fm = [self freeMemory:@"Initialize"];
float freeThresh = (float)fm.freeMemory * ubc_threshold_ratio;
float totalThresh = (float)fm.totlMemory * ubc_threshold_ratio;
size_t ubc_threshold = lrintf(MAX(freeThresh, totalThresh));
size_t ubc_usage = 0;
// Method on some class to monitor the memory pool
- (freeMemory)freeMemory:(NSString *)msg
{
    // http://stackoverflow.com/questions/5012886
    mach_port_t host_port;
    mach_msg_type_number_t host_size;
    vm_size_t pagesize;

    freeMemory fm = { 0, 0, 0, 0, 0 };

    host_port = mach_host_self();
    host_size = sizeof(vm_statistics_data_t) / sizeof(integer_t);
    host_page_size(host_port, &pagesize);

    vm_statistics_data_t vm_stat;

    if (host_statistics(host_port, HOST_VM_INFO, (host_info_t)&vm_stat, &host_size) != KERN_SUCCESS) {
        LOG(@"Failed to fetch vm statistics");
    } else {
        /* Stats in bytes */
        natural_t mem_used = (vm_stat.active_count +
                              vm_stat.inactive_count +
                              vm_stat.wire_count) * pagesize;
        natural_t mem_free = vm_stat.free_count * pagesize;
        natural_t mem_total = mem_used + mem_free;
        fm.freeMemory = (size_t)mem_free;
        fm.usedMemory = (size_t)mem_used;
        fm.totlMemory = (size_t)mem_total;

        struct task_basic_info info;
        if (dump_memory_usage(&info)) {
            fm.resident_size = (size_t)info.resident_size;
            fm.virtual_size = (size_t)info.virtual_size;
        }

#if MEMORY_DEBUGGING == 1
        LOG(@"%@: "
            "total: %u "
            "used: %u "
            "FREE: %u "
            " [resident=%u virtual=%u]",
            msg,
            (unsigned int)mem_total,
            (unsigned int)mem_used,
            (unsigned int)mem_free,
            (unsigned int)fm.resident_size,
            (unsigned int)fm.virtual_size
        );
#endif
    }
    return fm;
}
When you open a video, add its size to ubc_usage, and when you close one, decrement it. When you want to open a new video, test ubc_usage against ubc_threshold, and if it exceeds that value, you have to close something first.
PS: you can try calling that freeMemory method at other times, and see, but in my case it hardly changes at all when files get opened - the system seems to consider the whole UBC as "free", since it could purge it if it needed to (I guess).
If you're throwing all of these videos into an NSCache, you have to be prepared for the cache to throw away items when it feels like they are consuming too much memory. From the NSCache documentation:
The NSCache class incorporates various auto-removal policies, which ensure that it does not use too much of the system's memory. The system automatically carries out these policies if memory is needed by other applications. When invoked, these policies remove some items from the cache, minimizing its memory footprint.
Check to see if you're getting nils back from the cache, and if you are, you'll have to reconstruct your objects.
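In practice that is a get-or-rebuild lookup. Here is a minimal plain-C# sketch of the pattern (the Dictionary and factory names are hypothetical stand-ins; NSCache evicts on its own, the Dictionary here is only there to make the example self-contained):
using System;
using System.Collections.Generic;

// Minimal get-or-rebuild sketch: treat a missing/nil cache result as "rebuild it
// now" rather than an error. The Dictionary here just stands in for the cache
// (NSCache evicts on its own; a Dictionary does not); "factory" is whatever
// reconstructs the evicted object, e.g. a hypothetical CreatePlayerView(url).
public sealed class RebuildingCache<TValue> where TValue : class
{
    private readonly Dictionary<string, TValue> cache = new Dictionary<string, TValue>();
    private readonly Func<string, TValue> factory;

    public RebuildingCache(Func<string, TValue> factory)
    {
        this.factory = factory;
    }

    public TValue Get(string key)
    {
        if (!cache.TryGetValue(key, out var value) || value == null)
        {
            value = factory(key); // cache miss or evicted entry: reconstruct
            cache[key] = value;
        }
        return value;
    }

    // Call this when an entry is evicted or on a memory warning.
    public void Evict(string key)
    {
        cache.Remove(key);
    }
}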
Edit:
It is also worth mentioning that objc.io #7 advises against storing large objects in an NSCache:
The eviction method of NSCache is non-deterministic and not documented. It's not a good idea to put in super-large objects like images that might fill up your cache faster than it can evict itself.
