I'm using OpenAL on iOS and seem to be getting some kind of race condition where rapid alternating calls to get and set AL_BYTE_OFFSET end up resetting the value to 0. This is all done while the audio source is playing.
+ (void)seekTrackSeconds:(float)bytesDelta
{
    ALint position;
    alGetSourcei(sourcesArr[targetSourceIdx], AL_BYTE_OFFSET, &position);
    NSLog(@"byte pos read: %d", position);
    alSourcei(sourcesArr[targetSourceIdx], AL_BYTE_OFFSET, position + (ALint)bytesDelta);
}
After the first three calls (always three, oddly), the byte position read resets to 0. I suspect there's a race condition between setting and reading that value. If I put a 20 ms sleep between the read and the write, the error happens less often; at a 200 ms sleep I don't notice it at all, but that's unacceptable performance.
Has anyone experienced the same issue? For background, I'm trying to implement 'scroll to seek' functionality for song playback: as you scroll a virtual jog wheel, the song should fast-forward.
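For context, here's roughly the kind of conversion I have in mind between wheel movement and the byte offset; this is only a sketch, and the constants (kSecondsPerRevolution, kBytesPerSecond) are made up rather than taken from my project:
#include <OpenAL/al.h>

// Illustration only -- the constants are invented for the example.
static const float kSecondsPerRevolution = 4.0f;     // one full turn of the jog wheel
static const ALint kBytesPerSecond = 44100 * 2 * 2;  // 44.1 kHz, 16-bit, stereo PCM

// Called from the jog-wheel gesture handler with the fraction of a revolution scrolled.
static void jogWheelDidScroll(ALuint source, float revolutionDelta)
{
    ALint position = 0;
    alGetSourcei(source, AL_BYTE_OFFSET, &position);

    ALint bytesDelta = (ALint)(revolutionDelta * kSecondsPerRevolution * kBytesPerSecond);
    alSourcei(source, AL_BYTE_OFFSET, position + bytesDelta);
}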
There is another question with the same title on this site, but that one didn't solve my problem.
I'm writing a Direct3D 11 desktop application, and I'm trying to implement the waitable swap chain introduced in this document to reduce latency (specifically, the latency between when the user moves the mouse and when the monitor displays the change).
The problem is that I call WaitForSingleObject on the handle returned by GetFrameLatencyWaitableObject, but it does not wait at all and returns immediately (which results in my application running at roughly 200 to 1000 fps on a 60 Hz monitor). So my questions are:
Did I even understand correctly what a waitable swap chain does? My understanding is that it is very similar to VSync (which is done by passing 1 as the SyncInterval parameter when calling Present on the swap chain), except that instead of waiting for a previous frame to finish presenting on the screen at the end of the render loop (when Present is called), we can wait at the start of the render loop by calling WaitForSingleObject on the waitable object.
If I understood correctly, what am I missing? Or does this only work for UWP applications (since that document and its sample project are UWP)?
Here's my code to create the swap chain:
SwapChainDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
SwapChainDesc.Stereo = false;
SwapChainDesc.SampleDesc.Count = 1;
SwapChainDesc.SampleDesc.Quality = 0;
SwapChainDesc.BufferUsage = D3D11_BIND_RENDER_TARGET;
SwapChainDesc.BufferCount = 2;
SwapChainDesc.Scaling = DXGI_SCALING_STRETCH;
SwapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
SwapChainDesc.AlphaMode = DXGI_ALPHA_MODE_UNSPECIFIED;
SwapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT;
result = Factory2->CreateSwapChainForHwnd(Device.Get(), hWnd, &SwapChainDesc, &FullscreenDesc, nullptr, &SwapChain1);
if (FAILED(result)) return result;
Here's my code to get the waitable object:
result = SwapChain2->SetMaximumFrameLatency(1); // also tried setting it to "2"
if (FAILED(result)) return result;
WaitableObject = SwapChain2->GetFrameLatencyWaitableObject(); // also, I never call ResizeBuffers
if (WaitableObject == NULL) return E_FAIL;
And here's my code for the render loop:
while (Running) {
    if (WaitForSingleObject(WaitableObject, 1000) == WAIT_OBJECT_0) {
        Render();
        HRESULT result = SwapChain->Present(0, 0);
        if (FAILED(result)) return result;
    }
}
So I took some time to download and test the official sample, and now I think I'm ready to answer my own questions:
No, a waitable swap chain does not work the way I thought; it does not wait until a previous frame has been presented on the monitor. Instead, I think it waits either until all the work issued before Present has finished (the GPU has finished rendering to the render target, but not yet displayed it on the monitor) or until that work has been queued (the CPU has finished sending the GPU all the commands, but the GPU hasn't finished executing them yet). I'm not sure which is the real case, but either one would, in theory, help reduce input latency, and according to my tests it did, both with VSync on and with it off. Also, now that I know this has almost nothing to do with framerate control, I see that it shouldn't be compared with VSync.
I don't think it's limited to UWP
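For anyone who wants to check this on their own machine, here's the kind of measurement I used; a rough sketch that assumes WaitableObject was obtained as in the question, and the exact numbers will of course vary:
#include <windows.h>
#include <stdio.h>

// Rough sketch: measure how long the latency wait actually blocks each frame.
// With SetMaximumFrameLatency(1) on a 60 Hz monitor, the wait should settle
// around the frame interval (~16 ms) once the latency queue is full; a value
// near 0 ms on every frame suggests the wait is effectively a no-op.
static double WaitAndMeasureMs(HANDLE waitableObject)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    WaitForSingleObjectEx(waitableObject, 1000, FALSE);
    QueryPerformanceCounter(&t1);

    double ms = (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("latency wait: %.2f ms\n", ms);
    return ms;
}
Call it at the top of the render loop in place of the bare WaitForSingleObject and watch the printed values.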
And now I'd like to share some conclusions I've reached about input latency and framerate control:
I now believe that reducing input latency and controlling framerate are goals that pull in opposite directions, and that a perfect balance point between them probably doesn't exist.
For example, if I limit the framerate to one frame per vblank, then the input latency (in the ideal case) is as high as the monitor's frame interval (about 16 ms for a 60 Hz monitor).
But when I don't limit the framerate, the input latency is only as high as the time the GPU takes to finish a frame (in the ideal case about 1 or 2 ms, which is not just better on paper; the improvement is visible to the user), although a lot of frames, and the CPU/GPU resources used to render them, are wasted.
As an FPS player myself, the reason I want to reduce input latency is obvious: I hate input lag.
The reasons I want to invest in framerate control are, first, that I hate frame tearing (a little more than I hate input lag) and, second, that I want to ease CPU/GPU usage when possible.
However, I recently discovered that frame tearing is completely eliminated by the flip model (I simply don't get any tearing at all when using the flip model, no VSync needed), so I don't need to worry about tearing anymore.
So for now I plan to prioritize latency reduction over framerate control, until (if ever) I move to D3D12 and figure out a way to ease CPU/GPU usage while preserving low input latency.
I'd like to create a class/struct/other that contains each measure of a song, complete with an independent tempo and beat count, and then play the entire song back (with potential updates from user input). I only know how to change those variables on an AKSequencer track as a whole; is there a way to store that data per measure and then have it play back as one song, keeping coherence between the measures so the playback doesn't "jump" between them? Thanks!
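To show what I mean, this is roughly the kind of per-measure container I have in mind (just a sketch, not real code):
// Hypothetical per-measure data, for illustration only.
typedef struct {
    double tempoBPM;   // independent tempo for this measure
    int    beatCount;  // number of beats in this measure
    // ... note data for the measure ...
} Measure;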
AKSequencer is not good at setting loop length on the fly, but it is totally fine for adding to or re-writing the contents of a track while the sequencer is running. This includes tempo events.
Why don't you set the length to something arbitrarily long and string together your MIDI events measure after measure without ever looping? Keep track of how many beats have been written so far, and just keep adding after that point. Doing this while the sequencer is running should be no problem. You could even automate writing the next bar by triggering a callback function near the end of each measure that writes the next segment (which could be selected or 'cued up' at run-time). You can schedule the tempo changes with addTempoEventAt(), using the starting point of the next segment as the position.
When your user stops the sequence, clear the track(s), reset the tempo, rewind the sequence and start over.
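To make the append-as-you-go idea concrete, here's a rough sketch at the level of Apple's MusicSequence API (which AKSequencer wraps, as far as I know); the note pattern and tempo values are made up, and the AKSequencer calls would be the Swift equivalents of these:
#include <AudioToolbox/AudioToolbox.h>

// Keep a running beat counter and write each new measure (tempo change + notes)
// immediately after the last beat written. 'track' and 'tempoTrack' belong to a
// MusicSequence that is already playing.
static MusicTimeStamp beatsWrittenSoFar = 0;

static void appendMeasure(MusicTrack track, MusicTrack tempoTrack,
                          Float64 bpm, int beatsInMeasure)
{
    // Tempo event at the start of the new measure.
    MusicTrackNewExtendedTempoEvent(tempoTrack, beatsWrittenSoFar, bpm);

    // One placeholder note per beat.
    for (int beat = 0; beat < beatsInMeasure; beat++) {
        MIDINoteMessage note = { 0, 60, 100, 0, 1.0f };  // channel, note, velocity, releaseVelocity, duration
        MusicTrackNewMIDINoteEvent(track, beatsWrittenSoFar + beat, &note);
    }

    beatsWrittenSoFar += beatsInMeasure;  // the next measure starts here
}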
I'm building a drum machine to learn how to use MIDI on iOS. I managed to get it working up to a point; however, I have the following problem. When the user taps a certain button, I need to add a sound to my MIDI loop while the MIDI player is playing, and unfortunately I can't simply do:
MusicTrackNewMIDINoteEvent(track, 0, &message);
even though the track is looping and has a fixed length, so theoretically it should come back to 0 at some point. I also tried this:
MusicTrackNewMIDINoteEvent(track, noteTimestamp, &message);
where noteTimestamp is the timestamp I receive from the player. Finally, I managed to get it working with something like this:
MusicTrackNewMIDINoteEvent(track, noteTimestamp+.5, &message);
but needless to say, the 0.5 delay is not really what I want for my drum machine, which should be as responsive as possible.
So, how does one tackle this problem? How can you push a note onto the track as soon as possible, without any delay?
You're laying down an event on the track, and by the time you lay the event down the "playhead" is already past the point where it can do anything with it.
So continue to do what you're doing (without shifting the time) as a means to "record" the event for the next time the loop comes around, but to hear it immediately you'll need to fire off a MIDI message manually, apart from the track:
int note = 60;       // middle C
int velocity = 127;  // full velocity
int offset = 0;      // sample-frame offset within the current render cycle
MusicDeviceMIDIEvent(_yourSamplerUnit, kMIDIMessage_NoteOn << 4 | 0, note, velocity, offset);  // 0x9 << 4 | channel 0
Again, firing the manual MIDI event lets the listener hear the sound right away, and laying the event into the track lets your track "record" it for the next time around.
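Putting the two halves together, here's a rough sketch (your player, track, and sampler-unit names will differ, and the note duration is arbitrary):
#include <AudioToolbox/AudioToolbox.h>

// Schedule the note into the looping track at the current position (so it plays
// on the next pass of the loop) and sound it immediately through the sampler.
static void recordAndTriggerNote(MusicPlayer player, MusicTrack track,
                                 AudioUnit samplerUnit, UInt8 note, UInt8 velocity)
{
    MusicTimeStamp now = 0;
    MusicPlayerGetTime(player, &now);                   // current beat position in the loop

    MIDINoteMessage message = { 0, note, velocity, 0, 0.5f };  // channel, note, velocity, releaseVelocity, duration
    MusicTrackNewMIDINoteEvent(track, now, &message);   // "record" it for the next pass

    MusicDeviceMIDIEvent(samplerUnit, 0x90, note, velocity, 0);  // note-on on channel 0: hear it now
}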
I'm trying to use the currentPlaybackRate property on MPMusicPlayerController to adjust the tempo of a music track as it plays. The property works as expected when the rate is less than 0.90 or greater than 1.13, but for the range just above and below 1, there seems to be no change in tempo. Here's what I'm trying:
UIAppDelegate.musicPlayer = [MPMusicPlayerController iPodMusicPlayer];
... load music player with track from library
[UIAppDelegate.musicPlayer play];
- (void)speedUp
{
    UIAppDelegate.musicPlayer.currentPlaybackRate = UIAppDelegate.musicPlayer.currentPlaybackRate + 0.03125;
}

- (void)speedDown
{
    UIAppDelegate.musicPlayer.currentPlaybackRate = UIAppDelegate.musicPlayer.currentPlaybackRate - 0.03125;
}
I can monitor the value of currentPlaybackRate and see that it's being set correctly, but there seems to be no difference in playback tempo until the 0.90 or 1.13 threshold has been reached. Does anyone have any guidance or experience on the matter?
I'm no expert, but I suspect that this phenomenon may be merely an artefact of the algorithm used to change the playback speed without raising or lowering the pitch. It's a tricky business, and here it must be done in real time without much distortion, so probably an integral multiple of the tempo is needed. You might want to read the wikipedia article on time stretching, http://en.wikipedia.org/wiki/Audio_timescale-pitch_modification
Actually, I've found the problem: the statement myMusicPlayer.currentPlaybackRate = 1.2 must be placed after the call to .play(). If you set the rate before calling .play(), it does not work.
I'm trying to optimize a function (an FFT) on iOS, and I've set up a test program to time its execution over several hundred calls. I'm using mach_absolute_time() before and after the function call to time it. I'm doing the tests on an iPod touch 4th generation running iOS 6.
Most of the timing results are roughly consistent with each other, but occasionally one run will take much longer than the others (as much as 100x longer).
I'm pretty certain this has nothing to do with my actual function. Each run has the same input data, and is a purely numerical calculation (i.e. there are no system calls or memory allocations). I can also reproduce this if I replace the FFT with an otherwise empty for loop.
Has anyone else noticed anything like this?
My current guess is that my app's thread is somehow being interrupted by the OS. If so, is there any way to prevent this from happening? (This is not an app that will be released on the App Store, so non-public APIs would be OK for this.)
I no longer have an iOS 5.x device, but I'm pretty sure this was not happening prior to the update to iOS 6.
EDIT:
Here's a simpler way to reproduce:
for (int i = 0; i < 1000; ++i)
{
    uint64_t start = mach_absolute_time();
    for (int j = 0; j < 1000000; ++j);
    uint64_t stop = mach_absolute_time();
    printf("%llu\n", stop - start);
}
Compile this in debug (so the for loop is not optimized away) and run; most of the values are around 220000, but occasionally a value is 10 times larger or more.
In my experience, mach_absolute_time is not reliable. Now I use CFAbsoluteTime instead. It returns the current time in seconds, with far better than one-second precision.
const CFAbsoluteTime newTime = CFAbsoluteTimeGetCurrent();
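To time a block of code with it, just subtract two samples; a minimal sketch:
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

// Sketch: wall-clock timing of an arbitrary piece of work using CFAbsoluteTime.
static void timeSomeWork(void (*work)(void))
{
    const CFAbsoluteTime t0 = CFAbsoluteTimeGetCurrent();
    work();
    const CFAbsoluteTime elapsed = CFAbsoluteTimeGetCurrent() - t0;
    printf("elapsed: %f s\n", elapsed);
}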
mach_absolute_time() is actually very low level and reliable. It runs at a steady 24MHz on all iOS devices, from the 3GS to the iPad 4th gen. It's also the fastest way to get timing information, taking between 0.5µs and 2µs depending on CPU. But if you get interrupted by another thread, of course you're going to get spurious results.
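For reference, the raw ticks can be converted to nanoseconds with mach_timebase_info; a small helper (consistent with the 24MHz figure above, where numer/denom works out to 125/3):
#include <mach/mach_time.h>
#include <stdint.h>

// Convert mach_absolute_time() ticks to nanoseconds using the timebase info.
static uint64_t machTicksToNanoseconds(uint64_t ticks)
{
    static mach_timebase_info_data_t timebase;
    if (timebase.denom == 0) {
        mach_timebase_info(&timebase);   // fills in numer/denom once
    }
    return ticks * timebase.numer / timebase.denom;
}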
SCHED_FIFO with maximum priority will allow you to hog the CPU, but only for a few seconds at most, then the OS decides you're being too greedy. You might want to try sleep( 5 ) before running your timing test, as this will build up some "credit".
You don't actually need to start a new thread; you can temporarily change the priority of the current thread with this:
struct sched_param sched;
sched.sched_priority = 62;
pthread_setschedparam( pthread_self(), SCHED_FIFO, &sched );
Note that sched_get_priority_min and sched_get_priority_max return a conservative 15 and 47, but this only corresponds to an absolute priority of about 0.25 to 0.75. The actual usable range is 0 to 62, which corresponds to 0.0 to 1.0.
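If you want to keep the change temporary, here's a sketch of raising the priority just for the timing run and then restoring the original settings (the priority value follows the range described above):
#include <pthread.h>

// Sketch: temporarily switch the current thread to SCHED_FIFO for a timing run,
// then restore the original policy and priority.
static void runWithRealtimePriority(void (*timedWork)(void))
{
    int oldPolicy;
    struct sched_param oldParam;
    pthread_getschedparam(pthread_self(), &oldPolicy, &oldParam);  // remember current settings

    struct sched_param rt;
    rt.sched_priority = 62;                                        // top of the usable range
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &rt);

    timedWork();                                                   // e.g. the mach_absolute_time loop above

    pthread_setschedparam(pthread_self(), oldPolicy, &oldParam);   // restore
}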
It happens when the app spends some time in other threads.