MIDI Note On in the middle of a note - iOS

I am trying to play MIDI notes in iOS using an AUGraph and Audio Units. I used kAudioUnitSubType_MIDISynth to successfully create a MIDI synth unit, loaded a SoundFont file into the unit, and used a Note On (0x90|0) message to start a note:
osStatus = MusicDeviceMIDIEvent(midiSynthUnit,
                                0x90 | 0, // Note On, channel 0
                                60,       // pitch
                                100,      // velocity
                                0);       // sample offset
However, I would like to pause the music player at some point. When playback resumes after a pause, a note's Note On time may already have passed.
The image below is an example: the vertical line marks the point in time where MIDI playback has to resume; the note D3 has passed its start time but has not yet ended.
How can I play a MIDI note starting from the middle of its duration?
Thanks
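One workable approach is to re-trigger straddling notes on resume. Below is a minimal sketch (the Note type and the notes array are hypothetical): when playback resumes, send a fresh Note On for every note whose start time has passed but whose end time has not. Note that a plain Note On restarts the note from its attack; standard MIDI has no message that starts a note partway through.

import AudioToolbox

struct Note {
    let pitch: UInt32     // MIDI note number, e.g. 50 for D3
    let velocity: UInt32
    let start: Double     // seconds from the beginning of the sequence
    let end: Double
}

// Hypothetical resume helper: re-trigger every note that straddles `time`.
func resume(at time: Double, notes: [Note], synth: AudioUnit) {
    for note in notes where note.start < time && note.end > time {
        MusicDeviceMIDIEvent(synth,
                             0x90 | 0,      // Note On, channel 0
                             note.pitch,
                             note.velocity,
                             0)             // play immediately
    }
}

Remember to schedule the corresponding Note Off (0x80|0) events relative to the resume position so the re-triggered notes still end on time.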

Related

How do you fast-forward or rewind audio (like a song) by a certain time interval with AVAudioPlayer in Swift 5?

I need help using AVAudioPlayer, which I'm not very familiar with. I need to know how to fast-forward and/or rewind audio with AVAudioPlayer in Swift 5. There is a currentTime property on AVAudioPlayer that may be helpful here, but I'm not sure how to use it.
You are correct: set the AVAudioPlayer's currentTime. It measures the position within the sound, in seconds. (The type is TimeInterval, but that's just a Double signifying seconds or a fraction thereof.)
As the documentation remarks:
By setting this property you can seek to a specific point in a sound file or implement audio fast-forward and rewind functions.
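A minimal sketch of a skip function built on that property (the helper name and the 15-second interval are illustrative, and player is assumed to be an existing AVAudioPlayer):

import AVFoundation

// Hypothetical helper: move the playhead forward (positive) or back (negative).
func skip(_ player: AVAudioPlayer, by seconds: TimeInterval) {
    let target = player.currentTime + seconds
    // Clamp to the playable range [0, duration].
    player.currentTime = max(0, min(target, player.duration))
}

skip(player, by: 15)   // fast-forward 15 seconds
skip(player, by: -15)  // rewind 15 seconds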

Synchronizing Looping Video to Custom Audio

WWDC 2012's Real-Time Media Effects and Processing during Playback session explains how to synchronize an AVPlayer with custom audio. Paraphrasing, you
start playback of the AVPlayer and custom audio at the same time
play both at the same rate
The first part is achieved by priming the AVPlayer with prerollAtRate:completionHandler:, so playback can be started with "minimal latency", and the second part by making the AVPlayer use The iOS Audio Clock.
The code snippet assumes you have calculated the future host time at which you anticipate the audio hitting the speaker (taken literally, this last phrase seems to imply supreme omniscience, so let's just read it as your desired audio start host time).
CMClockRef audioClock = NULL;
OSStatus err = CMAudioClockCreate(kCFAllocatorDefault, &audioClock);

if (err == noErr) {
    [myPlayer setMasterClock:audioClock];
    [myPlayer prerollAtRate:1.0 completionHandler:^(BOOL finished) {
        if (finished) {
            // Calculate future host time here
            [myPlayer setRate:1.0 time:newItemTime atHostTime:hostTime];
        } else {
            // Preroll was interrupted or cancelled
        }
    }];
}
It's a tiny amount of code, yet it raises so many questions. What happens if the preroll currentTime and newItemTime don't agree? Don't video and audio play at the same rate of one second per second? So shouldn't their clocks be the same? Doesn't 48kHz divide 60fps? How can the code only need to know the desired start time and no other details of my audio code? Is it due to the one iOS Audio Clock? Is this API ingenious or an awful non-orthogonal mish-mash that won't compose well with other AVFoundation features?
Despite my doubts, the code seems to work, but I want to seamlessly loop the video and custom audio. The question is how?
I can't preroll an already-playing AVPlayer, because preroll happens from currentTime (and the player wouldn't appreciate having its buffers changed while playing). Maybe an alternating pair of prerolled AVPlayers? AVPlayerLooper sounds promising. It's not actually an AVPlayer, but it wraps an AVQueuePlayer (which is). Assuming preroll works on the AVQueuePlayer, and I pay extra-special attention to looping the custom audio, this may work; a sketch of that route follows. Otherwise, I think the only remaining option is to drop the prerolling and shoehorn the video and custom audio into an audio tap within an AVComposition, which would be looped with the help of an AVPlayerLooper.
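For reference, the AVPlayerLooper route might be set up like this (a sketch, assuming it composes with prerolling; videoURL is a placeholder):

import AVFoundation

let item = AVPlayerItem(url: videoURL)
let queuePlayer = AVQueuePlayer()
// AVPlayerLooper keeps refilling the queue with copies of the template item.
let looper = AVPlayerLooper(player: queuePlayer, templateItem: item)
// Keep a strong reference to `looper`; looping stops if it is deallocated.

The custom audio would still need its own looping logic, started against the same iOS Audio Clock as above.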

How can I make Apple's mixer audio unit on iOS not do an audio fade?

I am using The Amazing Audio Engine to simply play an audio file, but I find that when the channel starts playing, there is some automatic fade-in happening.
You can see the top waveform is the output of my iPad, and the bottom waveform is the actual raw audio file. There is definitely a 30 ms micro-fade being applied.
Nothing in The Amazing Audio Engine library does that, so it must be happening internally in Apple's mixer audio unit. Is there any way to turn off this behavior?
I suspect that the AudioFilePlayer (used by TAAE) uses Extended Audio File Services under the hood. An ExtAudioFileRef will do that on the first read after a seek if any decoding or sample rate conversion is involved. I had to use Audio File Services directly to get rid of the implicit fading.
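A minimal sketch of that direct Audio File Services route (error handling elided; url is a placeholder, and this only works as-is for linear PCM, since no decoding or rate conversion happens here):

import AudioToolbox

var fileID: AudioFileID?
AudioFileOpenURL(url as CFURL, .readPermission, 0, &fileID)

var numBytes: UInt32 = 4096
var numPackets: UInt32 = 1024
var buffer = [UInt8](repeating: 0, count: Int(numBytes))

// Read raw packets starting at packet 0; unlike ExtAudioFileRead,
// no implicit micro-fade is applied to the returned samples.
AudioFileReadPacketData(fileID!, false, &numBytes, nil, 0, &numPackets, &buffer)
AudioFileClose(fileID!)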

The Amazing Audio Engine 2 - Crossfade looping

I am using The Amazing Audio Engine 2 library for my sequencer app and I want to implement crossfade loop audio.
Here is the explanation:
When the user presses any key in the sequencer piano, it will play some audio file, and that audio file will continue to play in a loop until the user releases the key. But that loop should crossfade into itself.
I am using AEAudioFilePlayerModule for looping but not sure how to crossfade audio file with this class.
Explanation of the crossfade:
Start/End: This setting lets me choose where in the audio file the app should constantly loop, so that if the user taps and holds a note for a long time, the audio sounds continuously until the user releases their finger.
XFade: This setting (crossfade) lets me choose how to fade between the end and start of the audio loop, so that the sound loops smoothly. Here, 9999 is set. So at about 5k samples before the 200k end point, the audio for this note begins to fade away, and at the same time the audio loop starting at 50k samples fades in for a duration of about 5k samples (half the XFade amount).
Please help.
Thank you.
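For what it's worth, the gain math being described might look like the sketch below (generic, not tied to AEAudioFilePlayerModule, whose API may differ); an equal-power curve keeps the loudness steady through the overlap:

import Foundation

// Returns gains for the outgoing (approaching loop end) and incoming
// (starting from loop start) signals at a given sample position.
func crossfadeGains(position: Int, loopEnd: Int, xfade: Int) -> (outgoing: Float, incoming: Float) {
    let fadeLength = xfade / 2                 // e.g. ~5k samples with XFade = 9999
    let fadeStart = loopEnd - fadeLength       // e.g. 200k - 5k
    guard position >= fadeStart else { return (1, 0) }
    let t = Float(position - fadeStart) / Float(fadeLength)  // 0...1 across the fade
    return (cos(t * .pi / 2), sin(t * .pi / 2))              // equal-power pair
}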

Detecting when an AVAudioRecorder stops receiving audio while recording

There is a well-known talking cat app for iOS in which you speak and it repeats what you said. Analyzing this app, you'll see that it stops listening when you stop talking; that is, it stops capturing audio when it no longer receives a voice.
I looked through the methods of the AVAudioRecorder class and found no method for detecting when the user stops talking or when the recorder stops receiving external audio.
How can I detect when the audio recorder stops receiving audio?
Process the audio stream as it comes through. You can look at the frequency and volume of the stream, and from there determine whether the user has stopped talking.
I suggest both frequency and volume because the recorder still picks up background audio. If the volume drops dramatically, then the sounds the recorder is picking up must be further from the device than before. The frequency can also help you:
A.) Filter the background audio out of the recording used for replay (with a pitch change or any other effects).
B.) I do not know the frequency limits of the average human voice, but this covers the case where the user has stopped talking but has moved the device in such a way that the recorder still picks up loud shuffling from fingers near the mic.
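A minimal sketch of the volume half of that, using AVAudioRecorder's built-in metering (the thresholds are illustrative guesses, and recorder is assumed to be a configured, recording AVAudioRecorder):

import AVFoundation

recorder.isMeteringEnabled = true
var silentTicks = 0

Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { timer in
    recorder.updateMeters()
    let power = recorder.averagePower(forChannel: 0) // dBFS; 0 is full scale
    if power < -45 {                // illustrative "background noise" threshold
        silentTicks += 1
        if silentTicks >= 10 {      // ~1 second of continuous quiet
            recorder.stop()         // treat it as "the user stopped talking"
            timer.invalidate()
        }
    } else {
        silentTicks = 0             // reset whenever voice-level audio returns
    }
}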
