iOS: RTSP stream in background

I'm developing an iOS app to play an RTSP stream (with two tracks, one for audio and one for video), and I'm using libVLC to do it.
Playing the video, or audio only (by adding the option "--no-video"), works perfectly. If I start the player with audio only and then enter the background, the player keeps playing the stream.
The problem I'm having is that when I enter the background while video is playing, I want to stop the video and start a new libVLC player with audio only. In that scenario I get this error message:
ERROR: [0x48e2000] >aurioc> 783: failed: '!int' (enable 2, outf< 2 ch, 48000 Hz, Float32, inter> inf< 2 ch, 0 Hz, Float32, non-inter>)
[1973b5e4] audiounit_ios audio output error: failed to init AudioUnit (560557684)
[1973b5e4] audiounit_ios audio output error: opening AudioUnit output failed
[1973b5e4] core audio output error: module not functional
[17a27e74] core decoder error: failed to create audio output
The code in my appDelegate:
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    NSLog(@"applicationDidEnterBackground");
    if (_playerController != nil) {
        [_playerController performSelector:@selector(goToBackground) withObject:nil afterDelay:1];
        _playerController = nil;
    }
}
And in my UIViewController:
- (void)close:(BOOL)enterBackground
{
    [_mediaPlayer stop];
    NSArray *options = @[@"--no-video"];
    _mediaPlayer = [[VLCMediaPlayer alloc] initWithOptions:options];
    _mediaPlayer.delegate = self;
    _mediaPlayer.drawable = _videoView;
    _mediaPlayer.media = [VLCMedia mediaWithURL:[NSURL URLWithString:url]];
    [_mediaPlayer play];
}
Am I doing anything wrong?

Yes: don't stop the video and start a new player. Just disable video decoding on the existing player and re-enable it once your app is in the foreground again. This is more efficient, more elegant, and faster. Additionally, you won't run into this audio-session locking issue.
How does VLC for iOS do this? When the app is on its way to the background, we store the current video track's ID, which can be "1" but also something else entirely depending on the played stream, in a variable next to the media player object. Then we switch the media player's video track to "-1", which is the value for "off" in every case, and video decoding stops. Once the app moves to the foreground again, the media player's video track is set back to the cached track ID and video decoding starts again.
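Applied to the code in the question, the pattern looks roughly like this (a minimal sketch, assuming VLCKit's currentVideoTrackIndex property; _cachedVideoTrackIndex is a hypothetical ivar added for illustration):

```objectivec
// Sketch: toggle video decoding instead of tearing down the player.
// -1 is the "video off" track ID, as described above.

- (void)goToBackground
{
    _cachedVideoTrackIndex = _mediaPlayer.currentVideoTrackIndex; // remember the active track
    _mediaPlayer.currentVideoTrackIndex = -1;                     // disable video decoding
}

- (void)comeToForeground
{
    _mediaPlayer.currentVideoTrackIndex = _cachedVideoTrackIndex; // resume video decoding
}
```

The audio session is never torn down this way, so the AudioUnit initialization error from the question never has a chance to occur.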

Related

AVAudioPlayerNode stops when the app goes to the background

I've implemented an audio EQ via AVAudioEngine and AVAudioPlayerNode, and it is working fine (I tried both scheduling a buffer and a file). However, once the app goes to the background, the sound just fades away. Background mode is correctly set, as is the audio session, and I've verified this by playing music with AVPlayer and then going to the background. No audio engine notifications are received.
Here's the code for initializing the engine:
let x = CrbnPlayerEQ()
let eq = AVAudioUnitEQ(numberOfBands: 8)
x.audioUnitEQ = eq
x.audioUnitEQ?.globalGain = 0
x.audioEngine.attach(eq)
x.audioEngine.attach(CrbnPlayer.shared.player.nodePlayer)
let mixer = x.audioEngine.mainMixerNode
x.audioEngine.connect(CrbnPlayer.shared.player.nodePlayer, to: eq, format: mixer.outputFormat(forBus: 0))
x.audioEngine.connect(eq, to: mixer, format: mixer.outputFormat(forBus: 0))
try? x.audioEngine.start()
And here's the play part for the AVAudioPlayerNode:
CrbnPlayerEQ.shared.audioEngine.prepare()
try? CrbnPlayerEQ.shared.audioEngine.start()
self.nodePlayer.stop()
self.nodePlayer.scheduleFile(audioFile, at: nil) {
}
The result remains the same when I use scheduleBuffer instead of scheduleFile. I've tried changing playback modes and audio session options, but none of that helped. I've also tried stopping and starting the audio session when the app goes to the background.
One solution would be to switch to the AVPlayer once the app goes to background but then I'd lose the EQ.
Does anyone know how to ensure the buffer keeps playing even after the app goes to background?

Disable input/output AGC from RemoteIO and VPIO on iOS

CoreAudio is always a mystery due to the lack of documentation. Recently I hit another wall:
In my program, I invoke RemoteIO and VoiceProcessingIO (VPIO) back and forth, and also change the AVAudioSession in between. I tried to turn off AGC on VPIO with the following code:
if (ASBD.componentSubType == kAudioUnitSubType_VoiceProcessingIO) {
    UInt32 turnOff = 0;
    status = AudioUnitSetProperty(_myAudioUnit,
                                  kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                  kAudioUnitScope_Global,
                                  0,
                                  &turnOff,
                                  sizeof(turnOff));
    NSAssert1(status == noErr, @"Error setting AGC status: %d", (int)status);
}
Well, I'm still not sure whether this code disables AGC on the microphone side or the speaker side of VPIO, but anyway, let's continue. Here's the sequence to reproduce the problem:
1. Create a RemoteIO output audio unit with the PlayAndRecord audio session category, work with it, and destroy the unit;
2. Switch the audio session to the Playback-only category;
3. Switch the audio session to PlayAndRecord again, create another VPIO, work with it, and destroy it;
4. Switch the audio session to Playback and then back to PlayAndRecord.
After these steps, whatever RemoteIO/VPIO unit is created later bears this amplified microphone signal (as if a huge AGC is always applied), and there's no way to go back short of manually killing the app and starting over.
Maybe it's my particular sequence that triggered this; I wonder if anyone has seen this before and knows a correct workaround?
Try setting the mode AVAudioSessionModeMeasurement (AVAudioSession.Mode.measurement in Swift) when configuring your app's audio session.
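In Objective-C, that configuration looks roughly like this (a minimal sketch; error handling is omitted, and pairing measurement mode with the PlayAndRecord category is an assumption based on the question's setup):

```objectivec
#import <AVFoundation/AVFoundation.h>

AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *error = nil;

// Measurement mode asks the system to minimize its own signal processing,
// which is what keeps system AGC from being reapplied.
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setMode:AVAudioSessionModeMeasurement error:&error];
[session setActive:YES error:&error];
```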

AVFoundation no audio tracks for long videos

While recording a video using AVFoundation's - (void)startRecordingToOutputFileURL:(NSURL*)outputFileURL recordingDelegate:(id<AVCaptureFileOutputRecordingDelegate>)delegate; method, if the video duration is more than 12 seconds, there is no audio track in the output file. Everything works fine if the video duration is less than 12 seconds...
The delegate method in which the output file URL is received is:
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error
{
    NSLog(@"AUDIO %@", [[[AVAsset assetWithURL:outputFileURL] tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]); // App crashes here...
    NSLog(@"VIDEO %@", [[AVAsset assetWithURL:outputFileURL] tracksWithMediaType:AVMediaTypeVideo]);
}
My app crashes for a video that is longer than 12 seconds with this error: *** -[__NSArrayM objectAtIndex:]: index 0 beyond bounds for empty array'
My guess is that AVCaptureMovieFileOutput has better support for QuickTime containers (.qt, .mov) than for .mp4, even though the latter is the industry standard. For instance, when writing a movie file in fragments to an .mp4, something probably goes wrong with the fragment table (sample table).
So you could either change the file format to .mov or turn off writing the file in fragments. See this question:
ios-8-ipad-avcapturemoviefileoutput-drops-loses-never-gets-audio-track-after
I spent almost a day solving this, and this is the solution that worked...
I got a lot of help from iOS 8 iPad AVCaptureMovieFileOutput drops / loses / never gets audio track after 13 - 14 seconds of recording ...
Just add this line:
avCaptureMovieFileOutput.movieFragmentInterval = kCMTimeInvalid
I just changed the extension of the path to which the video is being recorded from .mp4 to .mov, and it worked...
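Putting both suggestions together (a sketch; the avCaptureMovieFileOutput variable and output path are illustrative assumptions drawn from the answers above):

```objectivec
// Disable fragmented movie writing so the sample tables are finalized normally...
avCaptureMovieFileOutput.movieFragmentInterval = kCMTimeInvalid;

// ...and/or record into a QuickTime container instead of .mp4.
NSURL *outputURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"recording.mov"]];
[avCaptureMovieFileOutput startRecordingToOutputFileURL:outputURL
                                      recordingDelegate:self];
```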

Removing silence from Audio Queue session recorded audio in iOS

I'm using Audio Queue to record audio from the iPhone's mic and stop recording when silence is detected (no audio input for 10 seconds), but I want to discard the silence from the audio file.
In my AudioInputCallback function I am using the following code to detect silence:
AudioQueueLevelMeterState meters[1];
UInt32 dlen = sizeof(meters);
OSStatus status = AudioQueueGetProperty(inAQ, kAudioQueueProperty_CurrentLevelMeterDB, meters, &dlen);
if (meters[0].mPeakPower < _threshold) {
    // NSLog(@"Silence detected");
}
But how do I remove these packets? Or is there a better option?
Instead of removing the packets from the AudioQueue, you can delay the write by writing to a buffer first. The buffer can easily be carried in your inUserData struct.
When you finish recording, if the last 10 seconds are not silent, write them back to whatever file you are producing. Otherwise, just free the buffer.
Alternatively, after the file is recorded and closed, simply open it and truncate the sample data you are not interested in (note: you can use the AudioFile/ExtAudioFile APIs to properly update any dependent chunk/header sizes).

Does receiving moviePreloadDidFinish imply a successful load of content

I am using the MPMoviePlayerController to play an audio stream. To verify that there isn't a problem with playback, I set a movie playback error timer and I implement moviePreloadDidFinish. When moviePreloadDidFinish is called, I check the loadState for MPMovieLoadStatePlaythroughOK. If it is not called and my timer expires, I assume the download has failed.
- (void)moviePreloadDidFinish:(NSNotification *)notification
{
    if (self.moviePlayer.loadState & MPMovieLoadStatePlaythroughOK) {
        NSLog(@"The movie or mp3 finished loading and will now start playing");
        // Cancel the movie playback error timer.
    }
}
Occasionally, I do not receive this notification, yet the audio keeps playing until my movie playback error timer expires (30 seconds). Does the absence of moviePreloadDidFinish imply that the download of the audio stream is going to fail soon? If not, is there a better way to programmatically determine that there is a playback problem?
