While recording a video using AVFoundation's - (void)startRecordingToOutputFileURL:(NSURL*)outputFileURL recordingDelegate:(id<AVCaptureFileOutputRecordingDelegate>)delegate; method, if the video duration is more than 12 seconds there is no audio track in the output file. Everything works fine if the video duration is less than 12 seconds...
The delegate method in which the output file URL is received is:
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error
{
    NSLog(@"AUDIO %@", [[[AVAsset assetWithURL:outputFileURL] tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]); // App crashes here...
    NSLog(@"VIDEO %@", [[AVAsset assetWithURL:outputFileURL] tracksWithMediaType:AVMediaTypeVideo]);
}
My app crashes for a video that is longer than 12 seconds with this error: '*** -[__NSArrayM objectAtIndex:]: index 0 beyond bounds for empty array'
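(A guard like the following, shown only as a minimal sketch, avoids the exception itself, but of course it doesn't bring the missing audio track back:)
NSArray *audioTracks = [[AVAsset assetWithURL:outputFileURL] tracksWithMediaType:AVMediaTypeAudio];
if (audioTracks.count > 0) {
    NSLog(@"AUDIO %@", [audioTracks objectAtIndex:0]);
} else {
    NSLog(@"No audio track in %@", outputFileURL);
}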
My guess is that AVCaptureMovieFileOutput has better support for QuickTime containers (.qt, .mov) than for MP4, even though MP4 is the industry standard. For instance, when writing a movie file in fragments to an .mp4, something probably happens to the fragment table (sample table).
So you could either change the file format to .mov or turn off writing the file in fragments. See this question:
ios-8-ipad-avcapturemoviefileoutput-drops-loses-never-gets-audio-track-after
I spent almost a day solving this, and this is the perfect solution for it...
After a lot of searching I got help from iOS 8 iPad AVCaptureMovieFileOutput drops / loses / never gets audio track after 13 - 14 seconds of recording ...
Just add this line:
avCaptureMovieFileOutput.movieFragmentInterval = kCMTimeInvalid
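In context, this is roughly where the line goes; the session and output names here are assumptions, and the fragment interval needs to be set before recording starts:
AVCaptureMovieFileOutput *movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
// Disable fragmented movie writing so the audio track survives long recordings.
movieFileOutput.movieFragmentInterval = kCMTimeInvalid;

if ([captureSession canAddOutput:movieFileOutput]) {
    [captureSession addOutput:movieFileOutput];
}
[movieFileOutput startRecordingToOutputFileURL:outputFileURL recordingDelegate:self];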
I just changed the extension of the path to which the video is being recorded from .mp4 to .mov and it worked...
Related
I am using AVAssetWriter to create an MPEG4 file.
I start a video session with:
[assetWriter startSessionAtSourceTime:kCMTimeZero];
Now the video file is written fine if I finish the session with this:
[assetWriter finishWritingWithCompletionHandler:^{
}];
But if I call [assetWriter endSessionAtSourceTime:endTime]; before [assetWriter finishWritingWithCompletionHandler then it doesn't write the file.
This is how I call endSessionAtSourceTime:
endTime = CMTimeMakeWithSeconds(secondsRecorded, 30);
[assetWriter endSessionAtSourceTime:endTime];
Any ideas what I am doing wrong?
I think the issue is that endSessionAtSourceTime: doesn't do what you're expecting.
endSessionAtSourceTime: is almost the same as calling finishWriting, in that it stops the recording when called. The difference is that, after recording, endSessionAtSourceTime: will edit out (remove) any frames received after the specified source time.
Instead, if your intended result is to record a 30-second clip, you need to set up an NSTimer or something similar and then call endSessionAtSourceTime: or finishWriting once the 30 seconds have elapsed, as in the sketch below.
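A minimal sketch of that approach, assuming an assetWriter and a single assetWriterInput (both names are assumptions): finish the writer 30 seconds after the session started rather than ending it up front.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(30.0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    [assetWriterInput markAsFinished];                               // stop accepting new samples
    [assetWriter endSessionAtSourceTime:CMTimeMakeWithSeconds(30, 30)];
    [assetWriter finishWritingWithCompletionHandler:^{
        NSLog(@"Writer finished with status %ld", (long)assetWriter.status);
    }];
});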
I'm developing an iOS app to play an RTSP stream (with two tracks, one audio and one video), and I'm using libVLC to do it.
Playing the video, or only the audio (adding the option "--no-video"), works perfectly. If I start the player with audio only and then enter the background, the player keeps playing the stream.
The problem I'm having is that if I enter the background while video is playing, I want to stop the video and start a new libVLC player with audio only. In that scenario I get this error message:
ERROR: [0x48e2000] >aurioc> 783: failed: '!int' (enable 2, outf< 2 ch, 48000 Hz, Float32, inter> inf< 2 ch, 0 Hz, Float32, non-inter>)
[1973b5e4] audiounit_ios audio output error: failed to init AudioUnit (560557684)
[1973b5e4] audiounit_ios audio output error: opening AudioUnit output failed
[1973b5e4] core audio output error: module not functional
[17a27e74] core decoder error: failed to create audio output
The code in my appDelegate:
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    NSLog(@"applicationDidEnterBackground");
    if (_playerController != nil) {
        [_playerController performSelector:@selector(goToBackground) withObject:nil afterDelay:1];
        _playerController = nil;
    }
}
And in my UIViewController:
- (void)close:(BOOL)enterBackground
{
    [_mediaPlayer stop];
    NSArray *options = @[@"--no-video"];
    _mediaPlayer = [[VLCMediaPlayer alloc] initWithOptions:options];
    _mediaPlayer.delegate = self;
    _mediaPlayer.drawable = _videoView;
    _mediaPlayer.media = [VLCMedia mediaWithURL:[NSURL URLWithString:url]];
    [_mediaPlayer play];
}
Am I doing anything wrong?
Yes: don't stop the video and start a new player. Just disable video decoding on the existing player and re-enable it once your app is in the foreground again. This is far more efficient, more elegant and faster, and you won't run into this audio session locking issue.
How does VLC for iOS do this? When the app is on its way to the background, we store the current video track's ID, which can be "1" but also something entirely different depending on the played stream, in a variable next to the media player object. Then we switch the media player's video track to "-1", which is the value for "off" in any case, and video decoding stops. Once the app moves to the foreground again, the media player's video track is set back to the cached track ID and video decoding starts again.
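A minimal sketch of that approach, reusing the asker's goToBackground hook; it assumes VLCKit's currentVideoTrackIndex property (the exact name may vary across VLCKit versions) and an added _savedVideoTrackIndex ivar:
// In the view controller that owns _mediaPlayer; _savedVideoTrackIndex is a hypothetical int ivar.
- (void)goToBackground
{
    _savedVideoTrackIndex = _mediaPlayer.currentVideoTrackIndex; // remember the active track ID
    _mediaPlayer.currentVideoTrackIndex = -1;                    // -1 means "off": video decoding stops
}

- (void)goToForeground
{
    _mediaPlayer.currentVideoTrackIndex = _savedVideoTrackIndex; // resume video decoding
}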
I'm using AVQueuePlayer to do playback of a list of videos in our app. I'm trying to cache the videos that are played by AVQueuePlayer so that the video doesn't have to be downloaded each time.
So, after the video finishes playing, I attempt to save the AVPlayerItem to disk so that next time it's initialized with a local URL.
I've tried to achieve this through two approaches:
Use AVAssetExportSession
Use AVAssetReader and AVAssetWriter.
1) AVAssetExportSession approach
This approach works like this:
Observe when an AVPlayerItem finishes playing using AVPlayerItemDidPlayToEndTimeNotification.
When the above notification is received (a video finished playing, so it's fully downloaded), use AVAssetExportSession to export that video to a file on disk.
The code:
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(videoDidFinishPlaying:) name:AVPlayerItemDidPlayToEndTimeNotification object:nil];
then
- (void)videoDidFinishPlaying:(NSNotification *)note
{
    AVPlayerItem *itemToSave = [note object];
    AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:itemToSave.asset presetName:AVAssetExportPresetHighestQuality];
    exportSession.outputFileType = AVFileTypeMPEG4;
    exportSession.outputURL = [NSURL fileURLWithPath:@"/path/to/Documents/video.mp4"];
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        switch (exportSession.status) {
            case AVAssetExportSessionStatusExporting:
                NSLog(@"Exporting...");
                break;
            case AVAssetExportSessionStatusCompleted:
                NSLog(@"Export completed, wohooo!!");
                break;
            case AVAssetExportSessionStatusWaiting:
                NSLog(@"Waiting...");
                break;
            case AVAssetExportSessionStatusFailed:
                NSLog(@"Failed with error: %@", exportSession.error);
                break;
        }
    }];
}
The result in console of that code is:
Failed with error: Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo=0x98916a0 {NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x99ddd10 "The operation couldn’t be completed. (OSStatus error -12109.)", NSLocalizedFailureReason=An unknown error occurred (-12109)}
2) AVAssetReader, AVAssetWriter approach
The code:
- (void)savePlayerItem:(AVPlayerItem *)item
{
    NSError *assetReaderError = nil;
    AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:item.asset error:&assetReaderError];
    //(algorithm continues)
}
That code throws an exception when trying to alloc/init the AVAssetReader with the following information:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVAssetReader initWithAsset:error:] Cannot initialize an instance of AVAssetReader with an asset at non-local URL 'https://someserver.com/video1.mp4''
Any help is much appreciated.
How I ended up managing to do this is to create the usual chain of AVPlayer > AVPlayerItem > AVURLAsset, but also, at the same time, create an AVAssetExportSession right away, before the AVPlayer has completely loaded the video from the remote URL.
The AVAssetExportSession will actually export slowly as the AVPlayer pre-buffers the URL. This happens regardless of whether the user starts playing the AVPlayer or not. However, the output file will not actually be completed until the entire URL is pre-buffered; at that point the AVAssetExportSession calls back with the completion. This is also completely independent of whether the user is playing, or has finished playing, the video.
So by creating the AVPlayer/AVAssetExportSession combo early, it became my pre-buffer mechanism, as in the sketch below.
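For reference, a minimal sketch of that combo, where remoteURL and localCacheURL are assumed placeholders:
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:remoteURL options:nil];
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];

// Start exporting the same asset right away; it completes once the asset is fully buffered.
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
exportSession.outputFileType = AVFileTypeMPEG4;
exportSession.outputURL = localCacheURL; // e.g. a file URL inside Caches/
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusCompleted) {
        // Next time, build the AVURLAsset from localCacheURL instead of remoteURL.
    }
}];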
-- Update 2018-01-10 --
I want to add that, after having deployed the above mechanism, we encountered one big caveat that I think deserves a mention.
You cannot have too many video pipelines in memory (an AVPlayer with an AVPlayerItem/AVAsset hooked up); AVFoundation will refuse to play. So do use the above mechanism to pre-buffer, but once the video is downloaded to a file, if the user is not viewing the video, deallocate the AVPlayer/AVAsset. Reallocate an AVPlayer when the user decides to play the video again, this time with the AVURLAsset pointed at your local buffered copy of the video.
I'm working on a plugin for Apache Cordova that will allow audio streaming from a remote URL through the Media API.
The issue that I'm experiencing is that I get an EXC_BAD_ACCESS signal whenever I try to access certain properties on the AVPlayer instance. currentTime and isPlaying are the worst offenders. The player will be playing sound through the speakers, but as soon as my code reaches a player.currentTime or [player currentTime] it throws the bad access signal.
[player play];
double position = round([player duration] * 1000) / 1000;
[player currentTime]; //This will cause the signal
I am using ARC, so I'm not releasing anything that shouldn't be.
Edit:
Everything that I've done has been hacking around on the Cordova 3 CDVSound class as a proof of concept for actual streaming on iOS.
The original code can be found here: https://github.com/apache/cordova-plugin-media/tree/master/src/ios
My code can be found here:
CDVSound.h
CDVSound.m
The method startPlayingAudio will trip an EXC_BAD_ACCESS at line 346. Removing line 346 lets the audio play, but a bad access is tripped later down the road when getCurrentPositionAudio is called and line 532 executes.
Edit / Solution
So it turns out that the best way to handle this is to use an AVPlayerItem and then access the time with player.currentItem.currentTime, as sketched below. The real question then becomes: why isn't this behavior documented for AVPlayer, and why does it behave like this?
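For reference, a minimal sketch of what that looks like (resourceURL is a placeholder):
AVPlayerItem *item = [AVPlayerItem playerItemWithURL:[NSURL URLWithString:resourceURL]];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
[player play];

// Read the time from the current item rather than from the player directly.
CMTime time = player.currentItem.currentTime;
double position = CMTIME_IS_VALID(time) ? CMTimeGetSeconds(time) : 0.0;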
I'm working on a video editing application for iPhone/iPod touch. My app simply asks the user to choose one of the already existing videos in the Camera Roll, then changes pixel values frame by frame and saves the result under a different name in the Camera Roll. Because video processing might take quite a long time, I really need to implement some way to resume a previously started session (i.e., when video processing reaches 30% of the total video length and the user presses the home button, or there is a phone call, then when my application is brought back to the foreground, video processing should resume from 30%, not from the beginning).
The most important parts of my code (simplified a bit to be more readable):
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:sourceAsset error:&error];
NSArray *videoTracks = [sourceAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];
AVAssetReaderTrackOutput *assetReaderOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:nil]; // output settings omitted in the original snippet
[assetReader addOutput:assetReaderOutput];

[mVideoWriter startWriting];
[mAssetReader startReading];
[mVideoWriter startSessionAtSourceTime:kCMTimeZero];

mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);

[writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
    while (mWriterInput.readyForMoreMediaData) {
        CMSampleBufferRef nextBuffer = [mAssetReaderOutput copyNextSampleBuffer];
        if (nextBuffer) {
            // frame processing here
        } // if
    } // while
}]; // block
// when done saving my video to camera roll directory
I'm listening for my app delegate's callback methods such as applicationWillResignActive and applicationWillEnterForeground, but I'm not sure how to handle them in the proper way. What I've tried already:
1) [AVAssetWriter startSessionAtSourceTime:/endSessionAtSourceTime:]; unfortunately, stopping and restarting at the last frame's presentation time did not work for me.
2) Saving "by hand" every part of the movie that was processed and, when processing reaches 100%, merging all of them using AVMutableComposition; this solution, however, sometimes crashes in the dispatch queue.
It would be really great if someone could give me some hints on how this should be done correctly...
I'm pretty sure AVAssetWriter can't append, so something along the lines of 2), saving the pieces and then merging them, is probably the best solution if you must make your export restartable.
But first you have to resolve that crash.
Also, before you start creating hundreds of movie pieces, you should have a look at AVAssetWriter.movieFragmentInterval, as with careful management of presentation time stamps/durations you may be able to use it to minimise the number of pieces you have to merge. A sketch of the merge step follows below.
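To illustrate the merge, here is a minimal sketch assuming the processed pieces are stored as file URLs in pieceURLs (a hypothetical array):
AVMutableComposition *composition = [AVMutableComposition composition];
CMTime cursor = kCMTimeZero;
NSError *mergeError = nil;
for (NSURL *pieceURL in pieceURLs) {
    AVURLAsset *piece = [AVURLAsset URLAssetWithURL:pieceURL options:nil];
    // Append the whole piece at the current end of the composition.
    [composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, piece.duration) ofAsset:piece atTime:cursor error:&mergeError];
    cursor = CMTimeAdd(cursor, piece.duration);
}
// Export the composition with AVAssetExportSession as usual.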
Have you tried -[UIApplication beginBackgroundTaskWithExpirationHandler:]? This seems like a good place to request extra operating time in the background to finish the heavy lifting.
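As a rough sketch of that suggestion (the task identifier name is an assumption), wrap the processing in a background task:
__block UIBackgroundTaskIdentifier taskID = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
    // Time is up: pause the reader/writer cleanly and persist progress here.
    [[UIApplication sharedApplication] endBackgroundTask:taskID];
    taskID = UIBackgroundTaskInvalid;
}];

// ... run the AVAssetReader/AVAssetWriter loop ...

// When processing finishes or is safely paused:
[[UIApplication sharedApplication] endBackgroundTask:taskID];
taskID = UIBackgroundTaskInvalid;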