AVAsset duration is not correct - ios

I have a video that the Mac player shows as 31 seconds long. When I load that file in my app, the duration of the AVAsset is 28.03:
AVAsset *videoAsset = [AVAsset assetWithURL:videoUrl];
Float64 time = CMTimeGetSeconds(videoAsset.duration);

For some types of assets the duration is an approximation. If you need the exact duration (this should be a rare requirement), use:
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey: @YES};
AVURLAsset *videoAsset = [AVURLAsset URLAssetWithURL:videoUrl options:options];
You can find more information in the documentation. Calculating the precise duration may take some time, so remember to load it asynchronously:
[videoAsset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
    switch ([videoAsset statusOfValueForKey:@"duration" error:nil]) {
        case AVKeyValueStatusLoaded: {
            Float64 time = CMTimeGetSeconds(videoAsset.duration);
            // ...
            break;
        }
        default:
            // other cases, such as failure or cancellation
            break;
    }
}];
You can find more tips on using the AVFoundation API in the video Discovering AV Foundation (WWDC 2010, Session 405).

Related

How to manually change the streaming video quality in AV player in ios?

I am building an application in which online streaming is handled by AVPlayer (the default iOS player).
I want to add a button for HD streaming; how do I achieve that?
The solution I found was to ensure that the underlying AVAsset is ready to return basic info, such as its duration, before feeding it to the AVPlayer. AVAsset has a method loadValuesAsynchronouslyForKeys: which is handy for this:
AVAsset *asset = [AVAsset assetWithURL:self.mediaURL];
[asset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
    AVPlayerItem *newItem = [[AVPlayerItem alloc] initWithAsset:asset];
    [self.avPlayer replaceCurrentItemWithPlayerItem:newItem];
}];
In my case the URL is a network resource, and replaceCurrentItemWithPlayerItem: will otherwise block for several seconds waiting for this information to download.
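As a hedged variant (not part of the original answer): the completion handler isn't guaranteed to run on the main queue, and the load can fail, so you may want to check the key's status and hop to the main queue before swapping the item:
AVAsset *asset = [AVAsset assetWithURL:self.mediaURL];
[asset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus status = [asset statusOfValueForKey:@"duration" error:&error];
    dispatch_async(dispatch_get_main_queue(), ^{
        if (status == AVKeyValueStatusLoaded) {
            AVPlayerItem *newItem = [[AVPlayerItem alloc] initWithAsset:asset];
            [self.avPlayer replaceCurrentItemWithPlayerItem:newItem];
        } else {
            NSLog(@"Failed to load duration: %@", error); // handle failure/cancellation
        }
    });
}];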

AVURLAsset tracksWithMediaType returns no audioTracks for AVMediaTypeAudio in Simulator

I am doing the following
AVURLAsset *audioAsset = [[AVURLAsset alloc] initWithURL:audioUrl options:nil];
NSArray<AVAssetTrack *> *audioTracks = [audioAsset tracksWithMediaType:AVMediaTypeAudio];
which works fine on a real device.
The problem only happens in the simulators. I have an mp3 statically added to the bundle, so the audioAsset is properly initialized. But the audioTracks array is empty on the simulator (even though the path in audioUrl is correct and the audioAsset exists).
Any suggestions?
I've faced the same issue on a real device as well. It is caused by the fact that the asset isn't ready yet right after it is initialized.
Please have a look at the documentation:
You can initialize a player item with an existing asset, or you can initialize a player item directly from a URL so that you can play a resource at a particular location (AVPlayerItem will then create and configure an asset for the resource). As with AVAsset, though, simply initializing a player item doesn’t necessarily mean it’s ready for immediate playback. You can observe (using key-value observing) an item’s status property to determine if and when it’s ready to play.
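For illustration, a minimal sketch of the key-value observing pattern the documentation describes (the method and context names here are assumptions):
static void *PlayerItemStatusContext = &PlayerItemStatusContext;

- (void)preparePlayerItem:(AVPlayerItem *)item {
    // Observe the item's status before handing it to the player.
    [item addObserver:self
           forKeyPath:@"status"
              options:NSKeyValueObservingOptionNew
              context:PlayerItemStatusContext];
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    if (context != PlayerItemStatusContext) {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
        return;
    }
    AVPlayerItem *item = (AVPlayerItem *)object;
    if (item.status == AVPlayerItemStatusReadyToPlay) {
        // The item (and its asset's basic properties) are now safe to query.
    }
    // Remember to call removeObserver:forKeyPath: when done.
}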
To fix the issue you can try to do the following:
AVURLAsset *audioAsset = [[AVURLAsset alloc] initWithURL:audioUrl options:nil];
NSString *tracksKey = @"tracks";
[audioAsset loadValuesAsynchronouslyForKeys:@[tracksKey] completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus status = [audioAsset statusOfValueForKey:tracksKey error:&error];
    if (status == AVKeyValueStatusLoaded) {
        // The asset is ready at this point
        NSArray<AVAssetTrack *> *audioTracks = [audioAsset tracksWithMediaType:AVMediaTypeAudio];
    }
}];
It's also worth noting that the status can be AVKeyValueStatusFailed if the audioAsset is not playable, which you can check with:
BOOL result = [audioAsset isPlayable];

AVURLAsset load webvtt file

I am trying to use AVURLAsset to load a webvtt file.
Below is my code.
NSString *urlAddress = @"http://somewhere/some.vtt";
NSURL *urlStream = [[NSURL alloc] initWithString:urlAddress];
AVAsset *avAsset = [AVURLAsset URLAssetWithURL:urlStream options:nil];
NSArray *requestKeys = [NSArray arrayWithObjects:@"tracks", @"playable", nil];
[avAsset loadValuesAsynchronouslyForKeys:requestKeys completionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        // completion block here
        AVKeyValueStatus status = [avAsset statusOfValueForKey:@"tracks" error:nil];
        if (status == AVKeyValueStatusLoaded) {
            // loaded block!
            // Question 1
            CMTime assetTime = [avAsset duration];
            Float64 duration = CMTimeGetSeconds(assetTime);
            NSLog(@"%f", duration);
            // Question 2
            AVMediaSelectionGroup *subtitle = [avAsset mediaSelectionGroupForMediaCharacteristic:AVMediaCharacteristicLegible];
            NSLog(@"%@", subtitle);
        }
        else {
            // not-loaded block!
        }
    });
}];
Question 1: It always goes into the "loaded block", but I find the avAsset's duration is not complete; does that mean the data is not loaded? How should I modify it?
Question 2: I am trying to use it for my AVPlayer's subtitles, but the AVMediaSelectionGroup is always null. What should I do?
For question 1, add duration to your keys:
NSArray *requestKeys = @[@"tracks", @"playable", @"duration"];
I've posted a solution over here: https://stackoverflow.com/a/37945178/171933. Basically, you need to use an AVMutableComposition to join the video with the subtitles and then play back that composition; a sketch of this follows below.
About your second question: mediaSelectionGroupForMediaCharacteristic seems to only be supported when these "characteristics" are already baked into either the media file or your m3u8 stream, according to this statement by an Apple engineer (bottom of the page).
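For illustration, a rough sketch of that composition approach (the videoURL variable and the error handling are assumptions, and both assets' tracks and duration should be loaded asynchronously first, as shown above):
AVURLAsset *videoAsset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
AVURLAsset *subtitleAsset = [AVURLAsset URLAssetWithURL:urlStream options:nil];

AVMutableComposition *composition = [AVMutableComposition composition];
CMTimeRange fullRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration);

// Copy the video track into the composition (audio would be handled the same way).
AVAssetTrack *videoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];
AVMutableCompositionTrack *compVideoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];
[compVideoTrack insertTimeRange:fullRange ofTrack:videoTrack atTime:kCMTimeZero error:nil];

// Add the WebVTT file's contents as a text track.
AVAssetTrack *textTrack = [[subtitleAsset tracksWithMediaType:AVMediaTypeText] firstObject];
if (textTrack) {
    AVMutableCompositionTrack *compTextTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeText
                                 preferredTrackID:kCMPersistentTrackID_Invalid];
    [compTextTrack insertTimeRange:fullRange ofTrack:textTrack atTime:kCMTimeZero error:nil];
}

AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:composition];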

iOS: How to trim silence from start and end of .aif audio recording?

My app includes the ability for the user to record a brief message; I'd like to trim off any silence (or, to be more precise, any audio whose volume falls below a given threshold) from the beginning and end of the recording.
I'm recording the audio with an AVAudioRecorder, and saving it to an .aif file. I've seen some mention elsewhere of methods by which I could have it wait to start recording until the audio level reaches a threshold; that'd get me halfway there, but won't help with trimming silence off the end.
If there's a simple way to do this, I'll be eternally grateful!
Thanks.
This project takes audio from the microphone, triggers on loud noise and untriggers when quiet. It also trims and fades in/fades out around the ends.
https://github.com/fulldecent/FDSoundActivatedRecorder
Relevant code you are seeking:
- (NSString *)recordedFilePath
{
    // Prepare output
    NSString *trimmedAudioFileBaseName = [NSString stringWithFormat:@"recordingConverted%x.caf", arc4random()];
    NSString *trimmedAudioFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:trimmedAudioFileBaseName];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    if ([fileManager fileExistsAtPath:trimmedAudioFilePath]) {
        NSError *error;
        if ([fileManager removeItemAtPath:trimmedAudioFilePath error:&error] == NO) {
            NSLog(@"removeItemAtPath %@ error: %@", trimmedAudioFilePath, error);
        }
    }
    NSLog(@"Saving to %@", trimmedAudioFilePath);

    AVAsset *avAsset = [AVAsset assetWithURL:self.audioRecorder.url];
    NSArray *tracks = [avAsset tracksWithMediaType:AVMediaTypeAudio];
    AVAssetTrack *track = [tracks objectAtIndex:0];
    AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:avAsset
                                                                            presetName:AVAssetExportPresetAppleM4A];

    // create trim time range
    CMTime startTime = CMTimeMake(self.recordingBeginTime*SAVING_SAMPLES_PER_SECOND, SAVING_SAMPLES_PER_SECOND);
    CMTimeRange exportTimeRange = CMTimeRangeFromTimeToTime(startTime, kCMTimePositiveInfinity);

    // create fade in time range
    CMTime startFadeInTime = startTime;
    CMTime endFadeInTime = CMTimeMake(self.recordingBeginTime*SAVING_SAMPLES_PER_SECOND + RISE_TRIGGER_INTERVALS*INTERVAL_SECONDS*SAVING_SAMPLES_PER_SECOND, SAVING_SAMPLES_PER_SECOND);
    CMTimeRange fadeInTimeRange = CMTimeRangeFromTimeToTime(startFadeInTime, endFadeInTime);

    // setup audio mix
    AVMutableAudioMix *exportAudioMix = [AVMutableAudioMix audioMix];
    AVMutableAudioMixInputParameters *exportAudioMixInputParameters =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:track];
    [exportAudioMixInputParameters setVolumeRampFromStartVolume:0.0 toEndVolume:1.0
                                                      timeRange:fadeInTimeRange];
    exportAudioMix.inputParameters = [NSArray arrayWithObject:exportAudioMixInputParameters];

    // configure export session output with all our parameters
    exportSession.outputURL = [NSURL fileURLWithPath:trimmedAudioFilePath];
    exportSession.outputFileType = AVFileTypeAppleM4A;
    exportSession.timeRange = exportTimeRange;
    exportSession.audioMix = exportAudioMix;

    // MAKE THE EXPORT SYNCHRONOUS
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        dispatch_semaphore_signal(semaphore);
    }];
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

    if (AVAssetExportSessionStatusCompleted == exportSession.status) {
        NSLog(@"AVAssetExportSessionStatusCompleted");
        return trimmedAudioFilePath;
    } else if (AVAssetExportSessionStatusFailed == exportSession.status) {
        // a failure may happen because of an event out of your control
        // for example, an interruption like a phone call coming in
        // make sure to handle this case appropriately
        NSLog(@"AVAssetExportSessionStatusFailed: %@", exportSession.error.localizedDescription);
    } else {
        NSLog(@"Export Session Status: %ld", (long)exportSession.status);
    }
    return nil;
}
I'm recording the audio with an AVAudioRecorder, and saving it to an .aif file. I've seen some mention elsewhere of methods by which I could have it wait to start recording until the audio level reaches a threshold; that'd get me halfway there
Without adequate buffering, that would truncate the start.
I don't know of an easy way. You would have to write a new audio file after recording and analyzing it for the desired start and end points. Modifying the existing file would be straightforward if you knew the AIFF format well (not many people do) and had an easy way to read the file's sample data.
The analysis stage is pretty easy for a basic implementation: evaluate the average power of the sample data until your threshold is exceeded, then repeat in reverse for the end. A rough sketch follows below.
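For illustration, here is one way that analysis could look using AVAssetReader (the threshold value, the PCM output settings, and the FirstLoudTime helper name are all assumptions, not part of the original answer): it returns the presentation time of the first buffer whose RMS level exceeds the threshold.
#import <AVFoundation/AVFoundation.h>

// Hypothetical helper: returns the presentation time of the first audio buffer
// whose RMS level exceeds `threshold`, reading 16-bit interleaved PCM.
static CMTime FirstLoudTime(AVAsset *asset, double threshold)
{
    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
    NSDictionary *settings = @{AVFormatIDKey: @(kAudioFormatLinearPCM),
                               AVLinearPCMBitDepthKey: @16,
                               AVLinearPCMIsFloatKey: @NO,
                               AVLinearPCMIsBigEndianKey: @NO,
                               AVLinearPCMIsNonInterleaved: @NO};
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:nil];
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:settings];
    [reader addOutput:output];
    [reader startReading];

    CMSampleBufferRef sampleBuffer = NULL;
    while ((sampleBuffer = [output copyNextSampleBuffer]) != NULL) {
        CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        size_t length = CMBlockBufferGetDataLength(blockBuffer);
        int16_t *samples = malloc(length);
        CMBlockBufferCopyDataBytes(blockBuffer, 0, length, samples);

        // Average power of this buffer, normalized to [0, 1]
        size_t count = length / sizeof(int16_t);
        double sum = 0.0;
        for (size_t i = 0; i < count; i++) {
            double s = samples[i] / 32768.0;
            sum += s * s;
        }
        double rms = (count > 0) ? sqrt(sum / count) : 0.0;

        CMTime bufferTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        free(samples);
        CFRelease(sampleBuffer);

        if (rms > threshold) {
            [reader cancelReading];
            return bufferTime; // first "loud" buffer; repeat in reverse for the end point
        }
    }
    return kCMTimeInvalid; // the threshold was never exceeded
}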

tracks in AVComposition losing time when paused

I've created an AVMutableComposition that consists of a bunch of audio tracks that start at specific times. From there, following Apple's recommendations, I turned it into an AVComposition before playing it with AVPlayer.
It all works fine playing this AVPlayer item, but if I pause it and then continue, all the tracks in the composition appear to slip back about 0.2 seconds relative to each other (i.e., they bunch up). Hitting pause and continuing several times compounds the effect and the overlap is more significant (basically if I hit it enough, I will end up with all 8 tracks playing simultaneously).
if (self.player.rate > 0.0) {
    // if player is playing, pause
    [self.player pause];
} else {
    if (self.player) {
        [self.player play];
        return;
    }

    /* CODE CREATING COMPOSITION - missed out big chunk of code relating to
       finding the track and retrieving its position and scale */

    NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                        forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
    AVURLAsset *sourceAsset = [[AVURLAsset alloc] initWithURL:url options:options];

    // calculate times
    NSNumber *time = [soundArray1 objectAtIndex:1]; // this is the time scale - e.g. 96 or 120 etc.
    double timenow = [time doubleValue];
    double insertTime = (240 * y);

    AVMutableCompositionTrack *track =
        [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                 preferredTrackID:kCMPersistentTrackID_Invalid];

    // insert the audio track from the asset into the track added to the mutable composition
    AVAssetTrack *myTrack = [[sourceAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    CMTimeRange myTrackRange = myTrack.timeRange;
    NSError *error = nil;
    [track insertTimeRange:myTrackRange
                   ofTrack:myTrack
                    atTime:CMTimeMake(insertTime, timenow)
                     error:&error];
    [sourceAsset release];
    }
}
AVComposition *immutableSnapshotOfMyComposition = [composition copy];
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:immutableSnapshotOfMyComposition];
self.player = [[AVPlayer alloc] initWithPlayerItem:playerItem];
NSLog(@"here");
[self.player play];
Thanks
OK, this feels a little hacky, but it definitely works if anybody is stuck. If someone has a better answer, do let me know!
Basically, I just save the player.currentTime of the track when I hit pause and remake the track when I hit play, starting from the point at which I paused it. There is no discernible delay, but I'd still be happier without wasting the extra processing.
Make sure you properly release your player item after you hit pause, otherwise you'll end up with a giant stack of AVPlayers!
I have a solution that is a bit less hacky but still hacky.
The solution comes from the fact that I noticed that if you seeked on the player, the latency between audio and video introduced by pausing disappeared.
Hence: just save player.currentTime just before pausing, and call seekToTime with that value just before playing again (a minimal sketch follows below). It works pretty well on iOS 6; I haven't tested other versions yet.
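A minimal sketch of that workaround (the pausedTime property is an assumed name):
// Pausing: remember where we were (self.pausedTime is an assumed CMTime property).
self.pausedTime = self.player.currentTime;
[self.player pause];

// Resuming: seek back to the saved time with zero tolerance, then play.
[self.player seekToTime:self.pausedTime
        toleranceBefore:kCMTimeZero
         toleranceAfter:kCMTimeZero
      completionHandler:^(BOOL finished) {
    if (finished) {
        [self.player play];
    }
}];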