I want to implement audio trimming in iOS. I'd like something like a two-way slider that the user can drag over a particular period of the recorded audio, and then I will save that selection, like the image below.
AVAssetExportSession - Exporting a Trimmed Audio Asset
The code below will do the job for you; please refer to this link for further detail.
- (BOOL)exportAsset:(AVAsset *)avAsset toFilePath:(NSString *)filePath {
    // we need the audio asset to be at least 50 seconds long for this snippet
    CMTime assetTime = [avAsset duration];
    Float64 duration = CMTimeGetSeconds(assetTime);
    if (duration < 50.0) return NO;

    // get the first audio track
    NSArray *tracks = [avAsset tracksWithMediaType:AVMediaTypeAudio];
    if ([tracks count] == 0) return NO;
    AVAssetTrack *track = [tracks objectAtIndex:0];

    // create the export session
    // no need for a retain here, the session will be retained by the
    // completion handler since it is referenced there
    AVAssetExportSession *exportSession = [AVAssetExportSession
                                           exportSessionWithAsset:avAsset
                                           presetName:AVAssetExportPresetAppleM4A];
    if (nil == exportSession) return NO;

    // create trim time range - 20 seconds starting from 30 seconds into the asset
    CMTime startTime = CMTimeMake(30, 1);
    CMTime stopTime = CMTimeMake(50, 1);
    CMTimeRange exportTimeRange = CMTimeRangeFromTimeToTime(startTime, stopTime);

    // create fade-in time range - 10 seconds starting at the beginning of the trimmed asset
    CMTime startFadeInTime = startTime;
    CMTime endFadeInTime = CMTimeMake(40, 1);
    CMTimeRange fadeInTimeRange = CMTimeRangeFromTimeToTime(startFadeInTime, endFadeInTime);

    // set up the audio mix
    AVMutableAudioMix *exportAudioMix = [AVMutableAudioMix audioMix];
    AVMutableAudioMixInputParameters *exportAudioMixInputParameters =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:track];
    [exportAudioMixInputParameters setVolumeRampFromStartVolume:0.0 toEndVolume:1.0
                                                      timeRange:fadeInTimeRange];
    exportAudioMix.inputParameters = [NSArray arrayWithObject:exportAudioMixInputParameters];

    // configure export session output with all our parameters
    exportSession.outputURL = [NSURL fileURLWithPath:filePath]; // output path
    exportSession.outputFileType = AVFileTypeAppleM4A;          // output file type
    exportSession.timeRange = exportTimeRange;                  // trim time range
    exportSession.audioMix = exportAudioMix;                    // fade-in audio mix

    // perform the export
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (AVAssetExportSessionStatusCompleted == exportSession.status) {
            NSLog(@"AVAssetExportSessionStatusCompleted");
        } else if (AVAssetExportSessionStatusFailed == exportSession.status) {
            // a failure may happen because of an event out of your control
            // for example, an interruption like a phone call coming in
            // make sure to handle this case appropriately
            NSLog(@"AVAssetExportSessionStatusFailed");
        } else {
            NSLog(@"Export Session Status: %ld", (long)exportSession.status);
        }
    }];
    return YES;
}
Related
My app captures a video clip for 3 seconds, and programmatically I want to create a 15-second clip from the recorded 3-second clip by looping it 5 times. Finally, I have to save the 15-second clip to the Camera Roll.
I have got my 3-second video clip via AVCaptureMovieFileOutput, and I have its NSURL from the delegate; the file is currently in NSTemporaryDirectory().
I am using AVAssetWriterInput for looping it. But it asks for a CMSampleBufferRef, like:
[writerInput appendSampleBuffer:sampleBuffer];
How will I get this CMSampleBufferRef from the video in NSTemporaryDirectory()?
I have seen code for converting a UIImage to a CMSampleBufferRef, but I can't find any for a video file.
Any suggestion will be helpful. :)
Finally, I fixed my problem using AVMutableComposition. Here is my code:
AVMutableComposition *mixComposition = [AVMutableComposition new];
AVMutableCompositionTrack *mutableCompVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

// threeSecFileURL is the NSURL of the recorded 3-second clip
AVURLAsset *videoAsset = [[AVURLAsset alloc] initWithURL:threeSecFileURL options:nil];
CMTimeRange video_timeRange = CMTimeRangeMake(kCMTimeZero, [videoAsset duration]);

CGAffineTransform rotationTransform = CGAffineTransformMakeRotation(M_PI_2);
[mutableCompVideoTrack setPreferredTransform:rotationTransform];

CMTime currentCMTime = kCMTimeZero;
for (NSInteger count = 0; count < 5; count++)
{
    [mutableCompVideoTrack insertTimeRange:video_timeRange ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:currentCMTime error:nil];
    currentCMTime = CMTimeAdd(currentCMTime, [videoAsset duration]);
}

NSString *fullMoviePath = [NSTemporaryDirectory() stringByAppendingPathComponent:[@"moviefull" stringByAppendingPathExtension:@"mov"]];
NSURL *finalVideoFileURL = [NSURL fileURLWithPath:fullMoviePath];

AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetPassthrough];
[exportSession setOutputFileType:AVFileTypeQuickTimeMovie];
[exportSession setOutputURL:finalVideoFileURL];

// build the range from the composition's duration directly so the
// value and timescale stay consistent
CMTimeRange range = CMTimeRangeMake(kCMTimeZero, [mixComposition duration]);
[exportSession setTimeRange:range];

[exportSession exportAsynchronouslyWithCompletionHandler:^{
    switch ([exportSession status])
    {
        case AVAssetExportSessionStatusFailed:
        {
            NSLog(@"Export failed: %@ %@", [[exportSession error] localizedDescription], [[exportSession error] debugDescription]);
            break;
        }
        case AVAssetExportSessionStatusCancelled:
        {
            NSLog(@"Export canceled");
            break;
        }
        case AVAssetExportSessionStatusCompleted:
        {
            NSLog(@"Export complete!");
            break;
        }
        default:
            NSLog(@"default");
            break;
    }
}];
Take a look at AVAssetReader; this can return a CMSampleBufferRef. Keep in mind, you'll need to manipulate the timestamps for your approach to work.
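To make that concrete, here is a minimal sketch (not from the original answer) of pulling sample buffers out of a file with AVAssetReader and shifting their timestamps so each loop pass lands later on the writer's timeline. The function name, `videoURL`, and `loopOffset` are assumptions, and error handling and `readyForMoreMediaData` backpressure are omitted:

```objectivec
#import <AVFoundation/AVFoundation.h>

// Hypothetical helper: read every video sample buffer from `videoURL`,
// shift its presentation time by `loopOffset`, and append it to the writer input.
static void appendLoopedSamples(NSURL *videoURL, AVAssetWriterInput *writerInput, CMTime loopOffset)
{
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:nil];
    [reader addOutput:output];
    [reader startReading];

    CMSampleBufferRef sample;
    while ((sample = [output copyNextSampleBuffer])) {
        // Re-stamp the buffer so the looped copies play one after another.
        CMSampleTimingInfo timing;
        timing.duration = CMSampleBufferGetDuration(sample);
        timing.presentationTimeStamp =
            CMTimeAdd(CMSampleBufferGetPresentationTimeStamp(sample), loopOffset);
        timing.decodeTimeStamp = kCMTimeInvalid;
        CMSampleBufferRef shifted = NULL;
        CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, sample, 1, &timing, &shifted);
        if (shifted) {
            [writerInput appendSampleBuffer:shifted];
            CFRelease(shifted);
        }
        CFRelease(sample);
    }
}
```

You would call this once per loop iteration, advancing `loopOffset` by the asset's duration each time.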
I have an app which combines video files together to make a long video. There can be a delay between videos (e.g. V1 starts at t=0s and runs for 5 seconds, V2 starts at t=10s). In this case, I want the video to freeze on the last frame of V1 until V2 starts.
I'm using the code below, but between videos, the whole video goes white.
Any ideas how I can get the effect I'm looking for?
Thanks!
@interface VideoJoins : NSObject

- (instancetype)initWithURL:(NSURL*)url
                   andDelay:(NSTimeInterval)delay;

@property (nonatomic, strong) NSURL* url;
@property (nonatomic) NSTimeInterval delay;

@end
and
+ (void)joinVideosSequentially:(NSArray*)videoJoins
                  withFileType:(NSString*)fileType
                      toOutput:(NSURL*)outputVideoURL
                  onCompletion:(dispatch_block_t)onCompletion
                       onError:(ErrorBlock)onError
                      onCancel:(dispatch_block_t)onCancel
{
    // From original question on http://stackoverflow.com/questions/6575128/how-to-combine-video-clips-with-different-orientation-using-avfoundation
    // Didn't add support for portrait+landscape.
    AVMutableComposition *composition = [AVMutableComposition composition];
    AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
    AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

    CMTime startTime = kCMTimeZero;
    // loop to combine the clips into a single video
    for (NSInteger i = 0; i < [videoJoins count]; i++)
    {
        VideoJoins* vj = videoJoins[i];
        NSURL *url = vj.url;
        NSTimeInterval nextDelayTI = 0;
        if (i + 1 < [videoJoins count])
        {
            VideoJoins* vjNext = videoJoins[i + 1];
            nextDelayTI = vjNext.delay;
        }
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
        CMTime assetDuration = [asset duration];
        CMTime assetDurationWithNextDelay = assetDuration;
        if (nextDelayTI != 0)
        {
            CMTime nextDelay = CMTimeMakeWithSeconds(nextDelayTI, 1000000);
            assetDurationWithNextDelay = CMTimeAdd(assetDuration, nextDelay);
        }
        AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
        AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

        // set the orientation
        if (i == 0)
        {
            [compositionVideoTrack setPreferredTransform:videoTrack.preferredTransform];
        }
        BOOL ok = [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDurationWithNextDelay) ofTrack:videoTrack atTime:startTime error:nil];
        ok = [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDuration) ofTrack:audioTrack atTime:startTime error:nil];
        startTime = CMTimeAdd(startTime, assetDurationWithNextDelay);
    }

    // Delete the output video if it exists (use -path, not -absoluteString,
    // when passing a file URL to NSFileManager's path-based APIs)
    NSString* outputVideoString = [outputVideoURL path];
    if ([[NSFileManager defaultManager] fileExistsAtPath:outputVideoString])
    {
        [[NSFileManager defaultManager] removeItemAtPath:outputVideoString error:nil];
    }

    // export the combined video
    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:composition
                                                                      presetName:AVAssetExportPresetHighestQuality];
    exporter.outputURL = outputVideoURL;
    exporter.outputFileType = fileType;
    exporter.shouldOptimizeForNetworkUse = YES;
    [exporter exportAsynchronouslyWithCompletionHandler:^(void)
    {
        switch (exporter.status)
        {
            case AVAssetExportSessionStatusCompleted: {
                onCompletion();
                break;
            }
            case AVAssetExportSessionStatusFailed:
            {
                NSLog(@"Export Failed");
                NSError* err = exporter.error;
                NSLog(@"ExportSessionError: %@", [err localizedDescription]);
                onError(err);
                break;
            }
            case AVAssetExportSessionStatusCancelled:
                NSLog(@"Export Cancelled");
                NSLog(@"ExportSessionError: %@", [exporter.error localizedDescription]);
                onCancel();
                break;
            default:
                break;
        }
    }];
}
EDIT: Got it working. Here is how I extract the images and generate the videos from those images:
+ (void)writeImageAsMovie:(UIImage*)image
                   toPath:(NSURL*)url
                 fileType:(NSString*)fileType
                 duration:(NSTimeInterval)duration
               completion:(VoidBlock)completion
{
    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:url
                                                           fileType:fileType
                                                              error:&error];
    NSParameterAssert(videoWriter);

    CGSize size = image.size;
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                   nil];
    AVAssetWriterInput* writerInput = [AVAssetWriterInput
                                       assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                     sourcePixelBufferAttributes:nil];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];

    // Start a session:
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    // Write samples:
    CMTime halfTime = CMTimeMakeWithSeconds(duration/2, 100000);
    CMTime endTime = CMTimeMakeWithSeconds(duration, 100000);
    CVPixelBufferRef buffer = [VideoCreator pixelBufferFromCGImage:image.CGImage];
    [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
    [adaptor appendPixelBuffer:buffer withPresentationTime:halfTime];
    [adaptor appendPixelBuffer:buffer withPresentationTime:endTime];

    // Finish the session:
    [writerInput markAsFinished];
    [videoWriter endSessionAtSourceTime:endTime];
    [videoWriter finishWritingWithCompletionHandler:^{
        if (videoWriter.error)
        {
            NSLog(@"Error: %@", [videoWriter.error localizedDescription]);
        }
        if (completion)
        {
            completion();
        }
    }];
}
+ (void)generateVideoImageFromURL:(NSURL*)url
                           atTime:(CMTime)thumbTime
                      withMaxSize:(CGSize)maxSize
                       completion:(ImageBlock)handler
{
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:nil];
    if (!asset)
    {
        if (handler)
        {
            handler(nil);
        }
        return; // bail out whether or not a handler was supplied
    }
    if (CMTIME_IS_POSITIVE_INFINITY(thumbTime))
    {
        thumbTime = asset.duration;
    }
    else if (CMTIME_IS_NEGATIVE_INFINITY(thumbTime) || CMTIME_IS_INVALID(thumbTime) || CMTIME_IS_INDEFINITE(thumbTime))
    {
        thumbTime = CMTimeMake(0, 30);
    }

    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES;
    generator.maximumSize = maxSize;

    CMTime actualTime;
    NSError* error;
    CGImageRef image = [generator copyCGImageAtTime:thumbTime actualTime:&actualTime error:&error];
    UIImage *thumb = [[UIImage alloc] initWithCGImage:image];
    CGImageRelease(image);
    if (handler)
    {
        handler(thumb);
    }
}
AVMutableComposition can only stitch videos together. I did it by doing two things:
1. Extracting the last frame of the first video as an image.
2. Making a video from that image (the duration depends on your requirement).
Then you can compose these three videos (V1, V2, and your single-image video). Both tasks are very easy to do.
For extracting the image out of the video, look at this link. If you don't want to use MPMoviePlayerController, which the accepted answer uses, then look at the other answer by Steve.
For making a video from the image, check out this link. That question is about an audio issue, but I don't think you need audio, so just look at the method mentioned in the question itself.
UPDATE:
There is an easier way, but it comes with a disadvantage. You can have two AVPlayers. The first plays your video, which has white frames in between; the other sits behind it, paused at the last frame of video 1. When the middle part comes, you will see the second AVPlayer showing that last frame, so as a whole it will look like video 1 is paused. And trust me, the naked eye can't tell when the player changed. But the obvious disadvantage is that your exported video will still contain the blank frames. So if you are only going to play it back inside your app, you can go with this approach.
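As a rough sketch of that idea (my interpretation, not the answerer's code; `containerView`, `url`, and the gap times t=5s..10s are all placeholders), you could stack two AVPlayerLayers and hide the top one during the gap:

```objectivec
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

AVPlayer *playerA = [AVPlayer playerWithURL:url]; // plays the full video, blank gap included
AVPlayer *playerB = [AVPlayer playerWithURL:url]; // parked on V1's last frame

// Bottom layer: the paused player showing the frozen frame.
AVPlayerLayer *layerB = [AVPlayerLayer playerLayerWithPlayer:playerB];
layerB.frame = containerView.bounds;
[containerView.layer addSublayer:layerB];

// Top layer: the playing video.
AVPlayerLayer *layerA = [AVPlayerLayer playerLayerWithPlayer:playerA];
layerA.frame = containerView.bounds;
[containerView.layer addSublayer:layerA];

// Park player B, paused, at the end of V1 (placeholder time of 5 s).
[playerB seekToTime:CMTimeMake(5, 1)
    toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];

// Toggle the top layer at the gap boundaries so B shows through during the gap.
[playerA addBoundaryTimeObserverForTimes:@[[NSValue valueWithCMTime:CMTimeMake(5, 1)],
                                           [NSValue valueWithCMTime:CMTimeMake(10, 1)]]
                                   queue:dispatch_get_main_queue()
                              usingBlock:^{
    layerA.hidden = !layerA.hidden;
}];
[playerA play];
```

In a real app you would derive the gap boundaries from your VideoJoins data rather than hard-coding them.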
The first frame of a video asset is always black or white, so skip it by starting the inserted time range one frame into the clip:
CMTime delta = CMTimeMake(1, 25); // 1 frame (if fps = 25)
CMTimeRange timeRangeInVideoAsset = CMTimeRangeMake(delta, clipVideoTrack.timeRange.duration);
nextVideoClipStartTime = CMTimeAdd(nextVideoClipStartTime, timeRangeInVideoAsset.duration);
Merged more than 400 short videos into one this way.
My code is based on the code in Chapter 6 of Learning Core Audio.
This function is designed to take an input file and copy the packets from that file to an output file. I am using it to write the data of many input files into one output file, and it works quite well.
The only problem I am having is when I add a new input file and set startPacketPosition to an area of the output file that already has packets written to it: the new data replaces the old data.
Is there a way to write new packets to a file without replacing the existing data? It would be like adding a sound effect to a song file without replacing any of the song data.
If this isn't possible to do using AudioFileWritePackets, what would be the best alternative?
static void writeInputFileToOutputFile(AudioStreamBasicDescription *format, ExtAudioFileRef *inputFile, AudioFileID *outputFile, UInt32 *startPacketPosition) {
    // determine the size of the output buffer
    UInt32 outputBufferSize = 32 * 1024; // 32 KB
    UInt32 sizePerPacket = format->mBytesPerPacket;
    UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;

    // allocate a buffer for receiving the data
    UInt8 *outputBuffer = (UInt8 *)malloc(sizeof(UInt8) * outputBufferSize);

    // read-convert-write
    while (1) {
        AudioBufferList convertedData;    // create an audio buffer list
        convertedData.mNumberBuffers = 1; // with only one buffer

        // set the properties on the single buffer
        convertedData.mBuffers[0].mNumberChannels = format->mChannelsPerFrame;
        convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
        convertedData.mBuffers[0].mData = outputBuffer;

        // get the number of frames and buffer data from the input file
        UInt32 framesPerBuffer = packetsPerBuffer;
        CheckError(ExtAudioFileRead(*inputFile, &framesPerBuffer, &convertedData), "ExtAudioFileRead");

        // if the frame count is 0, we're finished
        // (break rather than return, so the buffer below is freed)
        if (framesPerBuffer == 0) {
            break;
        }

        UInt32 bytes = format->mBytesPerPacket;
        CheckError(AudioFileWritePackets(*outputFile, false, framesPerBuffer * bytes, NULL, *startPacketPosition, &framesPerBuffer, convertedData.mBuffers[0].mData), "AudioFileWritePackets");

        // advance the output file packet position
        *startPacketPosition += framesPerBuffer;
    }
    free(outputBuffer);
}
I found a way to accomplish this using AVMutableComposition:
- (void)addInputAsset:(AVURLAsset *)input toOutputComposition:(AVMutableComposition *)composition atTime:(float)seconds {
    // add the input to the composition
    AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    AVAssetTrack *clipAudioTrack = [[input tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

    // set the start time
    int sampleRate = 44100;
    CMTime nextClipStartTime = CMTimeMakeWithSeconds(seconds, sampleRate);
    [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, input.duration) ofTrack:clipAudioTrack atTime:nextClipStartTime error:nil];
}
To use this method to mix two audio files you would call it like this:
- (void)mixAudio {
    // create a mutable composition
    AVMutableComposition* composition = [AVMutableComposition composition];

    // audio file 1 (use fileURLWithPath: for local files)
    NSURL *url1 = [NSURL fileURLWithPath:@"path/to/file1.mp3"];
    AVURLAsset* asset1 = [[AVURLAsset alloc] initWithURL:url1 options:nil];
    [self addInputAsset:asset1 toOutputComposition:composition atTime:0.0];

    // audio file 2
    NSURL *url2 = [NSURL fileURLWithPath:@"path/to/file2.aif"];
    AVURLAsset* asset2 = [[AVURLAsset alloc] initWithURL:url2 options:nil];
    [self addInputAsset:asset2 toOutputComposition:composition atTime:0.2];

    // create the export session
    AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:composition presetName:AVAssetExportPresetAppleM4A];
    if (exportSession == nil) {
        // ERROR: abort
        return;
    }

    // configure export session output with all our parameters
    exportSession.outputURL = [NSURL fileURLWithPath:@"path/to/output_file.m4a"]; // output path
    exportSession.outputFileType = AVFileTypeAppleM4A;                            // output file type

    // export the file
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (AVAssetExportSessionStatusCompleted == exportSession.status) {
            NSLog(@"AVAssetExportSessionStatusCompleted");
        } else if (AVAssetExportSessionStatusFailed == exportSession.status) {
            NSLog(@"AVAssetExportSessionStatusFailed");
        } else {
            NSLog(@"Export Session Status: %ld", (long)exportSession.status);
        }
    }];
}
My app includes the ability for the user to record a brief message; I'd like to trim off any silence (or, to be more precise, any audio whose volume falls below a given threshold) from the beginning and end of the recording.
I'm recording the audio with an AVAudioRecorder, and saving it to an .aif file. I've seen some mention elsewhere of methods by which I could have it wait to start recording until the audio level reaches a threshold; that'd get me halfway there, but won't help with trimming silence off the end.
If there's a simple way to do this, I'll be eternally grateful!
Thanks.
This project takes audio from the microphone, triggers on loud noise and untriggers when quiet. It also trims and fades in/fades out around the ends.
https://github.com/fulldecent/FDSoundActivatedRecorder
Relevant code you are seeking:
- (NSString *)recordedFilePath
{
    // Prepare output
    NSString *trimmedAudioFileBaseName = [NSString stringWithFormat:@"recordingConverted%x.caf", arc4random()];
    NSString *trimmedAudioFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:trimmedAudioFileBaseName];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    if ([fileManager fileExistsAtPath:trimmedAudioFilePath]) {
        NSError *error;
        if ([fileManager removeItemAtPath:trimmedAudioFilePath error:&error] == NO) {
            NSLog(@"removeItemAtPath %@ error: %@", trimmedAudioFilePath, error);
        }
    }
    NSLog(@"Saving to %@", trimmedAudioFilePath);

    AVAsset *avAsset = [AVAsset assetWithURL:self.audioRecorder.url];
    NSArray *tracks = [avAsset tracksWithMediaType:AVMediaTypeAudio];
    AVAssetTrack *track = [tracks objectAtIndex:0];
    AVAssetExportSession *exportSession = [AVAssetExportSession
                                           exportSessionWithAsset:avAsset
                                           presetName:AVAssetExportPresetAppleM4A];

    // create trim time range
    CMTime startTime = CMTimeMake(self.recordingBeginTime*SAVING_SAMPLES_PER_SECOND, SAVING_SAMPLES_PER_SECOND);
    CMTimeRange exportTimeRange = CMTimeRangeFromTimeToTime(startTime, kCMTimePositiveInfinity);

    // create fade-in time range
    CMTime startFadeInTime = startTime;
    CMTime endFadeInTime = CMTimeMake(self.recordingBeginTime*SAVING_SAMPLES_PER_SECOND + RISE_TRIGGER_INTERVALS*INTERVAL_SECONDS*SAVING_SAMPLES_PER_SECOND, SAVING_SAMPLES_PER_SECOND);
    CMTimeRange fadeInTimeRange = CMTimeRangeFromTimeToTime(startFadeInTime, endFadeInTime);

    // set up the audio mix
    AVMutableAudioMix *exportAudioMix = [AVMutableAudioMix audioMix];
    AVMutableAudioMixInputParameters *exportAudioMixInputParameters =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:track];
    [exportAudioMixInputParameters setVolumeRampFromStartVolume:0.0 toEndVolume:1.0
                                                      timeRange:fadeInTimeRange];
    exportAudioMix.inputParameters = [NSArray arrayWithObject:exportAudioMixInputParameters];

    // configure export session output with all our parameters
    exportSession.outputURL = [NSURL fileURLWithPath:trimmedAudioFilePath];
    exportSession.outputFileType = AVFileTypeAppleM4A;
    exportSession.timeRange = exportTimeRange;
    exportSession.audioMix = exportAudioMix;

    // MAKE THE EXPORT SYNCHRONOUS
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        dispatch_semaphore_signal(semaphore);
    }];
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

    if (AVAssetExportSessionStatusCompleted == exportSession.status) {
        NSLog(@"AVAssetExportSessionStatusCompleted");
        return trimmedAudioFilePath;
    } else if (AVAssetExportSessionStatusFailed == exportSession.status) {
        // a failure may happen because of an event out of your control
        // for example, an interruption like a phone call coming in
        // make sure to handle this case appropriately
        NSLog(@"AVAssetExportSessionStatusFailed %@", exportSession.error.localizedDescription);
    } else {
        NSLog(@"Export Session Status: %ld", (long)exportSession.status);
    }
    return nil;
}
I'm recording the audio with an AVAudioRecorder, and saving it to an .aif file. I've seen some mention elsewhere of methods by which I could have it wait to start recording until the audio level reaches a threshold; that'd get me halfway there
Without adequate buffering, that would truncate the start.
I don't know of an easy way. You would have to write a new audio file after recording, analyzing it for the desired start and end points. Modifying the existing file in place would only be straightforward if you knew the AIFF format well (not many people do) and had an easy way to read the file's sample data.
The analysis stage is pretty easy for a basic implementation: evaluate the average power of the sample data until your threshold is exceeded; repeat in reverse for the end point.
I have a requirement where the user is allowed to trim an audio file before submitting it to the server. The trimming function works fine in iOS 6 but not in iOS 7.
This happens in iOS 7 when the user chooses a song from the iTunes library and starts trimming. It appears to be trimmed, but the new file created after trimming plays up to the trim point and the rest is blank. Also, the duration shows the original song's duration. This doesn't happen for all files, only for some. I also checked exportable and hasProtectedContent; both have the correct values (exportable = YES, hasProtectedContent = NO). What could the issue be in iOS 7?
I am pasting the audio-trimming code for reference:
- (void)trimAndExportAudio:(AVAsset *)avAsset withDuration:(NSInteger)durationInSeconds withStartTime:(NSInteger)startTime endTime:(NSInteger)endTime toFileName:(NSString *)filename withTrimCompleteBlock:(TrimCompleteBlock)trimCompleteBlock
{
    if (startTime < 0 || startTime > durationInSeconds || startTime >= endTime)
    {
        CGLog(@"start time = %ld endTime %ld durationInSeconds %ld", (long)startTime, (long)endTime, (long)durationInSeconds);
        trimCompleteBlock(NO, @"Invalid Start Time");
        return;
    }
    if (endTime > durationInSeconds)
    {
        CGLog(@"start time = %ld endTime %ld durationInSeconds %ld", (long)startTime, (long)endTime, (long)durationInSeconds);
        trimCompleteBlock(NO, @"Invalid End Time");
        return;
    }

    // create the export session
    AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:avAsset presetName:AVAssetExportPresetAppleM4A];
    if (exportSession == nil)
    {
        trimCompleteBlock(NO, @"Could not create an Export Session.");
        return;
    }

    // export file path
    NSError *removeError = nil;
    NSString *filePath = [[CGUtilities applicationLibraryMyRecordingsDirectory] stringByAppendingPathComponent:filename];
    if ([[NSFileManager defaultManager] fileExistsAtPath:filePath])
    {
        [[NSFileManager defaultManager] removeItemAtPath:filePath error:&removeError];
    }
    if (removeError)
    {
        CGLog(@"Error removing existing file = %@", removeError);
    }

    // create trim time range
    CMTime exportStartTime = CMTimeMake(startTime, 1);
    CMTime exportStopTime = CMTimeMake(endTime, 1);
    CMTimeRange exportTimeRange = CMTimeRangeFromTimeToTime(exportStartTime, exportStopTime);

    // configure export session output with all our parameters
    exportSession.outputURL = [NSURL fileURLWithPath:filePath]; // output path
    exportSession.outputFileType = AVFileTypeAppleM4A;          // output file type
    exportSession.timeRange = exportTimeRange;                  // trim time range

    // perform the export
    __weak AVAssetExportSession *weakExportSession = exportSession;
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (AVAssetExportSessionStatusCompleted == exportSession.status)
        {
            if (![filename isEqualToString:kLibraryTempFileName])
            {
                // created a new recording
            }
            trimCompleteBlock(YES, nil);
        }
        else if (AVAssetExportSessionStatusFailed == exportSession.status)
        {
            // a failure may happen because of an event out of your control
            // for example, an interruption like a phone call coming in
            // make sure to handle this case appropriately
            trimCompleteBlock(NO, weakExportSession.error.description);
        }
        else
        {
            trimCompleteBlock(NO, weakExportSession.error.description);
        }
    }];
}
Thanks
First, import AVFoundation (#import <AVFoundation/AVFoundation.h>).
- (BOOL)trimAudiofile
{
    float audioStartTime; // define the start time of the audio
    float audioEndTime;   // define the end time of the audio

    NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
    [dateFormatter setDateFormat:@"yyyy-MM-dd_HH-mm-ss"];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSLibraryDirectory, NSUserDomainMask, YES);
    NSString *libraryCachesDirectory = [paths objectAtIndex:0];
    libraryCachesDirectory = [libraryCachesDirectory stringByAppendingPathComponent:@"Caches"];
    // use an .m4a extension so the path matches the AVFileTypeAppleM4A output type
    NSString *OutputFilePath = [libraryCachesDirectory stringByAppendingFormat:@"/output_%@.m4a", [dateFormatter stringFromDate:[NSDate date]]];
    NSURL *audioFileOutput = [NSURL fileURLWithPath:OutputFilePath];
    NSURL *audioFileInput; // <path of the original audio file>
    if (!audioFileInput || !audioFileOutput)
    {
        return NO;
    }
    [[NSFileManager defaultManager] removeItemAtURL:audioFileOutput error:NULL];

    AVAsset *asset = [AVAsset assetWithURL:audioFileInput];
    AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:asset
                                                                            presetName:AVAssetExportPresetAppleM4A];
    if (exportSession == nil)
    {
        return NO;
    }

    CMTime startTime = CMTimeMake((int)(floor(audioStartTime * 100)), 100);
    CMTime stopTime = CMTimeMake((int)(ceil(audioEndTime * 100)), 100);
    CMTimeRange exportTimeRange = CMTimeRangeFromTimeToTime(startTime, stopTime);

    exportSession.outputURL = audioFileOutput;
    exportSession.timeRange = exportTimeRange;
    exportSession.outputFileType = AVFileTypeAppleM4A;
    [exportSession exportAsynchronouslyWithCompletionHandler:^
    {
        if (AVAssetExportSessionStatusCompleted == exportSession.status)
        {
            NSLog(@"Export OK");
        }
        else if (AVAssetExportSessionStatusFailed == exportSession.status)
        {
            NSLog(@"Export failed: %@", [[exportSession error] localizedDescription]);
        }
    }];
    return YES;
}