I am trying to record audio using AVAssetWriter, but when I play back the file it plays two times slower than the original audio.
What I have done is create the AVAssetWriter in the following way:
_writer = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeMPEG4 error:nil];
AudioChannelLayout acl;
bzero( &acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[ NSNumber numberWithInt: kAudioFormatMPEG4AAC ], AVFormatIDKey,
[ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,
[ NSNumber numberWithFloat: 44100.0 ], AVSampleRateKey,
[ NSNumber numberWithInt: 64000 ], AVEncoderBitRateKey,
[ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey,
nil];
_audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioOutputSettings];
_audioWriterInput.expectsMediaDataInRealTime=YES;
[_writer addInput:_audioWriterInput];
After that I start appending sample buffers as follows:
if (_audioWriterInput.readyForMoreMediaData == YES) {
[_audioWriterInput appendSampleBuffer:sampleBuffer];
return YES;
}
I am getting the mic output using AVCaptureSession in the following delegate method
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
and passing the sampleBuffer directly to the AVAssetWriter to write to the file.
Can somebody please let me know why the audio is written to the file at the wrong rate? Has anybody else faced a similar problem, and what could be the possible resolution?
Got the issue:
I was adding the mic capture to the same session in which I was capturing the camera. Separating the two sessions fixed it, and things now seem to be working fine.
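For reference, here is a minimal sketch of what a separate, audio-only capture session might look like (the _audioSession and _audioQueue names are placeholders of mine, not from the original code):
_audioSession = [[AVCaptureSession alloc] init];
// use the default microphone as the only input of this session
AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *micError = nil;
AVCaptureDeviceInput *micInput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:&micError];
if (micInput && [_audioSession canAddInput:micInput]) {
    [_audioSession addInput:micInput];
}
// deliver audio sample buffers to the delegate on a serial queue
AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
_audioQueue = dispatch_queue_create("audio.capture.queue", DISPATCH_QUEUE_SERIAL);
[audioOutput setSampleBufferDelegate:self queue:_audioQueue];
if ([_audioSession canAddOutput:audioOutput]) {
    [_audioSession addOutput:audioOutput];
}
[_audioSession startRunning];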
The code below converts a cv::Mat to a CVPixelBufferRef:
CVPixelBufferRef getImageBufferFromMat(cv::Mat matimg) {
cv::cvtColor(matimg, matimg, CV_BGR2BGRA);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool: YES], kCVPixelBufferMetalCompatibilityKey,
[NSNumber numberWithBool: YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool: YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
[NSNumber numberWithInt: matimg.cols], kCVPixelBufferWidthKey,
[NSNumber numberWithInt: matimg.rows], kCVPixelBufferHeightKey,
[NSNumber numberWithInt: matimg.step[0]], kCVPixelBufferBytesPerRowAlignmentKey,
nil];
CVPixelBufferRef imageBuffer;
CVReturn status = CVPixelBufferCreate(kCFAllocatorMalloc, matimg.cols, matimg.rows, kCVPixelFormatType_32BGRA, (CFDictionaryRef) CFBridgingRetain(options), &imageBuffer) ;
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *base = CVPixelBufferGetBaseAddress(imageBuffer);
memcpy(base, matimg.data, matimg.total() * matimg.elemSize());
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return imageBuffer;
}
The problem is that I am getting only half the image.
Original image
After conversion (I convert the CVPixelBufferRef back to a UIImage and save it using UIImageWriteToSavedPhotosAlbum just for checking)
Interestingly, the image sizes of the Mat and the CVPixelBufferRef are the same.
Now, what I did was resize the image just before the memcpy, increasing the height two-fold:
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *base = CVPixelBufferGetBaseAddress(imageBuffer);
cv::resize(matimg, matimg, cv::Size(), 1 , 2);
memcpy(base, matimg.data, matimg.total() * matimg.elemSize());
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
Now the image size is still the same...
I badly want to know what's causing this behavior, and I'm sure I am missing something...
I found a solution to this problem after reading this.
The system likes the bytes per row of an image to be a multiple of 64, presumably for better performance due to cache-line alignment. Since the image resolution is [1000 × 1000], which does not lead to a multiple of 64, the bytes per row defaulted to 27840 (I don't know why). This was causing the problem.
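An alternative that avoids resizing (just a sketch under that assumption, not the fix I actually used) is to keep the original resolution and copy row by row, honoring the pixel buffer's actual bytes per row:
// Copy a BGRA cv::Mat into the CVPixelBufferRef row by row, respecting
// CVPixelBufferGetBytesPerRow, so row padding never shifts the image.
// Assumes matimg is already CV_8UC4 (BGRA) and imageBuffer was created
// with matching width and height.
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t srcBytesPerRow = matimg.step[0];
size_t rowBytes = MIN(dstBytesPerRow, srcBytesPerRow);
for (int row = 0; row < matimg.rows; row++) {
    memcpy(dst + row * dstBytesPerRow, matimg.data + row * srcBytesPerRow, rowBytes);
}
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);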
Anyway, if anyone is looking for the solution I went with:
CVPixelBufferRef getImageBufferFromMat(cv::Mat matimg) {
cv::cvtColor(matimg, matimg, CV_BGR2BGRA);
int widthRemainder = matimg.cols % 64, heightRemainder = matimg.rows % 64;
if (widthRemainder != 0 || heightRemainder != 0) {
// pad each dimension up to the next multiple of 64 (leave it alone if it already is one)
cv::resize(matimg, matimg, cv::Size(matimg.cols + (widthRemainder ? 64 - widthRemainder : 0), matimg.rows + (heightRemainder ? 64 - heightRemainder : 0)));
}
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool: YES], kCVPixelBufferMetalCompatibilityKey,
[NSNumber numberWithBool: YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool: YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
[NSNumber numberWithInt: matimg.cols], kCVPixelBufferWidthKey,
[NSNumber numberWithInt: matimg.rows], kCVPixelBufferHeightKey,
[NSNumber numberWithInt: matimg.step[0]], kCVPixelBufferBytesPerRowAlignmentKey,
nil];
CVPixelBufferRef imageBuffer;
CVReturn status = CVPixelBufferCreate(kCFAllocatorMalloc, matimg.cols, matimg.rows, kCVPixelFormatType_32BGRA, (CFDictionaryRef) CFBridgingRetain(options), &imageBuffer) ;
NSParameterAssert(status == kCVReturnSuccess && imageBuffer != NULL);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *base = CVPixelBufferGetBaseAddress(imageBuffer);
memcpy(base, matimg.data, matimg.total() * matimg.elemSize());
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
// UIImageWriteToSavedPhotosAlbum(converts(imageBuffer), nil, nil, nil);
return imageBuffer;
}
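For completeness, a hedged usage sketch (writerInput, pixelBufferAdaptor, frame and frameTime are assumed names, not from the question). The buffer comes back from CVPixelBufferCreate with a +1 retain count, so the caller should release it after appending:
// convert one cv::Mat frame and hand it to an AVAssetWriterInputPixelBufferAdaptor
CVPixelBufferRef pixelBuffer = getImageBufferFromMat(frame);
if (pixelBuffer) {
    if (writerInput.readyForMoreMediaData) {
        [pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
    }
    // release the +1 reference returned by getImageBufferFromMat
    CVPixelBufferRelease(pixelBuffer);
}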
I have a React Native (Expo) app which captures audio using the expo-av library.
It then uploads the audio file to Amazon S3 and transcribes it with Amazon Transcribe.
For Android, I save the audio as a '.m4a' file and call the Amazon Transcribe API as:
transcribe_client.start_transcription_job(
    TranscriptionJobName=job_name,
    Media={'MediaFileUri': file_uri},
    MediaFormat='mp4',
    LanguageCode='en-US')
What should the 'MediaFormat' be for an upload from an iOS device, which will typically be a '.caf' file?
Amazon Transcribe only allows these media formats
MP3, MP4, WAV, FLAC, AMR, OGG, and WebM
Possible solutions:
Create an API which does the conversion for you.
You can easily create one using, for example, the FFmpeg Python library.
Use an already-made API.
By using the cloudconvert API you can convert the file with ease, but only if you pay for it.
Use a different library to record the iOS audio.
There's a module called react-native-record-audio-ios which is made entirely for iOS and records audio in .caf, .m4a, and .wav.
Use the LAME API to convert it.
As said here, you can convert a .caf file into a .mp3 one, probably by creating a native module which would run this:
// note: lame_encode_buffer_interleaved expects raw, interleaved 16-bit PCM,
// so a real .caf file would first need its header stripped or be decoded to PCM
FILE *pcm = fopen("file.caf", "rb");
FILE *mp3 = fopen("file.mp3", "wb");
const int PCM_SIZE = 8192;
const int MP3_SIZE = 8192;
short int pcm_buffer[PCM_SIZE*2];
unsigned char mp3_buffer[MP3_SIZE];
int read, write;
lame_t lame = lame_init();
lame_set_in_samplerate(lame, 44100);
lame_set_VBR(lame, vbr_default);
lame_init_params(lame);
do {
read = fread(pcm_buffer, 2*sizeof(short int), PCM_SIZE, pcm);
if (read == 0)
write = lame_encode_flush(lame, mp3_buffer, MP3_SIZE);
else
write = lame_encode_buffer_interleaved(lame, pcm_buffer, read, mp3_buffer, MP3_SIZE);
fwrite(mp3_buffer, write, 1, mp3);
} while (read != 0);
lame_close(lame);
fclose(mp3);
fclose(pcm);
Or by creating a native module that runs this Objective-C code:
-(void) convertToWav
{
// set up an AVAssetReader to read from the iPod Library
NSString *cafFilePath = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"caf"];
NSURL *assetURL = [NSURL fileURLWithPath:cafFilePath];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
NSError *assetError = nil;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:songAsset
error:&assetError];
if (assetError) {
NSLog (@"error: %@", assetError);
return;
}
AVAssetReaderOutput *assetReaderOutput = [AVAssetReaderAudioMixOutput
assetReaderAudioMixOutputWithAudioTracks:songAsset.tracks
audioSettings: nil];
if (! [assetReader canAddOutput: assetReaderOutput]) {
NSLog (@"can't add reader output... die!");
return;
}
[assetReader addOutput: assetReaderOutput];
NSString *title = @"MyRec";
NSArray *docDirs = NSSearchPathForDirectoriesInDomains (NSDocumentDirectory, NSUserDomainMask, YES);
NSString *docDir = [docDirs objectAtIndex: 0];
NSString *wavFilePath = [[docDir stringByAppendingPathComponent :title]
stringByAppendingPathExtension:@"wav"];
if ([[NSFileManager defaultManager] fileExistsAtPath:wavFilePath])
{
[[NSFileManager defaultManager] removeItemAtPath:wavFilePath error:nil];
}
NSURL *exportURL = [NSURL fileURLWithPath:wavFilePath];
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:exportURL
fileType:AVFileTypeWAVE
error:&assetError];
if (assetError)
{
NSLog (@"error: %@", assetError);
return;
}
AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
[NSNumber numberWithInt:2], AVNumberOfChannelsKey,
[NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
[NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
nil];
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
outputSettings:outputSettings];
if ([assetWriter canAddInput:assetWriterInput])
{
[assetWriter addInput:assetWriterInput];
}
else
{
NSLog (@"can't add asset writer input... die!");
return;
}
assetWriterInput.expectsMediaDataInRealTime = NO;
[assetWriter startWriting];
[assetReader startReading];
AVAssetTrack *soundTrack = [songAsset.tracks objectAtIndex:0];
CMTime startTime = CMTimeMake (0, soundTrack.naturalTimeScale);
[assetWriter startSessionAtSourceTime: startTime];
__block UInt64 convertedByteCount = 0;
dispatch_queue_t mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
[assetWriterInput requestMediaDataWhenReadyOnQueue:mediaInputQueue
usingBlock: ^
{
while (assetWriterInput.readyForMoreMediaData)
{
CMSampleBufferRef nextBuffer = [assetReaderOutput copyNextSampleBuffer];
if (nextBuffer)
{
// append buffer
[assetWriterInput appendSampleBuffer: nextBuffer];
convertedByteCount += CMSampleBufferGetTotalSampleSize(nextBuffer);
CMTime progressTime = CMSampleBufferGetPresentationTimeStamp(nextBuffer);
CMTime sampleDuration = CMSampleBufferGetDuration(nextBuffer);
if (CMTIME_IS_NUMERIC(sampleDuration))
progressTime = CMTimeAdd(progressTime, sampleDuration);
float dProgress = CMTimeGetSeconds(progressTime) / CMTimeGetSeconds(songAsset.duration);
NSLog(@"%f", dProgress);
// release the buffer returned by copyNextSampleBuffer to avoid leaking every sample
CFRelease(nextBuffer);
}
else
{
[assetWriterInput markAsFinished];
// [assetWriter finishWriting];
[assetReader cancelReading];
}
}
}];
}
But, as said here, the iPhone shouldn't really be used for processor-intensive tasks such as audio conversion.
So I recommend the third solution, because it's easier and doesn't look like an intensive task for the iPhone's processor.
I'm trying to encode any audio format to AAC format, with 44100Hz sample rate.
So basically : input (mp3, aac? etc, any sample rate) -> AAC (44100Hz)
The source audio comes from a video (mp4), but I can extract it to m4a (AAC). The thing is I also want to change the sample rate to 44100Hz.
I'm trying to achieve this with AVAssetReader and AVAssetWriter, but I'm not sure if it's possible or if it's the best solution. Any other solution would be very much appreciated!
Here's my code so far:
// Input video audio (.mp4)
AVAsset *videoAsset = <mp4 video asset>;
NSArray<AVAssetTrack *> *videoAudioTracks = [videoAsset tracksWithMediaType:AVMediaTypeAudio];
AVAssetTrack *videoAudioTrack = [videoAudioTracks objectAtIndex:0];
// Output audio (.m4a AAC)
NSURL *exportUrl = <m4a, aac output file URL>;
// ASSET READER
NSError *error;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:videoAsset
error:&error];
if(error) {
NSLog(@"error: %@", error);
return;
}
// Asset reader output
AVAssetReaderOutput *assetReaderOutput =[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoAudioTrack
outputSettings:nil];
if(![assetReader canAddOutput:assetReaderOutput]) {
NSLog(@"Can't add output!");
return;
}
[assetReader addOutput:assetReaderOutput];
// ASSET WRITER
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:exportUrl
fileType:AVFileTypeAppleM4A
error:&error];
if(error) {
NSLog(@"error: %@", error);
return;
}
AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
NSDictionary *outputSettings = @{AVFormatIDKey: @(kAudioFormatMPEG4AAC),
AVNumberOfChannelsKey: @2,
AVSampleRateKey: @44100.0F,
AVChannelLayoutKey: [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)],
AVEncoderBitRateKey: @64000};
/*NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithFloat:44100.f], AVSampleRateKey,
[NSNumber numberWithInt:2], AVNumberOfChannelsKey,
[NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
[NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
nil];*/
// Asset writer input
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
outputSettings:outputSettings];
if ([assetWriter canAddInput:assetWriterInput])
[assetWriter addInput:assetWriterInput];
else {
NSLog(@"can't add asset writer input... die!");
return;
}
assetWriterInput.expectsMediaDataInRealTime = NO;
[assetWriter startWriting];
[assetReader startReading];
CMTime startTime = CMTimeMake (0, videoAudioTrack.naturalTimeScale);
[assetWriter startSessionAtSourceTime: startTime];
__block UInt64 convertedByteCount = 0;
dispatch_queue_t mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
[assetWriterInput requestMediaDataWhenReadyOnQueue:mediaInputQueue
usingBlock: ^
{
while (assetWriterInput.readyForMoreMediaData)
{
CMSampleBufferRef nextBuffer = [assetReaderOutput copyNextSampleBuffer];
if (nextBuffer)
{
// append buffer
[assetWriterInput appendSampleBuffer: nextBuffer];
convertedByteCount += CMSampleBufferGetTotalSampleSize (nextBuffer);
CMSampleBufferInvalidate(nextBuffer);
CFRelease(nextBuffer);
nextBuffer = NULL;
}
else
{
[assetWriterInput markAsFinished];
// [assetWriter finishWriting];
[assetReader cancelReading];
break;
}
}
}];
And here is the error I get with a video that contains an mp3 audio track:
Terminating app due to uncaught exception
'NSInvalidArgumentException', reason: '*** -[AVAssetWriterInput
appendSampleBuffer:] Cannot append sample buffer: Input buffer must
be in an uncompressed format when outputSettings is not nil'
Any help would be much appreciated, thanks !
You should be able to achieve this by configuring your AVAssetReaderOutput output settings so that the reader decompresses the source track to Linear PCM; the writer then re-encodes those uncompressed buffers to AAC at 44100 Hz:
NSDictionary *readerOutputSettings = @{ AVSampleRateKey: @44100, AVFormatIDKey: @(kAudioFormatLinearPCM) };
AVAssetReaderOutput *assetReaderOutput =[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoAudioTrack
outputSettings:readerOutputSettings];
I'm not fluent in Obj-C, and I had to google around to figure out how to write the accepted answer in Swift.
Here is the Swift version:
let audioSettings: [String : Any] = [
AVFormatIDKey: kAudioFormatLinearPCM,
AVSampleRateKey: 44100
]
let assetReaderAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioSettings)
I have a few methods that are supposed to write video to a mov file in the temp directory, but after ~15 seconds I'm getting errors:
Received memory warning.
Received memory warning.
Received memory warning.
Received memory warning.
Then the app crashes. I'm stuck and have no idea what is wrong...
- (void) saveVideoToFileFromBuffer:(CMSampleBufferRef) buffer {
if (!movieWriter) {
NSString *moviePath = [NSString stringWithFormat:@"%@tmpMovie", NSTemporaryDirectory()];
if ([[NSFileManager defaultManager] fileExistsAtPath:moviePath])
[self removeMovieAtPath:moviePath];
NSError *error = nil;
movieWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:moviePath] fileType: AVFileTypeQuickTimeMovie error:&error];
if (error) {
m_log(@"Error allocating AssetWriter: %@", [error localizedDescription]);
} else {
CMFormatDescriptionRef description = CMSampleBufferGetFormatDescription(buffer);
if(![self setUpMovieWriterObjectWithDescriptor:description])
m_log(@"ET go home, no video recording!!");
}
}
if (movieWriter.status != AVAssetWriterStatusWriting) {
[movieWriter startWriting];
[movieWriter startSessionAtSourceTime:kCMTimeZero];
apiStatusChangeIndicator = NO;
}
if (movieWriter.status == AVAssetWriterStatusWriting) {
if (![movieInput appendSampleBuffer:buffer]) m_log(#"Failed to append sample buffer!");
}
}
Rest of code:
- (BOOL) setUpMovieWriterObjectWithDescriptor:(CMFormatDescriptionRef) descriptor {
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(descriptor);
NSDictionary *compressionSettings = [NSDictionary dictionaryWithObjectsAndKeys: AVVideoProfileLevelH264Baseline31,AVVideoProfileLevelKey,
[NSNumber numberWithInteger:30], AVVideoMaxKeyFrameIntervalKey, nil];
//AVVideoProfileLevelKey set because of errors
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,[NSNumber numberWithInt:dimensions.width], AVVideoWidthKey,
[NSNumber numberWithInt:dimensions.height], AVVideoHeightKey, compressionSettings, AVVideoCompressionPropertiesKey, nil];
if ([movieWriter canApplyOutputSettings:videoSettings forMediaType:AVMediaTypeVideo]) {
movieInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
movieInput.expectsMediaDataInRealTime = YES;
if ([movieWriter canAddInput:movieInput]) {
[movieWriter addInput:movieInput];
} else {
m_log(@"Couldn't apply video input to Asset Writer!");
return NO;
}
} else {
m_log(@"Couldn't apply video settings to AVAssetWriter!");
return NO;
}
return YES;
}
It would be great if someone could point out my mistake! I can share more code if needed. The sample buffer comes from a CIImage with filters applied.
Now a new thing: I can record a few seconds of movie and save it, but it's all a black screen...
UPDATE
Saving the video works, but creating the CMSampleBufferRef from the CIImage fails. That's the reason I get a green or black screen. Here's the code:
- (CMSampleBufferRef) processCIImageToPixelBuffer:(CIImage*) image andSampleBuffer:(CMSampleTimingInfo) info{
CVPixelBufferRef renderTargetPixelBuffer;
CFDictionaryRef empty;
CFMutableDictionaryRef attrs;
empty = CFDictionaryCreate(kCFAllocatorDefault,
NULL,
NULL,
0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
1,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs,
kCVPixelBufferIOSurfacePropertiesKey,
empty);
CVReturn cvError = CVPixelBufferCreate(kCFAllocatorSystemDefault,
[image extent].size.width,
[image extent].size.height,
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
attrs,
&renderTargetPixelBuffer);
if (cvError != 0) {
m_log(@"Error when init Pixel buffer: %i", cvError);
}
CFRelease(empty);
CFRelease(attrs);
CVPixelBufferLockBaseAddress(renderTargetPixelBuffer, 0 );
[_coreImageContext render:image toCVPixelBuffer:renderTargetPixelBuffer];
CVPixelBufferUnlockBaseAddress(renderTargetPixelBuffer, 0 );
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, renderTargetPixelBuffer, &videoInfo);
CMSampleBufferRef recordingBuffer;
OSStatus cmError = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, renderTargetPixelBuffer, true, NULL, NULL, videoInfo, &info, &recordingBuffer);
if (cmError != 0 ) {
m_log(@"Error creating sample buffer: %i", (int)cmError);
}
CVPixelBufferRelease(renderTargetPixelBuffer);
renderTargetPixelBuffer = NULL;
CFRelease(videoInfo);
videoInfo = NULL;
return recordingBuffer;
}
You should check your code with the Instruments profiling tool, especially for memory leaks. Maybe you do not release the sample buffer:
CMSampleBufferInvalidate(buffer);
CFRelease(buffer);
buffer = NULL;
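In this code, processCIImageToPixelBuffer returns a sample buffer created with CMSampleBufferCreateForImageBuffer, so it carries a +1 retain count and the caller owns it. A minimal sketch of the calling side, assuming hypothetical filteredImage and timingInfo variables:
// append the freshly created buffer, then release it so each frame's
// pixel data can actually be reclaimed between frames
CMSampleBufferRef recordingBuffer = [self processCIImageToPixelBuffer:filteredImage andSampleBuffer:timingInfo];
if (recordingBuffer) {
    [self saveVideoToFileFromBuffer:recordingBuffer];
    CFRelease(recordingBuffer);
    recordingBuffer = NULL;
}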
I'm trying to use AVAssetWriter to create an m4a file. It works perfectly on every device except the iPhone 3G. Some say the problem is that the 3G does not support AAC encoding. Is that true? if ([assetWriter canAddInput:assetWriterInput]) returns NO.
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:exportURL
fileType:AVFileTypeAppleM4A
error:&assetError];
if (assetError) {
NSLog (@"error: %@", assetError);
return;
}
AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelBitmap = 0;
channelLayout.mNumberChannelDescriptions = 0;
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[ NSNumber numberWithInt: kAudioFormatMPEG4AAC], AVFormatIDKey,
[ NSNumber numberWithInt: 2], AVNumberOfChannelsKey,
[ NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
[ NSData dataWithBytes: &channelLayout length:sizeof( AudioChannelLayout ) ], AVChannelLayoutKey,
[ NSNumber numberWithInt: 192000], AVEncoderBitRateKey,nil];
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
outputSettings:outputSettings];
if ([assetWriter canAddInput:assetWriterInput]) {
[assetWriter addInput:assetWriterInput];
}
Does anybody know how to create an m4a file on an iPhone 3G?
Thanks
As you're saying, the iPhone 3G does not support "built-in" encoding of AAC (sorry, no direct link; scroll about a page down to Table 1-2), so there is no way to do it without bundling your own encoder.
The problem is that bundling an AAC encoder requires a patent license. If that's something you can deal with, you should be able to compile faad2 for the iPhone and use that.
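If you only need to detect the missing encoder at runtime and fail gracefully, one possible sketch (my own addition, not part of the original answer) is to ask the writer whether it can apply the AAC settings before creating the input:
// probe for AAC encoding support before committing to these output settings
if ([assetWriter canApplyOutputSettings:outputSettings forMediaType:AVMediaTypeAudio]) {
    AVAssetWriterInput *aacInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                                      outputSettings:outputSettings];
    if ([assetWriter canAddInput:aacInput]) {
        [assetWriter addInput:aacInput];
    }
} else {
    // no AAC encoder available on this device (e.g. iPhone 3G);
    // fall back to a bundled encoder or a different container/codec here
    NSLog(@"AAC encoding is not supported on this device");
}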