I have a few methods that are supposed to write video to a .mov file in the temp directory, but after ~15 seconds I start getting errors:
Received memory warning.
Received memory warning.
Received memory warning.
Received memory warning.
Then the app crashes. I'm stuck and have no idea what is wrong...
- (void)saveVideoToFileFromBuffer:(CMSampleBufferRef)buffer {
    if (!movieWriter) {
        NSString *moviePath = [NSString stringWithFormat:@"%@tmpMovie", NSTemporaryDirectory()];
        if ([[NSFileManager defaultManager] fileExistsAtPath:moviePath])
            [self removeMovieAtPath:moviePath];
        NSError *error = nil;
        movieWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:moviePath] fileType:AVFileTypeQuickTimeMovie error:&error];
        if (error) {
            m_log(@"Error allocating AssetWriter: %@", [error localizedDescription]);
        } else {
            CMFormatDescriptionRef description = CMSampleBufferGetFormatDescription(buffer);
            if (![self setUpMovieWriterObjectWithDescriptor:description])
                m_log(@"ET go home, no video recording!!");
        }
    }
    if (movieWriter.status != AVAssetWriterStatusWriting) {
        [movieWriter startWriting];
        [movieWriter startSessionAtSourceTime:kCMTimeZero];
        apiStatusChangeIndicator = NO;
    }
    if (movieWriter.status == AVAssetWriterStatusWriting) {
        if (![movieInput appendSampleBuffer:buffer]) m_log(@"Failed to append sample buffer!");
    }
}
Rest of code:
- (BOOL)setUpMovieWriterObjectWithDescriptor:(CMFormatDescriptionRef)descriptor {
    CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(descriptor);
    NSDictionary *compressionSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoProfileLevelH264Baseline31, AVVideoProfileLevelKey,
                                         [NSNumber numberWithInteger:30], AVVideoMaxKeyFrameIntervalKey, nil];
    //AVVideoProfileLevelKey set because of errors
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:dimensions.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:dimensions.height], AVVideoHeightKey,
                                   compressionSettings, AVVideoCompressionPropertiesKey, nil];
    if ([movieWriter canApplyOutputSettings:videoSettings forMediaType:AVMediaTypeVideo]) {
        movieInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
        movieInput.expectsMediaDataInRealTime = YES;
        if ([movieWriter canAddInput:movieInput]) {
            [movieWriter addInput:movieInput];
        } else {
            m_log(@"Couldn't apply video input to Asset Writer!");
            return NO;
        }
    } else {
        m_log(@"Couldn't apply video settings to AVAssetWriter!");
        return NO;
    }
    return YES;
}
It would be great if someone could point out my mistake! I can share more code if needed. The sample buffer comes from a CIImage with filters applied.
Now a new thing: I can record a few seconds of movie and save it, but it's all a black screen...
UPDATE
Saving video works, but creating the CMSampleBufferRef from a CIImage fails. That's the reason I got a green or black screen. Here's the code:
- (CMSampleBufferRef)processCIImageToPixelBuffer:(CIImage *)image andSampleBuffer:(CMSampleTimingInfo)info {
    CVPixelBufferRef renderTargetPixelBuffer;
    CFDictionaryRef empty;
    CFMutableDictionaryRef attrs;
    empty = CFDictionaryCreate(kCFAllocatorDefault,
                               NULL,
                               NULL,
                               0,
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
    attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                      1,
                                      &kCFTypeDictionaryKeyCallBacks,
                                      &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(attrs,
                         kCVPixelBufferIOSurfacePropertiesKey,
                         empty);
    CVReturn cvError = CVPixelBufferCreate(kCFAllocatorSystemDefault,
                                           [image extent].size.width,
                                           [image extent].size.height,
                                           kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                           attrs,
                                           &renderTargetPixelBuffer);
    if (cvError != 0) {
        m_log(@"Error when init Pixel buffer: %i", cvError);
    }
    CFRelease(empty);
    CFRelease(attrs);
    CVPixelBufferLockBaseAddress(renderTargetPixelBuffer, 0);
    [_coreImageContext render:image toCVPixelBuffer:renderTargetPixelBuffer];
    CVPixelBufferUnlockBaseAddress(renderTargetPixelBuffer, 0);
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, renderTargetPixelBuffer, &videoInfo);
    CMSampleBufferRef recordingBuffer;
    OSStatus cmError = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, renderTargetPixelBuffer, true, NULL, NULL, videoInfo, &info, &recordingBuffer);
    if (cmError != 0) {
        m_log(@"Error creating sample buffer: %i", (int)cmError);
    }
    CVPixelBufferRelease(renderTargetPixelBuffer);
    renderTargetPixelBuffer = NULL;
    CFRelease(videoInfo);
    videoInfo = NULL;
    return recordingBuffer;
}
You should check your code with the Profile tool (Instruments), especially for memory leaks. Maybe you do not release the sample buffer:
CMSampleBufferInvalidate(buffer);
CFRelease(buffer);
buffer = NULL;
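For example, on the calling side the buffer returned by processCIImageToPixelBuffer:andSampleBuffer: is a +1 reference and can be released right after it has been appended. A minimal sketch, assuming a processing callback that owns the returned buffer (filteredImage and timingInfo are placeholder names):
CMSampleBufferRef recordingBuffer = [self processCIImageToPixelBuffer:filteredImage
                                                      andSampleBuffer:timingInfo];
if (recordingBuffer) {
    [self saveVideoToFileFromBuffer:recordingBuffer];
    // Balance the create: invalidate and release once the writer has consumed it.
    CMSampleBufferInvalidate(recordingBuffer);
    CFRelease(recordingBuffer);
    recordingBuffer = NULL;
}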
Related
I have a React Native (Expo) app which captures audio using the expo-av library.
It then uploads the audio file to Amazon S3 and then transcribes it with Amazon Transcribe.
For Android, I save the audio as a '.m4a' file and call the Amazon Transcribe API as:
transcribe_client.start_transcription_job(TranscriptionJobName=job_name,
                                          Media={'MediaFileUri': file_uri},
                                          MediaFormat='mp4',
                                          LanguageCode='en-US')
What should the 'MediaFormat' be for an upload from an iOS device, which will typically be a '.caf' file?
Amazon Transcribe only allows these media formats
MP3, MP4, WAV, FLAC, AMR, OGG, and WebM
Possible solutions:
Create an API which does the conversion for you.
You can easily create one using, for example, the FFmpeg Python library.
Use an already made API.
By using the cloudconvert API you can convert the file with ease, but only if you pay for it.
Use a different library to record the iOS audio.
There's a module called react-native-record-audio-ios which is made entirely for iOS and records audio in .caf, .m4a, and .wav.
Use the LAME API to convert it.
As said here, you can convert a .caf file into an .mp3 one, probably by creating a native module which would run this:
#include <stdio.h>
#include <lame/lame.h>

// Note: this treats the input file as raw interleaved 16-bit PCM.
FILE *pcm = fopen("file.caf", "rb");
FILE *mp3 = fopen("file.mp3", "wb");
const int PCM_SIZE = 8192;
const int MP3_SIZE = 8192;
short int pcm_buffer[PCM_SIZE*2];
unsigned char mp3_buffer[MP3_SIZE];
int read, write;

lame_t lame = lame_init();
lame_set_in_samplerate(lame, 44100);
lame_set_VBR(lame, vbr_default);
lame_init_params(lame);

do {
    read = fread(pcm_buffer, 2*sizeof(short int), PCM_SIZE, pcm);
    if (read == 0)
        write = lame_encode_flush(lame, mp3_buffer, MP3_SIZE);
    else
        write = lame_encode_buffer_interleaved(lame, pcm_buffer, read, mp3_buffer, MP3_SIZE);
    fwrite(mp3_buffer, write, 1, mp3);
} while (read != 0);

lame_close(lame);
fclose(mp3);
fclose(pcm);
Create a native module that runs this Objective-C code:
-(void) convertToWav
{
    // set up an AVAssetReader to read from the iPod Library
    NSString *cafFilePath = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"caf"];
    NSURL *assetURL = [NSURL fileURLWithPath:cafFilePath];
    AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
    NSError *assetError = nil;
    AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:songAsset
                                                               error:&assetError];
    if (assetError) {
        NSLog(@"error: %@", assetError);
        return;
    }
    AVAssetReaderOutput *assetReaderOutput = [AVAssetReaderAudioMixOutput
                                              assetReaderAudioMixOutputWithAudioTracks:songAsset.tracks
                                              audioSettings:nil];
    if (![assetReader canAddOutput:assetReaderOutput]) {
        NSLog(@"can't add reader output... die!");
        return;
    }
    [assetReader addOutput:assetReaderOutput];
    NSString *title = @"MyRec";
    NSArray *docDirs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *docDir = [docDirs objectAtIndex:0];
    NSString *wavFilePath = [[docDir stringByAppendingPathComponent:title]
                             stringByAppendingPathExtension:@"wav"];
    if ([[NSFileManager defaultManager] fileExistsAtPath:wavFilePath])
    {
        [[NSFileManager defaultManager] removeItemAtPath:wavFilePath error:nil];
    }
    NSURL *exportURL = [NSURL fileURLWithPath:wavFilePath];
    AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:exportURL
                                                          fileType:AVFileTypeWAVE
                                                             error:&assetError];
    if (assetError)
    {
        NSLog(@"error: %@", assetError);
        return;
    }
    AudioChannelLayout channelLayout;
    memset(&channelLayout, 0, sizeof(AudioChannelLayout));
    channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                    [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
                                    [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                    [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                    [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                    [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                    nil];
    AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                                              outputSettings:outputSettings];
    if ([assetWriter canAddInput:assetWriterInput])
    {
        [assetWriter addInput:assetWriterInput];
    }
    else
    {
        NSLog(@"can't add asset writer input... die!");
        return;
    }
    assetWriterInput.expectsMediaDataInRealTime = NO;
    [assetWriter startWriting];
    [assetReader startReading];
    AVAssetTrack *soundTrack = [songAsset.tracks objectAtIndex:0];
    CMTime startTime = CMTimeMake(0, soundTrack.naturalTimeScale);
    [assetWriter startSessionAtSourceTime:startTime];
    __block UInt64 convertedByteCount = 0;
    dispatch_queue_t mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
    [assetWriterInput requestMediaDataWhenReadyOnQueue:mediaInputQueue
                                            usingBlock:^
     {
         while (assetWriterInput.readyForMoreMediaData)
         {
             CMSampleBufferRef nextBuffer = [assetReaderOutput copyNextSampleBuffer];
             if (nextBuffer)
             {
                 // append buffer
                 [assetWriterInput appendSampleBuffer:nextBuffer];
                 convertedByteCount += CMSampleBufferGetTotalSampleSize(nextBuffer);
                 CMTime progressTime = CMSampleBufferGetPresentationTimeStamp(nextBuffer);
                 CMTime sampleDuration = CMSampleBufferGetDuration(nextBuffer);
                 if (CMTIME_IS_NUMERIC(sampleDuration))
                     progressTime = CMTimeAdd(progressTime, sampleDuration);
                 float dProgress = CMTimeGetSeconds(progressTime) / CMTimeGetSeconds(songAsset.duration);
                 NSLog(@"%f", dProgress);
             }
             else
             {
                 [assetWriterInput markAsFinished];
                 // [assetWriter finishWriting];
                 [assetReader cancelReading];
             }
         }
     }];
}
But, as said here:
the iPhone shouldn't really be used for processor-intensive things such as audio conversion.
So I recommend the third solution, because it's easier and doesn't look like an intensive task for the iPhone's processor.
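If you do end up writing your own native recording module instead, a rough sketch of an AVAudioRecorder configured to produce an AAC '.m4a' (which Transcribe accepts with MediaFormat 'mp4') might look like this; the output path and settings here are only illustrative:
#import <AVFoundation/AVFoundation.h>

// Sketch: record straight to AAC in an .m4a container (illustrative settings).
NSURL *outputURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"recording.m4a"]];
NSDictionary *settings = @{ AVFormatIDKey         : @(kAudioFormatMPEG4AAC),
                            AVSampleRateKey       : @(44100.0),
                            AVNumberOfChannelsKey : @(1) };
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord error:&error];
[[AVAudioSession sharedInstance] setActive:YES error:&error];
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:outputURL
                                                         settings:settings
                                                            error:&error];
if (recorder && error == nil) {
    [recorder prepareToRecord];
    [recorder record];
}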
I'm trying to reverse an AVAsset's audio and save it to a file. To make things clear, I've made a simple application demonstrating the issue: https://github.com/ksenia-lyagusha/AudioReverse.git
The application takes an mp4 video file from the bundle, exports it to the temporary folder in the sandbox as a single m4a file, then tries to read it from there, reverse it, and save the result file back.
The temporary m4a file is OK.
The only result of my reverse step is an audio file in the sandbox containing white noise.
The part of the code below is in charge of reversing the AVAsset. It is based on these related questions:
How to reverse an audio file?
iOS audio manipulation - play local .caf file backwards
However, it doesn't work for me.
OSStatus theErr = noErr;
UInt64 fileDataSize = 0;
AudioFileID inputAudioFile;
AudioStreamBasicDescription theFileFormat;
UInt32 thePropertySize = sizeof(theFileFormat);
theErr = AudioFileOpenURL((__bridge CFURLRef)[NSURL URLWithString:inputPath], kAudioFileReadPermission, 0, &inputAudioFile);
thePropertySize = sizeof(fileDataSize);
theErr = AudioFileGetProperty(inputAudioFile, kAudioFilePropertyAudioDataByteCount, &thePropertySize, &fileDataSize);
UInt32 ps = sizeof(AudioStreamBasicDescription) ;
AudioFileGetProperty(inputAudioFile, kAudioFilePropertyDataFormat, &ps, &theFileFormat);
UInt64 dataSize = fileDataSize;
void *theData = malloc(dataSize);
// set up output file
AudioFileID outputAudioFile;
AudioStreamBasicDescription myPCMFormat;
myPCMFormat.mSampleRate = 44100;
myPCMFormat.mFormatID = kAudioFormatLinearPCM;
// kAudioFormatFlagsCanonical is deprecated
myPCMFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
myPCMFormat.mChannelsPerFrame = 1;
myPCMFormat.mFramesPerPacket = 1;
myPCMFormat.mBitsPerChannel = 32;
myPCMFormat.mBytesPerPacket = (myPCMFormat.mBitsPerChannel / 8) * myPCMFormat.mChannelsPerFrame;
myPCMFormat.mBytesPerFrame = myPCMFormat.mBytesPerPacket;
NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"ReverseAudio.caf"];
NSURL *outputURL = [NSURL fileURLWithPath:exportPath];
theErr = AudioFileCreateWithURL((__bridge CFURLRef)outputURL,
kAudioFileCAFType,
&myPCMFormat,
kAudioFileFlags_EraseFile,
&outputAudioFile);
//Read data into buffer
//if readPoint = dataSize, then bytesToRead = 0 in while loop and
//it is endless
SInt64 readPoint = dataSize-1;
UInt64 writePoint = 0;
while (readPoint > 0)
{
    UInt32 bytesToRead = 2;
    AudioFileReadBytes(inputAudioFile, false, readPoint, &bytesToRead, theData);
    // bytesToRead is now the amount of data actually read
    UInt32 bytesToWrite = bytesToRead;
    AudioFileWriteBytes(outputAudioFile, false, writePoint, &bytesToWrite, theData);
    // bytesToWrite is now the amount of data actually written
    writePoint += bytesToWrite;
    readPoint -= bytesToRead;
}
free(theData);
AudioFileClose(inputAudioFile);
AudioFileClose(outputAudioFile);
If I change the file type in AudioFileCreateWithURL from kAudioFileCAFType to another one, the result file is not created in the sandbox at all.
Thanks for any help.
You get white noise because your in and out file formats are incompatible. You have different sample rates and channels and probably other differences. To make this work you need to have a common (PCM) format mediating between reads and writes. This is a reasonable job for the new(ish) AVAudio frameworks. We read from file to PCM, shuffle the buffers, then write from PCM to file. This approach is not optimised for large files, as all data is read into the buffers in one go, but is enough to get you started.
You can call this method from your getAudioFromVideo completion block. Error handling ignored for clarity.
- (void)readAudioFromURL:(NSURL *)inURL reverseToURL:(NSURL *)outURL {
    //prepare the in and out files
    NSError *error = nil;
    AVAudioFile *inFile = [[AVAudioFile alloc] initForReading:inURL error:nil];
    AVAudioFormat *format = inFile.processingFormat;
    AVAudioFrameCount frameCount = (UInt32)inFile.length;
    NSDictionary *outSettings = @{
        AVNumberOfChannelsKey: @(format.channelCount),
        AVSampleRateKey: @(format.sampleRate)};
    AVAudioFile *outFile = [[AVAudioFile alloc] initForWriting:outURL
                                                      settings:outSettings
                                                         error:nil];
    //prepare the forward and reverse buffers
    self.forwaredBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:format
                                                        frameCapacity:frameCount];
    self.reverseBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:format
                                                       frameCapacity:frameCount];
    //read file into forwardBuffer
    [inFile readIntoBuffer:self.forwaredBuffer error:&error];
    //set frameLength of reverseBuffer to forwardBuffer frameLength
    AVAudioFrameCount frameLength = self.forwaredBuffer.frameLength;
    self.reverseBuffer.frameLength = frameLength;
    //iterate over channels
    //stride is 1 or 2 depending on interleave format
    NSInteger stride = self.forwaredBuffer.stride;
    for (AVAudioChannelCount channelIdx = 0;
         channelIdx < self.forwaredBuffer.format.channelCount;
         channelIdx++) {
        float *forwaredChannelData = self.forwaredBuffer.floatChannelData[channelIdx];
        float *reverseChannelData = self.reverseBuffer.floatChannelData[channelIdx];
        int32_t reverseIdx = 0;
        //iterate over samples, allocate to reverseBuffer in reverse order
        //(frameIdx - 1 keeps the read inside the valid 0..frameLength-1 range)
        for (AVAudioFrameCount frameIdx = frameLength; frameIdx > 0; frameIdx--) {
            float sample = forwaredChannelData[(frameIdx - 1) * stride];
            reverseChannelData[reverseIdx * stride] = sample;
            reverseIdx++;
        }
    }
    //write reverseBuffer to outFile
    [outFile writeFromBuffer:self.reverseBuffer error:nil];
}
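A hedged example of how it might be called from the export completion block mentioned above (the file names are placeholders matching the question's setup, where the m4a is exported to the temporary folder):
NSURL *inURL  = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"exported.m4a"]];
NSURL *outURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"reversed.m4a"]];
[self readAudioFromURL:inURL reverseToURL:outURL];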
I wasn't able to find the problem in your code; however, I suggest reversing the AVAsset using AVAssetWriter. The following code is based on iOS reverse audio through AVAssetWriter. I've added an additional method there to make it work, and finally I've got a reversed file.
static NSMutableArray *samples;

static OSStatus sampler(CMSampleBufferRef sampleBuffer, CMItemCount index, void *refcon)
{
    [samples addObject:(__bridge id _Nonnull)(sampleBuffer)];
    return noErr;
}

- (void)reversePlayAudio:(NSURL *)inputURL
{
    AVAsset *asset = [AVAsset assetWithURL:inputURL];
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
    AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    NSMutableDictionary *audioReadSettings = [NSMutableDictionary dictionary];
    [audioReadSettings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM]
                         forKey:AVFormatIDKey];
    AVAssetReaderTrackOutput *readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:audioReadSettings];
    [reader addOutput:readerOutput];
    [reader startReading];
    NSDictionary *outputSettings = @{AVFormatIDKey : @(kAudioFormatMPEG4AAC),
                                     AVSampleRateKey : @(44100.0),
                                     AVNumberOfChannelsKey : @(1),
                                     AVEncoderBitRateKey : @(128000),
                                     AVChannelLayoutKey : [NSData data]};
    AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
                                                                     outputSettings:outputSettings];
    NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"reverseAudio.m4a"];
    NSURL *exportURL = [NSURL fileURLWithPath:exportPath];
    NSError *writerError = nil;
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:exportURL
                                                      fileType:AVFileTypeAppleM4A
                                                         error:&writerError];
    [writerInput setExpectsMediaDataInRealTime:NO];
    writer.shouldOptimizeForNetworkUse = NO;
    [writer addInput:writerInput];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];
    samples = [[NSMutableArray alloc] init];
    // collect every sample buffer; the array retains them until we are done
    CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
    while (sample != NULL) {
        CMSampleBufferCallForEachSample(sample, &sampler, NULL);
        CFRelease(sample);
        sample = [readerOutput copyNextSampleBuffer];
    }
    NSArray *reversedSamples = [[samples reverseObjectEnumerator] allObjects];
    for (id reversedSample in reversedSamples) {
        if (writerInput.readyForMoreMediaData) {
            [writerInput appendSampleBuffer:(__bridge CMSampleBufferRef)(reversedSample)];
        }
        else {
            [NSThread sleepForTimeInterval:0.05];
        }
    }
    [samples removeAllObjects];
    [writerInput markAsFinished];
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    dispatch_async(queue, ^{
        [writer finishWritingWithCompletionHandler:^{
            // writing is finished
            // reversed audio file in TemporaryDirectory in the Sandbox
        }];
    });
}
Known issues of the code:
There might be some problems with memory if the audio is long.
The audio file's duration is longer than the original's. (As a quick fix you might trim it down as with a usual AVAsset; see the sketch below.)
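As a rough sketch of that quick fix (assuming exportURL and inputURL from the method above; paths and preset are illustrative), you could trim the reversed file back to the original duration with AVAssetExportSession and its timeRange property:
// Sketch: trim the reversed m4a back to the original asset's duration.
AVAsset *reversedAsset = [AVAsset assetWithURL:exportURL];          // the reversed file
CMTime originalDuration = [AVAsset assetWithURL:inputURL].duration; // the original audio
AVAssetExportSession *trim =
    [AVAssetExportSession exportSessionWithAsset:reversedAsset
                                      presetName:AVAssetExportPresetAppleM4A];
trim.outputFileType = AVFileTypeAppleM4A;
trim.outputURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"reverseAudioTrimmed.m4a"]];
trim.timeRange = CMTimeRangeMake(kCMTimeZero, originalDuration);
[trim exportAsynchronouslyWithCompletionHandler:^{
    // check trim.status / trim.error; the trimmed file is at trim.outputURL
}];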
I am trying to grab the audio track from a video as raw PCM data. The plan is to pass this data into a float* table array in libpd. I have managed to get some sample data, but the number of samples reported is way too low. For example, for a 29-second clip I am getting a reported 3968 samples. What I am looking for is the amplitude at each sample. For a track with 29 seconds of audio I would expect an array of 1,278,900 samples (at 44.1 kHz).
Here is what I have put together based on other examples:
- (void)audioCapture {
    NSLog(@"capturing");
    NSError *error;
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:self.videoAsset error:&error];
    NSArray *audioTracks = [self.videoAsset tracksWithMediaType:AVMediaTypeAudio];
    AVAssetTrack *audioTrack = nil;
    if ([audioTracks count] > 0)
        audioTrack = [audioTracks objectAtIndex:0];
    // Decompress to Linear PCM with the asset reader
    NSDictionary *decompressionAudioSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                                [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                                nil];
    AVAssetReaderOutput *output = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
    [reader addOutput:output];
    [reader startReading];
    CMSampleBufferRef sample = [output copyNextSampleBuffer];
    //array for our sample amp values
    SInt16 *samples = NULL;
    //sample count;
    CMItemCount numSamplesInBuffer = 0;
    //sample buffer
    CMBlockBufferRef buffer;
    while (sample != NULL)
    {
        sample = [output copyNextSampleBuffer];
        if (sample == NULL)
            continue;
        buffer = CMSampleBufferGetDataBuffer(sample);
        size_t lengthAtOffset;
        size_t totalLength;
        char *data;
        if (CMBlockBufferGetDataPointer(buffer, 0, &lengthAtOffset, &totalLength, &data) != noErr)
        {
            NSLog(@"error!");
            break;
        }
        numSamplesInBuffer = CMSampleBufferGetNumSamples(sample);
        AudioBufferList audioBufferList;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
            sample,
            NULL,
            &audioBufferList,
            sizeof(audioBufferList),
            NULL,
            NULL,
            kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
            &buffer
        );
        for (int bufferCount = 0; bufferCount < audioBufferList.mNumberBuffers; bufferCount++) {
            samples = (SInt16 *)audioBufferList.mBuffers[bufferCount].mData;
            NSLog(@"idx %f", (double)samples[bufferCount]);
        }
    }
    NSLog(@"num samps in buf %ld", numSamplesInBuffer);
    //[PdBase copyArray:(float *)samples
    //toArrayNamed:@"DestTable" withOffset:0
    //count:pdTableSize];
}
I can successfully create a movie from a single still image. However I am also given an array of smaller images that I need to superimpose on top of the background image. I've tried just repeating the process of appending frames with the assetWriter, but I get errors because you can't write to the same frame you've already written to.
So, I assume you have to compose the entire pixel buffer for each frame completely before you write the frame. But how would you do that?
Here's my code that works for rendering one background image:
CGSize renderSize = CGSizeMake(320, 568);
NSUInteger fps = 30;
self.assetWriter = [[AVAssetWriter alloc] initWithURL:
                    [NSURL fileURLWithPath:videoOutputPath] fileType:AVFileTypeQuickTimeMovie
                                                error:&error];
NSParameterAssert(self.assetWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:renderSize.width], AVVideoWidthKey,
                               [NSNumber numberWithInt:renderSize.height], AVVideoHeightKey,
                               nil];
AVAssetWriterInput *videoWriterInput = [AVAssetWriterInput
                                        assetWriterInputWithMediaType:AVMediaTypeVideo
                                        outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                 assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                                 sourcePixelBufferAttributes:nil];
NSParameterAssert(videoWriterInput);
NSParameterAssert([self.assetWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[self.assetWriter addInput:videoWriterInput];
//Start a session:
[self.assetWriter startWriting];
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
NSInteger totalFrames = 90; //3 seconds
//process the bg image
int frameCount = 0;
UIImage *resizedImage = [UIImage resizeImage:self.bgImage size:renderSize];
buffer = [self pixelBufferFromCGImage:[resizedImage CGImage]];
BOOL append_ok = YES;
int j = 0;
while (append_ok && j < totalFrames) {
    if (adaptor.assetWriterInput.readyForMoreMediaData) {
        CMTime frameTime = CMTimeMake(frameCount, (int32_t)fps);
        append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
        if (!append_ok) {
            NSError *error = self.assetWriter.error;
            if (error != nil) {
                NSLog(@"Unresolved error %@, %@.", error, [error userInfo]);
            }
        }
    }
    else {
        printf("adaptor not ready %d, %d\n", frameCount, j);
        [NSThread sleepForTimeInterval:0.1];
    }
    j++;
    frameCount++;
}
if (!append_ok) {
    printf("error appending image %d, attempt %d, with error\n", frameCount, j);
}
//Finish the session:
[videoWriterInput markAsFinished];
[self.assetWriter finishWritingWithCompletionHandler:^() {
    self.assetWriter = nil;
}];
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {
    CGSize size = CGSizeMake(320, 568);
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          size.width,
                                          size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    if (status != kCVReturnSuccess) {
        NSLog(@"Failed to create pixel buffer");
    }
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8, 4*size.width, rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
Again, the question is how to create a pixel buffer for a background image and an array of N small images that will be layered on top of the bg image. The next step after this will be to also superimpose a small video.
You can add the pixel info from the image list over the pixel buffer.
This example code shows how to add BGRA data over an ARGB pixel buffer.
// Try to create a pixel buffer with the image mat
uint8_t *videobuffer = m_imageBGRA.data;
// From image buffer (BGRA) to pixel buffer
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate(NULL, m_width, m_height, kCVPixelFormatType_32ARGB, NULL, &pixelBuffer);
if ((pixelBuffer == NULL) || (status != kCVReturnSuccess))
{
    NSLog(@"Error CVPixelBufferPoolCreatePixelBuffer[pixelBuffer=%@][status=%d]", pixelBuffer, status);
    return;
}
else
{
    uint8_t *videobuffertmp = videobuffer;
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixelBuffer);
    // Add data for all the pixels in the image
    for( int row=0 ; row<m_width ; ++row )
    {
        for( int col=0 ; col<m_height ; ++col )
        {
            memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t)); // alpha
            memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t)); // red
            memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t)); // green
            memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t)); // blue
            // Move the buffer pointer to the next pixel
            pixelBufferData += 4*sizeof(uint8_t);
            videobuffertmp += 4*sizeof(uint8_t);
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
So, in this example, the data in an image (videobuffer) is added to the pixel buffer. Usually, the pixel data is stored in a single row of memory, so for each pixel we have 4 bytes (represented as 'uint8_t' in this case): the first for blue, then green, next red, and the last for the alpha value (remember that the original image is in BGRA format).
The pixel buffer works in the same way, so the data is also stored in a single row (ARGB in this case, as defined by the 'kCVPixelFormatType_32ARGB' parameter).
This piece of code reorders the pixel data to match with the pixelbuffer configuration:
memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t)); // alpha
memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t)); // red
memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t)); // green
memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t)); // blue
And once the pixel has been added, we can move forward to the next pixel with:
// Move the buffer pointer to the next pixel
pixelBufferData += 4*sizeof(uint8_t);
videobuffertmp += 4*sizeof(uint8_t);
Moving the pointers 4 bytes forward.
If your images are smaller, you can add them in a smaller region, or define an 'if' using the alpha value as target data. For example:
// Add data for all the pixels in the image
for( int row=0 ; row<m_width ; ++row )
{
    for( int col=0 ; col<m_height ; ++col )
    {
        if( videobuffertmp[3] > 10 ) // check alpha channel
        {
            memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t)); // alpha
            memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t)); // red
            memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t)); // green
            memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t)); // blue
        }
        // Move the buffer pointer to the next pixel
        pixelBufferData += 4*sizeof(uint8_t);
        videobuffertmp += 4*sizeof(uint8_t);
    }
}
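Alternatively, if you stay with the Core Graphics route from the question's pixelBufferFromCGImage:, you can compose everything before writing the frame by drawing the background and then each smaller image into the same bitmap context. This is only a sketch; overlayImages, overlayFrames and overlayCount are hypothetical inputs:
// Sketch: compose background + N overlay images into one pixel buffer.
// 'context', 'size' and 'image' are as in pixelBufferFromCGImage:;
// overlayImages/overlayFrames/overlayCount are hypothetical arrays of CGImageRefs/CGRects.
CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image); // background
for (size_t i = 0; i < overlayCount; i++) {
    CGContextDrawImage(context, overlayFrames[i], overlayImages[i]);           // small images on top
}
// ...then release the context and unlock the pixel buffer exactly as before.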
I've managed to get the raw data from a MPMediaItem using an AVAssetReader after combining the answers of a couple of SO questions like this one and this one and a nice blog post. I'm also able to play this raw data using FMOD, but then a problem arises.
It appears the resulting audio is of lower quality than the original track. Though the AVAssetTrack's formatDescriptions tell me there are 2 channels in the data, the result sounds mono. It also sounds a bit dampened (less crisp), as if the bitrate had been lowered.
Am I doing something wrong, or is the quality of the MPMediaItem data lowered on purpose by the AVAssetReader (because of piracy concerns)?
#define OUTPUTRATE 44100
Initializing the AVAssetReader and AVAssetReaderTrackOutput
// prepare AVAsset and AVAssetReaderOutput etc
MPMediaItem *mediaItem = ...;
NSURL *ipodAudioUrl = [mediaItem valueForProperty:MPMediaItemPropertyAssetURL];
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:ipodAudioUrl options:nil];
NSError *error = nil;
assetReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
if (error)
    NSLog(@"error creating reader: %@", [error debugDescription]);
AVAssetTrack *songTrack = [asset.tracks objectAtIndex:0];
NSArray *trackDescriptions = songTrack.formatDescriptions;
numChannels = 2;
for (unsigned int i = 0; i < [trackDescriptions count]; ++i)
{
    CMAudioFormatDescriptionRef item = (CMAudioFormatDescriptionRef)[trackDescriptions objectAtIndex:i];
    const AudioStreamBasicDescription *bobTheDesc = CMAudioFormatDescriptionGetStreamBasicDescription(item);
    if (bobTheDesc && bobTheDesc->mChannelsPerFrame == 1) {
        numChannels = 1;
    }
}
NSDictionary *outputSettingsDict = [[[NSDictionary alloc] initWithObjectsAndKeys:
                                     [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                     [NSNumber numberWithInt:OUTPUTRATE], AVSampleRateKey,
                                     [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                     [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                     [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                     [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                     nil] autorelease];
AVAssetReaderTrackOutput *output = [[[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:outputSettingsDict] autorelease];
[assetReader addOutput:output];
[assetReader startReading];
Initializing FMOD and the FMOD sound
// Init FMOD
FMOD_RESULT result = FMOD_OK;
unsigned int version = 0;
/*
Create a System object and initialize
*/
result = FMOD::System_Create(&system);
ERRCHECK(result);
result = system->getVersion(&version);
ERRCHECK(result);
if (version < FMOD_VERSION)
{
    fprintf(stderr, "You are using an old version of FMOD %08x. This program requires %08x\n", version, FMOD_VERSION);
    exit(-1);
}
result = system->setSoftwareFormat(OUTPUTRATE, FMOD_SOUND_FORMAT_PCM16, 1, 0, FMOD_DSP_RESAMPLER_LINEAR);
ERRCHECK(result);
result = system->init(32, FMOD_INIT_NORMAL | FMOD_INIT_ENABLE_PROFILE, NULL);
ERRCHECK(result);
// Init FMOD sound stream
CMTimeRange timeRange = [songTrack timeRange];
float durationInSeconds = timeRange.duration.value / timeRange.duration.timescale;
FMOD_CREATESOUNDEXINFO exinfo = {0};
memset(&exinfo, 0, sizeof(FMOD_CREATESOUNDEXINFO));
exinfo.cbsize = sizeof(FMOD_CREATESOUNDEXINFO); /* required. */
exinfo.decodebuffersize = OUTPUTRATE; /* Chunk size of stream update in samples. This will be the amount of data passed to the user callback. */
exinfo.length = OUTPUTRATE * numChannels * sizeof(signed short) * durationInSeconds; /* Length of PCM data in bytes of whole song (for Sound::getLength) */
exinfo.numchannels = numChannels; /* Number of channels in the sound. */
exinfo.defaultfrequency = OUTPUTRATE; /* Default playback rate of sound. */
exinfo.format = FMOD_SOUND_FORMAT_PCM16; /* Data format of sound. */
exinfo.pcmreadcallback = pcmreadcallback; /* User callback for reading. */
exinfo.pcmsetposcallback = pcmsetposcallback; /* User callback for seeking. */
result = system->createStream(NULL, FMOD_OPENUSER, &exinfo, &sound);
ERRCHECK(result);
result = system->playSound(FMOD_CHANNEL_FREE, sound, false, &channel);
ERRCHECK(result);
Reading from the AVAssetReaderTrackOutput into a ring buffer
AVAssetReaderTrackOutput *trackOutput = (AVAssetReaderTrackOutput *)[assetReader.outputs objectAtIndex:0];
CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];
if (sampleBufferRef)
{
    AudioBufferList audioBufferList;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBufferRef, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
    if (blockBuffer == NULL)
    {
        stopLoading = YES;
        continue;
    }
    if (&audioBufferList == NULL)
    {
        stopLoading = YES;
        continue;
    }
    if (audioBufferList.mNumberBuffers != 1)
        NSLog(@"numBuffers = %lu", audioBufferList.mNumberBuffers);
    for (int y = 0; y < audioBufferList.mNumberBuffers; y++)
    {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        SInt8 *frame = (SInt8 *)audioBuffer.mData;
        for (int i = 0; i < audioBufferList.mBuffers[y].mDataByteSize; i++)
        {
            ringBuffer->push_back(frame[i]);
        }
    }
    CMSampleBufferInvalidate(sampleBufferRef);
    CFRelease(sampleBufferRef);
}
I'm not familiar with FMOD, so I can't comment there. AVAssetReader doesn't do any "copy protection" stuff, so that's not a worry. (If you can get the AVAssetURL, the track is DRM free)
Since you are requesting interleaved buffers (AVLinearPCMIsNonInterleaved is NO), there will only be one buffer in the list, so I guess your last bit of code might be wrong.
Here's an example of some code that's working well for me. Btw, your for loop is probably not going to be very performant. You may consider using memcpy or something...
If you are not restricted to your existing ring buffer, try TPCircularBuffer (https://github.com/michaeltyson/TPCircularBuffer) it is amazing.
CMSampleBufferRef nextBuffer = NULL;
if (_reader.status == AVAssetReaderStatusReading)
{
    nextBuffer = [_readerOutput copyNextSampleBuffer];
}
if (nextBuffer)
{
    AudioBufferList abl;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        nextBuffer,
        NULL,
        &abl,
        sizeof(abl),
        NULL,
        NULL,
        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
        &blockBuffer);
    // the correct way to get the number of bytes in the buffer
    size_t size = CMSampleBufferGetTotalSampleSize(nextBuffer);
    memcpy(ringBufferTail, abl.mBuffers[0].mData, size);
    CFRelease(nextBuffer);
    CFRelease(blockBuffer);
}
Hope this helps
You're initializing FMOD to output mono audio. Try:
result = system->setSoftwareFormat(OUTPUTRATE, FMOD_SOUND_FORMAT_PCM16, 2, 0, FMOD_DSP_RESAMPLER_LINEAR);