I need to do audio streaming in an iOS app using Objective-C. I have used the AVFoundation framework to capture raw data from the microphone and send it to a server. However, the raw data I am receiving is corrupt. Below is my code.
Please suggest where I am going wrong.
session = [[AVCaptureSession alloc] init];
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM],AVFormatIDKey,
[NSNumber numberWithFloat:16000.0], AVSampleRateKey,
[NSNumber numberWithInt: 1],AVNumberOfChannelsKey,
[NSNumber numberWithInt:32], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsBigEndianKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
nil];
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
[session addInput:audioInput];
AVCaptureAudioDataOutput *audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
dispatch_queue_t audioQueue = dispatch_queue_create("AudioQueue", NULL);
[audioDataOutput setSampleBufferDelegate:self queue:audioQueue];
AVAssetWriterInput *_assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:recordSettings];
_assetWriterVideoInput.performsMultiPassEncodingIfSupported = YES;
if([session canAddOutput:audioDataOutput] ){
[session addOutput:audioDataOutput];
}
[session startRunning];
Capturing:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
AudioBufferList audioBufferList;
NSMutableData *data= [NSMutableData data];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
for( int y=0; y< audioBufferList.mNumberBuffers; y++ ){
AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
Float32 *frame = (Float32*)audioBuffer.mData;
[data appendBytes:frame length:audioBuffer.mDataByteSize];
NSString *base64Encoded = [data base64EncodedStringWithOptions:0];
NSLog(#"Encoded: %#", base64Encoded);
}
CFRelease(blockBuffer);
}
I posted a sample of the kind of code you need to make this work. Its approach is nearly the same as yours. You should be able to read it easily.
The app uses an Audio Unit to record and play back microphone input and speaker output, NSNetServices to connect two iOS devices on your network, and NSStreams to send an audio stream between the devices.
You can download the source code at:
https://drive.google.com/open?id=1tKgVl0X92SYvgpvbljRzilXNQ6iBcjqM
It requires the latest Xcode 9 beta release to compile, and the latest iOS 11 beta release to run it.
NOTE | A log entry for each method call and event is displayed in a textfield that encompasses the entire screen; there is no interactive interface—no buttons, etc. After installing the app on two iOS devices, simply launch it on both devices to automatically connect to your network and start streaming audio.
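For the sending side specifically, here is a minimal sketch (not taken from the linked project) of how the NSData built in your capture callback might be written to an NSOutputStream; the outputStream property, the canWrite flag, and how the stream gets opened are all assumptions:
// Assumes self.outputStream has already been opened, for example via
// [NSStream getStreamsToHostWithName:host port:port inputStream:nil outputStream:&stream],
// and that self.canWrite is set from NSStreamEventHasSpaceAvailable in the stream delegate.
- (void)sendAudioData:(NSData *)data
{
    if (!self.canWrite || data.length == 0) {
        return; // in real code, queue the chunk until the stream has space
    }
    const uint8_t *bytes = (const uint8_t *)data.bytes;
    NSUInteger totalWritten = 0;
    while (totalWritten < data.length) {
        NSInteger written = [self.outputStream write:bytes + totalWritten
                                           maxLength:data.length - totalWritten];
        if (written <= 0) {
            break; // stream error or closed; handle appropriately
        }
        totalWritten += (NSUInteger)written;
    }
}
In the didOutputSampleBuffer: callback you would then call [self sendAudioData:data] with the bytes you currently Base64-encode and log.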
I tried many other blogs and Stack Overflow posts, but I didn't find a solution for this. I am able to create a custom camera with a preview. I need video with a custom frame size, which is why I am using AVAssetWriter, but I am unable to save the recorded video into the Documents directory. I tried like this:
-(void) initializeCameraConfigurations {
if(!captureSession) {
captureSession = [[AVCaptureSession alloc] init];
[captureSession beginConfiguration];
captureSession.sessionPreset = AVCaptureSessionPresetHigh;
self.view.backgroundColor = UIColor.blackColor;
CGRect bounds = self.view.bounds;
captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
captureVideoPreviewLayer.backgroundColor = [UIColor clearColor].CGColor;
captureVideoPreviewLayer.bounds = self.view.frame;
captureVideoPreviewLayer.connection.videoOrientation = AVCaptureVideoOrientationPortrait;
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
[self.view.layer addSublayer:captureVideoPreviewLayer];
[self.view bringSubviewToFront:self.controlsBgView];
}
// Add input to session
NSError *err;
videoCaptureDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&err];
if([captureSession canAddInput:videoCaptureDeviceInput]) {
[captureSession addInput:videoCaptureDeviceInput];
}
docPathUrl = [[NSURL alloc] initFileURLWithPath:[self getDocumentsUrl]];
assetWriter = [AVAssetWriter assetWriterWithURL:docPathUrl fileType:AVFileTypeQuickTimeMovie error:&err];
NSParameterAssert(assetWriter);
//assetWriter.movieFragmentInterval = CMTimeMakeWithSeconds(1.0, 1000);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:300], AVVideoWidthKey,
[NSNumber numberWithInt:300], AVVideoHeightKey,
nil];
writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
writerInput.expectsMediaDataInRealTime = YES;
writerInput.transform = CGAffineTransformMakeRotation(M_PI);
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt:300], kCVPixelBufferWidthKey,
[NSNumber numberWithInt:300], kCVPixelBufferHeightKey,
nil];
assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
if([assetWriter canAddInput:writerInput]) {
[assetWriter addInput:writerInput];
}
// Set video stabilization mode to preview layer
AVCaptureVideoStabilizationMode stablilizationMode = AVCaptureVideoStabilizationModeCinematic;
if([videoCaptureDevice.activeFormat isVideoStabilizationModeSupported:stablilizationMode]) {
[captureVideoPreviewLayer.connection setPreferredVideoStabilizationMode:stablilizationMode];
}
// image output
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[captureSession addOutput:stillImageOutput];
[captureSession commitConfiguration];
if (![captureVideoPreviewLayer.connection isEnabled]) {
[captureVideoPreviewLayer.connection setEnabled:YES];
}
[captureSession startRunning];
}
-(IBAction)startStopVideoRecording:(id)sender {
if(captureSession) {
if(isVideoRecording) {
[writerInput markAsFinished];
[assetWriter finishWritingWithCompletionHandler:^{
NSLog(#"Finished writing...checking completion status...");
if (assetWriter.status != AVAssetWriterStatusFailed && assetWriter.status == AVAssetWriterStatusCompleted)
{
// Video saved
} else
{
NSLog(#"#123 Video writing failed: %#", assetWriter.error);
}
}];
} else {
[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:kCMTimeZero];
isVideoRecording = YES;
}
}
}
-(NSString *) getDocumentsUrl {
NSString *docPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
docPath = [[docPath stringByAppendingPathComponent:@"Movie"] stringByAppendingString:@".mov"];
if([[NSFileManager defaultManager] fileExistsAtPath:docPath]) {
NSError *err;
[[NSFileManager defaultManager] removeItemAtPath:docPath error:&err];
}
NSLog(#"Movie path : %#",docPath);
return docPath;
}
@end
Please correct me if anything is wrong. Thank you in advance.
You don't say what actually goes wrong, but two things look wrong with your code:
docPath = [[docPath stringByAppendingPathComponent:@"Movie"] stringByAppendingString:@".mov"];
looks like it creates an undesired path like this: @"/path/Movie/.mov", when you want this:
docPath = [docPath stringByAppendingPathComponent:@"Movie.mov"];
And your timeline is wrong. Your asset writer starts at time 0, but the sample buffers start at CMSampleBufferGetPresentationTimeStamp(sampleBuffer) > 0, so do this instead:
-(void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
if(firstSampleBuffer) {
// start the writer session at the first buffer's presentation timestamp
[assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
firstSampleBuffer = NO;
}
[writerInput appendSampleBuffer:sampleBuffer];
}
Conceptually, you have two main functional areas: one that generates video frames (the AVCaptureSession and everything attached to it), and another that writes these frames to a file (in your case the AVAssetWriter with its attached inputs).
The problem with your code is that there is no connection between these two: no video frames / images coming out of the capture session are passed to the asset writer inputs.
Furthermore, the AVCaptureStillImageOutput method -captureStillImageAsynchronouslyFromConnection:completionHandler: is never called, so the capture session actually produces no frames.
So, as a minimum, implement something like this:
-(IBAction)captureStillImageAndAppend:(id)sender
{
[stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageOutput.connections.firstObject completionHandler:
^(CMSampleBufferRef imageDataSampleBuffer, NSError* error)
{
// check error, omitted here
if (CMTIME_IS_INVALID( startTime)) // startTime is an ivar
[assetWriter startSessionAtSourceTime:(startTime = CMSampleBufferGetPresentationTimeStamp( imageDataSampleBuffer))];
[writerInput appendSampleBuffer:imageDataSampleBuffer];
}];
}
Remove the AVAssetWriterInputPixelBufferAdaptor, it's not used.
But there are issues with AVCaptureStillImageOutput:
it's only intended to produce still images, not videos
it must be configured to produce uncompressed sample buffers if the asset writer input is configured to compress the appended sample buffers (stillImageOutput.outputSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };)
it's deprecated under iOS
If you actually want to produce a video, as opposed to a sequence of still images, then instead of the AVCaptureStillImageOutput add an AVCaptureVideoDataOutput to the capture session. It needs a delegate and a serial dispatch queue to output the sample buffers. The delegate has to implement something like this:
-(void)captureOutput:(AVCaptureOutput*)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
if (CMTIME_IS_INVALID( startTime)) // startTime is an ivar
[assetWriter startSessionAtSourceTime:(startTime = CMSampleBufferGetPresentationTimeStamp( sampleBuffer))];
[writerInput appendSampleBuffer:sampleBuffer];
}
Note that
you will want to make sure that the AVCaptureVideoDataOutput only outputs frames when you're actually recording; add/remove it from the capture session or enable/disable its connection in the startStopVideoRecording action
reset the startTime to kCMTimeInvalid before starting another recording (see the sketch below)
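A rough sketch of how the startStopVideoRecording action could implement both notes; videoDataOutput, startTime and isVideoRecording are assumed ivars and error handling is omitted, so treat this as an outline rather than a drop-in implementation:
-(IBAction)startStopVideoRecording:(id)sender {
    if (isVideoRecording) {
        // Stop: disable the connection so no further frames are delivered, then finish writing.
        [[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:NO];
        isVideoRecording = NO;
        [writerInput markAsFinished];
        [assetWriter finishWritingWithCompletionHandler:^{
            NSLog(@"writer status: %ld, error: %@", (long)assetWriter.status, assetWriter.error);
        }];
    } else {
        // Start: reset the timeline marker; the capture delegate starts the writer session
        // at the first sample buffer's presentation timestamp.
        // Note: an AVAssetWriter can only be used once, so create a fresh writer
        // (and writer input) here for each new recording.
        startTime = kCMTimeInvalid;
        [assetWriter startWriting];
        [[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];
        isVideoRecording = YES;
    }
}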
I want to play audio recordings from iOS on WP8 and vice versa.
On iOS I'm using AVAudioRecorder for that purpose with the following configuration:
NSString *tempPath = NSTemporaryDirectory();
NSURL *soundFileURL = [NSURL fileURLWithPath:[tempPath stringByAppendingPathComponent:@"sound.aac"]];
NSDictionary *recordSettings = [NSDictionary
dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatMPEG4AAC],
AVFormatIDKey,
[NSNumber numberWithInt:AVAudioQualityMin],
AVEncoderAudioQualityKey,
[NSNumber numberWithInt:8000],
AVEncoderBitRateKey,
[NSNumber numberWithInt: 1],
AVNumberOfChannelsKey,
[NSNumber numberWithFloat:8000.0],
AVSampleRateKey,
[NSNumber numberWithInt:16],
AVEncoderBitDepthHintKey,
nil];
NSError *error = nil;
_audioRecorder = [[AVAudioRecorder alloc]
initWithURL:soundFileURL
settings:recordSettings
error:&error];
_audioRecorder.delegate = self;
The files "sound.aac" contains the recording in the AAC container and playing recorded audio sample works well on iOS.
I couldn't play "sound.aac" on WP8 after transferring the file to the WP8 device. According to the following link: http://msdn.microsoft.com/en-us/library/windowsphone/develop/ff462087(v=vs.105).aspx#BKMK_AudioSupport WP8 should be able to play the file.
The code on WP8 I've used is:
try
{
this.mediaPlayer = new MediaElement();
mediaPlayer.MediaEnded += new RoutedEventHandler(mediaPlayer_MediaEnded);
IsolatedStorageFile myStore = IsolatedStorageFile.GetUserStoreForApplication();
IsolatedStorageFileStream mediaStream = myStore.OpenFile("sound.aac", FileMode.Open, FileAccess.Read);
this.mediaPlayer.SetSource(mediaStream);
this.messageTextBlock.Text = "Playing the message...";
mediaPlayer.Play();
}
catch (Exception exception)
{
MessageBox.Show("Error playing audio!");
Debug.WriteLine(exception);
return;
}
After this, "sound.aac" plays endlessly with no sound coming from the speaker. The message "Playing the message..." is shown, there is no Exception thrown, and mediaPlayer_MediaEnded is never called. All I can do is stop the playback.
I don't know how to get it working.
I'm not sure what I should put in the method
- (void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
in order to write the frames to a video. Can anybody share the body of their code for this method, where the result is recording the frames to a movie?
I thought I had my assetWriter and videoInput set up correctly, but all I'm getting is a movie with 1 frame used repeatedly.
Check the Apple sample code RosyWriter. It's a very good example of what you are looking for.
You can successfully record a video and grab frames at the same time using this method:
AVCaptureSession *captureSession = [AVCaptureSession new];
AVCaptureDevice *captureDevice = [AVCaptureDevice new];
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput new];
AVCaptureVideoDataOutput *output = [AVCaptureVideoDataOutput new];
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:640], AVVideoWidthKey, [NSNumber numberWithInt:480], AVVideoHeightKey, AVVideoCodecH264, AVVideoCodecKey, nil];
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
/* AVCaptureVideoDataOutput */
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor =
[[AVAssetWriterInputPixelBufferAdaptor alloc]
initWithAssetWriterInput:assetWriterInput
sourcePixelBufferAttributes:
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
kCVPixelBufferPixelFormatTypeKey,
nil]];
/* Asset writer with MPEG4 format*/
NSError *writerError = nil; // you need to check error conditions; this example is too lazy
AVAssetWriter *assetWriterMyData = [[AVAssetWriter alloc]
initWithURL:URLFromSomwhere
fileType:AVFileTypeMPEG4
error:&writerError];
[assetWriterMyData addInput:assetWriterInput];
assetWriterInput.expectsMediaDataInRealTime = YES;
/* Start writing data */
[assetWriterMyData startWriting];
[assetWriterMyData startSessionAtSourceTime:kCMTimeZero];
[captureSession startRunning];
- (void) captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// a very dense way to keep track of the time at which this frame
// occurs relative to the output stream, but it's just an example!
static int64_t frameNumber = 0;
if(assetWriterInput.readyForMoreMediaData)
[pixelBufferAdaptor appendPixelBuffer:imageBuffer
withPresentationTime:CMTimeMake(frameNumber, 25)];
frameNumber++;
}
/* To stop recording, stop capture session and finish writing data*/
[captureSession stopRunning];
[assetWriterMyData finishWriting];
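Note that -finishWriting has since been deprecated; on newer SDKs, that last step (using the same variables as above) would look more like this sketch:
[captureSession stopRunning];
[assetWriterMyData finishWritingWithCompletionHandler:^{
    NSLog(@"writer finished with status %ld", (long)assetWriterMyData.status);
}];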
I am currently working on an application as part of my Bachelor in Computer Science. The application will correlate data from the iPhone hardware (accelerometer, gps) and music that is being played.
The project is still in its infancy, having worked on it for only 2 months.
Where I am right now, and where I need help, is reading PCM samples from songs in the iTunes library and playing them back using an audio unit.
Currently, the implementation I would like to get working does the following: it chooses a random song from iTunes and reads samples from it when required, storing them in a buffer, let's call it sampleBuffer. Later on, in the consumer model, the audio unit (which has a mixer and a remoteIO output) has a callback where I simply copy the required number of samples from sampleBuffer into the buffer specified in the callback. What I then hear through the speakers is not quite what I expect; I can recognize that it is playing the song, however it seems to be incorrectly decoded and there is a lot of noise! I attached an image which shows the first ~half a second (24576 samples @ 44.1kHz), and this does not resemble normal-looking output.
Before I get into the listing: I have checked that the file is not corrupted; similarly, I have written test cases for the buffer (so I know the buffer does not alter the samples). Although this might not be the best way to do it (some would argue to go the audio queue route), I want to perform various manipulations on the samples, as well as change the song before it is finished, rearrange what song is played, etc. Furthermore, maybe there are some incorrect settings in the audio unit; however, the graph that displays the samples (which shows the samples are decoded incorrectly) is taken straight from the buffer, so I am only looking to solve why the reading from disk and decoding does not work correctly. Right now I simply want to get a play-through working.
I can't post images because I'm new to Stack Overflow, so here's the link to the image: http://i.stack.imgur.com/RHjlv.jpg
Listing:
This is where I set up the audioReadSettings that will be used for the AVAssetReaderAudioMixOutput:
// Set the read settings
audioReadSettings = [[NSMutableDictionary alloc] init];
[audioReadSettings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM]
forKey:AVFormatIDKey];
[audioReadSettings setValue:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
[audioReadSettings setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];
[audioReadSettings setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];
[audioReadSettings setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsNonInterleaved];
[audioReadSettings setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
Now, the following code listing is a method that receives an NSString with the persistent ID of the song:
-(BOOL)setNextSongID:(NSString*)persistand_id {
assert(persistand_id != nil);
MPMediaItem *song = [self getMediaItemForPersistantID:persistand_id];
NSURL *assetUrl = [song valueForProperty:MPMediaItemPropertyAssetURL];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetUrl
options:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
forKey:AVURLAssetPreferPreciseDurationAndTimingKey]];
NSError *assetError = nil;
assetReader = [[AVAssetReader assetReaderWithAsset:songAsset error:&assetError] retain];
if (assetError) {
NSLog(#"error: %#", assetError);
return NO;
}
CMTimeRange timeRange = CMTimeRangeMake(kCMTimeZero, songAsset.duration);
[assetReader setTimeRange:timeRange];
track = [[songAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
assetReaderOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:[NSArray arrayWithObject:track]
audioSettings:audioReadSettings];
if (![assetReader canAddOutput:assetReaderOutput]) {
NSLog(#"cant add reader output... die!");
return NO;
}
[assetReader addOutput:assetReaderOutput];
[assetReader startReading];
// just getting some basic information about the track to print
NSArray *formatDesc = ((AVAssetTrack*)[[assetReaderOutput audioTracks] objectAtIndex:0]).formatDescriptions;
for (unsigned int i = 0; i < [formatDesc count]; ++i) {
CMAudioFormatDescriptionRef item = (CMAudioFormatDescriptionRef)[formatDesc objectAtIndex:i];
const CAStreamBasicDescription *asDesc = (CAStreamBasicDescription*)CMAudioFormatDescriptionGetStreamBasicDescription(item);
if (asDesc) {
// get data
numChannels = asDesc->mChannelsPerFrame;
sampleRate = asDesc->mSampleRate;
asDesc->Print();
}
}
[self copyEnoughSamplesToBufferForLength:24000];
return YES;
}
The following presents the function -(void)copyEnoughSamplesToBufferForLength:
-(void)copyEnoughSamplesToBufferForLength:(UInt32)samples_count {
[w_lock lock];
int stillToCopy = 0;
if (sampleBuffer->numSamples() < samples_count) {
stillToCopy = samples_count;
}
NSAutoreleasePool *apool = [[NSAutoreleasePool alloc] init];
CMSampleBufferRef sampleBufferRef;
SInt16 *dataBuffer = (SInt16*)malloc(8192 * sizeof(SInt16));
int a = 0;
while (stillToCopy > 0) {
sampleBufferRef = [assetReaderOutput copyNextSampleBuffer];
if (!sampleBufferRef) {
// end of song or no more samples
return;
}
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBufferRef);
CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples(sampleBufferRef);
AudioBufferList audioBufferList;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBufferRef,
NULL,
&audioBufferList,
sizeof(audioBufferList),
NULL,
NULL,
0,
&blockBuffer);
int data_length = floorf(numSamplesInBuffer * 1.0f);
int j = 0;
for (int bufferCount=0; bufferCount < audioBufferList.mNumberBuffers; bufferCount++) {
SInt16* samples = (SInt16 *)audioBufferList.mBuffers[bufferCount].mData;
for (int i=0; i < numSamplesInBuffer; i++) {
dataBuffer[j] = samples[i];
j++;
}
}
CFRelease(sampleBufferRef);
sampleBuffer->putSamples(dataBuffer, j);
stillToCopy = stillToCopy - data_length;
}
free(dataBuffer);
[w_lock unlock];
[apool release];
}
Now the sampleBuffer will have incorrectly decoded samples. Can anyone help me understand why this is so? It happens for different files in my iTunes library (mp3, aac, wav, etc.).
Any help would be greatly appreciated; furthermore, if you need any other listing of my code, or perhaps what the output sounds like, I will attach it on request. I have been sitting on this for the past week trying to debug it and have found no help online -- everyone seems to be doing it my way, yet it seems that only I have this issue.
Thanks for any help at all!
Peter
Currently, I am also working on a project which involves extracting audio samples from the iTunes library into an Audio Unit.
The Audio Unit render callback is included for your reference. The input format is set as SInt16StereoStreamFormat.
I have made use of Michael Tyson's circular buffer implementation, TPCircularBuffer, as the buffer storage. Very easy to use and understand! Thanks Michael!
- (void) loadBuffer:(NSURL *)assetURL_
{
if (nil != self.iPodAssetReader) {
[iTunesOperationQueue cancelAllOperations];
[self cleanUpBuffer];
}
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
[NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
nil];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL_ options:nil];
if (asset == nil) {
NSLog(#"asset is not defined!");
return;
}
NSLog(#"Total Asset Duration: %f", CMTimeGetSeconds(asset.duration));
NSError *assetError = nil;
self.iPodAssetReader = [AVAssetReader assetReaderWithAsset:asset error:&assetError];
if (assetError) {
NSLog (#"error: %#", assetError);
return;
}
AVAssetReaderOutput *readerOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:asset.tracks audioSettings:outputSettings];
if (! [iPodAssetReader canAddOutput: readerOutput]) {
NSLog (#"can't add reader output... die!");
return;
}
// add output reader to reader
[iPodAssetReader addOutput: readerOutput];
if (! [iPodAssetReader startReading]) {
NSLog(#"Unable to start reading!");
return;
}
// Init circular buffer
TPCircularBufferInit(&playbackState.circularBuffer, kTotalBufferSize);
__block NSBlockOperation * feediPodBufferOperation = [NSBlockOperation blockOperationWithBlock:^{
while (![feediPodBufferOperation isCancelled] && iPodAssetReader.status != AVAssetReaderStatusCompleted) {
if (iPodAssetReader.status == AVAssetReaderStatusReading) {
// Check if the available buffer space is enough to hold at least one cycle of the sample data
if (kTotalBufferSize - playbackState.circularBuffer.fillCount >= 32768) {
CMSampleBufferRef nextBuffer = [readerOutput copyNextSampleBuffer];
if (nextBuffer) {
AudioBufferList abl;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(nextBuffer, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
UInt64 size = CMSampleBufferGetTotalSampleSize(nextBuffer);
int bytesCopied = TPCircularBufferProduceBytes(&playbackState.circularBuffer, abl.mBuffers[0].mData, size);
if (!playbackState.bufferIsReady && bytesCopied > 0) {
playbackState.bufferIsReady = YES;
}
CFRelease(nextBuffer);
CFRelease(blockBuffer);
}
else {
break;
}
}
}
}
NSLog(#"iPod Buffer Reading Finished");
}];
[iTunesOperationQueue addOperation:feediPodBufferOperation];
}
static OSStatus ipodRenderCallback (
void *inRefCon, // A pointer to a struct containing the complete audio data
// to play, as well as state information such as the
// first sample to play on this invocation of the callback.
AudioUnitRenderActionFlags *ioActionFlags, // Unused here. When generating audio, use ioActionFlags to indicate silence
// between sounds; for silence, also memset the ioData buffers to 0.
const AudioTimeStamp *inTimeStamp, // Unused here.
UInt32 inBusNumber, // The mixer unit input bus that is requesting some new
// frames of audio data to play.
UInt32 inNumberFrames, // The number of frames of audio to provide to the buffer(s)
// pointed to by the ioData parameter.
AudioBufferList *ioData // On output, the audio data to play. The callback's primary
// responsibility is to fill the buffer(s) in the
// AudioBufferList.
)
{
Audio* audioObject = (Audio*)inRefCon;
AudioSampleType *outSample = (AudioSampleType *)ioData->mBuffers[0].mData;
// Zero-out all the output samples first
memset(outSample, 0, inNumberFrames * kUnitSize * 2);
if ( audioObject.playingiPod && audioObject.bufferIsReady) {
// Pull audio from circular buffer
int32_t availableBytes;
AudioSampleType *bufferTail = TPCircularBufferTail(&audioObject.circularBuffer, &availableBytes);
memcpy(outSample, bufferTail, MIN(availableBytes, inNumberFrames * kUnitSize * 2) );
TPCircularBufferConsume(&audioObject.circularBuffer, MIN(availableBytes, inNumberFrames * kUnitSize * 2) );
audioObject.currentSampleNum += MIN(availableBytes / (kUnitSize * 2), inNumberFrames);
if (availableBytes <= inNumberFrames * kUnitSize * 2) {
// Buffer is running out or playback is finished
audioObject.bufferIsReady = NO;
audioObject.playingiPod = NO;
audioObject.currentSampleNum = 0;
if ([[audioObject delegate] respondsToSelector:@selector(playbackDidFinish)]) {
[[audioObject delegate] performSelector:@selector(playbackDidFinish)];
}
}
}
return noErr;
}
- (void) setupSInt16StereoStreamFormat {
// The AudioUnitSampleType data type is the recommended type for sample data in audio
// units. This obtains the byte size of the type for use in filling in the ASBD.
size_t bytesPerSample = sizeof (AudioSampleType);
// Fill the application audio format struct's fields to define a linear PCM,
// stereo, noninterleaved stream at the hardware sample rate.
SInt16StereoStreamFormat.mFormatID = kAudioFormatLinearPCM;
SInt16StereoStreamFormat.mFormatFlags = kAudioFormatFlagsCanonical;
SInt16StereoStreamFormat.mBytesPerPacket = 2 * bytesPerSample; // *** kAudioFormatFlagsCanonical <- implicit interleaved data => (left sample + right sample) per Packet
SInt16StereoStreamFormat.mFramesPerPacket = 1;
SInt16StereoStreamFormat.mBytesPerFrame = SInt16StereoStreamFormat.mBytesPerPacket * SInt16StereoStreamFormat.mFramesPerPacket;
SInt16StereoStreamFormat.mChannelsPerFrame = 2; // 2 indicates stereo
SInt16StereoStreamFormat.mBitsPerChannel = 8 * bytesPerSample;
SInt16StereoStreamFormat.mSampleRate = graphSampleRate;
NSLog (#"The stereo stream format for the \"iPod\" mixer input bus:");
[self printASBD: SInt16StereoStreamFormat];
}
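For completeness, here is a sketch of how this stream format can then be applied to the mixer's iPod input bus with AudioUnitSetProperty; the mixerUnit variable and the bus number are assumptions based on a MixerHost-style audio graph:
UInt32 iPodBusNumber = 0; // assumed bus number for the iPod input
OSStatus result = AudioUnitSetProperty(mixerUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Input,
                                       iPodBusNumber,
                                       &SInt16StereoStreamFormat,
                                       sizeof(SInt16StereoStreamFormat));
if (result != noErr) {
    NSLog(@"Could not set stream format on mixer input bus (error %d)", (int)result);
}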
I guess it is kind of late, but you could try this library:
https://bitbucket.org/artgillespie/tslibraryimport
After using this to save the audio into a file, you could process the data with render callbacks from MixerHost.
If I were you, I would either use kAudioUnitSubType_AudioFilePlayer to play the file and access its samples with the unit's render callback.
Or
Use ExtAudioFileRef to extract the samples straight to a buffer.
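A sketch of the ExtAudioFileRef route, assuming the URL points to a file Core Audio can open directly (for iPod-library items you may first need to export them to a local file, e.g. with the TSLibraryImport approach mentioned above); the function name, buffer handling, and client format here are illustrative:
#import <AudioToolbox/AudioToolbox.h>

// Reads up to *ioFrameCount frames of 16-bit interleaved stereo PCM into outSamples.
static OSStatus ReadPCMSamples(CFURLRef fileURL, SInt16 *outSamples, UInt32 *ioFrameCount)
{
    ExtAudioFileRef audioFile = NULL;
    OSStatus err = ExtAudioFileOpenURL(fileURL, &audioFile);
    if (err != noErr) return err;

    // Ask Core Audio to decode/convert to 44.1 kHz, 16-bit, interleaved stereo on the fly.
    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mSampleRate       = 44100.0;
    clientFormat.mFormatID         = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    clientFormat.mChannelsPerFrame = 2;
    clientFormat.mBitsPerChannel   = 16;
    clientFormat.mFramesPerPacket  = 1;
    clientFormat.mBytesPerFrame    = 4; // 2 channels * 2 bytes
    clientFormat.mBytesPerPacket   = 4;
    err = ExtAudioFileSetProperty(audioFile, kExtAudioFileProperty_ClientDataFormat,
                                  sizeof(clientFormat), &clientFormat);

    if (err == noErr) {
        AudioBufferList bufferList;
        bufferList.mNumberBuffers              = 1;
        bufferList.mBuffers[0].mNumberChannels = 2;
        bufferList.mBuffers[0].mDataByteSize   = *ioFrameCount * clientFormat.mBytesPerFrame;
        bufferList.mBuffers[0].mData           = outSamples;
        err = ExtAudioFileRead(audioFile, ioFrameCount, &bufferList); // *ioFrameCount now holds frames read
    }

    ExtAudioFileDispose(audioFile);
    return err;
}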
I'm having trouble getting audio recorded into a video using AVAssetWriter on the iPhone. I can record video from the camera with no problem, but when I try to add audio I get nothing. Also, the durations displayed for the video in the photo album app are really out of whack: a 4-second video will show 15:54:01 or something like that, and every video made afterwards shows a larger number, even if the video is shorter. I've been trying to follow what I've seen in other questions here, but no luck.
Here's how I'm setting up my audio input:
captureSession = [[AVCaptureSession alloc] init];
//set up audio input
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType: AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error ];
audioOutput = [[AVCaptureAudioDataOutput alloc] init];
if([captureSession canAddOutput:audioOutput])
{
[captureSession addOutput:audioOutput];
}
[audioOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
Here's how I'm setting up the AVAssetWriter:
videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:MOVIE_PATH] fileType:AVFileTypeQuickTimeMovie error:&error];
AudioChannelLayout acl;
bzero( &acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[ NSNumber numberWithInt: kAudioFormatMPEG4AAC], AVFormatIDKey,
[ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,
[ NSNumber numberWithFloat: 44100.0 ], AVSampleRateKey,
[ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey,
[ NSNumber numberWithInt: 64000 ], AVEncoderBitRateKey,
nil];
and here is how I'm writing the audio sample buffers, using the CMSampleBufferRef sent by the audioOutput callback:
- (void) captureAudio:(CMSampleBufferRef)sampleBuffer
{
if([audioWriterInput isReadyForMoreMediaData]){
[audioWriterInput appendSampleBuffer:sampleBuffer];
}
}
Would really appreciate any help, I've been stuck on this all day.
I don't see you calling [videoWriter startSessionAtSourceTime:], and you're also discarding audio sample buffers when audioWriterInput isn't ready (which can't be what you want).
So your problem lies in the PTSs (presentation timestamps) of what you're writing out. Either you tell the output that your timeline starts at a given time t with startSessionAtSourceTime:, or you modify the buffers you append so that they have zero-based presentation timestamps.
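As a minimal sketch of the first option, reusing your videoWriter, audioOutput and audioWriterInput names and assuming a BOOL sessionStarted ivar (an assumption, not in your code):
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (!sessionStarted) {
        // Anchor the writer's timeline at the first buffer actually received,
        // instead of at kCMTimeZero; otherwise the reported duration is wildly wrong.
        [videoWriter startSessionAtSourceTime:pts];
        sessionStarted = YES;
    }
    if (captureOutput == audioOutput && audioWriterInput.readyForMoreMediaData) {
        [audioWriterInput appendSampleBuffer:sampleBuffer];
    }
}
If you would rather keep a zero-based timeline, the alternative is to re-stamp each buffer with CMSampleBufferCreateCopyWithNewTiming before appending it.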