Cannot play audio in didReceiveData method of Multipeer Connectivity - iOS

My aim is to stream voice data to multiple devices using Multipeer Connectivity.
I am using an AVCaptureSession to access voice data from the microphone via an AVCaptureDevice of type AVMediaTypeAudio.
In a custom AVCaptureAudioDataOutput delegate I receive that audio data and want to stream it to all connected peers.
// Sending data using Multipeer Connectivity
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // NSLog(@"---A U D I O :%@", sampleBuffer);
    AudioBufferList audioBufferList;
    NSMutableData *data = [NSMutableData data];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        Float32 *frame = (Float32 *)audioBuffer.mData;
        [data appendBytes:frame length:audioBuffer.mDataByteSize];
    }
    CFRelease(blockBuffer);

    [_session sendData:data toPeers:_session.connectedPeers withMode:MCSessionSendDataReliable error:nil];
}
In the receiver application I get all the NSData in the didReceiveData delegate method of MCSessionDelegate.
But I have not found a way to play that raw NSData in the receiver application.
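One way to play those raw bytes on the receiving side is to wrap each chunk in an AVAudioPCMBuffer and schedule it on an AVAudioPlayerNode. The sketch below is only an illustration of that idea, not a verified drop-in: it assumes AVFoundation is imported, that the data is uncompressed Float32 mono at 44.1 kHz (adjust the AVAudioFormat to whatever the capture side really delivers), and the _playerEngine, _playerNode and _playbackFormat ivars are introduced just for this example.
- (void)session:(MCSession *)session didReceiveData:(NSData *)data fromPeer:(MCPeerID *)peerID
{
    if (!_playerEngine) {
        _playerEngine   = [[AVAudioEngine alloc] init];
        _playerNode     = [[AVAudioPlayerNode alloc] init];
        _playbackFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                           sampleRate:44100.0
                                                             channels:1
                                                          interleaved:NO];
        [_playerEngine attachNode:_playerNode];
        [_playerEngine connect:_playerNode to:_playerEngine.mainMixerNode format:_playbackFormat];
        [_playerEngine startAndReturnError:nil];
        [_playerNode play];
    }

    AVAudioFrameCount frames = (AVAudioFrameCount)(data.length / sizeof(Float32));
    if (frames == 0) return;

    AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:_playbackFormat
                                                             frameCapacity:frames];
    buffer.frameLength = frames;
    memcpy(buffer.floatChannelData[0], data.bytes, frames * sizeof(Float32));

    // Queue the chunk; the player node plays scheduled buffers back to back.
    [_playerNode scheduleBuffer:buffer completionHandler:nil];
}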

Related

Play audio on iOS from a memory data stream

I am porting an audio library to iOS that plays audio streams fed from callbacks. The user provides a callback returning raw PCM data, and I need that data to be played. Moreover, the library must be able to play multiple streams at once.
I figured I would need to use AVFoundation, but it seems that AVAudioPlayer does not support streamed audio buffers, and all the streaming documentation I could find uses data coming directly from the network. What is the API I should use here?
Thanks in advance!
By the way, I am not using the Apple libraries through Swift or Objective-C. However, I assume everything is still exposed, so an example in Swift would be greatly appreciated anyway!
You need to initialise:
The audio session (to use the input and output audio units):
-(SInt32)audioSessionInitialization:(SInt32)preferred_sample_rate {
    // - - - - - - Audio Session initialization
    NSError *audioSessionError = nil;
    session = [AVAudioSession sharedInstance];

    // deactivate the AVAudioSession while configuring it
    [session setActive:NO error:&audioSessionError];

    // set category (PlayAndRecord to use both the input and output session AudioUnits)
    [session setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker error:&audioSessionError];

    double preferredSampleRate = (double)preferred_sample_rate; // e.g. 44100
    [session setPreferredSampleRate:preferredSampleRate error:&audioSessionError];

    // activate the AVAudioSession
    [session setActive:YES error:&audioSessionError];

    // Configure notification for device output change (speakers/headphones)
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(routeChange:)
                                                 name:AVAudioSessionRouteChangeNotification
                                               object:nil];

    // - - - - - - Create audio engine
    [self audioEngineInitialization];

    return (SInt32)[session sampleRate];
}
The Audio Engine
-(void)audioEngineInitialization {
    engine = [[AVAudioEngine alloc] init];
    inputNode = [engine inputNode];
    outputNode = [engine outputNode];
    [engine connect:inputNode to:outputNode format:[inputNode inputFormatForBus:0]];

    // 16-bit signed-integer stereo PCM at the session's sample rate
    AudioStreamBasicDescription asbd_player;
    asbd_player.mSampleRate       = session.sampleRate;
    asbd_player.mFormatID         = kAudioFormatLinearPCM;
    asbd_player.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    asbd_player.mFramesPerPacket  = 1;
    asbd_player.mChannelsPerFrame = 2;
    asbd_player.mBitsPerChannel   = 16;
    asbd_player.mBytesPerPacket   = 4;
    asbd_player.mBytesPerFrame    = 4;

    OSStatus status;
    status = AudioUnitSetProperty(inputNode.audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  0,
                                  &asbd_player,
                                  sizeof(asbd_player));

    // Add the render callback for the ioUnit: for playing
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc       = engineInputCallback; ///CALLBACK///
    callbackStruct.inputProcRefCon = (__bridge void *)(self);
    status = AudioUnitSetProperty(inputNode.audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Input, // or kAudioUnitScope_Global
                                  kOutputBus,            // the output bus (typically 0)
                                  &callbackStruct,
                                  sizeof(callbackStruct));

    [engine prepare];
}
The Audio Engine callback
static OSStatus engineInputCallback(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
{
    // the reference to the audio controller where you get the stream data
    MyAudioController *ac = (__bridge MyAudioController *)(inRefCon);

    // in practice we will only ever have 1 buffer, since the audio format is mono
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // copy stream buffer data to the output buffer, never more than either side can hold
        UInt32 size = MIN(buffer.mDataByteSize, ac.streamBuffer.mDataByteSize);
        memcpy(buffer.mData, ac.streamBuffer.mData, size);
        ioData->mBuffers[i].mDataByteSize = size; // indicate how much data we wrote in the buffer
    }
    return noErr;
}
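One small follow-up to the code above: [engine prepare] only preallocates resources, so at some point the engine also has to be started, for example (a minimal sketch, with error handling kept to a log):
NSError *engineError = nil;
if (![engine startAndReturnError:&engineError]) {
    NSLog(@"AVAudioEngine failed to start: %@", engineError);
}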

Objective-C - EXC_BAD_ACCESS

I am trying to create a project that streams audio from Unity. For this, I am developing a plugin.
Recording the audio works without problems, and I send the data through a websocket as a base64 string.
Then in Xcode I receive it and convert it to NSData, and that is where the problem is. At first there are no issues, but after a moment Xcode shows me an EXC_BAD_ACCESS error and I cannot keep copying the NSData into the buffer.
Here is the code.
#import "AudioProcessor.h"

#pragma mark Playback callback

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    AudioProcessor *audioProcessor = (__bridge AudioProcessor *)inRefCon;

    // copy buffer to audio buffer which gets played after function return
    if (ioData->mNumberBuffers > 0) {
        AudioBuffer buffer = ioData->mBuffers[0];

        // get the data from Unity
        NSString *inputData = audioProcessor.getInputData;
        if (inputData && ![inputData isKindOfClass:[NSNull class]]) {
            // here is the problem
            NSData *data = [[NSData alloc] initWithBase64EncodedString:inputData options:0];
            memcpy(buffer.mData, data.bytes, data.length);
            buffer.mDataByteSize = (int)data.length;
            free(data);
        }
    }
    return noErr;
}

#pragma mark control stream

- (void)setInputData:(NSString *)datosValue
{
    inputData = datosValue;
}

- (NSString *)getInputData
{
    return inputData;
}
If someone knows how to prevent the application from crashing, I would appreciate it.
First of all, please follow the naming convention that variable names start with a lowercase letter.
The error occurs because an instance of NSData is an object, i.e. a pointer; you have to add a *:
NSData *data = [[NSData alloc] init....
Further, it's highly recommended to access properties with dot notation, for example:
data.bytes
data.length
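Putting those points together, the callback could look roughly like this. This is a sketch rather than a verified fix: it keeps the AudioProcessor class from the question, adds a bounds check on the memcpy, and drops the free() call, since free() must not be used on an Objective-C object managed by ARC.
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    AudioProcessor *audioProcessor = (__bridge AudioProcessor *)inRefCon;

    if (ioData->mNumberBuffers > 0) {
        AudioBuffer buffer = ioData->mBuffers[0];
        NSString *inputData = audioProcessor.getInputData;
        if (inputData && ![inputData isKindOfClass:[NSNull class]]) {
            NSData *data = [[NSData alloc] initWithBase64EncodedString:inputData options:0];
            if (data) {
                // never copy more bytes than the output buffer can hold
                UInt32 size = (UInt32)MIN(data.length, buffer.mDataByteSize);
                memcpy(buffer.mData, data.bytes, size);
                ioData->mBuffers[0].mDataByteSize = size;
            }
            // no free(data): ARC releases the NSData when it goes out of scope
        }
    }
    return noErr;
}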

Audio streaming with AVFoundation using audio queues/buffers in iOS

I need to do audio streaming in an iOS app using Objective-C. I have used the AVFoundation framework to capture raw data from the microphone and send it to a server. However, the raw data I am receiving is corrupt. Below is my code.
Please suggest where I am going wrong.
session = [[AVCaptureSession alloc] init];

NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                [NSNumber numberWithFloat:16000.0], AVSampleRateKey,
                                [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                [NSNumber numberWithInt:32], AVLinearPCMBitDepthKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                nil];

AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
[session addInput:audioInput];

AVCaptureAudioDataOutput *audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
dispatch_queue_t audioQueue = dispatch_queue_create("AudioQueue", NULL);
[audioDataOutput setSampleBufferDelegate:self queue:audioQueue];

AVAssetWriterInput *_assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:recordSettings];
_assetWriterVideoInput.performsMultiPassEncodingIfSupported = YES;

if ([session canAddOutput:audioDataOutput]) {
    [session addOutput:audioDataOutput];
}

[session startRunning];
Capturing:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    AudioBufferList audioBufferList;
    NSMutableData *data = [NSMutableData data];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        Float32 *frame = (Float32 *)audioBuffer.mData;
        [data appendBytes:frame length:audioBuffer.mDataByteSize];

        NSString *base64Encoded = [data base64EncodedStringWithOptions:0];
        NSLog(@"Encoded: %@", base64Encoded);
    }
    CFRelease(blockBuffer);
}
I posted a sample of the kind of code you need to make this work. Its approach is nearly the same as yours, so you should be able to read it easily.
The app uses AudioUnit to record and play back audio (microphone input and speaker output), NSNetServices to connect two iOS devices on your network, and NSStream to send an audio stream between the devices.
You can download the source code at:
https://drive.google.com/open?id=1tKgVl0X92SYvgpvbljRzilXNQ6iBcjqM
It requires the latest Xcode 9 beta to compile and the latest iOS 11 beta to run.
NOTE | A log entry for each method call and event is displayed in a text field that fills the entire screen; there is no interactive interface (no buttons, etc.). After installing the app on two iOS devices, simply launch it on both devices to connect to your network automatically and start streaming audio.
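As a rough illustration of the NSStream part of that description (this is not the linked project's code, just a sketch): captured PCM bytes get written to an already-opened NSOutputStream, and the other device reads them back from its NSInputStream in the stream delegate. The outputStream and inputStream properties and the enqueueForPlayback: helper are placeholders introduced for this example.
- (void)sendAudioData:(NSData *)data
{
    const uint8_t *bytes = data.bytes;
    NSUInteger remaining = data.length;
    while (remaining > 0 && self.outputStream.hasSpaceAvailable) {
        NSInteger written = [self.outputStream write:bytes maxLength:remaining];
        if (written <= 0) break; // stream error or no space right now
        bytes += written;
        remaining -= written;
    }
}

- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
{
    if (aStream == self.inputStream && eventCode == NSStreamEventHasBytesAvailable) {
        uint8_t chunk[4096];
        NSInteger bytesRead = [self.inputStream read:chunk maxLength:sizeof(chunk)];
        if (bytesRead > 0) {
            // hand the raw PCM bytes to the playback side (e.g. a ring buffer read by the render callback)
            [self enqueueForPlayback:[NSData dataWithBytes:chunk length:(NSUInteger)bytesRead]];
        }
    }
}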

Load NSData from AudioBufferList into AVAudioPlayer

From a delegate method I am receiving an AudioBufferList while I am recording audio. I am trying to collect the data from the AudioBufferList and save it so I can load it into my AVAudioPlayer, but AVAudioPlayer throws an error and I am unable to play the recording. I need to be able to play the audio through AVAudioPlayer without having a file, just by using the AudioBufferList.
Originally I was saving the recording to a file and then loading it into AVAudioPlayer, but with this method I was unable to append to the recording without creating another audio file and merging the two files after the append was made. This was taking too much time, and I would still like to be able to listen to the recording between appends. So now I am not saving the audio file, so that I can keep appending to it until I wish to save it. The problem is that the NSData I am saving from the AudioBufferList does not load into the AVAudioPlayer properly.
Here is my code for gathering the NSData:
- (void)microphone:(EZMicrophone *)microphone
     hasBufferList:(AudioBufferList *)bufferList
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels
{
    AudioBuffer sourceBuffer = bufferList->mBuffers[0];

    if (audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize)
    {
        free(audioBuffer.mData);
        audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }

    int currentBuffer = 0;
    int maxBuf = 800;

    for (int y = 0; y < bufferList->mNumberBuffers; y++)
    {
        if (currentBuffer < maxBuf)
        {
            AudioBuffer audioBuff = bufferList->mBuffers[y];
            Float32 *frame = (Float32 *)audioBuff.mData;
            [data appendBytes:frame length:audioBuffer.mDataByteSize];
            currentBuffer += audioBuff.mDataByteSize;
        }
        else
        {
            break;
        }
    }
}
When I try to load the NSData into the AVAudioPlayer I get the following error:
self.audioPlayer = [[AVAudioPlayer alloc] initWithData:data error:&err];
err:
Error Domain=NSOSStatusErrorDomain Code=1954115647 "The operation couldn’t be completed. (OSStatus error 1954115647.)"
Any help would be appreciated.
Thank you,
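A side note on reading that error code: OSStatus values are usually four-character codes, and 1954115647 decodes to 'typ?', i.e. kAudioFileUnsupportedFileTypeError, which suggests the bytes handed to AVAudioPlayer are not in a container format it recognizes. A small general-purpose sketch for decoding such codes:
OSStatus code = 1954115647;
char fourCC[5] = {
    (char)((code >> 24) & 0xFF),
    (char)((code >> 16) & 0xFF),
    (char)((code >> 8)  & 0xFF),
    (char)( code        & 0xFF),
    0
};
NSLog(@"OSStatus %d = '%s'", (int)code, fourCC); // prints 'typ?' for this error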

Mute Audio AVAssetWriterInput while recording

I am recording video and audio using an AVAssetWriter, appending CMSampleBuffers from AVCaptureVideoDataOutput and AVCaptureAudioDataOutput respectively. What I want to do is mute the audio during the recording at the user's discretion.
I am assuming the best way is to somehow create an empty CMSampleBuffer, like
CMSampleBufferRef sb;
CMSampleBufferCreate(kCFAllocatorDefault, NULL, YES, NULL, NULL, NULL, 0, 1, &sti, 0, NULL, &sb);
[_audioInputWriter appendSampleBuffer:sb];
CFRelease(sb);
but that doesn't work, so I am assuming that I need to create a silent audio buffer. How do I do this and is there a better way?
I have done this before by calling a function that processes the data in the sample buffer and zeroes all of it. You might need to modify this if your audio format does not use SInt16 samples.
You can also use this same technique to process the audio in other ways.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (isMute) {
        [self muteAudioInBuffer:sampleBuffer];
    }
}

- (void)muteAudioInBuffer:(CMSampleBufferRef)sampleBuffer
{
    CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
    NSUInteger channelIndex = 0;

    CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
    size_t lengthAtOffset = 0;
    size_t totalLength = 0;
    SInt16 *samples = NULL;
    CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples));

    for (NSInteger i = 0; i < numSamples; i++) {
        samples[i] = (SInt16)0;
    }
}
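Regarding the SInt16 caveat above: if you want to guard that assumption, the buffer's format can be inspected before zeroing it (a sketch using the Core Media format-description APIs; wrap the zeroing loop in the resulting check):
CMAudioFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(fmt);
// only zero the data if it really is packed 16-bit signed-integer PCM
BOOL isSInt16PCM = (asbd != NULL)
    && asbd->mFormatID == kAudioFormatLinearPCM
    && asbd->mBitsPerChannel == 16
    && (asbd->mFormatFlags & kAudioFormatFlagIsSignedInteger) != 0;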

Resources