I've run into a libyuv crash recently.
I've tried a lot of things, with no luck.
Please help, or suggest some ideas for tracking this down. Thanks!
I have an iOS project (Objective-C). One of its functions encodes the video stream.
My idea is:
Step 1: Start a timer (20 FPS)
Step 2: Copy and get the bitmap data
Step 3: Convert the bitmap data to YUV I420 (libyuv)
Step 4: Encode to H.264 (OpenH264)
Step 5: Send the H.264 data over RTSP
Everything runs in the foreground.
It works well for 3-4 hours, but it always crashes after 4+ hours.
CPU (39%) and memory (140 MB) are stable (no memory leak, no CPU spikes, etc.).
I've tried a lot of things with no success (including adding @try/@catch in my project, and validating the data size before this line runs).
I've found it runs longer if I decrease the frame rate (20 FPS -> 15 FPS).
Do I need to add something after encoding each frame?
Could someone help me or give me some ideas on this? Thanks!
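For context, the 20 FPS GCD timer that drives everything looks roughly like this (a simplified sketch, not my exact code; mFrameTimer and copyCurrentBitmapData are placeholder names for my timer member and bitmap-capture step):
- (void)startFrameTimer {
    // Serial queue so frames are processed one at a time (Step 1)
    dispatch_queue_t queue = dispatch_queue_create("encoder.frames", DISPATCH_QUEUE_SERIAL);
    mFrameTimer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
    uint64_t interval = NSEC_PER_SEC / 20; // 20 FPS -> 50 ms per tick
    dispatch_source_set_timer(mFrameTimer, DISPATCH_TIME_NOW, interval, interval / 10);
    dispatch_source_set_event_handler(mFrameTimer, ^{
        NSData *bitmap = [self copyCurrentBitmapData]; // Step 2 (placeholder)
        [self processSDLFrame:bitmap];                 // Steps 3-5 below
    });
    dispatch_resume(mFrameTimer);
}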
// This function runs in a GCD timer
- (void)processSDLFrame:(NSData *)_frameData {
if (mH264EncoderPtr == NULL) {
[self initEncoder];
return;
}
int argbSize = mMapWidth * mMapHeight * 4;
NSData *frameData = [[NSData alloc] initWithData:_frameData];
if ([frameData length] == 0 || [frameData length] != argbSize) {
NSLog(#"Incorrect frame with size : %ld\n", [frameData length]);
return;
}
SFrameBSInfo info;
memset(&info, 0, sizeof (SFrameBSInfo));
SSourcePicture pic;
memset(&pic, 0, sizeof (SSourcePicture));
pic.iPicWidth = mMapWidth;
pic.iPicHeight = mMapHeight;
pic.uiTimeStamp = [[NSDate date] timeIntervalSince1970];
@try {
libyuv::ConvertToI420(
static_cast<const uint8 *>([frameData bytes]), // sample
argbSize, // sample_size
mDstY, // dst_y
mStrideY, // dst_stride_y
mDstU, // dst_u
mStrideU, // dst_stride_u
mDstV, // dst_v
mStrideV, // dst_stride_v
0, // crop_x
0, // crop_y
mMapWidth, // src_width
mMapHeight, // src_height
mMapWidth, // crop_width
mMapHeight, // crop_height
libyuv::kRotateNone, // rotation
libyuv::FOURCC_ARGB); // fourcc
} @catch (NSException *exception) {
NSLog(@"libyuv::ConvertToI420 - exception: %@", exception.reason);
return;
}
pic.iColorFormat = videoFormatI420;
pic.iStride[0] = mStrideY;
pic.iStride[1] = mStrideU;
pic.iStride[2] = mStrideV;
pic.pData[0] = mDstY;
pic.pData[1] = mDstU;
pic.pData[2] = mDstV;
if (mH264EncoderPtr == NULL) {
NSLog(#"OpenH264Manager - encoder not initialized");
return;
}
int rv = -1;
@try {
rv = mH264EncoderPtr->EncodeFrame(&pic, &info);
} @catch (NSException *exception) {
NSLog(@"NSException caught - mH264EncoderPtr->EncodeFrame");
NSLog(@"Name: %@", exception.name);
NSLog(@"Reason: %@", exception.reason);
[self deinitEncoder];
return;
}
if (rv != cmResultSuccess) {
NSLog(#"OpenH264Manager - encode failed : %d", rv);
[self deinitEncoder];
return;
}
if (info.eFrameType == videoFrameTypeSkip) {
NSLog(#"OpenH264Manager - drop skipped frame");
return;
}
// handle buffer data
int size = 0;
int layerSize[MAX_LAYER_NUM_OF_FRAME] = { 0 };
for (int layer = 0; layer < info.iLayerNum; layer++) {
for (int i = 0; i < info.sLayerInfo[layer].iNalCount; i++) {
layerSize[layer] += info.sLayerInfo[layer].pNalLengthInByte[i];
}
size += layerSize[layer];
}
uint8 *output = (uint8 *)malloc(size);
size = 0;
for (int layer = 0; layer < info.iLayerNum; layer++) {
memcpy(output + size, info.sLayerInfo[layer].pBsBuf, layerSize[layer]);
size += layerSize[layer];
}
// alloc new buffer for streaming
NSData *newData = [NSData dataWithBytes:output length:size];
// Send the data with RTSP
sendData( newData );
// free output buffer data
free(output);
}
[Jan/08/2020 Update]
I reported this on the libyuv issue tracker:
https://bugs.chromium.org/p/libyuv/issues/detail?id=853
A Googler gave me this feedback:
ARGBToI420 does no allocations. It's similar to a memcpy, with a source, a destination, and a number of pixels to convert.
The most common issues with it are:
1. The destination buffer has been deallocated. Try adding validation that the YUV buffer is valid: write to the first and last byte of each plane (see the sketch after this list).
This often occurs on shutdown, when threads don't shut down in the order you were hoping. A mutex to guard the memory could help.
2. The destination is an odd size and the allocator did not allocate enough memory. When allocating the UV planes, use (width + 1) / 2 for the width/stride and (height + 1) / 2 for the height, and allocate stride * height bytes per plane (also sketched below). You could also use an allocator that verifies there are no overreads or overwrites, or a sanitizer like ASan/MSan.
When screen casting, windows are usually a multiple of 2 pixels on Windows and Linux, but I have seen macOS use odd pixel counts.
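A minimal sketch of both checks, assuming my existing mDst*/mStride* members (this is my reading of the advice, not code from the ticket):
// Allocate the I420 planes with rounded-up UV dimensions (point 2).
int halfWidth = (mMapWidth + 1) / 2;
int halfHeight = (mMapHeight + 1) / 2;
mStrideY = mMapWidth;
mStrideU = halfWidth;
mStrideV = halfWidth;
mDstY = (uint8 *)malloc(mStrideY * mMapHeight);
mDstU = (uint8 *)malloc(mStrideU * halfHeight);
mDstV = (uint8 *)malloc(mStrideV * halfHeight);

// Probe the first and last byte of each plane before converting (point 1);
// if this crashes instead of ConvertToI420, the buffer itself is the problem.
mDstY[0] = mDstY[mStrideY * mMapHeight - 1] = 0;
mDstU[0] = mDstU[mStrideU * halfHeight - 1] = 0;
mDstV[0] = mDstV[mStrideV * halfHeight - 1] = 0;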
As a test, you could wrap the function with temporary buffers: copy the ARGB to a temporary ARGB buffer,
call ARGBToI420 into a temporary I420 buffer,
and copy the I420 result to the final I420 buffer.
That should give you a clue as to which buffer/function is failing.
I will try these suggestions.
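Here is my rough sketch of that temporary-buffer test (it assumes my strides equal the plane widths; the tmp* names are mine):
// Isolation test: source ARGB -> temp ARGB -> temp I420 -> final I420 buffers.
// Whichever copy (or the conversion itself) crashes points at the bad buffer.
uint8 *tmpArgb = (uint8 *)malloc(argbSize);
memcpy(tmpArgb, [frameData bytes], argbSize); // crashes here? source ARGB is bad

int halfWidth = (mMapWidth + 1) / 2;
int halfHeight = (mMapHeight + 1) / 2;
uint8 *tmpY = (uint8 *)malloc(mMapWidth * mMapHeight);
uint8 *tmpU = (uint8 *)malloc(halfWidth * halfHeight);
uint8 *tmpV = (uint8 *)malloc(halfWidth * halfHeight);

libyuv::ConvertToI420(tmpArgb, argbSize,
tmpY, mMapWidth, tmpU, halfWidth, tmpV, halfWidth,
0, 0, mMapWidth, mMapHeight, mMapWidth, mMapHeight,
libyuv::kRotateNone, libyuv::FOURCC_ARGB); // crashes here? the conversion itself

memcpy(mDstY, tmpY, mMapWidth * mMapHeight); // crashes here? final Y plane is bad
memcpy(mDstU, tmpU, halfWidth * halfHeight);
memcpy(mDstV, tmpV, halfWidth * halfHeight);

free(tmpArgb); free(tmpY); free(tmpU); free(tmpV);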
I need to stream audio from the mic to an HTTP server.
These are the recording settings I need:
NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt: kAudioFormatULaw],AVFormatIDKey,
[NSNumber numberWithFloat:8000.0],AVSampleRateKey,//was 44100.0
[NSData dataWithBytes: &acl length: sizeof( AudioChannelLayout ) ], AVChannelLayoutKey,
[NSNumber numberWithInt:1],AVNumberOfChannelsKey,
[NSNumber numberWithInt:64000],AVEncoderBitRateKey,
nil];
The API I'm coding to states:
Send a continuous stream of audio to the currently viewed camera. Audio needs to be encoded as G.711 mu-law at 64 kbit/s for transfer to the Axis camera at the bedside. Send (this should be a POST URL over SSL to the connected server):
POST /transmitaudio?id=
Content-Type: audio/basic
Content-Length: 99999 (length is ignored)
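Spelled out with GCDAsyncSocket, the request would look roughly like this (a sketch; it reuses self.restClient.sessionId and self.socket from my class below, and the header values come straight from the spec):
// Sketch: write the POST line and headers once, then stream mu-law bytes.
NSString *request = [NSString stringWithFormat:
@"POST /transmitaudio?id=%@ HTTP/1.0\r\n"
"Content-Type: audio/basic\r\n"
"Content-Length: 99999\r\n"
"\r\n", self.restClient.sessionId];
[self.socket writeData:[request dataUsingEncoding:NSUTF8StringEncoding] withTimeout:5 tag:0];
// ...then keep writing 8 kHz G.711 mu-law sample bytes on the same socket.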
Below is a list of links I have tried to work with.
LINK - (SO) Basic explanation that only Audio Units and Audio Queues allow NSData output when recording via the mic | not an example, but a good definition of what's needed (Audio Queues or Audio Units)
LINK - (SO) Audio callback example | only includes the callback
LINK - (SO) RemoteIO example | doesn't have start/stop and is for saving to a file
LINK - (SO) RemoteIO example | unanswered, not working
LINK - (SO) Basic audio recording example | good example, but it records to a file
LINK - (SO) Question that guided me to the InMemoryAudioFile class (couldn't get it working) | followed links to InMemoryAudioFile (or something like that) but couldn't get it working
LINK - (SO) More Audio Unit and RemoteIO examples/problems | got this one working, but once again there isn't a stop function, and even when I figured out what the call is and made it stop, it still didn't seem to transmit the audio to the server
LINK - Decent RemoteIO and Audio Queue example | another good example, and I almost got it working, but I had some problems with the code (the compiler thinking it's not Obj-C++), and once again I don't know how to get audio "data" from it instead of a file
LINK - Apple docs for Audio Queues | had problems with frameworks; worked through it (see question below), but in the end couldn't get it working. However, I probably didn't give this one as much time as the others, and maybe I should have
LINK - (SO) Problems I had when trying to implement Audio Queues/Units | not an example
LINK - (SO) Another RemoteIO example | another good example, but I can't figure out how to get it to produce data instead of a file
LINK - Circular buffers, which also looks interesting | couldn't figure out how to incorporate this with the audio callback
Here is my current class attempting to stream. It seems to work, although there is static coming out of the speakers at the receiver's end (connected to the server), which seems to indicate a problem with the audio data format.
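One check I can add in the capture callback is to log the format the capture actually delivers (a small sketch; if it reports linear PCM at the hardware rate rather than 8 kHz mu-law, that would explain the static):
// Log the stream description of each buffer handed to the delegate callback.
CMAudioFormatDescriptionRef fmt = (CMAudioFormatDescriptionRef)CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(fmt);
NSLog(@"capture format '%c%c%c%c', %.0f Hz, %u ch, %u bits",
(char)(asbd->mFormatID >> 24), (char)(asbd->mFormatID >> 16),
(char)(asbd->mFormatID >> 8), (char)asbd->mFormatID,
asbd->mSampleRate, (unsigned)asbd->mChannelsPerFrame, (unsigned)asbd->mBitsPerChannel);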
iOS VERSION (minus delegate methods for the GCD socket):
@implementation MicCommunicator {
AVAssetWriter * assetWriter;
AVAssetWriterInput * assetWriterInput;
}
@synthesize captureSession = _captureSession;
@synthesize output = _output;
@synthesize restClient = _restClient;
@synthesize uploadAudio = _uploadAudio;
@synthesize outputPath = _outputPath;
@synthesize sendStream = _sendStream;
@synthesize receiveStream = _receiveStream;
@synthesize socket = _socket;
@synthesize isSocketConnected = _isSocketConnected;
-(id)init {
if ((self = [super init])) {
_receiveStream = [[NSStream alloc]init];
_sendStream = [[NSStream alloc]init];
_socket = [[GCDAsyncSocket alloc] initWithDelegate:self delegateQueue:dispatch_get_main_queue()];
_isSocketConnected = FALSE;
_restClient = [RestClient sharedManager];
_uploadAudio = false;
NSArray *searchPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
_outputPath = [NSURL fileURLWithPath:[[searchPaths objectAtIndex:0] stringByAppendingPathComponent:@"micOutput.output"]];
NSError * assetError;
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono; //kAudioChannelLayoutTag_Stereo;
NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt: kAudioFormatULaw],AVFormatIDKey,
[NSNumber numberWithFloat:8000.0],AVSampleRateKey,//was 44100.0
[NSData dataWithBytes: &acl length: sizeof( AudioChannelLayout ) ], AVChannelLayoutKey,
[NSNumber numberWithInt:1],AVNumberOfChannelsKey,
[NSNumber numberWithInt:64000],AVEncoderBitRateKey,
nil];
assetWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioOutputSettings]retain];
[assetWriterInput setExpectsMediaDataInRealTime:YES];
assetWriter = [[AVAssetWriter assetWriterWithURL:_outputPath fileType:AVFileTypeWAVE error:&assetError]retain]; //AVFileTypeAppleM4A
if (assetError) {
NSLog (#"error initing mic: %#", assetError);
return nil;
}
if ([assetWriter canAddInput:assetWriterInput]) {
[assetWriter addInput:assetWriterInput];
} else {
NSLog (#"can't add asset writer input...!");
return nil;
}
}
return self;
}
-(void)dealloc {
[_output release];
[_captureSession release];
[assetWriter release];
[assetWriterInput release];
[super dealloc];
}
-(void)beginStreaming {
NSLog(#"avassetwrter class is %#",NSStringFromClass([assetWriter class]));
self.captureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
if (audioInput)
[self.captureSession addInput:audioInput];
else {
NSLog(#"No audio input found.");
return;
}
self.output = [[AVCaptureAudioDataOutput alloc] init];
dispatch_queue_t outputQueue = dispatch_queue_create("micOutputDispatchQueue", NULL);
[self.output setSampleBufferDelegate:self queue:outputQueue];
dispatch_release(outputQueue);
self.uploadAudio = FALSE;
[self.captureSession addOutput:self.output];
[assetWriter startWriting];
[self.captureSession startRunning];
}
-(void)pauseStreaming
{
self.uploadAudio = FALSE;
}
-(void)resumeStreaming
{
self.uploadAudio = TRUE;
}
-(void)finishAudioWork
{
[self dealloc];
}
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
AudioBufferList audioBufferList;
NSMutableData *data= [[NSMutableData alloc] init];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
Float32 *frame = (Float32*)audioBuffer.mData;
[data appendBytes:frame length:audioBuffer.mDataByteSize];
}
// append [data bytes] to your NSOutputStream
// These two lines write to disk, you may not need this, just providing an example
[assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
[assetWriterInput appendSampleBuffer:sampleBuffer];
//start upload audio data
if (self.uploadAudio) {
if (!self.isSocketConnected) {
[self connect];
}
NSString *requestStr = [NSString stringWithFormat:@"POST /transmitaudio?id=%@ HTTP/1.0\r\n\r\n",self.restClient.sessionId];
NSData *requestData = [requestStr dataUsingEncoding:NSUTF8StringEncoding];
[self.socket writeData:requestData withTimeout:5 tag:0];
[self.socket writeData:data withTimeout:5 tag:0];
}
//stop upload audio data
CFRelease(blockBuffer);
blockBuffer=NULL;
[data release];
}
And the Java version:
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.BufferedReader;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder.AudioSource;
import android.util.Log;
public class AudioWorker extends Thread
{
private boolean stopped = false;
private String host;
private int port;
private long id=0;
boolean run=true;
AudioRecord recorder;
//ulaw encoder stuff
private final static String TAG = "UlawEncoderInputStream";
private final static int MAX_ULAW = 8192;
private final static int SCALE_BITS = 16;
private InputStream mIn;
private int mMax = 0;
private final byte[] mBuf = new byte[1024];
private int mBufCount = 0; // should be 0 or 1
private final byte[] mOneByte = new byte[1];
////
/**
* Give the thread high priority so that it's not canceled unexpectedly, and start it
*/
public AudioWorker(String host, int port, long id)
{
this.host = host;
this.port = port;
this.id = id;
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
// start();
}
@Override
public void run()
{
Log.i("AudioWorker", "Running AudioWorker Thread");
recorder = null;
AudioTrack track = null;
short[][] buffers = new short[256][160];
int ix = 0;
/*
* Initialize buffer to hold continuously recorded AudioWorker data, start recording, and start
* playback.
*/
try
{
int N = AudioRecord.getMinBufferSize(8000,AudioFormat.CHANNEL_IN_MONO,AudioFormat.ENCODING_PCM_16BIT);
recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N*10);
track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, N*10, AudioTrack.MODE_STREAM);
recorder.startRecording();
// track.play();
/*
* Loops until something outside of this thread stops it.
* Reads the data from the recorder and writes it to the AudioWorker track for playback.
*/
SSLContext sc = SSLContext.getInstance("SSL");
sc.init(null, trustAllCerts, new java.security.SecureRandom());
SSLSocketFactory sslFact = sc.getSocketFactory();
SSLSocket socket = (SSLSocket)sslFact.createSocket(host, port);
socket.setSoTimeout(10000);
InputStream inputStream = socket.getInputStream();
DataInputStream in = new DataInputStream(new BufferedInputStream(inputStream));
OutputStream outputStream = socket.getOutputStream();
DataOutputStream os = new DataOutputStream(new BufferedOutputStream(outputStream));
PrintWriter socketPrinter = new PrintWriter(os);
BufferedReader br = new BufferedReader(new InputStreamReader(in));
// socketPrinter.println("POST /transmitaudio?patient=1333369798370 HTTP/1.0");
socketPrinter.println("POST /transmitaudio?id="+id+" HTTP/1.0");
socketPrinter.println("Content-Type: audio/basic");
socketPrinter.println("Content-Length: 99999");
socketPrinter.println("Connection: Keep-Alive");
socketPrinter.println("Cache-Control: no-cache");
socketPrinter.println();
socketPrinter.flush();
while(!stopped)
{
Log.i("Map", "Writing new data to buffer");
short[] buffer = buffers[ix++ % buffers.length];
N = recorder.read(buffer,0,buffer.length);
track.write(buffer, 0, buffer.length);
byte[] bytes2 = new byte[buffer.length * 2];
ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
read(bytes2, 0, bytes2.length);
os.write(bytes2,0,bytes2.length);
//
// ByteBuffer byteBuf = ByteBuffer.allocate(2*N);
// System.out.println("byteBuf length "+2*N);
// int i = 0;
// while (buffer.length > i) {
// byteBuf.putShort(buffer[i]);
// i++;
// }
// byte[] b = new byte[byteBuf.remaining()];
}
os.close();
}
catch(Throwable x)
{
Log.w("AudioWorker", "Error reading voice AudioWorker", x);
}
/*
* Frees the thread's resources after the loop completes so that it can be run again
*/
finally
{
recorder.stop();
recorder.release();
track.stop();
track.release();
}
}
/**
* Called from outside of the thread in order to stop the recording/playback loop
*/
public void close()
{
stopped = true;
}
public void resumeThread()
{
stopped = false;
run();
}
TrustManager[] trustAllCerts = new TrustManager[]{
new X509TrustManager() {
public java.security.cert.X509Certificate[] getAcceptedIssuers() {
return null;
}
public void checkClientTrusted(
java.security.cert.X509Certificate[] certs, String authType) {
}
public void checkServerTrusted(
java.security.cert.X509Certificate[] chain, String authType) {
for (int j=0; j<chain.length; j++)
{
System.out.println("Client certificate information:");
System.out.println(" Subject DN: " + chain[j].getSubjectDN());
System.out.println(" Issuer DN: " + chain[j].getIssuerDN());
System.out.println(" Serial number: " + chain[j].getSerialNumber());
System.out.println("");
}
}
}
};
public static void encode(byte[] pcmBuf, int pcmOffset,
byte[] ulawBuf, int ulawOffset, int length, int max) {
// from 'ulaw' in wikipedia
// +8191 to +8159 0x80
// +8158 to +4063 in 16 intervals of 256 0x80 + interval number
// +4062 to +2015 in 16 intervals of 128 0x90 + interval number
// +2014 to +991 in 16 intervals of 64 0xA0 + interval number
// +990 to +479 in 16 intervals of 32 0xB0 + interval number
// +478 to +223 in 16 intervals of 16 0xC0 + interval number
// +222 to +95 in 16 intervals of 8 0xD0 + interval number
// +94 to +31 in 16 intervals of 4 0xE0 + interval number
// +30 to +1 in 15 intervals of 2 0xF0 + interval number
// 0 0xFF
// -1 0x7F
// -31 to -2 in 15 intervals of 2 0x70 + interval number
// -95 to -32 in 16 intervals of 4 0x60 + interval number
// -223 to -96 in 16 intervals of 8 0x50 + interval number
// -479 to -224 in 16 intervals of 16 0x40 + interval number
// -991 to -480 in 16 intervals of 32 0x30 + interval number
// -2015 to -992 in 16 intervals of 64 0x20 + interval number
// -4063 to -2016 in 16 intervals of 128 0x10 + interval number
// -8159 to -4064 in 16 intervals of 256 0x00 + interval number
// -8192 to -8160 0x00
// set scale factors
if (max <= 0) max = MAX_ULAW;
int coef = MAX_ULAW * (1 << SCALE_BITS) / max;
for (int i = 0; i < length; i++) {
int pcm = (0xff & pcmBuf[pcmOffset++]) + (pcmBuf[pcmOffset++] << 8);
pcm = (pcm * coef) >> SCALE_BITS;
int ulaw;
if (pcm >= 0) {
ulaw = pcm <= 0 ? 0xff :
pcm <= 30 ? 0xf0 + (( 30 - pcm) >> 1) :
pcm <= 94 ? 0xe0 + (( 94 - pcm) >> 2) :
pcm <= 222 ? 0xd0 + (( 222 - pcm) >> 3) :
pcm <= 478 ? 0xc0 + (( 478 - pcm) >> 4) :
pcm <= 990 ? 0xb0 + (( 990 - pcm) >> 5) :
pcm <= 2014 ? 0xa0 + ((2014 - pcm) >> 6) :
pcm <= 4062 ? 0x90 + ((4062 - pcm) >> 7) :
pcm <= 8158 ? 0x80 + ((8158 - pcm) >> 8) :
0x80;
} else {
ulaw = -1 <= pcm ? 0x7f :
-31 <= pcm ? 0x70 + ((pcm - -31) >> 1) :
-95 <= pcm ? 0x60 + ((pcm - -95) >> 2) :
-223 <= pcm ? 0x50 + ((pcm - -223) >> 3) :
-479 <= pcm ? 0x40 + ((pcm - -479) >> 4) :
-991 <= pcm ? 0x30 + ((pcm - -991) >> 5) :
-2015 <= pcm ? 0x20 + ((pcm - -2015) >> 6) :
-4063 <= pcm ? 0x10 + ((pcm - -4063) >> 7) :
-8159 <= pcm ? 0x00 + ((pcm - -8159) >> 8) :
0x00;
}
ulawBuf[ulawOffset++] = (byte)ulaw;
}
}
public static int maxAbsPcm(byte[] pcmBuf, int offset, int length) {
int max = 0;
for (int i = 0; i < length; i++) {
int pcm = (0xff & pcmBuf[offset++]) + (pcmBuf[offset++] << 8);
if (pcm < 0) pcm = -pcm;
if (pcm > max) max = pcm;
}
return max;
}
public int read(byte[] buf, int offset, int length) throws IOException {
if (recorder == null) throw new IllegalStateException("not open");
// return at least one byte, but try to fill 'length'
while (mBufCount < 2) {
int n = recorder.read(mBuf, mBufCount, Math.min(length * 2, mBuf.length - mBufCount));
if (n == -1) return -1;
mBufCount += n;
}
// compand data
int n = Math.min(mBufCount / 2, length);
encode(mBuf, 0, buf, offset, n, mMax);
// move data to bottom of mBuf
mBufCount -= n * 2;
for (int i = 0; i < mBufCount; i++) mBuf[i] = mBuf[i + n * 2];
return n;
}
}
My work on this topic has been long and staggered, but I have finally gotten this to work, however hacked it may be. Because of that, I will list some warnings prior to posting the answer:
There is still a clicking noise between buffers
I get warnings due to the way I use my Obj-C classes in the Obj-C++ class, so something is wrong there (however, from my research, using a pool does the same as release, so I don't believe this matters too much):
Object 0x13cd20 of class __NSCFString autoreleased with no pool in
place - just leaking - break on objc_autoreleaseNoPool() to debug
In order to get this working I had to comment out all AQPlayer references from SpeakHereController (see below) due to errors I couldn't fix any other way. That didn't matter for me, however, since I am only recording.
So the main answer to the above is that there is a bug in AVAssetWriter that stopped it from appending the bytes and writing the audio data. I finally found this out after contacting Apple support and having them notify me about it. As far as I know, the bug is specific to u-law and AVAssetWriter, though I haven't tried many other formats to verify.
In response to this, the only other option is/was to use Audio Queues, something I had tried before but that had brought a bunch of problems, the biggest being my lack of knowledge of Obj-C++. The class below that got things working is from the SpeakHere example, with slight changes so that the audio is u-law formatted. The other problems came from trying to get all the files to play nicely together; however, this was easily remedied by changing all filenames in the chain to .mm. The next problem was trying to use the classes in harmony. This is still a WIP, and ties into warning number 2. But my basic solution to this was to use the SpeakHereController (also included in the SpeakHere example) instead of directly accessing AQRecorder.
Anyway, here is the code:
Using the SpeakHereController from an obj-c class
.h
@property(nonatomic,strong) SpeakHereController * recorder;
.mm
[init method]
//AQRecorder wrapper (SpeakHereController) allocation
_recorder = [[SpeakHereController alloc]init];
//AQRecorder wrapper (SpeakHereController) initialization
//technically this class is a controller, and that's why its init method is awakeFromNib
[_recorder awakeFromNib];
[recording]
bool buttonState = self.audioRecord.isSelected;
[self.audioRecord setSelected:!buttonState];
if ([self.audioRecord isSelected]) {
[self.recorder startRecord];
}else {
[self.recorder stopRecord];
}
SpeakHereController
#import "SpeakHereController.h"
@implementation SpeakHereController
@synthesize player;
@synthesize recorder;
@synthesize btn_record;
@synthesize btn_play;
@synthesize fileDescription;
@synthesize lvlMeter_in;
@synthesize playbackWasInterrupted;
char *OSTypeToStr(char *buf, OSType t)
{
char *p = buf;
char str[4], *q = str;
*(UInt32 *)str = CFSwapInt32(t);
for (int i = 0; i < 4; ++i) {
if (isprint(*q) && *q != '\\')
*p++ = *q++;
else {
sprintf(p, "\\x%02x", *q++);
p += 4;
}
}
*p = '\0';
return buf;
}
-(void)setFileDescriptionForFormat: (CAStreamBasicDescription)format withName:(NSString*)name
{
char buf[5];
const char *dataFormat = OSTypeToStr(buf, format.mFormatID);
NSString* description = [[NSString alloc] initWithFormat:@"(%d ch. %s @ %g Hz)", format.NumberChannels(), dataFormat, format.mSampleRate, nil];
fileDescription.text = description;
[description release];
}
#pragma mark Playback routines
-(void)stopPlayQueue
{
// player->StopQueue();
[lvlMeter_in setAq: nil];
btn_record.enabled = YES;
}
-(void)pausePlayQueue
{
// player->PauseQueue();
playbackWasPaused = YES;
}
-(void)startRecord
{
// recorder = new AQRecorder();
if (recorder->IsRunning()) // If we are currently recording, stop and save the file.
{
[self stopRecord];
}
else // If we're not recording, start.
{
// btn_play.enabled = NO;
// Set the button's state to "stop"
// btn_record.title = @"Stop";
// Start the recorder
recorder->StartRecord(CFSTR("recordedFile.caf"));
[self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"];
// Hook the level meter up to the Audio Queue for the recorder
// [lvlMeter_in setAq: recorder->Queue()];
}
}
- (void)stopRecord
{
// Disconnect our level meter from the audio queue
// [lvlMeter_in setAq: nil];
recorder->StopRecord();
// dispose the previous playback queue
// player->DisposeQueue(true);
// now create a new queue for the recorded file
recordFilePath = (CFStringRef)[NSTemporaryDirectory() stringByAppendingPathComponent: @"recordedFile.caf"];
// player->CreateQueueForFile(recordFilePath);
// Set the button's state back to "record"
// btn_record.title = @"Record";
// btn_play.enabled = YES;
}
- (IBAction)play:(id)sender
{
if (player->IsRunning())
{
if (playbackWasPaused) {
// OSStatus result = player->StartQueue(true);
// if (result == noErr)
// [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self];
}
else
// [self stopPlayQueue];
nil;
}
else
{
// OSStatus result = player->StartQueue(false);
// if (result == noErr)
// [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self];
}
}
- (IBAction)record:(id)sender
{
if (recorder->IsRunning()) // If we are currently recording, stop and save the file.
{
[self stopRecord];
}
else // If we're not recording, start.
{
// btn_play.enabled = NO;
//
// // Set the button's state to "stop"
// btn_record.title = @"Stop";
// Start the recorder
recorder->StartRecord(CFSTR("recordedFile.caf"));
[self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"];
// Hook the level meter up to the Audio Queue for the recorder
[lvlMeter_in setAq: recorder->Queue()];
}
}
#pragma mark AudioSession listeners
void interruptionListener( void * inClientData,
UInt32 inInterruptionState)
{
SpeakHereController *THIS = (SpeakHereController*)inClientData;
if (inInterruptionState == kAudioSessionBeginInterruption)
{
if (THIS->recorder->IsRunning()) {
[THIS stopRecord];
}
else if (THIS->player->IsRunning()) {
//the queue will stop itself on an interruption, we just need to update the UI
[[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueStopped" object:THIS];
THIS->playbackWasInterrupted = YES;
}
}
else if ((inInterruptionState == kAudioSessionEndInterruption) && THIS->playbackWasInterrupted)
{
// we were playing back when we were interrupted, so reset and resume now
// THIS->player->StartQueue(true);
[[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:THIS];
THIS->playbackWasInterrupted = NO;
}
}
void propListener( void * inClientData,
AudioSessionPropertyID inID,
UInt32 inDataSize,
const void * inData)
{
SpeakHereController *THIS = (SpeakHereController*)inClientData;
if (inID == kAudioSessionProperty_AudioRouteChange)
{
CFDictionaryRef routeDictionary = (CFDictionaryRef)inData;
//CFShow(routeDictionary);
CFNumberRef reason = (CFNumberRef)CFDictionaryGetValue(routeDictionary, CFSTR(kAudioSession_AudioRouteChangeKey_Reason));
SInt32 reasonVal;
CFNumberGetValue(reason, kCFNumberSInt32Type, &reasonVal);
if (reasonVal != kAudioSessionRouteChangeReason_CategoryChange)
{
/*CFStringRef oldRoute = (CFStringRef)CFDictionaryGetValue(routeDictionary, CFSTR(kAudioSession_AudioRouteChangeKey_OldRoute));
if (oldRoute)
{
printf("old route:\n");
CFShow(oldRoute);
}
else
printf("ERROR GETTING OLD AUDIO ROUTE!\n");
CFStringRef newRoute;
UInt32 size; size = sizeof(CFStringRef);
OSStatus error = AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute);
if (error) printf("ERROR GETTING NEW AUDIO ROUTE! %d\n", error);
else
{
printf("new route:\n");
CFShow(newRoute);
}*/
if (reasonVal == kAudioSessionRouteChangeReason_OldDeviceUnavailable)
{
if (THIS->player->IsRunning()) {
[THIS pausePlayQueue];
[[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueStopped" object:THIS];
}
}
// stop the queue if we had a non-policy route change
if (THIS->recorder->IsRunning()) {
[THIS stopRecord];
}
}
}
else if (inID == kAudioSessionProperty_AudioInputAvailable)
{
if (inDataSize == sizeof(UInt32)) {
UInt32 isAvailable = *(UInt32*)inData;
// disable recording if input is not available
THIS->btn_record.enabled = (isAvailable > 0) ? YES : NO;
}
}
}
#pragma mark Initialization routines
- (void)awakeFromNib
{
// Allocate our singleton instance for the recorder & player object
recorder = new AQRecorder();
player = nil;//new AQPlayer();
OSStatus error = AudioSessionInitialize(NULL, NULL, interruptionListener, self);
if (error) printf("ERROR INITIALIZING AUDIO SESSION! %d\n", error);
else
{
UInt32 category = kAudioSessionCategory_PlayAndRecord;
error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
if (error) printf("couldn't set audio category!");
error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, propListener, self);
if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", error);
UInt32 inputAvailable = 0;
UInt32 size = sizeof(inputAvailable);
// we do not want to allow recording if input is not available
error = AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &size, &inputAvailable);
if (error) printf("ERROR GETTING INPUT AVAILABILITY! %d\n", error);
// btn_record.enabled = (inputAvailable) ? YES : NO;
// we also need to listen to see if input availability changes
error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, propListener, self);
if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", error);
error = AudioSessionSetActive(true);
if (error) printf("AudioSessionSetActive (true) failed");
}
// [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playbackQueueStopped:) name:@"playbackQueueStopped" object:nil];
// [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playbackQueueResumed:) name:@"playbackQueueResumed" object:nil];
// UIColor *bgColor = [[UIColor alloc] initWithRed:.39 green:.44 blue:.57 alpha:.5];
// [lvlMeter_in setBackgroundColor:bgColor];
// [lvlMeter_in setBorderColor:bgColor];
// [bgColor release];
// disable the play button since we have no recording to play yet
// btn_play.enabled = NO;
// playbackWasInterrupted = NO;
// playbackWasPaused = NO;
}
#pragma mark Notification routines
- (void)playbackQueueStopped:(NSNotification *)note
{
btn_play.title = @"Play";
[lvlMeter_in setAq: nil];
btn_record.enabled = YES;
}
- (void)playbackQueueResumed:(NSNotification *)note
{
btn_play.title = @"Stop";
btn_record.enabled = NO;
[lvlMeter_in setAq: player->Queue()];
}
#pragma mark Cleanup
- (void)dealloc
{
[btn_record release];
[btn_play release];
[fileDescription release];
[lvlMeter_in release];
// delete player;
delete recorder;
[super dealloc];
}
@end
AQRecorder
(the .h has two lines of importance:
#define kNumberRecordBuffers 3
#define kBufferDurationSeconds 5.0
)
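For the u-law format set up below (1 byte per frame at 8 kHz), those two constants mean ComputeRecordBufferSize comes out to roughly ceil(5.0 s * 8000 frames/s) * 1 byte/frame = 40,000 bytes per buffer, with 3 such buffers enqueued.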
#include "AQRecorder.h"
//#include "UploadAudioWrapperInterface.h"
//#include "RestClient.h"
RestClient * restClient;
NSData* data;
// ____________________________________________________________________________________
// Determine the size, in bytes, of a buffer necessary to represent the supplied number
// of seconds of audio data.
int AQRecorder::ComputeRecordBufferSize(const AudioStreamBasicDescription *format, float seconds)
{
int packets, frames, bytes = 0;
try {
frames = (int)ceil(seconds * format->mSampleRate);
if (format->mBytesPerFrame > 0)
bytes = frames * format->mBytesPerFrame;
else {
UInt32 maxPacketSize;
if (format->mBytesPerPacket > 0)
maxPacketSize = format->mBytesPerPacket; // constant packet size
else {
UInt32 propertySize = sizeof(maxPacketSize);
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize,
&propertySize), "couldn't get queue's maximum output packet size");
}
if (format->mFramesPerPacket > 0)
packets = frames / format->mFramesPerPacket;
else
packets = frames; // worst-case scenario: 1 frame in a packet
if (packets == 0) // sanity check
packets = 1;
bytes = packets * maxPacketSize;
}
} catch (CAXException e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
return 0;
}
return bytes;
}
// ____________________________________________________________________________________
// AudioQueue callback function, called when an input buffers has been filled.
void AQRecorder::MyInputBufferHandler( void * inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp * inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription* inPacketDesc)
{
AQRecorder *aqr = (AQRecorder *)inUserData;
try {
if (inNumPackets > 0) {
// write packets to file
// XThrowIfError(AudioFileWritePackets(aqr->mRecordFile, FALSE, inBuffer->mAudioDataByteSize,
// inPacketDesc, aqr->mRecordPacket, &inNumPackets, inBuffer->mAudioData),
// "AudioFileWritePackets failed");
aqr->mRecordPacket += inNumPackets;
// int numBytes = inBuffer->mAudioDataByteSize;
// SInt8 *testBuffer = (SInt8*)inBuffer->mAudioData;
//
// for (int i=0; i < numBytes; i++)
// {
// SInt8 currentData = testBuffer[i];
// printf("Current data in testbuffer is %d", currentData);
//
// NSData * temp = [NSData dataWithBytes:currentData length:sizeof(currentData)];
// }
data=[[NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]retain];
[restClient uploadAudioData:data url:nil];
}
// if we're not stopping, re-enqueue the buffer so that it gets filled again
if (aqr->IsRunning())
XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
} catch (CAXException e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
}
}
AQRecorder::AQRecorder()
{
mIsRunning = false;
mRecordPacket = 0;
data = [[NSData alloc]init];
restClient = [[RestClient sharedManager]retain];
}
AQRecorder::~AQRecorder()
{
AudioQueueDispose(mQueue, TRUE);
AudioFileClose(mRecordFile);
if (mFileName){
CFRelease(mFileName);
}
[restClient release];
[data release];
}
// ____________________________________________________________________________________
// Copy a queue's encoder's magic cookie to an audio file.
void AQRecorder::CopyEncoderCookieToFile()
{
UInt32 propertySize;
// get the magic cookie, if any, from the converter
OSStatus err = AudioQueueGetPropertySize(mQueue, kAudioQueueProperty_MagicCookie, &propertySize);
// we can get a noErr result and also a propertySize == 0
// -- if the file format does support magic cookies, but this file doesn't have one.
if (err == noErr && propertySize > 0) {
Byte *magicCookie = new Byte[propertySize];
UInt32 magicCookieSize;
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MagicCookie, magicCookie, &propertySize), "get audio converter's magic cookie");
magicCookieSize = propertySize; // the converter lies and tell us the wrong size
// now set the magic cookie on the output file
UInt32 willEatTheCookie = false;
// the converter wants to give us one; will the file take it?
err = AudioFileGetPropertyInfo(mRecordFile, kAudioFilePropertyMagicCookieData, NULL, &willEatTheCookie);
if (err == noErr && willEatTheCookie) {
err = AudioFileSetProperty(mRecordFile, kAudioFilePropertyMagicCookieData, magicCookieSize, magicCookie);
XThrowIfError(err, "set audio file's magic cookie");
}
delete[] magicCookie;
}
}
void AQRecorder::SetupAudioFormat(UInt32 inFormatID)
{
memset(&mRecordFormat, 0, sizeof(mRecordFormat));
UInt32 size = sizeof(mRecordFormat.mSampleRate);
XThrowIfError(AudioSessionGetProperty( kAudioSessionProperty_CurrentHardwareSampleRate,
&size,
&mRecordFormat.mSampleRate), "couldn't get hardware sample rate");
//override samplearate to 8k from device sample rate
mRecordFormat.mSampleRate = 8000.0;
size = sizeof(mRecordFormat.mChannelsPerFrame);
XThrowIfError(AudioSessionGetProperty( kAudioSessionProperty_CurrentHardwareInputNumberChannels,
&size,
&mRecordFormat.mChannelsPerFrame), "couldn't get input channel count");
// mRecordFormat.mChannelsPerFrame = 1;
mRecordFormat.mFormatID = inFormatID;
if (inFormatID == kAudioFormatLinearPCM)
{
// if we want pcm, default to signed 16-bit little-endian
mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
mRecordFormat.mBitsPerChannel = 16;
mRecordFormat.mBytesPerPacket = mRecordFormat.mBytesPerFrame = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
mRecordFormat.mFramesPerPacket = 1;
}
if (inFormatID == kAudioFormatULaw) {
// NSLog(#"is ulaw");
mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger;
mRecordFormat.mSampleRate = 8000.0;
// mRecordFormat.mFormatFlags = 0;
mRecordFormat.mFramesPerPacket = 1;
mRecordFormat.mChannelsPerFrame = 1;
mRecordFormat.mBitsPerChannel = 16;//was 8
mRecordFormat.mBytesPerPacket = 1;
mRecordFormat.mBytesPerFrame = 1;
}
}
NSString * GetDocumentDirectory(void)
{
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
return basePath;
}
void AQRecorder::StartRecord(CFStringRef inRecordFile)
{
int i, bufferByteSize;
UInt32 size;
CFURLRef url;
try {
mFileName = CFStringCreateCopy(kCFAllocatorDefault, inRecordFile);
// specify the recording format
SetupAudioFormat(kAudioFormatULaw /*kAudioFormatLinearPCM*/);
// create the queue
XThrowIfError(AudioQueueNewInput(
&mRecordFormat,
MyInputBufferHandler,
this /* userData */,
NULL /* run loop */, NULL /* run loop mode */,
0 /* flags */, &mQueue), "AudioQueueNewInput failed");
// get the record format back from the queue's audio converter --
// the file may require a more specific stream description than was necessary to create the encoder.
mRecordPacket = 0;
size = sizeof(mRecordFormat);
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription,
&mRecordFormat, &size), "couldn't get queue's format");
NSString *basePath = GetDocumentDirectory();
NSString *recordFile = [basePath /*NSTemporaryDirectory()*/ stringByAppendingPathComponent: (NSString*)inRecordFile];
url = CFURLCreateWithString(kCFAllocatorDefault, (CFStringRef)recordFile, NULL);
// create the audio file
XThrowIfError(AudioFileCreateWithURL(url, kAudioFileCAFType, &mRecordFormat, kAudioFileFlags_EraseFile,
&mRecordFile), "AudioFileCreateWithURL failed");
CFRelease(url);
// copy the cookie first to give the file object as much info as we can about the data going in
// not necessary for pcm, but required for some compressed audio
CopyEncoderCookieToFile();
// allocate and enqueue buffers
bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds); // enough bytes for kBufferDurationSeconds of audio
for (i = 0; i < kNumberRecordBuffers; ++i) {
XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
"AudioQueueAllocateBuffer failed");
XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
"AudioQueueEnqueueBuffer failed");
}
// start the queue
mIsRunning = true;
XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
}
catch (CAXException &e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
}
catch (...) {
fprintf(stderr, "An unknown error occurred\n");
}
}
void AQRecorder::StopRecord()
{
// end recording
mIsRunning = false;
// XThrowIfError(AudioQueueReset(mQueue), "AudioQueueStop failed");
XThrowIfError(AudioQueueStop(mQueue, true), "AudioQueueStop failed");
// a codec may update its cookie at the end of an encoding session, so reapply it to the file now
CopyEncoderCookieToFile();
if (mFileName)
{
CFRelease(mFileName);
mFileName = NULL;
}
AudioQueueDispose(mQueue, true);
AudioFileClose(mRecordFile);
}
Please feel free to comment on or refine my answer; I will accept yours as the answer if it's a better solution. Please note this was my first attempt, and I'm sure it is not the most elegant or proper solution.
You could use the GameKit framework and send the audio over Bluetooth. There are examples in the iOS Developer Library.