Determining the throughput speed of the Cordova bridge (iOS). Any insights?

I'm trying to get an idea of the throughput speed of Cordova's WebView-Native bridge. To measure it, I send a Uint8Array of a fixed size across the bridge and time the round trip. However, I would have expected the throughput of this bridge to be higher.
Testing on an iPhone 4s with Cordova 5.4.1, the throughput is ~0.8 MB/s. This feels low to me (based on some experience with Cordova apps, though that is of course subjective).
Question: is this realistic?
I realise that without my code this question doesn't give enough information, so I've attached it below:
My JavaScript code:
var success = function(endTime) {
    // endTime is unix epoch time (ms)
    var time_df = endTime - beginTime;
    results.push(time_df);
    numTests += 1;
    sendMessage();
};

var sendMessage = function() {
    // execute test 30 times
    if (numTests < 30) {
        beginTime = +new Date();
        bench.send(payload, success, fail);
    } else {
        console.log('results', results);
    }
};

var createPayload = function(size) {
    size = size * 1000;
    var bufView = new Uint8Array(size);
    for (var i = 0; i < size; ++i) {
        bufView[i] = i % 20;
    }
    return bufView.buffer;
};

var startTest = function() {
    // create payload of 500kb
    payload = createPayload(500);
    sendMessage();
};
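As a quick sanity check on the ~0.8 MB/s figure: throughput is just payload size divided by round-trip time. A minimal Swift sketch of that arithmetic (the 600 ms figure below is illustrative, not a measurement):

// Hypothetical helper: convert the per-send timings collected above into MB/s.
func throughputMBps(payloadBytes: Int, elapsedMs: [Double]) -> Double {
    let meanSeconds = elapsedMs.reduce(0, +) / Double(elapsedMs.count) / 1000.0
    return Double(payloadBytes) / meanSeconds / 1_000_000.0
}

// Example: a 500,000-byte payload averaging ~600 ms per send works out to ~0.83 MB/s.
let mbps = throughputMBps(payloadBytes: 500_000, elapsedMs: [600, 590, 610])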
My Objective-C code.
The code does basically one thing: it returns the unix epoch time (in ms) once the payload has been received.
- (void)send:(CDVInvokedUrlCommand*)command
{
    NSTimeInterval time = [[NSDate date] timeIntervalSince1970]; // seconds, as a double
    long digits = (long)time;                                    // whole seconds (first 10 digits)
    int decimalDigits = (int)(fmod(time, 1) * 1000);             // the 3 millisecond digits
    long timestamp = (digits * 1000) + decimalDigits;            // unix epoch time in ms
    NSString *timestampString = [NSString stringWithFormat:@"%ld", timestamp];
    NSString *callbackId = [command callbackId];
    CDVPluginResult *result = [CDVPluginResult
                               resultWithStatus:CDVCommandStatus_OK
                               messageAsString:timestampString];
    [self success:result callbackId:callbackId];
}
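For what it's worth, the digit-splitting above is equivalent to a single multiply; here is a sketch of the same millisecond-epoch computation in Swift (not part of the original plugin):

import Foundation

// Current unix epoch time in whole milliseconds.
func epochMilliseconds() -> Int64 {
    return Int64(Date().timeIntervalSince1970 * 1000)
}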

Related

Why does this EA (in early development) not produce an alert, or an error?

#property strict
string subfolder = "ipc\\";
int last_read = 0;
int t = 0;
struct trade_message
{
int time; // time
string asset; // asset
string direction; // direction
double open_price; // open
double stop_price; // open
double close_price;// open
float fraction; // fraction
string comment; // comment
string status; // status
};
trade_message messages[];
int OnInit()
{
int FH = FileOpen(subfolder+"processedtime.log",FILE_BIN);
if(FH >=0)
{
last_read = FileReadInteger(FH,4);
FileClose(FH);
}
return(INIT_SUCCEEDED);
}
void OnTick()
{
int FH=FileOpen(subfolder+"data.csv",FILE_READ|FILE_CSV, ","); //open file
int p=0;
while(!FileIsEnding(FH))
{
t = StringToInteger(FileReadString(FH));
if(t<=last_read)
{
break;
}
do
{
messages[p].time = t; // time
messages[p].direction = FileReadString(FH); // direction
messages[p].open_price = StringToDouble(FileReadString(FH)); // open
messages[p].stop_price = StringToDouble(FileReadString(FH)); // stop
messages[p].close_price = StringToDouble(FileReadString(FH)); // close
messages[p].fraction = StringToDouble(FileReadString(FH)); // fraction (?float)
messages[p].comment = FileReadString(FH); // comment
messages[p].status = FileReadString(FH); // status
Alert(messages[p].comment);
}
while(!FileIsLineEnding(FH));
p++;
Alert("P = ",p,"; Array length = ", ArraySize(messages));
}
FileClose(FH);
last_read = t;
FileDelete(subfolder+"processedtime.log");
FH = FileOpen(subfolder+"processedtime.log",FILE_BIN);
FileWriteInteger(FH,t,4);
FileClose(FH);
ArrayFree(messages);
}
The code is in the OnTick() handler only so I can test it before moving it out into its own function.
The data.csv file is:
Timestamp,Asset,Direction,Price,Stop,Profit,Fraction,Comment,Status
xxx,yyy,SHORT,13240,13240,13220,0.5,yyy SHORT 13240 - taken half at 13220 and stop to breakeven,U
xxx,yyy,SHORT,13240,13262,13040,1.0,55%,DP
The processedtime.log is not being created.
So there were 2 problems, both of my own making.
1. Not skipping the header row. Simply remedied by inserting
do f = FileReadString(FH);
while(!FileIsLineEnding(FH));
after FileOpen() (easier than altering my source for data.csv).
2. Forgetting about the asset field of my structure, which guaranteed my structure got out of sync with my lines! Remedied by adding messages[p].asset = FileReadString(FH); between messages[p].time = t; and messages[p].direction = FileReadString(FH);.

Why is the performance gap between GCD in Objective-C and Swift so large?

Objective-C: iPhone SE simulator, iOS 13
(60-80 seconds)
NSTimeInterval t1 = NSDate.date.timeIntervalSince1970;
NSInteger count = 100000;
for (NSInteger i = 0; i < count; i++) {
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
NSLog(#"op begin: %ld", i);
self.idx = i;
NSLog(#"op end: %ld", i);
if (i == count - 1) {
NSLog(#"耗时: %f", NSDate.date.timeIntervalSince1970-t1);
}
});
}
Swift: iPhone SE simulator, iOS 13
(10-14 seconds)
let t1 = Date().timeIntervalSince1970
let count = 100000
for i in 0..<count {
DispatchQueue.global().async {
print("subOp begin i:\(i), thred: \(Thread.current)")
self.idx = i
print("subOp end i:\(i), thred: \(Thread.current)")
if i == count-1 {
print("耗时: \(Date().timeIntervalSince1970-t1)")
}
}
}
My previous work was in Objective-C; I have only recently started using Swift, and I was surprised by such a wide performance gap for a common use of GCD.
tl;dr
You most likely are doing a debug build. In Swift, debug builds do all sort of safety checks. If you do a release build, it turns off those safety checks, and the performance is largely indistinguishable from the Objective-C rendition.
A few observations:
Neither of these is thread-safe in its interaction with idx: access to that property is never synchronized. If you're going to update a property from multiple threads, you have to synchronize your access (with locks, a serial queue, a reader-writer concurrent queue, etc.); a minimal sketch follows these observations.
In your Swift code, are you doing a release build? A debug build performs safety checks that the Objective-C rendition doesn't. Do a "release" build for both to compare apples to apples.
Your Objective-C rendition is using DISPATCH_QUEUE_PRIORITY_HIGH, but your Swift iteration is using a QoS of .default. I’d suggest using a QoS of .userInteractive or .userInitiated to be comparable.
Global queues are concurrent queues. So, checking i == count - 1 is not sufficient to know whether all of the current tasks are done. Generally we’d use dispatch groups or concurrentPerform/dispatch_apply to know when they’re done.
FWIW, dispatching 100,000 tasks to a global queue is not advised, because you’re going to quickly exhaust the worker threads. Don’t do this sort of thread explosion. You can have unexpected lockups in your app if you do that. In Swift we’d use concurrentPerform. In Objective-C we’d use dispatch_apply.
You’re doing NSLog in Objective-C and print in Swift. Those are not the same thing. I’d suggest doing NSLog in both if you want to compare performance.
When performance testing, I might suggest using unit tests’ measure routine, which repeats it a number of times.
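On the synchronization point above, a minimal sketch of one way to make the idx updates thread-safe with a serial queue (the type and queue label are illustrative, not from the original code):

import Foundation

final class Counter {
    private var _idx = 0
    // A private serial queue guards all access to _idx.
    private let syncQueue = DispatchQueue(label: "com.example.counter.sync")

    var idx: Int {
        get { syncQueue.sync { _idx } }             // reads are funnelled through the queue
        set { syncQueue.sync { _idx = newValue } }  // writes are serialized as well
    }
}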
Anyway, when correcting for all of this, the time for the Swift code was largely indistinguishable from the Objective-C performance.
Here are the routines I used. In Swift:
class SwiftExperiment {
// This is not advisable, because it suffers from thread explosion which will exhaust
// the very limited number of worker threads.
func experiment1(completion: @escaping (TimeInterval) -> Void) {
let t1 = Date()
let count = 100_000
let group = DispatchGroup()
for i in 0..<count {
DispatchQueue.global(qos: .userInteractive).async(group: group) {
NSLog("op end: %ld", i);
}
}
group.notify(queue: .main) {
let elapsed = Date().timeIntervalSince(t1)
completion(elapsed)
}
}
// This is safer (though it's a poor use of `concurrentPerform` as there's not enough
// work being done on each thread).
func experiment2(completion: @escaping (TimeInterval) -> Void) {
let t1 = Date()
let count = 100_000
DispatchQueue.global(qos: .userInteractive).async() {
DispatchQueue.concurrentPerform(iterations: count) { i in
NSLog("op end: %ld", i);
}
let elapsed = Date().timeIntervalSince(t1)
completion(elapsed)
}
}
}
And the equivalent Objective-C routines:
// This is not advisable, because it suffers from thread explosion which will exhaust
// the very limited number of worker threads.
- (void)experiment1:(void (^ _Nonnull)(NSTimeInterval))block {
NSDate *t1 = [NSDate date];
NSInteger count = 100000;
dispatch_group_t group = dispatch_group_create();
for (NSInteger i = 0; i < count; i++) {
dispatch_group_async(group, dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^{
NSLog(#"op end: %ld", i);
});
}
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
NSTimeInterval elapsed = [NSDate.date timeIntervalSinceDate:t1];
NSLog(#"耗时: %f", elapsed);
block(elapsed);
});
}
// This is safer (though it's a poor use of `dispatch_apply` as there's not enough
// work being done on each thread).
- (void)experiment2:(void (^ _Nonnull)(NSTimeInterval))block {
NSDate *t1 = [NSDate date];
NSInteger count = 100000;
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^{
dispatch_apply(count, dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^(size_t index) {
NSLog(#"op end: %ld", index);
});
NSTimeInterval elapsed = [NSDate.date timeIntervalSinceDate:t1];
NSLog(#"耗时: %f", elapsed);
block(elapsed);
});
}
And my unit tests:
class MyApp4Tests: XCTestCase {
func testSwiftExperiment1() throws {
let experiment = SwiftExperiment()
measure {
let e = expectation(description: "experiment1")
experiment.experiment1 { elapsed in
e.fulfill()
}
wait(for: [e], timeout: 1000)
}
}
func testSwiftExperiment2() throws {
let experiment = SwiftExperiment()
measure {
let e = expectation(description: "experiment2")
experiment.experiment2 { elapsed in
e.fulfill()
}
wait(for: [e], timeout: 1000)
}
}
func testObjcExperiment1() throws {
let experiment = ObjectiveCExperiment()
measure {
let e = expectation(description: "experiment1")
experiment.experiment1 { elapsed in
e.fulfill()
}
wait(for: [e], timeout: 1000)
}
}
func testObjcExperiment2() throws {
let experiment = ObjectiveCExperiment()
measure {
let e = expectation(description: "experiment2")
experiment.experiment2 { elapsed in
e.fulfill()
}
wait(for: [e], timeout: 1000)
}
}
}
That resulted in essentially indistinguishable timings for the Swift and Objective-C versions.

Strictly scheduled loop timing in Swift

What is the best way to schedule a repeated task with very strict timing (accurate and reliable enough for musical sequencing)? From the Apple docs, it is clear that NSTimer is not reliable in this sense (i.e., "A timer is not a real-time mechanism"). An approach that I borrowed from AudioKit's AKPlaygroundLoop seems consistent within about 4ms (if not quite accurate), and might be feasible:
class JHLoop: NSObject{
var trigger: Int {
return Int(60 * duration) // 60fps * t in seconds
}
var counter: Int = 0
var duration: Double = 1.0 // in seconds, but actual loop is ~1.017s
var displayLink: CADisplayLink?
weak var delegate: JHLoopDelegate?
init(dur: Double) {
duration = dur
}
func stopLoop() {
displayLink?.invalidate()
}
func startLoop() {
counter = 0
displayLink = CADisplayLink(target: self, selector: "update")
displayLink?.frameInterval = 1
displayLink?.addToRunLoop(NSRunLoop.currentRunLoop(), forMode: NSRunLoopCommonModes)
}
func update() {
if counter < trigger {
counter++
} else {
counter = 0
// execute loop here
NSLog("loop executed")
delegate!.loopBody()
}
}
}
protocol JHLoopDelegate: class {
func loopBody()
}
↑ Replaced code with the actual class I will try to use for the time being.
For reference, I am hoping to make a polyrhythmic drum sequencer, so consistency is most important. I will also need to be able to smoothly modify the loop, and ideally the looping period, in real time.
Is there a better way to do this?
You can try the mach_wait_until() API; it's quite good for a high-precision timer. I changed Apple's example from here a little, and it works fine in my command-line tool project. In the snippet below I renamed the main() method from my project to startLoop(). Also, see this.
Hope it helps.
static const uint64_t NANOS_PER_USEC = 1000ULL;
static const uint64_t NANOS_PER_MILLISEC = 1000ULL * NANOS_PER_USEC;
static const uint64_t NANOS_PER_SEC = 1000ULL * NANOS_PER_MILLISEC;
static mach_timebase_info_data_t timebase_info;
static uint64_t nanos_to_abs(uint64_t nanos) {
return nanos * timebase_info.denom / timebase_info.numer;
}
void startLoop(void) {
    while (1) {
        uint64_t nanosec = waitSomeTime(1000); // wait ~1 second per iteration
        NSLog(@"%llu", nanosec);
        update(); // call the needed update here
    }
}
uint64_t waitSomeTime(int64_t eachMillisec) {
uint64_t start;
uint64_t end;
uint64_t elapsed;
uint64_t elapsedNano;
if ( timebase_info.denom == 0 ) {
(void) mach_timebase_info(&timebase_info);
}
// Start the clock.
start = mach_absolute_time();
mach_wait_until(start + nanos_to_abs(eachMillisec * NANOS_PER_MILLISEC));
// Stop the clock.
end = mach_absolute_time();
// Calculate the duration.
elapsed = end - start;
elapsedNano = elapsed * timebase_info.numer / timebase_info.denom;
return elapsedNano;
}
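For completeness, a minimal Swift sketch of the same idea (the function names are illustrative; mach_absolute_time, mach_timebase_info and mach_wait_until are all available through Darwin):

import Darwin

// Convert nanoseconds into mach absolute-time units.
func nanosToAbs(_ nanos: UInt64, timebase: mach_timebase_info_data_t) -> UInt64 {
    return nanos * UInt64(timebase.denom) / UInt64(timebase.numer)
}

func startLoop(periodMs: UInt64) {
    var timebase = mach_timebase_info_data_t()
    mach_timebase_info(&timebase)
    let period = nanosToAbs(periodMs * 1_000_000, timebase: timebase)
    var deadline = mach_absolute_time() + period
    while true {
        mach_wait_until(deadline) // sleep until the absolute deadline
        deadline += period        // fixed deadlines avoid cumulative drift
        // do the periodic work here
    }
}

Scheduling against fixed absolute deadlines (rather than "now + period" each pass) keeps the loop from drifting, which matters for musical timing.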

How to encode and decode audio using opus

I am trying to integrate Opus into my application. The encode and decode functions return positive values, which should mean success, but the output audio can't be played. The raw (unencoded) audio data plays fine.
Here is how I encode the data. I use a 4-byte length prefix to separate the packets.
self.encoder = opus_encoder_create(24000, 1, OPUS_APPLICATION_VOIP, &opusError);
opus_encoder_ctl(self.encoder, OPUS_SET_BANDWIDTH(OPUS_BANDWIDTH_SUPERWIDEBAND));
- (void) encodeBufferList:(AudioBufferList *)bufferList {
BOOL success = TPCircularBufferProduceBytes(_circularBuffer, bufferList->mBuffers[0].mData, bufferList->mBuffers[0].mDataByteSize);
if (!success) {
NSLog(#"insufficient space in circular buffer!");
}
if (!_encoding) {
_encoding = YES;
dispatch_async(self.processingQueue, ^{
[self startEncodingLoop];
});
}
}
-(void)startEncodingLoop
{
int32_t availableBytes = 0;
opus_int16 *data = (opus_int16*)TPCircularBufferTail(_circularBuffer, &availableBytes);
int availableSamples = availableBytes / _inputASBD.mBytesPerFrame;
/*!
* Use dynamic duration
*/
// int validSamples[6] = {2.5, 5, 10, 20, 40, 60}; // in milisecond
// int esample = validSamples[0] * self.sampleRate / 1000;
// for (int i = 0; i < 6; i++) {
// int32_t samp = validSamples[i] * self.sampleRate / 1000;
// if (availableSamples < samp) {
// break;
// }
// esample = samp;
// }
/*!
* Use 20ms
*/
int esample = 20 * self.sampleRate / 1000;
if (availableSamples < esample) {
/*!
* Out of data. Finish encoding
*/
self.encoding = NO;
[self.eDelegate didFinishEncode];
return;
}
// printf("raw input value for packet \n");
// for (int i = 0; i < esample * self.numberOfChannels; i++) {
// printf("%d :", data[i]);
// }
int returnValue = opus_encode(_encoder, data, esample, _encoderOutputBuffer, 1000);
TPCircularBufferConsume(_circularBuffer, esample * sizeof(opus_int16) * self.numberOfChannels);
// printf("output encode \n");
// for (int i = 0; i < returnValue; i++) {
// printf("%d :", _encoderOutputBuffer[i]);
// }
NSMutableData *outputData = [NSMutableData new];
NSError *error = nil;
if (returnValue <= 0) {
error = [OKUtilities errorForOpusErrorCode:returnValue];
}else {
[outputData appendBytes:_encoderOutputBuffer length:returnValue * sizeof(unsigned char)];
unsigned char int_field[4];
int_to_char(returnValue , int_field);
NSData *header = [NSData dataWithBytes:&int_field[0] length:4 * sizeof(unsigned char)];
if (self.eDelegate) {
[self.eDelegate didEncodeWithData:header];
}
}
if (self.eDelegate) {
[self.eDelegate didEncodeWithData:outputData];
}
[self startEncodingLoop];
}
And here is the decode function:
self.decoder = opus_decoder_create(24000, 1, &opusError);
opus_decoder_ctl(self.decoder, OPUS_SET_SIGNAL(OPUS_SIGNAL_VOICE));
opus_decoder_ctl(self.decoder, OPUS_SET_GAIN(10));
-(void)startParseData:(unsigned char*)data remainingLen:(int)len
{
if (len <= 0) {
[self.dDelegate didFinishDecode];
return;
}
int headLen = sizeof(unsigned char) * 4;
unsigned char h[4];
h[0] = data[0];
h[1] = data[1];
h[2] = data[2];
h[3] = data[3];
int packetLen = char_to_int(h);
data += headLen;
packetLen = packetLen * sizeof(unsigned char) * self.numberOfChannels;
[self decodePacket:data length:packetLen remainingLen:len - headLen];
}
-(void)decodePacket:(unsigned char*)inputData length:(int)len remainingLen:(int)rl
{
int bw = opus_packet_get_bandwidth(inputData); //TEST: return OPUS_BANDWIDTH_SUPERWIDEBAND here
int32_t decodedSamples = 0;
// int validSamples[6] = {2.5, 5, 10, 20, 40, 60}; // in milisecond
/*!
* Use 60ms
*/
int esample = 60 * self.sampleRate / 1000;
// printf("input decode \n");
// for (int i = 0; i < len; i++) {
// printf("%d :", inputData[i]);
// }
_decoderBufferLength = esample * self.numberOfChannels * sizeof(opus_int16);
int returnValue = opus_decode(_decoder, inputData, len, _outputBuffer, esample, 1);
if (returnValue < 0) {
NSError *error = [OKUtilities errorForOpusErrorCode:returnValue];
NSLog(#"decode error %#", error);
inputData += len;
[self startParseData:inputData remainingLen:rl - len];
return;
}
decodedSamples = returnValue;
NSUInteger length = decodedSamples * self.numberOfChannels;
// printf("raw decoded data \n");
// for (int i = 0; i < length; i++) {
// printf("%d :", _outputBuffer[i]);
// }
NSData *audioData = [NSData dataWithBytes:_outputBuffer length:length * sizeof(opus_int16)];
if (self.dDelegate) {
[self.dDelegate didDecodeData:audioData];
}
inputData += len;
[self startParseData:inputData remainingLen:rl - len];
}
Please help me figure out what I am missing. An example would be great.
I think the problem is on the decode side:
You pass 1 as the fec argument to opus_decode(). This asks the decoder to generate the full packet duration's worth of data from error correction data in the current packet. I don't see any lost packet tracking in your code, so 0 should be passed instead. With that change your input and output duration should match.
You configure the decoder for mono output, but later use self.numberOfChannels in length calculations. Those should match or you may get unexpected behaviour.
OPUS_SET_SIGNAL doesn't do anything in opus_decoder_ctl(); it just returns OPUS_UNIMPLEMENTED without affecting behaviour.
Opus packets can be up to 120 ms in duration, so your limit of 60 ms could fail to decode some streams. If you're only talking to your own app that won't cause a problem the way you've configured it, since libopus defaults to 20ms frames.
I found what the problem is. I had set the audio format to float (kAudioFormatFlagIsPacked | kAudioFormatFlagIsFloat), so I should be using opus_encode_float and opus_decode_float instead of opus_encode and opus_decode.
As @Ralph says, we should use fec = 0 in opus_decode. Thanks to @Ralph.
One thing I notice is that you're treating the return value of opus_encode() as a number of samples encoded, when it's actually the number of bytes in the compressed packet. That means you're writing 50% or 75% garbage data from the end of _encoderOutputBuffer into your encoded stream.
Also make sure _encoderOutputBuffer has room for the hard-coded 1000 byte packet-length limit you're passing in.
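To make the framing point concrete, here is a minimal Swift sketch of 4-byte length-prefix framing where the prefix stores the byte count returned by opus_encode() (these helpers are illustrative, not the original int_to_char/char_to_int code):

import Foundation

// Prepend a 4-byte big-endian length to one encoded packet.
func frame(packet: Data) -> Data {
    var length = UInt32(packet.count).bigEndian
    var framed = Data(bytes: &length, count: 4)
    framed.append(packet)
    return framed
}

// Split a stream of framed packets back into individual packets.
func unframe(stream: Data) -> [Data] {
    let bytes = [UInt8](stream)
    var packets: [Data] = []
    var offset = 0
    while offset + 4 <= bytes.count {
        // Decode the big-endian 4-byte length prefix.
        let length = bytes[offset..<offset + 4].reduce(0) { ($0 << 8) | Int($1) }
        offset += 4
        guard offset + length <= bytes.count else { break }
        packets.append(Data(bytes[offset..<offset + length]))
        offset += length
    }
    return packets
}

The important detail, per the answer above, is that the stored length is the compressed byte count from opus_encode(), never a sample count.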

AudioConverter#FillComplexBuffer returns -50 and does not convert anything

I'm closely following this Xamarin sample (based on this Apple sample) to convert a LinearPCM file to an AAC file.
The sample works great, but in my project the FillComplexBuffer method returns error -50 and the InputData event is never triggered, so nothing is converted.
The error only appears when testing on a device. When testing on the simulator, everything works and I get a properly encoded AAC file at the end.
I tried a lot of things today, and I don't see any difference between my code and the sample code. Do you have any idea where this may come from?
I don't know if this is in any way related to Xamarin; it doesn't seem to be, since the Xamarin sample works great.
Here's the relevant part of my code:
protected void Encode(string path)
{
// In class setup. File at TempWavFilePath has DecodedFormat as format.
//
// DecodedFormat = AudioStreamBasicDescription.CreateLinearPCM();
// AudioStreamBasicDescription encodedFormat = new AudioStreamBasicDescription()
// {
// Format = AudioFormatType.MPEG4AAC,
// SampleRate = DecodedFormat.SampleRate,
// ChannelsPerFrame = DecodedFormat.ChannelsPerFrame,
// };
// AudioStreamBasicDescription.GetFormatInfo (ref encodedFormat);
// EncodedFormat = encodedFormat;
// Setup converter
AudioStreamBasicDescription inputFormat = DecodedFormat;
AudioStreamBasicDescription outputFormat = EncodedFormat;
AudioConverterError converterCreateError;
AudioConverter converter = AudioConverter.Create(inputFormat, outputFormat, out converterCreateError);
if (converterCreateError != AudioConverterError.None)
{
Console.WriteLine("Converter creation error: " + converterCreateError);
}
converter.EncodeBitRate = 192000; // AAC 192kbps
// get the actual formats back from the Audio Converter
inputFormat = converter.CurrentInputStreamDescription;
outputFormat = converter.CurrentOutputStreamDescription;
/*** INPUT ***/
AudioFile inputFile = AudioFile.OpenRead(NSUrl.FromFilename(TempWavFilePath));
// init buffer
const int inputBufferBytesSize = 32768;
IntPtr inputBufferPtr = Marshal.AllocHGlobal(inputBufferBytesSize);
// calc number of packets per read
int inputSizePerPacket = inputFormat.BytesPerPacket;
int inputBufferPacketSize = inputBufferBytesSize / inputSizePerPacket;
AudioStreamPacketDescription[] inputPacketDescriptions = null;
// init position
long inputFilePosition = 0;
// define input delegate
converter.InputData += delegate(ref int numberDataPackets, AudioBuffers data, ref AudioStreamPacketDescription[] dataPacketDescription)
{
// how much to read
if (numberDataPackets > inputBufferPacketSize)
{
numberDataPackets = inputBufferPacketSize;
}
// read from the file
int outNumBytes;
AudioFileError readError = inputFile.ReadPackets(false, out outNumBytes, inputPacketDescriptions, inputFilePosition, ref numberDataPackets, inputBufferPtr);
if (readError != 0)
{
Console.WriteLine("Read error: " + readError);
}
// advance input file packet position
inputFilePosition += numberDataPackets;
// put the data pointer into the buffer list
data.SetData(0, inputBufferPtr, outNumBytes);
// add packet descriptions if required
if (dataPacketDescription != null)
{
if (inputPacketDescriptions != null)
{
dataPacketDescription = inputPacketDescriptions;
}
else
{
dataPacketDescription = null;
}
}
return AudioConverterError.None;
};
/*** OUTPUT ***/
// create the destination file
var outputFile = AudioFile.Create (NSUrl.FromFilename(path), AudioFileType.M4A, outputFormat, AudioFileFlags.EraseFlags);
// init buffer
const int outputBufferBytesSize = 32768;
IntPtr outputBufferPtr = Marshal.AllocHGlobal(outputBufferBytesSize);
AudioBuffers buffers = new AudioBuffers(1);
// calc number of packet per write
int outputSizePerPacket = outputFormat.BytesPerPacket;
AudioStreamPacketDescription[] outputPacketDescriptions = null;
if (outputSizePerPacket == 0) {
// if the destination format is VBR, we need to get max size per packet from the converter
outputSizePerPacket = (int)converter.MaximumOutputPacketSize;
// allocate memory for the PacketDescription structures describing the layout of each packet
outputPacketDescriptions = new AudioStreamPacketDescription [outputBufferBytesSize / outputSizePerPacket];
}
int outputBufferPacketSize = outputBufferBytesSize / outputSizePerPacket;
// init position
long outputFilePosition = 0;
long totalOutputFrames = 0; // used for debugging
// write magic cookie if necessary
if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
{
outputFile.MagicCookie = converter.CompressionMagicCookie;
}
// loop to convert data
Console.WriteLine ("Converting...");
while (true)
{
// create buffer
buffers[0] = new AudioBuffer()
{
NumberChannels = outputFormat.ChannelsPerFrame,
DataByteSize = outputBufferBytesSize,
Data = outputBufferPtr
};
int writtenPackets = outputBufferPacketSize;
// LET'S CONVERT (it's about time...)
AudioConverterError converterFillError = converter.FillComplexBuffer(ref writtenPackets, buffers, outputPacketDescriptions);
if (converterFillError != AudioConverterError.None)
{
Console.WriteLine("FillComplexBuffer error: " + converterFillError);
}
if (writtenPackets == 0) // EOF
{
break;
}
// write to output file
int inNumBytes = buffers[0].DataByteSize;
AudioFileError writeError = outputFile.WritePackets(false, inNumBytes, outputPacketDescriptions, outputFilePosition, ref writtenPackets, outputBufferPtr);
if (writeError != 0)
{
Console.WriteLine("WritePackets error: {0}", writeError);
}
// advance output file packet position
outputFilePosition += writtenPackets;
if (FlowFormat.FramesPerPacket != 0) {
// the format has constant frames per packet
totalOutputFrames += (writtenPackets * FlowFormat.FramesPerPacket);
} else {
// variable frames per packet require doing this for each packet (adding up the number of sample frames of data in each packet)
for (var i = 0; i < writtenPackets; ++i)
{
totalOutputFrames += outputPacketDescriptions[i].VariableFramesInPacket;
}
}
}
// write out any of the leading and trailing frames for compressed formats only
if (outputFormat.BitsPerChannel == 0)
{
Console.WriteLine("Total number of output frames counted: {0}", totalOutputFrames);
WritePacketTableInfo(converter, outputFile);
}
// write the cookie again - sometimes codecs will update cookies at the end of a conversion
if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
{
outputFile.MagicCookie = converter.CompressionMagicCookie;
}
// Clean everything
Marshal.FreeHGlobal(inputBufferPtr);
Marshal.FreeHGlobal(outputBufferPtr);
converter.Dispose();
outputFile.Dispose();
// Remove temp file
File.Delete(TempWavFilePath);
}
I already saw this SO question, but its not-very-detailed C++/Obj-C answer doesn't seem to fit my problem.
Thanks!
I finally found the solution!
I just had to set the AVAudioSession category before converting the file.
AVAudioSession.SharedInstance().SetCategory(AVAudioSessionCategory.AudioProcessing);
AVAudioSession.SharedInstance().SetActive(true);
Since I also use an AudioQueue to RenderOffline, I must in fact set the category to AVAudioSessionCategory.PlayAndRecord so both the offline rendering and the audio converting work.
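For reference, a native Swift equivalent of the Xamarin calls above would look roughly like this (a sketch; per the note above, .playAndRecord covers both the offline render and the conversion):

import AVFoundation

func configureAudioSession() throws {
    // Set the session category before starting the conversion / offline render.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)
}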
