How to play MIDI with BASSMIDI? (iOS)

I'm trying to play a MIDI file (with the BASSMIDI plugin) but it's not working. I can play an MP3, but when I change the code to MIDI and build, there is no sound.
My code:
NSString *fileName = @"1";
NSString *fileType = @"mid";
BASS_Init(-1, 44100, 0, 0, 0); // initialize the default output device
NSString *respath = [[NSBundle mainBundle] pathForResource:fileName ofType:fileType]; // get the path of the audio file
HSTREAM stream = BASS_MIDI_StreamCreateFile(0, [respath UTF8String], 0, 0, BASS_SAMPLE_LOOP, 1);
BASS_ChannelPlay(stream, FALSE); // play the stream

For those who are interested in using the BASSMIDI player on iOS, we implemented an AUv3 Audio Unit wrapped around the BASSMIDI library.
The main advantage is that this audio unit can be inserted into a graph of audio nodes handled by AVAudioEngine (just as you would do with AVAudioUnitSampler).
The code is available on a public repository:
https://github.com/newzik/BassMidiAudioUnit
Feel free to use it!
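To give an idea of what that integration looks like, here is a rough sketch (not taken from the repository) of instantiating an AUv3 instrument and attaching it to an AVAudioEngine graph. The component description values below are placeholders; they would have to match whatever codes BassMidiAudioUnit actually registers.

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

static void AttachMidiSynthUnit(AVAudioEngine *engine)
{
    // Placeholder description: the real subType/manufacturer codes are whatever
    // the BassMidiAudioUnit app extension registers.
    AudioComponentDescription desc = {
        .componentType = kAudioUnitType_MusicDevice,
        .componentSubType = 'bsmd',       // placeholder
        .componentManufacturer = 'Nwzk',  // placeholder
        .componentFlags = 0,
        .componentFlagsMask = 0
    };

    [AVAudioUnit instantiateWithComponentDescription:desc
                                              options:kAudioComponentInstantiation_LoadOutOfProcess
                                    completionHandler:^(__kindof AVAudioUnit *unit, NSError *error) {
        if (unit == nil) {
            NSLog(@"AUv3 instantiation failed: %@", error);
            return;
        }
        [engine attachNode:unit];
        [engine connect:unit to:engine.mainMixerNode format:nil];

        NSError *startError = nil;
        [engine startAndReturnError:&startError];

        // MIDI events can then be sent to the unit (for example through its
        // AUAudioUnit's scheduleMIDIEventBlock), much like driving an
        // AVAudioUnitSampler.
    }];
}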

Related

iOS Swift playing audio (aac) from network stream

I'm developing an iOS application and I'm quite new to iOS development. So far I have implemented an H.264 decoder from a network stream using VideoToolbox, which was quite hard.
Now I need to play an audio stream that comes from the network, with no file involved: just a raw AAC stream read directly from the socket. This stream comes from the output of an ffmpeg instance.
The problem is that I don't know how to start with this; there seems to be little information about the topic. I have already tried AVAudioPlayer but got just silence. I think I first need to decompress the packets from the stream, just as with the H.264 decoder.
I have also been trying AVAudioEngine and AVAudioPlayerNode, but no success, same as with AVAudioPlayer. Can someone give me some guidance? Maybe AudioToolbox? AudioQueue?
Thank you very much for the help :)
Edit:
I'm playing around with AVAudioCompressedBuffer and getting no errors using AVAudioEngine and AVAudioNode. But I don't know what this output means:
inBuffer: <AVAudioCompressedBuffer@0x6040004039f0: 0/1024 bytes>
Does this mean that the buffer is empty? I have been trying to feed this buffer in several ways, but it always shows something like 0/1024. I think I'm not doing this right:
compressedBuffer.mutableAudioBufferList.pointee = audioBufferList
Any idea?
Thank you!
Edit 2:
I'm editing to show my code for decompressing the buffer. Maybe someone can point me in the right direction.
Note: the packet that is ingested by this function is actually passed without the ADTS header (9 bytes), but I have also tried passing it with the header.
func decodeCompressedPacket(packet: Data) -> AVAudioPCMBuffer {
    var packetCopy = packet
    var streamDescription: AudioStreamBasicDescription = AudioStreamBasicDescription.init(
        mSampleRate: 44100,
        mFormatID: kAudioFormatMPEG4AAC,
        mFormatFlags: UInt32(MPEG4ObjectID.AAC_LC.rawValue),
        mBytesPerPacket: 0,
        mFramesPerPacket: 1024,
        mBytesPerFrame: 0,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 0,
        mReserved: 0)
    let audioFormat = AVAudioFormat.init(streamDescription: &streamDescription)
    let compressedBuffer = AVAudioCompressedBuffer.init(format: audioFormat!, packetCapacity: 1, maximumPacketSize: 1024)

    print("packetCopy count: \(packetCopy.count)")

    var audioBuffer: AudioBuffer = AudioBuffer.init(mNumberChannels: 1, mDataByteSize: UInt32(packetCopy.count), mData: &packetCopy)
    var audioBufferList: AudioBufferList = AudioBufferList.init(mNumberBuffers: 1, mBuffers: audioBuffer)
    var mNumberBuffers = 1
    var packetSize = packetCopy.count

    // memcpy(&compressedBuffer.mutableAudioBufferList[0].mBuffers, &audioBuffer, MemoryLayout<AudioBuffer>.size)
    // memcpy(&compressedBuffer.mutableAudioBufferList[0].mBuffers.mDataByteSize, &packetSize, MemoryLayout<Int>.size)
    // memcpy(&compressedBuffer.mutableAudioBufferList[0].mNumberBuffers, &mNumberBuffers, MemoryLayout<UInt32>.size)
    // compressedBuffer.mutableAudioBufferList.pointee = audioBufferList

    var bufferPointer = compressedBuffer.data

    for byte in packetCopy {
        memset(compressedBuffer.mutableAudioBufferList[0].mBuffers.mData, Int32(byte), MemoryLayout<UInt8>.size)
    }

    print("mBuffers: \(compressedBuffer.audioBufferList[0].mBuffers.mNumberChannels)")
    print("mBuffers: \(compressedBuffer.audioBufferList[0].mBuffers.mDataByteSize)")
    print("mBuffers: \(compressedBuffer.audioBufferList[0].mBuffers.mData)")

    var uncompressedBuffer = uncompress(inBuffer: compressedBuffer)
    print("uncompressedBuffer: \(uncompressedBuffer)")
    return uncompressedBuffer
}
So you are right in thinking you will (most likely) need to decompress the packets received from the stream. The idea is to get them into raw PCM format so that they can be sent directly to the audio output. This way you can also apply any DSP / audio manipulation you want to the stream.
As you mentioned, you will probably want to look in the AudioQueue direction; the Apple docs provide a good example of streaming audio in real time, although it is in Objective-C (in this case I think it may be a good idea to do this in Objective-C). That is probably the best place to get started (interfacing the Objective-C with Swift is very simple).
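To make the AudioQueue idea a bit more concrete, here is a rough, untested sketch (mine, not from the Apple sample) of an output queue being fed raw AAC packets; the queue does the decoding itself, so no manual decompression is needed. The function names are made up for illustration.

#import <AudioToolbox/AudioToolbox.h>
#include <string.h>

static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    // The queue has finished with this buffer; a real implementation would
    // recycle it (put it back on a free list) rather than free it each time.
    AudioQueueFreeBuffer(inAQ, inBuffer);
}

// Create and start an output queue that accepts AAC packets directly.
static AudioQueueRef StartAACPlaybackQueue(double sampleRate, UInt32 channels)
{
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = sampleRate;
    asbd.mFormatID         = kAudioFormatMPEG4AAC;
    asbd.mChannelsPerFrame = channels;
    asbd.mFramesPerPacket  = 1024;   // AAC packets carry 1024 frames

    AudioQueueRef queue = NULL;
    AudioQueueNewOutput(&asbd, MyAQOutputCallback, NULL, NULL, NULL, 0, &queue);
    AudioQueueStart(queue, NULL);    // in practice, prime a few buffers before starting
    return queue;
}

// Call this for every AAC packet read from the socket (ADTS header stripped).
static void EnqueueAACPacket(AudioQueueRef queue, const void *bytes, UInt32 length)
{
    AudioQueueBufferRef buffer = NULL;
    AudioQueueAllocateBuffer(queue, length, &buffer);
    memcpy(buffer->mAudioData, bytes, length);
    buffer->mAudioDataByteSize = length;

    // Compressed (VBR) data needs a packet description per packet.
    AudioStreamPacketDescription desc = { .mStartOffset = 0,
                                          .mVariableFramesInPacket = 0,
                                          .mDataByteSize = length };
    AudioQueueEnqueueBuffer(queue, buffer, 1, &desc);
}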
Looking at it again in Swift, there is the AVAudioCompressedBuffer class, which seems to handle AAC for your case (you would not need to decode the AAC yourself if you get this to work); however, there is no direct method for setting the buffer's contents, as I believe it is intended to be just a storage container. There is a working example of someone using AVAudioCompressedBuffer along with an AVAudioFile (maybe you could buffer everything into files on background threads? I think it would be too much I/O overhead).
However, if you tackle this in Objective-C, there is a post on how to fill an AVAudioPCMBuffer (maybe it works with AVAudioCompressedBuffer too?) directly through memset (kind of disgusting, but at the same time lovely to me as an embedded programmer):
// make a silent stereo buffer
AVAudioChannelLayout *chLayout = [[AVAudioChannelLayout alloc] initWithLayoutTag:kAudioChannelLayoutTag_Stereo];
AVAudioFormat *chFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                           sampleRate:44100.0
                                                          interleaved:NO
                                                        channelLayout:chLayout];
AVAudioPCMBuffer *thePCMBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:chFormat frameCapacity:1024];
thePCMBuffer.frameLength = thePCMBuffer.frameCapacity;
for (AVAudioChannelCount ch = 0; ch < chFormat.channelCount; ++ch) {
    memset(thePCMBuffer.floatChannelData[ch], 0, thePCMBuffer.frameLength * chFormat.streamDescription->mBytesPerFrame);
}
I know this is a lot to take in, and none of it seems like a simple solution, but I think the Objective-C AudioQueue technique would be my first stop!
Hope this helps!

Set timestamp in CMSampleBuffer using AVAssetWriter not working

Hello, I'm working on an app that records video + audio. The video source is the camera and the audio comes from streaming. My problem happens when the connection to the stream is closed for some reason; in that case I switch the audio source to the built-in mic. The problem is that the audio is not synchronised at all. I would like to add a gap in my audio and then set the timestamps in real time according to the current video timestamp. It seems AVAssetWriter is appending the frames from the built-in mic consecutively, and it looks like it is ignoring the timestamp.
Do you know why AVAssetWriter is ignoring the timestamp?
EDIT:
This is the code that gets the latest video timestamp:
- (void)renderVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef renderedPixelBuffer = NULL;
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    self.lastVideoTimestamp = timestamp;
and this is the code I use to synchronise the audio coming from the built-in mic when the stream is disconnected:
// retime the buffer first, then release the original
CMSampleBufferRef adjustedBuffer = [self adjustTime:sampleBuffer by:self.lastVideoTimestamp];
CFRelease(sampleBuffer);
sampleBuffer = adjustedBuffer;
// Adjust CMSampleBuffer timing
- (CMSampleBufferRef)adjustTime:(CMSampleBufferRef)sample by:(CMTime)offset
{
    CMItemCount count;
    CMSampleBufferGetSampleTimingInfoArray(sample, 0, nil, &count);
    CMSampleTimingInfo *pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
    CMSampleBufferGetSampleTimingInfoArray(sample, count, pInfo, &count);
    for (CMItemCount i = 0; i < count; i++)
    {
        pInfo[i].decodeTimeStamp = kCMTimeInvalid; // CMTimeSubtract(pInfo[i].decodeTimeStamp, offset);
        pInfo[i].presentationTimeStamp = CMTimeSubtract(pInfo[i].presentationTimeStamp, offset);
    }
    CMSampleBufferRef sout;
    CMSampleBufferCreateCopyWithNewTiming(nil, sample, count, pInfo, &sout);
    free(pInfo);
    return sout;
}
This is what I want to do:
Video:
--------------------------------------------------------------------
Audio:   stream                       (disconnect)        built-in mic
-----------------------------------                    -----------------
I would like to get this: as you can see, there is a gap with no audio, because the audio coming from the stream was disconnected and you may not have received all of it.
What it is currently doing:
Video:
--------------------------------------------------------------------
Audio:   stream                       (disconnect)        built-in mic
--------------------------------------------------------------------

Long audio file not playing

I'm trying to play a really long audio file on iOS (about 10 minutes long) and it doesn't want to play. I've gotten other files to work with this code just fine, but this one doesn't.
Here's my code:
void SoundFinished(SystemSoundID snd, void *context) {
    NSLog(@"finished!");
    AudioServicesRemoveSystemSoundCompletion(snd);
    AudioServicesDisposeSystemSoundID(snd);
}

- (IBAction)lolol {
    NSURL *sndurl = [[NSBundle mainBundle] URLForResource:@"full video" withExtension:@"mp3"];
    SystemSoundID snd;
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)sndurl, &snd);
    AudioServicesAddSystemSoundCompletion(snd, nil, nil, SoundFinished, nil);
    AudioServicesPlaySystemSound(snd);
}
The file is too long for System Sound Services to play, as mentioned in the docs:
You can use System Sound Services to play short (30 seconds or shorter) sounds.
I suggest using an alternative way to play the file. Have a look at the AVAudioPlayer class to play the audio file.
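A minimal sketch of that suggestion (the property and method names here are mine, not from your code): note that the player must be kept in a strong reference, such as a property, otherwise it is deallocated and playback stops immediately.

#import <AVFoundation/AVFoundation.h>

// Declared in the class extension or interface:
// @property (nonatomic, strong) AVAudioPlayer *player;

- (IBAction)playLongFile
{
    NSURL *sndurl = [[NSBundle mainBundle] URLForResource:@"full video" withExtension:@"mp3"];
    NSError *error = nil;
    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:sndurl error:&error];
    if (self.player == nil) {
        NSLog(@"Failed to create player: %@", error);
        return;
    }
    [self.player prepareToPlay];
    [self.player play];   // no 30-second limit, unlike System Sound Services
}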

Getting or setting the audio format that AUGraphAddRenderNotify receives

Is it possible to set the audio format for an AUGraphAddRenderNotify callback? If not, is it possible just to see what the format is at init time?
I have a very simple AUGraph which plays audio from a kAudioUnitSubType_AudioFilePlayer to a kAudioUnitSubType_RemoteIO. I'm doing some live processing on the audio, so I've added an AUGraphAddRenderNotify callback to the graph to do it there. This all works fine, but when I initialise the graph I need to set up a couple of buffers and some other data for my processing, and I need to know what format will be supplied in the callback. (On some devices it's interleaved, on others it's not; this is fine, I just need to know.)
Here's the setup:
NewAUGraph(&audioUnitGraph);
AUNode playerNode;
AUNode outputNode;
AudioComponentDescription playerDescription = {
    .componentType = kAudioUnitType_Generator,
    .componentSubType = kAudioUnitSubType_AudioFilePlayer,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AudioComponentDescription outputDescription = {
    .componentType = kAudioUnitType_Output,
    .componentSubType = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AUGraphAddNode(audioUnitGraph, &playerDescription, &playerNode);
AUGraphAddNode(audioUnitGraph, &outputDescription, &outputNode);
AUGraphOpen(audioUnitGraph);
AUGraphNodeInfo(audioUnitGraph, playerNode, NULL, &playerAudioUnit);
AUGraphNodeInfo(audioUnitGraph, outputNode, NULL, &outputAudioUnit);
// Tried adding all manner of AudioUnitSetProperty() calls here to set the AU formats
AUGraphConnectNodeInput(audioUnitGraph, playerNode, 0, outputNode, 0);
AUGraphAddRenderNotify(audioUnitGraph, render, (__bridge void *)self);
AUGraphInitialize(audioUnitGraph);
// Some time later...
// - Set up audio file in the file player
// - Start the graph with AUGraphStart()
I can understand that altering the formats used by the two audio units may not have any effect on the format 'seen' at the point the AUGraph renders into its callback (as this is downstream of them), but surely there is a way to know at init time what that format will be?
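One way to at least inspect the negotiated format at init time is sketched below. This assumes the render-notify callback sees the output unit's input-scope format on bus 0, which is my assumption, not something stated in the original post.

// After AUGraphInitialize(audioUnitGraph), query the format on the RemoteIO
// unit's input scope, bus 0. This is the format the data is rendered into at
// the head of the graph, so it answers the interleaved/deinterleaved question.
AudioStreamBasicDescription renderFormat = {0};
UInt32 size = sizeof(renderFormat);
OSStatus err = AudioUnitGetProperty(outputAudioUnit,
                                    kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Input,
                                    0,
                                    &renderFormat,
                                    &size);
if (err == noErr) {
    BOOL interleaved = !(renderFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved);
    NSLog(@"render format: %.0f Hz, %u channels, interleaved: %d",
          renderFormat.mSampleRate,
          (unsigned)renderFormat.mChannelsPerFrame,
          interleaved);
}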

Chapter 4 of Learning Core Audio not working due to AudioQueueNewInput failing with 'fmt?'

I am trying to get the recording program from Chapter 4 of Learning Core Audio by Adamson and Avila to work. Both the version I typed in by hand and the unmodified version downloaded from the InformIT website fail in the same way. It always fails with this at the point of queue creation:
Error: AudioQueueNewInput failed ('fmt?')
Has anyone else tried this sample program on Mavericks and Xcode 5? Here's the code from the download site, up to the point of failure. When I tried LPCM with some hard-coded parameters it was OK, but I cannot get MPEG4AAC to work. Apple Lossless seems to work, though.
// Code from download
int main(int argc, const char *argv[])
{
    MyRecorder recorder = {0};
    AudioStreamBasicDescription recordFormat = {0};
    memset(&recordFormat, 0, sizeof(recordFormat));

    // Configure the output data format to be AAC
    recordFormat.mFormatID = kAudioFormatMPEG4AAC;
    recordFormat.mChannelsPerFrame = 2;

    // get the sample rate of the default input device
    // we use this to adapt the output data format to match hardware capabilities
    MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);

    // ProTip: Use the AudioFormat API to trivialize ASBD creation.
    // input: at least the mFormatID, however, at this point we already have
    // mSampleRate, mFormatID, and mChannelsPerFrame
    // output: the remainder of the ASBD will be filled out as much as possible
    // given the information known about the format
    UInt32 propSize = sizeof(recordFormat);
    CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL,
                                      &propSize, &recordFormat),
               "AudioFormatGetProperty failed");

    // create an input (recording) queue
    AudioQueueRef queue = {0};
    CheckError(AudioQueueNewInput(&recordFormat,      // ASBD
                                  MyAQInputCallback,  // callback
                                  &recorder,          // user data
                                  NULL,               // run loop
                                  NULL,               // run loop mode
                                  0,                  // flags (always 0)
                                  // &recorder.queue),   // output: reference to AudioQueue object
                                  &queue),
               "AudioQueueNewInput failed");
I faced the same problem. Check the sample rate: in your case it will be huge (96000). Just try setting it manually to 44100.
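For illustration (this is my own tweak to the book's code, not part of the original answer), the override goes right after the hardware sample rate query and before the rest of the ASBD is filled in:

// Override whatever the default input device reports (e.g. 96000 Hz) with a
// rate the AAC encoder accepts, before completing the ASBD and creating the queue.
MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);
recordFormat.mSampleRate = 44100.0;   // force a supported rate

UInt32 propSize = sizeof(recordFormat);
CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL,
                                  &propSize, &recordFormat),
           "AudioFormatGetProperty failed");
// ...then call AudioQueueNewInput() as before.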
