I'm trying to play audio with AudioToolbox in Xcode. I searched for information on this, but the documentation is sparse. I want to play NSData while I'm still downloading it from the internet.
I looked at this link: http://www.cocoawithlove.com/2008/09/streaming-and-playing-live-mp3-stream.html
and the Apple developer documentation, but I can't work out how to turn NSData (which is incomplete because it is still downloading) into something AudioToolbox can read.
From the Apple documentation I understand the structure that I must allocate:
typedef struct MyAQStruct {
    AudioFileID                   mAudioFile;
    AudioStreamBasicDescription   mDataFormat;
    AudioQueueRef                 mQueue;
    AudioQueueBufferRef           mBuffers[kNumberBuffers];
    SInt64                        mCurrentPacket;
    UInt32                        mNumPacketsToRead;
    AudioStreamPacketDescription *mPacketDescs;
    bool                          mDone;
} myAQStruct;
and the callback function that I have to implement for playback. But which of the variables in this struct represents the data? I would guess AudioStreamBasicDescription, or is it AudioFileID?
And how can I cast NSData into that type?
Currently I am using StreamingKit to play streaming mp3 files. I want to save 15 seconds of the audio and convert that audio to a video file to share on Instagram.
I believe I have to implement
[audioPlayer appendFrameFilterWithName:@"MyCustomFilter"
                                 block:^(UInt32 channelsPerFrame, UInt32 bytesPerFrame,
                                         UInt32 frameCount, void *frames)
{
    ...
}];
to save the audio, as mentioned here: https://stackoverflow.com/a/34326868/4110214
But I do not know how to do it.
Could someone please guide me on how to achieve this?
You could use that frame filter as a tap to save the PCM audio into files.
[audioPlayer appendFrameFilterWithName:@"MyCustomFilter"
                                 block:^(UInt32 channelsPerFrame, UInt32 bytesPerFrame,
                                         UInt32 frameCount, void *frames)
{
    ...
}];
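For what it's worth, a rough sketch of what the block body could do, assuming interleaved 16-bit PCM and a file handle opened before playback starts (pcmFile and the path are placeholders):

// Open a destination file once, before playback starts (path is an example).
FILE *pcmFile = fopen([[NSTemporaryDirectory()
        stringByAppendingPathComponent:@"capture.pcm"] UTF8String], "wb");

[audioPlayer appendFrameFilterWithName:@"MyCustomFilter"
                                 block:^(UInt32 channelsPerFrame, UInt32 bytesPerFrame,
                                         UInt32 frameCount, void *frames)
{
    // Append the raw interleaved frames exactly as they are played.
    if (pcmFile != NULL) {
        fwrite(frames, bytesPerFrame, frameCount, pcmFile);
    }
}];

To keep only 15 seconds you would stop writing once the accumulated frameCount reaches 15 times the stream's sample rate; turning that raw PCM into an Instagram-ready video is a separate step (AVAssetWriter can mux audio into a video file).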
I'm trying to play a really long audio file in iOS (about 10 minutes long) and it doesn't play. I've gotten other files to work with this code just fine, but this one just doesn't want to play.
Here's my code:
void SoundFinished(SystemSoundID snd, void *context) {
    NSLog(@"finished!");
    AudioServicesRemoveSystemSoundCompletion(snd);
    AudioServicesDisposeSystemSoundID(snd);
}

- (IBAction)lolol {
    NSURL *sndurl = [[NSBundle mainBundle] URLForResource:@"full video" withExtension:@"mp3"];
    SystemSoundID snd;
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)sndurl, &snd);
    AudioServicesAddSystemSoundCompletion(snd, nil, nil, SoundFinished, nil);
    AudioServicesPlaySystemSound(snd);
}
The file is too long for System Sound Services to play, as mentioned in the docs:
You can use System Sound Services to play short (30 seconds or shorter) sounds.
I suggest using a different API to play the file. Have a look at the AVAudioPlayer class.
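A minimal sketch of the AVAudioPlayer route, reusing the file from the question (ViewController is just an assumed class name; note that the player must be stored in a strong reference such as a property, otherwise ARC releases it before playback finishes):

#import <AVFoundation/AVFoundation.h>

@interface ViewController ()
// Keep a strong reference so the player survives until playback ends.
@property (strong, nonatomic) AVAudioPlayer *player;
@end

@implementation ViewController

- (IBAction)lolol {
    NSURL *sndurl = [[NSBundle mainBundle] URLForResource:@"full video"
                                            withExtension:@"mp3"];
    NSError *error = nil;
    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:sndurl error:&error];
    if (self.player == nil) {
        NSLog(@"player init failed: %@", error);
        return;
    }
    [self.player play];
}

@end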
Since there's very little (more like none, really) documentation on MTAudioProcessingTap, I'm using Apple's demo app from WWDC 2012.
I am trying to run an audio graph inside the MTAudioProcessingTap, so I need to set different stream formats for the units that require specific formats. But every time I try to use AudioUnitGetProperty to get the AudioUnit's ASBD, I get an EXC_BAD_ACCESS error.
Here's the relevant code, which results in EXC_BAD_ACCESS. You can try it yourself by downloading Apple's app and adding this to tap_PrepareCallback:
OSStatus status = noErr;
AudioStreamBasicDescription testStream;

// Set audio unit input/output stream format to processing format.
if (noErr == status)
{
    status = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input, 0, &testStream,
                                  sizeof(AudioStreamBasicDescription));
}
AudioUnitGetProperty takes a pointer to a UInt32 for its size argument; in your sample code you passed the size by value. Here is the declaration:
OSStatus AudioUnitGetProperty(AudioUnit            inUnit,
                              AudioUnitPropertyID  inID,
                              AudioUnitScope       inScope,
                              AudioUnitElement     inElement,
                              void                *outData,
                              UInt32              *ioDataSize);
You should be getting it like this:
AudioStreamBasicDescription testStream = {0};
UInt32 sizeTestStream = sizeof(AudioStreamBasicDescription);
OSStatus status = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Input, 0, &testStream,
                                       &sizeTestStream);
if (status) {
    // handle error
}
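Note that ioDataSize is an in/out parameter: on input it tells the audio unit how big your buffer is, and on return it holds the number of bytes actually written, which is why it has to be passed by address.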
I'm currently working on a VOIP project for iOS.
I use AudioUnits to get data from the mic and play sounds.
My main app is written in C# (Xamarin) and uses a C++ library for faster audio and codec processing.
To test the input/output chain, I'm currently testing recording & playback on the same device:
- store the mic audio data in a buffer in the recordingCallback
- play the data from the buffer in the playbackCallback
That works as expected; the voice quality is good.
I need to save the incoming audio data from the mic to a raw PCM file.
I have done that, but the resulting file only contains some short "beep" signals.
So my question is:
What audio settings do I need so that I hear my voice (real audio signals) in the resulting raw PCM file instead of short beep sounds?
Does anyone have an idea what could be wrong, or what I have to do to be able to replay the resulting PCM file correctly?
My current format settings are (C# code):
int framesPerPacket = 1;
int channelsPerFrame = 1;
int bitsPerChannel = 16;
int bytesPerFrame = bitsPerChannel / 8 * channelsPerFrame;
int bytesPerPacket = bytesPerFrame * framesPerPacket;
AudioStreamBasicDescription audioFormat = new AudioStreamBasicDescription()
{
    SampleRate = 8000,
    Format = AudioFormatType.LinearPCM,
    FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger
                | AudioFormatFlags.LinearPCMIsPacked
                | AudioFormatFlags.LinearPCMIsAlignedHigh,
    BitsPerChannel = bitsPerChannel,
    ChannelsPerFrame = channelsPerFrame,
    BytesPerFrame = bytesPerFrame,
    FramesPerPacket = framesPerPacket,
    BytesPerPacket = bytesPerPacket,
    Reserved = 0
};
Additional C# settings (shortened here, without error checking):
AVAudioSession session = AVAudioSession.SharedInstance();
NSError error = null;
session.SetCategory(AVAudioSession.CategoryPlayAndRecord, out error);
session.SetPreferredIOBufferDuration(Config.packetLength, out error);
session.SetPreferredSampleRate(Format.samplingRate,out error);
session.SetActive(true,out error);
My current recording callback, shortened to just the PCM file saving (C++ code):
OSStatus
NotSoAmazingAudioEngine::recordingCallback(void *inRefCon,
                                           AudioUnitRenderActionFlags *ioActionFlags,
                                           const AudioTimeStamp *inTimeStamp,
                                           UInt32 inBusNumber,
                                           UInt32 inNumberFrames,
                                           AudioBufferList *ioData)
{
    std::pair<BufferData*, int> bufferInfo = _sendBuffer.getNextEmptyBufferList();

    AudioBufferList *bufferList = new AudioBufferList();
    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0].mData = NULL; // NULL lets AudioUnitRender supply its own buffer

    OSStatus status = AudioUnitRender(_instance->_audioUnit, ioActionFlags, inTimeStamp,
                                      inBusNumber, inNumberFrames, bufferList);
    if (_instance->checkStatus(status))
    {
        if (fout != NULL) // fout is a "FILE*"
        {
            fwrite(bufferList->mBuffers[0].mData, sizeof(short),
                   bufferList->mBuffers[0].mDataByteSize / sizeof(short), fout);
        }
    }
    delete bufferList;
    return noErr;
}
Background info on why I need a raw PCM file:
To compress the audio data I'd like to use the Opus codec.
With the codec I have the problem that there is a tiny "tick" at the end of each frame:
with a frame size of 60 ms I can barely hear them, at 20 ms it's annoying, and at 10 ms frame sizes my own voice can't be heard because of the ticking (for the VOIP application I'm aiming for 10 ms frames).
I don't encode & decode in the callback functions (I encode/decode the data in the functions which transfer audio data from the "micbuffer" to the "playbuffer").
And every time the playbackCallback wants to play some data, there is a frame in my buffer.
I have also ruled out my Opus encoding/decoding functions as the error source: if I read PCM data from a raw PCM file, encode & decode it afterwards, and save it to a new raw PCM file, the ticking does not appear (if I play the result file with "Softe Audio Tools", the output audio is fine).
To find out what causes the ticking, I'd like to save the raw PCM data from the mic to a file for further investigation.
I found the solution myself:
My PCM player expected 44100 Hz stereo, but my file contained only 8000 Hz mono audio, so my saved file was played back roughly 10x too fast.
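Worth noting for anyone hitting the same thing: a raw .pcm file carries no format metadata, so whatever plays it has to guess (or be told) the sample rate and channel count. One way to avoid the guessing, sketched under the 8000 Hz / mono / 16-bit assumptions from the question (and assuming a little-endian host, which holds on iOS), is to prepend a minimal WAV header so players pick up the correct format:

#include <stdio.h>
#include <stdint.h>

// Write a minimal 44-byte WAV header for 16-bit linear PCM.
// dataBytes is the size of the PCM payload that follows the header.
static void writeWavHeader(FILE *f, uint32_t sampleRate,
                           uint16_t channels, uint32_t dataBytes)
{
    uint16_t bitsPerSample = 16;
    uint32_t byteRate      = sampleRate * channels * bitsPerSample / 8;
    uint16_t blockAlign    = channels * bitsPerSample / 8;
    uint32_t riffSize      = 36 + dataBytes; // everything after "RIFF" + size
    uint32_t fmtSize       = 16;             // size of the fmt chunk body
    uint16_t pcmFormat     = 1;              // 1 = linear PCM

    fwrite("RIFF", 1, 4, f); fwrite(&riffSize, 4, 1, f);
    fwrite("WAVE", 1, 4, f);
    fwrite("fmt ", 1, 4, f); fwrite(&fmtSize, 4, 1, f);
    fwrite(&pcmFormat, 2, 1, f);  fwrite(&channels, 2, 1, f);
    fwrite(&sampleRate, 4, 1, f); fwrite(&byteRate, 4, 1, f);
    fwrite(&blockAlign, 2, 1, f); fwrite(&bitsPerSample, 2, 1, f);
    fwrite("data", 1, 4, f); fwrite(&dataBytes, 4, 1, f);
}

// Usage: writeWavHeader(fout, 8000, 1, totalPcmBytes); then append the samples.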
I am trying to stream recordings from the mic on my iPhone in real time over NSStream.
I have callbacks for recording and playback. I am trying to read directly from the buffer using the following code:
SInt16 *intFromBuffer;
intFromBuffer = audioBufferList->mBuffers[0].mData;
When I try:
NSLog(@"0x%X", intFromBuffer);
I get, for instance: 0xA182E00
And now the important part, the playback callback:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    /**
     This is the reference to the object that owns the callback.
     */
    AudioProcessor *audioProcessor = (__bridge AudioProcessor *)inRefCon;

    // iterate over the incoming stream and copy to the output stream
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // find minimum size
        UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        intFromBuffer = (SInt16)([NSString stringWithFormat:@"0x%@", data.output]);
        memcpy(buffer.mData, intFromBuffer, size); // this works fine!
    }
    return noErr;
}
But now, how do I get this intFromBuffer to work over NSStream?
When I try to send it as an NSString, send it back as an NSString, and parse it back to SInt16, all I get is noise.
Any ideas? I'm running out of them...
OK guys, I figured it out.
The problem was related to string encoding. My server was sending UTF-8 encoded text, but my code was trying to interpret it as ASCII, which caused the problem.
Another problem I found in my code was the wrong data type. I was sending the audio as NSMutableData, but when I received it I converted it back to NSMutableData the wrong way, so my AudioProcessor could not access the actual data, which looked to the compiler like an NSString object, and it kept throwing an exception while creating frames from the bytes:
SInt16 *frames = (SInt16 *)[data.input bytes];
The last problem I have with my code is a very lossy transmission, so when data comes back from the server the quality is very poor, but I believe it's a buffer-size-related problem. I think I will figure it out soon.
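In case it helps anyone else: the string round-trip can be skipped entirely, since the samples are just binary data. A rough sketch, where outputStream, inputStream, and receivedData are assumed to already exist (an NSOutputStream, an NSInputStream, and an NSMutableData respectively):

// Sending side: write the PCM bytes directly; no string conversion at all.
AudioBuffer buffer = audioBufferList->mBuffers[0];
[outputStream write:(const uint8_t *)buffer.mData maxLength:buffer.mDataByteSize];

// Receiving side: accumulate whatever arrived, then treat it as SInt16 frames.
uint8_t recvBuf[4096];
NSInteger n = [inputStream read:recvBuf maxLength:sizeof(recvBuf)];
if (n > 0) {
    [receivedData appendBytes:recvBuf length:(NSUInteger)n];
    SInt16 *frames = (SInt16 *)[receivedData bytes];
    // ...hand frames to the AudioProcessor as before
}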
Thanks a lot to the StackOverflow community. I am new here, but I can see you are a big help, even with simple comments that point people in the right direction.
Thanks again.