Play decoded raw audio data on iPhone - iOS

I am developing a streaming application for iOS. I am receiving all the audio packets correctly and decoding them with ffmpeg, but now I am totally confused about how to play them on the iPhone. All the code I have found so far plays audio from a file, but in my case I have to play the audio packets as they arrive from the server, in order. I don't want to save the packets to a file first, so any code that helps me solve this problem is appreciated.
Thank you...

You'll need to use Core Audio to do this: either an AudioQueue or an AudioUnit. AudioQueue is better suited to streaming-type applications.
The classic sample - for AudioQueue - is Apple's SpeakHere.
Matt Gallagher also has some superb tutorials with sample code for streaming.
See Streaming MP3/AAC audio again.
If you want to go the AudioUnit route, see Using RemoteIO audio unit.
By basing your code on Matt Gallagher's sample, possibly also using SpeakHere, you should be able to play your decoded packets. See my other answers for how to play using a buffer rather than from a file.
Don't forget that this is quite advanced stuff. You'll need to be comfortable with buffers, pointers, etc. Make sure you understand frames, packets, etc. as well. Some pain in getting your audio out there is to be expected.
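As a rough illustration of the AudioQueue approach (a sketch of my own, not code from those tutorials): create an output queue for the PCM format your ffmpeg decoder produces, and refill each queue buffer from your own packet store in the output callback. The format, buffer sizes, and the PacketFIFO type below are all assumptions you would replace with your own.

import AudioToolbox

// Hypothetical FIFO that your network/decoder code fills with decoded PCM;
// read(into:maxBytes:) copies up to maxBytes into the destination and returns
// how many bytes it actually copied.
protocol PacketFIFO {
    func read(into destination: UnsafeMutableRawPointer, maxBytes: Int) -> Int
}

final class PCMPlayer {
    private var queue: AudioQueueRef?
    private let fifo: PacketFIFO

    // Assumed output format: 44.1 kHz, stereo, interleaved signed 16-bit PCM.
    // Match this to whatever ffmpeg actually hands you.
    private var format = AudioStreamBasicDescription(
        mSampleRate: 44_100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
        mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4,
        mChannelsPerFrame: 2, mBitsPerChannel: 16, mReserved: 0)

    init(fifo: PacketFIFO) { self.fifo = fifo }

    func start() {
        // Called whenever a queue buffer has finished playing and can be refilled.
        let callback: AudioQueueOutputCallback = { userData, queue, buffer in
            let player = Unmanaged<PCMPlayer>.fromOpaque(userData!).takeUnretainedValue()
            let capacity = Int(buffer.pointee.mAudioDataBytesCapacity)
            let copied = player.fifo.read(into: buffer.pointee.mAudioData, maxBytes: capacity)
            // In a real player, handle underrun here (e.g. fill with silence)
            // rather than enqueueing an empty buffer.
            buffer.pointee.mAudioDataByteSize = UInt32(copied)
            AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
        }

        let selfPointer = UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque())
        AudioQueueNewOutput(&format, callback, selfPointer, nil, nil, 0, &queue)
        guard let queue = queue else { return }

        // Prime a few buffers so playback has data immediately, then start.
        for _ in 0..<3 {
            var buffer: AudioQueueBufferRef?
            AudioQueueAllocateBuffer(queue, 8192, &buffer)
            if let buffer = buffer {
                buffer.pointee.mAudioDataByteSize =
                    UInt32(fifo.read(into: buffer.pointee.mAudioData, maxBytes: 8192))
                AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
            }
        }
        AudioQueueStart(queue, nil)
    }
}

Note that the callback runs on an internal AudioQueue thread, so whatever structure feeds it must be safe to read there while your network/decoder code writes to it.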

Related

Real time audio recording in Swift

I am building an application which needs to do real-time audio recording. I am using Swift for the project, so I am unable to use the Novocaine library (it contains some Obj-C++ code).
What I need is to get small chunks of the audio recording in real time, which I can process or send to my websocket. Is there a Swift library I can use to achieve this?
In addition to getting the live audio from the microphone, I also need to show a real time waveform.
Start recording
Get an event every few bytes of recorded data, so I can send those bytes to my websocket.
Show a waveform for the audio.
Let me know.
You do not need any third-party tools to get audio from the mic; it can be set up easily using AVAudioEngine. However, to minimise network traffic I suggest using LAME to compress the raw PCM audio stream into MP3.
Here you can find a project with minimal functionality for getting mic input and compressing it into MP3. In that example project the MP3 is stored in the Documents folder, so you can try it and listen to make sure it works.
From this point you can take the MP3 buffer and send it via your socket. You can also play with the LAME settings to change the quality, etc.
There is another branch called no-lame where the same functionality is implemented without LAME encoding. Look here.
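For the AVAudioEngine part on its own, a minimal Swift sketch looks roughly like the following (the buffer size and the send closure are my assumptions, not part of the linked project); each callback buffer is also a convenient unit for driving a waveform view.

import AVFoundation

// Sketch only: tap the mic with AVAudioEngine and hand each chunk to a `send`
// closure (your websocket / MP3 encoder / waveform code).
final class MicCapture {
    private let engine = AVAudioEngine()

    func start(send: @escaping (Data) -> Void) throws {
        // A real app must also configure/activate AVAudioSession with a record
        // category and obtain microphone permission before this point.
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        // ~4096 frames per callback; each buffer is a natural chunk for the
        // waveform view and for whatever you send over the socket.
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            guard let channel = buffer.floatChannelData?[0] else { return }
            let byteCount = Int(buffer.frameLength) * MemoryLayout<Float>.size
            send(Data(bytes: channel, count: byteCount))
        }

        engine.prepare()
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}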

How to play Audio using raw data in iOS?

I have been working on audio capture and playback, and I am trying to play audio using a buffer. I will get the buffer as a character pointer; I have to read from that buffer and play whatever is present in it. I came across AudioQueue, but I am not sure AudioQueue is the correct way to approach my task. Has anyone done this before? Please suggest some ideas on where to start.
Yes, AudioQueues are fine for playback and recording of LPCM audio data. You have a lot to learn about audio signals and the Core Audio APIs before you understand how this all works (an audio-signal and AudioQueue crash course is far too big for one SO answer).
Start with some AudioQueue examples and tutorials. Reserve a good amount of time.

Redirecting the playback output of an AVPlayer item

What I want to do is take the output samples of an AVAsset corresponding to an audio file (no video involved) and send them to an audio effect class that takes in a block of samples, and I want to be able to do this in real time.
I am currently looking at the AVFoundation class reference and programming guide, but I can't see a way to redirect the output of a player item and send it to my effect class, and from there send the transformed samples to an audio output (using AVAssetReaderAudioMixOutput?) and hear it from there. I see that the AVAssetReader class gives me a way to get a block of samples using
[myAVAssetReader addOutput:myAVAssetReaderTrackOutput];
[myAVAssetReaderTrackOutput copyNextSampleBuffer];
but the Apple documentation specifies that the AVAssetReader class is not intended for, and should not be used in, real-time situations. Does anybody have a suggestion on where to look, or whether I am taking the right approach?
The MTAudioProcessingTap is perfect for this. By leveraging an AVPlayer, you can avoid having to block the samples yourself with the AVAssetReaderOutput and then render them yourself in an Audio Queue or with an Audio Unit.
Instead, attach an MTAudioProcessingTap to the inputParameters of your AVAsset's audioMix, and you'll be given samples in blocks which are easy to then throw into an effect unit.
Another benefit from this is that it will work with AVAssets derived from URLs that can't always be opened by other Apple APIs (like Audio File Services), such as the user's iPod library. Additionally, you get all of the functionality like tolerance of audio interruptions that the AVPlayer provides for free, which you would otherwise have to implement by hand if you went with an AVAssetReader solution.
To set up a tap you have to set up some callbacks that the system invokes as appropriate during playback. Full code for such processing can be found at this tutorial here.
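To make the shape of that concrete, here is a rough sketch (my own, not the tutorial's code) of creating a tap whose process callback pulls the source audio and leaves room for your effect, and attaching it to the player item's audio mix; the PostEffects flag and in-place processing are assumptions.

import AVFoundation
import MediaToolbox

func makeTappedPlayerItem(url: URL) -> AVPlayerItem {
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: { _, clientInfo, tapStorageOut in tapStorageOut.pointee = clientInfo },
        finalize: { _ in },
        prepare: { _, _, _ in },
        unprepare: { _ in },
        process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
            // Pull the decoded samples for this block, then modify them in place.
            let status = MTAudioProcessingTapGetSourceAudio(
                tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut)
            guard status == 0 else { return }
            // ... run your effect over bufferListInOut here ...
        })

    var tap: Unmanaged<MTAudioProcessingTap>?
    MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                               kMTAudioProcessingTapCreationFlag_PostEffects, &tap)

    let asset = AVURLAsset(url: url)
    let item = AVPlayerItem(asset: asset)
    if let track = asset.tracks(withMediaType: .audio).first, let tap = tap {
        let params = AVMutableAudioMixInputParameters(track: track)
        params.audioTapProcessor = tap.takeRetainedValue()
        let mix = AVMutableAudioMix()
        mix.inputParameters = [params]
        item.audioMix = mix
    }
    return item
}

The process callback runs on a real-time audio thread, so keep the effect free of allocation and locking and do heavy setup in the prepare callback.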
There's a new MTAudioProcessingTap object in iOS 6 and OS X 10.8. Check out the Session 517 video from WWDC 2012 - they demonstrate exactly what you want to do.
WWDC Link
AVAssetReader is not ideal for realtime usage because it handles the decoding for you, and in various cases copyNextSampleBuffer can block for random amounts of time.
That being said, AVAssetReader can be used wonderfully well in a producer thread feeding a circular buffer. It depends on your required usage, but I've had good success using this method to feed a RemoteIO output, and doing my effects/signal processing in the RemoteIO callback.
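A sketch of the producer side of that arrangement (my own illustration, with the ring buffer left hypothetical): decode the track to LPCM with AVAssetReader on a background thread and push the raw bytes into a ring buffer that the RemoteIO render callback later drains.

import AVFoundation

func produceSamples(from asset: AVAsset) throws {
    guard let track = asset.tracks(withMediaType: .audio).first else { return }
    let reader = try AVAssetReader(asset: asset)
    // Only the format ID is pinned here; you can also specify bit depth, endianness,
    // interleaving, etc. in the output settings if the consumer needs a fixed layout.
    let output = AVAssetReaderTrackOutput(track: track,
                                          outputSettings: [AVFormatIDKey: kAudioFormatLinearPCM])
    reader.add(output)
    reader.startReading()

    while reader.status == .reading,
          let sampleBuffer = output.copyNextSampleBuffer(),
          let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) {
        let length = CMBlockBufferGetDataLength(blockBuffer)
        var bytes = [UInt8](repeating: 0, count: length)
        CMBlockBufferCopyDataBytes(blockBuffer, atOffset: 0, dataLength: length, destination: &bytes)
        // ringBuffer.write(bytes)   // hypothetical ring buffer; block here if it is full
    }
}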

Record audio iOS

How does one record audio using iOS? Not input recording from the microphone - I want to be able to capture/record the audio currently playing within my app.
So, e.g., I start a recording session, and any sound that plays within my app only, I want to record to a file.
I have done research on this but I am confused about what to use, as it looks like mixing audio frameworks can cause problems.
I just want to be able to capture and save the audio playing within my application.
Well, if you're looking to record just the audio that YOUR app produces, then yes, this is very much possible.
What isn't possible, is recording all audio that is output through the speaker.
(EDIT: I just want to clarify that there is no way to record audio output produced by other applications. You can only record the audio samples that YOU produce).
If you want to record your app's audio output, you must use the RemoteIO audio unit (http://atastypixel.com/blog/using-remoteio-audio-unit/).
All you would really need to do is copy the playback buffer after you fill it.
For example:
memcpy(destinationBuffer, ioData->mBuffers[0].mData, numberOfBytesToCopy);  // copy the just-rendered samples out of the playback buffer
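In Swift, the same idea can be sketched as a render-notify callback registered on the RemoteIO unit with AudioUnitAddRenderNotify (this is my own sketch of that pattern; the unit variable and the destination for the copied bytes are yours to supply):

import AudioToolbox

// Register with: AudioUnitAddRenderNotify(remoteIOUnit, renderNotify, nil)
let renderNotify: AURenderCallback = { _, ioActionFlags, _, _, _, ioData in
    // Only act on the post-render pass, i.e. after the playback buffer has been filled.
    guard ioActionFlags.pointee.contains(.unitRenderAction_PostRender),
          let ioData = ioData,
          let samples = ioData.pointee.mBuffers.mData else { return noErr }
    let byteCount = Int(ioData.pointee.mBuffers.mDataByteSize)
    // Copy `byteCount` bytes from `samples` into your own recording buffer here
    // (the memcpy shown above), ideally a lock-free ring buffer drained off this thread.
    _ = (samples, byteCount)
    return noErr
}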
This is possible by wrapping the Core Audio public utility class CAAudioUnitOutputCapturer.
http://developer.apple.com/library/mac/#samplecode/CoreAudioUtilityClasses/Introduction/Intro.html
See my reply in this question for the wrapper classes.
You will need to use Objective-C++ properly, since the utility classes are C++.
There is no public API for capturing or recording all generic audio output from an iOS app.
Check out the MixerHostAudio sample application from Apple. It's a great way to start learning about Audio Units. Once you have a grasp of that, there are many tutorials online that talk about adding recording.

How do you write audio to the first frame with AVAssetWriter while capturing video/audio on iOS?

Long story short, I am trying to implement a naive solution for streaming video from the iOS camera/microphone to a server.
I am using AVCaptureSession with audio and video AVCaptureOutputs, and then using AVAssetWriter/AVAssetWriterInput to capture video and audio in the captureOutput:didOutputSampleBuffer:fromConnection method and write the resulting video to a file.
To make this a stream, I am using an NSTimer to break the video files into 1 second chunks (by hot-swapping in a different AVAssetWriter that has a different outputURL) and upload these to a server over HTTP.
This is working, but the issue I'm running into is this: the beginning of each .mp4 file always appears to be missing audio in the first frame, so when the video files are concatenated on the server (running ffmpeg) there is a noticeable audio skip at the joins between files. The video is just fine - no skipping.
I tried many ways of making sure there were no CMSampleBuffers dropped and checked their timestamps to make sure they were going to the right AVAssetWriter, but to no avail.
I checked the AVCam example (with AVCaptureMovieFileOutput) and the AVCaptureLocation example (with AVAssetWriter), and it appears the files they generate do the same thing.
Maybe there is something fundamental I am misunderstanding here about the nature of audio/video files, as I'm new to video/audio capture - but I thought I'd check before trying to work around this by learning to use ffmpeg, as some seem to do, to fragment the stream (if you have any tips on this, too, let me know!). Thanks in advance!
I had the same problem and solved it by recording the audio with a different API, Audio Queue Services. That seems to fix it; you just need to take care of timing in order to avoid a sound delay.
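A bare-bones sketch of what that looks like (the format is my assumption; the point is that the input callback hands you the PCM bytes plus an AudioTimeStamp you can use to line the audio up with whatever you are writing):

import AudioToolbox

final class MicQueue {
    private var queue: AudioQueueRef?
    // Assumed format: 44.1 kHz, mono, signed 16-bit PCM.
    private var format = AudioStreamBasicDescription(
        mSampleRate: 44_100, mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
        mBytesPerPacket: 2, mFramesPerPacket: 1, mBytesPerFrame: 2,
        mChannelsPerFrame: 1, mBitsPerChannel: 16, mReserved: 0)

    func start() {
        let callback: AudioQueueInputCallback = { _, queue, buffer, startTime, _, _ in
            // buffer.pointee.mAudioData / mAudioDataByteSize hold the recorded PCM;
            // `startTime` is what you use to timestamp these samples for your writer
            // so the start of each file segment is not missing audio.
            AudioQueueEnqueueBuffer(queue, buffer, 0, nil)   // hand the buffer back
        }
        AudioQueueNewInput(&format, callback, nil, nil, nil, 0, &queue)
        guard let queue = queue else { return }
        for _ in 0..<3 {                                     // a few in-flight buffers
            var buffer: AudioQueueBufferRef?
            AudioQueueAllocateBuffer(queue, 4096, &buffer)
            if let buffer = buffer { AudioQueueEnqueueBuffer(queue, buffer, 0, nil) }
        }
        AudioQueueStart(queue, nil)
    }
}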
