I'd like to know if it's possible to record audio playback into a file on iOS without using the microphone. In other words, is it possible to capture the audio playback's "raw data" into a file?
I tried AVAudioRecorder, but there is no option to record without using the microphone. Any ideas? Thanks.
Yes!
Your question is too generic, and the above answers it.
But, more specifically: as with most things, there are several possible approaches, and the final solution will depend on your needs. For example, if you're constrained to (or have a preference for) playing via AVPlayer, you can use an MTAudioProcessingTap (see Apple's AudioTapProcessor example) to gain access to the raw data. Similarly, you can build an AudioGraph and use an AudioUnit as the player, along with Audio File Services, to both read (which gives you access to the raw data) and write the file. There are various alternatives depending on your needs and, in some cases, your preferences. Providing some code, or a better explanation of your needs, might solicit a more comprehensive answer.
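To make the MTAudioProcessingTap route a bit more concrete, here is a rough Swift sketch (not code from the question, and the media URL and the actual file-writing step are placeholders you'd supply yourself):

```swift
import AVFoundation
import MediaToolbox

// Rough sketch: attach an MTAudioProcessingTap to an AVPlayerItem so the process
// callback sees the raw PCM that AVPlayer is about to play.
let tapProcess: MTAudioProcessingTapProcessCallback = { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
    // Pull this slice of source audio; bufferListInOut then holds the raw samples.
    let status = MTAudioProcessingTapGetSourceAudio(
        tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut)
    guard status == noErr else { return }
    // Write bufferListInOut out to disk here (e.g. with ExtAudioFileWriteAsync).
}

var callbacks = MTAudioProcessingTapCallbacks(
    version: kMTAudioProcessingTapCallbacksVersion_0,
    clientInfo: nil,
    init: nil, finalize: nil, prepare: nil, unprepare: nil,
    process: tapProcess)

var tapOut: Unmanaged<MTAudioProcessingTap>?
MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                           kMTAudioProcessingTapCreationFlag_PostEffects, &tapOut)

let mediaURL = URL(fileURLWithPath: "/path/to/song.m4a")   // placeholder
let asset = AVURLAsset(url: mediaURL)
let item = AVPlayerItem(asset: asset)
if let track = asset.tracks(withMediaType: .audio).first, let tap = tapOut {
    let params = AVMutableAudioMixInputParameters(track: track)
    params.audioTapProcessor = tap.takeRetainedValue()
    let mix = AVMutableAudioMix()
    mix.inputParameters = [params]
    item.audioMix = mix
}
let player = AVPlayer(playerItem: item)
player.play()
```

The tap only sees audio going through that AVPlayerItem; it does not capture other apps' audio or the system mix.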
Related
I'm trying to find a way to get the average power level for a channel of the audio that comes out of an embedded video. I'm using YouTube's iOS helper library for embedding the video: https://developers.google.com/youtube/v3/guides/ios_youtube_helper
A lot of the answers I've found on Stack Overflow refer to AVAudioPlayer, but that's not my case. I also looked in the docs of the AudioKit framework for something that could give me the output level of the current audio, but I couldn't find anything related; maybe I missed something there. I also looked in the EZAudio framework, even though it's deprecated, and I couldn't find anything that relates to my case either.
My line of thinking was to find a way to get the actual level coming out of the device, but I found one answer on SO saying this isn't allowed on iOS, although it didn't cite any source for that statement.
https://stackoverflow.com/a/12664340/4711172
So, any help would be much appreciated.
When using the public APIs permitted in the Apple App Store, the iOS security sandbox blocks apps from seeing the device's digital audio output stream, or any other app's internal audio output (unless it is explicitly shared, e.g. via Inter-App Audio).
(Just a guess, but this was probably implemented in iOS originally to prevent apps from capturing samples of DRM'd music and/or recording phone conversations.)
This might be a bit off/weird, but just in case -
Have you considered closing the loop? Meaning: record the incoming audio using AVAudioRecorder and get the audio levels from there?
See Apple's documentation for AVAudioRecorder (in the overview they specify: "Obtain input audio-level data that you can use to provide level metering").
AVAudioRecorder documentation
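For what it's worth, the metering part is only a few lines. A rough Swift sketch follows (the file URL, format settings, and polling mechanism are just placeholder choices); keep in mind this meters the microphone input, not the device's mixed output:

```swift
import AVFoundation

// Rough sketch: record from the mic (after obtaining record permission) and poll its level meters.
func startMeteredRecording() throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord)
    try session.setActive(true)

    let url = FileManager.default.temporaryDirectory.appendingPathComponent("meter.caf")
    let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatAppleIMA4),
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.isMeteringEnabled = true      // must be enabled before the metering calls return data
    recorder.record()
    return recorder
}

// Call periodically, e.g. from a Timer:
func currentLevel(of recorder: AVAudioRecorder) -> Float {
    recorder.updateMeters()
    return recorder.averagePower(forChannel: 0)   // dBFS: 0 is full scale, more negative is quieter
}
```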
I am using the ExtAudioFile interface to decode audio, and would like to know whether it ever makes use of hardware-assisted audio decoding.
The docs for the kExtAudioFileProperty_CodecManufacturer property say:
Use this property in iOS to choose between a hardware or software encoder, by specifying kAppleHardwareAudioCodecManufacturer or kAppleSoftwareAudioCodecManufacturer.
This seems to indicate that ExtAudioFile can indeed make use of decoding hardware.
Elsewhere, in the Audio Format Services docs, I find:
Hardware-based codecs can be used only when playing or recording using Audio Queue Services or using interfaces, such as AV Foundation, which use Audio Queue Services. In particular, you cannot use hardware-based audio codecs with OpenAL or when using the I/O audio unit.
... which is not completely clear; if ExtAudioFile uses Audio Queue Services in its implementation, then maybe it can make use of hardware, but we don't actually know how it is implemented.
I tried to test at runtime whether the hardware was being used, but this itself proved difficult. One method, given in the Audio Format Services reference, is to use AudioFormatGetProperty to test the kAudioFormatProperty_HardwareCodecCapabilities property. But the sample code does not work, always returning kAudioFormatUnsupportedPropertyError. (After searching online, I found a few other questions from people who have this problem, but no reports of ever using it successfully.)
So... I'm wondering if anyone knows of any way to test for certain whether the hardware decoder is currently active (in which case I could test for myself whether ExtAudioFile is using it). Or alternatively if anyone has any definitive knowledge of whether ExtAudioFile does use hardware (not based solely on the vague mentions in the Apple docs).
Specifying kAppleHardwareAudioCodecManufacturer with ExtAudioFileSetProperty() appears to enable hardware decoding, because it fails with kAudioConverterErr_HardwareInUse when there is already one audio file open with that codec set, and with kAudioQueueErr_InvalidCodecAccess when the audio session category is set to one that does not (according to the documentation) enable hardware decoding.
However, after profiling with Instruments, I found that performance was slightly worse with hardware decoding enabled, which I still can't explain...
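For anyone who wants to reproduce this, here's roughly what setting the property looks like in Swift; the file path is a placeholder, and the hardware codec constants are only meaningful on older devices/iOS versions that actually shipped hardware codecs:

```swift
import AudioToolbox

// Sketch: open a compressed file and ask ExtAudioFile to use the hardware codec.
var extFile: ExtAudioFileRef?
let url = URL(fileURLWithPath: "/path/to/file.m4a") as CFURL   // placeholder path
var status = ExtAudioFileOpenURL(url, &extFile)

if status == noErr, let file = extFile {
    var codec = UInt32(kAppleHardwareAudioCodecManufacturer)
    status = ExtAudioFileSetProperty(file,
                                     kExtAudioFileProperty_CodecManufacturer,
                                     UInt32(MemoryLayout.size(ofValue: codec)),
                                     &codec)
    // kAudioConverterErr_HardwareInUse here means another open file already holds the hardware codec.
    // Set kExtAudioFileProperty_ClientDataFormat *after* this call, then read as usual.
}
```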
Hi guys, I'm working on some audio services on iOS. I'm trying to find examples or tutorials about how an audio service or stream can read an existing audio file, process it (e.g. apply a filter), and then write the result to another file. Is there anybody who can help me?
Dirac3LE (by Stephan M. Bernsee) is a great library for this job. There are examples and a manual included in the download. It is particularly intended for time and pitch manipulation, but in your case you'll be interested in its EAFRead and EAFWrite classes.
If you want to get familiar with a lower-level API that you can also use for microphone input and sound output, and that lets you get raw samples in and out, I would suggest taking a look at Audio Queue Services.
I used it in a side project to get audio from the microphone, and I also wrote some code you might find useful for doing fast, vectorized, FFT-based FIR filtering on input audio. You can find the code here: https://github.com/jamescarlson/FreeAPRS
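If you do go the Audio Queue Services route, the skeleton for an input queue is fairly small. A rough Swift sketch, assuming 16-bit mono PCM at 44.1 kHz (buffer count/size and the processing step are placeholder choices):

```swift
import AudioToolbox

// Rough sketch: capture raw PCM from the microphone with an Audio Queue input queue.
var format = AudioStreamBasicDescription(
    mSampleRate: 44_100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    mBytesPerPacket: 2, mFramesPerPacket: 1, mBytesPerFrame: 2,
    mChannelsPerFrame: 1, mBitsPerChannel: 16, mReserved: 0)

let inputCallback: AudioQueueInputCallback = { _, queue, buffer, _, _, _ in
    // buffer.pointee.mAudioData holds mAudioDataByteSize bytes of raw samples;
    // hand them to your filter / FFT stage here, then put the buffer back in rotation.
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

var queue: AudioQueueRef?
AudioQueueNewInput(&format, inputCallback, nil, nil, nil, 0, &queue)
if let queue = queue {
    for _ in 0..<3 {                            // a few in-flight buffers is a common choice
        var buffer: AudioQueueBufferRef?
        AudioQueueAllocateBuffer(queue, 8192, &buffer)
        if let buffer = buffer {
            AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
        }
    }
    AudioQueueStart(queue, nil)
}
```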
I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and play back the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades, etc.), which is based on OpenAL. From my research on the net it seems I have two options:
Keep a record of the user's inputs (i.e. which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
Hack together my own low-level sound engine using Audio Units, specifically RemoteIO, so that I manually mix all the sounds and populate the final output buffer by hand, and hence can save said buffer to a file. This could then be shared by email etc.
I have implemented a RemoteIO callback for rendering the output buffer in the hope that it would give me the previously played data in the buffer, but alas the buffer is always all zeros.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after I wrote tons of code and read lots of documentation for third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. On top of that, it's not that hard to set up and work with. If you are looking for a library to do most of the work for you, though, you should look for one written on top of Core Audio, not OpenAL.
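To expand on that a little: once your own RemoteIO/output unit is doing the final mix (rather than OpenAL's internal one), you can observe exactly what is being sent to the speakers with a render-notify callback. A rough Swift sketch, where "ioUnit" stands for the output AudioUnit of your own graph:

```swift
import AudioToolbox

// Rough sketch: observe the final output buffers of your own RemoteIO unit.
let renderNotify: AURenderCallback = { _, ioActionFlags, _, _, _, ioData in
    // Only the post-render pass carries the finished mix in ioData.
    if ioActionFlags.pointee.contains(.unitRenderAction_PostRender), let abl = ioData {
        for buffer in UnsafeMutableAudioBufferListPointer(abl) {
            // buffer.mData / buffer.mDataByteSize is the raw output audio; copy it into a
            // ring buffer here and write it to disk (e.g. ExtAudioFileWrite) off this thread.
            _ = buffer
        }
    }
    return noErr
}

// ioUnit: the RemoteIO AudioUnit you created with AudioComponentInstanceNew or an AUGraph.
func attachOutputTap(to ioUnit: AudioUnit) {
    let status = AudioUnitAddRenderNotify(ioUnit, renderNotify, nil)
    assert(status == noErr)
}
```

Note this only works for audio rendered through a unit you own; it will not capture what OpenAL pushes through its own internal output unit.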
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.
Hi, I want to read raw audio and then perform some operations on it. What is the best way to do this?
Audio File Services, Audio Converter Services, and Extended Audio File Services, all in Core Audio. AV Foundation + Core Media (specifically AVAssetReader) may also be an option, but it's really new, and therefore even less documented and less well understood than Core Audio at this point.
If you are looking for sample code, "Audio Graph" is a good starting point. The developer has provided a bit of his own documentation that will help you quite a bit.
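If you end up on the Extended Audio File Services path, the read side looks roughly like this in Swift. It's only a sketch; the input URL and the 44.1 kHz stereo 16-bit client format are assumptions, and error handling is minimal:

```swift
import AudioToolbox

// Rough sketch: decode any supported file to interleaved 16-bit PCM and process the raw samples.
func processRawAudio(at inputURL: URL) {
    var fileRef: ExtAudioFileRef?
    guard ExtAudioFileOpenURL(inputURL as CFURL, &fileRef) == noErr,
          let file = fileRef else { return }
    defer { ExtAudioFileDispose(file) }

    // Ask ExtAudioFile to convert to this client format, whatever the file holds on disk.
    var clientFormat = AudioStreamBasicDescription(
        mSampleRate: 44_100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
        mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4,
        mChannelsPerFrame: 2, mBitsPerChannel: 16, mReserved: 0)
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                            UInt32(MemoryLayout.size(ofValue: clientFormat)), &clientFormat)

    let framesPerRead: UInt32 = 4096
    var samples = [Int16](repeating: 0, count: Int(framesPerRead) * 2)
    while true {
        var frames = framesPerRead
        let done: Bool = samples.withUnsafeMutableBytes { raw in
            var bufferList = AudioBufferList(
                mNumberBuffers: 1,
                mBuffers: AudioBuffer(mNumberChannels: 2,
                                      mDataByteSize: UInt32(raw.count),
                                      mData: raw.baseAddress))
            let status = ExtAudioFileRead(file, &frames, &bufferList)
            return status != noErr || frames == 0
        }
        if done { break }
        // "samples" now holds frames * 2 interleaved raw PCM values: filter/analyze them here,
        // and write the result out with a second ExtAudioFile (ExtAudioFileWrite) if needed.
    }
}
```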
It will depend on the use for the audio. If latency is an issue, go for Audio Units. If it is not, a higher layer may be what you need, such as Audio Queues.