Is it possible to record output audio in an app using Swift? So, for example, say I'm listening to a podcast, and I want to, within a separate app, record a small segment of the podcast's audio. Is there any way to do that?
I've looked around but have only been able to find information on recording from the microphone and the like.
It depends on how you are producing the audio. If the production of the audio is within your control, you can put a tap on the output and record to a file as it plays. The easiest way is with the new AVAudioEngine feature (there are other ways, but AVAudioEngine is basically an easy front end for them).
Of course, if the real problem is to take a copy of a podcast, then obviously all you have to do is download the podcast as opposed to listening to it. Similarly, you could buffer and save streaming audio to a file. There are many apps that do this. But this is not because the device's output is being hijacked; it is, again, because we have control of the sound data itself.
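For example, here's a minimal sketch of an output tap, assuming your playback already runs through an AVAudioEngine instance (the file name and buffer size here are arbitrary):

import AVFoundation

// Minimal sketch: tap the engine's main mixer and write whatever the app
// itself is producing to a file. Error handling is left to the caller.
func startCapturing(engine: AVAudioEngine) throws -> AVAudioFile {
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("capture.caf")
    let file = try AVAudioFile(forWriting: url, settings: format.settings)

    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        // Each buffer is a chunk of the app's mixed output as it plays.
        try? file.write(from: buffer)
    }
    return file
}

// When you're done recording:
// engine.mainMixerNode.removeTap(onBus: 0)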
I believe you'll have to write a kernel extension to do that:
https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptIOKit/iokit_tutorial.html
You'd have to make your own audio driver to record it. That appears to be how Soundflowerbed works (see this Softonic write-up):
http://features.en.softonic.com/how-to-record-internal-sound-on-a-mac
I'm trying to identify how much latency is being experienced when using AirPods, compared to using the device mic & speaker, for the purposes of recording user video & audio that must be synchronised to a backing track.
Here's how my system currently works:
I have a recording pipeline that uses AVCaptureSession to record video, and AVAudioEngine to record audio.
During the recording process, I play audio via AVAudioEngine, which the user will 'perform to'. I create a movie file using AVAssetWriter, where the user's captured audio (utilising noise cancellation) is added to one track and the backing audio file is written into a separate track.
The audio file's presentation timestamps are modified slightly to account for the initial playback delay experienced in AVAudioEngine, and this works well. (I previously used AVPlayer for audio playback, where the start delay was more significant; that's what led me to this technique.)
I know about AVAudioSession's inputLatency, outputLatency and bufferDuration properties, and I've read that summing these can be used to estimate latency, at least in one sense. I notice that this calculation yields a total round-trip latency of around 0.01 s when using the device on its own, and around 0.05 s when using the AirPods' input and output.
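For reference, the calculation I'm doing is roughly this (a simplified sketch):

import AVFoundation

// Rough estimate of the round-trip latency I'm compensating for.
let session = AVAudioSession.sharedInstance()
let roundTrip = session.inputLatency + session.outputLatency + session.ioBufferDuration
// ≈ 0.01 s on the built-in mic/speaker, ≈ 0.05 s with AirPods in my tests.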
This is useful, and I can apply that extra time difference in my own logic to improve synchronisation, but there is definitely additional latency in the output, and I can't identify its source.
Strangely, the recorded audio and video appear to be in sync with each other, but not with the backing track. This makes me think the system is still applying compensation to one of those two captured media streams, but that compensation doesn't relate to the actively played-back audio, so the user is potentially listening to delayed playback that I'm not accounting for.
Does anyone have any thoughts on what other considerations may be required? I feel as though most Bluetooth synchronisation use cases involve either synchronising audio and visual output, or synchronising audio and visual input while recording, not a third factor whereby the user performs alongside an audio or video source on the device that is later added to the resulting asset-writing session/media file.
I am trying to create a recording app that has the ability to stop and start an audio recording.
My idea to achieve this is to have AudioKit record and save a new file (.aac) every time the stop button is clicked. Then, when the full recording is played back, it would essentially concatenate all the separate AAC files together. (My understanding is that I can't continue recording onto the end of a file once it's saved.) Example:
Record three different segments, so the directory folder contains [1.aac, 2.aac, 3.aac]. When played back, the user would think it's one file.
To achieve this, do I use a single AKPlayer or multiple? I would need a single playback slider and also a playback-time label, and both would have to correlate to the 'single concatenated' file of [1.aac, 2.aac, 3.aac] (see the rough sketch below for what I mean).
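To illustrate, something like this is what I have in mind for the playback side (plain AVFoundation rather than AudioKit, just to show the concatenation idea; names are placeholders):

import AVFoundation

// Sketch: stitch the saved segments into one AVMutableComposition so a single
// player, slider and time label can treat them as one continuous file.
func makeComposition(from segmentURLs: [URL]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    let track = composition.addMutableTrack(withMediaType: .audio,
                                            preferredTrackID: kCMPersistentTrackID_Invalid)
    var cursor = CMTime.zero
    for url in segmentURLs {
        let asset = AVURLAsset(url: url)
        if let sourceTrack = asset.tracks(withMediaType: .audio).first {
            try track?.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                       of: sourceTrack, at: cursor)
            cursor = CMTimeAdd(cursor, asset.duration)
        }
    }
    return composition
}

// Usage sketch:
// let composition = try makeComposition(from: [url1, url2, url3])
// let player = AVPlayer(playerItem: AVPlayerItem(asset: composition))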
This is the first time I have used AudioKit, I really appreciate any advice or solutions to this. Thanks!
I'm working on a mobile application that can perform basic analysis on audio input from the microphone in real time. However, the usual way to do it, using AVAudioRecorder as shown in this guide and the API docs, requires you to save the audio to a file first.
Since the app is meant to stay on for a long time and be used multiple times a day, I want to avoid cluttering the phone with many audio files, or with audio files that grow too big. However, I can't seem to find a way around it. Searching for solutions online always leads to explanations of how to save audio to a file, rather than how to avoid saving to a file and work with some kind of buffer instead.
Any pointers would be super helpful!
Both the iOS Audio Unit and the Audio Queue APIs allow one to process short buffers of audio input in real-time without saving to a file.
You can also use a tap on the AVAudioEngine. See Apple's documentation: https://developer.apple.com/library/ios/samplecode/AVAEMixerSample/Introduction/Intro.html
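For example, a minimal sketch of an input tap that analyses the microphone in memory without ever writing a file (the RMS calculation is just a stand-in for whatever analysis you actually need):

import AVFoundation

// Minimal sketch: analyse microphone input in memory, never touching a file.
// (You still need NSMicrophoneUsageDescription and a suitable session category.)
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let frameCount = Int(buffer.frameLength)
    var sum: Float = 0
    for i in 0..<frameCount { sum += samples[i] * samples[i] }
    let rms = frameCount > 0 ? (sum / Float(frameCount)).squareRoot() : 0
    print("RMS level:", rms)   // replace with your real analysis
}

do {
    try engine.start()
} catch {
    print("Could not start engine:", error)
}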
You can use /dev/null as the path in the AVAudioRecorder instance. That way it will not save to a file; the data is simply discarded.
var url = NSUrl.FromString("/dev/null");
var recorder = new AVAudioRecorder(url, settings, out error);
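If you're working in Swift rather than Xamarin, the equivalent is roughly this (an untested sketch; the format settings are just an example, and metering is enabled so you can still poll levels):

import AVFoundation

// Same idea in Swift: point AVAudioRecorder at /dev/null so nothing is kept on disk.
let url = URL(fileURLWithPath: "/dev/null")
let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatAppleLossless),
    AVSampleRateKey: 44_100.0,
    AVNumberOfChannelsKey: 1
]

do {
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.isMeteringEnabled = true   // poll with updateMeters() / averagePower(forChannel:)
    recorder.record()
} catch {
    print("Could not create recorder:", error)
}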
For a project I need to handle audio in an iPhone app in quite a special way, and I hope somebody can point me in the right direction.
Let's say you have a fixed set of up to thirty audio files of the same length (2-3 seconds, uncompressed). While a cue is playing from one audio file, it should be possible to update parameters so that playback continues from another audio file at the same timestamp where the previous file left off. If the different audio files are different versions of heavily filtered audio, it should be possible to "slide" between them and get the impression that the filter is being applied directly. The filtering is currently not possible to achieve in real time on an iPhone, hence the pre-rendered files.
If A, B and C are different audio files, I'd like to be able to:
Play A without interruption:
Start AAAAAAAAAAAAA Stop
Or start playing A and continue into B and then C, initiated while playing:
Start AAABBBBBBBBCC Stop
Ideally it should be possible to play two or more cues at the same time. Latency is not that important, but skipping between files should ideally not produce clicks or delays.
I have looked into using Audio Queue Services (which looks like hell to dive into) and sniffed at OpenAL. Could anyone give me a rough overview and a general direction I can spend the next few days buried in?
Try using the iOS Audio Unit API, particularly a mixer unit connected to RemoteIO for audio output.
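If you'd rather stay higher level, the same idea can be sketched with AVAudioEngine, which wraps the mixer and RemoteIO units: start every pre-rendered variant on the same clock and crossfade their volumes, so switching files never breaks the timeline. A rough, untested sketch (file URLs and the 0.1 s start offset are placeholders):

import AVFoundation

// Sketch: play pre-rendered variants A, B, C in lockstep and "switch" between
// them by changing player volumes, so the timeline never jumps.
func makeSynchronizedPlayers(engine: AVAudioEngine, fileURLs: [URL]) throws -> [AVAudioPlayerNode] {
    var players: [AVAudioPlayerNode] = []
    for url in fileURLs {
        let file = try AVAudioFile(forReading: url)
        let player = AVAudioPlayerNode()
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
        player.scheduleFile(file, at: nil, completionHandler: nil)
        player.volume = 0            // everything starts silent
        players.append(player)
    }
    return players
}

// Usage sketch:
// let players = try makeSynchronizedPlayers(engine: engine, fileURLs: [urlA, urlB, urlC])
// try engine.start()
// let start = AVAudioTime(hostTime: mach_absolute_time() + AVAudioTime.hostTime(forSeconds: 0.1))
// players.forEach { $0.play(at: start) }   // all players start on the same clock
// players[0].volume = 1                    // hear A
// ... later: players[0].volume = 0; players[1].volume = 1   // "slide" to B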
I managed to do this by using FMOD Designer. FMOD (http://www.fmod.org/) is a sound design framework for game development that supports iOS. I made a multitrack event in FMOD Designer with a different layer for each sound clip, and added a parameter on the horizontal bar that lets you control which sound clip plays in real time. The trick is to let each sound clip run over the whole bar and control which one is heard with a volume effect (0-100%), as in the attached picture. That way you are ensured that skipping between files follows the same timecode. I have tried this successfully with up to thirty layers, but experienced some double playing; that seemed to disappear when I cut the number down to fifteen.
It should be possible to use the iOS Audio Unit API if you are comfortable with it, but for those of us who prefer the simplest solution, FMOD is quite good :) Thanks to Ellen S for the tip!
Screenshot of the multitrack-event in FMOD Designer:
https://plus.google.com/photos/106278910734599034045/albums/5723469198734595793?authkey=CNSIkbyYw8PM2wE
How does one record audio on iOS? Not input recording from the microphone; I want to be able to capture/record the audio currently playing within my app.
So, e.g., I start a recording session, and any sound that plays within my app only, I want recorded to a file.
I have done research on this, but I'm confused about what to use, as it looks like mixing audio frameworks can cause problems.
I just want to be able to capture and save the audio playing within my application.
Well, if you're looking to record just the audio that YOUR app produces, then yes, this is very much possible.
What isn't possible, is recording all audio that is output through the speaker.
(EDIT: I just want to clarify that there is no way to record audio output produced by other applications. You can only record the audio samples that YOU produce).
If you want to record your app's audio output, you must use the RemoteIO audio unit (http://atastypixel.com/blog/using-remoteio-audio-unit/).
All you would really need to do is copy the playback buffer after you fill it in the render callback, e.g.:
// inside the RemoteIO render callback, after you have written samples into ioData
memcpy(destinationBuffer, ioData->mBuffers[0].mData, ioData->mBuffers[0].mDataByteSize);
This is possible by wrapping CAAudioUnitOutputCapturer, one of the Core Audio public utility classes:
http://developer.apple.com/library/mac/#samplecode/CoreAudioUtilityClasses/Introduction/Intro.html
See my reply in this question for the wrapper classes. Note that you'll need to use Objective-C++ to do the wrapping properly.
There is no public API for capturing or recording all generic audio output from an iOS app.
Check out the MixerHostAudio sample application from Apple. It's a great way to start learning about Audio Units. Once you have a grasp of that, there are many tutorials online that talk about adding recording.