Swift and CarPlay audio streaming - iOS

General question: I'm working on an app that streams high-quality audio and also listens for voice prompts from the user; this is an integral part of the app. The audio quality when using headphones (wired and Bluetooth) is great, but when using CarPlay (wired or wireless) the audio output has no depth and is very flat and lower quality.
I know some apps (like Spotify) that only stream audio out maintain their audio quality (though they do have some EQ adjustability built into their app), and I'm wondering whether the bi-directional nature of the audio in our app is the culprit.
Any suggestions would be most helpful.
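One thing worth checking is the AVAudioSession configuration: keeping the session in a record-capable category for the voice-prompt feature can push a CarPlay or Bluetooth link onto a lower-quality, voice-oriented route. Below is a minimal sketch of how an app might separate the two modes; the function names are placeholders, the categories and options are standard AVFoundation API, and whether this resolves this particular setup is an assumption.

```swift
import AVFoundation

// Sketch only: session handling for an app that both streams audio and
// listens for voice prompts. The function names are illustrative.
func configurePlaybackOnlySession() throws {
    let session = AVAudioSession.sharedInstance()
    // Playback-only lets CarPlay/Bluetooth stay on the high-quality media route.
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setActive(true)
}

func configureVoiceCaptureSession() throws {
    let session = AVAudioSession.sharedInstance()
    // Bi-directional audio: .playAndRecord is required for the microphone.
    // .allowBluetoothA2DP asks iOS to keep output on the A2DP (media) link
    // where possible instead of the low-bandwidth hands-free profile.
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetoothA2DP])
    try session.setActive(true)
}
```

Switching back to the playback-only configuration whenever the app is not actively listening keeps the media route in use for music.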

Related

Is there any possibility to read the frequency of the currently playing song with Swift?

I'm new to iOS programming and I don't know where to start. I found code examples showing how to read frequencies from the microphone with the AudioKit framework, but this is not what I am looking for. Is it possible to retrieve the frequency of the currently playing song in real time without using the microphone?
Thank you for your help.
The iOS security sandbox prevents apps from capturing the general audio output of any other app, such as the Music app.
Certain music apps, such as GarageBand, might share audio via Inter-App Audio, but this isn't supported by the majority of apps that output "songs".
An app might play the "song" itself (with AVAudioEngine, for instance) and tap the engine's output to get raw sample data for spectral frequency and pitch analysis (two very different things, by the way).
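For the case where your own app plays the file, one way to get at raw samples is to play it with AVAudioEngine and install a tap on the output mixer. A minimal sketch (the file URL and the analysis inside the tap block are placeholders):

```swift
import AVFoundation

// Minimal sketch: play a local file with AVAudioEngine and tap the mixer
// to receive raw PCM buffers for frequency/pitch analysis.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

func playAndTap(fileURL: URL) throws {
    let file = try AVAudioFile(forReading: fileURL)

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

    // The tap delivers AVAudioPCMBuffer chunks of the audio this app plays.
    engine.mainMixerNode.installTap(onBus: 0,
                                    bufferSize: 4096,
                                    format: nil) { buffer, _ in
        // buffer.floatChannelData?[0] holds raw samples; feed them to an
        // FFT or pitch estimator here.
        _ = buffer.frameLength
    }

    try engine.start()
    player.scheduleFile(file, at: nil, completionHandler: nil)
    player.play()
}
```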

Get Audio Power Levels from Currently Playing Music on an iOS Device

I'm aiming to create an audio-visualisation app for iOS. I need to somehow tap into the current audio output from another app (such as Apple's Music app or Spotify) and get the amplitude of the signal for each sample of the music. I will then perform an FFT on the data to convert it to the frequency domain and display it visually.
Is it possible to read this data from the audio output of other apps? If so, what do I need to use to extract this data?
No. The iOS security sandbox will prevent the reading of any audio samples from other apps via any public API (unless the playing app was coded to explicitly export audio data via inter-app audio or other similar interface).
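For audio your own app plays itself, AVAudioPlayer's built-in metering already provides per-channel power levels without any tapping or FFT. A small sketch (the file URL is a placeholder):

```swift
import AVFoundation

// Sketch: power levels are only available for audio this app plays itself.
// `songURL` stands in for a bundled or downloaded file.
func startMeteredPlayback(songURL: URL) throws -> AVAudioPlayer {
    let player = try AVAudioPlayer(contentsOf: songURL)
    player.isMeteringEnabled = true
    player.play()
    return player
}

func logLevels(of player: AVAudioPlayer) {
    player.updateMeters()
    for channel in 0..<player.numberOfChannels {
        // Values are in decibels full scale (0 dB = maximum; more negative = quieter).
        print("channel \(channel):",
              "average \(player.averagePower(forChannel: channel)) dB,",
              "peak \(player.peakPower(forChannel: channel)) dB")
    }
}
```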

Intercept/modify audio stream on iOS

I am looking at the feasibility of getting the raw audio stream that is currently playing and doing things with it, such as streaming it over Bluetooth, equalizing it, etc. Is there any way to do this in iOS 8?
For example: apps such as Pandora/Spotify are playing music and I want to access the audio they are playing.
To process audio from another app, that app needs to participate in Inter-App Audio.
I don't know if your example apps do that.
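For what it's worth, Inter-App Audio participation is discoverable through the AudioToolbox component APIs. A rough sketch that lists published remote-generator nodes (Apple has since deprecated Inter-App Audio in favor of Audio Unit extensions, so treat this as historical):

```swift
import AudioToolbox

// Rough sketch: enumerate apps that publish an Inter-App Audio generator node.
// Only apps that show up here (or as remote instruments/effects) can be hosted;
// ordinary playback apps will not appear.
var description = AudioComponentDescription(
    componentType: kAudioUnitType_RemoteGenerator,
    componentSubType: 0,
    componentManufacturer: 0,
    componentFlags: 0,
    componentFlagsMask: 0)

var component = AudioComponentFindNext(nil, &description)
while let found = component {
    var name: Unmanaged<CFString>?
    if AudioComponentCopyName(found, &name) == noErr,
       let componentName = name?.takeRetainedValue() {
        print("Inter-App Audio node:", componentName)
    }
    component = AudioComponentFindNext(found, &description)
}
```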

Can I use AVAudioRecorder with an external mic?

Sorry if this question is obvious or duplicated. My 30 minutes of research led me nowhere.
We have an iPhone app that live streams video from the device to our remote Wowza servers.
We're looking to integrate the Swivl (motion-tracking tripod) into our product, and it uses a wireless microphone that feeds into the 30-pin port of our iPhone. Swivl's SDK doesn't include anything about capturing audio from their hardware, so I assume that it would be handled by the iPhone itself.
If I use AVAudioRecorder, will it automatically route the audio input from the 30-pin port instead of the default microphone, or do I have to explicitly define the audio source?
Any clues help.
After a few tests, it seems that iOS automatically routes incoming audio signals.
There is no need to explicitly specify the source of the audio.
Straight from the AVAudioRecorder documentation:
In iOS, the audio being recorded comes from the device connected by the user—built-in microphone or headset microphone, for example. In OS X, the audio comes from the system’s default audio input device as set by a user in System Preferences.
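If you want to verify (or override) that automatic routing, AVAudioSession exposes the attached inputs. A small sketch (the recorder settings and output URL are placeholders, not anything Swivl-specific):

```swift
import AVFoundation

// Sketch: list the inputs iOS currently sees and, if needed, prefer one
// explicitly before recording. By default the system routes automatically.
func makeRecorder(savingTo url: URL) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)

    // Log every input route (built-in mic, headset mic, dock/accessory audio, ...).
    for input in session.availableInputs ?? [] {
        print("input:", input.portType.rawValue, "-", input.portName)
    }
    // Optionally force a specific input instead of relying on automatic routing:
    // try session.setPreferredInput(session.availableInputs?.first)

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.record()
    return recorder
}
```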

Audio Unit: Use sound output as input source

I want to process the stereo output from iOS devices, no matter which application produces it, and visualize it in real time.
Is it possible to use the generic output device (or anything else) to get at the audio data that is currently being played? Maybe as an input to a RemoteIO unit?
In other words: I want to do what aurioTouch2 does (FFT only) but instead of using the microphone as input source, I want to process everything which is coming out of the speakers at a given time.
Kind regards
If your own app is playing audio using the RemoteIO Audio Unit, you can capture that content. You cannot capture audio your app is playing using many of the other audio APIs. The iOS security sandbox will prevent your app from capturing audio that any other app is playing (unless that app explicitly exports audio via the Inter-App Audio API or equivalent).
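Once you have buffers of your own app's output (from a RemoteIO render callback or an AVAudioEngine tap), the FFT itself can be done with Accelerate. A compact sketch using the classic vDSP routines, assuming a mono Float buffer whose length is a power of two:

```swift
import Accelerate

// Sketch: squared-magnitude spectrum of one buffer of mono Float samples.
// `samples.count` must be a power of two (e.g. 4096 frames from a tap).
func magnitudeSpectrum(of samples: [Float]) -> [Float] {
    let n = samples.count
    let log2n = vDSP_Length(n.trailingZeroBitCount) // log2(n) for power-of-two n
    guard let fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else {
        return []
    }
    defer { vDSP_destroy_fftsetup(fftSetup) }

    var real = [Float](repeating: 0, count: n / 2)
    var imag = [Float](repeating: 0, count: n / 2)
    var magnitudes = [Float](repeating: 0, count: n / 2)

    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                        imagp: imagPtr.baseAddress!)
            // Pack the real samples into split-complex (even/odd) form.
            samples.withUnsafeBufferPointer { samplePtr in
                samplePtr.baseAddress!.withMemoryRebound(to: DSPComplex.self,
                                                         capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            // In-place real FFT, then squared magnitude per frequency bin.
            vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
        }
    }
    return magnitudes
}
```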
