I have an app that used to use GKSession for voice communication between multiple devices. In particular, I used [GKVoiceChatService defaultVoiceChatService] to establish voice communications. Although deprecated back in iOS 7, this continued to work well until now: in iOS 9 this functionality works only over Wi-Fi, not Bluetooth (which is a requirement for my app).
While trying to convert this to use MCSession, I stumbled at the part where audio streaming comes in. I can successfully connect peers over MCSession, but cannot figure out how to treat audio.
I have seen some tutorials on streaming camera video: multipeer connectivity for video streaming between iphones
And for streaming audio from a file: https://robots.thoughtbot.com/streaming-audio-to-multiple-listeners-via-ios-multipeer-connectivity
However, I still cannot quite come up with a way to stream audio from the microphone. Anyone have any tips?
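A minimal sketch of one way to do it, assuming an already-connected MCSession and a known MCPeerID (both are placeholders here): tap the microphone with AVAudioEngine and write each PCM buffer's bytes into an OutputStream opened with startStream(withName:toPeer:). A real implementation would also need to agree on the sample format with the receiver and feed the receiving InputStream into a player.

```swift
import AVFoundation
import MultipeerConnectivity

// Sketch: push raw microphone PCM bytes over an MCSession byte stream.
// `session` and `peer` are assumed to come from your existing MCSession setup.
final class MicStreamer {
    private let engine = AVAudioEngine()
    private var outputStream: OutputStream?

    func start(session: MCSession, peer: MCPeerID) throws {
        // Open a named byte stream to the connected peer.
        outputStream = try session.startStream(withName: "mic", toPeer: peer)
        outputStream?.schedule(in: .main, forMode: .default)
        outputStream?.open()

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        // Tap the mic; each callback delivers a PCM buffer we serialize as bytes.
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
            guard let self = self, let stream = self.outputStream,
                  let channel = buffer.floatChannelData?[0] else { return }
            let byteCount = Int(buffer.frameLength) * MemoryLayout<Float>.size
            channel.withMemoryRebound(to: UInt8.self, capacity: byteCount) { bytes in
                _ = stream.write(bytes, maxLength: byteCount)
            }
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
        outputStream?.close()
    }
}
```

This sends uncompressed float samples, so expect roughly 170 KB/s of mono 44.1 kHz audio; compressing with a speech codec first would be kinder to the radio.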
Related
I've integrated Vidyo with CallKit on iOS for VoIP, except for one issue: audio is sometimes muted on connection, and either needs the microphone toggled twice or stays muted. I'm using the Vidyo.io resource to connect. I know that Apple has special calls to enhance audio when using VoIP protocols, but I do not call these, because I think Vidyo is handling the audio here (plus I don't know what to call!). Has anyone successfully integrated CallKit into Vidyo, and, if so, what do you do to configure AVAudioSession?
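For what it's worth, the standard CallKit pattern (independent of Vidyo, whose SDK may do its own session handling) is to configure AVAudioSession up front but only start audio once CallKit activates the session in provider(_:didActivate:). A sketch, with the Vidyo-specific call left as a placeholder:

```swift
import AVFoundation
import CallKit

// Sketch of the usual CallKit audio hooks; how this interacts with the
// Vidyo SDK is an assumption, not verified against their documentation.
final class CallAudio: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) {}

    func configureAudioSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            // .playAndRecord + .voiceChat enables echo cancellation and
            // the routing behavior expected for a VoIP call.
            try session.setCategory(.playAndRecord, mode: .voiceChat,
                                    options: [.allowBluetooth])
        } catch {
            print("Audio session setup failed: \(error)")
        }
    }

    // CallKit activates the session for you; start your audio engine here,
    // not earlier, or the mic can come up muted.
    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // e.g. tell the Vidyo connector to start audio now (placeholder)
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // stop or pause audio here
    }
}
```

If the SDK also manipulates the session, starting audio anywhere other than didActivate is a common cause of exactly the "muted until toggled" symptom described.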
I have a device with a camera and I want to connect to it from my iPhone via Bluetooth, so the question: is it possible to send a real-time video stream over Bluetooth using Swift/Objective-C?
It is not possible. Bluetooth's transmission speed is not high enough to stream video in real time to another device. If it were audio, it would potentially be a different story.
You can use Bluetooth to transfer a video file to another device, but not to stream it, as far as I know.
I am trying to build an iOS app which controls a music player which runs on a separate machine. I would like to use the MPNowPlayingInfoCenter for inspecting and controlling this player. As far as I can tell so far, the app actually has to output audio for this to work (see also this answer).
However, for instance, the Spotify app is actually capable of doing this without playing audio on the iOS device. If you use Spotify Connect to play the audio on a different device, the MPNowPlayingInfoCenter still displays the correct song and the controls are functional.
What's the catch here? What does one (conceptually) have to do to achieve this? I can think of continuously emitting a "silent" audio stream, but that seems a bit brute-force.
Streaming silence will work, but you don't need to stream it all the time. Just long enough to send your Now Playing info. Using AVAudioPlayer, I've found approaches as short as this will send the data (assuming the player is loaded with a 1s silent audio file):
player.play()
let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
nowPlayingInfoCenter.nowPlayingInfo = [...]
player.stop()
I was very surprised this worked within a single event loop. Any other approach to playing silence seems to work as well. Again, it can be very brief (in fact the above code in my tests doesn't even get as far as playing the silence).
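For reference, a fuller self-contained sketch of the same trick; the bundled file name "silence.m4a" and the info-dictionary keys are illustrative choices, not the only ones that work:

```swift
import AVFoundation
import MediaPlayer

// Sketch: play a moment of bundled silence so the system accepts our
// Now Playing info, then stop. Assumes a ~1 s silent "silence.m4a" in the bundle.
func pushNowPlayingInfo(title: String, artist: String) throws {
    let session = AVAudioSession.sharedInstance()
    // Non-mixable playback, so the system treats us as the active audio app.
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setActive(true)

    guard let url = Bundle.main.url(forResource: "silence", withExtension: "m4a") else { return }
    let player = try AVAudioPlayer(contentsOf: url)
    player.play()

    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyArtist: artist,
    ]

    player.stop()
}
```

As noted above, the play/stop pair can happen within a single event loop pass; the silence apparently never needs to be audibly rendered.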
I'm very interested in whether this works reliably for other people, so please comment if you make discoveries about it.
I've explored the Spotify app a bit. I'm not 100% certain if this is the same technique they're using. They're able to mix with other audio somehow. So you can be playing local audio on the phone and also playing Spotify Connect to some other device, and the "Now Playing" info will kind of stomp on each other. For my purposes, that would actually be better, but I haven't been able to duplicate it. I still have to make the audio session non-mixable for at least a moment (so you get ~ 1s audio drop if you're playing local audio at the same time). I did find that the Spotify app was not highly reliable about playing to my Connect device when I was also playing other audio locally. Sometimes it would get confused and switch around where it wanted to play. (Admittedly this is a bizarre use case; I was intentionally trying to break it to figure out how they're doing it.)
EDIT: I don't believe Spotify is actually doing anything special here. I think they just play silent audio. I think getting two streams playing at the same time has to do with AirPlay rather than Spotify (I can't make it happen with Bluetooth, only AirPlay).
Is it possible to have a common implementation of a Core Audio based audio driver bridge for iOS and OS X? Or is there a difference between the Core Audio API for iOS and the Core Audio API for OS X?
The audio bridge only needs to support the following methods:
Set desired sample rate
Set desired audio block size (in samples)
Start/Stop microphone stream
Start/Stop speaker stream
The application supplies 2 callback function pointers to the audio bridge and the audio bridge sets everything up so that:
The speaker callback is called on regular time intervals where it's requested to return an audio block
The microphone callback is called on regular time intervals where it receives an audio block
I was told that it's not possible to have a single implementation which works on both iOS and OS X, as there are differences between the iOS Core Audio API and the OS X Core Audio API.
Is this true?
There are no significant differences between the Core Audio API on OS X and on iOS. However there are significant differences in obtaining the correct Audio Unit for the microphone and the speaker. There are only 2 such units on iOS (RemoteIO, plus VoiceProcessingIO for VoIP), but more, potentially many more, on a Mac, where the user can also change the selection. There are also differences in some of the Audio Unit parameters (buffer size, sample rates, etc.) allowed or supported by the hardware.
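To make that concrete, here is a hedged sketch of the one place the two platforms diverge; the function name is made up, but the component constants are the real ones, and everything downstream of this description (render callbacks, stream formats) can be shared:

```swift
import AudioToolbox

// Pick the hardware I/O unit per platform. On iOS the choices are
// RemoteIO or VoiceProcessingIO (for VoIP); on macOS it is the HAL output unit.
func ioUnitDescription(forVoIP voip: Bool = false) -> AudioComponentDescription {
    var desc = AudioComponentDescription()
    desc.componentType = kAudioUnitType_Output
    desc.componentManufacturer = kAudioUnitManufacturer_Apple
    #if os(iOS)
    desc.componentSubType = voip ? kAudioUnitSubType_VoiceProcessingIO
                                 : kAudioUnitSubType_RemoteIO
    #else
    desc.componentSubType = kAudioUnitSubType_HALOutput
    #endif
    return desc
}
```

With the unit chosen this way, the start/stop and callback plumbing the bridge needs can live in shared code behind a single AudioComponentFindNext/AudioUnit setup path.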
Can I transfer an audio stream from one iOS device to another iOS device (for example, from a 4S to the new iPad) using the CoreBluetooth framework? Or maybe BLE is too slow for media streaming?
Bluetooth Low Energy (BLE) is not intended for streaming data!
If you want to stream, you must use Bluetooth 2.x+EDR and an appropriate profile. For audio streaming, that means the headset (HSP) or A2DP profile.
The CoreBluetooth API gives access only to BLE devices.
Audio streaming won't work well, since BLE on iOS 5 can send only 20-byte packets at a time, with a 37.5 ms delay between transfers. The result would be laggy and as good as useless. There is always the possibility of buffering the data, but in the end this is not a good way to stream audio.
|packet| --- 37.5ms --- |packet| --- 37.5ms --- |packet...
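A quick back-of-the-envelope check of those numbers confirms the conclusion:

```swift
// 20-byte packets every 37.5 ms work out to well under 1 KB/s, far below
// even a low-bitrate speech codec (G.729-class codecs need ~8 kbit/s).
let packetBytes = 20.0
let intervalSeconds = 0.0375
let bytesPerSecond = packetBytes / intervalSeconds         // ≈ 533 B/s
let kilobitsPerSecond = bytesPerSecond * 8 / 1000          // ≈ 4.27 kbit/s
print(bytesPerSecond, kilobitsPerSecond)
```

Even with perfect buffering, roughly 4 kbit/s is not enough headroom for any common audio codec, so the "laggy and as good as useless" verdict holds.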