We have a VoIP app that generally transfers audio packets with a sample rate of 32 kHz. For what seemed like a reasonable match, we've typically set the AVAudioSession's preferred sample rate to 32 kHz as well. On later iPhones (e.g. the iPhone XS) we've found that speakerphone audio no longer plays, or is garbled, when using a 32 kHz sample rate. Yet the audio session seems to happily accept (with read-back confirmation) a preferredSampleRate of 32 kHz. I've read that the iPhone 6s codec (and perhaps that of later devices) only supports 48 kHz sample rates... but if that's the case, why wouldn't iOS override the setPreferredSampleRate: call?
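For reference, this is roughly how we configure the session (a trimmed sketch; the .playAndRecord/.voiceChat combination is what a VoIP app would typically use, and error handling is cut down):

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .voiceChat, options: [])
    try session.setPreferredSampleRate(32_000)  // a request, not a guarantee
    try session.setActive(true)
} catch {
    print("audio session setup failed: \(error)")
}

// preferredSampleRate just echoes the request; sampleRate is the rate the
// hardware actually granted, so the two are worth logging separately.
print("preferred: \(session.preferredSampleRate) Hz")
print("actual:    \(session.sampleRate) Hz")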
I have a USB audio acquisition device (2 channels, 1 microphone per channel).
This device is connected to an iPhone.
The aim is to design a program in Swift that stores the data in a buffer and then in PCM arrays (one array per channel).
I tested the connection between the iPhone and the acquisition device using an AVAudioSession object.
The iPhone recognises the device (iOS compatible according to the manufacturer).
The process doesn't need to work in real time:
1) Data acquisition
2) Stop acquisition
3) Finally, process the data stored in the PCM arrays (not in a file)
But I must confess that I'm a little bit lost…
Do I have to consider two channels or one channel (one frame with one sample per channel)?
(For example: audioSession.inputNumberOfChannels = 1 or 2?)
Do I have to use Audio Queues, Audio Units, or another object?
I carried out several investigations on the Internet, but I'm still lost...
I would appreciate it if you could get me on track by giving me some tips.
Thanks a lot for your support!
Best regards.
Jeff
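One possible starting point, not an authoritative answer: since the capture doesn't need to be real-time, AVAudioEngine with a tap on its input node can accumulate each buffer's channels into separate PCM arrays, which covers the acquire/stop/process flow above. A minimal Swift sketch under that assumption (StereoCapture is an illustrative name, and synchronization around the arrays is glossed over; the tap block runs on a background thread):

import AVFoundation

final class StereoCapture {
    private let engine = AVAudioEngine()
    private(set) var left: [Float] = []   // PCM samples, channel 0
    private(set) var right: [Float] = []  // PCM samples, channel 1

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.record, mode: .measurement, options: [])
        try session.setPreferredInputNumberOfChannels(2)  // ask for both mics
        try session.setActive(true)

        let input = engine.inputNode
        let format = input.inputFormat(forBus: 0)  // hardware format, 2ch if granted
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, _ in
            guard let self = self, let data = buffer.floatChannelData else { return }
            let n = Int(buffer.frameLength)
            self.left.append(contentsOf: UnsafeBufferPointer(start: data[0], count: n))
            if buffer.format.channelCount > 1 {
                self.right.append(contentsOf: UnsafeBufferPointer(start: data[1], count: n))
            }
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
        // left/right now hold one PCM array per channel, ready for offline processing.
    }
}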
Is it possible to have a common implementation of a Core Audio based audio driver bridge for iOS and OS X? Or is there a difference in the Core Audio API for iOS versus the Core Audio API for OS X?
The audio bridge only needs to support the following methods:
Set desired sample rate
Set desired audio block size (in samples)
Start/Stop microphone stream
Start/Stop speaker stream
The application supplies 2 callback function pointers to the audio bridge and the audio bridge sets everything up so that:
The speaker callback is called at regular time intervals and is asked to return an audio block
The microphone callback is called at regular time intervals and receives an audio block
I was told that it's not possible to have a single implementation which works on both iOS and OS X, as there are differences between the iOS Core Audio API and the OS X Core Audio API.
Is this true?
There are no significant differences between the Core Audio API on OS X and on iOS. However, there are significant differences in obtaining the correct Audio Unit to use for the microphone and the speaker. There are only 2 I/O units on iOS (RemoteIO, plus a voice-processing variant for VoIP), but more, and potentially many more, on a Mac, where the user might also change the selection. There are also differences in which Audio Unit parameters (buffer sizes, sample rates, etc.) the hardware allows/supports.
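To illustrate how small the platform-specific part can be, here is a hedged Swift sketch that only switches the I/O unit's subtype per platform; enabling input, setting formats, and wiring the two callbacks could then live in shared code. It assumes the HAL output unit is acceptable on the Mac side (a VoIP app would pick kAudioUnitSubType_VoiceProcessingIO instead):

import AudioToolbox

// The platform-specific part: which I/O unit subtype to ask for.
#if os(iOS)
let ioSubtype = kAudioUnitSubType_RemoteIO    // the general-purpose iOS I/O unit
#else
let ioSubtype = kAudioUnitSubType_HALOutput   // macOS hardware abstraction layer unit
#endif

func makeIOUnit() -> AudioUnit? {
    var desc = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                         componentSubType: ioSubtype,
                                         componentManufacturer: kAudioUnitManufacturer_Apple,
                                         componentFlags: 0,
                                         componentFlagsMask: 0)
    guard let component = AudioComponentFindNext(nil, &desc) else { return nil }
    var unit: AudioUnit?
    guard AudioComponentInstanceNew(component, &unit) == noErr else { return nil }
    return unit  // shared setup (formats, callbacks, start/stop) continues from here
}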
I have a security cam which sends its video stream over 2.4 GHz to a receiver. I now want to know if it's possible to receive this signal on an iPhone and show the video stream. As WiFi also transmits on 2.4 GHz, the iPhone should be able to receive that signal. Or not?
Security Cam: http://www.jay-tech.de/jaytech/servlet/frontend/content.html?articleOID=d583e45:-495a2735:120c7c04348:446c&keywordOID=d583e45:946c233:1182e6a651d:e4e.
My iPhone is an iPhone 5s on iOS 8.1.
If it's not possible on an iPhone, is it maybe possible to catch the signal with any other device? These are the devices I could use:
a Raspberry Pi, an old WiFi USB stick, an Arduino Uno, and a bunch of cables for TV/audio/video, etc.
Thanks iComputerfreak
Sorry for my bad English, I'm German ;)
In short, no. You'd need some dedicated hardware to receive the signal and encode it into a format that the iPhone could understand. It's not possible to arbitrarily capture wireless signals on a particular frequency and decode them in software - not on an iPhone, anyway.
Your best solution would be to look for some external hardware which operates on the same frequency and can encode the video signal over a WiFi network - I'd be surprised if such a device doesn't exist, though it may not be cheap. The iPhone can then simply receive the encoded video via WiFi and use it like any other video stream.
Can I transfer an audio stream from one iOS device to another iOS device (for example, from an iPhone 4S to the new iPad) using the CoreBluetooth framework? Or maybe BLE is too slow for media streaming?
Bluetooth Low Energy (BLE) is not intended for streaming data!
If you want to stream, you MUST use Bluetooth 2.x+EDR and an appropriate profile. Therefore, if you want to stream audio, you need the Headset or A2DP profile.
The CoreBluetooth API only gives access to BLE devices.
Audio streaming won't work well, since BLE on iOS 5 can only stream 20-byte packets, with a 37.5 ms delay between each transfer. So this would be laggy and as good as useless. There is always the possibility of buffering the data, but in the end, this is not a good way to stream audio.
|packet| --- 37.5ms --- |packet| --- 37.5ms --- |packet...
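To put numbers on that: 20 bytes every 37.5 ms works out to roughly 533 bytes/s, or about 4.3 kbit/s. Even telephone-grade PCM (8 kHz, 16-bit, mono) needs 128 kbit/s, around 30 times more, so no realistic amount of buffering or compression closes that gap.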
While I realize that AirPlay has inherent lag/latency, I'm wondering if there's a way for a (currently hypothetical) iPhone app to detect what that latency is. If so, how precise can that latency value be? I'm more curious about whether an app can "know" its own AirPlay latency than about simply minimizing it.
The latency does not come from network jitter, but rather is decided by the source device (your iPhone).
Long story short:
It's always precisely 2s (down to the millisecond) with Apple devices.
There is no way to tweak it with public APIs.
Audio latency needs to be very accurate so that multiple outputs can play in sync.
Some explanations about AirPlay's implementation:
The protocol starts with several RTSP commands. During this handshake, the source transmits rtpTime, the time at which playback starts, which is also your latency. The typical value is 88200 = 2 s × 44100 Hz.
AirPlay devices can sync their clocks with the source's using NTP to mitigate the network latency.
During playback, the source periodically sends a SYNC packet to adjust the audio latency and make sure that all devices are still in sync.
It's possible to change the latency if you use a custom implementation, but Apple usually rejects them.
Check this writeup for more information. You can also read the unofficial protocol documentation.
The short answer is: no, Apple does not provide a way to do this. Assuming you need your app to be approved in the App Store, you're sort of out of luck. If you can run your app on a jailbroken device you can search around for undocumented APIs that will let you do more.
If you need your app to be available in Apple's App Store, most things you can do network-wise are outlined in the "Reachability" sample app.
The only way I can think of to get a good guess would be to use Bonjour to identify the host (see the sample code here: https://developer.apple.com/library/ios/#samplecode/BonjourWeb/Introduction/Intro.html) and then ping the host.
However:
If there is more than 1 AirPlay station, you will need to guess or ask which one the user is connected to, or maybe take an average.
The device may not respond to a ping at all (Apple TV and AirPort Express both respond to ping; not sure about 3rd party devices).
The ping may not reflect the actual latency of the audio.
Instead of spending too much time on this, you should follow Apple's guidelines for preparing your audio for AirPlay and enriching your app for AirPlay: http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AirPlayGuide/PreparingYourMediaforAirPlay/PreparingYourMediaforAirPlay.html#//apple_ref/doc/uid/TP40011045-CH4-SW1
Hope this helps! :)
You can query iOS's current hardware audio latency via -[AVAudioSession outputLatency].
According to the documentation for outputLatency:
Using an AirPlay enabled device for your audio content can result in a 2-second delay. Check for this delay in game content.
And in my experience, this value changes when switching output devices, e.g.:
Speaker: ~10ms
Bluetooth: ~100+ms
AirPlay: 2s
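A minimal Swift sketch of that query; since the value tracks the current route, it's worth re-reading on route changes (the observer below is the standard AVAudioSession pattern, with error handling omitted):

import AVFoundation

let session = AVAudioSession.sharedInstance()
try? session.setActive(true)

// Latency of the current output path, in seconds (~0.01 for the built-in
// speaker, ~2.0 for AirPlay, per the figures above).
print("output latency: \(session.outputLatency) s")

// The value changes with the output route, so re-read it on route changes.
// Keep the token around for as long as the observer should stay alive.
let token = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: session,
    queue: .main
) { _ in
    print("new output latency: \(session.outputLatency) s")
}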