Agora + Photon Voice = bad experience - iOS

Agora Video 2.92 Broadcast is being used inside of Unity with Photon Voice on iOS.
We are simply using Agora to broadcast a screen share from a web app; we do not want any audio from Agora, as it is managed externally.
Upon entering/exiting the app, we now seem to lose either our microphone or the speaker (or both) and can no longer hear other players in Photon Voice.
Is it possible for Agora to not adjust our audio settings? What should we be setting the default parameters to?

Have you tried calling disableAudio? It should shut off Agora's audio and leave your audio to Photon. Here is a link to the documentation about it: https://docs.agora.io/en/Voice/API%20Reference/oc/Classes/AgoraRtcEngineKit.html#//api/name/disableAudio
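As a rough sketch against the native iOS SDK (the Unity C# wrapper has an equivalent DisableAudio() method on its engine object; the app ID and channel name below are placeholders), calling it right after creating the engine and before joining keeps Agora's audio module off so Photon Voice stays in control of audio:

import AgoraRtcKit

// Sketch only: screen-share broadcast with Agora's audio module turned off.
let agoraKit = AgoraRtcEngineKit.sharedEngine(withAppId: "<YOUR_APP_ID>", delegate: nil)
agoraKit.disableAudio()   // no Agora mic capture or playback; Photon Voice keeps the audio
agoraKit.joinChannel(byToken: nil, channelId: "screen-share", info: nil, uid: 0, joinSuccess: nil)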
Note: did you know that using Agora for voice is easy to integrate into your project instead of Photon Voice? You can also have audio effects like voice changing and spatial audio. Also, Agora isn't geofenced like Photon, giving you a truly global low-latency solution.

Related

Local audio recording using Agora SDK (on iOS)

I need a way to record the local participant's audio to a file. I see there are methods startAudioRecording and stopAudioRecording, but they record the audio of all participants in the call. Is there any way to achieve this without low-level audio handling?

How to cancel noise in a voice call in Swift

I am building a calling app using Agora in Swift. I have implemented voice calls, but I need to cancel external noise during calls. Can I achieve that? Is that possible? If it is, how can I do that?
Noise cancellation will be hardware dependent, but as you are using Agora you can check the link below to set the appropriate audio profile and test the audio quality.
Refer to this link: https://docs.agora.io/en/Voice/audio_profile_apple?platform=iOS
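For example, with the 3.x iOS SDK (a sketch; enum names can differ between SDK versions) you would pick a speech-oriented profile before joining the channel, so the built-in echo cancellation and noise suppression are tuned for voice rather than music:

// Sketch: speech-oriented profile for a voice-call app (set before joining the channel).
agoraKit.setAudioProfile(.speechStandard, scenario: .default)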

Agora Interactive Live Video Streaming - How to enable audio on broadcaster side?

https://docs.agora.io/en/Interactive%20Broadcast/start_live_ios?platform=iOS
I've followed the above tutorial to implement interactive live video streaming. I have one broadcaster and multiple audience members. Only the broadcaster can broadcast, and the audience can only view the broadcaster.
The broadcaster can't hear his own audio. Is there a way to enable audio on the broadcaster side so that he can hear his own audio?
I've used the code from the above tutorial and set the role to .broadcaster on the broadcaster side; on the audience side it is set to .audience.
Broadcaster
func setClientRole() {
    // Set the client role as "host"
    agoraKit?.setClientRole(.broadcaster)
}
Audience
func setClientRole() {
    // Set the client role as "audience"
    agoraKit?.setClientRole(.audience)
}
Generally with video streaming services the local user cannot hear their own audio by design (look at YouTube Live, FB/Insta Live, etc.). Otherwise it would cause echo, or the echo cancellation could end up muting the audio. It is also very disorienting for a user to hear themselves, so I would recommend against this.
In an effort to still answer your question and if it's imperative to your project to have that mic audio, I would recommend that you force the user to use headphones to avoid echo issues. This way you can use a custom audio source (full guide), where you initialize the mic and can send the audio to the headphones as well as pass it to the Agora SDK.
Since the implementation end of this could vary greatly depending on your project, I'll explain the basic concept.
With Agora you can enable the custom audio source using:
self.agoraKit.enableExternalAudioSource(withSampleRate: sampleRate, channelsPerFrame: channel)
When you join the channel you would initialize the mic yourself and maintain that buffer, then pass each custom audio frame to the SDK with the external push method for your SDK version, for example:
self.agoraKit.pushExternalAudioFrameRawData(buffer, samples: samples, timestamp: timestamp)
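Putting those pieces together, here is a rough sketch, not Agora's official sample, assuming the 3.x AgoraRtcEngineKit external-audio API (the class name is illustrative, and the exact push method and argument types can differ between SDK versions, so check the reference for yours). The mic is captured with AVAudioEngine, routed to the output so the broadcaster hears themselves through headphones, and a copy of each buffer is converted to 16-bit PCM and pushed to Agora:

import AVFoundation
import AgoraRtcKit

// Sketch only: a custom mic source that feeds both local monitoring and Agora.
// NOTE: pushExternalAudioFrameRawData's exact name/arguments vary by SDK version.
final class CustomMicSource {
    private let agoraKit: AgoraRtcEngineKit
    private let engine = AVAudioEngine()

    init(agoraKit: AgoraRtcEngineKit) {
        self.agoraKit = agoraKit
        // Tell the SDK we will supply the audio frames ourselves.
        agoraKit.enableExternalAudioSource(withSampleRate: 48000, channelsPerFrame: 1)
    }

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        // Route the mic to the output so the broadcaster hears themselves.
        // Headphones are strongly recommended to avoid feedback.
        engine.connect(input, to: engine.mainMixerNode, format: format)

        input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, when in
            guard let self = self, let channelData = buffer.floatChannelData else { return }
            // Agora expects 16-bit PCM at the declared sample rate; a real
            // implementation should also resample if the hardware rate differs.
            var pcm = [Int16](repeating: 0, count: Int(buffer.frameLength))
            for i in 0..<pcm.count {
                pcm[i] = Int16(max(-1, min(1, channelData[0][i])) * Float(Int16.max))
            }
            pcm.withUnsafeMutableBytes { raw in
                guard let base = raw.baseAddress else { return }
                _ = self.agoraKit.pushExternalAudioFrameRawData(
                    base,
                    samples: pcm.count,
                    timestamp: AVAudioTime.seconds(forHostTime: when.hostTime))
            }
        }
        try engine.start()
    }
}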
For more details I'd recommend taking a look at Agora's API Examples Project. You can use some of the Audio Controllers to see how the audio is handled.

Managing text-to-speech and speech recognition at the same time in iOS

I'd like my iOS app to use text-to-speech to read to the user some information that it receives from a server, and I'd also like to allow the user to stop such speech with a voice command. I have tried speech recognition frameworks for iOS like OpenEars, and I find that they listen to and detect the information the app itself is "saying", which interferes with the recognition of the user's voice commands.
Has somebody dealt with this scenario in iOS and found a solution for it? Thanks in advance.
It is not a trivial thing to implement. Unfortunately, iOS and other platforms record the sound that is playing through the speaker. The only choice you have is to use a headset; in that case speech recognition can continue listening for input. In OpenEars, recognition is disabled during TTS unless a headset is plugged in.
If you still want to implement this feature, which is called "barge-in", you have to do the following:
Store the audio you are playing through the speaker.
Implement a noise cancellation algorithm that will effectively remove that audio from the recording. You can use cross-correlation to find the proper offset in the recording and spectral subtraction to remove the audio.
Recognize the speech in the remaining signal.
It is not possible to do that without significant modification of the OpenEars sources.
Related question is Android Speech Recognition while music is playing
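As an illustration of the first two steps above, here is a minimal Swift sketch using Accelerate (generic code, not part of OpenEars; a real implementation would use spectral subtraction on the magnitude spectrum rather than this naive time-domain version). It cross-correlates the known playback against the mic recording to find the offset, then subtracts the aligned playback before recognition:

import Accelerate

// Find the sample offset where the known playback best aligns with the recording.
func bestOffset(playback: [Float], recording: [Float]) -> Int {
    guard recording.count > playback.count else { return 0 }
    var best = 0
    var bestScore = -Float.greatestFiniteMagnitude
    for offset in 0...(recording.count - playback.count) {
        var score: Float = 0
        recording.withUnsafeBufferPointer { rec in
            playback.withUnsafeBufferPointer { play in
                vDSP_dotpr(rec.baseAddress! + offset, 1,
                           play.baseAddress!, 1,
                           &score, vDSP_Length(playback.count))
            }
        }
        if score > bestScore {
            bestScore = score
            best = offset
        }
    }
    return best
}

// Remove the aligned playback from the recording, leaving (mostly) the user's voice.
func cancelPlayback(_ playback: [Float], from recording: [Float]) -> [Float] {
    let offset = bestOffset(playback: playback, recording: recording)
    var cleaned = recording
    for i in 0..<playback.count where offset + i < cleaned.count {
        cleaned[offset + i] -= playback[i]   // naive time-domain subtraction
    }
    return cleaned
}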

Can I use AVAudioRecorder with an external mic?

Sorry if this question is obvious or duplicated. My 30 minutes of research led me nowhere.
We have an iPhone app that live streams video from the device to our remote Wowza servers.
We're looking to integrate the Swivl (motion-tracking tripod) into our product, and it uses a wireless microphone that feeds into the 30-pin port of our iPhone. Swivl's SDK doesn't include anything about capturing audio from their hardware, so I assume that it would be handled by the iPhone itself.
If I use AVAudioRecorder, will it automatically route the audio input from the 30-pin port instead of the default microphone, or do I have to explicitly define the audio source?
Any clues help.
After a few tests, it seems that iOS automatically routes incoming audio signals.
There is no need to explicitly specify the source of the audio.
Straight from AVAudioRecorder documentation:
In iOS, the audio being recorded comes from the device connected by the user—built-in microphone or headset microphone, for example. In OS X, the audio comes from the system’s default audio input device as set by a user in System Preferences.
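A quick way to confirm this yourself (a sketch, not from the original answer; the file name and settings are arbitrary) is to activate a record-capable audio session, print the current route, and then start AVAudioRecorder, which records from whatever input the system selected:

import AVFoundation

// Sketch: print the input iOS has routed to, then record from it.
func startRecording() throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)

    // Shows e.g. the built-in mic, a headset mic, or a dock-connector accessory.
    for input in session.currentRoute.inputs {
        print("Recording from:", input.portType.rawValue, "-", input.portName)
    }

    let url = FileManager.default.temporaryDirectory.appendingPathComponent("take.m4a")
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.record()
    return recorder   // keep a strong reference while recording
}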
