I'm using an AVSpeechSynthesizer in my app to synthesise text to speech and play it back on the iPhone's speakers. All works well.
After that I want to recognise the user's speech, and I'm using SwiftSpeech to do so. It uses SFSpeechRecognizer under the hood. Everything is good here too.
Afterwards, though, the output from speech synthesis (done exactly as before with AVSpeechSynthesizer) is really quiet.
And when the iPhone is connected to a Bluetooth headset, playback initially goes through the headphones just fine, but after doing the speech recognition the audio output switches from the BT headset to the iPhone itself (and is again really quiet).
I couldn't find anything about that online.
I don't know if sample code would be helpful, let me know though.
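In case it does help, the playback side boils down to the trimmed-down sketch below (not the exact code; the utterance text is a placeholder). The last few lines just print the audio session's state, since my unconfirmed hunch is that the session's category or route is what changes after recognition.
import AVFoundation

let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String) {
    // Plays through whatever output the current AVAudioSession is routed to.
    synthesizer.speak(AVSpeechUtterance(string: text))
}

// After SwiftSpeech / SFSpeechRecognizer finishes, the next speak() call is
// quiet (and, with a BT headset, no longer routed there). Inspecting the
// session at that point might show what recognition changed:
let session = AVAudioSession.sharedInstance()
print(session.category, session.categoryOptions)
print(session.currentRoute.outputs.map { $0.portType })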
Related
I'd like to develop a music visualizer app for tvOS which has the ability to listen to whatever background audio is playing (like Spotify, Pandora, etc.). Does anyone know if this is even possible on iOS/tvOS? In other words, there would have to be some functionality that allows the system audio output to be treated like an audio input.
I imagine that it would be the same functionality as doing a screen recording capture, at least the audio part.
My goal is to be able to do this programmatically (Objective-C) so that the user doesn't have to do anything; it just "works" out of the box, so to speak.
Thanks
I'm making a karaoke application. If headphones aren't plugged in, my app works fine (my voice and the background music are recorded together). But when I do the same thing with headphones and then listen to the recording, I can't hear the background music; I only hear the voice clearly. I've attached the code I used below:
https://github.com/genedelisa/AVFoundationRecorder
The issue is that when the headphones are plugged in, the music plays through the headphones, so the mic can't pick it up and record it; it only captures your voice. Obviously, when the headphones are not plugged in, this is not an issue.
I would recommend that, if the user was using headphones, you combine the music and the voice recording after the voice has been recorded. Suggestions on how to do that can be found in this post: How to merge two mp3 files iOS?
Also, to check whether headphones are plugged in, check out this post: Are headphones plugged in? iOS7
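For the headphone check, something along these lines should work (a minimal sketch using AVAudioSession's current route, with the modern Swift names for the constants rather than the iOS 7 era ones in the linked post):
import AVFoundation

// True if any current audio output is a wired headphone.
func headphonesPluggedIn() -> Bool {
    return AVAudioSession.sharedInstance().currentRoute.outputs
        .contains { $0.portType == AVAudioSession.Port.headphones }
}

if headphonesPluggedIn() {
    // Record the voice on its own, then mix it with the backing track afterwards.
} else {
    // The mic already picks up both the voice and the music from the speaker.
}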
I am writing an app that plays an audio track for the user to listen to whilst recording from the camera and microphone (using headphones).
The audio track is played using AVAudioPlayer.
The camera/microphone is recorded using AVCaptureSession.
The output uses AVCaptureMovieFileOutput.
It all works perfectly on an iPhone 5, but my iPad 4 experiences an odd side effect: when playing back the recording from the iPad, you can hear the audio track as if it had also been recorded. This is all done whilst using headphones, and the audio is too quiet to be picked up by the microphone.
When testing on the iPhone only the audio from the mic is recorded, as expected. Both use the same code. Any ideas would be appreciated!!
In case anyone else is having the same problem: I couldn't solve it. Instead, I used the setPreferredDataSource:error: method in AVAudioSession to use the device microphone instead of the one on the headset.
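For reference, the preferred-input and preferred-data-source calls hang off AVAudioSession and its port descriptions. A rough Swift sketch that selects the built-in microphone (error handling trimmed; the data-source step is optional) looks like this:
import AVFoundation

// Prefer the built-in microphone over the headset mic for recording.
func preferBuiltInMic() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)

    if let builtInMic = session.availableInputs?
        .first(where: { $0.portType == AVAudioSession.Port.builtInMic }) {
        try session.setPreferredInput(builtInMic)

        // Optionally pick a specific data source on that port (front/back/bottom mic).
        if let source = builtInMic.dataSources?.first {
            try builtInMic.setPreferredDataSource(source)
        }
    }
}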
I have to build an iOS app for one of my clients. The concept is: if I blow into the iPhone mic, the sound should come out of a Bluetooth speaker. For example, if I say "Hi" into the mic, the app should pass that audio to the Bluetooth speaker and the sound should come from there. So far I have done the Bluetooth pairing. I have been searching for the last 3 days but couldn't find any solution. Please let me know any suggestions or links to get the app started.
Many thanks in advance
You have to set the correct audio route, which you do by configuring AVAudioSession. This SO answer describes quite well how to do it.
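In current Swift, the gist of that configuration is something like the following sketch; whether the speaker shows up via .allowBluetooth (HFP) or .allowBluetoothA2DP depends on the device, so both are included here:
import AVFoundation

// Record from the mic while allowing output to go to a paired Bluetooth device.
func routeToBluetooth() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetooth, .allowBluetoothA2DP])
    try session.setActive(true)
}
Note that routing alone doesn't forward the mic to the speaker; for a live pass-through you'd also need something like AVAudioEngine connecting its input to its output.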
In my iOS game I notice the speaker volume automatically decreases while I'm recording with the mic. I'm using the following C# code in Unity 3D to record a short piece of input from the mic and then analyze audioSrc to see if there was any blowing. I repeat this for a short while.
audioSrc.clip = Microphone.Start (null, false, 5, FREQUENCY);
The whole time I'm also playing some background music, and it is during the execution of the above command that the music volume drops for a bit and then comes back right after recording stops.
I'm not sure if this is specific to Unity 3D on iOS only, or whether this is behaviour common to iOS applications. I haven't noticed the same behaviour on Android. Does anybody know of a way I can prevent this on iOS? If necessary I can execute Objective-C code from Unity to call the iOS APIs.
It's not so much that the volume is decreased; rather, iOS has switched the output to the earpiece, away from the speakers. Unity now includes a "Player Setting" for iOS called "Force iOS Speakers when Recording"; turn this on.
I'm pretty sure that decreasing the audio while recording is default iOS behaviour.
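If you need to do the same thing from native code rather than the Player Setting (whether the setting does exactly this internally is my assumption), the calls involved look roughly like the Swift sketch below; from Unity you would normally wrap them in an Objective-C plugin.
import AVFoundation

// Keep playback on the loudspeaker while the mic is recording, instead of
// letting iOS drop the output down to the quiet earpiece route.
func forceSpeakerOutput() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.defaultToSpeaker])
    try session.setActive(true)
    // Alternatively, flip an already active session over to the speaker:
    try session.overrideOutputAudioPort(.speaker)
}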