I have an AVSpeechSynthesizer which converts text to speech, but I've encountered a problem.
I don't know how to save the audio it generates to an audio file, which I would quite like to be able to do!
So here's my question: how do you save the AVSpeechSynthesizer output? And if that isn't possible, can I use AVFoundation, Core Media, or some other public API to capture the audio on its way to the speakers, before it actually comes out?
Thanks!
Unfortunately, no: there is no public API available to capture the speaker output, and looking over the docs for AVSpeechSynthesizer and related classes I don't see a way to capture any audio from it either. You may want to look at third-party libraries to help with this.
Related questions:
Recording audio output only from speaker of iphone excluding microphone
Text-to-speech libraries for iPhone
I use The Amazing Audio Engine to record the output audio of my app, which is played by AVSpeechSynthesizer's speakUtterance: method. I used the code provided here: Record all sounds generated by my app in a audio file (not from mic)
I get the output file, but I can't play it (the file size is always 4 KB no matter how long I record; I tried both .aiff and .m4a extensions, but iTunes can't open them). What could be the problem?
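For reference, a typical AERecorder setup with that library looks roughly like this (a sketch only, assuming an already-created and started AEAudioController; check the library's headers for the exact signatures):

```objc
#import "TheAmazingAudioEngine.h"
#import "AERecorder.h"

NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"output.aiff"];
NSError *error = nil;

AERecorder *recorder = [[AERecorder alloc] initWithAudioController:self.audioController];
if (![recorder beginRecordingToFileAtPath:path
                                 fileType:kAudioFileAIFFType
                                    error:&error]) {
    NSLog(@"Couldn't start recording: %@", error);
    return;
}

// Tap the app's output (not the mic) by adding the recorder as an output receiver.
[self.audioController addOutputReceiver:recorder];

// ... later, when playback is done:
// [self.audioController removeOutputReceiver:recorder];
// [recorder finishRecording];
// If finishRecording is never called, the file header is never finalized,
// which can leave exactly the kind of tiny, unplayable file described above.
```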
Related question:
I was able to record the app's output using an AVAudioRecorder activated with AVAudioSessionCategoryPlayAndRecord, but it included input from the microphone. Is there any way to record the app's output only? Perhaps by changing the session?
ULTIMATE GOAL:
I need to record AVSpeechSynthesizer to an audio file, and since there is no API for this, the only way is to record the audio output as it's being played. I'm planning to have my users wear headphones while it's being played/recorded (and to warn them that no other sounds should be played while recording is happening). I found that I should use Audio Units, but couldn't find any tutorials on the matter; Apple's manuals are very poor.
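For anyone who ends up here with the same goal: the usual Audio Unit technique is to attach a render-notify callback to the output (RemoteIO) unit and write each post-render buffer to disk with ExtAudioFile. A rough sketch, assuming you already have a configured RemoteIO unit and its client AudioStreamBasicDescription (error handling omitted):

```objc
#import <AudioToolbox/AudioToolbox.h>

static ExtAudioFileRef gFile;

static OSStatus RenderNotify(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData) {
    // Only write after the unit has rendered, so ioData holds real audio.
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        ExtAudioFileWriteAsync(gFile, inNumberFrames, ioData);
    }
    return noErr;
}

void StartCapturing(AudioUnit remoteIO,
                    AudioStreamBasicDescription clientFormat,
                    CFURLRef url) {
    // Describe the on-disk format; the AAC codec fills in the rest.
    AudioStreamBasicDescription fileFormat = {0};
    fileFormat.mSampleRate       = clientFormat.mSampleRate;
    fileFormat.mFormatID         = kAudioFormatMPEG4AAC;
    fileFormat.mChannelsPerFrame = clientFormat.mChannelsPerFrame;

    ExtAudioFileCreateWithURL(url, kAudioFileM4AType, &fileFormat,
                              NULL, kAudioFileFlags_EraseFile, &gFile);
    ExtAudioFileSetProperty(gFile, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(clientFormat), &clientFormat);
    // Prime the async writer once from a non-realtime thread.
    ExtAudioFileWriteAsync(gFile, 0, NULL);

    AudioUnitAddRenderNotify(remoteIO, RenderNotify, NULL);
}

// When done:
// AudioUnitRemoveRenderNotify(remoteIO, RenderNotify, NULL);
// ExtAudioFileDispose(gFile);
```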
My app needs to play some music files, like .mp3. I would like to use MPMoviePlayerController because it implements all the UI stuff for me, i.e. I do not want to bother implementing a progress slider and things like that.
I tested it by playing an .mp3 file and it worked fine, but I do not know whether this is an acceptable use, because its name says "movie player" and it seems to be intended for playing movies. Would Apple reject this? Thank you.
For playing audio from a file or memory, AVAudioPlayer is your best option, but unfortunately it doesn't support network streams, while MPMoviePlayerController does.
From the documentation:
An instance of the AVAudioPlayer class, called an audio player,
provides playback of audio data from a file or memory.
Apple recommends that you use this class for audio playback unless you
are playing audio captured from a network stream or require very low
I/O latency.
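For a local .mp3 that isn't streamed, that boils down to something like this (a minimal sketch; the file name is just an example, and you must keep a strong reference to the player or ARC will release it before playback finishes):

```objc
#import <AVFoundation/AVFoundation.h>

NSURL *url = [[NSBundle mainBundle] URLForResource:@"song" withExtension:@"mp3"];
NSError *error = nil;

// Keep this in a property/ivar; a local variable would be deallocated immediately.
self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
if (!self.audioPlayer) {
    NSLog(@"Failed to create player: %@", error);
    return;
}
[self.audioPlayer prepareToPlay];
[self.audioPlayer play];
```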
As for Apple's validation, I don't think your application would be rejected for using the Media Player framework to play an audio file. In fact, here they explicitly say that you can do just that:
Choose the right technology for your needs:
To play the audio items in a user’s iPod library, or to play local or
streamed movies, use the Media Player framework. Classes in this
framework automatically support sending audio and video to AirPlay
devices such as Apple TV.
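And if you do go with the Media Player framework for its built-in UI, playing an audio file looks just like playing a movie. A quick sketch (again, the file name is just an example, and the controller needs a strong reference for as long as it plays):

```objc
#import <MediaPlayer/MediaPlayer.h>

NSURL *url = [[NSBundle mainBundle] URLForResource:@"song" withExtension:@"mp3"];

self.moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:url];
self.moviePlayer.view.frame = self.view.bounds;
[self.view addSubview:self.moviePlayer.view];
[self.moviePlayer play];
```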
Not sure about performance and memory issues though!
Best of luck.
I'm currently trying to get at the outgoing audio signal of my iOS app so that I can send it to Audiobus. I need the outgoing AudioBufferLists in order to route them. I'm using OpenAL for audio playback.
In the best case, I'd even be able to modify the outgoing signal to apply effects to it.
There currently appears to be no public API to access the output of OpenAL in an iOS app.
If you want the output, you will need to play the sound with another audio API, such as Audio Queues with uncompressed raw PCM audio or the RemoteIO Audio Unit, so that you can grab the audio output buffers.
You might want to check out how this guy made his own audio mixing object for OpenAL so he could achieve this:
http://www.cuppadev.co.uk/openal-sucks-write-your-own-audio-mixer/
Rather than using OpenAL, you could use CoreAudio with the 3D Mixer Audio Unit (kAudioUnitSubType_AU3DMixerEmbedded). Then you have control over where your output goes. Obviously doing this will sacrifice some portability (you'll be OK with Mac OS X, but not Windows, Linux or Android).
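A skeleton of that approach might look like the following (just a sketch: error checking is omitted, and you would still need to configure the mixer's input busses and feed them with render callbacks in place of your OpenAL sources):

```objc
#import <AudioToolbox/AudioToolbox.h>

AUGraph graph;
AUNode mixerNode, ioNode;

NewAUGraph(&graph);

AudioComponentDescription mixerDesc = {
    .componentType         = kAudioUnitType_Mixer,
    .componentSubType      = kAudioUnitSubType_AU3DMixerEmbedded,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AudioComponentDescription ioDesc = {
    .componentType         = kAudioUnitType_Output,
    .componentSubType      = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};

AUGraphAddNode(graph, &mixerDesc, &mixerNode);
AUGraphAddNode(graph, &ioDesc, &ioNode);
AUGraphOpen(graph);

// Mixer output -> RemoteIO input. This connection point is also where
// you can attach a render notify to see (or modify) every output buffer.
AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

AudioUnit mixerUnit;
AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);

AUGraphInitialize(graph);
AUGraphStart(graph);
```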
Say you want to playback exactly what the iPhone mic is picking up in real-time. Which framework/class would be used?
You'll need to use the Core Audio framework for this. Specifically, look into audio graphs, audio units, and RemoteIO. Plenty of sample code for those to get you started.
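A bare-bones RemoteIO pass-through looks something like this (a sketch with error handling omitted; it assumes an active AVAudioSession with the PlayAndRecord category, and that the input and output stream formats match, which in practice means setting a common client format on both scopes):

```objc
#import <AudioToolbox/AudioToolbox.h>

static OSStatus PassThrough(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData) {
    AudioUnit remoteIO = (AudioUnit)inRefCon;
    // Pull the microphone samples (input is bus 1) straight into the
    // output buffers, so whatever the mic hears is played back.
    return AudioUnitRender(remoteIO, ioActionFlags, inTimeStamp,
                           1, inNumberFrames, ioData);
}

void StartPassThrough(void) {
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioUnit remoteIO;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &remoteIO);

    // Input on RemoteIO is disabled by default; switch it on (bus 1).
    UInt32 one = 1;
    AudioUnitSetProperty(remoteIO, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &one, sizeof(one));

    // Render callback on the output bus (bus 0) drives the pass-through.
    AURenderCallbackStruct cb = { PassThrough, remoteIO };
    AudioUnitSetProperty(remoteIO, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(remoteIO);
    AudioOutputUnitStart(remoteIO);
}
```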
Hey, I'm a new Objective-C developer. I'm trying to record the audio coming out of the iPhone's speakers. I can capture audio from the microphone and record that, but I cannot record the audio my iPhone itself produces. Please help me.
Unfortunately, there is no way to directly capture from the "audio bus". You can capture audio via the internal microphone or a headset microphone, but that's it. If you are rendering the audio yourself, you could of course also write that audio out to a file at the same time. That's pretty much your only option.
Yes, you only get a handle on the audio generated by your own process. There is no way to get at the audio generated by the rest of the system.