Play a beep that loops and change the frequency/speed - iOS

I am creating an iPhone application that uses audio.
I want to play a beep sound that loops indefinitely.
I found an easy way to do that using the higher-level AVAudioPlayer with numberOfLoops set to -1. It works fine.
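For reference, the looping setup is just a few lines (this sketch assumes a beep.wav bundled with the app):

    #import <AVFoundation/AVFoundation.h>

    NSURL *url = [[NSBundle mainBundle] URLForResource:@"beep" withExtension:@"wav"];
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
    player.numberOfLoops = -1; // any negative value loops until -stop is called
    [player play];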
But now I want to play this audio and be able to change the rate/speed. It should work like the warning sound a car makes when approaching an obstacle: at the beginning the beep repeats at a low frequency, and that frequency increases until it becomes a continuous tone, biiiiiiiiiiiip...
It seems this is not feasible using the high-level AVAudioPlayer, and even looking at AudioToolbox I found no solution.
Any help?

Take a look at Dave Dribin's A440 sample application, which plays a constant 440 Hz tone on the iPhone / iPad. It uses the lower-level Audio Queue Services, but it does what you're asking (short of the realtime tone adjustment, which would just require a tweak of the existing code).
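For context, the heart of such a tone generator is an Audio Queue output callback that fills each buffer with a sine wave; the frequency field below is what you would tweak at runtime to speed the beep up. This is a rough sketch of the idea, not Dave Dribin's actual code:

    #import <AudioToolbox/AudioToolbox.h>
    #import <math.h>

    typedef struct {
        double phase;      // current phase of the sine wave
        double frequency;  // change this between callbacks to alter the tone
        double sampleRate;
    } ToneState;

    // Audio Queue output callback: fills each buffer with 16-bit mono sine samples.
    static void ToneCallback(void *inUserData, AudioQueueRef queue, AudioQueueBufferRef buffer)
    {
        ToneState *state = (ToneState *)inUserData;
        SInt16 *samples = (SInt16 *)buffer->mAudioData;
        UInt32 count = buffer->mAudioDataBytesCapacity / sizeof(SInt16);
        double phaseStep = 2.0 * M_PI * state->frequency / state->sampleRate;

        for (UInt32 i = 0; i < count; i++) {
            samples[i] = (SInt16)(sin(state->phase) * 32767.0);
            state->phase += phaseStep;
        }
        buffer->mAudioDataByteSize = count * sizeof(SInt16);
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL); // hand the buffer back to the queue
    }

The queue itself is created with AudioQueueNewOutput and primed with a few buffers; the A440 sample shows the full setup.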

Related

Simultaneous recording and playback at different sample rates in iOS

I am working on an iOS app where audio recording and playback should happen simultaneously but at different sampling rates.
Recording will be done through a connected USB Audio device, and playback is done through the inbuilt speaker. I am using AudioUnits for both recording and playback.
AVAudioSession category is set to AVAudioSessionCategoryPlayAndRecord.
The problem is that the recording sample rate should be 96 kHz whereas the playback sample rate should be 8 kHz, and both should run simultaneously.
Currently, whenever I use AVAudioSessionCategoryPlayAndRecord and set the preferred sample rate to 96 kHz, the sampleRate property of AVAudioSession ultimately remains at 48 kHz, and I am losing half of the samples while recording.
If I use AVAudioSessionCategoryRecord, recording happens just fine, but I can't run audio playback simultaneously with that category. I even tried AVAudioSessionCategoryMultiRoute with no luck; there the sample rate remains at 44.1 kHz.
So my question is: in iOS, how can I use different sample rates for recording and playback and still run them simultaneously? Any advice or references are greatly appreciated.
Please let me know if any other details are required.
I ended up using the AVAudioSessionCategoryPlayAndRecord category.
The preferred sample rate is set to 48 kHz.
To achieve a higher sampling rate for recording, I am using setPreferredInputNumberOfChannels. By increasing the number of input channels (which affects only recording) you can have a different sampling rate for recording.
In case the recording sampling rate is not a multiple of the playback sample rate, you may need to add some interleaving/padding to the input samples (assuming you have control over how the data is formatted by the USB device).
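For illustration, the session setup looks roughly like this (the channel count of 2 is a placeholder; use whatever your USB device exposes):

    #import <AVFoundation/AVFoundation.h>

    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    [session setPreferredSampleRate:48000.0 error:&error];
    // Raising the input channel count affects only the recording side.
    [session setPreferredInputNumberOfChannels:2 error:&error];
    [session setActive:YES error:&error];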

Choosing between AVAudioPlayer and AudioToolbox for many small audio clips

As is demonstrated in this answer, I have recently learned how to play audio files using both AVAudioPlayer and AudioToolbox. I have successfully played a single audio test file using both methods. However, I want to ask about which one I should actually use in my app (or if it even matters).
Here are the relevant characteristics of my app:
There are about 800 audio clips.
Most of the clips last less than one second.
Any could be chosen to be played at random by the user, but only a small subset will be used on any particular run.
No special volume control or playback options are needed.
These are my questions:
Which method for playing a sound would be better? Why?
Should I preload the sounds or just load them when they are needed? I'm guessing that preloading 800 sounds every time is a bad idea. But if I wait to load them until they are needed, I am worried about performance (i.e., a noticeable pause before the clip is played).
Do I need to play sounds on a background thread?
So my concerns in choosing which audio player to go with are memory and performance. I couldn't tell from any of the documentation that I saw which is better in this case.
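For reference, the AudioToolbox route being compared here is System Sound Services, a lightweight, fire-and-forget API well suited to short clips. A minimal sketch (the file name is hypothetical):

    #import <AudioToolbox/AudioToolbox.h>

    // Creating the sound ID effectively preloads the clip; it can be reused for later plays.
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"clip42" withExtension:@"caf"];
    SystemSoundID soundID;
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)url, &soundID);
    AudioServicesPlaySystemSound(soundID); // returns immediately, plays asynchronously
    // When the clip is no longer needed:
    // AudioServicesDisposeSystemSoundID(soundID);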

In AVFoundation, how to synchronize recording and playback

I am interested in recording media using an AVCaptureSession in iOS while playing media back using an AVPlayer (specifically, I am playing back audio and recording video, but I'm not sure it matters).
The problem is, when I play the resulting media back together later, they are out of sync. Is it possible to synchronize them, either by ensuring that playback and recording start simultaneously, or by discovering what the offset is between them? I probably need the sync to be on the order of 10 ms. It is unreasonable to assume that I can always capture audio (since the user may use headphones), so syncing via analysis of original and recorded audio is not an option.
This question suggests that it's possible to end playback and recording simultaneously and determine the initial offset from the resulting lengths that way, but I'm unclear on how to get them to end simultaneously. I have two cases: 1) the audio playback runs out, and 2) the user hits the "stop recording" button.
This question suggests priming and then applying a fixed, but possibly device-dependent delay, which is obviously a hack, but if it's good enough for audio it's obviously worth considering for video.
Is there another media layer I can use to perform the required synchronization?
Related: this question is unanswered.
If you are specifically using AVPlayer to play back audio, I would suggest using Audio Queue Services instead. It is seamless and fast, since it reads buffer by buffer, and play/pause is quicker than with AVPlayer.
You may also be missing an initial call to prepareToPlay, which can add overhead before the audio starts (note that prepareToPlay is an AVAudioPlayer method; AVPlayer has no direct equivalent).
Hope this helps.
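For completeness, using prepareToPlay looks like this (url is assumed to point at your audio file):

    #import <AVFoundation/AVFoundation.h>

    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
    [player prepareToPlay]; // preloads the audio buffers so -play starts with minimal latency
    [player play];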

Problems in Audio/Video Slow motion

I am trying to do slow motion for my video file along with its audio. In my case, I have to do ramped slow motion (gradually slowing down and speeding up, like a parabola), not linear slow motion.
Ref: linear slow motion vs. ramped slow motion (illustrations omitted).
What I have done so far:
Used AVFoundation for the first three steps:
Separated the audio and video tracks from the video file.
Did slow motion for the video using the AVFoundation API (scaleTimeRange). It's really working fine.
The same is not working for audio. There seems to be a bug in Apple's API itself (Bug ID: 14616144). The relevant question is scaleTimeRange has no effect on audio type AVMutableCompositionTrack.
So I switched to Dirac, but later found a limitation in Dirac's open source edition: it doesn't support dynamic time stretching.
Finally, I am trying to do it with OpenAL.
I've taken a sample OpenAL program from the Apple developer forum and executed it.
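For reference, the video-side scaling that does work uses AVMutableComposition's scaleTimeRange:toDuration:, roughly like this (the times are placeholders):

    #import <AVFoundation/AVFoundation.h>

    // Assume `composition` is an AVMutableComposition built from the separated video track.
    // Stretch the first second to four seconds, i.e. 4x slow motion.
    CMTimeRange range = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(1.0, 600));
    [composition scaleTimeRange:range toDuration:CMTimeMakeWithSeconds(4.0, 600)];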
Here are my questions:
Can I store/save the processed audio in OpenAL? If it's not directly possible with OpenAL, can it be done with AVFoundation + OpenAL?
Very importantly, how do I do slow motion or stretch the time scale with OpenAL? If I know how to time stretch, I can apply the logic for ramped slow motion.
Is there any other way?
I can't really speak to 1 or 2, but time scaling audio can be as easy as resampling. If you have RAW/PCM audio sampled at 48 kHz and want to play it back at half speed, resample it to 96 kHz and play the audio at 48 kHz. Since you have twice the number of samples, it will take twice as long to play. Generally:
scaledSampleRate = (originalSampleRate / playRate);
or
playRate = (originalSampleRate / scaledSampleRate);
This will affect the pitch of the track; however, that may be the desired effect, since that behavior is somewhat expected in "slow motion" audio. There are more advanced techniques that preserve pitch while scaling time. The open source software Audacity implements these algorithms; you could find inspiration there. There are many resources on the web that explain the tradeoffs of pitch shifting vs. time stretching.
http://en.wikipedia.org/wiki/Audio_time-scale/pitch_modification
http://www.dspdimension.com/admin/time-pitch-overview/
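As a concrete illustration of the naive approach (a minimal sketch with no pitch preservation), slowed-down audio can be produced by generating interpolated samples:

    #include <stdint.h>
    #include <stdlib.h>

    // Naive linear-interpolation time stretch: playRate 0.5 doubles the sample
    // count, so playback at the original rate takes twice as long (pitch drops).
    int16_t *StretchSamples(const int16_t *input, size_t inCount,
                            double playRate, size_t *outCount)
    {
        *outCount = (size_t)(inCount / playRate);
        int16_t *output = malloc(*outCount * sizeof(int16_t));
        for (size_t i = 0; i < *outCount; i++) {
            double pos = i * playRate;                 // fractional read position
            size_t j = (size_t)pos;
            double frac = pos - (double)j;
            int16_t a = input[j < inCount ? j : inCount - 1];
            int16_t b = input[j + 1 < inCount ? j + 1 : inCount - 1];
            output[i] = (int16_t)(a + frac * (b - a)); // interpolate between neighbors
        }
        return output;
    }

For ramped slow motion, playRate would vary smoothly across the clip instead of staying constant.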
Another option you may not have considered is muting the audio during slow motion. That seems to be the technique employed by most AV playback utilities. However, depending on your use case, distorted audio does indicate time is being manipulated.
I have applied slow motion to a complete video, including the audio; this might help you. Check this link: How to do Slow Motion video in iOS

How can I achieve 3D sound on iOS?

I am interested in a way to play sounds from specific points in space relative to the user.
Basically I would like to say the user is at point (0,0), a sound comes from (10,10), and I take that sound and send it through some library that plays it sounding as though it came from (10,10). Performance in doing this would be very important.
If it wasn't painfully obvious from reading the question, I have very little experience with audio on any device.
After doing a little research, it seems the options are to use the OpenAL framework, which is supported by Apple, or to essentially roll your own on top of Audio Units.
There is a 3D Mixer Audio Unit that Apple provides, which requires you to develop a lot of understanding of Audio Units.
Then there is OpenAL, which is a cross-platform audio framework where you can position a "source" and a "listener" and it will compute attenuation and stereo for you.
Both require a low-level understanding of audio playback and are not very fun. So I figured I might as well jump all the way into the water and learn about Audio Units, since I may want to do some more specialized stuff in the future.
This is an easy wrapper for the iOS OpenAL functionality: ObjectAL-for-iPhone
Play around with the example and see if it does what you want.
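Under the hood, the positioning that ObjectAL wraps boils down to a couple of OpenAL calls. A minimal sketch, assuming an OpenAL context is already current and buffer already holds your PCM data:

    #include <OpenAL/al.h>

    ALuint source;
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, buffer); // attach the previously loaded PCM data

    // Listener at the origin, source at (10, 10) in the horizontal plane.
    alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
    alSource3f(source, AL_POSITION, 10.0f, 0.0f, 10.0f);

    alSourcePlay(source); // OpenAL computes attenuation and panning for you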
