I am using the rate property of AVPlayer to change the playback speed of an audio sample. It seems to always apply pitch correction, but I want no pitch correction, so that when the audio speeds up the pitch rises like a record or an old tape player. Is there any way to turn off pitch correction altogether in AVPlayer?
I am currently using Swift 3, but Objective C answers are welcome, too.
Not sure if this is possible using an AVPlayer, but if you're just using it to play audio, you can easily do this with an AVAudioEngine:
var audioPlayer = AVAudioPlayerNode()
var engine = AVAudioEngine()
var speedControl = AVAudioUnitVarispeed()
// engine setup:
do {
    // forReading: expects a URL, not a String; this assumes myFile.mp3 is bundled with the app
    let file = try AVAudioFile(forReading: Bundle.main.url(forResource: "myFile", withExtension: "mp3")!)

    engine.attach(audioPlayer)
    engine.attach(speedControl)

    engine.connect(audioPlayer, to: speedControl, format: nil)
    engine.connect(speedControl, to: engine.mainMixerNode, format: nil)

    audioPlayer.scheduleFile(file, at: nil)

    try engine.start()
    audioPlayer.play()   // start the player node so the scheduled audio is actually heard
} catch {
    print(error)
}
// changing rate without pitch correction:
speedControl.rate = 0.91
Actually, this is possible with AVPlayer --
let player = AVPlayer(url: fileURL)
// to turn off pitch correction:
player.currentItem?.audioTimePitchAlgorithm = .varispeed
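For example, with the algorithm set to .varispeed, adjusting the player's rate now changes speed and pitch together (the 1.5 value below is just an illustration):
player.rate = 1.5   // plays faster and noticeably higher in pitch, like speeding up a tape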
Using AVAudioUnitTimePitch is what you need. Changing its rate property (the default is 1.0) changes the playback speed without changing the pitch.
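If you go this route, here is a minimal sketch of how AVAudioUnitTimePitch slots into an AVAudioEngine graph (the node names are illustrative, and you would still need to schedule a file on the player node and start the engine):
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()
timePitch.rate = 1.5         // 1.5x speed, pitch preserved
// timePitch.pitch = 700     // optionally shift the pitch separately, in cents

engine.attach(player)
engine.attach(timePitch)
engine.connect(player, to: timePitch, format: nil)
engine.connect(timePitch, to: engine.mainMixerNode, format: nil)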
Related
I am building a karaoke app with the ability to sing along with video, so here is my problem:
I am recording the user's video (video only, from the front camera) while applying voice filters with AudioKit to a separate audio recording.
Now in my playback, I want to play the video and the audio in sync, but I haven't succeeded because the video and audio end up out of sync.
I am using AKPlayer for the audio, so I can apply the voice mod, and VLCKit for playing the user's video.
do {
    //MARK: VLC kit part of the video setup
    Vlc_VideoPlayer = VLCMediaPlayer()
    Vlc_VideoPlayer.media = VLCMedia(url: recordVideoURL)
    Vlc_VideoPlayer.addObserver(self, forKeyPath: "time", options: [], context: nil)
    Vlc_VideoPlayer.addObserver(self, forKeyPath: "remainingTime", options: [], context: nil)
    Vlc_VideoPlayer.drawable = self.CameraView

    //MARK: Audiokit with AKPlayer Setup
    file = try AKAudioFile(forReading: recordVoiceURL)
    player = AKPlayer(audioFile: file)
    self.player.preroll()
    delay = AKVariableDelay(player)
    delay.rampTime = 0.5
    delayMixer = AKDryWetMixer(player, delay)
    reverb = AKCostelloReverb(delayMixer)
    reverbMixer = AKDryWetMixer(delayMixer, reverb)
    booster = AKBooster(reverbMixer)
    tracker = AKAmplitudeTracker(booster)

    AudioKit.output = tracker
    try AudioKit.start()
} catch {
    print(error)
}
self.startPlayers()
Now the startPlayers function:
func startPlayers() {
    DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
        if AudioKit.engine.isRunning {
            self.Vlc_VideoPlayer.audio.isMuted = true
            self.Vlc_VideoPlayer.play()

            self.player.isLooping = false
            self.player.play()
        } else {
            self.startPlayers()
        }
    }
}
I don't know anything about the VLC player, but with the built-in AVPlayer there is an option to sync to a clock:
var time: TimeInterval = 1 // 1 second in the future
videoPlayer.masterClock = CMClockGetHostTimeClock()
let hostTime = mach_absolute_time()
let cmHostTime = CMClockMakeHostTimeFromSystemUnits(hostTime)
let cmVTime = CMTimeMakeWithSeconds(time, preferredTimescale: videoPlayer.currentTime().timescale)
let futureTime = CMTimeAdd(cmHostTime, cmVTime)
videoPlayer.setRate(1, time: CMTime.invalid, atHostTime: futureTime)
AKPlayer supports syncing to the mach_absolute_time() hostTime through its scheduling functions. As you have it above, the two will start close together, but there is no guarantee of any real sync; a sketch of deriving a shared start time follows.
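For example, assuming AKPlayer's play(at:) scheduling API, you could compute one shared host time and hand it to both the audio and video scheduling calls (a sketch; the one-second lead and the helper function are illustrative):
func hostTicks(fromSeconds seconds: Double) -> UInt64 {
    var info = mach_timebase_info_data_t()
    mach_timebase_info(&info)
    let nanos = seconds * Double(NSEC_PER_SEC)
    return UInt64(nanos * Double(info.denom) / Double(info.numer))
}

let startHostTime = mach_absolute_time() + hostTicks(fromSeconds: 1)
let audioStartTime = AVAudioTime(hostTime: startHostTime)
player.play(at: audioStartTime)   // AKPlayer's scheduled start
// ...then convert the same startHostTime with CMClockMakeHostTimeFromSystemUnits
// for the video player's setRate(_:time:atHostTime:) call.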
Getting two players to start in sync will only work out of pure luck, and unless you have a means to synchronize playback after it has started, it will not be perfect. Ideally, you should play the audio with VLC as well, to make use of its internal synchronization tools.
To iterate on what you have right now, I would suggest starting playback with VLC until it has decoded the first frame, pausing, starting your audio, and continuing playback with VLC as soon as you have decoded the first audio sample. This will still not be perfect, but probably better; a rough sketch of that idea follows.
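One rough way to approximate that, piggybacking on the "time" KVO observer you already register in the setup code above (VLCMediaPlayer, AKPlayer, and the property names match the question; treat this as a heuristic sketch, not a tested solution):
var didStartAudio = false

override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                           change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
    guard keyPath == "time", !didStartAudio else { return }
    didStartAudio = true

    // The first time update means VLC has started decoding: hold the video,
    // start the audio, then let the video continue. (Resuming immediately is a
    // simplification of "as soon as you decoded the first audio sample".)
    Vlc_VideoPlayer.pause()
    player.play()
    Vlc_VideoPlayer.play()
}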
I would like to make a 5-band audio equalizer (60Hz, 230Hz, 910Hz, 4kHz, 14kHz) using AVAudioEngine. I would like to have the user input gain per band through a vertical slider and accordingly adjust the audio that is playing. I tried using AVAudioUnitEQ to do this, but I hear no difference when playing the audio. I tried to hardcode in values to specify a gain at each frequency, but it still does not work. Here is the code I have:
var audioEngine: AVAudioEngine = AVAudioEngine()
var equalizer: AVAudioUnitEQ!
var audioPlayerNode: AVAudioPlayerNode = AVAudioPlayerNode()
var audioFile: AVAudioFile!
// in viewDidLoad():
equalizer = AVAudioUnitEQ(numberOfBands: 5)
audioEngine.attach(audioPlayerNode)
audioEngine.attach(equalizer)
let bands = equalizer.bands
let freqs = [60, 230, 910, 4000, 14000]
audioEngine.connect(audioPlayerNode, to: equalizer, format: nil)
audioEngine.connect(equalizer, to: audioEngine.outputNode, format: nil)
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
}
bands[0].gain = -10.0
bands[0].filterType = .lowShelf
bands[1].gain = -10.0
bands[1].filterType = .lowShelf
bands[2].gain = -10.0
bands[2].filterType = .lowShelf
bands[3].gain = 10.0
bands[3].filterType = .highShelf
bands[4].gain = 10.0
bands[4].filterType = .highShelf
do {
    if let filepath = Bundle.main.path(forResource: "song", ofType: "mp3") {
        let filepathURL = NSURL.fileURL(withPath: filepath)
        audioFile = try AVAudioFile(forReading: filepathURL)
        audioEngine.prepare()
        try audioEngine.start()
        audioPlayerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
        audioPlayerNode.play()
    }
} catch _ {}
Since the low frequencies have a gain of -10 and the high frequencies have a gain of 10, there should be a very noticeable difference when playing any media. However, when the media starts playing, it sounds the same as if played without any equalizer attached.
I'm not sure why this is happening, but I tried several different things to debug it. I thought it might be the order of the calls, so I tried moving the audioEngine.connect calls to after all of the bands are adjusted, but that did not make a difference either.
I tried this same code using an AVAudioUnitTimePitch, and it worked perfectly, so I am dumbfounded as to why it does not work with AVAudioUnitEQ.
I do not want to use any third-party libraries or cocoa pods for this project, I would like to do it using AVFoundation alone.
Any help would be greatly appreciated!
Thanks in advance.
AVAudioUnitEQFilterParameters
Looking through the documentation, I noticed that I had adjusted all of the parameters except bypass, and it seems that changing this flag fixed everything!
So, I believe the main issue here is that each AVAudioUnitEQ band comes bypassed by default, and the programmer must explicitly set bypass = false rather than relying on the system-provided values.
So, I changed
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
}
to
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
    bands[i].bypass = false
    bands[i].filterType = .parametric
}
and everything started working. Furthermore, to make an effective equalizer that allows the user to modify individual frequencies, the filterType for each band should be set to .parametric.
I am still unsure what I should set the bandwidth to, but I can probably check online for that or just tweak it until the sound matches a different equalizer application; a small sketch follows.
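For reference, bandwidth on AVAudioUnitEQFilterParameters is specified in octaves; something around one octave per band seems a plausible starting point for a 5-band EQ (the 1.0 value here is an assumption to tweak by ear):
for i in 0...(bands.count - 1) {
    bands[i].frequency = Float(freqs[i])
    bands[i].bypass = false
    bands[i].filterType = .parametric
    bands[i].bandwidth = 1.0   // width in octaves; adjust until it sounds right
}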
I'm developing a musical instrument for iOS with two audio samples (a high and a low pitch) that are played by view touches. The first sample is very short (half a second) and the other is a little longer (two seconds). When I play the low pitch sound repeatedly and quickly, there is an audio click/pop. There is no problem playing the high pitch sound.
Both audio samples have a fade in and fade out at their start and end, and there is no clipping problem with them.
I'm using this code to load the audio files (simplified here):
engine = AVAudioEngine()
mixer = engine.mainMixerNode

let player = AVAudioPlayerNode()

do {
    let audioFile = try AVAudioFile(forReading: instrumentURL)
    let audioFormat = audioFile.processingFormat
    let audioFrameCount = UInt32(audioFile.length)
    let audioFileBuffer = AVAudioPCMBuffer(PCMFormat: audioFormat, frameCapacity: audioFrameCount)
    try audioFile.readIntoBuffer(audioFileBuffer)

    engine.attachNode(player)
    engine.connect(player, to: mixer, format: audioFileBuffer.format)
} catch {
    print("Init Error!")
}
and this code to play the samples:
player.play()
player.scheduleBuffer(audioFileBuffer, atTime: nil, options: option, completionHandler: nil)
I use similar functionality on Android with the same audio samples, without any click/pop problem.
Is this click/pop problem an implementation error?
How can I fix this problem?
Update 1
I just tried another approach with AVAudioPlayer, and I got the same pop/click problem.
Update 2
I think the problem is starting the audio file again before it finishes. The sound stops abruptly.
I am playing a sound using AVAudioPlayer.
The sound volume on iPods (both iOS 7 & iOS 8) is good.
But when I play the same sound on iPhones, it plays at a very low volume.
Here is my code:
var audioPlayer: AVAudioPlayer?
var soundURL:NSURL? = NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource("sound", ofType: "mp3")!)
audioPlayer = AVAudioPlayer(contentsOfURL: soundURL, error: nil)
audioPlayer?.prepareToPlay()
audioPlayer?.volume = 1
audioPlayer?.play()
I've already included the AudioToolbox.framework library in my project.
How can I improve the sound on iPhone 6 and 6+?
EDIT
Recently I noticed that the sound automatically increased for a couple of seconds, but I don't know what is happening.
The volume is low because the sound is coming out of the top speaker of the iPhone (the earpiece used for calls).
You have to override the AVAudioSession output port to the main (bottom) speaker.
This is the code that routes audio output to the speaker:
AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSessionPortOverride.Speaker, error: &error)
Swift 3.0:
do {
    try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSessionPortOverride.speaker)
} catch _ {
}
Solution for Swift 4:
let session = AVAudioSession.sharedInstance()
try? session.setCategory(.playAndRecord, mode: .default, policy: .default, options: .defaultToSpeaker)
P.S. For full class you can check my gist
I think if you set the volume of the AVAudioPlayer to 1, then the problem is the system volume.
You may be using a different audio category when playing your sound, and the volume differs between categories and audio routes (built-in speaker, headset, or Bluetooth). The default category in your app will be ambient, but the Music app's is playback.
You can also try adjusting the volume while your sound is playing rather than before it's played. A small sketch of switching the category follows.
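A minimal sketch of switching the session to the playback category before playing (the do/catch and the final play call are just illustrative):
do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback, with: [])
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("Audio session error: \(error)")
}
audioPlayer?.play()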
You need to maximize the media volume as well.
extension MPVolumeView {
    var volumeSlider: UISlider {
        self.showsRouteButton = false
        self.showsVolumeSlider = false
        self.hidden = true
        var slider = UISlider()
        for subview in self.subviews {
            if subview.isKindOfClass(UISlider) {
                slider = subview as! UISlider
                slider.continuous = false
                (subview as! UISlider).value = AVAudioSession.sharedInstance().outputVolume
                return slider
            }
        }
        return slider
    }
}
Then change the volume like this:
func setMediaVolume(volume: Float) {
    MPVolumeView().volumeSlider.value = volume
}
In your case, call setMediaVolume(1) right before you play your audio.
Also make sure that the current audio route is the speaker and not the earpiece; you can do that using:
try! AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSessionPortOverride.Speaker)
I also experienced this in a Swift playground; the volume was much lower than in the original file. Just increase the volume:
audioPlayer?.volume = 3
When using an AVAudioPlayerNode to schedule a short buffer to play immediately on a touch event ("Touch Up Inside"), I've noticed audible glitches/artifacts on playback while testing. The audio does not glitch at all in the iOS simulator, but there is audible distortion on playback when I run the app on an actual iOS device. The audible distortion occurs randomly (the triggered sound sometimes sounds great, while other times it sounds distorted).
I've tried using different audio files and file formats, and preparing the buffer for playback using the prepareWithFrameCount method, but unfortunately the result is always the same, and I'm stuck wondering what could be going wrong.
I've stripped the code down to globals for clarity and simplicity. Any help or insight would be greatly appreciated. This is my first attempt at developing an iOS app and my first question posted on Stack Overflow.
let filePath = NSBundle.mainBundle().pathForResource("BD_withSilence", ofType: "caf")!
let fileURL: NSURL = NSURL(fileURLWithPath: filePath)!
var error: NSError?
let file = AVAudioFile(forReading: fileURL, error: &error)
let fileFormat = file.processingFormat
let frameCount = UInt32(file.length)
let buffer = AVAudioPCMBuffer(PCMFormat: fileFormat, frameCapacity: frameCount)
let audioEngine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
func startEngine() {
    var error: NSError?
    file.readIntoBuffer(buffer, error: &error)

    audioEngine.attachNode(playerNode)
    audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: buffer.format)
    audioEngine.prepare()

    func start() {
        var error: NSError?
        audioEngine.startAndReturnError(&error)
    }
    start()
}

startEngine()
let frameCapacity = AVAudioFramePosition(buffer.frameCapacity)
let frameLength = buffer.frameLength
let sampleRate: Double = 44100.0
func play() {
    func scheduleBuffer() {
        playerNode.scheduleBuffer(buffer, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil)
        playerNode.prepareWithFrameCount(frameLength)
    }

    if playerNode.playing == false {
        scheduleBuffer()
        let time = AVAudioTime(sampleTime: frameCapacity, atRate: sampleRate)
        playerNode.playAtTime(time)
    } else {
        scheduleBuffer()
    }
}
// triggered by a "Touch Up Inside" event on a UIButton in my ViewController
@IBAction func triggerPlay(sender: AnyObject) {
    play()
}
Update:
OK, I think I've identified the source of the distortion: the volume of the node(s) is too great at output and causes clipping. By adding these two lines in my startEngine function, the distortion no longer occurs:
playerNode.volume = 0.8
audioEngine.mainMixerNode.volume = 0.8
However, I still don't know why I need to lower the output; my audio file itself does not clip. I'm guessing that it might be a result of the way AVAudioPlayerNodeBufferOptions.Interrupts is implemented. When a buffer interrupts another buffer, could there be an increase in output volume as a result of the interruption, causing clipping? I'm still looking for a solid understanding of why this occurs. If anyone is willing/able to provide any clarification about this, that would be fantastic!
Not sure if this is the problem you experienced in 2015; it may be the same issue that #suthar experienced in 2018.
I experienced a very similar problem, and it was due to the fact that the sample rate on the device is different from the simulator's. On macOS it is 44100, and on iOS devices (late-model ones) it is 48000.
So when you fill your buffer with 44100 samples on a 48000 device, you get 3900 samples of silence. When played back it doesn't sound like silence, it sounds like a glitch.
I used the mainMixer format when connecting my playerNode and also when creating my pcmBuffer. Don't refer to 48000 or 44100 anywhere in the code.
audioEngine.attach(playerNode)
audioEngine.connect(playerNode, to: mixerNode, format: mixerNode.outputFormat(forBus: 0))

let pcmBuffer = AVAudioPCMBuffer(pcmFormat: SynthEngine.shared.audioEngine.mainMixerNode.outputFormat(forBus: 0),
                                 frameCapacity: AVAudioFrameCount(bufferSize))
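If you want to confirm the mismatch on a device, you can compare the hardware, mixer, and file sample rates directly (a small diagnostic sketch; audioEngine and file stand in for whatever engine and AVAudioFile you are using):
let hardwareRate = AVAudioSession.sharedInstance().sampleRate                  // e.g. 48000 on recent iPhones
let mixerRate = audioEngine.mainMixerNode.outputFormat(forBus: 0).sampleRate
let fileRate = file.processingFormat.sampleRate                                // e.g. 44100 for a typical file

print("hardware: \(hardwareRate), mixer: \(mixerRate), file: \(fileRate)")
// If these differ, size and fill your buffers using the mixer format (as above) rather than a hardcoded rate.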