I am building a karaoke app where the user can sing along with a video, and here is my problem:
I record the user's video (video only, from the front camera) while applying voice filters with AudioKit to a separate audio recording.
In playback, I want to play the video and the audio in sync, but I haven't managed it: the video and audio end up out of sync.
I am using AKPlayer for the audio, so I can apply the voice effects, and VLCKit for playing the user's video.
do {
    //MARK: VLCKit part of the video setup
    Vlc_VideoPlayer = VLCMediaPlayer()
    Vlc_VideoPlayer.media = VLCMedia(url: recordVideoURL)
    Vlc_VideoPlayer.addObserver(self, forKeyPath: "time", options: [], context: nil)
    Vlc_VideoPlayer.addObserver(self, forKeyPath: "remainingTime", options: [], context: nil)
    Vlc_VideoPlayer.drawable = self.CameraView

    //MARK: AudioKit with AKPlayer setup
    file = try AKAudioFile(forReading: recordVoiceURL)
    player = AKPlayer(audioFile: file)
    self.player.preroll()

    delay = AKVariableDelay(player)
    delay.rampTime = 0.5
    delayMixer = AKDryWetMixer(player, delay)
    reverb = AKCostelloReverb(delayMixer)
    reverbMixer = AKDryWetMixer(delayMixer, reverb)
    booster = AKBooster(reverbMixer)
    tracker = AKAmplitudeTracker(booster)

    AudioKit.output = tracker
    try AudioKit.start()
} catch {
    print(error)
}
self.startPlayers()
Now the startPlayers function:
func startPlayers() {
    DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
        if AudioKit.engine.isRunning {
            self.Vlc_VideoPlayer.audio.isMuted = true
            self.Vlc_VideoPlayer.play()
            self.player.isLooping = false
            self.player.play()
        } else {
            self.startPlayers()
        }
    }
}
I don't know anything about the VLC player, but with the built-in AVPlayer there is an option to sync to a clock:
var time: TimeInterval = 1 // 1 second in the future
videoPlayer.masterClock = CMClockGetHostTimeClock()
let hostTime = mach_absolute_time()
let cmHostTime = CMClockMakeHostTimeFromSystemUnits(hostTime)
let cmVTime = CMTimeMakeWithSeconds(time, preferredTimescale: videoPlayer.currentTime().timescale)
let futureTime = CMTimeAdd(cmHostTime, cmVTime)
videoPlayer.setRate(1, time: CMTime.invalid, atHostTime: futureTime)
AKPlayer, in turn, supports syncing to the mach_absolute_time() host time using its scheduling functions. As you have it above, the two players will start close together, but there is no guarantee of any sync.
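A minimal sketch of that scheduling, assuming an AudioKit 4.x AKPlayer that exposes play(at:); the one-second offset mirrors the AVPlayer snippet above:

// Compute a host time one second from now and hand it to AKPlayer.
let startHostTime = mach_absolute_time() + AVAudioTime.hostTime(forSeconds: 1)
player.play(at: AVAudioTime(hostTime: startHostTime))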
Trying to start two players together will only line up out of pure luck, and unless you have a way to synchronize playback after it has started, it will never be perfect. Ideally, you should play the audio with VLC as well, so you can make use of its internal synchronization tools.
To iterate on what you have right now, I would suggest starting playback with VLC until it has decoded the first frame, pausing, starting your audio, and resuming VLC playback as soon as the first audio sample has been decoded. This will still not be perfect, but probably better.
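A rough sketch of that idea, assuming the usual VLCMediaPlayerDelegate callbacks (set Vlc_VideoPlayer.delegate = self somewhere in your setup); the didPauseForSync flag is a hypothetical Bool property on your class:

func mediaPlayerTimeChanged(_ aNotification: Notification!) {
    // The first time change means VLC has decoded and rendered a frame.
    if !didPauseForSync {
        didPauseForSync = true
        Vlc_VideoPlayer.pause()
        player.play()            // start the AKPlayer audio
        Vlc_VideoPlayer.play()   // resume the video right after the audio starts
    }
}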
I'm working on an app that uses CoreHaptics to play a synchronised pattern of vibrations and audio.
The problem is that the audio only gets played through the iPhone's speakers (if the mute switch is not turned on). As soon as I connect my AirPods to the phone, the audio stops playing, but the haptics continue.
My code looks something like this:
let engine = CHHapticEngine()
...
var events = [CHHapticEvent]()
...
let volume: Float = 1
let decay: Float = 0.5
let sustained: Float = 0.5
let audioParameters = [
    CHHapticEventParameter(parameterID: .audioVolume, value: volume),
    CHHapticEventParameter(parameterID: .decayTime, value: decay),
    CHHapticEventParameter(parameterID: .sustained, value: sustained)
]

let breathingTimes = pacer.breathingTimeInSeconds
let combinedTimes = breathingTimes.inhale + breathingTimes.exhale
let audioEvent = CHHapticEvent(
    audioResourceID: selectedAudio,
    parameters: audioParameters,
    relativeTime: 0,
    duration: combinedTimes
)
events.append(audioEvent)
...
let pattern = try CHHapticPattern(events: events, parameterCurves: [])
let player = try engine.makeAdvancedPlayer(with: pattern)
...
try player.start(atTime: CHHapticTimeImmediate)
My idea of activating an audio session before the player starts, to indicate to the system that audio is being played, also didn't change the outcome:
try AVAudioSession.sharedInstance().setActive(true)
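I could also try a fuller session setup like the sketch below, but the .playback category and empty options are just a guess on my part; I have not confirmed that any session configuration affects how CoreHaptics routes its audio:

// Hypothetical fuller session setup before starting the haptic engine.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .default, options: [])
try session.setActive(true)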
Is there a way to route the audio from CoreHaptics to an output other than the built-in speakers?
I'm having a problem recording audio from the microphone of my test device to a .caf file in Swift, Xcode 9.4.1, using the latest version of AudioKit. In a simple test where I send the audio straight from the microphone to the output via an AKBooster, it works just fine and I can hear the mic input coming out of the speakers. I'm more or less following this example, although again using a booster node instead of an oscillator.
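For reference, that working straight-through test boils down to roughly this (AudioKit 4.x):

// Mic -> booster -> speaker, no recording involved.
let microphone = AKMicrophone()
let booster = AKBooster(microphone)
AudioKit.output = booster
try AudioKit.start()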
The following is my code:
class MicrophoneHandler
{
    var microphone : AKMicrophone!
    var booster : AKBooster!
    var mixer : AKMixer!
    var recorder : AKNodeRecorder!
    var file : AKAudioFile!
    var player : AKAudioPlayer!

    init()
    {
        setupMicrophone()

        microphone = AKMicrophone()
        booster = AKBooster(microphone) // Stereo amplifier for microphone
        mixer = AKMixer(booster)

        file = try! AKAudioFile() // File to store recorder output
        player = try? AKAudioPlayer(file: file) // Player to play back recorded audio file
        //player.looping = true

        recorder = try? AKNodeRecorder(node: mixer, file: file)
        try? recorder.record()

        sleep(5)

        let dur = String(format: "%0.3f seconds", recorder.recordedDuration)
        print("Stopped. (\(dur) recorded)")
        recorder.stop()

        //file.exportAsynchronously(name: "Test", baseDir: .documents, exportFormat: .caf){ [weak self] _, _ in
        //}

        //player.play()
        //AudioKit.output = player!
        //try? AudioKit.start()
    }

    func setupMicrophone()
    {
        // Function to initialise microphone settings
        // Adapted from AudioKit example code found here:
        // https://audiokit.io/examples/MicrophoneAnalysis
        AKSettings.bufferLength = .medium
        AKSettings.ioBufferDuration = 0.002 // TODO experiment with this to control latency

        do
        {
            try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP) // Set session type & allow streaming to Bluetooth devices
        } catch
        {
            AKLog("Could not set session category.")
        }

        AKSettings.defaultToSpeaker = true // Output to speaker when audio input is enabled
    }
}
I have commented out the export code as the problem doesn't appear to be here. The console displays the following:
AKMicrophone.swift:init():45:Mixer inputs 8
AKAudioPlayer.swift:updatePCMBuffer():533:AKAudioPlayer Warning: "BF848EC0-94F8-4E39-A211-784B001CED72.caf" is an empty file
2018-11-16 17:49:16.936169+0000 VoxBox[2258:6984570] Audio files cannot be non-interleaved. Ignoring setting AVLinearPCMIsNonInterleaved YES.
AKNodeRecorder.swift:record():104:AKNodeRecorder: recording
Stopped. (0.000 seconds recorded)
As you can see, the recorder appears not to be recording to file for some reason. To my mind, my code should
Initialise the microphone (including settings)
Route the microphone input through a booster followed by a mixer (mixing with an FX bank will happen later)
Create an empty .caf audio file to be written to
Set up a player to play this file when the time comes
Set up a recorder to record the output of the mixer node to the audio file
Record 5 seconds of microphone input to the audio file
Yet for some reason nothing is being recorded. Clearly I am missing something or have misunderstood how AKNodeRecorder works in this regard. I have read as many Stack Overflow questions on similar topics as I could find, dug through the AudioKit documentation, and read a couple of examples from the AudioKit site, but nothing seems to address my particular problem.
Any help would be much appreciated.
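For comparison, the recorder example I looked at boils down to something like the sketch below (from memory, so treat it as a rough sketch rather than exact example code). It sets AudioKit.output and starts the engine before calling record(), and it doesn't block the thread with sleep(); whether those differences explain my empty file, I don't know.

// Minimal AKNodeRecorder sketch (AudioKit 4.x).
let microphone = AKMicrophone()
let mixer = AKMixer(microphone)
AudioKit.output = mixer   // the engine needs an output before it starts
try AudioKit.start()      // the engine has to be running for the recorder's tap to receive audio

let file = try AKAudioFile()
let recorder = try AKNodeRecorder(node: mixer, file: file)
try recorder.record()

// ... later, e.g. when a timer fires instead of sleep(5):
recorder.stop()
print("Recorded \(recorder.recordedDuration) seconds")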
I am having a really difficult time playing audio in the background of my app. The app is a countdown timer that plays bells, and everything originally worked using the timer. Since you cannot run a timer for more than about 3 minutes in the background, I need to play the bells another way.
The user has the ability to choose bells and set the time for these bells to play (e.g. play bell immediately, after 5 minutes, repeat another bell every 10 minutes, etc).
So far I have tried using notifications via DispatchQueue.main, and this works fine as long as the user does not pause the timer. If they re-enter the app and pause, though, I cannot seem to cancel or pause that queued work in any way.
Next I tried using AVAudioEngine and created a set of nodes. These play while the app is in the foreground but seem to stop upon backgrounding. Additionally, when I pause the engine and resume later, it doesn't pause the sequence properly: it either squashes the bells into playing one right after the other or doesn't play them at all.
If anyone has any ideas on how to solve this, that would be great. Technically I could remove everything from the engine and recreate it from the paused time when the user pauses/resumes, but that seems quite costly, and it also doesn't solve the problem of the audio stopping in the background. I have the required background mode 'App plays audio or streams audio/video using AirPlay', and it is also checked under Background Modes in Capabilities.
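One thing I am unsure about is the audio session: my understanding (which may be wrong) is that background playback also needs an active session with the .playback category, something like this:

// Assumed session setup for background playback; not confirmed to fix my issue.
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
try AVAudioSession.sharedInstance().setActive(true)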
Below is a sample of how I tried to set up the audio engine. The registerAndPlaySound method is called several more times to create the chain of nodes (or is this done incorrectly?). The code is a bit messy at the moment because I have been trying many different ways to get this to work.
func setupSounds() {
    if (attached) {
        engine.detach(player)
    }
    engine.attach(player)
    attached = true

    let mixer = engine.mainMixerNode
    engine.connect(player, to: mixer, format: mixer.outputFormat(forBus: 0))

    var bell = ""

    do {
        try engine.start()
    } catch {
        return
    }

    if (currentSession.bellObject?.startBell != nil) {
        bell = (currentSession.bellObject?.startBell)!
        guard let url = Bundle.main.url(forResource: bell, withExtension: "mp3") else {
            return
        }
        registerAndPlaySound(url: url, delay: warmUpTime)
    }
}
func registerAndPlaySound(url: URL, delay: Double) {
    do {
        let file = try AVAudioFile(forReading: url)
        let format = file.processingFormat
        let capacity = file.length
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(capacity)) else {
            return
        }
        do {
            try file.read(into: buffer)
        } catch {
            return
        }

        let sampleRate = buffer.format.sampleRate
        let sampleTime = sampleRate * delay
        let futureTime = AVAudioTime(sampleTime: AVAudioFramePosition(sampleTime), atRate: sampleRate)

        player.scheduleBuffer(buffer, at: futureTime, options: AVAudioPlayerNodeBufferOptions(rawValue: 0), completionHandler: nil)
        player.play()
    } catch {
        return
    }
}
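For context, the chain is built with several calls like the ones below; the bell URLs here are placeholders, and the delays come from the user's settings:

// Example of how registerAndPlaySound gets called for one session.
registerAndPlaySound(url: startBellURL, delay: 0)          // play immediately
registerAndPlaySound(url: intervalBellURL, delay: 5 * 60)  // after 5 minutes
registerAndPlaySound(url: intervalBellURL, delay: 10 * 60) // after 10 minutes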
I need to create an audio player for a streamed URL (m3u8 format). I have created a music player using AVPlayer, but I need to show a visualizer for the streamed song. I have tried different solutions but haven't found any working example.
I have created a visualizer using AVAudioPlayer (averagePower), but it doesn't support streamed URLs.
Any help with showing a visualizer for AVPlayer? Thanks in advance.
I have also tried MYAudioTapProcessor, which most people suggest, but for a streamed URL the tracks property always comes back empty.
I added MYAudioTapProcessor.h and MYAudioTapProcessor.m to the project.
//Initialization of the player
let playerItem = AVPlayerItem(url: URL(string: "https://bitdash-a.akamaihd.net/content/sintel/hls/playlist.m3u8")!)
let audioPlayer = AVPlayer(playerItem: playerItem)

//Added a periodic time observer
audioPlayer.addPeriodicTimeObserver(forInterval: CMTimeMakeWithSeconds(1, preferredTimescale: 1), queue: DispatchQueue.main) { _ in
    if audioPlayer.currentItem?.status == .readyToPlay {
        if let playerItem: AVPlayerItem = audioPlayer.currentItem {
            print(playerItem.asset.tracks.count)
            if !playerItem.asset.tracks.isEmpty {
                self.tapProcessor = MYAudioTapProcessor(avPlayerItem: playerItem)
                playerItem.audioMix = self.tapProcessor.audioMix
                self.tapProcessor.delegate = self
            }
        }
    }
}
//Delegate callback method for MYAudioTapProcessor
func audioTabProcessor(_ audioTabProcessor: MYAudioTapProcessor!, hasNewLeftChannelValue leftChannelValue: Float, rightChannelValue: Float) {
    print("volume: \(leftChannelValue) : \(rightChannelValue)")
    volumeSlider.value = leftChannelValue
}
I also tried adding a "tracks" observer:
playerItem.addObserver(self, forKeyPath: "tracks", options: NSKeyValueObservingOptions.new, context: nil);
Now, if I play an mp3 file, the callback method is called, but for m3u8 it is not. The m3u8 URL fails because its tracks array count is always zero, whereas for mp3 files the tracks array has one item.
You cannot get tracks for HLS via AVPlayer. You should use a progressive download or a local file if you want to get at the audio tracks while playing media.
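A rough sketch of the local-file route, using URLSession to download a progressive (non-HLS) asset first; the URL and file name here are placeholders:

// Download the audio, then build the player item from the local copy so that
// asset.tracks is populated and the MYAudioTapProcessor can be attached.
let remoteURL = URL(string: "https://example.com/audio.mp3")!
let task = URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, _ in
    guard let tempURL = tempURL else { return }
    let localURL = FileManager.default.temporaryDirectory.appendingPathComponent("audio.mp3")
    try? FileManager.default.removeItem(at: localURL)
    try? FileManager.default.moveItem(at: tempURL, to: localURL)
    DispatchQueue.main.async {
        let playerItem = AVPlayerItem(url: localURL)
        // playerItem.asset.tracks now reports the audio track once the asset is loaded.
    }
}
task.resume()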
I'm developing a musical instrument in iOS with two audio samples (a high and a low pitch) that are played by touching a view. The first sample is very short (half a second) and the other is a little longer (two seconds). When I play the low-pitch sound quickly and repeatedly, there is an audible click/pop. There is no such problem with the high-pitch sound.
Both audio samples have a fade-in and fade-out at their start and end, and there is no clipping problem in the files themselves.
I'm using this code to load the audio files (simplified here):
engine = AVAudioEngine()
mixer = engine.mainMixerNode
let player = AVAudioPlayerNode()

do {
    let audioFile = try AVAudioFile(forReading: instrumentURL)
    let audioFormat = audioFile.processingFormat
    let audioFrameCount = AVAudioFrameCount(audioFile.length)
    let audioFileBuffer = AVAudioPCMBuffer(pcmFormat: audioFormat, frameCapacity: audioFrameCount)!
    try audioFile.read(into: audioFileBuffer)

    engine.attach(player)
    engine.connect(player, to: mixer, format: audioFileBuffer.format)
} catch {
    print("Init Error!")
}
and this code to play the samples:
player.play()
player.scheduleBuffer(audioFileBuffer, at: nil, options: option, completionHandler: nil)
I'm using a similar functionality in Android with the same audio samples without any click/pop problem.
Is this click/pop problem an implementation error?
How can I fix this problem?
Update 1
I just tried another approach with AVAudioPlayer, and I got the same pop/click problem.
Update 2
I think the problem is that the sample is restarted before it has finished playing, so the sound stops abruptly.
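If that is really the cause, one workaround I am considering is to alternate between two player nodes, so a new hit never has to cut off the buffer that is still playing. A rough sketch (the node setup mirrors the code above; the rotation logic is just an idea I have not verified):

// Two players attached to the same engine; alternate on each touch so the
// previous buffer can ring out instead of being cut off abruptly.
let players = [AVAudioPlayerNode(), AVAudioPlayerNode()]
var nextIndex = 0

for playerNode in players {
    engine.attach(playerNode)
    engine.connect(playerNode, to: mixer, format: audioFileBuffer.format)
}

func playLowSample() {
    let playerNode = players[nextIndex]
    nextIndex = (nextIndex + 1) % players.count
    playerNode.scheduleBuffer(audioFileBuffer, at: nil, options: [], completionHandler: nil)
    if !playerNode.isPlaying {
        playerNode.play()
    }
}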