I created a simple project to test out the functionality of mach_wait_until(). The code below gives me a printout of how precise the 1 second delay is, and the console output is virtually identical and extremely precise on both the iOS Simulator and on my iPad Air 2. However, on the physical iPad the actual wait is HUGE: the same 1 second delay takes about 100 seconds of real time! And to add to the weirdness, the console printout still says it only took 1 second (with extremely low jitter and/or lag).
How can this be? Is there some timing conversion that I need to do for a physical iOS device when using mach_wait_until()?
import UIKit

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        playNoteTest()
    }

    var start = mach_absolute_time()
    var end = mach_absolute_time()

    func playNoteTest() {
        let when = mach_absolute_time() + 1000000000
        self.start = mach_absolute_time()
        mach_wait_until(when)
        self.end = mach_absolute_time()
        let timeDelta = (self.end - self.start)
        let newTimeDelta = Double(timeDelta) / 1000000000.0
        print("Delta Time = \(newTimeDelta)")
        playNoteTest()
    }
}
mach_absolute_time() units are CPU-dependent. You need to multiply by a device-specific ratio, obtained from mach_timebase_info(), in order to get real-world units. It is discussed in this Tech Q&A from Apple.
Here is some playground code that demonstrates the idea:
import PlaygroundSupport
import Foundation

PlaygroundPage.current.needsIndefiniteExecution = true

class TimeBase {
    static let NANOS_PER_USEC: UInt64 = 1000
    static let NANOS_PER_MILLISEC: UInt64 = 1000 * NANOS_PER_USEC
    static let NANOS_PER_SEC: UInt64 = 1000 * NANOS_PER_MILLISEC

    static var timebaseInfo: mach_timebase_info! = {
        var tb = mach_timebase_info(numer: 0, denom: 0)
        let status = mach_timebase_info(&tb)
        if status == KERN_SUCCESS {
            return tb
        } else {
            return nil
        }
    }()

    static func toNanos(abs: UInt64) -> UInt64 {
        return (abs * UInt64(timebaseInfo.numer)) / UInt64(timebaseInfo.denom)
    }

    static func toAbs(nanos: UInt64) -> UInt64 {
        return (nanos * UInt64(timebaseInfo.denom)) / UInt64(timebaseInfo.numer)
    }
}

let duration = TimeBase.toAbs(nanos: 10 * TimeBase.NANOS_PER_SEC)

DispatchQueue.global(qos: .userInitiated).async {
    print("Start")
    let start = mach_absolute_time()
    mach_wait_until(start + duration)
    let stop = mach_absolute_time()

    let elapsed = stop - start
    let elapsedNanos = TimeBase.toNanos(abs: elapsed)
    let elapsedSecs = elapsedNanos / TimeBase.NANOS_PER_SEC

    print("Elapsed nanoseconds = \(elapsedNanos)")
    print("Elapsed seconds = \(elapsedSecs)")
}
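Applied to the question's playNoteTest(), the hardcoded 1000000000 would go through the same conversion first. A minimal sketch using the TimeBase helper above (my adaptation, not from the original code):

// Convert 1 second of nanoseconds into mach absolute time units before waiting.
let oneSecondAbs = TimeBase.toAbs(nanos: TimeBase.NANOS_PER_SEC)
let when = mach_absolute_time() + oneSecondAbs
mach_wait_until(when)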
I have an AVAudioPlayerNode looping a segment of a song:
audioPlayer.scheduleBuffer(segment, at: nil, options:.loops)
I want to get the current position of the song while it's playing. Usually this is done by calculating currentFrame / audioSampleRate,
where
var currentFrame: AVAudioFramePosition {
    guard let lastRenderTime = audioPlayer.lastRenderTime,
          let playerTime = audioPlayer.playerTime(forNodeTime: lastRenderTime) else {
        return 0
    }
    return playerTime.sampleTime
}
However, when the loop ends and restarts, currentFrame does not reset; it keeps increasing, which makes currentFrame / audioSampleRate incorrect as the current position.
So what is the correct way to calculate the current position?
Good old modulo will do the job:
public var currentTime: TimeInterval {
    guard let nodeTime = player.lastRenderTime,
          let playerTime = player.playerTime(forNodeTime: nodeTime) else {
        return 0
    }
    let time = (Double(playerTime.sampleTime) / playerTime.sampleRate)
        .truncatingRemainder(dividingBy: Double(file.length) / Double(playerTime.sampleRate))
    return time
}
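Here player is the AVAudioPlayerNode from the question, and file is assumed to be the AVAudioFile the looped buffer was read from; truncatingRemainder wraps the ever-increasing sampleTime back into the duration of a single loop.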
I am currently developing an AVPlayer app which calculates HLS streaming metrics. I want to get the buffer level for the current item.
private var availableDuration: Double {
    guard let timeRange = player.currentItem?.loadedTimeRanges.first?.timeRangeValue else {
        return 0.0
    }
    let startSeconds = timeRange.start.seconds
    let durationSeconds = timeRange.duration.seconds
    return startSeconds + durationSeconds
}
I am a little confused by the terminology used in Apple's documentation. Here I am getting the availableDuration of the current item, but I am not sure whether this represents the buffer level of the current item.
Your code seems OK. I used the same approach:
var bufferInSeconds: Double {
    guard let range = self.loadedTimeRanges.first?.timeRangeValue else {
        return 0.0
    }
    let sec = range.start.seconds + range.duration.seconds
    return sec >= 0 ? sec : 0
}
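Since it calls self.loadedTimeRanges, this property reads naturally as an extension on AVPlayerItem. A minimal sketch of how it might be wrapped and polled (the extension wrapper is my assumption, not part of the original answer; player is the same AVPlayer instance as in the question):

import AVFoundation

extension AVPlayerItem {
    // Seconds buffered from the start of the first loaded time range.
    var bufferInSeconds: Double {
        guard let range = loadedTimeRanges.first?.timeRangeValue else {
            return 0.0
        }
        let sec = range.start.seconds + range.duration.seconds
        return sec >= 0 ? sec : 0
    }
}

// Usage: poll the current item while the stream plays.
if let item = player.currentItem {
    print("Buffered: \(item.bufferInSeconds) s")
}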
I am using the Xcode Instruments Time Profiler to measure the performance of my app. At the same time, I am also using the CFAbsoluteTimeGetCurrent function in my code to print the time taken. However, I have found that sometimes the execution time of a function measured by these two methods is the same, but sometimes there is a big difference.
Here is an example:
import UIKit
import ZippyJSON

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewDidAppear(_ animated: Bool) {
        let encoder = JSONEncoder()
        var students = [Student]()
        var scores = [Int]()
        for _ in 0...100 {
            scores.append(99984)
        }
        for _ in 0...100 {
            students.append(Student(name: "StudentName", age: 99, scores: scores))
        }
        var dataList = [Data]()
        for _ in 0...100 {
            let data = try! encoder.encode(students)
            dataList.append(data)
        }
        testJson(dataList: dataList)
    }
}

func testJson(dataList: [Data]) {
    let t1 = CFAbsoluteTimeGetCurrent()
    let decoder = JSONDecoder()
    var results = [[Student]]()
    for data in dataList {
        let r1 = CFAbsoluteTimeGetCurrent()
        let result = try! decoder.decode([Student].self, from: data)
        let r2 = CFAbsoluteTimeGetCurrent()
        results.append(result)
        print("single time \(r2 - r1)")
    }
    let t2 = CFAbsoluteTimeGetCurrent()
    print("total time \(t2 - t1)")
}
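For reference, the Student type is not shown in the question; a minimal Codable definition consistent with how it is used above might look like this (the exact field types are an assumption):

// Hypothetical Student type, inferred from its usage in viewDidAppear.
struct Student: Codable {
    let name: String
    let age: Int
    let scores: [Int]
}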
Here I am using Apple's JSONDecoder to decode the data I created. The program prints
total time 1.428820013999939
And it corresponds to what Time Profiler produces.
However, if I change to the third-party library's ZippyJSONDecoder to decode the data:
func testJson(dataList: [Data]) {
    let t1 = CFAbsoluteTimeGetCurrent()
    let decoder = ZippyJSONDecoder()
    var results = [[Student]]()
    for data in dataList {
        let r1 = CFAbsoluteTimeGetCurrent()
        let result = try! decoder.decode([Student].self, from: data)
        let r2 = CFAbsoluteTimeGetCurrent()
        results.append(result)
        print("single time \(r2 - r1)")
    }
    let t2 = CFAbsoluteTimeGetCurrent()
    print("total time \(t2 - t1)")
}
The program prints
total time 3.199104905128479
But Time Profiler produces this result: only about 325 ms, totally different from what the CFAbsoluteTimeGetCurrent measurement gives.
I am wondering why this discrepancy occurs. Which result is the correct one?
I am generating a wave sound at different frequencies. The user should hear this sound through headphones only, and will set the left and right headphone volumes using two different sliders. To produce the wave sound I wrote the code below, which works perfectly.
But the problem is: for the last 5 days I have been trying to set the volume for the left and right headphones separately, with no luck.
import AVFoundation

class Synth {

    // MARK: Properties

    public static let shared = Synth()

    public var volume: Float {
        set {
            audioEngine.mainMixerNode.outputVolume = newValue
        }
        get {
            audioEngine.mainMixerNode.outputVolume
        }
    }

    public var frequencyRampValue: Float = 0

    public var frequency: Float = 440 {
        didSet {
            if oldValue != 0 {
                frequencyRampValue = frequency - oldValue
            } else {
                frequencyRampValue = 0
            }
        }
    }

    private var audioEngine: AVAudioEngine

    private lazy var sourceNode = AVAudioSourceNode { _, _, frameCount, audioBufferList in
        let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
        let localRampValue = self.frequencyRampValue
        let localFrequency = self.frequency - localRampValue
        let period = 1 / localFrequency

        for frame in 0..<Int(frameCount) {
            let percentComplete = self.time / period
            let sampleVal = self.signal(localFrequency + localRampValue * percentComplete, self.time)
            self.time += self.deltaTime
            self.time = fmod(self.time, period)

            for buffer in ablPointer {
                let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
                buf[frame] = sampleVal
            }
        }
        self.frequencyRampValue = 0
        return noErr
    }

    private var time: Float = 0
    private let sampleRate: Double
    private let deltaTime: Float
    private var signal: Signal

    // MARK: Init

    init(signal: @escaping Signal = Oscillator.square) {
        audioEngine = AVAudioEngine()

        let mainMixer = audioEngine.mainMixerNode
        let outputNode = audioEngine.outputNode
        let format = outputNode.inputFormat(forBus: 0)

        sampleRate = format.sampleRate
        deltaTime = 1 / Float(sampleRate)

        self.signal = signal

        let inputFormat = AVAudioFormat(commonFormat: format.commonFormat,
                                        sampleRate: format.sampleRate,
                                        channels: 1,
                                        interleaved: format.isInterleaved)

        audioEngine.attach(sourceNode)
        audioEngine.connect(sourceNode, to: mainMixer, format: inputFormat)
        audioEngine.connect(mainMixer, to: outputNode, format: nil)
        mainMixer.outputVolume = 0

        audioEngine.mainMixerNode.pan = 100 // this does not work
        //audioEngine.mainMixerNode.pan = 1.0 // this also does not work

        do {
            try audioEngine.start()
        } catch {
            print("Could not start engine: \(error.localizedDescription)")
        }
    }

    // This function will be called in the view controller to generate sound
    public func setWaveformTo(_ signal: @escaping Signal) {
        self.signal = signal
    }
}
With the above code I can hear the wave sound as normal in both the left and right headphones.
I tried to use audioEngine.mainMixerNode.pan with the values 100 and -100, and also -1.0 and 1.0, but this did not make any change.
The allowable range for the pan value is [-1.0, 1.0]. The values that you say you used are outside that range, so it's not surprising that they had no effect. Try 0.75 or -0.75 instead.
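A minimal sketch using the same mainMixerNode as the question's Synth class (the specific values are just illustrations):

// Pan must stay within [-1.0, 1.0]: -1.0 is hard left, 0.0 is centered, 1.0 is hard right.
audioEngine.mainMixerNode.pan = 0.75    // biases output toward the right ear
// audioEngine.mainMixerNode.pan = -0.75 // biases output toward the left ear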
Given a single AKAudioFile that has been created from an AKNodeRecorder containing a series of spoken words, where each word is separated by at least 1 second, what is the best approach to ultimately create a series of files with each file containing one word?
I believe this can be accomplished if there is a way to iterate the file in, for example, 100 ms chunks, and measure the average amplitude of each chunk. "Silent chunks" could be those below some arbitrarily small amplitude. While iterating, if I encounter a chunk with non-silent amplitude, I can grab the starting timestamp of this "non-silent" chunk to create an audio file that starts here and ends at the start time of the next "silent" chunk.
Whether it's a manual approach like the one above or a more built-in AudioKit processing technique, any suggestions would be greatly appreciated.
I don't have a complete solution, but I've started working on something similar to this. This function could serve as a jumping-off point for what you need. Basically you want to read the file into a buffer, then analyze the buffer data. At that point you could dice it up into smaller buffers and write those to file.
public class func guessBoundaries(url: URL, sensitivity: Double = 1) -> [Double]? {
    var out: [Double] = []

    guard let audioFile = try? AVAudioFile(forReading: url) else { return nil }

    let processingFormat = audioFile.processingFormat
    let frameCount = AVAudioFrameCount(audioFile.length)

    guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: processingFormat, frameCapacity: frameCount) else { return nil }

    do {
        audioFile.framePosition = 0
        try audioFile.read(into: pcmBuffer, frameCount: frameCount)
    } catch let err as NSError {
        AKLog("ERROR: Couldn't read data into buffer. \(err)")
        return nil
    }

    let channelCount = Int(pcmBuffer.format.channelCount)
    let bufferLength = 1024
    let inThreshold: Double = 0.001 / sensitivity
    let outThreshold: Double = 0.0001 * sensitivity
    let minSegmentDuration: Double = 1

    var counter = 0
    var thresholdCrossed = false
    var rmsBuffer = [Float](repeating: 0, count: bufferLength)
    var lastTime: Double = 0

    AKLog("inThreshold", inThreshold, "outThreshold", outThreshold)

    for i in 0 ..< Int(pcmBuffer.frameLength) {
        // n is the channel
        for n in 0 ..< channelCount {
            guard let sample: Float = pcmBuffer.floatChannelData?[n][i] else { continue }

            if counter == rmsBuffer.count {
                let time: Double = Double(i) / processingFormat.sampleRate
                let avg = Double(rmsBuffer.reduce(0, +)) / Double(rmsBuffer.count)
                // AKLog("Average Value at frame \(i):", avg)

                if avg > inThreshold && !thresholdCrossed && time - lastTime > minSegmentDuration {
                    thresholdCrossed = true
                    out.append(time)
                    lastTime = time
                } else if avg <= outThreshold && thresholdCrossed && time - lastTime > minSegmentDuration {
                    thresholdCrossed = false
                    out.append(time)
                    lastTime = time
                }
                counter = 0
            }
            rmsBuffer[counter] = abs(sample)
            counter += 1
        }
    }
    rmsBuffer.removeAll()
    return out
}
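As a follow-up to the "write those to file" step mentioned above, here is one possible sketch that turns consecutive boundary times (as returned by guessBoundaries) into per-word files with AVAudioFile; the pairing and export logic is my assumption, not part of the original answer:

import AVFoundation

// Hypothetical helper: writes the audio between each pair of boundary times into its own file.
func exportSegments(from url: URL, boundaries: [Double], to directory: URL) throws {
    let source = try AVAudioFile(forReading: url)
    let format = source.processingFormat
    let sampleRate = format.sampleRate

    // Treat the boundaries as alternating start/end times: [start0, end0, start1, end1, ...]
    for (index, pair) in stride(from: 0, to: boundaries.count - 1, by: 2).enumerated() {
        let startFrame = AVAudioFramePosition(boundaries[pair] * sampleRate)
        let endFrame = AVAudioFramePosition(boundaries[pair + 1] * sampleRate)
        guard endFrame > startFrame else { continue }
        let frameCount = AVAudioFrameCount(endFrame - startFrame)

        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { continue }
        source.framePosition = startFrame
        try source.read(into: buffer, frameCount: frameCount)

        let segmentURL = directory.appendingPathComponent("word-\(index).caf")
        let output = try AVAudioFile(forWriting: segmentURL, settings: format.settings)
        try output.write(from: buffer)
    }
}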