I am currently developing an AVPlayer app that calculates HLS streaming metrics, and I want to get the buffer level for the current item.
private var availableDuration: Double {
    guard let timeRange = player.currentItem?.loadedTimeRanges.first?.timeRangeValue else {
        return 0.0
    }
    let startSeconds = timeRange.start.seconds
    let durationSeconds = timeRange.duration.seconds
    return startSeconds + durationSeconds
}
I am a little confused by the terminology used in Apple's documentation. Here I am getting the availableDuration of the current item, but I am not sure whether this actually represents the buffer level of the current item.
Your code looks OK. I used the same approach:
var bufferInSeconds: Double {
    guard let range = self.loadedTimeRanges.first?.timeRangeValue else {
        return 0.0
    }
    let sec = range.start.seconds + range.duration.seconds
    return sec >= 0 ? sec : 0
}
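If what you are after is specifically the forward buffer, i.e. how many seconds are buffered ahead of the playhead, you may want to subtract the current time from the end of the loaded range instead. A minimal sketch along those lines, assuming the same player instance from the question:

private var forwardBufferSeconds: Double {
    guard let item = player.currentItem,
          let timeRange = item.loadedTimeRanges.first?.timeRangeValue else {
        return 0.0
    }
    // End of the buffered range minus the current playhead position.
    let bufferedEnd = timeRange.start.seconds + timeRange.duration.seconds
    return max(bufferedEnd - item.currentTime().seconds, 0)
}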
I have an AVAudioPlayerNode looping a segment of a song:
audioPlayer.scheduleBuffer(segment, at: nil, options: .loops)
I want to get the current position of the song while it's playing. Usually this is done by calculating currentFrame / audioSampleRate, where
var currentFrame: AVAudioFramePosition {
    guard let lastRenderTime = audioPlayer.lastRenderTime,
          let playerTime = audioPlayer.playerTime(forNodeTime: lastRenderTime) else {
        return 0
    }
    return playerTime.sampleTime
}
However, when the loop ends and restarts, currentFrame does not reset; it keeps increasing, which makes currentFrame / audioSampleRate incorrect as the current position.
So what is the correct way to calculate the current position?
Good old modulo will do the job:
public var currentTime: TimeInterval {
    guard let nodeTime = player.lastRenderTime,
          let playerTime = player.playerTime(forNodeTime: nodeTime) else {
        return 0
    }
    let time = (Double(playerTime.sampleTime) / playerTime.sampleRate)
        .truncatingRemainder(dividingBy: Double(file.length) / Double(playerTime.sampleRate))
    return time
}
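For context, the snippet above assumes player is the AVAudioPlayerNode and file is the AVAudioFile being looped; the modulo wraps the ever-growing sample time by the file's duration (if you loop only a shorter segment, you would divide by the segment's length instead). A rough setup sketch under those assumptions, with fileURL standing in for your audio file's URL:

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let file = try! AVAudioFile(forReading: fileURL)   // fileURL: URL of the looped audio

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
try! engine.start()

// Read the whole file into a buffer and schedule it as a loop.
let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                              frameCapacity: AVAudioFrameCount(file.length))!
try! file.read(into: buffer)
player.scheduleBuffer(buffer, at: nil, options: .loops)
player.play()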
I'm using an AVAudioPlayerNode attached to an AVAudioEngine to play a sound.
To get the current time of the player, I'm doing this:
extension AVAudioPlayerNode {
    var currentTime: TimeInterval {
        get {
            if let nodeTime: AVAudioTime = self.lastRenderTime, let playerTime: AVAudioTime = self.playerTime(forNodeTime: nodeTime) {
                return Double(playerTime.sampleTime) / playerTime.sampleRate
            }
            return 0
        }
    }
}
I have a slider that indicates the current time of the audio. When the user changes the slider value, on the .ended event I have to change the current time of the player to the value indicated by the slider.
To do so:
extension AVAudioPlayerNode {
    func seekTo(value: Float, audioFile: AVAudioFile, duration: Float) {
        if let nodetime = self.lastRenderTime {
            let playerTime: AVAudioTime = self.playerTime(forNodeTime: nodetime)!
            let sampleRate = self.outputFormat(forBus: 0).sampleRate
            let newsampletime = AVAudioFramePosition(Int(sampleRate * Double(value)))
            let length = duration - value
            let framestoplay = AVAudioFrameCount(Float(playerTime.sampleRate) * length)
            self.stop()
            if framestoplay > 1000 {
                self.scheduleSegment(audioFile, startingFrame: newsampletime, frameCount: framestoplay, at: nil, completionHandler: nil)
            }
        }
        self.play()
    }
}
However, my seekTo function is not working correctly (printing currentTime before and after the call always shows a small negative value of roughly -0.02). What am I doing wrong, and is there a simpler way to change the currentTime of the player?
I ran into the same issue. Apparently framestoplay was always 0, which happened because of the sample rate: the value of playerTime.sampleRate was always 0 in my case.
So,
let framestoplay = AVAudioFrameCount(Float(playerTime.sampleRate) * length)
must be replaced with
let framestoplay = AVAudioFrameCount(Float(sampleRate) * length)
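Putting the fix together, a corrected version of the whole function could look like this. It is only a sketch assembled from the snippets above, keeping the original structure and swapping in the output format's sample rate:

extension AVAudioPlayerNode {
    func seekTo(value: Float, audioFile: AVAudioFile, duration: Float) {
        let sampleRate = self.outputFormat(forBus: 0).sampleRate
        let newSampleTime = AVAudioFramePosition(sampleRate * Double(value))
        // Use the output format's sample rate; playerTime.sampleRate can be 0 here.
        let framesToPlay = AVAudioFrameCount(Float(sampleRate) * (duration - value))
        self.stop()
        if framesToPlay > 1000 {
            self.scheduleSegment(audioFile, startingFrame: newSampleTime,
                                 frameCount: framesToPlay, at: nil, completionHandler: nil)
        }
        self.play()
    }
}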
Given a single AKAudioFile that has been created from an AKNodeRecorder containing a series of spoken words, where each word is separated by at least 1 second, what is the best approach to ultimately create a series of files with each file containing one word?
I believe this can be accomplished if there is a way to iterate the file in, for example, 100 ms chunks, and measure the average amplitude of each chunk. "Silent chunks" could be those below some arbitrarily small amplitude. While iterating, if I encounter a chunk with non-silent amplitude, I can grab the starting timestamp of this "non-silent" chunk to create an audio file that starts here and ends at the start time of the next "silent" chunk.
Whether it's a manual approach like the one above or a processing technique more built into AudioKit, any suggestions would be greatly appreciated.
I don't have a complete solution, but I've started working on something similar. This function could serve as a jumping-off point for what you need. Basically you want to read the file into a buffer and then analyze the buffer data. At that point you could dice it up into smaller buffers and write those to file.
public class func guessBoundaries(url: URL, sensitivity: Double = 1) -> [Double]? {
    var out: [Double] = []
    guard let audioFile = try? AVAudioFile(forReading: url) else { return nil }
    let processingFormat = audioFile.processingFormat
    let frameCount = AVAudioFrameCount(audioFile.length)
    guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: processingFormat, frameCapacity: frameCount) else { return nil }
    do {
        audioFile.framePosition = 0
        try audioFile.read(into: pcmBuffer, frameCount: frameCount)
    } catch let err as NSError {
        AKLog("ERROR: Couldn't read data into buffer. \(err)")
        return nil
    }
    let channelCount = Int(pcmBuffer.format.channelCount)
    let bufferLength = 1024
    let inThreshold: Double = 0.001 / sensitivity
    let outThreshold: Double = 0.0001 * sensitivity
    let minSegmentDuration: Double = 1
    var counter = 0
    var thresholdCrossed = false
    var rmsBuffer = [Float](repeating: 0, count: bufferLength)
    var lastTime: Double = 0
    AKLog("inThreshold", inThreshold, "outThreshold", outThreshold)
    for i in 0 ..< Int(pcmBuffer.frameLength) {
        // n is the channel
        for n in 0 ..< channelCount {
            guard let sample: Float = pcmBuffer.floatChannelData?[n][i] else { continue }
            if counter == rmsBuffer.count {
                let time: Double = Double(i) / processingFormat.sampleRate
                // Average the absolute sample values collected so far.
                let avg = Double(rmsBuffer.reduce(0, +)) / Double(rmsBuffer.count)
                // AKLog("Average value at frame \(i):", avg)
                if avg > inThreshold && !thresholdCrossed && time - lastTime > minSegmentDuration {
                    thresholdCrossed = true
                    out.append(time)
                    lastTime = time
                } else if avg <= outThreshold && thresholdCrossed && time - lastTime > minSegmentDuration {
                    thresholdCrossed = false
                    out.append(time)
                    lastTime = time
                }
                counter = 0
            }
            rmsBuffer[counter] = abs(sample)
            counter += 1
        }
    }
    return out
}
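To try it out, you could call the function on the recorded file and pair consecutive timestamps as start/end boundaries. A rough usage sketch, where AudioAnalyzer is just a placeholder name for whatever type hosts guessBoundaries(url:sensitivity:) and recordingURL is your recorded file:

// Hypothetical usage of the function above.
let recordingURL = URL(fileURLWithPath: "/path/to/recording.caf")
if let boundaries = AudioAnalyzer.guessBoundaries(url: recordingURL, sensitivity: 1) {
    // Timestamps alternate between "word starts here" and "word ends here".
    let pairs = stride(from: 0, to: boundaries.count - 1, by: 2).map {
        (start: boundaries[$0], end: boundaries[$0 + 1])
    }
    for (index, word) in pairs.enumerated() {
        print("Word \(index): \(word.start)s - \(word.end)s")
        // Each range could then be read into a buffer and written out as its own file.
    }
}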
I created a simple project to test the functionality of mach_wait_until(). The code prints how precise the 1-second delay is, and the console printout is virtually identical and extremely precise on both the iOS Simulator and my iPad Air 2. However, on my iPad the real-world delay is HUGE: the same 1-second delay takes about 100 seconds! And to add to the weirdness, the console printout says it only takes 1 second (with extremely low jitter and/or lag).
How can this be? Is there some timing conversion that I need to do for a physical iOS device when using mach_wait_until()?
class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        playNoteTest()
    }

    var start = mach_absolute_time()
    var end = mach_absolute_time()

    func playNoteTest() {
        let when = mach_absolute_time() + 1000000000
        self.start = mach_absolute_time()
        mach_wait_until(when)
        self.end = mach_absolute_time()
        let timeDelta = (self.end - self.start)
        let newTimeDelta = Double(timeDelta) / 1000000000.0
        print("Delta Time = \(newTimeDelta)")
        playNoteTest()
    }
}
mach_absolute_time units are CPU dependent. You need to multiply by a device-specific constant in order to get real-world units. It is discussed in this Tech Q&A from Apple.
Here is some playground code that demonstrates the idea:
import PlaygroundSupport
import Foundation
PlaygroundPage.current.needsIndefiniteExecution = true
class TimeBase {

    static let NANOS_PER_USEC: UInt64 = 1000
    static let NANOS_PER_MILLISEC: UInt64 = 1000 * NANOS_PER_USEC
    static let NANOS_PER_SEC: UInt64 = 1000 * NANOS_PER_MILLISEC

    static var timebaseInfo: mach_timebase_info! = {
        var tb = mach_timebase_info(numer: 0, denom: 0)
        let status = mach_timebase_info(&tb)
        if status == KERN_SUCCESS {
            return tb
        } else {
            return nil
        }
    }()

    static func toNanos(abs: UInt64) -> UInt64 {
        return (abs * UInt64(timebaseInfo.numer)) / UInt64(timebaseInfo.denom)
    }

    static func toAbs(nanos: UInt64) -> UInt64 {
        return (nanos * UInt64(timebaseInfo.denom)) / UInt64(timebaseInfo.numer)
    }
}
let duration = TimeBase.toAbs(nanos: 10 * TimeBase.NANOS_PER_SEC)

DispatchQueue.global(qos: .userInitiated).async {
    print("Start")
    let start = mach_absolute_time()
    mach_wait_until(start + duration)
    let stop = mach_absolute_time()

    let elapsed = stop - start
    let elapsedNanos = TimeBase.toNanos(abs: elapsed)
    let elapsedSecs = elapsedNanos / TimeBase.NANOS_PER_SEC

    print("Elapsed nanoseconds = \(elapsedNanos)")
    print("Elapsed seconds = \(elapsedSecs)")
}
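If you mostly want to block for a given number of wall-clock seconds, a small convenience on top of the TimeBase helper above might look like this (waitSeconds is just a name I'm introducing for illustration, not an Apple API):

func waitSeconds(_ seconds: Double) {
    // Convert real-world seconds to mach ticks via the timebase, then wait.
    let nanos = UInt64(seconds * Double(TimeBase.NANOS_PER_SEC))
    mach_wait_until(mach_absolute_time() + TimeBase.toAbs(nanos: nanos))
}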
Is it possible to get the playing time and total play time in AVPlayer? If so, how can I do this?
You can access the currently playing item through the currentItem property:
AVPlayerItem *currentItem = yourAVPlayer.currentItem;
Then you can easily get the requested time values:
CMTime duration = currentItem.duration; //total time
CMTime currentTime = currentItem.currentTime; //playing time
Swift 5:
if let currentItem = player.currentItem {
    let duration = CMTimeGetSeconds(currentItem.duration)
    let currentTime = CMTimeGetSeconds(currentItem.currentTime())
    print("Duration: \(duration) s")
    print("Current time: \(currentTime) s")
}
_audioPlayer = [self playerWithAudio:_audio];
_observer =
    [_audioPlayer addPeriodicTimeObserverForInterval:CMTimeMake(1, 2)
                                               queue:dispatch_get_main_queue()
                                          usingBlock:^(CMTime time) {
                                              _progress = CMTimeGetSeconds(time);
                                          }];
Swift 3
let currentTime: Double = player.currentItem?.currentTime().seconds ?? 0
You can get the seconds of the current time by accessing the seconds property of currentTime(). This returns a Double representing the time in seconds. You can then use this value to construct a readable time to present to your user.
First, include a method that returns the H:mm:ss components you will display to the user:
func getHoursMinutesSecondsFrom(seconds: Double) -> (hours: Int, minutes: Int, seconds: Int) {
    let secs = Int(seconds)
    let hours = secs / 3600
    let minutes = (secs % 3600) / 60
    let seconds = (secs % 3600) % 60
    return (hours, minutes, seconds)
}
Next, a method that will convert the values you retrieved above into a readable string:
func formatTimeFor(seconds: Double) -> String {
    let result = getHoursMinutesSecondsFrom(seconds: seconds)
    let hoursString = "\(result.hours)"
    var minutesString = "\(result.minutes)"
    if minutesString.count == 1 {
        minutesString = "0\(result.minutes)"
    }
    var secondsString = "\(result.seconds)"
    if secondsString.count == 1 {
        secondsString = "0\(result.seconds)"
    }
    var time = "\(hoursString):"
    if result.hours >= 1 {
        time.append("\(minutesString):\(secondsString)")
    } else {
        time = "\(minutesString):\(secondsString)"
    }
    return time
}
Now, update the UI with the previous calculations:
func updateTime() {
    // Access the current item
    if let currentItem = player.currentItem {
        // Get the current time and duration in seconds
        let playhead = currentItem.currentTime().seconds
        let duration = currentItem.duration.seconds
        // Format seconds as human-readable strings
        playheadLabel.text = formatTimeFor(seconds: playhead)
        durationLabel.text = formatTimeFor(seconds: duration)
    }
}
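To keep those labels ticking, one option is to drive updateTime() from a periodic time observer on the player. A brief sketch, where timeObserverToken is simply a property I'm introducing to hold the returned token:

var timeObserverToken: Any?

func startObservingTime() {
    // Fire twice a second on the main queue and refresh the labels.
    let interval = CMTime(seconds: 0.5, preferredTimescale: CMTimeScale(NSEC_PER_SEC))
    timeObserverToken = player.addPeriodicTimeObserver(forInterval: interval, queue: .main) { [weak self] _ in
        self?.updateTime()
    }
}

func stopObservingTime() {
    // Balance every addPeriodicTimeObserver with removeTimeObserver.
    if let token = timeObserverToken {
        player.removeTimeObserver(token)
        timeObserverToken = nil
    }
}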
With Swift 4.2, use this:
let currentPlayer = AVPlayer()

if let currentItem = currentPlayer.currentItem {
    let duration = currentItem.asset.duration
}
let currentTime = currentPlayer.currentTime()
Swift 4
self.playerItem = AVPlayerItem(url: videoUrl!)
self.player = AVPlayer(playerItem: self.playerItem)
self.player?.addPeriodicTimeObserver(forInterval: CMTimeMakeWithSeconds(1, 1), queue: DispatchQueue.main, using: { (time) in
    if self.player!.currentItem?.status == .readyToPlay {
        let currentTime = CMTimeGetSeconds(self.player!.currentTime())
        let secs = Int(currentTime)
        self.timeLabel.text = String(format: "%02d:%02d", secs / 60, secs % 60)
    }
})
AVPlayerItem *currentItem = player.currentItem;
NSTimeInterval currentTime = CMTimeGetSeconds(currentItem.currentTime);
NSLog(@"Capturing time: %f", currentTime);
Swift:
let currentItem = yourAVPlayer.currentItem
let duration = currentItem?.asset.duration
let currentTime = currentItem?.currentTime()
Swift 5:
Timer.scheduledTimer seems better than addPeriodicTimeObserver if you want a smooth progress bar.
public var currentTime = 0.0
public var currentTimeString = "00:00"

Timer.scheduledTimer(withTimeInterval: 1 / 60, repeats: true) { timer in
    if self.player!.currentItem?.status == .readyToPlay {
        let timeElapsed = CMTimeGetSeconds(self.player!.currentTime())
        let secs = Int(timeElapsed)
        self.currentTime = timeElapsed
        self.currentTimeString = String(format: "%02d:%02d", secs / 60, secs % 60)
        print("AudioPlayer TIME UPDATE: \(self.currentTime) \(self.currentTimeString)")
    }
}
Swift 4.2:
let currentItem = yourAVPlayer.currentItem
let duration = currentItem?.asset.duration
let currentTime = currentItem?.currentTime()
In Swift 5+:
You can query the player directly to find the current time of the actively playing AVPlayerItem.
The time is stored in a CMTime struct for easy conversion to various timescales, such as tenths or hundredths of a second.
In most cases we need to represent times in seconds, so the following gives you what you want:
let currentTimeInSecs = CMTimeGetSeconds(player.currentTime())