Let's assume I have a CMTimeRange constructed from a start time of zero and a duration of 40 seconds.
I want to split this CMTimeRange into multiple chunks of X seconds each. The total duration of the chunks should equal the original duration, each chunk's start time should be the end time of the previous chunk, and the last chunk should contain whatever seconds are left over.
For example, for a video of 40 seconds and a divider of 15 seconds per chunk:
First CMTimeRange - start time: 0, duration: 15 seconds.
Second CMTimeRange - start time: 15, duration: 15 seconds.
Third CMTimeRange - start time: 30, duration: 10 seconds (the leftover seconds).
What I've tried:
I tried using CMTimeSubtract on the total duration and reusing the result recursively until the CMTime is invalid, but it doesn't seem to work.
Any help will be highly appreciated.
Best Regards, Roi
Starting at range.start, create ranges of the given length
until range.end is reached:
func splitIntoChunks(range: CMTimeRange, length: CMTime) -> [CMTimeRange] {
    var chunks: [CMTimeRange] = []
    var from = range.start
    while from < range.end {
        chunks.append(CMTimeRange(start: from, duration: length).intersection(range))
        from = from + length
    }
    return chunks
}
intersection is used here to prune the last chunk to the original range.
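For illustration, a quick sketch of calling it with the 40-second range and 15-second chunks from the question (the timescale of 600 is an arbitrary choice):

import CoreMedia

let range = CMTimeRange(start: .zero,
                        duration: CMTime(seconds: 40, preferredTimescale: 600))
let chunkLength = CMTime(seconds: 15, preferredTimescale: 600)

let chunks = splitIntoChunks(range: range, length: chunkLength)
for chunk in chunks {
    print(chunk.start.seconds, chunk.duration.seconds)
}
// 0.0 15.0
// 15.0 15.0
// 30.0 10.0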
Alternative solution:
func splitIntoChunks(range: CMTimeRange, length: CMTime) -> [CMTimeRange] {
    return stride(from: range.start.seconds, to: range.end.seconds, by: length.seconds).map {
        CMTimeRange(start: CMTime(seconds: $0, preferredTimescale: length.timescale), duration: length)
            .intersection(range)
    }
}
With a custom extension to make CMTime adopt the Strideable protocol:
extension CMTime: Strideable {
    public func distance(to other: CMTime) -> TimeInterval {
        return other.seconds - self.seconds
    }

    public func advanced(by n: TimeInterval) -> CMTime {
        return self + CMTime(seconds: n, preferredTimescale: timescale)
    }
}
This can be further simplified to:
func splitIntoChunks(range: CMTimeRange, length: CMTime) -> [CMTimeRange] {
    return stride(from: range.start, to: range.end, by: length.seconds).map {
        CMTimeRange(start: $0, duration: length).intersection(range)
    }
}
In any case, you might want to add a check
precondition(length.seconds > 0, "length must be positive")
to your function, in order to detect invalid calls during development.
I too needed to stride over CMTime, to deal with AVCaptureDevice exposure durations and show them to users.
It turns out Martin's answer doesn't work anymore with the changes in Swift 4.x / Xcode 10. Here's my version of the CMTime conformance to Strideable:
extension CMTime: Strideable {
    public func distance(to other: CMTime) -> TimeInterval {
        return TimeInterval((Double(other.value) / Double(other.timescale)) - (Double(self.value) / Double(self.timescale)))
    }

    public func advanced(by n: TimeInterval) -> CMTime {
        var retval = self
        retval.value += CMTimeValue(n * TimeInterval(self.timescale))
        return retval
    }
}
I experimented with it in a playground and it seems to work.
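A minimal usage sketch with this conformance (the exposure-duration bounds below are made up, just to show the striding):

import CoreMedia

// Hypothetical exposure-duration bounds, e.g. taken from an AVCaptureDevice format.
let minDuration = CMTime(value: 1, timescale: 1000)  // 1 ms
let maxDuration = CMTime(value: 1, timescale: 100)   // 10 ms

// Step through the range in 1 ms increments via the Strideable conformance.
for duration in stride(from: minDuration, to: maxDuration, by: 0.001) {
    print(duration.seconds)
}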
I have an AVPictureInPictureController that should display an image while playing audio. I have created an AVSampleBufferDisplayLayer that has a CMSampleBuffer with an image, and I have implemented the two necessary delegates, AVPictureInPictureSampleBufferPlaybackDelegate and AVPictureInPictureControllerDelegate.
I am setting
pictureInPictureController.requiresLinearPlayback = false
so the picture-in-picture window shows the seek back, play/pause, and seek forward buttons.
Here is my function for the time range:
func pictureInPictureControllerTimeRangeForPlayback(_ pictureInPictureController: AVPictureInPictureController)
    -> CMTimeRange {
    let duration = player.state.duration
    if duration == 0 {
        return CMTimeRange(start: .negativeInfinity, duration: .positiveInfinity)
    }
    return CMTimeRange(
        start: CMTime(
            seconds: .zero,
            preferredTimescale: 10_000
        ),
        duration: CMTimeMakeWithSeconds(
            duration,
            preferredTimescale: 10_000
        )
    )
}
My problem is that seek forward is disabled in picture in picture. Seeking back works just fine, as does play/pause, but seek forward is disabled and I haven't figured out why. Any thoughts?
I tried calculating the CMTimeRange differently, but the behaviour is always the same.
I found this project https://github.com/getsidetrack/swiftui-pipify/blob/main/Sources/PipifyController.swift#L250 where the CMTimeRange is calculated from CACurrentMediaTime(). Although I am not sure why such a large time range is calculated there (something like one week in seconds), CACurrentMediaTime() was the right hint.
So this is how pictureInPictureControllerTimeRangeForPlayback looks now:
func pictureInPictureControllerTimeRangeForPlayback(
    _ pictureInPictureController: AVPictureInPictureController
) -> CMTimeRange {
    if player.duration == 0 {
        return CMTimeRange(start: .negativeInfinity, duration: .positiveInfinity)
    }
    let currentTime = CMTime(
        seconds: CACurrentMediaTime(),
        preferredTimescale: 60
    )
    let currentPosition = CMTime(
        seconds: state.position,
        preferredTimescale: 60
    )
    return CMTimeRange(
        start: currentTime - currentPosition,
        duration: CMTime(
            seconds: player.duration,
            preferredTimescale: 60
        )
    )
}
I am generating a wave sound for different frequencies. The user should hear this sound through headphones only, and he or she will set the left and right headphone volumes using two separate sliders. To generate the wave sound I wrote the code below, which works perfectly.
The problem is: for the last 5 days I have been trying to set the volume of the left and right headphones separately, with no luck.
class Synth {

    // MARK: Properties

    public static let shared = Synth()

    public var volume: Float {
        set {
            audioEngine.mainMixerNode.outputVolume = newValue
        }
        get {
            audioEngine.mainMixerNode.outputVolume
        }
    }

    public var frequencyRampValue: Float = 0

    public var frequency: Float = 440 {
        didSet {
            if oldValue != 0 {
                frequencyRampValue = frequency - oldValue
            } else {
                frequencyRampValue = 0
            }
        }
    }

    private var audioEngine: AVAudioEngine

    private lazy var sourceNode = AVAudioSourceNode { _, _, frameCount, audioBufferList in
        let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
        let localRampValue = self.frequencyRampValue
        let localFrequency = self.frequency - localRampValue
        let period = 1 / localFrequency
        for frame in 0..<Int(frameCount) {
            let percentComplete = self.time / period
            let sampleVal = self.signal(localFrequency + localRampValue * percentComplete, self.time)
            self.time += self.deltaTime
            self.time = fmod(self.time, period)
            for buffer in ablPointer {
                let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
                buf[frame] = sampleVal
            }
        }
        self.frequencyRampValue = 0
        return noErr
    }

    private var time: Float = 0
    private let sampleRate: Double
    private let deltaTime: Float
    private var signal: Signal

    // MARK: Init

    init(signal: @escaping Signal = Oscillator.square) {
        audioEngine = AVAudioEngine()
        let mainMixer = audioEngine.mainMixerNode
        let outputNode = audioEngine.outputNode
        let format = outputNode.inputFormat(forBus: 0)
        sampleRate = format.sampleRate
        deltaTime = 1 / Float(sampleRate)
        self.signal = signal
        let inputFormat = AVAudioFormat(commonFormat: format.commonFormat,
                                        sampleRate: format.sampleRate,
                                        channels: 1,
                                        interleaved: format.isInterleaved)
        audioEngine.attach(sourceNode)
        audioEngine.connect(sourceNode, to: mainMixer, format: inputFormat)
        audioEngine.connect(mainMixer, to: outputNode, format: nil)
        mainMixer.outputVolume = 0
        audioEngine.mainMixerNode.pan = 100 // this does not work
        //audioEngine.mainMixerNode.pan = 1.0 // this also does not work
        do {
            try audioEngine.start()
        } catch {
            print("Could not start engine: \(error.localizedDescription)")
        }
    }

    // This function will be called in the view controller to generate the sound.
    public func setWaveformTo(_ signal: @escaping Signal) {
        self.signal = signal
    }
}
With the above code I can hear the wave sound normally in both the left and right headphones.
I tried setting audioEngine.mainMixerNode.pan to 100 and -100, and also to -1.0 and 1.0, but this did not make any change.
The allowable range for the pan value is [-1.0, 1.0]. The values that you say you used are outside that range, so it's not surprising that they had no effect. Try 0.75 or -0.75 instead.
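A minimal sketch of how that could be applied to the Synth class from the question (the slider-to-pan mapping at the end is my own assumption, not part of the answer):

// Pan is only honoured within -1.0 (hard left) ... 1.0 (hard right).
audioEngine.mainMixerNode.pan = -0.75   // mostly left
// audioEngine.mainMixerNode.pan = 0.75 // mostly right

// Hypothetical mapping from two 0...1 slider values to an overall volume and a pan.
func applyChannelVolumes(left: Float, right: Float) {
    audioEngine.mainMixerNode.outputVolume = max(left, right)
    let total = left + right
    audioEngine.mainMixerNode.pan = total > 0 ? (right - left) / total : 0
}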
I have built this app with the help of some friends. I don't really know how the code works.
Basically, using an Apple Pencil it records data (time on the tablet, speed of the Apple Pencil, stroke counts, etc.). However, as more time elapses and more drawing occurs, the timer gets out of sync with real time.
The purpose of this app is dementia research: I get patients to draw on the tablet and collect information from that. I can't do the research if the timer is inaccurate.
I have tried disabling all the timers, but the lag remains the same. I have a feeling it has something to do with how strokes are being sampled. I just need a stroke count; I don't need it to show strokes per minute (which is what it is currently doing). I think the stroke counter might be the cause.
This is the program:
https://drive.google.com/open?id=1lwzKwG7NLcX1qmE5yoxsdq5HICV2TNHm
class StrokeSegment {
    var sampleBefore: StrokeSample?
    var fromSample: StrokeSample!
    var toSample: StrokeSample!
    var sampleAfter: StrokeSample?
    var fromSampleIndex: Int

    var segmentUnitNormal: CGVector {
        return segmentStrokeVector.normal!.normalized!
    }

    var fromSampleUnitNormal: CGVector {
        return interpolatedNormalUnitVector(between: previousSegmentStrokeVector, and: segmentStrokeVector)
    }

    var toSampleUnitNormal: CGVector {
        return interpolatedNormalUnitVector(between: segmentStrokeVector, and: nextSegmentStrokeVector)
    }

    var previousSegmentStrokeVector: CGVector {
        if let sampleBefore = self.sampleBefore {
            return fromSample.location - sampleBefore.location
        } else {
            return segmentStrokeVector
        }
    }

    var segmentStrokeVector: CGVector {
        return toSample.location - fromSample.location
    }

    var nextSegmentStrokeVector: CGVector {
        if let sampleAfter = self.sampleAfter {
            return sampleAfter.location - toSample.location
        } else {
            return segmentStrokeVector
        }
    }

    init(sample: StrokeSample) {
        self.sampleAfter = sample
        self.fromSampleIndex = -2
    }

    @discardableResult
    func advanceWithSample(incomingSample: StrokeSample?) -> Bool {
        if let sampleAfter = self.sampleAfter {
            self.sampleBefore = fromSample
            self.fromSample = toSample
            self.toSample = sampleAfter
            self.sampleAfter = incomingSample
            self.fromSampleIndex += 1
            return true
        }
        return false
    }
}
class StrokeSegmentIterator: IteratorProtocol {
    private let stroke: Stroke
    private var nextIndex: Int
    private let sampleCount: Int
    private let predictedSampleCount: Int
    private var segment: StrokeSegment!

    init(stroke: Stroke) {
        self.stroke = stroke
        nextIndex = 1
        sampleCount = stroke.samples.count
        predictedSampleCount = stroke.predictedSamples.count
        if (predictedSampleCount + sampleCount) > 1 {
            segment = StrokeSegment(sample: sampleAt(0)!)
            segment.advanceWithSample(incomingSample: sampleAt(1))
        }
    }

    func sampleAt(_ index: Int) -> StrokeSample? {
        if index < sampleCount {
            return stroke.samples[index]
        }
        let predictedIndex = index - sampleCount
        if predictedIndex < predictedSampleCount {
            return stroke.predictedSamples[predictedIndex]
        } else {
            return nil
        }
    }

    func next() -> StrokeSegment? {
        nextIndex += 1
        if let segment = self.segment {
            if segment.advanceWithSample(incomingSample: sampleAt(nextIndex)) {
                return segment
            }
        }
        return nil
    }
}
For example, at a true 25 seconds, the app displays the total time as 20 seconds.
A Timer is not something to count elapsed time with. It is a tool used to trigger an execution after some time has elapsed. But just "after" some time has elapsed, not "exactly after" some time has elapsed. So, for instance, doing something like:
var secondsElapsed: TimeInterval = 0.0
let timeInitiated = Date()

Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
    secondsElapsed += 1
    print("\(secondsElapsed) seconds should have passed but in reality \(Date().timeIntervalSince(timeInitiated)) seconds have passed")
}
you will see that the two are not the same but are pretty close. But as soon as I add some extra work like this:
var secondsElapsed: TimeInterval = 0.0
let timeInitiated = Date()

func countTo(_ end: Int) {
    var string = ""
    for i in 1...end {
        string += String(i)
    }
    print("Just counted to a string of length \(string.count)")
}

Timer.scheduledTimer(withTimeInterval: 1.0/60.0, repeats: true) { _ in
    countTo(100000)
}

Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
    secondsElapsed += 1
    print("\(secondsElapsed) seconds should have passed but in reality \(Date().timeIntervalSince(timeInitiated)) seconds have passed")
}
we get to situations like "14.0 seconds should have passed but in reality 19.17617702484131 seconds have passed".
We made the application busy so it doesn't have time to count correctly.
In your case you will need one of two solutions:
If you are interested in the elapsed time, simply use timeIntervalSince as demonstrated in the first code snippet.
If you need to ensure triggering every N seconds, you should optimize your code and consider multithreading. But keep in mind that you can only get close to "every N seconds"; it is not possible to guarantee an execution exactly every N seconds.
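A minimal sketch of the first approach, assuming you only need an elapsed time and a stroke count (the names here are illustrative, not taken from your project):

import Foundation

let startDate = Date()
var strokeCount = 0

// The timer only refreshes the display; the displayed value is always
// derived from the wall clock, so late firings cannot make it drift.
Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { _ in
    let elapsed = Date().timeIntervalSince(startDate)
    print("\(Int(elapsed)) s elapsed, \(strokeCount) strokes")
}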
I'm using an AVAudioPlayerNode attached to an AVAudioEngine to play a sound.
To get the current time of the player, I'm doing this:
extension AVAudioPlayerNode {
    var currentTime: TimeInterval {
        get {
            if let nodeTime: AVAudioTime = self.lastRenderTime, let playerTime: AVAudioTime = self.playerTime(forNodeTime: nodeTime) {
                return Double(playerTime.sampleTime) / playerTime.sampleRate
            }
            return 0
        }
    }
}
I have a slider that indicates the current time of the audio. When the user changes the slider value, on the .ended event I have to change the current time of the player to the value indicated by the slider.
To do so:
extension AVAudioPlayerNode {
    func seekTo(value: Float, audioFile: AVAudioFile, duration: Float) {
        if let nodetime = self.lastRenderTime {
            let playerTime: AVAudioTime = self.playerTime(forNodeTime: nodetime)!
            let sampleRate = self.outputFormat(forBus: 0).sampleRate
            let newsampletime = AVAudioFramePosition(Int(sampleRate * Double(value)))
            let length = duration - value
            let framestoplay = AVAudioFrameCount(Float(playerTime.sampleRate) * length)
            self.stop()
            if framestoplay > 1000 {
                self.scheduleSegment(audioFile, startingFrame: newsampletime, frameCount: framestoplay, at: nil, completionHandler: nil)
            }
        }
        self.play()
    }
}
However, my seekTo function is not working correctly (printing currentTime before and after the function always shows a negative value of roughly -0.02). What am I doing wrong, and is there a simpler way to change the currentTime of the player?
I ran into the same issue. Apparently framestoplay was always 0, which happened because of the sample rate: the value of playerTime.sampleRate was always 0 in my case.
So,
let framestoplay = AVAudioFrameCount(Float(playerTime.sampleRate) * length)
must be replaced with
let framestoplay = AVAudioFrameCount(Float(sampleRate) * length)
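With that replacement, a sketch of the full seekTo function could look like this (same structure as in the question; the max(0, ...) guard and the camel-case names are my own additions):

import AVFoundation

extension AVAudioPlayerNode {
    func seekTo(value: Float, audioFile: AVAudioFile, duration: Float) {
        // Use the output format's sample rate instead of playerTime.sampleRate.
        let sampleRate = self.outputFormat(forBus: 0).sampleRate
        let newSampleTime = AVAudioFramePosition(sampleRate * Double(value))
        // Guard against a negative frame count if value exceeds duration.
        let framesToPlay = AVAudioFrameCount(max(0, Float(sampleRate) * (duration - value)))
        self.stop()
        if framesToPlay > 1000 {
            self.scheduleSegment(audioFile,
                                 startingFrame: newSampleTime,
                                 frameCount: framesToPlay,
                                 at: nil,
                                 completionHandler: nil)
        }
        self.play()
    }
}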
I'm trying to make my iPhone play a tune without using prerecorded files. What are my options here? AVAudioEngine, AudioKit? I've looked at them, but the learning curve is relatively steep for something I'm hoping is easy. They also seem like tools for creating sound effects from a PCM buffer.
I'd like to be able to do something like
pitchCreator.play(["C4", "E4", "G4"], durations: [1, 1, 1])
Preferably sounding like an instrument, or at least not like a pure sine wave.
EDIT: The below code has been replaced by AudioKit
To anyone wondering: I did make it work (kind of) using code similar to the one below.
class PitchCreator {
    var engine: AVAudioEngine
    var player: AVAudioPlayerNode
    var mixer: AVAudioMixerNode
    var buffer: AVAudioPCMBuffer

    init() {
        engine = AVAudioEngine()
        player = AVAudioPlayerNode()
        mixer = engine.mainMixerNode
        buffer = AVAudioPCMBuffer(pcmFormat: player.outputFormat(forBus: 0), frameCapacity: 4096)!
        buffer.frameLength = 4096
        engine.attach(player)
        engine.connect(player, to: mixer, format: player.outputFormat(forBus: 0))
    }

    func play(frequency: Float) {
        let signal = self.createSignal(frequency: frequency,
                                       amplitudes: [1.0, 0.5, 0.3, 0.1],
                                       bufferSize: Int(buffer.frameLength),
                                       sampleRate: Float(mixer.outputFormat(forBus: 0).sampleRate))
        for i in 0 ..< signal.count {
            buffer.floatChannelData!.pointee[i] = 0.5 * signal[i]
        }
        do {
            try engine.start()
            player.play()
            player.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
        } catch {}
    }

    func stop() {
        engine.stop()
        player.stop()
    }

    func createSignal(frequency: Float, amplitudes: [Float], bufferSize: Int, sampleRate: Float) -> [Float] {
        let π = Float.pi
        let T = sampleRate / frequency
        var x = [Float](repeating: 0.0, count: bufferSize)
        for k in 0 ..< x.count {
            for h in 0 ..< amplitudes.count {
                x[k] += amplitudes[h] * sin(2.0 * π * Float(h + 1) * Float(k) / T)
            }
        }
        return x
    }
}
But it doesn't sound good enough, so I've gone with sampling the notes I need and using AVAudioPlayer to play them instead.
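For reference, playing such a pre-sampled note with AVAudioPlayer only takes a few lines. This is a minimal sketch; the file name is a placeholder for whatever sample is bundled with the app:

import AVFoundation

var notePlayer: AVAudioPlayer?

func playSampledNote() {
    // "C4.wav" is a hypothetical bundled recording of the note to play.
    guard let url = Bundle.main.url(forResource: "C4", withExtension: "wav") else { return }
    // Keep the player in a property so it isn't deallocated before playback finishes.
    notePlayer = try? AVAudioPlayer(contentsOf: url)
    notePlayer?.play()
}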