Inaccurate DispatchTime.now() in Swift 3.0 (iOS)

I'm using DispatchTime.now() to measure elapsed time between events. Sometimes it works correctly, but occasionally it generates values that are far below what I would expect.
My current usage:
var t = DispatchTime.now()
var pt = DispatchTime.now()
// called when an event happens
t = DispatchTime.now()
var elapsed = Double(t.uptimeNanoseconds - pt.uptimeNanoseconds)
elapsed *= 32768/1000000000
pt = t
Here t and pt are the current and previous times; elapsed takes the difference in nanoseconds, converts it to a Double, and scales it so that 1 second = 32768. When this technique fails, the recorded data is about 100 times smaller than expected. The scaling is not the issue; I've checked the rawValue of t and pt. My assumption is that the clock backing DispatchTime is running at a slower speed, maybe because of debugging, but in general I would expect iOS to compensate for something like that.

As @AlexanderMomchliov suggested, NSDate is a better approach than DispatchTime.
Implemented as:
var t: TimeInterval = 0
var pt: TimeInterval = NSDate().timeIntervalSinceReferenceDate
// called when an event happens
t = NSDate().timeIntervalSinceReferenceDate
var elapsed: Double = t - pt
elapsed *= 32768
pt = t

You are performing integer division which will result in inaccurate elapsed time:
elapsed *= 32768/1000000000
You should either wrap these in Double or write them as decimal literals (e.g. 32768.0/1000000000.0):
elapsed *= Double(32768)/Double(1000000000)
Additionally, NSEC_PER_SEC is defined as a global variable as part of the Dispatch framework:
elapsed *= Double(32768)/Double(NSEC_PER_SEC)
NSDate or Date may be adjusted by the system and should not be used for reliably measuring elapsed time. DispatchTime or CACurrentMediaTime would be good solutions and are both based on mach absolute time. However, if the app is put in the background during measurement, then using Date would be a good fallback.
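For example, a version of the measurement with the arithmetic done entirely in Double (a sketch that keeps your 32768-units-per-second scale; recordEvent is just an illustrative name):
import Foundation // NSEC_PER_SEC; DispatchTime comes from Dispatch

var pt = DispatchTime.now()

// Returns the time since the previous event, scaled so 1 second == 32768.
func recordEvent() -> Double {
    let t = DispatchTime.now()
    let elapsedNanos = Double(t.uptimeNanoseconds - pt.uptimeNanoseconds)
    let elapsed = elapsedNanos * 32768.0 / Double(NSEC_PER_SEC)
    pt = t
    return elapsed
}

// CACurrentMediaTime() from QuartzCore is another mach-based option that
// already returns seconds as a Double.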
I'd also recommend checking out this question for further discussion.

Related

How to display current speed in MPH based on data from GPS and Accelerometer. I have an idea, but having difficulty executing it

I apologize for not knowing the proper terminology for everything here. I'm a fairly new programmer, and entirely new to Swift. The task I'm trying to accomplish is to display the current speed in MPH. I've found that using "CoreLocation", storing locations in an array, and using "locations.speed" to display the speed is quite slow and does not refresh as often as I want.
My thought was to get an initial speed value using the "MapKit" and "CoreLocation" method, then feed that initial speed value into a function using the accelerometer to provide a quicker responding speedometer. I would do this by integrating the accelerometer values and adding the initial velocity. This was the best solution I could come up with to get a more accurate speedometer with a better refresh rate.
I'm having a couple of issues currently:
First Issue: I don't know how to get an initial speed value from a function using location data as parameters into a function using accelerometer data as parameters.
Second Issue: Even when assuming an initial speed of 0, my current program displays a value that keeps increasing infinitely. I'm not sure what the issue is that is causing this.
I will show you the portion of my code responsible for this, and would appreciate any insight any of you may have!
For my First Issue, here is my GPS data Function:
func provideInitSpeed(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) -> Double {
    let location = locations[0]
    return (location.speed) * 2.23693629 // returns speed value converted to MPH
}
I'm not sure how to make a function call to retrieve this value in my Accelerometer function.
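For clarity, here is a sketch of the kind of pattern I think I need, with hypothetical names (SpeedProvider, initialSpeedMPH): the CLLocationManagerDelegate callback stores the latest GPS speed in a property, and the accelerometer closure reads that property instead of calling a function:
import CoreLocation

class SpeedProvider: NSObject, CLLocationManagerDelegate {
    // Latest GPS speed in MPH; the accelerometer handler would read this.
    private(set) var initialSpeedMPH: Double = 0

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        // speed is reported in m/s and is negative when invalid
        guard let speed = locations.last?.speed, speed >= 0 else { return }
        initialSpeedMPH = speed * 2.23693629 // convert m/s to MPH
    }
}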
For my Second Issue, here is my Accelerometer Function with assumed starting speed of 0:
motionManager.startAccelerometerUpdates(to: OperationQueue.current!) { (data, error) in
    if let myData = data {
        // getting my acceleration data and rounding the values off to the hundredths place to reduce noise
        xAccel = round((myData.acceleration.x * g) * 100) / 100
        yAccel = round((myData.acceleration.y * g) * 100) / 100
        zAccel = round((myData.acceleration.z * g) * 100) / 100
        // Integrating accel vals to get velocity vals *Possibly where error occurs*
        // I multiply the accel values by the change in time, which is currently set at 0.2 seconds.
        xVel += xAccel * self.motionManager.accelerometerUpdateInterval
        yVel += yAccel * self.motionManager.accelerometerUpdateInterval
        zVel += zAccel * self.motionManager.accelerometerUpdateInterval
        // Finding total speed; Magnitude of Velocity
        totalSpeed = sqrt(pow(xVel, 2) + pow(yVel, 2) + pow(zVel, 2))
        // if-else statement for further noise reduction.
        // note: "zComp" just adjusts for the -1.0 G units z acceleration value that the phone reads by default for gravity
        if (totalSpeed - zComp) > -0.1 && (totalSpeed - zComp) < 0.1 {
            self.view.reloadInputViews()
            self.speedWithAccelLabel.text = "\(0.0)"
        } else {
            // Printing totalSpeed
            self.view.reloadInputViews()
            self.speedWithAccelLabel.text = "\(abs(round((totalSpeed - zComp + /*where initSpeed would go*/) * 10) / 10))"
        }
    } // data end
} // motionManager end
I'm not sure why but the speed this function displays is always increasing by about 4 mph every refresh of the label.
This is my first time using Stack Overflow, so I apologize for any stupid mistakes I might have made!
Thanks a lot!

How do I achieve very accurate timing in Swift?

I am working on a musical app with an arpeggio/sequencing feature that requires great timing accuracy. Currently, using `Timer`, I have achieved an accuracy with an average jitter of ~5 ms, but a max jitter of ~11 ms, which is unacceptable for fast arpeggios of 8th and 16th notes, and especially 32nd notes.
I've read that `CADisplayLink` is more accurate than `Timer`, but since it is limited to 1/60th of a second (~16-17 ms), it seems like it would be a less accurate approach than what I've achieved with Timer.
Would diving into CoreAudio be the only way to achieve what I want? Is there some other way to achieve more accurate timing?
I did some testing of Timer and DispatchSourceTimer (aka GCD timer) on iPhone 7 with 1000 data points with an interval of 0.05 seconds. I was expecting GCD timer to be appreciably more accurate (given that it had a dedicated queue), but I found that they were comparable, with standard deviation of my various trials ranging from 0.2-0.8 milliseconds and maximum deviation from the mean of about 2-8 milliseconds.
When trying mach_wait_until as outlined in Technical Note TN2169: High Precision Timers in iOS / OS X, I achieved a timer that was roughly four times as accurate as what I achieved with either Timer or GCD timers.
Having said that, I'm not entirely confident that mach_wait_until is the best approach, as the determination of the specific policy values for thread_policy_set seems to be poorly documented. But the code below reflects the values I used in my tests, using code adapted from How to set realtime thread in Swift? and TN2169:
var timebaseInfo = mach_timebase_info_data_t()

func configureThread() {
    mach_timebase_info(&timebaseInfo)
    let clock2abs = Double(timebaseInfo.denom) / Double(timebaseInfo.numer) * Double(NSEC_PER_SEC)

    let period      = UInt32(0.00 * clock2abs)
    let computation = UInt32(0.03 * clock2abs) // 30 ms of work
    let constraint  = UInt32(0.05 * clock2abs)

    let THREAD_TIME_CONSTRAINT_POLICY_COUNT = mach_msg_type_number_t(MemoryLayout<thread_time_constraint_policy>.size / MemoryLayout<integer_t>.size)

    var policy = thread_time_constraint_policy()
    var ret: Int32
    let thread: thread_port_t = pthread_mach_thread_np(pthread_self())

    policy.period = period
    policy.computation = computation
    policy.constraint = constraint
    policy.preemptible = 0

    ret = withUnsafeMutablePointer(to: &policy) {
        $0.withMemoryRebound(to: integer_t.self, capacity: Int(THREAD_TIME_CONSTRAINT_POLICY_COUNT)) {
            thread_policy_set(thread, UInt32(THREAD_TIME_CONSTRAINT_POLICY), $0, THREAD_TIME_CONSTRAINT_POLICY_COUNT)
        }
    }

    if ret != KERN_SUCCESS {
        mach_error("thread_policy_set:", ret)
        exit(1)
    }
}
I then could do:
private func nanosToAbs(_ nanos: UInt64) -> UInt64 {
    return nanos * UInt64(timebaseInfo.denom) / UInt64(timebaseInfo.numer)
}

private func startMachTimer() {
    Thread.detachNewThread {
        autoreleasepool {
            self.configureThread()

            var when = mach_absolute_time()
            for _ in 0 ..< maxCount {
                when += self.nanosToAbs(UInt64(0.05 * Double(NSEC_PER_SEC)))
                mach_wait_until(when)
                // do something
            }
        }
    }
}
Note, you might want to see if when hasn't already passed (you want to make sure that your timers don't get backlogged if your processing can't be completed in the allotted time), but hopefully this illustrates the idea.
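For example, the loop body could resynchronize whenever it falls behind (a sketch along the lines of the test code above):
for _ in 0 ..< maxCount {
    when += self.nanosToAbs(UInt64(0.05 * Double(NSEC_PER_SEC)))
    // If the work overran its slot, `when` is already in the past;
    // jump forward to "now" rather than trying to catch up on missed slots.
    if when < mach_absolute_time() {
        when = mach_absolute_time()
    }
    mach_wait_until(when)
    // do something
}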
Anyway, with mach_wait_until, I achieved greater fidelity than Timer or GCD timers, at the cost of CPU/power consumption as described in What are the do's and dont's of code running with high precision timers?
You may be skeptical on this final point, but I suspect it would be prudent to dive into Core Audio and see if it might offer a more robust solution.
For acceptable musically accurate rhythms, the only suitable timing source is using Core Audio or AVFoundation.
I'm working on a sequencer app myself, and I would definitely recommend using AudioKit for those purposes.
It has its own sequencer class.
https://audiokit.io/

iOS: Synchronizing frames from camera and motion data

I'm trying to capture frames from camera and associated motion data.
For synchronization I'm using timestamps. Video and motion is written to a file and then processed. In that process I can calculate motion-frames offset for every video.
It turns out that the motion data and the video data for the same timestamp are offset from each other by a varying amount, from 0.2 s up to 0.3 s.
This offset is constant for one video but varies from video to video.
If it were the same offset every time, I would be able to subtract some calibrated value, but it's not.
Is there a good way to synchronize timestamps?
Maybe I'm not recording them correctly?
Is there a better way to bring them to the same frame of reference?
CoreMotion returns timestamps relative to system uptime, so I add an offset to get Unix time:
uptimeOffset = [[NSDate date] timeIntervalSince1970] -
               [NSProcessInfo processInfo].systemUptime;

CMDeviceMotionHandler blk =
    ^(CMDeviceMotion * _Nullable motion, NSError * _Nullable error){
        if (!error) {
            motionTimestamp = motion.timestamp + uptimeOffset;
            ...
        }
    };

[motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXTrueNorthZVertical
                                                   toQueue:[NSOperationQueue currentQueue]
                                               withHandler:blk];
To get frame timestamps with high precision I'm using AVCaptureVideoDataOutputSampleBufferDelegate. They are offset to Unix time as well:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
{
    CMTime frameTime = CMSampleBufferGetOutputPresentationTimeStamp(sampleBuffer);

    if (firstFrame)
    {
        firstFrameTime = CMTimeMake(frameTime.value, frameTime.timescale);
        startOfRecording = [[NSDate date] timeIntervalSince1970];
    }

    CMTime presentationTime = CMTimeSubtract(frameTime, firstFrameTime);
    float seconds = CMTimeGetSeconds(presentationTime);

    frameTimestamp = seconds + startOfRecording;
    ...
}
It is actually pretty simple to correlate these timestamps - although it's not clearly documented, both camera frame and motion data timestamps are based on the mach_absolute_time() timebase.
This is a monotonic timer that is reset at boot, but importantly also stops counting when the device is asleep. So there's no easy way to convert it to a standard "wall clock" time.
Thankfully you don't need to, as the timestamps are directly comparable: motion.timestamp is in seconds, and you can log mach_absolute_time() in the callback to see that it is on the same timebase. My quick test shows the motion timestamp is typically about 2 ms before mach_absolute_time in the handler, which seems about right for how long it might take for the data to get reported to the app.
Note that mach_absolute_time() is in tick units that need conversion to nanoseconds; on iOS 10 and later you can just use the equivalent clock_gettime_nsec_np(CLOCK_UPTIME_RAW), which does the same thing.
[_motionManager
 startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryZVertical
 toQueue:[NSOperationQueue currentQueue]
 withHandler:^(CMDeviceMotion * _Nullable motion, NSError * _Nullable error) {
     // motion.timestamp is in seconds; convert to nanoseconds
     uint64_t motionTimestampNs = (uint64_t)(motion.timestamp * 1e9);

     // Get conversion factors from ticks to nanoseconds
     struct mach_timebase_info timebase;
     mach_timebase_info(&timebase);

     // mach_absolute_time in nanoseconds
     uint64_t ticks = mach_absolute_time();
     uint64_t machTimeNs = (ticks * timebase.numer) / timebase.denom;

     int64_t difference = machTimeNs - motionTimestampNs;

     NSLog(@"Motion timestamp: %llu, machTime: %llu, difference %lli",
           motionTimestampNs, machTimeNs, difference);
 }];
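On iOS 10 and later, the same comparison can be written more compactly in Swift using clock_gettime_nsec_np (a sketch; motionManager is assumed to be a configured CMMotionManager):
import CoreMotion

motionManager.startDeviceMotionUpdates(using: .xArbitraryZVertical,
                                       to: OperationQueue.current!) { motion, error in
    guard let motion = motion, error == nil else { return }
    // motion.timestamp is seconds since boot; convert to nanoseconds
    let motionNs = UInt64(motion.timestamp * 1_000_000_000)
    // Nanoseconds on the same mach timebase, no mach_timebase_info needed
    let nowNs = clock_gettime_nsec_np(CLOCK_UPTIME_RAW)
    print("motion: \(motionNs) ns, now: \(nowNs) ns, difference: \(Int64(nowNs) - Int64(motionNs)) ns")
}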
For the camera, the timebase is also the same:
// In practice gives the same value as the CMSampleBufferGetOutputPresentationTimeStamp
// but this is the media's "source" timestamp which feels more correct
CMTime frameTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
uint64_t frameTimestampNs = (uint64_t)(CMTimeGetSeconds(frameTime) * 1e9);
The delay between the timestamp and the handler being called is a bit larger here, usually in the 10s of milliseconds.
We now need to consider what a timestamp on a camera frame actually means - there are two issues here: finite exposure time, and rolling shutter.
Rolling shutter means that not all scanlines of the image are actually captured at the same time - the top row is captured first and the bottom row last. This rolling readout of the data is spread over the entire frame time, so in 30 FPS camera mode the final scanline's exposure start/end time is almost exactly 1/30 second after the respective start/end time of the first scanline.
My tests indicate the presentation timestamp in the AVFoundation frames is the start of the readout of the frame - ie the end of the exposure of the first scanline. So the end of the exposure of the final scanline is frameDuration seconds after this, and the start of the exposure of the first scanline was exposureTime seconds before this. So a timestamp right in the centre of the frame exposure (the midpoint of the exposure of the middle scanline of the image) can be calculated as:
const double frameDuration = 1.0 / 30; // rolling shutter effect, depends on camera mode
const double exposure = CMTimeGetSeconds(avCaptureDevice.exposureDuration); // exposureDuration is a CMTime
CMTime frameTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
double midFrameTime = CMTimeGetSeconds(frameTime) - exposure * 0.5 + frameDuration * 0.5;
In indoor settings, the exposure usually ends up being the full frame time anyway, so the midFrameTime from above ends up identical to the frameTime. The difference is noticeable (under extremely fast motion) with the short exposures you typically get from brightly lit outdoor scenes.
Why the original approach had different offsets
I think the main cause of your offset is that you assume the timestamp of the first frame is the time that the handler runs - ie it doesn't account for any delay between capturing the data and it being delivered to your app. Especially if you're using the main queue for these handlers I can imagine the callback for that first frame being delayed by the 0.2-0.3s you mention.
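Put differently, you can skip the wall-clock anchoring entirely and compare the two timestamps on their shared timebase (a minimal Swift sketch; sampleBuffer and motion are assumed to come from your two callbacks):
import CoreMedia
import CoreMotion

// Both values are seconds since boot on the mach_absolute_time timebase,
// so their difference is the real sensor-to-frame offset.
let frameSeconds = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
let offsetSeconds = frameSeconds - motion.timestamp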
The best solution I was able to find to this problem was
to run a feature tracker over the recorded video, pick one of the strong features, plot the speed of its movement along, say, the X axis, and then correlate this plot to the accelerometer Y data.
When there are two similar plots that are offset from each other along the abscissa, a technique called cross-correlation allows you to find that offset.
There's an obvious drawback to this approach: it's slow, as it requires some video processing.
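For reference, the cross-correlation step itself can be a brute-force search over lags (a sketch; featureSpeed and accel are assumed to be the feature-speed and accelerometer signals resampled to a common rate):
// Returns the lag (in samples) that maximizes the un-normalized correlation.
func bestLag(_ a: [Double], _ b: [Double], maxLag: Int) -> Int {
    var best = 0
    var bestScore = -Double.infinity
    for lag in -maxLag...maxLag {
        var score = 0.0
        for i in 0 ..< a.count {
            let j = i + lag
            if j >= 0 && j < b.count {
                score += a[i] * b[j]
            }
        }
        if score > bestScore {
            bestScore = score
            best = lag
        }
    }
    return best
}

// let lag = bestLag(featureSpeed, accel, maxLag: 300)
// let offsetSeconds = Double(lag) / sampleRate
The best lag in samples, divided by the common sample rate, gives the offset in seconds.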

How do I get current playing time, CMTime in milliseconds in AVPlayer?

I am getting the current playing value in seconds, but I need it in milliseconds. I tried currentTime.value/currentTime.timescale, but it didn't give the exact value.
CMTime currentTime = vPlayer.currentItem.currentTime; // playing time
CMTimeValue tValue = currentTime.value;
CMTimeScale tScale = currentTime.timescale;
NSTimeInterval time = CMTimeGetSeconds(currentTime);
NSLog(@"Time :%f", time); // This is in seconds; it misses the decimal value
double shot = (float)tValue / (float)tScale;
shotTimeVideo = [NSString stringWithFormat:@"%.2f", (float)tValue / (float)tScale];
Okay, first of all, the value you want is milliseconds, not seconds.
So you can just use CMTimeGetSeconds(<#CMTime time#>) to get seconds; then, if you want milliseconds, multiply: seconds * 1000 as a float or double value.
For CMTime calculations, use the CMTime function CMTimeMultiplyByRatio(<#CMTime time#>, <#int32_t multiplier#>, <#int32_t divisor#>): just do this --> CMTimeMultiplyByRatio(yourCMTimeValue, 1, 1000)
Apple's Doc
@function   CMTimeMultiplyByRatio
@abstract   Returns the result of multiplying a CMTime by an integer, then dividing by another integer.
@discussion The exact rational value will be preserved, if possible without overflow. If an overflow
            would occur, a new timescale will be chosen so as to minimize the rounding error.
            Default rounding will be applied when converting the result to this timescale. If the
            result value still overflows when timescale == 1, then the result will be either positive
            or negative infinity, depending on the direction of the overflow.
            If any rounding occurs for any reason, the result's kCMTimeFlags_HasBeenRounded flag will be
            set. This flag will also be set if the CMTime operand has kCMTimeFlags_HasBeenRounded set.
            If the denominator, and either the time or the numerator, are zero, the result will be
            kCMTimeInvalid. If only the denominator is zero, the result will be either kCMTimePositiveInfinity
            or kCMTimeNegativeInfinity, depending on the signs of the other arguments.
            If time is invalid, the result will be invalid. If time is infinite, the result will be
            similarly infinite. If time is indefinite, the result will be indefinite.
@result     (time * multiplier) / divisor
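As a quick Swift sketch of the simpler seconds-based route (player here is a hypothetical, already configured AVPlayer):
import AVFoundation

let current = player.currentTime()                  // CMTime
let milliseconds = CMTimeGetSeconds(current) * 1000.0
print(String(format: "%.0f ms", milliseconds))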
A little late, but may be useful for others.
You can get the timestamp in milliseconds from the CMTime object by
CMTimeConvertScale(yourCMTime, timescale: 1000, method: CMTimeRoundingMethod.roundHalfAwayFromZero)
An example:
var yourtime: CMTime = CMTimeMake(value: 1234567, timescale: 10)
var timemilli: CMTime = CMTimeConvertScale(yourtime, timescale: 1000,
                                           method: CMTimeRoundingMethod.roundHalfAwayFromZero)
var timemillival: Int64 = timemilli.value
print("timemilli \(timemillival)")
which will produce
timemilli 123456700

can I make glsl bail out of a loop when it's been running too long?

I'm doing some GLSL fractals, and I'd like to make the calculations bail if they're taking too long, to keep the frame rate up (without having to figure out what's good for each existing device and any future ones).
It would be nice if there were a timer I could check every 10 iterations or something....
Failing that, it seems the best approach might be to track how long it took to render the previous frame (or previous N frames) and change the "iterate to" number dynamically as a uniform...?
Or some other suggestion? :)
As it appears there's no good way to do this on the GPU, a simple approach is to "tune" the "bail after this number of iterations" threshold outside the loop, once per frame.
CFTimeInterval previousTimestamp = CFAbsoluteTimeGetCurrent();

// gl calls here

CFTimeInterval frameDuration = CFAbsoluteTimeGetCurrent() - previousTimestamp;
float msecs = frameDuration * 1000.0;

if (msecs < 0.2) {
    _dwell = MIN(_dwell + 16., 256.);
} else if (msecs > 0.4) {
    _dwell = MAX(_dwell - 4., 32.);
}
So my "dwell" is kept between 32 and 256, and more optimistically raised than decreased, and is pushed as a uniform in the "gl calls here" section.
