Can iOS boot time drift?

I'm using this code to determine when my iOS device last rebooted:
int mib[MIB_SIZE];   // MIB_SIZE is 2: { CTL_KERN, KERN_BOOTTIME }
size_t size;
struct timeval boottime;

mib[0] = CTL_KERN;
mib[1] = KERN_BOOTTIME;
size = sizeof(boottime);
if (sysctl(mib, MIB_SIZE, &boottime, &size, NULL, 0) != -1) {
    return boottime.tv_sec;
}
return 0;
I'm seeing some anomalies with this time. In particular, I save the long value, and days or weeks later I check the saved value against the one returned by the code above.
I'm not sure, but I think I'm seeing some drift, which doesn't make any sense to me. I deliberately avoid converting to NSDate to prevent drift. I would think that boot time is recorded by the kernel once when it boots and isn't computed again; it is just stored. But could iOS be keeping the boot time as an NSDate, with whatever inherent drift problems come with that?
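For reference, here is a self-contained sketch of the kind of check I'm doing; the persistence of the saved value is elided, and the wrapper names are mine:
#include <sys/types.h>
#include <sys/sysctl.h>
#include <sys/time.h>
#include <stdio.h>
#include <time.h>

/* Read KERN_BOOTTIME; returns 0 on failure. */
static time_t read_boot_time(void) {
    int mib[2] = { CTL_KERN, KERN_BOOTTIME };
    struct timeval boottime;
    size_t size = sizeof(boottime);

    if (sysctl(mib, 2, &boottime, &size, NULL, 0) != -1) {
        return boottime.tv_sec;
    }
    return 0;
}

/* Compare a boot time saved days or weeks ago against the current value. */
static void check_boot_time(time_t saved) {
    time_t current = read_boot_time();
    if (saved != 0 && current != 0 && saved != current) {
        printf("boot time moved by %ld seconds\n", (long)(current - saved));
    }
}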

While the iOS kernel is closed-source, it's reasonable to assume most of it is the same as the OS X kernel, which is open-source.
Within osfmk/kern/clock.c there is the function:
/*
 *  clock_get_boottime_nanotime:
 *
 *  Return the boottime, used by sysctl.
 */
void
clock_get_boottime_nanotime(
    clock_sec_t     *secs,
    clock_nsec_t    *nanosecs)
{
    spl_t   s;

    s = splclock();
    clock_lock();

    *secs = (clock_sec_t)clock_boottime;
    *nanosecs = 0;

    clock_unlock();
    splx(s);
}
and clock_boottime is declared as:
static uint64_t clock_boottime; /* Seconds boottime epoch */
and finally the comment on this function shows that it can, indeed, change:
/*
 *  clock_set_calendar_microtime:
 *
 *  Sets the current calendar value by
 *  recalculating the epoch and offset
 *  from the system clock.
 *
 *  Also adjusts the boottime to keep the
 *  value consistent, writes the new
 *  calendar value to the platform clock,
 *  and sends calendar change notifications.
 */
void
clock_set_calendar_microtime(
    clock_sec_t     secs,
    clock_usec_t    microsecs)
{
    ...
Update to answer query from OP
I am not certain how often clock_set_calendar_microtime() is called, as I am not familiar with the inner workings of the kernel; however, it adjusts the clock_boottime value, and clock_boottime is also initialized in clock_initialize_calendar(), so I would say it can be set more than once. I have been unable to find any call to it using:
$ find . -type f -exec grep -l clock_set_calendar_microtime {} \;

RE my comment above...
"to my understanding, when the user goes into settings and changes the
time manually, the boot time is changed by the delta to the new time
to keep the interval between boot time and system time, equal. but it
does not "drift" as it is a timestamp, only the system clock itself
drifts."
I'm running NTP in my iOS app, and it talks to Google's time servers.
I feed NTP the uptime since boot (which doesn't pause, and which is correctly adjusted if some nefarious user starts messing with the system time... which is the whole point of this in the first place), and then add the offset between the uptime since boot and epoch time to my uptime.
inline static struct timeval uptime(void) {
    struct timeval before_now, now, after_now;
    after_now = since_boot();
    do {
        before_now = after_now;
        gettimeofday(&now, NULL);
        after_now = since_boot();
        // retry until since_boot() is stable across the gettimeofday() call
    } while (after_now.tv_sec != before_now.tv_sec || after_now.tv_usec != before_now.tv_usec);

    struct timeval systemUptime;
    systemUptime.tv_sec = now.tv_sec - before_now.tv_sec;
    systemUptime.tv_usec = now.tv_usec - before_now.tv_usec;
    return systemUptime;
}
I sync with the time servers once every 15 minutes, and calculate the offset drift (i.e. the system clock drift) every time.
static void calculateOffsetDrift(void) {
    static dispatch_queue_t offsetDriftQueue = dispatch_queue_create("", DISPATCH_QUEUE_CONCURRENT);
    static double lastOffset;
    dispatch_barrier_sync(offsetDriftQueue, ^{
        double newOffset = networkOffset();
        if (lastOffset != 0.0f) printf("offset difference = %f \n", lastOffset - newOffset);
        lastOffset = newOffset;
    });
}
On my iPhone XS Max the system clock usually runs around 30 ms behind over 15 minutes.
Here are some figures from a test I just ran using LTE in NYC:
+47.381592 ms
+43.325684 ms
-67.654541 ms
+24.860107 ms
+5.940674 ms
+25.395264 ms
-34.969971 ms


Time errors in ADC

I measure distance using an ultrasonic signal. An STM32F1 generates the ultrasound signal, and an STM32F4 records that signal using a microphone. Both STM32s are synchronized by a signal generated by another device; they are connected by one wire.
Question: why does the signal arrive at different times, even though I don't move the receiver or the transmitter? It gives errors of about 50 mm.
Dispersion signals
The receiver code is here:
while (1)
{
    if (HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_5) != 0x00)
    {
        Get_UZ_Signal(uz_signal);
        Send_Signal(uz_signal);
    }
}

void Get_UZ_Signal(uint16_t* uz_signal)
{
    int i, j;
    uint16_t uz_buf[10];

    HAL_ADC_Start_DMA(&hadc1, (uint32_t*)&uz_buf, 300000);

    for (i = 0; i < lenght_signal; i++)
    {
        j = 10;
        while (j > 0)
        {
            j--;
        }
        uz_signal[i] = uz_buf[0];
    }
    HAL_ADC_Stop_DMA(&hadc1);
}
hadc1.Instance = ADC1;
hadc1.Init.ClockPrescaler = ADC_CLOCK_SYNC_PCLK_DIV4;
hadc1.Init.Resolution = ADC_RESOLUTION_12B;
hadc1.Init.ScanConvMode = DISABLE;
hadc1.Init.ContinuousConvMode = ENABLE;
hadc1.Init.DiscontinuousConvMode = DISABLE;
hadc1.Init.ExternalTrigConvEdge = ADC_EXTERNALTRIGCONVEDGE_NONE;
hadc1.Init.ExternalTrigConv = ADC_SOFTWARE_START;
hadc1.Init.DataAlign = ADC_DATAALIGN_RIGHT;
hadc1.Init.NbrOfConversion = 1;
hadc1.Init.DMAContinuousRequests = DISABLE;
hadc1.Init.EOCSelection = ADC_EOC_SINGLE_CONV;
More information here:
https://github.com/BooSooV/Indoor-Ultrasonic-Positioning-System/tree/master/Studying_ultrasonic_signals/Measured_lengths_dispersion
PROBLEM RESOLVED
Dispersion signals final
The time errors were created by the transmitter, so I made some changes to it. The sync signal is now captured with the help of EXTI, the PWM is generated all the time, and I control the signal with the Enable/Disable pin on the driver. Now I have a dispersion of 5 mm, which is enough for me.
The final programs are here:
https://github.com/BooSooV/Indoor-Ultrasonic-Positioning-System/tree/master/Studying_ultrasonic_signals/Measured_lengths_dispersion
Maybe it is a problem of processing time. The speed of sound is 343 meters/second, so 50 mm is about 0.15 ms.
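As a quick back-of-the-envelope check of that figure (nothing project-specific, just the arithmetic):
#include <stdio.h>

int main(void) {
    const double speed_of_sound_m_s = 343.0;  /* as stated above */
    const double error_m = 0.050;             /* the observed 50 mm dispersion */

    /* 0.050 m / 343 m/s is roughly 146 microseconds, i.e. about 0.15 ms */
    printf("%.0f us\n", error_m / speed_of_sound_m_s * 1e6);
    return 0;
}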
Have you thought of calling HAL_ADC_Start_DMA() from main()?
uint16_t uz_buf[10];
HAL_ADC_Start_DMA(&hadc1, (uint32_t*)&uz_buf, 10); // 300000);
// Size of the buffer -----------------------^
and then calling:
void Get_UZ_Signal(uint16_t* uz_signal) {
    int i;

    // For debugging - get the time to process.
    // Read the SysTick counter. It wraps around from SysTick->LOAD
    // to 0 every 1 msec.
    int32_t startTime = SysTick->VAL;

    __HAL_ADC_ENABLE(&hadc1);                  // Start the DMA

    // __HAL_DMA_GET_COUNTER returns the remaining data units
    // in the current DMA channel transfer
    while (__HAL_DMA_GET_COUNTER(&hdma_adc1) != 0)
        ;

    for (i = 0; i < lenght_signal; i++) {
        // uz_signal[i] = uz_buf[0];
        // 0 or i ------------^
        uz_signal[i] = uz_buf[i];
    }

    __HAL_ADC_DISABLE(&hadc1);                 // Stop the DMA

    int32_t endTime = SysTick->VAL;
    // SysTick counts down, so elapsed ticks = start - end;
    // check for negative, in case of wrap-around
    int32_t difTime = startTime - endTime;
    if (difTime < 0)
        difTime += SysTick->LOAD;

    __HAL_DMA_SET_COUNTER(&hdma_adc1, 10);     // Reset the counter
    // If the DMA buffer is 10, the COUNTER will start at 10 and decrement.
    // Ref. Manual: 9.4.4 DMA channel x number of data register (DMA_CNDTRx)
}
In your code:
MX_USART1_UART_Init();
HAL_ADC_Start_DMA(&hadc1, (uint32_t*)&uz_signal, 30000);
// Must be stopped, because it is running now.
// How long is the buffer: uint16_t uz_signal[30000]?
__HAL_ADC_DISABLE(&hadc1);                 // must be disabled
// Reset the counter
__HAL_DMA_SET_COUNTER(&hdma_adc1, 30000);

while (1)
{
    if (HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_5) != 0x00)
    {
        __HAL_ADC_ENABLE(&hadc1);
        while (__HAL_DMA_GET_COUNTER(&hdma_adc1) != 0)
            ;
        __HAL_ADC_DISABLE(&hadc1);
        __HAL_DMA_SET_COUNTER(&hdma_adc1, 30000);
        // Is 30,000 the sizeof your buffer?
        ...
    }
}
I made some tests with an STM32F407 µC @ 168 MHz. I'll show you the code below.
The speed of sound is 343 meters/second.
During the test, I calculated the time to process the ADC conversion. Each conversion takes about 0.35 µs (60 ticks).
Result:

Array size    Time        Corresponding distance
   100        0.041 ms      14 mm
  1600        0.571 ms     196 mm
 10000        3.57  ms    1225 mm
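To relate those conversion times back to distance at 343 m/s, a rough sketch that reproduces the table's last column:
#include <stdio.h>

int main(void) {
    const double speed_of_sound = 343.0;                   /* m/s */
    const double conv_time_ms[] = { 0.041, 0.571, 3.57 };  /* from the table above */

    for (int i = 0; i < 3; i++) {
        /* m/s multiplied by ms gives mm directly */
        double distance_mm = speed_of_sound * conv_time_ms[i];
        printf("%.3f ms -> %.0f mm\n", conv_time_ms[i], distance_mm);
    }
    return 0;
}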
In the code, you'll see the start time. Be careful: the SysTick is a decrementing counter reloaded from SysTick->LOAD, which matches the µC speed / 1000 (at 168 MHz it counts down from 168000 to 0 every millisecond). It could be a good idea to get the millisecond time with HAL_GetTick(), and the microseconds from SysTick.
int main(void)
{
    ...
    MX_DMA_Init();
    MX_ADC1_Init();

    // Configure the channel in the way you want
    ADC_ChannelConfTypeDef sConfig;
    sConfig.Channel      = ADC_CHANNEL_0;   // ADC1_CHANNEL;
    sConfig.Rank         = 1;
    sConfig.SamplingTime = ADC_SAMPLETIME_3CYCLES;
    sConfig.Offset       = 0;
    HAL_ADC_ConfigChannel(&hadc1, &sConfig);

    // Start the DMA channel and stop it
    HAL_ADC_Start_DMA(&hadc1, (uint32_t*)&uz_signal, sizeof(uz_signal) / sizeof(uint16_t));
    HAL_ADC_Stop_DMA(&hadc1);

    // The SysTick is a decrementing counter.
    // It can be used to count usec.
    // https://www.sciencedirect.com/topics/engineering/systick-timer
    tickLoad = SysTick->LOAD;

    while (1)
    {
        // Set the buffer to zero
        for (uint32_t i = 0; i < (sizeof(uz_signal) / sizeof(uint16_t)); i++)
            uz_signal[i] = 0;

        // Reset the counter, ready to restart
        DMA2_Stream0->NDTR = (uint16_t)(sizeof(uz_signal) / sizeof(uint16_t));

        /* Enable the peripheral */
        ADC1->CR2 |= ADC_CR2_ADON;

        /* Start conversion if ADC is effectively enabled */
        /* Clear regular group conversion flag and overrun flag */
        ADC1->SR = ~(ADC_FLAG_EOC | ADC_FLAG_OVR);

        /* Enable ADC overrun interrupt */
        ADC1->CR1 |= (ADC_IT_OVR);

        /* Enable ADC DMA mode */
        ADC1->CR2 |= ADC_CR2_DMA;

        /* Start the DMA channel */
        /* Enable common interrupts */
        DMA2_Stream0->CR  |= DMA_IT_TC | DMA_IT_TE | DMA_IT_DME | DMA_IT_HT;
        DMA2_Stream0->FCR |= DMA_IT_FE;
        DMA2_Stream0->CR  |= DMA_SxCR_EN;

        //===================================================
        // The DMA is ready to start.
        // Your if (HAL_GPIO_ReadPin( ... )) will be here.
        HAL_Delay(10);
        //===================================================

        // Get the time
        tickStart = SysTick->VAL;

        // Start the DMA
        ADC1->CR2 |= (uint32_t)ADC_CR2_SWSTART;

        // Wait until the conversion is completed
        while (DMA2_Stream0->NDTR != 0)
            ;

        // Get end time
        tickEnd = SysTick->VAL;

        /* Stop potential conversion on going, on regular and injected groups */
        ADC1->CR2 &= ~ADC_CR2_ADON;

        /* Disable the selected ADC DMA mode */
        ADC1->CR2 &= ~ADC_CR2_DMA;

        /* Disable ADC overrun interrupt */
        ADC1->CR1 &= ~(ADC_IT_OVR);

        // Get processing time (SysTick counts down)
        tickDiff = tickStart - tickEnd;

        //===================================================
        // Your processing will go here
        HAL_Delay(10);
        //===================================================
    }
}
So you'll have the start and end times; I think you have to base your formula on the start time.
Good luck.
If you want to know the execution time, you could use this little function. It returns microseconds:
static uint32_t timeMicroSecDivider = 0;
extern uint32_t uwTick;

// The SysTick->LOAD matches the uC speed / 1000.
// If the uC clock is 80 MHz, then LOAD is 80000.
// The SysTick->VAL is a decrementing counter from (LOAD-1) to 0.
//====================================================
uint64_t getTimeMicroSec()
{
    if (timeMicroSecDivider == 0)
    {
        // Number of clocks per microsecond
        timeMicroSecDivider = SysTick->LOAD / 1000;
    }
    return ((uwTick * 1000) + ((SysTick->LOAD - SysTick->VAL) / timeMicroSecDivider));
}
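A possible usage sketch for that helper; the measured function here is just a stand-in:
#include <stdint.h>
#include <stdio.h>

/* Hypothetical example: time an arbitrary block of code with the
 * getTimeMicroSec() helper defined above. */
void timeBlock(void (*block)(void))
{
    uint64_t t0 = getTimeMicroSec();
    block();                                   /* the code being measured */
    uint64_t elapsed = getTimeMicroSec() - t0;

    printf("block took %llu us\n", (unsigned long long)elapsed);
}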

Rounding a Duration to the nearest second based on desired precision

I recently started working with Dart, and was trying to format a countdown clock with numbers at per-second precision.
When counting down time, the representation is often precise yet imperfect: if I start a Duration at 2 minutes and ask for the current value after one second has elapsed, the timer will almost certainly report something like 1:58.999999, and if I use Duration.inSeconds to emit the value, it will be 118 seconds. That is due to how the ~/ operator works, since it rounds down to an integer based on the Duration's microseconds.
If I render the value as a clock, I'll see it go from "2:00" to "1:58" after one second, and it will end up displaying "0:00" twice, until the countdown is truly at 0:00:00.
To a human, this looks like the clock is skipping, so I figured that since the delta is so small, I should round up to the nearest second; that would be accurate enough for a countdown timer and would handle the slight imprecision, measured in micro/milliseconds, to better serve the viewer.
I came up with this secondRounder approach:
Duration secondRounder(Duration duration) {
  int roundedDuration;
  if (duration.inMilliseconds > (duration.inSeconds * 1000)) {
    roundedDuration = duration.inSeconds + 1;
  } else {
    roundedDuration = duration.inSeconds;
  }
  return new Duration(seconds: roundedDuration);
}
This can also be run in this DartPad: https://dartpad.dartlang.org/2a08161c5f889e018938316237c0e810
As I'm not yet familiar with all of the methods, I've read through a lot of the docs, and this is the best I've come up with so far. I think I was looking for a method that might look like:
roundedDuration = duration.ceil(nearest: millisecond)
Is there a better way to go about solving this that I haven't figured out yet?
You can "add" your own method to Duration as an extension method:
extension RoundDurationExtension on Duration {
  /// Rounds the time of this duration up to the nearest multiple of [to].
  Duration ceil(Duration to) {
    int us = this.inMicroseconds;
    int toUs = to.inMicroseconds.abs(); // Ignore if [to] is negative.
    int mod = us % toUs;
    if (mod != 0) {
      return Duration(microseconds: us - mod + toUs);
    }
    return this;
  }
}
That should allow you to write myDuration = myDuration.ceil(Duration(seconds: 1)); and round the myDuration up to the nearest second.
The best solution according to the documentation is to use the .toStringAsFixed() function:
https://api.dart.dev/stable/2.4.0/dart-core/num/toStringAsFixed.html
Examples from the Documentation
1.toStringAsFixed(3); // 1.000
(4321.12345678).toStringAsFixed(3); // 4321.123
(4321.12345678).toStringAsFixed(5); // 4321.12346
123456789012345678901.toStringAsFixed(3); // 123456789012345683968.000
1000000000000000000000.toStringAsFixed(3); // 1e+21
5.25.toStringAsFixed(0); // 5
Another, more flexible option can be...
You can use this function to round up (align) the time:
DateTime alignDateTime(DateTime dt, Duration alignment,
    [bool roundUp = false]) {
  assert(alignment >= Duration.zero);
  if (alignment == Duration.zero) return dt;
  final correction = Duration(
      days: 0,
      hours: alignment.inDays > 0
          ? dt.hour
          : alignment.inHours > 0
              ? dt.hour % alignment.inHours
              : 0,
      minutes: alignment.inHours > 0
          ? dt.minute
          : alignment.inMinutes > 0
              ? dt.minute % alignment.inMinutes
              : 0,
      seconds: alignment.inMinutes > 0
          ? dt.second
          : alignment.inSeconds > 0
              ? dt.second % alignment.inSeconds
              : 0,
      milliseconds: alignment.inSeconds > 0
          ? dt.millisecond
          : alignment.inMilliseconds > 0
              ? dt.millisecond % alignment.inMilliseconds
              : 0,
      microseconds: alignment.inMilliseconds > 0 ? dt.microsecond : 0);
  if (correction == Duration.zero) return dt;
  final corrected = dt.subtract(correction);
  final result = roundUp ? corrected.add(alignment) : corrected;
  return result;
}
and then use it the following way:
void main() {
  DateTime dt = DateTime.now();
  var newDate = alignDateTime(dt, Duration(minutes: 30));
  print(dt);      // prints 2022-01-07 15:35:56.288
  print(newDate); // prints 2022-01-07 15:30:00.000
}

Convert Win32 FILETIME to Unix timestamp in Delphi 7 [duplicate]

I have a trace file in which each transaction time is represented in the Windows FILETIME format. These time numbers are something like this:
128166372003061629
128166372016382155
128166372026382245
Would you please let me know if there is any C/C++ library for Unix/Linux to extract the actual time (especially the seconds) from these numbers? Or should I write my own extraction function?
It's quite simple: the Windows epoch starts at 1601-01-01T00:00:00Z, which is 11644473600 seconds before the UNIX/Linux epoch (1970-01-01T00:00:00Z). Windows ticks are in 100-nanosecond units. Thus, a function to get seconds from the UNIX epoch is as follows:
#define WINDOWS_TICK 10000000
#define SEC_TO_UNIX_EPOCH 11644473600LL

unsigned WindowsTickToUnixSeconds(long long windowsTicks)
{
    return (unsigned)(windowsTicks / WINDOWS_TICK - SEC_TO_UNIX_EPOCH);
}
The FILETIME type is the number of 100 ns increments since January 1, 1601.
To convert this into a Unix time_t you can use the following:
#define TICKS_PER_SECOND 10000000
#define EPOCH_DIFFERENCE 11644473600LL

time_t convertWindowsTimeToUnixTime(long long int input)
{
    long long int temp;
    temp = input / TICKS_PER_SECOND; // convert from 100 ns intervals to seconds
    temp = temp - EPOCH_DIFFERENCE;  // subtract number of seconds between epochs
    return (time_t) temp;
}
you may then use the ctime functions to manipulate it.
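For instance, a small standalone sketch (repeating the conversion so it compiles on its own) that formats the second timestamp from the question with gmtime()/strftime():
#include <stdio.h>
#include <time.h>

#define TICKS_PER_SECOND 10000000
#define EPOCH_DIFFERENCE 11644473600LL

int main(void)
{
    long long ft = 128166372016382155LL;  /* second value from the question */
    time_t t = (time_t)(ft / TICKS_PER_SECOND - EPOCH_DIFFERENCE);

    char buf[32];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
    puts(buf);  /* 2007-02-22 17:00:01 UTC */
    return 0;
}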
(I discovered I can't enter readable code in a comment, so...)
Note that Windows can represent times outside the range of POSIX epoch times, and thus a conversion routine should return an "out-of-range" indication as appropriate. The simplest method is:
... (as above)
long long secs;
time_t t;

secs = (windowsTicks / WINDOWS_TICK - SEC_TO_UNIX_EPOCH);
t = (time_t) secs;
if (secs != (long long) t)   // checks for truncation/overflow/underflow
    return (time_t) -1;      // value not representable as a POSIX time
return t;
New answer for old question.
Using C++11's <chrono> plus this free, open-source library:
https://github.com/HowardHinnant/date
One can very easily convert these timestamps to std::chrono::system_clock::time_point, and also convert these timestamps to human-readable format in the Gregorian calendar:
#include "date.h"
#include <iostream>
std::chrono::system_clock::time_point
from_windows_filetime(long long t)
{
using namespace std::chrono;
using namespace date;
using wfs = duration<long long, std::ratio<1, 10'000'000>>;
return system_clock::time_point{floor<system_clock::duration>(wfs{t} -
(sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1}))};
}
int
main()
{
using namespace date;
std::cout << from_windows_filetime(128166372003061629) << '\n';
std::cout << from_windows_filetime(128166372016382155) << '\n';
std::cout << from_windows_filetime(128166372026382245) << '\n';
}
For me this outputs:
2007-02-22 17:00:00.306162
2007-02-22 17:00:01.638215
2007-02-22 17:00:02.638224
On Windows, you can actually skip the floor, and get that last decimal digit of precision:
return system_clock::time_point{wfs{t} -
(sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1})};
2007-02-22 17:00:00.3061629
2007-02-22 17:00:01.6382155
2007-02-22 17:00:02.6382245
With optimizations on, the sub-expression (sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1}) will translate at compile time to days{134774} which will further compile-time-convert to whatever units the full-expression requires (seconds, 100-nanoseconds, whatever). Bottom line: This is both very readable and very efficient.
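As a quick cross-check, that days{134774} figure is consistent with the 11644473600-second offset used in the earlier answers:
#include <stdio.h>

int main(void)
{
    long long secs_between_epochs = 11644473600LL;       /* 1601-01-01 to 1970-01-01 */
    printf("%lld days\n", secs_between_epochs / 86400);  /* prints "134774 days" */
    return 0;
}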
The solution that divides and adds will not work correctly with daylight saving time.
Here is a snippet that works, but it is for Windows.
time_t FileTime_to_POSIX(FILETIME ft)
{
    FILETIME localFileTime;
    FileTimeToLocalFileTime(&ft, &localFileTime);

    SYSTEMTIME sysTime;
    FileTimeToSystemTime(&localFileTime, &sysTime);

    struct tm tmtime = {0};
    tmtime.tm_year  = sysTime.wYear - 1900;
    tmtime.tm_mon   = sysTime.wMonth - 1;
    tmtime.tm_mday  = sysTime.wDay;
    tmtime.tm_hour  = sysTime.wHour;
    tmtime.tm_min   = sysTime.wMinute;
    tmtime.tm_sec   = sysTime.wSecond;
    tmtime.tm_wday  = 0;
    tmtime.tm_yday  = 0;
    tmtime.tm_isdst = -1;

    time_t ret = mktime(&tmtime);
    return ret;
}
Assuming you are asking about the FILETIME Structure, then FileTimeToSystemTime does what you want, you can get the seconds from the SYSTEMTIME structure it produces.
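For example, an untested Win32-only sketch along those lines (the helper name is mine):
#include <windows.h>
#include <stdio.h>

/* Print the UTC calendar time (including seconds) for a raw 64-bit FILETIME value. */
static void printFileTimeAsUtc(unsigned long long raw)
{
    FILETIME ft;
    ft.dwLowDateTime  = (DWORD)(raw & 0xFFFFFFFFu);
    ft.dwHighDateTime = (DWORD)(raw >> 32);

    SYSTEMTIME st;
    if (FileTimeToSystemTime(&ft, &st)) {
        printf("%04u-%02u-%02u %02u:%02u:%02u UTC\n",
               st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute, st.wSecond);
    }
}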
Here's essentially the same solution except this one encodes negative numbers from Ldap properly and lops off the last 7 digits before conversion.
public static int LdapValueAsUnixTimestamp(SearchResult searchResult, string fieldName)
{
    var strValue = LdapValue(searchResult, fieldName);
    if (strValue == "0") return 0;
    if (strValue == "9223372036854775807") return -1;
    return (int)(long.Parse(strValue.Substring(0, strValue.Length - 7)) - 11644473600);
}
If somebody needs to convert it in MySQL:
SELECT timestamp,
FROM_UNIXTIME(ROUND((((timestamp) / CAST(10000000 AS UNSIGNED INTEGER)))
- CAST(11644473600 AS UNSIGNED INTEGER),0))
AS Converted FROM events LIMIT 100
Also here's a pure C#ian way to do it.
(Int32)(DateTime.FromFileTimeUtc(129477880901875000).Subtract(new DateTime(1970, 1, 1))).TotalSeconds;
Here's the result of both methods in my immediate window:
(Int32)(DateTime.FromFileTimeUtc(long.Parse(strValue)).Subtract(new DateTime(1970, 1, 1))).TotalSeconds;
1303314490
(int)(long.Parse(strValue.Substring(0, strValue.Length - 7)) - 11644473600)
1303314490
DateTime.FromFileTimeUtc(long.Parse(strValue))
{2011-04-20 3:48:10 PM}
Date: {2011-04-20 12:00:00 AM}
Day: 20
DayOfWeek: Wednesday
DayOfYear: 110
Hour: 15
InternalKind: 4611686018427387904
InternalTicks: 634389112901875000
Kind: Utc
Millisecond: 187
Minute: 48
Month: 4
Second: 10
Ticks: 634389112901875000
TimeOfDay: {System.TimeSpan}
Year: 2011
dateData: 5246075131329262904

Heap corruption detected - iPhone 5S only

I am developing an app that listens for frequencies/pitches. It works fine on the iPhone 4S, the simulator and other devices, but not on the iPhone 5S. This is the message I am getting:
malloc: *** error for object 0x178203a00: Heap corruption detected, free list canary is damaged
Any suggestions where I should start digging?
Thanks!
The iPhone 5S has an arm64 (64-bit) CPU. Check all the analyzer compiler warnings for code that tries to store 64-bit pointers (and other 64-bit values) into 32-bit C data types.
Also make sure all your audio code parameter passing, object messaging, and manual memory management code is thread safe, and meets all real-time requirements.
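As an illustration of the kind of 64-bit truncation to look for (a made-up example, not the asker's code):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int value = 42;
    int *p = &value;

    /* Broken on arm64 (iPhone 5S and newer): pointers are 64 bits wide,
     * so storing one in a 32-bit integer silently drops the upper half. */
    unsigned int addr32 = (unsigned int)(uintptr_t)p;
    int *bad = (int *)(uintptr_t)addr32;   /* may no longer point at 'value' */

    /* Correct: use a pointer-sized integer type such as uintptr_t. */
    uintptr_t addr = (uintptr_t)p;
    int *good = (int *)addr;

    printf("%p %p %p\n", (void *)p, (void *)bad, (void *)good);
    return 0;
}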
In case it helps anyone, I had exactly the same problem as described above.
The cause in my particular case was that pthread_create(pthread_t* thread, ...) on ARM64 was putting the value into *thread at some time AFTER the thread was started. On OS X, ARM32 and the simulator, it consistently filled in this value before the start_routine was called.
If I performed a pthread_detach operation in the running thread before that value was written (even when using pthread_self() to get the current pthread_t), I would end up with the heap corruption message.
I added a small loop in my thread dispatcher that waited until that value was filled in -- after which the heap errors went away. Don't forget 'volatile'!
Restructuring the code might be a better way to fix this -- it depends on your situation. (I noticed this in a unit test that I'd written, I didn't trip up on this issue on any 'real' code)
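A rough sketch of the workaround described above (illustrative names, not the original code):
#include <pthread.h>
#include <stddef.h>

/* Slot that pthread_create() fills in; 'volatile' because the new thread
 * polls it while the creating thread may still be inside pthread_create(). */
static volatile pthread_t g_worker = (pthread_t)0;

static void *workerDispatch(void *arg)
{
    (void)arg;

    /* Work around the behaviour observed on arm64: wait until pthread_create()
     * has actually stored our thread id before detaching ourselves. */
    while (g_worker == (pthread_t)0) {
        /* spin briefly */
    }
    pthread_detach(pthread_self());

    /* ... real work goes here ... */
    return NULL;
}

int main(void)
{
    pthread_create((pthread_t *)&g_worker, NULL, workerDispatch, NULL);

    /* Let the detached worker run; in a real app the run loop keeps the process alive. */
    pthread_exit(NULL);
}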
Same problem, but in my case I had malloc'd 10 bytes of memory and tried to use 20 bytes; that corrupted the heap.
@@ -64,7 +64,7 @@ char* bytesToHex(char* buf, int size) {
  * be converted to two hex characters, also add an extra space for the terminating
  * null byte.
  * [size] is the size of the buf array */
-    int len = (size * 2) + 1;
+    int len = (size * 3) + 1;
     char* output = (char*)malloc(len * sizeof(char));
     memset(output, 0, len);

     /* pointer to the first item (0 index) of the output array */
     char *ptr = &output[0];

     int i;
     for (i = 0; i < size; i++) {
         /* "sprintf" converts each byte in the "buf" array into a 2 hex string
          * characters appended with a null byte, for example 10 => "0A\0".
          *
          * This string would then be added to the output array starting from the
          * position pointed at by "ptr". For example if "ptr" is pointing at the 0
          * index then "0A\0" would be written as output[0] = '0', output[1] = 'A' and
          * output[2] = '\0'.
          *
          * "sprintf" returns the number of chars in its output excluding the null
          * byte, in our case this would be 2. So we move the "ptr" location two
          * steps ahead so that the next hex string would be written at the new
          * location, overriding the null byte from the previous hex string.
          *
          * We don't need to add a terminating null byte because it's been already
          * added for us from the last hex string. */
         ptr += sprintf(ptr, "%02X ", buf[i] & 0xFF);
     }

     return output;
 }
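A short usage sketch of the corrected function (the caller frees the returned string):
#include <stdio.h>
#include <stdlib.h>

char *bytesToHex(char *buf, int size);   /* defined above */

int main(void)
{
    char data[] = { 0x10, 0x0A, (char)0xFF };

    /* With the fix, bytesToHex() allocates (size * 3) + 1 bytes:
     * "XX " per input byte plus the terminating null. */
    char *hex = bytesToHex(data, (int)sizeof data);
    printf("%s\n", hex);   /* prints "10 0A FF " (note the trailing space) */
    free(hex);
    return 0;
}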

Console Print Speed

I’ve been looking at a few example programs in order to find better ways to code with Dart.
Not that this example (below) is of any particular importance; however, it is taken from rosettacode.org with alterations by me to (hopefully) bring it up to date.
The point of this post concerns benchmarks, and how the speed of printing to the console, relative to other languages, may hurt Dart's results in some benchmarks. I don't know how it compares to other languages, but in Dart the console output (at least on Windows) appears to be quite slow, even when using StringBuffer.
As an aside, in my test, if n1 is allowed to grow to 11, the total recursion count is over 238 million, and it takes (on my laptop) about 2.9 seconds to run Example 1.
Also of possible interest: if the String assignment is changed to int, with no printing, no elapsed time is recorded at all (Example 2).
Typical times on my low-spec laptop (run from the Console - Windows).
Elapsed Microseconds (Print) = 26002
Elapsed Microseconds (StringBuffer) = 9000
Elapsed Microseconds (no Printing) = 3000
Obviously in this case, console print times are a significant factor relative to computation etc. times.
So, can anyone advise how this compares to, e.g., Java times for console output? That would at least be an indication of whether Dart is particularly slow in this area, which may be relevant to some benchmarks. Incidentally, times when running in the Dart Editor incur a negligible penalty for printing.
// Example 1. The base code for the test (Ackermann).
main() {
  for (int m1 = 0; m1 <= 3; ++m1) {
    for (int n1 = 0; n1 <= 4; ++n1) {
      print("Acker(${m1}, ${n1}) = ${fAcker(m1, n1)}");
    }
  }
}

int fAcker(int m2, int n2) => m2 == 0 ? n2 + 1 : n2 == 0 ?
    fAcker(m2 - 1, 1) : fAcker(m2 - 1, fAcker(m2, n2 - 1));
The altered code for the test.
// Example 2 //
main() {
  fRunAcker(1); // print
  fRunAcker(2); // StringBuffer
  fRunAcker(3); // no printing
}

void fRunAcker(int iType) {
  String sResult;
  StringBuffer sb1;
  Stopwatch oStopwatch = new Stopwatch();
  oStopwatch.start();
  List lType = ["Print", "StringBuffer", "no Printing"];

  if (iType == 2) // Use StringBuffer
    sb1 = new StringBuffer();

  for (int m1 = 0; m1 <= 3; ++m1) {
    for (int n1 = 0; n1 <= 4; ++n1) {
      if (iType == 1) // print
        print("Acker(${m1}, ${n1}) = ${fAcker(m1, n1)}");
      if (iType == 2) // StringBuffer
        sb1.write("Acker(${m1}, ${n1}) = ${fAcker(m1, n1)}\n");
      if (iType == 3) // no printing
        sResult = "Acker(${m1}, ${n1}) = ${fAcker(m1, n1)}\n";
    }
  }

  if (iType == 2)
    print(sb1.toString());

  oStopwatch.stop();
  print("Elapsed Microseconds (${lType[iType - 1]}) = " +
      "${oStopwatch.elapsedMicroseconds}");
}

int fAcker(int m2, int n2) => m2 == 0 ? n2 + 1 : n2 == 0 ?
    fAcker(m2 - 1, 1) : fAcker(m2 - 1, fAcker(m2, n2 - 1));

// Typical times on my low-spec laptop (run from the console).
//   Elapsed Microseconds (Print)        = 26002
//   Elapsed Microseconds (StringBuffer) =  9000
//   Elapsed Microseconds (no Printing)  =  3000
I tested using Java, which was an interesting exercise.
The results from this small test indicate that Dart takes about 60% longer for console output than Java, comparing the fastest run of each. I really need to do a larger test with more terminal output, which I will do.
In terms of "computational" speed with no output, using this test with m = 3 and n = 10, the comparison is consistently around 530 milliseconds for Java versus 580 milliseconds for Dart. That is 59.5 million calls. Java bombs out with n = 11 (238 million calls), which I presume is a stack overflow. I'm not saying this is a definitive benchmark of much, but it is an indication of something. Dart appears to be very close in computational time, which is pleasing to see. I also altered the Dart code from the conditional (?:) operator to "if" statements, the same as Java, and that appears to be consistently a bit faster, around 10% or more.
I ran a further test for console printing, as shown below (Example 1: Dart; Example 2: Java).
The best times for each are as follows (100,000 iterations) :
Dart 47 seconds.
Java 22 seconds.
Dart Editor 2.3 seconds.
While it is not earth-shattering, it does appear to illustrate that (a) for some reason Dart is slow with console output, (b) the Dart Editor is extremely fast with console output, and (c) this needs to be taken into account when evaluating any performance test that involves console output, which is what initially drew my attention to it.
Perhaps when they have time :) the Dart team could look at this if it is considered worthwhile.
Example 1 - Dart
// Dart - Test 100,000 iterations of console output //

Stopwatch oTimer = new Stopwatch();

main() {
  // "warm-up"
  for (int i1 = 0; i1 < 20000; i1++) {
    print("The quick brown fox chased ...");
  }

  oTimer.reset();
  oTimer.start();
  for (int i2 = 0; i2 < 100000; i2++) {
    print("The quick brown fox chased ....");
  }
  oTimer.stop();

  print("Elapsed time = ${oTimer.elapsedMicroseconds / 1000} milliseconds");
}
Example 2 - Java
public class console001
{
    // Java - Test 100,000 iterations of console output
    public static void main(String[] args)
    {
        // warm-up
        for (int i1 = 0; i1 < 20000; i1++)
        {
            System.out.println("The quick brown fox jumped ....");
        }

        long tmStart = System.nanoTime();
        for (int i2 = 0; i2 < 100000; i2++)
        {
            System.out.println("The quick brown fox jumped ....");
        }
        long tmEnd = System.nanoTime() - tmStart;

        System.out.println("Time elapsed in microseconds = " + (tmEnd / 1000));
    }
}
