Convert Win32 FILETIME to Unix timestamp in Delphi 7 [duplicate]

I have a trace file in which each transaction time is represented in Windows FILETIME format. The time values look something like this:
128166372003061629
128166372016382155
128166372026382245
Could you please let me know if there is any C/C++ library for Unix/Linux to extract the actual time (especially seconds) from these numbers? Or should I write my own extraction function?

It's quite simple: the Windows epoch starts at 1601-01-01T00:00:00Z, which is 11644473600 seconds before the UNIX/Linux epoch (1970-01-01T00:00:00Z). Windows ticks are 100-nanosecond intervals. Thus, a function to get seconds since the UNIX epoch is as follows:
#define WINDOWS_TICK 10000000
#define SEC_TO_UNIX_EPOCH 11644473600LL
unsigned WindowsTickToUnixSeconds(long long windowsTicks)
{
return (unsigned)(windowsTicks / WINDOWS_TICK - SEC_TO_UNIX_EPOCH);
}

The FILETIME type is the number of 100 ns intervals since January 1, 1601.
To convert this into a Unix time_t you can use the following:
#define TICKS_PER_SECOND 10000000
#define EPOCH_DIFFERENCE 11644473600LL
time_t convertWindowsTimeToUnixTime(long long int input){
long long int temp;
temp = input / TICKS_PER_SECOND; //convert from 100ns intervals to seconds;
temp = temp - EPOCH_DIFFERENCE; //subtract number of seconds between epochs
return (time_t) temp;
}
You may then use the ctime functions to manipulate it.
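For example, here is a minimal sketch (assuming the convertWindowsTimeToUnixTime function above is in scope) that formats the first timestamp from the question with gmtime and strftime:
#include <stdio.h>
#include <time.h>

int main(void)
{
    // First timestamp from the question, converted with the function above.
    time_t unixTime = convertWindowsTimeToUnixTime(128166372003061629LL);

    struct tm *utc = gmtime(&unixTime);   // broken-down UTC time
    char buf[32];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", utc);
    printf("%s\n", buf);                  // should print 2007-02-22 17:00:00
    return 0;
}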

(I discovered I can't enter readable code in a comment, so...)
Note that Windows can represent times outside the range of POSIX epoch times, and thus a conversion routine should return an "out-of-range" indication as appropriate. The simplest method is:
Reusing the WINDOWS_TICK and SEC_TO_UNIX_EPOCH macros above:
time_t WindowsTickToUnixSeconds(long long windowsTicks)
{
    long long secs = windowsTicks / WINDOWS_TICK - SEC_TO_UNIX_EPOCH;
    time_t t = (time_t) secs;
    if (secs != (long long) t)  // checks for truncation/overflow/underflow
        return (time_t) -1;     // value not representable as a POSIX time
    return t;
}

New answer for old question.
Using C++11's <chrono> plus this free, open-source library:
https://github.com/HowardHinnant/date
One can very easily convert these timestamps to std::chrono::system_clock::time_point, and also to a human-readable format in the Gregorian calendar:
#include "date.h"
#include <iostream>
std::chrono::system_clock::time_point
from_windows_filetime(long long t)
{
using namespace std::chrono;
using namespace date;
using wfs = duration<long long, std::ratio<1, 10'000'000>>;
return system_clock::time_point{floor<system_clock::duration>(wfs{t} -
(sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1}))};
}
int
main()
{
using namespace date;
std::cout << from_windows_filetime(128166372003061629) << '\n';
std::cout << from_windows_filetime(128166372016382155) << '\n';
std::cout << from_windows_filetime(128166372026382245) << '\n';
}
For me this outputs:
2007-02-22 17:00:00.306162
2007-02-22 17:00:01.638215
2007-02-22 17:00:02.638224
On Windows, you can actually skip the floor, and get that last decimal digit of precision:
return system_clock::time_point{wfs{t} -
(sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1})};
2007-02-22 17:00:00.3061629
2007-02-22 17:00:01.6382155
2007-02-22 17:00:02.6382245
With optimizations on, the sub-expression (sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1}) will translate at compile time to days{134774} which will further compile-time-convert to whatever units the full-expression requires (seconds, 100-nanoseconds, whatever). Bottom line: This is both very readable and very efficient.
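As a small illustration of that point (a sketch assuming the same date.h header and a C++14 compiler, where these conversions are constexpr), the epoch difference can even be checked at compile time:
#include "date.h"
using namespace date;

// The subtraction below is a constant expression, so the compiler verifies
// that the two epochs are exactly 134774 days (11644473600 seconds) apart.
static_assert(sys_days{1970_y/jan/1} - sys_days{1601_y/jan/1} == days{134774},
              "unexpected epoch difference");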

The solution that divides and adds will not work correctly with daylight saving time.
Here is a snippet that works, but it is Windows-only.
time_t FileTime_to_POSIX(FILETIME ft)
{
FILETIME localFileTime;
FileTimeToLocalFileTime(&ft,&localFileTime);
SYSTEMTIME sysTime;
FileTimeToSystemTime(&localFileTime,&sysTime);
struct tm tmtime = {0};
tmtime.tm_year = sysTime.wYear - 1900;
tmtime.tm_mon = sysTime.wMonth - 1;
tmtime.tm_mday = sysTime.wDay;
tmtime.tm_hour = sysTime.wHour;
tmtime.tm_min = sysTime.wMinute;
tmtime.tm_sec = sysTime.wSecond;
tmtime.tm_wday = 0;
tmtime.tm_yday = 0;
tmtime.tm_isdst = -1;
time_t ret = mktime(&tmtime);
return ret;
}

Assuming you are asking about the FILETIME structure, FileTimeToSystemTime does what you want; you can get the seconds from the SYSTEMTIME structure it produces.
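For instance, a minimal Windows-only sketch (my own illustration, not code from the original answer) that copies one of the question's raw values into a FILETIME and reads the seconds out of the resulting SYSTEMTIME:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    unsigned long long raw = 128166372003061629ULL;    // value from the trace file

    FILETIME ft;
    ft.dwLowDateTime  = (DWORD)(raw & 0xFFFFFFFFULL);  // low 32 bits
    ft.dwHighDateTime = (DWORD)(raw >> 32);            // high 32 bits

    SYSTEMTIME st;
    if (FileTimeToSystemTime(&ft, &st))
        printf("%04d-%02d-%02d %02d:%02d:%02d UTC (seconds = %d)\n",
               st.wYear, st.wMonth, st.wDay,
               st.wHour, st.wMinute, st.wSecond, st.wSecond);
    return 0;
}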

Here's essentially the same solution, except this one encodes negative numbers from LDAP properly and lops off the last 7 digits before conversion.
public static int LdapValueAsUnixTimestamp(SearchResult searchResult, string fieldName)
{
var strValue = LdapValue(searchResult, fieldName);
if (strValue == "0") return 0;
if (strValue == "9223372036854775807") return -1;
return (int)(long.Parse(strValue.Substring(0, strValue.Length - 7)) - 11644473600);
}

If somebody needs to convert it in MySQL:
SELECT timestamp,
FROM_UNIXTIME(ROUND((((timestamp) / CAST(10000000 AS UNSIGNED INTEGER)))
- CAST(11644473600 AS UNSIGNED INTEGER),0))
AS Converted FROM events LIMIT 100

Also here's a pure C#ian way to do it.
(Int32)(DateTime.FromFileTimeUtc(129477880901875000).Subtract(new DateTime(1970, 1, 1))).TotalSeconds;
Here's the result of both methods in my immediate window:
(Int32)(DateTime.FromFileTimeUtc(long.Parse(strValue)).Subtract(new DateTime(1970, 1, 1))).TotalSeconds;
1303314490
(int)(long.Parse(strValue.Substring(0, strValue.Length - 7)) - 11644473600)
1303314490
DateTime.FromFileTimeUtc(long.Parse(strValue))
{2011-04-20 3:48:10 PM}
Date: {2011-04-20 12:00:00 AM}
Day: 20
DayOfWeek: Wednesday
DayOfYear: 110
Hour: 15
InternalKind: 4611686018427387904
InternalTicks: 634389112901875000
Kind: Utc
Millisecond: 187
Minute: 48
Month: 4
Second: 10
Ticks: 634389112901875000
TimeOfDay: {System.TimeSpan}
Year: 2011
dateData: 5246075131329262904

Related

Rounding a Duration to the nearest second based on desired precision

I recently started working with Dart and was trying to format a countdown clock with per-second precision.
When counting down time, there's often a precise-yet-imperfect way of representing the time. So if I start a Duration at 2 minutes and ask to show the current time after one second has elapsed, the timer will almost certainly report something like 1:58:999999, and if I use Duration.inSeconds to emit the value, it will be 118 seconds. This is due to how the ~/ operator works, since it rounds down to integers based on the Duration's microseconds.
If I render the value as a clock, I'll see it go from "2:00" to "1:58" after one second, and it will end up displaying "0:00" twice, until the countdown is truly at 0:00:00.
To a human this looks like the clock is skipping, so since the delta is so small, I figured I should round up to the nearest second. That would be accurate enough for a countdown timer and would handle the slight imprecision (measured in micro/milliseconds) in a way that better serves the viewer.
I came up with this secondRounder approach:
Duration secondRounder(Duration duration) {
int roundedDuration;
if (duration.inMilliseconds > (duration.inSeconds * 1000)) {
roundedDuration = duration.inSeconds + 1;
} else {
roundedDuration = duration.inSeconds;
}
return new Duration(seconds: roundedDuration);
}
This can also be run in this DartPad: https://dartpad.dartlang.org/2a08161c5f889e018938316237c0e810
As I'm not yet familiar with all of the methods, I've read through a lot of the docs, and this is the best I've come up with so far. I think I was looking for a method that might look like:
roundedDuration = duration.ceil(nearest: millisecond)
Is there a better way to go about solving this that I haven't figured out yet?
You can "add" your own method to Duration as an extension method:
extension RoundDurationExtension on Duration {
/// Rounds the time of this duration up to the nearest multiple of [to].
Duration ceil(Duration to) {
int us = this.inMicroseconds;
int toUs = to.inMicroseconds.abs(); // Ignore if [to] is negative.
int mod = us % toUs;
if (mod != 0) {
return Duration(microseconds: us - mod + toUs);
}
return this;
}
}
That should allow you to write myDuration = myDuration.ceil(Duration(seconds: 1)); and round the myDuration up to the nearest second.
The best solution according to the documentation is to use the .toStringAsFixed() function:
https://api.dart.dev/stable/2.4.0/dart-core/num/toStringAsFixed.html
Examples from the Documentation
1.toStringAsFixed(3); // 1.000
(4321.12345678).toStringAsFixed(3); // 4321.123
(4321.12345678).toStringAsFixed(5); // 4321.12346
123456789012345678901.toStringAsFixed(3); // 123456789012345683968.000
1000000000000000000000.toStringAsFixed(3); // 1e+21
5.25.toStringAsFixed(0); // 5
Another, more flexible, option:
You can use this function to round the time up (or align it to a given duration).
DateTime alignDateTime(DateTime dt, Duration alignment,
[bool roundUp = false]) {
assert(alignment >= Duration.zero);
if (alignment == Duration.zero) return dt;
final correction = Duration(
days: 0,
hours: alignment.inDays > 0
? dt.hour
: alignment.inHours > 0
? dt.hour % alignment.inHours
: 0,
minutes: alignment.inHours > 0
? dt.minute
: alignment.inMinutes > 0
? dt.minute % alignment.inMinutes
: 0,
seconds: alignment.inMinutes > 0
? dt.second
: alignment.inSeconds > 0
? dt.second % alignment.inSeconds
: 0,
milliseconds: alignment.inSeconds > 0
? dt.millisecond
: alignment.inMilliseconds > 0
? dt.millisecond % alignment.inMilliseconds
: 0,
microseconds: alignment.inMilliseconds > 0 ? dt.microsecond : 0);
if (correction == Duration.zero) return dt;
final corrected = dt.subtract(correction);
final result = roundUp ? corrected.add(alignment) : corrected;
return result;
}
and then use it the following way
void main() {
DateTime dt = DateTime.now();
var newDate = alignDateTime(dt,Duration(minutes:30));
print(dt); // prints 2022-01-07 15:35:56.288
print(newDate); // prints 2022-01-07 15:30:00.000
}

Convert: timeinfo = localtime(&now) to 24Hr, then extract tm_hour, C/C++. Syntax assistance

I'm using time.h with tm_hour to get the hour of the day, but the time is in 12 Hr format by default.
I need the 24-hour format (00-23 hrs) for simple event-time code, like:
time_t now;
struct tm * timeinfo;
time(&now);
timeinfo = localtime(&now);
Serial.println("24 Hr Time is: %H:%M:%S\n", timeinfo->tm_hour, timeinfo->tm_min, timeinfo->tm_sec); //test
Serial.println(timeinfo->tm_hour);
if ((timeinfo->tm_hour)>= 22) // 10PM Event
//Do something here
Any syntax assistance appreciated.
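In case it helps, here is a minimal sketch of one way to format that time with strftime; note that tm_hour is already defined as hours since midnight (0-23), and printf stands in for the Serial.println of the question's environment:
#include <stdio.h>
#include <time.h>

void printEventTime(void)
{
    time_t now;
    struct tm *timeinfo;

    time(&now);
    timeinfo = localtime(&now);

    char buf[20];
    strftime(buf, sizeof buf, "%H:%M:%S", timeinfo);  // %H is always 00-23
    printf("24 Hr Time is: %s\n", buf);

    if (timeinfo->tm_hour >= 22) {   // 10 PM event
        // do something here
    }
}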

Hash value of String that would be stable across iOS releases?

In the documentation for String.hash on iOS it says:
You should not rely on this property having the same hash value across
releases of OS X.
(strange that they speak of OS X in the iOS documentation)
Well, I need a hashing function that will not change with iOS releases. It can be simple; I do not need anything like SHA. Is there some library for that?
There is another question about this here but the accepted (and only) answer there simply states that we should respect the note in documentation.
Here is a non-crypto hash, for Swift 3:
func strHash(_ str: String) -> UInt64 {
var result = UInt64 (5381)
let buf = [UInt8](str.utf8)
for b in buf {
result = 127 * (result & 0x00ffffffffffffff) + UInt64(b)
}
return result
}
It was derived somewhat from a C++11 constexpr function:
constexpr uint64_t str2int(char const *input) {
return *input // test for null terminator
? (static_cast<uint64_t>(*input) + // add char to end
127 * ((str2int(input + 1) // prime 127 shifts left almost 7 bits
& 0x00ffffffffffffff))) // mask right 56 bits
: 5381; // start with prime number 5381
}
Unfortunately, the two don't yield the same hash. To do that you'd need to reverse the iterator order in strHash:
for b in buf.reversed() {...}
But that will run 13x slower, somewhat comparable to the djb2hash String extension that I got from https://useyourloaf.com/blog/swift-hashable/
Here are some benchmarks, for a million iterations:
hashValue execution time: 0.147760987281799
strHash execution time: 1.45974600315094
strHashReversed time: 18.7755110263824
djb2hash execution time: 16.0091370344162
sdbmhash crashed
For C++, str2int is roughly as fast as Swift 3's hashValue:
str2int execution time: 0.136421

Can iOS boot time drift?

I'm using this code to determine when my iOS device last rebooted:
int mib[MIB_SIZE];
size_t size;
struct timeval boottime;
mib[0] = CTL_KERN;
mib[1] = KERN_BOOTTIME;
size = sizeof(boottime);
if (sysctl(mib, MIB_SIZE, &boottime, &size, NULL, 0) != -1) {
return boottime.tv_sec;
}
return 0;
I'm seeing some anomalies with this time. In particular, I save the long value, and days or weeks later I check the saved value against the value returned by the code above.
I'm not sure, but I think I'm seeing some drift. This doesn't make any sense to me. I'm not converting to NSDate, to prevent drift. I would think that the boot time is recorded by the kernel when it boots and isn't computed again; it is just stored. But could iOS be saving the boot time as an NSDate, with the inherent drift problems of that?
While the iOS Kernel is closed-source, it's reasonable to assume most of it is the same as the OSX Kernel, which is open-source.
Within osfmk/kern/clock.c there is the function:
/*
* clock_get_boottime_nanotime:
*
* Return the boottime, used by sysctl.
*/
void
clock_get_boottime_nanotime(
clock_sec_t *secs,
clock_nsec_t *nanosecs)
{
spl_t s;
s = splclock();
clock_lock();
*secs = (clock_sec_t)clock_boottime;
*nanosecs = 0;
clock_unlock();
splx(s);
}
and clock_boottime is declared as:
static uint64_t clock_boottime; /* Seconds boottime epoch */
and finally the comment on the following function shows that it can, indeed, change:
/*
* clock_set_calendar_microtime:
*
* Sets the current calendar value by
* recalculating the epoch and offset
* from the system clock.
*
* Also adjusts the boottime to keep the
* value consistent, writes the new
* calendar value to the platform clock,
* and sends calendar change notifications.
*/
void
clock_set_calendar_microtime(
clock_sec_t secs,
clock_usec_t microsecs)
{
...
Update to answer query from OP
I am not certain about how often clock_set_calendar_microtime() is called, as I am not familiar with the inner workings of the kernel; however, it adjusts the clock_boottime value, and clock_boottime is initialized in clock_initialize_calendar(), so I would say it can be called more than once. I have been unable to find any call to it using:
$ find . -type f -exec grep -l clock_set_calendar_microtime {} \;
RE my comment above...
"to my understanding, when the user goes into settings and changes the
time manually, the boot time is changed by the delta to the new time
to keep the interval between boot time and system time, equal. but it
does not "drift" as it is a timestamp, only the system clock itself
drifts."
I'm running NTP on my iOS app, and speak with Google's time servers.
I feed NTP the uptime since boot (which doesn't pause and is correctly adjusted if some nefarious user starts messing with system time... which is the whole point of this in the first place), and then add the offset between uptime since boot and epoch time to my uptime.
inline static struct timeval uptime(void) {
struct timeval before_now, now, after_now;
after_now = since_boot();
do {
before_now = after_now;
gettimeofday(&now, NULL);
after_now = since_boot();
} while (after_now.tv_sec != before_now.tv_sec || after_now.tv_usec != before_now.tv_usec); // retry until two consecutive since_boot() readings match
struct timeval systemUptime;
systemUptime.tv_sec = now.tv_sec - before_now.tv_sec;
systemUptime.tv_usec = now.tv_usec - before_now.tv_usec;
return systemUptime;
}
I sync with the time servers once every 15 minutes and calculate the offset drift (i.e. the system clock drift) every time.
static void calculateOffsetDrift(void) {
static dispatch_queue_t offsetDriftQueue = dispatch_queue_create("", DISPATCH_QUEUE_CONCURRENT);
static double lastOffset;
dispatch_barrier_sync(offsetDriftQueue, ^{
double newOffset = networkOffset();
if (lastOffset != 0.0f) printf("offset difference = %f \n", lastOffset - newOffset);
lastOffset = newOffset;
});
}
On my iPhone Xs Max the system clock usually runs around 30ms behind over 15 minutes.
Here are some figures from a test I just ran using LTE in NYC:
+47.381592 ms
+43.325684 ms
-67.654541 ms
+24.860107 ms
+5.940674 ms
+25.395264 ms
-34.969971 ms

Console Print Speed

I’ve been looking at a few example programs in order to find better ways to code with Dart.
Not that this example (below) is of any particular importance; however, it is taken from rosettacode.org, with alterations by me to (hopefully) bring it up to date.
The point of this posting concerns benchmarks, and how the speed of printing to the console may be detrimental to Dart's results in some benchmarks compared to other languages. I don't know what the comparison to other languages is, but in Dart the console output (at least on Windows) appears to be quite slow, even using StringBuffer.
As an aside, in my test, if n1 is allowed to grow to 11, the total recursion count is over 238 million, and it takes (on my laptop) c. 2.9 seconds to run Example 1.
In addition, of possible interest, if the String assignment is altered to int, without printing, no time is recorded as elapsed (Example 2).
Typical times on my low-spec laptop (run from the Console - Windows).
Elapsed Microseconds (Print) = 26002
Elapsed Microseconds (StringBuffer) = 9000
Elapsed Microseconds (no Printing) = 3000
Obviously in this case, console print times are a significant factor relative to computation etc. times.
So, can anyone advise how this compares to, e.g., Java times for console output? That would at least be an indication of whether Dart is particularly slow in this area, which may be relevant to some benchmarks. Incidentally, times when running in the Dart Editor incur a negligible penalty for printing.
// Example 1. The base code for the test (Ackermann).
main() {
for (int m1 = 0; m1 <= 3; ++m1) {
for (int n1 = 0; n1 <= 4; ++n1) {
print ("Acker(${m1}, ${n1}) = ${fAcker(m1, n1)}");
}
}
}
int fAcker(int m2, int n2) => m2==0 ? n2+1 : n2==0 ?
fAcker(m2-1, 1) : fAcker(m2-1, fAcker(m2, n2-1));
The altered code for the test.
// Example 2 //
main() {
fRunAcker(1); // print
fRunAcker(2); // StringBuffer
fRunAcker(3); // no printing
}
void fRunAcker(int iType) {
String sResult;
StringBuffer sb1;
Stopwatch oStopwatch = new Stopwatch();
oStopwatch.start();
List lType = ["Print", "StringBuffer", "no Printing"];
if (iType == 2) // Use StringBuffer
sb1 = new StringBuffer();
for (int m1 = 0; m1 <= 3; ++m1) {
for (int n1 = 0; n1 <= 4; ++n1) {
if (iType == 1) // print
print ("Acker(${m1}, ${n1}) = ${fAcker(m1, n1)}");
if (iType == 2) // StringBuffer
sb1.write ("Acker(${m1}, ${n1}) = ${fAcker(m1, n1)}\n");
if (iType == 3) // no printing
sResult = "Acker(${m1}, ${n1}) = ${fAcker(m1, n1)}\n";
}
}
if (iType == 2)
print (sb1.toString());
oStopwatch.stop();
print ("Elapsed Microseconds (${lType[iType-1]}) = "+
"${oStopwatch.elapsedMicroseconds}");
}
int fAcker(int m2, int n2) => m2==0 ? n2+1 : n2==0 ?
fAcker(m2-1, 1) : fAcker(m2-1, fAcker(m2, n2-1));
//Typical times on my low-spec laptop (run from the console).
// Elapsed Microseconds (Print) = 26002
// Elapsed Microseconds (StringBuffer) = 9000
// Elapsed Microseconds (no Printing) = 3000
I tested using Java, which was an interesting exercise.
The results from this small test indicate that Dart takes about 60% longer for the console output than Java, using the results from the fastest for each. I really need to do a larger test with more terminal output, which I will do.
In terms of "computational" speed with no output, using this test and m = 3, and n = 10, the comparison is consistently around 530 milliseconds for Java compared to 580 milliseconds for Dart. That is 59.5 million calls. Java bombs with n = 11 (238 million calls), which I presume is stack overflow. I'm not saying that is a definitive benchmark of much, but it is an indication of something. Dart appears to be very close in the computational time which is pleasing to see. I altered the Dart code from using the "question mark operator" to use "if" statements the same as Java, and that appears to be a bit faster c. 10% or more, and that appeared to be consistently the case.
I ran a further test for console printing as shown below (example 1 – Dart), (Example 2 – Java).
The best times for each are as follows (100,000 iterations) :
Dart 47 seconds.
Java 22 seconds.
Dart Editor 2.3 seconds.
While it is not earth-shattering, it does appear to illustrate that for some reason (a) Dart is slow with console output, and (b) Dart-Editor is extremely fast with console output. (c) This needs to be taken into account when evaluating any performance that involves console output, which is what initially drew my attention to it.
Perhaps when they have time :) the Dart team could look at this if it is considered worthwhile.
Example 1 - Dart
// Dart - Test 100,000 iterations of console output //
Stopwatch oTimer = new Stopwatch();
main() {
// "warm-up"
for (int i1=0; i1 < 20000; i1++) {
print ("The quick brown fox chased ...");
}
oTimer.reset();
oTimer.start();
for (int i2=0; i2 < 100000; i2++) {
print ("The quick brown fox chased ....");
}
oTimer.stop();
print ("Elapsed time = ${oTimer.elapsedMicroseconds/1000} milliseconds");
}
Example 2 - Java
public class console001
{
// Java - Test 100,000 iterations of console output
public static void main (String [] args)
{
// warm-up
for (int i1=0; i1<20000; i1++)
{
System.out.println("The quick brown fox jumped ....");
}
long tmStart = System.nanoTime();
for (int i2=0; i2<100000; i2++)
{
System.out.println("The quick brown fox jumped ....");
}
long tmEnd = System.nanoTime() - tmStart;
System.out.println("Time elapsed in microseconds = "+(tmEnd/1000));
}
}
