How to format time in an InfluxDB SELECT query

I am new to InfluxDB. I am querying data in the admin UI and the time column is shown as a raw timestamp. Is it possible to see it formatted as a date and time?

You can select RFC 3339 formatting by entering the following command in the CLI:
precision rfc3339

To have InfluxDB timestamps displayed as human-readable dates, start the CLI with:
influx -precision rfc3339
Now try your query again; it should return readable timestamps.
For more details, see:
https://www.influxdata.com/blog/tldr-influxdb-tech-tips-august-4-2016/

Although thierry answered the question already, here is the link to the documentation as well:
precision 'rfc3339|h|m|s|ms|u|ns'
Specifies the format/precision of the timestamp: rfc3339 (YYYY-MM-DDTHH:MM:SS.nnnnnnnnnZ), h (hours), m (minutes), s (seconds), ms (milliseconds), u (microseconds), ns (nanoseconds). Precision defaults to nanoseconds.
https://docs.influxdata.com/influxdb/v1.5/tools/shell/#influx-arguments

The Web Admin Interface was deprecated as of InfluxDB 1.1 (disabled by default).
The precision of the timestamp can be controlled to return hours (h), minutes (m), seconds (s), milliseconds (ms), microseconds (u) or nanoseconds (ns). A special precision option is RFC3339 which returns the timestamp in RFC3339 format with nanosecond precision. The mechanism for specifying the desired time precision is different for the CLI and HTTP API.
To set the precision in the CLI, run precision <RFC3339|h|m|s|ms|us|ns>, depending on what precision you want. The default precision for the CLI is nanoseconds.
To set the precision in the HTTP API, pass epoch=<h|m|s|ms|us|ns> as a query parameter. The default for the HTTP API is RFC3339.
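As a rough illustration of the HTTP API side (my own sketch, not from the answers above): the 1.x /query endpoint accepts the epoch parameter, here called via Python's requests library. The database name "mydb" and measurement "cpu" are placeholders, and InfluxDB is assumed to listen on localhost:8086.

import requests

params = {
    "db": "mydb",
    "q": 'SELECT * FROM "cpu" LIMIT 5',
    "epoch": "ms",   # epoch milliseconds; omit this key to get RFC3339 strings back
}
print(requests.get("http://localhost:8086/query", params=params).json())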

Related

editcap -A and -B: what time zone should I use?

I need to extract packets within certain time ranges from a large pcap, and editcap's -A and -B options are a perfect fit for this task, except my target time ranges are in epoch time and -A/-B require time in the format YYYY-MM-DD HH:MM:SS.
My question is: when I convert epoch time to YYYY-MM-DD HH:MM:SS, what time zone should I use? (I am not sure if this is relevant, but the large pcap I use is a merge of smaller pcaps captured in different time zones.)
I tried tshark, which allows filtering based on epoch time (frame.time_epoch>=X), but tshark seems to be resource-intensive and keeps getting killed on the Ubuntu server I use.
I will appreciate any help!
Use your system's time.
100% correct. The time is parsed and then fed to a routine (mktime()) that converts a year/month/day/hour/minute/second value, in local time in the machine's time zone, to POSIX time ("Epoch time", where the "Epoch" is the UN*X/POSIX Epoch of 1970-01-01 00:00:00 UTC).
Am I right that the capture timestamps are stored as epoch time in pcap internally?
Yes.
And thus, once the system time I feed into editcap is converted into epoch time, editcap can extract the right packets no matter which time zone the packets were captured in?
Yes.
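To make that concrete, here is a small Python sketch (my own illustration, not from the answer above). datetime.fromtimestamp() converts epoch seconds to local time, which is what editcap -A/-B expect; the epoch values and file names are placeholders.

import subprocess
from datetime import datetime

start_epoch, end_epoch = 1472628420, 1472628480   # placeholder epoch seconds
fmt = "%Y-%m-%d %H:%M:%S"
start = datetime.fromtimestamp(start_epoch).strftime(fmt)   # local time, as editcap expects
end = datetime.fromtimestamp(end_epoch).strftime(fmt)
subprocess.run(["editcap", "-A", start, "-B", end, "large.pcap", "slice.pcap"], check=True)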

Why is ActiveSupport::TimeWithZone#to_f returning greater precision than what is stored in postgres?

I have a Rails 5.2 application with a Postgres backend. When I pull a timestamp from the database and convert it to a float (to_f), it should return the time in seconds with fractional seconds down to the microsecond (since the greatest precision Postgres stores is microseconds). Instead, for about 25% of the timestamps I look at, converting to a float gives me 7 fractional digits (tenths of a microsecond).
irb(main):009:0> u.created_at.to_f
=> 1440127129.5120609
What's going on here? Shouldn't this be 1440127129.512060 or 1440127129.512061? Where is this extra decimal digit coming from?
Ruby's inspect, as you see it, just prints the value in decimal form, converted from the internally stored form, which is binary. That decimal representation has little to do with the float's internal precision, and absolutely nothing to do with the precision of the value stored in the database before it was imported into the Ruby object instance.
To demonstrate (part of) the point, try f=0.1000; p f in irb or the rails console. It prints "0.1", not "0.1000".
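The same IEEE 754 double behaviour can be shown outside Ruby. This Python sketch (my addition, with an illustrative value) contrasts the shortest round-trip decimal string with the exact binary value the double actually holds:

from decimal import Decimal

x = 1440127129.512061      # an illustrative microsecond-precision timestamp
print(repr(x))             # shortest decimal string that round-trips to this double
print(Decimal(x))          # the exact value the double really stores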

Why does Delphi use double to store Date and Time instead of Int64?

Why does Delphi use a double (8 bytes) to store date and time instead of an Int64 (8 bytes as well)? Since a double-precision floating point is not an exact value, I'm curious whether the precision of a Unix date and time stored in an Int64 value is better than the precision of a Delphi date and time.
The simple explanation is that the Delphi TDateTime type maps directly to the OLE/COM date/time format.
Embarcadero chose to use an existing date/time representation rather than create yet another one, and selected, what was at the time, the most obvious platform native option.
A couple of useful articles on Windows date/time representations:
How to recognize different types of timestamps from quite a long way away, Raymond Chen
Eric's Complete Guide To VT_DATE, Eric Lippert
As far as precision goes, we are comparing Unix time with TDateTime. Unix time has second precision for both 32-bit and 64-bit values. For values close to the epoch, a double has far greater precision: there are 86,400 seconds in a day, and there are many orders of magnitude more double values between 0 and 1, for instance. You would need to go to around the year 188,143,673 before the precision of Unix time surpasses that of TDateTime.
Although you have focused on the size of the types, the representation is of course crucially important. For instance, if instead of representing date as seconds after epoch, it was represented as milliseconds after epoch, then the precision of Unix time would be 1000 times greater. Of course the range would be reduced by 1000 times also.
You need to be wary of considering precision of these types in isolation. These types don't exist in isolation, and the source of the values is important. If the time is coming from a file system say, then that will in fact determine the precision of the value.
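As a rough, hedged sanity check of that precision comparison (Python rather than Delphi; the present-day TDateTime value is only approximate), math.ulp() gives the gap between adjacent doubles near a value:

import math

days = 45000.0                           # roughly a present-day TDateTime (days since 1899-12-30)
resolution = math.ulp(days) * 86_400     # spacing of adjacent doubles, converted to seconds
print(f"double resolution near today: {resolution:.2e} s")   # sub-microsecond
# Integer Unix time, by contrast, has a fixed 1 s resolution whatever the value.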

No results when running query against data written to InfluxDB

I have InfluxDB 0.13 and I'm sending data via the HTTP API. I'm getting status code 204 back, which I assume means OK. I can see the series if I query with "SHOW SERIES"; I see the measurement and tags. But I cannot query any of the data; it just says no results (query: SELECT * FROM "sql-query").
This is the raw data sent to Influx from Fiddler. Any idea what's wrong?
sql-query,Environment=QA,Service=XTAM_Lag SubscriberName="TXXOff",LagMinutes=141278i 1472628420980000000
sql-query,Environment=QA,Service=XTAM_Lag SubscriberName="TXXTIMEDEPOT",LagMinutes=248i 1472628420980000000
sql-query,Environment=QA,Service=XTAM_Lag SubscriberName="TXXOffMirror",LagMinutes=0i 1472628420980000000
sql-query,Environment=QA,Service=XTAM_Lag SubscriberName="TXXOffMirrorQA",LagMinutes=527i 1472628420980000000
sql-query,Environment=QA,Service=XTAM_Lag SubscriberName="TXXOff",LagMinutes=141279i 1472628480390000128
sql-query,Environment=QA,Service=XTAM_Lag SubscriberName="TXXTIMEDEPOT",LagMinutes=249i 1472628480390000128
sql-query,Environment=QA,Service=XTAM_Lag SubscriberName="TXXOffMirror",LagMinutes=0i 1472628480390000128
sql-query,Environment=QA,Service=XTAM_Lag SubscriberName="TXXOffMirrorQA",LagMinutes=528i 1472628480390000128
By default, all InfluxDB queries with no time constraint will use the current time in UTC on the InfluxDB server as an implicit upper time bound. Essentially, the query SELECT * FROM "sql-query" is interpreted as SELECT * FROM "sql-query" WHERE time < now().
The current UTC time on the server running InfluxDB can be different from the current time on the server generating metrics. This difference can be due to either a bad clock or, more likely, the use of a time zone other than UTC.
If there is an offset, new data will sometimes be written with timestamps in the relative future. Due to the implicit upper time bound on queries explained above, those points will then be excluded from a basic query.
To confirm whether this is the issue, try running a query with the upper time bound set a few days in the future.
SELECT * FROM "sql-query" WHERE time < now() + 1w
The query above will return all points in the sql-query measurement, plus any points written with timestamps up to a week in the future.
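One quick way to check for such a clock offset (a hedged diagnostic sketch of my own, assuming a 1.x instance on localhost:8086 whose HTTP responses include a Date header) is to compare the server's clock with the clock of the machine writing the metrics:

from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
import requests

resp = requests.get("http://localhost:8086/ping")           # adjust host/port as needed
server_time = parsedate_to_datetime(resp.headers["Date"])   # server clock, per the HTTP Date header
local_time = datetime.now(timezone.utc)
print("clock offset:", (local_time - server_time).total_seconds(), "seconds")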

What's the best way to store date values in string format?

I have to store date values (TDateTime) in a string format. What is the best way to do this? I considered the following approaches:
FloatToStr: loses precision, depends on locale settings
FloatToStr with format settings: loses precision
DateTimeToStr: depends on locale settings
DateTimeToStr with format settings: ?
Are there any other alternatives? How do they compare in terms of
Size in memory
Independence of locale settings
Precision
Use ISO-8601 format, as detailed in http://en.wikipedia.org/wiki/ISO_8601
If you need to save storage space, you can use the "compact" layout, e.g. '20090621T054523'.
You can use e.g. FormatDateTime('yyyymmdd"T"hhnnss', aDateTime) to produce it (the literal T has to be quoted inside the format string, since t is itself a time format specifier).
About time zone and localisation (from wikipedia):
There are no time zone designators in ISO 8601. Time is only represented as local time or in relation to UTC.
If no UTC relation information is given with a time representation, the time is assumed to be in local time. While it may be safe to assume local time when communicating in the same time zone, it is ambiguous when used in communicating across different time zones. It is usually preferable to indicate a time zone (zone designator) using the standard’s notation.
So you had better convert the time to UTC and append 'Z' at the end of the timestamp, or use +hh/-hh according to your local time zone. The following times all refer to the same moment: "18:30Z", "22:30+04", "1130-0700", and "15:00-03:30".
For a better resolution, you can add sub-second timing by adding a fraction after either a comma or a dot character: e.g. to denote "14 hours, 30 minutes, 10 seconds and 500 ms", represent it as "14:30:10,5", "143010,5", "14:30:10.5", or "143010.5". You can add several decimals to increase resolution.
If you need fast Iso8601 conversion routines (working with UTF-8 content), take a look at the corresponding part in SynCommons.pas. It's much faster than the default SysUtils functions.
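For comparison, here is the same round-trip idea in Python (an illustration I am adding, not part of the Delphi answer): isoformat()/fromisoformat() produce and parse ISO 8601 text with an explicit UTC offset.

from datetime import datetime, timezone

now = datetime.now(timezone.utc)
text = now.isoformat(timespec="microseconds")   # e.g. 2016-08-31T07:27:00.980000+00:00
assert datetime.fromisoformat(text) == now      # lossless round-trip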
PS:
If your purpose is just to store TDateTime as text in a pure Delphi application, you can use a non-standard but fast approach:
function DateTimeToText(const aDateTime: TDateTime): string;
begin
  result := IntToStr(PInt64(@aDateTime)^);
end;

function TextToDateTime(const aText: string): TDateTime;
begin
  PInt64(@result)^ := StrToInt64Def(aText, 0);
end;
Using the Int64 binary layout of TDateTime/double memory structure will be faster than any other floating-point related conversion.
Generally I would recommend storing datetimes in ISO format as a string: yyyy-mm-dd hh:nn:ss.mmmm
EDIT: if you want to minimize space, you can leave out all the separators and format it like: yyyymmddhhnnssmmmm
FormatDateTime('yyyymmddhhnnss.zzz', Now)
How much precision do you need? You could take, say, the number of milliseconds since, say, 1/1/1970, and just store that number converted to a string. If space is really tight, you could Base64 that value. The downside is that neither would be human-readable.
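A tiny sketch of that idea (my own, in Python; the packing and encoding choices are illustrative, not prescriptive):

import base64, struct, time

ms = int(time.time() * 1000)                                 # milliseconds since the Unix epoch
as_text = str(ms)                                            # compact decimal string
as_b64 = base64.b64encode(struct.pack(">q", ms)).decode()    # 8 packed bytes, then Base64
print(as_text, as_b64)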
