InfluxDB date format when entering/displaying data

I wrote a Python program to enter historical data into InfluxDB.
Everything seems to be OK, but I am not sure whether the time field is correct. The time is supposed to be YYYY, MM, DD, HH, MM as integers.
This is an example of the JSON that I am sending to InfluxDB:
[{'fields': {'High': 72.06, 'Close': 72.01, 'Volume': 6348, 'Open': 72.01, 'Low': 72.01},
  'tags': {'country': 'US', 'symbol': 'AAXJ', 'type': 'ETF', 'exchange': 'NASDAQ'},
  'time': datetime.datetime(2017, 9, 7, 15, 35),
  'measurement': 'quote'}]
However, when I query the data, I get a strange number for the time like this:
time                 Close   High    Low     Open   Volume  country  exchange  symbol  type
----                 -----   ----    ---     ----   ------  -------  --------  ------  ----
1504798500000000000  144.46  144.47  144.06  144.1  112200  US       NYSE     IBM     STOCK
It seems that either the JSON time format is wrong, or the number displayed by the query is an encoded date representation.

I found the answer here.
Format the output by entering the following command in the InfluxDB CLI:
precision rfc3339
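The large number is simply the point's timestamp in nanoseconds since the Unix epoch, which is InfluxDB's default precision. As a sanity check, the value from the query output above can be converted back to a readable date in Python:

```python
from datetime import datetime, timezone

# 1504798500000000000 is the `time` value shown in the query output;
# InfluxDB returns nanoseconds since the Unix epoch by default.
ns = 1504798500000000000
dt = datetime.fromtimestamp(ns // 10**9, tz=timezone.utc)
print(dt.isoformat())  # 2017-09-07T15:35:00+00:00
```

This matches the datetime that was written, confirming the data itself is stored correctly and only the display precision differs.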

Related

What is the format of this hex timestamp from the Amazon SES message ID?

Amazon SES message IDs are in the following format:
01020170c41acd6e-89acae55-6245-4d89-86ca-0a177e59e737-000000
This seems to consist of 3 distinct parts:
1. 01020170c41acd6e appears to be some sort of hex timestamp. The difference between two timestamps is the time elapsed in milliseconds, but it doesn't seem to begin at the epoch.
2. c2daf94a-f258-4d59-8fdb-a5512d4c7638 is clearly a standard version 4 UUID.
3. 000000 remains the same for the first sending; I assume it is incremented for redelivery attempts.
I have a need to generate a 'fake' message ID in some scenarios. It is trivial to fake parts 2 and 3 above; however, I cannot deduce the format of the timestamp. Here are some further examples with corresponding approximate times:
01020170c450e280 - Mar 10, 2020 at 12:00:00.190
01020170c44c2e6a - Mar 10, 2020 at 11:54:51.987
01020170c0e30119 - Mar 09, 2020 at 20:01:07.407
What format is this timestamp?
Taking your first example of 01020170c450e280, the string can be split into 01020 and 170c450e280.
170c450e280 hex == 1583841600128 dec == 2020-03-10T12:00:00.128Z.
However, I'm afraid that the 01020 prefix remains a mystery to me.
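Based on that split, here is a small Python sketch that treats the last 11 hex digits as a millisecond Unix timestamp. The leading 01020 is simply ignored, since its meaning is unknown:

```python
from datetime import datetime, timezone

def ses_prefix_to_datetime(prefix):
    """Decode the 16-hex-digit first part of an SES message ID.

    The last 11 hex digits appear to be milliseconds since the Unix
    epoch; the leading '01020' is skipped, as its meaning is unknown.
    """
    ms = int(prefix[5:], 16)
    return datetime.fromtimestamp(ms / 1000.0, tz=timezone.utc)

print(ses_prefix_to_datetime("01020170c450e280"))
# 2020-03-10 12:00:00.128000+00:00
```

This reproduces the 2020-03-10T12:00:00.128Z value derived above; the other two examples decode to times close to their stated approximate times as well.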

R - how to exclude pennystocks from environment before calculating adjusted stock returns

Within my current research I'm trying to find out how big the impact of ad-hoc sentiment on daily stock returns is.
The calculations worked quite well and the results are plausible.
The calculations so far, using the quantmod package and Yahoo financial data, look like this:
getSymbols(c("^CDAXX", Symbols), env = myenviron, src = "yahoo",
           from = as.Date("2007-01-02"), to = as.Date("2016-12-30"))
Returns <- eapply(myenviron, function(s) ROC(Ad(s), type = "discrete"))
ReturnsDF <- as.data.table(do.call(merge.xts, Returns))
# adjust column names
colnames(ReturnsDF) <- gsub(".Adjusted", "", colnames(ReturnsDF))
However, to make it more robust against the noisy influence of pennystock data, I wonder how it is possible to exclude stocks that at any point in the period drop below a certain value x, let's say 1€.
I guess the best approach would be to exclude them before calculating the returns and merging the xts objects, or even better, before downloading them with the getSymbols command.
Does anybody have an idea how this could work best? Thanks in advance.
Try this:
1. Build a price frame of the adjusted closing prices of your symbols. (I use the PF function of the quantmod add-on package qmao, which has lots of other useful functions for this type of analysis: install.packages("qmao", repos="http://R-Forge.R-project.org"))
2. Check by column whether any price is below your minimum trigger price.
3. Select only the columns which have no closes below the trigger price.
To stay more flexible, I would suggest taking a sub-period: let's say no price below 5 during the last 21 trading days. The toy example below illustrates the point.
I use AAPL, FB and MSFT as the symbol universe.
> symbols <- c('AAPL','MSFT','FB')
> getSymbols(symbols, from='2018-02-01')
[1] "AAPL" "MSFT" "FB"
> prices <- PF(symbols, silent = TRUE)
> prices
AAPL MSFT FB
2018-02-01 167.0987 93.81929 193.09
2018-02-02 159.8483 91.35088 190.28
2018-02-05 155.8546 87.58855 181.26
2018-02-06 162.3680 90.90299 185.31
2018-02-07 158.8922 89.19102 180.18
2018-02-08 154.5200 84.61253 171.58
2018-02-09 156.4100 87.76771 176.11
2018-02-12 162.7100 88.71327 176.41
2018-02-13 164.3400 89.41000 173.15
2018-02-14 167.3700 90.81000 179.52
2018-02-15 172.9900 92.66000 179.96
2018-02-16 172.4300 92.00000 177.36
2018-02-20 171.8500 92.72000 176.01
2018-02-21 171.0700 91.49000 177.91
2018-02-22 172.5000 91.73000 178.99
2018-02-23 175.5000 94.06000 183.29
2018-02-26 178.9700 95.42000 184.93
2018-02-27 178.3900 94.20000 181.46
2018-02-28 178.1200 93.77000 178.32
2018-03-01 175.0000 92.85000 175.94
2018-03-02 176.2100 93.05000 176.62
Let's assume you would like any instrument which traded below 175.40 during the last 6 trading days to be excluded from your analysis :-) .
As you can see, that should exclude AAPL and MSFT.
apply and the base function any applied(!) to a 6-day subset of prices give us exactly what we want. Showing the last 3 days of prices for the instruments that survive the filter:
> tail(prices[,apply(tail(prices),2, function(x) any(x < 175.4)) == FALSE],3)
FB
2018-02-28 178.32
2018-03-01 175.94
2018-03-02 176.62

timestamps seem to allow 0 seconds duration for some words in results, is this a bug?

When using the Google Cloud Speech API, the new word-accurate timestamp/timecode feature seems to allow a duration of 0 seconds for some words in the results. Here is an example:
...
{ startTime: '48.800s', endTime: '48.800s', word: 'a' },
{ startTime: '48.800s', endTime: '49.200s', word: 'kindly' },
...
is this a bug?
To test I used a clip from audio archive "Arthur the Rat", "USA - General mid-western speaker (Michigan)".
You can get better-than-second precision using the returned timestamp.
You get the start time out of the structure containing the word, and you can output it in the following way:
start_time.seconds + start_time.nanos * 1e-9
David Anderson's answer is correct; I just thought I'd elaborate on it, as I initially thought the response was only to second precision, not the 100 ms the docs describe.
As of July 2018, sending a request to the google cloud speech API including word time offsets returns a response object where each word result in response.results has the structure:
start_time {
seconds: 24
nanos: 100000000
}
end_time {
seconds: 24
nanos: 700000000
}
word: "of"
The nanos field allows you to get the start and end times to 100 ms precision. So you can obtain the start and end times like so:
print(start_time.seconds + start_time.nanos * 1e-9)
print(end_time.seconds + end_time.nanos * 1e-9)
==== Output ====
24.1
24.7
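The combination above can be wrapped in a tiny helper. The only assumption is that each timestamp object exposes integer seconds and nanos fields, as in the structure shown; the sample values below are hard-coded stand-ins, not fetched from the API:

```python
from types import SimpleNamespace

def to_seconds(ts):
    # Combine the integer `seconds` and `nanos` fields into one float value.
    return ts.seconds + ts.nanos * 1e-9

# Stand-ins shaped like the word result's start_time / end_time above.
start_time = SimpleNamespace(seconds=24, nanos=100000000)
end_time = SimpleNamespace(seconds=24, nanos=700000000)

print(to_seconds(start_time))  # 24.1
print(to_seconds(end_time))    # 24.7
```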

Baffled: SPSS (2265) Unrecognized or invalid variable format

A year ago, we analyzed some data with 100+ variables on 5 lines with SPSS 22. We used the GUI and laboriously entered variable names and output formats. This year, after a mandatory upgrade, we are using SPSS 23. We have similar data and want to use a syntax file instead. We copied the GET DATA output from last year, made a few changes, and ran it. No deal. We get the notorious and almost completely unhelpful error message in the title. (It continues: "The format is invalid. For numeric formats, the width or decimals value may be invalid." No line number, no indication of the problem.)
We are not using big numbers. We are not using macros, as in this SO question. We tried replacing F1.0 with N1. There are no commas in the file (hence no F3,1-like typos). I have searched the web. Does anyone know what else the problem might be?
The failing GET DATA statement, with filename and middle elided:
GET DATA /TYPE=TXT
/FILE="E: ... .txt"
/ENCODING='UTF8'
/DELCASE=VARIABLES 123
/DELIMITERS="\t"
/ARRANGEMENT=DELIMITED
/FIRSTCASE=1
/IMPORTCASE=ALL
/VARIABLES=
ID A4
Group A2
Quality A2
V4 A5
oarea F4.1
oallarea F4.1
olthmean F5.3
olthmax F5.3
...
x N1
o N1
S N1
Z N1
w N1.
F5.5 was not valid. After fixing that, the program ran.
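In an Fw.d spec the decimals have to fit inside the total width, so F5.5 leaves no room for the decimal point. A hypothetical Python helper that scans a /VARIABLES block for this particular mistake (it only catches the d >= w case, like F5.5; SPSS may reject other formats for other reasons):

```python
import re

def suspect_formats(variables_block):
    """Flag Fw.d specs whose decimals cannot fit in the width.

    Only catches d >= w (like F5.5); SPSS may reject other
    formats for other reasons.
    """
    suspects = []
    for name, w, d in re.findall(r"(\w+)\s+F(\d+)\.(\d+)", variables_block):
        if int(d) >= int(w):  # no room left for the decimal point
            suspects.append((name, "F{}.{}".format(w, d)))
    return suspects

print(suspect_formats("olthmean F5.3\nbadvar F5.5"))
# [('badvar', 'F5.5')]
```

Running something like this over the syntax file would have pointed at the offending line, which the SPSS error message does not.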

Decode GPS Data - No idea what the format is

I have a GPS tracker that a friend lent me. It's a Chinese model with sparse documentation.
It has a built-in GPS and a GPRS module (SIM), and it sends my data to a particular IP address.
I can't figure out what all the numbers mean. I got latitude and longitude thanks to the N and E, but I'm not sure about the rest.
Here's an extract from my log:
4/28/2011 6:48:01 PM (001__450BP00BP05000001__450BP00110428A2451.6491N06700.6385E000.013474342.72000000000L0001ADFE)
4/28/2011 6:48:18 PM (001__450BP00BP05000001__450BP00110428A2451.6491N06700.6385E000.013480942.72000000000L0001ADFE)
4/28/2011 6:49:23 PM (001__450BP00BP05000001__450BP00110428A2451.6491N06700.6385E000.013490942.72000000000L0001ADFE)
4/28/2011 6:50:33 PM (001__450BP00BP05000001__450BP00110428A2451.6362N06700.6297E000.0135016198.8300000000L0001ADFE)
4/28/2011 6:51:39 PM (001__450BP00BP05000001__450BP00110428A2451.5203N06700.5738E000.0135114135.3800000000L0001AEFF)
4/28/2011 6:51:42 PM (001__450BP00BR02110428V2451.4962N06700.5942E000.0135133143.7700000000L0001AF23)
Note: the exact string from the tracker is stored within the round brackets (...)
I included the dates and times because they may help decode the data if the tracker reports UTC time or something. I didn't see anything matching the time signature, though.
It would help if you posted some more information (any serial numbers or other text on the device).
However, the messages look like GPS518.
I'm mostly guessing, but if I deconstruct the first line, I think this is the meaning:
Request
001 : ?
450 : deviceid
BP00 : handshake
BP05 : command
000001 : ?
Response
450 : device id
BP00 : command
110428 : date (format yymmdd)
A : ? (possibly a fix-validity flag)
2451.6491N : Latitude
06700.6385E : Longitude
000.0 : Speed (format nnn.n)
134743 : Time (format hhmmss, UTC). Comparing with the log timestamps, you are probably in GMT+5.
42.720 : Heading/Bearing (?)
00000000L : Elevation
0001ADFE : ?
There's a discussion here that might be of interest:
http://sourceforge.net/projects/opengts/forums/forum/579834/topic/3871481
After some googling, I found this. It seems to generate messages in roughly the same format as the ones you are receiving:
http://kmmk.googlecode.com/svn/trunk/kmmk/src/com/gps/testmock/CommAdapterYD518.java
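Based on the field breakdown above, here is a hedged Python sketch that pulls out the fields whose positions seem unambiguous (date, validity flag, latitude, longitude, speed, time). The heading/elevation boundary is unclear, so that span is kept as a raw tail, and all field meanings are guesses:

```python
import re

# Field layout guessed from the breakdown above; meanings are not confirmed.
MSG = re.compile(
    r"(?P<device>\d+)BP00"          # device id + command
    r"(?P<date>\d{6})"              # yymmdd
    r"(?P<status>[AV])"             # NMEA-style: A = valid fix, V = void
    r"(?P<lat>\d{4}\.\d{4})(?P<ns>[NS])"
    r"(?P<lon>\d{5}\.\d{4})(?P<ew>[EW])"
    r"(?P<speed>\d{3}\.\d)"
    r"(?P<time>\d{6})"              # hhmmss, apparently UTC
    r"(?P<tail>.*)"                 # heading/elevation/checksum; boundary unclear
)

def ddmm_to_degrees(value, hemisphere):
    # NMEA-style ddmm.mmmm (or dddmm.mmmm) to signed decimal degrees.
    degrees = int(float(value)) // 100
    minutes = float(value) - degrees * 100
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in "SW" else decimal

raw = ("001__450BP00BP05000001__450BP00110428A2451.6491N"
       "06700.6385E000.013474342.72000000000L0001ADFE")
m = MSG.search(raw)
print(m.group("date"), m.group("time"))                          # 110428 134743
print(round(ddmm_to_degrees(m.group("lat"), m.group("ns")), 5))  # 24.86082
print(round(ddmm_to_degrees(m.group("lon"), m.group("ew")), 5))  # 67.01064
```

The decoded position (about 24.86 N, 67.01 E) and the hhmmss time are consistent with the log lines above; the tail group would need more protocol knowledge to split reliably.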
You can listen to the GPS data and parse it.
Please check the following link for more information:
https://github.com/anupama513/Tk102-gps-data-parser-nodejs-server
This is a Node.js server that:
listens continuously to the GPS data
parses the GPRMC data
can store the data in a database
can also post the data to another web server/socket
The parsing logic might be slightly different, but most of the data matches.
