Writing to InfluxDB fails due to timestamp problem - influxdb

I want to write a data point to my influxdb via bash shell but the timestamp seems to cause problems.
root#server:UP [~]# curl -i -XPOST 'http://localhost:8086/write?db=tbr' --data-binary 'codenarc maxPriority2Violations=917 maxPriority3Violations=3336 1593122400000000000'
HTTP/1.1 400 Bad Request
Content-Type: application/json
Request-Id: 03666c3f-b7b5-11ea-8659-02420a0a1b02
X-Influxdb-Build: OSS
X-Influxdb-Error: unable to parse 'codenarc maxPriority2Violations=917 maxPriority3Violations=3336 1593122400000000000': bad timestamp
X-Influxdb-Version: 1.7.10
X-Request-Id: 03666c3f-b7b5-11ea-8659-02420a0a1b02
Date: Fri, 26 Jun 2020 13:57:46 GMT
Content-Length: 129
{"error":"unable to parse 'codenarc maxPriority2Violations=917 maxPriority3Violations=3336 1593122400000000000': bad timestamp"}
This is how I created the timestamp in the first place
def date = LocalDateTime.of(2020, Month.JUNE, 26, 0, 0, 0)
def ms = date.atZone(ZoneId.systemDefault()).toInstant().toEpochMilli()
TimeUnit.NANOSECONDS.convert(ms, TimeUnit.MILLISECONDS)
So the timestamp is supposed to be in ns and I ensured that. Why is influxdb giving me that error message? What's wrong with that timestamp?
Thanks!

It's not a timestamp issue. In InfluxDB line protocol, the fields must be separated by commas with no spaces; the first space after the field set marks the start of the timestamp, so your second field is being parsed as part of a bad timestamp. Here is your curl request with the fields comma-separated; try now.
curl -i -XPOST 'http://localhost:8086/write?db=tbr' --data-binary 'codenarc maxPriority2Violations=917,maxPriority3Violations=3336 1593122400000000000'
One more example:
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server02,region=us-west value=0.55 1422568543702900257'
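For completeness, the whole point can be built and checked directly in bash (a sketch assuming GNU date; the timestamp value depends on your local timezone, which matches the Groovy code's ZoneId.systemDefault()):

```shell
# Nanosecond epoch timestamp for 2020-06-26 00:00:00 local time (GNU date).
ts=$(date -d '2020-06-26 00:00:00' +%s%N)

# Valid line protocol: measurement, then comma-separated fields with no
# spaces, then a single space and the nanosecond timestamp.
point="codenarc maxPriority2Violations=917,maxPriority3Violations=3336 ${ts}"
echo "$point"

# Then send it, e.g.:
# curl -i -XPOST 'http://localhost:8086/write?db=tbr' --data-binary "$point"
```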


Automatically allow-unauthenticated users to cloud run instances

Hi, I am developing a solution that creates and deploys services to Google Cloud Run using its REST API, authenticating with OAuth as a service account created for that purpose.
I am stuck at making the created services publicly available.
I was unable to find an API parameter corresponding to gcloud's --allow-unauthenticated flag.
The only way I found is to manually add allUsers as Cloud Run Invoker on each service I want publicly reachable. But I would like all the services from that service account to be automatically reachable publicly.
I would like to know if there is a better (more automatic) way to achieve this.
Thanks in advance.
Firstly, you can't do this in only one command. You have to deploy the service and then grant allUsers on it. The CLI does these 2 steps conveniently for you.
Anyway, when you are stuck like this, there is a useful trick: add --log-http to your gcloud command. That way, you will see all the HTTP API calls performed by the CLI.
If you do this when you deploy a new Cloud Run service, you will get tons of lines and, at some point, you will see this
==== request start ====
uri: https://run.googleapis.com/v1/projects/gbl-imt-homerider-basguillaueb/locations/us-central1/services/predict2:setIamPolicy?alt=json
method: POST
== headers start ==
b'Authorization': --- Token Redacted ---
b'X-Goog-User-Project': b'gbl-imt-homerider-basguillaueb'
b'accept': b'application/json'
b'accept-encoding': b'gzip, deflate'
b'content-length': b'98'
b'content-type': b'application/json'
b'user-agent': b'google-cloud-sdk gcloud/299.0.0 command/gcloud.run.deploy invocation-id/61070d063a604fdda8e87ad63777e3ae environment/devshell environment-version/None interactive/True from-script/False python/3.7.3 term/screen (Linux 4.19.112+
)'
== headers end ==
== body start ==
{"policy": {"bindings": [{"members": ["allUsers"], "role": "roles/run.invoker"}], "etag": "ACAB"}}
== body end ==
---- response start ----
status: 200
-- headers start --
-content-encoding: gzip
cache-control: private
content-length: 159
content-type: application/json; charset=UTF-8
date: Wed, 08 Jul 2020 11:37:11 GMT
server: ESF
transfer-encoding: chunked
vary: Origin, X-Origin, Referer
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 0
-- headers end --
-- body start --
{
"version": 1,
"etag": "BwWp7IdZGHs=",
"bindings": [
{
"role": "roles/run.invoker",
"members": [
"allUsers"
]
}
]
}
So, it's an additional API call that the CLI performs for you. You can find the API definition here.
If you want to perform the call manually, you can make a request like this
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "content-type: application/json" -X POST \
-d '{"policy": {"bindings": [{"members": ["allUsers"], "role": "roles/run.invoker"}]}}' \
"https://run.googleapis.com/v1/projects/<PROJECT_ID>/locations/<REGION>/services/<SERVICE_NAME>:setIamPolicy"
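If you would rather stay with the CLI, a small loop can apply the binding to every existing service in a region. This is only a sketch (the region is a placeholder), built on the documented gcloud run services list and add-iam-policy-binding commands:

```shell
# Grant allUsers the run.invoker role on every Cloud Run service in a
# region, making them all publicly reachable. REGION is a placeholder.
REGION=us-central1
for svc in $(gcloud run services list --platform=managed \
    --region="$REGION" --format='value(metadata.name)'); do
  gcloud run services add-iam-policy-binding "$svc" \
    --platform=managed --region="$REGION" \
    --member='allUsers' --role='roles/run.invoker'
done
```

Note this only covers services that already exist; a newly deployed service still needs the binding applied afterwards.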

How to decode this URL, it's pretty complicated

I am in a jam here: I am trying to decode this URL,
http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z%2Fp3MrqycripNTi4JXVyp3h3N2r3sqo1N4%3D
This takes you to the
http://www.vidxden.com/ce8mfl8kd6oy
I have run base64 decoding on "3uXn46ChlNzr7Z%2Fp3MrqycripNTi4JXVyp3h3N2r3sqo1N4%3D" and it comes out as gibberish.
I am an iMacros coder and need to decode this string for my client. Please help me out of this jam.
Regards
Ram
It doesn't decode to that URL, it decodes to a URL that when downloaded will redirect to your target:
$ python3 -c "import urllib.parse; print(urllib.parse.unquote('http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4='))"
http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4=
$ curl -sS http://www.fastpasstv.ms/redirect/?url=3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4= -D /dev/tty
HTTP/1.1 302 Moved Temporarily
Server: nginx/1.0.11
Content-Type: text/html
Location: http://www.vidxden.com/ce8mfl8kd6oy
Content-Length: 0
Accept-Ranges: bytes
Date: Tue, 05 Jun 2012 22:54:31 GMT
Connection: keep-alive
Note the 302 status and the Location header.
I was curious, and did some experiments.
Feeding http://www.insidepro.com/hashes.php?lang=eng with http://www.vidxden.com/ce8mfl8kd6oy or www.vidxden.com/ce8mfl8kd6oy does not produce the URL to interpret.
Using http://code.google.com/p/hash-identifier/ on 3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4= or 3uXn46ChlNzr7Z%2Fp3MrqycripNTi4JXVyp3h3N2r3sqo1N4%3D does not give any result either.
However: 3uXn46ChlNzr7Z seems to be used repeatedly by fastpasstv (google for it), and the rest (p3MrqycripNTi4JXVyp3h3N2r3sqo1N4) is flagged as a possible MD5 hash by Hash ID.
You will not easily reverse an MD5 hash...
So, it seems you are out of luck decrypting this. Just follow the redirect...
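A quick offline sanity check (assuming GNU coreutils base64) backs up the asker's observation: the token is well-formed base64, but it decodes to 35 opaque bytes rather than readable text, consistent with some extra obfuscation layered on top of the base64 encoding:

```shell
# The token decodes cleanly (48 base64 chars, one pad byte -> 35 bytes),
# but the result is binary gibberish, not a URL.
echo '3uXn46ChlNzr7Z/p3MrqycripNTi4JXVyp3h3N2r3sqo1N4=' | base64 -d | wc -c
```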

Download link (via API) to publicly accessible file on google docs returns 401

I've uploaded a (pgp) file via the documents API, and changed its
visibility to public. However, I'm unable to download it publicly
using the contents link for that file.
Here are the relevant bits of the xml for the meta-data for the file in
question.
$ curl -H "GData-Version: 3.0" -H "Authorization: Bearer ..." https://docs.google.com/feeds/default/private/full
...
<content type="application/pgp-encrypted" src="https://doc-0c-c0-docs.googleusercontent.com/docs/securesc/tkl8gnmcm9fhm6fec3160bcgajgf0i18/opa6m1tmj5cufpvrj89bv4dt0q6696a4/1336514400000/04627947781497054983/04627947781497054983/0B_-KWHz80dDXZ2dYdEZ0dGw3akE?h=16653014193614665626&e=download&gd=true"/>
...
<gd:feedLink rel="http://schemas.google.com/acl/2007#accessControlList" href="https://docs.google.com/feeds/default/private/full/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE/acl"/>
$ curl -H "GData-Version: 3.0" -H "Authorization: Bearer ..." https://docs.google.com/feeds/default/private/full/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE/acl
...
<entry gd:etag="W/"DUcNRns4eCt7ImA9WhVVFUw."">
<id>https://docs.google.com/feeds/id/file%3A0B_-KWHz80dDXZ2dYdEZ0dGw3akE/acl/default</id>
...
<gAcl:role value="reader"/>
<gAcl:scope type="default"/>
...
The role/scope returned for the file in question is reader/default, indicating
it is public. (It also shows up with public shared access in the web UI.)
However, accessing
the src attribute in the content element results in:
$ curl --verbose 'https://doc-0c-c0-docs.googleusercontent.com/docs/securesc/tkl8gnmcm9fhm6fec3160bcgajgf0i18/opa6m1tmj5cufpvrj89bv4dt0q6696a4/1336514400000/04627947781497054983/04627947781497054983/0B_-KWHz80dDXZ2dYdEZ0dGw3akE?h=16653014193614665626&e=download&gd=true'
< HTTP/1.1 401 Unauthorized
< Server: HTTP Upload Server Built on May 7 2012 18:16:42 (1336439802)
< WWW-Authenticate: GoogleLogin realm="http://www.google.com/accounts"
< Date: Tue, 08 May 2012 22:48:37 GMT
< Expires: Tue, 08 May 2012 22:48:37 GMT
< Cache-Control: private, max-age=0
< Content-Length: 0
< Content-Type: text/html
It seems like you are trying to publish a document: https://developers.google.com/google-apps/documents-list/#publishing_documents_by_publishing_a_single_revision
Once you publish it, the link with rel set to "http://schemas.google.com/docs/2007#publish" will point to the published document on the web.

POST with curl according to a particular format

I'm attempting to gather XML data automatically using curl, and my command so far is
curl -E keyStore.pem -d 'uid=myusername&password=mypassword&active=y&type=F' 'https://www.aidaptest.naimes.faa.gov/aidap/XmlNotamServlet HTTP/1.1/' -k
but it keeps on giving me a "Your browser sent a query this server could not understand." error.
I'm pretty sure that it's connecting since it's not rejecting me, but I don't know how to properly format the POST. Here's some documentation they gave me for the format of the POST request.
POST <URL>/aidap/XmlNotamServlet HTTP/1.1
Content-type: application/x-www-form-urlencoded
Content-length: <input_parameter’s length>
<a blank line>
<input_parameter>
The input_parameter is the uid, password, and location_id part, and it is correct.
Am I doing this correctly from what you can see?
Something like this should do it.
curl -E keyStore.pem -v -X POST -i \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  -d 'uid=myusername&password=mypassword&active=y&type=F' \
  'https://www.aidaptest.naimes.faa.gov/aidap/XmlNotamServlet'
I don't think you need HTTP/1.1 at the end of the URL in the command line. And even if you needed it, you certainly don't need the final / character before the closing single quote.
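A shorter equivalent is also possible: when curl is given -d, it already uses the POST method and sets Content-Type: application/x-www-form-urlencoded (and computes Content-Length), so neither needs to be spelled out. A sketch, where keyStore.pem and the credentials are placeholders from the question:

```shell
# -d implies POST with Content-Type: application/x-www-form-urlencoded,
# matching the server's documented request format.
curl -E keyStore.pem -i \
  -d 'uid=myusername&password=mypassword&active=y&type=F' \
  'https://www.aidaptest.naimes.faa.gov/aidap/XmlNotamServlet'
```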

Using wget to do monitoring probes

Before I bang my head against all the issues myself I thought I'd run it by you guys and see if you could point me somewhere or pass along some tips.
I'm writing a really basic monitoring script to make sure some of my web applications are alive and answering. I'll fire it off out of cron and send alert emails if there's a problem.
So what I'm looking for are suggestions on what to watch out for. Grepping the output of wget will probably get me by, but I was wondering if there was a more programmatic way to get robust status information out of wget and my resulting web page.
This is a general kind of question, I'm just looking for tips from anybody who happens to have done this kind of thing before.
Check the exit code,
wget -q -O /dev/null --timeout=10 http://example.com/mypage   # plus whatever other flags you need
if [ $? -ne 0 ] ; then
    echo "there's a problem"   # mail logs, send sms, etc.
fi
I prefer curl --head for this type of usage:
% curl --head http://stackoverflow.com/
HTTP/1.1 200 OK
Cache-Control: public, max-age=60
Content-Length: 359440
Content-Type: text/html; charset=utf-8
Expires: Tue, 05 Oct 2010 19:06:52 GMT
Last-Modified: Tue, 05 Oct 2010 19:05:52 GMT
Vary: *
Date: Tue, 05 Oct 2010 19:05:51 GMT
This will allow you to check the return status to make sure it's 200 (or whatever you're expecting it to be) and the content-length to make sure it's the expected value (or at least not zero.) And it will exit non-zero if there's any problem with the connection.
If you want to check for changes in the page content, pipe the output through md5sum and then compare the result to a pre-computed known value:
wget -O - http://stackoverflow.com | md5sum
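The two approaches can be combined into one cron-able probe. This is only a sketch: the URL is a placeholder, the alerting command is left as a comment, and curl's -w '%{http_code}' prints 000 when the connection itself fails, which lets the script distinguish that case from an unexpected HTTP status:

```shell
#!/bin/sh
# Basic liveness probe: alert on connection failure (status 000) or on
# any HTTP status other than 200. URL is a placeholder.
url='http://example.com/mypage'
status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$url")
if [ "$status" != "200" ]; then
    echo "probe failed for $url (status $status)"   # or: | mail -s alert you@example.com
fi
```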
