Inserting measurements through shell script and curl failing - InfluxDB

curl -i -XPOST 'http://127.0.0.1:8086/write?db=test' --data-binary $"inspec_reporting,profile=$profile success_control=$success_control,failure_control=$failure_control,skip_control=$skip_control,success_summary=$success_summary,failure_summary=$failure_summary,skip_summary=$skip_summary"
The above command is not working. It fails because special characters are being added to the data. When I print the variables, I do not see those special characters; they appear only when the data goes through curl and InfluxDB.
HTTP/1.1 400 Bad Request
Content-Type: application/json
Request-Id: b60dd70d-ac82-11e9-9742-000000000000
X-Influxdb-Build: OSS
X-Influxdb-Error: unable to parse '$'inspec_reporting,profile=rhel7_enhanced_profile success_control=65,failure_control=72,skip_control=1,success_summary=252,failure_summary=211,skip_summary=1'': invalid boolean
X-Influxdb-Version: 1.6.2
X-Request-Id: b60dd70d-ac82-11e9-9742-000000000000
Date: Mon, 22 Jul 2019 13:15:01 GMT
Content-Length: 302
{"error":"unable to parse
'$'inspec_reporting,profile=rhel7_enhanced_profile
success_control=\u001b[38;5;41m65,failure_control=\u001b[38;5;9m72,skip_control=\u001b[38;5;247m1,success_summary=\u001b[38;5;41m252,failure_summary=\u001b[38;5;9m211,skip_summary=\u001b[38;5;247m1'':
invalid boolean"}
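The \u001b[38;5;41m sequences in the error are ANSI color escape codes, so the variables were most likely captured from colorized command output, and the stray $' wrapped around the data suggests the $"..." / $'...' quoting was passed through literally by a non-bash shell. A minimal sketch, assuming the variable names above, that strips the color codes and uses plain double quotes:
# strip_ansi removes ANSI color escapes; \x1b is the ESC character (GNU sed syntax)
strip_ansi() { printf '%s' "$1" | sed 's/\x1b\[[0-9;]*m//g'; }
success_control="$(strip_ansi "$success_control")"
failure_control="$(strip_ansi "$failure_control")"
skip_control="$(strip_ansi "$skip_control")"
success_summary="$(strip_ansi "$success_summary")"
failure_summary="$(strip_ansi "$failure_summary")"
skip_summary="$(strip_ansi "$skip_summary")"
curl -i -XPOST 'http://127.0.0.1:8086/write?db=test' --data-binary \
  "inspec_reporting,profile=$profile success_control=$success_control,failure_control=$failure_control,skip_control=$skip_control,success_summary=$success_summary,failure_summary=$failure_summary,skip_summary=$skip_summary"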

Related

TwiML containing S3 signed URL gives error in voice call

I am trying to invoke the following:
twilio api:core:calls:create \
--method GET \
--url https://xxxxxxxxx.execute-api.us-east-1.amazonaws.com/play?job=1589297170910&record=2&username=992b512f-130d-4da6-a9d3-a1a4227f82f5 \
--to +19995551212 \
--from +12345678901
SID From To Status Start Time
CA283a5deadbeefcafe0c89e861d +12345678901 +19995551212 queued null
The endpoint in the --url parameter above returns TwiML, with a response like this:
HTTP/1.1 200 OK
Apigw-Requestid: M8x5Wgf9IAMEVmg=
Connection: keep-alive
Content-Length: 1154
Content-Type: text/xml
Date: Fri, 22 May 2020 20:04:12 GMT
<?xml version="1.0" encoding="UTF-8"?>
<Response>
<Play>https://s3.amazonaws.com/<my bucket name>/private/jobs/992b512f-130d-4da6-a9d3-a1a4227f82f5/1589297170910/2.mp3?AWSAccessKeyId=ASIA23STQFMZWXGL6GU2&Signature=3trMYp%2Fzc6ZV4FNRUc6%2B2Exen3k%3D&x-amz-security-token=IQoJb3JpZ2luX2VjEPT%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJHMEUCIQDJyUkf%2FBrHEVBl2k3rKHpaAwyaObsObFqTxp53P%2FKghQIgHPH8Idf9cGZ4XR9zxbs%2FgbEuPmeOPO3%2FbNQQP%2F6LOf8q8QEITRAAGgw3NDY0MjU2OTA5MzEiDBK45eAIlYDHS04WHirOASI41mSxg6kEefiyQkZ969RopMhCFBdsXrUZWefUHrRqkFL209n%2BNLV0gKhAmyG8vvlRON74Zy3J05aIQ79%2BxFwYfKq9HLhvFskU%2B58Q8QmZlZtiPQ0KSGI2OuMceXaroRlVdfEBUJgMwR0EoXYGbf9XlXLgbK8%2BpLLtQ7MNAE4bTNE1%2FccQgq33s1wZfKyUKQGjeZkZEU2ISvDCvvUTsRgLMT9zM1thLszgm7eoaKv%2BdnfeFTKAEQDNaIFtGUwAihm5yaW6XphY8sUtccJoMLvgoPYFOuABojlUjGBEbxcXkk6nIMs6f1KYxc6USarhca13DgbrGnTdGG0CeD3KW9OByw2Cv6A7gyfAgAjSBzDyfC%2FScaYs6i4WdnZNO190d%2F3PoPMnL2kcxqRiWDo9lVXqGa03RekFKWgJGxxZ2nUXffBw9twDmZ%2BElVOZv2M2lhxOR8f06JbX3BtP0%2BE5RNxpRx0HUxeakZzrOcSqpS9OEESYB0E4UtOzrSqPJ0K7V%2B%2FhOldIoAyv%2Bdce1TZgrjgyMMjxemxQeKrtW7RSlXLh2S3SGtN7O2eg06h4YkoikzSWXsmfAOw%3D&Expires=1590782652</Play>
</Response>
The URL in the <Play> element above is an Amazon S3 signed URL for an MP3 file. It works fine if played directly in the browser. It also has the proper MIME type for an MP3 file set (audio/mpeg):
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 19010
Content-Type: audio/mpeg
Date: Fri, 22 May 2020 20:08:52 GMT
ETag: "68e63d2845abc6ed528445ab22de68f6"
Last-Modified: Fri, 22 May 2020 19:32:48 GMT
Server: AmazonS3
x-amz-id-2: M/ic62y1nbUEn4PA1THXqx4rdEpKV70C8L6EifAlREOnf7CaG+frpICoaStqn9fr4T9saEJu9qk=
x-amz-request-id: 118DFC99C7EA2F66
[...binary data truncated...]
When I receive my phone call, the message played is "We are sorry, an application error has occurred." I can't find any more debug info.
What am I doing wrong here? Is there an issue with using query strings or URL-encoded strings?
It turned out I had failed to XML-encode my TwiML <Play> contents. A simple string replace from & to &amp; fixed it.
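For reference, a minimal shell sketch of that fix; generate_signed_url is a hypothetical stand-in for however the signed URL is produced:
signed_url="$(generate_signed_url)"                               # hypothetical helper
escaped_url="$(printf '%s' "$signed_url" | sed 's/&/\&amp;/g')"   # \& keeps a literal & in sed's replacement
cat <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Play>${escaped_url}</Play>
</Response>
EOF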

How to make sprockets (or rack, or nginx?) motivate browsers to cache fonts and correctly return 304?

In a Rails 6 app with Webpacker replaced by Sprockets, I cannot get Sprockets to make my browser cache fonts. Edit: my browser does cache the font, but Google complains, and curl shows how the app responds (not with a 304 as expected, see below).
Update
It seems that a 304 is only returned when you tell the server (via the If-Modified-Since header) that you know exactly the last modified version. While Mozilla's dev resources do not clearly state that this should be the case (and I am not in an RFC-reading mood), it makes sense:
your server serves the asset on 2020-01-01 (abbreviated date for simplicity)
a browser visits you and stores the asset alongside its date
the next day the same browser revisits, asks the server for the asset and tells it the last known date (2020-01-01, via the If-Modified-Since header)
the server answers 304: you know that stuff already
the next day a mistake happens and a dev asset is served by the server
the browser revisits, gets the new (but wrong) asset with a Last-Modified date of 2020-01-03 and stores it alongside that date
the server admins remove the wrong dev asset
the next day, the browser visits and tells the server "I know the thing from yesterday"
the server tells the browser: no, forget that, the correct payload is this, and this is the timestamp: 2020-01-01.
In my tests below, I used If-Modified-Since headers that did not correspond to the last (production) asset timestamp. Thanks #bliof for helping to figure that out.
As my ultimate goal was to make Google's PageSpeed Insights happy (now that I know this 304 response works if all players behave well), I will follow the Rails 5+ path of config.public_file_server.headers (https://blog.bigbinary.com/2015/10/31/rails-5-allows-setting-custom-http-headers-for-assets.html). The Rails guides also point out how you would usually let your web server (or CDN) handle the situation (https://guides.rubyonrails.org/asset_pipeline.html#in-production), but my stack works somewhat differently.
Original follows
The fonts are in e.g. app/assets/fonts/OTF/SourceSansPro-BoldIt.otf and correctly put in public/assets/OTF/...fingerprint... (accompanied by a .gz variant). They are referenced via a SCSS font-face rule, pointing to a file with the respective fingerprint in it (using font-url()).
When curling these, I never seem to get an HTTP/1.1 304 Not Modified, but a 200 with the full payload. With the other (JS, CSS) assets it works as expected.
I did not modify config/initializers/assets.rb, as all the subdirectories and files should already be picked up (and the assets:precompile output and content of public/assets shows that it works).
Digging into the sprockets code at https://github.com/rails/sprockets/blob/9909da64595ddcfa1e7ee40ed1d99738961288ec/lib/sprockets/server.rb#L73 seems to indicate that maybe an ETag is not set correctly or something like that, but I do not really grok that code.
The application is deployed with dokku (basically a heroku) with a pretty standard nginx-configuration as far as I can tell: https://github.com/dokku/dokku/blob/master/plugins/nginx-vhosts/templates/nginx.conf.sigil . The app serves the assets itself (like in heroku).
What do I have to do such that sprockets adds the relevant headers / responds "correctly" with a 304? Any ideas how to debug that issue?
The relevant "debugging" parts
The initial request for CSS
curl -v https://...application-3d...c76c3.css \
-H 'Accept: text/css,*/*;q=0.1'\
-H 'Accept-Language: en-US,en;q=0.5'\
--compressed # omitted: ... User-Agent, DNT, ...
# omitted: TLS handshake etc
> GET /assets/application-3d...c76c3.css HTTP/1.1
> Host: #the host
> Accept-Encoding: deflate, gzip
> User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0
> Accept: text/css,*/*;q=0.1
> Accept-Language: en-US,en;q=0.5
> Referer: #the host
> DNT: 1
> Connection: keep-alive
> Cookie: #a cookie
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 21 Apr 2020 15:39:47 GMT
< Content-Type: text/css
< Content-Length: 41256
< Connection: keep-alive
< Last-Modified: Mon, 06 Apr 2020 11:59:56 GMT
< Content-Encoding: gzip
< Vary: Accept-Encoding
<
# payload
Subsequent fetch of CSS
(The relevant parts, other params and output omitted).
Note that a If-Modified-Since: Mon, 06 Apr 2020 11:59:56 GMT header is sent along.
curl -v 'https://.../assets/application-3d...c76c3.css' \
-H 'If-Modified-Since: Mon, 06 Apr 2020 11:59:56 GMT'\
-H 'Cache-Control: max-age=0'
> If-Modified-Since: Mon, 06 Apr 2020 11:59:56 GMT
> Cache-Control: max-age=0
>
< HTTP/1.1 304 Not Modified
< Server: nginx
< Date: Tue, 21 Apr 2020 15:50:52 GMT
< Connection: keep-alive
(That's what I want: a 304 Not Modified.)
The initial request for the font asset
curl -v 'https://.../assets/WOFF2/TTF/SourceSansPro-Light.ttf-32...d9.woff2' \
-H 'Accept: application/font-woff2;q=1.0,application/font-woff;q=0.9,*/*;q=0.8'\
-H 'Accept-Language: en-US,en;q=0.5'\
--compressed \
-H 'Referer: https://...assets/application-3d....c76c3.css'
# omitted: User-Agent, Cookies, ...
> GET /assets/WOFF2/TTF/SourceSansPro-Light.ttf-32...d9.woff2 HTTP/1.1
> Host: #the host
> Accept-Encoding: deflate, gzip
> User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0
> Accept: application/font-woff2;q=1.0,application/font-woff;q=0.9,*/*;q=0.8
> Accept-Language: en-US,en;q=0.5
> DNT: 1
> Connection: keep-alive
> Referer: https://.../assets/application-3d...c76c3.css
# cookie etc
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 21 Apr 2020 15:45:34 GMT
< Content-Type: application/font-woff2
< Content-Length: 88732
< Connection: keep-alive
< Last-Modified: Wed, 25 Mar 2020 20:09:14 GMT
<
# payload
Subsequent fetch of Font
curl -v 'https://.../assets/WOFF2/TTF/SourceSansPro-Light.ttf-32...ed9.woff2' \
-H 'Referer: https://.../assets/application-3d...c76c3.css'\
-H 'If-Modified-Since: Mon, 06 Apr 2020 11:59:56 GMT'
-H 'Cache-Control: max-age=0'
# ....
> If-Modified-Since: Mon, 06 Apr 2020 11:59:56 GMT
> Cache-Control: max-age=0
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 21 Apr 2020 15:53:46 GMT
< Content-Type: application/font-woff2
< Content-Length: 88732
< Connection: keep-alive
< Last-Modified: Wed, 25 Mar 2020 20:09:14 GMT
# payload
What I find interesting is that the server actually sends a Last-Modified that is well before the If-Modified-Since. I guess clever browsers stop the conversation there, but I really want to see a well-behaved 304.
Here are a few notes/findings:
It seems that it returns 304 when you match the timestamp.
In your example, if you do the curl to the font with
-H 'If-Modified-Since: Wed, 25 Mar 2020 20:09:14 GMT'
you'll get the HTTP/1.1 304 Not Modified.
The same applies to the .css: if you don't match the date exactly, you'll get a 200.
I've tried changing sprockets locally to add some puts calls, and also changing the default log level of sprockets itself, but nothing happens.
TBH, I don't believe Sprockets::Server#call is getting called.
I've tried with puma and with thin, both return 304 only when the dates match.
curl --compressed -H 'Cache-Control: max-age=0' -H 'If-Modified-Since: Thu, 23 Apr 2020 21:34:30 GMT' -v http://localhost:3000/assets/OTF/SpaceMeatball-d61519ff17fadd38b57e3698067894c0e75fcb6031ee91034f5f7d6f2daa4d4b.otf
> Cache-Control: max-age=0
> If-Modified-Since: Thu, 23 Apr 2020 21:34:30 GMT
>
< HTTP/1.1 200 OK
< Last-Modified: Thu, 23 Apr 2020 21:34:29 GMT
curl --compressed -H 'Cache-Control: max-age=0' -H 'If-Modified-Since: Thu, 23 Apr 2020 21:34:29 GMT' -v http://localhost:3000/assets/OTF/SpaceMeatball-d61519ff17fadd38b57e3698067894c0e75fcb6031ee91034f5f7d6f2daa4d4b.otf
> Cache-Control: max-age=0
> If-Modified-Since: Thu, 23 Apr 2020 21:34:29 GMT
>
< HTTP/1.1 304 Not Modified
I am running Rails like this:
RAILS_SERVE_STATIC_FILES=1 RAILS_ENV=production ./bin/rails s
or
RAILS_SERVE_STATIC_FILES=1 RAILS_ENV=production bundle exec thin start
TODO: find out what exactly is returning the response :)

Cannot make get request after acquiring Bearer token - 401 error "insufficient scope"

I have the following requests to get a bearer token from the Docker Hub API and then use that token for a GET request to the given endpoint:
#!/usr/bin/env bash
raw_token="$(curl https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/ubuntu:list)";
token="$(node -pe "JSON.parse('$raw_token').token")"
curl -i -H "Authorization: Bearer $token" https://registry-1.docker.io/v2/library/ubuntu/tags/list
If I run that, I get this error:
HTTP/1.1 401 Unauthorized
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
Www-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io",scope="repository:library/ubuntu:pull",error="insufficient_scope"
Date: Wed, 20 Jun 2018 08:02:24 GMT
Content-Length: 157
Strict-Transport-Security: max-age=31536000
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"repository","Class":"","Name":"library/ubuntu","Action":"pull"}]}]}
Does anyone know how I can make the right request, and get the right token?
When you get the token, you specify scope=repository:library/ubuntu:list. When you use the token, the registry says you need scope="repository:library/ubuntu:pull".
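A minimal sketch of the corrected flow, quoting the token URL so the shell does not treat & as a command separator and requesting the pull scope the registry asks for:
#!/usr/bin/env bash
raw_token="$(curl -s 'https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/ubuntu:pull')"
# Pass the JSON as an argument rather than interpolating it into the JS source.
token="$(node -pe 'JSON.parse(process.argv[1]).token' "$raw_token")"
curl -i -H "Authorization: Bearer $token" https://registry-1.docker.io/v2/library/ubuntu/tags/list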

Insufficient scope when attempting to get Docker Hub catalog

I'm attempting to get a catalog listing from Docker Hub, but so far I'm just getting an error in response. My understanding is I'd need to pass a bearer token with the catalog request, so I start by getting that token with the related scope:
curl -u "username:password" "https://auth.docker.io/token?service=registry.docker.io&scope=registry:catalog:*"
(this is using username/password from my Docker Hub account)
I then pass the returned token to the registry:
curl -vL -H "Authorization: Bearer eyJhbGciOiJFUzI1NiIsInR5cCI6I(...)" https://registry-1.docker.io/v2/_catalog
In response to that request, I'm getting:
* Trying 54.86.130.73...
* Connected to registry-1.docker.io (54.86.130.73) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: *.docker.io
* Server certificate: RapidSSL SHA256 CA - G3
* Server certificate: GeoTrust Global CA
> GET /v2/_catalog HTTP/1.1
> Host: registry-1.docker.io
> User-Agent: curl/7.43.0
> Accept: */*
> Authorization: Bearer eyJhbGciOiJFUzI1NiIsInR5cCI6I(...)
>
< HTTP/1.1 401 Unauthorized
HTTP/1.1 401 Unauthorized
< Content-Type: application/json; charset=utf-8
Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
Docker-Distribution-Api-Version: registry/2.0
< Www-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io",scope="registry:catalog:*",error="insufficient_scope"
Www-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io",scope="registry:catalog:*",error="insufficient_scope"
< Date: Fri, 06 May 2016 23:00:08 GMT
Date: Fri, 06 May 2016 23:00:08 GMT
< Content-Length: 134
Content-Length: 134
< Strict-Transport-Security: max-age=31536000
Strict-Transport-Security: max-age=31536000
<
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"registry","Name":"catalog","Action":"*"}]}]}
...which seems to be asking me to go back and get authorized with the URL I entered above.
Should this be possible? If so, what am I missing?
I had the same problem, raised it with Docker support, and got the following answer:
"The catalog endpoint does not work against Docker Hub because that endpoint actually lists all the repositories on a Registry, and we disabled it as it would list all repositories on Docker Hub."
If you want repository information for a username or organization, an alternative is to use the V1 API. The downside is that you need to make multiple calls, depending on the number of repos available.
The following command will give the repos available for a given username:
curl -k -H "Authorization: Basic <base64-encoded username:password>" -H "Accept: application/json" -X GET "https://index.docker.io/v1/search?q=<username>&n=100"
Here q is the query and n is the number of items to be returned per page.
Another call can be made based on what data you need by passing repo as a parameter.
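For reference, a sketch of that search call with the Basic header built from hypothetical credentials (the header value is just base64 of username:password):
auth="$(printf '%s' 'myuser:mypassword' | base64)"   # hypothetical credentials
curl -k -H "Authorization: Basic $auth" -H "Accept: application/json" \
  -X GET "https://index.docker.io/v1/search?q=myuser&n=100"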

'204 No Content' no data found in influxdb

I successfully installed InfluxDB on Windows and everything is working as expected locally, but I'm having trouble posting data from outside using the HTTP API.
I am able to connect to the admin panel locally through
http://localhost:8083/
I am using the command below to post data from a remote server:
curl -i -XPOST 'http://172.29.6.195:8086/write?db=telegraf' --data-binary 'test_load,host=njxap1dbadm01 value=13.64'
I am getting the success message below:
HTTP/1.1 204 No Content
Request-Id: d3b58c0c-f620-11e5-80a1-000000000000
X-Influxdb-Version: unknown
Date: Wed, 30 Mar 2016 02:40:55 GMT
Log on the server side:
[http] 2016/03/29 22:40:55 172.29.18.10 - - [29/Mar/2016:22:40:55
-0400] POST /write?db=telegraf HTTP/1.1 204 0 - curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC
zlib/1.2.3 libidn/1.18 libssh2/1.4.2
d3b58c0c-f620-11e5-80a1-000000000000 0
Even though I got the success message on the client side, somehow the data is not getting saved in the database.
I checked for the data from the admin panel and it returns no data. Checking with a curl GET also returns no results.
I have a retention policy of 1 day for my database.
Please help me figure out why the data is not getting saved to the database.
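A quick way to double-check from the remote side is InfluxDB's HTTP query endpoint; a minimal sketch against the same host and database as above:
curl -G 'http://172.29.6.195:8086/query' \
  --data-urlencode "db=telegraf" \
  --data-urlencode "q=SELECT * FROM test_load"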
I'm trying to reproduce this, but I'm not able to with the latest version of InfluxDB.
~$ curl -i -XPOST 'http://localhost:8086/write?db=telegraf' --data-binary 'test_load,host=njxap1dbadm01 value=13.64'
HTTP/1.1 204 No Content
Request-Id: 38fcfb17-fac3-11e5-8004-000000000000
X-Influxdb-Version: unknown
Date: Tue, 05 Apr 2016 00:13:28 GMT
Logs:
[http] 2016/04/04 17:13:28 ::1 - - [04/Apr/2016:17:13:28 -0700] POST /write?db=telegraf HTTP/1.1 204 0 - curl/7.43.0 38fcfb17-fac3-11e5-8004-000000000000 3.776752ms
Querying:
> use telegraf
Using database telegraf
> show series
key
test_load,host=njxap1dbadm01
> select * from test_load
name: test_load
---------------
time host value
1459815208633910164 njxap1dbadm01 13.64
Do you know what version of InfluxDB you are using?
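The running version can also be checked over HTTP; the /ping endpoint reports it in the X-Influxdb-Version response header. A minimal sketch against the same host:
curl -i 'http://172.29.6.195:8086/ping'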
