I am using the GL865-DUAL GPRS module from Telit.
I'm trying to set up an HTTP POST request to a website with my location variables.
I am able to set up a working GPRS connection with the provider (I get an IP address), but the POST AT command goes wrong.
This is my command set:
"AT" //response: OK //build in delay of 5 seconds
Delay after each AT command set to 400miliseconds
"AT+CGMR //response: 16.00.152 OK
"AT#QSS?" //response: #QSS:0,1 OK
"AT#QSS=?" // response #QSS: (0-2) OK
"AT+CMEE=1" //response: OK
"AT+CMEE?" // response: +CMEE: 1
"AT+CPIN?" // response: +CPIN: READY OK
"AT+COPS?" // response: +COPS: 0,0,"PROXIMUS" OK
"AT+CSQ" // response: +CSQ: 20
"AT+cgatt=1" // response: OK
"AT+CGDCONT=1, \"IP\",\"internet.proximus.be\"" // response: OK // double quotes in c are expressed as \"
"AT+CGDCONT?" // response: +CGDCONT: 1,"IP","internet.proximus.be","",0,0 OK
"AT#SGACT?" // response: #SGACT: 1,0 OK
"AT#SCFG=?" //#SCFG: (1-6,(0-5),(0-1500),(0-65535),(10-1200),(0-255) OK
"AT#SCFG=1,1,300,90,600,50" // response: OK
"AT#SGACT?" //response: #SGACT: 1,0
"AT#SGACT = 1,1" //response: #SGACT: 178,144.233.116 OK
"AT#HTTPCFG=1,\"https://www.google.be/\",80,0,,,0,120,1" // response: OK
"AT#HTTPSND=1,0,\"search?q=yo\",4,\"1:charset=ISO-8859-1\"test" // response: CME Error 4
What am I doing wrong, and is it possible to give a working example?
Note: no PIN is required for the SIM card, and no APN username or password is needed in my case.
The following documents could be helpful:
http://www.gaw.ru/pdf/DIA_Telecom/80000ST10028_Easy%20GPRS%20User%20Guide_r1.pdf
http://www4.telit.com/module/infopool/download.php?id=542
Thanks a lot!
Try removing the https:// and the trailing /. #HTTPCFG asks for a hostname or IP address, not a URL.
I see two problems with your commands:
Your profile IDs for HTTPCFG and HTTPSND don't match.
I think you need double quotes around the username and password fields for HTTPCFG.
After AT#HTTPSND, the modem replies with >>> to indicate that you need to supply your data. The "test" at the end of that line shouldn't be there; it should be sent after a delay, once the prompt appears. You can also try specifying just 1 for plain text instead of the whole charset string. Posting to Google?
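For reference, here is a minimal sketch of the corrected flow driven from Python with pyserial (the port name, baud rate, resource path, and delays are placeholders I made up, and the exact #HTTPCFG/#HTTPSND parameter lists should be checked against the AT commands reference for your firmware; your code is in C, but the command sequence is the same):

import time
import serial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)  # hypothetical port and baud rate

def send_at(cmd, wait=0.4):
    # Send one AT command and return whatever the module has answered so far.
    ser.write((cmd + "\r").encode("ascii"))
    time.sleep(wait)
    return ser.read(ser.in_waiting or 1).decode("ascii", errors="replace")

body = b"test"

# Profile 1, bare hostname (no scheme, no trailing slash), port 80, quoted empty user/password.
print(send_at('AT#HTTPCFG=1,"www.google.be",80,0,"","",0,120,1'))

# Announce a POST of len(body) bytes; 1 selects plain text instead of a full charset string.
print(send_at('AT#HTTPSND=1,0,"/search?q=yo",%d,1' % len(body)))

# Wait for the ">>>" prompt before sending the payload.
deadline = time.time() + 5
prompt = ""
while ">>>" not in prompt and time.time() < deadline:
    prompt += ser.read(ser.in_waiting or 1).decode("ascii", errors="replace")

ser.write(body)  # the actual POST data
print(ser.read(100).decode("ascii", errors="replace"))  # expect OK; the HTTP reply is signalled later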
Docker API image creation / pull (/v1.6/images/create) apparently always returns
HTTP/1.1 200 OK
Content-Type: application/json
no matter whether the process succeeds or fails.
Furthermore, the payload is not valid JSON.
e.g.: /v1.6/images/create?fromImage=whatevertheflush
returns:
{"status":"Pulling repository whatevertheflush"}{"error":"Server error: 404 trying to fetch remote history for whatevertheflush","errorDetail":{"code":404,"message":"Server error: 404 trying to fetch remote history for whatevertheflush"}}
Since the payload is not valid JSON and the HTTP error is not forwarded or used, it is awkward for clients to handle errors.
Indeed, docker-py just spits the payload back out (https://github.com/dotcloud/docker-py/blob/master/docker/client.py#L374), and DockerHTTPClient from OpenStack tries to return a value based on the HTTP status code, which is always 200... (https://github.com/openstack/nova/blob/master/nova/virt/docker/client.py#L191)
Now, I understand the pull might take a long time, and that it somewhat makes sense to start streaming an answer to the client, but I can't help thinking something is wrong here.
So, this is threefold:
Am I missing something entirely here?
If not: if you are implementing a client application (say, in Python), how would you handle this (elegantly, if possible :))? Try to detect valid JSON blocks, load them, and exit whenever we "think" something is wrong? (See the sketch after this list for the kind of thing I have in mind.)
If not: is this going to change (for the better) in future Docker versions?
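To make that last point concrete, the kind of client-side handling I have in mind looks roughly like this (a rough Python sketch using requests; the daemon URL and the assumption that an "error" key marks a failed record are mine, not documented):

import json
import requests

def pull_image(base_url, image):
    # Stream /images/create and stop as soon as a record carries an "error" key.
    decoder = json.JSONDecoder()
    buf = ""
    resp = requests.post(base_url + "/images/create",
                         params={"fromImage": image}, stream=True)
    for chunk in resp.iter_content(chunk_size=None):
        buf += chunk.decode("utf-8")   # the records are plain ASCII in practice
        while buf:
            buf = buf.lstrip()
            try:
                record, end = decoder.raw_decode(buf)
            except ValueError:         # incomplete record, wait for more data
                break
            buf = buf[end:]
            if "error" in record:
                raise RuntimeError(record["error"])
            print(record.get("status", record))

# e.g. pull_image("http://localhost:2375", "whatevertheflush") would surface the 404 error above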
This question is a bit old, but for the future reader who has landed on this page, I'd like to let you know you're not alone, we feel your pain. This API is indeed as terrible as it looks.
The TL;DR answer is "the /images/create response format is undocumented; discard the output and query /images/XXX/json after your create call completes."
I wrote some orchestration tools a few years ago, and I found the /images/create API to be extremely annoying. But let's dive in:
There is no documented schema for the 200 response; the v1.19 docs simply gave examples of a few records. The v1.37 docs (the latest at the time I write this) don't even go that far; no details of the response are provided at all.
The response is sent as Transfer-Encoding: chunked, and each record sent is preceded by its byte count in hex. Here's a low-level excerpt (bypassing curl so we can see what actually gets sent on the wire):
host-4:~ rg$ telnet localhost 2375
Trying ::1...
Connected to localhost.
Escape character is '^]'.
POST /images/create?fromImage=jenkins/jenkins:latest HTTP/1.1
Host: localhost:2375
User-Agent: foo/1.0
Accept: */*
HTTP/1.1 200 OK
Api-Version: 1.39
Content-Type: application/json
Docker-Experimental: true
Ostype: linux
Server: Docker/18.09.1 (linux)
Date: Wed, 06 Feb 2019 16:53:19 GMT
Transfer-Encoding: chunked
39
{"status":"Pulling from jenkins/jenkins","id":"latest"}
5e
{"status":"Digest: sha256:abd3e3f96fbc3445c420fda590f37e2bd3377f69affd47b63b3d826d084c5ddc"}
45
{"status":"Status: Image is up to date for jenkins/jenkins:latest"}
0
Yes, it streams the image download progress -- client libraries that don't give low-level access to the chunked records may just concatenate the data before it's handed to you. As you encountered, early versions of the API returned JSON records whose only delimiter was the chunked transfer encoding, so client code received a concatenated block of undelimited JSON and had to parse it by tracking curlies/quotes/escape chars! It has since been updated to emit records delimited by newlines, but can we count on them always being there? Who knows! This behavior changed without ceremony, and was not preserved if you call older versions of the API on newer daemons.
It returns 200 OK immediately, which doesn't represent success or failure. (Given the nature of the call, I'd imagine it should probably return 202 Accepted instead. Ideally, we'd get a Location header pointing to a new URL that we could use to query the progress/status.)
The response data returned is huge, spammy, and just... silly. If you have a docker instance listening on TCP, try curl -Nv -X POST http://yourdocker:2375/images/create?fromImage=jenkins/jenkins:latest -o /tmp/omgwtf.txt. You'll be amazed. A ton of bandwidth is wasted transferring server-rendered ASCII bar graphs! In fact, the records report each layer's progress three different ways: as numeric fields for current and total bytes, as a bar graph, and as a pretty-printed string with MB or GB units. Why isn't this just rendered on the client? Great question.
Instead, you need your client to parse kilobytes or megabytes of spam.
The bar graph has a randomly escaped Unicode representation of the > character, despite being safely inside a JSON string. Someone was just throwing escape calls at the wall to see what stuck? ¯\_(ツ)_/¯
The records themselves are pretty arbitrary. There's an id field that changes what it references, and the only way to know what kind of record you're looking at is to parse the human-readable string: Pulling from XXX vs Pulling fs layer vs Downloading, etc. As far as I can tell, the only real way to know it's done is to track all the ids and make sure you've seen a Pull complete for each of them by the time the socket closes.
You might be able to look for Status: Downloaded newer image for XXX but I'm not sure if there are multiple possible responses for this.
As I mentioned at the start, you'll probably have the best luck requesting /images/XXX/json after /images/create claims to be complete. The combination of the two calls will give a pretty reliable indication of whether /images/create worked or not.
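A rough sketch of that two-call approach in Python, using requests against a TCP-exposed daemon (the daemon address and the tag handling are my own assumptions):

import requests

DOCKER = "http://localhost:2375"   # assumed daemon address

def pull_and_verify(image, tag="latest"):
    # Kick off the pull and drain (and discard) the spammy progress stream...
    resp = requests.post(DOCKER + "/images/create",
                         params={"fromImage": image, "tag": tag}, stream=True)
    for _ in resp.iter_lines():
        pass
    # ...then ask /images/<name>/json whether the image actually exists now.
    check = requests.get("%s/images/%s:%s/json" % (DOCKER, image, tag))
    if check.status_code != 200:
        raise RuntimeError("pull of %s:%s failed" % (image, tag))
    return check.json()             # the usual image inspect data

# pull_and_verify("jenkins/jenkins") returns the inspect data, or raises if the pull failed.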
Here's a longer block of the concatenated client response, showing a few different record types (edited for brevity):
{"status":"Pulling from jenkins/jenkins","id":"latest"}
{"status":"Pulling fs layer","progressDetail":{},"id":"ab1fc7e4bf91"}
{"status":"Pulling fs layer","progressDetail":{},"id":"35fba333ff52"}
{"status":"Pulling fs layer","progressDetail":{},"id":"f0cb1fa13079"}
{"status":"Pulling fs layer","progressDetail":{},"id":"3d1dd648b5ad"}
{"status":"Pulling fs layer","progressDetail":{},"id":"a9f886e483d6"}
{"status":"Pulling fs layer","progressDetail":{},"id":"4346341d3c49"}
..
"status":"Waiting","progressDetail":{},"id":"3d1dd648b5ad"}
{"status":"Waiting","progressDetail":{},"id":"a9f886e483d6"}
{"status":"Waiting","progressDetail":{},"id":"4346341d3c49"}
{"status":"Waiting","progressDetail":{},"id":"006f2208d67a"}
{"status":"Waiting","progressDetail":{},"id":"fb85cf26717d"}
{"status":"Waiting","progressDetail":{},"id":"52ca068dbca7"}
{"status":"Waiting","progressDetail":{},"id":"82f4759b8d12"}
...
{"status":"Downloading","progressDetail":{"current":110118,"total":10780995},"progress":"[\u003e ] 110.1kB/10.78MB","id":"35fba333ff52"}
{"status":"Downloading","progressDetail":{"current":457415,"total":45344749},"progress":"[\u003e ] 457.4kB/45.34MB","id":"ab1fc7e4bf91"}
{"status":"Downloading","progressDetail":{"current":44427,"total":4340040},"progress":"[\u003e ] 44.43kB/4.34MB","id":"f0cb1fa13079"}
{"status":"Downloading","progressDetail":{"current":817890,"total":10780995},"progress":"[===\u003e ] 817.9kB/10.78MB","id":"35fba333ff52"}
{"status":"Downloading","progressDetail":{"current":1833671,"total":45344749},"progress":"[==\u003e ] 1.834MB/45.34MB","id":"ab1fc7e4bf91"}
{"status":"Downloading","progressDetail":{"current":531179,"total":4340040},"progress":"[======\u003e ] 531.2kB/4.34MB","id":"f0cb1fa13079"}
{"status":"Downloading","progressDetail":{"current":1719010,"total":10780995},"progress":"[=======\u003e ] 1.719MB/10.78MB","id":"35fba333ff52"}
{"status":"Downloading","progressDetail":{"current":3205831,"total":45344749},"progress":"[===\u003e ] 3.206MB/45.34MB","id":"ab1fc7e4bf91"}
{"status":"Downloading","progressDetail":{"current":1129195,"total":4340040},"progress":"[=============\u003e ] 1.129MB/4.34MB","id":"f0cb1fa13079"}
{"status":"Downloading","progressDetail":{"current":2640610,"total":10780995},"progress":"[============\u003e ] 2.641MB/10.78MB","id":"35fba333ff52"}
{"status":"Downloading","progressDetail":{"current":1719019,"total":4340040},"progress":"[===================\u003e ] 1.719MB/4.34MB","id":"f0cb1fa13079"}
{"status":"Downloading","progressDetail":{"current":4586183,"total":45344749},"progress":"[=====\u003e ] 4.586MB/45.34MB","id":"ab1fc7e4bf91"}
{"status":"Downloading","progressDetail":{"current":3549922,"total":10780995},"progress":"[================\u003e ] 3.55MB/10.78MB","id":"35fba333ff52"}
{"status":"Downloading","progressDetail":{"current":2513643,"total":4340040},"progress":"[============================\u003e ] 2.514M
...
{"status":"Pull complete","progressDetail":{},"id":"6d9b49fc8a28"}
{"status":"Extracting","progressDetail":{"current":380,"total":380},"progress":"[==================================================\u003e] 380B/380B","id":"6302e8b6563c"}
{"status":"Extracting","progressDetail":{"current":380,"total":380},"progress":"[==================================================\u003e] 380B/380B","id":"6302e8b6563c"}
{"status":"Pull complete","progressDetail":{},"id":"6302e8b6563c"}
{"status":"Extracting","progressDetail":{"current":1548,"total":1548},"progress":"[==================================================\u003e] 1.548kB/1.548kB","id":"7348f018cf93"}
{"status":"Extracting","progressDetail":{"current":1548,"total":1548},"progress":"[==================================================\u003e] 1.548kB/1.548kB","id":"7348f018cf93"}
{"status":"Pull complete","progressDetail":{},"id":"7348f018cf93"}
{"status":"Extracting","progressDetail":{"current":3083,"total":3083},"progress":"[==================================================\u003e] 3.083kB/3.083kB","id":"c651ee7bd59e"}
{"status":"Extracting","progressDetail":{"current":3083,"total":3083},"progress":"[==================================================\u003e] 3.083kB/3.083kB","id":"c651ee7bd59e"}
{"status":"Pull complete","progressDetail":{},"id":"c651ee7bd59e"}
{"status":"Digest: sha256:abd3e3f96fbc3445c420fda590f37e2bd3377f69affd47b63b3d826d084c5ddc"}
{"status":"Status: Downloaded newer image for jenkins/jenkins:latest"}
This code runs the Internet now. =8-O
This particular endpoint actually returns a chunked response. An example via curl:
$ curl -v -X POST http://localhost:4243/images/create?fromImage=base
* About to connect() to localhost port 4243 (#0)
* Trying ::1...
* Connection refused
* Trying 127.0.0.1...
* connected
* Connected to localhost (127.0.0.1) port 4243 (#0)
> POST /images/create?fromImage=base HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8y zlib/1.2.5
> Host: localhost:4243
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Fri, 07 Feb 2014 04:21:59 GMT
< Transfer-Encoding: chunked
<
* Connection #0 to host localhost left intact
{"status":"Pulling repository base"}{"status":"Pulling image (ubuntu-quantl) from base","progressDetail":{},"id":"b750fe79269d"}{"status":"Pulling image (ubuntu-quantl) from base, endpoint: https://cdn-registry-1.docker.io/v1/","progressDetail":{},"id":"b750fe79269d"}{"status":"Pulling dependent layers","progressDetail":{},"id":"b750fe79269d"}{"status":"Download complete","progressDetail":{},"id":"27cf78414709"}{"status":"Download complete","progressDetail":{},"id":"b750fe79269d"}{"status":"Download complete","progressDetail":{},"id":"b750fe79269d"}* Closing connection #0
Now I'm not sure how you go about parsing this in Python, but in Ruby, I can use Yajl like so:
parts = []
Yajl::Parser.parse(body) { |o| parts << o }
puts parts
{"status"=>"Pulling repository base"}
{"status"=>"Pulling image (ubuntu-quantl) from base", "progressDetail"=>{}, "id"=>"b750fe79269d"}
{"status"=>"Pulling image (ubuntu-quantl) from base, endpoint: https://cdn-registry-1.docker.io/v1/", "progressDetail"=>{}, "id"=>"b750fe79269d"}
{"status"=>"Pulling dependent layers", "progressDetail"=>{}, "id"=>"b750fe79269d"}
{"status"=>"Download complete", "progressDetail"=>{}, "id"=>"27cf78414709"}
{"status"=>"Download complete", "progressDetail"=>{}, "id"=>"b750fe79269d"}
{"status"=>"Download complete", "progressDetail"=>{}, "id"=>"b750fe79269d"}
Using Docker v1.9, I still have this problem to deal with.
I also found an issue in the Docker GitHub repository: Docker uses invalid JSON format in some API functions #16925
There, a contributor suggests using a Content-Type HTTP header like this: application/json; boundary=NL
This did not work for me.
Then, while struggling with my custom parser, I found this question on Stack Overflow: How to handle a huge stream of JSON dictionaries?