Docker Remote API - Pull/Create image not working

Okay so I have enabled managing the docker daemon over HTTP by starting the daemon as follows:
/usr/bin/docker -d -H fd:// -H=0.0.0.0:2376
I can create containers and remove them via the Remote API (i.e. other calls work fine), but if I try to pull an image it errors as follows:
curl -v -X POST http://localhost:2376/images/create?from=ubuntu
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 2376 (#0)
> POST /images/create?from=ubuntu HTTP/1.1
> User-Agent: curl/7.38.0
> Host: localhost:2376
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Thu, 01 Oct 2015 09:01:02 GMT
< Transfer-Encoding: chunked
<
{"status":"Downloading from http://"}
{"errorDetail":{"message":"Get http://: http: no Host in request URL"},"error":"Get http://: http: no Host in request URL"}
* Connection #0 to host localhost left intact
Anyone know what the answer is?

Ah, looks like it was a typo in the parameter name:
"from" -> "fromImage"
Basically you get this error if the query parameters are missing.
Also make sure you set tag=latest, otherwise it downloads all ubuntu images!
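For reference, the corrected call is the same endpoint with the right parameter names:
curl -v -X POST "http://localhost:2376/images/create?fromImage=ubuntu&tag=latest"
(The URL is quoted so the shell doesn't treat the & as a background operator.)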

Related

"docker pull/run hello-world" behind a proxy returns "connection refused" error (non-root mode, linux mint)

Edit (kind of solved) - Just before trying something else, I tried the same procedure again (to be sure it still did not work) and it failed, as expected. Then I tried the root/sudo way ("sudo docker run hello-world") and it worked (more or less expected). Finally, for some unknown reason, it then worked properly when invoked the non-root way ("docker run hello-world"). It looks like it needed a pass through the root/sudo way to somehow "unlock" the non-root way so that it actually uses the proxy; weird.
I am behind a proxy. I've followed the steps in the Docker docs at https://docs.docker.com/config/daemon/systemd/ as "non-root".
When I do the following,
$ docker pull hello-world
I get
Using default tag: latest
Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp x.x.x.x:443: connect: connection refused
(IP was changed to "x" chars).
Configuration:
Linux Mint Vanessa (based on Ubuntu 22.04).
Docker version 20.10.17, build 100c701
$ systemctl --user show --property=Environment docker
returns
Environment=PATH=/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin HTTP_PROXY=http://proxy.domain.org:8080 HTTPS_PROXY=http://proxy.domain.org:8080
so it looks like the proxy is actually recognized by Docker (as shown in the doc).
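For reference, the rootless setup from that doc boils down to a drop-in file for the user service (the proxy host below is the one from the Environment output above):
# ~/.config/systemd/user/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.domain.org:8080"
Environment="HTTPS_PROXY=http://proxy.domain.org:8080"
After any change, the daemon only picks it up once the user service is reloaded and restarted:
systemctl --user daemon-reload
systemctl --user restart docker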
I am sure I can get through the proxy, as the following demonstrates:
No proxy => connection refused:
$ export http_proxy= && export https_proxy=
$ curl -vv registry-1.docker.io
* Trying x.y.z.a:80...
* connect to x.y.z.a port 80 failed: Connection refused
* Trying x.y.z.b:80...
* connect to x.y.z.b port 80 failed: Connection refused
* Trying x.y.z.c:80...
* connect to x.y.z.c port 80 failed: Connection refused
* Failed to connect to registry-1.docker.io port 80 after 124 ms: Connection refused
* Closing connection 0
curl: (7) Failed to connect to registry-1.docker.io port 80 after 124 ms: Connection refused
With proxy => that's OK:
$ export http_proxy=http://proxy.domain.org:8080 && export https_proxy=http://proxy.domain.org:8080
$ curl -vv registry-1.docker.io
* Uses proxy env variable http_proxy == 'http://proxy.domain.org:8080'
* Trying x.y.z.a:8080...
* Connected to proxy.domain.org (x.y.z.a) port 8080 (#0)
> GET http://registry-1.docker.io/ HTTP/1.1
> Host: registry-1.docker.io
> User-Agent: curl/7.81.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< content-length: 0
< location: https://registry-1.docker.io/
< Proxy-Connection: Keep-Alive
< Connection: Keep-Alive
< Age: 0
< Date: Thu, 25 Aug 2022 15:57:15 GMT
<
* Connection #0 to host proxy.domain.org left intact

Haproxy - Cannot set up the most basic proxy

Please, can somebody look at this config?
global
    log stdout format raw local0 debug
    stats timeout 30s

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 50000
    timeout client 50000
    timeout server 50000

frontend app
    bind *:15080
    default_backend myback

backend myback
    server site google.com:80 check
Why is this not working? If I try to visit 127.0.0.1:15080 it takes some time and then the URL in the browser changes to www.google.com:15080, which obviously doesn't take you anywhere. The browser says: "This site can’t be reached - ERR_CONNECTION_TIMED_OUT".
So why doesn't it proxy to port 80 as one would expect?
The log entry does not tell much:
127.0.0.1:50871 [01/Jul/2019:14:39:45.879] app myback/site 0/0/20/84/104 301 681 - - ---- 2/2/0/0/0 0/0 "GET / HTTP/1.1"
Haproxy version:
HA-Proxy version 2.0.0-4fb65f-8 2019/06/19 - https://haproxy.org/
EDIT:
I somehow solved the problem by trial and error.
Actually, HAProxy is working as expected and proxying your request to Google. Google, however, sees that the Host header is 'Host: 127.0.0.1:15080' and responds with a 301 redirect to www.google.com:15080. You can see this without setting up HAProxy at all:
$ curl -I -H 'Host: 127.0.0.1:15080' google.com
HTTP/1.1 301 Moved Permanently
Location: http://www.google.com:15080/
Content-Type: text/html; charset=UTF-8
Date: Mon, 01 Jul 2019 14:26:09 GMT
Expires: Wed, 31 Jul 2019 14:26:09 GMT
Cache-Control: public, max-age=2592000
Server: gws
Content-Length: 225
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN
If you want to set up a very basic proxy to Google, you need to make sure the Host header matches and that you are sending requests via HTTPS:
backend myback
    http-request set-header Host www.google.com
    server site google.com:443 ssl verify none check
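With that change in place and HAProxy reloaded, a quick local check should come back from Google directly instead of bouncing you to www.google.com:15080; something like:
$ curl -sI http://127.0.0.1:15080/ | head -n 1
# expect an HTTP/1.1 200 OK rather than a 301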

Docker pull does not work after Ubuntu update

After updating to Ubuntu 18.04 LTS, docker pull does not work anymore. I get the following error:
sudo docker pull hello-world
Using default tag: latest
Error response from daemon: Get https://registry-1.docker.io/v2/: EOF
Tried:
adding nameserver 8.8.8.8 and nameserver 8.8.4.4 to /etc/default/docker, which did not help.
Thanks a lot!
Docker version 17.12.1-ce, build 7390fc6
curl -v 'https://registry-1.docker.io/v2/'
* Trying 130.75.6.113...
* TCP_NODELAY set
* Connected to secure-proxy.bla.de (130.87.6.113) port 3131 (#0)
* allocate connect buffer!
* Establish HTTP proxy tunnel to registry-1.docker.io:443
> CONNECT registry-1.docker.io:443 HTTP/1.1
> Host: registry-1.docker.io:443
> User-Agent: curl/7.58.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 403 Forbidden
< Date: Tue, 26 Jun 2018 08:00:51 GMT
< Server: C-ICAP
< Content-Type: text/html
< Content-Language: en
< X-Cache: MISS from secure-proxy
< X-Cache-Lookup: NONE from secure-proxy:3131
< Transfer-Encoding: chunked
* CONNECT responded chunked
< Via: 1.1 secure-proxy (squid/3.5.19)
< Connection: keep-alive
<
* Received HTTP code 403 from proxy after CONNECT
* CONNECT phase completed!
* Closing connection 0
curl: (56) Received HTTP code 403 from proxy after CONNECT
If you are behind an HTTP or HTTPS proxy server, you need to add this configuration to the Docker systemd service file. It works after following what is described here:
https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
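In short, the linked page has you create a drop-in for the system service and restart the daemon; a minimal version, using the proxy host and port from the curl output above, looks like:
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://secure-proxy.bla.de:3131"
Environment="HTTPS_PROXY=http://secure-proxy.bla.de:3131"
Then reload and restart so the daemon picks up the new environment:
sudo systemctl daemon-reload
sudo systemctl restart docker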

Running Boot2docker behind a proxy, getting FATA[0020] Forbidden for any interaction with Docker Hub

Followed the instructions to set up the proxy for boot2docker. Getting the following FATA errors, any clue?
FATA[0020] Get https://index.docker.io/v1/repositories/library/busybox/images: Forbidden - while trying to pull images
FATA[0020] Error response from daemon: Server Error: Post https://index.docker.io/v1/users/: Forbidden - while trying to login
FATA[0000] Error response from daemon: Get https://index.docker.io/v1/search?q=ubuntu: Forbidden - while searching for images
Updated to include the result of curl -v https://index.docker.io:443:
* Rebuilt URL to: https://index.docker.io:443/
* About to connect() to proxy 34363bd0dd54 port 8099 (#0)
* Trying 192.168.59.3...
* Adding handle: conn: 0x9adbad8
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x9adbad8) send_pipe: 1, recv_pipe: 0
* Connected to 34363bd0dd54 (192.168.59.3) port 8099 (#0)
* Establish HTTP proxy tunnel to index.docker.io:443
> CONNECT index.docker.io:443 HTTP/1.1
> Host: index.docker.io:443
> User-Agent: curl/7.33.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 403 Forbidden
< Server: squid/3.4.9
< Mime-Version: 1.0
< Date: Fri, 29 May 2015 17:56:22 GMT
< Content-Type: text/html
< Content-Length: 3151
< X-Squid-Error: ERR_ACCESS_DENIED 0
< Vary: Accept-Language
< Content-Language: en
< X-Cache: MISS from localhost
< Via: 1.1 localhost (squid/3.4.9)
< Connection: keep-alive
<
* Received HTTP code 403 from proxy after CONNECT
* Connection #0 to host 34363bd0dd54 left intact
curl: (56) Received HTTP code 403 from proxy after CONNECT
Looks like it is a proxy issue. I am running a proxy server on the host machine and accessing it by its host name in the boot2docker VM's http_proxy and https_proxy, but curl host_proxy:port works with no issues.
I was experiencing the same issue, where I would get a 403 error when trying to install lxc-docker from get.docker.com (it failed because it could not complete apt-get update). In my case, I have the following setup:
VM Provider: VirtualBox (Ubuntu 14.04 (Trusty))
Environment: Vagrant
Provisioner: chef-zero (via Vagrant)
PROXY: At first I had forgotten about this, but I am running apt-cacher-ng on my host machine (my MacBook Pro) to keep data downloads to a minimum when running apt-get install on Vagrant VMs. In a nutshell, apt-cacher-ng sets up an apt mirror on my Mac for the Ubuntu VMs to pull packages from.
I realized that apt-cacher-ng doesn't support SSL repositories (https), but does support normal http repositories. Since the Docker repository uses https, I had to find a workaround.
Before I fixed anything, I had the following in the /etc/apt/apt.conf.d/10mirror file in my Ubuntu VMs (localip is the IP address of my Mac, which runs the apt-cacher-ng server):
Acquire::http { Proxy "http://#{localip}:3142"; };
The above line means my Ubuntu VMs were getting packages through apt-cacher-ng but failing whenever a repository used https. Adding the following line beneath it made things work normally:
Acquire::https { Proxy "false"; };
At this point, the contents of /etc/apt/apt.conf.d/10mirror are as follows:
Acquire::http { Proxy "http://#{localip}:3142"; };
Acquire::https { Proxy "false"; };
Now run apt-get update, and Docker installs successfully; I'm back to normal testing. In case you are using Vagrant to set up the 10mirror file, here are the lines in my Vagrantfile which do the job:
oak.vm.provision "shell", inline: "echo 'Acquire::http { Proxy \"http://#{localip}:3142\"; };' > /etc/apt/apt.conf.d/10mirror"
oak.vm.provision "shell", inline: "echo 'Acquire::https { Proxy \"false\"; };' >> /etc/apt/apt.conf.d/10mirror";
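As a quick sanity check, apt-config can dump the effective configuration to confirm both directives were picked up:
$ apt-config dump | grep -i proxy
# should list Acquire::http::Proxy pointing at apt-cacher-ng and Acquire::https::Proxy "false"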

Curl POST request in JSON format

I'd like to send an HTTP POST request in JSON format to a specific URL using curl in the Mac terminal.
How do I specify the HTTP verb POST? What is the difference between -d and -X?
How do I specify that I'm sending my data in JSON format?
Any suggestions on how to test the request itself? I'd like to test and see exactly what JSON data is being sent across before I do my 'live' request. Can I run a Rails server on localhost and send my POST request to localhost? How can I see the JSON data?
Any examples are welcome.
Thanks!
1) If you are using the -d option to upload data, curl will automatically use POST. The -X option is used when you want to specify the method (PUT, DELETE, etc.) rather than letting curl choose it for you.
echo "how are you" | curl -vvv -d#- http://localhost:8000
* About to connect() to localhost port 8000 (#0)
* Trying 127.0.0.1...
* connected
* Connected to localhost (127.0.0.1) port 8000 (#0)
> POST / HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8y zlib/1.2.5
> Host: localhost:8000
> Accept: */*
> Content-Length: 11
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 11 out of 11 bytes
< HTTP/1.1 200 OK
< Date: Sun, 03 Aug 2014 13:46:44 GMT
< Connection: keep-alive
< Transfer-Encoding: chunked
2) You can specify that you are sending your data in JSON format by using the Content-Type header. This header can be added in curl using the -H option.
3) Yes, you can set up a web server (using Python, Node.js, Rails, etc.) that just prints out the HTTP body once it receives it.
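If you don't want to stand up a real server just to peek at the request, a throwaway netcat listener works too (BSD netcat syntax shown, as on a Mac; GNU netcat wants nc -l -p 8000 instead). curl will hang waiting for a reply, since nc never sends one, so Ctrl-C it once you've seen the output:
$ nc -l 8000    # terminal 1: dumps the raw request, headers and body included
$ curl -H "Content-Type: application/json" -d '{"greeting":"hello"}' http://localhost:8000    # terminal 2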
$ curl -d "param1=value1&param2=value2" http://example.com/posts
$ curl -i -H "Accept: application/json" -H "Content-Type: application/json" http://example.com/posts
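Putting 1) and 2) together, a JSON POST (example.com here is just a placeholder URL) looks like:
$ curl -H "Content-Type: application/json" -d '{"param1":"value1","param2":"value2"}' http://example.com/posts
Note there is no -X POST: the -d option already makes curl use POST, per point 1.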
I prefer to use the Advanced REST Client Chrome extension: https://chrome.google.com/webstore/detail/advanced-rest-client/hgmloofddffdnphfgcellkdfbfbjeloo
