Docker pull does not work after Ubuntu update

After updating to Ubuntu 18.04 LTS, docker pull no longer works. I get the following error:
sudo docker pull hello-world
Using default tag: latest
Error response from daemon: Get https://registry-1.docker.io/v2/: EOF
What I tried: adding nameserver 8.8.8.8 and nameserver 8.8.4.4 to /etc/default/docker, which did not help.
Thanks a lot!
Docker version 17.12.1-ce, build 7390fc6
curl -v 'https://registry-1.docker.io/v2/'
* Trying 130.75.6.113...
* TCP_NODELAY set
* Connected to secure-proxy.bla.de (130.87.6.113) port 3131 (#0)
* allocate connect buffer!
* Establish HTTP proxy tunnel to registry-1.docker.io:443
> CONNECT registry-1.docker.io:443 HTTP/1.1
> Host: registry-1.docker.io:443
> User-Agent: curl/7.58.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 403 Forbidden
< Date: Tue, 26 Jun 2018 08:00:51 GMT
< Server: C-ICAP
< Content-Type: text/html
< Content-Language: en
< X-Cache: MISS from secure-proxy
< X-Cache-Lookup: NONE from secure-proxy:3131
< Transfer-Encoding: chunked
* CONNECT responded chunked
< Via: 1.1 secure-proxy (squid/3.5.19)
< Connection: keep-alive
<
* Received HTTP code 403 from proxy after CONNECT
* CONNECT phase completed!
* Closing connection 0
curl: (56) Received HTTP code 403 from proxy after CONNECT

If you are behind an HTTP or HTTPS proxy server, you need to add this configuration to the Docker systemd service file. It works after doing what is described here:
https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
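In short, the steps on that page boil down to a systemd drop-in file that passes the proxy to the daemon. A minimal sketch (the proxy host and port are placeholders; substitute your own):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3131"
Environment="HTTPS_PROXY=http://proxy.example.com:3131"

Then reload systemd and restart the daemon so the environment takes effect:

sudo systemctl daemon-reload
sudo systemctl restart docker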

Related

"docker pull/run hello-world" behind a proxy returns "connection refused" error (non-root mode, linux mint)

Edit (kind of solved) - Just before trying something else, I tried the same procedure again (to be sure it still did not work) and it failed, as expected. Then I tried the root/sudo way ("sudo docker run hello-world") and it worked, which was more or less expected. Finally, for some unknown reason, it then also worked properly the non-root way ("docker run hello-world"). It looks like it needed to go through the root/sudo way to somehow "unlock" the non-root way so that it actually uses the proxy; weird.
I am behind a proxy. I've followed the steps in the Docker doc at https://docs.docker.com/config/daemon/systemd/ as "non-root".
When I do the following,
$ docker pull hello-world
I get
Using default tag: latest
Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp x.x.x.x:443: connect: connection refused
(IP was changed to "x" chars).
Configuration:
Linux Mint Vanessa (based on Ubuntu 22.04).
Docker version 20.10.17, build 100c701
$ systemctl --user show --property=Environment docker
returns
Environment=PATH=/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin HTTP_PROXY=http://proxy.domain.org:8080 HTTPS_PROXY=http://proxy.domain.org:8080
so it looks like the proxy is actually recognized by Docker (as shown in the doc).
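For reference, the non-root setup from that doc page amounts to a user-level drop-in like this (contents inferred from the Environment output above):

# ~/.config/systemd/user/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.domain.org:8080"
Environment="HTTPS_PROXY=http://proxy.domain.org:8080"

followed by systemctl --user daemon-reload and systemctl --user restart docker.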
I am sure I can get through the proxy, as the following demonstration shows:
No proxy => connection refused:
$ export http_proxy= && export https_proxy=
$ curl -vv registry-1.docker.io
* Trying x.y.z.a:80...
* connect to x.y.z.a port 80 failed: Connection refused
* Trying x.y.z.b:80...
* connect to x.y.z.b port 80 failed: Connection refused
* Trying x.y.z.c:80...
* connect to x.y.z.c port 80 failed: Connection refused
* Failed to connect to registry-1.docker.io port 80 after 124 ms: Connection refused
* Closing connection 0
curl: (7) Failed to connect to registry-1.docker.io port 80 after 124 ms: Connection refused
With proxy => that's OK:
$ export http_proxy=http://proxy.domain.org:8080 && export https_proxy=http://proxy.domain.org:8080
$ curl -vv registry-1.docker.io
* Uses proxy env variable http_proxy == 'http://proxy.domain.org:8080'
* Trying x.y.z.a:8080...
* Connected to proxy.domain.org (x.y.z.a) port 8080 (#0)
> GET http://registry-1.docker.io/ HTTP/1.1
> Host: registry-1.docker.io
> User-Agent: curl/7.81.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< content-length: 0
< location: https://registry-1.docker.io/
< Proxy-Connection: Keep-Alive
< Connection: Keep-Alive
< Age: 0
< Date: Thu, 25 Aug 2022 15:57:15 GMT
<
* Connection #0 to host proxy.domain.org left intact

HTTP Request with content-length > 0 but body is empty in docker container

I am trying to run curl commands from a Groovy script and containerize it using Docker. The script runs a bunch of curl commands, using the following code to execute them:
def process = commandArray.execute()
def out = new StringBuffer()
process.consumeProcessOutputStream(out)
if (process.isAlive()) process.waitForOrKill(240000)
Things work fine when I run locally, but when I execute the Groovy script in a container, the response body comes back empty even though the Content-Length is more than 0.
My headers are as below (I have replaced some content with dummy data, e.g. xyz.com):
* TCP_NODELAY set
* Connected to xyz.com (10.20.30.400) port 80 (#0)
> GET /abc/xyz/type HTTP/1.1
> Host: xyz.com
> User-Agent: curl/7.68.0
> Accept: */*
>
< HTTP/1.1 200 OK
< cache-control: no-cache, private
< etag: "000000000000000000000000000000"
< x-content-type-options: nosniff
< x-xss-protection: 1; mode=block
< strict-transport-security: max-age=31536000 ; includeSubDomains
< x-frame-options: DENY
< content-type: application/json
< content-length: 72950
< date: Tue, 22 Jun 2021 06:17:09 GMT
< x-envoy-upstream-service-time: 57
< server: envoy
<
{ [15940 bytes data]
* Connection #0 to host xyz.com left intact
Although content-length is 72950, when I print the out object it is empty ([]).
What might be going on here? The issue is specific to running in a container.
The curl command looks like this:
curl -s -v -L -k -X 'GET' --cert '/tmp/groovy-generated-10305010891926264191-tmpdir/client-cert.pem' --proxy 'https://proxy.com:30000' --noproxy 'localhost,127.0.0.1' 'https://example.com/abc/xyz/type'
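For what it's worth, an empty buffer like this can happen when the process is killed before the stream-consuming thread has finished, or when stderr is never drained and the process blocks. A minimal, more defensive sketch in Groovy (using the same commandArray as in the code above); this is only a sketch of the stream-handling pattern, not a confirmed fix for the container-specific behavior:

def out = new StringBuilder()
def err = new StringBuilder()
def process = commandArray.execute()
// Blocks until the process exits AND both stdout and stderr
// have been fully consumed, so nothing is lost or deadlocked.
process.waitForProcessOutput(out, err)
println "exit=${process.exitValue()} stderr=${err}"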

Load balancing Docker Swarm using HAProxy

I have a Docker Swarm cluster on AWS which I am trying to load balance using HAProxy. My setup, which is behind a VPC, looks similar to this:
haproxy_server 10.10.0.10
docker_swarm_master1 10.10.0.12
docker_swarm_master2 10.10.0.13
docker_swarm_worker3 10.10.0.14
My only Tomcat container is currently on master_1 and below is my current HAProxy config file:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local0 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
    maxconn 2000

frontend servers
    bind *:80
    bind *:8443 ssl crt /etc/haproxy/certs/ssl.pem
    default_backend hosts

backend hosts
    mode http
    balance roundrobin
    option httpchk OPTIONS /
    option forwardfor
    option http-server-close
    server swarm 10.10.0.12:8443 check inter 5000
I am able to see the index.html page in the webapps directory when I do the following from the HAProxy server:
curl -k https://10.10.0.12:8443/docs/index.html
However, when I try the following curl command, I get a 503 Service Unavailable error:
curl -k https://10.10.0.10:8443/docs/index.html
Anyone know what I am doing wrong? I have spent half the day on this to no avail.
EDIT
curl -XOPTIONS -vk https://10.10.0.10:8443/docs/index.html
* Trying 10.10.0.10...
* Connected to 10.10.0.10 (10.10.0.10) port 8443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 692 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification SKIPPED
* server certificate status verification SKIPPED
* common name: *.secreturl.com (does not match '10.10.0.10')
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: OU=Domain Control Validated,CN=*.secreturl.com
* start date: Sat, 27 Jun 2016 16:39:39 GMT
* expire date: Tue, 11 Jun 2020 18:09:38 GMT
* issuer: C=US,ST=Arizona,L=Scottsdale,O=GoDaddy.com\, Inc.,OU=http://certs.godaddy.com/repository/,CN=Go Daddy Secure Certificate Authority - G2
* compression: NULL
* ALPN, server did not agree to a protocol
> OPTIONS / HTTP/1.1
> Host: 10.10.0.10:8443
> User-Agent: curl/7.47.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 503 Service Unavailable
< Cache-Control: no-cache
< Connection: close
< Content-Type: text/html
<
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
* Closing connection 0
curl -XOPTIONS -vk https://10.10.0.12:8443/docs/index.html
* Trying 10.10.0.12...
* Connected to 10.10.0.12 (10.10.0.12) port 8443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 692 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification SKIPPED
* server certificate status verification SKIPPED
* common name: *.secreturl.com (does not match '10.10.0.10')
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: OU=Domain Control Validated,CN=*.secreturl.com
* start date: Sat, 27 Jun 2016 16:39:39 GMT
* expire date: Tue, 11 Jun 2020 18:09:38 GMT
* issuer: C=US,ST=Arizona,L=Scottsdale,O=GoDaddy.com\, Inc.,OU=http://certs.godaddy.com/repository/,CN=Go Daddy Secure Certificate Authority - G2
* compression: NULL
* ALPN, server did not agree to a protocol
> OPTIONS / HTTP/1.1
> Host: 10.10.0.12:8443
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< Allow: GET, HEAD, POST, PUT, DELETE, OPTIONS
< Content-Length: 0
< Date: Sat, 24 Dec 2016 18:39:27 GMT
<
* Connection #0 to host 10.10.0.12 left intact
If you get a 503 Service Unavailable, then your health check is failing.
With your configuration, HAProxy will check OPTIONS http://10.10.0.12:8443/, which will fail: your backend accepts HTTPS connections only. To fix that, tell HAProxy to use HTTPS for the check:
server swarm 10.10.0.12:8443 check inter 5000 ssl verify none
Note: you can enable the stats page with:
listen haproxy_admin
    bind 127.0.0.1:22002
    mode http
    stats enable
    stats uri /
That should help you debug further issues.
Edit:
The stats page shows L7STS/404, which is the HTTP status code HAProxy gets back from the check. HAProxy currently checks https://10.10.0.12:8443/ while you test https://10.10.0.12:8443/docs/index.html. Perhaps you should use that URL in your check:
option httpchk OPTIONS /docs/index.html
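Putting both fixes together, the backend section would look something like this:

backend hosts
    mode http
    balance roundrobin
    option httpchk OPTIONS /docs/index.html
    option forwardfor
    option http-server-close
    server swarm 10.10.0.12:8443 check inter 5000 ssl verify none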

Docker Remote API - Pull/Create image not working

Okay, so I have enabled managing the Docker daemon over HTTP by starting the daemon as follows:
/usr/bin/docker -d -H fd:// -H=0.0.0.0:2376
I can create containers and remove them via the Remote API (i.e. other calls are working fine), but if I try to pull an image it errors as follows:
curl -v -X POST http://localhost:2376/images/create?from=ubuntu
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 2376 (#0)
> POST /images/create?from=ubuntu HTTP/1.1
> User-Agent: curl/7.38.0
> Host: localhost:2376
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Thu, 01 Oct 2015 09:01:02 GMT
< Transfer-Encoding: chunked
<
{"status":"Downloading from http://"}
{"errorDetail":{"message":"Get http://: http: no Host in request URL"},"error":"Get http://: http: no Host in request URL"}
* Connection #0 to host localhost left intact
Anyone know what the answer is?
Ah, looks like it was a typo in the parameter name:
"from" -> "fromImage"
Basically, you get this error if the query parameters are missing.
Also make sure you set tag=latest, otherwise it downloads all Ubuntu images!
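Putting that together, the corrected request looks like this (URL quoted so the shell does not swallow the &):

curl -v -X POST "http://localhost:2376/images/create?fromImage=ubuntu&tag=latest"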

Running boot2docker behind a proxy, getting FATA[0020] Forbidden for any interaction with Docker Hub

I followed the instructions to set up a proxy for boot2docker, but I am getting the following FATA errors. Any clue?
FATA[0020] Get https://index.docker.io/v1/repositories/library/busybox/images: Forbidden - while trying to pull images
FATA[0020] Error response from daemon: Server Error: Post https://index.docker.io/v1/users/: Forbidden - while trying to log in
FATA[0000] Error response from daemon: Get https://index.docker.io/v1/search?q=ubuntu: Forbidden - while searching for images
Updated to include the result of curl -v https://index.docker.io:443:
* Rebuilt URL to: https://index.docker.io:443/
* About to connect() to proxy 34363bd0dd54 port 8099 (#0)
* Trying 192.168.59.3...
* Adding handle: conn: 0x9adbad8
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x9adbad8) send_pipe: 1, recv_pipe: 0
* Connected to 34363bd0dd54 (192.168.59.3) port 8099 (#0)
* Establish HTTP proxy tunnel to index.docker.io:443
> CONNECT index.docker.io:443 HTTP/1.1
> Host: index.docker.io:443
> User-Agent: curl/7.33.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 403 Forbidden
< Server: squid/3.4.9
< Mime-Version: 1.0
< Date: Fri, 29 May 2015 17:56:22 GMT
< Content-Type: text/html
< Content-Length: 3151
< X-Squid-Error: ERR_ACCESS_DENIED 0
< Vary: Accept-Language
< Content-Language: en
< X-Cache: MISS from localhost
< Via: 1.1 localhost (squid/3.4.9)
< Connection: keep-alive
<
* Received HTTP code 403 from proxy after CONNECT
* Connection #0 to host 34363bd0dd54 left intact
curl: (56) Received HTTP code 403 from proxy after CONNECT
It looks like a proxy issue. I am running a proxy server on the host machine and accessing it by its host name in the boot2docker VM's http_proxy and https_proxy, but curl host_proxy:port works with no issues.
I was experiencing the same issue, where I would get a 403 error when trying to install lxc-docker from get.docker.com (it failed because it could not complete apt-get update). In my case, I have the following setup:
VM Provider: VirtualBox (Ubuntu 14.04 (Trusty))
Environment: Vagrant
Provisioner: chef-zero (via Vagrant)
PROXY: At first I had forgotten about this, but I am running apt-cacher-ng on my host machine (my MacBook Pro) to keep data downloads to a minimum when I'm running apt-get install on Vagrant VMs. In a nutshell, apt-cacher-ng sets up an apt mirror on my Mac for Ubuntu VMs to pull packages from.
I realized that apt-cacher-ng doesn't support SSL repositories (https), but does support normal http repositories. Since the Docker repository uses https, I had to find a workaround.
Before I fixed anything, I had the following in the /etc/apt/apt.conf.d/10mirror file in my Ubuntu VMs (localip is the IP address of my Mac, which runs the apt-cacher-ng server):
Acquire::http { Proxy "http://#{localip}:3142"; };
The above line means my Ubuntu VMs were getting packages through apt-cacher-ng, but failing when a repository used https. By adding the following line beneath it, things started to work normally:
Acquire::https { Proxy "false"; };
At this point, the contents of /etc/apt/apt.conf.d/10mirror are as follows:
Acquire::http { Proxy "http://#{localip}:3142"; };
Acquire::https { Proxy "false"; };
Now apt-get update completes, Docker installs successfully, and I'm back to normal testing. In case you are using Vagrant to set up the 10mirror file, here are the lines I have in my Vagrantfile which do the job:
oak.vm.provision "shell", inline: "echo 'Acquire::http { Proxy \"http://#{localip}:3142\"; };' > /etc/apt/apt.conf.d/10mirror"
oak.vm.provision "shell", inline: "echo 'Acquire::https { Proxy \"false\"; };' >> /etc/apt/apt.conf.d/10mirror";
