I have created a Docker host on OpenStack and launched a container with its port 22 mapped to a port on the Docker host. I followed this link.
Still, I can't SSH from the Docker host to the container. It gives this error:
$> ssh -v root@172.17.0.9 -p 32775
OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 56: Applying options for *
debug1: Connecting to 172.17.0.9 [172.17.0.9] port 32775.
debug1: connect to address 172.17.0.9 port 32775: Connection refused
ssh: connect to host 172.17.0.9 port 32775: Connection refused
The iptables rules are added by default when I use the -P option in docker run. They look like this:
$> iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
MASQUERADE tcp -- 172.17.0.3 172.17.0.3 tcp dpt:80
MASQUERADE tcp -- 172.17.0.9 172.17.0.9 tcp dpt:22
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:9090 to:172.17.0.3:80
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:32775 to:172.17.0.9:22
And the container looks like:
$> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
46111bb52063 sshns "/usr/sbin/sshd -D" 9 hours ago Up 3 hours 0.0.0.0:32776->22/tcp TestSSHcontainer
I need SSH for my own purposes; I'm aware of the docker exec option. I tried changes like PermitRootLogin yes in sshd_config and ssh_config, on both the Docker host and the container, with no success.
bash-4.2# /usr/sbin/sshd -Dd
WARNING: 'UsePAM no' is not supported in Red Hat Enterprise Linux and may cause several problems.
debug1: sshd version OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type RSA
debug1: private host key: #0 type 1 RSA
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type ECDSA
debug1: private host key: #1 type 3 ECDSA
debug1: private host key: #2 type 4 ED25519
debug1: rexec_argv[0]='/usr/sbin/sshd'
debug1: rexec_argv[1]='-Dd'
Set /proc/self/oom_score_adj from 0 to -1000
debug1: Bind to port 22 on ::.
Bind to port 22 on :: failed: Address already in use.
debug1: Bind to port 22 on 0.0.0.0.
Bind to port 22 on 0.0.0.0 failed: Address already in use.
Cannot bind any address.
bash-4.2# netstat -anp | grep 22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
bash-4.2# ps -eaf | grep ssh
root 1 0 0 19:17 ? 00:00:00 /usr/sbin/sshd -D
root 26 16 0 22:58 ? 00:00:00 grep ssh
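Note: sshd is already running as PID 1 and holds port 22, which is why the debug instance above can never bind. If the goal is just to capture verbose server-side logs, one workaround sketch is to start the debug instance on a spare, unused port (2222 is an arbitrary choice):
bash-4.2# /usr/sbin/sshd -Dd -p 2222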
Is there something I'm still missing?
You're using the IP of your container, but the host port mapping of the container. Try either ssh -v root@172.17.0.9 (straight to the container on port 22) or ssh -v root@localhost -p <port_mapping_on_host>. (Your docker ps -a shows the port mapping on the host is 32776.)
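If the mapping is in doubt, docker port prints the current host port for the container's port 22; using the container name from the question:
$> docker port TestSSHcontainer 22
This should print something like 0.0.0.0:32776, matching the docker ps output above.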
I'm hosting a Dockerized web application bound to port 8081 on my remote server.
I want to block that web application for external IPs, as I already did with port 8080, which hosts a plain Jenkins server.
Here's what I've tried:
iptables -A INPUT -d <my-server-ip> -p tcp --dport 8081 -j DROP
As I did with port 8080.
Here is the iptables -nv -L INPUT output:
Chain INPUT (policy ACCEPT 2836 packets, 590K bytes)
pkts bytes target prot opt in out source destination
495 23676 DROP tcp -- * * 0.0.0.0/0 <my-ip-addr> tcp dpt:8080
0 0 DROP tcp -- * * 0.0.0.0/0 <my-ip-addr> tcp dpt:8081
Does it possibly have something to do with the DOCKER chain in iptables?
Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination
9 568 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 <container-eth1-addr> tcp dpt:8080
Are there more specific rules I need to add?
Aren't my server's INPUT rules supposed to be applied before those listed in the DOCKER chain?
UPDATE - SOLVED
Thanks to larsks's comments I found the solution.
The goal here was to block TCP traffic on port 8081, bound to a Docker container, while still being able to use SSH tunneling as a "poor man's" VPN (so not publishing the port was not an option).
I just had to add this rule:
iptables -I DOCKER-USER 1 -d <container-eth-ip> ! -s 127.0.0.1 -p tcp --dport 8080 -j DROP
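Two details explain why this works. Traffic to a published container port traverses the FORWARD chain rather than INPUT, which is why the INPUT rule above shows 0 packets; and by the time a packet hits DOCKER-USER it has already been DNAT-ed, so the match is on the container address and its internal port (8080 here), not the published host port 8081. To verify the rule sits first in the chain and that its counters increase on blocked attempts:
iptables -nv -L DOCKER-USER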
I have a simple Python application which stores and searches its data in an Elasticsearch instance. The Python application runs in its own container, just as Elasticsearch does.
Elasticsearch exposes its default ports 9200 and 9300, and the Python application exposes port 5000. The network type used for Docker is a user-defined bridge network.
When I start both containers, the application starts up nicely; both containers see each other by container name and communicate just fine.
But from the Docker host (Linux) it's not possible to connect to the exposed port 5000, so a simple curl http://localhost:5000/ ends in a time-out. The Docker tips from this documentation, https://docs.docker.com/network/bridge/, did not solve this.
After a lot of struggling I tried something completely different: I tried connecting from outside the Docker host to the Python application. I was baffled; from anywhere in the world I could do curl http://<fqdn>:5000/ and be served by the application.
So that means the real problem is solved, because I'm able to serve the application to the outside world. (And yes, the application inside the container listens on 0.0.0.0, which is the solution for 90% of the network problems reported by others.)
But that still leaves me puzzled: what causes this strange behavior? On my development machine (Windows 10, WSL, Docker Desktop, Linux containers) I am able to connect to the service on localhost:5000, 127.0.0.1:5000, etc. On my Linux (production) machine everything works except connecting from the Docker host to the containers.
I hope someone can shed some light on this; I'm trying to understand why it's happening.
Some relevant information
Docker host:
# ifconfig -a
br-77127ce4b631: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
[snip]
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
[snip]
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 1xx.1xx.199.134 netmask 255.255.255.0 broadcast 1xx.1xx.199.255
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1e7f2f7a271b pplbase_api "flask run --host=0.…" 20 hours ago Up 19 hours 0.0.0.0:5000->5000/tcp pplbase_api_1
fdfa10b1ce99 elasticsearch:7.5.1 "/usr/local/bin/dock…" 21 hours ago Up 19 hours 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp pplbase_elastic_1
# docker network ls
NETWORK ID NAME DRIVER SCOPE
[snip]
77127ce4b631 pplbase_pplbase bridge local
# iptables -L -n
[snip]
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5000
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:9300
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:9200
ACCEPT tcp -- 0.0.0.0/0 172.18.0.3 tcp dpt:5000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0
DROP all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Docker compose file:
version: '3'
services:
  api:
    build: .
    links:
      - elastic
    ports:
      - "5000:5000"
    networks:
      - pplbase
    environment:
      - ELASTIC_HOSTS=elastic localhost
      - FLASK_APP=app.py
      - FLASK_ENV=development
      - FLASK_DEBUG=0
    tty: true
  elastic:
    image: "elasticsearch:7.5.1"
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - pplbase
    environment:
      - discovery.type=single-node
    volumes:
      - ${PPLBASE_STORE}:/usr/share/elasticsearch/data
networks:
  pplbase:
    driver: bridge
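For completeness, the stack is brought up from this file with the classic Compose CLI (assuming that is what is in use here):
docker-compose up -d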
After more digging, the riddle is getting bigger and bigger. Using netcat I can establish a connection:
Connection to 127.0.0.1 5000 port [tcp/*] succeeded!
Checking with netstat when no clients are connected, I see:
tcp6 0 0 :::5000 :::* LISTEN 27824/docker-proxy
While trying to connect from the Docker host, the connection is made:
tcp 0 1 172.20.0.1:56866 172.20.0.3:5000 SYN_SENT 27824/docker-proxy
tcp6 0 0 :::5000 :::* LISTEN 27824/docker-proxy
tcp6 0 0 ::1:58900 ::1:5000 ESTABLISHED 31642/links
tcp6 592 0 ::1:5000 ::1:58900 ESTABLISHED 27824/docker-proxy
So I'm now suspecting some network voodoo on the Docker host.
The Flask instance is running at 0.0.0.0:5000.
Have you tried: curl http://0.0.0.0:5000/?
It might be that your host configuration maps localhost to 127.0.0.1 rather than 0.0.0.0.
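A quick way to check that mapping on the host (a diagnostic sketch):
getent hosts localhost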
So, as I was working on this problem, slowly moving towards a solution, I found that my last suggestion was right after all. In the firewall (iptables) I logged all dropped packets, and yes, the packets between the Docker bridge (not docker0, but br-) and the container (veth) were being dropped by iptables. Adding a rule allowing traffic between those interfaces resolved the problem.
In my case: sudo iptables -I INPUT 3 -s 172.20.0.3 -d 172.20.0.1 -j ACCEPT
Where 172.20.0.0/16 is the bridged network generated by Docker.
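A more general variant (an assumption on my part, not part of the fix above; substitute the br-… name from your own docker network ls / ip link output) is to match on the bridge interface instead of fixed addresses, so the rule survives containers coming back with different IPs:
# Accept traffic arriving via the Docker-created bridge (interface name is illustrative)
sudo iptables -I INPUT 3 -i br-77127ce4b631 -j ACCEPT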
Since we are a small team, we want to run JetBrains' Hub, YouTrack, Upsource and TeamCity as Docker containers (all on the same machine for now). Docker is running on Photon OS 2.0 on ESXi 6.7. Nginx in another container acts as a reverse proxy, so all services are reachable under their own domain names on port 80 for now...
I got all 5 services running and can access them in a browser. However, connecting YouTrack, Upsource and TeamCity to Hub is a challenge: each of them asks for the Hub URL, tries to confirm it, and asks for permission to access Hub.
The Problem:
Hub URL: http://hub.teamtools.mydomain.com -> the container cannot access it under that address, and verification fails with a timeout.
Hub URL: http://172.18.0.3:8080 -> the container can access Hub on the internal Docker network, and then shows a pop-up which tries to display a confirmation page by redirecting to Hub on that internal IP, which of course fails in the browser. (I tried to copy the URL from the pop-up into a new window and adjust it there as a hack, but that does not work.)
Questions:
How can I link YouTrack, Upsource and TeamCity to the Hub? For the confirmation process to work, the Docker containers need to be able to reach each other via an external IP/domain name.
Does anything speak against having all four team tools on the same machine to get started, and separating them later as demand grows?
Configuration so far:
The containers have been turned into services like so:
/etc/systemd/system/docker.nginx-proxy.service
[Unit]
Description=Nginx DNS proxy
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=/usr/bin/docker network create --subnet=172.18.0.0/16 dockerNet
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull jwilder/nginx-proxy
ExecStart=/usr/bin/docker run --rm --name %n \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
--net dockerNet --ip 172.18.0.2 \
-p 80:80 \
jwilder/nginx-proxy
[Install]
WantedBy=multi-user.target
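After writing or changing a unit file like this, systemd has to reload it, and the service can then be enabled to start on boot; a minimal sketch:
systemctl daemon-reload
systemctl enable --now docker.nginx-proxy.service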
/etc/systemd/system/docker.hub.service
[Unit]
Description=JetBrains Hub Service
After=docker.nginx-proxy.service
Requires=docker.nginx-proxy.service
[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull jetbrains/hub:2018.2.9635
ExecStart=/usr/bin/docker run --rm --name %n \
-v /opt/hub/data:/opt/hub/data \
-v /opt/hub/conf:/opt/hub/conf \
-v /opt/hub/logs:/opt/hub/logs \
-v /opt/hub/backups:/opt/hub/backups \
--net dockerNet --ip 172.18.0.3 \
-p 8010:8080 \
--expose 8080 \
-e VIRTUAL_PORT=8080 \
-e VIRTUAL_HOST=hub,teamtools.mydomain.com,hub.teamtools.mydomain.com \
jetbrains/hub:2018.2.9635
[Install]
WantedBy=multi-user.target
... and so on. Since I'm still trying things out, the ports are mapped on the host and exposed so nginx-proxy can pick them up. I also added static IPs to the containers, hoping this would help with my problem.
Running those services results in:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7ba8ed89b832 jetbrains/teamcity-server "/run-services.sh" 12 hours ago Up 12 hours 0.0.0.0:8111->8111/tcp docker.teamcity.service
5c819c48cbcc jetbrains/upsource:2018.1.357 "/bin/bash /run.sh" 12 hours ago Up 12 hours 0.0.0.0:8030->8080/tcp docker.upsource.service
cf9dcd1b534c jetbrains/youtrack:2018.2.42223 "/bin/bash /run.sh" 14 hours ago Up 14 hours 0.0.0.0:8020->8080/tcp docker.youtrack.service
de86c3e1f2e2 jetbrains/hub:2018.2.9635 "/bin/bash /run.sh" 14 hours ago Up 14 hours 0.0.0.0:8010->8080/tcp docker.hub.service
9df9cb44e485 jwilder/nginx-proxy "/app/docker-entry..." 14 hours ago Up 14 hours 0.0.0.0:80->80/tcp docker.nginx-proxy.service
Additional Info:
I did consider that this could be a firewall issue, and this post seems to suggest the same thing:
https://forums.docker.com/t/access-docker-container-from-inside-of-the-container-via-external-url/33271
After some discussion with the provider of the virtual server, it turned out that conflicting firewall rules between the Plesk firewall and iptables caused this problem. After the conflict had been fixed by the provider, the container could be accessed.
Firewall on Photon with rules auto-added by Docker:
Chain INPUT (policy DROP 2 packets, 203 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
258 19408 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
6 360 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
186 13066 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
186 13066 DOCKER-ISOLATION all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
103 7224 ACCEPT all -- * br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
9 524 DOCKER all -- * br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0
74 5318 ACCEPT all -- br-83f08846fc2e !br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0
1 52 ACCEPT all -- br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
300 78566 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
8 472 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.2 tcp dpt:80
0 0 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.3 tcp dpt:8080
0 0 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.5 tcp dpt:8080
0 0 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.4 tcp dpt:8080
0 0 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.6 tcp dpt:8111
Chain DOCKER-ISOLATION (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- br-83f08846fc2e docker0 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- docker0 br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0
186 13066 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
186 13066 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
It turned out to be a firewall issue. I resolved it using https://unrouted.io/2017/08/15/docker-firewall/ as a starting point.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:FILTERS - [0:0]
:DOCKER-USER - [0:0]
-F INPUT
-F DOCKER-USER
-F FILTERS
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp --icmp-type any -j ACCEPT
-A INPUT -j FILTERS
-A DOCKER-USER -i eth0 -j FILTERS
-A FILTERS -m state --state ESTABLISHED,RELATED -j ACCEPT
# full access from my workstation
-A FILTERS -m state --state NEW -s 192.168.0.10/32 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
-A FILTERS -j REJECT --reject-with icmp-host-prohibited
COMMIT
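The file above is in iptables-restore format. The key idea from that article is to load it with --noflush, so the chains Docker manages are left alone and only the chains flushed explicitly in the file (INPUT, DOCKER-USER, FILTERS) are rebuilt; the file path below is illustrative:
# -n / --noflush: apply without wiping rules the file doesn't mention
iptables-restore -n < /etc/iptables.conf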
I'm trying to install Docker on my Intel Edison, which is running Ubilinux, but I need some help.
I compiled the source code using this guide -> https://github.com/docker/docker/blob/master/project/PACKAGERS.md
... but when I try to run the daemon with "./docker daemon" I get this error:
xotl@ubilinux:~/docker/bundles/1.9.0-dev/binary$ sudo ./docker daemon
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
INFO[0000] [graphdriver] using prior storage driver "aufs"
INFO[0000] Option DefaultDriver: bridge
INFO[0000] Option DefaultNetwork: bridge
INFO[0000] Firewalld running: false
FATA[0001] Error starting daemon: Error initializing network controller: Error creating default "bridge" network: Failed to program NAT chain: Failed to inject docker in PREROUTING chain: iptables failed: iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER: iptables: No chain/target/match by that name.
(exit status 1)
xotl@ubilinux:~/docker/bundles/1.9.0-dev/binary$
Need help. =(
EDIT:
This is the output of the sudo iptables -L -v command:
xotl@ubilinux:~/docker/bundles/1.9.0-dev/binary$ sudo iptables -L -v
[sudo] password for xotl:
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- any docker0 anywhere anywhere ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- docker0 !docker0 anywhere anywhere
0 0 ACCEPT all -- docker0 docker0 anywhere anywhere
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain DOCKER (0 references)
pkts bytes target prot opt in out source destination
xotl@ubilinux:~/docker/bundles/1.9.0-dev/binary$
This is the output of the sudo iptables -L -v -t nat command:
xotl@El-Edison:~/docker/bundles/1.9.0-dev/binary$ sudo iptables -L -v -t nat
[sudo] password for xotl:
Chain PREROUTING (policy ACCEPT 4 packets, 1312 bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 4 packets, 1312 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 65 packets, 4940 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 65 packets, 4940 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- any !wlan0 10.10.0.0/21 anywhere
Chain DOCKER (0 references)
pkts bytes target prot opt in out source destination
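One way to narrow this down (a debugging sketch using the exact command quoted in the daemon's error) is to run the failing rule by hand; if the kernel is missing the addrtype match extension, it fails with the same "No chain/target/match by that name" message:
sudo iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
# if it succeeds, remove the test rule again:
sudo iptables -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER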
Docker requires a 64-bit operating system (https://docs.docker.com/engine/installation/ubuntulinux/). The Intel Edison runs a 32-bit processor and will not be able to run Docker (at least, that's what I found after spending some time researching this).
I installed Docker Registry 2.0 on Ubuntu 14.04, following the official site: https://docs.docker.com/registry/deploying/
It will be used for testing during development, so I don't think we need a production instance:
All clients will be Docker 1.6, so just registry 2.0 is required
We don't need any kind of authentication
I install it:
docker run -d -p 5000:5000 registry:2.0
Then I prepare a new image for docker:
docker tag ubuntu:14.04 juandapc:5000/ubuntu:14.04
I've replaced localhost from the docs with juandapc, the machine's hostname.
However, when I try to access the repository from another machine (telnet juandapc 5000), I get this error:
FATA[0000] Error response from daemon: v1 ping attempt failed with error: Get https://juandapc:5000/v1/_ping: dial tcp 192.168.1.50:5000: connection refused. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry juandapc:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/juandapc:5000/ca.crt
If I pull, same error:
# docker pull juandapc:5000/ubuntu
FATA[0000] Error response from daemon: v1 ping attempt failed with error: Get https://juandapc:5000/v1/_ping: tls: oversized record received with length 20527. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry juandapc:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/juandapc:5000/ca.crt
Do I need to configure nginx? The documentation sets up nginx for production mode with registry 1.6 and 2.0, but that's not my case...
The firewall in the host (juandapc):
# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 anywhere
MASQUERADE tcp -- 172.17.0.5 172.17.0.5 tcp dpt:5000
Chain DOCKER (2 references)
target prot opt source destination
DNAT tcp -- anywhere anywhere tcp dpt:5000 to:172.17.0.5:5000
Ports on the host juandapc (ESCUCHAR is LISTEN):
# netstat -natp
Conexiones activas de Internet (servidores y establecidos)
Proto Recib Enviad Dirección local Dirección remota Estado PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* ESCUCHAR 919/sshd
tcp 0 0 192.168.1.50:22 172.30.164.14:38412 ESTABLECIDO 3924/sshd: administ
tcp6 0 0 :::22 :::* ESCUCHAR 919/sshd
tcp6 0 0 :::5000 :::* ESCUCHAR 3651/docker-proxy
Port 5000 is there, but no IPv4?
The registry in the container:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1978cdff5e8c registry:2.0 "registry cmd/regist 4 hours ago Up 4 hours 0.0.0.0:5000->5000/tcp mad_shockley
# docker exec mad_shockley ps -ax
PID TTY STAT TIME COMMAND
1 ? Ssl 0:00 registry cmd/registry/config.yml
14 ? Rs 0:00 ps -ax
From juandapc I can get into the container:
# docker exec -t -i mad_shockley /bin/bash
root@1978cdff5e8c:/go/src/github.com/docker/distribution# hostname
1978cdff5e8c
The error message was the key:
FATA[0000] Error response from daemon: v1 ping attempt failed with error: Get https://juandapc:5000/v1/_ping: dial tcp 192.168.1.50:5000: connection refused. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry juandapc:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/juandapc:5000/ca.crt
I added this line to /etc/default/docker:
DOCKER_OPTS="--insecure-registry juandapc:5000"
Restarted Docker, and it worked perfectly!
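On Ubuntu 14.04 that restart is typically (assuming the stock upstart-managed service):
sudo service docker restart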
The remote host part of your tag must contain the name of the remote registry, not the name of the local (pushing/publishing) client.
So, in this case:
juandapc:5000/ubuntu:14.04
should be
<registry-server>:5000/ubuntu:14.04
where you replace <registry-server> with whatever machine you set up the registry on. As it is, you're trying to push from juandapc to a remote repository on juandapc, and since that doesn't exist, connection refused...
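Assuming the registry really does run on a separate machine, the full tag-and-push sequence would look like this sketch, keeping the <registry-server> placeholder from above:
docker tag ubuntu:14.04 <registry-server>:5000/ubuntu:14.04
docker push <registry-server>:5000/ubuntu:14.04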
If juandapc is actually where you installed the service, on the other hand - you have DNS / name resolution problems. (Did you add an /etc/hosts entry for juandapc? Why does it resolve to 192.168.1.50, instead of the actual address of the interface that netstat shows?)