On my Windows 10 host machine with Docker 4.9.1 I want to ssh into a docker container.
I followed a bunch of tutorials just like this one:
https://phoenixnap.com/kb/how-to-ssh-into-docker-container
From within the container I can ssh into the container using its IP of 172.17.0.2, but from my host machine I cannot.
Confirmation of the IP address:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' interesting_meitner
'172.17.0.2'
Ping without response:
ping 172.17.0.2
Pinging 172.17.0.2 with 32 bytes of data:
Request timed out.
Ping statistics for 172.17.0.2:
    Packets: Sent = 1, Received = 0, Lost = 1 (100% loss),
SSH with connection timeout:
ssh root@172.17.0.2
ssh: connect to host 172.17.0.2 port 22: Connection timed out
Starting the container (obviously done before trying to connect to it):
docker run -ti with_ssh:new /bin/bash
I have also tried this with port mapping options, e.g. -p 22:666 or -p 666:22.
Starting ssh server:
/etc/init.d/ssh start
* Starting OpenBSD Secure Shell server sshd
Checking status:
/etc/init.d/ssh status
* sshd is running
Ssh from container into container:
ssh root@172.17.0.2
The authenticity of host '172.17.0.2 (172.17.0.2)' can't be established.
ECDSA key fingerprint is SHA256:471dnz1q83owB/Nu0Qnnyz/Sct4Kwry9Sa9L9pwQeZo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.
root@172.17.0.2's password:
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 5.10.16.3-microsoft-standard-WSL2 x86_64)
[...]
Again, from the Docker host I get a connection timeout. What can I do?
Your Docker container runs in a virtual network that is isolated from the host, which is why you cannot ping the container's IP from the host (while another container can, because it is attached to the same network). You can publish the port like you already did with -p 666:22, but then you have to SSH to localhost, not to the IP of the container: ssh -p 666 root@127.0.0.1.
You could also configure a correct routing from your hosts network to the virtual network and then you can reach the IP directly.
I did not reproduce your setup, but this should work. Hope it helps.
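For example, a minimal sketch of the whole flow, reusing the with_ssh:new image and port 666 from the question (adjust to your setup):
# publish container port 22 on host port 666 (-p takes host:container order)
docker run -ti -p 666:22 with_ssh:new /bin/bash
# inside the container, start the SSH server
/etc/init.d/ssh start
# from the Windows host, connect to the published port on localhost
ssh -p 666 root@127.0.0.1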
Related
I am trying to access my host system from a docker container
I have tried all of the following instead of 127.0.0.1 and localhost:
gateway.docker.internal,
docker.for.mac.host.internal,
host.docker.internal,
docker.for.mac.localhost,
but none seem to work.
If I run my docker run command with --net=host, I can indeed access localhost; however, none of my port mappings get published, so they are inaccessible from outside Docker.
I am using Docker version 20.10.5, build 55c4c88
Some more info: I am running a piece of software called Impervious (a layer on top of the Bitcoin Lightning Network). It needs to connect to my local Polar lightning node on localhost:10001. Here is the config file the tool itself uses (see the lnd section):
# Server configurations
server:
  enabled: true # enable the GRPC/HTTP/websocket server
  grpc_addr: 0.0.0.0:8881 # SET FOR DOCKER
  http_addr: 0.0.0.0:8882 # SET FOR DOCKER

# Redis DB configurations
sqlite3:
  username: admin
  password: supersecretpassword # this will get moved to environment variable or generated dynamically

###### DO NOT EDIT THE BELOW SECTION #####
# Services
service_list:
  - service_type: federate
    active: true
    custom_record_number: 100000
    additional_service_data:
  - service_type: vpn
    active: true
    custom_record_number: 200000
    additional_service_data:
  - service_type: message
    active: true
    custom_record_number: 400000
    additional_service_data:
  - service_type: socket
    active: true
    custom_record_number: 500000
    additional_service_data:
  - service_type: sign
    active: true
    custom_record_number: 800000
    additional_service_data:
###### DO NOT EDIT THE ABOVE SECTION #####

# Lightning
lightning:
  lnd_node:
    ip: host.docker.internal
    port: 10001 # GRPC port of your LND node
    pub_key: 025287d7d6b3ffcfb0a7695b1989ec9a8dcc79688797ac05f886a0a352a43959ce # get your LND pubkey with "lncli getinfo"
    tls_cert: /app/lnd/tls.cert # SET FOR DOCKER
    admin_macaroon: /app/lnd/admin.macaroon # SET FOR DOCKER

federate:
  ttl: 31560000 # Federation auto delete in seconds
  imp_id: YOUR_IMP_ID # plain text string of your IMP node name

vpn:
  price: 100 # per hour
  server_ip: http://host.docker.internal # public IP of your VPN server
  server_port: 51820 # port you want to listen on
  subnet: 10.0.0.0/24 # subnet you want to give to your clients. .1 == your server IP.
  server_pub_key: asdfasdfasdf # get this from your WG public key file
  allowed_ips: 0.0.0.0/0 # what subnets clients can reach. Default is entire world.
  binary_path: /usr/bin/wg # where you installed the "wg" command.
  dns: 8.8.8.8 # set your preferred DNS server here.

socket:
  server_ip: 1.1.1.1 # public IP of your socket server
I run Impervious using the following docker command:
docker run -p8881:8881 -p8882:8882 -v /Users/xxx/dev/btc/impervious/config/alice-config-docker.yml:/app/config/config.yml -v /Users/xxx/.polar/networks/1/volumes/lnd/alice/tls.cert:/app/lnd/tls.cert -v /Users/xxx/.polar/networks/1/volumes/lnd/alice/data/chain/bitcoin/regtest/admin.macaroon:/app/lnd/admin.macaroon -it impant/imp-releases:v0.1.4
but it just hangs when it tries to connect to the node at host.docker.internal
Have you tried docker-mac-net-connect?
The problem is related to macOS. Unlike Docker on Linux, Docker for macOS does not expose container networks directly on the macOS host.
You can use host.docker.internal, which resolves to the localhost of the macOS host.
https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host
The host has a changing IP address (or none if you have no network
access). We recommend that you connect to the special DNS name
host.docker.internal which resolves to the internal IP address used by
the host. This is for development purpose and does not work in a
production environment outside of Docker Desktop.
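If you want to check that the name resolves at all from inside a container, a quick sketch using a throwaway alpine container (the image choice is just an example):
# resolve the special DNS name from inside a container
docker run --rm alpine nslookup host.docker.internal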
On a Mac running the desktop version of Docker:
Docker isn't running directly on the host machine; it uses a kind of virtual machine that includes a Linux kernel. The network of this virtual machine is different from the host machine's network, so connecting from your Mac host to a running Docker container goes through a kind of VPN connection:
When you run your docker command with the --net=host switch, you connect the container to the virtual machine's network instead of your host machine's network, as it would on Linux.
So trying to connect to 127.0.0.1 or localhost does not reach the running container.
The solution to this issue is to publish the needed ports from the running container:
docker run -p 8080:8080
If you need to publish all exposed ports from your container, you can use the -P switch.
For the opposite direction, use the host.docker.internal URL from inside the container.
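Putting both directions together, a rough sketch (nginx and the port numbers are placeholders, not taken from the question):
# host -> container: publish container port 80 on port 8080 of the Mac
docker run -d --name web -p 8080:80 nginx
curl http://localhost:8080            # run this on the Mac
# container -> host: use the special DNS name instead of 127.0.0.1
# (assumes some service is listening on port 8000 of the Mac)
docker run --rm alpine wget -qO- http://host.docker.internal:8000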
More documentation about docker desktop for Mac networking
Suddenly when I deployed some new containers with docker-compose the internal hostname resolution didn't work.
When I tried to ping one container from the other using the service name from the docker-compose.yaml file I got ping: bad address 'myhostname'
I checked that the /etc/resolv.conf was correct and it was using 127.0.0.11
When I tried to manually resolve my hostname with either nslookup myhostname. or nslookup myhostname.docker.internal I got an error:
nslookup: write to '127.0.0.11': Connection refused
;; connection timed out; no servers could be reached
Okay, so the issue is that the Docker DNS server has stopped working. All already-started containers still function, but any newly started ones have this issue.
I am running Docker version 19.03.6-ce, build 369ce74
I could of course just restart docker to see if it solves it, but I am also keen on understanding why this issue happened and how to avoid it in the future.
I have a lot of containers started on the server and a total of 25 docker networks currently.
Any ideas on what can be done to troubleshoot? Any known issues that could explain this?
The docker-compose.yaml file I use has worked before and no changes have been made to it.
Edit: No DNS names at all can be resolved. 127.0.0.11 refuses all connections. I can ping any external IP addresses, as well as the IP of other containers on the same docker network. It is only the 127.0.0.11 DNS server that is not working. 127.0.0.11 still replies to ping from within the container.
Make sure you're using a custom bridge network, NOT the default one. As per the Docker docs (https://docs.docker.com/network/bridge/), the default bridge network does not allow automatic DNS resolution:
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
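A minimal way to verify that behaviour (the image names are just examples):
# default bridge: containers cannot resolve each other by name
docker run -d --name web1 nginx
docker run --rm alpine ping -c 1 web1                    # fails with "bad address 'web1'"
# user-defined bridge: name resolution works
docker network create mynet
docker run -d --name web2 --network mynet nginx
docker run --rm --network mynet alpine ping -c 1 web2    # succeeds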
I have the same problem. I am using the pihole/pihole docker container as the sole dns server on my network. Docker containers on the same host as the pihole server could not resolve domain names.
I resolved the issue based on "hmario"'s response to this forum post.
In brief, modify the pihole docker-compose.yml from:
---
version: '3.7'
services:
  unbound:
    image: mvance/unbound-rpi:1.13.0
    hostname: unbound
    restart: unless-stopped
    ports:
      - 53:53/udp
      - 53:53/tcp
    volumes: [...]
to
---
version: '3.7'
services:
  unbound:
    image: mvance/unbound-rpi:1.13.0
    hostname: unbound
    restart: unless-stopped
    ports:
      - 192.168.1.30:53:53/udp
      - 192.168.1.30:53:53/tcp
    volumes: [...]
where 192.168.1.30 is the IP address of the Docker host.
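To confirm the DNS server is still reachable from other containers after this change, you can query the host IP directly (a sketch; 192.168.1.30 again stands in for your Docker host's IP):
# ask the unbound/pihole instance bound to the host IP directly
docker run --rm alpine nslookup example.com 192.168.1.30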
I'm having exactly the same problem. Following the comment here, I could reproduce the setup without docker-compose, using only docker:
docker network create alpine_net
docker run -it --network alpine_net alpine /bin/sh -c "cat /etc/resolv.conf; ping -c 4 www.google.com"
After stopping docker (systemctl stop docker) and starting the daemon with debug output, it gives:
> dockerd --debug
[...]
[resolver] read from DNS server failed, read udp 172.19.0.2:40868->192.168.177.1:53: i/o timeout
[...]
where 192.168.177.1 is the local network IP of the host that Docker runs on, where Pi-hole is also running as the DNS server and working for all of my other systems.
I played around with fixing the iptables configuration, but even switching it off completely and opening everything did not help.
The solution I found, without fully understanding the root cause, was to move the DNS to another server. I installed dnsmasq on a second system with IP 192.168.177.2 that does nothing other than forward all DNS queries back to my Pi-hole server on 192.168.177.1.
After starting Docker on 192.168.177.1 again with DNS configured to use 192.168.177.2, everything was working again.
With this in one terminal:
dockerd --debug --dns 192.168.177.2
and the command from above in another, it worked again.
> docker run -it --network alpine_net alpine /bin/sh -c "cat /etc/resolv.conf; ping -c 4 www.google.com"
search mydomain.local
nameserver 127.0.0.11
options ndots:0
PING www.google.com (172.217.23.4): 56 data bytes
64 bytes from 172.217.23.4: seq=0 ttl=118 time=8.201 ms
--- www.google.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 8.201/8.201/8.201 ms
So moving the DNS server to another host and adding "dns": ["192.168.177.2"] to my /etc/docker/daemon.json fixed it for me.
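For reference, a sketch of what that change could look like on the Docker host (this overwrites any existing daemon.json, so merge by hand if you already have one):
# point the Docker embedded DNS resolver at the dnsmasq forwarder on 192.168.177.2
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "dns": ["192.168.177.2"]
}
EOF
sudo systemctl restart docker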
Maybe someone else can help me explain the root cause of the problem with running the DNS server on the same host as Docker.
First, make sure your container is connected to a custom bridge network. I suppose that by default in a custom network, DNS requests inside the container are sent to 127.0.0.11#53 and forwarded to the DNS server of the host machine.
Second, check iptables -L to see if there are Docker-related rules. If there are not, that is probably because iptables was restarted/reset. You'll need to restart the Docker daemon to re-add the rules and make DNS request forwarding work.
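A quick sketch of those two checks:
# look for the DOCKER chains that handle container traffic and DNS forwarding
sudo iptables -L -n | grep -i docker
# if they are missing (e.g. after a firewall reset), restart the daemon to recreate them
sudo systemctl restart docker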
I had the same problem; it was caused by the host machine's hostname. I checked the hostnamectl result and it was OK, but the problem was solved by a simple reboot. Before the reboot, the result of cat /etc/hosts was like this:
# The following lines are desirable for IPv4 capable hosts
127.0.0.1 localhost HostnameSetupByISP
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4
# The following lines are desirable for IPv6 capable hosts
::1 localhost HostnameSetupByISP
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
and after the reboot, I got this result:
# The following lines are desirable for IPv4 capable hosts
127.0.0.1 hostnameIHaveSetuped HostnameSetupByISP
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4
# The following lines are desirable for IPv6 capable hosts
::1 hostnameIHaveSetuped HostnameSetupByISP
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
I'm having problems getting my SSH tunnel working for my container in a Docker Swarm cluster.
ssh connection on my local machine:
ssh -L 7180:test.XXX:7180 user@XXX
In my Dockerfile on the remote machine:
EXPOSE 7180
Container start:
docker -H test:2379 --tlsverify run -d -p 7180:7180 --net=my-net
I tried to connect in Firefox via:
localhost:7180
Unfortunately the connection gets refused on the remote machine:
channel 3: open failed: connect failed: Connection refused
"docker container ls" prints following for the ports:
xxx:7180->7180/tcp
Inside my container "netstat -ntlp | grep LISTEN" prints:
tcp 0 0 0.0.0.0:7180 0.0.0.0:* LISTEN -
I'm new to this, but from everything I've read so far, this should actually work. I'm using "--net=my-net" because I want to set up my own network later. I had the same issue with "--net=host". What am I doing wrong?
The ssh command should be:
ssh -L 7180:127.0.0.1:7180 user@XXX
And then from your browser, you would go to:
http://127.0.0.1:7180
I've avoided using "localhost" because some machines map this to IPv6 even if you don't have IPv6 configured.
When testing this tunnel, make sure your application is listening on the remote server by doing an ssh to that server and running a curl command directly on the server against 127.0.0.1:7180. If it doesn't work there, you would repeat your debugging with netstat inside the container and verify the port is published in the docker ps output.
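Roughly, those checks look like this (user@XXX, port 7180, and the container name are placeholders taken from the question):
# on the remote server, verify the service answers locally
ssh user@XXX
curl -v http://127.0.0.1:7180
# verify the port is actually published
docker ps --format '{{.Names}}\t{{.Ports}}'
# and check what is listening inside the container
docker exec <container> netstat -ntlp | grep LISTEN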
I got it working with
ssh -D localhost:7180 -f -C -q -N user@XXX
and using
xxx:7180
in my browser (instead of localhost).
localhost and --net=host did not work for me with ssh -L.
I'm running my docker container with:
docker run -d sequenceiq/hadoop-docker:2.6.0
The Dockerfile is here.
After it is started on my Mac, I'm running docker ps and getting:
6bfa4f2fd3b5 sequenceiq/hadoop-docker:2.6.0 "/etc/bootstrap.sh -d" 4 minutes ago Up 4 minutes 22/tcp, 8030-8033/tcp, 8040/tcp, 8042/tcp, 8088/tcp, 49707/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp kind_hawking
Then I'm running
ssh -v localhost -p 22
and I'm getting
OpenSSH_7.4p1, LibreSSL 2.5.0
debug1: Reading configuration data /Users/User/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Connecting to localhost [::1] port 22.
debug1: connect to address ::1 port 22: Connection refused
debug1: Connecting to localhost [127.0.0.1] port 22.
debug1: connect to address 127.0.0.1 port 22: Connection refused
ssh: connect to host localhost port 22: Connection refused
Assumptions: I think this is not a duplicate of the other CentOS sshd questions, as this is a different CentOS version. (For those that are similar: I am doing what the potentially similar question asks, and it is not working.)
My question is: How to get my docker centos sshd passwordless server running?
Edit:
@Andrew has been super helpful in helping me refine my question, so here goes.
Here is my updated Dockerfile
FROM sequenceiq/hadoop-docker:2.6.0
CMD ["/etc/bootstrap.sh", "-d"]
# Hdfs ports
EXPOSE 50010 50020 50070 50075 50090 8020 9000
# Mapred ports
EXPOSE 10020 19888
#Yarn ports
EXPOSE 8030 8031 8032 8033 8040 8042 8088
#Other ports
EXPOSE 49707 2122
EXPOSE 9000
EXPOSE 2022
Now I'm building this with:
sudo docker build -t my-hdfs .
Then I'm running this with:
sudo docker run -d -P my-hdfs
Then I'm checking the processes with:
sudo docker ps
with a result like:
d9c9855cfaf0 my-hdfs "/etc/bootstrap.sh -d" 2 minutes ago
Up 2 minutes 0.0.0.0:32801->22/tcp, 0.0.0.0:32800->2022/tcp,
0.0.0.0:32799->2122/tcp, 0.0.0.0:32798->8020/tcp, 0.0.0.0:32797->8030/tcp,
0.0.0.0:32796->8031/tcp, 0.0.0.0:32795->8032/tcp, 0.0.0.0:32794->8033/tcp,
0.0.0.0:32793->8040/tcp, 0.0.0.0:32792->8042/tcp, 0.0.0.0:32791->8088/tcp,
0.0.0.0:32790->9000/tcp, 0.0.0.0:32789->10020/tcp, 0.0.0.0:32788->19888/tcp,
0.0.0.0:32787->49707/tcp, 0.0.0.0:32786->50010/tcp, 0.0.0.0:32785->50020/tcp,
0.0.0.0:32784->50070/tcp, 0.0.0.0:32783->50075/tcp, 0.0.0.0:32782->50090/tcp
agitated_curran
Then to get the IP address I'm running:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' d9c9855cfaf0
with a result like
172.17.0.3
Then I'm testing it with:
ssh -v 172.17.0.3 -p 32800
This gives a result:
OpenSSH_7.4p1, LibreSSL 2.5.0
debug1: Reading configuration data /Users/User/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Connecting to 172.17.0.3 [172.17.0.3] port 32800.
debug1: connect to address 172.17.0.3 port 32800: Operation timed out
ssh: connect to host 172.17.0.3 port 32800: Operation timed out
My question is: How to get my docker centos sshd passwordless server running?
You are trying to connect to your local SSH server instead of the container. To connect to any port inside a container, you need to expose and publish it, and possibly map it to another port, especially when you want to run multiple similar containers on different ports on the same host. See Expose.
So in your case your command should be
docker run -p 2222:22 -d sequenceiq/hadoop-docker:2.6.0
And the ssh command:
ssh -v localhost -p 2222
Exposing a Docker port (as seen in your linked Dockerfile) makes it accessible to other Docker containers, but not to your host machine. To understand the difference between exposed and published ports, see this question.
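One quick way to see the difference on a running container is docker port, which only lists published mappings, never merely exposed ports (a small sketch):
# prints nothing for exposed-only ports; lists host mappings for published ones
docker port <container-id>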
However, when I tried to connect to port 2222, it didn't work. Looking at the Dockerfile of the 2.6.0 version, I found that it has a bug: sshd is configured to listen on port 2122, but the exposed port is 22, as can be seen here. Also, when I tried to build the latest Dockerfile you provided, it failed at step 31, so you might want to investigate further.
Edit after question update:
Look at the docker ps output you provided and at the Dockerfile. sshd is configured to listen on port 2122 (assuming you haven't changed that, since we don't have your complete Dockerfile), and in the output we see:
0.0.0.0:32799->2122/tcp
0.0.0.0:32800->2022/tcp
You should connect with ssh -v localhost -p 32799 instead of 32800, since nothing is listening on port 2022 inside the container.
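To double-check which port sshd actually listens on inside the container before picking a mapping, something like this should work (agitated_curran is the container name from the docker ps output above; the expected value is based on this answer, not verified here):
# show the configured sshd port inside the running container
docker exec agitated_curran grep -i '^Port' /etc/ssh/sshd_config
# expected: Port 2122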
My setup is the following:
Host: Win10
Guest: Ubuntu 15.10 (clean install, only docker and nodejs are added)
Base image: https://hub.docker.com/r/microsoft/aspnet/ 1.0.0-beta8-coreclr
Inside the guest I have installed Docker and created an image (I added a sample web app generated with Yeoman on top of the base image above). When I run the image in a container, I can ping the container IP successfully from the Linux guest using the container IP (e.g. 172.17.0.2).
$sudo docker run -d -p 80:5000 --name web myapp
$sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' "web"
172.17.0.2
$ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.060 ms
1 packets transmitted, 1 received, 0% packet loss, time 999ms
$curl 172.17.0.2:80
curl: (7) Failed to connect to 172.17.0.2 port 80: Connection refused
I can also connect to the container and execute commands like ping; however, from the Linux machine (the guest in VirtualBox, the host for Docker) I cannot access the web app that is hosted inside the container, as seen above. I tried several approaches, like mapping to the host IP addresses etc., but none of them worked. Does anyone have ideas on where to start? Could the issue be that Docker is installed inside a VirtualBox machine?
Thank you in advance.
Edit: Here are the logs from the container:
Could not open /etc/lsb_release. OS version will default to the empty string.
Hosting environment: Production
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Your command tells Docker to essentially proxy requests from port 80 of the Linux guest to port 5000 of the container. So the curl command you tried doesn't work because you're trying on port 80 on the container, while the container itself has a service listening on port 5000.
To connect to the container directly, you would use (on the Linux guest):
curl 172.17.0.2:5000
To access via the published port on the Linux guest (from your host):
curl (Linux guest IP)
Or (from the Linux guest):
curl localhost
Edit: This will also prove to be problematic:
Now listening on: http://localhost:5000
You'll want your app inside the container to bind to all interfaces (0.0.0.0) so it listens on the container's assigned IP. With localhost it won't be accessible.
You might find this example useful:
https://github.com/aspnet/Home/blob/dev/samples/1.0.0-beta8/HelloWeb/project.json
This line specifies that the app binds to all interfaces (using "*") on port 5004:
"kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://*:5004"
You'll need similar configuration.
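Once the app binds to all interfaces, the original publish flow should work end to end. A rough sketch, assuming the app now listens on 0.0.0.0:5004 as in the linked sample (myapp and the container IP are taken from the question):
# publish the container's port 5004 on port 80 of the Linux guest
sudo docker run -d -p 80:5004 --name web myapp
# from the Linux guest
curl http://localhost
# or directly against the container IP
curl http://172.17.0.2:5004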