For some reason, whenever I suspend my VM and resume it, I can no longer connect to the Docker container that is hosted within the VM. Usually I pass -p 3000:3000 when starting the container so that I can access the Rails instance inside it, and this works fine, but when I suspend the VM and resume it later, I can no longer connect to port 3000 even though it's listening inside the container.
This results in me having to reboot the VM as service docker restart does not change anything.
Is there something else I should be looking at to resolve this issue? I've been suspending/resuming my VM with Docker in it for quite a while and have never run into this issue before.
EDIT
To reproduce this issue, I simply resumed my VM and tried connecting to localhost port 3000 from the VM itself (not from within the container) and it cannot connect. However, the output below shows that port 3000 is listening:
[root:kali:~/app]# curl http://localhost:3000
curl: (56) Recv failure: Connection reset by peer
[root:kali:~/app]# netstat -antp | grep -i listen
tcp 0 0 127.0.0.1:43050 0.0.0.0:* LISTEN 84770/autossh
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 20478/sshd
tcp6 0 0 :::3000 :::* LISTEN 32731/docker-proxy
tcp6 0 0 :::3001 :::* LISTEN 32715/docker-proxy
tcp6 0 0 :::111 :::* LISTEN 1/systemd
tcp6 0 0 :::22 :::* LISTEN 20478/sshd
From within docker, I can see that rails is working:
[root:77f444beafff:~/app]# rails s --binding 0.0.0.0
=> Booting Puma
=> Rails 5.2.3 application starting in development
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 3.12.1 (ruby 2.5.1-p57), codename: Llamas in Pajamas
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop
And here's the netstat from within docker:
[root:77f444beafff:~/app]# netstat -antp | grep -i listen
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 478/redis-server *:
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 765/puma 3.12.1 (tc
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN -
tcp6 0 0 :::6379
If I curl from within the container, I can see it hits the Rails app just fine:
[root:77f444beafff:~/app]# curl http://localhost:3000/ -I
HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Download-Options: noopen
X-Permitted-Cross-Domain-Policies: none
Referrer-Policy: strict-origin-when-cross-origin
Content-Type: text/html; charset=utf-8
ETag: W/"5078d30a6c1a5f6fc5cb7f9a82cd89f5"
Cache-Control: max-age=0, private, must-revalidate
Set-Cookie: _vspm_session=Cace%2FN0zB%2F6QJOiietbuHxTHOMZUMuRmEukYqQTNaHQ91hskaN%2BPJzev0KdGUAAtYx9a35Mqdkr8eRkPdH4qOl6vOaCcPU0gy8s7IMfkb9VhRGPPbecepmI%2F9leA2dnD694P8ctXSBklOCnjhN0%3D--SglWrWvx3BFEAI3z--IkylACdXbR6eF27Hgn0Cgg%3D%3D; path=/; HttpOnly
X-Request-Id: 29aa7251-f29a-4309-adec-6af479e7bd9b
X-Runtime: 12.241723
I'm having exactly the same issue with my VMware virtual machine (VMware running on Windows).
The only workaround that is working for me is:
docker stop $(docker ps -aq) && sudo systemctl restart NetworkManager docker
If I had to guess, I would say it may be related to the firewall rules Docker sets up on start; maybe when you resume the virtual machine a change in the network configuration breaks those rules.
Similar issue: https://github.com/docker/for-mac/issues/1990 (it doesn't seem specific to Docker for Mac).
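One way to test that guess after a resume (assuming iptables is what Docker uses for port publishing on your VM) is to check whether the DNAT rule for the published port is still present:
sudo iptables -t nat -L DOCKER -n --line-numbers
A healthy setup shows a DNAT rule forwarding dpt:3000 to the container's IP; if that rule is missing or stale after the resume, that points at the firewall/network-manager interaction.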
I was able to solve this issue with the hint given by lannox in the comment. It's necessary to mark the network interfaces of the docker containers as unmanaged by NetworkManager.
To do that, create a new file /etc/NetworkManager/conf.d/10-unmanage-docker-interfaces.conf with the following content:
[keyfile]
unmanaged-devices=interface-name:docker*;interface-name:veth*;interface-name:br-*;interface-name:vmnet*;interface-name:vboxnet*
This configures NetworkManager to ignore all interfaces whose names match docker*, veth*, br-*, vmnet*, or vboxnet*.
Then restart NetworkManager with sudo systemctl restart NetworkManager.
Next time the host suspends and resumes, the docker containers keep their network connectivity.
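To confirm the change took effect (assuming nmcli is available), the Docker-related interfaces should now be reported as unmanaged:
nmcli device status | grep -E 'docker|veth|br-'
The docker0, veth* and br-* entries should show the STATE column as unmanaged.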
Several questions here that might help you solve this (a combined check is sketched after the list):
Is your docker container still running? Run docker ps and find your container
Since the -p 3000:3000 option is set I guess the port is published, but you might want to check that you really ran your container with this option this time
Is your app really listening? Run lsof -nP -i | grep -i listen and check that your app is listening on port 3000
Connect to your container with docker exec -it <your_container> bash and try running lsof -nP -i | grep -i listen to see whether this is a Docker issue or an issue with your app
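As mentioned above, here is a rough sketch combining those checks (the container name is a placeholder, and lsof is assumed to be installed inside the container):
docker ps | grep 3000                                            # the PORTS column should show 0.0.0.0:3000->3000/tcp
docker exec -it <your_container> lsof -nP -i | grep -i listen    # what is actually listening inside the container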
It seems that when you run netstat on your VM you get the following line:
tcp6 0 0 :::3000 :::* LISTEN 32731/docker-proxy
On Docker you get :
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 765/puma 3.12.1 (tc
There are two differences here:
:::3000 vs 0.0.0.0:3000, the first means it is listening on IPv6 and the second on IPv4 (found the info on this question).
tcp6 vs tcp, again IPv6 vs IPv4.
According to this other question, it seems you have to run rails with the -b :: option.
The -b option binds Rails to the specified IP; by default it is localhost. You can run the server as a daemon by passing the -d option.
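For example, assuming the default port and running from inside the container:
rails server -b :: -p 3000
Binding to :: listens on IPv6 and, on a default Linux configuration, on IPv4-mapped addresses as well, whereas -b 0.0.0.0 covers IPv4 only.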
Please run
sudo docker ps
If you do not see your container, run
sudo docker ps -a
Is your container stopped?
If it is, start it with
sudo docker start CONTAINER_ID
I created a Linux application in Go consisting of a service myappd and a client myapp. I implemented a TCP-based IPC on port 12345 between the service and the client so that the client can communicate with the service. If I run both on one machine, everything works fine.
Now I want to containerize the service. Therefore I created a Dockerfile
FROM debian:buster
COPY ./src/* /home/
RUN chmod 777 /home/myappd
ENTRYPOINT /bin/bash /home/myapp_entrypoint.sh
with the entrypoint script
echo create log locations
mkdir /var/log/myappd
chmod 744 /var/log/myappd
touch /var/log/myappd/myappd.log
chmod 744 /var/log/myappd/myappd.log
cd /home/
./myappd
I build the image with
docker build -t myappd:latest .
Running the container with
docker run --rm --name myappd -itd -p 127.0.0.1:12345:12345 myappd:latest
Afterwards I check whether the service is running and whether the port is published on localhost:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4caf95e1bd72 myappd:latest "/bin/sh -c '/bin/ba…" 6 minutes ago Up 2 seconds 127.0.0.1:12345->12345/tcp myappd
$ sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/init
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 520/sshd
tcp 0 0 127.0.0.1:12345 0.0.0.0:* LISTEN 10673/docker-proxy
tcp6 0 0 :::111 :::* LISTEN 1/init
tcp6 0 0 :::22 :::* LISTEN 520/sshd
This looks good to me. But when I start the client on localhost, the dial on port 12345 succeeds, yet every request on port 12345 is answered instantly with EOF and no content. If I run the service locally, the IPC on port 12345 works as expected:
$ sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/init
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 520/sshd
tcp 0 0 127.0.0.1:12345 0.0.0.0:* LISTEN 13184/./myappd
tcp6 0 0 :::111 :::* LISTEN 1/init
tcp6 0 0 :::22 :::* LISTEN 520/sshd
Does anyone have an idea why the IPC works when I run the service locally, but not when I run the service in a container?
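A quick check that would narrow this down is to look at which address myappd binds inside the container (netstat is assumed to be installed in the image; it is not part of debian:buster by default):
docker exec -it myappd netstat -tlnp
If it shows 127.0.0.1:12345 rather than 0.0.0.0:12345, that would explain the behaviour: the docker-proxy accepts the dial on the host, but the forwarded connection into the container is refused and immediately closed, which the client sees as EOF.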
I'm trying to use jupyter-notebook on a Docker image (https://hub.docker.com/r/tensorflow/tensorflow), but I'm having a problem where using the port-forwarded address in the browser just hangs, with the (Chrome) page stuck saying Waiting for 127.0.0... until it times out.
The docker command being run looks like
➜ ~ docker run -it -p 8888:8888 --rm tensorflow/tensorflow:latest-devel-gpu-py3 jupyter-notebook --ip 0.0.0.0 --no-browser --allow-root
[I 04:26:44.023 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
[I 04:26:44.042 NotebookApp] Serving notebooks from local directory: /root
[I 04:26:44.043 NotebookApp] The Jupyter Notebook is running at:
[I 04:26:44.043 NotebookApp] http://(f1afd4b163fd or 127.0.0.1):8888/?token=5a838cefbd58822ce3de5a9ab00ed724bc6f9e048017125a
[I 04:26:44.043 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 04:26:44.043 NotebookApp]
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://(f1afd4b163fd or 127.0.0.1):8888/?token=5a838cefbd58822ce3de5a9ab00ed724bc6f9e048017125a
(Note: I have also tried docker run -it -p 8888:8888 --rm tensorflow/tensorflow:latest-devel-gpu-py3 /run_jupyter.sh --allow-root, with similarly hanging results.)
Checking docker ps shows
➜ ~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2114609d6d9d tensorflow/tensorflow:latest-devel-gpu-py3 "jupyter-notebook --…" About a minute ago Up About a minute 6006/tcp, 0.0.0.0:8888->8888/tcp mystifying_liskov
Checking for a response via curl shows
➜ ~ curl -v http://127.0.0.1:8888/?token=5a838cefbd58822ce3de5a9ab00ed724bc6f9e048017125a
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8888 (#0)
> GET /?token=5a838cefbd58822ce3de5a9ab00ed724bc6f9e048017125a HTTP/1.1
> Host: 127.0.0.1:8888
> User-Agent: curl/7.47.0
> Accept: */*
>
<at this point it just hangs until I Ctrl+C out>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
and examining the ports shows
➜ ~ sudo netstat -plnt
[sudo] password for me:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1512/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 2485/cupsd
tcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN 2284/smbd
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 1502/mysqld
tcp 0 0 0.0.0.0:139 0.0.0.0:* LISTEN 2284/smbd
tcp 0 0 127.0.0.1:5037 0.0.0.0:* LISTEN 8558/adb
tcp 0 0 127.0.0.1:6000 0.0.0.0:* LISTEN 1006/unicorn.rb --h
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 1954/monitorix-http
tcp6 0 0 :::22 :::* LISTEN 1512/sshd
tcp6 0 0 ::1:631 :::* LISTEN 2485/cupsd
tcp6 0 0 :::445 :::* LISTEN 2284/smbd
tcp6 0 0 :::8888 :::* LISTEN 32491/docker-proxy
tcp6 0 0 :::139 :::* LISTEN 2284/smbd
tcp6 0 0 :::80 :::* LISTEN 1846/apache2
Other posts I've seen seem to be from people simply not forwarding the port that Jupyter expects to use, but that does not seem to be the problem here. This occurs regardless of which Docker image is used (so it is not just that particular image). If anyone has any ideas about what it could be, or any debugging advice, it would be appreciated.
Resolved the problem.
I restarted the host machine (note: this was the first time restarting since installing Docker, but it still did not work until...)
I ran sudo /etc/init.d/docker restart (did this purely based on a hunch when skimming the troubleshooting docs here: https://docs.docker.com/toolbox/faqs/troubleshoot/#configure-http-proxy-settings-on-docker-machines).
Then the docker run ... statement from the posted question worked; I can now reach the forwarded port on the host machine and curl the address. An Ubuntu notification popped up saying "Wired connection established".
This is a bit of a lame answer, but it's what worked for me. Oddly, it seems I have to rerun the sudo /etc/init.d/docker restart statement sometimes to get Docker containers to open. I will try to figure out a bit more about what exactly was going on here, but if anyone with more experience thinks they know what may have been happening, please do let us know.
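For what it's worth, on distributions managed by systemd the equivalent restart would presumably be:
sudo systemctl restart docker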
I'm trying to configure the Docker daemon so that I can connect to it from inside the Docker containers I start.
So I changed /etc/docker/daemon.json to
{
"hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
so that I can connect to it through the docker bridge. However, when I restart Docker I get
netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address
State PID/Program name
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 3728/mysqld
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 24253/redis-server
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3756/nginx
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3634/sshd
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 3756/nginx
tcp6 0 0 :::8010 :::* LISTEN 4230/apache2
tcp6 0 0 :::9200 :::* LISTEN 26824/java
tcp6 0 0 :::9300 :::* LISTEN 26824/java
tcp6 0 0 :::22 :::* LISTEN 3634/sshd
tcp6 0 0 :::2375 :::* LISTEN 1955/dockerd
So at first I thought the issue was that it was listening on IPv6, not IPv4, but according to
Make docker use IPv4 for port binding
it should all still work, yet it doesn't. When I try
telnet 172.17.0.1(docker host) 2375
it fails to connect while
telnet 172.17.0.1(docker host) 80
works. How can I connect to Docker running on the host machine? I'm running Ubuntu 14.04.5, Docker version 17.06.2-ce.
You can start your containers with the host's Docker socket mounted into them.
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
With this setup, Docker clients inside the containers will use the Docker daemon from the host. Your containers will be able to build, run, push etc. using the daemon running on the host. Please note that with this setup everything happens on the host, so any new containers you start this way will be "sibling" containers.
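For example (the docker:cli image here is just an assumption for illustration; any image that ships a Docker client works), something like this should list the host's containers from inside a container:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps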
EDIT
If you are using the bridge network, you can connect to any service running on the host machine using the host's IP address.
For example, I have mysqld running on my host with IP 10.0.0.1 and from a container I can do
mysql -u user -p -h 10.0.0.1
The trick is to find out the host IP address from containers.
In Docker for Mac (I am running version 17.07.0) it is as simple as connecting to the special host "docker.for.mac.localhost".
Another option is to add an alias IP to your loopback interface
sudo ifconfig lo0 alias 192.168.1.1
And then when running containers add a host for this alias IP
docker run --rm -ti --add-host host-machine:192.168.1.1 mysql:5.7 bash
With this setup, inside the container you should be able to do
mysql -u user -p -h host-machine
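On Linux with the default bridge network, another way to find the host from inside a container is the bridge gateway address (usually 172.17.0.1), which can be read from the container's routing table, e.g.:
docker run --rm busybox ip route | awk '/default/ { print $3 }'
Services on the host are reachable at that address as long as they listen on more than just 127.0.0.1 and no firewall rule blocks the bridge.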
This answer may be a bit late, but better late than never, as we can never tell who may be experiencing a similar problem. I fixed it by disabling the unnecessary ufw rule that was blocking the internal communication.
Example:
sudo ufw allow from <IP address or range> to any port [desired port]
sudo ufw allow from 172.16.0.0/12 to any port 3421
In my case, I disabled the UFW service entirely using the command below.
sudo ufw disable
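If disabling UFW entirely feels too heavy-handed, listing the active rules first can help identify the one blocking the Docker bridge traffic:
sudo ufw status numbered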
I have started a docker container using the command
sudo docker run -it -P -d plcdimage
The image is built using a Dockerfile which has the instruction EXPOSE 8080. The container runs a JBoss server with an application deployed on it. The port mappings are:
Command: sudo docker port be1837e849dc
Output: 8080/tcp -> 0.0.0.0:32771
When I try to access the web application running on JBoss in the container from the mapped host port using the URL:
http://IPAddressOfHost:32771/
I get a connection refused error. The following is the result of the command "netstat -tulpn":
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::9999 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::32771 :::* LISTEN -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
I tried doing telnet hostip 32771 and it also results in connection refused.
Docker version 1.12.1
build 23cf638
What could be the possible reason for this?
Thanks in advance
I found that the JBoss server running inside the container was not listening on 0.0.0.0. One way to fix this is to pass -b 0.0.0.0 when starting the standalone server:
/bin/standalone.sh -b 0.0.0.0
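To verify the fix, you can check the listener inside the container and re-test the mapped port from the host (netstat is assumed to be available in the image):
sudo docker exec be1837e849dc netstat -tlnp | grep 8080   # should now show 0.0.0.0:8080 rather than 127.0.0.1:8080
curl -I http://IPAddressOfHost:32771/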
I made two Docker images (nginx, yeoman) and mapped the ports as below.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a71ee900cc0 webserver:0.1 "nginx" About an hour ago Up About an hour 443/tcp, 0.0.0.0:8080->80/tcp webserver
af57b93ca326 silarsis/yeoman "/bin/bash" About an hour ago Up About an hour 0.0.0.0:9000->9000/tcp yeoman
Then I went into the yeoman container to start the grunt server.
yeoman_docker$ yo  # ...scaffolding stuff
yeoman_docker$ ls
Gruntfile.js README.md app bower.json bower_components node_modules package.json test
yeoman_docker$ grunt serve
Running "serve" task
Done, without errors.
Execution Time (2016-09-23 07:20:28 UTC-0)
loading tasks 5ms ▇▇▇▇▇▇▇ 19%
loading grunt-contrib-copy 13ms ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 50%
copy:styles 8ms ▇▇▇▇▇▇▇▇▇▇▇ 31%
Total 26ms
Running "autoprefixer:server" (autoprefixer) task Autoprefixer's process() method is deprecated and will removed in next major release. Use postcss([autoprefixer]).process() instead File .tmp/styles/main.css created.
Running "connect:livereload" (connect) task Started connect web server on http://localhost:9000
Running "watch" task Waiting...
I keep this SSH session open to keep grunt serve running.
Nginx is accessible locally (127.0.0.1:8080), but grunt is not (127.0.0.1:9000). Is there any difference between running a grunt server locally and in Docker? What should I do?
It seems that the grunt server is not reachable locally.
$ telnet 127.0.0.1 9000
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
$ netstat -ant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 10.0.8.1:53 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::9000 :::* LISTEN
tcp6 0 0 :::8080 :::* LISTEN
tcp6 0 0 :::21 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 127.0.0.1:9000 127.0.0.1:39682 TIME_WAIT
The issue is that your connect server in the yeoman container listens on localhost inside the container:
Running "connect:livereload" (connect) task Started connect web server on http://localhost:9000
When creating a port binding with Docker, the service inside the container has to listen on 0.0.0.0, not localhost. You need to update your Gruntfile.js so that your connect server listens on 0.0.0.0 (typically, in a Yeoman-generated Gruntfile, by changing the connect task's hostname option from 'localhost' to '0.0.0.0').