Docker + webdevops: "This site can’t be reached"

On a current project I use Docker. I should clarify that I am pretty inexperienced with it.
My project is a PHP/Symfony project. Until now, I used nginx:alpine and phpdocker/php-fpm to run the project in my dev environment. However, I found these unfit for my case, as my production environment actually uses Apache.
Another project I'm on uses the webdevops Docker images without trouble, so I want to replace the two containers listed above with a single one, based on the webdevops/php-apache-dev:alpine docker image.
Although the configuration between the two projects seems almost identical, my dev environment does not work properly: I end up with this:
This site can’t be reached - 172.18.0.7 refused to connect.
(I also use Traefik, and the routed URL does not work any better, though the error message is slightly different: Bad Gateway.)
I find myself unable to debug this; I don't even know where to look.
Below is the docker-compose.yml configuration I want to use:
version: '3.2'
services:
  app:
    image: webdevops/php-apache-dev:alpine
    container_name: my-app
    working_dir: /app
    env_file: .env
    environment:
      WEB_DOCUMENT_ROOT: /public
      WEB_DOCUMENT_INDEX: index.php
      LOG_STDOUT: ./var/log/app.stdout.log
      LOG_STDERR: ./var/log/app.stderr.log
      # #todo list of unwanted PHP modules, cf. https://dockerfile.readthedocs.io/en/latest/content/DockerImages/dockerfiles/php-apache-dev.html#php-modules
      # PHP_DISMOD:
      php.error_reporting: E_ALL
      PHP_DISPLAY_ERRORS: 1
      PHP_POST_MAX_SIZE: 80M
      PHP_UPLOAD_MAX_FILESIZE: 200M
      PHP_MEMORY_LIMIT: 521M
      PHP_MAX_EXECUTION_TIME: 300
      PHP_DATE_TIMEZONE: Europe/Paris
    volumes:
      - .:/app
      # - ./docker/apache2/conf.d:/opt/docker/etc/httpd/conf.d
      - ~/.ssh:/home/application/.ssh:ro
      - ~/.composer:/home/application/.composer
    depends_on:
      - elasticsearch
      - database
The other containers work just as well as they did before. This one is the only one that fails.
When calling docker-compose up, no error is thrown, and all the logs I could find within the container remain silent. As far as I can tell, Traefik does not seem to be the problem. Here is the result of docker ps:
[/var/www/html/citizen-game]$ docker ps *[master]
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6e9639e7a84d webdevops/php-apache-dev:alpine "/entrypoint supervi…" 4 hours ago Up 4 hours 80/tcp, 443/tcp, 9000/tcp my-app-app
be1b90fdf768 docker.elastic.co/elasticsearch/elasticsearch:6.2.4 "/usr/local/bin/dock…" 4 hours ago Up 4 hours (healthy) 9200/tcp, 9300/tcp my-app-elasticsearch
76fb8743a12f phpmyadmin/phpmyadmin "/run.sh supervisord…" 4 hours ago Up 4 hours 80/tcp, 9000/tcp my-app-phpmyadmin
dd41b4afe267 mysql:5.7 "docker-entrypoint.s…" 4 hours ago Up 4 hours (healthy) 3306/tcp, 33060/tcp my-app-database
91893783bcb1 rabbitmq:3.7-management "docker-entrypoint.s…" 4 hours ago Up 4 hours 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp my-app-rabbitmq
63f551884bbf traefik:maroilles "/traefik --web --do…" 4 hours ago Up 4 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:8080->8080/tcp
My question is, I guess: how can I debug this? Am I missing something trivial?
Edit
Here is (part of) the content of the docker-compose.override.yml file:
version: '3.2'
services:
  app:
    volumes:
      - ~/.ssh:/home/application/.ssh
      - ~/.composer:/home/application/.composer
    labels:
      - "traefik.backend=my-app"
      - "traefik.frontend.rule=Host:my-app.docker"
      - "traefik.docker.network=proxy"
    networks:
      - internal
      - proxy
    environment:
      PHP_DEBUGGER: xdebug
      #XDEBUG_REMOTE_HOST: <your host IP address>
      XDEBUG_REMOTE_PORT: 9000
      XDEBUG_REMOTE_AUTOSTART: 1
      XDEBUG_REMOTE_CONNECT_BACK: 1
      XDEBUG_PROFILER_ENABLE: 1
      XDEBUG_PROFILER_ENABLE_TRIGGER: 1000
  traefik:
    image: traefik
    container_name: citizen-game-traefik
    command: --web --docker --docker.domain=docker --logLevel=DEBUG
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always
    networks:
      - internal
      - proxy
  rabbitmq:
    networks:
      - internal
      - proxy
networks:
  proxy:
    external:
      name: traefik
  internal:
EDIT 2:
@Mostafa
I ran the following:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-app-app
Result is:
172.18.0.7172.19.0.5
Trying these directly from the browser fails with "This site can't be reached". I suppose that was to be expected.
I ran the following from inside the container:
bash-4.4# supervisorctl status apache:apached
apache:apached RUNNING pid 13575, uptime 0:00:00
As suggested, I used ss -plant | grep 80. This does not work from within the container; here is the result when called outside of it:
[/var/www/html/my-app]$ ss -plant | grep 80
LISTEN 0 80 127.0.0.1:3306 0.0.0.0:*
ESTAB 0 0 192.168.1.88:39360 198.252.206.25:443 users:(("chromium-browse",pid=4203,fd=80))
SYN-SENT 0 1 192.168.1.88:50680 192.241.181.178:443 users:(("chromium-browse",pid=4203,fd=41))
LISTEN 0 128 *:80 *:*
LISTEN 0 128 *:8080 *:*
I'm not sure it tells much, though. I tried to install ss inside the container with apk, but:
bash-4.4# apk add ss
ERROR: unsatisfiable constraints:
ss (missing):
required by: world[ss]
EDIT 3:
Here is the result of calling netstat:
bash-4.4# netstat -plant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 229/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.11:32843 0.0.0.0:* LISTEN -
tcp 0 0 :::22 :::* LISTEN 229/sshd
tcp 0 0 :::9000 :::* LISTEN 225/php-fpm.conf)
bash-4.4# netstat -plant | grep httpd
(nothing)
I'm not sure how much this helps, since my other project, which works, yields the same (empty) result for bash-4.4# netstat -plant | grep httpd. Without the grep, it does include many more lines, though.

As the output you posted shows, the container from the webdevops/php-apache-dev:alpine image exposes ports 80, 443 and 9000, so you can access the container using its IP directly from the browser. First, ensure the following:
Check that 172.18.0.7 is the actual IP of the my-app-app container; use the following command to get the IP of your running container:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-app-app
Or just docker inspect my-app-app to get all info about the container.
Check the logs for my-app-app. You may also need to enter the container itself and check whether Apache is actually running, by executing the following supervisorctl command, which reports the status of the apache service:
$ supervisorctl status apache:apached
apache:apached RUNNING pid 72, uptime 0:07:43
If Apache is running correctly then you should be able to browse the content using the container IP (in my case it just gives a default page, as I don't have an actual application).
Regarding your issue with Traefik, the Bad Gateway error means that Traefik itself cannot reach your backend service, which is the my-app-app container in our case. You need to ensure that both traefik and my-app-app are on the same network, or at least that they can ping each other's IPs.
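For instance, a minimal compose sketch of that idea (the service and network names here are illustrative, not taken from your actual files):

```yaml
# Hypothetical fragment: put traefik and the app on one shared network,
# and tell traefik which network to use to reach the backend.
services:
  traefik:
    networks:
      - proxy
  app:
    networks:
      - proxy
    labels:
      - "traefik.docker.network=proxy"

networks:
  proxy:
```

If a service is attached to several networks, the traefik.docker.network label matters, because Traefik may otherwise pick an IP on a network it cannot reach.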
Update:
It turns out the image contains the netstat command rather than ss. To check which port Apache is using, run the following from inside the container:
# netstat -plant | grep httpd
tcp 0 0 :::80 :::* LISTEN 98/httpd
tcp 0 0 :::443 :::* LISTEN 98/httpd

Access host from within a docker container

I have a dockerized app and I use the following docker-compose.yml to run it:
version: '3.1'
services:
db:
image: mysql:5.7
ports:
- "3306:3306"
env_file:
- ./docker/db/.env
volumes:
- ./docker/db/data:/var/lib/mysql:rw
- ./docker/db/config:/etc/mysql/conf.d
command: mysqld --sql_mode="NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
php:
build: ./docker/php/7.4/
volumes:
- ./docker/php/app.ini:/usr/local/etc/php/conf.d/docker-php-ext-app.ini:ro
- ./docker/logs/app:/var/www/app/var/log:cached
- .:/var/www/app:cached
working_dir: /var/www/app
links:
- db
env_file:
- ./docker/php/.env
webserver:
image: nginx:1
depends_on:
- php
volumes:
- ./docker/webserver/app.conf:/etc/nginx/conf.d/default.conf:ro
- ./docker/logs/webserver/:/var/log/nginx:cached
- .:/var/www/app:ro
ports:
- "80:80"
I have a non-dockerized server running on my machine, which I can access via localhost:3000. I would like my php service to be able to access it too.
I found people suggesting adding the following to my php service configuration:
extra_hosts:
  - "host.docker.internal:host-gateway"
But when I add this, then docker-compose up -d and try docker exec -ti php_1 curl http://localhost:3000, I get curl: (7) Failed to connect to localhost port 3000 after 0 ms: Connection refused. I have the same error when I try to curl http://host.docker.internal:3000.
I desperately tried to add a port mapping to the php container:
ports:
  - 3000:3000
But then when I start the services I have the following error:
ERROR: for php_1 Cannot start service php: driver failed programming external connectivity on endpoint php_1 (9dacd567ee97b9a46699969f9704899b04ed0b61b32ff55c67c27cb6867b7cef): Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use
ERROR: for php Cannot start service php: driver failed programming external connectivity on endpoint php_1 (9dacd567ee97b9a46699969f9704899b04ed0b61b32ff55c67c27cb6867b7cef): Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use
Which is obvious since my server is running on that 3000 port.
I also tried to add
network_mode: host
But it fails because I already have a links. I get the following error:
Cannot create container for service php: conflicting options: host type networking can't be used with links.
I am running docker v20.10.6 on Ubuntu 21.10.
Any help appreciated, thanks in advance!
Make sure you are using a version of Docker that supports host.docker.internal.
If you are using the Linux version, then 20.10+ supports it.
For other systems you should consult the documentation, and possibly related issues on the docker-for-linux GitHub project or the equivalent project for your OS.
After that...
Make sure extra_hosts is a direct child of the php service:
php:
  extra_hosts:
    host.docker.internal: host-gateway
  build: ./docker/php/7.4/
Try using ping host.docker.internal first to check whether your host machine responds correctly.
Make sure that your service on port 3000 is working properly and there is no firewall issue.
Remember that localhost always means the local IP from the current container's point of view: localhost inside a container maps to the container's own IP, not your host machine's IP. This is the reason for adding the extra_hosts section.
Also, host.docker.internal is not your host's loopback interface.
If the service you are trying to reach listens only on the localhost interface, then there is no way to reach it without doing some magic with iptables / the firewall.
You can check which service is listening on which interface / IP address by running the following command on your host machine: netstat -tulpn
This should return something like following output:
$ netstat -tulpn
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:39195 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 ::1:631 :::* LISTEN -
From a docker container I can reach services listening on 0.0.0.0 (all interfaces), but I cannot access port 631 as it listens only on 127.0.0.1:
$ docker run --rm -it --add-host="host.docker.internal:host-gateway" busybox
/ # ping host.docker.internal
PING host.docker.internal (172.17.0.1): 56 data bytes
64 bytes from 172.17.0.1: seq=0 ttl=64 time=0.124 ms
64 bytes from 172.17.0.1: seq=1 ttl=64 time=0.060 ms
^C
--- host.docker.internal ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.060/0.092/0.124 ms
/ # telnet host.docker.internal 631
telnet: can't connect to remote host (172.17.0.1): Connection refused
/ # telnet host.docker.internal 22
Connected to host.docker.internal
SSH-2.0-OpenSSH_8.6
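The loopback point above can also be demonstrated without Docker at all. A minimal Python sketch (assuming a Linux host, where the whole 127.0.0.0/8 range belongs to the loopback interface): a listener bound to 0.0.0.0 answers on any local address, while one bound to 127.0.0.1 answers only on exactly that address.

```python
import socket

def can_connect(bind_addr: str, connect_addr: str) -> bool:
    """Bind a listener to bind_addr and try to reach it via connect_addr."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((bind_addr, 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.settimeout(1)
    try:
        cli.connect((connect_addr, port))
        return True
    except OSError:  # e.g. connection refused
        return False
    finally:
        cli.close()
        srv.close()

# A listener on 0.0.0.0 is reachable through any local address ...
print(can_connect("0.0.0.0", "127.0.0.2"))    # True
# ... but one bound to 127.0.0.1 only answers on exactly that address.
print(can_connect("127.0.0.1", "127.0.0.2"))  # False
```

This is the same situation a container is in: from its point of view, the host's loopback-bound services are on a "different address" and therefore refuse the connection.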

telnet: Unable to connect to remote host: Connection refused when trying to connect to a running docker image

I have Ubuntu 18.04 running on a server, and a JasperServer image running on Docker inside it. I am trying to access it from my system, but it throws the following error:
jamshaid@jamshaid:~$ telnet my_server_address 9095
Trying my_server_ip...
telnet: Unable to connect to remote host: Connection refused
Here is the output of sudo docker ps -a:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69c31ba800ab bitnami/jasperreports "/app-entrypoint.sh …" 5 hours ago Up 5 hours 0.0.0.0:9095->8080/tcp, 0.0.0.0:443->8443/tcp ceyedev_jasperreports_1
2a7cb72da0c7 bitnami/mariadb:10.3 "/opt/bitnami/script…" 5 hours ago Up 5 hours 0.0.0.0:3306->3306/tcp ceyedev_mariadb_1
If I telnet to it on localhost, it connects and then the connection closes, which means it is running well.
Here is the output when I telnet it from localhost:
ceyedev@ub18servertiny:~$ telnet localhost 9095
Trying ::1...
Connected to localhost.localdomain.
Escape character is '^]'.
Connection closed by foreign host.
Here is the docker-compose file
version: '2'
services:
  mariadb:
    restart: always
    image: 'bitnami/mariadb:10.3'
    environment:
      - MARIADB_USER=bn_jasperreports
      - MARIADB_DATABASE=bitnami_jasperreports
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - 3306:3306
    volumes:
      - 'mariadb_data:/bitnami'
  jasperreports:
    restart: always
    image: 'bitnami/jasperreports'
    environment:
      - MARIADB_HOST=mariadb
      - MARIADB_PORT_NUMBER=3306
      - JASPERREPORTS_DATABASE_USER=bn_jasperreports
      - JASPERREPORTS_DATABASE_NAME=bitnami_jasperreports
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '9095:8080'
      - '443:8443'
    volumes:
      - 'jasperreports_data:/bitnami'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  jasperreports_data:
    driver: local
Here is the output for sudo docker logs container_id_for_jasper
I can telnet to other ports from my local machine but am having an issue with this one. Any ideas? Thanks.
Keeping in view bullet 2 from the answers, I executed the command below and found that 9095 is allocated on the server. Any ideas, please?
ceyedev@ub18servertiny:~$ netstat -atn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN
tcp 0 244 10.0.114.15:22 182.185.223.147:54326 ESTABLISHED
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 ::1:5432 :::* LISTEN
tcp6 0 0 :::443 :::* LISTEN
tcp6 0 0 :::9095 :::* LISTEN
tcp6 0 0 :::3306 :::* LISTEN
To people who get here and didn't find a solution:
Make sure your web server is listening on 0.0.0.0, so that it listens on ALL interfaces, including the docker bridge to the outer network.
Based on your question, you know:
- the Docker container is running
- the Docker container is listening on port 9095
- telnet from the Linux server to the Docker container works
- telnet from another client somewhere on the Internet to the Docker container does NOT work
I guess your Ubuntu server is not accepting incoming requests from the Internet on port 9095. There can be many reasons for that:
- your server has firewall settings which block the connection
- your server did not publish port 9095 to the Internet
- your client has no Internet access when using port 9095
So I would investigate these aspects.
The Docker part seems to be OK, because telnet to localhost is working.
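To rule the firewall in or out on an Ubuntu host, a quick sketch (assuming ufw is the firewall in use; your server may instead use plain iptables or a cloud provider's security group, in which case the check belongs there):

```shell
# Show whether a firewall is active and which ports it allows
sudo ufw status verbose

# If ufw is active and 9095 is not allowed, open it
sudo ufw allow 9095/tcp

# Then, from the remote client, test reachability again
curl -v --connect-timeout 5 http://my_server_address:9095/
```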

Apache Guacamole in Docker containers: Creation of WebSocket tunnel to guacd failed

I installed Apache Guacamole using Docker on a CentOS 8.1 with Docker 19.03.
I followed the steps described here:
https://guacamole.apache.org/doc/gug/guacamole-docker.html
https://www.linode.com/docs/applications/remote-desktop/remote-desktop-using-apache-guacamole-on-docker/
I started the containers like this:
# mysql container
docker run --name guacamole-mysql -e MYSQL_RANDOM_ROOT_PASSWORD=yes -e MYSQL_ONETIME_PASSWORD=yes -d mysql/mysql-server
# guacd container
docker run --name guacamole-guacd -e GUACD_LOG_LEVEL=debug -d guacamole/guacd
# guacamole container
docker run --name guacamole-guacamole --link guacamole-guacd:guacd --link guacamole-mysql:mysql -e MYSQL_DATABASE=guacamole -e MYSQL_USER=guacamole -e MYSQL_PASSWORD=password -d -p 8080:8080 guacamole/guacamole
All went fine and I was able to access the Guacamole web interface on port 8080. I configured one VNC connection to another machine on port 5900. Unfortunately when I try to use that connection I get the following error in the web interface:
"An internal error has occurred within the Guacamole server, and the connection has been terminated..."
I had a look on the logs too and in the guacamole log I found this:
docker logs --tail all -f guacamole-guacamole
...
15:54:06.262 [http-nio-8080-exec-2] ERROR o.a.g.w.GuacamoleWebSocketTunnelEndpoint - Creation of WebSocket tunnel to guacd failed: End of stream while waiting for "args".
15:54:06.685 [http-nio-8080-exec-8] ERROR o.a.g.s.GuacamoleHTTPTunnelServlet - HTTP tunnel request failed: End of stream while waiting for "args".
I'm sure that the target machine (which is running the VNC server) is fine. I'm able to connect to it from both a VNC client and another older Guacamole which I installed previously (not using Docker).
My containers look ok too:
docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ad62aaca5627 guacamole/guacamole "/opt/guacamole/bin/…" About an hour ago Up About an hour 0.0.0.0:8080->8080/tcp guacamole-guacamole
a46bd76234ea guacamole/guacd "/bin/sh -c '/usr/lo…" About an hour ago Up About an hour 4822/tcp guacamole-guacd
ed3a590b19d3 mysql/mysql-server "/entrypoint.sh mysq…" 2 hours ago Up 2 hours (healthy) 3306/tcp, 33060/tcp guacamole-mysql
I connected to the guacamole-guacamole container and pinged the other two containers: guacamole-mysql and guacamole-guacd. Both look fine and reachable.
docker exec -it guacamole-guacamole bash
root@ad62aaca5627:/opt/guacamole# ping guacd
PING guacd (172.17.0.2) 56(84) bytes of data.
64 bytes from guacd (172.17.0.2): icmp_seq=1 ttl=64 time=0.191 ms
64 bytes from guacd (172.17.0.2): icmp_seq=2 ttl=64 time=0.091 ms
root@ad62aaca5627:/opt/guacamole# ping mysql
PING mysql (172.17.0.3) 56(84) bytes of data.
64 bytes from mysql (172.17.0.3): icmp_seq=1 ttl=64 time=0.143 ms
64 bytes from mysql (172.17.0.3): icmp_seq=2 ttl=64 time=0.102 ms
It looks like there is a communication issue between guacamole itself and guacd, and this is where I'm completely stuck.
EDIT
I tried on CentOS 7 and I got the same issues.
I also tried this solution https://github.com/boschkundendienst/guacamole-docker-compose as suggested by @BatchenRegev, but I got the same issue again.
I've been experiencing the same issues under CentOS.
My only difference is that I'm hosting the database on a separate machine, as this is all cloud-hosted and I want to be able to destroy/rebuild the guacamole server at will.
I ended up creating a docker-compose.yml file, as that seemed to work better.
Other gotchas I came across:
- make sure guacd_hostname is the actual machine hostname and not 127.0.0.1
- set SELinux to allow httpd: sudo setsebool -P httpd_can_network_connect
My docker-compose.yml is shown below; replace all {variables} with your own, and update the file if you are using a SQL image as well.
version: "2"
services:
  guacd:
    image: "guacamole/guacd"
    container_name: guacd
    hostname: guacd
    restart: always
    volumes:
      - "/data/shared/guacamole/guacd/data:/data"
      - "/data/shared/guacamole/guacd/conf:/conf:ro"
    expose:
      - "4822"
    ports:
      - "4822:4822"
    network_mode: bridge
  guacamole:
    image: "guacamole/guacamole"
    container_name: guacamole
    hostname: guacamole
    restart: always
    volumes:
      - "/data/shared/guacamole/guacamole/guac-home:/data"
      - "/data/shared/guacamole/guacamole/conf:/conf:ro"
    expose:
      - "8080"
    ports:
      - "8088:8080"
    network_mode: bridge
    environment:
      - "GUACD_HOSTNAME={my_server_hostname}"
      - "GUACD_PORT=4822"
      - "MYSQL_PORT=3306"
      - "MYSQL_DATABASE=guacamole"
      - "GUACAMOLE_HOME=/data"
      - "MYSQL_USER=${my_db_user}"
      - "MYSQL_PASSWORD=${my_db_password}"
      - "MYSQL_HOSTNAME=${my_db_hostname}"
I had the same problem on FreeBSD 12.2. SOLUTION:
Change the "localhost" hostname in /usr/local/etc/guacamole-client/guacamole.properties to the actual address, for example:
guacd-hostname: 192.168.10.10
Next, in /usr/local/etc/guacamole-server/guacd.conf:
[server]
bind_host = 192.168.10.10
Also check /etc/guacamole/guacamole.properties (in my case a link), which should contain:
guacd-hostname: 192.168.10.10
Restart:
/usr/local/etc/rc.d/guacd restart
/usr/local/etc/rc.d/tomcat9 restart
With the hostname "localhost" I got:
11:01:48.010 [http-nio-8085-exec-3] DEBUG o.a.g.s.GuacamoleHTTPTunnelServlet - Internal error in HTTP tunnel.
I hope it will be useful to someone else; it works for me.

docker-compose can't connect to adjacent service via service name

I have this docker-compose.yml that builds my project for e2e tests. It's composed of a postgres db, a backend Node app, a frontend Node app, and a spec app which runs the e2e tests using Cypress.
version: '3'
services:
  database:
    image: 'postgres'
  backend:
    build: ./backend
    command: /bin/bash -c "sleep 3; yarn backpack dev"
    depends_on:
      - database
  frontend:
    build: ./frontend
    command: /bin/bash -c "sleep 15; yarn nuxt"
    depends_on:
      - backend
  spec:
    build:
      context: ./frontend
      dockerfile: Dockerfile.e2e
    command: /bin/bash -c "sleep 30; yarn cypress run"
    depends_on:
      - frontend
      - backend
The Dockerfiles are simple ones based on node:8 that copy the project files and run yarn install. In the spec Dockerfile, I pass http://frontend:3000 as FRONTEND_URL.
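As an aside, the fixed sleep delays in a compose file like this can be replaced with healthchecks, so that each service starts once its dependency is actually ready. A sketch under assumptions (the postgres image ships pg_isready, and the condition form of depends_on requires a compose file version that supports it, e.g. the 2.x format or a modern Compose release, not classic version '3'):

```yaml
# Sketch: gate startup on container health instead of sleeps
services:
  database:
    image: 'postgres'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
  backend:
    build: ./backend
    command: yarn backpack dev
    depends_on:
      database:
        condition: service_healthy
```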
But this setup fails at the spec command when my cypress runner can't connect to frontend with error:
spec_1 | > Error: connect ECONNREFUSED 172.20.0.4:3000
As you can see, it resolves the hostname frontend to the IP correctly, but it's not able to connect. I'm scratching my head over why I can't connect to the frontend via the service name. If I switch the command on spec to sleep 30; ping frontend, it successfully pings the container. I've tried deleting the network and letting docker-compose recreate it, and I've tried specifying expose and links on the respective services, all to no avail.
I've set up a sample repo here if you wanna try replicating the issue:
https://github.com/afifsohaili/demo-dockercompose-network
Any help is greatly appreciated! Thank you!
Your application is listening on loopback:
$ docker run --rm --net container:demo-dockercompose-network_frontend_1 nicolaka/netshoot ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.11:35233 *:*
LISTEN 0 128 127.0.0.1:3000 *:*
From outside of the container, you cannot connect to ports that are only listening on loopback (127.0.0.1). You need to reconfigure your application to listen on all interfaces (0.0.0.0).
For your app, in the package.json, you can add (according to the nuxt faq):
"config": {
  "nuxt": {
    "host": "0.0.0.0",
    "port": "3000"
  }
},
Then you should see:
$ docker run --rm --net container:demo-dockercompose-network_frontend_1 nicolaka/netshoot ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:3000 *:*
LISTEN 0 128 127.0.0.11:39195 *:*
And instead of an unreachable error, you'll now get a 500:
...
frontend_1 | response: undefined,
frontend_1 | statusCode: 500,
frontend_1 | name: 'NuxtServerError' }
...
spec_1 | The response we received from your web server was:
spec_1 |
spec_1 | > 500: Server Error

Deploy app on a cluster but cannot access it successfully

I'm now learning to use Docker following the Get Started documents, but in part 4 (Swarms) I've hit a problem: when I deploy my app on a cluster, I cannot access it successfully.
docker#myvm1:~$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
gsueb9ejeur5 getstartedlab_web.1 zhugw/get-started:first myvm1 Running Preparing 11 seconds ago
ku13wfrjp9wt getstartedlab_web.2 zhugw/get-started:first myvm2 Running Preparing 11 seconds ago
vzof1ybvavj3 getstartedlab_web.3 zhugw/get-started:first myvm1 Running Preparing 11 seconds ago
lkr6rqtqbe6n getstartedlab_web.4 zhugw/get-started:first myvm2 Running Preparing 11 seconds ago
cpg91o8lmslo getstartedlab_web.5 zhugw/get-started:first myvm2 Running Preparing 11 seconds ago
docker#myvm1:~$ curl 'http://localhost'
curl: (7) Failed to connect to localhost port 80: Connection refused
➜ ~ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 - virtualbox Running tcp://192.168.99.101:2376 v17.06.0-ce
myvm2 - virtualbox Running tcp://192.168.99.100:2376 v17.06.0-ce
➜ ~ curl 'http://192.168.99.101'
curl: (7) Failed to connect to 192.168.99.101 port 80: Connection refused
What's wrong?
In addition, very strangely, after adding the content below to docker-compose.yml, the above problem resolved itself automatically:
visualizer:
  image: dockersamples/visualizer:stable
  ports:
    - "8080:8080"
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  deploy:
    placement:
      constraints: [node.role == manager]
  networks:
    - webnet
but this time the newly added visualizer does not work:
docker#myvm1:~$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
xomsv2l5nc8x getstartedlab_web.1 zhugw/get-started:first myvm1 Running Running 7 minutes ago
ncp0rljod4rc getstartedlab_visualizer.1 dockersamples/visualizer:stable myvm1 Running Preparing 7 minutes ago
hxddan48i1dt getstartedlab_web.2 zhugw/get-started:first myvm2 Running Running 7 minutes ago
dzsianc8h7oz getstartedlab_web.3 zhugw/get-started:first myvm1 Running Running 7 minutes ago
zpb6dc79anlz getstartedlab_web.4 zhugw/get-started:first myvm2 Running Running 7 minutes ago
pg96ix9hbbfs getstartedlab_web.5 zhugw/get-started:first myvm2 Running Running 7 minutes ago
From the above you can see that it's always Preparing.
My whole docker-compose.yml
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: zhugw/get-started:first
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
Had this problem while learning too.
It's because your non-clustered image is still running from step 2, and the clustered image you just deployed uses the same port mapping (4000:80) in the docker-compose.yml file.
You have two options:
- go into your docker-compose.yml, change the port mapping to something else, e.g. 4010:80, redeploy your cluster with the update, and then try http://localhost:4010
- remove the container you created in step 2 of the guide that's still running and using port mapping 4000:80
volumes:
  - "/var/run/docker.sock:/var/run/docker.sock"
should be
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
This is an error in the Docker tutorials.
Open port 7946 TCP/UDP and port 4789 UDP between the swarm nodes, and use the ingress network. Please let me know if it works, thanks.
What helped me get the visualizer running was changing the visualizer image tag from stable to latest.
If you are using Docker Toolbox for Mac, then you should check this out.
I had the same problem. As it says in the tutorial (see "Having connectivity trouble?") the following ports need to be open:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container ingress network.
So I executed the following before swarm init (right after creating myvm1 and myvm2), and could then access the service, e.g. in the browser at IP_node:4000:
$ docker-machine ssh myvm1 "sudo iptables -I INPUT -p tcp --dport 7946 --syn -j ACCEPT"
$ docker-machine ssh myvm2 "sudo iptables -I INPUT -p tcp --dport 7946 --syn -j ACCEPT"
$ docker-machine ssh myvm1 "sudo iptables -I INPUT -p udp --dport 7946 -j ACCEPT"
$ docker-machine ssh myvm2 "sudo iptables -I INPUT -p udp --dport 7946 -j ACCEPT"
$ docker-machine ssh myvm1 "sudo iptables -I INPUT -p udp --dport 4789 -j ACCEPT"
$ docker-machine ssh myvm2 "sudo iptables -I INPUT -p udp --dport 4789 -j ACCEPT"
Hope it helps others.
