I brought up Concourse with Docker Compose on an EC2 instance (Ubuntu 22.04) and set CONCOURSE_EXTERNAL_URL (in .env, referenced from docker-compose.yml) to the Elastic IP address of the instance.
Even though the security group inbound rules and the network ACL allow all TCP/HTTP/HTTPS traffic, http://{myElasticIP}:8080/ is refused.
(The instance is running, and I can ping {myElasticIP} without failures.)
This was my first time setting up Concourse, so I guess something is wrong in my procedure.
Any advice would be highly appreciated.
-- command and result
$ docker-compose up -d
Starting ubuntu_concourse-db_1
Recreating ubuntu_concourse_1
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
56f8859a67ba concourse/concourse "dumb-init /usr/loca…" 24 minutes ago Restarting (1) 28 seconds ago ubuntu_concourse-web_1
307a647554eb postgres:9.5 "docker-entrypoint.s…" 24 minutes ago Up 24 minutes 5432/tcp ubuntu_concourse-db_1
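Note: concourse-web is stuck in a restart loop, so nothing is listening on 8080, which by itself would explain the refused connection. A first diagnostic step (container name taken from the docker ps output above) is to read its logs for the startup error:
$ docker logs --tail 50 ubuntu_concourse-web_1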
Error (Fiddler):
ConnectionRefused (0x274d).
-- kernel
$ uname -r
5.15.0-1015-aws
-- docker-compose.yml
version: '3'
services:
  concourse-db:
    image: postgres:9.5
    environment:
      POSTGRES_DB: concourse
      POSTGRES_USER: "${CONCOURSE_POSTGRES_USER}"
      POSTGRES_PASSWORD: "${CONCOURSE_POSTGRES_PASSWORD}"
      PGDATA: /database
  concourse-web:
    image: concourse/concourse
    links: [concourse-db]
    command: web
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    volumes: ["./keys/web:/concourse-keys"]
    restart: unless-stopped # required so that it retries until concourse-db comes up
    environment:
      CONCOURSE_BASIC_AUTH_USERNAME: "${CONCOURSE_BASIC_AUTH_USERNAME}"
      CONCOURSE_BASIC_AUTH_PASSWORD: "${CONCOURSE_BASIC_AUTH_PASSWORD}"
      CONCOURSE_EXTERNAL_URL: "${CONCOURSE_EXTERNAL_URL}"
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: "${CONCOURSE_POSTGRES_USER}"
      CONCOURSE_POSTGRES_PASSWORD: "${CONCOURSE_POSTGRES_PASSWORD}"
      CONCOURSE_POSTGRES_DATABASE: concourse
-- .env
CONCOURSE_BASIC_AUTH_USERNAME=concourse
CONCOURSE_BASIC_AUTH_PASSWORD=changeme
CONCOURSE_EXTERNAL_URL=http://{myElasticIP}:8080
CONCOURSE_POSTGRES_USER=concourse
CONCOURSE_POSTGRES_PASSWORD=changeme
-- port check
$ sudo lsof -i -P -n | grep LISTEN
systemd-r 390 systemd-resolve 14u IPv4 16470 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 644 root 3u IPv4 17932 0t0 TCP *:22 (LISTEN)
sshd 644 root 4u IPv6 17943 0t0 TCP *:22 (LISTEN)
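One likely culprit, given the restart loop: docker-compose.yml mounts ./keys/web into the web container, and Concourse exits at startup when the session signing and TSA host keys are missing. A sketch of the key generation described in the older Concourse docker-compose docs, assuming the same ./keys directory layout:
$ mkdir -p keys/web keys/worker
$ ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
$ ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
$ ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
$ cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
$ cp ./keys/web/tsa_host_key.pub ./keys/worker
After regenerating the keys, run docker-compose up -d again and check docker ps for a stable Up status.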
Related
I have the following docker-compose file:
version: "3.9"
services:
pihole:
container_name: pihole
image: pihole/pihole:latest
ports:
- target: 53
published: 53
protocol: tcp
mode: host
- target: 53
published: 53
protocol: udp
mode: host
# - target: 80
# published: 80
# protocol: tcp
# mode: host
environment:
TZ: 'Europe/Warsaw'
DNS1: 1.1.1.1
DNS2: 8.8.8.8
VIRTUAL_HOST: 'pihole.local'
volumes:
- ./etc/pihole/:/etc/pihole
- ./etc-dnsmasq.d:/etc/dnsmasq.d
dns:
- 1.1.1.1
- 8.8.8.8
cap_add:
- NET_ADMIN
restart: unless-stopped
networks:
- public
networks:
public:
Working solution with docker-compose
Running this with:
docker-compose --file docker-compose-pihole.yml up -d
exposes port 53 TCP/UDP on the host IP address:
$ nmap 172.30.0.100 -Pn
Starting Nmap 7.80 ( https://nmap.org ) at 2022-01-02 10:42 CET
Nmap scan report for 172.30.0.100
Host is up (0.0038s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
53/tcp open domain
and DNS resolution is working:
$ nslookup google.pl 172.30.0.100
Server: 172.30.0.100
Address: 172.30.0.100#53
Non-authoritative answer:
Name: google.pl
Address: 172.217.16.3
Name: google.pl
Address: 2a00:1450:401b:804::2003
and I'm able to telnet to port 53
$ telnet 172.30.0.100 53
Trying 172.30.0.100...
Connected to 172.30.0.100.
Escape character is '^]'.
NOT Working solution with docker stack deploy
Running the same docker-compose file with
docker stack deploy -c docker-compose-pihole.yml pihole
also exposes port 53 TCP/UDP on the host IP address:
$ nmap 172.30.0.100 -Pn
Starting Nmap 7.80 ( https://nmap.org ) at 2022-01-02 10:46 CET
Nmap scan report for 172.30.0.100
Host is up (0.0022s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
53/tcp open domain
however, name resolution is not working:
$ nslookup google.pl 172.30.0.100
;; connection timed out; no servers could be reached
and a telnet connection to port 53 is closed by the remote host:
$ telnet 172.30.0.100 53
Trying 172.30.0.100...
Connected to 172.30.0.100.
Escape character is '^]'.
Connection closed by foreign host.
Another strange thing: when port 80 is exposed, in both cases I can access the web UI on port 80 by connecting to the host IP.
I have no idea what's going on or how to fix communication on port 53.
Fixed.
One environment variable was missing for pihole:
DNSMASQ_LISTENING: all
Two days to figure this out!
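For completeness, a sketch of where that variable goes in the compose file above (map-style environment, matching the existing entries):
    environment:
      TZ: 'Europe/Warsaw'
      DNS1: 1.1.1.1
      DNS2: 8.8.8.8
      VIRTUAL_HOST: 'pihole.local'
      DNSMASQ_LISTENING: all
The explanation, as far as I can tell: with docker stack deploy, queries arrive through the ingress routing mesh on a different interface than eth0, and dnsmasq's default single-interface binding drops them; DNSMASQ_LISTENING: all makes Pi-hole's dnsmasq answer on every interface, while the TCP handshake itself (which nmap and telnet see) is completed by the mesh either way.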
I installed Apache Guacamole using Docker on CentOS 8.1 with Docker 19.03.
I followed the steps described here:
https://guacamole.apache.org/doc/gug/guacamole-docker.html
https://www.linode.com/docs/applications/remote-desktop/remote-desktop-using-apache-guacamole-on-docker/
I started the containers like this:
# mysql container
docker run --name guacamole-mysql -e MYSQL_RANDOM_ROOT_PASSWORD=yes -e MYSQL_ONETIME_PASSWORD=yes -d mysql/mysql-server
# guacd container
docker run --name guacamole-guacd -e GUACD_LOG_LEVEL=debug -d guacamole/guacd
# guacamole container
docker run --name guacamole-guacamole --link guacamole-guacd:guacd --link guacamole-mysql:mysql -e MYSQL_DATABASE=guacamole -e MYSQL_USER=guacamole -e MYSQL_PASSWORD=password -d -p 8080:8080 guacamole/guacamole
All went fine and I was able to access the Guacamole web interface on port 8080. I configured one VNC connection to another machine on port 5900. Unfortunately when I try to use that connection I get the following error in the web interface:
"An internal error has occurred within the Guacamole server, and the connection has been terminated..."
I had a look at the logs too, and in the guacamole log I found this:
docker logs --tail all -f guacamole-guacamole
...
15:54:06.262 [http-nio-8080-exec-2] ERROR o.a.g.w.GuacamoleWebSocketTunnelEndpoint - Creation of WebSocket tunnel to guacd failed: End of stream while waiting for "args".
15:54:06.685 [http-nio-8080-exec-8] ERROR o.a.g.s.GuacamoleHTTPTunnelServlet - HTTP tunnel request failed: End of stream while waiting for "args".
I'm sure that the target machine (which is running the VNC server) is fine. I'm able to connect to it from both a VNC client and another older Guacamole which I installed previously (not using Docker).
My containers look ok too:
docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ad62aaca5627 guacamole/guacamole "/opt/guacamole/bin/…" About an hour ago Up About an hour 0.0.0.0:8080->8080/tcp guacamole-guacamole
a46bd76234ea guacamole/guacd "/bin/sh -c '/usr/lo…" About an hour ago Up About an hour 4822/tcp guacamole-guacd
ed3a590b19d3 mysql/mysql-server "/entrypoint.sh mysq…" 2 hours ago Up 2 hours (healthy) 3306/tcp, 33060/tcp guacamole-mysql
I connected to the guacamole-guacamole container and pinged the other two containers: guacamole-mysql and guacamole-guacd. Both look fine and reachable.
docker exec -it guacamole-guacamole bash
root@ad62aaca5627:/opt/guacamole# ping guacd
PING guacd (172.17.0.2) 56(84) bytes of data.
64 bytes from guacd (172.17.0.2): icmp_seq=1 ttl=64 time=0.191 ms
64 bytes from guacd (172.17.0.2): icmp_seq=2 ttl=64 time=0.091 ms
root@ad62aaca5627:/opt/guacamole# ping mysql
PING mysql (172.17.0.3) 56(84) bytes of data.
64 bytes from mysql (172.17.0.3): icmp_seq=1 ttl=64 time=0.143 ms
64 bytes from mysql (172.17.0.3): icmp_seq=2 ttl=64 time=0.102 ms
It looks like there is a communication issue between Guacamole itself and guacd, and this is where I'm completely stuck.
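One way to narrow this down is to test raw TCP connectivity to guacd's port 4822 from the guacamole container. A sketch, using bash's built-in /dev/tcp since the image may not ship nc (the guacd hostname and port come from the container setup above):
docker exec guacamole-guacamole bash -c 'timeout 3 bash -c "</dev/tcp/guacd/4822" && echo open || echo closed'
If the port reports open but the tunnel still dies with "End of stream while waiting for args", guacd is accepting connections and then failing internally, so docker logs guacamole-guacd (with the GUACD_LOG_LEVEL=debug already set above) is the next place to look.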
EDIT
I tried on CentOS 7 and I got the same issues.
I also tried this solution https://github.com/boschkundendienst/guacamole-docker-compose as suggested by @BatchenRegev, but I got the same issue again.
I've been experiencing the same issues under CentOS.
My only difference is that I'm hosting the database on a separate machine, as this is all cloud-hosted and I want to be able to destroy/rebuild the Guacamole server at will.
I ended up creating a docker-compose.yml file, as that seemed to work better.
Other gotchas I came across:
make sure guacd_hostname is the actual machine hostname and not 127.0.0.1
setting SELinux to allow httpd:
sudo setsebool -P httpd_can_network_connect
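To verify the boolean took effect (standard SELinux tooling, not from the original post), getsebool httpd_can_network_connect should report "on" afterwards.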
My docker-compose.yml is shown below; replace all {variables} with your own, and update the file if you are using a SQL image as well.
version: "2"
services:
guacd:
image: "guacamole/guacd"
container_name: guacd
hostname: guacd
restart: always
volumes:
- "/data/shared/guacamole/guacd/data:/data"
- "/data/shared/guacamole/guacd/conf:/conf:ro"
expose:
- "4822"
ports:
- "4822:4822"
network_mode: bridge
guacamole:
image: "guacamole/guacamole"
container_name: guacamole
hostname: guacamole
restart: always
volumes:
- "/data/shared/guacamole/guacamole/guac-home:/data"
- "/data/shared/guacamole/guacamole/conf:/conf:ro"
expose:
- "8080"
ports:
- "8088:8080"
network_mode: bridge
environment:
- "GUACD_HOSTNAME={my_server_hostname}"
- "GUACD_PORT=4822"
- "MYSQL_PORT=3306"
- "MYSQL_DATABASE=guacamole"
- "GUACAMOLE_HOME=/data"
- "MYSQL_USER=${my_db_user}"
- "MYSQL_PASSWORD=${my_db_password}"
- "MYSQL_HOSTNAME=${my_db_hostname}"
I had the same problem on FreeBSD 12.2. SOLUTION:
Change the "localhost" hostname in
/usr/local/etc/guacamole-client/guacamole.properties
to the address of your guacd host, for example:
guacd-hostname: 192.168.10.10
Next, in /usr/local/etc/guacamole-server/guacd.conf:
[server]
bind_host = 192.168.10.10
Check that /etc/guacamole/guacamole.properties (a symlink in my case) has the same entry:
guacd-hostname: 192.168.10.10
Restart:
/usr/local/etc/rc.d/guacd restart
/usr/local/etc/rc.d/tomcat9 restart
With the name "localhost" I got:
11:01:48.010 [http-nio-8085-exec-3] DEBUG o.a.g.s.GuacamoleHTTPTunnelServlet - Internal error in HTTP tunnel.
I hope it will be useful to someone else; it works for me.
I have the following docker-compose.yml Redis config.
version: '3.5'
services:
  db:
    image: redis:latest
    command: redis-server --bind 0.0.0.0 --appendonly yes --protected-mode no
    ports:
      - target: 6379
        published: 6379
        protocol: tcp
        mode: ingress
There are two hosts, leader-0 (manager) and redis-0 (worker):
root@leader-0:~# docker node ls
ID HOSTNAME STATUS
46tmallxr4l8xr7i90vlwntjq * leader-0 Ready
mofbedj4sqlxgnyatbxhlokc7 redis-0 Ready
The Redis host redis-0 exposes port 6379 on localhost as expected:
root@redis-0:~# redis-cli -h 127.0.0.1 ping
PONG
but 6379 is not available on the manager (although it should be):
root@leader-0:~# redis-cli -h 127.0.0.1 ping
Could not connect to Redis at 127.0.0.1:6379: Connection timed out
The interesting part:
The connection timed out (not refused).
redis-cli -h 127.0.0.1 ping on the other worker hosts works as expected (returns PONG).
The Docker overlay mesh network should expose port 6379 on the local interface of each host, but it looks like something went wrong, and I'm stuck figuring out what exactly.
Other services on the manager host work properly (I can curl http://localhost:${SERVICE_PORT}/).
The manager host has the same firewall rules as the worker hosts (plus additional open ports).
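A sketch of checks that could confirm whether the routing mesh itself is broken on the manager; the 7946/4789 ports below are the standard swarm control- and data-plane ports, not something from this setup:
$ docker service ls                        # is 6379 actually published? (PORTS column)
$ sudo ss -lnt | grep 6379                 # dockerd should listen on every node for mesh-published ports
$ sudo iptables -t nat -nL | grep 6379     # DNAT rules Docker installs for the published port
If those look fine on the manager, the usual culprit for a mesh that times out on a single node is blocked inter-node traffic on TCP/UDP 7946 (gossip) and UDP 4789 (VXLAN) between that node and the rest.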
I'm learning Docker by following the Get Started documents, but in Part 4 (Swarms) I've hit a problem: after deploying my app on a cluster, I cannot access it.
docker@myvm1:~$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
gsueb9ejeur5 getstartedlab_web.1 zhugw/get-started:first myvm1 Running Preparing 11 seconds ago
ku13wfrjp9wt getstartedlab_web.2 zhugw/get-started:first myvm2 Running Preparing 11 seconds ago
vzof1ybvavj3 getstartedlab_web.3 zhugw/get-started:first myvm1 Running Preparing 11 seconds ago
lkr6rqtqbe6n getstartedlab_web.4 zhugw/get-started:first myvm2 Running Preparing 11 seconds ago
cpg91o8lmslo getstartedlab_web.5 zhugw/get-started:first myvm2 Running Preparing 11 seconds ago
docker@myvm1:~$ curl 'http://localhost'
curl: (7) Failed to connect to localhost port 80: Connection refused
➜ ~ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 - virtualbox Running tcp://192.168.99.101:2376 v17.06.0-ce
myvm2 - virtualbox Running tcp://192.168.99.100:2376 v17.06.0-ce
➜ ~ curl 'http://192.168.99.101'
curl: (7) Failed to connect to 192.168.99.101 port 80: Connection refused
What's wrong?
In addition, very strangely, after adding the content below to docker-compose.yml, the problem above resolved itself:
visualizer:
  image: dockersamples/visualizer:stable
  ports:
    - "8080:8080"
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  deploy:
    placement:
      constraints: [node.role == manager]
  networks:
    - webnet
but this time the newly added visualizer does not work:
docker@myvm1:~$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
xomsv2l5nc8x getstartedlab_web.1 zhugw/get-started:first myvm1 Running Running 7 minutes ago
ncp0rljod4rc getstartedlab_visualizer.1 dockersamples/visualizer:stable myvm1 Running Preparing 7 minutes ago
hxddan48i1dt getstartedlab_web.2 zhugw/get-started:first myvm2 Running Running 7 minutes ago
dzsianc8h7oz getstartedlab_web.3 zhugw/get-started:first myvm1 Running Running 7 minutes ago
zpb6dc79anlz getstartedlab_web.4 zhugw/get-started:first myvm2 Running Running 7 minutes ago
pg96ix9hbbfs getstartedlab_web.5 zhugw/get-started:first myvm2 Running Running 7 minutes ago
As you can see above, it's always Preparing.
My whole docker-compose.yml:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: zhugw/get-started:first
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
networks:
webnet:
Had this problem while learning too.
It's because your non-clustered image from step 2 is still running, and the clustered image you just deployed uses the same port mapping (4000:80) in the docker-compose.yml file.
You have two options:
Go into your docker-compose.yml and change the port mapping to something else, e.g. 4010:80, then redeploy your cluster with the update and try http://localhost:4010.
Remove the container you created in step 2 of the guide that's still running and using port mapping 4000:80. A sketch of both options follows.
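The 4000:80 mapping here comes from the tutorial, not from the compose file in the question:
# Option 1: change the published port in docker-compose.yml, then redeploy
#   ports:
#     - "4010:80"
$ docker stack deploy -c docker-compose.yml getstartedlab
# Option 2: remove the leftover container from step 2 that still holds port 4000
$ docker container ls        # find the container publishing 0.0.0.0:4000->80/tcp
$ docker container rm -f <container_id>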
volumes:
  - "/var/run/docker.sock:/var/run/docker.sock"
should be
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
This is an error in the Docker tutorial.
Open port 7946 TCP/UDP and port 4789 UDP between the swarm nodes. Use the ingress network. Please let me know if it works, thanks.
What helped me get the visualizer running was changing the visualizer image tag from stable to latest.
If you are using Docker toolbox for mac, then you should check this out.
I had the same problem. As it says in the tutorial (see "Having connectivity trouble?") the following ports need to be open:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container ingress network.
So I executed the following before swarm init (right after creating myvm1 and myvm2), and could later access the service, e.g. in the browser at IP_node:4000:
$ docker-machine ssh myvm1 "sudo iptables -I INPUT -p tcp --dport 7946 --syn -j ACCEPT"
$ docker-machine ssh myvm2 "sudo iptables -I INPUT -p tcp --dport 7946 --syn -j ACCEPT"
$ docker-machine ssh myvm1 "sudo iptables -I INPUT -p udp --dport 7946 -j ACCEPT"
$ docker-machine ssh myvm2 "sudo iptables -I INPUT -p udp --dport 7946 -j ACCEPT"
$ docker-machine ssh myvm1 "sudo iptables -I INPUT -p udp --dport 4789 -j ACCEPT"
$ docker-machine ssh myvm2 "sudo iptables -I INPUT -p udp --dport 4789 -j ACCEPT"
Hope it helps others.
On a current project I use Docker. I must clarify that I am pretty inexperienced at it.
My project is a PHP/Symfony project. Until now, I have used nginx:alpine and phpdocker/php-fpm to run my project in my dev environment. However, I found these unfit for my case, as my production environment actually uses Apache.
Another project I'm on uses the webdevops Docker images without trouble, so I want to replace the two containers listed above with a single one, the webdevops/php-apache-dev:alpine Docker image.
Although the configuration of the two projects seems almost identical, my dev environment does not work properly; I end up with this:
This site can’t be reached - 172.18.0.7 refused to connect.
(I also use Traefik, but the routed URI does not work any better. The error message is slightly different though: Bad Gateway).
I find myself unable to debug this. I don't even know where to look.
Below is the docker-compose.yml configuration I want to use:
version: '3.2'
services:
  app:
    image: webdevops/php-apache-dev:alpine
    container_name: my-app
    working_dir: /app
    env_file: .env
    environment:
      WEB_DOCUMENT_ROOT: /public
      WEB_DOCUMENT_INDEX: index.php
      LOG_STDOUT: ./var/log/app.stdout.log
      LOG_STDERR: ./var/log/app.stderr.log
      # #todo list of unwanted PHP modules, cf. https://dockerfile.readthedocs.io/en/latest/content/DockerImages/dockerfiles/php-apache-dev.html#php-modules
      # PHP_DISMOD:
      php.error_reporting: E_ALL
      PHP_DISPLAY_ERRORS: 1
      PHP_POST_MAX_SIZE: 80M
      PHP_UPLOAD_MAX_FILESIZE: 200M
      PHP_MEMORY_LIMIT: 521M
      PHP_MAX_EXECUTION_TIME: 300
      PHP_DATE_TIMEZONE: Europe/Paris
    volumes:
      - .:/app
      # - ./docker/apache2/conf.d:/opt/docker/etc/httpd/conf.d
      - ~/.ssh:/home/application/.ssh:ro
      - ~/.composer:/home/application/.composer
    depends_on:
      - elasticsearch
      - database
The other containers work just as well as they did before. This one is the only one that fails.
When calling docker-compose up, no error is thrown. All the logs I could find within the container remain silent. As far as I can tell, Traefik does not seem to be the problem. Here is the result of docker ps:
[/var/www/html/citizen-game]$ docker ps *[master]
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6e9639e7a84d webdevops/php-apache-dev:alpine "/entrypoint supervi…" 4 hours ago Up 4 hours 80/tcp, 443/tcp, 9000/tcp my-app-app
be1b90fdf768 docker.elastic.co/elasticsearch/elasticsearch:6.2.4 "/usr/local/bin/dock…" 4 hours ago Up 4 hours (healthy) 9200/tcp, 9300/tcp my-app-elasticsearch
76fb8743a12f phpmyadmin/phpmyadmin "/run.sh supervisord…" 4 hours ago Up 4 hours 80/tcp, 9000/tcp my-app-phpmyadmin
dd41b4afe267 mysql:5.7 "docker-entrypoint.s…" 4 hours ago Up 4 hours (healthy) 3306/tcp, 33060/tcp my-app-database
91893783bcb1 rabbitmq:3.7-management "docker-entrypoint.s…" 4 hours ago Up 4 hours 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp my-app-rabbitmq
63f551884bbf traefik:maroilles "/traefik --web --do…" 4 hours ago Up 4 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:8080->8080/tcp
My question is, I guess: how can I debug this? Am I missing something trivial?
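One quick check worth doing (a sketch, using the container name from docker ps above): ask Apache for a response from inside the container, bypassing Traefik and Docker networking entirely. curl may not be present in the image; on Alpine, busybox wget -S -O- http://localhost/ is a fallback:
$ docker exec my-app-app curl -sI http://localhost/
An HTTP status line means Apache is serving and the problem is in the routing between host/Traefik and the container; a hang or refusal means the container itself isn't serving.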
Edit
Here is (part of) the content of the docker-compose.override.yml file:
version: '3.2'
services:
  app:
    volumes:
      - ~/.ssh:/home/application/.ssh
      - ~/.composer:/home/application/.composer
    labels:
      - "traefik.backend=my-app"
      - "traefik.frontend.rule=Host:my-app.docker"
      - "traefik.docker.network=proxy"
    networks:
      - internal
      - proxy
    environment:
      PHP_DEBUGGER: xdebug
      #XDEBUG_REMOTE_HOST: <your host IP address>
      XDEBUG_REMOTE_PORT: 9000
      XDEBUG_REMOTE_AUTOSTART: 1
      XDEBUG_REMOTE_CONNECT_BACK: 1
      XDEBUG_PROFILER_ENABLE: 1
      XDEBUG_PROFILER_ENABLE_TRIGGER: 1000
  traefik:
    image: traefik
    container_name: citizen-game-traefik
    command: --web --docker --docker.domain=docker --logLevel=DEBUG
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always
    networks:
      - internal
      - proxy
  rabbitmq:
    networks:
      - internal
      - proxy
networks:
  proxy:
    external:
      name: traefik
  internal:
EDIT 2:
@Mostafa, I ran the following:
I ran the following:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-app-app
Result is:
172.18.0.7172.19.0.5
Trying these directly from the browser fails with "This site can't be reached". I suppose that was to be expected.
I ran the following from inside the container:
bash-4.4# supervisorctl status apache:apached
apache:apached RUNNING pid 13575, uptime 0:00:00
As suggested, I used ss -plant | grep 80. This does not work from within the container. Here is the result when called outside of it:
[/var/www/html/my-app]$ ss -plant | grep 80
LISTEN 0 80 127.0.0.1:3306 0.0.0.0:*
ESTAB 0 0 192.168.1.88:39360 198.252.206.25:443 users:(("chromium-browse",pid=4203,fd=80))
SYN-SENT 0 1 192.168.1.88:50680 192.241.181.178:443 users:(("chromium-browse",pid=4203,fd=41))
LISTEN 0 128 *:80 *:*
LISTEN 0 128 *:8080 *:*
I'm not sure it tells much though. I tried to install ss from inside the container with apk but:
bash-4.4# apk add ss
ERROR: unsatisfiable constraints:
ss (missing):
required by: world[ss]
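For the record, on Alpine ss is shipped in the iproute2 package, so apk add iproute2 would have provided it; busybox's netstat (used in EDIT 3 below) is the zero-install alternative.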
EDIT 3:
Here is the result of calling netstat:
bash-4.4# netstat -plant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 229/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.11:32843 0.0.0.0:* LISTEN -
tcp 0 0 :::22 :::* LISTEN 229/sshd
tcp 0 0 :::9000 :::* LISTEN 225/php-fpm.conf)
bash-4.4# netstat -plant | grep httpd
(nothing)
I'm not sure how much this helps though, since my other project, which works, yields the same result for netstat -plant | grep httpd. Without the grep, it includes many more lines, though.
As the output you have posted shows, ports 80, 443, and 9000 are exposed for the container from the webdevops/php-apache-dev:alpine image.
You should therefore be able to access the container using its IP directly from the browser. First, ensure the following:
Check whether 172.18.0.7 is the actual IP of the my-app-app container; use the following command to check the IP of your running container:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-app-app
Or just docker inspect my-app-app to get all info about the container
Check the logs for my-app-app; you may need to enter the container itself and check whether Apache is actually running by executing the following supervisorctl command, which reports the status of the apache service:
$ supervisorctl status apache:apached
apache:apached RUNNING pid 72, uptime 0:07:43
If Apache is running correctly, then you should be able to browse the content using the container IP; in my case it serves the default page, as I don't have an actual application.
Regarding your Bad Gateway issue with Traefik: that happens because Traefik itself cannot reach your backend service, which is the my-app-app container in our case. You need to ensure that both traefik and my-app-app are on the same network, or at least that they can reach each other's IPs.
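A sketch of how to verify and fix that, with the network and container names taken from the compose override above:
$ docker network inspect proxy              # both traefik and my-app-app should appear under "Containers"
$ docker network connect proxy my-app-app   # attach the app on the fly if it is missing
docker network connect is only a live test; the durable fix is the networks: [internal, proxy] entry already present in the override file.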
Update:
Instead of ss, it turns out the image contains the netstat command; to check which ports Apache uses, run the following from inside the container:
# netstat -plant | grep httpd
tcp 0 0 :::80 :::* LISTEN 98/httpd
tcp 0 0 :::443 :::* LISTEN 98/httpd