No ports, no connectivity to Docker compose stack - docker

I am new to Docker-compose and I am struggling to run my Tomcat+Postgres stack.
I have "successfully" launched the stack, in the sense that my Java Web Application successfully connects to Postgresql and deploys into Tomcat.
But no ports are published and the containers are not reachable from outside, even though they can reach each other.
The following is my project layout (I use Palantir's Gradle Docker plugin)
edcom3-docker/
    edcom3-tomcat/
        build.gradle
        src/main/
            docker/Dockerfile
            resources
            webapps/edcom3.war
            (Other stuff I am too lazy to list)
    edcom3-postgres/
        build.gradle
        src/main/
            docker/Dockerfile
    src/main/docker/
        docker-compose.yml
        .env
Thanks to the Gradle Docker plugin, each Docker context is built into $baseDir/build/docker
The following is my current docker-compose.yml; I expanded the directory structure above to explain the relative build paths.
version: '3'
services:
  edcom3-postgres:
    build: ../../../edcom3-postgres/build/docker
    image: edcom3-postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
    # networks:
    #   - edcom3-net
    expose:
      - "5432/tcp"
    ports:
      - "${EDCOM3_SQL_PORT}:5432"
    volumes:
      - "edcom3-postgres-data:/var/lib/postgresql/data"
  edcom3-tomcat:
    depends_on:
      - edcom3-postgres
    build: ../../../edcom3-tomcat/build/docker
    image: edcom3-tomcat
    expose:
      - "8009/tcp"
      - "8080/tcp"
    ports:
      - "${EDCOM3_AJP_PORT}:8009"
    volumes:
      - "edcom3-config-location:/home/tomcat"
      - "edcom3-file-repository:/mnt/fileRepository"
      - "edcom3-logs:/mnt/phoenix-logs"
      - "edcom3-tomcat-logs:/usr/local/tomcat/logs"
    restart: always
    # networks:
    #   - edcom3-net
    links:
      - edcom3-postgres

#networks:
#  edcom3-net:
#    driver: bridge
#    internal: true

volumes:
  edcom3-config-location:
  edcom3-file-repository:
  edcom3-logs:
  edcom3-tomcat-logs:
  edcom3-postgres-data:
What I have tried
I first run gradle :edcom3-tomcat:docker and gradle :edcom3-postgres:docker to build the contexts.
Then I cd into src/main/docker of the main project, where the above docker-compose.yml is located, and launch the stack.
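For reference, the launch sequence is roughly the following (the EDCOM3_* ports come from the .env file that sits next to the compose file):

gradle :edcom3-tomcat:docker :edcom3-postgres:docker   # build both Docker contexts with the Palantir plugin
cd src/main/docker                                      # where docker-compose.yml and .env live
docker-compose up -d --build                            # build the images and start the stack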
edcom3-tomcat_1 | 06-Feb-2020 15:51:12.943 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/usr/local/tomcat/webapps/edcom3.war] has finished in [66,278] ms
The stack starts and the application is deployed. As you can see, I have instructed docker-compose to expose the AJP port (the variables are bound to ports 50000 and 50001) so that Apache can reverse-proxy into Tomcat. Apache is a stand-alone container.
But I can't find the port bindings in docker ps
[docker#DOCKER01 ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78acb0e5ff5d edcom3-tomcat "catalina.sh run" 11 minutes ago Up 11 minutes (unhealthy) edcom3_edcom3-tomcat_1
60bbed143adf edcom3-postgres "docker-entrypoint.s…" 16 minutes ago Up 16 minutes (unhealthy) edcom3_edcom3-postgres_1
23265ae20793 postgres:11.6-alpine "docker-entrypoint.s…" 7 weeks ago Up 2 days 192.168.0.72:5432->5432/tcp postgres11
9c8b0eda42e9 portainer/portainer:1.23.0 "/portainer --ssl --…" 7 weeks ago Up 2 days 192.168.0.72:8000->8000/tcp, 192.168.0.72:9000->9000/tcp keen_grothendieck
63985a2c656f initech/sqlserver2017:20191204 "/opt/mssql/bin/nonr…" 2 months ago Up 2 days (healthy) 192.168.0.72:1433->1433/tcp sqlserver2017
09589b076513 oracle/database:12.2.0.1-SE2 "/bin/sh -c 'exec $O…" 2 months ago Up 2 days (healthy) 192.168.0.72:1521->1521/tcp, 192.168.0.72:5500->5500/tcp oracle12c
Considerations: (un)commenting the network in the compose file has no effect.
I can clearly see that the containers are reported unhealthy. I tried removing the health check from their Dockerfiles, but it had no effect: the containers no longer report a health status, but still no ports are published.
Then I tried to ping the containers within their network (network block in docker-compose commented out). From my Windows workstation
> docker inspect 4ce2be94fbe8 (tomcat)
....
"NetworkID": "8196b4a9dab76b899494f427286c0a9250ba4b74f8e4c6dbb8cd4459243509ac",
"EndpointID": "17d969ad49fe127870f73e63211e309f23d37a23d2918edb191381ffd7b2aaff",
"Gateway": "172.25.0.1",
"IPAddress": "172.25.0.3",
....
(Oh, cool, the server is listening on port 8080 on that)
> telnet 172.25.0.1 8009
(connection failed)
> tracert 172.25.0.1
(a number of nodes)
The tracert result (which I have omitted) is interesting: Windows 10 tries to reach 172.25.x.x, which is a private address in the 172.16.0.0/12 range, through the main gateway, only to be ignored by our external ISP (4 external hops appear in the trace).
Okay, Windows has not configured routing tables.
Let's try on our docker server running CentOS
$ docker inspect 60bbed143adf
.....
"NetworkID": "10a52bc3f822f756f5b76c300787be5af255afd061453add0c70664f69ee06c8",
"EndpointID": "f054747f6a5d0370916caa74b8c01c3e7b30d255e06ebb9d0c450bf1db38efb1",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.2",
"IPPrefixLen": 16,
.....
[docker#DOCKER01 ssl]$ telnet 172.19.0.3 8009
Trying 172.19.0.3...
Connected to 172.19.0.3.
Escape character is '^]'.
It's interesting that I can finally access the network
Conclusion: question
Can somebody help me understand why I can't map port 8009 (AJP) from the web container to the host machine? If I can achieve that, the web application will be available to the Apache load balancer via the AJP protocol.

In the compose file, the container port 8009 is exposed on the host port ${EDCOM3_AJP_PORT}.
So you should be able to access your tomcat AJP with <IP-OF-DOCKER-SERVER>:${EDCOM3_AJP_PORT}.
Port publication is done with the ports section; expose only "exposes ports without publishing them to the host machine - they'll only be accessible to linked services".
But we can see in the docker ps output that the PORTS column is empty for the edcom3-tomcat container, so I suspect that EDCOM3_AJP_PORT is not defined correctly (but then the startup should fail...).
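One quick way to check that is to look at the resolved configuration (a sketch, assuming the .env next to the compose file contains the two ports mentioned in the question):

# .env
EDCOM3_SQL_PORT=50000
EDCOM3_AJP_PORT=50001

# print the compose file with variables substituted; the tomcat ports section should show 50001:8009
docker-compose config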
On using 172.25.0.1:8009: the container IP is private to the docker host (the CentOS machine), so no problem to access it (on any listening port of the container) from this machine, but it's not possible from any other machine.
See Container networking - Published ports:
By default, when you create a container, it does not publish any of its ports to the outside world.
To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag.
This creates a firewall rule which maps a container port to a port on the Docker host.
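In plain docker run terms, that rule corresponds to an explicit publish flag, for example (image name and host port taken from the question):

docker run -d -p 50001:8009 edcom3-tomcat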

I have found that the edcom3-net network was the main culprit.
My issue was caused by a number of unfortunate coincidences, so let me put some order into it.
Ports not published when using an internal network
Simply put, I have found that if you connect a Docker container to an internal network, its ports won't be published on the host; the container must also be attached to a non-internal (regular bridge) network for port publishing to work. If you comment out the network part of the docker-compose file, the ports are published successfully.
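A minimal sketch of the difference (service and network names mirror the ones above):

services:
  edcom3-tomcat:
    image: edcom3-tomcat
    ports:
      - "50001:8009"    # published on the host only if edcom3-net is NOT internal
    networks:
      - edcom3-net

networks:
  edcom3-net:
    driver: bridge
    # internal: true    # with this line uncommented, the port mapping above is not published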
Host up but not reachable when internal network is in use
I found this to be another aspect, which requires additional investigation and probably a separate Q&A.
What I have discovered is that when you attach a container to an internal network, it obtains its own private IP address and its ports are not published. Still, the container is listening on that IP address. There are two notable scenarios here:
1. If you run a load balancer in another Docker container (this time on the host network)
You will have a hard time reaching the target container. You must configure routing tables to link the multiple networks (see the sketch after this list).
2. If the load balancer runs on the Docker host, instead
It should be easier to set up a reverse proxy. However, I had the unfortunate coincidence to run this scenario on a Windows host, and Docker Desktop, in my case, did not set the routing tables. Even after docker network inspecting the network edcom3-net several times, the 172.16.x.x address of Tomcat was routed via the main gateway.
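For scenario 1, a common workaround (a sketch, not what I verified myself) is to attach the load-balancer container to both the internal application network and a regular bridge network, so it can publish ports while still reaching the application by service name:

services:
  apache-proxy:              # hypothetical reverse-proxy service
    image: httpd:2.4
    ports:
      - "80:80"
    networks:
      - public
      - edcom3-net           # can reach edcom3-tomcat:8009 by service name
  edcom3-tomcat:
    image: edcom3-tomcat
    networks:
      - edcom3-net

networks:
  public:
    driver: bridge
  edcom3-net:
    driver: bridge
    internal: true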

Related

Docker container networking - internal ports open to everyone

I am new to docker and have trouble setting up the network between the containers to not allow unnecessary connections from outside.
I have a Docker running on a VPS with three containers on a remote IP 123.xxx.xxx.xxx
container name    published ports    IP address
sqldb             3306:3306          172.xxx.xxx.4
applet1           80:3306            172.xxx.xxx.5
applet2           4444:4444          172.xxx.xxx.3
One is a database and two are Java apps. The trouble I am having right now is that when I create the containers, their ports become exposed to the global internet, so my database sqldb is exposed at 123.xxx.xxx.xxx:3306.
Right now my Java apps connect through JDBC like so: jdbc:mysql://172.xxx.xxx.4:3306/db.
I am trying to accomplish the following:
port 80 on the host, so that 123.xxx.xxx.xxx connects to the Java app applet1.
The goal is to give applet1 the ability to connect to sqldb and also to applet2, but I don't want unnecessary ports to be exposed to the whole internet. Preferably the internal URIs would stay as they are, but connections from outside (apart from SSH on port 22 and TCP on port 80) would be forbidden for ports 4444 and 3306. Also, I don't yet know how to use docker-compose, so if possible, how can I solve it without it?
* I have heard you can connect to containers by container name, like jdbc:mysql://sqldb/db, but I have not had success with that yet.
If all your containers are running on the same docker bridge network, you don't need to expose any ports for them to communicate with each other.
Docker Compose is a particularly good tool for organising several containers like this as it automatically configures a network for you
# docker-compose.yaml
version: '3.9'
services:
  sqldb:
    image: sqldb
  applet1:
    image: applet1
    ports:
      - '80:3306' # you sure about this container port?
    depends_on:
      - sqldb
  applet2:
    image: applet2
    depends_on:
      - sqldb
Now only your applet1 container will have a host port mapping. Both applets will be able to connect to any other service within the network on their container ports.
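To connect by service name, as hinted at in the original question, the JDBC URL from inside applet1 would look like this (assuming the database is named db):

jdbc:mysql://sqldb:3306/db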

How to host multiple environments for a project using docker in the same machine?

I have a typical web stack that consists of nginx + django + database components.
I have set them up in different docker containers with docker-compose and it's running fine.
services:
  billing_app_dev:
    image: jameslin/billing_app:latest
    expose:
      - 8000
  billing_postgres:
    image: postgres:10.5
    restart: always
    volumes:
      - pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
  billing_nginx:
    image: jameslin/billing_nginx:${TAG}
    volumes:
      - app_files:/static
    links:
      - 'billing_app'
    ports:
      - 80:80
Now I am wondering how I can set up DEV and QA environments on a single machine. I can change the django and database containers to listen on different ports, but it looks like I cannot run the nginx containers individually, since port 80 can only be listened on by one container.
I will have to share the nginx container for those 2 environments which doesn't seem very clean.
Are there any better ideas if running 2 VMs is not possible?
I have 3 Apache containers and 1 nginx running on the same server, so I'm pretty sure this is not an issue.
For each stack of webserver + database I have a different docker-compose file; this way Docker creates a different network for each stack, avoiding possible port conflicts. You only have to bind your nginx containers to different ports of your server, because only one service can bind to a given host port. Still, each container is a separate "machine", so even on the same network they can use the same container port.
If you really need to run all your services on ports 80 and 443 of your server, you may need to put an nginx on those ports as a reverse proxy, calling those services over the internal Docker network. It's an option, but I have never tried it over the Docker internal network before.
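A sketch of what that looks like in practice: run DEV and QA as two separate Compose projects and only vary the published nginx port (the NGINX_PORT variable and the project names are illustrative):

# in docker-compose.yml, for the nginx service
    ports:
      - "${NGINX_PORT}:80"

# two environments on the same host
docker-compose -p billing_dev --env-file .env.dev up -d   # .env.dev sets NGINX_PORT=8080
docker-compose -p billing_qa  --env-file .env.qa  up -d   # .env.qa  sets NGINX_PORT=8081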
I think what you need is a virtual IP, also called IP aliasing. Even if you just have one network card, you can still set 2 IPs on it.
Then you can set up 2 different nginx containers on the host, each using a different IP but the same port 80.
Something like follows:
cd /etc/sysconfig/network-scripts/
cp ifcfg-eth0 ifcfg-eth0:1
vi ifcfg-eth0:1

# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0:1                ----> sub network card
HWADDR=00:0C:29:45:62:3B
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.109.108       ----> configure a new, different IP
NETMASK=255.255.255.0
For details, see Create Multiple IP Addresses to One Single Network Interface.
For nginx, per the nginx guide, you have to change the config in your nginx container from listen 80 to listen your_ip:80, so that it does not listen on all IP addresses.
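Alternatively (a sketch, using the aliased IP configured above plus a hypothetical primary IP), Docker itself can bind each published port to a specific host IP, so the nginx config inside the containers can stay untouched:

services:
  nginx_env1:
    image: nginx
    ports:
      - "192.168.109.107:80:80"   # hypothetical primary IP of eth0
  nginx_env2:
    image: nginx
    ports:
      - "192.168.109.108:80:80"   # the aliased IP configured on eth0:1 above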

How to access a docker container service running on the same instance with the instance IP and container port

Here is the overview of the problem I am facing.
Suppose I have an instance x.x.x.x where I am running the following services:
Docker container for Prometheus running on 9090
Docker container for Alertmanager running on 9093
The services are running well; I can access each service using the instance URL followed by its port.
Problem:
My service 1 (Prometheus, on X.X.X.X:9090) needs to access service 2 (Alertmanager) via the host URL and port, i.e. X.X.X.X:9093, but this is not possible; it throws the following errors.
How I tested:
I entered the service 1 container and tried telnetting to service 2:
telnet: can't connect to remote host (X.X.X.X): No route to host
I am not sure, but I have come to realize this is due to Docker firewall issues. If so, what can be done, or what are the possible solutions for this scenario? How do I make service 2 accessible to service 1 via <hostIP:port>?
docker ps result
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0ac39e87eb27 prom/alertmanager "/bin/alertmanager --" 53 minutes ago Up 53 minutes 0.0.0.0:9093->9093/tcp alertmanager_1
25a7de42d57f prom/prometheus "/bin/prometheus --co" 53 minutes ago Up 53 minutes 0.0.0.0:9090->9090/tcp prometheus
NOTE:
I am running the services using docker-compose, so docker-compose solutions would be more appreciated.
It looks like you're not linking your services. The proper solution would be to run both services from the same docker-compose file and link them together. Then you can use the docker-compose service name to access the Alertmanager in the other container from Prometheus.
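A minimal sketch of that idea (image tags and the prometheus.yml fragment are illustrative): both services end up on the same default Compose network, so Prometheus reaches Alertmanager by service name instead of by host IP.

# docker-compose.yml
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
  alertmanager:
    image: prom/alertmanager
    ports:
      - "9093:9093"

# prometheus.yml, alerting section
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']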

docker deploy won't publish port in swarm

I've got a swarm set up with two nodes, one manager and one worker. I'd like to have a port published in the swarm so I can access my applications, and I wonder how I achieve this.
version: '2'
services:
  server:
    build: .
    image: my-hub.company.com/application/server:latest
    ports:
      - "80:80"
This exposes port 80 when I run docker-compose up and it works just fine, however when I run a bundled deploy
docker deploy my-service
This won't publish the port, so it just says 80/tcp in docker ps, instead of pointing to a port. Maybe this is because I need to attach a load balancer, run some fancy command, or add another layer of config to actually expose this port in a multi-host swarm.
Can someone help me understand what I need to configure/do to make this expose a port?
My best case scenario would be that port 80 is exposed, and if I access it from different hostnames it will send me to different applications.
Update:
It seems to work if I run the following commands after deploying the application
docker service update -p 80:80 my-service_server
docker kill <my-service_server id>
I found this repository for running an HA proxy; it seems great and is supported by Docker themselves. However, I cannot seem to apply it separately to my services using the new swarm mode.
https://github.com/docker/dockercloud-haproxy
There's a nice description in the bottom describing how the network should look:
Internet -> HAProxy -> Service_A -> Container A
However, I cannot find a way to link services through the docker service create command; optimally it would be a way to set up a network, so that when I apply this network to a service, the HAProxy will pick it up.
-- Marcus
As far as I understand, for the moment you can only publish ports by updating the service after its creation, like this:
docker service update my-service --publish-add 80:80
Swarm mode publishes ports in a different way. It won't show up in docker ps because it's not publishing the port on a single host; it publishes the port on all nodes, so that the routing mesh can load-balance between service replicas.
You should see the port from docker service inspect my-service.
Any other service should be able to connect to my-service:80
docker service ls will display the port mappings.
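For example (a sketch using the service name from the question; these are standard docker CLI commands):

# add a published port after the service has been created
docker service update --publish-add 80:80 my-service_server

# check the published ports on the service itself, not via docker ps
docker service inspect --format '{{json .Endpoint.Ports}}' my-service_server
docker service ls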

Host network access from linked container

I have the following docker-compose configuration:
version: '2'
services:
  nginx:
    build: ./nginx
    links:
      - tomcat1:tomcat1
      - tomcat2:tomcat2
      - tomcat3:tomcat3
    ports:
      - "80:80"
  tomcat1:
    build: ./tomcat
    ports:
      - "8080"
  tomcat2:
    build: ./tomcat
    ports:
      - "8080"
  tomcat3:
    build: ./tomcat
    ports:
      - "8080"
So, the question is: how to get access to the host network from the linked containers tomcat1, tomcat2, tomcat3. Here is the diagram:
Update
It seems my diagram doesn't help much. Nginx is a load balancer, Tomcat 1-3 are application nodes. The deployed web app needs to access an internet resource.
Internet access is by default active on all containers (in bridge mode). All you need to check is if the http(s)_proxy variables are set if you are behind a proxy.
If your question is how to access the docker host from a container (and not the reverse: access a container from the local docker host), then you would need to inspect the routing table of a container: see "From inside of a Docker container, how do I connect to the localhost of the machine?"
export DOCKER_HOST_IP=$(route -n | awk '/UG[ \t]/{print $2}')
There is a recent (June 2016) effort to add a dockerhost entry in /etc/hosts of all running containers: issue 23177.
Update March 2020: this issue has been closed, and redirect to PR 40007: "Support host.docker.internal in dockerd on Linux"
This PR allows containers to connect to Linux hosts by appending a special string "host-gateway" to --add-host e.g. "--add-host=host.docker.internal:host-gateway" which adds host.docker.internal DNS entry in /etc/hosts and maps it to host-gateway-ip
This PR also adds a daemon flag called host-gateway-ip which defaults to the default bridge IP.
Docker Desktop will need to set this field to the Host Proxy IP so DNS requests for host.docker.internal can be routed to VPNkit
This will be in Docker for Linux (and Docker Desktop, which runs the Linux daemon, although inside a lightweight VM).
Difference between this and the current implementation on Docker Desktop is that;
the current Docker Desktop implementation is in a part of the code-base that's proprietary (i.e., part of how Docker Desktop is configured internally)
this code could be used by the Docker Desktop team in future as well (to be discussed)
this PR does not set up the "magic" host.docker.internal automatically on every container, but it can be used to run a container that needs this host by adding docker run --add-host host.docker.internal:host-gateway
(to be discussed); setting that "magic" domain automatically on containers that are started could be implemented by adding an option for this in the ~/.docker/config.json CLI configuration file.
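In docker-compose terms (a sketch; host-gateway requires Docker Engine 20.10 or later), the equivalent of that --add-host flag for one of the tomcat services would be:

services:
  tomcat1:
    build: ./tomcat
    extra_hosts:
      - "host.docker.internal:host-gateway"   # lets the app reach the host machine by this name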
