Running docker stack deploy just hangs - docker

I'm trying to deploy my docker-compose setup to the local Kubernetes cluster that comes by default with Docker Desktop.
I run the following command and it just ... hangs:
> docker stack deploy --orchestrator=kubernetes -c docker-compose.yml hornet
Ignoring unsupported options: build
Ignoring deprecated options:
container_name: Setting the container name is not supported.
expose: Exposing ports is unnecessary - services on the same network can access each other's containers on any port.
top-level network "backend" is ignored
top-level network "frontend" is ignored
service "website.public": network "frontend" is ignored
service "website.public": container_name is deprecated
service "website.public": build is ignored
service "website.public": depends_on are ignored
....
<snip> heaps of services 'ignored'
....
Waiting for the stack to be stable and running...
The docker-compose up command works great when I run it locally.
Is there any way to see what's going on under the hood and find out what makes this hang?
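One way to peek under the hood: Docker Desktop ships a kubectl that is already pointed at the built-in cluster, so you can inspect what the stack controller actually created. A sketch (the stacks resource comes from compose-on-kubernetes and may vary by Docker Desktop version):

kubectl get stacks                       # the stack object docker stack deploy created
kubectl get pods,deployments,services    # what actually got scheduled
kubectl describe pod <pod-name>          # the Events section shows why a pod is stuck

A pod sitting in Pending or ImagePullBackOff is a common reason the deploy never reports the stack as stable.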

Related

No ports, no connectivity to Docker compose stack

I am new to Docker-compose and I am struggling to run my Tomcat+Postgres stack.
I have "successfully" launched the stack, in the sense that my Java Web Application successfully connects to Postgresql and deploys into Tomcat.
But no ports are mapped and the containers are not reachable from the host, even though they can reach each other.
The following is my project layout (I use Palantir's Gradle Docker plugin)
edcom3-docker/
  edcom3-tomcat/
    build.gradle
    src/main/
      docker/Dockerfile
      resources
      webapps/edcom3.war
    (Other stuff I am too lazy to list)
  edcom3-postgres/
    build.gradle
    src/main/
      docker/Dockerfile
  src/main/docker/
    docker-compose.yml
    .env
Thanks to Gradle Docker plugin, the context is built into $baseDir/build/docker
The following is my current docker-compose.yml. I expanded the directory structure above to explain the relative build paths:
version: '3'
services:
  edcom3-postgres:
    build: ../../../edcom3-postgres/build/docker
    image: edcom3-postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
#    networks:
#      - edcom3-net
    expose:
      - "5432/tcp"
    ports:
      - "${EDCOM3_SQL_PORT}:5432"
    volumes:
      - "edcom3-postgres-data:/var/lib/postgresql/data"
  edcom3-tomcat:
    depends_on:
      - edcom3-postgres
    build: ../../../edcom3-tomcat/build/docker
    image: edcom3-tomcat
    expose:
      - "8009/tcp"
      - "8080/tcp"
    ports:
      - "${EDCOM3_AJP_PORT}:8009"
    volumes:
      - "edcom3-config-location:/home/tomcat"
      - "edcom3-file-repository:/mnt/fileRepository"
      - "edcom3-logs:/mnt/phoenix-logs"
      - "edcom3-tomcat-logs:/usr/local/tomcat/logs"
    restart: always
#    networks:
#      - edcom3-net
    links:
      - edcom3-postgres
#networks:
#  edcom3-net:
#    driver: bridge
#    internal: true
volumes:
  edcom3-config-location:
  edcom3-file-repository:
  edcom3-logs:
  edcom3-tomcat-logs:
  edcom3-postgres-data:
What I have tried
I first run gradle :edcom3-tomcat:docker and :edcom3-postgres:docker to build the two contexts.
Then I cd into src/main/docker of the main project, where the above docker-compose.yml is located, and launch the stack.
edcom3-tomcat_1 | 06-Feb-2020 15:51:12.943 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/usr/local/tomcat/webapps/edcom3.war] has finished in [66,278] ms
The stack starts and the application is deployed. As you can see, I have instructed docker-compose to expose the AJP port (the variables are bound to ports 50000 and 50001) so that Apache can reverse-proxy into Tomcat. Apache runs as a stand-alone container.
But I can't find the port bindings in docker ps
[docker@DOCKER01 ~]$ docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED          STATUS                      PORTS                                                       NAMES
78acb0e5ff5d   edcom3-tomcat                    "catalina.sh run"        11 minutes ago   Up 11 minutes (unhealthy)                                                               edcom3_edcom3-tomcat_1
60bbed143adf   edcom3-postgres                  "docker-entrypoint.s…"   16 minutes ago   Up 16 minutes (unhealthy)                                                               edcom3_edcom3-postgres_1
23265ae20793   postgres:11.6-alpine             "docker-entrypoint.s…"   7 weeks ago      Up 2 days                   192.168.0.72:5432->5432/tcp                                 postgres11
9c8b0eda42e9   portainer/portainer:1.23.0       "/portainer --ssl --…"   7 weeks ago      Up 2 days                   192.168.0.72:8000->8000/tcp, 192.168.0.72:9000->9000/tcp    keen_grothendieck
63985a2c656f   initech/sqlserver2017:20191204   "/opt/mssql/bin/nonr…"   2 months ago     Up 2 days (healthy)         192.168.0.72:1433->1433/tcp                                 sqlserver2017
09589b076513   oracle/database:12.2.0.1-SE2     "/bin/sh -c 'exec $O…"   2 months ago     Up 2 days (healthy)         192.168.0.72:1521->1521/tcp, 192.168.0.72:5500->5500/tcp    oracle12c
Considerations: (un)commenting the network in the compose file has no effect.
I can clearly see that the containers are reported unhealthy. I tried removing the health check from their Dockerfiles, but it had no effect: the containers no longer report a health status, yet still no ports are published.
Then I tried to reach the containers within their network (with the networks block in docker-compose commented out). From my Windows workstation:
> docker inspect 4ce2be94fbe8 (tomcat)
....
"NetworkID": "8196b4a9dab76b899494f427286c0a9250ba4b74f8e4c6dbb8cd4459243509ac",
"EndpointID": "17d969ad49fe127870f73e63211e309f23d37a23d2918edb191381ffd7b2aaff",
"Gateway": "172.25.0.1",
"IPAddress": "172.25.0.3",
....
(Oh, cool, the server is listening on port 8080 on that)
> telnet 172.25.0.1 8009
(connection failed)
> tracert 172.25.0.1
(a number of nodes)
It is interesting to see the tracert result (which I have omitted): Windows 10 tries to reach 172.25.x.x, a private address in the 172.16.0.0/12 range, through the main gateway, only to be ignored by our external ISP (four external hops appear in the trace).
Okay, Windows has not configured routing tables.
Let's try on our docker server running CentOS
$ docker inspect 60bbed143adf
.....
"NetworkID": "10a52bc3f822f756f5b76c300787be5af255afd061453add0c70664f69ee06c8",
"EndpointID": "f054747f6a5d0370916caa74b8c01c3e7b30d255e06ebb9d0c450bf1db38efb1",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.2",
"IPPrefixLen": 16,
.....
[docker@DOCKER01 ssl]$ telnet 172.19.0.3 8009
Trying 172.19.0.3...
Connected to 172.19.0.3.
Escape character is '^]'.
It's interesting that from the Docker host I can finally reach the container on that port.
Conclusion: question
Can somebody help me understand why I can't map port 8009 (AJP) from the web container to the host machine? If I can achieve that, the web application will be available to the Apache load balancer via the AJP protocol.
In the compose file, the container port 8009 is exposed on the host port ${EDCOM3_AJP_PORT}.
So you should be able to access your tomcat AJP with <IP-OF-DOCKER-SERVER>:${EDCOM3_AJP_PORT}.
Port publication is done with the ports section; expose only exposes ports without publishing them to the host machine: they'll only be accessible to linked services.
But we can see in the docker ps output that the PORTS column is empty for the edcom3-tomcat container, so I'd suggest that EDCOM3_AJP_PORT is not well defined (though in that case the deploy should fail...).
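A quick way to verify that, as a sketch: docker-compose config prints the compose file with the .env variables substituted, so an empty EDCOM3_AJP_PORT would show up immediately:

docker-compose config | grep -A 2 ports    # shows the resolved port mappings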
On using 172.25.0.1:8009: the container IP is private to the Docker host (the CentOS machine), so it can be reached (on any listening port of the container) from that machine, but not from any other machine.
See Container networking - Published ports:
By default, when you create a container, it does not publish any of its ports to the outside world.
To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag.
This creates a firewall rule which maps a container port to a port on the Docker host.
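To make the distinction concrete, a minimal sketch (the host port number is illustrative):

services:
  edcom3-tomcat:
    expose:
      - "8009"           # reachable only from other containers on the same network
    ports:
      - "50001:8009"     # published: reachable as <docker-host-ip>:50001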
I have found that the edcom3-net network was the main culprit.
My issue was caused by a number of unfortunate coincidences, so let me put them in order.
Ports not exposed with network
Simply put, I have found that if you connect a Docker container only to an internal network, it won't publish ports on the host. It must also be connected to a non-internal network for its ports to be published. If you comment out the networks part of the docker-compose file, the ports are successfully published.
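In compose terms the difference is a single flag; this mirrors the commented-out block in the question:

networks:
  edcom3-net:
    driver: bridge
    internal: true    # isolates the network: ports of attached containers are not published

Dropping internal: true (or attaching the service to a second, non-internal network) lets the ports publish again.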
Host up but not reachable when internal network is in use
I found this to be another aspect which requires additional investigation, and it rather deserves a separate Q&A.
What I have discovered is that when you attach a container to an internal network, it obtains its own private IP address but does not publish ports. Still, the container has an IP address and listening ports within that network. There are two noticeable cases here:
1. If you run a load balancer on another Docker container (this time on the host network)
You will have a hard time reaching the target container. You must configure routing tables to link multiple networks.
2. If the load balancer runs on the Docker host, instead
It should be easier to set up a reverse proxy. However, I had the unfortunate coincidence of running this scenario on a Windows host, and Docker Desktop, in my case, did not set up the routing tables. Even after running docker network inspect on edcom3-net several times, the 172.16.x.x address of Tomcat was still routed via the main gateway.

Docker: Restart Container only on reboot?

I've got a docker-compose service that needs to be restarted only when Docker or the system restarts. The service should not restart when an error occurs or when the service completes. The flags --restart unless-stopped and --restart always don't work for me, because with these flags the service also restarts when an error occurs.
I have the same question. I tried using the docker-compose restart_policy and found that it did not work:
services:
  hello:
    deploy:
      restart_policy:
        condition: ...
WARNING: Some services (hello) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm.
See the answer here: Docker: Restart Container only on reboot?
So I then considered doing something in the Dockerfile, but the docs instead suggest setting up an external process manager to start and restart containers, using the same command we normally use to start them.
See https://docs.docker.com/config/containers/start-containers-automatically/
If restart policies don't suit your needs, such as when processes outside Docker depend on Docker containers, you can use a process manager such as upstart, systemd, or supervisor instead.
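As a hedged sketch of that advice: a systemd unit with no Restart= directive starts the stack at boot (and, via PartOf=, when the Docker daemon restarts) but never restarts it when a container exits on its own. Combine it with the default restart: "no" in the compose file so Docker itself doesn't restart the services either. The unit name and paths below are illustrative:

# /etc/systemd/system/myapp.service  (hypothetical name and paths)
[Unit]
Description=Bring up the compose stack at boot only
Requires=docker.service
After=docker.service
PartOf=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose down

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable myapp.service.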

Can't find Docker Compose network entry

I am trying to communicate from one Docker container running on my Win10 laptop with another container also running locally.
I start up the target container, and I see the following network:
docker network ls
NETWORK ID     NAME                                           DRIVER   SCOPE
...
f85b7c89dc30   w3virtualservicew3id_w3-virtual-service-w3id   bridge
I then start up my calling container with docker-compose up. I can then successfully connect my other container to the network via the command line:
docker network connect w3virtualservicew3id_w3-virtual-service-w3id w3vacationatibmservice_rest_1
However, I can't connect to that same network by adding it to the network section of my docker-compose.yml file for the calling container. I was under the impression that they both basically did the same thing:
networks:
  - w3_vacation-at-ibm_service
  - w3virtualservicew3id_w3-virtual-service-w3id
The error message tells me it can't find the network, which is not true, since I can connect via the command line, so I know it's really there and running:
ERROR: Service "rest" uses an undefined network "w3virtualservicew3id_w3-virtual-service-w3id"
Anyone have any idea what I'm missing?
The network you reference under your service is expected to be defined inside the top-level networks section (the same goes for volumes):
version: 'X.Y'
services:
  calling_container:
    networks:
      - your_network
networks:
  your_network:
    external: true
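Applied to the network in question: external: true tells Compose not to create the network but to look it up by its exact name (it already exists here because the other compose project created it). A sketch for the calling project's compose file, using the service name from the error message:

services:
  rest:
    networks:
      - w3virtualservicew3id_w3-virtual-service-w3id
networks:
  w3virtualservicew3id_w3-virtual-service-w3id:
    external: true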
Do you really have to use a separate compose yml for your calling container? If your containers interact with each other, you should add them both to one and the same compose file. In that case, you don't have to specify any network: they will automatically be on the same network.

How do I run docker-compose up on a docker swarm?

I'm new to Docker and trying to get started by deploying a hello-world Flask app locally on Docker Swarm.
So far I have my Flask app, a Dockerfile, and a docker-compose.yml file.
version: "3"
services:
webapp:
build: .
ports:
- "5000:5000"
docker-compose up works fine and deploys my Flask app.
I have started a Docker Swarm with docker swarm init, which I understand created a swarm with a single node:
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
efcs0tef4eny6472eiffiugqp *   moby       Ready    Active         Leader
Now, I don't want workers or anything else, just a single node (the manager node created by default), and deploy my image there.
Looking at these instructions https://docs.docker.com/get-started/part4/#create-a-cluster it seems like I have to create a VM with a driver, then scp my files there, and ssh in to run docker-compose up. Is that the normal way of working? Why do I need a VM? Can't I just run docker-compose up on the swarm manager? I didn't find a way to do so, so I'm guessing I'm missing something.
Running docker-compose up will create individual containers directly on the host.
With swarm mode, all the commands to manage containers have shifted to docker stack and docker service which manage containers across multiple hosts. The docker stack deploy command accepts a compose file with the -c arg, so you would run the following on a manager node:
docker stack deploy -c docker-compose.yml stack_name
to create a stack named "stack_name" based on the version 3 yml file. This command works the same regardless of whether you have one node or a large cluster managed by your swarm.
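One caveat for the compose file above: docker stack deploy ignores the build option (as the "Ignoring unsupported options: build" warning in the first question shows), so the image must be built beforehand and referenced by name. A sketch with an illustrative tag:

docker build -t webapp:latest .

and in docker-compose.yml, replace build: . with:

services:
  webapp:
    image: webapp:latest
    ports:
      - "5000:5000"

On a single-node swarm the locally built image is found directly; with more nodes you would push it to a registry first.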

Link Running External Docker to docker-compose services

I assume that there is a way to link via one or a combination of the following: links, external_links and networking.
Any ideas? I have come up empty handed so far.
Here is an example snippet of a docker-compose setup which is started from within a separate Ubuntu container:
version: '2'
services:
  web:
    build: .
    depends_on:
      - redis
  redis:
    image: redis
I want to be able to connect to the redis port from the Docker that launched the docker-compose.
I do not want to bind the ports on the host, as that would mean I can't start multiple docker-compose stacks from the same model.
-- context --
I am attempting to run a docker-compose stack from within a Jenkins Maven build container so that I can run tests. But I cannot for the life of me get the original container to access the exposed ports of the compose services.
Reference the containers by hostname: compose file version 2 automatically connects the services by hostname on a private network by default. You'll be able to ping "web" and "redis" from within each container. If you want to access the services from your host, include a "ports" definition for each service in your yml.
The v1 links were removed from the v2 compose syntax since they are now implicit. From the docker-compose file documentation:
links with environment variables: As documented in the environment variables reference, environment variables created by links have been deprecated for some time. In the new Docker network system, they have been removed. You should either connect directly to the appropriate hostname or set the relevant environment variable yourself...
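As a concrete sketch of connecting by hostname: once the calling container is attached to the compose project's network (for example with docker network connect, as in the previous question), the service name resolves directly. Assuming redis-cli is installed in the calling container:

redis-cli -h redis -p 6379 ping    # "redis" resolves to the redis service's container
# expected reply: PONG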
