I'm trying to build an application that is able to use local integration testing via Docker Compose with Google Cloud emulator containers, while also being able to run that same Docker Compose configuration on a Docker-based CI/CD tool (Google Cloud Build).
The kind of docker-compose.yml configuration I'm using is:
version: '3.7'
services:
  main-application:
    build:
      context: .
      target: dev
    image: main-app-dev
    container_name: main-app-dev
    network_mode: $DOCKER_NETWORK
    environment:
      - MY_ENV=my_env
    command: ["sh", "-c", "PYTHONPATH=./ python app/main.py"]
    volumes:
      - ~/.config:/home/appuser/.config
      - ./app:/home/appuser/app
      - ./tests:/home/appuser/tests
    depends_on:
      - firestore
  firestore:
    image: google/cloud-sdk
    container_name: firestore
    network_mode: $DOCKER_NETWORK
    environment:
      - GCP_PROJECT_ID=dummy-project
    command: ["sh", "-c", "gcloud beta emulators firestore start --project=$$GCP_PROJECT_ID --host-port=0.0.0.0:9000"]
    ports:
      - "9000:9000"
I added the network_mode arguments so the configuration can use the "cloudbuild" network that is available on the CI/CD pipeline, which currently works perfectly. However, that network is not available to local Docker, which is why I've tried to use an environment variable to switch between the local and Cloud Build environments.
Before I added these network_mode params/args for the CI/CD, local testing worked just fine. Since adding them, my application either can't run, or can't connect to its accompanying services, like the firestore one specified in the YAML above.
I have tried the following valid Docker network modes with no success:
"bridge" - runs the service, but doesn't allow connection between containers
"host" - doesn't allow the service to run because of not being compatible with assigning ports
"none" - doesn't allow the service to connect externally
"service" - doesn't allow the service to run due to invalid mode/service
Anyone able to provide advice on what I'm missing here?
I would assume one of these network modes is what Docker Compose uses when network_mode is not assigned, so I'm not sure why all of them fail.
I want to avoid having separate Cloud Build and local configuration files, and would also like to avoid the hassle of setting up my own Docker network locally. Ideally, if there were some way of applying network_mode only remotely, that would work best in my case.
TL;DR:
Specifying network_mode does not give me the same result as not specifying it when running docker-compose up locally.
Because the configuration also runs in the cloud, I can't avoid specifying it.
Found a solution thanks to this thread and the comment by David Maze.
As far as I understand it, when Docker Compose isn't given a specific network_mode for the containers, it creates its own private default network, named (by default) after the folder containing the docker-compose.yml file.
Specifying a network mode such as the default "bridge" network, instead of using this custom network created by Docker Compose, means container discovery between services isn't possible; main-application couldn't find the firestore:9000 container.
Basically all I had to do was set the network_mode variable to myapplication_default (given that the folder containing docker-compose.yml was called "MyApplication"), forcing all the containers onto the same custom network that docker-compose up sets up.
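With the $DOCKER_NETWORK variable already in the compose file above, the switch can then be made per environment. A minimal sketch using the network names from this thread (adjust myapplication_default to your own project folder name):
# locally: use the default network Compose creates for the "MyApplication" folder
DOCKER_NETWORK=myapplication_default docker-compose up
# on Cloud Build: point the same variable at the special build network
DOCKER_NETWORK=cloudbuild docker-compose up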
Related
Scenario
The project I'm working on (a React app) uses docker-compose to set up its backend, webserver and frontend. I'm working inside a VSCode devcontainer (Node with TypeScript).
The Docker-in-Docker environment I've set up seems to work fine and I'm able to start each of the Docker containers, but I had to adapt the code in the following manner because otherwise Docker wasn't able to locate the specified volumes to mount.
Setup
First I needed to set a remote environment variable in my devcontainer.json:
"remoteEnv": {
// the original host directory which is needed for volume mount commands from inside the container (Docker in Docker)
"LOCAL_WORKSPACE_FOLDER": "${localWorkspaceFolder}"
}
I'm then using this environment variable in the docker-compose.yaml like so:
services:
  webserver:
    build:
      context: ./docker
      dockerfile: webserver/Dockerfile
    image: webserver
    container_name: webserver_nginx
    ports:
      - 8080:80
    volumes:
      - ${LOCAL_WORKSPACE_FOLDER}/webserver:/etc/nginx/conf.d
      - ${LOCAL_WORKSPACE_FOLDER}/build:/var/www/html
    restart: unless-stopped
    depends_on:
      - backend
  backend:
    ...
Problem
On my machine (and on the machines of my colleagues who also use VSCode) everything works fine. But I have some team members who don't use VSCode. When I commit the adapted docker-compose.yaml file, their setup doesn't work anymore, and vice versa when they adapt the file back to their needs.
Question
How can I ensure that Docker Compose works both inside and outside of VSCode's devcontainers?
Possible solutions?
Would it be possible to set the environment variable to a default value? In my case, the value that should be used when the project is not opened inside a devcontainer is just a simple dot (.). When I run echo ${LOCAL_WORKSPACE_FOLDER} inside the integrated VSCode terminal, the correct path gets printed, so it seems that VSCode just sets a normal environment variable.
(If the assumption above is correct) wouldn't it be possible to write a simple Bash script, install.sh, that sets the correct path automatically? The script would only need to be run once during project setup. What could this file look like?
Docker Compose allows default values for variables:
${VARIABLE:-default} evaluates to default if VARIABLE is unset or
empty in the environment.
See: https://docs.docker.com/compose/environment-variables/
For your case, you can use:
${LOCAL_WORKSPACE_FOLDER:-.}
PS: I have never used that personally
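Applied to the webserver volumes from the question, it would look roughly like this (a sketch, not tested inside a devcontainer):
webserver:
  volumes:
    # fall back to the current directory when the variable is unset,
    # e.g. for colleagues running docker-compose outside of VSCode
    - ${LOCAL_WORKSPACE_FOLDER:-.}/webserver:/etc/nginx/conf.d
    - ${LOCAL_WORKSPACE_FOLDER:-.}/build:/var/www/html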
There are a lot of applications which I launch on my workstation using docker-compose up.
Reasons:
They don't have an installer, or I don't want to use it
They require a dedicated storage engine to be present
They require a build process step
They are created by me and I want them to be easily launched on any workstation
etc.
So I usually end up with the following file structure:
myAppDir
- docker-compose.yml
- Dockerfile (not always)
- someConfigFile
And my docker-compose.yml is something like this:
(It can contain 2 or 3 services, but I provide the simplest form that I use)
version: '3.7'
services:
  mysql:
    image: mysql:5.7.29
    restart: always
    volumes:
      - ./mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
    environment:
      - MYSQL_ROOT_PASSWORD=xyz
    ports:
      - 3306:3306
Then when I need to launch the application I just perform:
docker-compose up # (or with --build)
Recently I tried to add:
deploy:
  resources:
    limits:
      cpus: '0.50'
      memory: 200M
and got a message:
Some services (mysql) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm.
So I tried:
docker stack deploy mystack --compose-file docker-compose.yml
and got the message:
Ignoring unsupported options: restart
this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
This seems more complex than docker-compose up.
I saw that I can use --compatibility flag e.g.
docker-compose --compatibility up
But the word compatibility suggests to me that I should soon switch to a new way of launching my apps locally.
My question is: What is the new procedure that I should follow for launching apps on my workstation using a docker and a descriptor file, in order to support options present in Compose file v3?
If you want to specify memory limits and similar constraints for local containers, you need to use a version 2 Compose file. This is called out in the documentation for the deploy: resources: section. docker/compose#4513 has some reasonably clear statements that Compose file version 2 is more targeted at local setups and version 3 more at Swarm installations, and that Docker intends to keep supporting both file versions.
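For the mysql service above, a version 2 sketch with equivalent limits might look like this (cpus requires Compose file format 2.2 or later):
version: '2.4'
services:
  mysql:
    image: mysql:5.7.29
    restart: always
    # v2-style equivalents of deploy.resources.limits
    mem_limit: 200m
    cpus: 0.5
    volumes:
      - ./mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
    environment:
      - MYSQL_ROOT_PASSWORD=xyz
    ports:
      - 3306:3306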
Docker has put many options and functions specific to their Swarm cluster-installation mode into the core product. Anything that mentions a "stack", for example, is specific to a Swarm setup. One consequence of Swarm and plain-Docker things being combined together is that the deploy: Docker Compose options only have an effect in Swarm mode. The documentation for the deploy: key notes:
This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
My question is: What is the new procedure that I should follow for launching apps on my workstation using a docker and a descriptor file, in order to support options present in Compose file v3?
Docker Compose v3 is meant to be used with Docker Swarm deployments, so you need to run Docker in swarm mode; otherwise, keep using v2 and its simpler interface for localhost development.
For example, restart is ignored because that responsibility now belongs to Docker Swarm, not to Docker itself.
Using the compatibility flag essentially converts your v3 compose file into a v2 compose file at runtime.
So in short, use v3 if you want to run Docker in swarm mode and take advantage of all its new features; it's kind of a Kubernetes in Docker land.
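If you want to see what that translation produces, Compose can print the effective configuration without deploying anything; the deploy resource limits should show up as their non-swarm equivalents:
docker-compose --compatibility config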
I am trying to run an executable file from another docker container while already inside a docker container. Is this possible?
version: '3.7'
services:
  py:
    build: .
    tty: true
    networks:
      - dataload
    volumes:
      - './src:/app'
      - '~/.ssh:/ssh'
  winexe:
    build:
      context: ./winexe
      dockerfile: Dockerfile
    networks:
      - dataload
    ports:
      - '8001:8001'
    volumes:
      - '~/path/to/winexe:/usr/bin/winexe'
      - '~/.ssh:/ssh'
    depends_on:
      - py
networks:
  dataload:
    driver: bridge
I am trying to access Winexe from 'py'
Assuming you mean running another Docker container from inside a container, this can be done in several ways:
Install the docker command inside your container and:
1. Contact the hosting Docker instance over TCP/IP. For this you will have to have exposed the Docker host to the network, which is neither the default nor recommended.
2. Map the docker socket (usually /var/run/docker.sock) into your container using a volume. This will allow the docker command inside the container to contact the host instance directly.
Be aware this essentially gives the container root-level access to the host! I'm sure there are many more ways to do the same, but approach number 2 is the one I see most often (sketched below).
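For approach 2, the py service from the question could mount the socket roughly like this (a sketch; the image would also need the docker CLI installed):
py:
  build: .
  tty: true
  volumes:
    - './src:/app'
    - '~/.ssh:/ssh'
    # hands the host's Docker daemon to the container - effectively root on the host
    - '/var/run/docker.sock:/var/run/docker.sock'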
If you mean to run another executable inside another - already running - Docker container, you can do that in the above way as well by using docker exec, or by running some kind of daemon in the second container that accepts commands and runs the required command for you.
So you need to think of your containers as if they were two separate computers, or servers, and they can interact accordingly.
Happily, docker-compose gives you a URL you can use to communicate between the containers. In the case of your docker-compose file, you could access the winexe container from your py container like so:
http://winexe:8001 // or ws://winexe:8001 or postgres://winexe:8001 (you get the idea)
(I've used port 8001 here because that's the port you've made available for winexe – I have no idea if it could be used for this).
So now what you need is something in your winexe container that listens for that request and sends a useful reply (like a server responding to a browser's AJAX call).
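For example, from inside the py container a request like this would reach the winexe container by its service name (assuming curl is installed and you run some HTTP listener there; the path is made up):
curl http://winexe:8001/run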
Learn more here:
https://docs.docker.com/compose/networking/
I have a docker-compose.yml file which works with docker-compose up --build. My app works and everything is fine.
version: '3'
services:
  myapp:
    container_name: myapp
    restart: always
    build: ./myapp
    ports:
      - "8000:8000"
    command: /usr/local/bin/gunicorn -w 2 -b :8000 flaskplot:app
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - myapp
But when I use docker stack deploy -c docker-compose.yml myapp, I get the following error:
Ignoring unsupported options: build, restart
Ignoring deprecated options:
container_name: Setting the container name is not supported.
Creating network myapp_default
Creating service myapp_myapp
failed to create service myapp_myapp: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
Any hints on how I should "translate" the docker-compose.yml file to make it compatible with docker stack deploy?
To run containers in swarm mode, you do not build them on each swarm node individually. Instead you build the image once, typically on a CI server, push to a registry server (often locally hosted, or you can use docker hub), and specify the image name inside your compose file with an "image" section for each service.
Doing that will get rid of the hard error. You'll likely remove the build section of the compose file since it no longer applies.
Specifying "container_name" is unsupported because it would break the ability to scale or perform updates (a container name must be unique within the docker engine). Let swarm name the containers and reference your app on the docker network by it's service name.
Specifying "depends_on" is not supported because containers may be started on different nodes, and rolling updates/failure recovery may remove some containers providing a service after the app started. Docker can retry the failing app until the other service starts up, or preferably you configure an entrypoint that waits for the dependencies to become available with some kind of ping for a minute or two.
Without seeing your Dockerfile, I'd also recommend setting up a healthcheck on each image. Swarm mode uses this to control rolling updates and recover from application failures.
Lastly, consider adding a "deploy" section to your compose file. This tells swarm mode how to deploy and update your service, including how many replicas, constraints on where to run, memory and CPU limits and requirements, and how fast to update the service. You can define a restart policy here as well, but I recommend against it, since I've seen docker engines restarting containers in a way that conflicts with swarm mode deploying containers on other nodes, or even a new container on the same node.
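Putting those points together, a swarm-friendly version of the compose file could look roughly like this (the registry name, replica count and limits are placeholders):
version: '3.7'
services:
  myapp:
    # image built and pushed ahead of time to a registry all nodes can reach
    image: registry.example.com/myapp:1.0
    ports:
      - "8000:8000"
    command: /usr/local/bin/gunicorn -w 2 -b :8000 flaskplot:app
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: '0.50'
          memory: 256M
  nginx:
    image: registry.example.com/my-nginx:1.0
    ports:
      - "80:80"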
You can see the full compose file documentation with all of these options here: https://docs.docker.com/compose/compose-file/
I have a project with a docker-compose file and want to migrate to V3, but when I deploy with
docker stack deploy --compose-file=docker-compose.yml vertx
it does not understand the build path, links, container names...
My file is located here:
https://github.com/armdev/vertx-spring/blob/master/docker-compose.yml
version: '3'
services:
  eureka-node:
    image: eureka-node
    build: ./eureka-node
    container_name: eureka-node
    ports:
      - '8761:8761'
    networks:
      - vertx-network
  postgres-node:
    image: postgres-node
    build: ./postgres-node
    container_name: postgres-node
    ports:
      - '5432:5432'
    networks:
      - vertx-network
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: socnet
      POSTGRES_DB: socnet
  vertx-node:
    image: vertx-node
    build: ./vertx-node
    container_name: vertx-node
    links:
      - postgres-node
      - eureka-node
    ports:
      - '8585:8585'
    networks:
      - vertx-network
networks:
  vertx-network:
    driver: overlay
When I run docker-compose up, it works, but docker stack deploy does not.
How do I define the path to the Dockerfile?
docker stack deploy works only on images, not on builds.
This means that you will have to push your images (created by the build process) to an image registry; docker stack deploy will then download the images and run them.
Here is an example of how it was done for a PHP application.
Pay attention to parts 1, 3 and 4.
The articles are about PHP, but they can easily be applied to any other language.
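The build-and-push step could look roughly like this (the registry address is a placeholder):
docker build -t registry.example.com/eureka-node:1.0 ./eureka-node
docker push registry.example.com/eureka-node:1.0
# repeat for postgres-node and vertx-node, reference those image names
# under "image:" in docker-compose.yml, then deploy:
docker stack deploy --compose-file=docker-compose.yml vertx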
The swarm mode "docker service" interface has a few fundamental differences in how it manages containers. You are no longer directly running containers like with "docker run", and it is assumed that you will be doing this in a distributed environment more often than not.
I'll break down the answer by these specific things you listed.
It does not understand build path, links, container names...
Links
The link option has been deprecated for quite some time in favor of the network service discovery feature introduced alongside the "docker network" feature. You no longer need to specify links to/from containers. Instead, you simply need to ensure that all containers are on the same network, and then they can discover each other by container name or "network alias".
docker-compose will put all your containers into the same network by default, and it sets up the compose service name as an alias. That means if you have a service called 'postgres-node', you can reach it via DNS by the name 'postgres-node'.
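For example, the vertx-node service from the question can reach the database with an ordinary hostname, no links needed; something like this connection string (credentials taken from the compose file above):
jdbc:postgresql://postgres-node:5432/socnet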
Container Names
The "docker service" interface allows you to declare a desired state. "I want x number of identical services". Since the interface must support x number of instances of a service, it doesn't allow you to choose the specific container name. Instead, you get to choose the service name. In the case of 'docker stack deploy', the service name defined under the services key in your docker-compose.yml file will be used, but it will also prepend the stack name to the service name.
In most cases, I would argue that overriding the container name in a docker-compose.yml file is unnecessary, even when using regular containers via docker-compose up.
If you need a different name for network service discovery purposes, add a different alias or use the service name alias that you get when using docker-compose or docker stack deploy.
build path
Because swarm mode was built to be a distributed system, building an image in place locally isn't something that "docker stack deploy" was meant to do. Instead, you should build and push your image to a registry that all nodes in your cluster can access.
In the case where you are using a single node swarm "cluster", you should be able to use the docker-compose build option to get the images built locally, and then use docker stack deploy.
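On a single-node swarm that shortcut might look like this; it only works because the freshly built images already exist on the same engine and never have to be pushed:
docker-compose build
docker stack deploy --compose-file=docker-compose.yml vertx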