I have an existing docker-compose.yml file that runs on my standalone Docker CE server.
I would like to deploy the same configuration using the AWS ECS service. The documentation for the ecs-cli tool states that Docker Compose files can be used, and other (simpler) container configs have worked with my existing files.
With my configuration, this errors with:
ERRO[0000] Unable to open ECS Compose Project error="External option is not supported"
FATA[0000] Unable to create and read ECS Compose Project error="External option is not supported"
I am using "external" Docker volumes, so that they are auto-generated as required and not deleted when a container is stopped or removed.
This is a simplification of the docker-compose.yml file I am testing with and would allow me to mount the volume to a running container:
version: '3'
services:
  busybox:
    image: busybox:1.31.1
    volumes:
      - ext_volume:/path/in/container
volumes:
  ext_volume:
    external: true
Alternatively, I have read in other documentation that an ecs-params.yml file in the same directory can be used to pass in variables. Is this a replacement for my docker-compose.yml file? I had expected to leave its syntax unchanged.
Working config (the command ensures the container stays running, so I could ssh in and view the mounted volume):
version: '3'
services:
  alpine:
    image: alpine:3.12
    volumes:
      - test_docker_volume:/path/in/container
    command:
      - tail
      - -f
      - /dev/null
volumes:
  test_docker_volume:
And in ecs-params.yml:
version: 1
task_definition:
  services:
    alpine:
      cpu_shares: 100
      mem_limit: 28000000
  docker_volumes:
    - name: test_docker_volume
      scope: "shared"
      autoprovision: true
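With that layout, the deployment can be driven by ecs-cli, passing both files explicitly. A minimal sketch; my-project and my-cluster are hypothetical placeholder names, not part of the original setup:

```shell
# Sketch only: deploy the compose file together with its ecs-params.yml.
# "my-project" and "my-cluster" are placeholder names.
ecs-cli compose \
  --file docker-compose.yml \
  --ecs-params ecs-params.yml \
  --project-name my-project \
  up --cluster my-cluster
```

ecs-params.yml is not a replacement for docker-compose.yml: it supplies the ECS-specific settings (CPU, memory, volume provisioning) that have no Compose equivalent, while the compose file keeps its normal syntax.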
Related
Trying to figure out a flexible bind mount that survives a docker compose down
Setup
I have a docker-compose.yml:
version: "3.9"
services:
  redis:
    image: "redis:alpine"
    restart: always
    command: redis-server --requirepass ${REDIS_PASSWORD}
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    volumes:
      - type: volume
        source: redis_data
        target: /data
    ports:
      - ${REDISPORT}:6379
volumes:
  redis_data:
    external: true
    name: "redis_data"
    driver: local
    driver_opts:
      type: "none"
      o: "bind"
      device: ${VOLUME_REDIS}
and an .env file:
#Docker Test
REDIS_PASSWORD=supersecret
REDISPORT=6379
VOLUME_REDIS=/Users/stephan/temp/redis
The problem
When I run docker-compose up I get the error:
conflicting parameters "external" and "driver" specified for volume "redis_data"
However, the documentation seems to suggest that these parameters are supported in the current version:
For version 3.3 and below of the format, external cannot be used in conjunction with other volume configuration keys (driver, driver_opts, labels). This limitation no longer exists for version 3.4 and above.
When I remove the driver entry, the error changes to:
conflicting parameters "external" and "driver_opts" specified for volume "redis_data"
Environment
Docker version 20.10.6, build 370c289
docker-compose version 1.29.1, build c34c88b2 (but I used docker compose on macOS)
OS: macOS Catalina & Ubuntu 20.04 (tested on both)
What I'm trying to achieve
Have all variables in .env files, so I can start the same docker-compose.yml multiple times by specifying different env files
Avoid that a docker-compose down wipes my volumes
use host bind, so I can process data (with Docker stopped) with host apps
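One workaround for the external/driver_opts conflict is to create the bind-backed volume once outside of compose, then declare it in docker-compose.yml with only external: true (dropping driver and driver_opts). A sketch, assuming the variables from the .env file above:

```shell
# Create the bind-mounted volume manually; compose then only references it
# as "external: true". Uses VOLUME_REDIS from the .env file shown above.
source .env
docker volume create \
  --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device="${VOLUME_REDIS}" \
  redis_data
```

Because the volume is created outside compose, docker-compose down (even with -v) does not remove it, and the host directory stays accessible to host apps.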
Stuff I looked at
Conditionalizing bind mounted volumes for Docker Compose
How to specify site-specific volumes for docker-compose
What do I miss?
Is it possible to somehow create persistent storage for containers created with docker-compose, and keep it even when running docker-compose down -v, so the volumes are automatically attached to their containers after the next docker-compose up -d?
What I usually do is to use an external volume, something like:
$ docker volume create nodemodules
docker-compose.yml
version: '3.7'
services:
  frontend:
    image: node:11
    volumes:
      - nodemodules:/app/node_modules
volumes:
  nodemodules:
    external: true
Refer to the docs for more info: https://docs.docker.com/compose/compose-file/#external
I'm using Docker Compose with Docker Config.
The config is created ahead of time with docker config create conf.yml conf.yml
The compose file specifies the configs:
version: '3.3'
configs:
  conf.yml:
    external: true
services:
  api:
    image: <image_link>
    deploy:
      replicas: 1
    ports:
      - "5002:80"
    configs:
      - source: conf.yml
        target: /etc/conf/conf.yml
        mode: 0440
I then deploy it to a docker swarm stack with docker stack deploy
When I rotate the config according to this example, I end up with conf2.yml.
That means the next time I run docker stack deploy (through our CI), the source file name will be invalid.
I could re-create conf.yml and then call docker service update, but that is a lot of manual work for a configuration file.
Do you have any advice for a more robust handling of config files? Note that the configuration files are not in the repo and not stored in the CI runner / environment variables either.
Seems like the best solution is to edit the docker-compose file with the new config version and re-deploy.
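One way to reduce the manual editing is to keep the config version in an environment variable and render the compose file before deploying. A sketch, assuming the compose file references the config as conf-${CONF_VERSION}.yml (the variable name is hypothetical); note that docker stack deploy does not expand variables itself, so the file is rendered with docker-compose config first:

```shell
# Hypothetical rotation flow: CI bumps CONF_VERSION on each rotation.
export CONF_VERSION=2
docker config create "conf-${CONF_VERSION}.yml" conf.yml
# docker stack deploy does not substitute variables, so render first
# and feed the result to stack deploy via stdin:
docker-compose config | docker stack deploy -c - mystack
```

Because swarm configs are immutable, each rotation creates a new named config and the service is updated to reference it; old config objects can be removed with docker config rm once no service uses them.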
I have a docker image "doc_image" and a docker volume "doc_volume". I want to spin up a container from the image with the volume mounted at a specific mount point.
If I do this with docker run like this:
docker run -d -p 5000:5000 -v doc_volume:/directory doc_image
then it runs flawlessly (I can see the expected files in /directory interactively). However, when I try to spin it up with a docker-compose.yml like this:
version: '3'
services:
  my_service:
    image: doc_image
    volumes:
      - doc_volume:/directory
volumes:
  doc_volume:
there is nothing in /directory:
FileNotFoundError: [Errno 2] No such file or directory: '/directory/file.txt'
What went wrong here?
Add external property to volumes section:
version: '3'
services:
  my_service:
    image: doc_image
    volumes:
      - doc_volume:/directory
volumes:
  doc_volume:
    external: true # << here we go
Your problem is that docker-compose creates another volume unless you explicitly tell it to use the external one. External means the volume was created outside of docker-compose (for example with docker volume create).
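Note that an external volume must already exist before docker-compose up; compose will not create it for you. For example:

```shell
# Create the volume once; compose then reuses it across down/up cycles.
docker volume create doc_volume
docker-compose up -d
```

If the volume does not exist yet, compose fails with an error instead of silently creating a fresh, empty one, which is exactly the behavior wanted here.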
I am trying to allow nginx to proxy between multiple containers while also accessing the static files from those containers.
To share volumes between containers created using docker compose, the following works correctly:
version: '3.6'
services:
  web:
    build:
      context: .
      dockerfile: ./Dockerfile
    image: webtest
    command: ./start.sh
    volumes:
      - .:/code
      - static-files:/static/teststaticfiles
  nginx:
    image: nginx:1.15.8-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx-config:/etc/nginx/conf.d
      - static-files:/static/teststaticfiles
    depends_on:
      - web
volumes:
  static-files:
However what I actually require is for the nginx compose file to be in a separate file and also in a completely different folder. In other words, the docker compose up commands would be run separately. I have tried the following:
First compose file:
version: '3.6'
services:
  web:
    build:
      context: .
      dockerfile: ./Dockerfile
    image: webtest
    command: ./start.sh
    volumes:
      - .:/code
      - static-files:/static/teststaticfiles
    networks:
      - directorylocation-nginx_mynetwork
volumes:
  static-files:
networks:
  directorylocation-nginx_mynetwork:
    external: true
Second compose file (ie: nginx):
version: '3.6'
services:
  nginx:
    image: nginx:1.15.8-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx-config:/etc/nginx/conf.d
      - static-files:/static/teststaticfiles
    networks:
      - mynetwork
volumes:
  static-files:
networks:
  mynetwork:
The above two files work correctly in the sense that the site can be viewed. The problem is that the static files are not available in the nginx container. The site therefore displays without any images etc.
One workaround which works correctly, found here, is to change the nginx container's static files volume to the following:
- /var/lib/docker/volumes/directory_static-files/_data:/static/teststaticfiles
The above works correctly, but it seems 'hacky' and brittle. Is there another way to share volumes between containers which are housed in different compose files without needing to map the /var/lib/docker/volumes directory.
By separating the two docker-compose.yml files as you did in your question, two different volumes are actually created; that's why you don't see the data from the web service inside the volume of the nginx service: they are simply two different volumes.
Example : let's say you have the following structure :
example/
|- web/
|  |- docker-compose.yml   # your first docker compose file
|- nginx/
|  |- docker-compose.yml   # your second docker compose file
Running docker-compose up from the web folder (or docker-compose -f web/docker-compose.yml up from the example directory) will actually create a volume named web_static-files (the name of the volume defined in the docker-compose.yml file, prefixed by the name of the folder where this file is located).
So, running docker-compose up from the nginx folder will actually create nginx_static-files instead of re-using web_static-files as you want.
You can use the volume created by web/docker-compose.yml by specifying in the 2nd docker compose file (nginx/docker-compose.yml) that this is an external volume, along with its name:
volumes:
  static-files:
    external:
      name: web_static-files
Note that if you don't want the volume (and all resources) to be prefixed by the folder name (the default), you can add the -p option to the docker-compose command:
docker-compose \
  -f web/docker-compose.yml \
  -p abcd \
  up
This command will now create a volume named abcd_static-files (that you can use in the 2nd docker compose file).
You can also define the volume creation in its own docker-compose file (like volumes/docker-compose.yml):
version: '3.6'
volumes:
  static-files:
And reference this volume as external, with the name volumes_static-files, in the web and nginx docker-compose.yml files:
volumes:
  volumes_static-files:
    external: true
Unfortunately, in compose file format 3.3 and below you cannot set the volume name; it will automatically be prefixed. If this is really a problem, you can also create the volume manually (docker volume create static-files) before running any docker-compose up command (I do not recommend this solution though, because it adds a manual step that can be forgotten if you reproduce your deployment on another environment).
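Note that in newer compose file formats (3.4 and above) a top-level name key is actually supported for volumes, which avoids the folder-name prefix entirely. A sketch, compatible with the 3.6 files used above:

```yaml
# Compose file format 3.4+ supports an explicit volume name,
# so the folder-name prefix is not applied.
version: '3.6'
volumes:
  static-files:
    name: static-files
```

With this, both compose projects can declare the same named volume (one owning it, the other marking it external) without caring about which folder they live in.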