docker-compose.yml
version: "3"
services:
mycentos:
image: mycentos
container_name: '{{.Node.Hostname}}-rh7'
hostname: '{{.Node.Hostname}}-rh7'
env_file:
- docker_run.env
privileged: true
cap_add:
- SYS_PTRACE
- SYS_ADMIN
networks:
- testnet
deploy:
mode: replicated
replicas: 1
restart_policy:
condition: on-failure
networks:
testnet:
Running docker-compose is giving me this error:
ERROR: for mycentos-rh7 Cannot create container for service mycentos-rh7: Invalid container name ({{.Node.Hostname}}-rh7), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
PS: I can run the above compose file without errors via "docker stack deploy", so the problem seems to be specific to docker-compose.
The reason for wanting to use docker-compose instead of docker stack deploy is that testing containers is easier: they stay on localhost and I can grab the container ID to exec into it.
docker-compose variable substitution is not very powerful, and there is no equivalent of {{.Node.Hostname}} in this context, but you can override the values in an additional file:
docker-compose.override.yaml
version: "3"
services:
mycentos:
container_name: '${HOSTNAME}-rh7'
hostname: '${HOSTNAME}-rh7'
The environment variable HOSTNAME needs to be set at startup:
HOSTNAME=$(hostname) docker-compose -f docker-compose.yaml -f docker-compose.override.yaml up -d
This should work for your use case.
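If you don't want to set HOSTNAME on every invocation, a small sketch of an alternative (relying on docker-compose reading a .env file in the project directory for variable substitution) would be:
echo "HOSTNAME=$(hostname)" > .env
docker-compose -f docker-compose.yaml -f docker-compose.override.yaml up -d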
Using templates is not supported for container_name. From official documentation:
You can use templates for some flags of service create, using the
syntax provided by the Go’s text/template package. The supported flags
are the following:
--hostname
--mount
--env
PS: I can run the above compose file without errors via "docker stack deploy", so the problem seems to be specific to docker-compose
This is because the container_name directive is ignored in that case:
Note: This option is ignored when deploying a stack in swarm mode with
a (version 3) Compose file.
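Since templates are supported for --hostname, and container_name is ignored by docker stack deploy anyway, a minimal sketch of a stack-friendly service definition (derived from the compose file in the question) is simply to drop container_name and keep the templated hostname:
services:
  mycentos:
    image: mycentos
    hostname: '{{.Node.Hostname}}-rh7'
    # no container_name: it is ignored by docker stack deploy and rejected by docker-compose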
I have existing docker-compose.yml file that runs on my Docker CE standalone server.
I would like to deploy this same configuration using the AWS ECS service. The documentation of the ecs-cli tool states that Docker Compose files can be used. Other (simpler) container configs have worked with my existing files.
With my configuration, this errors with:
ERRO[0000] Unable to open ECS Compose Project error="External option is not supported"
FATA[0000] Unable to create and read ECS Compose Project error="External option is not supported"
I am using "external" Docker volumes, so that they are auto-generated as required and not deleted when a container is stopped or removed.
This is a simplification of the docker-compose.yml file I am testing with and would allow me to mount the volume to a running container:
version: '3'
services:
  busybox:
    image: busybox:1.31.1
    volumes:
      - ext_volume:/path/in/container
volumes:
  ext_volume:
    external: true
Alternatively, I have read in other documentation to use the ecs-params.yml file in the same directory to pass in variables. Is this a replacement for my docker-compose.yml file? I had expected to leave its syntax unchanged.
Working config (the command keeps the container running, so I could ssh in and view the mounted volume):
version: '3'
services:
  alpine:
    image: alpine:3.12
    volumes:
      - test_docker_volume:/path/in/container
    command:
      - tail
      - -f
      - /dev/null
volumes:
  test_docker_volume:
And in ecs-params.yml:
version: 1
task_definition:
  services:
    alpine:
      cpu_shares: 100
      mem_limit: 28000000
  docker_volumes:
    - name: test_docker_volume
      scope: "shared"
      autoprovision: true
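For reference, a hedged sketch of how this pair of files would be deployed with ecs-cli (assuming the cluster and profile are already configured):
ecs-cli compose --file docker-compose.yml --ecs-params ecs-params.yml up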
I'm running a Docker deployment for an application. I'm mounting a volume where I want the external path to be provided by a shell environment variable. I get this error:
ERROR: for video-server Cannot create container for service video-server: invalid volume specification: '46b9d2fb3b9b13c9404d31bae571dac3f633122393c4a77f2561afb8aed5c06e:=/opt/videos:rw': invalid mount config for type "volume": invalid mount path: '=/opt/videos' mount path must be absolute
My docker-compose configuration is this:
video-server:
  build:
    context: .
    dockerfile: video-server_Dockerfile
  container_name: video-server
  networks:
    - videoManagerNetwork
  environment:
    - VIDEO_MANAGER_DIR=/opt/videos
  volumes:
    - ${VIDEO_MANAGER_DIR_PROD}=/opt/videos
  ports:
    - 9000:8080
I can see the correct value of the VIDEO_MANAGER_DIR_PROD environment variable by doing both of these commands, so I know it's on my shell:
echo $VIDEO_MANAGER_DIR_PROD
sudo echo $VIDEO_MANAGER_DIR_PROD
What's strange is that, if I do a complete wipe of my docker configurations (sudo docker system prune --all --volumes), and then run the docker-compose for the first time (sudo docker-compose up -d), everything works.
However, if I take the container down, rebuild it, and try to run that same command (sudo docker-compose up -d) again, then I get the error displayed above.
You cannot separate the volume source and target with =; the separator between the host path and the container path must be :.
Documentation about Docker Compose volumes: docs.docker.com
video-server:
  build:
    context: .
    dockerfile: video-server_Dockerfile
  container_name: video-server
  networks:
    - videoManagerNetwork
  environment:
    - VIDEO_MANAGER_DIR=/opt/videos
  volumes:
    - ${VIDEO_MANAGER_DIR_PROD}:/opt/videos
  ports:
    - 9000:8080
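One more thing worth checking, since the commands in the question use sudo: depending on the sudoers configuration, sudo can strip exported environment variables, in which case ${VIDEO_MANAGER_DIR_PROD} expands to an empty string. A hedged workaround sketch (the path is a placeholder):
# preserve the caller's environment
sudo -E docker-compose up -d
# or pass the variable explicitly
sudo VIDEO_MANAGER_DIR_PROD=/path/on/host docker-compose up -d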
I am unable to change the port that Swagger uses in docker compose. It works fine with regular docker, I simply set the -p argument on the run command. It seems that I should just need to set the ports field in the docker-compose file. But no matter what I try it just runs on 8080.
I am using the latest versions of docker and docker-compose. The docker image is called swaggerapi/swagger-ui. I have attempted setting the ports field for the container. Also tried setting the url variable in the swagger definition file. Tried changing the expose port. I tried with the docker-compose run command which lets you start an individual service and has the -p argument. Still nothing.
Ideally I should use this to build and run:
sudo docker-compose up --build --force-recreate
My compose file:
version: '3'
services:
  swagger:
    build: swagger
    network_mode: "host"
    ports:
      - "8081:8080"
    env_file: .env
    environment:
      - SWAGGER_JSON=/swagger.json
volumes:
  data:
    driver: "local"
And the Dockerfile for the swagger service:
FROM swaggerapi/swagger-ui
EXPOSE 8081
COPY swagger.json /swagger.json
ENV SWAGGER_JSON "/swagger.json"
No matter what I do it wont change ports.
Just change the port in your docker-compose file
swagger:
  build: swagger
  network_mode: "host"
  ports:
    - "8081:<port you want to expose>"
  env_file: .env
  environment:
    - SWAGGER_JSON=/swagger.json
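Note, however, that the compose file in the question sets network_mode: "host". With host networking, published ports are ignored, so the ports: mapping has no effect and swagger-ui keeps listening on its default 8080. A minimal sketch without host networking, so the 8081 mapping actually applies:
version: '3'
services:
  swagger:
    build: swagger
    ports:
      - "8081:8080"
    environment:
      - SWAGGER_JSON=/swagger.json
The UI should then be reachable on localhost:8081.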
I'm using this docker image https://github.com/moodlehq/moodle-docker and it works as advertised. Among other things it exposes a web server on localhost:8000. What I would like is to bind it to the host's IP instead.
Using raw docker something like that is accomplished with
docker run --network=host [container]
What should be placed in the yml file for docker-compose? The documentation is a bit confusing to me.
You can use network_mode in compose files -
network_mode: "host"
Sample compose -
version: '3'
services:
  api:
    image: 'node:6-alpine'
    network_mode: host
    environment:
      - NODE_ENV=production
    command: "tail -f /dev/null"
Ref - https://docs.docker.com/compose/compose-file/#network_mode
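Keep in mind that with host networking any ports: mappings are ignored and the service listens directly on the host's interfaces. If you'd rather stay on the default bridge network, compose port mappings also accept a host IP prefix, so you can publish the web server on a specific host address instead; a sketch with placeholder values (the service name, host IP, and container port below are assumptions, adjust to your setup):
services:
  webserver:
    ports:
      - "192.168.1.20:8000:80"   # host-ip:host-port:container-port (placeholders)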
In my docker-compose.yml file, I have the following. However the container does not pick up the hostname value. Any ideas?
dns:
  image: phensley/docker-dns
  hostname: affy
  domainname: affy.com
  volumes:
    - /var/run/docker.sock:/docker.sock
When I check the hostname in the container it does not pick up affy.
As of docker-compose version 3.0 and later, you can just use the hostname key:
version: "3.0"
services:
yourservicename:
hostname: your-name
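To confirm that the value was picked up (service name as above):
docker-compose up -d
docker-compose exec yourservicename hostname
# prints: your-name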
I found that the hostname was not visible to other containers when using docker run. This turns out to be a known issue (perhaps more a known feature), with part of the discussion being:
We should probably add a warning to the docs about using hostname. I think it is rarely useful.
The correct way of assigning a hostname - in terms of container networking - is to define an alias like so:
services:
  some-service:
    networks:
      some-network:
        aliases:
          - alias1
          - alias2
Unfortunately this still doesn't work with docker run. The workaround is to assign the container a name:
docker-compose run --name alias1 some-service
And alias1 can then be pinged from the other containers.
UPDATE: As #grilix points out, you should use docker-compose run --use-aliases to make the defined aliases available.
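Putting it together, a small sketch (image and network names are made up) where a second container reaches the service through its alias:
services:
  some-service:
    image: nginx
    networks:
      some-network:
        aliases:
          - alias1
  client:
    image: busybox
    command: ping -c 1 alias1
    networks:
      - some-network
networks:
  some-network: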
This seems to work correctly. If I put your config into a file:
$ cat > compose.yml <<EOF
dns:
  image: phensley/docker-dns
  hostname: affy
  domainname: affy.com
  volumes:
    - /var/run/docker.sock:/docker.sock
EOF
And then bring things up:
$ docker-compose -f compose.yml up
Creating tmp_dns_1...
Attaching to tmp_dns_1
dns_1 | 2015-04-28T17:47:45.423387 [dockerdns] table.add tmp_dns_1.docker -> 172.17.0.5
And then check the hostname inside the container, everything seems to be fine:
$ docker exec -it tmp_dns_1 hostname
affy.affy.com
Based on docker documentation:
https://docs.docker.com/compose/compose-file/#/command
I simply put
hostname: <string>
in my docker-compose file.
E.g.:
[...]
lb01:
  hostname: at-lb01
  image: at-client-base:v1
[...]
and container lb01 picks up at-lb01 as hostname.
The simplest way I have found is to just set the container name in the docker-compose.yml; see the container_name documentation. It is applicable to docker-compose v1+. It works container to container, not from the host machine to a container.
services:
  dns:
    image: phensley/docker-dns
    container_name: affy
Now you should be able to access affy from other containers using the container name. I had to do this for multiple redis servers in a development environment.
NOTE: The solution works so long as you don't need to scale the service, e.g. for consistent individual developer environments.
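A small sketch of the multiple-Redis setup (image, service, and container names are illustrative), where another container reaches each server by its container name on the default compose network:
services:
  redis-cache:
    image: redis
    container_name: redis-cache
  redis-queue:
    image: redis
    container_name: redis-queue
  app:
    image: busybox
    command: sh -c "ping -c 1 redis-cache && ping -c 1 redis-queue"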
I needed to spin up a freeipa container to have a working KDC, and had to give it a hostname, otherwise it wouldn't run.
What eventually did work for me is setting the HOSTNAME env variable in compose:
version: "2"
services:
  freeipa:
    environment:
      - HOSTNAME=ipa.example.test
Now it's working:
docker exec -it freeipa_freeipa_1 hostname
ipa.example.test