My Docker container keeps restarting when running docker-compose up -d. When inspecting the logs with docker logs --tail 50 --follow --timestamps db, I get the following error:
/usr/local/bin/docker-entrypoint.sh: line 37: "/run/secrets/db_mysql_root_pw": No such file or directory
This probably means that no secrets were created. The output of docker secret ls also shows no secrets.
My docker-compose.yml file looks something like this (excluding port info etc.):
version: '3.4'
services:
  db:
    image: mysql:8.0
    container_name: db
    restart: always
    environment:
      - MYSQL_USER_FILE="/run/secrets/db_mysql_user"
      - MYSQL_PASSWORD_FILE="/run/secrets/db_mysql_user_pw"
      - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/db_mysql_root_pw"
    secrets:
      - db_mysql_user
      - db_mysql_user_pw
      - db_mysql_root_pw
    volumes:
      - "./mysql-data:/docker-entrypoint-initdb.d"
secrets:
  db_mysql_user:
    file: ./db_mysql_user.txt
  db_mysql_user_pw:
    file: ./db_mysql_user_pw.txt
  db_mysql_root_pw:
    file: ./db_mysql_root_pw.txt
In the same directory I have the three text files, which simply contain the values for the environment variables; e.g. db_mysql_user_pw.txt contains password.
I am running Linux containers on a Windows host.
This is pretty dumb, but changing
environment:
  - MYSQL_USER_FILE="/run/secrets/db_mysql_user"
  - MYSQL_PASSWORD_FILE="/run/secrets/db_mysql_user_pw"
  - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/db_mysql_root_pw"
to
environment:
  - MYSQL_USER_FILE=/run/secrets/db_mysql_user
  - MYSQL_PASSWORD_FILE=/run/secrets/db_mysql_user_pw
  - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_mysql_root_pw
made it work. I still don't know why I cannot see the secrets with docker secret ls though.
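As for docker secret ls: outside of Swarm mode, compose file-based secrets are simply bind-mounted into the container rather than registered with the Swarm secret store, so docker secret ls (a Swarm command) shows nothing. A quick way to inspect what actually reached the container:

docker-compose exec db ls -l /run/secrets/
docker-compose exec db cat /run/secrets/db_mysql_root_pw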
I am new to LocalStack. I copied the docker-compose example and made sure to mount the data path onto my machine, and I do see it in the host's tmp folder. In addition, I see my data being appended when calling S3 write commands, but after I kill the docker-compose session and start it from scratch, I don't see the data from the previous session.
Is there a special flag that I need to add to reload the data?
docker-compose file:
version: '3.0'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=sqs,sns,s3
      - DATA_DIR=/tmp/localstack/data
    ports:
      - '4566-4583:4566-4583'
    volumes:
      - "/tmp/localstack:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Example run:
aws --endpoint-url=http://localhost:4566 s3 mb s3://bucket-test
aws --endpoint-url=http://localhost:4566 s3 cp myfile.png s3://bucket-test
# Now this command will return the file
aws --endpoint-url=http://localhost:4566 s3 ls s3://bucket-test
# But after I kill the containers and run docker-compose up again, I will see nothing
Your data will be deleted by running docker-compose down. This stops and removes your containers: https://docs.docker.com/compose/reference/down/
To stop the containers without deleting the volumes, run: docker-compose stop
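A minimal sketch of the difference:

# Removes the containers, so any state not stored in a mounted volume is lost:
docker-compose down

# Stops the containers but keeps them, so their state survives a later start:
docker-compose stop
docker-compose start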
I was able to keep data related to my SSM parameters by adding "DATA_DIR" and the following volumes:
- "/tmp/localstack:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
My docker-compose.yml file looks like :
version: '3.0'
services:
  localstack:
    build: ./localstack
    container_name: localstack
    environment:
      - SERVICES=${LOCALSTACK_SERVICES:-ssm,cloudwatch,cloudformation}
      - DATA_DIR=${LOCALSTACK_DATA_DIR:-/tmp/localstack/data}
      - AWS_DEFAULT_REGION=us-west-2
      - EDGE_PORT=4566
      - LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY}
    volumes:
      - ./localstack/bootstrap:/opt/bootstrap/
      - ./data:/tmp/localstack
      - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
      - '4566:4566'
      - '443:443'
As mentioned in the LocalStack documentation (see the "Deprecated" section), DATA_DIR has since been deprecated. They ask that you use PERSISTENCE=1 instead, which stores the state in the /var/lib/localstack/state directory.
However, PERSISTENCE=1 is not equivalent to DATA_DIR. Following this deprecation, persistence is now a paid feature under LocalStack Pro.
You must now use S3_DIR to specify your local S3 directory, and it will only work if you specify your LocalStack Pro API key with LOCALSTACK_API_KEY.
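For reference, a rough sketch of what the newer setup might look like (the /var/lib/localstack mount target comes from the current LocalStack docs; the host path is illustrative, and persistence still requires a Pro key):

version: '3.0'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - SERVICES=sqs,sns,s3
      - PERSISTENCE=1                              # replaces the deprecated DATA_DIR
      - LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY}   # persistence is a Pro feature
    ports:
      - '4566:4566'
    volumes:
      - "./volume:/var/lib/localstack"             # state now lives here
      - "/var/run/docker.sock:/var/run/docker.sock"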
I have an existing docker-compose.yml file that runs on my Docker CE standalone server.
I would like to deploy this same configuration using the AWS ECS service. The documentation of the ecs-cli tool states that Docker Compose files can be used. Other (simpler) container configs have worked with my existing files.
With my configuration, this errors with:
ERRO[0000] Unable to open ECS Compose Project  error="External option is not supported"
FATA[0000] Unable to create and read ECS Compose Project  error="External option is not supported"
I am using "external" Docker volumes, so that they are auto-generated as required and not deleted when a container is stopped or removed.
This is a simplification of the docker-compose.yml file I am testing with and would allow me to mount the volume to a running container:
version: '3'
services:
  busybox:
    image: busybox:1.31.1
    volumes:
      - ext_volume:/path/in/container
volumes:
  ext_volume:
    external: true
Alternatively, I have read in other documentation that an ecs-params.yml file in the same directory can be used to pass in variables. Is this a replacement for my docker-compose.yml file? I had expected to leave its syntax unchanged.
Working config (the tail command ensures the container stays running, so I could ssh in and view the mounted volume):
version: '3'
services:
  alpine:
    image: alpine:3.12
    volumes:
      - test_docker_volume:/path/in/container
    command:
      - tail
      - -f
      - /dev/null
volumes:
  test_docker_volume:
And in ecs-params.yml:
version: 1
task_definition:
  services:
    alpine:
      cpu_shares: 100
      mem_limit: 28000000
  docker_volumes:
    - name: test_docker_volume
      scope: "shared"
      autoprovision: true
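For completeness, a sketch of how the two files might be deployed together with ecs-cli (the project name is illustrative; the target cluster is whatever was set earlier with ecs-cli configure):

ecs-cli compose --project-name myproject \
  --file docker-compose.yml \
  --ecs-params ecs-params.yml \
  up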
I want to create a docker registry on my server using this docker-compose.yaml file:
version: '3'
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - 5000:5000
    volumes:
      - /home/ubuntu/registry/volumes/data:/var/lib/registry
      - /home/ubuntu/registry/volumes/certs:/certs
      - /home/ubuntu/registry/volumes/auth:/auth
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: /home/ubuntu/registry/certs/domain.crt
      REGISTRY_HTTP_TLS_KEY: /home/ubuntu/registry/certs/domain.key
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /home/ubuntu/registry/auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
I am running

docker-compose up

but this error occurs:
registry_1 | time="2019-08-03T21:17:38.938127498Z" level=fatal msg="open /home/ubuntu/registry/certs/domain.crt: no such file or directory"
I am sure those files exist. Do you have any idea?
volumes:
  - /home/ubuntu/registry/volumes/certs:/certs
Here you are saying to use the HOST path /home/ubuntu/registry/volumes/certs and make it available as /certs inside the CONTAINER. So perhaps you want to change the path on the container side to match the host path, or change the environment variables to reflect the actual container paths.
Also note that you have used /home/ubuntu/registry/volumes/certs in one location and /home/ubuntu/registry/certs (without "volumes") in another, which I assume might need to be fixed up as well.
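For instance, a sketch of a consistent mapping, keeping the /certs and /auth mount targets already in the file and pointing the environment variables at the container-side paths:

volumes:
  - /home/ubuntu/registry/volumes/certs:/certs
  - /home/ubuntu/registry/volumes/auth:/auth
environment:
  REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
  REGISTRY_HTTP_TLS_KEY: /certs/domain.key
  REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd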
docker stack deploy isn't respecting the extra_hosts parameter in my compose file. When I do a simple docker-compose up, the entry is created in /etc/hosts; however, when I do docker stack deploy --compose-file docker-compose.yml myapp, it ignores extra_hosts. Any insights?
Below is the docker-compose.yml:
version: '3'
services:
  web:
    image: user-service
    deploy:
      labels:
        - the label
    build:
      context: ./
    environment:
      DATABASE_URL: jdbc:postgresql://dbhost:5432/postgres
    ports:
      - 9002:9002
    extra_hosts:
      - "dbhost: ${DB_HOST}"
    networks:
      - wellness_swarm
    env_file:
      - .env
networks:
  wellness_swarm:
    external:
      name: wellness_swarm
Running docker-compose config also displays the compose file properly.
This may not be a direct answer to the question, as it doesn't use env variables, but what I found was that the extra_hosts block in the compose file was ignored in swarm mode if entered in the format above.
i.e. this works for me and puts entries in /etc/hosts in the container:
extra_hosts:
  retisdev: 10.48.161.44
  retistesting: 10.48.161.44
whereas when entered in the other format it gets ignored when deploying as a service
extra_hosts:
  - "retisdev=10.48.161.44"
  - "retistesting=10.48.161.44"
I think it's an ordering issue. The ${} variable you've got in the compose file is resolved during YAML processing, before the service definition is created. Then stack deploy processes the .env file only for use as env vars inside the running container, but the YAML variable is needed first...
To fix that, you should run the docker-compose config command first to process the YAML, and then send its output to stack deploy.
docker-compose config will show you the output you're likely wanting.
Then do a pipe to get a one-liner.
docker-compose config | docker stack deploy -c - myapp
Note: Ideally you wouldn't use the extra_hosts, but rather put the envvar directly in the connection string. Your way seems like unnecessary complexity and isn't the usual way I see a connection string created.
e.g.
version: '3'
services:
  web:
    image: user-service
    deploy:
      labels:
        - the label
    build:
      context: ./
    environment:
      DATABASE_URL: jdbc:postgresql://${DB_HOST}:5432/postgres
    ports:
      - 9002:9002
    networks:
      - wellness_swarm
    env_file:
      - .env
networks:
  wellness_swarm:
    external:
      name: wellness_swarm
As I see from https://github.com/moby/moby/issues/29133, it seems like this is by design: the compose command takes into consideration the environment variables mentioned in the .env file, but the deploy command ignores them :( Why is that so? Pretty lame reasons!
I'm using the latest orientdb docker image in my docker-compose. I need to set the default root password but it's not working. My docker-compose.yml:
orientdb:
  image: orientdb
  ports:
    - "2434:2434"
    - "2480:2480"
    - "2424:2424"
  volumes:
    - "/mnt/sda1/dockerVolumes/orientdb:/opt/orientdb/databases"
  environment:
    - ORIENTDB_ROOT_PASSWORD
I'm currently running:
$ export ORIENTDB_ROOT_PASSWORD=anypw
$ docker-compose up -d
You need to define the password in docker-compose:
environment:
  - ORIENTDB_ROOT_PASSWORD=anypw
If you want to keep the password out of the docker-compose file, you can reference an environment variable instead:
environment:
  - ORIENTDB_ROOT_PASSWORD=${ORIENTDB_ROOT_PASSWORD}
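With that form, compose substitutes the value at YAML-processing time. You can export it in the shell (as in the question), or put it in a .env file next to docker-compose.yml, which compose reads automatically, e.g.:

# .env (same directory as docker-compose.yml)
ORIENTDB_ROOT_PASSWORD=anypw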
I have been able to reproduce your setup and it works:
docker-compose.yml
version: '2'
services:
  orientdb:
    image: orientdb
    ports:
      - "2434:2434"
      - "2480:2480"
      - "2424:2424"
    environment:
      - ORIENTDB_ROOT_PASSWORD=test
now:
$ docker-compose up -d
Creating network ... with the default driver
Creating test_orientdb_1
$ docker ps
CONTAINER ID   IMAGE      COMMAND       CREATED          STATUS          PORTS                                                                     NAMES
d1f0a4a81222   orientdb   "server.sh"   31 seconds ago   Up 22 seconds   0.0.0.0:2424->2424/tcp, 0.0.0.0:2434->2434/tcp, 0.0.0.0:2480->2480/tcp   test_orientdb_1
User: root
Pass: test
You probably tried to log in, but you have not created a database.
Just create one and try to log in.
You have to run the docker-compose down command first. Then you can run the docker-compose up command again. This will remove the previous container state and allow you to connect to the database.
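In command form (note that down removes the containers, so any state not stored in a mounted volume is discarded):

docker-compose down
docker-compose up -d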