I'm trying to deploy an S3-backed private docker registry and I'm getting an error when I try to start the registry container. My docker-compose.yml looks like this:
registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_STORAGE_S3_ACCESSKEY: myKey
    REGISTRY_STORAGE_S3_SECRETKEY: mySecret
    REGISTRY_STORAGE_S3_BUCKET: docker.registry.bucket
    REGISTRY_STORAGE_S3_ROOTDIRECTORY: registry/data
  volumes:
    - /home/docker/certs:/certs
And when I try to run sudo docker-compose up -d I get this error message:
registry_1 | panic: multiple storage drivers specified in configuration or environment: s3, filesystem
It seems like I'm missing something in my environment to explicitly choose s3 but it's not clear from the docs how to do this.
I was trying to override the storage configuration by using ENV vars. This workaround did the job (in JSON format):
{
  "REGISTRY_STORAGE": "s3",
  "REGISTRY_STORAGE_S3_REGION": <REGION>,
  "REGISTRY_STORAGE_S3_BUCKET": <BUCKET_NAME>,
  "REGISTRY_STORAGE_S3_ROOTDIRECTORY": <ROOT_PATH>,
  "REGISTRY_STORAGE_S3_ACCESSKEY": <KEY>,
  "REGISTRY_STORAGE_S3_SECRETKEY": <SECRET>
}
It looks like by defining REGISTRY_STORAGE we override the storage driver set in config.yml.
The REGISTRY_STORAGE environment variable is missing; it needs to be added to the environment block.
You're getting this error because the registry:2 image comes with a default config file /etc/docker/registry/config.yml which uses filesystem storage.
By adding S3 storage through environment variables, you end up with multiple storage drivers, which I guess isn't supported.
I don't know of any way to remove configuration options with environment variables, so I think you'll probably need to create a config file and mount it as a volume (http://docs.docker.com/registry/configuration/#overriding-the-entire-configuration-file)
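For reference, a minimal sketch of that config-file approach, reusing the values from the question (the region value and the http block are assumptions you would adapt):

# config.yml — mounted over the default /etc/docker/registry/config.yml
version: 0.1
storage:
  s3:
    accesskey: myKey
    secretkey: mySecret
    region: us-east-1  # assumption: use your bucket's region
    bucket: docker.registry.bucket
    rootdirectory: registry/data
http:
  addr: :5000

docker run -d -p 5000:5000 \
  -v "$(pwd)/config.yml:/etc/docker/registry/config.yml" \
  registry:2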
I was able to get this working using environment variables. Here's the snippet from my script:
-e REGISTRY_STORAGE=s3 \
-e REGISTRY_STORAGE_S3_ACCESSKEY=$AWS_KEY \
-e REGISTRY_STORAGE_S3_SECRETKEY=$AWS_SECRET \
-e REGISTRY_STORAGE_S3_BUCKET=$BUCKET \
-e REGISTRY_STORAGE_S3_REGION=$AWS_REGION \
-e REGISTRY_STORAGE_S3_ROOTDIRECTORY=$BUCKET_PATH \
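Applied to the docker-compose.yml from the question, the fix would look roughly like this (the region value is a placeholder):

registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_STORAGE: s3
    REGISTRY_STORAGE_S3_REGION: us-east-1  # placeholder: your bucket's region
    REGISTRY_STORAGE_S3_ACCESSKEY: myKey
    REGISTRY_STORAGE_S3_SECRETKEY: mySecret
    REGISTRY_STORAGE_S3_BUCKET: docker.registry.bucket
    REGISTRY_STORAGE_S3_ROOTDIRECTORY: registry/data
  volumes:
    - /home/docker/certs:/certs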
In my case, I had an environment variable for storage in docker-compose.yml and the S3 configuration in config.yml. It took some time, but once the environment variables were commented out, registry:2 started properly.
According to the Docker docs, you can configure the docker registry image by either:
building a YAML file and mounting it, or
passing environment variables.
For the second approach, the docs say:
To override a configuration option, create an environment variable named REGISTRY_variable where variable is the name of the configuration option and the _ (underscore) represents indention levels. For example, you can configure the rootdirectory of the filesystem storage backend:
storage:
  filesystem:
    rootdirectory: /var/lib/registry
To override this value, set an environment variable like this:
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/somewhere
This variable overrides the /var/lib/registry value to the /somewhere directory.
This works perfectly, although there's one case where I cannot make it work: the middleware config.
I want to pass this piece of setup via ENV vars:
middleware:
  storage:
    - name: cloudfront
      options:
        baseurl: https://my.cloudfronted.domain.com/
        privatekey: /path/to/pem
        keypairid: cloudfrontkeypairid
        awsregion: us-east-1, us-east-2
I've tried passing the following env var names:
- REGISTRY_MIDDLEWARE_STORAGE_CLOUDFRONT_BASEURL
- REGISTRY_MIDDLEWARE_STORAGE_0_OPTIONS_BASEURL
but all of them seem to be ignored. I've even tried to miswrite the config (since that would trigger a validation error visible in the output), but no success.
I tried it with this:
# file.env
REGISTRY_LOG_LEVEL="debug"
REGISTRY_HTTP_ADDR=":5000"
REGISTRY_HTTP_SECRET="lol"
REGISTRY_STORAGE_S3_ENCRYPT=true
REGISTRY_STORAGE_S3_ROOTDIRECTORY=/REG
REGISTRY_STORAGE_S3_BUCKET="development-bucket-test"
REGISTRY_STORAGE_S3_ACCESSKEY="AAAAAAAA"
REGISTRY_STORAGE_S3_SECRETKEY="BBBBBBB"
REGISTRY_STORAGE_S3_REGION="XX-TTT-X"
REGISTRY_MIDDLEWARE_STORAGE_CLOUDFRONT_BASEURL="tp:/examplezzz.com"
REGISTRY_MIDDLEWARE_STORAGE_CLOUDFRONT_BASEUL="tp:/examplezzz.com"
REGISTRY_MIDDLEWARE_STORAGE_0_NAME=cloudfront
REGISTRY_MIDDLEWARE_STORAGE_0_OPTIONS_BASEUL="tp:/examplezzz.com"
REGISTRY_MIDDLEWARE_STORAGE_0_OPTIONS__AWSRGION="tp:/examplezzz.com"
# run the registry with
docker run --rm -it -p 5000:5000 --env-file file.env registry:2.7.1 sh -c 'echo "version: 0.1" > /a.conf; registry serve /a.conf'
P.S.: The /a.conf is there to force an empty configuration
Am I missing something or is this setting only possible with config files?
I went ahead and tinkered with the source code of docker/distribution myself, and made it accept the config by passing:
REGISTRY_MIDDLEWARE_STORAGE="[{name: cloudfront, options: {baseurl: 'someurl', privatekey: 'somefile', keypairid: 'somestring'}}]"
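If patching the source isn't an option, the documented fallback remains a mounted config file that carries the middleware block shown earlier, e.g.:

docker run --rm -p 5000:5000 \
  -v "$(pwd)/config.yml:/etc/docker/registry/config.yml" \
  registry:2.7.1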
I want to modify the password of the container created by the elasticsearch image. I have executed the following command:
setup-passwords auto
but it didn't work; the response was:
unexpected response code [403] from GET http://172.17.0.2:9200/_xpack/security/_authenticate?pretty
Please help me. Thank you.
When using docker it is usually best to configure services via environment variables. To set a password for the elasticsearch service you can run the container using the env variable ELASTIC_PASSWORD:
docker run -e ELASTIC_PASSWORD=`openssl rand -base64 12` -p 9200:9200 --rm --name elastic docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4
openssl rand -base64 12 sets a random value for the password
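A sketch of the same run that keeps the generated password around so the credentials can be verified afterwards (the variable name is illustrative):

# keep the generated password so it can be reused below
ELASTIC_PASSWORD=$(openssl rand -base64 12)
docker run -d -e ELASTIC_PASSWORD="$ELASTIC_PASSWORD" -p 9200:9200 --rm --name elastic \
  docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4
# once the node is up, check that the elastic user can authenticate
curl -u "elastic:$ELASTIC_PASSWORD" http://localhost:9200/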
I need to define an env var in docker compose (v2).
Now I just have something like:
environment:
- SERVERNAME=192.168.xx.xx
But I don't really like this approach. People need to modify the compose file. Is there a way to do this more dynamically? Something like:
docker-compose up --env SERVERNAME=192.168.xx.xx
What is the best approach to do this?
I think it's not possible, but the closest solution is to pass it in an env file.
From the Docker docs:
You can pass multiple environment variables from an external file through to a service’s containers with the ‘env_file’ option.
So you can create an env file with the variable, for example server.env, and reference it in the docker-compose.yml:
env_file:
  - server.env
Or you can create a .env file in the folder
$ cat .env
SERVERNAME=192.168.xx.xx
And change your config with:
environment:
  - SERVERNAME=${SERVERNAME}
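Because variable substitution reads from the shell environment, the value can then be supplied at launch time, which is effectively the --env behaviour asked for in the question:

SERVERNAME=192.168.xx.xx docker-compose up -d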
With docker-compose run you can do it with the -e "SERVERNAME=192.168.1.1" -e "SOMETHING=bla" syntax.
Or use something like Hashicorp's Vault.
RabbitMQ in Docker lost data after the container was removed, despite the mounted volumes.
My Dockerfile:
FROM rabbitmq:3-management
ENV RABBITMQ_HIPE_COMPILE 1
ENV RABBITMQ_ERLANG_COOKIE "123456"
ENV RABBITMQ_DEFAULT_VHOST "123456"
My run script:
IMAGE_NAME="service-rabbitmq"
TAG="${REGISTRY_ADDRESS}/${IMAGE_NAME}:${VERSION}"
echo $TAG
docker rm -f $IMAGE_NAME
docker run \
-itd \
-v "rabbitmq_log:/var/log/rabbitmq" \
-v "rabbitmq_data:/var/lib/rabbitmq" \
--name "service-rabbitmq" \
--dns=8.8.8.8 \
-p 8080:15672 \
$TAG
After removing the container, all data are lost.
How do I configure RabbitMQ in docker with persistent data?
RabbitMQ uses the hostname as part of the folder name in the mnesia directory. Maybe add a --hostname some-rabbit to your docker run?
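Applied to the run script from the question, that suggestion would look like this (a sketch):

docker run \
  -itd \
  --hostname some-rabbit \
  -v "rabbitmq_log:/var/log/rabbitmq" \
  -v "rabbitmq_data:/var/lib/rabbitmq" \
  --name "service-rabbitmq" \
  --dns=8.8.8.8 \
  -p 8080:15672 \
  $TAG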
I had the same issue and I found the answer here.
TL;DR
Didn't do too much digging on this, but it appears that the simplest way to do this is to change the hostname as Pedro mentions above.
MORE INFO:
Using RABBITMQ_NODENAME
If you want to edit the RABBITMQ_NODENAME variable via Docker, it looks like you need to add a hostname as well since the Docker hostnames are generated as random hashes.
If you change the RABBITMQ_NODENAME var to something static like my-rabbit, RabbitMQ will throw something like an "nxdomain not found" error because it's looking for something like my-rabbit@<docker_hostname_hash>. If you know the Docker hostname and can automate pulling it into your RABBITMQ_NODENAME value like so, my-rabbit@<docker_hostname_hash>, I believe it would work.
UPDATE
I previously said,
If you know the Docker hostname and can automate pulling it into your RABBITMQ_NODENAME value like so, my-rabbit@<docker_hostname_hash>, I believe it would work.
This would not work as described precisely because the default docker host name is randomly generated at launch, if it is not assigned explicitly. The hurdle would actually be to make sure you use the EXACT SAME <docker_hostname_hash> as your originating run so that the data directory gets picked up correctly. This would be a pain to implement dynamically/robustly. It would be easiest to use an explicit hostname as described below.
The alternative would be to set the hostname to a value you choose -- say, app-messaging -- AND ALSO set the RABBITMQ_NODENAME var to something like rabbit@app-messaging. This way you are controlling the full node name that will be used in the data directory.
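A sketch of that combination, using the names from the paragraph above:

# pin both the hostname and the node name so the mnesia directory
# (rabbit@app-messaging) stays stable across runs
docker run -d \
  --hostname app-messaging \
  -e RABBITMQ_NODENAME=rabbit@app-messaging \
  -v "rabbitmq_data:/var/lib/rabbitmq" \
  rabbitmq:3-management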
Using Hostname
(Recommended)
That said, unless you have a reason NOT to change the hostname, changing the hostname alone is the simplest way to ensure that your data will be mounted to and from the same point every time.
I'm using the following Docker Compose file to successfully persist my setup between launches.
version: '3'
services:
  rabbitmq:
    hostname: 'mabbit'
    image: "${ARTIFACTORY}/rabbitmq:3-management"
    ports:
      - "15672:15672"
      - "5672:5672"
    volumes:
      - "./data:/var/lib/rabbitmq/mnesia/"
    networks:
      - rabbitmq
networks:
  rabbitmq:
    driver: bridge
This creates a data directory next to my compose file and persists the RabbitMQ setup like so:
./data/
  rabbit@mabbit/
  rabbit@mabbit-plugins-expand/
  rabbit@mabbit.pid
  rabbit@mabbit-feature_flags
I'm unsure whether something obvious escapes me or it's just not possible, but I'm trying to compose an entire application stack with images from Docker Hub.
One of them is mysql, and it supports adding custom configuration files through volumes and running .sql files from a mounted directory.
But I have these files on the machine where I'm running docker-compose, not on the host. Is there no way to specify files from the local machine to copy into the container before it runs its entrypoint/cmd? Do I really have to create local images of everything just for this case?
Option A: Include the files inside your image. This is less than ideal since you are mixing configuration files with your image (that should really only contain your binaries, not your config), but satisfies the requirement to use only docker-compose to send the files.
This option is achieved by using docker-compose to build your image, and that build will send over any files from the build directory to the remote docker engine. Your docker-compose.yml would look like:
version: '2'
services:
  my-db-app:
    build: db/.
    image: custom-db
And db/Dockerfile would look like:
FROM mysql:latest
COPY ./sql /sql
The entrypoint/cmd would remain unchanged. You would need to run docker-compose up --build if the image already exists and you need to change the sql files.
Option B: Use a volume to store your data. This cannot be done directly inside of docker-compose. However it's the preferred way to include files from outside of the image into the container. You can populate the volume across the network by using the docker CLI and input redirection along with a command like tar to pack and unpack those files being sent over stdin:
tar -cC sql . | docker run --rm -i -v sql-files:/sql \
busybox /bin/sh -c "tar -xC /sql"
Run that via a script and then have that same script bounce the db container to reload that config.
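A sketch of such a script, borrowing the my-db-app service name from Option A:

#!/bin/sh
# repack the local sql directory into the named volume...
tar -cC sql . | docker run --rm -i -v sql-files:/sql busybox tar -xC /sql
# ...then bounce the container so it picks the files up
docker-compose restart my-db-app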
Option C: Use some kind of network attached filesystem. If you can configure NFS on the host where you are running your docker CLI, you can connect to those NFS shares from the remote docker node using one of the below options:
# create a reusable volume
$ docker volume create --driver local \
--opt type=nfs \
--opt o=addr=192.168.1.1,rw \
--opt device=:/path/to/dir \
foo
# or from the docker run command
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
# or to create a service
$ docker service create \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
Option D: With swarm mode, you can include files as configs in your service. This allows configuration files that would normally need to be pushed to every node in the swarm to be sent on demand to the node where your service is running. This uses a docker-compose.yml file to define it, but swarm mode isn't using docker-compose itself, so this may not fit your specific requirements. You can run a single node swarm mode cluster, so this option is available even if you only have a single node. This option does require that each of your sql files is added as a separate config. The docker-compose.yml would look like:
version: '3.4'
configs:
  sql_file_1:
    file: ./file_1.sql
services:
  my-db-app:
    image: my-db-app:latest
    configs:
      - source: sql_file_1
        target: /sql/file_1.sql
        mode: 444
Then instead of a docker-compose up, you'd run a docker stack deploy -c docker-compose.yml my-db-stack.
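If you don't already have a swarm, a single-node one is enough for this:

# one-time: turn the local engine into a single-node swarm
docker swarm init
# then deploy the stack defined above
docker stack deploy -c docker-compose.yml my-db-stack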
If you cannot use volumes (you want a stateless docker-compose.yml and are deploying to a remote machine), you can have the config file written by the command.
Example for an nginx config with the official image:
version: "3.7"
services:
nginx:
image: nginx:alpine
ports:
- 80:80
environment:
NGINX_CONFIG: |
server {
server_name "~^www\.(.*)$$" ;
return 301 $$scheme://$$1$$request_uri ;
}
server {
server_name example.com
...
}
command:
/bin/sh -c "echo \"$$NGINX_CONFIG\" > /etc/nginx/conf.d/redir.conf; nginx -g \"daemon off;\""
The environment variable could also be saved in a .env file; alternatively, you can use Compose's extend feature, or load it from the shell environment (wherever you fetched it from):
https://docs.docker.com/compose/compose-file/#env_file
https://docs.docker.com/compose/compose-file/#variable-substitution
To get the original startup command (CMD) of a container:
docker container inspect [container] | jq --raw-output .[0].Config.Cmd
To investigate which file to modify this usually will work:
docker exec --interactive --tty [container] sh
This is how I'm doing it with volumes:
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - ./shell_scripts:/shell_scripts
I think you have to do this in a compose file:
volumes:
  - src/file:dest/path
As a more recent update to this question: with a docker swarm hosted on Amazon, for example, you can define a volume that can be shared by services and is available across all nodes of the swarm (using the cloudstor driver, which in turn uses AWS EFS underneath for persistence).
version: '3.3'
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - shell_scripts:/shell_scripts
volumes:
  shell_scripts:
    driver: "cloudstor:aws"
With Compose V2 you can simply do (as in the documentation):
docker compose cp src [service:]dest
Before V2 you can use the workaround with docker cp explained in the associated issue:
docker cp /path/to/my-local-file.sql "$(docker-compose ps -q mycontainer)":/file-on-container.sql