According to the Docker docs, you can configure the Docker registry image in either of two ways:
building a YAML file and mounting it, or
passing environment variables.
For the second approach, the docs say:
To override a configuration option, create an environment variable named REGISTRY_variable where variable is the name of the configuration option and the _ (underscore) represents indention levels. For example, you can configure the rootdirectory of the filesystem storage backend:
storage:
  filesystem:
    rootdirectory: /var/lib/registry
To override this value, set an environment variable like this:
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/somewhere
This variable overrides the /var/lib/registry value to the /somewhere directory.
This works perfectly, except for one case I cannot get working: the middleware config.
I want to pass this piece of configuration via environment variables:
middleware:
  storage:
    - name: cloudfront
      options:
        baseurl: https://my.cloudfronted.domain.com/
        privatekey: /path/to/pem
        keypairid: cloudfrontkeypairid
        awsregion: us-east-1, us-east-2
I've tried passing the following env var names:
- REGISTRY_MIDDLEWARE_STORAGE_CLOUDFRONT_BASEURL
- REGISTRY_MIDDLEWARE_STORAGE_0_OPTIONS_BASEURL
but all of them seemed to be ignored. I even tried deliberately miswriting the config (which should trigger a validation error visible in the output), but with no success.
I tried it with this:
# file.env
REGISTRY_LOG_LEVEL="debug"
REGISTRY_HTTP_ADDR=":5000"
REGISTRY_HTTP_SECRET="lol"
REGISTRY_STORAGE_S3_ENCRYPT=true
REGISTRY_STORAGE_S3_ROOTDIRECTORY=/REG
REGISTRY_STORAGE_S3_BUCKET="development-bucket-test"
REGISTRY_STORAGE_S3_ACCESSKEY="AAAAAAAA"
REGISTRY_STORAGE_S3_SECRETKEY="BBBBBBB"
REGISTRY_STORAGE_S3_REGION="XX-TTT-X"
REGISTRY_MIDDLEWARE_STORAGE_CLOUDFRONT_BASEURL="tp:/examplezzz.com"
# some names below are deliberately misspelled (BASEUL, AWSRGION) to try to provoke a validation error
REGISTRY_MIDDLEWARE_STORAGE_CLOUDFRONT_BASEUL="tp:/examplezzz.com"
REGISTRY_MIDDLEWARE_STORAGE_0_NAME=cloudfront
REGISTRY_MIDDLEWARE_STORAGE_0_OPTIONS_BASEUL="tp:/examplezzz.com"
REGISTRY_MIDDLEWARE_STORAGE_0_OPTIONS__AWSRGION="tp:/examplezzz.com"
# run the registry with
docker run --rm -it -p 5000:5000 --env-file file.env registry:2.7.1 sh -c 'echo "version: 0.1" > /a.conf; registry serve /a.conf'
P.S.: The /a.conf is there to force an empty configuration
Am I missing something or is this setting only possible with config files?
I went ahead and tinkered with the source code of docker/distribution myself, and made it accept the config by passing:
REGISTRY_MIDDLEWARE_STORAGE="[{name: cloudfront, options: {baseurl: 'someurl', privatekey: 'somefile', keypairid: 'somestring'}}]"
Related
I have a Flask web application running as a Docker image that is deployed to a Kubernetes pod running on GKE. There are a few environment variables necessary for the application which are included in the docker-compose.yaml like so:
...
services:
  my-app:
    build:
      ...
    environment:
      VAR_1: foo
      VAR_2: bar
...
I want to keep these environment variables in the docker-compose.yaml so I can run the application locally if necessary. However, when I go to deploy this using a Kubernetes deployment, these variables are missing from the pod and it throws an error. The only way I have found to resolve this is to add the following to my deployment.yaml:
containers:
  - name: my-app
    ...
    env:
      - name: VAR_1
        value: foo
      - name: VAR_2
        value: bar
...
Is there a way to migrate the values of these environment variables directly from the Docker container image into the Kubernetes pod?
I have tried researching this in the Kubernetes and Docker documentation and via Google searches, and the only solutions I can find say to include the environment variables in the deployment.yaml, but I'd like to retain them in the docker-compose.yaml for the purposes of running the container locally. I couldn't find anything that explains how Docker container environment variables and Kubernetes environment variables interact.
Kompose can translate Docker Compose files to Kubernetes resources:
https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
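For example, a minimal invocation (assuming kompose is installed and run next to the Compose file) looks like:
kompose convert -f docker-compose.yaml
# writes Kubernetes manifests (Deployment, Service, ...) to the current directory,
# translating the Compose environment: block into env: entries on the containers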
Let us assume the docker-compose file and Kubernetes work the same way:
both take a ready-to-use image and schedule a new pod or container based on it.
By default the image accepts a set of env variables; docker-compose passes those variables one way and Kubernetes another (a matter of syntax).
So you can use the same image with Compose and with Kubernetes, but the syntax for passing the env variables will differ.
If you want them to persist regardless of deployment tool, you can always hardcode those env variables in the image itself, in other words, in the Dockerfile you used to build the image.
I don't recommend this, of course, and it might not work for you if you are using pre-built official images, but below is an example of a Dockerfile with an env variable included.
FROM alpine:latest
# this is how you hardcode it
ENV VAR_1=foo
COPY helloworld.sh .
RUN chmod +x /helloworld.sh
CMD ["/helloworld.sh"]
If you want to move toward managing this in a much better way, you can use an .env file with your docker-compose, so you can update all the variables in one place, especially when your Compose file has several apps that share the same variables.
app1:
  image: ACRHOST/app1:latest
  env_file:
    - .env
And on the Kubernetes side, you can create a ConfigMap, link your pods to that ConfigMap, and then update only the ConfigMap's values.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
kubectl create configmap <map-name> <data-source>
Also note that you can populate your ConfigMap directly from the .env file that you use with Docker; check the link above.
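As a sketch (the ConfigMap name my-app-env is hypothetical):
kubectl create configmap my-app-env --from-env-file=.env
and then reference it from the pod spec so every key becomes an env variable:
containers:
  - name: my-app
    envFrom:
      - configMapRef:
          name: my-app-env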
The docker-compose.yml file and the Kubernetes YAML file serve similar purposes; both explain how to create a container from a Docker image. The Compose file is only read when you're running docker-compose commands, though; the configuration there isn't read when deploying to Kubernetes and doesn't make any permanent changes to the image.
If something needs to be set as an environment variable but really is independent of any particular deployment system, set it as an ENV in your image's Dockerfile.
ENV VAR_1=foo
ENV VAR_2=bar
# and don't mention either variable in either Compose or Kubernetes config
If you can't specify it this way (e.g., database host names and credentials) then you need to include it in both files as you've shown. Note that some of the configuration might be very different; a password might come from a host environment variable in Compose but from a Kubernetes Secret.
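To illustrate that last point, a hedged sketch (the variable name and Secret name are hypothetical):
# docker-compose.yml: substituted from the host shell at compose time
environment:
  DB_PASSWORD: ${DB_PASSWORD}
# Kubernetes pod spec: injected from a Secret
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password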
I want to pass environment variables that are readable by the applications spun up by docker-compose up.
What is the proper way of using docker-compose up with varying configuration settings?
I don't want to use .env and the environment: config, as the environment variables change frequently and it is insecure to save tokens in a file.
docker-compose run -e does work to a degree, but it loses a lot:
It does not map the ports defined for the services in docker-compose.yml.
Also, multiple services are defined in docker-compose.yml, and I don't want to use depends_on just because docker-compose up doesn't work.
Let's say I define service in docker-compose.yml
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
In my serviceA.js, I simply use the environment variable:
console.log("This is ", process.env.KEY, "running in service A");
When I run docker-compose run -e KEY=DockerComposeRun service-a,
I do get the environment variable KEY read by serviceA.js:
This is DockerComposeRun running in service A
However, I can only get a single service running this way.
I could have used environment: in docker-compose.yml:
environment:
  - KEY=DockerComposeUp
But in my use case each docker-compose run would have different environment variable values, meaning I would need to edit the file every time before running docker-compose.
Also, it is not just a single service that uses the same environment variable (a .env file would actually handle that better, but it is not desired).
There doesn't seem to be a way to do the same for docker-compose up
I have tried KEY=DockerComposeUp docker-compose up,
but what I get is undefined.
export doesn't work for me either; everything I find seems to be about providing environment variables to docker-compose.yml itself rather than to the applications in the containers.
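(For context, Compose only substitutes shell variables where the file explicitly references them, so KEY=... docker-compose up can only reach the container if docker-compose.yml passes the variable through, e.g.:
environment:
  - KEY=${KEY}
)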
To safely pass sensitive configuration data to your containers you can use Docker secrets. Secrets are encrypted in transit and at rest in a Docker swarm.
You can create and manage secrets using the commands below:
docker secret create
docker secret inspect
docker secret ls
docker secret rm
And use them in your docker-compose file, either referring to existing secrets (external) or using a file:
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true
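A service then has to opt in to each secret, and the application reads it as a file rather than an env variable (the image name my-app is hypothetical; Compose secrets require swarm mode or file version 3.1+):
services:
  service-a:
    image: my-app
    secrets:
      - my_first_secret
# inside the container the value appears at /run/secrets/my_first_secret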
You can use environment like this:
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
  environment:
    - KEY=DockerComposeRun
See: https://docs.docker.com/compose/environment-variables/
I have this Dockerfile, which works correctly:
https://github.com/shantanuo/docker/blob/master/packetbeat-docker/Dockerfile
The only problem is that when my host changes, I need to modify the packetbeat.yml file:
hosts: ["https://944fe807b7525eaf163f502e08a412c5.us-east-1.aws.found.io:9243"]
password: "rzmYYJUdHVaglRejr8XqjIX7"
Is there any way to simplify this change? Can I use environment variable to replace these 2 values?
Set environment variables in your docker container first.
You can either set them by accessing your container:
docker exec -it CONTAINER_NAME /bin/bash
export HOST="https://944fe807b7525eaf163f502e08a412c5.us-east-1.aws.found.io:9243"
export PASS="rzmYYJUdHVaglRejr8XqjIX7"
Or in your Dockerfile
ENV HOST=https://944fe807b7525eaf163f502e08a412c5.us-east-1.aws.found.io:9243
ENV PASS=rzmYYJUdHVaglRejr8XqjIX7
And then in packetbeat.yml:
hosts: ['${HOST}']
password: '${PASS}'
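An alternative that keeps the credentials out of the image entirely is to pass the same variables at run time (the image name packetbeat-image is hypothetical):
docker run \
  -e HOST="https://944fe807b7525eaf163f502e08a412c5.us-east-1.aws.found.io:9243" \
  -e PASS="rzmYYJUdHVaglRejr8XqjIX7" \
  packetbeat-image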
I'm trying to deploy an S3-backed private docker registry and I'm getting an error when I try to start the registry container. My docker-compose.yml looks like this:
registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_STORAGE_S3_ACCESSKEY: myKey
    REGISTRY_STORAGE_S3_SECRETKEY: mySecret
    REGISTRY_STORAGE_S3_BUCKET: docker.registry.bucket
    REGISTRY_STORAGE_S3_ROOTDIRECTORY: registry/data
  volumes:
    - /home/docker/certs:/certs
And when I try to run sudo docker-compose up -d I get this error message:
registry_1 | panic: multiple storage drivers specified in configuration or environment: s3, filesystem
It seems like I'm missing something in my environment to explicitly choose s3 but it's not clear from the docs how to do this.
I was trying to override the storage configuration by using ENV vars. This workaround did the job (in JSON format):
{
"REGISTRY_STORAGE": "s3",
"REGISTRY_STORAGE_S3_REGION": <REGION>,
"REGISTRY_STORAGE_S3_BUCKET": <BUCKET_NAME>,
"REGISTRY_STORAGE_S3_ROOTDIRECTORY": <ROOT_PATH>,
"REGISTRY_STORAGE_S3_ACCESSKEY": <KEY>,
"REGISTRY_STORAGE_S3_SECRETKEY": <SECRET>
}
It looks like by defining REGISTRY_STORAGE we override the one in config.yml.
The REGISTRY_STORAGE environment variable is missing; it needs to be added to the env block (REGISTRY_STORAGE: s3).
You're getting this error because the registry:2 image comes with a default config file /etc/docker/registry/config.yml which uses filesystem storage.
Adding S3 storage via environment variables leaves multiple storage drivers configured, which I guess isn't supported.
I don't know of any way to remove configuration options with environment variables, so I think you'll probably need to create a config file and mount it as a volume (http://docs.docker.com/registry/configuration/#overriding-the-entire-configuration-file)
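A sketch of that mount, following the linked page (assuming a complete config.yml in the current directory):
docker run -d -p 5000:5000 \
  -v "$(pwd)"/config.yml:/etc/docker/registry/config.yml \
  registry:2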
I was able to get this working using environment variables. Here's the snippet from my script:
-e REGISTRY_STORAGE=s3 \
-e REGISTRY_STORAGE_S3_ACCESSKEY=$AWS_KEY \
-e REGISTRY_STORAGE_S3_SECRETKEY=$AWS_SECRET \
-e REGISTRY_STORAGE_S3_BUCKET=$BUCKET \
-e REGISTRY_STORAGE_S3_REGION=$AWS_REGION \
-e REGISTRY_STORAGE_S3_ROOTDIRECTORY=$BUCKET_PATH \
In my case I had a storage environment variable in docker-compose.yml and the S3 configuration in config.yml. It took some time, but once the environment variables were commented out, registry:2 started properly.
I have my app inside a container and it's reading environment variables for passwords and API keys to access services. If I run the app on my machine (not inside docker), I just export SERVICE_KEY='wefhsuidfhda98' and the app can use it.
What's the standard approach to this? I was thinking of having a secret file which would get added to the server with export commands, and then running source on that file.
I'm using docker & fig.
The solution I settled on was the following: save the environment variables in a secret file and pass those on to the container using fig.
have a secret_env file with secret info, e.g.
export GEO_BING_SERVICE_KEY='98hfaidfaf'
export JIRA_PASSWORD='asdf8jriadf9'
have secret_env in my .gitignore
have a secret_env.template file for developers, e.g.
export GEO_BING_SERVICE_KEY='' # can leave empty if you wish
export JIRA_PASSWORD='' # write your pass
in my fig.yml I send the variables through:
environment:
  - GEO_BING_SERVICE_KEY
  - JIRA_PASSWORD
call source secret_env before building
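A minimal sketch of that workflow (fig resolves value-less environment entries from the calling shell):
source secret_env
fig build
fig up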
docker run provides environment variables:
docker run -e SERVICE_KEY=wefsud your/image
Then your application would read SERVICE_KEY from the environment.
https://docs.docker.com/reference/run/
In fig, you'd use
environment:
  - SERVICE_KEY=wefsud
in your app spec. http://www.fig.sh/yml.html
From a security perspective, the former solution is no worse than running it on your host if your docker binary requires root access. If you're allowing 'docker' group users to run docker, it's less secure, since any docker user could docker inspect the running container. Running on your host, you'd need to be root to inspect the environment variables of a running process.
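For example, anyone who can reach the Docker daemon can read a container's environment (the container name is hypothetical):
docker inspect --format '{{.Config.Env}}' my-running-container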