Securing/Encrypting the sensitive environment variables - docker

I'm using an env file which contains sensitive information for docker container creation.
But the thing is, they are not secure. They can easily be viewed via docker inspect, and hence they are available to any user who can run docker commands.
I'm looking for a way in which I can secure these values from the outside users, without using docker swarm.
Is there a way to achieve this?

For variables needed at build time (image creation):
ARG: pass them with --build-arg
For env variables needed when the container starts:
--env-file: keeps the values out of your shell history, so nobody can recover them by inspecting the CLI command you ran.
Use docker secrets: available in Swarm mode and Docker Enterprise (docker swarm secrets).
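A minimal sketch of the first two approaches, using a hypothetical API_KEY build argument and an app.env file next to the Dockerfile:

# build time: the value is still visible in docker history unless you use BuildKit secrets
docker build --build-arg API_KEY=dummy-value -t myapp .

# run time: values come from a file instead of the command line, so they stay out of shell history
docker run --env-file ./app.env myapp

Keep in mind that anything passed this way is still visible via docker inspect on the running container; only secrets avoid that.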

Related

Docker compose secrets

The newer docker compose (vs docker-compose) allows you to set secrets in the build section. This is nice because if you only provide secrets at runtime, the secret file is readable by anyone who can get into the container and read /run/secrets/<my_secret>.
Unfortunately, it appears that it's only possible to pass the secrets via either the environment or a file. Doing it via the environment doesn't seem like a great idea because someone on the box could read /proc/<pid>/environ while the image is being built and snag the secrets. Doing it via a file on disk isn't good either, because then the secret is stored on disk unencrypted.
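For reference, the build-section secrets mentioned above look roughly like this in a compose file (the service and secret names are illustrative):

services:
  app:
    build:
      context: .
      secrets:
        - npm_token
secrets:
  npm_token:
    environment: NPM_TOKEN    # or: file: ./npm_token.txt

Inside the Dockerfile the secret is consumed with RUN --mount=type=secret,id=npm_token ...; it is only available during that RUN step and is not written into an image layer.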
It seems like the best way to do this would be with something like
docker swarm init
read -sp "Enter your secret: " && echo "$REPLY" | docker secret create my_secret -
docker compose build --no-cache
docker swarm leave --force
Alas, it appears that Docker can't read from the swarm for build time secrets for some unknown reason.
What is the best way to do this? This seems to be a slight oversight, along the lines of docker secret create not having a way to prompt for the value, so you have to resort to hacks like the above to keep the secret out of your bash history.
UPDATE: This answer is about Swarm/remote Docker systems, not local build-time secrets. (I realised you were primarily asking about those and only mentioned Swarm in the second part of the question. I believe it still holds good advice for some, so I'll leave the answer undeleted.)
Docker Swarm can only read runtime secrets created with the docker secret create command, and they must already exist on the cluster when you deploy the stack. We were in the same situation before. We solved the issue using docker contexts. You can create an SSH-based docker context which points to a manager (we just use the first one). Then on your LOCAL device (we use Windows as the base platform and WSL2/a Linux VM for the UNIX part), you can simply run docker commands with the inline --context property. More on contexts in the official docs. For instance: docker --context production secret create .... And so on.
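A minimal sketch of that workflow, assuming a hypothetical manager host swarm-manager-1 reachable over SSH as user deploy:

# create an SSH-based context pointing at the first swarm manager
docker context create production --docker "host=ssh://deploy@swarm-manager-1"

# create the secret on the remote cluster from your local machine
printf '%s' "$DB_PASSWORD" | docker --context production secret create db_password -

# deploy the stack against the same context; the secret already exists on the cluster
docker --context production stack deploy -c docker-compose.yml mystack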

Handling secrets inside docker container without using docker swarm

One question: how do you handle secrets inside a Dockerfile without using docker swarm? Let's say you have a private repo on npm and you restore it inside the Dockerfile using a .npmrc file that contains credentials. After the package restore, I obviously delete the .npmrc file from the container. The same goes for NuGet.config when restoring private repos inside the container. Currently, I am supplying these credentials as --build-arg while building the Dockerfile.
But a command like docker history --no-trunc will show the password in the log. Is there any decent way to handle this? Currently, I am not on Kubernetes, so I need to handle it in docker itself.
One way I can think of is mounting /run/secrets/ and storing the credentials there, either in a text file containing the password or via a .env file. But then this .env file has to be part of the pipeline to complete the CI/CD process, which means it has to be part of source control. Is there any way to avoid this, or can something be done via the pipeline itself, or can some kind of encryption/decryption logic be applied here?
Thanks.
First, keep in mind that files deleted in one layer still exist in previous layers. So deleting files doesn't help either.
There are three ways that are secure:
Download all code in advance outside of the Docker build, where you have access to the secret, and then just COPY in the stuff you downloaded.
Use BuildKit, which is an experimental Docker feature that enables secrets in a secure way (https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information); see the sketch after this list.
Serve secrets from a network server running locally (e.g. in another container). See here for detailed explanation of how to do so: https://pythonspeed.com/articles/docker-build-secrets/
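A minimal sketch of option 2 for the .npmrc case from the question; the secret id npmrc and the node image are just illustrative choices:

# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# the secret is mounted only for this RUN step and never written into a layer
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
COPY . .

Build it with:

DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp .

Unlike --build-arg, the credential does not show up in docker history or in the image layers.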
Let me try to explain docker secrets here.
Docker secrets work with docker swarm. For that, you need to run:
$ docker swarm init --advertise-addr=$(hostname -i)
This makes the node a swarm manager. Now you can create your secret, like so:
Create a file /db_pass and put your password in this file.
$ docker secret create db_pass /db_pass
This creates your secret. Now if you want to list the secrets you have created, run:
$ docker secret ls
Let's use the secret while running a service:
$ docker service create --name mysql-service \
    --secret source=db_pass,target=mysql_root_password \
    --secret source=db_pass,target=mysql_password \
    -e MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_password" \
    -e MYSQL_PASSWORD_FILE="/run/secrets/mysql_password" \
    -e MYSQL_USER="wordpress" \
    -e MYSQL_DATABASE="wordpress" \
    mysql:latest
In the above command, /run/secrets/mysql_root_password and /run/secrets/mysql_password are file locations inside the container that hold the contents of the source file (db_pass):
source=db_pass,target=mysql_root_password creates the file /run/secrets/mysql_root_password inside the container with the db_pass value.
source=db_pass,target=mysql_password creates the file /run/secrets/mysql_password inside the container with the db_pass value.
(Screenshot: the secret files inside the container contain the db_pass data.)
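If you want to check this yourself, a quick sketch (assuming a single mysql-service task is running on the current node) is:

# find the running task container for the service and read the secret file
docker exec $(docker ps -q -f name=mysql-service) cat /run/secrets/mysql_password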

Can I reference an environment variable in a Dockerfile FROM statement?

What I'd like to do is this:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-alpine
# more stuff
But, this needs to run on an isolated clean build machine which does not have internet access, so I need to route it through a local mirror server (e.g. Artifactory or Nexus or another similar thing)
If the docker image were hosted on docker hub (e.g. FROM ubuntu) then the --registry-mirrors docker configuration option would solve it for us.
BUT because Microsoft decided not to use Docker Hub, the registry mirror option doesn't work either.
After much effort, I've set up a custom domain mirror for mcr.microsoft.com, and so now I can do this:
FROM microsoft-docker-mirror.local.domain/dotnet/core/aspnet:3.0-alpine
# more stuff
That works. But we have remote workers who may not be on the local office LAN and can't see my local mirror server. What I now want to do is vary it depending on environment. E.g.
ENV MICROSOFT_DOCKER_REPO mcr.microsoft.com
FROM ${MICROSOFT_DOCKER_REPO}/dotnet/core/aspnet:3.0-alpine
My isolated build machine would set the MICROSOFT_DOCKER_REPO environment variable, and everyone else's machines would use the default mcr.microsoft.com
Anyway. Adding the ENV line to the dockerfile results in docker throwing this error:
Error response from daemon: No build stage in current context
There seem to be lots of references to the fact that the FROM line must be the first line in the file, before even any comments...
How can I reference an environment variable in my FROM statement? Or alternatively, how can I make registry mirrors work for things that aren't docker hub?
Thanks!
"ARG is the only instruction that may precede FROM in the Dockerfile" - Dockerfile reference.
ARG MICROSOFT_DOCKER_REPO=mcr.microsoft.com
FROM ${MICROSOFT_DOCKER_REPO}/dotnet/core/aspnet:3.0-alpine
Build with a non-default MICROSOFT_DOCKER_REPO using --build-arg:
docker build --rm --build-arg MICROSOFT_DOCKER_REPO=repo.example.com -t so:58196638 .
Note: you can pass the --build-arg from the host's environment, i.e.:
MICROSOFT_DOCKER_REPO=repo.example.com docker build --rm --build-arg MICROSOFT_DOCKER_REPO=${MICROSOFT_DOCKER_REPO} -t so:58196638 .

Is it possible to customize environment variable by linking two docker containers?

I've created a docker image for my database server and one for the web application. Using the documentation, I'm able to link both containers using environment variables as follows:
value="jdbc:postgresql://${DB_PORT_5432_TCP_ADDR}:${DB_PORT_5432_TCP_PORT}/db_name"
It works fine now, but it would be better if the environment variables were more general and didn't contain a static port number. Something like:
value="jdbc:postgresql://${DB_URL}:${DB_PORT}/db_name"
Is there any way to map between the environment variables? For example, by using the ENV command in the Dockerfile (ENV DB_URL=$DB_PORT_5432_TCP_ADDR), or by using the --env argument when running the image (docker run ... -e DB_URL=$DB_PORT_5432_TCP_ADDR docker_image)?
Without building this kind of functionality into your docker startup shell scripts or another orchestration mechanism, it is not currently possible to create environment variables like the ones you are describing. You do mention a couple of workarounds. However, the problem with using -e DB_URL=... in your docker run command, at least, is that $DB_PORT_5432_TCP_ADDR is not known until the linked container is running, so you cannot set the value when you launch the container. Typically this is what your orchestration layer is for: service discovery and passing this kind of data among your containers. There is at least one workaround mentioned here on SO that involves a small shell script, referenced from your CMD or ENTRYPOINT directive, that maps the link variables onto the names the application expects.
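A minimal sketch of that entrypoint-script workaround; the file name entrypoint.sh and the final CMD are just illustrative:

#!/bin/sh
# entrypoint.sh: map the link-generated variables onto the general names
# the application expects, then hand off to the real command
export DB_URL="${DB_PORT_5432_TCP_ADDR}"
export DB_PORT="${DB_PORT_5432_TCP_PORT}"
exec "$@"

And in the Dockerfile:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["your-app-start-command"]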

Is it possible to use cloud-init and heat-cfntools inside a Docker container?

I want to use OpenStack Heat to create an application which consists of several Docker containers, and monitor some metrics of these containers, like: CPU/Mem utilization, and other application-specific metrics.
So is it possible to install cloud-init and heat-cfntools when prepare the Docker image via Dockerfile, and then run a Docker container based on the image which has cloud-init and heat-cfntools running in it?
Thanks!
So is it possible to install cloud-init and heat-cfntools when prepare the Docker image via Dockerfile
It is possible to use cloud-init inside a Docker container, if you (a) have an image with cloud-init installed, (b) have the correct commands configured in your ENTRYPOINT or CMD script, and (c) your container is running in an environment with an available metadata service.
Of those requirements, (c) is probably the most problematic; unless you are booting containers using the nova-docker driver, it is unlikely that your containers will have access to the Nova metadata service.
I am not particularly familiar with heat-cfntools, although a quick glance at the code suggests that it may work without cloud-init by authenticating against the Heat CFN API using ec2-style credentials, which you would presumably need to provide via environment variables or something.
That said, it is typically much less necessary to run cloud-init inside Docker containers, the theory being that if you need to customize an image, you'll use a Dockerfile to build a new one based on that image and re-deploy, or specify any necessary additional configuration via environment variables.
If your tools require monitoring processes on the host, you'll probably want to run with
docker run --pid=host
This is a feature introduced in Docker Engine version 1.5.
See http://docs.docker.com/reference/run/#pid-settings
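For example, a quick way to see the effect (alpine is just an arbitrary small image here):

# with --pid=host the container sees the host's process table
docker run --rm --pid=host alpine ps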
