Handling secrets inside a Docker container without using Docker Swarm

One question: how do you handle secrets inside a Dockerfile without using Docker Swarm? Say you have a private repo on npm and restore from it using an .npmrc inside the Dockerfile by providing credentials. After the package restore, I obviously delete the .npmrc file from the container. The same goes for NuGet.config when restoring private feeds inside the container. Currently, I am supplying these credentials as --build-arg while building the Dockerfile.
But a command like docker history --no-trunc will show the password in the log. Is there any decent way to handle this? Currently, I am not on Kubernetes, so I need to handle this in Docker itself.
One way I can think of is mounting /run/secrets/ and storing the credentials there, either in a text file containing the password or via a .env file. But then this .env file has to be part of the pipeline to complete the CI/CD process, which means it has to be in source control. Is there any way to avoid this, or can something be done in the pipeline itself, or can some kind of encryption/decryption logic be applied here?
Thanks.

First, keep in mind that files deleted in one layer still exist in previous layers. So deleting files doesn't help either.
There are three ways that are secure:
Download all code in advance outside of the Docker build, where you have access to the secret, and then just COPY in the stuff you downloaded.
Use BuildKit, which is an experimental Docker feature that enables secrets in a secure way (https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information); see the sketch after this list.
Serve secrets from a network server running locally (e.g. in another container). See here for detailed explanation of how to do so: https://pythonspeed.com/articles/docker-build-secrets/
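For option 2 (BuildKit), a minimal sketch of what this can look like for the .npmrc case from the question; the base image, paths, and secret id (npmrc) are placeholders:
# syntax=docker/dockerfile:1
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
COPY . .
The secret is mounted only for that single RUN step and never written to a layer. Build it with the credential supplied from outside the build context, so it never shows up in docker history:
DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp .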

Let me try to explain Docker secrets here.
Docker secrets work with Docker Swarm. For that you need to run
$ docker swarm init --advertise-addr=$(hostname -i)
This makes the node a swarm manager. Now you can create your secret like this:
Create a file /db_pass and put your password in it.
$ docker secret create db_pass /db_pass
This creates your secret. To list the secrets you have created, run
$ docker secret ls
Let's use the secret while running a service:
$ docker service create --name mysql-service \
    --secret source=db_pass,target=mysql_root_password \
    --secret source=db_pass,target=mysql_password \
    -e MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_password" \
    -e MYSQL_PASSWORD_FILE="/run/secrets/mysql_password" \
    -e MYSQL_USER="wordpress" \
    -e MYSQL_DATABASE="wordpress" \
    mysql:latest
In the above command, /run/secrets/mysql_root_password and /run/secrets/mysql_password are file locations inside the container, and each file holds the data of the source secret (db_pass):
source=db_pass,target=mysql_root_password creates the file /run/secrets/mysql_root_password inside the container with the db_pass value.
source=db_pass,target=mysql_password creates the file /run/secrets/mysql_password inside the container with the db_pass value.
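To verify, you can read the secret files from inside the running container; the docker ps filter below assumes the service task's container name contains mysql-service:
docker exec $(docker ps -q -f name=mysql-service) cat /run/secrets/mysql_root_password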

Related

Docker compose secrets

The newer docker compose (vs. docker-compose) allows you to set secrets in the build section. This is nice, because if you only provide secrets at runtime, the file is readable by anyone who can get into the container and read /run/secrets/<my_secret>.
Unfortunately, it appears that it's only possible to pass the secrets in via either the environment or a file. Doing it via the environment doesn't seem like a great idea, because someone on the box could read /proc/<pid>/environ while the image is being built and snag the secret. Doing it via a file on disk isn't good either, because then the secret is stored on disk unencrypted.
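For reference, the mechanism in question looks roughly like this in docker-compose.yml (service and secret names are made up; the Dockerfile would consume the secret with RUN --mount=type=secret,id=npm_token):
services:
  app:
    build:
      context: .
      secrets:
        - npm_token
secrets:
  npm_token:
    environment: NPM_TOKEN    # or: file: ./npm_token.txt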
It seems like the best way to do this would be with something like
docker swarm init
read -sp "Enter your secret: "; echo "$REPLY" | docker secret create my_secret -
docker compose build --no-cache
docker swarm leave --force
Alas, it appears that Docker can't read from the swarm for build time secrets for some unknown reason.
What is the best way to do this? This seems to be a slight oversight, along the lines of docker secret create not having a way to prompt for the value, instead of having to resort to hacks like the above to keep the secret out of your bash history.
UPDATE: This is for Swarm/remote Docker systems, not targeted at local build-time secrets. (I realised you were asking primarily about those and just mentioned Swarm in the second part of the question. I believe it still holds good advice for some, so I'll leave the answer undeleted.)
Docker Swarm can only read runtime secrets, which you create with the docker secret create command and which must already exist on the cluster when deploying the stack. We had been in the same situation before. We solved the "issue" using Docker contexts. You can create an SSH-based Docker context which points to a manager node (we just use the first one). Then on your LOCAL device (we use Windows as the base platform and WSL2/a Linux VM for the UNIX part), you can simply run docker commands with the inline --context property. More on contexts in the official docs. For instance: docker --context production secret create .... And so on.
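A rough sketch of that workflow (context name, user, host, and secret name are placeholders):
docker context create production --docker "host=ssh://deploy@swarm-manager-1"
docker --context production secret create db_pass ./db_pass
docker --context production stack deploy -c docker-compose.yml mystack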

How do you access/pull data from an outside server into a Docker container?

I have run into more and more data scientists who use Docker containers, in order to allow for reproducible analyses.
Question: How do you download/pull data into a Docker container?
If the data is downloadable via a URL, you could naturally add a line like this to the Dockerfile:
RUN wget www.server_to_data.org/path/path/myfile.gz
But I have data sitting on a server, whereby users ssh into the server with a key-pair in ~/.ssh/id_rsa.pub. I'm not sure how this could work security-wise.
How does one normally download or access your data in this case?
One could possibly mount the server, but I'm not sure how one would access it from within the container/VM.
For your current situation, where you've got the data on a server and you're handing out key pairs to people who should have access, you can use that existing infrastructure without changing it: declare a volume for the SSH keys in the image, and have people who run the image start the container with that volume pointed at their SSH key.
Set a volume in the image with the Dockerfile:
FROM ubuntu
#[RUN your installation process]
VOLUME /home/container_user/.ssh
Run the container, mounting the location of the SSH key into that volume:
docker run -d -v PATH_TO_DIRECTORY_HOLDING_SSH_KEY:/home/container_user/.ssh [OTHER OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
Then you can download the data as part of the script that runs when the container is started.
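A minimal sketch of such a startup script, assuming the key is mounted at /home/container_user/.ssh and the data lives on a made-up host and path:
#!/bin/sh
# fetch the dataset with the mounted key, then hand off to the container's main command
scp -o StrictHostKeyChecking=no -i /home/container_user/.ssh/id_rsa -r user@data-server:/data/myproject /data
exec "$@"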
The basic idea is lifted from How can I get my ~/.ssh keys into a docker container running locally?
That said, if we back the question up a little and ask how exactly people are going to use your image, where the image will be stored (public or private repo), and how often the data changes, there may be more user-friendly ways to satisfy the need. Also, if you allow docker-compose to be the means by which the container is run, some other options become available to you, for example:
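With docker-compose, each user could point a bind mount at their own key in the compose file (image name and paths are placeholders):
services:
  analysis:
    image: myorg/analysis:latest
    volumes:
      - ~/.ssh:/home/container_user/.ssh:ro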

Path interpretation in a Dockerfile

I want to run a container, by mounting on the fly my ~/.ssh path (so as to be able to clone some private gitlab repositories).
The
COPY ~/.ssh/ /root/.ssh/
directive did not work out, because the Dockerfile interpreted paths relative to a tmp dir it creates for the builds, e.g.
/var/lib/docker/tmp/docker-builder435303036/
So my next shot was to try to take advantage of the ARG instruction as follows:
ARG CURRENTUSER
COPY /home/$CURRENTUSER/.ssh/ /root/.ssh/
and run the build as:
docker build --build-arg CURRENTUSER=pkaramol <whatever follows ...>
However, I am still faced with the same issue:
COPY failed: stat /var/lib/docker/tmp/docker-builder435303036/home/pkaramol/.ssh: no such file or directory
1: How can I make the Dockerfile access a specific path on my host?
2: Is there a better pattern for accessing private git repos from within ephemeral running containers than copying my .ssh dir? (I just need it for the build process.)
Docker Build Context
A Dockerfile build can't access specific paths outside the "build context" directory. The build context is the last argument to docker build, normally . (the current directory). The docker build command tars up the build context and sends it to the Docker daemon, which builds the image from it. Only files within the build context can be referenced in the build. To include a user's .ssh directory, you would need to either base the build in the .ssh directory, or in a parent directory like /home/$USER.
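For example, assuming your home directory is /home/pkaramol, you could run the build with that directory as the context and point at the Dockerfile explicitly (paths are illustrative):
docker build -t myimage -f /home/pkaramol/project/Dockerfile /home/pkaramol
and then reference the key relative to the context inside the Dockerfile:
COPY .ssh/ /root/.ssh/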
Build Secrets
COPYing or ADDing credentials at build time is a bad idea, as the credentials will be saved in the image build for anyone who has access to the image to see. There are a couple of exceptions: you can flatten the image layers after removing the sensitive files during the build, or create a multi-stage build (17.05+) that only copies non-sensitive artefacts into the final image.
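A rough sketch of the multi-stage variant (repo URL and paths are placeholders); the key never reaches the final image, although it does remain in the builder stage's layers on the build host:
FROM ubuntu AS builder
RUN apt-get update && apt-get install -y git openssh-client
COPY id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa && \
    ssh-keyscan gitlab.com >> /root/.ssh/known_hosts && \
    git clone git@gitlab.com:me/private-repo.git /src && \
    rm /root/.ssh/id_rsa

FROM ubuntu
COPY --from=builder /src /app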
Using ENV or ARG is also bad, as the secrets will end up in the image history.
There is a long and involved GitHub issue about build secrets that covers most of the variations on the idea. It's worth reading through the comments in there.
The two main solutions are to obtain secrets via the network or a volume.
Volumes are not available in standard builds, so that makes them tricky.
Docker has added secrets functionality, but this is only available at container run time for Swarm-based services.
Network Secrets
Custom
The secrets GitHub issue has a neat little netcat example.
nc -lp 10.8.8.8 8080 < $HOME/.ssh/id_rsa &
Then use curl in the Dockerfile to collect the key, use it, and remove it, all in one RUN step:
RUN set -uex; \
mkdir -p /root/.ssh; \
curl -s http://10.8.8.8:8080 > /root/.ssh/id_rsa; \
chmod 600 /root/.ssh/id_rsa; \
ssh -i /root/.ssh/id_rsa root@wherever priv-command; \
rm /root/.ssh/id_rsa;
Because these network services are unsecured, you might want to bind them to an alias IP address on your loopback interface, so that your build container or local services can reach them but nothing external can.
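On Linux that can be as simple as (using the 10.8.8.8 address from the examples here):
sudo ip addr add 10.8.8.8/32 dev lo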
HTTP
Simply running a web server with your keys mounted could suffice.
docker run -d \
-p 10.8.8.8:80:80 \
-v /home/me/.ssh:/usr/share/nginx/html:ro \
nginx
You may want to add TLS or authentication depending on your setup and security requirements.
Hashicorp Vault
Vault is a tool built specifically for managing secrets. It goes beyond the requirements for a Docker build; it's written in Go and is also distributed as a container.
Build Volumes
Rocker
Rocker is a custom Docker image builder that extends Dockerfiles to support some new functionality. The MOUNT command they added allows you to mount a volume at build time.
Packer
The Packer Docker Builder also allows you to mount arbitrary volumes at build time.

How to move a local volume onto a remote docker machine

I have my local docker machine and a remote docker machine on the cloud. My docker-compose app has a web container with this config:
web:
  container_name: web
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
    - ./data:/usr/src/app/data
  env_file: .env
  command: /usr/local/bin/gunicorn --workers 4 --timeout 120 --bind :8000 app:app
The important part is that second volume. I have this local folder called data with some 10GB of data in it. I made it a volume in the first place because otherwise building the container takes forever. Now that the app is production-ready, I'd like to deploy it. One problem: now my remote web container has an empty data folder mounted in it. So how do I move data from my local machine into a container on a remote docker machine? Where do I even move it to?
It seems like there are two tools for this:
docker cp which doesn't seem like it will work for remote docker machines
docker-machine scp which seems made for this, right?
I'm almost positive I need to use the second of these, but since I don't quite understand how docker machine works or where it keeps its data, I'm not sure what destination path to use:
$ dm scp -r /Users/alex/Documents/Project/data remote-machine:/usr/src/app/data
fails with error message:
scp: /usr/src/app/data: No such file or directory
Where should I be scp'ing this data in order to have it mount properly on my remote web container?
Local path vs. in-container path
Assuming you will use the same model remotely that you used locally, keep in mind that the path /usr/src/app/data is the path inside the container. When you are copying the files from one system to another, you just need to copy them from the current system to the remote system, then put them in a path where docker-compose knows how to find them, to mount into a new container.
So all you have to do is copy them from here to there, and use the same path relative to docker-compose.yml. It only knows your external volume as ./data, so if you put the directory in the same place (from docker-compose's perspective), everything should work the same.
How to copy the files
As for how to do the copy, these are just files, so it doesn't matter. scp -r should work, or make a zipfile, copy that, unzip into the correct place, etc. There are a ton of ways to copy files, so pick whatever is simplest for your case.
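For example, assuming the compose project lives at /srv/myapp on the cloud instance (a placeholder path), something like this would do:
scp -r ./data user@remote-host:/srv/myapp/data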
What exactly needs to be copied?
In the comments you expressed confusion about local vs. remote operations in docker-machine, and what else you needed to copy. Here's a fuller explanation:
On your local system (which I'm assuming is your own PC or laptop), you have docker-machine installed, and you've been using that for all of this development. Completely separate from that is your new cloud instance where you would like to deploy.
To run on your cloud instance what you already have running locally, the cloud instance will need the following:
The docker-compose.yml file.
As long as you plan to use docker-compose to run this, that must be available.
Your .env file.
Since you are using an environment file in this setup, it must be available or docker-compose can't make use of it.
Your web image.
You have a build parameter for this container, but not an image parameter. So currently the only thing you can do is docker-compose build web which will locally generate an image, which docker-compose then knows how to run.
Another option is to add an image parameter, with a repository:tag, such as myuser/myapp_web:1.0, and push that up to Docker Hub. Then, on your cloud instance, the image can be retrieved from Docker Hub instead of building it locally.
In that case, you can add an image parameter to the web container in docker-compose.yml, then build it and push it up.
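For illustration, using the tag from above, the service definition could look like this (the rest of the web service stays as it is):
web:
  build: ./web
  image: myuser/myapp_web:1.0
With that in place, the two commands below build the image locally and push it to the repository: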
docker-compose build web
docker-compose push web
Then on the cloud instance, you can fetch it:
docker-compose pull web
docker-compose will know to use that image because of the image parameter in docker-compose.yml (which is also present on the cloud server).
Ref: Creating a new repository on Docker Hub
Which of these options is preferable depends on how you want to manage things. Either one would work, but the "local build" option would require you to copy any required source files to your cloud instance too (anything that is used during the build process).
I don't see in your question where the postgres container comes from. If you are also custom-building this one, then the same goes as for web. If you are using a public image for this, then you shouldn't need to copy anything; docker-compose will know how to fetch it, i.e. you can do this:
docker-compose pull postgres
What about docker cp and docker-machine scp?
You mentioned docker cp and docker-machine scp in your question.
As you already determined, docker cp is not a solution here. That command is for copying files between a container and the host filesystem. It has nothing to do with copying over a network.
As far as I know, docker-machine scp is to copy files between your local host and a docker-machine-managed VM. To copy files to your cloud instance you can likely use a more generic tool like scp or sftp more easily.
Not sure as of which Docker version, but contrary to the statements in the question and in Dan Lowe's answer, this works fine:
docker cp ./data container:/usr/src/app/
docker cp is a normal part of the API, so it works like any other command, even remotely.
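For example, if your client can reach the remote daemon over SSH (host and container name are placeholders), something like this should work with Docker 18.09+:
DOCKER_HOST=ssh://user@remote-host docker cp ./data web:/usr/src/app/data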

how to copy dir from remote host to docker image

I am trying to build a Docker image, and I have a Dockerfile with all the necessary commands. But in my build steps I need to copy a directory from a remote host into the Docker image. If I put an scp command into the Dockerfile, I'll also have to put the password into the Dockerfile, which I don't want to do.
Does anyone have a better solution for this? Any suggestion would be appreciated.
I'd say there are at least two options for dealing with that:
Option 1:
If you can execute scp before running docker build this may turn out to be the easiest option:
Run scp -r somewhere:remote_dir ./local_dir
Add COPY ./local_dir some_path to your Dockerfile
Run docker build
Option 2: If you have to execute scp during the build:
Start some key-value store such as etcd before the build
Place a correct SSH key (it cannot be password-protected) temporarily in the key-value store
Within a single RUN command (to avoid leaving secrets inside the image):
retrieve the SSH key from the key-value store;
put it in ~/.ssh/id_rsa or start an ssh-agent and add it;
retrieve the directory with scp
remove the SSH key
Remove the key from the key-value store
The second option is a bit convoluted, so it may be worth creating a wrapper script that retrieves the required secrets, runs any command, and removes the secrets.
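A rough sketch of that single RUN step (the key-server URL and paths are made up; in practice you would fetch the key from your key-value store's API):
RUN set -eux; \
    mkdir -p /root/.ssh; \
    curl -s http://keyserver.local:8080/build_ssh_key > /root/.ssh/id_rsa; \
    chmod 600 /root/.ssh/id_rsa; \
    ssh-keyscan somewhere >> /root/.ssh/known_hosts; \
    scp -r somewhere:remote_dir /opt/remote_dir; \
    rm /root/.ssh/id_rsa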
You can copy a directory into an (even running) container after build time.
On the remote machine, copy from the remote host to the Docker host:
scp -r /your/dir/ <user-at-docker-host>@<docker-host>:/some/remote/directory
On the Docker host, copy from the Docker host into the Docker container:
docker cp /some/remote/directory <container-name>:/some/dir/within/docker/
Of course you can also do step 1 from your Docker host, if you prefer, by simply adapting the source and target of the scp command.
