I want to run a container, by mounting on the fly my ~/.ssh path (so as to be able to clone some private gitlab repositories).
The
COPY ~/.ssh/ /root/.ssh/
directive did not work out, because the Dockerfile interpreted paths relative to a tmp dir it creates for the builds, e.g.
/var/lib/docker/tmp/docker-builder435303036/
So my next shot was to try and take advantage of the ARG directive as follows:
ARG CURRENTUSER
COPY /home/$CURRENTUSER/.ssh/ /root/.ssh/
and run the build as:
docker build --build-arg CURRENTUSER=pkaramol <whatever follows ...>
However, I am still faced with the same issue:
COPY failed: stat /var/lib/docker/tmp/docker-builder435303036/home/pkaramol/.ssh: no such file or directory
1: How to make Dockerfile access a specific path inside my host?
2: Is there a better pattern for accessing private git repos from within ephemeral running containers, than copying my .ssh dir? (I just need it for the build process)
Docker Build Context
A build for a Dockerfile can't access specific paths outside the "build context" directory. This is the last argument to docker build, normally ".". The docker build command tars up the build context and sends it to the Docker daemon to build the image from. Only files within the build context can be referenced in the build. To include a user's .ssh directory, you would need to either base the build in the .ssh directory, or in a parent directory like /home/$USER.
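For example, a sketch (the paths are illustrative) that uses the home directory as the build context while keeping the Dockerfile elsewhere:
# COPY in the Dockerfile is then relative to the new context, e.g.:
#   COPY .ssh/ /root/.ssh/
docker build -f /path/to/project/Dockerfile -t my-image /home/pkaramol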
Build Secrets
COPYing or ADDing credentials at build time is a bad idea, as the credentials will be saved in the image build for anyone who has access to the image to see. There are a couple of caveats here: you can flatten the image layers after removing the sensitive files during the build, or create a multi-stage build (17.05+) that only copies non-sensitive artefacts into the final image.
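A minimal sketch of the multi-stage caveat (the repo URL and key file name are placeholders, and note the key still exists in the first stage's layers on the build host):
FROM alpine/git AS clone
COPY id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa && \
    ssh-keyscan gitlab.example.com >> /root/.ssh/known_hosts && \
    git clone git@gitlab.example.com:group/private-repo.git /src

FROM python:3
COPY --from=clone /src /app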
Using ENV or ARG is also bad as the secrets will end up in the image history.
There is a long and involved GitHub issue about secrets that covers most of the variations on the idea. It's long, but worth reading through the comments in there.
The two main solutions are to obtain secrets via the network or a volume.
Volumes are not available in standard builds, so that makes them tricky.
Docker has added secrets functionality, but this is only available at container run time for swarm-based services.
Network Secrets
Custom
The secrets GitHub issue has a neat little netcat example.
nc -l 10.8.8.8 8080 < $HOME/.ssh/id_rsa &
Then use curl in the Dockerfile to collect the key, use it, and remove it, all in one RUN step.
RUN set -uex; \
    mkdir -p /root/.ssh; \
    curl -s http://10.8.8.8:8080 > /root/.ssh/id_rsa; \
    chmod 600 /root/.ssh/id_rsa; \
    ssh -i /root/.ssh/id_rsa root@wherever priv-command; \
    rm /root/.ssh/id_rsa;
To make unsecured network services accessible, you might want to add an alias IP address to your loopback interface so your build container or local services can access it, but no one external can.
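On Linux that can look something like this (10.8.8.8 is just the example address used above):
sudo ip addr add 10.8.8.8/32 dev lo
# ... run the build ...
sudo ip addr del 10.8.8.8/32 dev lo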
HTTP
Simply running a web server with your keys mounted could suffice.
docker run -d \
-p 10.8.8.8:80:80 \
-v /home/me/.ssh:/usr/share/nginx/html:ro \
nginx
You may want to add TLS or authentication depending on your setup and security requirements.
HashiCorp Vault
Vault is a tool built specifically for managing secrets. It goes beyond the requirements for a Docker build. It's written in Go and is also distributed as a container.
Build Volumes
Rocker
Rocker is a custom Docker image builder that extends Dockerfiles to support some new functionality. The MOUNT command they added allows you to mount a volume at build time.
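A rough sketch of what that could look like in a Rockerfile (the MOUNT syntax here is from memory, so check the Rocker docs before relying on it; the repo URL is a placeholder):
FROM alpine/git
MOUNT /home/pkaramol/.ssh:/root/.ssh
RUN git clone git@gitlab.example.com:group/private-repo.git /src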
Packer
The Packer Docker Builder also allows you to mount arbitrary volumes at build time.
Related
I wonder if it is possible to make Docker automatically mount volumes during the build or container run phase. With podman it is easy, using /usr/share/containers/mounts.conf, but I need to use Docker CE.
If it is not, may I somehow use the host's RHEL subscription during the Docker build phase? I need to use a RHEL UBI image and I have to use the company's Satellite.
A container image build in docker is designed to be self contained and portable. It shouldn't matter whether you run the build on your host or a CI server in the cloud. To do that, they rely on the build context and args to the build command, rather than other settings on the host, where possible.
buildah seems to have taken a different approach with their tooling, allowing you to use components from the host in your build, giving you more flexibility, but also fragility.
That's a long way of saying the "feature" doesn't exist in docker, and if it gets created, I doubt it would look like what you're describing. Instead, with buildkit, they allow you to inject secrets from the build command line, which are mounted into the steps where they are required. An example of this is available in the buildkit docs:
# syntax = docker/dockerfile:1.3
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And to build that Dockerfile, you would pass the secret as a CLI arg:
$ docker build --secret id=aws,src=$HOME/.aws/credentials .
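Depending on your Docker version, you may also need to enable BuildKit explicitly, for example by setting DOCKER_BUILDKIT=1 in the environment before running docker build.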
This is basically a follow-up question to How to include files outside of Docker's build context?: I'm using large files in all of my projects (several GBs) which I keep on an external drive, only used for development.
I want to COPY or ADD these files to my docker container when building it. The answer linked above allows one to specify a different path to a Dockerfile, potentially extending the build context. I find this impractical, since it would require setting the build context to the system root (?) just to be able to include a single file.
Long story short: Is there any way or workaround to include a file that is far removed from the docker build context?
Three suggestions on things you could try:
include a file that is far removed from the docker build context?
You could construct your own build context by cp (or tar) files on the host into a dedicated directory tree. You don't have to use the actual source tree or your build tree.
rm -rf docker-build
mkdir docker-build
cp -a Dockerfile build/the-binary docker-build
cp -a /mnt/external/support docker-build
docker build ./docker-build
# reads docker-build/Dockerfile, and the files in the
# docker-build directory, but nothing else; only sends
# the docker-build directory to Docker as the build context
large files [...] (several GBs)
Docker doesn't deal well with build contexts this large. In the past I've seen docker build take a long time just on the step of sending the build context to the daemon, and docker push and docker pull have network issues when trying to send the gigabyte+ layer around.
It's a little hacky and breaks the "self-contained image" model a little bit, but you can provide these files as a Docker bind-mount instead of including them in the image. Your application needs to know what to do if the data isn't there. When you go to deploy the application, you also need to separately distribute the files alongside the Docker image and other deployment artifacts.
docker run \
-v /mnt/external/support:/app/support \
...
the-image-without-the-support-files
only used for development
Potentially you can get away with not using Docker at all during this phase of development. Use a local source tree and local development tools; run your unit tests against these large test fixtures as needed. Build a Docker image only when you're about to run pre-commit integration tests; that may be late enough in the development cycle that you don't need these files.
I think the main thing you are worried about is that you do not want to send all the files in a directory to the docker daemon while it builds the image. When the directory is that big (several GBs), it takes a lot of time to build an image.
If the requirement is just to use those files while you build something inside docker, you can mount them into the container.
A tricky way
Run a container from the base image and mount the directories inside it: docker run -d -v local-path:container-path <base-image> (a combined sketch of these steps follows below)
Get inside the container docker exec -it CONTAINER_ID bash
Run build step ./build-something.sh
Create image from the running container docker commit CONTAINER_ID
Tag the image: docker tag IMAGE_ID tag:v1. You can get the image ID from the previous command.
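A combined sketch of these steps (base-image, the mounted path, the build script and the tag are placeholders):
# start a long-running container with the host directory mounted
CID=$(docker run -d -v /mnt/data:/data base-image sleep infinity)
# run the build inside it
docker exec -it "$CID" /data/build-something.sh
# snapshot the container as an image, tag it, and clean up
IMAGE_ID=$(docker commit "$CID")
docker tag "$IMAGE_ID" my-image:v1
docker rm -f "$CID"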
From a long-term perspective this method may seem very tedious, but if you only want to build the image once or twice, you can try it.
I tried this for one of my docker images, as I wanted to avoid sending a large number of files to the docker daemon during the image build.
The COPY command takes source and destination values; just specify the full absolute path to your hard drive's mount point as the src directory:
COPY /absolute_path/to/harddrive /container/path
One question: how do you handle secrets inside a Dockerfile without using docker swarm? Let's say you have a private repo on npm and restore packages from it using an .npmrc inside the Dockerfile by providing credentials. After the package restore, I obviously delete the .npmrc file from the container. The same goes for NuGet.config when restoring private repos inside the container. Currently, I am supplying these credentials as --build-arg while building the Dockerfile.
But a command like docker history --no-trunc will show the password in the log. Is there any decent way to handle this? I am not on Kubernetes at the moment, so I need to handle this in docker itself.
One way I can think of is mounting /run/secrets/ and storing the credentials there, either in a text file containing the password or via a .env file. But then this .env file has to be part of the pipeline to complete the CI/CD process, which means it has to be in source control. Is there any way to avoid this, or can something be done via the pipeline itself, or can some kind of encryption/decryption logic be applied here?
Thanks.
First, keep in mind that files deleted in one layer still exist in previous layers. So deleting files doesn't help either.
There are three ways that are secure:
Download all code in advance outside of the Docker build, where you have access to the secret, and then just COPY in the stuff you downloaded.
Use BuildKit, which is an experimental Docker feature that enables secrets in a secure way (https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information); a sketch for the .npmrc case follows below.
Serve secrets from a network server running locally (e.g. in another container). See here for detailed explanation of how to do so: https://pythonspeed.com/articles/docker-build-secrets/
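Applying the BuildKit option to the .npmrc case from the question, a hedged sketch (the base image, secret id and paths are placeholders):
# syntax = docker/dockerfile:1.3
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm install
And the corresponding build command:
$ DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=$HOME/.npmrc .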
Let me try to explain docker secret here.
Docker secret works with docker swarm. For that you need to run
$ docker swarm init --advertise-addr=$(hostname -i)
This makes the node a swarm manager. Now you can create your secret like this:
Create a file /db_pass and put your password in this file.
$ docker secret create db_pass /db_pass
This creates your secret. Now if you want to list the secrets you have created, run:
$ docker secret ls
Let's use the secret when running the service:
$ docker service create --name mysql-service \
    --secret source=db_pass,target=mysql_root_password \
    --secret source=db_pass,target=mysql_password \
    -e MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_password" \
    -e MYSQL_PASSWORD_FILE="/run/secrets/mysql_password" \
    -e MYSQL_USER="wordpress" \
    -e MYSQL_DATABASE="wordpress" \
    mysql:latest
In the command above, /run/secrets/mysql_root_password and /run/secrets/mysql_password are file locations inside the container that hold the data from the source secret file (db_pass).
source=db_pass,target=mysql_root_password (creates the file /run/secrets/mysql_root_password inside the container with the db_pass value)
source=db_pass,target=mysql_password (creates the file /run/secrets/mysql_password inside the container with the db_pass value)
To verify, assuming the service's task is running on this node, you can read the secret file from inside the container:
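# find the container for the service and print the secret file (sketch)
$ docker exec $(docker ps -q -f name=mysql-service) cat /run/secrets/mysql_root_password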
I have a docker-compose newbie question. We have an existing Jenkins build that creates Docker images and pushes them to an in house Artifactory repository. This is driven by using Maven/Docker and two Dockerfiles, one for the app and one for a volume/data container. The Dockerfiles look something like this:
App:
FROM centos
RUN useradd -u 6666 -ms /bin/bash foouser
COPY src/main/resources/home/foouser/.bashrc /home/foouser/
RUN chown -R foouser:foouser /home/foouser
USER foouser
COPY src/main/resources/opt/myapp/bin/startup.sh /opt/myapp/bin/
WORKDIR /home/foouser
ENTRYPOINT /opt/myapp/bin/startup.sh && /bin/bash
Data container:
FROM centos
# Environment variable for the path to mount/create. Defaults to /opt/data
ENV DATA_VOL_PATH="/opt/data"
# Make sure the user id is the same as the container using the volume, otherwise we may run into permission issues on
# the container mounting the volume.
RUN useradd -u 6666 -ms /bin/bash foouser && \
mkdir -p "$DATA_VOL_PATH" && \
chown -R foouser:foouser "$DATA_VOL_PATH"
VOLUME [ "$DATA_VOL_PATH" ]
I omitted stuff like labels, etc for brevity. So the images produced by the build from these Dockerfiles will end up in the local Artifactory repo. We're using Rancher/Cattle to instantiate these images, and I've added the Artifactory repo to Rancher so it can pull from there. The docker-compose.yml file in Rancher looks something like this:
# The data/volumes container for the data.
data:
  image: data-image

# App
myapp:
  image: app-image
  environment:
    DATA_VOL_PATH:
  volumes_from:
    - data
I know that I can pass environment variables from docker-compose (as in the DATA_VOL_PATH above), but I'm confused as to how things work. My understanding is that the commands in the Dockerfile are executed when I run docker build, and after that, the image is immutable. When I instantiate a container based on the image, it creates a new writable UFS layer on top of it if I've understood things correctly. So in the case of the data container, I can't really change the volume once it's created, right? If that assumption is correct, it boils down to 1) how do I best synchronize user ids across two different system (Maven for creating the Docker images, and Rancher for instantiating container clusters), and 2) is it better to drive the creation of the data volume container entirely from docker-compose.yml? How would I then be able to replicate the data container's Dockerfile content in docker-compose.yml?
I assume this is a fairly common scenario, so there must be a few "best practices" solutions out there. Thanks.
I've run into this issue as well: in my case there was a non-root process inside Docker container, and this process needed to access files mounted from host, so I had to pay attention to UID/GID values both inside Docker image and host.
I'm afraid there is no good solution for this. Docker images are really immutable, so you'll have to establish some agreements or use third-party software to control both the build and the deployment process.
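For example, one such agreement (a sketch, not taken from the Dockerfiles above) is to make the UID a build argument, so the build side and the deployment side only need to agree on a single number:
ARG FOOUSER_UID=6666
RUN useradd -u "$FOOUSER_UID" -ms /bin/bash foouser
and then build with docker build --build-arg FOOUSER_UID=6666 . (or whatever UID the host side expects).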
I would also strongly encourage you to stop using docker-compose and to give SaltStack a try.
I am trying to build a docker image and I have a Dockerfile with all the necessary commands, but in my build steps I need to copy one directory from a remote host into the docker image. If I put an scp command into the Dockerfile, I'd also have to provide the password in the Dockerfile, which I don't want to do.
Does anyone have a better solution for this? Any suggestion would be appreciated.
I'd say there are at least two options for dealing with that:
Option 1:
If you can execute scp before running docker build this may turn out to be the easiest option:
Run scp -r somewhere:remote_dir ./local_dir
Add COPY ./local_dir some_path to your Dockerfile
Run docker build
Option 2: If you have to execute scp during the build:
Start some key-value store such as etcd before the build
Place a correct SSH key (it cannot be password-protected) temporarily in the key-value store
Within a single RUN command (to avoid leaving secrets inside the image; a sketch follows after this list):
retrieve the SSH key from the key-value store;
put it in ~/.ssh/id_rsa or start an ssh-agent and add it;
retrieve the directory with scp
remove the SSH key
Remove the key from the key-value store
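A hedged sketch of that single RUN step, assuming the key is served over plain HTTP on an internal address as in the netcat example above, rather than from etcd (host names, port and paths are placeholders, and the base image needs curl and an ssh client):
RUN set -uex; \
    mkdir -p /root/.ssh; \
    curl -s http://10.8.8.8:8080/id_rsa > /root/.ssh/id_rsa; \
    chmod 600 /root/.ssh/id_rsa; \
    ssh-keyscan remote-host >> /root/.ssh/known_hosts; \
    scp -r -i /root/.ssh/id_rsa user@remote-host:remote_dir /opt/app/data; \
    rm /root/.ssh/id_rsa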
The second option is a bit convoluted, so it may be worth creating a wrapper script that retrieves the required secrets, runs any command, and removes the secrets.
You can copy a directory into an (even running) container after the build:
On remote machine: Copy from remote host to docker host with
scp -r /your/dir/ <user-at-docker-host>@<docker-host>:/some/remote/directory
On docker machine: Copy from docker host into docker container
docker cp /some/remote/directory <container-name>:/some/dir/within/docker/
Of course you can also do step 1 from your docker machine, if you prefer that, by simply adapting the source and target of the scp command.