So I want to include an RSA key in my image so I can clone a git repo into the image while it's building. But I really don't want to have to keep this key in the docker build repo. Does anyone have a good recommendation on how to handle this? From the docker documentation and various other threads it seems that there is no way to COPY files from outside of the build context, apart from the solutions in the following question, which I am not interested in using:
How to include files outside of Docker's build context?
Is there a better solution to this? Or am I going to have to either keep the key in the build repo or build from the location of the rsa key I want to use?
I guess a possible way to do it would be to gitignore the key from the build repo and just put it in whenever I clone it, and make a note of it in the readme so other developers know to do this too.
--- My Solution ---
I don't think there is a "correct" answer for this but here was the solution I went with.
I create a Linux user (somewhere) and generate an SSH key for it. Then I create a user on GitLab with repository-cloning rights only and add the Linux user's public key to that GitLab user. For the build, I create the .ssh folder and copy in that user's private key along with a config file. I just store that user's key in the docker build repo.
build steps:
RUN mkdir ~/.ssh
RUN touch ~/.ssh/known_hosts
RUN ssh-keyscan -t rsa gitlab_host > ~/.ssh/known_hosts
COPY ./ssh/config /root/.ssh
COPY ./ssh/id_rsa_app /root/.ssh
RUN chmod 600 /root/.ssh/id_rsa_app
ssh config file:
Host gitlab-app
    HostName gitlab_host
    IdentityFile /root/.ssh/id_rsa_app
    IdentitiesOnly yes
Now the git clone works inside of the build.
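For reference, with the gitlab-app alias from that config file, the clone step in the Dockerfile can look roughly like this (the group/repository path is just an example):
RUN git clone git@gitlab-app:mygroup/myrepo.git /opt/myrepo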
What about using a build argument? Do something like this in your Dockerfile:
ARG rsakey
RUN test -n "${rsakey}" && { \
mkdir -p -m 700 /root/.ssh; \
echo "${rsakey}" > /root/.ssh/id_rsa; \
chmod 600 /root/.ssh/id_rsa; \
} || :
Then, when you build the image, use the --build-arg option:
docker build -t sshtest --build-arg rsakey="$(cat /path/to/id_rsa)" .
This will inject the key into the image at build time without requiring it to live in your build context.
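If you then want to clone during the build, a hedged sketch of a follow-up step in the same Dockerfile (the gitlab_host placeholder and repository path are examples):
RUN ssh-keyscan -t rsa gitlab_host >> /root/.ssh/known_hosts && \
    git clone git@gitlab_host:mygroup/myrepo.git /opt/myrepo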
Related
I am trying to build a docker image that contains all of the necessary plugins/providers that several source repos need, so that when an automated terraform validate runs, it doesn't have to download gigs of redundant data.
However, I recognize that this creates a maintenance problem: someone may update a plugin version, and that version would need to be downloaded, since the docker image would not contain it.
The question
How can I:
1. pre-download all providers and plugins,
2. tell the CLI to use those pre-downloaded plugins, AND
3. also tell it that, if it doesn't find what it needs locally, it can go to the network?
Below are the relevant files:
.terraformrc
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
disable_checkpoint = true
provider_installation {
  filesystem_mirror {
    path = "$HOME/.terraform/providers"
  }
  direct {}
}
tflint (not relevant to this question, but it shows up in the below Dockerfile)
plugin "aws" {
enabled = true
version = "0.21.1"
source = "github.com/terraform-linters/tflint-ruleset-aws"
}
plugin "azurerm" {
enabled = true
version = "0.20.0"
source = "github.com/terraform-linters/tflint-ruleset-azurerm"
}
Dockerfile
FROM ghcr.io/terraform-linters/tflint-bundle AS base
LABEL name=tflint
RUN adduser -h /home/jenkins -s /bin/sh -u 1000 -D jenkins
RUN apk fix && apk --no-cache --update add git terraform openssh
ADD .terraformrc /home/jenkins/.terraformrc
RUN mkdir -p /home/jenkins/.terraform.d/plugin-cache/registry.terraform.io
ADD .tflint.hcl /home/jenkins/.tflint.hcl
WORKDIR /home/jenkins
RUN tflint --init
FROM base AS build
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh && \
echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_ed25519 && \
chmod 400 /root/.ssh/id_ed25519 && \
touch /root/.ssh/known_hosts && \
ssh-keyscan mygitrepo >> /root/.ssh/known_hosts
RUN git clone git@mygitrepo:wrai/tools/g.git
RUN git clone git@mygitrepo:myproject/a.git && \
    git clone git@mygitrepo:myproject/b.git && \
    git clone git@mygitrepo:myproject/c.git && \
    git clone git@mygitrepo:myproject/d.git && \
    git clone git@mygitrepo:myproject/e.git && \
    git clone git@mygitrepo:myproject/f.git
RUN ls -1d */ | xargs -I {} find {} -name '*.tf' | xargs -n 1 dirname | sort -u | \
xargs -I {} -n 1 -P 20 terraform -chdir={} providers mirror /home/jenkins/.terraform.d
RUN chown -R jenkins:jenkins /home/jenkins
USER jenkins
FROM base AS a
COPY --from=build /home/jenkins/a/ /home/jenkins/a
RUN cd /home/jenkins/a && terraform init
FROM base AS b
COPY --from=build /home/jenkins/b/ /home/jenkins/b
RUN cd /home/jenkins/b && terraform init
FROM base AS c
COPY --from=build /home/jenkins/c/ /home/jenkins/c
RUN cd /home/jenkins/c && terraform init
FROM base AS azure_infrastructure
COPY --from=build /home/jenkins/d/ /home/jenkins/d
RUN cd /home/jenkins/d && terraform init
FROM base AS aws_infrastructure
COPY --from=build /home/jenkins/e/ /home/jenkins/e
RUN cd /home/jenkins/e && terraform init
Staging plugins:
This is most easily accomplished with the plugin cache dir setting in the CLI. This supersedes the old usage with the -plugin-dir=PATH argument for the init command. You could also set a filesystem mirror in each terraform block within the root module config, but this would be cumbersome for your use case. In your situation, you are already configuring this in your .terraformrc, but the filesystem_mirror path conflicts with the plugin_cache_dir. You would want to resolve that conflict, or perhaps remove the mirror block entirely.
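For example, a trimmed .terraformrc that relies only on the plugin cache might look like this (a sketch, assuming the mirror block is removed as suggested):
plugin_cache_dir   = "$HOME/.terraform.d/plugin-cache"
disable_checkpoint = true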
Use staged plugins:
Since the setting is captured in the CLI configuration file within the Dockerfile, this would be automatically used in future commands.
Download additional plugins if necessary:
This is default behavior of the init command, and therefore requires no further actions on your part.
Side note:
The jenkins user typically has /sbin/nologin as its shell and /var/lib/jenkins as its home directory. If the purpose of this Docker image is a Jenkins build agent, then you may want the jenkins user configuration to be more aligned with that standard.
TL;DR:
Configure the terraform plugin cache directory
Create a directory with a single TF file containing a required_providers block
Run terraform init from there
...
I've stumbled over this question as I tried to figure out the same thing.
I first tried leveraging an implied filesystem_mirror by running terraform providers mirror /usr/local/share/terraform/plugins in a directory containing only one terraform file containing the required_providers block. This works fine as long as you only use the versions of the providers you mirrored.
However, it's not possible to use a different version of a provider than the one you mirrored, because:
Terraform will scan all of the filesystem mirror directories to see which providers are placed there and automatically exclude all of those providers from the implied direct block.
I've found it to be a better solution to use a plugin cache directory instead. EDIT: You can prefetch the plugins by setting TF_PLUGIN_CACHE_DIR to some directory and then running terraform init in a directory that only declares the required_providers.
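A sketch of that prefetch, assuming a directory containing only one file with a required_providers block (the provider name and version here are just examples):
# providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
Then run, from that directory:
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
mkdir -p "$TF_PLUGIN_CACHE_DIR"
terraform init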
Previously overengineered stuff below:
The only hurdle left was that terraform providers mirror downloads the providers in the packed layout:
Packed layout: HOSTNAME/NAMESPACE/TYPE/terraform-provider-TYPE_VERSION_TARGET.zip is the distribution zip file obtained from the provider's origin registry.
while Terraform expects the plugin cache directory to use the unpacked layout:
Unpacked layout: HOSTNAME/NAMESPACE/TYPE/VERSION/TARGET is a directory containing the result of extracting the provider's distribution zip file.
So I converted the packed layout to the unpacked layout with the help of find and parallel:
find path/to/plugin-dir -name index.json -exec rm {} +
find path/to/plugin-dir -name '*.json' | parallel --will-cite 'mkdir -p {//}/{/.}/linux_amd64; unzip {//}/*.zip -d {//}/{/.}/linux_amd64; rm {}; rm {//}/*.zip'
I'm trying to build a Docker image using DOCKER_BUILDKIT which involves cloning a private remote repository from GitLab, with the following lines of my Dockerfile being used for the git clone:
# Download public key for gitlab.com
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@gitlab.com:*name_of_repo* *download_location*
However, when I run the docker build command using:
DOCKER_BUILDKIT=1 docker build --ssh default --tag test:local .
I get the following error when it is trying to do the git clone:
git@gitlab.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've set up the ssh access successfully on the machine I'm trying to build this image on, and both ssh -T git@gitlab.com and cloning the repository outside of the Docker build work just fine.
I've had a look around but can't find any info on what might be causing this specific issue - any pointers much appreciated.
Make sure you have an SSH agent running and that you added your private key to it.
Depending on your platform, the commands may vary but since it's tagged gitlab I will assume that Linux is your platform.
Verify that you have an SSH agent running with echo $SSH_AUTH_SOCK or echo $SSH_AGENT_SOCK. If both echo an empty string, you most likely do not have an agent running.
To start an agent you can usually type:
eval `ssh-agent`
Next, you can verify what key are added (if any) with:
ssh-add -l
If the key you need is not listed, you can add it with:
ssh-add /path/to/your/private-key
Then you should be good to go.
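Putting those steps together, a typical session before the build might look like this (the key path is just an example):
eval `ssh-agent`
ssh-add ~/.ssh/id_ed25519
ssh-add -l
DOCKER_BUILDKIT=1 docker build --ssh default --tag test:local .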
More info here: https://www.ssh.com/academy/ssh/agent
Cheers
For testing, use a non-encrypted private SSH key (meaning you don't have to manage an ssh-agent, which is only needed for caching the passphrase of an encrypted private key).
And use ssh -Tv git#gitlab.com to check where SSH is looking for your key.
Then, in your Dockerfile, add before the line with git clone:
ENV GIT_SSH_COMMAND='ssh -Tv'
You will see again where Docker/SSH is looking when executing git clone with an SSH URL.
I suggested as much here, and there were some mounting folders missing then.
In my Dockerfile, I'm trying to pull a Python lib from a private repo:
RUN --mount=type=ssh .venv/bin/pip install SOME_LIB --extra-index-url https://example.com/pypi/ -U
Then I tried to run the build using the following command:
docker buildx build --ssh /path/to/the/private/key/id_rsa .
For some reason, it gave me the following error:
#0 0.831 Host key verification failed.
#0 0.831 fatal: Could not read from remote repository.
I've double checked the private key is correct. Did I miss any step to use --mount=type=ssh?
The error has nothing to do with your private key; it is "host key verification failed". That means that ssh doesn't recognize the key being presented by the remote host. Its default behavior is to ask if it should trust the host key, and when run in an environment where it can't prompt interactively, it will simply reject the key.
You have a few options to deal with this. In the following examples, I'll be cloning a GitHub private repository (so I'm interacting with github.com), but the process is the same for any other host to which you're connecting with ssh.
1. Inject a global known_hosts file when you build the image.
First, get the host key for the hosts to which you'll be connecting and save it alongside your Dockerfile:
$ ssh-keyscan github.com > known_hosts
Configure your Dockerfile to install this where ssh will find it:
COPY known_hosts /etc/ssh/ssh_known_hosts
RUN chmod 600 /etc/ssh/ssh_known_hosts; \
chown root:root /etc/ssh/ssh_known_hosts
2. Configure ssh to trust unknown host keys:
RUN sed -i '/^StrictHostKeyChecking/d' /etc/ssh/ssh_config; \
    echo StrictHostKeyChecking no >> /etc/ssh/ssh_config
3. Run ssh-keyscan in your Dockerfile when building the image:
RUN ssh-keyscan github.com > /etc/ssh/ssh_known_hosts
All three of these solutions will ensure that ssh trusts the remote host key. The first option is the most secure (the known hosts file will only be updated by you explicitly when you run ssh-keyscan locally). The last option is probably the most convenient.
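For example, the third option combined with an SSH build mount might look like this in a Dockerfile (the repository path is a placeholder):
RUN ssh-keyscan github.com > /etc/ssh/ssh_known_hosts
RUN --mount=type=ssh git clone git@github.com:example/private-repo.git /src/private-repo
built with something like: docker buildx build --ssh default .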
The question has two parts. The first part, how to add a root certificate, is simple; there are existing references such as How do I add a CA root certificate inside a docker image?
The second part, which is what I actually want to ask, is: how do I keep the root certificate only at docker build time?
Maybe we can use buildctl and RUN --mount=type=secret, but that cannot cover all cases.
Say I would like to access sites with a self-signed certificate, like:
RUN curl https://x01.self-signed-site/obj01
RUN npm install --registry https://x02.self-signed-site/npm
RUN pip install -i https://x03.self-signed-site/pypi/simple
RUN mvn install
...
Thus, we need to configure the certificate for each tool:
(prepare the certificate and prepare .npmrc, .curlrc, ...)
(for curl, npm, and pip we can use environment variables, but we cannot guarantee this approach works for other tools)
Therefore, we need to download the self-signed certificate into the image and also modify some files to apply the cert config. How do we keep these changes only at build time (no persistent layer in the final image)?
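For the --mount=type=secret idea mentioned above, a sketch could look like this (the secret id, file names, and paths are assumptions):
# syntax=docker/dockerfile:1
RUN --mount=type=secret,id=selfsigned_ca,target=/tmp/selfsigned-ca.pem \
    curl --cacert /tmp/selfsigned-ca.pem https://x01.self-signed-site/obj01
built with:
DOCKER_BUILDKIT=1 docker build --secret id=selfsigned_ca,src=./selfsigned-ca.pem .
The certificate is only visible during that RUN step and leaves no layer behind, but it has to be wired up per tool (curl here), which is exactly the limitation described above.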
We resolved this problem by using docker save and docker load, but currently docker load does not work as we expect (see also: how to keep layers when doing docker load).
Anyway, below is our solution in pseudo-code:
docker save -o out.tar <image>
mkdir contents && cd contents
tar xf ../out.tar
open manifest.json, get config <hash>.json as config.json
remove target layers in:
- config.json[history]
- config.json[rootfs][diff_ids]
- manifest.json[0][Layers]
remove layer tarballs (get layer_hashes from manifest.json[0][Layers]):
- <layer_hash>/*
fill gap between missing layers:
- <layer_hash_next>/json[parent] = <layer_hash_prev>
tar cf ../new.tar *
docker rmi <image>
docker load -i ../new.tar
ref: https://github.com/stallpool/track-network-traffic/blob/main/bin/docker_image_cleanup.py
In Docker, how to cope with the requirement of configuring known_hosts, authorized_keys, and ssh connectivity in general, when containers have to talk to external systems?
For example, I'm running a Jenkins container and trying to check out a project from GitHub in a job, but the connection fails with the error "host key verification failed".
This could be solved by logging into the container, connecting to GitHub manually, and trusting the host key when prompted. However, this isn't a proper solution, as everything needs to be 100% automated (I'm building a CI pipeline with Ansible and Docker). Another (clunky) solution would be to provision the running container with Ansible, but this would make things messy and hard to maintain. The Jenkins container doesn't even have an ssh daemon, and I'm not sure how to ssh into the container from another host. A third option would be to use my own Dockerfile extending the jenkins image, where ssh is configured, but that would be hardcoding and locking the container to this specific environment.
So what is the correct way with docker to manage (and automate) connectivity with external systems?
To trust the github.com host you can issue this command when you start or build your container:
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
This will add GitHub's public key to your known_hosts file.
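If you want it baked into the image instead, a minimal sketch of a Dockerfile extending the Jenkins image (the home path assumes the official image's jenkins user):
FROM jenkins/jenkins:lts
RUN mkdir -p /var/jenkins_home/.ssh && \
    ssh-keyscan -t rsa github.com >> /var/jenkins_home/.ssh/known_hosts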
If everything is done in the Dockerfile it's easy.
In my Dockerfile:
ARG PRIVATE_SSH_KEY
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh && \
ssh-keyscan example.com > /root/.ssh/known_hosts && \
# Add the keys and set permissions
echo "$PRIVATE_SSH_KEY" > /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa
...do stuff with private key
# Remove SSH keys
RUN rm -rf /root/.ssh/
You obviously need to pass the private key as an argument to the build (docker-compose build or docker build).
One solution is to mount host's ssh keys into docker with following options:
docker run -v /home/<host user>/.ssh:/home/<docker user>/.ssh <image>
This works perfectly for git.
There is a small trick, but your git version should be > 2.3:
export GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
git clone git@gitlab.com:some/another/repo.git
or simply
GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" git clone git#...
you can even point to private key file path like this:
GIT_SSH_COMMAND="ssh -i /path/to/private_key_file -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" git clone git#...
This is how I do it, not sure if you will like this solution though. I have a private git repository containing authorized_keys with a collection of public keys. Then, I use ansible to clone this repository and replace authorized_keys:
- git: repo=my_repo dest=my_local_folder force=yes accept_hostkey=yes
- shell: "cp my_local_folder/authorized_keys ~/.ssh/"
Using accept_hostkey is what actually allows me to automate the process (I trust the source, of course).
Try this:
Log into the host, then:
sudo mkdir /var/jenkins_home/.ssh/
sudo ssh-keyscan -t rsa github.com >> /var/jenkins_home/.ssh/known_hosts
The Jenkins container sets its home location to the persistent mapping; as such, running this on the host system will produce the required result.
Detailed answer to the one provided by @Konstantin Suvorov, if you are going to use a Dockerfile.
In my Dockerfile I just added:
# Copy the rsa key and make it accessible
COPY my_rsa /root/.ssh/my_rsa
RUN chmod 600 /root/.ssh/my_rsa
# Install openssh
RUN apt-get -y install openssh-server
# Add the hostname to known_hosts
RUN ssh-keyscan my_hostname >> ~/.ssh/known_hosts
Note that "my_hostname" and "my_rsa" are your host-name and your rsa key
This made ssh work in docker without any issues, so I could connect to DBs