Dockerfile `RUN --mount=type=ssh` doesn't seem to work

In my Dockerfile, I'm trying to pull a Python lib from a private repo:
RUN --mount=type=ssh .venv/bin/pip install SOME_LIB --extra-index-url https://example.com/pypi/ -U
Then I tried to run the build using the following command:
docker buildx build --ssh /path/to/the/private/key/id_rsa .
For some reason, it gave me the following error:
#0 0.831 Host key verification failed.
#0 0.831 fatal: Could not read from remote repository.
I've double checked the private key is correct. Did I miss any step to use --mount=type=ssh?

The error has nothing to do with your private key; it is "Host key verification failed". That means ssh doesn't recognize the key presented by the remote host. Its default behavior is to ask whether it should trust the host key, and when run in an environment where it can't prompt interactively, it will simply reject the key.
You have a few options to deal with this. In the following examples, I'll be cloning a GitHub private repository (so I'm interacting with github.com), but the process is the same for any other host to which you're connecting with ssh.
Option 1: Inject a global known_hosts file when you build the image.
First, get the host key for the hosts to which you'll be connecting and save it alongside your Dockerfile:
$ ssh-keyscan github.com > known_hosts
Then configure your Dockerfile to install this file where ssh will find it:
# A global known_hosts file must be readable by every user that runs ssh
COPY known_hosts /etc/ssh/ssh_known_hosts
RUN chmod 644 /etc/ssh/ssh_known_hosts; \
    chown root:root /etc/ssh/ssh_known_hosts
Option 2: Configure ssh to trust unknown host keys:
RUN sed -i '/^StrictHostKeyChecking/d' /etc/ssh/ssh_config; \
    echo StrictHostKeyChecking no >> /etc/ssh/ssh_config
Option 3: Run ssh-keyscan in your Dockerfile when building the image:
RUN ssh-keyscan github.com > /etc/ssh/ssh_known_hosts
All three of these solutions will ensure that ssh trusts the remote host key. The first option is the most secure (the known hosts file will only be updated by you explicitly when you run ssh-keyscan locally). The last option is probably the most convenient.
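For example, combining the third option with the pip install from your question (hosts and package names are the question's placeholders), the relevant Dockerfile lines could look like:
# Trust the remote host's key before anything in the build uses ssh
RUN ssh-keyscan github.com > /etc/ssh/ssh_known_hosts
# Forward the ssh key only into this build step
RUN --mount=type=ssh .venv/bin/pip install SOME_LIB --extra-index-url https://example.com/pypi/ -U
built with:
docker buildx build --ssh default=/path/to/the/private/key/id_rsa .
Note the default= id on --ssh: a RUN --mount=type=ssh uses the forwarded key with id default unless you name a different one.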

Related

ssh key in Dockerfile returning Permission denied (publickey)

I'm trying to build a Docker image using DOCKER_BUILDKIT which involves cloning a private remote repository from GitLab, with the following lines of my Dockerfile being used for the git clone:
# Download public key for gitlab.com
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@gitlab.com:*name_of_repo* *download_location*
However, when I run the docker build command using:
DOCKER_BUILDKIT=1 docker build --ssh default --tag test:local .
I get the following error when it is trying to do the git clone:
git@gitlab.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've set up the ssh access successfully on the machine I'm trying to build this image on, and both ssh -T git@gitlab.com and cloning the repository outside of the Docker build work just fine.
I've had a look around but can't find any info on what might be causing this specific issue - any pointers much appreciated.
Make sure you have an SSH agent running and that you added your private key to it.
Depending on your platform, the commands may vary but since it's tagged gitlab I will assume that Linux is your platform.
Verify that you have an SSH agent running with echo $SSH_AUTH_SOCK or echo $SSH_AGENT_SOCK. If both echo an empty string, you most likely do not have an agent running.
To start an agent you can usually type:
eval `ssh-agent`
Next, you can verify which keys are added (if any) with:
ssh-add -l
If the key you need is not listed, you can add it with:
ssh-add /path/to/your/private-key
Then you should be good to go.
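Putting those steps together (the key path is an assumption; use whichever key GitLab knows about):
eval `ssh-agent`                # start an agent for this shell
ssh-add ~/.ssh/id_rsa           # add your private key
ssh-add -l                      # confirm it is listed
DOCKER_BUILDKIT=1 docker build --ssh default --tag test:local .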
More info here: https://www.ssh.com/academy/ssh/agent
Cheers
For testing, use an unencrypted private SSH key (so you don't have to manage an ssh-agent, which is only needed to cache the passphrase of an encrypted private key).
And use ssh -Tv git@gitlab.com to check where SSH is looking for your key.
Then, in your Dockerfile, add before the line with git clone:
ENV GIT_SSH_COMMAND='ssh -Tv'
You will see again where Docker/SSH is looking when executing git clone with an SSH URL.
I suggested as much here, and in that case some mounted folders turned out to be missing.

go get fails with private bitbucket repositories, getting server response: Access denied and 404 error

I'm using go get for a private Bitbucket repo during a Docker build, but I always get a 403 Forbidden / Access denied error, like below.
go: missing Mercurial command. See https://golang.org/s/gogetcmd
go: missing Mercurial command. See https://golang.org/s/gogetcmd
go get bitbucket.org/Mycompany/app-client: reading https://api.bitbucket.org/2.0/repositories/Mycompany/app-client?fields=scm: 403 Forbidden
server response: Access denied. You must have write or admin access.
The command '/bin/sh -c go get bitbucket.org/Mycompany/app-client' returned a non-zero code: 1
I have added the following to my Dockerfile, and also added the jenkins user's id_rsa.pub to Bitbucket.
ARG ssh_prv_key
ARG ssh_pub_key
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan github.com > /root/.ssh/known_hosts
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
RUN echo " IdentityFile /var/lib/jenkins/.ssh/id_rsa " >> /etc/ssh/ssh_config
RUN git config --global user.email "admin@Mycompany"
RUN git config --global user.name admin
RUN echo " IdentityFile ~/.ssh/id_rsa" >> /etc/ssh/ssh_config
Then I build with:
docker build -t example --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" --squash .
From an Ubuntu VM this works correctly; I only get this issue inside Docker.
When you ADD id_rsa /root/.ssh/id_rsa, ensure you're using the root user for the subsequent steps that need SSH.
Also, ensure the Bitbucket repository doesn't have IP filtering enabled, or that it allows the right IPs.
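Beyond that, two details in the Dockerfile above stand out: it keyscans github.com although the repo is on bitbucket.org, and go get talks HTTPS to Bitbucket by default, which fails for private repos. A sketch of the usual SSH-based workaround (the repo path is taken from your error message; GOPRIVATE assumes a modules-aware Go toolchain, 1.13+):
# Trust Bitbucket's host key, not GitHub's
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# Rewrite HTTPS Bitbucket URLs to SSH so the key is actually used
RUN git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"
# Keep the go tool away from the public proxy/checksum DB for these paths
ENV GOPRIVATE=bitbucket.org/Mycompany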

ssh to deploy machine and running python file gives error not found even if it exists

I have changed the password of the user (let's call him staging_user) that gitlab-runner uses to log in to the machine running the staging server,
and in the .gitlab-ci.yml
staging_deploy:
  stage: deploy
  variables:
    SSH_EXEC: "ssh staging_user@staging_server"
    DEPLOY_PATH: "/home/staging_user/project_site"
  only:
    - staging
  script:
    - ${SSH_EXEC} "if [ -d ${DEPLOY_PATH} ]; then \rm -r ${DEPLOY_PATH}/*; else mkdir -p ${DEPLOY_PATH}; fi"
    - echo -e ${GITSSHKEY} > conf/.ssh/id_rsa
    - scp -r * staging_user@staging_server:/home/staging_user/project_site/
    - ${SSH_EXEC} "cd ${DEPLOY_PATH}/; docker-compose build --no-cache --force-rm; docker-compose up -d"
    - ${SSH_EXEC} "docker exec website_staging python /var/www/website.com/src/manage.py collectstatic --no-input"
The gitlab-runner runs on the git01 machine, and from there it SSHes to staging_server as staging_user (see the SSH_EXEC value above).
I have noticed that GITSSHKEY is a variable stored on the GitLab project under gitlab.com/test_group/project_site/settings/ci_cd. So I believe I need to update this SSH key, but I'm a bit confused about where to run ssh-keygen to generate the new key to paste there: on git01, from where gitlab-runner is ssh'ing, or on the staging_server machine.
I am getting this error:
Service 'web' failed to build: error pulling image configuration: Get https://dseasb33srnrn.cloudfront.net/registry-v2/docker/registry/v2/blobs/sha256/0a/0a2bad7da9b55f3121f0829de061f002ef059717fc2ed23c135443081200000e/data?Expires=1526503430&Signature=LZNRPPcqYzFoeE94jHgdxyN7gONaewh3ZF2688IVPhrOFKt-DB20gcSZIytqiDff8Hk7CS60SFKoROkU4VWMroByNqAcrFeMJGEAG-GKSSLXKPqQUsxYeXyW5rRGGbC8CqARQKsj1GBR-fTvRstcrnfhQVrn9gv~IFtqRXNB-LM_&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q: net/http: TLS handshake timeout
website_web_1 is up-to-date
$ ${SSH_EXEC} "ls -lh /var/www/website.com/src/manage.py"
-rw-rw-r-- 1 staging_user staging_user 280 May 15 16:26 /var/www/website.com/src/manage.py
$ ${SSH_EXEC} "docker exec website_web_1 python /var/www/website.com/src/manage.py collectstatic --no-input"
python: can't open file '/var/www/website.com/src/manage.py': [Errno 2] No such file or directory
Note regarding the initial question: changing the password should not impact an ssh key, since key-based authentication relies on the public key being present in the remote server's ~staging_user/.ssh/authorized_keys.
Generating a new ssh key is done on the source machine (the one which will initiate the ssh connection to the remote machine), and you would need to deploy the public key to the remote ~staging_user/.ssh/authorized_keys file first.
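Concretely, on git01 (the machine gitlab-runner runs on) that would look something like:
ssh-keygen -t rsa                          # generate a new key pair on git01
ssh-copy-id staging_user@staging_server    # append the public key to ~staging_user/.ssh/authorized_keys
Then update the GITSSHKEY variable under settings/ci_cd with the new private key, so the echo -e step in the pipeline writes a matching key.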
After discussion, the OP Ciasto piekarz states in the comments:
I have discovered that if the container is already running then we get this error, but if we stop the running container and update the branch for gitlab-runner to run the pipeline then the deployment goes successful

Docker: adding rsa keys to image from outside of build context

So I want to include an rsa key in my image so I can clone a git repo into the image while it's building. But I really don't want to keep this key in the docker build repo. Does anyone have a good recommendation on how to handle this? From the docker documentation and various other threads it seems that there is no way to COPY files from outside of the build context. Apart from the following solutions that I am not interested in using:
How to include files outside of Docker's build context?
Is there a better solution to this? Or am I going to have to either keep the key in the build repo or build from the location of the rsa key I want to use?
I guess a possible way to do it would be to gitignore the key from the build repo and just put it in whenever I clone it, and make a note of it in the readme so other developers know to do this too.
--- My Solution ---
I don't think there is a "correct" answer for this but here was the solution I went with.
I create a linux user (somewhere) and generate a key for it. Then I create a user on gitlab with only repo-cloning rights and add the linux user's public key to that gitlab user. Then for the build I create the .ssh folder and copy in the user's private key along with a config file. I just store that user's key in the docker build repo.
build steps:
RUN mkdir ~/.ssh
RUN touch ~/.ssh/known_hosts
RUN ssh-keyscan -t rsa gitlab_host > ~/.ssh/known_hosts
COPY ./ssh/config /root/.ssh
COPY ./ssh/id_rsa_app /root/.ssh
RUN chmod 600 /root/.ssh/id_rsa_app
ssh config file:
Host gitlab-app
    HostName gitlab_host
    IdentityFile /root/.ssh/id_rsa_app
    IdentitiesOnly yes
Now the git clone works inside of the build.
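For reference, with that Host alias the clone inside the build targets the alias rather than the real hostname (the repo path here is hypothetical):
RUN git clone git@gitlab-app:mygroup/myapp.git /app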
What about using a build argument? Do something like this in your Dockerfile:
ARG rsakey
RUN test -n "${rsakey}" && { \
    mkdir -p -m 700 /root/.ssh; \
    echo "${rsakey}" > /root/.ssh/id_rsa; \
    chmod 600 /root/.ssh/id_rsa; \
  } || :
Then, when you build the image, use the --build-arg option:
docker build -t sshtest --build-arg rsakey="$(cat /path/to/id_rsa)" .
This will inject the key into the image at build time without requiring it to live in your build context.
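One caveat: plain build args are not secret. The value is recorded in the image metadata, so anyone with the image can recover the key with:
docker history --no-trunc sshtest
If that matters for your image, the BuildKit --mount=type=ssh approach from the first question above avoids persisting the key entirely.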

Docker, how to deal with ssh keys, known_hosts and authorized_keys

In docker, how do you cope with the requirement of configuring known_hosts, authorized_keys and ssh connectivity in general, when a container has to talk to external systems?
For example, I'm running a Jenkins container and trying to check out a project from GitHub in a job, but the connection fails with the error: Host key verification failed.
This could be solved by logging into the container, connecting to github manually, and trusting the host key when prompted. However, this isn't a proper solution, as everything needs to be 100% automated (I'm building a CI pipeline with ansible and docker). Another (clunky) solution would be to provision the running container with ansible, but this would make things messy and hard to maintain. The Jenkins container doesn't even have an ssh daemon, and I'm not sure how I would ssh into the container from another host. A third option would be to use my own Dockerfile extending the jenkins image, with ssh configured, but that would be hardcoding and would lock the container to this specific environment.
So what is the correct way with docker to manage (and automate) connectivity with external systems?
To trust the github.com host you can issue this command when you start or build your container:
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
This will add GitHub's public key to your known_hosts file.
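For a container that is already running, a hedged variant is to run the same scan through docker exec (the container name here is hypothetical):
docker exec my_jenkins sh -c 'mkdir -p ~/.ssh && ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts'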
If everything is done in the Dockerfile it's easy.
In my Dockerfile:
ARG PRIVATE_SSH_KEY
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan example.com > /root/.ssh/known_hosts && \
    # Add the keys and set permissions
    echo "$PRIVATE_SSH_KEY" > /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa
...do stuff with private key
# Remove SSH keys
RUN rm -rf /root/.ssh/
You obviously need to pass the private key as a build argument (to docker-compose build or docker build).
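For example, with plain docker build (the key path and image tag are assumptions):
docker build --build-arg PRIVATE_SSH_KEY="$(cat ~/.ssh/id_rsa)" -t myimage .
Keep in mind that the final RUN rm -rf /root/.ssh/ only removes the key from the top layer; the layer that wrote it, and the build-arg value recorded in the image history, still contain it, so treat images built this way as sensitive.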
One solution is to mount host's ssh keys into docker with following options:
docker run -v /home/<host user>/.ssh:/home/<docker user>/.ssh <image>
This works perfectly for git.
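If the container only needs to read the keys, mounting them read-only is slightly safer (this assumes the container user's uid can read the host files):
docker run -v /home/<host user>/.ssh:/home/<docker user>/.ssh:ro <image>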
There is a small trick, but your git version should be 2.3 or newer:
export GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
git clone git@gitlab.com:some/another/repo.git
or simply
GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" git clone git#...
you can even point to private key file path like this:
GIT_SSH_COMMAND="ssh -i /path/to/private_key_file -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" git clone git#...
This is how I do it, not sure if you will like this solution though. I have a private git repository containing authorized_keys with a collection of public keys. Then, I use ansible to clone this repository and replace authorized_keys:
- git: repo=my_repo dest=my_local_folder force=yes accept_hostkey=yes
- shell: "cp my_local_folder/authorized_keys ~/.ssh/"
Using accept_hostkey is what actually allows me to automate the process (I trust the source, of course).
Try this:
Log into the host, then:
sudo mkdir /var/jenkins_home/.ssh/
sudo ssh-keyscan -t rsa github.com >> /var/jenkins_home/.ssh/known_hosts
The Jenkins container maps its home directory to a persistent volume on the host, so running these commands on the host system produces the required result inside the container.
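One extra step is often needed: the official jenkins image runs as the jenkins user (uid 1000), while the commands above create the files as root, so hand them over afterwards:
sudo chown -R 1000:1000 /var/jenkins_home/.ssh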
Detailed answer to the one provided by @Konstantin Suvorov, if you are going to use a Dockerfile.
In my Dockerfile I just added:
# Copy the rsa key and make it accessible only to root
COPY my_rsa /root/.ssh/my_rsa
RUN chmod 600 /root/.ssh/my_rsa
# Install openssh (provides ssh and ssh-keyscan)
RUN apt-get -y install openssh-server
# Add the hostname to known_hosts
RUN ssh-keyscan my_hostname >> ~/.ssh/known_hosts
(The comments sit on their own lines because an inline comment after COPY is treated as extra arguments and breaks the build.)
Note that "my_hostname" and "my_rsa" are your host-name and your rsa key
This made ssh work in docker without any issues, so I could connect to DBs
