SSH agent forwarding during docker build - docker

While building a docker image from a Dockerfile, I have to clone a GitHub repo. I added my public ssh key to my GitHub account and I am able to clone the repo from my docker host. I see that I can use the docker host's ssh agent by mapping the $SSH_AUTH_SOCK env variable at docker run time, like
docker run --rm -it --name container_name \
-v $(dirname $SSH_AUTH_SOCK):$(dirname $SSH_AUTH_SOCK) \
-e SSH_AUTH_SOCK=$SSH_AUTH_SOCK my_image
How can I do the same during a docker build?

For Docker 18.09 and newer
You can use new features of Docker to forward your existing SSH agent connection or a key to the builder. This enables, for example, cloning your private repositories during the build.
Steps:
First, set an environment variable to enable the new BuildKit builder:
export DOCKER_BUILDKIT=1
Then create a Dockerfile with the new (experimental) syntax:
# syntax=docker/dockerfile:experimental
FROM alpine
# install ssh client and git
RUN apk add --no-cache openssh-client git
# download public key for github.com
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# clone our private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject
And build the image with
docker build --ssh default .
Read more about it here: https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
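If the key is not loaded into a running agent, BuildKit can also forward a specific key file under a named id; the id `github` and the key path below are example choices, not fixed names:

```shell
# In the Dockerfile, reference the key by id:
#   RUN --mount=type=ssh,id=github git clone git@github.com:myorg/myproject.git
# Then pass that key at build time (path is an example):
docker build --ssh github=$HOME/.ssh/id_rsa .
```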

Unfortunately, you cannot forward your ssh socket to the build container since build time volume mounts are currently not supported in Docker.
This has been a topic of discussion for quite a while now, see the following issues on GitHub for reference:
https://github.com/moby/moby/issues/6396
https://github.com/moby/moby/issues/14080
As you can see this feature has been requested multiple times for different use cases. So far the maintainers have been hesitant to address this issue because they feel that volume mounts during build would break portability:
the result of a build should be independent of the underlying host
As outlined in this discussion.

This may be solved using an alternative build script. For example, create a bash script and put it in /usr/local/bin/docker-compose (or your favourite location):
#!/bin/bash
trap 'kill $(jobs -p)' EXIT
socat TCP-LISTEN:56789,reuseaddr,fork UNIX-CLIENT:${SSH_AUTH_SOCK} &
/usr/bin/docker-compose "$@"
Then in your Dockerfile you would use your existing ssh socket:
...
ENV SSH_AUTH_SOCK /tmp/auth.sock
...
&& apk add --no-cache socat openssh \
&& /bin/sh -c "socat -v UNIX-LISTEN:${SSH_AUTH_SOCK},unlink-early,mode=777,fork TCP:172.22.1.11:56789 >/dev/null 2>&1 &" \
&& bundle install \
...
and any other ssh commands will work.
Now you can call your custom docker-compose build. It calls the actual docker-compose with a shared ssh socket.
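The hard-coded 172.22.1.11 address depends on your compose network; one way to look up the gateway address containers can use to reach host-bound listeners (assuming the default bridge network) is:

```shell
# Print the gateway IP of the default bridge network; containers can
# usually reach listeners bound on the host at this address.
docker network inspect bridge \
  --format '{{ (index .IPAM.Config 0).Gateway }}'
```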

This one is also interesting:
https://github.com/docker/for-mac/issues/483#issuecomment-344901087
It looks like:
On the host
mkfifo myfifo
nc -lk 12345 <myfifo | nc -U $SSH_AUTH_SOCK >myfifo
In the dockerfile
RUN mkfifo myfifo
RUN while true; do \
      nc 172.17.0.1 12345 <myfifo | nc -Ul /tmp/ssh-agent.sock >myfifo; \
    done &
ENV SSH_AUTH_SOCK /tmp/ssh-agent.sock
RUN ssh ...

Related

docker volume masks parent folder in container?

I'm trying to use a Docker container to build a project that uses rust; I'm trying to build as my user. I have a Dockerfile that installs rust in $HOME/.cargo, and then I docker run the container, mapping the sources from $HOME/<some/subdirs/to/project> on the host to the same subfolder in the container. The Dockerfile looks like this:
FROM ubuntu:16.04
ARG RUST_VERSION
RUN \
export DEBIAN_FRONTEND=noninteractive && \
apt-get update && \
# install library dependencies
apt-get install [... a bunch of stuff ...] && \
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION && \
echo 'source $HOME/.cargo/env' >> $HOME/.bashrc && \
echo apt-get DONE
The build container is run something like this:
docker run -i -t -d --net host --privileged \
  -v /mnt:/mnt -v /dev:/dev \
  --volume /home/stefan/<path/to/project>:/home/stefan/<path/to/project>:rw \
  --workdir /home/stefan/<path/to/project> \
  -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro \
  -v /etc/shadow:/etc/shadow:ro \
  -u 1000 --name <container-name> <image-name>
And then I try to exec into it and run the build script, but it can't find rust or $HOME/.cargo:
docker exec -it <container-name> bash
$ ls ~/.cargo
ls: cannot access '/home/stefan/.cargo': No such file or directory
It looks like the /home/stefan/<path/to/project> volume is masking the contents of /home/stefan in the container. Is this expected? Is there a workaround possible to be able to map the source code from a folder under $HOME on the host, but keep $HOME from the container?
I'm on Ubuntu 18.04, docker 19.03.12, on x86-64.
The Dockerfile is evaluated at build time, where the build runs as root; your host user doesn't exist inside the image.
Try changing $HOME to /root:
echo 'source /root/.cargo/env' >> /root/.bashrc && \
I'll post this as an answer, since I seem to have figured it out.
When the Dockerfile is expanded, $HOME is /root, and the user is root. I couldn't find a way to reliably introduce my user in the build step / Dockerfile. I tried something like:
ARG BUILD_USER
ARG BUILD_GROUP
RUN mkdir /home/$BUILD_USER
ENV HOME=/home/$BUILD_USER
USER $BUILD_USER:$BUILD_GROUP
RUN \
echo "HOME is $HOME" && \
[...]
But didn't get very far, because inside the container, the user doesn't exist:
unable to find user stefan: no matching entries in passwd file
So what I ended up doing was to docker run as my user, and run the rust install from there - that is, from the script that does the actual build.
I also realized why writing to /home/$USER doesn't work - there is no /home/$USER in the container; mapping /etc/passwd and /etc/group into the container teaches it about the user, but does not create any directory. I could've mapped $HOME from the host, but then the container would control the rust versions on the host, and would not be that self-contained. I also ended up needing to install rust in a non-standard location, since I don't have a writable $HOME in the container: I had to set CARGO_HOME and RUSTUP_HOME to do that.
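For reference, a sketch of the build-time-user approach that avoids the "unable to find user" error: `useradd` writes the passwd entry and creates the home directory inside the image itself, so neither /etc/passwd mapping nor a host $HOME mount is needed. The user name, UID, and build args here are hypothetical examples:

```dockerfile
FROM ubuntu:16.04
# Hypothetical build args; pass e.g. --build-arg BUILD_UID=$(id -u)
ARG BUILD_USER=builder
ARG BUILD_UID=1000
# useradd creates both the passwd entry and /home/$BUILD_USER
RUN useradd --create-home --uid "$BUILD_UID" "$BUILD_USER"
USER $BUILD_USER
ENV HOME=/home/$BUILD_USER
```

The trade-off is that the UID is baked in at build time, so the image is specific to one host user unless rebuilt.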

Dockerfile to add ssh key of container and host

I want to make a container ssh into the host without asking for the password. For this, I need to save the ssh key. I have following dockerfile:
FROM easypi/alpine-arm
RUN apk update && apk upgrade
RUN apk add openssh
RUN ssh-keygen -f /root/.ssh/id_rsa
RUN ssh-copy-id -i /root/.ssh/id_rsa user@<ip address of host>
But the problem is that the IP address is not constant, so if I use the same image on some other machine it won't work there. How can I resolve this issue?
Thanks
None of these things should be in your Dockerfile. Putting an ssh private key in your Dockerfile is especially dangerous, since anyone who has your image can almost trivially get the key out.
Also consider that it's unusual to make either inbound or outbound ssh connections from a Docker container at all; they are usually self-contained, all of the long-term state should be described by the Dockerfile and the source control repository in which it lives, and they conventionally run a single server-type process which isn't sshd.
That all having been said: if you really want to do this, the right way is to build an image that expects the .ssh directory to be injected from the host and takes the outbound IP address as a parameter of some sort. One way is to write a shell script that's "the single thing the container does":
#!/bin/sh
usage() {
  echo "Usage: docker run --rm -it -v ...:$HOME/.ssh myimage $0 10.20.30.40"
}
if [ -z "$1" ]; then
  usage >&2
  exit 1
fi
if [ ! -d "$HOME/.ssh" ]; then
  usage >&2
  exit 1
fi
exec ssh "user@$1"
Then build this into your Dockerfile:
FROM easypi/alpine-arm
RUN apk update \
&& apk upgrade \
&& apk add openssh
COPY ssh_user.sh /usr/bin
CMD ["/usr/bin/ssh_user.sh"]
Now on the host generate the ssh key pair
mkdir some_ssh
ssh-keygen -f some_ssh/id_rsa
ssh-copy-id -i some_ssh/id_rsa user@10.20.30.40
sudo chown root some_ssh
And then inject that into the Docker container at runtime
sudo docker run --rm -it \
-v $PWD/some_ssh:/root/.ssh \
my_image \
ssh_user.sh 10.20.30.40
(I'm pretty sure the outbound ssh connection will complain if the bind-mounted .ssh directory isn't owned by the same numeric user ID that's running the process; hence the chown root above. Also note that you're setting up a system where you have to have root permissions on the host to make a simple outbound ssh connection, which feels a little odd from a security perspective. [Consider that you could put any directory into that -v option and run an interactive shell.])

Enabling ssh at docker build time

Docker version 17.11.0-ce, build 1caf76c
I need to run Ansible to build & deploy some java projects to wildfly during docker build time, so that when I run the docker image I have everything set up. However, Ansible needs ssh to localhost. So far I was unable to make it work. I've tried different docker images and now I ended up with phusion (https://github.com/phusion/baseimage-docker#login_ssh). What I have atm:
FROM phusion/baseimage
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd -d &
RUN ssh -tt root@127.0.0.1
CMD ["/bin/bash"]
But I still get
Step 11/12 : RUN ssh -tt root@127.0.0.1
---> Running in cf83f9906e55
ssh: connect to host 127.0.0.1 port 22: Connection refused
The command '/bin/sh -c ssh -tt root@127.0.0.1' returned a non-zero code: 255
Any suggestions what could be wrong? Is it even possible to achieve that?
RUN /usr/sbin/sshd -d &
That will run a process in the background using a shell. As soon as the shell that started the process returns from running the background command, it exits with no more input, and the container used for that RUN command terminates. The only thing saved from a RUN is the change to the filesystem. You do not save running processes, environment variables, or shell state.
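This per-RUN isolation can be demonstrated without Docker at all: two separate `sh -c` invocations behave exactly like two RUN lines, since each is a fresh shell (a minimal illustration, not Docker itself):

```shell
# Each `sh -c` below is a fresh shell, just as each RUN line in a
# Dockerfile is a fresh shell in a fresh container: nothing exported
# (or started) in the first survives into the second.
sh -c 'export DEMO_VAR_XYZ=set_in_first_shell'
sh -c 'echo "DEMO_VAR_XYZ is: ${DEMO_VAR_XYZ:-unset}"'   # prints "DEMO_VAR_XYZ is: unset"
```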
Something like this may work, but you may also need a sleep command to give sshd time to finish starting.
RUN /usr/sbin/sshd -d & \
    ssh -tt root@127.0.0.1
I'd personally look for another way to do this without sshd during the build. This feels very kludgy and error prone.
There are multiple problems in that Dockerfile
First of all, you can't run a background process in one RUN statement and expect that process to still be there in another RUN. Each statement of a Dockerfile runs in a different container, so processes don't persist between them.
Another issue was that 127.0.0.1 was not in known_hosts.
And finally, you must give sshd some time to start.
Here is a working Dockerfile:
FROM phusion/baseimage
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN printf "Host 127.0.0.1\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd & sleep 5 && ssh -tt root@127.0.0.1 'ls -al'
CMD ["/bin/bash"]
Anyway, I would rather find another solution than provisioning your image with Ansible in the Dockerfile. Check out ansible-container.

Run Omnet++ inside docker with x11 forwarding on windows. SSH not working

Cannot ssh into container running on Windows hostmachine
For a university project I built a docker image containing Omnet++ to provide a consistent development environment.
The image uses phusion's baseimage and sets up x11 forwarding via SSH like rogaha did in his docker-desktop image.
The image works perfectly fine on a Linux host system, but on Windows and OS X I was unable to ssh into the container from the host machine.
I reckon this is due to the different implementation of Docker on Windows and OS X. As explained in this article by Microsoft, Docker uses a NAT network for containers by default to separate the networks of host and containers.
My problem is I don't know how to reach the running container via ssh.
I already tried the following:
Change the container network to a transparent network as described in the Microsoft article. The following error occurs on both Windows and OS X:
docker network create -d transparent MyTransparentNetwork
Error response from daemon: legacy plugin: plugin not found
On Windows run Docker in Virtualbox instead of Hyper-V
Explicitly expose port 22 like this:
docker run -p 52022:22 containerName
ssh -p 52022 root#ContainerIP
Dockerfile
FROM phusion/baseimage:latest
MAINTAINER Robin Finkbeiner
LABEL Description="Docker image for Nesting Stupro University of Stuttgart containing full omnet 5.1.1"
# Install dependencies
RUN apt-get update && apt-get install -y \
    xpra \
    rox-filer \
    openssh-server \
    pwgen \
    xserver-xephyr \
    xdm \
    fluxbox \
    sudo \
    git \
    xvfb \
    wget \
    build-essential \
    gcc \
    g++ \
    bison \
    flex \
    perl \
    qt5-default \
    tcl-dev \
    tk-dev \
    libxml2-dev \
    zlib1g-dev \
    default-jre \
    doxygen \
    graphviz \
    libwebkitgtk-3.0-0 \
    libqt4-opengl-dev \
    openscenegraph-plugin-osgearth \
    libosgearth-dev \
    openmpi-bin \
    libopenmpi-dev
# Set the env variable DEBIAN_FRONTEND to noninteractive
ENV DEBIAN_FRONTEND noninteractive
#Enabling SSH -- from phusion baseimage documentation
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
# Copied command from https://github.com/rogaha/docker-desktop/blob/master/Dockerfile
# Configuring xdm to allow connections from any IP address and ssh to allow X11 Forwarding.
RUN sed -i 's/DisplayManager.requestPort/!DisplayManager.requestPort/g' /etc/X11/xdm/xdm-config
RUN sed -i '/#any host/c\*' /etc/X11/xdm/Xaccess
RUN ln -s /usr/bin/Xorg
RUN echo X11Forwarding yes >> /etc/ssh/ssh_config
# OMnet++ 5.1.1
# Create working directory
RUN mkdir -p /usr/omnetpp
WORKDIR /usr/omnetpp
# Fetch Omnet++ source
RUN wget https:******omnetpp-5.1.1-src-linux.tgz
RUN tar -xf omnetpp-5.1.1-src-linux.tgz
# Path
ENV PATH $PATH:/usr/omnetpp/omnetpp-5.1.1/bin
# Configure and compile
RUN cd omnetpp-5.1.1 && \
xvfb-run ./configure && \
make
# Cleanup
RUN apt-get clean && \
rm -rf /var/lib/apt && \
rm /usr/omnetpp/omnetpp-5.1.1-src-linux.tgz
Solution that worked for me
First of all, the linked Microsoft article is only valid for Windows containers.
This article explains very well how docker networks work.
To simplify the explanation I drew a simple diagram of ssh-ing into a docker network.
To reach a container on a bridged network, you must explicitly expose the necessary ports.
Expose Port
docker run -p 22 ${imageName}
Find Port mapping on host machine
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a2ec2bd2b53b renderfehler/omnet_ide_baseimage "/sbin/my_init" 17 hours ago Up 17 hours 0.0.0.0:32773->22/tcp tender_newton
ssh into the container using the mapped port (32773 in the listing above)
ssh -p 32773 root@0.0.0.0
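If you don't want to read the mapping out of the `docker ps` listing, `docker port` prints it directly (container name taken from the listing above):

```shell
# Print the host address and port mapped to the container's port 22
docker port tender_newton 22
```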

Jenkins in docker with access to host docker

I have a workflow as follows for publishing webapps to my dev server. The server has a single docker host and I'm using docker-compose for managing containers.
Push changes in my app to a private gitlab (running in docker). The app includes a Dockerfile and docker-compose.yml
Gitlab triggers a jenkins build (jenkins is also running in docker), which does some normal build stuff (e.g. run test)
Jenkins then needs to build a new docker image and deploy it using docker-compose.
The problem I have is in step 3. The way I have it set up, the jenkins container has access to the host docker so that running any docker command in the build script is essentially the same as running it on the host. This is done using the following Dockerfile for jenkins:
FROM jenkins
USER root
# Give jenkins access to docker
RUN groupadd -g 997 docker
RUN gpasswd -a jenkins docker
# Install docker-compose
RUN curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
USER jenkins
and mapping the following volumes to the jenkins container:
-v /var/run/docker.sock:/var/run/docker.sock
-v /usr/bin/docker:/usr/bin/docker
A typical build script in jenkins looks something like this:
docker-compose build
docker-compose up
This works ok, but there are two problems:
It really feels like a hack. But the only other option I've found is to use the docker plugin for jenkins, publish to a registry, and then have some way of letting the host know it needs to restart. This is quite a lot more moving parts, and the docker-jenkins plugin requires that the docker host be on an open port, which I don't really want to expose.
The jenkins Dockerfile includes groupadd -g 997 docker, which is needed to give the jenkins user access to docker. However, the GID (997) is the GID on the host machine, and is therefore not portable.
I'm not really sure what solution I'm looking for. I can't see any practical way to get around this approach, but it would be nice if there was a way to allow running docker commands inside the jenkins container without having to hard-code the GID in the Dockerfile. Does anyone have any suggestions?
My previous answer was more generic, telling how you can modify the GID inside the container at runtime. Now, by coincidence, one of my close colleagues asked for a jenkins instance that can do docker development, so I created this:
FROM bdruemen/jenkins-uid-from-volume
RUN apt-get -yqq update && apt-get -yqq install docker.io && usermod -g docker jenkins
VOLUME /var/run/docker.sock
ENTRYPOINT groupmod -g $(stat -c "%g" /var/run/docker.sock) docker && usermod -u $(stat -c "%u" /var/jenkins_home) jenkins && gosu jenkins /bin/tini -- /usr/local/bin/jenkins.sh
(The parent Dockerfile is the same one I have described in my answer to: Changing the user's uid in a pre-build docker container (jenkins))
To use it, mount both, jenkins_home and docker.sock.
docker run -d -v /home/jenkins:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock <IMAGE>
The jenkins process in the container will have the same UID as the mounted host directory. Assuming the docker socket is accessible to the docker group on the host, there is a group created in the container, also named docker, with the same GID.
I ran into the same issues. I ended up giving Jenkins passwordless sudo privileges because of the GID problem. I wrote more about this here: https://blog.container-solutions.com/running-docker-in-jenkins-in-docker
This doesn't really affect security as having docker privileges is effectively equivalent to sudo rights.
Please take a look at this docker file I just posted:
https://github.com/bdruemen/jenkins-docker-uid-from-volume/blob/master/gid-from-volume/Dockerfile
Here the GID is extracted from a mounted volume (host directory) with
stat -c '%g' <VOLUME-PATH>
Then the GID of the group of the container user is changed to the same value with
groupmod -g <GID>
This has to be done as root, but then root privileges are dropped with
gosu <USERNAME> <COMMAND>
Everything is done in the ENTRYPOINT, so the real GID is unknown until you run
docker run -d -v <HOST-DIRECTORY>:<VOLUME-PATH> ...
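The GID-extraction step is plain shell and can be tried outside any container. This sketch reads the group ID of a scratch file the same way the entrypoint reads it from the mounted volume (the temp file just stands in for the volume path):

```shell
# Read the numeric group ID of a file, as the entrypoint does for the
# mounted volume; this value would then be passed to `groupmod -g`.
tmpfile=$(mktemp)
vol_gid=$(stat -c '%g' "$tmpfile")
echo "group id to pass to groupmod: $vol_gid"
rm -f "$tmpfile"
```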
Note that after changing the GID, other files in the container may no longer be accessible to the process, so you might need a
chgrp -R <GROUPNAME> <SOME-PATH>
before the gosu command.
You can also change the UID, see my answer here Changing the user's uid in a pre-build docker container (jenkins)
and maybe you want to change both to increase security.
I solved a similar problem in the following way.
Docker is installed on the host. Jenkins is deployed in a docker container on that host, and it must build and run containers with web applications on the host.
The Jenkins master connects to the docker host using the REST API, so we need to enable the remote API for our docker host.
Log in to the host and open the docker service file /lib/systemd/system/docker.service. Search for ExecStart and replace that line with the following.
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
Reload and restart docker service
sudo systemctl daemon-reload
sudo service docker restart
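To confirm the daemon is actually listening on TCP, its REST API `/version` endpoint can be queried (plain HTTP here because this setup has no TLS, so keep such a port off untrusted networks):

```shell
# Should return a JSON document describing the daemon version.
curl -s http://localhost:4243/version
```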
Docker file for Jenkins
FROM jenkins/jenkins:lts
USER root
# Install the latest Docker CE binaries and add user `jenkins` to the docker group
RUN apt-get update
RUN apt-get -y --no-install-recommends install apt-transport-https \
apt-utils ca-certificates curl gnupg2 software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable"
RUN apt-get update && apt-get install -y docker-ce-cli docker-ce && \
apt-get clean && \
usermod -aG docker jenkins
USER jenkins
RUN jenkins-plugin-cli --plugins "blueocean:1.25.6 docker-workflow:1.29 ansicolor"
Build jenkins docker image
docker build -t you-jenkins-name .
Run Jenkins
docker run --name you-jenkins-name --restart=on-failure --detach \
--network jenkins \
--env DOCKER_HOST=tcp://172.17.0.1:4243 \
--publish 8080:8080 --publish 50000:50000 \
--volume jenkins-data:/var/jenkins_home \
--volume jenkins-docker-certs:/certs/client:ro \
you-jenkins-name
Your web application has a repository with a Jenkinsfile and a Dockerfile at its root.
Jenkinsfile for web app:
pipeline {
agent any
environment {
PRODUCT = 'web-app'
HTTP_PORT = 8082
DEVICE_CONF_HOST_PATH = '/var/web-app'
}
options {
ansiColor('xterm')
skipDefaultCheckout()
}
stages {
stage('Checkout') {
steps {
script {
//BRANCH_NAME = env.CHANGE_BRANCH ? env.CHANGE_BRANCH : env.BRANCH_NAME
deleteDir()
//git url: "git#<host>:<org>/${env.PRODUCT}.git", branch: BRANCH_NAME
}
checkout scm
}
}
stage('Stop and remove old') {
steps {
script {
try {
sh "docker stop ${env.PRODUCT}"
} catch (Exception e) {}
try {
sh "docker rm ${env.PRODUCT}"
} catch (Exception e) {}
try {
sh "docker image rm ${env.PRODUCT}"
} catch (Exception e) {}
}
}
}
stage('Build') {
steps {
sh "docker build . -t ${env.PRODUCT}"
}
}
// ④ Run the test using the built docker image
stage('Run new') {
steps {
script {
sh """docker run \
    --detach \
    --name ${env.PRODUCT} \
    --publish ${env.HTTP_PORT}:8080 \
    --volume ${env.DEVICE_CONF_HOST_PATH}:/var/web-app \
    ${env.PRODUCT}"""
}
}
}
}
}
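As a side note, the three try/catch blocks in the "Stop and remove old" stage can be collapsed: `docker rm -f` stops and removes in one call, and `|| true` swallows the error when the container doesn't exist yet (a sketch using the same `PRODUCT` name):

```shell
docker rm -f "${PRODUCT}"    2>/dev/null || true
docker image rm "${PRODUCT}" 2>/dev/null || true
```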
