docker buildkit mount ssh when using remote agent forwarding

I use the --ssh docker buildkit feature and it works fine locally.
I want to build the Docker image on a remote server, and for that I use the -A flag to forward my local GitHub key, like:
ssh -i "server.pem" -A <user>#<server-ip>
Then in server terminal I run:
ssh -T git@github.com
And I get the "Hello user" message, which means the key forwarding works fine.
(In the server, $SSH_AUTH_SOCK is indeed set, and I can git clone)
Now, when building locally I use:
DOCKER_BUILDKIT=1 docker build --ssh default=~/.ssh/id_rsa -t myimage:latest .
Which works fine.
But on the server the private key does not exist at ~/.ssh/id_rsa. So how can I forward it to docker build?
I tried this on the server:
DOCKER_BUILDKIT=1 docker build --ssh default=$SSH_AUTH_SOCK -t myimage:latest .
But it does not work. The error is:
could not parse ssh: [default]: invalid empty ssh agent socket, make sure SSH_AUTH_SOCK is set
Even though SSH_AUTH_SOCK is set
Docker version: 19.03

I had a similar issue and it was fixed quite simply: I wrapped SSH_AUTH_SOCK in curly braces, i.e. ${SSH_AUTH_SOCK}:
eval $(ssh-agent)
ssh-add ~/.ssh/id_rsa
DOCKER_BUILDKIT=1 docker build -t myimage:latest --ssh default=${SSH_AUTH_SOCK} .
In the Dockerfile, I have the appropriate RUN instruction to run a command that requires sensitive data:
RUN --mount=type=ssh \
mkdir vendor && composer install
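For the --mount=type=ssh flag to be accepted, the Dockerfile also needs BuildKit's experimental frontend enabled via a syntax directive on its first line (as in the related answer further down). A minimal sketch; the composer:1.10 base image and the WORKDIR/COPY lines are assumptions for illustration only:
# syntax=docker/dockerfile:experimental
FROM composer:1.10
# trust github.com so cloning private dependencies over ssh does not prompt
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
WORKDIR /app
COPY composer.json composer.lock ./
# the forwarded agent socket is mounted only for this RUN step
RUN --mount=type=ssh mkdir -p vendor && composer install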

You need to have ssh-agent running on your machine and the key added to it with ssh-add, or use ssh -A -o AddKeysToAgent=true when logging in. As far as I know, SSH will not automatically forward a key specified with -i just because you pass -A. After logging in you can run ssh-add -L to make sure your keys were forwarded; if you see entries there, docker build --ssh default . should work fine (see the sketch after the commands below).
eval `ssh-agent`
ssh-add server.pem
ssh -A <user>@<server-ip>
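Once logged in on the server with the agent forwarded, a minimal sketch of the check plus the build (image tag taken from the question; --ssh default picks up $SSH_AUTH_SOCK automatically):
ssh-add -L                                        # should list the forwarded key
DOCKER_BUILDKIT=1 docker build --ssh default -t myimage:latest .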

Related

testing tkinter-based function on jenkins in a docker container on AWS

I have Python code that passes all the tests on my local machine. The code uses tkinter and provides a GUI. However, none of the test functions actually open the GUI. (They call tk.Tk() though.)
I created a docker container locally and could use X11 forwarding to pass the tests on the "local" container as well.
Now, I'm trying to run the tests on Jenkins, which I have set up on an EC2 instance. Jenkins is supposed to create a docker container using the Dockerfile that is in my repository, and then call "docker run -e ... -v ..." (similar to what I had on my local computer) to run the tests. I understand my EC2 instance does not have a GUI, so X11 forwarding is not as simple as it was on my computer. There should be a way for GUI tests to be checked through a Jenkins setup on AWS. Any help is appreciated.
EDIT
Here is the build script that I have on AWS; it builds the docker image and runs the container using the Dockerfile:
IMAGE_NAME="test-image"
CONTAINER_NAME="deidentifier_clinical"
echo "Check current working directory"
pwd
echo "Build docker image and run container"
docker build -t $IMAGE_NAME .
echo $DISPLAY
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY $IMAGE_NAME bash -c "cd /$CONTAINER_NAME;make test"
echo "Copy coverage.xml into Jenkins container"
rm -rf reports; mkdir reports
docker cp $CONTAINER_NAME:/deidentifier_clinical/htmlcov/* reports/.
echo "Cleanup"
docker stop $CONTAINER_NAME
docker rm $CONTAINER_NAME
docker rmi $IMAGE_NAME
This fails on the docker run line. This same script runs with no problem on my local computer after setting up the X11-forwarding.
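One common workaround on a headless EC2 instance is to run the tests under a virtual X server such as Xvfb instead of forwarding a real display; a minimal sketch, assuming xvfb-run (and Xvfb) is installed in the image:
# no X11 socket mount or DISPLAY forwarding needed; Xvfb provides a virtual display
docker run $IMAGE_NAME bash -c "cd /$CONTAINER_NAME && xvfb-run -a make test"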

Deploy at Gitlab CI with docker-compose fails

I previously deployed my app from my local machine by running:
> docker context create remote --docker "host=ssh://user@myhost"
> docker --context remote ps
> docker-compose --context remote build
> docker-compose --context remote up -d
This is successful; all Dockerfiles are correct.
Now I want to do the same in GitLab CI. This is my gitlab-ci.yml file for building:
image: docker:19.03.12
services:
  - docker:dind
stages:
  - build
install_dependencies:
  stage: build
  before_script:
    - 'command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "StrictHostKeyChecking no " > ~/.ssh/config
  script:
    - echo "Building deploy package"
    - echo "$NPMRC" > ~/.npmrc
    - apk add --no-cache docker-compose
    - docker context create remote --docker "host=ssh://user@myhost"
    - docker --context remote ps
    - docker context use remote
    - docker-compose --context remote build
    - echo "Build successful"
Everything goes right until docker-compose --context remote build, where the --context argument is not recognized; I can't understand why.
$ docker context use remote
Current context is now "remote"
Warning: DOCKER_HOST environment variable overrides the active context. To use "remote", either set the global --context flag, or unset DOCKER_HOST environment variable.
remote
$ docker-compose --context remote build
Define and run multi-container applications with Docker.
Usage:
docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]
docker-compose -h|--help
Options:
-f, --file FILE Specify an alternate compose file
(default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name
(default: directory name)
--verbose Show more output
--log-level LEVEL Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
--no-ansi Do not print ANSI control characters
-v, --version Print version and exit
-H, --host HOST Daemon socket to connect to
--tls Use TLS; implied by --tlsverify
--tlscacert CA_PATH Trust certs signed only by this CA
--tlscert CLIENT_CERT_PATH Path to TLS certificate file
--tlskey TLS_KEY_PATH Path to TLS key file
--tlsverify Use TLS and verify the remote
--skip-hostname-check Don't check the daemon's hostname against the
name specified in the client certificate
--project-directory PATH Specify an alternate working directory
(default: the path of the Compose file)
--compatibility If set, Compose will attempt to convert keys
in v3 files to their non-Swarm equivalent
--env-file PATH Specify an alternate environment file
Commands:
build Build or rebuild services
config Validate and view the Compose file
create Create services
down Stop and remove containers, networks, images, and volumes
events Receive real time events from containers
exec Execute a command in a running container
help Get help on a command
images List images
kill Kill containers
logs View output from containers
pause Pause services
port Print the public port for a port binding
ps List containers
pull Pull service images
push Push service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker-Compose version information
ERROR: Job failed: exit code 1
To fix this, the docker-compose version should be at least 1.26.0, which is the release that added support for docker contexts (and thus the --context flag).
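One way to get a new enough version inside the job is to install docker-compose from pip instead of apk; a minimal sketch for the script section, assuming these Alpine build packages are acceptable in the docker:19.03.12 image (package names are my assumption, not from the question):
- apk add --no-cache python3 py3-pip gcc musl-dev python3-dev libffi-dev openssl-dev make
- pip3 install "docker-compose>=1.26.0"
- docker-compose version   # should now report 1.26.0 or later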

BitBucket Pipeline cannot find container after ssh into DigitalOcean Droplet

Here is my code
- step:
    name: SSH to Digital Ocean and update docker image
    script:
      - head ~/.ssh/config
      - ssh -i ~/.ssh/config root@XXX.XXX.XXX.XXX
      - docker ps
      - docker rm -f gvcontainer
      - docker image rm -f myrepo/myimage:tag
      - docker pull myrepo/myimage:tag
      - docker run --name gvcontainer -p 12345:80 -d=true --restart=always myrepo/myimage:tag
    services:
      - docker
Here I can see that the Pipeline sshes into my DO droplet successfully, but for some reason it could not find the container (I guess it ran "docker ps" too quickly and should wait a few seconds, but I don't know how to postpone the operation).
So I manually sshed into my droplet and checked; the gvcontainer is there.
Please enlighten me with any possible reasons.
Thanks
The commands listed after your SSH session are not being run on the remote system - they're being run in Pipelines. Since the Pipelines container doesn't have a gvcontainer to remove, it returns that error.
You have several options, one of which I outlined in answering your other question (pass the commands as arguments to SSH, as in ssh -i /path/to/key user@host "command1 && command2"). Another option would be to put a script on the droplet that does all the things you want, and have Pipelines execute it via SSH (ssh -i /path/to/key user@host "./do-all-the-things.sh").
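A sketch of the first option applied to the step above; /path/to/key is a placeholder for the actual private key, and the container/image names are taken from the question:
- step:
    name: SSH to Digital Ocean and update docker image
    script:
      - ssh -i /path/to/key root@XXX.XXX.XXX.XXX "docker rm -f gvcontainer; docker image rm -f myrepo/myimage:tag; docker pull myrepo/myimage:tag && docker run --name gvcontainer -p 12345:80 -d=true --restart=always myrepo/myimage:tag"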

SSH agent forwarding during docker build

While building a Docker image through a Dockerfile, I have to clone a GitHub repo. I added my public SSH key to my GitHub account and I am able to clone the repo from my Docker host. I see that I can use the Docker host's ssh agent by mapping the $SSH_AUTH_SOCK env variable at docker run time, like:
docker run --rm -it --name container_name \
-v $(dirname $SSH_AUTH_SOCK):$(dirname $SSH_AUTH_SOCK) \
-e SSH_AUTH_SOCK=$SSH_AUTH_SOCK my_image
How can I do the same during a docker build?
For Docker 18.09 and newer
You can use the new BuildKit features of Docker to forward your existing SSH agent connection, or a key, to the builder. This enables you, for example, to clone your private repositories during the build.
Steps:
First, set an environment variable to enable the new BuildKit:
export DOCKER_BUILDKIT=1
Then create a Dockerfile with the new (experimental) syntax:
# syntax=docker/dockerfile:experimental
FROM alpine
# install ssh client and git
RUN apk add --no-cache openssh-client git
# download public key for github.com
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# clone our private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject
And build the image with:
docker build --ssh default .
Read more about it here: https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
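If you don't want to rely on the default agent socket, the same --ssh flag also accepts an explicit path to an agent socket or to a private key file, for example:
docker build --ssh default=$SSH_AUTH_SOCK .
docker build --ssh default=$HOME/.ssh/id_rsa .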
Unfortunately, you cannot forward your ssh socket to the build container since build time volume mounts are currently not supported in Docker.
This has been a topic of discussion for quite a while now, see the following issues on GitHub for reference:
https://github.com/moby/moby/issues/6396
https://github.com/moby/moby/issues/14080
As you can see this feature has been requested multiple times for different use cases. So far the maintainers have been hesitant to address this issue because they feel that volume mounts during build would break portability:
the result of a build should be independent of the underlying host
As outlined in this discussion.
This may be solved using an alternative build script. For example, you may create a bash script and put it in ~/usr/local/bin/docker-compose or your favourite location:
#!/bin/bash
trap 'kill $(jobs -p)' EXIT
socat TCP-LISTEN:56789,reuseaddr,fork UNIX-CLIENT:${SSH_AUTH_SOCK} &
/usr/bin/docker-compose "$@"
Then in your Dockerfile you would use your existing ssh socket:
...
ENV SSH_AUTH_SOCK /tmp/auth.sock
...
&& apk add --no-cache socat openssh \
&& /bin/sh -c "socat -v UNIX-LISTEN:${SSH_AUTH_SOCK},unlink-early,mode=777,fork TCP:172.22.1.11:56789 &> /dev/null &" \
&& bundle install \
...
or any other ssh command will work.
Now you can call your custom docker-compose build. It will call the actual docker-compose with a shared ssh socket.
This one is also interesting:
https://github.com/docker/for-mac/issues/483#issuecomment-344901087
It looks like:
On the host
mkfifo myfifo
nc -lk 12345 <myfifo | nc -U $SSH_AUTH_SOCK >myfifo
In the dockerfile
RUN mkfifo myfifo
RUN while true; do \
nc 172.17.0.1 12345 <myfifo | nc -Ul /tmp/ssh-agent.sock >myfifo; \
done &
RUN export SSH_AUTH_SOCK=/tmp/ssh-agent.sock
RUN ssh ...

Docker Container Migration with criu

With help from Saied Kazemi I was able to checkpoint and restore a container using CRIU on Ubuntu 14, following "docker suspend and resume using criu".
Now I am trying to migrate this container from one location to another.
I am using these steps:
export cid=$(docker run -d ubuntu tail -f /dev/null)
docker exec $cid touch /test.walid
mkdir /tmp/docker-migration
mkdir /tmp/docker-migration/$cid
docker checkpoint --image-dir=/tmp/docker-migration/$cid $cid
ssh walid@192.168.1.10 mkdir /tmp/docker-migration
ssh walid@192.168.1.10 mkdir /tmp/docker-migration/$cid
scp -r /tmp/docker-migration/$cid walid@192.168.1.10:/tmp/docker-migration
ssh walid@192.168.1.10 mkdir /tmp/$cid
scp -r /var/lib/docker/0.0/containers/$cid walid@192.168.1.13:/tmp
ssh -t walid@192.168.1.10 sudo mv /tmp/$cid /var/lib/docker/0.0/containers/
ssh -t walid@192.168.1.10 sudo docker restore --force=true --image-dir=/tmp/docker-migration/$cid $cid
and got this response:
Error response from daemon: No such container: fea338e81750b2377c2a845e30c49b7055519e39448091715c2c6a7896da3562
Error: failed to restore one or more containers
Both machines have Docker and CRIU installed, and checkpointing on its own works.
Docker container migration using CRIU is still under development. So far the focus of checkpoint and restore integration into Docker has been C/R'ing on the same machine.
That said, it is possible to manually migrate containers by not only copying the container image created by CRIU after checkpoint (as you have done) but also by copying the container directory created by Docker in /var/lib/docker/0.0/containers/$cid as well as the container's root filesystem in /var/lib/docker/0.0/image. Manually migrating the container's filesystem is a bit tricky, especially if you are using a union filesystem like AUFS or OverlayFS. Also, you need to restart the Docker daemon on the destination machine to see the container.
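A rough sketch of those extra steps, on top of what the question already copies; the rsync and service commands are my assumptions (adjust for your storage driver and init system):
# also copy Docker's image/layer metadata to the destination
scp -r /var/lib/docker/0.0/image walid@192.168.1.10:/tmp/docker-image
ssh -t walid@192.168.1.10 sudo rsync -a /tmp/docker-image/ /var/lib/docker/0.0/image/
# restart the daemon so it sees the copied container
ssh -t walid@192.168.1.10 sudo service docker restart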
On the destination machine, you have to create or run a container. This container will be overwritten by the restored image.
So :
ssh -t walid@192.168.1.10 export NewID=$(docker run -d ubuntu tail -f /dev/null)
ssh -t walid@192.168.1.10 sudo docker restore --force=true --image-dir=/tmp/docker-migration/$cid $NewID
In my case, that worked like a charm!
Here is my test: migrating a container across nodes.
restore:
vagrant ssh vm2 -- 'docker run --name=foo -d ubuntu tail -f /dev/null && docker rm -f foo'
docker create --name=CONTAINER_NAME base_image
docker restore --force=true --image-dir=/tmp/{dump files}
github: https://github.com/hixichen/criu_test
