Access ~/.ssh/id_rsa from gitlab runner using docker+machine - docker

I'm fairly new to Docker so I might be missing something.
I have an EC2 instance with gitlab-runner on it that spawns EC2 instances to be used as GitLab runners.
Here's my Dockerfile
# syntax=docker/dockerfile:1
FROM python:3.9
RUN apt-get update && apt-get -y install openssh-client
RUN ssh-keygen -q -t rsa -N '' -f /root/.ssh/id_rsa
I built, tagged and pushed the image to AWS ECR. The image is then used on the runners when a GitLab job is created. But for the life of me I just can't figure out why the files in ~/.ssh/ can't be accessed on the runner. I've tested access to those files in the CLI of Docker Desktop and had no trouble.
Here is the .gitlab-ci.yml
before_script:
- chmod 700 ~/.ssh
- chmod 600 ~/.ssh/id_rsa
- chmod 644 ~/.ssh/id_rsa.pub
The error on the runner is:
chmod: cannot access '/root/.ssh/id_rsa': No such file or directory

Either the image configured in your runners.docker section is not actually your ECR image:
[runners.docker]
image = "alpine" <=== your ECR image?
Or EC2 mounts a /root/.ssh folder over the container, which obscures the initial content of that image.
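If it is the former, here is a minimal sketch of the relevant part of the runner's config.toml (the registry path is a placeholder; substitute your own account, region and repository):
[runners.docker]
  # Use the image you pushed to ECR instead of the default "alpine"
  image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-runner-image:latest"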

Related

How to send sensitive data to docker container during run time

I am trying to containerise an API automation repo to run it on CI/CD (GoCD). Below is the Dockerfile content:
FROM alpine:latest
RUN apk add --no-cache python3 \
&& pip3 install --upgrade pip
WORKDIR /api-automation
COPY . /api-automation
RUN pip --no-cache-dir install .
COPY api_tests.conf /usr/.ops/config/api_tests.conf
ENTRYPOINT ["pytest" "-s" "-v" "--cache-clear" "--html=report.html"]
Below is the content of api_tests.conf configuration file.
[user]
username=<user_name>
apikey=<api_key>
[tokens]
token1=<token1>
api_tests.conf is the configuration file and it has sensitive data like API keys, tokens etc. (Note: the configuration file is not encrypted). Currently I am copying this config from the repo to /usr/.ops/config/api_tests.conf in the container, but I do not want to do this because of security concerns. So how can I copy this api_tests.conf file when I run the container from the CI/CD machine (i.e., I need to remove the instruction COPY api_tests.conf /usr/.ops/config/api_tests.conf from the Dockerfile)?
My second question is:
If I create a secret using the command docker secret create my_secret file_path, how can I get this secret api_tests.conf file into the container when I run it?
Note: Once the api_tests.conf file is copied into the container, I need to run the command "pytest -s -v --cache-clear --html=report.html".
Please provide your inputs.
If you want to avoid putting the line COPY api_tests.conf /usr/.ops/config/api_tests.conf in the Dockerfile, use the -v option of docker run, which mounts a file/directory from the host into the container filesystem:
docker run -itd -v /Users/basavarajlamani/Documents/api_tests.conf:/usr/.ops/config/api_tests.conf image-name
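If the tests only need to read the file, you can also mount it read-only by appending :ro to the same flag:
docker run -itd -v /Users/basavarajlamani/Documents/api_tests.conf:/usr/.ops/config/api_tests.conf:ro image-name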
If you want to use docker secret to provide the config file:
Make sure you're using Docker swarm, since docker secret works with the swarm orchestrator.
Create a docker secret with the contents of the config file: docker secret create api_test.conf /Users/basavarajlamani/Documents/api_tests.conf
docker secret ls will show the created secret.
Run your docker container as a service in the swarm:
docker service create \
--name myservice \
--secret source=api_test.conf,target=/usr/.ops/config/api_tests.conf \
image-name
NOTE: You can also use docker config rather than docker secret; the only difference is that configs are not encrypted at rest and are mounted directly into the container's filesystem.
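For completeness, a minimal sketch of the equivalent docker config flow, reusing the same file, service name and target path as above:
docker config create api_test.conf /Users/basavarajlamani/Documents/api_tests.conf
docker service create \
--name myservice \
--config source=api_test.conf,target=/usr/.ops/config/api_tests.conf \
image-name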
Hope it helps.

Create react app + Gitlab CI + Digital Ocean droplet - Pipeline succeeds but Docker container is deleted right after

I'm taking my first steps with Docker/CI/CD.
For that, I'm trying to deploy a raw create-react-app to my Digital Ocean droplet (Docker One-Click Application) using GitLab CI. These are my files:
Dockerfile.yml
# STAGE 1 - Building assets
FROM node:alpine as building_assets_stage
WORKDIR /workspace
## Preparing the image (installing dependencies and building static files)
COPY ./package.json .
RUN yarn install
COPY . .
RUN yarn build
# STAGE 2 - Serving static content
FROM nginx as serving_static_content_stage
ENV NGINX_STATIC_FILE_SERVING_PATH=/usr/share/nginx/html
EXPOSE 80
COPY --from=building_assets_stage /workspace/build ${NGINX_STATIC_FILE_SERVING_PATH}
.gitlab-ci.yml
## Use a Docker image with "docker-compose" installed on top of it.
image: tmaier/docker-compose:latest
services:
  - docker:dind
variables:
  DOCKER_CONTAINER_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: ${SECRETS_DOCKER_LOGIN_USERNAME}/${CI_PROJECT_NAME}:latest
before_script:
  ## Install the ssh agent (so we can access the Digital Ocean droplet) and run it.
  - apk update && apk add openssh-client
  - eval $(ssh-agent -s)
  ## Write the environment variable value to the agent store, create the ssh directory and give the right permissions to it.
  - echo "$SECRETS_DIGITAL_OCEAN_DROPLET_SSH_KEY" | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  ## Make sure that ssh will trust the new host, instead of asking.
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  ## Test that everything is set up correctly.
  - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
stages:
  - deploy
deploy:
  stage: deploy
  script:
    ## Log this machine into the Docker registry, create a production build and push it to the registry.
    - docker login -u ${SECRETS_DOCKER_LOGIN_USERNAME} -p ${SECRETS_DOCKER_LOGIN_PASSWORD}
    - docker build -t ${DOCKER_IMAGE_TAG} .
    - docker push ${DOCKER_IMAGE_TAG}
    ## Connect to the Digital Ocean droplet, stop/remove all running containers, pull the latest image and execute it.
    - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
    - docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
    - docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}
    # Everything works, exit.
    - exit 0
  only:
    - master
In a nutshell, in GitLab CI, I do the following:
(before_script) Install the ssh agent and add my private SSH key on this machine, so we can connect to the Digital Ocean droplet;
(deploy) I build my image and push it to my public Docker Hub repository;
(deploy) I connect to my Digital Ocean droplet via SSH, pull the image I've just built and run it.
The problem is that if I do everything from my computer's terminal, the container is created and the application is deployed successfully.
If I execute it from the GitLab CI job, the container is created but nothing is deployed because the container dies right after.
I can guarantee that the container is being erased because if I manually SSH into the server and run docker ps -a, it doesn't list anything.
I'm mostly confused by the fact that this image's CMD is CMD ["nginx", "-g", "daemon off;"], which shouldn't make my container get deleted because it has a process running.
What am I doing wrong? I'm lost.
Thank you in advance.
My question was answered by d g - thank you very much!
The problem lies in the fact that I was connecting to my Digital Ocean droplet via SSH and then running the commands as if they executed inside its shell, when I should have been passing the entire command to be executed as an argument to the ssh invocation.
I changed my .gitlab-ci.yml file from:
## Connect to the Digital Ocean droplet, stop/remove all running containers, pull latest image and execute it.
- ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
- docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
- docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}
To:
# Execute as follows:
# ssh -t digital-ocean-server "docker cmd1; docker cmd2"
- ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker ps -q --filter \"name=${DOCKER_CONTAINER_NAME}\" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}; docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}"

Can't get Docker outside of Docker to work with Jenkins in ECS

I am trying to get the Jenkins Docker image deployed to ECS and have docker-compose work inside of my pipeline.
I have hit wall after wall trying to get this Jenkins container launched and functioning. Most of the issues have just been getting the docker command to work inside of a pipeline (including getting the permissions/group right).
I've gotten to the point where the command works and uses the host Docker socket (docker ps shows the Jenkins container and the ECS agent) and docker-compose is working (docker-compose --version works), but when I try to run anything that involves files inside the pipeline, I get a "no such file or directory" error. This happens when I run docker-compose -f docker-compose.testing.yml up -d --build (it can't find the yml file), and also when I run a basic docker build it can't find the local files used in the COPY command (i.e. COPY . /app). I've tried changing the path to ./file.yml and $PWD/file.yml and still get the same error.
Here is my Jenkins Dockerfile:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
apt-get install -y --no-install-recommends curl
RUN apt-get remove docker
RUN curl -sSL https://get.docker.com/ | sh
RUN curl -L --fail https://github.com/docker/compose/releases/download/1.21.2/run.sh -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN groupadd -g 497 dockerami \
&& usermod -aG dockerami jenkins
USER jenkins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
COPY jobs /app/jenkins/jobs
COPY jenkins.yml /var/jenkins_home/jenkins.yml
RUN xargs /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state
ENV CASC_JENKINS_CONFIG /var/jenkins_home/jenkins.yml
I also have Terraform building the task definition and binding the /var/run/docker.sock from the host to the jenkins container.
I'm hoping to get this working since I have liked Jenkins since we started using it about 2 years ago, and I've had these pipelines working with docker-compose in our non-containerized Jenkins install, but getting Jenkins containerized has so far been like pulling teeth. I would much prefer to get this working than to have to change my workflows right now to something like Concourse or Drone.
One issue in your Dockerfile is that you are copying a file into /var/jenkins_home, which will disappear: /var/jenkins_home is declared as a volume in the parent Jenkins image, and any files you copy into a volume after the volume has been declared are discarded. See https://docs.docker.com/engine/reference/builder/#notes-about-specifying-volumes
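As a sketch of one way around this (assuming the jenkins/jenkins startup script still copies the contents of /usr/share/jenkins/ref into the Jenkins home on first start), you could copy the file into the reference directory instead and keep CASC_JENKINS_CONFIG pointing at the runtime path:
# Copy into the image's reference directory rather than into the /var/jenkins_home volume;
# the entrypoint copies ref/ contents into /var/jenkins_home when the container first starts.
COPY jenkins.yml /usr/share/jenkins/ref/jenkins.yml
ENV CASC_JENKINS_CONFIG /var/jenkins_home/jenkins.yml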

Mount a nfs share in docker build to install software

I am building a Docker image from a Dockerfile. However, I am doing some installs from files that are currently hosted on an NFS share. On regular CentOS I mount the share with mount.nfs, then run the install commands, pointing to the NFS share as the repository for the install files.
Is there any way to do this with Docker? I read a few posts about docker run -v, but I am not ready to run the container yet; I first need to create the image.
The alternative is to copy the whole repository in via zip or tar, unarchive it, do the install and then delete the files. However, I think this would end up as a huge image.
You'll need experimental software (as of writing) for doing this.
First of all, you have to create a buildx builder instance:
docker buildx create --name insecure --driver docker-container \
--driver-opt image=moby/buildkit:master \
--buildkitd-flags '--allow-insecure-entitlement security.insecure --allow-insecure-entitlement network.host'
As of today, the latest release (v0.9.0) of buildkit doesn't have the --insecure support, so you need master.
You should issue this command as the user who does the build.
Then you'll need to add these to your Dockerfile:
# syntax = docker/dockerfile:experimental
RUN --security=insecure mkdir /nfs && \
mount -t nfs -o nolock -o vers=4 $SERVER_IP:/nfs /nfs && \
ls -la /nfs
Third, you have to do your build with buildx and give the following options (--allow and --builder along with your normal options):
docker buildx build --allow security.insecure,network.host \
--builder insecure \
-t image:tag --file=Dockerfile .
You should then have your NFS share mounted at /nfs.
Be aware that this mount will be present only within that same RUN instruction, because each of those steps runs in a different container. The next RUN line will see only an empty /nfs directory.
So you should do everything that needs data from /nfs in that RUN step!
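For example, a minimal sketch of an install done entirely inside one RUN step (install.sh is a hypothetical installer living on the share; replace it with your real install commands):
# Hypothetical: mount the share, install from it and unmount, all within a single RUN step
RUN --security=insecure mkdir /nfs && \
mount -t nfs -o nolock -o vers=4 $SERVER_IP:/nfs /nfs && \
/nfs/install.sh && \
umount /nfs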
When you are building a docker image you have full access to the host's file system, which means you can simply write in the Dockerfile
ADD /nfs-path/file /path-inside-docker-image/file
You don't need any additional action in Docker to do that.

SSH agent forwarding during docker build

While building a docker image through a Dockerfile, I have to clone a GitHub repo. I added my public SSH keys to my GitHub account and I am able to clone the repo from my docker host. I see that I can use the docker host's ssh agent by mapping the $SSH_AUTH_SOCK env variable at docker run time, like
docker run --rm -it --name container_name \
-v $(dirname $SSH_AUTH_SOCK):$(dirname $SSH_AUTH_SOCK) \
-e SSH_AUTH_SOCK=$SSH_AUTH_SOCK my_image
How can I do the same during a docker build?
For Docker 18.09 and newer
You can use new features of Docker to forward your existing SSH agent connection or a key to the builder. This makes it possible, for example, to clone your private repositories during the build.
Steps:
First, set an environment variable to use the new BuildKit:
export DOCKER_BUILDKIT=1
Then create a Dockerfile with the new (experimental) syntax:
# syntax=docker/dockerfile:experimental
FROM alpine
# install ssh client and git
RUN apk add --no-cache openssh-client git
# download public key for github.com
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# clone our private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject
And build the image with:
docker build --ssh default .
Read more about it here: https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
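If you want to hand the builder a specific key rather than your running agent, the same flag also accepts a key path (a sketch; adjust the path to your own key):
docker build --ssh default=$HOME/.ssh/id_rsa .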
Unfortunately, you cannot forward your ssh socket to the build container since build time volume mounts are currently not supported in Docker.
This has been a topic of discussion for quite a while now, see the following issues on GitHub for reference:
https://github.com/moby/moby/issues/6396
https://github.com/moby/moby/issues/14080
As you can see this feature has been requested multiple times for different use cases. So far the maintainers have been hesitant to address this issue because they feel that volume mounts during build would break portability:
the result of a build should be independent of the underlying host
As outlined in this discussion.
This may be solved using an alternative build script. For example you may create a bash script and put it in ~/usr/local/bin/docker-compose or your favourite location:
#!/bin/bash
trap 'kill $(jobs -p)' EXIT
socat TCP-LISTEN:56789,reuseaddr,fork UNIX-CLIENT:${SSH_AUTH_SOCK} &
/usr/bin/docker-compose $@
Then in your Dockerfile you would use your existing ssh socket:
...
ENV SSH_AUTH_SOCK /tmp/auth.sock
...
&& apk add --no-cache socat openssh \
&& /bin/sh -c "socat -v UNIX-LISTEN:${SSH_AUTH_SOCK},unlink-early,mode=777,fork TCP:172.22.1.11:56789 &> /dev/null &" \
&& bundle install \
...
or any other ssh command will work.
Now you can call the custom docker-compose build. It will call the actual docker-compose script with a shared ssh socket.
This one is also interesting:
https://github.com/docker/for-mac/issues/483#issuecomment-344901087
It looks like:
On the host
mkfifo myfifo
nc -lk 12345 <myfifo | nc -U $SSH_AUTH_SOCK >myfifo
In the Dockerfile
RUN mkfifo myfifo
RUN while true; do \
nc 172.17.0.1 12345 <myfifo | nc -Ul /tmp/ssh-agent.sock >myfifo; \
done &
RUN export SSH_AUTH_SOCK=/tmp/ssh-agent.sock
RUN ssh ...
