Docker: share private key via arguments

I want to share my GitHub private key with my Docker container.
I'm thinking about passing it in via docker-compose.yml using ARGs.
Is it possible to share a private key using an ARG, as described here?
Pass a variable to a Dockerfile from a docker-compose.yml file
# docker-compose.yml file
version: '2'
services:
  my_service:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
      args:
        - PRIVATE_KEY=MULTI-LINE PLAIN TEXT RSA PRIVATE KEY
and then I expect to use it in my Dockerfile as:
ARG PRIVATE_KEY
RUN echo $PRIVATE_KEY >> ~/.ssh/id_rsa
RUN pip install git+ssh://git@github.com/...
Is it possible via ARGs?

If you can use the latest Docker 1.13 (or 17.03 CE), you could then use Docker swarm secrets: see "Managing Secrets In Docker Swarm Clusters".
That allows you to associate a secret with a service you are launching:
docker service create --name test \
    --secret my_secret \
    --restart-condition none \
    alpine cat /run/secrets/my_secret
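The secret itself has to exist in the swarm first; a one-line sketch, assuming your key is in ~/.ssh/id_rsa:
docker secret create my_secret ~/.ssh/id_rsa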
If docker swarm is not an option in your case, you can try to set up a docker credential helper.
See "Getting rid of Docker plain text credentials". But that might not apply to a private ssh key.
You can check other relevant options in "Secrets and LIE-abilities: The State of Modern Secret Management (2017)", using a standalone secret manager like HashiCorp Vault.

Although the ARG itself will not persist in the built image, when you reference the ARG variable somewhere in the Dockerfile, that will be in the history:
FROM busybox
ARG SECRET
RUN set -uex; \
    echo "$SECRET" > /root/.ssh/id_rsa; \
    do_deploy_work; \
    rm /root/.ssh/id_rsa
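You can see the leak for yourself: build args consumed by a RUN step are recorded in the image history (the image name below is just a placeholder):
docker build --build-arg SECRET=super-secret -t argtest .
docker history --no-trunc argtest
# the RUN layer shows something like: |1 SECRET=super-secret /bin/sh -c set -uex; ...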
As VonC notes, there's now a swarm feature to store and manage secrets, but that doesn't (yet) solve the build-time problem.
Builds
Coming in Docker ~ 1.14 (or whatever the equivalent new release name is) should be the --build-secret flag (also #28079) that lets you mount a secret file during a build.
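For what it's worth, the feature eventually shipped as a BuildKit secret mount rather than a --build-secret flag. A minimal sketch, assuming a BuildKit-enabled Docker; the secret id and paths are placeholders:
# syntax=docker/dockerfile:1
FROM busybox
# the key is only mounted for this RUN step and never written into a layer
RUN --mount=type=secret,id=ssh_key,target=/root/.ssh/id_rsa \
    do_deploy_work
Built with:
DOCKER_BUILDKIT=1 docker build --secret id=ssh_key,src=$HOME/.ssh/id_rsa .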
In the meantime, one solution is to run a network service somewhere that you can pull secrets from during the build. If the build puts the secret in a file, like ~/.ssh/id_rsa, the file must be deleted before the RUN step that created it completes.
The simplest solution I've seen is serving a file with nc:
docker network create build
docker run --name=secret \
    --net=build \
    --detach \
    -v ~/.ssh/id_rsa:/id_rsa \
    busybox \
    sh -c 'nc -lp 8000 < /id_rsa'
docker build --network=build .
Then collect the secret, store it, use it and remove it in the Dockerfile RUN step.
FROM busybox
RUN set -uex; \
    nc secret 8000 > /id_rsa; \
    cat /id_rsa; \
    rm /id_rsa
Projects
There are a number of utilities built on this same premise, with varying levels of complexity and features. Some are generic solutions, like HashiCorp's Vault.
Dockito Vault
Hashicorp Vault
docker-ssh-exec

Related

Build all containers before pushing to registry using Buildkit

I have the following Dockerfile:
FROM php:7.4 as base
RUN apt-get install -y libzip-dev && \
    docker-php-ext-install zip
# install some other things...
FROM base as intermediate
COPY upgrade.sh /usr/local/bin
FROM base as final
COPY start-app.sh /usr/local/bin
As you can see, I have 3 stages:
base
intermediate
final
First I build the base image, then both "derived" images. My problem is that I need to push both the intermediate and the final image to my (GitLab) registry. The images are built using the following gitlab-ci.yml:
.build_container: &build_container
  stage: build
  image:
    name: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/moby/buildkit:rootless
    entrypoint: ["sh", "-c"]
  variables:
    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
  script:
    - mkdir ~/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > ~/.docker/config.json
    - |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=${CI_PROJECT_DIR} \
        --local dockerfile=${CI_PROJECT_DIR} \
        --opt filename=./${DOCKERFILE} \
        --import-cache type=registry,ref=${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG} \
        --export-cache type=inline \
        --output type=image,name=${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG},push=true

Build:Container:
  <<: *build_container
  variables:
    DOCKERFILE: "Dockerfile"
OK, I could simply add another buildctl-daemonless.sh command and use a separate file, but I want to make sure that both images (intermediate and final) are built successfully before pushing them. So I'm looking for a solution that builds the intermediate and the final images first and then pushes both to the registry, e.g. something like this:
buildctl-daemonless.sh build \
  --frontend dockerfile.v0 \
  --local context=${CI_PROJECT_DIR} \
  --local dockerfile=${CI_PROJECT_DIR} \
  --opt filename=./${DOCKERFILE} \
  --import-cache type=registry,ref=${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG} \
  --export-cache type=inline
buildctl-daemonless.sh push intermediate final
Unfortunately there's no "push" command in buildctl-daemonless.sh as far as I know. Does anyone have an idea how I can do this?
Directly from BuildKit, I don't think there's a separate push command. The output of a multi-platform build usually goes directly to a registry, but it can also be an OCI layout tar file. To write that tar file, you would add the following option:
--output type=oci,dest=path/to/output.tar
With that tar file, there are various standalone tools that can help. Pretty sure Red Hat's skopeo and Google's go-containerregistry/crane both have options. My own tool is regclient, which includes the following regctl command:
regctl image import ${image_name_tag} path/to/output.tar
There's a regclient/regctl:alpine image that's made to be embedded in CI pipelines like GitLab, and it will read registry auths from the same ~/.docker/config.json.
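A rough sketch of how the two-step flow could look in the job script, assuming the stage names from the Dockerfile above and a job image that has regctl available (the tar file names and tags are placeholders):
# build each stage to an OCI layout tar without pushing
buildctl-daemonless.sh build \
  --frontend dockerfile.v0 \
  --local context=${CI_PROJECT_DIR} \
  --local dockerfile=${CI_PROJECT_DIR} \
  --opt filename=./${DOCKERFILE} \
  --opt target=intermediate \
  --output type=oci,dest=intermediate.tar
buildctl-daemonless.sh build \
  --frontend dockerfile.v0 \
  --local context=${CI_PROJECT_DIR} \
  --local dockerfile=${CI_PROJECT_DIR} \
  --opt filename=./${DOCKERFILE} \
  --opt target=final \
  --output type=oci,dest=final.tar
# only reached if both builds succeeded: import the tars into the registry
regctl image import ${CI_REGISTRY_IMAGE}/intermediate:${CI_COMMIT_REF_SLUG} intermediate.tar
regctl image import ${CI_REGISTRY_IMAGE}/final:${CI_COMMIT_REF_SLUG} final.tar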

Auto-create Rundeck jobs on startup (Rundeck in Docker container)

I'm trying to set up Rundeck inside a Docker container. I want to use Rundeck to provision and manage my Docker fleet. I found an image which ships an Ansible plugin as well. So far, running simple playbooks and auto-discovering my Pi nodes works.
Docker script:
echo "[INFO] prepare rundeck-home directory"
mkdir ../../target/work/home
mkdir ../../target/work/home/rundeck
mkdir ../../target/work/home/rundeck/data
echo -e "[INFO] copy host inventory to rundeck-home"
cp resources/inventory/hosts.ini ../../target/work/home/rundeck/data/inventory.ini
echo -e "[INFO] pull image"
docker pull batix/rundeck-ansible
echo -e "[INFO] start rundeck container"
docker run -d \
  --name rundeck-raspi \
  -p 4440:4440 \
  -v "/home/sebastian/work/workspace/workspace-github/raspi/target/work/home/rundeck/data:/home/rundeck/data" \
  batix/rundeck-ansible
Now I want to feed the container with playbooks which should become jobs in Rundeck. Can anyone give me a hint on how I can create Rundeck jobs (which should invoke an Ansible playbook) from the outside? Via the API?
One way I can think of is creating the jobs manually once and exporting them as XML or YAML. When the container and Rundeck are up and running, I could import the jobs automatically. Is there a certain folder in rundeck-home, or somewhere else, where I can put those files for automatic import? Or is there an API call or something?
Could Jenkins be more suited for this task than Rundeck?
EDIT: just changed to a Dockerfile
FROM batix/rundeck-ansible:latest
COPY resources/inventory/hosts.ini /home/rundeck/data/inventory.ini
COPY resources/realms.properties /home/rundeck/etc/realms.properties
COPY resources/tokens.properties /home/rundeck/etc/tokens.properties
# import jobs
ENV RD_URL="http://localhost:4440"
ENV RD_TOKEN="yJhbGciOiJIUzI1NiIs"
ENV rd_api="36"
ENV rd_project="Test-Project"
ENV rd_job_path="/home/rundeck/data/jobs"
ENV rd_job_file="Ping_Nodes.yaml"
# copy job definitions and script
COPY resources/jobs-definitions/Ping_Nodes.yaml /home/rundeck/data/jobs/Ping_Nodes.yaml
RUN curl -kSsv --header "X-Rundeck-Auth-Token:$RD_TOKEN" \
    -F yamlBatch=@"$rd_job_path/$rd_job_file" "$RD_URL/api/$rd_api/project/$rd_project/jobs/import?fileformat=yaml&dupeOption=update"
Do you know how I can delay the curl at the end until after the rundeck service is up and running?
That's right, you can write a script that makes an API call using cURL (pointing to your Docker instance) after deploying your instance, i.e. a script that deploys the instance and later imports the jobs. I'll leave a basic example (this one needs the job definition in XML format).
For XML job definition format:
#!/bin/sh
# protocol
protocol="http"
# basic rundeck info
rdeck_host="localhost"
rdeck_port="4440"
rdeck_api="36"
rdeck_token="qNcao2e75iMf1PmxYfUJaGEzuVOIW3Xz"
# specific api call info
rdeck_project="ProjectEXAMPLE"
rdeck_xml_file="HelloWorld.xml"
# api call
curl -kSsv --header "X-Rundeck-Auth-Token:$rdeck_token" \
  -F xmlBatch=@"$rdeck_xml_file" "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/project/$rdeck_project/jobs/import?fileformat=xml&dupeOption=update"
For YAML job definition format:
#!/bin/sh
# protocol
protocol="http"
# basic rundeck info
rdeck_host="localhost"
rdeck_port="4440"
rdeck_api="36"
rdeck_token="qNcao2e75iMf1PmxYfUJaGEzuVOIW3Xz"
# specific api call info
rdeck_project="ProjectEXAMPLE"
rdeck_yml_file="HelloWorldYML.yaml"
# api call
curl -kSsv --header "X-Rundeck-Auth-Token:$rdeck_token" \
  -F xmlBatch=@"$rdeck_yml_file" "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/project/$rdeck_project/jobs/import?fileformat=yaml&dupeOption=update"
Here is the API call.
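As for delaying the import until Rundeck is actually up, a simple approach is to poll the instance before calling the import endpoint; a rough sketch (the URL and sleep interval are assumptions, adjust for your setup):
# wait until Rundeck answers on its web port before importing jobs
until curl -ks -o /dev/null "$protocol://$rdeck_host:$rdeck_port"; do
  echo "waiting for rundeck..."
  sleep 5
done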

How to configure Databricks token inside Docker File

I have a Dockerfile where I want to:
Download the Databricks CLI
Configure the CLI by adding a host and token
And then run a Python file that hits the Databricks API
I am able to install the CLI in the Docker image, and I have a working Python file that is able to submit the job to the Databricks API, but I'm unsure of how to configure the CLI within Docker.
Here is what I have
FROM python
MAINTAINER nope
# Creating Application Source Code Directory
RUN mkdir -p /src
# Setting Home Directory for containers
WORKDIR /src
# Installing python dependencies
RUN pip install databricks_cli
# Not sure how to do this part???
# databricks token kicks off the config via CLI
RUN databricks configure --token
# Copying src code to Container
COPY . /src
# Start Container
CMD echo $(databricks --version)
# Kicks off Python job
CMD ["python", "get_run.py"]
If I were to run databricks configure --token in the CLI, it would prompt for the configs like this:
databricks configure --token
Databricks Host (should begin with https://):
It's better not to do it this way, for multiple reasons:
It's insecure - if you configure the Databricks CLI this way, it will generate a file inside the container that can be read by anyone who has access to it.
The token has a time-to-live (default is 90 days) - this means that you'll need to rebuild your containers regularly...
Instead, it's better to pass two environment variables to the container; they will be picked up by the databricks command. These are DATABRICKS_HOST and DATABRICKS_TOKEN, as described in the documentation.
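A minimal sketch of that approach, reusing the image from the question (host, token and image name are placeholders):
docker run \
  -e DATABRICKS_HOST="https://your-databricks-host-url" \
  -e DATABRICKS_TOKEN="dapiXXXXXXXX" \
  my-databricks-image
This keeps the token out of the image entirely; it only exists in the running container's environment.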
When databricks configure is run successfully, it writes the information to the file ~/.databrickscfg:
[DEFAULT]
host = https://your-databricks-host-url
token = your-api-token
One way you could set this in the container is by using a startup command (syntax here for docker-compose.yml):
/bin/bash -ic "echo -e '[DEFAULT]\nhost = ${HOST_URL}\ntoken = ${TOKEN}' > ~/.databrickscfg"
It is not very secure to put your token in the Dockerfile. However, if you want to pursue this approach, you can use the code below.
RUN export DATABRICKS_HOST=XXXXX && \
    export DATABRICKS_API_TOKEN=XXXXX && \
    export DATABRICKS_ORG_ID=XXXXX && \
    export DATABRICKS_PORT=XXXXX && \
    export DATABRICKS_CLUSTER_ID=XXXXX && \
    echo "{\"host\": \"${DATABRICKS_HOST}\",\"token\": \"${DATABRICKS_API_TOKEN}\",\"cluster_id\":\"${DATABRICKS_CLUSTER_ID}\",\"org_id\": \"${DATABRICKS_ORG_ID}\", \"port\": \"${DATABRICKS_PORT}\" }" >> /root/.databricks-connect
Make sure to run all the commands in a single RUN instruction. Otherwise, variables such as DATABRICKS_HOST or DATABRICKS_API_TOKEN may not propagate properly.
If you want to connect to a Databricks Cluster within a docker container you need more configuration. You can find the required details in this article: How to Connect a Local or Remote Machine to a Databricks Cluster
The number of personal access tokens per user is limited to 600
But via bash it is easy:
echo "y
$(WORKSPACE-REGION-URL)
$(CSE-DEVELOP-PAT)
$(EXISTING-CLUSTER-ID)
$(WORKSPACE-ORG-ID)
15001" | databricks-connect configure
If you want to access Databricks models/download_artifacts using a hostname and access token, like you do with the Databricks CLI:
databricks configure --token --profile profile_name
Databricks Host (should begin with https://): your_hostname
Token : token
If you have created a profile name and pushed models, and you just want to access the models/artifacts in Docker using this profile,
add the code below to the Dockerfile.
RUN pip install databricks_cli
ARG HOST_URL
ARG TOKEN
RUN echo "[<profile name>]\nhost = ${HOST_URL}\ntoken = ${TOKEN}" >> ~/.databrickscfg
# this will create your .databrickscfg file with host and token at build time, the same way databricks configure does
Pass the HOST_URL and TOKEN args to the docker build, e.g.:
your host name = https://adb-5443106279769864.19.azuredatabricks.net/
your access token = dapi********************53b1-2
sudo docker build -t test_tag --build-arg HOST_URL=<your host name> --build-arg TOKEN=<your access token> .
And now you can access your experiments using this profile name, Databricks:profile_name, in the code.

Understanding the difference in sequence of ENTRYPOINT/CMD between Dockerfile and docker run

Docker noob here...
I am trying to build and run an IBM DataPower container from a Dockerfile, but it doesn't seem to work the same as when just running docker run and passing the same parameters in the terminal.
This works (docker run)
docker run -it \
  -v $PWD/config:/drouter/config \
  -e DATAPOWER_ACCEPT_LICENSE=true \
  -e DATAPOWER_INTERACTIVE=true \
  -e DATAPOWER_WORKER_THREADS=4 \
  -p 9090:9090 \
  --name mydatapower \
  ibmcom/datapower
... the key part being that it mounts the ./config folder and the custom configuration is picked up by datapower running in the container.
This doesn't (Dockerfile)
Dockerfile:
FROM ibmcom/datapower
ENV DATAPOWER_ACCEPT_LICENSE=true
ENV DATAPOWER_INTERACTIVE=true
ENV DATAPOWER_WORKER_THREADS=4
EXPOSE 9090
COPY config/auto-startup.cfg /drouter/config/auto-startup.cfg
Build:
docker build -t local/datapower .
Run:
docker run -it \
  -p 9090:9090 \
  --name mydatapower local/datapower
The problem is that DataPower doesn't pick up the auto-startup.cfg file, so the additional config options don't get used. I know the source file path is correct because if I misspell the file name docker throws an error.
I have a theory that it might be running the inherited ENTRYPOINT or CMD before the config file is available. I don't know how to test or prove this. I don't know what the ENTRYPOINT or CMD is because the inherited image is not open source and I can't figure out how to find it.
Does that seem likely?
UPDATE:
The content of the auto-startup.cfg is:
top; co
ssh
web-mgmt
admin enabled
port 9090
exit
It simply enables the DataPower WebGUI.
The output when running it on the command line with:
docker run -it -v $PWD/config:/drouter/config -v $PWD/local:/drouter/local -e DATAPOWER_ACCEPT_LICENSE=true -e DATAPOWER_INTERACTIVE=true -e DATAPOWER_WORKER_THREADS=4 -p 9091:9090 --name myconfigureddatapower ibmcom/datapower
...contains this:
20170908T121729.015Z [0x8100006e][system][notice] : Executing startup configuration.
20170908T121729.970Z [0x00350014][mgmt][notice] web-mgmt(WebGUI-Settings): tid(303): Operational state up
...but with the Dockerfile it doesn't. That's why I think the config files may be copied into place too late.
I've tried adding CMD ["/bin/drouter"] to the end of my Dockerfile to no avail.
I have tested your Dockerfile and it seems to be working. My auto-startup.cfg file is copied to the proper location, and when I launch the container it reads the file.
I get this output:
[root@ip-172-30-2-164 tmp]# docker run -ti -p 9090:9090 test
20170908T123728.818Z [0x8040006b][system][notice] logging target(default-log): Logging started.
20170908T123729.067Z [0x804000fe][system][notice] : Container instance UUID: 36bcca0e-6139-4694-91b0-2b7b66c3a498, Cores: 4, vCPUs: 4, CPU model: Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz, Memory: 16049.1MB, Platform: docker, OS: dpos, Edition: developers-limited, Up time: 0 minutes
20170908T123729.071Z [0x8040001c][system][notice] : DataPower IDG is on-line.
20170908T123729.071Z [0x8100006f][system][notice] : Executing default startup configuration.
20170908T123729.416Z [0x8100006d][system][notice] : Executing system configuration.
20170908T123729.417Z [0x8100006b][mgmt][notice] domain(default): tid(8143): Domain operational state is up.
708f98be1390
Unauthorized access prohibited.
20170908T123731.239Z [0x806000dd][system][notice] cert-monitor(Certificate Monitor): tid(399): Enabling Certificate Monitor to scan once every 1 days for soon to expire certificates
20170908T123731.552Z [0x8100006e][system][notice] : Executing startup configuration.
20170908T123732.436Z [0x8100003b][mgmt][notice] domain(default): Domain configured successfully.
20170908T123732.449Z [0x00350014][mgmt][notice] web-mgmt(WebGUI-Settings): tid(303): Operational state up
login:
To check that your file has been copied to the container you can run docker run -ti local/datapower sh to enter the container and then check the content of /drouter/config/.
Your base image's command is CMD ["/bin/drouter"]; you can check it by running docker history ibmcom/datapower.
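If you prefer, you can also read the entrypoint and command straight from the image metadata; a quick sketch:
docker inspect --format 'ENTRYPOINT={{json .Config.Entrypoint}} CMD={{json .Config.Cmd}}' ibmcom/datapower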
UPDATE:
The drouter user in the container must be able to read the auto-startup.cfg file. You have 2 options:
set your local auto-startup.cfg with the proper permissions (chmod 644 config/auto-startup.cfg),
or add these lines in the Dockerfile so drouter can read the file:
USER root
RUN chown drouter /drouter/config/auto-startup.cfg
USER drouter

How can I let the gitlab-ci-runner DinD image cache intermediate images?

I have a Dockerfile that starts with installing the texlive-full package, which is huge and takes a long time. If I docker build it locally, the intermediate image created after installation is cached, and subsequent builds are fast.
However, if I push to my own GitLab install and the GitLab-CI build runner starts, this always seems to start from scratch, redownloading the FROM image, and doing the apt-get install again. This seems like a huge waste to me, so I'm trying to figure out how to get the GitLab DinD image to cache the intermediate images between builds, without luck so far.
I have tried using the --cache-dir and --docker-cache-dir for the gitlab-runner register command, to no avail.
Is this even something the gitlab-runner DinD image is supposed to be able to do?
My .gitlab-ci.yml:
build_job:
  script:
    - docker build --tag=example/foo .
My Dockerfile:
FROM php:5.6-fpm
MAINTAINER Roel Harbers <roel.harbers@example.com>
RUN apt-get update && apt-get install -qq -y --fix-missing --no-install-recommends texlive-full
RUN echo Do other stuff that has to be done every build.
I use GitLab CE 8.4.0 and gitlab/gitlab-runner:latest as runner, started as
docker run -d --name gitlab-runner --restart always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/local/gitlab-ci-runner/config:/etc/gitlab-runner \
    gitlab/gitlab-runner:latest \
; \
The runner is registered using:
docker exec -it gitlab-runner gitlab-runner register \
    --name foo.example.com \
    --url https://gitlab.example.com/ci \
    --cache-dir /cache/build/ \
    --executor docker \
    --docker-image gitlab/dind:latest \
    --docker-privileged \
    --docker-disable-cache false \
    --docker-cache-dir /cache/docker/ \
; \
This creates the following config.toml:
concurrent = 1
[[runners]]
  name = "foo.example.com"
  url = "https://gitlab.example.com/ci"
  token = "foobarsldkflkdsjfkldsj"
  tls-ca-file = ""
  executor = "docker"
  cache_dir = "/cache/build/"
  [runners.docker]
    image = "gitlab/dind:latest"
    privileged = true
    disable_cache = false
    volumes = ["/cache"]
    cache_dir = "/cache/docker/"
(I have experimented with different values for cache_dir, docker_cache_dir and disable_cache, all with the same result: no caching whatsoever)
I suppose there's no simple answer to your question. Before adding some details, I strongly suggest reading this blog article from the maintainer of DinD, which was originally named "do not use Docker in Docker for CI".
What you might try is declaring /var/lib/docker as a volume for your GitLab runner. But be warned, depending on your file-system drivers you may use AUFS in the container on an AUFS filesystem on your host, which is very likely to cause problems.
What I'd suggest to you is creating a separate Docker-VM, only for the runner(s), and bind-mount docker.sock from the VM into your runner-container.
We are using this setup with GitLab with great success (>27.000 builds in about 12 months).
You can take a look at our runner with docker-compose support which is actually based on the shell-executor of GitLab's runner.
Currently you cannot cache intermediate layers in GitLab Docker-in-Docker, although there are plans to add that (mentioned in the link below). What you can do today to speed up your DinD build is to use the overlay filesystem. To do this you need to be running a Linux kernel >= 3.18 and make sure you load the overlay kernel module. Then you set this variable in your gitlab-ci.yml:
variables:
  DOCKER_DRIVER: overlay
For more information see this issue and in particular this comment on "The state of optimising Docker Builds!", see the "Using docker executor with dind" section.
https://gitlab.com/gitlab-org/gitlab-ce/issues/17861#note_12991518
For build dependencies that do not change so often, you can do a kind of manual caching with the GitLab image registry.
In the CI script you do not explicitly call docker build but rather wrap it in a shell script:
# cat build_dependencies.sh
registry=registry.example.com
project=group/project
imagebase=$registry/$project/linux
docker pull $imagebase/devbase:1.0
if [ $? -ne 0 ]; then
    docker build -f devbase.dockerfile -t $imagebase/devbase:1.0 .
    docker push $imagebase/devbase:1.0
fi
...
and call that script in your CI
...
script:
  - ./build_dependencies.sh
The downside to this is that when your devbase.dockerfile is updated, this would go unnoticed by CI, so you need to force a build and push of a new image. For dynamically changing images this does not work well, but for your use case this seems like a possible way to go.
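One possible refinement of that script (a sketch; the tag scheme and file names are assumptions) is to derive the image tag from a checksum of the dockerfile, so that editing devbase.dockerfile automatically produces a new tag and forces a rebuild:
# cat build_dependencies.sh
registry=registry.example.com
project=group/project
imagebase=$registry/$project/linux
# tag derived from the dockerfile contents, so edits trigger a rebuild
tag=$(md5sum devbase.dockerfile | cut -c1-12)
docker pull $imagebase/devbase:$tag
if [ $? -ne 0 ]; then
    docker build -f devbase.dockerfile -t $imagebase/devbase:$tag .
    docker push $imagebase/devbase:$tag
fi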
