Using a Docker secret for accessing a database in a Jenkinsfile - docker

In my Jenkinsfile I'm running a Maven command to start a database migration. This database is running in a Docker container.
When deploying the database container we're using a Docker secret from the swarm manager node for the password.
Is there any way to use that Docker secret in the Jenkins pipeline script instead of putting it in plain text? I could use Jenkins credentials, but then I'd need to maintain the same secret in two different places.
sh """$mvn flyway:info \
-Dproject.host=$databaseHost \
-Dproject.port=$databasePort \
-Dproject.schema=$databaseSchema \
-Dproject.user=db_user \
-Dproject.password=db_pass \ // <--- Use a Docker secret here...
"""

You can create a username and password credential in Jenkins, for example database_credential, then use it like this:
environment {
    DATABASE_CREDS = credentials('database_credential')
}
...
sh """$mvn flyway:info \
    -Dproject.host=$databaseHost \
    -Dproject.port=$databasePort \
    -Dproject.schema=$databaseSchema \
    -Dproject.user=${DATABASE_CREDS_USR} \
    -Dproject.password=${DATABASE_CREDS_PSW}
"""

Related

Auto-create Rundeck jobs on startup (Rundeck in Docker container)

I'm trying to setup Rundeck inside a Docker container. I want to use Rundeck to provision and manage my Docker fleet. I found an image which ships an ansible-plugin as well. So far running simple playbooks and auto-discovering my Pi nodes work.
Docker script:
echo "[INFO] prepare rundeck-home directory"
mkdir ../../target/work/home
mkdir ../../target/work/home/rundeck
mkdir ../../target/work/home/rundeck/data
echo -e "[INFO] copy host inventory to rundeck-home"
cp resources/inventory/hosts.ini ../../target/work/home/rundeck/data/inventory.ini
echo -e "[INFO] pull image"
docker pull batix/rundeck-ansible
echo -e "[INFO] start rundeck container"
docker run -d \
--name rundeck-raspi \
-p 4440:4440 \
-v "/home/sebastian/work/workspace/workspace-github/raspi/target/work/home/rundeck/data:/home/rundeck/data" \
batix/rundeck-ansible
Now I want to feed the container with playbooks which should become jobs to run in Rundeck. Can anyone give me a hint on how I can create Rundeck jobs (which should invoke an Ansible playbook) from the outside? Via the API?
One way I can think of is creating the jobs manually once and exporting them as XML or YAML. When the container and Rundeck are up and running I could import the jobs automatically. Is there a certain folder in rundeck-home, or somewhere else, where I can put those files for automatic import? Or is there an API call or something?
Could Jenkins be more suited for this task than Rundeck?
EDIT: just changed to a Dockerfile
FROM batix/rundeck-ansible:latest
COPY resources/inventory/hosts.ini /home/rundeck/data/inventory.ini
COPY resources/realms.properties /home/rundeck/etc/realms.properties
COPY resources/tokens.properties /home/rundeck/etc/tokens.properties
# import jobs
ENV RD_URL="http://localhost:4440"
ENV RD_TOKEN="yJhbGciOiJIUzI1NiIs"
ENV rd_api="36"
ENV rd_project="Test-Project"
ENV rd_job_path="/home/rundeck/data/jobs"
ENV rd_job_file="Ping_Nodes.yaml"
# copy job definitions and script
COPY resources/jobs-definitions/Ping_Nodes.yaml /home/rundeck/data/jobs/Ping_Nodes.yaml
RUN curl -kSsv --header "X-Rundeck-Auth-Token:$RD_TOKEN" \
-F yamlBatch=@"$rd_job_path/$rd_job_file" "$RD_URL/api/$rd_api/project/$rd_project/jobs/import?fileformat=yaml&dupeOption=update"
Do you know how I can delay the curl at the end until after the rundeck service is up and running?
That's right, you can design a script with an API call using cURL (pointing to your Docker instance) that runs after deploying your instance, i.e. a script that deploys your instance and later imports the jobs. I'll leave basic examples below (the first one needs the job definition in XML format).
For XML job definition format:
#!/bin/sh
# protocol
protocol="http"
# basic rundeck info
rdeck_host="localhost"
rdeck_port="4440"
rdeck_api="36"
rdeck_token="qNcao2e75iMf1PmxYfUJaGEzuVOIW3Xz"
# specific api call info
rdeck_project="ProjectEXAMPLE"
rdeck_xml_file="HelloWorld.xml"
# api call
curl -kSsv --header "X-Rundeck-Auth-Token:$rdeck_token" \
-F xmlBatch=@"$rdeck_xml_file" "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/project/$rdeck_project/jobs/import?fileformat=xml&dupeOption=update"
For YAML job definition format:
#!/bin/sh
# protocol
protocol="http"
# basic rundeck info
rdeck_host="localhost"
rdeck_port="4440"
rdeck_api="36"
rdeck_token="qNcao2e75iMf1PmxYfUJaGEzuVOIW3Xz"
# specific api call info
rdeck_project="ProjectEXAMPLE"
rdeck_yml_file="HelloWorldYML.yaml"
# api call
curl -kSsv --header "X-Rundeck-Auth-Token:$rdeck_token" \
-F xmlBatch=@"$rdeck_yml_file" "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/project/$rdeck_project/jobs/import?fileformat=yaml&dupeOption=update"
Here is the documentation for that API call.
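On the question of delaying the curl until Rundeck is up: a common approach is to poll the API in a loop before running the import. A rough sketch reusing the variables above (the system/info endpoint and the 5-second retry interval are just one possible choice):
# wait until the Rundeck API answers before importing jobs
until curl -ks --fail -o /dev/null \
  --header "X-Rundeck-Auth-Token:$rdeck_token" \
  "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/system/info"; do
  echo "Rundeck not ready yet, retrying in 5 seconds..."
  sleep 5
done
# ...now run the jobs/import call from the examples above
Note that this only helps at container run time (e.g. from an entrypoint or an external deploy script); a RUN instruction executes at image build time, when Rundeck is not running yet.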

Pass Jenkins credentials to Docker build for Composer usage

I've got a Composer package in our company's private repository on Bitbucket. To access it I need to use credentials stored in Jenkins. Currently the whole build is based on a Declarative Pipeline and a Dockerfile. To pass credentials to Composer I need those credentials in the build stage so I can pass them to the Dockerfile.
How can I achieve it?
I've tried:
// Jenkinsfile
agent {
    dockerfile {
        label 'mylabel'
        filename '.docker/php/Dockerfile'
        args '-v /net/jenkins-ex-work/workspace:/net/jenkins-ex-work/workspace'
        additionalBuildArgs '--build-arg jenkins_usr=${JENKINS_CREDENTIALS_USR} --build-arg jenkins_credentials=${JENKINS_CREDENTIALS} --build-arg test_arg=test'
    }
}
// Dockerfile
ARG jenkins_usr
ARG jenkins_credentials
ARG test_arg
But the args are empty.
TL;DR
Use Jenkins withCredentials([sshUserPrivateKey()]) and echo the private key into id_rsa in the container.
EDITED: Removed the "run as root" step, as I think this caused issues. Instead a jenkins user is created inside the docker container with the same UID as the jenkins user that builds the docker container (no idea if that matters, but we need a user with a home dir so we can create ~/.ssh/id_rsa)
For those that suffered like me... My solution is below. It is NOT ideal as:
it risks exposing your private key in the build logs if you are not careful (the below is careful, but it's easy to forget). (Although with that in mind, it appears extracting Jenkins credentials is extremely easy for anyone with naughty intentions?)
So use with caution...
In my (legacy) git project, a simple php app with internal git based composer dependencies, I have
Dockerfile.build
FROM php:7.4-alpine
# install git, openssh, composer... whatever u need here, then:
# create a jenkins user inside the docker image
ARG UID=1001
RUN adduser -D -g jenkins -s /bin/sh -u $UID jenkins \
&& mkdir -p /home/jenkins/.ssh \
&& touch /home/jenkins/.ssh/id_rsa \
&& chmod 600 /home/jenkins/.ssh/id_rsa \
&& chown -R jenkins:jenkins /home/jenkins/.ssh
USER jenkins
# I think only ONE of the below is needed, not sure.
RUN echo "Host bitbucket.org\n\tStrictHostKeyChecking no\n" >> /home/jenkins/.ssh/config \
&& ssh-keyscan bitbucket.org >> /home/jenkins/.ssh/known_hosts
Then in my Jenkinsfile:
def sshKey = ''
pipeline {
    agent any
    environment {
        userId = sh(script: "id -u ${USER}", returnStdout: true).trim()
    }
    stages {
        stage('Prep') {
            steps {
                script {
                    withCredentials([
                        sshUserPrivateKey(
                            credentialsId: 'bitbucket-key',
                            keyFileVariable: 'keyFile',
                            passphraseVariable: 'passphrase',
                            usernameVariable: 'username'
                        )
                    ]) {
                        sshKey = readFile(keyFile).trim()
                    }
                }
            }
        }
        stage('Build') {
            agent {
                dockerfile {
                    filename 'Dockerfile.build'
                    additionalBuildArgs "--build-arg UID=${userId}"
                }
            }
            steps {
                // Turn off command trace for next line, as we don't want to log the ssh key
                sh '#!/bin/sh -e\n' + "echo '${sshKey}' > /home/jenkins/.ssh/id_rsa"
                // .. proceed with whatever else, like composer install, etc
To be fair, I think some of the RUN commands in the docker container aren't even necessary, or could be run from the Jenkinsfile? ¯\_(ツ)_/¯
There was a similar issue, supposedly fixed in PR 327, with pipeline-model-definition-1.3.9.
So start by checking the version of your plugin.
But heed also the Dockerfile warning:
It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc.
Build-time variable values are visible to any user of the image with the docker history command.
Using buildkit with --secret is a better approach for that.
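To illustrate that last point, a rough BuildKit sketch (the composer_auth id and the auth.json path are made-up names for this example, and it assumes Composer is already installed in the image):
# syntax=docker/dockerfile:1
FROM php:7.4-alpine
# the secret is only mounted for this RUN step; it never ends up in a layer
# or in the docker history output
RUN --mount=type=secret,id=composer_auth,target=/root/.composer/auth.json \
    composer install --no-dev
built with:
DOCKER_BUILDKIT=1 docker build --secret id=composer_auth,src=auth.json .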

How to fetch PCF credentials configured in Jenkins variable?

Jenkins is configured to deploy a PCF application. The PCF login credentials are configured in Jenkins as variables. Is there any way to fetch the PCF login credential details from the Jenkins variables?
echo "Pushing PCF App"
cf login -a https://api.pcf.org.com -u $cduser -p $cdpass -o ORG -s ORG_Dev
cf push pcf-app-04v2_$BUILD_NUMBER -b java_buildpack -n pcf-app-04v2_$BUILD_NUMBER -f manifest-prod.yml -p build/libs/*.jar
cf map-route pcf-app-04v2_$BUILD_NUMBER apps-pr03.cf.org.com --hostname pcf-app
cf delete -f pcf-app
cf rename pcf-app-04v2_$BUILD_NUMBER pcf-app
cf delete-orphaned-routes -f
Rather than accessing Jenkins credentials from outside to run your app manually, you can define a simple pipeline in Jenkins with a custom script in it to perform these tasks. In the script you can access the credentials through the credentials() function and use the resulting environment variables in your commands.
E.g.
environment {
    CF_USER = credentials('YOUR JENKINS CREDENTIAL KEY')
    CF_PWD = credentials('CF_PWD')
}
def deploy() {
    script {
        sh '''#!/bin/bash
        set -x
        cf login -a https://api.pcf.org.com -u ${CF_USER} -p ${CF_PWD} -o ORG -s ORG_Dev
        pwd
        '''
    }
}
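Putting it together, a minimal sketch of a Declarative Pipeline using a single username/password credential (the credential id pcf-credentials is an assumption; Jenkins derives the _USR and _PSW variables automatically):
pipeline {
    agent any
    environment {
        CF_CREDS = credentials('pcf-credentials')
    }
    stages {
        stage('Deploy') {
            steps {
                sh '''#!/bin/bash
                cf login -a https://api.pcf.org.com -u ${CF_CREDS_USR} -p ${CF_CREDS_PSW} -o ORG -s ORG_Dev
                cf push pcf-app -f manifest-prod.yml -p build/libs/*.jar
                '''
            }
        }
    }
}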

Flyway with Jenkins - Unable to resolve location

I am trying to integrate DB migrations with Flyway into the CI/CD pipeline by running a shell command in one of the stages (since I am not allowed to add any new plugins to the pipeline, I can't use the Flyway plugin).
I have tried it like:
stage('migrate-sql') {
steps {
sh """
docker run --rm \
-v /GetShorty/Apis/Sql:/flyway/sql \
boxfuse/flyway:5.2.4 \
-url=jdbc:postgresql://****:5432/**** \
-user=**** \
-password=**** \
-baselineOnMigrate=false \
-locations=/GetShorty/Apis/Sql \
-connectRetries=60 \
migrate
"""
}
}
But no migrations are applied, since it doesn't seem to find the migrations folder:
WARNING: Unable to resolve location /GetShorty/Apis/Sql
Successfully validated 0 migrations (execution time 00:00.378s)
Current version of schema "public": << Empty Schema >>
Schema "public" is up to date. No migration necessary.
Considering the following project structure:
Any idea what might be going wrong here?
The docker volume setting is mounting the /GetShorty/Apis/Sql directory on the host to the /flyway/sql directory inside the container:
-v /GetShorty/Apis/Sql:/flyway/sql
Flyway is running inside the container so the locations flag needs to be the directory inside:
-locations=/flyway/sql
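So the docker run from the question would become something like this (a sketch; only the -locations flag changes, and the filesystem: prefix tells Flyway to read migrations from a directory rather than the classpath):
docker run --rm \
-v /GetShorty/Apis/Sql:/flyway/sql \
boxfuse/flyway:5.2.4 \
-url=jdbc:postgresql://****:5432/**** \
-user=**** \
-password=**** \
-baselineOnMigrate=false \
-locations=filesystem:/flyway/sql \
-connectRetries=60 \
migrate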
Stumbling on this issue I realized I was facing a similar problem.
The way I managed to solve it was to create a separate folder in the project root called flyway that contains a sql folder with all the migrations and the following Dockerfile:
FROM boxfuse/flyway:5.2.4
COPY ./sql ./sql
Going back to the Jenkinsfile I added a new step to build the docker image:
DOCKER_IMAGE_FLYWAY = "flyway"
stages {
    stage('build docker images') {
        steps {
            script {
                dockerImage_flyway = docker.build("$DOCKER_REGISTRY/${DOCKER_PROJECT}/${DOCKER_IMAGE_FLYWAY}:${env.BUILD_NUMBER}", "flyway")
            }
        }
    }
and modified the migrations stage to use that image:
stage('migrate-sql') {
steps {
sh """
docker run --rm \
"$DOCKER_REGISTRY/${DOCKER_PROJECT}/${DOCKER_IMAGE_FLYWAY}:${env.BUILD_NUMBER}" \
-url=jdbc:postgresql://****:5432/**** \
-user=**** \
-password=**** \
-baselineOnMigrate=false \
-schemas=**** \
-connectRetries=60 \
migrate
"""
}
}
Now it works like a charm.

Docker: share private key via arguments

I want to share my GitHub private key with my docker container.
I'm thinking about sharing it via ARGs in docker-compose.yml.
Is it possible to share a private key using ARG as described here?
Pass a variable to a Dockerfile from a docker-compose.yml file
# docker-compose.yml file
version: '2'
services:
  my_service:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
      args:
        - PRIVATE_KEY=MULTI-LINE PLAIN TEXT RSA PRIVATE KEY
and then I expect to use it in my Dockerfile as:
ARG PRIVATE_KEY
RUN echo $PRIVATE_KEY >> ~/.ssh/id_rsa
RUN pip install git+ssh://git@github.com/...
Is it possible via ARGs?
If you can use the latest docker 1.13 (or 17.03 ce), you could then use the docker swarm secret: see "Managing Secrets In Docker Swarm Clusters"
That allows you to associate a secret with a container you are launching:
docker service create --name test \
--secret my_secret \
--restart-condition none \
alpine cat /run/secrets/my_secret
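If you deploy with a stack/compose file (compose format 3.1 or later) instead of docker service create, the secret can be declared there as well; a rough sketch (my_secret.txt is a placeholder file name):
version: '3.1'
services:
  my_service:
    image: alpine
    command: cat /run/secrets/my_secret
    secrets:
      - my_secret
secrets:
  my_secret:
    file: ./my_secret.txt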
If docker swarm is not an option in your case, you can try and set up a docker credential helper.
See "Getting rid of Docker plain text credentials". But that might not apply to a private ssh key.
You can check other relevant options in "Secrets and LIE-abilities: The State of Modern Secret Management (2017)", using a standalone secret manager like HashiCorp Vault.
Although the ARG itself will not persist in the built image, when you reference the ARG variable somewhere in the Dockerfile, that will be in the history:
FROM busybox
ARG SECRET
RUN set -uex; \
echo "$SECRET" > /root/.ssh/id_rsa; \
do_deploy_work; \
rm /root/.ssh/id_rsa
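You can check that yourself after a build; the build-arg value is recorded alongside the RUN lines that use it (the image name below is a placeholder):
# shows each layer's command, including the SECRET build-arg value
docker history --no-trunc my-image:latest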
As VonC notes, there's now a swarm feature to store and manage secrets, but that doesn't (yet) solve the build-time problem.
Builds
Coming in Docker ~ 1.14 (or whatever the equivalent new release name is) should be the --build-secret flag (also #28079) that lets you mount a secret file during a build.
In the meantime, one of the solutions is to run a network service somewhere from which you can pull secrets with a client during the build. Then, if the build puts the secret in a file like ~/.ssh/id_rsa, the file must be deleted before the RUN step that created it completes.
The simplest solution I've seen is serving a file with nc:
docker network create build
docker run --name=secret \
--net=build \
--detach \
-v ~/.ssh/id_rsa:/id_rsa \
busybox \
sh -c 'nc -lp 8000 < /id_rsa'
docker build --network=build .
Then collect the secret, store it, use it and remove it in the Dockerfile RUN step.
FROM busybox
RUN set -uex; \
nc secret 8000 > /id_rsa; \
cat /id_rsa; \
rm /id_rsa
Projects
There's a number of utilities that have this same premise, but in various levels of complexity/features. Some are generic solutions like Hashicorps Vault.
Dockito Vault
Hashicorp Vault
docker-ssh-exec
