Passing credentials through Jenkinsfile -> Docker-compose -> Docker - jenkins

I need to shove credentials into a docker container via environment variables. This container is launched from a docker-compose file which is maintained by Jenkins with CI/CD. I have the credentials stored in Jenkins as separate secret text entries under the global domain. To the best of my knowledge, I can shim those credentials through via the environment block in my build stage using the credentials function. My attempt at the Jenkinsfile is shown below:
#!groovy
pipeline {
    agent any
    stages {
        stage("build") {
            environment {
                DISCORD_TOKEN = credentials('DICEBOT_DISCORD_TOKEN')
                APPLICATION_ID = credentials('DICEBOT_APPLICATION_ID')
            }
            steps {
                sh 'docker-compose up --build --detach'
            }
        }
    }
}
Not much is printed for the error. All I get is this simple error: ERROR: DICEBOT_APPLICATION_ID. That is it. So is the scope of where I stored the secret text incorrect? I was reading that since the domain is global, anything should be able to access it, so maybe my idea of domain scoping is wrong? Or is the location of the environment block in the Jenkinsfile wrong?
I am not sure. The error is very bland and does not really describe what it doesn't like about DICEBOT_APPLICATION_ID.
To make matters worse, fixing this issue doesn't really even solve the main issue at hand: getting the docker container to hold these credentials. What I am currently dealing with only scopes the environment variables to the docker-compose invocation, and probably will not shim the environment variables into the container I need them in.
For the second part, getting docker-compose to pass on the credentials to the container, I think the snippet below might do the trick?
version: "3"
services:
  dicebot:
    environment:
      DISCORD_TOKEN: ${DISCORD_TOKEN}
      APPLICATION_ID: ${APPLICATION_ID}
    build:
      context: .
      dockerfile: Dockerfile

Solution
The environment block is in the right location if you only intend to use those variables within that stage. Your docker-compose.yml file looks fine, but you aren't passing the environment variables as build arguments to the docker-compose command. Note that --build-arg values only reach the image build if the Dockerfile declares matching ARG instructions; the environment: block in your compose file only sets variables on the running container. Please see my modifications below:
steps {
    sh "docker-compose build --build-arg DISCORD_TOKEN='$DISCORD_TOKEN' --build-arg APPLICATION_ID='$APPLICATION_ID'"
    sh "docker-compose up -d"
}
Note that docker-compose build accepts neither --detach nor --verbose; --detach belongs to docker-compose up (as -d above).
I'm assuming the docker-compose.yml is in the same repository as your Jenkinsfile. Be cognizant of the scope of your credentials.
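For the build arguments to actually take effect, the Dockerfile would need matching ARG declarations. A minimal sketch (the base image and the decision to persist the values with ENV are assumptions, not part of the original question):

```dockerfile
# Hypothetical Dockerfile sketch: declare the build args so the
# --build-arg values are visible during the build.
FROM python:3-slim

ARG DISCORD_TOKEN
ARG APPLICATION_ID

# Optionally bake them into the runtime environment. Caveat: ENV values
# are stored in the image and visible via `docker history` / `docker inspect`,
# so for real secrets the compose `environment:` block is preferable.
ENV DISCORD_TOKEN=${DISCORD_TOKEN}
ENV APPLICATION_ID=${APPLICATION_ID}
```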

Related

How do environment variables work when a Docker container builds itself?

I'm trying to update an environment variable for a Docker image
The image builds like this in Jenkins:
insideNode { // executed inside the container docker-compose is building
    withEnv(['COMPOSE_OPTIONS=-e http_proxy -e https_proxy -e no_proxy']) {
        sh """
            export http_proxy=${env.http_proxy}
            export https_proxy=${env.https_proxy}
            export no_proxy=${env.no_proxy}
            VERSION=${project.version} docker-compose --file docker-compose.yml node-container
        """
    }
}
The insideNode closure is executed inside the Docker container I'm building - so it uses itself to build a new version of itself. I'm trying to update those proxy variables, which you can see are drawn from the environment (the Jenkins container originally, but now its own container).
If I update the environment variables in the overall Jenkins environment, do they flow down to the container build here? I would assume the closure uses the container's own environment, which means you can't really update them: the build would need the updated values in order to produce a container that has them, so it becomes a cyclical dependency. In this scenario, how would I overwrite those env vars?

How to run a container without a shell in GitLab CI job

I want to run conform as part of my pipeline to check commit messages, but the container image lacks a shell and has entrypoint /conform and argument enforce. My .gitlab-ci.yml would ideally look like:
conform:
  image: docker.io/autonomy/conform:latest
without a script section, but as far as I know this is not allowed in GitLab.
Edit
There is a GitLab issue open on this.
You can always install conform as part of your CI:
conformJob:
  image: golang
  script:
    - go get github.com/talos-systems/conform
    - conform enforce

Inject AWS Codebuild Environment Variables into Dockerfile

Is there a way to pass AWS Codebuild environment variables into a Dockerfile?
I'd like to be able to pull from ECR like this:
FROM $My_AWS_ACCOUNT.dkr.ecr.us-east-1.amazonaws.com/someimage:latest
Where $My_AWS_ACCOUNT references an environment variables within my codebuild project.
Yes, you can use FROM ${My_AWS_ACCOUNT}.xxx. My_AWS_ACCOUNT should be passed as an argument to the docker build.
This is how I would do it:
ARG My_AWS_ACCOUNT=SOME_DEFAULT_IMAGE
FROM ${My_AWS_ACCOUNT}.xxx
When you build (note the trailing build context, here the current directory):
docker build --build-arg My_AWS_ACCOUNT=${My_AWS_ACCOUNT} .
Yet another amazingly annoying thing in Docker that doesn't actually need to be this difficult but for some reason is supremely complicated and/or non-intuitive.
command line:
docker build --build-arg My_AWS_ACCOUNT=${My_AWS_ACCOUNT} .
Dockerfile:
ARG My_AWS_ACCOUNT
FROM ${My_AWS_ACCOUNT}.dkr.ecr.us-east-1.amazonaws.com/someimage:latest
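One gotcha worth spelling out (a general Dockerfile rule, not specific to the answers above): an ARG declared before FROM is only in scope for the FROM line itself. To use the value again later in the build stage, it has to be re-declared after FROM:

```dockerfile
# ARG before FROM is consumed only by the FROM instruction.
ARG My_AWS_ACCOUNT
FROM ${My_AWS_ACCOUNT}.dkr.ecr.us-east-1.amazonaws.com/someimage:latest

# Re-declare (without a value) to bring the build arg back into scope
# for the rest of this build stage.
ARG My_AWS_ACCOUNT
RUN echo "building for account ${My_AWS_ACCOUNT}"
```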

Automate Flyway migration with Docker and Jenkins

I'd like to automate the Flyway migrations for our MariaDB database. For testing purposes I added the following service to my docker-compose.yml running only the info command.
flyway:
  image: boxfuse/flyway:5.2.4
  command: -url=jdbc:mariadb://mariadb_service -schemas=$${MYSQL_DATABASE} -table=schema_version -connectRetries=60 info
  volumes:
    - ./db/migration:/flyway/sql
  depends_on:
    - mariadb_service
This seems to be working, i.e. I can see the output of info.
Now I'd like to take this idea one step further and integrate this into our Jenkins build pipeline. This is where I get stuck.
If I deployed the Docker stack with the above docker-compose.yml in my Jenkinsfile, would the corresponding stage fail upon errors during the migration? That is, would Jenkins notice the error?
If not, how can I integrate the Flyway migration into my Jenkins pipeline? I found that there is a Flyway Runner plugin, but I couldn't see whether it can connect to a database in a Docker stack deployed by the Jenkinsfile.
You can use Jenkins built-in support for Docker. Then your pipeline script may contain the stage
stage('Apply DB changes') {
    agent {
        docker {
            image 'boxfuse/flyway:5.2.4'
            args '-v ./db/migration:/flyway/sql --entrypoint=\'\''
        }
    }
    steps {
        sh "/flyway/flyway -url=jdbc:mariadb://mariadb_service -schemas=\${MYSQL_DATABASE} -table=schema_version -connectRetries=60 info"
    }
}
Note the escaped \${MYSQL_DATABASE}: in a double-quoted Groovy string an unescaped ${...} would be interpolated by Groovy itself, not by the shell.
This way the steps will be executed within a temporary Docker container created by the Jenkins agent from the boxfuse/flyway image. If the command fails, the entire stage will fail as well.
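If you would rather keep the docker-compose approach from the question, docker-compose can also propagate a service's exit status with the --exit-code-from flag (which implies --abort-on-container-exit), so a failed migration fails the sh step and with it the stage. A sketch, using the service name from the compose file above:

```groovy
stage('Run Flyway migration') {
    steps {
        // docker-compose exits with the flyway service's exit code,
        // so a non-zero Flyway result fails this step and the stage.
        sh 'docker-compose up --exit-code-from flyway flyway'
    }
}
```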

Passing environmental variables when deploying docker to remote host

I am having some trouble with my docker containers and environment variables.
Currently I have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin
  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now when I deploy this manually (via SSH command line directly) on server A, it will take the environment variables from server A and put them in my container, so I have the values of ENVA and ENVB from the host in my container. I use the following command (after building the image, of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a docker container on server B.
I can deploy the service via Jenkins (Job-DSL, setting DOCKER_HOST="tcp://serverA:2375" temporarily). So it will run all docker (compose) commands on server A from the Jenkins container on server B. The service is up and running, except that it doesn't have values for ENVA and ENVB.
Jenkins runs the following with the Job-DSL groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on server B itself, but neither worked. Only when I deploy manually directly on server A does it work.
When I use docker inspect on the running container, I get the following output for the env block:
"Env": [
    "PROFILE=acc",
    "affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
    "TZ=Europe/Berlin",
    "SERVICE_NAME=some-service-acc",
    "ENVA",
    "ENVB",
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "LANG=C.UTF-8",
    "JAVA_VERSION=8",
    "JAVA_UPDATE=121",
    "JAVA_BUILD=13",
    "JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
    "JAVA_HOME=/usr/lib/jvm/default-jvm",
    "JAVA_OPTS="
]
Where do I need to set the environment variables so that they will be passed to the container? I prefer to store the variables on server A. But if this is not possible, can someone explain to me how it could be done? It is not an option to hardcode the values in the compose file or anywhere else in the source, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should be.
Thanks!
You need to set the environment variables in the shell that is running the docker-compose command line. In Jenkins, that's best done inside your groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
    sh "docker-compose pull some-service-acc"
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}
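If the values should come from Jenkins credentials rather than literals, the same pattern combines with withCredentials from the Credentials Binding plugin. A sketch (the credential IDs here are hypothetical):

```groovy
withCredentials([string(credentialsId: 'enva-secret', variable: 'ENVA'),
                 string(credentialsId: 'envb-secret', variable: 'ENVB')]) {
    withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
        // ENVA/ENVB are now set in the shell environment of the sh steps,
        // so docker-compose picks them up without hardcoded values,
        // and Jenkins masks them in the build log.
        sh "docker-compose pull some-service-acc"
        sh "docker-compose -p some-service-acc up -d some-service-acc"
    }
}
```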
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that would allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
The better option IMO is to use a secrets management tool inside of docker. Hashicorp has their Vault product which implements an encrypted K/V store where values are accessed with a time limited token and offers the ability to generate new passwords per request with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure this countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (it's a rest protocol that you can add to your entrypoint).
The latest option from Docker itself is secrets management that requires the new Swarm Mode. You save your secret in the swarm and add it to the containers you want as a file using an entry in the docker-compose.yml version 3 format. If you already use Swarm Mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement.
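A sketch of that last option (service and secret names are placeholders, and Swarm Mode is assumed to be enabled): the secret is created once on the swarm, referenced in a version 3 compose file, and surfaces inside the container as a file under /run/secrets/:

```yaml
# docker-compose.yml (version 3 format, deployed with `docker stack deploy`)
# Hypothetical names; assumes e.g. `docker secret create enva -` was run first.
version: "3.1"
services:
  some-service-acc:
    image: image/replacedvalues
    secrets:
      - enva
      - envb
secrets:
  enva:
    external: true
  envb:
    external: true
```

The application then reads /run/secrets/enva and /run/secrets/envb instead of environment variables.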
