Automate Flyway migration with Docker and Jenkins

I'd like to automate the Flyway migrations for our MariaDB database. For testing purposes I added the following service to my docker-compose.yml running only the info command.
flyway:
  image: boxfuse/flyway:5.2.4
  command: -url=jdbc:mariadb://mariadb_service -schemas=$${MYSQL_DATABASE} -table=schema_version -connectRetries=60 info
  volumes:
    - ./db/migration:/flyway/sql
  depends_on:
    - mariadb_service
This seems to be working, i.e. I can see the output of info.
Now I'd like to take this idea one step further and integrate this into our Jenkins build pipeline. This is where I get stuck.
If I deployed the Docker stack with the above docker-compose.yml in my Jenkinsfile, would the corresponding stage fail upon errors during the migration? In other words, would Jenkins notice that error?
If not, how can I integrate the Flyway migration into my Jenkins pipeline? I found that there is a Flyway Runner plugin, but I couldn't tell whether it can connect to a database in a Docker stack deployed by the Jenkinsfile.

You can use Jenkins built-in support for Docker. Then your pipeline script may contain the stage
stage('Apply DB changes') {
    agent {
        docker {
            image 'boxfuse/flyway:5.2.4'
            args '-v ./db/migration:/flyway/sql --entrypoint=\'\''
        }
    }
    steps {
        sh "/flyway/flyway -url=jdbc:mariadb://mariadb_service -schemas=${MYSQL_DATABASE} -table=schema_version -connectRetries=60 info"
    }
}
This way the steps will be executed within a temporary Docker container that the Jenkins agent creates from the boxfuse/flyway image. If the command fails, the entire stage fails as well.
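Alternatively, if you want to keep the docker-compose.yml from the question as the single source of truth, a one-off docker-compose run should give the same failure behaviour; this is only a sketch under that assumption, not a tested setup. docker-compose run propagates the container's exit code to the sh step, and a non-zero exit fails the stage:

stage('Apply DB changes') {
    steps {
        // One-off run of the compose service. 'run' overrides the service's
        // command, so the Flyway arguments are repeated here with 'migrate'.
        sh "docker-compose run --rm flyway -url=jdbc:mariadb://mariadb_service -schemas=${MYSQL_DATABASE} -table=schema_version -connectRetries=60 migrate"
    }
}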

Related

BitBucket Pipeline log has no output when using custom image

I'm attempting to POC BitBucket Pipelines for some terraform work. I've got a self-hosted runner, running locally in my Docker environment, which is registered to my repository. This was set up following the generic instructions in the BitBucket UI.
My bitbucket-pipelines.yml file looks like this:
pipelines:
  branches:
    master:
      - step:
          runs-on: self.hosted
          image: hashicorp/terraform:latest
          name: 'Terraform Version'
          script:
            - terraform -v
Extremely basic, just run a terraform -v command on the hashicorp/terraform image.
The pipeline succeeds, and I can see the image is pulled, however there is absolutely no output in BitBucket from the container. All I see in the step log is:
Runner matching labels:
- linux
- self.hosted
Runner name: my-runner
Runner labels: self.hosted, linux
Runner version:
  current: 1.252
  latest: 1.252
Images used:
  build: hashicorp/terraform@sha256:984ac701744995019b1309b542de03535a63097444e72b8f248d0a0d95520443
Even a simple echo "string" script does not get to the logs as output. I find that really strange, and I must be missing something fundamental. I've scoured the docs and can't find anything.
Does anyone know how to get the output from a custom image into the Bitbucket logs?
Do you use Docker Desktop on Windows?
You won't see any logs from containers if you use Docker Desktop (tested on 4.3.2) on Windows with WSL integration. That's because the container logs live in another location that isn't available to the Bitbucket runner container.
-- Update --
There's now a feature request to add full WSL compatibility for local runners. Please vote for it if you need it too.
https://jira.atlassian.com/browse/BCLOUD-21611
I had a similar issue where I was getting no logs in my Pipeline output UI, though the ultimate status was reflected correctly (i.e. pass or fail).
I was using the command provided by Bitbucket to create a Linux Docker runner, and I noticed it contains this volume definition:
... -v /var/lib/docker/containers:/var/lib/docker/containers:ro ...
However, I am using a custom data-root for docker (see this blog for details), so the path /var/lib/docker/containers doesn't exist on my host machine. So, I modified this volume to point at my data-root setting, and then the logs showed up as expected.
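For illustration, the change amounts to swapping the host side of that volume in the docker run command that starts the runner; /srv/docker below is a hypothetical data-root, so substitute whatever your daemon's data-root setting actually is:

# volume from the Bitbucket-provided docker run command
-v /var/lib/docker/containers:/var/lib/docker/containers:ro
# adjusted for a custom data-root (host path here is hypothetical)
-v /srv/docker/containers:/var/lib/docker/containers:ro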

Passing credentials through Jenkinsfile -> Docker-compose -> Docker

I need to shove credentials into a docker container via environment variables. This container is launched from a docker-compose file which is being maintained by Jenkins with CI/CD. I have the credentials stored in Jenkins as separate secret text entries under the global domain. To the best of my knowledge, I can shim those credentials through via the environment block in my build stage using the credentials function. My attempt at the Jenkinsfile is shown below:
#!groovy
pipeline {
    agent any
    stages {
        stage("build") {
            environment {
                DISCORD_TOKEN = credentials('DICEBOT_DISCORD_TOKEN')
                APPLICATION_ID = credentials('DICEBOT_APPLICATION_ID')
            }
            steps {
                sh 'docker-compose up --build --detach'
            }
        }
    }
}
Not much is printed for the error. All I get is this simple error: ERROR: DICEBOT_APPLICATION_ID. That is it. So is the scope of where I stored the secret text incorrect? I was reading that since the domain is global, anything should be able to access it. So maybe my idea for domain scoping is wrong? Is the location of environment in the Jenkinsfile wrong?
I am not sure. The error is very bland and does not really describe what it doesn't like about DICEBOT_APPLICATION_ID.
To make matters worse, fixing this issue doesn't really even solve the main issue at hand: getting the docker container to hold these credentials. What I am currently dealing with only scopes the environment variables to the docker-compose invocation and probably will not shim them into the container I need them in.
For the second part, getting docker-compose to pass on the credentials to the container, I think the snippet below might do the trick?
version: "3"
services:
  dicebot:
    environment:
      DISCORD_TOKEN: ${DISCORD_TOKEN}
      APPLICATION_ID: ${APPLICATION_ID}
    build:
      context: .
      dockerfile: Dockerfile
Solution
The environment block is in the right location if you only intend to use those variables within that stage. Your docker-compose.yml file looks fine, but you aren't passing the environment variables as build arguments to the docker-compose command. Please see my modifications below:
steps {
    sh "docker-compose build --build-arg DISCORD_TOKEN='$DISCORD_TOKEN' --build-arg APPLICATION_ID='$APPLICATION_ID'"
    sh "docker-compose up -d"
}
I'm assuming the docker-compose.yml is in the same repository as your Jenkinsfile. Be cognizant of the scope of your credentials.
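One caveat worth double-checking: --build-arg values only reach the image if the Dockerfile declares matching ARGs. A sketch of the relevant excerpt, hypothetical since the Dockerfile isn't shown:

# Declare the build arguments so docker build can receive them.
ARG DISCORD_TOKEN
ARG APPLICATION_ID
# Expose them to the running container. Note this persists the values
# in the image layers, so treat the resulting image as sensitive.
ENV DISCORD_TOKEN=${DISCORD_TOKEN}
ENV APPLICATION_ID=${APPLICATION_ID}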

Building and testing component with docker and jmeter during gitlab CI/CD pipeline

I'm currently working on a project where I have a component in GitLab that I run through a CI/CD pipeline whenever there is a change to it. This runs a few JUnit tests, etc.
I want to be able to mount the snapshot zip file in docker when the pipeline is finished and build the image so I can run a few jmeter tests. I need the component in order to do these tests.
My question is, would I be able to do this at the end of the pipeline?
I know you can run a few scripts in the project's .gitlab-ci.yml file; can I add my docker image and my jMeter CLI command to the yml file?
I want to build and mount the image in docker during the pipeline, run the jMeter tests, and then basically get a log of the results.
I don't know exactly how to accomplish what you're after, but you could create a job in your .gitlab-ci.yml to build the image and push it up with your component as an artifact. Then you could have another stage to pull down the image and run any jMeter tests against the container you want. Without seeing your current cfg, it's sorta hard to tell where you are progress-wise, but hopefully this helps a little. Good luck!
stages:
  ...
  - jmeter

test stage:
  image: your_pushed_image_with_component
  stage: jmeter
  script:
    - cli cmd to run jMeter against running container
  artifacts:
    expire_in: 1 week
    paths:
      - test_results
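If it helps, the build-and-push half could look roughly like the job below. This is a sketch assuming the GitLab container registry and a runner that supports Docker-in-Docker; the CI_REGISTRY* variables are predefined by GitLab:

build stage:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    # log in to the project registry with the predefined CI credentials
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # build the image with the component baked in, then push it
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"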

Some issue with docker image in yml file: unable to run my pipeline after adding a new runner in GitLab

I have configured my project to run on a pipeline
Here is my .gitlab-ci.yml file content:
image: markhobson/maven-chrome:latest

stages:
  - flow1

execute job1:
  stage: flow1
  tags:
    - QaFunctional
  script:
    - export
    - set +e -x
    - mvn --update-snapshots --define updateResults=${updateResults} clean test
Error after executing the pipeline:
bash: line 328: mvn: command not found
Running after_script
00:00
Uploading artifacts for failed job
00:00
ERROR: Job failed: exit status 127
Can anyone help me spot the error, please?
Is it not able to load the docker image?
When I use a shared runner I am able to execute the same.
The error you get means there is no Maven installed on the job executor: mvn: command not found.
The image you specified, markhobson/maven-chrome:latest, does have the mvn command:
# docker run markhobson/maven-chrome:latest mvn --version
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
The other thing you specified is tags:
...
tags:
  - QaFunctional
...
When both image and tags are specified in your YAML, tags select the runner that executes the job, and on a shell runner the image keyword is ignored.
It looks like your custom runner tagged with QaFunctional is a shell runner without mvn configured.
As a solution, either install mvn on the QaFunctional runner or run the job on a Docker runner (shared runners should do). To avoid such confusion, don't specify image when you want your job to run on a tagged shell runner.
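To confirm which executor the QaFunctional runner uses, check config.toml on the runner host. A sketch of the field that matters (runner names here are hypothetical):

# /etc/gitlab-runner/config.toml (illustrative)
[[runners]]
  name = "qa-functional"
  executor = "shell"          # jobs run on the host; image: is ignored

[[runners]]
  name = "shared-docker"
  executor = "docker"         # jobs run in containers; image: picks the container
  [runners.docker]
    image = "alpine:latest"   # fallback when a job specifies no image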

How to version docker images with build number in Jenkins to deploy as Kubernetes deployment?

Currently I am trying to add a version number or build number to the Docker image I deploy on a Kubernetes cluster. Previously I was working only with the :latest tag, but with the latest tag I ran into problems pulling the image from the Dockerhub registry. So now I want to use the build number in my Docker image, like <image-name>:{build-number}.
Application Structure
In my Kubernetes deployment, I am using a deployment and a service. I am defining my image repository in my deployment file like the following:
containers:
  - name: test-kube-deployment-container
    image: samplekubernetes020/testimage:latest
    ports:
      - name: http
        containerPort: 8085
        protocol: TCP
Here, instead of the latest tag, I want to put the build number on my image in the deployment YAML.
Can I use an environment variable to hold the random build number and reference it like <image-name>:${buildnumber}?
If I want an environment variable to provide that random number, how can I generate a random number into an environment variable?
Updates On Image Version Implementation
My modified Jenkinsfile contains the following step to assign the version number to the image, but I am still not getting the updated result after changes to the repository.
I created a step like the following in the Jenkinsfile:
stage ('imagebuild') {
    steps {
        sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes /var/lib/jenkins/workspace/jpipeline/pipeline'
        sh 'docker login --username=my-username --password=my-password'
        sh "docker tag spacestudymilletech010/spacestudykubernetes:latest spacestudymilletech010/spacestudykubernetes:${VERSION}"
        sh 'docker push spacestudymilletech010/spacestudykubernetes:latest'
    }
}
And my deployment YAML file contains the following:
containers:
  - name: test-kube-deployment-container
    image: spacestudymilletech010/spacestudykubernetes:latest
    ports:
      - name: http
        containerPort: 8085
        protocol: TCP
Confusions
NB: When I check the Dockerhub repository, it shows the latest push status every time.
So my confusions are:
Is there any problem with pulling the latest image in my deployment.yaml file?
Is the problem in how I am tagging the image on the machine where I build and push it?
The standard way, or at least the way that has worked for most of us, is to create versioned or tagged images. For example:
samplekubernetes020/testimage:1
samplekubernetes020/testimage:2
samplekubernetes020/testimage:3
...
...
Now I will try to answer your actual question: how do I update the image in my deployment when my image tag changes?
Enter Solution
When you compile and build a new image from the latest version of the code, tag it with an incrementing, unique version. This tag can be anything unique, such as a build number.
Then push this tagged image to the docker registry.
Once the image is uploaded, you can use kubectl or the Kubernetes API to update the deployment with the latest container image:
kubectl set image deployment/my-deployment test-kube-deployment-container=samplekubernetes020/testimage:1 --record
The above set of steps generally takes place in your CI pipeline, where you store the image version (the image:version string) in an environment variable.
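As a rough shell sketch of that cycle, reusing the names from the question (VERSION is a placeholder that the CI system would supply, e.g. the Jenkins build number):

# build and push an image tagged with the unique version
VERSION=42    # placeholder; supplied by the CI system in practice
docker build -t samplekubernetes020/testimage:$VERSION .
docker push samplekubernetes020/testimage:$VERSION
# point the running deployment at the new tag
kubectl set image deployment/my-deployment test-kube-deployment-container=samplekubernetes020/testimage:$VERSION --record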
Update after comment
Since you are using Jenkins, you can get the current build number, commit id, and many other variables in the Jenkinsfile itself, as Jenkins injects these values at build time. This works for me; just a reference.
environment {
    NAME = "myapp"
    VERSION = "${env.BUILD_ID}-${env.GIT_COMMIT}"
    IMAGE = "${NAME}:${VERSION}"
}
stages {
    stage('Build') {
        steps {
            echo "Running ${VERSION} on ${env.JENKINS_URL}"
            git branch: "${BRANCH}", .....
            echo "for branch ${env.BRANCH_NAME}"
            sh "docker build -t ${NAME} ."
            sh "docker tag ${NAME}:latest ${IMAGE_REPO}/${NAME}:${VERSION}"
        }
    }
}
This Jenkins pipeline approach worked for me.
I use the Jenkins build number as the tag for the docker image and push it to Docker Hub, then apply the yaml file to the k8s cluster and update the image in the deployment with the same tag.
A sample pipeline script snippet is here:
stage('Build Docker Image') {
    sh 'docker build -t {dockerId}/{projectName}:${BUILD_NUMBER} .'
}
stage('Push Docker Image') {
    withCredentials([string(credentialsId: 'DOCKER_HUB_PASSWORD', variable: 'DOCKER_HUB_PASSWORD')]) {
        sh "docker login -u {dockerId} -p ${DOCKER_HUB_PASSWORD}"
    }
    sh 'docker push {dockerId}/{projectName}:${BUILD_NUMBER}'
}
stage("Deploy To Kubernetes Cluster") {
    sh 'kubectl apply -f {yaml file name}.yaml'
    sh 'kubectl set image deployments/{deploymentName} {container name given in deployment yaml file}={dockerId}/{projectName}:${BUILD_NUMBER}'
}
