Jenkinsfile variable not getting set for helm - jenkins

I have a Jenkinsfile in which I invoke the helm install command via a shell script.
I pass a config variable called host_ingress:
steps {
    script {
        info "Starting Deploy"
        switch(kubeNamespace) {
            case "***":
                host_ingress = ".foo.com"
                ...
                break
            ...
        }
        sh script: """
        ./bin/helm.sh ${dryrun} \\
            -t '${appVersion}' \\
            -r '${helmReleaseName}' \\
            -f ${helmChart}/${valuesFile} \\
            --namespace '${kubeNamespace}' \\
            --set ...... \\
            --set host_ingress='${dml_ingress_host}' \\
            --set grafana..... \\
        """, label: "Deploy with Helm"
And in my ingress.yaml file I have:
spec:
  rules:
  - host: {{ $host_ingress }}
    http:
      paths:
It is being set correctly within Jenkins, but the Helm deploy fails with:
Error: parse error at (*****/templates/ingress.yaml:20): undefined variable "$host_ingress"
I've tried loads of options for what it could be and am having no luck.

To set environment variables in a Jenkinsfile you can use the examples below.
pipeline {
    agent any
    environment {
        host_ingress = '.foo.bar.com' // can be used in the whole pipeline
        ...
or, since the declarative environment {} directive is only valid at the pipeline or stage level (not inside a script block), set it dynamically via env:
steps {
    script {
        info "Starting Deploy"
        switch(kubeNamespace) {
            case "***":
                env.host_ingress = ".foo.com" // set dynamically; visible to subsequent steps
                break
        }
    }
}
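Separately, and possibly the real cause of the parse error: Helm exposes --set values to templates under .Values, whereas {{ $host_ingress }} refers to a template-local variable that was never declared with := anywhere, hence "undefined variable". (Note also that the sh step interpolates ${dml_ingress_host}, not the host_ingress variable set in the switch.) A minimal sketch of the template referencing the value the way Helm expects:
spec:
  rules:
  - host: {{ .Values.host_ingress }}
    http:
      paths: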

Related

Jenkins environmental variables on config yaml file

We have a Jenkins file with the env variables below. When we try to integrate docker-compose, the config yaml file is not getting the value of the Jenkins env variable; instead it takes the variable name literally. Can someone help with what we are doing wrong?
jenkins file:
stage('Run tests') {
    steps {
        withEnv([
            "PASSWORD=${env.TEST_DB_CREDS_PSW}",
            "USER=${env.TEST_DB_CREDS_USR}",
        ]) {
            sh '''#!/bin/bash
            {
                cd atests
                docker-compose down
                kill $$
Config.yaml:
dbconfig:
  dbuser: ${USER}
  dbpass: ${PASSWORD}
  dbname:
  dbdrivername:
  tablename:
Can't begin Tx with ocd store: ERROR 1045 (28000): Access denied for user '${USER}'@'ip' (using password: YES)"}
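One likely cause: nothing ever expands the ${USER}/${PASSWORD} placeholders, since plain YAML files are not processed by the shell. A hedged sketch of one common fix, assuming the gettext envsubst utility is available on the agent and treating the config as a template (the Config.yaml.tpl name is hypothetical):
withEnv([
    "PASSWORD=${env.TEST_DB_CREDS_PSW}",
    "USER=${env.TEST_DB_CREDS_USR}",
]) {
    sh '''#!/bin/bash
        cd atests
        # envsubst rewrites ${USER} and ${PASSWORD} with the exported values
        envsubst < Config.yaml.tpl > Config.yaml
        docker-compose up -d
    '''
}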

Jenkins Kubernetes Plugin doesn't execute entrypoint of Docker image

I'm fairly new to the Jenkins Kubernetes Plugin and Kubernetes in general - https://github.com/jenkinsci/kubernetes-plugin
I want to use the plugin for the E2E test setup inside my CI.
Inside my Jenkinsfile I have a podTemplate which looks and is used as follows:
def podTemplate = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: website
    image: ${WEBSITE_INTEGRATION_IMAGE_PATH}
    command:
    - cat
    tty: true
    ports:
    - containerPort: 3000
  - name: cypress
    resources:
      requests:
        memory: 2Gi
      limits:
        memory: 4Gi
    image: ${CYPRESS_IMAGE_PATH}
    command:
    - cat
    tty: true
"""
pipeline {
    agent {
        label 'docker'
    }
    stages {
        stage('Prepare') {
            steps {
                timeout(time: 15) {
                    script {
                        ci_machine = docker.build("${WEBSITE_IMAGE_PATH}")
                    }
                }
            }
        }
        stage('Build') {
            steps {
                timeout(time: 15) {
                    script {
                        ci_machine.inside("-u root") {
                            sh "yarn build"
                        }
                    }
                }
            }
            post {
                success {
                    timeout(time: 15) {
                        script {
                            docker.withRegistry("https://${REGISTRY}", REGISTRY_CREDENTIALS) {
                                integrationImage = docker.build("${WEBSITE_INTEGRATION_IMAGE_PATH}")
                                integrationImage.push()
                            }
                        }
                    }
                }
            }
        }
        stage('Browser Tests') {
            agent {
                kubernetes {
                    label "${KUBERNETES_LABEL}"
                    yaml podTemplate
                }
            }
            steps {
                timeout(time: 5, unit: 'MINUTES') {
                    container("website") {
                        sh "yarn start"
                    }
                    container("cypress") {
                        sh "yarn test:e2e"
                    }
                }
            }
        }
    }
}
In the Dockerfile that builds the image I added an ENTRYPOINT:
ENTRYPOINT ["bash", "./docker-entrypoint.sh"]
However, it seems that it's not executed by the Kubernetes plugin.
Am I missing something?
As per the Define a Command and Arguments for a Container docs:
The command and arguments that you define in the configuration file override the default command and arguments provided by the container image.
This table summarizes the field names used by Docker and Kubernetes:
| Docker field name | K8s field name |
|-------------------|----------------|
| ENTRYPOINT        | command        |
| CMD               | args           |
Defining a command implies ignoring your Dockerfile ENTRYPOINT:
When you override the default ENTRYPOINT and CMD, these rules apply:
- If you supply a command but no args for a Container, only the supplied command is used. The default ENTRYPOINT and the default CMD defined in the Docker image are ignored.
- If you supply only args for a Container, the default ENTRYPOINT defined in the Docker image is run with the args that you supplied.
So you need to replace command in your pod template with args, which preserves your Dockerfile ENTRYPOINT (the supplied args then act as the equivalent of a Dockerfile CMD).
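A hedged sketch of that change applied to the website container from the question (assuming docker-entrypoint.sh ends with the usual exec "$@", so the trailing cat still keeps the container alive for the plugin):
def podTemplate = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: website
    image: ${WEBSITE_INTEGRATION_IMAGE_PATH}
    # args instead of command: the image's ENTRYPOINT still runs
    # and receives "cat" as its argument
    args:
    - cat
    tty: true
    ports:
    - containerPort: 3000
"""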

How do I set up postgres database in Jenkins pipeline?

I am using Docker to simulate a Postgres database for my app. I was testing it with Cypress for some time and it works fine. I want to set up Jenkins for further testing, but I'm stuck.
On my machine, I would use the commands
docker create -e POSTGRES_DB=myDB -p 127.0.0.1:5432:5432 --name myDB postgres
docker start myDB
to create it. How can I simulate this in a Jenkins pipeline? I need the DB for the app to work.
I use a Dockerfile as my agent, and I have tried putting the ENV variables there, but it does not work. Docker is not installed in the pipeline.
The way I see it, the options are either:
- Create an image by using a
- Somehow install Docker inside the pipeline and use the same commands
- Maybe with master/slave nodes? I don't understand them well yet.
This might be a use case for the sidecar pattern, one of Jenkins Pipeline's advanced features.
For example (from the above site):
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
The above example uses the object exposed by withRun, which has the running container's ID available via the id property. Using the container's ID, the Pipeline can create a link by passing custom Docker arguments to the inside() method.
The best part is that the containers are automatically stopped and removed when the work is done.
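Adapted to the Postgres setup from the question, a minimal sketch might look like the following (pg_isready ships with the postgres image; POSTGRES_HOST_AUTH_METHOD=trust avoids needing a password and is an assumption suitable for CI only):
node {
    checkout scm
    // sidecar equivalent of: docker create -e POSTGRES_DB=myDB -p 127.0.0.1:5432:5432 --name myDB postgres
    docker.image('postgres').withRun('-e "POSTGRES_DB=myDB" -e "POSTGRES_HOST_AUTH_METHOD=trust"') { c ->
        docker.image('postgres').inside("--link ${c.id}:db") {
            // wait until postgres accepts connections on host `db`
            sh 'until pg_isready -h db; do sleep 1; done'
            sh 'echo "run tests that reach the DB at db:5432 here"'
        }
    }
}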
EDIT:
To use a docker network instead, you can do the following (there is an open Jira issue to support this OOTB). First, a helper function:
def withDockerNetwork(Closure inner) {
    def networkId = UUID.randomUUID().toString()
    sh "docker network create ${networkId}"
    try {
        inner.call(networkId)
    } finally {
        sh "docker network rm ${networkId}"
    }
}
Actual usage:
withDockerNetwork { n ->
    docker.image('sidecar').withRun("--network ${n} --name sidecar") { c ->
        docker.image('main').inside("--network ${n}") {
            // do something with host "sidecar"
        }
    }
}
For declarative pipelines:
pipeline {
    agent any
    environment {
        POSTGRES_HOST = 'localhost'
        POSTGRES_USER = 'myuser'
    }
    stages {
        stage('run!') {
            steps {
                script {
                    docker.image('postgres:9.6').withRun(
                        "-h ${env.POSTGRES_HOST} -e POSTGRES_USER=${env.POSTGRES_USER}"
                    ) { db ->
                        // You can use your own image here, but psql needs to be installed inside
                        docker.image('postgres:9.6').inside("--link ${db.id}:db") {
                            sh '''
                                psql --version
                                RETRIES=10
                                until psql -h ${POSTGRES_HOST} -U ${POSTGRES_USER} -c "select 1" > /dev/null 2>&1 || [ $RETRIES -eq 0 ]; do
                                    echo "Waiting for postgres server, $((RETRIES-=1)) remaining attempts..."
                                    sleep 1
                                done
                            '''
                            sh 'echo "your commands here"'
                        }
                    }
                }
            }
        }
    }
}
Related: Docker wait for postgresql to be running

Jenkinsfile: Curl logs from previous (still running) stage

I'm in the process of migrating from freestyle jobs chained into a pipeline to having the pipeline in a Jenkinsfile.
My current pipeline executes 2 jobs in parallel: one creates a tunnel to the database (with a randomly generated port), and the next job needs this port number, so I perform a curl command that reads the console of the create-db-tunnel job and extracts the port number. The create-db-tunnel job needs to keep running, because the follow-up job connects to the database and takes a DB dump. This is the curl command I run in the second job, which returns the randomly generated port number from the established DB tunnel:
Port=$(curl -u ${USERNAME}:${TOKEN} http://myjenkinsurl.com/job/create-db-tunnel/lastBuild/consoleText | grep Port | grep -Eo '[0-9]{3,5}')
I wonder if there is anything similar I can use in a Jenkinsfile? I currently have the 2 jobs triggered in parallel, but since create-db-tunnel is no longer a freestyle job, I'm not sure I can still get the port number. I can confirm that the console log for the db_tunnel stage has the port number in it, I'm just not sure how to query that console. Here is my Jenkinsfile:
pipeline {
    agent any
    environment {
        APTIBLE_LOGIN = credentials('aptible')
    }
    stages {
        stage('Setup') {
            parallel {
                // run db_tunnel and get_port in parallel
                stage('db_tunnel') {
                    steps {
                        sh """
                            export PATH=$PATH:/usr/local/bin
                            aptible login --email=$APTIBLE_LOGIN_USR --password=$APTIBLE_LOGIN_PSW
                            aptible db:tunnel postgres-prod & sleep 30s
                        """
                    }
                }
                stage('get_port') {
                    steps {
                        sh """
                            sleep 15s
                            # this will not work
                            Port=$(curl -u ${USERNAME}:${TOKEN} http://myjenkinsurl.com/job/db_tunnel/lastBuild/consoleText | grep Port | grep -Eo '[0-9]{3,5}')
                            echo "Port=$Port" > port.txt
                        """
                    }
                }
            }
        }
    }
}
Actually, I found a solution to my question. It was a very similar curl command I had to run, and I'm now getting the desired port number. Here is the Jenkinsfile if someone is interested:
pipeline {
    agent any
    environment {
        APTIBLE_LOGIN = credentials('aptible')
        JENKINS_TOKEN = credentials('jenkins')
    }
    stages {
        stage('Setup') {
            parallel {
                // run db_tunnel and get_port in parallel
                stage('db_tunnel') {
                    steps {
                        sh """
                            export PATH=$PATH:/usr/local/bin
                            aptible login --email=$APTIBLE_LOGIN_USR --password=$APTIBLE_LOGIN_PSW
                            aptible db:tunnel postgres-prod & sleep 30s
                        """
                    }
                }
                stage('get_port') {
                    steps {
                        sh """
                            sleep 20
                            Port=\$(curl -u $JENKINS_TOKEN_USR:$JENKINS_TOKEN_PSW http://myjenkinsurl.com/job/schema-archive-jenkinsfile/lastBuild/consoleText | grep Port | grep -Eo '[0-9]{3,5}')
                            echo "Port=\$Port" > port.txt
                        """
                    }
                }
            }
        }
    }
}
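The detail that makes the second version work: inside a Groovy triple-double-quoted string, $VAR is interpolated by Groovy before the shell ever runs, so substitutions meant for the shell must be escaped as \$. A tiny illustration (the variable names are made up):
def who = "groovy"
sh """
    WHO=shell
    echo ${who}     # expanded by Groovy -> prints: groovy
    echo \${WHO}    # escaped, expanded by the shell -> prints: shell
"""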

Jenkins Pipeline Kubernetes: Define pod yaml dynamically

I am trying to run a test Docker image in Kubernetes which will test my application. The application container and the test container have the same version, which is incremented if any of the tests or the application change. How can I define the pod YAML dynamically for the Kubernetes plugin, so that I can get the version in the first stage (which runs outside the Kubernetes cluster) and then update the pod YAML with the right version of the container?
APP_VERSION = ""
pod_yaml = """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: ci--my-app
spec:
  containers:
  - name: test-runner
    image: my.docker.registry/app-tester:${-> APP_VERSION}
    imagePullPolicy: Always
    command:
    - cat
    tty: true
"""
pipeline {
    agent none
    stages {
        stage('Build and Upload') {
            agent { node { label 'builder' } }
            steps {
                script {
                    APP_VERSION = sh(
                        script: "cat VERSION",
                        returnStdout: true
                    ).trim()
                }
            }
        }
        stage('Deploy and Test application') {
            agent {
                kubernetes {
                    label 'ci--data-visualizer-kb'
                    defaultContainer 'jnlp'
                    yaml pod_yaml
                }
            }
            steps {
                container('test-runner') {
                    sh "echo ${APP_VERSION}"
                    sh "ls -R /workspace"
                }
            }
        }
    }
}
The kubernetes block in the pipeline does not accept lazy evaluation of the string pod_yaml, which contains ${-> APP_VERSION}. Is there any workaround for this, or am I doing it totally wrong?
PS: I cannot use a scripted pipeline for other reasons, so I have to stick to the declarative pipeline.
It might be a bit odd, but if you're out of other options, you can use the jinja2 template engine and Python to dynamically generate the file you want.
Check it out; it's quite robust.
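A minimal sketch of that idea wired into the asker's first stage (the pod.yaml.j2 file name is hypothetical; assumes the jinja2-cli package, pip install jinja2-cli, is available on the builder node):
stage('Build and Upload') {
    agent { node { label 'builder' } }
    steps {
        // Render pod.yaml.j2 (the pod_yaml text with {{ app_version }} in
        // place of the lazy GString) using the value from the VERSION file.
        sh 'jinja2 pod.yaml.j2 -D app_version="$(cat VERSION)" > pod.yaml'
        script {
            pod_yaml = readFile('pod.yaml').trim() // now a plain, fully-resolved string
        }
    }
}
The later kubernetes agent block can then use yaml pod_yaml unchanged, since the string no longer needs lazy evaluation.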
