I am trying to run the Jenkinsfile below, which contains two sed commands, but I ran into string interpolation issues when I cat the file.
Do you know how I can run them inside the Jenkinsfile?
Thanks in advance.
pipeline {
    agent any
    tools { nodejs "NodeJS 6.7.0" }
    stages {
        stage('checking out gitlab branch master') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/development']]])
            }
        }
        stage('executing release process') {
            environment {
                ARTIFACTORY_APIKEY = credentials('sandbox-gms-password')
            }
            steps {
                sh 'cp bowerrc.template .bowerrc'
                sh 'sed -i -e "s/username/zest-jenkins/g" .bowerrc'
                sh 'sed -i -e "s/password/${ARTIFACTORY_APIKEY}/g" .bowerrc'
                sh 'cat .bowerrc'
            }
        }
    }
}
Put the commands in a single "sh" block, using the example below as a reference:
pipeline {
    agent any
    tools { nodejs "NodeJS 6.7.0" }
    stages {
        stage('checking out gitlab branch master') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/development']]])
            }
        }
        stage('executing release process') {
            environment {
                ARTIFACTORY_APIKEY = credentials('sandbox-gms-password')
            }
            steps {
                sh '''
                    cp bowerrc.template .bowerrc
                    sed -i -e "s/username/zest-jenkins/g" .bowerrc
                    sed -i -e "s/password/${ARTIFACTORY_APIKEY}/g" .bowerrc
                    cat .bowerrc
                '''
            }
        }
    }
}
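The substitution itself can be checked outside Jenkins in plain shell. The template content and the API key value below are invented stand-ins; in the pipeline, credentials() injects the real value as ARTIFACTORY_APIKEY:

```shell
# hypothetical template; only the username/password placeholders matter here
printf '{ "registry": "https://username:password@artifactory.example/api/bower" }\n' > bowerrc.template
export ARTIFACTORY_APIKEY='AKCp-example-key'   # made-up stand-in for the credential

cp bowerrc.template .bowerrc
sed -i -e "s/username/zest-jenkins/g" .bowerrc
sed -i -e "s/password/${ARTIFACTORY_APIKEY}/g" .bowerrc
cat .bowerrc
```

Because the sh step above uses Groovy single/triple-single quotes, ${ARTIFACTORY_APIKEY} reaches the shell untouched and is expanded there, exactly as in this sketch.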
Related
I am trying to replace the DB credentials based on the environment name in Jenkins, but I am unable to achieve this.
I have a JSON config file like this, named 'JsonConfig':
{
    "production": {
        "DB_USERNAME": "userABC"
    },
    "development": {
        "DB_USERNAME": "userXYZ"
    }
}
and this is what I have in the Jenkinsfile:
def getEnvName() {
    if ("master".equals(env.BRANCH_NAME)) {
        return "production";
    }
    return env.BRANCH_NAME;
}

def config;
node() {
    configFileProvider([configFile(fileId: 'secret-credentials', targetLocation: 'JsonConfig')]) {
        config = readJSON file: 'JsonConfig'
    }
}
pipeline {
    agent any
    stages {
        stage("Setup") {
            when {
                beforeAgent true
                anyOf { branch 'master'; branch 'development' }
            }
            steps {
                sh """
                    sed -i 's#__DB_USERNAME__#config.${getEnvName()}.DB_USERNAME#' ./secret-data.yml
                    cat ./secret-data.yml
                """
                //Alternative
                sh "sed -i 's#__DB_USERNAME__#${config.getEnvName().DB_USERNAME}#' ./secret-data.yml"
            }
        }
    }
}
If I pass the environment name statically like this, it works fine:
sh "sed -i 's#__DB_USERNAME__#${config.production.DB_USERNAME}#' ./k8s/secret-data.yml"
I want to make "production" dynamic so that it takes the value returned from the getEnvName() method.
The problematic line is
sh """
    sed -i 's#__DB_USERNAME__#config.${getEnvName()}.DB_USERNAME#' ./secret-data.yml
"""
This will evaluate as the shell command
sed -i 's#__DB_USERNAME__#config.production.DB_USERNAME#' ./secret-data.yml
But you want it to evaluate to
sed -i 's#__DB_USERNAME__#userABC#' ./secret-data.yml
Since config is a Groovy object representing the parsed JSON file, we can access its properties dynamically using the subscript operator ([]):
sh """
    sed -i 's#__DB_USERNAME__#${config[getEnvName()].DB_USERNAME}#' ./secret-data.yml
"""
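Outside Jenkins, the same dynamic lookup can be sketched in plain shell, with python3 standing in for readJSON; the content of secret-data.yml here is a hypothetical placeholder:

```shell
# JsonConfig as given in the question
cat > JsonConfig <<'EOF'
{
  "production": { "DB_USERNAME": "userABC" },
  "development": { "DB_USERNAME": "userXYZ" }
}
EOF
printf 'username: __DB_USERNAME__\n' > secret-data.yml   # hypothetical template

BRANCH_NAME=master
# getEnvName(): master maps to production, otherwise the branch name itself
ENV_NAME=$([ "$BRANCH_NAME" = master ] && echo production || echo "$BRANCH_NAME")
# config[getEnvName()].DB_USERNAME, resolved with a dynamic key
DB_USERNAME=$(python3 -c "import json,sys; print(json.load(open('JsonConfig'))[sys.argv[1]]['DB_USERNAME'])" "$ENV_NAME")
sed -i "s#__DB_USERNAME__#${DB_USERNAME}#" ./secret-data.yml
cat ./secret-data.yml
```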
I am looking to use a database username/password in my config.ini file. I have the following withCredentials line in my Jenkinsfile:
withCredentials([usernamePassword(credentialsId: 'database', usernameVariable: 'DATABASE_USER', passwordVariable: 'DATABASE_PASSWORD')])
I don't explicitly call this config.ini file in my Jenkinsfile; however, I do use a bash script to:
export CONFIG_FILE='config.ini'
Is there any way to set these accordingly in my config.ini:
DB_USERNAME = {DATABASE_USER}
DB_PASSWORD = {DATABASE_PASSWORD}
Bash can do this for you. You have two options:
Use envsubst. You'll need to install it on all of your nodes (it's usually part of the gettext package).
Use evil eval
Full example:
pipeline {
    agent {
        label 'linux' // make sure we're running on Linux
    }
    environment {
        USER = 'theuser'
        PASSWORD = 'thepassword'
    }
    stages {
        stage('Write Config') {
            steps {
                // write the template with literal $USER/$PASSWORD placeholders
                sh 'printf "user=\\$USER\\npassword=\\$PASSWORD\\n" > config.ini'
            }
        }
        stage('Envsubst') {
            steps {
                sh 'envsubst < config.ini > config_envsubst.ini'
                sh 'cat config_envsubst.ini'
            }
        }
        stage('Eval') {
            steps {
                sh 'eval "echo \\"$(cat config.ini)\\"" > config_eval.ini'
                sh 'cat config_eval.ini'
            }
        }
    }
}
See this Stack Exchange question for more options.
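The eval route can be checked outside Jenkins in plain shell; the template content mirrors the example above (envsubst behaves the same, but must be installed):

```shell
# write the template with literal placeholders (single quotes keep the $ intact)
printf 'user=$USER\npassword=$PASSWORD\n' > config.ini

# expand the placeholders from the environment via eval
USER=theuser PASSWORD=thepassword \
  sh -c 'eval "echo \"$(cat config.ini)\"" > config_eval.ini'

cat config_eval.ini
```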
Am I able to somehow copy data from one stage for use in another?
For example, I have one stage where I want to clone my repo, and another that runs Kaniko, which copies (via the Dockerfile) all the data into the container and builds it.
How can I do this? Because:
stages are independent, and I am not able to operate on the same data in both;
in the Kaniko image I am not able to install git to clone the repo there.
Thanks in advance.
Example code:
pipeline {
    agent none
    stages {
        stage('Clone repository') {
            agent {
                label 'builder'
            }
            steps {
                sh 'git clone ssh://git@myrepo.com/repo.git'
                sh 'cd repo'
            }
        }
        stage('Build application') {
            agent {
                docker {
                    label 'builder'
                    image 'gcr.io/kaniko-project/executor:debug'
                    args '-u 0 --entrypoint=""'
                }
            }
            steps {
                sh '''#!/busybox/sh
                    /kaniko/executor -c `pwd` -f Dockerfile
                '''
            }
        }
    }
}
P.S. In the Dockerfile I am using something like:
ADD . /
You can try to use stash:
stage('Clone repository') {
    agent {
        label 'builder'
    }
    steps {
        sh 'git clone ssh://git@myrepo.com/repo.git'
        script {
            stash includes: 'repo/', name: 'myrepo'
        }
    }
}
stage('Build application') {
    agent {
        docker {
            label 'builder'
            image 'gcr.io/kaniko-project/executor:debug'
            args '-u 0 --entrypoint=""'
        }
    }
    steps {
        script {
            unstash 'myrepo'
        }
        sh '''#!/busybox/sh
            /kaniko/executor -c `pwd` -f Dockerfile
        '''
    }
}
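stash/unstash is Jenkins-specific, but conceptually it archives the named files and restores them into the other stage's (possibly different) workspace. A plain-shell sketch of that idea, with made-up directory names:

```shell
# stage 1 workspace: the cloned repo
mkdir -p ws1/repo
echo 'FROM scratch' > ws1/repo/Dockerfile

# "stash includes: 'repo/', name: 'myrepo'" ~ archive the repo/ subtree
tar -C ws1 -cf myrepo.tar repo

# stage 2 runs in a fresh workspace; "unstash 'myrepo'" ~ restore the files
mkdir -p ws2
tar -C ws2 -xf myrepo.tar
cat ws2/repo/Dockerfile
```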
I have a pipeline that builds my image through a Docker container, and it outputs the image tag. I want to pass that image tag to the next stage. When I echo it in the next stage it prints out, but when I use it in a shell command it comes up empty. Here is my pipeline:
pipeline {
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git( url: 'https://xxx@bitbucket.org/xxx/xxx.git',
                     credentialsId: 'xxx',
                     branch: 'master')
            }
        }
        stage('Building Image') {
            steps {
                script {
                    env.IMAGE_TAG = sh script: "docker run -e REPO_APP_BRANCH=master -e REPO_APP_NAME=exampleservice -e DOCKER_HUB_REPO_NAME=exampleservice --volume /var/run/docker.sock:/var/run/docker.sock registry.xxxx/build", returnStdout: true
                }
            }
        }
        stage('Integration') {
            steps {
                script {
                    echo "passed: ${env.IMAGE_TAG}"
                    sh """
                        helm upgrade exampleservice charts/exampleservice --set image.tag=${env.IMAGE_TAG}
                    """
                    sh "sleep 5"
                }
            }
        }
    }
}
pipeline output
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Integration)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
passed:
b79c3bf-b6eec4f
[Pipeline] sh
[test101] Running shell script
+ helm upgrade exampleservice charts/exampleservice --set image.tag=
I'm getting an empty image tag.
The value captured with returnStdout includes the surrounding newlines; that is why the tag prints on its own line after "passed:" and why --set image.tag= ends up empty. Trim the captured value before storing it in env.
Replace your code with this one:
pipeline {
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git( url: 'https://xxx@bitbucket.org/xxx/xxx.git',
                     credentialsId: 'xxx',
                     branch: 'master')
            }
        }
        stage('Building Image') {
            steps {
                script {
                    // trim() strips the leading/trailing newlines around the captured tag
                    env.IMAGE_TAG = sh(script: "docker run -e REPO_APP_BRANCH=master -e REPO_APP_NAME=exampleservice -e DOCKER_HUB_REPO_NAME=exampleservice --volume /var/run/docker.sock:/var/run/docker.sock registry.xxxx/build", returnStdout: true).trim()
                }
            }
        }
        stage('Integration') {
            steps {
                script {
                    echo "passed: ${env.IMAGE_TAG}"
                    sh """
                        helm upgrade exampleservice charts/exampleservice --set image.tag="${env.IMAGE_TAG}"
                    """
                    sh "sleep 5"
                }
            }
        }
    }
}
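The underlying behaviour is easy to reproduce in plain shell; the raw value below stands in for what returnStdout hands back (unlike shell command substitution, Jenkins does not strip the surrounding newlines):

```shell
# raw capture with a leading and trailing newline, as in the build log above
raw='
b79c3bf-b6eec4f
'

# untrimmed, the newline splits the command line and the flag value is empty
printf -- '--set image.tag=%s\n' "$raw"

# trimming whitespace, as Groovy's String.trim() does, yields a usable value
trimmed=$(printf '%s' "$raw" | tr -d ' \n')
printf -- '--set image.tag=%s\n' "$trimmed"
```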
I have a Jenkins pipeline that builds and runs a Docker machine, not as an agent, but using a script block along with the Docker Pipeline Plugin methods docker.build() and Image.run(). This works fine, but if the build fails, the Docker container is left running! I currently have Container.stop() in a post{ always{} } block, but it doesn't seem to work. I don't want to ssh into my Jenkins server to delete the container after every build, and I can't just leave it because it has a specific and necessary name. How do I stop and rm the container regardless of whether the build fails?
My pipeline:
pipeline {
    agent none
    stages {
        stage('Checkout') {
            agent any
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: '<some credentials>', url: '<a git repo>']]])
            }
        }
        stage('Spin Up Receiver') {
            agent any
            steps {
                script {
                    def receiver = docker.build("receiver", "--rm centos7_receiver")
                    def receiver_container = receiver.run("-d -v ${PWD}/realtime_files/station_name/201707/f/20170710_191:/DSK1/SSN/LOG0_f/17001 --network='rsync_test' --name='test_receiver'")
                }
            }
        }
        stage('Run Tests') {
            agent { dockerfile { args '-v /etc/passwd:/etc/passwd --network="rsync_test"' } }
            steps {
                sh "python ./rsyncUnitTests.py"
            }
        }
    }
    post {
        always {
            script {
                receiver_container.stop()
            }
        }
        failure {
            sendEmail('foo@bar.com')
        }
        changed {
            sendEmail('foo@bar.com')
        }
    }
}
Here is a working solution. You simply have to define a variable for the container outside the main pipeline. Then you can use it anywhere in the pipeline to start or stop the container. In particular, you can remove the container in post{ always{ } }.
def receiver_container

pipeline {
    agent any
    stages {
        stage('Checkout') {
            agent any
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: '<some credentials>', url: '<a git repo>']]])
            }
        }
        stage('Spin Up Receiver') {
            agent any
            steps {
                script {
                    def receiver = docker.build("receiver", "--rm receiver_docker")
                    receiver_container = receiver.run("-d -u 0:0 -v /var/lib/jenkins/workspace/RsyncRealtime/jenkins_rt:/DSK1/SSN/LOG5_F/17191 --network='rsync_test' --name='test_receiver'")
                }
            }
        }
        stage('Run Unit Tests') {
            agent {
                dockerfile {
                    args '-u 0:0 -v /etc/passwd:/etc/passwd --network="rsync_test"'
                }
            }
            steps {
                sh "sshpass -p 'test' ssh anonymous@test_receiver ls -l /DSK1/SSN/LOG5_F/17191"
                sh "python ./rsyncUnitTests.py"
            }
        }
    }
    post {
        always {
            script {
                receiver_container.stop()
            }
        }
        failure {
            sendEmail('foo@bar.com')
        }
        changed {
            sendEmail('foo@bar.com')
        }
    }
}
You can use Image.withRun() instead of Image.run().
Image.withRun[(args[, command])] {…}
Like run but stops the container as soon as its body exits, so you do not need a try-finally block.
Other useful commands are documented here:
https://qa.nuxeo.org/jenkins/pipeline-syntax/globals#docker
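withRun gives the same guarantee as a try/finally: the container is stopped however the body exits. Outside Jenkins, the analogous shell idiom is a trap on EXIT; a minimal sketch (no Docker involved, the echo lines stand in for starting and stopping the container):

```shell
: > events.log

# the inner script plays the role of withRun's body; the EXIT trap is the cleanup
sh -c '
    trap "echo container stopped >> events.log" EXIT
    echo container started >> events.log
    exit 1   # simulate a failing build step
' || echo "build step failed, but cleanup still ran"

cat events.log
```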