Run pipeline on slave not master - Jenkins

I am running a Jenkins pipeline (on Jenkins v2.58) and am trying to get the build to run on a slave, not the master. Yet, whatever magic I try in the Jenkinsfile, Jenkins keeps running on the master.
How do I specify a slave executor?
Here is my toy Jenkinsfile, if that helps:
pipeline {
    agent {
        node {
            label='CentOS7'
        }
    }
    stages {
        stage('Creating tox virtual environment') {
            steps {
                sh 'uname -a'
                sh 'tox -v --recreate'
            }
        }
    }
}

The right syntax appears to be:
pipeline {
    agent { label 'CentOS7' }
    stages {
        stage('Creating tox virtual environment') {
            steps {
                sh 'uname -a'
                sh 'tox -v --recreate'
            }
        }
    }
}
Also, make sure your master is running and that an agent with the CentOS7 label is connected and online.
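For reference, the longer node-block form used in the question should also work once the label is passed as an argument instead of being assigned with '=' (a sketch using the same label and stage as above):
stage_example = '''
pipeline {
    agent {
        node {
            label 'CentOS7'
        }
    }
    stages {
        stage('Creating tox virtual environment') {
            steps {
                sh 'uname -a'
                sh 'tox -v --recreate'
            }
        }
    }
}
'''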

Related

How to use helm commands in a Jenkins pipeline script

I have been trying to deploy the image built by Docker on Jenkins to Helm charts. I have referred to a couple of documents, https://dev.to/sword-health/seamless-ci-cd-with-jenkins-helm-and-kubernetes-5e00
and https://cloudcompilerr.wordpress.com/2018/06/03/docker-jenkins-kubernetes-run-jenkins-on-kubernetes-cluster/, and managed to get to the point where the Docker image gets pushed to Docker Hub, but I get stuck at Helm.
I'm not sure what the error actually is.
JENKINS ERROR
+ helm list
/var/lib/jenkins/workspace/01#tmp/durable-68e91f76/script.sh: 1: /var/lib/jenkins/workspace/01#tmp/durable-68e91f76/script.sh: helm: not found
PIPELINE SCRIPT
pipeline {
    environment {
        registry = "hemanthpeddi/springboot"
        registryCredential = 'dockerhub'
    }
    agent any
    tools { maven "maven" }
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/hrmanth/game-of-life.git'
            }
        }
        stage('Build') {
            steps {
                sh script: 'mvn clean package'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry( '', registryCredential ) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Remove Unused docker image') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
        stage('Run Helm') {
            steps {
                script {
                    container('helm') {
                        sh "helm ls"
                    }
                }
            }
        }
    }
}
Is there any specific configuration that I'm missing before I can use Helm in Jenkins? I have configured my Kubernetes IP in the cloud configuration in Jenkins. Please help.
Plugins Installed
Kubernetes Plugin
Docker Plugin
You need helm; it is not available by default. You could add helm as a tool in Jenkins and use it.
https://www.jenkins.io/doc/book/pipeline/syntax/#tools
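A minimal sketch of that approach, assuming the Custom Tool plugin is installed and a custom tool named 'helm' (a hypothetical name) is defined under Global Tool Configuration with the helm binary at its root:
stage('Run Helm') {
    steps {
        script {
            // 'helm' is a hypothetical custom tool name configured in Jenkins;
            // the generic 'tool' step resolves its installation directory.
            def helmHome = tool name: 'helm', type: 'com.cloudbees.jenkins.plugins.customtools.CustomTool'
            // prepend the tool directory to PATH so the shell steps can find helm
            withEnv(["PATH+HELM=${helmHome}"]) {
                sh 'helm version'
                sh 'helm ls'
            }
        }
    }
}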
You can also install helm in the container itself by adding an extra stage:
stage("install helm"){
steps{
sh 'wget https://get.helm.sh/helm-v3.6.1-linux-amd64.tar.gz'
sh 'ls -a'
sh 'tar -xvzf helm-v3.6.1-linux-amd64.tar.gz'
sh 'sudo cp linux-amd64/helm /usr/bin'
sh 'helm version'
}
}
I am not so familiar with this, but when you use the container('helm') step, I think it refers to the Kubernetes Plugin.
Reading those docs, I think the podTemplate is missing from your configuration.
So what you need to do is configure a Helm container in the podTemplate and give it the name "helm". You can try, for example, the alpine/helm image.
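A minimal sketch of that approach using the Kubernetes plugin's declarative agent, assuming the cloud is already configured in Jenkins; the alpine/helm tag used here is only an example:
pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: helm
    image: alpine/helm:3.6.1
    command:
    - cat
    tty: true
'''
        }
    }
    stages {
        stage('Run Helm') {
            steps {
                // runs inside the 'helm' container declared in the pod template above
                container('helm') {
                    sh 'helm ls'
                }
            }
        }
    }
}
The command: cat together with tty: true keeps the helm container alive so that steps can be executed inside it.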

Multiple agents in Jenkins pipeline but use one agent for certain stages

I have this Jenkinsfile:
#!groovy
pipeline {
    options {
        buildDiscarder(logRotator(numToKeepStr: '10'))
    }
    agent {
        label 'docker && new'
    }
    stages {
        stage('Docker build') {
            when {
                branch 'dev'
            }
            steps {
                sh "echo ${env.BUILD_NUMBER}"
                sh "./scripts/push.sh Docker http://xxxxx.xxxx ${env.BUILD_NUMBER} ${env.GIT_BRANCH}"
                sh "echo ${env.BUILD_NUMBER}"
                sh "echo ${env.GIT_BRANCH}"
            }
        }
        stage("Initialise") {
            agent {
                dockerfile {
                    filename 'Dockerfile'
                    label 'docker && new'
                    args '--entrypoint ""'
                }
            }
            steps {
                sh "terraform init -input=false"
            }
        }
        stage("WorkspaceDev") {
            agent {
                dockerfile {
                    filename 'Dockerfile'
                    label 'docker && new'
                    args '--entrypoint ""'
                }
            }
            when {
                branch 'dev'
            }
            steps {
                sh "terraform workspace select dev || terraform workspace new dev"
            }
        }
    }
}
It builds a container from my Dockerfile. However, when running this job it creates a new Docker container to run the next stage, WorkspaceDev. I need to use a separate agent for the very first stage and then the dockerfile agent for all other stages.
How can I use the same container built for the Initialise stage?
Problem:
When running this pipeline, the "Docker build" stage executes on the agent itself, as expected.
It then gets to the "Initialise" stage. This builds a new Docker container (docker build with the Dockerfile I have specified in the agent section for this stage), and the stage completes inside this container.
Next it gets to the "WorkspaceDev" stage, which then AGAIN rebuilds the container with docker build.
I want to use the same container built in the "Initialise" stage.
Can't you use agent { label 'NODE_LABEL' }?
Your question is not very clear, but it seems you are defining a global agent in the upper part of your Jenkinsfile. If the stage you are trying to run on a different agent is the "Docker build" stage, you just have to specify an agent for it, as in your other stages.
Here is the agent documentation:
https://jenkins.io/doc/book/pipeline/syntax/#agent
I think that the label has no meaning (so it is ignored) when you're using a Dockerfile agent, so it creates a new container for each stage.
Therefore, try the following:
#!groovy
pipeline {
    options {
        buildDiscarder(logRotator(numToKeepStr: '10'))
    }
    agent {
        dockerfile {
            filename 'Dockerfile'
            args '--entrypoint ""'
        }
    }
    stages {
        stage('Docker build') {
            agent {
                dockerfile {
                    filename 'Dockerfile'
                    args '--entrypoint ""'
                }
            }
            when {
                branch 'dev'
            }
            steps {
                sh "echo ${env.BUILD_NUMBER}"
                sh "./scripts/push.sh Docker http://xxxxx.xxxx ${env.BUILD_NUMBER} ${env.GIT_BRANCH}"
                sh "echo ${env.BUILD_NUMBER}"
                sh "echo ${env.GIT_BRANCH}"
            }
        }
        stage("Initialise") {
            steps {
                sh "terraform init -input=false"
            }
        }
        stage("WorkspaceDev") {
            when {
                branch 'dev'
            }
            steps {
                sh "terraform workspace select dev || terraform workspace new dev"
            }
        }
    }
}
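Since the question also wants the very first stage to run on a plain agent rather than inside the Dockerfile container, a variation worth trying (a sketch, not tested) is to keep the top-level dockerfile agent as above and give only the "Docker build" stage a label-based agent:
stage('Docker build') {
    // runs on a bare node matching the label instead of inside the Dockerfile container
    agent { label 'docker && new' }
    when {
        branch 'dev'
    }
    steps {
        sh "echo ${env.BUILD_NUMBER}"
        sh "./scripts/push.sh Docker http://xxxxx.xxxx ${env.BUILD_NUMBER} ${env.GIT_BRANCH}"
    }
}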

How can I use agent docker in a declarative pipeline running on a Jenkins ssh-slave node?

I am running the Jenkins master and slave as Docker containers. I have set up a slave node using the jenkins/ssh-slave image with the label 'worker'. I can successfully run my pipeline on the worker node. However, when I try to run the docker build command from the Jenkinsfile, I get the error docker: not found.
pipeline {
    agent { label 'worker' }
    tools { nodejs "node" }
    stages {
        stage('Build APP') {
            steps {
                echo 'BUILDING APPLICATION'
                sh 'npm install'
            }
        }
        stage('Create Package') {
            steps {
                script {
                    echo 'BUILDING DOCKER IMAGE'
                    docker.build("package${env.BUILD_NUMBER}")
                }
            }
        }
        stage('Package Test') {
            agent { docker }
            steps {
                echo 'RUNNING IMAGE IN CONTAINER'
                sh "docker run -p 5050:4000 -d package${env.BUILD_NUMBER}"
                echo 'CHECKING HEALTH STATUS'
                script {
                    try {
                        sh "curl -s --head --request GET http://127.0.0.1:5050/ | grep '200'"
                        echo 'Health Check Passed!'
                    } catch(Exception e) {
                        echo "Health Check Failed!"
                    }
                }
            }
        }
    }
}
In the third stage, 'Package Test', I have placed agent docker in the file but it doesn't seem to work. How can I place a docker agent in a declarative pipeline?
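For reference, the docker agent directive in Declarative Pipeline needs at least an image. A sketch of how the stage could be declared (the image name is only an example, and the node still needs access to a Docker daemon):
stage('Package Test') {
    agent {
        docker {
            // 'docker:latest' is just an example image that ships the docker CLI
            image 'docker:latest'
            // assumes the host's Docker socket can be mounted into the container
            args '-v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    steps {
        sh "docker run -p 5050:4000 -d package${env.BUILD_NUMBER}"
    }
}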

Missing Required Parameter in Docker File

I'm learning how to use Jenkins and a Jenkinsfile for my CI/CD project, and when trying to run a Docker image to run my Selenium tests against, an error is thrown saying that the docker image parameter is missing.
I've followed the docs on the Jenkins site for a tutorial and I'm now trying to adapt that for my own purposes.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
                docker {
                    image 'selenium/standalone-firefox:3.141.59-gold'
                    args '-p 4444:4444'
                }
                sh 'npm test'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
Docker should run on my Ubuntu server, with port 4444 of the container exposed and mapped to port 4444 of the server.
You used Declarative Pipeline for your Jenkinsfile, not Scripted Pipeline. In Declarative Pipeline, docker is a directive which can only be used to specify the agent for the entire pipeline or for a stage, as follows:
pipeline {
    agent { // specify docker container for entire pipeline
        docker {
            image ''
            args ''
        }
    }
}

stage('test') {
    agent { // all steps of this stage will be executed inside this docker container
        docker {
            image ''
            args ''
        }
    }
}
You can't use this docker directive as a pipeline step, like sh or echo.
Jenkins does supply a docker DSL which can be used directly in Scripted Pipeline.
Declarative Pipeline supplies a script step, inside which we can put Scripted-Pipeline-like code, as follows:
stage('test') {
    steps {
        script {
            def version = ....
            def img = docker.build(...)
            img.push()
            docker.image(...).inside(){}
        }
    }
}
Thus you can change your Jenkinsfile as follows and give it a try.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
                script {
                    docker.image('selenium/standalone-firefox:3.141.59-gold')
                          .inside('-p 4444:4444'){}
                }
                sh 'npm test'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
By default, the Docker Pipeline integration assumes the default Docker registry, Docker Hub.
If you intend to use a custom Docker registry, you can use docker.withRegistry to specify the custom registry URL and credentials, as follows:
stage('Test') {
    steps {
        echo 'Testing..'
        script {
            docker.withRegistry('<custom docker registry>',
                                '<credentialsId for custom docker registry if required>') {
                docker.image('selenium/standalone-firefox:3.141.59-gold')
                      .inside('-p 4444:4444'){}
            }
        }
        sh 'npm test'
    }
}
Note: If the custom Docker registry needs credentials, you have to add your account for that registry to Jenkins via Jenkins Credentials. After adding it, Jenkins assigns an id to the account; this id is the credentialsId used in the code above.

How to push the docker image to DTR using Jenkins multibranch pipeline

My problem statement is to push a Docker image to DTR using a Jenkins multibranch pipeline.
I want to configure my Jenkinsfile in such a way that it will build the image and then push it.
For building the image I will use the 10.1.2.3 machine, and the DTR will be https://something.dtr01.eastus2.cloudapp.azure.com/
I am totally new to Jenkins; as per the instructions I have put the sample Jenkinsfile below in the Git repo.
Please suggest the configuration changes to the Jenkinsfile.
node {
    // Clean workspace before doing anything
    deleteDir()
    try {
        stage ('Clone') {
            checkout scm
        }
        stage ('Build') {
            sh "echo 'shell scripts to build project...'"
        }
        stage ('Tests') {
            parallel 'static': {
                sh "echo 'shell scripts to run static tests...'"
            },
            'unit': {
                sh "echo 'shell scripts to run unit tests...'"
            },
            'integration': {
                sh "echo 'shell scripts to run integration tests...'"
            }
        }
        stage ('Deploy') {
            sh "echo 'shell scripts to deploy to server...'"
        }
    } catch (err) {
        currentBuild.result = 'FAILED'
        throw err
    }
}
Any help would be appreciated.
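A minimal sketch of a build-and-push stage that could be added to the scripted pipeline above, assuming the Docker Pipeline plugin is installed, a Dockerfile at the repository root, and a Jenkins credentials id 'dtr-creds' (a hypothetical id) for the DTR; the repository path 'myorg/myapp' is also only a placeholder:
stage ('Build and Push Image') {
    // build the image from the Dockerfile in the checked-out workspace
    def image = docker.build("something.dtr01.eastus2.cloudapp.azure.com/myorg/myapp:${env.BUILD_NUMBER}")
    // push to the DTR; 'dtr-creds' is a hypothetical Jenkins credentials id
    docker.withRegistry('https://something.dtr01.eastus2.cloudapp.azure.com/', 'dtr-creds') {
        image.push()
        image.push('latest')
    }
}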
