Kubernetes cloud agent for Jenkins always offline during pipeline execution

I am trying to deploy microservices into a k8s pod using a Jenkins pipeline and a k8s cloud agent.
I configured my agent in Jenkins, but during execution I always get a message saying my agent is offline.
You can find my Jenkins configuration and my Jenkinsfile below.
Regards,
Jenkinsfile
pipeline {
    environment {
        MAVEN_SETTINGS = ''
        MAVEN_ENV = 'maven-3.6.3'
        JDK_ENV = 'jdk-1.2'
        GIT_URL = 'PRIVATE REPO'
    }
    agent any
    parameters {
        booleanParam(name: "RELEASE",
            description: "Release",
            defaultValue: false
        )
    }
    stages {
        stage('build & deploy') {
            when {
                expression { !params.RELEASE }
            }
            steps {
                withMaven(
                    maven: env.MAVEN_ENV,
                    mavenSettingsConfig: env.MAVEN_SETTINGS,
                    jdk: env.JDK_ENV) {
                    sh "mvn clean deploy -U"
                }
            }
        }
        stage('create image') {
            steps {
                script {
                    docker.withRegistry('https://registry.digitalocean.com', 'images-credential') {
                        def customImage = docker.build("images-repo:${params.IMAGE_VERSION}", "./target")
                        customImage.push("${params.IMAGE_VERSION}")
                    }
                }
            }
        }
        stage('Deploy Pod') {
            agent { label 'kubepod' }
            steps {
                script {
                    kubernetesDeploy(configs: "/kubernetes/pod.yml", kubeconfigId: "mykubeconfig")
                    kubernetesDeploy(configs: "/kubernetes/services.yml", kubeconfigId: "mykubeconfig")
                }
            }
        }
    }
}
(Screenshots: Kubernetes agent config, pod template config, Jenkins global security config, and the build log.)

Set "TCP port for inbound agents" to Fixed and enter 50000.
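If you prefer to set the port programmatically, here is a minimal sketch for the Jenkins Script Console (an assumption on my part; the port must match what your agents are configured to use):

import jenkins.model.Jenkins

// Pin the inbound (JNLP) agent port to 50000 so Kubernetes agents
// can always reach the controller on a stable, known port.
Jenkins.get().setSlaveAgentPort(50000)
Jenkins.get().save()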

You do not have to create a direct connection. Go to the Kubernetes cloud details and add your service account token and the SSL certificate for that token to establish the connection. There is also an issue in your pipeline; correct it like this:
stage('test') {
    agent {
        kubernetes {
            cloud 'cloud name'
            yaml """
            """
        }
    }
}
Please follow this article to start with the basics.
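For reference, a minimal sketch of a stage that runs on a Kubernetes cloud agent with an inline pod definition (the cloud name and container image here are assumptions; adjust them to your own setup):

pipeline {
    agent none
    stages {
        stage('test') {
            agent {
                kubernetes {
                    cloud 'kubernetes'   // must match the cloud name in "Manage Jenkins > Clouds"
                    yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.6.3-jdk-8
    command: ['sleep']
    args: ['infinity']
'''
                }
            }
            steps {
                // Run the steps inside the maven container of the pod
                container('maven') {
                    sh 'mvn --version'
                }
            }
        }
    }
}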

Related

How to invoke a job from one slave (server) on another slave (server) in Jenkins

Please, can you advise? I am planning to invoke a job that is on a different server. For example:
Slave1 (server1): deploy job
Slave2 (server2): build job
I want the build job to trigger the deploy job. Any suggestions, please.
If you are trying to run two jobs on the same Jenkins server, your pipeline should look something like the one below. Here, from the build job you can call the deploy job.
Build Job
pipeline {
    agent { label 'server2' }
    stages {
        stage('Build') {
            steps {
                build job: 'DeployJobName'
            }
        }
    }
}
Deploy Job
pipeline {
    agent { label 'server1' }
    stages {
        stage('Deploy') {
            steps {
                // Deploy something
            }
        }
    }
}
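If the deploy job takes parameters, the build step can pass them along and wait for the result; a minimal sketch (the VERSION parameter is hypothetical):

stage('Build') {
    steps {
        build job: 'DeployJobName',
              parameters: [string(name: 'VERSION', value: env.BUILD_NUMBER)],
              wait: true   // block the build job until DeployJobName finishes
    }
}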
Update
If you want to trigger a job on a different Jenkins server, you can use a plugin like RemoteTriggerPlugin or simply use the Jenkins API.
curl http://server1:8080/job/DeployJobName/build?token=YOUR_REMOTE_TRIGGER_TOKEN
pipeline {
    agent { label 'server2' }
    stages {
        stage('Build') {
            steps {
                sh 'curl http://server1:8080/job/DeployJobName/build?token=YOUR_REMOTE_TRIGGER_TOKEN'
            }
        }
    }
}
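Rather than hard-coding the token in the Jenkinsfile, you could bind it from the Jenkins credentials store; a sketch assuming a "Secret text" credential with id remote-trigger-token:

steps {
    withCredentials([string(credentialsId: 'remote-trigger-token', variable: 'TOKEN')]) {
        // Single-quoted so the shell, not Groovy, expands $TOKEN
        sh 'curl -X POST "http://server1:8080/job/DeployJobName/build?token=$TOKEN"'
    }
}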

Jenkins using docker agent with environment declarative pipeline

I would like to install Maven and npm via a Docker agent using a Jenkins declarative pipeline. But when I use the script below, Jenkins throws the error shown. It might be due to using agent none, but how can I use a node with a Docker agent via a declarative pipeline in Jenkins?
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
I tried to set agent any, but this time I received the error:
Still waiting to schedule task
Waiting for next available executor
pipeline {
    agent none
    // environment {
    //     proxy = https://
    //     stable_revision = sh(script: 'curl -H "Authorization: Basic $base64encoded"
    // }
    stages {
        stage('Build') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
                echo "$apigeeUsername"
                echo "Stable Revision: ${env.stable_revision}"
            }
        }
        stage('Test') {
            agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
            environment {
                HOME = '.'
            }
            steps {
                script {
                    try {
                        sh 'npm install'
                        sh 'node --version'
                        //sh 'npm test/unit/*.js'
                    } catch (e) {
                        throw e
                    }
                }
            }
        }
        // stage('Policy-Code Analysis') {
        //     steps {
        //         sh "npm install -g apigeelint"
        //         sh "apigelint -s wiservice_api_v1/apiproxy/ -f codeframe.js"
        //     }
        // }
        stage('Promotion') {
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            steps {
                sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
                //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
            }
        }
    }
}
Basically your error message tells you everything you need to know:
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
So what is the issue here? You use agent none for your pipeline, which means you do not specify an agent for all stages. An agent executes a specific stage; if a stage has no agent, it can't be executed, and that is your issue here.
The following two stages have no agent, which means there is no Docker container / server or anything else where they can be executed:
stage('Promotion') {
    steps {
        timeout(time: 2, unit: 'DAYS') {
            input 'Do you want to Approve?'
        }
    }
}
stage('Deployment') {
    steps {
        sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
        //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
    }
}
So you have to add agent { ... } to both stages separately, or use a global agent like the following and remove the agents from your stages:
pipeline {
    agent {
        docker { image 'maven:3-alpine' }
    } ...
For further information, see the guide to setting up master and agent machines, distributed Jenkins builds, or the official documentation.
I think you meant to add agent any instead of agent none, because each stage requires at least one agent (either declared at the top for the pipeline or per stage).
Also, I see some more issues.
Your Test stage specifies two images for the same stage:
agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
although your stage executes only npm commands. I believe only one of the images will be downloaded.
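Since that stage only executes npm and node commands, a minimal fix is to keep just the Node image; a sketch of the corrected stage:

stage('Test') {
    agent { docker { image 'node:8.12.0' } }
    environment {
        HOME = '.'   // give npm a writable home directory inside the container
    }
    steps {
        sh 'npm install'
        sh 'node --version'
    }
}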
To clarify mkemmerz's answer a bit more: your Promotion stage is designed correctly. If you plan to have an input step in the pipeline, do not add an agent at the pipeline level, because input steps block the executor context. See https://jenkins.io/blog/2018/04/09/whats-in-declarative/
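In practice that means keeping agent none at the pipeline level and giving an agent only to the stages that need one, so the input step waits without holding an executor; a minimal sketch:

pipeline {
    agent none
    stages {
        stage('Promotion') {
            // No agent here: input can wait for days without blocking an executor
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            agent any   // only this stage occupies an executor
            steps {
                echo 'deploying...'
            }
        }
    }
}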

Jenkins declarative pipeline: npm command not found

So I have set up this Jenkins EC2 instance, SSHed into it, globally installed Node, and set PATH. But when executing my pipeline, it gives me an "npm: command not found" error.
I put echo $PATH in my pipeline and the result is:
/home/ec2-user/.nvm/versions/node/v10.15.1/bin:/sbin:/usr/sbin:/bin:/usr/bin
which looks correct.
For reference, here's my very simple pipeline:
pipeline {
    agent { label 'master' }
    environment {
        PATH = "/home/ec2-user/.nvm/versions/node/v10.15.1/bin:${env.PATH}"
    }
    stages {
        stage('Test npm') {
            steps {
                sh """
                    echo $PATH
                    npm --version
                """
            }
        }
    }
}
Any help is appreciated.
As @Dibakar Adtya pointed out, the problem is that when Jenkins executes a pipeline, it runs as the user jenkins, whereas I configured Node under another user, ec2-user, and jenkins doesn't have access to ec2-user's bin. Thank you @Dibakar!
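A quick way to confirm which user (and therefore which PATH) a build actually runs under is a diagnostic step like this sketch:

steps {
    sh 'whoami'        // typically prints "jenkins", not "ec2-user"
    sh 'echo $PATH'    // single-quoted so the shell, not Groovy, expands PATH
}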
A more elegant solution is to use the Jenkins NodeJS plugin. It saves you from the environment hassles. Note that the name "nodejs" in the tools block must match a NodeJS installation configured under Manage Jenkins > Global Tool Configuration. Now the pipeline is:
pipeline {
    agent { label 'master' }
    tools { nodejs "nodejs" }
    stages {
        stage('Test npm') {
            steps {
                sh """
                    npm --version
                """
            }
        }
    }
}

How to build docker images using a Declarative Jenkinsfile

I'm new to using Jenkins....
I'm trying to automate the production of an image (to be stashed in a repo) using a declarative Jenkinsfile. I find the documentation to be confusing (at best). Simply put, how can I convert the following scripted example (from the docs)
node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.push()
}
to a declarative Jenkinsfile....
You can use scripted pipeline blocks in a declarative pipeline as a workaround:
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                echo 'Starting to build docker image'
                script {
                    def customImage = docker.build("my-image:${env.BUILD_ID}")
                    customImage.push()
                }
            }
        }
    }
}
I'm using the following approach:
steps {
    withDockerRegistry([ credentialsId: "<CREDENTIALS_ID>", url: "<PRIVATE_REGISTRY_URL>" ]) {
        // the following commands will be executed within the logged-in docker registry
        sh 'docker push <image>'
    }
}
Where:
CREDENTIALS_ID is the key in Jenkins under which you store the credentials to your Docker registry.
PRIVATE_REGISTRY_URL is the URL of your private Docker registry. If you are using Docker Hub, it should be empty.
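Put together, a build-and-push sketch using this approach (the credential id, registry URL, and image name are placeholders):

steps {
    withDockerRegistry([ credentialsId: 'dockerhub-creds', url: '' ]) {
        // url is empty for Docker Hub; set it for a private registry
        sh 'docker build -t myorg/my-image:latest .'
        sh 'docker push myorg/my-image:latest'
    }
}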
I cannot recommend the declarative syntax for building a Docker image because it seems that every important step requires falling back to the old scripted syntax. But if you must, a hybrid approach seems to work.
First, a detail about the scm step: when I defined the Jenkins "Pipeline script from SCM" project that fetches my Jenkinsfile with a declarative pipeline from git, Jenkins cloned the repo as the first step in the pipeline even though I did not define an scm step.
For the build and push steps, I can only find solutions that are a hybrid of old-style scripted pipeline steps inside the new-style declarative syntax. For example, see gustavoapolinario's work on Medium:
https://medium.com/@gustavo.guss/jenkins-building-docker-image-and-sending-to-registry-64b84ea45ee9
which has this hybrid pipeline definition:
pipeline {
    environment {
        registry = "gustavoapolinario/docker-test"
        registryCredential = 'dockerhub'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/gustavoapolinario/microservices-node-example-todo-frontend.git'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry( '', registryCredential ) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Remove Unused docker image') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
    }
}
Because the first step here is a clone, I think he built this example as a standalone pipeline project in Jenkins (not a Pipeline script from SCM project).

Jenkins pipeline step happens on master instead of slave

I am getting started with Jenkins Pipeline. My pipeline has one simple step that is supposed to run on a different agent - like the "Restrict where this project can be run" option.
My problem is that it is running on master.
They are both Windows machines.
Here's my Jenkinsfile:
pipeline {
    agent { label 'myLabel' }
    stages {
        stage('Stage 1') {
            steps {
                echo pwd()
                writeFile(file: 'test.txt', text: 'Hello, World!')
            }
        }
    }
}
pwd() prints C:\Jenkins\workspace\<pipeline-name>_<branch-name>-Q762JIVOIJUFQ7LFSVKZOY5LVEW5D3TLHZX3UDJU5FWYJSNVGV4Q.
This folder is on master. This is confirmed by the presence of the test.txt file.
I expected test.txt to be created on the slave agent.
Note 1
I can confirm that the pipeline finds the agent because the logs contain:
[Pipeline] node
Running on MyAgent in C:\Jenkins\workspace\<pipeline-name>_<branch-name>-Q762JIVOIJUFQ7LFSVKZOY5LVEW5D3TLHZX3UDJU5FWYJSNVGV4Q
But this folder does not exist on MyAgent, which seems related to the problem.
Note 2
This question is similar to Jenkins pipeline not honoring agent specification, except that I'm not using the build instruction, so I don't think the answer applies.
Note 3
pipeline {
    agent any
    stages {
        stage('Stage 1') {
            steps {
                echo "${env.NODE_NAME}"
            }
        }
        stage('Stage 2') {
            agent { label 'MyLabel' }
            steps {
                echo "${env.NODE_NAME}"
            }
        }
    }
}
This prints the expected output - master and MyAgent. If this is correct, then why is the workspace located in a different folder on master instead of being on MyAgent?
Here is an example:
pipeline {
    agent none
    stages {
        stage('Example Build') {
            agent { label 'build-label' }
            steps {
                sh 'env'
                sh 'sleep 8'
            }
        }
        stage('Example Test') {
            agent { label 'deploy-label' }
            steps {
                sh 'env'
                sh 'sleep 5'
            }
        }
    }
}
I faced a similar issue, and the following pipeline code worked for me (i.e. the file got created on the Windows slave instead of the Windows master):
pipeline {
    agent none
    stages {
        stage("Stage 1") {
            steps {
                node('myLabel') {
                    script {
                        writeFile(file: 'test.txt', text: 'Hello World!', encoding: 'UTF-8')
                    }
                    // This should print the file content on the slave (Hello World!)
                    bat "type test.txt"
                }
            }
        }
    }
}
I'm debugging a completely unrelated issue and this fact was thrown in my face. Apparently the pipeline is processed on the built-in node (previously known as the master node), with the steps being forwarded to the agent.
So even though echo runs on the agent, pwd() will run on the built-in node. You can do sh 'pwd' to get the path on the agent.
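To see the difference the answer describes, a small sketch contrasting the two (the label is a placeholder):

pipeline {
    agent { label 'myLabel' }
    stages {
        stage('Where am I?') {
            steps {
                echo pwd()   // Groovy step: evaluated by the controller, per the answer above
                sh 'pwd'     // shell command: executed on the agent itself
            }
        }
    }
}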
