Node is not a Kubernetes node - Jenkins

I am trying to run a simple Jenkins pipeline for a Maven project. When I try to run it on Jenkins, I get the error below:
ERROR: Node is not a Kubernetes node:
I have searched everything related to this error but could not find anything.
Can someone tell me where I am making a mistake?
Jenkinsfile:
pipeline {
    agent {
        kubernetes {
            cloud 'openshift'
            label 'test'
            yamlFile 'jenkins/BuildPod.yaml'
        }
    }
    stages {
        stage('Build stage') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
        stage('Test stage') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Package stage') {
            steps {
                sh 'mvn package'
            }
        }
    }
}
BuildPod.yaml:
kind: Pod
apiVersion: v1
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: jnlp
      image: openshift/jenkins-slave-base-centos7:latest
      envFrom:
        - configMapRef:
            name: jenkins-config
    - name: oc-dev
      image: reliefmelone/ocalpine-os:latest
      tty: true
      command:
        - cat
    - name: maven
      image: maven:3.6.1-jdk-13
      tty: true
      command:
        - cat
    - name: jdk
      image: 13-jdk-alpine
      tty: true
      command:
        - cat
I just want to build my project now, but it is not working.

You're missing the container step in your stage.
Example:
stage('Build stage') {
    steps {
        container('maven') {
            sh 'mvn -B clean verify'
        }
    }
}
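For completeness, here is a sketch of the full corrected Jenkinsfile, wrapping each Maven stage in the container step (the container name maven is taken from BuildPod.yaml above; untested):
pipeline {
    agent {
        kubernetes {
            cloud 'openshift'
            label 'test'
            yamlFile 'jenkins/BuildPod.yaml'
        }
    }
    stages {
        stage('Build stage') {
            steps {
                // Run on the 'maven' container from BuildPod.yaml,
                // not on the default 'jnlp' container
                container('maven') {
                    sh 'mvn -B clean verify'
                }
            }
        }
        stage('Test stage') {
            steps {
                container('maven') {
                    sh 'mvn test'
                }
            }
        }
        stage('Package stage') {
            steps {
                container('maven') {
                    sh 'mvn package'
                }
            }
        }
    }
}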

Related

How to run dynamic stages in parallel on Jenkins with a separate Kubernetes agent for each stage

I tried combining things I found on the syntax, but this is as close as I can get. It creates multiple stages, but says they have no steps.
I can get it to run a bunch of parallel steps on the same agent if I move the agent syntax down to where the "test" stage is defined, but I want to spin up separate pods for each one so I can actually use the Kubernetes cluster effectively and do my work in parallel.
Attached is an example Jenkinsfile for reference:
def parallelStagesMap

def generateStage(job) {
    return {
        stage ("$job.key") {
            agent {
                kubernetes {
                    cloud 'kubernetes'
                    yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: name
    image: image
    command:
    - sleep
    args:
    - infinity
"""
                }
            }
            steps {
                sh """
                do some important stuff
                """
            }
        }
    }
}

pipeline {
    agent none
    stages {
        stage('Create List of Stages to run in Parallel') {
            steps {
                script {
                    def map = [
                        "name" : "aparam",
                        "name2" : "aparam2"
                    ]
                    parallelStagesMap = map.collectEntries {
                        ["${it.key}" : generateStage(it)]
                    }
                }
            }
        }
        stage('Test') {
            steps {
                script {
                    parallel parallelStagesMap
                }
            }
        }
        stage('Release') {
            agent etc
            steps {
                etc
            }
        }
    }
}
To run your dynamically created jobs in parallel you will have to use Scripted Pipeline syntax.
The equivalent of the declarative kubernetes agent in a Scripted Pipeline is podTemplate and node (see the full documentation):
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.8.1-jdk-8
    command:
    - sleep
    args:
    - 99d
''') {
    node(POD_LABEL) {
        ...
    }
}
Notice that podTemplate can receive a cloud parameter in addition to the yaml, but it defaults to kubernetes, so there is no need to pass it.
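If a non-default cloud were needed (for example the openshift cloud from the first question above), it could be named explicitly - a minimal sketch, where podYaml stands for a pod spec string like the one defined below:
// Hypothetical: name the cloud explicitly; omitting it falls back to 'kubernetes'
podTemplate(cloud: 'openshift', yaml: podYaml) {
    node(POD_LABEL) {
        // Runs inside a pod scheduled on the named cloud
        sh 'echo running on the openshift cloud'
    }
}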
So in your case you can use this syntax to run the jobs in parallel on different agents:
// Assuming yaml is same for all nodes - if not it can be passed as parameter
podYaml = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: name
    image: image
    command:
    - sleep
    args:
    - infinity
"""

pipeline {
    agent none
    stages {
        stage('Create List of Stages to run in Parallel') {
            steps {
                script {
                    def map = ["name" : "aparam",
                               "name2" : "aparam2"]
                    parallel map.collectEntries {
                        ["${it.key}" : generateStage(it)]
                    }
                }
            }
        }
    }
}

def generateStage(job) {
    return {
        stage(job.key) {
            podTemplate(yaml: podYaml) {
                node(POD_LABEL) {
                    // Each execution runs on its own node (pod)
                    sh "do some important stuff with ${job.value}"
                }
            }
        }
    }
}
As explained in this answer:
Dynamic parallel stages can be created only by using Scripted Pipelines. The APIs built into Declarative Pipeline (like agent) are not available there.
So you can't run dynamic declarative stages in parallel on different agents.
To achieve what you want to do, a solution would be to trigger another pipeline that runs on a new kube pod and wait for its completion before the next steps.
Here is the Jenkinsfiles for more understanding:
Main job Jenkinsfile:
def parallelJobsMap

def triggerJob(item) {
    return {
        build job: 'myChildJob', parameters: [string(name: 'MY_PARAM', value: "${item.value}")], wait: true
    }
}

pipeline {
    agent none
    stages {
        stage('Create List of Stages to run in Parallel') {
            steps {
                script {
                    def map = [
                        "name" : "aparam",
                        "name2" : "aparam2"
                    ]
                    parallelJobsMap = map.collectEntries {
                        ["${it.key}" : triggerJob(it)]
                    }
                }
            }
        }
        stage('Test') {
            steps {
                script {
                    parallel parallelJobsMap
                }
            }
        }
        stage('Release') {
            agent any
            steps {
                echo "Release stuff"
            }
        }
    }
}
Child job Jenkinsfile:
pipeline {
    agent none
    parameters {
        string(
            name: 'MY_PARAM',
            description: 'My beautiful parameter',
            defaultValue: 'A default value',
            trim: true
        )
    }
    stages {
        stage ("Job") {
            agent {
                kubernetes {
                    cloud 'kubernetes'
                    yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: name
    image: image
    command:
    - sleep
    args:
    - infinity
"""
                }
            }
            steps {
                echo "Do some important stuff with the parameter " + params.MY_PARAM
            }
        }
    }
}

Run helm in Jenkins pipeline

I have added helm as a podTemplate in the values.yaml file:
podTemplates:
  helm: |
    - name: helm
      label: jenkins-helm
      serviceAccount: jenkins
      containers:
        - name: helm
          image: lachlanevenson/k8s-helm:v3.1.1
          command: "/bin/sh -c"
          args: "cat"
          ttyEnabled: true
          privileged: true
          resourceRequestCpu: "400m"
          resourceRequestMemory: "512Mi"
          resourceLimitCpu: "1"
          resourceLimitMemory: "1024Mi"
So I want to run helm in the pipeline as:
steps {
    container('helm') {
        sh "helm upgrade --install --force ./helm"
    }
}
but I got this error:
/home/jenkins/workspace/coverwhale#tmp/durable-4d1fbfd5/script.sh: 1: /home/jenkins/workspace/coverwhale#tmp/durable-4d1fbfd5/script.sh: helm: not found
Version of Helm and Kubernetes:
Helm Version:
$ helm version
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
Kubernetes Version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
What happened:
/home/jenkins/workspace/coverwhale#tmp/durable-4d1fbfd5/script.sh: 1: /home/jenkins/workspace/coverwhale#tmp/durable-4d1fbfd5/script.sh: helm: not found
What you expected to happen:
Run the helm chart.
Pipeline code:
pipeline {
    agent any
    stages {
        stage('Initialize Docker') {
            steps {
                script {
                    def docker = tool 'whaledocker'
                    echo "${docker}"
                    echo "${env.PATH}"
                    env.PATH = "${docker}/bin:${env.PATH}"
                    echo "${env.PATH}"
                }
            }
        }
        stage('Checkout Source') {
            steps {
                git url: 'https://github.com/alialrabi/laravel-example.git', branch: 'uat', credentialsId: 'github'
            }
        }
        stage("Build image") {
            steps {
                script {
                    myapp = docker.build("alialrabi/coverwhale:${env.BUILD_ID}")
                }
            }
        }
        stage("Run Test") {
            steps {
                script {
                    docker.image("alialrabi/coverwhale:${env.BUILD_ID}").inside {
                        // sh 'composer install'
                        // sh 'php artisan test'
                    }
                }
            }
        }
        stage("Push image") {
            steps {
                script {
                    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
                        myapp.push("latest")
                        myapp.push("${env.BUILD_ID}")
                    }
                }
            }
        }
        stage('Deploy Uat') {
            steps {
                script {
                    echo "Done Uat"
                    sh "helm upgrade --install --force"
                }
            }
        }
    }
}
I have solved it by adding a containerTemplate to the agent:
stage('Deploy dev') {
    agent {
        kubernetes {
            containerTemplate {
                name 'helm'
                image 'lachlanevenson/k8s-helm:v3.1.1'
                ttyEnabled true
                command 'cat'
            }
        }
    }
    steps {
        container('helm') {
            sh "helm upgrade full-cover ./helm"
        }
    }
}
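Equivalently, the same agent can be declared with inline yaml instead of containerTemplate - a sketch assuming the same image (untested):
stage('Deploy dev') {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: helm
    image: lachlanevenson/k8s-helm:v3.1.1
    command:
    - cat
    tty: true
'''
        }
    }
    steps {
        // 'helm' must match the container name in the yaml above
        container('helm') {
            sh "helm upgrade full-cover ./helm"
        }
    }
}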
You can also install Helm on the instance on which you are running your Jenkins pipeline:
stage("helm install"){
steps{
echo "Helm install"
sh 'curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.9/2020-11-02/bin/linux/amd64/kubectl'
sh 'curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" '
sh 'sudo cp kubectl /usr/bin'
sh 'sudo chmod +x /usr/bin/kubectl'
sh 'wget https://get.helm.sh/helm-v3.6.1-linux-amd64.tar.gz'
sh 'ls -a'
sh 'tar -xvzf helm-v3.6.1-linux-amd64.tar.gz'
sh 'sudo cp linux-amd64/helm /usr/bin'
}
}
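A quick sanity check can be appended to fail fast if the binaries did not land on PATH - a minimal sketch:
stage("verify tools") {
    steps {
        // Both commands exit non-zero if the installs above failed
        sh 'kubectl version --client'
        sh 'helm version'
    }
}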

Jenkins Declarative: Kubernetes Plugin with multiple agents

I am trying to set up a Jenkins declarative pipeline to use two different agents during its execution. The agents are dynamically spawned by the Kubernetes plugin. For the sake of argument and simplicity, let's assume I want to do this:
On Agent 1 (Cloud name: "ubuntu"):
Run apt-get and some installs
Run a shell script
Additional steps
On Agent 2 (Cloud name: "fedora"):
Run dnf and some installs
Run a shell script
Additional steps
The problem I have is that if I use a global agent declaration:
pipeline {
    agent {
        kubernetes {
            cloud 'ubuntu'
            label "ubuntu-agent"
            containerTemplate {
                name 'support'
                image 'blenderfox/support'
                ttyEnabled true
                command 'cat'
            }
        }
    }
    ...
}
Then that is used across all the stages if I don't declare an agent on each of the stages.
If I use agent none:
pipeline {
    agent none
    ...
}
Then I have to declare an agent spec for each stage, for example:
stage ("apt update") {
agent {
kubernetes {
cloud 'ubuntu'
label "ubuntu-agent"
containerTemplate {
name 'support'
image 'blenderfox/support'
ttyEnabled true
command 'cat'
}
}
}
steps {
sh """
apt update
"""
}
}
While this would work for me, in that I can declare per stage which agent I want, this method spins up a new agent for each stage, meaning state isn't carried over between, for example, these two stages:
stage ("apt-update") {
agent {
....
}
steps {
sh """
apt update
"""
}
}
stage ("apt-install") {
agent {
....
}
steps {
sh """
apt install -y ....
"""
}
}
Can I reuse the same agent across stages? For example, something like this:
stage ("provision agent") {
agent {
...
label "ubuntu-agent"
...
}
steps {
sh """
echo "Provisioning agent"
"""
}
}
stage ("apt-update") {
agent {
label "ubuntu-agent" //reuse agent from previous stage
}
steps {
sh """
apt update
"""
}
}
stage ("apt-install") {
agent {
label "ubuntu-agent" //reuse agent from previous stage
}
steps {
sh """
apt install -y ....
"""
}
}
Found a solution. Very hacky but it works:
pipeline {
    agent none
    stages {
        stage ("Provision dev agent") {
            agent {
                kubernetes {
                    cloud 'dev-cloud'
                    label "dev-agent-${env.BUILD_NUMBER}"
                    slaveConnectTimeout 300
                    idleMinutes 5
                    yamlFile "jenkins-dev-agent.yaml"
                }
            }
            steps {
                sh """
                ## Do any agent init steps here
                """
            }
        }
        stage ("Do something on dev agent") {
            agent {
                kubernetes {
                    label "dev-agent-${env.BUILD_NUMBER}"
                }
            }
            steps {
                sh """
                ## Do something here
                """
            }
        }
        stage ("Provision production agent") {
            agent {
                kubernetes {
                    cloud 'prod-cloud'
                    label "prod-agent-${env.BUILD_NUMBER}"
                    slaveConnectTimeout 300
                    idleMinutes 5
                    yamlFile "jenkins-prod-agent.yaml"
                }
            }
            steps {
                sh """
                ## Do any agent init steps here
                """
            }
        }
        stage ("Do something on prod agent") {
            agent {
                kubernetes {
                    label "prod-agent-${env.BUILD_NUMBER}"
                }
            }
            steps {
                sh """
                ## Do something here
                """
            }
        }
    }
}
The agent yamls vary, but you can do something like this:
spec:
  containers:
  - name: docker
    image: docker:18.06.1
    command: ["tail", "-f", "/dev/null"]
    imagePullPolicy: Always
    volumeMounts:
    - name: docker
      mountPath: /var/run/docker.sock
  volumes:
  - hostPath:
      path: "/var/run/docker.sock"
    name: "docker"
And then use the agent like so:
stage ("docker build") {
agent {
kubernetes {
label "dev-agent-${env.BUILD_NUMBER}"
}
}
steps {
container('docker') {
sh """
## docker build....
"""
}
}
}
There's a solution for this using sequential stages: you define a stage with your agent, and then you can nest other stages inside it.
pipeline {
    agent none
    stages {
        stage ("Provision dev agent") {
            agent {
                kubernetes {
                    cloud 'dev-cloud'
                    slaveConnectTimeout 300
                    yamlFile "jenkins-dev-agent.yaml"
                }
            }
            stages {
                stage ("Do something on dev agent") {
                    steps {
                        sh """
                        ## Do something here
                        """
                    }
                }
                stage ("Do something else on dev agent") {
                    steps {
                        sh """
                        ## Do something here
                        """
                    }
                }
            }
        }
        stage ("Provision prod agent") {
            agent {
                kubernetes {
                    cloud 'prod-cloud'
                    slaveConnectTimeout 300
                    yamlFile "jenkins-prod-agent.yaml"
                }
            }
            stages {
                stage ("Do something on prod agent") {
                    steps {
                        sh """
                        ## Do something here
                        """
                    }
                }
                stage ("Do something else on prod agent") {
                    steps {
                        sh """
                        ## Do something here
                        """
                    }
                }
            }
        }
    }
}
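Note that if jenkins-dev-agent.yaml defines named containers (as in the docker example above), each nested stage would still select one with the container step - a sketch assuming a container named docker:
stage ("Do something on dev agent") {
    steps {
        // 'docker' must match a container name in jenkins-dev-agent.yaml
        container('docker') {
            sh """
            ## docker build....
            """
        }
    }
}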

Jenkins Pipeline docker.withRegistry() push leads to endless loop

I managed to set up Jenkins on Kubernetes and GitBucket on Kubernetes. Now I am trying to create my first own Dockerfile and upload the image to Docker Hub. Unfortunately the upload fails. The build is successful, but I can't work out how to push it to my private Docker Hub repository.
Jenkinsfile
def label = "${BUILD_TAG}"
podTemplate(label: label, containers: [
    containerTemplate(name: 'docker', image: 'docker:latest', command: 'cat', ttyEnabled: true)
],
volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]) {
    node(label) {
        def app
        def myRepo = checkout scm
        def gitCommit = myRepo.GIT_COMMIT
        def gitBranch = myRepo.GIT_BRANCH
        def shortGitCommit = "${gitCommit[0..10]}"
        def previousGitCommit = sh(script: "git rev-parse ${gitCommit}~", returnStdout: true)
        stage('Decommission Infrastructure') {
            container('kubectl') {
                echo "Decommission..."
            }
        }
        stage('Build application') {
            container('docker') {
                app = docker.build("fasautomation/recon", ".")
            }
        }
        stage('Run unit tests') {
            container('docker') {
                app.inside {
                    sh 'echo "Test passed"'
                }
            }
        }
        stage('Docker publish') {
            container('docker') {
                docker.withRegistry('https://registry.hub.docker.com', '<<jenkins store-credentials>>') {
                    echo "Pushing 1..."
                    // Push tagged version
                    app.push("${env.BUILD_NUMBER}")
                    echo "Pushing 2..."
                    // Push latest-tagged version
                    app.push("latest")
                    echo "Pushed!"
                }
            }
        }
        stage('Deployment') {
            container('docker') {
                // Deploy to Kubernetes
                echo 'Deploying'
            }
        }
        stage('Provision Infrastructure') {
            container('kubectl') {
                echo 'Provision...'
            }
        }
    }
}
Jenkins Logs
[...]
[Pipeline] stage
[Pipeline] { (Docker publish)
[Pipeline] container
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
Executing sh script inside container docker of pod jenkins-recon-master-116-0ksw8-f7779
Executing command: "docker" "login" "-u" "*****" "-p" ******** "https://index.docker.io/v1/"
exit
<<endless loading symbol>>
Does anyone have a clue how to debug this? The credentials work. I'm not sure why there is an exit in the log without the push logs afterwards... :-(
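One way to narrow this down is to bypass docker.withRegistry and run the login and push explicitly, so a hang can be pinned to a single command - a hedged sketch assuming a usernamePassword credential with the hypothetical ID 'dockerhub-creds':
stage('Docker publish') {
    container('docker') {
        withCredentials([usernamePassword(credentialsId: 'dockerhub-creds',   // hypothetical ID
                                          usernameVariable: 'DOCKER_USER',
                                          passwordVariable: 'DOCKER_PASS')]) {
            // Log in and push step by step instead of via docker.withRegistry
            sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
            sh "docker tag fasautomation/recon fasautomation/recon:${env.BUILD_NUMBER}"
            sh "docker push fasautomation/recon:${env.BUILD_NUMBER}"
        }
    }
}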

Building docker images inside a Jenkins container

I am using a Jenkins container to execute a pipeline based on this Jenkinsfile:
pipeline {
    agent any
    tools {
        maven 'Maven 3.6.0'
        jdk 'jdk8'
    }
    stages {
        stage('Pull from git') {
            steps {
                checkout scm
            }
        }
        stage('Compile App') {
            steps {
                sh "mvn clean install"
            }
        }
        stage('Build da Imagem') {
            steps {
                script {
                    docker.withTool("docker") {
                        def readyImage = docker.build("dummy-project/dummy-project-image", "./docker")
                    }
                }
            }
        }
    }
}
At the last stage I'm getting this error when it tries to build the Docker image.
Is it possible to build a Docker image inside a Jenkins container?
Your pipeline-executing agent doesn't communicate with the Docker daemon, so you need to configure it properly. There are three ways (the ones I know):
1) Provide your agent with a Docker installation
2) Add a Docker installation from https://$JENKINS_URL/configureTools/
3) If you use Kubernetes as orchestrator, you may add a podTemplate definition at the beginning of your pipeline and then use it; here is an example:
// Name of the application (do not use spaces)
def appName = "my-app"

// Start of podTemplate
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(
    label: label,
    containers: [
        containerTemplate(
            name: 'docker',
            image: 'docker',
            command: 'cat',
            ttyEnabled: true)],
    volumes: [
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock'),
        hostPathVolume(hostPath: '/usr/bin/kubectl', mountPath: '/usr/bin/kubectl'),
        secretVolume(mountPath: '/etc/kubernetes', secretName: 'cluster-admin')],
    annotations: [
        podAnnotation(key: "development", value: appName)]
)
// End of podTemplate
[...inside your pipeline]
container('docker') {
    stage('Docker Image and Push') {
        docker.withRegistry('https://registry.domain.it', 'nexus') {
            def img = docker.build(appName, '.')
            img.push('latest')
        }
    }
}
I hope this helps.
