I am trying to create a pipeline job for Angular code to deploy the application into a k8s cluster. Below is the code for the pipeline's podTemplate; during the build I get the following error.
def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(
    cloud: 'kubernetes',
    namespace: 'test',
    imagePullSecrets: ['regcred'],
    label: label,
    containers: [
        containerTemplate(name: 'nodejs', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat'),
        containerTemplate(name: 'docker', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat'),
        containerTemplate(name: 'kubectl', image: 'k8spoc1/kubctl:latest', ttyEnabled: true, command: 'cat')
    ],
    volumes: [
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock'),
        hostPathVolume(hostPath: '/root/.m2/repository', mountPath: '/root/.m2/repository')
    ]
) {
    node(label) {
        def scmInfo = checkout scm
        def image_tag
        def image_name
        sh 'pwd'
        def gitCommit = scmInfo.GIT_COMMIT
        def gitBranch = scmInfo.GIT_BRANCH
        def commitId
        commitId = scmInfo.GIT_COMMIT[0..7]
        image_tag = "${scmInfo.GIT_BRANCH}-${scmInfo.GIT_COMMIT[0..7]}"
        stage('NPM Install') {
            container('nodejs') {
                withEnv(["NPM_CONFIG_LOGLEVEL=warn"]) {
                    sh 'npm install'
                }
            }
        }
    }
}
Error from Jenkins:
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: Labels must follow required specs - https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set: Ubuntu-82f3782f-b5aa-4029-9c51-57610153747c
Finished: FAILURE
Do I need to mention a spec value in my Jenkinsfile?
The error message you get:
ERROR: Labels must follow required specs - https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set: Ubuntu-82f3782f-b5aa-4029-9c51-57610153747c
points out quite precisely what can be wrong with your Pod template. As you can see in the link to the Kubernetes documentation given in the error message, you need to follow certain rules when defining labels for a Pod: the labels element is a dictionary/map field that requires at least one valid key-value pair, and both keys and values have to match the required character set, so you cannot just write label: label in your specification.
You can try to define your PodTemplate in YAML format (which is the most common way of defining objects in Kubernetes), as in this example:
podTemplate(yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - cat
    tty: true
"""
) {
    node(POD_LABEL) {
        container('busybox') {
            sh "hostname"
        }
    }
}
As you can read here:
label The label of the pod. Can be set to a unique value to avoid
conflicts across builds, or omitted and POD_LABEL will be defined
inside the step.
The label field can be omitted altogether, so first try without it and you shouldn't get this error message.
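For example, a minimal sketch of the template from the question without the label parameter, relying on the generated POD_LABEL instead:

podTemplate(
    cloud: 'kubernetes',
    namespace: 'test',
    imagePullSecrets: ['regcred'],
    containers: [
        containerTemplate(name: 'nodejs', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat')
    ]
) {
    // POD_LABEL is generated by the plugin for this podTemplate block
    node(POD_LABEL) {
        stage('NPM Install') {
            container('nodejs') {
                withEnv(["NPM_CONFIG_LOGLEVEL=warn"]) {
                    sh 'npm install'
                }
            }
        }
    }
}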
ERROR: Labels must follow required specs - https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set: Ubuntu-82f3782f-b5aa-4029-9c51-57610153747c
Your label Ubuntu-82f3782f-b5aa-4029-9c51-57610153747c has a space before Ubuntu, which is not valid.
But that error message doesn't seem to match the pod definition you posted, as there is no mention of Ubuntu anywhere. Maybe it is inherited from another pod template defined in your Jenkins configuration?
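If the label is in fact being assembled somewhere from a value that can contain spaces or uppercase characters (for example a node name), a hypothetical way to make sure the generated label satisfies the Kubernetes label syntax is to normalize it before passing it to podTemplate:

// Hypothetical sketch: keep only characters allowed in Kubernetes label values.
def rawLabel = "worker-${UUID.randomUUID().toString()}"
def label = rawLabel.trim().toLowerCase().replaceAll(/[^a-z0-9._-]/, '-')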
Related
I have Jenkins deployed on a Kubernetes cluster A, and it can spawn pods in the same cluster A. My workflow is the following:
1. Start a pod with an image designed to build a Docker image. This image will be built with a deterministic tag, example/appname:${GIT_COMMIT}.
2. After the image was built in the previous pod, start a new pod with the image from the previous build (example/appname:${GIT_COMMIT}), and run multiple test tools.
Currently I am successfully making a build with
podTemplate(yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: aws-dockerizer
    image: example/aws-dockerizer:0.1.7
    command: ['cat']
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
""") {
    node(POD_LABEL) {
        stage('Clone') {
            git url: 'https://github.com/example/my-app/', branch: '${build_branch_name}', credentialsId: 'github-app'
            container('aws-dockerizer') {
                stage('Build and deploy') {
                    withAWS(credentials: 'aws-credentials', region: 'eu-central-1') {
                        sh '''#!/usr/bin/env bash
                        git config --global --add safe.directory ${WORKSPACE}
                        scripts/build_and_push_docker_image_to_aws.sh
                        '''
                    }
                }
            }
        }
    }
}
I'd like to add the following stage to my pipeline. Note that the new pod "experience-${GIT_COMMIT}" CANNOT be started because the image is not available until the previous step is complete.
podTemplate(
    label: "experience-${GIT_COMMIT}",
    yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: experience
    image: example.dkr.ecr.eu-central-1.amazonaws.com/example/appname:${GIT_COMMIT}
    command: ['cat']
    tty: true
""") {
    stage('Run tests') {
        node("experience-${GIT_COMMIT}") {
            stage('Run tests') {
                container('experience') {
                    stage('Run Rspec') {
                        sh 'bundle exec rspec'
                    }
                    post {}
                }
            }
        }
    }
}
Any idea if this is possible? What are the DSL / concepts I need to use? How do I "merge" the two pieces of code to achieve what I want?
I've tried to play around a bit, but when I declare both pod templates at the beginning, it hangs the job until the 2nd pod is ready, which it never will be...
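One pattern worth trying (a sketch only, not verified against this setup): keep the two podTemplate blocks sequential in the same scripted pipeline, so the second pod is only requested once the first block, and therefore the image push, has finished. The buildPodYaml and testPodYaml variables below are hypothetical placeholders for the two YAML definitions shown above:

// First pod: build and push example/appname:${GIT_COMMIT}
podTemplate(yaml: buildPodYaml) {
    node(POD_LABEL) {
        stage('Build and push') {
            container('aws-dockerizer') {
                sh 'scripts/build_and_push_docker_image_to_aws.sh'
            }
        }
    }
}

// Only reached after the first block has completed, i.e. after the image exists,
// so the second pod can pull example/appname:${GIT_COMMIT} without hanging.
podTemplate(yaml: testPodYaml) {
    node(POD_LABEL) {
        stage('Run tests') {
            container('experience') {
                sh 'bundle exec rspec'
            }
        }
    }
}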
I have a script that uses curl, and that script should be run in a Kubernetes agent on Jenkins. Here is my original agent configuration:
pipeline {
    agent {
        kubernetes {
            customWorkspace 'ng-cleaner'
            yaml """
kind: Pod
metadata:
spec:
  imagePullSecrets:
  - name: jenkins-docker
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: agentpool
            operator: In
            values:
            - build
  schedulerName: default-scheduler
  tolerations:
  - key: type
    operator: Equal
    value: jenkins
    effect: NoSchedule
  containers:
  - name: jnlp
    env:
    - name: CONTAINER_ENV_VAR
      value: jnlp
  - name: build
    image: tixartifactory-docker.jfrog.io/baseimages/helm:helm3.2.1-helm2.16.2-kubectl.0
    ttyEnabled: true
    command:
    - cat
    tty: true
"""
        }
    }
The error message is:
curl .... /home/jenkins/agent/ng-cleaner#tmp/durable-0d154ecf/script.sh: 2: curl: not found
What I tried:
1. Added a shell step to the main "build" container:
   shell: sh "apk add --no-cache curl" (also tried "apt install curl") - didn't help.
2. Added a new container with a curl image:
   - name: curl
     image: curlimages/curl:7.83.1
     ttyEnabled: true
     tty: true
     command:
     - cat
   That didn't help either.
Any suggestions on how I can make it work?
I resolved it.
I needed to add a shell step to the main container:
shell: sh "apk add --no-cache curl"
and then place my script inside a container block:
stages {
  stage('MyStage') {
    steps {
      container('build') {
        script { /* the curl-based script goes here */ }
      }
    }
  }
}
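For what it's worth, the dedicated curl container from the question would likely also have worked if the step had been wrapped in a container('curl') block, since steps otherwise run in the default container, where curl is missing. A sketch:

steps {
    container('curl') {
        script {
            sh 'curl --version'   // the curl-based script would go here
        }
    }
}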
In General
I'm trying to use a label with the kubernetes-plugin for Jenkins, but I'm a little bit confused.
In my pipeline below I'm trying to build a test job in parallel steps with different labels (agents).
I have already configured the plugin with a pod template and container in my Jenkins config, using the same settings as defined in the pipeline podTemplate.
Issue
The problem is that when I use the agent label in stage 2, the jnlp image is running instead of the image I point to in the config (someimage:latest).
In stage 1, where I define the pod in the pipeline, there is no problem and the required images run fine.
Question
What am I doing wrong?
Here is my Jenkinsfile and the kubernetes-plugin config in Jenkins:
def podTemplate = """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: k8s
    image: someimage:latest
    command:
    - sleep
    args:
    - infinity
    volumeMounts:
    - name: workspace-volume
      mountPath: /home/jenkins/agent
    workingDir: "/home/jenkins/agent"
  volumes:
  - name: "workspace-volume"
    persistentVolumeClaim:
      claimName: "jenkins-worker-pvc"
      readOnly: false
"""
pipeline {
    agent none
    stages {
        stage("Parallel") {
            parallel {
                stage("1.k8s") {
                    agent {
                        kubernetes {
                            yaml podTemplate
                            defaultContainer 'k8s'
                        }
                    }
                    steps {
                        sh """
                        mvn -version
                        """
                    }
                }
                stage("2. k8s") {
                    agent { label 'k8s' }
                    steps {
                        sh """
                        mvn -version
                        """
                    }
                }
                stage("win") {
                    agent { label 'windows' }
                    steps {
                        bat "dir"
                    }
                }
            }
        }
    }
}
You did not specify an image for the stages with the labels k8s and windows.
You can read in the docs that:
The plugin creates a Kubernetes Pod for each agent started, defined by the Docker image to run, and stops it after each build.
Agents are launched using JNLP, so it is expected that the image connects automatically to the Jenkins master.
You are using podTemplate, so I would advise setting up containers; this might look like the following:
podTemplate(containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
]) {
    node(POD_LABEL) {
        stage('Get a Maven project') {
            git 'https://github.com/jenkinsci/kubernetes-plugin.git'
            container('maven') {
                stage('Build a Maven project') {
                    sh 'mvn -B clean install'
                }
            }
        }
        stage('Get a Golang project') {
            git url: 'https://github.com/hashicorp/terraform.git'
            container('golang') {
                stage('Build a Go project') {
                    sh """
                    mkdir -p /go/src/github.com/hashicorp
                    ln -s `pwd` /go/src/github.com/hashicorp/terraform
                    cd /go/src/github.com/hashicorp/terraform && make core-dev
                    """
                }
            }
        }
    }
}
You can read more about Container Configuration and Container Group Support
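Applied to the stage from the question, the same idea would look roughly like this (a sketch, assuming the globally configured pod template labelled k8s defines a container named k8s):

stage("2. k8s") {
    agent { label 'k8s' }
    steps {
        // Without an explicit container block the step runs in the jnlp container.
        container('k8s') {
            sh 'mvn -version'
        }
    }
}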
I am newly studying Kubernetes out of my own interest, and I am trying to create Jenkins jobs to deploy our application. I have one master and one worker machine, both are up and running, and I am able to ping each machine from the other.
As of now, I don't have any pods or deployments in my cluster; it is a freshly set up environment.
Right now my Jenkinsfile contains a pod template for Node.js and Docker with a single stage to install NPM modules.
def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(
    cloud: 'kubernetes',
    namespace: 'test',
    imagePullSecrets: ['regcred'],
    label: label,
    containers: [
        containerTemplate(name: 'nodejs', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat'),
        containerTemplate(name: 'docker', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat'),
        containerTemplate(name: 'kubectl', image: 'k8spoc1/kubctl:latest', ttyEnabled: true, command: 'cat')
    ],
    volumes: [
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock'),
        hostPathVolume(hostPath: '/root/.m2/repository', mountPath: '/root/.m2/repository')
    ]
) {
    node(label) {
        def scmInfo = checkout scm
        def image_tag
        def image_name
        sh 'pwd'
        def gitCommit = scmInfo.GIT_COMMIT
        def gitBranch = scmInfo.GIT_BRANCH
        def commitId
        commitId = scmInfo.GIT_COMMIT[0..7]
        image_tag = "${scmInfo.GIT_BRANCH}-${scmInfo.GIT_COMMIT[0..7]}"
        stage('NPM Install') {
            container('nodejs') {
                withEnv(["NPM_CONFIG_LOGLEVEL=warn"]) {
                    sh 'npm install'
                }
            }
        }
    }
}
Now the question is: if I run the Jenkins job with the above code, will the above-mentioned Docker and Node.js images be downloaded from the Docker registry and saved onto my local machine? How does this work? Can someone please explain?
The above code is for the Jenkins plugin https://github.com/jenkinsci/kubernetes-plugin.
Running the above Jenkins code will run the job on an agent or on the master, and the images will be downloaded to that agent/master. The above plugin is used to set up Jenkins agents, so if there are no agents, the job will run on the master.
I have a scenario: I run Jenkins in a K8s cluster in Minikube, and I run a Groovy script within my Jenkins pipeline which builds a Docker image using Kaniko (which builds a Docker image without a Docker daemon) and pushes it to Azure Container Registry. I have created a secret to authenticate to Azure.
But when I push an image, I get this error:
" [36mINFO[0m[0004] Taking snapshot of files...
[36mINFO[0m[0004] ENTRYPOINT ["jenkins-slave"]
error pushing image: failed to push to destination Testimage.azurecr.io/test:latest: unexpected end of JSON input
[Pipeline] }"
My Groovy script:
def label = "kaniko-${UUID.randomUUID().toString()}"
podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
    - name: jenkins-pv
      mountPath: /root
  volumes:
  - name: jenkins-pv
    projected:
      sources:
      - secret:
          name: pass
          items:
          - key: .dockerconfigjson
            path: .docker/config.json
"""
) {
    node(label) {
        stage('Build with Kaniko') {
            git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
            container(name: 'kaniko', shell: '/busybox/sh') {
                sh '''#!/busybox/sh
                /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --skip-tls-verify --destination=testimage.azurecr.io/test:latest
                '''
            }
        }
    }
}
Could you please help me overcome this error? And also:
How do I know the name of the image built by Kaniko?
I'm just pushing it like registry.acr.io/test:latest; is it perhaps an incorrect image name that causes the JSON error?
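Not a definitive answer, but two things worth checking. The image name is simply whatever you pass to --destination (here testimage.azurecr.io/test:latest); Kaniko does not invent a different name. For the "unexpected end of JSON input" error, Kaniko conventionally reads registry credentials from /kaniko/.docker/config.json (or from the directory pointed to by DOCKER_CONFIG), while the template above mounts the secret under /root. A sketch of the same pod template with the secret mounted in the conventional location (the jenkins-pv/pass names are taken from the question, everything else is an assumption):

podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
    - name: jenkins-pv
      # Kaniko's conventional Docker config directory
      mountPath: /kaniko/.docker
  volumes:
  - name: jenkins-pv
    projected:
      sources:
      - secret:
          name: pass
          items:
          - key: .dockerconfigjson
            # becomes /kaniko/.docker/config.json
            path: config.json
"""
) {
    // the node(label) / stage('Build with Kaniko') block stays as in the question
    node(label) {
        stage('Build with Kaniko') {
            git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
            container(name: 'kaniko', shell: '/busybox/sh') {
                sh '''#!/busybox/sh
                /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --skip-tls-verify --destination=testimage.azurecr.io/test:latest
                '''
            }
        }
    }
}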