Jenkins pipeline: kubectl: not found

I have the following Jenkinsfile:
node {
    stage('Apply Kubernetes files') {
        withKubeConfig([credentialsId: 'jenkins-deployer', serverUrl: 'https://192.168.64.2:8443']) {
            sh 'kubectl apply -f '
        }
    }
}
While running it, I got "kubectl: not found". I installed the Kubernetes CLI plugin in Jenkins and generated the secret token via kubectl create sa jenkins-deployer. What's wrong here?

I know this is a fairly old question, but I decided to describe an easy workaround that might be helpful.
To use the Kubernetes CLI plugin we need an executor with kubectl installed.
One possible way to get kubectl is to install it in the Jenkins pipeline, as in the snippet below:
NOTE: I'm using ./kubectl get pods to list all Pods in the default Namespace. Additionally, you may need to change the kubectl version (v1.20.5).
node {
    stage('List pods') {
        withKubeConfig([credentialsId: 'kubernetes-config']) {
            sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl"'
            sh 'chmod u+x ./kubectl'
            sh './kubectl get pods'
        }
    }
}
As a result, in the Console Output, we can see that it works as expected:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl
...
[Pipeline] sh
+ chmod u+x ./kubectl
[Pipeline] sh
+ ./kubectl get pods
NAME            READY   STATUS    RESTARTS   AGE
default-zhxwb   1/1     Running   0          34s
my-jenkins-0    2/2     Running   0          134m

You call kubectl from the shell script step. To be able to do that, the agent (node) executing the build needs to have kubectl available as an executable.
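If installing kubectl on the agent is not an option, a hedged sketch that applies the download workaround from the answer above to the original pipeline could look like this (untested; the k8s/ manifest directory is a placeholder of my own, not from the question):
node {
    stage('Apply Kubernetes files') {
        withKubeConfig([credentialsId: 'jenkins-deployer', serverUrl: 'https://192.168.64.2:8443']) {
            // fetch a kubectl binary into the workspace so the step does not
            // depend on kubectl being pre-installed on the agent
            sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl"'
            sh 'chmod u+x ./kubectl'
            sh './kubectl apply -f k8s/'   // k8s/ is a hypothetical manifest directory
        }
    }
}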


How to run a "sidecar" container in Jenkins Blue Ocean?

I am fairly new to Jenkins and CI/CD in general, but believe that I have searched long enough to conclude things are not as I expect.
I want to do some frontend tests on my website and just as in real life I want to test with the site in one Docker container and the database in another container. Jenkins has this documented as "sidecar" containers which can be part of a pipeline.
Their example:
node {
    checkout scm
    /*
     * In order to communicate with the MySQL server, this Pipeline explicitly
     * maps the port (`3306`) to a known port on the host machine.
     */
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw" -p 3306:3306') { c ->
        /* Wait until mysql service is up */
        sh 'while ! mysqladmin ping -h0.0.0.0 --silent; do sleep 1; done'
        /* Run some tests which require MySQL */
        sh 'make check'
    }
}
The thing is that I do not have a 'traditional' Jenkins pipeline; I am running Jenkins Blue Ocean instead. This gives me a fancy pipeline editor, but my pipeline code (Jenkinsfile) also looks very different from the example:
pipeline {
    agent {
        docker {
            image 'php'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'composer --version'
                sh 'composer install'
            }
        }
        stage('Tests') {
            steps {
                echo 'Do test'
            }
        }
    }
}
So how would I be spawning (and tearing down) these "sidecar" containers in a Blue Ocean pipeline? Currently the Pipeline editor has no available options if I want to add a step related to Docker. Can I still use docker.image? I do have the Docker Pipeline plugin installed.
The example provided by Jenkins in the link is actually a fully functional pipeline, with one exception: you need to comment out checkout scm if you paste the pipeline script directly into Jenkins.
node {
    // checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
What may be confusing to you is that the code style in the example above is very different from the one generated by the Blue Ocean pipeline editor. That is because the example is written as a Scripted Pipeline, while Blue Ocean generates a Declarative Pipeline. Both are fully supported in Jenkins and both use the same engine underneath, but the syntax differences may lead to confusion at first.
You can use the Scripted Pipeline example above just fine, but if you want to keep the Declarative Pipeline, you can run the scripted part inside the script step. In both cases you need to change the docker images and executed commands according to your needs.
pipeline {
    agent any
    stages {
        stage('Build and test') {
            steps {
                script {
                    node {
                        docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
                            docker.image('mysql:5').inside("--link ${c.id}:db") {
                                /* Wait until mysql service is up */
                                sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
                            }
                            docker.image('centos:7').inside("--link ${c.id}:db") {
                                /*
                                 * Run some tests which require MySQL, and assume that it is
                                 * available on the host name `db`
                                 */
                                sh 'make check'
                            }
                        }
                    }
                }
            }
        }
    }
}
Please note:
The Docker container link feature used in this example is a legacy feature and may eventually be removed.
The pipeline will fail at make check, as make is not provided in the centos:7 image.
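Also, since the container link feature is legacy, a possible alternative (an untested sketch of my own; the throwaway network name and the db network alias are not part of the original example) is to put the containers on a user-defined Docker network instead of using --link:
node {
    // create a throwaway user-defined network instead of using the legacy --link flag
    def network = "test-${env.BUILD_ID}"
    sh "docker network create ${network}"
    try {
        docker.image('mysql:5').withRun("--network ${network} --network-alias db -e MYSQL_ROOT_PASSWORD=my-secret-pw") { c ->
            docker.image('mysql:5').inside("--network ${network}") {
                /* Wait until the mysql service is reachable under the alias `db` */
                sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
            }
            docker.image('centos:7').inside("--network ${network}") {
                sh 'make check'   // placeholder command, as in the original example
            }
        }
    } finally {
        sh "docker network rm ${network}"
    }
}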
More than half a year later I finally figured out it was much simpler than I thought. It can be done with docker-compose.
You need to make sure that your Jenkins has access to docker-compose. So if you are running Jenkins as a Docker container, ensure it has access to the Docker socket. Also, Jenkins is not likely to ship with docker-compose included (JENKINS-51898), so you will have to build your own Blue Ocean image that installs docker-compose.
Rather than copying the file below, check https://docs.docker.com/compose/install/ for the latest version!
# Dockerfile
FROM jenkinsci/blueocean
USER root
RUN curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose && \
chmod +x /usr/local/bin/docker-compose
USER jenkins
Once you have Jenkins and Docker up and running you can deploy a test version of your application with a regular docker-compose file, including all database and other containers you might need. You can install dependencies and start the tests by using docker-compose exec to execute commands inside containers started with docker-compose.
Note that docker-compose -f docker-compose.test.yml exec -T php composer install executes the composer install command in the container that was defined as the php service in the docker-compose file.
In the end, no matter the outcome of the tests, all containers and associated volumes (the -v flag) are shut down and removed.
# Jenkinsfile
pipeline {
    agent any
    stages {
        stage('Start') {
            steps {
                sh 'docker-compose -f docker-compose.test.yml up -d'
            }
        }
        stage('Composer install') {
            steps {
                sh 'docker-compose -f docker-compose.test.yml exec -T php composer install --no-interaction --no-progress --optimize-autoloader'
            }
        }
        stage('Test') {
            steps {
                sh 'docker-compose -f docker-compose.test.yml exec -T php <run test script>'
            }
        }
    }
    post {
        always {
            sh 'docker-compose -f docker-compose.test.yml down -v'
        }
    }
}

Store Kubernetes Cluster Credentials in Jenkins and use in declarative pipeline

I am trying to deploy to a k8s cluster using Helm 3 and Jenkins. Jenkins and k8s are running on different servers. I merged the kubeconfig files so that I have all the information in one config file in the .kube directory. I would like to deploy my app to the related environment and namespace according to the GIT_BRANCH value. I have two questions about the script below.
1. What is the best way to store the k8s cluster credentials and use them in the pipeline? I saw some plugins such as Kubernetes CLI, but I cannot be sure whether they cover my requirement. If I use this plugin, should I store the kubeconfig file on the Jenkins machine manually, or does the plugin already handle this by uploading the config file?
2. Should I change anything in the script below to follow best practices?
stage('Deploy to dev') {
    script {
        steps {
            if (env.GIT_BRANCH.contains("dev")) {
                def namespace = "dev"
                def ENV = "development"
                withCredentials([file(credentialsId: ...)]) {
                    // change context to the related namespace
                    sh "kubectl config set-context \$(kubectl config current-context) --namespace=${namespace}"
                    // Deploy with Helm
                    echo "Deploying"
                    sh "helm upgrade --install road-dashboard -f values.${ENV}.yaml --set tag=$TAG --namespace ${namespace}"
                }
            }
        }
    }
}
stage('Deploy to Test') {
    script {
        steps {
            if (env.GIT_BRANCH.contains("test")) {
                def namespace = "test"
                def ENV = "test"
                withCredentials([file(credentialsId: ...)]) {
                    // change context to the related namespace
                    sh "kubectl config set-context \$(kubectl config current-context) --namespace=${namespace}"
                    // Deploy with Helm
                    echo "Deploying"
                    sh "helm upgrade --install road-dashboard -f values.${ENV}.yaml --set tag=$TAG --namespace ${namespace}"
                }
            }
        }
    }
}
stage('Deploy to Production') {
    when {
        anyOf {
            environment name: 'DEPLOY_TO_PROD', value: 'true'
        }
    }
    steps {
        script {
            DEPLOY_PROD = false
            def namespace = "production"
            withCredentials([file(credentialsId: 'kube-config', variable: 'kubecfg')]) {
                // Change context to the related namespace
                sh "kubectl config set-context \$(kubectl config current-context) --namespace=${namespace}"
                // Deploy with Helm
                echo "Deploying to production"
                sh "helm upgrade --install road-dashboard -f values.${ENV}.yaml --set tag=$TAG --namespace ${namespace}"
            }
        }
    }
}
I have never tried this, but in theory the credentials variable is available as an environment variable. Try using KUBECONFIG as the variable name:
withCredentials([file(credentialsId: 'secret', variable: 'KUBECONFIG')]) {
    // change context to the related namespace
    sh "kubectl config set-context \$(kubectl config current-context) --namespace=${namespace}"
    // Deploy with Helm
    echo "Deploying"
    sh "helm upgrade --install road-dashboard -f values.${ENV}.yaml --set tag=$TAG --namespace ${namespace}"
}
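If that works, a declarative sketch of the dev stage could look like the following (untested and my own adaptation of the question's script; the kube-config credential id, the TAG variable, and the multibranch assumption behind when { branch 'dev' } are taken or inferred from the question):
pipeline {
    agent any
    stages {
        stage('Deploy to dev') {
            when { branch 'dev' }           // assumes a multibranch pipeline
            environment {
                NAMESPACE  = 'dev'
                DEPLOY_ENV = 'development'
            }
            steps {
                withCredentials([file(credentialsId: 'kube-config', variable: 'KUBECONFIG')]) {
                    // kubectl and helm both honour the KUBECONFIG environment variable
                    sh 'kubectl config set-context --current --namespace=$NAMESPACE'
                    sh 'helm upgrade --install road-dashboard -f values.$DEPLOY_ENV.yaml --set tag=$TAG --namespace $NAMESPACE'
                }
            }
        }
    }
}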
A workaround that worked for me:
withCredentials([file(credentialsId: 'k8s-dk-staging', variable: 'KUBECRED')]) {
    sh 'cat $KUBECRED > ~/.kube/config'
    sh './deploy-app.sh'
}
I don't like doing that; ideally I would like to use KUBECONFIG, but for now this is what works for me.
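If you go with that workaround, a small refinement (my own untested addition, written in scripted-pipeline style; wrap it in a script block if you use declarative) is to remove the copied config afterwards so the credentials do not linger on the agent:
withCredentials([file(credentialsId: 'k8s-dk-staging', variable: 'KUBECRED')]) {
    try {
        sh 'mkdir -p ~/.kube && cat "$KUBECRED" > ~/.kube/config'
        sh './deploy-app.sh'
    } finally {
        // clean up the copied kubeconfig after the deployment
        sh 'rm -f ~/.kube/config'
    }
}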

Using Jekyll docker inside Jenkins

I'm trying to build a jekyll website via my Jenkins server (which runs inside a container) and I have a stage in my Jenkinsfile that looks like this:
stage('Building Website') {
    agent {
        docker {
            image 'jekyll/jekyll:builder'
        }
    }
    steps {
        sh 'jekyll --version'
    }
}
The very first time I run my job it pulls the jekyll Docker image and runs fine (although it does fetch a bunch of gems before running jekyll, which doesn't happen when I run the container manually outside Jenkins), but the next jobs fail with this error:
jekyll --version
/usr/jekyll/bin/jekyll: exec: line 15: /usr/local/bundle/bin/jekyll: not found
Any ideas what I'm doing wrong here?
As you can see in the Jenkins log file, Jenkins runs docker with the -u 1000:1000 argument. Since this user does not exist in the jekyll/jekyll image, the command fails with the error .../bin/jekyll: not found.
Here is a sample Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'jekyll/jekyll:3.8'
            args '''
                -u root:root
                -v "${WORKSPACE}:/srv/jekyll"
            '''
        }
    }
    stages {
        stage('Test') {
            steps {
                sh '''
                    cd /srv/jekyll
                    jekyll --version
                '''
            }
        }
    }
}
To add to the other answer, note that the containerized Jenkins does not contain the docker binary, so docker commands will still fail.
A few solutions
Make a dockerfile that inherits from the jenkins image and installs docker as well, creating a new image.
Manually install docker inside of the container. This will work until you pull a new image, and you'll have to do it over again.
Open an interactive terminal into the jenkins container
docker container exec -it -u root <container id> bash
Then install docker
curl https://get.docker.com/ > dockerinstall && chmod 777 dockerinstall && ./dockerinstall
Exit the container and set perms on docker.sock
sudo chmod 666 /var/run/docker.sock
Finished!
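Once that is done, a trivial sketch (my own, not part of the original steps) to confirm from a pipeline that the Jenkins container can now reach the Docker daemon:
pipeline {
    agent any
    stages {
        stage('Verify docker access') {
            steps {
                // both should succeed once docker is installed in the Jenkins
                // container and the socket permissions allow access
                sh 'which docker'
                sh 'docker version'
            }
        }
    }
}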

SSH from jenkins to same host

I have a Jenkins instance running on my Raspberry Pi 3, and I also have my (simple) Apache web server running on the same Raspberry Pi.
I've got a pipeline in Jenkins to fetch a git repo, build it, and put (via scp) the build files on my web server.
I have an SSH private/public key setup, but it's a bit stupid (?) to have an SSH key when Jenkins is hosted on the same 'machine' with the same IP address, no?
Anyway, on my Raspberry Pi I have set up the authorized_keys file and the known_hosts file with the public key, and I've added the private key to Jenkins via the ssh-agent plugin.
Here is my Jenkinsfile that is used by Jenkins to define my pipeline:
node {
    stage('Checkout') {
        checkout scm
    }
    stage('install') {
        nodejs(nodeJSInstallationName: 'nodeJS10.5.0') {
            sh "npm install"
        }
    }
    stage('build') {
        nodejs(nodeJSInstallationName: 'nodeJS10.5.0') {
            sh "npm run build"
        }
    }
    stage('connect ssh and remove files') {
        sshagent (credentials: ["0527982f-7794-45d0-99b0-135c868c5b36"]) {
            sh "ssh pi@123.456.789.123 -p 330 rm -rf /var/www/html/*"
        }
    }
    stage('upload new files') {
        sshagent (credentials: ["0527982f-7794-45d0-99b0-135c868c5b36"]) {
            sh "scp -P 330 -r ./build/* pi@123.456.789.123:/var/www/html"
        }
    }
}
Here is the output from the second to last job that is failing:
[Pipeline] }
[Pipeline] // nodejs
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (connect ssh and remove files)
[Pipeline] sh
[Deploy_To_Production] Running shell script
+ ssh pi@123.456.789.123 -p 330 rm -rf /var/www/html/asset-manifest.json /var/www/html/css /var/www/html/favicon.ico /var/www/html/fonts /var/www/html/images /var/www/html/index.html /var/www/html/manifest.json /var/www/html/service-worker.js /var/www/html/static /var/www/html/vendor
Host key verification failed.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 255
Finished: FAILURE
Note: I've changed my IP address and my ssh port for security reasons.
Manually I can ssh to my Raspberry Pi, and I can execute the commands manually from my laptop (both from the same and from another domain).
I've also port-forwarded the local IP so that I can connect to it via SSH when I'm not home.
I guess I'm doing something wrong with the SSH keys etc., but I'm no expert whatsoever!
Can anyone help?
I need 4 more reputation points to comment, so I must write an answer. :)
Try using -v to debug the ssh connection:
stage('connect ssh and remove files') {
    sshagent (credentials: ["0527982f-7794-45d0-99b0-135c868c5b36"]) {
        sh "ssh -v pi@123.456.789.123 -p 330 rm -rf /var/www/html/*"
    }
}
On the other hand,
Host key verification failed means that the host key of the remote host has changed, or that you don't have the host key of the remote host at all. So first, try just ssh -v pi@123.456.789.123 as the Jenkins user, from the Jenkins host.
The issue was indeed that the host key verification was failing. I think this was due to not trusting the host.
But the real issue was pointed out by 3sky (see the other answer). I needed to log in as the jenkins user and try to ssh to my Raspberry Pi (both are on the same machine).
So these are the steps I did:
Log in via ssh to my Raspberry Pi:
ssh -v pi@123.456.789.123 -p 330
Then I switched to the jenkins user. After some Google searching I found out how:
sudo su -s /bin/bash jenkins
Then I ssh'ed again to my own machine (where I was already ssh'ed in), so that I get the prompt for trusting this host once and for all:
ssh -v pi@123.456.789.123 -p 330
This solved my issue! Big thanks to 3sky for helping out!
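An alternative that avoids the interactive prompt entirely (my own suggestion, not part of the answers above) is to pre-populate known_hosts from the pipeline before the ssh/scp stages:
stage('prepare known_hosts') {
    // record the host key once, non-interactively; host and port are the
    // placeholder values used in the question
    sh 'mkdir -p ~/.ssh && ssh-keyscan -p 330 123.456.789.123 >> ~/.ssh/known_hosts'
}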

run jenkins pipeline agent with sudo

I have a Jenkins server running in a Docker container, with access to Docker on the host system; so far it is working well. Now I want to set up a pipeline that tests a script inside a Docker container.
Jenkinsfile:
pipeline {
    agent { docker 'nginx:1.11' }
    stages {
        stage('build') {
            steps {
                sh 'nginx -t'
            }
        }
    }
}
Error Message:
> + docker pull nginx:1.11
>
> Warning: failed to get default registry endpoint from daemon (Got
> permission denied while trying to connect to the Docker daemon socket
> at unix:///var/run/docker.sock: Get
> http://%2Fvar%2Frun%2Fdocker.sock/v1.29/info: dial unix
> /var/run/docker.sock: connect: permission denied). Using system
> default: https://index.docker.io/v1/
>
> Got permission denied while trying to connect to the Docker daemon
> socket at unix:///var/run/docker.sock: Post
> http://%2Fvar%2Frun%2Fdocker.sock/v1.29/images/create?fromImage=nginx&tag=1.11:
> dial unix /var/run/docker.sock: connect: permission denied
>
> script returned exit code 1
My problem is that Jenkins needs to run the docker command with sudo, but how do I tell the agent to run the command with sudo?
I have faced the same issue. After analysing the console log, I found that the reason is that the Docker Jenkins plugin starts a new container with a specific option, -u 107:112:
...
docker run -t -d -u 107:112 ...
...
After trying many options, such as adding jenkins to the sudo group (which did not work because the jenkins user does not exist in the container) or adding USER root to the Dockerfile, none of them did the trick.
Finally I found a solution: use args in the docker agent to overwrite the -u option. This is my Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'ubuntu'
            args '-u root:sudo -v $HOME/workspace/myproject:/myproject'
        }
    }
    stages {
        stage("setup_env") {
            steps {
                sh 'apt-get update -y'
                sh 'apt-get install -y git build-essential gcc cmake make'
            }
        }
        stage("install_dependencies") {
            steps {
                sh 'apt-get install -y libxml2-dev'
            }
        }
        stage("compile_dpi") {
            steps {
                sh 'cd /myproject && make clean && make -j4'
            }
        }
        stage("install_dpi") {
            steps {
                sh 'cd /myproject && make install'
            }
        }
        stage("test") {
            steps {
                sh 'do some test here'
            }
        }
    }
    post {
        success {
            echo 'Do something when it is successful'
            bitbucketStatusNotify(buildState: 'SUCCESSFUL')
        }
        failure {
            echo 'Do something when it is failed'
            bitbucketStatusNotify(buildState: 'FAILED')
        }
    }
}
There may be a security issue here, but it is not a problem in my case.
I'd solve the problem differently: match the group id of the jenkins group inside the container to that of the docker socket you've mounted as a volume. I do this with an entrypoint that runs as root, looks up the gid of the socket, and, if that doesn't match the gid inside the current container, runs groupmod to correct it inside the container. Then I drop privileges to the jenkins user to launch Jenkins. This entrypoint runs on every startup, but fairly transparently to the Jenkins app that is launched.
All the steps to perform this are included in this github repo: https://github.com/sudo-bmitch/jenkins-docker/
You can work around that by:
1- In your Dockerfile add jenkins to the sudoers file:
RUN echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
2- Add an extra step in your Jenkinsfile to give jenkins the right permissions to use docker:
pipeline {
    agent none
    stages {
        stage("Fix the permission issue") {
            agent any
            steps {
                sh "sudo chown root:jenkins /run/docker.sock"
            }
        }
        stage('Step 1') {
            agent {
                docker {
                    image 'nezarfadle/tools'
                    reuseNode true
                }
            }
            steps {
                sh "ls /"
            }
        }
    }
}
As others have suggested, the issue is that jenkins does not have permission to run docker containers. Let's go over the ways you could launch jenkins first, and then see what could be done in each of these ways.
1. running jenkins manually
Surely you could download and run Jenkins with Java as suggested here. In this method, you could do several things to allow your jenkins user to use docker:
a. give the jenkins user root access:
I do not suggest this; after all, you would be giving your pipelines access to everything! So you probably do not want this to happen.
b. add jenkins user to docker group
As explained here, you can manage docker as a non-root user: just add your user to the docker group and that's all. I recommend it if you know who is going to use docker (because, in a way, you are giving them root access through docker).
c. make docker rootless
This is a new feature docker added to its arsenal recently. You can read in detail what it implies here. To tell you the truth, I am not a fan of this feature! The reason is that you cannot (at least I could not find a way to) make it work for a user in a container (as you need to stop the docker service to make it happen), and I also had some difficulties configuring DNS when using rootless mode. But it should be fine if you are not in a container.
2. running jenkins in docker
This method is actually more troublesome! I struggled with the ways I could use docker in the jenkins container, but in the end I got the results I needed, so it was worth the effort.
To run docker in jenkins (which is also a docker container itself) you have three ways:
1. use dind (docker in docker)
It is pretty straightforward: you run a dind image and connect docker in the jenkins container to it; without any special permission handling you can use docker at will.
2. use dood (docker outside of docker)
Mount the docker socket path as a volume in the docker run script for your jenkins. Note that you need to use one of the two ways I explained above (in "running jenkins manually") to be able to use docker; it can be a bit tricky, but it is possible.
3. run the agent as a docker container in a different environment & connect the remote agent in jenkins
Finally, it is possible to run the agent separately and connect it to jenkins as a remote agent. Although this does not exactly answer your question, it is a way you could use.
These are the ways to just run docker in jenkins. You will probably have some issues after you run a docker container as the agent, such as permission issues in the agent container itself, most likely caused by the agent's user (if you like, you can check the user with the command
docker exec -it [agent container id] whoami
e.g., in this sample the user in the agent is node:
agent {
    docker { image 'node:14-alpine' }
}
steps {
    sh 'npm i -g random'
}
so it would throw an error, because the node user does not have permission to install npm modules globally (I know, it is weird!).
So, as luongnv89 mentioned, you could change the user running the docker container like this:
agent {
    docker {
        image 'node:14-alpine'
        args '-u root'
    }
}
Hope this was helpful understanding the whole picture. 😊
What worked for me was
node() {
    String jenkinsUserId = sh(returnStdout: true, script: 'id -u jenkins').trim()
    String dockerGroupId = sh(returnStdout: true, script: 'getent group docker | cut -d: -f3').trim()
    String containerUserMapping = "-u $jenkinsUserId:$dockerGroupId "
    docker.image('image')
          .inside(containerUserMapping + ' -v /var/run/docker.sock:/var/run/docker.sock:ro') {
        sh "..."
    }
}
This way the user in the container still uses the jenkins user id and group id, avoiding permission conflicts with shared data, but is also a member of the docker group inside the container, which is required to access the docker socket (/var/run/docker.sock).
I prefer this solution as it doesn't require any additional scripts or Dockerfiles.
I just had the same exact issue. You need to add the jenkins user to the docker group:
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GROUP=docker
JENKINS_USER=jenkins
if [ -S ${DOCKER_SOCKET} ]; then
    DOCKER_GID=$(stat -c '%g' ${DOCKER_SOCKET})
    sudo groupadd -for -g ${DOCKER_GID} ${DOCKER_GROUP}
    sudo usermod -aG ${DOCKER_GROUP} ${JENKINS_USER}
fi
# Start Jenkins service
sudo service jenkins restart
After you run the above, pipelines can successfully start docker containers.
I might have found a reasonably good solution for this.
Setup
I run Jenkins as a container and use it to build containers on the dockerhost it's running on. To do this, I pass /var/run/docker.sock as a volume to the container.
Just to reiterate the disclaimer some other people already stated: Giving access to the docker socket is essentially like giving root access to the machine - be careful!
I assume that you've already installed docker into your Jenkins Image.
Solution
This is based on the fact that the docker binary is not in the first directory of $PATH. We basically place a shell script there that runs sudo docker instead of just the plain docker command (and passes the parameters along).
Add a file like this to your jenkins repository and call it docker_sudo_overwrite.sh:
#! /bin/sh
# This basically is a workaround to add sudo to the docker command, because aliases don't seem to work
# To be honest, this is a horrible workaround that depends on the order in $PATH
# This file needs to be placed in /usr/local/bin with execute permissions
sudo /usr/bin/docker "$@"
Then extend your Jenkins Dockerfile like this:
# Now we need to allow jenkins to run docker commands! (This is not elegant, but at least it's semi-portable...)
USER root
## allowing jenkins user to run docker without specifying a password
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/bin/docker" >> /etc/sudoers
# Create our alias file that allows us to use docker as sudo without writing sudo
COPY docker_sudo_overwrite.sh /usr/local/bin/docker
RUN chmod +x /usr/local/bin/docker
# switch back to the jenkins-user
USER jenkins
This gives the jenkins service user the ability to run the docker binary as root with sudo (without providing a password). Then we copy our script to /usr/local/bin/docker which "overlays" the actual binary and runs it with sudo. If it helps, you can look at my example on Github.
Same issue here.
[...]
agent { docker 'whatever_I_try_doesnt_work'} # sudo, jenkins user in dockerroot group etc
[...]
So my workaround is to add it as one of the steps in the build stage of the pipeline, as follows:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'sudo docker pull python:3.5.1'
            }
        }
    }
}
