Pass binary from Jenkins host to agent

Can you pass a binary from a Jenkins host to an agent?
I've got Jenkins running in Kubernetes, and the Terraform plugin installed on my Jenkins master, with the binary located at /var/jenkins_home/tools/org.jenkinsci.plugins.terraform.TerraformInstallation/terraform/terraform.
I would like to pass this to my Jenkins agent by configuring my pod template and mounting the host volume path /var/jenkins_home/tools/org.jenkinsci.plugins.terraform.TerraformInstallation/terraform/terraform to the agent's path /usr/bin/terraform.
But this doesn't seem to work as expected.
When I exec into the agent and run terraform version I get the error bash: terraform: command not found, indicating that it doesn't have the binary.
I can see a terraform directory mounted in /usr/bin but without the binary. What I expect is for terraform to be installed on the agent. But my thinking might be incorrect here.
Is it possible to do this? Does anyone have any experience with it?

As @David Maze mentioned, a binary from Jenkins needs to be manually installed on every node, which can be difficult to manage. However, you can set Jenkins to run pipeline steps inside a container whose image contains the tools you need, which simplifies this case.
Read more: execution-env-jenkins.
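For example, a minimal scripted-pipeline sketch using the Kubernetes plugin (a recent plugin version is assumed; the hashicorp/terraform image and the container name are illustrative placeholders, not your exact setup):
podTemplate(containers: [
    // any image that already ships the terraform binary will do
    containerTemplate(name: 'terraform', image: 'hashicorp/terraform:light', command: 'cat', ttyEnabled: true)
]) {
    node(POD_LABEL) {
        stage('Terraform') {
            container('terraform') {
                // terraform comes from the container image, not from the Jenkins master
                sh 'terraform version'
            }
        }
    }
}
This way the agent pod never needs the master's tool directory mounted into it.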

One alternative is to use the slaves setup plugin. We use it to install and configure internal tools on nodes based on labels. A lot less hassle than @Malgorata's (and our previous) manual copy approach.
Not sure how well it works with Kubernetes, as that's not part of our configuration.

Related

CICD Jenkins with Docker Container, Kubernetes & Gitlab

I have a workflow where
1) Team A & Team B would push changes in an app to a private gitlab (running in docker container) on Server A.
2) Their app should contain Dockerfile or docker-compose.yml
3) Gitlab should trigger a Jenkins build (Jenkins runs in a Docker container, also on Server A) and execute the usual build steps like tests
4) Jenkins should build a new docker image and deploy it.
Question:
If Team A needs packages like Maven and npm for web applications, but Team B needs other packages like C++ etc., how do I solve this issue?
Because I don't think it is correct for my Jenkins container to have all these packages (mvn, npm, yarn, c++ etc.) installed and then execute the Jenkins build.
I was thinking that Team A should get a container with packages it needs installed. Similarly for Team B.
I want to make use of Docker, Kubernetes, Jenkins and Gitlab. (As much container technology as possible)
Please advise me on the workflow. Thank you
I would like to share a developer's perspective, which is different from the "operations-centric" state of mind presented in the question.
For developers, Jenkins is also a tool that can trigger the build of the application. Yes, of course, the built artifact can be a Docker image as well, but this is not what developers are really concerned about. You've referred to this as "the usual build things like test", but developers have entire ecosystems around this "usual build".
For example, mvn, which you've mentioned, has great dependency resolution capabilities; it can resolve the dependencies on its own. Roughly the same holds for other build tools. I'll stick with Maven for the rest of this answer.
So you don't need to maintain dependencies yourself, but as a Jenkins maintainer you should provide the technical ability to actually build the product (which means running Maven, which in turn resolves/downloads all the dependencies, runs the tests, produces test results, and can even create a Docker image or deploy that image to some image repository if you wish).
So developers who use these build technologies and maintain their own scripts (declarative, as in the case of Maven, or something like Makefiles in the case of C++) should be able to run their own tools.
Now this picture doesn't contradict containerization:
The Jenkins image can contain maven/make/npm, really a small number of tools, just to run the build. The actual build scripts can be part of the application's source code base (maintained in Git).
So when Jenkins gets the notification about the build, it should check out the source code, run some script (like mvn package), show the test results, and then, as a separate step or driven by Maven, create an image of your application and upload it to some repository or supply it to the Kubernetes cluster, depending on your actual DevOps needs.
Note that during mvn package, Maven will download all the dependencies (third-party packages) into the Jenkins workspace and compile everything with the Java compiler, which you obviously also need to make available on the Jenkins machine.
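A minimal Jenkinsfile sketch of that flow (it assumes the Docker Pipeline plugin is available; the Maven image tag, report path and image name are placeholders):
pipeline {
    agent any
    stages {
        stage('Build & Test') {
            agent { docker { image 'maven:3-openjdk-11' } }    // Maven and the JDK come from the image
            steps {
                sh 'mvn -B package'                            // resolves dependencies, compiles, runs tests
                junit 'target/surefire-reports/*.xml'          // publish the test results
            }
        }
        stage('Build image') {
            steps {
                script {
                    docker.build('your-registry/your-app:latest')  // placeholder name; push/deploy as needed
                }
            }
        }
    }
}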
Whoa, that's a big one and there are many ways to solve this challenge.
Here are 2 approaches which you can apply:
Solve it by using different types of Jenkins Slaves
In the long run you should consider running Jenkins workloads on slaves.
This is not only desirable for your use case but also scales much better on higher workloads.
In the worst case, dense workloads can kill your Jenkins master.
See https://wiki.jenkins.io/display/JENKINS/Distributed+builds for reference.
When defining slaves in Jenkins (e.g. with the ec2 plugin for AWS integration) you can use different slaves for different workloads. That means you prepare a slave (or slave image, in AWS this would be an AMI) for each specific purpose you've got. One for Java, one for node, you name it.
These defined slaves can then be used within your jobs by checking the Restrict where this project can be run option and entering the label of your slave.
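In a Pipeline job the same restriction is expressed with a label in the Jenkinsfile; a minimal sketch (the 'java-build' label is a placeholder for whatever label you gave your slave):
pipeline {
    agent { label 'java-build' }      // runs only on slaves carrying this label
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package'   // illustrative build command for the Java slave
            }
        }
    }
}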
Solve it by doing everything within the Docker Environment
The simplest solution in your case would be to just use the Docker infrastructure to build Docker images of any kind, which you'll then push to your private Docker registry (all big cloud providers offer such container registry services) and download at deploy time. This would save you the pain of installing new technology stacks every time you change them.
I don't really know how familiar you are with Docker or other containerization technologies, so I'll roughly outline the solution for you.
Create a Dockerfile with the desired base image as a starting point. Just google them or look for them on Docker Hub. Here's a sample for Node.js:
# base image: any official Node image works; pick the version you need
FROM node:lts
MAINTAINER devops@yourcompany.biz
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --silent
# I recommend using a .dockerignore file to not copy unnecessary stuff into your image
COPY . .
CMD [ "npm", "start" ]
Adapt the image and create a Jenkinsfile for your CI/CD pipeline. There you should build and push your docker image like this:
stage('build docker image') {
  ...
  steps {
    script {
      docker.build('your/registry/your-image', '--build-arg NODE_ENV=production .')
      docker.withRegistry('https://yourregistryurl.com', 'credentials token to access registry') {
        docker.image('your/registry/your-image').push(env.BRANCH_NAME + '-latest')
      }
    }
  }
}
Do your deployment (in my case it's done via Ansible) by pulling and running the previously pushed image in your deployment scripts.
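Not the Ansible setup mentioned here, but as a rough sketch, a deploy stage could pull and run the pushed image directly with Docker (the registry, image and container names are placeholders carried over from the snippet above):
stage('deploy') {
    steps {
        sh '''
            docker pull yourregistryurl.com/your/registry/your-image:master-latest
            docker rm -f your-app || true
            docker run -d --name your-app yourregistryurl.com/your/registry/your-image:master-latest
        '''
    }
}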
If you defined your questions in a little more detail, you'd get better answers.

Jenkins configuration using command line

I am trying to move the complete eco-system of our SAAS product to Kubernetes (and use Docker containers).
I am supposed to give a bash script which will set up everything. Only manual intervention should be setting up the Kubernetes cluster and mounting Persistent Volumes.
We were using Jenkins for code deployment and cron jobs. I am able to create the Jenkins service, but I cannot find a way to configure it using the command line. I tried finding ways online but could not find any good documentation.
First, welcome to Kubernetes. Second, there are a lot of tools and templates out there; I would recommend you check out what Helm is.
This is the Jenkins chart if you want to check:
https://github.com/helm/charts/tree/master/stable/jenkins
There is also a "fork" of Jenkins for containerized environments that I like; you can read more about Jenkins-X here.
You can use the Helm package manager and simply install the stable Jenkins chart.
Before using Helm you have to set up Tiller on the Kubernetes cluster.
$ helm install --name my-release stable/jenkins
This installs the stable version of Jenkins using Helm.
https://github.com/helm/charts/tree/master/stable/jenkins
I can add that you can store the Jenkins home folder, as well as the plugins and artifacts folders, on a persistent volume and mount that volume into the Jenkins pod as part of the Helm installation. You can also make daily snapshots/backups of the Jenkins disk. In this way the Jenkins deployment becomes very smooth, quick and reliable.

What are the benefits of Docker with Jenkins Pipelines?

I'm new to Jenkins/Docker. So far I've found lots of official Jenkins documents recommending use with Docker, but the necessity and advantages of running Jenkins as a Docker container remain vague to me. In my case, it's a Node/React app and the required environment is not complicated.
Disadvantages I've found running Jenkins as a Docker container:
High hard drive usage
Directory paths in the Docker container are more complicated to deal with, especially when working with SSH in pipeline scripts
Without Docker, I can easily achieve the same, and there's also the Blue Ocean plugin available.
So, what are the main benefits of Docker with Jenkins/Jenkins Pipelines? Are there pitfalls for my Node application using Jenkins without Docker? Articles to help me dive in are also appreciated.
Jenkins as Code
The main advantage of Jenkins in Docker is that it helps you get: Jenkins as Code
Advantages of Jenkins as code are:
SCM: code can be put under version control
History is transparent; backup and roll-back become easy.
The code is the documentation of your Jenkins setup.
Jenkins becomes portable, so you can run Jenkins locally to try new plugins etc.
Jenkins pipelines work really well with Docker. As @Ivthillo mentioned: there is no need to install additional tools, you just use images of those tools. Jenkins will download them from the internet (Docker Hub) for you.
For each stage in the pipeline you can use a different image (i.e. tool). Essentially you get "micro Jenkins agents" which only exist temporarily. This makes your Jenkins setup much cleaner.
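A minimal declarative sketch of that idea (the image names are just examples; any tool image from Docker Hub works):
pipeline {
    agent none
    stages {
        stage('Backend tests') {
            agent { docker { image 'maven:3-openjdk-11' } }   // temporary "micro agent" with Maven
            steps { sh 'mvn -B test' }
        }
        stage('Frontend tests') {
            agent { docker { image 'node:lts' } }             // a different image for a different tool
            steps { sh 'npm ci && npm test' }
        }
    }
}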
Disadvantage is:
Jenkins initial (Groovy) configuration is poorly documented on the web.
Simple Node setup
Most arguments also hold for a simple Node setup.
Changing the Node version, or running multiple jobs each with a different Node version, becomes easy.
Add your Jenkinsfile inside the Node repo. So everyone with a Jenkins+Docker setup can run your CI/CD.
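For example, a Jenkinsfile like this could live in the root of the Node repo (a sketch; the Node version and npm scripts are assumptions about your project):
pipeline {
    agent { docker { image 'node:lts' } }   // changing the Node version is a one-line edit here
    stages {
        stage('Install') { steps { sh 'npm ci' } }
        stage('Test')    { steps { sh 'npm test' } }
        stage('Build')   { steps { sh 'npm run build' } }
    }
}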
And finally: gathering knowledge on running your app inside a container will enable you to run your production app in Docker in the future.
Getting started
A while ago I wrote a small blog on how to get started with Jenkins and Docker, i.e. creating a Jenkins image for development which you can launch and destroy in seconds.

Run Jenkins master and slave with Docker

I want to set up a Jenkins master on server A and a slave on server B, using Docker.
Both servers are virtual machines dedicated for Jenkins.
Currently I have started a Docker container on server A for the master, based on the official Jenkins Docker image. But what Docker image should I use for the Jenkins slave?
That actually depends on the environment and tools you need in your build environment. For example, if you build a C project, you would need an image containing a C compiler and possibly make if you use Makefiles. If you build a Java project, you would need a JDK with a Java compiler and possibly Ant / Maven / Gradle if you use them as part of your build.
You can use the evarga/jenkins-slave as a good starting point for your build slave.
This image already contains a JDK. If you simply need a JDK and Maven on your build slave, you can build your Docker image with the following Dockerfile:
FROM evarga/jenkins-slave
RUN apt-get update && apt-get install -y maven
Using Docker images for build slaves is actually a good idea. Some of the reasons appear in Templating Jenkins Build Environments with Docker Containers:
Docker has established itself as a popular and convenient way to bootstrap isolated and reproducible environments, which enables Docker containers to be the most maintainable slave environments. Docker containers' tooling and other configurations can be version controlled in an environment definition called a Dockerfile, and Dockerfiles allow multiple identical containers to be created quickly using this definition, or more customized off-shoots to be created by using that Dockerfile's image as a base.
I suggest you try using dynamic/ephemeral Docker nodes, instead of manually creating nodes and connecting to them via SSH. Take a look at https://engineering.riotgames.com/news/putting-jenkins-docker-container; it's very powerful and I think it's one of the killer use cases for Docker.

Setup Jenkins to monitor external job

I read the part of the Jenkins wiki that covers setting up a remote job to be monitored by a Jenkins instance. However, the documentation is confusing as it doesn't tell me what to configure on the Jenkins machine or the remote machine (the one that does the job).
Further, the documentation mentions Java commands that can be fired directly and others that need a servlet container. Do I have to install a servlet container on the remote machine?
Maybe it's all there, but to me it reads like a mix of two different documents. Can you please clarify:
What do I need to do on the remote machine?
What do I need to do on the Jenkins machine?
Thank you.
In Jenkins, you need to create a job using the "Monitor an external job" option. Give this a name, for example "nightly-backup".
On the machine where the external job is running, you need Java installed and some basic Jenkins JAR files, so that the job results can be sent to Jenkins.
As the wiki page says, on some versions of Debian or Ubuntu you can do this with:
sudo apt-get install jenkins-external-tool-monitor
Otherwise, you have to copy a bunch of JARs manually — i.e. those listed on the wiki page — to your remote machine.
Once you have the JARs available on your remote machine, you can execute whichever command you like there, so long as you prefix it with some Jenkins information: where to find the Jenkins installation, the main Java JAR, and the job name:
JENKINS_HOME=http://my-jenkins/ java -jar jenkins-core-*.jar nightly-backup ./backup.sh --nightly /home
Where http://my-jenkins/ is the base URL to Jenkins, nightly-backup matches the name of the "Monitor an external job" you created in Jenkins, and ./backup.sh --nightly /home is the command you wish to run.
The output of this ./backup.sh command will show up in Jenkins automatically once it's complete.
It looks like this is now called "jenkins-external-job-monitor", so you'd type:
sudo apt-get install jenkins-external-job-monitor
