Jenkins: Using result artifacts in different steps - stash and unstash

I have a Jenkinsfile declarative pipeline which has two steps:
build an RPM file inside a docker container
build a docker image with the RPM and run it
The first step is built inside a docker container because it requires a specific app to build the RPM.
The second step runs directly on a Jenkins slave, which can be a different slave than the one that ran the first step.
In order to use the RPM produced by the first step I'm currently using the stash and unstash steps. If I do not use them, the second step doesn't have access to the RPM file.
The RPM file is about 215 MB, which is more than the recommended 100 MB limit for stash, so I'd like to know if there is a better solution.
pipeline {
    agent any
    options {
        timestamps()
    }
    stages {
        stage('Gradle: build') {
            agent {
                docker {
                    image 'some-internal-image'
                }
            }
            steps {
                sh """
                    chmod +x gradlew
                    ./gradlew buildRpm
                """
            }
            post {
                success {
                    stash name: 'rpm', includes: 'Server/target/myapp.rpm'
                }
            }
        }
        stage('Gradle: build docker image') {
            steps {
                unstash 'rpm'
                sh """
                    chmod +x gradlew
                    ./gradlew buildDockerImage
                """
            }
        }
    }
}

You could use Docker's multi-stage build, but I'm not aware of a nice implementation of it using Jenkins Pipelines.
We also stash several hundred megabytes to distribute them to build agents. I've experimented with uploading the artifacts to S3 and downloading them again from there, with no visible performance improvement (it only takes load off the Jenkins master).
So my very opinionated recommendation: keep it as it is and optimize once you really run into performance/load issues.
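If you do want to try the S3 route mentioned above, a minimal sketch of the two stages using the Pipeline: AWS Steps plugin might look like the following. The region, credentials ID, bucket name, and paths are placeholder assumptions, not values from the question:

stage('Gradle: build') {
    steps {
        sh './gradlew buildRpm'
        // Assumes the "Pipeline: AWS Steps" plugin and an AWS credentials entry named 'aws-artifacts'
        withAWS(region: 'eu-west-1', credentials: 'aws-artifacts') {
            // Upload the RPM so a later stage (possibly on another agent) can fetch it
            s3Upload(bucket: 'my-build-artifacts', file: 'Server/target/myapp.rpm',
                     path: "rpms/${env.BUILD_NUMBER}/myapp.rpm")
        }
    }
}
stage('Gradle: build docker image') {
    steps {
        withAWS(region: 'eu-west-1', credentials: 'aws-artifacts') {
            s3Download(bucket: 'my-build-artifacts',
                       path: "rpms/${env.BUILD_NUMBER}/myapp.rpm",
                       file: 'Server/target/myapp.rpm')
        }
        sh './gradlew buildDockerImage'
    }
}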

You can use Artifactory or any other binary repository manager.
From Artifactory's webpage:
As the first, and only, universal Artifact Repository Manager on the
market, JFrog Artifactory fully supports software packages created by
any language or technology.
...
...Artifactory provides an end-to-end, automated and bullet-proof
solution for tracking artifacts from development to production.
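As a rough sketch of how that could look with the JFrog Artifactory Jenkins plugin; the server ID 'my-artifactory', the 'rpm-local' repository, and the paths are assumptions for illustration, not values from the answer:

stage('Publish RPM') {
    steps {
        // Assumes the Artifactory plugin and a configured server with ID 'my-artifactory'
        rtUpload(
            serverId: 'my-artifactory',
            spec: """{
                "files": [{
                    "pattern": "Server/target/myapp.rpm",
                    "target": "rpm-local/myapp/${env.BUILD_NUMBER}/"
                }]
            }"""
        )
    }
}
stage('Gradle: build docker image') {
    steps {
        rtDownload(
            serverId: 'my-artifactory',
            spec: """{
                "files": [{
                    "pattern": "rpm-local/myapp/${env.BUILD_NUMBER}/myapp.rpm",
                    "target": "Server/target/",
                    "flat": "true"
                }]
            }"""
        )
        sh './gradlew buildDockerImage'
    }
}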

Related

How to build a FastAPI Python script on Jenkins so that the FastAPI app runs forever

I have written a FastAPI app and I want to build a CI/CD pipeline for it. I want the FastAPI app to run forever until I manually interrupt it.
I am very new to Jenkins. I have written the following Jenkins pipeline script:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Get some code from a GitHub repository
                git url: 'https://gitlab.com/something1234/test.git', branch: 'hello'
            }
        }
        stage('deploy') {
            steps {
                sh """
                    ls
                    #!/bin/bash
                    echo $PATH
                    echo $HOME
                    . /home/soleman/anaconda3/bin/activate tensor2
                    conda activate tensor2
                    python api.py
                """
            }
        }
    }
}
When I start building this script, the build runs forever instead of succeeding because the Python API keeps running, as shown in the figure.
How can I build and deploy it successfully? Please guide me.
I don't want to use Docker, as I am utilizing the system's own variables.
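One common approach (a sketch of an assumption, not something stated in the question) is to start the API as a detached background process so the sh step returns, and to set JENKINS_NODE_COOKIE so Jenkins' process tree killer does not terminate it when the build finishes:

stage('deploy') {
    steps {
        // JENKINS_NODE_COOKIE=dontKillMe prevents Jenkins from killing the
        // background process once the build completes; nohup detaches it from the shell.
        sh '''
            . /home/soleman/anaconda3/bin/activate tensor2
            JENKINS_NODE_COOKIE=dontKillMe nohup python api.py > api.log 2>&1 &
        '''
    }
}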

How to use a Jenkinsfile for these build steps?

I'm learning how to use Jenkins and working on configuring the build in a Jenkinsfile instead of through the Jenkins UI.
The source code management step for building from Bitbucket:
The build step for building a Docker container:
The build is of type multi configuration project:
Reading the Jenkinsfile documentation at https://www.jenkins.io/doc/book/pipeline/jenkinsfile/index.html and creating a new build using Pipeline:
I'm unsure how to configure the steps I've set up via the UI: Source Code Management & Build. How do I convert the Docker and Bitbucket config so it can be used with a Jenkinsfile?
The SCM configuration does not change, regardless of whether you are using the UI configuration or a pipeline, although in theory you can do the git clone from steps within the pipeline if you really insist on converting the SCM setup into pure pipeline steps.
The pipeline can have multiple stages, and each stage can have a different execution environment. You can use the Docker Pipeline plugin, or you can use plain sh to issue the docker commands on the build agent.
Here is small sample from one of my manual build pipelines:
pipeline {
    agent none
    stages {
        stage('Init') {
            agent { label 'docker-x86' }
            steps {
                checkout scm
                sh 'docker stop demo-001c || true'
                sh 'docker rm demo-001c || true'
            }
        }
        stage('Build Back-end') {
            agent { label 'docker-x86' }
            steps {
                sh 'docker build -t demo-001:latest ./docker'
            }
        }
        stage('Test') {
            agent {
                docker {
                    label 'docker-x86'
                }
            }
            steps {
                sh 'docker run --name demo-001c demo-001:latest'
                sh 'cd test && make test-back-end'
            }
        }
    }
}
You need to create a Pipeline type of project and specify the SCM configuration in the General tab. In the Pipeline tab, you will have the option to select Pipeline script or Pipeline script from SCM. It's always better to start with Pipeline script while you are building and modifying your workflow. Once it's stabilized, you can add it to the repository.
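As a rough sketch of how the UI-configured Source Code Management and Docker build steps could be expressed in a Jenkinsfile; the repository URL, credentials ID, branch, and image name below are placeholders, since the real values are only visible in the screenshots:

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Equivalent of the Source Code Management section in the UI
                git url: 'https://bitbucket.org/your-team/your-repo.git',
                    credentialsId: 'bitbucket-credentials',
                    branch: 'master'
            }
        }
        stage('Build Docker image') {
            steps {
                // Equivalent of the Docker build step configured in the UI
                sh 'docker build -t your-image:latest .'
            }
        }
    }
}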

Creating a Python pipeline on Jenkins but getting access denied from Docker

I've created a Jenkinsfile in my Git repository that is defined as this:
pipeline {
    // The 'none' parameter in the agent section means that no global agent will be allocated for the entire
    // Pipeline's execution and that each stage directive must specify its own agent section.
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    // This image parameter (of the agent section's docker parameter) downloads the python:3.8
                    // Docker image and runs this image as a separate container. The Python container becomes
                    // the agent that Jenkins uses to run the Build stage of the Pipeline project.
                    image 'python:3.8.3'
                }
            }
            steps {
                // This sh step runs the Python command to compile the application
                sh 'pip install -r requirements.txt'
            }
        }
    }
}
When I tried to run the job with this pipeline, I got the following error:
I also tried the image python:latest, but that didn't work either.
Can someone explain this to me? :)
Go to Computer Management -> Local Users and Groups and make sure the user Jenkins runs as is added to the docker-users group.

Jenkins: avoid tool installation if it is already installed

Not a Jenkins expert here. I have a scripted pipeline with a tool installation (Node). Unfortunately it was configured to pull in other dependencies, which now takes about 250 seconds overall. I'd like to add a condition to skip this installation if it (Node with its packages) was already installed previously, but I don't know where to start. Perhaps Jenkins stores metadata from previous runs that can be checked?
node {
    env.NODEJS_HOME = "${tool 'Node v8.11.3'}"
    env.PATH = "${env.NODEJS_HOME}/bin:${env.PATH}"
    env.PATH = "/opt/xs/bin:${env.PATH}"
    // ...
}
Are you using dynamic Jenkins agents (Docker containers)? In that case the tools will be installed every time you run a build.
Mount volumes into the containers, use persistent agents, or build your own Docker image with Node.js preinstalled (see the sketch below).
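For example, a declarative docker agent can mount a named volume so that previously downloaded Node packages survive between builds. The image name, volume name, and mount path here are assumptions for illustration:

pipeline {
    agent {
        docker {
            image 'node:8.11.3'
            // Reuse a named Docker volume across builds so the npm cache persists;
            // the mount path depends on the user the container runs as.
            args '-v nodejs-cache:/home/node/.npm'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm ci && npm run build'
            }
        }
    }
}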
As I see it, you are using a workaround to install the Node.js tool.
Jenkins supports this natively (declarative style):
pipeline {
    agent any
    tools {
        nodejs 'NodeJS_14.5'
    }
    stages {
        stage('nodejs test') {
            steps {
                sh 'npm -v'
            }
        }
    }
}
On the first run the tool will be installed. On subsequent runs it will not be, since it is already installed.

Jenkins: differences between tools and docker agent

Sorry, it might be a simple question, but what are the differences between using tools and a docker agent? I think using a docker agent is much more flexible than using tools. When should I use a docker agent and when should I use tools?
Tools
pipeline {
    agent any
    tools {
        maven 'Maven 3.3.9'
        jdk 'jdk8'
    }
    stages {
        stage('Initialize') {
            steps {
                sh '''
                    echo "PATH = ${PATH}"
                    echo "M2_HOME = ${M2_HOME}"
                '''
            }
        }
        stage('Build') {
            steps {
                sh 'mvn -Dmaven.test.failure.ignore=true install'
            }
        }
    }
}
Docker Agent
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
    }
}
These two options serve slightly different purposes. The tools block allows you to add specific versions of maven, jdk, or gradle to your PATH. You can't use just any version - you can only use versions that are configured on the Global Tool Configuration page in Jenkins:
If your Jenkins configuration contains only a single Maven version, e.g., Maven 3.6.3, you can use only this version. Specifying a version that is not configured in the Global Tool Configuration will cause your pipeline to fail.
pipeline {
    agent any
    tools {
        maven 'Maven 3.6.3'
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
Using the tools block to specify different versions of the supported tools is a good option if your Jenkins server does not support running Docker containers.
The docker agent, on the other hand, gives you total freedom when it comes to specifying tools and their versions. It does not limit you to maven, jdk, and gradle, and it does not require any pre-configuration in your Jenkins server. The only tool you need is docker, and you are free to use any tool you need in your Jenkins pipeline.
pipeline {
    agent {
        docker {
            image "maven:3.6.3-jdk-11-slim"
        }
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
When to use one over the other?
There is no single right answer to this question. It depends on the context. The tools block is very limiting, but it gives you control over which tools are used in your Jenkins. In some cases, people decide not to use Docker in their Jenkins environment because they prefer to control which tools are available to their users. We can agree with this or not. When it comes to the docker agent, you get access to any tool that can be shipped as a Docker container.
In some cases, this is the best choice for using a tool with a specific version - your operating system may not allow you to install the desired one. Of course, you need to keep in mind that this power and flexibility come at a cost. You lose control over which tools are used in your Jenkins pipelines. Also, if you pull tons of different Docker images, you will increase disk space consumption. Not to mention that the docker agent allows you to run the pipeline with tools that may consume lots of CPU and memory. (I have seen Jenkins pipelines starting Elasticsearch, Logstash, Zookeeper, and other services on nodes that were not prepared for that load.)
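If CPU and memory consumption is a concern, one mitigation (an assumption for illustration, not something from the answer above) is to cap the container's resources via the docker agent's args parameter:

pipeline {
    agent {
        docker {
            image 'maven:3.6.3-jdk-11-slim'
            // Hypothetical limits: restrict the build container to 2 CPUs and 2 GB of RAM
            args '--cpus=2 --memory=2g'
        }
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}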
