Go not found in Docker-based Jenkins agent - docker

I have created a Docker-based Jenkins agent that uses docker:stable-dind (which is based on Alpine 3.10) as its base. Full Dockerfile.
In the Dockerfile I install Go: RUN apk add go
When running locally, e.g. go version, go env GOROOT, etc., I get results, e.g. 1.12.6 and /usr/lib/go.
I then attempt using this agent in Jenkins and print env to verify the above environment variables are set, but they are not there, and go version fails as well.
So, I update the Docker agent template in Jenkins with:
GOROOT: /usr/lib/go
PATH: /bin/sonar-scanner/bin/:/usr/local/bin:$GOROOT/bin:$PATH
Now when checking env they are there...
GOROOT=/usr/lib/go
PATH=/bin/sonar-scanner/bin/:/usr/local/bin:/usr/lib/go/bin:/bin:/usr/bin:/sbin:/usr/sbin
GOPATH=/home/jenkins/workspace/go test
... but go version still fails.
GOPATH is set to the current $WORKSPACE as that's where I will clone the Go project source if this actually works.
This is the Jenkins job:
#!groovy
pipeline {
    agent {
        label 'cli-agent'
    }
    stages {
        stage("test") {
            steps {
                script {
                    withEnv(["GOPATH=${WORKSPACE}"]) {
                        sh """
                            env
                            go version
                        """
                    }
                }
            }
        }
    }
}
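A quick way to narrow this down is to check, from inside the very same kind of sh step, whether the agent container actually contains the Go toolchain at all. A minimal diagnostic stage (a sketch; the stage name is hypothetical, the /usr/lib/go path is taken from above):

```groovy
// Hypothetical diagnostic stage: confirm the Go toolchain exists inside
// the container the sh step actually runs in (paths assumed from above).
stage('diagnose-go') {
    steps {
        sh '''
            echo "PATH=$PATH"
            which go || echo "go is not on PATH in this shell"
            ls -l /usr/lib/go/bin || echo "no Go toolchain at /usr/lib/go"
        '''
    }
}
```

If the ls fails here, the apk-installed Go never made it into the image the Jenkins agent template is actually running.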

Related

Jenkins Pipeline - Agent used with wrong user

I have a simple Jenkins pipeline script like this:
pipeline {
agent {
label 'agent2'
}
stages {
stage('test') {
steps {
build job: 'doSomething'
}
}
}
}
When running the job, it starts correctly on the node "agent2", but it runs as 'jenkins' (the OS shell user of the master server where Jenkins is installed) instead of the OS SSH shell user of the node.
The node has its own credentials assigned to it, but they are not used.
When I run the job "doSomething" on its own and set "Restrict where this project can be run" to "node1", everything is fine. The job is then run by the correct user.
This behaviour can be easily recreated:
1. create a new OS user (pw: testrpm)
sudo useradd -m testrpm
sudo passwd testrpm
2. create a new node and use these settings:
3. create a freestyle job (called 'doSomething') with a shell step which does 'whoami'
doSomething Job XML
4. create a pipeline job and paste this code into the pipeline script
pipeline {
    agent {
        label 'agent2'
    }
    stages {
        stage('test') {
            steps {
                build job: 'doSomething'
            }
        }
    }
}
test-pipeline Job XML
5. run the pipeline and check the console output of the 'doSomething' job. The output of 'whoami' is not 'testrpm' but 'jenkins'.
Can somebody explain this behaviour and tell me where the error is?
If I change the "Usage" setting of the built-in node to "Only build jobs with label expressions matching this node", then it works as expected and the "doSomething" job is called with the user testrpm:
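An alternative that sidesteps the downstream job's node assignment entirely is to run the shell work directly on the agent the pipeline already holds, so it executes as that node's SSH user. A sketch, assuming the label 'agent2' from above:

```groovy
// Sketch: run the shell step on agent2 itself instead of delegating to
// a freestyle job, so it runs as the node's configured SSH user.
pipeline {
    agent { label 'agent2' }
    stages {
        stage('test') {
            steps {
                sh 'whoami'   // should print the node's user rather than 'jenkins'
            }
        }
    }
}
```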

How to build a FastAPI Python script on Jenkins so the FastAPI app runs forever

I have written a FastAPI app and want to build a CI/CD pipeline for it. I want the FastAPI app to run until I manually interrupt it.
I am very new to Jenkins. I have written the following Jenkins pipeline script:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Get some code from a Git repository
                git url: 'https://gitlab.com/something1234/test.git', branch: 'hello'
            }
        }
        stage('deploy') {
            steps {
                // Single-quoted so the shell, not Groovy, expands $PATH and $HOME
                sh '''
                    ls
                    echo $PATH
                    echo $HOME
                    . /home/soleman/anaconda3/bin/activate tensor2
                    conda activate tensor2
                    python api.py
                '''
            }
        }
    }
}
When I start a build with this script, the build runs forever instead of succeeding, because the Python API keeps running, as shown in the figure.
How can I build it successfully and deploy it? Please guide me.
I don't want to use Docker, as I am utilizing the system's own variables.
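Since an sh step only finishes when every foreground process in it exits, one common workaround is to detach the long-running server so the step can return. A minimal sketch (here sleep 30 stands in for the real python api.py, which is not available outside the agent):

```shell
# Start a long-running process detached from the calling shell so the
# Jenkins sh step can exit immediately; 'sleep 30' stands in for
# 'python api.py'.
nohup sleep 30 > server.log 2>&1 &
echo $! > server.pid          # remember the PID so a later job can stop it
kill -0 "$(cat server.pid)"   # sanity check: the process is alive
```

A later cleanup job can then stop the server with kill "$(cat server.pid)". Note that Jenkins' ProcessTreeKiller may still reap background processes when the build ends; exporting JENKINS_NODE_COOKIE=dontKillMe in the step's environment is the usual way around that.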

How to use a Jenkinsfile for these build steps?

I'm learning how to use Jenkins and am working on configuring a Jenkinsfile instead of building via the Jenkins UI.
The source code management step for building from Bitbucket:
The build step for building a Docker container:
The build is of type multi configuration project:
Reading the Jenkinsfile documentation at https://www.jenkins.io/doc/book/pipeline/jenkinsfile/index.html and creating a new build using Pipeline:
I'm unsure how to configure the steps I've configured via the UI: Source Code Management & Build. How do I convert the config for Docker and Bitbucket so it can be used with a Jenkinsfile?
The SCM configuration will not change, regardless of whether you are using UI configuration or a pipeline, although in theory you can do the git clone from steps in the pipeline; if you really insist, convert the SCM steps into pure pipeline steps.
The pipeline can have multiple stages, and each of the stages can have a different execution environment. You can use the Docker Pipeline plug-in, or you can use plain sh to issue the docker commands on the build agent.
Here is a small sample from one of my manual build pipelines:
pipeline {
    agent none
    stages {
        stage('Init') {
            agent { label 'docker-x86' }
            steps {
                checkout scm
                sh 'docker stop demo-001c || true'
                sh 'docker rm demo-001c || true'
            }
        }
        stage('Build Back-end') {
            agent { label 'docker-x86' }
            steps {
                sh 'docker build -t demo-001:latest ./docker'
            }
        }
        stage('Test') {
            agent {
                docker {
                    label 'docker-x86'
                    image 'demo-001:latest' // a docker agent requires an image parameter
                }
            }
            steps {
                sh 'docker run --name demo-001c demo-001:latest'
                sh 'cd test && make test-back-end'
            }
        }
    }
}
You need to create a Pipeline type of project and specify the SCM configuration in the General tab. In the Pipeline tab, you will have the option to select Pipeline script or Pipeline script from SCM. It's always better to start with Pipeline script while you are building and modifying your workflow. Once it has stabilized, you can add it to the repository.
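Putting the two UI steps together, a minimal declarative sketch of the same workflow could look like the following (the Bitbucket URL, branch, and image tag are placeholders, not taken from the screenshots):

```groovy
// Sketch: Jenkinsfile equivalent of the UI's SCM + Docker build steps.
// Repository URL, branch, and image tag below are hypothetical.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://bitbucket.org/yourteam/yourrepo.git', branch: 'master'
            }
        }
        stage('Build Docker image') {
            steps {
                sh 'docker build -t yourimage:latest .'
            }
        }
    }
}
```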

Creating a Python Pipeline on Jenkins but get access denied from docker

I've created a Jenkinsfile in my Git repository that is defined as this:
pipeline {
    // 'agent none' means that no global agent will be allocated for the
    // entire Pipeline's execution and that each stage directive must
    // specify its own agent section.
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    // The image parameter (of the agent section's docker
                    // parameter) downloads the python:3.8.3 Docker image and
                    // runs it as a separate container. The Python container
                    // becomes the agent that Jenkins uses to run the Build
                    // stage of the Pipeline project.
                    image 'python:3.8.3'
                }
            }
            steps {
                // This sh step installs the application's dependencies
                sh 'pip install -r requirements.txt'
            }
        }
    }
}
When I tried to run the job with this Pipeline, I got the following error:
I also tried using the image python:latest, but this option didn't work either.
Can someone explain this to me? :)
On Windows, go to Computer Management -> Local Users and Groups and make sure the user used by Jenkins is added to the docker-users group.
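On a Linux agent, the analogous fix is to add the user running the Jenkins agent to the docker group so it can talk to the Docker daemon socket. A sketch, assuming that user is named jenkins:

```shell
# Assumption: the Jenkins agent process runs as user 'jenkins'.
sudo usermod -aG docker jenkins
# The agent process must be restarted before the new group membership
# takes effect.
```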

Jenkins Using result artifacts in different steps - stash and unstash

I have a Jenkinsfile declarative pipeline which has two steps:
build an RPM file inside a Docker container
build a Docker image with the RPM and run it
The first step is built inside a Docker container because it requires a specific app to build the RPM.
The second step runs directly on a Jenkins slave, which can be a different slave than the one that ran the first step.
In order to use the RPM produced by the first step, I'm currently using the stash and unstash steps. If I do not use them, the second step doesn't have access to the RPM file.
The RPM file is about 215MB, which is more than the 100MB recommended limit, so I'd like to know if there is a better solution.
pipeline {
    agent any
    options {
        timestamps()
    }
    stages {
        stage('Gradle: build') {
            agent {
                docker {
                    image 'some-internal-image'
                }
            }
            steps {
                sh """
                    chmod +x gradlew
                    ./gradlew buildRpm
                """
            }
            post {
                success {
                    stash name: 'rpm', includes: 'Server/target/myapp.rpm'
                }
            }
        }
        stage('Gradle: build docker image') {
            steps {
                unstash 'rpm'
                sh """
                    chmod +x gradlew
                    ./gradlew buildDockerImage
                """
            }
        }
    }
}
You could use Docker's multi-stage build, but I'm not aware of a nice implementation using Jenkins Pipelines.
We're also stashing several hundred megabytes to distribute them to build agents. I've experimented with uploading the artifacts to S3 and downloading them again from there, with no visible performance improvement (only that it takes load off the Jenkins master).
So my very opinionated recommendation: keep it as it is and optimize once you really run into performance/load issues.
You can use Artifactory or any other binary repository manager.
From Artifactory's webpage:
As the first, and only, universal Artifact Repository Manager on the
market, JFrog Artifactory fully supports software packages created by
any language or technology.
...
...Artifactory provides an end-to-end, automated and bullet-proof
solution for tracking artifacts from development to production.
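As a sketch of that approach with the JFrog Artifactory Jenkins plugin, the stash in the first stage's post block could be replaced by an upload (the server id 'my-artifactory' and the target repository path are placeholders):

```groovy
// Hypothetical replacement for the stash step, using the Artifactory
// Jenkins plugin's scripted API inside the declarative post block.
post {
    success {
        script {
            def server = Artifactory.server 'my-artifactory'
            def uploadSpec = '''{
                "files": [{
                    "pattern": "Server/target/myapp.rpm",
                    "target": "rpm-local/myapp/"
                }]
            }'''
            server.upload spec: uploadSpec
        }
    }
}
```

The second stage would then fetch the RPM with a matching server.download spec instead of unstash, so the artifact never passes through the Jenkins master.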