Running pyinstaller with Jenkinsfile - jenkins

I'm very new to using Docker and Jenkinsfiles in Jenkins.
Currently, I want to run a Docker container (pyinstaller-windows) on Linux, so I wrote the following Jenkinsfile to test it:
pipeline {
    agent none
    stages {
        stage('Deliver') {
            agent {
                docker {
                    image 'cdrx/pyinstaller-windows:python3'
                }
            }
            steps {
                sh 'cd app/'
                sh 'pip install -r requirements.txt'
                sh 'cd gui'
                sh './gui_to_exe.sh' // contains the pyinstaller command
            }
            post {
                success {
                    archiveArtifacts 'appName.exe'
                }
            }
        }
    }
}
After running it in Jenkins, I received the following error message:
cdrx/pyinstaller-windows:python3 cat
$ docker top 8...5 -eo pid,comm
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] End of Pipeline
java.io.IOException: Failed to run top '8...5'. Error: Error response
from daemon: Container 8...5 is not running.
at org.jenkinsci.plugins.docker.workflow.client.DockerClient.listProcess(DockerClient.java:152)
at org.jenkinsci.plugins.docker.workflow.WithContainerStep$Execution.start(WithContainerStep.java:201)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:322)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:196)
at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:124)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:47)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:20)
at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(Docker.groovy:140)
at org.jenkinsci.plugins.docker.workflow.Docker.node(Docker.groovy:66)
at org.jenkinsci.plugins.docker.workflow.Docker$Image.inside(Docker.groovy:125)
at org.jenkinsci.plugins.docker.workflow.declarative.DockerPipelineScript.runImage(DockerPipelineScript.groovy:54)
at
...
What am I doing wrong here?

Check the following minimal pipeline; I fixed the issues I observed. Beyond that, I'm not sure where you are getting the app directory from. You probably want to mount the workspace into your container if the sources live on the host machine.
pipeline {
    agent any
    stages {
        stage('Deliver') {
            agent {
                docker {
                    image 'cdrx/pyinstaller-windows:python3'
                    args "--entrypoint=''"
                }
            }
            steps {
                echo "Something"
                sh '''
                    cd app/
                    pip install -r requirements.txt
                    cd gui
                    ./gui_to_exe.sh
                '''
            }
            post {
                success {
                    archiveArtifacts 'appName.exe'
                }
            }
        }
    }
}
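If the app sources live on the host rather than in the Jenkins workspace, the mounting suggestion above can be sketched in the docker agent's args. This is only an illustration: the host path below is a placeholder, and /src is assumed because the cdrx/pyinstaller images conventionally look for sources there.

```groovy
agent {
    docker {
        image 'cdrx/pyinstaller-windows:python3'
        // Override the image entrypoint (which would otherwise run
        // PyInstaller immediately and exit, giving the "Container is not
        // running" error) and mount host sources into the container.
        // Both paths are placeholders for illustration.
        args "--entrypoint='' -v /host/path/to/app:/src"
    }
}
```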

Dockerfile in Declarative pipeline job fails

Jenkins version: 2.277.1 LTS.
My Dockerfile:
FROM maven:3.6.0-jdk-13
RUN useradd -m -u 1000 -s /bin/bash jenkins
My declarative pipeline:
pipeline {
    agent {
        label "VM-Linux-Agent"
    }
    environment {
        DOCKERFILE = "Dockerfile"
    }
    stages {
        stage("Checkout") {
            steps {
                git(
                    url: 'git#gitlab.company.com:maven-prj-group/mavenapp.git',
                    branch: "master"
                )
            }
        }
        stage("Build") {
            agent {
                dockerfile {
                    filename DOCKERFILE
                    args "-v $WORKSPACE:/var/maven"
                }
            }
            steps {
                sh "mvn clean install"
            }
        }
    }
}
From the Jenkins master I have configured a Linux server as node VM-Linux-Agent. Using this node, the pipeline checks out the code; then a Docker container is built from the Dockerfile so that the build and subsequent steps run inside Docker. However, those steps are not working, and I get the errors below.
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/Dockerfile-Pipeline
[Pipeline] {
[Pipeline] isUnix
[Pipeline] readFile
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.nio.file.NoSuchFileException: /var/jenkins_home/workspace/Dockerfile-Pipeline/Dockerfile
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at hudson.FilePath.newInputStreamDenyingSymlinkAsNeeded(FilePath.java:2112)
at hudson.FilePath.read(FilePath.java:2097)
at hudson.FilePath.read(FilePath.java:2089)
at org.jenkinsci.plugins.workflow.steps.ReadFileStep$Execution.run(ReadFileStep.java:104)
at org.jenkinsci.plugins.workflow.steps.ReadFileStep$Execution.run(ReadFileStep.java:94)
at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
Instead of running in the Docker container, the build step runs on the master and then fails. I only want to run the build + test steps in a Docker container on my node (which is the Docker host). How do I fix this in my declarative pipeline? Please let me know the way to do this. Thanks in advance.
I replaced the global agent with none and added an agent label so the code is checked out on the specific slave (in my case VM-Linux-Agent).
The code below works for me.
pipeline {
    agent none
    stages {
        stage("Checkout") {
            agent {
                label "VM-Linux-Agent"
            }
            steps {
                git(
                    url: 'git#gitlab.company.com:maven-prj-group/mavenapp.git',
                    branch: "master"
                )
            }
        }
        stage("Build") {
            agent {
                dockerfile {
                    filename 'Dockerfile'
                    label 'VM-Linux-Agent'
                    args "-v /home/user/maven:/var/maven"
                }
            }
            steps {
                sh "mvn clean install"
            }
        }
    }
}
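As an alternative worth knowing (a sketch, not part of the answer above): when the pipeline declares a top-level agent (for example the label from the question) instead of none, the dockerfile agent accepts reuseNode, which runs the container on that same node and in the same workspace, so checkout and build share files without an explicit host mount:

```groovy
// Assumes a top-level `agent { label 'VM-Linux-Agent' }` instead of `agent none`.
stage("Build") {
    agent {
        dockerfile {
            filename 'Dockerfile'
            // Reuse the top-level node and its workspace for this stage's
            // container instead of allocating a new executor.
            reuseNode true
        }
    }
    steps {
        sh "mvn clean install"
    }
}
```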

Proper way to run docker container using Jenkinsfile

When writing a Jenkinsfile, I have a step that runs a Docker image pulled from my Docker Hub account.
stage('pull image and run') {
    steps {
        sh '''
            docker login -u <username> -p <password>
            docker run -d -p 9090:3000 <tag>
        '''
    }
}
This step is okay the first time I run the script. However, when I run it a second time, I get this error:
Login Succeeded
+ docker run -d -p 9090:3000 <tag>
669955464d74f9b5186b437b7127ca0a24f6ea366f3a903c673489bec741cf78
docker: Error response from daemon: driver failed programming external connectivity on endpoint distracted_driscoll (db16abd899cf0cbd4f26cf712b1eee4ace5b491e061e2e31795c2669296068eb): Bind for 0.0.0.0:9090 failed: port is already allocated.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 125
Finished: FAILURE
Obviously, port 9090 is not available, so the execution failed.
Question:
What is the correct way to upgrade an app inside a Docker container?
I can stop the container before running docker run, but I can't find a proper way to do that in the Jenkinsfile steps.
Any suggestion?
Thanks
Jenkins has really good Docker support for running your build inside a Docker container; a good example can be found here.
One declarative example for a Maven build:
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /tmp:/tmp'
            registryUrl 'https://myregistry.com/'
            registryCredentialsId 'myPredefinedCredentialsInJenkins'
        }
    }
    stages {
        stage("01") {
            steps {
                sh "mvn -v"
            }
        }
        stage("02") {
            steps {
                sh "mvn --help"
            }
        }
    }
}
In a scripted pipeline, it would be
node {
    docker.withRegistry('https://registry.example.com', 'credentials-id') {
        docker.image('node:14-alpine').inside("-v /tmp:/tmp") {
            stage('Test') {
                sh 'node --version'
            }
        }
    }
}
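To address the stop-before-run part of the question directly (a sketch, not taken from the answer above): giving the container a fixed name lets the step remove any previous instance before starting the new one, which frees port 9090. The name myapp is a placeholder; the login and image placeholders are kept from the question:

```groovy
stage('pull image and run') {
    steps {
        sh '''
            docker login -u <username> -p <password>
            # Remove a previous instance if one exists (|| true ignores the
            # error when there is none), so the port binding is free again.
            docker rm -f myapp || true
            docker run -d --name myapp -p 9090:3000 <tag>
        '''
    }
}
```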

Jenkins pipeline fails to login to docker hub

I am trying to use a pipeline to run a few things, and when I started running my pipeline it failed to log in to Docker.
The weird thing is that I am able to log in on the machine itself, but when I run the pipeline it fails with this weird error:
Started by user admin
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] stage
[Pipeline] { (Front-end)
[Pipeline] node
Running on test-env in /var/www/test-env/workspace/client-e2e
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
Using the existing docker config file. Removing blacklisted property: auths
$ docker login -u ***** -p ******** https://hub.docker.com/?namespace=******
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] End of Pipeline
ERROR: docker login failed
Finished: FAILURE
I don't know why it performs a login when this image is publicly available to everyone.
Can someone help me?
this is the pipeline itself:
pipeline {
    agent none
    stages {
        stage('Front-end') {
            agent {
                docker {
                    image 'node:8-alpine'
                    label "test-env"
                }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
You can try the Credentials Binding Plugin. Instead of:
steps {
    sh 'node --version'
}
You can do:
withCredentials([string(credentialsId: 'mytoken', variable: 'TOKEN')]) {
    sh '''
        docker login -u '<your_user>' -p "$TOKEN"
        node --version
    '''
}
There is an example here:
https://issues.jenkins-ci.org/browse/JENKINS-41051
OK, so after a while I found out that it was as simple as doing this:
pipeline {
    agent none
    stages {
        stage('Front-end') {
            agent {
                docker {
                    image 'node:8-alpine'
                    registryUrl 'https://index.docker.io/v1/'
                    label "test-env"
                }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
This was added: registryUrl 'https://index.docker.io/v1/'
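For a private image, the same docker agent block also accepts credentials (a sketch, not part of the solution above; the credentials id is a placeholder for username/password credentials stored in Jenkins):

```groovy
agent {
    docker {
        image 'node:8-alpine'
        registryUrl 'https://index.docker.io/v1/'
        // Only needed for private images; the id below is a placeholder.
        registryCredentialsId 'dockerhub-creds'
        label "test-env"
    }
}
```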

Docker Container Jenkins Pipeline Script: Permission denied while executing the script

I am running Jenkins inside a Docker container. I have created a simple pipeline to check out, build, and run a Docker image, but I am getting the following error.
Below is my pipeline script:
node {
    def mvnHome = tool name: 'Maven Path', type: 'maven'
    stage('Git CheckOut') {
        git branch: '2019_DOCKER_SERVICES', credentialsId: 'git-creds', url: 'http://10.10.10.84:8111/scm/git/JEE_M_SERVICES'
    }
    stage('Maven Build') {
        // Run the maven build
        withEnv(["MVN_HOME=$mvnHome"]) {
            if (isUnix()) {
                sh '"$MVN_HOME/bin/mvn" -f Services/user-service/pom.xml clean install'
            } else {
                // bat(/"%MVN_HOME%\bin\mvn" -f Services\\user-service\\pom.xml clean install/)
            }
        }
    }
    stage('Docker Image Build') {
        sh '"Services/user-service/" docker build -t user-service'
    }
}
But I am getting the following error in the last stage; the first two stages ran successfully.
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Docker Image Build)
[Pipeline] sh
+ Services/user-service/ docker build -t user-service
/var/jenkins_home/jobs/docker-demo/workspace#tmp/durable-a5c035cf/script.sh: 1: /var/jenkins_home/jobs/docker-demo/workspace#tmp/durable-a5c035cf/script.sh: Services/user-service/: Permission denied
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
You have to set up new Jenkins slaves using Docker.
It's weird to run Docker inside a Docker container.
To access low-level operations you have to run your Docker container privileged.
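A common alternative to a fully privileged container (a sketch, assuming you are willing to share the host's Docker daemon with Jenkins): mount the host's Docker socket into the Jenkins container, so docker commands in pipeline sh steps drive the host daemon instead of needing Docker-in-Docker. The image name and ports below are the stock jenkins/jenkins defaults; adjust as needed.

```shell
# Run the Jenkins container with access to the host Docker daemon.
docker run -d --name jenkins \
    -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins/jenkins:lts
```

Note that the Jenkins user inside the container still needs permission on the mounted socket (for example via the docker group's GID on the host).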

pass variables between stages jenkins pipeline

I'm creating a Jenkins pipeline for a simple deployment into a Kubernetes cluster, and I have my private Docker registry. The pipeline simply clones my repo, builds my Docker image, updates the built image id in the Kubernetes deployment manifest, and deploys the pod. But I'm having trouble passing the built image id to the next stage. After some research I managed to pass the id to the next stage, but when I try to add the new id to the deployment manifest, it's empty.
Here is my pipeline:
pipeline {
    environment {
        BUILD_IMAGE_ID = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git( url: 'https://xxxxxx.git',
                     credentialsId: 'id',
                     branch: 'master')
            }
        }
        stage('Login Docker Registry') {
            steps {
                script {
                    sh 'docker login --username=xxxx --password=xxxx registry.xxxx.com'
                }
            }
        }
        stage('Building Image') {
            steps {
                script {
                    def IMAGE_ID = sh script:'docker run -e REPO_APP_BRANCH=xxxx -e REPO_APP_NAME=xxx --volume /var/run/docker.sock:/var/run/docker.sock registry.xxxx/image-build', returnStdout: true
                    println "Build image id: ${IMAGE_ID} "
                    BUILD_IMAGE_ID = IMAGE_ID.replace("/n","")
                    env.BUILD_IMAGE_ID = BUILD_IMAGE_ID
                }
            }
        }
        stage('Integration') {
            steps {
                script {
                    echo "passed: ${BUILD_IMAGE_ID} "
                    //update deployment manifests with latest docker tag
                    sh 'sed -i s,BUILD_ID,${BUILD_IMAGE_ID},g deployment-manifests/development/Service-deployments.yaml'
                }
            }
        }
    }
}
I don't want to save that value into a file and read it back to do the operation.
Output:
[Pipeline] echo
Build image id:
registry.xxxx.com/service:3426d51-baeffc2
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Integration)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
passed:
registry.xxxx.com/service:3426d51-baeffc2
[Pipeline] sh
[orderservice] Running shell script
+ sed -i s,BUILD_ID,,g deployment-manifests/development/service-deployments.yaml
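A likely culprit (an observation, not an accepted answer): replace("/n","") does not strip the trailing newline that returnStdout yields, because the escape sequence is \n (Groovy's trim() is the usual fix), and the leftover newline then breaks the sed command line, which is why the trace shows an empty substitution. The substitution itself can be sketched in plain shell; the file name and image id below are placeholders standing in for the redacted values:

```shell
# Simulate the captured stdout: returnStdout keeps a trailing newline.
IMAGE_ID='registry.example.com/service:3426d51-baeffc2
'
# Strip the newline before interpolating into sed (trim() in Groovy).
BUILD_IMAGE_ID=$(printf '%s' "$IMAGE_ID" | tr -d '\n')
printf 'image: BUILD_ID\n' > service-deployments.yaml
sed -i "s,BUILD_ID,${BUILD_IMAGE_ID},g" service-deployments.yaml
cat service-deployments.yaml
```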
