How do we install the npm pdf-parse library in a Jenkins Docker container?

While running the Cypress tests on Jenkins, I get the error below. Our Jenkins is integrated with a Docker container, and the devs asked me to install the pdf-parse library in the Docker container, which should solve the issue. How do I install pdf-parse in the Docker container, and which file controls that? Could someone please advise?
Note: I cannot see a Dockerfile in my project root directory.
11:38:29 Or you might have renamed the extension of your `pluginsFile`. If that's the case, restart the test runner.
11:38:29
11:38:29 Please fix this, or set `pluginsFile` to `false` if a plugins file is not necessary for your project.
11:38:29
11:38:29 Error: Cannot find module 'pdf-parse'
Dockerfile:
FROM cypress/browsers:node12.14.1-chrome85-ff81
COPY package.json .
COPY package-lock.json .
RUN npm install --save-dev cypress
RUN $(npm bin)/cypress verify
# there is a built-in user "node" that comes from the very base Docker Node image
# we are going to recreate this user and give it _same id_ as external user
# that is going to run this container.
ARG USER_ID=501
ARG GROUP_ID=999
# if you want to see all existing groups uncomment the next command
# RUN cat /etc/group
RUN groupadd -g ${GROUP_ID} appuser
# do not log creating new user, otherwise there could be a lot of messages
RUN useradd -r --no-log-init -u ${USER_ID} -g appuser appuser
RUN install -d -m 0755 -o appuser -g appuser /home/appuser
# move test runner binary folder to the non-root's user home directory
RUN mv /root/.cache /home/appuser/.cache
USER appuser
Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'abcdtest'
            args '--link postgres:postgres -v /.composer:/.composer'
        }
    }
    options {
        ansiColor('xterm')
    }
    stages {
        stage("print env variables") {
            steps {
                script {
                    echo sh(script: 'env|sort', returnStdout: true)
                }
            }
        }
        stage("composer install") {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'bitbucket-api', passwordVariable: 'bitbucketPassword', usernameVariable: 'bitbucketUsername')]) {
                        def authProperties = readJSON file: 'auth.json.dist'
                        authProperties['http-basic']['bitbucket.sometest.com']['username'] = bitbucketUsername
                        authProperties['http-basic']['bitbucket.sometest.com']['password'] = bitbucketPassword
                        writeJSON file: 'auth.json', json: authProperties
                    }
                }
                sh 'php composer.phar install --prefer-dist --no-progress'
            }
        }
        stage('unit tests') {
            steps {
                lock('ABCD Unit Tests') {
                    script {
                        try {
                            sh 'mv codeception.yml.dist codeception.yml'
                            sh 'mv tests/unit.suite.yml.jenkins tests/unit.suite.yml'
                            sh 'php vendor/bin/codecept run tests/unit --html'
                        }
                        catch (err) {
                            echo "unit tests step failed"
                            currentBuild.result = 'FAILURE'
                        }
                        finally {
                            publishHTML (target: [
                                allowMissing: false,
                                alwaysLinkToLastBuild: false,
                                keepAll: true,
                                reportDir: 'tests/_output/',
                                reportFiles: 'report.html',
                                reportName: "Unit Tests Report"
                            ])
                        }
                    }
                }
            }
        }
    }
    post {
        success {
            slackSend color: 'good', channel: '#jenkins-abcdtest-ci', message: "*SUCCESS* - CI passed successfully for *${env.BRANCH_NAME}* (<${env.BUILD_URL}|build ${env.BUILD_NUMBER}>)"
        }
        failure {
            slackSend color: 'danger', channel: '#jenkins-abcdtest-ci', message: "*FAILED* - CI failed for *${env.BRANCH_NAME}* (<${env.BUILD_URL}|build ${env.BUILD_NUMBER}> - <${env.BUILD_URL}console|click here to see the console output>)"
        }
    }
}

I suppose you use cypress/base:10 as the image for the container in Jenkins. If you don't have a Dockerfile, you may have to write your own that extends cypress/base:10.
Dockerfile:
FROM cypress/base:10
RUN npm install pdf-parse
Then run docker build -t mycypress . and docker push mycypress to push the image to Docker Hub (you may need an account), so that your Jenkins can use the new image to set up the container.
NOTE: you will have to find out how your project chooses the image used to start your container; with that, you can find a suitable way to install pdf-parse. One possibility is the following:
pipeline {
    agent {
        docker { image 'cypress/base:10' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
Then, you may change docker { image 'cypress/base:10' } to docker { image 'mycypress' }.
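Alternatively, since the Dockerfile shown in the question already copies package.json, a minimal sketch of the same fix against that base image could look like the following (this assumes pdf-parse is either declared in package.json or installed explicitly; adjust to your project):

```dockerfile
FROM cypress/browsers:node12.14.1-chrome85-ff81
COPY package.json .
COPY package-lock.json .
# install everything declared in package.json (picks up pdf-parse if it is listed)
RUN npm ci
# or, if pdf-parse is not in package.json yet, install it explicitly:
RUN npm install pdf-parse
```

After rebuilding and pushing this image, point the docker { image '...' } block of the Jenkinsfile at the new tag, as described above.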

Related

Can't copy test result file from the top layer of a docker container to local (or any visible container) for publishing via HTML Publisher

I run my regression tests in a docker container and I am trying to publish the test results in a Jenkins pipeline using HTML Publisher. This doesn't work properly: I get an error when trying to copy the result file from the docker container (error type: the file does not exist).
My Jenkinsfile looks like this:
//https://www.jenkins.io/doc/book/pipeline/syntax/
pipeline {
    agent any
    stages {
        stage('Deploy webstore') {
            steps {
                //start and run an application container using .yml file
                sh "docker compose -f webstore-compose.yml up -d"
            }
        }
        stage ('Regression Tests') {
            //setting up docker container for regression tests
            agent {
                docker {
                    image 'localhost:5000/dotnet_s3'
                    args '--add-host=host.docker.internal:host-gateway'
                    reuseNode true
                }
            }
            steps {
                //running tests located in /guiautomationtask directory in the top layer and logging into testResults.html file
                sh 'id; cd /guiautomationtask; dotnet test --logger "html;logfilename=testResults.html"'
                sleep(time: 10, unit: "SECONDS")
                /*To Do:
                copy logfile from container to local*/
                //console output
                echo "++++++++++++++++++++++++++++++++++ Display Test Results in the Console +++++++++++++++++++++++++++++++++++++++++"
                echo "Running build ${env.BUILD_ID} on jenkins ${env.JENKINS_URL}"
                echo "current docker container ID is ${hostname}"
                sh "id; cd /guiautomationtask; dotnet test -v normal"
                echo "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
                /*sh "dotnet publish /guiautomationtask/GuiTest/GuiTest.csproj"
                sleep(time: 10, unit: "SECONDS")*/
            }
        }
        stage ('Publish results') {
            steps {
                //view test-logs via HTML Publisher plugin
                publishHTML(target:[
                    allowMissing: false,
                    alwaysLinkToLastBuild: true,
                    keepAll: false,
                    reportDir: "", //here should be report directory with saved html report file
                    reportFiles: "testResults.html",
                    reportName: 'HTML-Report',
                    //reportTitles: ''
                ])
                echo "artifacts saved in zip";
            }
        }
    }
    post {
        always {
            //stop an application container
            sh "docker compose -f webstore-compose.yml stop"
        }
    }
}
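One possible direction for the "copy logfile from container to local" To Do above: because the regression-test agent is declared with reuseNode true, Jenkins mounts the workspace directory into the container, so copying the report into $WORKSPACE makes it visible to HTML Publisher on the node. A rough sketch of that steps block (the TestResults path is an assumption — dotnet's HTML logger writes under the test project's TestResults directory by default, so verify the exact location for your project):

```groovy
steps {
    sh 'cd /guiautomationtask; dotnet test --logger "html;logfilename=testResults.html"'
    // copy the report from the container filesystem into the mounted workspace
    // (the GuiTest/TestResults path is an assumption and may differ per project)
    sh 'cp /guiautomationtask/GuiTest/TestResults/testResults.html "$WORKSPACE/"'
}
```

With the file in the workspace root, publishHTML can keep reportDir: "" and reportFiles: "testResults.html".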

Jenkins pipeline and Docker multi-stage builds howto

Question
I have to configure CI/CD for a number of Git repositories with the help of Jenkins (and DockerHub as the CD target). I did that with the help of Docker multi-stage builds (see Considerations). I'm afraid I may have misunderstood or overcomplicated a simple idea.
Is Jenkins + Docker multi-stage build a best/good practice? Am I applying the idea correctly?
Considerations
From this presentation I assume using Docker inside Jenkins is a good idea. After reading the article Using Multi-Stage Builds to Simplify and Standardize Build Processes, Docker multi-stage builds look to be the next step in using Jenkins + Docker.
Answers to a similar question also say Docker multi-stage builds make sense, but they don't provide an example implementation.
Implementation
Jenkins creates pipeline from SCM repository.
Git repository
Dockerfile
Jenkinsfile
project-folder
|-src
|-pom.xml
Dockerfile
FROM alpine as source
RUN apk --update --no-cache add git
COPY project-folder repo
FROM maven:3.6.3-jdk-8 as test
COPY --from=source repo repo
WORKDIR repo
RUN mvn clean test
FROM maven:3.6.3-jdk-8 as build
COPY --from=test repo repo
WORKDIR repo
RUN mvn clean package
FROM openjdk:8 as final
MAINTAINER xxx <xxx@gmail.com>
LABEL owner="xxx"
COPY --from=build repo/target/some-lib-1.8.jar /usr/local/some-lib.jar
ENTRYPOINT ["java", "-jar", "/usr/local/some-lib.jar"]
Jenkinsfile
I used docker build --target for more granularity in the Jenkins UI.
#!/usr/bin/env groovy
def imageId = "use-name/image-name:1.$BUILD_NUMBER"
pipeline {
    agent {
        // separate agent (launched as JAR on host machine) to avoid running docker inside docker
        label 'docker'
    }
    stages {
        stage('Test') {
            steps {
                script {
                    sh "docker build --no-cache --target test -t ${imageId} ."
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    sh "docker build --target build -t ${imageId} ."
                }
            }
        }
        stage('Image') {
            steps {
                script {
                    sh "docker build --target final -t ${imageId} ."
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    docker.withRegistry('' , 'dockerhub') {
                        dockerImage = docker.build("${imageId}")
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Clean') {
            steps {
                sh "docker rmi ${imageId}"
            }
        }
    }
}
Following taleodor's answer, I would suggest the following Jenkinsfile:
pipeline {
    agent {
        // separate agent (launched as JAR on host machine) to avoid running docker inside docker
        label 'docker'
    }
    environment {
        imageId = 'use-name/image-name:1.$BUILD_NUMBER'
        docker_registry = 'your_docker_registry'
        docker_creds = credentials('your_docker_registry_creds')
    }
    stages {
        stage('Docker build') {
            steps {
                sh "docker build --no-cache --force-rm -t ${imageId} ."
            }
        }
        stage('Docker push') {
            steps {
                sh '''
                    docker login $docker_registry --username $docker_creds_USR --password $docker_creds_PSW
                    docker push $imageId
                    docker logout
                '''
            }
        }
        stage('Clean') {
            steps {
                sh "docker rmi ${imageId}"
            }
        }
    }
}
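One detail worth hardening in a push stage like the one above: passing --password on the command line exposes the secret in the process list. A sketch of the same step using --password-stdin instead (the variable names follow the credentials() binding used above and are otherwise assumptions):

```groovy
sh '''
    # read the password from stdin so it never appears as a process argument
    echo "$docker_creds_PSW" | docker login "$docker_registry" --username "$docker_creds_USR" --password-stdin
    docker push "$imageId"
    docker logout "$docker_registry"
'''
```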

Run docker build inside Jenkins Docker Slave

Currently I have a CI pipeline with the following stages:
Build
Unit Tests
Static Code Analysis
This is how my Jenkinsfile looks:
pipeline {
    agent any
    stages {
        stage("Install") {
            steps {
                sh "npm install"
            }
        }
        stage("Build") {
            steps {
                sh "npm run build"
            }
        }
        stage("Format") {
            steps {
                sh "npm run format"
            }
        }
        stage("Lint") {
            steps {
                sh "npm run lint"
            }
        }
        stage("Test") {
            steps {
                sh "npm run test"
            }
        }
        stage("Code Coverage") {
            steps {
                sh "npm run test:cov"
                publishHTML(target: [
                    reportDir: "./coverage/lcov-report",
                    reportFiles: "index.html",
                    reportName: "Jest Coverage Report"
                ])
            }
        }
        stage("End-To-End Testing") {
            steps {
                sh "npm run test:e2e"
            }
        }
    }
}
I want to add more stages to my pipeline:
Build and tag Docker Image from Dockerfile
Push the image to the Docker Hub
Some more steps which would need Docker CLI
Example:
pipeline {
    .
    .
    .
    stage("Docker Build") {
        steps {
            sh "docker build -t [user_name]/[image_name]:[tag] ."
        }
    }
}
I'm quite new to this; I have tried multiple ways to install Docker in the agent, all unsuccessful, and it is bad practice too.
We can run docker run -v /var/run/docker.sock:/var/run/docker.sock ..., but I can't use that bind mount while using the docker build command.
Can someone please suggest a way to use docker commands inside Jenkins SSH agents?
Solution
Install the Docker CLI without the daemon in the Jenkins Docker slave. I used this Docker agent and installed the Docker CLI inside it using this method.
Then, as the docker daemon, I used my remote docker host. (You can also configure the local docker host as a remote one using these steps.) You can target a remote docker host using the --host flag, e.g. docker --host x.x.x.x:2375 build -t johndoe:calculator .
Syntax: docker --host [Docker_Host]:[Port] build -t [Image_Name]:[Image_Tag] .
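As a quick sanity check of that syntax, the --host flag precedes the subcommand. A small sketch that only assembles the command line (the host, image name, and tag here are made-up placeholders, not values from the post):

```shell
# Builds the docker command line for a remote daemon.
# host/image/tag are placeholders for illustration only.
docker_build_cmd() {
    host="$1"; image="$2"; tag="$3"
    printf 'docker --host %s build -t %s:%s .' "$host" "$image" "$tag"
}

docker_build_cmd "10.0.0.5:2375" "johndoe/calculator" "v1"
```

Equivalently, exporting DOCKER_HOST=tcp://10.0.0.5:2375 lets a plain docker build target the remote daemon without the flag.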
My New Jenkinsfile is as follows:
pipeline {
    agent any
    stages {
        stage("Install") {
            steps {
                sh "npm install"
            }
        }
        stage("Build") {
            steps {
                sh "npm run build"
            }
        }
        stage("Format") {
            steps {
                sh "npm run format"
            }
        }
        stage("Lint") {
            steps {
                sh "npm run lint"
            }
        }
        stage("Test") {
            steps {
                sh "npm run test"
            }
        }
        stage("Code Coverage") {
            steps {
                sh "npm run test:cov"
                publishHTML(target: [
                    reportDir: "./coverage/lcov-report",
                    reportFiles: "index.html",
                    reportName: "Jest Coverage Report"
                ])
            }
        }
        stage("End-To-End Testing") {
            steps {
                sh "npm run test:e2e"
            }
        }
        stage("Docker Build") {
            steps {
                withCredentials([string(credentialsId: 'Docker_Host', variable: 'DOCKER_HOST')]) {
                    sh 'docker --host $DOCKER_HOST build -t xxx/xxx .'
                }
            }
        }
    }
}
Note: I stored the Docker host URL in Jenkins as a credential and accessed it using the withCredentials step.

Docker: not found when running cmds in jenkinsfile

I am new to Docker and CI. I am trying to create a Jenkinsfile that builds and tests my application, then builds a Docker image with the Dockerfile I've composed, and pushes it to AWS ECR. The step I am stuck on is building an image with Docker; I receive the error message docker: not found. I downloaded the Docker plug-in and configured it in the Global Tool Configuration tab. Am I not adding it to tools correctly?
There was another post where you could use Kubernetes to do that; however, Kubernetes no longer supports Docker.
Image of how I configured Docker in the Global Tool Configuration: [screenshot: global tool config]
Error:
/var/jenkins_home/workspace/client-pipeline_feature-jenkins#tmp/durable-41220eb0/script.sh: 1: /var/jenkins_home/workspace/client-pipeline_feature-jenkins#tmp/durable-41220eb0/script.sh: docker: not found
(followed later by an error about permissions on docker.sock)
def gv
containerVersion = "1.0"
appName = "foodcore"
imageName = appName + ":" + containerVersion
pipeline {
    agent any
    environment {
        CI = 'true'
    }
    tools {
        nodejs "node"
        docker "docker"
    }
    stages {
        stage("init") {
            steps {
                script {
                    gv = load "script.groovy"
                    CODE_CHANGES = gv.getGitChanges()
                }
            }
        }
        stage("build frontend") {
            steps {
                dir("client") {
                    sh 'npm install'
                }
            }
        }
        stage("build backend") {
            steps {
                dir("server") {
                    sh 'npm install'
                }
            }
        }
        stage("test") {
            when {
                expression {
                    script {
                        CODE_CHANGES == false
                    }
                }
            }
            steps {
                dir("client") {
                    sh 'npm test'
                }
            }
        }
        stage("build docker image") {
            when {
                expression {
                    script {
                        env.BRANCH_NAME.toString().equals('Main') && CODE_CHANGES == false
                    }
                }
            }
            steps {
                sh "docker build -t ${imageName} ."
            }
        }
        stage("push docker image") {
            when {
                expression {
                    env.BRANCH_NAME.toString().equals('Main')
                }
            }
            steps {
                sh 'aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin repoURI'
                sh 'docker tag foodcore:latest ...repoURI'
                sh 'docker push repoURI'
            }
        }
    }
}
Docker should be installed on the server where Jenkins is running. The Docker plugin provided by Jenkins is just a tool to generate snippets for pipeline scripts; installing and configuring the tool does not install a docker daemon. Please check whether docker is installed on the OS.
As we can see in the thread, you then start getting permission denied on docker.sock.
docker.sock permissions will be lost if you restart the system or the docker service.
To make the fix persistent, set up a cron job to change ownership after each reboot:
@reboot chmod 777 /var/run/docker.sock
And when you restart docker, make sure to run the command below:
chmod 777 /var/run/docker.sock
Or you can also put it in a cron job that executes every 5 minutes.
To use docker inside a Jenkins build, there are two methods:
Use the Jenkins docker plugin as described in the solution above.
Or install docker itself in the Jenkins container and mount the docker.sock file.
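For the second method, a minimal sketch of such an image could look like the following (the package name and base tag are assumptions; on Debian-based images the docker.io package ships the full engine, but the daemon inside the container is simply never started — only the client binary is used, talking to the host daemon through the mounted socket):

```dockerfile
FROM jenkins/jenkins:lts
USER root
# install the docker client; the daemon stays on the host
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins
```

The container would then be started with the socket mounted, e.g. docker run -v /var/run/docker.sock:/var/run/docker.sock ... .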

How to write a multi-stage Dockerfile without the --from flag

This is actually the continuation of this question that I asked today.
I have a multi-stage Dockerfile that uses --from flag:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1-sdk
WORKDIR /app
COPY . ./aspnetapp/
WORKDIR /app/aspnetapp
RUN dotnet publish -c Release -o out
FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime
WORKDIR /app
COPY --from=docker.m.our-intra.net/microsoft/dotnet:2.1-sdk /app/aspnetapp/MyProject.WebApi/out ./
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
With the help of this file I am able to build the image locally.
BUT I can't use this Dockerfile in my Jenkins pipeline, because the Docker engine on the Jenkins server is older than version 17.05 and it's not going to be updated (maybe later, but not now).
I'm very new to Docker and Jenkins. I would appreciate it if anyone could help me modify the Dockerfile in such a way that I can use it without the --from flag.
UPDATE:
The above Dockerfile is wrong. The working version, with which I successfully build the image on my local machine and also run the app, is as follows:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1-sdk AS build
WORKDIR /app
COPY . ./aspnetapp/
WORKDIR /app/aspnetapp
RUN dotnet publish -c Release -o out
FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=build /app/aspnetapp/MyProject.WebApi/out ./
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
UPDATE 2:
I'm trying to follow Carlos's advice, and now I have two Dockerfiles.
This is my Dockerfile-build:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1-sdk
WORKDIR /app
COPY . ./aspnetapp/
WORKDIR /app/aspnetapp
RUN dotnet publish -c Release -o out
This is my Dockerfile:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime
COPY . .
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
This is my Jenkinsfile:
def docker_repository_url = 'docker.m.our-intra.net'
def artifact_group = 'some-artifact-group'
def artifact_name = 'my-service-api'
pipeline {
    agent {
        label 'build'
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    echo 'Checkout...'
                    checkout scm
                    echo 'Checkout Completed'
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    echo 'Build...'
                    sh 'docker version'
                    sh 'docker build -t fact:v${BUILD_NUMBER} -f Dockerfile-build .'
                    echo 'Build Completed'
                }
            }
        }
        stage('Extract artifact') {
            steps {
                script {
                    echo 'Extract...'
                    sh 'docker create --name build-stage-container fact:v${BUILD_NUMBER}'
                    sh 'docker cp build-stage-container:/app/aspnetapp/MyProject.WebApi/out .'
                    sh 'docker rm -f build-stage-container'
                    echo 'Extract Completed'
                }
            }
        }
        stage('Copy compiled artifact') {
            steps {
                script {
                    echo 'Copy artifact...'
                    sh "docker build -t ${docker_repository_url}/${artifact_group}/${artifact_name}:v${BUILD_NUMBER} -f Dockerfile ."
                    echo 'Copy artifact Completed'
                }
            }
        }
        stage('Push image') {
            steps {
                script {
                    withCredentials([[
                        $class: 'UsernamePasswordMultiBinding',
                        credentialsId: 'jenkins',
                        usernameVariable: 'USERNAME',
                        passwordVariable: 'PASSWORD'
                    ]]) {
                        def username = env.USERNAME
                        def password = env.PASSWORD
                        echo 'Login...'
                        sh "docker login ${docker_repository_url} -u ${username} -p ${password}"
                        echo 'Login Successful'
                        echo 'Push image...'
                        sh "docker push ${docker_repository_url}/${artifact_group}/${artifact_name}:v${BUILD_NUMBER}"
                        echo 'Push image Completed'
                    }
                }
            }
        }
    }
}
All steps succeed, but when I try to run the image locally (after pulling it from Maven) or run it on an OpenShift cluster, it fails and says:
Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
What am I doing wrong?
TL;DR: You need to replicate the underlying functionality yourself, outside of Docker
Firstly, you are using the --from option incorrectly. To copy from a previous build stage, you must refer to its index or its name, e.g.:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1-sdk
...
FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime
COPY --from=0 /app/aspnetapp/MyProject.WebApi/out ./
or
FROM docker.m.our-intra.net/microsoft/dotnet:2.1-sdk AS build-stage
...
FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime
COPY --from=build-stage /app/aspnetapp/MyProject.WebApi/out ./
With your current Dockerfile, it would try to copy the file from the upstream docker image, not from the previous build stage.
Secondly, you can't do multi-stage Docker builds with a version prior to 17.05; you need to replicate the underlying functionality yourself, outside of Docker.
To do so, you can have one Dockerfile to build your artifact, then run a throwaway container based on that image and extract the artifact from it. You don't actually need to run the container; you can simply create it with docker create (this creates the writable container layer):
docker create --name build-stage-container build-stage-image
docker cp build-stage-container:/app/aspnetapp/MyProject.WebApi/out .
Then you can have a second Dockerfile to build an image copying the artifact extracted from the previous stage, with a simple COPY from the build context.
@Carlos's answer is perfectly valid. However, as you are using Jenkins and pipelines, you might prefer the following alternative solution:
If you are using Jenkins with dynamic pod provisioning on a Kubernetes distribution, you can do the following:
Use a pod template for your build which is based on <registry>/microsoft/dotnet:2.1-sdk, and compile your application within that pod in the regular dotnet way.
Keep the second part of your Dockerfile, but just copy the compiled artifact into the docker image.
In summary, you move the first part of your Dockerfile into the Jenkinsfile to do the application build; the second part remains to do the docker build from the already compiled binary.
The Jenkinsfile would look similar to this:
podTemplate(
    ...,
    containers: ['microsoft/dotnet:2.1-sdk', 'docker:1.13.1'],
    ...
) {
    container('microsoft/dotnet:2.1-sdk') {
        stage("Compile Code") {
            sh "dotnet restore"
            sh "dotnet publish -c Release -o out"
        }
    }
    container('docker:1.13.1') {
        stage("Build Docker image") {
            docker.build("mydockerimage:1.0")
        }
    }
}
This Jenkinsfile is far from complete and only illustrates how it would work.
Find more documentation here:
Jenkins kubernetes plugin
Jenkins docker global variable in scripted pipeline
This is my final working solution.
Dockerfile-build:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1-sdk
WORKDIR /app
COPY . ./aspnetapp/
WORKDIR /app/aspnetapp
RUN dotnet publish -c Release -o out
Dockerfile:
FROM docker.m.our-intra.net/microsoft/dotnet:2.1.4-aspnetcore-runtime
ADD output/out /output
WORKDIR /output
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
Jenkinsfile:
def docker_repository_url = 'docker.m.our-intra.net'
def artifact_group = 'some-artifact-group'
def artifact_name = 'my-service-api'
pipeline {
    agent {
        label 'build'
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    echo 'Checkout...'
                    checkout scm
                    echo 'Checkout Completed'
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    echo 'Build...'
                    sh 'docker version'
                    sh "docker build -t sometag:v${BUILD_NUMBER} -f Dockerfile-build ."
                    echo 'Build Completed'
                }
            }
        }
        stage('Extract artifact') {
            steps {
                script {
                    echo 'Extract...'
                    sh "docker run -d --name build-stage-container sometag:v${BUILD_NUMBER}"
                    sh 'mkdir output'
                    sh 'docker cp build-stage-container:/app/aspnetapp/MyProject.WebApi/out output'
                    sh 'docker rm -f build-stage-container'
                    sh "docker rmi -f sometag:v${BUILD_NUMBER}"
                    echo 'Extract Completed'
                }
            }
        }
        stage('Copy compiled artifact') {
            steps {
                script {
                    echo 'Copy artifact...'
                    sh "docker build -t ${docker_repository_url}/${artifact_group}/${artifact_name}:v${BUILD_NUMBER} -f Dockerfile ."
                    echo 'Copy artifact Completed'
                }
            }
        }
        stage('Push image') {
            steps {
                script {
                    withCredentials([[
                        $class: 'UsernamePasswordMultiBinding',
                        credentialsId: 'jenkins',
                        usernameVariable: 'USERNAME',
                        passwordVariable: 'PASSWORD'
                    ]]) {
                        def username = env.USERNAME
                        def password = env.PASSWORD
                        echo 'Login...'
                        sh "docker login ${docker_repository_url} -u ${username} -p ${password}"
                        echo 'Login Successful'
                        echo 'Push image...'
                        sh "docker push ${docker_repository_url}/${artifact_group}/${artifact_name}:v${BUILD_NUMBER}"
                        echo 'Push image Completed'
                        sh "docker rmi -f ${docker_repository_url}/${artifact_group}/${artifact_name}:v${BUILD_NUMBER}"
                    }
                }
            }
        }
    }
}
