Missing package.json when using Jenkins to run cypress - jenkins

I've been trying to follow the CI setup example that Cypress provides for Jenkins here.
I'm using the following Jenkins pipeline script (also provided by the good folks at Cypress):
// Example Jenkins pipeline with Cypress end-to-end tests running in parallel on 2 workers
// Pipeline syntax from https://jenkins.io/doc/book/pipeline/
// Setup:
// before starting Jenkins, I have created several volumes to cache
// Jenkins configuration, NPM modules and Cypress binary
// docker volume create jenkins-data
// docker volume create npm-cache
// docker volume create cypress-cache
// Start Jenkins command line by line:
// - run as "root" user (insecure, contact your admin to configure user and groups!)
// - run Docker in detached mode
// - name running container "blue-ocean"
// - map port 8080 with Jenkins UI
// - map volumes for Jenkins data, NPM and Cypress caches
// - pass Docker socket which allows Jenkins to start worker containers
// - download and execute the latest BlueOcean Docker image
// docker run \
// -u root \
// -d \
// --name blue-ocean \
// -p 8080:8080 \
// -v jenkins-data:/var/jenkins_home \
// -v npm-cache:/root/.npm \
// -v cypress-cache:/root/.cache \
// -v /var/run/docker.sock:/var/run/docker.sock \
// jenkinsci/blueocean:latest
// If you start for the very first time, inspect the logs from the running container
// to see Administrator password - you will need it to configure Jenkins via localhost:8080 UI
// docker logs blue-ocean
pipeline {
agent {
// this image provides everything needed to run Cypress
docker {
image 'cypress/base:10'
}
}
stages {
// first stage installs node dependencies and Cypress binary
stage('build') {
steps {
// there are a few default environment variables on Jenkins
// on local Jenkins machine (assuming port 8080) see
// http://localhost:8080/pipeline-syntax/globals#env
echo "Running build ${env.BUILD_ID} on ${env.JENKINS_URL}"
sh 'npm ci'
sh 'npm run cy:verify'
}
}
stage('start local server') {
steps {
// start local server in the background
// we will shut it down in "post" command block
sh 'nohup npm start &'
}
}
// this stage runs end-to-end tests, and each agent uses the workspace
// from the previous stage
stage('cypress parallel tests') {
environment {
// we will be recording test results and video on Cypress dashboard
// to record we need to set an environment variable
// we can load the record key variable from credentials store
// see https://jenkins.io/doc/book/using/using-credentials/
CYPRESS_RECORD_KEY = credentials('cypress-example-kitchensink-record-key')
// because parallel steps share the workspace they might race to delete
// screenshots and videos folders. Tell Cypress not to delete these folders
CYPRESS_trashAssetsBeforeRuns = 'false'
}
// https://jenkins.io/doc/book/pipeline/syntax/#parallel
parallel {
// start several test jobs in parallel, and they all
// will use Cypress Dashboard to load balance any found spec files
stage('tester A') {
steps {
echo "Running build ${env.BUILD_ID}"
sh "npm run e2e:record:parallel"
}
}
// second tester runs the same command
stage('tester B') {
steps {
echo "Running build ${env.BUILD_ID}"
sh "npm run e2e:record:parallel"
}
}
}
}
}
post {
// shutdown the server running in the background
always {
echo 'Stopping local server'
sh 'pkill -f http-server'
}
}
}
I'm using a "Trigger builds remotely" build trigger. When I execute that URL, the first steps (downloading the newest blue-ocean docker image) seem to work successfully.
When npm tries to read package.json, however, it doesn't seem to find it. Images of my Jenkins console log here and here. I'm not sure why the package.json file is not there. Should I be adding it from the kitchensink repo somehow?
Many thanks for any tips.
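One possible explanation, offered only as an assumption: when the pipeline script is pasted into the job configuration and the job is triggered remotely, nothing in the script above checks out the repository, so the workspace mounted into the cypress/base container never contains package.json. A minimal sketch of an explicit checkout at the start of the build stage (the repository URL is a placeholder for your own project):
stage('build') {
  steps {
    // hypothetical: fetch the project first so package.json exists in the workspace
    git url: 'https://github.com/cypress-io/cypress-example-kitchensink.git'
    sh 'npm ci'
    sh 'npm run cy:verify'
  }
}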

Related

Copy build artifacts from inside docker to host

This is my jenkinsfile:
pipeline {
agent any
stages {
stage('Build') {
steps {
echo '####################################################'
echo 'Building Docker container'
echo '####################################################'
script {
sh 'docker build -t my-gcc:1.0 .'
}
}
}
stage('Run') {
steps {
echo '##########################################################'
echo 'Running Docker Image'
echo '##########################################################'
script {
sh 'docker run --privileged -i my-gcc:1.0'
sh 'docker cp my-gcc:1.0:/usr/src/myCppProject/build/*.hex .'
}
}
}
stage('Program') {
steps {
echo '#######################################################'
echo 'Programming target'
echo '#######################################################'
script {
sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
}
}
}
}
}
The docker image is built and then run; after this I would like to extract the hex file from the container to the Jenkins working directory so that I can flash it to the board.
But when I try to copy the file I get this error:
+ docker cp my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex .
Error: No such container:path: my-gcc:1.0:1.0:/usr/src/myCppProject/build/*.hex
I tried to access other folders in the container and copy their content, but I always get the same error. It seems that I cannot access any folder or file in the container this way.
What am I doing wrong?
Regards
Martin
Jenkins has some standard support for Docker; this is described in Using Docker with Pipeline in the Jenkins documentation. In particular, Jenkins knows how to use a Docker image that contains just tools, combined with the project's workspace directory. I'd use that support instead of trying to script docker cp.
That might look roughly like so:
pipeline {
agent none
stages {
stage('Build') {
// Jenkins will run `docker build` for you
agent { dockerfile { args '--privileged' } }
steps {
// The current working directory is bind-mounted into the container;
// the image's `ENTRYPOINT`/`CMD` is ignored.
// Copy the file out of the container:
sh "cp /usr/src/myCppProject/build/*.hex ."
}
}
stage('Program') {
agent any // so not in Docker
steps {
sh 'openocd -d0 -f board/st_nucleo_f4.cfg -c "init;targets;halt;flash write_image erase Testbench.hex;shutdown"'
}
}
}
}
If you use this approach, also consider whether the main build sequence should run via Jenkins pipeline steps, via an sh invocation that runs a shell script or a Makefile, or whether a Dockerfile is actually the right tool. It might make sense to build a Docker image out of your customized compiler, but then use the Jenkins pipeline support to build the image for the target board rather than trying to do it all in a Dockerfile.
In the invocation you show, you can't directly docker cp a file out of an image. When you start the container, use docker run --name to give it a name, then docker cp from that container name.
sh 'docker run --name builder ... my-gcc:1.0'
sh 'docker cp builder:/usr/src/myCppProject/build/*.hex .'
sh 'docker rm builder'
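A caveat worth keeping in mind: docker cp does not expand wildcards, so the *.hex glob will not match anything as a container path. Either name the .hex file exactly or copy the whole build directory and pick the file out on the host; a rough sketch:
sh 'docker cp builder:/usr/src/myCppProject/build ./build-output'
sh 'cp ./build-output/*.hex .'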

How to run a "sidecar" container in Jenkins Blue Ocean?

I am fairly new to Jenkins and CI/CD in general, but believe that I have searched long enough to conclude things are not as I expect.
I want to do some frontend tests on my website and just as in real life I want to test with the site in one Docker container and the database in another container. Jenkins has this documented as "sidecar" containers which can be part of a pipeline.
Their example:
node {
checkout scm
/*
* In order to communicate with the MySQL server, this Pipeline explicitly
* maps the port (`3306`) to a known port on the host machine.
*/
docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw" -p 3306:3306') { c ->
/* Wait until mysql service is up */
sh 'while ! mysqladmin ping -h0.0.0.0 --silent; do sleep 1; done'
/* Run some tests which require MySQL */
sh 'make check'
}
}
The thing is that I do not have a 'traditional' Jenkins pipeline, but I am running Jenkins Blue Ocean instead. This gives me a fancy pipeline editor, but also my pipeline code (Jenkinsfile) looks really different from the example:
pipeline {
agent {
docker {
image 'php'
}
}
stages {
stage('Build') {
steps {
sh 'composer --version'
sh 'composer install'
}
}
stage('Tests') {
steps {
echo 'Do test'
}
}
}
}
So how would I be spawning (and tearing down) these "sidecar" containers in a Blue Ocean pipeline? Currently the Pipeline editor has no available options if I want to add a step related to Docker. Can I still use docker.image? I do have the Docker Pipeline plugin installed.
The example provided by Jenkins in the link is actually a fully functional pipeline, with one exception: you need to comment out checkout scm if you provide the pipeline script directly in Jenkins.
node {
// checkout scm
docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
docker.image('mysql:5').inside("--link ${c.id}:db") {
/* Wait until mysql service is up */
sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
}
docker.image('centos:7').inside("--link ${c.id}:db") {
/*
* Run some tests which require MySQL, and assume that it is
* available on the host name `db`
*/
sh 'make check'
}
}
}
What may be confusing to you is that the code style in the example above is very different from the one generated by the Blue Ocean pipeline editor. That is because the script is written in the Scripted Pipeline and Blue Ocean has generated a Declarative Pipeline. Both are fully supported in Jenkins and both use the same engine underneath, but the syntax differences may lead to confusion at first.
You can use the Scripted Pipeline example above just fine, but if you want to keep the Declarative Pipeline, you can run the scripted part inside the script step. In both cases you need to change the docker images and executed commands according to your needs.
pipeline {
agent any
stages {
stage('Build and test') {
steps {
script {
node {
docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
docker.image('mysql:5').inside("--link ${c.id}:db") {
/* Wait until mysql service is up */
sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
}
docker.image('centos:7').inside("--link ${c.id}:db") {
/*
* Run some tests which require MySQL, and assume that it is
* available on the host name `db`
*/
sh 'make check'
}
}
}
}
}
}
}
}
Please note:
The Docker container link feature used in this example is a legacy feature; it may eventually be removed.
The pipeline will fail at make check, as make is not provided in the centos:7 image.
More than half a year later I finally figured out it was much simpler than I thought. It can be done with docker-compose.
You need to make sure that your Jenkins has access to docker-compose. So if you are running Jenkins as a docker container, ensure it has access to the Docker socket. Also, Jenkins is not likely to ship with docker-compose included (JENKINS-51898), so you will have to build your own Blue Ocean image to install docker-compose.
Rather than copying below file, check https://docs.docker.com/compose/install/ for the latest version!
# Dockerfile
FROM jenkinsci/blueocean
USER root
RUN curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose && \
chmod +x /usr/local/bin/docker-compose
USER jenkins
Once you have Jenkins and Docker up and running you can deploy a test version of your application with a regular docker-compose file, including all database and other containers you might need. You can install dependencies and start the tests by using docker-compose exec to execute commands inside containers started with docker-compose.
Note that docker-compose -f docker-compose.test.yml exec -T php composer install executes the composer install command in the container that was defined as the php service in the docker-compose file.
In the end, no matter the outcome of the tests, all containers and associated volumes (-v flag) are shut down and removed.
# Jenkinsfile
pipeline {
agent any
stages {
stage('Start') {
steps {
sh 'docker-compose -f docker-compose.test.yml up -d'
}
}
stage('Composer install') {
steps {
sh 'docker-compose -f docker-compose.test.yml exec -T php composer install --no-interaction --no-progress --optimize-autoloader'
}
}
stage('Test') {
steps {
sh 'docker-compose -f docker-compose.test.yml exec -T php <run test script>'
}
}
}
post {
always {
sh 'docker-compose -f docker-compose.test.yml down -v'
}
}
}

Jenkins build of Gradle project only succeeds when starting Gradle Docker container as root

I have a Jenkins running as a standalone server, i.e. it is not running as a Docker container. For building my application with Gradle I'm using a Docker image as shown below.
stages {
stage('Compile') {
steps {
script {
docker.image("gradle:6.2.2-jdk8").inside("-v ${HOME}/.gradle:/gradle/.gradle") {
sh './gradlew clean vaadinPrepareNode build -Pvaadin.productionMode'
sh "mkdir -p ${GRADLE_DEPENDENCY_PATH} && (cd build/dependency; jar -xf ../libs/*.jar)"
}
}
}
}
}
My problem is the following: there is a Gradle task which tries to access the directory /.npm. However, the jenkins user launching the Docker container (uid=1002(jenkins) gid=1002(jenkins)) does not seem to have permission to do that, so the task fails.
When changing the pipeline statement to docker.image("gradle:6.2.2-jdk8").inside("-u root -v ${HOME}/.gradle:/gradle/.gradle"), the build finishes successfully, but then I have folders owned by root in the workspace which cannot be deleted by Jenkins.
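A possible workaround, sketched purely as an assumption rather than a confirmed fix: the npm-based Vaadin task writes to $HOME/.npm, and because the jenkins uid does not exist inside the container, HOME resolves to /, which the jenkins user cannot write to. Pointing HOME at the Jenkins workspace keeps those files owned by the jenkins user:
script {
  // assumption: the workspace is writable by the jenkins uid, so npm can create .npm there
  docker.image("gradle:6.2.2-jdk8").inside("-e HOME=${WORKSPACE} -v ${HOME}/.gradle:/gradle/.gradle") {
    sh './gradlew clean vaadinPrepareNode build -Pvaadin.productionMode'
  }
}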

Jenkins Pipeline Docker -- Container is Not Running

I have Jenkins running on an EC2 instance. I have the EC2 Plugin configured in a peered VPC, and when a job is tagged 'support_ubuntu_docker' it will spin up a Jenkins slave with Docker pre-installed.
I am able to follow the examples and get my job to connect to the local Docker running on the slave, and run commands inside the container.
Working: https://pastebin.com/UANvjhnA
pipeline {
agent {
docker {
image 'node:7-alpine'
label 'support_ubuntu_docker'
}
}
stages {
stage('Test') {
steps {
sh 'node --version'
}
}
}
}
Not Working https://pastebin.com/MsSZaPha
pipeline {
agent {
docker {
image 'hashicorp/terraform:light'
label 'support_ubuntu_docker'
}
}
stages {
stage('Test') {
steps {
sh 'terraform --version'
}
}
}
}
I have tried with the ansible/ansible:default image, as well as an image I created myself.
FROM alpine:3.7
RUN apk add --no-cache terraform
RUN apk add --no-cache ansible
ENTRYPOINT ["/bin/ash"]
This image works as expected locally:
[jenkins_test] docker exec -it 3843653932c8 ash 10:56:42 ☁ master ☂ ⚡ ✭
/ # terraform --version
Terraform v0.11.0
/ # ansible --version
ansible 2.4.6.0
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15 (default, Aug 22 2018, 13:24:18) [GCC 6.4.0]
/ #
I really just want to be able to clone my terraform git repo, and use the terraform in the container to run my init/plan/applies.
The error I'm getting for all of these is:
java.io.IOException: Failed to run top 'c9dfeda21b718b9df1035500adf2ef80c5c3807cf63e724317d620d4bcaa14b3'. Error: Error response from daemon: Container c9dfeda21b718b9df1035500adf2ef80c5c3807cf63e724317d620d4bcaa14b3 is not running
The question really should have been a Docker question; what's the difference between node:7-alpine and hashicorp/terraform:light?
hashicorp/terraform:light has an ENTRYPOINT entry, pointing to /bin/terraform.
Basically that means you run it this way:
docker run hashicorp/terraform:light --version
And it will exit right away, i.e., it's not interactive.
So if you want an interactive shell within that Docker container, you'll have to override the ENTRYPOINT to point at a shell, say, /bin/bash and also tell Docker to run interactively:
pipeline {
agent {
docker {
image 'hashicorp/terraform:light'
args '-it --entrypoint=/bin/bash'
label 'support_ubuntu_docker'
}
}
stages {
stage('Test') {
steps {
sh 'terraform --version'
}
}
}
}
In a scripted pipeline you can do this:
docker.image(dockerImage).inside("--entrypoint=''") {
// code to run on container
}
If you are creating the image to use in Jenkins from a base image that already has an ENTRYPOINT instruction, you can override it by adding this line to the end of your own Dockerfile:
ENTRYPOINT []
Then the whole --entrypoint is no longer needed.
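For example, a minimal sketch of such a Dockerfile, using the image from the question as the base:
FROM hashicorp/terraform:light
ENTRYPOINT []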
I had to change the entrypoint to empty to get it working; with the following script it worked like a charm:
pipeline {
agent {
docker {
image 'hashicorp/terraform:light'
args '-i --entrypoint='
}
}
stages {
stage('Test') {
steps {
sh 'terraform --version'
}
}
}
}

Jenkins Maven Pipeline

I want to make a Jenkinsfile that will run tests and build my Spring Boot Java application. The problem is that my tests require Postgres and RabbitMQ.
What I'm trying to do:
1) Setup Jenkins in docker
## Run Jenkins Docker :
sudo docker run -d -p 8080:8080 -p 50000:50000 -v /home/jenkins:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock -u root jenkins
Bash into docker container
## Bash into new docker container
docker exec -it {{container_ID}} bash
## Download and install docker as root
curl -sSL https://get.docker.com/ | sh
exit
2) Make a pipeline to do it:
pipeline {
agent {
docker {
image 'maven:3-alpine'
args '-v /root/.m2:/root/.m2'
}
}
stages {
stage('Build') {
steps {
sh 'mvn -B -DskipTests clean package'
}
}
stage('Test') {
steps {
/* Run some tests which require PostgreSQL */
sh 'mvn test'
}
post {
always {
junit 'target/surefire-reports/*.xml'
}
}
}
}
}
My goal is to add postgres and rabbit so they are launched in the phase right before the tests. I found this: https://jenkins.io/doc/book/pipeline/docker/
There is an example of how to run additional docker images:
checkout scm
/*
* In order to communicate with the MySQL server, this Pipeline explicitly
* maps the port (`3306`) to a known port on the host machine.
*/
docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw" -p 3306:3306') { c ->
/* Wait until mysql service is up */
sh 'while ! mysqladmin ping -h0.0.0.0 --silent; do sleep 1; done'
/* Run some tests which require MySQL */
sh 'make check'
}
Looking for some experienced devops who can help with my setup. Thanks.
At the time of writing, declarative pipeline doesn't support such sidecar containers (as described in the docs). So what you found is correct for your problem.
The snippet you found is, however, for scripted pipeline. To use this within your declarative pipeline, you need to wrap it in a script step:
stage('Test') {
  steps {
    script {
      docker.image('postgres:9').withRun('<whatever parameters you need>') { c ->
        sh 'mvn test'
      }
    }
  }
}
Of course, replace this with the Postgres image and parameters you need.
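Following the same pattern, a rough sketch of how both services from the question could be brought up around the test run (image tags, ports, and credentials below are assumptions, not prescriptions):
stage('Test') {
  steps {
    script {
      docker.image('postgres:9').withRun('-e "POSTGRES_PASSWORD=my-secret-pw" -p 5432:5432') { db ->
        docker.image('rabbitmq:3').withRun('-p 5672:5672') { mq ->
          /* Run some tests which require PostgreSQL and RabbitMQ */
          sh 'mvn test'
        }
      }
    }
  }
}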
