I have a .gitlab-ci.yml with these jobs:
build_image:
  stage: build
  tags:
    - BUILD
  script:
    # fetch the latest image
    - docker pull ${CI_REGISTRY_IMAGE}:${TAG} || true
    # build from the latest tagged image
    - docker build --cache-from ${CI_REGISTRY_IMAGE}:${TAG}
      --rm
      --pull
      --tag ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}
      -f cicd/Dockerfile
      .
    - docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}
  only:
    - master
    - develop

test_api_image:
  image: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}
  stage: testbuild
  variables:
    # force the connection to the local service
    MONGODB_URL: mongodb://mongo:27017/BDD
  script:
    - npm run cicdtest
  tags:
    - TEST
  only:
    - master
    - develop
First, in the build job, we build our image from a Dockerfile and push it to Nexus.
In the next job, the GitLab runner pulls this image and launches the Mocha tests with "npm run cicdtest",
and we get this error:
$ npm run cicdtest
> api@0.1.0 cicdtest /builds/data/api
> mocha test/api/**/tests.js --file test/helper --reporter list --exit
sh: mocha: command not found
npm ERR! code ELIFECYCLE
npm ERR! syscall spawn
npm ERR! file sh
npm ERR! errno ENOENT
On my desktop, I pull the same image to test it locally, run it, and enter the container. When I execute "npm run cicdtest" there, I have no problem.
Any idea?
For information, this is my Dockerfile:
FROM centos:latest
RUN mkdir -p /var/www/myapp
WORKDIR /var/www/myapp
RUN yum update -y \
&& yum install -y gcc gcc-c++ make \
&& curl -sL https://rpm.nodesource.com/setup_14.x | bash - \
&& yum install -y nodejs
COPY ["package.json", "package-lock.json*", "./"]
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node" , "./src/app.js"]
And my package.json contains:
"dependencies": {
"body-parser": "^1.19.0",
"cors": "^2.8.5",
"crypto-js": "^4.0.0",
"dotenv": "^8.2.0",
"express": "^4.17.1",
"mongoose": "^5.10.13",
"mustache": "^4.2.0",
"nodemailer": "^6.6.0",
"nodemon": "^2.0.6",
"restify": "^8.5.1",
"web-push": "^3.4.4"
},
"devDependencies": {
"chai": "^4.2.0",
"chai-http": "^4.3.0",
"eslint": "^7.26.0",
"eslint-plugin-chai-expect": "^2.2.0",
"husky": "^6.0.0",
"mocha": "^8.3.0",
"supertest": "^6.1.3"
}
Add this line before RUN npm install:
RUN npm install --global mocha
The mocha command works on your local machine because you installed it globally there in the past.
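For clarity, that line goes in the Dockerfile just before the existing npm install step, roughly like this (a sketch based on the Dockerfile shown above; only the added RUN line is new):
COPY ["package.json", "package-lock.json*", "./"]
# install mocha globally so the cicdtest script can find it on the PATH
RUN npm install --global mocha
RUN npm install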
The solution was to add GIT_STRATEGY: none, because we are testing an already-built image that contains the source code, so we don't need Git to check out the source one more time.
test_api_image:
  image: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}
  stage: testbuild
  variables:
    # We do not need GitLab to clone the source code.
    GIT_STRATEGY: none
  script:
    - npm run cicdtest
  tags:
    - TEST
  only:
    - master
    - develop
Related
I am using GitHub Actions and a Dockerfile to deploy a container to AWS ECS.
But an npm error is always raised after the task definition is deployed.
I checked the ECS container on the EC2 instance over SSH. I can call the API for a few seconds, but the container stops soon after.
I watched CloudWatch and found the logs below.
I tried installing ts-node globally and removing node_modules and the npm cache... but the error does not disappear.
How can I solve this? And where is 2021-11-10T13_02_58_148Z-debug.log? I cannot find it in the container...
npm ERR! path /test
2021-11-10T22:02:58.147+09:00 npm ERR! command failed
2021-11-10T22:02:58.147+09:00 npm ERR! signal SIGTERM
2021-11-10T22:02:58.147+09:00 npm ERR! command sh -c NODE_ENV=prod ts-node app.ts
2021-11-10T22:02:58.156+09:00 npm ERR! A complete log of this run can be found in:
2021-11-10T22:02:58.156+09:00 npm ERR! /root/.npm/_logs/2021-11-10T13_02_58_148Z-debug.log
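As for the debug log: npm writes it to the container's filesystem, so it disappears together with the container. If the stopped container still exists on the EC2 instance (the ECS agent removes stopped containers after a while), one way to grab it is docker cp, which works on stopped containers too (a sketch; the container ID comes from docker ps -a):
# find the ID of the stopped container
docker ps -a
# copy npm's debug log out of it (path taken from the error output above)
docker cp <container-id>:/root/.npm/_logs/2021-11-10T13_02_58_148Z-debug.log .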
package.json
{
  "name": "test-package",
  "version": "1.0.0",
  "description": "",
  "dependencies": {
    "@types/express": "^4.17.13",
    "@types/node": "^16.11.7",
    "ajv": "^6.12.6",
    "aws-sdk": "^2.1015.0",
    "dotenv": "^10.0.0",
    "express": "^4.17.1",
    "hardhat": "^2.6.6",
    "power-di": "^2.4.14",
    "tslint": "^6.1.3",
    "typescript": "^4.4.4"
  },
  "scripts": {
    "start:prod": "NODE_ENV=prod ts-node app.ts"
  },
  "devDependencies": {
    "@types/chai": "^4.2.22",
    "@types/mocha": "^9.0.0",
    "@types/supertest": "^2.0.11",
    "chai": "^4.3.4",
    "mocha": "^9.1.3",
    "supertest": "^6.1.6"
  }
}
FROM node:16-alpine
RUN mkdir /test
WORKDIR /test
COPY . /test
RUN npm update
RUN npm i
RUN npm i -g ganache-cli
RUN npm i -g ts-node
EXPOSE 3000
ENTRYPOINT ["npm", "run", "start:prod"]
I created a Docker image which contains an npm project. The project has an npm script which runs tests. I use GitLab for CI/CD, where I want to define a job which will pull my image and run the npm script. This is the .gitlab-ci.yml:
stages:
  - test

.test-api:
  image: $CI_REGISTRY_IMAGE
  stage: test
  script:
    - cd packages/mypackage && npm run test:ci
  artifacts:
    paths:
      - packages/mypackage/test-report.html
    expire_in: 1 week

test-api-beta:
  extends: .test-api
  environment:
    name: some-env
  variables:
    CI_REGISTRY_IMAGE: my_image_name
The gitlab job fails with the error:
> mypackage@1.0.0 test:ci /builds/my-organization/my-project/packages/mypackage
> DEBUG=jest-mongodb:* NODE_ENV=test ts-node --transpile-only --log-error node_modules/.bin/jest --watchAll=false --detectOpenHandles --bail
sh: 1: ts-node: not found
npm ERR! code ELIFECYCLE
npm ERR! syscall spawn
npm ERR! file sh
npm ERR! errno ENOENT
npm ERR! mypackage@1.0.0 test:ci: `DEBUG=jest-mongodb:* NODE_ENV=test ts-node --transpile-only --log-error node_modules/.bin/jest --watchAll=false --detectOpenHandles --bail`
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the mypackage@1.0.0 test:ci script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-04-28T09_05_39_023Z-debug.log
The main issue is the warning npm WARN Local package.json exists, but node_modules missing, did you mean to install?. This means that the GitLab script is executed against the checked-out Git repository of my project instead of against the contents of the Docker image. Indeed, my repository doesn't contain node_modules, so the job fails. But why doesn't GitLab execute the script against the image's contents?
The Docker image has a CMD directive:
CMD ["npm", "run", "start"]
Maybe the CMD somehow interferes with the GitLab script?
P.S. Pulling the Docker image manually and executing the npm script locally works.
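For reference, that manual check looks roughly like this (a sketch; the image name and the package path inside the image are taken from the job and Dockerfile below):
docker pull $CI_REGISTRY_IMAGE
# run the test script against the code baked into the image, not against a fresh checkout
docker run --rm -it $CI_REGISTRY_IMAGE sh -c "cd /src/packages/mypackage && npm run test:ci"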
This is my Dockerfile:
FROM node:14.15.1
COPY ./package.json /src/package.json
WORKDIR /src
RUN npm install
COPY ./lerna.json /src/lerna.json
COPY ./packages/mypackage/package.json /src/packages/mypackage/package.json
RUN npm run clean
COPY . /src
EXPOSE 8082
CMD ["npm" , "run", "start"]
EDIT: As per M. Iduoad's answer, if the script is changed as follows:
.test-api:
  image: $CI_REGISTRY_IMAGE
  stage: test
  script:
    - cd /src/packages/mypackage && npm run test:ci
  artifacts:
    paths:
      - packages/mypackage/test-report.html
    expire_in: 1 week
the npm script works. We need to cd /src/packages/mypackage because that is where the package lives inside the image built by the Dockerfile.
GitLab always clones your repo, checks out the branch the pipeline is triggered against, and runs your commands on that code (in the folder CI_PROJECT_DIR).
So, in order to use the version of the code baked into your Docker image, you should move to the folder where it is located in the image:
.test-api:
  image: $CI_REGISTRY_IMAGE
  stage: test
  script:
    - cd /the/absolute/path/of/the/project/ && npm run test:ci
However, doing this defies GitLab CI's way of doing things, since your job will always run on the same code (the one in the image) every time it runs, whereas GitLab CI is a CI system intended to run your jobs against the code in your Git repo.
So, to summarize, I suggest you add a stage where you install your dependencies (node_modules):
stages:
  - install
  - test

install-deps:
  image: node:latest # or the version you are using
  stage: install
  script:
    - npm install
  cache:
    key: some-key
    paths:
      - $CI_PROJECT_DIR/node_modules

.test-api:
  image: $CI_REGISTRY_IMAGE
  stage: test
  script:
    - npm run test:ci
  cache:
    key: some-key
    paths:
      - $CI_PROJECT_DIR/node_modules
  artifacts:
    paths:
      - packages/mypackage/test-report.html
    expire_in: 1 week
This uses GitLab CI's cache feature to store node_modules across your jobs and across your pipelines.
You can control how the cache is used and shared across pipelines and jobs by changing the key (read more about the cache in GitLab's docs).
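For example, keying the cache on the branch name makes every job in pipelines for the same branch reuse the same node_modules (a sketch using GitLab's predefined CI_COMMIT_REF_SLUG variable):
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - $CI_PROJECT_DIR/node_modules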
I am using Symfony API Platform 2.5 on Docker, with a "client" service for the ReactJS frontend. I don't really know what happened, but I can't do anything with npm anymore; I always get this error:
npm ERR! cb() never called!
npm ERR! This is an error with npm itself. Please report this error at:
npm ERR! <https://npm.community>
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-04-04T09_55_47_791Z-debug.log
I then tried "npm install --no-package-lock" and got this error:
npm ERR! code EBUSY
npm ERR! syscall rmdir
npm ERR! path /usr/src/client/node_modules/.webpack-dev-server.DELETE/ssl
npm ERR! errno -16
npm ERR! EBUSY: resource busy or locked, rmdir '/usr/src/client/node_modules/.webpack-dev-server.DELETE/ssl'
When I try to run "rm -rf node_modules" in the container, I get the same kind of error:
rm: can't remove 'node_modules/.webpack-dev-server.DELETE/ssl': Resource busy
Here is the docker-compose part:
client:
  build:
    context: ./client
    target: api_platform_client_development
    cache_from:
      - ${CLIENT_IMAGE:-quay.io/api-platform/client}
  image: ${CLIENT_IMAGE:-quay.io/api-platform/client}
  tty: true # https://github.com/facebook/create-react-app/issues/8688
  environment:
    - API_PLATFORM_CLIENT_GENERATOR_ENTRYPOINT=http://api
    - API_PLATFORM_CLIENT_GENERATOR_OUTPUT=src
  depends_on:
    - dev-tls
  volumes:
    - ./client:/usr/src/client:rw,cached
    - dev-certs:/usr/src/client/node_modules/webpack-dev-server/ssl:rw,nocopy
  ports:
    - target: 3000
      published: 443
      protocol: tcp
And the associated Dockerfile:
# https://docs.docker.com/develop/develop-images/multistage-build/#stop-at-a-specific-build-stage
# https://docs.docker.com/compose/compose-file/#target
# https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
ARG NODE_VERSION=13
ARG NGINX_VERSION=1.17
# "development" stage
FROM node:${NODE_VERSION}-alpine AS api_platform_client_development
WORKDIR /usr/src/client
RUN yarn global add @api-platform/client-generator
RUN apk --no-cache --virtual build-dependencies add \
python \
make \
build-base
# prevent the reinstallation of node modules at every change in the source code
COPY package.json yarn.lock ./
RUN set -eux; \
yarn install
COPY . ./
VOLUME /usr/src/client/node_modules
ENV HTTPS true
CMD ["yarn", "start"]
# "build" stage
# depends on the "development" stage above
FROM api_platform_client_development AS api_platform_client_build
ARG REACT_APP_API_ENTRYPOINT
RUN set -eux; \
yarn build
# "nginx" stage
# depends on the "build" stage above
FROM nginx:${NGINX_VERSION}-alpine AS api_platform_client_nginx
COPY docker/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
WORKDIR /usr/src/client/build
COPY --from=api_platform_client_build /usr/src/client/build ./
Is there a way to completely RESET (i.e. do a clean install of) only this service, without touching the others? I know we can remove all volumes with options, but I didn't find how to act on only one service. I have a database in another service that I don't want to lose. :/
Thanks!
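For what it's worth, one way to rebuild only the client service without touching the others is to remove and recreate just that container, discarding its anonymous volumes so node_modules is rebuilt from scratch (a sketch; the service name is taken from the compose file above, and --renew-anon-volumes needs a reasonably recent docker-compose):
# stop and remove only the client container
docker-compose rm -sf client
# rebuild its image without using the cache
docker-compose build --no-cache client
# recreate the container, dropping the anonymous node_modules volume
docker-compose up -d --force-recreate --renew-anon-volumes client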
I have the following Dockerfile
FROM node:lts-alpine
WORKDIR /app
COPY package*.json ./
RUN export NODE_ENV=production
RUN npm config set strict-ssl false
RUN npm install --only=prod
RUN npm i #vue/cli-service
COPY . ./
RUN npm run build:prod
RUN npm install -g http-server
EXPOSE 8080
CMD ["http-server" "dist"]
My dev environment includes Cypress for E2E testing, but when the npm i @vue/cli-service command runs, it fails with the following error:
Cypress Version: 3.8.3
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.2.7 (node_modules/webpack-dev-server/node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.13 (node_modules/watchpack-chokidar2/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.13 (node_modules/mochapack/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@2.3.2 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@2.3.2: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! cypress@3.8.3 postinstall: `node index.js --exec install`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the cypress@3.8.3 postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-03-27T11_53_21_406Z-debug.log
The command '/bin/sh -c npm i @vue/cli-service' returned a non-zero code: 1
ERROR: Job failed: exit status 1
Here is my package.json file:
{
  "name": "todo-app",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "serve": "vue-cli-service serve",
    "build:dev": "vue-cli-service build --mode development",
    "build:prod": "vue-cli-service build --mode production",
    "test:unit": "vue-cli-service test:unit",
    "test:e2e": "vue-cli-service test:e2e",
    "test:e2e:ci": "vue-cli-service test:e2e --headless",
    "lint": "vue-cli-service lint"
  },
  "dependencies": {
    "core-js": "^3.6.5",
    "vue": "^2.6.11",
    "vue-router": "^3.2.0",
    "vuex": "^3.4.0"
  },
  "devDependencies": {
    "@vue/cli-plugin-babel": "~4.5.0",
    "@vue/cli-plugin-e2e-cypress": "~4.5.0",
    "@vue/cli-plugin-eslint": "~4.5.0",
    "@vue/cli-plugin-router": "~4.5.0",
    "@vue/cli-plugin-unit-mocha": "~4.5.0",
    "@vue/cli-plugin-vuex": "~4.5.0",
    "@vue/cli-service": "~4.5.0",
    "@vue/eslint-config-prettier": "^6.0.0",
    "@vue/test-utils": "^1.0.3",
    "babel-eslint": "^10.1.0",
    "chai": "^4.1.2",
    "eslint": "^6.7.2",
    "eslint-plugin-prettier": "^3.3.1",
    "eslint-plugin-vue": "^6.2.2",
    "node-sass": "^4.12.0",
    "prettier": "^2.2.1",
    "sass-loader": "^8.0.2",
    "vue-template-compiler": "^2.6.11"
  }
}
I seem to be in a catch-22. I can't build my Vue.js app for production without vue-cli-service, but vue-cli-service needs the dev dependencies installed, and I can only install the dev dependencies if I include a lot of extra dependencies for testing etc. in my production Docker container that I don't want or need.
I am building my containers with a GitLab CI runner.
How do other people get around this issue?
GitLab CI was able to help me solve my problem.
What I did was build the app in the same stage in which I run the tests (the build only runs if all the tests pass). I then made the dist folder available as an artifact to the docker_build_dev stage, where it shows up as a folder I can simply copy into the Docker image.
You can read more about GitLab artifacts here: https://docs.gitlab.com/ee/ci/pipelines/job_artifacts.html
This has not only sped up my tests and build but also made them simpler.
Here is my .gitlab-ci.yml file
stages:
  - test
  - build

test:
  stage: test
  tags:
    - docker-test
  script:
    - npm config set strict-ssl false
    - export CYPRESS_INSTALL_BINARY=/cypress/cypress.zip
    - npm install
    - npm run test:e2e:ci
    - npm run test:unit
    - npm run build:prod
  artifacts:
    paths:
      - tests/e2e/videos/
      - dist
    expire_in: 20 minutes

docker_build_dev:
  only:
    - development
  stage: build
  tags:
    - docker
  dependencies:
    - test
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build --tag $CI_REGISTRY_IMAGE:DEV .
    - docker push $CI_REGISTRY_IMAGE:DEV
  needs: ["test"]
Here is my simplified Dockerfile (I switched to Nginx, but that has nothing to do with the solution).
FROM nginx:stable-alpine
COPY ./dist/ /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
While working with a simple declarative pipeline in Jenkins, I'm running into an inconsistency: I can run the docker run commands manually to publish my Expo project; however, when Jenkins creates the Docker container and attempts to run the expo publish command, I get a connection refused error. My initial guess was to add privileged to the Docker container, then to ensure the user can run as root ... none of which actually helped. I'm curious if anyone has figured out how to run Expo CI/CD inside of a Docker container using Jenkins as the main way of facilitating that.
+ EXPO_DEBUG=true npx expo publish --non-interactive --release-channel develop
[07:50:24] Publishing to channel 'develop'...
[07:50:26] We noticed you did not build a standalone app with this SDK version and release channel before. Remember that OTA updates will not work with the app built with different SDK version and/or release channel. Read more: https://docs.expo.io/versions/latest/guides/publishing.html#limitations
[07:50:27] Building iOS bundle
[07:50:27] connect ECONNREFUSED 127.0.0.1:19001
[07:50:27] Error: connect ECONNREFUSED 127.0.0.1:19001
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1113:14)
My Jenkinsfile is pretty simple:
pipeline {
  agent {
    dockerfile {
      filename 'Dockerfile'
    }
  }
  stages {
    stage('slack notification') {
      agent none
      steps {
        slackSend color: "good", message: "Build Started - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>)"
      }
    }
    stage('run tests') {
      steps {
        sh 'cd /project && yarn test'
      }
    }
    stage('publish to expo') {
      environment {
        expo_creds = credentials('expo_credentials')
      }
      steps {
        sh "npx expo login -u $expo_creds_USR -p $expo_creds_PSW && mv env.beta.ts env.ts && EXPO_DEBUG=true npx expo publish --non-interactive --release-channel ${env.BRANCH_NAME}"
      }
    }
  }
  post {
    success {
      slackSend color: "good", message: "Build Finished - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>) duration: ${currentBuild.durationString}"
    }
    unstable {
      slackSend color: "warning", message: "Build Unstable - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>) duration: ${currentBuild.durationString}"
    }
    failure {
      slackSend color: "danger", message: "Build Failed - ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>) duration: ${currentBuild.durationString}"
    }
  }
}
And my Dockerfile looks as follows:
FROM node:10.13-alpine as npm-dependencies
WORKDIR /project
RUN apk add --no-cache \
autoconf \
libtool \
automake \
g++ \
make \
libjpeg-turbo-dev \
libpng-dev \
libwebp-dev \
nasm
COPY yarn.lock .
COPY package.json .
COPY .npmrc .
RUN yarn install
FROM node:10.13-jessie
WORKDIR /project
COPY custom_types ./custom_types
COPY img ./img
COPY assets ./assets
COPY src ./src
COPY tests ./tests
COPY babel.config.js ./
COPY .buckconfig ./
COPY .flowconfig ./
COPY .watchmanconfig ./
COPY app.json .
COPY App.js .
COPY env.docker.ts ./env.ts
COPY tsconfig.json .
COPY package.json .
COPY jest.config.js .
COPY --from=npm-dependencies /project/node_modules /project/node_modules
RUN npm install -g expo-cli
RUN mkdir /.npm && chmod 0777 /.npm
RUN mkdir /.cache && chmod 0777 /.cache
RUN mkdir /.yarn && chmod 0777 /.yarn
RUN mkdir /.expo && chmod 0777 /.expo
RUN mkdir /project/.expo && chmod 0777 /project/.expo
Okay, so this is really just Expo-specific and is probably a bug in how it's made. After the login step, I manually cd'd into the /project directory and then ran rm -rf .expo.
Setting the CWD to /project and then deleting .expo fixes the issue. Why it worked outside of Jenkins but not inside is still a bit befuddling; however, the combination of those two actions resolved it for me.
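Concretely, in the Jenkinsfile above that amounts to something like this in the 'publish to expo' stage (a sketch of one way to apply the fix; only the cd /project && rm -rf .expo step is new, and the exact placement may differ):
stage('publish to expo') {
  environment {
    expo_creds = credentials('expo_credentials')
  }
  steps {
    sh "npx expo login -u $expo_creds_USR -p $expo_creds_PSW"
    // clear the stale packager state before publishing
    sh "cd /project && rm -rf .expo"
    sh "mv env.beta.ts env.ts && EXPO_DEBUG=true npx expo publish --non-interactive --release-channel ${env.BRANCH_NAME}"
  }
}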