npm ERR! missing script: dev - docker

I have a Jenkins container with Node v7.2.1 and npm 5.6.0, installed by the command:
apk add nodejs-current
Outside the container, I can run the npm run dev command directly in the folder containing the package.json. There I have Node v8.11.1 and npm 5.6.0, and the build succeeds.
Back inside the Jenkins container, the Jenkins build runs a shell command in the same package.json folder:
npm run dev
My package.json:
"scripts": {
"dev": "webpack-dev-server --progress --colors --inline --hot",
"production": "webpack --progress -p"
}
npm ERR! missing script: dev
Build Failed
I saw some solutions here, but I did not succeed with any of them. I believe my error is due to the Node.js version.
Does anyone know how to upgrade Node.js to 8.11.1, or reinstall version 8.11.1, inside the Jenkins container? I tried nvm and n, and both failed.
Sorry for my English. Google Translate saved me xD
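A minimal sketch of one way to pin Node 8.x, assuming the Jenkins image is Alpine-based (apk is available) and that the Alpine v3.8 branch still ships a Node 8.11.x package; verify the branch and version against your base image:

# remove the current-branch package, then install from a pinned Alpine branch
apk del nodejs-current
apk add nodejs --repository=http://dl-cdn.alpinelinux.org/alpine/v3.8/main
node --version    # expect a v8.11.x build if the pinned branch carries it

Alternatively, running the build on an official node:8 image as the agent sidesteps the Alpine packaging question entirely.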

Related

`npm install --force` lockfile not being respected by dockerfile build?

This is confusing, so I apologize if I don't word it sufficiently well.
Essentially, I'm leveraging npm's --force flag to bypass a conflicting peer-dependency error with npm 8. Subsequent npm installs of the dependencies complete without any errors. When attempting to install dependencies via Docker, however, the original error returns.
So, originally I encounter this error:
npm ERR! ERESOLVE could not resolve
...
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
I bypass it via npm install --force, and subsequent npm installs work without issue in new local environments (e.g. clone into a new dir, run npm install).
However, attempting to npm install or npm ci (npm ci ensures a lockfile already exists) in a Docker build continues to throw the original error:
npm ERR! ERESOLVE could not resolve
...
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
Which, to me, suggests the lockfile isn't being respected. But we know it exists because otherwise npm ci would error.
Does anyone have an idea as to why this might be the case?
Dockerfile I'm testing with:
# // Dockerfile
# ==== CONFIGURE =====
# Use a Node 16 base image
FROM node:16-alpine
# Set the working directory to /app inside the container
WORKDIR /app
# Copy app files
COPY package-lock.json .
RUN echo $(ls)
# ==== BUILD =====
# Install dependencies (npm ci makes sure the exact versions in the lockfile get installed)
RUN npm ci
# Build the app
RUN npm run build
# ==== RUN =======
# Set the env to "production"
ENV NODE_ENV production
# Expose the port on which the app will be running (3000 is the default that `serve` uses)
EXPOSE 3000
# Start the app
CMD [ "npx", "serve", "build" ]
Local npm is v8.1; Docker npm is v8.19. It seems a breaking change was introduced somewhere between those two versions.
From official docs:
NOTE: If you create your package-lock.json file by running npm install with flags that can affect the shape of your dependency tree, such as --legacy-peer-deps or --install-links, you must provide the same flags to npm ci or you are likely to encounter errors. An easy way to do this is to run, for example, npm config set legacy-peer-deps=true --location=project and commit the .npmrc file to your repo.
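A minimal sketch of the fix those docs describe, assuming the lockfile was shaped by --force/--legacy-peer-deps locally (the commit message is illustrative):

# run once in the project, on the machine where the lockfile was generated
npm config set legacy-peer-deps=true --location=project
# this writes .npmrc into the project root; commit it next to the lockfile
git add .npmrc package-lock.json
git commit -m "persist legacy-peer-deps so npm ci matches local installs"

With the .npmrc copied into the image before RUN npm ci (e.g. COPY .npmrc .), the npm inside Docker applies the same resolution behavior the local install used.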

Cypress and Gitlab CI/CD integration

Hey, I'd like to run my Cypress tests using GitLab pipelines. I have the following Docker image:
FROM cypress/browsers:latest
ARG DIR="/usr/tests/e2e"
ENV NODE_MODULES_PATH="$DIR/node_modules"
WORKDIR $DIR
COPY ./tests/e2e/package*.json ./
RUN npm ci
which is built at the beginning of the pipeline as the first job. My .gitlab-ci.yml file looks as follows:
image-e2e:
  # build and push a docker image
  ...

test-e2e-staging:
  stage: test-staging
  image: registry.gitlab.com/.../e2e:latest
  script:
    - cd tests/e2e
    - npm run e2e:ci
  environment:
    name: staging
  needs: ["deploy-frontend-staging", "deploy-backend-staging"]
  dependencies: []
  allow_failure: false
The e2e:ci command simply runs cypress
"e2e:ci": "cypress run --headless --browser chrome --config-file cypress/config/cypress.json",
But the job output on GitLab gives me the following error:
> cypress run --headless --browser chrome --config-file cypress/config/cypress.json
sh: 1: cypress: not found
npm ERR! code ELIFECYCLE
npm ERR! syscall spawn
npm ERR! file sh
npm ERR! errno ENOENT
npm ERR! e2e@1.0.0 e2e:ci: `cypress run --headless --browser chrome --config-file cypress/config/cypress.json`
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the e2e@1.0.0 e2e:ci script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2022-05-01T12_59_51_436Z-debug.log
Can anyone tell me what I'm doing wrong here? Thanks a lot in advance. I also have a cypress dependency in the devDependencies section of package.json. The image-e2e job gives me the following output:
Step 6/6 : RUN npm ci
---> Running in ed278e712827
> cypress@9.5.4 postinstall /usr/tests/e2e/node_modules/cypress
> node index.js --exec install
So it looks like Cypress has been successfully installed there.
Since it complains about cypress not being found, why don't you simply switch to the cypress/included Docker image? Then there is no need to install Cypress on the fly.
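A minimal sketch of that suggestion, assuming the job can run straight from the prebuilt image (the 9.5.4 tag is only an example matching the version in the build log, and some runners require clearing the image entrypoint):

test-e2e-staging:
  stage: test-staging
  image:
    name: cypress/included:9.5.4
    entrypoint: [""]
  script:
    - cd tests/e2e
    # the image ships the Cypress binary on PATH, so no npm ci is needed here
    - cypress run --headless --browser chrome --config-file cypress/config/cypress.json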

How to execute a command from a docker image in a gitlab job?

I created a Docker image which contains an npm project. The npm project has an npm script which runs tests. I use GitLab for CI/CD, where I want to define a job that pulls my image and runs the npm script. This is the .gitlab-ci.yml:
stages:
  - test

.test-api:
  image: $CI_REGISTRY_IMAGE
  stage: test
  script:
    - cd packages/mypackage && npm run test:ci
  artifacts:
    paths:
      - packages/mypackage/test-report.html
    expire_in: 1 week

test-api-beta:
  extends: .test-api
  environment:
    name: some-env
  variables:
    CI_REGISTRY_IMAGE: my_image_name
The GitLab job fails with the error:
> mypackage@1.0.0 test:ci /builds/my-organization/my-project/packages/mypackage
> DEBUG=jest-mongodb:* NODE_ENV=test ts-node --transpile-only --log-error node_modules/.bin/jest --watchAll=false --detectOpenHandles --bail
sh: 1: ts-node: not found
npm ERR! code ELIFECYCLE
npm ERR! syscall spawn
npm ERR! file sh
npm ERR! errno ENOENT
npm ERR! mypackage@1.0.0 test:ci: `DEBUG=jest-mongodb:* NODE_ENV=test ts-node --transpile-only --log-error node_modules/.bin/jest --watchAll=false --detectOpenHandles --bail`
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the mypackage@1.0.0 test:ci script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-04-28T09_05_39_023Z-debug.log
The main issue is the warning npm WARN Local package.json exists, but node_modules missing, did you mean to install?. This means that the GitLab script is executed against the freshly cloned git repository of my project instead of against the files inside the Docker image. Indeed, my repository doesn't contain node_modules, so the job fails. But why doesn't GitLab execute the script on the actual image?
The docker image has a CMD directive:
CMD ["npm", "run", "start"]
Maybe the CMD somehow interferes with the GitLab script?
P.S. pulling the docker image manually and executing the npm script locally works.
This is my Dockerfile:
FROM node:14.15.1
COPY ./package.json /src/package.json
WORKDIR /src
RUN npm install
COPY ./lerna.json /src/lerna.json
COPY ./packages/mypackage/package.json /src/packages/mypackage/package.json
RUN npm run clean
COPY . /src
EXPOSE 8082
CMD ["npm" , "run", "start"]
EDIT: As per M. Iduoad's answer, if the script is changed as follows:
.test-api:
  image: $CI_REGISTRY_IMAGE
  stage: test
  script:
    - cd /src/packages/mypackage && npm run test:ci
  artifacts:
    paths:
      - packages/mypackage/test-report.html
    expire_in: 1 week
the npm script works. We need to cd /src/packages/mypackage because that is where the Dockerfile places the package.
GitLab always clones your repo, checks out the branch the pipeline was triggered against, and runs your commands on that code (in the folder CI_PROJECT_DIR).
So in order to use the version of the code inside your Docker image, you should move to the folder where it is located:
.test-api:
  image: $CI_REGISTRY_IMAGE
  stage: test
  script:
    - cd /the/absolute/path/of/the/project/ && npm run test:ci
Doing this defies GitLab CI's way of doing things: your job will always run on the same code (the one in the image) every time it is triggered, whereas GitLab CI is a CI system intended to run your jobs against the code in your git repo.
So, to summarize, I suggest you instead add a stage where you install your dependencies (node_modules):
stages:
  - install
  - test

install-deps:
  image: node:latest # or the version you are using
  stage: install
  script:
    - npm install
  cache:
    key: some-key
    paths:
      - $CI_PROJECT_DIR/node_modules

.test-api:
  image: $CI_REGISTRY_IMAGE
  stage: test
  script:
    - npm run test:ci
  cache:
    key: some-key
    paths:
      - $CI_PROJECT_DIR/node_modules
  artifacts:
    paths:
      - packages/mypackage/test-report.html
    expire_in: 1 week
This uses GitLab CI's cache feature to store node_modules across your jobs and across your pipeline.
You can control how the cache is used and shared across pipelines and jobs by changing the key (read more about the cache in GitLab's docs).
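For example, keying the cache on the predefined CI_COMMIT_REF_SLUG variable gives each branch its own node_modules (the variable is built into GitLab CI; the rest is unchanged from above):

cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - $CI_PROJECT_DIR/node_modules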

How to resolve "The cypress npm package is installed, but the Cypress binary is missing."

I'm trying to download and install Cypress within a GitLab CI runner and I'm getting this error output:
The cypress npm package is installed, but the Cypress binary is missing.
We expected the binary to be installed here: /root/.cache/Cypress/4.8.0/Cypress/Cypress
Reasons it may be missing:
- You're caching 'node_modules' but are not caching this path: /root/.cache/Cypress
- You ran 'npm install' at an earlier build step but did not persist: /root/.cache/Cypress
Properly caching the binary will fix this error and avoid downloading and unzipping Cypress.
Alternatively, you can run 'cypress install' to download the binary again.
I ran the suggested command cypress install but it didn't help.
Next it says You're caching 'node_modules' but are not caching this path: /root/.cache/Cypress. I don't understand how you can cache the modules yet leave out the path to them.
Next is You ran 'npm install' at an earlier build step but did not persist. I did have npm install in earlier builds, so I replaced it with npm ci, as recommended in the official Cypress docs for such cases.
No resolution though.
Here are relevant lines where the error occurs:
inside of Dockerfile:
COPY package.json /usr/src/app/package.json
COPY package-lock.json /usr/src/app/package-lock.json
RUN npm ci
inside the test runner:
docker-compose -f docker-compose-prod.yml up -d --build
./node_modules/.bin/cypress run --config baseUrl=http://localhost
inside the package.json:
{
  "name": "flask-on-docker",
  "dependencies": {
    "cypress": "^4.8.0"
  }
}
Can anyone point me in the right direction?
You are probably running npm install and cypress run in two different stages. In that case the Cypress cache is not persisted, so it is recommended to set the CYPRESS_CACHE_FOLDER option both for the install and for cypress run/open. The commands would look like this:
CYPRESS_CACHE_FOLDER=./tmp/Cypress yarn install
CYPRESS_CACHE_FOLDER=./tmp/Cypress npx cypress run [--params]
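A minimal sketch of how that could look in a .gitlab-ci.yml, assuming the cache folder is kept inside the project so GitLab's cache can persist it (the job name and cache key are illustrative):

cypress-test:
  image: cypress/browsers:latest
  variables:
    # keep the Cypress binary inside the project so it can be cached
    CYPRESS_CACHE_FOLDER: $CI_PROJECT_DIR/.cache/Cypress
  cache:
    key: cypress-deps
    paths:
      - node_modules
      - .cache/Cypress
  script:
    - npm ci
    - npx cypress run --config baseUrl=http://localhost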
This helped me (Windows):
.\node_modules\.bin\cypress.cmd install --force
Or if you're using a UNIX system:
./node_modules/.bin/cypress install --force
https://newbedev.com/the-cypress-npm-package-is-installed-but-the-cypress-binary-is-missing-591-code-example
Running yarn cypress install --force before the tests worked for me.
I had the same problem. I ran this command to make the jenkins user the owner of my Cypress project folder, and after that everything was OK:
sudo chown -R jenkins: /your cypress project path/

Docker Container works locally but fails on server

I'm fairly new to Docker and I'm experimenting with an Angular CLI app. I managed to run it locally in my Docker container and it works great, but when I try running it on my server it fails.
Server is hosted on DigitalOcean:
512 MB Memory / 20 GB Disk / FRA1 - Ubuntu Docker 17.03.0-ce on 14.04
I used Docker Hub to transfer my container to the server.
When I check the container logs, I get this:
** NG Live Development Server is running on http://0.0.0.0:4200. **
63% building modules 469/527 modules 58 active ...s/@angular/compiler/src/assertions.jsKilled
npm info lifecycle angular-test@0.0.0~start: Failed to exec start script
npm ERR! Linux 4.4.0-64-generic
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "start"
npm ERR! node v6.10.3
npm ERR! npm v3.10.10
npm ERR! code ELIFECYCLE
npm ERR! angular-test@0.0.0 start: `ng serve --host 0.0.0.0`
npm ERR! Exit status 137
npm ERR!
npm ERR! Failed at the angular-test@0.0.0 start script 'ng serve --host 0.0.0.0'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the angular-test package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! ng serve --host 0.0.0.0
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs angular-test
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls angular-test
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /usr/src/app/npm-debug.log
Here is my Dockerfile:
# Create image based on the official Node 6 image from dockerhub
FROM node:6
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package.json /usr/src/app
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app
# Expose the port the app runs in
EXPOSE 4200
# Serve the app
CMD ["npm", "start"]
Why does it run locally but fail on the server? Am I missing some dependencies?
ng serve is an angular-cli command. I'm guessing you need to install it globally in your Dockerfile if you want to start your server like that on DigitalOcean:
RUN npm i -g angular-cli
I think it would be more typical to simply run the app with a plain Node server in production, so your CMD would look more like this:
CMD ["node", "app.js"]
