I have a frustrating issue: when "npm install" is executed inside a Jenkins Groovy pipeline using the NodeJS plugin, the process hangs and eventually fails with the following error:
npm install --ddd ng-cli
npm info it worked if it ends with ok
npm verb cli [ '/var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/nodejs893-v2/bin/node',
npm verb cli '/var/jenkins_home/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/nodejs893-v2/bin/npm',
npm verb cli 'install',
npm verb cli '--ddd',
npm verb cli 'ng-cli' ]
npm info using npm@5.5.1
npm info using node@v8.9.3
npm verb npm-session e522ad0a36f1c038
npm sill install loadCurrentTree
npm sill install readLocalPackageData
npm http fetch GET 503 https://registry.npmjs.org/ng-cli 70252ms attempt #3
npm sill fetchPackageMetaData error for ng-cli@latest 503 Service Unavailable: ng-cli@latest
npm verb stack Error: 503 Service Unavailable: ng-cli@latest
When the command is executed directly on the EC2 instance as the Jenkins user, the package installs without issue.
Likewise, when the command is executed inside the Jenkins Docker container as the Jenkins user, using the same Node installation, the package installs without issue.
The Docker instance is not limited by CPU or RAM.
The setup is Jenkins v2.138.1 running inside a Docker container, which in turn is hosted on an EC2 instance running Amazon Linux v2018.03. Jenkins home is mounted as an EFS volume. The JVM is Java v1.8.0_181. NPM is v5.5.1.
Any pointers would be much appreciated.
Reply to first suggestion
Yes, there is direct internet connectivity without any proxy. If a single package is installed, such as
npm install ng-cli
the install works without issue.
The issue consisted of two parts:
First, the EFS mount point directory (/var/jenkins_home) required permissions of 777; the permission does not need to be applied recursively.
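For reference, the permissions fix was just the following (path as in the question; note the deliberate absence of -R):
sudo chmod 777 /var/jenkins_home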
Second, the new EFS disk had content migrated from the old Jenkins EFS, and this was also contributing to the issue. The fix was to not transfer any content from the old EFS to the new EFS via the backup.tar.gz feature. The new Jenkins is working as expected with npm install.
Dockerfile pulled from https://hub.docker.com/r/jenkins/jenkins/
Related
This is confusing so I apologize if I don't word this sufficiently well.
Essentially, I'm leveraging npm's --force flag to bypass a conflicting peer-dependency error with npm@8. Subsequent npm installs of the dependencies complete without any errors. When attempting to install dependencies via a Docker build, however, the original error returns.
So, originally:
I encounter this error:
npm ERR! ERESOLVE could not resolve
...
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
I bypass it via npm install --force.
Subsequent npm installs then work without issue in new local environments (e.g. clone into a new dir, run npm install).
However, attempting to npm install or npm ci (npm ci requires that a lockfile already exists) in a Docker build continues to throw the original error:
npm ERR! ERESOLVE could not resolve
...
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
Which, to me, suggests the lockfile isn't being respected. But we know it exists because otherwise npm ci would error.
Does anyone have an idea as to why this might be the case?
Dockerfile I'm testing with:
# Dockerfile
# ==== CONFIGURE =====
# Use a Node 16 base image
FROM node:16-alpine
# Set the working directory to /app inside the container
WORKDIR /app
# Copy the package manifests (npm ci needs package.json as well as the lockfile)
COPY package.json package-lock.json ./
RUN echo $(ls)
# ==== BUILD =====
# Install dependencies (npm ci makes sure the exact versions in the lockfile get installed)
RUN npm ci
# Copy the rest of the app source and build it
COPY . .
RUN npm run build
# ==== RUN =======
# Set the env to "production"
ENV NODE_ENV production
# Expose the port on which the app will be running (3000 is the default that `serve` uses)
EXPOSE 3000
# Start the app
CMD [ "npx", "serve", "build" ]
Local npm is v8.1; the npm inside Docker is v8.19. It seems a breaking change was introduced at some point between those two versions.
From the official npm docs:
NOTE: If you create your package-lock.json file by running npm install with flags that can affect the shape of your dependency tree, such as --legacy-peer-deps or --install-links, you must provide the same flags to npm ci or you are likely to encounter errors. An easy way to do this is to run, for example, npm config set legacy-peer-deps=true --location=project and commit the .npmrc file to your repo.
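A minimal sketch of the docs' suggestion, applied to this case (assuming the original workaround flag was --legacy-peer-deps; with --force the idea is the same):
npm config set legacy-peer-deps=true --location=project
cat .npmrc   # -> legacy-peer-deps=true
git add .npmrc   # commit it so npm ci inside the Docker build sees the same flag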
I'm trying to define and build a Docker container via a Dockerfile that pulls in an npm dependency privately hosted in GCP Artifact Registry.
The .npmrc file works for publishing the dependency, but when the Docker container runs npm install it struggles to access the private npm registry.
How do I grant a Dockerfile permission to do an npm install where one of the dependencies is hosted on GCP's Artifact Registry?
Inject the credentials? Copy them in? (That doesn't seem safe.)
I get an error like this:
---> Running in 058536ed8d33
npm ERR! code E403
npm ERR! 403 403 Forbidden - GET https://us-central1-npm.pkg.dev/<.....>
Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/<....>/locations/us-central1/repositories/<...>" (or it may not exist)
I've got a mono-repo setup with lerna and yarn workspaces.
To build the docker images for servers in the project, I'm using verdaccio as a GitLab service.
In the build phase I want to publish the packages, and when running the server I'm planning to install them from the Verdaccio registry.
With GitLab CI services we cannot pass config files to the service (AFAIK).
Currently, the error I'm getting is an auth error, since I'm trying to publish a scoped package.
Step 14/24 : RUN lerna publish from-package --yes --no-git-reset
---> Running in 9c2db92f90f0
info cli using local version of lerna
lerna notice cli v4.0.0
lerna notice FYI Unable to verify working tree, proceed at your own risk
lerna WARN Unable to determine published version, assuming "@scope/package-1" unpublished.
lerna WARN Unable to determine published version, assuming "@scope/package-2" unpublished.
Found 2 packages to publish:
- @scope/package-1 => 1.0.0
- @scope/package-2 => 1.0.0
lerna info auto-confirmed
lerna info publish Publishing packages to npm...
lerna notice Skipping all user and access validation due to third-party registry
lerna notice Make sure you're authenticated properly ¯\_(ツ)_/¯
lerna notice FYI Unable to set temporary gitHead property, it will be missing from registry metadata
lerna info lifecycle root@undefined~prepare: root@undefined
lerna WARN lifecycle root@undefined~prepare: cannot run in wd root@undefined husky install (wd=/home/app)
lerna http fetch PUT 401 http://verdaccio:4873/@scope%2fpackage-1 26ms
lerna ERR! E401 authorization required to publish package @scope/package-1
The command '/bin/sh -c lerna publish from-package --yes --no-git-reset' returned a non-zero code: 1
In the CI environment I cannot run npm adduser since it's an interactive prompt. I tried a few ways to bypass it, but they don't seem to work.
Something like:
RUN npm adduser <<! \
auser \
'1234' \
user@domain.tld \
!
Can I create a user inside a Dockerfile or can I bypass auth for scoped packages?
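For what it's worth, one non-interactive pattern (a sketch, not verified against this exact Verdaccio setup; the user, password, and registry host are the values from the question, and it assumes curl and jq are available) is to call the registry's adduser HTTP endpoint directly and write the returned token into .npmrc:
TOKEN=$(curl -s -X PUT http://verdaccio:4873/-/user/org.couchdb.user:auser \
  -H 'Content-Type: application/json' \
  -d '{"name": "auser", "password": "1234"}' | jq -r .token)
echo "//verdaccio:4873/:_authToken=${TOKEN}" >> ~/.npmrc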
I'm trying to download and install Cypress within a GitLab CI runner and am getting this error output:
The cypress npm package is installed, but the Cypress binary is missing.
We expected the binary to be installed here: /root/.cache/Cypress/4.8.0/Cypress/Cypress
Reasons it may be missing:
- You're caching 'node_modules' but are not caching this path: /root/.cache/Cypress
- You ran 'npm install' at an earlier build step but did not persist: /root/.cache/Cypress
Properly caching the binary will fix this error and avoid downloading and unzipping Cypress.
Alternatively, you can run 'cypress install' to download the binary again.
I ran the suggested command cypress install but it didn't help.
Next, it says: You're caching 'node_modules' but are not caching this path: /root/.cache/Cypress.
I don't understand how you can cache the modules yet leave out the path to them.
Next is: You ran 'npm install' at an earlier build step but did not persist. I did have npm install in earlier builds, so I replaced it with npm ci, as recommended in the official Cypress docs for such cases.
No resolution, though.
Here are relevant lines where the error occurs:
inside of Dockerfile:
COPY package.json /usr/src/app/package.json
COPY package-lock.json /usr/src/app/package-lock.json
RUN npm ci
inside the test runner:
docker-compose -f docker-compose-prod.yml up -d --build
./node_modules/.bin/cypress run --config baseUrl=http://localhost
inside the package.json:
{
  "name": "flask-on-docker",
  "dependencies": {
    "cypress": "^4.8.0"
  }
}
Can anyone point me in the right direction?
You are probably running npm install and cypress run in two different stages. In that case, the Cypress cache cannot be persisted, so it is recommended to use the CYPRESS_CACHE_FOLDER option both while running the install and while running cypress run/open. The commands will look like this:
CYPRESS_CACHE_FOLDER=./tmp/Cypress yarn install
CYPRESS_CACHE_FOLDER=./tmp/Cypress npx cypress run [--params]
This helped me (Windows):
.\node_modules\.bin\cypress.cmd install --force
Or if you're using a UNIX system:
./node_modules/.bin/cypress install --force
https://newbedev.com/the-cypress-npm-package-is-installed-but-the-cypress-binary-is-missing-591-code-example
Running yarn cypress install --force before the tests worked for me.
I had the same problem.
I ran this command to make the jenkins user the owner of my Cypress project folder, and after that everything was OK:
sudo chown -R jenkins: /your cypress project path/
We have Jenkins running within ECS. We are using pipelines for our build and deploy process. The pipeline uses the docker plugin to pull an image which has some dependencies for testing etc, all our steps then occur within this docker container.
The issue we currently have is that our npm install takes about 8 minutes, and we would like to speed this up. As containers are torn down at the end of each build, the generated node_modules are disposed of. I've considered npm caching, but due to the nature of Docker this seemed irrelevant unless we pre-install the dependencies into the Docker image (which almost triples its size). Are there simple solutions that will improve our npm install speed?
You should be using package caching, but not by caching node_modules directly. Instead, mount the cache directories that your package installer uses, and your installs will be blazing fast. Docker makes that possible by letting you mount host directories into a container, which persist across builds.
For yarn, mount ~/.cache or ~/.cache/yarn
For npm, mount ~/.npm
docker run -it \
  -v ~/.npm:/.npm \
  -v ~/.cache:/.cache \
  -v /my-app:/my-app \
  -w /my-app \
  testing-image:1.0.0 \
  bash -c 'npm ci && npm test'
Note: I'm using npm ci here, which always deletes node_modules and reinstalls the exact versions from package-lock.json, so you get very consistent builds. (In yarn, the equivalent is yarn install --frozen-lockfile.)
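If the install happens during docker build rather than docker run, BuildKit's cache mounts give a similar effect (a sketch, assuming BuildKit is enabled via DOCKER_BUILDKIT=1):
# syntax=docker/dockerfile:1
RUN --mount=type=cache,target=/root/.npm npm ci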
You could set up an HTTP proxy and cache all dependencies (*)(**).
Then use --build-arg to set HTTP_PROXY variable:
docker build --build-arg HTTP_PROXY=http://<cache ip>:3128 .
*: This will not improve performance for dependencies that need to be compiled (i.e. C/C++ bindings)
**: I use a Squid container to share the cache configuration
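As an illustration of the Squid approach (the image name and port are assumptions, not from the answer):
docker run -d --name squid-cache -p 3128:3128 ubuntu/squid
docker build --build-arg HTTP_PROXY=http://<cache ip>:3128 .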
In my case it was a bunch of corporate software installed on my computer: apparently some antivirus was analyzing all the node_modules files from the container when I mounted the project folder from the host machine. What I did was avoid mounting node_modules locally, which immediately sped things up from 25 minutes to 5.
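A common way to avoid mounting node_modules (a sketch; the image and paths are illustrative) is to shadow it with an anonymous volume, so the host copy is never shared with the container:
docker run -v "$PWD":/app -v /app/node_modules my-image npm test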
I have explained what I did, with a possible implementation, here. I did not use package-lock.json but the npm ls command to check for changes in the node_modules folder, so that I could potentially skip the step of re-uploading the cached modules on the bind mount.
@bkucera's answer points you in the right direction with the bind mount. In general, the easiest option in a containerized environment is to create a volume storing the cached packages, as in the sketch below. These packages could be archived in a tarball, which is the most common option, or even compressed if necessary (files in a .tar are not compressed).
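A sketch of the tarball variant (paths are illustrative):
tar -cf npm-cache.tar -C ~/.npm .    # archive the package cache after a build
mkdir -p ~/.npm && tar -xf npm-cache.tar -C ~/.npm    # restore it before the next one
# add -z to both tar commands if you also want gzip compression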