Azure DevOps builds for Docker with npm installing from private feed have stopped working

I have a few Docker builds on Azure DevOps for React apps which include packages from a private npm feed also hosted on Azure DevOps. Recently the builds have started failing at the npm install command.
In order to authenticate the container to install from the private feed I've always used an .npmrc file. This is saved locally as .npmrc.docker and looks like this:
@<package-scope>:registry=https://<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/:username=<feed-name>
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/:_password=${NPM_TOKEN}
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/:email=npm requires email to be set but doesn't use the value
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/:username=<feed-name>
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/:_password=${NPM_TOKEN}
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/:email=npm requires email to be set but doesn't use the value
I define a scoped package source at the top and the rest is generated from Azure DevOps via its Connect to feed wizard. ${NPM_TOKEN} is my feed password which I pass into the docker build command as a build argument.
The part of my Dockerfile which uses this looks like this:
FROM node:alpine as build
ARG NPM_TOKEN
COPY ./.npmrc.docker /app/.npmrc
COPY ./package.json /app/package.json
WORKDIR /app
RUN npm install
RUN rm -f .npmrc
In my Azure DevOps build pipeline this has always worked. The Build image part of the pipeline feeds in this build arg from a variable like this - --build-arg NPM_TOKEN=$(ArtifactsNpmPat) - where ArtifactsNpmPat is a variable in my library.
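For reference, the command that step effectively runs looks something like this ($(ArtifactsNpmPat) is expanded by the pipeline before the command executes; the image name and tag here are placeholders):
docker build \
  --build-arg NPM_TOKEN=$(ArtifactsNpmPat) \
  -t my-react-app:$(Build.BuildId) \
  .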
Recently my builds have started failing. Initially I assumed my token had expired so I generated and stored a new one. Here's the error from the agent:
[error]The command '/bin/sh -c npm install' returned a non-zero code: 1
[error]The process '/usr/bin/docker' failed with exit code 1
Note that the same process continues to work locally. So I've no idea how to diagnose this. I did find this SO post which led me to update my Dockerfile to look like this:
FROM node:alpine as build
ARG NPM_TOKEN
COPY ./.npmrc.docker /app/.npmrc
COPY ./package.json /app/package.json
WORKDIR /app
RUN npm install -g vsts-npm-auth
RUN vsts-npm-auth -config .npmrc
RUN npm install
RUN rm -f .npmrc
However, a docker build now generates a pretty crazy error from the RUN vsts-npm-auth command.
/usr/local/bin/vsts-npm-auth: line 1: MZ�╚╝���#���: not found
/usr/local/bin/vsts-npm-auth: line 1: �ԞO���: not found
[... many more lines of binary garbage omitted ...]
/usr/local/bin/vsts-npm-auth: line 58: syntax error: unexpected word (expecting ")")
So I'm stuck. Has something changed in DevOps around authenticating with its private feeds? Not that I'm aware of, but like I say these builds just stopped working some time in October without me changing anything. Advice appreciated.

You are trying to run vsts-npm-auth, which is not compatible with the Linux platform. As explained in the Azure documentation, you only need an .npmrc file with credentials.

As answered by dawid debinski, vsts-npm-auth doesn't run on the Linux platform, as written here.
Check out this documentation, which shows how to authenticate your pipeline with the npm authenticate task, or use the YAML version.
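A minimal sketch of that YAML approach, assuming the .npmrc.docker file from the question sits at the repository root and already lists the feed URLs (the repository name is a placeholder):
steps:
- task: npmAuthenticate@0
  inputs:
    workingFile: .npmrc.docker   # injects credentials for the feeds listed in this file
- task: Docker@2
  inputs:
    command: build
    Dockerfile: Dockerfile
    buildContext: $(Build.SourcesDirectory)
    repository: my-react-app     # placeholder
With npmAuthenticate providing the credentials, the Dockerfile can COPY the credentialed file straight in and the NPM_TOKEN build argument is no longer needed.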

Related

Using JMeter plugins with justb4/jmeter Docker image results in error

Goal
I am using Docker to run JMeter in Azure Devops. I am trying to use Blazemeter's Parallel Controller, which is not native to JMeter. So, according to the justb4/jmeter image documentation, I used the following command to get the image going and run the JMeter test:
docker run --name jmetertest -i -v /home/vsts/work/1/s/plugins:/plugins -v $ROOTPATH:/test -w /test justb4/jmeter ${@:2}
Error
However, it produces the following error when handling the plugin (I know the plugin is what makes the difference, because the test runs fine without it):
cp: can't create '/test/lib/ext': No such file or directory
As far as I understand, this is an error produced when one of the parent directories of the directory you are trying to make does not exist. Is there something I am doing wrong, or is there actually something wrong with the image?
References
For reference, I will include links to the image documentation and the repository.
Image: https://hub.docker.com/r/justb4/jmeter
Repository: https://github.com/justb4/docker-jmeter
Looking into the Dockerfile:
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
Looking into entrypoint.sh
if [ -d /plugins ]
then
    for plugin in /plugins/*.jar; do
        cp $plugin $(pwd)/lib/ext
    done;
fi
It basically copies the plugins from the /plugins folder (if it is present) into the lib/ext folder relative to the current working directory.
I don't know why you added the -w /test stanza to your command line, but it explicitly "tells" the container that the working directory is /test, not /opt/apache-jmeter-xxxx, which is why the script fails to copy the files.
In general I don't think that the approach is very valid because:
In Azure DevOps you won't have your "local" folder (unless you want to put the plugin binaries under version control)
Some JMeter Plugins have other .jars as dependencies, so when you're installing a plugin you should:
put the plugin itself under /lib/ext folder of your JMeter installation
put the plugin dependencies under /lib folder of your JMeter installation
So I would recommend amending the Dockerfile to download the JMeter Plugins Manager and install the plugin(s) you need from the command line.
Something like:
RUN wget https://jmeter-plugins.org/get/ -O /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/jmeter-plugins-manager.jar
RUN wget https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2/cmdrunner-2.2.jar -P /opt/apache-jmeter-${JMETER_VERSION}/lib/
RUN java -cp /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/jmeter-plugins-manager.jar org.jmeterplugins.repository.PluginManagerCMDInstaller
RUN /opt/apache-jmeter-${JMETER_VERSION}/bin/PluginsManagerCMD.sh install bzm-parallel
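Assuming those RUN lines are added to a Dockerfile based on the justb4/jmeter image, usage would then look something like this (the image tag and test plan path are placeholders):
docker build -t jmeter-parallel .
docker run -i -v $ROOTPATH:/test jmeter-parallel -n -t /test/plan.jmx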

Can I use WORKDIR in my Dockerfile with Github Actions? (Also, resolving Jest "No tests found")

I am switching from one CI provider to Github Actions and found that I had a step that runs docker run <temp_image> npm test -- --coverage. Something seemed to be altering the way my Jest tests were run compared to my previous CI, and I would receive the error:
No tests found, exiting with code 1
Run with `--passWithNoTests` to exit with code 0
In /app
18 files checked.
testMatch: /**/?(*.)+(spec|test).[jt]s?(x) - 1 match
testPathIgnorePatterns: /.next/, /node_modules/, /testconfig/ - 16 matches
testRegex: - 0 matches
Pattern: - 0 matches
That one testMatch would run correctly in my previous CI solution, using the same command.
Some people with this error were accidentally ignoring their test path: Jest No Tests found
I tried a bunch of different approaches -- the main hunch being that my tests were being run in the incorrect directory. Using my shotgun approach I tried:
Specifying <rootDir> for testMatch and testPathIgnorePatterns
Removing testPathIgnorePatterns altogether
Specifying the --config path, also --no-cache, options for Jest
Specifying the -w working directory on docker run options
And a multi-command approach with docker run and /bin/sh (see the sketch after this list)
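For context, those invocations looked roughly like this (<temp_image> stands in for the temporary image built earlier in the workflow):
docker run <temp_image> npm test -- --coverage
docker run -w /app <temp_image> npm test -- --coverage
docker run <temp_image> /bin/sh -c "cd /app && npm test"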
Ultimately, I found this line in Github Actions Docs: Dockerfile Instructions And Overrides
GitHub sets the working directory path in the GITHUB_WORKSPACE environment variable. It's recommended to not use the WORKDIR instruction in your Dockerfile.
Removing the WORKDIR instruction in my Dockerfile fixes my error
But is there a way around having to remove WORKDIR from my Dockerfile? Using WORKDIR seems to be a Docker best practice, and I would prefer to follow Docker guidelines rather than Github Actions'.
Thank you for your time!

Npm install of aurelia project in docker container gets stuck/hangs on jenkins build server

Problem
Most of my Jenkins builds get stuck at npm install. The issue is not reproducible locally, which makes it hard to narrow down. The build server would just hang endlessly at a "random" package until you stopped it manually.
16:33:55 npm http fetch GET 200 https://registry.npmjs.org/ws/-/ws-6.2.1.tgz 737ms
Analysis
The frontend is developed with Aurelia and is part of a monorepo that is managed by Docker. This is my only project that uses Aurelia CLI so I thought I could find the problem there - but without any results.
I've already tried to analyze the issue by running npm install --verbose but didn't gain any additional valuable information. It wasn't a specific package that led to the problem, nor was it a noticeable timeout.
# Dockerfile
FROM node:12.13.0 as builder
WORKDIR /web
COPY web .
RUN pwd
RUN npm install --verbose
RUN npm run build
FROM nginx:mainline-alpine
COPY --from=builder /web/dist /usr/share/nginx/html
COPY html/index.html /usr/share/nginx/html/index2.html
COPY nginx.conf /etc/nginx/nginx.conf
After investigating the problem for a long time, I discovered the newly introduced npm ci command and used it instead of npm install, which solved the problem. Apparently installing a project from a clean state is a good idea ;-)
This command is similar to npm-install, except it's meant to be used in automated environments such as test platforms, continuous integration, and deployment – or any situation where you want to make sure you're doing a clean install of your dependencies. It can be significantly faster than a regular npm install by skipping certain user-oriented features. It is also more strict than a regular install, which can help catch errors or inconsistencies caused by the incrementally-installed local environments of most npm users.
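Assuming a package-lock.json is committed alongside package.json, the builder stage above simply becomes:
FROM node:12.13.0 as builder
WORKDIR /web
COPY web .
RUN npm ci
RUN npm run build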

Docker and trying to build an image using Azure Pipelines

Hopefully someone can help me see the wood for the trees as they say!
I am no Linux expert and therefore I am probably missing something very obvious.
I have a dockerfile which contains the following:
FROM node:9.8.0-alpine as node-webapi
EXPOSE 3000
LABEL authors="David Sheardown"
COPY ["package.json", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . /home/vsts/work/1/s/
CMD ["node", "index.js"]
I then have an Azure pipeline setup as the following image shows:
My issue seems to be the build process cannot find the dockerfile itself:
##[error]Unhandled: No Dockerfile matching /home/vsts/work/1/s/**/Dockerfile was found.
Again, apologies in advance for my lack of Linux knowledge.. there is something silly I have done or not done ;)
P.S: I forgot to mention in Azure Pipelines I am using "Hosted Linux Preview"
-- UPDATE --
This is the get sources stage:
I would recommend adding the exact path to where the Dockerfile resides in your repository:
Dockerfile: subpath/Dockerfile
You're misusing this absolute path, both within the dockerfile and in the docker build task:
/home/vsts/work/1/s/
That is a path that exists on the build agent (not within the dockerfile) - but it may or may not exist on any given pipeline run. If the agent happens to use work directory 2, or 3, or any other number, then your path will be invalid. If you want to run this pipeline on a different type of agent, then your path will be invalid.
If you want to use a dockerfile in your checked out code, then you should do so by using a relative path (based on the root of your code repository), for example:
buildinfo/docker/Dockerfile
Note: that was just an example, to show the kind of path you should use; here you should be using the actual relative path in your actual code repo.
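If the pipeline is defined in YAML, that relative path goes straight into the Docker task; a minimal sketch (buildinfo/docker/Dockerfile is just the illustrative path from above):
- task: Docker@2
  inputs:
    command: build
    Dockerfile: buildinfo/docker/Dockerfile
    buildContext: $(Build.SourcesDirectory)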

Setup Docker Jenkins with default plugins

I want to create a Jenkins-based image with some plugins installed, as well as npm. To do so I have the following Dockerfile:
FROM jenkins:2.60.3
RUN install-plugins.sh bitbucket
USER root
RUN apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y nodejs
RUN npm --version
USER jenkins
That works fine; however, when I run the image I have two problems:
It seems that the plugins I tried to install manually didn't get persisted for some reason.
I get prompted with the list of plugins to install, but I don't want to install anything else.
Am I missing anything configuring the Dockerfile or is it that what I want to achieve is simply not possible?
Without seeing the contents of install-plugins.sh, I can't comment as to why the plugins aren't persisting. It is most likely caused by an incorrect installation destination; persistence shouldn't be an issue at this stage, since the plugin installation is built into the image itself.
As for the latter issue, you should be able to skip the installation wizard altogether by adding the line ENV JAVA_OPTS=-Djenkins.install.runSetupWizard=false to your Dockerfile. Please note that this can be a security risk if the Jenkins image is exposed to the world at large, since this option disables the need for authentication.
EDIT: The default plugin directory for the Docker image is /var/jenkins_home/plugins
EDIT 2: According to the README on the Jenkins Docker repo, adding the line RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state should accomplish the same thing
Things have changed since 2017, when the last answer was posted, and it no longer works. The current way is shown in the following Dockerfile snippet:
# Prevent setup wizard from running.
# WARNING: Jenkins will start with security disabled, without any password.
ENV JENKINS_OPTS="-Djenkins.install.runSetupWizard=false"
# plugins.txt must contain the list of plugins to be installed
# (One plugin per line, e.g. sidebar-link:1.11.0)
COPY plugins.txt /tmp/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /tmp/plugins.txt
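where plugins.txt for the bitbucket plugin from the question could look like this (versions are only examples):
bitbucket:latest
sidebar-link:1.11.0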
