Npm install of aurelia project in docker container gets stuck/hangs on jenkins build server - docker

Problem
Most of my Jenkins builds get stuck at npm install. The issue is not reproducible locally, which makes it hard to narrow down. The build server would just hang endlessly at a "random" package until you manually stopped it.
16:33:55 npm http fetch GET 200 https://registry.npmjs.org/ws/-/ws-6.2.1.tgz 737ms
Analysis
The frontend is developed with Aurelia and is part of a monorepo that is managed by Docker. This is my only project that uses the Aurelia CLI, so I thought I could find the problem there - but without any result.
I've already tried to analyze the issue by executing npm install --verbose but didn't gain any additional valuable information. It wasn't a specific package that led to the problem, nor was there a noticeable timeout.
# Dockerfile
FROM node:12.13.0 as builder
WORKDIR /web
COPY web .
RUN pwd
RUN npm install --verbose
RUN npm run build
FROM nginx:mainline-alpine
COPY --from=builder /web/dist /usr/share/nginx/html
COPY html/index.html /usr/share/nginx/html/index2.html
COPY nginx.conf /etc/nginx/nginx.conf

After investigating the problem for a long time, I discovered the newly introduced npm ci command and used it instead of npm install, which solved the problem. Fortunately, installing a project from a clean state is a good idea anyway ;-)
This command is similar to npm-install, except it’s meant to be used in automated environments such as test platforms, continuous integration, and deployment – or any situation where you want to make sure you’re doing a clean install of your dependencies. It can be significantly faster than a regular npm install by skipping certain user-oriented features. It is also more strict than a regular install, which can help catch errors or inconsistencies caused by the incrementally-installed local environments of most npm users.
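Concretely, the fix boiled down to swapping the install command in the builder stage. A sketch of the changed stage (note that npm ci requires a package-lock.json to be present in the repository):
# Dockerfile (builder stage, npm ci instead of npm install)
FROM node:12.13.0 as builder
WORKDIR /web
COPY web .
RUN npm ci
RUN npm run build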

Related

Bundle install and yarn install inside entry script - Docker

I see that bundle install and yarn install are usually done in Dockerfile as:
RUN bundle install && yarn install
Which means that if I modify Gemfile or yarn.lock, I need to rebuild the image. I know that there is layer caching and that docker build will not rebuild any layers other than the bundle install && yarn install layer. But it means I have to run docker-compose up -d --build.
But I was wondering if it is ok to put these commands inside an entry script of docker-compose or in command as:
command: bundle install && yarn install && rails s
In this way, I believe, whenever I do docker-compose up -d, bundle install and yarn install will be executed without having to build the image.
Not sure if it has any advantages over the conventional bundle install in the Dockerfile, except not having to append --build to docker-compose up. Am I correct that if I do this, bundle install and yarn install will get executed even when there are no changes to the Gemfile or Yarn files? I guess this is one of the bad sides.
Please correct me if this is not the ideal way to go.
New to docker world.
It wastes several minutes of time and uses up network bandwidth every time you start your application. When you're doing local development, it'd be the equivalent of doing this, every time you run the application:
rm -rf vendor node_modules
bundle install # from scratch
yarn install # from scratch
bundle exec rails s
A core part of Docker is rebuilding images (in the same way that languages like Go, Java, Typescript, etc. have a "build" phase). Trying to avoid image rebuilds isn't usually advisable. With a well-written Dockerfile, and particularly for an interpreted language, running docker build should be fairly efficient.
The one important trick is to separately copy the files that specify dependencies, and the rest of your application. As soon as a Dockerfile COPY instruction encounters a file that's changed it will disable layer caching for the rest of the application. Since dependencies change relatively infrequently, a sequence that first copies the dependency file, then installs the dependencies, then copies the application can jump straight to the last step if the dependency file hasn't changed.
# Install gems first; this layer is reused unless Gemfile or Gemfile.lock changes
COPY Gemfile Gemfile.lock ./
RUN bundle install
# Then install Node packages; reused unless package.json or yarn.lock changes
COPY package.json yarn.lock ./
RUN yarn install
# Finally copy the rest of the application
COPY . ./
(Make sure to include the Bundler vendor directory and the node_modules directory in a .dockerignore file so the last COPY step doesn't overwrite what previously got installed.)
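A matching .dockerignore could be as small as this (the vendor path depends on your Bundler configuration):
# .dockerignore
vendor/bundle
node_modules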
This question is opinion based. As you already found out yourself, it is common practice to install dependencies (bundle, yarn, others) during the image build process, not the image run process.
The rationale is that you run more times than you build, and you want the run operation to start quickly.
In the same way that you do apt install... or yum install... in the build stage, you should normally do bundle install in the build stage as well.
That said, if it makes sense to you to bundle install as a part of the entrypoint, that is your choice. I suspect that after you do it, you will see that it is less common for a reason.
Another note about docker layers: if the Gemfile changes, not only does the layer that refers to it change, but all subsequent layers are invalidated as well. For that reason, it is common to separate copying the dependency manifest (Gemfile.*) from copying the app, like this:
# Pre-install gems
COPY Gemfile* ./
RUN gem install bundler && \
bundle install --jobs=3 --retry=3
# Copy the rest of the app
COPY . .
So this way, if your app files change, but not the dependencies, the build will be faster.

Azure DevOps builds for Docker with npm installing from private feed have stopped working

I have a few Docker builds on Azure DevOps for React apps which include packages from a private npm feed also hosted on Azure DevOps. Recently the builds have started failing at the npm install command.
In order to authenticate the container to install from the private feed I've always used an .npmrc file. This is saved locally as .npmrc.docker and looks like this:
#<package-scope>:registry=https://<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/:username=<feed-name>
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/:_password=${NPM_TOKEN}
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/:email=npm requires email to be set but doesn't use the value
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/:username=<feed-name>
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/:_password=${NPM_TOKEN}
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/:email=npm requires email to be set but doesn't use the value
I define a scoped package source at the top and the rest is generated from Azure DevOps via its Connect to feed wizard. ${NPM_TOKEN} is my feed password which I pass into the docker build command as a build argument.
The part of my Dockerfile which uses this looks like this:
FROM node:alpine as build
ARG NPM_TOKEN
COPY ./.npmrc.docker /app/.npmrc
COPY ./package.json /app/package.json
WORKDIR /app
RUN npm install
RUN rm -f .npmrc
In my Azure DevOps build pipeline this has always worked. The Build image part of the pipeline feeds in this build arg from a variable like this - --build-arg NPM_TOKEN=$(ArtifactsNpmPat) - where ArtifactsNpmPat is a variable in my library.
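Spelled out, the arguments the Docker task ends up passing are roughly equivalent to this (the image name is illustrative):
docker build --build-arg NPM_TOKEN=$(ArtifactsNpmPat) -t my-react-app .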
Recently my builds have started failing. Initially I assumed my token had expired so I generated and stored a new one. Here's the error from the agent:
[error]The command '/bin/sh -c npm install' returned a non-zero code: 1
[error]The process '/usr/bin/docker' failed with exit code 1
Note that the same process continues to work locally. So I've no idea how to diagnose this. I did find this SO post which led me to update my Dockerfile to look like this:
FROM node:alpine as build
ARG NPM_TOKEN
COPY ./.npmrc.docker /app/.npmrc
COPY ./package.json /app/package.json
WORKDIR /app
RUN npm install -g vsts-npm-auth
RUN vsts-npm-auth -config .npmrc
RUN npm install
RUN rm -f .npmrc
However, a docker build now generates a pretty crazy error from the RUN vsts-npm-auth command.
/usr/local/bin/vsts-npm-auth: line 1: MZ...: not found
/usr/local/bin/vsts-npm-auth: line 57: [dozens of lines of unprintable binary output]: not found
/usr/local/bin/vsts-npm-auth: line 58: syntax error: unexpected word (expecting ")")
So I'm stuck. Has something changed in DevOps around authenticating with its private feeds? Not that I'm aware of, but like I say these builds just stopped working some time in October without me changing anything. Advice appreciated.
You are trying to run vsts-npm-auth, which is not compatible with the Linux platform. As explained in the Azure documentation, on Linux you only need the .npmrc file with credentials.
As answered by dawid debinski, vsts-npm-auth doesn't run on the Linux platform, as written here.
Check out this documentation, which shows how to authenticate your pipeline with the npm authenticate task, or use the YAML version.
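For the YAML route, a minimal sketch could look like the one below. It assumes the npmAuthenticate task is pointed at the checked-in .npmrc.docker before the image is built, so the token no longer has to be passed in as a build argument; the build step itself is illustrative.
# azure-pipelines.yml (sketch)
steps:
  - task: npmAuthenticate@0   # injects credentials for the feeds listed in the file
    inputs:
      workingFile: .npmrc.docker
  - script: docker build -t my-react-app .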

Docker-compose for local development, installing dependencies

I am trying to set up a docker-compose architecture for local development and production, and I can't figure out at which point in the containers' life it's best to install library dependencies. At the same time, I am not sure whether these should be placed in the container or in an external volume.
All my code is mounted in external volumes, so that changes are picked up immediately without rebuilding the containers, but I am not sure about libraries that need to be installed by pip (I am running a Python backend) and npm/yarn (for the webpack front-end).
Placing requirements.txt and package.json into the containers and running pip install and yarn install in the container build process means that I have to rebuild the container any time dependencies change - that is too much overhead.
Putting them in an external volume and running pip install and yarn install as part of the command of each container when it is started seems to solve the issue.
The build process of each container then contains only platform dependencies (e.g. installing Python, webpack or other platform tools), while libraries are installed after startup (with the CMD directive).
Is this the correct approach? I have seen a lot of examples doing exactly the opposite and running npm install in the build process of the container - but I don't see any advantage to that, am I missing something?
Installing dependencies is usually part of the build process. Mounting code is a good trick when developing in order to get changes reflected directly.
Concerning adding requirements.txt or package.json: installing dependencies takes time, and for that you need to take advantage of Docker layer caching. In particular, you want to avoid cache invalidation.
For pip I suggest the following during the development phase: for dependencies that you are unlikely to change, install these in a separate RUN instruction. Your Dockerfile will look something like this:
FROM ..
RUN pip install package1 package2 package3 ...
ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt
...
Keep only dependencies that might be changed in requirements.txt. Once you are done developing, add the packages back to the requirements.txt and build using the requirements file.
A similar approach would be adding two requirements files, and at the end combining them.
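A sketch of that two-file variant (the file names are illustrative):
# requirements-base.txt: packages that rarely change
# requirements-dev.txt: packages still in flux
COPY requirements-base.txt ./
RUN pip install -r requirements-base.txt
COPY requirements-dev.txt ./
RUN pip install -r requirements-dev.txt
# once development settles down, combine them again:
#   cat requirements-base.txt requirements-dev.txt > requirements.txt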

Setup Docker Jenkins with default plugins

I want to create a Jenkins based image to have some plugins installed as well as npm. To do so I have the following Dockerfile:
FROM jenkins:2.60.3
RUN install-plugins.sh bitbucket
USER root
RUN apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y nodejs
RUN npm --version
USER jenkins
That works fine; however, when I run the image I have two problems:
It seems that the plugins I tried to install manually didn't get persisted for some reason.
I get prompted with the list of plugins I want to install, but I don't want to install anything else.
Am I missing anything in the Dockerfile configuration, or is what I want to achieve simply not possible?
Without seeing the contents of install-plugins.sh, I can't comment as to why the plugins aren't persisting. It is most likely caused by an incorrect installation destination; persistence shouldn't be an issue at this stage, since the plugin installation is built into the image itself.
As for the latter issue, you should be able to skip the installation wizard altogether by adding the line ENV JAVA_OPTS=-Djenkins.install.runSetupWizard=false to your Dockerfile. Please note that this can be a security risk if the Jenkins image is exposed to the world at large, since this option disables the need for authentication.
EDIT: The default plugin directory for the Docker image is /var/jenkins_home/plugins
EDIT 2: According to the README on the Jenkins Docker repo, adding the line RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state should accomplish the same thing.
Things have changed since 2017, when the previous answer was posted, and that approach no longer works. The current way is shown in the following Dockerfile snippet:
# Prevent setup wizard from running.
# WARNING: Jenkins will start with security disabled, without any password.
ENV JENKINS_OPTS="-Djenkins.install.runSetupWizard=false"
# plugins.txt must contain the list of plugins to be installed
# (One plugin per line, e.g. sidebar-link:1.11.0)
COPY plugins.txt /tmp/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /tmp/plugins.txt
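For the question above, plugins.txt could be as small as this (the version pin is optional; install-plugins.sh resolves the latest version when it is omitted):
plugins.txt:
bitbucket
sidebar-link:1.11.0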

Development dependencies in Dockerfile or separate Dockerfiles for production and testing

I'm not sure if I should create different Dockerfiles for my Node.js app: one for production without the development dependencies and one for testing with the development dependencies included.
Or just one file, which is basically the development Dockerfile.dev. The main difference between the two files is the npm install command:
Production:
FROM ...
...
RUN npm install --quiet --production
...
CMD ...
Development/Test:
FROM ...
...
RUN npm install
...
CMD ...
The question arises because I want to be able to run my tests inside the container via the docker run command. Therefore I need the test dependencies (typically dev dependencies for me).
It seems a little odd to put dependencies that aren't needed in production into the image. On the other hand, creating/maintaining a second Dockerfile.dev with just minor differences doesn't seem right either. So what is a good practice for this kind of problem?
No, you don't need to have different Dockerfiles and in fact you should avoid that.
The goal of Docker is to ship your app as an immutable, well-tested artifact (a docker image) which is identical for production, test and even dev.
Why? Because if you build different artifacts for test and production, how can you guarantee that what you have already tested also works in production? You can't, because they are two different things.
Given all that, if by test you mean unit tests, then you can mount your source code inside a docker container and run the tests without building any docker images, and that's fine. Remember, you can build an image just for tests, but that is terribly slow and makes development quite difficult, which is not good at all. Then, if your tests pass, you can build your app container safely.
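A sketch of that mount-and-run approach for unit tests (the image tag and commands are assumptions, adjust them to your project):
docker run --rm -v "$PWD":/app -w /app node:12 sh -c "npm install && npm test"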
But if you mean acceptance tests that actually need to run against your running application, then you should create one image for your app (only one), run the tests from another container (mount the test source code, for example) and point them at the app container. This obviously means that the npm install you run for your app is different from the npm install you run for your tests.
I hope this gives you some overview.
Well, then you'll have to support several Dockerfiles that are almost identical. Instead I recommend using the Node.js production profile. And another recommendation regarding
RUN npm install --quiet --production
It is better to create a separate .sh file and do something like this instead:
ADD ./scripts/run.sh /run.sh
RUN chmod +x /*.sh
And also think about starting to use Gulp.
UPD #1
By default npm install installs devDependencies. To get around this, use npm install --production OR set the NODE_ENV environment variable to production.
Putting the script line in a separate file is good practice so that you don't have to change the Dockerfile often. If you need changes next time, you'll only have to update the script file and you're done. In the future you might also have some additional work to put there.
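A minimal sketch combining both options from the note above:
# Dockerfile snippet (sketch)
ENV NODE_ENV=production
# --production (or NODE_ENV=production on its own) makes npm skip devDependencies
RUN npm install --production --quiet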

Resources