I am using a .npmrc file to configure a private repo (font-awesome-pro). It works well without Docker, but with Docker, npm install fails:
npm ERR! code E401
npm ERR! 404 401 Unauthorized: @fortawesome/fontawesome-pro-light@https://npm.fontawesome.com/7D46BEC2-1565-40B5-B5FC-D40C724E60C6/@fortawesome/fontawesome-pro-light/-/fontawesome-pro-light-5.0.12.tgz
I have read the doc from NPM, "Docker and private packages", but I don't know how to apply it with docker-compose.yml, and I'm not sure passing variables is the solution.
Is it possible that the .npmrc file is not read during installation inside the Docker instance? Am I missing something?
Here is my docker-compose.yaml:
version: '2.1'
services:
  app:
    image: node:8.9.4
    # restart: always
    container_name: jc-vc
    environment:
      - APP_ENV=${JC_ENV:-dev}
      - HOST=0.0.0.0
      - BASE_URL=${JC_BASE_URL}
      - BROWSER_BASE_URL=${JC_BROWSER_BASE_URL}
    working_dir: /usr/src/app
    volumes:
      - ${DOCKER_PATH}/jc/vc/app:/usr/src/app
    command: npm install
    # command: npm run dev
    # command: npm run lintfix
    # command: npm run build
    # command: npm start
    expose:
      - 3000
  nginx:
    image: nginx
    logging:
      driver: none
    # restart: always
    volumes:
      - ${DOCKER_PATH}/jc/vc/nginx/www:/usr/share/nginx/html
      - ${DOCKER_PATH}/jc/vc/nginx/default.${JC_ENV:-dev}.conf:/etc/nginx/nginx.conf
      - ${DOCKER_PATH}/jc/vc/nginx/.htpasswd:/etc/nginx/.htpasswd
      - ${DOCKER_PATH}/jc/letsencrypt:/etc/letsencrypt
    container_name: jc-nginx-vc
    depends_on:
      - app
    ports:
      - ${PORT_80:-4020}:${PORT_80:-4020}
      - ${PORT_443:-4021}:${PORT_443:-4021}
and my .npmrc (with replaced token):
@fortawesome:registry=https://npm.fontawesome.com/
//npm.fontawesome.com/:_authToken=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX
The correct way to fix this, as documented in the link you reference, is to use arg variables in the Dockerfile. I think the bit you're missing is how to do this in Compose:
version: "3"
services:
myapp:
build:
context: "."
args:
NPM_TOKEN: "s3kr!t"
You need to reference this argument in your Dockerfile and create a .npmrc file in the root of your project:
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
I like to generate this in the Dockerfile to minimise the risk of exposure (but be aware: the token is still stored in the image's layers), so it would look something like this:
FROM node:current-buster-slim
ARG NPM_TOKEN
WORKDIR /app
COPY package.json /app/package.json
RUN echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > /app/.npmrc && \
    npm install && \
    rm -f /app/.npmrc
COPY . /app/
CMD npm start
You can then run docker-compose build myapp and get a good result. This solution still suffers from having the secret in your compose file and in the image's layers, but this is only a sketch for SO. In the real world you wouldn't put your secrets in your source files; realistically you'd replace the secret with a dynamic one that has a short time-to-live (TTL) and a single-use policy (and you'd probably want to use HashiCorp Vault to help with that).
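For example, one way to keep the literal token out of the compose file is to pass it through from the host environment (a minimal sketch; assumes NPM_TOKEN is exported in the shell that runs the build):

version: "3"
services:
  myapp:
    build:
      context: "."
      args:
        # Resolved from the host environment at build time, so the
        # secret never has to be written into source control
        NPM_TOKEN: "${NPM_TOKEN}"

You would then build with something like NPM_TOKEN=xxxxxxxx docker-compose build myapp.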
In the root directory of your project, create a custom .npmrc file with the following contents:
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
Now add these commands to your Dockerfile:
COPY .npmrc .npmrc
COPY package.json package.json
RUN npm install
RUN rm -f .npmrc
That should fix the issue; hope that helps.
package-lock.json needs to be regenerated with the new .npmrc file in place: delete package-lock.json, recreate it with npm install, then rebuild and redeploy the image.
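In other words, something like this (a sketch; assumes the image is built via docker-compose):

rm package-lock.json
npm install            # regenerates the lock file using the new .npmrc
docker-compose build   # rebuilds the image with the fresh lock file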
Related
I am new to Docker. I've built an application with Vue.js 2 that interacts with an external API, and I would like to run the application on Docker.
Here is my docker-compose.yml file:
version: '3'
services:
  ew_cp:
    image: vuejs_img
    container_name: ew_cp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - '8080:8080'
Here is my Dockerfile:
FROM node:14.17.0-alpine as develop-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
RUN yarn install
COPY . .
EXPOSE 8080
CMD ["node"]
Here is the command I run to build my image and container:
docker-compose up -d
The image and container build without error, but when I run the container it stops immediately, so the container is not running. Are the Dockerfile and compose files set up correctly?
First of all, you run both npm install and yarn install, which do the same thing using two different package managers. Secondly, you are using CMD ["node"], which does not start your Vue application, so there is no foreground job running and Docker shuts the container down.
For a Vue application you normally want to build the app into static assets and then run a simple HTTP server to serve the static content.
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy 'package.json' to install dependencies
COPY package*.json ./
# install dependencies
RUN npm install
# copy files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
Your docker-compose file could be as simple as:
version: "3.7"
services:
  vue-app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: vue-app
    restart: always
    ports:
      - "8080:8080"
    networks:
      - vue-network
networks:
  vue-network:
    driver: bridge
To run the service from docker-compose, use the command property in your docker-compose.yml:
services:
  vue-app:
    command: >
      sh -c "yarn serve"
I'm not sure about the problem, but using command: tail -f /dev/null in your docker-compose file will keep your container up so you can track the error within it and find its cause. You can do that by running docker exec -it <CONTAINER-NAME> bash and checking the error logs in your container.
version: '3'
services:
  ew_cp:
    image: vuejs_img
    container_name: ew_cp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    command: tail -f /dev/null
    ports:
      - '8080:8080'
In your Dockerfile you have to start your application, e.g. with npm run start or whatever script in your package.json you use to run the application.
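For example (a sketch; assumes package.json defines a "serve" script that runs a dev server, such as vue-cli-service serve):

FROM node:14.17.0-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
# Start the dev server as the container's long-running foreground process
# ("serve" is a hypothetical script name; use whatever your package.json defines)
CMD ["npm", "run", "serve"]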
I am running Docker on Windows 10, and when I run docker-compose up -d I get this error, but I don't know why.
npm WARN saveError ENOENT: no such file or directory, open '/var/www/package.json'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open '/var/www/package.json'
npm WARN www No description
npm WARN www No repository field.
npm WARN www No README data
npm WARN www No license field.
Here is my docker-compose.yaml file
version: '3'
services:
  # Nginx client app server
  nginx-client:
    container_name: nginx-client
    build:
      context: ./docker/nginx-client
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
    networks:
      - app-network
# Networks
networks:
  app-network:
    driver: bridge
And here is my Dockerfile
FROM node:12
WORKDIR /var/www
RUN npm install
CMD ["npm", "run", "serve"]
This is happening because you are first building an image out of your Dockerfile, which contains these commands:
WORKDIR /var/www
RUN npm install
But at build time this directory is empty; the bind mount only takes place after the container is created. You can read more about it in the docs, which state:
A service definition contains configuration that is applied to each container started for that service, much like passing command-line parameters to docker run. Likewise, network and volume definitions are analogous to docker network create and docker volume create.
If you need to have this file available at image build time, I'd suggest using the COPY command, like:
COPY ./client/package*.json ./
RUN npm install
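Putting it together, the Dockerfile could look like this (a sketch; assumes the build context in docker-compose.yml is changed from ./docker/nginx-client to the project root so that ./client is inside it):

FROM node:12
WORKDIR /var/www
# Bake the manifests into the image so npm install has something to work
# with at build time; the ./client bind mount only appears at run time
COPY ./client/package*.json ./
RUN npm install
CMD ["npm", "run", "serve"]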
Are there any ways to share data between containers? Here is my docker-compose file:
version: '3'
services:
  app_build_prod:
    container_name: 'app'
    build:
      context: ../
      dockerfile: docker/Dockerfile
      args:
        command: build:prod
  nginx:
    container_name: 'nginx'
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - app_build_prod
Dockerfile content is:
FROM node:10-alpine as builder
## Installing missing packages, fixing git self-signed certificate issue
RUN apk update && apk upgrade && \
    apk add --no-cache bash git openssh && \
    rm -rf /var/cache/apk/* && \
    git config --global http.sslVerify false
## Defining the app directory
WORKDIR /usr/app
## Copying files. Files listed in .dockerignore are omitted
COPY . .
## Keeping node_modules in a separate intermediate layer prevents unnecessary npm installs on each build
RUN npm ci
## Declaring arguments and environment variables. It is important to declare the env var so it can be consumed at the run stage
ARG command=build:prod
ENV command=$command
ENTRYPOINT npm run ${command}
Tried @Robert's solution, but couldn't make it work: the app container crashes because of:
EBUSY: resource busy or locked, rmdir '/usr/app/dist'
Error: EBUSY: resource busy or locked, rmdir '/usr/app/dist'
My assumption is that the /usr/app/dist directory is mounted with read-only access, so when Angular attempts to remove it prior to the build, it throws an error.
I need to send data in the following direction:
app_build_prod:/usr/app/dist => nginx:/usr/share/nginx/html
I had the same problem and changed the sharing to use a multi-stage build:
FROM alpine:latest AS builder
...build app_build_prod
FROM nginx:alpine
COPY --from=builder /usr/app/dist /usr/share/nginx/html
and changed docker-compose to:
version: '3'
services:
  nginx:
    container_name: 'nginx'
    build:
      ...
    ports:
      - "80:80"
I used the typical Gatsby init given by gatsby-cli, and I wanted to use Docker to automate even further.
.
|- src
|- gatsby-*.js
|- node_modules
|- Dockerfile
|- docker-compose.yml
|- package*.json
|- public
Here's my Dockerfile:
FROM node:12
# Add the package.json file and build the node_modules folder
WORKDIR /app
COPY ./package*.json ./
RUN mkdir node_modules && npm install
RUN npm install --global gatsby-cli && gatsby telemetry --disable
Here's my docker-compose.yml
version: '3.7'
services:
  gatsby:
    build:
      context: .
      dockerfile: Dockerfile
    working_dir: /app
    command: gatsby develop -H 0.0.0.0
    ports:
      - "8000:8000"
    volumes:
      - .:/app
      - /app/node_modules/
The problem is that whenever I change anything locally, the changes don't trigger a rebuild, even though I have verified (by going inside the container) that the changes were copied into the container.
I have no problems with accessing the exposed port on my localhost. Did I do anything wrong? I have verified that the build does run, but only once. There's no indication of errors during the build process, only WARN output from the npm installs, which was also the case when I installed locally.
I think I wasn't using the right Google keywords. If only I had used the word 'recompile' instead of 'rebuilding'.
https://github.com/gatsbyjs/gatsby/issues/10836
This made it work for me.
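For reference, a common workaround for file watching across Docker bind mounts is to make the watcher poll for changes instead of relying on filesystem events, which often don't propagate into the container. A sketch (assumes Gatsby's file watcher, chokidar, honours the CHOKIDAR_USEPOLLING variable):

version: '3.7'
services:
  gatsby:
    build:
      context: .
      dockerfile: Dockerfile
    working_dir: /app
    environment:
      # Poll for file changes instead of waiting for inotify events,
      # which are not forwarded across bind mounts on some hosts
      - CHOKIDAR_USEPOLLING=true
    command: gatsby develop -H 0.0.0.0
    ports:
      - "8000:8000"
    volumes:
      - .:/app
      - /app/node_modules/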
I have a Dockerfile I'm pointing at from a docker-compose.yml.
I'd like the volume mount in the docker-compose.yml to happen before the RUN in the Dockerfile.
Dockerfile:
FROM node
WORKDIR /usr/src/app
RUN npm install --global gulp-cli \
    && npm install
ENTRYPOINT gulp watch
docker-compose.yml
version: '2'
services:
  build_tools:
    build: docker/gulp
    volumes_from:
      - build_data:rw
  build_data:
    image: debian:jessie
    volumes:
      - .:/usr/src/app
It makes complete sense for it to process the Dockerfile first and then mount from docker-compose; however, is there a way to get around it?
I want to keep the Dockerfile generic while passing the more specific bits in from Compose. Perhaps that's not the best practice?
Erik Dannenberg's answer is correct: the volume layering means that what I was trying to do makes no sense. (There is another really good explanation on the Docker website if you want to read more.) If I wanted Docker to do the npm install, I could do it like this:
FROM node
ADD . /usr/src/app
WORKDIR /usr/src/app
RUN npm install --global gulp-cli \
    && npm install
CMD ["gulp", "watch"]
However, this isn't appropriate as a solution for my situation. The goal is to use NPM to install project dependencies, then run gulp to build my project. This means I need read and write access to the project folder and it needs to persist after the container is gone.
I need to do two things after the volume is mounted, so I came up with the following solution...
docker/gulp/Dockerfile:
FROM node
RUN npm install --global gulp-cli
ADD start-gulp.sh .
CMD ./start-gulp.sh
docker/gulp/start-gulp.sh:
#!/usr/bin/env bash
until cd /usr/src/app && npm install
do
echo "Retrying npm install"
done
gulp watch
docker-compose.yml:
version: '2'
services:
  build_tools:
    build: docker/gulp
    volumes_from:
      - build_data:rw
  build_data:
    image: debian:jessie
    volumes:
      - .:/usr/src/app
So now the container starts a bash script that will continuously loop until it can get into the directory and run npm install. This is still quite brittle, but it works. :)
You can't mount host folders or volumes during a Docker build; allowing that would compromise build repeatability. The only way to access local data during a Docker build is the build context, which is everything in the PATH or URL you passed to the build command. Note that the Dockerfile needs to exist somewhere in the context. See https://docs.docker.com/engine/reference/commandline/build/ for more details.
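For example, the context is the path (or URL) argument to docker build, and -f selects which Dockerfile to use:

docker build -t myimage .                               # context = current directory
docker build -t myimage -f docker/gulp/Dockerfile .     # same context, explicit Dockerfile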