Authenticate a private GitHub dependency in a React app Dockerfile - docker

I have a private repository that I use as a dependency in my frontend React app. Currently I download it using a fine-grained PAT that gives me access to that GitHub repository, stored in the .env file as:
PERSONAL_ACCESS_TOKEN: blablabla
I understand that it is not safe to put the personal access token in the .env file, since anyone with access to the file, or to the environment it is loaded into, could use it.
Even with the token in the .env file, the package still did not install; it gave me the following error:
fatal: could not read Username for 'https://github.com': terminal prompts disabled
What would be the best practice for having this token validated and used to install the private dependency when running docker build to create the Docker image?
My current Dockerfile is as follows:
# Build stage:
FROM node:14-alpine AS build
RUN apk update && apk upgrade && \
apk add --no-cache bash git openssh
# set working directory
WORKDIR /react
# install app dependencies
# (copy _just_ the package.json here so Docker layer caching works)
COPY ./package*.json yarn.lock ./
RUN yarn install --network-timeout 1000000
# build the application
COPY . .
RUN yarn build
# Final stage:
FROM node:14-alpine
# set working directory
WORKDIR /react
# get the build tree
COPY --from=build /react/build/ ./build/
# Install `serve` to run the application.
RUN npm install -g serve
EXPOSE 3000
# explain how to run the application
# ENTRYPOINT ["npx"]
CMD ["serve", "-s", "build"]
I use GitHub Actions to deploy the actual production version; would I simply need to set the token up as an Actions secret?
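For reference, one commonly recommended approach (not the only one) is to keep the token out of the image entirely and hand it to the build stage through a BuildKit secret mount, using git's url.insteadOf rewrite so yarn can clone the private repository over HTTPS. A rough sketch, where github_token is an arbitrary secret id and the rest is the build stage from above:
# syntax=docker/dockerfile:1
FROM node:14-alpine AS build
RUN apk update && apk upgrade && \
    apk add --no-cache bash git openssh
WORKDIR /react
COPY ./package*.json yarn.lock ./
# The token is only visible while this RUN step executes; it is not written
# into any image layer, and nothing from this stage reaches the final image
# except the build output copied later.
RUN --mount=type=secret,id=github_token \
    git config --global url."https://$(cat /run/secrets/github_token)@github.com/".insteadOf "https://github.com/" && \
    yarn install --network-timeout 1000000
The image would then be built with BuildKit enabled, for example:
DOCKER_BUILDKIT=1 docker build --secret id=github_token,src=./github_token.txt -t my-react-app .
In GitHub Actions the PAT would indeed live in an Actions secret and be passed through the same mechanism (for example via the secrets input of docker/build-push-action) rather than being committed in a .env file.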

Related

Dockerfile FROM AS throws Invalid Reference Format

I'm trying to use a multi-stage Dockerfile with FROM ... AS, but when I run the build in a Jenkins job I get an error on this line:
FROM node:8.12.0-alpine AS firstStep
Error parsing reference: "node:8.12.0-alpine AS firstStep" is not a valid repository/tag: invalid reference format
The Dockerfile is this:
FROM node:8.12.0-alpine AS firstStep
WORKDIR /usr/src/app/
# Copy both the package.json and the package-lock.json
COPY package*.json ./
COPY . .
# Deployment container
FROM nginx:1.14.0-alpine
RUN apk add --no-cache bash
RUN apk add --update curl
#set env var for certs
ENV NODE_EXTRA_CA_CERTS /confs/MyPem.pem
# Forward logs to stdout and stderr
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
# Create nginx config dir and copy nginx files for environments into it
RUN mkdir /confs
COPY ./nginxconf/* /confs/
#This copies the Keystore from the workspace and places it at the root of the container
COPY ./MyPem.pem /confs/MyPem.pem
COPY --from=firstStep /usr/src/app/dist /usr/share/nginx/html
COPY ./entrypoint.sh /opt/entrypoint.sh
RUN chmod a+x /opt/entrypoint.sh
ENTRYPOINT ["/opt/entrypoint.sh"]
If you check on dockerhub.com, the image node:8.12.0-alpine indeed does not exist; use for example "node:8.12-alpine". Also, you should use lowercase for "firstStep", so "firststep".
Support for multi-stage builds was added in Docker 17.05.0. You can check the current version of the docker client and server with docker version; you'll need to upgrade the docker engine performing the build. Older docker releases are no longer supported once a new major release is delivered, so you'll want to pick a current stable release. Follow Docker's installation guide for your platform to install from the Docker repositories; Linux distributions often ship older versions of docker in their own repos.
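For example, a quick check on the machine that performs the build:
docker version --format '{{.Server.Version}}'
# multi-stage builds (FROM ... AS ...) need 17.05 or later
If the reported server version is older than 17.05, the FROM ... AS ... line is rejected with exactly this kind of "invalid reference format" error.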

What is the RUN command in a Dockerfile to install vuetify?

I expected to be able to include it in the Dockerfile directly and tried to. Here is my whole Dockerfile:
FROM node
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
RUN npm i --save @koumoul/vuetify-jsonschema-form
RUN npm install --save axios vue-axios
RUN npm install vuetify@1.5.8
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
But got
Module not found: Error: Can't resolve 'vuetify' in '/app/src/views'
It is not good practice to install packages separately from package.json; you should just include them in your package.json. But here is a technique for testing cases like this.
You can first run the image yourself with docker run -it node bash and then try the commands there. You can also add a bind mount so the files you need are included, e.g. docker run -it -v "$(pwd)":/usr/src/app node bash. With this you can try everything you intend to run in your Dockerfile more directly.
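In other words, add the packages to package.json on your own machine (committing the updated package.json and lock file) and let the single RUN npm install in the Dockerfile pick them up. Roughly, with the versions from the question only as examples:
# run locally, outside Docker, so the dependencies land in package.json
npm install --save @koumoul/vuetify-jsonschema-form vuetify@1.5.8 axios vue-axios
After that, the three extra RUN npm install ... lines in the Dockerfile can be removed.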

Docker isn't caching Alpine apk add command

Every time I build the container I have to wait for apk add docker to finish, which takes a long time.
Since it downloads the same thing every time, can I somehow force Docker to cache apk's downloads for development purposes?
Here's my Dockerfile:
FROM golang:1.13.5-alpine
WORKDIR /go/src/app
COPY src .
RUN go get -d -v ./...
RUN go install -v ./...
RUN apk add --update docker
CMD ["app"]
BTW, I am using this part volumes: - /var/run/docker.sock:/var/run/docker.sock in my docker-compose.yml to use sibling containers, if that matters.
EDIT: I've found that Google copies a docker.tgz in Chromium:
# add docker client -- do not install docker via apk -- it will try to install
# docker engine which takes a lot of space as well (we don't need it, we need
# only the small client to communicate with the host's docker server)
ADD build/docker/docker.tgz /
What is that docker.tgz? How can I get it?
Reorder your Dockerfile and it should work.
FROM golang:1.13.5-alpine
RUN apk add --update docker
WORKDIR /go/src/app
COPY src .
RUN go get -d -v ./...
RUN go install -v ./...
CMD ["app"]
Since you copy the source before installing docker, whenever something in src changes the cache is invalidated for the docker installation step as well.
Whenever you have a COPY command, if any of the files involved change, every command after it gets re-run. If you move your RUN apk add ... command to the start of the file, before it COPYs anything, it will be cached across builds.
A fairly generic recipe for most Dockerfiles to accommodate this pattern looks like:
FROM some-base-image
# Install OS-level dependencies
RUN apk add or apt-get install ...
WORKDIR /app
# Install language-level dependencies
COPY requirements.txt requirements.lock ./
RUN something install -r requirements.txt
# Install the rest of the application
COPY main.app ./
COPY src src/
# Set up standard run-time metadata
EXPOSE 12345
CMD ["/app/main.app"]
(Go and Java applications need the additional step of compiling the application, which often lends itself to a multi-stage build, but this same pattern can be repeated in both stages.)
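As a rough sketch of what that looks like for a Go project (module, paths, and binary names are placeholders; a GOPATH-style project like the one in the question would use go get instead of go.mod), the same dependencies-before-source ordering is simply repeated in each stage:
# Build stage: OS packages first, then Go dependencies, then the source
FROM golang:1.13-alpine AS build
RUN apk add --no-cache git
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o /bin/app .
# Run stage: run-time OS packages again come before any COPY
FROM alpine
RUN apk add --no-cache ca-certificates
COPY --from=build /bin/app /bin/app
CMD ["/bin/app"]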
You can download the Docker x86_64 binaries for Mac, Linux, or Windows, unzip/untar the archive, and make the client binary executable.
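A sketch of how that could look in the Dockerfile from the question (the version number is only an example; check download.docker.com/linux/static/stable/ for current releases):
# Install only the docker CLI client, not the whole engine
RUN wget -q -O /tmp/docker.tgz https://download.docker.com/linux/static/stable/x86_64/docker-19.03.5.tgz \
    && tar -xzf /tmp/docker.tgz -C /tmp \
    && mv /tmp/docker/docker /usr/local/bin/docker \
    && rm -rf /tmp/docker /tmp/docker.tgz
The docker.tgz in the Chromium snippet is exactly that static archive: it contains the small standalone client binary, which is all that is needed to talk to the host's Docker daemon through the mounted /var/run/docker.sock.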
Whenever you install packages in a Docker container, those steps should go at the beginning of the Dockerfile so the same packages are not reinstalled on every build, and the COPY steps should come towards the end of the Dockerfile.

Webpack app in docker needs environment variables before it can be built

New to docker so maybe I'm missing something obvious...
I have an app split into a web client and a back end server. The back end is pretty easy to create an image for via a Dockerfile:
COPY source
RUN npm install, npm run build
CMD npm run start
The already-built back end app will then access the environment variables at runtime.
With the web client it's not as simple, because webpack needs the environment variables before the application is built. As far as I'm aware, that leaves me with only two options:
Require the user to build their own image from the application source
Build the web client on container run by running npm run build in CMD
Currently I'm doing #2 but both options seem wrong to me. What's the best solution?
FROM node:latest
COPY ./server /app/server
COPY ./web /app/web
WORKDIR /app/web
CMD ["sh", "-c", "npm install && npm run build && cd ../server && npm install && npm run build && npm run start"]
First, it would be a good idea for the backend server and the web client to each have their own Dockerfile/image. Then it would be easy to run them together using something like docker-compose.
The way you are going to want to provide environment variables to the web Dockerfile is with build arguments. Docker build arguments are available while the image is being built. You use them by declaring an ARG in the Dockerfile and, if needed, overriding its default with the --build-arg flag to docker build.
Here is an example Dockerfile for your web client based on what you provided:
FROM node:latest
ARG NODE_ENV=dev
COPY ./web /app/web
WORKDIR /app/web
RUN npm install \
&& npm run build
CMD ["npm", "run", "start"]
This Dockerfile uses the ARG directive to create a variable, NODE_ENV, with a default value of dev.
The value of NODE_ENV can then be overridden when running docker build.
Like so:
docker build -t <myimage> --build-arg NODE_ENV=production .
Whether you override it or not, NODE_ENV will be available to webpack before the application is built. This allows you to build a single image and distribute it to many people without them having to build the web client themselves.
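If you do split the server and web client into separate images and run them with docker-compose as suggested earlier, the same build argument can be supplied from the compose file. A rough sketch (service names, paths, and ports are placeholders):
version: "3"
services:
  web:
    build:
      context: ./web
      args:
        NODE_ENV: production
    ports:
      - "3000:3000"
  server:
    build: ./server
    ports:
      - "8080:8080"
docker-compose build then passes NODE_ENV through to the ARG in the web Dockerfile.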
Hopefully this helps you out.

Versioning of a nodejs project and dockerizing it with minimum image diff

The npm version is located in package.json.
I have a Dockerfile, simplified as follows:
FROM node:carbon
COPY ./package.json ${DIR}/
RUN npm install
COPY . ${DIR}
RUN npm build
Correct my understanding:
If ./package.json changes, is it true that the image layers for steps 2 through 5 all change?
Assuming that I have no changes in my npm package dependencies,
how could I change the project version without Docker rebuilding the image layer for RUN npm install?
To sum up, the behavior you describe is standard Docker behavior: as soon as package.json has changed and has a different hash, COPY package.json ./ will be triggered again, as well as each subsequent Dockerfile command.
Thus, the Docker setup outlined in the official Node.js documentation does not alleviate this; it proposes the following Dockerfile:
FROM node:carbon
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
But if you really want to avoid rerunning npm install from scratch most of the time, you could maintain two different files: package.json and, say, package-deps.json. Copy in (and rename) package-deps.json, run npm install, and only switch to the real package.json afterwards.
And if you want extra checks that the dependencies of the two files do not drift out of sync, the package-lock.json file can help, provided you use the new npm ci command that comes with npm 5.8 (cf. the corresponding changelog) instead of npm install.
In this case, as the latest version of npm available in the Docker Hub image is npm 5.6, you'll need to upgrade it beforehand.
All things put together, here is a possible Dockerfile for this use case:
FROM node:carbon
# Create app directory
WORKDIR /usr/src/app
# Upgrade npm to have "npm ci" available
RUN npm install -g npm@5.8.0
# Import conf files for dependencies
COPY package-lock.json package-deps.json ./
# Note that this REQUIRES to run the command "npm install --package-lock-only"
# before running "docker build …" and also REQUIRES a "package-deps.json" file
# that is in sync w.r.t. package.json's dependencies
# Install app dependencies
RUN mv package-deps.json package.json && npm ci
# Note that "npm ci" checks that package.json and package-lock.json are in sync
# COPY package.json ./ # subsumed by the following command
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
Disclaimer: I did not try the solution above on a realistic example as I'm not a regular node user, but you may view this as a useful workaround for the dev phase…
