I want to dockerize my Nest API. I'm completely new to Docker so I created a fresh Nest project with the CLI. I created a .dockerignore and added every file that shouldn't live in the Docker image.
.git
.gitignore
coverage
LICENSE
README.md
CONTRIBUTING.md
docker-compose.yml
Dockerfile
node_modules/
.github
.vscode
npm-debug.log
npm-debug.log.*
Next I started with the Dockerfile.
FROM node:12.13-alpine AS api
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
ADD . /usr/src/app
CMD npm start
I'm wondering why the image is 321 MB in size. Does anyone know how to improve this? I don't need fancy development or testing extras; I'd just like to get into Docker by starting with a small, clean image and then set up the docker-compose file for TypeORM database support.
If you don't need development and testing dependencies, change the dependency installation step in your Dockerfile as follows:
RUN npm install --production
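Beyond that, a multi-stage build keeps the TypeScript toolchain and devDependencies out of the final image entirely. Here is a minimal sketch, assuming the default Nest CLI layout where the compiled output lands in dist/ and the entry point is dist/main.js:

FROM node:12.13-alpine AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM node:12.13-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production
# only the compiled output is carried over from the build stage
COPY --from=build /usr/src/app/dist ./dist
CMD ["node", "dist/main"]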
I'm running into a problem with Docker that I can't fix by looking at other references, documentation, etc., and since I'm a beginner with Docker, I'm trying my luck here. I'm working on a Next.js project that uses Docker to build the app. I'm using the example from the Next.js documentation, and that works if I have my Dockerfile in the root of my project. However, I want to put it in a folder called etc and use it from there. This is giving me problems because Docker can't find the files that I'm trying to copy to the working directory; see the error below.
Structure
.
├── etc
│ └── Dockerfile
├── package.json
└── yarn.lock
Command
docker build etc/
Error
failed to compute cache key: "/yarn.lock" not found: not found
Dockerfile
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
I've tried a bunch of things, such as changing the files and paths. The documentation also mentions the -f flag, but that doesn't work for me either, because I get the error "docker build" requires exactly 1 argument. when running docker build -f etc/Dockerfile. Is that outdated? Anyway, my question is how to build my app with Docker when my Dockerfile is not in the root of the project but in a child folder like etc/.
You have forgotten the dot at the end of the command: docker build -f etc/Dockerfile . The dot is the required argument and specifies the build context.
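The -f flag only points at the Dockerfile; the positional argument is the build context against which COPY paths are resolved. Run the build from the project root so package.json and yarn.lock are inside the context:

# from the project root: Dockerfile lives in etc/, context stays at the root
docker build -f etc/Dockerfile .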
Docker doesn't use the build cache when anything in package.json or package-lock.json changes, even if it's only the version number and no dependencies have changed.
How can I make Docker use the old build cache and skip npm install (npm ci) every time?
I know that Docker looks at the checksums of the copied files, but nothing in package.json has changed except the version number.
Below is my Dockerfile
FROM node:10 as builder
ARG REACT_APP_BUILD_NUMBER=X
ENV REACT_APP_BUILD_NUMBER="${REACT_APP_BUILD_NUMBER}"
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY .npmrc ./
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM nginx:alpine
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Here are some solutions that should help mitigate this problem. There are trade-offs with each, but they are not necessarily mutually exclusive; they can be mixed together for better overall build performance.
Solution I: Docker BuildKit cache mounts
Docker BuildKit enables partial mitigation of this problem using the experimental RUN --mount=type=cache flag. It supports a reusable cache mount during the image build process.
An important caveat here is that support for Docker BuildKit may vary significantly between CI/development environments. Check the documentation and the build environment to ensure it will have proper support (otherwise, it will error). Here are some requirements (but not necessarily an exhaustive list):
The Docker daemon needs to support BuildKit (requires Docker 18.09+).
Docker BuildKit needs to be explicitly enabled with DOCKER_BUILDKIT=1 or by default from a daemon/cli configuration.
A comment is needed at the start of the Dockerfile to enable experimental support: # syntax=docker/dockerfile:experimental
Here is a sample Dockerfile that makes use of this feature, caching npm dependencies locally to /usr/src/app/.npm for reuse in subsequent builds:
# syntax=docker/dockerfile:experimental
FROM node
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json package-lock.json /usr/src/app/
RUN --mount=type=cache,target=/usr/src/app/.npm \
npm set cache /usr/src/app/.npm && \
npm ci
Notes:
This will cache fetched dependencies locally, but npm will still need to install these into the node_modules directory. Testing with a medium-sized project indicates that this does shave off some build time, but building node_modules can still be non-negligible.
/usr/src/app/.npm will not be included in the final build, and is only available during build time (however, a lingering .npm directory will exist).
The build cache can be cleared if needed; see this Docker forum post.
Caching node_modules itself is not recommended, as removal of dependencies from package.json might not be properly propagated. Your mileage may vary if you attempt it.
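For reference, a build using the Dockerfile above could be invoked like this (my-app is a placeholder image name):

DOCKER_BUILDKIT=1 docker build -t my-app .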
Solution II: Install dependencies prior to copying package.json
On the host machine, a script extracts only the dependencies and devDependencies entries from package.json and copies them into a new file, such as package-dependencies.json.
E.g. package-dependencies.json:
{
  "dependencies": {
    "react": "^16.13.1"
  },
  "devDependencies": {
    "gulp": "^4.0.2"
  }
}
In the Dockerfile, COPY the package-dependencies.json and package-lock.json and install dependencies. Then, copy the original package.json. Unless changes occur to package-lock.json or to the dependencies/devDependencies entries of package.json, the layers will be cached and reused from a previous build, meaning minor changes to package.json will not trigger npm ci/npm install.
Here is an example:
FROM node
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# copy dependency list and locked dependencies
COPY package-dependencies.json package-lock.json /usr/src/app/
# install dependencies
RUN npm ci
# copy over the full package configuration
COPY package.json /usr/src/app/
# ...
RUN npm run build
# ...
Notes:
If used on its own, this solution will be faster than the first solution for small changes (such as a version bump), as it will not need to rerun npm ci.
package-dependencies.json will be in the layer history. While this file would be negligible/insignificant in size, it is still "wasted space" since it is not needed in the final image.
A quick script will be needed to generate package-dependencies.json. Depending on the build environment, this may be annoying to implement. Here is an example using the cli utility jq:
cat package.json | jq -S '. | with_entries(select (.key as $k | ["dependencies", "devDependencies"] | index($k)))' > package-dependencies.json
Solution III: All of the above
Solution I caches npm dependencies locally for faster dependency fetching. Solution II only triggers npm ci/npm install when a dependency or development dependency is actually updated. These solutions can be used together to further accelerate build times.
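As a rough sketch of the combination, the Dockerfile from Solution II can reuse the cache mount from Solution I, so that even when npm ci does run, packages are fetched from the local cache:

# syntax=docker/dockerfile:experimental
FROM node
WORKDIR /usr/src/app
# dependency list (generated by the jq script) plus the lockfile
COPY package-dependencies.json package-lock.json /usr/src/app/
# install dependencies, reusing the npm cache across builds
RUN --mount=type=cache,target=/usr/src/app/.npm \
    npm set cache /usr/src/app/.npm && \
    npm ci
# copy over the full package configuration and the sources
COPY package.json /usr/src/app/
COPY . .
RUN npm run build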
I have a front-end project which includes a package.json (imagine create-react-app, for example).
When I run the commands below, everything works fine with no errors.
First Dockerfile
COPY . develop
WORKDIR develop
But if I copy only package.json, as in the next Dockerfile, I face an error.
Second Dockerfile
COPY package.json develop
WORKDIR develop
error message: Cannot mkdir: /develop is not a directory
I know how to dockerize my project with the commands below.
WORKDIR develop
COPY package.json .
I am just curious to know why the first Dockerfile works and the second one doesn't. I also ran RUN ls after the COPY command and found that in both cases develop had been generated.
It is because COPY package.json develop copies package.json into the container as a file named develop. So the next directive, WORKDIR, fails because develop is not a directory but a file.
Add a / before and after develop and it should work:
FROM alpine
COPY temp.txt /develop/
WORKDIR develop
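The trailing slash is what tells Docker that the destination is a directory rather than a file name. A small sketch of the difference (temp.txt is just a placeholder file):

# without the trailing slash, /develop is created as a file
COPY temp.txt /develop
# with the trailing slash, /develop is a directory containing temp.txt
COPY temp.txt /develop/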
I am working on creating a docker image with the following
FROM node:lts-alpine
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start"]
I am confused about the following lines
# Bundle app source
COPY . .
What exactly is meant here by bundling? Copying everything? If that is the case, why does it copy the package.json file beforehand?
I was confused about the exact same thing.
The solution is the distinction between your application and its dependencies: Running npm install after copying package.json only installs the dependencies (creating the node_modules folder), but your application code is still not inside the container. That's what COPY . . does. I don't think the word "bundle" has any special meaning here, since it is just a normal file copy operation.
Separating those steps means the results of npm install (i.e. the state of the container after executing the command) can be cached by docker and thus don't have to be executed every time a part of the application code changes. This makes deploys faster.
PS: When talking about making deploys faster, have a look at npm ci: https://blog.npmjs.org/post/171556855892/introducing-npm-ci-for-faster-more-reliable
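As a quick sketch, switching the Dockerfile from the question to npm ci only changes the install step (it assumes a package-lock.json is committed alongside package.json):

COPY package*.json ./
RUN npm ci --only=production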
To bundle your app's source code inside the Docker image, use the COPY instruction.
We are working with two projects at the moment:
1. a C++-based project
2. a Node.js-based project
These two projects are separate, which means they have different codebases (git repositories) and working directories.
The C++ project produces a Node binding file (.node) which is used by the Node.js project.
We are trying to build a Docker image for the Node.js project with a multi-stage build like this:
FROM ubuntu:18.04 AS u
WORKDIR /app
RUN apt-get........
# 1: copy the C++ source codes
COPY (?) .
RUN make
FROM node:10
WORKDIR /app
# 1: copy the Node.js source codes
COPY (?) .
RUN npm install
COPY --from=u /app/dist/xx.node ./lib/
CMD ["node", "index.js"]
And I will build the image with docker build -t xx (?) (see comment #2).
However, as indicated by the comments in the Dockerfile and the command, how do I set up the context directory (see comment #2)? It affects the paths in the Dockerfile (see comment #1).
Also, which project should the above Dockerfile be placed in?
You have two options here, as the limiting factor is that Docker only allows copying from within the build context:
Create a new Repository
You can either create a new repo and use your existing repos as submodules, or use the new repo just for the Dockerfile (then you would have to copy both repos into its root folder at build time). In the end, what you want to achieve is the following structure:
/ (root)
|-- C-plus-plus-Repo
|-- |-- <Files>
|-- Node-Repo
|-- |-- <Files>
|-- Dockerfile
Then you can build your project with:
FROM ubuntu:18.04 AS u
WORKDIR /app
RUN apt-get........
# 1: copy the C++ source files
COPY ./C-plus-plus-Repo .
RUN make
FROM node:10
WORKDIR /app
# 1: copy the Node.js source codes
COPY ./Node-Repo .
RUN npm install
COPY --from=u /app/dist/xx.node ./lib/
CMD ["node", "index.js"]
In the root Directory execute:
docker build -t xx .
Build your stage images separately
Docker allows copying from an external image as a stage.
So you can build the C++ image in your C++ repo root:
FROM ubuntu:18.04 AS u
WORKDIR /app
RUN apt-get........
# 1: copy the C++ source files
COPY . .
RUN make
and tag it:
# Build your C++ Container in root of the c++ repo
docker build . -t c-stage
Then copy the files from it, using the tag (in your Node repo root):
FROM node:10
WORKDIR /app
# 1: copy the Node.js source files
COPY . .
RUN npm install
# use the tag name of the already-built image "c-stage"
COPY --from=c-stage /app/dist/xx.node ./lib/
CMD ["node", "index.js"]
Both build steps can be executed from their respective repo roots.
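For example, with the tags used above:

# in the C++ repo root: build and tag the stage image
docker build -t c-stage .
# in the Node repo root: requires the c-stage image to exist locally
docker build -t xx .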
Creating a deploy project with git submodules
How about creating a deploy project using git submodules?
This project would only exist for building the docker image and contains the Dockerfile and both of your projects as git submodules.
Since you don't just copy the two projects but manage them with git, you can always keep them up to date with git submodule update --remote. Note that this leaves your submodules in a detached HEAD state; however, this is not a problem as long as you do not try to update your C++ project or the Node project from the deploy project.
You can create the project with the following commands:
mkdir deploy_project && cd deploy_project
git init
git submodule add git@your-gitserver.com:YourName/YourCppProject.git cpp_project
git submodule add git@your-gitserver.com:YourName/YourNodeProject.git nodejs_project
Then you can simply add the paths to the subprojects to your Dockerfile and build the image in the root directory of the deploy project.
The Dockerfile would look like this:
FROM ubuntu:18.04 as u
WORKDIR /app
RUN apt-get........
# 1: copy the C++ source codes
COPY cpp_project/ .
RUN make
FROM node:10
WORKDIR /app
# 1: copy the Node.js source codes
COPY nodejs_project/ .
RUN npm install
COPY --from=u /app/dist/xx.node ./lib/
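Then build from the root of the deploy project, which makes both submodule paths available in the build context:

docker build -t xx .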
You can use the ADD command (here the context refers to the host directory where the Dockerfile is placed). It will copy everything placed in the same directory as the Dockerfile on the host machine (in this case the contents of the cpp_app directory) into the Docker container.
...
ADD cpp_app /place/to/build
WORKDIR /place/to/build
RUN make
RUN mv result_file /place/where/result_file/have/to/be
WORKDIR /place/where/result_file/have/to/be
... execute your nodejs stuff