I am using Docker to deploy my Nuxt app, but my Docker image size is 260 MB. Is that too big for a Docker image? I've used Node Alpine to reduce the image size.
This is the Dockerfile:
FROM node:10-alpine
RUN mkdir -p /usr/src/nuxt-app
WORKDIR /usr/src/nuxt-app
# copy the app, note .dockerignore
COPY package*.json ./
COPY . .
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NUXT_HOST=0.0.0.0
# set app port
ENV NUXT_PORT=3000
# start the app
CMD [ "npm", "start" ]
I want a Docker image smaller than 100 MB. Is there any more configuration needed for the Nuxt app, or any Docker commands to be added?
You have to do a multi-stage Docker build.
The idea is that you use one image for the build, and then copy only the plain JavaScript output to an Alpine image.
Check out a good example here - https://github.com/nuxt/nuxt.js/issues/2871
Also, as JMLizano mentioned, in the run image you can install the packages without the dev ones:
npm install --production
(The example linked above just copies all built modules to the run image.)
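For reference, a minimal two-stage sketch for a Nuxt app could look like the following; the .nuxt output directory and nuxt.config.js are the Nuxt defaults, so adjust them if your project differs:
# build stage: install everything (including dev packages) and build
FROM node:10-alpine AS builder
WORKDIR /usr/src/nuxt-app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# run stage: only production dependencies and the build output
FROM node:10-alpine
WORKDIR /usr/src/nuxt-app
COPY package*.json ./
RUN npm install --production
COPY --from=builder /usr/src/nuxt-app/.nuxt ./.nuxt
COPY --from=builder /usr/src/nuxt-app/nuxt.config.js ./
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=3000
EXPOSE 3000
CMD [ "npm", "start" ]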
I do not know Nuxt, but some things you can try are:
Group the two COPY statements; it seems like COPY . . alone should be enough.
Group the two RUN statements (e.g. RUN npm install && npm run build).
Avoid installing dev packages: use the --production flag of npm install.
The first two will reduce the number of layers in the image, but do not expect a huge size reduction there. The third is where you can save more space (in case you have a lot of dev packages). A sketch combining all three is below.
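Applied to the Dockerfile above, that would look roughly like this (a sketch, untested against your app; note that if the Nuxt build itself needs devDependencies, --production belongs only in the run stage of a multi-stage build):
FROM node:10-alpine
WORKDIR /usr/src/nuxt-app
COPY . .
RUN npm install --production && npm run build
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=3000
EXPOSE 3000
CMD [ "npm", "start" ]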
Whenever I run the following command to create a Docker image,
docker build -t ehi-member-portal:v1.0.0 -f ./Dockerfile .
I get an error complaining about the Node version.
I'm not sure why it is complaining, because I am currently running v14.20.0. And I am not sure why it is detecting v12.14.1 when I am running v14.20.0. I installed Node and npm using NVM. I used this site as a reference for how to create the Node and nginx image for a container.
Here are the contents of my Dockerfile:
FROM node:12.14-alpine AS builder
WORKDIR /dist/src/app
RUN npm cache clean --force
COPY . .
RUN npm install
RUN npm run build --prod
FROM nginx:latest AS ngi
COPY --from=builder /dist/ehi-member-portal /usr/share/nginx/html
COPY /nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
Any help would be HIGHLY appreciated. I need to figure this out.
RUN npm run build --prod is executed INSIDE the Docker container, and the Node inside it is not the required version.
Also, you clearly state that you want to use Node v12 with
FROM node:12.14-alpine AS builder
so this is why it is "detected" as 12: that is the Node version inside the container. Bump the version. You can use any of the images listed here
https://hub.docker.com/_/node
e.g.
FROM node:14.20.0-alpine AS builder
Docker doesn't use the build cache when something in package.json or package-lock.json has changed, even if it is only the version number in the file and no dependencies have changed.
How can I make Docker use the old build cache and skip npm install (npm ci) every time?
I know that Docker checks whether the copied files have changed, but nothing in package.json changed except the version number.
Below is my Dockerfile
FROM node:10 as builder
ARG REACT_APP_BUILD_NUMBER=X
ENV REACT_APP_BUILD_NUMBER="${REACT_APP_BUILD_NUMBER}"
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY .npmrc ./
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM nginx:alpine
COPY nginx/nginx.conf /etc/nginx/nginx.conf
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Here are some solutions that should help mitigate this problem. There are trade-offs with each, but they are not necessarily mutually exclusive - they can be mixed together for better overall build performance.
Solution I: Docker BuildKit cache mounts
Docker BuildKit enables partial mitigation of this problem using the experimental RUN --mount=type=cache flag. It supports a reusable cache mount during the image build process.
An important caveat here is that support for Docker BuildKit may vary significantly between CI/development environments. Check the documentation and the build environment to ensure it will have proper support (otherwise, it will error). Here are some requirements (but not necessarily an exhaustive list):
The Docker daemon needs to support BuildKit (requires Docker 18.09+).
Docker BuildKit needs to be explicitly enabled with DOCKER_BUILDKIT=1 or by default from a daemon/cli configuration.
A comment is needed at the start of the Dockerfile to enable experimental support: # syntax=docker/dockerfile:experimental
Here is a sample Dockerfile that makes use of this feature, caching npm dependencies locally to /usr/src/app/.npm for reuse in subsequent builds:
# syntax=docker/dockerfile:experimental
FROM node
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json package-lock.json /usr/src/app/
RUN --mount=type=cache,target=/usr/src/app/.npm \
npm set cache /usr/src/app/.npm && \
npm ci
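A build with this Dockerfile then needs BuildKit explicitly enabled, e.g. (the image tag myapp is just a placeholder):
DOCKER_BUILDKIT=1 docker build -t myapp .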
Notes:
This will cache fetched dependencies locally, but npm will still need to install these into the node_modules directory. Testing with a medium-sized project indicates that this does shave off some build time, but building node_modules can still be non-negligible.
/usr/src/app/.npm will not be included in the final build, and is only available during build time (however, a lingering .npm directory will exist).
The build cache can be cleared if needed; see this Docker forum post, or the command shown after these notes.
Caching node_modules is not recommended. Removal of dependencies in package.json might not be properly propagated. Your mileage may vary, if attempted.
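For reference, the general BuildKit build cache can be cleared with docker builder prune; clearing the cache mounts specifically is reported to require a type filter:
docker builder prune --filter type=exec.cachemount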
Solution II: Install dependencies prior to copying package.json
On the host machine, a script extracts only the dependencies and devDependencies tags from package.json and copies those tags to a new file, such as package-dependencies.json.
E.g. package-dependencies.json:
{
  "dependencies": {
    "react": "^16.13.1"
  },
  "devDependencies": {
    "gulp": "^4.0.2"
  }
}
In the Dockerfile, COPY the package-dependencies.json (as package.json, since npm ci needs one to read) together with package-lock.json, and install the dependencies. Then, copy the original package.json over it. Unless changes occur to package-lock.json or to package.json's dependencies/devDependencies tags, the layers will be cached and reused from a previous build, meaning minor changes to package.json will not need to run npm ci/npm install.
Here is an example:
FROM node
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# copy the trimmed dependency list (as package.json, which npm ci requires) and the locked dependencies
COPY package-dependencies.json ./package.json
COPY package-lock.json ./
# install dependencies
RUN npm ci
# copy over the full package configuration
COPY package.json /usr/src/app/
# ...
RUN npm run build
# ...
Notes:
If used on its own, this solution will be faster than the first solution for small changes (such as a version bump), as it will not need to rerun npm ci at all.
package-dependencies.json will be in the layer history. While this file would be negligible/insignificant in size, it is still "wasted space" since it is not needed in the final image.
A quick script will be needed to generate package-dependencies.json. Depending on the build environments, this may be annoying to implement. Here is an example using the cli utility jq:
cat package.json | jq -S '. | with_entries(select (.key as $k | ["dependencies", "devDependencies"] | index($k)))' > package-dependencies.json
Solution III: All of the above
Solution I will enable caching npm dependencies locally for faster dependency fetching. Solution II will only ever trigger npm ci/npm install if a dependency or development dependency is updated. These solutions can be used together to further accelerate build times, as sketched below.
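A combined sketch, assuming the same paths and the package-dependencies.json trick from Solution II:
# syntax=docker/dockerfile:experimental
FROM node
WORKDIR /usr/src/app
# Solution II: only real dependency changes invalidate these layers
COPY package-dependencies.json ./package.json
COPY package-lock.json ./
# Solution I: fetched packages persist across builds in the cache mount
RUN --mount=type=cache,target=/usr/src/app/.npm \
    npm set cache /usr/src/app/.npm && \
    npm ci
# restore the real package.json, then build as usual
COPY package.json ./
COPY . .
RUN npm run build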
Every time I build the container I have to wait for apk add docker to finish, which takes a long time.
Since it downloads the same thing every time, can I somehow force Docker to cache apk's downloads for development purposes?
Here's my Dockerfile:
FROM golang:1.13.5-alpine
WORKDIR /go/src/app
COPY src .
RUN go get -d -v ./...
RUN go install -v ./...
RUN apk add --update docker
CMD ["app"]
BTW, I am using this part volumes: - /var/run/docker.sock:/var/run/docker.sock in my docker-compose.yml to use sibling containers, if that matters.
EDIT: I've found that Google copies a docker.tgz into the image in Chromium:
# add docker client -- do not install docker via apk -- it will try to install
# docker engine which takes a lot of space as well (we don't need it, we need
# only the small client to communicate with the host's docker server)
ADD build/docker/docker.tgz /
What is that docker.tgz? How can I get it?
Reorder your Dockerfile and it should work.
FROM golang:1.13.5-alpine
RUN apk add --update docker
WORKDIR /go/src/app
COPY src .
RUN go get -d -v ./...
RUN go install -v ./...
CMD ["app"]
Because you were copying the source before the installation, whenever you changed something in src the cache was invalidated for the Docker installation step as well.
Whenever you have a COPY command, if any of the files involved change, every command after it gets re-run. If you move your RUN apk add ... command to the start of the file, before it COPYs anything, it will stay cached across runs.
A fairly generic recipe for most Dockerfiles to accommodate this pattern looks like:
FROM some-base-image
# Install OS-level dependencies
RUN apk add ... # or: apt-get install ...
WORKDIR /app
# Install language-level dependencies
COPY requirements.txt requirements.lock ./
RUN something install -r requirements.txt
# Install the rest of the application
COPY main.app ./
COPY src src/
# Set up standard run-time metadata
EXPOSE 12345
CMD ["/app/main.app"]
(Go and Java applications need the additional step of compiling the application, which often lends itself to a multi-stage build, but this same pattern can be repeated in both stages; see the sketch below.)
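For the Go image above, that might look like the following sketch (image tags are assumptions, and go install is assumed to produce a binary named app):
# build stage: compile the Go binary
FROM golang:1.13.5-alpine AS builder
WORKDIR /go/src/app
COPY src .
RUN go get -d -v ./... && go install -v ./...
# run stage: OS packages first so they stay cached, then only the compiled binary
FROM alpine:3.11
RUN apk add --update docker
COPY --from=builder /go/bin/app /usr/local/bin/app
CMD ["app"]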
You can download the Docker x86_64 binaries for Mac, Linux, and Windows, unzip/untar the archive, and make the client executable.
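For example, Docker publishes static builds that can be fetched during the image build; the exact version in the URL below is an assumption, so pick one compatible with your host daemon:
RUN wget -qO /tmp/docker.tgz https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz \
 && tar -xzf /tmp/docker.tgz -C /tmp \
 && mv /tmp/docker/docker /usr/local/bin/docker \
 && rm -rf /tmp/docker /tmp/docker.tgz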
Whenever you are installing packages in a Docker container, those commands should go at the beginning of the Dockerfile, so it won't have to install the same packages again on every build, and the COPY commands should come at the end of the Dockerfile.
New to docker so maybe I'm missing something obvious...
I have an app split into a web client and a back end server. The back end is pretty easy to create an image for via a Dockerfile:
COPY . .
RUN npm install && npm run build
CMD ["npm", "run", "start"]
The already-built back end app will then access the environment variables at runtime.
With the web client it's not as simple, because webpack needs to have the environment variables before the application is built. As far as I'm aware, this leaves me with only two options:
Require the user to build their own image from the application source
Build the web client when the container starts, by running npm run build in CMD
Currently I'm doing #2 but both options seem wrong to me. What's the best solution?
FROM node:latest
COPY ./server /app/server
COPY ./web /app/web
WORKDIR /app/web
CMD ["sh", "-c", "npm install && npm run build && cd ../server && npm install && npm run build && npm run start"]
First, it would be a good idea for the backend server and the web client to each have their own Dockerfile/image. Then it would be easy to run them together using something like docker-compose.
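For example, a minimal docker-compose.yml for the two images could look like this sketch (service names, ports, and paths are assumptions; the args entry ties into the build arguments described next):
version: "3"
services:
  server:
    build: ./server
    ports:
      - "3001:3001"
  web:
    build:
      context: ./web
      args:
        NODE_ENV: production
    ports:
      - "3000:3000"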
The way you are going to want to provide environment variables to the web Dockerfile is by using build arguments. Docker build arguments are available when building the Docker image. You use these by specifying the ARG key in the Dockerfile, or by passing the --build-arg flag to docker build.
Here is an example Dockerfile for your web client based on what you provided:
FROM node:latest
ARG NODE_ENV=dev
COPY ./web /app/web
WORKDIR /app/web
RUN npm install \
&& npm run build
CMD ["npm", "run", "start"]
The Dockerfile above uses the ARG directive to create a variable with a default value of dev.
The value of NODE_ENV can then be overridden when running docker build.
Like so:
docker build -t <myimage> --build-arg NODE_ENV=production .
Whether you override it or not, NODE_ENV will be available to webpack before the client is built. This allows you to build a single image and distribute it to many people without them having to build the web client.
Hopefully this helps you out.
In our project, we have an ASP.NET Core project with an Angular2 client. At Docker build time, we launch:
FROM microsoft/dotnet:latest
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN apt-get -qq update ; apt-get -qqy --no-install-recommends install \
git \
unzip
RUN curl -sL https://deb.nodesource.com/setup_7.x | bash -
RUN apt-get install -y nodejs build-essential
RUN ["dotnet", "restore"]
RUN npm install
RUN npm run build:prod
RUN ["dotnet", "build"]
EXPOSE 5000/tcp
ENV ASPNETCORE_URLS http://*:5000
ENTRYPOINT ["dotnet", "run"]
Since restoring the npm packages is essential to be able to build the Angular2 client using npm run build, our Docker image is HUGE, almost 2 GB, even though the built Angular2 client is only 1.7 MB itself.
Our app does nothing fancy: simple web API writing to MongoDB and displaying static files.
In order to improve the size of our image, is there any way to exclude paths which are useless at run time? For example node_modules or any .NET Core sources?
dotnet restore may fetch a lot, especially if you have multiple target platforms (Linux, Mac, Windows).
Depending on how your application is configured (i.e. as portable .NET Core app or as self-contained), it can also pull the whole .NET Core Framework for one, or multiple platforms and/or architectures (x64, x86). This is mainly explained here.
When "Microsoft.NETCore.App" : "1.0.0" is defined, without the type platform, then then complete framework will be fetched via nuget. Then if you have multiple runtimes defined
"runtimes": {
"win10-x64": {},
"win10-x86": {},
"osx.10.10-x86": {},
"osx.10.10-x64": {}
}
it will get native libraries for all these platforms too, and not only into your project directory but also into ~/.nuget, plus the npm cache in addition to node_modules in your project, plus eventual copies in your wwwdata.
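As an aside, declaring the framework as a platform dependency, so that it is not pulled into the publish output, would look like this in a 1.0.0-era project.json (a sketch):
"dependencies": {
  "Microsoft.NETCore.App": {
    "version": "1.0.0",
    "type": "platform"
  }
}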
However, Docker does not keep these caches outside your application the way a normal machine would: everything you execute inside the Dockerfile is written to the virtual filesystem of the container. That's why you see these issues.
You should follow my previous comment on your other question:
Run dotnet restore, dotnet build, and dotnet publish outside the Dockerfile, for example in a bash or PowerShell/batch script.
Once finished, copy the content of the publish folder into your container with
dotnet publish
docker build bin\Debug\netcoreapp1.0\publish ... (your other parameters here)
This will generate the publish files on your file system, containing only the required DLL files, Views, and wwwroot content, without all the other build files, artifacts, caches, or sources, and will run the Docker build from the bin\Debug\netcoreapp1.0\publish folder.
You also need to change your Dockerfile to copy the files instead of running the build commands during container building.
Scott uses this Dockerfile for his example in his blog:
# Your base image here
FROM ...
# Application to run
ENTRYPOINT ["dotnet", "YourWebAppName.dll"]
# An argument from outside; here, the path on the real filesystem
ARG source=.
WORKDIR /app
# Define the port the app should listen on
ENV ASPNETCORE_URLS http://+:82
EXPOSE 82
# Copy the files from the defined folder (here bin\Debug\netcoreapp1.0\publish) into the container
COPY $source .
This is the recommended approach for building docker containers. When you run the build commands inside, all the build and publish artifacts remain in the virtual file system and the docker image grows unexpectedly.
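If your Docker version supports multi-stage builds, the same separation can also be done entirely inside Docker, similar to the first answer above. A rough sketch (image tags, project name, and the omitted npm/Angular steps are assumptions):
# build stage: restore and publish with the full SDK image
FROM microsoft/dotnet:latest AS builder
WORKDIR /app
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /publish
# run stage: only the publish output, no SDK, sources, or caches
FROM microsoft/dotnet:runtime
WORKDIR /app
COPY --from=builder /publish .
EXPOSE 5000
ENV ASPNETCORE_URLS http://*:5000
ENTRYPOINT ["dotnet", "YourWebAppName.dll"]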