Dockerized SvelteKit app: Hot reload not working - docker

With help from the SO community I was finally able to dockerize my SvelteKit app and access it from the browser (this was an issue initially). So far so good, but now every time I make a code change I need to rebuild and redeploy my container, which obviously is not acceptable. Hot reload is not working; I've tried multiple things I've found online but none of them have worked so far.
Here's my Dockerfile:
FROM node:19-alpine
# Set the Node environment to development to ensure all packages are installed
ENV NODE_ENV development
# Change our current working directory
WORKDIR /app
# Copy over `package.json` and lock files to optimize the build process
COPY package.json package-lock.json ./
# Install Node modules
RUN npm install
# Copy over rest of the project files
COPY . .
# Perhaps we need to build it for production, but apparently it is not needed to run the dev script.
# RUN npm run build
# Expose port 3000 for the SvelteKit app and 24678 for Vite's HMR
EXPOSE 3333
EXPOSE 8080
EXPOSE 24678
CMD ["npm", "run", "dev"]
My docker-compose:
version: "3.9"
services:
dmc-web:
build:
context: .
dockerfile: Dockerfile
container_name: dmc-web
restart: always
ports:
- "3000:3000"
- "3010:3010"
- "8080:8080"
- "5050:5050"
- "24678:24678"
volumes:
- ./:/var/www/html
the scripts from my package.json:
"scripts": {
"dev": "vite dev --host 0.0.0.0",
"build": "vite build",
"preview": "vite preview",
"test": "playwright test",
"lint": "prettier --check . && eslint .",
"format": "prettier --write ."
},
and my vite.config.js:
import { sveltekit } from '@sveltejs/kit/vite';
import { defineConfig } from 'vite';

export default defineConfig({
  plugins: [sveltekit()],
  server: {
    watch: {
      usePolling: true,
    },
    host: true, // needed for the DC port mapping to work
    strictPort: true,
    port: 8080,
  }
});
Any idea what I am missing? I can reach my app at http://localhost:8080 but cannot get the app to reload when a code change happens.
Thanks.

Solution
The workspace in question does not work simply because it does not bind-mount the source directory. Other than that, it has no problem whatsoever.
Here's working code at my github:
https://github.com/rabelais88/stackoverflow-answers/tree/main/74680419-svelte-docker-HMR
1. Proper bind mount in docker-compose
The docker-compose.yaml from the question only mounts the result of the previous build, not the current source files.
# 🚨 wrong
volumes:
  - ./:/var/www/html

# ✅ answer
volumes:
  # Avoid mounting the workspace root, because that would also mount the
  # OS-specific node_modules folder and the build folder (.svelte-kit),
  # which conflict with the temporary results produced inside the container.
  # This is why many monorepos keep their code under ./src.
  - ./src:/home/node/app/src
  - ./static:/home/node/app/static
  - ./vite.config.js:/home/node/app/vite.config.js
  - ./tsconfig.json:/home/node/app/tsconfig.json
  - ./svelte.config.js:/home/node/app/svelte.config.js
2. The Dockerfile should not include the source copy and command
A Dockerfile does not always have to include a CMD. It is necessary when 1) the result has to be preserved, or 2) the process lifecycle is critical to the image. In this case, 1) the result is uncertain because the source may not be complete at the moment the container boots, and 2) the process lifecycle is not really important because the user may manually start or stop the container. The local development environment for VSCode + Docker, a.k.a. the VSCode devcontainer, also uses a sleep infinity command for this reason.
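For illustration, a compose service that is kept alive without the Dockerfile defining a CMD could look roughly like this (a minimal sketch; the service name svelte matches the compose example later in this answer):
# docker-compose.yaml (sketch)
services:
  svelte:
    build: .
    # same trick VSCode devcontainers use: keep the container alive,
    # then run npm scripts via `docker compose exec svelte npm run dev`
    command: sleep infinity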
As mentioned above, the code should not be copied into the image because it would conflict with the bind-mounted files. To avoid such a collision, remove the final COPY and the CMD from the Dockerfile and add the command to docker-compose.yaml instead:
# dockerfile
# 🚨wrong
COPY package.json package-lock.json ./
RUN npm install
COPY . .
# ...
CMD ["npm", "run", "dev"]
# ✅answer
COPY package*.json ./
RUN npm install
# comment out COPY and CMD
# COPY . .
# ...
# CMD ["npm", "run", "dev"]
and add command to docker-compose
# docker-compose.yaml
services:
  svelte:
    # ...
    command: npm run dev
The rest of the configs in the question are not necessary. You can check this out in my working demo on GitHub.
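Putting the answer's pieces together, the development service could end up looking roughly like this (a sketch only; the service name dmc-web and port 8080 come from the question, the bind mounts from this answer):
# docker-compose.yaml (sketch)
services:
  dmc-web:
    build:
      context: .
      dockerfile: Dockerfile
    command: npm run dev
    ports:
      - "8080:8080"   # vite.config.js serves on 8080
      - "24678:24678" # Vite HMR websocket
    volumes:
      - ./src:/home/node/app/src
      - ./static:/home/node/app/static
      - ./vite.config.js:/home/node/app/vite.config.js
      - ./tsconfig.json:/home/node/app/tsconfig.json
      - ./svelte.config.js:/home/node/app/svelte.config.js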
Edit
I just did this, but when running it I'm getting Error: Cannot find module '/app/npm dev'.
The answer uses arbitrary settings. The volumes and CMD may have to be changed accordingly.
i.e.)
# docker-compose.yaml
volumes:
- ./src:/$YOUR_APP_DIR/src
- ./static:/$YOUR_APP_DIR/static
# ...
I've used /home/node/app as the WORKDIR because /home/node is the main working directory of the official Node Docker image. However, it is not necessary to use the same folder. If you're going to use /home/node/app, make sure to create the folder before use:
RUN mkdir -p /home/node/app
WORKDIR /home/node/app

Why are you using docker for local development? Check this https://stackoverflow.com/a/70159286/3957754
Node.js works very well even on Windows, so my advice is to develop directly on the host with a simple Node.js installation. Your hot reload should work.
Docker is for your test, staging, or production servers, where you don't want hot reload because real users are using your web app. Hot reload is only for local development.
container process
When a Docker container starts, it is tied to a live, foreground process. If this process ends or is restarted, the entire container exits. Check this https://stackoverflow.com/a/68593731/3957754
That's why Docker's goal is not related to hot reload at the source-code level.
anyway
Anyway, if setting up the workspace (Node.js, git, etc.) on your machine is too complex, you could use Docker with Node.js hot reload by following these steps:
Don't use a Dockerfile; use a plain docker run ... instead. You are working with Node.js, not with C#, PHP, or similar follies.
At the root of your workspace (package.json) execute this
docker run -it -p 8080:8080 -v $(pwd):/src node:19-alpine sh
The previous command creates a container not tied to a TCP process, so you get a new sub-shell with Node.js 19 on Alpine ready to use.
Execute the classic
cd /src
npm install
npm run dev
If your app works fine, hot reload should work. If it doesn't, try it without Docker and share the result with us.
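If you prefer not to keep an interactive shell open, the same steps can be collapsed into a single command (a sketch based on the steps above; Alpine images ship with sh, not bash):
docker run -it --rm -p 8080:8080 -v $(pwd):/src -w /src node:19-alpine \
  sh -c "npm install && npm run dev"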

Related

Nuxt 3 Docker doesn't recognize new pages, what am I doing wrong?

I have a problem with my Nuxt 3 project that I run with Docker (dev environment).
Nuxt 3 should automatically create routes when I create .vue files in the pages directory, and that works when I run my project outside of Docker, but when I use Docker it doesn't recognize my files until I restart the container. The same thing happens when I try to delete files from the pages directory: it doesn't recognize any changes until I restart the container. The weird thing is that this happens only in the pages directory; in other directories everything works fine. Just to mention, hot reload works; I set up Vite in nuxt.config.ts.
docker-compose.yaml
version: '3.8'
services:
  nuxt:
    build:
      context: .
    image: nuxt_dev
    container_name: nuxt_dev
    command: npm run dev
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3000:3000"
      - "24678:24678"
Dockerfile:
FROM node:16.14.2-alpine
WORKDIR /app
RUN apk update && apk upgrade
RUN apk add git
COPY ./package*.json /app/
RUN npm install && npm cache clean --force && npm run build
COPY . .
ENV PATH ./node_modules/.bin/:$PATH
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=3000
EXPOSE 3000
CMD ["npm", "run", "dev"]
I tried some things with Docker volumes, like adding a separate volume just for pages, like this:
./pages:app/pages
/pages:app/pages
app/pages
but as I thought, none of those things helped.
One more thing that is weird to me: when I created a .vue file in the pages directory, I checked whether it appeared in the container, and it did. I'm not an expert in Docker or Nuxt, I just started to learn, so any help would be much appreciated.

Dockerfile for angular development not updating node_modules

I'm using the following Dockerfile for development of an Angular project:
FROM node:18-alpine
WORKDIR /code
COPY package*.json /code/
RUN npm ci --quiet
It gets started with docker compose. My code folder is mounted as a volume so the development server inside the container detects changes when editing and keeps live updates going:
version: "3"
services:
ui:
build: ./PathOnHostWithProjectRepo
command: sh -c "npm start"
ports:
- 4200:4200
volumes:
- ./PathOnHostWithProjectRepo:/code
- node_modules:/code/node_modules
volumes:
node_modules:
node_modules gets created when the image is created and, to my understanding, would only update if my package.json is changed. However, today I updated package.json with a new dependency and it is not being installed inside of the volume. I have tried everything I can think of. docker compose down, docker system prune -a -f, and rebuilding. Every time the container starts there is an error that it cannot find the new dependency added. If I step into the container and inspect the node_modules folder the library isn't there. It is present on my host machine if I run npm install locally without Docker, so I know the package and imports must be correct.
With this setup your node_modules will never be updated. Docker will completely ignore any changes in your package.json file. You've told it that directory contains user data that must not be modified.
For the setup you show you don't need Docker at all. It's straightforward to install Node, and OS package managers like Debian/Ubuntu APT or macOS Homebrew generally have a prepackaged version. If you use Node directly then you won't have problems like this; everything will work normally.
If you must use Docker here, the most straightforward thing to do is to make sure all of your application code is in a subdirectory; then you can mount only the subdirectory containing the code and leave the image's node_modules directory intact.
$ ls -F
Dockerfile
docker-compose.yml
node_modules/
package.json
package-lock.json
src/
# Dockerfile
FROM node:lts
WORKDIR /code
COPY package*.json ./
RUN npm ci
COPY src/ ./src/
# RUN npm build
CMD ["npm", "start"]
# docker-compose.yml
version: '3.8'
services:
  ui:
    build: .
    ports:
      - '4200:4200'
    volumes:
      - ./src:/code/src
Mounting only the src subdirectory avoids the trouble of storing node_modules in a named volume (or an anonymous one). If you change your package.json file you will need to re-run docker-compose build, but since you're using the library tree directly from the image, it will in fact get updated.
If you're going to deploy this image somewhere, remember to delete the volumes: block during your local integration testing so that you're actually running the image you're going to deploy, and not a hybrid of an image and your potentially-modified local code.
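One common way to keep that separation is Docker Compose's override-file mechanism (a sketch; the file names are the Compose defaults, not something from the question): put the deployable definition in docker-compose.yml and the development-only bind mount in docker-compose.override.yml, which a plain docker-compose up picks up automatically.
# docker-compose.yml (deployable definition only)
version: '3.8'
services:
  ui:
    build: .
    ports:
      - '4200:4200'

# docker-compose.override.yml (local development only)
services:
  ui:
    volumes:
      - ./src:/code/src
Running docker-compose -f docker-compose.yml up ignores the override file, so your integration test runs exactly the image you will deploy.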

Container exited with code 0, and my app is served from the host OS

I want to dockerize a Next.js project.
I am using Ubuntu 20.04
I first created a Next.js app in my /home/user/project/ folder using npx create-next-app
So I have the project source code in my host machine.
But I want to dockerize it, so I created a docker-compose.yaml:
next:
  build:
    context: ./next
    dockerfile: Dockerfile
  container_name: next
  volumes:
    - ./next:/var/www/html
  ports:
    - "3000:3000"
  networks:
    - nginx
And this is the Dockerfile:
#Creates a layer from node:alpine image.
FROM node:alpine
#Creates directories
RUN mkdir -p /usr/src/app
#Sets an environment variable
ENV PORT 3000
#Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD commands
WORKDIR /usr/src/app
#Copy new files or directories into the filesystem of the container
COPY package.json /usr/src/app
COPY package-lock.json /usr/src/app
#Execute commands in a new layer on top of the current image and commit the results
RUN npm install
##Copy new files or directories into the filesystem of the container
COPY . /usr/src/app
#Execute commands in a new layer on top of the current image and commit the results
RUN npm run build
#Informs container runtime that the container listens on the specified network ports at runtime
EXPOSE 3000
#Allows you to configure a container that will run as an executable
ENTRYPOINT ["npm", "run"]
Then I build my container using docker-compose build && docker-compose up.
The container is built, but it's not running; it shows EXITED (0) and the logs have the following message:
Lifecycle scripts included in next-frontend@0.1.0:
  start
    next start
available via `npm run-script`:
  dev
    next dev
  build
    next build
  lint
    next lint
But of course if I run npm run dev on the host, it will run the app from the host, not from the container (it runs, but that's not what I want).
I feel like there is some very fundamental mistake in my deployment, but I just started with Docker so I can't figure out what.
Also, I copied the Dockerfile from a tutorial so it might not fit the way I created the project
ENTRYPOINT ["npm", "run"]... What?
From npm run documentation,
This runs an arbitrary command from a package's "scripts" object. If no "command" is provided, it will list the available scripts.
In the docker-compose.yml, you need to override the CMD instruction (that is empty in your case) with the npm script you want to run. Something like this:
next:
  build:
    context: ./next
    dockerfile: Dockerfile
  container_name: next
  command: ["start"]
  volumes:
    - ./next:/var/www/html
  ports:
    - "3000:3000"
  networks:
    - nginx
Since you are using the Compose Spec, this is the reference for the command instruction.
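As a side note, you can also override the command ad hoc without editing the file, which is handy for trying the other npm scripts (a sketch, not part of the original answer):
# runs `npm run start` once, publishing the service's ports, then removes the container
docker-compose run --rm --service-ports next start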

Install node_modules inside Docker container and synchronize them with host

I have the problem with installing node_modules inside the Docker container and synchronize them with the host. My Docker's version is 18.03.1-ce, build 9ee9f40 and Docker Compose's version is 1.21.2, build a133471.
My docker-compose.yml looks like:
# Frontend Container.
frontend:
  build: ./app/frontend
  volumes:
    - ./app/frontend:/usr/src/app
    - frontend-node-modules:/usr/src/app/node_modules
  ports:
    - 3000:3000
  environment:
    NODE_ENV: ${ENV}
  command: npm start

# Define all the external volumes.
volumes:
  frontend-node-modules: ~
My Dockerfile:
# Set the base image.
FROM node:10
# Create and define the working directory.
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# Install the application's dependencies.
COPY package.json ./
COPY package-lock.json ./
RUN npm install
The trick with the external volume is described in a lot of blog posts and Stack Overflow answers. For example, this one.
The application works great. The source code is synchronized. The hot reloading works great too.
The only problem that I have is that node_modules folder is empty on the host. Is it possible to synchronize the node_modules folder that is inside Docker container with the host?
I've already read these answers:
docker-compose volume on node_modules but is empty
Accessing node_modules after npm install inside Docker
Unfortunately, they didn't help me a lot. I don't like the first one, because I don't want to run npm install on my host because of the possible cross-platform issues (e.g. the host is Windows or Mac and the Docker container is Debian 8 or Ubuntu 16.04). The second one is not good for me too, because I'd like to run npm install in my Dockerfile instead of running it after the Docker container is started.
Also, I've found this blog post. The author tries to solve the same problem I am faced with. The problem is that node_modules won't be synchronized because we're just copying them from the Docker container to the host.
I'd like my node_modules inside the Docker container to be synchronized with the host. Please, take into account that I want:
to install node_modules automatically instead of manually
to install node_modules inside the Docker container instead of the host
to have node_modules synchronized with the host (if I install some new package inside the Docker container, it should be synchronized with the host automatically without any manual actions)
I need to have node_modules on the host, because:
possibility to read the source code when I need
the IDE needs node_modules to be installed locally so that it could have access to the devDependencies such as eslint or prettier. I don't want to install these devDependencies globally.
At first, I would like to thank David Maze and trust512 for posting their answers. Unfortunately, they didn't help me to solve my problem.
I would like to post my answer to this question.
My docker-compose.yml:
---
# Define Docker Compose version.
version: "3"

# Define all the containers.
services:
  # Frontend Container.
  frontend:
    build: ./app/frontend
    volumes:
      - ./app/frontend:/usr/src/app
    ports:
      - 3000:3000
    environment:
      NODE_ENV: development
    command: /usr/src/app/entrypoint.sh
My Dockerfile:
# Set the base image.
FROM node:10
# Create and define the node_modules's cache directory.
RUN mkdir /usr/src/cache
WORKDIR /usr/src/cache
# Install the application's dependencies into the node_modules's cache directory.
COPY package.json ./
COPY package-lock.json ./
RUN npm install
# Create and define the application's working directory.
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
And last but not least entrypoint.sh:
#!/bin/bash
cp -r /usr/src/cache/node_modules/. /usr/src/app/node_modules/
exec npm start
The trickiest part here is installing the node_modules into the node_modules cache directory (/usr/src/cache), which is defined in our Dockerfile. After that, entrypoint.sh copies the node_modules from the cache directory (/usr/src/cache) to our application directory (/usr/src/app). Thanks to this, the entire node_modules directory will appear on our host machine.
Looking at my question above I wanted:
to install node_modules automatically instead of manually
to install node_modules inside the Docker container instead of the host
to have node_modules synchronized with the host (if I install some new package inside the Docker container, it should be synchronized with the host automatically without any manual actions)
The first thing is done: node_modules are installed automatically. The second thing is done too: node_modules are installed inside the Docker container (so, there will be no cross-platform issues). And the third thing is done too: node_modules that were installed inside the Docker container will be visible on our host machine and they will be synchronized! If we install some new package inside the Docker container, it will be synchronized with our host machine at once.
The important thing to note: strictly speaking, a new package installed inside the Docker container will appear in /usr/src/app/node_modules. As this directory is synchronized with our host machine, the new package will appear in our host machine's node_modules directory too. But /usr/src/cache/node_modules will still have the old build at this point (without the new package). Anyway, it is not a problem for us. During the next docker-compose up --build (--build is required), Docker will re-install the node_modules (because package.json was changed) and the entrypoint.sh file will copy them to /usr/src/app/node_modules.
You should take into account one more important thing. If you git pull the code from the remote repository or git checkout your-teammate-branch when Docker is running, there may be some new packages added to the package.json file. In this case, you should stop the Docker with CTRL + C and up it again with docker-compose up --build (--build is required). If your containers are running as a daemon, you should just execute docker-compose stop to stop the containers and up it again with docker-compose up --build (--build is required).
If you have any questions, please let me know in the comments.
Having run into this issue and finding the accepted answer pretty slow to copy all node_modules to the host on every container run, I managed to solve it by installing the dependencies in the container, mirroring the host volume, and skipping the install if a node_modules folder is already present:
Dockerfile:
FROM node:12-alpine
WORKDIR /usr/src/app
CMD [ -d "node_modules" ] && npm run start || npm ci && npm run start
docker-compose.yml:
version: '3.8'
services:
  service-1:
    build: ./
    volumes:
      - ./:/usr/src/app
When you need to reinstall the dependencies just delete node_modules.
A Simple, Complete Solution
You can install node_modules in the container using the external named volume trick and synchronize it with the host by configuring the volume's storage location to point to your host's node_modules directory. This can be done with a named volume using the local driver and a bind mount, as seen in the example below.
The volume's data is stored on your host anyway, in something like /var/lib/docker/volumes/, so we're just storing it inside your project instead.
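If you want to confirm where Docker keeps a named volume's data on the host, you can inspect it; these are generic Docker commands shown only as an illustration, and the volume name (here myproject_node_modules) depends on your Compose project name:
docker volume ls
docker volume inspect --format '{{ .Mountpoint }}' myproject_node_modules
# typically prints something like /var/lib/docker/volumes/myproject_node_modules/_data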
To do this in Docker Compose, just add your node_modules volume to your front-end service, and then configure the volume in the named volumes section, where "device" is the relative path (from the location of docker-compose.yml) to your local (host) node_modules directory.
docker-compose.yml
version: '3.9'
services:
  ui:
    # Your service options...
    volumes:
      - node_modules:/path/to/node_modules

volumes:
  node_modules:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./local/path/to/node_modules
The key with this solution is to never make changes directly in your host node_modules, but always install, update, or remove Node packages in the container.
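In practice that means running npm through the running service rather than on the host, for example (a sketch; ui is the service name from the compose example above, and lodash is just a stand-in package name):
# install a dependency from inside the running container; because the named
# volume is bind-backed, it also shows up in ./local/path/to/node_modules on the host
docker-compose exec ui npm install lodash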
Version Control Tip:
When your package.json/package-lock.json files change, either when pulling or switching branches, then in addition to rebuilding the image you have to remove the volume and delete its contents:
docker volume rm example_node_modules
rm -rf local/path/to/node_modules
mkdir local/path/to/node_modules
Documentation:
https://docs.docker.com/storage/volumes/
https://docs.docker.com/storage/bind-mounts/
https://docs.docker.com/compose/compose-file/compose-file-v3/#driver_opts
There are three things going on here:
When you run docker build or docker-compose build, your Dockerfile builds a new image containing a /usr/src/app/node_modules directory and a Node installation, but nothing else. In particular, your application isn't in the built image.
When you docker-compose up, the volumes: ['./app/frontend:/usr/src/app'] directive hides whatever was in /usr/src/app and mounts host system content on top of it.
Then the volumes: ['frontend-node-modules:/usr/src/app/node_modules'] directive mounts the named volume on top of the node_modules tree, hiding the corresponding host system directory.
If you were to launch another container and attach the named volume to it, I expect you'd see the node_modules tree there. For what you're describing you just don't want the named volume: delete the second line from the volumes: block and the volumes: section at the end of the docker-compose.yml file.
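To see that for yourself, you can mount the named volume into a throwaway container and list its contents (a sketch; Compose usually prefixes the volume name with your project directory name, so check docker volume ls for the exact name):
docker run --rm -v myproject_frontend-node-modules:/data alpine ls /data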
No one has mentioned solution with actually using docker's entrypoint feature.
Here is my working solution:
Dockerfile (multistage build, so it is both production and local dev ready):
FROM node:10.15.3 as production
WORKDIR /app
COPY package*.json ./
RUN npm install && npm install --only=dev
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
FROM production as dev
COPY docker/dev-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["dev-entrypoint.sh"]
CMD ["npm", "run", "watch"]
docker/dev-entrypoint.sh:
#!/bin/sh
set -e

npm install && npm install --only=dev ## Note this line, rest is copy+paste from original entrypoint

if [ "${1#-}" != "${1}" ] || [ -z "$(command -v "${1}")" ]; then
  set -- node "$@"
fi

exec "$@"
docker-compose.yml:
version: "3.7"
services:
web:
build:
target: dev
context: .
volumes:
- .:/app:delegated
ports:
- "3000:3000"
restart: always
environment:
NODE_ENV: dev
With this approach you achieve all 3 points you required, and IMHO it is a much cleaner way - no need to move files around.
Binding your host node_modules folder with your container node_modules is not a good practice as you mention. I have seen the solution of creating an internal volume for this folder quite often. Not doing so will cause problems during the building stage.
I ran into this problem when I was trying to build a Docker development environment for an Angular app, which showed tslib errors when I was editing files within my host folder because my host's node_modules folder was empty (as expected).
The cheap solution that helps me, in this case, was to use the Visual Studio Code Extension called "Remote-Containers".
This extension will allow you to attach your Visual Studio Code to your container and edit your files transparently within your container folders. To do so, it will install an internal vscode server within your development container. For more information check this link.
Ensure, however, that your volumes are still created in your docker-compose.yml file.
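For reference, a minimal devcontainer configuration for this kind of setup might look like the following (a sketch only; the service and path names are assumptions based on the compose file in the question):
// .devcontainer/devcontainer.json
{
  "name": "frontend",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "frontend",
  "workspaceFolder": "/usr/src/app"
}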
I hope it helps :D!
I wouldn't suggest overlapping volumes, although I haven't seen any official docs ban it, I've had some issues with it in the past. How I do it is:
Get rid of the external volume, as you are not planning on using it the way it's meant to be used - respawning the container with data created specifically inside the container after stopping and removing it.
The above might be achieved by shortening your compose file a bit:
frontend:
  build: ./app/frontend
  volumes:
    - ./app/frontend:/usr/src/app
  ports:
    - 3000:3000
  environment:
    NODE_ENV: ${ENV}
  command: npm start
Avoid overlapping volume data with Dockerfile instructions when not necessary.
That means you might need two Dockerfiles - one for local development and one for deploying a fat image with all the application dist files layered inside.
That said, consider a development Dockerfile:
FROM node:10
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install
The above makes the application create a full node_modules installation and map it to your host location, while the docker-compose specified command would start your application off.
I'm not sure I understand why you want your source code to live inside both the container and the host, bind-mounted to each other, during development. Usually, you want your source code to live inside the container for deployments, not development, since during development the code is available on your host and bind-mounted.
Your docker-compose.yml
frontend:
  volumes:
    - ./app/frontend:/usr/src/app
Your Dockerfile
FROM node:10
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
Of course you must run npm install the first time and every time package.json changes, but you run it inside the container so there is no cross-platform issue: docker-compose exec frontend npm install
Finally, start your server: docker-compose exec frontend npm start
And then later, usually in a CI pipeline targeting a deployment, you build your final image with the whole source code copied and node_modules reinstalled, but of course at this point you no longer need the bind mount and "synchronization", so your setup could look like:
docker-compose.yml
frontend:
  build:
    context: ./app/frontend
    target: dev
  volumes:
    - ./app/frontend:/usr/src/app
Dockerfile
FROM node:10 as dev
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
FROM dev as build
COPY package.json package-lock.json ./
RUN npm install
COPY . ./
CMD ["npm", "start"]
And you target the build stage of your Dockerfile later, either manually or during a pipeline, to build your deployment-ready image.
I know it's not the exact answer to your questions since you have to run npm install and nothing lives inside the container during development, but it solves your node_modules issue, and I feel like your questions are mixing development and deployment considerations, so maybe you thought about this problem in the wrong way.
The best for development
docker-compose.yml
...
  frontend:
    build: ./app/frontend
    ports:
      - 3000:3000
    volumes:
      - ./app/frontend:/usr/src/app
...
./app/frontend/Dockerfile
FROM node:lts
WORKDIR /usr/src/app
RUN npm install -g react-scripts
RUN chown -Rh node:node /usr/src/app
USER node
EXPOSE 3000
CMD [ "sh", "-c", "npm install && npm run start" ]
#FOR PROD
# CMD [ "sh", "-c", "npm install && npm run build" ]
The node user will help you with file permissions between host and guest.
The node_modules folder will be accessible from the host and kept synchronized between host and guest.
Thanks to Vladyslav Turak for the answer with entrypoint.sh where we copy node_modules from the container to the host.
I implemented a similar thing but ran into an issue with the husky, @commitlint, and tslint npm packages.
I couldn't push anything into the repository.
Reason: I copied node_modules from Linux to Windows. In my case <5% of the files are different (.bin and most of package.json) and 95% are the same. Example: image with diff
So I returned to the solution of running npm install for node_modules on Windows first (for the IDE and debugging), while the Docker image contains the Linux version of node_modules.
I know that this was resolved, but what about:
Dockerfile:
FROM node
# Create app directory
WORKDIR /usr/src/app
# Your other staffs
EXPOSE 3000
docker-compose.yml:
version: '3.2'
services:
  api:
    build: ./path/to/folder/with/a/dockerfile
    volumes:
      - "./volumes/app:/usr/src/app"
    command: "npm start"
volumes/app/package.json
{
  ...,
  "scripts": {
    "start": "npm install && node server.js"
  },
  "dependencies": {
    ....
  }
}
After running, node_modules will be present in your volume, but its contents are generated within the container, so there are no cross-platform problems.
My workaround is to install dependencies when the container is starting instead of during build-time.
Dockerfile:
# We're using a multi-stage build so that we can install dependencies during build-time only for production.
# dev-stage
FROM node:14-alpine AS dev-stage
WORKDIR /usr/src/app
COPY package.json ./
COPY . .
# `yarn install` will run every time we start the container. We're using yarn because it's much faster than npm when there's nothing new to install
CMD ["sh", "-c", "yarn install && yarn run start"]
# production-stage
FROM node:14-alpine AS production-stage
WORKDIR /usr/src/app
COPY package.json ./
RUN yarn install
COPY . .
.dockerignore
Add node_modules to .dockerignore to prevent it from being copied when the Dockerfile runs COPY . .. We use volumes to bring in node_modules.
**/node_modules
docker-compose.yml
node_app:
  container_name: node_app
  build:
    context: ./node_app
    target: dev-stage # `production-stage` for production
  volumes:
    # For development:
    # If node_modules already exists on the host, they will be copied
    # into the container here. Since `yarn install` runs after the
    # container starts, this volume won't override the node_modules.
    - ./node_app:/usr/src/app
    # For production:
    #
    - ./node_app:/usr/src/app
    - /usr/src/app/node_modules
You could also use dockerized npm install. This is the same as npm install but it runs on a docker container.
https://github.com/datastack-net/dockerized
The node_modules will be written to the host. It should work out of the box and you can specify which npm version to use. If needed, the container can be extended or customized.
Be aware that some npm packages may require compilation, and the generated binaries may not be compatible with your host machine. If you just need the source code or dist files, this is not an issue.
Disclaimer: I'm the author of Dockerized.
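If you don't want an extra tool, the plain-Docker equivalent of the same idea looks roughly like this (a generic sketch, not part of the Dockerized project):
# run npm install inside a container, writing node_modules into the host's current directory
docker run --rm -v "$(pwd)":/app -w /app node:lts npm install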

Use Docker to run a build process

I'm using docker and docker-compose to set up a build pipeline. I've got a front-end that's written in javascript and needs to be built before being used. The backend is written in go.
To make this component integrate with the rest of our docker-compose setup, I want to do the building in a docker image as well.
This is the flow I'm going for:
during build do:
build the frontend stuff and put it in /output (which is bound to the output volume)
build the backend server
when running do:
run the server, it has access to the build files in /output
I'm quite new to docker and docker-compose so I'm not sure if this is possible, or even the right thing to do.
For reference, here's my docker-compose.yml:
version: '2'

volumes:
  output:
    driver: local

services:
  frontend:
    build: .
    volumes:
      - output:/output
  backend:
    build: ./backend
    depends_on:
      - frontend
    volumes:
      - output:/output
and Dockerfile:
FROM node
# create working dir
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD package.json /usr/src/app/package.json
# install packages
RUN npm install
COPY . /usr/src/app
# build frontend files and place results in /output
RUN npm build
RUN cp /usr/src/app/build/* /output
And backend/Dockerfile:
FROM go
# copy and build server
COPY . /usr/src/backend
WORKDIR /usr/src/backend
RUN go build
# run the server
ENTRYPOINT ["/usr/src/backend/main"]
Something is wrong here, but I do not know what. It seems as though the output of the build step is not persisted in the output volume. What can I do to fix this?
You cannot attach a volume during docker build.
The reason for this is that the goal of the docker build command is to build an image, and nothing else; it doesn't need volumes, since a Dockerfile has ADD / COPY.
To produce your output, you should create a script which mostly does the npm install ; npm build ; cp /usr/src/app/build/* /output from your current dockerfile and use this script as the entrypoint / cmd in your dockerfile.
I'm not sure Compose can orchestrate this, but in any case, I find it clearer wrapped in a shell script that first executes the frontend builder container and then executes the backend container with the output directory as a volume.
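A rough sketch of that approach (file names such as build.sh are assumptions, not something from the question):
#!/bin/sh
# build.sh - runs when the frontend container starts, i.e. after /output has been mounted
set -e
npm install
npm run build
cp -r /usr/src/app/build/* /output/

# Dockerfile (frontend) - no longer tries to write to /output at image-build time
FROM node
WORKDIR /usr/src/app
COPY . /usr/src/app
CMD ["sh", "/usr/src/app/build.sh"]
With that in place, docker-compose up starts the frontend container, which builds into the shared output volume that the backend also mounts.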

Resources