With help from the SO community I was finally able to dockerize my SvelteKit app and access it from the browser (this was an issue initially). So far so good, but now every time I make a code change I need to rebuild and redeploy my container, which is obviously not acceptable. Hot reload is not working; I've tried multiple things I found online but none of them have worked so far.
Here's my Dockerfile:
FROM node:19-alpine
# Set the Node environment to development to ensure all packages are installed
ENV NODE_ENV development
# Change our current working directory
WORKDIR /app
# Copy over `package.json` and lock files to optimize the build process
COPY package.json package-lock.json ./
# Install Node modules
RUN npm install
# Copy over rest of the project files
COPY . .
# Perhaps we need to build it for production, but apparently it is not needed to run the dev script.
# RUN npm run build
# Expose ports for the SvelteKit app (3333, 8080) and for Vite's HMR (24678)
EXPOSE 3333
EXPOSE 8080
EXPOSE 24678
CMD ["npm", "run", "dev"]
My docker-compose:
version: "3.9"
services:
dmc-web:
build:
context: .
dockerfile: Dockerfile
container_name: dmc-web
restart: always
ports:
- "3000:3000"
- "3010:3010"
- "8080:8080"
- "5050:5050"
- "24678:24678"
volumes:
- ./:/var/www/html
the scripts from my package.json:
"scripts": {
"dev": "vite dev --host 0.0.0.0",
"build": "vite build",
"preview": "vite preview",
"test": "playwright test",
"lint": "prettier --check . && eslint .",
"format": "prettier --write ."
},
and my vite.config.js:
import { sveltekit } from '@sveltejs/kit/vite';
import { defineConfig } from 'vite';

export default defineConfig({
  plugins: [sveltekit()],
  server: {
    watch: {
      usePolling: true,
    },
    host: true, // needed for the Docker Compose port mapping to work
    strictPort: true,
    port: 8080,
  },
});
Any idea what I am missing? I can reach my app at http://localhost:8080 but cannot get the app to reload when a code change happens.
Thanks.
Solution
The setup in question does not work simply because it does not bind-mount the source directory. Other than that, it has no problem whatsoever.
Here's working code at my github:
https://github.com/rabelais88/stackoverflow-answers/tree/main/74680419-svelte-docker-HMR
1. Proper bind mount in docker-compose
The docker-compose.yaml from the question mounts the project into /var/www/html, but the app runs from /app, so the dev server inside the container never sees the host's source files.
# 🚨 wrong
volumes:
  - ./:/var/www/html

# ✅ answer
volumes:
  # Avoid mounting the workspace root: that would also mount the
  # OS-specific node_modules folder and the build output (.svelte-kit),
  # which conflict with the versions generated inside the container.
  # This is why many monorepos keep their code under ./src.
  - ./src:/home/node/app/src
  - ./static:/home/node/app/static
  - ./vite.config.js:/home/node/app/vite.config.js
  - ./tsconfig.json:/home/node/app/tsconfig.json
  - ./svelte.config.js:/home/node/app/svelte.config.js
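A common alternative pattern, not used in the linked demo (so treat the paths as assumptions), is to mount the project root and shadow node_modules with an anonymous volume, so the container keeps its own Linux-built dependencies:
volumes:
  # mount the whole project into the container...
  - ./:/home/node/app
  # ...but let an anonymous volume shadow node_modules, so the
  # dependencies installed inside the image are not overwritten
  - /home/node/app/node_modules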
2. The Dockerfile should not copy the source files or define a command
A Dockerfile does not always have to include a CMD. It is necessary when 1) the result has to be preserved, or 2) the process lifecycle is critical to the image. In this case, 1) the result is uncertain because the source may not be complete at the moment the container boots, and 2) the process lifecycle is not really important because the user may manually start or stop the container. The local development environment for VSCode + Docker, a.k.a. the VSCode devcontainer, also uses a sleep infinity command for this reason.
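For illustration, a compose service kept alive this way might look like the following sketch (not taken from the linked repo):
# docker-compose.yaml (sketch)
services:
  dev:
    build: .
    # keeps the container alive without tying it to a server process;
    # the dev server is then started manually, e.g. via docker exec
    # (on Alpine images, tail -f /dev/null is a portable alternative)
    command: sleep infinity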
As mentioned above, the code cannot be copied into the image because it would conflict with the bind-mounted files. To keep the two sets of files from colliding, just remove the COPY . . and CMD instructions from the Dockerfile and add the command in docker-compose.yaml instead:
# dockerfile
# 🚨 wrong
COPY package.json package-lock.json ./
RUN npm install
COPY . .
# ...
CMD ["npm", "run", "dev"]

# ✅ answer
COPY package*.json ./
RUN npm install
# comment out COPY and CMD
# COPY . .
# ...
# CMD ["npm", "run", "dev"]
and add the command to docker-compose:
# docker-compose.yaml
services:
  svelte:
    # ...
    command: npm run dev
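Putting steps 1 and 2 together, the whole docker-compose.yaml might look like this sketch (the /home/node/app paths match the demo's WORKDIR; the ports match the vite.config.js from the question):
# docker-compose.yaml (sketch)
services:
  svelte:
    build: .
    ports:
      - "8080:8080"   # vite dev server (strictPort)
      - "24678:24678" # vite HMR websocket
    volumes:
      - ./src:/home/node/app/src
      - ./static:/home/node/app/static
      - ./vite.config.js:/home/node/app/vite.config.js
      - ./tsconfig.json:/home/node/app/tsconfig.json
      - ./svelte.config.js:/home/node/app/svelte.config.js
    command: npm run dev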
The rest of the configuration in the question is not necessary; you can check this in my working demo on GitHub.
Edit
I just did this, but when running it I'm getting Error: Cannot find module '/app/npm dev'.
The answer uses arbitrary settings; the volumes and command may have to be changed to match your project, e.g.:
# docker-compose.yaml
volumes:
  - ./src:/$YOUR_APP_DIR/src
  - ./static:/$YOUR_APP_DIR/static
# ...
I used /home/node/app as the WORKDIR because /home/node is the main working directory of the official Node Docker image. However, it is not necessary to use the same folder. If you are going to use /home/node/app, make sure to create the folder before use:
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
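Assembled from the pieces above, the dev-oriented Dockerfile then reduces to a sketch like this:
# Dockerfile (sketch)
FROM node:19-alpine
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
# install dependencies inside the image; the source is bind-mounted at run time
COPY package*.json ./
RUN npm install
# no COPY . . and no CMD: docker-compose supplies the command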
Why are you using Docker for local development? Check this: https://stackoverflow.com/a/70159286/3957754
Node.js works very well even on Windows, so my advice is to develop directly on the host with a plain Node.js installation; your hot reload should work.
Docker is for your test, staging or production servers, where you don't want hot reload because real users are using your web app. Hot reload is only for local development.
container process
When a Docker container starts, it is tied to a single live foreground process. If this process ends or is restarted, the entire container stops. Check this https://stackoverflow.com/a/68593731/3957754
That's why Docker's goal is not hot reload at the source-code level.
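For illustration (a hedged example, not from the original answer), a container lives exactly as long as its foreground process:
# the container exits the moment the foreground process exits
docker run --rm alpine sh -c 'echo done'
# a long-lived foreground process keeps the container running
docker run --rm alpine sleep 60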
anyway
Anyway, if getting the workspace tools (Node.js, git, etc.) onto your machine is so complex, you could use Docker with Node.js hot reload by following these steps:
Don't use a Dockerfile; use docker run directly. You are working with Node.js, not with C#, PHP or similar follies.
At the root of your workspace (where package.json lives), execute this:
docker run -it -p 8080:8080 -v "$(pwd)":/src node:19-alpine sh
The previous command creates a container that is not tied to a server process, so you get an interactive sub-shell with Node.js 19 on Alpine ready to use.
Then execute the classic:
cd /src
npm install
npm run dev
If your app works fine there, hot reload should work. If it doesn't, try it without Docker and share the result with us.
I am trying to Dockerize my Node.js application, which uses Firebase Cloud Functions.
This is what my project directory looks like -
When running without using Docker, I follow the following steps to start the server -
cd functions
yarn serve - this script is present in functions/package.json
Now I want to Dockerize this application. The Dockerfile I created looks like this:
FROM node:14-alpine
WORKDIR /api
COPY package*.json ./
RUN npm install
COPY . ./api
CMD ["npm", "run", "serve"]
When I build an image using this and run the container, execution fails (I think it's because of missing files in the container).
Command used to build image -
docker build -t <my-image-name> ../functions
When I look into the container file system, I do not see the file structure I have on my machine.
What are all the changes needed in Dockerfile?
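No answer is quoted for this question, but one likely culprit (an assumption based on the Dockerfile above): with WORKDIR /api, the instruction COPY . ./api places the source in /api/api, one level deeper than npm run serve expects; also, building with ../functions as the context means the Dockerfile must live inside functions/ (or be passed with -f). A corrected sketch:
# Dockerfile (sketch, assuming it sits inside functions/)
FROM node:14-alpine
WORKDIR /api
COPY package*.json ./
RUN npm install
# copy the source into the WORKDIR itself, not into /api/api
COPY . .
CMD ["npm", "run", "serve"]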
I have a Grails app running in Docker, and I was trying to add the Apache Derby server to the same image using a Docker multi-stage build. But when I add Derby, the Grails app doesn't run.
So I started with this:
$ cat build/docker/Dockerfile
FROM azul/zulu-openjdk:13.0.3
EXPOSE 8080
VOLUME ["/AppData/derby"]
WORKDIR /app
COPY holder-0.1.jar application.jar
COPY app-entrypoint.sh app-entrypoint.sh
RUN chmod +x app-entrypoint.sh
RUN apt-get update && apt-get install -y dos2unix && dos2unix app-entrypoint.sh
ENTRYPOINT ["/app/app-entrypoint.sh"]
So far, so good: this starts Grails as a web server, and I can connect to the web app. But then I added Derby...
FROM azul/zulu-openjdk:13.0.3
EXPOSE 8080
VOLUME ["/AppData/derby"]
WORKDIR /app
COPY holder-0.1.jar application.jar
COPY app-entrypoint.sh app-entrypoint.sh
RUN chmod +x app-entrypoint.sh
RUN apt-get update && apt-get install -y dos2unix && dos2unix app-entrypoint.sh
ENTRYPOINT ["/app/app-entrypoint.sh"]
FROM datagrip/derby-server
WORKDIR /derby
Now when I start the container, Derby runs, but the Grails app doesn't run at all. This is obvious from what is printed on the terminal, but I also logged in and ran ps aux to verify it.
Now I suppose I could look into creating my own startup script to start the Derby server, although this would seem to violate the independence of the two images' configurations.
Other people might say I should use two containers, but I was hoping to keep it simple; Derby is a very simple database, and I don't feel the need for that complexity here.
Am I just trying to push the concept of multi-stage Docker builds too far?
Is it actually normal at all for Docker containers to start more than one process? Will I have to fudge it and come up with my own entrypoint that starts the Derby server in the background before starting Grails in the foreground? Or is this all just wrong, and I really should be using multiple containers?
Docker can technically run multiple processes in one container, but the idiom is different: one container, one process. Running the database separately is certainly how it should be done.
Now, the problem with your Dockerfile is that after you declare a second FROM, you effectively discard most of what you have done so far. You may copy some files from a previous stage (this is normally used to carry built binaries forward), but Docker will not do that for you unless you explicitly define what to copy. Thus your actual entrypoint is the one declared in the datagrip/derby-server image.
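For illustration, copying between stages has to be explicit, roughly like this sketch (the stage name and build command are assumptions, not taken from the question):
# first stage, named "build", produces the jar
FROM azul/zulu-openjdk:13.0.3 AS build
WORKDIR /app
COPY . .
RUN ./gradlew assemble

# the second stage starts from a clean base; nothing from "build"
# survives unless copied explicitly
FROM azul/zulu-openjdk:13.0.3
WORKDIR /app
COPY --from=build /app/build/libs/holder-0.1.jar application.jar
ENTRYPOINT ["java", "-jar", "application.jar"]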
I suggest you get started with docker-compose. It's a nice tool to run several containers without complications. With a file like this:
version: "3.0"
services:
app:
build:
context: .
database:
image: datagrip/derby-server
docker-compose will build an image for the app (if the Dockerfile is in the same directory, but this can be customised) and start a database as well. The database can be accessed from the application container as just 'database' (it is a resolvable name). See this reference for more options.
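Assuming Derby's default network port of 1527 (an assumption; check the datagrip/derby-server image's docs), the Grails datasource could then point at the service by name, e.g.:
# grails-app/conf/application.yml (sketch)
dataSource:
  driverClassName: org.apache.derby.jdbc.ClientDriver
  url: jdbc:derby://database:1527/appdb;create=true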
I am trying to create an OpenShift application from an Express Node.js app, using a Dockerfile. The web app is currently a skeleton created with express-generator, and the Dockerfile looks like this:
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
I am running OpenShift Online version 3, using the oc CLI.
But when I run:
oc new-app c:\dev\myapp --strategy=docker
I get the following error message:
error: buildconfigs "node" is forbidden: build strategy Docker is not allowed
I have not found a way to enable Docker as build strategy. Why do I get this error message and how could it be resolved?
OpenShift Online does not allow you to build images from a Dockerfile in the OpenShift cluster itself, because that requires extra privileges which at this time are not safe to enable in a multi-user cluster.
You would be better off using the Node.js S2I (source-to-image) builder. It can pull in the source code from your repo and build an image for you without needing a Dockerfile.
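For example, something like this, where the repository URL is a placeholder:
oc new-app nodejs~https://github.com/your-org/your-express-app.git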
Read this blog post to get started:
https://blog.openshift.com/getting-started-nodejs-oso3/
Docker noob here so bear with me.
I have a VPS with dokku configured, it has multiple apps already running.
I am trying to add a fairly complex app at present, but Docker just fails with the following error.
From what I understand, I need to update the packages named in the error. The problem is that they are needed by some other module and I can't update them. Is there a way to make Docker bypass the warning and build?
Following is the content of my Dockerfile:
FROM mhart/alpine-node:6
# Create app dir
RUN mkdir -p /app
WORKDIR /app
# Install dependencies
COPY package.json /app
RUN npm install
# Bundle the app
COPY . /app
EXPOSE 9337
CMD ["npm", "start"]
Been trying this for a couple of days now with no success.
Any help greatly appreciated
Thanks.
I believe an npm process getting killed with error 137 on Docker is usually caused by an out-of-memory error. You can try adding a swap file (or more RAM) to test this.
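To test that hypothesis, you can watch the container's memory while the build runs and, where your platform caps container memory, raise the limit. The flags below are standard Docker options, not taken from the original answer, and the image name is a placeholder:
# watch live memory usage of running containers
docker stats
# run with an explicit, larger memory limit
docker run -m 2g your-image npm start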