Dockerizing a Nuxt 3 app for development purposes

I'm trying to dockerize a Nuxt 3 app, but I have a strange issue.
This Dockerfile is working with this docker run command:
docker run -v /Users/my_name/developer/nuxt-app:/app -it -p 3000:3000 nuxt-app
# Dockerfile
FROM node:16-alpine3.14
# create destination directory
RUN mkdir -p /usr/src/nuxt-app
WORKDIR /usr/src/nuxt-app
# update and install dependency
RUN apk update && apk upgrade
RUN apk add git
# copy the app, note .dockerignore
COPY . /usr/src/nuxt-app/
RUN npm install
# RUN npm run build
EXPOSE 3000
# ENV NUXT_HOST=0.0.0.0
# ENV NUXT_PORT=3000
CMD [ "npm", "run", "dev"]
I don't understand why it works even though I mount the source to the /app folder in the container while the Dockerfile declares /usr/src/nuxt-app as the working directory.
When I try to make the two paths match, I get this error:
ERROR (node:18) PromiseRejectionHandledWarning: Promise rejection was handled asynchronously (rejection id: 3) 20:09:42
(Use `node --trace-warnings ...` to show where the warning was created)
✔ Nitro built in 571 ms nitro 20:09:43
ERROR [unhandledRejection]
You installed esbuild for another platform than the one you're currently using.
This won't work because esbuild is written with native code and needs to
install a platform-specific binary executable.
Specifically the "@esbuild/darwin-arm64" package is present but this platform
needs the "@esbuild/linux-arm64" package instead. People often get into this
situation by installing esbuild on Windows or macOS and copying "node_modules"
into a Docker image that runs Linux, or by copying "node_modules" between
Windows and WSL environments.
If you are installing with npm, you can try not copying the "node_modules"
directory when you copy the files over, and running "npm ci" or "npm install"
on the destination platform after the copy. Or you could consider using yarn
instead of npm which has built-in support for installing a package on multiple
platforms simultaneously.
If you are installing with yarn, you can try listing both this platform and the
other platform in your ".yarnrc.yml" file using the "supportedArchitectures"
feature: https://yarnpkg.com/configuration/yarnrc/#supportedArchitectures
Keep in mind that this means multiple copies of esbuild will be present.
Another alternative is to use the "esbuild-wasm" package instead, which works
the same way on all platforms. But it comes with a heavy performance cost and
can sometimes be 10x slower than the "esbuild" package, so you may also not
want to do that.
at generateBinPath (node_modules/vite/node_modules/esbuild/lib/main.js:1841:17)
at esbuildCommandAndArgs (node_modules/vite/node_modules/esbuild/lib/main.js:1922:33)
at ensureServiceIsRunning (node_modules/vite/node_modules/esbuild/lib/main.js:2087:25)
at build (node_modules/vite/node_modules/esbuild/lib/main.js:1978:26)
at runOptimizeDeps (node_modules/vite/dist/node/chunks/dep-3007b26d.js:42941:26)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
I have absolutely no clue what is going on here, other than an architecture mismatch (which doesn't seem to be the case with the working version; I'm on a MacBook Air M1).
The second issue is that editing files in the mounted volume doesn't update the page.

Okay, I found the way. The issue was with Vite: the HMR module uses port 24678, and since I hadn't published that port, the page couldn't reload. Running npm install inside the container at startup (see the CMD below) also sidesteps the esbuild platform mismatch, because the Linux binaries get installed on the destination platform, just as the error message suggests. This is how it should look:
docker run --rm -it \
-v /path/to/your/app/locally:/usr/src/app \
-p 3000:3000 \
-p 24678:24678 \
nuxt-app
Dockerfile
FROM node:lts-slim
WORKDIR /usr/src/app
# install dependencies inside the container at startup, so native binaries
# (like esbuild) match the container's platform, then start the dev server
CMD ["sh", "-c", "npm install && npm run dev"]

Related

How to install @swc/core-linux-musl on Windows, to make it work in a Docker container?

I'm working on Windows. I'm dockerizing a Next.js app with TypeScript.
Here is my dockerfile:
FROM node:alpine
# create directory where our application will be run
RUN mkdir -p /usr/src
WORKDIR /usr/src
# copy our files into directory
COPY . /usr/src
# install dependencies
RUN npm install
EXPOSE 3000
ENTRYPOINT ["npm", "run" ,"dev"]
During development I bind a host directory to the container with --mount type=bind,source=d:/apps/library.next.js/,target=/usr/src. When I start the container I get the error: error - Failed to load SWC binary for linux/x64, see more info here: https://nextjs.org/docs/messages/failed-loading-swc.
That's fine, I understand the error and know what to do. To fix this I need to install @swc/cli @swc/core @swc/core-linux-musl, but I can't do it because npm complains:
npm ERR! code EBADPLATFORM
npm ERR! notsup Unsupported platform for @swc/core-linux-musl@1.2.42: wanted {"os":"linux","arch":"x64"} (current: {"os":"win32","arch":"x64"})
How do I install it on Windows, or how do I change the Docker setup to make it work? It has to be installed locally so that it ends up (via the bind mount!) inside the container.
My workaround for now is to get into the container with docker exec -it <id> /bin/sh and then manually run npm install --save-dev @swc/cli @swc/core @swc/core-linux-musl. But doing that every time I recreate the container is annoying.
The docs state: "The -f or --force will force npm to fetch remote resources even if a local copy exists on disk." This appears both in the legacy v6 docs (the ones you posted) and in the v8 version (see the section right after --package-lock-only; it comes with the example npm install sax --force). So you shouldn't have issues with that every time your container is recreated.
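If I read the suggestion right, the one-off command on the Windows host would be something like the following (--force tells npm to skip the EBADPLATFORM platform check; the exact package list mirrors the question):
npm install --save-dev --force @swc/cli @swc/core @swc/core-linux-musl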

Docker isn't caching Alpine apk add command

Every time I build the image I have to wait for apk add docker to finish, which takes a long time.
Since it downloads the same thing every time, can I somehow force Docker to cache apk's downloads for development purposes?
Here's my Dockerfile:
FROM golang:1.13.5-alpine
WORKDIR /go/src/app
COPY src .
RUN go get -d -v ./...
RUN go install -v ./...
RUN apk add --update docker
CMD ["app"]
BTW, I am using this part volumes: - /var/run/docker.sock:/var/run/docker.sock in my docker-compose.yml to use sibling containers, if that matters.
EDIT: I've found that Google copies a docker.tgz into the image in Chromium's build setup:
# add docker client -- do not install docker via apk -- it will try to install
# docker engine which takes a lot of space as well (we don't need it, we need
# only the small client to communicate with the host's docker server)
ADD build/docker/docker.tgz /
What is that docker.tgz? How can I get it?
Reorder your Dockerfile and it should work.
FROM golang:1.13.5-alpine
RUN apk add --update docker
WORKDIR /go/src/app
COPY src .
RUN go get -d -v ./...
RUN go install -v ./...
CMD ["app"]
Since you copy before installing, whenever you change something in src the cache is invalidated for the docker installation as well.
Whenever you have a COPY command, if any of the files involved change, every command after it gets re-run. If you move your RUN apk add ... command to the start of the file, before it COPYs anything, it will be cached across builds.
A fairly generic recipe for most Dockerfiles to accommodate this pattern looks like:
FROM some-base-image
# Install OS-level dependencies
RUN apk add or apt-get install ...
WORKDIR /app
# Install language-level dependencies
COPY requirements.txt requirements.lock ./
RUN something install -r requirements.txt
# Install the rest of the application
COPY main.app ./
COPY src src/
# Set up standard run-time metadata
EXPOSE 12345
CMD ["/app/main.app"]
(Go and Java applications need the additional step of compiling the application, which often lends itself to a multi-stage build, but this same pattern can be repeated in both stages.)
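For instance, a multi-stage sketch of the Go image from the question (the stage layout, alpine tag, and binary path are assumptions, not from the original answer) might look like:
# build stage: compile the Go application
FROM golang:1.13.5-alpine AS build
WORKDIR /go/src/app
COPY src .
RUN go get -d -v ./...
RUN go install -v ./...

# runtime stage: install the OS-level dependency before any COPY so the layer stays cached
FROM alpine:3.12
RUN apk add --update docker
COPY --from=build /go/bin/app /usr/local/bin/app
CMD ["app"]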
You can download the Docker x86_64 static binaries for Mac, Linux, or Windows, then unzip/untar the archive and make the client binary executable.
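For example, a sketch of pulling the static client straight into an image (the version number is illustrative, not from the original answer; note that a remote ADD does not auto-extract archives):
ADD https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz /tmp/docker.tgz
RUN tar -xzf /tmp/docker.tgz -C /tmp \
 && mv /tmp/docker/docker /usr/local/bin/docker \
 && rm -rf /tmp/docker /tmp/docker.tgz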
Whenever you are installing packages in a Docker container, those steps should go at the beginning of the Dockerfile so the same packages don't get reinstalled on every build, and the COPY commands should come at the end of the Dockerfile.

libnode-dev installation within docker container

I'm trying to run a node.js application.
I can run it without problems directly on Raspbian Buster.
Within a docker container running on the same raspberry pi, I have no such luck.
Dockerfile:
FROM balenalib/raspberry-pi2-debian-node:10-stretch-run
RUN sudo apt-get update
RUN sudo apt-get -y install g++ python make git
WORKDIR /usr/src/app
COPY package.json package.json
RUN JOBS=MAX npm install --production
COPY . ./
CMD ["npm", "start"]
But when I run the same node.js code within a docker container, I'm getting a libnode.so.64 error.
pi@raspberrypi:~/rpi-lora-sensorified/data $ docker logs rpi-lora-sensorified_data_1
> resin-websocket@1.0.1 start /usr/src/app
> node index.js
/usr/src/app/node_modules/bindings/bindings.js:121
throw e;
^
Error: libnode.so.64: cannot open shared object file: No such file or directory
I've tried installing libnode-dev (which I've concluded provides this library) within the container, but I'm getting a
E: Unable to locate package libnode-dev
And yes, I've rebuilt the container without cache but still cannot locate that package.
Any idea (really, even some pointers would help) where I should continue looking?
So the solution, which I can't explain at all, is:
I was trying to run the code on Debian Stretch, while I had tested that it works on Debian Buster. After updating the Docker image to Buster, everything works as expected.
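Presumably the change boils down to the base image tag, assuming balenalib publishes a matching Buster variant of the image used above (the tag below is my guess at it, following their naming scheme):
FROM balenalib/raspberry-pi2-debian-node:10-buster-run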

Nuxt.js application builds locally and starts in production with Docker

I have a project developed with Nuxt.js. Now I want to deploy it with Docker. But for some reason I need to build it on my local machine, which runs macOS; it would be better to run npm install on the local machine and then deploy the result to a Linux server in the production environment. Can this be done?
Sure can be. Build your project normally (via npm install), then, inside your project directory, write a Dockerfile like this:
FROM node:7.8.0-alpine
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
RUN apk update && apk upgrade && apk add git
# Copy your already built project files inside image
COPY . .
ENV HOST 0.0.0.0
EXPOSE 3000
# start command
CMD [ "npm", "start" ]
Make sure your Dockerfile is in the project's root directory where you'd normally run npm start.
Then, in order to create a image with your project, just do:
$ docker build -t myapp .
and run it with:
$ docker run -it -p 3000:3000 myapp
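Note that "build your project normally" for a Nuxt app typically also means producing the production bundle before baking the image; a sketch of the full local workflow (script names assume a standard Nuxt setup, not shown in the original answer):
npm install
npm run build    # produce the production bundle before building the image
docker build -t myapp .
docker run -it -p 3000:3000 myapp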

npm install doesn't work in Docker

This is my Dockerfile:
FROM node:7
RUN apt-get update && apt-get install -y --no-install-recommends \
rubygems build-essential ruby-dev \
&& rm -rf /var/lib/apt/lists/*
RUN npm install -gq gulp bower
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
CMD ["gulp", "start:dev"]
When I build the image, the npm install command executes really quickly and with little output. I actually build it through docker-compose, which does have a volume mounted, and I cannot see the node_modules folder being created on my host. When I launch a container from this image, I can see there is no node_modules folder. I then execute npm install manually and things start working: it takes 2-3 minutes to install all the packages, and the node_modules folder is indeed created.
What is happening here? What am I doing wrong? Why doesn't npm install work at build time, but then works at run time?
The npm install should have worked based on your Dockerfile. You can see the created files if you run the image without a mounted volume (DIRNAME: where your docker-compose.yml is located):
docker run --rm -it DIRNAME_node ls -ahl /usr/src/app
With docker build, all data is stored in the image. So, it's intended that you don't see any files created on your host.
If you mount a volume (generally in Linux, and likewise in a Docker container), it overlays the directory, so you can't see the node_modules created in the build step.
I suggest you do your tests based on the Docker image itself and don't mount the volume. Then you have an immutable Docker image which is better for deployment.
Also, copying in the whole source before running npm install means that whenever the source code changes, the cache for the npm install step is invalidated.
Instead, separate the steps/caches like so:
COPY package*.json ./
RUN npm install
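Applied to the Dockerfile from the question, that split might look like this (an untested sketch; the comments are mine):
FROM node:7
RUN apt-get update && apt-get install -y --no-install-recommends \
    rubygems build-essential ruby-dev \
    && rm -rf /var/lib/apt/lists/*
RUN npm install -gq gulp bower
WORKDIR /usr/src/app
# dependency layer: re-runs only when the package files change
COPY package*.json ./
RUN npm install
# source layer: edits here no longer invalidate the npm install cache
COPY . /usr/src/app
CMD ["gulp", "start:dev"]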
On Windows 10 I was having the same issue reported in this question, and after some research I found a question with the necessary steps to solve the problem.
In short, the main problem is that during the install wizard I had selected the "Windows containers" option.
To solve the issue:
1) Switch to Linux containers: in the taskbar, right-click the Docker icon and choose the "Switch to Linux containers" option.
2) Disable "Experimental Features" for the command line: open the Docker settings and uncheck the experimental option under Command Line.
3) Disable the experimental setting in the configuration file: in the Docker settings, click Docker Engine and verify that "experimental" is set to false (see the fragment below).
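For step 3, the relevant fragment of the Docker Engine configuration shown in that tab (trimmed to the one key that matters here) is:
{
  "experimental": false
}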
The question where I found the solution was related to another problem I was facing when trying to build Docker images: Unspecified error (0x80004005) while running a Docker build. Both problems were related to the same issue: when installing Docker for the first time I had selected the "Windows containers" option.
Hope it helps. Cheers
