Lift Sails inside Docker container

I know there are multiple examples out there (actually only a few), and I've looked into some and tried to apply them to my case, but when I try to lift the container (docker-compose up) I end up with more or less the same error every time.
My folder structure is:
sails-project
--app
----api
----config
----node_modules
----.sailsrc
----app.js
----package.json
--docker-compose.yml
--Dockerfile
The docker-compose.yml file:
sails:
  build: .
  ports:
    - "8001:80"
  links:
    - postgres
  volumes:
    - ./app:/app
  environment:
    - NODE_ENV=development
  command: node app
postgres:
  image: postgres:latest
  ports:
    - "8002:5432"
And the Dockerfile:
FROM node:0.12.3
RUN mkdir /app
WORKDIR /app
# the dependencies are already installed in the local copy of the project, so
# they will be copied to the container
ADD app /app
CMD ["/app/app.js", "--no-daemon"]
RUN cd /app; npm i
I also tried RUN npm i -g sails in the Dockerfile with command: sails lift instead, but I'm getting the same error.
Naturally, I tried different configurations of the Dockerfile and different commands (node app, sails lift, npm start, etc.), but I constantly end up with the same error. Any ideas?

By using command: node app you are overriding CMD ["/app/app.js", "--no-daemon"], which as a consequence has no effect. WORKDIR /app already creates the /app folder, so you don't have to RUN mkdir /app. And most importantly, RUN cd /app; npm i has to come before CMD ["/app/app.js", "--no-daemon"]: the npm dependencies have to be installed before you start your app.
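Putting that together, a corrected Dockerfile could look like the following (a sketch based on the layout above; the original CMD pointed at /app/app.js directly, so running it through node here is an assumption):

FROM node:0.12.3
# WORKDIR creates /app, so no separate mkdir is needed
WORKDIR /app
# copy the app and install dependencies at build time, before the app starts
ADD app /app
RUN npm i
# default command; the `command: node app` in docker-compose.yml overrides this
CMD ["node", "app.js", "--no-daemon"]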

Related

Nuxt 3 Docker doesn't recognize new pages, what am I doing wrong?

I have a problem with my Nuxt 3 project that I run with Docker (dev environment).
Nuxt 3 should automatically create routes when I create .vue files in the pages directory, and that works when I run the project outside of Docker, but when I use Docker it doesn't recognize my files until I restart the container. The same thing happens when I try to delete files from the pages directory: it doesn't recognize any changes until I restart the container. The weird thing is that this happens only in the pages directory; in other directories everything works fine. Just to mention that hot reload works, as I set up vite in nuxt.config.ts.
docker-compose.yaml
version: '3.8'
services:
  nuxt:
    build:
      context: .
    image: nuxt_dev
    container_name: nuxt_dev
    command: npm run dev
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3000:3000"
      - "24678:24678"
Dockerfile:
FROM node:16.14.2-alpine
WORKDIR /app
RUN apk update && apk upgrade
RUN apk add git
COPY ./package*.json /app/
RUN npm install && npm cache clean --force && npm run build
COPY . .
ENV PATH ./node_modules/.bin/:$PATH
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=3000
EXPOSE 3000
CMD ["npm", "run", "dev"]
I tried some things with Docker volumes, like adding a separate volume just for pages:
  - ./pages:app/pages
  - /pages:app/pages
  - app/pages
but as I thought, none of those helped.
One more thing that is weird to me: when I created a .vue file in the pages directory, I checked whether it appeared in the container, and it did. I'm not an expert in Docker or Nuxt, I've just started to learn, so any help would be much appreciated.

Preventing Docker Compose container from creating files as root

Dockerfile:
FROM node:18.13.0
ENV WORK_DIR=/app
RUN mkdir -p ${WORK_DIR}
WORKDIR ${WORK_DIR}
RUN mkdir ${WORK_DIR}/data
RUN chmod -R 755 ${WORK_DIR}/data
COPY package*.json ./
RUN npm ci
COPY . .
docker-compose.yml:
version: '3.8'
services:
  fetch:
    container_name: fetch
    build: .
    command: sh -c "npx prisma migrate deploy && npm start"
    restart: unless-stopped
    depends_on:
      - postgres
    volumes:
      - ./data:/app/data:z
The container fetches new files and saves them into a directory configured by the app running in the container, defaulting to data/. The issue is that they're all created as root and cannot be manipulated by the host. If I chown the dir on the host, it works, but any new files are then created as root again.
I've tried a couple of different variations of creating a new user in the Dockerfile and passing host user info into the compose file, but it always seems to result in a disconnect between the Dockerfile and the compose file. I'm trying to keep things as easy as docker compose up, if possible.
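For reference, the "pass host user info into the compose file" approach usually takes a shape like this (a sketch, assuming UID and GID are provided to Compose as environment variables, e.g. via an .env file or exported in the host shell; user: is a standard Compose service field):

services:
  fetch:
    build: .
    # run the container process as the host user so files written into the
    # bind-mounted ./data directory are owned by that user instead of root
    user: "${UID:-1000}:${GID:-1000}"
    volumes:
      - ./data:/app/data:z

Because ./data is bind-mounted over /app/data, the permissions that matter at runtime are those of the host directory, which the host user already owns.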

Container exited with code 0, and my app is served from the host OS

I want to dockerize a Next.js project.
I am using Ubuntu 20.04
I first created a Next.js app in my /home/user/project/ folder using npx create-next-app
So I have the project source code in my host machine.
But I want to dockerize it, so I created a docker-compose.yaml:
next:
  build:
    context: ./next
    dockerfile: Dockerfile
  container_name: next
  volumes:
    - ./next:/var/www/html
  ports:
    - "3000:3000"
  networks:
    - nginx
And this is the Dockerfile:
#Creates a layer from node:alpine image.
FROM node:alpine
#Creates directories
RUN mkdir -p /usr/src/app
#Sets an environment variable
ENV PORT 3000
#Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD commands
WORKDIR /usr/src/app
#Copy new files or directories into the filesystem of the container
COPY package.json /usr/src/app
COPY package-lock.json /usr/src/app
#Execute commands in a new layer on top of the current image and commit the results
RUN npm install
##Copy new files or directories into the filesystem of the container
COPY . /usr/src/app
#Execute commands in a new layer on top of the current image and commit the results
RUN npm run build
#Informs container runtime that the container listens on the specified network ports at runtime
EXPOSE 3000
#Allows you to configure a container that will run as an executable
ENTRYPOINT ["npm", "run"]
Then I build my container using docker-compose build && docker-compose up.
The container is built, but it is not running; it shows EXITED (0),
and the logs contain the following message:
Lifecycle scripts included in next-frontend@0.1.0:
  start
    next start
available via `npm run-script`:
  dev
    next dev
  build
    next build
  lint
    next lint
But of course, if I run npm run dev on the host it will run the app from the host, not from the container (it runs, but that's not what I want).
I feel like there is some very fundamental mistake in my deployment, but I just started with Docker so I can't figure out what it is.
Also, I copied the Dockerfile from a tutorial, so it might not fit the way I created the project.
ENTRYPOINT ["npm", "run"]... What?
From the npm run documentation:
This runs an arbitrary command from a package's "scripts" object. If no "command" is provided, it will list the available scripts.
In the docker-compose.yml, you need to override the CMD instruction (that is empty in your case) with the npm script you want to run. Something like this:
next:
  build:
    context: ./next
    dockerfile: Dockerfile
  container_name: next
  command: ["start"]
  volumes:
    - ./next:/var/www/html
  ports:
    - "3000:3000"
  networks:
    - nginx
Since you are using the Compose Spec, this is the reference for the command instruction.

Docker + node_modules: receiving error for local dependency while trying to run Dockerfile

I am working on creating a docker container for a node.js microservice and am running into an issue with a local dependency from another folder.
I added the dependency to the node_modules folder using:
npm install -S ../dependency1 (dependency1 being the module name).
This also added an entry in the package.json as follows:
"dependency1": "file:../dependency1".
When I run the docker-compose up -d command, I receive an error indicating the following:
npm ERR! Could not install from "../dependency1" as it does not contain a package.json file.
Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
CMD [ "npm", "start" ]
EXPOSE 3000
docker-compose.yml:
customer:
  container_name: "app_customer"
  build:
    context: .
    dockerfile: Dockerfile
  volumes:
    - .:/usr/src/app/
    - /usr/src/app/node_modules
  ports:
    - "3000:3000"
  depends_on:
    - mongo
    - rabbitmq
I found articles outlining an issue with symlinks in a node_modules folder under Docker, and a few describing this exact problem, but none seem to provide a solution. I'm looking for a solution or a really good workaround.
A Docker build can't reference files outside of the build context, which is the . defined in the docker-compose.yml file.
docker build creates a tar bundle of all the files in a build context and sends that to the Docker daemon for the build. Anything outside the context directory doesn't exist to the build.
You could move your build context to the parent directory with context: ../ and shuffle all the paths you reference in the Dockerfile to match. Just be careful not to make the build context too large, as that can slow down the build process.
The other option is to publish the private npm modules to a scope, possibly on a private npm registry that you and the build server have access to, and install the dependencies normally.
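For the first option, the shuffle looks roughly like this (a sketch; the customer/ folder name is an assumption about where the service's code lives next to dependency1/):

docker-compose.yml:
customer:
  build:
    context: ..
    dockerfile: customer/Dockerfile

Dockerfile (paths are now relative to the parent directory):
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# both folders come from the parent-directory build context
COPY dependency1 /usr/src/dependency1
COPY customer /usr/src/app
RUN npm install
CMD [ "npm", "start" ]
EXPOSE 3000

With the code at /usr/src/app, the "file:../dependency1" entry in package.json resolves to /usr/src/dependency1, so npm install can find it during the build.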

Docker-compose volume mount before run

I have a Dockerfile I'm pointing at from a docker-compose.yml.
I'd like the volume mount in the docker-compose.yml to happen before the RUN in the Dockerfile.
Dockerfile:
FROM node
WORKDIR /usr/src/app
RUN npm install --global gulp-cli \
&& npm install
ENTRYPOINT gulp watch
docker-compose.yml
version: '2'
services:
  build_tools:
    build: docker/gulp
    volumes_from:
      - build_data:rw
  build_data:
    image: debian:jessie
    volumes:
      - .:/usr/src/app
It makes complete sense for it to process the Dockerfile first and then mount from docker-compose; however, is there a way to get around that?
I want to keep the Dockerfile generic while passing more specific bits in from Compose. Perhaps that's not the best practice?
Erik Dannenberg's answer is correct: the volume layering means that what I was trying to do makes no sense. (There is another really good explanation on the Docker website if you want to read more.) If I wanted to have Docker do the npm install, I could do it like this:
FROM node
ADD . /usr/src/app
WORKDIR /usr/src/app
RUN npm install --global gulp-cli \
&& npm install
CMD ["gulp", "watch"]
However, this isn't appropriate as a solution for my situation. The goal is to use NPM to install project dependencies, then run gulp to build my project. This means I need read and write access to the project folder and it needs to persist after the container is gone.
I need to do two things after the volume is mounted, so I came up with the following solution...
docker/gulp/Dockerfile:
FROM node
RUN npm install --global gulp-cli
ADD start-gulp.sh .
CMD ./start-gulp.sh
docker/gulp/start-gulp.sh:
#!/usr/bin/env bash
until cd /usr/src/app && npm install
do
echo "Retrying npm install"
done
gulp watch
docker-compose.yml:
version: '2'
services:
  build_tools:
    build: docker/gulp
    volumes_from:
      - build_data:rw
  build_data:
    image: debian:jessie
    volumes:
      - .:/usr/src/app
So now the container starts a bash script that will continuously loop until it can get into the directory and run npm install. This is still quite brittle, but it works. :)
You can't mount host folders or volumes during a Docker build. Allowing that would compromise build repeatability. The only way to access local data during a Docker build is the build context, which is everything in the PATH or URL you passed to the build command. Note that the Dockerfile needs to exist somewhere in context. See https://docs.docker.com/engine/reference/commandline/build/ for more details.
