Docker Compose, Rails and Webpacker not preserving node_modules

TL;DR - yarn install installs node_modules in an 'intermediate container' and the packages disappear after the build step.
I'm trying to get webpacker going with our dockerized rails 5.0 app.
Dockerfile
FROM our_company_centos_image:latest
RUN yum install wget -y
RUN wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo
RUN yum install sqlite-devel yarn -y
RUN mkdir -p $APP_HOME/node_modules
COPY Gemfile Gemfile.lock package.json yarn.lock $APP_HOME/
RUN bundle install --path /bundle
RUN yarn install --pure-lockfile
ADD . $APP_HOME
When yarn install runs, it installs the packages, followed immediately by
Removing intermediate container 67bcd62926d2
Outside of the container, running ls node_modules shows an empty directory, and the docker-compose up process eventually fails when webpack_dev_server exits because the modules are not present.
I've done various things like adding node_modules as a volume in docker-compose.yml, to no effect.
The only thing that HAS worked is running yarn install locally to build the directory first and then letting the build run it again, but then I've got OS X versions of the packages, which may eventually cause a problem.
What am I doing wrong here?
docker-compose.yml
version: '2'
services:
  web:
    build: .
    network_mode: bridge
    environment:
      WEBPACK_DEV_SERVER_HOST: webpack_dev_server
    links:
      - webpack_dev_server
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - ./node_modules:/app/node_modules
      - .:/app
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
  webpack_dev_server:
    image: myapp_web
    network_mode: bridge
    command: bin/webpack-dev-server
    environment:
      NODE_ENV: development
      RAILS_ENV: development
      WEBPACK_DEV_SERVER_HOST: 0.0.0.0
    volumes:
      - .:/app
    ports:
      - "3035:3035"

The last step is ADD . $APP_HOME. You also mention that the node_modules folder is empty in your local tree. Does that mean node_modules still exists as an empty folder?
If so, that empty node_modules folder is likely being copied over during the ADD step and overwriting everything that was done in the previous yarn step.
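If that is the case, one workaround (just a sketch, assuming the build context is the project root and the Dockerfile lives there) is to keep the local node_modules out of the build context with a .dockerignore, so the ADD step cannot clobber what yarn install produced in the image:
# .dockerignore, next to the Dockerfile (hypothetical)
node_modules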

One solution I found is to add node_modules as a volume.
For example, if your node_modules directory is located at /usr/src/app/node_modules, just add:
volumes:
  - /usr/src/app/node_modules
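Adapted to the compose file from the question (a sketch, assuming the app lives at /app as in the web service there), the anonymous volume masks the bind-mounted host directory, so the modules installed during the image build survive:
web:
  volumes:
    - .:/app
    - /app/node_modules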

I have a Rails 5.2.0.rc1 app with webpacker working at https://github.com/archonic/limestone. It's not 100% right yet, but I've found that running docker-compose run webpacker yarn install --pure-lockfile gets things up and running in a new environment before docker-compose up --build. I'm not entirely sure yet why that's required, since it's in the Dockerfile.
Also, as far as I know, your volume for web should just be - '.:/app'; the entry with node_modules is redundant.
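In other words, something like this (a sketch that uses the webpack_dev_server service name from the question rather than the one in that repo):
docker-compose run webpack_dev_server yarn install --pure-lockfile
docker-compose up --build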

Related

Nuxt 3 Docker doesn't recognize new pages, what am I doing wrong?

I have a problem with my Nuxt 3 project that I run with Docker (dev environment).
Nuxt 3 should automatically create routes when I create .vue files in the pages directory, and that works when I run my project outside of Docker, but when I use Docker it doesn't recognize my files until I restart the container. The same thing happens when I try to delete files from the pages directory: it doesn't recognize any changes until I restart the container. The weird thing is that this happens only in the pages directory; in other directories everything works fine. Just to mention that hot reload works; I set up vite in nuxt.config.ts.
docker-compose.yaml
version: '3.8'
services:
  nuxt:
    build:
      context: .
    image: nuxt_dev
    container_name: nuxt_dev
    command: npm run dev
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3000:3000"
      - "24678:24678"
Dockerfile:
FROM node:16.14.2-alpine
WORKDIR /app
RUN apk update && apk upgrade
RUN apk add git
COPY ./package*.json /app/
RUN npm install && npm cache clean --force && npm run build
COPY . .
ENV PATH ./node_modules/.bin/:$PATH
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=3000
EXPOSE 3000
CMD ["npm", "run", "dev"]
I tried some things with Docker volumes, like adding a separate volume just for the pages directory, like this:
./pages:app/pages
/pages:app/pages
app/pages
but, as I expected, none of those things helped.
One more thing that is weird to me: when I created a .vue file in the pages directory, I checked whether it appeared in the container, and it did. I'm not an expert in Docker or Nuxt, I just started to learn, so any help would be much appreciated.
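For reference, a Compose bind mount needs an absolute path on the container side, so the pages-only variant presumably intended would look something like the sketch below (whether mounting pages separately actually fixes the route detection is a separate question):
volumes:
  - ./pages:/app/pages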

Webpack compiled output folder not showing up in docker container

I'm experiencing a confusing situation that I'd love some additional thoughts about. I'm trying to get a local dev environment set up at my company using Docker.
Goal 1 is to allow local edits to app and lib to be captured and piped into the container. I am able to accomplish this with no problem using bind mounts to those directories. This enables our ruby work to happen against a running docker cluster, perfect!
Goal 2 is to allow updates to public/assets/, generated by a webpack process running on the local filesystem, to be captured and piped into the container. The intention is for our front-end engineers to run webpack locally, but to allow their compiled local output to be served by the running docker container.
Unfortunately, something strange is happening when I try to do this. Currently, as you can see below, I'm trying to use public/ as a bind mount. This sort of works -- I can navigate into public/ and see a bunch of files. But this is where it gets weird. The public/ directory contains assets/, which holds the output of the local webpack process. When I shell into my container I can see the assets folder:
$ cd public
$ ls -la
...
drwxr-xr-x 36 root root 1152 Aug 13 13:12 assets
....
But then when I try to access it I get weird behavior:
/webapp/public # ls -la assets
ls: cannot access 'assets': No such file or directory
Sure enough, when I navigate to my web application it gives me 404's on anything in the public/assets/ folder.
Here's my Dockerfile:
FROM ruby:2.7.2-alpine
WORKDIR /webapp
RUN apk add --update --no-cache yarn nodejs npm graphviz vim postgresql-client coreutils binutils build-base readline readline-dev cmake git nodejs openssh-client openssl-dev postgresql-dev shared-mime-info tzdata; \
mkdir -p -m 0600 ~/.ssh; \
ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN mkdir config
# this overwrites the local database.yml with some custom params we need to use docker-compose's postgres.
# this way you don't need to maintain a forked local config, the container will configure itself correctly.
COPY local/config/application.template.yml config/application.yml
COPY local/config/database.template.yml config/database.yml
COPY local/config/warehouse.template.yml config/warehouse.yml
COPY local/config/.setup_config.template.yml ./.setup_config.yml
ARG packagecloud
ARG contribsys
ENV PACKAGECLOUD_TOKEN=$packagecloud
ENV BUNDLE_GEMS__CONTRIBSYS__COM=$contribsys
# by only copying over config and gem info prior to bundle installing I'm hoping to
# get Docker to use its caching to not run this unless something above has changed.
COPY ./Gemfile ./Gemfile
COPY ./Gemfile.lock ./Gemfile.lock
RUN --mount=type=ssh bundle install
# we should be able to do the same for yarn, but it's not nearly as expensive.
# for some reason, though, the packagecloud stuff fails if we run this before
# copying everything over. Unsure why.
COPY ./package.json ./package.json
COPY ./yarn.lock ./yarn.lock
COPY ./.npmrc ./.npmrc
RUN yarn install
COPY . .
# if necessary, remove the .gitkeep files -- they keep databases from initializing. This sucks for now.
RUN mkdir -p local/data/redis/
RUN mkdir -p local/data/postgres/
EXPOSE 3000
CMD ["/bin/sh"]
Here's my docker-compose.yml with database details redacted:
version: "3"
networks:
  default:
    name: mode-net
    driver: bridge
services:
  db:
    image: postgres:11.5
    environment:
    volumes:
      - ./local/data/postgres:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    image: "redis:5-alpine"
    command: redis-server
    ports:
      - "6379:6379"
    volumes:
      - ./local/data/redis:/var/lib/postgresql/data
    environment:
      - REDIS_URL_SIDEKIQ=redis://redis:6379/1
  sidekiq:
    image: webapp:latest
    depends_on:
      - "db"
      - "redis"
    command: bin/sidekiq
    volumes:
      - ./app:/webapp/app
      - ./lib:/webapp/lib
    environment:
      - RAILS_ENV=development
      - DATABASE_URL=
      - REDIS_URL_SIDEKIQ=redis://redis:6379/1
  web:
    image: webapp:latest
    command: /webapp/local/scripts/start_server
    environment:
      - RAILS_ENV=development
      - DEBUG=$DEBUG
      - DATABASE_URL=
      - REDIS_URL_SIDEKIQ=redis://redis:6379/1
    ports:
      - "3000:3000"
      - "1234:1234" # used for debugger access
    volumes:
      - ./public:/webapp/public
      - ./app:/webapp/app
      - ./lib:/webapp/lib
      - ./webapp-ui:/webapp/webapp-ui
    depends_on:
      - db
      - redis
I do include certain parts of public in .dockerignore, but since the volume is bind mounted I think it should be included at runtime, right?
update
So, I'm not going to accept this as The Answer but I do have some more information. It seems like if I restart my computer, start a fresh docker engine, and fire it up against built static files in public/assets it works fine.
If, while it's running, I do a build that updates the static assets in public/assets, that build's output is now captured by my container. Great!
But if I start a webpack dev server on my filesystem? That's when everything falls apart. I suspect it's doing something to the folder on the filesystem that isn't playing nicely with Docker. What sucks, though, is that once I start that dev server, even once, my Docker setup is hosed until I restart my computer.
That's... really weird?
I couldn't reproduce this; it would be helpful if you provided more details so it can be reproduced easily.
This could be a bug in the Docker file-sharing mechanism, or a race condition when accessing the assets folder.
If you are using Docker Desktop on Mac, I recommend disabling the "Use gRPC FUSE for file sharing" option under Preferences -> General.

Why modules are not installed by only binding a volume in docker-compose

When I tried docker-compose build and docker-compose up -d, the api-server container didn't start.
I tried
docker logs api-server
yarn run v1.22.5
$ nest start --watch
/bin/sh: nest: not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
It seems the nest packages weren't installed, because package.json was not copied into the container from the host.
But in my opinion, since the volume is bound in docker-compose.yml, the yarn install command should be able to see the files through - ./api:/src.
Why do we need to COPY files into the container?
Why doesn't the volume binding alone work?
If someone has an opinion, please let me know.
Thanks
The following is the Dockerfile:
FROM node:alpine
WORKDIR /src
RUN rm -rf /src/node_modules
RUN rm -rf /src/package-lock.json
RUN apk --no-cache add curl
RUN yarn install
CMD yarn start:dev
The following is the docker-compose.yml:
version: '3'
services:
  api-server:
    build: ./api
    links:
      - 'db'
    ports:
      - '3000:3000'
    volumes:
      - ./api:/src
      - ./src/node_modules
    tty: true
    container_name: api-server
Volumes are mounted at runtime, not at build time; therefore, in your case, you should copy package.json before installing dependencies and before running any command that needs them.
Some references:
Docker build using volumes at build time
Can You Mount a Volume While Building Your Docker Image to Cache Dependencies?
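Concretely, a sketch of what that Dockerfile could look like (only the manifest is copied before the install; whether a yarn.lock exists alongside it is an assumption, so it is left out here):
FROM node:alpine
WORKDIR /src
RUN apk --no-cache add curl
# copy the manifest first so dependencies can be installed at build time
COPY package.json ./
RUN yarn install
CMD yarn start:dev
Note that the bind mount - ./api:/src will still hide the image's /src/node_modules at runtime unless it is masked with an anonymous volume such as /src/node_modules, as described in the answers above.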

Webpack not seeing file changes in docker alpine

I'm trying to port my already-working webpack app to a docker setup for easier development env setup. I've used a following Dockerfile for this:
FROM scardon/ruby-node-alpine
MAINTAINER mbajur#gmail.com
RUN apk add --no-cache build-base python
ENV BUNDLE_PATH /box
RUN mkdir -p /app
WORKDIR /app
COPY . ./
EXPOSE 4567
And my docker-compose.yml
version: '3'
services:
  app: &app_base
    build:
      context: .
    command: webpack --watch -d --color
    volumes:
      - .:/app
      - box:/box
    ports:
      - "4567:4567"
volumes:
  box:
And I'm running my webpack setup with
$ docker-compose up
However, for some reason, webpack can't see changes made to my files. Also, after some googling, I've tried using --watch-poll instead, but then it exits immediately after the first build instead of watching as a daemon.
What can I do to make it work in docker-alpine? I have a feeling I'm missing something simple here.
ps. my project is mostly based on this: https://github.com/grassdog/middleman-webpack
ps2. I'm pretty sure it has nothing to do with the webpack config because it works perfectly fine outside of Docker
ps3. I've also played with setting fs.inotify.max_user_watches to 524288 but it didn't change much

Lift Sails inside Docker container

I know there are multiple examples (actually only a few) out there, and I've looked into some and tried to apply them to my case but then when I try to lift the container (docker-compose up) I end up with more or less the same error every time.
My folder structure is:
sails-project
--app
----api
----config
----node_modules
----.sailsrc
----app.js
----package.json
--docker-compose.yml
--Dockerfile
The docker-compose.yml file:
sails:
  build: .
  ports:
    - "8001:80"
  links:
    - postgres
  volumes:
    - ./app:/app
  environment:
    - NODE_ENV=development
  command: node app
postgres:
  image: postgres:latest
  ports:
    - "8002:5432"
And the Dockerfile:
FROM node:0.12.3
RUN mkdir /app
WORKDIR /app
# the dependencies are already installed in the local copy of the project, so
# they will be copied to the container
ADD app /app
CMD ["/app/app.js", "--no-daemon"]
RUN cd /app; npm i
I also tried having RUN npm i -g sails instead (in the Dockerfile) together with command: sails lift, but I'm getting the same error.
Naturally, I tried different configurations of the Dockerfile with different commands (node app, sails lift, npm start, etc.), but I constantly end up with the same error. Any ideas?
By using command: node app you are overriding CMD ["/app/app.js", "--no-daemon"], which as a consequence has no effect. WORKDIR /app creates the app folder, so you don't have to RUN mkdir /app. And, most importantly, you have to RUN cd /app; npm i before CMD ["/app/app.js", "--no-daemon"]: npm dependencies have to be installed before you start your app.
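Putting that advice together, a sketch of what the Dockerfile could look like (the node:0.12.3 base and folder layout are taken from the question; drop command: node app from docker-compose.yml so the CMD actually applies):
FROM node:0.12.3
WORKDIR /app
# copy the app source into the image
ADD app /app
# install dependencies before the app is started
RUN npm i
# start the app with node (adjust flags as needed)
CMD ["node", "app.js"]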
