How can I use Rush and Docker volumes together?

I am trying to use Rush to handle a monorepo. I saw a recent, similar question at How to build for docker (rush monorepo)?, but it has no answers and is more about build issues than development issues.
Rush uses symlinks to avoid copying the same dependencies across different packages in the same repo.
I am using docker-compose for local dev as I would for any other project. The config is like this:
version: '3'
services:
  web:
    build: .
    image: 'my-image'
    command: "npm start"
    user: "node"
    working_dir: /home/node/app
    volumes:
      - ./:/home/node/app
When I run docker-compose up, it can't find any of my dependencies. If I copy the folder to a random location, run npm install, and try the same thing, it works, because there are no symlinks involved.
I was debating adding a volume for the source location ../../common/temp/node_modules/, but that might be a bit crazy, as it contains the node modules for every package in the repo. The problem is that those files live outside the folder structure of my server/docker package.
Is there some Docker or Rush option I am missing?

This works, but feels wrong. Hoping another user has a better answer.
volumes:
  - ./src/:/home/node/app/src
  - ../../common/temp/node_modules/:/home/node/app/node_modules

I would guess that you have node_modules in your .dockerignore, so the image does not include the output of the install you ran on your host. When the volume is mounted, the empty folder then gets mapped over it.
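For illustration, an entry like this in a .dockerignore would cause exactly that effect (the image is built without node_modules, so the bind mount exposes only an empty directory):
# .dockerignore (hypothetical) - keeps node_modules out of the build context
node_modules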

Related

Docker npm install with volumes mapping

I'm following a tutorial for Docker and Docker Compose. Although there is an npm install command in the Dockerfile (as follows), the tutor still has to run that command manually.
COPY package*.json .
RUN npm install
That's because he maps the project directory into the container with a volume, as follows (so the container effectively runs against the source code on the host):
api:
  build: ./backend
  ports:
    - 3001:3000
  environment:
    DB_URL: mongodb://db/vidly
  volumes:
    - ./backend:/app
So the npm install command in the Dockerfile doesn't have any effect, and he runs the command manually in the root of the project instead.
This means another developer has to run npm install as well (and I have to repeat it whenever I add a new package), which doesn't seem very developer friendly. The point of Docker is that you shouldn't have to carry out these steps yourself; docker-compose up should do everything. Any idea about this problem would be appreciated.
I agree with @zeitounator, who adds very sensible commentary on your situation and usage.
However, if you did want to solve the original problem of running a container that volume-mounts the code and have it run a development server, you could move the npm install from a build-time RUN directive into the CMD, or even add an entrypoint script to the container that makes the npm calls.
That way you could run the container with the volume mount, and the startup steps (npm install, npm run dev, etc.) would happen at runtime rather than at build time.
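A minimal sketch of that approach, assuming a node:18 base image and the /app mount point from the compose file above:
FROM node:18
WORKDIR /app
# nothing is copied or installed at build time; the source arrives
# through the bind mount, so install and start when the container runs
CMD ["sh", "-c", "npm install && npm start"]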
The best solution is, as you mention yourself, Vahid, a smart Dockerfile that leverages sensible build caching and allows the application to be built and run with one command (and no external input). Perhaps you and your tutor can talk through these differences and come to an agreement.
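For reference, a cache-friendly Dockerfile along those lines might look like this (base image and start script are assumptions):
FROM node:18
WORKDIR /app
# copy only the manifests first, so the npm install layer is reused
# from cache until the dependencies themselves change
COPY package*.json ./
RUN npm install
# source edits only invalidate the layers from here down
COPY . .
CMD ["npm", "start"]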

TypeError: Object(...) is not a function when mounting a volume from the local directory into a next.js container

I'm trying to mount a volume from my local directory for Next.js/React's hot reload during development. My current docker-compose.development.yml looks like this:
services:
  web:
    command: next dev
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules
      - /usr/src/app/.next
    depends_on:
      db:
        condition: service_healthy
It extends my main docker-compose with the command docker-compose -f docker-compose.yml -f docker-compose.development.yml up --build:
services:
  web:
    build: .
    command: /bin/sh -c 'npm run build && npm start'
    ports:
      - 3000:3000
      - 5432:5432
    env_file:
      - .env.local
It works fine without the development overrides and without Docker. I believe this problem has to do with running next dev in a container, since the problem persists even after removing the volume bindings. Here is the full call stack; it points to the error being in the src/pages/_app.tsx file.
These are the basic steps to troubleshoot an issue where you can build your project in one environment but not in another.
Make sure npm install is run before the build starts. I cannot tell from the snippets you have shared whether this was done; to build in the container you need the dependencies installed.
Make sure that your package.json is up to date with the versions of the packages/modules installed in the development environment.
If you don't have a package.json, or it was not maintained, you can check this SO post for how to generate it again.
Next, check the native build environment: some modules require a C/C++ toolchain to be present in the image, and quite often specific dev shared libraries need to be installed as well.
Check which dependencies your packages require and which OS libraries the modules need in order to work.
Finally, some modules are OS dependent (e.g. they work only on Windows or only on macOS) or architecture dependent (amd64, arm64, etc.).
Read the information about the package/module and research it on the internet. If you have such modules, you will face challenges packaging them in a container, so the best approach is to refactor them out of your project before you containerize it.
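As an illustration of the toolchain point above, on a Debian-based Node image the native build environment is usually added like this (the exact packages depend on the modules):
RUN apt-get update && apt-get install -y --no-install-recommends \
      build-essential python3 \
  && rm -rf /var/lib/apt/lists/*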
I had NODE_ENV set to production instead of development in my Dockerfile. I assume it was conflicting with one of the steps for hot reloading.
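For reference, the offending line in such a Dockerfile would be the environment setting, e.g.:
# next dev expects a development environment; production disables the hot reload tooling
ENV NODE_ENV=development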

Docker compose volumes_from not updating after rebuild

Imagine two containers: a webserver (1) hosting static HTML files that are built from templates inside a data volume container (2).
The docker-compose.yml file looks something like this:
version: "2"
services:
  webserver:
    build: ./web
    ports:
      - "80:80"
    volumes_from:
      - templates
  templates:
    build: ./templates
The Dockerfile for the templates service looks like this:
FROM ruby:2.3
# ... there is more, but that should not be important
WORKDIR /tmp
COPY ./Gemfile /tmp/Gemfile
RUN bundle install
COPY ./source /tmp/source
RUN bundle exec middleman build --clean
VOLUME /tmp/build
When I run docker-compose up everything is working as expected: templates are built, webserver hosts them and you can view them in the browser.
The problem is that when I update ./source and restart/rebuild the setup, the files the webserver hosts are still the old ones, although the log shows that the container was rebuilt (at least the last three layers after COPY ./source /tmp/source). So the changes inside the source folder are picked up by the rebuild, but I'm not able to get them to show in the browser.
What am I doing wrong?
Compose preserves volumes when containers are recreated, which is probably why you are seeing the old files.
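If you do want Compose to start with fresh volumes, you have to remove them explicitly, for example:
docker-compose down -v   # remove the containers together with their volumes
docker-compose up --build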
Generally it is not a good idea to use volumes for source code (or, in this case, static HTML files). Volumes are for data you want to persist, like data in a database. Source code changes with each version of the image, so it doesn't really belong in a volume.
Instead of using a data volume container for these files, you can use a builder container to compile them and a webserver service to host them. You'll need to add a COPY to the webserver Dockerfile to include the files.
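A sketch of such a webserver Dockerfile, assuming nginx and that the builder's output ends up in ./build inside the build context:
FROM nginx:alpine
# bake the generated static files into the image instead of sharing a volume
COPY ./build /usr/share/nginx/html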
To accomplish this you would change your docker-compose.yml to this:
version: "2"
services:
  webserver:
    image: myapp:latest
    ports: ["80:80"]
Now you just need to build myapp:latest. You could write a script (sketched below) which:
- builds the builder container
- runs the builder container
- builds the myapp container
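A sketch of such a script, with image names taken from the snippets above and the ./build output path as an assumption:
# build the builder image from the templates directory
docker build -t myapp-dev ./templates
# copy the generated site out of a temporary container
docker create --name extract myapp-dev
docker cp extract:/tmp/build ./build
docker rm extract
# build the final webserver image, whose Dockerfile COPYs ./build
docker build -t myapp:latest .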
You can also use a tool like dobi instead of writing a script (disclaimer: I am the author of this tool). There is an example of building a minimal docker image which is very similar to what you're trying to do.
Your dobi.yaml might look something like this:
image=builder:
  image: myapp-dev
  context: ./templates

job=templates:
  use: builder

image=webserver:
  image: myapp
  tags: [latest]
  context: .
  depends: [templates]

compose=serve:
  files: [docker-compose.yml]
  depends: [webserver]
Now if you run dobi serve it will do all the steps for you. Each step will only be run if files have changed.

Why is docker-compose ignoring changes to the Dockerfile?

I have made changes to my Dockerfile, and yet when I run either
docker-compose up
or
docker-compose rm && docker-compose build && docker-compose up
an old image is used: the states of the build steps shown are outdated.
I specifically tell it to build the container in the docker-compose.yml:
my-app:
  build: ./
  hostname: my-app
  ...
Yet when I build the container just via docker:
docker build .
The right container is built. What am I missing? I have tried this to no avail.
Check what dockerfile is configured in your docker-compose.yml.
My app has two dockerfiles, and docker-compose used a different one than docker itself, as it should:
my-app:
  build: ./
  dockerfile: Dockerfile.dev
Adapting that dockerfile as well fixed the problem.
And, oh, if you are using multiple dockerfiles, it's nice to note that in the project's documentation.
I had the same question but my answer was different.
I moved from a large nginx image to a slim, Alpine nginx image.
I thought Docker Compose was ignoring my Dockerfile because the error made it look as though a script had not been copied. The error was:
/bin/sh: /usr/bin/start.sh: not found
Well, the file was there and had the correct permissions. All I needed to do was fix the wrong shebang in my script, and everything worked with docker-compose:
#!/bin/sh worked
#!/bin/bash failed (the slim Alpine image does not ship bash)

Docker VOLUME for different users

I'm using Docker and docker-compose for building my app. There are now two developers on the project, which is hosted on GitHub.
Our project structure is:
sup
  dockerfiles
    dev
      build
        .profile
        Dockerfile
      docker-compose.yml
Now we have ./dockerfiles/dev/docker-compose.yml like this:
app:
  container_name: sup-dev
  build: ./build
and ./dockerfiles/dev/build/Dockerfile:
FROM sup:dev
# docker-compose tries to find .profile relative to build dir:
# ./dockerfiles/dev/build
COPY .profile /var/www/
We run the container like so:
docker-compose up -d
Everything works fine, but due to our different OSes we have the code in different places: /home/aliance/www/project for me and /home/user/other/path/project for the second developer, so I can not just add a volume instruction to the Dockerfile.
Right now we solve this problem in the wrong way:
- I use lsyncd with a personal config to transfer files into the container.
- The second developer uses a volume instruction in the Dockerfile but does not commit it.
Maybe you know how I can write a unified Dockerfile for docker-compose that mounts the code into the app container from these different paths?
The file paths on the host shouldn't matter. Why do you need absolute paths?
You can use paths that are relative to the docker-compose.yml so they should be the same for both developers.
Paths in the Dockerfile (such as COPY sources) are always relative to the build context, so if you want, you can use something like this:
app:
  container_name: sup-dev
  build: ..
  dockerfile: build/Dockerfile
That way the build context for the Dockerfile will be the project root.
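Note that with the context moved to the project root, the COPY source path in the Dockerfile changes accordingly:
# the build context is now the project root, so the path is spelled out in full
COPY dockerfiles/dev/build/.profile /var/www/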
Maybe you should keep your Dockerfile at the root of your project. Then you could add an instruction to the Dockerfile:
COPY ./ /usr/src/app/
or (not recommended in production)
VOLUME /usr/src/app
plus, as an option while running the container (I don't know docker-compose):
-v /path/to/your/code:/usr/src/app
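In docker-compose terms, that -v flag becomes a volumes entry, and since host paths there may be relative to the docker-compose.yml, it works unchanged for both developers (assuming the compose file stays in dockerfiles/dev):
app:
  volumes:
    - ../../:/usr/src/app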
