Docker npm install with volumes mapping - docker

I'm following a tutorial for Docker and Docker Compose. Although there is an npm install command in the Dockerfile (as follows), at one point the tutor has to run that command manually.
COPY package*.json .
RUN npm install
That's because he maps the project's current directory into the container with a volume, as follows (so the project effectively runs in the container from source code mapped to the host directory):
api:
  build: ./backend
  ports:
    - 3001:3000
  environment:
    DB_URL: mongodb://db/vidly
  volumes:
    - ./backend:/app
So the npm install command in the Dockerfile doesn't make any sense, and he runs the command directly in the root of the project instead.
That means another developer has to run npm install as well (and if I add a new package, I have to run it again too), which doesn't seem very developer friendly. The point of Docker is that you shouldn't have to carry out the instructions yourself; docker-compose up should do everything. Any idea about this problem would be appreciated.

I agree with @zeitounator, who adds very sensible commentary to your situation and use.
However, if you did want to solve the original problem of running a container that volume-mounts in code and have it run a development server, then you could move the npm install from a build-time RUN instruction into the CMD, or even add an entrypoint script to the container that includes the npm call.
That way you could run the container with the volume mount, and the starting process (npm install, npm run dev, etc.) would occur at run time as opposed to build time.
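For example, a minimal sketch of the entrypoint variant, assuming the tutorial's /app working directory and a "start" script in package.json (both assumptions, not confirmed by the question):
Dockerfile (sketch)
FROM node:18-alpine
WORKDIR /app
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["sh", "/usr/local/bin/docker-entrypoint.sh"]
docker-entrypoint.sh (sketch)
#!/bin/sh
# Runs at container start, after ./backend is mounted over /app,
# so node_modules ends up inside the mounted source tree.
npm install
exec npm start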
The best solution is, as you mention yourself, Vahid, to use a smart Dockerfile that leverages sensible build caching and allows the application to be built and run with one command (and no external input); a sketch of that layering follows. Perhaps you and your tutor can talk about these differences and come to an agreement.
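For reference, a minimal sketch of that cache-friendly layering (the base image and start command are assumptions):
FROM node:18-alpine
WORKDIR /app
# Copy only the manifests first so this layer stays cached until they change
COPY package*.json ./
RUN npm install
# Copy the rest of the source; edits here don't invalidate the npm install layer
COPY . .
CMD ["npm", "start"]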

Related

Managing Docker build steps files and using local volume

So I have a Dockerfile with the following build steps:
FROM composer:latest AS composer
COPY ./src/composer.json ./src/composer.lock ./
RUN composer install --prefer-dist
FROM node:latest AS node
COPY ./src ./
RUN yarn install
RUN yarn encore prod
FROM <company image>/php74 as base
COPY --from=composer --chown=www-data:www-data /app/vendor /var/www/vendor
COPY --from=node --chown=www-data:www-data ./public/build /var/www/public/build
# does the rest of the build...
and in my docker-compose file, I've got a volume for local changes
volumes:
  - ./src:/var/www
The container runs fine on the CI/CD pipeline and deploys just fine; it grabs everything it needs and COPYs the correct files into the src directory. The problem is when we use a local volume for the code (for working in development): we have to run composer/yarn install on the host because the src folder does not contain node_modules/ or vendor/.
Is there a way to publish the node_modules/vendor directory back to the volume?
My attempts have been within the Dockerfile, publishing node_modules and vendor as volumes, and that didn't work. Maybe it's not possible to publish a volume inside another volume? I.e., within the Dockerfile: VOLUME /vendor
The only other way I can think of to solve this would be to write a bash script that runs composer in the container on docker-compose up, but that would make the build step pointless.
Hopefully I've explained what I'm trying to achieve here. Thanks.
You should delete that volumes: block, especially in a production environment.
A typical workflow here is that your CI system produces a self-contained Docker image. You can run it in a pre-production environment, test it, and promote that exact image to production. You do not separately need to copy the code around to run the image in various environments.
What that volumes: declaration says is to replace the entire /var/www directory – everything the Dockerfile copies into the image – with whatever happens to be in ./src on the local system. If you move the image between systems you could potentially be running completely different code with a different filesystem layout and different installed packages. That's not what you really want. Instead of trying to sync the host content from the image, it's better to take the host filesystem out of the picture entirely.
Especially since your developers already need Composer and Node installed on their host system, they can just use that set of host tools for day-to-day development, setting environment variables to point at data stores in containers as required. If it's important to do live development inside a container, you can put the volumes: block (only) in a docker-compose.override.yml file that isn't deployed to the production systems; but you still need to be aware that you're "inside a container" without actually running the system in the form it would take in production.
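A rough sketch of that split, with an assumed service name since the question doesn't show one:
docker-compose.yml (deployed everywhere)
services:
  app:
    build: .
docker-compose.override.yml (developer machines only; docker-compose merges it in automatically)
services:
  app:
    volumes:
      - ./src:/var/www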
You definitely do not want a Dockerfile VOLUME for your libraries or code. It has few obvious effects; its most notable ones are to prevent later RUN commands from changing that directory, and (if you're running in Compose) to cause changes in the underlying image to be ignored. Its actual effect is to cause Docker to create an anonymous volume for that directory if nothing else is already mounted there, which then generally behaves like a named volume. Declaring a VOLUME or not isn't necessary to mount content into the container and doesn't affect the semantics if you do so.

TypeError: Object(...) is not a function when mounting a volume from the local directory into a next.js container

I'm trying to mount a volume from my local directory for Next.js/React's hot reload during development. My current docker-compose.development.yml looks like this:
services:
  web:
    command: next dev
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules
      - /usr/src/app/.next
    depends_on:
      db:
        condition: service_healthy
It extends my main docker-compose.yml, which I run with the command docker-compose -f docker-compose.yml -f docker-compose.development.yml up --build:
services:
  web:
    build: .
    command: /bin/sh -c 'npm run build && npm start'
    ports:
      - 3000:3000
      - 5432:5432
    env_file:
      - .env.local
It works fine without the development overrides and without docker. I believe this problem has to do with running next dev in a container as the problem persists even after removing the volume bindings. Here is the full call stack. It points to the error being in the src/pages/_app.tsx file.
These are the basic steps to troubleshoot an issue when you can build your project in one environment but are not able to do it in another.
Make sure npm install was run before the build starts.
I cannot tell from the snippets you have shared whether this was done. To build in the container you need to have the dependencies installed.
Make sure that your package.json is up to date with the versions of the packages/modules that are installed in the development environment.
If you don't have the package.json or it was not maintained, you can check this SO post for how to generate it again.
Next, check the C/C++ build environment: some modules require a C/C++ toolchain (or other build environments) to be present in the image. Also, quite often specific dev shared libraries have to be installed in the OS.
Check which dependencies your packages require and which libraries need to be installed in the OS for the modules to work.
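For example, on an Alpine-based Node image the toolchain for native (node-gyp) modules typically looks like this; the exact package list is an illustration, not taken from the question:
# Dockerfile: compilers and headers needed to build native addons on Alpine
RUN apk add --no-cache python3 make g++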
Finally, some modules are OS dependent (e.g. they work only on Windows or only on macOS), or architecture dependent (amd64, arm64, etc.).
Read the information about the package/module and research it on the internet. If you have such modules, you will face challenges packaging them in a container, so the best approach is to refactor them out of your project before you containerize it.
I had NODE_ENV set to production instead of development in my Dockerfile. I assume it was conflicting with one of the steps for hot reloading.
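In Dockerfile terms, the fix described above amounts to something like this in the development image (a sketch, not the poster's actual file):
# NODE_ENV=production disables development-only behaviour such as hot reloading;
# use development (or leave it unset) in the image that runs `next dev`
ENV NODE_ENV=development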

NodeJS is not detecting change in Docker Bind Mount until Swarm is restarted

I'm building a NodeJS application with Docker in Swarm mode (single node). I'm using a bind mount volume for the NodeJS source code. Everything runs perfectly and I can see the output on localhost from NodeJS and Express, but when I change something in the NodeJS code (which is in a bind mount volume), nothing changes; I have to restart my service to observe the changes. Earlier, when I was working with Docker Compose only, this never happened, but now that I have switched to Swarm I'm experiencing problems.
I'm using Docker 18 with Visual Studio Code 1.39 on macOS 10.14.6
Dockerfile
FROM node:12-alpine
WORKDIR /node-dir
COPY package*.json ./
RUN npm install
docker-compose.yml file
# docker-compose.yml
version: '3.7'
services:
  node-service:
    image: node-img:1.0
    ports:
      - 80:3000
    working_dir: "/node-dir"
    volumes:
      - ./node-dir/source:/node-dir/source
    networks:
      - ness-net
    command: npm start
networks:
  ness-net:
I also read that it could be due to inodes: most editors break the inode link when saving a file. But it was working correctly under docker-compose with Visual Studio Code; its behaviour changed only under Docker Swarm.
Update: I served a static HTML file using Nginx with a bind mount, and I can easily change that file using VS Code and the change is reflected. It's only NodeJS that is not detecting changes to a file.
If your volume mapping is correct, the source code changes should reach your node.js app container.
You can verify it by inspecting the source code inside the container after you make a change on docker host.
I'm currently in development mode, and I have to test the source code
repeatedly so I want to use bind mounts to make development and
testing easier.
However, your source code changes won't take effect until the node process inside the container reloads and picks up the changes.
In order to achieve this you have to use nodemon. Nodemon will pick up changes in the source code and reload the node process with them.
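A minimal sketch of wiring that into the existing service; the entry file name is an assumption, and --legacy-watch forces polling, which helps when file-change events don't propagate through the mount:
# docker-compose.yml (sketch) - nodemon must be installed in the image or as a dev dependency
node-service:
  command: npx nodemon --legacy-watch source/index.js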
Another, longer alternative would be building a new Docker image and then updating your app using: docker service update --image=...
You can also use tilt to automate all of the above actions.

Composer install doesn't install packages when running in Dockerfile

I cannot seem to run composer install in Dockerfile but I can in the container after building an image and running the container.
Here's the command from Dockerfile:
RUN composer require drupal/video_embed_field:1.5
RUN composer install --no-autoloader --no-scripts --no-progress
The output is:
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Nothing to install or update
But after running the container with docker-compose:
...
drupal:
  image: docker_image
  container_name: container
  ports:
    - 8081:80
  volumes:
    - ./container/modules:/var/www/html/web/modules
  links:
    # Link the DB container:
    - db
running composer install inside the container with docker exec installs the packages correctly:
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 1 installs, 0 updates, 0 removals
...
Generating autoload files
I am assuming that the composer.json and composer.lock files are correct because I can run the composer install command in the container without any further effort, but only after the container is running.
Update
Tried combining the composer commands with:
RUN composer require drupal/video_embed_field:1.5 && composer install
Same issue, "Nothing to install or update". Ultimately I would like to continue using separate RUN statements in the Dockerfile to take advantage of Docker's layer caching.
Your issue comes from the fact that docker-compose is meant to orchestrate the build and run of multiple Docker containers at the same time, and it doesn't make it easy for people starting with Docker to see what it does behind the scenes.
Behind a docker-compose up there are four steps:
docker-compose build: if needed (and if there is no existing image yet), build the image(s)
docker-compose create: if needed (and if there is no existing container yet), create the container(s)
docker-compose start: start the existing container(s)
docker-compose logs: log the stderr and stdout of the containers
So what you have to spot there is that the actions contained in your Dockerfile are executed at the image creation step, while the mounting of folders is executed at the container start step.
So when you use a RUN command, which is part of the image creation step, on files like composer.lock and composer.json that are only mounted at the start step, you end up with nothing for composer to install, because your files are not mounted anywhere yet.
If you COPY those files, that may actually get you somewhere, because you will then have the composer files as part of your image.
That said, be careful: the mounted source folder will completely override the mount point, so you could end up expecting a vendor folder and not having it.
What you should ideally do is run it in the ENTRYPOINT, which happens at the last step of container startup.
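A rough sketch of both ideas together, with assumed paths rather than the asker's exact setup:
Dockerfile (sketch)
COPY composer.json composer.lock /var/www/html/
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["sh", "/usr/local/bin/docker-entrypoint.sh"]
docker-entrypoint.sh (sketch)
#!/bin/sh
# Runs after the volumes are mounted, so vendor/ is created in the mounted tree
composer install --no-progress
# Hand off to whatever CMD the image defines
exec "$@"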
Here is a little comparison for developers: a Docker image is to a Docker container what a class is to an instance of a class, i.e. an object.
Your containers are all created from images that were possibly built a long time before.
Most of the steps in your Dockerfile happen at image creation, not at container boot time.
Most of the instructions in docker-compose, on the other hand, are aimed at automating the container creation, which includes the mounting of folders.
Just noting a docker-compose.yml approach to the issue when the volume mount overwrites the composer files inside the container:
docker-compose.yml
environment:
  PROJECT_DIR: /var/www/html
volumes:
  - ./php/docker/entrypoint/90-composer-install.sh:/docker-entrypoint-init.d/90-composer-install.sh
composer-install.sh
#!/usr/bin/env bash
cd ${PROJECT_DIR}
composer install
This runs composer install after the build, using the docker-entrypoint-init.d shell script.

Building a web app which can perform npm tasks

Before I post any configuration, let me explain what I would like to achieve, and mention that I'm new to Docker.
To make talking about paths easier, let's assume the project is called "Docker me up!" and is located at X:\docker-projects\docker-me-up\.
Goal:
I would like to run multiple nginx projects with different content; each project represents a dedicated build. During development (docker-compose up -d) a container should get updated instantly, which works fine.
The tricky part is that I want to move npm [http://gruntjs.com] from my host directly into the container/image, so I'm able to debug and develop wherever I am just by installing Docker. Therefore, npm must be installed in a "service" and a watcher needs to be initialized.
Each project is encapsulated in its own folder on the host/build in Docker and should not have any knowledge of anything but itself.
My solution:
I have tried many different versions, with "volumes_from" etc., but I decided to show you this one because it's minimal but still complete.
docker-compose.yml
version: '2'
services:
  web:
    image: nginx
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
    links:
      - php
  php:
    image: php:fpm
    ports:
      - "9000:9000"
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
  app:
    build: .
    volumes:
      - ./assets:/website/assets
      - ./config:/website/config:ro
      - ./www:/website/www
Dockerfile
FROM debian:jessie-slim
RUN apt-get update && apt-get install -y \
    npm
RUN gem update --system
RUN npm install -g grunt-cli grunt-contrib-watch grunt-babel babel-preset-es2015
RUN mkdir -p /website/{assets,assets/es6,config,www,www/js,www/css}
VOLUME /website
WORKDIR /website
Problem:
As you can see, the "app" service contains npm and should be able to execute npm commands. If I run docker-compose up -d, everything else works: I can edit the page content, work with it, etc. But the app container is not running, and because of that it cannot perform any npm command. Unless I have a huge logic error, which is quite possible ;-)
Environment:
Windows 10 Pro [up2date]
Shared drive for docker is used
Docker version 1.12.3, build 6b644ec
docker-machine version 0.8.2, build e18a919
docker-compose version 1.8.1, build 004ddae
After you call docker-compose up, you can get an interactive shell for your app container with:
docker-compose run app
You can also run one-off commands with:
docker-compose run app [command]
The reason your app container is not running after docker-compose up completes is that your Dockerfile does not define a service. For app to run as a service, you would need to keep a process running in the foreground of the container by adding something like:
CMD ./run-my-service
to the end of your Dockerfile.
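For this particular setup, that last line could be a foreground watcher; the Grunt task name here is an assumption:
# Keeps a process in the foreground so the app container stays running
CMD ["grunt", "watch"]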
