Building a web app which can perform npm tasks - docker

Before I post any configuration, I'll try to explain what I would like to achieve, and I'd like to mention that I'm new to Docker.
To make talking about paths easier, let's assume the project is called "Docker me up!" and is located in X:\docker-projects\docker-me-up\.
Goal:
I would like to run multiple nginx projects with different content, where each project represents a dedicated build. During development [docker-compose up -d], a container should get updated instantly, which already works fine.
The tricky part is that I want to move npm [http://gruntjs.com] from my host directly into the container/image, so I'm able to debug and develop wherever I am by just installing Docker. Therefore, npm must be installed in a "service" and a watcher needs to be initialized.
Each project is encapsulated in its own folder on the host/build in Docker and should not have any knowledge of anything else but itself.
My solution:
I have tried many different variants, with "volumes_from" etc., but I decided to show this one because it's minimal yet still complete.
docker-compose.yml
version: '2'
services:
  web:
    image: nginx
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
    links:
      - php
  php:
    image: php:fpm
    ports:
      - "9000:9000"
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
  app:
    build: .
    volumes:
      - ./assets:/website/assets
      - ./config:/website/config:ro
      - ./www:/website/www
Dockerfile
FROM debian:jessie-slim
RUN apt-get update && apt-get install -y \
npm
RUN gem update --system
RUN npm install -g grunt-cli grunt-contrib-watch grunt-babel babel-preset-es2015
RUN mkdir -p /website/{assets,assets/es6,config,www,www/js,www/css}
VOLUME /website
WORKDIR /website
Problem:
As you can see, the "app" service contains npm and should be able to execute an npm command. If I run docker-compose up -d, everything works: I can edit the page content, work with it, etc. But the "app" container is not running, and because of that it cannot perform any npm command. Unless I have a huge logic error, which is quite possible ;-)
Environment:
Windows 10 Pro [up2date]
Shared drive for docker is used
Docker version 1.12.3, build 6b644ec
docker-machine version 0.8.2, build e18a919
docker-compose version 1.8.1, build 004ddae

After you call docker-compose up, you can get an interactive shell for your app container with:
docker-compose run app
You can also run one-off commands with:
docker-compose run app [command]
The reason your app container is not running after docker-compose up completes is that your Dockerfile does not define a long-running process for it. For app to run as a service, you need to keep a process running in the foreground of the container by adding something like:
CMD ./run-my-service
to the end of your Dockerfile.
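For example, since the image already installs grunt-cli and grunt-contrib-watch, a minimal sketch (assuming a Gruntfile.js with a "watch" task is present in /website, which is not shown in the question) could be:

# Keep the container in the foreground by running the Grunt watcher.
# Assumes /website contains a Gruntfile.js that defines a "watch" task.
CMD ["grunt", "watch"]

With something like that in place, docker-compose up -d keeps the app container running, and edits to the mounted sources are picked up by the watcher.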

Related

Docker with Ruby on Rails on a development environment

I'm learning Docker and I'm trying to configure a Ruby on Rails project to run on it (on development environment). But I'm having some trouble.
I managed to configure docker-compose to start a container with the terminal open, so I can do bundle install, start a server or use rails generators. However, every time I run the command to start, it starts a new container, where I have to do bundle install again (it takes a while).
So I'd like to know if there is a way to reuse components already created.
Here is my Dockerfile.dev
FROM ruby:2.7.4-bullseye
WORKDIR '/apps/gaia_api'
EXPOSE 3000
RUN gem install rails bundler
CMD ["/bin/bash"]
And here is my docker-compose file:
version: "3.8"
services:
gaia_api:
build:
dockerfile: Dockerfile.dev
context: "."
volumes:
- .:/apps/gaia_api
environment:
- USER_DB_RAILS
- PASSWORD_DB_RAILS
ports:
- "3000:3000"
The command I'm using to run is: docker-compose run --service-ports gaia_api.
I tried to use the docker commands build, create and start, but the volume mapping doesn't work: in the container's terminal, the files from the volume are not there.
The commands I tried:
docker build -t gaia -f Dockerfile.dev .
docker create -v ${pwd}:/apps/gaia_api -it -p 3000:3000 gaia
docker start -i f36d4d9044b08e42b2b9ec1b02b03b86b3ae7da243f5268db2180f3194823e48
There is probably something I still don't understand. So I ask: what's the best way to configure Docker for Ruby on Rails development? And will it be possible to add new services later? (Once I get this first part to work, I plan to add Postgres and a Vue project.)
EDIT: Forgot to say that I'm on macOS Big Sur.
EDIT 2: I found what was wrong with the volumes: I was typing -v ${pwd}:/apps instead of $(pwd):/apps.
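For reference, the corrected sequence from EDIT 2 (same image name and port mapping as in the commands above; <container-id> stands for the ID printed by docker create) looks like:

docker build -t gaia -f Dockerfile.dev .
docker create -v $(pwd):/apps/gaia_api -it -p 3000:3000 gaia
docker start -i <container-id>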

Docker npm install with volumes mapping

I'm following a tutorial for docker and docker compose. Although there is a npm install command in Dockerfile (as following), there is a situation that tutor have to run that command manually.
COPY package*.json .
RUN npm install
That's because he maps the project's current directory into the container with volumes, as follows (so the project effectively runs in the container from source code mapped to the host directory):
api:
  build: ./backend
  ports:
    - 3001:3000
  environment:
    DB_URL: mongodb://db/vidly
  volumes:
    - ./backend:/app
So the npm install command in the Dockerfile doesn't make any sense, and he runs it directly in the root of the project instead.
That means another developer has to run npm install as well (and if I add a new package, I have to do it too), which doesn't seem very developer friendly. The point of Docker is that you shouldn't have to run these steps by hand; docker-compose up should do everything. Any idea about this problem would be appreciated.
I agree with @zeitounator, who adds very sensible commentary to your situation and use case.
However, if you did want to solve the original problem of running a container that volume-mounts in the code and have it run a development server, then you could move the npm install from the build-time RUN instruction to the CMD, or even add an entrypoint script to the container that includes the npm call.
That way you could run the container with the volume mount, and the startup steps (npm install, npm run dev, etc.) would occur at runtime as opposed to build time; a sketch of that idea follows.
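A minimal sketch, assuming the dev server is started with an npm script called dev and the script file is named entrypoint.sh (both assumptions; adjust to whatever the project actually uses):

entrypoint.sh:
#!/bin/sh
# Install dependencies into the volume-mounted source tree at startup,
# then hand off to whatever command the container was given.
npm install
exec "$@"

Dockerfile (relevant part):
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["npm", "run", "dev"]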
The best solution is, as you mention yourself, Vahid, to use a smart Dockerfile that leverages sensible build caching and allows the application to be built and run with one command (and no external input). Perhaps you and your tutor can talk about these differences and come to an agreement.

TypeError: Object(...) is not a function when mounting a volume from the local directory into a next.js container

I'm trying to mount a volume from my local directory for Next.js/React's hot reload during development. My current docker-compose.development.yml looks like this:
services:
  web:
    command: next dev
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules
      - /usr/src/app/.next
    depends_on:
      db:
        condition: service_healthy
It extends my main docker-compose with the command docker-compose -f docker-compose.yml -f docker-compose.development.yml up --build:
services:
  web:
    build: .
    command: /bin/sh -c 'npm run build && npm start'
    ports:
      - 3000:3000
      - 5432:5432
    env_file:
      - .env.local
It works fine without the development overrides and without Docker. I believe this problem has to do with running next dev in a container, as it persists even after removing the volume bindings. Here is the full call stack. It points to the error being in the src/pages/_app.tsx file.
These are the basic steps to troubleshoot an issue where you can build your project in one environment but not in another.
Make sure the npm install was run before the build starts.
I can't tell from the snippets you have shared whether this was done. To build in the container you need to have the dependencies installed.
Make sure that your package.json is up to date with the versions of the packages/modules that are installed in the development environment.
If you don't have the package.json or it was not maintained you can check in this SO post how to generate it again.
Next, check the C/C++ build environment. Some modules require a C/C++ or C#/Mono build environment to be present in the image. Also, quite often specific dev shared libraries need to be installed.
Check which dependencies are required for your packages and which libraries have to be installed in the OS for the modules to work.
Finally, some modules are OS dependent (e.g. they work only on Windows, or only on macOS) or architecture dependent (amd64, arm64, etc.).
Read the information about the package/module and research it on the internet. If you have such modules, you will face challenges packaging them in a container, so the best approach is to refactor them out of your project before you containerize it.
I had NODE_ENV set to production instead of development in my Dockerfile. I assume it was conflicting with one of the steps for hot reloading.
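For a development setup, that fix can also live in docker-compose.development.yml rather than in the Dockerfile; a sketch, reusing the web service from the snippets above:

services:
  web:
    command: next dev
    environment:
      # next dev and hot reloading expect a development build
      - NODE_ENV=development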

Docker: How to update your container when your code changes

I am trying to use Docker for local development. The problem is that when I make a change to my code, I have to run the following commands to see the updates locally:
docker-compose down
docker images # Copy the name of the image
docker rmi <IMAGE_NAME>
docker-compose up -d
That's quite a mouthful, and takes a while. (Possibly I could make it into a bash script, but do you think that is a good idea?)
My real question is: Is there a command that I can use (even manually each time) that will update the image & container? Or do I have to go through the entire workflow above every time I make a change in my code?
Just for reference, here is my Dockerfile and docker-compose.yml.
Dockerfile
FROM node:12.18.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 4000
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
web:
build:
context: .
dockerfile: Dockerfile
container_name: web
restart: always
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
ports:
- "27017:27017"
Even though there are multiple good answers to this question, I think they missed the point, as the OP is asking about the local dev environment. The command I usually use in this situation is:
docker-compose up -d --build
If there aren't any errors in Dockerfile, it should rebuild all the images before bringing up the stack. It could be used in a shell script if needed.
#!/bin/bash
sudo docker-compose up -d --build
If you need to tear down the whole stack, you can have another script:
#!/bin/bash
sudo docker-compose down -v
The -v flag removes all the volumes so you can have a fresh start.
NOTE: In some cases, sudo might not be needed to run the command.
When a Docker image is built, the artifacts are already copied into it, and no new change is reflected until you rebuild the image.
But
If it is only for local development, then you can leverage volume sharing to update code inside the container at runtime. The idea is to share your app/repo directory on the host machine with /usr/src/app (as per your Dockerfile); with this approach your code (and any new changes) will appear on both the host and the running container.
Also, you will need to restart the server on every change; for this you can run your app using nodemon (it watches for changes in the code and restarts the server). A sketch of that setup follows the snippet below.
Changes required in docker-compose.yml:
services:
  web:
    ...
    container_name: web
    ...
    volumes:
      - /path/in/host/machine:/usr/src/app
    ...
    ...
    ports:
      - "3000:3000"
    depends_on:
      - mongo
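To pair the volume mount with automatic restarts, the service command can be overridden to run nodemon. A sketch, assuming nodemon is added as a dev dependency and package.json defines a dev script (both are assumptions; adjust to the actual project):

services:
  web:
    ...
    volumes:
      - /path/in/host/machine:/usr/src/app
    # Assumes package.json contains a script like: "dev": "nodemon ./src/index.js"
    command: npm run dev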
You may use Docker Swarm as an orchestration tool to apply rolling updates. Check Apply rolling updates to a service.
Basically, you issue docker compose up once (perhaps via a shell script), and once your containers are running you can create a Jenkinsfile or configure a CI/CD pipeline that pulls the updated image and applies it to the running service with docker service update.
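For example, assuming the stack was deployed to Swarm as mystack and the web service's image was pushed with a new tag (both names are placeholders), the rolling update would look roughly like:

# Deploy once, e.g.: docker stack deploy -c docker-compose.yml mystack
# Then roll out a newly pushed image to the running service:
docker service update --image myregistry/web:new-tag mystack_web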

Docker Compose as a CI pipeline

So we use GitLab CI. The issue was the pain of having to commit each time we want to test whether or not our build pipeline was configured correctly. Unfortunately, there is no easy way to test GitLab CI locally when our containers/pipeline aren't working right.
Our solution: use docker-compose.yml as a CI pipeline runner for local testing of containerized build steps, why not, ya know? Basically GitLab CI, and most others, have each section spawn a container to run a command and won't continue until the preceding steps complete, i.e. the first step must fully complete before the next step happens.
Here is a simple .gitlab-ci.yml file we use:
stages:
  - install
  - test
cache:
  untracked: true
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/
install:
  image: node:10.15.3
  stage: install
  script: npm install
test:
  image: node:10.15.3
  stage: test
  script:
    - npm run test
  dependencies:
    - install
Here is the docker-compose.yml file we converted it to:
version: "3.7"
services:
install:
image: node:10.15.3
working_dir: /home/node
user: node
entrypoint: npm
command:
- install
volumes:
- .:/home/node:Z
test:
image: node:10.15.3
working_dir: /home/node
user: node
entrypoint: npm
command:
- run
- test
volumes:
- .:/home/node:Z
depends_on:
- install
OK, now for the real issue here. The depends_on part of the compose file doesn't wait for the install container to finish; it just waits for the npm command to be running. Therefore, once the npm command is loaded up and running, the test container starts and complains that there are no node_modules yet. The fact that npm is running does not mean the npm command has actually finished.
Does anyone know any tricks to better control what Docker considers to be "done"? All the solutions I looked into were using some kind of wrapper script that watched a port on the internal Docker network to wait for a service, like a DB, to be fully up and ready.
When using k8s I can set up a readiness probe, which is super dope, but that doesn't seem to be a feature of Docker Compose. Am I wrong here? It would be nice to just write a command which Docker uses to determine what "done" means.
For now we must run each step manually and then run the next when the preceding step is complete like so:
docker-compose up install
wait ....
docker-compose up test
We really just want to say:
docker-compose up
and have all the steps complete in correct order by waiting for preceding steps.
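Worth noting: newer Docker Compose releases (those implementing the Compose specification) support completion conditions on depends_on, which covers exactly this use case. A minimal sketch of the test service, assuming such a Compose version is available (the install service stays as above):

services:
  test:
    image: node:10.15.3
    working_dir: /home/node
    user: node
    entrypoint: npm
    command:
      - run
      - test
    volumes:
      - .:/home/node:Z
    depends_on:
      install:
        # Only start once the install container has exited with status 0
        condition: service_completed_successfully

With that in place, a plain docker-compose up should hold back the test container until the install container finishes successfully.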
I went through the same issue; this is a permission-related thing when you are mapping from your local machine into Docker.
volumes:
  - .:/home/node:Z
Create a file inside the container and check the permissions of that same file on your local machine. If you see that root (or anything other than your current user) is the owner, you first have to run
export DOCKER_USER="$(id -u):$(id -g)"
and change
user: node
to
user: $DOCKER_USER
PS: I'm assuming you can run docker without having to use sudo; just mentioning this because that's the scenario I have.
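Putting that together, the relevant fragment of the compose file would look roughly like this (a sketch; everything else stays as in the file above, and the export must be run in the shell before docker-compose up):

export DOCKER_USER="$(id -u):$(id -g)"

services:
  install:
    image: node:10.15.3
    working_dir: /home/node
    # Run as the host user's UID:GID so files created in the bind mount are owned by you
    user: $DOCKER_USER
    entrypoint: npm
    command:
      - install
    volumes:
      - .:/home/node:Z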
This question was asked many years ago. I now use this project: https://github.com/firecow/gitlab-ci-local
It runs your GitLab pipeline locally using Docker, just as you would expect it to run.
