When I ran docker-compose build and docker-compose up -d, the api-server container didn't start.
I ran
docker logs api-server
and got:
yarn run v1.22.5
$ nest start --watch
/bin/sh: nest: not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
It seems the nest packages weren't installed, because package.json was not copied into the container from the host.
But in my opinion, since the volume is bound by docker-compose.yml, the yarn install command should see the files from - ./api:/src.
Why do we need to COPY files into the container?
Why doesn't the volume binding alone work?
If someone has an opinion, please let me know.
Thanks
The following is the Dockerfile:
FROM node:alpine
WORKDIR /src
RUN rm -rf /src/node_modules
RUN rm -rf /src/package-lock.json
RUN apk --no-cache add curl
RUN yarn install
CMD yarn start:dev
The following is the docker-compose.yml:
version: '3'
services:
  api-server:
    build: ./api
    links:
      - 'db'
    ports:
      - '3000:3000'
    volumes:
      - ./api:/src
      - ./src/node_modules
    tty: true
    container_name: api-server
Volumes are mounted at runtime, not at build time. Therefore, in your case, you should copy package.json into the image before installing dependencies or running any command that needs them.
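For example, a corrected Dockerfile might look like this (a sketch only, assuming package.json sits at the root of the ./api build context; add yarn.lock to the COPY if you have one):
FROM node:alpine
WORKDIR /src
RUN apk --no-cache add curl
# Copy the manifest into the image so yarn can see it at build time
COPY package.json ./
RUN yarn install
# Copy the rest of the application source into the image
COPY . .
CMD yarn start:dev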
Some references:
Docker build using volumes at build time
Can You Mount a Volume While Building Your Docker Image to Cache Dependencies?
I am new to Docker. I am trying to create a docker image for a NodeJS project which I will upload/host on a Docker repository. When I execute docker-compose up -d, everything works fine and I can access the NodeJS server hosted inside the docker containers. After that, I stopped all containers and tried to create a docker image from my Dockerfile using the following commands:
docker build -t adonis-app .
docker run adonis-app
The first command executes without any error but the second command throws this error:
> adonis-fullstack-app@4.1.0 start /app
> node server.js
internal/modules/cjs/loader.js:983
throw err;
^
Error: Cannot find module '/app/server.js'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:980:15)
at Function.Module._load (internal/modules/cjs/loader.js:862:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! adonis-fullstack-app@4.1.0 start: `node server.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the adonis-fullstack-app@4.1.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /app/.npm/_logs/2020-02-09T17_33_22_489Z-debug.log
Can someone help me with this error and tell me what is wrong with it?
Dockerfile I am using:
FROM node
ENV HOME=/app
RUN mkdir /app
ADD package.json $HOME
WORKDIR $HOME
RUN npm i -g @adonisjs/cli && npm install
CMD ["npm", "start"]
docker-compose.yaml
version: '3.3'
services:
  adonis-mysql:
    image: mysql:5.7
    ports:
      - '3306:3306'
    volumes:
      - $PWD/data:/var/lib/mysql
    environment:
      MYSQL_USER: ${DB_USER}
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_RANDOM_ROOT_PASSWORD: 1
    networks:
      - api-network
  adonis-api:
    container_name: "${APP_NAME}-api"
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3333:3333"
    depends_on:
      - adonis-mysql
    networks:
      - api-network
networks:
  api-network:
Your Dockerfile is missing a COPY step to actually copy your application code into the image. When you docker run the image, there's no actual source code to run, and you get the error you're seeing.
Your Dockerfile should look more like:
FROM node
# WORKDIR creates the directory; no need to set $HOME
WORKDIR /app
COPY package.json package-lock.json ./
# all of your dependencies are in package.json
RUN npm install
# actually copy the application in
COPY . .
CMD ["npm", "start"]
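With that Dockerfile, the two commands from your question should work, with one addition: publish the port so you can reach the service (3333, taken from your docker-compose.yml):
docker build -t adonis-app .
docker run -p 3333:3333 adonis-app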
Now that your Docker image is self-contained, you don't need the volumes: that try to inject host content into it. You can also safely rely on several of Docker Compose's defaults (the default network and the generated container_name: are both fine to use). A simpler docker-compose.yml looks like
version: '3.3'
services:
  adonis-mysql:
    image: mysql:5.7
    # As you have it, except delete the networks: block
  adonis-api:
    build: .  # this directory is context:, use the default dockerfile:
    ports:
      - "3333:3333"
    depends_on:
      - adonis-mysql
There are several key problems in the set of artifacts and commands you show:
docker run and docker-compose ... are separate commands. The docker run command you show runs the image as-is, with its default command, no volumes mounted, and no ports published. docker run doesn't know about the docker-compose.yml file, so whatever options you have specified there won't have any effect. You probably mean docker-compose up, which will also start the database. (Remember that your application may need to retry its database connection several times while the database starts up; that often takes 30-60 seconds.)
If you're planning to push the image, you need to include the source code. You're essentially creating two separate artifacts in this setup: a Docker image with Node and some libraries, and a Javascript application on your host. If you docker push the image, it won't include the application (because you're not COPYing it in), so you'll also have to distribute the source code separately. At that point there's not much benefit to using Docker; an end user may as well install Node, clone your repository, and run npm install themselves.
You're preventing Docker from seeing library updates. Putting node_modules in an anonymous volume is a popular setup, and Docker will copy content from the image into that directory the first time you run the application. The second time you run the application, Docker sees the directory already exists, assumes it contains valuable user data, and refuses to update it. This leads to SO questions along the lines of "I updated my package.json file but my container isn't updating"; one way out is shown below.
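If you do keep the anonymous node_modules volume, one workaround (a sketch, not the only option) is to remove the stale volume so it is repopulated from the freshly built image:
docker-compose down -v      # also deletes the anonymous node_modules volume
docker-compose up --build   # rebuild the image and recreate the volume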
Your docker-compose.yaml file has two services:
adonis-mysql
adonis-api
Only the second service uses the current Dockerfile, as can be seen in the following section:
build:
  context: .
  dockerfile: Dockerfile
The docker build . command only builds the image from the current Dockerfile, i.e. the adonis-api service, and that is the image you then run.
So most probably it is the missing mysql service that is giving you the error. You can verify by running
docker ps -a
to check whether the sql container is also running. Hope it helps.
Conclusion: Use docker-compose.
I am trying to dockerize my React-Flask app by dockerizing each one of them and using docker-compose to put them together.
Here is what the Dockerfile for each app looks like:
React - Frontend
FROM node:latest
WORKDIR /frontend/
ENV PATH /frontend/node_modules/.bin:$PATH
COPY package.json /frontend/package.json
COPY . /frontend/
RUN npm install --silent
RUN npm install react-scripts#3.0.1 -g --silent
CMD ["npm", "run", "start"]
Flask - Backend
#Using ubuntu as our base
FROM ubuntu:latest
#Install commands in ubuntu, including pymongo for DB handling
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
RUN python -m pip install pymongo[srv]
#Unsure of COPY command's purpose, but WORKDIR points to /backend
COPY . /backend
WORKDIR /backend/
RUN pip install -r requirements.txt
#Run order for starting up the backend
ENTRYPOINT ["python"]
CMD ["app.py"]
Each of them works fine when I just use docker build and docker run; I've checked that they work fine when they are built and run independently. However, when I run docker-compose up with the following docker-compose.yml,
# Docker Compose
version: '3.7'
services:
  frontend:
    container_name: frontend
    build:
      context: frontend
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - '.:/frontend'
      - '/frontend/node_modules'
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    volumes:
      - .:/code
I get the error below:
Starting frontend ... error
Starting dashboard_backend_1 ...
ERROR: for frontend Cannot start service sit-frontend: error while creating mount source path '/host_mnt/c/Users/myid/DeskStarting dashboard_backend_1 ... error
ERROR: for dashboard_backend_1 Cannot start service backend: error while creating mount source path '/host_mnt/c/Users/myid/Desktop/dashboard': mkdir /host_mnt/c: file exists
ERROR: for frontend Cannot start service frontend: error while creating mount source path '/host_mnt/c/Users/myid/Desktop/dashboard': mkdir /host_mnt/c: file exists
ERROR: for backend Cannot start service backend: error while creating mount source path '/host_mnt/c/Users/myid/Desktop/dashboard': mkdir /host_mnt/c: file exists
ERROR: Encountered errors while bringing up the project.
Did this happen because I am using Windows? What can be the issue? Thanks in advance.
For me, the only thing that worked was restarting the Docker daemon.
Check if this is related to docker/for-win issue 1560
I had the same issue. I was able to resolve it by running:
docker volume rm -f [name of docker volume with error]
Then restarting docker, and running:
docker-compose up -d --build
I tried the same steps with only a computer restart instead of a Docker restart, and that didn't resolve the issue.
What resolved it for me was removing the volume with the error, restarting Docker, then building again.
Other cause:
On Windows this may be due to a user password change. Uncheck the box to stop sharing the drive and then allow Docker to detect that you are trying to mount the drive and share it.
Also mentioned:
I just ran docker-compose down and then docker-compose up. Worked for me.
I tried docker container prune and pressed y to remove all stopped containers. The issue went away.
I saw this after I deleted a folder I'd shared with docker and recreated one with the same name. I think this deleted the permissions. To resolve it I:
Unshared the folder in docker settings
Restarted docker
Ran docker container prune
Ran docker-compose build
Ran docker-compose up.
Restarting the docker daemon will work.
I am working on creating a docker container for a node.js microservice and am running into an issue with a local dependency from another folder.
I added the dependency to the node_modules folder using:
npm install -S ../dependency1
(dependency1 here is the module name.) This also added an entry in the package.json as follows:
"dependency1": "file:../dependency1"
When I run the docker-compose up -d command, I receive an error indicating the following:
npm ERR! Could not install from "../dependency1" as it does not contain a package.json file.
Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
CMD [ "npm", "start" ]
EXPOSE 3000
docker-compose.yml:
customer:
  container_name: "app_customer"
  build:
    context: .
    dockerfile: Dockerfile
  volumes:
    - .:/usr/src/app/
    - /usr/src/app/node_modules
  ports:
    - "3000:3000"
  depends_on:
    - mongo
    - rabbitmq
I found articles outlining an issue with symlinks in a node_modules folder under Docker, and a few describing this exact problem, but none seem to provide a solution. I am looking for a solution or a really good workaround.
A Docker build can't reference files outside of the build context, which is the . defined in the docker-compose.yml file.
docker build creates a tar bundle of all the files in a build context and sends that to the Docker daemon for the build. Anything outside the context directory doesn't exist to the build.
You could move your build context to the parent directory with context: ../, and shuffle all the paths you reference in the Dockerfile to match. Just be careful not to make the build context too large, as that can slow down the build process.
The other option is to publish the private npm modules to a scope, possibly on a private npm registry that you and the build server have access to, and install the dependencies normally.
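As a sketch of the first option (the service directory name customer-service is hypothetical; the layout assumes dependency1 is a sibling of it under the parent directory):
# docker-compose.yml
customer:
  build:
    context: ..
    dockerfile: customer-service/Dockerfile
# Dockerfile, with paths now relative to the parent directory
FROM node:latest
WORKDIR /usr/src/app
COPY dependency1 /usr/src/dependency1
COPY customer-service /usr/src/app
RUN npm install
EXPOSE 3000
CMD [ "npm", "start" ]
With this layout, "file:../dependency1" resolves to /usr/src/dependency1 inside the image.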
I have just started learning Docker and have run into an issue I don't know how to get around.
My Dockerfile looks like this:
FROM node:7.0.0
WORKDIR /app
COPY app /app
COPY hermes-entry /usr/local/bin
RUN chmod +x /usr/local/bin/hermes-entry
COPY entry.d /entry.d
RUN npm install
RUN npm install -g gulp
RUN npm install gulp
RUN gulp
My docker-compose.yml looks like this:
version: '2'
services:
  hermes:
    build: .
    container_name: hermes
    volumes:
      - ./app:/app
    ports:
      - "4000:4000"
    entrypoint: /bin/bash
    links:
      - postgres
    depends_on:
      - postgres
    tty: true
  postgres:
    image: postgres
    container_name: postgres
    volumes:
      - ~/.docker-volumes/hermes/postgresql/data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - "2345:5432"
After starting the containers up with:
docker-compose up -d
I tried running a simple bash cmd:
docker-compose run hermes ls
And I got this error:
/bin/ls cannot execute binary file
Any idea on what I am doing wrong?
The entrypoint to your container is bash. By default bash expects a shell script as its first argument, but /bin/ls is a binary, as the error says. If you want to run /bin/ls you need to use -c /bin/ls as your command. -c tells bash that the rest of the arguments are a command line rather than the path of a script, and the command line happens to be a request to run /bin/ls.
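For example, with the entrypoint above, either of these works:
docker-compose run hermes -c ls
docker-compose run --entrypoint /bin/ls hermes
The first passes -c ls to bash; the second bypasses the bash entrypoint entirely.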
You can't run Gulp and Node at the same time in one container; containers should generally run one process each.
If you just want Node to serve files, remove the entrypoint from the hermes service.
You can add another service to run gulp. If you are having it run tests, map the same volume and add command: ["gulp"], as sketched below.
You'd also need to remove RUN gulp from your Dockerfile (unless you are using it to build your node files), then run docker-compose up.
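A sketch of that split (the gulp service name is illustrative; both services build the same image and share the source volume):
version: '2'
services:
  hermes:
    build: .
    volumes:
      - ./app:/app
    ports:
      - "4000:4000"
  gulp:
    build: .
    command: ["gulp", "watch"]
    volumes:
      - ./app:/app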
I have a Dockerfile I'm pointing at from a docker-compose.yml.
I'd like the volume mount in the docker-compose.yml to happen before the RUN in the Dockerfile.
Dockerfile:
FROM node
WORKDIR /usr/src/app
RUN npm install --global gulp-cli \
&& npm install
ENTRYPOINT gulp watch
docker-compose.yml
version: '2'
services:
  build_tools:
    build: docker/gulp
    volumes_from:
      - build_data:rw
  build_data:
    image: debian:jessie
    volumes:
      - .:/usr/src/app
It makes complete sense for Docker to process the Dockerfile first and only then mount volumes from docker-compose; however, is there a way around this?
I want to keep the Dockerfile generic while passing the more specific bits in from Compose. Perhaps that's not best practice?
Erik Dannenberg's answer is correct: the way volume layering works means that what I was trying to do makes no sense. (There is another really good explanation on the Docker website if you want to read more.) If I wanted Docker to do the npm install, I could do it like this:
FROM node
ADD . /usr/src/app
WORKDIR /usr/src/app
RUN npm install --global gulp-cli \
&& npm install
CMD ["gulp", "watch"]
However, this isn't appropriate as a solution for my situation. The goal is to use NPM to install project dependencies, then run gulp to build my project. This means I need read and write access to the project folder and it needs to persist after the container is gone.
I need to do two things after the volume is mounted, so I came up with the following solution...
docker/gulp/Dockerfile:
FROM node
RUN npm install --global gulp-cli
ADD start-gulp.sh .
CMD ./start-gulp.sh
docker/gulp/start-gulp.sh:
#!/usr/bin/env bash
until cd /usr/src/app && npm install
do
echo "Retrying npm install"
done
gulp watch
docker-compose.yml:
version: '2'
services:
  build_tools:
    build: docker/gulp
    volumes_from:
      - build_data:rw
  build_data:
    image: debian:jessie
    volumes:
      - .:/usr/src/app
So now the container starts a bash script that will continuously loop until it can get into the directory and run npm install. This is still quite brittle, but it works. :)
You can't mount host folders or volumes during a Docker build; allowing that would compromise build repeatability. The only way to access local data during a Docker build is through the build context, which is everything in the PATH or URL you passed to the build command. Note that the Dockerfile needs to exist somewhere in the context. See https://docs.docker.com/engine/reference/commandline/build/ for more details.
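One hedged aside: newer Docker versions with BuildKit enabled do support cache mounts during RUN steps, but these are build caches managed by Docker, not host bind mounts, so the repeatability point still stands. A minimal sketch:
# syntax=docker/dockerfile:1
FROM node
WORKDIR /usr/src/app
COPY package.json ./
# The npm download cache persists across builds in a BuildKit-managed cache
# but never ends up in the final image (requires DOCKER_BUILDKIT=1)
RUN --mount=type=cache,target=/root/.npm npm install
COPY . .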