This is my first Docker setup with several services running. I've spent a while on this but cannot pinpoint the problem. Below is what I think is the cause.
Why does the Node app not work/start?
web_1 | npm ERR! code ENOENT
web_1 | npm ERR! syscall open
web_1 | npm ERR! path /app/http/app/package.json
web_1 | npm ERR! errno -2
web_1 | npm ERR! enoent ENOENT: no such file or directory, open '/app/http/app/package.json'
web_1 | npm ERR! enoent This is related to npm not being able to find a file.
web_1 | npm ERR! enoent
web_1 |
web_1 | npm ERR! A complete log of this run can be found in:
web_1 | npm ERR! /root/.npm/_logs/2020-12-27T23_32_03_845Z-debug.log
Why does it not see the file? My setup:
docker-compose.yml
version: '3'
services:
  mongo:
    image: mongo
    restart: always
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: mongo_user
      MONGO_INITDB_ROOT_PASSWORD: mongo_secret
  api:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - "4433:4433"
    depends_on:
      - rabbit
    volumes:
      - .:/app
  web:
    build:
      context: .
      dockerfile: Dockerfile1
    restart: always
    ports:
      - "8080:8080"
    depends_on:
      - api
    volumes:
      - .:/app
  rabbit:
    hostname: rabbit
    image: rabbitmq:management
    environment:
      - RABBITMQ_DEFAULT_USER=rabbitmq
      - RABBITMQ_DEFAULT_PASS=rabbitmq
    ports:
      - "5673:5672"
      - "15672:15672"
  worker_1:
    build:
      context: .
    hostname: worker_1
    entrypoint: celery
    command: -A workerA worker --loglevel=info -Q workerA
    volumes:
      - .:/app
    links:
      - rabbit
    depends_on:
      - rabbit
Dockerfile
FROM python:3.8
ADD Pipfile.lock /app/Pipfile.lock
ADD Pipfile /app/Pipfile
WORKDIR /app
COPY . /app
RUN pip install pipenv
RUN pipenv install --system --deploy --ignore-pipfile
ENV FLASK_APP=app/http/api/endpoints.py
ENV FLASK_RUN_PORT=4433
ENV FLASK_ENV=development
ENTRYPOINT ["python"]
#CMD ["app/http/api/endpoints.py","--host=0.0.0.0","--port 4433"]
CMD ["-m", "flask", "run"]
Dockerfile1
FROM node:10
WORKDIR /app/http/app
ADD app/http/app/package.json /app/http/app/package.json
ADD app/http/app/package-lock.json /app/http/app/package-lock.json
RUN npm i
CMD ["npm","start"]
How do I make such a setup with Flask, RabbitMQ, and React properly?
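A likely culprit here is the web service's volume: the image installs node_modules into /app/http/app during build, but the bind mount - .:/app replaces /app with the host directory at run time. A sketch of a possible fix, assuming the host repo really contains app/http/app with its package.json, is to add an anonymous volume so the modules installed at build time survive the mount:

```yaml
web:
  build:
    context: .
    dockerfile: Dockerfile1
  restart: always
  ports:
    - "8080:8080"
  depends_on:
    - api
  volumes:
    - .:/app
    - /app/http/app/node_modules   # anonymous volume: keeps modules installed during build
```

If the host's app/http/app does not actually contain package.json, then the mount itself is what hides the file, and the volume line should be removed or narrowed instead.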
Related
I have a docker setup with AdonisJS 5. I'm currently trying to get it started (and it builds just fine), but as discussed a bit further down, the ace command cannot be found. ace is the CLI package for AdonisJS (think ng for Angular).
This is my Dockerfile:
ARG NODE_IMAGE=node:16.13.1-alpine
FROM $NODE_IMAGE AS base
RUN apk --no-cache add dumb-init
RUN mkdir -p /home/node/app && chown node:node /home/node/app
WORKDIR /home/node/app
USER node
RUN mkdir tmp
FROM base AS dependencies
COPY --chown=node:node ./package*.json ./
RUN npm ci
RUN npm i @adonisjs/cli
COPY --chown=node:node . .
FROM dependencies AS build
RUN node ace build --production
FROM base AS production
ENV NODE_ENV=production
ENV PORT=$PORT
ENV HOST=0.0.0.0
COPY --chown=node:node ./package*.json ./
RUN npm ci --production
COPY --chown=node:node --from=build /home/node/app/build .
EXPOSE $PORT
CMD [ "dumb-init", "node", "service/build/server.js" ]
And this is my docker-compose.yml:
version: '3.9'
services:
  postgres:
    container_name: postgres
    image: postgres
    restart: always
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-password}
      - POSTGRES_USER=${POSTGRES_USER:-user}
    networks:
      - family-service-network
    volumes:
      - fn-db_volume:/var/lib/postgresql/data
  adminer:
    container_name: adminer
    image: adminer
    restart: always
    networks:
      - family-service-network
    ports:
      - 8080:8080
  minio:
    container_name: storage
    image: 'bitnami/minio:latest'
    ports:
      - '9000:9000'
      - '9001:9001'
    environment:
      - MINIO_ROOT_USER=user
      - MINIO_ROOT_PASSWORD=password
      - MINIO_SERVER_ACCESS_KEY=access-key
      - MINIO_SERVER_SECRET_KEY=secret-key
    networks:
      - family-service-network
    volumes:
      - fn-s3_volume:/var/lib/postgresql/data
  fn_service:
    container_name: fn_service
    restart: always
    build:
      context: ./service
      target: dependencies
    ports:
      - ${PORT:-3333}:${PORT:-3333}
      - 9229:9229
    networks:
      - family-service-network
    env_file:
      - ./service/.env
    volumes:
      - ./:/home/node/app
      - /home/node/app/node_modules
    depends_on:
      - postgres
    command: dumb-init node ace serve --watch --node-args="--inspect=0.0.0.0"
volumes:
  fn-db_volume:
  fn-s3_volume:
networks:
  family-service-network:
When I run this with docker-compose up everything works, except for the fn_service.
I get the error:
Error: Cannot find module '/home/node/app/ace'
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:933:15)
at Function.Module._load (node:internal/modules/cjs/loader:778:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)
at node:internal/main/run_main_module:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
node:internal/modules/cjs/loader:936
throw err;
^
I followed this tutorial, and I can't seem to find anything by googling. I'm sure it's something minuscule.
Any help would be appreciated.
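One thing worth checking, offered as a guess from the files shown: the build context is ./service, so the ace file ends up in the image under /home/node/app, but the compose volume ./:/home/node/app mounts the repository root (which has no ace at its top level) over that directory. A sketch of the corresponding change, assuming the Adonis app really lives in ./service:

```yaml
fn_service:
  build:
    context: ./service
    target: dependencies
  volumes:
    - ./service:/home/node/app     # mount the app directory, not the repo root
    - /home/node/app/node_modules  # keep the node_modules installed in the image
  command: dumb-init node ace serve --watch --node-args="--inspect=0.0.0.0"
```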
I keep getting the following issue when trying to start the npm container within Docker; I can't figure out what is happening. It attempts to start npm, then exits and doesn't run.
| sh: 1: mix: not found
npm-aws | npm ERR! code ELIFECYCLE
npm-aws | npm ERR! syscall spawn
npm-aws | npm ERR! file sh
npm-aws | npm ERR! errno ENOENT
npm-aws | npm ERR! @ watch-poll: `mix watch -- --watch-options-poll=3000`
npm-aws | npm ERR! spawn ENOENT
npm-aws | npm ERR!
npm-aws | npm ERR! Failed at the @ watch-poll script.
npm-aws | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm-aws |
npm-aws | npm ERR! A complete log of this run can be found in:
npm-aws | npm ERR! /root/.npm/_logs/2021-10-05T22_43_50_263Z-debug.log
docker-compose.yml:
version: '3'
networks:
  laravel:
services:
  testing-aws:
    build:
      context: .
      dockerfile: nginx.dockerfile
    container_name: nginx-aws
    ports:
      - 5001:5001
    volumes:
      - ./src:/var/www/html:delegated
    depends_on:
      - php
      - mysql
    links:
      - mysql
    networks:
      - laravel
  mysql:
    image: mysql:5.6
    container_name: mysql-aws
    restart: unless-stopped
    tty: true
    ports:
      - 3306:3306
    environment:
      MYSQL_HOST: mysql
      MYSQL_DATABASE: heatable
      MYSQL_USER: heatable
      MYSQL_ROOT_PASSWORD: password
    networks:
      - laravel
    volumes:
      - ./mysql:/var/lib/mysql
  php:
    build:
      context: .
      dockerfile: php.dockerfile
    container_name: php-aws
    volumes:
      - ./src:/var/www/html:delegated
    networks:
      - laravel
    links:
      - mysql
    depends_on:
      - mysql
  composer:
    build:
      context: .
      dockerfile: composer.dockerfile
    container_name: composer-aws
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
    depends_on:
      - php
    user: laravel
    entrypoint: ['composer', '--ignore-platform-reqs']
    networks:
      - laravel
  npm:
    build:
      context: .
      dockerfile: npm.dockerfile
    container_name: npm-aws
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
    command: npm run watch-poll
    networks:
      - laravel
  artisan:
    build:
      context: .
      dockerfile: php.dockerfile
    container_name: artisan-aws
    volumes:
      - ./src:/var/www/html:delegated
    depends_on:
      - mysql
    working_dir: /var/www/html
    user: laravel
    entrypoint: ['php', '/var/www/html/artisan']
    links:
      - mysql
    networks:
      - laravel
npm.dockerfile:
FROM node:14.17.1
WORKDIR /var/www/html
COPY ./src/package.json .
RUN npm install
RUN npm clean-install
CMD npm run watch-poll
UPDATE
I've managed to resolve it by adding tty: true; see the updated service:
  npm:
    tty: true
    build:
      context: .
      dockerfile: npm.dockerfile
    container_name: npm-aws
    working_dir: /var/www/html
    networks:
      - laravel
I had to manually run npm install in the container's terminal to get the node modules. If anyone knows a way to fix this without the manual command, please let me know :)
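A possible way to avoid the manual npm install, sketched under the assumption that npm.dockerfile still runs npm install at build time: restore the bind mount but add an anonymous volume over node_modules, so the modules baked into the image are not hidden by the mount:

```yaml
npm:
  tty: true
  build:
    context: .
    dockerfile: npm.dockerfile
  container_name: npm-aws
  working_dir: /var/www/html
  volumes:
    - ./src:/var/www/html
    - /var/www/html/node_modules   # anonymous volume shields the image's modules
  command: npm run watch-poll
  networks:
    - laravel
```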
I need to bring up a Vue.js app with docker-compose, but when Docker runs npm install, the logs show that npm cannot find package.json. Note that I created the Vue.js project with the Vue CLI before trying to bring the app up with docker-compose. I checked whether the directory is wrong, but I can't see anything amiss. I'm running the Docker commands from the Vue project root; the docker-compose file is inside another project.
My Dockerfile:
FROM node:lts-alpine
RUN mkdir /globostore-frontend
WORKDIR /globostore-frontend
ENV PATH /globostore-frontend/node_modules/.bin:$PATH
COPY package.json /globostore-frontend
RUN npm install
RUN npm install -g @vue/cli
CMD ["npm", "run", "serve"]
My docker-compose.yml:
version: "3.8"
services:
  db:
    image: mysql:5.7
    ports:
      - '3306:3306'
    environment:
      MYSQL_DATABASE: 'Globostore'
      MYSQL_USER: 'wendel'
      MYSQL_PASSWORD: 'wendel12'
      MYSQL_ROOT_PASSWORD: 'wendel12'
    volumes:
      - ./db:/docker-entrypoint-initdb.d/:ro
  web:
    build: .
    command: flask run
    volumes:
      - .:/app
    ports:
      - '5000:5000'
    depends_on:
      - db
    links:
      - db
    environment:
      FLASK_ENV: development
  bff:
    build: ./../globostore-bff/
    ports:
      - 5001:5001
    volumes:
      - .:/app
    environment:
      FLASK_ENV: development
    command: flask run
  frontend:
    build: ./../globostore-frontend/
    volumes:
      - .:/globostore-frontend
    ports:
      - 8080:8080
Error:
frontend_1 | npm ERR! code ENOENT
frontend_1 | npm ERR! syscall open
frontend_1 | npm ERR! path /globostore-frontend/package.json
frontend_1 | npm ERR! errno -2
frontend_1 | npm ERR! enoent ENOENT: no such file or directory, open '/globostore-frontend/package.json'
frontend_1 | npm ERR! enoent This is related to npm not being able to find a file.
frontend_1 | npm ERR! enoent
frontend_1 |
frontend_1 | npm ERR! A complete log of this run can be found in:
frontend_1 | npm ERR! /root/.npm/_logs/2021-02-02T17_00_23_137Z-debug.log
This is my project directory structure
I start the application through the docker-compose file in the globostore-api directory.
Issue
It looks like your error does not come from docker build; it comes from the command npm run serve, executed during container start.
During docker build your npm install works, because package.json exists: you copy it in.
But when you run docker-compose up the file no longer exists, because you override the entire directory with the volume for frontend.
You have docker-compose.yaml next to those files:
.gitignore
app.py
Dockerfile
Readme.md
requirements.txt
There is no package.json in that directory, which is what you mount inside the docker-compose.yaml.
In docker-compose you have this section:
frontend:
  build: ./../globostore-frontend/
  volumes:
    - .:/globostore-frontend
  ports:
    - 8080:8080
So with the volume - .:/globostore-frontend you are overwriting the directory you populated during docker build.
Solution
Either remove that volume line, or
replace .:/globostore-frontend with ./globostore-frontend:/globostore-frontend.
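For illustration, the fixed service might look like this; the exact host path is an assumption, since the build key suggests the frontend sits one directory up from the compose file:

```yaml
frontend:
  build: ./../globostore-frontend/
  volumes:
    - ./../globostore-frontend:/globostore-frontend
  ports:
    - 8080:8080
```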
Debug it yourself
You can debug this yourself. Follow these steps:
1. Add command to docker-compose.yaml for the frontend service
You need to add this line: command: ["sleep", "10000"]
So your definition will look like:
frontend:
  build: ./../globostore-frontend/
  volumes:
    - .:/globostore-frontend
  ports:
    - 8080:8080
  command: ["sleep", "10000"]
Then try to run docker-compose up and see if your container is working.
2. Find docker container ID
Run docker ps and find the container ID.
3. Shell into container
Run docker exec -ti CONTAINER_ID sh. Now you are in the container and you can see if the package.json exists in the /globostore-frontend directory.
But package.json will be missing, because you override the /globostore-frontend directory with the volume for frontend in these lines:
volumes:
  - .:/globostore-frontend
This is the docker-compose file. I have already set the folder to be shared with the VirtualBox VM, but it is still not working.
version: '3'
services:
  postgres:
    image: 'postgres:latest'
    deploy:
      restart_policy:
        condition: on-failure
        window: 15m
  redis:
    image: 'redis:latest'
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '3050:80'
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /usr/src/app/node_modules
      - ./server:/usr/src/app
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /usr/src/app/node_modules
      - ./client:/usr/src/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - /usr/src/app/node_modules
      - ./worker:/usr/src/app
I am running it on Windows 7 SP1. Whenever I run docker-compose up, I get an error:
api_1 | npm ERR! code ENOENT
api_1 | npm ERR! syscall open
api_1 | npm ERR! path /usr/src/app/package.json
api_1 | npm ERR! errno -2
api_1 | npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
api_1 | npm ERR! enoent This is related to npm not being able to find a file.
api_1 | npm ERR! enoent
api_1 |
api_1 | npm ERR! A complete log of this run can be found in:
api_1 | npm ERR! /root/.npm/_logs/2020-05-28T04_06_56_121Z-debug.log
complex_api_1 exited with code 254
Thanks in advance, please help.
I am trying to run a Fibonacci project from the Udemy course of Docker and Kubernetes complete guide.
Each service has its own package.json and other files.
Server Docker File :
FROM node:alpine
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
Worker Docker File :
FROM node:alpine
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
Client Docker File :
FROM node:alpine
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
If you want to share data between containers:
services:
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - datavolume:/usr/src/app/node_modules
      - ./client:/usr/src/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - datavolume:/usr/src/app/node_modules
      - ./worker:/usr/src/app
volumes:
  datavolume: {}
Since this looks like your dev environment, I would suggest mounting your workspace folder into the container:
services:
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - ./node_modules:/usr/src/app/node_modules
      - ./client:/usr/src/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - ./node_modules:/usr/src/app/node_modules
      - ./worker:/usr/src/app
A better way is to treat every service as a standalone project. Each of them should have its own package.json and node_modules.
services:
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - ./client:/usr/src/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - ./worker:/usr/src/app
In my opinion, it doesn't make sense to share the same libraries between projects that serve different purposes.
I had the same error! I solved it by moving my project from c/Program Files/Docker Toolbox to /c/Users/currentUser. Maybe your project folder is inside the Program Files directory and not inside Users? Try copying your project folder into Users and running docker-compose from there. Let me know!
I'm setting up Gitlab CI docker-in-docker for a project. Unfortunately the job keeps failing because installed NPM packages can't seem to be found when running commands. The error I'm getting:
backend_1 |
backend_1 | > tacta-backend@0.0.1 build /app
backend_1 | > tsc
backend_1 |
backend_1 | sh: tsc: not found
backend_1 | npm ERR! file sh
backend_1 | npm ERR! code ELIFECYCLE
backend_1 | npm ERR! errno ENOENT
backend_1 | npm ERR! syscall spawn
backend_1 | npm ERR! tacta-backend@0.0.1 build: `tsc`
backend_1 | npm ERR! spawn ENOENT
backend_1 | npm ERR!
backend_1 | npm ERR! Failed at the tacta-backend@0.0.1 build script.
backend_1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
backend_1 |
backend_1 | npm ERR! A complete log of this run can be found in:
backend_1 | npm ERR! /root/.npm/_logs/2019-08-02T04_46_04_881Z-debug.log
The curious thing is that it does work when I run docker-compose manually without using the Gitlab CI. This is what my .gitlab-ci.yml looks like:
build:
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  image: docker:18
  stage: build
  services:
    - docker:18-dind
  before_script:
    - docker info
    - apk add python-dev libffi-dev openssl-dev gcc libc-dev make
    - apk add py-pip
    - pip install docker-compose
  script:
    - docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
This is my docker-compose.yml:
version: '3'
services:
  frontend:
    build:
      context: ./frontend
      args:
        NODE_ENV: production
        PGUSER: ${PGUSER}
        PGHOST: ${PGHOST}
        PGPASSWORD: ${PGPASSWORD}
        PGDATABASE: ${PGDATABASE}
        PGPORT: ${PGPORT}
        DATABASE_URL: ${DATABASE_URL}
    command: npm run build
    ports:
      - "9000:9000"
    volumes:
      - /app/node_modules
      - ./frontend:/app
  backend:
    build:
      context: ./backend
      args:
        NODE_ENV: production
    command: npm run build
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - ./backend:/app
And this is the Dockerfile:
FROM node:11.10.1-alpine
ARG NODE_ENV
ARG PGUSER
ARG PGHOST
ARG PGPASSWORD
ARG PGDATABASE
ARG PGPORT
ARG DATABASE_URL
ENV NODE_ENV ${NODE_ENV}
ENV PGUSER ${PGUSER}
ENV PGHOST ${PGHOST}
ENV PGPASSWORD ${PGPASSWORD}
ENV PGDATABASE ${PGDATABASE}
ENV PGPORT ${PGPORT}
ENV DATABASE_URL ${DATABASE_URL}
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY ./ ./
I expect the installed packages and their commands to be available in the docker container. At some point they worked, and I have no clue what changed in the configuration to cause this issue.
I am not expecting a copy/paste solution from you guys, but I do hope you can point me in the right direction to properly get to the root of this issue.
The problem was that I switched from NODE_ENV: development to NODE_ENV: production. With production enabled, the devDependencies in my package.json were no longer being installed (duh me).
I added typescript and webpack to the regular dependencies and now it works like a charm again.
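For anyone who prefers to keep build tools in devDependencies, a hedged alternative sketch (service names taken from the compose file above): pass a development NODE_ENV only as a build arg so npm install picks up devDependencies, and set the runtime NODE_ENV back to production via environment:

```yaml
backend:
  build:
    context: ./backend
    args:
      NODE_ENV: development   # npm install now includes devDependencies (typescript, webpack)
  environment:
    NODE_ENV: production      # the running container still behaves as production
  command: npm run build
```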