I keep getting the following issue when trying to start up the npm container within Docker, and I can't figure out what is happening. The container attempts to start npm, then exits and doesn't run.
| sh: 1: mix: not found
npm-aws | npm ERR! code ELIFECYCLE
npm-aws | npm ERR! syscall spawn
npm-aws | npm ERR! file sh
npm-aws | npm ERR! errno ENOENT
npm-aws | npm ERR! @ watch-poll: `mix watch -- --watch-options-poll=3000`
npm-aws | npm ERR! spawn ENOENT
npm-aws | npm ERR!
npm-aws | npm ERR! Failed at the @ watch-poll script.
npm-aws | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm-aws |
npm-aws | npm ERR! A complete log of this run can be found in:
npm-aws | npm ERR! /root/.npm/_logs/2021-10-05T22_43_50_263Z-debug.log
docker-compose.yml:
version: '3'
networks:
  laravel:
services:
  testing-aws:
    build:
      context: .
      dockerfile: nginx.dockerfile
    container_name: nginx-aws
    ports:
      - 5001:5001
    volumes:
      - ./src:/var/www/html:delegated
    depends_on:
      - php
      - mysql
    links:
      - mysql
    networks:
      - laravel
  mysql:
    image: mysql:5.6
    container_name: mysql-aws
    restart: unless-stopped
    tty: true
    ports:
      - 3306:3306
    environment:
      MYSQL_HOST: mysql
      MYSQL_DATABASE: heatable
      MYSQL_USER: heatable
      MYSQL_ROOT_PASSWORD: password
    networks:
      - laravel
    volumes:
      - ./mysql:/var/lib/mysql
  php:
    build:
      context: .
      dockerfile: php.dockerfile
    container_name: php-aws
    volumes:
      - ./src:/var/www/html:delegated
    networks:
      - laravel
    links:
      - mysql
    depends_on:
      - mysql
  composer:
    build:
      context: .
      dockerfile: composer.dockerfile
    container_name: composer-aws
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
    depends_on:
      - php
    user: laravel
    entrypoint: ['composer', '--ignore-platform-reqs']
    networks:
      - laravel
  npm:
    build:
      context: .
      dockerfile: npm.dockerfile
    container_name: npm-aws
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
    command: npm run watch-poll
    networks:
      - laravel
  artisan:
    build:
      context: .
      dockerfile: php.dockerfile
    container_name: artisan-aws
    volumes:
      - ./src:/var/www/html:delegated
    depends_on:
      - mysql
    working_dir: /var/www/html
    user: laravel
    entrypoint: ['php', '/var/www/html/artisan']
    links:
      - mysql
    networks:
      - laravel
npm.dockerfile:
FROM node:14.17.1
WORKDIR /var/www/html
COPY ./src/package.json .
RUN npm install
RUN npm clean-install
CMD npm run watch-poll
UPDATE
I've managed to resolve it by adding tty: true; see the updated service definition:
npm:
  tty: true
  build:
    context: .
    dockerfile: npm.dockerfile
  container_name: npm-aws
  working_dir: /var/www/html
  networks:
    - laravel
I had to run npm install manually in the container's terminal to get the node modules. If anyone knows a way of fixing this without that manual command, please let me know :)
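For what it's worth, here is one way to avoid the manual step. This is a sketch only, assuming the ./src bind mount is what hides the node_modules installed in the image: keep an anonymous volume for node_modules and let the service run npm install before starting the watcher.
npm:
  build:
    context: .
    dockerfile: npm.dockerfile
  container_name: npm-aws
  volumes:
    - ./src:/var/www/html
    # anonymous volume so the node_modules baked into the image
    # are not hidden by the ./src bind mount above
    - /var/www/html/node_modules
  working_dir: /var/www/html
  # install first (quick when node_modules is already populated), then watch
  command: sh -c "npm install && npm run watch-poll"
  networks:
    - laravel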
Related
I want to deploy my NestJS Prisma Docker project, but when I try to build it, it takes a long time and does not deploy or give any error. Could you please help me with this issue? Thank you.
=> CACHED [ 3/11] COPY package*.json ./
=> CACHED [ 4/11] RUN npm install -g npm@latest
=> CACHED [ 5/11] RUN npm install
=> CACHED [ 6/11] RUN npm uninstall argon2
=> CACHED [ 7/11] RUN npm install argon2
=> CACHED [ 8/11] COPY . .
=> CACHED [ 9/11] RUN npx prisma generate
=> CACHED [10/11] RUN ./node_modules/.bin/tsc --extendedDiagnostics
=> [11/11] RUN npm run build
=> => # > simplyjet-backend@0.0.1 prebuild
=> => # > rimraf dist
=> => # > simplyjet-backend@0.0.1 build
=> => # > nest build
This is the problem: it is neither deploying nor giving an error.
dc.test.yml
version: '3.9'
services:
  nginx:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - data:/data
      - letsencrypt:/etc/letsencrypt
    depends_on:
      - api
      - postgres
      - redis
    networks:
      - simply-jet-backend-api
  api:
    container_name: simply-jet-api-${NODE_ENV}
    image: simply-jet-api-${NODE_ENV}
    mem_limit: 8g
    environment:
      - NODE_ENV=${NODE_ENV}
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - .env.${NODE_ENV}
    ports:
      - ${APP_PORT}:${APP_PORT}
    depends_on:
      - postgres
      - redis
    networks:
      - simply-jet-backend-api
    restart: unless-stopped
  postgres:
    container_name: postgres
    image: postgres:latest
    networks:
      - simply-jet-backend-api
    env_file:
      - .env.${NODE_ENV}
    ports:
      - ${DB_PORT}:${DB_PORT}
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: '${DB_DATABASE}'
      PG_DATA: /var/lib/postgresql/data
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always
  redis:
    image: redis
    ports:
      - 6379:6379
    volumes:
      - redis:/data
    networks:
      - simply-jet-backend-api
  pgadmin:
    links:
      - postgres:postgres
    container_name: pgadmin
    image: dpage/pgadmin4:latest
    ports:
      - 5050:80
    volumes:
      - pgadmin:/root/.pgadmin
    env_file:
      - .env.${NODE_ENV}
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    networks:
      - simply-jet-backend-api
networks:
  simply-jet-backend-api:
volumes:
  data:
  letsencrypt:
  pgdata:
  redis:
  pgadmin:
Dockerfile
FROM node:19
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
# Upgrade npm
RUN npm install -g npm@latest
# Install app dependencies
RUN npm install
RUN npm uninstall argon2
RUN npm install argon2
COPY . .
RUN npx prisma generate
ENV NODE_OPTIONS=--max-old-space-size=4096
ENV NODE_ENV production
RUN ./node_modules/.bin/tsc --extendedDiagnostics
RUN npm run build
CMD [ "npm", "run", "start:prod" ]
I just want to deploy my project. Can someone who is familiar with Docker help me with this?
I am using a MacBook M1 (maybe that is the cause of the problem).
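Two things might be worth checking here; these are assumptions on my part rather than anything the build output confirms. First, the Dockerfile compiles the project twice (the tsc --extendedDiagnostics step and then nest build). Second, on an M1, building or running an amd64 image goes through emulation, which can make a heavy step such as nest build look hung. A sketch of a trimmed Dockerfile, plus a plain-progress build to see whether the last step is actually advancing:
# Sketch only: trims the npm upgrade, the argon2 reinstall, and the separate
# tsc step (nest build compiles on its own); keeps the extra heap for the compiler.
FROM node:19
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npx prisma generate
ENV NODE_OPTIONS=--max-old-space-size=4096
ENV NODE_ENV production
RUN npm run build
CMD [ "npm", "run", "start:prod" ]
# To watch the build step in real time (assumes BuildKit, the default on current Docker):
#   docker build --progress=plain --no-cache -t simplyjet-backend .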
While uploading my project to the server I got this error.
$ cat docker-compose.yml
version: '3.8'
services:
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env
    restart: always
  frontend:
    image: annarulunat/foodgram_frontend:latest
    volumes:
      - ../frontend/:/app/result_build/
  backend:
    image: annarulunat/foodgram:latest
    restart: always
    volumes:
      - static_value:/app/static_backend/
      - media_value:/app/media/
    depends_on:
      - db
    env_file:
      - ./.env
    command: >
      sh -c "python manage.py collectstatic --noinput &&
      python manage.py migrate &&
      gunicorn foodgram.wsgi:application --bind 0:8000"
  nginx:
    image: nginx:1.21.3-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
      - ../frontend/build:/usr/share/nginx/html/
      - ../docs/:/usr/share/nginx/html/api/docs/
      - static_value:/var/html/static_backend/
      - media_value:/var/html/media/
    restart: always
volumes:
  static_value:
  media_value:
  postgres_data:
$ cat Dockerfile
# build env
FROM node:13.12.0-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . ./
RUN npm run build
CMD cp -r build result_build
Thank you)))
I tried adding FROM --platform=linux/amd64 <image>-<version> to the Dockerfile and building again.
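If the images are built on the M1 (arm64) and then pulled onto an amd64 server, the architecture mismatch alone can stop the containers from running. A hedged sketch, assuming that is the cause: either cross-build and push amd64 images with buildx, or pin the platform where the images are consumed (the platform key is supported by recent Compose versions).
# Cross-build amd64 images from the M1 and push them:
#   docker buildx build --platform linux/amd64 -t annarulunat/foodgram:latest --push .
# Or pin the platform in docker-compose.yml on the services that pull prebuilt images:
services:
  backend:
    image: annarulunat/foodgram:latest
    platform: linux/amd64
  frontend:
    image: annarulunat/foodgram_frontend:latest
    platform: linux/amd64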
I have a Docker setup with AdonisJS 5. I'm currently trying to get it started (and it builds just fine). But as discussed a bit further down, the ace command cannot be found. ace is the CLI package for AdonisJS (think ng for Angular).
This is my Dockerfile:
ARG NODE_IMAGE=node:16.13.1-alpine
FROM $NODE_IMAGE AS base
RUN apk --no-cache add dumb-init
RUN mkdir -p /home/node/app && chown node:node /home/node/app
WORKDIR /home/node/app
USER node
RUN mkdir tmp
FROM base AS dependencies
COPY --chown=node:node ./package*.json ./
RUN npm ci
RUN npm i @adonisjs/cli
COPY --chown=node:node . .
FROM dependencies AS build
RUN node ace build --production
FROM base AS production
ENV NODE_ENV=production
ENV PORT=$PORT
ENV HOST=0.0.0.0
COPY --chown=node:node ./package*.json ./
RUN npm ci --production
COPY --chown=node:node --from=build /home/node/app/build .
EXPOSE $PORT
CMD [ "dumb-init", "node", "service/build/server.js" ]
And this is my docker-compose.yml:
version: '3.9'
services:
  postgres:
    container_name: postgres
    image: postgres
    restart: always
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-password}
      - POSTGRES_USER=${POSTGRES_USER:-user}
    networks:
      - family-service-network
    volumes:
      - fn-db_volume:/var/lib/postgresql/data
  adminer:
    container_name: adminer
    image: adminer
    restart: always
    networks:
      - family-service-network
    ports:
      - 8080:8080
  minio:
    container_name: storage
    image: 'bitnami/minio:latest'
    ports:
      - '9000:9000'
      - '9001:9001'
    environment:
      - MINIO_ROOT_USER=user
      - MINIO_ROOT_PASSWORD=password
      - MINIO_SERVER_ACCESS_KEY=access-key
      - MINIO_SERVER_SECRET_KEY=secret-key
    networks:
      - family-service-network
    volumes:
      - fn-s3_volume:/var/lib/postgresql/data
  fn_service:
    container_name: fn_service
    restart: always
    build:
      context: ./service
      target: dependencies
    ports:
      - ${PORT:-3333}:${PORT:-3333}
      - 9229:9229
    networks:
      - family-service-network
    env_file:
      - ./service/.env
    volumes:
      - ./:/home/node/app
      - /home/node/app/node_modules
    depends_on:
      - postgres
    command: dumb-init node ace serve --watch --node-args="--inspect=0.0.0.0"
volumes:
  fn-db_volume:
  fn-s3_volume:
networks:
  family-service-network:
When I run this with docker-compose up everything works, except for the fn_service.
I get the error:
Error: Cannot find module '/home/node/app/ace'
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:933:15)
at Function.Module._load (node:internal/modules/cjs/loader:778:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)
at node:internal/main/run_main_module:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
node:internal/modules/cjs/loader:936
throw err;
^
I followed this tutorial, and I can't seem to find anything by googling. I'm sure it's something minuscule.
Any help would be appreciated.
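One detail that stands out, offered as an observation rather than a confirmed fix: the image is built from ./service, but the bind mount maps the repository root over /home/node/app, so the container sees the parent folder instead of the AdonisJS project and there is no ace file in it. A sketch of the service with the mount pointing at the project itself (assuming ace lives in ./service on the host):
fn_service:
  build:
    context: ./service
    target: dependencies
  volumes:
    # mount the AdonisJS project (where ace lives), not the repository root
    - ./service:/home/node/app
    # keep the node_modules installed in the image rather than the host's
    - /home/node/app/node_modules
  command: dumb-init node ace serve --watch --node-args="--inspect=0.0.0.0"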
I am experiencing my first Docker setup with several services running.
I have spent a while on this, but cannot pinpoint the problem.
Below is what I think is the cause.
Why does the Node app not work/start?
web_1 | npm ERR! code ENOENT
web_1 | npm ERR! syscall open
web_1 | npm ERR! path /app/http/app/package.json
web_1 | npm ERR! errno -2
web_1 | npm ERR! enoent ENOENT: no such file or directory, open '/app/http/app/package.json'
web_1 | npm ERR! enoent This is related to npm not being able to find a file.
web_1 | npm ERR! enoent
web_1 |
web_1 | npm ERR! A complete log of this run can be found in:
web_1 | npm ERR! /root/.npm/_logs/2020-12-27T23_32_03_845Z-debug.log
Why does it not see it?
docker-compose.yml
version: '3'
services:
  mongo:
    image: mongo
    restart: always
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: mongo_user
      MONGO_INITDB_ROOT_PASSWORD: mongo_secret
  api:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - "4433:4433"
    depends_on:
      - rabbit
    volumes:
      - .:/app
  web:
    build:
      context: .
      dockerfile: Dockerfile1
    restart: always
    ports:
      - "8080:8080"
    depends_on:
      - api
    volumes:
      - .:/app
  rabbit:
    hostname: rabbit
    image: rabbitmq:management
    environment:
      - RABBITMQ_DEFAULT_USER=rabbitmq
      - RABBITMQ_DEFAULT_PASS=rabbitmq
    ports:
      - "5673:5672"
      - "15672:15672"
  worker_1:
    build:
      context: .
    hostname: worker_1
    entrypoint: celery
    command: -A workerA worker --loglevel=info -Q workerA
    volumes:
      - .:/app
    links:
      - rabbit
    depends_on:
      - rabbit
Dockerfile
FROM python:3.8
ADD Pipfile.lock /app/Pipfile.lock
ADD Pipfile /app/Pipfile
WORKDIR /app
COPY . /app
RUN pip install pipenv
RUN pipenv install --system --deploy --ignore-pipfile
ENV FLASK_APP=app/http/api/endpoints.py
ENV FLASK_RUN_PORT=4433
ENV FLASK_ENV=development
ENTRYPOINT ["python"]
#CMD ["app/http/api/endpoints.py","--host=0.0.0.0","--port 4433"]
CMD ["-m", "flask", "run"]
Dockerfile1
FROM node:10
WORKDIR /app/http/app
ADD app/http/app/package.json /app/http/app/package.json
ADD app/http/app/package-lock.json /app/http/app/package-lock.json
RUN npm i
CMD ["npm","start"]
How do I make such a setup (Flask, RabbitMQ, React) work properly?
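On the web service specifically: Dockerfile1 installs the node modules into /app/http/app at image-build time, but the compose file then bind-mounts the project root over /app, so at runtime npm only sees whatever the host has at ./app/http/app. A sketch of the web service that mounts just the front-end source and keeps the image's node_modules, assuming app/http/app/package.json exists on the host next to docker-compose.yml (which is what Dockerfile1's ADD lines expect):
web:
  build:
    context: .
    dockerfile: Dockerfile1
  restart: always
  ports:
    - "8080:8080"
  depends_on:
    - api
  volumes:
    # mount only the front-end source, not the whole repository
    - ./app/http/app:/app/http/app
    # anonymous volume so the node_modules installed at build time survive the mount
    - /app/http/app/node_modules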
This is my docker-compose file. I have already set the folder to be shared with the VirtualBox VM, but it is still not working.
version: '3'
services:
  postgres:
    image: 'postgres:latest'
    deploy:
      restart_policy:
        condition: on-failure
        window: 15m
  redis:
    image: 'redis:latest'
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '3050:80'
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /usr/src/app/node_modules
      - ./server:/usr/src/app
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /usr/src/app/node_modules
      - ./client:/usr/src/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - /usr/src/app/node_modules
      - ./worker:/usr/src/app
I am running it on Windows 7 SP1. Whenever I run docker-compose up, I get an error:
api_1 | npm ERR! code ENOENT
api_1 | npm ERR! syscall open
api_1 | npm ERR! path /usr/src/app/package.json
api_1 | npm ERR! errno -2
api_1 | npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
api_1 | npm ERR! enoent This is related to npm not being able to find a file.
api_1 | npm ERR! enoent
api_1 |
api_1 | npm ERR! A complete log of this run can be found in:
api_1 | npm ERR! /root/.npm/_logs/2020-05-28T04_06_56_121Z-debug.log
complex_api_1 exited with code 254
Thanks in advance, please help.
I am trying to run the Fibonacci project from the Docker and Kubernetes complete guide course on Udemy.
Each service has its own package.json and other files.
Server Dockerfile:
FROM node:alpine
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
Worker Dockerfile:
FROM node:alpine
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
Client Dockerfile:
FROM node:alpine
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
If you want to share data between containers
services:
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - datavolume:/usr/src/app/node_modules
      - ./client:/usr/src/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - datavolume:/usr/src/app/node_modules
      - ./worker:/usr/src/app
volumes:
  datavolume: {}
Since this looks like your dev setup, I would suggest mounting your workspace folder into the container:
services:
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - ./node_modules:/usr/src/app/node_modules
      - ./client:/usr/src/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - ./node_modules:/usr/src/app/node_modules
      - ./worker:/usr/src/app
A better way is to treat every service as a standalone project, with each owning its own package.json and node_modules:
services:
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - ./client:/usr/src/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - ./worker:/usr/src/app
In my opinion, it doesn't make sense to share the same libraries between different projects that serve different purposes.
I had the same error! I solved it by moving my project to /c/Users/currentUser from c/Program Files/Docker Toolbox. Maybe your project folder is inside the Program Files directory rather than inside Users, is that right? Try this: just copy your project folder into Users and run your docker-compose from there. Let me know!
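To make that concrete, a sketch from the Docker Toolbox terminal; the folder name complex is an assumption based on the complex_api_1 prefix in the log, and the source path is just an example:
# Docker Toolbox's VirtualBox VM only shares C:\Users by default, so bind mounts
# that resolve outside it come up empty inside the containers.
cp -r "/c/Program Files/Docker Toolbox/complex" /c/Users/currentUser/complex   # adjust the source path
cd /c/Users/currentUser/complex
docker-compose up --build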