Hello, I'm trying to set up Docker Compose, but I get the following error:
invalid from flag value builder: pull access denied for builder,
repository does not exist or may require 'docker login': denied:
requested access to the resource is denied
I can't figure out what I'm missing.
my docker-compose file:
version: "3.7"
services:
db:
image: postgres:12
restart: always
container_name: "db"
ports:
- "${DB_PORT}:5432"
volumes:
- db_data:/var/lib/postgresql/data
environment:
POSTGRES_USER: ${DB_USER}
POSTGRES_PASSWORD: ${DB_PASS}
POSTGRES_DB: ${DB_NAME}
pgadmin:
image: dpage/pgadmin4
restart: always
container_name: "pgadmin4"
depends_on:
- db
ports:
- 5050:80
environment:
PGADMIN_DEFAULT_EMAIL: emasa#emasa.com
PGADMIN_DEFAULT_PASSWORD: admin
api:
image: server_emasa
container_name: api
restart: always
depends_on:
- db
ports:
- "${SERVER_PORT}:${SERVER_PORT}"
volumes:
db_data:
my Dockerfile:
FROM node as builder
WORKDIR usr/app
COPY package*.json ./
COPY --from=builder /usr/app/dist ./dist
COPY ormconfig.docker.json ./ormconfig.json
COPY .env .
RUN yarn install
RUN yarn run build
COPY back-end/ ./
EXPOSE 4000
and my env file:
SERVER_PORT = 4000
DB_HOST = 0.0.0.0
DB_PORT = 5432
DB_USER = spirit
DB_PASS = emasa
DB_NAME = emasa_base
my JSON ORM config:
{
  "type": "postgres",
  "host": "${DB_HOST}",
  "port": "${DB_PORT}",
  "username": "${DB_USER}",
  "password": "${DB_PASS}",
  "database": "${DB_NAME}",
  "synchronize": true,
  "logging": false,
  "entities": ["src/entity/**/*.ts"],
  "migrations": ["src/migration/**/*.ts"],
  "subscribers": ["src/subscriber/**/*.ts"],
  "cli": {
    "entitiesDir": "src/entity",
    "migrationsDir": "src/migration",
    "subscribersDir": "src/subscriber"
  }
}
my folder structure: [screenshot omitted]
When using COPY you can pass --from to refer to a previous build stage or to an external image. Since builder is the stage you are currently in, Docker assumes --from=builder refers to an external image named builder, tries to pull it, and fails with that access-denied error. Check your Dockerfile again: you can only COPY --from a stage that was defined earlier in the file.
Let's start with the basics. Try:
FROM node as builder
WORKDIR /usr/app
COPY . .
RUN yarn install
RUN yarn run build
EXPOSE 4000
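Once that builds, you can reintroduce the multi-stage layout; the key point is that --from=builder is only valid in a stage defined after the builder stage. A minimal sketch (an assumption, not the original setup: it presumes the compiled output lands in /usr/app/dist and package.json defines a start script):
# Stage 1: install dependencies and compile
FROM node AS builder
WORKDIR /usr/app
COPY package*.json ./
RUN yarn install
COPY . .
RUN yarn run build
# Stage 2: --from=builder works here because the builder stage is defined above
FROM node
WORKDIR /usr/app
COPY package*.json ./
RUN yarn install --production
COPY --from=builder /usr/app/dist ./dist
EXPOSE 4000
CMD ["yarn", "start"]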
I have created a simple app connected to PostgreSQL and pgAdmin, as well as a web server, all running as Docker containers.
My question is how I can make it reload on changes, like nodemon does on a local server, without having to delete and recreate the container every time.
I have tried various solutions and methods I have seen around, but I haven't been able to make any of them work.
I have already tried putting command: ["npm", "run", "start:dev"] in the docker-compose file as well...
My files are:
Dockerfile
FROM node:latest
WORKDIR /
COPY package*.json ./
COPY . .
COPY database.json .
COPY .env .
EXPOSE 3000
CMD [ "npm", "run", "watch ]
docker-compose file:
version: '3.7'
services:
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=tes
      - POSTGRES_DB=test
    ports:
      - 5432:5432
    logging:
      options:
        max-size: 10m
        max-file: "3"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=test@gmail.com
      - PGADMIN_DEFAULT_PASSWORD=pasword123test
    ports:
      - "5050:80"
  web:
    build: .
    # command: ["npm", "run", "start:dev"]
    links:
      - postgres
    image: prueba
    depends_on:
      - postgres
    ports:
      - '3000:3000'
    env_file:
      - .env
nodemon.json file:
{
  "watch": ["dist"],
  "ext": ".ts,.js",
  "ignore": [],
  "exec": "ts-node ./dist/server.js"
}
package.json scripts:
"scripts": {
"start:dev": "nodemon",
"build": "rimraf ./dist && tsc",
"start": "npm run build && node dist/server.js",
"watch": "tsc-watch --esModuleInterop src/server.ts --outDir ./dist --onSuccess \"node ./dist/server.js\"",
"jasmine": "jasmine",
"test": "npm run build && npm run jasmine",
"db-test": "set ENV=test&& db-migrate -e test up && npm run test && db-migrate -e test reset",
"lint": "eslint . --ext .ts",
"prettier": "prettier --config .prettierrc src/**/*.ts --write",
"prettierLint": "prettier --config .prettierrc src/**/*.ts --write && eslint . --ext .ts --fix"
},
Thanks
The COPY . . command only runs when the image is built, which happens just once, the first time you run docker compose up (or again whenever you pass --build). For the container to see your changes, the code on your host machine needs to stay synchronized with the code inside the container after the build is complete.
Below I've added a volume mount to the web container in your docker-compose file and uncommented the command that should support hot reloading. I assumed that the source code you want to change lives in a src directory; feel free to adjust it to match how your code is organized.
version: '3.7'
services:
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=tes
      - POSTGRES_DB=test
    ports:
      - 5432:5432
    logging:
      options:
        max-size: 10m
        max-file: "3"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=test@gmail.com
      - PGADMIN_DEFAULT_PASSWORD=pasword123test
    ports:
      - "5050:80"
  web:
    build: .
    command: ["npm", "run", "start:dev"]
    links:
      - postgres
    image: prueba
    depends_on:
      - postgres
    ports:
      - '2000:2000'
    env_file:
      - .env
    volumes:
      # <host-path>:<container-path>
      - ./src:/src/
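As a usage note (a standard Compose flag, not specific to this project): code changes inside ./src are picked up through the mount without rebuilding, but changes to package.json or the Dockerfile still require rebuilding the image, e.g.:
docker compose up --build web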
If that isn't clear, here's an article that might help:
https://www.freecodecamp.org/news/how-to-enable-live-reload-on-docker-based-applications/
I have made a NestJS app (scaffolded with npx) and I am using Prisma and Postgres.
Below is my Dockerfile:
FROM node:16
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
COPY ./ ./
RUN npm i
EXPOSE 8080
CMD ["npm", "run", "start", "api"]
And my .env:
DATABASE_URL="postgres://myuser:mypassword@todo-db:5432/todoapp-db?schema=public?connection_timeout=300"
And my docker-compose.yml
version: '3.8'
services:
  nest-api:
    container_name: nest-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
    depends_on:
      - todo-db
      - prisma-postgres-api
    env_file:
      - .env
  todo-db:
    image: postgres:13
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: todoapp-db
  prisma-postgres-api:
    stdin_open: true
    build:
      context: .
      dockerfile: Dockerfile
    container_name: prisma-postgres-api
    depends_on:
      - todo-db
    ports:
      - '3000:3000'
    restart: always
    command: npx prisma migrate dev
The error I get is the following:
prisma-postgres-api | Error: P1001: Can't reach database server at `todo-db`:`5432`
prisma-postgres-api | Please make sure your database server is running at `todo-db`:`5432`.
I have tried every solution I could find online but none seem to work, and I can't figure out where I am going wrong. I'd really appreciate some help; I've been stuck here for some time now.
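One thing worth noting (an assumption, since only the error is shown): depends_on waits only for the todo-db container to start, not for Postgres inside it to accept connections, so npx prisma migrate dev can race the database. A minimal sketch of a readiness gate using the Compose spec's healthcheck conditions:
todo-db:
  image: postgres:13
  healthcheck:
    # pg_isready ships with the postgres image and exits 0 once the server accepts connections
    test: ["CMD-SHELL", "pg_isready -U myuser -d todoapp-db"]
    interval: 5s
    timeout: 5s
    retries: 10
prisma-postgres-api:
  depends_on:
    todo-db:
      condition: service_healthy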
I am trying to create a Docker development environment for an existing Nest.js + PostgreSQL (Prisma ORM) project. I am using the Docker Desktop app. Here is my Dockerfile:
FROM node:16.15-alpine3.15 AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
# Install app dependencies
RUN npm install
RUN npm install --only=dev
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "npm", "run", "start:dev" ]
And Here is my docker-compose.yaml:
version: "3.8"
services:
db:
image: postgres
container_name: local_pgdb
restart: always
expose:
- "5432"
ports:
- "54321:5432"
volumes:
- "pg_data:/var/lib/postgresql"
- "pg_log:/var/log/postgresql"
- "pg_config:/etc/postgresql"
- ./docker-config/db:/docker-entrypoint-initdb.d/
env_file:
- ./docker-config/db/postgres.env
pgadmin:
image: dpage/pgadmin4
container_name: pgadmin4_container
restart: always
expose:
- "80"
ports:
- "5050:80"
volumes:
- pgadmin_data:/var/lib/pgadmin
env_file:
- ./docker-config/pgadmin/pgadmin.env
depends_on:
- db
contents_api:
build:
context: ./
dockerfile: Dockerfile.local
container_name: jccme-dp-contents-api
expose:
- "3000"
ports:
- "3000:3000"
volumes:
- ./:/app
- storage:/app/storage
stdin_open: true
tty: true
depends_on:
- db
volumes:
pg_data:
driver: local
pg_log:
driver: local
pg_config:
driver: local
pgadmin_data:
driver: local
storage:
driver: local
Now when I run docker-compose up, the node_modules and dist folders become empty. As a result I get a lot of "module not found" errors, and the eslint service cannot start because of the empty node_modules folder.
I have tried both VSCode and WebStorm and both gave errors.
Can anyone tell me what I am doing wrong?
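For context (a sketch of a common workaround, not from the original question): the bind mount ./:/app replaces everything the image built into /app, including node_modules and dist, with the host's contents. Shadowing those paths with anonymous volumes lets the image's copies survive the mount:
contents_api:
  volumes:
    - ./:/app
    - storage:/app/storage
    - /app/node_modules   # anonymous volume: keeps the modules installed at build time
    - /app/dist           # same for the compiled output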
I am trying to dockerize Strapi with Docker and docker-compose, following https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/installation/docker.html#creating-a-strapi-project, and to resolve a different error:
strapi failed to load resource: the server responded with a status of 404 ()
You can use my dockerized project.
Dockerfile:
FROM node:16.15-alpine3.14
RUN mkdir -p /opt/app
WORKDIR /opt/app
RUN adduser -S app
COPY app/ .
RUN npm install
RUN npm install --save @strapi/strapi
RUN chown -R app /opt/app
USER app
RUN npm run build
EXPOSE 1337
CMD [ "npm", "run", "start" ]
If you don't include RUN npm run build, your project will still respond on port 80 (http://localhost), but the Strapi admin templates call http://localhost:1337. Since you are browsing on http://localhost and there is no stable URL at http://localhost:1337 on your system, Strapi throws exceptions like:
Refused to connect to 'http://localhost:1337/admin/init' because it violates the document's Content Security Policy.
Refused to connect to 'http://localhost:1337/admin/init' because it violates the following Content Security Policy directive: "connect-src 'self' https:".
docker-compose.yml:
version: "3.9"
services:
#Strapi Service (APP Service)
strapi_app:
build:
context: .
depends_on:
- strapi_db
ports:
- "80:1337"
environment:
- DATABASE_CLIENT=postgres
- DATABASE_HOST=strapi_db
- DATABASE_PORT=5432
- DATABASE_NAME=strapi_db
- DATABASE_USERNAME=strapi_db
- DATABASE_PASSWORD=strapi_db
- DATABASE_SSL=false
volumes:
- /var/scrapi/public/uploads:/opt/app/public/uploads
- /var/scrapi/public:/opt/app/public
networks:
- app-network
#PostgreSQL Service
strapi_db:
image: postgres
container_name: strapi_db
environment:
POSTGRES_USER: strapi_db
POSTGRES_PASSWORD: strapi_db
POSTGRES_DB: strapi_db
ports:
- '5432:5432'
volumes:
- dbdata:/var/lib/postgresql/data
networks:
- app-network
#Docker Networks
networks:
app-network:
driver: bridge
#Volumes
volumes:
dbdata:
driver: local
In the docker-compose file I used Postgres as the database; you can use any other database and set its config in the app service's environment variables, like:
environment:
  - DATABASE_CLIENT=postgres
  - DATABASE_HOST=strapi_db
  - DATABASE_PORT=5432
  - DATABASE_NAME=strapi_db
  - DATABASE_USERNAME=strapi_db
  - DATABASE_PASSWORD=strapi_db
  - DATABASE_SSL=false
To use environment variables in the project, you must read them through process.env, which exposes the operating system's environment variables. Change the app/config/database.js file to:
module.exports = ({ env }) => ({
  connection: {
    client: process.env.DATABASE_CLIENT,
    connection: {
      host: process.env.DATABASE_HOST,
      port: parseInt(process.env.DATABASE_PORT),
      database: process.env.DATABASE_NAME,
      user: process.env.DATABASE_USERNAME,
      password: process.env.DATABASE_PASSWORD,
      // ssl: Boolean(process.env.DATABASE_SSL),
      ssl: false,
    },
  },
});
Dockerize Strapi with Docker-compose
FROM node:16.14.2
# Set up the working directory that will be used to copy files/directories below:
WORKDIR /app
# Copy package.json to root directory inside Docker container of Strapi app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
CMD ["npm", "start"]
docker-compose file:
version: '3.7'
services:
  strapi:
    container_name: strapi
    restart: unless-stopped
    build:
      context: ./strapi
      dockerfile: Dockerfile
    volumes:
      - strapi:/app
      - /app/node_modules
    ports:
      - '1337:1337'
volumes:
  strapi:
    driver: local
I'm setting up a Docker stack with PHP, PostgreSQL, Nginx, Laravel-Echo-Server and Redis, and I'm having some issues getting Redis and the echo-server to connect. I'm using this docker-compose.yml:
version: '3'
networks:
  app-tier:
    driver: bridge
services:
  app:
    build:
      context: .
      dockerfile: .docker/php/Dockerfile
    networks:
      - app-tier
    ports:
      - 9002:9000
    volumes:
      - .:/srv/app
  nginx:
    build:
      context: .
      dockerfile: .docker/nginx/Dockerfile
    networks:
      - app-tier
    ports:
      - 8080:80
    volumes:
      - ./public:/srv/app/public
  db:
    build:
      context: .docker/postgres/
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-tier
    ports:
      - 5433:5432
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: secret
    volumes:
      - .docker/postgres/data:/var/lib/postgresql/data
  laravel-echo-server:
    build:
      context: .docker/laravel-echo-server
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-tier
    ports:
      - 6001:6001
    links:
      - 'redis:redis'
  redis:
    build:
      context: .docker/redis
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-tier
    volumes:
      - .docker/redis/data:/var/lib/redis/data
My echo-server Dockerfile:
FROM node:10-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN apk add --update \
python \
python-dev \
py-pip \
build-base
RUN npm install
COPY laravel-echo-server.json /usr/src/app/laravel-echo-server.json
EXPOSE 3000
CMD [ "npm", "start" ]
Redis Dockerfile:
FROM redis:latest
LABEL maintainer="maintainer"
COPY . /usr/src/app
COPY redis.conf /usr/src/app/redis/redis.conf
VOLUME /data
EXPOSE 6379
CMD ["redis-server", "/usr/src/app/redis/redis.conf"]
My laravel-echo-server.json:
{
  "authHost": "localhost",
  "authEndpoint": "/broadcasting/auth",
  "clients": [],
  "database": "redis",
  "databaseConfig": {
    "redis": {
      "port": "6379",
      "host": "redis"
    }
  },
  "devMode": true,
  "host": null,
  "port": "6001",
  "protocol": "http",
  "socketio": {},
  "sslCertPath": "",
  "sslKeyPath": ""
}
The redis.conf is the default right now. The error I am getting from the laravel-echo-server is:
[ioredis] Unhandled error event: Error: connect ECONNREFUSED 172.20.0.2:6379 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1163:14)
Redis itself is up and running fine, using the configuration file and ready to accept connections, and docker ps shows both redis and echo-server are up, so, as the error indicates, they're simply not connecting.
If I change the final line in the Redis Dockerfile to just CMD ["redis-server"], it appears to connect and automatically uses the default config (which is the same as the one I have in my .docker directory), but then I get this error: Possible SECURITY ATTACK detected. It looks like somebody is sending POST or Host: commands to Redis. This is likely due to an attacker attempting to use Cross Protocol Scripting to compromise your Redis instance. Connection aborted.
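One detail worth checking (an assumption, since the redis.conf contents aren't shown): the stock redis.conf binds to 127.0.0.1, which refuses connections from other containers even though Redis itself is healthy, whereas redis-server started without a config file listens on all interfaces. A minimal sketch of the relevant lines in .docker/redis/redis.conf:
# The stock file contains "bind 127.0.0.1", which only accepts connections
# from inside the redis container itself; other containers get ECONNREFUSED.
bind 0.0.0.0
# With no password set, protected-mode also rejects remote clients; disabling it
# is reasonable here because the app-tier bridge network is not publicly exposed.
protected-mode no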