I have several Docker images I'm trying to combine using docker-compose.
My server image is failing with the following:
server_1 |
server_1 | To solve this problem, add the platform "linux-musl" to the "binaryTargets" attribute in the "generator" block in the "schema.prisma" file:
server_1 | generator client {
server_1 | provider = "prisma-client-js"
server_1 | binaryTargets = ["native"]
server_1 | }
server_1 |
server_1 | Then run "prisma generate" for your changes to take effect.
I go into the server directory and make the suggested changes to schema.prisma:
generator client {
provider = "prisma-client-js"
binaryTargets = ["native", "linux-musl"]
}
And yet when I try docker-compose up again it fails on the same error.
I have deleted node_modules multiple times and tried npx prisma generate but it's almost as if docker-compose is using the same old images.
I've also tried docker-compose up --force-recreate without luck.
My docker-compose.yml:
version: '3.4'
services:
server:
env_file: .env
build: ./server
working_dir: /usr/loft/server
environment:
- DATABASE_URL=postgresql://my_dbase_info
ports:
- "5000:5000"
networks:
- loft-app
client:
depends_on:
- server
build: ./client
stdin_open: true
ports:
- "3000:3000"
working_dir: /usr/loft/client
networks:
- loft-app
nginx:
depends_on:
- server
- client
restart: always
build:
dockerfile: Dockerfile
context: ./nginx
ports:
- "8080:80"
networks:
- loft-app
networks:
loft-app:
driver: bridge
my server Dockerfile:
FROM node:17-alpine
WORKDIR /usr/loft/server
ARG DATABASE_URL=""
ENV DATABASE_URL $DATABASE_URL
# Bundle app source
COPY . .
# Install app dependencies
RUN npm i -g npm@8 && npm i
RUN npx prisma generate
EXPOSE 5000
CMD [ "npm","run","prod" ]
I found the answer in this article:
$ docker-compose build
$ docker-compose up --force-recreate
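For the record, the rebuild and recreate can also be collapsed into one command; a quick sketch of the equivalent invocations (standard docker-compose flags; --no-cache is only needed if a cached layer keeps producing the stale Prisma client):
$ docker-compose up --build --force-recreate
# or rebuild only the server image from scratch first:
$ docker-compose build --no-cache server
$ docker-compose up --force-recreate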
Related
In Docker Compose, we have two services (a Flask backend and a React frontend) running at the same time in different directories. What are best practices for automatically updating the frontend or backend service when a change is made to the respective code?
In our case, we have:
frontend/
index.html
docker-compose.yml
Dockerfile
src
App.js
index.js
..
And our backend is:
backend/
app.py
Dockerfile
docker-compose.yml
This is our docker-compose.yml file:
version: '3.8'
services:
frontend:
image: node:alpine
build:
context: ../frontend
dockerfile: ../frontend/Dockerfile
command: npm start
depends_on:
- database # dont start until the database is up
- app
ports:
- 3000:3000
volumes:
- .:/frontend
app:
image: python:3.9
build:
context: .
dockerfile: ./Dockerfile
command: app.py
depends_on:
- database # dont start until the database is up
ports:
- 8080:8080
environment:
- PGPASSWORD=magical_password
- POSTGRESQL_PASSWORD=magical_password
- POSTGRESQL_HOST=backend-database-1
- POSTGRESQL_USER_NAME=unicorn_user
- LOCAL_ENVIRONMENT=True
- FLASK_ENV=development
- REPLICATE_API_TOKEN
volumes:
- .:/app
database:
image: "postgres" # use latest official postgres version
env_file:
- database.env # configure postgres
volumes:
- database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
- ./schema.sql:/docker-entrypoint-initdb.d/schema.sql
ports:
- "5432:5432"
volumes:
database-data: # named volumes can be managed easier using docker-compose
Typically, we reload the app (on change) almost instantly via a bind-mounted volume in the volumes section. This approach correctly updates the backend service when the backend code changes, but not the frontend service. Also, we have two docker-compose files, one in frontend and one in backend, which we hope to learn how to consolidate.
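Concretely, the pieces that give the backend its near-instant reload are the bind mount plus Flask's development mode; a minimal excerpt of the relevant keys from the compose file above (nothing new, just the two settings that matter):
  app:
    volumes:
      - .:/app                    # the host source is mounted over the code baked into the image
    environment:
      - FLASK_ENV=development     # Flask's reloader watches the mounted files and restarts on change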
Edit: These are the reload logs we get for the backend (app_1 is the backend); we get nothing like this for the frontend:
app_1 | * Detected change in '/app/app.py', reloading
app_1 | environ({'PATH': '/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'HOSTNAME': '***', 'PGPASSWORD': '***', 'POSTGRESQL_PASSWORD': 'magical_password', 'POSTGRESQL_HOST': 'backend-database-1', 'POSTGRESQL_USER_NAME': '***', 'LOCAL_ENVIRONMENT': 'True', 'FLASK_ENV': 'development', 'LANG': 'C.UTF-8', 'GPG_KEY': '***', 'PYTHON_VERSION': '3.9.13', 'PYTHON_PIP_VERSION': '22.0.4', 'PYTHON_SETUPTOOLS_VERSION': '58.1.0', 'PYTHON_GET_PIP_URL': 'https://github.com/pypa/get-pip/raw/6ce3639da143c5d79b44f94b04080abf2531fd6e/public/get-pip.py', 'PYTHON_GET_PIP_SHA256': '***', 'HOST': '0.0.0.0', 'PORT': '8080', 'HOME': '/root', 'KMP_INIT_AT_FORK': 'FALSE', 'KMP_DUPLICATE_LIB_OK': 'True', 'WERKZEUG_SERVER_FD': '3', 'WERKZEUG_RUN_MAIN': 'true'})
app_1 | * Restarting with stat
app_1 | * Tip: There are .env or .flaskenv files present. Do "pip install python-dotenv" to use them.
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 203-417-897
Edit 2: We followed the link suggested in the comments. We tried setting both WATCHPACK_POLLING and CHOKIDAR_USEPOLLING to "true", but no luck. We also refactored our docker-compose file to sit outside the directories, like so:
docker-compose.yml
frontend/
index.html
Dockerfile
src
App.js
index.js
..
backend/
app.py
Dockerfile
Here is the new docker-compose
version: '3.8'
services:
frontend:
image: node:alpine
build:
context: ./frontend
cache_from:
- node:alpine
dockerfile: ./Dockerfile
command: npm start
depends_on:
- database # dont start until the database is up
- app
ports:
- 3000:3000
environment:
- CHOKIDAR_USEPOLLING="true"
volumes:
- /app/node_modules
- ./frontend:/app
app:
image: python:3.9
build:
context: ./backend
cache_from:
- python:3.9
dockerfile: ./Dockerfile
command: backend/app.py
depends_on:
- database # dont start until the database is up
ports:
- 8080:8080
environment:
- PGPASSWORD=magical_password
- POSTGRESQL_PASSWORD=magical_password
- POSTGRESQL_HOST=backend-database-1
- POSTGRESQL_USER_NAME=unicorn_user
- LOCAL_ENVIRONMENT=True
- FLASK_ENV=development
- REPLICATE_API_TOKEN
volumes:
- .:/app
database:
image: "postgres" # use latest official postgres version
env_file:
- backend/database.env # configure postgres
volumes:
- database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
- ./schema.sql:/backend/docker-entrypoint-initdb.d/schema.sql
ports:
- "5432:5432"
volumes:
database-data: # named volumes can be managed easier using docker-compose
app:
And here is our Dockerfile for the frontend:
FROM node:alpine
RUN mkdir -p /frontend
WORKDIR /frontend
# We copy just the package.json first to leverage Docker cache
COPY package.json /frontend
RUN npm install --legacy-peer-deps
COPY . /frontend
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3000
EXPOSE ${PORT}
CMD ["npm", "start"]
and for the backend:
FROM python:3.9
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=8080
EXPOSE ${PORT}
# This runs the app in the container
CMD [ "app.py" ]
Still, the backend hot reloads: every time we make a change, it is detected and reflected in docker-compose immediately. But the frontend requires a restart with docker-compose down --volumes && docker-compose build --no-cache && docker-compose up, and docker-compose gives us no logs for it. It's like docker-compose can't see the changes.
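One syntax detail worth flagging in the compose file above (a hedged aside, not a confirmed fix): with the list form of environment, quotes are passed through literally, so CHOKIDAR_USEPOLLING="true" sets the value to the string "true" including the quotes, and the bind mount targets /app while the frontend Dockerfile runs from WORKDIR /frontend. A minimal sketch of the unquoted, path-matched form:
  frontend:
    environment:
      - CHOKIDAR_USEPOLLING=true    # unquoted; the list form keeps quotes as part of the value
      - WATCHPACK_POLLING=true
    volumes:
      - ./frontend:/frontend        # target matches WORKDIR /frontend in the Dockerfile
      - /frontend/node_modules      # keep the container's installed node_modules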
Edit 3: Any help would be much appreciated!
https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/installation/docker.html#creating-a-strapi-project
Dockerize Strapi with Docker and docker-compose
Resolving a different error:
strapi failed to load resource: the server responded with a status of 404 ()
You can use my dockerized project.
Dockerfile:
FROM node:16.15-alpine3.14
RUN mkdir -p /opt/app
WORKDIR /opt/app
RUN adduser -S app
COPY app/ .
RUN npm install
RUN npm install --save @strapi/strapi
RUN chown -R app /opt/app
USER app
RUN npm run build
EXPOSE 1337
CMD [ "npm", "run", "start" ]
If you don't include RUN npm run build, your project will work on port 80 (http://localhost), but the Strapi admin templates still call http://localhost:1337. Since you are running on http://localhost and there is no stable URL at http://localhost:1337, Strapi throws exceptions like:
Refused to connect to 'http://localhost:1337/admin/init' because it violates the document's Content Security Policy.
Refused to connect to 'http://localhost:1337/admin/init' because it violates the following Content Security Policy directive: "connect-src 'self' https:".
docker-compose.yml:
version: "3.9"
services:
#Strapi Service (APP Service)
strapi_app:
build:
context: .
depends_on:
- strapi_db
ports:
- "80:1337"
environment:
- DATABASE_CLIENT=postgres
- DATABASE_HOST=strapi_db
- DATABASE_PORT=5432
- DATABASE_NAME=strapi_db
- DATABASE_USERNAME=strapi_db
- DATABASE_PASSWORD=strapi_db
- DATABASE_SSL=false
volumes:
- /var/scrapi/public/uploads:/opt/app/public/uploads
- /var/scrapi/public:/opt/app/public
networks:
- app-network
#PostgreSQL Service
strapi_db:
image: postgres
container_name: strapi_db
environment:
POSTGRES_USER: strapi_db
POSTGRES_PASSWORD: strapi_db
POSTGRES_DB: strapi_db
ports:
- '5432:5432'
volumes:
- dbdata:/var/lib/postgresql/data
networks:
- app-network
#Docker Networks
networks:
app-network:
driver: bridge
#Volumes
volumes:
dbdata:
driver: local
In the docker-compose file I used Postgres as the database; you can use any other database and set its config in the app service's environment variables, like:
environment:
- DATABASE_CLIENT=postgres
- DATABASE_HOST=strapi_db
- DATABASE_PORT=5432
- DATABASE_NAME=strapi_db
- DATABASE_USERNAME=strapi_db
- DATABASE_PASSWORD=strapi_db
- DATABASE_SSL=false
To use environment variables in the project, you must read the operating system environment variables with process.env.
Change the app/config/database.js file to:
module.exports = ({ env }) => ({
connection: {
client: process.env.DATABASE_CLIENT,
connection: {
host: process.env.DATABASE_HOST,
port: parseInt(process.env.DATABASE_PORT),
database: process.env.DATABASE_NAME,
user: process.env.DATABASE_USERNAME,
password: process.env.DATABASE_PASSWORD,
// ssl: Boolean(process.env.DATABASE_SSL),
ssl: false,
},
},
});
Dockerize Strapi with Docker-compose
FROM node:16.14.2
# Set up the working directory that will be used to copy files/directories below :
WORKDIR /app
# Copy package.json to root directory inside Docker container of Strapi app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
CMD ["npm", "start"]
#docker-compose file
version: '3.7'
services:
strapi:
container_name: strapi
restart: unless-stopped
build:
context: ./strapi
dockerfile: Dockerfile
volumes:
- strapi:/app
- /app/node_modules
ports:
- '1337:1337'
volumes:
strapi:
driver: local
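To build the image and bring this stack up, the usual compose commands apply (drop -d if you want the logs in the foreground):
docker-compose build
docker-compose up -d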
I've been looking all over for a solution to this, and I have the feeling it's something small I'm missing, but I just can't get this working. I started with an AdonisJS v5 app that I want to dockerize, but it keeps giving me the error below when I run docker-compose up --build:
lwdis-api | Error: Cannot find module '/app/server.js'
lwdis-api | at Function.Module._resolveFilename (node:internal/modules/cjs/loader:933:15)
lwdis-api | at Function.Module._load (node:internal/modules/cjs/loader:778:27)
lwdis-api | at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)
lwdis-api | at node:internal/main/run_main_module:17:47 {
lwdis-api | code: 'MODULE_NOT_FOUND',
lwdis-api | requireStack: []
lwdis-api | }
lwdis-api |
lwdis-api | Node.js v17.5.0
Dockerfile:
FROM node
WORKDIR /app
COPY package.json /app
RUN npm i -g @adonisjs/cli && npm install
COPY . .
EXPOSE 3333
CMD ["npm", "start"]
docker-compose.yml:
version: "3"
services:
lwdis_db:
image: mysql:5.7
ports:
- "33101:3306"
volumes:
- $PWD/data:/var/lib/mysql
environment:
MYSQL_USER: ${MYSQL_USER}
MYSQL_DATABASE: ${MYSQL_DB_NAME}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MYSQL_ROOT_PASSWORD: ${MYSQL_PASSWORD}
networks:
- api-network
lwdis_api:
container_name: "${APP_NAME}-api"
build: .
environment:
- HOST=0.0.0.0
volumes:
- .:/app
ports:
- "3333:3333"
depends_on:
- lwdis_db
networks:
- api-network
networks:
api-network:
I started with this package, and the problem showed up right at the beginning: when I run docker ps it shows the mysql container but not the api container, which I think will be a problem since I want to add other modules etc. Then I deleted all containers and images related to it, and this time I used docker-compose up --build, which shows me this error. I don't have a server.js file, but I do have a server.ts file at the root of the app.
I was hoping someone could help me with this. Thanks in advance.
Try with this:
API Dockerfile
FROM node:12.18.2-alpine3.9
RUN mkdir /srv/app && chown node:node /srv/app
RUN npm install -g @adonisjs/cli
USER node
WORKDIR /srv/app
COPY --chown=node:node package.json package-lock.json ./
RUN npm install --quiet
# TODO: Can remove once we have some dependencies in package.json.
RUN mkdir -p node_modules
COPY . .
RUN cp .env.example .env
# Switch back to root so the Node.js process can listen on port 80
USER root
EXPOSE 80
CMD ["npm","start"]
Mysql dockerfile
FROM mariadb:10.4
docker-compose
version: '3'
services:
db:
container_name: "${SERVICE_PREFIX}-db"
build:
context: ../.
dockerfile: ./docker/mariadb/Dockerfile
env_file:
- ../.env
ports:
- "127.0.0.1:3306:3306"
volumes:
- ${DB_VOLUME_PATH}:/var/lib/mariadb
environment:
MYSQL_ROOT_PASSWORD: "${DB_ROOT_PASSWORD}"
MYSQL_DATABASE: "${DB_DATABASE}"
MYSQL_USER: "${DB_USER}"
MYSQL_PASSWORD: "${DB_PASSWORD}"
networks:
- api-network
restart: always
api:
container_name: "${SERVICE_PREFIX}-api"
tty: true
build:
context: ../.
dockerfile: ./docker/api/Dockerfile
volumes:
- ../.:/srv/app
- app_node_modules:/srv/app/node_modules
restart: always
env_file:
- ../.env
environment:
- HOST=0.0.0.0 # listen on all interfaces
- SERVER_ENV=development
ports:
- "${PORT}:80" # matches actual listener message
depends_on:
- db
networks:
- api-network
networks:
api-network:
driver: "bridge"
volumes:
mysqldata:
driver: "local"
app_node_modules:
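If the app really is AdonisJS v5 (the Dockerfile above installs the older @adonisjs/cli), note that v5 has no server.js at the project root; the compiled entry point only exists after the build step. A hedged sketch of the v5 commands, assuming the standard v5 project layout:
node ace build --production        # compiles server.ts and the rest of the app into build/
cd build && npm ci --production    # install only runtime dependencies for the compiled output
node server.js                     # the compiled entry point lives here, not at the project root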
I have a Dockerfile and a docker-compose.yml file.
If I execute docker-compose up, it returns:
Creating network "demoapi_webnet" with the default driver
Creating demoapi_web_1 ... done
Creating d2c_postgres ... done
Attaching to demoapi_web_1, d2c_postgres
...
d2c_postgres | 2020-07-28 00:47:48.772 UTC [1] LOG: database system is ready to accept connections
But my node server is not starting.
These are my docker configuration files:
Dockerfile
FROM node:12.13-alpine As development
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY dist .
COPY wait-for-it.sh .
CMD ["npm", "run", "start"]
docker-compose.yml
version: '3'
services:
db:
image: postgres
networks:
- webnet
container_name: "d2c_postgres"
environment:
POSTGRES_PASSWORD: 010203
POSTGRES_USER: postgres
POSTGRES_DB: demo
ports:
- "5432:5432"
web:
image: nest-app
ports:
- "3000:3000"
networks:
- webnet
environment:
DB_HOST: db
command: ["./wait-for-it.sh", "db:5432", "--", "npm", "run", "start"]
networks:
webnet:
My only clue is this line:
env: can't execute 'bash': No such file or directory
I can establish a connection to pgadmin/postgres with that configuration, but the node server is not starting. What am I doing wrong and how can I solve it?
wait-for-it is based on bash and isn't compatible with Alpine, since Alpine is based on ash/sh; that is why you are seeing can't execute 'bash': No such file or directory. You can look into the open issue for Alpine support:
Can you make an /bin/sh version for use with alpine linux
For alpine, you can use wait-for
./wait-for is a script designed to synchronize services like docker containers. It is sh and alpine compatible.
services:
db:
image: postgres:9.4
backend:
build: backend
command: sh -c './wait-for db:5432 -- npm start'
depends_on:
- db
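If you go this route, the script also has to exist inside the backend image; a minimal sketch of the Dockerfile additions, assuming the wait-for script has been downloaded next to the Dockerfile (Alpine's busybox already provides the nc binary the script relies on):
COPY wait-for ./wait-for
RUN chmod +x ./wait-for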
After a lot of research, I found a similar issue here:
docker-compose: nodejs container not communicating with Postgres container
For some reason wait-for-it wasn't working (not sure if it's a Windows issue). That sh file is not mandatory for waiting until the database starts; you can use depends_on to indicate that the server should start after a specified service:
version: '3'
services:
db:
image: postgres
networks:
- webnet
container_name: "node_postgres"
environment:
POSTGRES_PASSWORD: 010203
POSTGRES_USER: postgres
POSTGRES_DB: demo
ports:
- "5432:5432"
web:
image: nest-app
depends_on:
- db
ports:
- "3000:3000"
networks:
- webnet
environment:
DB_HOST: db
command: ["npm", "run", "start"]
networks:
webnet:
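Note that plain depends_on only orders container start-up; if the app also needs the database to be ready to accept connections, newer Compose versions support a healthcheck-based condition. A rough sketch using the pg_isready utility that ships in the postgres image:
services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
  web:
    depends_on:
      db:
        condition: service_healthy   # start web only after the healthcheck passes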
I am on a Mac with Docker version 2.0.0.3 (31259).
docker-compose up -d
Removing ab-insight_postgres_1
Starting ab-insight_data_1 ... done
Recreating 31d36fb9c48a_ab-insight_postgres_1 ... error
ERROR: for 31d36fb9c48a_ab-insight_postgres_1 Cannot start service postgres: b'driver failed programming external connectivity on endpoint ab-insight_postgres_1 (5ed1c634dd3a43c2cd988ff7f14b5c1f3cde848e375c2915cf92420f819e21ac): Error starting userland proxy: Bind for 0.0.0.0:5432 failed: port is already allocated'
ERROR: for postgres Cannot start service postgres: b'driver failed programming external connectivity on endpoint ab-insight_postgres_1 (5ed1c634dd3a43c2cd988ff7f14b5c1f3cde848e375c2915cf92420f819e21ac): Error starting userland proxy: Bind for 0.0.0.0:5432 failed: port is already allocated'
ERROR: Encountered errors while bringing up the project.
Here is my docker-compose.yml
version: '2'
services:
web:
restart: always
build: ./web
expose:
- "8000"
volumes:
- /home/flask/app/web
command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
depends_on:
- postgres
nginx:
restart: always
build: ./nginx
ports:
- "80:80"
volumes:
- /www/static
volumes_from:
- web
depends_on:
- web
data:
image: postgres:11
volumes:
- /var/lib/postgresql
command: "true"
postgres:
restart: always
build: ./postgresql
volumes_from:
- data
expose:
- "5432"
and here is my Dockerfile
FROM python:3.6.1
MAINTAINER Ka So <kanel.soeng@kso.com>
# Create the group and user to be used in this container
RUN groupadd flaskgroup && useradd -m -g flaskgroup -s /bin/bash flask
# Create the working directory (and set it as the working directory)
RUN mkdir -p /home/flask/app/web
WORKDIR /home/flask/app/web
# Install the package dependencies (this step is separated
# from copying all the source code to avoid having to
# re-install all python packages defined in requirements.txt
# whenever any source code change is made)
COPY requirements.txt /home/flask/app/web
RUN pip install --no-cache-dir -r requirements.txt
# Copy the source code into the container
COPY . /home/flask/app/web
RUN chown -R flask:flaskgroup /home/flask
USER flask
run docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
This is happening because Postgres is already running locally on your machine on the same port you have mapped for the postgres service in your docker-compose.yml.
Either stop the service running on your local machine (not recommended),
or map a different host port to the container's port 5432. To do so, replace
expose:
- "5432"
in the postgres service with the following:
ports:
- "5433:5432"
The whole docker compose file will look like:
version: '2'
services:
web:
restart: always
build: ./web
expose:
- "8000"
volumes:
- /home/flask/app/web
command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
depends_on:
- postgres
nginx:
restart: always
build: ./nginx
ports:
- "80:80"
volumes:
- /www/static
volumes_from:
- web
depends_on:
- web
data:
image: postgres:11
volumes:
- /var/lib/postgresql
command: "true"
postgres:
restart: always
build: ./postgresql
volumes_from:
- data
ports:
- "5433:5432"