Dockerize Strapi with docker-compose: complete guide

https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/installation/docker.html#creating-a-strapi-project
This guide covers dockerizing Strapi with Docker and docker-compose, and resolving errors such as:
strapi failed to load resource: the server responded with a status of 404 ()

You can use my dockerized project.
Dockerfile:
FROM node:16.15-alpine3.14
RUN mkdir -p /opt/app
WORKDIR /opt/app
# Create an unprivileged user to run the app
RUN adduser -S app
COPY app/ .
RUN npm install
RUN npm install --save @strapi/strapi
RUN chown -R app /opt/app
USER app
# Build the admin panel so it calls the server on relative URLs
RUN npm run build
EXPOSE 1337
CMD [ "npm", "run", "start" ]
If you don't include RUN npm run build, your project still works on port 80 (http://localhost), but the Strapi admin panel keeps calling http://localhost:1337. Since you are serving on http://localhost and nothing stable is listening on http://localhost:1337, Strapi throws exceptions like:
Refused to connect to 'http://localhost:1337/admin/init' because it violates the document's Content Security Policy.
Refused to connect to 'http://localhost:1337/admin/init' because it violates the following Content Security Policy directive: "connect-src 'self' https:".
docker-compose.yml:
version: "3.9"
services:
#Strapi Service (APP Service)
strapi_app:
build:
context: .
depends_on:
- strapi_db
ports:
- "80:1337"
environment:
- DATABASE_CLIENT=postgres
- DATABASE_HOST=strapi_db
- DATABASE_PORT=5432
- DATABASE_NAME=strapi_db
- DATABASE_USERNAME=strapi_db
- DATABASE_PASSWORD=strapi_db
- DATABASE_SSL=false
volumes:
- /var/scrapi/public/uploads:/opt/app/public/uploads
- /var/scrapi/public:/opt/app/public
networks:
- app-network
#PostgreSQL Service
strapi_db:
image: postgres
container_name: strapi_db
environment:
POSTGRES_USER: strapi_db
POSTGRES_PASSWORD: strapi_db
POSTGRES_DB: strapi_db
ports:
- '5432:5432'
volumes:
- dbdata:/var/lib/postgresql/data
networks:
- app-network
#Docker Networks
networks:
app-network:
driver: bridge
#Volumes
volumes:
dbdata:
driver: local
In the docker-compose file I used Postgres as the database; you can use any other database and set its config in the app service's environment variables, for example:
environment:
  - DATABASE_CLIENT=postgres
  - DATABASE_HOST=strapi_db
  - DATABASE_PORT=5432
  - DATABASE_NAME=strapi_db
  - DATABASE_USERNAME=strapi_db
  - DATABASE_PASSWORD=strapi_db
  - DATABASE_SSL=false
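For a different engine such as MySQL, the same variables change accordingly; a sketch, where the service name and credentials are placeholders and 3306 is MySQL's default port:
environment:
  - DATABASE_CLIENT=mysql
  - DATABASE_HOST=strapi_db
  - DATABASE_PORT=3306
  - DATABASE_NAME=strapi_db
  - DATABASE_USERNAME=strapi_db
  - DATABASE_PASSWORD=strapi_db
  - DATABASE_SSL=false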
To read these environment variables in the project, use process.env, which exposes the operating system's environment variables.
Change app/config/database.js to:
module.exports = ({ env }) => ({
  connection: {
    client: process.env.DATABASE_CLIENT,
    connection: {
      host: process.env.DATABASE_HOST,
      port: parseInt(process.env.DATABASE_PORT, 10),
      database: process.env.DATABASE_NAME,
      user: process.env.DATABASE_USERNAME,
      password: process.env.DATABASE_PASSWORD,
      // ssl: Boolean(process.env.DATABASE_SSL),
      ssl: false,
    },
  },
});
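Strapi also injects an env helper into config files (the { env } argument above), which wraps process.env with typed accessors. A sketch of the same file using it, assuming Strapi v4's env utility:
module.exports = ({ env }) => ({
  connection: {
    client: env('DATABASE_CLIENT', 'postgres'),
    connection: {
      host: env('DATABASE_HOST', '127.0.0.1'),
      port: env.int('DATABASE_PORT', 5432), // env.int parses the value as an integer
      database: env('DATABASE_NAME', 'strapi'),
      user: env('DATABASE_USERNAME', 'strapi'),
      password: env('DATABASE_PASSWORD', ''),
      ssl: env.bool('DATABASE_SSL', false), // env.bool parses 'true'/'false' strings
    },
  },
});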

Dockerize Strapi with Docker-compose
Dockerfile:
FROM node:16.14.2
# Set up the working directory used by the COPY instructions below
WORKDIR /app
# Copy package.json into the working directory first to leverage the Docker layer cache
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
CMD ["npm", "start"]
docker-compose.yml:
version: '3.7'
services:
  strapi:
    container_name: strapi
    restart: unless-stopped
    build:
      context: ./strapi
      dockerfile: Dockerfile
    volumes:
      - strapi:/app
      - /app/node_modules
    ports:
      - '1337:1337'

volumes:
  strapi:
    driver: local
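With both files in place, the stack can be built and started as usual (docker-compose v1 syntax; on newer Docker installs the same commands work as docker compose):
docker-compose up --build -d   # build the image and start the containers in the background
docker-compose logs -f strapi  # follow the Strapi container's logs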

Related

docker-compose: how to automatically propagate changes (both frontend and backend)?

In Docker Compose, we have two services (a backend in Flask and a frontend in React) running at the same time in different directories. What are best practices for automatically updating the frontend or backend service when a change is made to the respective code?
In our case, we have:
frontend/
  index.html
  docker-compose.yml
  Dockerfile
  src/
    App.js
    index.js
    ..
And our backend is:
backend/
  app.py
  Dockerfile
  docker-compose.yml
This is our docker-compose.yml file:
version: '3.8'
services:
  frontend:
    image: node:alpine
    build:
      context: ../frontend
      dockerfile: ../frontend/Dockerfile
    command: npm start
    depends_on:
      - database # don't start until the database is up
      - app
    ports:
      - 3000:3000
    volumes:
      - .:/frontend
  app:
    image: python:3.9
    build:
      context: .
      dockerfile: ./Dockerfile
    command: app.py
    depends_on:
      - database # don't start until the database is up
    ports:
      - 8080:8080
    environment:
      - PGPASSWORD=magical_password
      - POSTGRESQL_PASSWORD=magical_password
      - POSTGRESQL_HOST=backend-database-1
      - POSTGRESQL_USER_NAME=unicorn_user
      - LOCAL_ENVIRONMENT=True
      - FLASK_ENV=development
      - REPLICATE_API_TOKEN
    volumes:
      - .:/app
  database:
    image: "postgres" # use latest official postgres version
    env_file:
      - database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql
    ports:
      - "5432:5432"
volumes:
  database-data: # named volumes can be managed easier using docker-compose
Typically, we reload the app on change almost instantly via a bind mount declared in the volumes section. This approach correctly reloads the backend service when backend code changes, but not the frontend service. Also, we have two docker-compose files, one in frontend and one in backend, which we hope to consolidate somehow.
Edit: These are the logs that work for the backend (app_1 is the backend) but do not work for the frontend:
app_1 | * Detected change in '/app/app.py', reloading
app_1 | environ({'PATH': '/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'HOSTNAME': '***', 'PGPASSWORD': '***', 'POSTGRESQL_PASSWORD': 'magical_password', 'POSTGRESQL_HOST': 'backend-database-1', 'POSTGRESQL_USER_NAME': '***', 'LOCAL_ENVIRONMENT': 'True', 'FLASK_ENV': 'development', 'LANG': 'C.UTF-8', 'GPG_KEY': '***', 'PYTHON_VERSION': '3.9.13', 'PYTHON_PIP_VERSION': '22.0.4', 'PYTHON_SETUPTOOLS_VERSION': '58.1.0', 'PYTHON_GET_PIP_URL': 'https://github.com/pypa/get-pip/raw/6ce3639da143c5d79b44f94b04080abf2531fd6e/public/get-pip.py', 'PYTHON_GET_PIP_SHA256': '***', 'HOST': '0.0.0.0', 'PORT': '8080', 'HOME': '/root', 'KMP_INIT_AT_FORK': 'FALSE', 'KMP_DUPLICATE_LIB_OK': 'True', 'WERKZEUG_SERVER_FD': '3', 'WERKZEUG_RUN_MAIN': 'true'})
app_1 | * Restarting with stat
app_1 | * Tip: There are .env or .flaskenv files present. Do "pip install python-dotenv" to use them.
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 203-417-897
Edit 2: We followed the link suggested in the comments. We attempted setting both WATCHPACK_POLLING and CHOKIDAR_USEPOLLING to "true", but no luck. And we refactored our docker-compose file to sit outside the directories, like so:
docker-compose.yml
frontend/
  index.html
  Dockerfile
  src/
    App.js
    index.js
    ..
backend/
  app.py
  Dockerfile
Here is the new docker-compose.yml:
version: '3.8'
services:
  frontend:
    image: node:alpine
    build:
      context: ./frontend
      cache_from:
        - node:alpine
      dockerfile: ./Dockerfile
    command: npm start
    depends_on:
      - database # don't start until the database is up
      - app
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING="true"
    volumes:
      - /app/node_modules
      - ./frontend:/app
  app:
    image: python:3.9
    build:
      context: ./backend
      cache_from:
        - python:3.9
      dockerfile: ./Dockerfile
    command: backend/app.py
    depends_on:
      - database # don't start until the database is up
    ports:
      - 8080:8080
    environment:
      - PGPASSWORD=magical_password
      - POSTGRESQL_PASSWORD=magical_password
      - POSTGRESQL_HOST=backend-database-1
      - POSTGRESQL_USER_NAME=unicorn_user
      - LOCAL_ENVIRONMENT=True
      - FLASK_ENV=development
      - REPLICATE_API_TOKEN
    volumes:
      - .:/app
  database:
    image: "postgres" # use latest official postgres version
    env_file:
      - backend/database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
      - ./schema.sql:/backend/docker-entrypoint-initdb.d/schema.sql
    ports:
      - "5432:5432"
volumes:
  database-data: # named volumes can be managed easier using docker-compose
  app:
And here is our Dockerfile for the frontend:
FROM node:alpine
RUN mkdir -p /frontend
WORKDIR /frontend
# We copy just the package.json first to leverage Docker cache
COPY package.json /frontend
RUN npm install --legacy-peer-deps
COPY . /frontend
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3000
EXPOSE ${PORT}
CMD ["npm", "start"]
and for the backend:
FROM python:3.9
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=8080
EXPOSE ${PORT}
# This runs the app in the container
CMD [ "app.py" ]
Still, the backend hot-reloads: every time we make a change, it is detected and reflected immediately. But the frontend requires a full restart with docker-compose down --volumes && docker-compose build --no-cache && docker-compose up. On change, we get no logs from docker-compose for the frontend; it's as if docker-compose can't see the changes.
Edit 3: Any help would be much appreciated!
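One likely culprit, offered here as an assumption rather than a confirmed diagnosis: in list-style environment entries the quotes are not stripped, so CHOKIDAR_USEPOLLING="true" sets the value to the literal string "true" (with quotes) and the watcher never enables polling. A sketch of the frontend service with the quoting fixed and both polling variables set, assuming a webpack-based dev server such as Create React App:
frontend:
  build:
    context: ./frontend
  command: npm start
  environment:
    - CHOKIDAR_USEPOLLING=true # unquoted, so the value is exactly true
    - WATCHPACK_POLLING=true   # webpack 5 / newer react-scripts read this variable instead
  volumes:
    - ./frontend:/app          # bind-mount the source so edits reach the container
    - /app/node_modules        # keep the container's node_modules, not the host's
  ports:
    - 3000:3000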

Docker: Node server is not running after starting the server

I have a Dockerfile and a docker-compose.yml file.
If I execute docker-compose up, it returns:
Creating network "demoapi_webnet" with the default driver
Creating demoapi_web_1 ... done
Creating d2c_postgres ... done
Attaching to demoapi_web_1, d2c_postgres
...
d2c_postgres | 2020-07-28 00:47:48.772 UTC [1] LOG: database system is ready to accept connections
But my node server is not starting.
These are my docker configuration files:
Dockerfile
FROM node:12.13-alpine AS development
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY dist .
COPY wait-for-it.sh .
CMD ["npm", "run", "start"]
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    networks:
      - webnet
    container_name: "d2c_postgres"
    environment:
      POSTGRES_PASSWORD: 010203
      POSTGRES_USER: postgres
      POSTGRES_DB: demo
    ports:
      - "5432:5432"
  web:
    image: nest-app
    ports:
      - "3000:3000"
    networks:
      - webnet
    environment:
      DB_HOST: db
    command: ["./wait-for-it.sh", "db:5432", "--", "npm", "run", "start"]
networks:
  webnet:
My only clue is this line:
env: can't execute 'bash': No such file or directory
I can establish a connection to pgAdmin/Postgres with that configuration, but the node server is not starting. What am I doing wrong and how can I solve it?
wait-for-it.sh is based on bash and is not compatible with Alpine, because Alpine is based on ash (sh); that is why you are seeing can't execute 'bash': No such file or directory. You can look at the open issue for Alpine support:
Can you make an /bin/sh version for use with alpine linux
For Alpine, you can use wait-for instead:
./wait-for is a script designed to synchronize services like docker containers. It is sh and Alpine compatible.
services:
  db:
    image: postgres:9.4
  backend:
    build: backend
    command: sh -c './wait-for db:5432 -- npm start'
    depends_on:
      - db
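For that to work, the script has to be inside the image. A sketch of the relevant Dockerfile lines, assuming the wait-for script from the eficode/wait-for repository has been downloaded next to the Dockerfile (it only needs sh and netcat, both provided by Alpine's busybox):
# Copy the wait-for script into the image and make it executable
COPY wait-for .
RUN chmod +x ./wait-for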
After more research, I found a similar issue here:
docker-compose: nodejs container not communicating with Postgres container
For some reason wait-for-it wasn't working (not sure if it is a Windows issue). That sh file is not mandatory for waiting until the database starts; you can use depends_on to indicate that the server should start after the specified service:
version: '3'
services:
  db:
    image: postgres
    networks:
      - webnet
    container_name: "node_postgres"
    environment:
      POSTGRES_PASSWORD: 010203
      POSTGRES_USER: postgres
      POSTGRES_DB: demo
    ports:
      - "5432:5432"
  web:
    image: nest-app
    depends_on:
      - db
    ports:
      - "3000:3000"
    networks:
      - webnet
    environment:
      DB_HOST: db
    command: ["npm", "run", "start"]
networks:
  webnet:
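Note that plain depends_on only waits for the db container to start, not for Postgres inside it to be ready to accept connections. With a recent Docker Compose (the Compose Specification), you can wait on a healthcheck instead; a sketch:
services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"] # succeeds once Postgres accepts connections
      interval: 5s
      timeout: 5s
      retries: 5
  web:
    image: nest-app
    depends_on:
      db:
        condition: service_healthy # start only after the healthcheck passes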

docker-compose up -d getting error postgres: Bind for 0.0.0.0:5432 failed: port is already allocated

I am on a Mac with Docker version 2.0.0.3 (31259) installed.
docker-compose up -d
Removing ab-insight_postgres_1
Starting ab-insight_data_1 ... done
Recreating 31d36fb9c48a_ab-insight_postgres_1 ... error
ERROR: for 31d36fb9c48a_ab-insight_postgres_1 Cannot start service postgres: b'driver failed programming external connectivity on endpoint ab-insight_postgres_1 (5ed1c634dd3a43c2cd988ff7f14b5c1f3cde848e375c2915cf92420f819e21ac): Error starting userland proxy: Bind for 0.0.0.0:5432 failed: port is already allocated'
ERROR: for postgres Cannot start service postgres: b'driver failed programming external connectivity on endpoint ab-insight_postgres_1 (5ed1c634dd3a43c2cd988ff7f14b5c1f3cde848e375c2915cf92420f819e21ac): Error starting userland proxy: Bind for 0.0.0.0:5432 failed: port is already allocated'
ERROR: Encountered errors while bringing up the project.
Here is my docker-compose.yml
version: '2'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    volumes:
      - /home/flask/app/web
    command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
    depends_on:
      - postgres
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web
  data:
    image: postgres:11
    volumes:
      - /var/lib/postgresql
    command: "true"
  postgres:
    restart: always
    build: ./postgresql
    volumes_from:
      - data
    expose:
      - "5432"
and here is my Dockerfile
FROM python:3.6.1
MAINTAINER Ka So <kanel.soeng@kso.com>
# Create the group and user to be used in this container
RUN groupadd flaskgroup && useradd -m -g flaskgroup -s /bin/bash flask
# Create the working directory (and set it as the working directory)
RUN mkdir -p /home/flask/app/web
WORKDIR /home/flask/app/web
# Install the package dependencies (this step is separated
# from copying all the source code to avoid having to
# re-install all python packages defined in requirements.txt
# whenever any source code change is made)
COPY requirements.txt /home/flask/app/web
RUN pip install --no-cache-dir -r requirements.txt
# Copy the source code into the container
COPY . /home/flask/app/web
RUN chown -R flask:flaskgroup /home/flask
USER flask
Running docker ps shows no containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
This is happening because Postgres is already running locally on your machine on the same port you mapped in your docker-compose.yml for the postgres service.
Either stop the service running on your local machine (not recommended),
or map a different host port to port 5432 of the container. To do so, replace
expose:
  - "5432"
in the postgres service with:
ports:
  - "5433:5432"
The whole docker compose file will look like:
version: '2'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    volumes:
      - /home/flask/app/web
    command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
    depends_on:
      - postgres
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web
  data:
    image: postgres:11
    volumes:
      - /var/lib/postgresql
    command: "true"
  postgres:
    restart: always
    build: ./postgresql
    volumes_from:
      - data
    ports:
      - "5433:5432"

Docker-compose, Laravel-echo-server and Redis connectivity issue: [ioredis] Unhandled error event: Error: connect ECONNREFUSED 172.20.0.2:6379

I'm setting up a docker stack with PHP, PostgreSQL, Nginx, Laravel-Echo-Server and Redis and having some issues with Redis and the echo-server connecting. I'm using a docker-compose.yml:
version: '3'
networks:
  app-tier:
    driver: bridge
services:
  app:
    build:
      context: .
      dockerfile: .docker/php/Dockerfile
    networks:
      - app-tier
    ports:
      - 9002:9000
    volumes:
      - .:/srv/app
  nginx:
    build:
      context: .
      dockerfile: .docker/nginx/Dockerfile
    networks:
      - app-tier
    ports:
      - 8080:80
    volumes:
      - ./public:/srv/app/public
  db:
    build:
      context: .docker/postgres/
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-tier
    ports:
      - 5433:5432
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: secret
    volumes:
      - .docker/postgres/data:/var/lib/postgresql/data
  laravel-echo-server:
    build:
      context: .docker/laravel-echo-server
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-tier
    ports:
      - 6001:6001
    links:
      - 'redis:redis'
  redis:
    build:
      context: .docker/redis
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-tier
    volumes:
      - .docker/redis/data:/var/lib/redis/data
My echo-server Dockerfile:
FROM node:10-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN apk add --update \
python \
python-dev \
py-pip \
build-base
RUN npm install
COPY laravel-echo-server.json /usr/src/app/laravel-echo-server.json
EXPOSE 3000
CMD [ "npm", "start" ]
Redis Dockerfile:
FROM redis:latest
LABEL maintainer="maintainer"
COPY . /usr/src/app
COPY redis.conf /usr/src/app/redis/redis.conf
VOLUME /data
EXPOSE 6379
CMD ["redis-server", "/usr/src/app/redis/redis.conf"]
My laravel-echo-server.json:
{
  "authHost": "localhost",
  "authEndpoint": "/broadcasting/auth",
  "clients": [],
  "database": "redis",
  "databaseConfig": {
    "redis": {
      "port": "6379",
      "host": "redis"
    }
  },
  "devMode": true,
  "host": null,
  "port": "6001",
  "protocol": "http",
  "socketio": {},
  "sslCertPath": "",
  "sslKeyPath": ""
}
The redis.conf is the default right now. The error I am getting from the laravel-echo-server is:
[ioredis] Unhandled error event: Error: connect ECONNREFUSED 172.20.0.2:6379 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1163:14)
Redis is up and running fine, using the configuration file and ready to accept connections. docker ps shows both redis and echo-server are up, so they're just not connecting as the error indicates. If I change the final line in the Redis Dockerfile to just CMD ["redis-server"] it appears to connect and auto uses the default config (which is the same as the one I have in my .docker directory), but I get this error: Possible SECURITY ATTACK detected. It looks like somebody is sending POST or Host: commands to Redis. This is likely due to an attacker attempting to use Cross Protocol Scripting to compromise your Redis instance. Connection aborted.
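A likely cause, offered as an assumption based on Redis defaults rather than anything confirmed in the thread: the stock redis.conf binds only to 127.0.0.1 and enables protected mode, so connections from other containers are refused. For container-to-container access on a trusted bridge network, the relevant redis.conf lines would be:
# Listen on all interfaces so other containers on the bridge network can connect
bind 0.0.0.0
# Protected mode rejects remote clients when no password is set; disable it only
# on a trusted network, or keep it and set `requirepass` instead
protected-mode no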

Exposing localhost ports in several local services

I'm currently attempting to use Docker to make our local dev experience involving two services easier, but I'm struggling to use host and container ports in the right way. Here's the situation:
One repo containing a Rails API, running on 127.0.0.1:3000 (lets call this backend)
One repo containing an isomorphic React/Redux frontend app, running on 127.0.0.1:8080 (lets call this frontend)
Both have their own Dockerfile and docker-compose.yml files as they are in separate repos, and both start with docker-compose up fine.
Currently not using Docker at all for CI or deployment, planning to in the future.
The issue I'm having is that in local development the frontend app is looking for the API backend on 127.0.0.1:3000 from within the frontend container, which isn't there - it's only available to the host and the backend container actually running the Rails app.
Is it possible to forward the backend container's 3000 port to the frontend container? Or at the very least the host's 3000 port as I can see the Rails app on localhost on my computer. I've tried 127.0.0.1:3000:3000 within the frontend docker-compose but I can't do that while running the Rails app as the port is in use and fails to connect. I'm thinking maybe I've misunderstood the point or am missing something obvious?
Files:
frontend Dockerfile
FROM node:8.7.0
RUN npm install --global --silent webpack yarn
RUN mkdir /app
WORKDIR /app
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
RUN yarn install
COPY . /app
frontend docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: yarn start:dev
    volumes:
      - .:/app
    ports:
      - '8080:8080'
      - '127.0.0.1:3000:3000' # rails backend exposed to localhost within container
backend Dockerfile
FROM ruby:2.4.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
RUN bundle install
COPY . /app
backend docker-compose.yml
version: '3'
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    image: postgres:9.6
    volumes:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres
You have to put the containers on a shared network; do it in your docker-compose.yml files.
Check these docs to learn about networks in Docker.
frontend docker-compose.yml
version: '3'
services:
  gui:
    build: .
    command: yarn start:dev
    volumes:
      - .:/app
    ports:
      - '8080:8080'
      - '127.0.0.1:3000:3000'
    networks:
      - webnet
networks:
  webnet:
backend docker-compose.yml
version: '3'
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    image: postgres:9.6
    volumes:
      - postgres-data:/var/lib/postgresql/data
  back:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres
    networks:
      - webnet
networks:
  webnet:
Docker has its own DNS resolution, so after you do this you will be able to connect to your backend by setting the address to: http://back:3000
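One caveat (an assumption about how these files are run, since each repo is its own Compose project): Compose prefixes network names with the project name, so the two files above create separate frontend_webnet and backend_webnet networks rather than one shared network. To actually share it, create the network once on the host with docker network create webnet and mark it external in both files:
networks:
  webnet:
    external: true # reuse the pre-created network instead of creating <project>_webnet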
Managed to solve this using external_links in the frontend app, linking to the backend app's default network like so:
version: '3'
services:
  web:
    build: .
    command: yarn start:dev
    environment:
      - API_HOST=http://backend_web_1:3000
    external_links:
      - backend_default
    networks:
      - default
      - backend_default
    ports:
      - '8080:8080'
    volumes:
      - .:/app
networks:
  backend_default: # share with backend app
    external: true
