I have a simple React app and a Spring Boot application as the backend. My front end should now communicate with the back end via a defined network. If I put both services in a docker-compose file and deploy the whole thing, the containers for both images are running, but when I call the React page I get a "This page does not work (ERR_EMPTY_RESPONSE)" error. Where is the mistake in my reasoning?
My Dockerfile for React:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8083
CMD [ "npm", "start" ]
And my compose file:
version: "3.9"
services:
backend:
container_name: "spring-boot"
build: ./Java
networks:
- bens_network
frontend:
container_name: "react-app"
build: C:\einkauf\einkauf
ports:
- 8083:8083
volumes:
- './:/app'
- '/app/node_modules'
stdin_open: true
tty: true
networks:
- bens_network
networks:
bens_network:
And lastly, for the sake of completeness, my Dockerfile for the backend:
FROM openjdk:8
COPY ./target/makecalls.jar makecalls.jar
EXPOSE 8080
CMD ["java","-jar","makecalls.jar"]
Related
I just followed the Docker docs example; I have these lines in my Dockerfile:
FROM golang:1.18-buster AS build
WORKDIR /app
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY *.go ./
RUN go build -o /docker-gs-ping-roach
FROM gcr.io/distroless/base-debian10
WORKDIR /
COPY --from=build /docker-gs-ping-roach /docker-gs-ping-roach
EXPOSE 4433
USER nonroot:nonroot
ENTRYPOINT ["/docker-gs-ping-roach"]
In docker-compose.yaml:
version: '3.8'
services:
  docker-gs-ping-roach:
    depends_on:
      - roach
    build:
      context: .
    container_name: rest-server
    hostname: rest-server
    networks:
      - mynet
    ports:
      - 8000:8000
      - 4433:4433
    environment:
      - PGUSER=${PGUSER:-totoro}
      - PGPASSWORD=${PGPASSWORD:?database password not set}
      - PGHOST=${PGHOST:-db}
      - PGPORT=${PGPORT:-26257}
      - PGDATABASE=${PGDATABASE:-mydb}
    deploy:
      restart_policy:
        condition: on-failure
  roach:
    image: cockroachdb/cockroach:latest-v20.1
    container_name: roach
    hostname: db
    networks:
      - mynet
    ports:
      - 26257:26257
      - 8080:8080
    volumes:
      - roach:/cockroach/cockroach-data
    command: start-single-node --insecure
volumes:
  roach:
networks:
  mynet:
    driver: bridge
There is no error shown in the terminal and the database is working on http://localhost:8080/, but when I visit the Go app on http://localhost:4433 I get this error:
curl: (52) Empty reply from server
I checked the containers to make sure that I hit the right port.
I'm not sure where you got port 4433 or 8000 from.
The docs show -p 80:8080, so change your ports to use that instead.
More specifically, the web-server for the Go app defaults to start on port 8080, but that conflicts with CockroachDB, so you need to change it on the host.
Or you need to define HTTP_PORT=4433, then the port mapping of 4433:4433 would work.
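To make that concrete, here is a minimal sketch of the two options for the service's port configuration, assuming (as described above) that the Go web server listens on 8080 inside the container unless told otherwise:

services:
  docker-gs-ping-roach:
    # Option 1: keep the container's default port (8080) and remap it on the
    # host, e.g. host port 4433 forwarded to container port 8080:
    ports:
      - 4433:8080
    # Option 2 (instead of the mapping above): set HTTP_PORT as suggested
    # above so the app itself listens on 4433, keeping 4433:4433:
    # environment:
    #   - HTTP_PORT=4433
    # ports:
    #   - 4433:4433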
In Docker Compose, we have two services (a backend in Flask and a frontend in React) running at the same time in different directories. What are best practices for automatically updating the frontend or backend service when a change is made to the respective code?
In our case, we have:
frontend/
    index.html
    docker-compose.yml
    Dockerfile
    src/
        App.js
        index.js
        ..
And our backend is:
backend/
    app.py
    Dockerfile
    docker-compose.yml
This is our docker-compose.yml file:
version: '3.8'
services:
  frontend:
    image: node:alpine
    build:
      context: ../frontend
      dockerfile: ../frontend/Dockerfile
    command: npm start
    depends_on:
      - database # dont start until the database is up
      - app
    ports:
      - 3000:3000
    volumes:
      - .:/frontend
  app:
    image: python:3.9
    build:
      context: .
      dockerfile: ./Dockerfile
    command: app.py
    depends_on:
      - database # dont start until the database is up
    ports:
      - 8080:8080
    environment:
      - PGPASSWORD=magical_password
      - POSTGRESQL_PASSWORD=magical_password
      - POSTGRESQL_HOST=backend-database-1
      - POSTGRESQL_USER_NAME=unicorn_user
      - LOCAL_ENVIRONMENT=True
      - FLASK_ENV=development
      - REPLICATE_API_TOKEN
    volumes:
      - .:/app
  database:
    image: "postgres" # use latest official postgres version
    env_file:
      - database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql
    ports:
      - "5432:5432"
volumes:
  database-data: # named volumes can be managed easier using docker-compose
Typically, we reload the app (on change) almost instantly via a volume in the volumes section. This approach correctly updates the backend service when the backend code is changed, but not the frontend service. Also, we have two docker-compose files, one in frontend and one in backend, which we hope to somehow learn how to consolidate.
Edit: These are the logs that work for the backend (app_1 is the backend) but do not work for the frontend:
app_1 | * Detected change in '/app/app.py', reloading
app_1 | environ({'PATH': '/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'HOSTNAME': '***', 'PGPASSWORD': '***', 'POSTGRESQL_PASSWORD': 'magical_password', 'POSTGRESQL_HOST': 'backend-database-1', 'POSTGRESQL_USER_NAME': '***', 'LOCAL_ENVIRONMENT': 'True', 'FLASK_ENV': 'development', 'LANG': 'C.UTF-8', 'GPG_KEY': '***', 'PYTHON_VERSION': '3.9.13', 'PYTHON_PIP_VERSION': '22.0.4', 'PYTHON_SETUPTOOLS_VERSION': '58.1.0', 'PYTHON_GET_PIP_URL': 'https://github.com/pypa/get-pip/raw/6ce3639da143c5d79b44f94b04080abf2531fd6e/public/get-pip.py', 'PYTHON_GET_PIP_SHA256': '***', 'HOST': '0.0.0.0', 'PORT': '8080', 'HOME': '/root', 'KMP_INIT_AT_FORK': 'FALSE', 'KMP_DUPLICATE_LIB_OK': 'True', 'WERKZEUG_SERVER_FD': '3', 'WERKZEUG_RUN_MAIN': 'true'})
app_1 | * Restarting with stat
app_1 | * Tip: There are .env or .flaskenv files present. Do "pip install python-dotenv" to use them.
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 203-417-897
Edit 2: We followed the link suggested in the comments. We attempted setting both WATCHPACK_POLLING and CHOKIDAR_USEPOLLING to "true" but had no luck. We also refactored our docker-compose file to sit outside the directories, like so:
docker-compose.yml
frontend/
    index.html
    Dockerfile
    src/
        App.js
        index.js
        ..
backend/
    app.py
    Dockerfile
Here is the new docker-compose
version: '3.8'
services:
  frontend:
    image: node:alpine
    build:
      context: ./frontend
      cache_from:
        - node:alpine
      dockerfile: ./Dockerfile
    command: npm start
    depends_on:
      - database # dont start until the database is up
      - app
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING="true"
    volumes:
      - /app/node_modules
      - ./frontend:/app
  app:
    image: python:3.9
    build:
      context: ./backend
      cache_from:
        - python:3.9
      dockerfile: ./Dockerfile
    command: backend/app.py
    depends_on:
      - database # dont start until the database is up
    ports:
      - 8080:8080
    environment:
      - PGPASSWORD=magical_password
      - POSTGRESQL_PASSWORD=magical_password
      - POSTGRESQL_HOST=backend-database-1
      - POSTGRESQL_USER_NAME=unicorn_user
      - LOCAL_ENVIRONMENT=True
      - FLASK_ENV=development
      - REPLICATE_API_TOKEN
    volumes:
      - .:/app
  database:
    image: "postgres" # use latest official postgres version
    env_file:
      - backend/database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
      - ./schema.sql:/backend/docker-entrypoint-initdb.d/schema.sql
    ports:
      - "5432:5432"
volumes:
  database-data: # named volumes can be managed easier using docker-compose
  app:
And here is our Dockerfile for the frontend
FROM node:alpine
RUN mkdir -p /frontend
WORKDIR /frontend
# We copy just the package.json first to leverage Docker cache
COPY package.json /frontend
RUN npm install --legacy-peer-deps
COPY . /frontend
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3000
EXPOSE ${PORT}
CMD ["npm", "start"]
and for the backend
FROM python:3.9
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=8080
EXPOSE ${PORT}
# This runs the app in the container
CMD [ "app.py" ]
The backend still hot reloads: every time we make a change, it is detected, picked up, and reflected in docker-compose immediately. But the frontend requires a restart with docker-compose down --volumes && docker-compose build --no-cache && docker-compose up, and the output we get from docker-compose shows no logs. It's like docker-compose can't see the changes.
Edit 3: Any help would be much appreciated!
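For what it's worth, two details in the files above stand out: with the list form of environment:, the value after = is taken literally (so ="true" includes the quote characters), and the bind mount targets /app while the frontend image's WORKDIR is /frontend. Here is a minimal sketch of the frontend service with unquoted flags and the mount pointed at the WORKDIR, offered as an assumption-laden starting point rather than a verified fix:

services:
  frontend:
    build:
      context: ./frontend
    command: npm start
    ports:
      - 3000:3000
    environment:
      # Unquoted on purpose: with the list syntax, CHOKIDAR_USEPOLLING="true"
      # sets the value to the literal string "true", quotes included.
      - CHOKIDAR_USEPOLLING=true
      - WATCHPACK_POLLING=true
    volumes:
      # Mount the source over the image's WORKDIR (/frontend in the Dockerfile
      # above) so the dev server watches the host files...
      - ./frontend:/frontend
      # ...while keeping the container's installed node_modules.
      - /frontend/node_modules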
To dockerize Strapi with Docker and docker-compose (see https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/installation/docker.html#creating-a-strapi-project), you can use my dockerized project. It also resolves a different error: "strapi failed to load resource: the server responded with a status of 404 ()".
Dockerfile:
FROM node:16.15-alpine3.14
RUN mkdir -p /opt/app
WORKDIR /opt/app
RUN adduser -S app
COPY app/ .
RUN npm install
RUN npm install --save @strapi/strapi
RUN chown -R app /opt/app
USER app
RUN npm run build
EXPOSE 1337
CMD [ "npm", "run", "start" ]
If you don't use RUN npm run build, your project works on port 80 (http://localhost), but the Strapi admin templates still call http://localhost:1337; since there is no stable http://localhost:1337 URL on the system where you are running http://localhost, Strapi throws exceptions like:
Refused to connect to 'http://localhost:1337/admin/init' because it violates the document's Content Security Policy.
Refused to connect to 'http://localhost:1337/admin/init' because it violates the following Content Security Policy directive: "connect-src 'self' https:".
docker-compose.yml:
version: "3.9"
services:
#Strapi Service (APP Service)
strapi_app:
build:
context: .
depends_on:
- strapi_db
ports:
- "80:1337"
environment:
- DATABASE_CLIENT=postgres
- DATABASE_HOST=strapi_db
- DATABASE_PORT=5432
- DATABASE_NAME=strapi_db
- DATABASE_USERNAME=strapi_db
- DATABASE_PASSWORD=strapi_db
- DATABASE_SSL=false
volumes:
- /var/scrapi/public/uploads:/opt/app/public/uploads
- /var/scrapi/public:/opt/app/public
networks:
- app-network
#PostgreSQL Service
strapi_db:
image: postgres
container_name: strapi_db
environment:
POSTGRES_USER: strapi_db
POSTGRES_PASSWORD: strapi_db
POSTGRES_DB: strapi_db
ports:
- '5432:5432'
volumes:
- dbdata:/var/lib/postgresql/data
networks:
- app-network
#Docker Networks
networks:
app-network:
driver: bridge
#Volumes
volumes:
dbdata:
driver: local
In the docker-compose file I used postgres as the database; you can use any other database and set its config in the app service's environment variables, like:
environment:
  - DATABASE_CLIENT=postgres
  - DATABASE_HOST=strapi_db
  - DATABASE_PORT=5432
  - DATABASE_NAME=strapi_db
  - DATABASE_USERNAME=strapi_db
  - DATABASE_PASSWORD=strapi_db
  - DATABASE_SSL=false
To use environment variables in the project, you must use process.env to read the operating system's environment variables. Change the app/config/database.js file to:
module.exports = ({ env }) => ({
  connection: {
    client: process.env.DATABASE_CLIENT,
    connection: {
      host: process.env.DATABASE_HOST,
      port: parseInt(process.env.DATABASE_PORT),
      database: process.env.DATABASE_NAME,
      user: process.env.DATABASE_USERNAME,
      password: process.env.DATABASE_PASSWORD,
      // ssl: Boolean(process.env.DATABASE_SSL),
      ssl: false,
    },
  },
});
Dockerize Strapi with Docker-compose
FROM node:16.14.2
# Set up the working directory that will be used to copy files/directories below :
WORKDIR /app
# Copy package.json to root directory inside Docker container of Strapi app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
CMD ["npm", "start"]
#docker-compose file
version: '3.7'
services:
  strapi:
    container_name: strapi
    restart: unless-stopped
    build:
      context: ./strapi
      dockerfile: Dockerfile
    volumes:
      - strapi:/app
      - /app/node_modules
    ports:
      - '1337:1337'
volumes:
  strapi:
    driver: local
I need help linking the back end of my application to the front end, to create a simple application that returns "Working" if the backend is connected to the front end. When I run it without Docker it works properly, but when I try it with Docker, it doesn't connect.
Technologies used:
Spring Boot (Backend)
Node.js - Express - Axios (Frontend)
Dockerfile, frontend
FROM node:14
WORKDIR /app
COPY package*.json /app/
RUN npm install -g nodemon
COPY . /app
EXPOSE 3000
CMD ["node", "start"]
Dockerfile Back-End:
FROM openjdk:latest
WORKDIR /usr/src/app
ADD target/springboot.jar /usr/src/app/springboot.jar
EXPOSE 8080
ENTRYPOINT [ "java", "-jar", "/usr/src/app/springboot.jar" ]
docker-compose.yml code:
version: '3.7'
services:
  backend:
    build:
      context: backend
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    networks:
      - integration
  frontend:
    build:
      context: frontend
      dockerfile: Dockerfile
    command: nodemon start frontend/app.js
    volumes:
      - "./frontend:/app/"
    depends_on:
      - backend
    ports:
      - "3000:3000"
    networks:
      - integration
networks:
  integration:
    driver: bridge
Exposed ports are available to other containers at [service]:[port] in this case
backend:8080
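A minimal sketch of how the frontend service could be pointed at that address; BACKEND_URL is purely an illustrative name, and the Express/Axios code would have to read it (e.g. via process.env.BACKEND_URL) instead of using localhost:

services:
  frontend:
    build:
      context: frontend
      dockerfile: Dockerfile
    environment:
      # Hypothetical variable: inside the compose network the backend is
      # reachable by its service name, not by localhost.
      - BACKEND_URL=http://backend:8080
    depends_on:
      - backend
    networks:
      - integration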
So I have a basic frontend and backend. The backend relies on some environment variables and this is my docker-compose.yml.
version: "3.9"
services:
backend:
env_file:
- .env
build:
context: ./backend
container_name: fastapi-api
ports:
- 80:80
frontend:
build:
context: ./frontend
container_name: vue-ui
ports:
- 8080:8080
links:
- backend
This gives me ERR_EMPTY_RESPONSE when I go to http://127.0.0.1:8080/. However, when I ran the individual Dockerfiles for my frontend and backend, everything went smoothly.
My frontend
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'frontend' folder the current working directory
WORKDIR /frontend
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
My backend
FROM tiangolo/uvicorn-gunicorn:python3.8
LABEL maintainer="Sebastian Ramirez <tiangolo@gmail.com>"
WORKDIR /backend
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
EXPOSE 80
This is what I see from running docker ps
This is what's happening: frontend requests are being sent to the wrong place
I want it to go here
So requests should go to port 80 not port 8000
This is what I see from dev tools
However this is my code
axios
  .post(`http://127.0.0.1:80/city/`, {
    city_name: this.current_city
  })
Where are the extra 0s coming from?
This is what happens when I ran the two containers separately
Looking at the docker ps output, I would guess that you have accidentally switched the ports for the backend and frontend in the configuration. The frontend has unmapped port 80 and the backend has unmapped port 8080.
Try this one:
version: "3.9"
services:
backend:
env_file:
- .env
build:
context: ./backend
container_name: fastapi-api
ports:
- 8080:8080
frontend:
build:
context: ./frontend
container_name: vue-ui
ports:
- 80:80
links:
- backend