I have an application that is divided into 2 parts: frontend and backend. My frontend is a React JS application and my backend is a Java Spring Boot application. This project runs in Docker, and there are 3 containers: frontend, backend and db (database). My problem is that I can't make my frontend send any request to my backend container. Below are my Docker configuration files:
docker-compose.yml:
version: "3"
services:
db:
image: postgres:9.6
container_name: db
ports:
- "5433:5432"
environment:
- POSTGRES_PASSWORD=123
- POSTGRES_USER=postgres
- POSTGRES_DB=test
backend:
build:
context: ./backend
dockerfile: Dockerfile
container_name: backend
ports:
- "8085:8085"
depends_on:
- db
frontend:
container_name: frontend
build:
context: ./frontend
dockerfile: Dockerfile
expose:
- "80"
ports:
- "80:80"
links:
- backend
depends_on:
- backend
Dockerfile frontend:
# Stage 0, "build-stage", based on Node.js, to build and compile the frontend
FROM node:8.12.0 as build-stage
WORKDIR /app
COPY package*.json /app/
RUN yarn
COPY ./ /app/
RUN yarn run build
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
FROM nginx
RUN rm -rf /usr/share/nginx/html/*
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by tiangolo/node-frontend
COPY --from=build-stage /app/nginx.conf /etc/nginx/conf.d/default.conf
Dockerfile backend:
FROM openjdk:8
ADD /build/libs/reurb-sj-13-11-19.jar reurb-sj-13-11-19.jar
EXPOSE 8085
ENTRYPOINT ["java", "-jar", "reurb-sj-13-11-19.jar", "--app.db.host=
In the Frontend I've tried to send requests to these IPs:
localhost:8085
172.18.0.3:8085
172.18.0.3
0.0.0.0:8085
When I try to send a request from the Frontend, it "starts" and waits for about 10 seconds, then returns with an error. The weird part is that my request doesn't come back with any status.
PS: I've read all over the internet, and everyone says to add EXPOSE, ports and links (inside docker-compose). I've tried that, but it still doesn't work.
You need to connect to backend:8085.
--
You shouldn't be using IPs to connect to your services, but rather the service name listed in your docker-compose file.
Note: If using localhost, that refers to the frontend container itself. Usually 0.0.0.0 is used to bind to all IPs, or to represent any IP address, rather than to connect to a specific IP.
So in your frontend code, you need to use backend as the hostname (e.g., backend:8085).
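A minimal sketch of such a request (note that the service name only resolves where Docker's DNS applies, i.e. from inside another container on the same network; /api/users is a hypothetical endpoint):
// Hedged sketch: address the backend by its compose service name, not an IP.
fetch('http://backend:8085/api/users')
  .then((res) => res.json())
  .then((users) => console.log(users))
  .catch((err) => console.error('request failed:', err));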
It looks like you have already linked your services, so networking shouldn't be an issue. My advice is to always test from within the container using something such as:
docker-compose exec frontend bash
# You may need to install packages
ping backend
telnet backend 8085
I think it is worth mentioning that links are legacy and will eventually be removed.
Source: https://docs.docker.com/network/links/
Unless you really need links, you should create a custom network for your app. Good documentation is here: https://docs.docker.com/compose/compose-file/#networks
An example:
version: "3"
services:
db:
image: postgres:9.6
container_name: db
ports:
- "5433:5432"
environment:
- POSTGRES_PASSWORD=123
- POSTGRES_USER=postgres
- POSTGRES_DB=test
networks:
- new
backend:
build:
context: ./backend
dockerfile: Dockerfile
container_name: backend
ports:
- "8085:8085"
depends_on:
- db
networks:
- new
frontend:
container_name: frontend
build:
context: ./frontend
dockerfile: Dockerfile
expose:
- "80"
ports:
- "80:80"
networks:
- new
depends_on:
- backend
networks:
new:
Related
My system contains 3 containers:
mongodb
api backend, built with NestJS
web application, built with Nuxt.js
The mongo and the backend seem to be working, because I can access the Swagger UI at localhost:3000/api/.
The Nuxt.js web app is failing, and I'm getting a 500 NuxtServerError.
Dockerfile (for the web app):
FROM node:12.13-alpine
ENV APP_ROOT /src
RUN mkdir ${APP_ROOT}
WORKDIR ${APP_ROOT}
ADD . ${APP_ROOT}
RUN npm install
RUN npm run build
ENV HOST 0.0.0.0
EXPOSE 4000
docker-compose.yml:
version: "3"
services:
# backend nestjs app
api:
image: nestjs-api-server
container_name: my-api
depends_on:
- db
restart: unless-stopped
environment:
- NODE_ENV=production
ports:
- 3000:3001
networks:
- mynet
links:
- db
# mongodb
db:
image: mongo
container_name: db_mongo
restart: unless-stopped
volumes:
- ~/data/:/data/db
ports:
- 27017:27017
networks:
- mynet
# front web app, nuxt.js
web:
image: nuxtjs-web-app
container_name: my-web
depends_on:
- api
restart: always
ports:
- 4000:4000
environment:
- BASE_URL=http://localhost:3000/api
command:
"npm run start"
networks:
- mynet
networks:
mynet:
driver: bridge
Looks like the Nuxt.js app cannot connect to the API. In the log I see:
ERROR connect ECONNREFUSED 127.0.0.1:3000
But why? The Swagger UI (coming from the same API) works fine at http://localhost:3000/api/#/.
Any idea?
environment:
  - BASE_URL=http://localhost:3000/api
localhost in a container means inside that particular container, i.e. it will try to resolve port 3000 inside the my-web container itself.
Basically, from the front end you cannot do container-to-container communication this way. You may be able to communicate via a public hostname or IP, or make use of the extra_hosts concept in docker-compose to resolve localhost.
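Besides extra_hosts, another common approach (a hedged sketch, not part of this answer) is to give the Nuxt server a different base URL from the browser via the @nuxtjs/axios module, assuming that module is in use and that the api container listens on 3001 as the 3000:3001 port mapping suggests:
// nuxt.config.js — hedged sketch: the Nuxt server talks to the api service
// over the compose network, while the browser uses the published host port.
export default {
  modules: ['@nuxtjs/axios'],
  axios: {
    baseURL: 'http://api:3001/api',              // server-side: compose DNS name
    browserBaseURL: 'http://localhost:3000/api', // browser: host-mapped port
  },
};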
Got it. The problem was in nuxtServerInit. This is a very special action in the Vuex store that runs on the server. I called $axios from it, and I guess you can't do that.
Once I commented that method out, it's working fine.
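For context, the failing pattern looked roughly like this (a hedged reconstruction, not the poster's actual code; setItems and /items are hypothetical names):
// store/index.js — hedged reconstruction of the failing pattern.
// nuxtServerInit runs on the server, i.e. inside the my-web container,
// so a BASE_URL of http://localhost:3000/api points back at that container.
export const actions = {
  async nuxtServerInit({ commit }, { $axios }) {
    const items = await $axios.$get('/items'); // hypothetical endpoint
    commit('setItems', items);
  },
};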
So I have a docker-compose file with 3 services: backend, react frontend and mongo.
backend Dockerfile:
FROM ubuntu:latest
WORKDIR /backend-server
COPY ./static/ ./static
COPY ./config.yml ./config.yml
COPY ./builds/backend-server_linux ./backend-server
EXPOSE 8080
CMD ["./backend-server"]
frontend Dockerfile:
FROM nginx:stable
WORKDIR /usr/share/nginx/html
COPY ./build .
COPY ./.env .env
EXPOSE 80
CMD ["sh", "-c", "nginx -g \"daemon off;\""]
So nothing unusual, I guess.
docker-compose.yml:
version: "3"
services:
mongo-db:
image: mongo:4.2.0-bionic
container_name: mongo-db
volumes:
- mongo-data:/data
network_mode: bridge
backend:
image: backend-linux:latest
container_name: backend
depends_on:
- mongo-db
environment:
- DATABASE_URL=mongodb://mongo-db:27017
..etc
network_mode: bridge
# networks:
# - mynetwork
expose:
- "8080"
ports:
- 8080:8080
links:
- mongo-db:mongo-db
restart: always
frontend:
image: frontend-linux:latest
container_name: frontend
depends_on:
- backend
network_mode: bridge
links:
- backend:backend
ports:
- 80:80
restart: always
volumes:
mongo-data:
driver: local
This is working. My problem is that by adding ports: - 8080:8080 to the backend part, that server becomes available to the host machine. Theoretically the network should work without these lines, as I read in the Docker docs and this question, but if I remove them, the API calls just stop working (though curl calls written in the docker-compose under the frontend service still work).
Your React frontend is making requests from the browser.
Hence the endpoint, in this case your API, needs to be accessible to the browser, not just to the container that is handing out static js, css and html files.
P.S. If you wanted to specifically not expose the API, you could get the web server to proxy requests to /api/ through to the API container; that happens at the network level and means you only need to expose the one server.
I do this by serving my Angular apps out of Nginx and then proxying traffic for /app1/api/* to one container, /app2/api/* to another container, etc.
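With such a proxy in place, the browser-side code only ever needs relative URLs; a minimal sketch (assuming nginx forwards /api/ to the API container; /api/health is a hypothetical endpoint):
// Hedged sketch: behind an nginx proxy for /api/, the frontend can use a
// relative URL, so only the web server's port needs to be published.
fetch('/api/health')
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => console.error('request failed:', err));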
I built a system with Docker to test locally.
I also use docker-compose to tie all the images into one infrastructure.
Below are the images I used:
nginx:latest
mongo:latest
ubuntu:latest
python:3.6.5
(Python for a Flask web application)
[docker-compose.yml]
version: '3.7'
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx/dockerfile
    container_name: nginx
    hostname: nginx-dev
    ports:
      - '80:80'
    networks:
      - backend
  mongodb:
    build:
      context: .
      dockerfile: docker/mongodb/dockerfile
    container_name: mongodb
    hostname: mongodb-dev
    ports:
      - '27017:27017'
    networks:
      - backend
  web_project:
    build:
      context: .
      dockerfile: docker/web/dockerfile
    container_name: web_project
    hostname: web_project_dev
    ports:
      - '5000:5000'
    networks:
      - backend
    tty: true
    depends_on:
      - mongodb
    links:
      - mongodb
  redis:
    image: redis:latest
    container_name: redis
    hostname: redis_dev
networks:
  backend:
    driver: 'bridge'
[mongo's dockerfile]
FROM mongo:latest
EXPOSE 27017
[python's dockerfile]
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
CMD python manage.py run
When I run my Python Flask web app locally, it works fine because MongoDB is local too.
But when I run it with docker-compose up, it can't access MongoDB.
Maybe every Docker image is separated; I think I have to tie the images together so each can access the others. But I'm new to Docker, so I'm confused by it.
Is there any solution here?
Thanks.
Make sure you reference your Mongo in your Flask app with the hostname mongodb-dev instead of localhost
[SOLVED]
I modified 'host': 'mongodb-dev:27017' to 'host': 'mongodb:27017',
and it works perfectly.
I think that happens because of links: - mongodb.
I'm using NestJS for my backend and TypeORM as the ORM.
I tried to define my database and my application in a docker-compose file.
If I run my database as a container and my application from my local machine, it works well. My program connects and creates the tables, etc.
But if I try to connect to the database from within my container, or start the whole thing with docker-compose up, it fails.
I always get an ECONNREFUSED error.
Where is my mistake?
docker-compose.yml
version: '3.1'
volumes:
  dbdata:
services:
  db:
    image: postgres:10
    volumes:
      - ./dbData/:/var/lib/postgresql/data
    restart: always
    environment:
      - POSTGRES_PASSWORD=${TYPEORM_PASSWORD}
      - POSTGRES_USER=${TYPEORM_USERNAME}
      - POSTGRES_DB=${TYPEORM_DATABASE}
    ports:
      - ${TYPEORM_PORT}:5432
  backend:
    build: .
    ports:
      - "3001:3000"
    command: npm run start
    volumes:
      - .:/src
Dockerfile
FROM node:10.5
WORKDIR /home
# Bundle app source
COPY . /home
# Install app dependencies
#RUN npm install -g nodemon
# If you are building your code for production
# RUN npm install --only=production
RUN npm i -g @nestjs/cli
RUN npm install
EXPOSE 3000
.env
# .env
HOST=localhost
PORT=3000
NODE_ENV=development
LOG_LEVEL=debug
TYPEORM_CONNECTION=postgres
TYPEORM_HOST=localhost
TYPEORM_USERNAME=postgres
TYPEORM_PASSWORD=postgres
TYPEORM_DATABASE=mariokart
TYPEORM_PORT=5432
TYPEORM_SYNCHRONIZE=true
TYPEORM_DROP_SCHEMA=true
TYPEORM_LOGGING=all
TYPEORM_ENTITIES=src/database/entity/*.ts
TYPEORM_MIGRATIONS=src/database/migrations/**/*.ts
TYPEORM_SUBSCRIBERS=src/database/subscribers/**/*.ts
I tried to use links but it doesn't work in the container.
Take a look at your /etc/hosts inside the backend container. You will see
192.0.18.1 dir_db_1
or something like that. The IP will be different, and dir will represent the directory you're in. Therefore, you must change TYPEORM_HOST=localhost to TYPEORM_HOST=dir_db_1.
That said, I suggest you set static names for your containers:
services:
  db:
    container_name: project_db
    ...
  backend:
    container_name: project_backend
In this case you can always be sure that your container will have a static name; you can set TYPEORM_HOST=project_db and never worry about the name again.
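With a static name in place, a hedged sketch of what the connection setup might look like using TypeORM's createConnection (reusing the variable names from the question's .env):
// Hedged sketch: point TypeORM at the static container name, which Docker's
// DNS resolves on the shared network, instead of localhost.
const { createConnection } = require('typeorm');

createConnection({
  type: 'postgres',
  host: process.env.TYPEORM_HOST || 'project_db',
  port: Number(process.env.TYPEORM_PORT) || 5432,
  username: process.env.TYPEORM_USERNAME,
  password: process.env.TYPEORM_PASSWORD,
  database: process.env.TYPEORM_DATABASE,
})
  .then(() => console.log('connected'))
  .catch((err) => console.error('connection failed:', err));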
You can create a network and share it between the two services.
Create a network for the db and backend services:
networks:
  common-net: {}
and add the network to these two services. Your .yml file would look like this after the edit:
version: '3.1'
volumes:
  dbdata:
services:
  db:
    image: postgres:10
    volumes:
      - ./dbData/:/var/lib/postgresql/data
    restart: always
    environment:
      - POSTGRES_PASSWORD=${TYPEORM_PASSWORD}
      - POSTGRES_USER=${TYPEORM_USERNAME}
      - POSTGRES_DB=${TYPEORM_DATABASE}
    ports:
      - ${TYPEORM_PORT}:5432
    networks:
      - common-net
  backend:
    build: .
    ports:
      - "3001:3000"
    command: npm run start
    volumes:
      - .:/src
    networks:
      - common-net
networks:
  common-net: {}
Note 1: After this change, there is no need to expose the Postgres port externally unless you have a reason for it. You can remove that section.
Note 2: TYPEORM_HOST should be set to db. Docker will resolve the IP address of the db service by itself.
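To confirm the backend container can actually reach the db service by name, a quick check with the pg driver can help (a hedged sketch; check-db.js is a hypothetical file, run inside the container, e.g. with docker-compose exec backend node check-db.js):
// check-db.js — hedged sketch: verify that the db service name resolves and
// accepts connections over the shared common-net network.
const { Client } = require('pg');

const client = new Client({
  host: 'db', // compose service name, resolved by Docker's DNS
  port: 5432,
  user: process.env.TYPEORM_USERNAME,
  password: process.env.TYPEORM_PASSWORD,
  database: process.env.TYPEORM_DATABASE,
});

client
  .connect()
  .then(() => client.query('SELECT 1'))
  .then(() => console.log('db reachable'))
  .catch((err) => console.error('db unreachable:', err))
  .finally(() => client.end());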
I'm currently attempting to use Docker to make our local dev experience involving two services easier, but I'm struggling to use host and container ports in the right way. Here's the situation:
One repo containing a Rails API, running on 127.0.0.1:3000 (let's call this backend)
One repo containing an isomorphic React/Redux frontend app, running on 127.0.0.1:8080 (let's call this frontend)
Both have their own Dockerfile and docker-compose.yml files as they are in separate repos, and both start with docker-compose up fine.
Currently not using Docker at all for CI or deployment, planning to in the future.
The issue I'm having is that in local development the frontend app is looking for the API backend on 127.0.0.1:3000 from within the frontend container, which isn't there - it's only available to the host and the backend container actually running the Rails app.
Is it possible to forward the backend container's port 3000 to the frontend container? Or at the very least the host's port 3000, as I can see the Rails app on localhost on my computer. I've tried 127.0.0.1:3000:3000 within the frontend docker-compose, but I can't do that while the Rails app is running, as the port is already in use and fails to connect. I'm thinking maybe I've misunderstood the point or am missing something obvious?
Files:
frontend Dockerfile
FROM node:8.7.0
RUN npm install --global --silent webpack yarn
RUN mkdir /app
WORKDIR /app
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
RUN yarn install
COPY . /app
frontend docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: yarn start:dev
    volumes:
      - .:/app
    ports:
      - '8080:8080'
      - '127.0.0.1:3000:3000' # rails backend exposed to localhost within container
backend Dockerfile
FROM ruby:2.4.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
RUN bundle install
COPY . /app
backend docker-compose.yml
version: '3'
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    image: postgres:9.6
    volumes:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres
You have to unite the containers in one network. Do it in your docker-compose.yml files.
Check these docs to learn about networks in Docker.
frontend docker-compose.yml
version: '3'
services:
  gui:
    build: .
    command: yarn start:dev
    volumes:
      - .:/app
    ports:
      - '8080:8080'
      - '127.0.0.1:3000:3000'
    networks:
      - webnet
networks:
  webnet:
backend docker-compose.yml
version: '3'
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    image: postgres:9.6
    volumes:
      - postgres-data:/var/lib/postgresql/data
  back:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres
    networks:
      - webnet
networks:
  webnet:
Docker has its own DNS resolution, so after you do this you will be able to connect to your backend at http://back:3000.
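A minimal sketch of a server-side request using that DNS name (axios and the /status endpoint are illustrative assumptions, not from the answer):
// Hedged sketch: reach the Rails service through Docker's DNS name 'back'.
const axios = require('axios');

const api = axios.create({ baseURL: 'http://back:3000' });

api
  .get('/status') // hypothetical endpoint
  .then((res) => console.log(res.data))
  .catch((err) => console.error('request failed:', err));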
Managed to solve this by using external links in the frontend app to join the default network of the backend app, like so:
version: '3'
services:
  web:
    build: .
    command: yarn start:dev
    environment:
      - API_HOST=http://backend_web_1:3000
    external_links:
      - backend_default
    networks:
      - default
      - backend_default
    ports:
      - '8080:8080'
    volumes:
      - .:/app
networks:
  backend_default: # share with backend app
    external: true