Docker-compose with an nginx reverse proxy, a website and a RESTful API? - docker

I hope you can help me with my problem! Here is the info.
Situation
I currently have two working containers that I need to run on the same port 80. There is the website, which is currently accessible by simply going to the host URL of the server, and the RESTful API. However, everything has to go through port 80: the login makes requests to the RESTful API, which would have to listen on port 80 too to handle them. Therefore, from what I see, I'd need a reverse proxy such as nginx to map the internal/external ports appropriately.
Problem
I really don't understand the tutorials out there when it comes to dockerizing an nginx reverse proxy along with two other containers... Currently, the RESTful API uses a simple Dockerfile, and the application uses a docker-compose along with a MySQL database. I am very unsure how I should be doing this. Should I have all of these inside one folder with the nginx reverse proxy, with a single docker-compose handling all the subfolders, each of which has its own Dockerfile/docker-compose? Most tutorials I see talk about hosting two different websites, but not many seem to talk about a RESTful API along with a website for it. From what I understand, I'd most definitely be using this docker hub image.
Current structure of the Docker images
- RestApi
  - Dockerfile
  - main.go
- Website
  - Dockerfile
  - Docker-compose
  - Ruby app
Should I create a parent folder with a reverse-proxy folder and put all 3 of these in the parent folder? Something like:
- Parentfolder
  - Reverse-proxy
  - RestApi
  - Website
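Fleshed out with where I'd guess each file would go (the names here are just my guesses):

- Parentfolder
  - docker-compose.yml (one compose that runs all three?)
  - Reverse-proxy
    - nginx.conf (mounted into the proxy container?)
  - RestApi
    - Dockerfile
  - Website
    - Dockerfile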
Then there are websites that talk about modifying the sites-enabled folder, some that don't, some that talk about virtual hosts, others about launching the container with the network flag... Where would I put my nginx.conf? I'd think in the reverse-proxy folder, mounted into the container as guessed above, but I'm unsure. Honestly, I'm a bit lost! What follows are my current Dockerfile/docker-composes.
RestApi Dockerfile
FROM golang:1.14.4-alpine3.12
WORKDIR /go/src/go-restapi/
COPY ./testpackage testpackage/
COPY ./RestAPI .
RUN apk update
RUN apk add git
RUN go get -u github.com/dgrijalva/jwt-go
RUN go get -u github.com/go-sql-driver/mysql
RUN go get -u github.com/gorilla/context
RUN go get -u github.com/gorilla/mux
RUN go build -o main .
EXPOSE 12345
CMD ["./main"]
Website Dockerfile
FROM ruby:2.7.1
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y bash nodejs tzdata netcat libpq-dev nano apt-transport-https yarn
RUN gem install bundler
RUN mkdir /myapp
WORKDIR /myapp
COPY package.json yarn.lock Gemfile* ./
RUN yarn install --check-files
RUN bundle install
COPY . .
# EXPOSE 3000
# Running the startup script before starting the server
ENTRYPOINT ["sh", "./config/docker/startup.sh"]
CMD ["rails", "server", "-b", "-p 3000" "0.0.0.0"]
Website Docker-compose
version: '3'
services:
  db:
    image: mysql:latest
    restart: always
    command: --default-authentication-plugin=mysql_native_password
    # volumes:
    #   - ./tmp/db:/var/lib/postgresql/data
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: test_dev
      MYSQL_USERNAME: root
      MYSQL_PASSWORD: root
  web:
    build: .
    # command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    # volumes:
    #   - .:/myapp
    ports:
      - "80:3000"
    depends_on:
      - db
    links:
      - db
    environment:
      DB_USER: root
      DB_NAME: test_dev
      DB_PASSWORD: root
      DB_HOST: db
      DB_PORT: 3306
      RAILS_ENV: development
Should I expect to just "docker-compose up" one compose file which handles the two other containers plus the reverse proxy? If anyone could point me to what they think would be a good solution to my problem, I'd really appreciate it! Any tutorial seen as helpful would be greatly appreciated too! I've watched most on Google and they all seem to skip some steps, but I'm very new to this and that makes it kind of hard...
Thank you very much!
NEW docker-compose.yml
version: '3.5'
services:
  frontend:
    image: 'test/webtest:first-test'
    depends_on:
      - db
    environment:
      DB_USER: root
      DB_NAME: test_dev
      DB_PASSWORD: root
      DB_HOST: db
      DB_PORT: 3306
      RAILS_ENV: development
    ports:
      - "3000:3000"
    networks:
      my-network-name:
        aliases:
          - frontend-name
  backend:
    depends_on:
      - db
    image: 'test/apitest:first-test'
    ports:
      - "12345:12345"
    networks:
      my-network-name:
        aliases:
          - backend-name
  nginx-proxy:
    depends_on:
      - frontend
      - backend
    image: nginx:alpine
    volumes:
      - $PWD/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      my-network-name:
        aliases:
          - proxy-name
    ports:
      - 80:80
      - 443:443
  db:
    image: mysql:latest
    restart: always
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: test_dev
      MYSQL_USERNAME: root
      MYSQL_PASSWORD: root
    ports:
      - '3306:3306'
    networks:
      my-network-name:
        aliases:
          - mysql-name
networks:
  my-network-name:
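And here is the default.conf I'd mount next to it, if I've understood correctly (the Rails site on 3000, the Go API on 12345, matching the aliases above):

server {
    listen 80;

    # Website (Rails), reachable via the network alias frontend-name
    location / {
        proxy_pass http://frontend-name:3000;
    }

    # RESTful API (Go), reachable via the network alias backend-name
    # (trailing slash strips the /api prefix)
    location /api/ {
        proxy_pass http://backend-name:12345/;
    }
}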

I wrote a tutorial specifically about reverse proxies with nginx and docker.
Create An Nginx Reverse Proxy With Docker
You'd basically have 3 containers, two of them without exposed ports, communicating through a Docker network that each is attached to.
Bash Method:
docker network create my-network;
# docker run -it -p 80:80 --network=my-network ...
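Spelled out a bit more, a sketch of the bash method, assuming the image names from the compose above and that only the proxy publishes a port:

docker network create my-network
# containers with names/aliases on the network are resolvable by Docker's DNS
docker run -d --network=my-network --name frontend test/webtest:first-test
docker run -d --network=my-network --name backend test/apitest:first-test
docker run -d --network=my-network --name proxy -p 80:80 \
  -v "$PWD/default.conf:/etc/nginx/conf.d/default.conf:ro" nginx:alpine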
or
Docker Compose Method:
File: docker-compose.yml
version: '3'
services:
  backend:
    networks:
      - my-network
    ...
  frontend:
    networks:
      - my-network
  proxy:
    networks:
      - my-network
networks:
  my-network:
A - Nginx Container Proxy - MAPPED 80/80
B - REST API - Internally Serving 80 - given the name backend
C - Website - Internally Serving 80 - given the name frontend
In container A you would just have an nginx conf file that points to the different services via specific routes:
File: /etc/nginx/conf.d/default.conf
server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        proxy_pass http://frontend;
    }

    location /api {
        proxy_pass http://backend:5000/;
    }

    # ...
}
This makes it so that when you visit:
http://yourwebsite.com/api = backend
http://yourwebsite.com = frontend
Let me know if you have questions, I've built this a few times, and even added SSL to the proxy container.
This is great if you're testing a service for local development, but for production (depending on your hosting provider) it can be a different story: they may manage it themselves with their own proxy and load balancer.
===================== UPDATE 1: =====================
This simulates a backend, a frontend, a proxy and a MySQL container in Docker Compose.
There are four files you'll need in the main project directory to get this to work.
Files:
- backend.html
- frontend.html
- default.conf
- docker-compose.yml
File: ./backend.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Backend API</title>
</head>
<body>
    <h1>Backend API</h1>
</body>
</html>
File: ./frontend.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Frontend / Website</title>
</head>
<body>
    <h1>Frontend / Website</h1>
</body>
</html>
Next, configure the proxy's nginx to point to the right containers on the network.
File: ./default.conf
# This is a default site configuration which will simply return 404, preventing
# chance access to any other virtualhost.
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Frontend
    location / {
        proxy_pass http://frontend-name; # same name as network alias
    }

    # Backend
    location /api {
        proxy_pass http://backend-name/; # <--- note this has an extra /
    }

    # You may need this to prevent return 404 recursion.
    location = /404.html {
        internal;
    }
}
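A note on that extra /: when proxy_pass carries a URI part (here the bare /), nginx replaces the matched location prefix with it, so the /api prefix is stripped before the request reaches the backend; without a URI part, the prefix is passed through unchanged. Roughly (a sketch; pick one, and pair the trailing slash with a /api/ location to avoid a doubled slash upstream):

# alternative 1: prefix is stripped
location /api/ {
    proxy_pass http://backend-name/;    # GET /api/users -> GET /users upstream
}

# alternative 2: prefix is kept
location /api/ {
    proxy_pass http://backend-name;     # GET /api/users -> GET /api/users upstream
}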
File: ./docker-compose.yml
version: '3.5'
services:
  frontend:
    image: nginx:alpine
    volumes:
      - $PWD/frontend.html:/usr/share/nginx/html/index.html
    networks:
      my-network-name:
        aliases:
          - frontend-name
  backend:
    depends_on:
      - mysql-database
    image: nginx:alpine
    volumes:
      - $PWD/backend.html:/usr/share/nginx/html/index.html
    networks:
      my-network-name:
        aliases:
          - backend-name
  nginx-proxy:
    depends_on:
      - frontend
      - backend
    image: nginx:alpine
    volumes:
      - $PWD/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      my-network-name:
        aliases:
          - proxy-name
    ports:
      - 1234:80
  mysql-database:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_DATABASE: 'root'
      MYSQL_ROOT_PASSWORD: 'secret'
    ports:
      - '3306:3306'
    networks:
      my-network-name:
        aliases:
          - mysql-name
networks:
  my-network-name:
Create those files and then run:
docker-compose up -d;
Then visit:
Frontend - http://localhost:1234
Backend - http://localhost:1234/api
You'll see both routes now communicate with their respective services.
You can also see that the frontend and backend don't have exposed ports.
That is because nginx inside them serves on the default port 80, and we gave them aliases within our network (my-network-name) to refer to them.
Additionally, I added a mysql container that does have exposed ports, but you could leave them unexposed and just have the backend talk to the host mysql-name on port 3306.
If you want to walk through the process a bit more to understand how things work before jumping into docker-compose, I would really recommend checking out my tutorial in the link above.
Hope this helps.

Related

Can't access web container from outside (Windows Docker-Desktop)

I'm using Docker Desktop on Windows and I'm trying to get 3 containers running inside Docker Desktop.
After some research and testing, I got the 3 containers running [WEB - API - DB]; everything seems to compile/run without issue in the logs, but I can't access my web container from outside.
Here are my Dockerfiles and docker-compose; what did I miss or get wrong?
[WEB] dockerfile
FROM node:16.17.0-bullseye-slim
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
#EXPOSE 4200 (the issue is the same with or without this line)
CMD ["npm", "run", "start"]
[API] dockerfile
FROM openjdk:17.0.1-jdk-slim
WORKDIR /app
COPY ./target/test-0.0.1-SNAPSHOT.jar /app
#EXPOSE 2022 (the issue is the same with or without this line)
CMD ["java", "-jar", "test-0.0.1-SNAPSHOT.jar"]
Docker-compose file
version: "3.8"
services:
### FRONTEND ###
web:
container_name: wallet-web
restart: always
build: ./frontend
ports:
- "80:4200"
depends_on:
- "api"
networks:
customnetwork:
ipv4_address: 172.20.0.12
#networks:
# - "api"
# - "web"
### BACKEND ###
api:
container_name: wallet-api
restart: always
build: ./backend
ports:
- "2022:2022"
depends_on:
- "db"
networks:
customnetwork:
ipv4_address: 172.20.0.11
#networks:
# - "api"
# - "web"
### DATABASE ###
db:
container_name: wallet-db
restart: always
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
networks:
customnetwork:
ipv4_address: 172.20.0.10
#networks:
# - "api"
# - "web"
networks:
customnetwork:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
gateway: 172.20.0.1
# api:
# web:
I found several issues similar to mine, but the solutions didn't work for me.
If I understand correctly, you are trying to access it on port 80. To do that, you have to map your container port 4200 to host port 80 in the yaml file: 80:4200 instead of 4200:4200.
https://docs.docker.com/config/containers/container-networking/
Have you looked in the browser's development console to see whether any error shows up? Your docker-compose seems not to have any issue.
However, let's try to debug it:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6245eaffd67e nginx "/docker-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:4200->80/tcp test-api-1
Copy the container id, then execute:
docker exec -it 6245eaffd67e /bin/bash
Now you are inside the container. Instead of the id you can also use the container's name.
curl http://localhost:80
Note: in my case here I just created a container from an nginx image.
In your case, use the port where your app is running. Check it in your code if you aren't sure; a lot of JavaScript frameworks default to port 3000.
If you get an error like curl: command not found, install it in your image:
FROM node:16.17.0-bullseye-slim
# to install dependencies you need root permissions, so we switch to root
USER root
RUN apt update -y && apt install curl -y
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
#EXPOSE 4200 (the issue is the same with or without this line)
# we don't want to run the image as root, so switch back to the node user
# (this user is defined in the node:16.17.0-bullseye-slim image)
USER node
CMD ["npm", "run", "start"]
Now the curl should work (if it didn't already).
The same should work from your host.
Here is an important thing:
localhost always refers to the physical computer, or to the container itself, from wherever you are referring. Every container and your PC each have their own localhost, and they are not the same.
In the docker-compose you just map ports host:container, so your PC (the host, where Docker is running) can reach the Docker network on the host port you defined, which forwards to the container's port.
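As a concrete sketch of that mapping (hypothetical service and ports):

services:
  web:
    image: nginx
    ports:
      - "4200:80"    # host port 4200 -> container port 80

# from the host:            curl http://localhost:4200
# from another container:   curl http://web:80   (the container port, via the Docker network)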
If you still can't access it from your host, try changing the host ports 2022, 4200 etc. It's possible that something conflicts on your Windows machine.
It sometimes happens that Docker networks create conflicts.
Execute a docker-compose down, so the network is deleted and recreated.
Still not working?
Reset Docker Desktop to factory settings and check that you have the latest version (this is always better).
If none of this helps, let me know so we can debug further.
For the sake of clarity, I'll post here the docker-compose which I used to check. I just used nginx to test the ports, as I don't have your images.
version: "3.8"
services:
### FRONTEND ###
web:
restart: always
image: nginx
ports:
- "4200:80"
depends_on:
- "api"
networks:
- "web"
### BACKEND ###
api:
restart: always
image: nginx
ports:
- "2022:80"
depends_on:
- "db"
networks:
- "api"
- "web"
### DATABASE ###
db:
restart: always
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
networks:
- "api"
networks:
api:
web:
Update:
You can log what happens in the container like so:
docker logs <container-id-or-name>
If you are using Visual Studio Code, there is an excellent extension for Docker, also by Microsoft: just search for docker in the extensions. It has something like 20,000,000 downloads and can help you a lot with debugging containers etc. After installing it you'll see the Docker icon in the left toolbar.
If you can see the errors that occur directly in the logs, maybe you can post them (at least partially), so it would be possible to understand. Please also tell us something about your frontend app architecture (react-app, angular). There are some frameworks that need to be started on 0.0.0.0 instead of 127.0.0.1 or they don't work.

Docker Network Understanding

I'm trying to set up a 3-container architecture for a web app (front-end, back-end, database). I created two networks: one for the back (database + back-end), the other for the front (front + back). I am using Compose to start the services.
I can't access my front container from my host even though I published a port.
Am I missing something to make it work?
Here is my docker-compose.yml file.
services:
  api:
    image: ruby:3.1.2
    command: sh -c "rm -f /app/tmp/pids/server.pid && bundle install && rails s"
    working_dir: /app
    depends_on:
      - database
    networks:
      - back
      - front
    ports:
      - "3000:3000"
    volumes:
      - type: bind
        source: ../api
        target: /app
  web:
    image: node
    working_dir: /app
    command: sh -c "yarn install && yarn build && yarn dev"
    depends_on:
      - api
    networks:
      - front
      - host
    volumes:
      - type: bind
        source: ../frontend
        target: /app
    ports:
      - "8000:5173"
  database:
    image: keinos/sqlite3
    networks:
      - back
    expose:
      - "3306"
    volumes:
      - citrine-db:/db
networks:
  back:
    driver: bridge
  front:
    driver: bridge
  host:
volumes:
  citrine-db:
Based on this:
I get a connection refused when I try accessing ip_address:5173
It sounds like your application is only listening on the localhost address (127.0.0.1). You need it to listen on "all addresses" (0.0.0.0). This is why you're able to connect to localhost:5173 from inside the container, but connections from outside the container are failing.
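If the dev server here is Vite (the 5173 port suggests it), one way to do that from the compose file is to pass --host; a sketch, assuming the dev script runs Vite:

web:
  image: node
  working_dir: /app
  # --host 0.0.0.0 makes the dev server listen on all addresses, not just 127.0.0.1
  command: sh -c "yarn install && yarn build && yarn dev --host 0.0.0.0"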

Why do I need to mount Docker volumes for both php and nginx? And how to copy for production?

I have a simple Laravel application with Nginx, PHP and MySQL each in its own container. What I don't understand is:
why do I need to mount volumes for both Nginx and PHP? Isn't PHP only a programming language, not a server?
Also, for production I need to COPY my src to /var/www/html, but do I need to do that for both Nginx and PHP, or only for Nginx?
Here is my docker-compose-dev.yml file:
version: '3'
networks:
  laravel:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginxcontainer
    ports:
      - "8088:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - mysql
    networks:
      - laravel
  mysql:
    image: mysql:5.7.22
    container_name: mysqlcontainer
    restart: unless-stopped
    tty: true
    ports:
      - "4306:3306"
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_DATABASE: homestead
      MYSQL_USER: homestead
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    networks:
      - laravel
  php:
    build:
      context: .
      dockerfile: php/Dockerfile-dev
    container_name: phpcontainer
    volumes:
      - ./src:/var/www/html
    ports:
      - "9000:9000"
    networks:
      - laravel
and here is my php/Dockerfile-dev file:
FROM php:7.2-fpm-alpine
RUN docker-php-ext-install pdo pdo_mysql
RUN chown -R www-data:www-data /var/www
RUN chmod 755 /var/www
The reason you need to provide your sources to both nginx and PHP is rather simple.
The webserver needs all of your assets (images, stylesheets, JavaScript etc.) so that it can serve them to the client.
PHP is interpreted server-side, hence it requires your PHP source files and any referenced files (configs etc.) in order to execute them. In this case you are using PHP-FPM, which is decoupled from the webserver and runs in a standalone fashion.
Many projects cleanly separate frontend and backend code and only provide the frontend sources to the webserver and the backend code to the PHP container.
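You can see why in the nginx side of such a setup. A typical default.conf for this kind of stack (a sketch, not your exact config) hands requests off to PHP-FPM by file path, so the PHP container has to find the same file at the same location:

server {
    listen 80;
    root /var/www/html/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # nginx only forwards the script *path*; phpcontainer must have
        # the file at that same path in order to execute it
        fastcgi_pass phpcontainer:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}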
Another quick tip regarding docker: It is generally a good idea to compact RUN statements in the Dockerfile to avoid excessive layers in the image. Your Dockerfile could be compacted to:
FROM php:7.2-fpm-alpine
RUN docker-php-ext-install pdo pdo_mysql \
    && chown -R www-data:www-data /var/www \
    && chmod 755 /var/www
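As for the production half of the question: a common approach is to bake the sources into both images with COPY instead of bind mounts, the same files at the same path in each; a sketch, assuming your ./src layout (both Dockerfiles are hypothetical):

# nginx/Dockerfile-prod
FROM nginx:stable-alpine
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY ./src /var/www/html

# php/Dockerfile-prod
FROM php:7.2-fpm-alpine
RUN docker-php-ext-install pdo pdo_mysql
COPY ./src /var/www/html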

Docker compose service communication

So I have a docker-compose file with 3 services: backend, react frontend and mongo.
backend Dockerfile:
FROM ubuntu:latest
WORKDIR /backend-server
COPY ./static/ ./static
COPY ./config.yml ./config.yml
COPY ./builds/backend-server_linux ./backend-server
EXPOSE 8080
CMD ["./backend-server"]
frontend Dockerfile:
FROM nginx:stable
WORKDIR /usr/share/nginx/html
COPY ./build .
COPY ./.env .env
EXPOSE 80
CMD ["sh", "-c", "nginx -g \"daemon off;\""]
So nothing unusual, I guess.
docker-compose.yml:
version: "3"
services:
mongo-db:
image: mongo:4.2.0-bionic
container_name: mongo-db
volumes:
- mongo-data:/data
network_mode: bridge
backend:
image: backend-linux:latest
container_name: backend
depends_on:
- mongo-db
environment:
- DATABASE_URL=mongodb://mongo-db:27017
..etc
network_mode: bridge
# networks:
# - mynetwork
expose:
- "8080"
ports:
- 8080:8080
links:
- mongo-db:mongo-db
restart: always
frontend:
image: frontend-linux:latest
container_name: frontend
depends_on:
- backend
network_mode: bridge
links:
- backend:backend
ports:
- 80:80
restart: always
volumes:
mongo-data:
driver: local
This is working. My problem is that by adding ports: - 8080:8080 to the backend part, that server becomes available to the host machine. Theoretically the network should work without these lines, as I read in the Docker docs and this question, but if I remove them, the API calls just stop working (though curl calls written in the docker-compose under the frontend service still work).
Your React frontend is making requests from the browser.
Hence the endpoint, in this case your API, needs to be accessible to the browser, not just to the container that is handing out static js, css and html files.
P.S. If you wanted to specifically not expose the API, you could have the web server proxy requests to /api/ through to the API container; that happens at the network level and means you only need to expose the one server.
I do this by serving my Angular apps out of Nginx and then proxying traffic for /app1/api/* to one container and /app2/api/* to another container, etc.
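In nginx terms that would look roughly like this; a sketch, assuming the API is reachable as backend on port 8080 as in the compose above:

# inside the frontend's nginx server block
location /api/ {
    # the trailing slash strips the /api prefix before it reaches the API;
    # drop it to pass the path through unchanged
    proxy_pass http://backend:8080/;
}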

Exposing localhost ports in several local services

I'm currently attempting to use Docker to make our local dev experience involving two services easier, but I'm struggling to use host and container ports in the right way. Here's the situation:
One repo containing a Rails API, running on 127.0.0.1:3000 (let's call this backend)
One repo containing an isomorphic React/Redux frontend app, running on 127.0.0.1:8080 (let's call this frontend)
Both have their own Dockerfile and docker-compose.yml files as they are in separate repos, and both start with docker-compose up fine.
Currently not using Docker at all for CI or deployment, planning to in the future.
The issue I'm having is that in local development the frontend app is looking for the API backend on 127.0.0.1:3000 from within the frontend container, where it isn't available; it's only reachable from the host and from the backend container actually running the Rails app.
Is it possible to forward the backend container's port 3000 to the frontend container? Or at the very least the host's port 3000, as I can see the Rails app on localhost on my computer. I've tried 127.0.0.1:3000:3000 within the frontend docker-compose, but I can't do that while running the Rails app, as the port is in use and it fails to connect. I'm thinking maybe I've misunderstood the point or am missing something obvious?
Files:
frontend Dockerfile
FROM node:8.7.0
RUN npm install --global --silent webpack yarn
RUN mkdir /app
WORKDIR /app
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
RUN yarn install
COPY . /app
frontend docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: yarn start:dev
    volumes:
      - .:/app
    ports:
      - '8080:8080'
      - '127.0.0.1:3000:3000' # rails backend exposed to localhost within container
backend Dockerfile
FROM ruby:2.4.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
RUN bundle install
COPY . /app
backend docker-compose.yml
version: '3'
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    image: postgres:9.6
    volumes:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres
You have to unite the containers in one network. Do it in your docker-compose.yml files.
Check these docs to learn about networks in Docker.
frontend docker-compose.yml
version: '3'
services:
  gui:
    build: .
    command: yarn start:dev
    volumes:
      - .:/app
    ports:
      - '8080:8080'
      - '127.0.0.1:3000:3000'
    networks:
      - webnet
networks:
  webnet:
backend docker-compose.yml
version: '3'
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    image: postgres:9.6
    volumes:
      - postgres-data:/var/lib/postgresql/data
  back:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres
    networks:
      - webnet
networks:
  webnet:
Docker has its own DNS resolution, so after you do this you will be able to connect to your backend by setting the address to: http://back:3000
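One caveat, since the two apps live in separate compose projects: each file would otherwise create its own, distinct webnet (prefixed with its project name). To actually share one network, create it once and mark it external in both files; a sketch:

# create the shared network once:
#   docker network create webnet
#
# then, in both docker-compose.yml files:
networks:
  webnet:
    external: true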
Managed to solve this using external links in the frontend app to link to the default network of the backend app like so:
version: '3'
services:
  web:
    build: .
    command: yarn start:dev
    environment:
      - API_HOST=http://backend_web_1:3000
    external_links:
      - backend_default
    networks:
      - default
      - backend_default
    ports:
      - '8080:8080'
    volumes:
      - .:/app
networks:
  backend_default: # share with backend app
    external: true
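Note the start-up order this implies: the backend project has to be up first so that backend_default and backend_web_1 exist before the frontend starts. Roughly, assuming sibling folders named backend and frontend:

cd backend && docker-compose up -d
cd ../frontend && docker-compose up -d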
