docker-compose container isn't hosting files properly when building server first

I need to host 2 different web apps from Docker containers. My first attempt at this worked, but I didn't like it. I have a lamp-base folder with the Dockerfile that gets the server working, and a docker-compose.yml file that puts everything together. In my first version I gave each web app build: ./lamp-base/ so each built its own web server, but that feels like the wrong approach, as it leaves me with 3 images (one for each web app and one for the mysql server). I feel like this should be 2 images and 3 containers: 1 image for the mysql container and 1 image for the web server, with both web apps using the web server image. I was able to build it and make it work, kind of.
The problem is, when I go to the URL I defined and added to my local /etc/hosts file, I get the default "welcome to Apache" page instead of my web apps. I honestly can't figure out why it isn't working. The healthchecks pass, so the files seem to be hosted properly, but I can't access the web pages. This might be a little long, but I'll post the Dockerfile and the docker-compose.yml file, as well as the default.conf that gets added to the web server. I have asked everyone I know and no one can figure out why I can't reach the webpages, especially since the healthchecks pass. Thank you for bearing with me on this; here are the files...
Dockerfile...
FROM centos:7
EXPOSE 80/tcp
RUN yum install epel-release yum-utils -y && yum install https://rpms.remirepo.net/enterprise/remi-release-7.rpm -y
RUN yum-config-manager --enable remi-php74 && yum install php php-common php-opcache php-mcrypt php-cli php-gd php-curl php-mysql php-pdo php-pdo_mysql php-pecl-mcrypt php-sodium php-mbstring php-runtime php-json php-devel php-xml php-soap php-mysqlnd php-intl php-pecl-xlswriter httpd mod_ssl mod_php -y && yum clean all -y && rm /etc/httpd/conf.d/ssl.conf
ADD httpd.conf /etc/httpd/conf/httpd.conf
ADD conf.d/default.conf /etc/httpd/conf.d/default.conf
CMD /sbin/httpd -D FOREGROUND
The default.conf file that has the vhosts for the web apps...
<VirtualHost my.server1.com:80>
    ServerName my.server1.com
    CustomLog /var/log/httpd/access_log combinedio
    ErrorLog /var/log/httpd/error_log
    LogLevel notice

    RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
    RewriteRule .* - [F,L]

    # Set application web root and directory settings
    DocumentRoot "/var/www/server1/html"
    <Directory "/var/www/server1/html">
        Options -Indexes
        AllowOverride none
        php_value include_path ".:/var/www/server1/class"
        php_value arg_separator.output ";"
        php_value arg_separator.input ";"
    </Directory>
</VirtualHost>

<VirtualHost my.server2.com:80>
    ServerName my.server2.com
    CustomLog /var/log/httpd/access_log combinedio
    ErrorLog /var/log/httpd/error_log
    LogLevel notice

    RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
    RewriteRule .* - [F,L]

    # Set application web root and directory settings
    DocumentRoot "/var/www/server2/html"
    <Directory "/var/www/server2/html">
        Options -Indexes
        AllowOverride none
        php_value include_path ".:/var/www/server2/class"
        php_value arg_separator.output ";"
        php_value arg_separator.input ";"
    </Directory>
</VirtualHost>
I renamed the servers because I don't think I can post the actual server names, sorry.
The docker-compose.yml file...
version: "3.7"
networks:
lamp-net:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/24
services:
lamp-web:
build: ./lamp-base/
image: lamp-web:dev
networks:
lamp-net:
ipv4_address: 172.20.0.6
ports:
- "80:80"
lamp-mysql:
image: mariadb:10.4
container_name: lamp-mysql
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_ALLOW_EMPTY_PASSWORD: "true"
volumes:
- ./sql-files:/docker-entrypoint-initdb.d
networks:
lamp-net:
ipv4_address: 172.20.0.2
ports:
- "3307:3306"
healthcheck:
test: "/usr/bin/mysql --user=root --password=password --execute \"SHOW DATABASES;\""
interval: 3s
timeout: 2s
retries: 10
server-one:
image: lamp-web:dev
container_name: server-one
hostname: my.server1.com
depends_on:
lamp-mysql:
condition: service_healthy
volumes:
- ./server1:/var/www/server1
- ./Server1Config.php:/var/www/server1/class/Config.php
networks:
lamp-net:
ipv4_address: 172.20.0.3
healthcheck:
test: curl -kf http://server-one/home/landing/ || exit 1
interval: 3s
timeout: 2s
retries: 10
server-two:
image: lamp-web:dev
container_name: server-two
hostname: my.server2.com
volumes:
- ./server2:/var/www/server2
- ./Server2Config.php:/var/www/server2/class/Config.php
depends_on:
lamp-mysql:
condition: service_healthy
cr-web:
condition: service_healthy
networks:
lamp-net:
ipv4_address: 172.20.0.4
healthcheck:
test: curl -kf http://server-one/api/mp/inventory || exit 1
interval: 3s
timeout: 3s
retries: 10
It all builds correctly: I end up with 2 images (mysql and server) and 3 containers (mysql, server-one, and server-two), and all the healthchecks pass, but if I go to http://my.server1.com in my browser I only get the default Apache welcome page, not my actual app.
I have also tried removing the web-server build from the docker-compose file and just running docker build --no-cache . -t lamp-web:dev in the lamp-base directory, then running docker-compose up -d. Again, it all builds properly, I get the right containers, and the healthchecks pass, but going to the URL in any browser just yields the default "welcome to Apache" page. I can't figure out why the web containers aren't working correctly.
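In other words, the manual workflow is roughly this (a sketch of the commands described above):
cd lamp-base
docker build --no-cache . -t lamp-web:dev
cd ..
docker-compose up -d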
On a side note, if I remove the web-server service entirely and change image: lamp-web:dev to build: ./lamp-base/ on both web containers, it works correctly, but I end up with a bunch of images. Am I just going about this the wrong way? Should both web containers have to build their own web-server image? Any help is greatly appreciated. Thank you, and sorry for the long post.

Related

Varnish with apache2 and docker compose

I want to use Varnish in front of openmaptiles and my SSL apache2 server, so I changed the docker-compose.yml like this:
version: "3"
volumes:
pgdata:
networks:
postgres:
driver: bridge
services:
postgres:
image: "${POSTGIS_IMAGE:-openmaptiles/postgis}:${TOOLS_VERSION}"
# Use "command: postgres -c jit=off" for PostgreSQL 11+ because of slow large MVT query processing
# Use "shm_size: 512m" if you want to prevent a possible 'No space left on device' during 'make generate-tiles-pg'
volumes:
- pgdata:/var/lib/postgresql/data
networks:
- postgres
ports:
- "${PGPORT:-5432}:${PGPORT:-5432}"
env_file: .env
environment:
# postgress container uses old variable names
POSTGRES_DB: ${PGDATABASE:-openmaptiles}
POSTGRES_USER: ${PGUSER:-openmaptiles}
POSTGRES_PASSWORD: ${PGPASSWORD:-openmaptiles}
PGPORT: ${PGPORT:-5432}
import-data:
image: "openmaptiles/import-data:${TOOLS_VERSION}"
env_file: .env
networks:
- postgres
openmaptiles-tools: &openmaptiles-tools
image: "openmaptiles/openmaptiles-tools:${TOOLS_VERSION}"
env_file: .env
environment:
# Must match the version of this file (first line)
# download-osm will use it when generating a composer file
MAKE_DC_VERSION: "3"
# Allow DIFF_MODE, MIN_ZOOM, and MAX_ZOOM to be overwritten from shell
DIFF_MODE: ${DIFF_MODE}
MIN_ZOOM: ${MIN_ZOOM}
MAX_ZOOM: ${MAX_ZOOM}
#Provide BBOX from *.bbox file if exists, else from .env
BBOX: ${BBOX}
# Imposm configuration file describes how to load updates when enabled
IMPOSM_CONFIG_FILE: ${IMPOSM_CONFIG_FILE}
# Control import-sql processes
MAX_PARALLEL_PSQL: ${MAX_PARALLEL_PSQL}
PGDATABASE: ${PGDATABASE:-openmaptiles}
PGUSER: ${PGUSER:-openmaptiles}
PGPASSWORD: ${PGPASSWORD:-openmaptiles}
PGPORT: ${PGPORT:-5432}
MBTILES_FILE: ${MBTILES_FILE}
networks:
- postgres
volumes:
- .:/tileset
- ./data:/import
- ./data:/export
- ./build/sql:/sql
- ./build:/mapping
- ./cache:/cache
- ./style:/style
update-osm:
<<: *openmaptiles-tools
command: import-update
generate-changed-vectortiles:
image: "openmaptiles/generate-vectortiles:${TOOLS_VERSION}"
command: ./export-list.sh
volumes:
- ./data:/export
- ./build/openmaptiles.tm2source:/tm2source
networks:
- postgres
env_file: .env
environment:
MBTILES_NAME: ${MBTILES_FILE}
# Control tilelive-copy threads
COPY_CONCURRENCY: ${COPY_CONCURRENCY}
PGDATABASE: ${PGDATABASE:-openmaptiles}
PGUSER: ${PGUSER:-openmaptiles}
PGPASSWORD: ${PGPASSWORD:-openmaptiles}
PGPORT: ${PGPORT:-5432}
generate-vectortiles:
image: "openmaptiles/generate-vectortiles:${TOOLS_VERSION}"
volumes:
- ./data:/export
- ./build/openmaptiles.tm2source:/tm2source
networks:
- postgres
env_file: .env
environment:
MBTILES_NAME: ${MBTILES_FILE}
BBOX: ${BBOX}
MIN_ZOOM: ${MIN_ZOOM}
MAX_ZOOM: ${MAX_ZOOM}
# Control tilelive-copy threads
COPY_CONCURRENCY: ${COPY_CONCURRENCY}
#
PGDATABASE: ${PGDATABASE:-openmaptiles}
PGUSER: ${PGUSER:-openmaptiles}
PGPASSWORD: ${PGPASSWORD:-openmaptiles}
PGPORT: ${PGPORT:-5432}
postserve:
image: "openmaptiles/openmaptiles-tools:${TOOLS_VERSION}"
command: "postserve ${TILESET_FILE} --verbose --serve=${OMT_HOST:-http://localhost}:${PPORT:-8090}"
env_file: .env
environment:
TILESET_FILE: ${TILESET_FILE}
networks:
- postgres
#ports:
# - "${PPORT:-8090}:${PPORT:-8090}"
volumes:
- .:/tileset
varnish:
image: eeacms/varnish
ports:
- "6081:6081"
depends_on:
- postserve
networks:
- postgres
environment:
BACKENDS: "postserve"
BACKENDS_PORT: "8090"
BACKENDS_PROBE_INTERVAL: "60s"
BACKENDS_PROBE_TIMEOUT: "10s"
BACKENDS_PROBE_URL: "/data/openmaptiles/0/0/0.pbf"
#DNS_ENABLED: "true"
maputnik_editor:
image: "maputnik/editor"
ports:
- "8088:8888"
tileserver-gl:
image: "maptiler/tileserver-gl:latest"
command:
- --port
- "${TPORT:-8080}"
- --config
- "/style/config.json"
ports:
- "${TPORT:-8080}:${TPORT:-8080}"
depends_on:
- varnish
volumes:
- ./data:/data
- ./style:/style
- ./build:/build
And I changed my Apache config to use the Varnish port in the ProxyPass and ProxyPassReverse directives:
<VirtualHost *:80>
    ServerName tiles.example.com
    Protocols h2 h2c http/1.1
    ErrorDocument 404 /404.html

    # disable proxy for the /font-family sub-directory
    # must be placed on top of the other ProxyPass directive
    ProxyPass /font-family !
    Alias "/font-family" "/var/www/font-family"

    # HTTP proxy
    ProxyPass / http://localhost:6081/
    ProxyPassReverse / http://localhost:6081/
    ProxyPreserveHost On

    ErrorLog ${APACHE_LOG_DIR}/tileserver-gl.error.log
    CustomLog ${APACHE_LOG_DIR}/tileserver-gl.access.log combined

    RewriteEngine on
    RewriteCond %{SERVER_NAME} =tiles.example.com
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

<IfModule mod_ssl.c>
    SSLStaplingCache shmcb:/var/run/apache2/stapling_cache(128000)
    <VirtualHost *:443>
        ServerName tiles.example.com
        Protocols h2 h2c http/1.1
        ErrorDocument 404 /404.html

        # disable proxy for the /font-family sub-directory
        # must be placed on top of the other ProxyPass directive
        ProxyPass /font-family !
        Alias "/font-family" "/var/www/font-family"

        # HTTP proxy
        ProxyPass / http://localhost:6081/
        ProxyPassReverse / http://localhost:6081/
        ProxyPreserveHost On

        ErrorLog ${APACHE_LOG_DIR}/tileserver-gl.error.log
        CustomLog ${APACHE_LOG_DIR}/tileserver-gl.access.log combined

        SSLCertificateFile /etc/letsencrypt/live/tiles.example.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/tiles.example.com/privkey.pem
        Include /etc/letsencrypt/options-ssl-apache.conf
        Header always set Strict-Transport-Security "max-age=31536000"
        SSLUseStapling on
        Header always set Content-Security-Policy upgrade-insecure-requests
        RequestHeader set X-Forwarded-Host "tiles.example.com"
        RequestHeader set X-Forwarded-Proto "https"
    </VirtualHost>
</IfModule>
Then I reran docker-compose up -d.
But when I access the tiles I get a 503 error:
503 Backend fetch failed
Any idea where the error in the configuration is?
Thanks
Based on https://github.com/openmaptiles/openmaptiles-tools and https://hub.docker.com/r/openmaptiles/openmaptiles-tools it doesn't seem like your postserve container that runs the openmaptiles/openmaptiles-tools image actually exposes any network ports for Varnish to connect to.
And while you are specifying a --serve parameter to this container, the Dockerfile for this image doesn't have an EXPOSE definition that opens up any ports.
Maybe you should mount the generated tiles into a web server using volumes and then connect Varnish to that web server.
Please use the official Varnish Docker image that is available on the Docker hub. This image is supported by Varnish Software and receives regular updates. See https://www.varnish-software.com/developers/tutorials/running-varnish-docker for a tutorial on how to use it.
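For illustration, here is a minimal sketch of that suggestion as compose services (the tiles-web name, the default.vcl path, and the assumption that the generated tiles in ./data can be served as static files are all hypothetical, not taken from your setup):
  tiles-web:
    image: nginx:alpine
    volumes:
      - ./data:/usr/share/nginx/html:ro   # serve the generated tiles statically
    networks:
      - postgres
  varnish:
    image: varnish:stable                 # official image, as recommended above
    volumes:
      - ./default.vcl:/etc/varnish/default.vcl:ro
    ports:
      - "6081:80"                         # the official image listens on port 80 by default
    depends_on:
      - tiles-web
    networks:
      - postgres
The mounted default.vcl would declare its backend as tiles-web on port 80. Note that the BACKENDS_* environment variables are specific to eeacms/varnish and do not apply to the official image.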

DDEV not serving Symfony Mercure-Hub over HTTPS

I am running a Symfony Project via drud/ddev (nginx) for local development.
I did this many times before and had no issues whatsoever.
In my recent project I have to use the Mercure-Hub to push Notifications from the server to the client.
I required the symfony/mercure-bundle via composer and copied the generated docker-compose content into a docker-compose.mercure.yaml (.ddev/docker-compose.mercure.yaml)
After starting the container the Mercure-Hub works seamlessly but is only reachable over http.
My problem: I only have beginner knowledge in the field of nginx and docker-compose.
I am thankful for every bit of advice! :)
Steps to reproduce
Set up a basic Symfony Project and run it via DDEV.
Require symfony/mercure-bundle.
Copy docker-compose.yaml and docker-compose.override.yaml content to a docker-compose.mercure.yaml in the .ddev folder (change the port).
Configure Mercure-Hub URL in .env.
Start the container and visit [DDEV-URL]:[MERCURE-PORT] / subscribe a Mercure topic.
My problem
Mercure-Hub only reachable via http.
HTTPS call gets an 'ERR_SSL_PROTOCOL_ERROR'
My wish
Access the Mercure-Hub URL / subscribe to Mercure topics via HTTPS.
What I've tried
Reading the Mercure-Hub Docs and trying to adapt the Docker SSL / HTTPS instructions to my local drud/ddev environment
Adding another server to the nginx configuration as in the Mercure-Cookbook "Using NGINX as an HTTP/2 Reverse Proxy in Front of the Hub"
Googling a bunch
Hours of trial and error
Files
ddev config.yaml
name: project-name
type: php
docroot: public
php_version: "8.1"
webserver_type: nginx-fpm
router_http_port: "80"
router_https_port: "443"
xdebug_enabled: true
additional_hostnames: []
additional_fqdns: []
database:
  type: mariadb
  version: "10.4"
nfs_mount_enabled: true
mutagen_enabled: false
use_dns_when_possible: true
composer_version: "2"
web_environment: []
nodejs_version: "16"
docker-compose.mercure.yaml
version: '3'
services:
###> symfony/mercure-bundle ###
  mercure:
    image: dunglas/mercure
    restart: unless-stopped
    environment:
      SERVER_NAME: ':3000'
      MERCURE_PUBLISHER_JWT_KEY: '!ChangeThisMercureHubJWTSecretKey!'
      MERCURE_SUBSCRIBER_JWT_KEY: '!ChangeThisMercureHubJWTSecretKey!'
      # Set the URL of your Symfony project (without trailing slash!) as value of the cors_origins directive
      MERCURE_EXTRA_DIRECTIVES: |
        cors_origins http://127.0.0.1:8000
    # Comment the following line to disable the development mode
    command: /usr/bin/caddy run -config /etc/caddy/Caddyfile.dev
    volumes:
      - mercure_data:/data
      - mercure_config:/config
    ports:
      - "3000:3000"
###< symfony/mercure-bundle ###
volumes:
###> symfony/mercure-bundle ###
  mercure_data:
  mercure_config:
###< symfony/mercure-bundle ###
.env
###> symfony/mercure-bundle ###
# See https://symfony.com/doc/current/mercure.html#configuration
# The URL of the Mercure hub, used by the app to publish updates (can be a local URL)
MERCURE_URL=http://ddev-pnp-master-mercure-1:3000/.well-known/mercure
# The public URL of the Mercure hub, used by the browser to connect
MERCURE_PUBLIC_URL=http://ddev-pnp-master-mercure-1:3000/.well-known/mercure
# The secret used to sign the JWTs
MERCURE_JWT_SECRET="!ChangeThisMercureHubJWTSecretKey!"
###< symfony/mercure-bundle ###
Edit 1
I changed my docker-compose thanks to the advice from rfay.
(only showing the relevant part below)
[...]
services:
  mercure:
    image: dunglas/mercure
    restart: unless-stopped
    expose:
      - "3000"
    environment:
      - SERVER_NAME=":3000"
      - HTTP_EXPOSE=9998:3000
      - HTTPS_EXPOSE=9999:3000
[...]
replaced ports with expose
added HTTP_EXPOSE & HTTPS_EXPOSE
Problem with this
Now my problem is that the container doesn't expose any ports (see docker desktop screenshot below).
[Docker Desktop screenshot showing no exposed ports]
Solution
With the help of rfay I found the solution (which consisted of reading the ddev documentation properly lol).
What I did
replacing ports with expose
adding VIRTUAL_HOST, HTTP_EXPOSE and HTTPS_EXPOSE under environment
adding container_name & labels (see code below)
My final docker-compose.mercure.yaml
version: '3'
services:
  mercure:
    image: dunglas/mercure
    restart: unless-stopped
    container_name: "ddev-${DDEV_SITENAME}-mercure-hub"
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: ${DDEV_APPROOT}
    expose:
      - "3000"
    environment:
      VIRTUAL_HOST: $DDEV_HOSTNAME
      SERVER_NAME: ":3000"
      HTTP_EXPOSE: "9998:3000"
      HTTPS_EXPOSE: "9999:3000"
      MERCURE_PUBLISHER_JWT_KEY: '!ChangeThisMercureHubJWTSecretKey!'
      MERCURE_SUBSCRIBER_JWT_KEY: '!ChangeThisMercureHubJWTSecretKey!'
      MERCURE_EXTRA_DIRECTIVES: |
        cors_origins https://project-name.ddev.site
    # Comment the following line to disable the development mode
    command: /usr/bin/caddy run -config /etc/caddy/Caddyfile.dev
    volumes:
      - mercure_data:/data
      - mercure_config:/config
volumes:
  mercure_data:
  mercure_config:
With this docker-compose in place, my mercure container is available via HTTPS on port 9999.
For further information see the ddev documentation: https://ddev.readthedocs.io/en/latest/users/extend/custom-compose-files/#docker-composeyaml-examples
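One assumption worth spelling out: with the hub now published over HTTPS on port 9999, the browser-facing URL in .env presumably has to change to match, along these lines (the hostnames below are guessed from the files above, not confirmed):
# internal URL uses the container_name from the compose file
MERCURE_URL=http://ddev-project-name-mercure-hub:3000/.well-known/mercure
MERCURE_PUBLIC_URL=https://project-name.ddev.site:9999/.well-known/mercure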
The solution in https://stackoverflow.com/a/74735903/21252828 does not work until you add a second dash before the config option in the command:
...
command: /usr/bin/caddy run --config /etc/caddy/Caddyfile.dev
...
Otherwise the container fails (and restarts endlessly).
Maybe you can edit your post, Christian Neugebauer?

Use different hostname / url than localhost for docker-compose web application

Summary: 2 separate applications, both using docker-compose, how can I have http://app-1.test and http://app-2.test available at the same time?
Description:
I feel like I've missed something super-simple. I have 2 php-fpm (via nginx) applications, both run by similar docker-compose setups, somewhat like:
# docker-compose.yaml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: docker/Dockerfile
    container_name: app_1
    tty: true
    depends_on:
      - db
      - dbtest
    working_dir: /var/www
    volumes:
      - ./:/var/www
  webserver:
    image: nginx:stable
    container_name: app_1_webserver
    restart: always
    ports:
      - "80:80"
    depends_on:
      - app
    volumes:
      - ./:/var/www
      - ./docker/app.conf:/etc/nginx/conf.d/default.conf
    links:
      - app
  # ...
In my /etc/hosts, I can add something like
127.0.0.1 app-1.test
Now I can reach the app in the browser by going to app-1.test.
The second app has a similar setup, but of course it won't come up, because port 80 is already taken. I can of course change the port, but then the URL would be something like app-2.test:81 instead of app-2.test. What can I do so I can run a second application under a different local hostname? Or is using a different port the best way to go?
You can't. What you can do is add a "router" in front of your applications (a third container) which does routing (proxy passing) based on the hostname.
Apache or Nginx are often used for these kinds of things.
e.g. with apache server
https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html
<VirtualHost *:80>
    ServerName app-1.test
    ProxyRequests Off
    ProxyPreserveHost On
    ProxyPass / http://image1:80/
    ProxyPassReverse / http://image1:80/
    ErrorLog /var/log/apache2/error.log
    LogLevel info
    CustomLog /var/log/apache2/access.log combined
</VirtualHost>

<VirtualHost *:80>
    ServerName app-2.test
    ProxyRequests Off
    ProxyPreserveHost On
    ProxyPass / http://image2:80/
    ProxyPassReverse / http://image2:80/
    ErrorLog /var/log/apache2/error.log
    LogLevel info
    CustomLog /var/log/apache2/access.log combined
</VirtualHost>
Now you can add both names on the same IP in your /etc/hosts file, and the server can route internally based on the provided hostname (ServerName).
The http://image1:80/ references (and the like) should be changed to the Docker-internal DNS names of your services, as specified in the docker-compose.yml; see the sketch below.
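For completeness, a compose-level sketch of the router idea (the image/service names and paths are placeholders; all three services share the default compose network, where service names resolve through Docker's internal DNS):
version: '3'
services:
  router:
    image: httpd:2.4
    volumes:
      # the vhost config above; it must be Include'd from httpd.conf (path hypothetical)
      - ./vhosts.conf:/usr/local/apache2/conf/extra/httpd-vhosts.conf
    ports:
      - "80:80"            # the only published port
  image1:
    build: ./app-1         # reachable internally as http://image1:80
  image2:
    build: ./app-2         # reachable internally as http://image2:80
With 127.0.0.1 app-1.test and 127.0.0.1 app-2.test in /etc/hosts, both hostnames hit the router on port 80 and get proxied to the right app.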

Docker-compose with nginx reverse, a website and a restful api?

I hope you can help me with my problem! Here is the info.
Situation
I currently have two working containers that I need to run on the same port 80. There is the website, which is currently accessible by simply going to the host URL of the server, and the restful api. However, everything has to go through port 80, and the login makes requests to the restful api, which would therefore also have to listen on port 80 to handle those requests. So, from what I can see, I need a reverse proxy such as nginx to map the internal/external ports appropriately.
Problem
I really don't understand the tutorials out there when it comes to dockerizing an nginx reverse proxy along with two other containers... Currently, the restful api uses a simple Dockerfile, and the application uses a docker-compose along with a mysql database. I am very unsure how I should be doing this. Should I have all of these inside one folder with the nginx reverse proxy, with a top-level docker-compose handling all the subfolders, each of which has its own Dockerfile/docker-compose? Most tutorials I see talk about hosting two different websites, but not many seem to cover a restful api alongside the website that uses it. From what I understand, I'd most likely be using this docker hub image.
Docker images current structure
- RestApi
  - Dockerfile
  - main.go
- Website
  - Dockerfile
  - Docker-compose
  - Ruby app
Should I create a parent folder with a reverse-proxy folder alongside the other two, and put all 3 in the parent folder? Something like:
- Parentfolder
  - Reverse-proxy
  - RestApi
  - Website
Then there are websites that talk about modifying the sites-enabled folder, some that don't, some that talk about virtual hosts, others about launching docker with the network flag... Where would I put my nginx.conf? I'd think in the reverse-proxy folder and mount it, but I'm unsure. Honestly, I'm a bit lost! What follows are my current Dockerfiles/docker-composes.
RestApi Dockerfile
FROM golang:1.14.4-alpine3.12
WORKDIR /go/src/go-restapi/
COPY ./testpackage testpackage/
COPY ./RestAPI .
RUN apk update
RUN apk add git
RUN go get -u github.com/dgrijalva/jwt-go
RUN go get -u github.com/go-sql-driver/mysql
RUN go get -u github.com/gorilla/context
RUN go get -u github.com/gorilla/mux
RUN go build -o main .
EXPOSE 12345
CMD ["./main"]
Website Dockerfile
FROM ruby:2.7.1
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y bash nodejs tzdata netcat libpq-dev nano tzdata apt-transport-https yarn
RUN gem install bundler
RUN mkdir /myapp
WORKDIR /myapp
COPY package.json yarn.lock Gemfile* ./
RUN yarn install --check-files
RUN bundle install
COPY . .
# EXPOSE 3000
# Running the startup script before starting the server
ENTRYPOINT ["sh", "./config/docker/startup.sh"]
CMD ["rails", "server", "-b", "-p 3000" "0.0.0.0"]
Website Docker-compose
version: '3'
services:
db:
image: mysql:latest
restart: always
command: --default-authentication-plugin=mysql_native_password
# volumes:
# - ./tmp/db:/var/lib/postgresql/data
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: test_dev
MYSQL_USERNAME: root
MYSQL_PASSWORD: root
web:
build: .
# command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
# volumes:
# - .:/myapp
ports:
- "80:3000"
depends_on:
- db
links:
- db
environment:
DB_USER: root
DB_NAME: test_dev
DB_PASSWORD: root
DB_HOST: db
DB_PORT: 3306
RAILS_ENV: development
Should I expect to just "docker-compose up" just one image which will handle the two other ones with the reverse proxy? If anyone could point me to what they'd think would be a good solution to my problem, I'd really appreciate it! Any tutorial seen as helpful would be greatly appreciated too! I've watched most on google and they all seem to be skipping some steps, but I'm very new to this and it makes it kinda hard...
Thank you very much!
NEW docker-compose.yml
version: '3.5'
services:
  frontend:
    image: 'test/webtest:first-test'
    depends_on:
      - db
    environment:
      DB_USER: root
      DB_NAME: test_dev
      DB_PASSWORD: root
      DB_HOST: db
      DB_PORT: 3306
      RAILS_ENV: development
    ports:
      - "3000:3000"
    networks:
      my-network-name:
        aliases:
          - frontend-name
  backend:
    depends_on:
      - db
    image: 'test/apitest:first-test'
    ports:
      - "12345:12345"
    networks:
      my-network-name:
        aliases:
          - backend-name
  nginx-proxy:
    depends_on:
      - frontend
      - backend
    image: nginx:alpine
    volumes:
      - $PWD/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      my-network-name:
        aliases:
          - proxy-name
    ports:
      - 80:80
      - 443:443
  db:
    image: mysql:latest
    restart: always
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: test_dev
      MYSQL_USERNAME: root
      MYSQL_PASSWORD: root
    ports:
      - '3306:3306'
    networks:
      my-network-name:
        aliases:
          - mysql-name
networks:
  my-network-name:
I wrote a tutorial specifically about reverse proxies with nginx and docker.
Create An Nginx Reverse Proxy With Docker
You'd basically have 3 containers, two of them without published ports, communicating through a docker network that each container is attached to.
Bash Method:
docker network create my-network;
# docker run -it -p 80:80 --network=my-network ...
or
Docker Compose Method:
File: docker-compose.yml
version: '3'
services:
  backend:
    networks:
      - my-network
    ...
  frontend:
    networks:
      - my-network
  proxy:
    networks:
      - my-network
networks:
  my-network:
A - Nginx Container Proxy - MAPPED 80/80
B - REST API - Internally Serving 80 - given the name backend
C - Website - Internally Serving 80 - given the name frontend
In container A you would just have an nginx conf file that points to the different services via specific routes:
File: /etc/nginx/conf.d/default.conf
server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        proxy_pass http://frontend;
    }

    location /api {
        proxy_pass http://backend:5000/;
    }

    # ...
}
This makes it so that when you visit:
http://yourwebsite.com/api = backend
http://yourwebsite.com = frontend
Let me know if you have questions, I've built this a few times, and even added SSL to the proxy container.
This is great if you're going to test one service for local development, but for production (depending on your hosting provider) it would be a different story and they may manage it themselves with their own proxy and load balancer.
===================== UPDATE 1: =====================
This simulates a backend, a frontend, a proxy, and a mysql container in docker compose.
There are four files you'll need in the main project directory to get this to work.
Files:
- backend.html
- frontend.html
- default.conf
- docker-compose.yml
File: ./backend.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Backend API</title>
</head>
<body>
    <h1>Backend API</h1>
</body>
</html>
File: ./frontend.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Frontend / Website</title>
</head>
<body>
    <h1>Frontend / Website</h1>
</body>
</html>
To configure the proxy nginx to point to the right containers on the network:
File: ./default.conf
# This is a default site configuration which will simply return 404, preventing
# chance access to any other virtualhost.
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Frontend
    location / {
        proxy_pass http://frontend-name; # same name as network alias
    }

    # Backend
    location /api {
        proxy_pass http://backend-name/; # <--- note this has an extra /
    }

    # You may need this to prevent return 404 recursion.
    location = /404.html {
        internal;
    }
}
File: ./docker-compose.yml
version: '3.5'
services:
  frontend:
    image: nginx:alpine
    volumes:
      - $PWD/frontend.html:/usr/share/nginx/html/index.html
    networks:
      my-network-name:
        aliases:
          - frontend-name
  backend:
    depends_on:
      - mysql-database
    image: nginx:alpine
    volumes:
      - $PWD/backend.html:/usr/share/nginx/html/index.html
    networks:
      my-network-name:
        aliases:
          - backend-name
  nginx-proxy:
    depends_on:
      - frontend
      - backend
    image: nginx:alpine
    volumes:
      - $PWD/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      my-network-name:
        aliases:
          - proxy-name
    ports:
      - 1234:80
  mysql-database:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_DATABASE: 'root'
      MYSQL_ROOT_PASSWORD: 'secret'
    ports:
      - '3306:3306'
    networks:
      my-network-name:
        aliases:
          - mysql-name
networks:
  my-network-name:
Create those files and then run:
docker-compose up -d;
Then visit:
Frontend - http://localhost:1234
Backend - http://localhost:1234/api
You'll see both routes now communicate with their respective services.
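Or, equivalently, from a shell:
curl http://localhost:1234/       # answered by the frontend container
curl http://localhost:1234/api    # proxied to the backend container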
You can also see that the frontend and backend don't have published ports. That is because the nginx inside them listens on the default port 80, and we gave them aliases within our network (my-network-name) to refer to them.
Additionally, I added a mysql container that does have published ports, but you could leave them unpublished and just have the backend communicate with the host mysql-name on port 3306, as sketched below.
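For example, the unexposed variant would simply drop the ports mapping (a sketch; everything else stays as above):
  mysql-database:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_DATABASE: 'root'
      MYSQL_ROOT_PASSWORD: 'secret'
    # no "ports:" section; the backend reaches it at mysql-name:3306 on the shared network
    networks:
      my-network-name:
        aliases:
          - mysql-name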
If you want to walk through the process a bit more to understand how things work before jumping into docker-compose, I would really recommend checking out my tutorial in the link above.
Hope this helps.
===================== UPDATE 2: =====================
Here's a diagram:
[architecture diagram image]

Docker with Nginx: host not found in upstream

I'm trying to follow this guide to setting up a reverse proxy for a docker container (serving a static file), using another container with an instance of nginx as a reverse proxy.
I expect to see my page served on /, but I am blocked in the build with the error message:
container_nginx_1 | 2020/05/10 16:54:12 [emerg] 1#1: host not found in upstream "container1:8001" in /etc/nginx/conf.d/sites-enabled/virtual.conf:2
container_nginx_1 | nginx: [emerg] host not found in upstream "container1:8001" in /etc/nginx/conf.d/sites-enabled/virtual.conf:2
nginx_docker_test_container_nginx_1 exited with code 1
I have tried many variations on the following virtual.conf file; this is the current one, based on the example given and various other pages:
upstream cont {
    server container1:8001;
}

server {
    listen 80;

    location / {
        proxy_pass http://cont/;
    }
}
If you are willing to look at a 3rd party site, I've made a minimal repo here, otherwise the most relevant files are below.
My docker-compose file looks like this:
version: '3'
services:
  container1:
    hostname: container1
    restart: always
    image: danjellz/http-server
    ports:
      - "8001:8001"
    volumes:
      - ./proj1:/public
    command: "http-server . -p 8001"
    depends_on:
      - container_nginx
    networks:
      - app-network
  container_nginx:
    build:
      context: .
      dockerfile: docker/Dockerfile_nginx
    ports:
      - 8080:8080
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
and the Dockerfile
# docker/Dockerfile_nginx
FROM nginx:latest
# add nginx config files to sites-available & sites-enabled
RUN mkdir /etc/nginx/conf.d/sites-available
RUN mkdir /etc/nginx/conf.d/sites-enabled
ADD projnginx/conf.d/sites-available/virtual.conf /etc/nginx/conf.d/sites-available/virtual.conf
RUN cp /etc/nginx/conf.d/sites-available/virtual.conf /etc/nginx/conf.d/sites-enabled/virtual.conf
# Replace the standard nginx conf
RUN sed -i 's|include /etc/nginx/conf.d/\*.conf;|include /etc/nginx/conf.d/sites-enabled/*.conf;|' /etc/nginx/nginx.conf
WORKDIR /
I'm running this using docker-compose up.
Similar: react - docker host not found in upstream
The problem is that if a hostname in an upstream block cannot be resolved, nginx will not start. Here you have defined the service container1 to depend on container_nginx, but the nginx container never comes up because the container1 hostname cannot be resolved (container1 is not started yet). Shouldn't it be the reverse? The nginx container should depend on the app container.
Additionally, in your nginx port binding you have mapped 8080:8080, while in the nginx conf you are listening on 80.
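Putting both fixes together, the compose file would presumably look something like this (a sketch keeping everything else from your original):
version: '3'
services:
  container1:
    hostname: container1
    restart: always
    image: danjellz/http-server
    ports:
      - "8001:8001"
    volumes:
      - ./proj1:/public
    command: "http-server . -p 8001"
    networks:
      - app-network
  container_nginx:
    build:
      context: .
      dockerfile: docker/Dockerfile_nginx
    depends_on:
      - container1        # reversed: nginx now waits for the app container
    ports:
      - "8080:80"         # host 8080 -> nginx listening on 80
    networks:
      - app-network
networks:
  app-network:
    driver: bridge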
