Docker compose with nginx keeps displaying welcome page

I'm new to Docker and trying to get the simplest possible docker-compose.yml to serve a hello world page, so that I can later build a full LEMP stack on top of it with the same config as my server. However, most tutorials are obsolete, and there are so many ways of using Docker that I can't find one using only Docker Compose v3 that is still current. The docs are also awfully confusing for a beginner. I've been trying to make this work for the past 5 hours, so I thought I'd ask on SO.
docker-compose.yml
version: '3'
services:
  web:
    image: bitnami/nginx:1.10.3-r0 # using this version as it's the same on my server
    volumes:
      - "./test.conf:/etc/nginx/sites-available/test.local"
      - "./test.conf:/etc/nginx/sites-enabled/test.local"
      - "./code:/var/www/html" # code contains only a basic index.html file
    ports:
      - "80:80"
test.conf
server {
    listen 80;
    listen [::]:80;
    server_name test.local;
    index index.html; # only a basic hello world index.html file
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/html;
}
Do I need a Dockerfile for this? Tutorials don't seem to mention one being needed.
NOTE:
I tried adding the volume
- "./default.conf:/etc/nginx/conf.d/default.conf"
but nothing changes and the welcome page still loads. With nginx:latest I instead get a very verbose error containing this phrase: "unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type".
UPDATE about docker-compose.yml:
Without the line "./code:/usr/share/nginx/html", the /usr/share/nginx/html folder contains the default index.html file (expected)
With the line "./code:/usr/share/nginx/html", the /usr/share/nginx/html folder is EMPTY!
With the line "./:/usr/share/nginx/html", the /usr/share/nginx/html folder has and empty "code" folder and a bunch of random test files I deleted a while ago.
Between tries, I run my reset script to make sure I start fresh:
docker rm $(docker ps -a -q)
docker rmi $(docker images -q) --force
docker volume rm $(docker volume ls -q)
Running docker inspect <container> returns this for the volume; I'm not sure it's normal that the type is "bind" (a bind mount) instead of a volume.
"Mounts": [
{
"Type": "bind",
"Source": "/e/DEV/sandbox/docker",
"Destination": "/usr/share/nginx/html",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],

It's easy to mount your own hello world page. I'll explain it using the official nginx:latest image, but you can do the same with the bitnami image if you want.
First, the very basics. Just run the nginx container (without docker-compose). I'll keep it detailed and simple; there are of course more advanced or faster ways to read the files inside the container, but those can be confusing for a beginner. So just run the container and name it my-nginx:
$ docker run --rm -d -p 80:80 --name my-nginx nginx
Go to localhost:80 and you'll see the default nginx page.
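If you prefer the terminal over a browser, the same check works with curl (assuming nothing else on your host is already using port 80):
curl localhost:80
It should print the same 'Welcome to nginx!' HTML shown a bit further below.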
Now you can exec into the container by using its name. exec will bring you 'inside the container' so you can check its files.
$ docker exec -it my-nginx bash
root@2888fdb672a1:/# cd /etc/nginx/
root@2888fdb672a1:/etc/nginx# ls
conf.d koi-utf mime.types nginx.conf uwsgi_params
fastcgi_params koi-win modules scgi_params win-utf
Now read the nginx.conf by using cat.
The most important line in this file is:
include /etc/nginx/conf.d/*.conf;
This means every conf file inside that directory is read and used.
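If you'd rather not open an interactive shell, the same check can be run from the host (assuming the container is still named my-nginx):
docker exec my-nginx grep include /etc/nginx/nginx.conf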
So go into /etc/nginx/conf.d/.
root@2888fdb672a1:~# cd /etc/nginx/conf.d/
root@2888fdb672a1:/etc/nginx/conf.d# ls
default.conf
The default.conf is the only file. Inside this file you see the configuration:
listen 80;
server_name localhost;
location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
}
The server_name is localhost, the port is 80, and the files that will be served live in the directory /usr/share/nginx/html/.
Now check that file in your container:
root@2888fdb672a1:/etc/nginx/conf.d# cat /usr/share/nginx/html/index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
...
That's the expected file: the 'Welcome to nginx!' page we see in the browser.
So how can we show our own index.html? By just mounting it in /usr/share/nginx/html.
Your docker-compose.yaml will look like this:
version: '3'
services:
  web:
    image: nginx:latest
    volumes:
      - ./code:/usr/share/nginx/html
    ports:
      - "80:80"
The code directory just contains an index.html with hello world.
Run docker-compose up -d --build and when you curl localhost:80 you will see your own index.html.
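For completeness, a minimal end-to-end sketch, assuming the compose file above sits next to the code directory (the hello world content is just a placeholder):
mkdir -p code
echo '<h1>Hello world</h1>' > code/index.html
docker-compose up -d
curl localhost:80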
If you really want to put your code in /var/www/html instead of /usr/share/nginx/html, you can do that.
Use your test.conf. Here you tell nginx that the files live in /var/www/html/:
server {
    listen 80;
    listen [::]:80;
    server_name test.local;
    index index.html; # only a basic hello world index.html file
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/html;
}
In the compose file you overwrite the default.conf with your own conf, where you tell nginx to look in /var/www/html.
Your compose can look like this:
version: '3'
services:
  web:
    image: nginx:latest
    volumes:
      - "./test.conf:/etc/nginx/conf.d/default.conf"
      - "./code:/var/www/html"
    ports:
      - "80:80"
Now you will see your own index.html, served from the location you specified. Long answer, but I hope it helps.
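If the welcome page still shows up after this, it's worth verifying from inside the container that both mounts actually took effect (assuming the service is named web as above):
docker-compose exec web cat /etc/nginx/conf.d/default.conf
docker-compose exec web ls /var/www/html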

After struggling a lot with this, I figured out three factors that made my deployment with docker-compose work.
I built a website in Node.js.
We expose the web part and nginx on port 80.
Dockerfile for nodejs : [ important line is -> EXPOSE 80 ]
FROM node:14-alpine
WORKDIR /
COPY package.json ./
EXPOSE 80
RUN npm install
COPY . .
CMD ["node", "bin/www"]
Nginx.conf for nginx : [ important line is -> listen 80; ]
events {
}
http {
    server {
        listen 80;
        location / {
            proxy_pass http://nodejs:3000;
        }
    }
}
We give the nginx config file the name and port of the web service.
docker-compose.yaml :
[ important line is -> image: nodejs ], which is the name of the web service's image,
AND
[ important line is -> ports: - "3000:3000" ], which is the port the web part runs on.
version: "3.7"
services:
nodejs:
build:
context: .
dockerfile: Dockerfile
image: nodejs
container_name: nodejs
restart: unless-stopped
networks:
- app-network
ports:
- "3000:3000"
webserver:
image: nginx
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- ./nginx/:/etc/nginx/conf.d
depends_on:
- nodejs
networks:
- app-network
networks:
app-network:
driver: bridge
Nginx.conf for nginx : [ important line is -> proxy_pass http://nodejs:3000; ]
events {
}
http {
    server {
        listen 80;
        location / {
            proxy_pass http://nodejs:3000;
        }
    }
}
We give the nginx image a volume with the path of our config file, to replace the default one.
Tree :
mainfolder
.
|_____nginx
| |______nginx.conf
|
|_____Dockerfile
|
|_____docker-compose.yaml
docker-compose.yaml :
[ important line is -> ./nginx/nginx.conf:/etc/nginx/nginx.conf ] to replace config file
version: "3.7"
services:
nodejs:
build:
context: .
dockerfile: Dockerfile
image: nodejs
container_name: nodejs
restart: unless-stopped
networks:
- app-network
ports:
- "3000:3000"
webserver:
image: nginx
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
depends_on:
- nodejs
networks:
- app-network
networks:
app-network:
driver: bridge
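As a quick sanity check that the three factors fit together (assuming the service names nodejs and webserver from the compose file above):
docker-compose up -d --build
docker-compose exec webserver nginx -t   # config should parse and resolve the nodejs upstream
curl -I localhost:80                     # nginx should answer and proxy to the node app on port 3000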

Related

Docker + nginx + slim

I'm a new Docker user. What I want to do is set up (on my Windows PC) the Docker platform with an nginx server and the Slim framework on it, so that I'm able to "host" a simple "hello world" page.
My question is: should I create a single container containing nginx and install the Slim framework inside that container?
Or should I create two different containers (one for nginx, one for Slim)? And if so, how would those two communicate?
Whatever the solution is, I would first like to understand the "architecture" of this setup, and after that how to do it.
Thanks in advance
You can use two containers, using docker-compose to connect slim and nginx, something like this:
docker-compose.yaml
version: "3.8"
services:
php:
container_name: slim
build:
context: ./docker/php
ports:
- '9000:9000'
volumes:
- .:/var/www/slim_app
nginx:
container_name: nginx
image: nginx:stable-alpine
ports:
- '80:80'
volumes:
- .:/var/www/slim_app
- ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
depends_on:
- php
Dockerfile in ./docker/php:
FROM php:7.4-fpm
RUN docker-php-ext-install ALL_YOUR EXTENSIONS
WORKDIR /var/www/slim_app
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
nginx config in ./docker/nginx/default.conf:
server {
    listen 80;
    index index.php;
    server_name localhost;
    root /var/www/slim_app/public;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass php:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        internal;
    }

    location ~ \.php$ {
        return 404;
    }
}
Just start the containers:
docker-compose up -d
and go to http://localhost/.
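To confirm the wiring before writing any PHP, a rough check (assuming the service names php and nginx from the compose file above):
docker-compose ps                    # both containers should be Up
docker-compose exec nginx nginx -t   # config should parse and resolve the php:9000 upstream
curl -I http://localhost/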

Docker with Nginx: host not found in upstream

I'm trying to follow this guide to set up a reverse proxy for a docker container (serving a static file), using another container with an nginx instance as the reverse proxy.
I expect to see my page served on /, but I am blocked at startup with the error message:
container_nginx_1 | 2020/05/10 16:54:12 [emerg] 1#1: host not found in upstream "container1:8001" in /etc/nginx/conf.d/sites-enabled/virtual.conf:2
container_nginx_1 | nginx: [emerg] host not found in upstream "container1:8001" in /etc/nginx/conf.d/sites-enabled/virtual.conf:2
nginx_docker_test_container_nginx_1 exited with code 1
I have tried many variations of the following virtual.conf file; this is the current one, based on the example given and various other pages:
upstream cont {
    server container1:8001;
}
server {
    listen 80;
    location / {
        proxy_pass http://cont/;
    }
}
If you are willing to look at a 3rd party site, I've made a minimal repo here, otherwise the most relevant files are below.
My docker-compose file looks like this:
version: '3'
services:
  container1:
    hostname: container1
    restart: always
    image: danjellz/http-server
    ports:
      - "8001:8001"
    volumes:
      - ./proj1:/public
    command: "http-server . -p 8001"
    depends_on:
      - container_nginx
    networks:
      - app-network
  container_nginx:
    build:
      context: .
      dockerfile: docker/Dockerfile_nginx
    ports:
      - 8080:8080
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
and the Dockerfile
# docker/Dockerfile_nginx
FROM nginx:latest
# add nginx config files to sites-available & sites-enabled
RUN mkdir /etc/nginx/conf.d/sites-available
RUN mkdir /etc/nginx/conf.d/sites-enabled
ADD projnginx/conf.d/sites-available/virtual.conf /etc/nginx/conf.d/sites-available/virtual.conf
RUN cp /etc/nginx/conf.d/sites-available/virtual.conf /etc/nginx/conf.d/sites-enabled/virtual.conf
# Replace the standard nginx conf
RUN sed -i 's|include /etc/nginx/conf.d/\*.conf;|include /etc/nginx/conf.d/sites-enabled/*.conf;|' /etc/nginx/nginx.conf
WORKDIR /
I'm running this using docker-compose up.
Similar: react - docker host not found in upstream
The problem is that if a hostname in an upstream block cannot be resolved, nginx will not start. Here you have defined the service container1 to depend on container_nginx, but the nginx container never comes up because the container1 hostname cannot be resolved (container1 is not started yet). Don't you think it should be the reverse? The nginx container should depend on the app container.
Additionally, in your nginx port binding you have mapped 8080:8080, while the nginx conf is listening on port 80.
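Once the depends_on is reversed (container_nginx depends on container1) and the ports are aligned (either map 80:80 or make nginx listen on 8080), a quick way to confirm both fixes, assuming the service names from the question and that getent is available in the image:
docker-compose up -d
docker-compose exec container_nginx getent hosts container1   # should resolve to the app container's IP
curl -I localhost:8080                                        # or whichever port you ended up exposing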

How do I configure nginx to just deliver whatever it finds at the configured url

I start an nginx reverse proxy with docker-compose.
The first docker compose file looks like this:
version: "3.5"
services:
rproxy:
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- 80:80
- 443:443
volumes:
- '/etc/letsencrypt:/etc/letsencrypt'
networks:
- main
networks:
main:
name: main_network
The dockerfile just makes sure the nginx server has the following configuration:
server {
    listen 443 ssl;
    server_name website.dev;
    ssl_certificate /etc/letsencrypt/live/www.website.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.website.dev/privkey.pem;
    location / {
        resolver 127.0.0.11;
        set $frontend http://website;
        proxy_pass $frontend;
    }
}
First I run the docker-compose file above. Then, when I try to access www.website.dev, I get a 502 error, as expected.
Then I run the other docker-compose file, defined below:
version: '3.5'
services:
  website:
    image: registry.website.dev/frontendcontainer:latest
    command: npm run deploy
    networks:
      main:
        aliases:
          - website
networks:
  main:
    external:
      name: main_network
This should start the website container on the same network as the nginx container.
"docker ps" shows that the docker container is running.
Going to website.dev gives a 502 error. This is unexpected; I expect nginx to now be able to connect to the now-running container.
I reset the nginx server by running the following on the first docker-compose file:
docker-compose up -d
Going to website.dev now displays the contents of the website container.
I make changes to the website container and upload the new image to the private registry.
I use the following commands on the second docker-compose file:
docker-compose down
The old website container is no longer in existence.
docker-compose pull
The new website container is pulled.
docker-compose up
The new website container is now online.
Going to website.dev now displays the contents of the old (confirmed to be non-existent) container instead of the new one. This is unexpected.
Resetting the nginx server causes it to deliver the correct website again.
My question is: how do I configure nginx to just deliver whatever it finds at the configured URL, without having to reset the nginx server?
dockerfile as requested:
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/*
COPY proxy.conf /etc/nginx/conf.d
Now we have all the parameters.
You are using angular-cli and compiling the code with ng build, which results in static files; you don't need to serve them through a proxy_pass. You only need to point the location at the folder containing index.html and everything will work on its own, without http-server.
NGinx:
server {
    listen 443 ssl default_server;
    server_name website.dev _;
    ssl_certificate /etc/letsencrypt/live/www.website.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.website.dev/privkey.pem;
    location / {
        root /path/to/dist/website; # where ng build put index.html with all .js and assets
        index index.htm index.html;
    }
}
docker-compose NGinx:
version: "3.5"
services:
rproxy:
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- 80:80
- 443:443
volumes:
- '/etc/letsencrypt:/etc/letsencrypt'
- '/host/path/shared:/path/to' # <-- Add this line. (host: /host/path/shared)
networks:
- main
networks:
main:
name: main_network
docker-compose website:
version: '3.5'
services:
  website:
    image: registry.website.dev/frontendcontainer:latest
    command: npm run deploy
    volumes:
      - '/host/path/shared:/path/to' # <-- Add this line. (host: /host/path/shared)
    networks:
      main:
        aliases:
          - website
networks:
  main:
    external:
      name: main_network
Now ng build --prod will create index.html and the assets in
/host/path/shared/dist/website (inside the containers: /path/to/dist/website).
NGinx will then have access to those files internally at
/path/to/dist/website, without using http-server. Angular is a
frontend client; it doesn't need a server process of its own in production.
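With that in place, redeploying the frontend is just a matter of writing the new build output into the shared folder; nginx picks it up from the volume without a restart. A sketch, assuming the Angular CLI is available where you run the build and the paths match the compose files above:
ng build --prod --output-path /host/path/shared/dist/website
curl -I https://website.dev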

Sharing volumes in networked docker containers with docker composer fails

I have two docker-compose.yml files.
The first one is a global one that I use to configure the nginx webserver; the other one holds the application code. Below are their configurations.
First one, with the nginx configuration:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: globaldocker
    container_name: app
    restart: unless-stopped
    tty: true
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./dockerconfig/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - common_network
  webserver:
    image: nginx
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./dockerconfig/nginx/:/etc/nginx/conf.d/
    networks:
      - webserver_network
      - common_network
networks:
  common_network:
    external: false
  webserver_network:
    external: false
The above creates two networks
global_docker_common_network, global_docker_webserver_network
In the dockerconfig folder there is an nginx configuration like:
server {
    listen 80;
    server_name pos.test www.pos.test;
    index index.php index.html;
    # other nginx configuration for pos.test
}
The docker-compose configuration with the PHP app
Now, for the one holding the source code for pos.test, I have the following configuration:
app:
  build:
    context: .
    dockerfile: Dockerfile
  image: posapp/php
  container_name: envanto_pos
  restart: unless-stopped
  tty: true
  working_dir: /var/www/pos
  volumes:
    - ./:/var/www/pos
    - ./dockerconfig/nginx/:/etc/nginx/conf.d/
  networks:
    - globaldocker_webserver_network
networks:
  globaldocker_webserver_network:
    external: true
I have added the external network to this one.
When I try accessing pos.test through nginx, it doesn't display the application but only shows the default nginx page.
I have opened a bash shell in the nginx container from the first configuration and checked the /var/www/pos folder, but I can't see the files from the second docker config (the source code).
How do I share volumes with my nginx container so that when I access Docker via the exposed port 80, I am able to reach my site pos.test?
What am I missing to make this work?
UPDATE
The two docker-compose files are located in different folders on my host machine.
UPDATE ON THE QUESTION
This is my nginx config file
server {
    listen 80;
    server_name pos.test www.pos.test;
    index index.php index.html;
    error_log /var/log/nginx/pos_error.log;
    access_log /var/log/nginx/pos_access.log;
    root /var/www/pos/web;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
You are mounting the current directory of each docker-compose file, so each container only gets the source code that sits next to its own compose file.
You need a common directory.
First file:
volumes:
  - /path_to_sc/common:/var/www
Second file:
volumes:
  - /path_to_sc/common:/var/www/pos
When I try accessing Nginx pos.test it doesn't display the application
but only shows the default Nginx page
Probably your first file is not picking up the correct configuration. Double-check ./dockerconfig/nginx/:/etc/nginx/conf.d/ or run a command inside the container to verify the correct configuration file is there:
docker exec webserver bash -c "cat /etc/nginx/conf.d/filename.conf"
I have tried accessing the first docker nginx configuration bash and
checked on the var/www/pos folder but i cant see the files from the
second docker config(source code).
Mount the common directory so that it is accessible to both containers.
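A quick way to confirm the shared mount works, assuming the container names webserver and envanto_pos from the question, is to list the mount point in both containers and compare:
docker exec webserver ls /var/www
docker exec envanto_pos ls /var/www/pos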
Update:
From your comment, it seems like there is a syntax error in your docker-compose file. Take a look at this example:
web:
  image: nginx
  volumes:
    - ./data:/var/www/html/
  ports:
    - 80:80
  command: [nginx-debug, '-g', 'daemon off;']
web2:
  image: nginx
  volumes:
    - ./data:/var/www/html
  command: [nginx-debug, '-g', 'daemon off;']
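With this layout both services see the same ./data directory, which you can verify quickly (the index.html here is just a throwaway test file):
echo 'hello' > data/index.html
docker-compose exec web cat /var/www/html/index.html
docker-compose exec web2 cat /var/www/html/index.html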

Nginx reverse proxy and path location

Hello, I'm new to the world of Docker, so I tried an installation with an NGINX reverse proxy (jwilder image) and a Docker app.
I have installed both without SSL to keep it simple. Since the Docker app seems to be served from the root path, I want to separate the NGINX web server and the Docker app.
upstream example.com {
    server 172.29.12.2:4040;
}
server {
    server_name example.com;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://example.com;
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    location /app {
        proxy_pass http://example.com:4040;
    }
}
So with http://example.com I want to be served the index.html,
and with http://example.com/app I want to reach the Docker app.
Furthermore, when I build the installation, I use expose: "4040" in docker-compose, so when I reload the NGINX configuration with nginx -s reload, it warns me that port 4040 is not open.
With the configuration file I posted above, any path leads me to the Docker app.
I can't find a simple solution to my question.
As far as I understood, your logic is right: Docker is designed to run a single service in a single container. To reach your goal you still have a couple of things to look after. If EXPOSE 4040 was declared in your Dockerfile, that alone is not enough to make the service reachable; in the docker-compose file you also have to declare the ports. E.g. for nginx you let the host system listen on all interfaces by adding
...
ports:
  - 80:80
...
That is the first thing. You also have to think about how you want your proxy to reach the "app": from the container network on the same node? If yes, you can add in the compose file:
...
depends_on:
  - app
...
where app is the declared name of your service in the docker-compose file. Like this, nginx is able to reach your app by the name app, so the proxy will point to app:
location /app {
    proxy_pass http://app:4040;
}
In case you want to reach the "app" via the host network, maybe because one day it will run on another host, you can add an entry to the hosts file of the container running nginx with:
...
extra_hosts:
  - "app:10.10.10.10"
  - "appb:10.10.10.11"
...
and so on
Reference: https://docs.docker.com/compose/compose-file/
Edit 01/01/2019 (happy new year!):
An example using a "huge" docker-compose file:
version: '3'
services:
  app:
    build: "./app" # in case your Dockerfile is in an app dir
    image: "some image name"
    restart: always
    command: "command to start your app"
  nginx:
    build: "./nginx" # in case your Dockerfile is in a nginx dir
    image: "some image name"
    restart: always
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - app
In the above example nginx can reach your app just with the "app" name, so the proxy will point to http://app:4040.
systemd unit (start directly with docker, no compose):
[Unit]
Description=app dockerized service
Requires=docker.service
After=docker.service
[Service]
ExecStartPre=/usr/bin/sleep 1
ExecStartPre=/usr/bin/docker pull mariadb:10.4
ExecStart=/usr/bin/docker run --restart=always --name=app -p 4040:4040 python:3.6-alpine # or your own builded image
ExecStop=/usr/bin/docker stop app
ExecStopPost=/usr/bin/docker rm -f app
ExecReload=/usr/bin/docker restart app
[Install]
WantedBy=multi-user.target
Like the above example, you can reach the app at port 4040 on the host system (which is listening for connections on port 4040 on all interfaces). To bind a specific interface: -p 10.10.10.10:4040:4040 will listen on port 4040 at address 10.10.10.10 (the host machine).
docker-compose with extra_hosts:
version: '3'
services:
  app:
    build: "./app" # in case your Dockerfile is in an app dir
    image: "some image name"
    restart: always
    command: "command to start your app"
  nginx:
    build: "./nginx" # in case your Dockerfile is in a nginx dir
    image: "some image name"
    restart: always
    ports:
      - "80:80"
      - "443:443"
    extra_hosts:
      - "app:10.10.10.10"
Like the above example, the nginx service can reach the name app at 10.10.10.10.
Last but not least, extending services in the compose file:
docker-compose.yml:
version: '2.1'
services:
  app:
    extends:
      file: /path/to/app-service.yml
      service: app
  nginx:
    extends:
      file: /path/to/nginx-service.yml
      service: nginx
app-service.yml:
version: "2.1"
service:
app:
build: "./app" # in case you docker file is in a app dir
image: "some image name"
restart: always
command: "command to start your app"
nginx-service.yml
version: "2.1"
service:
nginx:
build: "./nginx" # in case you docker file is in a nginx dir
image: "some image name"
restart: always
ports:
- "80:80"
- "443:443"
extra_hosts:
- "app:10.10.10.10"
I really hope the examples posted above are enough.
