I am trying to serve an application with this docker-compose file, taken from
https://github.com/solidnerd/docker-bookstack/blob/master/docker-compose.yml
version: '2'
services:
  mysql:
    image: mysql:5.7.33
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=bookstack
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD=secret
    volumes:
      - mysql-data:/var/lib/mysql
  bookstack:
    image: solidnerd/bookstack
    depends_on:
      - mysql
    environment:
      - DB_HOST=mysql:3306
      - DB_DATABASE=bookstack
      - DB_USERNAME=bookstack
      - DB_PASSWORD=secret
    volumes:
      - uploads:/var/www/bookstack/public/uploads
      - storage-uploads:/var/www/bookstack/storage/uploads
    ports:
      - "8080:8080"
#  nginx:
#    build: ./nginx
#    depends_on:
#      - bookstack
#    ports:
#      - "80:80"
volumes:
  mysql-data:
  uploads:
  storage-uploads:
On Windows, if I go to localhost:8080, it works.
On Debian 10, if I try either of these commands:
curl "localhost:8080"
curl "172.17.0.1:8080"
it hangs and, after a while, returns a 502 Proxy Error.
Here is the result of docker ps -a :
45afeda13eab solidnerd/bookstack "/bin/docker-entrypo…" 27 seconds ago Up 12 seconds 80/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp app_bookstack_1
953a91c86e7d mysql "docker-entrypoint.s…" 31 seconds ago Up 9 seconds 3306/tcp app_db_1
If I uncomment my nginx configuration and use these two files:
events {
}

http {
    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://bookstack:8080;
        }
    }
}
FROM nginx:1.20.1
COPY nginx.conf /etc/nginx/nginx.conf
Then it works. Why?
I would like to use the application without nginx. I thought it was a problem with that application, but I had the same problem with another one, so I guess the problem comes from my Debian machine.
Could you advise me on how to reach localhost:8080 without nginx?
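Not part of the original post, but one diagnostic worth noting: a "502 Proxy Error" coming back from curl usually means the request went through an HTTP proxy configured in the shell environment (http_proxy / https_proxy), which is common on Debian servers and would explain why the same compose file works fine elsewhere. A hedged sketch of how to check and bypass that, assuming a standard shell:

```shell
# If any of these variables are set, curl routes requests through that
# proxy -- even for localhost -- unless no_proxy covers the host.
env | grep -i proxy

# Bypass any configured proxy explicitly for this request:
curl --noproxy '*' http://localhost:8080/
```

If the second command succeeds while plain curl fails, the proxy environment (not Docker) is the culprit.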
I am trying to run coturn in Docker following this tutorial, using MongoDB as the selected database.
Here is the docker-compose file for coturn with MongoDB:
docker-compose-mongodb.yml
version: "3"
services:
  # MongoDB
  mongodb:
    image: mongo
    restart: unless-stopped
    container_name: docker-mongo
    ports:
      - "27017:27017"
    volumes:
      - mongodb-data:/data/db
    env_file:
      - mongodb/mongodb.env
    networks:
      - backend
  # Coturn
  coturn:
    build:
      context: ../
      dockerfile: ./docker/coturn/debian/Dockerfile
    restart: always
    volumes:
      - ./coturn/turnserver.conf:/etc/turnserver.conf:ro
      - ./coturn/privkey.pem:/etc/ssl/private/privkey.pem:ro
      - ./coturn/cert.pem:/etc/ssl/certs/cert.pem:ro
    ports:
      ## STUN/TURN
      - "3478:3478"
      - "3478:3478/udp"
      - "3479:3479"
      - "3479:3479/udp"
      - "80:80"
      - "80:80/udp"
      ## STUN/TURN SSL
      - "5349:5349"
      - "5349:5349/udp"
      - "5350:5350"
      - "5350:5350/udp"
      - "443:443"
      - "443:443/udp"
      # Relay Ports
      # - "49152-65535:49152-65535"
      # - "49152-65535:49152-65535/udp"
    networks:
      - frontend
      - backend
    depends_on:
      - mongodb
    env_file:
      - coturn/coturn.env
      # DB
      - mongodb/mongodb.env
volumes:
  mongodb-data:
networks:
  frontend:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
  backend:
    internal: true
And here is the coturn Dockerfile: https://github.com/coturn/coturn/blob/master/docker/coturn/debian/Dockerfile
I am using the file as-is.
Now, when I run
docker-compose -f docker-compose-mongodb.yml up
here is the output of
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
85eaa0b1d14e docker_coturn "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:80->80/udp, :::80->80/tcp, :::80->80/udp, 0.0.0.0:443->443/tcp, 0.0.0.0:443->443/udp, :::443->443/tcp, :::443->443/udp, 0.0.0.0:3478-3479->3478-3479/tcp, 0.0.0.0:3478-3479->3478-3479/udp, :::3478-3479->3478-3479/tcp, :::3478-3479->3478-3479/udp, 0.0.0.0:5349-5350->5349-5350/tcp, 0.0.0.0:5349-5350->5349-5350/udp, :::5349-5350->5349-5350/tcp, :::5349-5350->5349-5350/udp docker_coturn_1
4226a039f1ea mongo "docker-entrypoint.s…" 3 minutes ago Up 3 minutes docker-mongo
When I try to check connectivity with
nc -zvv docker-mongo 27017
I get
nc: getaddrinfo for host "docker-mongo" port 27017: Temporary failure in name resolution
and
nc -zvv docker_coturn_1 3478
gives the same error:
nc: getaddrinfo for host "docker_coturn_1" port 3478: Temporary failure in name resolution
How can I resolve this error? I am using Ubuntu 20.04 LTS.
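As a hedged aside (not from the post): container names like docker-mongo are resolved by Docker's embedded DNS, which is only available to containers attached to the same user-defined network, never to the host itself, so nc on the host cannot resolve them. From the host you would normally go through published ports, or run the check from inside a container on the network. A sketch, using the names from the compose file above:

```shell
# coturn publishes its ports on the host, so from the host you can hit:
nc -zvv localhost 3478

# mongodb sits on the "backend" network, which is marked internal
# (note its empty PORTS column in docker ps), so reach it from a
# container attached to that same network instead:
docker exec docker_coturn_1 getent hosts mongodb
```

Whether getent is available depends on the coturn image's base; any in-container resolver check serves the same purpose.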
I am trying, without success, to access Mercure's hub through my browser at this URL:
http://locahost:3000 => ERR_CONNECTION_REFUSED
I use Docker for my development. Here's my docker-compose.yml:
# docker/docker-compose.yml
version: '3'
services:
  database:
    container_name: test_db
    build:
      context: ./database
    environment:
      - MYSQL_DATABASE=${DATABASE_NAME}
      - MYSQL_USER=${DATABASE_USER}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DATABASE_ROOT_PASSWORD}
    ports:
      - "3309:3306"
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./database/data:/var/lib/mysql
  php-fpm:
    container_name: test_php
    build:
      context: ./php-fpm
    depends_on:
      - database
    environment:
      - APP_ENV=${APP_ENV}
      - APP_SECRET=${APP_SECRET}
      - DATABASE_URL=mysql://${DATABASE_USER}:${DATABASE_PASSWORD}@database:3306/${DATABASE_NAME}?serverVersion=5.7
    volumes:
      - ./src:/var/www
  nginx:
    container_name: test_nginx
    build:
      context: ./nginx
    volumes:
      - ./src:/var/www
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/sites/:/etc/nginx/sites-available
      - ./nginx/conf.d/:/etc/nginx/conf.d
      - ./logs:/var/log
    depends_on:
      - php-fpm
    ports:
      - "8095:80"
  caddy:
    container_name: test_mercure
    image: dunglas/mercure
    restart: unless-stopped
    environment:
      MERCURE_PUBLISHER_JWT_KEY: '!ChangeMe!'
      MERCURE_SUBSCRIBER_JWT_KEY: '!ChangeMe!'
      PUBLISH_URL: '${MERCURE_PUBLISH_URL}'
      JWT_KEY: '${MERCURE_JWT_KEY}'
      ALLOW_ANONYMOUS: '${MERCURE_ALLOW_ANONYMOUS}'
      CORS_ALLOWED_ORIGINS: '${MERCURE_CORS_ALLOWED_ORIGINS}'
      PUBLISH_ALLOWED_ORIGINS: '${MERCURE_PUBLISH_ALLOWED_ORIGINS}'
    ports:
      - "3000:80"
I have successfully executed:
docker-compose up -d
Output of docker ps -a :
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0e4a72fe75b2 dunglas/mercure "caddy run --config …" 2 hours ago Up 2 hours 443/tcp, 2019/tcp, 0.0.0.0:3000->80/tcp, :::3000->80/tcp test_mercure
724fe920ebef nginx "/docker-entrypoint.…" 3 hours ago Up 3 hours 0.0.0.0:8095->80/tcp, :::8095->80/tcp test_nginx
9e63fddf50ef php-fpm "docker-php-entrypoi…" 3 hours ago Up 3 hours 9000/tcp test_php
e7989b26084e database "docker-entrypoint.s…" 3 hours ago Up 3 hours 0.0.0.0:3309->3306/tcp, :::3309->3306/tcp test_db
I can reach http://localhost:8095 to access my Symfony app, but I don't know which URL I am supposed to use to reach my Mercure hub.
Thanks for your help!
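One note that may help (context, not from the thread): the dunglas/mercure image serves the hub on the /.well-known/mercure path, so with the 3000:80 port mapping above, the subscribe endpoint would presumably be reachable like this:

```shell
# Subscribe to updates on a topic (the topic URL here is just an example)
curl -N "http://localhost:3000/.well-known/mercure?topic=https://example.com/books/1"
```

The root URL of the hub is not expected to serve anything on its own, which is easy to mistake for the hub being down.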
I tried for months to get Symfony + nginx + MySQL + phpMyAdmin + Mercure + Docker working, both locally for development and in production, to no avail.
While this isn't directly answering your question, the only way I can contribute is with an "answer", as I don't have enough reputation to comment, or I would have done that.
If you're not tied to nginx for any reason besides needing a web server, and can replace it with Caddy, I have a repo with Symfony + Caddy + MySQL + phpMyAdmin + Mercure + Docker that works with SSL both locally and in production:
https://github.com/thund3rb1rd78/symfony-mercure-website-skeleton-dockerized
I've been struggling to make changes to my Docker app. After a lot of trial and error, it looks like what I thought was my nginx conf file might not actually be the conf file in use.
I determined this because I tried removing it entirely and my app runs the same under Docker.
Changes I make to my nginx service via the app.conf file seem to have no impact on the rest of my app.
I am trying to understand whether my volume mapping is correct. Here's my docker-compose:
version: "3.5"
services:
  collabora:
    image: collabora/code
    container_name: collabora
    restart: always
    cap_add:
      - MKNOD
    environment:
      - "extra_params=--o:ssl.enable=false --o:ssl.termination=true"
      - domain=mydomain\.com
      - dictionaries=en_US
    ports:
      - "9980:9980"
    volumes:
      - ./appdata/collabora:/config
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    depends_on:
      - certbot
      - collabora
    volumes:
      # - ./data/nginx:/etc/nginx/conf.d
      - ./data/nginx:/etc/nginx
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
My project's directory is:
/my_project
  docker-compose.yaml
  data
    nginx
      app.conf
And my app.conf has various nginx settings:
server {
    listen 80;
    server_name example.com www.example.com;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
... more settings below
Assuming I'm correct that my app.conf file is not being used, how can I map my local app.conf to the correct place in the nginx container?
The main nginx config file is
/etc/nginx/nginx.conf
That file includes all of the content from
/etc/nginx/conf.d/*
You can verify which nginx.conf is in use by executing ps -ef | grep nginx in your container.
Check your default.conf. In any case, in your compose file you should have:
volumes:
  - ./data/nginx:/etc/nginx/conf.d
Try with an absolute path.
Regards
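To make the suggested mapping concrete, here is a sketch (paths assumed from the question's layout): mount the local directory over /etc/nginx/conf.d only, so app.conf is picked up by the stock nginx.conf's `include /etc/nginx/conf.d/*.conf;` directive. Mounting over all of /etc/nginx, as in the question, hides the image's own nginx.conf and mime.types, so the custom conf is never included.

```yaml
services:
  nginx:
    image: nginx:1.15-alpine
    volumes:
      # /my_project/data/nginx/app.conf -> /etc/nginx/conf.d/app.conf,
      # which the image's default /etc/nginx/nginx.conf includes
      - ./data/nginx:/etc/nginx/conf.d
```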
I have a ReactJS front-end application and a simple Python Flask backend, and I am using a docker-compose.yml to spin up both containers, like this:
version: "3.2"
services:
  frontend:
    build: .
    environment:
      CHOKIDAR_USEPOLLING: "true"
    ports:
      - 80:80
    links:
      - "backend:backend"
    depends_on:
      - backend
  backend:
    build: ./api
    # volumes:
    #   - ./api:/usr/src/app
    environment:
      # CHOKIDAR_USEPOLLING: "true"
      FLASK_APP: /usr/src/app/server.py
      FLASK_DEBUG: 1
    ports:
      - 8083:8083
I have used links so the frontend service can talk to the backend service using axios, as below:
axios.get("http://backend:8083/monitors").then(res => {
  this.setState({
    status: res.data
  });
});
I used docker-compose up --build -d to build and start the two containers, and they start without any issue and run fine.
But now the frontend cannot talk to the backend.
I am using an AWS EC2 instance. When the page loads, I checked the browser console for errors and I get this:
VM167:1 GET http://backend:8083/monitors net::ERR_NAME_NOT_RESOLVED
Can someone please help me?
The backend service is up and running.
You can use nginx as a reverse proxy for both.
The compose file:
version: "3.2"
services:
  frontend:
    build: .
    environment:
      CHOKIDAR_USEPOLLING: "true"
    depends_on:
      - backend
  backend:
    build: ./api
    # volumes:
    #   - ./api:/usr/src/app
    environment:
      # CHOKIDAR_USEPOLLING: "true"
      FLASK_APP: /usr/src/app/server.py
      FLASK_DEBUG: 1
  proxy:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/example.conf
    ports:
      - 80:80
minimal nginx config (nginx.conf):
server {
    server_name example.com;
    server_tokens off;

    location / {
        proxy_pass http://frontend:80;
    }
}

server {
    server_name api.example.com;
    server_tokens off;

    location / {
        proxy_pass http://backend:8083;
    }
}
The request hits the nginx container and is routed to the right container according to the domain.
To use example.com and api.example.com you need to edit your hosts file:
Linux: /etc/hosts
Windows: c:\windows\system32\drivers\etc\hosts
Mac: /private/etc/hosts
127.0.0.1 example.com api.example.com
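If you would rather not edit the hosts file just to test the routing, curl can set the Host header directly against the published port (a sketch, assuming the proxy above is listening on localhost:80 and /monitors is the backend route from the question):

```shell
# Exercise the name-based virtual hosts without touching /etc/hosts
curl -H "Host: example.com" http://localhost/
curl -H "Host: api.example.com" http://localhost/monitors
```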
Heyo!
Update: I figured it out and added my answer.
I'm currently in the process of learning Docker, and I've written a docker-compose file that should launch nginx, Gitea, and Nextcloud, routing them all by domain name via a reverse proxy.
All is going well except for Nextcloud. I can access it via localhost:3001 but not via the nginx reverse proxy. All is well with Gitea; it works both ways.
The error I'm getting is:
nginx_proxy | 2018/08/10 00:17:34 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: cloud.example.ca, request: "GET / HTTP/1.1", upstream: "http://172.19.0.4:3001/", host: "cloud.example.ca"
docker-compose.yml:
version: '3.1'
services:
  nginx:
    container_name: nginx_proxy
    image: nginx:latest
    restart: always
    volumes:
      # Here I'm swapping out my default.conf for the container's by
      # mounting my directory over theirs.
      - ./nginx-conf:/etc/nginx/conf.d
    ports:
      - 80:80
      - 443:443
    networks:
      - proxy
  nextcloud_db:
    container_name: nextcloud_db
    image: mariadb:latest
    restart: always
    volumes:
      - nextcloud_db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/cloud_db_root
      MYSQL_PASSWORD_FILE: /run/secrets/cloud_db_pass
      MYSQL_DATABASE: devcloud
      MYSQL_USER: devcloud
    secrets:
      - cloud_db_root
      - cloud_db_pass
    networks:
      - database
  gitea_db:
    container_name: gitea_db
    image: mariadb:latest
    restart: always
    volumes:
      - gitea_db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/cloud_db_root
      MYSQL_PASSWORD_FILE: /run/secrets/cloud_db_pass
      MYSQL_DATABASE: gitea
      MYSQL_USER: gitea
    secrets:
      - cloud_db_root
      - cloud_db_pass
    networks:
      - database
  nextcloud:
    image: nextcloud
    container_name: nextcloud
    ports:
      - 3001:80
    volumes:
      - nextcloud:/var/www/html
    restart: always
    networks:
      - proxy
      - database
  gitea:
    container_name: gitea
    image: gitea/gitea:latest
    environment:
      - USER_UID=1000
      - USER_GID=1000
    restart: always
    volumes:
      - gitea:/data
    ports:
      - 3000:3000
      - 22:22
    networks:
      - proxy
      - database
volumes:
  nextcloud:
  nextcloud_db:
  gitea:
  gitea_db:
networks:
  proxy:
  database:
secrets:
  cloud_db_pass:
    file: cloud_db_pass.txt
  cloud_db_root:
    file: cloud_db_root.txt
My default.conf, which gets mounted into /etc/nginx/conf.d/default.conf:
upstream nextcloud {
    server nextcloud:3001;
}

upstream gitea {
    server gitea:3000;
}

server {
    listen 80;
    listen [::]:80;
    server_name cloud.example.ca;

    location / {
        proxy_pass http://nextcloud;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name git.example.ca;

    location / {
        proxy_pass http://gitea;
    }
}
I of course have my hosts file set up to route the domains to localhost. I've done a bit of googling, but nothing I've found so far seems to match what I'm running into. Thanks in advance!
Long story short, one does not simply reverse proxy to port 80 with nextcloud. It's just not allowed. I have it deployed and working great with a certificate over 443! :)