ngrok with docker & nginx: failed to complete tunnel connection

If I run ngrok outside of Docker, the tunnel connects fine. However, when I add it to my docker-compose up I get:
failed to complete tunnel connection
Here is my docker-compose:
services:
  local.box:
    image: nginx:1.21
    ports:
      - "8888:80"
    volumes:
      - .:/app
  ngrok:
    image: ngrok/ngrok:alpine
    depends_on:
      - local.box
    environment:
      - NGROK_AUTHTOKEN=<my_token>
    command: "http local.box:8888"
    volumes:
      - .:/app
networks:
  default:
    name: dev-dev
    external: true
I have tried host header rewrites, as well as editing my nginx.conf so it does not use the default server.
The nginx container itself starts up just fine.
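For reference, a host-header rewrite with the ngrok agent looks roughly like this (a sketch of only the ngrok service; it assumes ngrok's --host-header flag and points the agent at container port 80, since nginx listens on 80 inside the compose network and 8888 is only the port published on the host):

  ngrok:
    image: ngrok/ngrok:alpine
    depends_on:
      - local.box
    environment:
      - NGROK_AUTHTOKEN=<my_token>
    # --host-header=rewrite makes ngrok send the upstream's own Host header;
    # local.box:80 targets the nginx container port (8888 exists only on the host)
    command: "http --host-header=rewrite local.box:80"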

Try using this for the ngrok image:
services:
  local.box:
    image: nginx:1.21
    ports:
      - "8888:80"
    volumes:
      - .:/app
  ngrok:
    image: ngrok:alpine
    depends_on:
      - local.box
    environment:
      - NGROK_AUTHTOKEN=<my_token>
    command: "http local.box:8888"
    volumes:
      - .:/app
networks:
  default:
    name: dev-dev
    external: true

Related

Dockerize MERN stack using traefik

I followed this tutorial: https://rafrasenberg.com/posts/docker-container-management-with-traefik-v2-and-portainer/ to build a Traefik reverse proxy on my server,
and I tested it with a very simple application that renders a "hello world" in an index.html:
version: "3"
services:
app:
image: nginx
environment:
PORT: ${PORT}
volumes:
- .:/usr/share/nginx/html/
networks:
- proxy
- default
labels:
- "traefik.enable=true"
- "traefik.docker.network=proxy"
- "traefik.http.routers.app-secure.entrypoints=websecure"
- "traefik.http.routers.app-secure.rule=Host(`my-test.localhost`)"
networks:
proxy:
external: true
It works!
Now I want to take the next step and use it for a MERN stack project, and I'm a bit lost.
Usually I dockerize a MERN stack by:
creating a Dockerfile in /server
creating a Dockerfile in /client
creating a docker-compose.yml in the root directory
version: "3.7"
services:
server:
build:
context: ./server
dockerfile: Dockerfile
image: myapp-server
container_name: myapp-node-server
command: /usr/src/app/node_modules/.bin/nodemon server.js
volumes:
- ./server/:/usr/src/app
- /usr/src/app/node_modules
ports:
- 5000
depends_on:
- mongo
env_file: ./server/.env
environment:
- NODE_ENV=development
networks:
- app-network
- proxy
mongo:
image: mongo
volumes:
- data-volume:/data/db
ports:
- 27017
networks:
- app-network
- proxy
client:
build:
context: ./client
dockerfile: Dockerfile
image: myapp-client
container_name: myapp-react-client
command: npm start
volumes:
- ./client/:/usr/app
- /usr/app/node_modules
depends_on:
- server
ports:
- 3001
networks:
- app-network
labels:
- "traefik.enable=true"
- "traefik.docker.network=proxy"
- "traefik.http.routers.app2-secure.entrypoints=websecure"
- "traefik.http.routers.app2-secure.rule=Host(`test.localhost`)"
networks:
app-network:
driver: bridge
proxy:
external: true
volumes:
data-volume:
node_modules:
web-root:
driver: local
But it seems that my proxy is not working: the console doesn't return any error, and the app works on localhost:3000, but on test.localhost I get a "Gateway Timeout" error.
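One thing that could explain the Gateway Timeout (a sketch under that assumption, not a confirmed fix): the labels tell Traefik to reach the container over the proxy network, but the client service is only attached to app-network, and no target port is declared. Attaching the client to proxy and adding a loadbalancer.server.port label would look like this (3000 is an assumed value here; use whatever port the dev server actually listens on inside the container):

  client:
    # ...build, volumes, command as above...
    networks:
      - app-network
      - proxy                     # the network Traefik is told to use via traefik.docker.network
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.app2-secure.entrypoints=websecure"
      - "traefik.http.routers.app2-secure.rule=Host(`test.localhost`)"
      # assumed container port for the React dev server
      - "traefik.http.services.app2-secure.loadbalancer.server.port=3000"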

multiple docker compose files with traefik (v2.1) and database networks

I would like to build a Docker landscape. I use a container with a traefik (v2.1) image and a MySQL container for multiple databases.
traefik/docker-compose.yml
version: "3.3"
services:
traefik:
image: "traefik:v2.1"
container_name: "traefik"
restart: always
command:
- "--log.level=DEBUG"
- "--api=true"
- "--api.dashboard=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=proxy"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
- "--entrypoints.traefik-dashboard.address=:8080"
- "--certificatesresolvers.devnik-resolver.acme.httpchallenge=true"
- "--certificatesresolvers.devnik-resolver.acme.httpchallenge.entrypoint=web"
#- "--certificatesresolvers.devnik-resolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
- "--certificatesresolvers.devnik-resolver.acme.email=####"
- "--certificatesresolvers.devnik-resolver.acme.storage=/letsencrypt/acme.json"
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- "./letsencrypt:/letsencrypt"
- "./data:/etc/traefik"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
- "proxy"
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.rule=Host(`devnik.dev`)"
- "traefik.http.routers.traefik.entrypoints=traefik-dashboard"
- "traefik.http.routers.traefik.tls.certresolver=devnik-resolver"
#basic auth
- "traefik.http.routers.traefik.service=api#internal"
- "traefik.http.routers.traefik.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.usersfile=/etc/traefik/.htpasswd"
#Docker Networks
networks:
proxy:
database/docker-compose.yml
version: "3.3"
services:
#MySQL Service
mysql:
image: mysql:5.7
container_name: mysql
restart: always
ports:
- "3306:3306"
volumes:
#persist data
- ./mysqldata/:/var/lib/mysql/
- ./init:/docker-entrypoint-initdb.d
networks:
- "mysql"
environment:
MYSQL_ROOT_PASSWORD: ####
TZ: Europe/Berlin
#Docker Networks
networks:
mysql:
driver: bridge
For the structure, I want to control all projects via multiple docker-compose files. These containers should run on the same network as the traefik container, and some of them also on the same network as the mysql container.
This also works for the following case (but only sometimes):
dev-releases/docker-compose.yml
version: "3.3"
services:
backend:
image: "registry.gitlab.com/devnik/dev-releases-backend/master:latest"
container_name: "dev-releases-backend"
restart: always
volumes:
#laravel logs
- "./logs/backend:/app/storage/logs"
#cron logs
- "./logs/backend/cron.log:/var/log/cron.log"
labels:
- "traefik.enable=true"
- "traefik.http.routers.dev-releases-backend.rule=Host(`dev-releases.backend.devnik.dev`)"
- "traefik.http.routers.dev-releases-backend.entrypoints=websecure"
- "traefik.http.routers.dev-releases-backend.tls.certresolver=devnik-resolver"
networks:
- proxy
- mysql
environment:
TZ: Europe/Berlin
#Docker Networks
networks:
proxy:
external:
name: "traefik_proxy"
mysql:
external:
name: "database_mysql"
As soon as I restart the containers in dev-releases/ via docker-compose up -d, I get the typical "Gateway timeout" error when calling them in the browser.
As soon as I comment out the mysql network (#- mysql) and restart the docker-compose in dev-releases/, it works again.
My guess is that I have not configured the external networks correctly. Is it not possible to use two external networks?
I'd like some containers to have access to the 'mysql' network, but it should not be accessible to the whole traefik network.
Let me know if you need more information.
EDIT (26.03.2020)
I got it running.
I put all my containers into one network, "proxy". It seems mysql also has to be in the proxy network.
So I added the following to database/docker-compose.yml:
networks:
  proxy:
    external:
      name: "traefik_proxy"
Then I removed the database_mysql network from dev-releases/docker-compose.yml.
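With that change, the network references in dev-releases/docker-compose.yml only point at the shared proxy network (a sketch of the result; other settings unchanged):

services:
  backend:
    # ...image, labels, volumes as before...
    networks:
      - proxy
networks:
  proxy:
    external:
      name: "traefik_proxy"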
Based on the names of the files, your mysql network should be mysql_mysql.
You can verify this by executing:
$> docker network ls
You are also missing a couple of labels for your services, such as:
Traefik command line:
  - '--providers.docker.watch=true'
  - '--providers.docker.swarmMode=true'
Labels:
  - traefik.docker.network=proxy
  - traefik.http.services.dev-releases-backend.loadbalancer.server.port=yourport
  - traefik.http.routers.dev-releases-backend.service=mailcatcher
You can check this for more info
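Applied to dev-releases/docker-compose.yml, those suggestions would sit on the backend service roughly like this (a sketch; the port value is a placeholder for whatever port the Laravel container actually listens on):

  backend:
    # ...image, volumes, networks as above...
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.dev-releases-backend.rule=Host(`dev-releases.backend.devnik.dev`)"
      - "traefik.http.routers.dev-releases-backend.entrypoints=websecure"
      - "traefik.http.routers.dev-releases-backend.tls.certresolver=devnik-resolver"
      # placeholder port - replace with the backend container's listening port
      - "traefik.http.services.dev-releases-backend.loadbalancer.server.port=80"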

How to setup Nginx reverse proxy for multiple containers with each container having its own Nginx server

I have a VPS on which I want to deploy multiple web applications (I've already read posts about this, and they work well when each application is a single container directly behind the proxy). I want each web application to have its own nginx that routes to its subdomains, plus some static websites related to them. I have two docker-compose files. My network looks like the following image.
My nginx reverse proxy should be responsible for routing the domains to the respective nginx containers. I'm not sure whether this is possible or not (that's why I'm asking for help). If someone can provide a better understanding, I'm open to suggestions. Below is my configuration for the nginx proxy container and the docker-compose file for the other web apps. I've used nginx-proxy (jwilder/nginx-proxy).
NGINX PROXY Docker Compose File
version: "3.7"
services:
nginx-proxy:
image: jwilder/nginx-proxy:alpine
container_name: nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
networks:
default:
external:
name: nginx-proxy
WEB APPLICATION 1 docker-compose.yml
version: "3.7"
networks:
default:
external:
name: nginx-proxy
yoda-network:
driver: bridge
services:
adminer:
container_name: ${APP_NAME}_adminer
image: adminer
depends_on:
- mysql
expose:
- 8080
environment:
VIRTUAL_HOST: yodaledger.com
VIRTUAL_PORT: 8080
networks:
- yoda-network
python:
container_name: ${APP_NAME}_python
image: python:3.6
command: bash -c "pip3 install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
volumes:
- ${APP_PATH}var/www/api.yodaledger.com/yodaledger_backend:/app
- ${APP_PATH}var/www/static.yodaledger.com:/app/static
depends_on:
- mysql
working_dir: /app
environment:
- PYTHONUNBUFFERED=1
- MYSQL_HOST=mysql
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
networks:
- yoda-network
nginx:
container_name: ${APP_NAME}_nginx
image: nginx:alpine
expose:
- 443
- 80
volumes:
- ${APP_PATH}var/www:/var/www
- ${APP_PATH}var/log:/var/log/nginx
- ${APP_PATH}var/ssl:/var/ssl
- ${APP_PATH}etc/nginx:/etc/nginx
- /tmp/${APP_NAME}/nginx:/tmp
depends_on:
- python
environment:
VIRTUAL_PORT: 443
VIRTUAL_HOST: yodaledger.com,api.yodaledger.com,app.yodaledger.com,static.yodaledger.com
networks:
- yoda-network
- default
mysql:
container_name: ${APP_NAME}_mysql
image: mariadb:latest
#ports:
# - "3306:3306"
volumes:
- ${APP_PATH}var/mysql:/var/lib/mysql
- ${APP_PATH}etc/mysql:/etc/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- TZ=Europe/Paris
networks:
- yoda-network
WEB APPLICATION 2
It looks the same as web application 1.
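For reference, the routed service in web application 2 follows the same pattern (a sketch with hypothetical domain and network names; the key points for jwilder/nginx-proxy are joining the shared external nginx-proxy network and setting VIRTUAL_HOST/VIRTUAL_PORT):

version: "3.7"
networks:
  default:
    external:
      name: nginx-proxy
  app2-network:
    driver: bridge
services:
  nginx:
    image: nginx:alpine
    expose:
      - 80
    environment:
      VIRTUAL_PORT: 80
      VIRTUAL_HOST: app2.example.com          # hypothetical domain
    networks:
      - app2-network                          # the app's internal network
      - default                               # the shared nginx-proxy network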

Setting up docker auto build to use docker-compose file

I am trying to set up auto builds using Docker Cloud / Docker Hub. It always looks for a Dockerfile, but I have a docker-compose.yml, and I am unable to find any option to change this. I am wondering whether this isn't possible, or whether I am missing something.
This is my docker-compose.yml
version: '3'
services:
  reverse-proxy:
    image: traefik
    ports:
      - "80:80"
      - "443:443"
      - "${TRAEFIK_DASHBOARD_PORT}:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/traefik.toml:/etc/traefik/traefik.toml
      - ./traefik/certs/journal.crt:/certs/journal.crt
      - ./traefik/certs/journal.key:/certs/journal.key
    networks:
      - web
  prisma:
    image: prismagraphql/prisma:1.8
    restart: always
    ports:
      - "${PRISMA_PORT}"
    networks:
      - web
    environment:
      PRISMA_CONFIG: |
        port: ${PRISMA_PORT}
        managementApiSecret: ${PRISMA_MANAGEMENT_API_SECRET}
        databases:
          default:
            connector: postgres
            host: ${PRISMA_DB_HOST}
            port: ${PRISMA_DB_PORT}
            database: ${PRISMA_DB}
            user: ${PRISMA_DB_USER}
            password: ${PRISMA_DB_PASSWORD}
            migrations: ${PRISMA_ENABLE_MIGRATION}
  graphql-server:
    build:
      context: ./graphql-server/
      args:
        - PORT=${GRAPHQL_SERVER_PORT}
    networks:
      - web
    ports:
      - "${GRAPHQL_SERVER_PORT}"
    volumes:
      - ./graphql-server:/usr/src/app
    depends_on:
      - prisma
    command: ["./wait-for-it.sh", "prisma:${PRISMA_PORT}", "--", "./bootstrap.sh"]
    environment:
      - PRISMA_SERVICE_NAME=prisma
      - PRISMA_PORT
      - GRAPHQL_SERVER_PORT
      - APOLLO_ENGINE_KEY
      - PRISMA_ENDPOINT
      - PRISMA_MANAGEMENT_API_SECRET
    labels:
      - "traefik.backend=graphql"
      - "traefik.frontend.rule=Host:api.journal.com"
      - "traefik.enable=true"
      - "traefik.port=8080"
      - "traefik.docker.network=web"
  react-client:
    build:
      context: ./react-client/
      args:
        - PORT=${REACT_CLIENT_PORT}
    ports:
      - "${REACT_CLIENT_PORT}"
    volumes:
      - ./react-client:/usr/src/app
    depends_on:
      - graphql-server
    environment:
      - GRAPHQL_SERVER_PORT
      - REACT_CLIENT_PORT
    networks:
      - web
networks:
  web:
    external: true
Both Docker Hub and Docker Cloud pick up only the Dockerfile and not the docker-compose file. I also saw a post mentioning that docker-compose should be used only for running and not for building, so I am not sure whether I am doing something wrong.
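Docker Hub automated builds build a single Dockerfile per build rule rather than a compose stack, so one workaround (a sketch, assuming separate Hub repositories are configured as automated builds for each service that has a Dockerfile) is to let those autobuilds produce the per-service images and have the compose file pull them instead of building locally:

services:
  graphql-server:
    # was: build: ./graphql-server/ - now pulled from a hypothetical autobuild repository
    image: myorg/journal-graphql-server:latest
  react-client:
    # was: build: ./react-client/ - now pulled from a hypothetical autobuild repository
    image: myorg/journal-react-client:latest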

Docker ports work in localhost but not with public ip

I'm starting with Docker and I have the following docker-compose file. When I run docker-compose up, everything starts successfully, and curl localhost works fine, but when I try to access it from the public IP it doesn't work: the connection times out.
version: '3'
services:
  db:
    environment:
      - POSTGRES_PASSWORD=mipass
      - POSTGRES_USER=miuser
      - POSTGRES_DB=pdfdd
    image: postgres:9.6
  web:
    restart: always
    tty: true
    stdin_open: true
    build: .
    command: python ./code/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - /www/static
      - .:/code
    links:
      - web:web
Thanks @BMitch, I forgot to open port 80 in the security group. If someone has the same error, please see "Opening port 80 EC2 Amazon web services" for instructions.
