I am trying to run a Gitea server with Drone. Both are currently hosted on the same Ubuntu machine, and the Docker containers are set up through a docker-compose.yml file.
When starting up all services I get the following error in the logs of the Drone runner service:
time="2020-08-12T19:10:42Z" level=error msg="cannot ping the remote server" error="Post http://drone:80/rpc/v2/ping: dial tcp: lookup drone on 127.0.0.11:53: no such host"
Both http://gitea and http://drone point to localhost (via /etc/hosts). Sadly, I don't understand how or why the Drone runner cannot find the server. Calling "docker container inspect" on all four of my containers shows they are all connected to the same network (drone_and_gitea_giteanet), which is also the network I set in the DRONE_RUNNER_NETWORKS environment variable.
This is how my docker-compose.yml file looks:
version: "3.8"
# Create named volumes for gitea server, gitea database and drone server
volumes:
gitea:
gitea-db:
drone:
# Create shared network for gitea and drone
networks:
giteanet:
external: false
services:
gitea:
container_name: gitea
image: gitea/gitea:1
#restart: always
environment:
- APP_NAME="Automated Student Assessment Tool"
- USER_UID=1000
- USER_GID=1000
- ROOT_URL=http://gitea:3000
- DB_TYPE=postgres
- DB_HOST=gitea-db:5432
- DB_NAME=gitea
- DB_USER=gitea
- DB_PASSWD=gitea
networks:
- giteanet
ports:
- "3000:3000"
- "222:22"
volumes:
- gitea:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
depends_on:
- gitea-db
gitea-db:
container_name: gitea-db
image: postgres:9.6
#restart: always
environment:
- POSTGRES_USER=gitea
- POSTGRES_PASSWORD=gitea
- POSTGRES_DB=gitea
networks:
- giteanet
volumes:
- gitea-db:/var/lib/postgresql/data
drone-server:
container_name: drone-server
image: drone/drone:1
#restart: always
environment:
# General server settings
- DRONE_SERVER_HOST=drone:80
- DRONE_SERVER_PROTO=http
- DRONE_RPC_SECRET=topsecret
# Gitea Config
- DRONE_GITEA_SERVER=http://gitea:3000
- DRONE_GITEA_CLIENT_ID=<CLIENT ID>
- DRONE_GITEA_CLIENT_SECRET=<CLIENT SECRET>
# Create Admin User, name should be the same as Gitea Admin user
- DRONE_USER_CREATE=username:AdminUser,admin:true
# Drone Logs Settings
- DRONE_LOGS_PRETTY=true
- DRONE_LOGS_COLOR=true
networks:
- giteanet
ports:
- "80:80"
volumes:
- drone:/data
depends_on:
- gitea
drone-agent:
container_name: drone-agent
image: drone/drone-runner-docker:1
#restart: always
environment:
- DRONE_RPC_PROTO=http
- DRONE_RPC_HOST=drone:80
- DRONE_RPC_SECRET=topsecret
- DRONE_RUNNER_CAPACITY=1
- DRONE_RUNNER_NETWORKS=drone_and_gitea_giteanet
networks:
- giteanet
volumes:
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- drone-server
It would help me a lot if somebody could maybe take a look at the issue and help me out! :)
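The error points at name resolution rather than at the network itself: the runner asks Docker's embedded DNS (127.0.0.11) for the host "drone", but no service or container in this file is named drone — the server's service name is drone-server. A minimal sketch of the likely fix, assuming the runner should address the server by its compose service name (DRONE_SERVER_HOST is the externally visible address used to build links and webhook URLs, which is a separate concern):

drone-agent:
  environment:
    - DRONE_RPC_PROTO=http
    # "drone-server" resolves via Docker DNS on the shared giteanet network;
    # port 80 is the default for http, so no port suffix is needed.
    - DRONE_RPC_HOST=drone-server
    - DRONE_RPC_SECRET=topsecret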
To control some Tasmota-driven WiFi sockets and some other devices, I want to install a Docker-based SmartHome hub on my Synology DS218+.
This installation is to be reachable only from inside my LAN, while some other Docker containers on my NAS are accessible from the Internet.
So I decided on a docker-compose setup based on a Traefik script with one single Traefik container and a SmartHome script with some SmartHome-related containers (both scripts below).
During a step-wise installation I first implemented the ioBroker container, finished the initial setup and installed the Node-RED adapter.
After that I added a Mosquitto container to my SmartHome script and a dependency to let the ioBroker container start after Mosquitto.
All containers of the above setup come up without any problems, but ioBroker is the only service that's accessible.
Neither my Tasmota devices nor ioBroker seem to have access to Mosquitto, and when I try to start the Node-RED instance, I get the error "404 page not found".
Traefik-script:
version: "3.9"
services:
traefik:
image: traefik:v2.4
command:
- --log.level=ERROR
- --entrypoints.web.address=:80
- --entrypoints.web.http.redirections.entrypoint.to=web-secure
- --entrypoints.web.http.redirections.entrypoint.scheme=https
- --entrypoints.web-secure.address=:443
- --entrypoints.web-secure.http.tls.certresolver=lets-encrypt
- --entrypoints.something.address=:1234
...
- --entrypoints.node-red.address=:1880
- --entrypoints.mosquitto.address=:1883
- --entrypoints.iobroker.address=:8081
...
- --entrypoints.something-different.address=:23456
- --entrypoints.something-different.http.redirections.entrypoint.to=something-different
- --entrypoints.something-different.http.redirections.entrypoint.scheme=https
- --providers.docker=true
- --providers.docker.endpoint=unix:///var/run/docker.sock
- --providers.file.directory=/etc/traefik/dynamic/
- --providers.file.watch=true
- --certificatesresolvers.lets-encrypt.acme.email=my.email#internet.com
- --certificatesresolvers.lets-encrypt.acme.storage=/etc/traefik/acme.json
- --certificatesresolvers.lets-encrypt.acme.tlschallenge=true
restart:
- unless-stopped
ports:
- 80:80
- 443:443
- 1234:1234
...
- 1880:1880
- 1883:1883
- 8081:8081
...
- 23456:23456
volumes:
- /etc/localtime:/etc/localtime:ro
- ${PWD}/traefik:/etc/traefik
- /var/run/docker.sock:/var/run/docker.sock:ro
labels:
- traefik.enable=false
networks:
- traefik
networks:
traefik:
external: false
driver: bridge
name: traefik
SmartHome-script
version: "3.9"
services:
mosquitto:
image: eclipse-mosquitto:latest
restart:
- unless-stopped
volumes:
- ${PWD}/mosquitto-config:/mosquitto/config
- ${PWD}/mosquitto-data:/mosquitto/data
- ${PWD}/mosquitto-log:/mosquitto/log
labels:
- traefik.enable=true
- traefik.tcp.routers.mosquitto.entrypoints=mosquitto
- traefik.tcp.routers.mosquitto.rule=HostSNI(`my.synology.nas.local`)
- traefik.tcp.routers.mosquitto.service=svc-mosquitto
- traefik.tcp.services.svc-mosquitto.loadbalancer.server.port=1883
networks:
- traefik
iobroker:
image: iobroker/iobroker:latest
restart:
- unless-stopped
depends_on:
- mosquitto
environment:
- LANG=de_DE.UTF‑8
- LANGUAGE=de_DE:de
- LC_ALL=de_DE.UTF-8
- TZ=Europe/Berlin
volumes:
- ${PWD}/iobroker-data:/opt/iobroker
labels:
- traefik.enable=true
- traefik.http.routers.iobroker.entrypoints=iobroker
- traefik.http.routers.iobroker.rule=Host(`my.synology.nas.local`)
networks:
- traefik
networks:
traefik:
external: true
I suspect that the inaccessible Mosquitto server is related to the "labels" section of the Mosquitto container, because this is the first time I have tried to use TCP routing.
The inaccessible Node-RED instance within ioBroker might be related to using more than one HTTP port with this container, but I have no idea where to begin troubleshooting.
What's the "correct way" to handle such use cases in docker-compose scripts and in Traefik?
Thanks in advance for your hints!
Lanzi
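One Traefik v2 detail that may explain the unreachable Mosquitto: a TCP router rule with a concrete HostSNI hostname can only ever match TLS connections, because SNI is part of the TLS handshake. Plain MQTT on port 1883 carries no SNI, so a router ruled by HostSNI(`my.synology.nas.local`) never matches. A hedged sketch of labels for a non-TLS TCP route:

labels:
  - traefik.enable=true
  - traefik.tcp.routers.mosquitto.entrypoints=mosquitto
  # No TLS means no SNI, so the wildcard is the only rule that can match:
  - traefik.tcp.routers.mosquitto.rule=HostSNI(`*`)
  - traefik.tcp.routers.mosquitto.service=svc-mosquitto
  - traefik.tcp.services.svc-mosquitto.loadbalancer.server.port=1883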
I am new to this today. I have been trying to figure out what the problem is all day.
docker-compose version 1.28.5, build 324b023a
I run:
docker-compose up -d
and I get:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.teslamate: 'database'
version: "3"
services:
teslamate:
image: teslamate/teslamate
restart: always
environment:
- ENCRYPTION_KEY= <Insert Key>
- DB_USER=teslamate
- DB_PASS= <Insert password>
- DB_NAME=teslamate
- DB_HOST=database
- MQTT_HOST=mosquitto
- VIRTUAL_HOST=<Insert IP address>
# if you're going to access the UI from another machine replace
# "localhost" with the hostname / IP address of the docker host.
- TZ=US # (optional) replace to use local time in debug logs. See "Configuration".
ports:
- 4000:4000
volumes:
- ./import:/opt/app/import
cap_drop:
- all
database:
image: postgres:14
restart: always
environment:
- POSTGRES_USER=teslamate
- POSTGRES_PASSWORD= <Insert password>
- POSTGRES_DB=teslamate
volumes:
- teslamate-db:/var/lib/postgresql/data
grafana:
image: teslamate/grafana
restart: always
environment:
- DATABASE_USER=teslamate
- DATABASE_PASS= goforit
- DATABASE_NAME=teslamate
- DATABASE_HOST=database
ports:
- 3000:3000
volumes:
- teslamate-grafana-data:/var/lib/grafana
mosquitto:
image: eclipse-mosquitto:2
restart: always
command: mosquitto -c /mosquitto-no-auth.conf
# ports:
# - 1883:1883
volumes:
- mosquitto-conf:/mosquitto/config
- mosquitto-data:/mosquitto/data
volumes:
teslamate-db:
teslamate-grafana-data:
mosquitto-conf:
mosquitto-data:
Could someone please let me know what is wrong?
Thank you,
It is just a YAML indentation problem. Your services teslamate, database, grafana and mosquitto need to have the same indentation; otherwise database is seen as a property of teslamate, and that is not a valid property for docker-compose.
version: "3"
services:
teslamate:
image: teslamate/teslamate
restart: always
environment:
- ENCRYPTION_KEY= <Insert Key>
- DB_USER=teslamate
- DB_PASS= <Insert password>
- DB_NAME=teslamate
- DB_HOST=database
- MQTT_HOST=mosquitto
- VIRTUAL_HOST=<Insert IP address>
# if you're going to access the UI from another machine replace
# "localhost" with the hostname / IP address of the docker host.
- TZ=US # (optional) replace to use local time in debug logs. See "Configuration".
ports:
- 4000:4000
volumes:
- ./import:/opt/app/import
cap_drop:
- all
database:
image: postgres:14
restart: always
environment:
- POSTGRES_USER=teslamate
- POSTGRES_PASSWORD= <Insert password>
- POSTGRES_DB=teslamate
volumes:
- teslamate-db:/var/lib/postgresql/data
grafana:
image: teslamate/grafana
restart: always
environment:
- DATABASE_USER=teslamate
- DATABASE_PASS= goforit
- DATABASE_NAME=teslamate
- DATABASE_HOST=database
ports:
- 3000:3000
volumes:
- teslamate-grafana-data:/var/lib/grafana
mosquitto:
image: eclipse-mosquitto:2
restart: always
command: mosquitto -c /mosquitto-no-auth.conf
# ports:
# - 1883:1883
volumes:
- mosquitto-conf:/mosquitto/config
- mosquitto-data:/mosquitto/data
volumes:
teslamate-db:
teslamate-grafana-data:
mosquitto-conf:
mosquitto-data:
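As a side note, mistakes like this can be caught without starting anything: docker-compose config parses and validates the file and prints the fully resolved configuration, failing with the same kind of error message when the structure is wrong.

# Validate and print the resolved compose file; no containers are started.
docker-compose config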
I would like to build a Docker landscape. I use a container with a Traefik (v2.1) image and a MySQL container for multiple databases.
traefik/docker-compose.yml
version: "3.3"
services:
traefik:
image: "traefik:v2.1"
container_name: "traefik"
restart: always
command:
- "--log.level=DEBUG"
- "--api=true"
- "--api.dashboard=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=proxy"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
- "--entrypoints.traefik-dashboard.address=:8080"
- "--certificatesresolvers.devnik-resolver.acme.httpchallenge=true"
- "--certificatesresolvers.devnik-resolver.acme.httpchallenge.entrypoint=web"
#- "--certificatesresolvers.devnik-resolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
- "--certificatesresolvers.devnik-resolver.acme.email=####"
- "--certificatesresolvers.devnik-resolver.acme.storage=/letsencrypt/acme.json"
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- "./letsencrypt:/letsencrypt"
- "./data:/etc/traefik"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
- "proxy"
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.rule=Host(`devnik.dev`)"
- "traefik.http.routers.traefik.entrypoints=traefik-dashboard"
- "traefik.http.routers.traefik.tls.certresolver=devnik-resolver"
#basic auth
- "traefik.http.routers.traefik.service=api#internal"
- "traefik.http.routers.traefik.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.usersfile=/etc/traefik/.htpasswd"
#Docker Networks
networks:
proxy:
database/docker-compose.yml
version: "3.3"
services:
#MySQL Service
mysql:
image: mysql:5.7
container_name: mysql
restart: always
ports:
- "3306:3306"
volumes:
#persist data
- ./mysqldata/:/var/lib/mysql/
- ./init:/docker-entrypoint-initdb.d
networks:
- "mysql"
environment:
MYSQL_ROOT_PASSWORD: ####
TZ: Europe/Berlin
#Docker Networks
networks:
mysql:
driver: bridge
For the structure I want to control all projects via multiple docker-compose files. These containers should run on the same network as the Traefik container, and some also with the MySQL container.
This also works for the following case (but only sometimes):
dev-releases/docker-compose.yml
version: "3.3"
services:
backend:
image: "registry.gitlab.com/devnik/dev-releases-backend/master:latest"
container_name: "dev-releases-backend"
restart: always
volumes:
#laravel logs
- "./logs/backend:/app/storage/logs"
#cron logs
- "./logs/backend/cron.log:/var/log/cron.log"
labels:
- "traefik.enable=true"
- "traefik.http.routers.dev-releases-backend.rule=Host(`dev-releases.backend.devnik.dev`)"
- "traefik.http.routers.dev-releases-backend.entrypoints=websecure"
- "traefik.http.routers.dev-releases-backend.tls.certresolver=devnik-resolver"
networks:
- proxy
- mysql
environment:
TZ: Europe/Berlin
#Docker Networks
networks:
proxy:
external:
name: "traefik_proxy"
mysql:
external:
name: "database_mysql"
As soon as I restart the containers in dev-releases/ via docker-compose up -d, I get the typical "Gateway timeout" error when calling them in the browser.
As soon as I comment out the mysql network (networks: #- mysql) and restart the compose project in dev-releases/, it works again.
My guess is that I have not configured the external networks correctly. Is it not possible to use two external networks?
I'd like some containers to have access to the 'mysql' network, but it should not be accessible to the whole traefik network.
Let me know if you need more information
EDIT (26.03.2020)
I got it running.
I put all my containers into one network, "proxy". It seems MySQL also has to be in the proxy network.
So I added the following to database/docker-compose.yml:
networks:
  proxy:
    external:
      name: "traefik_proxy"
And I removed the database_mysql network from dev-releases/docker-compose.yml.
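For completeness: declaring the external network at the top level only makes it available to the file; the mysql service itself also has to join it. A sketch of how database/docker-compose.yml presumably looks with that change (service options abbreviated):

services:
  mysql:
    # ... image, ports, volumes, environment as before ...
    networks:
      - proxy

networks:
  proxy:
    external:
      name: "traefik_proxy"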
Based on the names of the directories, Compose prefixes each network with the project name, which defaults to the directory holding the compose file — so for database/docker-compose.yml the network is created as database_mysql.
You can verify this by executing:
$> docker network ls
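Illustrative output (IDs made up; the NAME column is what matters, showing the project-name prefix):

NETWORK ID     NAME             DRIVER    SCOPE
f2a9c1d04b7e   database_mysql   bridge    local
9c4e1b7a2d58   traefik_proxy    bridge    local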
You are also missing a couple of command-line options and labels for your services, such as:
traefik command line
- '--providers.docker.watch=true'
- '--providers.docker.swarmMode=true'
labels
- traefik.docker.network=proxy
- traefik.http.services.dev-releases-backend.loadbalancer.server.port=yourport
- traefik.http.routers.dev-releases-backend.service=mailcatcher
You can check this for more info
I'm attempting to run this script in Win10 to configure everything.
All containers except the elastic container are initialized correctly; Elastic times out and then exits with code 124.
https://imgur.com/a/FO8ckwc (some log outputs)
I'm running this script where I didn't touch anything except the Windows ports (you can see the comments)
https://pastebin.com/7Z8Gnenr
version: '3.1'
# Generated on 23-04-2018
services:
  alfresco:
    image: openmbeeguest/mms-repo:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseConcMarkSweepGC"
    depends_on:
      - postgresql
      - activemq
      - elastic
    networks:
      - internal
    ports:
      - 8080:8080
    volumes:
      - alf_logs:/usr/local/tomcat/logs
      - alf_data:/opt/alf_data
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  solr:
    image: openmbeeguest/mms-solr:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=200"
    depends_on:
      - alfresco
    networks:
      - internal
    volumes:
      - solr_logs:/usr/local/tomcat/logs/
      - solr_content_store:/opt/solr/ContentStore
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  activemq:
    image: openmbeeguest/mms-activemq:3.2.4-SNAPSHOT
    ports:
      #I changed these Windows side ports
      - 61615:61616
      - 61617:61614
      - 8162:8161
      # ORIGINAL
      #- 61616:61616
      #- 61614:61614
      #- 8161:8161
    volumes:
      - activemq-data-volume:/data/activemq
      - activemq-log-volume:/var/log/activemq
      - activemq-conf-volume:/opt/activemq/conf
    environment:
      - ACTIVEMQ_ADMIN_LOGIN=admin
      - ACTIVEMQ_ADMIN_PASSWORD=admin
    networks:
      - internal
  elastic:
    image: openmbeeguest/mms-elastic:3.2.4-SNAPSHOT
    environment:
      CLEAN: 'false'
    ports:
      - 9200:9200
    volumes:
      - elastic-data-volume:/usr/share/elasticsearch/data
    networks:
      - internal
  postgresql:
    image: openmbeeguest/mms-postgres:3.2.4-SNAPSHOT
    volumes:
      - pgsql_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=alfresco
      - POSTGRES_PASSWORD=alfresco
      - POSTGRES_DB=alfresco
    networks:
      - internal
volumes:
  alf_logs:
  alf_data:
  solr_logs:
  solr_content_store:
  pgsql_data:
  activemq-data-volume:
  activemq-log-volume:
  activemq-conf-volume:
  elastic-data-volume:
  nginx-external-volume:
networks:
  internal:
Any help would be greatly appreciated!
Do you have the logs from the elasticsearch container to share? Without that it's hard to tell why it's exiting.
One thing that's tripped me up repeatedly though is the vm.max_map_count setting - the default in Docker is too low for elasticsearch to function, so it's a good first thing to check.
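For reference, on a Linux Docker host it can be checked and raised like this (Docker Desktop / Toolbox on Windows needs the setting applied inside its VM instead); 262144 is the minimum Elasticsearch requires:

# Check the current value
sysctl vm.max_map_count
# Raise it for the running system
sudo sysctl -w vm.max_map_count=262144
# Persist the setting across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf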
I'm using jwilder/nginx-proxy with a separate docker-compose.yaml. It looks like this:
proxy:
  image: jwilder/nginx-proxy
  restart: always
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - ./nginx/conf.d/proxy.conf:/etc/nginx/conf.d/proxy.conf:ro
    - /Users/marcin/Docker/local_share/certificates:/etc/nginx/certs:ro
  ports:
    - "80:80"
    - "443:443"
  container_name: proxy
I've been using it for quite a long time, and it works fine when my project docker-compose.yaml looks like this:
web:
  build: /Users/marcin/Docker/definitions/php-nginx/php-7.1-ubuntu
  volumes:
    - /Users/marcin/Docker/projects/test.local/html/:/usr/share/nginx/html/
    - /Users/marcin/Docker/projects/test.local/nginx/conf.d/:/etc/nginx/conf.d/
    - /Users/marcin/Docker/projects/test.local/nginx/log/:/var/log/nginx/
    - /Users/marcin/Docker/projects/test.local/supervisor/conf.d/:/etc/supervisor/conf.d/
    - /Users/marcin/Docker/projects/test.local/supervisor/log/:/var/log/supervisor/
    - /Users/marcin/Docker/projects/test.local/cron/:/root/.cron/
    - /Users/marcin/Docker/local_share/:/root/.local_share/
    - /Users/marcin/Docker/local_share/certificates/:/usr/share/nginx/certificates/
  working_dir: /usr/share/nginx/html/
  links:
    - db
  container_name: test.php
  hostname: test.local
  ports:
    - "336:22"
    - "8081:80"
    - "18080:443"
  environment:
    - VIRTUAL_HOST=test.local
    - CERT_NAME=default
    - HTTPS_METHOD=noredirect
db:
  build: /Users/marcin/Docker/definitions/mysql/5.7
  environment:
    - MYSQL_ROOT_PASSWORD=pass
    - MYSQL_DATABASE=
    - MYSQL_USER=
    - MYSQL_PASSWORD=
  expose:
    - 3306
  volumes:
    - /Users/marcin/Docker/projects/test.local/mysql/data/:/var/lib/mysql/
    - /Users/marcin/Docker/projects/test.local/mysql/conf.d/:/etc/mysql/conf.d/source
    - /Users/marcin/Docker/projects/test.local/mysql/log/:/var/log/mysql/
  ports:
    - "33060:3306"
  container_name: test.db
  hostname: test.local
I can access the site without any problem using http://test.local or https://test.local, which is expected.
However, I had to update my file structure to the newer version:
version: "3.2"
services:
web:
build: /Users/marcin/Docker/definitions/php-nginx/php-7.1-ubuntu
volumes:
- /Users/marcin/Docker/projects/test.local/html/:/usr/share/nginx/html/
- /Users/marcin/Docker/projects/test.local/nginx/conf.d/:/etc/nginx/conf.d/
- /Users/marcin/Docker/projects/test.local/nginx/log/:/var/log/nginx/
- /Users/marcin/Docker/projects/test.local/supervisor/conf.d/:/etc/supervisor/conf.d/
- /Users/marcin/Docker/projects/test.local/supervisor/log/:/var/log/supervisor/
- /Users/marcin/Docker/projects/test.local/cron/:/root/.cron/
- /Users/marcin/Docker/local_share/:/root/.local_share/
- /Users/marcin/Docker/local_share/certificates/:/usr/share/nginx/certificates/
working_dir: /usr/share/nginx/html/
links:
- db
container_name: test.php
hostname: test.local
ports:
- "336:22"
- "8081:80"
- "18080:443"
environment:
- VIRTUAL_HOST=test.local
- CERT_NAME=default
- HTTPS_METHOD=noredirect
db:
build: /Users/marcin/Docker/definitions/mysql/5.7
environment:
- MYSQL_ROOT_PASSWORD=pass
- MYSQL_DATABASE=
- MYSQL_USER=
- MYSQL_PASSWORD=
expose:
- 3306
volumes:
- /Users/marcin/Docker/projects/test.local/mysql/data/:/var/lib/mysql/
- /Users/marcin/Docker/projects/test.local/mysql/conf.d/:/etc/mysql/conf.d/source
- /Users/marcin/Docker/projects/test.local/mysql/log/:/var/log/mysql/
ports:
- "33060:3306"
container_name: test.db
hostname: test.local
and after that it no longer seems to work. I can access the site using IP and port without a problem, but I can no longer use the domain to access it. When I try, I'm getting:
503 Service Temporarily Unavailable
nginx/1.13.8
And this is definitely from the jwilder nginx-proxy (and not the nginx in the project).
So the question is: where should I put the environment variables to make this work? It seems that, placed as they are at the moment, they are not read by the proxy.
The 503 indicates that the nginx-proxy container can see your container running in docker and it has the configuration needed for nginx to route traffic to it, but it is unable to connect to that container over the docker network. For container-to-container networking to work, you need to have a common docker network defined. You should first run the following to create a network:
docker network create proxy
Then update your nginx-proxy compose file to use the network (this should also be upgraded to at least a v2 syntax, I've gone with 3.2 to match your other file):
version: "3.2"
networks:
proxy:
external: true
services:
proxy:
image: jwilder/nginx-proxy
restart: always
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./nginx/conf.d/proxy.conf:/etc/nginx/conf.d/proxy.conf:ro
- /Users/marcin/Docker/local_share/certificates:/etc/nginx/certs:ro
ports:
- "80:80"
- "443:443"
container_name: proxy
networks:
- proxy
And then do something similar for your application:
version: "3.2"
networks:
proxy:
external: true
services:
web:
build: /Users/marcin/Docker/definitions/php-nginx/php-7.1-ubuntu
volumes:
- /Users/marcin/Docker/projects/test.local/html/:/usr/share/nginx/html/
- /Users/marcin/Docker/projects/test.local/nginx/conf.d/:/etc/nginx/conf.d/
- /Users/marcin/Docker/projects/test.local/nginx/log/:/var/log/nginx/
- /Users/marcin/Docker/projects/test.local/supervisor/conf.d/:/etc/supervisor/conf.d/
- /Users/marcin/Docker/projects/test.local/supervisor/log/:/var/log/supervisor/
- /Users/marcin/Docker/projects/test.local/cron/:/root/.cron/
- /Users/marcin/Docker/local_share/:/root/.local_share/
- /Users/marcin/Docker/local_share/certificates/:/usr/share/nginx/certificates/
working_dir: /usr/share/nginx/html/
links:
- db
container_name: test.php
hostname: test.local
ports:
- "336:22"
- "8081:80"
- "18080:443"
environment:
- VIRTUAL_HOST=test.local
- CERT_NAME=default
- HTTPS_METHOD=noredirect
networks:
- proxy
- default
db:
build: /Users/marcin/Docker/definitions/mysql/5.7
environment:
- MYSQL_ROOT_PASSWORD=pass
- MYSQL_DATABASE=
- MYSQL_USER=
- MYSQL_PASSWORD=
expose:
- 3306
volumes:
- /Users/marcin/Docker/projects/test.local/mysql/data/:/var/lib/mysql/
- /Users/marcin/Docker/projects/test.local/mysql/conf.d/:/etc/mysql/conf.d/source
- /Users/marcin/Docker/projects/test.local/mysql/log/:/var/log/mysql/
ports:
- "33060:3306"
container_name: test.db
hostname: test.local
If you were upgrading from the v1 syntax (without a version defined), you will find that Docker switches from running everything on the same network without DNS to running each compose project or stack on a dedicated network with DNS-based service discovery. To run your apps on other networks, you'll need to configure that explicitly. In the above example, only the web container was placed on the proxy network, and both containers are on the default network created for this project or stack.
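A quick way to verify the wiring after docker-compose up, using the container names from the files above:

# The networks list should show "proxy" plus the per-project default
docker network ls
# Show which networks a given container is attached to
docker inspect test.php --format '{{json .NetworkSettings.Networks}}'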