I have a docker-compose.yaml file as below, and I want to make sure that port 6379 on the server is not exposed to the internet (only to the first container, "web").
If I just remove the "expose" entry from the "redis:" section, will that keep Redis working internally but block it from being accessed from outside?
version: '2'
services:
web:
image: myimage/version1:1.4.5
restart: always
ports:
- 8082:3000
container_name: web
networks:
- web
- default
expose:
- '3000'
labels:
- 'traefik.docker.network=web'
- 'traefik.enable=true'
- 'traefik.basic.frontend.rule=Host:abcd.com'
- 'traefik.basic.port=3000'
- 'traefik.basic.protocol=http'
depends_on:
- redis
redis:
image: redis:4.0.5-alpine
restart: always
ports:
- 6379:6379
expose:
- 6379
command: ["redis-server", "--appendonly", "yes"]
hostname: redis
networks:
- web
volumes:
- redis-data:/data
networks:
web:
external: true
volumes:
redis-data:
expose only makes the port accessible to linked services and containers on the same network, which is what you want. You should remove the ports mapping from the redis service, so that port 6379 is not bound to the host.
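For illustration, a minimal sketch of the corrected redis service (everything else from the question unchanged): the ports mapping is gone, so nothing is published on the host, while expose stays as documentation; web still reaches Redis at redis:6379 over the shared network.
  redis:
    image: redis:4.0.5-alpine
    restart: always
    # no "ports:" entry, so 6379 is never bound on the host
    expose:
      - 6379   # informational; containers on the same network can connect anyway
    command: ["redis-server", "--appendonly", "yes"]
    hostname: redis
    networks:
      - web
    volumes:
      - redis-data:/data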
Related
I have a VPS on which I want to deploy multiple web applications (I've already read posts on this, but they cover the case where the proxy routes directly to the application containers). I want each web application to have its own nginx that routes to its subdomains, plus some static websites related to it. I have two docker-compose files. My network looks like the following image.
My nginx reverse proxy should be responsible for routing the domains to the respective per-app nginx container. I'm not sure whether this is possible (that's why I'm asking for help). If someone can suggest a better approach, I'm open to suggestions. Below is the configuration for my nginx proxy container and the docker-compose files of the other web apps. I've used the jwilder nginx-proxy image.
NGINX PROXY Docker Compose File
version: "3.7"
services:
nginx-proxy:
image: jwilder/nginx-proxy:alpine
container_name: nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
networks:
default:
external:
name: nginx-proxy
WEB APPLICATION 1 docker-compose.yml
version: "3.7"
networks:
default:
external:
name: nginx-proxy
yoda-network:
driver: bridge
services:
adminer:
container_name: ${APP_NAME}_adminer
image: adminer
depends_on:
- mysql
expose:
- 8080
environment:
VIRTUAL_HOST: yodaledger.com
VIRTUAL_PORT: 8080
networks:
- yoda-network
python:
container_name: ${APP_NAME}_python
image: python:3.6
command: bash -c "pip3 install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
volumes:
- ${APP_PATH}var/www/api.yodaledger.com/yodaledger_backend:/app
- ${APP_PATH}var/www/static.yodaledger.com:/app/static
depends_on:
- mysql
working_dir: /app
environment:
- PYTHONUNBUFFERED=1
- MYSQL_HOST=mysql
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
networks:
- yoda-network
nginx:
container_name: ${APP_NAME}_nginx
image: nginx:alpine
expose:
- 443
- 80
volumes:
- ${APP_PATH}var/www:/var/www
- ${APP_PATH}var/log:/var/log/nginx
- ${APP_PATH}var/ssl:/var/ssl
- ${APP_PATH}etc/nginx:/etc/nginx
- /tmp/${APP_NAME}/nginx:/tmp
depends_on:
- python
environment:
VIRTUAL_PORT: 443
VIRTUAL_HOST: yodaledger.com,api.yodaledger.com,app.yodaledger.com,static.yodaledger.com
networks:
- yoda-network
- default
mysql:
container_name: ${APP_NAME}_mysql
image: mariadb:latest
#ports:
# - "3306:3306"
volumes:
- ${APP_PATH}var/mysql:/var/lib/mysql
- ${APP_PATH}etc/mysql:/etc/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- TZ=Europe/Paris
networks:
- yoda-network
WEB APPLICATION 2
It looks the same as web application 1.
I have a very simple docker-compose config:
version: '3.5'
services:
consul:
image: consul:latest
hostname: "consul"
command: "consul agent -server -bootstrap-expect 1 -client=0.0.0.0 -ui -data-dir=/tmp"
environment:
SERVICE_53_IGNORE: 'true'
SERVICE_8301_IGNORE: 'true'
SERVICE_8302_IGNORE: 'true'
SERVICE_8600_IGNORE: 'true'
SERVICE_8300_IGNORE: 'true'
SERVICE_8400_IGNORE: 'true'
SERVICE_8500_IGNORE: 'true'
ports:
- 8300:8300
- 8400:8400
- 8500:8500
- 8600:8600/udp
networks:
- backend
registrator:
command: -internal consul://consul:8500
image: gliderlabs/registrator:master
depends_on:
- consul
links:
- consul
volumes:
- /var/run/docker.sock:/tmp/docker.sock
networks:
- backend
image_tagger:
build: image_tagger
image: image_tagger:latest
ports:
- 8000
networks:
- backend
mongo:
image: mongo
command: [--auth]
ports:
- "27017:27017"
restart: always
networks:
- backend
volumes:
- /mnt/data/mongo-data:/data/db
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: qwerty
postgres:
image: postgres:11.1
# ports:
# - "5432:5432"
networks:
- backend
volumes:
- ./postgres-data:/var/lib/postgresql/data
- ./scripts:/docker-entrypoint-initdb.d
restart: always
environment:
POSTGRES_PASSWORD: qwerty
POSTGRES_DB: ttt
SERVICE_5432_NAME: postgres
SERVICE_5432_ID: postgres
networks:
backend:
name: backend
(and some other services)
I also configured dnsmasq on the host so I can access containers by internal name.
I've spent a couple of days on this, but still can't make it stable:
1. Very often some services simply don't get registered by registrator (sometimes only 5 out of 15 show up).
2. Very often containers are registered with the wrong IP address: the container info shows one (correct) address, while Consul shows another (incorrect) one. So when I try to reach a service by a name like myservice.service.consul, I end up at the wrong container.
3. Sometimes resolution fails entirely, even when containers are registered with the correct IP.
Do I have any mistakes in my config?
So, at least for now, I was able to fix this by passing the -resync 15 flag to registrator. I'm not sure it's the correct solution, but it works.
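For reference, this is roughly how it looks in the compose file from the question; only the command changes, and -resync makes registrator re-register all running containers with Consul every N seconds:
  registrator:
    image: gliderlabs/registrator:master
    # periodically re-sync all containers with consul to recover missed events
    command: -internal -resync 15 consul://consul:8500
    depends_on:
      - consul
    links:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - backend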
I'm trying to share a container on my local network, so that I can access it from another machine on the same network. I followed this tutorial (section "With macvlan devices") and managed to share a simple web container and access it from another host.
But the container I want to share is a little more sophisticated, because it communicates with other containers on the host over an internal network.
I tried to bind my existing container, created in my docker-compose file, but I can't access it. Can you help me, or tell me where I'm going wrong?
This is my docker-compose file:
version: "2"
services:
baseimage:
container_name: baseimage
image: base
build:
context: ./
dockerfile: Dockerfile.base
  web:
    container_name: web
    image: web
    env_file:
      - .env
    build:
      context: ./
      dockerfile: Dockerfile.web
extra_hosts:
- dev.api.exemple.com:127.0.0.1
- dev.admin.exemple.com:127.0.0.1
- dev.www.exemple.com:127.0.0.1
ports:
- 80:80
- 443:443
volumes:
- ./code:/ass
- /var/run/docker.sock:/var/run/docker.sock
tty: true
dns:
- 8.8.8.8
- 8.8.4.4
links:
- mysql
- redis
- elasticsearch
- baseimage
networks:
devbox:
ipv4_address: 172.20.0.2
cron:
container_name: cron
image: cron
build:
context: ./
dockerfile: Dockerfile.cron
volumes:
- ./code:/ass
tty: true
dns:
- 8.8.8.8
- 8.8.4.4
links:
- web:dev.api.exemple.com
- mysql
- redis
- elasticsearch
- baseimage
networks:
devbox:
ipv4_address: 172.20.0.3
mysql:
container_name: mysql
image: mysql:5.6
ports:
- 3306:3306
networks:
devbox:
ipv4_address: 172.20.0.4
redis:
container_name: redis
image: redis:3.2.4
ports:
- 6379:6379
networks:
devbox:
ipv4_address: 172.20.0.5
elasticsearch:
container_name: elastic
image: elasticsearch:2.3.4
environment:
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
volumes:
- ./es_data:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
devbox:
ipv4_address: 172.20.0.6
chromedriver:
container_name: chromedriver
image: robcherry/docker-chromedriver:latest
privileged: true
ports:
- 4444:4444
environment:
- CHROMEDRIVER_WHITELISTED_IPS='172.20.0.2'
- CHROMEDRIVER_URL_BASE='wd/hub'
- CHROMEDRIVER_EXTRA_ARGS='--ignore-certificate-errors'
networks:
devbox:
ipv4_address: 172.20.0.7
links:
- web:dev.www.exemple.com
networks:
devbox:
driver: bridge
driver_opts:
com.docker.network.enable_ipv6: "false"
ipam:
driver: default
config:
- subnet: 172.20.0.0/16
gateway: 172.20.0.1
Create an external network, then assign both the external network and the devbox network to web. Web would then be publicly accessible via the external network's public IP address and would communicate with the internal services over the devbox network.
Will post a working example ASAP.
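In the meantime, here is a minimal sketch of the idea. The network name pub_net, the parent interface eth0, and all addresses are assumptions; adjust them to your LAN:
# created once outside compose, e.g.:
# docker network create -d macvlan --subnet=192.168.1.0/24 \
#   --gateway=192.168.1.1 -o parent=eth0 pub_net
version: "2"
services:
  web:
    # ...existing web configuration from the compose file above...
    networks:
      devbox:
        ipv4_address: 172.20.0.2
      pub_net:
        ipv4_address: 192.168.1.210   # address other machines on the LAN would use
networks:
  devbox:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
  pub_net:
    external: true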
I use the Windows Docker Toolbox and I'm confused about what I'm missing. I want to use redis-commander (https://www.npmjs.com/package/redis-commander) with the redis image from Docker Hub.
I used the docker-compose.yml from the above link:
version: '3'
services:
redis:
container_name: redis
hostname: redis
image: redis
redis-commander:
container_name: redis-commander
hostname: redis-commander
image: rediscommander/redis-commander:latest
build: .
restart: always
environment:
- REDIS_HOSTS=local:redis:6379
ports:
- 8081:8081
Now I can start the app at the Toolbox IP on port 8081.
There it says: undefined redis server: local:redis:6379:0
Since I'm using the Toolbox, I assume I have to put the correct IP somewhere in the compose file.
Running redis alone with $ docker run --name some-redis -d redis
works, and I can reach the server under local:6379.
But what does REDIS_HOSTS=local:redis:6379 mean?
Any help setting this up correctly?
To fix that, you need to connect redis and redis-commander, like this:
version: "3.9"
services:
redis:
image: redis:6.2.5
command: redis-server --requirepass ${REDIS_PASSWORD}
volumes:
- redis:/var/lib/redis
- redis-config:/usr/local/etc/redis/redis.conf
ports:
- ${REDIS_PORT}:6379
networks:
- redis-network
redis-commander:
image: rediscommander/redis-commander:latest
restart: always
environment:
REDIS_HOSTS: redis
REDIS_HOST: redis
      REDIS_PORT: 6379
REDIS_PASSWORD: ${REDIS_PASSWORD}
HTTP_USER: root
HTTP_PASSWORD: root
ports:
- 8081:8081
networks:
- redis-network
volumes:
redis:
redis-config:
networks:
redis-network:
driver: bridge
Or like this:
version: "3.9"
services:
redis:
image: redis:6.2.5
command: redis-server --requirepass ${REDIS_PASSWORD}
volumes:
- redis:/var/lib/redis
- redis-config:/usr/local/etc/redis/redis.conf
ports:
- ${REDIS_PORT}:6379
links:
- redis-commander
redis-commander:
image: rediscommander/redis-commander:latest
restart: always
environment:
REDIS_HOSTS: redis
REDIS_HOST: redis
      REDIS_PORT: 6379
REDIS_PASSWORD: ${REDIS_PASSWORD}
HTTP_USER: root
HTTP_PASSWORD: root
ports:
- 8081:8081
volumes:
redis:
redis-config:
I think you missed linking your two containers.
The redis container needs a port plus a link, and redis-commander needs the correct environment.
You can only use the container name for the link/environment.
version: '3'
services:
redis:
container_name: redis
hostname: redis
image: redis
ports:
- "6379:6379"
    links:
      - redis-commander
redis-commander:
container_name: redis-commander
hostname: redis-commander
image: rediscommander/redis-commander:latest
build: .
restart: always
environment:
- REDIS_HOSTS=redis
ports:
- "8081:8081"
REDIS_HOSTS=local:redis:6379 means it will create a config entry that connects to the Docker container with the hostname redis on port 6379, and the connection will be named (labeled) local.
REDIS_HOSTS is used when you want to have multiple connections, separated by commas.
There are some ways to write REDIS_HOSTS as mentioned in the documentation.
hostname
label:hostname
label:hostname:port
label:hostname:port:dbIndex
label:hostname:port:dbIndex:password
Example:
Let's say you want to use redis for two apps called app1 and app2, where app1 uses db index 1 and app2 uses db index 2. REDIS_HOSTS would then look like:
REDIS_HOSTS=app1:redis:6379:1,app2:redis:6379:2
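Wired into a compose file, that could look like this (a sketch reusing the service names from the question):
  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      # two labeled connections to the same redis service, using db 1 and db 2
      - REDIS_HOSTS=app1:redis:6379:1,app2:redis:6379:2
    ports:
      - 8081:8081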
A working docker-compose.yml you could use (with a network added):
version: '3'
services:
redis:
container_name: redis
hostname: redis
image: redis
networks:
- redis_network
redis-commander:
container_name: redis-commander
hostname: redis-commander
image: rediscommander/redis-commander:latest
build: .
restart: always
environment:
- REDIS_HOSTS=local:redis:6379
ports:
- 8081:8081
networks:
- redis_network
networks:
redis_network:
driver: bridge
I'm trying to set up a virtual host for my Docker container. What's wrong in my code? Thanks in advance!
On localhost:8000 it works perfectly, but when I try to access it through http://borgesmelo.local/ I get ERR_NAME_NOT_RESOLVED. What could be missing?
This is my docker-compose.yml:
version: '3.3'
services:
borgesmelo_db:
image: mariadb:latest
container_name: borgesmelo_db
restart: always
volumes:
- ./mariadb/:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: My#159#Sql
MYSQL_PASSWORD: My#159#Sql
borgesmelo_ws:
image: richarvey/nginx-php-fpm:latest
container_name: borgesmelo_ws
restart: always
volumes:
- ./public/:/var/www/html
ports:
- "8000:80"
borgesmelo_wp:
image: wordpress:latest
container_name: borgesmelo_wp
volumes:
- ./public/:/var/www/html
restart: always
environment:
VIRTUAL_HOST: borgesmelo.local
WORDPRESS_DB_HOST: borgesmelo_db:3306
WORDPRESS_DB_PASSWORD: My#159#Sql
depends_on:
- borgesmelo_db
- borgesmelo_ws
borgesmelo_phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
container_name: borgesmelo_phpmyadmin
links:
- borgesmelo_db
ports:
- "8001:80"
environment:
- PMA_ARBITRARY=1
borgesmelo_vh:
image: jwilder/nginx-proxy
container_name: nginx-proxy
ports:
- "8002:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
networks:
default:
external:
name: nginx-proxy
This is my hosts file (/etc/hosts) [macOS]
#DOCKER
127.0.0.1:8000 borgesmelo.local
The hosts file doesn't support ports, since it's for name lookup only. So you would have to set your hosts file to:
127.0.0.1 borgesmelo.local
Then access your application with http://borgesmelo.local:8000.
If you are listening on port 8000 because you already have something else on port 80, then consider using nginx as a reverse proxy: you can then route to different applications based on the server_name, and access multiple applications through port 80. If you're dealing with Docker containers, consider looking into Traefik as a reverse proxy. A sketch follows below.
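As a sketch of that idea using the jwilder/nginx-proxy image already present in the question's compose file (assuming borgesmelo.local points at 127.0.0.1 in /etc/hosts): move the proxy to port 80 and let VIRTUAL_HOST do the routing, so no :8000 suffix is needed.
  borgesmelo_vh:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"   # the proxy owns port 80 on the host
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  borgesmelo_ws:
    image: richarvey/nginx-php-fpm:latest
    container_name: borgesmelo_ws
    volumes:
      - ./public/:/var/www/html
    # no host port mapping needed; nginx-proxy reaches it over the shared network
    environment:
      VIRTUAL_HOST: borgesmelo.local   # nginx-proxy routes this hostname here
With that in place, http://borgesmelo.local resolves via /etc/hosts and is served through the proxy on the default port 80.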