tl;dr: I want every container in this stack to use the same IP and MAC address and be on my local network, but I need help figuring out how to do it.
For starters, I'm new to Docker and docker-compose. I made a Docker stack for my Plex servers (three of them: one for movies by general category and TV shows, one for music, and one for holiday content), with each one having its own IP address and MAC on my local network. Now I want to make a second stack for some of my media management tools, but this time I'd like the whole stack to use a single IP address and MAC address, and I haven't been able to figure out how to do it so that it works.
This is running on a QNAP NAS (TVS1282v3/QTS), but I am working through the CLI, as I learned that if I run docker-compose through Container Station it won't create the network for me.
version: '2.4'
services:
  Sonarr:
    image: linuxserver/sonarr
    container_name: Sonarr
    environment:
      - TZ=AMERICA/Denver
      - name= Sonarr
    volumes:
      - /share/MediaManagement/Sonarr/config:/config:rw
      - /share/MediaManagement/rip:/rip:rw
      - /share/Plex:/Plex:rw
    ports:
      - 8989:8989
    restart: unless-stopped
  Radarr:
    image: linuxserver/radarr
    container_name: Radarr
    environment:
      - TZ=AMERICA/Denver
      - name= Radarr
    volumes:
      - /share/MediaManagement/Radarr/config:/config:rw
      - /share/MediaManagement/rip:/rip:rw
      - /share/Plex:/Plex:rw
    ports:
      - 7878:7878
    restart: unless-stopped
  Lidarr:
    image: linuxserver/lidarr
    container_name: Lidarr
    hostname: Lidarr
    environment:
      - TZ=AMERICA/Denver
      - name= Lidarr
    volumes:
      - /share/MediaManagement/Lidarr/config:/config:rw
      - /share/MediaManagement/rip:/rip:rw
      - /share/Plex:/Plex:rw
    ports:
      - 8686:8686
    restart: unless-stopped
    networks:
      qnet-static:
        ipv4_address: 192.168.2.100
        mac_address: 05:4A:AA:08:51:43
networks:
  qnet-static:
    driver: qnet
    ipam:
      driver: qnet
      options:
        iface: "eth0"
      config:
        - subnet: 192.168.2.0/23
          gateway: 192.168.2.1
I have also tried setting it up the way it was done in my Plex compose file, where I put
services:
  NameOfService:
    mac_address: 05:4A:AA:08:51:43
    networks:
      qnet-static:
        ipv4_address: 192.168.2.100
  ....
networks: ##At the end, not in each service##
  qnet-static:
    driver: qnet
    ipam:
      driver: qnet
      options:
        iface: "eth0"
      config:
        - subnet: 192.168.2.0/23
          gateway: 192.168.2.1
in each service, but only the first container worked.
I also tried this at one point, but still no luck; its syntax is wrong:
networks:
  qnet-static:
    driver: qnet
    ipam:
      driver: qnet
      options:
        iface: "eth0"
      config:
        - subnet: 192.168.2.0/23
          gateway: 192.168.2.250
          ipv4_address: 192.168.2.100
          mac_address: 05:4A:AA:08:51:43
Any help would be appreciated, as I am probably just missing a minor piece.
Delete absolutely all of the networks: settings in the file. Don't try to manually assign IP addresses to containers or configure their MAC addresses.
Your containers will be accessible on your host's IP address, using the first ports: number for each. As far as other hosts on your network are concerned, the processes in containers will be indistinguishable from other services not running in containers.
You also do not need to manually set container_name: or hostname: in most circumstances. There are additional details of the Compose networking environment in Networking in Compose in the Docker documentation, though this mostly focuses on connections between containers. You usually don't need to think about the container-private IP address or (especially) the artificial MAC address within the container network environment.
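As a rough sketch of what that looks like for your stack with all of the custom networking removed (only Sonarr shown here; Radarr and Lidarr follow the same pattern with their own ports and config folders):
version: '2.4'
services:
  sonarr:
    image: linuxserver/sonarr
    environment:
      - TZ=America/Denver
    volumes:
      - /share/MediaManagement/Sonarr/config:/config:rw
      - /share/MediaManagement/rip:/rip:rw
      - /share/Plex:/Plex:rw
    ports:
      - 8989:8989   # other machines on the LAN reach Sonarr at http://<NAS-IP>:8989
    restart: unless-stopped
Compose creates a default network for the stack on its own, so there is no top-level networks: block at all; the only addressing anyone outside the stack needs is the NAS's IP plus the published port.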
Related
What should I do to access container via IP ADDRESS not PORT?
Can I define any kind of network or bridge?
I am using standard Docker Desktop with WSL2 on Windows 10.
My setup requires exposing the containers as machines, not as ports.
I tried this, but it does not work.
The compose file is fairly complex, but you can use a plain Ubuntu image to test it; it does not matter which image you use.
networks:
  cassandra:
volumes:
  cassandra-data-1:
    driver: local
  cassandra-log-1:
    driver: local
  cassandra-data-2:
    driver: local
  cassandra-log-2:
    driver: local
  cassandra-data-3:
    driver: local
  cassandra-log-3:
    driver: local
  cassandra-data-4:
    driver: local
  cassandra-log-4:
    driver: local
services:
  cassandra-1:
    image: cassandra:4.0.5
    container_name: cassandra-1
    hostname: dc-cassandra-1
    mem_limit: 2g
    networks:
      - cassandra
    environment: &cassandra_environment
      MAX_HEAP_SIZE: 1G
      HEAP_NEWSIZE: 100M
      CASSANDRA_SEEDS: dc-cassandra-1,dc-cassandra-2,dc-cassandra-3,dc-cassandra-4
      CASSANDRA_CLUSTER_NAME: dptr-v2
      CASSANDRA_DC: dptr-v2-dc0
      CASSANDRA_RACK: dptr-v2-r0
    volumes:
      - cassandra-data-1:/var/lib/cassandra
      - cassandra-log-1:/var/log/cassandra
  cassandra-2:
    image: cassandra:4.0.5
    container_name: cassandra-2
    hostname: dc-cassandra-2
    mem_limit: 2g
    networks:
      - cassandra
    environment: *cassandra_environment
    volumes:
      - cassandra-data-2:/var/lib/cassandra
      - cassandra-log-2:/var/log/cassandra
  cassandra-3:
    image: cassandra:4.0.5
    container_name: cassandra-3
    hostname: dc-cassandra-3
    mem_limit: 2g
    networks:
      - cassandra
    environment: *cassandra_environment
    volumes:
      - cassandra-data-3:/var/lib/cassandra
      - cassandra-log-3:/var/log/cassandra
  cassandra-4:
    image: cassandra:4.0.5
    container_name: cassandra-4
    hostname: dc-cassandra-4
    mem_limit: 2g
    networks:
      - cassandra
    environment: *cassandra_environment
    volumes:
      - cassandra-data-4:/var/lib/cassandra
      - cassandra-log-4:/var/log/cassandra
You can't access Linux containers by IP address from a Windows host. (...or on a MacOS host, or if you're using a VM-based Docker solution, or if the client isn't on the same host as the containers, or...) Access the containers through their published ports: instead. There's no need to ever look up a container's Docker-internal IP address.
The Docker Desktop documentation notes, under "Known limitations for all platforms":
Per-container IP addressing is not possible: The docker bridge network is not reachable from the host. However if you are a Windows user, it works with Windows containers.
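For example, if what you need from the Windows host is the CQL port, one sketch (assuming Cassandra's default native-transport port 9042, and picking 9043 arbitrarily for the second node; only two nodes shown) is to add ports: entries like these to the services in your file and then connect to localhost:
services:
  cassandra-1:
    ports:
      - "9042:9042"   # from the host: localhost:9042 -> cassandra-1
  cassandra-2:
    ports:
      - "9043:9042"   # from the host: localhost:9043 -> cassandra-2
Container-to-container traffic (the seed list, gossip, and so on) keeps using the cassandra network and the dc-cassandra-* hostnames, so nothing else in the file has to change.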
I am trying to use Docker on my Debian server. There are several sites using the Django framework. Every project runs in its own container with gunicorn, a single nginx container works as a reverse proxy, and data is stored in a mariadb container. Everything works correctly. Now I need to add the Zabbix monitoring system to the server, so I use the zabbix-server-mysql image as the Zabbix backend and the zabbix-web-nginx-mysql image as the frontend. The backend runs successfully, but the frontend fails with errors such as "can't bind to 0.0.0.0:80, port is already allocated", and nginx refuses connections to the domains. As I understand it, zabbix-web-nginx-mysql creates another nginx container, and that causes the problems. Is there a right way to use the Zabbix images alongside an existing nginx container?
I have an nginx reverse proxy installed on the host, which I use to proxy requests into the containers. I have a working configuration for Zabbix in Docker with the following compose file (I have omitted the environment variables).
Port 80 of the Zabbix web application is published on another port, which is the same one my nginx proxy_pass points to. Here is the configuration:
version: '2'
services:
  zabbix-server4:
    container_name: zabbix-server4
    image: zabbix/zabbix-server-mysql:alpine-4.0.5
    user: root
    networks:
      zbx_net:
        aliases:
          - zabbix-server4
          - zabbix-server4-mysql
        ipv4_address: 172.16.238.5
  zabbix-web4:
    container_name: zabbix-web4
    image: zabbix/zabbix-web-nginx-mysql:alpine-4.0.5
    ports:
      - 127.0.0.1:11011:80
    links:
      - zabbix-server4
    networks:
      zbx_net:
        aliases:
          - zabbix-web4
          - zabbix-web4-nginx-alpine
          - zabbix-web4-nginx-mysql
        ipv4_address: 172.16.238.10
  zabbix-agent4:
    container_name: zabbix-agent4
    image: zabbix/zabbix-agent:alpine-4.0.5
    links:
      - zabbix-server4
    networks:
      zbx_net:
        aliases:
          - zabbix-agent4
        ipv4_address: 172.16.238.15
networks:
  zbx_net:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
          gateway: 172.16.238.1
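The key detail here is the ports: line on zabbix-web4: binding it to 127.0.0.1:11011 keeps the Zabbix frontend off port 80 entirely, so it cannot collide with the nginx that is already running on the host, and the host nginx simply proxy_passes the Zabbix domain to http://127.0.0.1:11011.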
I have 2 docker containers and I would like to deploy them using Ansible. They are deployed in the same host. Also, I need these 2 docker containers to communicate with each other via socketing either it's a TCP/IP socket or UNIX domain socket. However, I do not know what is the best practice to allow them to do so.
You could check the network settings available to docker-compose:
https://docs.docker.com/compose/networking/
I have a running Zabbix configured this way, in which I had to set up static IPs in order to keep all three containers linked reliably, even after a server restart.
Network details are at the bottom.
version: '2'
services:
  zabbix-server4:
    container_name: zabbix-server4
    image: zabbix/zabbix-server-mysql:alpine-4.0.5
    networks:
      zbx_net:
        aliases:
          - zabbix-server4
        ipv4_address: 172.16.238.5
  zabbix-web4:
    container_name: zabbix-web4
    image: zabbix/zabbix-web-nginx-mysql:alpine-4.0.5
    ports:
      - 127.0.0.1:11011:80
    links:
      - zabbix-server4
    networks:
      zbx_net:
        aliases:
          - zabbix-web4
        ipv4_address: 172.16.238.10
  zabbix-agent4:
    container_name: zabbix-agent4
    image: zabbix/zabbix-agent:alpine-4.0.5
    links:
      - zabbix-server4
    networks:
      zbx_net:
        aliases:
          - zabbix-agent4
        ipv4_address: 172.16.238.15
networks:
  zbx_net:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
          gateway: 172.16.238.1
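If fixed addresses are not a hard requirement, a smaller sketch for the two-container case works too (the service names app and worker and the Ubuntu image are placeholders, not anything from your setup): put both services on the same compose network and connect over TCP using the service name, which Docker's embedded DNS resolves to the right container.
version: '2'
services:
  app:
    image: ubuntu
    command: sleep infinity   # placeholder command so the test container stays up
    networks:
      - app_net
  worker:
    image: ubuntu
    command: sleep infinity
    networks:
      - app_net               # from worker, the hostname "app" reaches the app container
networks:
  app_net:
    driver: bridge
For a UNIX domain socket the network is not involved at all: mount the same named volume into both services and have them open the socket file inside it.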
Is it possible to specify a static IP for a Docker container using docker-compose?
eth-java:
  image:
    registry-intl.ap-southeast-1.aliyuncs.com/einnity/coin-ethereum:1.0
  container_name:
    eth-java
  ports:
    - "8002:8198"
  networks:
    my-network:
      ipv4_address: 192.168.1.21
And this container will communicate with
eth:
  image:
    ethereum/client-go
  container_name:
    eth
  ports:
    - "8545:8545"
    - "30303:30303"
  networks:
    my-network:
      ipv4_address: 192.168.1.17
  volumes:
    - /storage/eth/rinkeby:/root/.ethereum/rinkeby/
and the network settings are:
networks:
  my-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.1.0/24
I run docker exec -it eth-java /bin/bash, then use curl to call the RPC endpoint at 192.168.1.17:8545, and it doesn't work. If I don't hardcode the IP and use the dynamically assigned one instead, it works. I just hate using a dynamic IP, because every time the container restarts it gets another IP and I have to change the value in my DB each time.
I have the following docker-compose.yml (non-essential parts left out):
zabbix-server:
  image: zabbix/zabbix-server-pgsql:alpine-4.0-latest
  ports:
    - "10051:10051"
  networks:
    zbx_net_backend:
      aliases:
        - zabbix-server
zabbix-agent:
  image: zabbix/zabbix-agent:alpine-4.0-latest
  ports:
    - "10050:10050"
  networks:
    zbx_net_backend:
      aliases:
        - zabbix-agent
networks:
  zbx_net_backend:
    driver: bridge
    internal: true
    ipam:
      driver: default
      config:
        - subnet: 172.16.239.0/24
Out of the box, the zabbix-server looks for the zabbix-agent on its own localhost:10050. Is it possible to make port 10050 of the zabbix-agent available on localhost:10050 of the zabbix-server?
I know that I can configure the zabbix-agent hostname in the zabbix-server via "Configuration" -> "Hosts" -> edit -> "DNS Name", but I want to avoid that if possible and achieve it through the docker-compose.yml instead.
One option is to make the target address configurable where it is used (maybe it already is?) and set the target to be the relevant docker compose service:
zabbix-server:
  image: zabbix/zabbix-server-pgsql:alpine-4.0-latest
  environment:
    AGENT_URL: zabbix-agent
  ports:
    - "10051:10051"
zabbix-agent:
  image: zabbix/zabbix-agent:alpine-4.0-latest
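This works because Compose puts every service in the file on a shared project network where each service name resolves to that container's address, so the server can reach the agent at zabbix-agent:10050 without any static IPs; the AGENT_URL variable above is only an illustration of where such a setting would go.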
It is not possible to make the localhost loopback device point to another container. But if you really want this, you can connect them both to the host network.
Like this:
zabbix-server:
  image: zabbix/zabbix-server-pgsql:alpine-4.0-latest
  ports:
    - "10051:10051"
  network_mode: "host"
zabbix-agent:
  image: zabbix/zabbix-agent:alpine-4.0-latest
  ports:
    - "10050:10050"
  network_mode: "host"
Doing this will allow you to address the zabbix-agent at localhost:10050 from the zabbix-server container.
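One caveat with this approach: when network_mode: host is set, the ports: mappings in the file are ignored and the processes bind directly to the host's interfaces, so the server and agent also occupy ports 10051 and 10050 on the host itself and you lose the network isolation between containers.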