I'm just learning Docker and all of its goodness like Swarm and Compose. My intention is to create a Redis cluster in Docker Swarm.
Here is my compose file:
version: '3'
services:
  redis:
    image: redis:alpine
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 60000","--cluster-require-full-coverage no"]
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
    ports:
      - 6379:6379
      - 16379:16379
networks:
  host:
    external: true
If I add the networks: host entry, none of the containers start; if I remove it, the containers start, but when I try to connect it throws an error like CLUSTERDOWN Hash slot not served.
Specs:
Windows 10
Docker Swarm nodes: 2 VirtualBox VMs running Alpine Linux 3.7.0 with two networks
VirtualBox VM network:
  eth0 - NAT
  eth1 - VirtualBox host-only network
Docker running inside the above VMs: 17.12.1-ce
This seems to work for me; the network config is from here:
version: '3.6'
services:
  redis:
    image: redis:5.0.3
    command:
      - "redis-server"
      - "--cluster-enabled yes"
      - "--cluster-config-file nodes.conf"
      - "--cluster-node-timeout 5000"
      - "--appendonly yes"
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
    networks:
      hostnet: {}
networks:
  hostnet:
    external: true
    name: host
Then run, for example:
echo yes | docker run -i --rm --entrypoint redis-cli redis:5.0.3 --cluster create 1.2.3.4{1,2,3}:6379 --cluster-replicas 0
The shell brace expansion 1.2.3.4{1,2,3} expands to 1.2.3.41, 1.2.3.42 and 1.2.3.43; replace these with your own node IPs.
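Since the service runs in global mode on the host network, each Redis instance listens on its swarm node's own address. A hedged sketch for collecting those addresses, assuming it is run on a manager node:
# Print the address of every swarm node; substitute these into --cluster create
docker node ls -q | xargs docker node inspect -f '{{ .Status.Addr }}'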
For anyone struggling with this: unfortunately it can't be done via docker-compose.yml yet. Refer to this issue: Start Redis cluster #79. The only way to do it is to get the IP addresses and ports of all the nodes running Redis, and then run this command on any of the swarm nodes.
# Gives you all the command help
docker run --rm -it thesobercoder/redis-trib
# This creates all master nodes
docker run --rm -it thesobercoder/redis-trib create 172.17.8.101:7000 172.17.8.102:7000 172.17.8.103:7000
# This creates slave nodes. Note that this requires at least six running nodes
docker run --rm -it thesobercoder/redis-trib create --replicas 1 172.17.8.101:7000 172.17.8.102:7000 172.17.8.103:7000 172.17.8.104:7000 172.17.8.105:7000 172.17.8.106:7000
Here is a repo for a Redis cluster:
https://github.com/jay-johnson/docker-redis-cluster/blob/master/docker-compose.yml
Related
I am able to connect to another container's network stack by running this command:
docker run -it --net=container:<container name> <container image> bash
How can something like that be achieved in Docker Compose?
version: "3.8"
services:
client:
image: ubuntu
networks:
- mynet
attachedclient:
image: ubuntu
networks:
- <???>
networks:
mynet:
What should be added at ???, or somewhere else, so that the attachedclient container connects to the client container's network stack?
Simply mynet: containers on the same network can communicate with each other.
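For example, filling in the placeholder (a minimal sketch of the compose file from the question):
version: "3.8"
services:
  client:
    image: ubuntu
    networks:
      - mynet
  attachedclient:
    image: ubuntu
    networks:
      - mynet   # same network, so the other container is reachable by the hostname 'client'
networks:
  mynet:
If you literally need to share the client container's network stack (the exact equivalent of --net=container:...), Compose also supports network_mode: "service:client" on attachedclient; note that it cannot be combined with the networks key.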
I'm trying to create a service that must join an existing stack, so I force the compose file to use the same network.
Sure enough, my network persists:
docker network ls
NETWORK ID      NAME          DRIVER    SCOPE
oiaxfyeil72z    ELK_default   overlay   swarm
okhs1e1wu73y    ELK_elk       overlay   swarm
My docker-compose.yml
version: '3.3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash-oss:7.5.1
    ports:
      - "5000:5000"
      - "9600:9600"
    volumes:
      - '/share/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro'
      - '/share/elk/logstash/pipeline/:/usr/share/logstash/pipeline/:ro'
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
networks:
  elk:
    external:
      name: ELK_elk
The other services were created with:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - '/share/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro'
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    volumes:
      - '/share/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro'
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
networks:
  elk:
    driver: overlay
Check with docker stack services:
docker stack services ELK
ID              NAME                MODE         REPLICAS   IMAGE                                                 PORTS
c0rux6mdvzq3    ELK_kibana          replicated   1/1        docker.elastic.co/kibana/kibana:7.5.1                 *:5601->5601/tcp
j824fd0blxdp    ELK_elasticsearch   replicated   1/1        docker.elastic.co/elasticsearch/elasticsearch:7.5.1   *:9200->9200/tcp, *:9300->9300/tcp
Then I try to bring the service up with docker-compose up -d. The service is not created; instead, this error is produced:
docker-compose up -d
WARNING: Some services (logstash) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Removing tmp_logstash_1
Recreating bbf503fc3eaa_tmp_logstash_1 ... error
ERROR: for bbf503fc3eaa_tmp_logstash_1 Cannot start service logstash: Could not attach to network ELK_elk: rpc error: code = PermissionDenied desc = network ELK_elk not manually attachable
ERROR: for logstash Cannot start service logstash: Could not attach to network ELK_elk: rpc error: code = PermissionDenied desc = network ELK_elk not manually attachable
ERROR: Encountered errors while bringing up the project.
The issue is due to the fact that the elk network is defined as an overlay network. That is a Docker Swarm feature, so docker-compose does not know how to deal with it.
Instead of using docker-compose up, you need to deploy a Docker Swarm stack:
docker stack deploy -c docker-compose.yml <stack_name>
You can refer to the Docker documentation for more info:
https://docs.docker.com/network/
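If the logstash service really must run as a standalone container via docker-compose up, the overlay network would have to be created as attachable; a hedged sketch, assuming you can recreate the ELK_elk network and redeploy the existing stack against it as external:
docker network create -d overlay --attachable ELK_elk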
For some reason, non-manager nodes only see overlay networks that have active containers using them. Run this on the non-manager node:
docker run --rm -d --name dummy busybox sleep 3600   # Run a dummy container (sleep keeps it alive)
docker network connect [OVERLAY_NETWORK] dummy       # Connect it to the overlay network
Now the network is available on the non-manager node and you can run:
docker compose -f compose.yaml -p project up -d
docker stop dummy   # Remove the dummy container (--rm deletes it on stop)
Compose file:
networks:
  db:
    external: true
    driver: overlay
I have a docker-compose file with three services (Solr, PostgreSQL and pgAdmin), all sharing a Docker network.
version: '2'
services:
  solr:
    image: solr:7.7.2
    ports:
      - '8983:8983'
    networks:
      primus-dev:
        ipv4_address: 10.105.1.101
    volumes:
      - data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - primus
      - /opt/solr/server/solr/configsets/sample_techproducts_configs
    environment:
      - SOLR_HEAP=2048m
    logging:
      options:
        max-size: 5m
  db:
    image: "postgres:11.5"
    container_name: "primus_postgres"
    ports:
      - "5432:5432"
    networks:
      primus-dev:
        ipv4_address: 10.105.1.102
    volumes:
      - primus_dbdata:/var/lib/postgres/data
    environment:
      - POSTGRES_DB=primus75
      - POSTGRES_USER=primus
      - POSTGRES_PASSWORD=primstav
  pgadm4:
    image: "dpage/pgadmin4"
    networks:
      primus-dev:
        ipv4_address: 10.105.1.103
    ports:
      - "3050:80"
    volumes:
      - /home/nils/docker-home:/var/docker-home
    environment:
      - PGADMIN_DEFAULT_EMAIL=nils.weinander@kulturit.se
      - PGADMIN_DEFAULT_PASSWORD=dev
networks:
  primus-dev:
    driver: bridge
    ipam:
      config:
        - subnet: 10.105.1.0/24
volumes:
  data:
  primus_dbdata:
This works just fine after docker-compose up (at least pgAdmin can talk to PostgreSQL).
But then I have a script (actually a make target, but that's not the point here) which builds, runs and deletes a container with docker-compose run:
docker-compose run -e HOME=/app -e PYTHONPATH=/app/server -u 0 --rm backend \
bash -c 'cd /app/server && python tools/reindex_mp.py -s -n'
This does not work, as reindex_mp.py cannot reach Solr on 10.105.1.101: the one-shot container is not on the same Docker network. So, is there a way to tell docker-compose to use a named network with docker-compose run? docker run has a --network option, but that is not available for docker-compose.
You can create a Docker network outside your docker-compose file and use that network while running services in docker-compose:
docker network create my-custom-created-network
Now, inside your docker-compose file, use this network like this:
services:
  serv1:
    image: img
    networks:
      - my-custom-created-network
networks:
  my-custom-created-network:
    external: true
The network creation example above creates a bridge network. To access containers across hosts, use an overlay network instead, as sketched below.
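A hedged variant for the cross-host case, assuming Docker is running in swarm mode (the --attachable flag allows standalone docker run containers to join):
docker network create -d overlay --attachable my-custom-created-network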
You can also use the network created by docker-compose itself and connect containers to that network. Docker creates a default network for each compose file, and services without any explicit network configuration use that default network. You can find the network name by executing this command:
docker network ls
Use the appropriate network name when starting a container, like this:
docker run [options] --network <network-name> <image-name>
Note: containers on the same network are reachable by container name, so you can use names instead of IPs.
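Applied to the question: if primus-dev is created manually and declared external in each compose file involved, one-off containers started with docker-compose run land on the same network as Solr. A hedged sketch, assuming the backend service lives in a separate compose file and that the subnet moves into the network create command:
docker network create --subnet 10.105.1.0/24 primus-dev
and in each compose file:
networks:
  primus-dev:
    external: true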
I need to connect to an FTP server from my my_go_app container.
When I do it from docker compose, I can do it with:
apk add lftp
lftp -d ftp://julien:test@ftpd-server
and it connects well
But when I try to run my container via docker run, I can no longer connect to the FTP server.
Here is the command I use:
docker run --name my_go_app --rm -v volume:/go my_go_app:exp --network=my_go_app_network --env-file ./test.env
Here is the working docker-compose.yml:
version: '3'
services:
  my_go_app:
    image: my_go_app:exp
    volumes:
      - ./volume:/go
    networks:
      my_go_app_network:
    env_file:
      - test.env
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    ports:
      - "21:21"
      - "30000-30009:30000-30009"
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "julien"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/www/julien"
    restart: on-failure
    networks:
      my_go_app_network:
networks:
  my_go_app_network:
    external: true
EDIT:
I added the network as external and created it manually with:
docker network create my_go_app_network
Now it appears that my_go_app is part of the default bridge network:
docker inspect my_go_app -f "{{json .NetworkSettings.Networks }}"
{"bridge":{"IPAMConfig":null,"Links":null,"Aliases":null,"NetworkID":"62b2dff15ff00d5cd56c966cc562b8013d06f18750e3986db530fbb4dc4cfba7","EndpointID":"6d0a81a83cdf639ff13635f0a38eeb962075cd729181b7c60fadd43446e13607","Gateway":"172.17.0.1","IPAddress":"172.17.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:11:00:02","DriverOpts":null}}
docker network ls
NETWORK ID NAME DRIVER SCOPE
62b2dff15ff0 bridge bridge local
f33ab34dd91d host host local
ee2d604d6604 none null local
61a661c82262 my_go_app_network bridge local
What am I missing?
Your network my_go_app_network should be declared as "external", otherwise Compose will create a network called "<project_name>_my_go_app_network", and your Go app would not be on the same network as the FTP server.
(I guess you created my_go_app_network manually, so your docker run did not throw a "network not found" error.)
EDIT
You put the arguments in the wrong order. The image name has to come last; anything after it is treated as the command for the container. Try:
docker run --name my_go_app --rm -v volume:/go --network=my_go_app_network --env-file ./test.env my_go_app:exp
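To verify, the inspect command from the question should now list my_go_app_network instead of bridge:
docker inspect my_go_app -f "{{json .NetworkSettings.Networks }}"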
I am quite new to Docker and Consul and am now trying to set up a local Consul cluster consisting of 3 dockerized nodes. I am using the progrium/consul Docker image, and I went through the whole tutorial and the examples described there.
The cluster works fine until it comes to restarting/rebooting.
Here is my docker-compose.yml:
---
node1:
  command: "-server -bootstrap-expect 3 -ui-dir /ui -advertise 10.67.203.217"
  image: progrium/consul
  ports:
    - "10.67.203.217:8300:8300"
    - "10.67.203.217:8400:8400"
    - "10.67.203.217:8500:8500"
    - "10.67.203.217:8301:8301"
    - "10.67.203.217:8302:8302"
    - "10.67.203.217:8301:8301/udp"
    - "10.67.203.217:8302:8302/udp"
    - "172.17.42.1:53:53/udp"
  restart: always
node2:
  command: "-server -join 10.67.203.217"
  image: progrium/consul
  restart: always
node3:
  command: "-server -join 10.67.203.217"
  image: progrium/consul
  restart: always
registrator:
  command: "consul://10.67.203.217:8500"
  image: "progrium/registrator:latest"
  restart: always
I get messages like:
[ERR] raft: Failed to make RequestVote RPC to 172.17.0.103:8300: dial tcp 172.17.0.103:8300: no route to host
which is obviously because of the new IPs my nodes 2 and 3 get after the restart. So is it possible to prevent this? I read about linking and environment variables, but it seems those variables are also not updated after a reboot.
I had the same problem until I read that there is an ARP table caching problem when you restart a containerized Consul node.
As far as I know, there are two workarounds:
Run your container using --net=host
Clear the ARP table before you restart your container: docker run --net=host --privileged --rm cap10morgan/conntrack -F
The owner (Jeff Lindsay) told me that they are redesigning the entire container with this fix built in; no timelines, unfortunately.
Source: https://github.com/progrium/docker-consul/issues/26
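A hedged sketch of the first workaround applied to the compose file above (v1 syntax uses the net key; later compose versions call it network_mode). With host networking the published ports are dropped, and since every Consul server then binds the same host ports, each node has to run on a separate host:
node2:
  command: "-server -join 10.67.203.217"
  image: progrium/consul
  net: host
  restart: always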