Consul on Docker: -advertise flag ignored

Hi, I have configured a cluster with two nodes (two VirtualBox VMs). The cluster starts correctly, but the -advertise flag seems to be ignored by Consul.
vm1 (app) ip 192.168.20.10
vm2 (web) ip 192.168.20.11
docker-compose vm1 (app)
version: '2'
services:
  appconsul:
    build: consul/
    ports:
      - 192.168.20.10:8300:8300
      - 192.168.20.10:8301:8301
      - 192.168.20.10:8301:8301/udp
      - 192.168.20.10:8302:8302
      - 192.168.20.10:8302:8302/udp
      - 192.168.20.10:8400:8400
      - 192.168.20.10:8500:8500
      - 172.32.0.1:53:53/udp
    hostname: node_1
    command: -server -advertise 192.168.20.10 -bootstrap-expect 2 -ui-dir /ui
    networks:
      net-app:
  appregistrator:
    build: registrator/
    hostname: app
    command: consul://192.168.20.10:8500
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    depends_on:
      - appconsul
    networks:
      net-app:
networks:
  net-app:
    driver: bridge
    ipam:
      config:
        - subnet: 172.32.0.0/24
docker-compose vm2 (web)
version: '2'
services:
  webconsul:
    build: consul/
    ports:
      - 192.168.20.11:8300:8300
      - 192.168.20.11:8301:8301
      - 192.168.20.11:8301:8301/udp
      - 192.168.20.11:8302:8302
      - 192.168.20.11:8302:8302/udp
      - 192.168.20.11:8400:8400
      - 192.168.20.11:8500:8500
      - 172.33.0.1:53:53/udp
    hostname: node_2
    command: -server -advertise 192.168.20.11 -join 192.168.20.10
    networks:
      net-web:
  webregistrator:
    build: registrator/
    hostname: web
    command: consul://192.168.20.11:8500
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    depends_on:
      - webconsul
    networks:
      net-web:
networks:
  net-web:
    driver: bridge
    ipam:
      config:
        - subnet: 172.33.0.0/24
After startup there is no error about the advertise flag, but the services are registered with the private IP of the internal Docker network instead of the IP declared in -advertise (192.168.20.10 and 192.168.20.11). Any idea?
Attaching the log of node_1; node_2's log is essentially the same:
appconsul_1 | ==> WARNING: Expect Mode enabled, expecting 2 servers
appconsul_1 | ==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
appconsul_1 | ==> Starting raft data migration...
appconsul_1 | ==> Starting Consul agent...
appconsul_1 | ==> Starting Consul agent RPC...
appconsul_1 | ==> Consul agent running!
appconsul_1 | Node name: 'node_1'
appconsul_1 | Datacenter: 'dc1'
appconsul_1 | Server: true (bootstrap: false)
appconsul_1 | Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
appconsul_1 | Cluster Addr: 192.168.20.10 (LAN: 8301, WAN: 8302)
appconsul_1 | Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
appconsul_1 | Atlas: <disabled>
appconsul_1 |
appconsul_1 | ==> Log data will now stream in as it occurs:
appconsul_1 |
appconsul_1 | 2017/06/13 14:57:24 [INFO] raft: Node at 192.168.20.10:8300 [Follower] entering Follower state
appconsul_1 | 2017/06/13 14:57:24 [INFO] serf: EventMemberJoin: node_1 192.168.20.10
appconsul_1 | 2017/06/13 14:57:24 [INFO] serf: EventMemberJoin: node_1.dc1 192.168.20.10
appconsul_1 | 2017/06/13 14:57:24 [INFO] consul: adding server node_1 (Addr: 192.168.20.10:8300) (DC: dc1)
appconsul_1 | 2017/06/13 14:57:24 [INFO] consul: adding server node_1.dc1 (Addr: 192.168.20.10:8300) (DC: dc1)
appconsul_1 | 2017/06/13 14:57:25 [ERR] agent: failed to sync remote state: No cluster leader
appconsul_1 | 2017/06/13 14:57:25 [ERR] agent: failed to sync changes: No cluster leader
appconsul_1 | 2017/06/13 14:57:26 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
appconsul_1 | 2017/06/13 14:57:48 [ERR] agent: failed to sync remote state: No cluster leader
appconsul_1 | 2017/06/13 14:58:13 [ERR] agent: failed to sync remote state: No cluster leader
appconsul_1 | 2017/06/13 14:58:22 [INFO] serf: EventMemberJoin: node_2 192.168.20.11
appconsul_1 | 2017/06/13 14:58:22 [INFO] consul: adding server node_2 (Addr: 192.168.20.11:8300) (DC: dc1)
appconsul_1 | 2017/06/13 14:58:22 [INFO] consul: Attempting bootstrap with nodes: [192.168.20.10:8300 192.168.20.11:8300]
appconsul_1 | 2017/06/13 14:58:23 [WARN] raft: Heartbeat timeout reached, starting election
appconsul_1 | 2017/06/13 14:58:23 [INFO] raft: Node at 192.168.20.10:8300 [Candidate] entering Candidate state
appconsul_1 | 2017/06/13 14:58:23 [WARN] raft: Remote peer 192.168.20.11:8300 does not have local node 192.168.20.10:8300 as a peer
appconsul_1 | 2017/06/13 14:58:23 [INFO] raft: Election won. Tally: 2
appconsul_1 | 2017/06/13 14:58:23 [INFO] raft: Node at 192.168.20.10:8300 [Leader] entering Leader state
appconsul_1 | 2017/06/13 14:58:23 [INFO] consul: cluster leadership acquired
appconsul_1 | 2017/06/13 14:58:23 [INFO] consul: New leader elected: node_1
appconsul_1 | 2017/06/13 14:58:23 [INFO] raft: pipelining replication to peer 192.168.20.11:8300
appconsul_1 | 2017/06/13 14:58:23 [INFO] consul: member 'node_1' joined, marking health alive
appconsul_1 | 2017/06/13 14:58:23 [INFO] consul: member 'node_2' joined, marking health alive
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_solr_1:8983'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8302'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8302:udp'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8301'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8500'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8300'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'consul'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_mysql_1:3306'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8400'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:53:udp'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8301:udp'
Thanks for any reply.
UPDATE:
I tried removing the networks section from the compose file, but I had the same problem. I resolved it by switching to compose file format v1; this configuration works:
compose vm1 (app)
appconsul:
  build: consul/
  ports:
    - 192.168.20.10:8300:8300
    - 192.168.20.10:8301:8301
    - 192.168.20.10:8301:8301/udp
    - 192.168.20.10:8302:8302
    - 192.168.20.10:8302:8302/udp
    - 192.168.20.10:8400:8400
    - 192.168.20.10:8500:8500
    - 172.32.0.1:53:53/udp
  hostname: node_1
  command: -server -advertise 192.168.20.10 -bootstrap-expect 2 -ui-dir /ui
appregistrator:
  build: registrator/
  hostname: app
  command: consul://192.168.20.10:8500
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  links:
    - appconsul
compose vm2 (web)
webconsul:
  build: consul/
  ports:
    - 192.168.20.11:8300:8300
    - 192.168.20.11:8301:8301
    - 192.168.20.11:8301:8301/udp
    - 192.168.20.11:8302:8302
    - 192.168.20.11:8302:8302/udp
    - 192.168.20.11:8400:8400
    - 192.168.20.11:8500:8500
    - 172.33.0.1:53:53/udp
  hostname: node_2
  command: -server -advertise 192.168.20.11 -join 192.168.20.10
webregistrator:
  build: registrator/
  hostname: web
  command: consul://192.168.20.11:8500
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  links:
    - webconsul

The problem is the compose file version: v2 and v3 show the same behaviour; it works only with compose file format v1.
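For anyone hitting the same thing: a quick way to see which address each agent actually advertises, and which address Registrator stored for a service, is to ask the Consul HTTP API directly. A minimal sketch, assuming the API is reachable on the VM address as in the compose files above:
# list cluster members with the address each agent advertises (run inside the consul container, e.g. via docker exec)
consul members
# inspect the local agent's configuration and look for the advertise address
curl http://192.168.20.10:8500/v1/agent/self
# see which address a registered service was stored with
curl http://192.168.20.10:8500/v1/catalog/service/consul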

Related

Rabbitmq on docker: Application mnesia exited with reason: stopped

I'm trying to launch Rabbitmq with docker-compose alongside DRF and Celery.
Here's my docker-compose file. Everything else works fine, except for rabbitmq:
version: '3.7'
services:
  drf:
    build: ./drf
    entrypoint: ["/bin/sh","-c"]
    command:
      - |
        python manage.py migrate
        python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./drf/:/usr/src/drf/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=base_test
  redis:
    image: redis:alpine
    volumes:
      - redis:/data
    ports:
      - "6379:6379"
    depends_on:
      - drf
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
      - ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
    networks:
      - net_1
  celery_worker:
    command: sh -c "wait-for redis:3000 && wait-for drf:8000 -- celery -A base-test worker -l info"
    depends_on:
      - drf
      - db
      - redis
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
    hostname: celery_worker
    image: app-image
    networks:
      - net_1
    restart: on-failure
  celery_beat:
    command: sh -c "wait-for redis:3000 && wait-for drf:8000 -- celery -A mysite beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler"
    depends_on:
      - drf
      - db
      - redis
    hostname: celery_beat
    image: app-image
    networks:
      - net_1
    restart: on-failure
networks:
  net_1:
    driver: bridge
volumes:
  postgres_data:
  redis:
And here's what happens when I launch it. Can someone please help me find the problem? I can't even follow the instructions and read the generated dump file, because the rabbitmq container exits after the error.
rabbitmq | Starting broker...2021-04-05 16:49:58.330 [info] <0.273.0>
rabbitmq | node : rabbit@0e652f57b1b3
rabbitmq | home dir : /var/lib/rabbitmq
rabbitmq | config file(s) : /etc/rabbitmq/rabbitmq.conf
rabbitmq | cookie hash : ZPam/SOKy2dEd/3yt0OlaA==
rabbitmq | log(s) : <stdout>
rabbitmq | database dir : /var/lib/rabbitmq/mnesia/rabbit@0e652f57b1b3
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: list of feature flags found:
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] drop_unroutable_metric
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] empty_basic_get_metric
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] implicit_default_bindings
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] maintenance_mode_status
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [ ] quorum_queue
rabbitmq | 2021-04-05 16:50:09.543 [info] <0.273.0> Feature flags: [ ] user_limits
rabbitmq | 2021-04-05 16:50:09.545 [info] <0.273.0> Feature flags: [ ] virtual_host_metadata
rabbitmq | 2021-04-05 16:50:09.546 [info] <0.273.0> Feature flags: feature flag states written to disk: yes
rabbitmq | 2021-04-05 16:50:10.844 [info] <0.273.0> Running boot step pre_boot defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.845 [info] <0.273.0> Running boot step rabbit_core_metrics defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.846 [info] <0.273.0> Running boot step rabbit_alarm defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.854 [info] <0.414.0> Memory high watermark set to 2509 MiB (2631391641 bytes) of 6273 MiB (6578479104 bytes) total
rabbitmq | 2021-04-05 16:50:10.864 [info] <0.416.0> Enabling free disk space monitoring
rabbitmq | 2021-04-05 16:50:10.864 [info] <0.416.0> Disk free limit set to 50MB
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.273.0> Running boot step code_server_cache defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.273.0> Running boot step file_handle_cache defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.419.0> Limiting to approx 1048479 file handles (943629 sockets)
rabbitmq | 2021-04-05 16:50:10.873 [info] <0.420.0> FHC read buffering: OFF
rabbitmq | 2021-04-05 16:50:10.873 [info] <0.420.0> FHC write buffering: ON
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.273.0> Running boot step worker_pool defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.372.0> Will use 4 processes for default worker pool
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.372.0> Starting worker pool 'worker_pool' with 4 processes in it
rabbitmq | 2021-04-05 16:50:10.876 [info] <0.273.0> Running boot step database defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.899 [info] <0.273.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
rabbitmq | 2021-04-05 16:50:10.900 [info] <0.273.0> Successfully synced tables from a peer
rabbitmq | 2021-04-05 16:50:10.908 [info] <0.44.0> Application mnesia exited with reason: stopped
rabbitmq |
rabbitmq | 2021-04-05 16:50:10.908 [info] <0.44.0> Application mnesia exited with reason: stopped
rabbitmq | 2021-04-05 16:50:10.908 [error] <0.273.0>
rabbitmq | 2021-04-05 16:50:10.908 [error] <0.273.0> BOOT FAILED
rabbitmq | BOOT FAILED
rabbitmq | ===========
rabbitmq | Error during startup: {error,
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> ===========
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> Error during startup: {error,
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> {schema_integrity_check_failed,
rabbitmq | {schema_integrity_check_failed,
rabbitmq | [{table_attributes_mismatch,rabbit_queue,
rabbitmq | 2021-04-05 16:50:10.910 [error] <0.273.0> [{table_attributes_mismatch,rabbit_queue,
rabbitmq | 2021-04-05 16:50:10.910 [error] <0.273.0> [name,durable,auto_delete,exclusive_owner,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> recoverable_slaves,policy,operator_policy,
rabbitmq | [name,durable,auto_delete,exclusive_owner,
rabbitmq | arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> gm_pids,decorators,state,policy_version,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> slave_pids_pending_shutdown,vhost,options],
rabbitmq | 2021-04-05 16:50:10.912 [error] <0.273.0> [name,durable,auto_delete,exclusive_owner,
rabbitmq | 2021-04-05 16:50:10.912 [error] <0.273.0> arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> recoverable_slaves,policy,operator_policy,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> gm_pids,decorators,state,policy_version,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> slave_pids_pending_shutdown,vhost,options,
rabbitmq | recoverable_slaves,policy,operator_policy,
rabbitmq | gm_pids,decorators,state,policy_version,
rabbitmq | slave_pids_pending_shutdown,vhost,options],
rabbitmq | [name,durable,auto_delete,exclusive_owner,
rabbitmq | arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | recoverable_slaves,policy,operator_policy,
rabbitmq | gm_pids,decorators,state,policy_version,
rabbitmq | slave_pids_pending_shutdown,vhost,options,
rabbitmq | type,type_state]}]}}
rabbitmq | 2021-04-05 16:50:10.914 [error] <0.273.0> type,type_state]}]}}
rabbitmq | 2021-04-05 16:50:10.916 [error] <0.273.0>
rabbitmq |
rabbitmq | 2021-04-05 16:50:11.924 [info] <0.272.0> [{initial_call,{application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}},{pid,<0.272.0>},{registered_name,[]},{error_info
,{exit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_
pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_
pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},{rabbit,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,138}]},{proc_l
ib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}},{ancestors,[<0.271.0>]},{message_queue_len,1},{messages,[{'EXIT',<0.273.0>,normal}]},{links,[<0.271.0>,<0.44.0>]},{dictionary,[]},{trap_exit,true},{
status,running},{heap_size,610},{stack_size,28},{reductions,534}], []
rabbitmq | 2021-04-05 16:50:11.924 [error] <0.272.0> CRASH REPORT Process <0.272.0> with 0 neighbours exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name
,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name
,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,t
ype_state]}]},...} in application_master:init/4 line 138
rabbitmq | 2021-04-05 16:50:11.924 [info] <0.44.0> Application rabbit exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},...}
rabbitmq | 2021-04-05 16:50:11.925 [info] <0.44.0> Application rabbit exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},...}
rabbitmq | {"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusi
ve_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusi
ve_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},{rabbit,start,
[normal,[]]}}}"}
rabbitmq | Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusiv
e_owner,arg
rabbitmq |
rabbitmq | Crash dump is being written to: /var/log/rabbitmq/erl_crash.dump...done
rabbitmq exited with code 0
I've managed to make it work by removing container_name and volumes from the rabbitmq section of the docker-compose file. Still, it would be nice to have an explanation of this behavior.
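For reference, a minimal sketch of what the working rabbitmq service looks like with those two keys removed (my reconstruction, not the poster's exact file; a named volume could replace the host bind mounts if message persistence is still needed):
rabbitmq:
  image: rabbitmq:3-management-alpine
  ports:
    - 5672:5672
    - 15672:15672
  networks:
    - net_1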

Two RabbitMQ instances on one server with docker-compose: how to change the default ports

I would like to run two instances of RabbitMQ on one server, all created with docker-compose. The question is how to change the default node and management ports. I have tried setting them via ports, but it didn't help. When I faced the same scenario with Mongo, I used command: mongod --port CUSTOM_PORT. What would be the analogous command here for RabbitMQ?
Here is my config for the second instance of rabbitmq.
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq_test'
    ports:
      - 5673:5673
      - 15673:15673
    volumes:
      - ./rabbitmq/data/:/var/lib/rabbitmq/
      - ./rabbitmq/log/:/var/log/rabbitmq
    networks:
      - rabbitmq_go_net_test
    environment:
      RABBITMQ_DEFAULT_USER: 'test'
      RABBITMQ_DEFAULT_PASS: 'test'
      HOST_PORT_RABBIT: 5673
      HOST_PORT_RABBIT_MGMT: 15673
networks:
  rabbitmq_go_net_test:
    driver: bridge
And the outcome is below
Management plugin: HTTP (non-TLS) listener started on port 15672
rabbitmq_test | 2021-03-18 11:32:42.553 [info] <0.738.0> Ready to start client connection listeners
rabbitmq_test | 2021-03-18 11:32:42.553 [info] <0.44.0> Application rabbitmq_prometheus started on node rabbit@fb24038613f3
rabbitmq_test | 2021-03-18 11:32:42.557 [info] <0.1035.0> started TCP listener on [::]:5672
We can see that there are still ports 5672 and 15672 exposed instead of 5673 and 15673.
EDIT
ports:
  - 5673:5672
  - 15673:15672
I have tried the above configuration as well, with no success:
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.797.0> Management plugin: HTTP (non-TLS) listener started on port 15672
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.903.0> Statistics database started.
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.902.0> Starting worker pool 'management_worker_pool' with 3 processes in it
rabbitmq_test | 2021-03-18 14:08:56.168 [info] <0.44.0> Application rabbitmq_management started on node rabbit@9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.208 [info] <0.44.0> Application prometheus started on node rabbit@9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.916.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.44.0> Application rabbitmq_prometheus started on node rabbit@9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.738.0> Ready to start client connection listeners
rabbitmq_test | 2021-03-18 14:08:56.216 [info] <0.1035.0> started TCP listener on [::]:5672
I have found the solution: I provided a configuration file to the RabbitMQ container.
loopback_users.guest = false
listeners.tcp.default = 5673
default_pass = test
default_user = test
management.tcp.port = 15673
And a working docker-compose file
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq_test'
    ports:
      - 5673:5673
      - 15673:15673
    volumes:
      - ./rabbitmq/data/:/var/lib/rabbitmq/
      - ./rabbitmq/log/:/var/log/rabbitmq
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.conf
    networks:
      - rabbitmq_go_net_test
networks:
  rabbitmq_go_net_test:
    driver: bridge
A working example with rabbitmq:3.9.13-management-alpine
docker/rabbitmq/rabbitmq.conf:
loopback_users.guest = false
listeners.tcp.default = 5673
default_pass = guest
default_user = guest
default_vhost = /
docker/rabbitmq/Dockerfile:
FROM rabbitmq:3.9.13-management-alpine
COPY --chown=rabbitmq:rabbitmq rabbitmq.conf /etc/rabbitmq/rabbitmq.conf
EXPOSE 4369 5671 5672 5673 15691 15692 25672 25673
docker-compose.yml:
...
  rabbitmq:
    #image: "rabbitmq:3-management-alpine"
    build: './docker/rabbitmq/'
    container_name: my-rabbitmq
    environment:
      RABBITMQ_DEFAULT_VHOST: /
    ports:
      - 5673:5672
      - 15673:15672
    networks:
      - default
...
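As a sanity check (my addition, not part of the original answer), you can confirm which ports the broker is actually listening on inside the container, and that the published management port answers from the host:
# show the listeners RabbitMQ actually opened inside the container
docker exec my-rabbitmq rabbitmq-diagnostics listeners
# hit the management API through the published host port (credentials from rabbitmq.conf above)
curl -u guest:guest http://localhost:15673/api/overview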

Why can't I expose Portainer agent port 9001?

I'm trying to expose Portainer agent port 9001 on a Swarm cluster in order to reach it from an external Portainer instance; the agent is deployed in 'global' mode.
The following docker-compose file works:
version: "3.2"
services:
agent:
image: "portainer/agent:1.1.2"
environment:
AGENT_CLUSTER_ADDR: tasks.agent
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker/volumes:/var/lib/docker/volumes
networks:
- priv_portainer
deploy:
mode: global
networks:
priv_portainer:
driver: overlay
Then, when I try to expose port 9001, the stack starts but there are errors in the logs and Portainer fails to connect to these agents:
version: "3.2"
services:
agent:
image: "portainer/agent:1.1.2"
environment:
AGENT_CLUSTER_ADDR: tasks.agent
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker/volumes:/var/lib/docker/volumes
ports:
- "9001:9001"
networks:
- priv_portainer
deploy:
mode: global
networks:
priv_portainer:
driver: overlay
Even with another port:
ports:
  - "19001:9001"
And even with a port mapping that has nothing to do with the agent:
ports:
  - "12345:54321"
EDIT
Logs from the stack:
portainer_agent_agent.0.13cjb851d9me@ignochtulelk02d | 2018/11/26 05:28:50 [INFO] serf: EventMemberJoin: b6040a1ccc2a 10.255.0.13
portainer_agent_agent.0.13cjb851d9me@ignochtulelk02d | 2018/11/26 05:28:50 [INFO] - Starting Portainer agent version 1.1.2 on 0.0.0.0:9001 (cluster mode: true)
portainer_agent_agent.0.13cjb851d9me@ignochtulelk02d | 2018/11/26 05:28:50 [INFO] serf: EventMemberJoin: c6c277e3f60b 10.255.0.11
portainer_agent_agent.0.13cjb851d9me@ignochtulelk02d | 2018/11/26 05:28:51 [ERR] memberlist: Failed to send gossip to 10.255.0.11:7946: write udp [::]:7946->10.255.0.11:7946: sendto: operation not permitted
portainer_agent_agent.0.13cjb851d9me@ignochtulelk02d | 2018/11/26 05:28:51 [ERR] memberlist: Failed to send gossip to 10.255.0.11:7946: write udp [::]:7946->10.255.0.11:7946: sendto: operation not permitted
portainer_agent_agent.0.13cjb851d9me@ignochtulelk02d | 2018/11/26 05:28:51 [INFO] serf: EventMemberJoin: 3e290151a5eb 10.255.0.12
portainer_agent_agent.0.13cjb851d9me@ignochtulelk02d | 2018/11/26 05:28:51 [ERR] memberlist: Failed to send gossip to 10.255.0.11:7946: write udp [::]:7946->10.255.0.11:7946: sendto: operation not permitted
portainer_agent_agent.0.13cjb851d9me@ignochtulelk02d | 2018/11/26 05:28:51 [ERR] memberlist: Failed to send gossip to 10.255.0.12:7946: write udp [::]:7946->10.255.0.12:7946: sendto: operation not permitted
portainer_agent_agent.0.13cjb851d9me@ignochtulelk02d | 2018/11/26 05:28:51 [ERR] memberlist: Failed to send gossip to 10.255.0.12:7946: write udp [::]:7946->10.255.0.12:7946: sendto: operation not permitted
portainer_agent_agent.0.13cjb851d9me@ignochtulelk02d | 2018/11/26 05:28:51 [ERR] memberlist: Failed to send gossip to 10.255.0.11:7946: write udp [::]:7946->10.255.0.11:7946: sendto: operation not permitted
portainer_agent_agent.0.985h7xcfkux0@ignopotulelk03d | 2018/11/26 05:28:51 [INFO] serf: EventMemberJoin: 3e290151a5eb 10.255.0.12
portainer_agent_agent.0.985h7xcfkux0@ignopotulelk03d | 2018/11/26 05:28:51 [INFO] serf: EventMemberJoin: b6040a1ccc2a 10.255.0.13
portainer_agent_agent.0.985h7xcfkux0@ignopotulelk03d | 2018/11/26 05:28:51 [INFO] serf: EventMemberJoin: c6c277e3f60b 10.255.0.11
portainer_agent_agent.0.985h7xcfkux0@ignopotulelk03d | 2018/11/26 05:28:51 [INFO] - Starting Portainer agent version 1.1.2 on 0.0.0.0:9001 (cluster mode: true)
portainer_agent_agent.0.985h7xcfkux0@ignopotulelk03d | 2018/11/26 05:28:51 [ERR] memberlist: Failed to send gossip to 10.255.0.13:7946: write udp [::]:7946->10.255.0.13:7946: sendto: operation not permitted
portainer_agent_agent.0.985h7xcfkux0@ignopotulelk03d | 2018/11/26 05:28:51 [ERR] memberlist: Failed to send gossip to 10.255.0.11:7946: write udp [::]:7946->10.255.0.11:7946: sendto: operation not permitted
portainer_agent_agent.0.mljirysir6px@ignopotulelk01d | 2018/11/26 05:28:50 [INFO] serf: EventMemberJoin: c6c277e3f60b 10.255.0.11
portainer_agent_agent.0.mljirysir6px@ignopotulelk01d | 2018/11/26 05:28:50 [INFO] serf: EventMemberJoin: b6040a1ccc2a 10.255.0.13
portainer_agent_agent.0.mljirysir6px@ignopotulelk01d | 2018/11/26 05:28:50 [INFO] - Starting Portainer agent version 1.1.2 on 0.0.0.0:9001 (cluster mode: true)
portainer_agent_agent.0.mljirysir6px@ignopotulelk01d | 2018/11/26 05:28:51 [ERR] memberlist: Failed to send gossip to 10.255.0.13:7946: write udp [::]:7946->10.255.0.13:7946: sendto: operation not permitted
portainer_agent_agent.0.mljirysir6px@ignopotulelk01d | 2018/11/26 05:28:51 [ERR] memberlist: Failed to send gossip to 10.255.0.13:7946: write udp [::]:7946->10.255.0.13:7946: sendto: operation not permitted
portainer_agent_agent.0.mljirysir6px@ignopotulelk01d | 2018/11/26 05:28:51 [INFO] serf: EventMemberJoin: 3e290151a5eb 10.255.0.12
portainer_agent_agent.0.mljirysir6px@ignopotulelk01d | 2018/11/26 05:28:51 [ERR] memberlist: Failed to send gossip to 10.255.0.13:7946: write udp [::]:7946->10.255.0.13:7946: sendto: operation not permitted
portainer_agent_agent.0.mljirysir6px@ignopotulelk01d | 2018/11/26 05:28:51 [ERR] memberlist: Failed to send gossip to 10.255.0.12:7946: write udp [::]:7946->10.255.0.12:7946: sendto: operation not permitted
When I replace:
ports:
  - "9001:9001"
with:
  - target: 9001
    published: 9001
    protocol: tcp
    mode: host
it works. Why does host mode solve this problem?
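For completeness, here is how the working host-mode publishing fits into the full service definition (this is just the two snippets above merged, nothing else changed):
version: "3.2"
services:
  agent:
    image: "portainer/agent:1.1.2"
    environment:
      AGENT_CLUSTER_ADDR: tasks.agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    ports:
      - target: 9001
        published: 9001
        protocol: tcp
        mode: host
    networks:
      - priv_portainer
    deploy:
      mode: global
networks:
  priv_portainer:
    driver: overlay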

Connect consul agent to consul

I've been trying to set up a Consul server and connect an agent to it for two or three days already. I'm using docker-compose.
But after performing a join operation, the agent reports "Agent not live or unreachable".
Here are the logs:
root#e33a6127103f:/app# consul agent -join 10.1.30.91 -data-dir=/tmp/consul
==> Starting Consul agent...
==> Joining cluster...
Join completed. Synced with 1 initial agents
==> Consul agent running!
Version: 'v1.0.1'
Node ID: '0e1adf74-462d-45a4-1927-95ed123f1526'
Node name: 'e33a6127103f'
Datacenter: 'dc1' (Segment: '')
Server: false (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 172.17.0.2 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2017/12/06 10:44:43 [INFO] serf: EventMemberJoin: e33a6127103f 172.17.0.2
2017/12/06 10:44:43 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2017/12/06 10:44:43 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2017/12/06 10:44:43 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2017/12/06 10:44:43 [INFO] agent: (LAN) joining: [10.1.30.91]
2017/12/06 10:44:43 [INFO] serf: EventMemberJoin: consul1 172.19.0.2
2017/12/06 10:44:43 [INFO] consul: adding server consul1 (Addr: tcp/172.19.0.2:8300) (DC: dc1)
2017/12/06 10:44:43 [INFO] agent: (LAN) joined: 1 Err: <nil>
2017/12/06 10:44:43 [INFO] agent: started state syncer
2017/12/06 10:44:43 [WARN] manager: No servers available
2017/12/06 10:44:43 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 10:44:54 [INFO] memberlist: Suspect consul1 has failed, no acks received
2017/12/06 10:44:55 [ERR] consul: "Catalog.NodeServices" RPC failed to server 172.19.0.2:8300: rpc error getting client: failed to get conn: dial tcp <nil>->172.19.0.2:8300: i/o timeout
2017/12/06 10:44:55 [ERR] agent: failed to sync remote state: rpc error getting client: failed to get conn: dial tcp <nil>->172.19.0.2:8300: i/o timeout
2017/12/06 10:44:58 [INFO] memberlist: Marking consul1 as failed, suspect timeout reached (0 peer confirmations)
2017/12/06 10:44:58 [INFO] serf: EventMemberFailed: consul1 172.19.0.2
2017/12/06 10:44:58 [INFO] consul: removing server consul1 (Addr: tcp/172.19.0.2:8300) (DC: dc1)
2017/12/06 10:45:05 [INFO] memberlist: Suspect consul1 has failed, no acks received
2017/12/06 10:45:06 [WARN] manager: No servers available
2017/12/06 10:45:06 [ERR] agent: Coordinate update error: No known Consul servers
2017/12/06 10:45:12 [WARN] manager: No servers available
2017/12/06 10:45:12 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 10:45:13 [INFO] serf: attempting reconnect to consul1 172.19.0.2:8301
2017/12/06 10:45:28 [WARN] manager: No servers available
2017/12/06 10:45:28 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 10:45:32 [WARN] manager: No servers available
My settings are:
docker-compose SERVER:
consul1:
  image: "consul.1.0.1"
  container_name: "consul1"
  hostname: "consul1"
  volumes:
    - ./consul/config:/config/
  ports:
    - "8400:8400"
    - "8500:8500"
    - "8600:53"
    - "8300:8300"
    - "8301:8301"
  command: "agent -config-dir=/config -ui -server -bootstrap-expect 1"
Please help me solve the problem.
I think you are using the wrong IP address for the consul server in
"consul agent -join 10.1.30.91 -data-dir=/tmp/consul"
10.1.30.91 is not the Docker container's IP; it is probably your host/VirtualBox address. Get the consul container's IP and use that in the consul agent -join command.
For more info about how Consul and its agents work, follow this link:
https://dzone.com/articles/service-discovery-with-docker-and-consul-part-1
Try to get the right IP address by executing this command:
docker inspect <container id> | grep "IPAddress"
where <container id> is the container ID of the consul server.
Then use the obtained address instead of "10.1.30.91" in the command:
consul agent -join <IP ADDRESS CONSUL SERVER> -data-dir=/tmp/consul
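Putting the two answers together, a short sketch (assuming the server container is named consul1, as in the compose file above):
# grab the server container's IP from its network settings
CONSUL_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' consul1)
# join the agent using that address
consul agent -join "$CONSUL_IP" -data-dir=/tmp/consul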

Consul Empty reply from server

I'm trying to get a Consul server cluster up and running. I have 3 dockerized Consul servers running, but I can't access the web UI, the HTTP API, or the DNS interface.
$ docker logs net-sci_discovery-service_consul_1
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Consul agent running!
Version: 'v0.8.5'
Node ID: 'ccd38897-6047-f8b6-be1c-2aa0022a1483'
Node name: 'consul1'
Datacenter: 'dc1'
Server: true (bootstrap: false)
Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 172.20.0.2 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2017/07/07 23:24:07 [INFO] raft: Initial configuration (index=0): []
2017/07/07 23:24:07 [INFO] raft: Node at 172.20.0.2:8300 [Follower] entering Follower state (Leader: "")
2017/07/07 23:24:07 [INFO] serf: EventMemberJoin: consul1 172.20.0.2
2017/07/07 23:24:07 [INFO] consul: Adding LAN server consul1 (Addr: tcp/172.20.0.2:8300) (DC: dc1)
2017/07/07 23:24:07 [INFO] serf: EventMemberJoin: consul1.dc1 172.20.0.2
2017/07/07 23:24:07 [INFO] consul: Handled member-join event for server "consul1.dc1" in area "wan"
2017/07/07 23:24:07 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2017/07/07 23:24:07 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2017/07/07 23:24:07 [INFO] agent: Started HTTP server on 127.0.0.1:8500
2017/07/07 23:24:09 [INFO] serf: EventMemberJoin: consul2 172.20.0.3
2017/07/07 23:24:09 [INFO] consul: Adding LAN server consul2 (Addr: tcp/172.20.0.3:8300) (DC: dc1)
2017/07/07 23:24:09 [INFO] serf: EventMemberJoin: consul2.dc1 172.20.0.3
2017/07/07 23:24:09 [INFO] consul: Handled member-join event for server "consul2.dc1" in area "wan"
2017/07/07 23:24:10 [INFO] serf: EventMemberJoin: consul3 172.20.0.4
2017/07/07 23:24:10 [INFO] consul: Adding LAN server consul3 (Addr: tcp/172.20.0.4:8300) (DC: dc1)
2017/07/07 23:24:10 [INFO] consul: Found expected number of peers, attempting bootstrap: 172.20.0.2:8300,172.20.0.3:8300,172.20.0.4:8300
2017/07/07 23:24:10 [INFO] serf: EventMemberJoin: consul3.dc1 172.20.0.4
2017/07/07 23:24:10 [INFO] consul: Handled member-join event for server "consul3.dc1" in area "wan"
2017/07/07 23:24:14 [ERR] agent: failed to sync remote state: No cluster leader
2017/07/07 23:24:17 [WARN] raft: Heartbeat timeout from "" reached, starting election
2017/07/07 23:24:17 [INFO] raft: Node at 172.20.0.2:8300 [Candidate] entering Candidate state in term 2
2017/07/07 23:24:17 [INFO] raft: Election won. Tally: 2
2017/07/07 23:24:17 [INFO] raft: Node at 172.20.0.2:8300 [Leader] entering Leader state
2017/07/07 23:24:17 [INFO] raft: Added peer 172.20.0.3:8300, starting replication
2017/07/07 23:24:17 [INFO] raft: Added peer 172.20.0.4:8300, starting replication
2017/07/07 23:24:17 [INFO] consul: cluster leadership acquired
2017/07/07 23:24:17 [INFO] consul: New leader elected: consul1
2017/07/07 23:24:17 [WARN] raft: AppendEntries to {Voter 172.20.0.3:8300 172.20.0.3:8300} rejected, sending older logs (next: 1)
2017/07/07 23:24:17 [WARN] raft: AppendEntries to {Voter 172.20.0.4:8300 172.20.0.4:8300} rejected, sending older logs (next: 1)
2017/07/07 23:24:17 [INFO] raft: pipelining replication to peer {Voter 172.20.0.3:8300 172.20.0.3:8300}
2017/07/07 23:24:17 [INFO] raft: pipelining replication to peer {Voter 172.20.0.4:8300 172.20.0.4:8300}
2017/07/07 23:24:18 [INFO] consul: member 'consul1' joined, marking health alive
2017/07/07 23:24:18 [INFO] consul: member 'consul2' joined, marking health alive
2017/07/07 23:24:18 [INFO] consul: member 'consul3' joined, marking health alive
2017/07/07 23:24:20 [INFO] agent: Synced service 'consul'
2017/07/07 23:24:20 [INFO] agent: Synced service 'messaging-service-kafka'
2017/07/07 23:24:20 [INFO] agent: Synced service 'messaging-service-zookeeper'
$ curl http://127.0.0.1:8500/v1/catalog/service/consul
curl: (52) Empty reply from server
dig @127.0.0.1 -p 8600 consul.service.consul
; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 consul.service.consul
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
$ dig @127.0.0.1 -p 8600 messaging-service-kafka.service.consul
; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 messaging-service-kafka.service.consul
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
I can't get my services to register via the HTTP API either; those shown above are registered using a config script when the container launches.
Here's my docker-compose.yml:
version: '2'
services:
  consul1:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul_1"
    hostname: "consul1"
    ports:
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
    volumes:
      - ./etc/consul.d:/etc/consul.d
    command: "agent -server -ui -bootstrap-expect 3 -config-dir=/etc/consul.d -bind=0.0.0.0"
  consul2:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul_2"
    hostname: "consul2"
    command: "agent -server -join=consul1"
    links:
      - "consul1"
  consul3:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul_3"
    hostname: "consul3"
    command: "agent -server -join=consul1"
    links:
      - "consul1"
I'm relatively new to both docker and consul. I've had a look around the web and the above options are my understanding of what is required. Any suggestions on the way forward would be very welcome.
Edit:
Result of docker container ps --all:
$ docker container ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e0a1c3bba165 consul:latest "docker-entrypoint..." 38 seconds ago Up 36 seconds 8300-8302/tcp, 8500/tcp, 8301-8302/udp, 8600/tcp, 8600/udp net-sci_discovery-service_consul_3
7f05555e81e0 consul:latest "docker-entrypoint..." 38 seconds ago Up 36 seconds 8300-8302/tcp, 8500/tcp, 8301-8302/udp, 8600/tcp, 8600/udp net-sci_discovery-service_consul_2
9e2dedaa224b consul:latest "docker-entrypoint..." 39 seconds ago Up 38 seconds 0.0.0.0:8400->8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp, 8300-8302/tcp, 8600/udp, 0.0.0.0:8600->8600/tcp net-sci_discovery-service_consul_1
27b34c5dacb7 messagingservice_kafka "start-kafka.sh" 3 hours ago Up 3 hours 0.0.0.0:9092->9092/tcp net-sci_messaging-service_kafka
0389797b0b8f wurstmeister/zookeeper "/bin/sh -c '/usr/..." 3 hours ago Up 3 hours 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp net-sci_messaging-service_zookeeper
Edit:
Updated docker-compose.yml to include long format for ports:
version: '3.2'
services:
  consul1:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul_1"
    hostname: "consul1"
    ports:
      - target: 8400
        published: 8400
        mode: host
      - target: 8500
        published: 8500
        mode: host
      - target: 8600
        published: 8600
        mode: host
    volumes:
      - ./etc/consul.d:/etc/consul.d
    command: "agent -server -ui -bootstrap-expect 3 -config-dir=/etc/consul.d -bind=0.0.0.0 -client=127.0.0.1"
  consul2:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul_2"
    hostname: "consul2"
    command: "agent -server -join=consul1"
    links:
      - "consul1"
  consul3:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul_3"
    hostname: "consul3"
    command: "agent -server -join=consul1"
    links:
      - "consul1"
From the Consul web GUI docs: make sure you have launched the agent with the -ui parameter.
The UI is available at the /ui path on the same port as the HTTP API.
By default this is http://localhost:8500/ui
I do see 8500 mapped to your host on all interfaces (0.0.0.0).
Also check (as in this answer) whether the client_addr setting can help (at least for testing).
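For instance (my sketch, not part of the original answers): the empty reply is consistent with the agent binding its HTTP API to 127.0.0.1 inside the container (see "Started HTTP server on 127.0.0.1:8500" in the log above), which the published port cannot reach. Binding the client interfaces to all addresses should make the mapped port answer:
command: "agent -server -ui -bootstrap-expect 3 -config-dir=/etc/consul.d -bind=0.0.0.0 -client=0.0.0.0"
After that, the earlier check should return JSON instead of an empty reply:
$ curl http://127.0.0.1:8500/v1/catalog/service/consul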
