Rabbitmq on docker: Application mnesia exited with reason: stopped - docker

I'm trying to launch Rabbitmq with docker-compose alongside DRF and Celery.
Here's my docker-compose file. Everything else works fine, except for rabbitmq:
version: '3.7'
services:
  drf:
    build: ./drf
    entrypoint: ["/bin/sh","-c"]
    command:
      - |
        python manage.py migrate
        python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./drf/:/usr/src/drf/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=base_test
  redis:
    image: redis:alpine
    volumes:
      - redis:/data
    ports:
      - "6379:6379"
    depends_on:
      - drf
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
      - ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
    networks:
      - net_1
  celery_worker:
    command: sh -c "wait-for redis:3000 && wait-for drf:8000 -- celery -A base-test worker -l info"
    depends_on:
      - drf
      - db
      - redis
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
    hostname: celery_worker
    image: app-image
    networks:
      - net_1
    restart: on-failure
  celery_beat:
    command: sh -c "wait-for redis:3000 && wait-for drf:8000 -- celery -A mysite beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler"
    depends_on:
      - drf
      - db
      - redis
    hostname: celery_beat
    image: app-image
    networks:
      - net_1
    restart: on-failure
networks:
  net_1:
    driver: bridge
volumes:
  postgres_data:
  redis:
And here's what happens when I launch it. Can someone please help me find the problem? I can't even follow the instructions and read the generated dump file, because the rabbitmq container exits right after the error.
rabbitmq | Starting broker...2021-04-05 16:49:58.330 [info] <0.273.0>
rabbitmq | node : rabbit@0e652f57b1b3
rabbitmq | home dir : /var/lib/rabbitmq
rabbitmq | config file(s) : /etc/rabbitmq/rabbitmq.conf
rabbitmq | cookie hash : ZPam/SOKy2dEd/3yt0OlaA==
rabbitmq | log(s) : <stdout>
rabbitmq | database dir : /var/lib/rabbitmq/mnesia/rabbit@0e652f57b1b3
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: list of feature flags found:
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] drop_unroutable_metric
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] empty_basic_get_metric
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] implicit_default_bindings
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] maintenance_mode_status
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [ ] quorum_queue
rabbitmq | 2021-04-05 16:50:09.543 [info] <0.273.0> Feature flags: [ ] user_limits
rabbitmq | 2021-04-05 16:50:09.545 [info] <0.273.0> Feature flags: [ ] virtual_host_metadata
rabbitmq | 2021-04-05 16:50:09.546 [info] <0.273.0> Feature flags: feature flag states written to disk: yes
rabbitmq | 2021-04-05 16:50:10.844 [info] <0.273.0> Running boot step pre_boot defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.845 [info] <0.273.0> Running boot step rabbit_core_metrics defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.846 [info] <0.273.0> Running boot step rabbit_alarm defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.854 [info] <0.414.0> Memory high watermark set to 2509 MiB (2631391641 bytes) of 6273 MiB (6578479104 bytes) total
rabbitmq | 2021-04-05 16:50:10.864 [info] <0.416.0> Enabling free disk space monitoring
rabbitmq | 2021-04-05 16:50:10.864 [info] <0.416.0> Disk free limit set to 50MB
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.273.0> Running boot step code_server_cache defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.273.0> Running boot step file_handle_cache defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.419.0> Limiting to approx 1048479 file handles (943629 sockets)
rabbitmq | 2021-04-05 16:50:10.873 [info] <0.420.0> FHC read buffering: OFF
rabbitmq | 2021-04-05 16:50:10.873 [info] <0.420.0> FHC write buffering: ON
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.273.0> Running boot step worker_pool defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.372.0> Will use 4 processes for default worker pool
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.372.0> Starting worker pool 'worker_pool' with 4 processes in it
rabbitmq | 2021-04-05 16:50:10.876 [info] <0.273.0> Running boot step database defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.899 [info] <0.273.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
rabbitmq | 2021-04-05 16:50:10.900 [info] <0.273.0> Successfully synced tables from a peer
rabbitmq | 2021-04-05 16:50:10.908 [info] <0.44.0> Application mnesia exited with reason: stopped
rabbitmq |
rabbitmq | 2021-04-05 16:50:10.908 [info] <0.44.0> Application mnesia exited with reason: stopped
rabbitmq | 2021-04-05 16:50:10.908 [error] <0.273.0>
rabbitmq | 2021-04-05 16:50:10.908 [error] <0.273.0> BOOT FAILED
rabbitmq | BOOT FAILED
rabbitmq | ===========
rabbitmq | Error during startup: {error,
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> ===========
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> Error during startup: {error,
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> {schema_integrity_check_failed,
rabbitmq | {schema_integrity_check_failed,
rabbitmq | [{table_attributes_mismatch,rabbit_queue,
rabbitmq | 2021-04-05 16:50:10.910 [error] <0.273.0> [{table_attributes_mismatch,rabbit_queue,
rabbitmq | 2021-04-05 16:50:10.910 [error] <0.273.0> [name,durable,auto_delete,exclusive_owner,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> recoverable_slaves,policy,operator_policy,
rabbitmq | [name,durable,auto_delete,exclusive_owner,
rabbitmq | arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> gm_pids,decorators,state,policy_version,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> slave_pids_pending_shutdown,vhost,options],
rabbitmq | 2021-04-05 16:50:10.912 [error] <0.273.0> [name,durable,auto_delete,exclusive_owner,
rabbitmq | 2021-04-05 16:50:10.912 [error] <0.273.0> arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> recoverable_slaves,policy,operator_policy,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> gm_pids,decorators,state,policy_version,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> slave_pids_pending_shutdown,vhost,options,
rabbitmq | recoverable_slaves,policy,operator_policy,
rabbitmq | gm_pids,decorators,state,policy_version,
rabbitmq | slave_pids_pending_shutdown,vhost,options],
rabbitmq | [name,durable,auto_delete,exclusive_owner,
rabbitmq | arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | recoverable_slaves,policy,operator_policy,
rabbitmq | gm_pids,decorators,state,policy_version,
rabbitmq | slave_pids_pending_shutdown,vhost,options,
rabbitmq | type,type_state]}]}}
rabbitmq | 2021-04-05 16:50:10.914 [error] <0.273.0> type,type_state]}]}}
rabbitmq | 2021-04-05 16:50:10.916 [error] <0.273.0>
rabbitmq |
rabbitmq | 2021-04-05 16:50:11.924 [info] <0.272.0> [{initial_call,{application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}},{pid,<0.272.0>},{registered_name,[]},{error_info
,{exit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_
pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_
pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},{rabbit,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,138}]},{proc_l
ib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}},{ancestors,[<0.271.0>]},{message_queue_len,1},{messages,[{'EXIT',<0.273.0>,normal}]},{links,[<0.271.0>,<0.44.0>]},{dictionary,[]},{trap_exit,true},{
status,running},{heap_size,610},{stack_size,28},{reductions,534}], []
rabbitmq | 2021-04-05 16:50:11.924 [error] <0.272.0> CRASH REPORT Process <0.272.0> with 0 neighbours exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name
,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name
,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,t
ype_state]}]},...} in application_master:init/4 line 138
rabbitmq | 2021-04-05 16:50:11.924 [info] <0.44.0> Application rabbit exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},...}
rabbitmq | 2021-04-05 16:50:11.925 [info] <0.44.0> Application rabbit exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},...}
rabbitmq | {"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusi
ve_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusi
ve_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},{rabbit,start,
[normal,[]]}}}"}
rabbitmq | Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusiv
e_owner,arg
rabbitmq |
rabbitmq | Crash dump is being written to: /var/log/rabbitmq/erl_crash.dump...done
rabbitmq exited with code 0

I've managed to make it work by removing container_name and volumes from the rabbitmq section of the docker-compose file. It would still be nice to have an explanation of this behavior.
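The schema_integrity_check_failed / table_attributes_mismatch error for rabbit_queue usually means the Mnesia database in the bind-mounted ~/.docker-conf/rabbitmq/data/ directory was written by a different RabbitMQ version than the rabbitmq:3-management-alpine image that is starting now; the two attribute lists in the error differ only by the newer type and type_state fields, so the broker refuses to boot on the old schema. Removing the volumes entry is most likely what mattered here, not container_name. Wiping that host directory, or switching to a named volume, gives the broker a clean data dir. A minimal sketch of the rabbitmq service with a named volume (the name rabbitmq_data is my own choice, not something from the original file):
rabbitmq:
  image: rabbitmq:3-management-alpine
  container_name: 'rabbitmq'
  ports:
    - 5672:5672
    - 15672:15672
  volumes:
    - rabbitmq_data:/var/lib/rabbitmq/
  networks:
    - net_1
volumes:
  rabbitmq_data:
As for the crash dump: docker cp works on stopped containers, so docker cp rabbitmq:/var/log/rabbitmq/erl_crash.dump . should retrieve it even after the container has exited.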

Related

node-red localhost not connecting

I am running Node-RED with docker compose, and from the .gitlab-ci.yml file I am calling the docker/compose image. My pipeline is working and I can see this:
node-red | 11 Nov 11:28:51 - [info]
node-red |
node-red | Welcome to Node-RED
node-red | ===================
node-red |
node-red | 11 Nov 11:28:51 - [info] Node-RED version: v3.0.2
node-red | 11 Nov 11:28:51 - [info] Node.js version: v16.16.0
node-red | 11 Nov 11:28:51 - [info] Linux 5.15.49-linuxkit x64 LE
node-red | 11 Nov 11:28:52 - [info] Loading palette nodes
node-red | 11 Nov 11:28:53 - [info] Settings file : /data/settings.js
node-red | 11 Nov 11:28:53 - [info] Context store : 'default' [module=memory]
node-red | 11 Nov 11:28:53 - [info] User directory : /data
node-red | 11 Nov 11:28:53 - [warn] Projects disabled : editorTheme.projects.enabled=false
node-red | 11 Nov 11:28:53 - [info] Flows file : /data/flows.json
node-red | 11 Nov 11:28:53 - [warn]
node-red |
node-red | ---------------------------------------------------------------------
node-red | Your flow credentials file is encrypted using a system-generated key.
node-red |
node-red | If the system-generated key is lost for any reason, your credentials
node-red | file will not be recoverable, you will have to delete it and re-enter
node-red | your credentials.
node-red |
node-red | You should set your own key using the 'credentialSecret' option in
node-red | your settings file. Node-RED will then re-encrypt your credentials
node-red | file using your chosen key the next time you deploy a change.
node-red | ---------------------------------------------------------------------
node-red |
node-red | 11 Nov 11:28:53 - [info] Server now running at http://127.0.0.1:1880/
node-red | 11 Nov 11:28:53 - [warn] Encrypted credentials not found
node-red | 11 Nov 11:28:53 - [info] Starting flows
node-red | 11 Nov 11:28:53 - [info] Started flows
But when I try to open localhost to access Node-RED or the dashboard, I get the error "Failed to open page".
This is my docker-compose.yml
version: "3.7"
services:
node-red:
image: nodered/node-red:latest
user: '1000'
container_name: node-red
environment:
- TZ=Europe/Amsterdam
ports:
- "1880:1880"
This is my .gitlab-ci.yml
yateen-docker:
  stage: build
  image:
    name: docker/compose
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
  script:
    - docker-compose up
  only:
    - main
Any help would be appreciated!
I tried to create the Node-RED container via docker-compose rather than just running a docker run command. The Node-RED image is running, but I can't access the server page.

Two rabbitmq instances on one server with docker compose: how to change the default port

I would like to run two instances of RabbitMQ on one server, all created with docker-compose. The question is how to change the default node and management ports. I have tried setting them via ports, but it didn't help. When I faced the same scenario with mongo, I used command: mongod --port CUSTOM_PORT. What would be the analogous command for RabbitMQ?
Here is my config for the second instance of rabbitmq.
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq_test'
    ports:
      - 5673:5673
      - 15673:15673
    volumes:
      - ./rabbitmq/data/:/var/lib/rabbitmq/
      - ./rabbitmq/log/:/var/log/rabbitmq
    networks:
      - rabbitmq_go_net_test
    environment:
      RABBITMQ_DEFAULT_USER: 'test'
      RABBITMQ_DEFAULT_PASS: 'test'
      HOST_PORT_RABBIT: 5673
      HOST_PORT_RABBIT_MGMT: 15673
networks:
  rabbitmq_go_net_test:
    driver: bridge
And the outcome is below
Management plugin: HTTP (non-TLS) listener started on port 15672
rabbitmq_test | 2021-03-18 11:32:42.553 [info] <0.738.0> Ready to start client connection listeners
rabbitmq_test | 2021-03-18 11:32:42.553 [info] <0.44.0> Application rabbitmq_prometheus started on node rabbit@fb24038613f3
rabbitmq_test | 2021-03-18 11:32:42.557 [info] <0.1035.0> started TCP listener on [::]:5672
We can see that there are still ports 5672 and 15672 exposed instead of 5673 and 15673.
EDIT
ports:
  - 5673:5672
  - 15673:15672
I have tried the above configuration as well, with no success:
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.797.0> Management plugin: HTTP (non-TLS) listener started on port 15672
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.903.0> Statistics database started.
rabbitmq_test | 2021-03-18 14:08:56.167 [info] <0.902.0> Starting worker pool 'management_worker_pool' with 3 processes in it
rabbitmq_test | 2021-03-18 14:08:56.168 [info] <0.44.0> Application rabbitmq_management started on node rabbit@9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.208 [info] <0.44.0> Application prometheus started on node rabbit@9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.916.0> Prometheus metrics: HTTP (non-TLS) listener started on port 15692
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.44.0> Application rabbitmq_prometheus started on node rabbit@9358e6f4d2a5
rabbitmq_test | 2021-03-18 14:08:56.213 [info] <0.738.0> Ready to start client connection listeners
rabbitmq_test | 2021-03-18 14:08:56.216 [info] <0.1035.0> started TCP listener on [::]:5672
I have found the solution. I provided the configuration file to the rabbitmq container.
loopback_users.guest = false
listeners.tcp.default = 5673
default_pass = test
default_user = test
management.tcp.port = 15673
And a working docker-compose file
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq_test'
    ports:
      - 5673:5673
      - 15673:15673
    volumes:
      - ./rabbitmq/data/:/var/lib/rabbitmq/
      - ./rabbitmq/log/:/var/log/rabbitmq
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.conf
    networks:
      - rabbitmq_go_net_test
networks:
  rabbitmq_go_net_test:
    driver: bridge
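A quick way to double-check which ports the node actually binds is to ask the broker itself (a sketch, assuming the container_name rabbitmq_test from the compose file above):
docker exec rabbitmq_test rabbitmq-diagnostics listeners
With the configuration above it should report the AMQP listener on 5673 and the management listener on 15673, and a client on the host would then connect to amqp://test:test@localhost:5673.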
A working example with rabbitmq:3.9.13-management-alpine
docker/rabbitmq/rabbitmq.conf:
loopback_users.guest = false
listeners.tcp.default = 5673
default_pass = guest
default_user = guest
default_vhost = /
docker/rabbitmq/Dockerfile:
FROM rabbitmq:3.9.13-management-alpine
COPY --chown=rabbitmq:rabbitmq rabbitmq.conf /etc/rabbitmq/rabbitmq.conf
EXPOSE 4369 5671 5672 5673 15691 15692 25672 25673
docker-compose.yml:
...
  rabbitmq:
    #image: "rabbitmq:3-management-alpine"
    build: './docker/rabbitmq/'
    container_name: my-rabbitmq
    environment:
      RABBITMQ_DEFAULT_VHOST: /
    ports:
      - 5673:5672
      - 15673:15672
    networks:
      - default
...

Docker RabbitMQ image fails to load up

I have a project that uses the RabbitMQ docker image. When I use this command in my project:
docker-compose -f dc.rabbitmq.yml up
I get the error below when the container fails to start:
477a2217c16b: Pull complete
Digest: sha256:d8fb3795026b4c81eae33f4990e8bbc7b29c877388eef6ead2aca2945074c3f3
Status: Downloaded newer image for rabbitmq:3-management-alpine
Creating rabbitmq ... done
Attaching to rabbitmq
rabbitmq | Configuring logger redirection
rabbitmq | 07:50:26.108 [error]
rabbitmq |
rabbitmq | BOOT FAILED
rabbitmq | 07:50:26.116 [error] BOOT FAILED
rabbitmq | 07:50:26.117 [error] ===========
rabbitmq | ===========
rabbitmq | 07:50:26.117 [error] Exception during startup:
rabbitmq | 07:50:26.117 [error]
rabbitmq | Exception during startup:
rabbitmq |
rabbitmq | supervisor:'-start_children/2-fun-0-'/3 line 355
rabbitmq | 07:50:26.117 [error] supervisor:'-start_children/2-fun-0-'/3 line 355
rabbitmq | 07:50:26.117 [error] supervisor:do_start_child/2 line 371
rabbitmq | 07:50:26.117 [error] supervisor:do_start_child_i/3 line 385
rabbitmq | supervisor:do_start_child/2 line 371
rabbitmq | supervisor:do_start_child_i/3 line 385
rabbitmq | rabbit_prelaunch:run_prelaunch_first_phase/0 line 27
rabbitmq | rabbit_prelaunch:do_run/0 line 108
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch:run_prelaunch_first_phase/0 line 27
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch:do_run/0 line 108
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch_conf:setup/1 line 33
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch_conf:decrypt_config/2 line 404
rabbitmq | rabbit_prelaunch_conf:setup/1 line 33
rabbitmq | rabbit_prelaunch_conf:decrypt_config/2 line 404
rabbitmq | rabbit_prelaunch_conf:decrypt_app/3 line 425
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch_conf:decrypt_app/3 line 425
rabbitmq | 07:50:26.117 [error] throw:{config_decryption_error,{key,default_pass},badarg}
rabbitmq | throw:{config_decryption_error,{key,default_pass},badarg}
rabbitmq |
rabbitmq | 07:50:26.117 [error]
rabbitmq | 07:50:27.118 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {config_decryption_error,{key,default_pass},badarg} in context start_error
What I have tried so far: searching for a child process that causes RabbitMQ to fail (I could not find any), and removing the images and containers, then building and starting them again, but I get the same error.
#EDIT
Here is the config file:
version: "3.7"
services:
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: rabbitmq
ports:
- "15672:15672"
- "5672:5672"
volumes:
- ./docker/rabbitmq/definitions.json:/opt/definitions.json:ro
- ./docker/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.config:ro
- dms_rabbitmq_data:/var/lib/rabbitmq/
- dms_rabbitmq_logs:/var/log/rabbitmq
networks:
- rabbitmq_network
labels:
- co.elastic.logs/module=rabbitmq
- co.elastic.logs/fileset.stdout=access
- co.elastic.logs/fileset.stderr=error
- co.elastic.metrics/module=rabbitmq
- co.elastic.metrics/metricsets=status
volumes:
dms_rabbitmq_etc:
name: dms_rabbitmq_etc
dms_rabbitmq_data:
name: dms_rabbitmq_data
dms_rabbitmq_logs:
driver: local
driver_opts:
type: "none"
o: "bind"
device: ${PWD}/storage/logs/rabbitmq
networks:
rabbitmq_network:
name: rabbitmq_network
I had the same problem. The solution for me was recreating an encoded password.
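The config_decryption_error,{key,default_pass},badarg means the broker found an encrypted default_pass value it cannot decrypt with the configured passphrase. A minimal sketch of regenerating it (the password and passphrase here are placeholders; the classic Erlang config format is assumed because the compose file above mounts the file as /etc/rabbitmq/rabbitmq.config):
rabbitmqctl encode '<<"my_new_password">>' my_passphrase
Then put the printed {encrypted, <<"...">>} term back into the config, together with a matching decoder passphrase:
[
  {rabbit, [
    {default_pass, {encrypted, <<"...output of rabbitmqctl encode...">>}},
    {config_entry_decoder, [
      {passphrase, <<"my_passphrase">>}
    ]}
  ]}
].
If the encrypted value and the passphrase were generated at different times (or with different cipher settings), decryption fails exactly as in the log above.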

Alfresco deployment with docker in a system that has a rabbitmq instance running

I am trying to deploy Alfresco Community Edition with its official docker-compose file. The problem I am facing is that a RabbitMQ instance (with default configs) is already running on the host system, and I think ActiveMQ and RabbitMQ interfere with each other, causing Alfresco Content Services (ACS) to get stuck at "Starting 'Messaging' subsystem, ID: [Messaging, default]", even though ActiveMQ itself seems to run properly.
This is my docker-compose.yml (I changed the ActiveMQ ports):
version: "2"
services:
alfresco:
image: alfresco/alfresco-content-repository-community:6.2.0-ga
mem_limit: 1500m
environment:
JAVA_OPTS: "
-Ddb.driver=org.postgresql.Driver
-Ddb.username=alfresco
-Ddb.password=alfresco
-Ddb.url=jdbc:postgresql://postgres:5432/alfresco
-Dsolr.host=solr6
-Dsolr.port=8983
-Dsolr.secureComms=none
-Dsolr.base.url=/solr
-Dindex.subsystem.name=solr6
-Dshare.host=127.0.0.1
-Dshare.port=8080
-Dalfresco.host=localhost
-Dalfresco.port=8080
-Daos.baseUrlOverwrite=http://localhost:8080/alfresco/aos
-Dmessaging.broker.url=\"failover:(nio://activemq:11617)?timeout=3000&jms.useCompression=true\"
-Ddeployment.method=DOCKER_COMPOSE
-Dlocal.transform.service.enabled=true
-DlocalTransform.pdfrenderer.url=http://alfresco-pdf-renderer:8090/
-DlocalTransform.imagemagick.url=http://imagemagick:8090/
-DlocalTransform.libreoffice.url=http://libreoffice:8090/
-DlocalTransform.tika.url=http://tika:8090/
-DlocalTransform.misc.url=http://transform-misc:8090/
-Dlegacy.transform.service.enabled=true
-Dalfresco-pdf-renderer.url=http://alfresco-pdf-renderer:8090/
-Djodconverter.url=http://libreoffice:8090/
-Dimg.url=http://imagemagick:8090/
-Dtika.url=http://tika:8090/
-Dtransform.misc.url=http://transform-misc:8090/
-Dcsrf.filter.enabled=false
-Xms1500m -Xmx1500m
"
alfresco-pdf-renderer:
image: alfresco/alfresco-pdf-renderer:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8090:8090
imagemagick:
image: alfresco/alfresco-imagemagick:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8091:8090
libreoffice:
image: alfresco/alfresco-libreoffice:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8092:8090
tika:
image: alfresco/alfresco-tika:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8093:8090
transform-misc:
image: alfresco/alfresco-transform-misc:2.1.0
mem_limit: 1g
environment:
JAVA_OPTS: " -Xms256m -Xmx512m"
ports:
- 8094:8090
share:
image: alfresco/alfresco-share:6.2.0
mem_limit: 1g
environment:
REPO_HOST: "alfresco"
REPO_PORT: "8080"
JAVA_OPTS: "
-Xms500m
-Xmx500m
-Dalfresco.host=localhost
-Dalfresco.port=8080
-Dalfresco.context=alfresco
-Dalfresco.protocol=http
"
postgres:
image: postgres:11.4
mem_limit: 512m
environment:
- POSTGRES_PASSWORD=alfresco
- POSTGRES_USER=alfresco
- POSTGRES_DB=alfresco
command: postgres -c max_connections=300 -c log_min_messages=LOG
ports:
- 5432:5432
solr6:
image: alfresco/alfresco-search-services:1.4.0
mem_limit: 2g
environment:
#Solr needs to know how to register itself with Alfresco
- SOLR_ALFRESCO_HOST=alfresco
- SOLR_ALFRESCO_PORT=8080
#Alfresco needs to know how to call solr
- SOLR_SOLR_HOST=solr6
- SOLR_SOLR_PORT=8983
#Create the default alfresco and archive cores
- SOLR_CREATE_ALFRESCO_DEFAULTS=alfresco,archive
#HTTP by default
- ALFRESCO_SECURE_COMMS=none
- "SOLR_JAVA_MEM=-Xms2g -Xmx2g"
ports:
- 8083:8983 #Browser port
activemq:
image: alfresco/alfresco-activemq:5.15.8
mem_limit: 1g
ports:
- 1162:8161 # Web Console
- 1673:5672 # AMQP
- 11617:61616 # OpenWire
- 11614:61613 # STOMP
proxy:
image: alfresco/acs-community-ngnix:1.0.0
mem_limit: 128m
depends_on:
- alfresco
ports:
- 8080:8080
links:
- alfresco
- share
These are the ActiveMQ logs:
activemq_1 | INFO: Loading '/opt/activemq/bin/env'
activemq_1 | INFO: Using java '/usr/java/default/bin/java'
activemq_1 | INFO: Starting in foreground, this is just for debugging purposes (stop process by pressing CTRL+C)
activemq_1 | INFO: Creating pidfile /opt/activemq/data/activemq.pid
activemq_1 | Extensions classpath:
activemq_1 | [/opt/activemq/lib,/opt/activemq/lib/camel,/opt/activemq/lib/optional,/opt/activemq/lib/web,/opt/activemq/lib/extra]
activemq_1 | ACTIVEMQ_HOME: /opt/activemq
activemq_1 | ACTIVEMQ_BASE: /opt/activemq
activemq_1 | ACTIVEMQ_CONF: /opt/activemq/conf
activemq_1 | ACTIVEMQ_DATA: /opt/activemq/data
activemq_1 | Loading message broker from: xbean:activemq.xml
activemq_1 | INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@73ad2d6: startup date [Mon Apr 27 09:57:23 UTC 2020]; root of context hierarchy
activemq_1 | INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/opt/activemq/data/kahadb]
activemq_1 | INFO | PListStore:[/opt/activemq/data/localhost/tmp_storage] started
activemq_1 | INFO | Apache ActiveMQ 5.15.8 (localhost, ID:7f445cd32cc5-39441-1587981447728-0:1) is starting
activemq_1 | INFO | Listening for connections at: tcp://7f445cd32cc5:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector openwire started
activemq_1 | INFO | Listening for connections at: amqp://7f445cd32cc5:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector amqp started
activemq_1 | INFO | Listening for connections at: stomp://7f445cd32cc5:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector stomp started
activemq_1 | INFO | Listening for connections at: mqtt://7f445cd32cc5:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector mqtt started
activemq_1 | INFO | Starting Jetty server
activemq_1 | INFO | Creating Jetty connector
activemq_1 | WARN | ServletContext@o.e.j.s.ServletContextHandler@8e50104{/,null,STARTING} has uncovered http methods for path: /
activemq_1 | INFO | Listening for connections at ws://7f445cd32cc5:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600
activemq_1 | INFO | Connector ws started
activemq_1 | INFO | Apache ActiveMQ 5.15.8 (localhost, ID:7f445cd32cc5-39441-1587981447728-0:1) started
activemq_1 | INFO | For help or more information please see: http://activemq.apache.org
activemq_1 | WARN | Store limit is 102400 mb (current store usage is 0 mb). The data directory: /opt/activemq/data/kahadb only has 20358 mb of usable space. - resetting to maximum available disk space: 20358 mb
activemq_1 | WARN | Temporary Store limit is 51200 mb (current store usage is 0 mb). The data directory: /opt/activemq/data only has 20358 mb of usable space. - resetting to maximum available disk space: 20358 mb
and this is the last Alfresco log line; it gets stuck here forever:
alfresco_1 | 2020-04-27 09:59:50,116 INFO [management.subsystems.ChildApplicationContextFactory] [localhost-startStop-1] Starting 'Messaging' subsystem, ID: [Messaging, default]

Consul docker - advertise flag ignored

Hi, I have configured a cluster with two nodes (two VMs in VirtualBox). The cluster starts correctly, but the advertise flag seems to be ignored by Consul.
vm1 (app) ip 192.168.20.10
vm2 (web) ip 192.168.20.11
docker-compose vm1 (app)
version: '2'
services:
  appconsul:
    build: consul/
    ports:
      - 192.168.20.10:8300:8300
      - 192.168.20.10:8301:8301
      - 192.168.20.10:8301:8301/udp
      - 192.168.20.10:8302:8302
      - 192.168.20.10:8302:8302/udp
      - 192.168.20.10:8400:8400
      - 192.168.20.10:8500:8500
      - 172.32.0.1:53:53/udp
    hostname: node_1
    command: -server -advertise 192.168.20.10 -bootstrap-expect 2 -ui-dir /ui
    networks:
      net-app:
  appregistrator:
    build: registrator/
    hostname: app
    command: consul://192.168.20.10:8500
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    depends_on:
      - appconsul
    networks:
      net-app:
networks:
  net-app:
    driver: bridge
    ipam:
      config:
        - subnet: 172.32.0.0/24
docker-compose vm2 (web)
version: '2'
services:
  webconsul:
    build: consul/
    ports:
      - 192.168.20.11:8300:8300
      - 192.168.20.11:8301:8301
      - 192.168.20.11:8301:8301/udp
      - 192.168.20.11:8302:8302
      - 192.168.20.11:8302:8302/udp
      - 192.168.20.11:8400:8400
      - 192.168.20.11:8500:8500
      - 172.33.0.1:53:53/udp
    hostname: node_2
    command: -server -advertise 192.168.20.11 -join 192.168.20.10
    networks:
      net-web:
  webregistrator:
    build: registrator/
    hostname: web
    command: consul://192.168.20.11:8500
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    depends_on:
      - webconsul
    networks:
      net-web:
networks:
  net-web:
    driver: bridge
    ipam:
      config:
        - subnet: 172.33.0.0/24
After startup there is no error about the advertise flag, but the services are registered with the private IP of the internal network instead of the IP declared in advertise (192.168.20.10 and 192.168.20.11). Any idea?
Attached is the log of node_1; node_2's logs are the same.
appconsul_1 | ==> WARNING: Expect Mode enabled, expecting 2 servers
appconsul_1 | ==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
appconsul_1 | ==> Starting raft data migration...
appconsul_1 | ==> Starting Consul agent...
appconsul_1 | ==> Starting Consul agent RPC...
appconsul_1 | ==> Consul agent running!
appconsul_1 | Node name: 'node_1'
appconsul_1 | Datacenter: 'dc1'
appconsul_1 | Server: true (bootstrap: false)
appconsul_1 | Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
appconsul_1 | Cluster Addr: 192.168.20.10 (LAN: 8301, WAN: 8302)
appconsul_1 | Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
appconsul_1 | Atlas: <disabled>
appconsul_1 |
appconsul_1 | ==> Log data will now stream in as it occurs:
appconsul_1 |
appconsul_1 | 2017/06/13 14:57:24 [INFO] raft: Node at 192.168.20.10:8300 [Follower] entering Follower state
appconsul_1 | 2017/06/13 14:57:24 [INFO] serf: EventMemberJoin: node_1 192.168.20.10
appconsul_1 | 2017/06/13 14:57:24 [INFO] serf: EventMemberJoin: node_1.dc1 192.168.20.10
appconsul_1 | 2017/06/13 14:57:24 [INFO] consul: adding server node_1 (Addr: 192.168.20.10:8300) (DC: dc1)
appconsul_1 | 2017/06/13 14:57:24 [INFO] consul: adding server node_1.dc1 (Addr: 192.168.20.10:8300) (DC: dc1)
appconsul_1 | 2017/06/13 14:57:25 [ERR] agent: failed to sync remote state: No cluster leader
appconsul_1 | 2017/06/13 14:57:25 [ERR] agent: failed to sync changes: No cluster leader
appconsul_1 | 2017/06/13 14:57:26 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
appconsul_1 | 2017/06/13 14:57:48 [ERR] agent: failed to sync remote state: No cluster leader
appconsul_1 | 2017/06/13 14:58:13 [ERR] agent: failed to sync remote state: No cluster leader
appconsul_1 | 2017/06/13 14:58:22 [INFO] serf: EventMemberJoin: node_2 192.168.20.11
appconsul_1 | 2017/06/13 14:58:22 [INFO] consul: adding server node_2 (Addr: 192.168.20.11:8300) (DC: dc1)
appconsul_1 | 2017/06/13 14:58:22 [INFO] consul: Attempting bootstrap with nodes: [192.168.20.10:8300 192.168.20.11:8300]
appconsul_1 | 2017/06/13 14:58:23 [WARN] raft: Heartbeat timeout reached, starting election
appconsul_1 | 2017/06/13 14:58:23 [INFO] raft: Node at 192.168.20.10:8300 [Candidate] entering Candidate state
appconsul_1 | 2017/06/13 14:58:23 [WARN] raft: Remote peer 192.168.20.11:8300 does not have local node 192.168.20.10:8300 as a peer
appconsul_1 | 2017/06/13 14:58:23 [INFO] raft: Election won. Tally: 2
appconsul_1 | 2017/06/13 14:58:23 [INFO] raft: Node at 192.168.20.10:8300 [Leader] entering Leader state
appconsul_1 | 2017/06/13 14:58:23 [INFO] consul: cluster leadership acquired
appconsul_1 | 2017/06/13 14:58:23 [INFO] consul: New leader elected: node_1
appconsul_1 | 2017/06/13 14:58:23 [INFO] raft: pipelining replication to peer 192.168.20.11:8300
appconsul_1 | 2017/06/13 14:58:23 [INFO] consul: member 'node_1' joined, marking health alive
appconsul_1 | 2017/06/13 14:58:23 [INFO] consul: member 'node_2' joined, marking health alive
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_solr_1:8983'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8302'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8302:udp'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8301'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8500'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8300'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'consul'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_mysql_1:3306'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8400'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:53:udp'
appconsul_1 | 2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8301:udp'
Thanks for any reply
UPDATE:
I tried removing the networks section from the compose file, but I had the same problem. I resolved it by using compose file format v1; this configuration works:
compose vm1 (app)
appconsul:
  build: consul/
  ports:
    - 192.168.20.10:8300:8300
    - 192.168.20.10:8301:8301
    - 192.168.20.10:8301:8301/udp
    - 192.168.20.10:8302:8302
    - 192.168.20.10:8302:8302/udp
    - 192.168.20.10:8400:8400
    - 192.168.20.10:8500:8500
    - 172.32.0.1:53:53/udp
  hostname: node_1
  command: -server -advertise 192.168.20.10 -bootstrap-expect 2 -ui-dir /ui
appregistrator:
  build: registrator/
  hostname: app
  command: consul://192.168.20.10:8500
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  links:
    - appconsul
compose vm2 (web)
webconsul:
  build: consul/
  ports:
    - 192.168.20.11:8300:8300
    - 192.168.20.11:8301:8301
    - 192.168.20.11:8301:8301/udp
    - 192.168.20.11:8302:8302
    - 192.168.20.11:8302:8302/udp
    - 192.168.20.11:8400:8400
    - 192.168.20.11:8500:8500
    - 172.33.0.1:53:53/udp
  hostname: node_2
  command: -server -advertise 192.168.20.11 -join 192.168.20.10
webregistrator:
  build: registrator/
  hostname: web
  command: consul://192.168.20.11:8500
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  links:
    - webconsul
The problem is the compose file version: v2 and v3 show the same behavior; it works only with compose file format v1.
