Docker RabbitMQ image fails to start - docker

I have a project that uses the RabbitMQ Docker image. When I run this command in my project:
docker-compose -f dc.rabbitmq.yml up
I get the error below and the container fails to start:
477a2217c16b: Pull complete
Digest: sha256:d8fb3795026b4c81eae33f4990e8bbc7b29c877388eef6ead2aca2945074c3f3
Status: Downloaded newer image for rabbitmq:3-management-alpine
Creating rabbitmq ... done
Attaching to rabbitmq
rabbitmq | Configuring logger redirection
rabbitmq | 07:50:26.108 [error]
rabbitmq |
rabbitmq | BOOT FAILED
rabbitmq | 07:50:26.116 [error] BOOT FAILED
rabbitmq | 07:50:26.117 [error] ===========
rabbitmq | ===========
rabbitmq | 07:50:26.117 [error] Exception during startup:
rabbitmq | 07:50:26.117 [error]
rabbitmq | Exception during startup:
rabbitmq |
rabbitmq | supervisor:'-start_children/2-fun-0-'/3 line 355
rabbitmq | 07:50:26.117 [error] supervisor:'-start_children/2-fun-0-'/3 line 355
rabbitmq | 07:50:26.117 [error] supervisor:do_start_child/2 line 371
rabbitmq | 07:50:26.117 [error] supervisor:do_start_child_i/3 line 385
rabbitmq | supervisor:do_start_child/2 line 371
rabbitmq | supervisor:do_start_child_i/3 line 385
rabbitmq | rabbit_prelaunch:run_prelaunch_first_phase/0 line 27
rabbitmq | rabbit_prelaunch:do_run/0 line 108
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch:run_prelaunch_first_phase/0 line 27
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch:do_run/0 line 108
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch_conf:setup/1 line 33
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch_conf:decrypt_config/2 line 404
rabbitmq | rabbit_prelaunch_conf:setup/1 line 33
rabbitmq | rabbit_prelaunch_conf:decrypt_config/2 line 404
rabbitmq | rabbit_prelaunch_conf:decrypt_app/3 line 425
rabbitmq | 07:50:26.117 [error] rabbit_prelaunch_conf:decrypt_app/3 line 425
rabbitmq | 07:50:26.117 [error] throw:{config_decryption_error,{key,default_pass},badarg}
rabbitmq | throw:{config_decryption_error,{key,default_pass},badarg}
rabbitmq |
rabbitmq | 07:50:26.117 [error]
rabbitmq | 07:50:27.118 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {config_decryption_error,{key,default_pass},badarg} in context start_error
What I have tried so far: I searched for a child process that makes RabbitMQ fail but could not find one, and I removed the images and containers, then rebuilt and started them again, but I still get the same error.
EDIT
Here is the compose file:
version: "3.7"
services:
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: rabbitmq
ports:
- "15672:15672"
- "5672:5672"
volumes:
- ./docker/rabbitmq/definitions.json:/opt/definitions.json:ro
- ./docker/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.config:ro
- dms_rabbitmq_data:/var/lib/rabbitmq/
- dms_rabbitmq_logs:/var/log/rabbitmq
networks:
- rabbitmq_network
labels:
- co.elastic.logs/module=rabbitmq
- co.elastic.logs/fileset.stdout=access
- co.elastic.logs/fileset.stderr=error
- co.elastic.metrics/module=rabbitmq
- co.elastic.metrics/metricsets=status
volumes:
dms_rabbitmq_etc:
name: dms_rabbitmq_etc
dms_rabbitmq_data:
name: dms_rabbitmq_data
dms_rabbitmq_logs:
driver: local
driver_opts:
type: "none"
o: "bind"
device: ${PWD}/storage/logs/rabbitmq
networks:
rabbitmq_network:
name: rabbitmq_network

I had the same problem. The solution for me was recreating an encoded password.
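For reference, here is a minimal sketch of how that can be done, assuming the classic rabbitmq.config (Erlang terms) format that the question mounts and that the container is named rabbitmq as in the compose file; the password and passphrase below are placeholders:

# "my_new_password" and "my_passphrase" are placeholders; use your own values
docker exec -it rabbitmq rabbitmqctl encode '<<"my_new_password">>' my_passphrase

The command prints an {encrypted, <<"...">>} value. Paste it back into the default_pass entry and make sure the config_entry_decoder passphrase in the same file matches the passphrase used to encode it; a mismatch or a corrupted value is what produces the {config_decryption_error,{key,default_pass},badarg} thrown above.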

Related

Sonarscanner cannot reach sonarqube server using docker-compose

I have just created my docker-compose file, trying to run a SonarQube server alongside Postgres and SonarScanner. The SonarQube server and the database can connect; however, my SonarScanner cannot reach the SonarQube server.
This is my docker-compose file:
version: "3"
services:
sonarqube:
image: sonarqube
build: .
expose:
- 9000
ports:
- "127.0.0.1:9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://postgres:5432/sonar
- sonar.jdbc.username=sonar
- sonar.jdbc.password=sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
postgres:
image: postgres
build: .
networks:
- sonarnet
ports:
- "5432:5432"
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
sonarscanner:
image: newtmitch/sonar-scanner
networks:
- sonarnet
depends_on:
- sonarqube
volumes:
- ./:/usr/src
networks:
sonarnet:
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled-plugins:
postgresql:
postgresql_data:
This is my sonar-project.properties file:
# must be unique in a given SonarQube instance
sonar.projectKey=toh-token
# --- optional properties ---
#defaults to project key
#sonar.projectName=toh
# defaults to 'not provided'
#sonar.projectVersion=1.0
# Path is relative to the sonar-project.properties file. Defaults to .
#sonar.sources=$HOME/.solo/angular/toh
# Encoding of the source code. Default is default system encoding
#sonar.sourceEncoding=UTF-8
My sonar-project.properties is located in the same directory as the docker-compose file.
This is what happens whenever I start the services:
Attaching to sonarqube-postgres-1, sonarqube-sonarqube-1, sonarqube-sonarscanner-1
sonarqube-sonarqube-1 | Dropping Privileges
sonarqube-postgres-1 |
sonarqube-postgres-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
sonarqube-postgres-1 |
sonarqube-postgres-1 | 2022-06-12 20:59:39.522 UTC [1] LOG: starting PostgreSQL 14.3 (Debian 14.3-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
sonarqube-postgres-1 | 2022-06-12 20:59:39.523 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
sonarqube-postgres-1 | 2022-06-12 20:59:39.523 UTC [1] LOG: listening on IPv6 address "::", port 5432
sonarqube-postgres-1 | 2022-06-12 20:59:39.525 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
sonarqube-postgres-1 | 2022-06-12 20:59:39.533 UTC [26] LOG: database system was shut down at 2022-06-12 20:57:58 UTC
sonarqube-postgres-1 | 2022-06-12 20:59:39.542 UTC [1] LOG: database system is ready to accept connections
sonarqube-sonarscanner-1 | INFO: Scanner configuration file: /usr/lib/sonar-scanner/conf/sonar-scanner.properties
sonarqube-sonarscanner-1 | INFO: Project root configuration file: /usr/src/sonar-project.properties
sonarqube-sonarscanner-1 | INFO: SonarScanner 4.5.0.2216
sonarqube-sonarscanner-1 | INFO: Java 12-ea Oracle Corporation (64-bit)
sonarqube-sonarscanner-1 | INFO: Linux 5.10.117-1-MANJARO amd64
sonarqube-sonarscanner-1 | INFO: User cache: /root/.sonar/cache
sonarqube-sonarqube-1 | 2022.06.12 20:59:40 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
sonarqube-sonarqube-1 | 2022.06.12 20:59:40 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:41087]
sonarqube-sonarscanner-1 | ERROR: SonarQube server [http://sonarqube:9000] can not be reached
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | INFO: EXECUTION FAILURE
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | INFO: Total time: 0.802s
sonarqube-sonarscanner-1 | INFO: Final Memory: 3M/20M
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | ERROR: Error during SonarScanner execution
sonarqube-sonarscanner-1 | org.sonarsource.scanner.api.internal.ScannerException: Unable to execute SonarScanner analysis
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:85)
sonarqube-sonarscanner-1 | at java.base/java.security.AccessController.doPrivileged(AccessController.java:310)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:74)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:70)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.EmbeddedScanner.doStart(EmbeddedScanner.java:185)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.EmbeddedScanner.start(EmbeddedScanner.java:123)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.cli.Main.execute(Main.java:73)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.cli.Main.main(Main.java:61)
sonarqube-sonarscanner-1 | Caused by: java.lang.IllegalStateException: Fail to get bootstrap index from server
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.BootstrapIndexDownloader.getIndex(BootstrapIndexDownloader.java:42)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.JarDownloader.getScannerEngineFiles(JarDownloader.java:58)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.JarDownloader.download(JarDownloader.java:53)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:76)
sonarqube-sonarscanner-1 | ... 7 more
sonarqube-sonarscanner-1 | Caused by: java.net.ConnectException: Failed to connect to sonarqube/172.30.0.2:9000
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connectSocket(RealConnection.java:265)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connect(RealConnection.java:183)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:224)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java:108)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.find(ExchangeFinder.java:88)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.Transmitter.newExchange(Transmitter.java:169)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:41)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.RealCall.getResponseWithInterceptorChain(RealCall.java:221)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.RealCall.execute(RealCall.java:81)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.ServerConnection.callUrl(ServerConnection.java:114)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.ServerConnection.downloadString(ServerConnection.java:99)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.BootstrapIndexDownloader.getIndex(BootstrapIndexDownloader.java:39)
sonarqube-sonarscanner-1 | ... 10 more
sonarqube-sonarscanner-1 | Caused by: java.net.ConnectException: Connection refused (Connection refused)
sonarqube-sonarscanner-1 | at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
sonarqube-sonarscanner-1 | at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
sonarqube-sonarscanner-1 | at java.base/java.net.Socket.connect(Socket.java:591)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.platform.Platform.connectSocket(Platform.java:130)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connectSocket(RealConnection.java:263)
sonarqube-sonarscanner-1 | ... 31 more
sonarqube-sonarscanner-1 | ERROR:
sonarqube-sonarscanner-1 | ERROR: Re-run SonarScanner using the -X switch to enable full debug logging.
Is there something I am doing wrong?
As @Hans Killian said, the issue was the scanner trying to connect to the server before the server was up and running. I fixed it by adding the following to the scanner's service:
command: ["sh", "-c", "sleep 60 && sonar-scanner -Dsonar.projectBaseDir=/usr/src"]
This keeps the scanner suspended until the server is up and running.
I then added the following credentials to the sonar-project.properties file:
sonar.login=admin
sonar.password=admin
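A more robust alternative to a fixed sleep is to give the server a healthcheck and make the scanner wait for it. A sketch to merge into the compose file, assuming curl is available inside the sonarqube image and a Compose version that supports depends_on conditions:

services:
  sonarqube:
    healthcheck:
      # /api/system/status returns "UP" once SonarQube is ready to serve requests
      test: ["CMD-SHELL", "curl -sf http://localhost:9000/api/system/status | grep -q UP"]
      interval: 10s
      timeout: 5s
      retries: 30
  sonarscanner:
    depends_on:
      sonarqube:
        condition: service_healthy

With this, the scanner only starts once SonarQube reports its status as UP, rather than after an arbitrary delay.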

Rabbitmq on docker: Application mnesia exited with reason: stopped

I'm trying to launch Rabbitmq with docker-compose alongside DRF and Celery.
Here's my docker-compose file. Everything else works fine, except for rabbitmq:
version: '3.7'
services:
  drf:
    build: ./drf
    entrypoint: ["/bin/sh","-c"]
    command:
      - |
        python manage.py migrate
        python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./drf/:/usr/src/drf/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=base_test
  redis:
    image: redis:alpine
    volumes:
      - redis:/data
    ports:
      - "6379:6379"
    depends_on:
      - drf
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
      - ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
    networks:
      - net_1
  celery_worker:
    command: sh -c "wait-for redis:3000 && wait-for drf:8000 -- celery -A base-test worker -l info"
    depends_on:
      - drf
      - db
      - redis
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
    hostname: celery_worker
    image: app-image
    networks:
      - net_1
    restart: on-failure
  celery_beat:
    command: sh -c "wait-for redis:3000 && wait-for drf:8000 -- celery -A mysite beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler"
    depends_on:
      - drf
      - db
      - redis
    hostname: celery_beat
    image: app-image
    networks:
      - net_1
    restart: on-failure
networks:
  net_1:
    driver: bridge
volumes:
  postgres_data:
  redis:
And here's what happens when I launch it. Can someone please help me find the problem? I can't even follow the instructions and read the generated dump file, because the rabbitmq container exits after the error.
rabbitmq | Starting broker...2021-04-05 16:49:58.330 [info] <0.273.0>
rabbitmq | node : rabbit@0e652f57b1b3
rabbitmq | home dir : /var/lib/rabbitmq
rabbitmq | config file(s) : /etc/rabbitmq/rabbitmq.conf
rabbitmq | cookie hash : ZPam/SOKy2dEd/3yt0OlaA==
rabbitmq | log(s) : <stdout>
rabbitmq | database dir : /var/lib/rabbitmq/mnesia/rabbit@0e652f57b1b3
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: list of feature flags found:
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] drop_unroutable_metric
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] empty_basic_get_metric
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] implicit_default_bindings
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] maintenance_mode_status
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [ ] quorum_queue
rabbitmq | 2021-04-05 16:50:09.543 [info] <0.273.0> Feature flags: [ ] user_limits
rabbitmq | 2021-04-05 16:50:09.545 [info] <0.273.0> Feature flags: [ ] virtual_host_metadata
rabbitmq | 2021-04-05 16:50:09.546 [info] <0.273.0> Feature flags: feature flag states written to disk: yes
rabbitmq | 2021-04-05 16:50:10.844 [info] <0.273.0> Running boot step pre_boot defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.845 [info] <0.273.0> Running boot step rabbit_core_metrics defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.846 [info] <0.273.0> Running boot step rabbit_alarm defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.854 [info] <0.414.0> Memory high watermark set to 2509 MiB (2631391641 bytes) of 6273 MiB (6578479104 bytes) total
rabbitmq | 2021-04-05 16:50:10.864 [info] <0.416.0> Enabling free disk space monitoring
rabbitmq | 2021-04-05 16:50:10.864 [info] <0.416.0> Disk free limit set to 50MB
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.273.0> Running boot step code_server_cache defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.273.0> Running boot step file_handle_cache defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.419.0> Limiting to approx 1048479 file handles (943629 sockets)
rabbitmq | 2021-04-05 16:50:10.873 [info] <0.420.0> FHC read buffering: OFF
rabbitmq | 2021-04-05 16:50:10.873 [info] <0.420.0> FHC write buffering: ON
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.273.0> Running boot step worker_pool defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.372.0> Will use 4 processes for default worker pool
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.372.0> Starting worker pool 'worker_pool' with 4 processes in it
rabbitmq | 2021-04-05 16:50:10.876 [info] <0.273.0> Running boot step database defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.899 [info] <0.273.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
rabbitmq | 2021-04-05 16:50:10.900 [info] <0.273.0> Successfully synced tables from a peer
rabbitmq | 2021-04-05 16:50:10.908 [info] <0.44.0> Application mnesia exited with reason: stopped
rabbitmq |
rabbitmq | 2021-04-05 16:50:10.908 [info] <0.44.0> Application mnesia exited with reason: stopped
rabbitmq | 2021-04-05 16:50:10.908 [error] <0.273.0>
rabbitmq | 2021-04-05 16:50:10.908 [error] <0.273.0> BOOT FAILED
rabbitmq | BOOT FAILED
rabbitmq | ===========
rabbitmq | Error during startup: {error,
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> ===========
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> Error during startup: {error,
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> {schema_integrity_check_failed,
rabbitmq | {schema_integrity_check_failed,
rabbitmq | [{table_attributes_mismatch,rabbit_queue,
rabbitmq | 2021-04-05 16:50:10.910 [error] <0.273.0> [{table_attributes_mismatch,rabbit_queue,
rabbitmq | 2021-04-05 16:50:10.910 [error] <0.273.0> [name,durable,auto_delete,exclusive_owner,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> recoverable_slaves,policy,operator_policy,
rabbitmq | [name,durable,auto_delete,exclusive_owner,
rabbitmq | arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> gm_pids,decorators,state,policy_version,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> slave_pids_pending_shutdown,vhost,options],
rabbitmq | 2021-04-05 16:50:10.912 [error] <0.273.0> [name,durable,auto_delete,exclusive_owner,
rabbitmq | 2021-04-05 16:50:10.912 [error] <0.273.0> arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> recoverable_slaves,policy,operator_policy,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> gm_pids,decorators,state,policy_version,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> slave_pids_pending_shutdown,vhost,options,
rabbitmq | recoverable_slaves,policy,operator_policy,
rabbitmq | gm_pids,decorators,state,policy_version,
rabbitmq | slave_pids_pending_shutdown,vhost,options],
rabbitmq | [name,durable,auto_delete,exclusive_owner,
rabbitmq | arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | recoverable_slaves,policy,operator_policy,
rabbitmq | gm_pids,decorators,state,policy_version,
rabbitmq | slave_pids_pending_shutdown,vhost,options,
rabbitmq | type,type_state]}]}}
rabbitmq | 2021-04-05 16:50:10.914 [error] <0.273.0> type,type_state]}]}}
rabbitmq | 2021-04-05 16:50:10.916 [error] <0.273.0>
rabbitmq |
rabbitmq | 2021-04-05 16:50:11.924 [info] <0.272.0> [{initial_call,{application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}},{pid,<0.272.0>},{registered_name,[]},{error_info
,{exit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_
pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_
pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},{rabbit,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,138}]},{proc_l
ib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}},{ancestors,[<0.271.0>]},{message_queue_len,1},{messages,[{'EXIT',<0.273.0>,normal}]},{links,[<0.271.0>,<0.44.0>]},{dictionary,[]},{trap_exit,true},{
status,running},{heap_size,610},{stack_size,28},{reductions,534}], []
rabbitmq | 2021-04-05 16:50:11.924 [error] <0.272.0> CRASH REPORT Process <0.272.0> with 0 neighbours exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name
,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name
,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,t
ype_state]}]},...} in application_master:init/4 line 138
rabbitmq | 2021-04-05 16:50:11.924 [info] <0.44.0> Application rabbit exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},...}
rabbitmq | 2021-04-05 16:50:11.925 [info] <0.44.0> Application rabbit exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},...}
rabbitmq | {"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusi
ve_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusi
ve_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},{rabbit,start,
[normal,[]]}}}"}
rabbitmq | Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusiv
e_owner,arg
rabbitmq |
rabbitmq | Crash dump is being written to: /var/log/rabbitmq/erl_crash.dump...done
rabbitmq exited with code 0
I've managed to make it work by removing container_name and volumes from the rabbitmq section of the docker-compose file. It would still be nice to have an explanation of this behavior.
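The likely explanation is that the bind-mounted data directory (~/.docker-conf/rabbitmq/data/) still held a mnesia database written by an older RabbitMQ version, so the on-disk rabbit_queue table attributes no longer matched the schema the new image expects, which is exactly what schema_integrity_check_failed / table_attributes_mismatch means. If you prefer to keep the volume mounts, clearing the stale data should also work; a sketch using the paths from the compose file above (note this wipes all queues, exchanges and messages):

docker-compose down
# remove the old mnesia database created by the previous RabbitMQ version
rm -rf ~/.docker-conf/rabbitmq/data/*
docker-compose up rabbitmq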

Elastic Search connection refused with Docker Compose (connect ECONNREFUSED )

There are multiple services that I have been trying to run (redis, front-end, back-end and elasticsearch), and I was not able to connect to the elasticsearch service. I even tried giving the service a static IP (the networking part is currently commented out in the compose file attached), and I tried changing the images, but it still was not working.
When I tested ES locally using curl localhost:9200/_cat/health (as I have mapped the container port locally), it reports that the cluster is green. I can connect to the other services like redis without issues. As with redis, I am using the service name, elasticsearch, to connect to it from the back-end service. Following is my docker-compose.yml file.
version: '3'
services:
  arc-external:
    image: arc-external
    build:
      context: ./arc-development-branch/arc-external
    ports:
      - '4201:4201'
    # networks:
    #   - vpcbr
  redis:
    image: redis:3.2.11-alpine
    ports:
      - '6379:6379'
    # networks:
    #   - vpcbr
  elasticsearch:
    image: elasticsearch:2
    ports:
      - '9200:9200'
      - '9300:9300'
    environment:
      - node.name=elasticsearch
      - cluster.name=datasearch
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - cluster.initial_master_nodes=elasticsearch
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/elastic:/usr/share/elasticsearch/data
    # networks:
    #   vpcbr:
    #     ipv4_address: 10.5.0.4
  api-external:
    image: api-external
    build: .
    ports:
      - '3001:3001'
    depends_on:
      - redis
      - elasticsearch
    # networks:
    #   - vpcbr
# networks:
#   vpcbr:
#     driver: bridge
#     ipam:
#       config:
#         - subnet: 10.5.0.0/16
#           gateway: 10.5.0.1
This is the exact error that I am getting when running docker-compose up:
api-external_1 | 2021-03-09 20:41:46.3253 - info: Finished setting up log directories
api-external_1 | 2021-03-09 20:41:46.3514 - info: Connection successful to mongodb @ mongodb://10.0.0.44:27017/arc
api-external_1 | 2021-03-09 20:41:46.3764 - info: Connection successful to redis at: host: redis port: 6379
api-external_1 | Elasticsearch ERROR: 2021-03-09T20:41:46Z
api-external_1 | Error: Request error, retrying
api-external_1 | HEAD http://elasticsearch:9200/ => connect ECONNREFUSED 172.24.0.4:9200
api-external_1 | at Log.error (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/log.js:226:56)
api-external_1 | at checkRespForFailure (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/transport.js:259:18)
api-external_1 | at HttpConnector.<anonymous> (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/connectors/http.js:164:7)
api-external_1 | at ClientRequest.wrapper (/usr/src/app/api-external/node_modules/lodash/lodash.js:4935:19)
api-external_1 | at ClientRequest.emit (events.js:198:13)
api-external_1 | at ClientRequest.EventEmitter.emit (domain.js:448:20)
api-external_1 | at Socket.socketErrorListener (_http_client.js:401:9)
api-external_1 | at Socket.emit (events.js:198:13)
api-external_1 | at Socket.EventEmitter.emit (domain.js:448:20)
api-external_1 | at emitErrorNT (internal/streams/destroy.js:91:8)
api-external_1 | at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
api-external_1 | at process._tickCallback (internal/process/next_tick.js:63:19)
api-external_1 |
api-external_1 | Elasticsearch WARNING: 2021-03-09T20:41:46Z
api-external_1 | Unable to revive connection: http://elasticsearch:9200/
api-external_1 |
api-external_1 | Elasticsearch WARNING: 2021-03-09T20:41:46Z
api-external_1 | No living connections
api-external_1 |
api-external_1 | 2021-03-09 20:41:46.3844 - error: Error: Failed to connect to elasticsearch @ elasticsearch:9200
api-external_1 | at exports.esClient.ping (/usr/src/app/api-external/dist/setup/elastic-search.js:33:46)
api-external_1 | at respond (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/transport.js:327:9)
api-external_1 | at sendReqWithConnection (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/transport.js:226:7)
api-external_1 | at next (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/connection_pool.js:214:7)
api-external_1 | at process._tickCallback (internal/process/next_tick.js:61:11)
api-external_1 | 2021-03-09 20:41:46.3854 - error: Error: No Living connections
api-external_1 | at sendReqWithConnection (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/transport.js:226:15)
api-external_1 | at next (/usr/src/app/api-external/node_modules/elasticsearch/src/lib/connection_pool.js:214:7)
api-external_1 | at process._tickCallback (internal/process/next_tick.js:61:11)
api-external_1 | npm ERR! code ELIFEC
Frankly, I searched a lot and was not able to debug it. Any help would be appreciated.
I was able to figure out the answer. depends_on does not wait for a service to be completely up: api-external starts as soon as redis and elasticsearch start, but elasticsearch needs a bit of time to finish configuring everything, so restarting the api-external service will do the trick.
A more permanent solution is to write a script that waits until elasticsearch is completely up before starting the api-external service.
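A minimal sketch of such a wait script, assuming curl is available in the api-external image; it would be used as the container's entrypoint so the app only starts once Elasticsearch answers over the service name:

#!/bin/sh
# wait-for-es.sh (name is illustrative): block until Elasticsearch responds,
# then run whatever command was passed to the container.
until curl -sf http://elasticsearch:9200/_cat/health > /dev/null; do
  echo "Waiting for Elasticsearch..."
  sleep 2
done
exec "$@"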

Docker - Celery cannot connect to redis

Project structure:
client
nginx
web/
    celery_worker.py
    project
        config.py
        api/
I have the following services in my docker-compose:
version: '3.6'
services:
  web:
    build:
      context: ./services/web
      dockerfile: Dockerfile-dev
    volumes:
      - './services/web:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
      - SECRET_KEY=my_precious
    depends_on:
      - web-db
      - redis
  celery:
    image: dev3_web
    restart: always
    volumes:
      - ./services/web:/usr/src/app
      - ./services/web/logs:/usr/src/app
    command: celery worker -A celery_worker.celery --loglevel=INFO -Q cache
    environment:
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - web
      - redis
    links:
      - redis:redis
  redis:
    image: redis:5.0.3-alpine
    restart: always
    expose:
      - '6379'
    ports:
      - '6379:6379'
  monitor:
    image: dev3_web
    ports:
      - 5555:5555
    command: flower -A celery_worker.celery --port=5555 --broker=redis://redis:6379/0
    depends_on:
      - web
      - redis
  web-db:
    build:
      context: ./services/web/project/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  nginx:
    build:
      context: ./services/nginx
      dockerfile: Dockerfile-dev
    restart: always
    ports:
      - 80:80
      - 8888:8888
    depends_on:
      - web
      - client
      - redis
  client:
    build:
      context: ./services/client
      dockerfile: Dockerfile-dev
    volumes:
      - './services/client:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - 3007:3000
    environment:
      - NODE_ENV=development
      - REACT_APP_WEB_SERVICE_URL=${REACT_APP_WEB_SERVICE_URL}
    depends_on:
      - web
      - redis
CELERY LOG
However, celery is not able to connect, as this log shows:
celery_1 | [2019-03-29 03:09:32,111: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Address not available..
celery_1 | Trying again in 2.00 seconds...
WEB LOG
and neither is the web service (running the backend), as its log shows:
web_1 | Waiting for postgres...
web_1 | PostgreSQL started
web_1 | * Environment: development
web_1 | * Debug mode: on
web_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
web_1 | * Restarting with stat
web_1 | * Debugger is active!
web_1 | * Debugger PIN: 316-641-271
web_1 | 172.21.0.9 - - [29/Mar/2019 03:03:17] "GET /users HTTP/1.0" 200 -
web_1 | 172.21.0.9 - - [29/Mar/2019 03:03:26] "POST /auth/register HTTP/1.0" 500 -
web_1 | Traceback (most recent call last):
web_1 | File "/usr/lib/python3.6/site-packages/redis/connection.py", line 492, in connect
web_1 | sock = self._connect()
web_1 | File "/usr/lib/python3.6/site-packages/redis/connection.py", line 550, in _connect
web_1 | raise err
web_1 | File "/usr/lib/python3.6/site-packages/redis/connection.py", line 538, in _connect
web_1 | sock.connect(socket_address)
web_1 | OSError: [Errno 99] Address not available
web_1 |
web_1 | During handling of the above exception, another exception occurred:
web_1 |
web_1 | Traceback (most recent call last):
web_1 | File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 431, in _reraise_as_library_errors
web_1 | yield
web_1 | File "/usr/lib/python3.6/site-packages/celery/app/base.py", line 744, in send_task
web_1 | self.backend.on_task_call(P, task_id)
web_1 | File "/usr/lib/python3.6/site-packages/celery/backends/redis.py", line 265, in on_task_call
web_1 | self.result_consumer.consume_from(task_id)
web_1 | File "/usr/lib/python3.6/site-packages/celery/backends/redis.py", line 125, in consume_from
web_1 | return self.start(task_id)
web_1 | File "/usr/lib/python3.6/site-packages/celery/backends/redis.py", line 107, in start
web_1 | self._consume_from(initial_task_id)
web_1 | File "/usr/lib/python3.6/site-packages/celery/backends/redis.py", line 132, in _consume_from
web_1 | self._pubsub.subscribe(key)
web_1 | File "/usr/lib/python3.6/site-packages/redis/client.py", line 3096, in subscribe
web_1 | ret_val = self.execute_command('SUBSCRIBE', *iterkeys(new_channels))
web_1 | File "/usr/lib/python3.6/site-packages/redis/client.py", line 3003, in execute_command
web_1 | self.shard_hint
web_1 | File "/usr/lib/python3.6/site-packages/redis/connection.py", line 994, in get_connection
web_1 | connection.connect()
web_1 | File "/usr/lib/python3.6/site-packages/redis/connection.py", line 497, in connect
web_1 | raise ConnectionError(self._error_message(e))
web_1 | redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Address not available.
web_1 |
web_1 | During handling of the above exception, another exception occurred:
web_1 |
web_1 | Traceback (most recent call last):
web_1 | File "/usr/lib/python3.6/site-packages/flask/app.py", line 2309, in __call__
web_1 | return self.wsgi_app(environ, start_response)
web_1 | File "/usr/lib/python3.6/site-packages/flask/app.py", line 2295, in wsgi_app
web_1 | response = self.handle_exception(e)
web_1 | File "/usr/lib/python3.6/site-packages/flask_cors/extension.py", line 161, in wrapped_function
web_1 | return cors_after_request(app.make_response(f(*args, **kwargs)))
web_1 | File "/usr/lib/python3.6/site-packages/flask/app.py", line 1741, in handle_exception
web_1 | reraise(exc_type, exc_value, tb)
web_1 | File "/usr/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
web_1 | raise value
web_1 | File "/usr/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
web_1 | response = self.full_dispatch_request()
web_1 | File "/usr/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
web_1 | rv = self.handle_user_exception(e)
web_1 | File "/usr/lib/python3.6/site-packages/flask_cors/extension.py", line 161, in wrapped_function
web_1 | return cors_after_request(app.make_response(f(*args, **kwargs)))
web_1 | File "/usr/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
REDIS LOG
Redis, however, seems to be working:
redis_1 | 1:C 29 Mar 2019 02:33:32.722 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 29 Mar 2019 02:33:32.722 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 29 Mar 2019 02:33:32.722 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 29 Mar 2019 02:33:32.724 * Running mode=standalone, port=6379.
redis_1 | 1:M 29 Mar 2019 02:33:32.724 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 29 Mar 2019 02:33:32.724 # Server initialized
redis_1 | 1:M 29 Mar 2019 02:33:32.724 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1 | 1:M 29 Mar 2019 02:33:32.725 * DB loaded from disk: 0.000 seconds
redis_1 | 1:M 29 Mar 2019 02:33:32.725 * Ready to accept connections
config.py
class DevelopmentConfig(BaseConfig):
    """Development configuration"""
    DEBUG_TB_ENABLED = True
    DEBUG = True
    BCRYPT_LOG_ROUNDS = 4
    # set key
    # sqlalchemy
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL')
    # SQLALCHEMY_DATABASE_URI = "sqlite:///models/data/database.db"
    # mail
    MAIL_SERVER = 'smtp.gmail.com'
    MAIL_PORT = 587
    MAIL_USE_TLS = True
    MAIL_DEBUG = True
    MAIL_USERNAME = 'me@gmail.com'
    MAIL_PASSWORD = 'MEfAc6w74WGx'
    SEVER_NAME = 'http://127.0.0.1:8080'
    # celery broker
    REDIS_HOST = "0.0.0.0"
    REDIS_PORT = 6379
    BROKER_URL = os.environ.get('REDIS_URL', "redis://{host}:{port}/0".format(
        host=REDIS_HOST,
        port=str(REDIS_PORT)))
    INSTALLED_APPS = ['routes']
    # celery config
    CELERYD_CONCURRENCY = 10
    CELERY_BROKER_URL = BROKER_URL
    CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
    CELERY_IMPORTS = ('project.api.routes.background',)
What am I missing here?
TL;DR change redis://localhost:6379/0 to redis://redis:6379/0
When you run docker-compose, it creates a new network under which all your containers run. The Docker engine also sets up internal routing that allows all the containers to reference each other by name.
In your case, your web and celery containers were trying to access redis over localhost. But inside a container, localhost means that container's own localhost. You need to change the configuration so the hostname points to the name of the container.
If you were not using Docker but had a different machine for each of your containers, localhost would have meant that machine itself. To connect to the redis server, you would have passed the IP address of the machine on which redis was running. In Docker, instead of an IP address, you can just pass the name of the container because of the engine's routing discussed above.
Note that you can still assign a static IP address to each of your containers and use those IP addresses instead of container names. For more details, read the networking section of the Docker documentation.
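Applied to the config.py from the question, that means pointing the broker and result backend at the redis service name instead of localhost / 0.0.0.0. A minimal sketch; the CELERY_BROKER and CELERY_RESULT_BACKEND variable names are the ones already passed to the celery service in the compose file, and the 'redis' fallback is the compose service name:

import os

REDIS_HOST = os.environ.get('REDIS_HOST', 'redis')  # 'redis' is the docker-compose service name
REDIS_PORT = 6379
BROKER_URL = os.environ.get('CELERY_BROKER',
                            'redis://{host}:{port}/0'.format(host=REDIS_HOST, port=REDIS_PORT))
CELERY_BROKER_URL = BROKER_URL
CELERY_RESULT_BACKEND = os.environ.get('CELERY_RESULT_BACKEND', BROKER_URL)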

celery + rabbitmq on Docker

I'm trying to follow this tutorial: How to build docker cluster with celery and RabbitMQ in 10 minutes.
I followed the tutorial, although I did change the following files.
My docker-compose.yml file looks as follows:
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=pass
      - HOSTNAME=rabbitmq
      - RABBITMQ_NODENAME=rabbitmq
    ports:
      - "5672:5672" # we forward this port because it's useful for debugging
      - "15672:15672" # here, we can access rabbitmq management plugin
  worker:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    links:
      - rabbit
    depends_on:
      - rabbit
test_celery/celery.py:
from __future__ import absolute_import, unicode_literals
from celery import Celery

app = Celery('test_celery', broker='amqp://user:pass@rabbit:5672//', backend='rpc://', include=['test_celery.tasks'])
and Dockerfile:
FROM python:3.6
ADD requirements.txt /app/requirements.txt
ADD ./test_celery /app/
WORKDIR /app/
RUN pip install -r requirements.txt
ENTRYPOINT celery -A test_celery worker --loglevel=info
I run the code with the following commands (my OS is Ubuntu 16.04):
sudo docker-compose build
sudo docker-compose scale worker=5
sudo docker-compose up
The output on screen looks something like this:
rabbit_1 | closing AMQP connection <0.501.0> (172.19.0.6:60470 -> 172.19.0.7:5672, vhost: '/', user: 'admin'):
rabbit_1 | client unexpectedly closed TCP connection
rabbit_1 |
rabbit_1 | =WARNING REPORT==== 8-Jun-2017::03:34:15 ===
rabbit_1 | closing AMQP connection <0.479.0> (172.19.0.6:60468 -> 172.19.0.7:5672, vhost: '/', user: 'admin'):
rabbit_1 | client unexpectedly closed TCP connection
rabbit_1 |
rabbit_1 | =WARNING REPORT==== 8-Jun-2017::03:34:15 ===
rabbit_1 | closing AMQP connection <0.366.0> (172.19.0.4:44754 -> 172.19.0.7:5672, vhost: '/', user: 'admin'):
rabbit_1 | client unexpectedly closed TCP connection
rabbit_1 |
rabbit_1 | =WARNING REPORT==== 8-Jun-2017::03:34:15 ===
rabbit_1 | closing AMQP connection <0.359.0> (172.19.0.4:44752 -> 172.19.0.7:5672, vhost: '/', user: 'admin'):
rabbit_1 | client unexpectedly closed TCP connection
worker_1 | [2017-06-08 03:34:19,138: INFO/MainProcess] missed heartbeat from celery@f77048a9d801
worker_1 | [2017-06-08 03:34:24,140: INFO/MainProcess] missed heartbeat from celery@79aa2323a472
worker_1 | [2017-06-08 03:34:24,141: INFO/MainProcess] missed heartbeat from celery@93af751ed1b5
Then in the same directory I run
python -m test_celery.run_tasks
and the output from this gives me:
a kombu.exceptions.OperationalError: timed out error, which I am not sure how to fix in order to get the same output as in the tutorial.
As the output and the error messages ("client unexpectedly closed TCP connection", "kombu.exceptions.OperationalError: timed out") suggest, it seems that RabbitMQ didn't start as expected. Could you run "docker ps -a" to check the status of the Rabbit container? If it has exited, "docker logs container-id" will print out the Rabbit container's logs.
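Something like the following, where the container ID or name is whatever docker ps reports for the rabbit service:

docker ps -a                       # check whether the rabbit container is running or has exited
docker logs <rabbit-container-id>  # print the broker's startup logs if it exited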
