celery + rabbitmq on Docker

I'm trying to follow the tutorial How to build docker cluster with celery and RabbitMQ in 10 minutes.
I followed the tutorial, although I changed the following files.
My docker-compose.yml file looks as follows:
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=pass
      - HOSTNAME=rabbitmq
      - RABBITMQ_NODENAME=rabbitmq
    ports:
      - "5672:5672"   # we forward this port because it's useful for debugging
      - "15672:15672" # here, we can access the rabbitmq management plugin
  worker:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    links:
      - rabbit
    depends_on:
      - rabbit
test_celery/celery.py:
from __future__ import absolute_import, unicode_literals
from celery import Celery
app = Celery('test_celery', broker='amqp://user:pass@rabbit:5672//', backend='rpc://', include=['test_celery.tasks'])
and Dockerfile:
FROM python:3.6
ADD requirements.txt /app/requirements.txt
ADD ./test_celery /app/
WORKDIR /app/
RUN pip install -r requirements.txt
ENTRYPOINT celery -A test_celery worker --loglevel=info
I run the code with the following commands (my OS is Ubuntu 16.04):
sudo docker-compose build
sudo docker-compose scale worker=5
sudo docker-compose up
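As an aside (not from the tutorial), newer Compose releases let you fold the scale step into up:
sudo docker-compose up --build --scale worker=5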
The output on screen looks something like this:
rabbit_1 | closing AMQP connection <0.501.0> (172.19.0.6:60470 -> 172.19.0.7:5672, vhost: '/', user: 'admin'):
rabbit_1 | client unexpectedly closed TCP connection
rabbit_1 |
rabbit_1 | =WARNING REPORT==== 8-Jun-2017::03:34:15 ===
rabbit_1 | closing AMQP connection <0.479.0> (172.19.0.6:60468 -> 172.19.0.7:5672, vhost: '/', user: 'admin'):
rabbit_1 | client unexpectedly closed TCP connection
rabbit_1 |
rabbit_1 | =WARNING REPORT==== 8-Jun-2017::03:34:15 ===
rabbit_1 | closing AMQP connection <0.366.0> (172.19.0.4:44754 -> 172.19.0.7:5672, vhost: '/', user: 'admin'):
rabbit_1 | client unexpectedly closed TCP connection
rabbit_1 |
rabbit_1 | =WARNING REPORT==== 8-Jun-2017::03:34:15 ===
rabbit_1 | closing AMQP connection <0.359.0> (172.19.0.4:44752 -> 172.19.0.7:5672, vhost: '/', user: 'admin'):
rabbit_1 | client unexpectedly closed TCP connection
worker_1 | [2017-06-08 03:34:19,138: INFO/MainProcess] missed heartbeat from celery@f77048a9d801
worker_1 | [2017-06-08 03:34:24,140: INFO/MainProcess] missed heartbeat from celery@79aa2323a472
worker_1 | [2017-06-08 03:34:24,141: INFO/MainProcess] missed heartbeat from celery@93af751ed1b5
Then in the same directory I run
python -m test_celery.run_tasks
and the output gives me a kombu.exceptions.OperationalError: timed out error, which I am not sure how to fix in order to get the same output as in the tutorial.

Going by the output and the error report, "client unexpectedly closed TCP connection" and "kombu.exceptions.OperationalError: timed out", it seems that RabbitMQ didn't start as expected. Could you run docker ps -a to check the status of the Rabbit container? If it has exited, docker logs <container-id> will print the container's logs.
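For example, a quick sketch of those checks (the exact container name is an assumption; Compose typically names it <project>_rabbit_1):
sudo docker ps -a --filter "name=rabbit"
sudo docker logs <project>_rabbit_1 --tail 50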

Related

Laravel 9 Sail port forward error for mysql

I have forwarded the application's MySQL port to 3307 because I need my host MySQL to keep running on 3306, but I get the error below.
I am able to get the welcome page after running sail up.
I am using the latest Laravel 9.
Error
Illuminate\Database\QueryException
PHP 8.1.9
9.26.1
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo for mysql failed: Temporary failure in name resolution
SELECT count(*) AS aggregate FROM `users` WHERE `email` = test@test.com
.env
APP_URL=http://127.0.0.1
APP_PORT=81
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
FORWARD_DB_PORT=3307
docker-compose.yml
# For more information: https://laravel.com/docs/sail
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.1
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.1/app
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    ports:
      - '${APP_PORT:-81}:80'
      - '${VITE_PORT:-5174}:${VITE_PORT:-5173}'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
      XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
      XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - mysql
  mysql:
    image: 'mysql/mysql-server:8.0'
    ports:
      - '${FORWARD_DB_PORT:-3307}:3306'
    environment:
      MYSQL_ROOT_PASSWORD: '{DB_PASSWORD}'
      MYSQL_ROOT_HOST: '{DB_HOST}'
      MYSQL_DATABASE: '{DB_DATABASE}'
      MYSQL_USER: '{DB_USERNAME}'
      MYSQL_PASSWORD: '{DB_PASSWORD}'
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
    volumes:
      - 'sail-mysql:/var/lib/mysql'
      - './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
    networks:
      - sail
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
      retries: 3
      timeout: 5s
networks:
  sail:
    driver: bridge
volumes:
  sail-mysql:
    driver: local
Update 1
My terminal output is as follows:
sm_v2-laravel.test-1 "start-container" laravel.test exited (0)
Shutting down old Sail processes...
[+] Running 0/1
 ⠙ Network sm_v2_sail Creating 0.2s
WARNING: Found orphan containers ([sm_v2-service-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[+] Running 3/3
 ⠿ Network sm_v2_sail Created 0.2s
 ⠿ Container sm_v2-mysql-1 Created 1.5s
 ⠿ Container sm_v2-laravel.test-1 Created 0.5s
Attaching to sm_v2-laravel.test-1, sm_v2-mysql-1
sm_v2-mysql-1 | [Entrypoint] MySQL Docker Image 8.0.30-1.2.9-server
sm_v2-mysql-1 | [Entrypoint] Starting MySQL 8.0.30-1.2.9-server
sm_v2-mysql-1 | 2022-08-30T15:19:04.087084Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead.
sm_v2-mysql-1 | 2022-08-30T15:19:04.092964Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.30) starting as process 1
sm_v2-mysql-1 | 2022-08-30T15:19:04.148193Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
sm_v2-mysql-1 | 2022-08-30T15:19:04.303213Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files.
sm_v2-mysql-1 | 2022-08-30T15:19:04.755173Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine
sm_v2-mysql-1 | 2022-08-30T15:19:04.755609Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
sm_v2-mysql-1 | 2022-08-30T15:19:04.755681Z 0 [ERROR] [MY-010119] [Server] Aborting
sm_v2-mysql-1 | 2022-08-30T15:19:04.757223Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.30) MySQL Community Server - GPL.
sm_v2-mysql-1 exited with code 1
sm_v2-laravel.test-1 | 2022-08-30 15:19:07,746 INFO Set uid to user 0 succeeded
sm_v2-laravel.test-1 | 2022-08-30 15:19:07,751 INFO supervisord started with pid 1
sm_v2-laravel.test-1 | 2022-08-30 15:19:08,756 INFO spawned: 'php' with pid 16
sm_v2-laravel.test-1 | 2022-08-30 15:19:09,759 INFO success: php entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
sm_v2-laravel.test-1 |
sm_v2-laravel.test-1 | INFO Server running on [http://0.0.0.0:80].
sm_v2-laravel.test-1 |
sm_v2-laravel.test-1 | Press Ctrl+C to stop the server
sm_v2-laravel.test-1 |
sm_v2-laravel.test-1 | 2022-08-30 15:19:21 ................................................... ~ 1s
sm_v2-laravel.test-1 | 2022-08-30 15:19:23 /favicon.ico ...................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:23 ................................................... ~ 1s
sm_v2-laravel.test-1 | 2022-08-30 15:19:24 /build/assets/app.ac81e540.css .................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:24 /build/assets/app.ab93cf8a.js ..................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:24 /favicon.ico ...................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:27 ................................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:29 /favicon.ico ...................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 16:07:14 ................................................... ~ 0s
Update 2
I get a different error now:
SQLSTATE[HY000] [1045] Access denied for user 'root'@'192.168.128.3' (using password: YES)
I finally solved it after more than a week of mental frustration. Strangely, though, nobody on any of the well-known forums I tried was able to provide an answer.
I made sure that two users were added on my host (main computer) machine, not the Docker MySQL, and granted them full privileges using the mysql CLI (see the sketch after the entries). There were two entries like these, along with other entries:
root | %
root | localhost
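For reference, the grants I mean look roughly like this from the host shell (a sketch only; the password is a placeholder, and the host pattern matches the first entry above):
sudo mysql -u root -p -e "CREATE USER IF NOT EXISTS 'root'@'%' IDENTIFIED BY 'your-password'; GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION; FLUSH PRIVILEGES;"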
I ran the following commands one after another. I don't know which commands exactly solved the problem, as I am a beginner with Docker and Sail, but these are the steps I tried, after which it started working.
I was getting "Docker is not running.", so I tried the following to get Docker running:
sudo systemctl enable docker.service
sudo systemctl enable docker.socket
After that I tried sail up, but it did not work, so I ran the following:
sudo systemctl stop docker
sudo systemctl start docker
sudo systemctl disable docker.service
sudo systemctl enable docker.service
sail up
After that I rebooted my computer (I am on Ubuntu 22.04)
reboot
I removed some unnecessary files. I also got a failed-state error on the Docker service, which I solved by running lines 2 and 3 of the code below:
sudo rm /etc/systemd/system/docker.service.d/override.conf
sudo systemctl reset-failed docker.service
sudo systemctl start docker.service
systemctl daemon-reload
sudo systemctl start docker.service
sail down
sail build --no-cache
sail up
php artisan config:clear
After that I migrated the database and it worked:
sail artisan migrate
After that
sudo systemctl enable docker
sail up
sail build
sail ps
sudo usermod -aG docker ${USER}
Removed daemon.json
sudo rm daemon.json
Removed the old volumes. I think this was the step that actually helped, since dropping the volumes wipes the corrupted MySQL data directory behind the InnoDB errors shown in Update 1:
sail down --rmi all -v
sail up / (you can use sail up --no-cache)
Now MySQL works on the host computer's port 3306 as well as on the other ports used for Docker (3307, 3308) simultaneously.
I appreciate @Mihai's effort, because only @Mihai responded in the comments.
Update 2
I had to add platform: 'linux/x86_64' to the docker-compose.yml file:
mysql:
  image: 'mysql/mysql-server:8.0'
  platform: 'linux/x86_64'
  ports:
    - '${FORWARD_DB_PORT:-3307}:3306'

Topic authorization failed error while KSQL tries connecting to cluster

I am trying to bring up my ksqldb-server Docker instance and connect it to a remote Kafka cluster, but I am getting an error. Here are the details.
docker-compose.yml
---
version: '2'
services:
  ksqldb-server:
    build:
      context: .
      dockerfile: ./Dockerfile
    hostname: ksqldb-server
    container_name: ksqldb-server-remote
    ports:
      - "8088:8088"
    environment:
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_BOOTSTRAP_SERVERS: broker1:port1,broker2:port2,broker3:port,broker4:port
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 2
      KSQL__confluent-ksql-apptest__command_topic_REPLICATION_FACTOR: 2
      KSQL_TOPIC_AUTHORIZATION_CHECKS: "false"
  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.24.0
    container_name: ksqldb-cli-remote
    depends_on:
      # - brokers
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
Dockerfile
FROM confluentinc/ksqldb-server:0.24.0
ADD ksql-server.properties /etc/meta/ksql-server.properties
ADD certificates /etc/meta/certificates
RUN rm -fr /etc/ksqldb/ksql-server.properties
RUN cp -a /etc/meta/ksql-server.properties /etc/ksqldb/ksql-server.properties
ENV KSQL_BOOTSTRAP_SERVERS="broker1:port,broker2:port,broker3:port,broker4:port"
ENV KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE="true"
ENV KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE="true"
ENV KSQL_LISTENERS="http://0.0.0.0:8088"
ENTRYPOINT ["/bin/sh","-c","java -cp /usr/share/java/ksqldb-rest-app/*: -Xmx3g -server -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+ExplicitGCInvokesConcurrent -XX:NewRatio=1 -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dksql.log.dir=/usr/logs -Dlog4j.configuration=file:/etc/ksqldb/log4j.properties -Dksql.server.install.dir=/usr -Xlog:gc*:file=/usr/logs/ksql-server-gc.log:time,tags:filecount=10,filesize=102400 io.confluent.ksql.rest.server.KsqlServerMain /etc/meta/ksql-server.properties"]
ksql-server.properties
#------ Endpoint config -------
### HTTP ###
listeners=http://0.0.0.0:8088
### HTTPS ###
security.protocol=SSL
ssl.truststore.location=/etc/meta/certificates/truststore.jks
ssl.truststore.password=prmcert
ssl.keystore.location=/etc/meta/certificates/keystore.jks
ssl.keystore.password=prmcert
# ssl.key.password=?
#------ Logging config -------
# Automatically create the processing log topic if it does not already exist:
ksql.logging.processing.topic.auto.create=true
# Automatically create a stream within KSQL for the processing log:
ksql.logging.processing.stream.auto.create=true
#------ Kafka -------
# The set of Kafka brokers to bootstrap Kafka cluster information from:
bootstrap.servers=elr6hz1-06-s13.uhc.com:16016,elr6hz1-06-s16.uhc.com:16016,elr6hz1-06-s17.uhc.com:16016,elr6hz1-06-s19.uhc.com:16016,elr6hz1-06-s12.uhc.com:16016
# Enable snappy compression for the Kafka producers
compression.type=snappy
#------ Schema Registry -------
ksql.service.id=apptest_
This is the error that I see
ksqldb-server-remote | [2022-04-21 04:26:48,821] ERROR Unhandled exception in server startup (io.confluent.ksql.rest.server.KsqlServerMain:97)
ksqldb-server-remote | io.confluent.ksql.exception.KafkaResponseGetFailedException: Failed to set config for Kafka Topic _confluent-ksql-apptest__command_topic
ksqldb-server-remote |   at io.confluent.ksql.services.KafkaTopicClientImpl.addTopicConfig(KafkaTopicClientImpl.java:258)
ksqldb-server-remote |   at io.confluent.ksql.rest.util.KsqlInternalTopicUtils.validateTopicConfig(KsqlInternalTopicUtils.java:124)
ksqldb-server-remote |   at io.confluent.ksql.rest.util.KsqlInternalTopicUtils.ensureTopic(KsqlInternalTopicUtils.java:66)
ksqldb-server-remote |   at io.confluent.ksql.rest.server.KsqlRestApplication.registerCommandTopic(KsqlRestApplication.java:1069)
ksqldb-server-remote |   at io.confluent.ksql.rest.server.KsqlRestApplication.initialize(KsqlRestApplication.java:455)
ksqldb-server-remote |   at io.confluent.ksql.rest.server.KsqlRestApplication.startKsql(KsqlRestApplication.java:390)
ksqldb-server-remote |   at io.confluent.ksql.rest.server.KsqlRestApplication.startAsync(KsqlRestApplication.java:372)
ksqldb-server-remote |   at io.confluent.ksql.rest.server.KsqlServerMain.tryStartApp(KsqlServerMain.java:93)
ksqldb-server-remote |   at io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:68)
ksqldb-server-remote | Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: Topic authorization failed.
Can someone please suggest why I am seeing this error and how to resolve it?
The exception shows that KSQL cannot create the command topic because it lacks permissions; it does not say which permissions are missing, though. You can look at kafka-authorizer.log on the Kafka broker to see which ACL was denied, and add the required ACL to allow KSQL to start.
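As an illustration, a sketch of adding such an ACL with the standard Kafka CLI (the principal name, bootstrap address, and client config file are placeholders, not from the question; the tool ships as kafka-acls.sh in Apache Kafka):
kafka-acls --bootstrap-server broker1:port1 \
  --command-config client.properties \
  --add --allow-principal User:ksql-user \
  --operation All \
  --topic _confluent-ksql-apptest__command_topic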

Symfony 5: Why am I getting this error? SQLSTATE[HY000] [2002] Connection refused

I'm getting this error:
An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
I have tried changing the IP address in my .env to localhost but I then got a not found error.
I also tried changing my .env db host to match my docker compose file:
DB_HOST=mysql
docker compose file:
version: "3.7"
services:
app:
image: kooldev/php:7.4-nginx
ports:
- ${KOOL_APP_PORT:-80}:80
environment:
ASUSER: ${KOOL_ASUSER:-0}
UID: ${UID:-0}
volumes:
- .:/app:delegated
networks:
- kool_local
- kool_global
database:
image: mysql:8.0
command: --default-authentication-plugin=mysql_native_password
ports:
- ${KOOL_DATABASE_PORT:-3306}:3306
I used kool.dev to do the Symfony install; that looks OK, and the DB seems to be working as expected:
user@DESKTOP-QSCSABV:/mnt/c/dev/symfony-project$ kool status
+----------+---------+------------------------------------------------------+-------------------------+
| SERVICE  | RUNNING | PORTS                                                | STATE                   |
+----------+---------+------------------------------------------------------+-------------------------+
| app      | Running | 0.0.0.0:80->80/tcp, :::80->80/tcp, 9000/tcp          | Up 15 minutes           |
| database | Running | 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp | Up 15 minutes (healthy) |
+----------+---------+------------------------------------------------------+-------------------------+
[done] Fetching services status
in my .env file:
DB_USERNAME=myusername
DB_PASSWORD=mypassword
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=mydatabase
DB_VERSION=8.0
DATABASE_URL="mysql://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_DATABASE}?serverVersion=${DB_VERSION}"
Any suggestions on how to resolve this?
DB_HOST=127.0.0.1
in your environment file should be
DB_HOST=database
127.0.0.1 is the address of the container itself, so in your case the app container tries to make a connection to itself. Docker Compose creates a virtual network in which each container can be addressed by its service name, so in your case you want to connect to the database service.
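Applied to the .env above, that gives (all other values unchanged):
DB_HOST=database
DATABASE_URL="mysql://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_DATABASE}?serverVersion=${DB_VERSION}"
which resolves to mysql://myusername:mypassword@database:3306/mydatabase?serverVersion=8.0.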

Serverless Offline - ECONNREFUSED Elasticmq with Docker-Compose

I am working on a web scraper using the Serverless Framework that I want users to be able to run locally without having to install any dependencies on their machine. I am using serverless-offline-sqs with a local ElasticMQ server hosted in a Docker container.
Currently, I have a docker-compose file that I run, then run serverless offline in another terminal, which works well. That docker-compose.yml file looks like this:
# docker-compose.yml
version: '3'
services:
  database:
    image: 'mongo'
    container_name: 'database'
    environment:
      - MONGO_INITDB_DATABASE=scraper_database
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - '27017-27019:27017-27019'
    command: mongod --quiet --logpath /dev/null
  sqs:
    image: softwaremill/elasticmq:latest
    container_name: 'sqs'
    ports:
      - '9324:9324'
  sqs-create:
    image: infrastructureascode/aws-cli:latest
    container_name: 'sqs-create'
    links:
      - sqs
    entrypoint: sh
    command: ./create-queues.sh
    volumes:
      - ./scripts/create-queues.sh:/project/create-queues.sh:ro
    environment:
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
      - AWS_DEFAULT_REGION=eu-east-1
      - AWS_ENDPOINT_URL=http://sqs:9324
This works well with no issues, and after ensuring that all of my containers are up, I can run serverless offline and my app works. I am now trying to also run Serverless in its own Docker container. I have created the following Dockerfile:
# Dockerfile
FROM node:12
RUN npm --loglevel=error install -g serverless && npm --loglevel=error install -g serverless-offline
WORKDIR /usr/src/app
COPY package*.json ./
COPY ./scripts/wait-for-it.sh ./
RUN ["chmod", "+x", "/usr/src/app/wait-for-it.sh"]
RUN npm install
COPY . .
EXPOSE 3000
I am trying to follow the Docker documentation on controlling start-up order, found here, to ensure that my queue service is up before this runs. This has led me to this docker-compose.yml:
version: '3'
services:
  serverless:
    container_name: 'serverless'
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - .env.development
    ports:
      - '3000:3000'
    depends_on:
      - sqs
    command: ["./wait-for-it.sh", "sqs:9324", "--", "serverless", "offline"]
  database:
    image: 'mongo'
    container_name: 'database'
    environment:
      - MONGO_INITDB_DATABASE=scraper_database
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - '27017-27019:27017-27019'
    command: mongod --quiet --logpath /dev/null
  sqs:
    image: softwaremill/elasticmq:latest
    container_name: 'sqs'
    ports:
      - '9324:9324'
  sqs-create:
    image: infrastructureascode/aws-cli:latest
    container_name: 'sqs-create'
    links:
      - sqs
    entrypoint: sh
    command: ./create-queues.sh
    volumes:
      - ./scripts/create-queues.sh:/project/create-queues.sh:ro
    environment:
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
      - AWS_DEFAULT_REGION=eu-east-1
      - AWS_ENDPOINT_URL=http://sqs:9324
I am using the wait-for-it.sh script that the Docker documentation suggests, but I am getting the following error:
Successfully built 38df0769a202
Successfully tagged assessorscraper_serverless:latest
Starting sqs ... done
Starting database ... done
Recreating serverless ... done
Starting sqs-create ... done
Attaching to sqs, database, sqs-create, serverless
serverless | wait-for-it.sh: waiting 15 seconds for sqs:9324
sqs | 07:54:45.046 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (1.0.0) ...
sqs | 07:54:48.133 [elasticmq-akka.actor.default-dispatcher-6] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
sqs | 07:54:51.385 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.rest.sqs.TheSQSRestServerBuilder - Started SQS rest server, bind address 0.0.0.0:9324, visible server address http://localhost:9324
sqs | 07:54:51.643 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.r.s.TheStatisticsRestServerBuilder - Started statistics rest server, bind address 0.0.0.0:9325
sqs | 07:54:51.649 [main] INFO org.elasticmq.server.Main$ - === ElasticMQ server (1.0.0) started in 8819 ms ===
serverless | wait-for-it.sh: sqs:9324 is available after 9 seconds
sqs-create | Creating queue TownQueue
sqs | 07:54:53.808 [elasticmq-akka.actor.default-dispatcher-6] INFO o.elasticmq.actor.QueueManagerActor - Creating queue QueueData(TownQueue,MillisVisibilityTimeout(30000),PT0S,PT0S,2021-01-07T07:54:53.494Z,2021-01-07T07:54:53.494Z,None,false,false,None,None,Map())
sqs-create exited with code 0
serverless | Serverless: Running "serverless" installed locally (in service node_modules)
serverless | Serverless: DOTENV: Loading environment variables from .env.development:
serverless | Serverless: - DATABASE_URL
serverless | Serverless: - ACCOUNT_ID
serverless | Serverless: - QUEUE_URL
serverless | Serverless: Deprecation warning: Starting with next major version, default value of provider.lambdaHashingVersion will be equal to "20201221"
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#LAMBDA_HASHING_VERSION_V2
serverless | Serverless: Deprecation warning: Starting with next major version, API Gateway naming will be changed from "{stage}-{service}" to "{service}-{stage}".
serverless | Set "provider.apiGateway.shouldStartNameWithService" to "true" to adapt to the new behavior now.
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#AWS_API_GATEWAY_NAME_STARTING_WITH_SERVICE
serverless | offline: Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | Networking Error ---------------------------------------
serverless |
serverless | Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
serverless |
serverless | Get Support --------------------------------------------
serverless | Docs: docs.serverless.com
serverless | Bugs: github.com/serverless/serverless/issues
serverless | Issues: forum.serverless.com
serverless |
serverless | Your Environment Information ---------------------------
serverless | Operating System: linux
serverless | Node Version: 12.20.1
serverless | Framework Version: 2.17.0 (local)
serverless | Plugin Version: 4.4.1
serverless | SDK Version: 2.3.2
serverless | Components Version: 3.4.4
serverless |
Am I still getting some race condition? Any suggestions here would be much appreciated!
The problem is likely in ECONNREFUSED 0.0.0.0:9324. Judging by the port number, it is an attempt to reach the sqs service, but the IP address is wrong: it should connect to sqs:9324 or to the IP address of that container. 0.0.0.0 means 'any IP address' and is normally used to bind a port, not to connect to one. Check your serverless configuration.
Also, you can easily check whether you are in a race condition: simply start your services one by one in several terminals:
docker-compose up database
docker-compose up sqs
docker-compose up sqs-create
docker-compose up serverless
If everything comes up cleanly when started one by one, then it is likely a race condition. In that case you can add the restart: on-failure property to a service (see the sketch below); this way, if a container exits with a code other than 0, Docker restarts the container.
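A minimal sketch of that property applied to the serverless service from the compose file above:
services:
  serverless:
    restart: on-failure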
It turns out my issue was actually in my serverless.yml configuration. There, I had a custom configuration as follows:
custom:
  serverless-offline-sqs:
    autoCreate: true # create queue if not exists
    apiVersion: '2012-11-05'
    endpoint: http://0.0.0.0:9324
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false
The correct endpoint was actually http://sqs:9324. Everything else was correct!
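With that fix, the block reads (a direct restatement of the snippet above with only the endpoint changed):
custom:
  serverless-offline-sqs:
    autoCreate: true # create queue if not exists
    apiVersion: '2012-11-05'
    endpoint: http://sqs:9324
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false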

I am getting error when I try to dockerize my MERN application

Here is my Dockerfile for React.js, with the error I got in the terminal:
FROM node:8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./package.json /usr/src/app
RUN npm install
RUN npm build
EXPOSE 3000
CMD ["npm", "run", "start"]
Error:
react_1 |
react_1 | > ecom-panther#0.1.0 start /usr/src/app
react_1 | > react-scripts start
react_1 |
react_1 | ℹ 「wds」: Project is running at http://172.18.0.2/
react_1 | ℹ 「wds」: webpack output is served from
react_1 | ℹ 「wds」: Content not from webpack is served from /usr/src/app/public
react_1 | ℹ 「wds」: 404s will fallback to /
react_1 | Starting the development server...
react_1 |
ecom-panther_react_1 exited with code 0
For Node and Express, I got this:
express_1 | (node:30) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
express_1 | server is running on port: 5000
express_1 | (node:30) UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [localhost:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017]
express_1 | at Pool.<anonymous> (/usr/src/app/node_modules/mongodb/lib/core/topologies/server.js:438:11)
express_1 | at emitOne (events.js:116:13)
express_1 | at Pool.emit (events.js:211:7)
express_1 | at createConnection (/usr/src/app/node_modules/mongodb/lib/core/connection/pool.js:561:14)
express_1 | at connect (/usr/src/app/node_modules/mongodb/lib/core/connection/pool.js:994:11)
express_1 | at makeConnection (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:31:7)
express_1 | at callback (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:264:5)
express_1 | at Socket.err (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:294:7)
express_1 | at Object.onceWrapper (events.js:315:30)
express_1 | at emitOne (events.js:116:13)
express_1 | at Socket.emit (events.js:211:7)
express_1 | at emitErrorNT (internal/streams/destroy.js:73:8)
express_1 | at _combinedTickCallback (internal/process/next_tick.js:139:11)
express_1 | at process._tickCallback (internal/process/next_tick.js:181:9)
express_1 | (node:30) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
express_1 | (node:30) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Dockerfile for the backend:
FROM node:8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
COPY . /usr/src/app
EXPOSE 5000
CMD ["npm","start"]
Here is my docker-compose.yml file
version: '3' # specify docker-compose version
# Define the service/container to be run
services:
react: #name of first service
build: client #specify the directory of docker file
ports:
- "3000:3000" #specify port mapping
express: #name of second service
build: server #specify the directory of docker file
ports:
- "5000:5000" #specify port mapping
links:
- database #link this service to the database service
database: #name of third service
image: mongo #specify image to build contasiner flow
ports:
- "27017:27017" #specify port mapping
How can I run the frontend in the browser, and is there an easier or better way to do this?
Error 1:
Add stdin_open: true to your react service, like:
...
services:
  react:            # name of first service
    build: client   # specify the directory of the Dockerfile
    stdin_open: true
    ports:
      - "3000:3000" # specify port mapping
...
You might need to rebuild or clear the cache, so run docker-compose up --build, or docker-compose build --no-cache and then docker-compose up.
Error 2:
The database connection string in your index.js (or whatever you named it) should be:
mongodb://database:27017/
where database is the name of your MongoDB service. You can also use the container's IP address instead: find it with docker inspect <container> and use the IP you see there. Ideally you want an ENV in your Dockerfile or docker-compose.yml:
ENV MONGO_URL mongodb://database:27017/
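Or, equivalently, as an environment entry on the express service in docker-compose.yml (sticking with the MONGO_URL name from the line above; your app must read it, e.g. via process.env.MONGO_URL):
express:
  build: server
  environment:
    - MONGO_URL=mongodb://database:27017/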
