redis_slave can't reach redis_master when using docker-compose - docker

I am using docker-compose with four Postgres containers, a redis_master container, and a redis_slave container. The Redis server boots normally, but about eight hours after launch the slave can no longer reach the master for replication. The docker-compose logs show the following errors from redis_master repeating:
redis_master | 1:S 15 Jul 2020 11:39:05.338 * Connecting to MASTER UNKNOWN.IP:58270
redis_master | 1:S 15 Jul 2020 11:39:05.338 * MASTER REPLICA sync started
redis_master | 1:S 15 Jul 2020 11:39:05.497 # Error condition on socket for SYNC: Connection refused
redis_master | 1:S 15 Jul 2020 11:39:06.341 * Connecting to MASTER UNKNOWN.IP:58270
redis_master | 1:S 15 Jul 2020 11:39:06.341 * MASTER REPLICA sync started
redis_master | 1:S 15 Jul 2020 11:39:06.506 # Error condition on socket for SYNC: Connection refused
The UNKNOWN.IP is not a private IP address, not my server's IP, and not the IP of any client accessing the server. That IP address first appears in the logs here:
redis_master | 1:S 15 Jul 2020 11:37:50.127 * REPLICAOF UNKNOWN.IP:58270 enabled (user request from 'id=525 addr=UNKNOWN.IP:35762 fd=13 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=48 qbuf-free=32720 obl=0 oll=0 omem=0 events=r cmd=slaveof')
This seems to trigger an error whenever clients try to interact with the Redis server:
WARN [2020-07-15 13:58:42,036] org.eclipse.jetty.server.HttpChannel: /v1/websocket/
! redis.clients.jedis.exceptions.JedisDataException: READONLY You can't write against a read only replica.
The server has also been hit by a crypto-mining malware, "kdevtmpfsi", which seems to have gotten into the Redis server and may be causing some of these issues. The malware uses about 400% CPU and 1 GB of memory. I have not been able to get rid of it completely, so I am redoing the setup on a new server with more ports closed to try to keep the malware out. Any advice for stopping the malware from getting in again, or on what might be causing the Redis replication issue? My docker-compose config file is taken from a GitHub repo for running a clone of the Signal app's server. I start the Docker environment with
sudo docker-compose up
and I am using all default configurations for redis.
docker-compose.yml
version: '2.2'
services:
  signal_account_database:
    image: postgres:11
    container_name: postgres_account_database
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: signal
      PGDATA: /var/lib/postgresql/data/pgdata
    ports:
      - '5431:5432'
    volumes:
      - ./postgres_database:/var/lib/postgresql/data
  signal_keys_database:
    image: postgres:11
    container_name: postgres_keys_database
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: signal
      PGDATA: /var/lib/postgresql/data/pgdata
    ports:
      - '5432:5432'
    volumes:
      - ./postgres_keys_database:/var/lib/postgresql/data
  signal_message_database:
    image: postgres:11
    container_name: postgres_message_database
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: signal
      PGDATA: /var/lib/postgresql/data/pgdata
    ports:
      - '5433:5432'
    volumes:
      - ./postgres_message_store:/var/lib/postgresql/data
  signal_abuse_database:
    image: postgres:11
    container_name: postgres_abuse_database
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: signal
      PGDATA: /var/lib/postgresql/data/pgdata
    ports:
      - '5434:5432'
    volumes:
      - ./postgres_abuse_database:/var/lib/postgresql/data
  redis_main:
    image: redis:5
    container_name: redis_master
    ports:
      - '6379:6379'
    volumes:
      - ./redis_main:/data
  redis_replication:
    image: redis:5
    container_name: redis_slave
    command: redis-server --port 6380
    ports:
      - '6380:6380'
    volumes:
      - ./redis_replication:/data
Has anyone else run into this replication problem? I am looking for advice on fixing it.
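For what it's worth, the `REPLICAOF UNKNOWN.IP:58270 enabled (user request ... cmd=slaveof)` log line suggests an outside client reached the published port 6379 and issued SLAVEOF itself, which is also the classic entry point for the kdevtmpfsi miner. A hedged sketch of a hardened `redis_main` service (the `requirepass` value and the decision to drop the published port are my assumptions, not part of the original setup):

```yaml
redis_main:
  image: redis:5
  container_name: redis_master
  restart: always
  # Require a password; STRONG_PASSWORD_HERE is a placeholder, not a real credential.
  command: redis-server --requirepass STRONG_PASSWORD_HERE
  # No `ports:` section: other containers can still reach redis_master:6379
  # over the default compose network, but the host no longer exposes it
  # to the internet.
  volumes:
    - ./redis_main:/data
```

Clients inside the compose network would then authenticate with `AUTH STRONG_PASSWORD_HERE` (or the equivalent password option in Jedis), and any replica would need a matching `--masterauth` flag.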

redis_main:
  image: redis:5
  container_name: redis_master
  restart: always
  command: redis-server --port 6379
  ports:
    - '6379:6379'
  volumes:
    - ./redis_main:/data
redis_replication:
  image: redis:5
  container_name: redis_slave
  command: redis-server --slaveof 127.0.0.1 6379 --port 6380 --slave-announce-ip 127.0.0.1
  ports:
    - '6380:6380'
  volumes:
    - ./redis_replication:/data
sentinel:
  build: ./sentinel
  container_name: redis_sentinel
  restart: always
  command: redis-sentinel --port 26379
  ports:
    - '26379:26379'
  environment:
    - SENTINEL_NAME=mysentinel
    - HOST_IP=127.0.0.1
  volumes:
    - ./redis_slave:/data
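Since the `sentinel` service above is built from a custom image, it also needs a sentinel.conf. A minimal sketch of one (the master name `mymaster` and a quorum of 1 for a single-sentinel dev setup are my assumptions):

```
port 26379
# Monitor the master at redis_master:6379; quorum of 1 because this
# dev setup runs only a single sentinel.
sentinel monitor mymaster redis_master 6379 1
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 10000
```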

Related

Docker containers in the same network cannot communicate with container names (M1 Mac)

I am trying to run Express, Prisma ORM, and Postgres applications in Docker.
I have two containers in the same network, but they cannot communicate with each other unless they use the actual IP address instead of the container name. Here is my docker-compose.yml file:
version: '3.8'
services:
  web:
    container_name: urefer-backend
    networks:
      - backend
    build: .
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "5000:5000"
    depends_on:
      - db
    environment:
      DATABASE_URL: postgresql://postgres:docker@urefer-db:5432/urefer?schema=public
    stdin_open: true # docker run -i
    tty: true # docker run -t
  db:
    networks:
      - backend
    image: postgres:latest
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-docker}
      POSTGRES_DB: "urefer"
    container_name: urefer-db
    restart: always
    ports:
      - 5432:5432
    volumes:
      - database-data:/var/lib/postgresql/data/
volumes:
  database-data:
networks:
  backend:
The urefer-backend container and the urefer-db container are in the same "backend" network. However, when I run docker-compose up --build, I always have a connection issue:
urefer-backend | Error: Error in migration engine: Can't reach database server at `urefer-db`:`5432`
urefer-backend |
urefer-backend | Please make sure your database server is running at `urefer-db`:`5432`.
When I replace "urefer-db" with the actual IP address of the db server, it connects successfully, which I think means something is wrong with the DNS setup.
Could anyone please help me connect the two containers without using the raw IP address? The DB container's IP changes every time I stop and restart it, so hard-coding it is a real hassle.
Edit: a comment suggested posting all the logs and errors from the console.
urefer-db |
urefer-db | PostgreSQL Database directory appears to contain a database; Skipping initialization
urefer-db |
urefer-db | 2021-10-01 14:44:57.165 UTC [1] LOG: starting PostgreSQL 14.0 (Debian 14.0-1.pgdg110+1) on aarch64-unknown-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
urefer-db | 2021-10-01 14:44:57.165 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
urefer-db | 2021-10-01 14:44:57.165 UTC [1] LOG: listening on IPv6 address "::", port 5432
urefer-db | 2021-10-01 14:44:57.168 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
urefer-db | 2021-10-01 14:44:57.171 UTC [28] LOG: database system was shut down at 2021-10-01 14:44:49 UTC
urefer-db | 2021-10-01 14:44:57.177 UTC [1] LOG: database system is ready to accept connections
urefer-backend | Prisma schema loaded from prisma/schema.prisma
urefer-backend | Datasource "db": PostgreSQL database "urefer", schema "public" at "urefer-db:5432"
urefer-backend |
urefer-backend | Error: P1001: Can't reach database server at `urefer-db`:`5432`
urefer-backend |
urefer-backend | Please make sure your database server is running at `urefer-db`:`5432`.
urefer-backend | Prisma schema loaded from prisma/schema.prisma
urefer-backend |
When the backend service starts, it runs `npx prisma migrate dev --name init`, and that command produces the connection error.
You should set a hostname, not just a container name; containers use the hostname when communicating with each other. If you add a `hostname` to the service in docker-compose, it will work:
version: '3.8'
services:
  web:
    container_name: urefer-backend
    networks:
      - backend
    build: .
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "5000:5000"
    depends_on:
      - db
    environment:
      DATABASE_URL: postgresql://postgres:docker@urefer-db:5432/urefer?schema=public
    stdin_open: true # docker run -i
    tty: true # docker run -t
  db:
    networks:
      - backend
    image: postgres:latest
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-docker}
      POSTGRES_DB: "urefer"
    container_name: urefer-db
    hostname: urefer-db
    restart: always
    ports:
      - 5432:5432
    volumes:
      - database-data:/var/lib/postgresql/data/
volumes:
  database-data:
networks:
  backend:
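Another likely contributor here is timing: `npx prisma migrate dev` can run before Postgres finishes starting, producing the same "Can't reach database server" error even when DNS is fine. A hedged sketch of gating the web service on a Postgres healthcheck (the `pg_isready` healthcheck and the long-form `condition` syntax are my additions; the `condition` form requires a reasonably recent Docker Compose):

```yaml
services:
  web:
    depends_on:
      db:
        # Wait until the healthcheck below passes, not just until
        # the db container has been created.
        condition: service_healthy
  db:
    image: postgres:latest
    healthcheck:
      # pg_isready exits 0 once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
```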

Creating a docker container for postgresql with laravel sail

I created a Docker container using the standard image "postgres:13", but inside the container PostgreSQL doesn't start because there is no cluster. What could be the problem?
Thanks for any answers!
My docker-compose:
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.0
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.0/app
    ports:
      - '${APP_PORT:-80}:80'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - pgsql
  pgsql:
    image: 'postgres:13'
    ports:
      - '${FORWARD_DB_PORT:-5432}:5432'
    environment:
      PGPASSWORD: '${DB_PASSWORD:-secret}'
      POSTGRES_DB: '${DB_DATABASE}'
      POSTGRES_USER: '${DB_USERNAME}'
      POSTGRES_PASSWORD: '${DB_PASSWORD:-secret}'
    volumes:
      - 'sailpgsql:/var/lib/postgresql/data'
    networks:
      - sail
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "${DB_DATABASE}", "-U", "${DB_USERNAME}"]
      retries: 3
      timeout: 5s
networks:
  sail:
    driver: bridge
volumes:
  sailpgsql:
    driver: local
and I get an error when trying to contact the container:
SQLSTATE[08006] [7] could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
and inside the container, when I try to start or restart postgres, I get this message:
[warn] No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning).
You should not connect through localhost but through the container name as the host name.
So change your .env to contain:
DB_CONNECTION=[what the name is in the config array]
DB_HOST=pgsql
DB_PORT=5432
DB_DATABASE=laravel
DB_USERNAME=[whatever you want]
DB_PASSWORD=[whatever you want]

Unable to connect mysql after docker mysql created

I set up my Docker MySQL and phpMyAdmin via
version: '3.2'
services:
  mysqllocal:
    image: mysql:8.0
    container_name: container_mysqllocal
    restart: always
    ports:
      - '6603:3306'
    environment:
      MYSQL_ROOT_PASSWORD: pass1234
      MYSQL_ROOT_HOST: '%'
  phpmyadmin:
    depends_on:
      - mysqllocal
    image: phpmyadmin/phpmyadmin
    container_name: container_phpmyadmin
    restart: always
    ports:
      - '8080:80'
    environment:
      PMA_HOST: mysqllocal
      UPLOAD_LIMIT: 3000000000
Running docker inspect container_mysqllocal shows the IP '172.30.0.2'. However, when I try to connect to MySQL via Sequel Pro, I get a Connection Failed error. How do I connect to MySQL from Sequel Pro or MySQL Workbench? Is there any configuration I've missed?
Below is a screenshot of my Sequel Pro connection.
Have you tried to access MySQL from your local machine with the port defined in your config, which is 6603?
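One other thing worth checking with mysql:8.0 and older GUI clients such as Sequel Pro: MySQL 8 defaults to the caching_sha2_password authentication plugin, which pre-8.0 clients cannot negotiate, so the connection fails even when host and port are right. A hedged sketch of switching to the legacy plugin (this change is my suggestion, not part of the original setup):

```yaml
mysqllocal:
  image: mysql:8.0
  container_name: container_mysqllocal
  restart: always
  # Fall back to the pre-8.0 auth plugin so older clients can connect
  command: --default-authentication-plugin=mysql_native_password
  ports:
    - '6603:3306'   # connect from the host as 127.0.0.1, port 6603
  environment:
    MYSQL_ROOT_PASSWORD: pass1234
    MYSQL_ROOT_HOST: '%'
```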

phpmyadmin can't connect to mariadb with docker-compose: Packets out of order

So what's wrong with this docker-compose.yml? It actually looks OK to me.
But when I try to log in to phpMyAdmin at http://localhost:8080/index.php
I get these errors:
Packets out of order. Expected 0 received 1. Packet size=71
mysqli_real_connect(): MySQL server has gone away
mysqli_real_connect(): Error while reading greeting packet. PID=33
mysqli_real_connect(): (HY000/2006): MySQL server has gone away
version: "3"
services:
  db:
    image: mariadb:10.4
    volumes:
      - test_db_data:/var/lib/mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: test
      MYSQL_USER: test
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: root
    networks:
      - dbtest
  pma:
    image: phpmyadmin/phpmyadmin
    depends_on:
      - db
    ports:
      - 8080:80
    environment:
      - PMA_HOST=db
    networks:
      - dbtest
  adminer:
    image: adminer
    restart: unless-stopped
    ports:
      - 8081:8080
    networks:
      - dbtest
volumes:
  test_db_data:
networks:
  dbtest:
Context:
Docker version 19.03.3
docker-compose version 1.23.2
Update:
I added Adminer as well, and login fails there too.
The MySQL stderr shows:
[Warning] Aborted connection 9 to db: 'unconnected' user: 'unauthenticated' host: '192.168.32.3' (This connection closed normally without authentication)
I had the same error and fixed it by deleting the database volume and recreating the database. Not the nicest of solutions; the MySQL server was getting stuck on startup.
I had the luck of it being a database on a dev box, so running migrations and reseeding with test data was all I had to do.
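A healthcheck also makes it easier to spot a server stuck on startup, since "Packets out of order" often just means a client connected while the server was still recovering. A hedged sketch (the `mysqladmin ping` healthcheck is my addition; the mariadb image ships `mysqladmin`):

```yaml
db:
  image: mariadb:10.4
  volumes:
    - test_db_data:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: root
  healthcheck:
    # mysqladmin ping exits 0 once the server answers; until then the
    # container shows as (health: starting) in `docker ps`
    test: ["CMD", "mysqladmin", "ping", "-proot"]
    interval: 5s
    retries: 10
```

Recreating the volume itself can be done with `docker-compose down -v`, which removes the named volumes declared in the file.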

Sphinx in Docker: Lost connection to MySQL server at 'reading initial communication packet', system error: 0

I'm having trouble connecting to Sphinx in Docker from outside.
Inside the Docker container the connection works fine:
root@b8161ac2de3e:/app# mysql -h127.0.0.1 -P 9306
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 2.3.2-id64-beta (4409612)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input
statement.
mysql>
Docker forwarding port:
b8161ac2de3e test_sphinx "searchd.sh" 9 minutes ago Up 9 minutes 9000/tcp, 0.0.0.0:9306->9306/tcp test_sphinx_1
But when I try to connect from outside the container, I get this:
vagrant@test:/srv/projects/test$ mysql -h0.0.0.0 -P 9306
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
Do you have any ideas?
Update: here is my docker-compose.yml:
version: '3'
services:
  web:
    image: nginx
    depends_on:
      - php-fpm
    volumes:
      - .:/app
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./docker/nginx/frontend.conf:/etc/nginx/conf.d/default.conf
    restart: ${RESTART}
    ports:
      - 80:80
  php-fpm:
    build:
      context: ./docker/php-fpm
      args:
        ENABLE_XDEBUG: ${PHP_ENABLE_XDEBUG}
        PHP_CLI_UID: ${PHP_CLI_UID}
        PHP_CLI_GID: ${PHP_CLI_GID}
        PHP_FPM_UID: ${PHP_FPM_UID}
        PHP_FPM_GID: ${PHP_FPM_GID}
    environment:
      XDEBUG_CONFIG: remote_host=${XDEBUG_REMOTE_HOST} remote_port=${XDEBUG_REMOTE_PORT} remote_enable=On
      PHP_IDE_CONFIG: serverName=app
    volumes:
      - .:/app
      - ./docker/php-fpm/config/php.ini:/usr/local/etc/php/php.ini
      - ./sysconfig/etc/freetds:/etc/freetds
      - ./sysconfig/etc/odbc.ini:/etc/odbc.ini
      - ./sysconfig/etc/odbcinst.ini:/etc/odbcinst.ini
      - ./sysconfig/usr/local/sphinxsearch/etc/sphinx.conf:/etc/sphinxsearch/sphinx.conf
      - ./sysconfig/usr/local/sphinxsearch:/usr/local/sphinxsearch
      - ./docker/sphinx/data:/usr/local/sphinxsearch/var/data
      - ./docker/sphinx/log:/usr/local/sphinxsearch/var/log
    restart: ${RESTART}
  mysql:
    image: mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=blog
    ports:
      - 3306:3306
  sphinx:
    build:
      context: ./docker/php-fpm
      args:
        ENABLE_XDEBUG: ${PHP_ENABLE_XDEBUG}
        PHP_CLI_UID: ${PHP_CLI_UID}
        PHP_CLI_GID: ${PHP_CLI_GID}
        PHP_FPM_UID: ${PHP_FPM_UID}
        PHP_FPM_GID: ${PHP_FPM_GID}
    environment:
      XDEBUG_CONFIG: remote_host=${XDEBUG_REMOTE_HOST} remote_port=${XDEBUG_REMOTE_PORT} remote_enable=On
      PHP_IDE_CONFIG: serverName=app
    command: /app/docker/php-fpm/searchd.sh
    volumes:
      - .:/app
      - ./docker/php-fpm/config/php.ini:/usr/local/etc/php/php.ini
      - ./sysconfig/etc/freetds:/etc/freetds
      - ./sysconfig/etc/odbc.ini:/etc/odbc.ini
      - ./sysconfig/etc/odbcinst.ini:/etc/odbcinst.ini
      - ./sysconfig/usr/local/sphinxsearch/etc/sphinx.conf:/etc/sphinxsearch/sphinx.conf
      - ./sysconfig/usr/local/sphinxsearch:/usr/local/sphinxsearch
      - ./docker/sphinx/data:/usr/local/sphinxsearch/var/data
    restart: ${RESTART}
    ports:
      - 9306:9306
    networks:
      - default
      - web
  redis:
    image: redis:latest
This starts the services, and every service except sphinx is reachable from the host machine.
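When the port is forwarded but connections from the host are dropped immediately, the usual suspect is searchd binding only to 127.0.0.1 inside the container, so Docker's forwarded packets never reach a listener on the container's external interface. A hedged sketch of the relevant `searchd` section of sphinx.conf (the paths are placeholders based on the volume mounts above):

```
searchd
{
    # Bind to all interfaces inside the container so Docker's port
    # forwarding (0.0.0.0:9306 -> container:9306) can reach it,
    # speaking the MySQL wire protocol on that port
    listen   = 0.0.0.0:9306:mysql41
    log      = /usr/local/sphinxsearch/var/log/searchd.log
    pid_file = /usr/local/sphinxsearch/var/log/searchd.pid
}
```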
