redis master does not see slave - docker

I have an issue with my Redis Sentinel setup, which has 4 nodes (1 master, 3 slaves). I patched the first slave node (the Docker version changed from 17.03.1-ce to 17.12.0-ce). My problem is that the master no longer accepts slave node1 into the members pool.
Slave (node1) info (it recognizes the master node):
$ docker exec -it redis-sentinel redis-cli info replication
# Replication
role:slave
master_host:<master_ip>
master_port:6379
master_link_status:down
Master info:
$ docker exec -it redis-sentinel redis-cli info replication
# Replication
role:master
connected_slaves:2
slave0:ip=<slave_2_ip>,port=6379,state=online,offset=191580670534,lag=0
slave1:ip=<slave_3_ip>,port=6379,state=online,offset=191580666435,lag=0
master_repl_offset:191580672343
The master should have 3 slaves. The master IP is correct on node1 (the node that was patched). Nodes 2, 3 and 4 are still on Docker 17.03.1-ce. When I tested the same scenario in development, everything worked fine. Can you suggest what I need to do to enable replication between the master and slave node1?
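(For reference, a minimal connectivity check from node1 to the master, assuming the same redis-sentinel container name and the <master_ip> placeholder used above; this is only a diagnostic sketch, not part of the original setup:)
$ docker exec -it redis-sentinel redis-cli -h <master_ip> -p 6379 ping
$ docker exec -it redis-sentinel redis-cli info replication | grep master_link_status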
After restarting Docker on node1 I see messages like this (msg="unknown container"):
Jan 31 08:16:12 node1 dockerd[17288]: time="2018-01-31T08:16:12.150892519+02:00" level=warning msg="unknown container" container=23e48b7846bd325ba5af772217085b60708660f5f5d8bb6fefd23094235ac01f module=libcontainerd namespace=plugins.moby
Jan 31 08:16:12 node1 dockerd[17288]: time="2018-01-31T08:16:12.177513187+02:00" level=warning msg="unknown container" container=23e48b7846bd325ba5af772217085b60708660f5f5d8bb6fefd23094235ac01f module=libcontainerd namespace=plugins.moby
When I examine the master's (node4) sentinel logs, I see that node1 was converted to a slave:
1:X 30 Jan 21:35:09.301 # +sdown sentinel 66f6a8950a72952ac7df18f6a653718445fad5db node1_slave 26379 # sentinel-xx node4_master 6379
1:X 30 Jan 21:35:10.276 # +sdown slave node1_slave:6379 node1_slave 6379 # sentinel-xx node4_master 6379
1:X 30 Jan 21:58:10.388 * +reboot slave node1_slave:6379 node1_slave 6379 # sentinel-xx node4_master 6379
1:X 30 Jan 21:58:10.473 # -sdown slave node1_slave:6379 node1_slave 6379 # sentinel-xx node4_master 6379
1:X 30 Jan 21:58:10.473 # -sdown sentinel 66f6a8950a72952ac7df18f6a653718445fad5db node1_slave 26379 # sentinel-xx node4_master 6379
1:X 30 Jan 21:58:20.436 * +convert-to-slave slave node1_slave:6379 node1_slave 6379 # sentinel-xx node4_master 6379
1:X 30 Jan 21:58:30.516 * +convert-to-slave slave node1_slave:6379 node1_slave 6379 # sentinel-xx node4_master 6379
1:X 30 Jan 21:58:40.529 * +convert-to-slave slave node1_slave:6379 node1_slave 6379 # sentinel-xx node4_master 6379
1:X 30 Jan 22:39:48.284 * +reboot slave node1_slave:6379 node1_slave 6379 # sentinel-xx node4_master 6379
1:X 30 Jan 22:39:58.391 * +convert-to-slave slave node1_slave:6379 node1_slave 6379 # sentinel-xx node4_master 6379
1:X 30 Jan 22:40:08.447 * +convert-to-slave slave node1_slave:6379 node1_slave 6379 # sentinel-xx node4_master 6379
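(As a side note, the sentinel's own view of the slaves can be queried directly. A minimal sketch, assuming the sentinel listens on port 26379 inside the redis-sentinel container and the monitored master is registered under a name such as mymaster; the actual name is whatever appears in the sentinel log lines above:)
$ docker exec -it redis-sentinel redis-cli -p 26379 sentinel master mymaster
$ docker exec -it redis-sentinel redis-cli -p 26379 sentinel slaves mymaster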
On the other hand, the redis-client container logs show that it cannot save the DB to disk:
$ docker logs --follow redis-client
1:M 31 Jan 07:47:09.451 * Slave node3_slave:6379 asks for synchronization
1:M 31 Jan 07:47:09.451 * Full resync requested by slave node3_slave:6379
1:M 31 Jan 07:47:09.451 * Starting BGSAVE for SYNC with target: disk
1:M 31 Jan 07:47:09.452 # Can't save in background: fork: Out of memory
1:M 31 Jan 07:47:09.452 # BGSAVE for replication failed
1:M 31 Jan 07:47:24.628 * Slave node1_slave:6379 asks for synchronization
1:M 31 Jan 07:47:24.628 * Full resync requested by slave node1_slave:6379
1:M 31 Jan 07:47:24.628 * Starting BGSAVE for SYNC with target: disk
1:M 31 Jan 07:47:24.628 # Can't save in background: fork: Out of memory
1:M 31 Jan 07:47:24.628 # BGSAVE for replication failed
1:M 31 Jan 07:48:10.560 * Slave node3_slave:6379 asks for synchronization
1:M 31 Jan 07:48:10.560 * Full resync requested by slave node3_slave:6379
1:M 31 Jan 07:48:10.560 * Starting BGSAVE for SYNC with target: disk

The problem was solved by switching vm.overcommit_memory to 1 on the host:
sysctl vm.overcommit_memory=1
Thanks to yanhan's comment.
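(To make the setting survive a reboot it can also be persisted on the Docker host, as Redis's own overcommit warning recommends:)
$ sudo sysctl -w vm.overcommit_memory=1
$ echo 'vm.overcommit_memory = 1' | sudo tee -a /etc/sysctl.conf   # persist across reboots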
The log now looks like this:
1:M 31 Jan 07:48:10.560 * Slave node2_slave:6379 asks for synchronization
1:M 31 Jan 07:48:10.560 * Full resync requested by slave node2_slave:6379
1:M 31 Jan 07:48:10.560 * Starting BGSAVE for SYNC with target: disk
1:M 31 Jan 07:48:10.569 * Background saving started by pid 16
1:M 31 Jan 07:49:15.773 # Connection with slave client id #388090 lost.
1:M 31 Jan 07:49:16.219 # Connection with slave node2_slave:6379 lost.
1:M 31 Jan 07:49:25.394 * Slave node1_slave:6379 asks for synchronization
1:M 31 Jan 07:49:25.395 * Full resync requested by slave node1_slave:6379
1:M 31 Jan 07:49:25.395 * Can't attach the slave to the current BGSAVE. Waiting for next BGSAVE for SYNC
1:S 31 Jan 07:49:35.421 # Connection with slave node1_slave:6379 lost.
1:S 31 Jan 07:49:35.518 * SLAVE OF node2_slave:6379 enabled (user request from 'id=395598 addr=node2_slave:33026 fd=7 name=sentinel-52caa67d-cmd age=10 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=0 qbuf-free=32768 obl=36 oll=0 omem=0 events=r cmd=exec')
1:S 31 Jan 07:49:36.121 * Connecting to MASTER node2_slave:6379
1:S 31 Jan 07:49:36.122 * MASTER <-> SLAVE sync started
1:S 31 Jan 07:49:36.135 * Non blocking connect for SYNC fired the event.
1:S 31 Jan 07:49:36.138 * Master replied to PING, replication can continue...
1:S 31 Jan 07:49:36.147 * Partial resynchronization not possible (no cached master)
1:S 31 Jan 07:49:36.153 * Full resync from master: f15e28b26604bda49ad515b38cba2639ee8e13bc:191935552685
1:S 31 Jan 07:49:46.523 * MASTER <-> SLAVE sync: receiving 1351833877 bytes from master
1:S 31 Jan 07:49:57.888 * MASTER <-> SLAVE sync: Flushing old data
16:C 31 Jan 07:50:17.083 * DB saved on disk
16:C 31 Jan 07:50:17.114 * RDB: 3465 MB of memory used by copy-on-write
1:S 31 Jan 07:51:22.749 * MASTER <-> SLAVE sync: Loading DB in memory
1:S 31 Jan 07:51:46.609 * MASTER <-> SLAVE sync: Finished with success
1:S 31 Jan 07:51:46.609 * Background saving terminated with success

Related

Issues with setting up redis cluster via docker

I am trying to configure the Redis cluster using the docker image bitnami/redis-cluster.
Following is the docker-compose.yml:
version: '3.8'
services:
  redis-node-0:
    image: bitnami/redis-cluster:6.2.7
    volumes:
      - redis-node-data-0:/bitnami/redis/data
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2
  redis-node-1:
    image: bitnami/redis-cluster:6.2.7
    volumes:
      - redis-node-data-1:/bitnami/redis/data
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2
  redis-node-2:
    image: bitnami/redis-cluster:6.2.7
    volumes:
      - redis-node-data-2:/bitnami/redis/data
    ports:
      - 6379:6379
    depends_on:
      - redis-node-0
      - redis-node-1
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2
      - REDIS_CLUSTER_REPLICAS=1
      - REDIS_CLUSTER_CREATOR=yes
volumes:
  redis-node-data-0:
    driver: local
  redis-node-data-1:
    driver: local
  redis-node-data-2:
    driver: local
networks:
  default:
    name: local_network
The Docker containers are all running.
Output of docker ps:
CONTAINER ID
IMAGE
COMMAND
CREATED
STATUS
PORTS
NAMES
bea9a7c52eba
bitnami/redis-cluster:6.2.7
"/opt/bitnami/script…"
12 minutes ago
12 minutes ago
0.0.0.0:6379->6379/tcp
local-redis-node-2-1
63c08f1330e0
bitnami/redis-cluster:6.2.7
"/opt/bitnami/script…"
12 minutes ago
12 minutes ago
6379/tcp
local-redis-node-1-1
e1b163d75254
bitnami/redis-cluster:6.2.7
"/opt/bitnami/script…"
12 minutes ago
12 minutes ago
6379/tcp
local-redis-node-0-1
Since I have set local-redis-node-2-1 as the REDIS_CLUSTER_CREATOR, it is in charge of initializing the cluster, so I am running all of the commands below on that node.
Going inside the container: docker exec -it local-redis-node-2-1 redis-cli
Then, trying to save data in Redis with set a 1, I get the error: (error) CLUSTERDOWN Hash slot not served
Output of cluster slots: (empty array)
I tried docker exec -it local-redis-node-2-1 redis-cli --cluster fix localhost:6379, but it assigns the slots only to local-redis-node-2-1; the other nodes are not assigned any slots. Below is the output of cluster slots after running the fix command:
127.0.0.1:6379> cluster slots
1) 1) (integer) 0
   2) (integer) 16383
   3) 1) "172.18.0.9"
      2) (integer) 6379
      3) "819770cf8b39793517efa10b9751083c854e15d7"
I am clearly doing something wrong. I would love to know the manual solution as well, but I am more interested in a way to set the cluster slots via docker-compose.
Can someone help me set up the cluster slots automatically via docker-compose.yml?
Will the read replicas work with this docker-compose.yml, or have I missed something?
Also, can someone confirm whether the cluster will work fine once the cluster slots are resolved?
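For the manual route, one option is to create the slot assignment by hand with redis-cli --cluster create. A minimal sketch, assuming the node hostnames resolve on local_network (otherwise the container IPs from docker inspect can be substituted) and that nodes still holding slots or keys from the earlier --cluster fix are cleared first (e.g. with CLUSTER RESET). Note also that with only three nodes, REDIS_CLUSTER_REPLICAS=1 cannot be satisfied (one replica per master needs at least six nodes), which may be why the automatic setup never assigned any slots:
$ docker exec -it local-redis-node-2-1 \
    redis-cli --cluster create \
    redis-node-0:6379 redis-node-1:6379 redis-node-2:6379 \
    --cluster-replicas 0 --cluster-yes
After creation, cluster slots should show the 16384 slots split across the three masters.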
Logs of local-redis-node-2-1 below:
redis-cluster 00:47:45.70
redis-cluster 00:47:45.73 Welcome to the Bitnami redis-cluster container
redis-cluster 00:47:45.76 Subscribe to project updates by watching https://github.com/bitnami/containers
redis-cluster 00:47:45.78 Submit issues and feature requests at https://github.com/bitnami/containers/issues
redis-cluster 00:47:45.80
redis-cluster 00:47:45.83 INFO ==> ** Starting Redis setup **
redis-cluster 00:47:46.00 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
redis-cluster 00:47:46.05 INFO ==> Initializing Redis
redis-cluster 00:47:46.29 INFO ==> Setting Redis config file
Changing old IP 172.18.0.8 by the new one 172.18.0.8
Changing old IP 172.18.0.6 by the new one 172.18.0.6
Changing old IP 172.18.0.9 by the new one 172.18.0.9
redis-cluster 00:47:47.30 INFO ==> ** Redis setup finished! **
1:C 10 Dec 2022 00:47:47.579 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 10 Dec 2022 00:47:47.580 # Redis version=6.2.7, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 10 Dec 2022 00:47:47.580 # Configuration loaded
1:M 10 Dec 2022 00:47:47.584 * monotonic clock: POSIX clock_gettime
1:M 10 Dec 2022 00:47:47.588 * Node configuration loaded, I'm 819770cf8b39793517efa10b9751083c854e15d7
1:M 10 Dec 2022 00:47:47.595 # A key '__redis__compare_helper' was added to Lua globals which is not on the globals allow list nor listed on the deny list.
1:M 10 Dec 2022 00:47:47.598 * Running mode=cluster, port=6379.
1:M 10 Dec 2022 00:47:47.599 # Server initialized
1:M 10 Dec 2022 00:47:47.612 * Ready to accept connections
1:M 10 Dec 2022 00:47:49.673 # Cluster state changed: ok
Logs of local-redis-node-1-1 below:
redis-cluster 00:47:45.43
redis-cluster 00:47:45.46 Welcome to the Bitnami redis-cluster container
redis-cluster 00:47:45.48 Subscribe to project updates by watching https://github.com/bitnami/containers
redis-cluster 00:47:45.51 Submit issues and feature requests at https://github.com/bitnami/containers/issues
redis-cluster 00:47:45.54
redis-cluster 00:47:45.56 INFO ==> ** Starting Redis setup **
redis-cluster 00:47:45.73 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
redis-cluster 00:47:45.79 INFO ==> Initializing Redis
redis-cluster 00:47:46.00 INFO ==> Setting Redis config file
Changing old IP 172.18.0.8 by the new one 172.18.0.8
Changing old IP 172.18.0.6 by the new one 172.18.0.6
Changing old IP 172.18.0.9 by the new one 172.18.0.9
redis-cluster 00:47:47.10 INFO ==> ** Redis setup finished! **
1:C 10 Dec 2022 00:47:47.387 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 10 Dec 2022 00:47:47.388 # Redis version=6.2.7, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 10 Dec 2022 00:47:47.388 # Configuration loaded
1:M 10 Dec 2022 00:47:47.392 * monotonic clock: POSIX clock_gettime
1:M 10 Dec 2022 00:47:47.395 * Node configuration loaded, I'm 2ab0b8db952cc101f7873cdcf8cf691f8f6bae7b
1:M 10 Dec 2022 00:47:47.403 # A key '__redis__compare_helper' was added to Lua globals which is not on the globals allow list nor listed on the deny list.
1:M 10 Dec 2022 00:47:47.406 * Running mode=cluster, port=6379.
1:M 10 Dec 2022 00:47:47.407 # Server initialized
1:M 10 Dec 2022 00:47:47.418 * Ready to accept connections
1:M 10 Dec 2022 00:56:02.716 # New configEpoch set to 1
1:M 10 Dec 2022 00:56:41.943 # Cluster state changed: ok
Logs of local-redis-node-0-1 below:
redis-cluster 00:47:45.43
redis-cluster 00:47:45.46 Welcome to the Bitnami redis-cluster container
redis-cluster 00:47:45.49 Subscribe to project updates by watching https://github.com/bitnami/containers
redis-cluster 00:47:45.51 Submit issues and feature requests at https://github.com/bitnami/containers/issues
redis-cluster 00:47:45.54
redis-cluster 00:47:45.56 INFO ==> ** Starting Redis setup **
redis-cluster 00:47:45.73 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
redis-cluster 00:47:45.79 INFO ==> Initializing Redis
redis-cluster 00:47:46.00 INFO ==> Setting Redis config file
Changing old IP 172.18.0.8 by the new one 172.18.0.8
Changing old IP 172.18.0.6 by the new one 172.18.0.6
Changing old IP 172.18.0.9 by the new one 172.18.0.9
redis-cluster 00:47:47.11 INFO ==> ** Redis setup finished! **
1:C 10 Dec 2022 00:47:47.387 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 10 Dec 2022 00:47:47.388 # Redis version=6.2.7, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 10 Dec 2022 00:47:47.388 # Configuration loaded
1:M 10 Dec 2022 00:47:47.391 * monotonic clock: POSIX clock_gettime
1:M 10 Dec 2022 00:47:47.395 * Node configuration loaded, I'm 5ffeca48faa750a5f47c76639598fdb9b7b8b720
1:M 10 Dec 2022 00:47:47.402 # A key '__redis__compare_helper' was added to Lua globals which is not on the globals allow list nor listed on the deny list.
1:M 10 Dec 2022 00:47:47.405 * Running mode=cluster, port=6379.
1:M 10 Dec 2022 00:47:47.405 # Server initialized
1:M 10 Dec 2022 00:47:47.415 * Ready to accept connections

docker redis auto restart (Starting automatic rewriting of AOF on 100%)

Why does the Docker Redis container auto restart, and how can I avoid it?
docker version: Docker version 19.03.15, build 99e3ed8919
redis version: 6.0.10
The Redis log:
{"log":"242:C 28 Jul 2021 21:31:53.745 * Parent agreed to stop sending diffs. Finalizing AOF...\r\n","stream":"stdout","time":"2021-07-28T21:31:53.74533764Z"}
{"log":"242:C 28 Jul 2021 21:31:53.745 * Concatenating 0.42 MB of AOF diff received from parent.\r\n","stream":"stdout","time":"2021-07-28T21:31:53.745340654Z"}
{"log":"242:C 28 Jul 2021 21:31:53.747 * SYNC append only file rewrite performed\r\n","stream":"stdout","time":"2021-07-28T21:31:53.748060366Z"}
{"log":"242:C 28 Jul 2021 21:31:53.771 * AOF rewrite: 4082 MB of memory used by copy-on-write\r\n","stream":"stdout","time":"2021-07-28T21:31:53.771090503Z"}
{"log":"1:M 28 Jul 2021 21:31:53.864 * Background AOF rewrite terminated with success\r\n","stream":"stdout","time":"2021-07-28T21:31:53.86509657Z"}
{"log":"1:M 28 Jul 2021 21:31:53.865 * Residual parent diff successfully flushed to the rewritten AOF (0.01 MB)\r\n","stream":"stdout","time":"2021-07-28T21:31:53.865110611Z"}
{"log":"1:M 28 Jul 2021 21:31:53.865 * Background AOF rewrite finished successfully\r\n","stream":"stdout","time":"2021-07-28T21:31:53.865186183Z"}
{"log":"1:M 29 Jul 2021 06:26:03.289 * Starting automatic rewriting of AOF on 100% growth\r\n","stream":"stdout","time":"2021-07-29T06:26:03.289352072Z"}
{"log":"1:M 29 Jul 2021 06:26:03.305 * Background append only file rewriting started by pid 243\r\n","stream":"stdout","time":"2021-07-29T06:26:03.305924003Z"}
{"log":"1:C 29 Jul 2021 06:26:45.473 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo\r\n","stream":"stdout","time":"2021-07-29T06:26:45.473187567Z"}
{"log":"1:C 29 Jul 2021 06:26:45.473 # Redis version=6.0.10, bits=64, commit=00000000, modified=0, pid=1, just started\r\n","stream":"stdout","time":"2021-07-29T06:26:45.473464239Z"}
{"log":"1:C 29 Jul 2021 06:26:45.473 # Configuration loaded\r\n","stream":"stdout","time":"2021-07-29T06:26:45.473469256Z"}
{"log":" _._ \r\n","stream":"stdout","time":"2021-07-29T06:26:45.473892404Z"}
{"log":" _.-``__ ''-._ \r\n","stream":"stdout","time":"2021-07-29T06:26:45.473900258Z"}
{"log":" _.-`` `. `_. ''-._ Redis 6.0.10 (00000000/0) 64 bit\r\n","stream":"stdout","time":"2021-07-29T06:26:45.473903008Z"}
{"log":" .-`` .-```. ```\\/ _.,_ ''-._ \r\n","stream":"stdout","time":"2021-07-29T06:26:45.473905368Z"}
{"log":" ( ' , .-` | `, ) Running in standalone mode\r\n","stream":"stdout","time":"2021-07-29T06:26:45.473907848Z"}
{"log":" |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\r\n","stream":"stdout","time":"2021-07-29T06:26:45.473910903Z"}
{"log":" | `-._ `._ / _.-' | PID: 1\r\n","stream":"stdout","time":"2021-07-29T06:26:45.473913623Z"}
{"log":" `-._ `-._ `-./ _.-' _.-' \r\n","stream":"stdout","time":"2021-07-29T06:26:45.473916704Z"}
{"log":" |`-._`-._ `-.__.-' _.-'_.-'| \r\n","stream":"stdout","time":"2021-07-29T06:26:45.473919668Z"}
{"log":" | `-._`-._ _.-'_.-' | http://redis.io \r\n","stream":"stdout","time":"2021-07-29T06:26:45.473923023Z"}
{"log":" `-._ `-._`-.__.-'_.-' _.-' \r\n","stream":"stdout","time":"2021-07-29T06:26:45.473926135Z"}
{"log":" |`-._`-._ `-.__.-' _.-'_.-'| \r\n","stream":"stdout","time":"2021-07-29T06:26:45.473929275Z"}
{"log":" | `-._`-._ _.-'_.-' | \r\n","stream":"stdout","time":"2021-07-29T06:26:45.473932644Z"}
{"log":" `-._ `-._`-.__.-'_.-' _.-' \r\n","stream":"stdout","time":"2021-07-29T06:26:45.473935824Z"}
{"log":" `-._ `-.__.-' _.-' \r\n","stream":"stdout","time":"2021-07-29T06:26:45.473938926Z"}
{"log":" `-._ _.-' \r\n","stream":"stdout","time":"2021-07-29T06:26:45.47394213Z"}
{"log":" `-.__.-' \r\n","stream":"stdout","time":"2021-07-29T06:26:45.473945854Z"}
{"log":"\r\n","stream":"stdout","time":"2021-07-29T06:26:45.473949032Z"}
The system log:
[Thu Jul 29 14:24:29 2021] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/docker/38e8db6ef19db644454094af6d8c7dc9ddf8f31082c11b03e942a450f955c3f8,task=redis-server,pid=7014,uid=999
[Thu Jul 29 14:24:29 2021] Out of memory: Killed process 7014 (redis-server) total-vm:9966160kB, anon-rss:9061760kB, file-rss:0kB, shmem-rss:0kB, UID:999
[Thu Jul 29 14:24:29 2021] oom_reaper: reaped process 7014 (redis-server), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[Thu Jul 29 14:24:31 2021] veth0a70248: renamed from eth0
[Thu Jul 29 14:24:31 2021] docker0: port 1(veth0e9f210) entered disabled state
Memory info:
              total        used        free      shared  buff/cache   available
Mem:           15Gi       8.8Gi       4.3Gi       2.0Mi       2.1Gi       6.1Gi
Swap:            0B          0B          0B
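The oom-kill entries show the host killed redis-server while the AOF rewrite child was holding roughly 4 GB of copy-on-write pages, so the "restart" is the OOM killer, not Redis itself. A minimal sketch of giving the rewrite some headroom; the 4gb cap, the eviction policy, and the mounted redis.conf are illustrative assumptions, not values from the question:
# redis.conf (mounted into the container) -- illustrative values
maxmemory 4gb
maxmemory-policy allkeys-lru   # pick a policy that matches the workload
# on the Docker host, so background saves/rewrites can fork under memory pressure
$ sudo sysctl -w vm.overcommit_memory=1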

Failed to start LSB: Start Jenkins at boot time

I've been trying for a while to add and modify things within Jenkins. Jenkins was running on port 8080, and I redirected traffic to port 80 with this command:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 54.185.x.x:8080
I made some modifications and now I cannot start Jenkins:
Jun 08 13:20:17 ip-10-173-x-x jenkins[32108]: Correct java version found
Jun 08 13:20:17 ip-10-173-x-x jenkins[32108]: * Starting Jenkins Automation Server jenkins
Jun 08 13:20:17 ip-10-173-x-x su[32157]: Successful su for jenkins by root
Jun 08 13:20:17 ip-10-173-x-x su[32157]: + ??? root:jenkins
Jun 08 13:20:17 ip-10-173-x-x su[32157]: pam_unix(su:session): session opened for user jenkins by (uid=0)
Jun 08 13:20:17 ip-10-173-x-x su[32157]: pam_unix(su:session): session closed for user jenkins
Jun 08 13:20:18 ip-10-173-x-x jenkins[32108]: ...fail!
Jun 08 13:20:18 ip-10-173-x-x systemd[1]: jenkins.service: Control process exited, code=exited status=7
Jun 08 13:20:18 ip-10-173-x-x systemd[1]: jenkins.service: Failed with result 'exit-code'.
Jun 08 13:20:18 ip-10-173-x-x systemd[1]: Failed to start LSB: Start Jenkins at boot time.
I changed a few lines in the Jenkins config file, and here is how it looks now:
JENKINS_ARGS="--javahome=$JAVA_HOME --httpListenAddress=$HTTP_HOST --httpPort=$HTTP_PORT --webroot=~/.jenkins/war"
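The init-script failure message itself says little; the underlying reason usually shows up in the Jenkins log. A hedged way to dig further, assuming a systemd host and the default package log location; the --webroot=~/.jenkins/war value is also worth double-checking, since ~ may not expand to the jenkins user's home when the init script runs:
$ systemctl status jenkins
$ journalctl -u jenkins --no-pager -n 100
$ sudo tail -n 100 /var/log/jenkins/jenkins.log   # default location for the Debian/Ubuntu package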

Redis running on Docker shuts down after some time

I have a very simple environment that uses Redis on Docker, and it worked well until I moved my stack to DigitalOcean. Now my application stops working and I have to restart it. It works for several hours (less than a day) and then stops again.
When I print the container logs, this is what I get:
1:S 30 Aug 2019 22:07:17.573 * Connecting to MASTER x.x.x.x:38606
1:S 30 Aug 2019 22:07:17.574 * MASTER <-> REPLICA sync started
1:S 30 Aug 2019 22:07:17.655 # Error condition on socket for SYNC: Connection refused
1:S 30 Aug 2019 22:07:18.577 * Connecting to MASTER x.x.x.x:38606
1:S 30 Aug 2019 22:07:18.578 * MASTER <-> REPLICA sync started
1:S 30 Aug 2019 22:07:18.660 # Error condition on socket for SYNC: Connection refused
1:S 30 Aug 2019 22:07:19.582 * Connecting to MASTER x.x.x.x:38606
1:S 30 Aug 2019 22:07:19.582 * MASTER <-> REPLICA sync started
1:S 30 Aug 2019 22:07:19.664 * Non blocking connect for SYNC fired the event.
1:S 30 Aug 2019 22:07:19.746 * Master replied to PING, replication can continue...
1:S 30 Aug 2019 22:07:19.910 * Trying a partial resynchronization (request a3f877d059813e333a734a91b16e8ebf822e3d20:1).
1:S 30 Aug 2019 22:07:19.993 * Full resync from master: ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ:0
1:S 30 Aug 2019 22:07:19.994 * Discarding previously cached master state.
1:S 30 Aug 2019 22:07:19.994 * MASTER <-> REPLICA sync: receiving 42680 bytes from master
1:S 30 Aug 2019 22:07:20.075 * MASTER <-> REPLICA sync: Flushing old data
1:S 30 Aug 2019 22:07:20.076 * MASTER <-> REPLICA sync: Loading DB in memory
1:S 30 Aug 2019 22:07:20.076 # Wrong signature trying to load DB from file
1:S 30 Aug 2019 22:07:20.077 # Failed trying to load the MASTER synchronization DB from disk
1:S 30 Aug 2019 22:07:20.584 * Connecting to MASTER x.x.x.x:38606
1:S 30 Aug 2019 22:07:20.585 * MASTER <-> REPLICA sync started
1:S 30 Aug 2019 22:07:20.664 * Non blocking connect for SYNC fired the event.
1:S 30 Aug 2019 22:07:21.996 * Module 'system' loaded from /tmp/exp_lin.so
1:S 30 Aug 2019 22:07:22.076 # Error condition on socket for SYNC: Connection reset by peer
1:M 30 Aug 2019 22:07:22.078 # Setting secondary replication ID to a3f877d059813e333a734a91b16e8ebf822e3d20, valid up to offset: 1. New replication ID is e4c7f742ac612d2fdc2124c73a14f68641f1c61e
1:M 30 Aug 2019 22:07:22.078 * MASTER MODE enabled (user request from 'id=8 addr=x.x.x.x:43490 fd=9 name= age=5 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=34 qbuf-free=32734 obl=0 oll=0 omem=0 events=r cmd=slaveof')
sh: 1: killall: not found
./xmrig-notls: unrecognized option '--max-cpu-usage'
I didn't add any special configuration to replicate data (master, slave or anything like that). This is my compose file:
version: '3'
services:
  server:
    image: server
    build: .
    ports:
      - "8091:8091"
    container_name: server
    environment:
      - NODE_ENV=production
    external_links:
      - redis
  redis:
    image: redis:5.0.5
    ports:
      - "6379:6379"
    container_name: redis
Does anyone know what is going on? This didn't happen before the move.
Your Redis is reachable from the Internet and has been hacked. Close the port by removing the ports section from the redis service:
ports:
  - "6379:6379"
Then remove the container with docker-compose rm and bring it up again.
This post can explain what happened
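A minimal sketch of the change in the compose file: stop publishing the port, or bind it to loopback only if host access is really needed (the 127.0.0.1 binding shown here is an alternative, not something from the original file):
  redis:
    image: redis:5.0.5
    container_name: redis
    # no "ports:" section -- the server container still reaches redis over the Docker network
    # or, if host access is required, publish on loopback only:
    # ports:
    #   - "127.0.0.1:6379:6379"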

NOAUTH Authentication required. [tcp://redis:6379]. How do I fix this issue and add 'vm.overcommit_memory = 1' to my Docker container for Redis?

The server works for some time, but after a certain number of requests I get PHP errors for every connection to Redis:
Predis\Connection\ConnectionException: `SELECT` failed: NOAUTH Authentication required. [tcp://redis:6379]
#66 vendor/predis/predis/src/Connection/AbstractConnection.php(155): onConnectionError
#65 vendor/predis/predis/src/Connection/StreamConnection.php(263): connect
#64 vendor/predis/predis/src/Connection/AbstractConnection.php(180): getResource
#63 vendor/predis/predis/src/Connection/StreamConnection.php(288): write
#62 vendor/predis/predis/src/Connection/StreamConnection.php(394): writeRequest
#61 vendor/predis/predis/src/Connection/AbstractConnection.php(110): executeCommand
#60 vendor/predis/predis/src/Client.php(331): executeCommand
#59 vendor/predis/predis/src/Client.php(314): __call
Redis runs on my server in a Docker container.
Output of docker ps:
a45f25ed7ebb laradock_redis "docker-entrypoint.s…" 2 days ago Up 30 minutes 0.0.0.0:6379->6379/tcp laradock_redis_1
docker-compose logs redis
redis_1 | 1:C 08 Jan 08:56:23.036 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 08 Jan 08:56:23.036 # Redis version=4.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 08 Jan 08:56:23.036 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 08 Jan 08:56:23.038 * Running mode=standalone, port=6379.
redis_1 | 1:M 08 Jan 08:56:23.038 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 08 Jan 08:56:23.038 # Server initialized
redis_1 | 1:M 08 Jan 08:56:23.038 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 08 Jan 08:56:23.038 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
docker-compose.yml
version: '2'
services:
  redis:
    build: ./redis
    volumes:
      - ${DATA_SAVE_PATH}/redis:/data
    ports:
      - "${REDIS_PORT}:6379"
    networks:
      - backend
Dockerfile for redis
FROM redis:latest
LABEL maintainer="Mahmoud Zalt <mahmoud#zalt.me>"
RUN echo 'sysctl -w net.core.somaxconn=65535' >> /etc/rc.local
RUN echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
VOLUME /data
EXPOSE 6379
CMD ["redis-server"]
Please tell me how to configure Redis so that it works stably.
Thanks
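Regarding the vm.overcommit_memory warning: it is a host kernel parameter and is not namespaced, so writing it to /etc/sysctl.conf inside the image (as the Dockerfile above does) has no effect; it has to be set on the Docker host. The NOAUTH error itself simply means the server now requires a password that the Predis client is not sending. A minimal sketch for the host side:
# run on the Docker host, not inside the container
$ sudo sysctl -w vm.overcommit_memory=1
$ echo 'vm.overcommit_memory = 1' | sudo tee -a /etc/sysctl.conf   # persist across reboots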
