How to change the default port of Node Exporter

I deployed Node Exporter to monitor my system, but another app on my server already uses port 9100, so the node_exporter service cannot start.
How can I run Node Exporter on another port?
Thanks for reading; this is my first post.
I am trying this on RHEL 8. The service fails like this:
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg="Build context (go=go1.12.5, user=root@b50852a1acba, date=20190604-16:41:18)" source="node_exporter.go:157"
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg="Enabled collectors:" source="node_exporter.go:97"
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg=" - arp" source="node_exporter.go:104"
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg=" - bcache" source="node_exporter.go:104"
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg=" - bonding" source="node_exporter.go:104"
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg=" - conntrack" source="node_exporter.go:104"
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg=" - cpu" source="node_exporter.go:104"
Jul 24 14:43:11 xxxxxxx systemd[1]: node_exporter.service: main process exited, code=exited, status=1/FAILURE
Jul 24 14:43:11 xxxxxxx systemd[1]: Unit node_exporter.service entered failed state.
Jul 24 14:43:11 xxxxxxx systemd[1]: node_exporter.service failed.

I found the answer.
Just add --web.listen-address=:9500 after ExecStart=/usr/local/bin/node_exporter in the systemd unit file.
It looks like this:
ExecStart=/usr/local/bin/node_exporter --web.listen-address=:[custom port]
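For reference, a minimal sketch of a complete unit file with that flag (the install path, user, and port 9500 are assumptions based on this thread; adjust them to your setup):
# /etc/systemd/system/node_exporter.service (sketch)
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
User=node_exporter
# listen on 9500 instead of the default 9100, which is already taken
ExecStart=/usr/local/bin/node_exporter --web.listen-address=:9500

[Install]
WantedBy=multi-user.target
After editing, run sudo systemctl daemon-reload && sudo systemctl restart node_exporter, and verify with curl http://localhost:9500/metrics.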

Related

Redis config doesn't get applied in Docker

I tried to specify Redis config in the Dockerfile:
FROM redis:7.0.0
EXPOSE 6379
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD ["redis-server", "--include /usr/local/etc/redis/redis.conf"]
But the logs show:
redis_1 | 1:C 14 Nov 2022 12:32:28.045 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 14 Nov 2022 12:32:28.045 # Redis version=7.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 14 Nov 2022 12:32:28.045 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 14 Nov 2022 12:32:28.046 * monotonic clock: POSIX clock_gettime
redis_1 | 1:M 14 Nov 2022 12:32:28.050 * Running mode=standalone, port=6379.
redis_1 | 1:M 14 Nov 2022 12:32:28.051 # Server initialized
And the Redis config doesn't take effect.
With the config I want to disable replica-read-only, because it produces "Can't write against read-only replica" errors.
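A likely fix, as a sketch (based on how Docker's exec-form CMD splits arguments, not on anything Redis-specific): each argument must be its own JSON array element, so "--include /usr/local/etc/redis/redis.conf" is handed to redis-server as one literal string and the config is never loaded. The usual pattern is to pass the config path as the first positional argument:
FROM redis:7.0.0
EXPOSE 6379
COPY redis.conf /usr/local/etc/redis/redis.conf
# pass the config file as the first positional argument so redis-server actually loads it
CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]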

Sentinel cannot promote the initial master back to MASTER mode after failover

I am running Redis in sentinel mode with 1 master node, 2 replica nodes and 3 sentinel nodes. All nodes run in a Docker Swarm environment.
All nodes start fine. At start the nodes have the following IPs:
master 10.0.20.2
replica-1 10.0.20.5
replica-2 10.0.20.10
Next I stop the master container to bring the master node down, so that sentinel should pick one of the replica nodes as the new master.
This goes fine and the replica-1 node is selected as the new master.
In the meantime, Docker Swarm spins up a new container for master, and it joins the Redis sentinel network as a slave.
Next, I bring the replica-1 node down for another failover. Now the actual issue happens when sentinel tries to promote the master node from slave back to master.
Below is the master node's redis config file at the moment sentinel tries to make it master. I am wondering why the file is updated with replicaof 10.0.20.2 6379 when this node is the new master and that IP is the node's own address.
master node redis.conf
root@0fd67f6ceb37:/data# tail -f /etc/redis/redis.conf
replica-announce-ip "redis-master"
#replica-announce-port 6379
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbchecksum yes
# Generated by CONFIG REWRITE
replicaof 10.0.20.2 6379
This configuration is wrong, so it fails after a while and sentinel picks the replica-2 node as the new master.
This is the error I see in the master node logs (the detailed log file is below):
Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
In the end, replica-2 acts as master, and replica-1 and master act as two slaves.
master node logs (this is after the master joins as a slave and sentinel tries to promote it to master mode)
[docker@chopswarm1 redis-failover]$ d logs 0fd67f6ceb37
1:C 05 Nov 2019 06:43:49.360 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 05 Nov 2019 06:43:49.360 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 05 Nov 2019 06:43:49.360 # Configuration loaded
1:M 05 Nov 2019 06:43:49.361 * Running mode=standalone, port=6379.
1:M 05 Nov 2019 06:43:49.361 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 05 Nov 2019 06:43:49.361 # Server initialized
1:M 05 Nov 2019 06:43:49.361 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 05 Nov 2019 06:43:49.361 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 05 Nov 2019 06:43:49.361 * DB loaded from disk: 0.000 seconds
1:M 05 Nov 2019 06:43:49.361 * Ready to accept connections
1:S 05 Nov 2019 06:43:59.817 * Before turning into a replica, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
1:S 05 Nov 2019 06:43:59.817 * REPLICAOF 10.0.20.5:6379 enabled (user request from 'id=5 addr=10.0.20.7:60534 fd=10 name=sentinel-38a1e461-cmd age=10 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=148 qbuf-free=32620 obl=36 oll=0 omem=0 events=r cmd=exec')
1:S 05 Nov 2019 06:43:59.817 # CONFIG REWRITE executed with success.
1:S 05 Nov 2019 06:44:00.386 * Connecting to MASTER 10.0.20.5:6379
1:S 05 Nov 2019 06:44:00.387 * MASTER <-> REPLICA sync started
1:S 05 Nov 2019 06:44:00.387 * Non blocking connect for SYNC fired the event.
1:S 05 Nov 2019 06:44:00.387 * Master replied to PING, replication can continue...
1:S 05 Nov 2019 06:44:00.387 * Trying a partial resynchronization (request 0b1ed09c8d497744632c93cab960c4ca4ee9a11e:1).
1:S 05 Nov 2019 06:44:00.388 * Full resync from master: f3c311652d8860c93048eba075521df7033cab2f:38645
1:S 05 Nov 2019 06:44:00.388 * Discarding previously cached master state.
1:S 05 Nov 2019 06:44:00.486 * MASTER <-> REPLICA sync: receiving 178 bytes from master
1:S 05 Nov 2019 06:44:00.486 * MASTER <-> REPLICA sync: Flushing old data
1:S 05 Nov 2019 06:44:00.486 * MASTER <-> REPLICA sync: Loading DB in memory
1:S 05 Nov 2019 06:44:00.486 * MASTER <-> REPLICA sync: Finished with success
1:S 05 Nov 2019 06:44:35.367 # Connection with master lost.
1:S 05 Nov 2019 06:44:35.367 * Caching the disconnected master state.
1:S 05 Nov 2019 06:44:35.464 * Connecting to MASTER 10.0.20.5:6379
1:S 05 Nov 2019 06:44:35.465 * MASTER <-> REPLICA sync started
1:S 05 Nov 2019 06:44:35.465 # Error condition on socket for SYNC: Connection refused
1:S 05 Nov 2019 06:44:36.466 * Connecting to MASTER 10.0.20.5:6379
1:S 05 Nov 2019 06:44:36.466 * MASTER <-> REPLICA sync started
1:M 05 Nov 2019 06:44:40.748 # Setting secondary replication ID to f3c311652d8860c93048eba075521df7033cab2f, valid up to offset: 46004. New replication ID is 77213f07383dd307e4b6d917b6a8789de42cad20
1:M 05 Nov 2019 06:44:40.748 * Discarding previously cached master state.
1:M 05 Nov 2019 06:44:40.748 * MASTER MODE enabled (user request from 'id=16 addr=10.0.20.7:60576 fd=17 name=sentinel-38a1e461-cmd age=31 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=140 qbuf-free=32628 obl=36 oll=0 omem=0 events=r cmd=exec')
1:M 05 Nov 2019 06:44:40.748 # CONFIG REWRITE executed with success.
1:M 05 Nov 2019 06:44:41.881 * Replica redis-replica-2:6379 asks for synchronization
1:M 05 Nov 2019 06:44:41.881 * Partial resynchronization request from redis-replica-2:6379 accepted. Sending 881 bytes of backlog starting from offset 46004.
1:S 05 Nov 2019 06:44:43.132 # Connection with replica redis-replica-2:6379 lost.
1:S 05 Nov 2019 06:44:43.132 * Before turning into a replica, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
1:S 05 Nov 2019 06:44:43.132 * REPLICAOF 10.0.20.2:6379 enabled (user request from 'id=24 addr=10.0.20.7:60636 fd=15 name=sentinel-38a1e461-cmd age=3 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=291 qbuf-free=32477 obl=36 oll=0 omem=0 events=r cmd=exec')
1:S 05 Nov 2019 06:44:43.133 # CONFIG REWRITE executed with success.
1:S 05 Nov 2019 06:44:43.484 * Connecting to MASTER 10.0.20.2:6379
1:S 05 Nov 2019 06:44:43.484 * MASTER <-> REPLICA sync started
1:S 05 Nov 2019 06:44:43.484 * Non blocking connect for SYNC fired the event.
1:S 05 Nov 2019 06:44:43.484 * Master replied to PING, replication can continue...
1:S 05 Nov 2019 06:44:43.484 * Trying a partial resynchronization (request 77213f07383dd307e4b6d917b6a8789de42cad20:46885).
1:S 05 Nov 2019 06:44:43.484 * Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
1:S 05 Nov 2019 06:44:44.489 * Connecting to MASTER 10.0.20.2:6379
1:S 05 Nov 2019 06:44:44.489 * MASTER <-> REPLICA sync started
1:S 05 Nov 2019 06:44:44.489 * Non blocking connect for SYNC fired the event.
1:S 05 Nov 2019 06:44:44.489 * Master replied to PING, replication can continue...
1:S 05 Nov 2019 06:44:44.490 * Trying a partial resynchronization (request 77213f07383dd307e4b6d917b6a8789de42cad20:46885).
1:S 05 Nov 2019 06:44:44.490 * Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
1:S 05 Nov 2019 06:44:45.489 * Connecting to MASTER 10.0.20.2:6379
1:S 05 Nov 2019 06:44:45.490 * MASTER <-> REPLICA sync started
1:S 05 Nov 2019 06:44:45.490 * Non blocking connect for SYNC fired the event.
1:S 05 Nov 2019 06:44:45.490 * Master replied to PING, replication can continue...
1:S 05 Nov 2019 06:44:45.490 * Trying a partial resynchronization (request 77213f07383dd307e4b6d917b6a8789de42cad20:46885).
1:S 05 Nov 2019 06:44:45.490 * Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
1:S 05 Nov 2019 06:44:46.493 * Connecting to MASTER 10.0.20.2:6379
1:S 05 Nov 2019 06:44:46.493 * MASTER <-> REPLICA sync started
1:S 05 Nov 2019 06:44:46.493 * Non blocking connect for SYNC fired the event.
1:S 05 Nov 2019 06:44:46.493 * Master replied to PING, replication can continue...
1:S 05 Nov 2019 06:44:46.493 * Trying a partial resynchronization (request 77213f07383dd307e4b6d917b6a8789de42cad20:46885).
1:S 05 Nov 2019 06:44:46.494 * Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
1:S 05 Nov 2019 06:44:47.493 * Connecting to MASTER 10.0.20.2:6379
1:S 05 Nov 2019 06:44:47.494 * MASTER <-> REPLICA sync started
1:S 05 Nov 2019 06:44:47.494 * Non blocking connect for SYNC fired the event.
1:S 05 Nov 2019 06:44:47.494 * Master replied to PING, replication can continue...
1:S 05 Nov 2019 06:44:47.494 * Trying a partial resynchronization (request 77213f07383dd307e4b6d917b6a8789de42cad20:46885).
<-- omitted a few entries with the same errors as above for readability -->
1:S 05 Nov 2019 06:45:21.575 * Connecting to MASTER 10.0.20.2:6379
1:S 05 Nov 2019 06:45:21.575 * MASTER <-> REPLICA sync started
1:S 05 Nov 2019 06:45:21.575 * Non blocking connect for SYNC fired the event.
1:S 05 Nov 2019 06:45:21.575 * Master replied to PING, replication can continue...
1:S 05 Nov 2019 06:45:21.575 * Trying a partial resynchronization (request 77213f07383dd307e4b6d917b6a8789de42cad20:46885).
1:S 05 Nov 2019 06:45:21.575 * Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
1:S 05 Nov 2019 06:45:22.456 * REPLICAOF 10.0.20.10:6379 enabled (user request from 'id=113 addr=10.0.20.7:60950 fd=12 name=sentinel-38a1e461-cmd age=5 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=150 qbuf-free=32618 obl=36 oll=0 omem=0 events=r cmd=exec')
1:S 05 Nov 2019 06:45:22.456 # CONFIG REWRITE executed with success.
1:S 05 Nov 2019 06:45:22.577 * Connecting to MASTER 10.0.20.10:6379
1:S 05 Nov 2019 06:45:22.577 * MASTER <-> REPLICA sync started
1:S 05 Nov 2019 06:45:22.577 * Non blocking connect for SYNC fired the event.
1:S 05 Nov 2019 06:45:22.577 * Master replied to PING, replication can continue...
1:S 05 Nov 2019 06:45:22.577 * Trying a partial resynchronization (request 77213f07383dd307e4b6d917b6a8789de42cad20:46885).
1:S 05 Nov 2019 06:45:22.577 * Successful partial resynchronization with master.
1:S 05 Nov 2019 06:45:22.577 # Master replication ID changed to 3235720aad34423d6f82f9db4a953042c1f16d58
1:S 05 Nov 2019 06:45:22.577 * MASTER <-> REPLICA sync: Master accepted a Partial Resynchronization.
sentinel log file (I have added extra line breaks where the failover starts)
root@3708cf05eca4:/data# cat sentinel.log
1:X 05 Nov 2019 06:40:49.116 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:X 05 Nov 2019 06:40:49.116 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
1:X 05 Nov 2019 06:40:49.116 # Configuration loaded
1:X 05 Nov 2019 06:40:49.117 * Running mode=sentinel, port=26379.
1:X 05 Nov 2019 06:40:49.117 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:X 05 Nov 2019 06:40:49.119 # Sentinel ID is 38a1e461910e17fb7be79e695040074df2dde2df
1:X 05 Nov 2019 06:40:49.119 # +monitor master eaas-redis-master 10.0.20.2 6379 quorum 2
1:X 05 Nov 2019 06:40:49.120 * +slave slave redis-replica-1:6379 10.0.20.5 6379 # eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:40:51.183 * +sentinel sentinel 3b0831ce9f6aff70f9bf45f4211d66ebfd1c6a21 10.0.20.33 26379 # eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:40:59.150 * +slave slave redis-replica-2:6379 10.0.20.10 6379 # eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:40:59.202 * +fix-slave-config slave redis-replica-1:6379 10.0.20.5 6379 # eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:41:01.362 * +sentinel sentinel 464f3750404b419fccf513784f40baf7f6622cba 10.0.20.41 26379 # eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:41:09.249 * +fix-slave-config slave redis-replica-2:6379 10.0.20.10 6379 # eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:43:48.513 # +sdown master eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:43:48.594 # +new-epoch 1
1:X 05 Nov 2019 06:43:48.595 # +vote-for-leader 464f3750404b419fccf513784f40baf7f6622cba 1
1:X 05 Nov 2019 06:43:48.613 # +odown master eaas-redis-master 10.0.20.2 6379 #quorum 2/2
1:X 05 Nov 2019 06:43:48.613 # Next failover delay: I will not start a failover before Tue Nov 5 06:43:59 2019
1:X 05 Nov 2019 06:43:49.732 # +config-update-from sentinel 464f3750404b419fccf513784f40baf7f6622cba 10.0.20.41 26379 # eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:43:49.732 # +switch-master eaas-redis-master 10.0.20.2 6379 10.0.20.5 6379
1:X 05 Nov 2019 06:43:49.732 * +slave slave 10.0.20.10:6379 10.0.20.10 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:43:49.732 * +slave slave 10.0.20.2:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:43:49.785 * +slave slave redis-replica-2:6379 10.0.20.10 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:43:59.816 * +convert-to-slave slave 10.0.20.2:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:09.832 * +slave slave redis-master:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:40.453 # +sdown master eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:40.524 # +odown master eaas-redis-master 10.0.20.5 6379 #quorum 2/2
1:X 05 Nov 2019 06:44:40.524 # +new-epoch 2
1:X 05 Nov 2019 06:44:40.524 # +try-failover master eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:40.525 # +vote-for-leader 38a1e461910e17fb7be79e695040074df2dde2df 2
1:X 05 Nov 2019 06:44:40.525 # 3b0831ce9f6aff70f9bf45f4211d66ebfd1c6a21 voted for 3b0831ce9f6aff70f9bf45f4211d66ebfd1c6a21 2
1:X 05 Nov 2019 06:44:40.528 # 464f3750404b419fccf513784f40baf7f6622cba voted for 38a1e461910e17fb7be79e695040074df2dde2df 2
1:X 05 Nov 2019 06:44:40.580 # +elected-leader master eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:40.580 # +failover-state-select-slave master eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:40.681 # +selected-slave slave redis-master:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:40.681 * +failover-state-send-slaveof-noone slave redis-master:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:40.748 * +failover-state-wait-promotion slave redis-master:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:41.003 # +promoted-slave slave redis-master:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:41.003 # +failover-state-reconf-slaves master eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:41.101 * +slave-reconf-sent slave 10.0.20.10:6379 10.0.20.10 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:41.598 # -odown master eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:42.050 * +slave-reconf-inprog slave 10.0.20.10:6379 10.0.20.10 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:42.050 * +slave-reconf-done slave 10.0.20.10:6379 10.0.20.10 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:42.107 * +slave-reconf-sent slave redis-replica-2:6379 10.0.20.10 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:43.056 * +slave-reconf-inprog slave redis-replica-2:6379 10.0.20.10 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:43.056 * +slave-reconf-done slave redis-replica-2:6379 10.0.20.10 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:43.132 * +slave-reconf-sent slave 10.0.20.2:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:44.111 * +slave-reconf-inprog slave 10.0.20.2:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:46.056 # +failover-end-for-timeout master eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:46.056 # +failover-end master eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:46.056 * +slave-reconf-sent-be slave redis-master:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:46.056 * +slave-reconf-sent-be slave 10.0.20.2:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.5 6379
1:X 05 Nov 2019 06:44:46.056 # +switch-master eaas-redis-master 10.0.20.5 6379 10.0.20.2 6379
1:X 05 Nov 2019 06:44:46.057 * +slave slave 10.0.20.10:6379 10.0.20.10 6379 # eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:44:46.057 * +slave slave 10.0.20.5:6379 10.0.20.5 6379 # eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:44:51.062 # +sdown slave 10.0.20.5:6379 10.0.20.5 6379 # eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:45:11.226 # +sdown master eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:45:16.233 # +new-epoch 3
1:X 05 Nov 2019 06:45:16.234 # +vote-for-leader 464f3750404b419fccf513784f40baf7f6622cba 3
1:X 05 Nov 2019 06:45:16.535 # +odown master eaas-redis-master 10.0.20.2 6379 #quorum 3/2
1:X 05 Nov 2019 06:45:16.535 # Next failover delay: I will not start a failover before Tue Nov 5 06:45:26 2019
1:X 05 Nov 2019 06:45:17.285 # +config-update-from sentinel 464f3750404b419fccf513784f40baf7f6622cba 10.0.20.41 26379 # eaas-redis-master 10.0.20.2 6379
1:X 05 Nov 2019 06:45:17.285 # +switch-master eaas-redis-master 10.0.20.2 6379 10.0.20.10 6379
1:X 05 Nov 2019 06:45:17.285 * +slave slave 10.0.20.5:6379 10.0.20.5 6379 # eaas-redis-master 10.0.20.10 6379
1:X 05 Nov 2019 06:45:17.285 * +slave slave 10.0.20.2:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.10 6379
1:X 05 Nov 2019 06:45:22.456 * +fix-slave-config slave 10.0.20.2:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.10 6379
1:X 05 Nov 2019 06:45:26.347 * +slave slave redis-replica-1:6379 10.0.20.5 6379 # eaas-redis-master 10.0.20.10 6379
1:X 05 Nov 2019 06:45:26.348 * +slave slave redis-master:6379 10.0.20.2 6379 # eaas-redis-master 10.0.20.10 6379
root@3708cf05eca4:/data#
So I want to know why sentinel rewrites the configuration file with replicaof for the master node only (it happens only for the master node, not for the replica nodes, when they are promoted to MASTER mode).
How can I improve this scenario so that the master node can run in MASTER mode again if sentinel picks it during a failover?
Please let me know if any more information is required.
Below are the Redis configuration files for the master and replica nodes when I start the Docker Swarm stack.
redis.conf (master)
dir /data/
replica-announce-ip {{REDIS_MASTER}}
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbchecksum yes
redis.conf (replica)
replicaof {{REDIS_MASTER}} 6379
dir /data/
replica-announce-ip {{REDIS_REPLICA}}
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbchecksum yes
There is a specific issue with Docker service IPs vs Docker container IPs: https://github.com/moby/moby/issues/30963
So when you set, for example, REDIS_MASTER_HOST: redis-master in the environment, it points to the Docker service VIP, not the actual redis-master container IP, and this behavior breaks the sentinel logic.
I used this configuration in docker-compose (note the dnsrr endpoint_mode):
redis-master:
  image: bitnami/redis:5.0.9
  environment:
    REDIS_REPLICATION_MODE: master
    ALLOW_EMPTY_PASSWORD: 'yes'
    REDIS_AOF_ENABLED: 'no'
  deploy:
    endpoint_mode: dnsrr
This configuration gives the redis-master DNS record and the container itself the same single IP. But the IP address will still change after the redis-master container restarts, so after a failure I recreate the sentinel configuration with these commands:
# on redis-master
redis-cli SLAVEOF <new IP master> 6379
# on all sentinel nodes
redis-cli -p 26379 SENTINEL REMOVE mymaster
redis-cli -p 26379 SENTINEL monitor mymaster <new IP redis-master> 6379 2
After this manipulation, redis-master will change role correctly.
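To double-check that the sentinels agree on the new topology, you can query any of them (a sketch; mymaster is the monitored master name used above):
# on any sentinel node
redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
redis-cli -p 26379 SENTINEL slaves mymaster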
Another option:
I asked for a function to get the correct IP for Redis in Docker Swarm environments: https://github.com/bitnami/bitnami-docker-redis/issues/174
And I created a fork: https://github.com/tartemov/bitnami-docker-redis (there is one change, to the dns_lookup function)
In this case, the docker-compose file will look like:
services:
  redis-master:
    image: docker-registry/redis:5.0.9
    hostname: "redis-master"
    environment:
      REDIS_REPLICATION_MODE: master
      ALLOW_EMPTY_PASSWORD: 'yes'
      REDIS_AOF_ENABLED: 'no'
  redis-slave-1:
    image: docker-registry/redis:5.0.9
    environment:
      REDIS_REPLICATION_MODE: slave
      REDIS_MASTER_HOST: redis-master
      ALLOW_EMPTY_PASSWORD: 'yes'
      REDIS_AOF_ENABLED: 'no'
  sentinel:
    image: bitnami/redis-sentinel:5.0.9-debian-10-r49
    environment:
      REDIS_MASTER_HOST: redis-master
      ALLOW_EMPTY_PASSWORD: 'yes'
      REDIS_AOF_ENABLED: 'no'
      REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS: 5000
      REDIS_SENTINEL_FAILOVER_TIMEOUT: 60000
    deploy:
      mode: replicated
      replicas: 3
Note the hostname property: with it and the new dns_lookup function, Redis can announce the correct IP address.
With this setup you don't need to run any manual actions: the service address will not change, and sentinel will correctly set roles for any of the Redis nodes.

Redis cluster in docker-swarm mode updated recipe

Does anyone have a working recipe for a Redis cluster in swarm mode? I have tried everything I know and searched the internet, but it seems like an impossible task.
Here is what I have so far:
version: '3.4'
services:
  redis-master:
    image: redis
    networks:
      - redisdb
    ports:
      - 6379:6379
    volumes:
      - redis-master:/data
  redis-slave:
    image: redis
    networks:
      - redisdb
    command: redis-server --slaveof redis-master 6379
    volumes:
      - redis-slave:/data
  sentinel:
    image: redis
    networks:
      - redisdb
    ports:
      - 26379:26379
    command: >
      bash -c "echo 'port 26379' > sentinel.conf &&
      echo 'dir /tmp' >> sentinel.conf &&
      echo 'sentinel monitor redis-master redis-master 6379 2' >> sentinel.conf &&
      echo 'sentinel down-after-milliseconds redis-master 5000' >> sentinel.conf &&
      echo 'sentinel parallel-syncs redis-master 1' >> sentinel.conf &&
      echo 'sentinel failover-timeout redis-master 5000' >> sentinel.conf &&
      cat sentinel.conf &&
      redis-server sentinel.conf --sentinel"
    links:
      - redis-master
      - redis-slave
volumes:
  redis-master:
    driver: local
  redis-slave:
    driver: local
networks:
  redisdb:
    attachable: true
    driver: overlay
I use the following command to deploy it as a stack:
docker stack deploy --compose-file docker-compose-test.yml redis
The result is deployed services, where redis-master and redis-slave connect and I can see the synchronization process happening, as follows:
redis-master log:
1:C 16 Oct 2019 04:19:42.720 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 16 Oct 2019 04:19:42.720 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 16 Oct 2019 04:19:42.720 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 16 Oct 2019 04:19:42.723 * Running mode=standalone, port=6379.
1:M 16 Oct 2019 04:19:42.723 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 16 Oct 2019 04:19:42.723 # Server initialized
1:M 16 Oct 2019 04:19:42.723 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 16 Oct 2019 04:19:42.723 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 16 Oct 2019 04:19:42.723 * Ready to accept connections
1:M 16 Oct 2019 04:19:43.976 * Replica 10.0.27.2:6379 asks for synchronization
1:M 16 Oct 2019 04:19:43.976 * Full resync requested by replica 10.0.27.2:6379
1:M 16 Oct 2019 04:19:43.976 * Starting BGSAVE for SYNC with target: disk
1:M 16 Oct 2019 04:19:43.976 * Background saving started by pid 15
15:C 16 Oct 2019 04:19:43.982 * DB saved on disk
15:C 16 Oct 2019 04:19:43.982 * RDB: 0 MB of memory used by copy-on-write
1:M 16 Oct 2019 04:19:44.053 * Background saving terminated with success
1:M 16 Oct 2019 04:19:44.053 * Synchronization with replica 10.0.27.2:6379 succeeded
Redis-slave log:
1:C 16 Oct 2019 04:19:40.776 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 16 Oct 2019 04:19:40.776 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 16 Oct 2019 04:19:40.776 # Configuration loaded
1:S 16 Oct 2019 04:19:40.779 * Running mode=standalone, port=6379.
1:S 16 Oct 2019 04:19:40.779 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:S 16 Oct 2019 04:19:40.779 # Server initialized
1:S 16 Oct 2019 04:19:40.779 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:S 16 Oct 2019 04:19:40.779 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:S 16 Oct 2019 04:19:40.779 * Ready to accept connections
1:S 16 Oct 2019 04:19:40.779 * Connecting to MASTER redis-master:6379
1:S 16 Oct 2019 04:19:40.817 # Unable to connect to MASTER: Invalid argument
1:S 16 Oct 2019 04:19:41.834 * Connecting to MASTER redis-master:6379
1:S 16 Oct 2019 04:19:41.851 # Unable to connect to MASTER: Invalid argument
1:S 16 Oct 2019 04:19:42.866 * Connecting to MASTER redis-master:6379
1:S 16 Oct 2019 04:19:42.942 # Unable to connect to MASTER: Invalid argument
1:S 16 Oct 2019 04:19:43.970 * Connecting to MASTER redis-master:6379
1:S 16 Oct 2019 04:19:43.975 * MASTER <-> REPLICA sync started
1:S 16 Oct 2019 04:19:43.975 * Non blocking connect for SYNC fired the event.
1:S 16 Oct 2019 04:19:43.975 * Master replied to PING, replication can continue...
1:S 16 Oct 2019 04:19:43.976 * Partial resynchronization not possible (no cached master)
1:S 16 Oct 2019 04:19:43.977 * Full resync from master: 39bb36f74ef0cdefdc08a2dc8d4a86112ea69f12:0
1:S 16 Oct 2019 04:19:44.053 * MASTER <-> REPLICA sync: receiving 175 bytes from master
1:S 16 Oct 2019 04:19:44.054 * MASTER <-> REPLICA sync: Flushing old data
1:S 16 Oct 2019 04:19:44.054 * MASTER <-> REPLICA sync: Loading DB in memory
1:S 16 Oct 2019 04:19:44.054 * MASTER <-> REPLICA sync: Finished with success
Redis-sentinel log:
port 26379
port 26379
dir /tmp
dir /tmp
sentinel monitor redis-master redis-master 6379 2
sentinel monitor redis-master redis-master 6379 2
sentinel down-after-milliseconds redis-master 5000
sentinel down-after-milliseconds redis-master 5000
sentinel parallel-syncs redis-master 1
sentinel parallel-syncs redis-master 1
sentinel failover-timeout redis-master 5000
sentinel failover-timeout redis-master 5000
1:X 16 Oct 2019 04:19:49.506 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
*** FATAL CONFIG FILE ERROR ***
1:X 16 Oct 2019 04:19:49.506 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:X 16 Oct 2019 04:19:49.506 # Configuration loaded
Reading the configuration file, at line 3
>>> 'sentinel monitor redis-master redis-master 6379 2'
1:X 16 Oct 2019 04:19:49.508 * Running mode=sentinel, port=26379.
1:X 16 Oct 2019 04:19:49.508 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
Can't resolve master instance hostname.
1:X 16 Oct 2019 04:19:49.511 # Sentinel ID is aad9553d4999951f3b37eede5968b4aa262c07a9
1:X 16 Oct 2019 04:19:49.511 # +monitor master redis-master 10.0.27.5 6379 quorum 2
1:X 16 Oct 2019 04:19:49.512 * +slave slave 10.0.27.2:6379 10.0.27.2 6379 # redis-master 10.0.27.5 6379
1:X 16 Oct 2019 04:19:54.514 # +sdown slave 10.0.27.2:6379 10.0.27.2 6379 # redis-master 10.0.27.5 6379
So slave-announce-ip, replica-announce-ip, etc. are having issues:
1. The IPs keep changing.
2. Overlay network IPs are not always what they appear to be.
So failover does not work, and if the master goes down, the slave doesn't take over!
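A note from my side (an assumption, not from the original post): the FATAL "Can't resolve master instance hostname" error above happens because Sentinel in Redis 5 only accepts an IP address in sentinel monitor. Redis 6.2+ added opt-in hostname support; a sketch of the relevant sentinel.conf lines:
# requires Redis 6.2+; not available in the redis:5.x images used above
sentinel resolve-hostnames yes
sentinel announce-hostnames yes
sentinel monitor redis-master redis-master 6379 2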

Elasticsearch timing out on index.exists / HEAD requests

I'm having a weird issue with ES 5.6.5 in a Docker container in swarm mode. Here's the code in question:
from elasticsearch import Elasticsearch
from settings import ES_HOSTS
db = Elasticsearch(hosts=ES_HOSTS)
db.indices.exists(index=product_id)
It performs an HTTP HEAD request in the background and that request times out, never getting a response. I confirmed by doing the same HEAD request using curl (curl -X HEAD http://elasticsearch:9200/85a9b708-e89d-11e7-887a-02420aff0008) and it does indeed time out. Other requests work just fine. For example, if I do a GET request to the aforementioned URL, I get the expected error saying the index does not exist.
When I run the same ES image on a standalone docker container on my machine, configured exactly the same way and with the same code making the calls, it works without a problem.
Here's the relevant swarm configuration section:
elasticsearch:
  image: "docker.elastic.co/elasticsearch/elasticsearch:5.6.5"
  environment:
    - cluster.name=raul_elasticsearch
    - xpack.security.enabled=false
    - discovery.type=single-node
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  volumes:
    - esdata:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"
  deploy:
    resources:
      limits:
        memory: 6G
      reservations:
        memory: 6G
And this is the command I ran to get a standalone ES Docker container:
docker run --rm -p 9200:9200 -p 9300:9300 -e "bootstrap.memory_lock=true" -e "discovery.type=single-node" -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" -e "xpack.security.enabled=false" -d --name raul_elasticsearch docker.elastic.co/elasticsearch/elasticsearch:5.6.5
Any thoughts on what could be causing the issue?
UPDATE 1: looking at the debug logs from the ES instance running in the Docker swarm, I am getting the following messages:
Dec 24 17:10:33: [2017-12-24T17:10:33,839][DEBUG][r.suppressed ] path: /85a9b708-e89d-11e7-887a-02420aff0008, params: {index=85a9b708-e89d-11e7-887a-02420aff0008}
Dec 24 17:10:33: org.elasticsearch.index.IndexNotFoundException: no such index
Dec 24 17:10:33: at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.infe(IndexNameExpressionResolver.java:676) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:630) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:578) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:168) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:144) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:77) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.admin.indices.get.TransportGetIndexAction.checkBlock(TransportGetIndexAction.java:63) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.admin.indices.get.TransportGetIndexAction.checkBlock(TransportGetIndexAction.java:47) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:134) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.start(TransportMasterNodeAction.java:126) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:104) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:54) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:170) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:142) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:84) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1256) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.getIndex(AbstractClient.java:1357) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.rest.action.admin.indices.RestGetIndicesAction.lambda$prepareRequest$0(RestGetIndicesAction.java:97) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:80) [elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:262) [elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:200) [elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.http.netty4.Netty4HttpServerTransport.dispatchRequest(Netty4HttpServerTransport.java:505) [transport-netty4-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:80) [transport-netty4-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:68) [transport-netty4-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Additionally, I get these other messages, most of which seem to be related to a periodic task, but one caught my eye: Can not start an object, expecting field name (context: Object)
Dec 24 17:10:36: [2017-12-24T17:10:36,919][DEBUG][o.e.x.m.a.GetDatafeedsStatsAction$TransportAction] [hPjV7-n] Get stats for datafeed '_all'
Dec 24 17:10:36: [2017-12-24T17:10:36,923][DEBUG][o.e.x.m.e.l.LocalExporter] monitoring index templates and pipelines are installed on master node, service can start
Dec 24 17:10:46: [2017-12-24T17:10:46,932][DEBUG][o.e.x.m.a.GetDatafeedsStatsAction$TransportAction] [hPjV7-n] Get stats for datafeed '_all'
Dec 24 17:10:46: [2017-12-24T17:10:46,935][DEBUG][o.e.x.m.e.l.LocalExporter] monitoring index templates and pipelines are installed on master node, service can start
Dec 24 17:10:56: [2017-12-24T17:10:56,920][DEBUG][o.e.x.m.a.GetDatafeedsStatsAction$TransportAction] [hPjV7-n] Get stats for datafeed '_all'
Dec 24 17:10:56: [2017-12-24T17:10:56,927][DEBUG][o.e.x.m.e.l.LocalExporter] monitoring index templates and pipelines are installed on master node, service can start
Dec 24 17:10:58: [2017-12-24T17:10:58,707][DEBUG][o.e.x.w.e.ExecutionService] [hPjV7-n] saving watch records [4]
Dec 24 17:10:58: [2017-12-24T17:10:58,711][DEBUG][o.e.x.w.e.ExecutionService] [hPjV7-n] executing watch [fxCyOMU8STOiNqoLUtOQhQ_kibana_version_mismatch]
Dec 24 17:10:58: [2017-12-24T17:10:58,711][DEBUG][o.e.x.w.e.ExecutionService] [hPjV7-n] executing watch [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_cluster_status]
Dec 24 17:10:58: [2017-12-24T17:10:58,711][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_kibana_version_mismatch_4e9bb936-dc65-4795-8be3-b2b2c1660460-2017-12-24T17:10:58.707Z] found [0] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,711][DEBUG][o.e.x.w.e.ExecutionService] [hPjV7-n] executing watch [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_version_mismatch]
Dec 24 17:10:58: [2017-12-24T17:10:58,712][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_kibana_version_mismatch_4e9bb936-dc65-4795-8be3-b2b2c1660460-2017-12-24T17:10:58.707Z] found [0] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,713][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_version_mismatch_0a13442f-96dc-4732-b0de-1e87a9dc05ab-2017-12-24T17:10:58.707Z] found [0] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,714][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_version_mismatch_0a13442f-96dc-4732-b0de-1e87a9dc05ab-2017-12-24T17:10:58.707Z] found [0] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,714][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_cluster_status_b35f5be6-4d4b-4fa2-a50c-6f18d4b6d949-2017-12-24T17:10:58.707Z] found [15178] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,715][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_cluster_status_b35f5be6-4d4b-4fa2-a50c-6f18d4b6d949-2017-12-24T17:10:58.707Z] hit [{
Dec 24 17:10:58: "error" : "Can not start an object, expecting field name (context: Object)"
Dec 24 17:10:58: }]
Dec 24 17:10:58: [2017-12-24T17:10:58,716][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_cluster_status_b35f5be6-4d4b-4fa2-a50c-6f18d4b6d949-2017-12-24T17:10:58.707Z] found [1] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,716][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_cluster_status_b35f5be6-4d4b-4fa2-a50c-6f18d4b6d949-2017-12-24T17:10:58.707Z] hit [{
Dec 24 17:10:58: "error" : "Can not start an object, expecting field name (context: Object)"
Dec 24 17:10:58: }]
Dec 24 17:10:58: [2017-12-24T17:10:58,718][DEBUG][o.e.x.w.e.ExecutionService] [hPjV7-n] executing watch [fxCyOMU8STOiNqoLUtOQhQ_logstash_version_mismatch]
Dec 24 17:10:58: [2017-12-24T17:10:58,718][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_logstash_version_mismatch_c88db510-0a7e-4520-a085-8381f4278288-2017-12-24T17:10:58.707Z] found [0] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,719][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_logstash_version_mismatch_c88db510-0a7e-4520-a085-8381f4278288-2017-12-24T17:10:58.707Z] found [0] hits
(I have no idea why it's complaining about Kibana and Logstash; I don't have them installed.)
UPDATE 2: Using curl's --head parameter instead of -X HEAD makes it work; I have no idea why. Asking for verbose output yields this:
$ curl -v --head http://localhost:9200/85a9b708-e89d-11e7-887a-02420aff0008
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9200 (#0)
> HEAD /85a9b708-e89d-11e7-887a-02420aff0008 HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
HTTP/1.1 404 Not Found
< content-type: application/json; charset=UTF-8
content-type: application/json; charset=UTF-8
< content-length: 467
content-length: 467
Which is the expected response, and the command exits normally.
However, this never exits:
$ curl -v -X HEAD http://localhost:9200/85a9b708-e89d-11e7-887a-02420aff0008
Warning: Setting custom HTTP method to HEAD with -X/--request may not work the
Warning: way you want. Consider using -I/--head instead.
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9200 (#0)
> HEAD /85a9b708-e89d-11e7-887a-02420aff0008 HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< content-type: application/json; charset=UTF-8
< content-length: 467
<
What's with that warning from the second command?
Looks like it was a bug in the elasticsearch==5.4.0 library I was using. Updating it to a newer version (5.5.1) fixed the issue.
(As for the curl warning: -X HEAD forces the request method, but curl still waits for a response body of content-length bytes, which a HEAD response never sends, so the connection hangs. --head/-I tells curl not to expect a body.)
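A minimal sketch of the client-side fix (the version pin comes from this answer; adjust it to your project):
# upgrade the Python client past the buggy release
pip install --upgrade 'elasticsearch>=5.5.1,<6'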

Kubectl keeps restarting container when ran

I'm currently trying to move our company's squid server to a dockerized version and I'm struggling to get it working with Kubernetes.
I have built a Docker image that works perfectly fine when run with "docker run".
The complete Docker Run command is:
sudo docker run -d -i -t --privileged --volume=/proc/sys/net/ipv4/ip_nonlocal_bind:/var/proc/sys/net/ipv4/ip_nonlocal_bind --net=host --cap-add=SYS_MODULE --cap-add=NET_ADMIN --cap-add=NET_RAW -v /dev:/dev -v /lib/modules:/lib/modules -p80:80 -p8080:8080 -p53:53/udp -p5353:5353/udp -p5666:5666/udp -p4500:4500/udp -p500:500/udp -p3306:3306 --name=edge crossense/edge:latest /bin/bash
When I try to run the image with Kubernetes, with something like:
kubectl run --image=crossense/edge:latest --port=80 --port=8080 --port=53 --port=5353 --port=5666 --port=4500 --port=500 --port=3306 edge
it seems like Kubernetes tries to get the container up and running, but without any success:
$ kubectl get po
NAME READY REASON RESTARTS AGE
edge-sz7wp 0/1 Running 10 15m
And the kubectl describe pod edge command gives me lots of these:
Thu, 09 Nov 2017 17:13:05 +0000 Thu, 09 Nov 2017 17:13:05 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id abcc2ff25a624a998871e02bcb62d42d6f39e9db0a39f601efa4d357dd8334aa
Thu, 09 Nov 2017 17:13:15 +0000 Thu, 09 Nov 2017 17:13:15 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 878778836bd3cc25bdf1e3b9cc2f2f6fa22b75b938a481172f08a6ec50571582
Thu, 09 Nov 2017 17:13:15 +0000 Thu, 09 Nov 2017 17:13:15 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 878778836bd3cc25bdf1e3b9cc2f2f6fa22b75b938a481172f08a6ec50571582
Thu, 09 Nov 2017 17:13:25 +0000 Thu, 09 Nov 2017 17:13:25 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id aa51e94536216b905ff9ba07951fedbc0007476b55dfdb2e5106418fb6aee05c
Thu, 09 Nov 2017 17:13:25 +0000 Thu, 09 Nov 2017 17:13:25 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id aa51e94536216b905ff9ba07951fedbc0007476b55dfdb2e5106418fb6aee05c
Thu, 09 Nov 2017 17:13:35 +0000 Thu, 09 Nov 2017 17:13:35 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id f4661e5ea33471cd1ba30816b40c8ba2d204fa22509b973da4af6eedb64c592e
Thu, 09 Nov 2017 17:13:35 +0000 Thu, 09 Nov 2017 17:13:35 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id f4661e5ea33471cd1ba30816b40c8ba2d204fa22509b973da4af6eedb64c592e
Thu, 09 Nov 2017 17:13:45 +0000 Thu, 09 Nov 2017 17:13:45 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 75f83dcb9b4f8af5134d6fd2edcd9342ecf56111e132a45f4e9787e83466e28b
Thu, 09 Nov 2017 17:13:45 +0000 Thu, 09 Nov 2017 17:13:45 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 75f83dcb9b4f8af5134d6fd2edcd9342ecf56111e132a45f4e9787e83466e28b
Thu, 09 Nov 2017 17:13:55 +0000 Thu, 09 Nov 2017 17:13:55 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id c9d0535b3962ec9da29c068dbb0a6b64426a5ac3e52f72e79bcbaf03c9f3d403
Thu, 09 Nov 2017 17:13:55 +0000 Thu, 09 Nov 2017 17:13:55 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id c9d0535b3962ec9da29c068dbb0a6b64426a5ac3e52f72e79bcbaf03c9f3d403
Thu, 09 Nov 2017 17:14:05 +0000 Thu, 09 Nov 2017 17:14:05 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 579f4428e9804404bd746cceee88bb6c73066a33263202bb5f1eb15f6ff26d7b
Thu, 09 Nov 2017 17:14:05 +0000 Thu, 09 Nov 2017 17:14:05 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 579f4428e9804404bd746cceee88bb6c73066a33263202bb5f1eb15f6ff26d7b
Thu, 09 Nov 2017 17:14:15 +0000 Thu, 09 Nov 2017 17:14:15 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id d36b2c9ddf0b1a05d86b43d2a92eb3c00ae92d00e155d5a1be1da8e2682f901b
Thu, 09 Nov 2017 17:14:15 +0000 Thu, 09 Nov 2017 17:14:15 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id d36b2c9ddf0b1a05d86b43d2a92eb3c00ae92d00e155d5a1be1da8e2682f901b
Thu, 09 Nov 2017 17:14:25 +0000 Thu, 09 Nov 2017 17:14:25 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 2d7b24537414f5e6f2981bf5f01596b19ea1abdb0eb4b81508fc7f44e8c34609
Thu, 09 Nov 2017 17:14:25 +0000 Thu, 09 Nov 2017 17:14:25 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 2d7b24537414f5e6f2981bf5f01596b19ea1abdb0eb4b81508fc7f44e8c34609
Thu, 09 Nov 2017 17:14:35 +0000 Thu, 09 Nov 2017 17:14:35 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id fdae44c599b77d44839e4897b750203c183001a6053c926432ef5a3c7f4deb38
Thu, 09 Nov 2017 17:14:35 +0000 Thu, 09 Nov 2017 17:14:35 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id fdae44c599b77d44839e4897b750203c183001a6053c926432ef5a3c7f4deb38
Thu, 09 Nov 2017 17:14:45 +0000 Thu, 09 Nov 2017 17:14:45 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 544351dda838d698e3bc125840edb6ad71cd0165a970cce46825df03b826eb38
Thu, 09 Nov 2017 17:14:45 +0000 Thu, 09 Nov 2017 17:14:45 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 544351dda838d698e3bc125840edb6ad71cd0165a970cce46825df03b826eb38
Thu, 09 Nov 2017 17:14:55 +0000 Thu, 09 Nov 2017 17:14:55 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 00fe4c286c1cc9b905c9c0927f82b39d45d41295a9dd0852131bba087bb19610
Thu, 09 Nov 2017 17:14:55 +0000 Thu, 09 Nov 2017 17:14:55 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 00fe4c286c1cc9b905c9c0927f82b39d45d41295a9dd0852131bba087bb19610
Any help would be much appreciated!
While I can't say this conclusively without being able to reproduce it, and lacking logs, one difference that stands out is the set of privileges you provided in the docker command, for example NET_ADMIN or NET_RAW, which are missing from the kubectl run command.
Kubernetes also provides the ability to assign such privileges to a pod with capabilities within the securityContext of the pod declaration.
I am not sure if you can do this with kubectl, but if you use a YAML declaration for the pod, the spec looks roughly like:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myshell
    image: "ubuntu:14.04"
    command:
    - /bin/sleep
    - "300"
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
For more reference, I would suggest a quick look at:
This post on the Weave blog, which lists all capabilities and includes the example I borrowed above
The official Kubernetes documentation, which provides all the details about security contexts
For all the poor souls out there who couldn't find the answer:
the reason the pod keeps restarting is that the command it executes exits with code 0 (meaning success), and with the default restartPolicy: Always, Kubernetes restarts a container whenever it exits, even cleanly.
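You can confirm this with standard kubectl commands before changing anything (a sketch; the pod name comes from the config below):
# exit code of the last terminated container (0 here, i.e. a clean exit)
kubectl get pod edge -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
# logs of the previous, exited container instance
kubectl logs edge --previous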
In my case, I was running /bin/bash as the entrypoint command, as specified in my pod configuration .yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: edge
spec:
  containers:
  - name: edge
    image: "crossense/edge:production"
    command:
    - /bin/bash
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - SYS_MODULE
        - NET_RAW
    volumeMounts:
    - name: ip-nonlocal-bind
      mountPath: /host/proc/sys/net/ipv4
    - name: dev
      mountPath: /host/dev
    - name: modules
      mountPath: /host/lib/modules
....
The solution was simply adding a non-exiting command to the entrypoint. This can be any process that runs in the foreground, or simply a /bin/sleep.
For the sake of example and future learning, my final pod configuration file looked like this:
apiVersion: v1
kind: Pod
metadata:
  name: edge
spec:
  hostNetwork: true
  containers:
  - name: edge
    image: "crossense/edge:production"
    command: ["/bin/bash", "-c"]
    args: ["service rsyslog restart; service proxysql start; service mongodb start; service pdns-recursor start; service supervisor start; service danted start; touch /var/run/squid.pid; chown proxy /var/run/squid.pid; service squid restart; service ipsec start; /sbin/iptables-restore < /etc/iptables/rules.v4; sleep infinity"]
    securityContext:
      privileged: true
      capabilities:
        add:
        - NET_ADMIN
        - SYS_MODULE
        - NET_RAW
    volumeMounts:
    - mountPath: /dev/shm
      name: dshm
    - name: ip-nonlocal-bind
      mountPath: /host/proc/sys/net/ipv4
    - name: dev
      mountPath: /dev
    - name: modules
      mountPath: /lib/modules
    ports:
    - containerPort: 80
    - containerPort: 8080
    - containerPort: 53
      protocol: UDP
    - containerPort: 5353
      protocol: UDP
    - containerPort: 5666
    - containerPort: 4500
    - containerPort: 500
    - containerPort: 3306
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory
  - name: ip-nonlocal-bind
    hostPath:
      path: /proc/sys/net/ipv4
  - name: dev
    hostPath:
      path: /dev
      type: Directory
  - name: modules
    hostPath:
      path: /lib/modules
      type: Directory
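To sanity-check the result (a sketch; edge-pod.yaml is whatever file name you saved the pod spec under):
kubectl apply -f edge-pod.yaml
kubectl get pod edge -w   # READY should reach 1/1 and RESTARTS should stay at 0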
For any questions, feel free to comment on this thread, or ask me at max.vlashchuk@gmail.com :)
