esp8266 mDNS was working, but has now stopped

I got the mDNS responder working on my ESP8266. It had been working for weeks: I could ping it, nslookup it, and open its web page. I'm using Windows 10 and an Android phone, and both worked great.
Then it appeared to stop responding to its local name (the IP address still works fine).
The weird part is that when I turned on the debug messages on the console, mDNS still appears to be working, yet the clients still can't find it.
Below are the debug logs; from what I can tell, it should be working. Can any mDNS experts out there see what is going wrong?
Thanks!
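In case it helps anyone reproduce this, here is a rough client-side probe for querying the responder directly, bypassing the OS resolver and any caching it does. This is only a sketch: the hostname comes from the logs below, the timeout is arbitrary, and the QU (unicast-response) bit is set so a reply can be received without joining the multicast group.

```python
# Minimal mDNS probe: build a QTYPE=A query for "therm1.local" and send it
# to the well-known multicast group 224.0.0.251:5353, then wait briefly
# for a reply.
import socket
import struct

MDNS_GROUP, MDNS_PORT = "224.0.0.251", 5353

def build_query(name: str, qtype: int = 1, qclass: int = 0x8001) -> bytes:
    """Encode a single-question mDNS query (ID=0, no flags).

    qclass 0x8001 = class IN with the top (QU) bit set, asking the
    responder to reply via unicast to our source port (RFC 6762).
    """
    header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)  # ID, flags, QD=1, AN/NS/AR=0
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!2H", qtype, qclass)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    try:
        sock.sendto(build_query("therm1.local"), (MDNS_GROUP, MDNS_PORT))
        data, addr = sock.recvfrom(1500)
        print(f"reply from {addr[0]}: {len(data)} bytes")
    except OSError:  # includes socket.timeout
        print("no reply within 3 s")
```

Running this from the Windows or Android side while watching the ESP8266 debug console makes it easy to see whether the query even reaches the device.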
Here it is starting up:
Hard resetting via RTS pin...
=============================================== [SUCCESS] Took 37.10 seconds ===============================================
--- Available filters and text transformations: colorize, debug, default, direct, esp8266_exception_decoder, hexlify, log2file, nocontrol, printable, send_on_enter, time
--- More details at ...
--- Miniterm on COM3 115200,8,N,1 ---
--- Quit: Ctrl+C | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
-- BME280 connected on address 0x76 --
-- Configuring GPIO --
isMode: 2
Connecting to WiFi
.
.
.
.
192.168.86.71
[MDNSResponder] _allocUDPContext
MDNS started
[MDNSResponder] addService: Succeeded to add 'therm1.http.TCP'!
ezTime debug level set to INFO
Syncing NTP
Waiting for time sync
Querying time.google.com ... success (round trip 51 ms)
Received time: Thursday, 10-Mar-22 13:52:34.870 UTC
Time is in sync
our Timezone: America/Chicago
Timezone lookup for: America/Chicago ... [MDNSResponder] _callProcess (8098, triggered by: 192.168.86.163)
[MDNSResponder] _parseMessage (Time: 8098 ms, heap: 43184 bytes, from 192.168.86.163(5353), to 224.0.0.251(5353))
[MDNSResponder] _readRRQuestion
[MDNSResponder] _readRRQuestion _%9E5E7C8F47989526C9BCD95D24084F6F0B27C5ED._sub._googlecast._tcp.local Type:0x000C Class:0x0001 Multicast
[MDNSResponder] _readRRQuestion
[MDNSResponder] _readRRQuestion _googlecast._tcp.local Type:0x000C Class:0x0001 Multicast
[MDNSResponder] _parseQuery: No reply needed
[MDNSResponder] _parseMessage: Done (Succeeded after 34 ms, ate 0 bytes, remaining 43184)
[MDNSResponder] _callProcess (8141, triggered by: 192.168.86.163)
[MDNSResponder] _parseMessage (Time: 8146 ms, heap: 42896 bytes, from 192.168.86.21(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 10 ms, ate 0 bytes, remaining 42896)
[MDNSResponder] _callProcess (8164, triggered by: 192.168.86.21)
[MDNSResponder] _parseMessage (Time: 8169 ms, heap: 42768 bytes, from 192.168.86.112(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 10 ms, ate 0 bytes, remaining 42768)
[MDNSResponder] _callProcess (8187, triggered by: 192.168.86.112)
[MDNSResponder] _parseMessage (Time: 8193 ms, heap: 42760 bytes, from 192.168.86.127(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 9 ms, ate 0 bytes, remaining 42760)
[MDNSResponder] _callProcess (8211, triggered by: 192.168.86.127)
[MDNSResponder] _parseMessage (Time: 8216 ms, heap: 42784 bytes, from 192.168.86.127(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 10 ms, ate 0 bytes, remaining 42784)
(round-trip 486 ms) success.
Olson: America/Chicago
Posix: CST6CDT,M3.2.0,M11.1.0
Central Time: Thursday, 10-Mar-2022 07:52:35 CST
[MDNSResponder] _updateProbeStatus: Starting host probing...
[MDNSResponder] _sendHostProbe (therm1, 8435)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:0 OP:0 AA:0 TC:0 RD:0 RA:0 R:0 QD:1 AN:0 NS:1 AR:0
[MDNSResponder] _writeMDNSQuestion
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Did sent host probe
[MDNSResponder] _sendServiceProbe (therm1.http.TCP, 8502)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:0 OP:0 AA:0 TC:0 RD:0 RA:0 R:0 QD:1 AN:0 NS:2 AR:2
[MDNSResponder] _writeMDNSQuestion
[MDNSResponder] _writeMDNSAnswer_PTR_NAME
[MDNSResponder] _writeMDNSAnswer_SRV
[MDNSResponder] _writeMDNSAnswer_TXT
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Did sent service probe (1)
[MDNSResponder] _sendHostProbe (therm1, 8708)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:0 OP:0 AA:0 TC:0 RD:0 RA:0 R:0 QD:1 AN:0 NS:1 AR:0
[MDNSResponder] _writeMDNSQuestion
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Did sent host probe
[MDNSResponder] _sendServiceProbe (therm1.http.TCP, 8787)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:0 OP:0 AA:0 TC:0 RD:0 RA:0 R:0 QD:1 AN:0 NS:2 AR:2
[MDNSResponder] _writeMDNSQuestion
[MDNSResponder] _writeMDNSAnswer_PTR_NAME
[MDNSResponder] _writeMDNSAnswer_SRV
[MDNSResponder] _writeMDNSAnswer_TXT
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Did sent service probe (2)
[MDNSResponder] _sendHostProbe (therm1, 8981)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:0 OP:0 AA:0 TC:0 RD:0 RA:0 R:0 QD:1 AN:0 NS:1 AR:0
[MDNSResponder] _writeMDNSQuestion
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Did sent host probe
[MDNSResponder] _sendServiceProbe (therm1.http.TCP, 9071)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:0 OP:0 AA:0 TC:0 RD:0 RA:0 R:0 QD:1 AN:0 NS:2 AR:2
[MDNSResponder] _writeMDNSQuestion
[MDNSResponder] _writeMDNSAnswer_PTR_NAME
[MDNSResponder] _writeMDNSAnswer_SRV
[MDNSResponder] _writeMDNSAnswer_TXT
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Did sent service probe (3)
[MDNSResponder] _updateProbeStatus: Done host probing.
[MDNSResponder] _updateProbeStatus: Prepared host announcing.
[MDNSResponder] _updateProbeStatus: Done service probing therm1.http.TCP
[MDNSResponder] _updateProbeStatus: Prepared service announcing.
[MDNSResponder] _announce: Announcing host therm1 (content 0x3)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:2 NS:0 AR:0
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _writeMDNSAnswer_PTR_IP4 (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing host (1).
[MDNSResponder] _announceService: Announcing service therm1.http.TCP (content 0xF0)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:4 NS:0 AR:1
[MDNSResponder] _writeMDNSAnswer_PTR_TYPE
[MDNSResponder] _writeMDNSAnswer_PTR_NAME
[MDNSResponder] _writeMDNSAnswer_SRV
[MDNSResponder] _writeMDNSAnswer_TXT
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing service therm1.http.TCP (1)
[MDNSResponder] _announce: Announcing host therm1 (content 0x3)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:2 NS:0 AR:0
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _writeMDNSAnswer_PTR_IP4 (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing host (2).
[MDNSResponder] _announceService: Announcing service therm1.http.TCP (content 0xF0)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:4 NS:0 AR:1
[MDNSResponder] _writeMDNSAnswer_PTR_TYPE
[MDNSResponder] _writeMDNSAnswer_PTR_NAME
[MDNSResponder] _writeMDNSAnswer_SRV
[MDNSResponder] _writeMDNSAnswer_TXT
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing service therm1.http.TCP (2)
[MDNSResponder] _announce: Announcing host therm1 (content 0x3)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:2 NS:0 AR:0
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _writeMDNSAnswer_PTR_IP4 (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing host (3).
[MDNSResponder] _announceService: Announcing service therm1.http.TCP (content 0xF0)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:4 NS:0 AR:1
[MDNSResponder] _writeMDNSAnswer_PTR_TYPE
[MDNSResponder] _writeMDNSAnswer_PTR_NAME
[MDNSResponder] _writeMDNSAnswer_SRV
[MDNSResponder] _writeMDNSAnswer_TXT
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing service therm1.http.TCP (3)
[MDNSResponder] _announce: Announcing host therm1 (content 0x3)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:2 NS:0 AR:0
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _writeMDNSAnswer_PTR_IP4 (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing host (4).
[MDNSResponder] _announceService: Announcing service therm1.http.TCP (content 0xF0)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:4 NS:0 AR:1
[MDNSResponder] _writeMDNSAnswer_PTR_TYPE
[MDNSResponder] _writeMDNSAnswer_PTR_NAME
[MDNSResponder] _writeMDNSAnswer_SRV
[MDNSResponder] _writeMDNSAnswer_TXT
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing service therm1.http.TCP (4)
[MDNSResponder] _announce: Announcing host therm1 (content 0x3)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:2 NS:0 AR:0
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _writeMDNSAnswer_PTR_IP4 (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing host (5).
[MDNSResponder] _announceService: Announcing service therm1.http.TCP (content 0xF0)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:4 NS:0 AR:1
[MDNSResponder] _writeMDNSAnswer_PTR_TYPE
[MDNSResponder] _writeMDNSAnswer_PTR_NAME
[MDNSResponder] _writeMDNSAnswer_SRV
[MDNSResponder] _writeMDNSAnswer_TXT
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing service therm1.http.TCP (5)
[MDNSResponder] _announce: Announcing host therm1 (content 0x3)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:2 NS:0 AR:0
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _writeMDNSAnswer_PTR_IP4 (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing host (6).
[MDNSResponder] _announceService: Announcing service therm1.http.TCP (content 0xF0)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:4 NS:0 AR:1
[MDNSResponder] _writeMDNSAnswer_PTR_TYPE
[MDNSResponder] _writeMDNSAnswer_PTR_NAME
[MDNSResponder] _writeMDNSAnswer_SRV
[MDNSResponder] _writeMDNSAnswer_TXT
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing service therm1.http.TCP (6)
[MDNSResponder] _callProcess (16147, triggered by: 192.168.86.127)
[MDNSResponder] _parseMessage (Time: 16148 ms, heap: 42000 bytes, from 192.168.86.165(5353), to 224.0.0.251(5353))
[MDNSResponder] _readRRQuestion
[MDNSResponder] _readRRQuestion _googlecast._tcp.local Type:0x000C Class:0x0001 Multicast
[MDNSResponder] _readRRQuestion
[MDNSResponder] _readRRQuestion _233637DE._sub._googlecast._tcp.local Type:0x000C Class:0x0001 Multicast
[MDNSResponder] _parseQuery: No reply needed
[MDNSResponder] _parseMessage: Done (Succeeded after 31 ms, ate 0 bytes, remaining 42000)
[MDNSResponder] _callProcess (16187, triggered by: 192.168.86.165)
[MDNSResponder] _parseMessage (Time: 16192 ms, heap: 41704 bytes, from 192.168.86.21(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 10 ms, ate 0 bytes, remaining 41704)
[MDNSResponder] _callProcess (16339, triggered by: 192.168.86.21)
[MDNSResponder] _parseMessage (Time: 16340 ms, heap: 41704 bytes, from 192.168.86.112(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 4 ms, ate 0 bytes, remaining 41704)
[MDNSResponder] _callProcess (16352, triggered by: 192.168.86.112)
[MDNSResponder] _parseMessage (Time: 16358 ms, heap: 41704 bytes, from 192.168.86.119(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 10 ms, ate 0 bytes, remaining 41704)
[MDNSResponder] _callProcess (16376, triggered by: 192.168.86.119)
[MDNSResponder] _parseMessage (Time: 16381 ms, heap: 41672 bytes, from 192.168.86.127(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 10 ms, ate 0 bytes, remaining 41672)
[MDNSResponder] _callProcess (16399, triggered by: 192.168.86.127)
[MDNSResponder] _parseMessage (Time: 16405 ms, heap: 41688 bytes, from 192.168.86.112(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 10 ms, ate 0 bytes, remaining 41688)
[MDNSResponder] _announce: Announcing host therm1 (content 0x3)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:2 NS:0 AR:0
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _writeMDNSAnswer_PTR_IP4 (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing host (7).
[MDNSResponder] _announceService: Announcing service therm1.http.TCP (content 0xF0)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:4 NS:0 AR:1
[MDNSResponder] _writeMDNSAnswer_PTR_TYPE
[MDNSResponder] _writeMDNSAnswer_PTR_NAME
[MDNSResponder] _writeMDNSAnswer_SRV
[MDNSResponder] _writeMDNSAnswer_TXT
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Announcing service therm1.http.TCP (7)
[MDNSResponder] _callProcess (17221, triggered by: 192.168.86.112)
[MDNSResponder] _parseMessage (Time: 17222 ms, heap: 41496 bytes, from 192.168.86.165(5353), to 224.0.0.251(5353))
[MDNSResponder] _readRRQuestion
[MDNSResponder] _readRRQuestion _googlecast._tcp.local Type:0x000C Class:0x0001 Multicast
[MDNSResponder] _readRRQuestion
[MDNSResponder] _readRRQuestion _233637DE._sub._googlecast._tcp.local Type:0x000C Class:0x0001 Multicast
[MDNSResponder] _parseQuery: No reply needed
[MDNSResponder] _parseMessage: Done (Succeeded after 31 ms, ate 0 bytes, remaining 41496)
[MDNSResponder] _callProcess (17263, triggered by: 192.168.86.165)
[MDNSResponder] _parseMessage (Time: 17266 ms, heap: 41200 bytes, from 192.168.86.21(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 10 ms, ate 0 bytes, remaining 41200)
[MDNSResponder] _callProcess (17428, triggered by: 192.168.86.21)
[MDNSResponder] _parseMessage (Time: 17429 ms, heap: 41200 bytes, from 192.168.86.119(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 4 ms, ate 0 bytes, remaining 41200)
[MDNSResponder] _callProcess (17450, triggered by: 192.168.86.119)
[MDNSResponder] _parseMessage (Time: 17451 ms, heap: 41184 bytes, from 192.168.86.119(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 6 ms, ate 0 bytes, remaining 41184)
[MDNSResponder] _callProcess (17465, triggered by: 192.168.86.119)
[MDNSResponder] _parseMessage (Time: 17471 ms, heap: 41200 bytes, from 192.168.86.112(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 10 ms, ate 0 bytes, remaining 41200)
[MDNSResponder] _callProcess (17489, triggered by: 192.168.86.112)
[MDNSResponder] _parseMessage (Time: 17494 ms, heap: 41160 bytes, from 192.168.86.112(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 10 ms, ate 0 bytes, remaining 41160)
[MDNSResponder] _callProcess (17512, triggered by: 192.168.86.112)
[MDNSResponder] _parseMessage (Time: 17518 ms, heap: 41192 bytes, from 192.168.86.127(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 10 ms, ate 0 bytes, remaining 41192)
[MDNSResponder] _announce: Announcing host therm1 (content 0x3)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:2 NS:0 AR:0
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _writeMDNSAnswer_PTR_IP4 (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Done host announcing.
[MDNSResponder] _announceService: Announcing service therm1.http.TCP (content 0xF0)
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:4 NS:0 AR:1
[MDNSResponder] _writeMDNSAnswer_PTR_TYPE
[MDNSResponder] _writeMDNSAnswer_PTR_NAME
[MDNSResponder] _writeMDNSAnswer_SRV
[MDNSResponder] _writeMDNSAnswer_TXT
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _updateProbeStatus: Done service announcing for therm1.http.TCP
And here it is responding to a query when I ping it from the Windows 10 client:
[MDNSResponder] _callProcess (141349, triggered by: 192.168.86.112)
[MDNSResponder] _parseMessage (Time: 141351 ms, heap: 41184 bytes, from 192.168.86.112(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 11 ms, ate 0 bytes, remaining 41184)
[MDNSResponder] _callProcess (141381, triggered by: 192.168.86.112)
[MDNSResponder] _parseMessage (Time: 141382 ms, heap: 41192 bytes, from 192.168.86.127(5353), to 224.0.0.251(5353))
[MDNSResponder] _parseMessage: Done (Succeeded after 5 ms, ate 0 bytes, remaining 41192)
[MDNSResponder] _callProcess (156603, triggered by: 192.168.86.127)
[MDNSResponder] _parseMessage (Time: 156604 ms, heap: 41528 bytes, from 192.168.86.27(5353), to 224.0.0.251(5353))
[MDNSResponder] _readRRQuestion
[MDNSResponder] _readRRQuestion therm1.local Type:0x0001 Class:0x0001 Multicast
[MDNSResponder] _replyMaskForHost: 0x1
[MDNSResponder] _parseQuery: Host reply needed 0x1
[MDNSResponder] _parseQuery: Sending answer(0x1)...
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:1 NS:0 AR:0
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _parseMessage: Done (Succeeded after 50 ms, ate 152 bytes, remaining 41376)
[MDNSResponder] _callProcess (157763, triggered by: 192.168.86.27)
[MDNSResponder] _parseMessage (Time: 157764 ms, heap: 41528 bytes, from 192.168.86.27(5353), to 224.0.0.251(5353))
[MDNSResponder] _readRRQuestion
[MDNSResponder] _readRRQuestion therm1.local Type:0x0001 Class:0x0001 Multicast
[MDNSResponder] _replyMaskForHost: 0x1
[MDNSResponder] _parseQuery: Host reply needed 0x1
[MDNSResponder] _parseQuery: Sending answer(0x1)...
[MDNSResponder] _sendMDNSMessage_Multicast: Will send to '224.0.0.251'.
[MDNSResponder] _prepareMDNSMessage
[MDNSResponder] _prepareMDNSMessage: ID:0 QR:1 OP:0 AA:1 TC:0 RD:0 RA:0 R:0 QD:0 AN:1 NS:0 AR:0
[MDNSResponder] _writeMDNSAnswer_A (192.168.86.71)
[MDNSResponder] _parseMessage: Done (Succeeded after 49 ms, ate 152 bytes, remaining 41376)
[MDNSResponder] _callProcess (157822, triggered by: 192.168.86.27)
[MDNSResponder] _parseMessage (Time: 157827 ms, heap: 41352 bytes, from 192.168.86.27(5353), to 224.0.0.251(5353))
[MDNSResponder] _readRRQuestion
[MDNSResponder] _readRRQuestion therm1.local Type:0x001C Class:0x0001 Multicast
[MDNSResponder] _parseQuery: No reply needed
[MDNSResponder] _parseMessage: Done (Succeeded after 23 ms, ate 0 bytes, remaining 41352)

Picking this up two days later, it started working again... nothing changed (in the code, at least).

Related

Telegraf container: error resolving its own domain

I have set up a Docker stack with Telegraf, InfluxDB and Grafana to monitor URLs using Telegraf's http_request input.
When it monitors external URLs like Google there is no problem, but when it makes the request for the hostname mydomain.com, which resolves to the host's own IP, the Telegraf container gets a timeout.
I have tried launching a curl from inside the container and indeed it fails, but from the host (outside the container) the curl works.
Any idea what could be going on, or where I should look next?
root@08ad708c4a09:/# curl -m 5 https://mydomain1.com:9443
curl: (28) Connection timed out after 5001 milliseconds
root@08ad708c4a09:/# ping mydomain1.com
PING mydomain1.com (itself.ip.host.machine) 56(84) bytes of data.
64 bytes from 1.vps.net (itself.ip.host.machine): icmp_seq=1 ttl=64 time=0.148 ms
64 bytes from 1.vps.net (itself.ip.host.machine): icmp_seq=2 ttl=64 time=0.138 ms
64 bytes from 1.vps.net (itself.ip.host.machine): icmp_seq=3 ttl=64 time=0.126 ms
^C
--- mydomain1.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 25ms
rtt min/avg/max/mdev = 0.126/0.137/0.148/0.013 ms
root@08ad708c4a09:/# curl -m 5 mydomain2.com
Hello world
Thank you very much, community.
I hope Telegraf's http_request input can resolve a domain that points to the host's own IP without responding with a timeout.
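I can't verify this without the setup, but the symptom (the name resolves to the host's own public IP, curl works on the host but times out inside the container) looks like the classic hairpin-NAT problem: packets leave the bridge network for the host's public address and are never translated back. One common workaround, sketched here with hypothetical values, is to pin the name to an address that is reachable from inside the bridge network via `extra_hosts` in the compose file:

```yaml
services:
  telegraf:
    # ... existing image/volumes/config ...
    extra_hosts:
      # 172.17.0.1 is only an example; use the docker0 gateway address or
      # the host's LAN address as seen from inside the container.
      - "mydomain1.com:172.17.0.1"
```

A quick way to test the theory without changing the stack is `curl --resolve mydomain1.com:9443:172.17.0.1 https://mydomain1.com:9443` from inside the container.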

Unable to access ports from services across nodes in overlay network in swarm mode

I use the following compose file for stack deployment
version: '3.8'
x-deploy: &Deploy
  replicas: 1
  placement: &DeployPlacement
    max_replicas_per_node: 1
  restart_policy:
    max_attempts: 15
    window: 60s
  resources: &DeployResources
    reservations: &DeployResourcesReservations
      cpus: '0.05'
      memory: 10M
services:
  serv1:
    image: alpine
    networks:
      - test_nw
    deploy:
      <<: *Deploy
    entrypoint: ["tail", "-f", "/dev/null"]
  serv2:
    image: nginx
    networks:
      - test_nw
    deploy:
      <<: *Deploy
      placement:
        <<: *DeployPlacement
        constraints:
          - "node.role!=manager"
    expose: # deprecated, but I leave it here anyway
      - "80"
networks:
  test_nw:
    name: test_nw
    driver: overlay
For convenience, I'll refer to test_serv1 running in container1 on host1 and test_serv2 running in container2 on host2 for the rest of this post, since the actual host and container names keep changing for me.
When I get into the shell of test_serv1, the following happens when I ping serv2:
ubuntu@host1:~$ sudo docker exec -it test_serv1.1.container1 ash
/ # ping serv2
PING serv2 (10.0.7.5): 56 data bytes
64 bytes from 10.0.7.5: seq=0 ttl=64 time=0.084 ms
However, the IP of container2, as shown when inspecting container2, is 10.0.7.6:
ubuntu@host2:~$ sudo docker inspect test_serv2.1.container2
[
{
****************
"NetworkSettings": {
"Bridge": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"80/tcp": null
},
****************
"Networks": {
"test_nw": {
"IPAMConfig": {
"IPv4Address": "10.0.7.6"
},
"Links": null,
"Aliases": [
"80c06bb29a42"
],
"NetworkID": "sp56aiqxnt56yglsd8mc1zqpv",
"EndpointID": "dac52f1d7fa148f5acac20f89d6b709193b3c11fc90201424cd052785121e706",
"Gateway": "",
"IPAddress": "10.0.7.6",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:0a:00:07:06",
****************
}
}
}
]
I can see that container2 is listening on port 80 on all interfaces, and it can itself ping both 10.0.7.5 and 10.0.7.6 (!!) and access port 80 on both IPs (!!).
ubuntu@host2:~$ sudo docker exec -it test_serv2.1.container2 bash
root@80c06bb29a42:/# ping 10.0.7.5
PING 10.0.7.5 (10.0.7.5) 56(84) bytes of data.
64 bytes from 10.0.7.5: icmp_seq=1 ttl=64 time=0.093 ms
64 bytes from 10.0.7.5: icmp_seq=2 ttl=64 time=0.094 ms
^C
--- 10.0.7.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 8ms
rtt min/avg/max/mdev = 0.093/0.093/0.094/0.009 ms
root@80c06bb29a42:/# ping 10.0.7.6
PING 10.0.7.6 (10.0.7.6) 56(84) bytes of data.
64 bytes from 10.0.7.6: icmp_seq=1 ttl=64 time=0.035 ms
64 bytes from 10.0.7.6: icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from 10.0.7.6: icmp_seq=3 ttl=64 time=0.053 ms
^C
--- 10.0.7.6 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 50ms
rtt min/avg/max/mdev = 0.035/0.049/0.059/0.010 ms
root@80c06bb29a42:/# netstat -tuplen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 33110 1/nginx: master pro
tcp 0 0 127.0.0.11:35491 0.0.0.0:* LISTEN 0 32855 -
tcp6 0 0 :::80 :::* LISTEN 0 33111 1/nginx: master pro
udp 0 0 127.0.0.11:43477 0.0.0.0:* 0 32854 -
root@80c06bb29a42:/# curl 10.0.7.5:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@80c06bb29a42:/# curl 10.0.7.6:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@80c06bb29a42:/#
However, when I try the following from container1, I simply want to throw my laptop at a wall, because I can't figure out how nobody else has faced this issue and/or posted such a question :/
ubuntu@host1:~$ sudo docker exec -it test_serv1.1.container1 ash
/ # ping serv2
PING serv2 (10.0.7.5): 56 data bytes
64 bytes from 10.0.7.5: seq=0 ttl=64 time=0.084 ms
64 bytes from 10.0.7.5: seq=1 ttl=64 time=0.086 ms
^C
--- serv2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.084/0.085/0.086 ms
/ # curl serv2:80
^C
/ # curl --max-time 10 serv2:80
curl: (28) Connection timed out after 10001 milliseconds
/ # ping test_serv2
PING test_serv2 (10.0.7.5): 56 data bytes
64 bytes from 10.0.7.5: seq=0 ttl=64 time=0.071 ms
64 bytes from 10.0.7.5: seq=1 ttl=64 time=0.064 ms
64 bytes from 10.0.7.5: seq=2 ttl=64 time=0.125 ms
^C
--- test_serv2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.064/0.086/0.125 ms
/ # curl --max-time 10 test_serv2:80
curl: (28) Connection timed out after 10001 milliseconds
/ # ping 10.0.7.6
PING 10.0.7.6 (10.0.7.6): 56 data bytes
^C
--- 10.0.7.6 ping statistics ---
87 packets transmitted, 0 packets received, 100% packet loss
/ # curl --max-time 10 10.0.7.6:80
curl: (28) Connection timed out after 10001 milliseconds
/ #
I have checked that all the Docker ports (TCP 2376, 2377, 7946, 80 and UDP 7946, 4789) are open on both nodes.
What is going wrong here?? Any help is truly appreciated!
I'm posting this for anyone who comes looking, since there is no answer yet.
A few things to check (even though they are all mentioned in the question):
Ensure all ports are open, once again. Check iptables thoroughly, even if you set it up once before. The Docker engine changes the configuration and can leave it in an unusable state if you open the ports after Docker has already started (restarting won't fix it; you need to hard-stop Docker -> reset iptables -> start Docker CE again).
Ensure your machines' local IP addresses are not conflicting. This is a big deal. While I can't describe it precisely, look at the address ranges in use on each host and check whether any of them overlap.
Probably the most trivial, but almost always overlooked step: remember to ALWAYS init or join a swarm with both --advertise-addr and --listen-addr. The --advertise-addr should be a reachable IP address (even if not internet-facing, it is the address the other hosts use to reach this host). The --listen-addr is not documented well, but it must be the IP of the interface Docker should bind to.
Having gone through the above, note that AWS EC2 does not play well with cross-provider hosts. If you have machines spread across providers (say, IBM, Azure, GCP, etc.), EC2 plays spoilsport there. I'm very curious how this happens (it has to be something low in the network stack), but I spent a considerable amount of time trying to get it to work and it wouldn't.
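As a quick sanity check for the first point, the TCP side of those ports can be probed from one node toward the other with a short script like this (the peer address is a placeholder; UDP 7946 and 4789 cannot be verified with a plain connect, so check those separately, e.g. with a packet capture while the overlay is in use):

```python
# Probe the TCP ports Docker Swarm needs between nodes
# (2377/tcp cluster management, 7946/tcp node gossip).
import socket

SWARM_TCP_PORTS = [2377, 7946]

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    peer = "192.168.1.20"  # placeholder: address of the other swarm node
    for port in SWARM_TCP_PORTS:
        state = "open" if tcp_port_open(peer, port) else "CLOSED/FILTERED"
        print(f"{peer}:{port} -> {state}")
```

If a port shows CLOSED/FILTERED from one node but the service is listening on the other, the problem is in iptables or an intermediate firewall, not in the stack definition.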

Docker in docker routing within Kubernetes

I have a network-related issue on a Kubernetes host that uses the Calico network layer. For continuous integration I need to run Docker in Docker, but running a simple docker build with this Dockerfile:
FROM praqma/network-multitool AS build
RUN route
RUN ping -c 4 google.com
RUN traceroute google.com
produces output:
Step 1/4 : FROM praqma/network-multitool AS build
---> 3619cb81e582
Step 2/4 : RUN route
---> Running in 80bda13a9860
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 eth0
Removing intermediate container 80bda13a9860
---> d79e864eafaf
Step 3/4 : RUN ping -c 4 google.com
---> Running in 76354a92a413
PING google.com (216.58.201.110) 56(84) bytes of data.
--- google.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 53ms
---> 3619cb81e582
Step 4/4 : RUN traceroute google.com
---> Running in 3aa7908347ba
traceroute to google.com (216.58.201.110), 30 hops max, 46 byte packets
1 172.17.0.1 (172.17.0.1) 0.009 ms 0.005 ms 0.003 ms
It seems the Docker container gets broken routing when it is created by the Docker daemon directly, outside Kubernetes' control. Pods orchestrated by Kubernetes can access the internet normally:
bash-5.0# ping -c 3 google.com
PING google.com (216.58.201.110) 56(84) bytes of data.
64 bytes from prg03s02-in-f14.1e100.net (216.58.201.110): icmp_seq=1 ttl=55 time=0.726 ms
64 bytes from prg03s02-in-f14.1e100.net (216.58.201.110): icmp_seq=2 ttl=55 time=0.586 ms
64 bytes from prg03s02-in-f14.1e100.net (216.58.201.110): icmp_seq=3 ttl=55 time=0.451 ms
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 10ms
rtt min/avg/max/mdev = 0.451/0.587/0.726/0.115 ms
bash-5.0# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 169.254.1.1 0.0.0.0 UG 0 0 0 eth0
169.254.1.1 * 255.255.255.255 UH 0 0 0 eth0
bash-5.0# traceroute google.com
traceroute to google.com (216.58.201.110), 30 hops max, 46 byte packets
1 10-68-149-194.kubelet.kube-system.svc.kube.example.com (10.68.149.194) 0.006 ms 0.005 ms 0.004 ms

How can I fix networking for Docker-in-Docker on Kubernetes?
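The two routing tables shown above can be contrasted programmatically. A minimal sketch with Python's ipaddress module, using the addresses taken directly from the outputs (the interpretation of Calico's gateway is an assumption based on how Calico sets up per-pod point-to-point routes):

```python
import ipaddress

# Route seen inside the nested build container: default via docker0's bridge.
docker_default_gw = ipaddress.ip_address("172.17.0.1")
docker_bridge = ipaddress.ip_network("172.17.0.0/16")

# Route seen inside a Kubernetes pod: Calico's link-local gateway.
calico_gw = ipaddress.ip_address("169.254.1.1")

# The build container sits on the plain docker0 bridge, not on Calico's fabric:
assert docker_default_gw in docker_bridge

# Calico's gateway is a link-local address (169.254.0.0/16); the two networks
# do not overlap, so the nested container depends entirely on the host doing
# NAT for docker0 traffic, which the pods themselves never need.
assert calico_gw.is_link_local
assert not docker_bridge.overlaps(ipaddress.ip_network("169.254.0.0/16"))
```

In other words, the traceroute dying at 172.17.0.1 is consistent with the host not masquerading docker0 traffic, while pod traffic leaves through Calico's own routing.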

I have a Kubernetes cluster and I am using a Jenkins pipeline:
podTemplate(label: 'pod-golang', containers: [
    containerTemplate(name: 'golang', image: 'golang:latest', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'docker', image: 'docker:17.11-dind', ttyEnabled: true, command: 'cat'),
  ],
  volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')]
) {
  node('pod-golang') {
    def app
    String applicationName = "auth"
    String buildNumber = "0.1.${env.BUILD_NUMBER}"

    stage 'Checkout'
    checkout scm

    container('docker') {
      stage 'Create docker image'
      app = docker.build("test/${applicationName}")
    }
  }
}
When I run the docker build command, the network does not work in the newly created container:
Step 1/6 : FROM alpine:latest
---> e21c333399e0
Step 2/6 : RUN apk --no-cache add ca-certificates
---> Running in 8483bb918ee8
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz: operation timed out
EXITCODE 0
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz: operation timed out
If I use the docker run command on the host machine, I see that the network does not work properly in a manually started Docker image either:
root@node2:~/tmp# docker run --rm -it alpine ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
^C
--- 8.8.8.8 ping statistics ---
14 packets transmitted, 0 packets received, 100% packet loss
root@node2:~/tmp# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=12.9 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=12.9 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 12.927/12.943/12.960/0.114 ms
But when I use a pod via kubectl, everything works.
How can I fix that?
Open another window and run the tcpdump -vvv host 8.8.8.8 command to see whether traffic is going out.
Here is my host output.
# tcpdump -vvv host 8.8.8.8
tcpdump: listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
18:36:35.142633 IP (tos 0x0, ttl 63, id 2091, offset 0, flags [DF], proto ICMP (1), length 84)
webserver > google-public-dns-a.google.com: ICMP echo request, id 256, seq 0, length 64
18:36:35.170475 IP (tos 0x0, ttl 55, id 18270, offset 0, flags [none], proto ICMP (1), length 84)
google-public-dns-a.google.com > webserver: ICMP echo reply, id 256, seq 0, length 64
18:36:36.146145 IP (tos 0x0, ttl 63, id 2180, offset 0, flags [DF], proto ICMP (1), length 84)
webserver > google-public-dns-a.google.com: ICMP echo request, id 256, seq 1, length 64
# docker run --rm -it alpine ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=54 time=30.720 ms
64 bytes from 8.8.8.8: seq=1 ttl=54 time=25.576 ms
64 bytes from 8.8.8.8: seq=2 ttl=54 time=28.464 ms
64 bytes from 8.8.8.8: seq=3 ttl=54 time=33.860 ms
64 bytes from 8.8.8.8: seq=4 ttl=54 time=25.525 ms

Erlang node connectivity problem

I am struggling to connect 2 nodes running on separate boxes. I have tried to make sure there are none of the usual problems with cookie synchronization, DNS, or the firewall.
First, I run epmd in debug mode as recommended by Erlang docs:
epmd -d -d
Then on box #1:
erl -name xmpp1@server1.net -kernel inet_dist_listen_min 6000 inet_dist_listen_max 6050 -setcookie testcookie
and on box #2:
erl -name xmpp2@server2.net -kernel inet_dist_listen_min 6000 inet_dist_listen_max 6050 -setcookie testcookie
No luck with ping. For example, on box #2:
Erlang (BEAM) emulator version 5.6.4 [source] [64-bit] [smp:4] [async-threads:0] [kernel-poll:false]
Eshell V5.6.4 (abort with ^G)
(xmpp2@server2.net)1> net_adm:ping('xmpp1@server1.net').
pang
epmd on server1.net shows following:
epmd: Sun Sep 12 01:40:32 2010: opening connection on file descriptor 6
epmd: Sun Sep 12 01:40:32 2010: got 8 bytes
***** 00000000 00 06 7a 78 6d 70 70 31 |..zxmpp1|
epmd: Sun Sep 12 01:40:32 2010: ** got PORT2_REQ
epmd: Sun Sep 12 01:40:32 2010: got 18 bytes
***** 00000000 77 00 17 70 4d 00 00 05 00 05 00 05 78 6d 70 70 |w..pM.......xmpp|
***** 00000010 31 00 |1.|
epmd: Sun Sep 12 01:40:32 2010: ** sent PORT2_RESP (ok) for "xmpp1"
epmd: Sun Sep 12 01:40:32 2010: closing connection on file descriptor 6
i.e., it appears to receive the lookup triggered by the ping from the second node and to respond with ok.
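The hex dumps above can be decoded against the EPMD wire format described in the Erlang distribution protocol documentation. A sketch, assuming the standard PORT_PLEASE2_REQ/PORT2_RESP layout (2-byte length prefix, then a request byte):

```python
import struct

# Request epmd logged: "got 8 bytes ... 00 06 7a 78 6d 70 70 31"
req = bytes.fromhex("00067a786d707031")
length, = struct.unpack(">H", req[:2])   # 2-byte big-endian length prefix
assert length == 6
assert req[2] == 0x7A                    # 'z' = PORT_PLEASE2_REQ
assert req[3:] == b"xmpp1"               # node name being looked up

# Start of the response epmd sent: "77 00 17 70 ..."
resp = bytes.fromhex("77001770")
assert resp[0] == 0x77                   # 'w' = PORT2_RESP
assert resp[1] == 0                      # result 0 = ok
dist_port = int.from_bytes(resp[2:4], "big")
assert dist_port == 6000                 # matches inet_dist_listen_min
```

So epmd on server1.net did hand out xmpp1's distribution port (6000). The actual node-to-node handshake then happens on that port, not on 4369, which is why a firewall can still block the connection even though the epmd exchange on 4369 looks healthy.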
Tshark listening on epmd port (TCP 4369) gives following (I replaced real IPs with server names):
1 0.000000 server2.net -> server1.net TCP 43809 > epmd [SYN] Seq=0 Win=5840 Len=0 MSS=1460 SACK_PERM=1 TSV=776213773 TSER=0 WS=5
2 0.000433 server1.net -> server2.net TCP epmd > 43809 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 SACK_PERM=1 TSV=1595930818 TSER=776213773 WS=6
3 0.000483 server2.net -> server1.net TCP 43809 > epmd [ACK] Seq=1 Ack=1 Win=5856 Len=0 TSV=776213773 TSER=1595930818
4 0.000545 server2.net -> server1.net EPMD 43809 > epmd [PSH, ACK] Seq=1 Ack=1 Win=5856 Len=8 TSV=776213773 TSER=1595930818
5 0.001445 server1.net -> server2.net TCP epmd > 43809 [ACK] Seq=1 Ack=9 Win=5824 Len=0 TSV=1595930818 TSER=776213773
6 0.001466 server1.net -> server2.net EPMD epmd > 43809 [PSH, ACK] Seq=1 Ack=9 Win=5824 Len=18 TSV=1595930818 TSER=776213773
7 0.001474 server2.net -> server1.net TCP 43809 > epmd [ACK] Seq=9 Ack=19 Win=5856 Len=0 TSV=776213773 TSER=1595930818
8 0.001481 server1.net -> server2.net TCP epmd > 43809 [FIN, ACK] Seq=19 Ack=9 Win=5824 Len=0 TSV=1595930818 TSER=776213773
9 0.001623 server2.net -> server1.net TCP 43809 > epmd [FIN, ACK] Seq=9 Ack=20 Win=5856 Len=0 TSV=776213773 TSER=1595930818
10 0.001990 server1.net -> server2.net TCP epmd > 43809 [ACK] Seq=20 Ack=10 Win=5824 Len=0 TSV=1595930818 TSER=776213773
So it looks to me like there are no firewall issues, as the epmd instances talk to each other. What am I missing?
Your advice is very much appreciated!
Best regards,
Boris
I am also a newbie to Erlang. My first few experiments were with binding an absolute IP address:
erl -name coder@192.168.1.2 -setcookie thusismadness
erl -name dumb@192.168.1.3 -setcookie thusismadness
If you are connecting over the internet, make sure you open the ports specified by inet_dist_listen_min and inet_dist_listen_max (the app ports) plus the epmd port in your router:
Server1 -> router1: ports open for epmd & app port
Server2 -> router2: ports open for epmd & app port
Please bind to an IP address first before using hostnames.
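The port checklist above can be sanity-checked from the client side with a plain TCP connect. A hedged diagnostic sketch (the host name and port range in the usage comment are illustrative, taken from the question's inet_dist_listen settings):

```python
import socket

def reachable_ports(host, ports, timeout=2.0):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            # create_connection raises OSError if refused, filtered, or timed out.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass
    return open_ports

# Illustrative usage: epmd (4369) plus the configured distribution range.
# print(reachable_ports("server1.net", [4369] + list(range(6000, 6051))))
```

If 4369 shows up as reachable but the 6000-6050 range does not, the symptom matches this thread exactly: the epmd lookup succeeds while the distribution handshake is blocked.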
It turned out to be a firewall issue. Big thanks to Michael Santos, who pointed me in the right direction. Check out his answer here.
