Armbian Ubuntu Netplan match with different wifi adapters

I'm trying to configure my Orange Pi to connect to a Wi-Fi hotspot using different Wi-Fi adapters.
Configuring a single Wi-Fi adapter in my Netplan file /etc/netplan/armbian-default.yaml works smoothly; config below:
network:
  version: 2
  ethernets:
    eth0:
      renderer: networkd
      dhcp4: no
      addresses:
        [192.168.1.114/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 4.4.4.4]
  wifis:
    wlx00e1b0101341:
      renderer: networkd
      access-points:
        "wifissid":
          password: "wifipass"
      dhcp4: no
      addresses:
        [192.168.43.7/24, 192.168.42.7/24]
My Wi-Fi adapters' names all start with "wlx", and my goal is to have a wildcard configuration and avoid configuring each one individually. But when I try to add a match parameter, as below,
network:
  version: 2
  ethernets:
    eth0:
      renderer: networkd
      dhcp4: no
      addresses:
        [192.168.1.114/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 4.4.4.4]
  wifis:
    match:
      name: wlx*
    renderer: networkd
    access-points:
      "wifissid":
        password: "wifipass"
    dhcp4: no
    addresses:
      [192.168.43.7/24, 192.168.42.7/24]
I get the error below when running netplan --debug apply:
Error in network definition //etc/netplan/armbian-default.yaml line 13
column 6: unknown key name
Any ideas?

The keys directly under ethernets: or wifis: must be definition IDs; match: belongs inside an (arbitrarily named) ID, not at that level. This is what I have on my EC2 Ubuntu 18.04 box to match multiple Ethernet interface names, which are usually assigned dynamically:
network:
  version: 2
  ethernets:
    ens:
      match:
        name: ens*
      dhcp4: true
      dhcp6: false
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
        search: [~.]
    eth:
      match:
        name: eth*
      dhcp4: true
      dhcp6: false
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
        search: [~.]
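Applying the same pattern to the Wi-Fi question above would look like the sketch below: the match block moves inside an arbitrarily named definition ID (wlxall here is a made-up name), assuming your netplan version supports match: for wifis:
network:
  version: 2
  wifis:
    wlxall:
      match:
        name: "wlx*"
      renderer: networkd
      dhcp4: no
      addresses:
        [192.168.43.7/24, 192.168.42.7/24]
      access-points:
        "wifissid":
          password: "wifipass"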

Related

Attaching a second network to a Docker NGINX container causes it to stop responding to any of them

I've been trying to set up what might be a rather complicated Docker setup, and have run into a very weird issue. What I currently have is a collection of containers, all running different web services, and an Nginx container that routes them to be publicly accessible over HTTPS. This has worked fine, but it meant I could only set up services that use HTTPS, and everything ran over one of the 5 static IPs my ISP has given me, routed through my UniFi network.
When I went to add GitLab, I realized I needed to connect it to a separate public address so that I could access port 22 for SSH-based Git clones. Since I already had the switch port connected to my modem on a VLAN (topology weirdness, but it works fine), I simply tagged the server port to allow that VLAN through and started using a macvlan network.
As soon as I added the macvlan to my Nginx container, it stopped working altogether. After spending several hours making sure my static IPs were actually set up correctly, I found out that if I attach more than one network to my Nginx container, it stops responding to anything at all. If I attach just the macvlan, it responds just fine, even over my static IP. But if there is more than one network, everything stops working: pings, TCP requests, everything. If I use docker network disconnect to remove the network from the running instance, it starts working immediately again.
I've tried this with just netcat on an Alpine instance, and can confirm that all inbound traffic stops immediately when a second network is attached and resumes as soon as it's removed. I'm including a sample docker-compose file that shows this effect just by adding or removing the networks.
docker version:
Client: Docker Engine - Community
  Version:      20.10.13
  API version:  1.41
  Go version:   go1.16.15
  Git commit:   a224086
  Built:        Thu Mar 10 14:07:51 2022
  OS/Arch:      linux/amd64
  Context:      default
  Experimental: true

Server: Docker Engine - Community
  Engine:
    Version:      20.10.13
    API version:  1.41 (minimum version 1.12)
    Go version:   go1.16.15
    Git commit:   906f57f
    Built:        Thu Mar 10 14:05:44 2022
    OS/Arch:      linux/amd64
    Experimental: false
  containerd:
    Version:   1.5.10
    GitCommit: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
  runc:
    Version:   1.0.3
    GitCommit: v1.0.3-0-gf46b6ba
  docker-init:
    Version:   0.19.0
    GitCommit: de40ad0
docker info:
Client:
  Context:    default
  Debug Mode: false
  Plugins:
    app: Docker App (Docker Inc., v0.9.1-beta3)
    buildx: Docker Buildx (Docker Inc., v0.8.0-docker)
    compose: Docker Compose (Docker Inc., v2.2.3)
    scan: Docker Scan (Docker Inc., v0.12.0)

Server:
  Containers: 1
    Running: 0
    Paused: 0
    Stopped: 1
  Images: 9
  Server Version: 20.10.13
  Storage Driver: zfs
    Zpool: Storage
    Zpool Health: ONLINE
    Parent Dataset: Storage/docker
    Space Used By Parent: 87704957952
    Space Available: 8778335683049
    Parent Quota: no
    Compression: off
  Logging Driver: json-file
  Cgroup Driver: cgroupfs
  Cgroup Version: 1
  Plugins:
    Volume: local
    Network: bridge host ipvlan macvlan null overlay
    Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
  Swarm: inactive
  Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux nvidia runc
  Default Runtime: runc
  Init Binary: docker-init
  containerd version: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
  runc version: v1.0.3-0-gf46b6ba
  init version: de40ad0
  Security Options:
    apparmor
    seccomp
      Profile: default
  Kernel Version: 5.4.0-104-generic
  Operating System: Ubuntu 20.04.4 LTS
  OSType: linux
  Architecture: x86_64
  CPUs: 12
  Total Memory: 39.18GiB
  Name: server2
  ID: <Redacted>
  Docker Root Dir: /var/lib/docker
  Debug Mode: false
  Registry: https://index.docker.io/v1/
  Labels:
  Experimental: false
  Insecure Registries:
    127.0.0.0/8
  Live Restore Enabled: false

WARNING: No swap limit support
EDIT: forgot to add the docker compose file. Here it is:
services:
  nginx:
    image: nginx:1.21.6-alpine
    networks:
      public_interface:
        ipv4_address: 123.456.789.102 # Replaced with nonsense for privacy reasons
      private_interface:
        ipv4_address: 192.168.5.2
      web_interface:

networks:
  web_interface:
  public_interface:
    driver: macvlan
    driver_opts:
      parent: enp10s0.100
    ipam:
      config:
        - subnet: 123.456.789.101/29 # Replaced with nonsense for privacy reasons
          gateway: 123.456.789.108 # Replaced with nonsense for privacy reasons
  private_interface:
    driver: macvlan
    driver_opts:
      parent: enp10s0.305
    ipam:
      config:
        - subnet: 192.168.5.0/24
          gateway: 192.168.5.1
OK, time to answer this so I don't become the next #979. Turns out I was right about the routing: my issue lay not in Docker, but in how the network router in the kernel works. I confirmed this by running an application without Docker (just a simple Python HTTP server) and testing, finding the exact same issue.
The solution, it turns out, is to use a combination of routing tables, iptables, and packet marks. The first part depends on your network backend. I'm using Netplan, 'cause Ubuntu, which means I have to tell Netplan to set up routing tables:
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp6: false
      gateway4: 192.168.1.1
    eth1:
      dhcp4: false
      dhcp6: false
      addresses:
        - 123.456.789.20/24 # Server address + subnet
      routes:
        - to: 0.0.0.0/0
          via: 123.456.789.1 # Gateway address
          metric: 500
          table: 100
      routing-policy:
        - from: 123.456.789.20 # Server address
          table: 100
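After netplan apply, the new table and rule can be verified with standard iproute2 commands (table 100 and the addresses are the placeholders from the config above):
ip rule show            # should list: from 123.456.789.20 lookup 100
ip route show table 100 # should list: default via 123.456.789.1 dev eth1 ... metric 500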
If you're not using Docker, this patches everything nicely and things "just work". If you are, you'll need to also add a packet mark and tell iptables to keep said mark when handing the packet to the Docker container. First, add a rule that routes packets carrying mark 0x1 through the new table:
ip rule add fwmark 0x1 table 100
Followed by telling iptables to set and restore the marks:
# mark new connections arriving on eth1 (the mark is stored on the conntrack entry)
iptables -t mangle -A PREROUTING -i eth1 -m conntrack --ctstate NEW --ctdir ORIGINAL -j CONNMARK --set-mark 0x1
# copy the stored mark back onto later packets of marked connections (inbound path)
iptables -t mangle -A PREROUTING -m conntrack ! --ctstate NEW --ctdir REPLY -m connmark ! --mark 0x0 -j CONNMARK --restore-mark
# same for locally generated packets (outbound path)
iptables -t mangle -A OUTPUT -m conntrack ! --ctstate NEW --ctdir REPLY -m connmark ! --mark 0x0 -j CONNMARK --restore-mark
Hopefully that helps future docker users. It was certainly an experience.
I also wrote all of this up on my blog, along with a bit more detail of where things started, why I was in this pickle, and how I figured it out: https://wiki.faeranne.com/en/blogs/nexus-labs/docker-netplan-woes

minikube ip returns 127.0.0.1 | Kubernetes NodePort service not accessible

I have two Kubernetes objects:
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  containers:
    - name: client
      image: stephengrider/multi-client
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
        - containerPort: 3000
apiVersion: v1
kind: Service
metadata:
  name: client-node-port
spec:
  type: NodePort
  selector:
    component: web
  ports:
    - port: 3050
      targetPort: 3000
      nodePort: 31515
and I applied both using kubectl apply -f <file_name>. After that, here is the output:
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
client-node-port NodePort 10.100.230.224 <none> 3050:31515/TCP 30m
The pod output:
NAME READY STATUS RESTARTS AGE
client-pod 1/1 Running 0 28m
but when I run minikube ip it returns 127.0.0.1.
I'm using minikube with the Docker driver.
After following this issue https://github.com/kubernetes/minikube/issues/7344,
I got the node IP using:
kubectl get node -o json |
jq --raw-output \
'.items[0].status.addresses[]
| select(.type == "InternalIP")
.address
'
But even then I was not able to access the service. After more searching I found
minikube service --url client-node-port
šŸƒ Starting tunnel for service client-node-port.
|-----------|------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------------------|-------------|------------------------|
| default | client-node-port | | http://127.0.0.1:52694 |
|-----------|------------------|-------------|------------------------|
http://127.0.0.1:52694
ā— Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
I can access the service using minikube service.
Question:
But I want to know why the exposed nodePort didn't work.
Why did I have to use this workaround to access the application?
More Information:
minikube version
minikube version: v1.10.1
commit: 63ab801ac27e5742ae442ce36dff7877dcccb278
docker version
Client: Docker Engine - Community
  Version:      19.03.8
  API version:  1.40
  Go version:   go1.12.17
  Git commit:   afacb8b
  Built:        Wed Mar 11 01:21:11 2020
  OS/Arch:      darwin/amd64
  Experimental: false

Server: Docker Engine - Community
  Engine:
    Version:      19.03.8
    API version:  1.40 (minimum version 1.12)
    Go version:   go1.12.17
    Git commit:   afacb8b
    Built:        Wed Mar 11 01:29:16 2020
    OS/Arch:      linux/amd64
    Experimental: false
  containerd:
    Version:   v1.2.13
    GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
  runc:
    Version:   1.0.0-rc10
    GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
  docker-init:
    Version:   0.18.0
    GitCommit: fec3683
kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
If you need more info, I'm willing to provide it.
minikube ssh
docker@minikube:~$ ip -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0
       valid_lft forever preferred_lft forever
945: eth0@if946: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
I had the same problem. The issue is not with the IP 127.0.0.1. The issue was that I was calling the port I had defined in the YAML file for the NodePort. It turns out minikube assigns a different port for external access.
The way I did it:
List all services in a nicely formatted table:
$ minikube service list
Show the IP and external port:
$ minikube service Type-Your-Service-Name
If you do that, minikube will open the browser and run your app.
This command will help.
minikube service --url $SERVICE
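Another option that sidesteps the node IP entirely is kubectl port-forward, which tunnels through the API server; using the service from the question, local port 3050 is forwarded to the service, which targets the pod's port 3000:
kubectl port-forward service/client-node-port 3050:3050
# then open http://localhost:3050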
I had the same problem. What worked for me:
Download and install VirtualBox (VirtualBox.org)
Install minikube: brew reinstall minikube (if already installed)
minikube start --vm-driver=virtualbox
minikube ip (this will return the node IP)
That IP can be used to open your app in the browser.

Docker swarm: public service not reachable from inside container of the same deployment

My setup: I have a single-machine Docker swarm "cluster".
Simply said, I have a stack deployment composed of two services, A and B. Both services are connected (through an external overlay network) to another stack running a Traefik proxy, which exposes those services to the public.
I can reach both services via their Traefik routing from my browser.
What doesn't work, though:
I cannot reach service A from within service B using A's public domain (via its Traefik routing). I always get a connection timeout when attempting an HTTP call.
Is this some regular, expected behavior that can be fixed with some option, or is my setup somehow broken? I read that endpoint_mode: dnsrr might help in some situations of this kind, but it really didn't make a difference for me. I tried it on both services as well as on the traefik service, as sketched below.
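For reference, endpoint_mode is declared per service under the deploy key of a version 3 stack file; a minimal sketch, with the service name taken from the stack file linked below:
services:
  cmpauthorization:
    deploy:
      endpoint_mode: dnsrr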
I don't want to overwhelm you with all the configuration details of my machine and swarm deployments right here, as that might be overkill if I just made a configuration mistake that's obvious from the problem description.
For the ambitious reader, here are some further details:
$ docker info
Client:
  Debug Mode: false

Server:
  Containers: 125
    Running: 59
    Paused: 0
    Stopped: 66
  Images: 328
  Server Version: 19.03.8
  Storage Driver: overlay2
    Backing Filesystem: <unknown>
    Supports d_type: true
    Native Overlay Diff: true
  Logging Driver: loki
  Cgroup Driver: cgroupfs
  Plugins:
    Volume: local
    Network: bridge host ipvlan macvlan null overlay
    Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
  Swarm: active
    NodeID: juwtdqk60ufj3rityctog7ev0
    Is Manager: true
    ClusterID: 3mte2zcw4nfc1jq17dzwvtoi3
    Managers: 1
    Nodes: 1
    Default Address Pool: 10.0.0.0/8
    SubnetSize: 24
    Data Path Port: 4789
    Orchestration:
      Task History Retention Limit: 5
    Raft:
      Snapshot Interval: 10000
      Number of Old Snapshots to Retain: 0
      Heartbeat Tick: 1
      Election Tick: 10
    Dispatcher:
      Heartbeat Period: 5 seconds
    CA Configuration:
      Expiry Duration: 3 months
      Force Rotate: 0
    Autolock Managers: false
    Root Rotation In Progress: false
    Node Address: 178.254.21.80
    Manager Addresses:
      178.254.21.80:2377
  Runtimes: runc
  Default Runtime: runc
  Init Binary: docker-init
  containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
  runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
  init version: fec3683
  Security Options:
    apparmor
    seccomp
      Profile: default
  Kernel Version: 4.15.0-96-generic
  Operating System: Ubuntu 18.04.4 LTS
  OSType: linux
  Architecture: x86_64
  CPUs: 8
  Total Memory: 31.41GiB
  Name: rv1324
  ID: L7JD:POVR:EPMY:JGMF:3BFX:DQJA:B5KK:O3PG:YH44:TNLJ:YD3I:GROZ
  Docker Root Dir: /var/lib/docker
  Debug Mode: false
  Registry: https://index.docker.io/v1/
  Labels:
  Experimental: false
  Insecure Registries:
    127.0.0.0/8
  Live Restore Enabled: false
The swarm stack I'm trying to deploy can be found here:
https://github.com/skuzzle/cmp/blob/0f8004b41f1a486fee7b6705c4bcbc39a2414412/swarm-stack/feature.yml
The connectivity problem is between the cmpauthorization and the cmpfrontend service. In order to finish an OAuth2 authorization process, the latter service needs to send a POST request to the authorization service's public domain.

Docker resolver connect failed: dial udp connect network is unreachable

How can I investigate the root cause of a "network is unreachable" error in Docker? I'm using Docker in swarm mode with docker-compose. My logs are full of entries like this:
Jun 05 12:23:22 myServer dockerd[6151]: time="2019-06-05T12:23:22.996311465+02:00" level=warning msg="[resolver] connect failed: dial udp 10.58.194.11:53: connect: network is unreachable"
Jun 05 12:23:22 myServer dockerd[6151]: time="2019-06-05T12:23:22.996246500+02:00" level=warning msg="[resolver] connect failed: dial udp 10.58.194.16:53: connect: network is unreachable"
Jun 05 12:23:22 myServer dockerd[6151]: time="2019-06-05T12:23:22.996342243+02:00" level=warning msg="[resolver] connect failed: dial udp 10.58.194.11:53: connect: network is unreachable"
RedHat 7.5 Linux server
Docker version 18.02.0-ce, build fc4de44
docker-compose version 1.19.0, build 9e633ef
docker-compose.yml version: "3.4"
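A few standard starting points for digging in, since the 10.58.x.x:53 targets in the log are most likely the upstream DNS servers that Docker's embedded resolver (127.0.0.11 inside each container) forwards to; <container> and <network> are placeholders:
# which resolver the container uses (127.0.0.11 on user-defined/overlay networks)
docker exec <container> cat /etc/resolv.conf
# routes inside the container (requires iproute2 in the image)
docker exec <container> ip route
# subnets and peers of the network the service is attached to
docker network inspect <network>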
Docker info:
Containers: 121
  Running: 115
  Paused: 0
  Stopped: 6
Images: 752
Server Version: 18.02.0-ce
Storage Driver: overlay
  Backing Filesystem: xfs
  Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
  NodeID: v7bi5f4cnbzmq4hrnbw6xlev8
  Is Manager: true
  ClusterID: 5wpoqhzk0vphn1grirb7y34jk
  Managers: 1
  Nodes: 1
  Orchestration:
    Task History Retention Limit: 5
  Raft:
    Snapshot Interval: 10000
    Number of Old Snapshots to Retain: 0
    Heartbeat Tick: 1
    Election Tick: 3
  Dispatcher:
    Heartbeat Period: 5 seconds
  CA Configuration:
    Expiry Duration: 3 months
    Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 10.49.0.242
  Manager Addresses:
    10.49.0.242:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9b55aab90508bd389d7654c4baf173a981477d55
runc version: 9f9c96235cc97674e935002fc3d78361b696a69e
init version: 949e6fa
Security Options:
  seccomp
    Profile: default
Kernel Version: 3.10.0-862.14.4.el7.x86_64
Operating System: Red Hat Linux
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 47GiB
Name: myServer
ID: V5I2:SAMK:WAEE:TLGM:ZA7M:5C4D:A2DK:TQVA:462K:DA7M:RTAT:QDSF
Docker Root Dir: /net/volume/fs0/dckrdata
Debug Mode (client): false
Debug Mode (server): false
HTTP Proxy: http://localhost:3128
HTTPS Proxy: http://localhost:3128
No Proxy: .myApp.com,localhost,127.0.0.1
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
  docker.myApp.com
  registry:5000
  127.0.0.0/8
Live Restore Enabled: false

How to connect two local machines via docker swarm?

I would like to test some Docker swarm features, and for that I have a Windows PC and a MacBook Pro, both on my private network.
I installed Docker for Windows (Windows 10 Pro, using Linux containers) and also Docker for Mac.
Then I started both of them and also configured my router to allow the ports they need for TCP and UDP:
Port 2377 TCP for node communication
Port 7946 TCP/UDP for container network discovery
Port 4789 UDP for the container ingress network
Also, I deactivated the firewall both on my PC and on my Mac.
Then I ran docker swarm init on my MacBook, which gave me a join token.
On my Windows PC I entered that join command in the console and... it failed!
I got an error message that ends with "... connection refused".
So, can you give me some advice or links on how to properly connect two local machines via Docker swarm? I would LOVE to test it and use it for local development and testing of my apps. Thanks!
Docker Info from Mac
$ docker info
Containers: 2
  Running: 2
  Paused: 0
  Stopped: 0
Images: 185
Server Version: 18.03.1-ce
Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
  Volume: local
  Network: bridge host macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
  NodeID: v3fhiinezmdbbn98l0s6bgqzo
  Is Manager: true
  ClusterID: o9mcdlgtq37t5r86ganupstez
  Managers: 1
  Nodes: 1
  Orchestration:
    Task History Retention Limit: 5
  Raft:
    Snapshot Interval: 10000
    Number of Old Snapshots to Retain: 0
    Heartbeat Tick: 1
    Election Tick: 10
  Dispatcher:
    Heartbeat Period: 5 seconds
  CA Configuration:
    Expiry Duration: 3 months
    Force Rotate: 0
  Autolock Managers: false
  Root Rotation In Progress: false
  Node Address: 192.168.65.3
  Manager Addresses:
    192.168.65.3:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
  seccomp
    Profile: default
Kernel Version: 4.9.87-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 4.095GiB
Name: linuxkit-025000000001
ID: 2D57:Q3QP:6UZ2:S6JV:WXLG:JN4H:TR6G:V3C3:P6ZP:2ENA:L7ES:OIJD
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTP Proxy: docker.for.mac.http.internal:3128
HTTPS Proxy: docker.for.mac.http.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
  127.0.0.0/8
Live Restore Enabled: false
Docker Info from Windows
$ docker info
Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
Images: 0
Server Version: 18.09.2
Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
  Volume: local
  Network: bridge host macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 09c8266bf2fcf9519a651b04ae54c967b9ab86ec
init version: fec3683
Security Options:
  seccomp
    Profile: default
Kernel Version: 4.9.125-linuxkit
Operating System: Docker for Windows
OSType: linux
Architecture: x86_64
CPUs: 3
Total Memory: 7.768GiB
Name: linuxkit-00155d674805
ID: S7LD:PA6I:QGZR:YFQH:BR62:JS5C:DZLS:C6O3:RZUL:7ZXE:PRI6:HPRD
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
  File Descriptors: 22
  Goroutines: 46
  System Time: 2019-04-11T13:28:11.3484452Z
  EventsListeners: 1
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
  127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
Docker swarm join command output
$ docker swarm join --token SWMTKN-1-5rp7ownwv3ob27vl52ogo8z6d3mbxasdfasdfsadfkrf8hqjk1b5-bi2p5u7i7blk5wepw389sba0w 192.168.x.x:2377
Error response from daemon: rpc error: code = Unavailable desc = all
SubConns are in TransientFailure, latest connection error:
connection error:
desc = "transport: Error while dialing dial tcp 192.168.x.x:2377:
connect: connection refused"
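Before digging into Docker itself, a quick reachability test of the manager port from the joining machine can rule out basic networking; nc and PowerShell's Test-NetConnection are standard tools for this (192.168.x.x stays redacted as above):
# from the Windows PC (PowerShell):
Test-NetConnection 192.168.x.x -Port 2377
# or with netcat from any machine:
nc -vz 192.168.x.x 2377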
The problem is that neither Docker Desktop for Mac nor Docker Desktop for Windows with Linux containers is a "true" Docker host. Both use a virtual machine running Linux, and that is where the real Docker engine works.
If I'm correct, 192.168.65.3 is not the IP of your Mac but the IP of the VM inside some virtual Mac-only network.
Based on this article https://docs.docker.com/docker-for-mac/docker-toolbox/ and this sentence, "Also note that Docker Desktop for Mac can't route traffic to containers, so you can't directly access an exposed port on a running container from the hosting machine.", connecting Mac and Windows on Linux containers might not be easy.
For testing, I'd recommend either getting some cloud VMs, or on Windows you can use the docker-machine command to spawn multiple Linux VMs on which you can set up a local swarm to test the features you wish.
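A minimal sketch of that docker-machine approach, assuming VirtualBox is installed; the machine names are arbitrary and <token> is the join token printed by the init command:
# create two Linux VMs, each running a real Docker engine
docker-machine create --driver virtualbox manager1
docker-machine create --driver virtualbox worker1
# initialize the swarm on the manager, advertising its VM IP
docker-machine ssh manager1 "docker swarm init --advertise-addr $(docker-machine ip manager1)"
# join the worker using the printed token
docker-machine ssh worker1 "docker swarm join --token <token> $(docker-machine ip manager1):2377"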
