Cannot deploy Docker Swarm stack in rootless mode, mkdir /var/lib/docker: permission denied - docker

I've set up Docker in rootless mode under Ubuntu 20.04 and Debian 11 (in my case, using Ansible and this role). I want to deploy a simple Docker stack to the node via Docker Swarm. No other hosts are involved, just one Swarm node from the same machine, acting as a manager.
I can run this project with Docker and Docker Compose just fine, also in rootless mode. All that changes for the rootless setup is that DOCKER_HOST is overwritten in .bashrc:
export XDG_RUNTIME_DIR="/run/user/1000"
export DOCKER_HOST="unix:///run/user/1000/docker.sock"
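With that in place, a quick sanity check confirms the client is talking to the rootless daemon; in rootless mode the daemon's root directory should live under $HOME rather than /var/lib/docker:
$ docker info --format '{{.DockerRootDir}}'
/home/vagrant/.local/share/docker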
When I deploy the stack though, none of the services can start (here is an excerpt of the status):
$ docker stack deploy -c docker-stack.yml demo-stack
$ docker stack ps demo-stack --no-trunc
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
jig6zyewkem2g225509x91nt5 demo-stack_db.1 registry.example.com/db:v1.20.2 bullseye Shutdown Rejected 15 seconds ago "mkdir /var/lib/docker: permission denied"
ox6x5w7du9o5ew2v70g5mfg9e demo-stack_redis.1 registry.example.com/redis:v1.20.2 bullseye Shutdown Rejected 15 seconds ago "mkdir /var/lib/docker: permission denied"
ipme447wrrsjc8jw6cpfak4hq demo-stack_web.1 registry.example.com/web:v1.20.2 bullseye Shutdown Rejected 14 seconds ago "mkdir /var/lib/docker: permission denied"
The services all fail with mkdir /var/lib/docker: permission denied. I suppose Swarm tries to start them as if the system were running rootful Docker, but this is a rootless installation.
I guess the question is: how do I get the Swarm node (which is the very same machine) to use the correct rootless Docker configuration, including the correct DOCKER_HOST, when launching the services?
I am unsure whether this is even supposed to work. I hear that overlay networks are not supported, but since I am only on one machine, I don't really need them. I do need Swarm for its usable implementation of secrets (compared to the mock implementation in Docker Compose); the sketch below shows the workflow I mean.
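A minimal sketch of that workflow (db_password is just an example name; the image is the one from my stack; secrets require Compose file format 3.1 or later):
$ printf 's3cret' | docker secret create db_password -
and in the stack file:
version: "3.1"
services:
  db:
    image: registry.example.com/db:v1.20.2
    secrets:
      - db_password
secrets:
  db_password:
    external: true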
Note that I have the same setup with Docker running in (normal) rootful mode, and there, all services can be started. It's therefore not an issue with the Docker stack file itself.
More details with docker info:
Client:
Context: default
Debug Mode: false
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 12
Server Version: 20.10.13
Storage Driver: fuse-overlayfs
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: hpzsmez48acse9yo1frnx37fo
Is Manager: true
ClusterID: zkv7wsoun193kyvbxe1k3hdph
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 127.0.0.1
Manager Addresses:
127.0.0.1:2377
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
runc version: v1.0.3-0-gf46b6ba2
init version: de40ad0
Security Options:
seccomp
Profile: default
rootless
cgroupns
Kernel Version: 5.10.0-13-amd64
Operating System: Debian GNU/Linux 11 (bullseye)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.936GiB
Name: bullseye
ID: 3R5P:2UV6:FIP4:UIJV:TDNQ:35DT:DEDI:SMGN:FDUY:JSWO:FRU6:O2HF
Docker Root Dir: /home/vagrant/.local/share/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
WARNING: No cpuset support
WARNING: No io.weight support
WARNING: No io.weight (per device) support
WARNING: No io.max (rbps) support
WARNING: No io.max (wbps) support
WARNING: No io.max (riops) support
WARNING: No io.max (wiops) support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

The solution is simple: rootless Docker does not work with Docker Swarm. You can have one or the other, but not both; Swarm mode is among the documented limitations of rootless mode, as is the overlay networking that Swarm relies on.
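If you need both on the same machine, the practical workaround is to keep a rootful daemon for Swarm and point the client at it explicitly for stack operations. A minimal sketch, assuming the default rootful socket at /var/run/docker.sock is available:
$ DOCKER_HOST=unix:///var/run/docker.sock docker stack deploy -c docker-stack.yml demo-stack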

Related

How to list only images located in a specific, private registry

I'm having problems getting a listing of images from a specific registry that I've set up on a local server. Or maybe I'm having issues publishing them to that registry in the first place; as this is my first adventure into Docker registries, I may just be confused by the terminology.
There's an old question, here, that looks similar to what I want to achieve, but Docker appears to have gained built-in support for this in the meantime, so the methods mentioned there are no longer relevant.
I have 2 servers (for the purpose of this question):
rancher-server: This server has a rancher:v2.6.0 container running and a registry:2 container.
k8s-server: This is just a freshly installed server, with the docker and kubernetes packages installed, that I want the rancher server to administer.
On k8s-server, I'm trying to spin up the Docker image rancher/rancher-agent:v2.6.0 with a few arguments that should let it hand control over to the Rancher server.
The trick is that all of this is required to work without internet access (there IS internet access currently, but this is a PoC for a task that must be air-gapped). For the purposes of this question, I really just want to be able to spin up Docker containers on k8s-server using the registry on rancher-server.
Currently, this is the state of rancher-server:
# docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b9a15ea00d5e registry:2 "/entrypoint.sh /e..." About an hour ago Up About an hour 0.0.0.0:5000->5000/tcp local-registry
1b6bc6b88a8e 08c9693b4357 "entrypoint.sh 08c..." 26 hours ago Up 2 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp goofy_minsky
# docker image ls --all (the list is big, this is just a sample):
REPOSITORY TAG IMAGE ID CREATED
rancher/rancher-agent v2.6.0 9c35a790aa16 2 weeks ago
rancher-server.example.com:5000/rancher/rancher-agent v2.6.0 9c35a790aa16 2 weeks ago
# docker info
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 225
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: /usr/libexec/docker/docker-init-current
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: 66aedde759f33c190954815fb765eedc1d782dd9 (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: fec3683b971d9c3ef73f284f176672c44b448662 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
WARNING: You're not using the default seccomp profile
Profile: /etc/docker/seccomp.json
selinux
Kernel Version: 3.10.0-1160.41.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 2
Total Memory: 3.701 GiB
Name: rancher-server
ID: SA2T:G2IA:CGER:6BC5:HIV2:4T6T:LF3Q:2YVS:SYU7:SQ5V:ACUS:BMEX
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
rancher-server.example.com:5000
127.0.0.0/8
Live Restore Enabled: false
Registries: docker.io (secure)
On the k8s-server, I try to list the contents of that registry:
# docker image ls --all rancher-server.example.com:5000
REPOSITORY TAG IMAGE ID CREATED SIZE
# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: /usr/libexec/docker/docker-init-current
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: 66aedde759f33c190954815fb765eedc1d782dd9 (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: fec3683b971d9c3ef73f284f176672c44b448662 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
WARNING: You're not using the default seccomp profile
Profile: /etc/docker/seccomp.json
selinux
Kernel Version: 3.10.0-1160.41.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 2
Total Memory: 3.701 GiB
Name: k8s-server
ID: QETJ:QSPQ:VS36:OOOA:ZPYL:CDHK:AJ5G:N4BD:ZQUH:UL6O:PHAB:5UOE
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
rancher-server.example.com:5000
127.0.0.0/8
Live Restore Enabled: false
Registries: docker.io (secure)
I had to jump through a few hoops to get this far in the first place, for example marking the registry as insecure in /etc/docker/daemon.json on k8s-server (shown below) and disabling SELinux on rancher-server.
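For reference, that daemon.json entry looks like this (dockerd must be restarted after changing it):
{
  "insecure-registries": ["rancher-server.example.com:5000"]
}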
I've tried to docker login rancher-server.example.com:5000 first, but that made no difference. It looks to me as if k8s-server is configured correctly but the images on rancher-server haven't been tagged/pushed properly; yet when I look back at the registry, I don't know what I would do differently, and as far as I understand the registry, it looks fine to me.
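For completeness, this is roughly how the images were tagged and pushed in the first place (reconstructed from the image list above, so treat it as a sketch):
# docker tag rancher/rancher-agent:v2.6.0 rancher-server.example.com:5000/rancher/rancher-agent:v2.6.0
# docker push rancher-server.example.com:5000/rancher/rancher-agent:v2.6.0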
I've changed the server names for anonymity and the output has been lightly edited for presentation.
EDIT:
I think I found a clue to what's happening here. It turns out that I can actually run images from this registry remotely just fine; I just have no way to discover their names. If I do a docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher-server.example.com:5000/rancher/rancher-agent:v2.6.0 --server https://rancher-server.example.com:5000 --token <token> --ca-checksum <ca-checksum> --etcd --controlplane it actually pulls and runs the container. So the registry itself seems fine, but maybe the index isn't?
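As for discovering the names: docker image ls only lists images in the local daemon's store; it cannot enumerate a remote registry. The registry itself exposes the standard Registry v2 HTTP API for that (the JSON shown is illustrative):
# curl http://rancher-server.example.com:5000/v2/_catalog
{"repositories":["rancher/rancher-agent"]}
# curl http://rancher-server.example.com:5000/v2/rancher/rancher-agent/tags/list
{"name":"rancher/rancher-agent","tags":["v2.6.0"]}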

br_netfilter error when deploying docker containers to swarm on ubuntu 20.04

I've been struggling to deploy my containers to Docker swarm on Ubuntu server 20.04.
I'm trying to use Docker swarm on a single VPS host for zero-downtime deployments.
Running the containers with docker-compose, everything works.
Now I'm trying to deploy the same docker-compose file to Docker swarm.
# docker swarm init
Swarm initialized: current node (wlshyv0s1n5c85mao8jt9wo5j) is now a manager.
To add a worker to this swarm, run the following command:
...
# docker stack deploy --compose-file docker-compose.yml dash
Ignoring unsupported options: build
Creating network dash_default
Creating service dash_db
Creating service dash_nginx
...
After the deploy command finishes, docker ps shows no running containers.
Checking with docker ps -a, I see a lot of containers, and all their statuses say "Created".
When I inspect one of those containers, its state shows:
"State": {
"Status": "created",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 128,
"Error": "error creating external connectivity network: cannot restrict inter-container communication: please ensure that br_netfilter kernel module is loaded",
"StartedAt": "0001-01-01T00:00:00Z",
"FinishedAt": "0001-01-01T00:00:00Z"
}
Checking for loaded modules:
# lsmod | grep br_netfilter
br_netfilter 4242 -2
bridge 4242 -2 br_netfilter,ebtable_broute
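The negative use count above already looks suspicious. If br_netfilter were genuinely loaded, its sysctls should be visible with:
# ls /proc/sys/net/bridge/
which should list bridge-nf-call-iptables and bridge-nf-call-ip6tables, among others; the sysctl attempt further below shows they're missing here.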
After running docker info i saw 2 warnings:
# docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 40
Server Version: 20.10.5
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: ij25ein3xvcr8p5ky765ol8t0
Is Manager: true
ClusterID: mdb2r7vnngw62lg8uoj5ef55k
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
...
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 5.4.0
Operating System: Ubuntu 20.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 4GiB
...
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Searching for a solution, I found that I should run the following sysctl command, but I still get an error.
# sysctl net.bridge.bridge-nf-call-ip6tables=1
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
Searching for a solution to that, I found the following command, but it does not work either.
# modprobe br_netfilter
modprobe: FATAL: Module br_netfilter not found in directory /lib/modules/5.4.0
I don't know what else to do to make swarm work.
Everything works on my Windows machine in swarm mode.
Any suggestions on what I should do or check next?
The problem turned out to be the hosting provider.
The provider told us that other customers had tried to configure Docker Swarm on their VPSes too, but no one had figured out how to get it to work.
The provider didn't allow kernel modifications or anything else at that low a level. In hindsight, the bare Kernel Version: 5.4.0 (stock Ubuntu kernels carry a suffix like -42-generic) and the missing /lib/modules/5.4.0 directory were the giveaway that the kernel was supplied by the provider.
We are now using another hosting provider and everything works fine.
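If you hit the same wall, a quick way to tell whether a VPS is container-virtualized (i.e. it shares the provider's kernel, so you cannot load modules yourself) is systemd-detect-virt; typical answers include kvm, lxc, openvz, or none (the output below is illustrative):
$ systemd-detect-virt
lxc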

Cannot exec to a running container

After starting a container with docker run -d --name nginx nginx, I cannot use docker exec on it, e.g. docker exec nginx echo 123.
I'm receiving an error:
ERRO[2018-08-19T11:09:10.909894729+03:00] stream copy error: reading from a closed fifo
ERRO[2018-08-19T11:09:10.909988081+03:00] stream copy error: reading from a closed fifo
ERRO[2018-08-19T11:09:10.931102317+03:00] Error running exec 19c6ae3c5d796180e02577f037f6a1bd1453b70393098643719dea3537933ae2 in container: OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "process_linux.go:86: executing setns process caused \"exit status 22\"": unknown
OS: ubuntu 14.04
Kernel: 3.13.0-153-generic
Docker: Docker version 18.06.0-ce, build 0ffa825
Docker Info:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 1
Server Version: 18.06.0-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/165536.165536/aufs
Backing Filesystem: extfs
Dirs: 5
Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
apparmor
userns
Kernel Version: 3.13.0-153-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.86GiB
Name: **************
ID: OL25:ISXX:RWR7:EY76:OQ6O:XLWG:ETWJ:FV2A:MC6A:ROP7:6DWD:DJX4
Docker Root Dir: /var/lib/docker/165536.165536
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Thanks!
That can happen when the image uses ENTRYPOINT instead of CMD. Check your image/container with docker inspect. Your command-line argument becomes the CMD argument to the ENTRYPOINT.
https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact
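To see what your image actually sets, both fields can be printed with an inspect template; for example, against the nginx container from the question (the exact values depend on the image version):
$ docker inspect --format 'ENTRYPOINT={{.Config.Entrypoint}} CMD={{.Config.Cmd}}' nginx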
I could reproduce this issue whenever I ran docker run -it opensuse/leap followed by the exit command. The container was actually stopped after exit, but still showed as running in docker ps.
Solution: restart the Docker daemon and then try running your containers again. Once restarted, stopped containers no longer show a running status.
Command: service docker restart
This worked in my case.
Please update your kernel. Although Docker should work with most 3.10+ kernels, there are often low-level issues with older kernels. See also https://github.com/moby/moby/issues/36084#issuecomment-364886573 for a seemingly identical issue with a working solution:
updated to HWE ( 4.13.0-32-generic) and exec works again, however keep in mind that stock 16.04 uses 4.4.0 kernels - there should some kind of warning (at least) that specific versions combination will not work
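On Ubuntu, the HWE kernels come from the standard repositories; the package names below are examples for 14.04 (the asker's release) and 16.04 respectively:
$ sudo apt-get install --install-recommends linux-generic-lts-xenial
$ sudo apt-get install --install-recommends linux-generic-hwe-16.04
$ sudo reboot
After the reboot, verify the new version with uname -r.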

Can't recover docker swarm from pending state

There was a crash, and now I have this issue where docker swarm status says pending and the node status is UNKNOWN. This is my docker info result:
swarm#swarm-manager-1:~$ docker info
Containers: 270
Running: 0
Paused: 0
Stopped: 270
Images: 160
Server Version: 1.12.2
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 1211
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host bridge null overlay
Swarm: pending
NodeID: d9hq8wzz6skh9pzrxzhbckm97
Is Manager: true
ClusterID: 5zgab5w50qgvvep35eqcbote2
Managers: 1
Nodes: 2
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: HIDDEN
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-91-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 6.804 GiB
Name: swarm-manager-1
ID: AXPO:VFSV:TDT3:6X7Y:QNAO:OZJN:U23R:V5S2:FU33:WUNI:CRPK:2E2C
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
This is my docker node ls result:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
9tlo3rg7tuc23xzc3am28lak1 swarm-worker-1 Unknown Active
d9hq8wzz6skh9pzrxzhbckm97 * swarm-manager-1 Unknown Active Leader
I've tried restarting the Docker engine and the VM, but it doesn't help in any way. The system is actually running: when I run docker ps on the worker it shows all the containers, but on the manager docker ps shows nothing.
Any idea?
In my experience with Swarm, the only solution to similar trouble was to destroy the swarm. When you do this, you should probably also run docker system prune (only if there's nothing valuable that could be deleted) and service docker restart, and then set up a new swarm.
It sucks, I know.
Instead of just rebuilding the whole swarm all at once, you can attempt to remove and re-add each node one at a time - the advantage of this is that the swarm state is not destroyed and, on larger swarms, services can continue while you fix it. This process is considerably more complicated when you don't have a quorum of managers, though.
First, note the node IDs (I'll refer to here as $WORKER_ID and $MANAGER_ID).
On manager node:
docker node update --availability drain $WORKER_ID
^ This is optional, but it's a good habit when working with live services on a swarm.
docker swarm join-token manager
^ This command will give you the join command to run on each node after it's removed. I'll refer to it as $JOIN_COMMAND below. We will demote the worker once the manager re-joins.
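For reference, the join-token output has this shape (token and address redacted here):
To add a manager to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377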
On worker:
docker swarm leave
$JOIN_COMMAND
This node is now re-joined as a manager, but I'll continue calling it the 'worker' to avoid confusion.
On manager:
docker node rm $WORKER_ID
docker node update --availability drain $MANAGER_ID
docker swarm leave -f
$JOIN_COMMAND
docker node rm $MANAGER_ID
docker node ls
Find the worker's new id (pay attention to the hostname, not the role) -> $NEW_WORKER_ID
docker node demote $NEW_WORKER_ID
Your swarm should be refreshed - if there were more nodes, the services running on each would have migrated across the swarm when you drained each node.
If it still doesn't work (and regardless), you really should consider upgrading to docker v17.06 or newer. Swarm networking was very unstable before that, causing a lot of issues stemming from race conditions.

Running Docker Compose on Docker Swarm

I've started a Docker swarm manager with:
docker swarm init --advertise-addr <MANAGER-IP>
So I'm trying to point my shell at the swarm manager via:
eval $(docker-machine env --swarm <MANAGER-IP>)
but it's giving me an error: Host does not exist
docker info:
-bash-4.2$ docker info
Containers: 18
Running: 1
Paused: 0
Stopped: 17
Images: 20
Server Version: 1.12.0
Storage Driver: devicemapper
Pool Name: docker-253:1-25646-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 3.124 GB
Data Space Total: 107.4 GB
Data Space Available: 13.4 GB
Metadata Space Used: 5.071 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.142 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2016-06-09)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host bridge null overlay
Swarm: active
NodeID: 05szzy2z96ypgl5k21swggoil
Is Manager: true
ClusterID: a2wrfuga2tu4cm4k0lxxorqtm
Managers: 1
Nodes: 3
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot interval: 10000
Heartbeat tick: 1
Election tick: 3
Dispatcher:
Heartbeat period: 5 seconds
CA configuration:
Expiry duration: 3 months
Node Address: 10.193.46.89
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-327.28.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.51 GiB
Name: scsor0004331002.rtp.openenglab.netapp.com
ID: T52U:6MWQ:XEDM:2TGH:ITLQ:YD6B:R3MR:MWF5:CFBM:G6PX:W4LG:6SR7
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: eugenepark3
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
127.0.0.0/8
Does anyone know what I need to put in eval $(docker-machine env --swarm <MANAGER-IP>) so my Compose project can run on the swarm cluster?
I'm supposed to put the manager name, but I don't know how to find it.
-bash-4.2$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
05szzy2z96ypgl5k21swggoil * scsor0004331002.rtp.openenglab.netapp.com Ready Active Leader
59t110b0wjhitj1fr8erys600 scsor0004331003.rtp.openenglab.netapp.com Ready Active
dhm6utu2w3dw1to0zh3n71moq scsor0004331004.rtp.openenglab.netapp.com Ready Active
You're mixing up the container-based swarm commands with the newer swarmkit-based Swarm that's embedded directly in the Docker CLI. With the new version of Swarm, docker-compose isn't directly supported yet. Consider it a beta product that works well for a limited scope. You can try the experimental release of the Docker engine, which adds support for DAB files managed with the docker stack CLI. DAB files are exported from docker-compose bundle and then imported into Docker. This feature is still very experimental and expected to change.
Also note that docker-machine env takes the name of a machine created and managed by docker-machine (as listed by docker-machine ls), not an IP address; that's why it reports that the host does not exist.
Without that, anything with docker-compose will only operate on a single Docker engine, since swarm access is all done under the separate docker service CLI interface.
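For what it's worth, the DAB experiment was short-lived: from Docker 1.13 onward, docker stack deploy accepts a version 3 Compose file directly, which became the supported workflow:
$ docker stack deploy --compose-file docker-compose.yml mystack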
