minikube ip returns 127.0.0.1 | Kubernetes NodePort service not accessible - docker

I have two Kubernetes objects:
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  containers:
    - name: client
      image: stephengrider/multi-client
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
        - containerPort: 3000

apiVersion: v1
kind: Service
metadata:
  name: client-node-port
spec:
  type: NodePort
  selector:
    component: web
  ports:
    - port: 3050
      targetPort: 3000
      nodePort: 31515
I applied both using kubectl apply -f <file_name>. After that, here is the output of
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
client-node-port NodePort 10.100.230.224 <none> 3050:31515/TCP 30m
and the pod output:
NAME READY STATUS RESTARTS AGE
client-pod 1/1 Running 0 28m
But when I run minikube ip it returns 127.0.0.1.
I'm using minikube with the Docker driver.
After following this issue https://github.com/kubernetes/minikube/issues/7344,
I got the node IP using:
kubectl get node -o json |
jq --raw-output \
'.items[0].status.addresses[]
| select(.type == "InternalIP")
.address
'
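(An equivalent one-liner using kubectl's built-in jsonpath output, added here purely for illustration and not taken from the linked issue, would be:)
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'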
But even then I am not able to access the service. After more searching I found
minikube service --url client-node-port
🏃 Starting tunnel for service client-node-port.
|-----------|------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------------------|-------------|------------------------|
| default | client-node-port | | http://127.0.0.1:52694 |
|-----------|------------------|-------------|------------------------|
http://127.0.0.1:52694
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
I can access the service using minikube service.
Question:
But I want to know why the exposed nodePort didn't work.
Why did I have to do this workaround to access the application?
More Information:
minikube version
minikube version: v1.10.1
commit: 63ab801ac27e5742ae442ce36dff7877dcccb278
docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:21:11 2020
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:29:16 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
If you need more info, I'm willing to provide it.
minikube ssh
docker@minikube:~$ ip -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0
valid_lft forever preferred_lft forever
945: eth0@if946: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever

I had the same problem. The issue is not with the IP 127.0.0.1. The issue was that I was calling the port I had defined in the YAML file for nodePort. It looks like minikube will assign a different port for external access.
The way I did it:
List all services in a nicely formatted table:
$ minikube service list
Show the IP and external port:
$ minikube service Type-Your-Service-Name
If you do that, minikube will open the browser and run your app.
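For the service from the question that would look roughly like this (the tunnel URL and port minikube prints differ on every machine):
$ minikube service list
$ minikube service client-node-port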

This command will help:
minikube service --url $SERVICE
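Another option with the Docker driver, not part of this answer but worth noting, is kubectl port-forward, which forwards a local port to the Service port (3050 in the question's manifest):
kubectl port-forward service/client-node-port 3050:3050
curl http://localhost:3050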

I had the same problem.
Download and install VirtualBox (VirtualBox.org)
Install minikube
brew reinstall minikube (if already installed)
minikube start --vm-driver=virtualbox
minikube ip (this will return the node IP)
That IP can be opened in the browser to run your app.
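With the VirtualBox driver the node IP is routable from the host, so the NodePort from the question should respond directly (a sketch, assuming the nodePort 31515 defined above):
curl http://$(minikube ip):31515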

Related

Attaching a second network to a Docker NGINX container causes it to stop responding to any of them

I've been trying to set up what might be a rather complicated Docker setup, and have run into a very weird issue. What I currently have is a collection of containers, all running different web services, and an Nginx container that routes them to be publicly accessible over HTTPS. This has worked fine, but it meant I could only set up services that use HTTPS, and it all ran over one of the 5 static IPs my ISP has given me, routed through my UniFi network.
When I went to add GitLab, I realized I needed to connect it to a separate public address, so that I could access port 22 for SSH-based Git clones. Since I already had the switch port connected to my modem on a VLAN (topology weirdness, it works fine), I simply tagged the server port to allow that VLAN through, and started using a macvlan network.
As soon as I added the macvlan to my Nginx container, it stopped working altogether. After spending several hours making sure my static IPs were actually set up correctly, I found out that if I attach more than one network to my Nginx container, it stops responding to anything at all. If I attach just the macvlan, it responds just fine, even over my static IP. But if there is more than one network, everything stops working: pings, TCP requests, everything. If I use docker network disconnect to remove the network from the running instance, it starts working again immediately.
I've tried this with just netcat on an Alpine instance, and can confirm that all inbound traffic stops immediately when a second network is attached, and resumes as soon as it's removed. I'm including a sample docker-compose file that shows this effect just by adding or removing the networks.
docker version:
Client: Docker Engine - Community
Version: 20.10.13
API version: 1.41
Go version: go1.16.15
Git commit: a224086
Built: Thu Mar 10 14:07:51 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.13
API version: 1.41 (minimum version 1.12)
Go version: go1.16.15
Git commit: 906f57f
Built: Thu Mar 10 14:05:44 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.10
GitCommit: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
runc:
Version: 1.0.3
GitCommit: v1.0.3-0-gf46b6ba
docker-init:
Version: 0.19.0
GitCommit: de40ad0
docker info:
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Docker Buildx (Docker Inc., v0.8.0-docker)
compose: Docker Compose (Docker Inc., v2.2.3)
scan: Docker Scan (Docker Inc., v0.12.0)
Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 9
Server Version: 20.10.13
Storage Driver: zfs
Zpool: Storage
Zpool Health: ONLINE
Parent Dataset: Storage/docker
Space Used By Parent: 87704957952
Space Available: 8778335683049
Parent Quota: no
Compression: off
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux nvidia runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
runc version: v1.0.3-0-gf46b6ba
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.4.0-104-generic
Operating System: Ubuntu 20.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 12
Total Memory: 39.18GiB
Name: server2
ID: <Redacted>
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
EDIT: forgot to add the docker compose file. Here it is:
services:
  nginx:
    image: nginx:1.21.6-alpine
    networks:
      public_interface:
        ipv4_address: 123.456.789.102 # Replaced with nonsense for privacy reasons
      private_interface:
        ipv4_address: 192.168.5.2
      web_interface:

networks:
  web_interface:
  public_interface:
    driver: macvlan
    driver_opts:
      parent: enp10s0.100
    ipam:
      config:
        - subnet: 123.456.789.101/29 # Replaced with nonsense for privacy reasons
          gateway: 123.456.789.108 # Replaced with nonsense for privacy reasons
  private_interface:
    driver: macvlan
    driver_opts:
      parent: enp10s0.305
    ipam:
      config:
        - subnet: 192.168.5.0/24
          gateway: 192.168.5.1
Ok, time to answer this so I don't become the next #979. Turns out I was right about the routing: my issue lay not in Docker itself, but in how the network routing in the kernel works. I confirmed this by running an application without Docker (just a simple Python HTTP server) and testing, and I found the exact same issue.
The solution, it turns out, is to use a combination of routing tables, iptables, and packet marks. The first part depends on your network backend. I'm using Netplan, 'cause Ubuntu, which means I have to tell Netplan to set up the routing tables:
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp6: false
      gateway4: 192.168.1.1
    eth1:
      dhcp4: false
      dhcp6: false
      addresses:
        - 123.456.789.20/24 # Server address + subnet
      routes:
        - to: 0.0.0.0/0
          via: 123.456.789.1 # Gateway address
          metric: 500
          table: 100
      routing-policy:
        - from: 123.456.789.20 # Server address
          table: 100
If you're not using Docker, this patches everything nicely, and things "just work". If you are, you'll also need a packet mark, and you have to tell iptables to keep said mark when handing the packet to the Docker container. First, add a policy rule so that packets carrying mark 0x1 use table 100:
ip rule add fwmark 0x1 table 100
Followed by telling iptables to keep the marks:
iptables -t mangle -A PREROUTING -i eth1 -m conntrack --ctstate NEW --ctdir ORIGINAL -j CONNMARK --set-mark 0x1
iptables -t mangle -A PREROUTING -m conntrack ! --ctstate NEW --ctdir REPLY -m connmark ! --mark 0x0 -j CONNMARK --restore-mark
iptables -t mangle -A OUTPUT -m conntrack ! --ctstate NEW --ctdir REPLY -m connmark ! --mark 0x0 -j CONNMARK --restore-mark
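A quick sanity check that the policy rule and the mangle entries are in place (my addition, not part of the original write-up):
ip rule show
iptables -t mangle -L PREROUTING -v -n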
Hopefully that helps future docker users. It was certainly an experience.
I also wrote all of this up on my blog, along with a bit more detail of where things started, why I was in this pickle, and how I figured it out: https://wiki.faeranne.com/en/blogs/nexus-labs/docker-netplan-woes

Kubernetes HyperV Cluster Expose Service

TL;DR:
How do I connect to my Kubernetes cluster from my host machine, through Hyper-V and into the Kubernetes proxy (kube-proxy)?
So I have a Hyper-V setup with two Ubuntu 18.04.1 LTS servers. Identical setup.
One is a master:
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.0
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
The other a node:
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.0
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
It has these pods running by default:
kube-system coredns-78fcdf6894-6ld8l 1/1 Running 1 4h
kube-system coredns-78fcdf6894-ncp79 1/1 Running 1 4h
kube-system etcd-node1 1/1 Running 1 4h
kube-system kube-apiserver-node1 1/1 Running 1 4h
kube-system kube-controller-manager-node1 1/1 Running 1 4h
kube-system kube-proxy-942xh 1/1 Running 1 4h
kube-system kube-proxy-k6jl4 1/1 Running 1 4h
kube-system kube-scheduler-node1 1/1 Running 1 4h
kube-system kubernetes-dashboard-6948bdb78-9fbv8 1/1 Running 0 25m
kube-system weave-net-fzj8h 2/2 Running 2 3h
kube-system weave-net-s648g 2/2 Running 3 3h
These two nodes are exposed to my LAN via two IP addresses:
192.168.1.116
192.168.1.115
I've exposed my deployment:
service.yml:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort # internal cluster management
  ports:
    - port: 80 # container port
      nodePort: 30001 # outside port
      protocol: TCP
      targetPort: http
  selector:
    app: my-api
    tier: backend
List out:
$ kubectl get svc -o wide
my-service NodePort 10.105.166.48 <none> 80:30001/TCP 50m app=my-api,tier=backend
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h <none>
If I sit on my master node and curl the pod
$ kubectl get pods -o wide
my-api-86db46fc95-2d6wf 1/1 Running 0 22m 10.32.0.7 node2
$ curl 10.32.0.7:80/api/health
{"success": true}
My api is clearly up in the pods.
When I query the service IP
$ curl 10.105.166.48:80/api/health
OR
$ curl 10.105.166.48:30001/api/health
It just times out.
My network config for the master:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.116 netmask 255.255.255.0 broadcast 192.168.1.255
weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet 10.40.0.0 netmask 255.240.0.0 broadcast 10.47.255.255
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
My iptables rules just list everything as source anywhere / destination anywhere, with loads of references to KUBE and DOCKER.
I've even tried to set up the dashboard, to no avail...
Accessing the URL:
https://192.168.1.116:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Doing nslookup reveals no host name:
$ nslookup my-service
Server: 127.0.0.53
Address: 127.0.0.53#53
** server can't find eyemenu-api-service: SERVFAIL
To hit the NodePort 30001, you need to use your node's IP:
curl nodeip:30001/api/health
Pods inside the cluster don't know about the node port 30001.
The nodePort exposes the port on all worker nodes of the Kubernetes cluster, hence you can use either:
curl node1:30001/api/health or
curl node2:30001/api/health
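With the node addresses listed in the question, that would be, for example:
curl http://192.168.1.116:30001/api/health
curl http://192.168.1.115:30001/api/health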

Docker container is not available locally

Once I create the containers using docker-compose up -d, the containers are up and running, but they are not reachable locally (127.0.0.1).
I use the same project on another PC and it works there, so the docker-compose.yml is the same and it's working elsewhere.
~ → docker info
Containers: 6
Running: 1
Paused: 0
Stopped: 5
Images: 19
Server Version: 18.03.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.31-1-MANJARO
Operating System: Manjaro Linux
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 15.67GiB
Name: phantom
ID: JO4V:TAN5:64SP:5VRL:RUOQ:ZRTX:SUGL:T5NF:IXB7:YHS6:2CA6:3HCT
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Checking the network interfaces, they all seem to be properly set up:
~ → ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 2c:fd:a1:73:7e:38 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.120/24 brd 192.168.1.255 scope global dynamic noprefixroute enp6s0
valid_lft 80202sec preferred_lft 80202sec
3: br-0e93106ef232: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:74:c6:77:24 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-0e93106ef232
valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:40:ad:aa:5b brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
74: veth73892ae@if73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether ce:fd:5c:af:d2:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Looking at the iptables rules, it seems nothing is blocking the connection to the container.
Note: just to be sure it wasn't creating any conflict, I disabled IPv6, but nothing changed.
Here is the docker-compose.yml file:
version: "3.1"
services:
redis:
image: redis:alpine
container_name: proj-redis
rabbitmq:
image: rabbitmq:alpine
container_name: proj-rabbitmq
mysql:
image: mysql:8.0
container_name: proj-mysql
working_dir: /application
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=database
- MYSQL_USER=database
- MYSQL_PASSWORD=database
webserver:
image: nginx:alpine
container_name: proj-webserver
working_dir: /application
volumes:
- ./htdoc:/application
- ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "80:80"
- "9003:9003" # xDebug
- "15672:15672" # RabbitMQ
links:
- php-fpm
php-fpm:
build:
context: .
dockerfile: phpdocker/php-fpm/Dockerfile
container_name: proj-php-fpm
working_dir: /application
environment:
XDEBUG_CONFIG: "remote_host=172.21.0.1"
PHP_IDE_CONFIG: "serverName=dev.local"
volumes:
- ./htdoc:/application
- ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.0/fpm/conf.d/99-overrides.ini
links:
- mysql
- rabbitmq
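A quick way to check whether the published ports are actually bound on the host would be something like this (a diagnostic sketch, not from the original post):
docker-compose ps
ss -tlnp | grep -E ':(80|9003|15672)'
curl -v http://127.0.0.1:80/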

How to correctly set up docker network to use localhost connection?

I have two services. Service A calls service B like this:
HttpGet request = new HttpGet("http://127.0.0.1:8083/getTest");
HttpResponse httpResponse = httpClient.execute(request);
I have the error:
There was an unexpected error (type=Internal Server Error, status=500).
Connect to 127.0.0.1:8083 [/127.0.0.1] failed: Connection refused: connect
This is docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2a8eaf08881a service_A "./gradlew bootRun" 5 seconds ago Up 4 seconds 0.0.0.0:80->80/tcp service_A
b7436a77e438 service_B "go-wrapper run" About an hour ago Up 4 seconds 0.0.0.0:8083->8083/tcp service_B
I created docker network:
docker network create webproxy
My docker-compose.yml is:
version: '3'
services:
  service_A:
    container_name: service_A
    build: ./service_A
    hostname: service_A
    restart: always
    ports:
      - "8083:8083"
  service_B:
    container_name: service_B
    build: ./service_B
    hostname: service_B
    restart: always
    ports:
      - "80:80"
networks:
  default:
    external:
      name: webproxy
This is ip addr show eth0 from the containers:
project$ docker exec -it 2a8eaf08881a ip addr show eth0
68: eth0@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.19.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
project$ docker exec -it b7436a77e438 ip addr show eth0
66: eth0@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:13:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.19.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
This is docker network information:
project$ docker network ls
NETWORK ID NAME DRIVER SCOPE
9fcac7636448 bridge bridge local
83a0f3fe901d host host local
215ab1608f91 none null local
95909545832d predictor_default bridge local
be19665e791d webproxy bridge local
Also, I can ping 172.19.0.3 from the containers.
How do I correctly communicate between service A and service B?
In docker-compose.yml, add a networks field to each service:
version: '3'
services:
  service_A:
    networks:
      - webproxy
  service_B:
    networks:
      - webproxy
networks:
  webproxy:
    driver: bridge
Then you can use service names to send requests
HttpGet request = new HttpGet("http://service_A:8083/getTest");
HttpResponse httpResponse = httpClient.execute(request);
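To sanity-check name resolution on the shared network (my addition; getent is available in most base images, ping often is not):
docker exec -it service_A getent hosts service_B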
https://stackoverflow.com/users/1125714/tj-biddle has given the working answer:
HttpGet request = new HttpGet("http://service_A:8083/getTest");
HttpResponse httpResponse = httpClient.execute(request);

docker - Error response from daemon: rpc error: code = 2 desc = name conflicts with an existing object

While creating a docker service, I'm facing the following error: Error response from daemon: rpc error: code = 2 desc = name conflicts with an existing object
Steps
docker-machine create --driver virtualbox swarm-1
docker-machine create --driver virtualbox swarm-2
docker-machine create --driver virtualbox swarm-3
eval $(docker-machine env swarm-1)
docker swarm init --advertise-addr $(docker-machine ip swarm-1)
docker-machine ssh swarm-2
docker swarm join <token> and IP
docker-machine ssh swarm-3
docker swarm join <token> and IP
docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
hdip26vwi9xvr131u1rr7yeia swarm-3 Ready Active
v7e56wf0j7fhkarnqsp5c32qo swarm-2 Ready Active
yjv3r4r4ls4qx47jnm0yov06u * swarm-1 Ready Active Leader
docker network create --driver overlay webnet
docker service create --name redisdb --network webnet --replicas 1 redis
Error response from daemon: rpc error: code = 2 desc = name conflicts with an existing object
I tried
docker service create --name redisdb --network webnet --replicas 1 redis:alpine
docker service create --name redisdb --network webnet --replicas 1 redis:alpine
docker service create --name redisdb --network webnet --replicas 1 rlesouef/alpine-redis
They didn't work.
Any suggestions?
Adding additional information:
docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.13.1
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 0
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: p5bao7gz89hghllnykw8phaek
Is Manager: true
ClusterID: rn5xgfioygwp1b91gfm5znd7v
Managers: 1
Nodes: 3
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 192.168.99.100
Manager Addresses:
192.168.99.100:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1
runc version: 9df8b306d01f59d3a8029be411de015b7304dd8f
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.4.47-boot2docker
Operating System: Boot2Docker 1.13.1 (TCL 7.2); HEAD : b7f6033 - Wed Feb 8 20:31:48 UTC 2017
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 995.8 MiB
Name: swarm-1
ID: JGLZ:XY2M:TTZX:DIT7:QCMX:DCNO:6BR4:IJVM:HOQ7:N3Y6:YGNG:LBD4
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 41
Goroutines: 191
System Time: 2017-02-13T18:28:57.184074564Z
EventsListeners: 0
Username: pranaysankpal
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Please suggest.
I encountered the same issue.
I solved it by doing the following:
1) Fetch the list of services by running: sudo docker service ls.
You are supposed to see the service you're trying to create (redisdb).
2) Take the ID shown next to the redisdb service in the list.
3) Run: sudo docker service rm ID
4) Now try to run the create command once again.
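In terms of commands, that amounts to something like the following (docker service rm also accepts the service name in place of the ID):
sudo docker service ls
sudo docker service rm redisdb
sudo docker service create --name redisdb --network webnet --replicas 1 redis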
Hope that helps
