Etcd cluster setup failure - docker

I am trying to set up a 3-node etcd cluster on Ubuntu machines as a Docker datastore for networking. I successfully created an etcd cluster using the etcd Docker image. Now, when I try to replicate it, the steps fail on one node. Even after removing the failing node from the setup, the cluster still looks for the removed node. I face the same error when using the etcd binary.
I used the following command on all nodes, changing the IP accordingly:
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
--name etcd quay.io/coreos/etcd \
-name etcd0 \
-advertise-client-urls http://172.27.59.141:2379,http://172.27.59.141:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
-initial-advertise-peer-urls http://172.27.59.141:2380 \
-listen-peer-urls http://0.0.0.0:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster etcd0=http://172.27.59.141:2380,etcd1=http://172.27.59.244:2380,etcd2=http://172.27.59.232:2380 \
-initial-cluster-state new
Two of the nodes connect properly, but the service on the third node stops. Following is the log of the third node.
2016-06-16 17:16:34.293248 I | etcdmain: etcd Version: 2.3.6
2016-06-16 17:16:34.294368 I | etcdmain: Git SHA: 128344c
2016-06-16 17:16:34.294584 I | etcdmain: Go Version: go1.6.2
2016-06-16 17:16:34.294781 I | etcdmain: Go OS/Arch: linux/amd64
2016-06-16 17:16:34.294962 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2016-06-16 17:16:34.295142 W | etcdmain: no data-dir provided, using default data-dir ./node2.etcd
2016-06-16 17:16:34.295438 I | etcdmain: listening for peers on http://0.0.0.0:2380
2016-06-16 17:16:34.295654 I | etcdmain: listening for client requests on http://0.0.0.0:2379
2016-06-16 17:16:34.295846 I | etcdmain: listening for client requests on http://0.0.0.0:4001
2016-06-16 17:16:34.296193 I | etcdmain: stopping listening for client requests on http://0.0.0.0:4001
2016-06-16 17:16:34.301139 I | etcdmain: stopping listening for client requests on http://0.0.0.0:2379
2016-06-16 17:16:34.301454 I | etcdmain: stopping listening for peers on http://0.0.0.0:2380
2016-06-16 17:16:34.301718 I | etcdmain: --initial-cluster must include node2=http://172.27.59.232:2380 given --initial-advertise-peer-urls=http://172.27.59.232:2380
Even after removing the failing node, I can see that the two remaining nodes are still waiting for the third node to connect.
2016-06-16 17:16:12.063893 N | etcdserver: added member 17879927ec74147b [http://172.27.59.232:238] to cluster ba4424e006edb53e
2016-06-16 17:16:12.064431 N | etcdserver: added local member 24d9feabb7e2f26f [http://172.27.59.244:2380] to cluster ba4424e006edb53e
2016-06-16 17:16:12.065229 N | etcdserver: added member 2bda70be57138cfe [http://172.27.59.141:2380] to cluster ba4424e006edb53e
2016-06-16 17:16:12.218560 I | raft: 24d9feabb7e2f26f [term: 1] received a MsgVote message with higher term from 2bda70be57138cfe [term: 29]
2016-06-16 17:16:12.218964 I | raft: 24d9feabb7e2f26f became follower at term 29
2016-06-16 17:16:12.219276 I | raft: 24d9feabb7e2f26f [logterm: 1, index: 3, vote: 0] voted for 2bda70be57138cfe [logterm: 1, index: 3] at term 29
2016-06-16 17:16:12.222667 I | raft: raft.node: 24d9feabb7e2f26f elected leader 2bda70be57138cfe at term 29
2016-06-16 17:16:12.335904 I | etcdserver: published {Name:node1 ClientURLs:[http://172.27.59.244:2379 http://172.27.59.244:4001]} to cluster ba4424e006edb53e
2016-06-16 17:16:12.336459 N | etcdserver: set the initial cluster version to 2.2
2016-06-16 17:16:42.059177 W | rafthttp: the connection to peer 17879927ec74147b is unhealthy
2016-06-16 17:17:12.060313 W | rafthttp: the connection to peer 17879927ec74147b is unhealthy
2016-06-16 17:17:42.060986 W | rafthttp: the connection to peer 17879927ec74147b is unhealthy
It can be seen that despite starting the cluster with two nodes, it is still searching for the third node.
Is there a location on the local disk where data is being saved, so that old data is picked up even though it was not provided?
Please suggest what I am missing.

Is there a location on the local disk where data is being saved, so that old data is picked up even though it was not provided?
Yes, the membership data is already stored in node0.etcd and node1.etcd.
You can see the following message in the log, which indicates that the server already belongs to a cluster:
etcdmain: the server is already initialized as member before, starting as etcd member...
In order to run a new cluster with two members, just add another argument to your command so that etcd starts from a fresh data directory:
--data-dir bak
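For example, the two remaining members could be brought up like this (a sketch based on the command from the question; the data directory name bak and the changed cluster token are illustrative assumptions):

```shell
# Restart the surviving members with a two-member cluster definition and a
# fresh data directory, so the stale three-member state is not reloaded.
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs \
  -p 4001:4001 -p 2380:2380 -p 2379:2379 \
  --name etcd quay.io/coreos/etcd \
  -name etcd0 \
  -advertise-client-urls http://172.27.59.141:2379,http://172.27.59.141:4001 \
  -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
  -initial-advertise-peer-urls http://172.27.59.141:2380 \
  -listen-peer-urls http://0.0.0.0:2380 \
  -initial-cluster-token etcd-cluster-2 \
  -initial-cluster etcd0=http://172.27.59.141:2380,etcd1=http://172.27.59.244:2380 \
  -initial-cluster-state new \
  -data-dir bak
```

Changing the -initial-cluster-token (here to etcd-cluster-2) is optional, but it keeps the new cluster from accepting traffic from leftovers of the old one.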

Related

Not able to add a Node in VerneMQ Cluster

I have Ubuntu 20.04.4
I am new to VerneMQ and I was trying to set up a 3-node cluster.
I was successfully able to form a cluster of 2 nodes, but when I try to join the 3rd node it shows done. However, when I type the command sudo vmq-admin cluster show, the output is
+-------------------------+---------+
| Node | Running |
+-------------------------+---------+
| VerneMQ#192.168.41.17 | true |
+-------------------------+---------+
| VerneMQTR#192.168.41.20 | true |
+-------------------------+---------+
It only shows 2 nodes, even when I check the status in the GUI on the web.
It should show a cluster size of 3, as it has 3 nodes.

Docker on Windows: Error starting protocol stack: listen unix /root/.ethereum/geth.ipc: bind: operation not permitted

On a Windows 10 system, I am trying to run a Docker container running geth, which listens on port 8545. This docker-compose.yml has been tested to run perfectly on both Ubuntu and Mac OS X.
docker-compose version 1.21.1, build 7641a569 is being used on the Windows 10 system.
Problem: Docker throws an error after executing docker-compose up.
Fatal: Error starting protocol stack: listen unix /root/.ethereum/geth.ipc: bind: operation not permitted
What might be causing this error, and how can we solve it?
docker-compose.yml
version: '3'
services:
  geth:
    image: ethereum/client-go:latest
    volumes:
      - ./nodedata:/root/.ethereum
      - ./files/genesis.json:/root/genesis.json:ro
    ports:
      - 30303:30303
      - "30303:30303/udp"
      - 8545:8545
      - 8546:8546
    command: --networkid 1337 --cache 512 --port 30303 --maxpeers 50 --rpc --rpcaddr "0.0.0.0" --rpcapi "eth,personal,web3,net" --bootnodes enode://0b37f58139bef9fef04ff50c1d2d95acade0b6989433ed2148683f294a12e8ca7eb17915864a0dd61d5533e898b7040b75df1a17cca27e90d106f95dea255b45@167.99.55.99:30303
    container_name: geth-nosw
Output after running docker-compose up
Starting geth-node ... done
Attaching to geth-node
geth-node | INFO [07-22|20:43:11.482] Maximum peer count ETH=50 LES=0 total=50
geth-node | INFO [07-22|20:43:11.488] Starting peer-to-peer node instance=Geth/v1.8.13-unstable-526abe27/linux-amd64/go1.10.3
geth-node | INFO [07-22|20:43:11.488] Allocated cache and file handles database=/root/.ethereum/geth/chaindata cache=384 handles=1024
geth-node | INFO [07-22|20:43:11.521] Initialised chain configuration config="{ChainID: 1337 Homestead: 1 DAO: <nil> DAOSupport: false EIP150: 2 EIP155: 3 EIP158: 3 Byzantium: 4 Constantinople: <nil> Engine: clique}"
geth-node | INFO [07-22|20:43:11.521] Initialising Ethereum protocol versions="[63 62]" network=1366
geth-node | INFO [07-22|20:43:11.524] Loaded most recent local header number=0 hash=b85de5…3971b4 td=1
geth-node | INFO [07-22|20:43:11.524] Loaded most recent local full block number=0 hash=b85de5…3971b4 td=1
geth-node | INFO [07-22|20:43:11.524] Loaded most recent local fast block number=0 hash=b85de5…3971b4 td=1
geth-node | INFO [07-22|20:43:11.525] Loaded local transaction journal transactions=0 dropped=0
geth-node | INFO [07-22|20:43:11.530] Regenerated local transaction journal transactions=0 accounts=0
geth-node | INFO [07-22|20:43:11.530] Starting P2P networking
geth-node | INFO [07-22|20:43:13.670] UDP listener up self=enode://3e0e8e9a886a347fffb0150e670b45c8ae19f0f87ebb6d3fa0f7f312f17220b426913ac96df9527ae0ca00138c9e50ffe646255d5655e6023c47ef10aabf0224@[::]:30303
geth-node | INFO [07-22|20:43:13.672] Stats daemon started
geth-node | INFO [07-22|20:43:13.674] RLPx listener up self=enode://3e0e8e9a886a347fffb0150e670b45c8ae19f0f87ebb6d3fa0f7f312f17220b426913ac96df9527ae0ca00138c9e50ffe646255d5655e6023c47ef10aabf0224@[::]:30303
geth-node | INFO [07-22|20:43:13.676] Blockchain manager stopped
geth-node | INFO [07-22|20:43:13.677] Stopping Ethereum protocol
geth-node | INFO [07-22|20:43:13.677] Ethereum protocol stopped
geth-node | INFO [07-22|20:43:13.677] Transaction pool stopped
geth-node | INFO [07-22|20:43:13.681] Database closed database=/root/.ethereum/geth/chaindata
geth-node | INFO [07-22|20:43:13.681] Stats daemon stopped
geth-node | Fatal: Error starting protocol stack: listen unix /root/.ethereum/geth.ipc: bind: operation not permitted
geth-node | Fatal: Error starting protocol stack: listen unix /root/.ethereum/geth.ipc: bind: operation not permitted
geth-node exited with code 1
The problem is that you cannot create a Unix socket on a volume that is bind-mounted from a Windows file system.
Here's a link on how to work around that.
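One workaround (a sketch, with geth-data as an arbitrary volume name) is to keep /root/.ethereum on a Docker named volume rather than a Windows bind mount, so the IPC socket is created on a Linux filesystem inside the Docker VM; alternatively, geth's --ipcdisable flag skips the IPC endpoint entirely if you only need the HTTP RPC ports:

```shell
# Named volumes live inside Docker's Linux VM on Windows, so unix sockets
# can be bound there; only the genesis file stays as a read-only bind mount.
docker volume create geth-data
docker run -v geth-data:/root/.ethereum \
  -v "$(pwd)/files/genesis.json:/root/genesis.json:ro" \
  -p 30303:30303 -p 30303:30303/udp -p 8545:8545 -p 8546:8546 \
  ethereum/client-go:latest \
  --networkid 1337 --rpc --rpcaddr "0.0.0.0" --rpcapi "eth,personal,web3,net"
```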

Kubernetes Multi Master setup

[SOLVED] Flannel didn't work with this setup, so I changed to Weave Net. Use it if you don't want to provide the pod-network-cidr: "10.244.0.0/16" flag in the config.yaml.
I want to make a multi-master setup with Kubernetes and have tried a lot of different ways; even the last way I tried doesn't work. The problem is that the DNS and the flannel network plugin don't want to start: they get the CrashLoopBackOff status every time. The way I do it is listed below.
First, create an external etcd cluster with this command on every node (only the addresses change):
nohup etcd --name kube1 --initial-advertise-peer-urls http://192.168.100.110:2380 \
--listen-peer-urls http://192.168.100.110:2380 \
--listen-client-urls http://192.168.100.110:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://192.168.100.110:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster kube1=http://192.168.100.110:2380,kube2=http://192.168.100.108:2380,kube3=http://192.168.100.104:2380 \
--initial-cluster-state new &
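Before pointing kubeadm at the cluster, its health can be checked with etcdctl; the cluster-health subcommand below assumes the default v2 client API of this etcd release:

```shell
# Ask each member for its health; a working 3-member cluster reports all healthy.
etcdctl --endpoints http://192.168.100.110:2379,http://192.168.100.108:2379,http://192.168.100.104:2379 \
  cluster-health
```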
Then I created a config.yaml file for the kubeadm init command.
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.100.110
etcd:
  endpoints:
  - "http://192.168.100.110:2379"
  - "http://192.168.100.108:2379"
  - "http://192.168.100.104:2379"
apiServerExtraArgs:
  apiserver-count: "3"
apiServerCertSANs:
- "192.168.100.110"
- "192.168.100.108"
- "192.168.100.104"
- "127.0.0.1"
token: "64bhyh.1vjuhruuayzgtykv"
tokenTTL: "0"
Start command: kubeadm init --config /root/config.yaml
So now I copy /etc/kubernetes/pki and the config to the other nodes and start the other master nodes the same way. But it doesn't work.
So what is the right way to initialize a multi-master Kubernetes cluster, and why does my flannel network not start?
Status from a flannel pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 8m kubelet, kube2 MountVolume.SetUp succeeded for volume "run"
Normal SuccessfulMountVolume 8m kubelet, kube2 MountVolume.SetUp succeeded for volume "cni"
Normal SuccessfulMountVolume 8m kubelet, kube2 MountVolume.SetUp succeeded for volume "flannel-token-swdhl"
Normal SuccessfulMountVolume 8m kubelet, kube2 MountVolume.SetUp succeeded for volume "flannel-cfg"
Normal Pulling 8m kubelet, kube2 pulling image "quay.io/coreos/flannel:v0.10.0-amd64"
Normal Pulled 8m kubelet, kube2 Successfully pulled image "quay.io/coreos/flannel:v0.10.0-amd64"
Normal Created 8m kubelet, kube2 Created container
Normal Started 8m kubelet, kube2 Started container
Normal Pulled 8m (x4 over 8m) kubelet, kube2 Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
Normal Created 8m (x4 over 8m) kubelet, kube2 Created container
Normal Started 8m (x4 over 8m) kubelet, kube2 Started container
Warning BackOff 3m (x23 over 8m) kubelet, kube2 Back-off restarting failed container
etcd version
etcd --version
etcd Version: 3.3.6
Git SHA: 932c3c01f
Go Version: go1.9.6
Go OS/Arch: linux/amd64
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Last lines in nohup from etcd
2018-06-06 19:44:28.441304 I | etcdserver: name = kube1
2018-06-06 19:44:28.441327 I | etcdserver: data dir = kube1.etcd
2018-06-06 19:44:28.441331 I | etcdserver: member dir = kube1.etcd/member
2018-06-06 19:44:28.441334 I | etcdserver: heartbeat = 100ms
2018-06-06 19:44:28.441336 I | etcdserver: election = 1000ms
2018-06-06 19:44:28.441338 I | etcdserver: snapshot count = 100000
2018-06-06 19:44:28.441343 I | etcdserver: advertise client URLs = http://192.168.100.110:2379
2018-06-06 19:44:28.441346 I | etcdserver: initial advertise peer URLs = http://192.168.100.110:2380
2018-06-06 19:44:28.441352 I | etcdserver: initial cluster = kube1=http://192.168.100.110:2380,kube2=http://192.168.100.108:2380,kube3=http://192.168.100.104:2380
2018-06-06 19:44:28.443825 I | etcdserver: starting member a4df4f699dd66909 in cluster 73f203cf831df407
2018-06-06 19:44:28.443843 I | raft: a4df4f699dd66909 became follower at term 0
2018-06-06 19:44:28.443848 I | raft: newRaft a4df4f699dd66909 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2018-06-06 19:44:28.443850 I | raft: a4df4f699dd66909 became follower at term 1
2018-06-06 19:44:28.447834 W | auth: simple token is not cryptographically signed
2018-06-06 19:44:28.448857 I | rafthttp: starting peer 9e0f381e79b9b9dc...
2018-06-06 19:44:28.448869 I | rafthttp: started HTTP pipelining with peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450791 I | rafthttp: started peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450803 I | rafthttp: added peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450809 I | rafthttp: starting peer fc9c29e972d01e69...
2018-06-06 19:44:28.450816 I | rafthttp: started HTTP pipelining with peer fc9c29e972d01e69
2018-06-06 19:44:28.453543 I | rafthttp: started peer fc9c29e972d01e69
2018-06-06 19:44:28.453559 I | rafthttp: added peer fc9c29e972d01e69
2018-06-06 19:44:28.453570 I | etcdserver: starting server... [version: 3.3.6, cluster version: to_be_decided]
2018-06-06 19:44:28.455414 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (writer)
2018-06-06 19:44:28.455431 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (writer)
2018-06-06 19:44:28.455445 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (stream MsgApp v2 reader)
2018-06-06 19:44:28.455578 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (stream Message reader)
2018-06-06 19:44:28.455697 I | rafthttp: started streaming with peer fc9c29e972d01e69 (writer)
2018-06-06 19:44:28.455704 I | rafthttp: started streaming with peer fc9c29e972d01e69 (writer)
If you do not have any hosting preferences, and if you are OK with creating the cluster on AWS, then it can be done very easily using kops.
https://github.com/kubernetes/kops
Via kops you can easily configure the autoscaling group for the masters and specify the number of masters and nodes required for your cluster.
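A minimal invocation might look like the following sketch (the cluster name, S3 state-store bucket, and zones are placeholders, and the exact flags may differ between kops releases):

```shell
# Create a 3-master, 2-node cluster on AWS; kops stores its state in S3.
export KOPS_STATE_STORE=s3://my-kops-state-bucket   # placeholder bucket
kops create cluster \
  --name=k8s.example.com \
  --zones=us-east-1a,us-east-1b,us-east-1c \
  --master-count=3 \
  --node-count=2 \
  --yes
```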
Flannel didn't work with this setup, so I changed to Weave Net. Use it if you don't want to provide the pod-network-cidr: "10.244.0.0/16" flag in the config.yaml.

Unable to build app using Docker

I have set up my application on DigitalOcean using Docker. It was working fine, but a few days ago it stopped working. Whenever I want to build and deploy the application, it doesn't show any progress.
Using the following commands,
docker-compose build && docker-compose stop && docker-compose up -d
the system gets stuck on the following output:
db uses an image, skipping
elasticsearch uses an image, skipping
redis uses an image, skipping
Building app
It doesn't show any further progress.
Following are the logs of docker-compose:
db_1 | LOG: received smart shutdown request
db_1 | LOG: autovacuum launcher shutting down
db_1 | LOG: shutting down
db_1 | LOG: database system is shut down
db_1 | LOG: database system was shut down at 2018-01-10 02:25:36 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
redis_1 | 11264:C 26 Mar 15:20:17.028 # Failed opening the RDB file root (in server root dir /run) for saving: Permission denied
redis_1 | 1:M 26 Mar 15:20:17.127 # Background saving error
redis_1 | 1:M 26 Mar 15:20:23.038 * 1 changes in 3600 seconds. Saving...
redis_1 | 1:M 26 Mar 15:20:23.038 * Background saving started by pid 11265
elasticsearch | [2018-03-06T01:18:25,729][WARN ][o.e.b.BootstrapChecks ] [_IRIbyW] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
elasticsearch | [2018-03-06T01:18:28,794][INFO ][o.e.c.s.ClusterService ] [_IRIbyW] new_master {_IRIbyW}{_IRIbyWCSoaUaKOLN93Fzg}{TFK38PIgRT6Kl62mTGBORg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
elasticsearch | [2018-03-06T01:18:28,835][INFO ][o.e.h.n.Netty4HttpServerTransport] [_IRIbyW] publish_address {172.17.0.4:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch | [2018-03-06T01:18:28,838][INFO ][o.e.n.Node ] [_IRIbyW] started
elasticsearch | [2018-03-06T01:18:29,104][INFO ][o.e.g.GatewayService ] [_IRIbyW] recovered [4] indices into cluster_state
elasticsearch | [2018-03-06T01:18:29,799][INFO ][o.e.c.r.a.AllocationService] [_IRIbyW] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[product_records][2]] ...]).
elasticsearch | [2018-03-07T16:11:18,449][INFO ][o.e.n.Node ] [_IRIbyW] stopping ...
elasticsearch | [2018-03-07T16:11:18,575][INFO ][o.e.n.Node ] [_IRIbyW] stopped
elasticsearch | [2018-03-07T16:11:18,575][INFO ][o.e.n.Node ] [_IRIbyW] closing ...
elasticsearch | [2018-03-07T16:11:18,601][INFO ][o.e.n.Node ] [_IRIbyW] closed
elasticsearch | [2018-03-07T16:11:37,993][INFO ][o.e.n.Node ] [] initializing ...
WARNING: Connection pool is full, discarding connection: 'Ipaddress'
I am using postgres, redis, elasticsearch, and sidekiq images in my Rails application, but I have no clue where things are going wrong.

Running etcd in Docker container

I want to run etcd in a Docker container with this command:
docker run -p 2379:2379 -p 4001:4001 --name etcd -v /usr/share/ca-certificates/:/etc/ssl/certs quay.io/coreos/etcd:v2.3.0-alpha.1
and it seems that everything is OK:
2016-02-23 12:22:27.815591 I | etcdmain: etcd Version: 2.3.0-alpha.0+git
2016-02-23 12:22:27.815631 I | etcdmain: Git SHA: 40d3e0d
2016-02-23 12:22:27.815635 I | etcdmain: Go Version: go1.5.3
2016-02-23 12:22:27.815638 I | etcdmain: Go OS/Arch: linux/amd64
2016-02-23 12:22:27.815659 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2016-02-23 12:22:27.815663 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
2016-02-23 12:22:27.815896 I | etcdmain: listening for peers on http://localhost:2380
2016-02-23 12:22:27.815973 I | etcdmain: listening for peers on http://localhost:7001
2016-02-23 12:22:27.816030 I | etcdmain: listening for client requests on http://localhost:2379
2016-02-23 12:22:27.816091 I | etcdmain: listening for client requests on http://localhost:4001
2016-02-23 12:22:27.816370 I | etcdserver: name = default
2016-02-23 12:22:27.816383 I | etcdserver: data dir = default.etcd
2016-02-23 12:22:27.816387 I | etcdserver: member dir = default.etcd/member
2016-02-23 12:22:27.816390 I | etcdserver: heartbeat = 100ms
2016-02-23 12:22:27.816392 I | etcdserver: election = 1000ms
2016-02-23 12:22:27.816395 I | etcdserver: snapshot count = 10000
2016-02-23 12:22:27.816404 I | etcdserver: advertise client URLs = http://localhost:2379,http://localhost:4001
2016-02-23 12:22:27.816408 I | etcdserver: initial advertise peer URLs = http://localhost:2380,http://localhost:7001
2016-02-23 12:22:27.816415 I | etcdserver: initial cluster = default=http://localhost:2380,default=http://localhost:7001
2016-02-23 12:22:27.821522 I | etcdserver: starting member ce2a822cea30bfca in cluster 7e27652122e8b2ae
2016-02-23 12:22:27.821566 I | raft: ce2a822cea30bfca became follower at term 0
2016-02-23 12:22:27.821579 I | raft: newRaft ce2a822cea30bfca [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2016-02-23 12:22:27.821583 I | raft: ce2a822cea30bfca became follower at term 1
2016-02-23 12:22:27.821739 I | etcdserver: starting server... [version: 2.3.0-alpha.0+git, cluster version: to_be_decided]
2016-02-23 12:22:27.822619 N | etcdserver: added local member ce2a822cea30bfca [http://localhost:2380 http://localhost:7001] to cluster 7e27652122e8b2ae
2016-02-23 12:22:28.221880 I | raft: ce2a822cea30bfca is starting a new election at term 1
2016-02-23 12:22:28.222304 I | raft: ce2a822cea30bfca became candidate at term 2
2016-02-23 12:22:28.222545 I | raft: ce2a822cea30bfca received vote from ce2a822cea30bfca at term 2
2016-02-23 12:22:28.222885 I | raft: ce2a822cea30bfca became leader at term 2
2016-02-23 12:22:28.223075 I | raft: raft.node: ce2a822cea30bfca elected leader ce2a822cea30bfca at term 2
2016-02-23 12:22:28.223529 I | etcdserver: setting up the initial cluster version to 2.3
2016-02-23 12:22:28.227050 N | etcdserver: set the initial cluster version to 2.3
2016-02-23 12:22:28.227351 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379 http://localhost:4001]} to cluster 7e27652122e8b2ae
But when I try to set a key (from the same etcd node machine):
curl -L http://localhost:2379/v2/keys/mykey -XPUT -d value="this is awesome"
I get:
The requested URL could not be retrieved
Do I need to configure something more? The Docker container is running OK:
docker ps
dba35d3b61c3 quay.io/coreos/etcd:v2.3.0-alpha.1 "/etcd" 2 seconds ago Up 1 seconds 0.0.0.0:2379->2379/tcp, 2380/tcp, 0.0.0.0:4001->4001/tcp, 7001/tcp etcd
You should configure etcd to listen on 0.0.0.0; otherwise it listens on 127.0.0.1, which is not accessible from outside the Docker container:
docker run \
-p 2379:2379 \
-p 4001:4001 \
--name etcd \
-v /usr/share/ca-certificates/:/etc/ssl/certs \
quay.io/coreos/etcd:v2.3.0-alpha.1 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001
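With the container restarted this way, the curl from the question should work from the host (note that, depending on the etcd version, you may also need to pass -advertise-client-urls alongside -listen-client-urls):

```shell
# Set a key and read it back through the published port.
curl -L http://localhost:2379/v2/keys/mykey -XPUT -d value="this is awesome"
curl -L http://localhost:2379/v2/keys/mykey
```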
