I want to run etcd in a Docker container with this command:
docker run -p 2379:2379 -p 4001:4001 --name etcd -v /usr/share/ca-certificates/:/etc/ssl/certs quay.io/coreos/etcd:v2.3.0-alpha.1
and it seems that everything is OK:
2016-02-23 12:22:27.815591 I | etcdmain: etcd Version: 2.3.0-alpha.0+git
2016-02-23 12:22:27.815631 I | etcdmain: Git SHA: 40d3e0d
2016-02-23 12:22:27.815635 I | etcdmain: Go Version: go1.5.3
2016-02-23 12:22:27.815638 I | etcdmain: Go OS/Arch: linux/amd64
2016-02-23 12:22:27.815659 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2016-02-23 12:22:27.815663 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
2016-02-23 12:22:27.815896 I | etcdmain: listening for peers on http://localhost:2380
2016-02-23 12:22:27.815973 I | etcdmain: listening for peers on http://localhost:7001
2016-02-23 12:22:27.816030 I | etcdmain: listening for client requests on http://localhost:2379
2016-02-23 12:22:27.816091 I | etcdmain: listening for client requests on http://localhost:4001
2016-02-23 12:22:27.816370 I | etcdserver: name = default
2016-02-23 12:22:27.816383 I | etcdserver: data dir = default.etcd
2016-02-23 12:22:27.816387 I | etcdserver: member dir = default.etcd/member
2016-02-23 12:22:27.816390 I | etcdserver: heartbeat = 100ms
2016-02-23 12:22:27.816392 I | etcdserver: election = 1000ms
2016-02-23 12:22:27.816395 I | etcdserver: snapshot count = 10000
2016-02-23 12:22:27.816404 I | etcdserver: advertise client URLs = http://localhost:2379,http://localhost:4001
2016-02-23 12:22:27.816408 I | etcdserver: initial advertise peer URLs = http://localhost:2380,http://localhost:7001
2016-02-23 12:22:27.816415 I | etcdserver: initial cluster = default=http://localhost:2380,default=http://localhost:7001
2016-02-23 12:22:27.821522 I | etcdserver: starting member ce2a822cea30bfca in cluster 7e27652122e8b2ae
2016-02-23 12:22:27.821566 I | raft: ce2a822cea30bfca became follower at term 0
2016-02-23 12:22:27.821579 I | raft: newRaft ce2a822cea30bfca [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2016-02-23 12:22:27.821583 I | raft: ce2a822cea30bfca became follower at term 1
2016-02-23 12:22:27.821739 I | etcdserver: starting server... [version: 2.3.0-alpha.0+git, cluster version: to_be_decided]
2016-02-23 12:22:27.822619 N | etcdserver: added local member ce2a822cea30bfca [http://localhost:2380 http://localhost:7001] to cluster 7e27652122e8b2ae
2016-02-23 12:22:28.221880 I | raft: ce2a822cea30bfca is starting a new election at term 1
2016-02-23 12:22:28.222304 I | raft: ce2a822cea30bfca became candidate at term 2
2016-02-23 12:22:28.222545 I | raft: ce2a822cea30bfca received vote from ce2a822cea30bfca at term 2
2016-02-23 12:22:28.222885 I | raft: ce2a822cea30bfca became leader at term 2
2016-02-23 12:22:28.223075 I | raft: raft.node: ce2a822cea30bfca elected leader ce2a822cea30bfca at term 2
2016-02-23 12:22:28.223529 I | etcdserver: setting up the initial cluster version to 2.3
2016-02-23 12:22:28.227050 N | etcdserver: set the initial cluster version to 2.3
2016-02-23 12:22:28.227351 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379 http://localhost:4001]} to cluster 7e27652122e8b2ae
But when I try to set a key (from the same machine that runs the etcd node):
curl -L http://localhost:2379/v2/keys/mykey -XPUT -d value="this is awesome"
I get:
The requested URL could not be retrieved
Do I need to configure something else? The Docker container is running OK:
docker ps
dba35d3b61c3 quay.io/coreos/etcd:v2.3.0-alpha.1 "/etcd" 2 seconds ago Up 1 seconds 0.0.0.0:2379->2379/tcp, 2380/tcp, 0.0.0.0:4001->4001/tcp, 7001/tcp etcd
You should configure etcd to listen on 0.0.0.0; otherwise it listens on 127.0.0.1, which is not accessible from outside the Docker container:
docker run \
-p 2379:2379 \
-p 4001:4001 \
--name etcd \
-v /usr/share/ca-certificates/:/etc/ssl/certs \
quay.io/coreos/etcd:v2.3.0-alpha.1 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001
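With those flags the client listener binds on all interfaces inside the container, so the published ports can reach it (depending on the etcd version you may also need to pass -advertise-client-urls; that extra flag is an assumption, not part of the original answer). A minimal re-test from the host, reusing the key from the question:
curl -L http://localhost:2379/v2/keys/mykey -XPUT -d value="this is awesome"
# the reply should look roughly like: {"action":"set","node":{"key":"/mykey","value":"this is awesome",...}}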
My Dockerfile is:
FROM openjdk:8
VOLUME /tmp
ADD target/demo-0.0.1-SNAPSHOT.jar app.jar
#RUN bash -c 'touch /app.jar'
#EXPOSE 8080
ENTRYPOINT ["java","-Dspring.data.mongodb.uri=mongodb://mongo/players","-jar","/app.jar"]
And the docker-compose file is:
version: "3"
services:
spring-docker:
build: .
restart: always
ports:
- "8080:8080"
depends_on:
- db
db:
image: mongo
volumes:
- ./data:/data/db
ports:
- "27000:27017"
restart: always
I have a Docker image, and when I run docker-compose up, everything goes well without any errors.
But in Postman, when I send a GET request to localhost:8080/player, I get no output, so I used the IP of the docker-machine instead, such as 192.168.99.101:8080, but then I get a 404 Not Found error in Postman.
What is my mistake?!
The docker-compose logs:
$ docker-compose logs
Attaching to thesismongoproject_spring-docker_1, thesismongoproject_db_1
spring-docker_1 |
spring-docker_1 | . ____ _ __ _ _
spring-docker_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
spring-docker_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
spring-docker_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
spring-docker_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
spring-docker_1 | =========|_|==============|___/=/_/_/_/
spring-docker_1 | :: Spring Boot :: (v2.2.6.RELEASE)
spring-docker_1 |
spring-docker_1 | 2020-05-31 11:36:39.598 INFO 1 --- [ main] thesisMongoProject.Application : Starting Application v0.0.1-SNAPSHOT on e81ccff8ba0e with PID 1 (/demo-0.0.1-SNAPSHOT.jar started by root in /)
spring-docker_1 | 2020-05-31 11:36:39.620 INFO 1 --- [ main] thesisMongoProject.Application : No active profile set, falling back to default profiles: default
spring-docker_1 | 2020-05-31 11:36:41.971 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.
spring-docker_1 | 2020-05-31 11:36:42.216 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 225ms. Found 4 MongoDB repository interfaces.
spring-docker_1 | 2020-05-31 11:36:44.319 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
spring-docker_1 | 2020-05-31 11:36:44.381 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
spring-docker_1 | 2020-05-31 11:36:44.381 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.33]
spring-docker_1 | 2020-05-31 11:36:44.619 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
spring-docker_1 | 2020-05-31 11:36:44.619 INFO 1 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 4810 ms
spring-docker_1 | 2020-05-31 11:36:46.183 INFO 1 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[db:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
spring-docker_1 | 2020-05-31 11:36:46.781 INFO 1 --- [null'}-db:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:1}] to db:27017
spring-docker_1 | 2020-05-31 11:36:46.802 INFO 1 --- [null'}-db:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=db:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 7]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=5468915}
spring-docker_1 | 2020-05-31 11:36:48.829 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
spring-docker_1 | 2020-05-31 11:36:49.546 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
spring-docker_1 | 2020-05-31 11:36:49.581 INFO 1 --- [ main] thesisMongoProject.Application : Started Application in 11.264 seconds (JVM running for 13.615)
spring-docker_1 | 2020-05-31 11:40:10.290 INFO 1 --- [extShutdownHook] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
db_1 | 2020-05-31T11:36:35.623+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
db_1 | 2020-05-31T11:36:35.639+0000 W ASIO [main] No TransportLayer configured during NetworkInterface startup
db_1 | 2020-05-31T11:36:35.645+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=1a0e5bc0c503
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] db version v4.2.7
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] git version: 51d9fe12b5d19720e72dcd7db0f2f17dd9a19212
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] allocator: tcmalloc
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] modules: none
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten] build environment:
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]     distmod: ubuntu1804
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]     distarch: x86_64
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]     target_arch: x86_64
db_1 | 2020-05-31T11:36:35.648+0000 I CONTROL [initandlisten] options: { net: { bindIp: "*" } }
db_1 | 2020-05-31T11:36:35.649+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
db_1 | 2020-05-31T11:36:35.650+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=256M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
db_1 | 2020-05-31T11:36:37.046+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:46670][1:0x7f393f9a0b00], txn-recover: Recovering log 9 through 10
db_1 | 2020-05-31T11:36:37.231+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:231423][1:0x7f393f9a0b00], txn-recover: Recovering log 10 through 10
db_1 | 2020-05-31T11:36:37.294+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:294858][1:0x7f393f9a0b00], txn-recover: Main recovery loop: starting at 9/6016 to 10/256
db_1 | 2020-05-31T11:36:37.447+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:447346][1:0x7f393f9a0b00], txn-recover: Recovering log 9 through 10
db_1 | 2020-05-31T11:36:37.564+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:564841][1:0x7f393f9a0b00], txn-recover: Recovering log 10 through 10
db_1 | 2020-05-31T11:36:37.645+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:645216][1:0x7f393f9a0b00], txn-recover: Set global recovery timestamp: (0, 0)
db_1 | 2020-05-31T11:36:37.681+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
db_1 | 2020-05-31T11:36:37.703+0000 I STORAGE [initandlisten] Timestamp monitor starting
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten]
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
db_1 | 2020-05-31T11:36:37.705+0000 I CONTROL [initandlisten]
db_1 | 2020-05-31T11:36:37.712+0000 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.722+0000 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
db_1 | 2020-05-31T11:36:37.722+0000 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.724+0000 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.726+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.729+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
db_1 | 2020-05-31T11:36:37.740+0000 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.748+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.748+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock
db_1 | 2020-05-31T11:36:37.748+0000 I NETWORK [listener] Listening on 0.0.0.0
db_1 | 2020-05-31T11:36:37.749+0000 I NETWORK [listener] waiting for connections on port 27017
db_1 | 2020-05-31T11:36:38.001+0000 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
db_1 | 2020-05-31T11:36:46.536+0000 I NETWORK [listener] connection accepted from 172.19.0.3:40656 #1 (1 connection now open)
db_1 | 2020-05-31T11:36:46.653+0000 I NETWORK [conn1] received client metadata from 172.19.0.3:40656 conn1: { driver: { name: "mongo-java-driver|legacy", version: "3.11.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "4.14.154-boot2docker" }, platform: "Java/Oracle Corporation/1.8.0_252-b09" }
db_1 | 2020-05-31T11:40:10.302+0000 I NETWORK [conn1] end connection 172.19.0.3:40656 (0 connections now open)
db_1 | 2020-05-31T11:40:10.523+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
db_1 | 2020-05-31T11:40:10.730+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
db_1 | 2020-05-31T11:40:10.731+0000 I NETWORK [listener] removing socket file: /tmp/mongodb-27017.sock
db_1 | 2020-05-31T11:40:10.731+0000 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
db_1 | 2020-05-31T11:40:10.796+0000 I CONTROL [signalProcessingThread] Shutting down free monitoring
db_1 | 2020-05-31T11:40:10.800+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
db_1 | 2020-05-31T11:40:10.803+0000 I STORAGE [signalProcessingThread] Deregistering all the collections
db_1 | 2020-05-31T11:40:10.811+0000 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
db_1 | 2020-05-31T11:40:10.828+0000 I STORAGE [TimestampMonitor] Timestamp monitor is stopping due to: interrupted at shutdown
db_1 | 2020-05-31T11:40:10.828+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
db_1 | 2020-05-31T11:40:10.916+0000 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
db_1 | 2020-05-31T11:40:10.917+0000 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
db_1 | 2020-05-31T11:40:10.917+0000 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
db_1 | 2020-05-31T11:40:10.935+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
db_1 | 2020-05-31T11:40:10.942+0000 I CONTROL [signalProcessingThread] now exiting
db_1 | 2020-05-31T11:40:10.943+0000 I CONTROL [signalProcessingThread] shutting down with code:0
To solve this problem I had to add the @EnableAutoConfiguration(exclude={MongoAutoConfiguration.class}) annotation.
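If it helps, a quick way to re-test after adding the annotation and rebuilding (a sketch; 192.168.99.101 is the docker-machine IP and /player the route from the question):
docker-compose build spring-docker && docker-compose up -d
curl -i http://192.168.99.101:8080/player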
[SOLVED] Flannel doesn't work for me, so I changed to Weave Net; it is an option if you don't want to provide the pod-network-cidr "10.244.0.0/16" setting in the config.yaml.
I want to build a multi-master setup with Kubernetes and have tried a lot of different approaches; even the last one I tried doesn't work. The problem is that the DNS and the flannel network plugin don't want to start; they end up in the CrashLoopBackOff status every time. The way I did it is listed below.
First, create an external etcd cluster with this command on every node (only the addresses change):
nohup etcd --name kube1 --initial-advertise-peer-urls http://192.168.100.110:2380 \
--listen-peer-urls http://192.168.100.110:2380 \
--listen-client-urls http://192.168.100.110:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://192.168.100.110:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster kube1=http://192.168.100.110:2380,kube2=http://192.168.100.108:2380,kube3=http://192.168.100.104:2380 \
--initial-cluster-state new &
Then I created a config.yaml file for the kubeadm init command.
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.100.110
etcd:
  endpoints:
  - "http://192.168.100.110:2379"
  - "http://192.168.100.108:2379"
  - "http://192.168.100.104:2379"
apiServerExtraArgs:
  apiserver-count: "3"
apiServerCertSANs:
- "192.168.100.110"
- "192.168.100.108"
- "192.168.100.104"
- "127.0.0.1"
token: "64bhyh.1vjuhruuayzgtykv"
tokenTTL: "0"
Start command: kubeadm init --config /root/config.yaml
Now copy /etc/kubernetes/pki and the config to the other nodes and start the other master nodes the same way, roughly as sketched below. But it doesn't work.
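Roughly what that copy step looks like for the second master (a sketch; the root user and the paths are assumptions based on the setup above):
scp -r /etc/kubernetes/pki root@192.168.100.108:/etc/kubernetes/
scp /root/config.yaml root@192.168.100.108:/root/config.yaml
# then, on 192.168.100.108:
kubeadm init --config /root/config.yaml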
So what is the right way to initialize a multi-master Kubernetes cluster, or why does my flannel network not start?
Status from a flannel pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 8m kubelet, kube2 MountVolume.SetUp succeeded for volume "run"
Normal SuccessfulMountVolume 8m kubelet, kube2 MountVolume.SetUp succeeded for volume "cni"
Normal SuccessfulMountVolume 8m kubelet, kube2 MountVolume.SetUp succeeded for volume "flannel-token-swdhl"
Normal SuccessfulMountVolume 8m kubelet, kube2 MountVolume.SetUp succeeded for volume "flannel-cfg"
Normal Pulling 8m kubelet, kube2 pulling image "quay.io/coreos/flannel:v0.10.0-amd64"
Normal Pulled 8m kubelet, kube2 Successfully pulled image "quay.io/coreos/flannel:v0.10.0-amd64"
Normal Created 8m kubelet, kube2 Created container
Normal Started 8m kubelet, kube2 Started container
Normal Pulled 8m (x4 over 8m) kubelet, kube2 Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
Normal Created 8m (x4 over 8m) kubelet, kube2 Created container
Normal Started 8m (x4 over 8m) kubelet, kube2 Started container
Warning BackOff 3m (x23 over 8m) kubelet, kube2 Back-off restarting failed container
etcd version
etcd --version
etcd Version: 3.3.6
Git SHA: 932c3c01f
Go Version: go1.9.6
Go OS/Arch: linux/amd64
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Last lines of the nohup output from etcd:
2018-06-06 19:44:28.441304 I | etcdserver: name = kube1
2018-06-06 19:44:28.441327 I | etcdserver: data dir = kube1.etcd
2018-06-06 19:44:28.441331 I | etcdserver: member dir = kube1.etcd/member
2018-06-06 19:44:28.441334 I | etcdserver: heartbeat = 100ms
2018-06-06 19:44:28.441336 I | etcdserver: election = 1000ms
2018-06-06 19:44:28.441338 I | etcdserver: snapshot count = 100000
2018-06-06 19:44:28.441343 I | etcdserver: advertise client URLs = http://192.168.100.110:2379
2018-06-06 19:44:28.441346 I | etcdserver: initial advertise peer URLs = http://192.168.100.110:2380
2018-06-06 19:44:28.441352 I | etcdserver: initial cluster = kube1=http://192.168.100.110:2380,kube2=http://192.168.100.108:2380,kube3=http://192.168.100.104:2380
2018-06-06 19:44:28.443825 I | etcdserver: starting member a4df4f699dd66909 in cluster 73f203cf831df407
2018-06-06 19:44:28.443843 I | raft: a4df4f699dd66909 became follower at term 0
2018-06-06 19:44:28.443848 I | raft: newRaft a4df4f699dd66909 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2018-06-06 19:44:28.443850 I | raft: a4df4f699dd66909 became follower at term 1
2018-06-06 19:44:28.447834 W | auth: simple token is not cryptographically signed
2018-06-06 19:44:28.448857 I | rafthttp: starting peer 9e0f381e79b9b9dc...
2018-06-06 19:44:28.448869 I | rafthttp: started HTTP pipelining with peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450791 I | rafthttp: started peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450803 I | rafthttp: added peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450809 I | rafthttp: starting peer fc9c29e972d01e69...
2018-06-06 19:44:28.450816 I | rafthttp: started HTTP pipelining with peer fc9c29e972d01e69
2018-06-06 19:44:28.453543 I | rafthttp: started peer fc9c29e972d01e69
2018-06-06 19:44:28.453559 I | rafthttp: added peer fc9c29e972d01e69
2018-06-06 19:44:28.453570 I | etcdserver: starting server... [version: 3.3.6, cluster version: to_be_decided]
2018-06-06 19:44:28.455414 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (writer)
2018-06-06 19:44:28.455431 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (writer)
2018-06-06 19:44:28.455445 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (stream MsgApp v2 reader)
2018-06-06 19:44:28.455578 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (stream Message reader)
2018-06-06 19:44:28.455697 I | rafthttp: started streaming with peer fc9c29e972d01e69 (writer)
2018-06-06 19:44:28.455704 I | rafthttp: started streaming with peer fc9c29e972d01e69 (writer)
If you do not have any hosting preferences and you are OK with creating the cluster on AWS, then it can be done very easily using KOPS.
https://github.com/kubernetes/kops
Via KOPS you can easily configure the autoscaling group for the masters and specify the number of masters and nodes required for your cluster.
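For example, a cluster with three masters can be requested directly at creation time (a sketch; the cluster name, state-store bucket, and zones are placeholders):
kops create cluster \
  --name my-cluster.example.com \
  --state s3://my-kops-state-store \
  --zones eu-west-1a,eu-west-1b,eu-west-1c \
  --master-count 3 \
  --node-count 3 \
  --yes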
Flannel didn't work for me, so I changed to Weave Net; it is an option if you don't want to provide the pod-network-cidr "10.244.0.0/16" setting in the config.yaml.
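If you would rather keep flannel, the usual missing piece is the pod CIDR it expects; with the kubeadm API version used above that can be expressed as a networking.podSubnet entry (an assumption based on flannel's default 10.244.0.0/16, not something taken from the original answer):
cat >> /root/config.yaml <<'EOF'
networking:
  podSubnet: "10.244.0.0/16"
EOF
# then re-run: kubeadm init --config /root/config.yaml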
I have set up my application on DigitalOcean using Docker. It was working fine, but a few days back it stopped working. Whenever I try to build and deploy the application, it doesn't show any progress.
When I use the following commands:
docker-compose build && docker-compose stop && docker-compose up -d
the system gets stuck on the following output:
db uses an image, skipping
elasticsearch uses an image, skipping
redis uses an image, skipping
Building app
It doesn't show any further progress.
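A few hedged things worth checking when the build hangs at this point (app is the service name from the output above; the other commands only inspect disk usage, a common cause of silently stalled builds):
docker-compose build --no-cache app   # rebuild only the stuck service, bypassing the cache
docker system df                      # space used by images, containers, and volumes
df -h                                 # free space on the host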
Following are the docker-compose logs:
db_1 | LOG: received smart shutdown request
db_1 | LOG: autovacuum launcher shutting down
db_1 | LOG: shutting down
db_1 | LOG: database system is shut down
db_1 | LOG: database system was shut down at 2018-01-10 02:25:36 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
redis_1 | 11264:C 26 Mar 15:20:17.028 # Failed opening the RDB file root (in server root dir /run) for saving: Permission denied
redis_1 | 1:M 26 Mar 15:20:17.127 # Background saving error
redis_1 | 1:M 26 Mar 15:20:23.038 * 1 changes in 3600 seconds. Saving...
redis_1 | 1:M 26 Mar 15:20:23.038 * Background saving started by pid 11265
elasticsearch | [2018-03-06T01:18:25,729][WARN ][o.e.b.BootstrapChecks ] [_IRIbyW] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
elasticsearch | [2018-03-06T01:18:28,794][INFO ][o.e.c.s.ClusterService ] [_IRIbyW] new_master {_IRIbyW}{_IRIbyWCSoaUaKOLN93Fzg}{TFK38PIgRT6Kl62mTGBORg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
elasticsearch | [2018-03-06T01:18:28,835][INFO ][o.e.h.n.Netty4HttpServerTransport] [_IRIbyW] publish_address {172.17.0.4:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch | [2018-03-06T01:18:28,838][INFO ][o.e.n.Node ] [_IRIbyW] started
elasticsearch | [2018-03-06T01:18:29,104][INFO ][o.e.g.GatewayService ] [_IRIbyW] recovered [4] indices into cluster_state
elasticsearch | [2018-03-06T01:18:29,799][INFO ][o.e.c.r.a.AllocationService] [_IRIbyW] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[product_records][2]] ...]).
elasticsearch | [2018-03-07T16:11:18,449][INFO ][o.e.n.Node ] [_IRIbyW] stopping ...
elasticsearch | [2018-03-07T16:11:18,575][INFO ][o.e.n.Node ] [_IRIbyW] stopped
elasticsearch | [2018-03-07T16:11:18,575][INFO ][o.e.n.Node ] [_IRIbyW] closing ...
elasticsearch | [2018-03-07T16:11:18,601][INFO ][o.e.n.Node ] [_IRIbyW] closed
elasticsearch | [2018-03-07T16:11:37,993][INFO ][o.e.n.Node ] [] initializing ...
WARNING: Connection pool is full, discarding connection: 'Ipaddress'
I am using Postgres, Redis, Elasticsearch, and Sidekiq images in my Rails application,
but I have no clue where things are going wrong.
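One concrete item from the logs above: Elasticsearch warns that vm.max_map_count [65530] is too low; on the Docker host that is usually addressed like this (a sketch; it does not explain the stuck build, but it will cause trouble once the containers are up):
sudo sysctl -w vm.max_map_count=262144
# persist it across reboots:
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
The Redis "Failed opening the RDB file ... Permission denied" line is a separate issue with the directory Redis is saving into.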
I'm trying to bring up the Kafka server, but I'm getting this error and have no idea what is going on. I'm running Kafka in a Docker container; the version I'm using is 1.0.1, and the ZooKeeper image is the latest...
kafka_1 | waiting for kafka to be ready
kafka_1 | [2018-03-13 12:48:19,886] FATAL (kafka.Kafka$)
kafka_1 | org.apache.kafka.common.config.ConfigException: Invalid value 0version=1.0.1 for configuration group.initial.rebalance.delay.ms: Not a number of type INT
kafka_1 | at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:713)
kafka_1 | at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:460)
kafka_1 | at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:453)
kafka_1 | at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
kafka_1 | at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:897)
kafka_1 | at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:881)
kafka_1 | at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:878)
kafka_1 | at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
kafka_1 | at kafka.Kafka$.main(Kafka.scala:82)
I have tried to lower the Kafka version; I have used versions 1.0.1, 10.0.0, and 0.11.0.2 and still receive the same error.
Any suggestion on how to make Kafka work?
Thanks in advance.
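The value in the error, 0version=1.0.1, looks like two settings run together (a delay of 0 and a ...version=1.0.1 entry), which usually points at a missing newline or separator in the broker configuration rather than at the Kafka version itself. If the image builds server.properties from KAFKA_* environment variables (for example wurstmeister/kafka; an assumption, the question does not name the image), inspecting the generated file can confirm it (container name and config path are also assumptions):
docker exec -it <kafka-container> grep -n 'group.initial.rebalance.delay.ms\|version' /opt/kafka/config/server.properties
# each broker setting must be its own environment entry, e.g.
#   -e KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0
#   -e KAFKA_INTER_BROKER_PROTOCOL_VERSION=1.0.1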
I am trying to set up a 3-node etcd cluster on Ubuntu machines as a Docker data store for networking. I successfully created an etcd cluster using the etcd Docker image. Now, when I try to replicate it, the steps fail on one node. Even after removing the failing node from the setup, the cluster is still looking for the removed node. I get the same error when I use the etcd binary.
I used the following command, changing the IP accordingly, on all nodes:
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
--name etcd quay.io/coreos/etcd \
-name etcd0 \
-advertise-client-urls http://172.27.59.141:2379,http://172.27.59.141:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
-initial-advertise-peer-urls http://172.27.59.141:2380 \
-listen-peer-urls http://0.0.0.0:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster etcd0=http://172.27.59.141:2380,etcd1=http://172.27.59.244:2380,etcd2=http://172.27.59.232:2380 \
-initial-cluster-state new
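A quick way to see what membership the running members have actually recorded (a sketch; it assumes the coreos/etcd image ships an /etcdctl binary next to /etcd):
docker exec etcd /etcdctl cluster-health
docker exec etcd /etcdctl member list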
Two of the nodes connect properly, but the service on the third node stops. Following is the log of the third node:
2016-06-16 17:16:34.293248 I | etcdmain: etcd Version: 2.3.6
2016-06-16 17:16:34.294368 I | etcdmain: Git SHA: 128344c
2016-06-16 17:16:34.294584 I | etcdmain: Go Version: go1.6.2
2016-06-16 17:16:34.294781 I | etcdmain: Go OS/Arch: linux/amd64
2016-06-16 17:16:34.294962 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2016-06-16 17:16:34.295142 W | etcdmain: no data-dir provided, using default data-dir ./node2.etcd
2016-06-16 17:16:34.295438 I | etcdmain: listening for peers on http://0.0.0.0:2380
2016-06-16 17:16:34.295654 I | etcdmain: listening for client requests on http://0.0.0.0:2379
2016-06-16 17:16:34.295846 I | etcdmain: listening for client requests on http://0.0.0.0:4001
2016-06-16 17:16:34.296193 I | etcdmain: stopping listening for client requests on http://0.0.0.0:4001
2016-06-16 17:16:34.301139 I | etcdmain: stopping listening for client requests on http://0.0.0.0:2379
2016-06-16 17:16:34.301454 I | etcdmain: stopping listening for peers on http://0.0.0.0:2380
2016-06-16 17:16:34.301718 I | etcdmain: --initial-cluster must include node2=http://172.27.59.232:2380 given --initial-advertise-peer-urls=http://172.27.59.232:2380
Even after removing the failing node, I can see that the two remaining nodes are still waiting for the third node to connect.
2016-06-16 17:16:12.063893 N | etcdserver: added member 17879927ec74147b [http://172.27.59.232:238] to cluster ba4424e006edb53e
2016-06-16 17:16:12.064431 N | etcdserver: added local member 24d9feabb7e2f26f [http://172.27.59.244:2380] to cluster ba4424e006edb53e
2016-06-16 17:16:12.065229 N | etcdserver: added member 2bda70be57138cfe [http://172.27.59.141:2380] to cluster ba4424e006edb53e
2016-06-16 17:16:12.218560 I | raft: 24d9feabb7e2f26f [term: 1] received a MsgVote message with higher term from 2bda70be57138cfe [term: 29]
2016-06-16 17:16:12.218964 I | raft: 24d9feabb7e2f26f became follower at term 29
2016-06-16 17:16:12.219276 I | raft: 24d9feabb7e2f26f [logterm: 1, index: 3, vote: 0] voted for 2bda70be57138cfe [logterm: 1, index: 3] at term 29
2016-06-16 17:16:12.222667 I | raft: raft.node: 24d9feabb7e2f26f elected leader 2bda70be57138cfe at term 29
2016-06-16 17:16:12.335904 I | etcdserver: published {Name:node1 ClientURLs:[http://172.27.59.244:2379 http://172.27.59.244:4001]} to cluster ba4424e006edb53e
2016-06-16 17:16:12.336459 N | etcdserver: set the initial cluster version to 2.2
2016-06-16 17:16:42.059177 W | rafthttp: the connection to peer 17879927ec74147b is unhealthy
2016-06-16 17:17:12.060313 W | rafthttp: the connection to peer 17879927ec74147b is unhealthy
2016-06-16 17:17:42.060986 W | rafthttp: the connection to peer 17879927ec74147b is unhealthy
It can be seen that despite starting the cluster with two nodes, it is still searching for the third node.
Is there a location on the local disk where data is being saved, so that it is picking up old data even though it was not provided?
Please suggest what I am missing.
Is there a location on the local disk where data is being saved, so that it is picking up old data even though it was not provided?
Yes, the membership data is already stored in node0.etcd and node1.etcd.
You can see the following message in the log, which indicates that the server already belongs to a cluster:
etcdmain: the server is already initialized as member before, starting as etcd member...
In order to run a new cluster with two members, just add another argument to your command:
--data-dir bak
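Applied to the command from the question, the two-member restart would look roughly like this (a sketch; only the removed third member is dropped and the fresh data dir added):
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd \
 -name etcd0 \
 -advertise-client-urls http://172.27.59.141:2379,http://172.27.59.141:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
 -initial-advertise-peer-urls http://172.27.59.141:2380 \
 -listen-peer-urls http://0.0.0.0:2380 \
 -initial-cluster-token etcd-cluster-1 \
 -initial-cluster etcd0=http://172.27.59.141:2380,etcd1=http://172.27.59.244:2380 \
 -initial-cluster-state new \
 --data-dir bak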