Minikube failing to start on CentOS 7.9 - docker

Full Minikube start command: minikube start --driver=docker --alsologtostderr -v=3
Output:
minikube start --driver=docker --alsologtostderr -v=3
2022-10-10 15:03:12.668 | . I1010 13:03:11.656428 24138 out.go:296] Setting OutFile to fd 1 ...
2022-10-10 15:03:12.668 | . I1010 13:03:11.657466 24138 out.go:343] TERM=,COLORTERM=, which probably does not support color
2022-10-10 15:03:12.668 | . I1010 13:03:11.657488 24138 out.go:309] Setting ErrFile to fd 2...
2022-10-10 15:03:12.668 | . I1010 13:03:11.657526 24138 out.go:343] TERM=,COLORTERM=, which probably does not support color
2022-10-10 15:03:12.668 | . I1010 13:03:11.657771 24138 root.go:333] Updating PATH: /home/builder/.minikube/bin
2022-10-10 15:03:12.668 | . W1010 13:03:11.658273 24138 root.go:310] Error reading config file at /home/builder/.minikube/config/config.json: open /home/builder/.minikube/config/config.json: no such file or directory
2022-10-10 15:03:12.668 | . I1010 13:03:11.670761 24138 out.go:303] Setting JSON to false
2022-10-10 15:03:12.669 | . I1010 13:03:11.702854 24138 start.go:115] hostinfo: {"hostname":"vkvm1.eng.marklogic.com","uptime":359354,"bootTime":1665072838,"procs":189,"os":"linux","platform":"redhat","platformFamily":"rhel","platformVersion":"7.9","kernelVersion":"3.10.0-1160.76.1.el7.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"8eb41075-ad46-4948-8bf3-6f56c8fc814f"}
2022-10-10 15:03:12.669 | . I1010 13:03:11.702985 24138 start.go:125] virtualization: guest
2022-10-10 15:03:12.669 | . I1010 13:03:11.705293 24138 out.go:177] * minikube v1.27.0 on Redhat 7.9 (amd64)
2022-10-10 15:03:12.669 | . * minikube v1.27.0 on Redhat 7.9 (amd64)
2022-10-10 15:03:12.669 | . I1010 13:03:11.706999 24138 notify.go:214] Checking for updates...
2022-10-10 15:03:12.669 | . W1010 13:03:11.707409 24138 preload.go:295] Failed to list preload files: open /home/builder/.minikube/cache/preloaded-tarball: no such file or directory
2022-10-10 15:03:12.669 | . W1010 13:03:11.708033 24138 out.go:239] ! Kubernetes 1.25.0 has a known issue with resolv.conf. minikube is using a workaround that should work for most use cases.
2022-10-10 15:03:12.669 | . ! Kubernetes 1.25.0 has a known issue with resolv.conf. minikube is using a workaround that should work for most use cases.
2022-10-10 15:03:12.669 | . W1010 13:03:11.708170 24138 out.go:239] ! For more information, see: https://github.com/kubernetes/kubernetes/issues/112135
2022-10-10 15:03:12.669 | . ! For more information, see: https://github.com/kubernetes/kubernetes/issues/112135
2022-10-10 15:03:12.669 | . I1010 13:03:11.708304 24138 driver.go:365] Setting default libvirt URI to qemu:///system
2022-10-10 15:03:12.669 | . I1010 13:03:11.779753 24138 docker.go:137] docker version: linux-20.10.18
2022-10-10 15:03:12.669 | . I1010 13:03:11.780033 24138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
2022-10-10 15:03:12.935 | . I1010 13:03:11.804722 24138 lock.go:35] WriteFile acquiring /home/builder/.minikube/last_update_check: {Name:mkfeeafdcd5b2a03a55be5c45e91f1633dbd4269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
2022-10-10 15:03:12.935 | .
2022-10-10 15:03:12.936 | . I1010 13:03:11.957361 24138 info.go:265] docker info: {ID:U2TF:AUHN:IGPM:LOIS:LYU5:UDYQ:NGVO:W6RZ:NRM4:ZUBC:VRBL:C54T Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-10-10 13:03:11.819053176 -0700 PDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1160.76.1.el7.x86_64 OperatingSystem:Red Hat Enterprise Linux Server 7.9 (Maipo) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:16654606336 GenericResources:<nil> DockerRootDir:/space/docker HTTPProxy: HTTPSProxy: NoProxy: Name:vkvm1.eng.marklogic.com Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
2022-10-10 15:03:12.936 | . I1010 13:03:11.957632 24138 docker.go:254] overlay module found
2022-10-10 15:03:12.936 | . I1010 13:03:11.959650 24138 out.go:177] * Using the docker driver based on user configuration
2022-10-10 15:03:12.936 | . * Using the docker driver based on user configuration
2022-10-10 15:03:12.936 | . I1010 13:03:11.960738 24138 start.go:284] selected driver: docker
2022-10-10 15:03:12.936 | . I1010 13:03:11.960804 24138 start.go:808] validating driver "docker" against <nil>
2022-10-10 15:03:12.936 | . I1010 13:03:11.960861 24138 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
2022-10-10 15:03:12.936 | . I1010 13:03:11.961184 24138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
2022-10-10 15:03:13.209 | . I1010 13:03:12.105432 24138 info.go:265] docker info: {ID:U2TF:AUHN:IGPM:LOIS:LYU5:UDYQ:NGVO:W6RZ:NRM4:ZUBC:VRBL:C54T Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-10-10 13:03:11.999402744 -0700 PDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1160.76.1.el7.x86_64 OperatingSystem:Red Hat Enterprise Linux Server 7.9 (Maipo) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:16654606336 GenericResources:<nil> DockerRootDir:/space/docker HTTPProxy: HTTPSProxy: NoProxy: Name:vkvm1.eng.marklogic.com Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
2022-10-10 15:03:13.209 | . I1010 13:03:12.105750 24138 start_flags.go:296] no existing cluster config was found, will generate one from the flags
2022-10-10 15:03:13.210 | . I1010 13:03:12.106519 24138 start_flags.go:377] Using suggested 3900MB memory alloc based on sys=15883MB, container=15883MB
2022-10-10 15:03:13.210 | . I1010 13:03:12.106744 24138 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
2022-10-10 15:03:13.210 | . I1010 13:03:12.109246 24138 out.go:177] * Using Docker driver with root privileges
2022-10-10 15:03:13.210 | . * Using Docker driver with root privileges
2022-10-10 15:03:13.210 | . I1010 13:03:12.110491 24138 cni.go:95] Creating CNI manager for ""
2022-10-10 15:03:13.210 | . I1010 13:03:12.110542 24138 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
2022-10-10 15:03:13.210 | . I1010 13:03:12.110582 24138 start_flags.go:310] config:
2022-10-10 15:03:13.210 | . {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34#sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/builder:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
2022-10-10 15:03:13.210 | . I1010 13:03:12.112214 24138 out.go:177] * Starting control plane node minikube in cluster minikube
2022-10-10 15:03:13.210 | . * Starting control plane node minikube in cluster minikube
2022-10-10 15:03:13.210 | . I1010 13:03:12.113593 24138 cache.go:120] Beginning downloading kic base image for docker with docker
2022-10-10 15:03:13.210 | . I1010 13:03:12.114952 24138 out.go:177] * Pulling base image ...
2022-10-10 15:03:13.210 | . * Pulling base image ...
... (lots of logs from pulling images removed as I hit max char limit)
2022-10-10 15:04:10.436 | . * Creating docker container (CPUs=2, Memory=3900MB) ...
2022-10-10 15:04:10.436 | . I1010 13:04:09.178221 24138 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
2022-10-10 15:04:10.436 | . I1010 13:04:09.178329 24138 client.go:168] LocalClient.Create starting
2022-10-10 15:04:10.436 | . I1010 13:04:09.179098 24138 client.go:171] LocalClient.Create took 685.72µs
2022-10-10 15:04:12.358 | . I1010 13:04:11.181303 24138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
2022-10-10 15:04:12.358 | . I1010 13:04:11.181622 24138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
2022-10-10 15:04:12.358 | . W1010 13:04:11.224149 24138 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
2022-10-10 15:04:12.358 | . I1010 13:04:11.224495 24138 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
2022-10-10 15:04:12.358 | . stdout:
2022-10-10 15:04:12.358 | .
2022-10-10 15:04:12.358 | .
2022-10-10 15:04:12.358 | . stderr:
2022-10-10 15:04:12.358 | . Error: No such container: minikube
2022-10-10 15:04:12.618 | . I1010 13:04:11.501210 24138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
2022-10-10 15:04:12.618 | . W1010 13:04:11.542666 24138 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
2022-10-10 15:04:12.618 | . I1010 13:04:11.542848 24138 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
(repeated logs removed here due to char limit)
2022-10-10 15:04:15.676 | . stdout:
2022-10-10 15:04:15.676 | .
2022-10-10 15:04:15.676 | .
2022-10-10 15:04:15.676 | . stderr:
2022-10-10 15:04:15.676 | . Error: No such container: minikube
2022-10-10 15:04:15.676 | . I1010 13:04:14.569874 24138 start.go:128] duration metric: createHost completed in 5.395396901s
2022-10-10 15:04:15.676 | . I1010 13:04:14.569895 24138 start.go:83] releasing machines lock for "minikube", held for 5.396481666s
2022-10-10 15:04:15.676 | . W1010 13:04:14.569998 24138 start.go:602] error starting host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
2022-10-10 15:04:15.676 | . I1010 13:04:14.570230 24138 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
2022-10-10 15:04:15.676 | . W1010 13:04:14.609043 24138 cli_runner.go:211] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
2022-10-10 15:04:15.676 | . I1010 13:04:14.609181 24138 delete.go:46] couldn't inspect container "minikube" before deleting: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
2022-10-10 15:04:15.676 | . stdout:
2022-10-10 15:04:15.676 | .
2022-10-10 15:04:15.676 | .
2022-10-10 15:04:15.676 | . stderr:
2022-10-10 15:04:15.676 | . Error: No such container: minikube
2022-10-10 15:04:15.676 | . I1010 13:04:14.611918 24138 cli_runner.go:164] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
2022-10-10 15:04:15.676 | . W1010 13:04:14.649788 24138 cli_runner.go:211] sudo -n podman container inspect minikube --format={{.State.Status}} returned with exit code 1
2022-10-10 15:04:15.676 | . I1010 13:04:14.649837 24138 delete.go:46] couldn't inspect container "minikube" before deleting: unknown state "minikube": sudo -n podman container inspect minikube --format={{.State.Status}}: exit status 1
2022-10-10 15:04:15.676 | . stdout:
2022-10-10 15:04:15.676 | .
2022-10-10 15:04:15.676 | . stderr:
2022-10-10 15:04:15.676 | . sudo: a password is required
2022-10-10 15:04:15.676 | . W1010 13:04:14.649922 24138 start.go:607] delete host: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
2022-10-10 15:04:15.676 | . W1010 13:04:14.650376 24138 out.go:239] ! StartHost failed, but will try again: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
2022-10-10 15:04:15.676 | . ! StartHost failed, but will try again: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
2022-10-10 15:04:15.676 | . I1010 13:04:14.650435 24138 start.go:617] Will try again in 5 seconds ...
2022-10-10 15:04:20.953 | . I1010 13:04:19.653290 24138 start.go:364] acquiring machines lock for minikube: {Name:mke10511c9cb3816f0997f9cfc8a1716887d51cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
2022-10-10 15:04:20.953 | . I1010 13:04:19.653711 24138 start.go:368] acquired machines lock for "minikube" in 296.18µs
2022-10-10 15:04:20.954 | . I1010 13:04:19.653797 24138 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34#sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/builder:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.0 ContainerRuntime:docker ControlPlane:true Worker:true}
2022-10-10 15:04:20.954 | . I1010 13:04:19.654129 24138 start.go:125] createHost starting for "" (driver="docker")
2022-10-10 15:04:20.954 | . I1010 13:04:19.656399 24138 out.go:204] * Creating docker container (CPUs=2, Memory=3900MB) ...
2022-10-10 15:04:20.954 | . * Creating docker container (CPUs=2, Memory=3900MB) ...
2022-10-10 15:04:20.954 | . I1010 13:04:19.656656 24138 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
2022-10-10 15:04:20.954 | . I1010 13:04:19.656760 24138 client.go:168] LocalClient.Create starting
2022-10-10 15:04:20.954 | . I1010 13:04:19.656936 24138 client.go:171] LocalClient.Create took 155.782µs
2022-10-10 15:04:22.871 | . I1010 13:04:21.657412 24138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
2022-10-10 15:04:22.871 | . I1010 13:04:21.658965 24138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
2022-10-10 15:04:22.871 | . W1010 13:04:21.696755 24138 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
2022-10-10 15:04:22.871 | . I1010 13:04:21.696968 24138 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
2022-10-10 15:04:22.871 | . stdout:
2022-10-10 15:04:22.871 | .
2022-10-10 15:04:22.871 | .
2022-10-10 15:04:22.871 | . stderr:
2022-10-10 15:04:22.871 | . Error: No such container: minikube
2022-10-10 15:04:22.871 | . I1010 13:04:21.897543 24138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
2022-10-10 15:04:22.871 | . W1010 13:04:21.936224 24138 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
2022-10-10 15:04:22.871 | . I1010 13:04:21.936374 24138 retry.go:31] will retry after 380.704736ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
(lines removed due to char limit)
2022-10-10 15:04:24.531 | . stdout:
2022-10-10 15:04:24.531 | .
2022-10-10 15:04:24.531 | .
2022-10-10 15:04:24.531 | . stderr:
2022-10-10 15:04:24.531 | . Error: No such container: minikube
2022-10-10 15:04:24.790 | . I1010 13:04:23.735213 24138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
2022-10-10 15:04:24.791 | . W1010 13:04:23.774692 24138 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
2022-10-10 15:04:24.791 | . I1010 13:04:23.774864 24138 retry.go:31] will retry after 545.000538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
2022-10-10 15:04:24.791 | . stdout:
2022-10-10 15:04:24.791 | .
2022-10-10 15:04:24.791 | .
2022-10-10 15:04:24.791 | . stderr:
2022-10-10 15:04:24.791 | . Error: No such container: minikube
2022-10-10 15:04:25.361 | . I1010 13:04:24.320916 24138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
2022-10-10 15:04:25.361 | . W1010 13:04:24.359812 24138 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
2022-10-10 15:04:25.361 | . I1010 13:04:24.359995 24138 retry.go:31] will retry after 660.685065ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
2022-10-10 15:04:26.188 | . stdout:
2022-10-10 15:04:26.188 | .
2022-10-10 15:04:26.188 | .
2022-10-10 15:04:26.188 | . stderr:
2022-10-10 15:04:26.188 | . Error: No such container: minikube
2022-10-10 15:04:26.188 | .
2022-10-10 15:04:26.188 | . W1010 13:04:25.061915 24138 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
2022-10-10 15:04:26.188 | . stdout:
2022-10-10 15:04:26.188 | .
2022-10-10 15:04:26.188 | .
2022-10-10 15:04:26.188 | . stderr:
2022-10-10 15:04:26.188 | . Error: No such container: minikube
2022-10-10 15:04:26.188 | . I1010 13:04:25.061939 24138 start.go:128] duration metric: createHost completed in 5.407782504s
2022-10-10 15:04:26.188 | . I1010 13:04:25.061969 24138 start.go:83] releasing machines lock for "minikube", held for 5.408232309s
2022-10-10 15:04:26.188 | . W1010 13:04:25.062386 24138 out.go:239] * Failed to start docker container. Running "minikube delete" may fix it: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
2022-10-10 15:04:26.188 | . * Failed to start docker container. Running "minikube delete" may fix it: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
2022-10-10 15:04:26.188 | . I1010 13:04:25.064746 24138 out.go:177]
2022-10-10 15:04:26.188 | .
2022-10-10 15:04:26.188 | . W1010 13:04:25.066259 24138 out.go:239] X Exiting due to GUEST_PROVISION_ACQUIRE_LOCK: Failed to start host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
2022-10-10 15:04:26.188 | . X Exiting due to GUEST_PROVISION_ACQUIRE_LOCK: Failed to start host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
2022-10-10 15:04:26.188 | . W1010 13:04:25.066434 24138 out.go:239] * Suggestion: Please try purging minikube using `minikube delete --all --purge`
2022-10-10 15:04:26.188 | . * Suggestion: Please try purging minikube using `minikube delete --all --purge`
2022-10-10 15:04:26.188 | . W1010 13:04:25.066568 24138 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11022
2022-10-10 15:04:26.188 | . * Related issue: https://github.com/kubernetes/minikube/issues/11022
2022-10-10 15:04:26.188 | . I1010 13:04:25.067905 24138 out.go:177]
I have tried running just the Docker image and it works fine. I've tried purging Minikube as the output recommends, but that has not solved the issue. I've also tried setting the MINIKUBE_HOME variable. I'm pretty new to Kubernetes on CentOS 7, so any advice would be greatly appreciated.

Fixed by setting the MINIKUBE_HOME variable to a location outside of the home directory. This GitHub issue comment helped in solving this: https://github.com/kubernetes/minikube/issues/11022#issuecomment-848387322
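For reference, a minimal sketch of that workaround; the /space/minikube path is only an example of a writable directory outside the home directory, not something taken from the issue:
# Example only: any writable directory outside $HOME should do
export MINIKUBE_HOME=/space/minikube
mkdir -p "$MINIKUBE_HOME"
# Start clean, then retry with the docker driver
minikube delete --all --purge
minikube start --driver=docker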

Related

Could not successfully bind to port 2181

I'm following https://github.com/PacktPublishing/Apache-Kafka-Series---Kafka-Connect-Hands-on-Learning; I have the docker-compose file below and am using a Mac.
version: '2'
services:
  # this is our kafka cluster.
  kafka-cluster:
    image: landoop/fast-data-dev:cp3.3.0
    environment:
      ADV_HOST: localhost         # Change to 192.168.99.100 if using Docker Toolbox
      RUNTESTS: 0                 # Disable running tests so the cluster starts faster
    ports:
      - 2181:2181                 # Zookeeper
      - 3030:3030                 # Landoop UI
      - 8081-8083:8081-8083       # REST Proxy, Schema Registry, Kafka Connect ports
      - 9581-9585:9581-9585       # JMX Ports
      - 9092:9092                 # Kafka Broker
and when I run
docker-compose up kafka-cluster
[+] Running 1/0
⠿ Container code-kafka-cluster-1 Created 0.0s
Attaching to code-kafka-cluster-1
code-kafka-cluster-1 | Setting advertised host to 127.0.0.1.
code-kafka-cluster-1 | runtime: failed to create new OS thread (have 2 already; errno=22)
code-kafka-cluster-1 | fatal error: newosproc
code-kafka-cluster-1 |
code-kafka-cluster-1 | runtime stack:
code-kafka-cluster-1 | runtime.throw(0x512269, 0x9)
code-kafka-cluster-1 | /usr/lib/go/src/runtime/panic.go:566 +0x95
code-kafka-cluster-1 | runtime.newosproc(0xc420026000, 0xc420035fc0)
code-kafka-cluster-1 | /usr/lib/go/src/runtime/os_linux.go:160 +0x194
code-kafka-cluster-1 | runtime.newm(0x5203a0, 0x0)
code-kafka-cluster-1 | /usr/lib/go/src/runtime/proc.go:1572 +0x132
code-kafka-cluster-1 | runtime.main.func1()
code-kafka-cluster-1 | /usr/lib/go/src/runtime/proc.go:126 +0x36
code-kafka-cluster-1 | runtime.systemstack(0x593600)
code-kafka-cluster-1 | /usr/lib/go/src/runtime/asm_amd64.s:298 +0x79
code-kafka-cluster-1 | runtime.mstart()
code-kafka-cluster-1 | /usr/lib/go/src/runtime/proc.go:1079
code-kafka-cluster-1 |
code-kafka-cluster-1 | goroutine 1 [running]:
code-kafka-cluster-1 | runtime.systemstack_switch()
code-kafka-cluster-1 | /usr/lib/go/src/runtime/asm_amd64.s:252 fp=0xc420020768 sp=0xc420020760
code-kafka-cluster-1 | runtime.main()
code-kafka-cluster-1 | /usr/lib/go/src/runtime/proc.go:127 +0x6c fp=0xc4200207c0 sp=0xc420020768
code-kafka-cluster-1 | runtime.goexit()
code-kafka-cluster-1 | /usr/lib/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4200207c8 sp=0xc4200207c0
code-kafka-cluster-1 | Could not successfully bind to port 2181. Maybe some other service
code-kafka-cluster-1 | in your system is using it? Please free the port and try again.
code-kafka-cluster-1 | Exiting.
code-kafka-cluster-1 exited with code 1
Note: running sudo lsof -i :2181 shows no output.
The landoop/fast-data-dev image does not work on the arm64 Apple M1 chip.
You can fix the problem by updating the Dockerfile, as described here:
https://github.com/lensesio/fast-data-dev/issues/175#issuecomment-947001807
Change the Zookeeper port mapping as below:
ports:
  - 2182:2181   # Zookeeper
You can build a new Docker image and run it with the following commands:
git clone https://github.com/faberchri/fast-data-dev.git
cd fast-data-dev
docker build -t faberchri/fast-data-dev .
docker run --rm -p 3030:3030 faberchri/fast-data-dev
After looking into Namig Aliyev's answer, here is what worked for me.
Let's say your working directory is kafka, and inside it you have your docker-compose.yml file.
Follow these steps to reproduce the same results (a condensed sketch follows after the steps):
git clone https://github.com/faberchri/fast-data-dev.git
Update the docker-compose.yml file: in the kafka-cluster service, replace the image parameter line with "build: ./fast-data-dev/"
docker-compose run kafka-cluster
Wait a couple of minutes and it should work and be accessible via:
http://localhost:3030/
This is what worked for me.
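A condensed sketch of those steps; the sed expression assumes the image line reads exactly image: landoop/fast-data-dev:cp3.3.0 as in the compose file above, so adjust it if yours differs:
# Clone the M1-compatible fork next to docker-compose.yml
git clone https://github.com/faberchri/fast-data-dev.git
# Swap the prebuilt image for a local build of the fork:
#   image: landoop/fast-data-dev:cp3.3.0  ->  build: ./fast-data-dev/
sed -i.bak 's|image: landoop/fast-data-dev:cp3.3.0|build: ./fast-data-dev/|' docker-compose.yml
# Build and start the service; the UI should come up on http://localhost:3030/
docker-compose run kafka-cluster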
The error suggests you're already running something else on port 2181. So either stop that, or remove the port mapping, since you shouldn't be connecting to Zookeeper anyway to use Kafka. As of the latest Kafka versions (which I doubt the linked course is using), the --zookeeper flags have been removed from the Kafka CLI tools.
Another solution would be to not use the Landoop container; plenty of other Docker Compose files for Kafka exist on the web.
Overall, I'd suggest not using Docker at all for developing a Kafka Connector.

minikube status Unknown, Windows 10, docker

I was trying to open the dashboard, which previously worked fine.
Now, running minikube dashboard, I get:
λ minikube dashboard
X Exiting due to GUEST_STATUS: state: unknown state "minikube": docker container inspect minikube --format=: exit status 1
stdout:
stderr:
Error: No such container: minikube
*
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please attach the following file to the GitHub issue: │
│ * - C:\Users\JOSELU~1\AppData\Local\Temp\minikube_dashboard_dc37e18dac9641f7847258501d0e823fdfb0604c_0.log │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
With minikube status
λ minikube status
E0604 13:13:20.260421 27600 status.go:258] status error: host: state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: minikube
E0604 13:13:20.261425 27600 status.go:261] The "minikube" host does not exist!
minikube
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent
With the command minikube profile list
λ minikube profile list
|----------|-----------|---------|--------------|------|---------|---------|-------|
| Profile | VM Driver | Runtime | IP | Port | Version | Status | Nodes |
|----------|-----------|---------|--------------|------|---------|---------|-------|
| minikube | docker | docker | 192.168.49.2 | 8443 | v1.20.2 | Unknown | 1 |
|----------|-----------|---------|--------------|------|---------|---------|-------|
Now...
What could be happening?
What would be the best solution?
Thanks...
Remove unused data:
docker system prune
Clear minikube's local state:
minikube delete
Start the cluster:
minikube start --driver=<driver_name>
(In your case the driver name is docker, as shown in the minikube profile list output you shared.)
Check the cluster status:
minikube status
Use the following documentation for more information:
https://docs.docker.com/engine/reference/commandline/system_prune/#examples
https://v1-18.docs.kubernetes.io/docs/tasks/tools/install-minikube/
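Put together for the docker driver shown in your profile list, the recovery sequence looks like this (a sketch; note that docker system prune removes all stopped containers and unused images, so only run it if that is acceptable):
docker system prune               # remove unused Docker data (answer the prompt)
minikube delete                   # clear minikube's local state
minikube start --driver=docker    # recreate the cluster with the docker driver
minikube status                   # confirm the control plane is Running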

CRIT 002 Failed to initialize local MSP: could not load a valid signer certificate from directory /var/hyperledger/orderer/msp/signcerts

I am trying this tutorial: https://github.com/grepruby/ERC20-Token-On-Hyperledger
node: v8.11.4
go: go1.12.6 darwin/amd64
hyperledgerfabric: ? (maybe 1.2.1)
Python 3.4
When the './buildERC20TokenNetwork.sh up' command is executed, this error occurs:
Error: failed to create deliver client: orderer client failed to connect to orderer.techracers.com:7050: failed to create new connection: context deadline exceeded
!!!!!!!!!!!!!!! Channel creation failed !!!!!!!!!!!!!!!!
========= ERROR !!! FAILED to execute End-2-End Scenario ===========
Details:
./buildERC20TokenNetwork.sh up
Starting for channel 'mychannel' with CLI timeout of '10' seconds and CLI delay of '3' seconds
Continue? [Y/n] Y
proceeding ...
./buildERC20TokenNetwork.sh: line 46: /Users/ogasawara/hyperledger-fabric/ERC20-Token-On-Hyperledger/network/../bin/configtxlator: cannot execute binary file
LOCAL_VERSION=
DOCKER_IMAGE_VERSION=1.2.1
=================== WARNING ===================
Local fabric binaries and docker images are
out of sync. This may cause problems.
===============================================
peer1.org2.techracers.com is up-to-date
Starting orderer.techracers.com ...
peer0.org2.techracers.com is up-to-date
peer1.org1.techracers.com is up-to-date
Starting orderer.techracers.com ... done
cli is up-to-date
____ _____ _ ____ _____
/ ___| |_ _| / \ | _ \ |_ _|
\___ \ | | / _ \ | |_) | | |
___) | | | / ___ \ | _ < | |
|____/ |_| /_/ \_\ |_| \_\ |_|
Channel name : mychannel
Creating channel...
+ peer channel create -o orderer.techracers.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/techracers.com/orderers/orderer.techracers.com/msp/tlscacerts/tlsca.techracers.com-cert.pem
+ res=1
+ set +x
Error: failed to create deliver client: orderer client failed to connect to orderer.techracers.com:7050: failed to create new connection: context deadline exceeded
!!!!!!!!!!!!!!! Channel creation failed !!!!!!!!!!!!!!!!
========= ERROR !!! FAILED to execute End-2-End Scenario ===========
ERROR !!!! Test failed
Docker container "orderer.techracers.com" is Exited, that causes the connection failer.
I checked docker cantainer log.
initializeLocalMsp -> CRIT 002 Failed to initialize local MSP: could not load a valid signer certificate from directory /var/hyperledger/orderer/msp/signcerts: stat /var/hyperledger/orderer/msp/signcerts: no such file or directory
How can I bring up the container "orderer.techracers.com"?
It looks like the real error is further up:
./buildERC20TokenNetwork.sh: line 46: /Users/ogasawara/hyperledger-fabric/ERC20-Token-On-Hyperledger/network/../bin/configtxlator: cannot execute binary file
So either you do not have the binaries installed or they are not where they are expected.
(I would guess that the error you are seeing with the orderer is because the config hasn't been properly set up beforehand.)
Fabric 1.2.1 is quite dated now; you might be better off starting with Fabric 1.4 and working with the samples that come with Fabric and the standard documentation.
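One quick diagnostic, since LOCAL_VERSION came back empty and configtxlator reported "cannot execute binary file": check that the binaries exist and are built for your platform. This is only a sketch using standard tools; the path comes from the error message above:
# From the repository root (the script resolves network/../bin)
ls -l bin/
file bin/configtxlator    # on a Mac you would expect a Mach-O 64-bit x86_64 executable
uname -sm                 # e.g. Darwin x86_64
# If the binaries are Linux builds or missing, re-download the Fabric binaries
# for your platform as described in the Hyperledger Fabric documentation.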

Kubernetes Multi Master setup

[SOLVED] Flannel doesn't work with this setup, so I changed to Weave Net, which works if you don't want to provide the pod-network-cidr ("10.244.0.0/16") setting in the config.yaml.
I want to make a multi-master setup with Kubernetes and have tried a lot of different ways; even the last approach I took doesn't work. The problem is that the DNS and the flannel network plugin don't want to start; they end up in CrashLoopBackOff status every time. The way I do it is listed below.
First, create an external etcd cluster with this command on every node (only the addresses change):
nohup etcd --name kube1 --initial-advertise-peer-urls http://192.168.100.110:2380 \
--listen-peer-urls http://192.168.100.110:2380 \
--listen-client-urls http://192.168.100.110:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://192.168.100.110:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster kube1=http://192.168.100.110:2380,kube2=http://192.168.100.108:2380,kube3=http://192.168.100.104:2380 \
--initial-cluster-state new &
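Before moving on to kubeadm init, it is worth confirming that the three members actually formed a healthy cluster. A small sanity check, assuming the matching etcdctl binary is installed alongside etcd (with etcd 3.3 the v2 API is the default, which is what cluster-health uses):
# Run from any node; endpoints list all three members
etcdctl --endpoints http://192.168.100.110:2379,http://192.168.100.108:2379,http://192.168.100.104:2379 cluster-health
etcdctl --endpoints http://192.168.100.110:2379 member list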
Then I created a config.yaml file for the kubeadm init command.
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.100.110
etcd:
  endpoints:
  - "http://192.168.100.110:2379"
  - "http://192.168.100.108:2379"
  - "http://192.168.100.104:2379"
apiServerExtraArgs:
  apiserver-count: "3"
apiServerCertSANs:
- "192.168.100.110"
- "192.168.100.108"
- "192.168.100.104"
- "127.0.0.1"
token: "64bhyh.1vjuhruuayzgtykv"
tokenTTL: "0"
Start command: kubeadm init --config /root/config.yaml
Then copy /etc/kubernetes/pki and the config to the other nodes and start the other master nodes the same way. But it doesn't work.
So what is the right way to initialize a multi-master Kubernetes cluster, or why does my flannel network not start?
Status from a flannel pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 8m kubelet, kube2 MountVolume.SetUp succeeded for volume "run"
Normal SuccessfulMountVolume 8m kubelet, kube2 MountVolume.SetUp succeeded for volume "cni"
Normal SuccessfulMountVolume 8m kubelet, kube2 MountVolume.SetUp succeeded for volume "flannel-token-swdhl"
Normal SuccessfulMountVolume 8m kubelet, kube2 MountVolume.SetUp succeeded for volume "flannel-cfg"
Normal Pulling 8m kubelet, kube2 pulling image "quay.io/coreos/flannel:v0.10.0-amd64"
Normal Pulled 8m kubelet, kube2 Successfully pulled image "quay.io/coreos/flannel:v0.10.0-amd64"
Normal Created 8m kubelet, kube2 Created container
Normal Started 8m kubelet, kube2 Started container
Normal Pulled 8m (x4 over 8m) kubelet, kube2 Container image "quay.io/coreos/flannel:v0.10.0-amd64" already present on machine
Normal Created 8m (x4 over 8m) kubelet, kube2 Created container
Normal Started 8m (x4 over 8m) kubelet, kube2 Started container
Warning BackOff 3m (x23 over 8m) kubelet, kube2 Back-off restarting failed container
etcd version
etcd --version
etcd Version: 3.3.6
Git SHA: 932c3c01f
Go Version: go1.9.6
Go OS/Arch: linux/amd64
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Last lines in nohup from etcd
2018-06-06 19:44:28.441304 I | etcdserver: name = kube1
2018-06-06 19:44:28.441327 I | etcdserver: data dir = kube1.etcd
2018-06-06 19:44:28.441331 I | etcdserver: member dir = kube1.etcd/member
2018-06-06 19:44:28.441334 I | etcdserver: heartbeat = 100ms
2018-06-06 19:44:28.441336 I | etcdserver: election = 1000ms
2018-06-06 19:44:28.441338 I | etcdserver: snapshot count = 100000
2018-06-06 19:44:28.441343 I | etcdserver: advertise client URLs = http://192.168.100.110:2379
2018-06-06 19:44:28.441346 I | etcdserver: initial advertise peer URLs = http://192.168.100.110:2380
2018-06-06 19:44:28.441352 I | etcdserver: initial cluster = kube1=http://192.168.100.110:2380,kube2=http://192.168.100.108:2380,kube3=http://192.168.100.104:2380
2018-06-06 19:44:28.443825 I | etcdserver: starting member a4df4f699dd66909 in cluster 73f203cf831df407
2018-06-06 19:44:28.443843 I | raft: a4df4f699dd66909 became follower at term 0
2018-06-06 19:44:28.443848 I | raft: newRaft a4df4f699dd66909 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2018-06-06 19:44:28.443850 I | raft: a4df4f699dd66909 became follower at term 1
2018-06-06 19:44:28.447834 W | auth: simple token is not cryptographically signed
2018-06-06 19:44:28.448857 I | rafthttp: starting peer 9e0f381e79b9b9dc...
2018-06-06 19:44:28.448869 I | rafthttp: started HTTP pipelining with peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450791 I | rafthttp: started peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450803 I | rafthttp: added peer 9e0f381e79b9b9dc
2018-06-06 19:44:28.450809 I | rafthttp: starting peer fc9c29e972d01e69...
2018-06-06 19:44:28.450816 I | rafthttp: started HTTP pipelining with peer fc9c29e972d01e69
2018-06-06 19:44:28.453543 I | rafthttp: started peer fc9c29e972d01e69
2018-06-06 19:44:28.453559 I | rafthttp: added peer fc9c29e972d01e69
2018-06-06 19:44:28.453570 I | etcdserver: starting server... [version: 3.3.6, cluster version: to_be_decided]
2018-06-06 19:44:28.455414 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (writer)
2018-06-06 19:44:28.455431 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (writer)
2018-06-06 19:44:28.455445 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (stream MsgApp v2 reader)
2018-06-06 19:44:28.455578 I | rafthttp: started streaming with peer 9e0f381e79b9b9dc (stream Message reader)
2018-06-06 19:44:28.455697 I | rafthttp: started streaming with peer fc9c29e972d01e69 (writer)
2018-06-06 19:44:28.455704 I | rafthttp: started streaming with peer fc9c29e972d01e69 (writer)
If you do not have any hosting preferences and you are OK with creating the cluster on AWS, then it can be done very easily using kops.
https://github.com/kubernetes/kops
With kops you can easily configure the autoscaling group for the masters and specify the number of masters and nodes required for your cluster.
Flannel doesn't work with this setup, so I changed to Weave Net, which works if you don't want to provide the pod-network-cidr ("10.244.0.0/16") setting in the config.yaml.
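For completeness, a hedged sketch of how Weave Net was typically installed as the pod network around Kubernetes 1.10. The cloud.weave.works URL is the install command that was documented at the time and may no longer be served, so treat it as an assumption and check the current Weave Net documentation for the manifest location:
# Apply the Weave Net manifest matching the running Kubernetes version
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# Watch kube-dns and the weave-net DaemonSet leave CrashLoopBackOff
kubectl -n kube-system get pods -o wide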

Can't Ping a Pod after Ubuntu cluster setup

I have followed the most recent instructions (updated 7th May '15) to set up a cluster on Ubuntu** with etcd and flanneld. But I'm having trouble with the network... it seems to be in some kind of broken state.
**Note: I updated the config script so that it installed 0.16.2. Also, kubectl get minions returned nothing at first, but after a sudo service kube-controller-manager restart they appeared.
This is my setup:
| ServerName | Public IP | Private IP |
------------------------------------------
| KubeMaster | 107.x.x.32 | 10.x.x.54 |
| KubeNode1 | 104.x.x.49 | 10.x.x.55 |
| KubeNode2 | 198.x.x.39 | 10.x.x.241 |
| KubeNode3 | 104.x.x.52 | 10.x.x.190 |
| MongoDev1 | 162.x.x.132 | 10.x.x.59 |
| MongoDev2 | 104.x.x.103 | 10.x.x.60 |
From any machine I can ping any other machine... it's when I create pods and services that I start getting issues.
Pod
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED
auth-dev-ctl-6xah8 172.16.37.7 sis-auth leportlabs/sisauth:latestdev 104.x.x.52/104.x.x.52 environment=dev,name=sis-auth Running 3 hours
So this pod has been spun up on KubeNode3... if I try to ping it from any machine other than KubeNode3, I get a Destination Net Unreachable error. E.g.
# ping 172.16.37.7
PING 172.16.37.7 (172.16.37.7) 56(84) bytes of data.
From 129.250.204.117 icmp_seq=1 Destination Net Unreachable
I can call etcdctl get /coreos.com/network/config on all four and get back {"Network":"172.16.0.0/16"}.
I'm not sure where to look from there. Can anyone help me out here?
Supporting Info
On the master node:
# ps -ef | grep kube
root 4729 1 0 May07 ? 00:06:29 /opt/bin/kube-scheduler --logtostderr=true --master=127.0.0.1:8080
root 4730 1 1 May07 ? 00:21:24 /opt/bin/kube-apiserver --address=0.0.0.0 --port=8080 --etcd_servers=http://127.0.0.1:4001 --logtostderr=true --portal_net=192.168.3.0/24
root 5724 1 0 May07 ? 00:10:25 /opt/bin/kube-controller-manager --master=127.0.0.1:8080 --machines=104.x.x.49,198.x.x.39,104.x.x.52 --logtostderr=true
# ps -ef | grep etcd
root 4723 1 2 May07 ? 00:32:46 /opt/bin/etcd -name infra0 -initial-advertise-peer-urls http://107.x.x.32:2380 -listen-peer-urls http://107.x.x.32:2380 -initial-cluster-token etcd-cluster-1 -initial-cluster infra0=http://107.x.x.32:2380,infra1=http://104.x.x.49:2380,infra2=http://198.x.x.39:2380,infra3=http://104.x.x.52:2380 -initial-cluster-state new
On a node:
# ps -ef | grep kube
root 10878 1 1 May07 ? 00:16:22 /opt/bin/kubelet --address=0.0.0.0 --port=10250 --hostname_override=104.x.x.49 --api_servers=http://107.x.x.32:8080 --logtostderr=true --cluster_dns=192.168.3.10 --cluster_domain=kubernetes.local
root 10882 1 0 May07 ? 00:05:23 /opt/bin/kube-proxy --master=http://107.x.x.32:8080 --logtostderr=true
# ps -ef | grep etcd
root 10873 1 1 May07 ? 00:14:09 /opt/bin/etcd -name infra1 -initial-advertise-peer-urls http://104.x.x.49:2380 -listen-peer-urls http://104.x.x.49:2380 -initial-cluster-token etcd-cluster-1 -initial-cluster infra0=http://107.x.x.32:2380,infra1=http://104.x.x.49:2380,infra2=http://198.x.x.39:2380,infra3=http://104.x.x.52:2380 -initial-cluster-state new
#ps -ef | grep flanneld
root 19560 1 0 May07 ? 00:00:01 /opt/bin/flanneld
So I noticed that the flannel configuration (/run/flannel/subnet.env) was different from what Docker was starting up with (I don't have a clue how they got out of sync).
# ps -ef | grep docker
root 19663 1 0 May07 ? 00:09:20 /usr/bin/docker -d -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.85.1/24 --mtu=1472
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=172.16.60.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
Note that the docker --bip=172.16.85.1/24 was different to the flannel subnet FLANNEL_SUBNET=172.16.60.1/24.
So naturally I changed /etc/default/docker to reflect the new value.
DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.60.1/24 --mtu=1472"
But now a sudo service docker restart wasn't erroring out... so looking at /var/log/upstart/docker.log I could see the following
FATA[0000] Shutting down daemon due to errors: Bridge ip (172.16.85.1) does not match existing bridge configuration 172.16.60.1
So the final piece to the puzzle was deleting the old bridge and restarting docker...
# sudo brctl delbr docker0
# sudo service docker start
If sudo brctl delbr docker0 returns "bridge docker0 is still up; can't delete it", run ifconfig docker0 down and try again.
Please try this:
ip link del docker0
systemctl restart flanneld
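After deleting the bridge and restarting flanneld (and then Docker, so it recreates docker0), you can verify that flannel and Docker agree on the subnet again; a small diagnostic sketch:
# Subnet that flannel leased for this host
cat /run/flannel/subnet.env            # e.g. FLANNEL_SUBNET=172.16.60.1/24
# Address actually configured on the docker0 bridge; it should fall inside that subnet
ip addr show docker0 | grep "inet "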
