Minikube shared volume does not show files after some time - docker driver

I have to share the local .ssh directory contents with a pod. I searched for that and learned from one of the posts to pass the directory at start time with --mount-string.
$ minikube start --mount-string="$HOME/.ssh/:/ssh-directory" --mount
πŸ˜„ minikube v1.9.2 on Darwin 10.14.6
✨ Using the docker driver based on existing profile
πŸ‘ Starting control plane node m01 in cluster minikube
🚜 Pulling base image ...
πŸ”„ Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
β–ͺ kubeadm.pod-network-cidr=10.244.0.0/16
E0426 23:44:18.447396 80170 kubeadm.go:331] Overriding stale ClientConfig host https://127.0.0.1:32810 with https://127.0.0.1:32813
πŸ“ Creating mount /Users/myhome/.ssh/:/ssh-directory ...
🌟 Enabling addons: default-storageclass, storage-provisioner
πŸ„ Done! kubectl is now configured to use "minikube"
❗ /usr/local/bin/kubectl is v1.15.5, which may be incompatible with Kubernetes v1.18.0.
πŸ’‘ You can also use 'minikube kubectl -- get pods' to invoke a matching version
When I check the Docker container for the given Minikube, it returns:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad64f642b63 gcr.io/k8s-minikube/kicbase:v0.0.8 "/usr/local/bin/entr…" 3 weeks ago Up 45 seconds 127.0.0.1:32815->22/tcp, 127.0.0.1:32814->2376/tcp, 127.0.0.1:32813->8443/tcp minikube
And I check whether the .ssh directory contents are there or not:
$ docker exec -it 5ad64f642b63 ls /ssh-directory
id_rsa id_rsa.pub known_hosts
I have a deployment YAML as:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    stack: api
    app: api-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-web
  template:
    metadata:
      labels:
        app: api-web
    spec:
      containers:
        - name: api-web-pod
          image: tiangolo/uwsgi-nginx-flask
          ports:
            - name: api-web-port
              containerPort: 80
          envFrom:
            - secretRef:
                name: api-secrets
          volumeMounts:
            - name: ssh-directory
              mountPath: /app/.ssh
      volumes:
        - name: ssh-directory
          hostPath:
            path: /ssh-directory/
            type: Directory
When it runs, it gives an error for /ssh-directory.
$ kubectl describe pod/api-deployment-f65db9c6c-cwtvt
Name: api-deployment-f65db9c6c-cwtvt
Namespace: default
Priority: 0
Node: minikube/172.17.0.2
Start Time: Sat, 02 May 2020 23:07:51 -0500
Labels: app=api-web
pod-template-hash=f65db9c6c
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/api-deployment-f65db9c6c
Containers:
api-web-pod:
Container ID:
Image: tiangolo/uwsgi-nginx-flask
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables from:
api-secrets Secret Optional: false
Environment: <none>
Mounts:
/app/.ssh from ssh-directory (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9shz5 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
ssh-directory:
Type: HostPath (bare host directory volume)
Path: /ssh-directory/
HostPathType: Directory
default-token-9shz5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9shz5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/api-deployment-f65db9c6c-cwtvt to minikube
Warning FailedMount 11m kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[ssh-directory], unattached volumes=[default-token-9shz5 ssh-directory]: timed out waiting for the condition
Warning FailedMount 2m13s (x4 over 9m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[ssh-directory], unattached volumes=[ssh-directory default-token-9shz5]: timed out waiting for the condition
Warning FailedMount 62s (x14 over 13m) kubelet, minikube MountVolume.SetUp failed for volume "ssh-directory" : hostPath type check failed: /ssh-directory/ is not a directory
When I check the content of /ssh-directory in the Docker container, it gives an I/O error.
$ docker exec -it 5ad64f642b63 ls /ssh-directory
ls: cannot access '/ssh-directory': Input/output error
I know there are default mount points for Minikube. As mentioned in https://minikube.sigs.k8s.io/docs/handbook/mount/,
+---------------+---------+--------------+------------------+
| Driver        | OS      | HostFolder   | VM               |
+---------------+---------+--------------+------------------+
| VirtualBox    | Linux   | /home        | /hosthome        |
+---------------+---------+--------------+------------------+
| VirtualBox    | macOS   | /Users       | /Users           |
+---------------+---------+--------------+------------------+
| VirtualBox    | Windows | C://Users    | /c/Users         |
+---------------+---------+--------------+------------------+
| VMware Fusion | macOS   | /Users       | /Users           |
+---------------+---------+--------------+------------------+
| KVM           | Linux   | Unsupported  |                  |
+---------------+---------+--------------+------------------+
| HyperKit      | Linux   | Unsupported  | (see NFS mounts) |
+---------------+---------+--------------+------------------+
But I installed minikube with brew install minikube and it set the driver to docker.
$ cat ~/.minikube/config/config.json
{
    "driver": "docker"
}
There is no mapping for the docker driver in the mount point table.
Initially, this directory has the files, but somehow, when I try to create the pod, they get deleted or something goes wrong.

While reproducing this on Ubuntu I encountered the exact same issue.
The directory indeed looked mounted but the files were missing, which led me to think that this is a general issue with mounting directories with the docker driver.
There is an open issue on GitHub about the same problem (mount directory empty) and an open feature request to mount host volumes into the docker driver.
Inspecting the minikube container shows no record of that mounted volume and confirms the information mentioned in the GitHub request that the only volume shared with the host as of now is the one mounted by default (that is, /var/lib/docker/volumes/minikube/_data mounted into minikube's /var directory).
$ docker inspect minikube
"Mounts": [
    {
        "Type": "volume",
        "Name": "minikube",
        "Source": "/var/lib/docker/volumes/minikube/_data",
        "Destination": "/var",
        "Driver": "local",
        "Mode": "z",
        "RW": true,
        "Propagation": ""
    }
]
As a workaround, you could copy your .ssh directory into the running minikube docker container with the following command:
docker cp $HOME/.ssh minikube:<DESIRED_DIRECTORY>
and then mount this desired directory into the pod.
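For example, a minimal sketch of the workaround; the /ssh-copy target path inside the minikube container is just an example name (docker cp creates it if it does not already exist):
$ docker cp $HOME/.ssh minikube:/ssh-copy
Then point the hostPath in the deployment at that directory instead:
      volumes:
        - name: ssh-directory
          hostPath:
            path: /ssh-copy
            type: Directory
With the docker driver the node is the minikube container itself, so hostPath resolves against it and the copied files become visible to the pod.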

Related

Why does a Kubernetes pod report `Insufficient memory` even if there is free memory on the host?

I am running minikube v1.15.1 on macOS and installed helm v3.4.1. I ran helm install elasticsearch elastic/elasticsearch --set resources.requests.memory=2Gi --set resources.limits.memory=4Gi --set replicas=1 to install Elasticsearch on the k8s cluster. The pod elasticsearch-master-0 is deployed but it is in Pending status.
When I run kubectl describe pod elasticsearch-master-0 it gives me the warning below:
Warning FailedScheduling 61s (x2 over 2m30s) default-scheduler 0/1 nodes are available: 1 Insufficient memory.
It says Insufficient memory but my host has at least 4GB of free memory. Does the memory issue mean that minikube doesn't have enough memory? If yes, how can I increase its memory?
I have increased the memory in minikube and restarted it but still have the same issue.
I did run minikube delete followed by minikube start. You can see in the output below that it is using 4 CPUs and 8GB of memory:
minikube v1.15.1 on Darwin 11.0.1
✨ Automatically selected the docker driver. Other choices: hyperkit, virtualbox
πŸ‘ Starting control plane node minikube in cluster minikube
πŸ”₯ Creating docker container (CPUs=4, Memory=8096MB) ...
🐳 Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
πŸ”Ž Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Below are the commands to get the CPUs and memory from the config:
$ minikube config get cpus
4
$ minikube config get memory
8096
Below is the output from metrics-server.
$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
minikube 466m 5% 737Mi 37%
$ kubectl top pod
W1125 20:11:23.232025 46719 top_pod.go:265] Metrics not available for pod default/elasticsearch-master-0, age: 34m3.231199s
error: Metrics not available for pod default/elasticsearch-master-0, age: 34m3.231199s
The full output of kubectl describe pod is:
$ kubectl describe pod elasticsearch-master-0
Name: elasticsearch-master-0
Namespace: default
Priority: 0
Node: <none>
Labels: app=elasticsearch-master
chart=elasticsearch
controller-revision-hash=elasticsearch-master-677c65788d
release=elasticsearch
statefulset.kubernetes.io/pod-name=elasticsearch-master-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/elasticsearch-master
Init Containers:
configure-sysctl:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.10.0
Port: <none>
Host Port: <none>
Command:
sysctl
-w
vm.max_map_count=262144
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kthrd (ro)
Containers:
elasticsearch:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.10.0
Ports: 9200/TCP, 9300/TCP
Host Ports: 0/TCP, 0/TCP
Limits:
cpu: 1
memory: 4Gi
Requests:
cpu: 1
memory: 2Gi
Readiness: exec [sh -c #!/usr/bin/env bash -e
# If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=green&timeout=1s" )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file
# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Elasticsearch Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no
http () {
local path="${1}"
local args="${2}"
set -- -XGET -s
if [ "$args" != "" ]; then
set -- "$#" $args
fi
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
set -- "$#" -u "${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
fi
curl --output /dev/null -k "$#" "http://127.0.0.1:9200${path}"
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, lets check the node is healthy'
HTTP_CODE=$(http "/" "-w %{http_code}")
RC=$?
if [[ ${RC} -ne 0 ]]; then
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
exit ${RC}
fi
# ready if HTTP code 200, 503 is tolerable if ES version is 6.x
if [[ ${HTTP_CODE} == "200" ]]; then
exit 0
elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
exit 0
else
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
exit 1
fi
else
echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
exit 1
fi
fi
] delay=10s timeout=5s period=10s #success=3 #failure=3
Environment:
node.name: elasticsearch-master-0 (v1:metadata.name)
cluster.initial_master_nodes: elasticsearch-master-0,
discovery.seed_hosts: elasticsearch-master-headless
cluster.name: elasticsearch
network.host: 0.0.0.0
ES_JAVA_OPTS: -Xmx1g -Xms1g
node.data: true
node.ingest: true
node.master: true
node.remote_cluster_client: true
Mounts:
/usr/share/elasticsearch/data from elasticsearch-master (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kthrd (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
elasticsearch-master:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elasticsearch-master-elasticsearch-master-0
ReadOnly: false
default-token-kthrd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kthrd
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 76s (x2 over 77s) default-scheduler 0/1 nodes are available: 1 Insufficient memory.
Minikube on Mac uses a virtual machine to host Kubernetes. This is separate from the host and restricts the available memory for the single-node cluster.
You can configure more memory for the VM using
minikube start --memory=4096
Minikube will pick up your memory settings on its first start, but if you previously launched without that option you need to run minikube delete and start again.
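If you prefer the setting to persist across restarts, a minimal sketch (8192 MB and 4 CPUs are just example values) is to store it in the minikube config before recreating the cluster:
$ minikube delete                  # the old node must be recreated for the new size to apply
$ minikube config set memory 8192  # persisted; equivalent to passing --memory on every start
$ minikube config set cpus 4
$ minikube start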
To check the resources that your pods/nodes are utilizing, you can enable metrics-server with minikube addons:
➜ ~ minikube addons enable metrics-server
🌟 The 'metrics-server' addon is enabled
You will have to wait a bit for the metrics to appear:
➜ ~ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
minikube 186m 4% 2344Mi 15%
➜ ~ kubectl top pod
NAME CPU(cores) MEMORY(bytes)
elasticsearch-master-0 6m 1272Mi

No Such Host: Docker daemon can't access the Kubernetes registry but wget on the same node can connect to the registry

I have an Alpine Linux based node on a single-node Kubernetes cluster (for testing). I have a private Docker registry installed within my cluster at docker-registry.default:5000. I can log in to the Alpine node, use wget and access my private Docker registry.
kubectl exec -it pod/nuclio-dashboard-5c5c48947b-lpgx8 -- /bin/sh
/ # wget -qO- https://docker:mypassword@docker-registry.default:5000/v2/_catalog
{"repositories":["nuclio/processor-helloworld3"]}
But I can't seem to access it using docker on the same pod. Both the client and server are 2019 builds.
kubectl exec -it pod/nuclio-dashboard-5c5c48947b-lpgx8 -- /bin/sh
/ # which docker
/usr/local/bin/docker
/ # docker login -u docker -p mypassword docker-registry.default:5000
Error response from daemon: Get https://docker-registry.default:5000/v2/: dial tcp: lookup docker-registry.default on 169.254.169.254:53: no such host
I can logon to the Docker Hub registry.
docker login -u my_hub_user -p my_hub_password
Login Succeeded
EDIT:
On kubectl describe pod nuclio-dashboard-5c5c48947b-lpgx8, we get:
kd pod/nuclio-dashboard-5c5c48947b-2dpnz
Name: nuclio-dashboard-5c5c48947b-2dpnz
Namespace: nuclio
Priority: 0
Node: gke-your-first-cluster-1-pool-1-fe915942-506h/10.128.0.30
Start Time: Tue, 31 Dec 2019 09:39:45 -0500
Labels: app=nuclio
nuclio.io/app=dashboard
nuclio.io/class=service
nuclio.io/name=nuclio-dashboard
pod-template-hash=5c5c48947b
release=nuclio
Annotations: nuclio.io/version: 1.3.4-amd64
Status: Running
IP: 10.4.0.9
Controlled By: ReplicaSet/nuclio-dashboard-5c5c48947b
Containers:
nuclio-dashboard:
Container ID: docker://4f358607618f89da911e191226313193e38ed5335a3e46c207eee16669f1dd46
Image: quay.io/nuclio/dashboard:1.3.4-amd64
Image ID: docker-pullable://quay.io/nuclio/dashboard@sha256:e6d94f7bf46601b2454a9e73ba292c62edac3d4684ea15057855af2277eab8a5
Port: 8070/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 31 Dec 2019 09:40:27 -0500
Ready: True
Restart Count: 0
Environment:
NUCLIO_DASHBOARD_REGISTRY_URL: <set to the key 'registry_url' of config map 'nuclio-registry-url'> Optional: true
NUCLIO_DASHBOARD_DEPLOYMENT_NAME: nuclio-dashboard
NUCLIO_CONTAINER_BUILDER_KIND: docker
NUCLIO_DASHBOARD_EXTERNAL_IP_ADDRESSES:
NUCLIO_DASHBOARD_HTTP_INGRESS_HOST_TEMPLATE:
Mounts:
/etc/nuclio/dashboard/registry-credentials from registry-credentials (ro)
/var/run/docker.sock from docker-sock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from nuclio-nuclio-token-d7fwp (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
docker-sock:
Type: HostPath (bare host directory volume)
Path: /var/run/docker.sock
HostPathType:
registry-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: nuclio-registry-credentials
Optional: true
nuclio-nuclio-token-d7fwp:
Type: Secret (a volume populated by a Secret)
SecretName: nuclio-nuclio-token-d7fwp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Kubernetes injects the internal DNS servers into the pod's /etc/resolv.conf file. That is why you can access the registry from the pod.
Usually, this DNS service is not exposed outside of the pod network.
When you use the docker command, you are on the host, and the host points to a different DNS server that can't resolve the internal service name of the registry.
To access the registry from your host, you need the following (see the sketch after these steps):
1) Expose the registry Service as NodePort or LoadBalancer (as you are in a test environment, use NodePort; see the Service docs).
2) Create a proper DNS entry to resolve the name to an IP (here the IP will be the node's IP in the case of a NodePort Service). As you have only one node, create an entry in the /etc/hosts file to resolve the registry FQDN.
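A minimal sketch, assuming the registry pods are labelled app: docker-registry in the default namespace and that nodePort 30500 is free (adjust the selector, namespace and ports to your setup):
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    app: docker-registry   # must match the labels on your registry pods
  ports:
    - port: 5000           # port the Service listens on
      targetPort: 5000     # port the registry container listens on
      nodePort: 30500      # exposed on every node's IP
Then, on the machine where you run docker, add a hosts entry pointing the registry name at the node (10.128.0.30 is the node IP from the describe output above) and log in via the NodePort:
10.128.0.30  docker-registry.default
$ docker login -u docker -p mypassword docker-registry.default:30500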

Minikube mounted host folders are not working

I am using Ubuntu 18 with minikube and VirtualBox and trying to mount the host's directory in order to get the input data my pod needs.
I found that minikube has issues with mounting host directories, but according to your OS and VM driver there are directories that are mounted by default.
I can't find those on my pods. They are simply not there.
I tried to create a persistent volume; it works and I can see it on my dashboard, but I can't mount it to the pod. I used this YAML to create the volume:
{
    "kind": "PersistentVolume",
    "apiVersion": "v1",
    "metadata": {
        "name": "pv0003",
        "selfLink": "/api/v1/persistentvolumes/pv0001",
        "uid": "28038976-9ee4-414d-8478-b312a24a6b94",
        "resourceVersion": "2030",
        "creationTimestamp": "2019-08-08T10:48:23Z",
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolume\",\"metadata\":{\"annotations\":{},\"name\":\"pv0001\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"capacity\":{\"storage\":\"5Gi\"},\"hostPath\":{\"path\":\"/data/pv0001/\"}}}\n"
        },
        "finalizers": [
            "kubernetes.io/pv-protection"
        ]
    },
    "spec": {
        "capacity": {
            "storage": "6Gi"
        },
        "hostPath": {
            "path": "/user/data",
            "type": ""
        },
        "accessModes": [
            "ReadWriteOnce"
        ],
        "persistentVolumeReclaimPolicy": "Retain",
        "volumeMode": "Filesystem"
    },
    "status": {
        "phase": "Available"
    }
}
And this YAML to create the job:
apiVersion: batch/v1
kind: Job
metadata:
  name: pi31
spec:
  template:
    spec:
      containers:
        - name: pi
          image: perl
          command: ["sleep"]
          args: ["300"]
          volumeMounts:
            - mountPath: /data
              name: pv0003
      volumes:
        - name: pv0003
          hostPath:
            path: /user/data
      restartPolicy: Never
  backoffLimit: 1
I also tried to create the volumes according to the so-called default mount paths, but with no success.
I tried to add the volume claim to the job creation YAML; still nothing.
When I mount the drives and create them in the job creation YAML files, the jobs are able to see the data that other jobs create, but it's invisible to the host, and the host's data is invisible to them.
I am running minikube from my main user, and I checked the logs in the dashboard; I am not getting any permissions errors.
Is there any way to get data into this minikube without setting up NFS? I am trying to use it for an MVP; the entire idea is for it to be simple...
It's not so easy, as minikube runs inside a VM created in VirtualBox; that's why, when using hostPath, you see that VM's file system instead of your PC's.
I would really recommend using the minikube mount command - you can find its description in the docs.
From docs:
minikube mount /path/to/dir/to/mount:/vm-mount-path is the recommended
way to mount directories into minikube so that they can be used in
your local Kubernetes cluster.
So after that you can share your host's files inside minikube Kubernetes.
Edit:
Here is a step-by-step log of how to test it:
➜ ~ minikube start
* minikube v1.3.0 on Ubuntu 19.04
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing virtualbox VM for "minikube" ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.15.2 on Docker 18.09.6 ...
* Relaunching Kubernetes using kubeadm ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
➜ ~ mkdir -p /tmp/test-dir
➜ ~ echo "test-string" > /tmp/test-dir/test-file
➜ ~ minikube mount /tmp/test-dir:/test-dir
* Mounting host path /tmp/test-dir into VM as /test-dir ...
- Mount type: <no value>
- User ID: docker
- Group ID: docker
- Version: 9p2000.L
- Message Size: 262144
- Permissions: 755 (-rwxr-xr-x)
- Options: map[]
* Userspace file server: ufs starting
* Successfully mounted /tmp/test-dir to /test-dir
* NOTE: This process must stay alive for the mount to be accessible ...
Now open another console:
➜ ~ minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ cat /test-dir/test-file
test-string
Edit 2:
example job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: test
spec:
  template:
    spec:
      containers:
        - name: test
          image: ubuntu
          command: ["cat", "/testing/test-file"]
          volumeMounts:
            - name: test-volume
              mountPath: /testing
      volumes:
        - name: test-volume
          hostPath:
            path: /test-dir
      restartPolicy: Never
  backoffLimit: 4

What is the reason for "Back-off restarting failed container" for an Elasticsearch Kubernetes pod?

When I try to run my Elasticsearch container through a Kubernetes deployment, my Elasticsearch pod fails after some time, while it runs perfectly fine when run directly as a Docker container using docker-compose or a Dockerfile. This is what I get as a result of kubectl get pods:
NAME READY STATUS RESTARTS AGE
es-764bd45bb6-w4ckn 0/1 Error 4 3m
Below is the result of kubectl describe pod:
Name: es-764bd45bb6-w4ckn
Namespace: default
Node: administrator-thinkpad-l480/<node_ip>
Start Time: Thu, 30 Aug 2018 16:38:08 +0530
Labels: io.kompose.service=es
pod-template-hash=3206801662
Annotations: <none>
Status: Running
IP: 10.32.0.8
Controlled By: ReplicaSet/es-764bd45bb6
Containers:
es:
Container ID: docker://9be2f7d6eb5d7793908852423716152b8cefa22ee2bb06fbbe69faee6f6aa3c3
Image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
Image ID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:9ae20c753f18e27d1dd167b8675ba95de20b1f1ae5999aae5077fa2daf38919e
Port: 9200/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 78
Started: Thu, 30 Aug 2018 16:42:56 +0530
Finished: Thu, 30 Aug 2018 16:43:07 +0530
Ready: False
Restart Count: 5
Environment:
ELASTICSEARCH_ADVERTISED_HOST_NAME: es
ES_JAVA_OPTS: -Xms2g -Xmx2g
ES_HEAP_SIZE: 2GB
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nhb9z (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-nhb9z:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nhb9z
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m default-scheduler Successfully assigned default/es-764bd45bb6-w4ckn to administrator-thinkpad-l480
Normal Pulled 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Container image "docker.elastic.co/elasticsearch/elasticsearch:6.2.4" already present on machine
Normal Created 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Created container
Normal Started 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Started container
Warning BackOff 1m (x15 over 5m) kubelet, administrator-thinkpad-l480 Back-off restarting failed container
Here is my elasticsearch-deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.1.0 (36652f6)
  creationTimestamp: null
  labels:
    io.kompose.service: es
  name: es
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: es
    spec:
      containers:
        - env:
            - name: ELASTICSEARCH_ADVERTISED_HOST_NAME
              value: es
            - name: ES_JAVA_OPTS
              value: -Xms2g -Xmx2g
            - name: ES_HEAP_SIZE
              value: 2GB
          image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
          name: es
          ports:
            - containerPort: 9200
          resources: {}
      restartPolicy: Always
status: {}
When I try to get logs using kubectl logs -f es-764bd45bb6-w4ckn, I get:
Error from server: Get https://<slave node ip>:10250/containerLogs/default/es-764bd45bb6-w4ckn/es?previous=true: dial tcp <slave node ip>:10250: i/o timeout
What could be the reason and solution for this problem?
I had the same problem; there can be a couple of reasons for this issue. In my case a jar file was missing. @Lakshya has already answered this problem; I would like to add the steps that you can take to troubleshoot it (consolidated in the sketch after these steps):
Get the pod status - kubectl get pods
Describe the pod to have a further look - kubectl describe pod "pod-name"
The last few lines of output give you the events and where your deployment failed
Get logs for more details - kubectl logs "pod-name"
Get container logs - kubectl logs "pod-name" -c "container-name"
Get the container name from the output of the describe pod command
If your container is up, you can use kubectl exec -it to further analyse the container
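Put together (pod and container names are placeholders), the sequence looks like this:
$ kubectl get pods
$ kubectl describe pod <pod-name>              # events at the bottom show where it failed
$ kubectl logs <pod-name>
$ kubectl logs <pod-name> -c <container-name>  # per-container logs
$ kubectl exec -it <pod-name> -- /bin/sh       # only if the container is up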
Hope it helps community members in future issues.
I found the logs using docker logs for the es container and discovered that es was not starting because vm.max_map_count was set to a very low value.
I changed vm.max_map_count to the desired value using sysctl -w vm.max_map_count=262144 and the pod started after that.
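If you want that setting applied automatically rather than running sysctl on the node by hand, a minimal sketch (assuming your cluster allows privileged containers; busybox is used only as an example image) is a privileged init container like the one the official Elasticsearch chart uses:
    spec:
      initContainers:
        - name: configure-sysctl
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true   # needed to change a kernel parameter on the node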
In my case, I just ran kubectl run ubuntu --image=ubuntu, got a similar error, and kubectl logs was empty.
I guess the reason is that the ubuntu image without a command will power off automatically, so the solution is:
Output the k8s ubuntu config YAML.
In command, give the container a command that keeps it from powering off (for example, add "sleep infinity"); the following is a working config YAML:
{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "ubuntu",
        "creationTimestamp": null,
        "labels": {
            "run": "ubuntu"
        }
    },
    "spec": {
        "containers": [
            {
                "name": "ubuntu",
                "image": "ubuntu:20.04",
                "command": [
                    "sleep",
                    "infinity"
                ],
                "resources": {},
                "imagePullPolicy": "IfNotPresent"
            }
        ],
        "restartPolicy": "Always"
    },
    "status": {}
}
Maybe the config is incorrect but valid; read the pod logs and find the error message. Fix the configs and redeploy the app.

Unable to mount MySQL data volume to Kubernetes Minikube pod

I'm trying to set up a dev environment with Kubernetes via Minikube. I successfully mounted the same volume to the same data dir on the same image with Docker for Mac, but I'm having trouble with Minikube.
Relevant files and logs:
db-pod.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    name: msyql
  name: db
  namespace: default
spec:
  containers:
    - name: mysqldev
      image: mysql/mysql-server:5.6.32
      ports:
        - containerPort: 3306
          hostPort: 3306
      volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: volumesnew
  volumes:
    - name: volumesnew
      hostPath:
        path: "/Users/eric/Volumes/mysql"
kubectl get pods:
NAME READY STATUS RESTARTS AGE
db 0/1 Error 1 3s
kubectl logs db:
2016-08-29 20:05:55 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-08-29 20:05:55 0 [Note] mysqld (mysqld 5.6.32) starting as process 1 ...
2016-08-29 20:05:55 1 [Warning] Setting lower_case_table_names=2 because file system for /var/lib/mysql/ is case insensitive
kubectl describe pods db:
Name: db
Namespace: default
Node: minikubevm/10.0.2.15
Start Time: Wed, 31 Aug 2016 07:48:39 -0700
Labels: name=msyql
Status: Running
IP: 172.17.0.3
Controllers: <none>
Containers:
mysqldev:
Container ID: docker://af0937edcd9aa00ebc278bc8be00bc37d60cbaa403c69f71bc1b378182569d3d
Image: mysql/mysql-server:5.6.32
Image ID: docker://sha256:0fb418d5a10c9632b7ace0f6e7f00ec2b8eb58a451ee77377954fedf6344abc5
Port: 3306/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 31 Aug 2016 07:48:42 -0700
Finished: Wed, 31 Aug 2016 07:48:43 -0700
Ready: False
Restart Count: 1
Environment Variables:
MYSQL_ROOT_PASSWORD: test
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
volumesnew:
Type: HostPath (bare host directory volume)
Path: /Users/eric/Volumes/newmysql
default-token-il74e:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-il74e
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
7s 7s 1 {default-scheduler } Normal Scheduled Successfully assigned db to minikubevm
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id 568f9112dce0
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id 568f9112dce0
6s 4s 2 {kubelet minikubevm} spec.containers{mysqldev} Normal Pulled Container image "mysql/mysql-server:5.6.32" already present on machine
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id af0937edcd9a
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id af0937edcd9a
3s 2s 2 {kubelet minikubevm} spec.containers{mysqldev} Warning BackOff Back-off restarting failed docker container
3s 2s 2 {kubelet minikubevm} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysqldev" with CrashLoopBackOff: "Back-off 10s restarting failed container=mysqldev pod=db_default(012d5178-6f8a-11e6-97e8-c2daf2e2520c)"
I was able to mount the data directory from the host to the container in a test directory, but I'm having trouble mounting to the MySQL data directory. Also, I tried to mount an empty directory to the container's data dir with the appropriate MySQL environment variables set, which in Docker for Mac allowed me to perform a SQL dump in the new dir, but I'm seeing the same errors in Minikube.
Any thoughts on what might be the cause? Or if I'm not setting up my dev environment the preferred Kubernetes/Minikube way, please share your thoughts.
I was able to resolve this with the following:
echo "/Users -network 192.168.99.0 -mask 255.255.255.0 -alldirs -maproot=root:wheel" | sudo tee -a /etc/exports
sudo nfsd restart
minikube start
minikube ssh -- sudo umount /Users
minikube ssh -- sudo /usr/local/etc/init.d/nfs-client start
minikube ssh -- sudo mount 192.168.99.1:/Users /Users -o rw,async,noatime,rsize=32768,wsize=32768,proto=tcp
I am running Minikube in VirtualBox. I don't know if this will work with other VM drivers - xhyve, etc.
Reference: https://github.com/kubernetes/minikube/issues/2
EDIT: I should mention that this works for minikube v0.14.0.
1. Mount the folder you want to share on your host, in minikube:
minikube mount ./path/to/mySharedData:/mnt1/shared1
Don't close the terminal. That process needs to be running all the time for the folder to be accessible.
2. Use that folder with hostPath:
spec:
  containers:
    - name: mysqldev
      image: mysql/mysql-server:5.6.32
      ports:
        - containerPort: 3306
          hostPort: 3306
      volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: my-volume
  volumes:
    - name: my-volume
      hostPath:
        path: "/mnt1/shared1"
3. Write access issue?
In case you have a write access issue, you might want to mount the volume with:
minikube mount ./path/to/mySharedData:/mnt1/shared1 --uid 10001 --gid 10001
Here, the volume mounted in minikube will have group id and user id 10001. This is the user id of Azure SQL Edge server inside the container.
I don't know what the user id of mysql is in your case. If you want to know, log into your container and type id; it will tell you the user id.
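For example, a quick sketch using the pod from the question (the printed uid/gid values depend on the image, so they are left as placeholders here):
$ kubectl exec -it db -- id
# note the uid and gid it prints, then remount with them:
$ minikube mount /Users/eric/Volumes/mysql:/mnt1/shared1 --uid <uid> --gid <gid>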
