I have failed to use hostPath /var/lib/docker/containers as a volume; I get the following error:
Error response from daemon: linux mounts: Path /var/lib/docker/containers is
mounted on /var/lib/docker/containers but it is not a shared or slave mount.
Here is my YAML spec (note: this is just a minimal example to reproduce my problem; the real use case is log collection):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: logging
  labels:
    app: test
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        securityContext:
          privileged: true
        ports:
        - containerPort: 8003
        volumeMounts:
        - name: docker
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: docker
        hostPath:
          path: /var/lib/docker/containers
And my Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1",
GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean",
BuildDate:"2018-04-12T14:26:04Z", GoVersion:"go1.9.3", Compiler:"gc",
Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0",
GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean",
BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc",
Platform:"linux/amd64"}
Any help is very much appreciated!
You are most probably hitting a version-specific issue:
/var/lib/docker/containers is intentionally mounted by Docker with private mount
propagation and thus conflicts with Kubernetes trying to mount this directory
as rslave when running the container
You should try 1.10.3+, where this is resolved. See the official Kubernetes changelog and check the entry related to "Default mount propagation". Also check the related fluentd issue (it matches your error) for a more in-depth analysis.
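If you cannot upgrade right away, a minimal sketch of a workaround (untested against your exact setup, and assuming the beta mountPropagation field is available in your 1.10 cluster) is to request private propagation explicitly on the volume mount; this is the behavior that 1.10.3+ restores as the default:
        volumeMounts:
        - name: docker
          mountPath: /var/lib/docker/containers
          readOnly: true
          # "None" maps to private propagation, avoiding the rslave
          # default that 1.10.0-1.10.2 request and Docker rejects here
          mountPropagation: None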
Now, with that said...
David's seasoned comment, with its question and word of caution, still stands, and I second it: this is quite an eyebrow-raiser. An nginx pod digging deep into Docker engine internals (I hope it is just for the sake of a minimal reproducible example, or a log-collection case, you know, something...). Just make sure you know exactly what you are doing and why.
EKS cluster version:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-19T11:45:27Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Given below is my deployment file:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: sample-pod
  namespace: front-end
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-pod
  template:
    metadata:
      labels:
        app: sample-pod
    spec:
      serviceAccountName: my-service-account
      containers:
      - name: sample-pod
        image: <Account-id>.dkr.ecr.us-east-1.amazonaws.com/sample-pod-image:latest
        resources:
          limits:
            cpu: 1000m
            memory: 1000Mi
          requests:
            cpu: 500m
            memory: 500Mi
        env:
        - name: name
          value: sample-pod
        - name: ACTIVE_SPRING_PROFILE
          value: dev
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 8091
      imagePullSecrets:
      - name: <my_region>-1-ecr-registry
And this is my Dockerfile:
FROM amazoncorretto:latest
COPY bootstarp.sh /bootstarp.sh
RUN yum -y install aws-cli
CMD ["tail", "-f" , "/bootstarp.sh"]
Steps to reproduce:
kubectl apply -f my-dep.yaml
Let the container be created.
Delete the deployment using:
kubectl delete -f my-dep.yaml
Recreate using:
kubectl apply -f my-dep.yaml
Not a perfect solution, but this is how I overcame it.
Root cause: the deployment was in the terminating stage and I was recreating it, which involves the reassignment of networking resources; due to a deadlock, the recreation fails.
Solution: I added a cool-down period between the termination and the recreation of the deployment. Earlier, I was deleting and recreating the deployment in one shot (using a shell script).
Earlier:
kubectl delete -f my-dep.yaml
some more instructions .....
kubectl apply -f my-dep.yaml
Now:
kubectl delete -f my-dep.yaml
some more instructions .....
sleep 1m 30s
kubectl apply -f my-dep.yaml
Because of the cool-down, I can now predictably deploy the container.
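If you prefer not to rely on a fixed sleep, a sketch of an alternative (assuming the pods carry the app: sample-pod label from the manifest above) is to block until the old pods are actually gone before reapplying:
kubectl delete -f my-dep.yaml
# wait until the terminating pods have really disappeared
kubectl wait --for=delete pod -l app=sample-pod -n front-end --timeout=180s
kubectl apply -f my-dep.yaml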
I am running Kubernetes on Docker Desktop for Windows and connecting to the cluster from WSL.
All my pods are running correctly. I am trying to mount a volume into my JupyterLab pod using hostPath. Below is my config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter
  labels:
    app: jupyter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter
  template:
    metadata:
      labels:
        app: jupyter
    spec:
      containers:
      - name: jupyter
        image: jupyter:1.1
        ports:
        - containerPort: 8888
        securityContext:
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: mydir
          mountPath: /notebooks
      volumes:
      - name: mydir
        hostPath:
          # directory location on host
          path: /home/<myuser>/data
          # this field is optional
          type: DirectoryOrCreate
The pod starts without any issues, but I don't see the notebooks I have kept in my hostPath in JupyterLab, and vice versa (if I save a notebook in JupyterLab, it does not get saved to my hostPath).
I followed the tutorial at https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
I want to point out that I am using FROM jupyter/datascience-notebook:python-3.7.6 as my Docker base image.
I tried mounting /home/jovyan/, but it was giving me access-related errors while starting the pod, so I reverted to /notebooks.
It looks like an issue with how the path is written on Windows; I see the issue reported in the references below.
Solution:
If your folder is on, say, the C: drive, the path should be converted to:
/host_mnt/c/path/to/my/folder
If the above does not work, you may want to remove type: DirectoryOrCreate and retry.
References:
https://github.com/kubernetes/kubernetes/issues/59876#issuecomment-628955935
https://github.com/docker/for-win/issues/1703#issuecomment-366701358
If you are using the WSL-based engine on Windows, the path should be /run/desktop/mnt/host/c/<folder>.
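For example, a sketch of the volumes section for a folder at C:\data on a WSL 2 based Docker Desktop (the prefix is taken from the note above; the drive letter and folder are placeholders to adjust to your setup):
      volumes:
      - name: mydir
        hostPath:
          # Docker Desktop (WSL 2 backend) exposes the Windows C: drive
          # to the Kubernetes node under /run/desktop/mnt/host/c
          path: /run/desktop/mnt/host/c/data
          type: DirectoryOrCreate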
I'm here for hours every day, reading and learning, but this is my first question, so bear with me.
I'm simply trying to get my Kubernetes cluster to start up.
Below is my skaffold.yaml file in the root of the project:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: omesadev/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
Below is my auth-depl.yaml file in the infra/k8s/ directory:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: omesadev/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
Below is the error message I'm receiving in the CLI:
exiting dev mode because first deploy failed: unable to connect to Kubernetes: getting client config for Kubernetes client: error creating REST client config for kubeContext "": invalid configuration: [unable to read client-cert C:\Users\omesa\.minikube\profiles\minikube\client.crt for minikube due to open C:\Users\omesa\.minikube\profiles\minikube\client.crt: The system cannot find the path specified., unable to read client-key C:\Users\omesa\.minikube\profiles\minikube\client.key for minikube due to open C:\Users\omesa\.minikube\profiles\minikube\client.key: The system cannot find the path specified., unable to read certificate-authority C:\Users\omesa\.minikube\ca.crt for minikube due to open C:\Users\omesa\.minikube\ca.crt: The system cannot find the file specified.]
I've tried installing Kubernetes, minikube, and kubectl. I've added them to the path and removed them a few times in different ways, because I thought my configuration or usage could have been incorrect.
Then I read that if I'm using the Docker GUI, Kubernetes should be running in that, so I checked the settings in the Docker GUI to ensure Kubernetes was running through Docker, and it is.
I have Hyper-V set up. I've used it in the past successfully with Docker and with Virtualbox, so I know my Hyper-V is not the issue.
I've also attached an image of my file directory, but I'm pretty sure everything is good to go here too.
[image: src tree]
Thanks in advance!
Enable Kubernetes!
The reason you are getting this error is that Kubernetes is not enabled.
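As a quick check (a sketch; it assumes kubectl is on your PATH), these should answer from the docker-desktop cluster once Kubernetes is enabled in the Docker Desktop settings:
kubectl cluster-info
kubectl get nodes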
Posting @Jim's solution from the comments as a community wiki for better visibility:
The problem was, I had two different contexts inside of my kubectl
config and the project I was trying to launch was using the wrong
cluster/context. I don't know how the minikube cluster and context
were created, but I deleted them and set the new context to
docker-desktop with "kubectl config use-context docker-desktop"
Helpful links:
Organizing Cluster Access Using kubeconfig Files
Configure Access to Multiple Clusters
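For reference, a short sketch of the standard kubectl config subcommands involved:
kubectl config get-contexts              # list contexts; * marks the current one
kubectl config current-context           # print the active context
kubectl config use-context docker-desktop
kubectl config delete-context minikube   # optional: drop the stale context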
I'm setting up a Kubernetes deployment with an image that will execute docker commands (docker ps etc.).
My YAML looks like the following:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: discovery
  namespace: kube-system
  labels:
    discovery-app: kubernetes-discovery
spec:
  selector:
    matchLabels:
      discovery-app: kubernetes-discovery
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        discovery-app: kubernetes-discovery
    spec:
      containers:
      - image: docker:dind
        name: discover
        ports:
        - containerPort: 8080
          name: my-awesome-port
      imagePullSecrets:
      - name: regcred3
      volumes:
      - name: some-volume
        emptyDir: {}
      serviceAccountName: kubernetes-discovery
Normally I would run a Docker container as follows:
docker run -v /var/run/docker.sock:/var/run/docker.sock docker:dind
Now, Kubernetes YAML supports command and args, but for some reason does not support options.
What is the right thing to do?
Perhaps I should configure a volume, but then, is it a volumeMount or just a volume?
I am new to Kubernetes, so it is important for me to do it the right way.
Thank you
You want to add the volume to the container.
spec:
  containers:
  - name: discover
    image: docker:dind
    volumeMounts:
    - name: dockersock
      mountPath: "/var/run/docker.sock"
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
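Once the pod is running, a quick sanity check (a sketch; the docker CLI ships in the docker:dind image, and the namespace is taken from your manifest) is to run docker ps against the mounted socket:
kubectl exec -n kube-system deploy/discovery -- docker ps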
It seems like a bad idea to interact directly with containers on any node in Kubernetes. The whole point of Kubernetes is to orchestrate. If you add containers outside of the Pod construct, Kubernetes will not be aware of the processes running on the nodes, which will affect resource allocation.
It also needs to be said that working directly with containers bypasses security.
Hi, I am using the latest Kubernetes, 1.13.1, and docker-ce (Docker version 18.06.1-ce, build e68fc7a).
I set up a deployment file that mounts a file from the host (hostPath) inside a container (mountPath).
The bug: when I try to mount a file from the host into the container, I get an error message that it's not a file (Kubernetes thinks the file is a directory for some reason).
When I try to run the containers using the command:
kubectl create -f
it stays at the ContainerCreating stage forever.
After a deeper look using kubectl describe pod, it shows an error message that the file is not recognized as a file.
Here is the deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: notixxxion
  name: notification
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: notification
    spec:
      containers:
      - image: docker-registry.xxxxxx.com/xxxxx/nxxxx:laxxt
        name: notixxxion
        ports:
        - containerPort: xxx0
        #### host file configuration
        volumeMounts:
        - mountPath: /opt/notification/dist/hellow.txt
          name: test-volume
          readOnly: false
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /exec-ui/app-config/hellow.txt
          # this field is optional
          type: FileOrCreate
          #type: File
status: {}
I have reinstalled the Kubernetes cluster and it got a little bit better.
Kubernetes can now read files without any problem, and the container is created and running. But there is some other issue with the hostPath storage type:
the contents of hostPath mounts do not update as they change on the host, even after I delete the pod and create it again.
Check the permissions on the file you are trying to mount!
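As a sketch, the checks would look like this on the node that runs the pod (the path is taken from the manifest above):
# should show a regular file (leading -), not a directory (leading d)
ls -l /exec-ui/app-config/hellow.txt
# owner, group, and mode must allow the container user to read it
stat -c '%F %a %U:%G' /exec-ui/app-config/hellow.txt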
As a last resort, try using privileged mode.
Hope it helps!