I have a problem deploying an otherwise working cAdvisor container in an OpenShift project.
I have a dedicated service account in the OpenShift project and added the privileged SCC to it,
then I made some changes in cAdvisor.yaml and tried to deploy. The container runs, but with an error when I open the "Docker Containers" section of the web UI.
Changes I made in cAdvisor.yaml:
metadata:
  annotations:
    openshift.io/scc: privileged
securityContext:
  runAsUser: 0
serviceAccount: cadvisor-sa
serviceAccountName: cadvisor-sa
volumes:
- hostPath:
    path: /
  name: rootfs
- hostPath:
    path: /var/run
  name: var-run
- hostPath:
    path: /sys/fs/cgroup/cpu
  name: sys
- hostPath:
    path: /var/lib/docker
  name: docker
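For completeness, the matching volumeMounts on the cAdvisor container are unchanged from the upstream manifest and look roughly like this (container image, mount paths and readOnly flags are assumed from the stock cadvisor.yaml and may differ):
containers:
- name: cadvisor
  image: google/cadvisor:latest
  volumeMounts:
  - name: rootfs
    mountPath: /rootfs
    readOnly: true
  - name: var-run
    mountPath: /var/run
  - name: sys
    mountPath: /sys
    readOnly: true
  - name: docker
    mountPath: /var/lib/docker
    readOnly: true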
Error in the web UI:
failed to get docker info: Got permission denied while trying to
connect to the Docker daemon socket at unix:///var/run/docker.sock:
Get http://%2Fvar%2Frun%2Fdocker.sock/info: dial unix
/var/run/docker.sock: connect: permission denied
I'm trying to mount a local Windows directory into a docker container in a Kubernetes pod, but I have encountered errors when specifying the mount path. I'm a newbie to Kubernetes and not sure if I'm following the correct way to mount a Windows directory.
My .yaml file looks something like this:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers-local
  labels:
    name: app
spec:
  containers:
  - image: nginx
    name: nginx-container
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
    ports:
    - containerPort: 8080
  volumes:
  - name: test-volume
    hostPath:
      path: 'C:\test'
      type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: my-two-container-nginx-local
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30002
    protocol: TCP
  selector:
    name: app
The pod then fails to mount the volume with:
MountVolume.SetUp failed for volume "test-volume" : hostPath type check failed: C:\test is not a directory
Can you check whether the C:\test directory exists on the host? When the type is Directory, kubelet will not create the directory if it does not exist on the host, and it will print this error.
As per this GitHub link, set the hostPath as below:
  type: DirectoryOrCreate
  path: /run/desktop/mnt/host/c/users/public/your_folder  # give the exact working mount path
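Put together, the hostPath volume in the pod spec would then look something like this (the folder under /run/desktop/mnt/host/c is only an example):
volumes:
- name: test-volume
  hostPath:
    # Docker Desktop exposes the Windows drives to the Kubernetes node
    # under /run/desktop/mnt/host/<drive-letter>
    path: /run/desktop/mnt/host/c/test
    type: DirectoryOrCreate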
As you asked how to mount a local directory from your Windows machine into Docker, following this link might help you.
How should Mutagen be configured, to synchronise local-host source code files with a Docker volume, onto a Kubernetes cluster?
I used to mount my project directory onto a container, using hostPath:
kind: Deployment
spec:
  volumes:
  - name: "myapp-src"
    hostPath:
      path: "path/to/project/files/on/localhost"
      type: "Directory"
  ...
  containers:
  - name: myapp
    volumeMounts:
    - name: "myapp-src"
      mountPath: "/app/"
but this has permission and symlink problems that I need to solve using Mutagen.
At the moment, it works correctly when relying on docker-compose (run via mutagen project start -f path/to/mutagen.yml):
sync:
  defaults:
    symlink:
      mode: ignore
    permissions:
      defaultFileMode: 0664
      defaultDirectoryMode: 0775
  myapp-src:
    alpha: "../../"
    beta: "docker://myapp-mutagen/myapp-src"
    mode: "two-way-safe"
But it isn't clear to me how to configure the K8S Deployment, in order to use Mutagen for keeping the myapp-src volume in sync with localhost?
I'm trying to build a docker image using DIND with Atlassian Bamboo.
I've created the deployment/StatefulSet as follows:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: bamboo
  name: bamboo
  namespace: csf
spec:
  replicas: 1
  serviceName: bamboo
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: bamboo
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: bamboo
    spec:
      containers:
      - image: atlassian/bamboo-server:latest
        imagePullPolicy: IfNotPresent
        name: bamboo-server
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        securityContext:
          privileged: true
        volumeMounts:
        - name: bamboo-home
          mountPath: /var/atlassian/application-data/bamboo
        - mountPath: /opt/atlassian/bamboo/conf/server.xml
          name: bamboo-server-xml
          subPath: bamboo-server.xml
        - mountPath: /var/run
          name: docker-sock
      volumes:
      - name: bamboo-home
        persistentVolumeClaim:
          claimName: bamboo-home
      - configMap:
          defaultMode: 511
          name: bamboo-server-xml
        name: bamboo-server-xml
      - name: docker-sock
        hostPath:
          path: /var/run
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Note that I've set privileged: true in securityContext to enable this.
However, when trying to run docker images, I get a permission error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See '/var/atlassian/application-data/bamboo/appexecs/docker run --help'
Am I missing something with regard to setting up DIND?
The /var/run/docker.sock file on the host system is owned by a different user than the user that is running the bamboo-server container process.
Without knowing any details about your cluster, I would assume docker runs as 'root' (UID=0). The bamboo-server runs as 'bamboo', as can be seen from its Dockerfile, which will normally map to a UID in the 1XXX range on the host system. As these users are different and the container process did not receive any specific permissions over the (host) socket, the error is given.
So I think there are two approaches possible:
Either the container process continues to run as the 'bamboo' user, but is given sufficient permissions on the host system to access /var/run/docker.sock. This would normally mean adding the UID the bamboo user maps to on the host system to the docker group on the host system. However, making changes to the host system might or might not be an option depending on the context of your cluster, and it is tricky in a cluster context because the pod could migrate to a different node where the changes were not applied and/or the UID changes.
Or the container is changed to run as a sufficiently privileged user to begin with, namely the root user. There are two ways to accomplish this: 1. you extend and customize the Atlassian-provided base image to change the user, or 2. you override the user the container runs as at run-time by means of the 'runAsUser' and 'runAsGroup' securityContext instructions as specified here. Both should be '0'.
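For the second approach, a minimal sketch of the container-level securityContext, applied to the bamboo-server container in the StatefulSet above, would look roughly like this:
containers:
- name: bamboo-server
  image: atlassian/bamboo-server:latest
  securityContext:
    privileged: true
    runAsUser: 0   # run the container process as root...
    runAsGroup: 0  # ...with root's primary group, so it can use the host docker.sock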
As mentioned in the documentation here, if you want to run docker as a non-root user then you need to add that user to the docker group.
Create the docker group if it does not exist
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
$ newgrp docker
Verify that you can run docker commands without sudo
$ docker run hello-world
If that doesn't help, you can change the permissions of the docker socket /var/run/docker.sock so that your user can connect to the docker daemon:
sudo chmod 666 /var/run/docker.sock
A better way to handle this is to run a sidecar container, docker:dind, and export DOCKER_HOST=tcp://dind:2375 in the main Bamboo container. This way you will invoke Docker in the dind container and won't need to mount /var/run/docker.sock.
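A rough sketch of that setup, assuming docker:dind runs as a sidecar in the same pod (in which case it is reachable on localhost; with a separate dind pod and service you would use tcp://dind:2375 as above):
containers:
- name: bamboo-server
  image: atlassian/bamboo-server:latest
  env:
  - name: DOCKER_HOST
    value: tcp://localhost:2375   # talk to the dind sidecar instead of the host socket
- name: dind
  image: docker:dind
  securityContext:
    privileged: true              # dind itself needs a privileged container
  env:
  - name: DOCKER_TLS_CERTDIR
    value: ""                     # disable TLS so the daemon listens on plain tcp/2375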
Summary
Running a declarative pipeline job on a Jenkins instance deployed to a Kubernetes cluster fails when using the docker agent, with the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/create?fromImage=node&tag=10.15.1: dial unix /var/run/docker.sock: connect: permission denied
How can I solve this permission error in the kubernetes declaration?
Background
We have a jenkins server which was deployed to a kubernetes cluster using the jenkinsci/blueocean image. The kubernetes declaration was done as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
      - name: jenkins-master
        image: jenkinsci/blueocean
        imagePullPolicy: Always
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        env:
        - name: "JAVA_OPTS"
          value: "-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=3600"
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        - name: docker-socket
          mountPath: /var/run/docker.sock
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
          type: File
We then declare a declarative pipeline Jenkins job as follows:
pipeline {
    agent {
        docker {
            image 'node:10.15.1'
            label 'master'
        }
    }
    stages {
        stage('Checkout source code') {
            steps {
                checkout scm
            }
        }
        stage('Build project') {
            steps {
                sh 'npm install'
                sh 'npm run compile'
            }
        }
        stage('Run quality assurance') {
            steps {
                sh 'npm run style:check'
                sh 'npm run test:coverage'
            }
        }
    }
}
This job fails with the aforementioned error. My suspicion is that the docker socket is mounted into the pod, but the user running the job does not have permission to access the socket. I, however, cannot add the user to the group in the created pod using sudo usermod -a -G docker $USER since the pod will be recreated upon each redeploy.
Questions
Is it possible to mount the docker volume using the correct user in the kubernetes declaration?
Can I declare the pipeline differently, if it is not possible to set up the permission in the kubernetes declaration?
Is there some other solution which I have not thought about?
Thanks.
I, however, cannot add the user to the group in the created pod using
sudo usermod -a -G docker $USER since the pod will be recreated upon
each redeploy.
Actually, you can.
Define a usermod command for your container in the deployment yaml, e.g.:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
      - name: jenkins-master
        image: jenkinsci/blueocean
        imagePullPolicy: Always
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        env:
        - name: "JAVA_OPTS"
          value: "-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=3600"
        - name: "USER"
          value: "Awemo"
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        - name: docker-socket
          mountPath: /var/run/docker.sock
        command: ["/bin/sh"]
        args: ["-c", "usermod -aG docker $USER"]
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
          type: File
So whenever a new pod is created, the user will be added to the docker group.
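Note that overriding command and args like this replaces the image's own entrypoint, so the Jenkins process has to be started again at the end of the shell command, and the container must start as root for usermod to succeed. A sketch under those assumptions (the jenkins user name and the jenkins.sh path are assumed from the jenkins/jenkins base image):
securityContext:
  runAsUser: 0                  # usermod requires root
command: ["/bin/sh"]
args: ["-c", "usermod -aG docker jenkins && exec /usr/local/bin/jenkins.sh"]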
I've been hitting errors when trying to set up a dev platform in Kubernetes & minikube. The config is creating a service, persistentVolume, persistentVolumeClaim & deployment.
The deployment is creating a single pod with a single container based on bitnami/mariadb:latest
I am mounting a local volume into the minikube vm via:
minikube mount <source-path>:/data
This local volume is mounting correctly and can be inspected when I ssh into the minikube vm via: minikube ssh
I now run:
kubectl create -f mariadb-deployment.yaml
to fire up the platform. The yaml config:
kind: Service
apiVersion: v1
metadata:
  name: mariadb-deployment
  labels:
    app: supertubes
spec:
  ports:
  - port: 3306
  selector:
    app: supertubes
    tier: mariadb
  type: LoadBalancer
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: local-db-pv
  labels:
    type: local
    tier: mariadb
    app: supertubes
spec:
  storageClassName: slow
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/data/staging/sumatra/mariadb-data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-db-pv-claim
  labels:
    app: supertubes
spec:
  storageClassName: slow
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: local
      tier: mariadb
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mariadb-deployment
  labels:
    app: supertubes
spec:
  selector:
    matchLabels:
      app: supertubes
      tier: mariadb
  template:
    metadata:
      labels:
        app: supertubes
        tier: mariadb
    spec:
      securityContext:
        fsGroup: 1001
      containers:
      - image: bitnami/mariadb:latest
        name: mariadb
        env:
        - name: MARIADB_ROOT_PASSWORD
          value: <db-password>
        - name: MARIADB_DATABASE
          value: <db-name>
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - name: mariadb-persistent-storage
          mountPath: /bitnami
      volumes:
      - name: mariadb-persistent-storage
        persistentVolumeClaim:
          claimName: local-db-pv-claim
The above config then fails to boot the pod; inspecting the pod's logs within the minikube dashboard shows the following:
nami INFO Initializing mariadb
mariadb INFO ==> Cleaning data dir...
mariadb INFO ==> Configuring permissions...
mariadb INFO ==> Validating inputs...
mariadb INFO ==> Initializing database...
mariadb INFO ==> Creating 'root' user with unrestricted access...
mariadb INFO ==> Creating database pw_tbs...
mariadb INFO ==> Enabling remote connections...
Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mariadb'
Looking at the above I believed the issue was to do with Bitnami using user: 1001 to launch their mariadb image:
https://github.com/bitnami/bitnami-docker-mariadb/issues/134
Since reading the above issue I've been playing with securityContext within the containers spec. At present you'll see I have it set to:
deployment.template.spec
securityContext:
  fsGroup: 1001
but this isn't working. I've also tried:
securityContext:
  privileged: true
but didn't get anywhere with that either.
One other check I made was to remove the volumeMount from deployment.template.spec.containers and see if things worked correctly without it, which they do :)
I then opened a shell into the pod to check the permissions on /bitnami.
Reading a bit more of the Bitnami issue posted above, it says the user 1001 is a member of the root group, therefore I'd expect them to have the necessary permissions... At this stage I'm a little lost as to what is wrong.
If anyone could help me understand how to correctly set up this minikube vm volume within a container that would be amazing!
Edit 15/03/18
Following @Anton Kostenko's suggestions I added a busybox container as an initContainer which ran a chmod on the bitnami directory:
...
spec:
  initContainers:
  - name: install
    image: busybox
    imagePullPolicy: Always
    command: ["chmod", "-R", "777", "/bitnami"]
    volumeMounts:
    - name: mariadb-persistent-storage
      mountPath: /bitnami
  containers:
  - image: bitnami/mariadb:latest
    name: mariadb
...
however, even with global rwx permissions (777) set, the MariaDB container still fails because user 1001 is not permitted to modify the directory:
nami INFO Initializing mariadb
Error executing 'postInstallation': EPERM: operation not permitted, utime '/bitnami/mariadb/.restored'
Another Edit 15/03/18
Have now tried setting the user:group on my local machine (MacBook) so that when passed to the minikube vm they should already be correct:
Now mariadb-data has rwx permission for everyone, and user: 1001, group: 1001
I then removed the initContainer as I wasn't really sure what that would be adding.
SSHing onto the minikube vm I can see the permissions and user:group have been carried across:
The user & group now being set as docker
Firing up this container results in the same sort of error:
nami INFO Initializing mariadb
Error executing 'postInstallation': EIO: i/o error, utime '/bitnami/mariadb/.restored'
I've tried removing the securityContext, and also setting it to runAsUser: 1001, fsGroup: 1001; however, neither made any difference.
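Spelled out, that last attempt was the pod-level securityContext:
securityContext:
  runAsUser: 1001
  fsGroup: 1001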
Looks like that is an issue in Minikube.
You can try to use an init container which will fix the permissions before the main container is started, like this:
...........
spec:
  initContainers:
  - name: "fix-non-root-permissions"
    image: "busybox"
    imagePullPolicy: "Always"
    command: [ "chmod", "-R", "g+rwX", "/bitnami" ]
    volumeMounts:
    - name: mariadb-persistent-storage
      mountPath: /bitnami
  containers:
.........
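If fixing the group permissions is not enough, the same pattern can instead chown the volume to the UID the Bitnami image runs as (1001, per the issue linked in the question), for example:
initContainers:
- name: "fix-non-root-permissions"
  image: "busybox"
  imagePullPolicy: "Always"
  # 1001 is the non-root user the bitnami/mariadb image runs as
  command: [ "sh", "-c", "chown -R 1001:1001 /bitnami" ]
  volumeMounts:
  - name: mariadb-persistent-storage
    mountPath: /bitnami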