I'm trying to build a docker image using DIND with Atlassian Bamboo.
I've created the Deployment/StatefulSet as follows:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: bamboo
  name: bamboo
  namespace: csf
spec:
  replicas: 1
  serviceName: bamboo
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: bamboo
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: bamboo
    spec:
      containers:
      - image: atlassian/bamboo-server:latest
        imagePullPolicy: IfNotPresent
        name: bamboo-server
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        securityContext:
          privileged: true
        volumeMounts:
        - name: bamboo-home
          mountPath: /var/atlassian/application-data/bamboo
        - mountPath: /opt/atlassian/bamboo/conf/server.xml
          name: bamboo-server-xml
          subPath: bamboo-server.xml
        - mountPath: /var/run
          name: docker-sock
      volumes:
      - name: bamboo-home
        persistentVolumeClaim:
          claimName: bamboo-home
      - configMap:
          defaultMode: 511
          name: bamboo-server-xml
        name: bamboo-server-xml
      - name: docker-sock
        hostPath:
          path: /var/run
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Note that I've set privileged: true in securityContext to enable this.
However, when trying to run docker images, I get a permission error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See '/var/atlassian/application-data/bamboo/appexecs/docker run --help'
Am I missing something with respect to setting up DIND?
The /var/run/docker.sock file on the host system is owned by a different user than the user that is running the bamboo-server container process.
Without knowing any details about your cluster, I would assume docker runs as 'root' (UID=0). The bamboo-server runs as 'bamboo', as can be seen from its Dockerfile, which will normally map to a UID in the 1XXX range on the host system. As these users are different and the container process did not receive any specific permissions over the (host) socket, the error is given.
So I think there are two possible approaches:
Either the container process continues to run as the 'bamboo' user, but is given sufficient permissions on the host system to access /var/run/docker.sock. This would normally mean adding the UID that the bamboo user maps to on the host system to the host's docker group. However, making changes to the host system may or may not be an option depending on the context of your cluster, and it is tricky in a cluster context because the pod could migrate to a different node where the changes were not applied and/or the UID changes.
Or the container is changed to run as a sufficiently privileged user to begin with, namely the root user. There are two ways to accomplish this: 1. you extend and customize the Atlassian-provided base image to change the user, or 2. you override the user the container runs as at run-time by means of the 'runAsUser' and 'runAsGroup' securityContext instructions as specified here. Both should be '0'.
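For the second option, a minimal sketch of the run-time override, applied to the bamboo-server container in the StatefulSet above:

```yaml
containers:
- name: bamboo-server
  image: atlassian/bamboo-server:latest
  securityContext:
    privileged: true
    runAsUser: 0    # run as root so the mounted docker.sock is accessible
    runAsGroup: 0
```

Note that running as root inside the container carries its own security implications, so weigh this against the alternatives below.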
As mentioned in the documentation here
If you want to run docker as a non-root user, then you need to add that user to the docker group.
Create the docker group if it does not exist
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
$ newgrp docker
Verify that you can run docker commands without sudo
$ docker run hello-world
If that doesn't help, you can change the permissions of the Docker socket to be able to connect to the Docker daemon at /var/run/docker.sock:
sudo chmod 666 /var/run/docker.sock
(Note that this grants every local user access to the Docker daemon, which is insecure; prefer the docker group approach above.)
A better way to handle this is to run a sidecar container, docker:dind, and export DOCKER_HOST=tcp://localhost:2375 in the main Bamboo container (containers in the same pod share a network namespace, so the sidecar is reachable on localhost). This way you will invoke Docker in the dind container and won't need to mount /var/run/docker.sock at all.
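A sketch of what that could look like in the pod spec (the TLS setting is an assumption of this sketch; docker:dind only serves plain TCP on 2375 when TLS is disabled):

```yaml
containers:
- name: bamboo-server
  image: atlassian/bamboo-server:latest
  env:
  - name: DOCKER_HOST
    value: tcp://localhost:2375
- name: dind
  image: docker:dind
  securityContext:
    privileged: true        # dind itself still needs to run privileged
  env:
  - name: DOCKER_TLS_CERTDIR
    value: ""               # disable TLS so the daemon listens on plain tcp://0.0.0.0:2375
```

With this, only the dind sidecar is privileged and the Bamboo container no longer touches the host's Docker socket.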
I have a pod running Linux that I have let others use. Now I need to save the changes made by others. Since I sometimes need to delete/restart the pod, the changes are reverted and a new pod gets created. So I want to save the pod's container as a Docker image and use that image to create a pod.
I have tried kubectl debug node/pool-89899hhdyhd-bygy -it --image=ubuntu and then installing docker and dockerd inside, but they don't have root permission to perform operations. I installed crictl, which listed the containers, but it has no option to save them.
I also created a privileged Docker image and a pod from it, then used kubectl exec --stdin --tty app-7ff786bc77-d5dhg -- /bin/sh and tried to get the running containers, but none were listed. Below is the deployment I used for the privileged Docker container:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: app
  labels:
    app: backend-app
    backend-app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-app
      task: app
  template:
    metadata:
      labels:
        app: backend-app
        task: app
    spec:
      nodeSelector:
        kubernetes.io/hostname: pool-58i9au7bq-mgs6d
      volumes:
      - name: task-pv-storage
        hostPath:
          path: /run/docker.sock
          type: Socket
      containers:
      - name: app
        image: registry.digitalocean.com/my_registry/docker_app@sha256:b95016bd9653631277455466b2f60f5dc027f0963633881b5d9b9e2304c57098
        ports:
        - containerPort: 80
        volumeMounts:
        - name: task-pv-storage
          mountPath: /var/run/docker.sock
Is there any way I can achieve this, i.e. get the pod's container and save it as a Docker image? I am using DigitalOcean to run my Kubernetes apps, and I do not have SSH access to the node.
This is not a feature of Kubernetes or CRI. Docker does support snapshotting a running container to an image (docker commit); however, Kubernetes no longer supports Docker as a container runtime.
Thank you all for your help and suggestions. I found a way to achieve it using the tool nerdctl - https://github.com/containerd/nerdctl.
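For reference, a sketch of that workflow with nerdctl run on the node (the container ID and image name are placeholders; containers managed by Kubernetes live in containerd's k8s.io namespace):

```shell
# list the containers Kubernetes is running under containerd
nerdctl --namespace k8s.io ps

# snapshot a running container into an image
nerdctl --namespace k8s.io commit <container-id> myrepo/myapp:snapshot

# push the image to a registry so a pod can use it
nerdctl --namespace k8s.io push myrepo/myapp:snapshot
```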
I have a container that I need to configure for k8s YAML. The workflow with docker run in the terminal looks like this:
docker run -v $(pwd):/projects \
-w /projects \
gcr.io/base-project/myoh:v1 init *myproject*
This command creates a directory called myproject. To complete the workflow, I need to cd into this myproject folder and run:
docker run -v $(pwd):/project \
-w /project \
-p 8081:8081 \
gcr.io/base-project/myoh:v1
Any idea how to convert this to either a docker-compose or a k8s pod/deployment YAML? I have tried everything that comes to mind, with no success.
The bind mount of the current directory can't be translated to Kubernetes at all. There's no way to connect a pod's filesystem back to your local workstation. A standard Kubernetes setup has a multi-node installation, and if it's possible to directly connect to a node (it may not be) you can't predict which node a pod will run on, and copying code to every node is cumbersome and hard to maintain. If you're using a hosted Kubernetes installation like GKE, it's even possible that the cluster autoscaler will create and delete nodes automatically, and you won't have an opportunity to manually copy things in.
You need to build your application code into a custom image. That image can set the desired WORKDIR, COPY the code in, and RUN any setup commands that are required. Then you need to push it to an image repository, like GCR:
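A minimal Dockerfile along those lines could look like this (a sketch: the base image, working directory, and the init command are assumptions based on your docker run workflow, so adjust them to your actual tool):

```dockerfile
# Sketch: bake the project into the image instead of bind-mounting it
FROM gcr.io/base-project/myoh:v1
WORKDIR /project
# Run the one-time init step at build time (assumed equivalent of the
# `docker run ... init myproject` step; replace `myoh` with whatever
# binary the image's entrypoint actually invokes)
RUN myoh init myproject
# Copy any local project files into the image
COPY . /project
EXPOSE 8081
```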
docker build -t gcr.io/base-project/my-project:v1 .
docker push gcr.io/base-project/my-project:v1
Once you have that, you can create a minimal Kubernetes Deployment to run it. Set the GCR name of the image you built and pushed as its image:. You will also need a Service to make it accessible, even from other Pods in the same cluster.
Try this (untested yaml, but you will get the idea)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myoh-deployment
  labels:
    app: myoh
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myoh
  template:
    metadata:
      labels:
        app: myoh
    spec:
      initContainers:
      - name: init-myoh
        image: gcr.io/base-project/myoh:v1
        command: ['sh', '-c', "mkdir -p myproject"]
      containers:
      - name: myoh
        image: gcr.io/base-project/myoh:v1
        ports:
        - containerPort: 8081
        volumeMounts:
        - mountPath: /projects
          name: project-volume
      volumes:
      - name: project-volume   # must match the volumeMounts name above
        hostPath:
          # directory location on host
          path: /data
          # this field is optional
          type: Directory
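And a minimal Service to go with it (also untested; names are assumed to line up with the Deployment above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myoh
spec:
  selector:
    app: myoh
  ports:
  - port: 8081
    targetPort: 8081
```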
On Docker Hub there is an image which is maintained by Amazon.
Does anyone know how to configure and start the container? I cannot find any documentation.
I got this working! I was having the same issue as you when seeing Reading json config file path: /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json ... Cannot access /etc/cwagentconfig: lstat /etc/cwagentconfig: no such file or directory. Valid Json input schema.
What you need to do is put your config file at /etc/cwagentconfig. A functioning Dockerfile:
FROM amazon/cloudwatch-agent:1.230621.0
COPY config.json /etc/cwagentconfig
Where config.json is some cloudwatch agent configuration, such as given by LinPy's answer.
You can ignore the warning about /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json, or you can also COPY the config.json file to that location in the dockerfile as well.
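For example (the same config file copied to both locations; the tag is the one from the Dockerfile above):

```dockerfile
FROM amazon/cloudwatch-agent:1.230621.0
COPY config.json /etc/cwagentconfig
# Optionally replace the default config as well, to silence the startup warning
COPY config.json /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json
```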
I will also share how I found this answer:
I needed this run in ECS as a sidecar, and I could only find docs on how to run it in kubernetes. Following this documentation: https://docs.aws.amazon.com/en_pv/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-StatsD.html I decided to download all the example k8s manifests, when I saw this one:
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: amazonlinux
spec:
  containers:
  - name: amazonlinux
    image: amazonlinux
    command: ["/bin/sh"]
    args: ["-c", "sleep 300"]
  - name: cloudwatch-agent
    image: amazon/cloudwatch-agent
    imagePullPolicy: Always
    resources:
      limits:
        cpu: 200m
        memory: 100Mi
      requests:
        cpu: 200m
        memory: 100Mi
    volumeMounts:
    - name: cwagentconfig
      mountPath: /etc/cwagentconfig
  volumes:
  - name: cwagentconfig
    configMap:
      name: cwagentstatsdconfig
  terminationGracePeriodSeconds: 60
So I saw that the cwagentconfig volume mounts at /etc/cwagentconfig, that it comes from the cwagentstatsdconfig ConfigMap, and that that's just the JSON config file.
You just need to run the container with --log-opt, as the log agent is the main process of the container.
docker run --log-driver=awslogs --log-opt awslogs-region=us-west-2 --log-opt awslogs-group=myLogGroup amazon/cloudwatch-agent
You can find more details here and here.
I do not know why you need the agent in a container, but the best practice is to send each container's logs directly to CloudWatch using the awslogs log driver.
Btw, this is the entrypoint of the container:
"Entrypoint": [
"/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent"
],
All you need is to call
/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent
Here is how I got it working in our Docker containers without systemctl or System V init.
This is from official Documentation:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:configuration-file-path -s
Here are the docs:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html#start-CloudWatch-Agent-EC2-commands-fleet
The installation path may be different, but that is how the agent is started per the docs.
I'm using two VMs with Atomic Host (1 master, 1 node; CentOS image). I want to use NFS shares from another VM (Ubuntu Server 16.04) as persistent volumes for my pods. I can mount them manually, and in Kubernetes (version 1.5.2) the persistent volumes are successfully created and bound to my PVCs. They are also mounted in my pods. But when I try to write to or even read from the corresponding folder inside the pod, I get the error Permission denied. From my research I think the problem lies with the folder permissions/owner/group on my NFS host.
My exports file on the Ubuntu VM (/etc/exports) has 10 shares with the following pattern (The two IPs are the IPs of my Atomic Host Master and Node):
/home/user/pv/pv01 192.168.99.101(rw,insecure,async,no_subtree_check,no_root_squash) 192.168.99.102(rw,insecure,async,no_subtree_check,no_root_squash)
In the image for my pods I create a new user named guestbook, so that the container doesn't use a privileged user, as this is insecure. I read many posts like this one that state you have to set the permissions to world-writable or use the same UID and GID for the shared folders. So in my Dockerfile I create the guestbook user with UID 1003 and a group with the same name and GID 1003:
RUN groupadd -r guestbook -g 1003 && useradd -u 1003 -r -g 1003 guestbook
On my NFS Host I also have a user named guestbook with UID 1003 as a member of the group nfs with GID 1003. The permissions of the shared folders (with ls -l) are as following:
drwxrwxrwx 2 guestbook nfs 4096 Feb 19 11:23 pv01
(world writable, owner guestbook, group nfs). In my Pod I can see the permissions of the mounted folder /data (again with ls -l) as:
drwxrwxrwx. 2 guestbook guestbook 4096 Feb 9 13:37 data
The persistent volumes are created with a YAML file with the pattern:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
  annotations:
    pv.beta.kubernetes.io/gid: "1003"
spec:
  capacity:
    storage: 200Mi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /home/user/pv/pv01
    server: 192.168.99.104
The Pod is created with this YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: get-started
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: get-started
    spec:
      containers:
      - name: get-started
        image: docker.io/cebberg/get-started:custom5
        ports:
        - containerPort: 2525
        env:
        - name: GET_HOSTS_FROM
          value: dns
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: redis
              key: database-password
        volumeMounts:
        - name: log-storage
          mountPath: "/data/"
        imagePullPolicy: Always
        securityContext:
          privileged: false
      volumes:
      - name: log-storage
        persistentVolumeClaim:
          claimName: get-started
      restartPolicy: Always
      dnsPolicy: ClusterFirst
And the PVC with YAML file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: get-started
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
I tried different configurations for the owner/group of the folders. If I use my normal user (which is the same on all systems) as owner and group, I can mount manually and read and write in the folder. But I don't want to use my normal user; I want to use another user (and especially not a privileged one).
What permissions do I have to set, so that the user I create in my Pod can write to the NFS volume?
I found the solution to my problem:
By accident I found log entries that appear every time I try to access the NFS volumes from my pods. They say that SELinux has blocked access to the folder because of a different security context.
To resolve the issue, I simply had to turn on the corresponding SELinux boolean virt_use_nfs with the command
setsebool virt_use_nfs on
(Use setsebool -P to make the change persist across reboots.)
This has to be done on all nodes to make it work correctly.
EDIT:
I remembered that I now use sec=sys as an option in /etc/exports. This provides access control based on the UID and GID of the user creating a file (which seems to be the default). If you use sec=none, you also have to turn on the SELinux boolean nfsd_anon_write, so that the user nfsnobody has permission to create files.
In order to ease development in Docker, the code is attached to the containers through volumes. That way, there is no need to rebuild the images each time the code is changed.
So, is it correct to think to use the same idea in Kubernetes?
PS: I know that the PersistentVolume and PersistentVolumeClaim concepts allow attaching volumes, but they are intended for data.
Update
To ease development, I do need to use the volume for both code and data. This saves me from rebuilding the images at each code change.
Below is what I am trying to do in minikube:
the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
      - name: php-hostpath
        image: php:7.0-apache
        ports:
        - containerPort: 80
        volumeMounts:
        - name: vol-php-hostpath
          mountPath: /var/www/html
      volumes:
      - name: vol-php-hostpath
        hostPath:
          path: '/home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube/src/'
the service
apiVersion: v1
kind: Service
metadata:
  name: php-hostpath
  namespace: default
  labels:
    app: php-hostpath
spec:
  selector:
    app: php-hostpath
  ports:
  - port: 80
    targetPort: 80
  type: "LoadBalancer"
The service and the deployment are created successfully in minikube:
$ kubectl get pods -l app=php-hostpath
NAME READY STATUS RESTARTS AGE
php-hostpath-3796606162-bt94w 1/1 Running 0 19m
$ kubectl get service -l app=php-hostpath
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
php-hostpath 10.0.0.110 <pending> 80:30135/TCP 27m
The folder src and the file src/index.php are also created correctly.
<?php
echo "This is my first docker project";
Now I want to check that everything is running:
$ kubectl exec -ti php-hostpath-3796606162-bt94w bash
root@php-hostpath-3796606162-bt94w:/var/www/html# ls
root@php-hostpath-3796606162-bt94w:/var/www/html# exit
exit
The folder src and the file index.php are not in /var/www/html!
Have I missed something?
PS: if I were in a production environment, I would not put my code in a volume.
Thanks,
Based on this doc, host folder sharing is not implemented in the KVM driver yet. This is actually the driver I am using.
To overcome this, there are 2 solutions:
Use the virtualbox driver, so that you can mount your hostPath volume by changing the path on your localhost from /home/THE_USR/... to /hosthome/THE_USR/...
Mount your volume into the minikube VM with the command $ minikube mount /home/THE_USR/.... The command will return the path of your mounted volume on the minikube VM. An example is given below.
Example
(a) mounting a volume on the minikube VM
The minikube mount command returned the path /mount-9p:
$ minikube mount -v 3 /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube
Mounting /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube into /mount-9p on the minikubeVM
This daemon process needs to stay alive for the mount to still be accessible...
2017/03/31 06:42:27 connected
2017/03/31 06:42:27 >>> 192.168.42.241:34012 Tversion tag 65535 msize 8192 version '9P2000.L'
2017/03/31 06:42:27 <<< 192.168.42.241:34012 Rversion tag 65535 msize 8192 version '9P2000'
(b) Specification of the path on the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
      - name: php-hostpath
        image: php:7.0-apache
        ports:
        - containerPort: 80
        volumeMounts:
        - name: vol-php-hostpath
          mountPath: /var/www/html
      volumes:
      - name: vol-php-hostpath
        hostPath:
          path: /mount-9p
(c) Checking if mounting the volume worked well
amine@amine-Inspiron-N5110:~/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube$ kubectl exec -ti php-hostpath-3498998593-6mxsn bash
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo "This is my first docker project";
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html#
PS: this kind of volume mounting is only for development environments. If I were in a production environment, the code would not be mounted: it would be in the image.
PS: I recommend the virtualbox driver instead of KVM.
Hope it helps others.
There is hostPath, which allows you to bind-mount a directory on the node into a container.
In a multi node cluster you will want to restrict your dev pod to a particular node with nodeSelector (use the built-in label kubernetes.io/hostname: mydevhost).
With minikube look at the Mounted Host Folders section.
In my honest opinion, you can do it, but you shouldn't. One of the features of using containers is that you get artifacts (containers) with always the same behaviour. A new version of your code should generate a new container. This way you can be sure, when testing, that any new issue detected is directly related to the new code.
A hybrid approach (which I don't like either, but I think is better) is to create a Docker image that downloads your code (selecting the correct release with envs) and runs it.
Using hostPaths is not a bad idea but can be a mess, if you have a not-so-small cluster.
Of course you can use a PV; after all, your code is data. You can use a distributed storage filesystem like NFS to do it.