When a configmap is updated, how can the application be made to reload its parameters automatically? The application uses POSIX signals for that.
Depending on how you are consuming the configmap values, there are two ways to get configmap updates into a running pod.
If you are consuming the configs as environment variables, you can write a controller that watches for config updates and restarts your pods with the new config whenever it changes.
If you are consuming the configmap via volumes, you can watch for file changes, notify your process in the container, and handle the update in the application. See https://github.com/jimmidyson/configmap-reload for an example.
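As a rough illustration of the volume-based approach, a sidecar can poll the mounted configmap and signal the main process when the contents change. The sketch below is not from the original answer: the image name myapp:latest, the process name myapp, the 30-second poll interval, and the use of shareProcessNamespace are all placeholders you would adapt.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-watcher
spec:
  # Lets the watcher container see (and signal) the app's process.
  shareProcessNamespace: true
  containers:
  - name: app
    image: myapp:latest               # hypothetical application image
    volumeMounts:
    - name: app-config
      mountPath: /etc/app-config
  - name: config-watcher
    image: busybox
    command: ["sh", "-c"]
    args:
    - |
      # Checksum the mounted configmap contents every 30s; SIGHUP the app on change.
      last=$(cat /etc/app-config/* | md5sum)
      while true; do
        sleep 30
        now=$(cat /etc/app-config/* | md5sum)
        if [ "$now" != "$last" ]; then
          pkill -HUP myapp
          last="$now"
        fi
      done
    volumeMounts:
    - name: app-config
      mountPath: /etc/app-config
  volumes:
  - name: app-config
    configMap:
      name: app-config

Keep in mind that the kubelet refreshes configmap volumes periodically, so the signal arrives some time after the edit rather than immediately.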
There are good solutions mentioned around here, but I tried to find one that could be done without modifying existing deployment pipelines, etc.
Here is an example of a filebeat DaemonSet from a Helm chart that reloads when the filebeat config changes. The approach is not new: use the liveness probe to trigger a reload of the pod from within the pod itself. The postStart hook records an md5 sum of the configmap directory; the liveness probe re-checks it. That's all.
The '...' are just to cut out the cruft. You can see that the filebeat.yml file is mounted directly into /etc and used by filebeat itself. The configmap is mounted again, specifically for the purpose of watching its contents for changes.
Once the configmap is edited (or otherwise modified), it takes some time before the pod is actually restarted. You can tweak all of that separately.
#apiVersion: apps/v1
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ...-filebeat
  ...
      containers:
      - name: ...-filebeat
        image: "{{ .Values.filebeat.image.url }}:{{ .Values.filebeat.image.version }}"
        imagePullPolicy: "{{ .Values.filebeat.image.pullPolicy }}"
        command: [ "filebeat" ]
        args: [
          "-c", "/etc/filebeat-config/filebeat.yml",
          "-e"
        ]
        env:
          ...
        resources:
          ...
        lifecycle:
          postStart:
            exec:
              command: ["sh", "-c", "ls -LRih /etc/filebeat-config | md5sum >> /tmp/filebeat-config-md5.txt"]
        livenessProbe:
          exec:
            # Further commands can be strung to the statement, e.g. calls with curl
            command:
            - sh
            - -c
            - >
              x=$(cat /tmp/filebeat-config-md5.txt);
              y=$(ls -LRih /etc/filebeat-config | md5sum);
              if [ "$x" != "$y" ]; then exit 1; fi
          initialDelaySeconds: 60
          periodSeconds: 60
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat-config
          readOnly: true
        ...
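For instance, the delay before the restart is roughly initialDelaySeconds plus failureThreshold times periodSeconds (on top of the configmap volume refresh delay), so the probe can be tightened like this; the values below are illustrative, not from the original chart:

livenessProbe:
  exec:
    command: ["sh", "-c", "..."]   # same md5 comparison as above
  initialDelaySeconds: 30
  periodSeconds: 15      # check every 15s instead of 60s
  failureThreshold: 1    # restart on the first mismatch (the default is 3)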
I'm using Kubernetes 1.15.7 and my issue is similar to https://github.com/kubernetes/kubernetes/issues/62362
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  restartPolicy: Always
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - sh
    - -c
    - sleep 60
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
On the node where the container is running, if I issue a docker container prune, it removes the exited busybox (init) container, only for it to be recreated and the pod restarted too.
I found a GitHub issue similar to this, but without much explanation. These exited containers barely show up as consuming anything in docker system df, but their presence means I can't simply run the prune command on the node as a whole.
Kubelet manages garbage collection of Docker images and containers, so you don't have to.
Take a look at the k8s documentation for more info on this topic.
From the k8s documentation:
Garbage collection is a helpful function of kubelet that will clean up unused images and unused containers. Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.
External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist.
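For reference, the thresholds that drive this behaviour are tunable on the kubelet. A rough sketch of the relevant flags for a 1.15-era kubelet follows; the values are illustrative, not recommendations, and newer releases steer this configuration toward the kubelet config file instead.

# Container GC: how long and how many dead containers the kubelet keeps around;
# image GC: disk-usage thresholds at which unused images are pruned.
kubelet \
  --minimum-container-ttl-duration=1m \
  --maximum-dead-containers-per-container=2 \
  --maximum-dead-containers=100 \
  --image-gc-high-threshold=85 \
  --image-gc-low-threshold=80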
On Docker Hub there is an image which is maintained by Amazon.
Does anyone know how to configure and start the container? I cannot find any documentation.
I got this working! I was having the same issue as you, seeing Reading json config file path: /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json ... Cannot access /etc/cwagentconfig: lstat /etc/cwagentconfig: no such file or directory ... Valid Json input schema.
What you need to do is put your config file in /etc/cwagentconfig. A functioning Dockerfile:
FROM amazon/cloudwatch-agent:1.230621.0
COPY config.json /etc/cwagentconfig
Where config.json is some CloudWatch agent configuration, such as given by LinPy's answer.
You can ignore the warning about /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json, or you can also COPY config.json to that location in the Dockerfile as well.
I will also share how I found this answer:
I needed this to run in ECS as a sidecar, and I could only find docs on how to run it in Kubernetes. Following this documentation: https://docs.aws.amazon.com/en_pv/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-StatsD.html I downloaded all the example k8s manifests and saw this one:
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: amazonlinux
spec:
  containers:
  - name: amazonlinux
    image: amazonlinux
    command: ["/bin/sh"]
    args: ["-c", "sleep 300"]
  - name: cloudwatch-agent
    image: amazon/cloudwatch-agent
    imagePullPolicy: Always
    resources:
      limits:
        cpu: 200m
        memory: 100Mi
      requests:
        cpu: 200m
        memory: 100Mi
    volumeMounts:
    - name: cwagentconfig
      mountPath: /etc/cwagentconfig
  volumes:
  - name: cwagentconfig
    configMap:
      name: cwagentstatsdconfig
  terminationGracePeriodSeconds: 60
So I saw that the cwagentconfig volume mounts to /etc/cwagentconfig, that it comes from the cwagentstatsdconfig configmap, and that the configmap is just the JSON file.
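For completeness, a minimal version of such a configmap might look like the sketch below. The data key name and the JSON body are assumptions based on the StatsD example in the AWS docs, so check your own config.json for the exact keys:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cwagentstatsdconfig
  namespace: default
data:
  # The agent reads this file from /etc/cwagentconfig via the volume mount above.
  cwagentconfig.json: |
    {
      "metrics": {
        "metrics_collected": {
          "statsd": {
            "service_address": ":8125"
          }
        }
      }
    }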
You just need to run the container with --log-opt, as the log agent is the main process of the container.
docker run --log-driver=awslogs --log-opt awslogs-region=us-west-2 --log-opt awslogs-group=myLogGroup amazon/cloudwatch-agent
You can find more details here and here.
I do not know why you need the agent in a container, but the best practice is to send each container's logs directly to CloudWatch using the awslogs log driver.
By the way, this is the entrypoint of the container:
"Entrypoint": [
"/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent"
],
All you need to do is call
/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent
Here is how I got it working in our Docker containers without systemctl or System V init.
This is from the official documentation:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:configuration-file-path -s
Here are the docs:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html#start-CloudWatch-Agent-EC2-commands-fleet
The installation path may be different, but that is how the agent is started, as per the docs.
I have a builder image / container which is supposed to run tests on a directory containing test sources.
The container runs in a Kubernetes pod, in AWS EKS, through helm test, i.e. not plain Docker, so I can't simply use a -v volume mount.
I am struggling to find the right way to bring this directory into the container in a simple way. This is the Helm template I have. Everything works except for the volume.
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-gatling-test"
  annotations:
    "helm.sh/hook": test-success
spec:
  restartPolicy: Never
  containers:
  - name: {{ .Release.Name }}-gatling-test
    image: {{ .Values.builderImage }}
    command: ["sh", "-c", 'mvn -B gatling:test -pl csa-testing -DCSA_SERVER={{ template "project.fullname" . }} -DCSA_PORT={{ .Values.service.appPort }}']
    ## TODO: The builder image also counts with having /tmp/build, so it needs a mount: -v '${job.WORKDIR}:/tmp/build'
    volumeMounts:
    - name: mavenRepoToBuild
      mountPath: /tmp/build
  volumes:
  - name: mavenRepoToBuild
    hostPath:
      path: {{.Values.fromJenkins.WORKDIR}}
I've read in a few places that it can't be done directly. So what's the easy way to do it indirectly? Zip the directory, upload it to S3, and download it in the container? Add it to the image as a layer? Or should I create a Kubernetes volume resource?
The hostPath directory or file must already exist on all your cluster nodes.
You can set a type on the hostPath to specify whether it is a file or a directory.
The list of types you can use with hostPath can be found in the Kubernetes documentation:
https://kubernetes.io/docs/concepts/storage/volumes/
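For example, giving the volume an explicit type could look like this; the volume name and path below are placeholders, and DirectoryOrCreate is just one of the available types:

volumes:
- name: test-sources                     # placeholder volume name
  hostPath:
    path: /var/jenkins/workspace/myjob   # placeholder for the Jenkins workdir value
    type: DirectoryOrCreate              # create the directory on the node if it is missing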
By the way, what error do you get? Permission denied? You can do a Helm dry run to see the rendered template.
To ease development in Docker, the code is attached to the containers through volumes. That way, there is no need to rebuild the images each time the code changes.
So, is it correct to use the same idea in Kubernetes?
PS: I know that PersistentVolume and PersistentVolumeClaim allow attaching volumes, but they are intended for data.
Update
To ease development, I do need to use a volume for both code and data. This saves me from rebuilding the images at each code change.
Below is what I am trying to do in minikube:
the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
      - name: php-hostpath
        image: php:7.0-apache
        ports:
        - containerPort: 80
        volumeMounts:
        - name: vol-php-hostpath
          mountPath: /var/www/html
      volumes:
      - name: vol-php-hostpath
        hostPath:
          path: '/home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube/src/'
the service
apiVersion: v1
kind: Service
metadata:
  name: php-hostpath
  namespace: default
  labels:
    app: php-hostpath
spec:
  selector:
    app: php-hostpath
  ports:
  - port: 80
    targetPort: 80
  type: "LoadBalancer"
The service and the deployment are created correctly in minikube:
$ kubectl get pods -l app=php-hostpath
NAME READY STATUS RESTARTS AGE
php-hostpath-3796606162-bt94w 1/1 Running 0 19m
$ kubectl get service -l app=php-hostpath
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
php-hostpath 10.0.0.110 <pending> 80:30135/TCP 27m
The folder src and the file src/index.php are also created correctly.
<?php
echo "This is my first docker project";
Now I want to check that everything is running:
$ kubectl exec -ti php-hostpath-3796606162-bt94w bash
root@php-hostpath-3796606162-bt94w:/var/www/html# ls
root@php-hostpath-3796606162-bt94w:/var/www/html# exit
exit
The folder src and the file index.php are not in /var/www/html!
Have I missed something?
PS: if I were in a production environment, I would not put my code in a volume.
Thanks,
Based on this doc, host folder sharing is not implemented in the KVM driver yet, and that is the driver I am actually using.
To overcome this, there are 2 solutions:
Use the virtualbox driver, so that you can mount your hostPath volume by changing the path on your localhost from /home/THE_USR/... to /hosthome/THE_USR/...
Mount your volume into the minikube VM with the command $ minikube mount /home/THE_USR/.... The command returns the path of the mounted volume on the minikube VM. An example is given below.
Example
(a) mounting a volume on the minikube VM
the minikube mount command returned the path /mount-9p
$ minikube mount -v 3 /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube
Mounting /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube into /mount-9p on the minikubeVM
This daemon process needs to stay alive for the mount to still be accessible...
2017/03/31 06:42:27 connected
2017/03/31 06:42:27 >>> 192.168.42.241:34012 Tversion tag 65535 msize 8192 version '9P2000.L'
2017/03/31 06:42:27 <<< 192.168.42.241:34012 Rversion tag 65535 msize 8192 version '9P2000'
(b) Specification of the path on the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
      - name: php-hostpath
        image: php:7.0-apache
        ports:
        - containerPort: 80
        volumeMounts:
        - name: vol-php-hostpath
          mountPath: /var/www/html
      volumes:
      - name: vol-php-hostpath
        hostPath:
          path: /mount-9p
(c) Checking if mounting the volume worked well
amine@amine-Inspiron-N5110:~/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube$ kubectl exec -ti php-hostpath-3498998593-6mxsn bash
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo "This is my first docker project";
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html#
PS: this kind of volume mounting is only for development environments. If I were in a production environment, the code would not be mounted: it would be baked into the image.
PS: I recommend the virtualbox driver instead of KVM.
Hope it helps others.
There is hostPath, which allows you to bind mount a directory on the node into a container.
In a multi-node cluster you will want to restrict your dev pod to a particular node with nodeSelector (use the built-in label kubernetes.io/hostname: mydevhost).
With minikube, look at the Mounted Host Folders section.
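A minimal sketch of such a dev pod; the node name, image, and paths are placeholders, not from the original answer:

apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
spec:
  # Pin the pod to the node that actually has the source checked out.
  nodeSelector:
    kubernetes.io/hostname: mydevhost
  containers:
  - name: app
    image: php:7.0-apache            # placeholder image
    volumeMounts:
    - name: src
      mountPath: /var/www/html
  volumes:
  - name: src
    hostPath:
      path: /home/dev/project/src    # directory on that node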
In my honest opinion, you can do it, but you shouldn't. One of the benefits of using containers is that you get artifacts (containers) that always behave the same. A new version of your code should generate a new container. That way you can be sure, when testing, that any new issue detected is directly related to the new code.
A hybrid approach (which I don't like either, but think is better) is to create a Docker image that downloads your code (selecting the correct release with environment variables) and runs it.
Using hostPath is not a bad idea, but it can become a mess if you have a not-so-small cluster.
Of course you can use a PV; after all, your code is data. You can use a distributed storage filesystem like NFS to do it.
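A rough sketch of that NFS-backed variant; the server address and export path are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: code-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  storageClassName: ""             # bind by claim, not by dynamic provisioning
  nfs:
    server: nfs.example.internal   # placeholder NFS server
    path: /exports/project-src     # placeholder export containing the code
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code-nfs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi

The pod then references the claim with a persistentVolumeClaim volume instead of hostPath, so it can run on any node.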
How can I inject code/files directly into a container in Kubernetes on Google Cloud Engine, similar to the way that you can mount host files / directories with Docker, e.g.
docker run -d --name nginx -p 443:443 -v "/nginx.ssl.conf:/etc/nginx/conf.d/default.conf"
Thanks
It is possible to use ConfigMaps to achieve that goal:
The following example mounts a mariadb configuration file into a mariadb pod:
ConfigMap
apiVersion: v1
data:
  charset.cnf: |
    [client]
    # Default is Latin1, if you need UTF-8 set this (also in server section)
    default-character-set = utf8
    [mysqld]
    #
    # * Character sets
    #
    # Default is Latin1, if you need UTF-8 set all this (also in client section)
    #
    character-set-server = utf8
    collation-server = utf8_unicode_ci
kind: ConfigMap
metadata:
  name: mariadb-configmap
MariaDB deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mariadb
  labels:
    app: mariadb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
        version: 10.1.16
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.1.16
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb
              key: rootpassword
        volumeMounts:
        - name: mariadb-data
          mountPath: /var/lib/mysql
        - name: mariadb-config-file
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: mariadb-data
        hostPath:
          path: /var/lib/data/mariadb
      - name: mariadb-config-file
        configMap:
          name: mariadb-configmap
It is also possible to use the subPath feature, available in Kubernetes since version 1.3, as stated here.
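As an illustration of subPath, the charset.cnf key above could be mounted as a single file without hiding whatever else lives in /etc/mysql/conf.d (a sketch, not part of the original answer):

volumeMounts:
- name: mariadb-config-file
  mountPath: /etc/mysql/conf.d/charset.cnf
  subPath: charset.cnf      # mount only this key from the configmap

One caveat worth noting: files mounted via subPath do not receive live configmap updates.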
I'm not sure you can do that exactly. Kubernetes does things quite differently from Docker, and isn't really ideal for interacting with the 'host' the way you are probably used to with Docker.
A few alternative possibilities come to mind. First, and probably least ideal but closest to what you are asking, would be to add the file after the container is running, either by adding commands or args to the pod spec, or by using kubectl exec and echoing the contents into the file. Second would be to create a volume where that file already exists, e.g. create a GCE or EBS disk, add that file, and then mount the file location (read-only) in the container's spec. Third would be to create a new Docker image where that file or other code already exists.
For the first option, the kubectl exec approach is for one-off jobs; it isn't very scalable or repeatable (a rough example is sketched below). Any creation or fetching at runtime adds that much overhead to the container's start time, so I normally go with the third option, building a new Docker image whenever the file or code changes. The more you change it, the more you'll probably want a CI system (like drone) to help automate the process.
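For instance, that one-off variant might look roughly like this; the pod name and file contents are placeholders:

# Copy a local nginx config into a running pod (one-off, not repeatable at scale).
kubectl cp ./nginx.ssl.conf nginx:/etc/nginx/conf.d/default.conf

# Or write the contents in place and reload nginx.
kubectl exec nginx -- sh -c 'echo "server { listen 443 ssl; }" > /etc/nginx/conf.d/default.conf && nginx -s reload'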
Add a comment if I should expand any of these options with more details.
Kubernetes allows you to mount volumes into your pod. One such volume type is hostPath, which allows you to mount a directory from the host into the pod.