How to start the cloudwatch agent in container? - docker

On Docker Hub there is an image maintained by Amazon.
Does anyone know how to configure and start the container? I cannot find any documentation.

I got this working! I was having the same issue as you, where you see:
Reading json config file path: /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json ...
Cannot access /etc/cwagentconfig: lstat /etc/cwagentconfig: no such file or directory
Valid Json input schema.
What you need to do is put your config file in /etc/cwagentconfig. A functioning Dockerfile:
FROM amazon/cloudwatch-agent:1.230621.0
COPY config.json /etc/cwagentconfig
Where config.json is some CloudWatch agent configuration, such as the one given in LinPy's answer.
You can ignore the warning about /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json, or you can COPY the config.json file to that location in the Dockerfile as well.
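For reference, a minimal sketch of such a config.json, assuming you only want the StatsD listener enabled (values are illustrative; adjust the metrics section for your use case):
{
  "metrics": {
    "metrics_collected": {
      "statsd": {
        "service_address": ":8125",
        "metrics_collection_interval": 60,
        "metrics_aggregation_interval": 300
      }
    }
  }
}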
I will also share how I found this answer:
I needed this to run in ECS as a sidecar, and I could only find docs on how to run it in Kubernetes. Following this documentation: https://docs.aws.amazon.com/en_pv/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-StatsD.html I decided to download all the example k8s manifests, and that's when I saw this one:
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: amazonlinux
spec:
  containers:
    - name: amazonlinux
      image: amazonlinux
      command: ["/bin/sh"]
      args: ["-c", "sleep 300"]
    - name: cloudwatch-agent
      image: amazon/cloudwatch-agent
      imagePullPolicy: Always
      resources:
        limits:
          cpu: 200m
          memory: 100Mi
        requests:
          cpu: 200m
          memory: 100Mi
      volumeMounts:
        - name: cwagentconfig
          mountPath: /etc/cwagentconfig
  volumes:
    - name: cwagentconfig
      configMap:
        name: cwagentstatsdconfig
  terminationGracePeriodSeconds: 60
So I saw that the cwagentconfig volume mounts to /etc/cwagentconfig, that it comes from the cwagentstatsdconfig configmap, and that the configmap is just the JSON config file.
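For illustration, a trimmed-down sketch of that configmap (the key name cwagentconfig.json follows the AWS example manifests; the JSON body is the same kind of agent config as above):
apiVersion: v1
kind: ConfigMap
metadata:
  name: cwagentstatsdconfig
  namespace: default
data:
  cwagentconfig.json: |
    {
      "metrics": {
        "metrics_collected": {
          "statsd": {
            "service_address": ":8125"
          }
        }
      }
    }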

You just need to run the container with --log-opt, as the log agent is the main process of the container.
docker run --log-driver=awslogs --log-opt awslogs-region=us-west-2 --log-opt awslogs-group=myLogGroup amazon/cloudwatch-agent
You can find more details here and here.
I do not know why you need an agent in a container, but the best practice is to send each container's logs directly to CloudWatch using the awslogs log driver.
Btw, this is the entrypoint of the container:
"Entrypoint": [
    "/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent"
],
All you need to do is call:
/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent
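Since that is the default entrypoint, a sketch of starting the agent with plain docker run and a mounted config (the mount target mirrors the /etc/cwagentconfig path used above; region and credential handling are assumptions):
# Mount the config where the agent looks for it and let the default
# entrypoint start the agent; credentials must come from the environment
# or an attached instance role.
docker run --rm \
  -v "$(pwd)/config.json:/etc/cwagentconfig" \
  -e AWS_REGION=us-west-2 \
  amazon/cloudwatch-agent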

Here is how I got it working in our Docker containers without systemctl or System V init.

This is from the official documentation:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:configuration-file-path -s
Here are the docs:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html#start-CloudWatch-Agent-EC2-commands-fleet
The installation path may be different, but that is how the agent is started, as per the docs.
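As a sketch, a container entrypoint built around that command might look like this (the config path and log location are assumptions):
#!/bin/sh
# Fetch the config and start the agent (no systemd inside the container).
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 -c file:/etc/cwagent/config.json -s
# Keep the container alive and surface the agent log.
tail -f /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log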

Related

Can't delete exited Init Container

I'm using Kubernetes 1.15.7 and my issue is similar to https://github.com/kubernetes/kubernetes/issues/62362
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  restartPolicy: Always
  # These containers are run during pod initialization
  initContainers:
    - name: install
      image: busybox
      command:
        - sh
        - -c
        - sleep 60
      volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
    - name: workdir
      emptyDir: {}
On the node where the container is running, if I issue docker container prune, it removes the exited busybox (init) container, only for Kubernetes to restart it again and trigger a pod restart too.
I found the GitHub issue above, which is similar, but it has little explanation. These exited containers as such do not consume much space (as shown by docker system df), but their presence means I cannot run the prune command wholesale on the node.
The kubelet manages garbage collection of Docker containers and images, so you don't have to.
Take a look at the k8s documentation for more info on this topic.
From the k8s documentation:
Garbage collection is a helpful function of kubelet that will clean up unused images and unused containers. Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.
External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist.
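If you need to tune the image side of that garbage collection, the thresholds are exposed in the kubelet configuration; a sketch (field names from the KubeletConfiguration v1beta1 API, values illustrative):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Start image garbage collection when disk usage exceeds 85%
# and clean down until usage falls below 80%.
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80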

Permission denied with Docker in Docker in Atlassian Bamboo Server

I'm trying to build a Docker image using DinD with Atlassian Bamboo.
I've created the deployment/StatefulSet as follows:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: bamboo
  name: bamboo
  namespace: csf
spec:
  replicas: 1
  serviceName: bamboo
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: bamboo
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: bamboo
    spec:
      containers:
        - image: atlassian/bamboo-server:latest
          imagePullPolicy: IfNotPresent
          name: bamboo-server
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          securityContext:
            privileged: true
          volumeMounts:
            - name: bamboo-home
              mountPath: /var/atlassian/application-data/bamboo
            - mountPath: /opt/atlassian/bamboo/conf/server.xml
              name: bamboo-server-xml
              subPath: bamboo-server.xml
            - mountPath: /var/run
              name: docker-sock
      volumes:
        - name: bamboo-home
          persistentVolumeClaim:
            claimName: bamboo-home
        - configMap:
            defaultMode: 511
            name: bamboo-server-xml
          name: bamboo-server-xml
        - name: docker-sock
          hostPath:
            path: /var/run
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Note that I've set privileged: true in securityContext to enable this.
However, when trying to run Docker commands, I get a permission error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See '/var/atlassian/application-data/bamboo/appexecs/docker run --help'
Am I missing something with regard to setting up DinD?
The /var/run/docker.sock file on the host system is owned by a different user than the one running the bamboo-server container process.
Without knowing any details about your cluster, I would assume Docker runs as 'root' (UID 0). The Bamboo server runs as 'bamboo', as can be seen from its Dockerfile, which will normally map to a UID in the 1xxx range on the host system. As these users are different and the container process did not receive any specific permissions over the (host) socket, the error is given.
So I think there are two possible approaches:
Either the container process continues to run as the 'bamboo' user but is given sufficient permissions on the host system to access /var/run/docker.sock. This would normally mean adding the UID that the bamboo user maps to on the host system to the docker group on the host system. However, making changes to the host system might or might not be an option depending on the context of your cluster, and it is tricky in a cluster context because the pod could migrate to a different node where the changes were not applied and/or the UID changes.
Or the container is changed to run as a sufficiently privileged user to begin with, i.e. the root user. There are two ways to accomplish this: 1. you extend and customize the Atlassian-provided base image to change the user, or 2. you override the user the container runs as at run-time by means of the 'runAsUser' and 'runAsGroup' securityContext instructions as specified here. Both should be '0'.
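For the second approach, a sketch of the relevant container fragment (keeping the existing privileged flag; UID/GID 0 is root):
      containers:
        - name: bamboo-server
          image: atlassian/bamboo-server:latest
          securityContext:
            privileged: true
            # Run as root so the process can use the host's docker.sock.
            runAsUser: 0
            runAsGroup: 0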
As mentioned in the documentation here, if you want to run Docker as a non-root user, you need to add the user to the docker group.
Create the docker group if it does not exist:
$ sudo groupadd docker
Add your user to the docker group:
$ sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated:
$ newgrp docker
Verify that you can run docker commands without sudo:
$ docker run hello-world
If that doesn't help, you can change the permissions of the Docker socket, /var/run/docker.sock, so that you are able to connect to the Docker daemon:
sudo chmod 666 /var/run/docker.sock
A better way to handle this is to run a sidecar container, docker:dind, and export DOCKER_HOST=tcp://dind:2375 in the main Bamboo container. This way you invoke Docker in the dind container and won't need to mount /var/run/docker.sock.
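A sketch of that sidecar layout (inside a single pod the daemon is reached via localhost rather than a dind hostname; note that recent docker:dind images enable TLS by default, so listening on plain 2375 requires explicitly disabling it):
      containers:
        - name: bamboo-server
          image: atlassian/bamboo-server:latest
          env:
            # Point the Docker CLI at the sidecar daemon.
            - name: DOCKER_HOST
              value: tcp://localhost:2375
        - name: dind
          image: docker:dind
          securityContext:
            privileged: true
          env:
            # Empty value disables TLS so the daemon listens on tcp/2375.
            - name: DOCKER_TLS_CERTDIR
              value: ""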

Kubernetes - How to use files from a directory in a pod?

I have a builder image/container which is supposed to run tests on a directory with test sources.
The container is run in a Kubernetes pod, in AWS EKS, through helm test, i.e. not plain Docker, so I can't simply use a -v volume mount.
I am struggling to find the right way to bring this directory into the container in a simple way. This is the Helm template I have. Everything works except for the volume.
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-gatling-test"
  annotations:
    "helm.sh/hook": test-success
spec:
  restartPolicy: Never
  containers:
    - name: {{ .Release.Name }}-gatling-test
      image: {{ .Values.builderImage }}
      command: ["sh", "-c", 'mvn -B gatling:test -pl csa-testing -DCSA_SERVER={{ template "project.fullname" . }} -DCSA_PORT={{ .Values.service.appPort }}']
      ## TODO: The builder image also counts with having /tmp/build, so it needs a mount: -v '${job.WORKDIR}:/tmp/build'
      volumeMounts:
        - name: mavenRepoToBuild
          mountPath: /tmp/build
  volumes:
    - name: mavenRepoToBuild
      hostPath:
        path: {{.Values.fromJenkins.WORKDIR}}
I've read in a few places that it can't be done directly. So what's an easy way to do it indirectly? Zip it up and upload to S3, then download? Add it to the image as a layer? Or should I create a Kubernetes volume resource?
The hostPath directory or file must exist on all of your cluster nodes.
You can set a type on the hostPath to determine whether it is a file or a directory.
The list of types you can use with hostPath can be found in the Kubernetes documentation:
https://kubernetes.io/docs/concepts/storage/volumes/
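For example, a sketch of a hostPath volume that uses the type field so the kubelet fails fast if the path is missing (name and path are placeholders):
  volumes:
    - name: test-sources
      hostPath:
        path: /data/tests   # must already exist on every schedulable node
        type: Directory     # mount fails instead of silently creating it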
By the way, what error do you get? Permission denied? You can do a helm dry-run to see the rendered template.

Is it correct to attach code through volume in kubernetes?

To ease development in Docker, the code is attached to the containers through volumes. That way, there is no need to rebuild the images each time the code changes.
So, is it correct to use the same idea in Kubernetes?
PS: I know that the concepts PersistentVolume and PersistentVolumeClaim allow you to attach volumes, but they are intended for data.
Update
To ease development, I need to use volumes for both code and data. This will save me from rebuilding the images at each code change.
Below is what I am trying to do in minikube:
the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
        - name: php-hostpath
          image: php:7.0-apache
          ports:
            - containerPort: 80
          volumeMounts:
            - name: vol-php-hostpath
              mountPath: /var/www/html
      volumes:
        - name: vol-php-hostpath
          hostPath:
            path: '/home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube/src/'
the service
apiVersion: v1
kind: Service
metadata:
  name: php-hostpath
  namespace: default
  labels:
    app: php-hostpath
spec:
  selector:
    app: php-hostpath
  ports:
    - port: 80
      targetPort: 80
  type: "LoadBalancer"
The service and the deployment are created correctly in minikube:
$ kubectl get pods -l app=php-hostpath
NAME                            READY     STATUS    RESTARTS   AGE
php-hostpath-3796606162-bt94w   1/1       Running   0          19m
$ kubectl get service -l app=php-hostpath
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
php-hostpath   10.0.0.110   <pending>     80:30135/TCP   27m
The folder src and the file src/index.php are also created correctly.
<?php
echo "This is my first docker project";
Now I want to check that everything is running:
$ kubectl exec -ti php-hostpath-3796606162-bt94w bash
root@php-hostpath-3796606162-bt94w:/var/www/html# ls
root@php-hostpath-3796606162-bt94w:/var/www/html# exit
exit
The folder src and the file index.php are not in /var/www/html!
Have I missed something?
PS: if I were in a production environment, I would not put my code in a volume.
Thanks,
Based on this doc, host folder sharing is not implemented in the KVM driver yet, and that is the driver I am actually using.
To overcome this, there are two solutions:
Use the virtualbox driver so that you can mount your hostPath volume by changing the path on your localhost from /home/THE_USR/... to /hosthome/THE_USR/...
Mount your volume into the minikube VM with the command minikube mount /home/THE_USR/.... The command will return the path of your mounted volume on the minikube VM. An example is given below.
Example
(a) Mounting a volume on the minikube VM
The minikube mount command returned the path /mount-9p:
$ minikube mount -v 3 /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube
Mounting /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube into /mount-9p on the minikubeVM
This daemon process needs to stay alive for the mount to still be accessible...
2017/03/31 06:42:27 connected
2017/03/31 06:42:27 >>> 192.168.42.241:34012 Tversion tag 65535 msize 8192 version '9P2000.L'
2017/03/31 06:42:27 <<< 192.168.42.241:34012 Rversion tag 65535 msize 8192 version '9P2000'
(b) Specifying the path in the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
        - name: php-hostpath
          image: php:7.0-apache
          ports:
            - containerPort: 80
          volumeMounts:
            - name: vol-php-hostpath
              mountPath: /var/www/html
      volumes:
        - name: vol-php-hostpath
          hostPath:
            path: /mount-9p
(c) Checking that mounting the volume worked
amine@amine-Inspiron-N5110:~/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube$ kubectl exec -ti php-hostpath-3498998593-6mxsn bash
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo "This is my first docker project";
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html#
PS: this kind of volume mounting is only for development environments. If I were in a production environment, the code would not be mounted: it would be in the image.
PS: I recommend virtualbox instead of KVM.
Hope it helps others.
There is hostPath, which allows you to bind mount a directory on the node into a container.
In a multi-node cluster you will want to restrict your dev pod to a particular node with a nodeSelector (use the built-in label kubernetes.io/hostname: mydevhost).
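For example, a sketch of such a dev pod (hostname, image, and paths are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
spec:
  # Pin the pod to the node that actually has the code checked out.
  nodeSelector:
    kubernetes.io/hostname: mydevhost
  containers:
    - name: dev
      image: my-dev-image
      volumeMounts:
        - name: code
          mountPath: /code
  volumes:
    - name: code
      hostPath:
        path: /home/me/project/code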
With minikube, look at the Mounted Host Folders section.
In my honest opinion, you can do it, but you shouldn't. One of the features of using containers is that you get artifacts (containers) with always the same behaviour. A new version of your code should generate a new container. This way you can be sure, when testing, that any new issue detected is directly related to the new code.
A hybrid approach (which I don't like either, but I think is better) is to create a Docker image that downloads your code (selecting the correct release with environment variables) and runs it.
Using hostPaths is not a bad idea, but it can become a mess if you have a not-so-small cluster.
Of course you can use a PV; after all, your code is data. You can use a distributed storage filesystem like NFS to do it.
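A minimal sketch of such an NFS-backed PersistentVolume and claim (server address and export path are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: code-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany   # many pods can mount the same code tree
  nfs:
    server: 10.0.0.5      # placeholder NFS server
    path: /exports/code
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi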

HostPath with minikube - Kubernetes

UPDATE:
I connected to the minikube VM and I can see my host directory mounted, but there are no files in it. Also, when I create a file there, it does not appear on my host machine. Is there any link between them?
I am trying to mount a host directory for developing my app with Kubernetes.
As the docs recommend, I am using minikube to run my Kubernetes cluster on my PC. The goal is to create a development environment with Docker and Kubernetes for developing my app. I want to mount a local directory so my container will read the app code from there, but it does not work. Any help would be really appreciated.
my test app (server.js):
var http = require('http');
var handleRequest = function(request, response) {
response.writeHead(200);
response.end("Hello World!");
}
var www = http.createServer(handleRequest);
www.listen(8080);
my Dockerfile:
FROM node:latest
WORKDIR /code
ADD code/ /code
EXPOSE 8080
CMD ["node", "server.js"]
my Kubernetes pod configuration (pod-configuration.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: apiserver
spec:
  containers:
    - name: node
      image: myusername/nodetest:v1
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: api-server-code-files
          mountPath: /code
  volumes:
    - name: api-server-code-files
      hostPath:
        path: /home/<myuser>/Projects/nodetest/api-server/code
My folders are:
/home/<myuser>/Projects/nodetest/
  - pod-configuration.yaml
  - api-server/
    - Dockerfile
    - code/
      - server.js
When I run my Docker image without the hostPath volume, it of course works, but the problem is that on each change I must recreate my image, which is really impractical for development; that's why I need the hostPath volume.
Any idea why I don't succeed in mounting my local directory?
Thanks for the help.
EDIT: Looks like the solution is to either use a privileged container, or to manually mount your home folder to allow the minikube VM to read from your hostPath -- https://github.com/boot2docker/boot2docker#virtualbox-guest-additions. (Credit to Eliel for figuring this out.)
It is absolutely possible to configure a hostPath volume with minikube - but there are a lot of quirks and there isn't very good support for this particular issue.
Try removing ADD code/ /code from your Dockerfile. Docker's ADD instruction copies the code from your host machine into your container's /code directory at build time. This is why rebuilding the image successfully updates your code.
When Kubernetes tries to mount the container's /code directory to the host path, it finds that this directory is already full of the code that was baked into the image. If you take this out of the build step, Kubernetes should be able to successfully mount the host path at runtime.
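Put differently, a development Dockerfile for this setup would leave /code empty and rely on the mount; a sketch:
FROM node:latest
WORKDIR /code
# No ADD/COPY of the code here: the hostPath volume supplies /code
# at runtime, so the baked-in image content cannot get in the way.
EXPOSE 8080
CMD ["node", "server.js"]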
Also be sure to check the permissions of the code/ directory on your host machine.
My only other thought is related to mounting in the root directory. I had issues when mounting Kubernetes hostPath volumes to/from directories in the root directory (I assume this was permissions related). So, something else to try would be a mountPath like /var/www/html.
Here's an example of a functional hostPath volume:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  volumes:
    - name: example-volume
      hostPath:
        path: '/Users/example-user/code'
  containers:
    - name: example-container
      image: example-image
      volumeMounts:
        - mountPath: '/var/www/html'
          name: example-volume
They have now added minikube mount, which works in all environments:
https://github.com/kubernetes/minikube/blob/master/docs/host_folder_mount.md
Tried on Mac:
$ minikube mount ~/stuff/out:/mnt1/out
Mounting /Users/macuser/stuff/out into /mnt1/out on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
And in pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-server
spec:
  containers:
    - name: my-server
      image: my-image
      volumeMounts:
        - mountPath: /mnt1/out
          name: volume
      # Just spin & wait forever
      command: ["/bin/bash", "-c", "--"]
      args: ["while true; do sleep 30; done;"]
  volumes:
    - name: volume
      hostPath:
        path: /mnt1/out
Best practice would be to build the code into your image; you should not run an image with code just coming from the disk. Your Dockerfile should look more like:
FROM node:latest
COPY code/server.js /code/server.js
EXPOSE 8080
CMD ["node", "/code/server.js"]
Then you run the image on Kubernetes without any volumes. You need to rebuild the image and update the pod every time you update the code.
Also, I'm currently not aware that minikube allows for mounts between the VM it creates and the host you are running it on.
If you really want the extremely fast feedback cycle of changing code while the container is running, you might be able to use just Docker by itself with -v /path/to/host/code:/code, without Kubernetes, and then build the image and deploy and test it on minikube once you are ready. However, I'm not sure that would work if you're changing the main .js file of your node app.
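That plain-Docker inner loop might look like this sketch (the port and image name reuse the question's values; flags are illustrative):
# Mount the working copy over /code; host edits are visible in the
# container without a rebuild.
docker run --rm -p 8080:8080 \
  -v /path/to/host/code:/code \
  myusername/nodetest:v1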
