Is there a serverless Kubernetes Datadog agent? - docker

I have a unique type of Kubernetes cluster on which I cannot install the Kubernetes Datadog agent. I would like to collect the logs of the individual Docker containers in my Kubernetes pods, similar to how the Docker agent works.
I am currently collecting Docker logs from Kubernetes and then using a script with the Datadog custom log forwarder to upload them to Datadog. Is there a better way to achieve this serverless collection of Docker logs from Kubernetes clusters in Datadog? Ideally, I would like to plug my kubeconfig in somewhere and let Datadog take care of the rest, without deploying anything onto my Kubernetes cluster.
Is there an option for that outside of creating a custom script?

A better way would be to use a sidecar container with a logging agent; it won't increase the load on the API server.
Reference: https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-a-logging-agent
The Datadog agent does not appear to support (or recommend) running as a sidecar (https://github.com/DataDog/datadog-agent/issues/2203#issuecomment-416180642), so I suggest using another logging agent and pointing its backend to Datadog.
Some options are:
fluentd: https://blog.powerupcloud.com/kubernetes-pod-management-using-fluentd-as-a-sidecar-container-and-prestop-lifecycle-hook-part-iv-428b5f4f7fc7
fluent-bit: https://github.com/leahnp/fluentbit-sidecar
filebeat: https://www.elastic.co/beats/filebeat
Datadog has integrations for Fluentd and Filebeat:
https://docs.datadoghq.com/integrations/fluentd/
https://docs.datadoghq.com/integrations/filebeat/
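For a concrete picture of the sidecar approach, here is a minimal sketch of a pod in which the application writes its log files to a shared emptyDir volume and a Fluentd sidecar ships them to Datadog. The image tags, paths, ConfigMap, and Secret names are illustrative, and the Fluentd image needs the fluent-plugin-datadog output plugin installed:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest              # illustrative application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app         # the app writes its log files here
  - name: fluentd
    image: fluent/fluentd:v1.16-1     # illustrative tag; needs fluent-plugin-datadog installed
    env:
    - name: DD_API_KEY
      valueFrom:
        secretKeyRef:
          name: datadog-secret        # illustrative Secret holding your API key,
          key: api-key                # referenced from the Fluentd config, e.g. "#{ENV['DD_API_KEY']}"
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app         # sidecar tails the same files
      readOnly: true
    - name: fluentd-config
      mountPath: /fluentd/etc         # fluent.conf with a tail source and a datadog match section
  volumes:
  - name: app-logs
    emptyDir: {}
  - name: fluentd-config
    configMap:
      name: fluentd-datadog-config    # illustrative ConfigMap with the Fluentd configuration

The actual tail source and Datadog output section live in the mounted Fluentd configuration, as described in the Fluentd integration docs linked above.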

Related

Jenkins on k8s and pipeline with docker agent

I want to run my Jenkins behind k8s. We can achieve that with any standard helm chart or our own manifest files. In this case, Jenkins (master only) will run inside a container (Pod).
Now I also want to have a pipeline job that uses a Docker agent, as described here.
I am getting confused about:
how and where this Docker container will be run (on the same node where Jenkins is running? And if that node runs out of capacity, does the Docker agent need to run on a different node?)
how Jenkins will authenticate to run containers on k8s nodes
I have seen the Kubernetes plugin and the Docker plugin. But those plugins create containers beforehand (or at least require setting up a template that decides how containers will start, which image will be used, and so on) and connect Jenkins with the help of JNLP / SSH. I lose the flexibility to use an arbitrary image as an agent in that case.
Going further, I would also like to build custom images on the fly with the help of a Dockerfile shipped along with the code. An example is available in the same link.
I believe this article answers all of your questions: https://devopscube.com/jenkins-build-agents-kubernetes/
With this method you are not losing your flexibility, because your Jenkins master is going to create a K8s pod on the fly. Yes, you additionally need JNLP authentication, but you can think of that as a sidecar container.
About your first question: if you do it exactly that way, your Jenkins jobs are going to run under the Jenkins master, using the same Docker that your Jenkins master is using.
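For illustration, this is roughly what a raw pod template for the Kubernetes plugin can look like. The label, images, and command are placeholders; the jnlp container is what the plugin uses to connect the agent pod back to the controller:

apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins-agent: docker-build          # illustrative label referenced from the pipeline's podTemplate
spec:
  containers:
  - name: jnlp
    image: jenkins/inbound-agent:latest  # connects the agent pod back to the Jenkins controller
  - name: build
    image: maven:3-eclipse-temurin-17    # illustrative build environment; swap in any image you need
    command: ["sleep"]
    args: ["infinity"]                   # keep the container alive so pipeline steps can run inside it
    tty: true

The plugin schedules one such pod per build on whichever node has capacity and tears it down afterwards, so the agent does not have to share a node with the Jenkins master.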

How to configure k8s (GKE) to pull images from docker-registry-proxy

I have one global container registry.
I will have many k8s clusters in different cloud providers. For now I use GKE.
I want to have a local Docker registry cache in each k8s cluster. It reduces pull latency, and I will be safer if the global container registry has a short downtime.
It should work like this: when I deploy something on the k8s cluster, k8s starts pulling the image and goes via this proxy. If the proxy already has the image, it serves it quickly; if not, it pulls it from the global container registry and then serves it.
I tried to set up https://hub.docker.com/r/rpardini/docker-registry-proxy
I ran it, but I can't configure the k8s cluster to use it as a proxy. The docs show how to do it, but that works when you run your own k8s clusters on your own servers and can change the dockerd or containerd service files. I have managed k8s in Google Cloud (GKE), so I can't easily make permanent changes to files on the nodes.
Do you have any ideas on how to achieve what I want?
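For reference, a minimal sketch of running that proxy inside the cluster. The image tag, upstream registry hostname, and cache handling are illustrative; the port, the REGISTRIES variable, and the cache path are taken from the image's README and should be double-checked against the current version:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-registry-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker-registry-proxy
  template:
    metadata:
      labels:
        app: docker-registry-proxy
    spec:
      containers:
      - name: proxy
        image: rpardini/docker-registry-proxy:0.6.2   # illustrative tag; pin one from the image's page
        ports:
        - containerPort: 3128                         # the proxy listens on 3128 by default
        env:
        - name: REGISTRIES
          value: "my-global-registry.example.com"     # placeholder: the upstream registry to cache
        volumeMounts:
        - name: cache
          mountPath: /docker_mirror_cache             # cache directory used by the image (see its README)
      volumes:
      - name: cache
        emptyDir: {}                                  # use a PersistentVolumeClaim for a durable cache

It would typically sit behind a Service; the open question above remains the node side, since the GKE nodes' container runtime still has to be pointed at this proxy, which is exactly the configuration that is hard to change on managed nodes.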

How to handle "docker-in-docker" problem when using Jenkins inside K8S

I'm new to Kubernetes and have a somewhat complex question that needs help.
Background
Using Jenkins in GKE (Google Kubernetes Engine)
Want to use the jenkins-docker plugin to provide a specific test environment for each type of test
Don't want to mix the docker binary into the Jenkins image (because it is large)
Don't want docker-in-docker
More specifically, I don't want the Jenkins Pod to become a new Docker server
What I want
Each test environment can create a new pod in the GKE cluster, rather than creating containers inside the Jenkins Pod
P.S.
I have just read some articles, but half of them are about "how to use K8S to scale up Jenkins (using jenkins-slave + the jenkins-kubernetes plugin)", and the other half are about how to "use the Docker plugin in a dockerized Jenkins container on a bare-metal machine (where you can use /var/run/docker.sock to communicate between the host and the Docker container)", but I cannot find how to use the Docker plugin (to provide a specific environment) in a dockerized Jenkins container inside K8S.

Why does DataDog prefer the Docker-based Agent installation?

According to the DataDog Docker Integration Docs:
There are two ways to run the [DataDog] Agent: directly on each host, or within a docker-dd-agent container. We recommend the latter.
Why is a Docker-based agent installation preferred over just installing the DataDog agent directly as a service on the box that's running the Docker containers?
One of Docker's main features is portability, and it makes sense to bind Datadog into that environment. That way they are packaged and deployed together, and you don't have the overhead of installing Datadog manually everywhere you choose to deploy.
What they are also implying is that you should use docker-compose and turn your application / Docker container into a multi-container Docker application, running your image(s) alongside the Datadog agent. Thus you will not need to write/build/run/manage a container via a Dockerfile, but rather add the agent image to your docker-compose.yml along with its configuration. Starting your multi-container application will still be as easy as:
docker-compose up
It's really convenient, and it gives you additional features like their Autodiscovery service.
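A minimal sketch of such a docker-compose.yml, assuming the current datadog/agent image (the successor of docker-dd-agent). The web service and its image are placeholders, and the API key is injected from the shell environment:

version: "3"
services:
  web:
    image: my-app:latest                       # placeholder for your application image
    ports:
      - "8080:8080"
  datadog-agent:
    image: datadog/agent:7                     # current agent image; docker-dd-agent is its predecessor
    environment:
      - DD_API_KEY=${DD_API_KEY}               # your Datadog API key, read from the shell environment
      - DD_LOGS_ENABLED=true                   # optional: also collect container logs
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # lets the agent discover and monitor the other containers
      - /proc/:/host/proc/:ro                          # host metrics
      - /sys/fs/cgroup/:/host/sys/fs/cgroup:ro         # container metrics

docker-compose up then starts the application and the agent together, which is the packaging benefit described above.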

How to configure a high-availability cluster of MariaDB and Redis in Mesos or CoreOS

In most tutorials, presentations, and demos, only stateless services are presented, load balanced either via DNS (SkyDNS, skydock, etc.) or via a reverse proxy such as HAProxy or Vulcand, configured with etcd or ZooKeeper.
Is there a best practice for deploying a cluster of MariaDB and Redis using:
CoreOS + fleet + Docker; or
Mesos + Marathon + Docker
Any other cluster management solution
How can one configure a Redis cluster and a MariaDB cluster (Galera), when the host running Master may change?
https://github.com/sheldonh/coreos-vagrant/tree/master/redis
http://www.severalnines.com/blog/how-deploy-galera-cluster-mysql-using-docker-containers
After posting the question, I was lucky and came across a few repositories that have achieved what I am looking for:
Redis
https://github.com/mdevilliers/docker-rediscluster - A Redis cluster with two Redis instances and three Redis Sentinel monitors. If the Master fails, the Sentinels promote the Slave to Master. Mark has also created a project that configures HAProxy to use the promoted Master - https://github.com/mdevilliers/redishappy
Percona/Galera cluster
An out-of-the-box working docker image - https://github.com/paulczar/docker-percona_galera
You could use CoreOS (or any other platform where Docker can run) and Kubernetes with SkyDNS integration; this would allow you to fetch the IP address of the master. Kubernetes also comes with a proxy (for service discovery) which sets environment variables in your pods, and you can access them at runtime. I think the best way (and the way you need to go) is to use a service-discovery tool like SkyDNS or something similar. Here is a simple Kubernetes example.
You could also do this with fleet and side-kicks, but I think Kubernetes makes some things a little bit easier for you and is the better choice. It is just a little bit tricky to set up :)
I haven't used Mesos and Marathon so far, but I think they should be able to do this too. They (https://github.com/mesosphere/marathon#features) have all the tools you need to set your cluster up.
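As a small illustration of the service-discovery point: a Kubernetes Service in front of the current Redis master gives clients a stable DNS name and injected environment variables, regardless of which node the master pod lands on. The name and selector labels below are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: redis-master              # placeholder; resolvable via cluster DNS (SkyDNS in older setups)
spec:
  selector:
    app: redis
    role: master                  # placeholder labels identifying the current master pod
  ports:
  - port: 6379
    targetPort: 6379

Pods started after this Service exists also get REDIS_MASTER_SERVICE_HOST and REDIS_MASTER_SERVICE_PORT environment variables, which is the runtime lookup mentioned above.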
