logging nginx events from a docker container managed by kubernetes

Currently, to my understanding, Kubernetes offers no logging solution of its own, and it also does not allow one to specify the logging driver when using Docker as the container technology, due to scope encapsulation concerns.
This leaves folks with the ugly solution of tailing JSON logs from shared volumes using fluentd, filebeat, or some other file-tailing daemon, parsing them, and then sending them to the desired storage backend.
My question is: is there any repo or public config store for this type of scenario, from people who have gone through this before? My use case involves tailing the logs of an nginx Docker image, and writing the fluentd/grok pattern myself seems really painful; besides, I wouldn't want to struggle with an issue that someone else has already solved.
Thanks

We tried LogDNA and its integration with k8s is pretty solid. Most of the time I just tail the logs of a container using kubectl logs -f [CONTAINER_ID], but I'm guessing you're looking for a persistent approach.
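If you do go the fluentd route, a minimal sketch of the tail-and-parse side might look like the following, wrapped in a Kubernetes ConfigMap so it can be mounted into a fluentd DaemonSet. Everything here is illustrative (the ConfigMap name, namespace, tag and path glob), and it assumes the default Docker json-file driver plus fluentd's built-in json and nginx parsers rather than a hand-written grok pattern:

# Sketch of a ConfigMap fragment for a fluentd DaemonSet; all names are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-nginx-config
  namespace: logging
data:
  nginx.conf: |
    # Tail the Docker json-file logs that the kubelet symlinks under /var/log/containers
    <source>
      @type tail
      path /var/log/containers/nginx-*.log
      pos_file /var/log/fluentd-nginx.pos
      tag kube.nginx
      <parse>
        @type json            # each line is a JSON object with a "log" field
      </parse>
    </source>
    # Re-parse the raw nginx access-log line held in the "log" field
    <filter kube.nginx>
      @type parser
      key_name log
      <parse>
        @type nginx           # fluentd's built-in nginx access-log parser
      </parse>
    </filter>

Note that the built-in nginx parser only covers the default access-log format; if your nginx image uses a custom log_format you would still need your own regexp there.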

Related

All docker stacks are restarting automatically

I have a multi-service environment hosted with Docker Swarm. There are multiple stacks, and all the running containers have a Spring Boot application built in. The issue is that all my stacks restart on their own. Now, I know that in the compose file I have set restart_policy to on-failure, hence the automatic restarts. The problem is that when the services restart, I get errors from a particular service, and this breaks everything.
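For reference, the relevant part of the compose file looks roughly like this (a sketch; the service and image names are illustrative):

# Sketch of the restart policy in the stack's compose file; names are illustrative.
version: "3.7"
services:
  billing-service:
    image: mycompany/billing-service:latest
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure   # swarm restarts a task only when it exits with a non-zero code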
I am not able to figure out what is actually happening.
I did quite a lot of research and found the following:
The Docker daemon is not being restarted; I double-checked this against the daemon's uptime.
I checked docker service ps <Service_ID>, and I can see the service showing shutdown and starting, but no other information.
I checked docker service logs <Service_ID>, but there are no errors there either.
I checked for a resource crunch; I can assure you there were plenty of resources available at both the host and the individual container level.
Can someone help with where exactly to find logs for this event? Any other thoughts on this?
My host is actually a VM hosted on VMware vCenter.
After a lot of research and going through all docker logs, I could not find the solution. Later on, I discovered that there was a memory snapshot taken for backup every 24 hours.
Here is what I observe:
Whenever we take a snapshot, all Docker services running on the host restart automatically. There are no errors; they just restart gracefully.
I found other questions describing this problem with VMware snapshots.
As far as I know, when a snapshot is taken, memory is redirected to a different location and the previous state is saved. I have not been able to find out exactly why this causes the restarts, but yes, this was the root cause of the problem. If anyone is a VMware snapshot expert, please let us know.

Rsyslog can't start inside of a docker container

I've got a Docker container running a service, and I need that service to send logs to rsyslog. It's an Ubuntu image running a set of services in the container. However, the rsyslog service cannot start inside this container, and I cannot determine why.
Running service rsyslog start (this image uses upstart, not systemd) returns only the output start: Job failed to start. There is no further information provided, even when I use --verbose.
Furthermore, there are no error logs from this failed startup process. Because rsyslog is the service that can't start, it's obviously not running, so nothing is getting logged. I'm not finding anything relevant in Upstart's logs either: /var/log/upstart/ only contains the logs of a few things that successfully started, as well as dmesg.log, which simply contains dmesg: klogctl failed: Operation not permitted. From what I can tell, that is due to a Docker limitation that cannot really be fixed, and it's unknown whether it's even related to the issue.
Here's the interesting bit: I have the exact same container running on a different host, and it's not suffering from this issue. Rsyslog is able to start and run in the container just fine on that host. So obviously the cause is some difference between the hosts. But there are LOTS of differences between them (the working one is my local Windows system, the failing one is a virtual machine running in a cloud environment), so I don't know where to begin figuring out which differences could cause this issue and which couldn't.
I've exhausted everything that I know to check. My only option left is to come to Stack Overflow and ask for ideas.
Two questions here, really:
Is there any way to get more information out of the failure to start? start itself is a binary file, not a script, so I can't open it up and edit it. I'm reliant solely on the output of that command, and it's not logging anything anywhere useful.
What could possibly be different between these two hosts that could cause this issue? Are there any smoking guns or obvious candidates to check?
Regarding the container itself, unfortunately it's a container provided by a third party that I'm simply modifying. I can't really change anything fundamental about it, such as the fact that its entrypoint is /sbin/init (which is a very bad practice for Docker containers and is the root cause of all of my troubles). This is also causing some issues with the Docker logging driver, which is why I'm stuck using syslog as the logging solution instead.
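One experiment I'm considering, purely to narrow down the host differences, is relaxing the container's security profile on the failing host to see whether rsyslog then starts. A sketch of that (the service and image names are placeholders for the third-party container, and unconfined seccomp/AppArmor is for diagnosis only, not for production):

# Diagnostic-only sketch: loosen security restrictions to test whether the host's
# default seccomp/AppArmor/capability settings are what keeps rsyslog from starting.
version: "3.7"
services:
  legacy-app:
    image: thirdparty/legacy-app:latest   # placeholder for the third-party image
    cap_add:
      - SYSLOG                # lets klogctl work, mostly to rule that out
    security_opt:
      - seccomp:unconfined    # disable the default seccomp profile
      - apparmor:unconfined   # no-op on hosts without AppArmor (e.g. Docker Desktop)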

How to persist Infinispan sessions after Keycloak docker restart

I have a running Keycloak 8 Docker container, but whenever I restart it, all non-offline sessions disappear. As a result, all users are disconnected whenever I come to update Keycloak.
Causes:
I've read this thread here and understood why access tokens aren't persisted (mainly a performance issue).
As a solution I wanted to use clusters (as recommended here), and I understood that the core part is really just managing Infinispan well.
Ideas:
I first wanted to store the Infinispan data outside the Docker container (in a volume), then searched for where JBoss saves Infinispan data inside the container, but I didn't find anything.
Secondly, I thought about an SPI to manage user sessions externally, but it doesn't seem to be the right solution, as Infinispan already does a good job.
Setting up a cluster, helped by this article about Cross-Datacenter support in Keycloak and this other one about Keycloak Cross Data Center Setup in AWS, seemed like a good starting point, but I'm still using Docker and I'm not sure it's a good idea for me to build Docker images from those tutorials.
Any further ideas would be welcome :)
I have just now tried using a Docker cluster a second time, this time using Docker Swarm, with the info from here:
The PING discovery protocol is used by default in udp stack (which is used by default in standalone-ha.xml). Since the Keycloak image runs in clustered mode by default, all you need to do is to run it:
docker run jboss/keycloak
If you run two instances of it locally, you will notice that they form a cluster.
I simply deployed 3 instances of Keycloak in clustered mode with an external database (Postgres) using docker stack, and it worked well.
Put simply, the Keycloak Docker image already handles this use case when clustering is used.
For more about the cluster use case, please refer to this tutorial on how to set up a Keycloak cluster.
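For completeness, a minimal sketch of the kind of stack file I mean (service names, credentials and the overlay network are illustrative; the DB_* and KEYCLOAK_* variables are the ones documented for the jboss/keycloak image):

# Minimal sketch of a swarm stack with 3 clustered Keycloak instances and an
# external Postgres database. Names, credentials and network are illustrative.
version: "3.7"
services:
  postgres:
    image: postgres:11
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - keycloak-net
  keycloak:
    image: jboss/keycloak:8.0.2   # any Keycloak 8 tag
    environment:
      DB_VENDOR: postgres
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: change-me
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: change-me
    deploy:
      replicas: 3   # the instances discover each other and form the Infinispan cluster
    networks:
      - keycloak-net
networks:
  keycloak-net:
    driver: overlay
volumes:
  pgdata:

One caveat: depending on the environment, UDP multicast discovery may not work across an overlay network; in that case the image's JGROUPS_DISCOVERY_PROTOCOL and JGROUPS_DISCOVERY_PROPERTIES variables let you switch to another JGroups discovery mechanism such as DNS_PING.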

Persisting docker container logs in kubernetes

I'm looking for a really simple, lightweight way of persisting logs from a Docker container running in Kubernetes. I just want stdout (and stderr, I guess) to go to a persistent disk; I don't want anything else for analysing the logs, sending them over the internet to a third party, etc. as part of this.
Having done some reading, I've been considering a DaemonSet with the application container plus another container that has /var/lib/docker/containers mounted, as well as a persistent volume (maybe NFS) mounted too. That second container would then need a way to copy logs from the default Docker JSON logging driver's files in /var/lib/docker/containers to the persistent volume, maybe rsync running regularly.
Would that work (presumably if the rsync container goes down it's going to miss entries, because nothing is queuing; perhaps that's OK, rather than trying to queue potentially huge amounts of logs)? Is this a sensible approach for the desired outcome? It's only for one or two containers, if that makes a difference. Thanks.
Fluentd supports a simple file output plugin (https://docs.fluentd.org/output/file) which you can easily point at a PersistentVolume mount. Otherwise you would configure Fluentd (or Fluent Bit, if you prefer) just as you normally would for Kubernetes, so find your favourite guide and follow it.
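A minimal sketch of that file output, assuming the fluentd pod has a PersistentVolumeClaim mounted at /persisted-logs (the match pattern, path and buffer settings are illustrative):

# Sketch of a ConfigMap fragment for fluentd's file output plugin, writing
# everything it collects to a PersistentVolume mounted at /persisted-logs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-file-output
  namespace: logging
data:
  output.conf: |
    <match **>
      @type file
      path /persisted-logs/${tag}/app    # one directory per tag on the persistent disk
      append true
      <buffer tag,time>
        timekey 1h                       # roll files hourly
        timekey_wait 5m
        flush_mode interval
        flush_interval 60s
      </buffer>
    </match>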

Graylog stream rule with application running on docker

I have an application that runs in a Docker container and logs to our Graylog server; however, the Graylog source field is actually the container ID:
source: 97c0212d3d75
Since the container ID changes frequently, I cannot use the source in my stream rules.
I had a look at the message, and it seems there is not much I can rely on to create stream rules for this application.
Can someone please share some experience with this kind of case? My problem here is that I cannot identify the application or the environment.
I am looking for ideas like:
Is there a way to make the container ID static? (Probably not.)
Is there a way to send more information to Graylog without making code changes, or without writing code to specify the specific values?
Any better ideas?
I just realised I can set the hostname for my Docker containers, so adding the following to my docker-compose file should work:
hostname: billing-rq-${ENV_NAME}
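In context, the service block ends up looking something like this (the image name and ENV_NAME values are just examples from my setup):

# Sketch of the compose service with an explicit hostname; the hostname becomes
# the "source" field that Graylog sees, instead of the random container ID.
services:
  billing-rq:
    image: mycompany/billing-rq:latest
    hostname: billing-rq-${ENV_NAME}   # e.g. billing-rq-staging, billing-rq-prod
    environment:
      ENV_NAME: ${ENV_NAME}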
