json.log file with "Loki Logging Driver" Docker plugin

I run a Docker container via an Ansible playbook with, among other options, the following, in order to enable logging with Loki and then visualize the logs in Grafana:
log_driver: loki
log_options: "loki-url=http://myaddress/loki/api/v1/push"
volumes:
- "/var/log/docker/{{applicationName}}/:/var/log/{{applicationName}}/:rw"
Everything works (or seems to work) fine, but at a certain point the container's disk usage grows progressively until the host disk space is saturated. If I restart Docker the problem seems to be resolved, but it comes back after a while.
Could the problem have to do with the file
/var/lib/docker/plugins/---id-plugin---/rootfs/var/log/docker/---container-id---/json.log
where ---id-plugin--- is the ID of the Loki Logging Driver returned by docker plugin ls?
Anyway, what is that file?
Thank you so much in advance.
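For reference, the Loki Docker driver keeps a local json buffer of the logs it ships, and Grafana's plugin documentation describes max-size / max-file options to cap it. A rough sketch of the same Ansible options with rotation added (the values are placeholders, and the dict form of log_options is assumed):
log_driver: loki
log_options:
  loki-url: "http://myaddress/loki/api/v1/push"
  max-size: "50m"   # cap the local json buffer kept by the plugin (assumed option)
  max-file: "3"     # rotate, keeping at most 3 buffer files (assumed option)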

Related

Best client for loki-grafana with docker applications in all OS

I am implementing the loki-grafana log management system and I have several questions.
First of all, let me put you in the context of my environment:
Java applications that log to different files / daemons
They run in Linux Docker containers
These containers can run on a Linux/Windows/Debian host ...
I guess the right option is to run both Loki and Grafana in Docker containers on the machine, together with the rest of the containers.
My question is: which client do I use to ship the logs of my services/applications to Loki/Grafana? Grafana gives us the following alternatives:
Promtail: this is the default one used in the Loki/Grafana guide, but I haven't yet been able to see how to make it read the logs of other applications running in Docker. I was thinking of doing it by sharing volumes with the host, but it seems to me there may be clients that make this easier for me ...
AWS: I don't use the cloud, so this is discarded too.
Docker driver: the one recommended for use with Docker, but since plugins cannot run on Windows it is discarded (which is a problem).
Fluent Bit: a very powerful metrics processor, but in principle I only want to pass the logs to Grafana and manage them from Loki/Grafana. Would this option be of interest in my case?
Fluentd: I find it very similar to Logstash, but it seems you can configure the user/password, which puts it above Logstash.
Logstash: in principle it is linked to Loki and runs in the same image, which seems like a very good option.
Here is the info on the clients.
Any contribution is welcome.
You can get logs from Docker to Loki with Promtail; you only need to bind-mount the Docker logs directory into the Promtail container.
The Fluent stack works well too, but Promtail is more ready to use.
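A rough sketch of that Promtail bind-mount setup (the loki hostname, image and paths are assumptions; the scrape config uses Promtail's standard static file discovery):
# docker-compose.yml (sketch)
promtail:
  image: grafana/promtail
  volumes:
    - /var/lib/docker/containers:/var/lib/docker/containers:ro   # container json logs from the host
    - ./promtail-config.yml:/etc/promtail/config.yml
  command: -config.file=/etc/promtail/config.yml

# promtail-config.yml (sketch)
server:
  http_listen_port: 9080
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://loki:3100/loki/api/v1/push   # assumed Loki address
scrape_configs:
  - job_name: docker
    static_configs:
      - targets: [localhost]
        labels:
          job: docker
          __path__: /var/lib/docker/containers/*/*-json.log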

How can I set a memory limit for a docker container created by visual studio code remote devcontainer configuration

My configuration:
Laptop with VSCode (with VSCode remote development extension) & Docker Desktop.
Server with Docker Host.
Locally, my project just has a devcontainer.json and a docker-compose.yml.
My process:
Open SSH tunnel to server.
In VSCode I hit open in container (which builds the container).
My remote development container builds fine, and I'm able to use everything there.
My little PROBLEM:
When running a simple chown over hugely populated folders, such as node_modules, the container's memory usage goes mad, crashing not just the container but my whole server....
I've tried:
Setting runArgs with:
["-m","3g"]
["--memory=3g"]
["--memory=\"3g\""]
["--memory-reservation=3g"]
Setting a ulimit in my Dockerfile, for the user that later runs the chown command.
Deleting all images and containers on the Docker host to force a clean rebuild, since I haven't found how to inject --no-cache either :-/
HELP! Nothing works... Does anyone have a clue what I could do to prevent the container from consuming all the memory on the server?
Repo with config: https://github.com/gsusI/vscode-remote_dev-config_test
In case anyone finds themselves in the same infinite loop: I found the issue!!
It appears that runArgs is not used when using docker-compose, hence configurations here have no effect.
I know!! You'll expect a warning somewhere, right?
The next best option is to do this through the docker-compose.yml file, right? Well, this only holds if you're using Compose file format version 2, as the version 3 resource limits only apply under Docker Swarm. In my case, I switched to version 2, and now everything works smoothly.
TL;DR
Your docker-compose.yml file should look like this:
version: '2'
services:
  <your-service-name>:
    ...
    mem_limit: 2g
    mem_reservation: 2g
Check this for syntax hints: https://docs.docker.com/compose/compose-file/compose-file-v2/#cpu-and-other-resources
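For completeness, a hedged sketch of how the devcontainer and the version-2 compose file fit together under this approach (the service name app, paths and limits are placeholders):
// .devcontainer/devcontainer.json (sketch)
{
  "name": "my-project",
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",                  // must match the compose service below
  "workspaceFolder": "/workspace"
}

# docker-compose.yml (sketch); memory limits live here, not in runArgs
version: '2'
services:
  app:
    build: .
    volumes:
      - ..:/workspace
    mem_limit: 3g
    mem_reservation: 3g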
You just need to set gb instead of g:
"runArgs": [
  "--network=bridge",
  "--memory=6gb"
],
This works for me fine.
If you are using a Dockerfile, then you can use the devcontainer.json below to increase the shared memory (--shm-size) available to the container.
{
  "name": "ProjectName",
  "build": {
    "dockerfile": "Dockerfile"
  },
  "runArgs": [
    "--gpus", "all",
    "--shm-size=50gb" // gb or mb
  ]
}
Using --memory had no impact on the docker container.
This has worked successfully for me.
You can find more information at https://datawookie.dev/blog/2021/11/shared-memory-docker/.
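If you want to double-check the size that actually got applied, one quick way (a plain docker run shown purely for illustration) is:
# start a throwaway container and report the size of /dev/shm inside it
docker run --rm --shm-size=2gb ubuntu df -h /dev/shm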

Docker Compose "Ghost Containers"

I am using docker-compose to deploy an application combining a number of different images.
Using Docker version 18.09.2, build 6247962
Docker-compose 1.117
Primarily, I have
ZooKeeper
Kafka
MYSQLDb
I noticed a strange problem where I could not start my application with docker-compose up because a port was already assigned. I then checked docker stats and saw that there were three containers named:
"test_ZooKeeper.1slehgaior"
"test_Kafka.kgjdorgsr"
"test_MYSQLDB.kgjdorgsr"
I have tried killing the containers, removing them, and pruning the system. Whenever I kill one of these containers, it instantly restarts, and I cannot for the life of me determine where they are being created from!
Please help :)
If you look into your docker-compose.yaml, I'm pretty sure you'll find a restart: always somewhere. If you want to correctly shut down a running Docker container managed by docker-compose, one way is to use docker-compose down from the directory where your yaml sits.
More information on the subject:
https://docs.docker.com/config/containers/start-containers-automatically/
Otherwise, you might try stopping a single running container instead of killing it, which, if memory serves, tells Docker not to restart it again, while a killed container looks to the service as if it had just crashed. Not too sure about the last part though.
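As a rough illustration of the above (using one of the container names from the question), you could check the restart policy and then take the whole project down:
# see whether a restart policy is what keeps reviving a container
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' test_Kafka.kgjdorgsr

# stop and remove everything the compose project created, networks included
docker-compose down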

docker-compose: Log to persistent file

I know that docker-compose by default logs to a file given by docker inspect --format='{{.LogPath}}' my_container. This file is gone as soon as I kill the container. As I deploy a new version of the image frequently, I lose a lot of log entries.
What I'd like to do is have my container's log entries stored in a persistent log file, just like regular Linux processes do. I can have my deployment script do something like the following, but I'm thinking there's a less hack-ish way of doing this:
docker-compose logs -t -f >> output-$(date +"%Y-%m-%d_%H%M").log
One option would be to configure docker-compose to log to syslog, but for the time being I'd like to log to a dedicated file.
How have others dealt with the issue of persistent logging?
So docker has a concept called logging-drivers. https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers
The default is the json file that you mentioned. The ideal way to do this is to pass --log-driver <driver-name> to your run command, then have another process on the same machine pick these logs up and push them to your central logging system.
The most popular of these are fluentd and splunk, I guess, but you can also choose to write to the json-file or journald drivers.
The docker manual for these are below
Splunk - https://docs.docker.com/config/containers/logging/splunk/
Fluentd - https://docs.docker.com/config/containers/logging/fluentd/
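If you're on docker-compose rather than docker run, the equivalent is usually a logging block on the service; a minimal sketch (service name, image and fluentd address are placeholders):
services:
  my_service:
    image: my-image
    logging:
      driver: fluentd            # or json-file, journald, splunk, ...
      options:
        fluentd-address: localhost:24224
        tag: my_service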

Redirect docker daemon logs to elasticsearch

I have a Docker Swarm cluster and am able to get all Docker "container" logs into the ELK stack, but I am unable to get the Docker daemon logs. Can someone please guide me on how to achieve this?
FYI : My stack is in Linux.
You can use the Filebeat plugin to send the logs from the daemon log file to your ELK stack (see the plugin presentation page).
There is an article on this point on the elastic.co blog. Your configuration will be different since you don't want container logs but Docker daemon logs, found at the path /var/log/docker.log or /var/log/daemon.log.
EDIT 1:
Since in your environment the logs are readable with journalctl, I dug around the internet and found a Logstash plugin that allows you to send the logs from journald: https://github.com/logstash-plugins/logstash-input-journald
I hope it'll help.
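A minimal Filebeat sketch along those lines, assuming a reasonably recent Filebeat and that the daemon logs live at one of the paths above (the Elasticsearch host is a placeholder):
# filebeat.yml (sketch)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/docker.log
      - /var/log/daemon.log
output.elasticsearch:
  hosts: ["elasticsearch:9200"]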
1st: you'd need to find out where your Docker daemon is saving the logs, which depends on the Linux distribution. See this answer with a list of possible places:
https://stackoverflow.com/a/30970134/3165889
2nd: you can follow Paul Rey's suggestion and use Filebeat. As an alternative, I also suggest Fluentd, which you can usually use in place of Logstash (giving you EFK instead of ELK) or simply as an extra tool in your ELK environment.
It can read from a file using the tail input plugin
It can insert data into Elasticsearch using the elasticsearch output plugin
This tutorial teaches how to log containers, but then you'd need to change your input plugin to tail from that file: Docker logging via EFK
I'd also like to add that, if you're interested in logging the daemon, you probably want to capture logs even if Docker itself is failing to start. So I'd install Fluentd directly on the host, NOT in a container.
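A hedged Fluentd sketch of that tail-to-Elasticsearch idea, run directly on the host as suggested (it assumes the fluent-plugin-elasticsearch gem is installed; host, port and paths are placeholders):
# /etc/td-agent/td-agent.conf (sketch)
<source>
  @type tail
  path /var/log/docker.log
  pos_file /var/log/td-agent/docker.log.pos
  tag docker.daemon
  <parse>
    @type none
  </parse>
</source>

<match docker.daemon>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>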
