How to create rolling logs for Filebeat within a docker container

I'm new to log4j2 and the Elastic Stack.
I have a Filebeat docker container that doesn't work exactly how I want, and now I want to take a look at its logs. But when I do docker-compose logs I get so many debug messages and JSON objects that the output is unreadable.
How can I create a log4j2 properties setup that produces rolling log files, maybe putting the old logs into a monthly folder or something? And where do I put this log4j2.properties file?
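(For reference: rolling files in log4j2 are configured with a RollingFile appender. Note that Filebeat itself is a Go program with its own logging configuration; log4j2 applies to the Java parts of the stack, such as Elasticsearch or Logstash. A minimal log4j2.properties sketch, where every name and path is a placeholder rather than something from this question:

# hypothetical log4j2.properties; it goes on the application's classpath
# (for Elasticsearch, it lives in the config directory)
status = warn
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = /var/log/myapp/app.log
# %d{yyyy-MM} in the file pattern drops rotated logs into a per-month folder
appender.rolling.filePattern = /var/log/myapp/%d{yyyy-MM}/app-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d %p %c - %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 10MB
rootLogger.level = info
rootLogger.appenderRef.rolling.ref = rolling)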

It's generating a lot of output because you're running docker-compose logs, which gets the logs for all the containers in your docker-compose file.
What you want is probably one of:
docker logs <name-of-filebeat-container>. The name of the filebeat container can be found by running docker ps.
docker-compose logs <name-of-filebeat-service>. The name of the service can be found in your docker-compose.yml file.
Regarding the JSON output, you can check which logging driver your Docker engine uses by default with:
# docker info | grep 'Logging Driver'
Logging Driver: json-file
If your container has a different logging driver, you can check it with:
docker inspect -f '{{.HostConfig.LogConfig.Type}}' <name-or-id-of-the-container>
You can find all log drivers in this link
To run containers with a different log-driver you can do:
With docker run: docker run -it --log-driver <log-driver> alpine ash
With docker-compose:
logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"
Regarding your log rotation question, I'd say the easiest way is to configure the logging driver with the syslog driver, point it at your local machine (or your syslog server), and then logrotate the files.
You can find several logrotate articles for Linux (which I assume you're using), for example this one.
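As an illustration, a sketch of what such a logrotate rule could look like; the path, schedule, and archive directory below are assumptions, not something from this answer:

# hypothetical /etc/logrotate.d/docker-syslog
/var/log/docker/*.log {
    monthly
    rotate 12
    compress
    missingok
    notifempty
    # move rotated files into an archive folder, close to the
    # "monthly based folder" idea from the question
    olddir /var/log/docker/archive
}

Note that the olddir directory must already exist and be on the same filesystem as the logs, so create /var/log/docker/archive beforehand.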

Related

Docker-compose not showing logs

I'm running my Express and Mongo API with docker-compose using the command docker-compose up. Everything works fine, but when I try to show the logs I get the following output error:
With docker-compose logs you need to use the name of the service in the docker-compose.yaml, not the name of the container.
You ran docker-compose logs -f backend_api_1, which is the name of the container. If your docker-compose file does not contain any special renaming, the following should work: docker-compose logs -f backend_api (assuming the service is called backend_api)
This is a common confusion point with docker-compose orchestration. Docker compose deals with services, which then can start one or more containers for a service.
You can clarify this for yourself by looking at the manual page for whatever command you plan to use, as it will tell you whether it requires a service name or a container name.
For docker-compose logs the manual shows:
Usage: logs [options] [SERVICE...]
Since we don't have your docker-compose.yaml to refer to, we can only infer that you may have named the service backend_api. I'm just repeating the answer provided by Dennis van de Hoef, which is a reasonable guess based on how docker will name containers for you.
docker-compose logs -f backend_api
The docker logs command can be used to look at the logs of a container.
docker logs -f backend_api_1
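To make the service-vs-container distinction concrete, here is a hypothetical docker-compose.yaml; the service and image names are guesses based on the question:

services:
  backend_api:              # service name, what docker-compose logs expects
    image: my-api:latest    # hypothetical image
    ports:
      - "3000:3000"

From a service named backend_api, docker-compose derives container names from the project and service name plus an index, which is where a container name like backend_api_1 comes from; those container names are what docker logs expects.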

Grafana config volume mapping not working while running from Docker

1) I am running Grafana v6.7.2 from Docker.
2) I wanted to enable the Grafana log. Since I am running from Docker, /etc/grafana/grafana.ini is read-only.
3) So I copied that grafana.ini to the host that Docker is running on, and un-commented this line to enable logging: logs = /var/log/grafana
#################################### Paths ####################################
[paths]
# Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)
;data = /var/lib/grafana
# Temporary files in `data` directory older than given duration will be removed
;temp_data_lifetime = 24h
# Directory where grafana can store logs
logs = /var/log/grafana
4) I made sure to stop the Grafana container, then issued the following command to re-start Grafana, this time with a volume mapping for the config:
docker run -d -p 3000:3000 -v "$PWD/grafana.ini:/etc/grafana/grafana.ini" -v grafana-storage:/var/lib/grafana grafana_internal:latest
5) I made sure the Grafana container is running and that I can access the UI.
6) Then I checked whether a log is generated under /var/log/grafana/, using docker exec <yourimage> ls /var/log/grafana
The issue is that there was no Grafana log, which led me to believe the config volume mapping may not be working as expected.
Any pointers would be helpful.
thanks.
If you look at the running grafana instance using e.g. ps, you'll see this:
$ ps -fe | grep grafana
1 grafana 0:00 grafana-server --homepath=/usr/share/grafana --config=/etc/grafana/grafana.ini --packaging=docker cfg:default.log.mode=console cfg:default.paths.data=/var/lib/grafana cfg:default.paths.logs=/var/log/grafana cfg:default.paths.plugins=/var/lib/grafana/plugins cfg:default.paths.provisioning=/etc/grafana/provisioning
If you take a close look at those config options, you'll see:
cfg:default.log.mode=console
That means that Grafana will log only to the console. You can inspect these logs using docker logs. There's not really any reason to have Grafana log to a file as well (or instead).
If you really want Grafana to log to a file, you need to include the following in your grafana.ini:
[log]
mode = console file
With this in my grafana.ini, I see output on the docker console and I see logs in /var/log/grafana/grafana.log.
But like I said, I don't see any point in creating the logfile when you can capture the same information from docker logs.
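If you do want the log file to survive on the host, note that /var/log/grafana would also need a mount; a hypothetical variant of the run command from the question:

$ docker run -d -p 3000:3000 \
    -v "$PWD/grafana.ini:/etc/grafana/grafana.ini" \
    -v "$PWD/grafana-logs:/var/log/grafana" \
    -v grafana-storage:/var/lib/grafana grafana_internal:latest

The host directory has to be writable by the grafana user inside the container, so you may need to adjust its ownership or permissions.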

Do logs get saved on Google Kubernetes

I am running a deployment which contains three containers: the app, nginx, and a Cloud SQL instance. I have a lot of print statements in my Python-based app.
Every time a user interacts with the app, output is printed. I want to know if these logs are saved by default at any location.
I am worried that these logs might consume the space on the nodes in the cluster running it. Does this happen, or do Kubernetes deployments not save any logs by default?
The applications run in containers, usually under Docker, and the stdout/stderr logs are saved for the lifetime of the container in the graph directory (usually /var/lib/docker).
You can look at the logs with either:
$ kubectl logs <pod-name> -c <container-in-pod>
Or:
$ ssh <node>
$ docker logs <container>
If you'd like to know more about where they are stored, you can go into the /var/lib/docker directory and see the logs stored in JSON format:
$ cd /var/lib/docker/containers
$ find . | grep json.log
./3454a0681100986248fd81856fadfe7cd95a1a6467eba32adb33da74c2c5443d/3454a0681100986248fd81856fadfe7cd95a1a6467eba32adb33da74c2c5443d-json.log
./80a87a9529a55f8d3fb9b814f0158dc91686704222e252b256455bcde48f56a5/80a87a9529a55f8d3fb9b814f0158dc91686704222e252b256455bcde48f56a5-json.log
...
If you'd like to do garbage collection on 'Exited' containers you can read more about it here.
Another way is to set up a cron job that runs periodically on your nodes that does this:
$ docker system prune -a --force
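For example, a root crontab entry along these lines would run the prune nightly at 03:00 (a hypothetical sketch; note that -a also removes unused images and networks, not just exited containers, so use it deliberately):

# added via crontab -e on each node
0 3 * * * docker system prune -a --force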

How to make syslog work in Docker?

My application will send out syslog local0 messages.
When I moved my application into docker, I found it difficult to view the syslog.
I've tried to run docker with --log-driver set to syslog or journald; both behave strangely: /var/log/local0.log shows the console output of the docker container instead of my application's syslog when I run this command inside the container:
logger -p local0.info -t a message
So, I try to install syslog-ng inside the docker container.
The docker host is Arch Linux (kernel 4.14.8 + systemd).
The docker container is running CentOS 6. If I install syslog-ng inside the container and start it, it shows the following messages:
# yum install -y syslog-ng # this will install syslog-ng 3.2.5
# /etc/init.d/syslog-ng start
Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'
Starting syslog-ng: Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'
Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'
Error initializing source driver; source='s_sys', id='s_sys#0'
Error initializing message pipeline;
I also had problems getting the standard syslog output from my app after it was dockerized.
I attacked the problem from a different direction: I wanted to get the container's syslog into the host's /var/log/syslog.
I ran my container with an extra mount of the /dev/log device, and voilà, it worked like a charm.
docker run -v /dev/log:/dev/log sysloggingapp:latest
CentOS 6:
1.
Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'
Starting syslog-ng: Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'
You can fix the above error by installing the syslog-ng-libdbi package:
yum install -y syslog-ng-libdbi
2.
Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'
Error initializing source driver; source='s_sys', id='s_sys#0'
Error initializing message pipeline;
Since syslog-ng doesn't have direct access to the kernel messages, you need to disable (comment out) that source in its configuration:
sed -i 's|file ("/proc/kmsg"|#file ("/proc/kmsg"|g' /etc/syslog-ng/syslog-ng.conf
CentOS 7:
1.
Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'
The system() source is in the default configuration. This source reads platform-specific sources automatically, and reads /dev/kmsg on Linux if the kernel is version 3.5 or newer. So we need to disable (comment out) the system() source in the configuration file:
sed -i 's/system()/# system()/g' /etc/syslog-ng/syslog-ng.conf
2. When we start it in foreground mode (syslog-ng -F) we get the following:
# syslog-ng -F
syslog-ng: Error setting capabilities, capability management disabled; error='Operation not permitted'
So, we need to run syslog-ng as root, without capability-support:
syslog-ng --no-caps -F
Another way is to set up central logging with a syslog/rsyslog server, and then use the syslog docker driver for logging. The syntax to use on the docker run command line is:
$ docker run --log-driver=syslog \
--log-opt syslog-address=udp://address:port image-name
The destination syslog server protocol can be udp or tcp, and the server address can be a remote server, a VM, a different container, or a local container address.
Replace image-name with your application's docker image name.
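On the receiving side, the rsyslog server must have its network inputs enabled; a hypothetical fragment for /etc/rsyslog.conf (port 514 is the conventional syslog port, adjust to match whatever you put in syslog-address):

# load UDP and TCP inputs so remote containers can log here
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")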
A ready-made rsyslog docker image is available at https://github.com/jumanjihouse/docker-rsyslog
References: Docker Logging at docker.com,
Docker CLI, https://www.aquasec.com/wiki/display/containers/Docker+Containers+vs.+Virtual+Machines
For anyone trying to figure this out in the future:
The best way I've found is to set the LOG_PERROR flag in openlog().
That way, your syslog will print to stderr, which docker will then log by default (you don't need to run a syslog process in docker for this). This is much easier than trying to figure out how to run a syslog process alongside your application inside your docker container (which docker probably isn't designed to do anyway).
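A minimal C sketch of that idea; the identifier names are illustrative only:

#include <syslog.h>

int main(void) {
    /* LOG_PERROR copies every message to stderr in addition to syslog,
       so `docker logs` sees it without any syslog daemon in the container */
    openlog("myapp", LOG_PERROR | LOG_PID, LOG_LOCAL0);
    syslog(LOG_INFO, "application started");
    closelog();
    return 0;
}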

saving docker log files with volumes produces permission denied

I am trying to test saving the log files of docker containers while playing on this site, which gives you a Linux root shell with Docker installed. I've used the solution provided here:
docker run -ti -v /dev/log:/root/data --name zizimongodb mongo
This is what I got in the console:
docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:57: mounting \\\"/dev/log\\\" to rootfs \\\"/graph/overlay2/7f1eb83902e3688c0a1204c2fe8dfd8fbf43e1093bc578e4c41028e8b03e4b38/merged\\\" at \\\"/graph/overlay2/7f1eb83902e3688c0a1204c2fe8dfd8fbf43e1093bc578e4c41028e8b03e4b38/merged/root/data\\\" caused \\\"permission denied\\\"\"".
But the container has started:
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
8adaa75ba6f7        mongo               "docker-entrypoint..."   2 minutes ago       Created                                 zizimongodb
docker logs -f zizimongodb returns nothing. When I stop the container, nothing is saved in /root/data. Any idea how I can correctly save all the logs?
Since you are using the official mongo image from Docker Hub, it is worth pointing out that this official image (like many, or perhaps all, of the official images) does not send log output to the default log locations you might expect from a Linux distro version of the same software.
Instead, most software that is capable of being told where to log is forced to log to stdout/stderr, so that docker log plugins and the docker log command itself work properly.
For the mongodb case you can see this somewhat complicated code here that tells the mongodb process to use the /proc filesystem file descriptor that maps to "stdout", as long as it is writeable when the container is started. Because of some bugs this is more complicated than other Dockerfile customizations of log output (you can read more, if interested, at the links in the comments).
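For comparison, some official images sidestep the problem by symlinking their usual log paths to the container's stdout/stderr in the Dockerfile; the nginx image, for instance, does essentially this:

# forward the default log files to the docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log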
I think a more reasonable way to do some form of log consolidation or collection is to read about docker log drivers and see if any of those options works for you. For example, if you like journald, there is a driver which will take all container logs and pass them to journald on the host.
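For instance, to route the logs from the question's container into the host journal (the container name comes from the question; the rest is a sketch):

$ docker run --log-driver=journald --name zizimongodb mongo
$ journalctl CONTAINER_NAME=zizimongodb

The journald driver attaches fields such as CONTAINER_NAME to each entry, which is what makes the journalctl filter above work.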
