Secure Logging drivers with Docker?

I noticed that the fluentd logging driver uses the out_forward output to send logs, meaning all logs are sent in the clear. Is there a way to specify the output type? I'd like to be able to have Docker send logs with out_secure_forward instead.
Are there plans to enable more configuration? Should I use a different logging driver if I want security? Perhaps use the json-file driver and then use fluentd to ship those logs securely?

IMO the best option to do what you want is:
introduce an additional Docker container (A) to run Fluentd in it
configure your Docker containers to send logs (via the fluentd log driver) to that container (A)
send these logs on to the other site from the Fluentd in container (A) using secure_forward (a minimal sketch follows)
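A rough sketch of what container (A)'s Fluentd configuration could look like, assuming the fluent-plugin-secure-forward plugin is installed; the hostnames, shared key, and ports are placeholders:

  <source>
    @type forward            # receives events from the Docker fluentd log driver
    port 24224
    bind 0.0.0.0
  </source>

  <match **>
    @type secure_forward     # forwards over an encrypted channel
    self_hostname aggregator.local    # placeholder
    shared_key    change_me           # placeholder
    secure yes
    <server>
      host logs.example.com           # placeholder remote site
      port 24284
    </server>
  </match>

The application containers would then be started with --log-driver=fluentd and --log-opt fluentd-address=<address of container A>.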

Related

How to handle STDOUT logs in K8s?

In a Docker environment my Java app logs to STDOUT via log4j, and the messages are sent to a Graylog instance. There is no special logging config besides configuring the Console appender to use JsonLayout.
My docker-compose.yml snippet:
logging:
  driver: gelf
  options:
    gelf-address: "tcp://[GRAYLOG_HOST]:[PORT]"
    tag: "[...]"
Everything works fine there. But we are thinking about moving this environment to K8s.
There will be a Graylog instance in K8s, too. It looks like there is no K8s equivalent for the docker-compose.yml logging settings, so it seems that I have to use some kind of logging agent, e.g. fluent-bit. But the fluent-bit documentation makes it look like it can only collect logs from a log file as input (and some other sources), not from STDOUT.
I have the following questions:
Is there another possibility to read the logs directly from STDOUT and send them into Graylog?
If I have to write the log messages into a log file to be read by fluent-bit: do I have to configure log4j with some roll-over strategy to prevent the log file from growing bigger and bigger? I do not want to "waste" my resources "just" for logging.
How do you handle application logs in K8s?
Maybe I misunderstand the logging principles in K8s. Feel free to explain it to me.
Is there another possibility to read the logs directly from STDOUT and send them into Graylog?
Fluent Bit allows for data collection through STDIN. Redirect your application's STDOUT to Fluent Bit's STDIN and you are set.
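A minimal sketch, assuming the JsonLayout lines carry the log text in a message field; the Graylog host is a placeholder:

  java -jar app.jar | fluent-bit -i stdin -o gelf \
    -p host=graylog.example.com -p port=12201 -p mode=tcp \
    -p gelf_short_message_key=message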
If I have to write the log messages into a log file to be read by fluent-bit: do I have to configure log4j with some roll-over strategy to prevent the log file from growing bigger and bigger? I do not want to "waste" my resources "just" for logging.
In this case you can use logrotate.
How do you handle application logs in K8s?
Three possible ways:
Applications directly output their traces to external systems (e.g. databases).
A sidecar container with an embedded logging agent that collects application traces and sends them to a store (again, a database for example).
Cluster-wide centralized logging (e.g. an ELK stack).
I'd recommend using a sidecar container for log collection; this is probably the most widely used solution (a minimal sketch follows).
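A minimal sketch of the sidecar pattern, assuming the app writes its log file to a shared emptyDir volume; the image names are placeholders, and the Fluent Bit configuration (a tail input on /var/log/app plus a gelf output pointing at Graylog) would normally be mounted from a ConfigMap, omitted here:

  apiVersion: v1
  kind: Pod
  metadata:
    name: app-with-log-sidecar
  spec:
    volumes:
      - name: app-logs
        emptyDir: {}        # shared scratch space for the log file
    containers:
      - name: app
        image: my-java-app:latest       # placeholder
        volumeMounts:
          - name: app-logs
            mountPath: /var/log/app
      - name: log-shipper
        image: fluent/fluent-bit:latest
        volumeMounts:
          - name: app-logs
            mountPath: /var/log/app
            readOnly: true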

Is it possible to use stdout/stderr as fluentd source?

Question:
Is it possible to use stdout/stderr as fluentd source?
If not, is there some sort of workaround to implement this?
Background:
I have to containerize a NodeJS web server that uses json-log as a logging resource.
Since containers are ephemeral, I want to extract its logs for debugging purposes.
To do this, I've decided to use EFK stack.
However, since the philosophy of json-log is to "Write to stdout/err", I can only get the logs of the web server from stdout.
After going through the fluentd documentation, I didn't find a way to use stdout/stderr as a source.
Related question:
Is it possible to use stdout as a fluentd source to capture specific logs for write to elasticsearch?
The question has an answer but it is inapplicable in my case.
See https://www.npmjs.com/package/json-log#write-to-stdouterr
You can send logs from json-log to syslog.
So you can use fluent-plugin-syslog to receive logs from json-log, and send them to Fluentd.
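A minimal sketch using Fluentd's built-in syslog input (in_syslog), which can play the same role; the port, tag, and Elasticsearch host are placeholders:

  <source>
    @type syslog             # listens for syslog messages from json-log
    port 5140
    bind 0.0.0.0
    tag app.syslog
  </source>

  <match app.**>
    @type elasticsearch      # fluent-plugin-elasticsearch, for the EFK stack
    host elasticsearch.example.com    # placeholder
    port 9200
    logstash_format true
  </match>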

Can we send logs to multiple locations from a single docker logging driver

I want to send logs to multiple locations from a docker logging driver; is this possible with any logging driver?
For PHP you can use Monolog.
Find Monolog here:
https://github.com/Seldaek/monolog
Monolog is not a driver, it's a PHP package.
It depends on your setup. Can you elaborate more?

docker track logs from dynamically created containers

I have an app that dynamically creates Docker containers, and I can't intercept the way they are created.
I want to see logs from all the machines that are up, no matter whether they were started via docker-compose or the plain docker command line. I need to see all the logs.
Is it possible?
Right now I need to run docker ps, see all the created machines, and run docker logs <container> for each one.
I can't really monitor what is going on inside.
Thanks
An approach is to use a dedicated logging container that gathers log events from the other containers, aggregates them, then stores or forwards them to a third-party service; this approach eliminates the dependency on the host.
Further, a dedicated logging container can automatically collect, monitor, and analyze log events, scale your log-event handling without extra configuration, and retrieve logs through multiple streams of log events, stats, and Docker API data.
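As a concrete sketch, gliderlabs/logspout is one such dedicated logging container: it attaches to the Docker socket and forwards the stdout/stderr of every running container, no matter how the container was started; the syslog endpoint is a placeholder:

  docker run -d --name logspout \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gliderlabs/logspout \
    syslog+tls://logs.example.com:5000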
You can also check this link for some help:
Docker Logging Best Practices

Can docker have multiple logging drivers?

Is it possible to use multiple logging drivers for the same container, say fluentd and json-file?
Thank you.
As of 18.03, Docker Engine Enterprise (EE) supports multiple log drivers, but this is not in the Community Edition (CE):
https://docs.docker.com/ee/engine/release-notes/#18031-ee-1-2018-06-27
No, you can only specify a single logging driver per container.
To have separate sinks for your logs, you'd have to rely on something like fluentd to receive the logs (or read the json-file logs) and configure a pipeline to distribute them (see the sketch below).
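A minimal sketch of such a fan-out pipeline using Fluentd's built-in copy output; the Elasticsearch host and file path are placeholders:

  <match **>
    @type copy               # duplicates each event to every <store>
    <store>
      @type elasticsearch    # fluent-plugin-elasticsearch
      host es.example.com    # placeholder
      port 9200
    </store>
    <store>
      @type file
      path /var/log/fluent/backup    # placeholder
    </store>
  </match>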
Dual logging is available in Docker CE since version 20.10.1.
The feature was previously only available in Docker Enterprise, since version 18.03.1-ee-1.
The official documentation chapter "Dual Logging" doesn't reflect this (as of 2021-01-04).
The feature was open-sourced in pull request #40543 and merged into master on 2020-02-27.
The related GitHub issue #17910 in moby/moby was closed with the following comment:
The upcoming Docker 20.10 release will come with the feature described above ("dual logging"), which uses the local logging driver as a ring-buffer, which makes docker logs work when using a logging driver that does not have "read" support (for example, logging drivers that send logs to a remote logging aggregator).
No, you can only specify a single logging driver, as stated in the official documentation:
You cannot specify more than one log driver.
The log-driver documentation indicates that too:
To configure the Docker daemon to default to a specific logging driver, set the value of log-driver to the name of the logging driver in the daemon.json file...
{
  "log-driver": "syslog"
}
You can see that "log-driver" expects a string, not an array.
In fact, since Docker Engine Enterprise 18.03.1-ee-1, Docker has "just" enabled a dual-logging feature that lets you configure any logging driver while still being able to read the logs with docker logs.
For example, before that feature, specifying this driver in the daemon.json:
{
  "log-driver": "syslog"
}
redirected the logs to a syslog server, but it also stopped Docker from publishing logs to the local logging driver.
Now that is no longer the case: the information is available in both destinations.
Starting with Docker Engine Enterprise 18.03.1-ee-1, you can use docker logs to read container logs regardless of the configured logging driver or plugin. This capability, sometimes referred to as dual logging, allows you to use docker logs to read container logs locally in a consistent format, regardless of the remote log driver used, because the engine is configured to log information to the “local” logging driver.
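A quick way to see dual logging in action on Docker 20.10 or later; the syslog address is a placeholder:

  docker run -d --name web \
    --log-driver syslog \
    --log-opt syslog-address=udp://logs.example.com:514 \
    nginx
  docker logs web    # still works, served from the local ring buffer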
