GELF logging driver with Windows Containers - docker

I'm trying to get an ELK stack up (Elasticsearch, Logstash and Kibana) and would like the GELF logging driver to forward events to Logstash. However, whenever I run my container with the specified driver I get docker: Error response from daemon: logger: no log driver named 'gelf' is registered. even though I'm on 1.12.2-cs2-ws-beta. Is there a way to get this working on Windows Server 2016?
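For reference, this is roughly the command that produces the error (a minimal sketch; the image name and Logstash endpoint are placeholders):
docker run --log-driver gelf --log-opt gelf-address=udp://logstash-host:12201 my-image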

The supported log drivers section does list GELF (Graylog Extended Log Format), but by default only on Docker for Linux (so within a Linux VM on other platforms).
The official GELF documentation does recommend in its installation page:
Some modern Linux distribution (Debian Linux, Ubuntu Linux, or CentOS recommended)
So the Docker engine on Windows Server 2016 might simply not register the GELF driver.
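You can check which logging drivers your local daemon has registered (a quick sketch; the Go template field should work on recent Docker versions, but verify against your own docker info output):
docker info --format '{{.Plugins.Log}}'
On a Linux host this list normally includes gelf; on the Windows daemon it may not.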

Related

Docker Windows master node "docker swarm init" causes worker nodes in same Virtual Network to no longer see the master node

I have strange behaviour related to docker swarm mode on windows. What I have done:
Deployed two "Windows Server 2019 Datacenter with Containers - Gen1" virtual machines in Azure
Set up RDP access from my IP to the virtual machines
Ensured they are in the same virtual network and their subnet is associated with the virtual network
Downloaded all windows updates
Used telnet to check if worker machine sees master by running "telnet 10.0.0.4 3389". This works.
Used telnet to check if master machine sees worker by running "telnet 10.0.0.5 3389". This works.
Ensured that the Docker Swarm ports are open in Windows Firewall on both machines: 4789, 7946 (UDP) and 2377, 7946 (TCP)
Initialized docker swarm mode on master node with the command: "docker swarm init --advertise-addr 10.0.0.4"
Checked that "docker node ls" lists the master as Ready
Immediately after this, tried to use "telnet 10.0.0.4 3389" from the worker node to see if the master is still accessible - it no longer works!
Not surprisingly, trying to join the Docker swarm from the worker also fails with the usual "timeout" error (a sketch of the join command follows below).
Because telnet 10.0.0.4 3389 worked before the master node entered swarm mode but not after, it seems Docker on Windows is changing firewall priorities or rules, or changing the active network, or something else - which is bonkers. I have not found a solution to this problem, which is making docker-for-windows unusable. Note: this problem only occurs in Azure. Using virtual machines in Exoscale and manually installing Docker with PowerShell scripts did not show the same issue, which makes me think perhaps the "Windows Server 2019 Datacenter with Containers - Gen1" servers have some faulty configuration.
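For reference, the join attempt from the worker looked roughly like this (a minimal sketch; the token is the placeholder value printed by docker swarm init on the master, and 2377 is the swarm management port listed above):
# On the worker node (10.0.0.5), using the token printed by the master:
docker swarm join --token SWMTKN-1-<token> 10.0.0.4:2377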
Edit:
I can confirm that this behaviour does not appear when manually installing docker for 2019 data centers using the following guide: https://blog.sixeyed.com/getting-started-with-docker-on-windows-server-2019/ (sixeyed is a known Docker for Windows expert). In other words "Windows Server 2019 Datacenter" image works.
So, do not use the "Windows Server 2019 Datacenter with Containers - Gen1" image. Instead, use the standard image and follow standard docker-for-windows-server-2019 installation guides to get swarm mode working.

Docker CE and syslog

The Docker logging drivers are documented online, along with their limitations:
Limitations of logging drivers
Users of Docker Enterprise can make use of “dual logging”, which enables you to use the docker logs command for any logging driver. Refer to reading logs when using remote logging drivers for information about using docker logs to read container logs locally for many third party logging solutions, including:
syslog
gelf
fluentd
awslogs
splunk
etwlogs
gcplogs
Logentries
When using Docker Community Engine, the docker logs command is only available on the following drivers:
local
json-file
journald
Reading log information requires decompressing rotated log files, which causes a temporary increase in disk usage (until the log entries from the rotated files are read) and an increased CPU usage while decompressing.
The capacity of the host storage where the Docker data directory resides determines the maximum size of the log file information.
I am using Docker CE, but I have a question about this documentation. Does this mean that, using CE, I can't do syslog at all? Or just that I can't do syslog and also have docker logs?
There is nothing stopping you from using syslog within the container, but you can't read those logs using the 'docker logs' command. There is also nothing stopping you from writing your logs to stdout and piping your logs to as many log shippers as you want.
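For example, a minimal sketch of sending a container's output to a syslog endpoint on Docker CE (the address and image name are placeholders; docker logs will not work for this container):
docker run --log-driver syslog --log-opt syslog-address=udp://syslog-host:514 my-image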
Here's an article that explains how to do syslog in a docker container: https://medium.com/better-programming/docker-centralized-logging-with-syslog-97b9c147bd30
I think that fluentd and fluent-bit are better choices than syslog these days given the structure they provide to the msg field, though syslog-ng looks interesting. Fluent-bit is incredibly good though, so you might want to take a look at it.

How to forward application logs to Splunk from docker container?

We're interested in forwarding the logs from a node.js server running in a Docker container to Splunk.
Some options we've considered include a side-car container running a Splunk forwarder. The application container would write to a shared volume that the side-car would observe and forward.
Ideally, we would just use a syslog drain or another mechanism, but I can't seem to find any documentation on how to set that up.
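For clarity, here is the side-car idea as plain docker run commands (a sketch only; the container names, volume name, log path and images are placeholders, and the forwarder is assumed to be configured to watch that path):
# Application container writes its logs to a shared volume
docker run -d --name app -v app-logs:/var/log/app my-node-app
# Side-car forwarder reads the same volume and ships the files to Splunk
docker run -d --name splunk-forwarder -v app-logs:/var/log/app my-splunk-forwarder-image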
There are a lot of options to send logs from containers to Splunk.
For logs sent to standard output and error:
Splunk Logging Driver https://docs.docker.com/v17.09/engine/admin/logging/splunk/
Splunk Docker logging plugin https://github.com/splunk/docker-logging-plugin - an improved version of Splunk Logging Driver
For application logs (logs written inside of the container):
Sidecars with UF
Our company (https://www.outcoldsolutions.com) offers one solution that can simply forward container (https://www.outcoldsolutions.com/docs/monitoring-docker/v5/) and application logs (https://www.outcoldsolutions.com/docs/monitoring-docker/v5/annotations/#application-logs) from the Docker hosts, and collect metrics. We also provide you with an application in Splunk for tracking the health and performance of your clusters https://splunkbase.splunk.com/app/3723/. Our application is not free, but cheap compared to the time you can spend building something similar.
Another option is using fluentd as an intermediary.
Fluentd also exists as a Docker logging driver, but you can use it to redirect the logs to several backends (Splunk, Elasticsearch), so you are not as tightly coupled to Splunk.
Additionally, that's the approach proposed by OpenShift.
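A minimal sketch of pointing a container at a local Fluentd (or Fluent Bit, which accepts the same forward protocol) instance; the address, tag and image name are placeholders:
docker run -d --log-driver fluentd --log-opt fluentd-address=localhost:24224 --log-opt tag=my-app my-image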
It looks like Docker has a logging driver that handles this
https://docs.docker.com/v17.09/engine/admin/logging/splunk/
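For the standard-output case, enabling that driver per container looks roughly like this (a minimal sketch; the HEC URL, token and image name are placeholders):
docker run -d --log-driver splunk --log-opt splunk-url=https://splunk-host:8088 --log-opt splunk-token=<hec-token> --log-opt splunk-format=json my-image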

See docker container logs on host while using gelf driver

I am using gelf as the log driver for my Docker container. In the log options I provided a UDP endpoint.
Now when I start the container, everything is working as expected.
My question is: is it possible to see the container logs on the host where it is running (not at the UDP endpoint)?
This depends on Docker version.
Docker 20.10 and up introduces “dual logging”, which uses a local buffer that allows you to use the docker logs command for any logging driver.
On older versions, if you are talking about seeing the logs via the docker logs command on the machine running the docker containers, it's not possible to do so when using other logging drivers.
See limitations of logging drivers.
If you know where the log is inside the container, a workaround would be to write a script which copies the log file out of the container and displays it, or maybe just execs into the container and displays it. But I really wouldn't recommend that.
Something like:
#!/bin/bash
# Copy the log file out of the container (a one-time snapshot), then follow the copy.
mkdir -p "$(pwd)/logs"
docker cp mycontainer:/var/log/mylog.log "$(pwd)/logs/mylog.log"
tail -f "$(pwd)/logs/mylog.log"
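Alternatively, to follow the file in place without copying (a one-line sketch; the container name and path are placeholders):
docker exec mycontainer tail -f /var/log/mylog.log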

Where do docker logs go when splunk server is not reachable

I am trying to get docker logs from a running Docker container. I have configured splunk as the logging driver in my docker compose file, and I understand that if the splunk server is not reachable then the container won't start.
proxysecurity:
  image: test/image
  network_mode: host
  depends_on:
    - zookeeper
  ports:
    - '8083:8083'
  logging:
    driver: "splunk"
    options:
      splunk-url: "http://XX.X2.X3.X1:XXX7/"
      splunk-token: "XXXXX5-9CA1-44B8-B9E8-2XXX25"
      splunk-format: json
      tag: "{{.ImageName}}/{{.Name}}/{{.ID}}"
  environment:
    XXXCONNECT: localhost:32181
    XXXXXRS: http://localhost:8083
Now, if the splunk server is not reachable while the container is up and running, is there any fallback mechanism wherein we can tell the docker container to log locally?
Or is there any way to log to splunk as well as locally inside the container?
I am the author of the Splunk Logging Driver.
In case Splunk is unavailable, the driver holds a small buffer in memory and keeps retrying. Configuration for the size of the buffer is documented in the official docs for the driver: https://docs.docker.com/engine/admin/logging/splunk/
SPLUNK_LOGGING_DRIVER_BUFFER_MAX
If driver cannot connect to remote server, what is the maximum amount of messages it can hold in buffer for retries.
Unfortunately this is not ideal, considering that this buffer can fill up pretty quickly and that increasing it to a higher number can affect your containers. But this is how most of the drivers are written.
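These advanced options are set as environment variables on the Docker daemon itself, not per container. A minimal sketch, assuming you start dockerd manually (the value 10000 is just an illustrative number; with systemd you would put the variable in the service's environment instead):
# Start the daemon with a larger in-memory retry buffer for the Splunk driver
sudo SPLUNK_LOGGING_DRIVER_BUFFER_MAX=10000 dockerd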
I have built another solution for delivering logs and metrics to Splunk. This solution includes a tiny image with a collector and a Splunk Certified Application. It is built on top of the json-file driver, which means that when Splunk is unavailable it just keeps retrying from the last position in the log files. The log files can have their own rotation settings, which can be configured in the dockerd daemon configuration. You can read about the other benefits of our solution in Comparing with Splunk Logging Driver, and about how to get started in Monitoring Docker.

Resources