I can tail the logs of a single docker container by doing:
docker logs -f container1
But, how can I tail the logs of multiple containers on the same screen?
docker logs container1 container2
doesn’t work. It gives an error:
“docker logs” requires exactly 1 argument(s).
Thank you.
If you are using docker-compose, this will show all logs from the different containers:
docker-compose logs -f
If you have root access on the docker server:
tail -f /var/lib/docker/containers/*/*.log
The docker logs command can't stream multiple log files.
Logging Drivers
You could use one of the logging drivers other than the default json to ship the logs to a common point. The systemd journald or syslog drivers would readily work on most systems. Any of the other centralised log systems would work too.
Note that configuring syslog on the Docker daemon means that docker logs command can no longer query the logs, they will only be stored where your syslog puts them.
A simple daemon.json for syslog:
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://10.8.8.8:514",
    "syslog-format": "rfc5424"
  }
}
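If losing docker logs is a deal-breaker, the journald driver mentioned above keeps docker logs working (it is one of the drivers that supports read-back) and also lets you follow several containers on one screen via journalctl. A rough sketch, assuming container1 and container2 are your container names:
# /etc/docker/daemon.json
{
  "log-driver": "journald"
}
# follow one container by name
journalctl -f CONTAINER_NAME=container1
# follow several containers on the same screen
journalctl -f CONTAINER_NAME=container1 + CONTAINER_NAME=container2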
Compose
docker-compose is capable of streaming the logs for all containers it controls under a project.
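For example, you can limit the backlog and follow only a subset of services (service names here are placeholders):
docker-compose logs -f --tail=10 web worker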
API
You could write a tool that attaches to each container via the API and streams the logs over a websocket. Two of the Java libraries are docker-client and docker-java.
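Before writing a full client, you can get a feel for that API endpoint with curl against the local socket (a sketch; container1 is a placeholder, and for non-TTY containers the output arrives in Docker's multiplexed stream format):
# stream the last 10 lines, then keep following stdout and stderr
curl --no-buffer --unix-socket /var/run/docker.sock \
  "http://localhost/containers/container1/logs?follow=true&stdout=true&stderr=true&tail=10"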
Hack
Or run multiple docker logs and munge the output, in node.js:
const { spawn } = require('child_process')

// Spawn one `docker logs --follow` process per container and prefix each
// line with the container id and the stream it came from.
function run(id) {
  let dkr = spawn('docker', ['logs', '--tail', '1', '-t', '--follow', id])
  dkr.stdout.on('data', data => console.log('%s: stdout', id, data.toString().replace(/\r?\n$/, '')))
  dkr.stderr.on('data', data => console.error('%s: stderr', id, data.toString().replace(/\r?\n$/, '')))
  dkr.on('close', exit_code => {
    if (exit_code !== 0) throw new Error(`Docker logs ${id} exited with ${exit_code}`)
  })
}

// Follow every container id/name passed on the command line.
let args = process.argv.slice(2)
args.forEach(arg => run(arg))
Which dumps data as docker logs writes it.
○→ node docker-logs.js 958cc8b41cd9 1dad69882b3d db4b844d9478
958cc8b41cd9: stdout 2018-03-01T06:37:45.152010823Z hello2
1dad69882b3d: stdout 2018-03-01T06:37:49.392475996Z hello
db4b844d9478: stderr 2018-03-01T06:37:47.336367247Z hello2
958cc8b41cd9: stdout 2018-03-01T06:37:55.155137606Z hello2
db4b844d9478: stderr 2018-03-01T06:37:57.339710598Z hello2
1dad69882b3d: stdout 2018-03-01T06:37:59.393960369Z hello
Related
I am trying to deploy a logspout container in Docker, but I keep running into an issue that I have searched for on this website and GitHub to no avail, so I'm hoping someone knows the answer.
I ran the following commands as per the README here: https://github.com/gliderlabs/logspout
(1) docker pull gliderlabs/logspout:latest (also tried with logspout:master, same results)
(2) docker run -d --name="logspout" --volume=/var/run/docker.sock:/var/run/docker.sock --publish=127.0.0.1:8000:80 gliderlabs/logspout (also tried with -v /var/run/docker.sock:/var/run/docker.sock, same results)
The container gets created but stops immediately. When I check the container logs (docker container logs logspout), I only see the following entries:
2021/12/19 06:37:12 # logspout v3.2.14 by gliderlabs
2021/12/19 06:37:12 # adapters: raw syslog tcp tls udp multiline
2021/12/19 06:37:12 # options :
2021/12/19 06:37:12 persist:/mnt/routes
2021/12/19 06:37:12 # jobs : pump routes http[health,logs,routes]:80
2021/12/19 06:37:12 # routes : none
2021/12/19 06:37:12 pump ended: Get http://unix.sock/containers/json?: dial unix /var/run/docker.sock: connect: no such file or directory
I checked docker.sock as ls -la /var/run/docker.sock results in srw-rw---- 1 root docker 0 Dec 12 09:49 /var/run/docker.sock. So docker.sock does exist, which adds to the confusion as to why the container can't find it.
I am new to Linux/Docker, but my understanding is that using -v or --volume would automatically mount the location into the container, which does not seem to be happening here. So I am wondering if anyone has any suggestions on what needs to be done so that the logspout container can find docker.sock.
System Info: Docker version 20.10.11, build dea9396; Raspberry Pi 4 ARM 64, OS: Debian GNU/Linux 11 (bullseye)
EDIT: added comment about -v tag in step (2) above
For the mount to be usable, the container must be able to access the Docker Unix socket. This is typically a problem when user namespace remapping is enabled. To disable remapping for the logspout container, pass the --userns=host flag to docker run, docker create, etc.
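For example, step (2) from the question with only that flag added (a sketch, everything else unchanged):
docker run -d --name="logspout" --userns=host \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --publish=127.0.0.1:8000:80 \
  gliderlabs/logspout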
We have a Java application that can be run in Docker containers. It produces messages to stdout and stderr with a different level of detail for different audiences.
With Splunk configured as the log driver, all log lines received by Splunk are marked with source stdout, although there must also be lines that were logged to stderr.
Splunk log driver configuration in docker-compose:
logging:
  driver: splunk
  options:
    splunk-url: https://splunkhf:8088
    splunk-token: [TOKEN]
    splunk-index: splunk_index
    splunk-insecureskipverify: "true"
    splunk-sourcetype: log4j
    splunk-format: "json"
    tag: "{{.Name}}/{{.ID}}"
Example log message sent to splunk:
{
  line: 2021-01-12 11:37:49,191;10718;INFO ;[Thread-1];Logger; ;Executed all shutdown events.
  source: stdout
  tag: service_95f2bac29286/582385192fde
}
How can I configure Docker or Splunk to differentiate correctly between those different streams?
If you run the service from docker-compose without -d, the logs lose their original source. It seems that Docker and Docker Compose send everything from the container's output streams to stdout and use stderr for their own logs.
With the -d flag, the log messages do not lose their original output stream.
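In other words, with the Splunk setup above, starting the project detached should keep the source field accurate (a sketch):
# attached: compose multiplexes everything, so Splunk only sees stdout
docker-compose up
# detached: the splunk driver receives the container's own stdout/stderr
docker-compose up -d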
With the default json-file logging driver, is there a way to log rotate docker container logs with container names, instead of the container IDs?
The container IDs in the log file names are not very readable, which is why I thought of saving the logs with container names instead.
It's possible to configure the engine with log options to include labels in the logs:
# cat /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "com.docker.stack.namespace,com.docker.swarm.service.name,environment"
  }
}
# docker run --label environment=dev busybox echo hello logs
hello logs
root@vm-11:/etc/docker# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9615c898c2d2 busybox "echo hello logs" 8 seconds ago Exited (0) 7 seconds ago eloquent_germain
# docker logs --details 961
environment=dev hello logs
# more /var/lib/docker/containers/9615c898c2d2aa7439581e08c2e685f154e4bf2bb9fd5ded0c384da3242c6c9e/9615c898c2d2aa7439581e08c2e685f154e4bf2bb9fd5ded0c384da3242c6c9e-json.log
{"log":"hello logs\n","stream":"stdout","attrs":{"environment":"dev"},"time":"2020-09-22T11:12:41.279155826Z"}
You need to restart the Docker engine after making changes to daemon.json, and the changes only apply to newly created containers. On systemd, that is done with systemctl restart docker.
To specifically pass the container name, which isn't a label, you can pass a "tag" setting:
# docker run --name test-log-opts --log-opt tag="{{.Name}}/{{.ID}}" busybox echo hello log opts
hello log opts
# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c201d0a2504a busybox "echo hello log opts" 6 seconds ago Exited (0) 5 seconds ago test-log-opts
# docker logs --details c20
tag=test-log-opts%2Fc201d0a2504a hello log opts
# more /var/lib/docker/containers/c201d0a2504addedb2b6785850a83e8931052d0d9778438e9dcc27391f45fec2/c201d0a2504addedb2b6785850a83e8931052d0d9778438e9dcc27391f45fec2-json.log
{"log":"hello log opts\n","stream":"stdout","attrs":{"tag":"test-log-opts/c201d0a2504a"},"time":"2020-09-22T11:15:26.998956544Z"}
For more details:
JSON log driver options: https://docs.docker.com/config/containers/logging/json-file/#options
Container logging tags: https://docs.docker.com/config/containers/logging/log_tags/
I'm trying to set up an interaction between running Docker container's logs and Logstash.
I run my Docker container with the following command:
docker run --log-driver gelf --log-opt gelf-address=udp://127.0.0.1:12201 nfrankel/simplelog:1
and the Logstash config.json is:
input {
  gelf {}
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
  stdout {}
}
Logstash logs are fine, I see
New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
Starting gelf listener (udp) ... {:address=>"0.0.0.0:12201"}
Successfully started Logstash API endpoint {:port=>9600}
Nevertheless, it doesn't work: I see neither logs in the Logstash console nor a newly created Elasticsearch index.
Could you help me to resolve my issue?
It's worth mentioning that the running Docker container does produce logs; I can see them in the Cygwin terminal from which I launched it.
Perhaps try configuring the gelf input to accept UDP on port 12201, e.g.:
input {
  gelf {
    use_udp => true
    port_udp => 12201
  }
}
I have a set of dockerized applications scattered across multiple servers and trying to setup production-level centralized logging with ELK. I'm ok with the ELK part itself, but I'm a little confused about how to forward the logs to my logstashes.
I'm trying to use Filebeat, because of its loadbalance feature.
I'd also like to avoid packing Filebeat (or anything else) into all my dockers, and keep it separated, dockerized or not.
How can I proceed?
I've been trying the following. My Dockers log on stdout so with a non-dockerized Filebeat configured to read from stdin I do:
docker logs -f mycontainer | ./filebeat -e -c filebeat.yml
That appears to work at the beginning. The first logs are forwarded to my Logstash, the cached ones I guess. But at some point it gets stuck and keeps sending the same event.
Is that just a bug or am I headed in the wrong direction? What solution have you setup?
Here's one way to forward docker logs to the ELK stack (requires docker >= 1.8 for the gelf log driver):
Start a Logstash container with the gelf input plugin that reads from gelf and outputs to an Elasticsearch host (ES_HOST:PORT):
docker run --rm -p 12201:12201/udp logstash \
logstash -e 'input { gelf { } } output { elasticsearch { hosts => ["ES_HOST:PORT"] } }'
Now start a Docker container and use the gelf Docker logging driver. Here's a dumb example:
docker run --log-driver=gelf --log-opt gelf-address=udp://localhost:12201 busybox \
/bin/sh -c 'while true; do echo "Hello $(date)"; sleep 1; done'
Load up Kibana and things that would've landed in docker logs are now visible. The gelf source code shows that some handy fields are generated for you (hat-tip: Christophe Labouisse): _container_id, _container_name, _image_id, _image_name, _command, _tag, _created.
If you use docker-compose (>= 1.5), add the appropriate settings to docker-compose.yml after starting the logstash container:
log_driver: "gelf"
log_opt:
  gelf-address: "udp://localhost:12201"
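Note that log_driver / log_opt is the version 1 compose file format; with the version 2+ format the same settings go under a logging key for each service, roughly like this (service and image names are placeholders):
services:
  app:
    image: busybox
    logging:
      driver: gelf
      options:
        gelf-address: "udp://localhost:12201"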
Docker allows you to specify the log driver in use. This answer does not cover Filebeat or load balancing.
In a presentation I used syslog to forward the logs to a Logstash (ELK) instance listening on port 5000.
The following command constantly sends messages through syslog to Logstash:
docker run -t -d --log-driver=syslog --log-opt syslog-address=tcp://127.0.0.1:5000 ubuntu /bin/bash -c 'while true; do echo "Hello $(date)"; sleep 1; done'
Using filebeat, you can just pipe docker logs output as you've described. The behavior you are seeing definitely sounds like a bug, but it can also be the partial-line-read configuration hitting you (partial lines are resent until a newline symbol is found).
A problem I see with piping is possible back pressure in case no Logstash is available. If filebeat cannot send any events, it will buffer events internally and at some point stop reading from stdin. I have no idea how/if Docker protects against stdout becoming unresponsive. Another problem with piping might be the restart behavior of filebeat + docker if you are using docker-compose. docker-compose by default reuses images + image state, so when you restart, you will ship all old logs again (given the underlying log file has not been rotated yet).
Instead of piping, you can try to read the log files written by Docker to the host system. The default Docker log driver is the json-file driver. You can and should configure the json-file driver to do log rotation and keep some old files (for buffering on disk); see the max-size and max-file options. The json driver writes one line of 'json' data for every line to be logged. On the Docker host system, the log files are written to /var/lib/docker/containers/container_id/container_id-json.log. These files are forwarded by filebeat to logstash. If logstash or the network becomes unavailable, or filebeat is restarted, it continues forwarding log lines where it left off (provided the files have not been deleted by log rotation), so no events are lost. In logstash you can use the json_lines codec or filter to parse the json lines and a grok filter to extract more information from your logs.
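A minimal daemon.json sketch for that rotation setup (the values are just examples; this mirrors the kind of config shown in an earlier answer):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}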
There has been some discussion about using libbeat (used by filebeat for shipping log files) to add a new log driver to docker. Maybe it is possible to collect logs via dockerbeat in the future by using the docker logs api (I'm not aware of any plans about utilising the logs api, though).
Using syslog is also an option. Maybe you can run a syslog relay on your docker host to load-balance log events. Or have syslog write log files and use filebeat to forward them. I think rsyslog has at least some failover mode. You can use the logstash syslog input plugin and rsyslog to forward logs to logstash with failover support in case the active logstash instance becomes unavailable.
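A bare-bones sketch of the logstash end of that syslog setup (the port is an assumption; the default 514 needs root):
input {
  syslog {
    port => 5140
  }
}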
I created my own docker image using the Docker API to collect the logs of the containers running on the machine and ship them to Logstash thanks to Filebeat. No need to install or configure anything on the host.
Check it out and tell me if it suits your needs: https://hub.docker.com/r/bargenson/filebeat/.
The code is available here: https://github.com/bargenson/docker-filebeat
Just to help others who need to do this: you can simply use Filebeat to ship the logs. I would use the container by @brice-argenson, but I needed SSL support, so I went with a locally installed Filebeat instance.
The prospector from filebeat is (repeat for more containers):
- input_type: log
  paths:
    - /var/lib/docker/containers/<guid>/*.log
  document_type: docker_log
  fields:
    dockercontainer: container_name
It sucks a bit that you need to know the GUIDs as they could change on updates.
On the logstash server, setup the usual filebeat input source for logstash, and use a filter like this:
filter {
  if [type] == "docker_log" {
    json {
      source => "message"
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    mutate {
      rename => { "log" => "message" }
    }
    date {
      match => [ "time", "ISO8601" ]
    }
  }
}
This will parse the JSON from the Docker logs, and set the timestamp to the one reported by Docker.
If you are reading logs from the nginx Docker image, you can add this filter as well:
filter {
  if [fields][dockercontainer] == "nginx" {
    grok {
      match => { "message" => "(?m)%{IPORHOST:targethost} %{COMBINEDAPACHELOG}" }
    }
    mutate {
      convert => { "[bytes]" => "integer" }
      convert => { "[response]" => "integer" }
    }
    mutate {
      rename => { "bytes" => "http_streamlen" }
      rename => { "response" => "http_statuscode" }
    }
  }
}
The convert/rename steps are optional, but they fix an oversight in the COMBINEDAPACHELOG expression, which does not cast these values to integers, leaving them unavailable for aggregation in Kibana.
I verified what erewok wrote above in a comment:
According to the docs, you should be able to use a pattern like this in your prospectors.paths: /var/lib/docker/containers/*/*.log – erewok, Apr 18 at 21:03
The docker container guids, represented as the first '*', are correctly resolved when filebeat starts up. I do not know what happens as containers are added.