Is there a way to pick up log messages that are written to a log file when using Docker's syslog log driver?
Whatever I write to stdout gets picked up by rsyslog, but anything logged to a file is not. I don't see any option on the syslog driver that would let me point it at a log file to be picked up.
Thanks
Docker's logging interface is defined as stdout and stderr, so the best approach is to modify the log settings of your process to send any log data to stdout and stderr.
Some applications can configure logging to go directly to syslog. Java processes using log4j are a good example of this.
If logging to a file is the only option available, scripts, Logstash, Fluentd, rsyslog, and syslog-ng can all ingest text files and output syslog. This can be done either inside the container with an additional service, or by using a shared, standardised logging area on each Docker host and running the ingestion from there.
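For illustration, a rough docker-compose sketch of the shared-logging-area approach could look like the following. The image names, paths, and volume names are placeholders, and the Fluentd configuration that tails the file and forwards it as syslog is assumed to live in fluentd.conf:

version: "3"
services:
  app:
    image: my-app                      # placeholder: your application image
    volumes:
      - app-logs:/var/log/app          # the app writes e.g. app.log into this shared volume
  log-shipper:
    image: fluentd                     # could equally be rsyslog, logstash, or syslog-ng
    volumes:
      - app-logs:/var/log/app:ro       # same files, read-only
      - ./fluentd.conf:/fluentd/etc/fluent.conf:ro   # tail /var/log/app/*.log and forward as syslog
volumes:
  app-logs:

The same idea works without a sidecar by bind-mounting a standardised host directory into every container and running a single ingestion service on the host.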
Related
I have a Docker container that sends its logs to Graylog via udp.
Previously I just used it to output raw messages, but now I've come up with a solution that logs in GELF format.
However, Docker just puts it into the "message" field (as shown in the Graylog web interface). In plain text:
{
  "version": "1.1",
  "host": "1eefd38079fa",
  "short_message": "Content root path: /app",
  "full_message": "Content root path: /app",
  "timestamp": 1633754884.93817,
  "level": 6,
  "_contentRoot": "/app",
  "_LoggerName": "Microsoft.Hosting.Lifetime",
  "_threadid": "1",
  "_date": "09-10-2021 04:48:04,938",
  "_level": "INFO",
  "_callsite": "Microsoft.Extensions.Hosting.Internal.ConsoleLifetime.OnApplicationStarted"
}
The GELF driver is configured in the docker-compose file:
logging:
  driver: "gelf"
  options:
    gelf-address: "udp://sample-ip:port"
How can I make Docker just forward these already-formatted logs?
Is there any way to process these logs and append them as custom fields to docker logs?
The perfect solution would be to somehow enable gelf log driver, but disable pre-processing / formatting since logs are already GELF.
P.S. For logging I'm using the NLog library, C# on .NET 5, and its NuGet package https://github.com/farzadpanahi/NLog.GelfLayout
In my case, there was no need to use NLog at all; it was simply the logging framework already in place, which nobody had attempted to dive into.
So a better alternative is to use the GELF logger provider for Microsoft.Extensions.Logging: Gelf.Extensions.Logging - https://github.com/mattwcole/gelf-extensions-logging
Don't forget to disable the GELF log driver for the Docker container if it is enabled.
It supports additional fields, parameterization of the formatted string (parameters in curly braces {} become Graylog fields), and is easily configured via appsettings.json.
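For reference, a minimal appsettings.json sketch along the lines of the library's README might look like the snippet below. Treat the section and option names as something to double-check against the Gelf.Extensions.Logging documentation; the host, port, and field values here are placeholders:

{
  "Logging": {
    "GELF": {
      "Host": "graylog.example.com",
      "Port": 12201,
      "LogSource": "my-service",
      "AdditionalFields": {
        "environment": "production",
        "project": "my-project"
      }
    }
  }
}

With that in place the provider is added in code via the library's AddGelf() extension on the logging builder, and {placeholders} in message templates become separate Graylog fields.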
Some might consider this not to be an answer since the question was about NLog, but for me this is a neat way to send customized logs without much trouble. As for NLog itself, I could not come up with a solution.
My web application works fine with the log4j2.xml file I created when it runs on an AWS EC2 instance. But now I have containerized it and it runs on ECS Fargate. I can see Catalina logs in CloudWatch, but not the application-specific logs that I configured in the log4j2.xml file. log4j2.xml is located at a specific path like /var/webapp/conf, and I've put that path in catalina.properties as shared.loader=/var/webapp/conf. I also see this ERROR in my Catalina logs:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'log4j2.debug' to show Log4j2 internal initialization logging.
Note: I don't want to change Tomcat's default logging. I'm just trying to send my application logs to the console as well, so I can see all the logs in one CloudWatch log stream.
Your log4j logging configuration is not being recognised by your Fargate task. The reason is that, with Fargate tasks, only certain logging drivers can be set up via the task definition.
Amazon ECS task definitions for Fargate support the awslogs, splunk, and awsfirelens log drivers for the log configuration.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html
I recommend using the awslogs (CloudWatch Logs) log driver.
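For example, the container definition inside the task definition would carry a log configuration roughly like this (the container name, image, log group, region, and stream prefix below are placeholders):

"containerDefinitions": [
  {
    "name": "webapp",
    "image": "my-webapp-image",
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/my-webapp",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "webapp"
      }
    }
  }
]

With the awslogs driver, everything the container writes to stdout/stderr ends up in the configured CloudWatch log group, so getting your log4j2 output onto the console remains the key step.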
I currently have a Gradle Spring Boot app running as a Docker image on a GCP Compute Engine instance. In my Application class I added Lombok's @Slf4j annotation, added the line log.info("Hello world"); to the main method, ran the image on my GCE instance via docker run -d --rm -it -p 8888:8080 {image}, and checked the Stackdriver logs.
I would expect to be able to filter via log level (INFO, WARNING, etc.), but it seems that the logs are not mapping the log level appropriately, meaning they only show up when the "log level: Any" filter is chosen.
The above log.info() statement shows up in Stackdriver like so:
[2m2019-10-01 17:55:41.159[0;39m [32m INFO[0;39m [35m1[0;39m [2m---[0;39m [2m[nio-8080-exec-5][0;39m [36mc.g.o.Application [0;39m [2m:[0;39m Hello world
with the Json payload:
jsonPayload: {
  container: {}
  instance: {}
  message: "[2m2019-10-01 17:55:41.159[0;39m [32m INFO[0;39m [35m1[0;39m [2m---[0;39m [2m[nio-8080-exec-5][0;39m [36mc.g.o.Application [0;39m [2m:[0;39m Hello world"
}
and "logname" is projects/my-project/logs/gcplogs-docker-driver.
Why isn't Stackdriver capturing the log levels from Slf4j even though gcplogs-docker-driver is being used?
It looks like Docker's gcplogs log driver causes the output to be sent to GCP's Stackdriver Logging (aka Cloud Logging). The gcplogs driver just sends each input line as-is with no further processing. There doesn't seem to be any appetite in docker/moby to do additional processing, such as attempting to extract severities.
You might be able to do some after-the-fact labelling, but I've never tried.
Note that some platforms perform additional processing before submitting entries to Stackdriver Logging. For example, GKE captures console output using the Stackdriver Logging agent, which supports structured logs (JSON-encoded payloads). Or you might be able to configure your application's logging framework to log directly to Stackdriver Logging.
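For illustration, where an agent that understands structured logs is in the picture, emitting one JSON object per line lets the severity be mapped; a minimal (made-up) entry could look like:

{"severity": "INFO", "message": "Hello world", "timestamp": "2019-10-01T17:55:41.159Z", "logger": "c.g.o.Application"}

With the plain gcplogs driver, however, that JSON would still arrive as a single opaque message string.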
I know about checking /var/lib/docker/containers/<containerid>-json.log from the host (including mapping that path as a volume), and about the Docker client API, but is there any other way of viewing logs from inside a container?
If your image runs a non-interactive process such as a web server or a database, that application may send its output to log files instead of STDOUT and STDERR.
If you use a logging driver which sends logs to a file, an external host, a database, or another logging back-end, you may not see the logs.
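If you are unsure which logging driver a particular container is using, inspecting it from the host (or anywhere the Docker CLI and socket are available) shows the configured driver; <container-id> is a placeholder:

docker inspect --format '{{ .HostConfig.LogConfig.Type }}' <container-id>

For the default json-file driver this prints json-file, which is the case where the /var/lib/docker/containers/<containerid>-json.log path you mention actually exists.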
I have a Rails application on the Passenger web server running in a Docker container. I'm trying to redirect the application logs to Logstash. I redirect the Rails logs to STDOUT and configure the container to use the gelf log driver, which forwards STDOUT to the given Logstash server. But a problem arises: the Passenger web server writes its own logs to STDOUT too, and I get a mixture of the two logs, which makes them difficult to separate and analyze.
What is best practice in such a situation? How could I label each log stream so they can be separated in Logstash?
If you really wanted to, you could configure Passenger to write to its own stdout log, but I would avoid using STDOUT as an intermediary for Logstash.
Try a library like logstash-logger. You could then write to a separate file, socket, or database. I think that's a cleaner approach, and potentially faster depending on the log destination.
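As a rough sketch of that idea (assuming the logstash-logger gem and a Logstash UDP input on port 5228; check the gem's README for the exact Rails wiring), the Rails logger can be pointed straight at Logstash instead of STDOUT:

# config/environments/production.rb -- hypothetical example
config.logger = LogStashLogger.new(type: :udp, host: 'logstash.example.com', port: 5228)

Passenger's own output then stays on STDOUT for the gelf driver, while the application's logs travel on their own channel and can be tagged and filtered separately in Logstash.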