I have a Rails application on the Passenger web server, running in a Docker container. I'm trying to redirect the application logs to Logstash: I redirect the Rails logs to STDOUT and configure the container to use the gelf log driver, which forwards STDOUT to the given Logstash server. But a problem arises: Passenger writes its own logs to STDOUT too, so I get a mixture of the two logs, which makes them difficult to separate and analyze.
What are the best practices in such a situation? How could I label each log stream so I can separate them in Logstash?
If you really wanted to, you could configure Passenger to write its own logs somewhere other than STDOUT, but I would avoid using STDOUT as an intermediary for Logstash either way.
Try a library like logstash-logger. You could then write to a separate file, socket, or database. I think that's a cleaner approach, and potentially faster depending on the log destination.
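For example, a minimal sketch of wiring Rails straight to Logstash with logstash-logger (the UDP transport, host, port, and the 'stream' field name below are assumptions to adapt to your setup):

# config/environments/production.rb -- a minimal sketch; assumes the
# logstash-logger gem is in your Gemfile and Logstash has a UDP input
config.logger = LogStashLogger.new(type: :udp, host: 'logstash-host', port: 5228)

# Tag every event from this logger so the Rails stream is easy to
# tell apart from anything else arriving in Logstash
LogStashLogger.configure do |c|
  c.customize_event { |event| event['stream'] = 'rails-app' }
end

Since the Rails log then travels over its own socket, Passenger's STDOUT chatter never gets mixed into it.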
I have a Docker container that sends its logs to Graylog via UDP. Previously I just used it to output raw messages, but now I've come up with a solution that logs in GELF format.
However, Docker just puts the whole payload into the "message" field (screenshot from the Graylog web interface):
Or in plain text:
{
  "version": "1.1",
  "host": "1eefd38079fa",
  "short_message": "Content root path: /app",
  "full_message": "Content root path: /app",
  "timestamp": 1633754884.93817,
  "level": 6,
  "_contentRoot": "/app",
  "_LoggerName": "Microsoft.Hosting.Lifetime",
  "_threadid": "1",
  "_date": "09-10-2021 04:48:04,938",
  "_level": "INFO",
  "_callsite": "Microsoft.Extensions.Hosting.Internal.ConsoleLifetime.OnApplicationStarted"
}
The gelf driver is configured in the docker-compose file:
logging:
  driver: "gelf"
  options:
    gelf-address: "udp://sample-ip:port"
How can I make Docker just forward these already-formatted logs?
Is there any way to process these logs and append them as custom fields to the Docker logs?
The perfect solution would be to somehow enable the gelf log driver but disable its pre-processing/formatting, since the logs are already GELF.
PS: For logging I'm using the NLog library on C# .NET 5, with this NuGet package: https://github.com/farzadpanahi/NLog.GelfLayout
In my case, there was no need to use NLog at all; it was just a logging framework that no one had really dug into. So a better alternative is to use a GELF logger provider for Microsoft.Extensions.Logging: Gelf.Extensions.Logging - https://github.com/mattwcole/gelf-extensions-logging
Don't forget to disable the GELF log driver for the Docker container if it is enabled.
It supports additional fields, parameterization of the format string (parameters in curly braces {} become Graylog fields), and is easily configured via appsettings.json.
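For reference, a minimal appsettings.json sketch (the host, port, source name, and extra field below are assumptions; the project README documents the full option set):

{
  "Logging": {
    "GELF": {
      "Host": "graylog-host",
      "Port": 12201,
      "LogSource": "my-service",
      "AdditionalFields": {
        "environment": "production"
      }
    }
  }
}

The provider itself is then registered on the logging builder via its AddGelf() extension method, per the project README.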
Some might consider this not to be an answer, since I was using NLog, but for me this is a neat way to send customized logs without much trouble. As for NLog itself, I could not come up with a solution.
Is there a way to pick up log messages that are written to a log file when using Docker's syslog log driver?
Whatever I write to stdout gets picked up by rsyslog, but anything logged to a file is not. I don't see any option on the syslog driver that could point it at a log file to pick up.
Thanks
Docker's logging interface is defined as stdout and stderr, so the best approach is to modify your process's log settings to send all log data to stdout and stderr.
Some applications can configure logging to go directly to syslog; Java processes using log4j are a good example of this.
If logging to a file is the only option available, then scripts, Logstash, Fluentd, rsyslog, and syslog-ng can all ingest text files and output syslog. This can be done either inside the container with an additional service, or from a shared, standardised logging area on each Docker host, running the ingestion there.
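If you control the image, one common shortcut is to symlink the log file to the container's stdout, as the official nginx image does. A sketch, where the log path is an assumption:

# Dockerfile -- assumes the app insists on writing to /var/log/app/app.log
RUN ln -sf /dev/stdout /var/log/app/app.log

Anything the process writes to that path then flows through Docker's logging interface, so the syslog driver picks it up like normal container output.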
I know the way of checking /var/lib/docker/containers/<containerid>/<containerid>-json.log on the host (including mapping that path as a volume), and of going through the Docker client API, but is there any other way of viewing the logs from inside a container?
If your image runs a non-interactive process such as a web server or a database, that application may send its output to log files instead of STDOUT and STDERR.
Note that if you use a logging driver which sends logs to a file, an external host, a database, or another logging back-end, you may not see the logs via docker logs either.
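Spelling out the volume-mapping approach the question already mentions (the image name is a placeholder, and <containerid> stays whatever ID you are after), since it remains the most direct way to read the JSON log from inside a container:

# Run with the host's container log directory mounted read-only
docker run -v /var/lib/docker/containers:/host/containers:ro my-image
# Then, inside the container:
tail -f /host/containers/<containerid>/<containerid>-json.log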
We use the default Ruby logging module to log errors. We use delayed_job, which runs many worker processes, so we cannot manage the log files ourselves.
We need a Ruby-based log server with a rolling file appender and an archive facility, so that we can push the logs to the log server and let it manage the logging task.
Is there a Ruby-based solution, or another recommended solution, to this problem?
Have you looked at Ruby's syslog in the standard library? Normally the docs are non-existent, but the Rails docs seem to cover it, kinda: http://stdlib.rubyonrails.org/libdoc/syslog/rdoc/index.html
Otherwise you can find out some info by looking through the syslog files themselves and reading http://glu.ttono.us/articles/2007/07/25/ruby-syslog-readme which is what I did when I started using it.
You don't say which OS you are on, but Mac OS and Linux have a very powerful syslog built in, which Ruby's syslog piggybacks on, so you should be able to send to the system's syslog and have it split out your streams, roll the files, forward them, and do lots of other things with them.
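A minimal sketch using the standard library (the program name, facility, and messages are assumptions):

require 'syslog'

# Open the log identified as 'delayed_job'; the OS syslog daemon then
# handles rolling, archiving and forwarding centrally
Syslog.open('delayed_job', Syslog::LOG_PID, Syslog::LOG_DAEMON) do |log|
  log.info('worker started')
  log.err('job failed: %s', 'record not found')
end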
I've got everything working with Ferret and acts_as_ferret in development (or localhost DRb), but I can't get my multiple-host deployment working. All of the remote systems get ECONNREFUSED when accessing the port. On the Ferret server, the daemon is listening on localhost only, despite the configuration listing the FQDN as the host.
I also tried switching to a UNIX socket to share data between the Ferret DRb daemon and the app code, but it too gets ECONNREFUSED. (The socket is available to all of the machines via an NFS mount.)
Is there a better way to do this, or should I be looking for another search indexer? Thanks.
I did figure out that if the address is changed to druby://0.0.0.0:port, the DRb server will listen on all IPs; however, this doesn't provide any protection against bad code being injected into the DRb process.
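For illustration, the binding difference in plain DRb (the port and the FerretIndex front object are hypothetical):

require 'drb/drb'

# druby://localhost:9010 would listen on the loopback interface only,
# so remote app servers get ECONNREFUSED; 0.0.0.0 listens on all IPs
DRb.start_service('druby://0.0.0.0:9010', FerretIndex.new)  # FerretIndex is hypothetical
DRb.thread.join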
Basically, don't use Ferret. I've moved on to Xapian, with acts_as_xapian for Rails. It supports multiple processes reading but only one writing, so it's an offline index; however, I'm able to share the same index between multiple servers via a shared file system (NFS).
Check out "Pitfalls of acts_as_ferret, with DRbServer to the rescue":
http://www.subelsky.com/2007/03/pitfalls-of-actsasferret-with-drbserver.html
It worked pretty well for me. The only thing I'd add is to be sure to set the host value to wherever your Ferret is running.