How to capture fastcgi logs and output in lighttpd - fastcgi

It seems that after lighttpd internally spawns the FastCGI process, the output and error messages of my FastCGI app are redirected to /dev/null and lost.
So my questions are:
Is there a way to capture them in a file while FastCGI is internally spawned?
If yes, how can the log file be rotated daily?

This solved my issue:
I redirected the output to the default log file and then rotated the file daily with the logrotate utility.
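
For reference, a minimal sketch of that kind of setup, with illustrative paths. lighttpd can capture the stderr of the backends it spawns with the server.breakagelog directive:

server.breakagelog = "/var/log/lighttpd/breakage.log"  # stderr of spawned FastCGI apps

and a small logrotate stanza, e.g. /etc/logrotate.d/lighttpd-fastcgi, handles the daily rotation:

/var/log/lighttpd/breakage.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}

copytruncate lets logrotate rotate the file without having to signal lighttpd to reopen it.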

Related

Puma won't keep listening after initialization

It is an existing, working application that I run on another server, with the same deployment process and the same versions of everything, but I can't understand what's happening with Puma.
Initially I thought it was an issue with the systemd service, but it turns out that right after Puma is invoked it exits instantly and does not keep listening.
Configuration seems to load and no errors are reported.
The command prints the startup info, but then ends immediately.
What could be happening?
First, look in the log files.
The path to the log files can be found in config/production/puma.rb.
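
If nothing useful lands in those files, here is a minimal sketch of the relevant Puma DSL, with illustrative paths (stdout_redirect is part of Puma's configuration file DSL):

# config/production/puma.rb (paths are assumptions)
stdout_redirect "/var/www/app/log/puma.stdout.log",
                "/var/www/app/log/puma.stderr.log",
                true  # append instead of truncating on restart

Also note that on older Puma versions a daemonize true in the config makes the foreground process fork and exit immediately, which a systemd service of Type=simple interprets as the service dying; that matches the "ends instantly" symptom.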

Docker Tomcat logging to catalina.out and to console

The Docker Tomcat container saves the startup log of the Tomcat application in the catalina.out file (the last line being INFO: Server startup in 136607 ms). But the rest of the Tomcat app's activity is logged to the console and can be viewed with docker logs container_id.
Is there a way to log to a file and to the console as well? I need the activity log inside catalina.out.2021.log in the Tomcat container, so I can run a script that collects, analyzes, and processes the logs and sends an email; it needs to run inside the container.
Tomcat is started with a custom logging properties file, /usr/local/tomcat/conf/logging.properties, but the output ends up on the console and not in the file.
In the image you are using (hobsonspipe/centos-tomcat7:latest) the server is started with:
-Djava.util.logging.config.file=/etc/tomcat/logging.properties
So you need to modify the /etc/tomcat/logging.properties file instead of the one used in your question. This file already does what you want: it sends all Tomcat logs to the standard output and to /var/log/catalina.<date>.log, except log messages from the applications (if they use the ServletContext#log method), which go to /var/log/localhost.<date>.log.
If your application uses neither ServletContext#log nor java.util.logging, you must configure the specific logging framework elsewhere.
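
For illustration, a minimal sketch of the handler wiring such a logging.properties typically contains, following standard Tomcat JULI conventions (the directory and prefix values are assumptions):

handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler

# file handler: writes /var/log/catalina.<date>.log
1catalina.org.apache.juli.FileHandler.level = FINE
1catalina.org.apache.juli.FileHandler.directory = /var/log
1catalina.org.apache.juli.FileHandler.prefix = catalina.

# console handler: keeps the same records visible via docker logs
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

# route the root logger to both handlers
.handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler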

log file handling with docker syslog logging driver

Is there a way to pick up log messages that are written to a log file, when using the syslog log driver of Docker?
Whatever I write to stdout gets picked up by rsyslog, but anything logged to a file is not. I don't see any option in the syslog driver that could indicate a log file to be picked up.
Thanks
Docker's logging interface is defined as stdout and stderr, so the best way is to modify the log settings of your process to send any log data to stdout and stderr.
Some applications can configure logging to go directly to syslog. Java processes using log4j are a good example of this.
If logging to a file is the only option available, scripts, Logstash, Fluentd, rsyslog, and syslog-ng can all ingest text files and output syslog. This can be done either inside the container with an additional service, or by using a shared, standardised logging area on each Docker host and running the ingestion from there.
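
As a minimal in-container sketch, assuming a hypothetical app that can only write to /var/log/myapp.log, the entrypoint can mirror the file onto stdout so the syslog driver picks it up:

#!/bin/sh
# entrypoint.sh (illustrative): start the app, then follow its log file
myapp --logfile /var/log/myapp.log &
exec tail -F /var/log/myapp.log  # tail's output becomes the container's stdout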

Logging to Logstash: separate logs of different applications in one container

I have a Rails application on the Passenger web server running in a Docker container. I'm trying to redirect application logs to Logstash. I redirect the Rails logs to STDOUT and configure the container to use the gelf log driver, which forwards STDOUT to the given Logstash server. But a problem arises: the Passenger web server writes its own logs to STDOUT too, and I get a mixture of the two logs, which makes them difficult to separate and analyze.
What are the best practices in such a situation? How can I label each log stream to separate them in Logstash?
If you really wanted to, you could configure Passenger to keep its own logs out of STDOUT, but I would avoid using STDOUT as an intermediary for Logstash anyway.
Try a library like logstash-logger. You could then write to a separate file, socket, or database. I think that's a cleaner approach, and potentially faster depending on the log destination.
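
A minimal sketch with the logstash-logger gem, assuming a UDP input on the Logstash side (the host, port, and app field are illustrative):

require 'logstash-logger'

# send structured events straight to Logstash, bypassing STDOUT entirely
logger = LogStashLogger.new(type: :udp, host: 'logstash.example.com', port: 5228)
logger.info message: 'request served', app: 'rails-app'  # label the stream per application

In a Rails app you could assign this as config.logger, leaving STDOUT to Passenger alone.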

ruby socket log server

We use the default Ruby logging module to log errors. We use delayed_job, which runs many worker processes, so we cannot manage the log files.
We need a Ruby-based log server with a rolling file appender and an archiving facility, so that we can push the logs to the log server and let it manage the logging task.
Is there a Ruby-based solution, or another recommended solution, for this problem?
Have you looked at Ruby's syslog in the standard library? Normally the docs are non-existent, but the Rails docs seem to cover it, kinda. http://stdlib.rubyonrails.org/libdoc/syslog/rdoc/index.html
Otherwise you can find out some info by looking through the syslog files themselves and reading http://glu.ttono.us/articles/2007/07/25/ruby-syslog-readme which is what I did when I started using it.
You don't say which OS you are on, but macOS and Linux have a very powerful syslog built in, which Ruby's syslog piggybacks on, so you should be able to send to the system's syslog and have it split out your stream, roll the files, forward them, and do lots of other things with them.
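
A minimal sketch using the stdlib wrapper (the 'delayed_job' program name is an illustrative identifier):

require 'syslog/logger'

# every worker logs under one identifier; the system syslog takes care of
# file management, rotation, and forwarding
log = Syslog::Logger.new('delayed_job')
log.error 'failed to process job'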
