Configuring uWSGI to output JSON-encoded logs on separate lines

I would like uWSGI to output log messages as JSON, one per line. I have tried adding the following to the uwsgi.ini file:
[uwsgi]
log-encoder = json {"unix":${unix}, "msg":"${msg}"}
but then all the logs get smushed together on one line:
$ uwsgi --ini uwsgi.ini
{"unix":1534303044, "msg":"nodename: RC00W00K3HTD6"}{"unix":1534303044, "msg":"machine: x86_64"}{"unix":1534303044, "msg":"clock source: unix"}{"unix":1534303044, "msg":"pcre jit disabled"}{"unix":1534303044, "msg":"detected number of CPU cores: 8"}
I can make it output on separate lines if I pass --log-encoder as a command line argument to uwsgi:
uwsgi --ini uwsgi.ini --log-encoder=$'json {"unix":${unix}, "msg":"${msg}"}\n'
However, I would prefer that all the configuration live in the one .ini file. I tried adding \n to the end of the line like so:
[uwsgi]
log-encoder = json {"unix":${unix}, "msg":"${msg}"}\n
But that just causes a literal \n to be printed between messages.
I am running uwsgi v2.0.17.1.

I found the answer by actually reading the docs:
Encoders can be added by plugins, and can be enabled in chain (the output of an encoder will be the input of the following one and so on).
There is a built-in newline encoder so we can combine that with the json encoder like so:
[uwsgi]
log-encoder = json {"unix":${unix}, "msg":"${msg}"}
log-encoder = nl
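With the two encoders chained, each JSON record is terminated by a newline, so the startup messages from above come out one per line:
{"unix":1534303044, "msg":"nodename: RC00W00K3HTD6"}
{"unix":1534303044, "msg":"machine: x86_64"}
{"unix":1534303044, "msg":"clock source: unix"}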

Related

Use a file as a capture filter in Wireshark

Is it possible to use a file containing filters as a capture filter itself? Instead of having to write each filter as -f ...... -f ......., can I have a file that contains all the filters I wish to use to capture? What should the format of this file be, and how do I create it? "Filter1" udp "Filter2" ip6 ........ When using this file from CMD, what would the expression be? dumpcap -i 5 -???????? -w capture.pcapng
I am looking for what to type in CMD in order to use a file as a capture filter, instead of manually writing all filters as -f ........ -f ........
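As far as I know, dumpcap has no option that reads capture filters from a file directly, but -f takes a single BPF expression, so one workaround is to join the lines of a filter file into one expression in the shell. A minimal sketch for a POSIX shell (a cmd.exe equivalent would need a for /f loop), assuming a hypothetical filters.txt with one filter primitive per line (udp, ip6, ...):
# Join the lines of filters.txt with " or " into a single BPF expression
FILTER=$(awk 'NR>1{printf " or "}{printf "%s", $0}' filters.txt)
dumpcap -i 5 -f "$FILTER" -w capture.pcapng
Whether you combine the primitives with "or" or "and" depends on whether you want to capture packets matching any of the filters or all of them.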

Monitor a service running on a port other than 80 in Nagios

How do we monitor a remote service running on a machine using Nagios?
I have created a cfg file as follows:
define command {
    command_name    check_http
    command_line    /usr/lib64/nagios/plugins/check_http -H $HOSTADDRESS$ -p 8082
}
Now when I reload the configuration file, it throws the following error:
Warning: Duplicate definition found for command 'check_http' (config file '/etc/nagios/servers/cfbase-prod.cfg', starting on line 19)
Error: Could not add object property in file '/etc/nagios/servers/cfbase-prod.cfg' on line 20.
Error processing object config files!
I am not able to figure out what the problem is.
Please help!
The basic problem is that the command_name value conflicts with the original/standard check_http command. You have (at least) a couple of choices:
Set a unique command_name, e.g. check_http_8082.
Define a command to check HTTP on an arbitrary port that gets passed as an argument, e.g.
define command{
    command_name    check_http_port
    command_line    /usr/lib64/nagios/plugins/check_http -H $HOSTADDRESS$ -p $ARG1$
}
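A service can then pass the port after the ! separator; a hypothetical definition (host_name and the scheduling directives are placeholders to adapt):
define service{
    host_name               my-host
    service_description     HTTP on port 8082
    check_command           check_http_port!8082
    max_check_attempts      5
    check_period            24x7
}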

How to change Saxon param=values

SAXON 6.5.4 from Michael Kay
Usage: java com.icl.saxon.StyleSheet [options] source-doc style-doc {param=value}...
Options:
-a Use xml-stylesheet PI, not style-doc argument
-ds Use standard tree data structure
-dt Use tinytree data structure (default)
-o filename Send output to named file or directory
-m classname Use specified Emitter class for xsl:message output
-r classname Use specified URIResolver class
-t Display version and timing information
-T Set standard TraceListener
-TL classname Set a specific TraceListener
-u Names are URLs not filenames
-w0 Recover silently from recoverable errors
-w1 Report recoverable errors and continue (default)
-w2 Treat recoverable errors as fatal
-x classname Use specified SAX parser for source file
-y classname Use specified SAX parser for stylesheet
-? Display this message
If your stylesheet declares a parameter
<xsl:param name="iridescent"/>
Then you can set it from the command line with (for example)
java com.icl.saxon.StyleSheet source.xml style.xsl iridescent=no
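Inside the stylesheet, the value is then available as $iridescent; a minimal sketch:
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:param name="iridescent"/>
  <xsl:template match="/">
    <!-- echo the value passed on the command line -->
    <xsl:value-of select="$iridescent"/>
  </xsl:template>
</xsl:stylesheet>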

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: I have Nagios installed and running inside a chroot on a CentOS system. I built Nagios from source, and have used yum to install all needed dependencies into this root, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing this question, which had practically the same problem I'm having with check_url, I decided to open up a new question on the subject because
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
# 'check_url' command definition
define command{
    command_name    check_url
    command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give 1 more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Ran the following:
./check_url_status -U some-domain.com
When I run the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
    host_name               {my-shared-web-server}
    service_description     URL: somedomain.com
    check_command           check_url!somedomain.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
I was making things WAY too complicated.
The built-in / installed by default plugin, check_http, can accomplish what I wanted and more. Here's how I have accomplished this:
My Service Definition:
define service{
    host_name               myers
    service_description     URL: my-url.com
    check_command           check_http_url!http://my-url.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
My Command Definition:
define command{
    command_name    check_http_url
    command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
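You can sanity-check the command outside Nagios first by mirroring the definition above on the command line (the IP address here is a placeholder, and the plugin path may differ on your system):
/usr/local/nagios/libexec/check_http -I 192.0.2.10 -u http://my-url.com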
The better way to monitor URLs is by using WebInject, which can be used with Nagios.
The problem below is because you don't have the Perl package utils; try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can make a script plugin. It is easy; you only have to check the URL with something like:
curl -Is -k $URL | grep HTTP | cut -d ' ' -f2
$URL is what you pass to the script as a parameter.
Then check the result: if you get a code greater than 399 you have a problem; else everything is OK. Then exit with the right exit code and message for Nagios, as sketched below.
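A minimal sketch of such a plugin built around that curl pipeline (the file name check_url_curl and the messages are illustrative; Nagios only cares about the exit code: 0 = OK, 2 = CRITICAL):
#!/bin/sh
# check_url_curl: pass the URL to check as the first argument
URL="$1"
# Extract the HTTP status code from the response headers
CODE=$(curl -Is -k "$URL" | grep HTTP | cut -d ' ' -f2)
if [ -z "$CODE" ]; then
    echo "CRITICAL - no HTTP response from $URL"
    exit 2
elif [ "$CODE" -gt 399 ]; then
    echo "CRITICAL - $URL returned HTTP $CODE"
    exit 2
else
    echo "OK - $URL returned HTTP $CODE"
    exit 0
fi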

PHP CLI doesn't use stderr to output errors

I'm running the PHP CLI through an NSTask on macOS, but this question is more about the CLI itself.
I'm listening to the stderr pipe, but nothing is output there no matter what file I try to run:
If the file type is not plain text, stdout is set to ?.
If the file is a PHP script with errors, the error messages are still printed to stdout.
Is there a switch to the interpreter to handle errors through stderr? Do I have an option to detect errors other than parsing stdout?
The display_errors directive (which can be set anywhere) optionally takes the value "stderr" to report errors to stderr instead of stdout, or error output can be disabled completely. Quoting from the PHP manual entry:
Value "stderr" sends the errors to stderr instead of stdout. The value is available as of PHP 5.2.4.
Alternatively, if you're using the command-line interface and you want to output the errors on your own, you can reuse the command-line input/output streams:
fwrite(STDERR, 'error message');
Here STDERR is an already opened stream to stderr.
Alternatively, if you want to do it just for this script and you're not in the CLI (where the STDERR constant is defined), you can open a file handle to php://stderr and write the error messages there.
$fe = fopen('php://stderr', 'w');
fwrite($fe, 'error message');
If you want the error messages sent by the PHP interpreter to go to the stderr pipe, you must set display_errors to stderr.
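For example, from the command line you can set it per invocation with -d (the script path is a placeholder):
php -d display_errors=stderr /path/to/script.php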
This is required in order to get from the PHP realm back into the shell environment so that error messages can be parsed properly. You still need to exit(1), or whatever non-zero integer, in order to return an exit status code from PHP to the shell.
fwrite(STDERR, 'error message'); // write the message to the stderr (2>) stream
exit(0x0a); // return a non-zero exit status code to the shell
Then, your crontab entry will look like:
30 3 * * * /usr/bin/php /full/path/to/phpFile.php >> /logdir/fullpath/journal.log 2>> /logdir/fullpath/error_journal.log
You can also use file_put_contents() with "php://stderr" to output to standard error, like:
php -r 'file_put_contents("php://stderr", "Hiya, PHP!\n"); echo "Bye!\n";' 1>/dev/null
which outputs "Hiya, PHP!\n" to standard error and nothing to standard output when executed in a Bash shell.
