Understanding supervisord logging: stdout only displaying stderr messages

I have a very simple python program I am running in supervisord.
The supervisord.conf is completely default besides the program section:
[program:logwatcher]
command=/path/to/python -u "/path/to/logwatcher.py"
The python code:
import sys
print("print\n")
print("file=sys.stdout\n", file=sys.stdout)
print("file=sys.stderr\n", file=sys.stderr)
sys.stdout.write("sys.stdout.write\n")
sys.stderr.write("sys.stderr.write\n")
Produces this output:
supervisor> tail logwatcher
print
file=sys.stdout
sys.stdout.write
supervisor> tail logwatcher stdout
file=sys.stderr
sys.stderr.write
supervisor> tail logwatcher stderr
file=sys.stderr
sys.stderr.write
Why is tail logwatcher stdout only showing stderr messages, and not stdout messages?
Why is it showing stderr output at all?
If plain tail is supposed to mimic tail stdout, why don't the two outputs match?
Tested on supervisor 3.3.5 and supervisor 4.0.1.
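One way to take supervisorctl's channel handling out of the equation while debugging is to pin both log files explicitly and tail them directly on disk. This is only a sketch; the log paths are illustrative, while stdout_logfile, stderr_logfile, and redirect_stderr are standard [program:x] options:
[program:logwatcher]
command=/path/to/python -u "/path/to/logwatcher.py"
; illustrative paths; redirect_stderr=false keeps the two streams separate
stdout_logfile=/var/log/logwatcher.out.log
stderr_logfile=/var/log/logwatcher.err.log
redirect_stderr=false
With the files pinned, running tail -f on each path shows which stream every line really arrived on, independently of supervisorctl's tail command.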

Related

Apache logging twice to /proc/1/fd/1

I am trying to use tee to log into two locations:
file in persistent storage
Docker stdout
Error log line from VirtualHost config:
ErrorLog "|/usr/bin/tee -a /var/log/apache/error.log /proc/1/fd/1"
The problem is that errors are logged twice to /proc/1/fd/1 (as docker logs shows), yet each error is logged only once to /var/log/apache/error.log.
I've also tried running from the CLI:
echo 123 | /usr/bin/tee -a /tmp/test /proc/1/fd/1
This successfully writes only once to both the file and stdout.
Is there some reason why Apache writes its logs twice to /proc/1/fd/1, while it logs only once to the file and /usr/bin/tee on its own works as expected?
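A hedged first diagnostic: many Apache-based Docker images already point the error log at the container's stdout or stderr (for example via a symlink or a default ErrorLog directive), so the tee pipe may be the second writer rather than the only one. The paths below are illustrative and vary by image:
# look for additional ErrorLog directives in the active configuration
grep -Rn ErrorLog /etc/apache2/ /usr/local/apache2/conf/ 2>/dev/null
# check whether error.log is already a symlink into /proc or /dev
ls -l /var/log/apache2/error.log 2>/dev/null
If a second ErrorLog destination shows up, each error line would be emitted twice toward the container's output but only once through tee, which would match the observed behavior.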

Differentiate between STDOUT and STDERR in Docker Google Cloud Logging driver

I set my /etc/docker/daemon.json to send container logs to GCP:
{
  "log-driver": "gcplogs",
  "log-opts": {
    "gcp-meta-name": "some-cool-name",
    "gcp-project": "some-project-name"
  }
}
This works fine, but it seems there is no distinction between STDERR and STDOUT; both kinds of entries have a Severity of 'Default'.
In container:
root@0bbcf70a30ed:/var/www/app# echo 'xx' > /proc/1/fd/2
root@0bbcf70a30ed:/var/www/app# echo 'xx' > /proc/1/fd/1
In GCP, both entries show up with Severity 'Default' (screenshot omitted).
Is there anything I can do to make the logs from STDERR have a Severity of 'Error' ?
And if not, is there anything I can do to make all STDERR entries have a string like 'ERROR' prepended, so I can at least filter on them?
For example, in my Dockerfile I do:
RUN touch /var/log/apache2/error.log
RUN ln -sf /proc/1/fd/2 /var/log/apache2/error.log
This makes sure the apache2 error logs go to the container's STDERR. If I could somehow make all of those log entries have a string like 'ERROR' prepended, I would at least have a semi-workable solution.
But really, having STDERR entries automatically get Severity 'Error' is ideal.
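If a prepended tag is acceptable as a workaround, one hedged sketch is to rewrite the container's stderr through sed in the entrypoint. Everything here is illustrative: the entrypoint script itself, and apache2-foreground standing in for whatever your container's real main process is:
#!/bin/bash
# prefix each stderr line with "ERROR:" before it reaches the log driver
exec 2> >(sed 's/^/ERROR: /' >&2)
# replace with your container's actual main process
exec apache2-foreground
The entries would still arrive with Severity 'Default', but the prefix makes them filterable in the log viewer.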

What is the stdout output?

I'm a newbie learning the Linux command line, and one of my exercises says: "Inside the 'stream-redirection' folder there is a program called 'program'. When you run this it will output to stdout and stderr. What is the stdout output?" I'm unsure how to read the output, or even where it is.
I've tried:
./program 1> stdout and then cat stdout
I'm lost, and I have to do the same with stderr.
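For what it's worth, the usual pattern is to redirect each stream to its own file and then read the files; the filenames here are arbitrary:
# 1> captures stdout, 2> captures stderr
./program 1> stdout.txt 2> stderr.txt
cat stdout.txt   # this is the stdout output
cat stderr.txt   # this is the stderr output
The attempt above (./program 1> stdout followed by cat stdout) is already correct for stdout; adding 2> does the same job for stderr.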

docker exec command doesn't return after completing execution

I started a docker container based on an image which has a file "run.sh" in it. Within a shell script, I use docker exec as shown below:
docker exec <container-id> sh /test.sh
test.sh completes execution, but docker exec does not return until I press Ctrl+C. As a result, my shell script never ends. Any pointers to what might be causing this?
I could get it working by adding the -it parameters:
docker exec -it <container-id> sh /test.sh
Mine works like a charm with this command. Maybe you only forgot the path to the binary (/bin/sh)?
docker exec 7bd877d15c9b /bin/bash /test.sh
File location:
/test.sh
File Content:
#!/bin/bash
echo "Hi"
echo
echo "This works fine"
sleep 5
echo "5"
Output:
ArgonQQ@Terminal ~ docker exec 7bd877d15c9b /bin/bash /test.sh
Hi
This works fine
5
ArgonQQ@Terminal ~
My case is a script a.sh with content like:
php test.php &
If I execute it like:
docker exec container1 a.sh
it also never returns.
After half a day of googling and trying, I changed a.sh to:
php test.php > /tmp/test.log 2>&1 &
It works!
So it seems related to stdin/stdout/stderr.
Please try appending > /tmp/test.log 2>&1 to the command that hangs.
And please note that my test.php is a dead-loop script that monitors a specified process; if the process is down, it restarts it. So test.php never exits.
As described here, this "hanging" behavior occurs when you have processes that keep stdout or stderr open.
To prevent this from happening, each long-running process should:
be executed in the background, and
close both stdout and stderr or redirect them to files or /dev/null.
I would therefore make sure that any processes already running in the container, as well as the script passed to docker exec, conform to the above.
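For instance, a long-running process launched from the exec'ed script could be detached like this (my-daemon is a placeholder name):
# run in the background with all three standard streams detached
my-daemon < /dev/null > /dev/null 2>&1 &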
OK, I got it. After
docker stop a590382c2943
docker start a590382c2943
things are OK:
docker exec -ti a590382c2943 echo "5"
now returns immediately, whether or not -it is added.
Actually, in my program the daemon was holding stdin, stdout, and stderr open, so I changed my Python daemon as follows, and things work like a charm:
import os
import sys
import time

# serve() is the daemon's worker function, defined elsewhere in the program

if __name__ == '__main__':
    # do the UNIX double-fork magic; see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177)
    try:
        pid = os.fork()
        if pid > 0:
            # exit the first parent
            os._exit(0)
    except OSError as e:
        print("fork #1 failed: %d (%s)" % (e.errno, e.strerror))
        os._exit(0)
    # decouple from the parent environment
    # os.chdir("/")
    os.setsid()
    os.umask(0)
    # redirect stdin, stdout, and stderr to /dev/null so the daemon
    # keeps no reference to the original streams (this is the fix)
    si = open(os.devnull, 'r')
    so = open(os.devnull, 'a+')
    se = open(os.devnull, 'a+')
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(se.fileno(), sys.stderr.fileno())
    # do the second fork, restarting the worker if it ever dies
    while True:
        try:
            pid = os.fork()
            if pid == 0:
                serve()
            if pid > 0:
                print("Server PID %d, Daemon PID: %d" % (pid, os.getpid()))
                os.wait()
                time.sleep(3)
        except OSError as e:
            # print("fork #2 failed: %d (%s)" % (e.errno, e.strerror))
            os._exit(0)

How to output tcpdump with grep expression to stdout / file?

I am trying to output the following tcpdump grep expression to a file:
tcpdump -vvvs 1024 -l -A tcp port 80 | grep -E 'X-Forwarded-For:' --line-buffered | awk '{print $2}'
I understand it is related to the --line-buffered option, which sends the output to stdin. However, if I don't use --line-buffered I don't get any output at all from my tcpdump.
How can I use grep so that it will send my output directly to stdout / a file in this case?
I am trying to output the following tcpdump grep expression to a file
Then redirect the output of the last command in the pipeline to the file:
tcpdump -vvvs 1024 -l -A tcp port 80 | grep -E 'X-Forwarded-For:' --line-buffered | awk '{print $2}' >file
I understand it is related to the --line-buffered option, which sends the output to stdin.
No, that's not what --line-buffered does:
$ man grep
...
--line-buffered
Force output to be line buffered. By default, output is line
buffered when standard output is a terminal and block buffered
otherwise.
so it doesn't change where the output goes; it just changes when the data is actually written to the output descriptor if that descriptor is not a terminal. It's not a terminal in this case - it's a pipe - so, by default, grep's output is block buffered. Buffer blocks, in this context, are typically 4K bytes on most modern UN*Xes and on Windows, so 4 lines of grep output are unlikely to fill one; those lines will not immediately be written by grep to the pipe, and so they won't show up immediately.
--line-buffered changes that behavior, so that each line is written to the pipe as it's generated, and awk sees it sooner.
You're using -l with tcpdump, which has the same effect, at least on UN*X:
$ man tcpdump
...
-l Make stdout line buffered. Useful if you want to see the data
while capturing it. E.g.,
tcpdump -l | tee dat
or
tcpdump -l > dat & tail -f dat
Note that on Windows, "line buffered" means "unbuffered", so
that WinDump will write each character individually if -l is
specified.
-U is similar to -l in its behavior, but it will cause output to
be "packet-buffered", so that the output is written to stdout
at the end of each packet rather than at the end of each line;
this is buffered on all platforms, including Windows.
So the pipeline, as you've written it, will cause grep to see each line that tcpdump prints as soon as tcpdump prints it, and cause awk to see each of those lines that contains "X-Forwarded-For:" as soon as grep sees it and matches it.
However, if I don't use --line-buffered I don't get any output at all from my tcpdump.
You'll see it eventually, as long as grep produces a buffer's worth of output; however, that could take a very long time. --line-buffered causes grep to write out each line as it's produced, so it shows up as soon as grep produces it, rather than only when the buffer is full.
How can I use grep so that it will send my output directly to stdout / a file in this case?
grep is sending its (standard) output to awk, which is presumably what you want; you're extracting the second field from grep's output and printing only that.
So you don't want grep to send its (standard) output directly to the terminal or to a file, you want it to send its output to awk and have awk send its (standard) output there. If you want the output to be printed on your terminal, your command is doing the right thing; if you want it sent to a file, redirect the standard output of awk to that file.
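One more buffering wrinkle worth noting when the final destination is a file: awk also buffers its own stdout when it isn't a terminal, so even with the upstream fixes the file can lag. A sketch that flushes awk after every line (the output filename is arbitrary):
tcpdump -vvvs 1024 -l -A tcp port 80 | grep -E 'X-Forwarded-For:' --line-buffered | awk '{print $2; fflush()}' > xff.log
fflush() without arguments flushes awk's standard output; it's widely supported (gawk, mawk, BSD awk).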
