How can I grep the output of the Cloud Foundry CLI (vmc/cf)?

I have used the following command:
vmc info | grep target
and I get the target info exactly as expected. But when I type:
vmc apps | grep running
there is no output.
If I try to redirect stdout to a file like:
vmc apps &> tmplog
I am confused to see that only the first column of the output (the app name) is written into the file. Any suggestions?

It may be the case that you need to redirect both Unix output streams to see the complete log. There is STDOUT (1) and STDERR (2). You can redirect both streams to the same file with:
vmc apps > tmplog 2>&1
Your last line above only redirected one output stream (STDOUT); the other stream may be written to the console instead.
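If the lines you want to grep end up on STDERR, a minimal sketch of the pipe form (assuming a bash-like shell) is:
vmc apps 2>&1 | grep running
Here 2>&1 merges STDERR into STDOUT, so grep sees both streams.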
Additionally, the vmc CLI is pretty much outdated. For the current Go implementation of the CF CLI (gcf/cf), I successfully tested that the following command works:
cf logs $YOUR_APP_NAME | grep RTR

Related

vlc command line: -vv and -vvv give zero detail

I am chasing a command-line vlc conversion (mp4 and wav to mp3). The problem is that vlc creates a zero-byte output file.
I have tried the -vv and -vvv options, hoping vlc would give me a hint about what is going on, but the presence of these switches does nothing.
How can I get some hints from vlc?
Example command line:
vlc.exe -I -vvv dummy "c:\temp\test\arc\test.aac" --sout=#transcode{acodec=mp3,ab=48,channels=2,samplerate=192000}:standard{access=file,mux=ts,dst="C:\data\personal\test\cardbuilding\audio-files\hinative\test2.mp3"} vlc://quit
OK, found out how to get detailed logging for VLC when it runs from a command line.
Open VLC interactively
Tools | Preferences | All (radio at bottom)
Advanced | Logger
Turn on logging, set file, set detail level
Close VLC
Now when you use the vlc.exe command line, you will get a log.
Alternate method (better, as all control remains on the cmd line, but not yet tested by me):
vlc --file-logging --logfile abc.txt --log-verbose 3
(where 3 = Debug)
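Combining that with the transcode command from the question should, in principle, produce a log of the failing run. A sketch I have not verified, reusing the same paths as above:
vlc.exe -I dummy --file-logging --logfile vlc-log.txt --log-verbose 3 "c:\temp\test\arc\test.aac" --sout=#transcode{acodec=mp3,ab=48,channels=2,samplerate=192000}:standard{access=file,mux=ts,dst="C:\data\personal\test\cardbuilding\audio-files\hinative\test2.mp3"} vlc://quit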
That is the good news.
The bad news is... at least in my case, the logged information is not helping me figure out the problem.

How to log TLSv1.3 keys in JSSE for Wireshark to decode traffic

I've been (successfully) looking at TLSv1.2 traffic in Wireshark via a key logfile, and I'd like to do something similar for TLSv1.3.
https://github.com/square/okhttp/pull/6060
This follows the approach described here https://security.stackexchange.com/questions/35639/decrypting-tls-in-wireshark-when-using-dhe-rsa-ciphersuites
I'm wondering if anyone has something similar working with Java's JSSE for TLSv1.3?
I know I need to log CLIENT_EARLY_TRAFFIC_SECRET, CLIENT_HANDSHAKE_TRAFFIC_SECRET, SERVER_HANDSHAKE_TRAFFIC_SECRET, CLIENT_TRAFFIC_SECRET_0 or SERVER_TRAFFIC_SECRET_0. But I'm not sure of the right hooks in JSSE.
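Whatever hook ends up providing these, the file has to follow Wireshark's NSS key log format: one line per secret, consisting of the label, the hex-encoded client random, and the hex-encoded secret, separated by spaces. Purely for illustration (values abbreviated and made up):
CLIENT_HANDSHAKE_TRAFFIC_SECRET 1f4a...e3 8b02...9c
SERVER_HANDSHAKE_TRAFFIC_SECRET 1f4a...e3 5dc1...44
CLIENT_TRAFFIC_SECRET_0 1f4a...e3 77aa...10
SERVER_TRAFFIC_SECRET_0 1f4a...e3 02fe...8d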
Found prior art on https://wiki.wireshark.org/TLS#Using_the_.28Pre.29-Master-Secret
Specifically
https://github.com/neykov/extract-tls-secrets
and
http://jsslkeylog.sourceforge.net/
For the GitHub project, download https://repo1.maven.org/maven2/name/neykov/extract-tls-secrets/4.0.0/extract-tls-secrets-4.0.0.jar
Then run the following commands before your application attempts to connect. The sample program for OkHttp prints its PID and then waits 10 seconds for exactly this reason.
$ java -jar ~/Downloads/extract-tls-secrets-4.0.0.jar list                     # list attachable JVM PIDs
$ java -jar ~/Downloads/extract-tls-secrets-4.0.0.jar <pid> /tmp/secrets.log   # attach and write the key log
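If attaching to a running JVM is too racy, it should also be possible to load the agent at JVM startup (per the project's README); something like this, where your-app.jar is a placeholder:
$ java -javaagent:extract-tls-secrets-4.0.0.jar=/tmp/secrets.log -jar your-app.jar
Then point Wireshark at /tmp/secrets.log via Preferences | Protocols | TLS | (Pre)-Master-Secret log filename.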

Docker - Handling multiple services in a single container

I would like to start two different services in my Docker container and exit the container as soon as one of them exits. I looked at supervisor, but I can't find how to get it to quit as soon as one of the managed applications exits. It tries to restart them up to three times, which is the default setting, and then just sits there doing nothing. Is supervisor able to do this, or is there another tool for this? A bonus would be if there were also a way to let both managed programs write to stdout, tagged with their application name, e.g.:
[Program 1] Some output
[Program 2] Some other output
[Program 1] Output again
Since you asked if there was another tool... we designed and wrote a powerful replacement for supervisord built specifically for Docker. It automatically terminates when all applications quit, has special service settings to control this behavior, and redirects stdout with tagged, syslog-compatible output lines. It's open source and being used in production.
Here is a quick start for Docker: http://garywiz.github.io/chaperone/guide/chap-docker-simple.html
There is also a complete set of tested base images, which make a good example, at https://github.com/garywiz/chaperone-docker, but these might be overkill and the earlier quick start may do the trick.
I found solutions to both of my requirements by reading through the docs some more.
Exit supervisord on application exit
This can be achieved by using a custom eventlistener. I had to add the following segment to my supervisord configuration file:
[eventlistener:shutdownevent]
command=/shutdownhandler.sh
events=PROCESS_STATE_EXITED
supervisord will start the referenced script and, upon the given event being triggered (PROCESS_STATE_EXITED is triggered after one of the managed programs exits and is not restarted automatically), will send a line containing data about the event on the script's stdin.
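For the event to mean a full shutdown on the first exit, the managed programs should not be auto-restarted by supervisord. A minimal sketch of the accompanying program sections (program names, commands, and log paths are placeholders):
[program:program1]
command=/usr/local/bin/program1
autorestart=false
stdout_logfile=/var/log/supervisor/program1.log

[program:program2]
command=/usr/local/bin/program2
autorestart=false
stdout_logfile=/var/log/supervisor/program2.log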
The referenced shutdownhandler-script contains:
#!/bin/bash
while :
do
    # Signal readiness to receive an event
    echo -en "READY\n"
    # Block until supervisord sends an event line
    read line
    # One of the managed programs exited: stop supervisord itself
    kill $(cat /supervisord.pid)
    # Acknowledge the event (result body "OK", length 2)
    echo -en "RESULT 2\nOK"
done
The script has to indicate that it is ready by sending "READY\n" on its stdout, after which it may receive an event data line on its stdin. For my use case, upon receipt of a line (meaning one of the managed programs has exited), a SIGTERM is sent to the supervisord process, found via the PID it leaves in its pid file (located in the root directory by default). For technical completeness, I also included a positive answer for the eventlistener, though that one should never matter.
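For illustration only, the event line the listener reads on its stdin is a supervisord header roughly like this (values made up):
ver:3.0 server:supervisor serial:21 pool:shutdownevent poolserial:10 eventname:PROCESS_STATE_EXITED len:84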
Tagged output on stdout
I did this by simply starting a tail process in the background before starting supervisord, tailing the program's output log and piping the lines through ts (from the moreutils package) to prepend a tag. This way the output shows up via docker logs with an easy way to see which program actually wrote each line.
tail -fn0 /var/log/supervisor/program1.log | ts '[Program 1]' &
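With two programs, the container start script becomes something like this sketch (log paths match the supervisord config above; exec keeps supervisord in the foreground as the container's main process):
#!/bin/bash
tail -fn0 /var/log/supervisor/program1.log | ts '[Program 1]' &
tail -fn0 /var/log/supervisor/program2.log | ts '[Program 2]' &
exec supervisord -n -c /etc/supervisord.conf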

Ruby background process STDOUT is empty

I'm having a weird issue with a start-up script which runs a Sinatra script using the shell's "daemon" function. The problem is that when I run the command at the command line, I get output to STDOUT. If I run the command at the command line exactly as it is in the script -- less the daemon part -- the output is correctly redirected to the output file. However, when the startup script runs it (see below), I get stuff to the STDERR log but not to the STDOUT log.
The relevant lines of the script:
#!/bin/sh
# (which is, and has been, a symlink to /bin/bash)
# Source function library.
. /etc/init.d/functions
# Set Some Variables
RUNAS="joeuser"
PID=/var/run/myapp.pid
LOG="/var/log/myapp/app-out.log"
ERR_LOG="/var/log/myapp/app-err.log"
APPLICATION_COMMAND="RAILS_ENV=production ruby /opt/myapp/lib/daemons/my-sinatra-app.rb -p 8002 2>>${ERR_LOG} >>${LOG} &"
# Snip a bunch. This is the applicable line from the "start" case:
daemon --user $RUNAS --pidfile $PID $APPLICATION_COMMAND &> /dev/null
Now, the funky parts:
The error log is written to correctly via the redirect of STDERR.
If I reverse the order of the >> and the 2>> (I'm grasping at straws here!), the behavior does not change: I still get STDERR logged correctly and STDOUT is empty.
If the output log doesn't exist, the STDOUT redirect creates the file. But, the file remains 0-length.
This used to work. The log directory is maintained by log-rotate. All of the more-recent 'out' logs are 0-length. The older ones are not. It seems like it stopped working some time in April. The ruby code didn't change at any time near then; neither did the startup script.
We're running three different services in this way. Two of them are ruby daemons (one uses sinatra, one does not) and the other is a background java process. This is occurring for BOTH of the ruby processes but is not happening on the java process. Maybe something changed in Ruby?
FTR, we've got ruby 1.8.5 and RHEL 5.4.
I've done some more probing. The daemon function does a bunch of stuff, but the meat of the matter is that it runs the program using runuser. The command essentially looks like this:
runuser -s /bin/bash - joeuser -c "ulimit -S -c 0 >/dev/null 2>&1 ; RAILS_ENV=production ruby /opt/myapp/lib/daemons/my-sinatra-app.rb -p 8002 '</dev/null' '>>/var/log/myapp/app-out.log' '2>>/var/log/myapp/app-err.log' '&'"
When I run exactly that at the command line (both with and without the single quotes that got added somewhere along the line), I get the exact same screwy behavior w.r.t. the output log. So it seems to me that this is an issue of how ruby (?) interacts with runuser?
Too long to put in a comment :-)
Change the shebang to #!/bin/sh -x and verify that everything is expanded according to your expectations. Also, when executing from a terminal, your .bashrc file is sourced; when executing from a script, it is not. There might be something in your environment that differs. One way to find out is to run env from the terminal and from the script and diff the output:
env > env_terminal   # run from an interactive terminal
env > env_script     # run from inside the startup script
diff env_terminal env_script
Happy hunting...

How to make output of any shell command unbuffered?

Is there a way to run shell commands without output buffering?
For example, hexdump file | ./my_script will only pass input from hexdump to my_script in buffered chunks, not line by line.
Actually, I want to know a general solution for making the output of any command unbuffered.
Try stdbuf, included in GNU coreutils and thus available on virtually any Linux distro. This sets the buffer length for input, output, and error to zero:
stdbuf -i0 -o0 -e0 command
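Note that zero disables buffering entirely; if line-by-line delivery is all you need, line-buffering stdout is usually enough, e.g. for the example from the question:
stdbuf -oL hexdump file | ./my_script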
The command unbuffer from the expect package disables the output buffering:
See the Ubuntu manpage: unbuffer - unbuffer output
Example usage:
unbuffer hexdump file | ./my_script
AFAIK, you can't do it without ugly hacks. Writing to a pipe (or reading from it) automatically turns on full buffering and there is nothing you can do about it :-(. "Line buffering" (which is what you want) is only used when reading from or writing to a terminal. The ugly hacks do exactly this: they connect a program to a pseudo-terminal, so that the other tools in the pipe read/write from that terminal in line-buffering mode. The whole problem is described here:
http://www.pixelbeat.org/programming/stdio_buffering/
The page also has some suggestions (the aforementioned "ugly hacks") for what to do, i.e. using unbuffer or pulling some tricks with LD_PRELOAD.
You could also use the script command to make the output of hexdump line-buffered (hexdump will be run in a pseudo-terminal, which tricks it into thinking it is writing its stdout to a terminal and not to a pipe).
# cf. http://unix.stackexchange.com/questions/25372/turn-off-buffering-in-pipe/
stty -echo -onlcr
script -q /dev/null hexdump file | ./my_script # FreeBSD, Mac OS X
script -q -c "hexdump file" /dev/null | ./my_script # Linux
stty echo onlcr
With grep or egrep you can use the --line-buffered option to solve this; no other tools are needed.
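This only helps when grep itself is the stage whose output is being buffered, e.g.:
tail -f /var/log/syslog | grep --line-buffered error | ./my_script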
