So I'm logging some debug information, sending it to stdout, and piping that through grep for a string. At a certain point logging stops and the application sits waiting, but grep's output is truncated mid-line: it matched a line but didn't print all of it.
Is there a way to force grep to flush?
Thanks.
UPDATE:
It appears that --line-buffered will help.
I think you solved your problem with grep by using the --line-buffered flag. Also make sure that your application is flushing its stdout after each line. If stdout is a terminal, line buffering is the default, but when you pipe it into another program the default is full buffering.
If you are piping data into a program that doesn't have a --line-buffered flag (uniq, for example), take a look at the stdbuf program (http://www.pixelbeat.org/programming/stdio_buffering/stdbuf-man.html), which lets you modify the buffering of any program.
See http://www.pixelbeat.org/programming/stdio_buffering/ for a good overview of the issue and some common solutions.
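For concreteness, here is a hedged sketch of such a pipeline; app and the DEBUG pattern are placeholders, not names from the question:

# Force line buffering at each stage so matches appear immediately
# instead of sitting in a full stdio buffer.
./app | grep --line-buffered "DEBUG"

# For tools without their own flag (uniq, for example), stdbuf from GNU
# coreutils can force line-buffered stdout (-oL):
./app | grep --line-buffered "DEBUG" | stdbuf -oL uniq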
Currently, from my test execution, I see the first 10 lines of logs if the test has passed. I see the first n lines of logs if the test has failed. Ideally, I would like to see the last 100 lines of logs in Standard Error.
There are no settings set up in my Jenkinsfile, so I assume the current behavior is the default behavior of test execution. I need a way to change it.
I have tried ${BUILD_LOG, maxLines, escapeHtml}, which gets me logs that can be emailed to someone.
I have tried messing around with System Logs.
I would like my standard error to show the last n lines of logs.
This is probably A way to do it (definitely not THE way, and maybe not the most efficient way):
Once the build completes, do a curl to get the console logs; use something like this and store the file in the workspace: curl -isk -X GET -H "$CRUMB" "$BUILD_URL/consoleText" --user "$USER:$APITOKEN" -o consolelogs.txt
Then you can simply tail the file for the last 100 lines, e.g.: tail -n 100 <log file> > newLogfile
Combine it with the Log Parser plugin: mark the start with a known string like ****start of logs**** and create a rule to parse it.
For the first step, see Jenkins API access for how to obtain the crumb.
Update:
Just realized that the above approach may be difficult from within the same job. Either a downstream job can be used, or the entire test log can be redirected to a file in the workspace without sending it to stdout, and then tail and the Log Parser plugin can be used to achieve the result.
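A rough sketch of those steps as a shell snippet; CRUMB, USER, APITOKEN and the standard Jenkins variables JENKINS_URL/BUILD_URL are assumed to be available in the job environment, and the exact crumb endpoint may differ on your instance:

# Fetch a crumb, then pull the finished build's console log into the workspace.
CRUMB=$(curl -sk --user "$USER:$APITOKEN" \
  "$JENKINS_URL/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)")
curl -sk -X GET -H "$CRUMB" --user "$USER:$APITOKEN" \
  "$BUILD_URL/consoleText" -o consolelogs.txt

# Keep only the last 100 lines, prefixed with a marker the Log Parser rule can key on.
echo "****start of logs****" > newLogfile
tail -n 100 consolelogs.txt >> newLogfile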
I have an application that uses GLib's commandline option parser to handle commandline arguments (as described here).
What I've found is that the description for each option entry has to be very short in order to fit within the width of a standard-sized terminal (when the application is called with the --help argument). If a description for an option is too long it wraps around, and this looks pretty bad. Is there an accepted way to tidy this up?
For example, here's what part of the help output from my application looks like in an 80 character wide terminal window:
Application Options:
-i, --ip-addr Sets the IP address to which the video strea
ms will be sent. If this option is not used then the default IP address of 127.0
.0.1 is used.
-p, --port Sets the port to send the video streams to.
If not chosen this defaults to 1234.
Ideally it would look something like this:
Application Options:
-i, --ip-addr Sets the IP address to which the video
streams will be sent. If this option is not
used then the default IP address of
127.0.0.1 is used.
-p, --port Sets the port to send the video streams to.
If not chosen this defaults to 1234.
I could just get the above result manually, by working out the required length of each line of my option descriptions. Then I could manually enter newlines and spaces into the strings to get the right indentation. But this seems like a really rough approach, and I'm sure there must be a better and less time-consuming way of formatting the output.
I'm sure this problem must have come up before for others, but I haven't found a solution. Does anybody here know of a better way to get nicer formatting?
I have the exact same issue. At present I am using the ghetto fix of adding spaces. This, however, is not possible with the argument description (rather than just the description, which is what is printed at the end). If you add newlines to break the argument description, the spacing of the following arguments is messed up.
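For anyone wanting to see what that spacing hack looks like in practice, here is a minimal GLib sketch. The option names mirror the example output above, but the indentation width is a guess that would need tuning to your terminal, and the ADDR/PORT argument descriptions are made up; this is the "adding spaces" workaround, not an official GLib facility.

#include <glib.h>

static gchar *ip_addr = NULL;
static gint port = 1234;

/* Hand-wrapped descriptions: each continuation line starts with enough
 * spaces to line up under GLib's description column (the count below is a
 * guess for an 80-column terminal). */
static GOptionEntry entries[] = {
  { "ip-addr", 'i', 0, G_OPTION_ARG_STRING, &ip_addr,
    "Sets the IP address to which the video\n"
    "                                  streams will be sent. Defaults to\n"
    "                                  127.0.0.1.", "ADDR" },
  { "port", 'p', 0, G_OPTION_ARG_INT, &port,
    "Sets the port to send the video streams to.\n"
    "                                  Defaults to 1234.", "PORT" },
  { NULL }
};

int main (int argc, char *argv[])
{
  GError *error = NULL;
  GOptionContext *context = g_option_context_new ("- stream video");

  g_option_context_add_main_entries (context, entries, NULL);
  if (!g_option_context_parse (context, &argc, &argv, &error))
    {
      g_printerr ("option parsing failed: %s\n", error->message);
      return 1;
    }
  g_option_context_free (context);
  return 0;
}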
From the manpage of the dbus-monitor command, I know that I can use command line arguments like dbus-monitor "type=..., sender=..., interface=..." to specify the type/sender/interface etc. that I am interested in.
However, when there are a few programs with heavy D-Bus traffic that I am not interested in, is there an option to filter out the output of those interfaces/programs?
THX
The dbus-daemon routes messages using message matching rules. You cannot have something like "message unmatching" rules; the specification does not support this. See here for more information.
To get the desired filtering behavior I would suggest using grep on the output of dbus-monitor. Check this discussion for more information.
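As a hedged example of the grep approach (the interface name is a placeholder for whatever is flooding your output):

# Drop lines from the noisy interface; --line-buffered keeps the output live.
dbus-monitor | grep --line-buffered -v "interface=org.example.NoisyService"

# Several uninteresting senders/interfaces can be excluded at once with -e:
dbus-monitor "type='signal'" | grep --line-buffered -v \
  -e "org.example.NoisyService" -e ":1.42"

Note that grep works line by line, so the argument lines printed under a filtered message header will still appear; this only thins out the headers.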
With fconfigure you can get and set channel options. -buffering specifies the type of buffering, and by default it is line for stdin.
Is there a way to check if the buffer at stdin is empty or not?
Please see this question: How to check if stdin is pending in TCL?
Obviously you could set the channel mode to non-blocking and read from it. If the read returns 0 length then nothing was available. However, I suspect you mean to test for data that is present but not yet a complete line, given that you mention line buffering. The fblocked command tests a channel for this. See fblocked(n) for the details, but for a line-buffered channel this lets you know that an incomplete line is present.
Another useful command when reading stdin, if you are reading interactive script commands, is info complete. With this you can accumulate lines until info complete returns true, then evaluate the whole buffer in one go.
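A small sketch of that combination, assuming a non-blocking stdin and an interactive read loop (the 100 ms poll interval is arbitrary):

# Read stdin without blocking; fblocked detects a partial line,
# info complete decides when the accumulated text is a whole script.
fconfigure stdin -blocking 0
set buffer ""
while {1} {
    set n [gets stdin line]
    if {$n < 0} {
        if {[fblocked stdin]} {
            after 100          ;# nothing complete yet, wait a little
            continue
        }
        break                  ;# end of file
    }
    append buffer $line \n
    if {[info complete $buffer]} {
        if {[catch {uplevel #0 $buffer} result]} {
            puts stderr $result
        }
        set buffer ""
    }
}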
You can check Tcl's input buffer with chan pending input stdin (requires at least Tcl 8.5). This does not indicate whether the OS has anything in its buffers though; those are checked by trying to read the data (gets or read) or by using a script that triggers off of a readable fileevent, when at least one byte is present. (Well, strictly what is actually promised is that an attempt to read a single byte won't block, but it could be because of an error condition which causes immediate failure. That's the semantics of how OS-level file descriptor readiness works.)
The -buffering option only affects output channels; it's useless on stdin (or any other read-only channel) and has no effect at all. Really. (It is, however, too much trouble to remove.)
I know this is an old question, but it sparked some research on my end and I found a command called fileevent, which calls an event handler when the stream (e.g. stdin) has something in it that can be read. It may be helpful.
Source: http://wiki.tcl.tk/880
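Putting the previous answers together, here is a hedged sketch of an event-driven reader that reports what is in Tcl's input buffer via chan pending and reacts through fileevent:

# fileevent fires the handler once at least one byte can be read;
# chan pending (Tcl 8.5+) shows how much already sits in Tcl's input buffer.
fconfigure stdin -blocking 0
proc onReadable {} {
    puts "Tcl input buffer holds [chan pending input stdin] byte(s)"
    if {[gets stdin line] >= 0} {
        puts "got line: $line"
    } elseif {[eof stdin]} {
        fileevent stdin readable {}    ;# remove the handler
        set ::done 1
    }
}
fileevent stdin readable onReadable
vwait ::done                           ;# run the event loop until stdin closes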
As suggested here, latexmk is a handy way to continually compile your document whenever the source changes. But often when you're working on a document you'll end up with errors and then latex will panic and wait for user input before continuing. That can get very annoying, especially recently when I hacked up something to compile latex directly from an etherpad document, which saves continuously as you type.
Is there a setting for latex or latexmk to make it just abort with an error message if it can't compile? Or, if necessary, how would I set up some kind of Expect script to auto-dismiss LaTeX's complaints?
(I had thought pdflatex's option -halt-on-error would do the trick but apparently not.)
Bonus question: Skim on Mac OSX is a nice pdf viewer that autorefreshes when the pdf changes (unlike Preview), except that whenever there's a latex error it makes you reconfirm that you want autorefreshing. Texniscope doesn't have this problem, but I had to ditch Texniscope for other reasons. Is there a way to make Skim always autorefresh, or is there another viewer that gets this right?
ADDED: Mini-tutorial on latexmk based on the answer to this question:
Get latexmk here: http://www.phys.psu.edu/~collins/software/latexmk-jcc/
Add the following to your ~/.latexmkrc file:
$pdflatex = 'pdflatex -interaction=nonstopmode';
(For OS X with Skim)
$pdf_previewer = "open -a /Applications/Skim.app";
While editing your source file, foo.tex, run the following in a terminal:
latexmk -pvc -pdf foo.tex
Use Skim or another realtime pdf viewer to view foo.pdf. For Skim, just look at the “Sync” tab in Skim’s preferences and set it up for your editor.
Voila! Hitting save on foo.tex will now cause foo.pdf to refresh without touching a thing.
With MiKTeX, pdflatex has this command-line option:
-interaction=MODE Set the interaction mode; MODE must be one
of: batchmode, nonstopmode, scrollmode,
errorstopmode.
Edit suggested by #9999years:
Those values are equivalent to TeX commands of the same names (\errorstopmode, \scrollmode, \nonstopmode, \batchmode) that provide the same functionality.
From TeX usage tips:
The modes make TeX behave in the following way:
errorstopmode stops on all errors, whether they are about errors in the
source code or non-existent files.
scrollmode doesn't stop on errors in the source but requests input when a
more serious error, like a missing file, occurs.
In the somewhat misnamed nonstopmode, TeX does not request input after
serious errors but stops altogether.
batchmode prevents all output in addition to that (intended for use in
automated scripts). In all cases, all errors are written to the log file
(yourtexfile.log).
You can also put \nonstopmode or \batchmode at the very beginning of your tex file; then it'll work with any TeX version, not just pdflatex. For more info on these and related commands see the very good reference on (raw) TeX commands by David Bausum. Especially the commands from the debugging family could be of interest here.
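For example, a minimal document with the mode switch placed at the very top, as suggested:

% \nonstopmode before \documentclass works with any TeX engine;
% swap in \batchmode to also silence the terminal output.
\nonstopmode
\documentclass{article}
\begin{document}
Hello, world.
\end{document}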
Another possible hack is simply to use:
yes x | latexmk source.tex
You could always create an alias for 'yes x | latexmk' if you're going to use this option a lot. The main advantage I can see over the other suggestions is that it is very quick for when you only occasionally want latexmk to behave like this.
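For instance, a possible alias (the name is arbitrary), e.g. in ~/.bashrc:

# Expands at the start of a command line, so arguments still pass to latexmk.
alias latexmk-force='yes x | latexmk'
# usage: latexmk-force -pvc -pdf source.tex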
Mehmet
There is also a \batchmode command that may do the job.