How to not truncate INFO messages in bazel output?

The INFO: messages emitted when running bazel are truncated to the width of the terminal.
How can I specify bazel not to truncate INFO messages?
When I redirect the output to a file - of course it has the complete info.
When I change the terminal size manually, it truncates to that terminal size. The options to set the terminal size, e.g. export COLUMNS=500 or stty rows 50 cols 132, don't seem to work.
Either way, I need a bazel option; I'm not looking for workarounds.

A quick solution is to pipe the output through tee: bazel ... |& tee /dev/null
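For example (the build target here is only a placeholder), piping through tee means bazel is no longer writing directly to a terminal, so the messages are not cut to the terminal width:
bazel build //foo:bar |& tee /dev/null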

Related

How do I debug a 'java_binary' target executed by a Bazel rule via 'ctx.actions.run(...)'?

I have a java_binary target in my workspace that I'm later passing as an executable to ctx.actions.run inside the rule. So far so good.
Now I want to debug this java_binary while Bazel is executing the rule. In order to attach a debugger, I need the java_binary to run in debug mode. So far, the only thing I came up with is setting jvm_flags on the java_binary, and I was able to get that to work. But I was wondering if there is a way to achieve it from the command line instead of baking it into the java_binary.
java_binary(
    ...
    jvm_flags = [
        "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000",
    ],
)
Is it possible to achieve this from the command line without hard coding jvm_flags?
Try:
bazel run //:my-target -- --debug
One strategy is to run the build with --subcommands, which will tell bazel to print out all the commands it's running during the build. Then find the command line corresponding to the invocation of the java_binary you're interested in. Then you can copy/paste that command (including the cd part) and modify it to include the debug flags, and debug it as you would any other process.
Note also that java_binary outputs a wrapper script that includes a --debug[=<port>] flag, so that should be all that needs to be added to the command line.
Note also that --subcommands will only print the commands that are actually executed during the build, so a fully cached / fully incremental build will print nothing. You may need to do a clean, or delete some of the outputs of the action you're interested in so that bazel runs that command.
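A rough sketch of that workflow (the target name here is illustrative, and the printed command will look different in your workspace):
bazel clean
bazel build //:my-target --subcommands
# find the printed command line that invokes your java_binary, cd to the directory it prints,
# add -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000 to its JVM arguments,
# and run it manually so you can attach a debugger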
It looks like you can pass the --jvm_flag option as part of the program options after the --.
BUILD:
java_binary(
    name = "extract",
    main_class = "com.pkg.Main",
    resources = glob(["src/main/resources/**/*"]),
)
CLI:
bazel run //:extract -- --jvm_flag="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=7942" -path %cd%\config.json
It seems that the --jvm_flag option needs to come immediately after the --, before the program options (-path in the example). This is with Bazel 3.7.0.

Ack/Ag does not return search result without *

I am trying to search for text in a directory, and it turned out that the following commands do not return any results
ack -i "0xabcdef" ./
ack -i "0xabcdef"
ack -i "0xabcdef" .
while the following command works
ack -i "0xabcdef" *
Can someone explain why that is the case? What is the significance of *? I also noticed that the directory has symbolic links.
You should not have to specify a directory to ack. By default it delves into the current directory.
I also noticed that the directory has symbolic links
Then an excellent thing to do would be to look at the manual (either man ack or ack --man) and search for "link". The first thing you'll find is this option:
--[no]follow
Follow or don't follow symlinks, other than whatever starting files
or directories were specified on the command line.
This is off by default.
This means if you want ack to follow symlinks, you need to specify the --follow option.
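So, assuming the files containing the text are only reachable through those symlinks, this should work:
ack -i --follow "0xabcdef"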

Why is my terminal output not identical when running a yarn script vs its bash equivalent?

**NOTE: I've added updates in order, just keep reading, thanks. :)**
I've been very curious about this -- please see this screenshot of me running:
ls -lah build, and
yarn run assets, which runs ls -lah build.
Let me start by saying that this is a WIP build in webpack, so no need to tell me that a 31M bundle is less than optimal. :)
But why do I get the colors and the more detailed font with the native command and not when yarn executes the command? It may be relevant that this screenshot is from:
- Windows 10
- Webstorm terminal
- logged in to a docker container running Ubuntu 14.4
Thanks! :)
**UPDATE: --color=always restores color**
As Charles Duffy suggested, adding --color=always in the yarn script preserved the formatting.
If anyone has some specialized knowledge to share about what's going on here, I'm in the market to hear it! Thanks!
Short(ish) answer: What's actually going on?
The below answer assumes the GNU implementation of ls.
There are a few possibilities at play:
Your interactive terminal's options may be modified by a shell alias. Output from type ls will indicate whether this is true.
You may have ls --color=auto enabled, either via an alias or via an equivalent environment variable; regardless, this checks whether it's writing directly to a TTY, and only enables color if so.
If output is not direct to a TTY (for instance, if output is being captured by yarn before it's printed), ls --color=auto will not colorize.
To fix this, you can explicitly pass ls --color=always, or its equivalent, simply ls --color. This covers both cases: If you had an alias in use passing --color=auto on your behalf, passing it explicitly means you no longer need the alias. By contrast, if yarn is capturing content rather than passing it straight to the TTY, then --color=always tells ls to ignore isatty() returning false and colorize anyhow.
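In package.json terms, that just means forcing color inside the script itself. A minimal sketch, assuming the assets script from the question simply wraps ls:
{
  "scripts": {
    "assets": "ls -lah --color=always build"
  }
}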
Background on what the above means:
A "TTY" is, effectively, a terminal. It provides bells and whistles (literally, for the bells) specialized for providing a device that a user is actually typing at. This means it has control sequences for inspecting and modifying cursor location, and -- pertinently for our purposes -- for changing the color with which output is rendered.
A "FIFO" is a pipe -- it moves characters from point A to point B, first-in, first-out. In the case of prog-one | prog-two, the thing that connects those two is a FIFO. It just moves characters, and has no concept of cursor location or colorization or anything else.
If ls tried to put color sequences in its output when that output is intended for any destination other than a terminal, those sequences wouldn't make any sense -- indeed, the very format in which colorization markers need to be printed is determined by the TERM variable specifying the currently active terminal type.
If you run ls --color, then, you're promising ls that its output really will be rendered by a terminal, or (at least) otherwise something that understands the color sequences appropriate to the currently configured TERM.
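You can see this decision in action directly in a shell (assuming GNU ls):
ls --color=auto | cat     # ls sees a pipe rather than a TTY, so no color codes are emitted
ls --color=always | cat   # color codes are emitted even though stdout is a pipe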

How can I get the output of the Cloud Foundry CLI vmc/cf with "grep"

I have used the following command
vmc info |grep target
I can get the target info exactly. But when I type:
vmc apps |grep running
There is no output.
If I try to redirect the stdout to file like:
vmc apps &> tmplog
I was confused to see that only the first column of the output (appname) was written into the file. Any suggestions?
It may be the case that you need to redirect both Unix output streams to see the complete log. There is STDOUT (1) and STDERR (2). You can redirect both streams to the same file with
vmc apps > tmplog 2>&1
Your last line above only redirected one output stream (STDOUT). The other stream may be written to the console instead.
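The same reasoning applies to the grep case: if the part of the output you are looking for goes to STDERR, merge the streams before the pipe (this assumes the "running" status text really is written to STDERR):
vmc apps 2>&1 | grep running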
Additionally, the vmc CLI is pretty much outdated. For the current Go implementation of the CF CLI (gcf/cf), I successfully tested that the following command works:
cf logs $YOUR_APP_NAME | grep RTR

How to make output of any shell command unbuffered?

Is there a way to run shell commands without output buffering?
For example, hexdump file | ./my_script will only pass input from hexdump to my_script in buffered chunks, not line by line.
Actually, I want to know a general solution: how can I make any command unbuffered?
Try stdbuf, included in GNU coreutils and thus virtually any Linux distro. This sets the buffer length for input, output and error to zero:
stdbuf -i0 -o0 -e0 command
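Applied to the pipeline from the question, that would look like the following (note that stdbuf only helps for programs that use the default C stdio buffering, which hexdump normally does):
stdbuf -oL hexdump file | ./my_script   # line-buffered output from hexdump
stdbuf -o0 hexdump file | ./my_script   # fully unbuffered output from hexdump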
The command unbuffer from the expect package disables the output buffering:
Ubuntu Manpage: unbuffer - unbuffer output
Example usage:
unbuffer hexdump file | ./my_script
AFAIK, you can't do it without ugly hacks. Writing to a pipe (or reading from it) automatically turns on full buffering and there is nothing you can do about it :-(. "Line buffering" (which is what you want) is only used when reading/writing a terminal. The ugly hacks do exactly this: they connect a program to a pseudo-terminal, so that the other tools in the pipe read/write from that terminal in line-buffered mode. The whole problem is described here:
http://www.pixelbeat.org/programming/stdio_buffering/
The page has also some suggestions (the aforementioned "ugly hacks") what to do, i.e. using unbuffer or pulling some tricks with LD_PRELOAD.
You could also use the script command to make the output of hexdump line-buffered (hexdump will be run in a pseudo-terminal, which tricks hexdump into thinking it is writing its stdout to a terminal, and not to a pipe).
# cf. http://unix.stackexchange.com/questions/25372/turn-off-buffering-in-pipe/
stty -echo -onlcr
script -q /dev/null hexdump file | ./my_script # FreeBSD, Mac OS X
script -q -c "hexdump file" /dev/null | ./my_script # Linux
stty echo onlcr
One can also use grep's or egrep's --line-buffered option to solve this; no other tools are needed.
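For example, if grep is already part of your pipeline, --line-buffered makes grep flush its output after every line (note that this only controls grep's own output buffering, not that of the program feeding it; the pattern '.' below is just an illustrative filter that matches every non-empty line):
hexdump file | grep --line-buffered '.' | ./my_script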
