Perf doesn't show build-id of the application itself

When I run the following perf command to collect user-space data:
$ perf record -e cycles:u -j any,u -a -o perf.data.user ./test
the output of the build-id check is:
$ perf buildid-list -f -i perf.data.user
dbd41c586ea6789f3b998ed28be6ff37134e917a /lib/modules/4.19.125/build/vmlinux
b5381a457906d279073822a5ceb24c4bfef94ddb /lib/x86_64-linux-gnu/libc-2.23.so
ce17e023542265fc11d9bc8f534bb4f070493d30 /lib/x86_64-linux-gnu/libpthread-2.23.so
5d7b6259552275a3c17bd4c3fd05f5a6bf40caa5 /lib/x86_64-linux-gnu/ld-2.23.so
55a35b6df1526bf3d69586896785bf1df0bb4be6 [vdso]
59081d88e819c2fd3bcd8e58bc0c858c0ee2c3a9 /home/mahmood/bin/perf
8edd43fbf5a6d895172d205a7248a814e3b07bb2 /home/mahmood/kernel-4.19.125/test/test
2c0a469e1700fdd0b2f670737dabafeb9c38f909 /opt/glibc-2.23-install/libc.so
As you can see, /home/mahmood/kernel-4.19.125/test/test, which is the application binary, has a build-id. That is fine. However, when I run
$ perf record -e cycles:k -j any,k -a -o perf.data.kernel ./test
I don't see the application's build-id in the corresponding output:
$ perf buildid-list -f -i perf.data.kernel
dbd41c586ea6789f3b998ed28be6ff37134e917a /lib/modules/4.19.125/build/vmlinux
49b4a1a69bb9aebaca5147b9821e1a3a2ca759f3 /lib/modules/4.19.125/kernel/net/ipv4/netfilter/iptable_filter.ko
bb4e88298fe274b1bec7ba8ab24e6a9670b93d04 /lib/modules/4.19.125/kernel/net/ipv4/netfilter/nf_nat_ipv4.ko
ee37b9e0cc9b7be3ca543ecfeaa6bde28b77df7d /lib/modules/4.19.125/kernel/net/netfilter/nf_nat.ko
2bc71fd8d0c750aa3427a31639ce75a16a3c288c /lib/modules/4.19.125/kernel/net/netfilter/nf_conntrack.ko
e5dfa4829fe8f9ed3185b708225a5bab8d6d0afe /lib/modules/4.19.125/kernel/net/ipv4/netfilter/nf_defrag_ipv4.ko
5d52d35a5b99dd81fed002ba571a7afe32b26cbd /lib/modules/4.19.125/kernel/net/ipv4/netfilter/ip_tables.ko
d19830cb5c697cb2583d327c28aa3961c945005d /lib/modules/4.19.125/kernel/drivers/gpu/drm/nouveau/nouveau.ko
b816c95c09032342acd644128cf4d21251b3578a /lib/modules/4.19.125/kernel/drivers/net/ethernet/intel/igb/igb.ko
da3d32f0230efe8329fae49f9de60ddaeddf48a9 /lib/modules/4.19.125/kernel/drivers/ata/libahci.ko
b5381a457906d279073822a5ceb24c4bfef94ddb /lib/x86_64-linux-gnu/libc-2.23.so
55a35b6df1526bf3d69586896785bf1df0bb4be6 [vdso]
Is there a reason for that? I ask because my analyzer checks the build-ids and reports an error about the missing 8edd43fbf5a6d895172d205a7248a814e3b07bb2, which is my application itself. The analyzer has no problem with the former scenario.
UPDATE:
I think that since I am profiling kernel activity, there is no sign of the program itself in the recorded data. However, if I run perf record -e cycles -j any -a -o perf.data.kernel ./test, then I also see the program's build-id in addition to the kernel files.
dbd41c586ea6789f3b998ed28be6ff37134e917a /lib/modules/4.19.125/build/vmlinux
49b4a1a69bb9aebaca5147b9821e1a3a2ca759f3 /lib/modules/4.19.125/kernel/net/ipv4/netfilter/iptable_filter.ko
bb4e88298fe274b1bec7ba8ab24e6a9670b93d04 /lib/modules/4.19.125/kernel/net/ipv4/netfilter/nf_nat_ipv4.ko
ee37b9e0cc9b7be3ca543ecfeaa6bde28b77df7d /lib/modules/4.19.125/kernel/net/netfilter/nf_nat.ko
2bc71fd8d0c750aa3427a31639ce75a16a3c288c /lib/modules/4.19.125/kernel/net/netfilter/nf_conntrack.ko
e5dfa4829fe8f9ed3185b708225a5bab8d6d0afe /lib/modules/4.19.125/kernel/net/ipv4/netfilter/nf_defrag_ipv4.ko
5d52d35a5b99dd81fed002ba571a7afe32b26cbd /lib/modules/4.19.125/kernel/net/ipv4/netfilter/ip_tables.ko
d19830cb5c697cb2583d327c28aa3961c945005d /lib/modules/4.19.125/kernel/drivers/gpu/drm/nouveau/nouveau.ko
b816c95c09032342acd644128cf4d21251b3578a /lib/modules/4.19.125/kernel/drivers/net/ethernet/intel/igb/igb.ko
55a35b6df1526bf3d69586896785bf1df0bb4be6 [vdso]
59081d88e819c2fd3bcd8e58bc0c858c0ee2c3a9 /home/mahmood/bin/perf
8edd43fbf5a6d895172d205a7248a814e3b07bb2 /home/mahmood/kernel-4.19.125/test/test
b592b0baf11cf7172f25d71f5f69de2d762897cb /opt/glibc-2.23-install/lib/ld-2.23.so
2c0a469e1700fdd0b2f670737dabafeb9c38f909 /opt/glibc-2.23-install/libc.so
I am not sure whether this is right, though. Any comment is welcome.
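As a side note, the binary's build-id can also be checked independently of perf, which may help when an analyzer complains about a missing id. This is only a sketch, assuming the binary was linked with a build-id note (the default for most current toolchains):
$ readelf -n /home/mahmood/kernel-4.19.125/test/test | grep -i 'build id'
$ file /home/mahmood/kernel-4.19.125/test/test
If the id printed here matches the one in perf.data.user, the binary itself is fine and the difference really is just which DSOs were hit by the sampled events.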

Related

Exporting encrypted SNMPv3 traps to JSON with TShark

I have a pcap file with recordings of encrypted SNMPv3 traps from Wireshark (Version 3.2.2). For analyzing the traps, I want to export the protocol data to JSON using tshark.
tshark.exe -T ek -Y "snmp" -P -V -x -r input.pcap > output.json
Currently, I supply the information needed to decrypt the packets via the "snmp_users" file in C:\Users\developer\AppData\Roaming\Wireshark.
# This file is automatically generated, DO NOT MODIFY.
,"snmp_user","SHA1","xxxxxx","AES","yyyyyyy"
Is it possible to supply the options via commandline?
I have tried:
tshark.exe -T ek -Y "snmp" -P -V -x -o "snmp.users_table.username:snmp_user" ...
But that causes an error:
tshark: -o flag "snmp.users_table.username:snmp_user" specifies unknown preference
Update 16.09.2020:
Option -Y was used instead of -J:
-Y|--display-filter
Cause the specified filter (which uses the syntax of read/display
filters, rather than that of capture filters) to be applied before
printing a decoded form of packets or writing packets to a file.
You need to specify the option as a User Access Table or uat, with the specific table being the name of the file, namely snmp_users. So, for example:
On Windows:
tshark.exe -o "uat:snmp_users:\"\",\"snmp_user\",\"SHA1\",\"xxxxxx\",\"AES\",\"yyyyyyy\"" -T ek -J "snmp" -P -V -x -r input.pcap > output.json
And on *nix:
tshark -o 'uat:snmp_users:"","snmp_user","SHA1","xxxxxx","AES","yyyyyyy"' -T ek -J "snmp" -P -V -x -r input.pcap > output.json
Unfortunately, the Wireshark documentation currently lacks a good description of the uat option. There is a Google Summer of Code project underway, though, in which Wireshark is participating, so perhaps the documentation will be improved here.
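For reference, a sketch of the same uat override combined with the -Y display filter from the question instead of -J (the credentials are the same placeholders as above):
tshark -o 'uat:snmp_users:"","snmp_user","SHA1","xxxxxx","AES","yyyyyyy"' -T ek -Y "snmp" -P -V -x -r input.pcap > output.json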

How can I clean up multi-line Bunyan log output that is coming over an ssh connection?

ssh -t -i ~/work/keys/somekey.pem ec2-user@x.x.x.x 'docker logs --follow --tail 50 -t taggoeshere' | cut -c 32- | bunyan -o short
This works well for single line docker log output:
12:54:49.038 INFO xxxxxxxxxxx: Failed message, sending to S3 (event_id=no-event-id)
But as soon as it is multiline, it gets ugly. Anyone got a handy trick for cleaning this up?
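One possible approach, as a sketch rather than a definitive fix: strip the timestamp prefix only where it is actually present instead of blindly cutting the first 31 columns, so continuation lines without a timestamp pass through unchanged. The regex assumes docker's RFC3339 timestamps produced by -t:
ssh -t -i ~/work/keys/somekey.pem ec2-user@x.x.x.x \
  'docker logs --follow --tail 50 -t taggoeshere' \
  | sed -E 's/^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9:.]+Z //' \
  | bunyan -o short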

How to colorize logs for docker container

I have a container that sometimes writes a keyword to its logs which is important to me, and I want to highlight that word in color in my terminal, while still seeing all of the log content in real time (--follow). I tried the command
docker logs -f my_app --tail=100 | grep --color -E '^myWord'
but it is not working.
Is there some way to do this?
I use ccze. As @aimless said, grc is also a great utility. ccze is easy to install with sudo apt install ccze on Debian/Ubuntu-like OSes.
But if you want to colorize stderr, you need to redirect stderr output to stdout. For example:
docker logs -f my-app 2>&1 | ccze -m ansi
The -m ansi argument helps if you want to scroll the output normally.
UPD:
ccze can be very slow. If you encounter this, try running ccze with the nolookups option: ccze -o nolookups.
originally answered - https://unix.stackexchange.com/a/461390/83391
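Putting those pieces together, a sketch of the full pipeline with stderr redirected and lookups disabled (container name as in the example above):
docker logs -f my-app 2>&1 | ccze -m ansi -o nolookups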
Try this.
docker logs -f my_app --tail=100 | grep --color=always -E '^myWord'
Note the "--color=always" argument.
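Note also that grep only prints matching lines, so a pattern like ^myWord hides everything else. If the goal is to highlight the word while still seeing all log content, a common trick (assuming GNU grep) is to let the pattern also match the empty string at the end of every line, so every line is printed and only the keyword is colored:
docker logs -f my_app --tail=100 | grep --color=always -E 'myWord|$'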
Another option would be to use something like https://github.com/jlinoff/colorize. I wrote it to specifically address situations like this. For example it has the ability to specify different colors for each pattern (see the help for details).
Here is an example of how to use it for your case.
$ curl -L https://github.com/jlinoff/colorize/releases/download/v0.8.1/colorize-linux-amd64 --out colorize
$ chmod a+x colorize
$ ./colorize -h
$ docker logs -f my_app --tail=100 | ./colorize '^myWord'
$ # really make it stand out.
$ docker logs -f my_app --tail=100 | ./colorize -c red+greenB+bold '^myWord'
Try grc. Follow the installation instructions and just pipe the log output:
docker logs -f my_app | grc

Restoring failed Informix mirror chunk

What should be done if an Informix mirror chunk failed and needs to be replaced with new chunk?
One command that you could use is onspaces. When run by a DBSA with no arguments, its help output includes this information:
onspaces -m <spacename> { -p <path> -o <offset> -m <path> <offset> [-y] |
-f <filename> }
onspaces -r <spacename> [-y]
onspaces -s <spacename> -p <path> -o <offset> {-O | -D} [-y]
-m — Add mirroring to an existing DBspace, PLOGspace, BLOBspace or
SBLOBspace
-r — Turn mirroring off for a DBspace, PLOGspace, BLOBspace or SBLOBspace
-s — Change the status of a chunk
Managing mirroring for a complete dbspace
Clearly, you could turn mirroring off for the dbspace containing the down chunk (with -r) and then turn it back on with -m. You should investigate the Administrator's Reference — especially the section on ON-Spaces — and maybe the Administrator's Guide too.
The ON-Spaces section on mirroring contains the note:
The mirrored chunks should be on a different disk. You must mirror all the chunks at the same time.
and the syntax diagram allows multiple occurrences of the -p <path> -o <offset> -m <path> <offset> part of the synopsis. The -p and -o portions identify the existing chunk, and the -m portion identifies the new mirror chunk.
The -f option allows you to put the per-chunk information into a text file.
This technique has the not necessarily desirable side-effect of dropping all mirroring on the affected dbspace temporarily, and then reinstating it.
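For illustration only, a sketch of what re-mirroring a dbspace with two chunks could look like under the synopsis above (the space name, device paths and offsets are placeholders):
onspaces -m db_acct -p /dev/chunk1 -o 0 -m /dev/mirror_chk1 0 \
         -p /dev/chunk2 -o 0 -m /dev/mirror_chk2 0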
Managing single mirror chunks
Further research reveals a way to recover a single chunk at a time. The Administrator's Guide has a section on:
Fault tolerance
with sub-sections:
Mirroring
Using mirroring
These cover the theory and practice of mirroring. In particular, you seem to need:
Take down a mirror chunk using onspaces
onspaces -s db_acct -p /dev/mirror_chk1 -o 0 -D
Recover a mirror chunk using onspaces
onspaces -s db_acct -p /dev/mirror_chk1 -o 0 -O
This allows you to specify the chunk that is down — if the system has not already marked it down. And you can then bring it back online (into recovery mode) when you've replaced the physical media. As noted in a comment, this is much easier when you use symlinks to name the device (file) that holds the data (and if you don't use non-zero offsets; for the most part, they're a relic from the days when big disk drives were 100 MiB or less).
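Putting those two commands into context, a hedged sketch of the single-chunk recovery flow (device names as in the examples above; /dev/new_disk_partition is a placeholder, and onstat -d is only used to check chunk status afterwards):
# mark the mirror chunk down, if the server has not already done so
onspaces -s db_acct -p /dev/mirror_chk1 -o 0 -D
# replace the failed media, then repoint the symlink at the new device
ln -sf /dev/new_disk_partition /dev/mirror_chk1
# bring the chunk back online; the server puts it into recovery
onspaces -s db_acct -p /dev/mirror_chk1 -o 0 -O
# verify the chunk status
onstat -d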

How do I get the raw predictions (-r) from Vowpal Wabbit when running in daemon mode?

Using the below, I'm able to get both the raw predictions and the final predictions as a file:
cat train.vw.txt | vw -c -k --passes 30 --ngram 5 -b 28 --l1 0.00000001 --l2 0.0000001 --loss_function=logistic -f model.vw --compressed --oaa 3
cat test.vw.txt | vw -t -i model.vw --link=logistic -r raw.txt -p predictions.txt
However, I'm unable to get the raw predictions when I run VW as a daemon:
vw -t -i model.vw --daemon --port 26542 --link=logistic
Do I have to pass in a specific argument or parameter to get the raw predictions? I prefer the raw predictions, not the final predictions. Thanks
On systems supporting /dev/stdout (and /dev/stderr), you may try this:
vw -t -i model.vw --daemon --port 26542 --link=logistic -r /dev/stdout
The daemon will write raw predictions to standard output, which in this case ends up in the same place as localhost port 26542.
The relative order of lines is guaranteed because the code dealing with different prints within each example (e.g non-raw vs raw) is always serial.
Since November 2015, the easiest way to obtain probabilities is to use --oaa=N --loss_function=logistic --probabilities -p probs.txt. (Or if you need label-dependent features: --csoaa_ldf=mc --loss_function=logistic --probabilities -p probs.txt.)
--probabilities works with --daemon as well. There should be no more need for using --raw_predictions.
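A minimal sketch of such a daemon invocation (port as in the question; it assumes the model was trained with --oaa and logistic loss as above, and that the prediction is read back over the TCP connection, here with nc; the feature names are placeholders):
vw -t -i model.vw --daemon --port 26542 --loss_function=logistic --probabilities
echo " | feature_a feature_b" | nc localhost 26542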
--raw_predictions is a kind of hack (the semantics depend on the reductions used) and it is not supported in --daemon mode. (Something like --output_probabilities would be useful and not difficult to implement, and it would work in daemon mode, but so far no one has had time to implement it.)
As a workaround, you can run VW in a pipe, so it reads stdin and writes the probabilities to stdout:
cat test.data | vw -t -i model.vw --link=logistic -r /dev/stdout | script.sh
According to https://github.com/VowpalWabbit/vowpal_wabbit/issues/1118 you can try adding the --scores option on the command line:
vw --scores -t -i model.vw --daemon --port 26542
It helped me with my oaa model.
