Delete a certain line while using iperf and grep

I run the iperf command like this:
iperf -c 10.0.0.1 -t 2 -f m -w 1K | grep -Po '[0-9.]*(?= Mbits/sec)'
I want to display only the throughput, such as 0.32, but because I use -w 1K here, a warning appears and the output becomes:
WARNING: TCP window size set to 1024 bytes. A small window size will give poor performance. See the Iperf documentation.
0.32
How can I suppress this warning so I get "0.32" only?

The warning message is printed to stderr, so send stderr to /dev/null and you get the measurement only. Your command would be:
iperf -c 10.0.0.1 -t 2 -f m -w 1K 2> /dev/null | grep -Po '[0-9.]*(?= Mbits/sec)'
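To double-check which stream the warning is on, you can invert the redirection: 2>&1 first duplicates stderr onto the pipe, then >/dev/null discards the original stdout, so only the warning survives. A quick sketch:
# Keep only stderr in the pipe; the throughput line disappears, the warning stays:
iperf -c 10.0.0.1 -t 2 -f m -w 1K 2>&1 >/dev/null | head -n 1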


tshark: capture specific bytes from a packet's data (content)

I have a captured trace (.pcap) file and I want to read the data field of each captured packet in this trace. I can do this using this command:
tshark -r aa.pcap -Tfields -Y "udp" -e data
3000ca02f89f0004000115af0000017900.......
This command reads the entire data field of each packet. My question is: how can I read specific bytes from the data (e.g. the 5th and 6th bytes only), so that the output is just
f89f
If you have cut available on your system, you could pipe the tshark output to it to isolate the characters you desire. For example:
tshark -r aa.pcap -Tfields -Y "udp" -e data | cut -c 9-12
You can even test this as follows:
echo 3000ca02f89f0004000115af0000017900 | cut -c 9-12
f89f
EDIT: I adjusted the offsets from 10-13 to 9-12, as those seem to be the correct ones. If you quote the string in the echo command, then you need 10-13, but those aren't the right offsets for the tshark output.
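The arithmetic behind those offsets: each byte is two hex characters, so byte N occupies characters 2N-1 through 2N, and bytes 5-6 therefore span characters 9-12. A small sketch to compute the cut range for an arbitrary byte range (B_FIRST and B_LAST are hypothetical placeholders):
# Compute cut(1) character offsets for a 1-indexed range of bytes:
B_FIRST=5; B_LAST=6
echo "cut -c $((2*B_FIRST-1))-$((2*B_LAST))"   # prints: cut -c 9-12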

Overall total from du command

I am trying to find all directories whose names start with the pattern int-*, along with their overall size.
For this I am using the commands below:
$ sudo ls -ld int-* | grep ^d | wc -l
3339
$ sudo ls -ld int-* | grep ^d | du -sh
204G .
Are my commands correct? Is there any other command combination to gather the above information?
Simply du -shc ./int-*/ should give the grand total of all directories matching the pattern int-*. Adding a trailing slash does the trick of restricting the glob to directories.
As for the options:
-s, report only the sum of the usage of each argument, not for each directory contained therein
-h, print the results in human-readable format
No, your commands are not okay (though the first is not outright wrong).
Both parse the output of ls, which is a dangerous thing to do: ls is supposed to produce human-readable output, and the format might change in the future (indeed it has changed several times over the years, and it differs across Unix flavors). So parsing the output of ls is generally considered bad practice. See http://mywiki.wooledge.org/ParsingLs for details.
The second command also pipes this output into du, but du doesn't read anything from stdin. It simply ignores this kind of input and does the same as it would have done without the pipe: du -sh. This, of course, is not what you intended.
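You can see this for yourself: since du never reads the pipe, both of the following print exactly the same thing, the size of the current directory:
echo 'anything' | du -sh
du -sh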
What you wanted can best be achieved in a proper fashion like this:
find . -maxdepth 1 -type d -name 'int-*' -printf 'x\n' | wc -l
find . -maxdepth 1 -type d -name 'int-*' -print0 | du --files0-from=- -c
The first command prints one character (x) per matching directory and counts the lines with wc -l. In the second, the option --files0-from=- makes du read NUL-separated file names from stdin, and -c makes it print a total over all arguments.
Of course you can still add the option -h for human-readable sizes (4G etc.) and -s if you do not want sizes for the subdirectories of your arguments.
If you want only the grand total, the best way is to take the last line by piping the output into tail -n 1.
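Putting those pieces together, a sketch that prints only the grand total (assuming GNU find and GNU du, as the options above already do):
find . -maxdepth 1 -type d -name 'int-*' -print0 \
    | du --files0-from=- -sch \
    | tail -n 1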

how to find MAX memory from docker stats?

With docker stats you can see the memory usage of a container over time.
Is there a way to find what the highest value of memory usage was while running docker stats?
If you need to find the peak usage, you are better off requesting the .MemPerc option and calculating based on the total memory (unless you restricted the memory available to the container). .MemUsage has units that change during the life of the container, which messes with the result.
docker stats --format 'CPU: {{.CPUPerc}}\tMEM: {{.MemPerc}}'
You can stream an ongoing log to a file (or script).
To get just the max memory as originally requested:
(timeout 120 docker stats --format '{{.MemPerc}}' <CONTAINER_ID> \
| sed 's/\x1b\[[0-9;]*[a-zA-Z]//g' ; echo) \
| tr -d '%' | sort -k1,1n | tail -n 1
And then you can ask the system for its total RAM (again assuming you didn't limit the RAM available to docker) and calculate:
awk '/MemTotal/ {print $2}' /proc/meminfo
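For example, a minimal sketch that turns a peak percentage into mebibytes; PEAK_PCT is a hypothetical placeholder for the value produced by the pipeline above:
PEAK_PCT=12.34   # hypothetical peak value from the pipeline above
TOTAL_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
awk -v pct="$PEAK_PCT" -v kb="$TOTAL_KB" 'BEGIN { printf "peak ~= %.0f MiB\n", pct / 100 * kb / 1024 }'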
You would need to know how long the container is going to run when using timeout as above, but if docker stats were instead started in the background by a script, the script could kill it once the container completed.
...
This command allows you to generate a time series of the CPU/memory load:
(timeout 20 docker stats --format \
'CPU: {{.CPUPerc}}\tMEM: {{.MemPerc}}' <CONTAINER_ID> \
| sed 's/\x1b\[[0-9;]*[a-zA-Z]//g' ; echo) \
| gzip -c > monitor.log.gz
Note that it pipes into gzip: in this form you get roughly two rows per second, so the file would grow rapidly without compression.
I'd advise this for benchmarking and troubleshooting rather than for use on production containers.
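To pull the peak memory percentage back out of the compressed log afterwards, a sketch in the same spirit as the max-extraction pipeline above:
zcat monitor.log.gz \
    | grep -o 'MEM: [0-9.]*' \
    | awk '{ if ($2 > max) max = $2 } END { print max "%" }'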
I took a sampling script from here and combined it with the data aggregation by @pl_rock. But be careful - the sort command only compares string values - so the results are often wrong (but were OK for me).
Also mind that docker sometimes reports wrong numbers (i.e. more allocated memory than physical RAM).
Here is the script:
#!/bin/bash
"$@" & # Run the given command line in the background.
pid=$!
echo "" > stats
while true; do
    sleep 1
    # ps serves only as a liveness probe: once the command exits, break.
    sample="$(ps -o rss= "$pid" 2> /dev/null)" || break
    docker stats --no-stream --format "{{.MemUsage}} {{.Name}} {{.Container}}" \
        | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0 }' >> stats
done
for containerid in $(awk '/.+/ { print $7 }' stats | sort | uniq)
do
    grep "$containerid" stats | sort -r -k3 | tail -n 1
    # maybe: | sort -r -k3 -h | head -n 1
    # see comment below (untested)
done
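A hypothetical usage sketch: save the script as monitor.sh (the name is arbitrary), make it executable, and hand it the command that starts your workload; it samples docker stats once per second until that command exits:
chmod +x monitor.sh
./monitor.sh docker run --rm my-image   # my-image is a placeholder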
In my case I wanted to monitor a docker container which runs the tests for my web application. The test suite is pretty big; it includes JavaScript tests in a real browser and consumes a significant amount of both memory and time.
Ideally, I wanted to watch the current memory usage in real time, but also to keep the history for later analysis.
I ended up using a modified and simplified version of Keiran's solution:
CONTAINER=$(docker ps -q -f name=CONTAINER_NAME)
FORMAT='{{.MemPerc}}\t{{.MemUsage}}\t{{.Name}}'
docker stats --format "$FORMAT" "$CONTAINER" | sed -u 's/\x1b\[[0-9;]*[a-zA-Z]//g' | tee stats
Notes:
CONTAINER=$(docker ps -q -f name=NAME) finds the container by name, but there are other options
FORMAT='{{.MemPerc}} ...}}' MemPerc goes first (for sorting); otherwise you can be creative
sed -u the -u flag is important; it turns off buffering
| sed -u 's/\x1b\[[0-9;]*[a-zA-Z]//g' removes ANSI escape sequences
| tee stats not only displays in real time, but also writes into the stats file
I Ctrl-C manually when it's ready – not ideal, but OK for me
after that it's easy to find the max with something like sort -n stats | tail
You can use this command:
docker stats --no-stream | awk '{ print $3 }' | sed '1d' | sort | tail -1
It will give the highest memory usage by container.
Let me explain the command:
--no-stream : disable streaming stats and only pull the first result
awk '{ print $3 }' : print the MEM USAGE column
sed '1d' : delete the first entry, which is the header field %
sort : sort the results
tail -1 : print the last entry, i.e. the highest.
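Since that sort compares strings, mixed units (MiB vs GiB) can order incorrectly, as noted in an earlier answer. If your sort supports -h, a sketch that avoids both the column guessing and the unit problem:
docker stats --no-stream --format '{{.MemUsage}}' \
    | awk '{ print $1 }' \
    | sort -h | tail -n 1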

Rotate tcpdump stdout file by time

I have a long running tcpdump that runs over ssh which outputs to a file.
ssh remotehost.example.com "tcpdump -i eth0 -w -" > capture-`date '+%Y%m%d-%H%M'`.pcap
How can I rotate that file by day or week? It is important to not duplicate or lose any content.
From the tcpdump man page:
-G rotate_seconds
       If specified, rotates the dump file specified with the -w option every
       rotate_seconds seconds. Savefiles will have the name specified by -w,
       which should include a time format as defined by strftime(3). If no
       time format is specified, each new file will overwrite the previous.
       If used in conjunction with the -C option, filenames will take the
       form of `file<count>'.
For example, if you want to rotate every 10 seconds:
tcpdump -i eth0 -G 10 -w capture-%Y%m%d-%H%M%S.pcap
In your case, you can rotate daily like so:
tcpdump -i eth0 -G 86400 -w capture-%Y%m%d-%H%M.pcap
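One caveat, since your capture runs over ssh: with -G the rotated files are written by tcpdump itself, i.e. on the remote host, not on the machine where you redirect stdout. A sketch of the adapted command (the remote path is a placeholder):
ssh remotehost.example.com "tcpdump -i eth0 -G 86400 -w '/tmp/capture-%Y%m%d-%H%M.pcap'"
You would then fetch the finished files with scp or rsync.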

How do I get the raw predictions (-r) from Vowpal Wabbit when running in daemon mode?

Using the commands below, I'm able to get both the raw predictions and the final predictions as files:
cat train.vw.txt | vw -c -k --passes 30 --ngram 5 -b 28 --l1 0.00000001 --l2 0.0000001 --loss_function=logistic -f model.vw --compressed --oaa 3
cat test.vw.txt | vw -t -i model.vw --link=logistic -r raw.txt -p predictions.txt
However, I'm unable to get the raw predictions when I run VW as a daemon:
vw -t -i model.vw --daemon --port 26542 --link=logistic
Do I have to pass a specific argument or parameter to get the raw predictions? I prefer the raw predictions, not the final predictions. Thanks
On systems supporting /dev/stdout (and /dev/stderr), you may try this:
vw -t -i model.vw --daemon --port 26542 --link=logistic -r /dev/stdout
The daemon will write the raw predictions to standard output, which in this case ends up in the same place as the regular predictions served on localhost port 26542.
The relative order of lines is guaranteed, because the code dealing with the different prints within each example (e.g. non-raw vs raw) is always serial.
Since November 2015, the easiest way to obtain probabilities is to use --oaa=N --loss_function=logistic --probabilities -p probs.txt. (Or if you need label-dependent features: --csoaa_ldf=mc --loss_function=logistic --probabilities -p probs.txt.)
--probabilities works with --daemon as well. There should be no more need for --raw_predictions.
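A minimal sketch adapting the commands from the question to this approach (simplified; same file names as in the question):
# Train a 3-class one-against-all model with logistic loss:
vw -c -k --passes 30 -b 28 --loss_function=logistic --oaa 3 -f model.vw -d train.vw.txt
# Batch mode: one probability per class, written to probs.txt:
vw -t -i model.vw --loss_function=logistic --probabilities -p probs.txt -d test.vw.txt
# Daemon mode: probabilities are returned over the socket:
vw -t -i model.vw --loss_function=logistic --probabilities --daemon --port 26542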
--raw_predictions is a kind of hack (its semantics depend on the reductions used) and it is not supported in --daemon mode. (Something like --output_probabilities would be useful, not difficult to implement, and would work in daemon mode, but so far no one has had time to implement it.)
As a workaround, you can run VW in a pipe, so it reads stdin and writes the probabilities to stdout:
cat test.data | vw -t -i model.vw --link=logistic -r /dev/stdout | script.sh
According to https://github.com/VowpalWabbit/vowpal_wabbit/issues/1118 you can try adding --scores option in command line:
vw --scores -t -i model.vw --daemon --port 26542
It helped me with my oaa model.
