How to display all bandwidth values in iperf - grep

I want to capture all bandwidth values in iperf, not only the Mbits values but bits and Kbits as well. Sample output:
[3] 0.0 - 1.0 sec 128 Kbytes 1.05 Mbits/sec
[3] 1.0 - 2.0 sec 0 Kbytes 0.00 bits/sec
[3] 2.0 - 3.0 sec 90 Kbytes 900.5 Kbits/sec
So far I know about this
iperf -c 10.0.0.1 -i 1 -t 100 | grep -Po '[0-9.]*(?= Mbits/sec)'
but that only captures the Mbits values. How can I capture bits/sec and Kbits/sec at the same time as Mbits/sec?
Thank you

I know this is old, but in case someone stumbles upon it, you could make the unit prefix optional in your grep:
grep -Po '[0-9.]+(?= [KM]?bits/sec)'
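To sanity-check the pattern against the sample lines from the question (printf stands in for live iperf output; `[KM]?` makes the unit prefix optional):

```shell
# The three interval lines from the question, piped through the pattern
printf '%s\n' \
  '[3] 0.0 - 1.0 sec 128 Kbytes 1.05 Mbits/sec' \
  '[3] 1.0 - 2.0 sec 0 Kbytes 0.00 bits/sec' \
  '[3] 2.0 - 3.0 sec 90 Kbytes 900.5 Kbits/sec' |
grep -Po '[0-9.]+(?= [KM]?bits/sec)'
# Prints:
# 1.05
# 0.00
# 900.5
```

The lookahead only asserts the unit without consuming it, and grep -o prints just the matched number, so the rates come out one per line regardless of unit.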

This should do it
iperf -c 10.0.0.1 -i 1 -t 100 | awk '{print $5}' FPAT='[.0-9]+'
FPAT='[.0-9]+' (a GNU awk feature) defines a field as one or more of the characters . and 0-9
{print $5} prints just the rate, the fifth numeric field on each interval line

You might want to run man iperf to see what's supported. Here's the relevant option from 2.0.10:
-f, --format
[abkmgKMG] format to report: adaptive, bits, Kbits, Mbits, KBytes, MBytes (see NOTES for more)


Extract specific number from command output

I have the following issue.
In a script, I have to execute the hdparm command on /dev/xvda1 path.
From the command output, I have to extract the MB/sec values calculated.
So, for example, if executing the command I have this output:
/dev/xvda1:
Timing cached reads: 15900 MB in 1.99 seconds = 7986.93 MB/sec
Timing buffered disk reads: 478 MB in 3.00 seconds = 159.09 MB/sec
I have to extract 7986.93 and 159.09.
I tried:
grep -o -E '[0-9]+', but it returns all six numbers in the output
grep -o -E '[0-9]', but it returns only the first character of the six values
grep -o -E '[0-9]+$', but the output is empty, I suppose because the number is not the last thing on the line.
How can I achieve my purpose?
To get the last number, you can add a .* in front, that will match as much as possible, eating away all the other numbers. However, to exclude that part from the output, you need GNU grep or pcregrep or sed.
grep -Po '.* \K[0-9.]+'
Or
sed -En 's/.* ([0-9.]+).*/\1/p'
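As a quick check, both commands produce the same result on the sample output from the question (printf stands in for running hdparm):

```shell
# hdparm timing lines from the question
sample='Timing cached reads: 15900 MB in 1.99 seconds = 7986.93 MB/sec
Timing buffered disk reads: 478 MB in 3.00 seconds = 159.09 MB/sec'

# GNU grep: \K discards everything matched so far from the output
printf '%s\n' "$sample" | grep -Po '.* \K[0-9.]+'
# 7986.93
# 159.09

# Portable sed alternative
printf '%s\n' "$sample" | sed -En 's/.* ([0-9.]+).*/\1/p'
# 7986.93
# 159.09
```

In both cases the greedy .* eats everything up to the last space that is followed by a number, which is exactly the MB/sec value.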
Consider using awk to just print the fields you want rather than matching on numbers. This will work using any awk in any shell on every Unix box:
$ hdparm whatever | awk 'NF>1{print $(NF-1)}'
7986.93
159.09

Conky cpubar filling in wrong way

I have a simple conky cpubar for monitoring CPU load working on Debian KDE 9, here is the relevant part:
${image ~/script/conky/static/img/cpu.png -p 0,280 -s 26x26}\
${goto 40}${font monospace:bold:size=15}${color1}CPU ${font monospace:bold:size=10}(TOT: ${cpu cpu0}%) ${color0}${hr 5}${color white}
${font monospace:bold:size=11}\
${execi 99999 neofetch | grep 'CPU' | cut -f 2 -d ":" | sed 's/^[ \t]*//;s/[ \t]*$//' | sed 's/[\x01-\x1F\x7F]//g' | sed 's/\[0m//g' | sed 's/\[.*\]//'}\
[${execi 5 sensors | grep 'temp1' | cut -c16-22}]
${cpugraph cpu0 40,340 52ff00 6edd21}
CPU 1${goto 70}${cpu cpu1}%${goto 100}${cpubar 8,width_cpu_bar cpu1}
CPU 2${goto 70}${cpu cpu2}%${goto 100}${cpubar 8,width_cpu_bar cpu2}
CPU 3${goto 70}${cpu cpu3}%${goto 100}${cpubar 8,width_cpu_bar cpu3}
CPU 4${goto 70}${cpu cpu4}%${goto 100}${cpubar 8,width_cpu_bar cpu4}
And this is the result (screenshot omitted). Another example (screenshot omitted):
As you can see, the result looks good, but the cpubars don't fill properly: all 4 bars show the same fill level. This is clearly visible in the last one, where CPU3 is at 100% load yet its bar is not completely full.
Where am I wrong?
The cpu number comes before the height,width part, i.e. use
${cpubar cpu1 8,width_cpu_bar}
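Applied to the config from the question, the four bar lines would become (width_cpu_bar left exactly as in the question):

```
CPU 1${goto 70}${cpu cpu1}%${goto 100}${cpubar cpu1 8,width_cpu_bar}
CPU 2${goto 70}${cpu cpu2}%${goto 100}${cpubar cpu2 8,width_cpu_bar}
CPU 3${goto 70}${cpu cpu3}%${goto 100}${cpubar cpu3 8,width_cpu_bar}
CPU 4${goto 70}${cpu cpu4}%${goto 100}${cpubar cpu4 8,width_cpu_bar}
```

With no cpu argument (or when it is misplaced, as here), cpubar defaults to the total CPU load, which is why all four bars showed the same fill.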

Generating data for MANET nodes location in a specific format using ns2 setdest

I am following this link and trying to implement the scenarios there.
So I need to generate a data for MANET nodes representing their location in this format:
Current time – latest x – latest y – latest update time – previous x – previous y – previous update time
with the use of setdest tool with these options:
1500 by 300 grid, ran for 300 seconds and used pause times of 20s and maximum velocities of 2.5 m/s.
So I came up with this command:
./setdest -v 2 -n 10 -s 2.5 -m 10 -M 50 -t 300 -p 20 -x 1500 -y 300 > test1.tcl
which worked and generated a tcl file, but I don't know how I can obtain the data in the required format.
setdest -v 2 -n 10 -s 2.5 -m 10 -M 50 -t 300 -p 20 -x 1500 -y 300 > test1.tcl
That is not a tcl file: it is a "scen" / scenario file with 1,700 "ns" commands. Your file was renamed to "test1.scen" and is now used in the manet examples, in the simulation example aodv-manet-20.tcl:
set val(cp) "test1.scen" ;#Connection Pattern
Please be aware that time settings are maximum time. "Long time settings" were useful ~20 years ago when computers were slow. (Though there are complex simulations lasting half an hour to one hour.)
Link, manet-examples-1.tar.gz https://drive.google.com/file/d/0B7S255p3kFXNR05CclpEdVdvQm8/view?usp=sharing
Edit: New example added, manet0-16-nam.tcl → https://drive.google.com/file/d/0B7S255p3kFXNR0ZuQ1l6YnlWRGc/view?usp=sharing

Grep from text file over the past hour

I have several commands similar to:
ping -i 60 8.8.8.8 | while read pong; do echo "$(date): $pong" >> /security/latencytracking/pingcapturetest2.txt; done
output:
Tue Feb 4 15:13:39 EST 2014: 64 bytes from 8.8.8.8: icmp_seq=0 ttl=50
time=88.844 ms
I then search the results using:
cat /security/latencytracking/pingcapturetest* | egrep 'time=........ ms|time=......... ms'
I am looking for latency anomalies over X ms.
Is there a way to search better than I am doing and search over the past 1,2,3, etc. hours as opposed to from the start of the file? This could get tedious over time.
You could add a Unix timestamp to your log and then search based on that:
ping -i 60 8.8.8.8 | while read pong; do
echo "$(date +"%s"): $pong" >> log.txt
done
Your log will have entries like:
1391548048: 64 bytes from 8.8.8.8: icmp_req=1 ttl=47 time=20.0 ms
Then search with a combination of date and awk:
Using GNU Date (Linux etc):
awk -F: "\$1 > $(date -d '1 hour ago' +'%s')" log.txt
or BSD Date (Mac OSX, BSD)
awk -F: "\$1 > $(date -j -v '-1H' +%s)" log.txt
The command uses date -d (or date -v for the same task on BSD/OSX) to translate an English time phrase into a Unix timestamp. awk then compares the logged timestamp (the first field before the :) with the generated timestamp and prints all log lines with a higher value, i.e. newer ones.
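A self-contained sketch of the approach, assuming GNU date (the two log entries are fabricated with offsets from the current time):

```shell
# Build a small log: one entry two hours old, one ten minutes old
now=$(date +%s)
printf '%s\n' \
  "$((now - 7200)): 64 bytes from 8.8.8.8: icmp_req=1 ttl=47 time=20.0 ms" \
  "$((now - 600)): 64 bytes from 8.8.8.8: icmp_req=2 ttl=47 time=95.3 ms" \
  > log.txt

# Keep only entries with a timestamp newer than one hour ago
awk -F: "\$1 > $(date -d '1 hour ago' +'%s')" log.txt
# Prints only the ten-minute-old line
```

Note the double quotes around the awk program: the shell substitutes the cutoff timestamp in before awk runs, so $1 must be escaped.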
If you are familiar with R:
1. I'd slurp the whole thing in with read.table(), drop the unnecessary columns
2. then do whatever calculations you like
If you have tens of millions of records, though, R might be a bit slow.
Plan B:
1. use cut to strip anything you don't need, then go to the plan above.
You can also do it with bash by comparing dates, as follows:
Crop the date field, then convert that date into the number of seconds since midnight of 1 Jan 1970:
date -d "Tue Feb 4 15:13:39 EST 2014" '+%s'
and compare that number against the number of seconds as of one hour ago:
reference=$(date --date='-1 hour' '+%s')
This way you get all records from the last hour. Then you can filter by the length of the delay.
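Putting those two steps together, assuming GNU date (the timestamp is the sample line from the question):

```shell
# Convert the logged date to seconds since the epoch
stamp=$(date -d 'Tue Feb 4 15:13:39 EST 2014' '+%s')

# Seconds since the epoch one hour ago
reference=$(date --date='-1 hour' '+%s')

# Keep the record only if it is newer than the reference
if [ "$stamp" -gt "$reference" ]; then
  echo 'within the last hour'
else
  echo 'older than one hour'
fi
# Prints: older than one hour (the 2014 entry is long past)
```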

efficient way to parse vmstat output

I'm trying to efficiently parse vmstat output, preferably in awk or sed, and it should also work on both Linux and HP-UX. For example, I would like to extract the CPU idle % ("92" in this case) from the following output:
$ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
11 0 385372 101696 61704 650716 0 1 5 9 6 12 5 2 92 0
Unfortunately the vmstat output can differ between Linux distributions and HP-UX; columns can vary in length and can appear in a different order.
I tried to write a nice awk one-liner, but eventually ended up with a Python (2) solution:
$ vmstat | python -c 'import sys; print dict(zip(*map(str.split, sys.stdin)[-2:])).get("id")'
92
Do you know better way to parse mentioned output, to get number values of desired column name?
using awk you can do:
vmstat | awk '(NR==2){for(i=1;i<=NF;i++)if($i=="id"){getline; print $i}}'
This should get the value of the "id" column on Linux as well as on HP-UX or any other standard Unix system.
Tested on Linux, HP-UX and Solaris.
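A variant of the same idea, checked here against the sample output from the question (a heredoc stands in for running vmstat): it locates the "id" column by name on the header line, then prints that field from the data line.

```shell
# NR==2 is the column-header line, NR==3 the data line
awk 'NR==2 { for (i = 1; i <= NF; i++) if ($i == "id") col = i }
     NR==3 { print $col }' <<'EOF'
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
11  0 385372 101696  61704 650716    0    1     5     9    6   12  5  2 92  0
EOF
# Prints: 92
```

Because the column is found by name rather than by position, this keeps working when a platform reorders or adds columns, as long as the header still says "id".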
$ vmstat | python -c 'import sys; print sys.stdin.readlines()[-1].split()[-2]'
95
