Conky cpubar filling the wrong way - lua

I have a simple conky cpubar for monitoring CPU load on Debian 9 (KDE); here is the relevant part:
${image ~/script/conky/static/img/cpu.png -p 0,280 -s 26x26}\
${goto 40}${font monospace:bold:size=15}${color1}CPU ${font monospace:bold:size=10}(TOT: ${cpu cpu0}%) ${color0}${hr 5}${color white}
${font monospace:bold:size=11}\
${execi 99999 neofetch | grep 'CPU' | cut -f 2 -d ":" | sed 's/^[ \t]*//;s/[ \t]*$//' | sed 's/[\x01-\x1F\x7F]//g' | sed 's/\[0m//g' | sed 's/\[.*\]//'}\
[${execi 5 sensors | grep 'temp1' | cut -c16-22}]
${cpugraph cpu0 40,340 52ff00 6edd21}
CPU 1${goto 70}${cpu cpu1}%${goto 100}${cpubar 8,width_cpu_bar cpu1}
CPU 2${goto 70}${cpu cpu2}%${goto 100}${cpubar 8,width_cpu_bar cpu2}
CPU 3${goto 70}${cpu cpu3}%${goto 100}${cpubar 8,width_cpu_bar cpu3}
CPU 4${goto 70}${cpu cpu4}%${goto 100}${cpubar 8,width_cpu_bar cpu4}
And this is the result:
Another example:
As you can see, the layout looks good, but the cpubar fills do not work properly: all four bars show the same fill level. This is clearest in the last example, where one core (CPU3) is at 100% load yet its bar is not completely full.
Where am I wrong?

The cpu number comes before the height,width part, i.e. use
${cpubar cpu1 8,width_cpu_bar}
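Applied to the four per-core lines from the config above (keeping the width_cpu_bar placeholder as-is), the corrected block would look something like this:
CPU 1${goto 70}${cpu cpu1}%${goto 100}${cpubar cpu1 8,width_cpu_bar}
CPU 2${goto 70}${cpu cpu2}%${goto 100}${cpubar cpu2 8,width_cpu_bar}
CPU 3${goto 70}${cpu cpu3}%${goto 100}${cpubar cpu3 8,width_cpu_bar}
CPU 4${goto 70}${cpu cpu4}%${goto 100}${cpubar cpu4 8,width_cpu_bar}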

Extract specific number from command output

I have the following issue.
In a script, I have to execute the hdparm command on the /dev/xvda1 device.
From the command output, I have to extract the calculated MB/sec values.
So, for example, if executing the command I have this output:
/dev/xvda1:
Timing cached reads: 15900 MB in 1.99 seconds = 7986.93 MB/sec
Timing buffered disk reads: 478 MB in 3.00 seconds = 159.09 MB/sec
I have to extract 7986.93 and 159.09.
I tried:
grep -o -E '[0-9]+', but it returns all six numbers in the output;
grep -o -E '[0-9]', but it returns only the first character of the six values;
grep -o -E '[0-9]+$', but the output is empty, I suppose because the number is not at the end of the line.
How can I achieve my purpose?
To get the last number on the line, you can add a .* in front; it will match as much as possible, eating away all the other numbers. However, to exclude that matched part from the output, you need GNU grep, pcregrep, or sed.
grep -Po '.* \K[0-9.]+'
Or
sed -En 's/.* ([0-9.]+).*/\1/p'
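The same \K trick also works with pcregrep, which likewise has an -o (only-matching) flag; a sketch:
pcregrep -o '.* \K[0-9.]+'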
Consider using awk to just print the fields you want rather than matching on numbers. This will work using any awk in any shell on every Unix box:
$ hdparm whatever | awk 'NF>1{print $(NF-1)}'
7986.93
159.09
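If the two values need to be used later in the script, a minimal sketch for capturing them into variables (assuming bash, and that hdparm -tT is the command producing the two Timing lines shown above):
# read the cached value from the first line and the buffered value from the second
{ read -r cached; read -r buffered; } < <(hdparm -tT /dev/xvda1 | awk 'NF>1{print $(NF-1)}')
echo "cached=$cached MB/sec, buffered=$buffered MB/sec"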

Grep finds the word and then gets stuck (bash)

I have a loop (while, extracting two variables) in which one command is not working. Even when I run the command directly in the console (substituting the variable by hand), it prints the result but then keeps running without making any progress.
The command's objective is to search a big .gct file, specifically its first three lines, for a string obtained from another file, and then print the match together with everything that precedes it on that line.
If someone knows why it gets stuck and how to fix it, or can suggest an alternative that works well in loops and does not demand more RAM, it would be appreciated.
head -3 file_2 | grep -E -o ".{0,1000}$variable."
A rough example of what the big file (file_2) looks like:
head -3 file_2
#1.2
57000   17300
Irrelevant   Irrelevant2   DATA-B12-18   DATA-Y17-72   DATA-A12-44   ....
When I run in the terminal: head -3 file_2 | grep -E -o ".{0,1000}DATA-B12-18"
the output is:
Irrelevant Irrelevant2 DATA-B12-18, and then it gets stuck.

Print the rest of the input along with the matching line

I am new to Linux and I am experimenting with basic terminal commands. I found out that I can list all users using compgen -u, but what if I only want to display the lines at the bottom of the output?
OK, let's say the output of compgen -u goes like this:
extra
extra
extra
extra
extra
extra
extra
extra
extra
John
William
Kate
Harold
I can only use grep to find a single string (e.g. compgen -u | grep John). But what if I want to use grep to display John as well as all the remaining entries after it?
A sed or awk solution would be easier, but if you can only use grep, then the --after-context (or -A) option might do:
grep -A 5 John file
The drawback is that you need to know the number of lines to display after the match (or use an arbitrarily big number to get the rest of the file).
compgen -u | grep -A$(compgen -u| wc -l) John
Explanation:
From man grep
-A NUM, --after-context=NUM
Print NUM lines of trailing context after matching lines. Places a line containing a group separator (described under --group-separator) between contiguous groups of matches.
grep -A -- print the given number of lines after the matching pattern
$( ) -- command substitution: execute a command and substitute its output
compgen -u | wc -l -- get the total number of lines the command outputs
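For comparison, the sed or awk approach mentioned at the start of this answer avoids having to count lines at all; a sketch of both (printing from the first John line to the end):
compgen -u | awk '/^John$/{found=1} found'
compgen -u | sed -n '/^John$/,$p'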
You can use the following one-liner:
n=$( compgen -u | grep -n John | head -1 | cut -d ":" -f 1 ) && compgen -u | tail -n +$n
This finds the line number of the first occurrence of John, then prints everything starting from that line.

A better way to get ramtotal in busybox /bin/sh?

So, something so simple: how much RAM is installed in the current machine? I run a PXE image built in Buildroot to grab system specifications from systems during a network boot. But one thing sticks out to me: how do you effectively and reliably count the RAM on every possible system?
I give you the worst code ever made; it's 6 years old and I am absolutely embarrassed by it.
ramtotal=0
ramsize=1
# Walk through power-of-two module sizes (1, 2, 4, ... up to 8192 MB),
# count how many DIMMs dmidecode reports at each size, and sum them up.
while test $ramsize -le 10000; do
  ramcount=`dmidecode --type memory | grep -v Enabled | grep -v Installed | grep -v Maximum | grep "Size:" | grep "MB" | grep -c " $ramsize "`
  ramup=$(( ramsize * ramcount ))
  ramtotal=$(( ramtotal + ramup ))
  ramsize=$(( ramsize * 2 ))
done
Well, may my code live long enough to be capable of counting RAM chips with a size of 2^10000. Future-proof, FTW. And that's the thing: the code literally just worked, so there was never any reason to make it disappear.
Today, I am trying new code which works fine on my Ubuntu server, but not with busybox.
# First attempt: relies on GNU grep's -P (Perl regex), which busybox grep does not support.
ramtotal=`dmidecode --type memory | grep -v Enabled | grep -v Installed | grep -v Maximum | grep "Size:" | grep "MB" | grep -o -P '(?<=\:\ ).*(?=\ MB)' | awk '{s+=$1} END {print s}'`
# Later update: newer dmidecode may report sizes in MB or GB, so normalize everything to MB in awk.
ramtotal=`dmidecode -t memory | grep "Size:" | awk '/Size: ([0-9]+) bytes|([kKMGTPEZ]B)/ {if($3 ~ /GB/) { size+=$2*1024 } else if($3 ~ /MB/) { size+=$2 } } END { print size }'`
So, it's been a long time since I originally posted, and just to be consistent I wanted to come back and update this, since a change in the source code of dmidecode essentially breaks what I had previously added: for some reason dmidecode decided that this field could be MB or GB (and perhaps something even bigger, though I didn't bother to research how forward-thinking they decided to be).
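To sanity-check that the MB/GB-aware awk from the updated command sums correctly, you can feed it a few hand-written Size: lines (the values below are hypothetical, purely for illustration):
printf 'Size: 8 GB\nSize: 2048 MB\nSize: No Module Installed\n' |
  awk '/Size: ([0-9]+) bytes|([kKMGTPEZ]B)/ {if($3 ~ /GB/) { size+=$2*1024 } else if($3 ~ /MB/) { size+=$2 } } END { print size }'
# expected output: 10240 (8 GB + 2048 MB, expressed in MB)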

Why is pcregrep faster than grep?

I have a large text file (a 3 GB Rails log file) on a CentOS box, and the file contains a corrupted byte. When I try to search for a pattern using grep, it runs indefinitely and I have to kill it; with pcregrep, however, it takes less than a minute. Any clue why the difference?
My search using grep:
grep -Pzo "2016-04-20(.*?)SomeController#index" production.log | wc -l
using pcregrep:
pcregrep -M "2016-04-20(.*?)SomeController#index" production.log | wc -l
