I have a while loop (extracting two variables) in which I found that one command is not working. Even when I run the command directly in the console (substituting the variable myself), it prints the result but then keeps running without making any progress.
The command's objective is to search a big file.gct, specifically its first three lines, for an object obtained from another file, and then print the match together with everything before it on that line.
If someone knows why it gets stuck and how to fix it, or can suggest an alternative that works well in loops and does not demand more RAM, it would be appreciated.
head -3 file_2 | grep -E -o ".{0,1000}$variable."
A rough example of what the big file (file_2) looks like:
head -3 file_2
#1.2
57000   17300
Irrelevant  Irrelevant2  DATA-B12-18  DATA-Y17-72  DATA-A12-44  ....
When I run in the terminal: head -3 file_2 | grep -E -o ".{0,1000}DATA-B12-18"
the output is:
Irrelevant Irrelevant2 DATA-B12-18
and then it hangs.
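If the stall comes from applying the bounded-repetition regex .{0,1000} to a very long header line (the sample header has over seventeen thousand columns), one loop-friendly sketch is to avoid the regex entirely and let awk locate the match with index(), printing everything up to and including it. Note this keeps the whole prefix rather than only the preceding 1,000 characters, and it drops the single trailing character the original pattern grabbed; index() does a plain substring search, so metacharacters in $variable are not special:
head -3 file_2 | awk -v pat="$variable" '{ i = index($0, pat); if (i) print substr($0, 1, i + length(pat) - 1) }'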
I need grep to ignore a line if it starts with ; or # when looking for a specific string in a file. file.ini contains the line below:
output_partition_key=FILE_CREATED_DATE
Doing a grep as below returns the value FILE_CREATED_DATE:
grep -w "output_partition_key" file.ini | cut -d= -f2
But if the line starts with ; or #, then it should not match anything:
;output_partition_key=FILE_CREATED_DATE
I tried solutions from other posts but they are not working. Can anyone tell me how to achieve the expected result?
It seems like what you really want is to find lines that start with output_partition_key=. The simplest way to do that is:
grep ^output_partition_key= file.ini | cut -d= -f2
(where ^ means "the start of a line").
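As a quick check, with a file.ini that contains both the commented and the uncommented line (a hedged sketch of the expected behaviour), only the uncommented one survives, because ^ anchors the match to the start of the line:
$ cat file.ini
;output_partition_key=FILE_CREATED_DATE
output_partition_key=FILE_CREATED_DATE
$ grep ^output_partition_key= file.ini | cut -d= -f2
FILE_CREATED_DATE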
I have a simple conky cpubar for monitoring CPU load, working on Debian 9 with KDE; here is the relevant part:
${image ~/script/conky/static/img/cpu.png -p 0,280 -s 26x26}\
${goto 40}${font monospace:bold:size=15}${color1}CPU ${font monospace:bold:size=10}(TOT: ${cpu cpu0}%) ${color0}${hr 5}${color white}
${font monospace:bold:size=11}\
${execi 99999 neofetch | grep 'CPU' | cut -f 2 -d ":" | sed 's/^[ \t]*//;s/[ \t]*$//' | sed 's/[\x01-\x1F\x7F]//g' | sed 's/\[0m//g' | sed 's/\[.*\]//'}\
[${execi 5 sensors | grep 'temp1' | cut -c16-22}]
${cpugraph cpu0 40,340 52ff00 6edd21}
CPU 1${goto 70}${cpu cpu1}%${goto 100}${cpubar 8,width_cpu_bar cpu1}
CPU 2${goto 70}${cpu cpu2}%${goto 100}${cpubar 8,width_cpu_bar cpu2}
CPU 3${goto 70}${cpu cpu3}%${goto 100}${cpubar 8,width_cpu_bar cpu3}
CPU 4${goto 70}${cpu cpu4}%${goto 100}${cpubar 8,width_cpu_bar cpu4}
And this is the result, plus another example (screenshots omitted):
As you can see, the result looks good, but the cpubar fills don't work properly: all four bars show the same fill level. This is clearest in the last one, where I have a 100% core load (CPU3) and its bar is not completely full.
Where am I wrong?
The cpu number comes before the height,width part, i.e. use
${cpubar cpu1 8,width_cpu_bar}
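Applied to the snippet above, the four bar lines would become something like this (a sketch that keeps the original width_cpu_bar placeholder):
CPU 1${goto 70}${cpu cpu1}%${goto 100}${cpubar cpu1 8,width_cpu_bar}
CPU 2${goto 70}${cpu cpu2}%${goto 100}${cpubar cpu2 8,width_cpu_bar}
CPU 3${goto 70}${cpu cpu3}%${goto 100}${cpubar cpu3 8,width_cpu_bar}
CPU 4${goto 70}${cpu cpu4}%${goto 100}${cpubar cpu4 8,width_cpu_bar}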
So, something so simple: how much RAM is installed in the current machine? I run a PXE image built in Buildroot to grab system specifications from systems on a network boot, but one thing sticks out to me: how do you effectively and reliably count the RAM on every possible system?
I give you the worst code ever made; it's 6 years old and I am absolutely embarrassed by it.
ramtotal=0
ramsize=1
while test $ramsize -le 10000; do
    ramcount=`dmidecode --type memory | grep -v Enabled | grep -v Installed | grep -v Maximum | grep "Size:" | grep "MB" | grep -c " $ramsize "`
    ramup=$(( ramsize * ramcount ))
    ramtotal=$(( ramtotal + ramup ))
    ramsize=$(( ramsize * 2 ))
done
Well, may my code live long enough to be capable of counting RAM chips with a size of 2^10000. Future-proof, FTW. And that's the thing: the code literally just worked, so there was never any reason to make it disappear.
Today I am trying new code, which worked fine on my Ubuntu server but not with BusyBox.
ramtotal=`dmidecode --type memory | grep -v Enabled | grep -v Installed | grep -v Maximum | grep "Size:" | grep "MB" | grep -o -P '(?<=\:\ ).*(?=\ MB)' | awk '{s+=$1} END {print s}'`
Update: It's been a long time since I originally posted, and I wanted to come back and update this, seeing as a change in the source code of dmidecode essentially breaks what I had previously added. For some reason dmidecode decided that this field could be MB or GB (and perhaps something even bigger, though I didn't bother to research how forward-thinking they decided to be). This is what I use now:
ramtotal=`dmidecode -t memory | grep "Size:" | awk '/Size: ([0-9]+) bytes|([kKMGTPEZ]B)/ {if($3 ~ /GB/) { size+=$2*1024 } else if($3 ~ /MB/) { size+=$2 } } END { print size }'`
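As for BusyBox, the earlier one-liner presumably fails there because BusyBox's grep has no -P (Perl-regex) support. A minimal, hedged sketch that drops grep entirely and lets awk do the matching, assuming dmidecode reports module sizes as "Size: <n> MB" or "Size: <n> GB":
# Sum module sizes in MB; anchoring on "Size:" skips "Installed Size:",
# "Enabled Size:" and "Maximum Capacity:" lines, so no grep -v is needed.
ramtotal=`dmidecode -t memory | awk '/^[[:space:]]*Size: [0-9]+ MB/ { mb += $2 } /^[[:space:]]*Size: [0-9]+ GB/ { mb += $2 * 1024 } END { print mb + 0 }'`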
I am passing all my SVN commit log messages to a file and want to grep only the JIRA issue numbers from it.
Some lines might have more than one issue number, but I want to grab only the first occurrence.
The pattern is XXXX-999 (the number of alphabetic and numeric characters is not constant).
Also, I don't want the entire line to be displayed, just the JIRA number, without duplicates. I used the following command, but it didn't work.
Could someone help please?
cat /tmp/jira.txt | grep '^[A-Z]+[-]+[0-9]'
Log file sample
------------------------------------------------------------------------
r62086 | userx | 2015-05-12 11:12:52 -0600 (Tue, 12 May 2015) | 1 line
Changed paths:
M /projects/trunk/gradle.properties
ABC-1000 This is a sample commit message
------------------------------------------------------------------------
r62084 | usery | 2015-05-12 11:12:12 -0600 (Tue, 12 May 2015) | 1 line
Changed paths:
M /projects/training/package.jar
EFG-1001 Test commit
Output expected:
ABC-1000
EFG-1001
First of all, it seems like you have the second + in the wrong place; it should be at the end of the [0-9] expression.
Second, I think all you need to do is use the -o option to grep (to display only the matching portion of the line), then pipe the grep output through sort -u, like this:
cat /tmp/jira.txt | grep -oE '^[A-Z]+-[0-9]+' | sort -u
Although if it were me, I'd skip the cat step and just give the filename to grep, as so:
grep -oE '^[A-Z]+-[0-9]+' /tmp/jira.txt | sort -u
Six of one, half a dozen of the other, really.
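If an issue key can also appear after other text on a line and you still only want the first one per line, a hedged awk sketch (match() finds only the first match on each line; the same /tmp/jira.txt path is assumed):
awk 'match($0, /[A-Z]+-[0-9]+/) { print substr($0, RSTART, RLENGTH) }' /tmp/jira.txt | sort -u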
I'm getting frustrated enough that I figured it was time to ask a question.
I'm trying to replace an email address that is hard-coded into thousands of pages across a website. It's on a FreeBSD 6.3 server.
Here is the command I am using:
grep -R --files-with-matches 'Email\#domain.com' . | sort | uniq | xargs perl -pi -e 's/Email\#domain.com/Email\#newdomain.com/' *.html
And here is the error that I keep getting:
xargs: unterminated quote
Oddly enough, when I run that command on a test case of 3 files (in a nested structure) it works just fine. I've been googling and most solutions seem to deal with adding a -print0 after the . and a -0 after the xargs. However, this yields a different set of errors that lead me to believe I'm putting things in the wrong places.
Thanks in advance for your help.
Pax is correct. I would further correct it to something like:
grep -R --files-with-matches 'Email\#domain.com' . -print0 | xargs -0 perl -pi -e 's/Email\#domain.com/Email\#newdomain.com/'
EDIT:
Thanks to kcwu, this is the full FreeBSD version:
grep -R --files-with-matches 'Email\#domain.com' . --null | xargs -0 perl -pi -e 's/Email\#domain.com/Email\#newdomain.com/'
Note that I've removed sort and uniq. --files-with-matches is documented to make grep stop searching a file once a match is found, so you will not get duplicate files. grep's --null and xargs' -0 ensure (and handle) a NUL-terminated file list, which is vital because POSIX allows filenames to contain newlines.
Note that I don't know perl, but I'm assuming that part's roughly equivalent to:
sed -i 's/Email\#domain.com/Email\#newdomain.com/g'
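For comparison, the -print0 that other posts mention belongs to find, not grep; a hedged sketch of that route would look like the line below. Note that it rewrites every .html file under the current directory, matched or not, rather than only the files grep found:
find . -name '*.html' -print0 | xargs -0 perl -pi -e 's/Email\#domain.com/Email\#newdomain.com/'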
Why are you giving a list of HTML files to xargs? That program takes its file list from the pipeline (output of grep).
Use GNU Parallel:
grep -R --files-with-matches 'Email\#domain.com' . | sort | uniq | parallel -q perl -pi -e 's/Email\#domain.com/Email\#newdomain.com/g'
Watch the intro video to learn more: http://www.youtube.com/watch?v=OpaiGYxkSuQ