Grep and Cut Command and divide string

I have a grep command that gives the following string:
20121121001100 18 0 16 2 18
but I would like to modify this string to get
20121121 001 18 0 16 2 18
The above values are being extracted by the following:
for i in `ls -1 file.txt | sort`; do echo $i`
grep datetime $i | wc -l ``
grep abc $i | wc -l ``
grep def $i | wc -l ``
grep ghi $i | wc -l ``
grep jkl $i | wc -l ` ; done | cut -c9-500
cut -c9-500 is used because the original string is in the form of
datetime20121121001100 18 0 16 2 18
and cut -c9-500 returns
20121121001100 18 0 16 2 18
Can someone please help me to get
20121121 001 18 0 16 2 18
(i.e. remove the last 3 digits of the date/time field and separate the date from the time)

Most of what you want to do can be accomplished with awk, but for the minimal change to your pipeline:
for i in `ls -1 file.txt | sort`; do echo $i`
grep datetime $i | wc -l ``
grep abc $i | wc -l ``
grep def $i | wc -l ``
grep ghi $i | wc -l ``
grep jkl $i | wc -l ` ; done | cut -c9-500 | awk '{print substr($0,1,11) substr($0,15) }'
awk is very capable at text processing.
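For example, fed the sample string from the question, the awk part on its own should behave like this:
echo "20121121001100 18 0 16 2 18" | awk '{print substr($0,1,11) substr($0,15) }'
prints
20121121001 18 0 16 2 18
i.e. the last three digits are gone, but there is no space after the date yet; the "Changed specs" version further down adds it.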
Edit: I'm not sure exactly what you are doing, but basically this does (almost) the same:
awk 'FILENAME != oldfilename {oldfilename = FILENAME; dt = 0 ; a = 0; d = 0; g = 0; j = 0}
/datetime/ {dt++}
/abc/ {a++}
/def/ {d++}
/ghi/ {g++}
/jkl/ {j++}
END {print FILENAME, dt, a, d, g, j}' *
And it's faster, uses fewer processes, etc. Basically, awk processes the file, counts the occurrences of the specified strings, and when it finishes the file (after the last line) prints the report.
Changed specs:
for i in `ls -1 file.txt | sort`; do echo $i`
grep datetime $i | wc -l ``
grep abc $i | wc -l ``
grep def $i | wc -l ``
grep ghi $i | wc -l ``
grep jkl $i | wc -l ` ; done | cut -c9-500 | awk '{print substr($0,1,8) " " substr($0,9,3) substr($0,15) }'
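A quick sanity check with a single line in the original datetime... format (the same tail of the pipeline, fed by echo instead of the loop):
echo "datetime20121121001100 18 0 16 2 18" | cut -c9-500 | awk '{print substr($0,1,8) " " substr($0,9,3) substr($0,15) }'
should print
20121121 001 18 0 16 2 18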

Pipe to sed:
echo "20121121001100 18 0 16 2 18" | sed -r 's/^([0-9]+)[0-9][0-9][0-9] (.*)$/\1 \2/'
gives
20121121001 18 0 16 2 18
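If you want exactly the output asked for in the question (date, space, first three digits of the time), a variant with explicit group lengths should work:
echo "20121121001100 18 0 16 2 18" | sed -r 's/^([0-9]{8})([0-9]{3})[0-9]{3} (.*)$/\1 \2 \3/'
gives
20121121 001 18 0 16 2 18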

Related

How to do operations with form values?

So I have a form with 3 text fields. In two of them, the user enters a code (an OID), which I use to run a command (snmpbulkwalk). I want to store the outputs of those commands in variables, let's say var1 and var2.
But then, in the last text field, I want to do an operation with these variables, like var1 - var2.
So far I tried this:
#var1 = "nice -n 19 snmpbulkwalk -v 2c -c %snmp_community
%sensor_ip "+ "#{monitor_category_params[:oid].strip}"+" | awk
'/kB$/{ $(NF-1)= $(NF-1)*1024;} {print $0}' | sed 's/ [A-Za-
z]*$//' | awk '{print $NF}' | grep -o '[[:digit:]]*' | paste -s
-d';' -"
puts("var1 = " + "#{#var1}");
#var2 = "nice -n 19 snmpbulkwalk -v 2c -c %snmp_community
%sensor_ip "+ "#{monitor_category_params[:oid2]}"+" | awk
'/kB$/{ $(NF-1)= $(NF-1)*1024;} {print $0}' | sed 's/ [A-Za-
z]*$//' | awk '{print $NF}' | grep -o '[[:digit:]]*' | paste -s
-d';' -"
puts("var2 = " + "#{#var2}");
#var3 = "#{monitor_category_params[:snmp_oper].strip}"
puts("var3 with {} = " + "#{#var3}");
system = '$(#{#var3})'
puts(system(system));
The thing is I don't know how to store the output of the shell command ("nice -n 19 ...") in the variable. I used exec and backticks like this:
exec "nice -n 19 snmpbulkwalk -v 2c -c %snmp_community %sensor_ip "+ "#{monitor_category_params[:oid].strip}"+" | awk '/kB$/{ $(NF-1)= $(NF-1)*1024;} {print $0}' | sed 's/ [A-Za-z]*$//' | awk '{print $NF}' | grep -o '[[:digit:]]*' | paste -s -d';' -"
but it gives me this error, which I don't get, since the command is well formatted:
sh: -c: line 0: unexpected EOF while looking for matching `)'
sh: -c: line 1: syntax error: unexpected end of file
I don't know if I'm losing arguments (%snmp_community or %sensor_ip), but the final result should be something like system = number - number. Instead I only get system = var1 - var2, which does nothing, since I want the outputs of the commands, not the names of the variables.
Sorry if I didn't explain myself well; thank you in advance.
So, assuming this is related to Ruby or Ruby on Rails:
@var1 and @var2 are instance variables of a method in some class.
You can use backticks directly to save the output from the shell command:
@var1 = `ls`
p @var1
However, some shell commands do not return a string value directly.
I have not used snmpbulkwalk until now, but $PATH might be similar in a way:
@var1 = `echo $PATH` # add your own shell code here to check
p @var1

dynamic exclusion of files through grep matching

I have a script source-push.sh which returns the list of files that I want to exclude from the results of a find command.
It looks like this:
#!/usr/bin/env bash
find . -not \( -path './node_modules' -prune \) -name '*.js' | grep -vE $(echo $(./source-push.sh | xargs -I{} echo -n "{}|") | rev | cut -b2- | rev) | xargs -L1 standard --fix
find . -not \( -path './node_modules' -prune \) -name '*.css' | grep -vE $(echo $(./source-push.sh | xargs -I{} echo -n "{}|") | rev | cut -b2- | rev) | xargs -L1 stylelint --config stylelint.json
There is supposed to be a better way to do this job. Any suggestions?
Instead of:
... | grep -vE $(echo $(./source-push.sh | xargs -I{} echo -n "{}|") | rev | cut -b2- | rev ) | ...
you can use the POSIX options -F and -f:
... | grep -v -F -f <( ./source-push.sh ) | ...
-F tells grep that the patterns are fixed strings
(avoiding the problem that your original code would break if the patterns contain characters that are special to grep -E)
-f file tells grep to use a list of patterns from file
<( ... ) is a bash way to present output of a program as a file (named pipe)
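Applied to the first pipeline from the question, that would look roughly like this (untested sketch; it assumes ./source-push.sh prints one pattern per line):
find . -not \( -path './node_modules' -prune \) -name '*.js' | grep -v -F -f <( ./source-push.sh ) | xargs -L1 standard --fix
Note that -F still matches each pattern as a substring anywhere in the line; if an excluded name could be a substring of another path, consider adding -x (whole-line match) or anchoring the paths.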

How to get packets statistics by port

Is there a better way to get port statistics, such as stats by source port or destination port?
For example, to get results like below:
--------------------------------
| Src/Dst Port | Packets Count |
|------------------------------|
| 22           | 100           |
|------------------------------|
| 80           | 200           |
|------------------------------|
| 8080         | 300           |
--------------------------------
I have checked Wireshark's Statistics menu and the tshark command, but I still don't know how to get the results I want.
I don't believe it's possible to directly get this summary information from either Wireshark or tshark, but you can probably write a script to do it.
Here's one such script that may help you get started:
#!/bin/bash
# Check usage
if (( ${#} < 2 )) ; then
    echo "Usage: $0 <file> <type> ... where type is udp or tcp"
    exit 1
fi
# Sanity check if the file specified actually exists or not
stat "${1}" &> /dev/null
if [ $? -ne 0 ] ; then
    echo "File ${1} doesn't exist"
    exit 2
fi
# Validate the type
if [ "${2}" != "udp" -a "${2}" != "tcp" ] ; then
    echo "Invalid type ${2}. Specify either udp or tcp."
    exit 3
fi
echo "Src/Dst Port | Packets Count"
tshark -nq -r "${1}" -z endpoints,"${2}" | grep "^[1-9]" | tr -s ' ' | cut -s -d ' ' -f 2,3 | sort -n | gawk '{arr[$1]+=$2} END {for (i in arr) {print i,arr[i]}}'
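If your tshark supports field extraction, a rough alternative for a plain per-port count (assumes a capture file named capture.pcap; swap tcp.dstport for udp.dstport or tcp.srcport as needed) would be something like:
tshark -r capture.pcap -T fields -e tcp.dstport | grep -v '^$' | sort -n | uniq -c | awk '{print $2, $1}'
Here grep drops packets that have no TCP destination port and uniq -c does the counting, so each output line is "port count".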

xargs: String concatenation

zgrep -i XXX XXX | grep -o "RID=[0-9|A-Z]*" |
uniq | cut -d "=" -f2 |
xargs -0 -I string echo "RequestID="string
My output is
RequestID=121212112
8127127128
8129129812
But my requirement is to have the RequestID prefix on every output line.
Any help is appreciated.
I had a similar task and this worked for me. It might be what you are looking for:
zgrep -i XXX XXX | grep -o "RID=[0-9|A-Z]*" |
uniq | cut -d "=" -f2 |
xargs -I {} echo "RequestID="{}
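On the sample values from the question this should give:
$ printf '121212112\n8127127128\n8129129812\n' | xargs -I {} echo "RequestID="{}
RequestID=121212112
RequestID=8127127128
RequestID=8129129812
(-I implies one echo invocation per input line, which is why every line gets the prefix.)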
Try the -n option of xargs.
-n max-args
    Use at most max-args arguments per command line. Fewer than max-args arguments will be used if the size (see the -s option) is exceeded, unless the -x option is given, in which case xargs will exit.
Example:
$ echo -e '1\n2' | xargs echo 'str ='
str = 1 2
$ echo -e '1\n2' | xargs -n 1 echo 'str ='
str = 1
str = 2
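With the values from the question that would be:
$ printf '121212112\n8127127128\n8129129812\n' | xargs -n 1 echo 'RequestID ='
RequestID = 121212112
RequestID = 8127127128
RequestID = 8129129812
Note that echo puts a space between its arguments, so if you need RequestID= glued directly to the number, the -I form from the other answer is the one to use.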

How to grep a specific integer

I have a list of numbers in a file with the format {integer}\n. So a possible list is:
3
12
53
23
18
32
1
4
I want to use grep to get the count of a specific number, but grep -c "1" file results in 3 because, besides the 1, it also counts the 12 and the 18. How can I correct this?
Although all the answers so far are logical, and I had thought of and tested them before, actually nothing works:
username@domain2:~/code/***/project/random/r2$ cat out.txt
2
16
11
1
13
2
1
16
16
9
username@domain2:~/code/***/project/random/r2$ grep -Pc "^1$" out.txt
0
username@domain2:~/code/***/project/random/r2$ grep -Pc ^1$ out.txt
0
username@domain2:~/code/***/project/random/r2$ grep -c ^1$ out.txt
0
username@domain2:~/code/***/project/random/r2$ grep -c "^1$" out.txt
0
username@domain2:~/code/***/project/random/r2$ grep -xc "^1$" out.txt
0
username@domain2:~/code/***/project/random/r2$ grep -xc "1" out.txt
0
Use the -x flag:
grep -xc 1 file
This is what it means:
-x, --line-regexp
Select only those matches that exactly match the whole line.
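For example, with the original list from the question saved as file, this should give:
$ grep -c 1 file
3
$ grep -xc 1 file
1
The first command counts every line that merely contains a 1 (1, 12 and 18); the second counts only the line that is exactly 1.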
There are some other ways you can do this besides grep:
$ cat file
3 1 2 100
12 x x x
53
23
18
32
1
4
$ awk '{for(i=1;i<=NF;i++) if ($i=="1") c++}END{print c}' file
2
$ ruby -0777 -ne 'puts $_.scan(/\b1\b/).size' file
2
$ grep -o '\b1\b' file | wc -l
2
$ tr " " "\n" < file | grep -c "\b1\b"
2
Use this regex...
\D1\D
...or ^1$ with multiline mode on.
Tested with RegExr and they both work.
Use e.g. ^123$ to match "Beginning of line, 123, End of line"
grep -wc '23' filename.txt
It will count the number of lines where 23 matches as a whole word.
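-w matches whole words rather than whole lines, so with one number per line it behaves like -x for this purpose; with the original list from the question it should give:
$ grep -wc '23' file
1
$ grep -wc '1' file
1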
