I went through the API docs to get data from Gerrit. I want to get the count of all the commits a user has made, given the username. Is there any way to do this? I searched through many of the APIs but could not find a suitable one. Is there an API to get the count of all commits by a specific user? If not, how can it be done?
If you have access to the Gerrit repositories on the Gerrit server (GERRIT_SITE/git) you can get what you want by executing the following command inside a specific repository (GERRIT_SITE/git/REPO_FULL_PATH):
git log --pretty="format:%ae" --since="2019-01-01 00:00:00" --until="2019-12-31 23:59:59" --all | cut -d '@' -f 1 | sort | uniq -c | sort -k 1,1nr -k 2,2
410 aaaaa
169 bbbbb
128 ccccc
22 ddddd
19 eeeee
...
Explanation:
--pretty="format:%ae" => Show only the author e-mail
--since and --until => Limit the search to the given date range
cut -d '@' -f 1 => Remove the "@DOMAIN" part from the e-mail address
sort => Sort by user name
uniq -c => Collapse repeated user names, prefixing each with its number of occurrences
sort -k 1,1nr -k 2,2 => Sort by number of occurrences (descending), then by user name
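The counting part of that pipeline can be tried without a repository by feeding it a hand-made list of author addresses (sample data, not real commits):

```shell
# Stand-in for the `git log --pretty="format:%ae" ...` output
printf 'alice@example.com\nbob@example.com\nalice@example.com\n' \
    | cut -d '@' -f 1 \
    | sort | uniq -c | sort -k 1,1nr -k 2,2
```

alice comes out first with a count of 2, then bob with 1, mirroring the per-user totals shown above.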
I am new to Linux and I am experimenting with basic terminal commands. I found out that I can list all users using compgen -u, but what if I only want to display the last few lines of the output?
OK, let's say the output of compgen -u goes like this:
extra
extra
extra
extra
extra
extra
extra
extra
extra
John
William
Kate
Harold
I can only use grep to find a single string (e.g. compgen -u | grep John). But what if I want to use grep to display John as well as all the remaining entries after it?
A sed or awk solution would be easier, but if you can only use grep, then the option --after-context (or -A) might do:
grep -A 5 John file
The drawback is that you need to know the number of lines to display after the match (or use an arbitrarily big number to cover the rest of the file).
compgen -u | grep -A "$(compgen -u | wc -l)" John
Explanation:
From man grep
-A NUM, --after-context=NUM
Print NUM lines of trailing context after matching lines. Places a line containing a group separator (described under --group-separator) between
contiguous groups of matches.
grep -A NUM -- print NUM lines after each matching line
$( ) -- command substitution: run a command and substitute its output
compgen -u | wc -l -- get the total number of lines that the command outputs
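As a sanity check, the -A trick works on any line-oriented input; here it is on an inline list standing in for compgen -u, with an over-large context count:

```shell
printf 'extra\nextra\nJohn\nWilliam\nKate\n' | grep -A 100 John
# John
# William
# Kate
```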
You can use the following one-liner :
n=$( compgen -u | grep -n John | head -1 | cut -d ":" -f 1 ) && compgen -u | tail -n +$n
This finds the line number of the first occurrence of John, then prints everything from that line onward.
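For completeness, the sed and awk versions mentioned earlier could look like this (sketches; replace the printf with compgen -u in practice):

```shell
# sed: print from the first line matching John through the end of input
printf 'extra\nextra\nJohn\nWilliam\nKate\n' | sed -n '/John/,$p'

# awk: set a flag on the first match, print every line once it is set
printf 'extra\nextra\nJohn\nWilliam\nKate\n' | awk '/John/{found=1} found'
```

Both print John, William, and Kate, with no need to know how many lines follow the match.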
I'm grepping a bunch of files in a directory as below:
grep -EIho 'abc|def' * | sort | uniq -c >> counts.csv
My output is
150 abc
130 def
What I need is the current date (minus 1 day) and the result of the grep, like below, inserted into counts.csv:
5/21/2018 150,130
grep -EIho 'abc|def' * | sort | uniq -c \
    | awk -v d="$(date -d '1 day ago' +%D)" 'NR==1{printf "%s",d}{printf "%s",","$1}END{print ""}'
will do it.
With your example data, it gives:
05/21/18,150,130
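You can check the awk program on the sample counts with a fixed date string (hard-coded here for reproducibility; in the real pipeline d comes from date -d '1 day ago' +%D):

```shell
printf '150 abc\n130 def\n' \
    | awk -v d="05/21/18" 'NR==1{printf "%s",d}{printf "%s",","$1}END{print ""}'
# 05/21/18,150,130
```

On the first record the date is printed, then every record (including the first) appends a comma and its count field, and END emits the final newline.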
I want to do a search in a log file like this:
/logs/loggy.log:
INFO: cats are people
DEBUG: one doth fig're and therefore one doth be
INFO: cookies made via the catapultation of figs at an acceleration of 1 m/s^2.
INFO: informative information about my information systems
I want just the 3rd line. So I run:
grep 'cat.*fig' /logs/loggy.log
But it's a large file! Let's make it faster:
grep -F -e cat -e fig /logs/loggy.log
Oops. Now I'm getting back all the lines, because it now matches lines containing either 'cat' or 'fig'. I want it to match only lines containing both. Is there a way to do this without going back into regular-expressions land?
You can use agrep if it is available in your distro's repos; it natively provides an AND operation:
$ agrep 'cat;fig' file1
Or you can use any of the following alternatives:
$ grep 'cat' file1 |grep 'fig'
$ awk '/cat/ && /fig/' file1
In all the above cases the result is:
INFO: cookies made via the catapultation of figs at an acceleration of 1 m/s^2.
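The awk variant is easy to verify against the sample log lines (reproduced inline):

```shell
printf 'INFO: cats are people\nDEBUG: one doth figure\nINFO: catapultation of figs at 1 m/s^2\n' \
    | awk '/cat/ && /fig/'
# INFO: catapultation of figs at 1 m/s^2
```

Only the line matching both patterns survives; 'figure' alone or 'cats' alone does not qualify.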
I am passing all my svn commit log messages to a file and want to grep only the JIRA issue numbers from that.
Some lines might have more than 1 issue number, but I want to grab only the first occurrence.
The pattern is XXXX-999 (number of alpha and numeric char is not constant)
Also, I don't want the entire line to be displayed, just the JIRA number, without duplicates. I used the following command, but it didn't work.
Could someone help please?
cat /tmp/jira.txt | grep '^[A-Z]+[-]+[0-9]'
Log file sample
------------------------------------------------------------------------
r62086 | userx | 2015-05-12 11:12:52 -0600 (Tue, 12 May 2015) | 1 line
Changed paths:
M /projects/trunk/gradle.properties
ABC-1000 This is a sample commit message
------------------------------------------------------------------------
r62084 | usery | 2015-05-12 11:12:12 -0600 (Tue, 12 May 2015) | 1 line
Changed paths:
M /projects/training/package.jar
EFG-1001 Test commit
Output expected:
ABC-1000
EFG-1001
First of all, it seems like you have the second + in the wrong place; it should be at the end of the [0-9] expression.
Second, I think all you need is the -o option to grep (to display only the matching portion of the line), then pipe the grep output through sort -u, like this:
cat /tmp/jira.txt | grep -oE '^[A-Z]+-[0-9]+' | sort -u
Although if it were me, I'd skip the cat step and just give the filename to grep, as so:
grep -oE '^[A-Z]+-[0-9]+' /tmp/jira.txt | sort -u
Six of one, half a dozen of the other, really.
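Against a cut-down version of the sample log, the corrected command extracts just the unique issue keys:

```shell
printf 'ABC-1000 This is a sample commit message\nEFG-1001 Test commit\nABC-1000 Another commit\n' \
    | grep -oE '^[A-Z]+-[0-9]+' | sort -u
# ABC-1000
# EFG-1001
```

The ^ anchor keeps keys that appear mid-line out of the results, and only the first key on each matching line is printed because -o with an anchored pattern can match at most once per line.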
Given a log file in the standard combined access_log format of nginx or Apache, how would you, in a UNIX shell, calculate the number of visits or page views (i.e. total requests) from each visitor (i.e. IP address) that a given referrer once brought?
In other words, the number of ALL requests made by each visitor who found a link to your site on another site.
The best snippet I could come up with is the following:
fgrep http://t.co/ /var/www/logs/access.log | cut -d " " -f 1 | \
fgrep -f /dev/fd/0 /var/www/logs/access.log | cut -d " " -f 1 | sort | uniq -c
What does this do?
We first find the IP addresses of visits that have http://t.co/ in the log entry. (Notice that this will only count visits that came directly from the referrer, but not those that stayed and browsed the site further.)
After having a list of IP addresses that, at one point, were referred from the given URL, we pipe that list into another fgrep through stdin, /dev/fd/0 (a very inefficient alternative would have been xargs -n1 fgrep access.log -e instead of fgrep -f /dev/fd/0 access.log), to find all the hits from those addresses.
After the second fgrep, we get the same set of IP addresses as in the first step, but now each one is repeated according to its total number of requests; then sort, uniq -c, done. :)
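Here is a self-contained sketch of the same two-pass idea on a fabricated three-line log (grep -Ff - reads the patterns from stdin, equivalent to fgrep -f /dev/fd/0; note that fixed-string matching of an IP like 1.2.3.4 can also match a longer address such as 1.2.3.45, just as in the original):

```shell
# Fabricated sample in combined log format (fields abbreviated)
log=$(mktemp)
cat > "$log" <<'EOF'
1.2.3.4 - - [12/May/2015:11:12:52 -0600] "GET / HTTP/1.1" 200 123 "http://t.co/x" "UA"
5.6.7.8 - - [12/May/2015:11:12:53 -0600] "GET / HTTP/1.1" 200 123 "-" "UA"
1.2.3.4 - - [12/May/2015:11:12:54 -0600] "GET /page HTTP/1.1" 200 123 "-" "UA"
EOF

# Pass 1: IPs whose entries mention the referrer; pass 2: count ALL their requests
grep -F 'http://t.co/' "$log" | cut -d ' ' -f 1 | sort -u \
    | grep -Ff - "$log" | cut -d ' ' -f 1 | sort | uniq -c

rm -f "$log"
```

1.2.3.4 arrived via the referrer once but made two requests in total, so it is counted twice; 5.6.7.8 never came from the referrer and is excluded.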