lcov remove option is not removing coverage data as expected

I'm using lcov to generate coverage reports. I have a tracefile (broker.info) with this content (relevant fragment shown):
$ lcov -l broker.info
...
[/var/lib/jenkins/jobs/ContextBroker-PreBuild-UnitTest/workspace/test/unittests/orionTypes/]
EntityTypeResponse_test.cpp | 100% 11| 100% 6| - 0
...
[/var/lib/jenkins/jobs/ContextBroker-PreBuild-UnitTest/workspace/test/unittests/parse/]
CompoundValueNode_test.cpp | 100% 82| 100% 18| - 0
...
[/var/lib/jenkins/jobs/ContextBroker-PreBuild-UnitTest/workspace/test/unittests/rest/]
OrionError_test.cpp |92.1% 38| 100% 6| - 0
...
[/var/lib/jenkins/jobs/ContextBroker-PreBuild-UnitTest/workspace/test/unittests/serviceRoutines/]
badVerbAllFour_test.cpp | 100% 24| 100% 7| - 0
...
I want to remove all the info corresponding to files under test/unittests.
I have attempted to use the -r option which, according to the man page, is:
-r tracefile pattern
--remove tracefile pattern
Remove data from tracefile.
Use this switch if you want to remove coverage data for a particular set of files from a tracefile. Additional command line parameters will be interpreted as
shell wildcard patterns (note that they may need to be escaped accordingly to prevent the shell from expanding them first). Every file entry in tracefile
which matches at least one of those patterns will be removed.
The result of the remove operation will be written to stdout or the tracefile specified with -o.
Only one of -z, -c, -a, -e, -r, -l, --diff or --summary may be specified at a time.
Thus, I'm using
$ lcov -r broker.info 'test/unittests/*' -o broker.info2
As far as I understand, test/unittests/* matches the files under test/unittests. However, it's not working (note Deleted 0 files below):
Reading tracefile broker.info
Deleted 0 files
Writing data to broker.info2
Summary coverage rate:
lines......: 92.6% (58313 of 62978 lines)
functions..: 96.0% (6451 of 6718 functions)
branches...: no data found
I have also tried these variants (same result):
$ lcov -r broker.info "test/unittests/*" -o broker.info2
$ lcov -r broker.info "test/unittests/\*" -o broker.info2
$ lcov -r broker.info "test/unittests" -o broker.info2
So, am I doing something wrong?
I'm using lcov version 1.13 (just in case that's relevant).
Thanks!

I have been testing other options, and the following one seems to work, using a wildcard in the prefix as well:
$ lcov -r broker.info "*/test/unittests/*" -o broker.info2
Maybe this is something new in version 1.13, because version 1.11 seems to work without the wildcard in the prefix...
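A plausible explanation (my assumption, not something verified in this thread): lcov tracefiles record every source file as an absolute path on an SF: line, so a remove pattern has to match the full path from its start, which is exactly what the leading */ achieves. You can inspect the stored paths directly, e.g.:
$ grep 'SF:' broker.info | head -1
SF:/var/lib/jenkins/jobs/ContextBroker-PreBuild-UnitTest/workspace/test/unittests/orionTypes/EntityTypeResponse_test.cpp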

The lcov command below works fine, even with wildcards (lcov 1.14):
lcov --remove meson-logs/coverage.info '/home/builduser/external/*' '/home/builduser/unittest/*' -o meson-logs/sourcecoverage.info
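To confirm the removal took effect, you can list the resulting tracefile with lcov's --list mode (a suggestion of mine, not part of the original answer) and check that the excluded paths are gone:
$ lcov --list meson-logs/sourcecoverage.info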

Related

Using the grep command to get a specific word [LINUX]

I have a test.txt file with links for example:
google.com?test=
google.com?hello=
and this command:
xargs -0 -n1 -a FUZZvul.txt -d '\n' -P 20 -I % curl -ks1L '%/?=DarkLotus' | grep -a 'DarkLotus'
When I type a specific word, such as DarkLotus, in the terminal, it checks the links in the file and brings back the word where it is reflected in the links I provided in the test file.
That part works; the problem is that I have many links, and when the results appear in the terminal, I do not know which site reflected the DarkLotus word.
How can I do that?
Try the -n option. It shows the line number of the file with the matched line.
I'm not sure what you are up to there, but can you invert it? grep by default prints matching lines. The problem is that you are piping the stdout of the previous commands into grep, which loses the context of where each line came from. Since you have a file to work with:
$ grep 'DarkLotus' FUZZvul.txt
If your intention is to also follow the link then it might be easier to write a bash script:
#!/bin/bash
for line in `grep 'DarkLotus' FUZZvul.txt`
do
    link="${line}"   # extract the link from the line, if it needs trimming
    echo "${link}"
    curl -ks1L "${link}"
done
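As a side note (my suggestion, not part of the original answer): iterating over backtick output splits on whitespace, so a while read loop handles arbitrary lines more safely:
grep 'DarkLotus' FUZZvul.txt | while read -r line
do
    curl -ks1L "${line}"
done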
Then you could make your script accept user input:
#!/bin/bash
word="${1}"
for line in `grep "${word}" FUZZvul.txt`
...
and then
$ my_link_getter "DarkLotus"
https://google?somearg=DarkLotus
...
And then you could make the txt file a parameter, etc.
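Putting it together, a minimal sketch (my own reconstruction, assuming FUZZvul.txt holds one base URL per line and reusing the '?=word' query from the question) that prints only the sites that reflect the word:
#!/bin/bash
word="${1}"
while read -r link
do
    # Print the URL only when the response reflects the word.
    if curl -ks1L "${link}/?=${word}" | grep -aq "${word}"; then
        echo "${link}"
    fi
done < FUZZvul.txt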

Does Mercurial have a template to capture output of "hg grep"?

I was searching for a change that included "foreach" so I used this Mercurial command:
$ hg grep -r "user(mjh) & public() & date(-30)" --diff -i foreach
and it does return the hits where "foreach" was added and removed.
However, I'd like to know the actual commit hashes too. If I add a template:
$ hg grep ... -T '{date|shortdate}\n{node|short}\n{desc|firstline}\n\n'
then I get the commit hash and description as expected, but I no longer see the changed files listed.
Is there a template to capture the output of hg grep? The {files} template lists the files associated with a commit, but that's not the actual grep output. Is there an iterable template keyword available for the grep results?
Please re-read hg help grep -v carefully (the -v option is important), and note the following part (which was new and unexpected for me as well):
The following keywords are supported in addition to the common template
keywords and functions. See also 'hg help templates'.

change  String. Character denoting insertion "+" or removal "-".
        Available if "--diff" is specified.
lineno  Integer. Line number of the match.
path    String. Repository-absolute path of the file.
texts   List of text chunks.
With these you'll be able to reproduce (approximately; some details will differ slightly) the default output of hg grep in your template:
>hg grep --diff -i -r 1166 to_try
>hg grep --diff -i -r 1166 -T "{path}:{rev}:{change}:{texts}\n" to_try
hggit/compat.py:1166:-: for args in parameters_to_try:
hggit/compat.py:1166:+: for (args, kwargs) in parameters_to_try:
and after replacing {rev} with {node|short}:
>hg grep --diff -i -r 1166 -T "{path}:{node|short}:{change}:{texts}\n" to_try
hggit/compat.py:f6cef55e6aeb:-: for args in parameters_to_try:
hggit/compat.py:f6cef55e6aeb:+: for (args, kwargs) in parameters_to_try:

Grepping list of phpass hashes against a file

I'm trying to grep multiple strings which look like this (there are a few hundred of them) against a file whose lines have the form data:string.
Example strings (no sensitive data is included; they have been modified):
$H$9a...DcuCqC/rMVmfiFNm2rqhK5vFW1
$H$9n...AHZAV.sTefg8ap8qI8U4A5fY91
$H$9o...Bi6Z3E04x6ev1ZCz0hItSh2JJ/
$H$9w...CFva1ddp8IRBkgwww3COVLf/K1
I've been researching how to grep a file of patterns against another file, and came across the following commands
grep -f strings.txt datastring.txt > output.txt
grep -Ff strings.txt datastring.txt > output.txt
But unfortunately, these commands do NOT work successfully and only print a handful of results to my output file. I think it may be something to do with the symbols contained in strings.txt, but I'm unsure. Any help/advice would be great.
I should also mention that I'm using Cygwin on Windows (if this is relevant).
Here's an updated example:
strings.txt contains the following:
$H$9a...DcuCqC/rMVmfiFNm2rqhK5vFW1
$H$9n...AHZAV.sTefg8ap8qI8U4A5fY91
$H$9o...Bi6Z3E04x6ev1ZCz0hItSh2JJ/
$H$9w...CFva1ddp8IRBkgwww3COVLf/K1
datastring.txt contains the following:
$H$9a...DcuCqC/rMVmfiFNm2rqhK5vFW1:53491
$H$9n...AHZAV.sTefg8ap8qI8U4A5fY91:03221
$H$9o...Bi6Z3E04x6ev1ZCz0hItSh2JJ/:20521
$H$9w...CFva1ddp8IRBkgwww3COVLf/K1:30142
So technically, all lines should be included in the output file, but only this line is output:
$H$9w...CFva1ddp8IRBkgwww3COVLf/K1:30142
I just don't understand.
You have shown the output of cat -A strings.txt elsewhere, which includes ^M representing a CR (carriage return) character at the end of each line.
This indicates your file has Windows line endings (CR LF) instead of the Unix line endings (only LF) that grep expects. (That would also explain why only the last pattern matched: if strings.txt does not end with a newline, its final line has no trailing CR.)
You can convert files with dos2unix strings.txt and back with unix2dos strings.txt.
Alternatively, if you don't have dos2unix installed in your Cygwin environment, you can also do that with sed.
sed -i 's/\r$//' strings.txt # dos2unix
sed -i 's/$/\r/' strings.txt # unix2dos
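For illustration (a reconstruction on my part, based on the cat -A output mentioned above), a CRLF-terminated line shows up like this, where ^M is the CR and $ marks the end of the line:
$ cat -A strings.txt | head -1
$H$9a...DcuCqC/rMVmfiFNm2rqhK5vFW1^M$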

tar pre-run to evaluate expected size or number of files

The problem:
I have a back-end process that at some point collects files and builds a big tar archive.
The tar command receives a few directories and an exclude file.
The process can take up to a few minutes, and I want my front-end (GUI) to report the progress of the tarring. This is a big issue for a user who presses the download button and then sees nothing happening...
I know I can use -v -R in the tar command and count files and size progress, but I am looking for some kind of tar pre-run / dry-run mode to help me evaluate either the expected number of files or the expected tar size.
The command I am using: tar -jcf 'FILE.tgz' 'exclude_files' 'include_dirs_and_files'
Thanks to everyone who is willing to assist.
You can pipe the output to the wc tool instead of actually making a file.
With file listing (verbose):
[git@server]$ tar czvf - ./test-dir | wc -c
./test-dir/
./test-dir/test.pdf
./test-dir/test2.pdf
2734080
Without:
[git@server]$ tar czf - ./test-dir | wc -c
2734080
Why don't you run
DIRS=("./test-dir" "./other-dir-to-test")
find "${DIRS[@]}" -type f | wc -l
beforehand? This lists all the files (-type f), one per line, and counts them. DIRS is a bash array, so you can store the folders in a variable.
If you want to know the size of all the stored files, you can use du:
DIRS=("./test-dir" "./other-dir-to-test")
du -c -d 0 "${DIRS[@]}" | tail -1 | awk -F ' ' '{print $1}'
This prints the disk usage with du, calculates a grand total (the -c flag), takes the last line (for example 4378921 total), and keeps just the first column with awk.
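If the real goal is a live progress indicator rather than just a pre-count, one common approach (a sketch of my own, assuming GNU du and the pv tool, neither of which appears in the original answers) is to feed du's byte estimate to pv between tar and the compressor:
DIRS=("./test-dir" "./other-dir-to-test")
# Estimate the uncompressed input size in bytes (GNU du; -b = apparent size).
total=$(du -cb "${DIRS[@]}" | tail -1 | awk '{print $1}')
# pv reports percentage/ETA against the estimate while the data streams through.
tar -cf - "${DIRS[@]}" | pv -s "${total}" | gzip > FILE.tgz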

parse maven output in real time using sed

I am trying to parse my mvn verify output to show only lines with INFO tags. Please note that Maven writes lines to stdout in real time, not in batches; I do not think the problem is with Maven.
At first I tried to do it with grep:
$ mvn verify | grep INFO
but it didn't seem to output lines in real time. As I understand it, grep buffers its output, so I have to wait a few seconds between flushes and then tens of lines are printed at once, which is not very convenient. Then I thought I would try sed.
According to this link, the following command:
sed -n '/PATTERN/p' file
is equivalent to
grep PATTERN file
and according to this link, the -l option should force sed to flush its output buffer after every newline. So now I am using this command:
$ mvn verify | sed -ln -e '/INFO/p'
but I'm still getting the same result as before: a ton of output flushed every 30 s or so, and I don't know what I've done wrong. Can someone point me in the right direction, please?
Try this, if your grep supports it:
mvn verify | grep --line-buffered INFO
If you're doing this in a terminal and still seeing buffered results, it would probably be something earlier than grep doing the buffering, but I'm not familiar with mvn. (And, yes, the -l option to sed should have done the same thing, so the problem may be upstream.)
Try this line:
mvn verify | while read -r line; do echo "$line" | grep INFO; done
I found the problem: I was using a script to colorise Maven output (see here), and it was that script that was buffering the output down the pipe. I forgot about it because I was using it through an alias; I guess this is a good lesson, and I won't alias so casually in the future. Anyway, here is the fix. I changed -e to -le in the last line of the sed call:
mvn "$@" | sed -e "s/\(\[INFO\]\ \-.*\)/${TEXT_BLUE}${BOLD}\1/g" \
-e "s/\(\[INFO\]\ \[.*\)/${RESET_FORMATTING}${BOLD}\1${RESET_FORMATTING}/g" \
-e "s/\(\[INFO\]\ BUILD SUCCESSFUL\)/${BOLD}${TEXT_GREEN}\1${RESET_FORMATTING}/g" \
-e "s/\(\[WARNING\].*\)/${BOLD}${TEXT_YELLOW}\1${RESET_FORMATTING}/g" \
-e "s/\(\[ERROR\].*\)/${BOLD}${TEXT_RED}\1${RESET_FORMATTING}/g" \
-le "s/Tests run: \([^,]*\), Failures: \([^,]*\), Errors: \([^,]*\), Skipped: \([^,]*\)/${BOLD}${TEXT_GREEN}Tests run: \1${RESET_FORMATTING}, Failures: ${BOLD}${TEXT_RED}\2${RESET_FORMATTING}, Errors: ${BOLD}${TEXT_RED}\3${RESET_FORMATTING}, Skipped: ${BOLD}${TEXT_YELLOW}\4${RESET_FORMATTING}/g"
In effect this tells sed to flush its output at every new line, which is what I wanted. I'm sorry I didn't find a more generic workaround. I tried playing around with empty (see its man page) and script, but none of those solutions worked for me.
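For the record, a more generic workaround on GNU systems (my addition, not from the original thread, and untested against this setup) is to force line-buffered stdout on the stage that buffers, using coreutils' stdbuf:
# Ask glibc stdio to line-buffer sed's stdout so matches appear immediately.
$ mvn verify | stdbuf -oL sed -n '/INFO/p'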
