Why is this Codecov report saying that I've reduced my coverage when I don't believe I have?

I find Codecov reports hard to read at times, and I think they can be misleading or even incorrect.
I'm hoping someone can confirm or deny this.
With this example PR, Codecov says I'm making things worse, but I can't see how.
Current scenario.
PR: Dependabot wants to bump a NuGet package from 3.0.12 -> 3.0.14.
OK, so this report is saying that the entire project is:
46% covered
down 23.9%
so it was about 69.9% (46% + 23.9%) on the previous commit??
OK, let's look at the diff.
So the diff shows one file only, and it's marked "untracked"? (I'm not sure what that means.)
And with this one file, it's saying my coverage is now dropping by 24%.
Let's look at the commit.
OK, so this is THE ENTIRE PR. It's just a version number change; no other files in the PR.
Now let's go back and see what Codecov is looking at....
an21ad is the latest commit on origin/master, so it's HEAD.
b3d5da6 is the PR.
OK, so that's all fine, but why is it reporting such a massive code coverage loss?
Now look at this ...
and
wait, what? It's saying:
I have more than 99 separate changes between my PR and origin/master?
I have 69 files changed between my PR and origin/master?
But my commit touches only ONE file!
Alright, let's look at one of these random files which is supposedly 'different':
OK, I really don't know how to read this change. It's saying that these two lines have changed, but they really haven't.
Can anyone please explain what Codecov is doing/thinking here?
I was under the impression that this commit should not change the coverage at all.
SIDENOTE: I'm using flags, so I'm hoping that isn't messing things up.
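One thing worth checking locally is which base commit the report is diffing against. A PR-style comparison should run from the merge-base, not from the current tip of master. A minimal sketch of the difference, using the branch names from the question:

git fetch origin
# Three-dot form: diff against the merge-base, which is what a PR diff should use
git diff --stat origin/master...HEAD
# Two-dot form: diff against the current tip of master; if master has moved on
# since the PR branched, this reports files the PR never touched
git diff --stat origin/master..HEAD

If Codecov picked a base commit that isn't the PR's merge-base (for example, because no coverage report was ever uploaded for the true base), the comparison can sweep in everything that landed on master in between, which would explain 69 "changed" files for a one-file commit.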

Related

How do I avoid getting a file marked "changed" after modifying it in a before-clientcheckin trigger?

I have written a before-clientcheckin script to write the predicted changeset number into certain files on a user check-in operation. The script appears to be running successfully, in that the new changeset number is being written to the target files and the check-in operation completes successfully.
BUT... unfortunately, on check-in completion a "Changed" copy of the file still shows up in the Pending Changes window. Bizarrely, this "changed" version of the file is identical to the one that was just checked in. I can get rid of it by undoing the change, which makes my workspace and the repository equal again, with the correctly versioned file still intact.
It seems that although the check-in succeeds, the script's modification of the file after the user starts the check-in operation marks the file as changed again, even though its contents are actually unchanged.
Does anyone know how to avoid this scenario? There's no mention of this kind of issue in the Plastic Trigger manual, and although I have seen other posts from people trying to do something similar in a before-clientcheckin script, no one seems to have encountered this particular problem.
My Plastic version is 7.0.16.2421, running on Windows 10.
Any help would be much appreciated!
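For context, this kind of trigger is installed with something like cm trigger create (check the exact syntax for your Plastic version) and runs a script before the check-in completes. A minimal sketch of the setup, assuming a Unix-style client; the trigger name, script path, predict_changeset.sh helper, and Version.txt target are all hypothetical placeholders:

# Install the trigger (name and script path are examples)
cm trigger create before-clientcheckin "stamp changeset" "/usr/local/bin/stamp-cs.sh"

#!/bin/sh
# stamp-cs.sh - hypothetical before-clientcheckin trigger body
CS=$(predict_changeset.sh)   # however you predict the next changeset number (placeholder)
sed -i "s/^Changeset: .*/Changeset: $CS/" Version.txt
exit 0   # a non-zero exit status aborts the check-in

The reported symptom follows from this shape: the script rewrites a tracked file after Plastic has already captured the check-in contents, so the workspace scanner presumably sees a newer timestamp and flags the file as changed again.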

Missing Code Analysis output file breaking TFS builds

We have an occasional issue whereby the build cannot find a Code Analysis output file and fails with an error saying that file is missing.
Much of the help surrounding this error deals with path name length, which I'm pretty sure doesn't apply here, as the build goes through fine when it's re-queued. There is also no Code Analysis-related cruft in the project files to get rid of - just the standard:
<RunCodeAnalysis>true</RunCodeAnalysis>
We also don't want simply to turn Code Analysis off for the builds, as that would rather defeat the object.
Has anyone else encountered this? And any idea how we might get around it?
If you are using VS2010 or earlier, it turns out that some code analysis tags in the .csproj file were causing the issue. Please see the post "How to get rid of code analysis errors in TFS build" for details.
If not, try to clean the Team Build workspace.
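If you go the workspace-cleaning route, something along these lines from a Developer Command Prompt may help; the workspace name, owner, and collection URL below are placeholders:

rem List the build workspaces on this machine for the collection
tf workspaces /computer:%COMPUTERNAME% /collection:http://tfsserver:8080/tfs/DefaultCollection
rem Delete the stale build workspace so Team Build recreates it on the next run
tf workspace /delete "BuildWorkspaceName;BuildServiceAccount" /collection:http://tfsserver:8080/tfs/DefaultCollection

Deleting the workspace forces a fresh get on the next build, which clears out any stale or missing intermediate outputs.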

Can Plastic SCM track code moved between files?

It seems that Plastic SCM does not track code moved between files (compare with, e.g., Git). Am I right, or is there a way to switch this feature on?
(Disclaimer: I work for Plastic SCM).
As far as I know, git is able to track moved fragments of code between files when you run a "git blame", but it doesn't use this information during merge, correct? Git can calculate a "moved fragment" between files if the move happens in the same commit, and that's what it does while processing the blame.
No, Plastic is not able to do that yet, which is a shame because we're already doing some interesting things around the idea:
First, we have semantic method history, which can track the history of a given method even when it has been moved, renamed, and modified, but always inside the same file. We have plans to extend this to a repository-wide basis; indeed, we were about to implement it by the end of last year, but we had to postpone it because some other highly requested features came first.
The semantic method history is based on the SemanticMerge tech we've developed. Right now it too works inside a single file, but the plan is also to come up with a multi-file SemanticMerge (we even have a working prototype already). I expect this to be several steps ahead of what other tools can do.
Putting the two together, it wouldn't be hard to do something like "blame with moved code", which, as you pointed out, would be really great to have. Hopefully we'll release something like this in the coming months.
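To make the git comparison concrete: git only detects cross-file moves when you ask it to, via the -C option to blame. A quick sketch (the file path is a placeholder):

# Detect lines moved or copied from other files modified in the same commit
git blame -C path/to/file.c
# Each extra -C widens the search: two also inspect the commit that created
# the file, three inspect every commit in the history
git blame -C -C -C path/to/file.c
# Recent git can also highlight moved blocks when showing a diff
git diff --color-moved

None of this feeds into merge, though; it's purely a display-time analysis, which matches the answer above.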

pull changes from branch when deleting files

I can't figure out the best way to do this, and I've messed myself up a few times, so it would be nice to know a good approach. On master, I have our main project. We finally got approval to use ARC in iOS, and I created a new branch for that so as not to disturb the working master branch. I also took the time to delete some unneeded files in my ARCBranch. I want to use this branch for development of the next release, so I'd like to pull the changes from master into ARCBranch. I switched to ARCBranch and did
git pull origin master
I got conflicts, some of which were straightforward because I could see the code, and others in the pbxproj file, where I can't tell what's what. I did see
<<< HEAD ==== >>>. I can't tell what I need to do here. I can't open the file in Xcode, only in a text editor. I tried just deleting those <<< === >>> markers, since I saw one person on SO say that you typically want both changes and can always do that. That didn't work for me. I was wondering if there is a better way to make this change. Maybe a way to see each change happen one by one? Thanks.
Instead, you could try
git rebase master
This would apply the changes commit by commit. If there are conflicts, it would stop there, so that you can resolve them and do
git rebase --continue
to finish applying all the patches.
The merge failed to auto-resolve, so Git marks the conflicting blocks of code and leaves both versions in place, letting you decide which to keep and remove the other yourself.
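Putting the whole flow together, a typical conflict-resolution loop looks roughly like this (branch names match the question):

git checkout ARCBranch
git rebase master
# ...rebase stops at the first conflicting commit...
git status                      # lists the conflicted files
# edit each conflicted file, removing the <<<<<<< ======= >>>>>>> markers
git add path/to/resolved-file   # mark the conflict as resolved
git rebase --continue           # repeat until all commits are applied
# or, to back out entirely and return to the pre-rebase state:
git rebase --abort

Because each of your ARCBranch commits is replayed individually, the conflicts arrive in smaller, more readable pieces than one big merge would produce.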

Is it possible for code to change without Git knowing about it?

This morning I ran my tests and there are 2 failures. But I haven't changed any code for a few days and all tests were passing.
According to Git, there are no changes (except for coverage.data, which is the test output). gitk shows no other changes.
How does Git know when code changes? Could this be caused by an SSD failure/error?
What is the best way to figure out what happened?
EDIT: Working in Ruby on Rails with Unit Test framework.
I'd start by figuring out why the tests failed, and that might give you some clues as to how they might have passed before. Sometimes it's a timing issue, intermittent failure, something external to the test harness, data changing, a change of date or time, all sorts of stuff.
I see Mike found your problem (tick the little answer box, please).
Yes, it is possible for code to change without Git knowing about it. The file that caused the failure (perhaps a temporary testing file or fixture) could be ignored, either in .gitignore or in .git/info/exclude. Doing a git clean -dxf will wipe the checkout clean of anything not known to git, and git status --ignored will show the files git is ignoring. If that's the case, you want to add better test cleanup as part of your test runner.
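Since git clean -dxf is destructive (it deletes build outputs and anything else untracked or ignored), a cautious order of operations is:

git status --ignored   # list the files git is deliberately ignoring
git clean -ndx         # dry run: show what would be removed, delete nothing
git clean -dxf         # actually remove everything git doesn't track

The -n flag makes clean a no-op preview, -d includes directories, and -x includes ignored files as well.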
For posterity, here's the short list of ways tests could fail without there being any code change visible to git:
"Temporary" test files and fixtures might be dirty.
"Temporary" databases and tables might be dirty.
It is sensitive to time or date.
It uses network resources and they changed.
The compiler was changed.
The installed libraries used were changed.
The libraries the libraries use were changed.
The kernel was changed.
Any servers used (databases, web servers, etc...) were changed.
It uses parallel processing and a subtle bug only occurs sometimes.
The disk (or the filesystem where temp files go) is full.
The disk (or the filesystem where temp files go) is broken.
Your memory/disk/process/filehandle quotas were reduced.
The machine has run out of memory.
The machine has run out of filehandles.
It uses fixtures with randomly generated data and generated some that tickled a bug.
Your repository's committed contents can't change without Git knowing about it, since Git verifies content hashes whenever it retrieves files from the repository. I would check whether the failures are caused by something on the machine that shouldn't be running at the same time as the tests, or possibly by a race condition you're not expecting.
