How can I get Linear Code Sequence and Jump (LCSAJ) coverage for my C code built with GCC, preferably using a GNU tool?
Info about the coverage: http://en.wikipedia.org/wiki/Linear_code_sequence_and_jump
I could be wrong, but I believe LCSAJ was invented by LDRA (http://www.ldra.com/), and as such, the product that could get you there would be their LDRAcover:
http://www.ldra.com/en/ldracover
I am using SOS optimization to solve an adaptive control problem using the inverse Lyapunov method. I have been successful in obtaining the Lyapunov function and region-of-attraction level set for some simple problems. Now I am trying to determine the Lyapunov function for a new system, and I am getting the error "Constraint ### is empty.", where ### is a number that changes from run to run. How do I debug which constraint is empty? My constraints look like the following:
prog.AddSosConstraint(V - l1)
prog.AddSosConstraint(-((beta - h)*p1 + V - 1))
prog.AddSosConstraint(-(l2 + Vdot) + p2*(V - 1))
p1 and p2 contain the decision variables; V, l1, and l2 are functions of the indeterminates only.
I am following the iterative procedure in [1] to solve for the Lyapunov function and region of attraction level-set.
[1] F. Meng, D. Wang, P. Yang, G. Xie and F. Guo, "Application of Sum-of-Squares Method in Estimation of Region of Attraction for Nonlinear Polynomial Systems," in IEEE Access, vol. 8, pp. 14234-14243, 2020, doi: 10.1109/ACCESS.2020.2966566.
The problem here seems to be that when Drake passes the program along to the CSDP library to solve, CSDP rejects the program as malformed. Ideally, we would like Drake to detect this and report it back to you, without handing the program over to CSDP to fail.
It's possible that this is a similar bug to #16732.
It's possible that the debug_mathematical_program tutorial would offer some tips for debugging. In particular, the "print a summary" section might let you spot something suspicious (likely), or "naming your constraints" might also help (though probably not).
In any case, if you are able to provide sample code that reproduces the error, then we will be able to offer better advice.
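In the meantime, here is a rough sketch (my own, not from the tutorial, and assuming a reasonably recent Drake in C++) of dumping a one-line summary of the PSD and linear-equality constraints that AddSosConstraint creates; a block that touches zero variables is the kind of degenerate constraint CSDP tends to reject as "empty". The function name InspectProgram is just illustrative.

#include <iostream>

#include "drake/solvers/mathematical_program.h"

// Print how many variables each PSD / linear-equality constraint touches,
// so a degenerate (empty) block stands out by inspection.
void InspectProgram(const drake::solvers::MathematicalProgram& prog) {
  int i = 0;
  for (const auto& binding : prog.positive_semidefinite_constraints()) {
    std::cout << "PSD constraint " << i++ << ": "
              << binding.variables().size() << " variables\n";
  }
  i = 0;
  for (const auto& binding : prog.linear_equality_constraints()) {
    std::cout << "linear equality constraint " << i++ << ": "
              << binding.variables().size() << " variables\n";
  }
}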
When calling functions like DirectCollocation, is there a way to see some progress along the way (a verbose mode)? I am not sure how helpful it would be for checking formulation errors, but just wondering :)
There are two ways to monitor the progress:
You could add a visualization callback with prog.AddVisualizationCallback. If the callback visualizes the trajectory, then you can watch the visualization update on every iteration of the optimizer (see the sketch after the SNOPT example below).
If you use the SNOPT solver, you can ask the solver to print out statistics on each iteration. The pseudo-code looks like this:
std::string print_file_name="foo.txt";
prog.SetSolverOption(SnoptSolver::id(), "Print file", print_file_name);
SnoptSolver solver;
const auto result = solver.Solve(prog, initial_guess);
SNOPT will then print its per-iteration statistics to foo.txt.
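For the first option, here is a minimal sketch (my own, assuming a recent Drake; the toy cost and variable names are just for illustration) of a visualization callback on a plain MathematicalProgram. With DirectCollocation you would pass (a subset of) its decision variables instead; the callback fires whenever a nonlinear solver such as SNOPT or IPOPT evaluates the program, so you can print or plot intermediate values there.

#include <iostream>

#include "drake/solvers/mathematical_program.h"
#include "drake/solvers/solve.h"

int main() {
  drake::solvers::MathematicalProgram prog;
  auto x = prog.NewContinuousVariables<2>("x");
  prog.AddCost((x[0] - 1) * (x[0] - 1) + (x[1] + 2) * (x[1] + 2));

  // Invoked with the current values of x each time the solver evaluates the
  // costs/constraints; print here, or hand the values to your visualizer.
  prog.AddVisualizationCallback(
      [](const Eigen::Ref<const Eigen::VectorXd>& v) {
        std::cout << "current x = " << v.transpose() << std::endl;
      },
      x);

  const auto result = drake::solvers::Solve(prog);
  std::cout << "success: " << result.is_success() << std::endl;
  return 0;
}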
I'm attempting to write a program in Lua (I have some experience with the syntax) to find the factors of a number and possibly factor an input polynomial. I'm not sure whether everyone learned factoring the same way, but I learned it with the "multiply to" / "add to" ("x-box") method. It would be interesting to actually draw out the method in Lua (see the picture attached) and display the answer; if not drawing, then I'd just use the print command.
I would like the program to have two parameters: one would be the number whose prime factors should be determined, and the other would be the polynomial input (the a, b, and c values of ax^2+bx+c) to be factored. Then I may also attempt perfect squares and difference of squares.
I'd like some guidance on this; I'm in no way expecting a full working program. Thanks in advance.
You can write a function built around a for loop, like this:
function factor(val)
  val = math.floor(val)
  local found = {}
  -- Try every ordered pair (m, i) and keep the ones whose product is val.
  for m = 1, val do
    for i = 1, val do
      if m * i == val then
        print(m .. "*" .. i .. "=" .. val)
        table.insert(found, m .. "*" .. i)
      end
    end
  end
  return found
end
It will return all possible factor pairs; the downside is that it eventually repeats them in reverse order, but that's not really a problem.
Usage example: factor(6)
returns: {"1*6", "2*3", "3*2", "6*1"}
I am writing a histogram-like function that looks at vector data and puts the elements into predefined "histogram" buckets based on which range they fall closest to.
I could obviously do this with if conditions, but I am trying to speed it up with NEON because these are image buffers.
One way to do this would be VCEQ followed by VBIT, but sadly I could not find VBIT in the NEON header. Alternatively, I figured I could take the VCEQ result, invert it by XOR-ing with a vector of all ones, and then use VBIF :-) but VBIF is not there either!
Any thoughts here?
Thanks
VBIT, VBIF, and VBSL all do the same operation up to permutation of the sources; you can use the vbsl* intrinsics to get any of the three operations.
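In case it helps, here is a minimal sketch (my own, with a hypothetical two-bucket split; the function and parameter names are made up) of the compare-then-select idiom using vcltq_u8 and vbslq_u8. vbslq_u8(mask, a, b) computes (mask & a) | (~mask & b), which is the same operation VBIT/VBIF perform with the operands permuted.

#include <arm_neon.h>
#include <stdint.h>

// Assign each byte one of two bucket indices depending on whether it is
// below `split`. A scalar tail loop would handle the last n % 16 bytes.
void bucket_lt_split(const uint8_t* src, uint8_t* dst, int n,
                     uint8_t split, uint8_t low_bucket, uint8_t high_bucket) {
  const uint8x16_t vsplit = vdupq_n_u8(split);
  const uint8x16_t vlow = vdupq_n_u8(low_bucket);
  const uint8x16_t vhigh = vdupq_n_u8(high_bucket);
  for (int i = 0; i + 16 <= n; i += 16) {
    const uint8x16_t pix = vld1q_u8(src + i);
    // Lanes are all-ones where pix < split, all-zeros otherwise.
    const uint8x16_t mask = vcltq_u8(pix, vsplit);
    // Select low_bucket where the mask is set, high_bucket elsewhere.
    vst1q_u8(dst + i, vbslq_u8(mask, vlow, vhigh));
  }
}

More than two buckets just chains additional compare/select pairs.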
The article at onjava seems to imply that basis path coverage is a sufficient substitute for full path coverage, due to some linear-independence/cyclomatic-complexity magic.
Using an example similar to the one in the article:
public int returnInput(int x, boolean one, boolean two)
{
    int y = x;
    if (one)
    {
        y = x - 1;
    }
    if (two)
    {
        x = y;
    }
    return x;
}
with the basis set {FF,TF,FT}, the bug is not exposed. Only the untested TT path would expose it.
So, how is basis path coverage useful? It doesn't seem much better than branch coverage.
[Disclaimer: I've never heard of this technique before, it just looks interesting so I've done a few searches and here's what I think I've found out. Hopefully someone who knows what they're talking about will contribute too...]
I think it's supposed to be a better way of generating branch coverage tests, not a complete substitute for path coverage. There's a far longer document here which restates the goals a bit: http://www.westfallteam.com/sites/default/files/papers/Basis_Path_Testing_Paper.pdf
The onjava article says "the goal of basis path testing is to test all decision outcomes independently of one another. Testing the four basis paths achieves this goal, making the other paths extraneous"
I think "extraneous" here means, "unnecessary to the goal of basis path testing", not as one might assume, "a complete waste of everyone's time".
I think the point of testing branches independently is to break the accidental correlations between the paths that work and the paths you test, which occur with terrifying frequency when I write both the code and an arbitrary set of branch coverage tests myself. There's no magic in the linear independence; it's just a systematic way of generating branch coverage which discourages the tester from making the same assumptions as the programmer about correlation between branch choices.
So you're right: basis path testing misses your bug, and in general it leaves 2^(N-1)-N of the paths untested, where N is the cyclomatic complexity. (In your example N = 3: four possible paths, a basis of three, one path untested.) It just aims not to miss the paths most likely to be buggy, the way letting the coder hand-pick N paths to test typically does ;-)
Path coverage is no better than any other coverage metric; it is just a metric that shows how much of the code has been exercised. The fact that you can achieve 100% branch coverage with the test-case set (TF, FT) as well as with (TT, FF) means it is down to luck whether the bug is caught, if your exit criterion says to stop once 100% coverage is reached.
Coverage should not be the tester's focus; finding bugs should be. A test case is just a way of demonstrating a bug, and coverage is just a proxy for how much of that bug-finding activity has been done. As with all other white-box methods, striving for maximum coverage at minimum cost requires actually understanding the code, to the point where you could file a defect without a test case at all. The test case is mainly useful for regression and as documentation of the defect.
For a tester, coverage is just a hint about how much has been done; only experience really tells you how much is enough. Since experience is hard to express numerically, we fall back on other measures, i.e. coverage statistics.
Not sure whether this still makes sense to you; judging by the date, you have probably long since moved on from this question...
My recollection from McCabe's work on this exact subject is: you generate the basis paths systematically, changing one condition at a time, and only changing the last condition, until you can't change any new conditions.
Suppose we start with FF, which is the shortest path. Following the algorithm, we change the last if in the chain, yielding FT. We've covered the second if now, meaning: if there was a bug in the second if, surely our two tests were paying attention to what happened when the second if statement suddenly started executing, otherwise our tests aren't working or our code isn't verifiable. Both possibilities suggest our code needs reworking.
Having covered FT, we go back up one node in the path and change the first F to T. When building basis paths, we only change one condition at a time. So we are forced to leave the second if the same, yielding... TT!
We are left with the basis paths {FF, FT, TT}, which address the issue you raised.
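To make that concrete, here is a minimal sketch (the question's function translated from Java to C++, with an arbitrary input of 7) showing that this basis set does expose the bug, while the question's set {FF, TF, FT} would not:

#include <cassert>

// The question's returnInput, translated from Java to C++.
int returnInput(int x, bool one, bool two) {
  int y = x;
  if (one) {
    y = x - 1;
  }
  if (two) {
    x = y;
  }
  return x;
}

int main() {
  // Basis paths built as described above: start with FF, flip the last
  // decision to get FT, then flip the first decision to get TT.
  assert(returnInput(7, false, false) == 7);  // FF: passes
  assert(returnInput(7, false, true) == 7);   // FT: passes
  assert(returnInput(7, true, true) == 7);    // TT: fails, exposing the bug
  return 0;
}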
But wait, you say, what if the bug occurs in the TF case?? The answer is: we should have already noticed it between two of the other three tests. Think about it:
The second if already had its chance to demonstrate its effect on the code, independently of any other changes to the execution of the program, through the FF and FT tests.
The first if had its chance to demonstrate its independent effect going from FT to TT.
We could have started with the TT case (the longest path). We would have arrived at slightly different basis paths, but they would still exercise each if statement independently.
Notice in your simple example, there is no co-linearity in the conditions of the if statements. Co-linearity cripples basis path generation.
In short: basis path testing, done systematically, avoids the problems you think it has. Basis path testing doesn't tell you how to write verifiable code. (TDD does that.) More to the point, path testing doesn't tell you which assertions you need to make. That's your job as the human.
Source: this is my research area, but I read McCabe's paper on this exact subject a few years back: http://mccabe.com/pdf/mccabe-nist235r.pdf