kcov vs lcov vs raw performance? - code-coverage

Can anyone give me some info on the relative performance of code running under the following conditions:
Just compiled
Compiled with --coverage
Running under kcov
Am I going to need twice as long to run my test suite if I integrate a code coverage tool like gcov or kcov?

My experience with this is as follows, but note that actual results will probably depend heavily on your code.
Running code compiled with '--coverage' is about half the speed of the plain build.
Running under kcov is significantly slower (roughly 6x-10x) than the plain build.
So what I'm doing is:
For a lot of runs, or anything I know takes some time, use '--coverage' then gcovr/lcov.
For a one-off run of a shortish executable, use kcov.
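A rough way to check these ratios on your own workload is to time a plain build against a '--coverage' build. This is only a sketch (the toy loop stands in for your real test binary, and kcov is left commented out since it may not be installed); the exact numbers depend heavily on the code.

```shell
# Toy workload; replace with your real test binary.
cat > demo.c <<'EOF'
#include <stdio.h>
int main(void) {
    long s = 0;
    for (long i = 0; i < 10000000; i++) s += i;   /* some work to time */
    printf("%ld\n", s);
    return 0;
}
EOF

if command -v gcc >/dev/null 2>&1; then
    gcc -O0 -o demo_plain demo.c
    gcc -O0 --coverage -c demo.c -o demo.o   # instrumented object (writes demo.gcno)
    gcc --coverage demo.o -o demo_cov
    ./demo_plain >/dev/null                  # wrap these two runs in `time` to compare
    ./demo_cov   >/dev/null                  # running this also writes demo.gcda
    gcov demo.c                              # text report; or: gcovr -r . / lcov --capture -d .
fi
# kcov needs no special build, just: kcov out-dir ./demo_plain
```

Note that kcov works on the unmodified binary, which is exactly why it is convenient for one-off runs despite being slower.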

Related

How to build an embedded Atmel Studio project on Debian through the command line

So, I am working on an embedded project for a Cortex-M7 microcontroller (ATSAME70Q21). The code is written in Atmel Studio 7, but I want to build it in a Debian environment through Docker (the gcc Docker image is Debian buster based, if I'm not mistaken) so that I can work in a continuous integration workflow.
At the moment I am trying to manually construct a Makefile based on the makefile generated by the IDE, but that seems like the wrong way to handle this problem. Maybe I am too tunnel-visioned to notice different solutions, so I would like some help from folks who may have struggled with this problem before.
Thanks in advance.
I solved this problem the following way, by mimicking the Atmel Studio output in a CMakeLists file.
First I analyzed the generated makefile from the debug build to discover what files were built, what compiler flags were used and what programs were called.
Then I compared the generated makefile from the release build with the debug build to discover the differences.
With this information, I made a CMake file. For now, I GLOB_RECURSE all my source files, but I could crawl the Atmel Studio *.cproj file to find out which files are required.
This might not be the ideal answer, but it solves my problem.
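For reference, a minimal sketch of what such a CMakeLists.txt could look like. Every flag, path, and target name below is a placeholder standing in for whatever the IDE-generated makefile actually reports; a toolchain file pointing CMAKE_C_COMPILER at arm-none-eabi-gcc would complete the picture.

```cmake
cmake_minimum_required(VERSION 3.13)
project(firmware C)

# Quick and dirty: glob everything; crawling the *.cproj would be stricter.
file(GLOB_RECURSE SOURCES "src/*.c")
add_executable(firmware.elf ${SOURCES})

# Flags copied from the Atmel Studio debug makefile (placeholders):
target_compile_options(firmware.elf PRIVATE
    -mcpu=cortex-m7 -mthumb -ffunction-sections)
target_link_options(firmware.elf PRIVATE
    -mcpu=cortex-m7 -mthumb -Wl,--gc-sections)
```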

How to improve PHPUnit test runs? By freezing the code somehow, for example

I am working with a codebase that has a lot of tests (I suppose): about 6000 of them, and running them costs 1m 30s. You may think that is nothing, but during test execution you can do nothing with your code, because PHPUnit doesn't seem to freeze the tested code or the test code (which is understandable), and if you change anything during test execution some tests may fail. I have the whole project in a Docker container with shared folders set by the -v parameter.
I have imagined something like this:
1) PHPStorm runs the tests;
2) Folder sharing between Docker and the host stops during test execution, to preserve the code state at the moment the tests started;
3) Test execution completes, any errors are shown, and folder sharing comes back.
Is it possible?
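One way to get the effect of step 2 without touching Docker's folder sharing is to snapshot the tree when the run starts and point the container at the copy. A sketch, where the paths, the `src` directory, and the image name are all placeholders for your own setup:

```shell
SNAP=/tmp/code_snapshot
rm -rf "$SNAP" && mkdir -p "$SNAP"

# Stand-in for your project tree:
mkdir -p src && printf '<?php // demo\n' > src/demo.php

cp -a src "$SNAP/"    # freeze the code at the moment the tests start

# Run the suite against the snapshot instead of the live -v mount:
# docker run --rm -v "$SNAP/src":/app/src my-php-image vendor/bin/phpunit
```

Edits you make in the live tree during the run then cannot affect the suite, since it only sees the copy.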

Erlang OTP app using rebar, dev environment

I am in the process of learning Erlang OTP and rebar, and I have put together a small example app using a couple of dependencies: cowboy and lager.
I have issued
rebar get-deps
rebar compile
And things went smoothly. Now I want to fire up my console to test things out, but it is not obvious to me how to start the dependency applications.
I tried issuing a
rebar generate
in order to get all the orchestration of firing up the apps, even though it's overkill for just development tests, but I failed miserably, getting the following dump:
Crash dump was written to: erl_crash.dump
eheap_alloc: Cannot allocate 1459620480 bytes of memory (of type "old_heap").
Aborted
The ebin dir only has beam files for the app I wrote, not the dependencies. I see the dependencies have their own ebin directories inside the main app's deps directory; how would I go about having them available in a console so I can start them up?
I would appreciate if someone can shed some light as to what the common practice is for the dev env with multiple OTP apps.
I have read a couple of tutorials but they are mostly targeted at the rebar release cycle and not the development process.
TIA
In your case, the modules you pull into the deps directory should typically be called from within your application code - and your application can typically be invoked from the Erlang shell using the application:start/1 function. If you haven't yet, I strongly suggest that you read Chapter 12, "OTP Behaviors", of Francesco Cesarini's excellent book Erlang Programming - it's a great practical introduction to what you're attempting.
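Concretely, for a rebar (rebar2-era) layout the usual dev trick is to put your own ebin and every deps/*/ebin on the code path when starting the shell. A sketch, wrapped in a tiny launcher script; the application names are placeholders for whatever `rebar get-deps` fetched:

```shell
cat > dev-console.sh <<'EOF'
#!/bin/sh
# -pa prepends each ebin to the code path, so the shell can find your
# modules and every dependency's modules.
exec erl -pa ebin -pa deps/*/ebin
EOF
chmod +x dev-console.sh

# Then, inside the Erlang shell started by ./dev-console.sh:
#   application:start(lager).
#   application:start(cowboy).
```

Note that application:start/1 does not start an app's own dependencies for you, so you may have to start those (e.g. ranch for cowboy) first, in order.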

.less CSS watch feature only works locally?

According to http://fadeyev.net/2010/06/19/lessjs-will-obsolete-css/ I should be able to set up a "watch" feature for less.
I mainly work directly on remote servers, either by opening files directly over FTP or by using a server found through a network.
Will this still work? Or do the files have to be local to be "watched"?
I'm using Windows if that makes a difference.
Many thanks
The watch feature is something that happens on your own local development computer. You could run it on your server, but it would have to run constantly in the background, so it's probably not the best option. The watch option is not a feature of less.js, but of other LESS CSS compilers. A compile operation is usually a one-time operation, i.e. you call the compiler, it compiles and returns you to the shell prompt. With the -w or --watch switch, your LESS compiler will watch your specified .less file(s) and convert them as soon as they change.
This watch feature is a design-time live compiler option, whereas with less.js your .less files are converted at run-time. Another alternative is a compile-time operation, where you invoke a LESS compiler as part of a build step (like with Ant).
Ruby
If you do gem install less with Ruby installed, you get the old command-line Ruby compiler. It isn't maintained by Cloudhead anymore, so it's mostly unsupported and doesn't get any new features. When you run it, you can call lessc input.less output.css -w. Without the -w switch at the end, lessc will compile the file one time and return you to your prompt. With the -w switch, it will keep watching the file for changes and recompile it each time you edit it.
.NET
If you have DotLessCSS, (you're probably on windows) you can type dotless.Compiler input.less output.css --watch which does the same thing.
PHP
If you are using LESSPHP, you can also call that from the command line with plessc -w input.less output.css, again, the -w will do the same thing.
Mac
If you are on a Mac, you can use LESS.Air. Specify which files you want the app to look at, tell it to keep watching those files, and it will compile in the background without the command line.
Air
On Windows, Mac or Linux, you can use this LESS parser, which is a clone of LESS.Air. It works the same way, but is cross-compatible and uses less.js under the hood.

Delphi 2009 command line compiler using dcc32.cfg?

In Delphi 2009, how can I build a project using the command line? I tried using the command line compiler and supplying -a -u -i -r in the dcc32.cfg file, but the compiler is not recognizing the paths and throws the error "required package xyzPack is not found".
-aWinTypes=Windows;WinProcs=Windows;DbiProcs=BDE;DbiTypes=BDE;DbiErrs=BDE
-u"C:\MyProj\Output\DCP"
-i"C:\MyProj\Output\DCP"
-r"C:\MyProj\Output\DCP"
and on the command line I execute:
dcc32 "C:\MyProj\MyProject.dpr" -B -E"c:\MyProj\Output\EXE"
What am I doing wrong here?
Thanks & Regards,
Pavan.
Instead of invoking the compiler directly, consider using MSBuild on your .dproj, since that's what the IDE uses. Delphi MSBuild Build Configurations From Command Line might help you with that.
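For what it's worth, a sketch of such an MSBuild wrapper. The RAD Studio path shown is the typical one for Delphi 2009 (RAD Studio 6.0 under CodeGear); the project name comes from the question above and the configuration name is a placeholder.

```shell
# Generate a small Windows batch wrapper (to be run on the dev machine):
cat > build.cmd <<'EOF'
@echo off
rem rsvars.bat sets BDS, PATH and the MSBuild variables the IDE uses.
call "%ProgramFiles%\CodeGear\RAD Studio\6.0\bin\rsvars.bat"
msbuild MyProject.dproj /t:Build /p:Config=Debug
EOF
```

Building the .dproj this way picks up the search paths and output directories stored in the project file, which sidesteps the dcc32.cfg issue entirely.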
From the related answer (shown below), i.e.:
Compiling with Delphi 2009 from a command line under Windows Vista 64-bit
I notice that you should be able to build a single package from the command line this way. I have used batch files (buildall.cmd) to launch dcc32, and have not yet used MSBuild.
I have ultimately found both approaches frustrating, and have instead decided to build a little GUI shell (a lite version of Final Builder, if you like) that basically works as a semi-graphical, semi-command-line way of automating my builds and filtering the compiler output. I would be highly interested in anyone else's experiences with "tinder box" (daily or even continuous build) operations with Delphi.
You may end up where I'm heading... just buy Final Builder. :-)