Number of lines of code - SonarQube licensing - Jenkins

With the following configuration settings in a Jenkinsfile (Groovy) for sonar-project.properties:
1) sonar.projectKey=MyProject-${BUILD_NUMBER}
and
2) sonar.projectName=MyProject-${BUILD_NUMBER}
where ${BUILD_NUMBER} is the Jenkins build number,
a new project is created on the SonarQube server for every new ${BUILD_NUMBER}.
1) With such a naming convention, is there an impact on the licensing of SonarQube v6 in terms of lines-of-code coverage? If yes, does using sonar.projectName=MyProject and sonar.projectKey=MyProject instead help resolve the licensing issue?
2) Does the above approach consume more space in the SonarQube database or other resources on the SonarQube server?
3) How can all these projects be deleted in one go on the SonarQube server, if needed?

If you change the project key for every analysis, then each analysis will be treated as a new project, adding lines of code until you reach your license limit.
It will also use more space in the database.
To delete all projects, you can go to Administration > Projects > Management (at least with SonarQube 7.7) and do a bulk delete.
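To avoid the problem in the first place, keep the key and name stable and put the build number in sonar.projectVersion if you want to track it. A minimal Jenkinsfile sketch, assuming a Maven project and the SonarQube Scanner for Jenkins plugin; 'MySonarServer' is a hypothetical server name configured in Jenkins:

    // Jenkinsfile sketch: stable projectKey/projectName so every build
    // updates the same SonarQube project instead of creating a new one.
    // 'MySonarServer' is a hypothetical server name configured in Jenkins.
    node {
        checkout scm
        withSonarQubeEnv('MySonarServer') {
            sh '''
                mvn sonar:sonar \
                  -Dsonar.projectKey=MyProject \
                  -Dsonar.projectName=MyProject \
                  -Dsonar.projectVersion=$BUILD_NUMBER
            '''
        }
    }

For cleaning up the projects already created, recent SonarQube versions also expose a bulk-delete Web API (POST api/projects/bulk_delete); check the Web API documentation for your version.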

Related

TFS - mapping Source Control Folder to VSO directory from on-prem installation?

The Challenge
- We have an on-prem application (RM) that cannot communicate with VSO
- Our code must stay in VSO
- We need to give RM access to build definitions that are tied to our VSO source

Attempted Solution
- Install TFS locally to host build definitions only
- Have those build definitions pull from VSO

Problem
It looks like we can't use a VSO project directory in the Source Control Folder mapping.
Is there a workaround?
Your best bet is to start migrating to the new Release Management service in VSTS, since the existing Release Management Server application is rapidly being deprecated. There are tools available to help ease the pain of migration.
You could also use RM Server in non-integrated mode -- nothing would be tied to build definitions, and you'd have to specify the path to the build drop manually when queuing the release. It can still be automated via the ReleaseManagementBuild.exe utility in the Release Management client folder; it would just take a bit more effort to build it out.
You could also build a custom build process template to pull the code from VSTS and build it, but again, that's investing a lot of effort in RM server.
[Full disclosure: I am a contributor to the migration tool linked above]

Set up Team Foundation Server Build service to do automatic builds and testing

Our plan is to use the Team Foundation Build service to do automatic builds, then use the testing facility to automatically perform testing on the build server, and then release that build onto the application server.
So far we have:
Team Foundation Server with TF Build Controller configured
Build server with win2012, Visual Studio 2013 and Build agent configured.
SQL Server with SQL 2013 installed
Application Server with Win2012 and the .NET Framework installed
My question is: what do I need to do to set up automatic builds and to execute the unit test harness once compilation is successful?
Also, the deployment target machine will initially be DEV; however, we would like to quickly build for the test and prod environments as well.
This is what I got so far.
Build Controller (Already set up I believe)
Build Agent (Already installed on build server)
Build Process Template (Do I need to do anything with this? Is this what controls the whole lot?)
Team Build Definition (I had a look at this, and it seems to use the build process template)
Drop Folder (I assume this is where the executables will be dropped).
At the moment I have bits and pieces of information; what I would like to know is how this whole thing hangs together, from the moment the developer kicks off a build to the moment the EXE is placed onto DEVAPPSERV (the development application server).
Is anyone able to point me in the right direction or give a summary of what I need to make this happen?
Many thanks,
Dalibor
1) Install TFS Server (TFS disk); create a Team Project Collection and any desired projects.
2) Install the TFS Controller + Agents onto a dedicated machine (TFS disk); configure only the build options if it is on a different machine to the TFS Server.
3) Configure the Build Controller to connect to a specific Team Collection on your TFS Server.
4) Install VS Premium or higher on the build machine if you want code coverage results for your tests.
5) Add some code to TFS Source Control.
6) Create a Build Definition using the default template.
7) Configure the build definition:
- Set the working folder for the build; include only what you need, as this will speed up the process.
- Point the definition to your .sln or proj file.
- Ensure testing is enabled and that your test assembly names match the regex used to identify test DLLs, i.e. name your test assemblies with the word "test".
- Set the trigger to CI or whatever flavour of build you require, e.g. a gated build.
- Save the build definition.
8) Trigger a manual build and debug any issues.
After that you should have the basics done and a repeatable build created.
That should cover the basics. You may want to customise the build template (see Ewald Hoffman's guide for tips), and you may want to narrow down your code coverage (look for runsettings file info).
If you follow these steps you should be able to get a basic build created and running; if you hit any issues you can come back and ask specific questions about a particular area.
In order to do automatic builds you should check the CI build option (under the trigger build options), and third-party automated testing can be executed by a post-build script.
See the following TFS article about post-build scripts.
http://msdn.microsoft.com/en-us/library/dn376353.aspx

Jenkins + Tycho: propagating update sites

I'm wondering if there is an easy way to "publish" p2 update sites in Jenkins (built with Tycho) so that they can easily be accessed in downstream jobs? Currently I'm doing it semi-manually using Jenkins support for copying artifacts between jobs, and then specifying a repository-mirror element in a job-specific settings.xml which refers to the artifacts copied into the job, but this is all a little tricky and requires configuring jobs and build settings in a number of different places.
Is there any nicer way short of using an external solution such as Artifactory?
The only solution involving a repository manager that I am aware of is to use a Nexus and the Unzip Plug-in. (Disclaimer: The Unzip Plug-in is provided by the Tycho project, of which I am a committer.)
With such a setup, you could have one job deploy an update site to Nexus, and the next job use the update site via the unzip URL of the deployed site. Example: If the site was deployed under the GAV project.abc:site:1.0.0-SNAPSHOT, you could then access it via http://<nexus>/content/repositories/<unzip-repo-name>/project/abc/site/1.0.0-SNAPSHOT/site-1.0.0-SNAPSHOT-unzip/.
Note that you are slightly less flexible with such a setup than with what you have now: you need to have a version number for what your upstream project is building, so this may become tricky if you have multiple feature branches developing towards the same release version.
If you don't need this, you have the benefit of getting a portable build of your downstream project, i.e. developers build the project in the same way as your Jenkins does.
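For illustration, here is a minimal sketch of how the downstream project's pom.xml might consume the deployed site as a p2 repository via the unzip URL. The Nexus host and repository name below are hypothetical; the GAV matches the example above:

    <!-- Downstream pom.xml fragment: the unzip URL of the deployed update
         site is used as an ordinary p2 repository. Host and repository
         name are example values. -->
    <repositories>
      <repository>
        <id>upstream-update-site</id>
        <layout>p2</layout>
        <url>http://nexus.example.com/content/repositories/unzip-repo/project/abc/site/1.0.0-SNAPSHOT/site-1.0.0-SNAPSHOT-unzip/</url>
      </repository>
    </repositories>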

Continuous Integration Clarification

I work in a team which maintains a Java website and back-end Java jobs and shell script jobs.
After all developers complete their updates, only the relevant ones are committed to the source control system.
Later, Ant build scripts are run and WAR files are generated.
Along with these WAR files there will generally be shell scripts etc. to be copied to QA/PROD.
Then one fine day a team called the release management team will transfer the code from our dev environment to QA/PROD.
Recently I came across Continuous Integration systems like Jenkins/Hudson.
Can these tools build all the changes committed and automatically transfer my code to QA/PROD?
BTW, I work in an AIX server environment and use Tomcat as the container.
I am more curious whether the tool will be able to copy my code to QA/PROD.
Please clarify.
The answer is almost certainly yes, depending on your particular setup for copying the code. There are a large number of plugins for this purpose on the appropriate Jenkins wiki page. You should be able to find something there for your needs.
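As a rough illustration, here is a minimal Jenkins pipeline sketch that builds with Ant and copies the resulting WAR to a Tomcat webapps directory over SSH. The Ant target, host names, and paths are hypothetical; a classic freestyle job with a deployment plugin would achieve the same thing:

    // Jenkinsfile sketch: poll source control, build with Ant, then copy
    // the WAR to the QA Tomcat server. Hosts, paths, and the Ant target
    // are example values only.
    pipeline {
        agent any
        triggers {
            pollSCM('H/5 * * * *')   // check for new commits every few minutes
        }
        stages {
            stage('Build') {
                steps {
                    sh 'ant clean war'   // assumes an Ant target producing dist/mysite.war
                }
            }
            stage('Deploy to QA') {
                steps {
                    sh 'scp dist/mysite.war deploy@qa-host:/opt/tomcat/webapps/'
                }
            }
        }
    }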

"Execute decorators" phase takes forever

I was just analyzing our (1 main / 3 sub) project and wanted to analyze the code with my local Sonar server by typing mvn sonar:sonar (after cleaning and packaging the project(s)).
It successfully analyzes the EJB project, but the Execute decorators ... phase takes forever to complete (around half an hour). This makes the analysis of the project very slow. What is going on in that phase and how can I improve the speed?
Best regards,
Sebastian
Versions used:
Maven 3.0.3
Sonar 2.10
According to this, it could be linked to using Derby; the only proposed solution is using a stronger database instead.
Following comments from sinbadblue, here are links to discussions with answers from Sonar team members which suggest two known reasons for the Execute decorators phase to be slow:
- Using Derby
- Having the database server on a different network from the analyzer
Here are the links:
2010 http://comments.gmane.org/gmane.comp.java.sonar.general/4902
2011 http://sonar.15.x6.nabble.com/Sonar-slow-in-quot-Execute-Decorators-quot-td3187847.html
2012 http://sonar.15.x6.nabble.com/Sonar-analysis-remains-on-Execute-Decorators-for-Net-Applications-tp4514700p4515249.html
The database is not always the issue, but these two things should definitely be checked before investigating further.
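If you are on the embedded Derby database, switching to a real database server is configured in conf/sonar.properties. A minimal sketch assuming a MySQL server; the host, schema, and credentials are example values:

    # conf/sonar.properties sketch: point Sonar at a real database server
    # instead of embedded Derby. Host, schema, and credentials are examples.
    sonar.jdbc.url=jdbc:mysql://dbhost:3306/sonar?useUnicode=true&characterEncoding=utf8
    sonar.jdbc.username=sonar
    sonar.jdbc.password=sonar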