Hi, when using Jenkins + Maven + SonarQube,
I found that when I delete a project in SonarQube and then re-run the Jenkins job,
the total project count in SonarQube never decreases; it just keeps incrementing.
Even after I deleted the database and ran the build again, the project count in SonarQube was not reduced.
Is there some cache in SonarQube?
How can I reset the project count?
This is likely caused by a corrupted Elasticsearch index. Shut down your server, delete the Elasticsearch data under $SONARQUBE_HOME/data (the es* folder), and restart; SonarQube rebuilds the index from the database on startup.
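A minimal sketch of that reset on Linux, assuming SonarQube is installed at /opt/sonarqube (the install path and wrapper script location vary by version and OS):

# Stop SonarQube, delete the Elasticsearch index data, restart.
# /opt/sonarqube is an assumed install path; the index folder is
# named es, es5, es6, ... depending on the SonarQube version.
/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
rm -rf /opt/sonarqube/data/es*
/opt/sonarqube/bin/linux-x86-64/sonar.sh start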
I am facing a very weird issue. I am running Cucumber JVM for a Java project on Jenkins with a fork count of 10.
I have more than 100 feature files. Sometimes some feature files (no specific ones) stop executing without giving any error or exception, so their JSON files are not created and their records are also missing from the Cucumber HTML report. It is not consistent; it happens about 1 time out of 8. Since it produces no exceptions, warnings, or errors, it is hard to track down. Does anyone know what the issue is and can help me with it?
Thanks.
Please split your execution; sometimes the JVM peaks under load. If the test cases run for a long time, this can break the Jenkins build, so run your tests 20 to 30 features at a time (at most 30 features per Jenkins build).
For example, if you have 100 features with 500 scenarios:
Build 1 - mvn test -Dcucumber.options="--tags @first0to20"
Build 2 - mvn test -Dcucumber.options="--tags @first21to40"
Build 3 - mvn test -Dcucumber.options="--tags @first41to60"
Build 4 - mvn test -Dcucumber.options="--tags @first61to80"
Build 5 - mvn test -Dcucumber.options="--tags @first81to100"
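For those tag expressions to match anything, each feature file must carry the corresponding tag. A minimal sketch of one such file (the file name and steps are made up; @first0to20 is just the grouping tag from the commands above):

# login.feature -- tagged so it runs in Build 1
@first0to20
Feature: Login
  Scenario: Valid user can log in
    Given the login page is open
    When the user submits valid credentials
    Then the dashboard is shown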
We are starting to migrate our old XAML build definitions to VSTS web-based build definitions. For each branch, we have a debug build definition and a release build definition. The debug build definition is set up as a continuous integration build. As a test, I modified one source file and checked it in. The old XAML build definition checked out the one file and seemed to have built only the project that changed (what we want and expect in a CI build). In the XAML build log I see the following:
<InformationField Name="Message" Value="1 file(s) were downloaded with a total size of 0.29 MB." />
and it ran the build in 3.3 minutes.
In the new VSTS build, I see that it does a tf get /version:170936 and gets all the files in changeset 170936:
2018-06-12T15:08:39.8409262Z Checking if artifacts directory exists: C:\BuildAgent\agent2\_work\1\a
2018-06-12T15:08:39.8409262Z Deleting artifacts directory.
2018-06-12T15:08:39.8409262Z Creating artifacts directory.
2018-06-12T15:08:39.8564882Z Checking if test results directory exists: C:\BuildAgent\agent2\_work\1\TestResults
2018-06-12T15:08:39.8564882Z Deleting test results directory.
2018-06-12T15:08:39.8564882Z Creating test results directory.
2018-06-12T15:08:39.8877401Z Starting: Get sources
2018-06-12T15:08:39.9033640Z Entering TfvcSourceProvider.PrepareRepositoryAsync
2018-06-12T15:08:39.9033640Z localPath=C:\BuildAgent\agent2\_work\1\s
2018-06-12T15:08:39.9033640Z clean=False
2018-06-12T15:08:39.9033640Z sourceVersion=170936
2018-06-12T15:08:39.9033640Z mappingJson={"mappings":[{"serverPath":"$/Path/To/Branch","mappingType":"map","localPath":"\\"}]}
2018-06-12T15:08:39.9033640Z Syncing repository: Project Name (TFVC)
2018-06-12T15:08:39.9033640Z workspaceName=ws_1_45
2018-06-12T15:09:06.7318304Z Workspace Name: ws_1_45;a6060273-b85e-4d4b-ac63-3fbbcafc308b
2018-06-12T15:09:06.7630780Z tf get /version:170936
2018-06-12T15:09:21.6070136Z Getting C:\BuildAgent\agent2\_work\1\s;C124440
2018-06-12T15:09:21.6070136Z Getting C:\BuildAgent\agent2\_work\1\s;C124440
2018-06-12T15:09:21.6226405Z Getting C:\BuildAgent\agent2\_work\1\s\.nuget;C158533
2018-06-12T15:09:21.6226405Z Getting C:\BuildAgent\agent2\_work\1\s\Build Scripts;C141602
2018-06-12T15:09:21.6226405Z Getting C:\BuildAgent\agent2\_work\1\s\Databases;C124440
~
~ The rest of branch...
~
and then it seems to rebuild all projects, taking 13.2 minutes to run, almost 10 minutes longer than the old XAML build.
Am I missing something with the new build definition? I do not have the "Clean" checkbox checked in the VS Build task. I do have a build.clean variable, but it is blank by default; sometimes we want to clean, so we just set it to "all" at queue time.
Clicking About on the web portal shows the following MS VSTFS version: 15.105.25910.0
Any help is much appreciated.
This can happen with multiple build agents, even though you have not checked the Clean checkbox of the VS Build task and the clean option in the Repository settings is set to "false".
Since the agent is picked at random, it may not contain your source code from a previous build. That's why you see TFS do a tf get /version:170936, get all the files in changeset 170936, and build all projects.
For multi-configuration builds, see the Options tab of the build definition and refer to the official article: How do I build multiple configurations for multiple platforms?
With that enabled, the build is split into one run per configuration.
Once your build definition successfully builds multiple configurations, you can enable parallel builds to reduce the duration/feedback time of your build; you specify this as an additional option on the Options page.
Take a look at this post Building multiple configurations and/or platforms in Build 2015
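As a sketch, the setup from that article boils down to defining two variables and listing them as multipliers (the values below are examples):

Variables tab:
  BuildConfiguration = debug, release
  BuildPlatform = any cpu
Options tab:
  Multi-configuration -> Multipliers: BuildConfiguration, BuildPlatform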
To narrow down whether the issue is related to multiple build agents, you can also send the TFS build to a specific agent via a demand, and queue the build several times with clean=false on the same agent; it should then build only the project that changed (CI).
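For example, you can pin the build to one agent by adding a demand on the build definition's General tab (the agent name below is a placeholder):

Agent.Name equals BuildAgent01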
I am new to Jenkins. I am trying to deploy PHP code to a production server via the "Publish Over SSH" plugin. I enabled it in "post-build actions". Everything is fine, but the transfer is too slow (2 hours for a 40MB transfer).
Here is the scenario:
The entire project is set up locally; the total size is nearly 700MB.
All code is pushed to Bitbucket.
I configured a build in Jenkins with "Send build artifacts over SSH" as a post-build option. Inside the transfer set I added **/*.* for the source files option.
It is taking hours and hours to transfer the entire project; within 2 hours it transferred only 140MB.
Is this normal? Do I need to change any settings? The network connection between the server running Jenkins and the production server is fine.
"rsync over ssh " solved the problem of code transfer to production server.Now its taking only 2-3 seconds for a build.
Yes, 2 hours for 40MB is unexpectedly slow. However, it is not uncommon to get abysmally slow archiving of artifacts from agent to master.
Below are links to the 2 open tickets I know of that track this. Many others have been reported and closed over the last decade, but in my environment I'm getting ~13Mbps transfers despite the 10Gbps links between all nodes in our Jenkins cluster.
JENKINS-18276
JENKINS-63517
This isn't directly related to your question of sending artifacts via SSH, but it may help others who come upon this while trying to reduce the time it takes to archive artifacts in general. I used the Pipeline Utility Steps plugin to zip everything I wanted to archive into a single zip file, then unzipped it when I wanted to un-archive it later.
That reduced the time by about 50x for me, presumably because there was just 1 file to transfer/archive/unarchive instead of a bunch of small files, and zipping/unzipping took less than 1 second in my case, much less time than archiving the individual files separately.
// Zip the workspace contents and archive the single zip file.
zip zipFile: 'artifacts.zip'
archiveArtifacts 'artifacts.zip'
then
// Later: copy the archived zip from the chosen build and unpack it.
copyArtifacts(filter: '**/*', projectName: currentBuild.projectName, selector: specific(params.archived_build_number))
unzip zipFile: 'artifacts.zip', quiet: true
The zip function also has a parameter to archive the zip, but I showed it separately here to be explicit. Again, nothing about this is specific to your SSH scenario, but I thought it would be useful to post it anyway.
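For reference, that combined form looks like this (equivalent to the two separate steps above):

// Zip and archive in one step via the zip step's archive parameter.
zip zipFile: 'artifacts.zip', archive: true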
I'm a little new to Jenkins, and I can't seem to figure this out. I have access to a Jenkins server that uses slaves to perform build jobs.
If a build fails, it stores a generated zip archive in a persistent workspace directory for further debugging. The zip file is generated by a Python script that keeps only the last 3 failed builds' archives to conserve disk space (i.e. 3 failed builds result in 3 archives in the folder, but a fourth failed build deletes the oldest archive before adding the new one).
What I'm trying to do is add a download link to a failed Jenkins run so users can quickly download the zip file generated for that build. But I'm really confused about how to approach this!
So I guess the question is: how can I add, to a Jenkins run page, a download link to a file generated during that run if it fails?
Example usage scenario:
1. I build some code :)
2. It fails :(
3. I download the zip file (from the Run page) with the generated debug files and find the fix :)
4. Space doesn't get filled up as zip files are kept only for the last 3 builds!
Any help would be greatly appreciated! Thank you! I'm happy to provide more information if needed ^^ I am currently trying to use a system Groovy script to do this, but perhaps artifacts would be more appropriate? I really can't seem to find good documentation on this!
There are built-in methods in Jenkins to support this workflow:
you can archive any artifact (in this case the zip) as a post-build step
the data retention strategy can be configured in the job via Discard old builds (Advanced)
to send customized emails on build failure with an embedded download link, look at the Email Ext Plugin; it allows you to configure individual texts for, e.g., build failures, where you can add a link to download the artifact.
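If you can use a Pipeline job instead of a system Groovy script, here is a minimal sketch tying those pieces together (debug.zip stands in for whatever name your Python script produces, and the build step is a placeholder):

pipeline {
    agent any
    options {
        // Keep artifacts for only the last 3 builds, mirroring the
        // "keep the last 3 failed archives" behavior of the script.
        buildDiscarder(logRotator(artifactNumToKeepStr: '3'))
    }
    stages {
        stage('Build') {
            steps {
                sh './build.sh' // placeholder for the real build
            }
        }
    }
    post {
        failure {
            // Attach the debug zip to this run; it then appears as a
            // download link on the build's page.
            archiveArtifacts artifacts: 'debug.zip', allowEmptyArchive: true
        }
    }
}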
Building my solution succeeds, but the build fails while copying from the build to the drop location. I get an error like this:
TF270002: An error occurred copying files from 'E:\Workspace' to '\\server\drop location_20101026.25'.
Details: Access to the path '\\server\drop location_20101026.25\_PublishedWebsites\website\bin\somecompiled.dll' is denied.
This happens as part of a continuous integration build (as well as in several other types of build that I've tried). The build definition is a copy of a definition that has worked for several months now, running on TFS 2010.
Yep, I was right: I did this to myself by putting the copy-to-drop-location task inside the parallel foreach loop, so the copy ran 3 times concurrently, which caused these access-denied errors.