I have created a separate Sonar dashboard (via "customize dashboard"), and I have different types of projects. I want to analyse the code with SonarQube and have a different dashboard for each project type.
I can see that all projects end up on the default Sonar dashboard, but how can I separate them when I run the analysis from Jenkins using the Sonar Runner? Is there a Sonar property for this?
Thanks
It sounds like you believe it is possible to choose, at analysis time, which dashboard will be used. If so, that is not how it works.
I use different dashboards showing different widgets and measures in a different order, each specialised in a technology: I don't look at the same metrics for Java as for Cobol. However, these dashboards are available for all your analyses. You could instead have a specific user (or different groups of users) per technology, so that when they log in they only see the dashboards specific to their group.
Hope it helps, but it's true that your question is not clear.
Regards.
My employers recently started using Google Cloud Platform for data storage/processing/analytics.
We're EU based so we want to restrict our Cloud Dataflow jobs to stay within that region.
I gather this can be done on a per-job / per-job-template basis with --region and --zone, but I wondered (given that all our work will use the same region) whether there's a way of setting this more permanently at a wider level (project or organisation)?
Thanks
Stephen
Update:
Having pursued this, it seems that Adla's answer is correct, though there is another workaround (which I will post as an answer). Further to this, there is now an open issue with Google about this, which can be found/followed at https://issuetracker.google.com/issues/113150550
I can provide a bit more information on things that don't work, in case that helps others:
Google support suggested changing where the Dataprep-related folders are stored, as per How to change the region/zone where dataflow job of google dataprep is running - unfortunately this did not work for me, though some of those responding to that question say it worked for them.
Someone at my workplace suggested restricting Dataflow's quotas for non-EU regions (https://console.cloud.google.com/iam-admin/quotas) to funnel it towards the appropriate region, but when tested, Dataprep continued to favour the US.
Cloud Dataflow uses us-central1 as the default region for each job; if the desired regional endpoint differs from the default, the region needs to be specified on every Cloud Dataflow job you launch for it to run there. Workers are automatically assigned to the best zone within the region, but you can also specify one with --zone.
As of this moment it is not possible to force the region or zone used by Cloud Dataflow based on the project or organization settings.
I suggest you request a new Google Cloud Platform feature. Make sure to explain your use case and how this feature would be useful for you.
As a workaround, to restrict Dataflow job creation to a specific region and zone, you can write a script or application that only creates jobs with the region and zone you need. If you also want to limit job creation to that script alone, you can remove your users' job-creation permissions and grant that permission only to a service account used by the script.
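For illustration, here is a minimal sketch of such a launcher script; the project, bucket, template path and job parameters are hypothetical, and it relies on the Dataflow templates.launch API so that every job it creates is pinned to an EU regional endpoint and zone:

# Sketch of a launcher that only ever creates Dataflow jobs in an EU region/zone.
# Project, bucket, template path and job parameters below are hypothetical.
from googleapiclient.discovery import build  # pip install google-api-python-client

PROJECT = "my-project"
REGION = "europe-west1"    # regional endpoint every job is pinned to
ZONE = "europe-west1-b"    # worker zone within that region
TEMPLATE = "gs://my-bucket/templates/my-template"


def launch_job(job_name, parameters):
    """Launch a templated Dataflow job, always in the configured EU region."""
    dataflow = build("dataflow", "v1b3")  # uses Application Default Credentials
    body = {
        "jobName": job_name,
        "parameters": parameters,
        "environment": {"zone": ZONE},
    }
    request = dataflow.projects().locations().templates().launch(
        projectId=PROJECT,
        location=REGION,   # forces the regional endpoint
        gcsPath=TEMPLATE,
        body=body,
    )
    return request.execute()


if __name__ == "__main__":
    print(launch_job("eu-only-job", {"input": "gs://my-bucket/input.csv"}))

Only the service account running a script like this would then need the job-creation permission, as described above.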
A solution Google support supplied to me, which basically entails using Dataprep as a Dataflow job builder rather than a tool in and of itself:
Create the flow you want in Dataprep, but if there's data you can't send out of region, create a version of it (sample or full) where the sensitive data is obfuscated or blanked out & use that. In my case, setting the fields containing a user id to a single fake value was enough.
Run the flow
After the job has been executed once, in the Dataprep web UI under “Jobs”, use the three dots on the far right of the desired job and click “Export results”.
The resulting pop up window will have a path to the GCS bucket containing the template. Copy the full path.
Find the metadata file at the above path in GCS
Change the inputs listed in the files to use your 'real' data instead of the obfuscated version
In the Dataflow console, in the menu for creating a job using a custom template, indicate the path copied above as the “Template GCS Path”.
From this menu, you can select a zone you would like to run your job in.
It's not straightforward but it can be done. I am using a process like this, setting up a call to the REST API (sketched below) to trigger the job in the absence of Dataflow having a scheduler of its own.
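A rough sketch of what such a REST trigger might look like (the project, region and template path are placeholders for the values taken from the “Export results” dialog, not my actual values):

# Rough sketch of triggering the exported template via the Dataflow REST API
# (e.g. from cron), since Dataflow has no scheduler of its own. The project,
# region and template path below are placeholders.
import google.auth
import google.auth.transport.requests
import requests

PROJECT = "my-project"
REGION = "europe-west1"
TEMPLATE_PATH = "gs://my-bucket/path-from-export-results/template"


def trigger_job(job_name):
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"])
    credentials.refresh(google.auth.transport.requests.Request())
    url = (f"https://dataflow.googleapis.com/v1b3/projects/{PROJECT}"
           f"/locations/{REGION}/templates:launch")
    response = requests.post(
        url,
        params={"gcsPath": TEMPLATE_PATH},
        headers={"Authorization": f"Bearer {credentials.token}"},
        json={"jobName": job_name, "parameters": {}},
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(trigger_job("scheduled-dataprep-flow"))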
I would like to set up a Jenkins based CI system, where the job histories are dynamically managed based on the parameters coming from webhook triggers.
Currently, I can only trigger specific jobs, perhaps with some filters applied, but this does not handle jobs dynamically.
I aim for a solution, where a parameter (or group of parameters) identifies a job with its own history. If the job history does not exist, it is created automatically.
In the end, I would like to somehow mimic the behavior of the GitHub Pull Request plugin. The problem with it is that it is tightly coupled with GitHub, but I need a more generic solution.
I see two marginally different solutions here:
Manage jobs
Jobs are managed based on the build parameters, and are dynamically created and deleted.
Filter builds
The job remains a single merged job containing all the pull requests for all the branches, with some UI feature able to filter the different histories out of it based on the parameters.
I do not know whether any of this is achievable with currently available Jenkins plugins, or whether I have to implement something from scratch.
Thank you for any answers!
Actually, I was looking for the Multibranch Pipeline approach; I just didn't know about it yet.
It does exactly what I described, for GitHub and BitBucket.
I was lucky, as my target was BitBucket.
I'm working on implementing TFS for numerous teams and am looking for a way to monitor TFS in terms of how many distinct users, builds run, work item totals, collections/projects/teams, and more, preferably with daily/weekly/monthly metrics. I've found some solutions by querying the SQL database, but am curious whether there are any extensions or solutions others have found to monitor the usage of a TFS instance, as well as any GUIs that help with visualization.
There is no comprehensive tool or extension that achieves all of that.
For a specific team project, you can add widgets to a dashboard to monitor its status:
Widgets smartly format data to provide access to easily consumable data. You add widgets to your team dashboards to gain visibility into the status and trends occurring as you develop your software project. Each widget provides access to a chart, user-configurable information, or a set of links that open a feature or function.
For example, for builds just specify the particular build definition; for work items you can create queries and specify the query when configuring a widget.
Actually, you can retrieve most of the information via the REST API.
e.g.: Get Builds - List:
GET http://server:8080/tfs/{project}/_apis/build/builds?api-version=3.2
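For instance, a rough sketch of calling that endpoint from a script, assuming a personal access token and placeholder server/project names:

# Sketch of pulling the build list from the TFS REST API with a personal access
# token; the server URL, project name and token are placeholders.
import requests

TFS_URL = "http://server:8080/tfs"
PROJECT = "MyProject"
PAT = "personal-access-token"


def list_builds():
    response = requests.get(
        f"{TFS_URL}/{PROJECT}/_apis/build/builds",
        params={"api-version": "3.2"},
        auth=("", PAT),  # the PAT goes in the password field of basic auth
    )
    response.raise_for_status()
    return response.json()["value"]


if __name__ == "__main__":
    builds = list_builds()
    print(f"{len(builds)} builds returned")
    for build in builds[:5]:
        print(build["id"], build["definition"]["name"], build["status"])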
You can also try to create your own custom reports; see SQL Server Reporting (TFS) and Create and manage Reporting Services reports for details.
We have 'try' build jobs that developers can initiate with parameterized variables to point at a particular branch, pulling the code and trial-running the build in Jenkins. Is there a way I can create a personal view showing only the builds that I have started?
The custom way
I think there's a way to achieve a personal view by coding/modifying your Jenkins installation; jan-molak worked on that feature here.
You can check the commits and maybe implement something of your own, especially this and this.
The plugin
Take a look at View Job Filter. If you configure it, there are options which seem to accomplish what you want:
Logged-in User Relevance Filter: This adds/removes jobs based on their relevance to the logged-in user. For example: matching jobs that were started by the user, or where the user committed changes to the source code of the job; matching jobs with a name that contains the user’s name or login id.
In order to prove that one team cannot see another team's jobs or folders, I need to come up with a measurable way to validate that. How can I test that team members can't see each other's jobs? Using the UI and comparing with my eyes becomes really difficult with many groups or users.
I am using Jenkins Project-based matrix plugin and latest Jenkins.
The best way to do this is by creating a test user. If you are using AD or Jenkins' own user database, try assigning that user the same permissions you have given the team. This way you will know for sure.
If there is any chance of such a security issue, Jenkins will raise a notification highlighting the security issue with the plugin in question. Hope this solves the issue.
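One way to make this measurable (a sketch, assuming test users with API tokens and placeholder job names) is to list the jobs each test user can actually see via the Jenkins REST API and compare against what they should see:

# Sketch: list the top-level jobs each test user can actually see via the
# Jenkins REST API and flag anything outside their expected set. The URL,
# user names, API tokens and job names are placeholders; jobs inside folders
# would need a recursive query.
import requests

JENKINS_URL = "https://jenkins.example.com"

# user -> (API token, job names that user is expected to see)
TEST_USERS = {
    "team-a-user": ("token-a", {"team-a-build", "team-a-deploy"}),
    "team-b-user": ("token-b", {"team-b-build"}),
}


def visible_jobs(user, token):
    response = requests.get(
        f"{JENKINS_URL}/api/json",
        params={"tree": "jobs[name]"},
        auth=(user, token),
    )
    response.raise_for_status()
    return {job["name"] for job in response.json()["jobs"]}


if __name__ == "__main__":
    for user, (token, allowed) in TEST_USERS.items():
        seen = visible_jobs(user, token)
        unexpected = seen - allowed
        print(f"{user} sees: {sorted(seen)}")
        if unexpected:
            print(f"  WARNING: {user} can also see {sorted(unexpected)}")

Running this for each team's test user gives you a repeatable, comparable record of visibility instead of eyeballing the UI.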