Jenkins - Install recommended plugins via Ansible or CLI

Is there a way to install the Jenkins recommended plugins (the ones shown when installing it via the wizard) when using Ansible?
I even started creating the list in my Ansible YAML, but it is pretty long, and the names don't always match the repository list here:
- name: Install plugin
  community.general.jenkins_plugin:
    name: "{{ item }}"
    url_username: "{{ jenkins_admin_username }}"
    url_password: "{{ jenkins_admin_password }}"
    url: http://localhost:8080
    timeout: 90
  with_items:
    - git
    - maven
    - thinbackup
    - build-pipeline-plugin
    - trilead-api
    - ant
A solution via CLI would also work fine.
Thank you!

I can't speak to the Ansible part as I don't use it, though there does appear to be an Ansible module to help you: community.general.jenkins_plugin – Add or remove Jenkins plugin.
The master list of recommended plugins is found on GitHub; that's what the UI is built from.
The recommended CLI tool for installing the plugins is the Plugin Installation Manager Tool for Jenkins.
The CLI tool can retrieve either the latest versions compatible with your Jenkins version or specific versions.
I don't know whether the Ansible module handles dependent plugins, but the CLI tool and the UI do.
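For the CLI route, a minimal sketch of invoking the Plugin Installation Manager Tool (the jar version and paths here are assumptions; check the tool's README for the current release and flags):
# Download the plugins listed in plugins.txt, resolving dependencies
# against the Jenkins version detected from the WAR file.
java -jar jenkins-plugin-manager-2.12.13.jar \
  --war /usr/share/jenkins/jenkins.war \
  --plugin-download-directory /var/lib/jenkins/plugins \
  --plugin-file plugins.txt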

Here is a list of the recommended plugins:
- name: Install plugin
  community.general.jenkins_plugin:
    name: "{{ item }}"
    url_username: "{{ jenkins_admin_username }}"
    url_password: "{{ jenkins_admin_password }}"
    url: http://localhost:8080
    timeout: "{{ jenkins_plugins_timeout }}"
    state: latest
  with_items:
    - ace-editor
    - ant
    - antisamy-markup-formatter
    - apache-httpcomponents-client-4-api
    - bootstrap4-api
    - bootstrap5-api
    - bouncycastle-api
    - branch-api
    - build-timeout
    - caffeine-api
    - checks-api
    - cloudbees-folder
    - command-launcher
    - credentials
    - credentials-binding
    - display-url-api
    - durable-task
    - echarts-api
    - email-ext
    - font-awesome-api
    - git
    - git-client
    - git-server
    - github
    - github-api
    - github-branch-source
    - gradle
    - handlebars
    - jackson2-api
    - jaxb
    - jdk-tool
    - jjwt-api
    - jquery3-api
    - jsch
    - junit
    - ldap
    - lockable-resources
    - mailer
    - matrix-auth
    - matrix-project
    - momentjs
    - okhttp-api
    - pam-auth
    - pipeline-build-step
    - pipeline-github-lib
    - pipeline-graph-analysis
    - pipeline-input-step
    - pipeline-milestone-step
    - pipeline-model-api
    - pipeline-model-definition
    - pipeline-model-extensions
    - pipeline-rest-api
    - pipeline-stage-step
    - pipeline-stage-tags-metadata
    - pipeline-stage-view
    - plain-credentials
    - plugin-util-api
    - popper-api
    - popper2-api
    - resource-disposer
    - scm-api
    - script-security
    - snakeyaml-api
    - ssh-credentials
    - ssh-slaves
    - sshd
    - structs
    - timestamper
    - token-macro
    - trilead-api
    - workflow-aggregator
    - workflow-api
    - workflow-basic-steps
    - workflow-cps
    - workflow-cps-global-lib
    - workflow-durable-task-step
    - workflow-job
    - workflow-multibranch
    - workflow-scm-step
    - workflow-step-api
    - workflow-support
    - ws-cleanup
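Note that newly installed plugins only take effect after Jenkins restarts. A minimal sketch of wiring that up with a handler (the handler name and the jenkins service name are assumptions; adjust to your setup):
# Add to the task above:
#   notify: Restart Jenkins
# and define the handler:
handlers:
  - name: Restart Jenkins
    ansible.builtin.service:
      name: jenkins        # assumed service name
      state: restarted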

Related

Ansible vars file group_vars/"{{ param }}"/vars.yml was not found on the Ansible Controller

I am trying to pass a value from Jenkins to Ansible. However, I am hitting the exception below. This only happens for group_vars; for hosts there is no issue.
ERROR! vars file group_vars/"{{ param }}"/vars.yml was not found on the Ansible Controller
- hosts: "{{ param }}"-value
  roles:
    - tests
  vars_files:
    - group_vars/"{{ param }}"/vars.yml
    - group_vars/all/vars.yml
Did you try:
- hosts: "{{ param }}-value"
  roles:
    - tests
  vars_files:
    - group_vars/{{ param }}/vars.yml
    - group_vars/all/vars.yml
The quotes (") are only needed when a value starts with {{.
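For illustration, the quoting rule in a nutshell (hypothetical keys):
# A value starting with {{ must be quoted, or YAML parses it as a flow mapping:
hosts_pattern: "{{ param }}-value"
# When {{ appears mid-string, no quotes are needed:
vars_path: group_vars/{{ param }}/vars.yml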

SonarCloud quality check doesn't work after running scan

I am trying to run the sonarcloud-quality-gate check after performing sonarcloud-scan, because I want the Bitbucket build pipeline to fail if the quality gate check fails.
Doing this, I get an error like this:
Quality Gate failed: Could not get scanner report: [Errno 2] No such file or directory: '/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/sonarsource/sonarcloud-scan/sonarcloud-scan.log'
This is how my bitbucket.yml looks.
image: node:10.15.3
clone:
  depth: full  # SonarCloud scanner needs the full history to assign issues properly
definitions:
  caches:
    sonar: ~/.sonar/cache  # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &build-test-sonarcloud
        name: Build, test and analyze on SonarCloud
        caches:
          - node
          - sonar
        script:
          - npm install --quiet
          - npm run test:coverage
          - pipe: sonarsource/sonarcloud-scan:0.1.5
            variables:
              SONAR_TOKEN: ${SONAR_TOKEN}
              EXTRA_ARGS: '-Dsonar.sources=src -Dsonar.tests=src -Dsonar.test.inclusions="**.test.jsx" -Dsonar.javascript.lcov.reportPaths=coverage/lcov.info'
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.1
            variables:
              SONAR_TOKEN: ${SONAR_TOKEN}
pipelines:
  default:
    - step: *build-test-sonarcloud
The sonarcloud-scan pipe itself runs successfully, though.
The problem is that the sonarsource/sonarcloud-quality-gate pipe requires a newer version of the sonarsource/sonarcloud-scan pipe. (This was the case ever since the first release of the sonarsource/sonarcloud-quality-gate pipe.)
Change your pipeline configuration like this:
- pipe: sonarsource/sonarcloud-scan:1.0.1
  variables:
    SONAR_TOKEN: ${SONAR_TOKEN}
    EXTRA_ARGS: '-Dsonar.sources=src -Dsonar.tests=src -Dsonar.test.inclusions="**.test.jsx" -Dsonar.javascript.lcov.reportPaths=coverage/lcov.info'
- pipe: sonarsource/sonarcloud-quality-gate:0.1.3
  variables:
    SONAR_TOKEN: ${SONAR_TOKEN}
An easy way to see the latest versions is the pipeline editor: when you edit the bitbucket-pipelines.yml file, a sidebar opens where you can filter the list of pipes by entering "sonar". Click on a pipe to see its details and note the version used.

How can I convince travis.ci to find Nashorn?

I have a JVM-based project that uses the Nashorn JavaScript engine. It builds and tests fine locally, but on travis.ci my unit tests explode with NullPointerExceptions because ScriptEngineManager.getEngineByName("nashorn") returns null.
Here's the .travis.yml I'm using:
language: scala
scala:
  - 2.11.8
notifications:
  email:
    recipients:
      - info#blocke.com
jdk:
  - oraclejdk8
script:
  - sbt clean coverage test coverageReport && sbt coverageAggregate
before_install:
  - export TZ=America/Chicago
  - date
after_success:
  - sbt coverageReport coveralls
addons:
  apt:
    packages:
      - oracle-java8-installer
Moving to oraclejdk11 solved the problem!
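That is, the fix amounts to changing the jdk entry in the .travis.yml above (and presumably the oracle-java8-installer apt addon can then be dropped):
jdk:
  - oraclejdk11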

Travis deploy based on matrix parameters

I have a Travis job that runs on both Linux and OSX, which I would like to use to deploy different build artefacts for each platform to GitHub releases. My .travis.yml file currently looks something like this:
language: rust
cache: cargo
os:
  - linux
  - osx
rust:
  - stable
  - beta
  - nightly
script:
  - cargo build --release -vv
  - cargo test --release --all -vv
matrix:
  allow_failures:
    - rust: nightly
  fast_finish: true
deploy:
  - provider: releases
    skip_cleanup: true
    api_key:
      secure: <encrypted key here, removed for brevity>
    before_deploy:
      - cargo install cargo-deb
      - cargo deb --no-build --no-strip
      - ./scripts/package_linux.sh .
    file_glob: true
    file:
      - "target/debian/ellington_0.1.0_amd64.deb"
      - "releases/*_linux.zip"
    on:
      tags: true
      os: linux
      rust: stable
I assume that I need to add a second deploy step (e.g. see below), but I can't find any documentation on how to do this, let alone whether it's even possible. There is extensive documentation on deploying to multiple providers, but not on deploying multiple times to the same provider on different platforms.
- provider: releases
  skip_cleanup: true
  api_key:
    secure: <encrypted key here, removed for brevity>
  before_deploy:
    - ./scripts/package_osx.sh .
  file_glob: true
  file:
    - "releases/*_osx.zip"
  on:
    tags: true
    os: osx
    rust: stable
Check out this link!
The gist of it is: yes, you were on the right track, and you can define multiple deployments like so:
deploy:
  - provider: releases
    api_key: "<deploy key>"
    file:
      - "target/release.deb"
    skip_cleanup: true
    on:
      tags: true
  - provider: releases
    api_key: "<deploy key>"
    file:
      - "target/release.dmg"
    skip_cleanup: true
    on:
      tags: true
  - provider: releases
    etc...
Relevant documentation for this feature can also be found here, approximately halfway through the conditional deployment section.
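Applied to the question's configuration, the two platform-specific deployments sit side by side under one deploy key, each gated by its os (a sketch assembled from the snippets above, with keys trimmed; note that before_deploy runs as a global build phase, so platform-specific packaging steps may need to branch on $TRAVIS_OS_NAME):
deploy:
  - provider: releases
    skip_cleanup: true
    api_key:
      secure: <encrypted key here>
    file_glob: true
    file:
      - "target/debian/ellington_0.1.0_amd64.deb"
      - "releases/*_linux.zip"
    on:
      tags: true
      os: linux
      rust: stable
  - provider: releases
    skip_cleanup: true
    api_key:
      secure: <encrypted key here>
    file_glob: true
    file:
      - "releases/*_osx.zip"
    on:
      tags: true
      os: osx
      rust: stable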

Triggering different downstream jobs via templates in Jenkins Job Builder

I'm trying to trigger one or more downstream jobs using a job template. A summary of my definition:
- job-template:
    name: something-{app}-build
    project-type: freestyle
    defaults: global
    block-downstream: true
    scm:
      - git:
          url: url
          branches:
            - 'master'
          excluded-users:
            - Jenkins
    builders:
      !include: templates/build-and-publish.yml
    publishers:
      - postbuildscript:
          builders:
            !include: templates/docker-build-and-push-to-ecr.yml
          script-only-if-succeeded: True
          mark-unstable-if-failed: True
      - trigger-parameterized-builds:
          - project: 'deploy-dev-ecs-service'
            condition: SUCCESS
            predefined-parameters: |
              service={app}
              envparams={envparams}

- project:
    name: release-to-ecr
    type: app
    envparams: ''
    app:
      - app-1
      - app-2:
      - app-3:
          envparams: 'FOO=42'
    jobs:
      - 'something-{app}-build'
Now this works, but I need to trigger different downstream jobs based on the app. This means triggering deploy-dev-ecs-service multiple times with multiple parameters. For example:
app:
  - app-1:
      - project: deploy-dev-ecs-service
        service: 'app-1'
        envparams: 'foo=bar'
  - app-2:
      - project: deploy-dev-ecs-service
        service: 'app-2.2'
        envparams: 'x=2'
      - project: deploy-dev-ecs-service
        service: 'app-2.3'
        envparams: 'x=3'
Essentially, I need to control which downstream job(s) get triggered based on
the project parameters. Is there a way to do this?
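One mechanism that may fit is JJB's {obj:...} variable substitution, which injects complex (non-string) data from the project definition into a template. A hedged sketch, assuming a hypothetical downstream variable holding each app's trigger list:
- job-template:
    name: something-{app}-build
    # ... scm, builders, postbuildscript as above ...
    publishers:
      - trigger-parameterized-builds: '{obj:downstream}'

- project:
    name: release-to-ecr
    app:
      - app-1:
          downstream:
            - project: deploy-dev-ecs-service
              condition: SUCCESS
              predefined-parameters: |
                service=app-1
                envparams=foo=bar
    jobs:
      - 'something-{app}-build'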
