Skip initial setup wizard in Jenkins (latest version) using Ansible on Ubuntu

I am trying to set up Jenkins using Ansible on Ubuntu, and I want to skip the initial setup wizard.
The code below worked fine on localhost, but on an EC2 instance it does not work: the initial setup wizard still appears.
- name: Disable Jenkins setup wizard
  lineinfile:
    dest: /etc/default/jenkins
    regexp: '^JAVA_ARGS='
    line: 'JAVA_ARGS="-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"'
  become: true

- name: Start jenkins
  service:
    name: jenkins
    enabled: true
    state: started

- name: Wait for Jenkins to start up
  uri:
    url: http://{{ HostName }}:8080
    status_code: 200
    timeout: 5
  register: jenkins_service_status
  # Keep trying for 5 mins in 5 sec intervals
  retries: 60
  delay: 5
  until: >
    'status' in jenkins_service_status and
    jenkins_service_status['status'] == 200
Can you please suggest how to get rid of the initial setup wizard, or else how to configure it with Ansible?
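A likely cause, judging from the related answers below: recent Jenkins packages (roughly 2.332 and later) start Jenkins from a systemd unit and no longer read JAVA_ARGS from /etc/default/jenkins. A minimal sketch of a systemd drop-in approach for such versions follows; the override path and task names are illustrative, not taken from the question.

- name: Create a systemd override directory for Jenkins (illustrative path)
  file:
    path: /etc/systemd/system/jenkins.service.d
    state: directory
    mode: '0755'
  become: true

- name: Pass the setup-wizard flag via a systemd drop-in (Jenkins >= 2.332)
  copy:
    dest: /etc/systemd/system/jenkins.service.d/override.conf
    content: |
      [Service]
      Environment="JAVA_OPTS=-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"
  become: true

- name: Reload systemd and restart Jenkins so the override takes effect
  systemd:
    name: jenkins
    daemon_reload: true
    state: restarted
  become: true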

Related

Configure Ansible playbook to skip Jenkins Initial setup

Hello, I'm new to writing Ansible playbooks, but I'm trying to have my playbook install Jenkins. It installs Jenkins just fine, but the problem is that Jenkins still wants me to do the initial unlock before installing plugins, creating jobs, etc. I've seen people here say a few times that you just need to add this to your playbook and you should be good. When I add it and then run the playbook, the issue remains, even on a brand-new server. Wondering what everyone has done to get past this issue. Thanks for your assistance!
Code I've seen in other posts (for example, "Gets error 'Cannot get CSRF' when trying to install jenkins-plugin using ANSIBLE"):
- name: Jenkins Skip startUp for MI
  lineinfile:
    dest: /etc/sysconfig/jenkins
    regexp: '^JENKINS_JAVA_OPTIONS='
    line: 'JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"'
  register: result_skip_startup_wizard
My Playbook
---
# jenkins
- name: Create jenkins group
  group:
    name: jenkins
    state: present

- name: Create jenkins user
  user:
    name: jenkins
    group: jenkins
    state: present

- name: Import jenkins gpg key
  rpm_key:
    state: present
    key: http://pkg.jenkins.io/redhat-stable/jenkins.io.key
    validate_certs: no

- name: Download Jenkins repo
  get_url:
    url: http://get.jenkins.io/redhat-stable/jenkins-2.332.3-1.1.noarch.rpm
    dest: /etc/yum.repos.d/

- name: Install java
  yum:
    name: java-11-openjdk
    state: present

- name: Install Jenkins
  package:
    name: /etc/yum.repos.d/jenkins-2.332.3-1.1.noarch.rpm
    state: latest

- name: Jenkins Skip startUp for MI
  lineinfile:
    dest: /etc/sysconfig/jenkins
    regexp: '^JENKINS_JAVA_OPTIONS='
    line: 'JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"'
  register: result_skip_startup_wizard

- name: Start and Enable Jenkins
  systemd:
    name: jenkins
    state: started
    enabled: true

- name: Sleep for 30 seconds and continue with Jenkins buildout
  wait_for: timeout=30
For reference, this is what I see on the server when I check the file and when I grep for the process.
jenkins 8474 1 34 18:29 ? 00:00:20 /usr/bin/java -Djava.awt.headless=true -jar /usr/share/java/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=8080
You can see that the changes do get put into the file, as mentioned above, which makes me think that even after restarting the service it's not seeing the new option. I even manually stopped Jenkins and then started it again, but it still did not pick it up.
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"
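One quick way to confirm what the running service actually passes to Java (a sketch; this assumes a systemd-based Jenkins package, which is the case from 2.332 onwards):

# Show the unit file plus any drop-ins, and the environment systemd injects
systemctl cat jenkins
systemctl show jenkins --property=Environment
# The effective java command line of the running process
ps -o args= -C java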
A little late here, but I figured I'd leave a comment as well: when testing I discovered that the setup depends on the version of Jenkins you are attempting to install. The versions I tested are noted in the comment lines above the code.
The "latest" part is just an assumption on my part, not a guarantee.
# testing for jenkins 2.319.1
- name: Jenkins Skip startUp for MI
  lineinfile:
    dest: /etc/sysconfig/jenkins
    regexp: '^JENKINS_JAVA_OPTIONS='
    line: 'JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"'
  register: result_skip_startup_wizard

# below works for 2.332.1 or latest
- name: Jenkins Skip startUp for MI
  lineinfile:
    dest: /usr/lib/systemd/system/jenkins.service
    regexp: '^Environment="JAVA_OPTS=-Djava.awt.headless=true'
    line: 'Environment="JAVA_OPTS=-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"'
  register: result_skip_startup_wizard
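Note that this second variant edits the systemd unit file itself, so systemd has to re-read the unit before a restart picks the change up. A small follow-up sketch (task names are illustrative):

- name: Reload systemd so the edited jenkins.service is re-read
  systemd:
    daemon_reload: true
  when: result_skip_startup_wizard.changed

- name: Restart Jenkins to apply the new JAVA_OPTS
  systemd:
    name: jenkins
    state: restarted
  when: result_skip_startup_wizard.changed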

Setting up Jenkins job in Rundeck 3.4.9 community version

I want to set up a Jenkins job that is triggered by Rundeck. I have already installed the Rundeck plugin and tested the connection between Rundeck and Jenkins, which is working as expected.
Now, my requirement is that this Jenkins job should be triggered by Rundeck. I am using Rundeck 3.4.9 community version.
How can I set up a Jenkins job in Rundeck, and what configuration do I need on both the Rundeck and Jenkins sides?
You can design a Rundeck job that calls the Jenkins job using this endpoint via curl.
I tested successfully with this endpoint:
http://user:JENKINS_USER_TOKEN@localhost:8080/job/TestJob/build
You can use the HTTP workflow step to send POST requests using that URL format.
Source.
Here is a job definition example:
- defaultTab: nodes
description: ''
executionEnabled: true
id: 1fa2923a-5b1d-4ea2-97d1-4cc2a3726f07
loglevel: INFO
name: ExampleJENKINS
nodeFilterEditable: false
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- exec: echo "starting..."
- fileExtension: .sh
interpreterArgsQuoted: false
script: curl -vvv -X POST http://admin:11bd72f1f22653cf7158c7961f60476a1d#localhost:8080/job/MyJenkinsJob/build
scriptInterpreter: /bin/bash
keepgoing: false
strategy: node-first
uuid: 1fa2923a-5b1d-4ea2-97d1-4cc2a3726f07
Another way is to use the HTTP Step plugin with basic authentication configured.
The job definition example:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 1fa2923a-5b1d-4ea2-97d1-4cc2a3726f07
  loglevel: INFO
  name: ExampleJENKINS
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
      - exec: echo "starting..."
      - configuration:
          authentication: Basic
          checkResponseCode: 'false'
          method: POST
          password: keys/jenkins_admin_token
          printResponse: 'false'
          printResponseToFile: 'false'
          proxySettings: 'false'
          remoteUrl: http://localhost:8080/job/MyJenkinsJob/build
          sslVerify: 'true'
          timeout: '30000'
          username: admin
        nodeStep: true
        type: edu.ohio.ais.rundeck.HttpWorkflowNodeStepPlugin
    keepgoing: false
    strategy: node-first
  uuid: 1fa2923a-5b1d-4ea2-97d1-4cc2a3726f07
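If the Jenkins job takes parameters, the same URL format should also work against Jenkins' buildWithParameters endpoint instead of build; a sketch, where the job name, token and parameter are placeholders:

curl -X POST "http://admin:JENKINS_USER_TOKEN@localhost:8080/job/MyJenkinsJob/buildWithParameters?TARGET_ENV=staging"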

How to use custom ingest pipelines with docker autodiscover

I'm having a hard time using custom Elasticsearch ingest pipelines with Filebeat's Docker autodiscovery. I started out with custom processors in my filebeat.yml file; however, I would prefer to shift this to the custom ingest pipelines I've created.
Firstly, here is my configuration using custom processors, which works to provide custom grok-like processing for my Servarr app Docker containers (identified by applying a label to them in my docker-compose.yml file). The processors copy the 'message' field to 'log.original', then use dissect to extract 'log.level' and 'log.logger' and to overwrite 'message'. The final processor is a JavaScript function used to convert the log.level to lowercase (overkill perhaps, but humour me).
Filebeat configuration:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true

processors:
  - if:
      equals:
        docker.container.labels.co_elastic_logs/custom_processor: servarr
    then:
      - copy_fields:
          fields:
            - from: message
              to: log.original
          fail_on_error: false
          ignore_missing: true
      - dissect:
          tokenizer: "[%{log.level}] %{log.logger}: %{message}"
          field: message
          target_prefix: ""
          overwrite_keys: true
          ignore_failure: true
      - script:
          lang: javascript
          id: lowercase
          source: >
            function process(event) {
              var level = event.Get("log.level");
              if (level != null) {
                event.Put("log.level", level.toString().toLowerCase());
              }
            }

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: '*************'

setup.kibana.host: 'kibana:5601'

logging.json: true
logging.metrics.enabled: false
Excerpt from docker-compose.yml file...
lidarr:
  image: ghcr.io/linuxserver/lidarr:latest
  container_name: lidarr
  labels:
    co.elastic.logs/custom_processor: "servarr"
And an example log line (in json):
{"log":"[Info] DownloadDecisionMaker: Processing 100 releases \n","stream":"stdout","time":"2021-08-07T10:10:49.125702754Z"}
This works well, and achieves my aims of extracting fields, but ideally I'd like to use Elasticsearch's (more powerful) ingest pipelines instead, and live with a cleaner filebeat.yml, so I created a working ingest pipeline "filebeat-7.13.4-servarr-stdout-pipeline" like so (ignore the fact that for now, this only does the grokking):
[
  {
    "grok": {
      "field": "message",
      "patterns": [
        "\\[%{LOGLEVEL:log.level}\\] %{WORD:log.logger}: %{GREEDYDATA:message}"
      ],
      "trace_match": true,
      "ignore_missing": true
    }
  }
]
I tested the pipeline against existing documents (not ones that have had my custom processing applied, I should note). The pipeline worked against all the documents I tested it against in the Kibana interface.
So now I come to shift my Filebeat config to use this pipeline for containers with my custom_processor label. This is the filebeat.yml I came up with, which is apparently valid and works for the most part, but doesn't apply the grokking:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true
      appenders:
        - type: config
          condition.equals:
            docker.container.labels.co_elastic_logs/custom_processor: servarr
          config:
            pipeline: filebeat-7.13.4-servarr-stdout-pipeline

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: '*************'

setup.kibana.host: 'kibana:5601'

logging.json: true
logging.metrics.enabled: false
If I use Filebeat's built-in modules for my other containers, such as nginx, by applying labels as in the example below, the built-in module pipelines are used:
nginx-repo:
  image: nginx:latest
  container_name: nginx-repo
  mem_limit: 2048m
  environment:
    - VIRTUAL_HOST=repo.***.***.***,repo
    - VIRTUAL_PORT=80
    - HTTPS_METHOD=noredirect
  networks:
    - default
    - proxy
  labels:
    co.elastic.logs/module: "nginx"
    co.elastic.logs/fileset.stdout: "access"
    co.elastic.logs/fileset.stderr: "error"
What am I doing wrong here? The logs still end up in Elasticsearch and Kibana, and are processed, but my grok isn't applied, new fields aren't created, and the 'message' field is unchanged.
EDIT: In response to one of the comments linking to a post on the elastic forums, which suggested both the path(s) and the pipeline need to be made explicit, I tried the following filebeat.yml autodiscovery excerpt, which also fails to work (but is apparently valid config):
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true
      appenders:
        - type: config
          condition:
            equals:
              docker.container.labels.co_elastic_logs/custom_processor: "servarr"
          config:
            - type: docker
              containers:
                ids:
                  - "${data.docker.container.id}"
                stream: all
                paths:
                  - /var/lib/docker/containers/${data.docker.container.id}/${data.docker.container.id}-json.log
              pipeline: filebeat-7.13.4-servarr-stdout-pipeline
I tried with the docker.container.labels.co_elastic_logs/custom_processor value both quoted and unquoted. I have the same behaviour where the logs end up in Elasticsearch / Kibana, but they are processed as if they skipped my ingest pipeline.
We're using Kubernetes instead of Docker with Filebeat but maybe our config might still help you out.
We have autodiscover enabled and have all pod logs sent to a common ingest pipeline, except for logs from any Redis pod, which use the Redis module and send their logs to Elasticsearch via one of two custom ingest pipelines depending on whether they're normal Redis logs or slowlog Redis logs. This is configured in the following block:
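(The block itself did not survive in this copy of the answer. A minimal sketch of what such a configuration can look like, assuming a kubernetes autodiscover provider, a pod label app: redis, and hypothetical pipeline names:)

filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        # Redis pods: use the Redis module and route the logs to a custom
        # ingest pipeline (label and pipeline names are hypothetical).
        - condition:
            equals:
              kubernetes.labels.app: redis
          config:
            - module: redis
              log:
                enabled: true
                input:
                  type: container
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log
                  pipeline: redis-log-pipeline
              # The slowlog fileset is enabled and pointed at a second
              # pipeline (e.g. redis-slowlog-pipeline) in the same way.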
All other detected pod logs are sent to a common ingest pipeline using the following catch-all configuration in the "output" section:
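(This block is also missing; a sketch of the catch-all, assuming the hypothetical pipeline name common-pod-logs. Events that already carry a pipeline, for example from a module or an input-level pipeline setting, keep theirs; everything else falls through to this one:)

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  # Catch-all ingest pipeline for events that do not set one themselves
  # (the pipeline name here is hypothetical).
  pipeline: common-pod-logs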
Something else that we do is add the name of the ingest pipeline to ingested documents using the "set" processor:
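(The processor block is missing as well; a minimal sketch, added as the last processor of each custom pipeline. The target field name is an assumption:)

{
  "set": {
    "description": "Record which ingest pipeline handled this event (field name is an assumption)",
    "field": "labels.pipeline_name",
    "value": "filebeat-7.13.4-servarr-stdout-pipeline"
  }
}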
This has proven to be really helpful when diagnosing whether or not a pipeline was actually executed when viewing an event document in Kibana.

Remote Trigger Bamboo build from BitBucket Server webhook only on PR?

I'm new to Bamboo and webhooks. I'm trying to start a Bamboo build automatically when a PR to the master branch of my repo is opened.
I followed this guide but the remote trigger is not starting at all.
(Bamboo trigger and Bitbucket webhook configuration screenshots omitted.)
I've already checked the following:
- the whitelisted IP is correct (the Bitbucket webhook fails if I remove it)
- my Bamboo plan is enabled and builds fine on a manual run
What am I missing?
Bamboo build plan in YAML:
---
oid: 7818389690603565060
key: XT
name: XXX - TEMP
project:
  oid: 7819374853022025730
  key: DIGQA
repositories:
  - oid: 7818811903068661169
    parentRepository: 7818811903068661168
triggers:
  - name: Bitbucket Server repository triggered
    description: ''
    pluginKey: com.atlassian.bamboo.plugins.stash.atlassian-bamboo-plugin-stash:stashTrigger
    enabled: true
    configuration: {}
    triggerConditions:
      com.atlassian.bamboo.triggercondition.internal:plansGreenCondition:
        enabled: 'false'
    triggeringRepositories:
      - 7818811903068661169
  - name: Remote trigger
    description: Master PR Trigger
    pluginKey: com.atlassian.bamboo.triggers.atlassian-bamboo-triggers:remote
    enabled: true
    configuration:
      repository.change.trigger.triggerIpAddress: 10.40.1.120
    triggerConditions:
      com.atlassian.bamboo.triggercondition.internal:plansGreenCondition:
        enabled: 'false'
    triggeringRepositories:
      - 7818811903068661169
branchConfiguration:
  planBranchCreation:
    enabled: false
  removedBranchCleanup:
    enabled: false
  inactiveBranchesCleanup:
    enabled: false
  merging:
    enabled: false
  notificationStrategy: notifyCommitters
  triggers: inherited
  issueLinking: enabled
dependencies:
  configuration:
    enabledForBranches: 'true'
    requireAllStagesPassing: null
    blockingStrategy: none
  childPlans: []
permissions:
  users:
    xxxxxxxx:
      - administration
      - build
      - clone
      - read
      - write
  groups: {}
  roles:
    user:
      - read
    anonymous:
      - read
plugins:
  - pluginKey: com.atlassian.bamboo.plugin.system.additionalBuildConfiguration:concurrentBuild
    configuration:
      custom.concurrentBuilds.overrideNumberOfConcurrentBuilds: 'true'
      custom.concurrentBuilds.numberOfConcurrentBuilds: '1'
  - pluginKey: com.atlassian.bamboo.plugin.system.additionalBuildConfiguration:buildExpiry
    configuration:
      custom.buildExpiryConfig.enabled: 'false'
  - pluginKey: com.atlassian.bamboo.plugin.artifact.handler.local:artifactHandlersConfiguration
    configuration:
      custom.artifactHandlers.useCustomArtifactHandlers: 'false'
buildDefinition:
  custom.predefinedVariables: '{"variableSetList":[]}'
stages:
  - oid: 7818530428091950756
    name: Default Stage
    jobs:
      - oid: 7818671165580276746
        key: JOB1
        name: Default Job
        tasks:
          - oid: 7819234115533708305
            description: Checkout Default Repository
            pluginKey: com.atlassian.bamboo.plugins.vcs:task.vcs.checkout
            configuration:
              repositories:
                - ref: defaultRepository
        buildDefinition:
          cleanWorkingDirectory: false
          repositoryDefiningWorkingDirectory: -1
...
===========================================================================
EDIT 1:
Okay, so I realized the hook and the trigger are actually working; I had misunderstood the trigger setup in Bamboo.
Current behavior:
PR to master is opened
BitBucket webhook (on PR) is fired
The Bamboo trigger is set to remote / Bitbucket Server repo. Because of this, the build will not start until the changes are committed / the PR is actually merged.
Problem:
I want the build to trigger once the PR is opened (before the merge). To give a bit more context, this is the ideal flow of my build:
Checkout the PR code (revision)
Run my tests against the PR revision
I'm looking at the following links, as it seems they managed to do it somehow, but I can't make sense of the bits of info provided in either link.
bamboo - build my pull request
What's wrong with bamboo
Since you are using Bamboo and Bitbucket Server (not Cloud), follow the instructions here:
https://confluence.atlassian.com/bamboo/integrating-bamboo-with-bitbucket-server-779302772.html
You need to create an application link between Bamboo and BBS - application links are between Atlassian applications.
I found out that this feature is supported out of the box as of Bamboo 6+: Reference

Configure Jenkins 2.0 with Ansible

I am using Ansible to provision our servers. I installed Jenkins 2.0, but it comes up with a startup configuration wizard when I open the web UI. How can I handle this with Ansible, a shell script, or jenkins-cli? CentOS 7, Ansible 2.0.1.0.
So:
1. Install Jenkins 2.0 from the http://pkg.jenkins-ci.org/redhat-rc/jenkins-2.0-1.1.noarch.rpm rpm.
2. Install Java with yum.
3. Start the jenkins service.
4. Open 192.168.46.10:8080, which opens Jenkins.
5. In the web UI, enter the initial admin password.
6. In the web UI, select and install plugins.
7. In the web UI, create a new admin user.
Points 5, 6 and 7 are the startup configuration of the new Jenkins. I have no idea how to do this part automatically.
Edit 1:
Points 1, 2 and 3 are already done; I didn't share them at first because they aren't the problem, I only need advice on how to configure Jenkins. But now I have added them to my question.
---
- name: Jenkins - install | Install java
  yum: name=java state=installed

- name: Jenkins - install | Install Jenkins 2.0
  yum: pkg=http://pkg.jenkins-ci.org/redhat-rc/jenkins-2.0-1.1.noarch.rpm state=installed

- name: Jenkins - install | Start and enable Jenkins 2.0
  service: name=jenkins state=started enabled=yes
You can create an initialization script (in groovy) to add an admin account.
The script should be placed in $JENKINS_HOME/init.groovy.d/ (any *.groovy file there is executed at startup).
See Jenkins CI Wiki for more details.
Here's an example.
security.groovy.j2 file:
#!groovy

import java.util.logging.Level
import java.util.logging.Logger

import hudson.security.*
import jenkins.model.*

def instance = Jenkins.getInstance()
def logger = Logger.getLogger(Jenkins.class.getName())

logger.log(Level.INFO, "Ensuring that local user '{{ jenkins.admin_username }}' is created.")

if (!instance.isUseSecurity()) {
    logger.log(Level.INFO, "Creating local admin user '{{ jenkins.admin_username }}'.")
    def strategy = new FullControlOnceLoggedInAuthorizationStrategy()
    strategy.setAllowAnonymousRead(false)
    def hudsonRealm = new HudsonPrivateSecurityRealm(false)
    hudsonRealm.createAccount("{{ jenkins.admin_username }}", "{{ jenkins.admin_password }}")
    instance.setSecurityRealm(hudsonRealm)
    instance.setAuthorizationStrategy(strategy)
    instance.save()
}
How to use in Ansible playbook:
- name: Create initialization scripts directory
  file:
    path: "{{ jenkins.home }}/init.groovy.d"
    state: directory
    owner: jenkins
    group: jenkins
    mode: '0775'

- name: Add initialization script to setup basic security
  template:
    src: security.groovy.j2
    dest: "{{ jenkins.home }}/init.groovy.d/security.groovy"
I was inspired by this GitHub repository.
I found a solution: turn off the setup wizard. After that I was able to change the config files.
- name: Jenkins - configure | Turn off Jenkins setup wizard
  lineinfile:
    dest: /etc/sysconfig/jenkins
    regexp: '^JENKINS_JAVA_OPTIONS='
    line: 'JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"'
  notify: restart jenkins
The above solution didn't work for me, but it gave me a hint, and this is the solution that worked for me on Ubuntu:
- name: Configure JVM Arguments
  lineinfile:
    dest: /etc/default/jenkins
    regexp: '^JAVA_ARGS='
    line: 'JAVA_ARGS="-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"'
  notify:
    - Restart Jenkins
On Ubuntu 16.04 with Jenkins installed using apt-get, this works:
- name: "Turn off Jenkins setup wizard"
lineinfile:
dest: /etc/init.d/jenkins
regexp: '^JAVA_ARGS='
line: 'JAVA_ARGS="-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"'
insertbefore: '^DAEMON_ARGS='
notify: restart jenkins
- name: restart jenkins
service: name=jenkins state=restarted
You will still have to set up security, though!
