I am running Jenkins in a Docker container. I have installed Ansible on my computer and the Ansible plugin on Jenkins as well; which ansible returns /usr/local/bin/ansible. How can I configure my system's Ansible in Jenkins' Global Tool Configuration (http://localhost:8080/configureTools/), since /usr/local/bin/ansible is not being recognized by Jenkins?
"/usr/local/bin/ansible is not a directory on the Jenkins master (but perhaps it exists on some agents)" is the error I get when I try to put the ansible path
Ansible must be installed on the server where Jenkins actually executes the commands. The "(but perhaps it exists on some agents)" part of the message is generic: it means that if any slaves registered with Jenkins have Ansible installed, you can run your jobs on those instead. You cannot use your local machine's Ansible, because the Jenkins master running inside the Docker container cannot see it. Ansible Tower is one tool you could use here, but it requires its own installation on a Linux server.
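Since your master runs inside a Docker container, one way to put Ansible where Jenkins can see it is to bake it into the Jenkins image itself. A minimal sketch, assuming the official jenkins/jenkins:lts (Debian-based) image:

    # Hypothetical Dockerfile extending the official Jenkins image
    FROM jenkins/jenkins:lts
    USER root
    # Install Ansible inside the container so the master can find it
    RUN apt-get update && \
        apt-get install -y --no-install-recommends ansible && \
        rm -rf /var/lib/apt/lists/*
    USER jenkins

Note also that the Global Tool Configuration field expects the directory containing the Ansible executables rather than the binary itself, which is why /usr/local/bin/ansible was rejected as "not a directory"; after installing inside the container, point it at the directory (with the Debian package, /usr/bin).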
I installed Jenkins on an AWS EC2 Ubuntu instance, and I'm running Docker on the same server. What I'm trying to do is build a pipeline via Jenkins on that EC2 instance. My Jenkinsfile is stored on GitHub.
When I run the pipeline I get the following error:
Jenkins doesn't have label 'docker'
How do I solve this problem?
I went to Manage Jenkins -> Global Tool Configuration and also to Configure System, but none of these solved the problem.
Any suggestions?
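For context, this error typically appears when the pipeline requests an agent label that no configured node or cloud provides. A hypothetical Jenkinsfile fragment that would trigger it:

    // Hypothetical Jenkinsfile: some node or cloud must advertise the
    // 'docker' label, otherwise Jenkins reports "doesn't have label 'docker'"
    pipeline {
        agent { label 'docker' }
        stages {
            stage('Build') {
                steps {
                    sh 'docker build -t myapp .'   // placeholder build step
                }
            }
        }
    }

The usual fixes are to add the 'docker' label to a node under Manage Jenkins -> Manage Nodes, or to change the agent declaration to a label that actually exists.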
I have configured Jenkins using its Docker image to scale and deploy it on a Kubernetes cluster (minikube) using the Kubernetes plugin, and I am able to generate the slaves dynamically. But I am not able to run Groovy scripts by passing the file path of the Groovy file on the slave. I have tried using SSH and the scp command, but I cannot run the scripts on the slave node. Any other ideas?
As a test, I created a sample Groovy file on the slave node, gave its path to the Groovy plugin, and built the job, which works. Can we create a file on our local system and have it run on the slave node?
Via the SSH slave plugin, we can have a Jenkins slave run specific jobs. But in my understanding SSH alone is enough to execute commands, so why does Jenkins still need to run slave.jar (and therefore have Java installed)?
SSH is the communication mechanism between the master and slave machines.
The slave still has to run something to listen to the master and to do the actual builds. That Jenkins slave code is written in Java and stored in slave.jar.
So the reason you need Java on the slave machine is because the Jenkins slave software is written in Java. SSH is used by the master to tell the slave to do something.
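Conceptually, the SSH launcher just uses SSH to deliver and start that Java program. An illustrative sketch (the host name and paths are made up):

    # What the master effectively does over SSH (illustrative host/paths):
    scp slave.jar jenkins@build-agent:/home/jenkins/slave.jar
    ssh jenkins@build-agent 'java -jar /home/jenkins/slave.jar'
    # The java process stays connected to the master and runs the builds.

So SSH is only the bootstrap channel; the long-running work happens inside the JVM that slave.jar starts.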
I have the latest Jenkins and am using its latest Swarm Plugin.
I have written Ansible modules/roles/playbooks to install and configure various tools on a given target node (which I would like to use as a Swarm slave node).
After the Ansible playbook run is complete, I see a new slave created and attached to my Jenkins master, but the Swarm Plugin's docs (Available Options) don't mention how to set ENVIRONMENT variables on the slave. https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin
My questions are:
How can I have multiple slaves created on the same target machine, each with its own individual settings for tools such as JAVA_HOME, M2_HOME, GRADLE_HOME, PATH, etc.?
How can I set ENVIRONMENT variables for a slave using the Swarm plugin?
This is required because if I create a slave whose default JAVA_HOME is jdk1.7.0_67, I would then like to create another slave whose default JAVA_HOME is jdk1.8.0_45. Similarly, the end goal is to have various flavors of such slaves, where each slave's tool set is slightly different. I'll assign LABEL(s) accordingly and use them in a Jenkins job's configuration so that a job runs only on the slaves whose labels are tied to the job.
I tried using https://github.com/MovingBlocks/GroovyJenkins/blob/master/src/main/groovy/AddNodeToJenkins.groovy but am not sure how I can automatically define/set ENVIRONMENT variables in the slave's configuration.
I'm assuming you're running on Linux here.
You can have a shell script export the new environment before calling the swarm-client; these variables will be inherited by the new swarm slave (see the sketch below the link).
https://unix.stackexchange.com/questions/130985/if-processes-inherit-the-parents-environment-why-do-we-need-export
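A minimal sketch of such a wrapper (the master URL, JDK/Maven paths, and slave name are placeholders):

    #!/bin/sh
    # Hypothetical wrapper: export the environment this slave should see,
    # then start the swarm client, which inherits the exported variables.
    export JAVA_HOME=/opt/jdk1.8.0_45
    export M2_HOME=/opt/apache-maven
    export PATH="$JAVA_HOME/bin:$M2_HOME/bin:$PATH"

    # -name and -labels keep this slave distinct from other slaves
    # started on the same machine by similar scripts.
    java -jar swarm-client.jar \
        -master http://jenkins.example.com:8080 \
        -name jdk8-slave \
        -labels "jdk8 maven" \
        -fsroot /home/jenkins/jdk8-slave

Running a second copy of this script with different exports, a different -name, and a different -fsroot gives you a second, independently configured slave on the same target machine, which also covers the multiple-slaves question.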
Alternatively, you could run Docker and have separate swarm slave containers (https://hub.docker.com/r/csanchez/jenkins-swarm-slave/), put your specific installs into the Dockerfile, and add a new ENTRYPOINT at the bottom of the Dockerfile:
ENTRYPOINT ["/usr/local/bin/jenkins-slave.sh" \
"-labels", "label1", "-labels", "label2"]
We have a Jenkins system to automate builds from GitHub, and now we are implementing a SaltStack system. I need to integrate Jenkins with the salt-master so that it passes all new builds to the master, which then distributes them across the salt clients (minions).
The SaltStack setup is in the AWS cloud, and the Jenkins machine is outside the cloud in a local setup.
You could enable the salt-api and use the following plugin: https://wiki.jenkins-ci.org/display/JENKINS/saltstack-plugin. Then all of your Jenkins builds can execute states / orchestrations etc. on any minions on a per-job basis.
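Enabling the API is a salt-master configuration change; a minimal sketch (the port, the jenkins user, and the PAM auth choice are assumptions):

    # /etc/salt/master.d/api.conf -- illustrative values
    rest_cherrypy:
      port: 8000
      disable_ssl: True    # for testing only; set ssl_crt/ssl_key for real use
    external_auth:
      pam:
        jenkins:           # the user the Jenkins plugin authenticates as
          - .*             # allow all execution modules; tighten for production

After installing salt-api and restarting the salt-master, point the Jenkins plugin at the API URL. Since your Jenkins machine sits outside AWS, the salt-api port also has to be reachable from it (security group / firewall rules).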
Another way of doing this is to run a minion on the salt-master and install a Jenkins slave on the same box. Then restrict the Jenkins jobs to that Jenkins slave and execute the commands as if you were at the command line. NOTE: this option requires a bit more configuration.
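With that arrangement, a job tied to that slave can run ordinary salt commands as build steps; for example (the target pattern, state name, and sudo setup are hypothetical):

    # Runs on the slave that lives on the salt-master box;
    # $BUILD_NUMBER is supplied by Jenkins.
    sudo salt 'web*' state.apply deploy_app pillar="{\"build\": \"$BUILD_NUMBER\"}"

The extra configuration mentioned above is mostly the sudo (or root) access the Jenkins user needs in order to call salt.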