I want to create .ipa files with a Jenkins slave on a Mac. My Jenkins master is on Linux.
When creating a job on the master, I need to configure the Xcode plugin. Should I give it the paths and keychain that are on the slave?
Or is there a way to create the job on the slave and have the master trigger it?
Jenkins slaves are just "runners"; they don't provide a configuration page. All job configuration is done on the master, even if it is a Linux machine.
Start by setting up a Jenkins slave on the Mac:
http://www.parsed.io/setup-a-mac-slave-for-jenkins/
https://blog.samsaodev.com/how-to-setup-a-jenkins-slave-running-mac-osx-for-ios-projects-part-1/
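Once the node is defined on the master, the Mac connects back by running the agent JAR. A minimal sketch, assuming placeholder values for the master URL, node name, and secret (yours come from the node's page in the Jenkins UI):

    # On the Mac, download the agent JAR from the master first
    curl -O http://your-master:8080/jnlpJars/slave.jar
    # Connect back to the master via Java Web Start (JNLP); node name and
    # secret are shown on the node's page in the Jenkins UI
    java -jar slave.jar \
      -jnlpUrl http://your-master:8080/computer/your-mac-node/slave-agent.jnlp \
      -secret your-secret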
Related
I have two different Jenkins instances with different versions.
One has version 1.609.1, and when I create a slave on it I see launch method options like:
"Launch slave agents on Unix machines via SSH"
"Launch slave agents via Java Web Start"
"Launch slave via execution of command on the master"
"Let Jenkins control this Windows slave as a Windows service"
I have another instance that is version 2.89.4, but the slave launch options differ: it has only "Launch slave agents via SSH". All the other options are the same.
Is this slave launching option something version-specific?
It is specific to the plugin, not the Jenkins version. Install the SSH Slaves Plugin on the instance that is missing the option.
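If you prefer the command line, the same plugin can be installed via the Jenkins CLI; the URL below is a placeholder for your own instance:

    # Install the SSH Slaves plugin by its plugin ID and restart Jenkins
    java -jar jenkins-cli.jar -s http://your-jenkins:8080/ install-plugin ssh-slaves -restart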
My Jenkins master is up and running. I have created a slave node, launched it successfully from the slave machine, and installed the agent as a service so that the connection is established on startup of the slave machine. I have also created a job that builds successfully in Jenkins.
How do I tell Jenkins what to actually do on my slave machine? I want to use Jenkins to run an IntelliJ test suite (Selenium and Cucumber) on the slave machine, but I haven't been able to figure out exactly how to do this. Note: I've just started looking into the SeleniumHQ plugin, but I'm not sure whether it's what I need, since I'm working with a remote slave.
Limit where the job can run by using the "Restrict where this project can be run" option in the job configuration and setting it to your slave node's label.
Distributed Builds in Jenkins
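As a quick sanity check that the restriction works, you can add an "Execute shell" build step that prints the node the build ran on; NODE_NAME and WORKSPACE are standard Jenkins build environment variables:

    # Prints which node is executing this build and where
    echo "Building on node: $NODE_NAME"
    echo "Workspace: $WORKSPACE"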
My confusion here stemmed from not having my project connected to a VCS repository. Without it, I couldn't figure out how to build out my project's workspace in the slave environment from Jenkins. I also didn't understand the concept of adding additional build steps at the time I asked this question.
Once I had the VCS connection set up (I had to do some finagling with Git/Visual Studio Team Services to get it connected, which is why I went with "none" as my version control option at first), my workspace was built for me on the slave machine when I built the project from Jenkins. Then I used a combination of build steps ("Execute Windows batch command" and "Invoke top-level Maven targets") to carry out the rest of the project's functions, as sketched below.
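For example, the Maven step boiled down to something like the following; the goals are specific to my project, so treat this as an illustration only:

    # Equivalent command line for the "Invoke top-level Maven targets" step:
    # compile the project and run the Selenium/Cucumber test suite
    mvn clean test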
Via the SSH Slaves plugin, we can have a Jenkins slave run specific jobs. But as I understand it, SSH alone is enough to execute commands, so why does Jenkins still need to run slave.jar (which requires installing Java)?
SSH is the communication mechanism between the master and slave machines.
The slave still has to run something to listen to the master and to do the actual builds. That Jenkins slave code is written in Java and stored in slave.jar.
So you need Java on the slave machine because the Jenkins slave software itself is written in Java. SSH is simply how the master tells the slave what to do.
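Conceptually, what the SSH Slaves Plugin does on connect is close to the following sketch (host and paths are illustrative; the plugin actually copies slave.jar over SFTP):

    # Copy the agent JAR to the slave, then start it; the stdin/stdout of
    # this remote process become the master<->slave communication channel
    scp slave.jar jenkins@build-slave:/home/jenkins/slave.jar
    ssh jenkins@build-slave java -jar /home/jenkins/slave.jar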
I've been reading about Jenkins master/slave configurations but I still have some questions:
Am I right that the slave Jenkins is not actually installed and started up the way the master Jenkins is? I assumed I would install one master Jenkins and another slave Jenkins in the same way, and that the master would then control the slave, e.g. through SSH. So I cannot view the slave Jenkins through a GUI?
The reason I have thought about adding a Jenkins slave on another VM is that the VM contains our application servers (many test environments). Deploying and starting/stopping application servers from the master Jenkins is a pain because the master and the application servers are on different machines. If I added a Jenkins slave to the machine where our application servers live, they would be deployed and started/stopped locally (by the slave Jenkins). I wonder if I have missed something, or if my assumptions are still valid.
In a standard Jenkins master/slave setup, Jenkins is only installed on the master. That is where you see the user interface and start/configure build jobs.
The slaves execute the jobs. There is no Jenkins installation there other than a small Java app that handles communication with the master. Jenkins talks to these slaves through the slave.jar app, e.g. over SSH via the SSH Slaves Plugin, and can monitor whether the slave is running, etc.
So in your case, you can start jobs from the master that will execute on the application servers.
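For example, a job restricted to that slave could redeploy with a plain shell build step; the paths and script names here are hypothetical:

    # Runs locally on the application-server slave (hypothetical paths)
    /opt/appserver/bin/stop.sh
    cp target/myapp.war /opt/appserver/deployments/
    /opt/appserver/bin/start.sh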
The master/slave setup also allows you to host a whole bunch of different slaves, with different OSes, different hardware, etc. You can pass job results (artifacts) from one slave to another via the Copy Artifacts Plugin.
There are also ways to duplicate the actual Jenkins master with load balancing in a heavy use scenario. That is not what you seem to be looking for.
We have a master Jenkins running on a Linux system. The same master is attached as a node using "Launch slave via execution of a command on the master". Its remote FS root is the same as JENKINS_HOME. The command is ssh "machine_name" "shell_script".
The shell script gets the latest slave.jar and runs it.
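Roughly, the script is equivalent to this (master URL simplified):

    # Fetch the newest slave.jar from the master, then run it; its
    # stdin/stdout flow back to the master through the ssh session
    curl -O http://jenkins-master:8080/jnlpJars/slave.jar
    java -jar slave.jar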
The master has 0 executors; the node has been given 7. I'm seeing weird behavior in the builds, such as workspaces being deleted once a day. I'm not sure whether this is related to the way the Jenkins master/slave is configured.
Any ideas if this is a supported configuration?