I am very new to CI systems like Jenkins.
I have a master Jenkins server running on Ubuntu. I have one Ubuntu slave (managed over SSH) that is dedicated to Android builds and already has the Android SDK and other tools available.
My question is: how can I tell the Jenkins master the path of the Android SDK on the slave server?
Go to Manage Jenkins -> Manage Nodes, and open the configuration page for the slave node. Add environment variables there; they will be applied only to that slave.
If you define environment variables that the SDK tooling picks up by default (such as ANDROID_HOME), this is enough. Otherwise you will also have to edit the job to use them (for example, in "Execute shell" build steps), as in the sketch below.
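For example, if the node configuration sets ANDROID_HOME to the SDK location on that slave, an "Execute shell" build step can reference it directly. A minimal sketch, assuming a Gradle-based project (the Gradle task is a placeholder for whatever your build actually runs):

    # ANDROID_HOME comes from the slave node's environment variable configuration.
    echo "Using Android SDK at: $ANDROID_HOME"

    # Make the SDK command-line tools available to the build.
    export PATH="$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$PATH"

    # Typical Gradle-based Android build; adjust to your project.
    ./gradlew assembleDebug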
I'm not familiar with the Android SDK specifically, but could you pass its path as a parameter to the build?
I was trying to understand Jenkins agents. This page asks you to first create a Jenkins Docker agent, but it doesn't say where to execute these steps (the command in question looks roughly like the sketch below).
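For reference, the agent-creation step boils down to running something like this (the image, URL, secret, and agent name are placeholders; the exact flags depend on the guide you follow):

    # Start an inbound (JNLP) agent container; the URL, secret, and name
    # are placeholders copied from the node's page on the controller.
    docker run --init jenkins/inbound-agent \
      -url http://my-jenkins-controller:8080 \
      -workDir=/home/jenkins/agent \
      <agent-secret> <agent-name>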
Q1. Should we be executing these steps on the node/machine that we want to designate as the agent?
The next step asks us to set up an agent through the Jenkins UI:
Q2. The above is just the Jenkins controller UI, right?
But the above UI does not seem to accept the IP address of the agent node on which we started the Docker agent.
Q3. Does the Jenkins controller automatically discover running agents reachable on the network?
Q4. What exactly are Jenkins plugins in relation to agents? The Jenkins glossary defines a plugin as "an extension to Jenkins functionality provided separately from Jenkins Core," but that does not explain much about their nature or functionality. This page also explains plugin installation and management on the controller, but doesn't explain exactly what they do.
Q4.1. Do plugins run jobs on agent nodes? For example, does the Android Emulator plugin installed on the controller install and run an Android emulator on an available agent?
Q4.2. If the answer to Q4.1 is yes, does every plugin need a corresponding process to be installed on the agent so that the agent can carry out the functionality specified by the plugin on the controller?
PS: I'm a noob at Jenkins and DevOps overall and am just trying to wrap my head around Jenkins.
My Jenkins master is up and running. I have created a slave node, launched it successfully from the slave machine, and have done the web services installation so that the connection is established on startup of the slave machine. I have also created a "job" that builds successfully in Jenkins.
How do I tell Jenkins what to actually do on my slave machine? I want to use Jenkins to run an IntelliJ test suite (Selenium and Cucumber) on the slave machine, but haven't been able to figure out exactly how to get it to do this. Note: I've just started looking into the Seleniumhq plug-in, but I'm not sure if this is what I need or not since I'm working with a remote slave.
Limit where the job can run by using the "Restrict where this project can be run" option and pointing it to your slave node.
Distributed Builds in Jenkins
My confusion here stemmed from not having my project connected to a VCS repository. Without it, I couldn't figure out how to build out my project's workspace in the slave environment from Jenkins. I also didn't understand the concept of adding additional build steps at the time I asked this question.
Once I had the VCS connection set up (I had to do some finagling with Git/Visual Studio Team Services to get it connected, which is why I went with "none" as my version control option at first), my workspace was built for me on the slave machine when I built the project from Jenkins. Then I used a combination of build steps ("Execute Windows batch command" and "Invoke top-level Maven targets") to carry out the rest of the project's functions, roughly as sketched below.
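In case it helps anyone else, those build steps were roughly equivalent to the following (shown as a shell-style sketch; the Maven goals and the Cucumber tag are placeholders for my project's actual configuration):

    # Sanity check: confirm which Java/Maven the slave will use.
    java -version
    mvn -version

    # Equivalent of the "Invoke top-level Maven targets" step:
    # run the Selenium/Cucumber test suite from the checked-out workspace.
    mvn clean test -Dcucumber.options="--tags @smoke"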
Currently, I am working on a quality process to ensure that the code is acceptable. For that, I'm integrating Jenkins, SonarQube, and GitLab, which are running on different servers (actually, in different Docker containers).
The idea is to run a SonarQube check every time code is pushed to GitLab, and to block commits, merges, and so on if the SonarQube check has not passed.
I have already integrated Jenkins with SonarQube, but Jenkins checks the code inside its own workspace, so imagine a situation where a developer needs to push changes from their laptop.
My conceptual question is simple: is it possible to integrate these technologies in order to do this? And, if the answer is yes, what steps are necessary?
PS: I don't need to see code, configuration files, and so on. I just need something like:
Configure SonarQube to work with Jenkins
Write a script to copy that file into that folder,
...
First, "in Docker" means each tool is in its own container.
They only need to see each other through the network, which is where a Docker Engine in Swarm mode comes in.
Second "configure Jenkins to work with SonarQube"... that is what I have done in my shop, and there isn't much to it.
Once the Jenkins SonarQube plugin is installed, and the address for the SonarQube server entered, you can configure your job and call sonar (for instance with maven: $SONAR_MAVEN_GOAL -Dsonar.host.url=$SONAR_HOST_URL)
The analysis done in the Jenkins workspace will then be published in the SonarQube server.
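As a concrete sketch, a Maven job's shell build step can be as simple as the following (SONAR_MAVEN_GOAL and SONAR_HOST_URL are the variables injected by the SonarQube plugin, as in the command above; SONAR_AUTH_TOKEN is an assumption standing in for however you provide credentials, if your server requires them):

    # Build the project and run the Sonar analysis from the Jenkins workspace.
    # The results are published to the SonarQube server configured in Jenkins.
    mvn clean verify $SONAR_MAVEN_GOAL \
      -Dsonar.host.url=$SONAR_HOST_URL \
      -Dsonar.login=$SONAR_AUTH_TOKEN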
A Swarm setup is the more modern equivalent of the 2015 docker-compose.yml file from the marcelbirkner/docker-ci-tool-stack project.
The idea remains the same though: each element is isolated in its own container.
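If you are not running Swarm, the same idea can be sketched with a plain user-defined Docker network, so the containers can reach each other by name (image tags and published ports below are assumptions; adjust to your versions):

    # One container per tool, all attached to the same network so that
    # Jenkins, SonarQube, and GitLab can resolve each other by container name.
    docker network create ci-net

    docker run -d --name jenkins   --network ci-net -p 8080:8080 jenkins/jenkins:lts
    docker run -d --name sonarqube --network ci-net -p 9000:9000 sonarqube
    docker run -d --name gitlab    --network ci-net -p 80:80 -p 443:443 gitlab/gitlab-ce

    # From Jenkins, the SonarQube server is then reachable at http://sonarqube:9000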
I haven't tried it myself, but https://gitlab.talanlabs.com/gabriel-allaigre/sonar-gitlab-plugin could be interesting in your setup.
I'm new to Jenkins, and I'd like to know whether it is possible to have one Jenkins server deploy/update code on multiple web servers.
Currently, I have two web servers, which use Python Fabric for deployment.
Any good tutorials will be greatly welcomed.
One solution could be to declare your web servers as slave nodes.
First, give Jenkins credentials to your servers (login/password, SSH login + private key, or certificate). This can be configured in the "Manage Credentials" menu.
Then configure the slave nodes (see the documentation).
Then create a multi-configuration job; you first have to install the Matrix Project plugin. This will allow you to send the same deployment instructions to both of your servers at once.
Since you are already using Fabric for deployment, I would suggest installing Fabric on the Jenkins master and having Jenkins kick off the Fabric commands to deploy to the remote servers. You could set up the hostnames or IPs of the remote servers as parameters to the build and just have shell commands that iterate over them and run the Fabric commands, as in the sketch below. You can take this a step further and have the same job deploy to dev/test/prod just by using a different set of hosts.
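A minimal sketch of such a parameterized shell build step (assuming a WEB_HOSTS build parameter holding a space-separated list of hostnames, and a deploy task defined in your project's fabfile):

    # Jenkins "Execute shell" build step (sketch).
    # WEB_HOSTS is a build parameter, e.g. "web1.example.com web2.example.com";
    # "deploy" is assumed to be a task in the project's fabfile.
    for host in $WEB_HOSTS; do
        echo "Deploying to $host"
        fab -H "$host" deploy
    done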
I would not make the web servers slave nodes; reserve slave nodes for build jobs. For example, if you need to build a Windows application, you will need a Windows Jenkins slave. If you have a problem with installing Fabric on your Jenkins master, you could create a slave node that is responsible for running Fabric deploys and force anything that runs a Fabric command to use that slave. I feel like this is overly complex, but if you have a ton of builds on your master, you might want to go this route.
Travis CI has a really nice feature: builds are run within VirtualBox VMs. Each time a build is started, the box is refreshed from a snapshot and the code is copied onto it. Any problems with the build cannot affect the host, and you can use any OS to run your builds on.
This would be really useful, for example, for compiling and testing code on a guest OS that matches your production environment. Also, you can keep your host free of any installation dependencies you might need (e.g. a database server) and run integration tests without worrying about things like port conflicts.
Does such a thing exist for Jenkins?
Check out the Vagrant plugin: https://wiki.jenkins-ci.org/display/JENKINS/Vagrant-plugin
This plugin allows booting Vagrant virtual machines, provisioning them, and executing scripts inside of them.
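Under the hood this corresponds roughly to the following Vagrant workflow, which you could also script yourself in a shell build step (the box is defined by your job's Vagrantfile, and run_build.sh is a placeholder for whatever your build actually runs):

    # Bring up a throwaway VM for this build, run the build inside it,
    # then destroy it so every build starts from a clean snapshot.
    vagrant up
    vagrant ssh -c "cd /vagrant && ./run_build.sh"   # /vagrant is the default shared folder
    vagrant destroy -f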
You can run Jenkins in a master/slave setup. Your master instance manages all the jobs but lets the slaves do the actual work. These slaves can be VMs or physical machines. Go to Manage Jenkins -> Manage Nodes -> New Node to add nodes to your Jenkins setup.
There are also the vSphere Cloud Plugin and the Scripted Cloud Plugin, which can be used for this purpose.