I have just started learning Jenkins deployment on Google Kubernetes Engine. I was able to successfully deploy an application to my GKE instance; however, I couldn't figure out how to manage nodes and clouds.
Any tutorial or guidance would be highly appreciated.
The underlying idea behind nodes: a single node may not be sufficient or effective for running multiple jobs, so jobs are distributed to other nodes to spread the load and achieve good performance.
Prerequisites
#1 : An instance (let's call it Dev) which hosts Jenkins (git, maven, Jenkins)
#2 : An instance (let's call it Slave) which will serve as the host machine for our new node
Java must be installed on this machine.
A passwordless SSH connection should be established between the two instances.
To achieve this, enable password authentication, generate a key pair on the main (Dev) machine, and copy the public key onto the Slave machine.
Create a directory "workspace" on the Slave machine (/home/ubuntu/workspace)
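The prerequisite steps above can be sketched as a small script run on the Dev machine. This is only a sketch under assumptions: the ubuntu user and the private IP are placeholders for your own Slave machine, and it prints commands by default (dry run) so nothing is executed until you set DO=1.

```shell
#!/bin/sh
# Sketch of the prerequisite setup, run on the Dev (Jenkins) machine.
# SLAVE is a placeholder -- substitute the Slave machine's user and private IP.
# Dry run by default: commands are printed, not executed; set DO=1 to run them.
SLAVE="${SLAVE:-ubuntu@10.128.0.5}"
run() { if [ "${DO:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# 1. Generate a key pair on Dev (no passphrase) if one does not already exist.
run ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/id_rsa"

# 2. Copy the public key to the Slave so Dev can log in without a password.
run ssh-copy-id "$SLAVE"

# 3. Create the agent's remote root directory on the Slave.
run ssh "$SLAVE" "mkdir -p /home/ubuntu/workspace"
```

After this, `ssh ubuntu@<private_IP>` from Dev should log in without prompting for a password.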
Now Let's get started with Jenkins part -
Go to Manage Jenkins > Manage Nodes and Clouds.
By default Jenkins contains only the master node.
To create a new node, use the "New Node" option available on the screen.
Provide a name for the new node and mark it as a permanent agent.
Define the remote root directory: the workspace directory you created on the Slave machine, e.g.
/home/ubuntu/workspace
Provide a label of your choice, for example:
Label = slave_lab
Now define your launch method.
Let's select "Launch agent via execution of command on the master".
As the command, put:
ssh ubuntu@private_IP_of_slave java -jar slave.jar
Note: here private_IP_of_slave means the IP of the machine that will host our new node, and ubuntu is the login user on that machine.
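One step this tutorial glosses over: the launch command assumes slave.jar is already present on the Slave. A hedged sketch of fetching it once from the Jenkins controller (the host names are placeholders, and newer Jenkins versions name the file agent.jar):

```shell
#!/bin/sh
# Fetch slave.jar onto the Slave before the launch command can work.
# SLAVE and JENKINS_URL are placeholders for your own hosts.
SLAVE="${SLAVE:-ubuntu@10.128.0.5}"
JENKINS_URL="${JENKINS_URL:-http://dev-machine:8080}"

# /jnlpJars/slave.jar is served by the Jenkins controller itself.
cmd="ssh $SLAVE curl -sO $JENKINS_URL/jnlpJars/slave.jar"

# Dry run by default: print the command; set DO=1 to execute it.
if [ "${DO:-0}" = "1" ]; then $cmd; else echo "+ $cmd"; fi
```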
Now we can proceed to configure jobs to run on our new node.
For that, open your job > select Configure.
Under the General tab select
"Restrict where this project can be run" and provide the label "slave_lab"
Now when you run the job it will be executed on the slave node, not on the master node.
I have multiple Unix servers where I need to stop and start a few services (the service names are the same on all servers, and the login user and password are also the same). I am able to restart services on a single Unix server using "Execute shell script on remote host using ssh", but I'm not able to do it for multiple servers.
Ex: Server 1 and server 2 (Both are unix servers)
script file name: sample.sh
The order in which this script should run from Jenkins:
stop service in Server1 using sample.sh script
stop service in Server2 using sample.sh script
start service in Server1 using sample.sh script
start service in Server2 using sample.sh script
Please let me know how to achieve this using Jenkins. I have done it by creating 4 jobs for the 4 steps and then pipelining them, but in reality I have more than 10 servers and I believe this is not a good way to do it.
I think your best approach would be using the Matrix Project plugin. We use it to run administration tasks on all nodes matching a given label in parallel.
Trivially, you can use one matrix job to stop the service on all nodes and, when done, trigger a second job to start it on all nodes.
It has lots of extension points defined as well.
From the notes:
You have to choose "Build multi-configuration project" when creating a project; it cannot be changed later. If you skip this step, you will be very confused and not get very far.
Each configuration is akin to an individual job. It has its own build history, logs, environment, etc. The history of your multi-config job only shows you a list of configurations executed. You have to drill into each configuration to see its history and console logs.
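If a full Matrix job feels heavy, a single free-style job with one Execute-shell step can also produce the stop-all-then-start-all ordering from the question. This is only a sketch under assumptions: the host list, the appuser account, and the path to sample.sh are placeholders, and sample.sh is assumed to accept stop/start as an argument:

```shell
#!/bin/sh
# Stop the service on every server, then start it on every server.
# SERVERS, appuser, and the sample.sh path are placeholders for your setup.
SERVERS="server1 server2"            # extend to all 10+ hosts
SSH="${SSH:-echo ssh}"               # dry run by default; set SSH=ssh to execute

for host in $SERVERS; do
    $SSH "appuser@$host" "/opt/scripts/sample.sh stop"
done
for host in $SERVERS; do
    $SSH "appuser@$host" "/opt/scripts/sample.sh start"
done
```

Because the stop loop completes before the start loop begins, this preserves the four-step ordering with one job instead of four.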
I have a web site (site A) deployed on machine A, which depends on a service (service B) deployed onto Machine B.
Machine A and Machine B are in the same deployment group, differentiated by tags (App and Service respectively), and I have two deployment phases (one for each tag) pushing the code out to the respective boxes.
I need to write a value into the configuration of Site A to tell it the location of Service B.
Is there a way of discovering the name of the machine that Service B was deployed to, to keep my deployment truly dynamic?
Put another way, can I discover the name of a machine with a given deployment tag and use it in a variable?
I've tried running local PowerShell on the deployment agents to update a variable, but that update doesn't seem to make it back to the controlling agent, so it can't pass the values across between machines.
My fallback is just to use known server names and write the values into configuration but that feels like a massive hack given how dynamic the rest of the system is.
I'm using TFS 2018 on-prem - the GUI based deployment pipeline (no YAML)
There are predefined agent variables that allow you to reference the machine name in your pipeline.
1. You can reference the machine name by wrapping a predefined variable in "$()", e.g. "$(Agent.MachineName)" or "$(Agent.Name)".
This method gets the agent name from the Agent.Name property in the agent's capabilities.
2. There is another workaround: add a PowerShell task with the script below to get the name of the local machine hosting the agent and assign it to a variable.
You need to define a variable(eg.MachineName) in the Variables tab of your pipeline
echo "##vso[task.setvariable variable=MachineName]$([System.Net.Dns]::GetHostName())"
The second method gets the machine name from the on-premises computer's properties.
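For completeness, on a Linux build agent the same ##vso logging command works from a shell task as well; this is a sketch where `hostname` stands in for the .NET call:

```shell
#!/bin/sh
# Same idea as the PowerShell task, for a shell task on a Linux agent.
# The ##vso logging command tells the agent to set the pipeline variable
# MachineName to the name of the machine hosting the agent.
msg="##vso[task.setvariable variable=MachineName]$(hostname)"
echo "$msg"
```

Subsequent tasks in the same phase can then read the value as $(MachineName).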
I'd like to be able to dynamically provision docker child nodes for builds and have the configuration / setup of those nodes be part of the Jenkinsfile groovy script it uses.
Limitations of the current setup of jobs means Jenkins has one node/executor (master) and I'd like to support using Docker for nodes to alleviate this bottleneck.
I've noticed there are two ways of using a docker container as a node:
You can use the agent section in your pipeline file which allows you to specify an image to use. As part of this, you can target a specific node which supports running docker images, but I haven't gotten that far as to see what happens.
You can use the Jenkins Docker Plugin which allows you to add a Docker Cloud in Jenkins' configuration. It allows you to specify a label which, when used as part of a build, will spawn a container in that "cloud" from the image chosen in the cloud configuration. In this case, the "cloud" is the docker instance running on the Jenkins server.
Unfortunately, it doesn't seem like you can use both together - using the label but specifying a docker image in the configuration (1) where the label matches a docker cloud template configuration (2) does not seem to work and instead produces a label not found error during the build.
Ideally I'd prefer the control to be in the pipeline groovy file so the configuration is stored with the application (1), not with the Jenkins server (2). However, it suggests that if I use the agent section and provide a docker image, it still must target an existing executor first (i.e. master) which will cause other builds to queue until the current build is complete.
I'm at a point of migrating builds, so not all builds can support using a docker container as the node yet, and builds will have issues when run in parallel on the master node.
Is there a way for a docker pipeline file to determine the image of the child node it runs on?
There are a few options I have considered but not attempted yet:
Migrate jobs to run on the "docker cloud" until all jobs support running on child container nodes, then move the configuration from Jenkins to the pipeline build file for each job and turn on parallel builds on the master node.
Attempt to add a new node configuration which is effectively a copy of master (uses the same server, just different location). Configure it to support parallel builds, and have all migrated jobs target the node explicitly during builds.
Is it possible to provision multiple vms from an agent template/snapshot and access them within a Jenkins job? Or does this limit have to be known ahead of time and each pre-provisioned and connected to Jenkins?
Reading the documentation on Distributed Builds and vsphere plugin I have this perception that I could have a template VM from which I dynamically provision as many clones as I need (limited by concurrent build limits) and connect and build on those - however when it comes to implementation I have two problems:
1) The agent tries to connect to the same nodes defined in /computer (the clones have static IPs, so there are lots of conflicts there)
2) If I name the VM clone something else, the label is not recognized as a valid node (i.e. cloning a VM attached as node 'Agent1' to 'Agent2': using the label 'Agent2' does not connect to the new VM, since Agent2 is not a valid node)
You can have Jenkins create new nodes from a template and name them with an incrementing counter (i.e. with the prefix 'WinAgent-' it creates 'WinAgent-1', 'WinAgent-2', etc., and they show up as new nodes under the executors).
1) This is an issue of static IPs. Use VM customization to change the IP, or set up DHCP. Use the VM options to pass the agent name into the guest; for example, on ESX use vmtoolsd --cmd "info-get guestinfo.SLAVE_JNLP_URL". Then use a script to start the agent with the parameters read from the VM options.
2) When the vSphere cloud is set up under 'Manage Jenkins > Configure System', the system will create new nodes automatically. All you have to do in the script is use the configured label.
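A hedged sketch of the start-up script from step 1, baked into the VM template: guestinfo.SLAVE_JNLP_URL is the key named in the answer, while guestinfo.SLAVE_SECRET and the URL handling are my own assumptions, not confirmed by the plugin's documentation.

```shell
#!/bin/sh
# Agent start-up script inside the VM template. SLAVE_SECRET is an assumed
# companion guestinfo key; set both keys on each clone via its VM options.
JNLP_URL="${SLAVE_JNLP_URL:-$(vmtoolsd --cmd 'info-get guestinfo.SLAVE_JNLP_URL' 2>/dev/null)}"
SECRET="${SLAVE_SECRET:-$(vmtoolsd --cmd 'info-get guestinfo.SLAVE_SECRET' 2>/dev/null)}"

if [ -n "$JNLP_URL" ]; then
    # Derive the controller URL from the JNLP URL and fetch the agent jar.
    JENKINS_URL="${JNLP_URL%%/computer/*}"
    curl -sO "$JENKINS_URL/jnlpJars/agent.jar"
    java -jar agent.jar -jnlpUrl "$JNLP_URL" -secret "$SECRET"
else
    echo "guestinfo.SLAVE_JNLP_URL not set; nothing to connect"
fi
```

Since the clone's JNLP URL encodes its node name, the same template works for WinAgent-1, WinAgent-2, and so on without per-clone changes.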
I use JMeter to generate a huge load against my web server. Some slave machines act as JMeter servers, and another one acts as the JMeter master that coordinates the load and collects statistics from the slaves.
Now I'm trying to integrate this system to CI (Jenkins).
That's how I do it now: I have two separate Jenkins jobs. One prepares all the slaves by running jmeter-server; the other runs the JMeter master itself. All is fine with the second part: I successfully generate traffic and collect statistics. The issue is with the first job. I have a huge set of slaves that can be rebooted at any time, so I can't run the job that starts jmeter-server once and forget about it; I need to run it every time before the JMeter master.
But in that case, on some machines (those that were not rebooted) I end up with multiple copies of the java process (multiple jmeter-server instances).
So, I'm looking for a mechanism to start jmeter-server on slave nodes in a proper way.
Any ideas appreciated.
Thank you in advance!
Read this:
https://dzone.com/articles/distributed-performance
It combines:
JMeter
Maven Lazery JMeter plugin
Jenkins
All you have to do for the JMeter slaves is start them from Jenkins using jmeter-server.sh; you might want to tweak the port if you have two slaves on the same host.
Then from the controller you reference those host machines (in this case the default port is used):
remote_hosts=test-server-1.nerdability.com,test-server-2.nerdability.com,test-server-3.nerdability.com
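For the "prepare slaves" job specifically, killing stale copies before starting a fresh one keeps java processes from piling up on machines that were not rebooted. A sketch, with the JMeter path and port as placeholders (stock JMeter distributions name the script bin/jmeter-server rather than jmeter-server.sh):

```shell
#!/bin/sh
# Idempotent "prepare slave" step: kill any leftover jmeter-server first so
# re-running the job never piles up java processes. Path/port are placeholders.
JMETER_HOME="${JMETER_HOME:-/opt/apache-jmeter}"
PORT="${PORT:-1099}"

# Stop any jmeter-server left over from a previous run (ignore "none found").
pkill -f jmeter-server || true

# Start a fresh jmeter-server in the background on the chosen RMI port.
nohup "$JMETER_HOME/bin/jmeter-server.sh" -Dserver_port="$PORT" \
    > /tmp/jmeter-server.log 2>&1 &
msg="jmeter-server restarted on port $PORT"
echo "$msg"
```

Running this step on every slave before each JMeter-master run makes the job safe to repeat, which was the sticking point in the question.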