I am setting up a Jenkins environment with 100 slaves, so I'm trying to create 100 nodes.
Is it okay to use the same agent.jar file with every slave?
I have a Jenkins server (2.204.1) with the Docker plugin (1.1.9) and a Docker cloud API.
I work with Jenkins Docker agents (slaves), and I map the Docker slave's build workspace between the container and the host so that I can pass artifacts to downstream jobs.
In Jenkins Configuration - Docker Cloud Details - Container settings:
Volumes: /var/lib/jenkins:/var/lib/jenkins
This works fine for a single build. The problem starts when I run concurrent builds: they are all mapped to the same workspace on the Docker host and interfere with each other.
What is the best practice when using Docker slaves and mapping the workspace as a volume?
I would rather not use $CustomWorkspace or copy artifacts during the build, as this is hard to manage and purge.
I prefer the regular Jenkins slave approach of appending @2 to the workspace of a second concurrent build, but this is not the behavior when running concurrent builds on Docker slaves.
One remote Jenkins agent has no way of knowing whether a given workspace directory is in use by another agent running on the same machine. This is equally true for docker-based agents that share a common directory via volume mounting. Ideally, all agents working from the same machine would have some way of talking to each other to keep from stepping on each other's toes (e.g. a lockfile in the workspace that gets removed upon job termination), but this is not currently the case.
Solution #1: Unique Build Workspaces
If we are using Jenkins pipelines, we can append a unique subdirectory to the workspace directory on a per-build basis. This solution is clean, simple, and easy to implement.
agent {
    node {
        // 'docker-agent' is a placeholder; use the label of your own agents (a label is required inside node)
        label 'docker-agent'
        customWorkspace "${env.BUILD_NUMBER}"
    }
}
Ref: https://www.jenkins.io/doc/book/pipeline/syntax/#agent
Solution #2: Unique Agent Workspaces
If this is not possible or desirable, another potential solution is to change the root working directory of the Jenkins agent itself, which can be done by supplying an additional argument to the agent's startup command:
-workDir FILE : Declares the working directory of the
remoting instance (stores cache and logs by
default)
Source: java -jar agent.jar -help
When spinning up multiple agents dynamically on the same machine, we can set this -workDir value to something with a bit more uniqueness to give each agent its own directory to work out of, effectively mitigating workspace collisions. Something like this should work well:
java -classpath agent.jar hudson.remoting.jnlp.Main -headless \
-workDir /var/lib/jenkins/workspace/$(date +%3N) ...
The magic is in the $(date +%3N), which returns the first three digits of the system clock's nanoseconds field (i.e., millisecond resolution). We may want to use more or fewer digits because there's a tradeoff: more digits allow a larger maximum number of workspace directories and decrease the risk of workspace collisions; fewer digits have the opposite effect - fewer possible directories, increased collision risk.
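One alternative sketch, assuming the same startup command as above (this variation is not part of the original answer), is to let mktemp create a guaranteed-unique directory for each agent instead of relying on clock digits:

# mktemp -d creates the directory and prints its path, so no two agents can ever share a workDir
java -classpath agent.jar hudson.remoting.jnlp.Main -headless \
    -workDir "$(mktemp -d /var/lib/jenkins/workspace/agent-XXXXXX)" ...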
How this command is configured will vary based on your Jenkins setup. For example, we are using the Docker Swarm plugin (v1.9) on Jenkins 2.249.3. Our agent command is configurable at Manage Jenkins >> Manage Nodes and Clouds >> Configure Clouds >> Docker Swarm Cloud Configuration >> Docker Agent templates >> Command.
Ref: https://man7.org/linux/man-pages/man1/date.1.html
I have around 100 Linux servers that need to be added to a Jenkins master. The situation is that I need to add them via Copy Existing Node, and the Jenkins master must not be shut down or restarted.
I don't want to do this manually a hundred times. Is there any way to automate such a request? Thank you in advance.
You could script this (self-automate). The Jenkins agent configuration files live in the nodes subdirectory of the Jenkins home directory. You'd create a subdirectory for each node and put a config.xml file with that node's configuration inside it. I recommend that you shut down your Jenkins server while doing this; we've observed Jenkins deleting things when this is done while it is running. Use an existing agent's config.xml file as a template. Assuming all of your servers are configured the same, you need only update the name and host tags, which can be automated using sed, as in the sketch below.
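A rough sketch of that automation (hosts.txt with "name host" pairs and template.xml, a copy of an existing node's config.xml, are assumed placeholders):

# Generate one $JENKINS_HOME/nodes/<name>/config.xml per server from the template,
# substituting the name and host tags with sed.
while read name host; do
  mkdir -p "$JENKINS_HOME/nodes/$name"
  sed -e "s|<name>.*</name>|<name>$name</name>|" \
      -e "s|<host>.*</host>|<host>$host</host>|" \
      template.xml > "$JENKINS_HOME/nodes/$name/config.xml"
done < hosts.txt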
Update with zero-downtime:
CloudBees has a support article for creating a node using the REST API. If you'd prefer to use the Jenkins CLI, there is an example shell script for that as well. Neither of these approaches requires restarting Jenkins.
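As a hedged illustration (this is not the CloudBees article or the linked script; the URL, credentials, node.xml template, and hosts.txt are placeholders), the CLI's create-node command reads a node's config.xml from stdin, so the same sed substitution can feed it directly:

# Create each node on a live Jenkins master via the CLI, no restart needed.
while read name host; do
  sed -e "s|<name>.*</name>|<name>$name</name>|" \
      -e "s|<host>.*</host>|<host>$host</host>|" node.xml |
  java -jar jenkins-cli.jar -s http://your-jenkins:8080/ -auth user:apitoken create-node "$name"
done < hosts.txt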
Goal:
I would like to use the Amazon EC2 Plugin to add dynamic slaves to Jenkins based on the load.
Architecture:
Jenkins Master + 4 slaves + dynamic slaves (based on the requirement)
The 1st job runs on a dynamic slave (no concurrent jobs) - label1 (ami-12345)
The 2nd job runs concurrently on dynamic slaves - label2 (ami-23314)
These two have different AMIs and different labels.
PROBLEM:
The first job is able to spin up an instance and execute the job; everything looks good. If I run the 2nd job, Jenkins is also able to spin up an instance. However, when jobs are queued up it does not add new slaves, even though I set the instance cap to 4 for that AMI.
Jenkins v1.656
Amazon EC2 plugin v1.31
I tried reducing the number of executors on the master and running the job again, but no luck. I then switched to a slightly smaller EC2 instance size and increased the number of executors (in order to put more load on the slave). The job waited for a couple of minutes (~5 minutes) and then another slave was started.
Solution:
The cluster has to stay overloaded for more than a couple of minutes before the EC2 plugin adds a new dynamic slave.
I use JMeter to generate a huge load against my web server. Some slave machines act as JMeter servers, and another one acts as the JMeter master that coordinates the load and collects statistics from the slaves.
Now I'm trying to integrate this system into CI (Jenkins).
Here's how I do it now. I have two separate Jenkins jobs: one of them prepares all the slaves by running jmeter-server, and the other runs the JMeter master itself. All is fine with the 2nd part: I successfully generate traffic and collect statistics. The issue is with the 1st job. I have a huge set of slaves that can be rebooted at any time, so I can't run the job that starts jmeter-server once and forget about it. I need to run this job every time before the JMeter master.
But in this case, on some machines (those that were not rebooted) I end up with multiple copies of the java process (multiple jmeter-server copies).
So, I'm looking for a mechanism to start jmeter-server on slave nodes in a proper way.
Any ideas appreciated.
Thank you in advance!
Read this:
https://dzone.com/articles/distributed-performance
It combines:
JMeter
Maven Lazery JMeter plugin
Jenkins
All you have to do for the JMeter slaves is start them from Jenkins using jmeter-server.sh; you might want to tweak the port if you have two slaves on the same host. A rough sketch of such a build step is shown below.
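A minimal build-step sketch for the slave job (the JMeter path and the kill-before-start behavior are assumptions, not taken from the linked article):

# Stop any jmeter-server left over from a previous run, then start a fresh one.
pkill -f ApacheJMeter || true
# BUILD_ID=dontKillMe keeps Jenkins' process tree killer from reaping the background process;
# add -Dserver_port=<port> if two slaves share a host.
BUILD_ID=dontKillMe nohup /opt/jmeter/bin/jmeter-server.sh > jmeter-server.log 2>&1 &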
Then from the controller you reference those host machines (in this case the default port is used):
remote_hosts=test-server-1.nerdability.com,test-server-2.nerdability.com,test-server-3.nerdability.com
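Equivalently, when the controller job runs the test in non-GUI mode, the slave list can be passed on the command line with -R, which overrides remote_hosts (the test plan and results file names are placeholders):

# Non-GUI run that distributes the test plan to the listed jmeter-server hosts.
jmeter -n -t load-test.jmx -l results.jtl \
    -R test-server-1.nerdability.com,test-server-2.nerdability.com,test-server-3.nerdability.com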
I have Jenkins installed on my machine. I have a batch file which is located on another machine and I want to run it in my Jenkins job.
What steps are required to do this?
If you want to run the batch file on the other machine, the solution is to split your job into two jobs:
One running on your machine
The second running on the other machine to launch your batch file
The other solution is to store your batch file in SVN or Git and pull it onto your machine with your Jenkins job.