I have two servers, X.X.X.X and Y.Y.Y.Y, and a Jenkins job that first executes a job on X.X.X.X and then executes another job on Y.Y.Y.Y.
To be more precise: I build on server X.X.X.X, then a job starts on Y.Y.Y.Y, and on Y.Y.Y.Y it uses the mapped results of the build on X.X.X.X.
I want to delete the workspace on X.X.X.X after a particular command has finished on Y.Y.Y.Y.
How can I do that?
There isn't a great way to do this.
You could create a job on X.X.X.X that deletes the workspace and have it triggered from Y.Y.Y.Y via the Parameterized Remote Trigger plugin.
Having said that, I think you should take a hard look at why you are doing this and eliminate the need rather than work around it.
You're going to run into problems when jobs are already running: either the workspace cannot be deleted (causing errors), or the workspace is deleted while a job is running (also causing errors).
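As a sketch of that approach: the last build step on Y.Y.Y.Y could hit the cleanup job's remote-trigger endpoint on X.X.X.X. The job name, token, and credentials below are all hypothetical; adjust them to your setup.

```shell
#!/bin/sh
# A minimal sketch, assuming a cleanup job exists on X.X.X.X.
JENKINS_X="http://x.x.x.x:8080"
CLEAN_JOB="clean-workspace-x"   # hypothetical job whose only build step wipes the workspace
TOKEN="some-trigger-token"      # "Trigger builds remotely" token configured on that job

# Build Jenkins' standard remote-trigger URL.
trigger_url() {
    printf '%s/job/%s/build?token=%s\n' "$1" "$2" "$3"
}

# As the last step of the job on Y.Y.Y.Y, after the special command:
# curl -fsS -X POST --user user:apitoken "$(trigger_url "$JENKINS_X" "$CLEAN_JOB" "$TOKEN")"
trigger_url "$JENKINS_X" "$CLEAN_JOB" "$TOKEN"
```

Note the caveat above still applies: this trigger fires regardless of whether something on X.X.X.X is still using the workspace.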
Related
Who does the actual cloning of the project: the master or the agent node? If it is the master, how does the agent node actually execute the job? If it is the agent node, how can we view the workspace in the browser?
When people ask "where is the workspace" the answer is usually a path, but I am more interested in where that path is, on the master or the agent node? Or maybe it is both?
Edit1
Aligned terminology with https://jenkins.io/doc/book/glossary/ to avoid confusion.
In a Jenkins setup, all machines are considered nodes. The master node connects to one or more agent nodes, and executors can run on both the master and agent nodes.
In my scenario, no executors run on the master. They are run only on the agent nodes.
The answer is: it depends!
First of all, although it is not good practice IMO, some installations let the master be an actual worker and run jobs. In this case, the workspace will be on the master.
If you configured the master not to accept jobs, there are still occasions when a workspace can be created on the master. A good example is when your job is a "pipeline script from SCM". In this case, the master will create a workspace for the job, clone the target repo, read the pipeline, and start the needed jobs on whatever slave is targeted, creating a workspace there to run the actions themselves. If the pipeline targets multiple slaves, there will be a workspace on each of them.
In simple situations (e.g. a Maven or freestyle job), the workspace will only be on the targeted slave.
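The multi-slave case can be sketched with a hypothetical declarative Jenkinsfile (the labels and commands are made up): the master checks out the repo to read this file, and each stage then gets its own workspace on the agent it targets.

```groovy
pipeline {
    agent none                        // claim no workspace for the pipeline as a whole
    stages {
        stage('Build') {
            agent { label 'slave-a' } // a workspace is created on slave-a
            steps { sh 'make' }
        }
        stage('Test') {
            agent { label 'slave-b' } // a second workspace is created on slave-b
            steps { sh 'make test' }
        }
    }
}
```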
I needed to dig a bit deeper to understand this.
I ran a brand new instance of Jenkins and I attached a single agent node. I used SSH and I set the remote (agent) root directory to: /home/igorski/jenkins
As soon as I attached the node the remoting folder and remoting.jar showed up in that root directory.
I ran a basic Gradle Java pipeline job (Jenkinsfile in the project).
The workspace showed up on the agent, not on the master.
From the Jenkins GUI I can access the workspace and see its contents.
The moment I kill the agent machine, I can no longer view the workspace in Jenkins.
My guess is that the remoting.jar somehow does a live sync.
I also ran a freestyle project and can confirm the same: as soon as the agent is killed, I can no longer open the workspace, and I get an error stack trace:
hudson.remoting.Channel$CallSiteStackTrace: Remote call to JenkoOne
This was much more obvious with the pipeline job, though. There you get a link to the agent that you need to click in order to see the contents. As soon as the agent is gone, the link is disabled, and you know exactly which agent the workspace is on. With freestyle jobs, you just get a Workspace link; there is no indication of which agent it is on or whether the agent is accessible at the moment.
So, both Zeitounator and fabian were correct.
(I'm new to Jenkins and curl, so please forgive any imprecision below.)
I am running a Jenkins job on one network that sends a curl command to a second network, which is used to start a Jenkins job on that second network.
Sometimes I have to log onto that second network and restart the job using the Rebuild button provided by the Rebuild plugin.
I need to know how to determine whether the job on the second network was started by the original curl command or restarted via the Rebuild plugin, without the user having to do anything but restart the job with the same parameters.
I could use an extra boolean parameter in the job on the second network that is set to true by the curl command and to false when using the Rebuild button, but that requires the user to manually change the value of that parameter. I don't want the user to have to do that.
I think this is only possible by using different users, or let's say a dedicated user for the remote invocation. Anything more informative would have to be done manually, for instance with a dedicated parameter for the job. Normally, jobs are started by developers, schedulers, hooks, or by other jobs on the same instance. If triggered remotely, the job is triggered by a user as well; without authentication it is the anonymous user who triggers it.
Do you know the Jenkins CLI? It could replace your curl commands:
https://jenkins.io/doc/book/managing/cli/
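As a hedged sketch of the "who triggered it" check: a build's causes can be read back from Jenkins' REST API. A remotely triggered build typically carries hudson.model.Cause$RemoteCause, while a build started from the UI (which is how the Rebuild button starts one) typically carries hudson.model.Cause$UserIdCause. The host, job name, credentials, and sample response below are all hypothetical.

```shell
#!/bin/sh
# Extract the first Cause class from a build's JSON. The sample response
# below is illustrative; against a live instance you would fetch it with:
# curl -fsS --user user:apitoken \
#   'http://second-jenkins/job/myjob/lastBuild/api/json?tree=actions[causes[_class]]'
cause_class() {
    printf '%s' "$1" | tr ',' '\n' | grep -o 'hudson\.model\.Cause\$[A-Za-z]*Cause' | head -n 1
}

sample='{"actions":[{"causes":[{"_class":"hudson.model.Cause$RemoteCause"}]}]}'
cause_class "$sample"   # prints hudson.model.Cause$RemoteCause
```

Combined with the dedicated-user idea above, the userId inside the cause would also tell you which account fired the remote trigger.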
My job failed for some reason. I want to go to the machine that actually ran the job.
How do I know if a job is configured to use slaves?
How do I know which slave was used in that job?
By default, all jobs are able to use all nodes, including the master as an executor. If you want to lock a job down to a particular slave, you can do so by selecting it under Job > Configure > General > Restrict where this project can run.
To see which host a job ran on, click on the build number and in the top right it will say 'Started ~duration~ on ~host~', where ~host~ is your slave (or master).
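For reference, that restrict field accepts a label expression rather than just a single node name; the node and label names below are made up:

```
slave-01              pin the job to the node named slave-01
linux && !slave-02    any node labelled linux, except slave-02
windows || mac        any node labelled windows or any node labelled mac
```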
I'm trying to run a simple Windows batch command (say, copy) on the master inside a job that's set to run on a particular slave.
What I'm trying to accomplish is to copy the build log that gets saved on the master to a shared drive that's accessible from the master. Please advise.
You are going to have to make the Jenkins file system visible on the client independently of Jenkins. Since you have a Windows client, you are probably going to have to set up sharing from the Jenkins master using Samba or something of the sort.
What I do instead when I need assets from the master: I use curl or wget to download the assets to the clients. You can use the FSTrigger plugin to start builds when the file changes on the Jenkins master. Once curl or wget has run, your asset is in the %WORKSPACE% directory and you can proceed.
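A sketch of that curl approach, using Jenkins' standard artifact URL; the job name, artifact path, and credentials are hypothetical, and the file must have been archived as an artifact by the upstream job.

```shell
#!/bin/sh
# Build the URL of an artifact archived by the latest successful build.
artifact_url() {
    printf '%s/job/%s/lastSuccessfulBuild/artifact/%s\n' "$1" "$2" "$3"
}

# Run this from a build step so the file lands in $WORKSPACE:
# curl -fsSL --user user:apitoken -o "$WORKSPACE/asset.zip" \
#   "$(artifact_url "$JENKINS_URL" upstream-build asset.zip)"
artifact_url "http://jenkins.example.com" upstream-build asset.zip
```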
I would recommend handling the logfile copying (and maybe further tasks) as a dedicated job (let's call it "SaveLog"). SaveLog should be tied to run only on the master.
You should then configure SaveLog to be triggered after completion of your primary job.
The logfile is already available on the master, even if you do not save any artifacts.
Should you need further files from the slave workspace, you should save them as artifacts. SaveLog (on the master) can then still decide whether to do anything useful with those artifacts.
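A minimal sketch of SaveLog's only build step, assuming the primary job passes its name and build number as parameters (the parameter names and share path are hypothetical); on the master, a build's console log lives at $JENKINS_HOME/jobs/&lt;job&gt;/builds/&lt;number&gt;/log.

```shell
#!/bin/sh
# Copy an upstream build's console log from the master's build directory
# to a mounted share. All paths and parameter names here are assumptions.
save_log() {
    jenkins_home="$1"; job="$2"; build="$3"; share="$4"
    cp "$jenkins_home/jobs/$job/builds/$build/log" "$share/$job-$build.log"
}

# In the real SaveLog job (running on the master):
# save_log "$JENKINS_HOME" "$UPSTREAM_JOB" "$UPSTREAM_BUILD" /mnt/share/logs
```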
Can someone direct me here? I have a simple job configured in Jenkins in a Windows environment (master and all slaves running on Windows), and the job is supposed to run on a particular slave. When you build the job, the build log (log.log) gets stored in "%JENKINS_HOME%\jobs\<jobName>\builds\%BUILD_NUMBER%\" on the master.
I do have a Jenkins workspace (which is required when you add a slave node) set on the slave for this job, where nothing gets stored when the job runs.
With this scenario, I would like to copy the build log (the log.log file that's available on the master) to a shared drive. Please advise me on the way to get this done. I have tried a few plugins ("Copy To Slave", "Copy Artifact", and "ArtifactDeployer") but could not get them working to meet what I need.
Use a second build step with the execute batch option. Put the copy command there to copy the log to another location.
The following command kind-of works:
curl ${BUILD_URL}consoleFull -o ${TargetDir}/Log.txt
where
TargetDir="${WORKSPACE}/Directory/target"
BUILD_URL and WORKSPACE are set by Jenkins. Unfortunately, Jenkins doesn't copy the whole log. I've tried consoleText and gotten the same result: partial log files. :-(
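A likely cause of the truncation: a build that fetches its own console log only sees what has been written so far, so the tail (including the curl step itself and everything after it) is always missing. One sketch of a workaround is to fetch the log of an already-finished build instead, e.g. from a downstream job that receives the build number as a parameter (the job name, parameter, and credentials below are hypothetical):

```shell
#!/bin/sh
# Build the consoleText URL for a specific, finished build.
console_url() {
    printf '%s/job/%s/%s/consoleText\n' "$1" "$2" "$3"
}

# From a downstream job, with UPSTREAM_BUILD passed as a parameter:
# curl -fsS --user user:apitoken -o "${TargetDir}/Log.txt" \
#   "$(console_url "$JENKINS_URL" myjob "$UPSTREAM_BUILD")"
console_url "http://jenkins.example.com" myjob 42
```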