I have a Jenkins job that is a declarative pipeline. Its URL is $JENKINS_URL/job/dnscheck/.
I started using Bitbucket projects on Jenkins, and that Jenkinsfile is now also discovered automatically, so the same job lives at $JENKINS_URL/job/website/job/dnscheck/job/master/ as well.
I want to copy the entire history (log files etc) from $JENKINS_URL/job/dnscheck/ to $JENKINS_URL/job/website/job/dnscheck/job/master/, and then delete $JENKINS_URL/job/dnscheck/.
Can I do that?
If yes, how do I do that?
I don't want to overwrite existing files
On the Jenkins master, the logs are all stored in ${JENKINS_HOME}/jobs/<path/to/job>/builds/, unless overridden by the system property jenkins.model.Jenkins.buildsDir. They consist of a series of numbered directories with a log file (the build log) inside and possibly some additional data files (e.g. build.xml, changelog.xml, injectedEnvVars.txt).
There are also some symlinks for the last builds (good/bad, etc.), both inside the job directory and inside the builds directory. You could copy all the directories over (renumbering if you have conflicts) AND update the symlinks accordingly. You may also need to reset the next build number to n+1 (where n is the number of builds since the start of the project) so that the next build number increments without overlapping; it's stored in the nextBuildNumber file in the job directory. A rough sketch of the copy follows below.
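For illustration only, a minimal shell sketch of the copy, assuming the standard layout and that the multibranch job keeps its branch under branches/master (stop Jenkins or pause the jobs, back up ${JENKINS_HOME} first; the paths and the symlink fix-ups are left to you to adapt):
SRC="$JENKINS_HOME/jobs/dnscheck"
DST="$JENKINS_HOME/jobs/website/jobs/dnscheck/branches/master"
# Copy build directories without overwriting anything that already exists (-n = no-clobber);
# conflicting build numbers would still have to be renumbered by hand.
cp -rn "$SRC/builds/." "$DST/builds/"
# Bump nextBuildNumber past the highest copied build so numbers don't overlap.
MAX=$(ls "$DST/builds" | grep -E '^[0-9]+$' | sort -n | tail -1)
echo $((MAX + 1)) > "$DST/nextBuildNumber"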
Finally, you must get Jenkins to recognize the new content, since Jenkins caches everything. You can do that by restarting the system, reloading the configuration from disk, or, less drastically, reloading the data for just that one job, something like:
import javax.xml.transform.stream.StreamSource
// Run from the Jenkins Script Console; adjust the job name to your own.
def job = Jenkins.instance.getItemByFullName('website/dnscheck/master')
def configXMLFile = job.getConfigFile()
def file = configXMLFile.getFile()
InputStream is = new FileInputStream(file)
job.updateByXml(new StreamSource(is))
job.save()
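If you prefer the full reload from disk instead, a hedged alternative is the Jenkins CLI (assumes the CLI jar and an API token are available):
# Reloads all job configuration and build records from disk.
java -jar jenkins-cli.jar -s "$JENKINS_URL" -auth "$USER:$API_TOKEN" reload-configuration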
Objective
I have a monorepo setup with a growing number of services. When I deploy the application I run a command, every service is rebuilt, and the final Docker images are published.
But as the number of services grows, the time it takes to rebuild all of them gets longer and longer, even though changes were made to only a few of them.
Why does my setup rebuild all Docker images although only a few have changed? My goal is to rebuild and publish only the images that have actually changed.
Details
I am using Bazel to build my Docker images, thus in the root of my project there is one BUILD file which contains the target I run when I want to deploy. It is just a collection of k8s_objects, where every service is included:
load("#io_bazel_rules_k8s//k8s:objects.bzl", "k8s_objects")
k8s_objects(
name = "kubernetes_deployment",
objects = [
"//services/service1",
"//services/service2",
"//services/service3",
"//services/service4",
# ...
]
)
Likewise there is one BUILD file for every service, which first creates a TypeScript library from all the source files, then creates the Node.js image and finally passes the image to the Kubernetes object:
load("#npm_bazel_typescript//:index.bzl", "ts_library")
ts_library(
name = "lib",
srcs = glob(
include = ["**/*.ts"],
exclude = ["**/*.spec.ts"]
),
deps = [
"//packages/package1",
"//packages/package2",
"//packages/package3",
],
)
load("#io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "image",
data = [":lib", "//:package.json"],
entry_point = ":index.ts",
)
load("#k8s_deploy//:defaults.bzl", "k8s_deploy")
k8s_object(
name = "service",
template = ":service.yaml",
kind = "deployment",
cluster = "my-cluster"
images = {
"gcr.io/project/service:latest": ":image"
},
)
Note that the TypeScript lib also depends on some packages, which should also be accounted for when redeploying!
To deploy I run bazel run :kubernetes_deployment.apply
Initially, one reason I chose Bazel was that I thought it would handle building only the changed services itself. But obviously this is either not the case or my setup is faulty in some way.
If you need more detailed insight into the project you can check it out here: https://github.com/flolude/cents-ideas
Looks like the Bazel repo itself does something similar:
https://github.com/bazelbuild/bazel/blob/ef0f8e61b5d3a139016c53bf04361a8e9a09e9ab/scripts/ci/ci.sh
The rough steps are:
Calculate the list of files that have changed
Use the file list and find their dependents (e.g. bazel query 'kind(".*_binary", rdeps(//..., set(file1.txt file2.txt)))' will find all binary targets that depend on either file1.txt or file2.txt)
Build/test the list of targets
You will need to adapt this script to your needs (e.g. make sure it finds Docker image targets); a rough sketch is below.
To find out the kind of a target, you can use bazel query //... --output label_kind
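Something along these lines could work as a starting point. It's a hedged sketch: LAST_GREEN_COMMIT, the rule kinds to match, and passing plain file paths to set() are all assumptions you'd need to adapt, and deleted files will break the query (see the warning below):
# Files changed since the last successfully deployed commit (assumption).
CHANGED=$(git diff --name-only "$LAST_GREEN_COMMIT" HEAD | tr '\n' ' ')
# Image targets that (transitively) depend on any of the changed files.
TARGETS=$(bazel query "kind('nodejs_image|container_image', rdeps(//..., set($CHANGED)))")
# Rebuild only the affected targets, if any.
[ -n "$TARGETS" ] && bazel build $TARGETS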
EDIT:
A word of warning for anyone who wants to go down this rabbit hole (especially if you absolutely do not want to miss tests in CI):
You need to think about:
Deleted files / BUILD files (and what depended on them)
Note that moved files == deleted + added as well
Also, you cannot query reverse deps of files/BUILD files that don't exist anymore!
Modified BUILD files (to be safe, make sure all reverse deps of all targets in that BUILD file are built)
I think there is a ton of complexity here going down this route (if even possible). It might be less error-prone to rely on Bazel itself to figure out what changed, using remote caches and --subcommands to calculate which side effects need to be performed.
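For that alternative route, a hedged sketch of leaning on Bazel's own incrementality (the cache endpoint is an assumption, and rules_docker pushes may still re-run even when the images themselves are cached):
# Let Bazel's remote cache decide what actually needs rebuilding; unchanged
# targets are served from the cache instead of being rebuilt locally.
bazel run :kubernetes_deployment.apply --remote_cache=https://my-cache.example.com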
I want to get the number inside the _work folder of an on-premise TFS agent.
For example:
From C:/agent/_work/1 get the 1.
Is there a variable to get the 1 part?
You can use a small PowerShell script to extract the number and set a new variable for the subsequent steps:
# Agent.BuildDirectory is the numbered working folder, e.g. C:\agent\_work\1
$folderPath = "$env:AGENT_BUILDDIRECTORY"
# Take the last path segment (the build directory number).
$folderNumber = Split-Path -Leaf $folderPath
# Expose it to the following tasks as $(folderNumber).
Write-Host "##vso[task.setvariable variable=folderNumber]$folderNumber"
Now you can use the variable $(folderNumber) in the subsequent tasks.
There is no reason for you to parse this information out. The current working folder for a given build or release is accessible in the variable $(Agent.BuildDirectory).
If you are trying to reference the working folder of a different build, then you are doing something wrong with your build process, and there are a number of different, valid solutions to that problem.
Check out the different variable values and ways to customize them if needed. Note that Agent.DeploymentGroupId is not something you would change.
Release variables and debugging
Agent.DeploymentGroupId
The ID of the deployment group the agent is registered with. This is available only in deployment group jobs.
Example: 1
Agent.WorkFolder
The working directory for this agent, where subfolders are created for every build or release. Same as Agent.RootDirectory and System.WorkFolder.
Example: C:\agent\_work
I have a parent job and two child jobs in Jenkins. The workspace shared by the child jobs resides within the parent job.
Now the child jobs are producing JUnit RSpec-formatted logs (job1.xml and job2.xml), which are getting stored in the parent job's workspace.
I am trying to refer to them by giving the full path:
$JENKINS_HOME/workspace/{parent-job}/{folder a}/{folder b}/{folder c}/test-results/job1.xml in the post-build result section, but the build fails to find this path.
Note: I am able to print the file in the Execute Shell section with this path
I'm not really sure about the issue; can you give me more details?
By the way, if I understood your needs, you can use the Parameterized Trigger Plugin:
https://wiki.jenkins.io/display/JENKINS/Parameterized+Trigger+Plugin
to get variables like TRIGGERED_JOB_NAME and NUMBER, so you can build the path:
$JENKINS_HOME/workspace/${TRIGGERED_JOB_NAME}/{folder a}/{folder b}/{folder c}/test-results/job1.xml
What about folder a/b/c? Where are these values set?
You can also simply pass this information as parameters and then use them as variables to dynamically build the path, as in the sketch below...
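For instance, a hedged shell sketch: FOLDER_A/FOLDER_B/FOLDER_C are hypothetical parameters passed down from the parent, and the copy works around the JUnit publisher resolving paths relative to the workspace:
# Build the absolute path to the report from the passed-in parameters.
RESULTS="$JENKINS_HOME/workspace/$TRIGGERED_JOB_NAME/$FOLDER_A/$FOLDER_B/$FOLDER_C/test-results/job1.xml"
# Copy it into this job's own workspace so the post-build step can find it.
mkdir -p "$WORKSPACE/test-results"
cp "$RESULTS" "$WORKSPACE/test-results/"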
Let me know if it's clear and if it helped...
I would like to perform the following steps in the TFS build process:
Do a post-build event that will copy some files from my compiled projects to another predefined directory; I'd like that directory path to include the branch name.
I'd like to be able to refer to the branch name inside my XAML workflow template as well.
The first one is rather simple. When you're using the new TFS 2013 build server and process template, you can simply add a post-build PowerShell script in the Build Definition Configuration, check in the script and run it during the build.
The second one depends on whether you're using TFVC or Git. In the first case, use the VersionControlServer class to query the BranchObjects, then check which one is the root for your working folder. Be aware, though, that in TFVC multiple branches can be referenced in one workspace, so there may be multiple answers to this query, depending on which file you use to find the branch root. A custom CodeActivity would do the trick, similar to this check in a custom checkin policy.
The code will be similar to:
IBuildDetail buildDetail = context.GetExtension<IBuildDetail>();
var workspace = buildDetail.BuildDefinition.Workspace;
var versionControlServer = buildDetail.BuildServer.TeamProjectCollection.GetService<VersionControlServer>();
var branches = versionControlServer.QueryRootBranchObjects(RecursionType.Full);

// listOfFilePaths can come from Workspace.GetItems (see below); group each file
// under the branch whose root item is a prefix of the file's server path.
var referencedBranches = listOfFilePaths.GroupBy(
    file =>
        branches.SingleOrDefault(
            branch => file.ServerItem.StartsWith(branch.Properties.RootItem.Item)
        )
).Where(group => group.Key != null);
To get a list of all items in your workspace, you can use Workspace.GetItems.
In case you're using Git, you have a few options as well. The simplest is to invoke the command line:
git symbolic-ref --short HEAD
or dive into LibGit2Sharp and use it to find the branch name based on the current working folder from a custom activity.
If you want to include this in an MsBuild task, this may well be possible as well. It goes a bit far for this answer to completely outline the steps required, but it's not that hard once you know what to do.
Create a custom MsBuild task that invokes the same snippet of code above, except that instead of getting access to the workspace through BuildDetail.BuildDefinition.Workspace, you go through the Workstation class:
Workstation workstation = Workstation.Current;
WorkspaceInfo info = workstation.GetLocalWorkspaceInfo(path);
TfsTeamProjectCollection collection = new TfsTeamProjectCollection(info.ServerUri);
Workspace workspace = info.GetWorkspace(collection);
VersionControlServer versionControlServer = collection.GetService<VersionControlServer>();
Once the task has been created, you can create a custom .targets file that hooks into the MsBuild process by overriding certain variables or copying data when the build is finished. You can hook into multiple Targets and define whether you need to do something before or after them.
You can either <import> these into each of your projects, or you can place it in the ImportAfter or ImportBefore folder of your MsBuild version to make it load globally. You can find the folder here:
C:\Program Files (x86)\MSBuild\{MsBuild Version}\Microsoft.Common.Targets\ImportAfter
I've had a dig around but can't find an elegant solution for what I want to do, so I hope some of you may be able to offer some suggestions. I've also asked this question on a Jenkins forum, but no takers.
I want to be able to run a Jenkins parent job with parameters that will feed down to triggered jobs, and then group all the job run results in a view dynamically.
The use case I'm trying to cover is: we have 10+ different Jenkins jobs that run suites of tests. I want to simply manage a run of all those jobs against a specific code branch, on a specific test environment, and see the results (in one view) for only that run. The complication is that the same Jenkins job may be run against another release or test environment, and I don't want to see those results.
We already have the parent job triggering children with parameters, but I can't figure out how best to group the results.
I know I can create filters for views, but the names of Jenkins jobs are static, and I want the view created at runtime, without having to build it myself. We do use the 'Set Build Description' plugin, so I could create a view that filters for a unique build descriptor, or something similar. But there doesn't seem to be a way to create views with filters programmatically.
Another consideration is cleanup: I wouldn't want a year's worth of views clogging up the list, so I need a way to clear out old runs too.
Any ideas to kick me off?
For grouping of reports you can just use some simple logic instead of looking for a Jenkins plugin. You can place all the result files (preferably XMLs) in a common folder / file server, and at the end of execution of all the suites (jobs) you can trigger a common job which will process all the XML files and generate a common report. This way you get consolidated + individual reports.
I have done it using the Perf Publisher plugin, which processes XMLs and gives a beautiful aggregated report.
Job1 ----> Report1 ----> Move report to report folder
Job2 ----> Report2 ----> Move report to report folder
Job3 ----> Report3 ----> Move report to report folder
.
.
.
Job n ----> Report n ----> Move report to report folder
So after completion of job n, trigger the Report job, which will operate on the "report" folder containing all the reports! A rough sketch of the per-job move is below.
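As an illustration only (the shared folder location and the RUN_ID variable are assumptions), each suite job could end with something like:
# Shared drop location for this run; RUN_ID uniquely identifies the run.
REPORT_DIR="/var/reports/$RUN_ID"
mkdir -p "$REPORT_DIR"
# Move this job's XML results into the common report folder.
cp test-results/*.xml "$REPORT_DIR/"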
Hope it helps!
I have a partial solution:
All jobs accept a parameter called VIEW_IDENTIFIER.
Parent job is kicked off with a unique VIEW_IDENTIFIER being set, and all the child jobs have that passed into them when run.
After all jobs are run, I edit a Jenkins view that has a 'Job Filter -> Parameterized Jobs Filter -> Name = VIEW_IDENTIFIER, Value = my unique ID set for the run'
This results in all jobs run with that unique ID being grouped in one single view for review.
The shame is that I have to edit the Job Filter manually; a possible workaround is sketched below.
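One way to remove that manual step might be to create (and later delete) the filtered view from the parent job via the Jenkins CLI. This is a hedged sketch: it assumes you export the config.xml of your manually configured view once as view-template.xml with a __RUN_ID__ placeholder, and that the CLI jar and an API token are available.
# Fill in this run's unique ID and create a dedicated view for it.
sed "s/__RUN_ID__/$VIEW_IDENTIFIER/" view-template.xml > view.xml
java -jar jenkins-cli.jar -s "$JENKINS_URL" -auth "$USER:$API_TOKEN" \
  create-view "run-$VIEW_IDENTIFIER" < view.xml
# Old runs can be cleaned up the same way with the delete-view command.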