Difference between lightweight checkout and shallow clone in Jenkins

In the pipeline SCM configuration of Jenkins Job Builder, there are two options: lightweight checkout and shallow clone. What is the difference between these options, and when should each be used?

From the documentation:
Shallow clone.
Perform a shallow clone, so that git will not download the history of the project, saving time and disk space when you just want to access the latest version of a repository.
Lightweight checkout.
If selected, try to obtain the Pipeline script contents directly from the SCM without performing a full checkout. The advantage of this mode is its efficiency; however, you will not get any changelogs or polling based on the SCM. (If you use checkout scm during the build, this will populate the changelog and initialize polling.) Also build parameters will not be substituted into SCM configuration in this mode. Only selected SCM plugins support this mode.
To sum up:
Shallow clone is a Git feature that lets you pull down just the latest commits rather than the entire repo history. So if your project has years of history, or thousands of commits, you can select a particular depth to pull.
Lightweight checkout is a Jenkins capability that pulls a specific file from the repo, as opposed to cloning the entire repo. It is useful, for example, when fetching the Jenkinsfile, because you need only that one file and don't care about the rest of the SCM information.
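For illustration, here is a minimal sketch of requesting a shallow clone from a scripted Pipeline, assuming the Git plugin; the repository URL and branch name are placeholders:

```groovy
// Minimal sketch: shallow clone with depth 1 via the Git plugin's CloneOption.
// The repository URL and branch name are placeholders.
checkout([$class: 'GitSCM',
          branches: [[name: '*/main']],
          extensions: [[$class: 'CloneOption', shallow: true, depth: 1, noTags: true]],
          userRemoteConfigs: [[url: 'https://example.com/your/repo.git']]])
```

Lightweight checkout, by contrast, is not expressed in the Jenkinsfile at all; it is the checkbox in the job's "Pipeline script from SCM" configuration.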

Related

How to track deployment and commits of multiple repositories in a single Bitbucket pipeline?

We host a project's source code on Bitbucket, in multiple repositories: one for the backend, one for the frontend, and one for server configuration and deployment.
The deployment is done with a Bitbucket custom pipeline hosted in the latter repository (where "custom" means triggered manually or by a scheduler, not by pushing to branch). In the pipeline, we clone the other repositories (using an SSH key for authentication), build Docker images, push them to a Docker repository, and then trigger the deployment on the server.
This is all working well, except for how it's tracked in Bitbucket and Jira. In Bitbucket, in the pipelines overview, it shows the latest commit that was deployed by a pipeline run. However, since the pipeline is in the config repository, this will only show commits of the config repository. Since the config rarely changes, most of our commits are in the backend and frontend repositories, so this "latest commit" rarely represents the latest change that was deployed.
Similarly, and more annoyingly, when connecting Jira with Bitbucket, Jira only associates commits in the config repository with a deployment. All the interesting work done in the backend and frontend repositories isn't seen.
Is there a way to tell Bitbucket that multiple repositories are involved in a pipeline deploy? I believe this is currently not possible, so this would have to be a feature request for Atlassian.
Does anybody know of a workaround? I was thinking, maybe having the backend and frontend repos as git submodules of the config repo might work? Git submodules scare me, so I don't want to try only to find out that Bitbucket/Jira would not see the commits/issues in the submodules anyway.
Another workaround could be to push a dummy commit with a commit message that summarizes all commits done in all repos. That commit would have to be already pushed to the config repo when the pipeline is started, so that would maybe have to be done in a separate pipeline: the first pipeline pushes the summary commit and then triggers the second pipeline for the actual deployment.
Put everything, all software components plus configuration and infrastructure, together in a monorepo.
To merge the historically independent repositories into it without losing any of their Git histories, use the --allow-unrelated-histories option of git merge, as sketched below.
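A minimal sketch of such a migration, using hypothetical repository names and assuming a main branch:

```sh
# Hypothetical monorepo migration: import an independent repo's history
# into an existing one. Names, URLs and branch are placeholders.
git clone git@bitbucket.org:acme/config.git monorepo
cd monorepo
git remote add backend git@bitbucket.org:acme/backend.git
git fetch backend
git merge --allow-unrelated-histories backend/main -m "Import backend history"
# Repeat for the frontend repo; consider moving each imported tree into
# its own subdirectory first to avoid path collisions.
```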
Otherwise, yes, use git submodules in a parent repo and track submodule ref updates as meaningful commits; see the sketch below. If that scares you, you should really not be splitting your code across multiple repos.
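With submodules, the parent repo pins each child at an exact commit, and bumping that pin is itself a commit the pipeline (and Jira) can see. A sketch with placeholder names:

```sh
# Hypothetical setup: add the other repos as submodules of the config repo.
git submodule add git@bitbucket.org:acme/backend.git backend
git submodule add git@bitbucket.org:acme/frontend.git frontend
git commit -m "Add backend and frontend as submodules"

# Later, record a new deployable state by bumping the pinned commits:
git submodule update --remote backend frontend
git commit -am "Bump backend and frontend submodules"
```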

List Git Branches on Jenkins after the pipeline build

Currently I'm using the List Git Branches parameter in my Jenkins pipeline, and it works very well: when I build the pipeline, I choose a branch, which is automatically pulled from the Git repository specified in the pipeline configuration. But I have a problem: I need a new pipeline model where the user first chooses the Git repository URL with a Jenkins choice parameter, and once the build starts, the branches of that repository are fetched and presented for selection. So I need a way to pause the pipeline build until the user picks the desired branch, after which the process continues normally.
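One possible sketch, assuming REPO_URL is a parameter defined on the job and Git is available on the agent, is to list the remote's branches at build time and pause with an input step:

```groovy
// Hypothetical sketch: list remote branches, then pause for the user's choice.
// REPO_URL is assumed to be a choice parameter defined on the job.
node {
    def branches = sh(
        script: "git ls-remote --heads ${params.REPO_URL} | sed 's|.*refs/heads/||'",
        returnStdout: true
    ).trim().split('\n')

    def branch = input(
        message: 'Select the branch to build',
        parameters: [choice(name: 'BRANCH', choices: branches.join('\n'),
                            description: 'Branches found in the repository')]
    )

    git url: params.REPO_URL, branch: branch
}
```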

List all git repositories used in a Jenkins build? (Groovy)

Using Groovy and Jenkins Pipeline, is there any method of listing all git repositories and commits checked out using checkout during the course of a build aside from manually parsing the build log? I'm aware that changeSets allows one to see what changes have been made between runs, and that by bookkeeping all of these commits, it is possible to piece together the last known set of commits that were successful in building, but deleting/losing any of these builds would result in an incomplete log and prevent reconstruction. I'd like to know if there's an easier way of obtaining a git configuration for a given build.
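One workable sketch, assuming a recent version of the checkout step (which returns its SCM variables, such as GIT_URL and GIT_COMMIT, as a map), is to record each checkout as you go:

```groovy
// Hypothetical sketch: collect the URL and commit of every checkout in the build.
// Assumes a checkout step version that returns its SCM variables as a map.
def checkouts = []

node {
    def vars = checkout scm
    checkouts << [url: vars.GIT_URL, commit: vars.GIT_COMMIT]

    // ... record any additional checkout(...) calls the same way ...

    echo "Repositories used in this build: ${checkouts}"
}
```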

Poll SCM multiple repositories on Jenkins

I have around 10 repositories that I would like to poll. If a folder is added to the root of any repo, I'd like to trigger a certain build (the same one in every case).
I thought of using the Poll SCM plugin, but it requires one job per repo, which doesn't scale.
Is there any clean way to do this and any plugin that would help?
EDIT: I have a job generating debian packages from the folders in my 10 repositories (each folder corresponds to a separate package). When a new folder is added, it means a new package has been introduced.
I would then like to trigger a packaging build so developers can fetch the package from our apt repository without waiting for the nightly build.
You can use this plugin:
https://wiki.jenkins.io/display/JENKINS/Pipeline+Multibranch+Plugin
As per the manual:
Enhances Pipeline plugin to handle branches better by automatically grouping builds from different branches. Automatically creates a new Jenkins job whenever a new branch is pushed to a source code repository. Other plugins can define various branch types, e.g. a Git branch, a Subversion branch, a GitHub Pull Request etc.
See this blog post for more info: https://jenkins.io/blog/2015/12/03/pipeline-as-code-with-multibranch-workflows-in-jenkins/
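If multibranch doesn't fit (here the ten repos are not branches of one project), another sketch is a single pipeline that checks out all of them, since Jenkins polls every SCM a pipeline checks out. URLs and branch names below are placeholders:

```groovy
// Hypothetical sketch: one job that polls several repositories at once.
// A change in any checked-out repo triggers the same build.
pipeline {
    agent any
    triggers { pollSCM('H/15 * * * *') }
    stages {
        stage('Checkout all repos') {
            steps {
                dir('repo1') { git url: 'https://example.com/repo1.git', branch: 'main' }
                dir('repo2') { git url: 'https://example.com/repo2.git', branch: 'main' }
                // ... one dir/git pair per repository ...
            }
        }
        stage('Package') {
            steps {
                echo 'Detect new folders and build the corresponding debian packages'
            }
        }
    }
}
```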

How to ensure same git checkout for build and deploy jobs in Jenkins?

In Jenkins, I have a "Build" job setup to poll my git repo and automatically build on change. Then, I have separate "Deploy to DEV", "Deploy to QA", etc. jobs that will call an Ant build that deploys appropriately. Currently, this configuration works great.
However, this process favors deploying the latest build on the latest development branch. I use the Copy Artifact plugin to allow the user to choose which build to deploy. Also, the Ant scripts for build/deploy are part of the repo and are subject to change. This means it's possible the artifact could be incompatible between versions. So, it's ideal that I ensure that the build and deploy jobs are run using the same git checkout.
Is there an easier way? It ought to be possible for the Deploy job to obtain the git commit hash used by the selected build and check out that same revision. However, I don't see any options or plugins that do this.
Any ideas on how to simplify this configuration?
You can use the Parameterized Trigger Plugin to do this for you. The straightforward way is to prepare a file with parameters as a build step and pass those parameters to the downstream job using the plugin. You can pass the git revision as a parameter, for example, along with other settings.
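A minimal sketch of the downstream side, assuming the upstream job passes a GIT_COMMIT parameter and using a placeholder repository URL:

```groovy
// Hypothetical deploy step: check out exactly the commit the build job used.
// GIT_COMMIT is assumed to be passed in by the Parameterized Trigger Plugin.
checkout([$class: 'GitSCM',
          branches: [[name: params.GIT_COMMIT]],
          userRemoteConfigs: [[url: 'https://example.com/your/repo.git']]])
```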
The details would vary for a Git repo (see https://stackoverflow.com/a/13117975/466874), but for our SVN-based jobs, what we do is have the build job (re)create an SVN tag (with a static name like "LatestSuccessfulBuild") at successful completion, and then we configure the deployment jobs to use that tag as their repo URL rather than the trunk location. This ensures that deployments are always of whatever revision was successfully built by the build job (meaning all unit tests passed, etc.) rather than allowing newer trunk commits to sneak into the deployment build.
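The tag-moving step itself might look like the following sketch, where REPO and BUILT_REV are placeholders for the repository URL and the revision just built:

```sh
# Hypothetical post-build step: re-point a static SVN tag at the built revision.
svn delete "$REPO/tags/LatestSuccessfulBuild" -m "Drop previous tag" || true
svn copy "$REPO/trunk@$BUILT_REV" "$REPO/tags/LatestSuccessfulBuild" -m "Tag build $BUILT_REV"
```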
