I have multiple environments. They are debug, dev, and prod. I'd like to refer to an image by the latest dev (latest) or dev (version 1.1) or prod (latest). How would I go about tagging builds and pushes?
My first thought was to create separate repositories for each environment: debug, dev, and prod. But I am starting to wonder if I can do this with just one repository. If it's possible to do with one repository, what would the syntax be when building and pushing?
This is what has worked best for me and my team, and I recommend it:
I recommend having a single repo per project for all environments; it is easier to manage. Especially if you have microservices, your project is composed of multiple microservices, and managing one repo per environment per project is a pain.
For example, I have a users api.
The docker repo is users. And this repo is used by alpha, dev and beta.
We create an env variable called $DOCKER_TAG in our CI/CD service and set it at the time the build is created, like this:
DOCKER_TAG: $(date +%Y%m%d).$BUILD_NUMBER (this is bash)
Where $BUILD_NUMBER has previously been set by the build being run when the CI/CD run is triggered. For example, when we merge a PR, a build is triggered, say build no. 1, so $BUILD_NUMBER is 1.
The resulting tag looks like this when used: 20170612.1
so our docker image is: users:20170612.1
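As a rough sketch of the build-and-push step (the registry host here is just a placeholder):
# DOCKER_TAG was set earlier by the CI/CD service, e.g. 20170612.1
docker build -t registry.example.com/users:$DOCKER_TAG .
docker push registry.example.com/users:$DOCKER_TAG
# later, deploy that same tag to alpha, dev or beta; no per-environment rebuild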
Why this format?
It allows us to deploy the same tag to different environments with a run task.
It helps us keep track of when an image was created and what build it belongs to.
Through the build number, we can find the commit information and map it all together as needed, which is nice for troubleshooting.
It allows us to use the same docker repo per project.
It is nice to know from the tag itself when we created the image.
So, when we merge, we create a single build. That build is then deployed as needed to the different environments. We don't create an independent build per environment, and we keep track of what's deployed where.
If there's a bug in an environment with a certain tag, we pull that tag, run it, and troubleshoot to reproduce the issue under those conditions. If we find an issue, we have the build number in the tag (20170612.1), so we know build no. 1 has the issue. We check our CI/CD service, which tells us which commit that build was created from. We check out that commit hash from git, debug, and fix the issue. Then we deploy it as a hotfix, for example.
If you don't have a CI/CD service yet and you are doing this manually, just set the tag in that format manually (pretty much type the full string as is), and instead of a build number, use a short git commit hash (if you are using git):
20170612.ed73d4f
That way you know which commit is the most current, so you can troubleshoot issues with a specific image and map back to the code to create fixes as needed.
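A minimal sketch of that manual flow, assuming git and the same placeholder registry as above:
# tag with date + short commit hash instead of a CI build number
DOCKER_TAG="$(date +%Y%m%d).$(git rev-parse --short HEAD)"   # e.g. 20170612.ed73d4f
docker build -t registry.example.com/users:$DOCKER_TAG .
docker push registry.example.com/users:$DOCKER_TAG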
You can also define any other suffix for your tag that maps to the code version, so you can easily troubleshoot (e.g. map to git tags if you are using those).
Try it, adjust it as needed, and do what works best for you and your team. There are many ways to go about tagging. We tried many, and this one is our favorite so far.
Hope this is of help.
There are two schools of thought: stable tagging, where you update a single tag, and unique tagging. Each has its pros and cons. Stable tags can create instability when deploying to self-healing clusters, as a new node might pull a new version while the rest of the cluster is running a slightly older version. Unique tagging is a best practice for deployment. However, to manage base image updates for OS and framework patching, you'll want to build upon stable tags in your Dockerfile and enable automatic container builds. For a more detailed walk-through, with visuals, here's a post:
https://blogs.msdn.microsoft.com/stevelasker/2018/03/01/docker-tagging-best-practices-for-tagging-and-versioning-docker-images/
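Not from the linked post, but as a rough illustration of how the two can be combined (the image name, registry and version numbers are placeholders): stamp one build with both a stable tag and a unique tag, deploy by the unique tag, and have downstream Dockerfiles build FROM the stable tag so automated rebuilds pick up patches:
# one build, two tags: a stable tag and a unique per-build tag
docker build -t registry.example.com/users:1.0 -t registry.example.com/users:1.0-20170612.1 .
docker push registry.example.com/users:1.0             # stable tag, moves with every build
docker push registry.example.com/users:1.0-20170612.1  # unique tag, used for deployments
# dependent Dockerfiles would reference the stable tag, e.g.:
# FROM registry.example.com/users:1.0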
I think "lastest" is for the last productive image. It is what I expect in the docker hub, although there are no images in development.
On the other hand you can use tags like for example 0.0.1-dev. When this image is finished you can do the tag again and push, and then the repository will detect that the layers are already in the repository.
Now when you are about a candidate version to come out to production, you have to only have the semantic version despite not being in an environment pruduccion. That's what I would do.
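A small sketch of that re-tag-and-push flow (the image name and registry are placeholders):
# development builds carry a -dev suffix
docker build -t registry.example.com/api:0.0.1-dev .
docker push registry.example.com/api:0.0.1-dev
# when the candidate is promoted, re-tag the same image with the plain semantic version
docker tag registry.example.com/api:0.0.1-dev registry.example.com/api:0.0.1
docker push registry.example.com/api:0.0.1   # the registry already has all the layers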
We are migrating 50+ .NET projects from TFS to GitHub, and at the same time we want to use Jenkins to automate the builds. Currently all the builds are done manually inside Visual Studio. I know how to automate a build using MSBuild, and we already have a lot of these projects building inside Jenkins.
My question: is there a way to set up these 50+ projects quickly without creating them one by one manually? Any way to script them? E.g., a Jenkins project keeps everything inside a folder; could I copy a sample project/folder to create a new one and modify something? Or create a Jenkins project using a script that reads a config file? Any idea that can save some time is appreciated.
Not a direct answer, but too long for a comment, so here it goes anyway. Following the Joel test (which is in no way dogmatic for me but does make a lot of good points), and in my experience, you should already have an msbuild file that builds all those projects 'in one click'. Then setting up a build server, in fact any build server, is just a matter of making it build that single parent project. This might not work for everyone, but for several projects I've worked on this had the following advantages:
the entire build process gets defined by developers, working locally on their machine, using 'standard' tools
as such they don't need to spend hours in a web interface figuring out the appropriate build steps, dependencies and whatnot (also, those hours would have been worthless in the end if switching to a different build server)
since a complete build is now just a matter of msbuild master.proj, possibly along with some options to define configuration/platform/output directories, getting this running on any build server should be painless and quick
in the same manner this makes it easy to test different build servers with a minimum of time and migrate between them (also no need to ask SO questions on how to set everything up :)
this also makes it easy for other developers to get complete builds as well, without having to go via a build server
Anecdote: we once had Jenkins running on multiple different projects as well. It took us days to get everything running, with the templates etc., and we found the web interface slow and cumbersome (and getting to know the API would have taken even more days). Then one day I got sick of this and made a bunch of msbuild scripts which could build everything from one msbuild command. That took much less time than setting up Jenkins, a couple of hours or so. Then I took a TeamCity installation we already had and made it build the new master project. Took like an hour, and everything worked. Just recently I took the same project and got it working on Visual Studio Online, again in no time.
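For illustration, the 'single command' the build server ends up running would look roughly like this (the property values and output path are just examples):
msbuild master.proj /m /p:Configuration=Release /p:Platform="Any CPU" /p:OutDir=C:\builds\myapp\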
If those projects are more or less similar to build, you will probably be interested in using the template plug-in for Jenkins. There you configure a dummy project such that it does what is common to (most of) the 50+ projects.
Afterwards you create a separate project for each: create the first project and make it use the template project for each of the steps that can be shared with the template project (use build step from other project). All subsequent projects can then be created as slightly adapted copies of this first 'real' project.
I use it such that the variable $JOB_NAME (the actual project name in jenkins that is) is part of the repository path and I can thus clone from http://example.org/$JOB_NAME/
Configured that way, I can include the source code management step in the templating job and use it unmodified. Similarly with the build step and post-build step: they are run by a script that is somewhat universal across all my projects (mostly calling make and guessing deployment/publication paths from $JOB_NAME again).
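To avoid clicking through the UI 50+ times, the copy step itself can also be scripted with the Jenkins CLI; a rough sketch, where the server URL, template job name and project names are placeholders (and you may need to pass credentials):
# assumes jenkins-cli.jar is available and a job named "template-build" is already configured
for name in ProjectA ProjectB ProjectC; do   # ...the rest of the 50+ project names
  java -jar jenkins-cli.jar -s http://jenkins.example.com/ copy-job template-build "$name"
done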
How do you implement a complex value stream with multiple pipelines in Jenkins WorkFlow? Similar like you can do with Go CD: How do I do CD with Go?: Part 2: Pipelines and Value Streams.
For a distributed system, I would like each dev team and the operations team to start with their own delivery pipeline. One change needs to trigger only the pipeline of the team that made the change. It then needs to trigger a downstream pipeline that takes the latest successful artifacts from each of the teams' pipelines and moves on from there. This means the artifacts from the other teams are not rebuilt or retested, since they were not changed. After this fan-in, we can run a set of automated tests to verify the correct behaviour of the distributed system with the change.
In the documentation I only find that you can pull from multiple VCSs, but I assume everything is then built and tested with every change, which is something I want to avoid.
If each delivery pipeline is in its own Jenkins job, how can I visualize the complete pipeline, and what is the best way to pull in the last successful artifacts or versions from the other pipelines?
There is no direct equivalent in Jenkins for value streams, and Workflow jobs do not behave any differently in that respect: you can have upstream jobs and downstream jobs correlated with triggers (in this case the build step, or the core ReverseBuildTrigger), and use (for example) the Copy Artifact plugin to transfer artifacts to downstream builds. Similarly, you could use an external repository manager as the “source of truth” and define job triggers based on snapshots pushed to the repository.
That said, part of the purpose of Workflow is to avoid the need for complex job chains in most situations¹, since it is usually easier to reason about, debug, and customize a single script with standard control flow operators and local variables than to manage a set of interdependent jobs. If the main problem with a single flow is that you need to avoid rebuilding unmodified parts, one solution would be to use something like JENKINS-30412 to check the changelog of particular repository checkouts and skip build steps if it is empty. I think more features would be needed to make such a system work in the general case where workspaces are clobbered or discarded by other builds.
¹One case where you definitely need separate jobs is that for security reasons the teams contributing to different projects must not be able to see one another’s sources or build logs.
Assuming that each of your dev teams works on a different module of your project, and that „one change needs to trigger only the pipeline of the team that made the change“, I'd use Git Submodules:
Submodules allow you to keep a Git repository as a subdirectory of another Git repository.
with one repo per team, each becoming a submodule of a main-module repo. This will be transparent to the teams, since they just keep working on their designated repos.
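A minimal sketch of that layout (the repository URLs and module names are placeholders):
# in the main (aggregator) repository: add each team's repo as a submodule
git submodule add https://git.example.com/team-users/users-api.git modules/users-api
git submodule add https://git.example.com/team-orders/orders-api.git modules/orders-api
git commit -m "Add team repositories as submodules"
# on the build server, check out the main project including all submodules
git clone --recursive https://git.example.com/main-project.git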
The main module is also the aggregator project for your module projects in terms of the build tool. So, you have the options:
to build each repo/pipeline individually or
to build the whole (main) project at once.
A build pipeline that comprises one or more build jobs is associated with every team/repo/module.
The main pipeline is merely a collection of downstream jobs which represent the starting points of the team/repo/module pipelines.
The build triggers can be manual, timed, or on source changes.
A decision also has to be made:
whether you version your modules individually, such that other modules depend on release versions only.
Advantages:
Others rely on released, usually more stable versions.
Modules can decide which version of a dependency they want to use.
Disadvantages:
Releases have to be prepared for each module.
It may take longer until the latest changes are available to others.
Modules have to decide which version of a dependency they want to use. And they have to adapt it every time they need functionality added in a newer version.
or whether you use one version for the entire project (which is inherited by the modules then): ...-SNAPSHOT during the development cycle, a release version when releasing the project.
In this case, if there are modules that are essential for others, e.g. a core module, a successful build of it should trigger a build of the dependent modules, as well, so that incompatibilities are recognized as early as possible.
Advantages:
Latest changes are immediately available to others.
A release is prepared for the whole project only once it is to be delivered.
Disadvantages:
Latest changes immediately available to others may introduce not so stable (snapshot) code.
Re „How can I visualize the complete pipeline“
I'm not aware of any plugin that can do this with Workflows at the moment.
There's the Build Graph View Plugin, which was originally created for Build Flows, but it's more than two years old now:
Downstream builds are identified by DownStreamRunDeclarer extension point.
Default one is using Jenkins dependencyGraph and UpstreamCause and as such can detect common build chain.
build-flow plugin is contributing one to render flow execution as a graph
some Jenkins plugins may later contribute dedicated solutions.
(You know, „may“ and „later“ often become will not and never in development. ;)
There's the Build Pipeline Plugin but it apparently is also not suitable for Workflows:
This plugin provides a Build Pipeline View of upstream and downstream connected jobs [...]
Re „way to pull in the last successful artifacts“
Apparently it's not that smooth with Gradle:
By default, Gradle does not define any repositories.
I'm using Maven, where there are local and remote repositories; the latter can also be:
[...] internal repositories set up on a file or HTTP server within your company, used to share private artifacts between development teams and for releases.
Have you considered using a binary repository manager like Artifactory or Nexus?
From what I have seen, people are moving towards smaller, independent pieces of code delivery rather than monolithic deployments. But clearly, there will still be dependencies between different components. At the very least, for example, if you had one script that provisioned your infrastructure and another that built and deployed your app, you would want to be sure your infrastructure update script was run before your app deployment. On the other hand, your infrastructure does not depend on deploying your app code - it can be updated at its own pace, so long as it ideally passes some testing.
As mentioned in another post, you really have two options to accomplish this dependency:
Have a single pipeline (workflow script) that checks out code from both repos and puts them through the same pipeline simultaneously. Any change to one requires the full pipeline to run for everything.
Have two pipelines and this would allow each to go at its own pace independent of what the other does. This isn't a problem for the infrastructure code, but it very well could be for the app code. If you pushed your app code to production without the infrastructure update having happened first, the results may not be pleasant.
What I've started to do with Jenkins Workflow is establish a dependency between my flows. Basically, I declare that one flow is dependent on a particular version (in this case, simply BUILD_NUM) of the other, and so before I do a production deploy I verify that the last successful build of the other pipeline has reached that point. I'm able to do this using the Jenkins API as part of my flow script, which waits for that build or a later one to succeed, like so:
import hudson.EnvVars
import hudson.model.*

// Build number of the other pipeline that this flow depends on
int independentBuildNum = 16

// Block until the dependent pipeline's last successful build is at least that number
waitUntil {
    verifyDependentPipelineCompletion("FLDR_CM/WorkflowDepedencyTester2", independentBuildNum)
}

boolean verifyDependentPipelineCompletion(String jobName, int buildNum) {
    def hi = jenkins.model.Jenkins.instance
    Item dep2 = hi.getItemByFullName(jobName)
    hi = null
    def jobs = dep2.getAllJobs().toArray()
    def onlyJob = jobs[0] // always 1 job...I think?
    def targetedBuild = onlyJob.getLastSuccessfulBuild()
    // read BUILD_ID from the last successful build's characteristic environment variables
    EnvVars me = targetedBuild.getCharacteristicEnvVars()
    def es = me.entrySet()
    int targetBuildNum = 0
    def vars = es.iterator()
    while (vars.hasNext()) {
        def envVar = vars.next()
        if (envVar.getKey().equals("BUILD_ID")) {
            targetBuildNum = Integer.parseInt(envVar.getValue())
        }
    }
    // true once the other pipeline's last successful build has reached the required number
    return buildNum <= targetBuildNum
}
Disclaimer: I am just beginning this process, so I do not have much real-world experience with this yet, but I will update this thread if I have more relevant information. Any feedback welcome.
Our team is sharing a Jenkins server with other teams, which currently means that we are sharing the same OS-level build-user account. The different teams' OS-level build-user settings (Maven settings, bash settings, user-level Ant libraries, etc.) have collided a few times: "fixing" the settings for one team's jobs inadvertently "breaks" another team's jobs. The easiest solution that occurs to me is giving each team its own OS-level build-user account with which to execute its Jenkins jobs, but I cannot find a way to do this.
I have checked with Google, and also here
https://wiki.jenkins-ci.org/display/JENKINS/Use+Jenkins
and here
https://wiki.jenkins-ci.org/display/JENKINS/Plugins
to no avail.
Is there a way to do this? If not, can you recommend any best practices for segregating sets of builds from one another?
Maven Specific
You have two options that come to mind:
Add additional installations of Maven in your Jenkins global configuration, each using its own home directory and thus its own settings files. This will allow you to use totally different versions of Maven, selected based on job requirements (you are given the option to select which "version" of Maven you wish to use on the job itself).
Similar to (1), but specify the settings configuration using Maven command-line arguments. It's a little less "obvious" but may be quicker to implement, for example:
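(A sketch; the settings paths below are just placeholders for wherever each team keeps its Maven settings.)
# each team keeps its own settings.xml; the job picks the right one explicitly
mvn -s /opt/build-settings/team-a/settings.xml clean deploy
mvn -s /opt/build-settings/team-b/settings.xml clean deploy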
Multi-slave
You could possibly make use of multiple slaves on each machine. It increases the overhead of the builds quite significantly, and the implementation is such that you'd have multiple user accounts on a machine, each set up as needed, and then one slave instance for each user.
I'm not sure these solutions will totally answer your problem; I'll have a think and see if anything else pops into mind, but they might give you some starting points.
Key builds to a specific team directory that contains that team's settings. For example, provide a parameter 'TEAM' to every build, set its default value to the appropriate team name, and use that parameter as a key to a directory that contains the team's settings (so instead of relying on ${HOME}, as separate user accounts would, you'll use something like ${TEAM_SETTINGS}/${TEAM}).
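A rough sketch of what a job's build script could do with this (the directory layout and file names are just illustrative):
# TEAM is a build parameter; TEAM_SETTINGS points at a directory of per-team configs
TEAM_SETTINGS=/opt/build-settings
# load the team's environment (paths, JAVA_HOME, etc.) instead of the shared account's defaults
source "${TEAM_SETTINGS}/${TEAM}/env.sh"
# point tools at the team's own configuration files
mvn -s "${TEAM_SETTINGS}/${TEAM}/settings.xml" clean install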
You can set per-job users (who have access to / can build a particular job).
Under "Manage Jenkins" > "Configure System" >
Click on Enable Security
Check Project-based Matrix Authorization Strategy
However, I do not think there is a "per-build" option for a single job.
If you have the same project that you are sharing between teams, you could (and probably should) create two jobs for this project, and have different libraries/scripts be used in each.
You could also parametrize the build (on the job page, "Configure" > "This build is parametrized") and supply the library versions, etc. via string parameters.
You could also use a parameter to be the team's name, and in your build script change libraries based on the parameter:
For example, have a parameter called "TEAM", with choices: TEAM_A and TEAM_B, and in your script, have
if [ "$TEAM" = "TEAM_A" ]
then
    ANT_HOME=/opt/ant/libA
else
    ANT_HOME=/opt/ant/libB
fi
export ANT_HOME   # export so the ant process started by the build picks it up
======================================================================
Have you considered sourcing your settings? On Linux you could do this by saving your OS settings in a script file (for example paths, etc.) and using source /path/to/settings/file; on Windows it would be call /path/to/settings/batch/file.
Can you give examples of OS-level settings that you would require a per-build user for?
Your problem is a common one.
Whenever something nonstandard is installed on a build server, something will break for someone.
The only solutions I know are
Set up a separate build slave for each team or product. Then they can install whatever they want on the build slave and any mess they create is all their own fault.
Any dependencies required by a job need to come with the job. This is my preferred way of working. For example: If a job needs a library or a tool, the library or tool is not installed on the build server but in the source tree and the build uses it from the source tree.
Sometimes the latter way is more work. You need to set up the tool or library so it works when it is installed in the source tree. Some tools have hard-coded paths and will not work that way. In that case you can put the source of the tool in the tree and compile the tool during the build.
An even better solution is to set up separate Jenkins jobs for all the tools and libraries and the jobs that need a library or tool will download them from the Jenkins jobs.
This way you can control all your dependencies, and different jobs do not conflict when, for example, one needs an older version of a library and another a newer version. And if someone upgrades a library, it is immediately visible in version control who did what.
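As a sketch of that last approach, a consuming job can download a tool published by another Jenkins job through the lastSuccessfulBuild artifact permalink (the job name, artifact path and server URL below are placeholders):
# fetch the tool archived by a dedicated "build-mytool" job
curl -fsSL -o mytool.tar.gz \
  "http://jenkins.example.com/job/build-mytool/lastSuccessfulBuild/artifact/dist/mytool.tar.gz"
mkdir -p tools
tar -xzf mytool.tar.gz -C tools/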
I am preparing to move my team's source control from VSS over to TFS 2008.
This is for an ASP.NET website, and I am currently using a combination of NAnt scripts and Cruise Control to do all of the builds and deployments.
I've been trying to wrap my head around the best way to architect TFS Build to do the same thing I'm doing with NAnt and Cruise Control, but I can't determine the best approach.
Here are my requirements:
When code reaches a certain point, I manually apply a label to it.
This labeled code needs to be built and deployed to any of our 25 different Dev, QA, or production environments.
Any of these 25 environments can be on any current or past labeled version of the application.
I need to be able to deploy any labeled version of the application to any of the environments.
I'm currently accomplishing the above by using NAnt to perform the build and Cruise Control to pass in command-line options for which environment(s) to build and deploy. I have a NAnt config file with a list of all of my environments and the label each environment should currently be using. This file gets manually updated whenever a new label is created.
I know the approach I'm using with NAnt probably won't be the same as with Team Build, but has anyone done something similar with Team Build and could share how you accomplished it?
Labeling in TFS is much more robust than in VSS. When you create a label, you can create based on a changeset, date, workspace version, heck even a different label. (BTW, I was grabbing a link and came across this post that you might find relevant.)
By default, a Team Build will build from the latest version of code in source, but you can override the "CoreGet" target in a build to build a specific version. Aaron Hallberg (a.k.a. TFS's John Skeet) shows an example here.
For this, see the next point.
I haven't personally had a requirement this difficult, but I've done something similar. When you queue the build, you can pass in any number of parameters in a couple of important ways: 1) through the response file, and 2) at queue time (simple example here). In either case, two of those parameters could be which environment and which label/version number. On my current project I have continuous integration turned on, so when code in a workspace is checked in, the current code is automatically labeled; I pull the specifics for my drop location from the response file, then deploy to the respective location.
Given the fact that you have ~25 environments and n number of versions/labels, you could build a simple GUI that reads the current labels through the TFS API and lets you pick which version to build to a particular environment.
To answer the question, the way I addressed this was to use a combination of a custom build task, Cruise Control, and MSBuild.
The custom build task allowed me to get the latest version from a specific branch and label.
Cruise Control allowed me to pass in specific information for a specific build to MSBuild using a config file, while initiating the build from a UI.
MSBuild was used as normal; however, it was called from Cruise Control, and the custom build task did most of the work.