What does the "release" stage in Twelve-Factor App correspond to? - docker-swarm

In Docker Swarm project, what does the "release" stage of 12 factor app correspond to?
From my understanding the "docker build" corresponds to build stage, and the image this produces does not constitute a release (because it has yet to be combined with an environment's config).
Here is the official description:
A codebase is transformed into a (non-development) deploy through three stages:

The build stage is a transform which converts a code repo into an executable bundle known as a build. Using a version of the code at a commit specified by the deployment process, the build stage fetches vendors dependencies and compiles binaries and assets.

The release stage takes the build produced by the build stage and combines it with the deploy’s current config. The resulting release contains both the build and the config and is ready for immediate execution in the execution environment.

The run stage (also known as “runtime”) runs the app in the execution environment, by launching some set of the app’s processes against a selected release.
They further clarify:
Every release should always have a unique release ID, such as a
timestamp of the release (such as 2011-04-06-20:32:17) or an
incrementing number (such as v100). Releases are an append-only ledger
and a release cannot be mutated once it is created. Any change must
create a new release.
Builds are initiated by the app’s developers whenever new code is
deployed. Runtime execution, by contrast, can happen automatically in
cases such as a server reboot, or a crashed process being restarted by
the process manager. Therefore, the run stage should be kept to as few
moving parts as possible, since problems that prevent an app from
running can cause it to break in the middle of the night when no
developers are on hand. The build stage can be more complex, since
errors are always in the foreground for a developer who is driving the
deploy.
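As a rough sketch of how that mapping might look with Swarm (the image name, tag, and env file below are hypothetical), the build stage produces an immutable image, while creating or updating a service combines that image with the environment's config — which is arguably the "release" — and the running tasks are the run stage:

```shell
# Build stage: produce an immutable image, tagged by commit
docker build -t registry.example.com/myapp:abc1234 .
docker push registry.example.com/myapp:abc1234

# Release (+ run) stage: creating/updating the service combines the image
# with this environment's config (env vars, docker configs, secrets)
docker service create --name myapp \
  --env-file prod.env \
  registry.example.com/myapp:abc1234
```

Under this reading, the release ID would be the combination of the immutable image digest and the service spec version that Swarm records on each `docker service update`.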

Related

Cleanup Jenkins home directory

We have started to use jenkins from last few months and now the size of home directory is about 50GB. I noticed that size of Jobs and workspace directories are about 20 GB each. How can I clean them? What kind of strategy I should use?
Consider the various Jenkins areas that tend to grow excessively. The key areas are: system logs, job logs, artifact storage, and job workspaces. The following details options to best manage each of these.
System logs
System logs may be found in <JENKINS_HOME>/logs or /var/log/jenkins/jenkins.log, depending on your installation. By default, Jenkins does not always include log rotation (logrotate), especially if running straight from the war. The solution is to add logrotate; this CloudBees post and my Stack Overflow response add details.
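A minimal logrotate snippet might look like the following, assuming the log lands in /var/log/jenkins/jenkins.log (adjust the path for your install) and is dropped into /etc/logrotate.d/jenkins:

```
/var/log/jenkins/jenkins.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate avoids having to signal Jenkins to reopen its log file; the retention numbers are only a starting point.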
You can also set the Jenkins system property hudson.triggers.SafeTimerTask.logsTargetDir to relocate the logs outside <JENKINS_HOME>. The reason why is explained below.
Job Logs
Each job has an option to [ X ] Discard old builds. As of LTS 2.222.1, Jenkins introduced a Global Build discarder (pull #4368) with similar options and default actions; this is a global setting. Prior to that, job logs (and artifacts) were retained forever by default (not good).
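In a pipeline, the per-job equivalent can be set in the options directive; a sketch (the retention numbers are arbitrary):

```groovy
pipeline {
    agent any
    options {
        // keep at most 30 builds / 30 days of logs, but only 5 sets of artifacts
        buildDiscarder(logRotator(numToKeepStr: '30',
                                  daysToKeepStr: '30',
                                  artifactNumToKeepStr: '5'))
    }
    stages {
        stage('Build') {
            steps { echo 'build here' }
        }
    }
}
```

Note that artifact retention can be set tighter than log retention, which helps when artifacts dominate the disk usage.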
Advanced options can manage artifact retention (from the post-build action "Archive the artifacts") separately.
What's in Jobs directory?
The Jobs directory contains a directory for every job (and folders, if you use them). Inside the directory is the job config.xml (a few KB in size), plus a builds directory. builds has a numbered directory holding the build log for each retained build, a copy of the config.xml as it was at runtime, and possibly some additional files of record (changelog.xml, injectedEnvVars.txt). If you chose the Archive the artifacts option, there's also an archive directory, which contains the artifacts from that build.
The Jenkins system property jenkins.model.Jenkins.buildsDir lets you relocate the builds outside <JENKINS_HOME>.
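Both properties can be passed when launching Jenkins; a sketch, assuming a war-based install and hypothetical target paths:

```shell
# single quotes keep ${ITEM_FULL_NAME} for Jenkins to expand, not the shell
java -Dhudson.triggers.SafeTimerTask.logsTargetDir=/data/jenkins-logs \
     -Djenkins.model.Jenkins.buildsDir='/data/jenkins-builds/${ITEM_FULL_NAME}' \
     -jar jenkins.war
```

Package-based installs would set the same properties in the service's JAVA_ARGS instead.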
Why Relocate logs outside <JENKINS_HOME>?
I would strongly recommend relocating both the system logs and the job/build logs (and artifacts). By moving the system logs and build logs (and artifacts, if ticked) outside of <JENKINS_HOME>, what's left is the really important stuff to back up and restore Jenkins and jobs in the event of disaster or migration. Carefully read and understand the steps "to support migration of existing build records" to avoid build-related errors. It also makes it much easier to analyze which job logs are consuming all the space and why (i.e. logs vs artifacts).
Workspaces
Workspaces are where the source code is checked out and the job (build) is executed. Workspaces should be ephemeral. Best practice is to start with an empty workspace and clean up when you are done, using the Workspace Cleanup plugin (cleanWs()), unless retaining the workspace is necessary.
The OP's mention of workspaces on the Jenkins controller suggests jobs are being run on the master. That is not a good (or secure) practice, though lightweight pipeline stages always execute on the master, and misconfigured pipelines will also fall back to the master. You can set up a node physically running on the same server as the master for better security.
You can use cleanWs() EXCLUDE and INCLUDE patterns to selectively clean the workspace if deleting everything is not viable.
There are two Jenkins system properties to control the location of the workspace directory: for the master, jenkins.model.Jenkins.workspacesDir, and for the nodes/agents, hudson.model.Slave.workspaceRoot. Again, as workspaces are ephemeral, move them out of <JENKINS_HOME> so you can better manage and monitor them.
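A selective clean might be sketched like this (the pattern is hypothetical — here an in-workspace Maven cache is preserved while everything else is deleted):

```groovy
// only EXCLUDE patterns given: everything except matches is removed
cleanWs(patterns: [[pattern: '.m2/**', type: 'EXCLUDE']])
```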
Finally, one more space consideration...
Both Maven and npm cache artifacts in a local repository, typically located in the user's $HOME directory. If you increment versions often, that content will get stale and bloated. It's a cache, so take the time hit every once in a while and purge it, or otherwise manage the content.
However, it's possible to relocate the cache elsewhere through Maven and npm settings. Also, if running a Maven step, every step has an Advanced option to use a private repository, located within the job's workspace. The benefit is that you know exactly what your build is using; no contamination. The downside, though, is massive duplication and wasted space if all jobs have private repos and you never clean them out or delete the workspaces, or longer build times every time if you do clean. Consider using cleanWs() or a separate job to purge as needed.
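Relocation can be done from the command line as well as via settings files; a sketch with hypothetical paths:

```shell
# point Maven's local repository at a managed location
mvn -Dmaven.repo.local=/data/cache/m2 clean package

# relocate npm's cache
npm config set cache /data/cache/npm
```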
Workspaces can be cleaned before and/or after any execution. I recommend doing both: before every build, and after the build only when it succeeds. In case of errors, you can enter the workspace and check there for any clue. In a pipeline you do it with the cleanWs() step.
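The clean-before, clean-after-success pattern described above might be sketched in a declarative pipeline like this:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                cleanWs()          // start from an empty workspace
                echo 'build and test here'
            }
        }
    }
    post {
        success {
            cleanWs()              // clean only when the build succeeded
        }
        // on failure the workspace is kept so it can be inspected
    }
}
```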
For the jobs directory, you can select per job the amount of time / maximum number of builds to store. This is more complicated because it depends on what you want to save. For example, if there are a lot of builds and you don't mind deleting that information, you could keep 10 builds for 30 days. That configuration is in the job configuration under job properties: search for "Discard old builds", then "Days to keep builds" and "Max # of builds to keep".
My suggestion is to use larger numbers at first and then test how it behaves.

How to make production deployment a part of CI/CD pipeline

I want to build a new pipeline for my project which includes production deployment as a stage. A sample can be seen in the below picture.
Once the pipeline has passed all the quality gates, it will be deployed to the dev environment automatically, and functional tests will run in the next stage. A manual approval is required at this stage; once approved, the next step deploys to prod/preprod. I am able to achieve the goal up to this point. I am stuck at the following step:
Can we keep this build forever, or for a specified duration, say, 1 or 2 months, and continue from the same stage when required? For example, the git master branch is getting updated very frequently, and the builds are getting triggered automatically, all builds should reach this stage and wait. Only the build having approval should be able to proceed forward.
There can be one other way; I can have a separate job for deploy to preprod/prod. But, I want to know if this is possible.
Regarding - Can we keep this build forever, or for a specified duration, say, 1 or 2 months...
You could achieve it partially: the build can stop at a manual input step for a period of time. But remember that while the job is running it occupies an executor on its agent; once all executors are in use, you can't start another build.
... and continue from the same stage when required? - It's not possible.
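One common way to bound the wait, and (in declarative pipelines) to avoid holding an executor while waiting, is to run the input step in a stage without an agent; a sketch:

```groovy
pipeline {
    agent none   // no executor is held while the input step waits
    stages {
        stage('Approval') {
            steps {
                // abort automatically if nobody approves within 30 days
                timeout(time: 30, unit: 'DAYS') {
                    input message: 'Deploy to prod?'
                }
            }
        }
        stage('Deploy') {
            agent any
            steps {
                echo 'deploying...'
            }
        }
    }
}
```

Note that builds aborted by the timeout are simply gone; only the build that was approved proceeds, which matches the "only the approved build moves forward" requirement.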

Jenkins CD Pipeline: How to execute particular steps just once a day

I have a (declarative) Jenkins Pipeline that is doing builds and tests continuously. When successful, the application should be deployed on particular test environments once a day, based on some schedule.
For instance, if the build was successful, and current time is
between 11:00 and 14:00, deploy to TestA, but just once a day;
between 14:00 and 18:00 deploy to TestB, but also just once a day;
etc.
I would be able to do the time slot handling in some groovy code, but I'm not sure how to "remember" whether there already was a deployment in this time period as of today. Of course, it is useless to store that information in the workspace, since later builds may be executed somewhere else.
So what options do I possibly have?
Store some marker file in a shared network location, and check this file and its timestamp in later builds to decide whether a deploy is required. This would probably work, but introduces dependency to external resources.
Can I somehow "mark" the Jenkins build when doing deployment, so that following builds can iterate through previous build(s) and search for such marker? Like archiving some small text file with the build?
Alternatively, is there any plugin that supports this scenario?
Or any completely different idea?
This seems to be a frequent scenario in CD pipelines, so I wonder how this is done in the wild... Thanks for any hints!
You should have the build and deploy stages on separate pipelines. That way the build can occur independently, and the deployment can be triggered by the timer to run exactly once per day.
In this case you'd want the build pipeline to archive its artifacts, so that the deploy pipeline can always deploy a successful build. The Copy Artifacts plugin can be used to get the build pipeline's artifacts into the deploy pipeline's workspace.
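A sketch of the deploy side of that split, assuming the Copy Artifacts plugin is installed (the job name 'build-pipeline' and the cron spec are hypothetical):

```groovy
pipeline {
    agent any
    triggers { cron('H 6 * * *') }   // run the deploy once a day
    stages {
        stage('Fetch build') {
            steps {
                // pull artifacts from the last successful build of the build job
                copyArtifacts(projectName: 'build-pipeline',
                              selector: lastSuccessful())
            }
        }
        stage('Deploy') {
            steps { echo 'deploy the copied artifacts to the test environment' }
        }
    }
}
```

Because the selector always resolves to the last successful build at trigger time, the deploy pipeline never needs to "remember" whether a deployment already happened in a given window — the schedule guarantees once per day.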

How to achieve Roll back using Jenkins

I know this forum is not meant to provide strategies.
Using Jenkins I have set up CI and CD to my Dev, QA and Staging environments. I am stuck on a rollback strategy for all my environments:
1- What happens if my build fails in Dev
2- What happens if my build fails in QA and passed in Dev.
3- What happens if my build fails in Staging and passed in Dev and QA.
How should I roll back and get things done, considering the DB is not in place? I have created a sample workflow but am not sure it's the right process.
Generally you can achieve this in two ways:
Set up a release management tool that tracks every execution of your pipeline and snapshots the variables, artifacts, etc. that were used on that exact execution; then you can simply re-run an earlier release (check tools like Octopus Deploy).
If you are using a branch strategy with tags, you can parameterize your jobs, passing the tag you want to build, and build the earlier tag if something fails. Also check the rebuild option for older job executions.
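The tag-based approach might be sketched like this (the default tag value and repository URL are hypothetical):

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'TAG', defaultValue: 'v1.0.0',
               description: 'Git tag to build and deploy')
    }
    stages {
        stage('Checkout') {
            steps {
                // check out the requested tag instead of the branch head
                checkout([$class: 'GitSCM',
                          branches: [[name: "refs/tags/${params.TAG}"]],
                          userRemoteConfigs: [[url: 'https://example.com/repo.git']]])
            }
        }
        stage('Deploy') {
            steps { echo "deploying ${params.TAG}" }
        }
    }
}
```

Rolling back then amounts to re-running the job with the previous known-good tag, which works the same way for Dev, QA and Staging.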

TFS only releasing to the first agent in an environment

In TFS 2015 Update 2, I have configured seven release agents in one pool, separated into a set of logical environments using capabilities.
I also have a release configured with three environments defined to use the queue corresponding to this pool and demands specified to filter to the appropriate servers for each environment (1-Test, 2-QA, 4-Prod).
My problem is that TFS is only releasing to the first agent created that meets the demands. If I remove all demands in an environment, I would assume it would release to every agent in the pool, yet TFS still releases to only the first agent. If I disable that first agent, it will release to the next, but still to only one agent.
What am I missing?
I think you're misunderstanding what agents are for. The agent merely acts as an invocation mechanism for your deployment activities. You don't need one agent per environment or per server.
For example, if you need to run a PowerShell script on a machine, you use the "PowerShell on Target Machine" deployment activity. The agent will then use WinRM to tell the target machine what scripts to run. That agent can run PowerShell scripts against any machine.
Why would you want the release or build to select a different agent every time? I have always seen with TFS that a build is very likely to run on the agent where the last successful build of that build definition ran.
The only reason I can think a build/release should run on a different agent every time is if more than one build requiring the same capabilities is running at the same time.
If you would like to test each of the agents then try disabling one agent at a time and run the build/release.