Tracking version of Terraform templates

I want to include the Terraform orchestration process in my Continuous Integration pipelines. The idea is that each time someone modifies a Terraform template, a new version is bumped up and a snapshot is saved on a repository somewhere, like Nexus.
In a very naive approach, I was thinking of putting a comment at the top of every Terraform template file, like this: # Version 1.0.0, and on every release looking at this string and bumping it up to # Version 1.0.1.
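(For illustration, that naive bump could be scripted; the file name and version string below are just placeholders:)

```shell
# A stand-in template file for the example
printf '# Version 1.0.0\nresource "null_resource" "demo" {}\n' > main.tf

# Bump the patch level in the header comment
sed -i 's/^# Version 1\.0\.0$/# Version 1.0.1/' main.tf

head -1 main.tf    # prints "# Version 1.0.1"
```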
Is there, however, a recommended way of doing it - the Terraform way?

I believe what you are looking for is a Terraform S3 backend with a Terraboard view.
This way, the state file goes to the S3 bucket whenever a change happens. Terraboard provides a good UI to view and compare the versions/states.
https://github.com/camptocamp/terraboard#use-with-docker
Remember: AWS S3 needs to have Versioning enabled.
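As a minimal sketch of that setup (the bucket name, key and region below are placeholders, not from the question):

```shell
# Enable versioning on the state bucket so every state change is kept
aws s3api put-bucket-versioning \
  --bucket my-terraform-state \
  --versioning-configuration Status=Enabled

# Initialise Terraform against the S3 backend; passing the settings via
# -backend-config keeps bucket details out of the committed .tf files
terraform init \
  -backend-config="bucket=my-terraform-state" \
  -backend-config="key=myproject/terraform.tfstate" \
  -backend-config="region=eu-west-1"
```

Terraboard can then be pointed at the same bucket to browse the state history.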
Thanks.

Related

Jenkins Multibranch Pipeline can't find Jenkinsfile in subdirectory using svn

I'm trying to set up a build using Multibranch. I'm basically having the same problem as stated here, but our SCM is Subversion. The Bug in the Bitbucket Branch Source Plugin as described here can therefore be ruled out, especially since our Jenkins has the newest version installed anyway.
I tried to find a similar ticket regarding my problem, but couldn't find one, so here I am.
As this particular project is configured in a way that configuration files (including things like the Jenkinsfile) are stored in a subfolder, I don't know what else to try, apart from configuring individual jobs. I'd rather stick to using Multibranch Pipelines, however, as they help keep the build jobs tidy.

How to version artifacts on Artifactory without overwriting

I'm trying to extend our Jenkins job (which builds the entire project) to deploy the built artifacts to our Artifactory, but I have run into some problems related to the versioning of the artifacts. If I try to redeploy an artifact whose version didn't change (not a snapshot), I get a 403 error (user 'foo' needs DELETE permission), which is understandable: I should not replace an already released artifact. If the artifact version contains -SNAPSHOT, then there are no problems; it's always uploaded. My question is: how should we approach the scenario of having overwriting locked in Artifactory?
Shouldn't the Artifactory plugin for Jenkins just ignore the deploy of the artifact in case it is already deployed, instead of failing the job?
Or should we always use -SNAPSHOT (during development) even if the artifact has not changed?
Do we increase the version on every release even if the artifact has not changed?
Shouldn't the Artifactory plugin for Jenkins just ignore the deploy of the artifact in case it is already deployed, instead of failing the job?
The job should fail if the artifact is already deployed with a fixed version (non -SNAPSHOT). For instance, on a manual job trigger, I would like to know if I tried to build and deploy using a version name that is already published (maybe by someone else on the team).
Or should we always use -SNAPSHOT (during development) even if the artifact has not changed?
-SNAPSHOT is made for development. Yes, we usually push the artifact at the end of the build even if it did not change - because, for instance, you updated a README and the job was triggered.
Usually SNAPSHOTs have a lifetime depending on how your binary repository (here Artifactory) is configured; SNAPSHOTs can be cleaned every two weeks, for instance.
The link shared by Manuel has other interesting definitions like
Usually, only the most recently deployed SNAPSHOT for a particular version of an artifact is kept in the artifact repository, although the repository can be configured to maintain a rolling archive with a number of the most recent deployments of a given artifact.
https://docs.oracle.com/middleware/1212/core/MAVEN/maven_version.htm#MAVEN401
Do we increase the version on every release even if the artifact has not changed?
Yes, we increase the version number at every release. I call a release what the customer will get. Except for exceptional occasions, you won't go through the release process if the artifact didn't change. A release usually involves a lot of people in an organization, even people who are not from Development. A popular standard is semantic versioning: https://semver.org/ Sometimes people prefer to version with the date. My advice is to use semver and include a file in the artifact with the build date. This file could be used by the artifact itself to report its version at runtime.
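As an illustration of that last point (the file name, fields and version here are made up for the example), a build step could stamp the artifact like this:

```shell
# Version comes from semver and is bumped at release time
VERSION="1.4.2"
BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)"

# Drop a small metadata file into the artifact so it can report
# its own version at runtime
cat > version.properties <<EOF
version=${VERSION}
buildDate=${BUILD_DATE}
EOF

cat version.properties
```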
You could work with build numbers, so you wouldn't overwrite existing versions; instead, a new buildNumber could include some bugfixes/security fixes.
https://docs.oracle.com/middleware/1212/core/MAVEN/maven_version.htm#A1000661
If you're consuming the dependency, you can handle the versions with expressions: either an exact version, or an expression that covers the buildNumber.

Get the list of files that have changed - puppet

We have our Jira installation managed by Puppet, so we have a Puppet script to install Jira. After installation, a few files like server.xml and setting.sh were changed manually on the server, without using Puppet.
We now need to commit those changes back to the Puppet repo (r10k managed). But how will we identify the files which have changed compared to the files in Puppet?
You shouldn't be changing files manually on the server and committing them back into puppet.
But how will we identify the files which have changed compared to the files in Puppet?
You can't.
This fundamentally breaks the idea of infrastructure as code and configuration management. The whole idea of putting your configuration data into puppet is to stop this behaviour so that multiple people can always know what has changed because the changes are tracked in version control.
Make all the changes inside a git repo and then test them using a Puppet run, potentially with --noop if you're worried this may break JIRA.
You need to get a workflow set up so that this is easy, not continue to manipulate files on a server by hand and then expect Puppet to understand what each person has done.
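A minimal sketch of such a workflow, assuming a control repo with r10k-deployed environments (the repo URL, branch and file names here are all hypothetical):

```shell
# 1. Make the change in the control repo, never on the server
git clone https://git.example.com/puppet/control-repo.git
cd control-repo
git checkout -b fix_server_xml
# ...edit site-modules/profile/templates/server.xml.erb...
git commit -am "Tune server.xml connector settings"
git push origin fix_server_xml   # r10k deploys the branch as an environment

# 2. On a test node, preview what Puppet would change before merging
puppet agent --test --noop --environment fix_server_xml
```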

OpenShift S2I build strategy from multiple data sources

A web application typically consists of code, config and data. Code can often be made open source on GitHub, but per-instance config and data may contain secrets and are therefore inappropriate to store in GitHub. Data can be imported into persistent storage, so disregard it for now.
Assuming the configs are file based and are saved in another private secured SVN repo, in order to deploy the web app to OpenShift and implement CI, I need to merge config files with code prior to running build scripts. In addition, the build strategy should support GH webhooks for automated build.
My questions are, to be more specific:
Does the OpenShift BuildConfig support multiple data sources, especially from SVN?
If not, how can such a web app be deployed to OpenShift?
The solution I came up with so far:
Instead of relying on OpenShift for CI, use Jenkins instead.
Merge config files with code using Jenkins.
Instead of using the Git source type in BuildConfig, use the binary source type instead.
Let Jenkins run
oc start-build --from-dir=<directory>
where <directory> contains merged code/config
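A rough sketch of that Jenkins step in shell (the directory and build names are hypothetical, and the oc call is commented out because it needs a live cluster):

```shell
# Stand-in checkouts for illustration; in Jenkins these would be the real
# GitHub and SVN workspace directories
mkdir -p app-code private-config
echo '<?php echo "app"; ?>' > app-code/index.php
echo 'db_password=example' > private-config/app.conf

# Merge code and private config into a single build directory
BUILD_DIR=build-context
mkdir -p "$BUILD_DIR"
cp -r app-code/. "$BUILD_DIR"/
cp -r private-config/. "$BUILD_DIR"/

# Feed the merged directory to the binary build (needs a cluster, so shown commented)
# oc start-build myapp --from-dir="$BUILD_DIR" --follow
```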

How to deploy the latest version of the site using Chef and tar_extract

I am using Jenkins to continuously build the website front-end code from a GitHub repository, package it up into a tar archive, and post it to an S3 bucket. The Jenkins build creates files named like this: FrontEnd-122.tgz, where 122 is the build number.
I am using the following recipe to deploy the app onto the server:
deploy_version = node['my-app']['build-number']
deploy_from = "http://mybucket.com/FrontEnd-#{deploy_version}.tgz"

tar_extract deploy_from do
  target_dir '/usr/local/site/FrontEnd'
  creates '/usr/local/site/FrontEnd/index.php'
  tar_flags [ '--strip-components 1' ]
end
This all works great; however, I have to manually update the node attribute my-app/build-number, which is fine for QA and production deployments.
What I would like to do is to have a snapshot deployment VM, where the latest code gets deployed, for further testing with Selenium and friends. However, to do that I need a way for the above cookbook to figure out what the latest build number is and deploy from there. Do you have any suggestions?
Tricky one, because you need a mechanism for Chef to determine the latest revision stored in S3.
Presumably you store the code in a revision control system like Subversion or Git? Would it be feasible to use the Chef deploy resource instead? Let Chef pull the website code from your trunk or master branch for testing purposes.
Another option would be to use a binary repository manager that understands the concept of "Snapshots". Take a look at products like Nexus, Artifactory and Archiva. You could then use S3 for both backup and a distribution area for approved and released copies of your site.
So, I used the dumb way to solve this issue: besides putting the versioned archive into the S3 bucket, I also push the same archive under a name like 'FrontEnd-Latest'. I also modified the cookbooks to use a version parameter. The staging server has the version parameter set to 'Latest', and the production server has the parameter set to whatever version is considered stable.
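If you'd rather avoid the duplicate 'Latest' upload, the newest build number can also be derived from a bucket listing. A shell sketch of the parsing (the listing is simulated here; in practice it would come from aws s3 ls):

```shell
# Simulated bucket listing; in practice something like:
#   aws s3 ls s3://mybucket/ | awk '{print $4}'
listing='FrontEnd-9.tgz
FrontEnd-121.tgz
FrontEnd-122.tgz'

# Pull out the numeric build numbers and keep the highest one
latest=$(printf '%s\n' "$listing" \
  | sed -n 's/^FrontEnd-\([0-9][0-9]*\)\.tgz$/\1/p' \
  | sort -n | tail -1)

echo "FrontEnd-${latest}.tgz"    # prints FrontEnd-122.tgz
```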
