I have a scheduled job that I want to run in AWS Fargate. So far I have set up a CloudWatch schedule that runs a Lambda function that starts the task. I'm using a task definition for each of my environments, but I'm struggling to find an easy way for our QA and PM to promote code from the dev environment to QA, Staging, and Prod. Each environment will have its own CloudWatch rule and its own Lambda function.
How can I set up deployments to each environment, preferably without building a web interface tool that lets them select versions from a dropdown or something? I also have to consider that each environment will need its own environment variables, which may need to change on the fly.
We're going to be using CircleCI's approval hold to manage deployment to each environment. This solution doesn't offer any easy rollback of versions, but documentation can be found here.
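For anyone curious what that looks like, here is a stripped-down sketch of a .circleci/config.yml workflow using approval jobs (the job names are placeholders and the actual job definitions are omitted; an approval job simply pauses the workflow until someone clicks Approve in the CircleCI UI):

```yaml
version: 2.1
workflows:
  build-and-deploy:
    jobs:
      - build-and-test
      - deploy-dev:
          requires: [build-and-test]
      - hold-qa:                  # waits for manual approval in the CircleCI UI
          type: approval
          requires: [deploy-dev]
      - deploy-qa:
          requires: [hold-qa]
      # the same hold/deploy pair repeats for staging
      - hold-prod:
          type: approval
          requires: [deploy-qa]
      - deploy-prod:
          requires: [hold-prod]
```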
I currently have an Azure DevOps release pipeline containing Test, Acceptance, and Production stages that are triggered in that order. The Test stage is triggered when there is a new build available to deploy.
The problem I have with this is that all stages currently deploy the exact same artifact. But this is wrong, since they are deploying to different environments that need to have their own version of the Web.config.
How do I change my setup in such a way that all environments get the right package? Should I change my build setup in such a way that it builds for multiple different configs or should I have separate builds for each environment? And how do I select what artifact each stage of the release pipeline should deploy?
This is what my release pipeline looks like now: Test -> Acceptance -> Production.
Each environment can have its own variables defined. Simply click on the variables tab and make sure you scope any of those variables to the proper environment.
Then, using the Azure App Service Deploy task (if targeting Azure) or the IIS Web App Deploy task, you can update your configuration files with the values of your variables; here is the documentation on how to do so.
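For reference, roughly the same idea expressed as a multi-stage YAML pipeline (the classic release pipeline configures this through the UI instead): each stage scopes its own value of a variable, and XML variable substitution rewrites the matching appSettings key in Web.config at deploy time. The variable name, app names, service connection, and package path below are placeholders, and this assumes Web.config has an appSettings entry whose key matches the variable name.

```yaml
# Sketch only: stage-scoped variables plus XML variable substitution.
stages:
- stage: Test
  variables:
    ApiBaseUrl: https://api.test.example.com        # Test-specific value
  jobs:
  - job: Deploy
    steps:
    - task: AzureRmWebAppDeployment@4
      inputs:
        azureSubscription: 'my-service-connection'  # placeholder service connection
        WebAppName: 'myapp-test'
        packageForLinux: '$(Pipeline.Workspace)/drop/*.zip'
        enableXmlVariableSubstitution: true         # overwrites appSettings keys that match variable names

# The Acceptance and Production stages repeat the same Deploy job,
# each with their own ApiBaseUrl value and WebAppName.
```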
As part of a QA pipeline (in Jenkins), the goal is to automate provisioning and configuration of a VM to run the QA tests.
A Jenkins pipeline can trigger Terraform code to automate provisioning of the VM and Ansible code for its configuration, but issues like rollback and error handling are not easy to solve unless we use some vendor-specific template like an Azure Resource Manager (ARM) template.
So, given that we write pipeline scripts for Jenkins, what would be the best approach to provision and configure a VM in the Azure cloud?
As the goal is to find the best approach to automate provisioning and configuration of a VM to run the QA tests, I would go with a simple Jenkins pipeline script that leverages Azure CLI commands.
To be precise, I would just add an Azure service principal to the Jenkins credential store, then write a simple Jenkins pipeline script that wraps its steps in withCredentials([azureServicePrincipal('SERVICEPRINCIPALCREDENTIALID')]) and uses an sh step to run the Azure CLI commands that provision and configure the VM. For an illustration of this, you may refer to https://learn.microsoft.com/en-us/azure/jenkins/execute-cli-jenkins-pipeline#add-azure-service-principal-to-jenkins-credential.
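A minimal declarative-pipeline sketch of that idea (the credential ID, resource group, VM name, location, and image are placeholders, and it assumes the Azure Credentials plugin is installed):

```groovy
pipeline {
    agent any
    stages {
        stage('Provision and configure QA VM') {
            steps {
                // 'azure-sp' is a placeholder credential ID; the binding exposes
                // AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID and
                // AZURE_SUBSCRIPTION_ID as environment variables.
                withCredentials([azureServicePrincipal('azure-sp')]) {
                    sh '''
                        az login --service-principal \
                            -u "$AZURE_CLIENT_ID" -p "$AZURE_CLIENT_SECRET" -t "$AZURE_TENANT_ID"
                        az group create --name qa-rg --location westeurope
                        az vm create --resource-group qa-rg --name qa-vm \
                            --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys
                        az vm run-command invoke --resource-group qa-rg --name qa-vm \
                            --command-id RunShellScript --scripts "sudo apt-get update -y"
                    '''
                }
            }
        }
    }
}
```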
Regarding issues like rollback and error handling when going with the approach of a Jenkins pipeline that triggers Ansible code (with or without ARM templates) to provision and configure the VM for the QA tests: you might already be aware of this, but for certain types of tasks you can write custom modules that leverage Ansible's error-handling functionality, and in a few scenarios you can use the failed_when option. You can also use blocks, which let you define a set of tasks to be executed in a rescue: section. This blocks functionality in particular should help in getting things rolled back.
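As a rough illustration of the block/rescue idea (the package name, health check, and messages are made up):

```yaml
# Sketch of Ansible block/rescue error handling for a QA VM; values are illustrative.
- name: Configure the QA VM
  block:
    - name: Install the application package
      ansible.builtin.apt:
        name: my-qa-app
        state: present

    - name: Check the service health endpoint
      ansible.builtin.command: curl -fsS http://localhost:8080/health
      register: health
      failed_when: "'ok' not in health.stdout"

  rescue:
    - name: Roll back by removing the package again
      ansible.builtin.apt:
        name: my-qa-app
        state: absent

    - name: Fail the play after rolling back
      ansible.builtin.fail:
        msg: "Configuration failed; the change was rolled back."
```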
Hope this helps!! :)
I'm looking over the docs on environments. I'm trying to understand what these statements actually mean in terms of what server executes the script.
Environments are like tags for your CI jobs, describing where code gets deployed.
The environment keyword is just a hint for GitLab that this job actually deploys to this environment's name.
It makes use of non-definitive terms like 'like' and 'hint', so does the job actually execute on runners tagged with the environment name?
It also states:
If you have a deployment service such as Kubernetes enabled for your project, you can use it to assist with your deployments
Is that a requirement to utilize environments or just a helpful manager?
And I guess my final question would be: if I have multiple runners tagged with an environment (assuming that is how it works), would the job execute on all of those runners, unlike tags, which just choose any one runner that matches?
The environment name has no effect on the execution location; it is used for display purposes in the Environments UI.
Tags are still specified on the deployment job to determine where it runs. Kubernetes is not required in order to use environments, but I suspect the environment-management functionality is greatly reduced without it.
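A small .gitlab-ci.yml sketch of that distinction (the job name, tag, script, and URL are made up): tags picks the runner that executes the job, while environment only records where the deployment went.

```yaml
deploy_staging:
  stage: deploy
  tags:
    - staging-runner          # selects which runner executes this job
  script:
    - ./deploy.sh staging     # hypothetical deploy script
  environment:
    name: staging             # only labels the deployment in the Environments UI
    url: https://staging.example.com
```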
We are developing a CI/CD pipeline leveraging Docker/Kubernetes in AWS. This topic is touched on in Kubernetes CI/CD pipeline.
We want to create (and destroy) a new environment for each SCM branch, from the moment a Git pull request is opened until it is merged.
We will have a Kubernetes cluster available for that.
During prototyping, the dev team came up with the idea of Kubernetes namespaces. It looks quite suitable: for each branch, we create a namespace ns-<issue-id>, as sketched below.
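For concreteness, each branch would get something like the following, applied by the pipeline (the name and labels are illustrative):

```yaml
# Illustrative Namespace per branch/PR; name and labels are placeholders.
apiVersion: v1
kind: Namespace
metadata:
  name: ns-1234            # ns-<issue-id> derived from the branch / pull request
  labels:
    purpose: preview       # makes it easy to find and delete once the PR is merged
    issue: "1234"
```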
But that idea was dismissed by the DevOps prototyper without much explanation, just the statement that "we are not doing that because it's complicated due to RBAC". And it's quite hard to get more detailed reasons.
However, for CI/CD purposes we need no RBAC: everything can run with unlimited privileges and no quotas; we just need a separate network for each environment.
Is using namespaces for such purposes a good idea? I am still not sure after reading Kubernetes docs on namespaces.
If not, is there a better way? Ideally, we would like to avoid using Helm, as it adds a level of complexity we probably don't need.
We're working on an open-source project called Jenkins X, a proposed sub-project of the Jenkins Foundation, aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you submit a pull request, we automatically create a Preview Environment, which is exactly what you describe: a temporary environment used to deploy the pull request for validation, testing, and approval before the pull request is approved and merged.
We now use Preview Environments all the time for many reasons and are big fans of them! Each Preview Environment is in a separate namespace so you get all the usual RBAC features from Kubernetes with them.
If you're interested, here's a demo of how to automate CI/CD with multiple environments on Kubernetes, using GitOps for promotion between environments and Preview Environments on pull requests, with Spring Boot and Node.js apps (but we support many languages and frameworks).
I am trying to wrap my head around this. Most CI/CD examples/projects have a single master branch that is always released, and use some variant of, e.g., git-flow to have a develop branch. Once tagged, changes go to master.
Either way, master is always released to production.
But in the real world as I see it, there are human gates for release to production and other environments. What mechanism do you use to manage the deployment of different versions?
For example:
v1.5 is the current production release
v1.6 has passed all tests, artifacts are ready, it is tagged as valid, but business decides to deploy it only to staging, awaiting an opportune moment to deploy
v1.5 is deployed to a demo environment
v2.0 has also passed all tests, but is in UAT, subject to the customer being happy, as it is a major release
There could be many more such environments - production, staging, UAT, demo, demo2, etc.
What mechanism do you use to handle the tagging of a particular version for a particular environment, and the actual deployment thereof?
Although there are probably a few ways to do it, I use the Build Pipeline plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin) along with the Copy Artifact plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin).
With these, you can create individual jobs for each piece of your environment and link them all together.
So as in your example, the pipeline would look like:
Build -> Test and Deploy to UAT (2.0) -> Deploy to Staging (1.6) -> Demo (1.5) -> Prod (1.5)
Each piece represents a different build in Jenkins. The idea behind continuous integration is that you create the binaries once and carry them down the pipeline, only changing configuration pieces along the way. In the build job, the artifacts are created and then archived. In any job after that, the artifact is picked up from the upstream job, some work is done, and then it gets re-archived for the next downstream job. So the deploy-to-staging job would go to the Test and Deploy to UAT job to get its binary. The entire concept of Continuous Delivery boils down to the build pipeline: http://en.wikipedia.org/wiki/Continuous_delivery (and yes, I did just cite Wikipedia).
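As a rough sketch of that "build once, copy the artifact downstream" idea, using the Copy Artifact plugin's copyArtifacts step (the job names, artifact path, and deploy script are made up, and this is scripted-pipeline shorthand for two separate jobs rather than the Build Pipeline plugin's freestyle-job setup):

```groovy
// Upstream job ('app-build'): build the binary once and archive it.
node {
    stage('Build') {
        sh 'mvn -B clean package'                      // produces target/app.war
        archiveArtifacts artifacts: 'target/app.war'   // archive for downstream jobs
    }
}

// Downstream job ('deploy-staging'): fetch the archived artifact and deploy it,
// never rebuilding it.
node {
    stage('Fetch artifact') {
        copyArtifacts projectName: 'app-build', selector: lastSuccessful()
    }
    stage('Deploy to staging') {
        sh './deploy.sh staging target/app.war'        // hypothetical deploy script
    }
}
```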
As for tagging individual binaries for specific environments, that is, by definition, not continuous integration. A binary is supposed to be created in a way that it can easily be propagated from one environment to the next. So unfortunately, individual builds for specific environments can never be continuous delivery. You can use Jenkins as a CI server all you want, but if your process does not match, you will never achieve true continuous integration.
Branching, merging, and check-ins always seem to be a touchy subject when it comes to Continuous Integration, so I won't go into it much. But a lot of people share the idea that: "If different members of the team are working on separate branches, then by definition, they are not participating in a continuous integration process." http://eugenedvorkin.com/continuous-integration-strategies-for-branching-and-merging/
EDIT
For flagging specific builds, it sounds like you're looking to make use of this feature: https://wiki.jenkins-ci.org/display/JENKINS/Fingerprint, which gets the job done effectively, giving you the entire life of any individual artifact. A bit more complex solution would be Artifactory, which is essentially artifact source control.
I explained the concept of the deployment process above, and without information on your specific environment it is hard to go much further. But for me, for Java applications deployed to Tomcat containers, the Deploy plugin works great: https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
You shouldn't have to worry about selecting which artifact to deploy. The pipeline should be set up to always deploy the latest artifact that was archived in its corresponding upstream job.
Maybe Docker can help you out with this issue. It is able to deploy images of projects to a specific environment. If that environment has a Docker client or a Docker daemon, you are able to request specific information about that environment and the project (to be) deployed on it.
Jenkins can still play a huge part in your pipeline for the integration part, and you could let Docker do the delivery part.
Docker: https://www.docker.com
Docker plugin for jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin
Docker also has support for Windows machines and .NET.