How to build different configs in Azure DevOps release pipeline? - asp.net-mvc

I currently have an Azure DevOps release pipeline containing Test, Acceptance and Production stages, triggered in that order. The Test stage is triggered when there is a new build available to deploy.
The problem I have with this is that all stages currently deploy the exact same artifact. But this is wrong, since they deploy to different environments that each need their own version of the Web.config.
How do I change my setup in such a way that all environments get the right package? Should I change my build setup in such a way that it builds for multiple different configs or should I have separate builds for each environment? And how do I select what artifact each stage of the release pipeline should deploy?
This is what my release pipeline looks like now:

Each environment can have its own variables defined. Simply click on the variables tab and make sure you scope any of those variables to the proper environment.
Then, using the Azure App Service Deploy task (if targeting Azure) or the IIS Web App Deploy task, you can update your configuration files with the values of your variables; here is the documentation on how to do so.

Related

Jenkins: how to deploy the same code to different servers (that I can specify)?

In the software company where I work we use Jenkins to deploy to different servers. The way we do that is that every single branch in the Git repository deploys to a specific server, based on the name of the branch and the specifications in the Jenkinsfile. But we are in the process of unifying these branches into just one: master. How can we configure Jenkins to take the same code and deploy it to the servers we are interested in, without changing the code? I think we should separate the code from the deployment, but the pipeline still has to exist in some way.
2 solutions come to mind:
I believe you may be using SCM polling to get the builds started. With git diff you can check what was changed and, based on that, start the specific deployment.
If you are running the builds manually, you can parameterize the build and specify which environment you want to deploy to.
From experience, you might want to set up the pipeline so that a commit to the repository only triggers testing and building (or at most a deploy to a test environment), while the actual production deploy is done manually (and can be parameterized); see the sketch below.
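To illustrate the parameterized option, a declarative Jenkinsfile could expose the target environment as a choice parameter. This is only an illustration, not the asker's setup; the environment names, build command, and deploy.sh script are placeholders:

    pipeline {
        agent any
        parameters {
            // 'test', 'staging', 'prod' are invented environment names.
            choice(name: 'DEPLOY_ENV',
                   choices: ['none', 'test', 'staging', 'prod'],
                   description: 'Target environment ("none" = build and test only)')
        }
        stages {
            stage('Build and test') {
                steps {
                    sh './gradlew clean build'  // placeholder build command
                }
            }
            stage('Deploy') {
                // Skipped entirely for plain commit-triggered builds.
                when { expression { params.DEPLOY_ENV != 'none' } }
                steps {
                    // deploy.sh is a hypothetical script that maps envs to servers.
                    sh "./deploy.sh ${params.DEPLOY_ENV}"
                }
            }
        }
    }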

Jenkins parameterized choice for the deployment servers

I am working with a Maven project in Jenkins. Previously I configured the Maven build and Nexus deployment. Now I want to deploy the project to the deployment servers. There are four build environments, called QA, Dev, Prod and Stress, and each one has specific servers. Up to this point I have made the selection properties using Jenkins plugins.
The requirement is that when I select a deployment environment, for example QA, I need to list only the QA servers; if it is Stress, I need to list only the Stress servers.
I am using extended choice parameter plugin.
Does anyone know how to do this?
Here I attach my deployment environments and servers.
I found the solution for this, and I think it will be useful to future readers. To achieve this, I used the
Active Choices Plugin
Below I have attached snapshots of the configuration.
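For reference, the heart of that setup is a small Groovy script attached to an Active Choices Reactive Parameter, which returns the server list for whichever environment is selected. A minimal sketch, with invented parameter and server names (the real values are in the screenshots):

    // Script body of an Active Choices Reactive Parameter named SERVERS,
    // with DEPLOY_ENV (an ordinary choice parameter) as its referenced parameter.
    // All server names below are placeholders.
    switch (DEPLOY_ENV) {
        case 'QA':     return ['qa-app-01', 'qa-app-02']
        case 'Dev':    return ['dev-app-01']
        case 'Prod':   return ['prod-app-01', 'prod-app-02']
        case 'Stress': return ['stress-app-01']
        default:       return ['-- pick an environment first --']
    }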

CI/CD pipeline deployment flow for test and prod environment

I am trying to implement a CI/CD pipeline for my microservice deployment, created in Spring Boot. I am trying to use my SVN repository, Kubernetes and Jenkins to implement the pipeline. While exploring deployment using Kubernetes and Jenkins, I found tutorials and many videos on deploying to both test and prod environments by defining the stages in the Jenkinsfile, and also by adding shell scripts in the Jenkins configuration.
Confusion
My doubt is this: when we deploy to the test environment, how can we deploy the same build to the prod environment after proper testing is finished? Do I need to add a separate shell script for prod? Or do we deploy serially, using one script for both test and prod?
It's completely up to you how you want to do this. In general, we create separate k8s clusters for prod and staging (etc.), and your Jenkins needs to deploy to a different cluster depending on your pipeline. If you want true CI/CD, then one pipeline is enough, and it will deploy to both clusters (or environments).
Most of the time businesses don't want CI on production (for obvious reasons). They want manual testing on QA environments before it's deployed to prod.
As k8s is container based, deploying the same image to different envs is really easy. You just build your Spring Boot app once, and then deploy it to different envs as needed.
A simple pipeline:
Code pushed and build triggered.
Build with unit tests.
Generate the Docker image and push it to the registry.
Run your kubectl / helm / etc. to deploy the newly built image on STAGING.
Check if the deployment was successful.
If you want to deploy the same to prod, continue the pipeline with (you can pause here for QA as well: https://jenkins.io/doc/pipeline/steps/pipeline-input-step/):
Run your kubectl / helm / etc. to deploy the newly built image on PRODUCTION.
Check if the deployment was successful.
If your QA needs more time, then you can also create a different Jenkins job and trigger it manually (even the QA engineers can trigger this).
If your QA and PM are techies, then they can also merge branches or close PRs, which can auto-trigger Jenkins and run prod deployments.
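Putting the steps above together, a minimal declarative Jenkinsfile might look like the sketch below. The kubectl context names, image coordinates, and deployment name are all assumptions, and the input step is the QA pause mentioned above:

    pipeline {
        agent any
        environment {
            // Hypothetical image coordinates; adjust to your registry.
            IMAGE = "registry.example.com/myapp:${env.BUILD_NUMBER}"
        }
        stages {
            stage('Build with unit tests') {
                steps { sh './mvnw clean verify' }  // placeholder build command
            }
            stage('Docker image') {
                steps { sh "docker build -t ${IMAGE} . && docker push ${IMAGE}" }
            }
            stage('Deploy to STAGING') {
                steps {
                    // 'staging-cluster' is an assumed kubectl context on the agent.
                    sh "kubectl --context staging-cluster set image deployment/myapp myapp=${IMAGE}"
                }
            }
            stage('QA gate') {
                // Pauses the pipeline until someone approves promotion to prod.
                steps { input message: 'Deploy this build to PRODUCTION?' }
            }
            stage('Deploy to PRODUCTION') {
                steps {
                    sh "kubectl --context prod-cluster set image deployment/myapp myapp=${IMAGE}"
                }
            }
        }
    }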
EDIT (response to comment):
You are making REST calls to the k8s API. Even kubectl apply -f foo.yaml makes such a REST call. It doesn't matter where you make this call from, as long as your kubectl is configured correctly and can communicate with the k8s server. You can have multiple clusters configured for kubectl and use kubectl --context <staging-cluster> apply -f foo.yaml. You can pick the context name from a Jenkins env variable or some other mechanism.
We're working on an open source project called Jenkins X, which is a proposed sub-project of the Jenkins Foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and nodejs apps (but we support many languages + frameworks).

User permissions for TFS Build server

I am creating a build using the new TFS 2015 build definitions. I have MSBuild tasks as well as npm/gulp tasks. I am looking at using variables to allow me to build and deploy to each environment, with DEV being the only one that runs on check-in. However, I don't want just anyone to be able to start a deploy to production. How would I go about limiting the users who can start a deploy to production? I'd prefer to have only one build definition, for ease of maintenance.
Use the Release hub capabilities for deployments and create an approval workflow for your environment pipeline.

How do you manage multiple releases in multiple environments in continuous integration/delivery?

I am trying to wrap my head around this. Most CI/CD examples/projects have a single master branch that is always released, plus some variant of, e.g., git-flow with a develop branch. Once a change is tagged, it goes to master.
Either way, master is always released to production.
But in the real world as I see it, there are human gates for release to production and other environments. What mechanism do you use to manage the deployment of different versions?
For example:
v1.5 is the current production release
v1.6 has passed all tests, the artifacts are ready, and it is tagged as valid, but the business decides to deploy it only to staging, awaiting an opportune moment to deploy to production
v1.5 is deployed to a demo environment
v2.0 has also passed all tests, but is in UAT, subject to the customer being happy, as it is a major release
There could be many more such environments - production, staging, UAT, demo, demo2, etc.
What mechanism do you use to handle the tagging of a particular version for a particular environment, and the actual deployment thereof?
Although there are probably a few ways to do it, I use the Build Pipeline plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin) along with the Copy Artifact plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin).
With these, you can create individual jobs for each piece of your environment and link them all together.
So as in your example, the pipeline would look like:
Build -> Test and Deploy to UAT (2.0) -> deploy to staging(1.6) -> demo(1.5) -> prod (1.5)
Each piece represents a different build in Jenkins. The idea behind continuous integration is that you create the binaries once and carry them down the pipeline, only changing configuration pieces along the way. In the build job, the artifacts are created and then archived. In any job after that, the artifact is picked up from the upstream job, some work is done, and then it gets re-archived for the next downstream job. So the deploy-to-staging job would go to the Test and Deploy to UAT job to get its binary. The entire concept of Continuous Delivery boils down to the build pipeline: http://en.wikipedia.org/wiki/Continuous_delivery (and yes, I did just cite Wikipedia).
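In pipeline syntax, that hand-off can be sketched with the Copy Artifact plugin's copyArtifacts step; the job name, artifact path, and deploy script below are placeholders:

    // Sketch of the downstream "deploy to staging" job: fetch the binary
    // archived by the upstream job instead of rebuilding it.
    node {
        copyArtifacts projectName: 'test-and-deploy-to-uat',
                      selector: upstream(fallbackToLastSuccessful: true),
                      filter: 'target/*.war'
        sh './deploy-to-staging.sh target/*.war'    // hypothetical deploy step
        archiveArtifacts artifacts: 'target/*.war'  // re-archive for the next job
    }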
As for tagging individual binaries for specific environments, that is, by definition, not continuous integration. A binary is supposed to be created in a way that lets it easily be propagated from one environment to the next. So, unfortunately, individual builds for specific environments can never be continuous delivery. You can use Jenkins as a CI server all you want, but if your process does not match, you will never achieve true continuous integration.
Branching, merging and check-ins always seem to be a touchy subject when it comes to Continuous Integration, so I won't go into it much. But a lot of people share the idea that: "If different members of the team are working on separate branches, then, by definition, they are not participating in a continuous integration process." http://eugenedvorkin.com/continuous-integration-strategies-for-branching-and-merging/
EDIT
For flagging specific builds, it sounds like you're looking to make use of this feature: https://wiki.jenkins-ci.org/display/JENKINS/Fingerprint. It gets the job done effectively, giving you the entire life of any individual artifact. A slightly more complex solution would be Artifactory, which is essentially artifact source control.
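Turning fingerprints on from a pipeline is a one-liner; the artifact path here is a placeholder:

    // Archive and fingerprint the binary so Jenkins can trace it
    // through every downstream job.
    archiveArtifacts artifacts: 'target/*.jar', fingerprint: true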
I explained the concept of the deployment process above, and without information on your specific environment it is hard to go much further. But for me, for Java applications deployed to Tomcat containers, the Deploy plugin works great: https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
You shouldn't have to worry about selecting which artifact to deploy. The pipeline should be set up to always deploy the latest artifact that was archived in its corresponding upstream job.
Maybe Docker can help you out with this issue. It is able to deploy images of projects to a specific environment. If that environment has a Docker client or a Docker daemon, you are able to request specific information about that environment and the project (to be) deployed on it.
Jenkins can still play a huge part in your pipeline for the integration part, and you could let Docker do the delivery part.
Docker: https://www.docker.com
Docker plugin for jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin
Docker also has support for windows machines and .NET.
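As a sketch of that division of labour, the Docker Pipeline plugin lets a Jenkins job build and push the image once, so every environment pulls the same binary; the registry URL, credentials ID, and image name below are invented:

    // Build the image once and push it; each environment then pulls the same tag.
    node {
        def image = docker.build("myorg/myapp:${env.BUILD_NUMBER}")
        docker.withRegistry('https://registry.example.com', 'registry-credentials') {
            image.push()
            image.push('latest')
        }
    }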
