How to define different deployments in Vercel?

Currently I use Vercel to host my web application.
I would like to declare different branches as different deployments.
That means I want to deploy only PRs on dev and main, and not each push.
The Vercel docs show how to ignore build steps; unfortunately, this only works for the production environment:
[ "$VERCEL_ENV" != production ]

Related

Jenkins: how to deploy the same code to different servers (that I can specify)?

At the software company where I work we use Jenkins to deploy to different servers. The way we do that is that every single branch from the git repository deploys to a specific server, based on the branch name and the specifications in the Jenkinsfile. But we are in the process of unifying these branches into just one: master. How can we configure Jenkins to take the same code and deploy it to the servers we are interested in, without changing the code? I think we should separate the code from the deployment, but the pipeline still has to exist in some way.
Two solutions come to mind:
I believe you may be using SCM polling to get the builds started. With git diff you can check what was changed and, based on that, start the specific deployment.
If you are running the builds manually, you can parameterize the build and specify which server you want to deploy to (see the sketch below).
From experience, you might want to set up the pipeline so that when you commit to the repository only testing and building happen and no deploy is done (or only to a test environment), and the proper production deploy is done only manually (and can be parameterized).
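As an illustration of the parameterized approach, the deploy step's shell script could branch on a build parameter; the parameter name and hostnames here are hypothetical, not from the question:

# TARGET_SERVER is assumed to be a Choice build parameter, which Jenkins
# exposes to shell steps as an environment variable.
case "$TARGET_SERVER" in
  uat)  DEPLOY_HOST="uat.example.com"  ;;   # placeholder hostnames
  prod) DEPLOY_HOST="prod.example.com" ;;
  *)    echo "Unknown target: $TARGET_SERVER" >&2; exit 1 ;;
esac
# Ship the same artifact regardless of target; only the destination changes.
scp build/app.war "deploy@$DEPLOY_HOST:/opt/app/"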

How to build different configs in Azure DevOps release pipeline?

I currently have an Azure DevOps release pipeline containing Test, Acceptance and Production stages, triggered in that order. The Test stage is triggered when there is a new build available to deploy.
The problem I have with this is that all stages currently deploy the exact same artifact. But this is wrong, since they are deploying to different environments that each need their own version of the Web.config.
How do I change my setup so that every environment gets the right package? Should I change my build setup so that it builds multiple different configs, or should I have a separate build for each environment? And how do I select which artifact each stage of the release pipeline should deploy?
This is what my release pipeline looks like now: [screenshot of the Test -> Acceptance -> Production stages]
Each environment can have its own variables defined. Simply click on the Variables tab and make sure you scope each of those variables to the proper environment.
Then, using the Azure App Service Deploy task (if targeting Azure) or the IIS Web App Deploy task, you can update your configuration files with the values of your variables; the documentation describes how to do so.
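Conceptually, those tasks substitute the stage-scoped variable values into the config file at deploy time. A minimal shell sketch of the same idea, with an illustrative token and variable name:

# Replace a placeholder token in Web.config with the value of a release
# variable (CONNECTION_STRING) scoped to the current stage.
sed -i "s|__ConnectionString__|$CONNECTION_STRING|g" Web.config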

Sharing Agent Queue for Prod and Staging Environments in TFS Build/Release Definition

I am trying to set up build and release definitions in TFS 2015. I have set up multiple agent queues for different environments: Staging, Production, Load, UAT. I have a different physical agent for each of these environments, and each agent has permission to connect to its respective environment to deploy code.
My question is: how do I share agents across these environments? Is it possible to have one agent which has permission to all these environments and can deploy code to the IIS website? My website's name is also the same in each environment, e.g. abc.com (UAT), abc.com (PROD).
TFS version is 2015.
Fundamentally there's nothing stopping it, but you will need to look at a couple of things.
First, does the agent/VM have access to all the environments? Often environments are in different AD domains, so you may have an agent that is in (or can see) your UAT domain, but is unable to access the PROD domain. If that's fine, then secondly, you will need to make sure that the user the agent runs as has permissions too; the machine may be able to see both domains, but the agent could be running under an account like tfsagent@uat.domain, while your other agent runs under a tfsagent@prod.domain account.
If both the agent/VM and the agent user can see both/all domains, then you need to consider security (what's stopping a dev from changing a name, or the deploy process, and pushing something live with no oversight?).

Deploying Different Images for Staging and Production Environment using Kubernetes and Gitlab

I want to have two separate deployments for the staging and production branches. I am using a Kubernetes cluster in AWS, and I have integrated the cluster with GitLab. Right now I have set up only one environment, which works fine by setting the only parameter to staging in the .gitlab-ci.yml file. Now I want to set up the production environment, which will build from the master branch.
Will setting the only parameter to master work, or do I need to make additional changes in GitLab as well?
Also, will separate images be created for the two branches, or do I need to explicitly create two separate images for the master and staging branches? And if so, what are the steps?
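For the image question, one common pattern (a sketch, assuming GitLab's predefined CI_COMMIT_REF_NAME and CI_REGISTRY_IMAGE variables and a registry login earlier in the job) is to tag one image per branch in the job's script, so staging and master naturally produce separate images:

# Build and push an image tagged with the branch name.
IMAGE="$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME"
docker build -t "$IMAGE" .
docker push "$IMAGE"
# Point the matching Kubernetes deployment at the new image.
# "myapp" is a placeholder deployment/container name.
kubectl set image deployment/myapp myapp="$IMAGE"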

How do you manage multiple releases in multiple environments in continuous integration/delivery?

I am trying to wrap my head around this. Most CI/CD examples/projects have a single master branch that is always released, and use some variant of, e.g., git-flow to have a develop branch. Once tagged, the code goes to master.
Either way, master is always released to production.
But in the real world as I see it, there are human gates for releases to production and other environments. What mechanism do you use to manage the deployment of different versions?
For example:
v1.5 is the current production release
v1.6 has passed all tests and its artifacts are ready; it is tagged as valid, but the business has decided to deploy it only to staging, awaiting an opportune moment to deploy
v1.5 is deployed to a demo environment
v2.0 has also passed all tests, but is in UAT, subject to the customer being happy, as it is a major release
There could be many more such environments - production, staging, UAT, demo, demo2, etc.
What mechanism do you use to handle the tagging of a particular version for a particular environment, and the actual deployment thereof?
Although there are probably a few ways to do it, I use the Build Pipeline plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin) along with the Copy Artifact plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin).
With these, you can create individual jobs for each piece of your environment and link them all together.
So as in your example, the pipeline would look like:
Build -> Test and Deploy to UAT (2.0) -> Deploy to Staging (1.6) -> Demo (1.5) -> Prod (1.5)
Each piece represents a different build in Jenkins. The idea behind continuous integration is that you create the binaries once and carry them down the pipeline, changing only configuration pieces along the way. In a build job, the artifacts are created and then archived. In any job after that, the artifact is picked up from the upstream job, some work is done, and then it gets re-archived for the next downstream job. So the deploy-to-staging job would go to the Test and Deploy to UAT job to get its binary. The entire concept of Continuous Delivery boils down to the build pipeline: http://en.wikipedia.org/wiki/Continuous_delivery (and yes, I did just cite Wikipedia).
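To make the build-once-promote-everywhere idea concrete, here is a small shell sketch (the paths, hostnames, and checksum step are illustrative, not part of the plugins above):

# The same archived binary is promoted through each stage; only the
# per-environment configuration differs.
ARTIFACT="build/app-1.6.war"
sha256sum "$ARTIFACT" > "$ARTIFACT.sha256"   # record the binary's identity once

deploy() {
  sha256sum -c "$ARTIFACT.sha256"            # verify we ship the exact same build
  scp "$ARTIFACT" "deploy@$1.example.com:/opt/tomcat/webapps/app.war"
}

deploy staging
# later, once the business signs off:
deploy prod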
As for tagging individual binaries for specific environments, that is, by definition, not continuous integration. A binary is supposed to be created in a way that lets it easily be propagated from one environment to the next. So unfortunately, individual builds for specific environments can never be continuous delivery. You can use Jenkins as a CI server all you want, but if your process does not match, you will never achieve true continuous integration.
Branching, merging and check-ins always seem to be a touchy subject when it comes to continuous integration, so I won't go into it much. But a lot of people share the idea that: "If different members of the team are working on separate branches, then by definition, they are not participating in a continuous integration process." http://eugenedvorkin.com/continuous-integration-strategies-for-branching-and-merging/
EDIT
For flagging specific builds, it sounds like you're looking to make use of this feature: https://wiki.jenkins-ci.org/display/JENKINS/Fingerprint. It gets the job done effectively, giving you the entire life of any individual artifact. A slightly more complex solution would be Artifactory, which is essentially artifact source control.
I explained the concept of the deployment process above, and without information on your specific environment it is hard to go much further. But for me, for Java applications deployed to Tomcat containers, the Deploy plugin works great: https://wiki.jenkins-ci.org/display/JENKINS/Deploy+Plugin
You shouldn't have to worry about selecting which artifact to deploy. The pipeline should be set up to always deploy the latest artifact archived by its corresponding upstream job.
Maybe Docker can help you out with this issue. It is able to deploy images of projects to a specific environment, and if that environment has a Docker client or a Docker daemon, you are able to request specific information about that environment and the project (to be) deployed on it.
Jenkins can still play a huge part in your pipeline for the integration part, and you could let Docker do the delivery part.
Docker: https://www.docker.com
Docker plugin for jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin
Docker also has support for windows machines and .NET.
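On the delivery side, promoting a release then amounts to running the same image tag in whichever environment should receive it. A brief sketch, where the registry URL and container name are placeholders:

# Pull and (re)run the released image on the target host.
docker pull registry.example.com/myapp:v1.6
docker rm -f myapp 2>/dev/null || true      # remove the old container if present
docker run -d --name myapp -p 80:8080 registry.example.com/myapp:v1.6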
