GitHub Actions: Get Workflow Status Regardless of Environment Deployment

I am using GitHub Actions' environments feature, which lets us integrate continuous deployment into our project. After setting up "staging" and "production" environments bound to the same branches, I'd like deployment to be optional, since I may not want to deploy just yet, or sometimes not at all (e.g. while waiting for a release commit).
In the event I don't wish to deploy, the workflow is left hanging with the yellow in-progress icon, whereas I'd like it to be green because the preceding CI tests passed.
In the event I only want to deploy to the "staging" environment, I get a yellow icon regardless of whether the deployment succeeded, because the "production" environment was never triggered. If I were to reject the "production" deployment, I'd de facto have a failed pipeline.
How can pipeline success be defined solely by the CI part of it? Would it be possible to retroactively update the pipeline status when an environment is triggered?
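For illustration, here is a minimal sketch of one way to keep the run green when deployment is skipped (the job names, the "[deploy]" commit-message convention, and the deploy script are assumptions, not part of the original setup): CI and deployment live in separate jobs, and each deploy job is guarded by an if: condition, so a skipped deployment neither leaves the workflow waiting nor fails it.

```yaml
name: ci-cd
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test                # placeholder for the real CI steps

  deploy-staging:
    needs: test
    # only deploy when explicitly asked for, e.g. via a release commit message
    if: "contains(github.event.head_commit.message, '[deploy]')"
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - run: ./deploy.sh staging      # hypothetical deploy script

  deploy-production:
    needs: deploy-staging
    if: "contains(github.event.head_commit.message, '[deploy]')"
    runs-on: ubuntu-latest
    environment: production
    steps:
      - run: ./deploy.sh production   # hypothetical deploy script
```

With this layout, only the test job needs to act as the required status check; skipped deploy jobs count as neither pending nor failed, so the commit stays green whether or not a deployment was triggered.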

Related

How to build different configs in Azure DevOps release pipeline?

I currently have an Azure DevOps release pipeline containing Test, Acceptance and Production stages, which are triggered in that order. The Test stage is triggered when there is a new build available to deploy.
The problem I have with this is that all stages currently deploy the exact same artifact. But this is wrong, since they are deploying to different environments that each need their own version of the Web.config.
How do I change my setup so that every environment gets the right package? Should I change my build setup so that it builds multiple different configs, or should I have separate builds for each environment? And how do I select which artifact each stage of the release pipeline should deploy?
This is what my release pipeline looks like now:
Each environment can have its own variables defined. Simply click on the variables tab and make sure you scope any of those variables to the proper environment.
Then, using the Azure App Service Deploy task (if targeting Azure) or the IIS Web App Deploy task, you can update your configuration files with the values of those variables; the task documentation describes how to do so.
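As a rough illustration (the variable name ApiBaseUrl is made up for this sketch, and it assumes the task's XML variable substitution option is enabled), a release variable scoped to each environment overwrites the matching appSettings entry at deploy time, so the same build artifact can be deployed to every stage:

```xml
<!-- Web.config inside the single build artifact -->
<configuration>
  <appSettings>
    <!-- value is replaced at deploy time by the environment-scoped
         release variable "ApiBaseUrl" (hypothetical name) -->
    <add key="ApiBaseUrl" value="https://localhost/api" />
  </appSettings>
</configuration>
```

This keeps one build for all environments, with only the environment-scoped variables differing per stage, rather than producing a separate build per configuration.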

How to achieve Roll back using Jenkins

I know this forum is not meant to provide strategies.
Using Jenkins, I have set up CI and CD for my Dev, QA and Staging environments. I am stuck on a rollback strategy for all my environments.
1- What happens if my build fails in Dev?
2- What happens if my build fails in QA but passed in Dev?
3- What happens if my build fails in Staging but passed in Dev and QA?
How should I roll back and get things done, considering the DB is not in place? I have created a sample workflow but am not sure it's the right process.
Generally you can achieve this in 2 ways:
Set up some sort of release management tool that tracks every execution of your pipeline and snapshots the variables, artifacts, etc. that were used on that exact execution; then you can simply run an earlier release of it (check tools like Octopus Deploy).
If you are using a branching strategy with tags, you can parameterize your jobs, passing in the tag you want to build, and build the earlier tag if something fails. Also check the rebuild option for older job executions.
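A minimal sketch of the second approach (the parameter name, repository URL, and scripts below are assumptions for illustration): the job checks out whatever tag it is given, so rolling back simply means re-running it with the previous tag.

```groovy
// Jenkinsfile: parameterized job that can build and deploy any tag
pipeline {
    agent any
    parameters {
        // pass an older tag here to roll back
        string(name: 'RELEASE_TAG', defaultValue: 'v1.0.0',
               description: 'Git tag to build and deploy')
    }
    stages {
        stage('Checkout tag') {
            steps {
                checkout([$class: 'GitSCM',
                          branches: [[name: "refs/tags/${params.RELEASE_TAG}"]],
                          userRemoteConfigs: [[url: 'https://example.com/repo.git']]])
            }
        }
        stage('Build and deploy') {
            steps {
                sh "./build.sh && ./deploy.sh ${params.RELEASE_TAG}"   // hypothetical scripts
            }
        }
    }
}
```

The same pattern applies to Dev, QA and Staging: if a build fails in one environment, re-run that environment's deploy job with the last tag that passed there.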

Run script before removing job in Jenkins Pipelines

I'm setting up a development environment where I have Jenkins as the CI server (using pipelines), and the last build step in the Jenkinsfile is a deployment to staging. The idea is to have a staging environment for each branch that is pushed.
Whenever someone deletes a branch (sometimes after merging), Jenkins automatically removes its respective job.
I wonder if there is a way to run a custom script before the automatic job removal; then I would be able to connect to the staging server and stop or remove all the services that are running for the job that is about to be deleted.
The multibranch-action-triggers-plugin might be worth a look.
This plugin enables building/triggering other jobs when a Pipeline job is created or deleted, or when a Run (also known as Build) is deleted by a Multi Branch Pipeline Job.
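As a rough sketch of the cleanup side (how the deleted branch's name reaches the cleanup job depends on how the trigger is configured; the parameter name, staging host and docker-compose layout below are assumptions), the triggered job could tear down the per-branch services like this:

```groovy
// Cleanup job triggered when a branch's pipeline job is deleted
pipeline {
    agent any
    parameters {
        // assumed: the triggering configuration supplies the deleted branch's name
        string(name: 'BRANCH_NAME', description: 'Branch whose staging environment should be removed')
    }
    stages {
        stage('Tear down staging') {
            steps {
                // stop and remove the services that were started for this branch
                sh "ssh deploy@staging.example.com docker-compose -p ${params.BRANCH_NAME} down"
            }
        }
    }
}
```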

Jenkins delivery pipeline plugin, how to skip manual trigger

I have a setup with the Jenkins Build/Delivery Pipeline plugin where the numbered jobs do the following:
1) retrieves code,
2) builds
3) runs unit tests
4) deploys to system test environment
5) deploys to UAT
6) deploys to Production
The deployments are manual triggers. Is it possible to somehow skip a manual trigger stage? Say, I would like to skip deployment to the system test environment and deploy straight to UAT. I could align jobs 4-6 vertically on the same level so that any of them can be built after 3, but it would still be nice to have these as a "chain". Any thoughts?
It is fully possible to have the deployments happening automatically. In some cases you might want certain environments (e.g. dev) to be deployed with the latest version on every successful commit, while other environments (e.g. UAT, prod) might need to be manually triggered. This is possible with the current version of the Delivery Pipeline Plugin.
It's fully possible to make deployments happen simultaneously to different environments, but I think it makes more sense to start by deploying to one environment, execute some smoke tests, etc., making sure a certain set of assertions pass before going to the next stage. This avoids unnecessary work being executed and keeps the feedback loop as quick as possible.
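If the chain were expressed as a single declarative pipeline rather than chained freestyle jobs (a different setup from the one in the question, sketched here only to illustrate the idea; the stage names and deploy script are assumptions), a manual gate that can be skipped could look like this:

```groovy
// Sketch: one declarative pipeline where each deploy stage is a manual gate
// and the system test deployment can be skipped.
pipeline {
    agent any
    stages {
        stage('Build and unit test') {
            steps { sh './build.sh && ./run-unit-tests.sh' }   // placeholder commands
        }
        stage('Deploy to system test') {
            steps {
                script {
                    // manual gate; choosing "Skip" moves straight on to UAT
                    def action = input(message: 'Deploy to system test?',
                                       parameters: [choice(name: 'ACTION',
                                                           choices: ['Deploy', 'Skip'],
                                                           description: 'Deploy or skip this stage')])
                    if (action == 'Deploy') {
                        sh './deploy.sh systest'               // hypothetical deploy script
                    }
                }
            }
        }
        stage('Deploy to UAT') {
            steps {
                input message: 'Deploy to UAT?'
                sh './deploy.sh uat'
            }
        }
        stage('Deploy to production') {
            steps {
                input message: 'Deploy to production?'
                sh './deploy.sh prod'
            }
        }
    }
}
```

Choosing "Skip" at the system test gate lets the run continue straight to the UAT stage while keeping the stages chained in one view.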

Jenkins Deployment to Staging

I'm trying to find a way for Jenkins to deploy to my staging server on Engine Yard when all tests have passed. Are there any plugins for this post-build action in Jenkins?
You can also consider creating a build pipeline: https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin
One job to build & test and another for deployment (or maybe more: deploy to staging automatically if tests pass, then a third step, deployment to the next environment, triggered manually).
-Radim
engineyard-jenkins looks like it will do the trick.
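If you'd rather script it than rely on a plugin, the deployment can also be driven by the Engine Yard CLI after the tests pass (a hedged sketch; the branch and environment names are assumptions, and it presumes the engineyard gem is installed and authorised on the Jenkins node):

```groovy
// Sketch: run the test suite, then push the tested commit to Engine Yard staging
pipeline {
    agent any
    stages {
        stage('Test') {
            steps { sh 'bundle exec rake test' }      // placeholder test command
        }
        stage('Deploy to staging') {
            when { branch 'main' }                    // assumed branch name
            steps {
                // "ey deploy" comes from the engineyard gem; names are assumed
                sh 'ey deploy --environment=staging --ref=$GIT_COMMIT'
            }
        }
    }
}
```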
