Trigger a release pipeline from a target server - TFS

I'm kinda new to DevOps.
I want to create a basic release pipeline that, instead of running on one or several servers, would run individually on each developer's machine, and ONLY WHEN THEY CHOOSE TO RUN IT.
NOT scheduled, and NOT by one person who decides to run it for all developers at a certain time.
That means each developer would have an agent on their development machine.
So I figured each developer would have access to their own private area on TFS (Azure DevOps),
in which they would (ONLY!!) have access to their own release pipelines (several pipelines, because there are several environments).
Does that make sense? Is my need common? Are there other recommended approaches for this need?
Thank You
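For illustration, this is roughly what I imagine each developer running locally - a minimal sketch assuming the TFS/Azure DevOps Release REST API, where the collection URL, project, definition ID, PAT, and api-version are all placeholders that depend on the server version:

```python
# Minimal sketch: a developer queues *their own* release on demand.
# All values below are placeholders; the api-version depends on the TFS/Azure DevOps version.
import requests

COLLECTION_URL = "http://tfs.example.com:8080/tfs/DefaultCollection"  # placeholder
PROJECT = "MyProject"                                                 # placeholder
RELEASE_DEFINITION_ID = 42     # this developer's private release definition
PAT = "personal-access-token"  # placeholder; needs Release (read/write/execute) scope

resp = requests.post(
    f"{COLLECTION_URL}/{PROJECT}/_apis/release/releases?api-version=4.1-preview.6",
    json={"definitionId": RELEASE_DEFINITION_ID,
          "description": "Triggered manually from my dev machine"},
    auth=("", PAT),  # basic auth: empty user name plus the PAT
)
resp.raise_for_status()
print("Queued release:", resp.json()["name"])
```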

Related

How to organize deployment process of a product from devs to testers?

We're a small team of 4 developers and 2 testers, and I'm the team lead. Developers each do their tasks in a separate branch. Our stack is ASP.NET MVC, ASP.NET Core, Entity Framework 6, MSSQL, IIS, Windows Server. We also use Bitbucket and Jira to store code and manage issues.
For example, there is a task "add an about window". A developer creates a branch named "add-about-window" and puts all the code there. Once the task is done, I do a code review, and if all is good, I merge the branch into an accumulating branch, let's name it "main". As a next step, I then manually deploy the updated "main" branch to a test server with IIS and MSSQL installed. Once done, I notify testers to test the freshly uploaded app to make sure "add about window" is done correctly and works well. If testers find a bug, I have to revert the task branch merge from the "main" branch and tell the developer to fix the bug in the task's branch. Once the developer has fixed it, I merge the branch into "main" again and ask the testers to check again. In the end, the task branch gets deleted.
This is really inconvenient, time-consuming, and frustrating. I have heard about git flow (maybe this is kind of what we have now).
Ideally, I would like the process to work like this:
Each developer still does work in a separate branch.
Once a task is done and all the task code is in the task branch, I do a code review.
Once the code review is done and all found issues are fixed, I just click "deploy".
There is a Docker image which contains IIS, MSSQL, and Windows. It also contains some base version of the application we work on, fully tested and stable. Let's say it reflects the state at some date, like the start of the year.
The Docker image is taken and a new container starts.
This Docker container gets fully initialized, and then the code from a branch gets installed on the running container.
This container then has its own domain name like "proj-100.branches.ourcompany.com" ("proj-100" is the task's ID in Jira) which testers can visit and test.
This would definitely decrease the time I spend on deployment and also make the process more convenient and comfortable.
Can someone recommend some resources where I can learn about similar deployment models? Or maybe someone can share info on this. Any info will be much appreciated.
Regardless of your stack, and before talking about solutions: what you describe is the basic use case of any CI/CD process. All the exhausting manual steps you described can be done with any CI tool.
Now, let's consider what you already have and talk about the steps toward your desired solution - you're using Bitbucket, which already gives you at least steps 1 and 2 - merging only approved PRs into master/main.
Step 3 is where the CI automation process starts - you define a webhook on certain actions in the Bitbucket repo, which triggers a CI job/pipeline (it can be a Jenkins server, GitLab CI, or one of many other CI solutions). This way, you won't even need a "deploy" button, since the merge action can trigger the job, which can automatically run unit tests, integration tests (if you define them), build artifacts/Docker images, and finally deploy.
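As a rough illustration of this step, assuming Bitbucket Cloud's push payload and Jenkins' remote build trigger (the job name, credentials, and URLs are placeholder assumptions, and depending on your Jenkins security settings you may also need a CSRF crumb):

```python
# Illustrative sketch of a webhook receiver: Bitbucket POSTs here on push,
# and we forward the event to Jenkins' remote build trigger.
from flask import Flask, request
import requests

app = Flask(__name__)
JENKINS_URL = "http://jenkins.example.com:8080"                   # placeholder
JOB, USER, API_TOKEN = "build-and-deploy", "ci-bot", "api-token"  # placeholders

@app.route("/webhook", methods=["POST"])
def on_push():
    payload = request.get_json(force=True)
    # Bitbucket Cloud's push payload lists the changed refs; react only to main.
    for change in payload.get("push", {}).get("changes", []):
        if change.get("new", {}).get("name") == "main":
            requests.post(f"{JENKINS_URL}/job/{JOB}/build",
                          auth=(USER, API_TOKEN)).raise_for_status()
    return "", 204

if __name__ == "__main__":
    app.run(port=8000)
```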
Step 4 needs some basic understanding of Docker container design - a Docker image is not a VM. It has its use cases and relevant scenarios, and, more importantly, advised architecture guidelines to follow.
To make it short, I'll only mention the principle of separation - each service should be in a separate container. This allows scaling up, easier debugging, and much more. Which means: what you need is not one Docker image that contains your entire system, but an orchestration of containers, each containing an independent software unit with a clear responsibility. And here Kubernetes comes into play.
Back to the CI job - after the PR merge, the job starts, running the pre-defined unit tests, building the container image, and uploading it to your registry.
Moving to CD - depending on your process, after the updated and tested Docker images are in your registry (which could be Artifactory, a GitLab registry, a Docker registry...), the CD job can take any image it needs and deploy it to your Kubernetes cluster. And that's it! The process is done.
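Condensed into commands, the CI and CD steps from the last two paragraphs might boil down to something like the sketch below (the registry, image, and deployment names are placeholders, and I'm assuming the container in the deployment is named after the app):

```python
# Condensed sketch of the CI/CD job described above; all names are placeholders.
import subprocess

REGISTRY, APP, TAG = "registry.example.com", "orders-service", "1.0.42"
image = f"{REGISTRY}/{APP}:{TAG}"

def run(*cmd):
    subprocess.run(cmd, check=True)  # fail the job if any step fails

run("dotnet", "test")                     # CI: unit tests gate the build
run("docker", "build", "-t", image, ".")  # CI: build the service image
run("docker", "push", image)              # CI: upload it to the registry
# CD: roll the new image out to the cluster (one container per service).
run("kubectl", "set", "image", f"deployment/{APP}", f"{APP}={image}")
```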
A word of advice - if you don't have a professional DevOps team or a good understanding of Docker, CI/CD processes, and Kubernetes, and if your dev team is small (and unfortunately it seems so), you may want to consider hiring a DevOps company to build the DevOps/CI-CD infrastructure for you, preferably as a completely managed DevOps solution, and then do a handover. Everything I wrote is just a guideline and the basic points, to give you the big picture. Good luck!
All the other answers here are great, but I would still like to add my piece of advice.
Recently I was working on a product with a team of three. It was a Node.js project. If you are on AWS, you can use AWS CodePipeline. It will detect pushes to a specific GitHub branch, and the changes will get deployed to the server. The pipeline service has a build stage too. You can also configure Slack notifications.
But you should have at least two environments, production and dev, so you can check on dev that the deployment works properly.
AWS also has services like AWS CodeCommit and AWS CodeDeploy.
This is just a basic solution; you don't actually need fancy software to set up CI/CD.
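Kicking off (or re-running) the pipeline can also be scripted; here is a minimal sketch with boto3, assuming a pipeline named "my-app-pipeline" and credentials in the usual AWS config:

```python
# Minimal sketch: manually start an AWS CodePipeline run with boto3.
# The pipeline name is a placeholder.
import boto3

codepipeline = boto3.client("codepipeline")
execution = codepipeline.start_pipeline_execution(name="my-app-pipeline")
print("Started execution:", execution["pipelineExecutionId"])
```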
This kind of setup is usually supported by a CI/CD tool coupled with Kubernetes.
Either an on-premise one, like Jenkins+Kubernetes with the Jenkins Kubernetes plugin, which runs dynamic agents in a Kubernetes cluster.
You can see an example in "How to Setup Jenkins Build Agents on Kubernetes Pods" by Bibin Wilson.
Or a cloud one, like a Bitbucket pipeline deploying a containerized application to Kubernetes.
In both instances, the idea remains the same: create an ephemeral execution environment (a Docker container with the right components in it) for each pushed branch, in order to execute tests.
That way, said tests can take place before any merge between a feature branch and an integration branch like main.
If the tests pass, Jenkins itself could trigger an automatic merge (assuming the feature branch was first rebased on top of the target branch, main in your case).
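The merge itself could also be done over Bitbucket Cloud's REST API once the job reports green; a hedged sketch, with the workspace, repository, pull request id, and credentials as placeholders:

```python
# Sketch: have the CI job merge an approved pull request once its tests pass.
# Uses Bitbucket Cloud's 2.0 REST API; all identifiers below are placeholders.
import requests

WORKSPACE, REPO, PR_ID = "acme", "webshop", 123
AUTH = ("ci-bot", "app-password")  # Bitbucket app password, placeholder

url = (f"https://api.bitbucket.org/2.0/repositories/"
       f"{WORKSPACE}/{REPO}/pullrequests/{PR_ID}/merge")
resp = requests.post(url, json={"merge_strategy": "merge_commit"}, auth=AUTH)
resp.raise_for_status()
print("Pull request state:", resp.json()["state"])  # expect MERGED
```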
We have a similar process in our team.
We use GitLab CI.
Since there is some infrastructure outside of Docker (nginx with DNS for the test stands),
we just create dev1, dev2, ... stands (5 stands for a team of 10 developers and more than 6 microservices). For each devX stand and each microservice, we have a "deploy to devX" button in our CI/CD. We just reserve devX in Slack for feature Y for the duration of testing after the deploy. When tests are done and bugs are fixed, we merge to the main branch, and the next feature branch can be deployed and tested on the devX stand.
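The "deploy to devX" buttons are manual jobs in our pipelines, but the same deploy can also be started through GitLab's pipeline trigger API, passing the stand as a variable - a rough sketch (the project id, trigger token, and STAND variable name are placeholders):

```python
# Sketch: trigger a GitLab CI pipeline for a feature branch, aimed at one devX stand.
# Project id, trigger token, and the STAND variable are placeholder assumptions.
import requests

GITLAB = "https://gitlab.example.com"
PROJECT_ID, TRIGGER_TOKEN = 314, "trigger-token"

resp = requests.post(
    f"{GITLAB}/api/v4/projects/{PROJECT_ID}/trigger/pipeline",
    data={"token": TRIGGER_TOKEN,
          "ref": "feature-y",           # the feature branch under test
          "variables[STAND]": "dev2"},  # which reserved stand to deploy to
)
resp.raise_for_status()
print("Pipeline started:", resp.json()["web_url"])
```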
As a next step, I then manually deploy the updated "main" branch to a test server with IIS and MSSQL installed.
Once done, I notify testers to test the freshly uploaded app to make sure "add about window" is done correctly and works well.
If testers find a bug, I have to revert the task branch merge from the "main" branch and tell the developer to fix the bug in the task's branch.
If there are multiple environments, then devs could deploy to them themselves. Even a single "dev" environment they can deploy to would greatly help. The devs should be able to deploy themselves and notify the testers without going through you.
That the deploy is "manual" is suspicious. How manual? Ideally it should be just a few button clicks. Sometimes you can even have it so that pushing to a branch does a deploy (through webhooks).
You should be able to deploy from branches besides main. What that means or looks like can vary a lot, but the point is that if you're forcing everything through main and having to revert when it doesn't work, you're creating a lot of unneeded work. Ideally there should be some way to test locally. If there really is no way, then you need to at least allow deploying from any branch (or something like force pushing to a branch called "dev").
From another angle: unless the application gets horribly broken, you don't necessarily need to roll back changes unless a release is coming soon. You can just have the bug fixed in another pull request.
All in all, the main problem here sounds like there's only a single environment for testing, the process to deploy to it is far too manual, and the devs have no way to deploy to it themselves. This sort of thing is a massive bottleneck. Having a burdensome process to even begin testing things takes a big toll on everyone's morale - which can be worse than the loss in velocity. You don't necessarily need every dev to be able to spin up as many environments as they want at the push of a button, but devs do need some autonomy to be able to test.
Having the application run in Docker containers can greatly help with running it locally, as well as making the deployment process simpler. I've tried to stay away from specific product suggestions because this sounds like more of a process problem.

Best practice for moving fastlane deployment of whitelabel apps off local machine and to a server/service

We create iOS and Android apps that are white-labeled. They all use a single code base (one for iOS and one for Android). Whenever we need to make changes to all of our apps (> 100 live in the App Store), we rely on Fastlane. We have a "bulk" command that submits each new build to Apple, first changing out config variables and a few files so each app is unique.
This has worked well for us... but... it's getting really slow. We'd love to be able to take advantage of some of the continuous integration services out there. It seems like they weren't necessarily made for this use case, but it might still work?
Ideally instead of running bulk on a local machine we could spin up 100 instances on something like CircleCI and they all run side by side, using our fastlane script to build, submit, etc.
We started by looking into CircleCI. The problem we are running into is that they don't allow injecting variables into a job (https://ideas.circleci.com/ideas/CCI-I-690).
Is there a better service for this goal? Is there a tool that was built to achieve this? Struggling to find an alternative to hacking together a bunch of smaller tools.
I think you already identified your first step: you will have to split your fastlane (and other tooling) configuration so it is possible to build each app in isolation.
Then you can trigger a job for each app on a CI service, for example Travis CI or Azure Pipelines (both have a simple API you can use to start jobs and pass them parameters that will be available to your job), which builds and releases the app.
All the other things (e.g. one big build vs. many small build steps, etc.) are just implementation details and will depend on the individual service or tools you choose.
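For example, with Azure Pipelines the build-queueing API lets you start one job per app and hand each one its parameters; a sketch under assumed names (the organization, project, definition id, PAT, and the APP_ID parameter are all placeholders):

```python
# Sketch: queue one Azure Pipelines build per white-label app, side by side.
# Organization, project, definition id, PAT, and APP_ID are placeholders.
import json
import requests

ORG_URL = "https://dev.azure.com/acme"
PROJECT, DEFINITION_ID, PAT = "apps", 17, "personal-access-token"
APP_IDS = ["client-001", "client-002", "client-003"]  # ...and so on, 100+

for app_id in APP_IDS:
    resp = requests.post(
        f"{ORG_URL}/{PROJECT}/_apis/build/builds?api-version=5.1",
        json={"definition": {"id": DEFINITION_ID},
              # Parameters surface as variables inside the build job,
              # where the fastlane script can pick them up.
              "parameters": json.dumps({"APP_ID": app_id})},
        auth=("", PAT),
    )
    resp.raise_for_status()
    print("Queued build", resp.json()["id"], "for", app_id)
```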

Parallel releases after builds in TFS

I am running into problems getting my build servers to run releases in parallel.
I just recently created my second build and test server. Both servers correctly run builds in parallel, and each of my builds triggers a release upon successful completion. However, only one server at a time will run a release.
I have verified that both servers are configured correctly, because if I take down one server, the other one starts running the releases correctly. However, when both are up together, only one of them will run releases.
Is there a setting in TFS that does not allow multiple separate release definitions to run in parallel on different machines?
A TFS concurrent pipeline gives you the ability to run a single release at a time in a team project collection.
You can keep hundreds or even thousands of release definitions in your collection. But, to run more than one release at a time, you need additional concurrent pipelines.
One free concurrent pipeline is included with every collection in a Team Foundation server. Every Visual Studio Enterprise subscriber in a Team Foundation server contributes one additional concurrent pipeline. You can buy additional private pipelines from the Visual Studio Marketplace.
Purchase additional concurrent pipelines
If you need to run more concurrent releases, you can buy additional private pipelines from the Visual Studio marketplace. Since there is no way to directly purchase concurrent pipelines from Marketplace for a TFS instance at present, you must first buy concurrent pipelines for a VSTS account. After you buy the private pipelines for a VSTS account, you enter the number of purchased concurrent pipelines manually on the resource limits page described below.
http://{your_server}:8080/tfs/DefaultCollection/_admin/_buildQueue?_a=resourceLimits
Note: the above applies to TFS 2017/2018.
If you are using TFS 2015, take a look at this question: Do I need concurrent pipelines to use release management in versions before TFS 2017?
For more details, go through the official documentation on MSDN: Concurrent release pipelines in Team Foundation Server
The system seems to still be based on the honor code.
Through the link PatrickLu-MSFT gave, I was able to change the number to anything I wanted, even though I had not actually purchased any additional licenses.
After resetting the additional licenses back to zero, I also found this link:
http://{your_server}:8080/tfs/_admin/_licenses
Again, it seemed to be based on the honor code here as well, because I was able to add users with subscriptions below Enterprise to the VS Enterprise group.
However, when all my exploring was done, I was able to use that final link to enter the people who do have Enterprise subscriptions, and that changed my overall license count so that I was able to run releases in parallel.

How to share config set by developers for multiple jobs

Our team develops a micro-monolith application, which means our application is split into multiple modules, as in the microservice pattern, but they run on Tomcats, not containers.
We used to have a linear workflow:
Environments: dev -> qa -> production
where after each commit to dev, Jenkins automatically built, ran tests, and deployed the application to the dev environment.
But lately we see that sometimes there is a need to develop multiple mutually exclusive features simultaneously, and we would like to have multiple dev environments.
Is there a way to allow developers to easily set and see shared properties in Jenkins? I.e. pairs of feature_branch_name=environment_name:
Feature_branch_123=alfa
Feature_branch_124=bravo
So at the beginning of working on a feature, the developer sets this property, and later, when they push changes to the repository, Jenkins can automatically build and deploy the given commit to the previously set environment. Sharing is required because we have one build pipeline per service, and we want to be able to share this configuration across the whole application.
In similar cases of sharing config between steps, people recommend the EnvInject Plugin, but I don't think it would be a good solution here: there would be no easy way to set the parameters. The best I could think of is a parameterized job, but then people have no way to see which parameters are currently set.
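The closest alternative I could come up with is keeping the pairs in a small file in a shared config repository, so anyone can set and see them with a normal commit, and every pipeline reads the same file. A rough sketch of the lookup step each job could run (the file name and format are just my assumption):

```python
# Sketch: resolve which dev environment a feature branch deploys to,
# based on a shared "environments.properties" file committed to a config repo,
# with lines like: Feature_branch_123=alfa
import os
import sys

def load_mapping(path="environments.properties"):
    mapping = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                branch, env = line.split("=", 1)
                mapping[branch.strip().lower()] = env.strip()
    return mapping

# GIT_BRANCH is set by the Jenkins Git plugin (often prefixed with "origin/").
branch = os.environ.get("GIT_BRANCH", "").split("/")[-1]
env = load_mapping().get(branch.lower())
if env is None:
    sys.exit(f"No environment reserved for branch '{branch}'")
print(f"Deploying {branch} to {env}")
```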

What builds are using a specific build agent

My company has a bunch of different builds and half a dozen different build agents, and I need to update some software for one of the builds. I don't want to break any other builds that are using said agent. I would like to get a list of all builds that use said agent so that I can validate them after my software updates on the agent. I would prefer not to individually review each build, as there are dozens, if not hundreds of them. Is there some way to get this information quickly? Either from the agent, or from TFS somehow?
By default, builds are tied to controllers, not agents, and could therefore run on any of the agents bound to the controller. Unless, as suggested by Daniel Mann, you have your builds tagged to specific agents, you won't be able to get that level of detail. Without tagging, your report would be limited to a list of machines that each build could possibly run on.
What I do in this situation is keep a separate, private build controller for testing build software changes. Upgrade the software on it, then queue test builds for the affected definitions, changing the controller to your test controller in the Queue Build parameters. Once you've verified that your changes won't break the builds, you can schedule downtime to upgrade the production agent machines.
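If your definitions are the newer (non-XAML) build definitions, you can also pull a quick inventory of which definition uses which agent queue over the TFS REST API; a rough sketch, with the collection URL, project, and PAT as placeholders (the exact api-version depends on your TFS release):

```python
# Sketch: list which agent queue each vNext build definition uses,
# to narrow down the builds that could run on a given agent's pool.
# Collection URL, project, PAT, and api-version are placeholders.
import requests

COLLECTION = "http://tfs.example.com:8080/tfs/DefaultCollection"
PROJECT, PAT = "MyProject", "personal-access-token"

resp = requests.get(
    f"{COLLECTION}/{PROJECT}/_apis/build/definitions?api-version=3.2",
    auth=("", PAT),
)
resp.raise_for_status()
for d in resp.json()["value"]:
    queue = (d.get("queue") or {}).get("name", "<unknown>")
    print(f"{d['name']}: queue={queue}")
```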
