TFS 2015: Deploy website to multiple machines with a load balancer

I'm using TFS 2015 to build and deploy my websites.
I have multiple websites and I need to deploy them to multiple machines that sit behind an NLB.
So the steps are:
1 - Stop NLB on machine 1
2 - Deploy files
3 - Start NLB on machine 1
4 - Repeat for all machines.
Is there a way of doing this without having to configure these steps for each machine?
Is it possible to have a machine group and apply the steps to each machine in it?
Thanks

You need to use a custom task called Tokenizer in the release workflow. It tokenizes the variables in web.config, which can then be transformed. Tokenizer needs the initial values of the custom variables in a specific format.
To install the Tokenizer you first need node.js, with the npm package manager, installed on your machine. Follow this process to install and use Tokenizer:
1 - Download and install node.js on your machine if it is not present. It also installs the npm package manager.
2 - Download Tokenizer from https://github.com/openalm/VSOtasks. It comes as a .zip file. Unzip it.
3 - Open a command prompt and change directory to the folder "Tokenizer\x.x.x" in the unzipped folder.
4 - From that folder, run the command npm install -g tfx-cli to install the command-line tool that can upload the Tokenizer task.
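Putting those steps together, the upload might look like this (a sketch; the collection URL is a placeholder, and --task-path must point at the folder containing the task's task.json):
# Install the command-line tool that talks to TFS/VSO
npm install -g tfx-cli
# Authenticate against your collection (a personal access token works here)
tfx login --service-url http://your-tfs:8080/tfs/DefaultCollection
# Upload the Tokenizer task so it appears in the release task catalog
tfx build tasks upload --task-path .\Tokenizer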
After uploading it you will be able to write environment-specific configuration files when you are deploying to different environments. For more detailed steps and tutorials, take a look at this blog post from MSDN: Deploy to multiple environments with appropriate configurations
Update
For "rolling deploy", this can't be achieved for now. No this option and task in web base release management. You may have to apply the steps to each machine. If you really need this feature, you can add it in uservoice of VSTS, TFS admin and PM will kindly review your suggestion.

Related

BentoML: how to build a model without importing the services.py file?

Is it possible to run bentoml build without importing the services.py file during the process?
I'm trying to put the bento build and containerize steps on our CI/CD server. Our model depends on some installed OS packages as well as some Python packages. I thought I could run bentoml build to package the model code and binaries that are present, and leave the dependency specification to the containerize step.
To my surprise, the bentoml build process tried to import the service file during packaging, and the build failed since I didn't have the dependencies installed on my CI/CD machine.
Can I prevent this import while building/packaging the model? Maybe I should skip bentoml containerize, create my bento container by hand, and just execute bentoml serve inside it.
I feel that the need to install the dependencies by hand duplicates the effort of specifying them in bentofile.yaml and hurts the reproducibility of my environment.
This is not currently possible. The community is working on an environment management feature, such that an environment with the necessary dependencies will be automatically created during build.

Best practice for running Symfony 5 project with Docker and Docker-Swarm

I have an existing Symfony 5 project with a MySQL database and an nginx web server. I want to dockerize this project, but on the web I have found differing opinions on how to do that.
My plan is to write a multi-stage Dockerfile with at least a dev and a prod stage and have it built for Docker Swarm. In my opinion it is useful to install the complete code during the build and to have multiple composer.json files (one per stage). On the web I have found opinions recommending not to install the app fresh on every build but to copy the vendor and var folders into the container. Another opinion was to start the installation after the container build has finished, but then the service is not ready as soon as the app is deployed.
What do you think is the best practice here?
Build exactly the same image for all environments
Do not build 2 different images for prod and dev. One of the main Docker benefits is that you can provide exactly the same environment for production and dev.
You should control your environment with ENV vars. For example, you can enable Xdebug for dev and disable it for prod.
Composer has an option to skip dev packages (composer install --no-dev) for production. You should use this feature.
If you decide to install some packages only for dev, try to use the same Dockerfile for both environments. Do not use Dockerfile.prod and Dockerfile.dev; it will introduce a mess in the future.
Multi-stage build
You can do a multi-stage build, described in the official Docker documentation, if your build environment requires many more dependencies than your runtime.
An example is compiling a program: during compilation you need a lot of libraries, but you produce a single binary. So your runtime does not need all the dev libraries.
The first stage can be the build; in the second stage you just copy the binary, and that's it.
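To make that concrete for this project, a minimal multi-stage Dockerfile sketch for a Symfony app; the image tags, paths, and APP_ENV default are assumptions, not a production-ready file:
# --- stage 1: install PHP dependencies with Composer ---
FROM composer:2 AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
# Production dependencies only; dev packages stay out of the final image
RUN composer install --no-dev --optimize-autoloader --no-scripts

# --- stage 2: runtime image; Composer itself is left behind ---
FROM php:8.1-fpm-alpine
WORKDIR /app
COPY . .
# Bring in the Composer-built dependencies from the first stage
COPY --from=vendor /app/vendor ./vendor
# Behaviour (debug, Xdebug, etc.) is toggled at runtime via ENV, not baked into a second image
ENV APP_ENV=prod
CMD ["php-fpm"]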
Build all packages into the Docker image
You should build your application while the Docker image is building. All libraries and packages should be copied into the image; you should not install them when the application is starting. Reasons:
The application starts faster when everything is already installed.
Some libraries may change or be removed in the future; you would be in trouble and would probably spend a lot of time debugging.
Implement health checks
You should implement a health check. Applications require external dependencies, like passwords, API keys, and some non-sensitive data. Usually we inject these with environment variables.
You should check that all required variables are passed and well formed before your application starts. You can implement a health check, or you can do the checking in the entrypoint.
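A minimal entrypoint sketch in plain sh; the variable names are placeholders for your own required settings:
#!/bin/sh
# entrypoint.sh - fail fast if required configuration is missing
set -eu
for var in DATABASE_URL APP_SECRET; do
    eval "value=\${$var:-}"          # indirect lookup, POSIX-sh compatible
    if [ -z "$value" ]; then
        echo "Missing required environment variable: $var" >&2
        exit 1
    fi
done
exec "$@"  # hand off to the container's main command (e.g. php-fpm)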
Test your solution before it is released
You should implement a mechanism for testing your images, for example in CI:
Run your unit tests before the image is built.
Build the Docker image.
Start the new application image with dummy data; if you require a PostgreSQL DB, you can start it in another container.
Run the integration tests.
Publish the new version of the image only if all tests pass.
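A hedged shell sketch of such a pipeline; image names, the registry, and the test commands are placeholders:
#!/bin/sh
set -e  # abort the pipeline on the first failure
# 1. Unit tests before the image is built
vendor/bin/phpunit --testsuite unit
# 2. Build the candidate image
docker build -t myapp:candidate .
# 3. Start a throwaway DB and the candidate image with dummy data
docker network create ci-net
docker run -d --rm --name db  --network ci-net -e POSTGRES_PASSWORD=dummy postgres:15
docker run -d --rm --name app --network ci-net \
    -e DATABASE_URL="postgresql://postgres:dummy@db:5432/app" myapp:candidate
# 4. Integration tests against the running container (e.g. via a published port)
vendor/bin/phpunit --testsuite integration
# 5. Publish only if everything above succeeded
docker tag myapp:candidate registry.example.com/myapp:latest
docker push registry.example.com/myapp:latest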

Deploy apps from release server

I don't like it when it comes time to release my projects to the production server. Maybe I just don't have enough experience; nobody taught me how to do this the right way.
For now I have several repos with Scala (on top of Spray). I have everything needed to build and run these projects on my local machine (of course, since I develop them). So I installed Jenkins on my production server in order to sync from git, build, and run. It works for now, but I don't like it, because I need to install Jenkins on every machine where I want my projects to run. What if I want to show my project to a friend in a cafe?
So I've come up with an idea: what if I run tests before building the app, make a portable build (e.g. with sbt-native-packager), and save it on a remote "release server"? That server just keeps these ready-to-launch apps.
Then I go to the production server and run a bash script that downloads the executables from the release server and runs my project on that machine.
In the future I want to:
download and run projects inside Docker containers;
keep the ready-to-serve static files for the frontend, and run a Docker container with nginx and a linked volume holding those static files.
I've heard about Nexus (http://www.sonatype.org/nexus/), which artists use to save their songs, images, and so on. I believe there should be open-source projects that implement an idea like mine.
Any help is appreciated!
A common anti-pattern, in my opinion, is to build the software every time you perform a deployment. You are best advised to separate the process of building from the act of deployment by introducing a binary repository manager (you've mentioned one such example, Nexus).
Best Practice - Using a Repository Manager
Binary repository manager
How can I automatically deploy a war from Nexus to Tomcat?
Only successfully tested builds get pushed to the repository, so you can treat each successful build as a mini-release. A by-product of this is that your production server does not have to have all the build software pre-installed (like Jenkins, Ant, Maven, etc).
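The deployment side then shrinks to fetching and launching a pre-built artifact. A minimal bash sketch, assuming a generic HTTP-reachable repository layout; the URL, coordinates, and version are placeholders:
#!/bin/sh
set -e
VERSION=1.0.3
REPO=https://nexus.example.com/repository/releases
# Pull the pre-built, pre-tested artifact; nothing is compiled on this machine
curl -fSL -o myapp.tgz "$REPO/com/example/myapp/$VERSION/myapp-$VERSION.tgz"
tar -xzf myapp.tgz
# sbt-native-packager layouts ship a launcher script under bin/
"./myapp-$VERSION/bin/myapp" &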
It should be noted that modern repository managers like Nexus and Artifactory now support Docker registries too, so you can use them for deploying Docker images as well.
Update
A related Chef question follows; Chef is a technology where there is no intermediate binary file (like a jar). In this case the software is still "released" by creating a tar distribution stored in the repository.
chef cookbook delivery - chef server vs. artifactory + berkshelf

Build and Deploy a Web Application with TFS 2015 Build

We have just installed TFS 2015 (Update 1) on-premise and are trying to create a Continuous Integration/Build system using the new TFS Build system. The build works fine, and gives me a green light, but when I look at the default build it has only built the binaries from the bin directory, and there seems to be no easy way to deploy the app on-premise to a local server.
There are two deploy options, a filesystem copy and a PowerShell script, and it would certainly be easy enough to use them to copy files to a new server; but since the build only built the binaries, I don't see a tool to gather up the web artifacts (cshtml, images, scripts, CSS, etc.) for this.
After an exhaustive Google search, I've only found one article which talks about this:
http://www.deliveron.com/blog/building-websites-team-foundation-build-2015/
However, this uses WebDeploy and creates a rather messy deploy package.
How can I deploy the site (standard MVC web application, in fact my tests are using the default boilerplate site created by the create project wizard) complete with artifacts to a local server in the easiest possible way? I don't want to have to install WebDeploy on the servers, and would rather use PowerShell or something to deploy the final artifacts.
The build is just the standard Visual Studio build template, with 4 steps (Build, Test, Index & Publish, Publish Build Artifacts).
We use "Visual Studio Build" step and as Arguments for MSBuild we use following line:
/p:DeployOnBuild=True /p:PublishProfile=$(DeploymentConfiguration)
On the Variables tab, DeploymentConfiguration has to be configured. It must be the name of the publish profile (the filename of the .pubxml file). If the filename is Build.pubxml, the publish profile is Build.
for example:
/p:DeployOnBuild=True /p:PublishProfile=Build
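For reference, a minimal file-system publish profile could look like this; a sketch in which the publishUrl is a placeholder:
<!-- Properties/PublishProfiles/Build.pubxml -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>FileSystem</WebPublishMethod>
    <publishUrl>\\staging-server\websites\MySite</publishUrl>
    <DeleteExistingFiles>True</DeleteExistingFiles>
  </PropertyGroup>
</Project>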
I wanted to add that Ben Day has an excellent write-up that helped us package quickly and then release to multiple environments through Release Manager.
His msbuild arguments look like this:
/p:DeployOnBuild=True /p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True /p:publishUrl=$(build.artifactstagingdirectory)\for-deploy\website
The difference between this and the accepted answer is that this parameter set stages everything in an artifacts folder, and then saves it as part of the build. We can then deploy exactly the same code repeatedly.
We capture the web.env.config files alongside the for-deploy folder and then use XDT transforms in the release process to ensure everything gets updated for whichever environment we're deploying to. It works well for all our web projects.
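The release step itself can then stay a plain file copy, which suits the "PowerShell rather than WebDeploy" preference in the question. A hedged PowerShell sketch; the server name and paths are placeholders:
# deploy.ps1 - mirror the staged site onto the target server
param(
    [string]$Source      = ".\for-deploy\website",
    [string]$Destination = "\\webserver01\c$\inetpub\wwwroot\MySite"
)
# /MIR mirrors the folder, removing files that no longer exist in the build
robocopy $Source $Destination /MIR
# robocopy exit codes below 8 indicate success (possibly with files copied)
if ($LASTEXITCODE -ge 8) { throw "robocopy failed with exit code $LASTEXITCODE" }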
We use WebDeploy/MSDeploy for 40+ applications and love it. We do install WebDeploy on all our servers so we can deploy more easily, but you could also use the Web Deploy On Demand feature, which doesn't require WebDeploy to be pre-installed.

Elastic Beanstalk deploy - ASP.NET from commandline

I'm trying to deploy my MVC4 app to ELB. The project has several post-build steps which pull together dependencies. The AWS SDK publish wizard then does not do the trick - it builds a Web Deploy package behind the scenes, which does not action those post-build steps or preserve the resulting directory structure.
So I downloaded the command-line EB tools and got a git repository working, but I can't work out the next step: what do I push to the server with git aws.push? If it's just the resulting files, then I can't specify the "Enable 32-bit applications" flag (required), etc. Do I push a Web Deploy package from my repository instead?
I presume so, but if so, how do I include the files copied into the output folder during "normal" builds by my post-build steps?
Here we go. This seems to be in conflict with what Jim Flanagan was saying: below it's a zip file, but Jim says it's the contents of it.
@Jim Flanagan - perhaps you could comment if you have some time. Thanks.
Hi, thanks for contacting AWS Premium Support.
Communication from the Elastic Beanstalk Engineering Team.
When you aws.push an ASP.NET/MVC app, you do not push the Web Deploy archive; rather, you push the artifacts as you want them deployed on the machine. From the customer's Stack Overflow question it seems they have already found the local git repo that the VS deployment wizard created, and looking there should give them a good indication of what is needed in the git repository.
There isn't a nice way through aws.push to specify what the "Enable 32-bit Applications" app pool setting should be (or any other configuration setting). If you need a specific configuration setting, I would suggest creating the environment first (via the console or using the eb command-line tool), which allows you to specify the configuration, and then using git aws.push to deploy to that environment; git aws.push will just use the configuration that is already present on the environment.
The last question, about deployments still being incremental, is not really valid, since you are not pushing just one big zip file. But if you were, it could still be incremental depending on what changed in the zip file; it might just send a diff between the two versions of the zip file. As the question implies, though, that use case is not really what incremental deployments were designed to help with.
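Based on that guidance, the flow for a build with post-build steps might look like this from a Windows command prompt; a sketch in which the project name, paths, and output folder are placeholders:
REM Build locally so the post-build steps run and assemble the full output
msbuild MyApp.csproj /p:Configuration=Release /p:OutDir=..\publish\
REM Copy the assembled artifacts (including post-build outputs) into the EB git repo
xcopy /E /Y ..\publish\_PublishedWebsites\MyApp\* .\eb-repo\
cd eb-repo
git add -A
git commit -m "Release build"
REM Push the deployable artifacts themselves, not a Web Deploy package
git aws.push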
