Code First Database Migration when swapping Azure Web App - asp.net-mvc

Setup:
Dev slot with a slot setting pointing to a DEV database.
Production slot with a slot setting pointing to a PRODUCTION database.
I enabled 'Execute Code First Migration' on my publish profile and publish to the DEV slot. The DEV database gets updated perfectly.
But when I swap the slot to PROD, the Code First migration isn't executed on the PROD database.
We have multiple customers that need this setup. I want developers to deploy the new version to the DEV slot, and I want my project managers to swap it when they feel the customer is ready to receive the new version, so they can demo the new version immediately. I don't want them performing any additional actions.
For now I have made a fix that makes them browse to a URL in the app that executes any missing updates via the following piece of code:
using System.Data.Entity.Migrations;

// Apply any pending Code First migrations to the connected database
var configuration = new Configuration(); // the project's Migrations\Configuration class
var migrator = new DbMigrator(configuration);
migrator.Update(); // brings the database up to the latest migration
Is it normal that the migration isn't triggered when swapping the slots?

Do you have the connection string configured as a slot-specific setting? If so, your worker process should restart on swap, and you just need to ensure migrations run on startup. Apparently the configuration changes the publish profile makes to run migrations on startup aren't propagated to the production slot.
See this post on the ASP.NET blog for options: EF Code First Migrations Deployment to an Azure Cloud Service

Related

How to set up Liferay for team development and deployment?

I am looking into how to set up a Liferay project with version control and automated deployment. I have a working local development environment in Eclipse, but as far as I understand it, a Liferay portal setup consists partly of the Liferay portal instance running on Tomcat and partly of my custom module projects for customization. I basically want all of that in one Git repository which can then be
1: cloned by any developer to set up their local dev environment
2: built and deployed by e.g. Jenkins into e.g. AWS
I have looked at the Liferay documentation on creating a Docker container for the portal, but I don't fully understand how things like portal content would be handled.
I would be very grateful if someone could point me in the right direction on how an environment like this would be set up.
Code and content are different beasts. Set up a local Liferay instance for every single developer. Share/version the code through whatever version control you use (you mention Git).
This way, every developer can work on their own project, set breakpoints, and create content that doesn't interfere with other developers.
Set up a separate integration test environment that gets its code exclusively through your CI server and is never touched manually.
Your production (or preproduction) database will likely have completely different content: a developer is quick to create a few "Lorem Ipsum" posts and pages, and you don't want those to escape into production. Thus there is no movement of content from development to production; only code moves that way.
In case you want your developers to work on a production-like environment, you can restore the production content (database) to development machines. Note that this is risky, though: the database also contains user accounts, and you might trigger update notification mails from your development machines - something you want to avoid at all costs. Plus, this way you give developers access to login data (even though it is hashed), which can be abused. And it might even be explicitly forbidden by industry regulations to use production data in development environments.
In general: every system has its own database (or at least its own schema), document store, and indexing server. Every developer has their own portal JVM running. The other environments (integration test, load test, authoring, production) are also separate environments. And no, you don't need all of them all the time.
I can't attribute this quote (Milen can - see his comment), but it holds here:
Everybody has a testing environment. Some are lucky to run a completely different production environment.
Be the lucky one. If everyone has their own fully separated environment, nobody is stepping on anyone else's toes. And you'll need the integration tests (with the CI output) anyway.

Jenkins: Tracing the history of unsaved new test definition (copied from another test definition)?

Recently, in our enterprise production setup, it seems someone tried to set up a new job / test definition by copying an identical job. However, they seem NOT to have saved it (and probably - guessing here - closed the browser and lost the session).
But the new job got saved even though it was not set to stable or active; we found out because changes uploaded to Gerrit started failing in this newly set up, partial job (these changes were in certain repos that met certain TDD settings).
Question: the Jenkins system has no trace of who set up the job in the 'configure versions' option. Is there any way to know the details of who set up the job, and when that was done?
No, Jenkins does not store that information by default.
If your Jenkins instance happens to be running behind an Apache or Nginx web server, there might be access logs that can help you. To find out when the job was created, you could look at when its config.xml file was created/modified.
However, there are a few plugins that can add this functionality so that you won't have this problem again:
JobConfigHistory Plugin – Tracks changes in your job configurations and gives the ability to restore old versions.
Audit Trail Plugin – Keeps a log of who performed particular Jenkins operations, such as configuring jobs.
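If a front-end web server's access log is available, creating a job through the Jenkins UI shows up as a POST to /createItem, so the log lines pin down who did it and when. A minimal sketch of extracting that, assuming Apache's "combined" log format (the sample line, user name, and path prefix below are made up; adjust them to your reverse-proxy setup):

```python
import re

# Apache "combined" log format: client, ident, user, [timestamp], "request", ...
LOG_PATTERN = re.compile(
    r'(?P<client>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)"'
)

def find_job_creations(log_lines):
    """Return (user, time, request) for every POST to Jenkins' /createItem."""
    hits = []
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if m and m.group("request").startswith("POST /createItem"):
            hits.append((m.group("user"), m.group("time"), m.group("request")))
    return hits

# Hypothetical sample line illustrating a job named "new-job" being created
sample = ['10.0.0.5 - jdoe [12/Feb/2016:14:31:02 +0000] "POST /createItem?name=new-job HTTP/1.1" 302 0']
print(find_job_creations(sample))
```

This only works retroactively if the logs were already being written, which is why the plugins above are the better long-term answer.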

change connection string automatically on publish azure

I have a cloud service project. I have two web projects and 4 class libraries.
On Azure publish, I want to change the connection strings automatically for the web roles and also for the class libraries.
I have two deployment slots: one for staging and the other for production. I want the staging connection strings selected automatically when running in staging, and the production ones when running in production.
I found a lot of solutions on the net, but none of them shows how to change the connection string for projects other than web roles (the class libraries).
I understand that you are using Web Apps, since you mention deployment slots. Each slot has its own Application Settings section; all you need to do is go to the slot, set the connection string you want to use, and check the "Slot Setting" mark.
This will make sure that, even if you swap, that setting (the connection string) remains fixed to that slot.
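The swap semantics boil down to: regular settings travel with the deployed code, while settings marked "Slot Setting" stay pinned to their slot. A rough model of that behavior (illustrative only, not how Azure implements it; the setting names are made up):

```python
def swap(slot_a, slot_b, sticky_keys):
    """Swap two slots' settings, except keys marked as slot settings,
    which remain pinned to their slot."""
    swapped_a, swapped_b = dict(slot_b), dict(slot_a)
    for key in sticky_keys:
        if key in slot_a:
            swapped_a[key] = slot_a[key]
        if key in slot_b:
            swapped_b[key] = slot_b[key]
    return swapped_a, swapped_b

staging = {"ConnString": "Server=staging-db", "FeatureFlag": "on"}
production = {"ConnString": "Server=prod-db", "FeatureFlag": "off"}

# "ConnString" is marked as a Slot Setting, so it does not move on swap
new_staging, new_production = swap(staging, production, sticky_keys={"ConnString"})
print(new_production)  # ConnString stayed pinned; FeatureFlag travelled with the code
```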
For your requirement, you should create two sets of configuration files (Web.config transforms): one for staging and the other for production. When publishing your web project, choose the corresponding profile in the publish dialog (for staging or production), and Visual Studio will take care of deploying the right configuration to the server.
This link will be helpful for you:
https://msdn.microsoft.com/en-us/library/kwybya3w(v=vs.110).aspx
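As an illustration, a Web.Release.config transform that rewrites a connection string when publishing with the production profile might look like the following (the connection string name and server are placeholders):

```xml
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the attributes of the matching entry from Web.config -->
    <add name="DefaultConnection"
         connectionString="Server=prod-db;Database=MyApp;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```

Class libraries read configuration from the hosting application's config file at runtime, so transforming the web project's Web.config covers them as well.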

Can I trigger a 'pull' update from a RoR app instead of 'push' from dev environment?

Is it possible to trigger a server update via Capistrano on the actual deployment server, so it fetches updates rather than them being pushed to it?
Our customer's server config is locked down from external access, so we can't push an update to it (but it can reach the internet, so it could see a repository somewhere).
I can't believe I'm inventing something new here, so is it possible to visit an admin page in the app to find out an update is available and stop/update/restart the server? What do other people do?
Just to finally close this off, we solved this eventually by moving the project to gitlab.com and using a gitlab-runner to poll for deployments - thus pulling updates rather than having them pushed.
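The pull-based pattern amounts to a loop on the server that compares the last-deployed revision with the remote's head and deploys on change. A minimal sketch (fetch_remote_sha and deploy are injected stand-ins for something like `git ls-remote origin HEAD` and your actual deploy script):

```python
import time

def poll_for_updates(fetch_remote_sha, deploy, current_sha, interval=60, max_cycles=None):
    """Poll the remote for a new revision and deploy when it changes.

    fetch_remote_sha() and deploy(sha) are injected so the loop stays testable."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        remote_sha = fetch_remote_sha()
        if remote_sha != current_sha:
            deploy(remote_sha)       # e.g. git pull + restart the app server
            current_sha = remote_sha
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval)     # wait before polling again
    return current_sha

# Demo with fakes: the remote moves from revision "a" to revision "b"
remote_revisions = iter(["a", "a", "b"])
deployed = []
final_sha = poll_for_updates(lambda: next(remote_revisions), deployed.append,
                             current_sha="a", interval=0, max_cycles=3)
print(deployed, final_sha)  # ['b'] b
```

A gitlab-runner does essentially this for you, with the deploy step defined in .gitlab-ci.yml.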

In Octopus Deploy, can you use a variable set per environment as the value for feed?

We are using Octopus Deploy and we would like to have two feeds, one for development branch and the other for our main branch in TFS. When we are done with a piece of functionality we merge it from the development branch to the main branch. We have builds for both branches that produce nuget packages. The DEV builds get the code from the DEV branch and publish nuget packages to the DEV feed, the MAIN builds get from the MAIN branch and publish packages to the MAIN feed. We'd like the dev build to automatically kick off a deployment in Octopus and have it use the nuget packages from the DEV feed. We'd also like to use that same Octopus deployment project to deploy to our QA, Production, and Training environments but from the MAIN feed instead of the DEV feed.
We have tried a couple of different ways to solve this problem but haven't been successful yet. The Octopus UI for creating the steps allows a variable entry in the feed field, so I'm assuming we can do it and we just have something slightly wrong. But is it possible that, because we have the variable set up per environment (Octopus environment), that is part of the problem?
We also tried having the TFS build tell Octopus which feed to use, and this seems to work to get the release created, but when it tries to deploy it can't figure out what that variable is anymore.
I have found these posts with similar or the same problem but no posted solution yet:
http://help.octopusdeploy.com/discussions/problems/16452-custom-binding-of-nuget-feed
http://help.octopusdeploy.com/discussions/questions/2189-separate-nuget-feeds-for-regional-deployments
I have tried creating a variable scoped by environment called testFeed and used the below syntax as the feed value in the step. It allows me to save the change and create a release, but when I try to deploy it says "There was a problem with your request. Pre-deployment validation failed: one or more feeds referenced by steps in this project no longer exist. You will need to create a new release.":
#{#{testFeed}|feeds-33}
Fortunately this now has a very easy solution that has been documented by Paul in the comments of this post.
Instead of the fallback ("or") syntax I had been trying, all you have to do is set up an additional variable that is not scoped to an environment and acts as the default.
So for my situation, I set up a variable called nugetFeed with the value feeds-33 (my DEV feed), and a second row in the variable table, also called nugetFeed, with the value feeds-34 (my MAIN feed), scoped to the QA and PROD environments. Then, in the process step's feed field, use a custom expression of #{nugetFeed}.
Note: once you save the process step it will look like you've chosen the default feed, but that is just the UI resolving the value. When you actually deploy, it uses the variable values.
Currently there is a bug to watch out for: when you enter the #{Feed} variable and save, everything works fine, but when you open the package in Edit again, the #{Feed} variable is evaluated in the designer to the first of the feeds, e.g. feeds-1. If you change something else and save again, your process is broken, since the feed now has the value feeds-1 and not #{Feed}.
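The variable resolution described above amounts to: pick the value whose scope matches the deployment environment, otherwise fall back to the unscoped row. A toy model of that lookup (illustrative only, not Octopus's actual implementation):

```python
def resolve(rows, environment):
    """rows: list of (value, scope) pairs, where scope is a set of
    environment names or None for an unscoped default row."""
    for value, scope in rows:
        if scope is not None and environment in scope:
            return value           # a scoped row wins for its environments
    for value, scope in rows:
        if scope is None:
            return value           # otherwise fall back to the unscoped default
    raise KeyError("no matching variable row")

# Mirrors the variable table above: feeds-34 scoped to QA/PROD, feeds-33 unscoped
nugetFeed = [
    ("feeds-34", {"QA", "PROD"}),  # MAIN feed
    ("feeds-33", None),            # DEV feed, acts as the default
]
print(resolve(nugetFeed, "DEV"))   # feeds-33
print(resolve(nugetFeed, "PROD"))  # feeds-34
```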
