With the new "Slot Settings" feature of Azure Website deployment slots, we can 'pin' connection strings and app settings to a particular slot. I have set up two slots, production and staging, and verified that I can swap between them and that each points to the correct database. The database is updated automatically using Code First Migrations. However, I'm unsure how exactly a "rollback" would (or should) work with the database in this scenario.
For example, consider the following:
App v1 is running in staging and pointed to staging Db v1
App v1 is running in production and pointed to production Db v1
App v2 is deployed to staging, and Code First Migrations updates staging Db to Db v2
The staging and production slots are swapped.
App v2 is now running in production, and the production db is updated to Db v2.
App v1 is now running in staging, but is pointed at the staging db, which is still at Db v2.
Is there a way to roll the staging database back to v1? If an "emergency" occurred and I had to swap staging and production again, would there be a way to get the production database back to v1? I understand this can be done using Update-Database, but I'm unclear on how to automate it as much as possible in Azure Websites.
I think you answered your own question. Unless you still have a staging db at Db v1, you would have to manually downgrade your staging database to do the rollback. I do not think there is an automated way of doing this.
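For reference, the manual rollback with Code First Migrations targets a specific migration by name from the Package Manager Console; a sketch, where the migration name and connection string are placeholders:

Update-Database -TargetMigration "V1_InitialCreate" -ConnectionString "<staging connection string>" -ConnectionProviderName "System.Data.SqlClient"

This runs the Down() methods of every migration applied after the target. If you want to script it as a deployment step, EF also ships a command-line runner (migrate.exe in the EF package's tools folder) that accepts a target migration, but you would still have to decide when to trigger it.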
I'm now trying to migrate my Parse db to mLab, with a Parse Server hosted in AWS Elastic Beanstalk.
While migrating, I ran into a few open questions, and I would be glad for any insight on them.
After migrating the DB, will the Parse server that is hosted at api.parse.com continue connecting to the migrated DB?
After deploying my development DB and Parse Server to MongoLab and AWS, will api.parse.com with the production DB still remain running for the app that users are currently using?
After migrating the development DB and also the Parse Server to AWS, is it possible to migrate the production DB?
After migrating the DB, will the Parse server that is hosted at api.parse.com continue connecting to the migrated DB?
That is correct: api.parse.com will hit your self-hosted database for as long as you don't delete the app on parse.com or close your Parse account, or until Parse shuts down in January 2017.
Should you choose to delete the app on parse.com, all users that haven't updated to a version of your app that uses your own Parse Server will be left with a broken app until they install the update.
After deploying my development DB and Parse Server to MongoLab and AWS, will api.parse.com with the production DB still remain running for the app that users are currently using?
Assuming that your development and production Parse apps are two different apps on Parse, you will need to migrate them separately. And yes, if you only migrate your development app (let's call it App A for now), App B (your production app) won't be affected until you migrate it as well. Of course, any non-migrated app will stop working altogether at the end of January 2017.
After migrating the development DB and also the Parse Server to AWS, is it possible to migrate the production DB?
You are free to migrate as many databases/apps as you want. So the answer is yes, you can migrate your production/development versions as well.
I use Code First and the app works well against the local database that was generated.
But when I deploy to Azure, although the deployment succeeds, the tables are not created; I just get the empty database.
I excluded the local app_data folder and chose to run Code First Migrations in the deployment options.
Any tips on what's wrong?
Have you configured your Azure deployment to replace connection strings (via the publishing wizard), or are you using environment variables in your code? It doesn't sound like it. It sounds like you deployed with LocalDB, which does not work in Azure.
You need to do one of the following (there are more options, but these are easy to implement):
Configure your deployment process to update your web.config with your SQL Azure connection string (you can use config transformations or the deployment wizard); a sketch follows below.
Use Azure environment variables (application settings) that are applied automatically when running in Azure, while your local values are used when running locally.
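For the first option, a Web.Release.config transform that swaps in the SQL Azure connection string looks roughly like this (the connection name, server, and credentials are placeholders):

<!-- Web.Release.config -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="DefaultConnection"
         connectionString="Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;User ID=youruser;Password=yourpassword;Encrypt=True;"
         providerName="System.Data.SqlClient"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>

For the second option, a connection string with the same name entered on the website's Configure page in the Azure portal overrides the one in web.config at runtime, so the real credentials never need to ship in the deployed package.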
I would like to test my Rails app on my local machine and also have it functional on Heroku. However, if I specify my IP address in the "Website" field of the Facebook app settings, then my Heroku app breaks, and vice versa. Is there any way to have them both work using the same API key?
If not, how do I tell OmniAuth to use one API key for the development environment and another for the production environment? Thanks!
Use a separate FB app for dev (local) and for production (Heroku).
Read the key out of the environment like:
ENV["FACEBOOK_APP_ID"]
ENV["FACEBOOK_SECRET"]
Then set the key/creds in your config on Heroku using heroku config:add.
Locally use foreman to run your app and set the dev key/creds in a .env file. http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
Keep in mind that FB requires you to use SSL, so you'll need to set up something locally that can handle SSL requests.
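A minimal sketch of the Rails side, assuming the omniauth-facebook provider and the env var names above:

# config/initializers/omniauth.rb
Rails.application.config.middleware.use OmniAuth::Builder do
  # picks up whichever app's credentials are present in the environment
  provider :facebook, ENV["FACEBOOK_APP_ID"], ENV["FACEBOOK_SECRET"]
end

On Heroku the values come from heroku config:add FACEBOOK_APP_ID=... FACEBOOK_SECRET=..., and locally from the .env file that foreman loads.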
You will either have to create a separate Facebook application for development, which is what I do, or you will have to create an entry in your /etc/hosts file that points the hostname of your Heroku app to your local machine.
We're considering migrating an app to Heroku. Currently we build and test locally before deploying to our own server for hosting... But the application is growing, and we're now wondering if it's reasonable to have, say, three versions of our app: one local to developers' machines, a second (testing) deployed via Capistrano to an internal server, and finally a third on Heroku (production). Databases would not need to be shared.
Any problems or advice for this sort of scenario?
I think it's a good thing to have a staging server with the same environment as your production one. So instead of an internal server, wouldn't it be better to test on Heroku?
For this purpose I've created another app on Heroku, and before updating my production app, I push my app to the staging one.
I would highly recommend the heroku_san gem, which simplifies pushing the app to Heroku to just rake staging deploy.
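Under the hood that is just two Heroku apps behind two git remotes; a minimal sketch with made-up app names:

# one-time setup
git remote add staging git@heroku.com:myapp-staging.git
git remote add production git@heroku.com:myapp.git
# deploy to staging, test, then promote the same code to production
git push staging master
git push production master

heroku_san wraps this kind of workflow into rake staging deploy and rake production deploy.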
I do this. I have development on developers' machines, staging, and production.
Staging is our test sandbox and sometimes also shares user databases with production so I can let users beta test, etc.
Whether or not you use Heroku for production doesn't really matter, does it?
I am getting ready to deploy to a true production environment. When I say true, I mean that my current production environment will now become staging, because there is other crap on that server and I am creating a new, larger slice for what will actually be my production machine.
The capistrano-ext gem has made separating the deploy recipes quite easy. However, one issue I run into is getting my code from one slice to another. I have a git repo set up on my staging slice that I will be using for production. The flow will be:
Develop locally
Test locally
Push from local to stage
Test on stage
Push from stage to production
...
Therefore I obviously need a way to establish a secure connection between staging and production. When deploying to production, I get a "Permission denied (publickey)." error because this is not set up. How can I establish this connection? Do I need to generate keys on my production server and put the public key on my staging server? How do I know which user on my production server is trying to connect to my staging server?
Branches and capistrano multistage are your friends.
To solve the production not having access to the git repo issue, try…
set :deploy_via, :copy
…this deploys by checking out locally, and pushing a tar ball.
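A minimal capistrano-ext multistage setup looks roughly like this (stage names are assumptions):

# config/deploy.rb
set :stages, %w(staging production)
set :default_stage, "staging"
require 'capistrano/ext/multistage'

# per-stage settings then live in config/deploy/staging.rb and config/deploy/production.rb,
# and you deploy with `cap staging deploy` or `cap production deploy`.

Combined with set :deploy_via, :copy, the production slice never needs to reach the git repo on staging at all.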
I find that branching or version tagging works much better for differentiating staging vs. production when using Capistrano.
For example, set up a 'staging' and 'production' branch for your application and use your source control tools to manage migrating changes from one to the next. During deployment simply deploy as you usually would, but with a particular branch instead of the main one.
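With multistage recipes that can be as simple as pinning a branch per stage file (branch names here are placeholders):

# config/deploy/staging.rb
set :branch, "staging"

# config/deploy/production.rb
set :branch, "production"   # or a release tag, e.g. "v1.2.0"

Capistrano then checks out that branch for whichever stage you deploy.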
It's not necessary to promote directly from staging to production, and in fact, this may be considered a bad idea since anyone with access to the staging machine potentially has access to the production server. In most environments a staging server is treated much more casually than the production site, so the security profile is usually quite different.
Do I need to generate keys on my production server and put the public key on my staging server?
Yes.
How do I know what user on my production server is trying to connect to my staging server?
The production user will be whatever user you connect with (see :user). The staging user will come from the git URL (see :repository).
When you use
set :deploy_via, :remote_cache
(which is the default), two ssh connections are actually occurring. The first one is from your local machine to production, and it uses the 'user' as configured in your recipe.
set :user, 'www-data'
The second ssh connection is made by that user, on production, to your git origin. So if git origin is on staging, the production user is trying to connect back to staging to pull code from git.
set :repository, "staginguser@staging.com:project.git"
Try this: ssh to production as the deploy user. Then run the failing command by hand. You'll see the "permission denied" and maybe a prompt for a password. Add the public key of that production user to the staging box (the staging git user's authorized_keys) and things should work better.
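Concretely, the key setup looks something like this (usernames and hostnames are placeholders):

# on the production box, as the deploy user (e.g. www-data)
ssh-keygen -t rsa        # accept the defaults; use an empty passphrase if deploys must run unattended
cat ~/.ssh/id_rsa.pub
# append that public key to ~staginguser/.ssh/authorized_keys on the staging box, then verify from production:
ssh staginguser@staging.com
git ls-remote staginguser@staging.com:project.git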
There's also:
set :gateway, 'staging server ip'
which should allow you to tunnel all the way through to your firewalled production box. But if you're deploying from staging you need to set up keys on the staging box if you're going to go through it that way.
On a side note, it's important to be able to do this whole process from your home box; staging really shouldn't need to have the capistrano gem at all. The hope is that you can do the whole process without ever having to actually log into a server, and that includes logging in to your staging server. :)
If there's an issue of pushing data between the two, this could easily be added onto just the production config so that it automatically takes data from staging and rsyncs it over; a sketch follows below.
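A hedged sketch of what that might look like as a Capistrano 2 task; the task name, paths, and host below are all made up:

# config/deploy/production.rb
namespace :data do
  desc "Pull shared data from staging onto production (hypothetical paths)"
  task :sync_from_staging, :roles => :app do
    run "rsync -az staginguser@staging.com:/var/www/app/shared/data/ #{shared_path}/data/"
  end
end
after "deploy:update_code", "data:sync_from_staging"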