We have three environments: development, staging, and production.
In our development configuration we set the database to connect to 127.0.0.1 with a fixed username and password.
However, other developers on our team use different usernames and passwords.
How do we reconcile this without creating development_steve, development_mark, development_amir, etc.?
Eventually we decided to use the same local credentials for our DBs.
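A common alternative would be to read the credentials from the environment in config/database.yml (Rails runs the file through ERB), so each developer keeps their own values locally; a minimal sketch, where the variable names, defaults, and adapter are assumptions:

# config/database.yml -- per-developer credentials come from the environment
development:
  adapter: postgresql
  database: myapp_development
  host: 127.0.0.1
  username: <%= ENV.fetch("DB_USERNAME", "postgres") %>
  password: <%= ENV.fetch("DB_PASSWORD", "") %>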
Related
I have a Rails app that I am currently shifting to production, but I want to set up subdomains such that:
if I go to
dev.myapp.com, I reach the development environment, and if I go to
prod.myapp.com, I reach the production environment.
Will I have to use two instances for this purpose, or can this be managed by one?
My servers are on AWS, and the domain is managed by GoDaddy.
You can definitely serve both environments from the same server, but you would have to have two different instances running.
You can use nginx or Apache HTTPD to route the different domains (or subdomains) to the appropriate instance running on your server (if it's an AWS EC2).
You have several other ways to configure it depending on your setup.
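As a rough illustration of the nginx route, two server blocks can proxy each subdomain to its own app instance; the ports here are assumptions:

server {
    listen 80;
    server_name dev.myapp.com;
    location / {
        proxy_pass http://127.0.0.1:3001;  # instance running in development mode
    }
}

server {
    listen 80;
    server_name prod.myapp.com;
    location / {
        proxy_pass http://127.0.0.1:3000;  # instance running in production mode
    }
}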
You'll need separate instances of the application running; the choice of running environment is a global, boot-time decision, with wide-ranging effects.
It's totally possible to run both of those application instances on the same server (AWS EC2 instance)... though it's more traditional to run the development mode on a local development machine, safely distanced from production.
With the new "Slot Settings" feature of Azure Website deployment slots, we can 'pin' a connection string and app settings to a particular slot. I have set up two slots, production and staging, and verified I can swap between them and point to the correct database. The database is updated automatically using Code First Migrations. However, I'm unsure how exactly a "rollback" would (or should) work with the database in this scenario.
For example, consider the following:
App v1 is running in staging and pointed to staging Db v1
App v1 is running in production and pointed to production Db v1
App v2 is deployed to staging, and Code First Migrations updates staging Db to Db v2
staging and production slots are swapped.
App v2 is running in production, and production db is updated to Db v2.
App v1 is running in staging, but pointed at staging db, which is still Db v2
Is there a way to roll the staging database back to v1? If an "emergency" occurred and I had to swap staging and production again, would there be a way to get the production database back to v1? I understand this can be done using Update-Database, but am unclear how to set it up as automated as possible in Azure Websites.
I think you answered your own question. Unless you still have a copy of the staging database at Db v1, you would have to update your staging database manually to do the rollback. I do not think there is an automated way of doing this.
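For the manual path, Update-Database in the Package Manager Console can migrate down to a named earlier migration; a sketch, where the migration and connection string names are assumptions:

# roll the staging database back to the last v1 migration
Update-Database -TargetMigration "InitialCreate" -ConnectionStringName "StagingDb"

This could in principle be scripted as a post-swap step, but as noted above there is no built-in automation for it.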
I would like to test my Rails app on my local machine and also have it functional on Heroku. However, if I specify my IP address in the "Website" field of the Facebook app settings, then my Heroku deployment breaks, and vice versa. Is there any way to have them both work using the same API key?
If not, how do I tell Omniauth to use one api key for the development environment and another for the production environment? Thanks!
Use a separate FB app for dev (local) and for production (Heroku).
Read the key out of the environment like:
ENV["FACEBOOK_APP_ID"]
ENV["FACEBOOK_SECRET"]
Then set the key/creds in your config on Heroku using heroku config:add.
Locally use foreman to run your app and set the dev key/creds in a .env file. http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
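Tying this together, the OmniAuth initializer can read both values from the environment, so the same code runs locally and on Heroku; a minimal sketch (assumes the omniauth-facebook gem and the variable names above):

# config/initializers/omniauth.rb
Rails.application.config.middleware.use OmniAuth::Builder do
  # picks up the dev app's creds from .env locally, the prod app's from heroku config
  provider :facebook, ENV["FACEBOOK_APP_ID"], ENV["FACEBOOK_SECRET"]
end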
Keep in mind that FB requires you to use SSL, so you'll need to set up something locally that can handle SSL requests.
You will either have to create a separate Facebook application for development, which is what I do, or create an entry in your /etc/hosts file that points the hostname of your Heroku app to your local machine.
Considering migrating an app to Heroku. Currently we build and test locally before deploying to our own server for hosting. But the application is growing, and we're now wondering if it's reasonable to have, say, three versions of our app: one local to developers' machines, a second (testing) deployed via Capistrano to an internal server, and a third on Heroku (production). Databases would not need to be shared.
Any problems or advice for this sort of scenario?
I think it's a good thing to have a staging server with the same environment as your production. So instead of an internal server, wouldn't it be better to test on Heroku?
For this purpose I've created another app on Heroku, and before updating my production app, I push my app to the staging one.
I would highly recommend the heroku_san gem, which simplifies pushing an app to Heroku to just rake staging deploy.
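Without the gem, the same flow works with two Heroku apps added as plain git remotes; a sketch, where the app names are assumptions:

# one-time setup: one Heroku app per stage
git remote add staging git@heroku.com:myapp-staging.git
git remote add production git@heroku.com:myapp-production.git

# deploy to staging first, then promote the same code to production
git push staging master
git push production master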
I do this. I have development on developer's machines, staging, and production.
Staging is our test sandbox and sometimes also shares user databases with production so I can let users beta test, etc.
Whether or not you use Heroku for production really doesn't matter, does it?
I am getting ready to deploy to a true production environment. When I say true, I mean that my current production environment will become staging, because there is other crap on that server and I am creating a new, larger slice for what will actually be my production machine.
The capistrano-ext gem has made separating the deploy recipes quite easy. However, one issue I run into is getting my code from one slice to another. I have a git repo set up on my staging slice that I will be using for production. The flow will be:
Develop locally
Test locally
Push from local to stage
Test on stage
Push from stage to production
...
Therefore I obviously need a way to establish a secure connection between staging and production. When deploying to production, I get a "Permission denied (publickey)." error because this is not set up. How can I establish this connection? Do I need to generate keys on my production server and put the public on my staging? How do I know what user on my production server is trying to connect to my staging server?
Branches and capistrano multistage are your friends.
To solve the production not having access to the git repo issue, try…
set :deploy_via, :copy
…this deploys by checking out the code locally and pushing a tarball to the server.
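For the multistage part, capistrano-ext expects the stages declared in the main recipe, with one file per stage for the overrides; a sketch, where the host and user are assumptions:

# config/deploy.rb
set :stages, %w(staging production)
set :default_stage, 'staging'
require 'capistrano/ext/multistage'

# config/deploy/production.rb -- settings specific to the production slice
set :user, 'deploy'
role :app, 'production.example.com'

After that, cap production deploy targets the new slice.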
I find that branching or version tagging works much better for differentiating staging vs. production when using Capistrano.
For example, set up a 'staging' and 'production' branch for your application and use your source control tools to manage migrating changes from one to the next. During deployment simply deploy as you usually would, but with a particular branch instead of the main one.
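In that setup, each Capistrano stage file can pin its branch; a sketch following the branch names above:

# config/deploy/staging.rb
set :branch, 'staging'

# config/deploy/production.rb
set :branch, 'production'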
It's not necessary to promote directly from staging to production, and in fact, this may be considered a bad idea since anyone with access to the staging machine potentially has access to the production server. In most environments a staging server is treated much more casually than the production site, so the security profile is usually quite different.
Do I need to generate keys on my production server and put the public on my staging?
Yes.
How do I know what user on my production server is trying to connect to my staging server?
The production user will be whatever user you connect with (see :user). The staging user comes from the git URL (see :repository).
When you use
set :deploy_via, :remote_cache
(which is the default), two ssh connections are actually occurring. The first one is from your local machine to production, and it uses the 'user' as configured in your recipe.
set :user, 'www-data'
The second ssh connection is made by that user, on production, to your git origin. So if git origin is on staging, the production user is trying to connect back to staging to pull code from git.
set :repository, "staginguser@staging.com:project.git"
Try this: ssh to production as the user. Then run the failing command by hand. You'll see the "permission denied" and maybe a prompt for a password. Add the public key of the staging server user to the production box and things should work better.
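A minimal version of that key setup, run on the production box as the deploy user (usernames and hostnames follow the examples above):

ssh-keygen -t rsa                    # generate a keypair if the user has none
ssh-copy-id staginguser@staging.com  # install the public key on the staging box
ssh staginguser@staging.com          # verify it now connects without a password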
There's also:
set :gateway, 'staging server ip'
which should allow you to tunnel all the way through to your firewalled production box. But if you're deploying from staging you need to set up keys on the staging box if you're going to go through it that way.
On a side note, it's important to be able to do this whole process from your home box; staging really shouldn't need to have the capistrano gem installed. The hope is that you can do the whole process without ever having to log into a server, and that includes logging in to your staging server. :)
If data needs to be pushed between the two, that could easily be added to just the production config so that it automatically takes data from staging and rsyncs it over.