How to deploy multiple app servers from different branches - ruby-on-rails

I have a RoR app running on 2 app/web servers using nginx/unicorn. The site is deployed using Rubber from the "master" branch of our repo. Life is good!
Some of the new customers we are working with require their data and files to be kept on separate servers. We are planning on having separate boxes for each of these customers. The sites for these customers will be available at customerX.site.com and the code for these apps will be the same as the code in the "master" branch except for a couple of images and the database.yml file.
So my question is: is there a way to choose the git branch to pull the code from based on the role of the box, or is there an alternative that would make this multi-app deployment process easy to manage?
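One way to sketch this (not a feature of Rubber itself; the role names and the `customer/*` branch convention below are assumptions) is a small wrapper that maps the box's role to the branch to deploy:

```shell
#!/bin/sh
# Hypothetical wrapper: derive the git branch from a server role.
# The role names and branch layout are assumptions for illustration.
branch_for_role() {
  case "$1" in
    customer_*) echo "customer/${1#customer_}" ;;  # customer_acme -> customer/acme
    *)          echo "master" ;;                   # everything else deploys master
  esac
}

# e.g.  BRANCH=$(branch_for_role "$SERVER_ROLE") cap deploy
```

With Capistrano-based tooling like Rubber you could then pick that value up in `deploy.rb` with something like `set :branch, ENV['BRANCH'] || 'master'`, so each box's role decides which branch it runs.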

Related

How to set up liferay for team development and deployment?

I am looking into how to set up a Liferay project with version control and automated deployment. I have a working local development environment in Eclipse, but as far as I understand it, a Liferay portal setup consists partly of the Liferay portal instance running on Tomcat and partly of my custom module projects for customization. I basically want all of that in one git repository which can then be:
1. cloned by any developer to set up their local dev environment
2. built and deployed by e.g. Jenkins into e.g. AWS
I have looked at the Liferay documentation on creating a Docker container for the portal, but I don't fully understand how things like portal content would be handled.
I would be very grateful if someone could point me in the right direction on how an environment like this would be set up.
Code and content are different beasts. Set up a local Liferay instance for every single developer. Share/version the code through whatever version control (you mention git).
This way, every developer can work on their own project, set breakpoints, and create content that doesn't interfere with other developers.
Set up a separate integration-test environment that gets its code exclusively through your CI server and is never touched manually.
Your production (or preproduction) database will likely have completely different content: Where a developer is quick to create a few "Lorem Ipsum" posts and pages, you don't want them to escape into production. Thus there's no movement of content from development to production. Only code moves that way.
In case you want your developers to work on a production-like environment, you can restore the production content (database) to development machines. Note that this is risky though: The database also contains user accounts, and you might trigger update notification mails from your development machines - something that you want to avoid at all costs. Plus, this way you give developers access to login data (even though it's hashed) which can be abused. And it might even be explicitly forbidden by industry regulations to use production data in development environments.
In general: every system has its own database (or at least its own schema), document store, and indexing server. Every developer has their own portal JVM running. The other environments (integration test, load test, authoring, production) are also separate environments. And no, you don't need all of them all the time.
I can't attribute this quote (Milen can - see his comment), but it holds here:
Everybody has a testing environment. Some are lucky to run a completely different production environment.
Be the lucky one. If everyone has their own fully separated environment, nobody is stepping on each other's shoes. And you'll need the integration tests (with the CI output) anyway.

Storing configuration settings in Azure Service Fabric and MVC apps

I have reached the point where I have to get my Service Fabric cluster deployed to Azure :) Besides the stateful/stateless services I have 2 MVC applications. I currently have a few settings in the web.config files (mostly connection strings).
I plan to configure continuous build/deploy using Visual Studio Online, but have not dug into doing that yet.
Where is the recommended place to store the configuration settings? I will need settings for 3 different environments (dev/test/prod).
I found a reference, at some point, to storing the settings in the build definition, which sounds like a better place for production credentials than config files that are part of the applications' source code. I need to limit access to the values for the production environment, and keeping them in config files that all developers have access to does not sound like the best way to do this.
Any white papers or best practices regarding this I should be aware of?
You can use the publish profiles and application parameters of the Service Fabric project to store your settings for each environment.
In my case I have a dev, a homolog (staging), and a production environment with different database connection strings, so I created publish profiles named Cloud.Homolog.xml and Cloud.Production.xml; for the dev environment I'm still using Local.5Node.xml.
Then, when I want to deploy to one of these environments, I choose the corresponding publish profile.
Here is the documentation for multiple environment management:
Link

change connection string automatically on publish azure

I have a cloud service project. I have two web projects and 4 class libraries.
On Azure publish, I want the connection strings to change automatically for the web roles and also for the class libraries.
I have two deployment slots: one for staging and the other for production. I want the staging connection strings to be selected automatically when running in staging, and the production ones when running in production.
I found a lot of solutions on the net, but none of them show how to change the connection string for projects other than web roles (class libraries).
I understand that you are using Web Apps, since you mention Deployment Slots. Each slot has its own Application Settings section; all you need to do is go to the slot, set the connection string you want to use, and check the "Slot Setting" box.
This makes sure that, even if you swap, that setting (the connection string) remains pinned to that slot.
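The same slot-sticky setting can be made from the Azure CLI. A sketch (the resource group, app, slot, and connection-string names below are placeholders; the command is printed rather than run, so the shape is visible):

```shell
# Hedged sketch: --slot-settings (instead of --settings) pins the value to
# the slot so it does not follow the app on swap. All names are placeholders.
RG=my-rg APP=my-app SLOT=staging
CMD="az webapp config connection-string set \
  --resource-group $RG --name $APP --slot $SLOT \
  --connection-string-type SQLAzure \
  --slot-settings MyDb='Server=tcp:staging-db.example.net;...'"
echo "$CMD"   # printed rather than executed, since this is only a sketch
```

Run the same command against the production slot with its own value, and each slot keeps its own connection string across swaps.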
For your requirement, you should create two sets of configuration files (Web.config): one for staging and the other for production. When publishing your web project, choose the corresponding profile in the publish dialog (staging/production), and Visual Studio will take care of deploying the right configuration to the server.
This link will be helpful for you:
https://msdn.microsoft.com/en-us/library/kwybya3w(v=vs.110).aspx

How do I configure two sets of hosts (3 for QA and 3 for Prod) for deploying a distributed system using Spinnaker?

I am using Spinnaker to deploy a 3-tier system to QA and then to Production. Configuration files in each of these systems point to the others. If I bake the QA configuration into the AMI, then how do I change it while promoting to Prod? Is it 1) by having two different sets of AMIs, one for QA and one for Prod, or 2) by having AMIs with no configuration and then configuring them (somehow) after deployment to change the configuration files?
What is recommended?
You can define custom AWS user data for a cluster at deploy time (under Advanced Settings of the cluster configuration). You can then retrieve this user data in your application, which allows you to change these types of configuration.
At Netflix, we have a series of init scripts that are baked into the base image and provide a mechanism for extending custom startup (init.d) scripts via Nebula/Gradle. These usually set values like NETFLIX_ENVIRONMENT that are well known and programmed against.
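That pattern can be sketched like this (NETFLIX_ENVIRONMENT is named in the answer; the config-file layout below is an assumption for illustration):

```shell
#!/bin/sh
# Sketch: startup code branches on a well-known environment variable so the
# same image works in every environment. The path layout is hypothetical.
ENVIRONMENT="${NETFLIX_ENVIRONMENT:-test}"                      # default when unset
CONFIG_FILE="/etc/myapp/application-${ENVIRONMENT}.properties"  # hypothetical path
echo "would load $CONFIG_FILE"
```

The bake stage produces one image; the deploy stage (user data or init scripts) sets the variable, so promotion from QA to Prod changes only the environment value, not the image.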
We also use a feature flipping mechanism via https://github.com/Netflix/archaius . This allows us to add properties that are external to the clusters but can be targeted towards them.
When it comes to secured credentials, the approach is outlined in this presentation, but essentially the images reach out to an external service that issues these types of credentials: https://speakerdeck.com/bdpayne/key-management-in-aws-how-netflix-secures-sensitive-data-without-its-own-data-center
I am struggling with similar problems myself in our company.
My solution was to create AMIs for specific purposes using a Packer script. This allows me to:
1. Configure the server as much as I can and then store those configurations in an AMI.
2. Easily change these configurations if the need arises.
Then I launch the AMI and use an Ansible script to make the rest of the configuration changes on the specific instance.
In my case I chose creating different images for staging and production, but mostly because they differ greatly. If they were more alike I might have chosen using a single AMI for both.
The advantage Ansible gives you here is that it factors out your configuration: you write it once and apply it to both the production and staging servers.

Best way to keep some custom local changes in GIT?

We have a Rails app that we share with several developers and we are using git.
I've made a few changes to the Rails app to match my computer configuration and my own preferences that I don't want to share in the main GitHub repository. What's the best way to keep my custom changes while developing, while still being able to push/pull to stay up to date with the main branch and to open pull requests to share my work on the app?
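One common option (a sketch, not the only answer) is to keep the files tracked but tell git to ignore your local edits with `--skip-worktree`. The demo below runs in a throwaway repo; the `config/database.yml` path mirrors the Rails context of the question:

```shell
#!/bin/sh
set -e
# Self-contained demo in a temporary repo.
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com && git config user.name dev
mkdir config && echo "adapter: sqlite3" > config/database.yml
git add . && git commit -qm "shared config"

# Tell git to ignore local edits to this tracked file:
git update-index --skip-worktree config/database.yml
echo "adapter: postgresql  # my machine only" > config/database.yml

git status --porcelain        # prints nothing: the local edit is invisible
git ls-files -v | grep '^S'   # skip-worktree files carry an 'S' flag
```

Undo with `git update-index --no-skip-worktree config/database.yml` before pulling upstream changes to that file, otherwise the merge will conflict with your hidden edit. For files that only exist on your machine, `.git/info/exclude` is a simpler, local-only alternative to `.gitignore`.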
