We have a Web interface built using React (and nginx) and a REST API (with JSON Schema validation). They are in different repositories. Our cluster is a private OpenShift (3.11).
We would like to achieve a zero downtime deployment.
Let's assume that:
we have 10 pods for the Web and 20 pods for the REST API.
we want to upgrade WEB and API from 1.0.0 to 2.0.0
the new version of WEB supports only the new version of the API
each repo (WEB and API) has its own Helm chart (if needed and recommended, we could create an additional repository containing a single Helm chart that deploys both the WEB and the API)
Which deployment strategy should we use? (blue/green, canary, a/b ?)
How can we configure the new WEB pods so that they hit only the new API service:
WEB 1.0.0 --> API 1.0.0
WEB 2.0.0 --> API 2.0.0
How can we perform the upgrade with zero downtime?
The very important thing is that, during the upgrade, the new version of the WEB should hit only the new version of the API, while the already deployed pods (1.0.0) should continue to hit the old version of the API.
I have done the same thing, and you can achieve this within Kubernetes. The approach is as follows.
In my setup, I do the deployment via Helm, and all the K8s objects (Pods, Services, Ingress) are named uniquely per release. That way I can access a specific front-end release by adding a context path to my domain, e.g. https://app.com/1.0 or https://app.com/2.0.
The version I want to expose to the internet is controlled via a separate Ingress object (call it a "super-ingress"), which is independent of the releases and decides which version is live. This way you can deploy any number of releases in production without conflict, and through the super-ingress you choose which Service is exposed to the public.
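A rough sketch of what such a super-ingress could look like, assuming per-release Services named web-1-0-0 and web-2-0-0 (those names and the app.com host are illustrative, not taken from the answer):

# super-ingress.yaml: independent of the releases; it alone decides which
# release's Service is exposed on the public domain.
apiVersion: extensions/v1beta1   # Ingress API group available on OpenShift 3.11-era clusters
kind: Ingress
metadata:
  name: super-ingress
spec:
  rules:
    - host: app.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web-2-0-0   # flip back to web-1-0-0 to roll back
              servicePort: 80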
Given the constraints you describe, your only choice is to follow a blue/green approach.
You have one set of components that work together, call it A, and another set that work together, B. Mixing A and B is not possible, which rules out canary and A/B testing.
You need to deploy B (green), and when everything is correct, switch the domain from A to B.
In Kubernetes terms, you will have two separate Deployments and Services, as if they were two standalone applications. When you are confident that v2 is working properly, change the DNS record that points to the LoadBalancer of the v1 Service so that it points to the v2 Service instead.
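As a rough sketch, assuming versioned object names like api-2-0-0 and web-2-0-0 and an API_BASE_URL environment variable (all illustrative, not from the question), the green (v2) stack could look like this:

# Green stack: the v2 WEB Deployment is wired only to the v2 API Service,
# while the untouched v1 objects keep serving existing traffic.
apiVersion: v1
kind: Service
metadata:
  name: api-2-0-0
spec:
  selector:
    app: api
    version: "2.0.0"          # selects only the new API pods
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-2-0-0
spec:
  replicas: 10
  selector:
    matchLabels:
      app: web
      version: "2.0.0"
  template:
    metadata:
      labels:
        app: web
        version: "2.0.0"
    spec:
      containers:
        - name: web
          image: registry.example.com/web:2.0.0
          env:
            - name: API_BASE_URL          # WEB 2.0.0 talks only to the v2 API Service
              value: "http://api-2-0-0"

Once v2 is verified, only the public entry point (the DNS record or Route) has to be switched; the v1 Deployments and Services can then be scaled down and removed.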
We are using a GitOps model for deploying our software. Everything in the dev branch goes to the dev environment and everything in main gets deployed to production. All good and fine, except that we use Google Cloud Endpoints, which relies on the host parameter of the openapi.yaml. There is only room for a single value, so we have to remember to change it for each deployment, which prevents a fully automated deploy.
How do you manage the same openapi.yaml definition when using Google Cloud Endpoints?
There is an example in the official documentation; see if it helps your use case.
Basic structure of an OpenAPI document, notice how the "host" is parameterized with "YOUR-PROJECT-ID.appspot.com"
Deploying the Endpoints configuration, using the provided script "./deploy_api.sh"
Source code for deploy_api.sh
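For reference, a minimal sketch of the parameterized part of that openapi.yaml (only the host matters here; the other fields are trimmed):

# OpenAPI 2.0 spec for Cloud Endpoints: the host is the only environment-specific
# value, so a deploy script can substitute the project ID before pushing the config.
swagger: "2.0"
info:
  title: example-api
  version: "1.0.0"
host: "YOUR-PROJECT-ID.appspot.com"
schemes:
  - https
paths: {}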
One common solution for managing environment-specific properties is to create different build profiles and environment-specific properties files such as openapi_dev.yaml, openapi_qa.yaml and openapi_prod.yaml, then supply the one matching the profile (dev/qa/prod) being used. Refer here for more details.
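As an illustration (the project IDs below are placeholders; everything except the host stays identical across the files):

# openapi_dev.yaml -- selected by the dev profile
host: "my-dev-project.appspot.com"
---
# openapi_prod.yaml -- selected by the prod profile
host: "my-prod-project.appspot.com"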
Another way is documented in GitOps-style continuous delivery with Cloud Build, where a multi-branch, multi-repository approach is suggested.
Under the FAQ section of the Swagger OpenAPI guide, it is clearly stated that you can specify multiple hosts, e.g. development, test and production, but only with OpenAPI 3.0. OpenAPI 2.0 supports only one host per API specification (or two, if you count HTTP and HTTPS as different hosts). A possible way to target multiple hosts is to omit the host and schemes from your specification and serve it from each host; in that case, each copy of the specification targets the host it is served from.
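For illustration, the OpenAPI 3.0 equivalent declares the hosts via a servers list (the URLs are placeholders):

# OpenAPI 3.0: multiple target hosts in a single specification.
openapi: "3.0.0"
info:
  title: example-api
  version: "1.0.0"
servers:
  - url: https://dev.example.com/v1
    description: Development
  - url: https://test.example.com/v1
    description: Test
  - url: https://api.example.com/v1
    description: Production
paths: {}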
As per the Google documentation, Cloud Endpoints currently supports OpenAPI version 2.0. A feature request has been filed for version 3.0 support, but there have been no releases. You can follow the updates here.
If I have an Edge Device with several modules, can I then update one module without affecting the others, even if all modules are deployed with the same deployment manifest?
If you don't update the module image URI, environment settings or createOptions of a module, it will keep running. This means that if you only update these options for one of your modules, only that one will be restarted; the rest will remain active.
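A schematic excerpt of the $edgeAgent desired properties in a deployment manifest (shown as YAML for readability; the real manifest is JSON, and the module names and images are made up):

# Only the module whose image, env or createOptions changes gets restarted.
modules:
  moduleA:
    type: docker
    status: running
    restartPolicy: always
    settings:
      image: myregistry.azurecr.io/module-a:1.1.0   # bumping this tag restarts moduleA only
      createOptions: "{}"
    env:
      LOG_LEVEL:
        value: info
  moduleB:
    type: docker
    status: running
    restartPolicy: always
    settings:
      image: myregistry.azurecr.io/module-b:1.0.0   # unchanged, so moduleB keeps running
      createOptions: "{}"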
I am using Spinnaker to deploy a 3-tier system to QA and then to Production. Configuration files in each of these systems point to the others. If I bake the QA configuration into the AMI, how do I change it when promoting to Prod? Is it 1) by having two different sets of AMIs, one for QA and one for Prod, or 2) by having AMIs with no configuration and then configuring them (somehow) after deployment to change the configuration files?
What is recommended?
You can define custom AWS user data for a cluster at deploy time (under the advanced settings of the cluster configuration). You can then retrieve this user data in your application, which allows you to change these types of configuration per environment.
At Netflix, we have a series of init scripts that are baked into the base image and provide a mechanism for extending custom startup (init.d) scripts via nebula/gradle. This usually sets values like NETFLIX_ENVIRONMENT that are well known and programmed against.
We also use a feature-flipping mechanism via https://github.com/Netflix/archaius. This allows us to add properties that are external to the clusters but can be targeted towards them.
When it comes to secured credentials, the approach is outlined in this presentation, but essentially the images reach out to an external service that issues these types of creds: https://speakerdeck.com/bdpayne/key-management-in-aws-how-netflix-secures-sensitive-data-without-its-own-data-center
I am struggling with similar problems myself in our company.
My solution was to create AMIs for specific purposes using a Packer script. This allows me to:
1. Configure the server as much as I can and then store those configurations in an AMI.
2. Easily change these configurations if the need arises.
Then I launch the AMI using an Ansible script and apply the rest of the configuration on the specific instance.
In my case I chose to create different images for staging and production, mostly because they differ greatly. If they were more alike, I might have used a single AMI for both.
The advantage Ansible gives you here is that it lets you factor your configuration, writing it once and applying it to both production and staging servers.
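A minimal sketch of what that post-launch Ansible step could look like (the playbook, group names, paths and variables are assumptions, not taken from the answer):

# configure.yml -- run against instances launched from the baked AMI.
# Shared tasks are written once; environment-specific values live in
# group_vars/staging.yml and group_vars/production.yml.
- hosts: app_servers
  become: true
  tasks:
    - name: Render the environment-specific application config
      template:
        src: templates/app_config.yml.j2
        dest: /etc/myapp/config.yml
        mode: "0644"

    - name: Restart the application to pick up the new config
      service:
        name: myapp
        state: restarted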
I have a RoR app running on 2 app/web servers using nginx/unicorn. The site is deployed using Rubber from the "master" branch of our repo. Life is good!
Some of the new customers we are working with require their data and files to be kept on separate servers. We are planning on having separate boxes for each of these customers. The sites for these customers will be available at customerX.site.com and the code for these apps will be the same as the code in the "master" branch except for a couple of images and the database.yml file.
So my question is: is there a way to set the git branch used to pull the code based on the role of the box, or is there an alternative that makes this multi-app deployment process easy to manage?
Just looking at http://github.com/mojombo/grit
Curious: if grit is on a web server and the git repositories are on another, will this still work, or does it HAVE to be on the same server? Or does it use remoting somehow?
At GitHub (where grit was developed and extracted from) we use Grit both on the frontend, where the web application runs, and on the backends, where the git repositories live. We patch Grit to make every call to the Grit::Git functions (where all of the actual file access is contained) over BERT-RPC to the appropriate backend instead of executing the code locally. The file path passed to the Grit initializer is then the path on the backend server. So the raw repository access is done by Ruby handlers running Grit on the backend servers, while the rest of the Grit namespace (Grit::Commit, Grit::Diff, etc.) runs on the frontends. It's actually pretty cool. At GitHub we run something like 300 million RPC calls a month through this system.
If you want to learn more about BERT-RPC, check out Tom's talk at RubyConf: http://rubyconf2009.confreaks.com/19-nov-2009-10-25-bert-and-ernie-scaling-your-ruby-site-with-erlang-tom-preston-werner.html
It has to be on the same server. If you look at the documentation then you'll see that the Repo constructor accepts a local file path:
repo = Repo.new("/Users/tom/dev/grit")
All implementations of Git (and partial implementations, wrappers and interfaces) should be able to talk to each other, be it C git, JGit (in Java), Grit (in Ruby), GitSharp / Git# (in C#) or Dulwich (in Python), independently of which implementation is used on the server and which on the client. The same is true (perhaps to a lesser extent) of different implementations working on the same repository.
If that isn't the case, it is a bug in the implementation (the original version in C being the reference implementation).
It seems like you want to have git repos on server B and an interface to them, like Codaset or GitHub, on server A. The developer of Codaset does what I think you are looking for; read his blog post: http://codaset.com/codaset/codaset/blog/quiet-at-the-front-but-busy-at-the-back