Thoughts wanted: When deploying new code, are there benefits to deploying to brand new instances (droplets on Digital Ocean) versus deploying to existing instances?
With the ease of deploying new instances nowadays, I'm wondering if the better route to go now is to simply deploy a new instance and change the routing to the new instance...
I know that whether you use Chef or another deployment tool may dictate a solution, but I'm looking for general thoughts on the topic....
It all depends on how big your infrastructure is and how much work you will need to debug your system when show-stoppers and bugs appear. Deploying to existing machines can be maintained to some extent at a small scale, but once the footprint of your system becomes big, it is time for immutable infrastructure: you deploy new machines and get rid of the old ones, instead of applying new changes to existing ones.
The benefits of this approach are:
Easier, more encapsulated deployments at the machine level: you attach the new instances, wait for traffic to drain from the old ones, and then take them off.
Logging is more centralized for all your machines, which is handy when debugging: you can see where an issue is coming from and which machine is creating the bad behaviour.
Flexibility to kill servers that are causing problems and create new ones on the fly, connecting them to your load balancer.
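To make the "deploy new, drain old, destroy old" cycle concrete, here is a rough Python sketch against the DigitalOcean v2 API. The endpoint paths, region, size, load-balancer id and pre-baked image are assumptions for illustration; verify them against the current API documentation before relying on any of it.

```
# Rough sketch of an immutable-infrastructure rollout against the DigitalOcean v2 API.
# Endpoint paths and payloads are written from memory -- check the current API docs.
import time
import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer YOUR_DO_TOKEN"}  # placeholder token

def create_droplet(name, image):
    """Boot a fresh droplet from a pre-baked image instead of mutating an existing one."""
    body = {"name": name, "region": "nyc3", "size": "s-1vcpu-1gb", "image": image}
    resp = requests.post(f"{API}/droplets", json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["droplet"]["id"]

def wait_until_active(droplet_id):
    """Poll until the new droplet reports an 'active' status."""
    while True:
        droplet = requests.get(f"{API}/droplets/{droplet_id}", headers=HEADERS).json()["droplet"]
        if droplet["status"] == "active":
            return
        time.sleep(10)

def swap_behind_load_balancer(lb_id, new_id, old_id):
    """Attach the new droplet, then detach and destroy the old one once traffic has drained."""
    requests.post(f"{API}/load_balancers/{lb_id}/droplets",
                  json={"droplet_ids": [new_id]}, headers=HEADERS).raise_for_status()
    # ... wait for health checks / connection draining on the old droplet here ...
    requests.delete(f"{API}/load_balancers/{lb_id}/droplets",
                    json={"droplet_ids": [old_id]}, headers=HEADERS).raise_for_status()
    requests.delete(f"{API}/droplets/{old_id}", headers=HEADERS).raise_for_status()
```

The same flow applies whether you drive it from a CI job, Chef, or a small script like this; the key point is that the old instance is never mutated, only replaced.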
Where I work we've been adding microservices for different purposes, and the local development environment is becoming difficult to set up. Services have too many environment variables to configure, and usually there's not enough memory available to run them all.
We plan to fix these issues; I understand it's mostly a matter of architecture and DevOps. One way we've thought of is to create a proper service registry that allows easier setup and opens the door to, for example, having some services running locally and others in the cloud, all wired together through the service registry.
Another option could be to stub some of the dependencies with something like https://wiremock.org/, but it seems too limited and difficult (?).
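To illustrate what I mean by stubbing, a tiny hand-rolled stub for one of the heavier dependencies could be something along these lines (the /users/<id> route and its response are invented purely for the example):

```
# Minimal hand-rolled stub for a dependency, as an alternative to running the real service.
# The /users/<id> route and its response are invented for illustration only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/users/"):
            body = json.dumps({"id": self.path.split("/")[-1], "name": "stub-user"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Point the service under development at http://localhost:8081 instead of the real dependency.
    HTTPServer(("localhost", 8081), StubHandler).serve_forever()
```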
I wanted to ask, what other strategies are there to manage growing development environments?
The image below shows the two pipelines that we have in my company to manage the lifecycle of a model.
The first pipeline, "Application", relates to the creation of the application component that hosts the model and contains the inference logic.
The second, "Model", is the pipeline that leads to the generation of the model in binary format.
Together, model and application are deployed to our orchestrator (a Kubernetes cluster).
I am in a situation where the application logic does not change but the models do.
I could find myself in the situation below.
I suppose there are two approaches to manage the runtime model refresh on the orchestrator (I hope someone can suggest other possibilities that I haven't thought about):
In the application logic: the code manages the refresh through a background thread that picks up the new model (see the sketch after this list).
Pros: A new container is not generated
Cons: Possibility of introducing a bug.
Through the pipeline: the pipeline is triggered by an event (in my case a merge on a git branch) and ships the container by performing a rolling update. The new container loads the new model at startup.
Pros: Existing process
Cons: Each new version of the model requires a new build of the container, even if the application logic has not changed.
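To clarify the first approach, this is roughly what I have in mind. It is only a sketch: the model path, the polling interval, and load_model() are placeholders, not our real code.

```
# Sketch of approach 1: a background thread that polls for a new model file and hot-swaps it.
# MODEL_PATH, the poll interval, and load_model() are placeholders for illustration.
import os
import threading
import time

MODEL_PATH = "/models/current/model.bin"

def load_model(path):
    # Placeholder: in reality this would be framework-specific deserialization.
    with open(path, "rb") as f:
        return f.read()

class ModelHolder:
    def __init__(self):
        self._lock = threading.Lock()
        self._mtime = None
        self._model = None

    def get(self):
        """Return the currently loaded model; called by the inference code."""
        with self._lock:
            return self._model

    def _load_if_changed(self):
        mtime = os.path.getmtime(MODEL_PATH)
        if mtime != self._mtime:
            new_model = load_model(MODEL_PATH)
            with self._lock:
                self._model, self._mtime = new_model, mtime

    def watch(self, interval=60):
        """Start a daemon thread that periodically checks for a new model file."""
        def loop():
            while True:
                try:
                    self._load_if_changed()
                except OSError:
                    pass  # model file not present yet
                time.sleep(interval)
        threading.Thread(target=loop, daemon=True).start()
```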
[Question]
Are there any best practices for these cases (perhaps through a system of tags on the images) that someone can suggest?
Thank you
Kipliko
The most seamless way is to do a rolling update via k8s/kubectl. This will require a new container; however, this is considered a best practice, as each container stays atomic and reproducible. Updating the model via threads would be difficult to debug.
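If you want the model pipeline to trigger that rolling update programmatically rather than by running kubectl by hand, a sketch with the official Kubernetes Python client could look like this. The deployment name, namespace, and image tag are placeholders for your own values.

```
# Sketch: trigger a rolling update by patching the Deployment's image to the new model tag.
# "model-server", "default", and the image tag are placeholders for your own values.
from kubernetes import client, config

def roll_out(new_image: str):
    config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster
    apps = client.AppsV1Api()
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "model-server", "image": new_image}
    ]}}}}
    # Patching the pod template makes Kubernetes perform a rolling update of the Deployment.
    apps.patch_namespaced_deployment(name="model-server", namespace="default", body=patch)

if __name__ == "__main__":
    roll_out("registry.example.com/model-server:model-v2")
```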
Another option is a blue-green deployment using Istio, slowly shifting traffic between the old and new model, although this would require a bit more overhead.
I am deploying a web app onto multiple servers. I would like to push code to one of these servers so that it can be updated without affecting the other deployed servers. The web app and server are unique to each client, so if I add a feature for one client I would not like to add it to all the other servers that are deployed. There will be occasions where an update is required for all the active servers.
I am hoping to automate this functionality so that I can push code for one server without it affecting the other servers. I have been looking into using Docker, Ansible, and Kubernetes for this job but am fairly new to deployment in general and would like to get an idea of the best practices for something like this.
Thank you.
It is now deployed as a stand-alone web server where a push to the master branch causes an automatic deployment for that one server. The only solution I've come up with is deploying to each server individually.
Your question is not strictly related to Docker/Kubernetes, but I will try to answer anyway. There are two ways this could be done:
1) Maintain a repo for each customer, which holds the customer's customization and pulls the common code from somewhere else, maybe via a Git submodule or some other mechanism. This way each customer can have their own lifecycle. I don't like this approach because the drift between customers tends to get bigger. Also, it is like maintaining as many codebases as there are customers, which is not scalable.
2) The only way I can think of doing this in a way that is uniform for all customers is feature flags. This has to happen during development and is not a purely operational exercise. It is difficult to get started, but in the longer term it is the only way I think this can scale. Features specific to one customer can then be turned on or off by configuration. There are also companies like LaunchDarkly which enable this.
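As a very rough illustration of the feature-flag idea (plain per-deployment configuration rather than LaunchDarkly; the flag names and customer files are invented):

```
# Bare-bones feature flags driven by per-customer configuration.
# Flag names and customer files are invented for illustration.
import json

class FeatureFlags:
    def __init__(self, config_path: str):
        with open(config_path) as f:
            self._flags = json.load(f)  # e.g. {"new-invoice-ui": true, "beta-reports": false}

    def enabled(self, flag: str) -> bool:
        return bool(self._flags.get(flag, False))

# One shared codebase, one small flag file per customer deployment:
#   flags/customer-a.json -> {"new-invoice-ui": true}
#   flags/customer-b.json -> {"new-invoice-ui": false}
flags = FeatureFlags("flags/customer-a.json")

if flags.enabled("new-invoice-ui"):
    view = "new invoice UI"      # feature only this customer gets
else:
    view = "classic invoice UI"  # everyone else keeps the old behaviour
```

Every customer runs the same code; only the small per-customer flag file differs, so there is a single codebase to build, test, and deploy.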
I'm studying the best way to have multiple redmine instances on the same server (basically I need a separate database for each redmine group).
So far I have two options:
Deploy a redmine instance for each group
Deploy one redmine instance with multiple databases
I really don't know what the best practice is in this situation; I've seen people doing this both ways.
I've tested deploying multiple redmines (3 instances) with nginx and Passenger. It worked well, but I think it may not be feasible with a lot of instances. Each app needs around 100 MB of RAM, and as requests increase it tends to allocate more processes to the app. This scenario seems bad if we had a lot of instances.
Option 2 seems reasonable; I think I can implement it with Rails environments. But I think there are some security problems related to sessions (I think a user of site A would be allowed to perform actions on site B after authenticating on A).
Is there any good practice for this situation? What's the best approach to take?
Another requirement related to this: we must be able to create or shut down a redmine instance without interrupting the others (e.g. we should avoid server restarts).
Thanks for any advice, and sorry for my English!
Edit:
My solution:
I used a redmine instance for each group, with nginx + unicorn to manage each instance independently (Passenger didn't allow me to manage them independently).
The two options are not so different after all. The only difference is that in option 2, you only have one copy of the code on your disk.
In any case, you still need to run separate worker processes for each instance, as Redmine (and most Rails apps in general) doesn't support switching databases per request, and some environment-specific data is cached in-process.
Given that, there is not really much incentive to share even the codebase, as it would require certain monkey patches and symlink magic to allow proper initialization with the intentional configuration differences (database and email configuration, paths to uploaded files, ...). The Debian package does that, but it's (in my eyes) rather brittle and leads to a rather non-standard system.
But to stress again: even if you share the same code on the disk between instances, you can't share the running worker processes.
Running multiple instances from the same codebase is not officially supported by Redmine. However, the Debian/Ubuntu packages seem to support such an approach... See:
Multiple instances of redmine on Debian squeeze
So, generally:
If you use Debian/Ubuntu go with option #2
Otherwise go with #1
Rolling forward a couple of years, you might now want to consider a third option: using Docker containers for each of your redmine instances.
I've been using https://github.com/sameersbn/docker-redmine.git, and have been quite happy with it, except that it doesn't yet support handling of incoming mail for creating and commenting on tickets.
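If it helps, spinning up one container per group with the Docker SDK for Python looks roughly like the sketch below. The port numbers, container names, and DB_* environment variables are illustrative; check the image's README for the exact variables it expects.

```
# Sketch: one redmine container per group, each on its own port and database.
# Ports, names, and DB_* variables are illustrative; consult the image's README.
import docker

client = docker.from_env()

def start_redmine(group: str, port: int):
    return client.containers.run(
        "sameersbn/redmine",
        name=f"redmine-{group}",
        detach=True,
        ports={"80/tcp": port},
        environment={
            "DB_HOST": "postgres.internal",   # shared database server (assumption)
            "DB_NAME": f"redmine_{group}",    # one database per group
            "DB_USER": "redmine",
            "DB_PASS": "secret",
        },
    )

# Each instance can be created or stopped without touching the others:
start_redmine("group_a", 10081)
start_redmine("group_b", 10082)
```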
I have an ASP.NET MVC 3 application, WouldBeBetter.com, currently hosted on Windows Azure. I have an Introductory Special subscription package that was free for several months, but I was surprised at how expensive it has turned out to be (€150 p/m on average!) now that I have started paying for it. That is just way too much money for a site that is not going to generate money any time soon, so I've decided to move to a regular hosting provider (DiscountASP.Net).
One of the things I'll truly miss though, is the separated Staging and Production environments Azure provides, along with the zero-downtime environment swap.
My question is, how could I go about "simulating" a staging environment while hosting on a traditional provider? And what is my best shot at minimizing downtime on new deployments?
Thanks.
UPDATE: I chose the answer I chose not because I consider it the best method, but because it is what makes the most sense for me at this point.
Before abandoning Windows Azure, there are several cost-saving things you can do to lower your monthly bill. For instance:
If you have both a Web role and a Worker role, merge the two: take your background processing, queue processing, etc. and run them in your Web role (do your time-consuming startup in OnStart(), then just add a Run() override to call queue processing, etc.).
Consider the new Extra Small instance, which costs just under half of a Small instance
Delete your Staging deployment after you're confident your production code is running ok. Keep the cspkg handy though, in blob storage, so that you could always re-deploy it.
I use DiscountASP myself. It's pretty basic hosting for sure, a little behind the times. But I have found just creating a subdirectory and publishing my beta/test/whatever versions there works pretty well. It's not fancy or pretty, but does get the job done.
In order to do this you need to create the subdirectory first, then go into the control panel and tell DASP that the directory is an application. You also have to consider that the directory's web.config is going to be a combination of its own and the parent one. And you have to consider robots.txt for this subdirectory, and protecting it in general from nosy people.
You could probably pull this off with subdomains too, depending on how your domain is set up.
Another option: AppHarbor? They have a free plan. If you can stay within its confines, it might work well (I've never used them, though I'm currently interested in trying them).
1) Get an automated deployment tool. There are plenty of free/open-source ones that million/billion dollar companies actually use for their production environments.
2) Get a second hosting package identical to the first. Use it as your staging, then just redeploy to production when staging passes.