When reading about Docker, I learned that IT companies used to run a single app per server, and since they didn't know how large the application would scale, they bought over-provisioned servers that ran at a fraction of their potential capacity, wasting resources and money.
But I wonder: what stopped them from running multiple applications on the same server? Why can't a server run multiple applications, time-sharing its resources just like our personal computers do?
With Docker, you typically build smaller and more specific services. Containerizing them makes them easier to scale, and you are to some extent guided into designing systems differently to accommodate these properties.
It is not uncommon to compose several Docker images and containers to replace the function of a single old-school service.
The services of yesteryear were big, with a lot of moving parts and resource-hungry dependencies, and you had to scale your hardware to the peak resource draw of your system. If there was one day of the month when you had to process millions of requests, even though you only served a couple of hundred on the remaining days, you still had to run the service on hardware capable of serving at that peak level at any given time. Running several services on the same node would put you at risk of an outage during those important peak periods, which is one reason single-responsibility servers were employed.
This is only one of the reasons; others that added to the benefit of running one app per node are maintenance, deployability, and development. For instance, if you are running multiple applications on one node and you have to take that node down for maintenance, your entire system is down and you have lost the ability to handle that situation gracefully. When it comes to deployment, configuring multiple applications on one machine can get pretty complex: you risk disrupting the entire system if something goes wrong, and it is hard to automate, because you have to make sure you are not corrupting the state of another application running on the same machine. Remember DLL hell on Windows? It was not uncommon for two applications using the same component at different versions to disrupt each other at runtime because of incompatibilities between the two versions. A single application per box mitigates that risk. And if you are developing with multiple teams, they can work independently of each other if each team has its own infrastructure.
Then something new came along and changed how we spread our applications across the network. Virtualization made it easier to utilize the hardware more efficiently: instead of spreading our applications across multiple physical servers, we could model the same structure using virtual machines running on the same machine. These machines would be true beasts compared to the previous servers, but the combined cost would be significantly less, not to mention that the maintenance requirements would be significantly lower.
With time, this approach grew into containers, where you not only spread your applications across the same piece of hardware, you spread the tasks within each application as well. Combined with using the bare minimum of runtime environment resources (you bring along only the portion of the OS you need to function), this lets you build many super lightweight, purpose-built, tiny applications. The separation of different applications onto different machines drove this innovation towards single-responsibility functions that can be run, scaled, destroyed, developed, maintained, and life-cycled independently of each other.
I have a monolithic application based on .NET; the application itself is a web-based app.
I am looking at multiple articles and trying to figure out whether Azure Container Instances (ACI) or a similar service would be the correct one to use.
The application will run 24/7, and I guess this is where the confusion comes in: would it be normal to have an always-on application running on ACI?
What I am trying to achieve is a container per customer, where each customer gets one or more instances that they own. The other question is cost and scalability: I would expect to have thousands of containers, so perhaps I should be looking at Kubernetes?
Thanks.
Here is my understanding. I'm pretty new to both ACI and Kubernetes, so treat this as a suggestion and not a definitive answer 🙂.
Azure Container Instances is a quick, easy and cheap way to run a single instance of a container in Azure. However, it doesn't scale very well on its own (it can scale up, but not out, and not automatically), and it lacks the many container-orchestration features that Kubernetes offers.
Kubernetes offers a lot more, such as zero-downtime deployments, scaling out with multiple replicas, and many more features. It is also a lot more complex, costs more, and takes much longer to set up.
I think ACI is a bit too simple to meet your use case.
We are writing our first microservices using Docker containers on Amazon Fargate, and we have several doubts at the implementation level using Spring Boot.
We will have multiple microservices in the project. Is it good practice to put all the microservices in a single container, or should we create a separate Docker container for each microservice? A single container is the more cost-effective option, but will it cause problems for our project structure in the future?
We are planning to deploy the application on AWS Fargate, and the application has plenty of room to grow; we expect around 100 to 150 different microservices. At that scale, is it still cost-effective to deploy all these microservices in separate containers?
The most important thing to remember with microservices is that they're not primarily about solving technical problems but organisational problems. So when we look at whether an organisation should be using microservices, and how those services are deployed, we need to look at whether the org has the problems that the microservices style solves.
The answer to your question about your architecture, then, will mostly depend on the size of your technology team, the organisational structure, the age of your product, your current deployment practices, and how those are likely to change over the medium term.
As an example, if your organisation:
has less than 25 tech staff,
organised into 1 or 2 teams,
each of which works on any part of the product,
which is less than 12 months old,
and is deployed all at once on a regular basis (e.g. daily, weekly, monthly),
and the org isn't about to grow rapidly,
then you almost definitely want to forget about microservices for now. In a situation like this, the team is still learning about the domain, so they likely don't yet know enough to see what a great way to split the system into a distributed architecture would be. That means if they split it up now, they'll probably want to change the boundaries later, and that becomes very expensive once you already have a distributed system, while being far simpler in a monolith. What's more, with only a small team who can all work on (and support) any part of the system, there's little reason to invest in building a platform where individual teams can deploy and maintain individual services. An organisation at this stage will typically be far more concerned with finding customers and iterating the product quickly, perhaps even pivoting it, than with making teams autonomous and building a highly scalable, resilient architecture. A monolithic architecture makes sense at this point, but make it a well-designed monolith, with clear component boundaries enforced by APIs and encapsulated data access, so that it is easy to pull services out into separate processes later.
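To make "clear component boundaries enforced by APIs and encapsulated data access" a little more concrete, here is a tiny sketch, assuming Python purely for illustration (the billing component, its functions and its in-memory store are all hypothetical); the point is that the rest of the monolith only ever calls the component's public functions, never its storage.

```python
# billing.py - a hypothetical "billing" component inside a well-structured monolith.
# Other components import only the public functions below; the storage details
# (_INVOICES here, real tables in practice) stay encapsulated behind them.
from dataclasses import dataclass
from typing import Dict


@dataclass
class Invoice:
    invoice_id: int
    customer_id: int
    amount_cents: int


_INVOICES: Dict[int, Invoice] = {}  # private: stands in for billing's own tables


def create_invoice(customer_id: int, amount_cents: int) -> Invoice:
    """Public API: the only way other components create invoices."""
    invoice = Invoice(len(_INVOICES) + 1, customer_id, amount_cents)
    _INVOICES[invoice.invoice_id] = invoice
    return invoice


def get_invoice(invoice_id: int) -> Invoice:
    """Public API: lookups go through here, never through _INVOICES directly."""
    return _INVOICES[invoice_id]
```

Because nothing outside the component touches its data, promoting it to a separate service later is mostly a matter of putting a network boundary where the function calls are.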
Let's look a little further on and consider an organisation that is...
over 50 tech staff,
organised into 7 teams,
each of which works only on specific areas of the product,
which is 3 years old,
and has teams wanting to deploy their work independently of what other teams are doing.
Such an organisation should definitely be building a distributed architecture. If they don't, and instead have all these teams working in a monolith, they will run into all kinds of organisational problems, with teams needing to coordinate their work, releases being delayed while one team finishes QA on their new feature, and patch deploys being a big hassle for staff and customers. What's more, with a mature product, the organisation should know enough about the domain to be able to sensibly split both the domain and the teams (in that order; see Conway's Law) into sensible, autonomous units that can make progress while minimising coordination.
You seem to have chosen microservices already. Depending on where you sit on the scales above, maybe you want to revisit that decision.
If you want to keep developing with microservices but deploying them all in one container, know that there's nothing wrong with that if it suits the way your organisation works at the moment. Will it cause problems for your project structure in the future? Well, if you're successful and your organisation grows, there will probably come a time when this single-container deployment is no longer the best fit, in particular when teams start owning services and want to deploy just their service without deploying the whole application. But that autonomy comes at the cost of extra work and complexity, and it may give you no benefit at this point in time. Just because it won't be the right approach for your system in the future doesn't mean that it isn't the right approach for today. The trick is keeping an eye on it and knowing when to make the extra investment.
There is no problem with using a single container for your microservices, but the main goal of microservices is to maintain each service separately: each service should be loosely coupled, and each service should have its own database (if you want to achieve a database-per-service architecture).
So try to achieve this: run each service in its own container and orchestrate those services with Docker Swarm or Kubernetes.
I know cost matters, but if you do it the right way you will see the power of the microservices architecture.
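For a rough idea of what "one container per service, orchestrated by Kubernetes" can look like, here is a sketch using the official Kubernetes Python client purely for illustration; in practice you would more likely write YAML manifests and `kubectl apply` them, and the image and service names below are made up.

```python
# Rough sketch: each microservice gets its own image and its own Deployment,
# so it can be scaled and redeployed independently of the others.
from kubernetes import client, config


def deploy(name: str, image: str, replicas: int = 2) -> None:
    config.load_kube_config()  # use the local kubeconfig, like kubectl does
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name=name, image=image)]
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)


# Hypothetical services, each in its own container image, scaled independently.
deploy("order-service", "registry.example.com/order-service:1.0")
deploy("payment-service", "registry.example.com/payment-service:1.0", replicas=3)
```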
My startup is building an online/mobile labor marketplace where there will be a business interface for businesses posting jobs, and we distribute these jobs through a mobile interface to users. We use Rails, REST, Amazon RDS & EC2, and MySQL.
My question is: from an architecture standpoint on the server side, does it make more sense to:
a) have two applications, one serving the web interface and one acting as the server side (API) for the mobile interface, both communicating via the DB and running on two different EC2 instances, or
b) try to build one comprehensive application serving both interfaces?
Any opinions and perspectives on the pros and cons would be much appreciated.
Thanks
Amazon Web Services gives you a few tools to make your architecture simpler, more robust and more scalable, if you break your system down into smaller, simpler and independent components.
Since you have a range of instance sizes, from micro to extra large (and a few sizes of extra large), you have the flexibility to match each service to the appropriate size, configuration, software dependencies, update cycle and so on. It will make the lives of the developers, testers and administrators much easier.
You also have the ability to scale, and even auto-scale, each layer and service independently. If you have more users on one interface or the other, or an increase in the amount of data fed through one of them, you can scale only the relevant service. That saves you the complexity and cost of scaling the full system as a whole.
Another characteristic of AWS is the ability to scale up, down, out and in based on your needs. For example, if one of the interfaces has more users on working weekdays and fewer on weekends, you can scale that interface down for the weekend or overnight, saving perhaps 50% of your compute costs for those instances. Along the same lines, you can switch from the on-demand model to reserved instances for the more static interfaces, again saving around 50% of your costs.
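A scheduled scale-down of this kind can be expressed as scheduled actions on an Auto Scaling group. Here is a rough boto3 sketch, purely for illustration; the group name, action names and times are made up, and credentials/IAM setup are omitted.

```python
# Rough sketch: shrink one interface's fleet for the weekend, restore it on Monday.
# Assumes boto3 is installed, AWS credentials are configured, and an Auto Scaling
# group named "mobile-api" (hypothetical) already exists.
import boto3

autoscaling = boto3.client("autoscaling")

# Friday 20:00 UTC: scale down for the weekend.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="mobile-api",
    ScheduledActionName="weekend-scale-down",
    Recurrence="0 20 * * 5",
    MinSize=1, MaxSize=2, DesiredCapacity=1,
)

# Monday 06:00 UTC: restore weekday capacity.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="mobile-api",
    ScheduledActionName="weekday-scale-up",
    Recurrence="0 6 * * 1",
    MinSize=2, MaxSize=8, DesiredCapacity=4,
)
```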
To allow communication between the different interfaces you can use your DB, but you also have a few more options, depending on your use case. One option is to use a queueing system such as SQS. The queue acts as a buffer between the interfaces, reducing the risk that a failure in one component (software bug, hardware failure...) affects the whole system. Another option for inter-interface communication, more tuned to performance, is an in-memory cache; AWS offers ElastiCache as such a service. It can be more efficient for transient data that only lives for a short period of time but is accessed under high load.
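As a small illustration of the queue-as-buffer idea, here is a minimal boto3 sketch; the queue name and message fields are made up, and error handling and IAM setup are left out.

```python
# Minimal sketch: one interface publishes work to SQS, the other consumes it,
# so a failure on either side does not take the whole system down.
# Assumes boto3 is installed, credentials configured, and a queue named
# "job-events" (hypothetical) already exists.
import json
import boto3

queue = boto3.resource("sqs").get_queue_by_name(QueueName="job-events")

# Producer side (e.g. the business/web interface posting a new job).
queue.send_message(MessageBody=json.dumps({"job_id": 123, "action": "created"}))

# Consumer side (e.g. a worker behind the mobile API), using long polling.
for message in queue.receive_messages(WaitTimeSeconds=10, MaxNumberOfMessages=5):
    event = json.loads(message.body)
    print("processing", event)
    message.delete()  # acknowledge so the message is not redelivered
```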
I see no long-term cons, other than giving up the quick (and dirty) implementation of a single service on a single machine.
The overall goal should be to have as much code as possible shared between the mobile and web applications. Otherwise you are looking at maintenance headaches, where you end up fixing bugs or adding features in at least two places.
Ideally everything below the front end should be common. The web UI should itself be calling the same server-side APIs that you are going to need for the mobile application, and as much logic as possible should live in these APIs, leaving only presentation details to the UI. This is as prescribed by many patterns, such as MVC.
I run a mobile site and a desktop site myself. The codebase is exactly the same at the PHP level; only the Smarty templates differ between the two. Granted, a mobile application is different from a mobile site, but the basic principles still apply.
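As a small illustration of "the web UI calls the same server-side APIs as the mobile app", here is a sketch assuming Python and Flask purely for illustration (the poster's stack is Rails/PHP, and the endpoint and data are made up); both the rendered page and the mobile client go through the same underlying logic.

```python
# Tiny sketch: one piece of business logic, exposed once as a JSON API for the
# mobile app and reused by the web UI, so fixes and features land in one place.
from flask import Flask, jsonify, render_template_string

app = Flask(__name__)


def list_jobs():
    """Shared logic; in a real app this would query the database."""
    return [{"id": 1, "title": "Warehouse shift"}, {"id": 2, "title": "Barista"}]


@app.route("/api/jobs")
def jobs_api():
    # The mobile application talks to this endpoint directly.
    return jsonify(list_jobs())


@app.route("/jobs")
def jobs_page():
    # The web UI is presentation only: it reuses the same logic (or could call
    # /api/jobs from the browser) rather than duplicating it.
    template = "<ul>{% for j in jobs %}<li>{{ j.title }}</li>{% endfor %}</ul>"
    return render_template_string(template, jobs=list_jobs())


if __name__ == "__main__":
    app.run()
```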
I am looking at Erlang for a future version of a distributed soft-real-time hosted web-based telephony app (i.e. Erlang looks like absolutely the perfect choice for this kind of app). I come from a .NET background and the current version of this app uses a combination of C#, WCF and JQuery to deliver the service. I now need Erlang to allow me to add extra 9s to my up-time and to allow me to get more bang for my server bucks.
Previously I'd set up a development process here combining VS.NET, Git, TeamCity and auto-deployment of MSI files to the various environments we maintain. It's not perfect, but we're all pretty comfortable with it now. I'm wondering whether a process like ours is even appropriate for such a radically different technology stack (LYME)?
I'm confident that all of the programming challenges we previously solved using .NET can be better solved in less code with Erlang, so I'm completely sold on the language choice. What I don't yet understand from reading the Pragmatic and O'Reilly books on Erlang, is how I should adapt my software engineering and application life-cycle management (ALM) processes to suit the new platform. I see that in-place code updates could make my (and my testing and ops team's) life much easier (compared to the god-awful misery of trying to deploy MSI files across a windows network) but I am not sure how things should change when I use Erlang.
How would you:
do continuous integration in Erlang (is it commonly used?)
use it during a QA cycle (we often run concurrent topic branches in Git that get their own mini QA cycle, so they all get deployed into a test environment)
build and distribute your code to DEV, TEST, UAT, STAGING, and PROD environments
integrate code generation phases into your build cycle (we currently use MSBUILD + T4 templates)
centralize logging for a bunch of different servers (we currently use Log4Net, MSMQ, etc)
do alerting with tools like SCOM
determine whether someone/something has misconfigured your production servers
allow production hot-fixes only after adequate QA (only by authorized personnel)
profile the performance (computation and communication) of your apps
interact with windows-based active directory servers
I guess I need to know what worked for you and why! What tools and frameworks did you use? What did you try that failed? What would you do differently if you could start over, knowing what you know now?
Whoa, what a long post. First, you should be aware that the 99.9%-and-better kool-aid is a bit dangerous to drink blind. Yes, you can get some astounding stability figures, but you need to write your program in a way that facilitates this. It does not come for free, and it does not happen by magic either: your application must be designed in such a way that its subsystems can recover. OTP will help you a lot here, but it still takes time to learn.
Continuous integration: easily done. If you can call rebar or make from your build bot, you are probably already set. Look into eunit, cover and Erlang QuickCheck (the mini variant is free for starters); all of them can be run from rebar.
QA cycle: I have not had any problems here. Again, if you are using rebar you can build embedded releases, which are minimized Erlang VMs you can copy anywhere and run (they are self-contained). You can even hot-deploy fixes to such a system pretty easily by altering the code path a bit so that you have an overlay of newer fixes. Your options are numerous, and Git already helps you a lot here.
Environments (DEV, TEST, UAT and so on): easily done.
Logging centralization: look into SASL and the error_logger. You can do anything you want here.
Alerting: the system can be probed for everything you need (introspection is strong in Erlang), but you might have to write a bit of code to hook it up to the monitoring system of your choice.
Misconfiguration: Configuration files are Erlang terms. If it can be computed, it can be done.
Security: Limit who has access. It is a people problem, not a technical one in my opinion.
Profiling: cprof, cover, eprof, fprof, instrument + a couple of distributed systems for doing the same. Random sampling is also easy (introspection is strong in Erlang).
Windows interaction: Dunno. (Bias: last time I used windows professionally was in 1998 or so).
Some personal observations:
Your largest problem might end up being that you try to cram Erlang into your existing process and it resists. It is a new environment, so new approaches will be needed in places, and you should expect to adapt and work around the limitations you find along the way. The general consensus is that it can work (it is working for several big sites).
It looks like you have a well-established and strict process. How much is that process allowed to be sacrificed to give way to a new kind of thinking?
Are your programmers willing to throw out almost all of their OO knowledge? If not, you will end up with a social problem rather than a technical one. If they are like me, however, they will cheer, clap their hands and get a constant high from working with an interesting language, solving an interesting problem in a new way.
How many Erlang-experienced programmers do you have? If you have rather few, then it is better to cut your teeth on some smaller subsystems first and work towards the larger goal from there. Getting the full benefit of the system takes months, if not years; getting partial benefit can be had in weeks, though.
FastCGI is old but it still seems like it must be the right answer in some cases.
It seems like the preferred deployment of Perl/Catalyst web applications is with FastCGI.
FastCGI was popular with Rails but seems to no longer be. (Why?)
The Java world doesn't seem to have anything to do with FastCGI. Is something like Tomcat way better than Apache+FastCGI?
Is choosing FastCGI still a good idea or just a lingering technology?
Ted
Since it depends a lot on your setup and requirements, I'll leave the "Is X still a right answer?" part up to you. However, by looking at the different architectures, you can come up with a list of questions to ask in order to determine whether it is still a right answer given your specific circumstances.
Concerns of frequent interest
The questions you'll want to ask are usually related to security and flexibility. For security, you'll want to follow the principle of least privilege. For flexibility, you'll want to know if you can run multiple frameworks, multiple versions of the framework and how easily you can delegate work to other tasks.
Other concerns
For a simple web front-end to a database-backed application, not all of these questions are important. You also need to keep in mind that some of the usual recommendations have nothing to do with what's outlined here. Many web frameworks recommend whatever architecture is easiest to set up with that framework, because it helps new users try the framework with minimal fuss and without flooding the mailing list. Also, the Java community tends to stick to a common denominator rather than take full advantage of the platform at hand, so it will often recommend an all-Java solution.
Popular architectures
Single process architectures
From a pure performance point of view, a single (probably threaded) process with an embedded framework probably gives the best performance, as it removes most of the communication overhead between whatever receives the request and whatever produces the response.
Security: a single process must have all of the permissions required to perform every single task it is handed. In simple applications this might not be a problem, but it's possible you are serving multiple services from that one process, in which case it ends up holding all of their privileges at once.
Flexibility: you probably can't run multiple versions of the same framework (e.g. code for different parts of your website requiring different versions of Java, Rails, Python, etc.). Moreover, changing your setup so that some of the work is served from different machines becomes painful (less difficult when it is split up into virtual hosts).
Sub-process based architectures
Under the CGI model, you have to pay the price of spawning a new process for each request. Even on UNIX machines, where spawning a process is considered cheap, 600 requests a second will kill your server if you spawn a process for each one.
Security: to spawn child processes under different user accounts, your gateway probably has to run with quite high privileges.
Flexibility: additional flexibility for the multiple-frameworks, multiple-versions, multiple-languages approach, but you're still stuck on the same machine.
Distributed architectures
The FastCGI/SCGI approach tries to solve the CGI process-management problem in a clean way: just keep the process alive and have the gateway talk to that long-lived process to serve each request.
Security: because the gateway doesn't spawn the processes that serve requests, it can run with far fewer privileges enabled. Actually, if it only acts as a gateway and doesn't do any work itself, it can run with hardly any privileges at all.
Flexibility: you get even better flexibility than the CGI model because you can forward the request to any machine on the network.
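To make the model concrete, here is a minimal sketch of a FastCGI-backed application, assuming Python with the third-party flup package purely for illustration (any language with a FastCGI binding follows the same pattern); the gateway, for example nginx or Apache, forwards requests to the socket this long-lived process listens on, and that socket can just as well live on another machine.

```python
# Minimal WSGI application served over FastCGI.
# Assumes the third-party "flup" package (pip install flup), used here only
# as an illustration of the long-lived worker model.
from flup.server.fcgi import WSGIServer


def app(environ, start_response):
    # The gateway forwards the request over the FastCGI socket; this process
    # stays alive between requests, unlike a classic CGI script.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a long-lived FastCGI worker\n"]


if __name__ == "__main__":
    # Listening on TCP (rather than a UNIX socket) lets the gateway sit on a
    # different machine entirely.
    WSGIServer(app, bindAddress=("127.0.0.1", 9000)).run()
```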
Conclusion
I like FastCGI because it gives me high flexibility at a price (i.e. requests forwarded through a socket) I can afford to pay. It's not my full-time job to administer systems, and I don't develop all the apps I host, so I look for the easiest solution for hosting whatever I am trying to host. FastCGI is popular enough to be supported by major web servers and popular web frameworks. Adding another app usually just boils down to installing it and mapping the desired URL to the application over FastCGI.