Microservices in Docker implementation

We are writing our first microservices in Docker containers on Amazon Fargate, and we have several implementation-level questions about doing this with Spring Boot.
The project will have multiple microservices. Is it good practice to put all of the microservices in a single container, or should we create a separate Docker container for each microservice? Using a single container looks more cost effective, but will that cause problems for our project structure in the future?
We plan to deploy the application on AWS Fargate, and the application is expected to grow considerably, to somewhere around 100 to 150 different microservices. At that scale, is it still cost effective to run all of these microservices in separate containers?

The most important thing to remember with microservices is that they're not primarily about solving technical problems but organisational problems. So when we look at whether an organisation should be using microservices, and how those services are deployed, we need to look at whether the org has the problems that the microservices style solves.
The answer to your question about your architecture, then, will mostly depend on the size of your technology team, the organisational structure, the age of your product, your current deployment practices, and how those are likely to change over the medium term.
As an example, if your organisation:
has fewer than 25 tech staff,
organised into 1 or 2 teams,
each of which works on any part of the product,
which is less than 12 months old,
and is deployed all at once on a regular basis (e.g. daily, weekly, monthly),
and the org isn't about to grow rapidly,
then you almost certainly want to forget about microservices for now. In a situation like this, the team is still learning about the domain, so they likely don't know enough yet to understand what a great way to split the system into a distributed architecture would be. That means if they split it up now, they'll probably want to change the boundaries later, which is very expensive once you already have a distributed system but far simpler in a monolith. What's more, with only a small team who can all work on (and support) any part of the system, there's little reason to invest in building a platform where individual teams can deploy and maintain individual services. An organisation at this stage is typically far more concerned with finding customers and iterating the product quickly, perhaps even pivoting, than with making teams autonomous and building a highly scalable, resilient architecture. A monolithic architecture makes sense at this point, but it should be a well-designed monolith, with clear component boundaries enforced by APIs and encapsulated data access, making it easy to pull services out into separate processes later.
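To make "clear component boundaries enforced by APIs" concrete, here is a minimal Java sketch of such a monolith module. The `billing` component, `BillingService`, and `InvoiceRepository` names are illustrative assumptions, not something from the question:

```java
// File: com/example/app/billing/api/BillingService.java
// The only type that other components are allowed to depend on.
package com.example.app.billing.api;

public interface BillingService {
    String charge(String customerId, long amountCents); // returns an invoice reference
}

// File: com/example/app/billing/internal/DefaultBillingService.java
// Implementation and persistence stay package-private, hidden from the rest of the monolith.
package com.example.app.billing.internal;

import com.example.app.billing.api.BillingService;

class DefaultBillingService implements BillingService {

    private final InvoiceRepository invoices; // only the billing component touches billing tables

    DefaultBillingService(InvoiceRepository invoices) {
        this.invoices = invoices;
    }

    @Override
    public String charge(String customerId, long amountCents) {
        return invoices.saveInvoice(customerId, amountCents);
    }
}

// Package-private data access for this component only.
interface InvoiceRepository {
    String saveInvoice(String customerId, long amountCents);
}
```

If billing later needs to become its own service, callers keep depending on the `api` interface (now backed by an HTTP client) while the `internal` package and its tables move out with it; nothing else in the monolith has to change.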
Let's look a little further on and consider an organisation that is...
over 50 tech staff,
organised into 7 teams,
each of which works only on specific areas of the product,
which is 3 years old,
and has teams wanting to deploy their work independently of what other teams are doing.
Such an organisation should definitely be building a distributed architecture. If they don't, and have all these teams working in a monolith instead, they will run into all kinds of organisational problems: teams needing to coordinate their work, releases being delayed while one team finishes QA on its new feature, and patch deploys being a big hassle for staff and customers. What's more, with a mature product, the organisation should know enough about the domain to sensibly split both the domain and the teams (in that order; see Conway's Law) into autonomous units that can make progress while minimising coordination.
You seem to have chosen microservices already. Depending on where you sit on the scales above, maybe you want to revisit that decision.
If you want to keep developing microservices but deploy them all in one container, know that there's nothing wrong with that if it suits the way your organisation works at the moment. Will it cause problems for your project structure in the future? Well, if you're successful and your organisation grows, there will probably come a time when this single-container deployment is no longer the best fit, in particular when teams start owning services and want to deploy just their service without deploying the whole application. But that autonomy comes at the cost of extra work and complexity, and it may give you no benefit at this point in time. Just because it won't be the right approach for your system in the future doesn't mean it isn't the right approach for today. The trick is to keep an eye on it and know when to make the extra investment.

There is no problem with using a single container for your microservices, but the main goal of microservices is to maintain each service separately: each service should be loosely coupled, and each service should have its own database (if you want a database-per-service architecture).
To get there, run each service in its own container and orchestrate those containers with Docker Swarm or Kubernetes.
I know cost matters, but if you do it the right way you will see the real power of a microservices architecture.

Related

What stops us from running multiple applications on the same server?

When reading about Docker, I learned that IT companies used to run a single app per server, and since they didn't know how much the application would need to scale, they bought over-provisioned servers that ran at a fraction of their potential capacity, wasting resources and money.
But I wonder: what stopped them from running multiple applications on the same server? Why can't a server run multiple applications, time-sharing its resources just like our personal computers do?
With Docker, you typically build smaller and more specific services. Containerizing them makes them easier to scale, and you are sort of guided into designing systems differently to accommodate these properties.
It is not uncommon to compose several Docker images and containers to replace the function of a single old-school service.
The services of yesteryear were big, with a lot of moving parts and resource-hungry dependencies, and you had to scale your hardware to the peak resource draw of your system. If there was one day of the month when you had to process millions of requests, even though you only served a couple of hundred on the remaining days, you still had to run the service on hardware capable of handling that peak load at all times. Running several services on the same node would put you at risk of an outage during those important peak periods, which is one reason single-responsibility servers were employed.
This is only one of the reasons. Other reasons for running one app per node are maintenance, deployability, and development. For instance, if you are running multiple applications on one node and you have to take that node down for maintenance, your entire system is down, and you have lost the ability to handle that situation gracefully. When it comes to deployment, configuring multiple applications on one machine can get pretty complex; you risk disrupting the entire system if something goes wrong, and it is hard to automate, because you have to make sure you are not corrupting the state of another application running on the same machine. Remember DLL hell on Windows? It was not uncommon for two applications using different versions of the same component to disrupt each other at runtime because of incompatibilities between the two versions. A single application per box mitigates that risk. And if you are developing with multiple teams, the teams can work independently of each other if they have separate infrastructure.
Now, something new came along and changed how we spread our applications across the network. Virtualization made it easier to utilize the hardware more efficiently. Instead of spreading our applications across multiple physical servers, we would model the same structure using virtual machines running on the same machine. These machines would be true beasts compared to the previous servers, but the combined cost would be significantly less, not to mention the maintenance requirements would be significantly lower.
Now, with time this approach grew into containers, where you not only spread your applications across the same piece of hardware, you split the tasks within each application as well. Combined with using the bare minimum of runtime resources, bringing along only the portion of the OS you need in order to function, you can build many super-lightweight, purpose-built, tiny applications. The separation of different applications onto different machines drove this evolution towards single-responsibility functions that can be run, scaled, destroyed, developed, maintained, and life-cycled independently of each other.

Azure ACI container per customer

I have a monolithic application based on .NET; the application itself is a web-based app.
I am looking at multiple articles and trying to figure out whether Azure ACI (Container Instances) or something similar would be the correct service to use.
The application will run 24/7, and I guess this is where the confusion comes in: would it be normal to have an always-on application running on ACI?
What I am trying to achieve is a container per customer, where each customer gets one or more instances that they own. The other question is cost and scalability; I would expect to have thousands of containers, so perhaps I should be looking at Kubernetes?
Thanks.
Here is my understanding. I'm pretty new to both ACI and Kubernetes, so treat this as a suggestion and not a definitive answer 🙂.
Azure Container Instances is a quick, easy, and cheap way to run a single instance of a container in Azure. However, it doesn't scale very well on its own (it can scale up, but not out, and not automatically), and it lacks many of the container-orchestration features that Kubernetes offers.
Kubernetes offers a lot more, such as zero-downtime deployments, scaling out with multiple replicas, and many other features. It is also a lot more complex, costs more, and takes much longer to set up.
I think ACI is a bit too simple to meet your use-case.

AWS VPC through a higher level PaaS?

I have been using Heroku up to now, but need to offer something more "enterprise-y" to an organisation that would otherwise insist on local server deployment.
AWS VPC sounds like the ideal infrastructure to use. I'll have Rails + Postgres + AWS S3.
Maybe a silly question, but is there an automated PaaS that runs on top of AWS? Such that we own the AWS account but can outsource the server admin side? Or is this just a case of having to hire a consultant and deal with upgrades etc. as they arise?
is there an automated PaaS that runs on top of AWS?
Yes, there are LOTS of them. They range in spectrum from "a thin layer on Elastic Beanstalk" to wanna-be Heroku clones. (In my experience, most don't think about HA enough, so they work great but have plenty of single points of failure that can bite you down the road. But maybe you don't need that many nines of availability at first. Anyone remember Twitter in the early days?)
There are plenty of points in between, such as EC2 Container Service, OpsWorks, Flynn, CoreOS, etc., and even Empire, which I haven't looked at yet.
Every PaaS is a "Straitjacket" that reduces your "freedom to do anything" in exchange for "simplifying some things". That trade-off is not a constant, nor universal.
A PaaS that works for one company may not work for another, or even for the same company six months later. I frequently see companies use Heroku or Google App Engine for months or years, then suddenly hit a wall and need to move off of it. Afterwards, they wonder why they put up with the limitations for so long.
we own the AWS account but can outsource the server admin side?
Yes, you can easily pay someone to set up a "PaaS" for you. The problem is that they need a lot of knowledge to know which one will work best. (Few people have tried them all and can talk intelligently about the trade-offs between them, and new ones get written practically every day.)
The one universal thing: Companies rarely outgrow AWS. (Remember Zynga? They thought they were "spending too much money" on AWS, so they built their own datacenter. It supposedly "saved them money" until their fortunes turned around. Then that datacenter became a boat anchor that pulled them under. Ooops.)
Or is this just a case of having to hire a consultant and deal with upgrades etc. as they arise?
You need to think deeply about what it is that you want.
You could stay high level and run on a third-party PaaS. There is no shortage of them (Engine Yard, Heroku, etc.). You trade a bit of extra money now (and it's really a few bucks per month) for speed. But you pay a cost later when you have to move off. That's OK, as long as it bought you some time to prove your business model, get customers, etc.
You could hire a consultant to build you a PaaS on AWS. (Why? Is it merely to save money? To gain flexibility? How will you know they are competent? How do you know they aren't a one-trick pony building the same solution they built for everybody else, without knowing where it will work and not work?)
You could bite the bullet and hire (or grow) a cloud guy. They learn AWS and iterate. This will give you the most flexibility in the future, and you will be prepared to take advantage of the cost savings offered by tuning your application to AWS. (A generic PaaS doesn't encourage you to do this, which means you will lose out on performance or cost savings.)
Long term, I think only the last one makes sense. The 3rd party PaaS is also a fine way to gain speed in the short-term. I'm not so keen on a consultant, unless it's to help accelerate knowledge transfer to your company.

Correct approach for deploying a content management Web Application for different accounts on Azure platform

I develop a commercial ASP.NET MVC web application. The application is standard: it runs on the IIS web server and uses a SQL Server database. Our business model is such that we deploy our application on-site in our customers' intranet or data center. That is, for each such customer (account) we supply the complete setup, usually installed on a dedicated standalone server. Each such account has its own private content, users, configuration, and so on.
We wish to expand and offer our service on the WWW (public Internet). After some research, I've chosen Microsoft's Azure cloud platform to host our application. With some minor effort (mainly teaching the application to work with Azure's File Storage using blobs) I've managed to fully deploy to the cloud, using three cloud services: Web Site, Database, and File Storage.
Please note that I employ the same code base for the two deployment types (Intranet and Cloud), by using different configurations for Debug, Release – Intranet, Staging – Azure, Production – Azure.
However, the application (as is it written now) can serve only a single customer account, while I need our cloud version to serve numerous accounts (hopefully a lot ;) … each with its own private data set.
Question: which of the following strategies should I employ here?
Change the application so it will support multiple accounts. This means changes both in the data model (adding an Account entity in the data layer, bind it to all content types, etc.) and in the business logic.
Create for each account its own site on the cloud (web site + database + file storage services). This means deploying the same application several times to different Azure services.
It's obvious that the amount of development needed for the first approach is very large, as is the risk to the stability of the system, while the second approach requires much less effort.
However, I'm not clear about how to manage a set of many identical services (applications), each serving a different customer account. I've started looking at some tools to help me here (e.g. Red Gate) and would love to hear of more.
Another question is the cost: is such a solution, using many cloud services instead of only a few, more costly than the more standard "one application for all accounts" approach?
Thanks,
I would recommend (1). It is a higher short-term cost in terms of development effort, but it will be better long term for two reasons:
Cheaper. The costs will go up quite a bit by adding more cloud services. I suppose you could pass this cost on to your client though?
It does become harder to manage releases across many clients.
I would say you can either spend your time refactoring existing code you know, or you can learn how to do DevOps against Azure to manage releases. It's probably easier to refactor what you know than to learn something new.
As a note, there are many great SDKs to automate deployments, scale cloud services, etc.
Found some good reading about this issue:
MSDN: Developing Multi-tenant Applications for the Cloud, 3rd Edition
Provides good comparison between the two approaches:
Single Instance, Multi-tenant (my 1st option)
Multi Instance, Single-tenant (my 2nd option)
As it looks, I will go with the multi-tenant approach. All things considered, it looks like it will require less development effort and less maintenance effort. Also, my expertise lies in application development and less in system administration (which would be required to truly implement a multi-instance solution).
Other reasons: there's a need to share some of the content between accounts (tenants), and this will be more easily achieved when using a single database. Also, there are future plans for the product which can utilize this solution.
Segregation of data will be done (high level):
File storage (blobs): use a separate container for each tenant (account).
Database: use a tenant-unique key to associate content with its tenant.
Cache: use the tenant-unique key to generate cache keys for cached data items.
Scalability: with one instance it is easier to simply scale up its capabilities, or even move the Web Site to a dedicated virtual machine. In the future we can also evolve the system into a multi-instance, multi-tenant structure, creating separate instances for large tenants.
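To make the tenant-key segregation above concrete, here is a minimal sketch. It is written in Java purely for illustration (the application in question is ASP.NET MVC, but the pattern is the same); `Article`, `tenantId`, and `TenantKeys` are hypothetical names, not from the original design:

```java
import java.util.List;

// Every piece of tenant-owned content carries the tenant's key.
class Article {
    String tenantId;    // tenant-unique key, set when the content is created
    String articleId;
    String body;
}

// All queries filter by tenant, so one tenant can never read another tenant's rows.
interface ArticleStore {
    List<Article> findByTenant(String tenantId);
}

// Cache keys and blob container names are derived from the tenant key,
// so cached items and file storage are segregated per tenant as well.
final class TenantKeys {
    private TenantKeys() {}

    static String cacheKey(String tenantId, String itemKey) {
        return tenantId + ":" + itemKey;            // e.g. "acme:homepage-banner"
    }

    static String blobContainerName(String tenantId) {
        return "tenant-" + tenantId.toLowerCase();  // one blob container per tenant
    }
}
```

The tenant key would be resolved once per request (e.g. from the signed-in user or the hostname) and passed down, so no query, cache lookup, or blob access can accidentally cross tenants.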

System architecture advice for a dual interface online/mobile marketplace

My startup is building an online/mobile labor marketplace where there will be a business interface for businesses posting jobs, and we distribute these jobs through a mobile interface for users. We use Rails, REST, Amazon RDS & EC2, and MySQL.
My question is: from an architecture standpoint, does it make sense on the server side to:
a) have two applications, one serving the web interface and one acting as the server-side API for the mobile interface, both communicating via the DB and running on two different EC2 instances, or
b) try to build one comprehensive application serving both interfaces?
Any opinions and perspectives on the pros and cons would be much appreciated.
Thanks
Amazon Web Services gives you a few tools to make your architecture simpler, more robust, and more scalable if you break your system down into smaller, simpler, independent components.
Since you have a range of instance sizes from micro to extra large (and a few sizes of extra large), you have the flexibility to match each service type to the appropriate size, configuration, software dependencies, update cycle, etc. This will make life much easier for developers, testers, and administrators.
You also have the ability to scale, and even auto-scale, each layer and service independently. If you have more users on one interface or the other, or an increase in the volume of data fed through one of the interfaces, you can scale only the relevant service. This saves you the complexity and cost of scaling the full system as a whole.
Another characteristic of AWS is the ability to scale up, down, out, and in based on your needs. For example, if one of the interfaces has more users on weekdays and fewer on weekends, you can scale that interface down for the weekend or overnight, saving around 50% of your computation costs for those instances. Along the same lines, you can switch the more static interfaces from the on-demand model to reserved instances, again saving around 50% of your costs.
To allow communication between the different interfaces you can use your DB, but you have a few more options depending on your use case. One option is a queueing system such as SQS: the queue acts as a buffer between the interfaces, reducing the risk that the failure of one component (software bug, hardware failure...) affects the whole system. Another option for inter-interface communication, more tuned for performance, is an in-memory cache; AWS offers ElastiCache as such a service. It can be more efficient than writing such short-lived, transient data to the database under high load.
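As a quick illustration of the queue-as-buffer idea, here is a minimal sketch using the AWS SDK for Java v1 (chosen purely for illustration; in a Rails stack you would use the equivalent Ruby SDK calls). The queue name `job-posted-events` and the JSON payload are hypothetical:

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class JobEventQueue {

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = sqs.createQueue("job-posted-events").getQueueUrl();

        // Producer side (web interface): publish the event and move on.
        sqs.sendMessage(queueUrl, "{\"jobId\": 42, \"region\": \"east\"}");

        // Consumer side (mobile API service): poll, process, then delete.
        ReceiveMessageRequest poll = new ReceiveMessageRequest(queueUrl).withWaitTimeSeconds(10);
        for (Message m : sqs.receiveMessage(poll).getMessages()) {
            System.out.println("Dispatching job to mobile users: " + m.getBody());
            sqs.deleteMessage(queueUrl, m.getReceiptHandle());
        }
    }
}
```

Because the producer only enqueues and the consumer polls at its own pace, a crash or deploy of the mobile API service doesn't lose job events; they simply wait in the queue until the consumer comes back.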
I see no long-term cons, other than giving up the quick (and dirty) simplicity of implementing a single service on a single machine.
The overall goal should be to have as much code as possible shared between the mobile and web applications. Otherwise you are looking at maintenance headaches where you end up fixing bugs or adding features in at least two places.
Ideally everything below the front end should be common. The web UI should itself call the server-side APIs that you are going to need for the mobile application, and most of the logic should live in these APIs, leaving only presentation details to the UI. This is what many patterns, such as MVC, prescribe.
I run a mobile site and a desktop site myself. The codebase is exactly the same at the PHP level; only the Smarty templates differ between the two. Granted, a mobile application is different from a mobile site, but the basic principles still apply.
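To sketch that "one API, two front ends" shape: a single server-side endpoint serves both the mobile app and the web UI. It is shown here as a Spring-style Java controller purely for illustration (the stack in the question is Rails, where the equivalent would be a controller rendering JSON); the `/api/jobs` path, `Job` record, and `JobService` are hypothetical names:

```java
import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// One API, two consumers: the mobile app calls it directly over HTTPS,
// and the web front end calls the same endpoint (e.g. via AJAX) instead of
// duplicating the query logic in its own controllers.
@RestController
public class JobApiController {

    private final JobService jobs; // all business logic lives behind this service

    public JobApiController(JobService jobs) {
        this.jobs = jobs;
    }

    @GetMapping("/api/jobs")
    public List<Job> openJobs() {
        return jobs.findOpenJobs();
    }
}

// Plain data carrier returned as JSON to both clients.
record Job(long id, String title, String region) {}

interface JobService {
    List<Job> findOpenJobs();
}
```

Keeping the presentation layers thin on top of one shared API is what lets bug fixes and new features land once and show up in both interfaces.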
