Azure ACI container per customer - Docker

I have a monolithic web application based on .NET.
I am looking at multiple articles and trying to figure out whether Azure ACI or a similar service would be the correct one to use.
The application will run 24/7, and I guess this is where the confusion comes in: wouldn't it be normal to have an always-on application running on ACI?
What I am trying to achieve is a container per customer, where each customer gets one or more instances that they own. The other question is cost and scalability: I would expect to have thousands of containers, so perhaps I should be looking at Kubernetes?
Thanks.

Here is my understanding. I'm pretty new to both ACI and Kubernetes, so treat this as a suggestion and not a definitive answer 🙂.
Azure Container Instances is a quick, easy and cheap way to run a single instance of a container in Azure. However, it doesn't scale very well on its own (it can scale up, but not out, and not automatically), and it lacks many of the container-orchestration features that Kubernetes offers.
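To make the "container per customer" idea concrete, here is a rough sketch using the azure-mgmt-containerinstance Python SDK. The subscription, resource group, registry, image and customer names are placeholders, and exact method names can vary slightly between SDK versions.

```python
# Rough sketch only: one ACI container group per customer, created with the
# azure-mgmt-containerinstance SDK. Subscription, resource group, registry,
# image and customer names are placeholders; exact method names (e.g.
# begin_create_or_update vs create_or_update) vary between SDK versions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ResourceRequests, ResourceRequirements,
)

client = ContainerInstanceManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

web = Container(
    name="web",
    image="myregistry.azurecr.io/customer-app:latest",  # hypothetical image
    resources=ResourceRequirements(
        requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)
    ),
)

group = ContainerGroup(
    location="westeurope",
    os_type="Linux",
    containers=[web],
)

# One container group per customer, e.g. keyed by customer ID.
client.container_groups.begin_create_or_update(
    "my-resource-group", "customer-1234", group
)
```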
Kubernetes offers a lot more, such as zero-downtime deployments, scaling out with multiple replicas, and many more features. It is also a lot more complex, costs more, and takes much longer to set up.
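For a flavour of the scale-out point, here is a minimal sketch with the official kubernetes Python client that changes the replica count of an existing Deployment; the Deployment name and namespace are made up for the example.

```python
# Minimal sketch with the official `kubernetes` Python client: scale an
# existing Deployment out to 5 replicas. The Deployment name and namespace
# are made up for the example.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

# Kubernetes spreads the replicas across nodes and replaces failed pods,
# which is the kind of "scale out" ACI can't do for you on its own.
apps.patch_namespaced_deployment_scale(
    name="customer-app",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```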
I think ACI is a bit too simple to meet your use-case.

Related

Microservices in Docker implementation

We are writing our first microservices using Docker containers on Amazon Fargate. We have many doubts at the implementation level using Spring Boot.
We will have multiple microservices in the project. Is it good practice to put all the microservices in a single container, or should I create a separate Docker container for each microservice? A single container is the cost-effective way, but will it cause problems for our project structure in the future?
We are planning to deploy the application on AWS Fargate, and our application has a lot of room to extend in the future; we are expecting around 100 to 150 different microservices. In that case, is it still cost effective if we put all these microservices in different containers?
The most important thing to remember with microservices is that they're not primarily about solving technical problems but organisational problems. So when we look at whether an organisation should be using microservices, and how those services are deployed, we need to look at whether the org has the problems that the microservices style solves.
The answer to your question about your architecture, then, will mostly depend on the size of your technology team, the organisational structure, the age of your product, your current deployment practices, and how those are likely to change over the medium term.
As an example, if your organisation:
has less than 25 tech staff,
organised into 1 or 2 teams,
each of which works on any part of the product,
which is less than 12 months old,
and is deployed all at once on a regular basis (e.g. daily, weekly, monthly),
and the org isn't about to grow rapidly,
then you almost definitely want to forget about microservices for now. In a situation like this, the team is still new in learning about the domain, so likely don't know everything they'd need to know to really understand what would be a great way to split the system up into a distributed architecture. That means if they split it up now, they'll probably want to change the boundaries later, and that becomes very expensive when you already have a distributed system, while being far simpler in a monolith. What's more, with only a small team who can all work on (and support) any part of the system, there's little reason to invest in building a platform where individual teams can deploy and maintain individual services. An organisation at this stage will typically be far more concerned with finding customers and iterating the product quickly, perhaps even pivoting the product, as opposed to making teams autonomous and building a high-scaling, resilient architecture. A monolithic architecture makes sense at this point, but a well-designed monolith, with clear component boundaries enforced by APIs, and encapsulated data access, making it easy to pull out services into separate processes later.
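Purely as an illustration of that last point (my sketch, not the original answer's code), a "well-designed monolith" boundary can be as simple as an interface that is the only way other components reach a component's data, so it can later be extracted into its own process behind the same contract. The names here are invented for the example.

```python
# Illustrative sketch of a monolith component whose only public surface is an
# interface, with data access kept behind it, so it can later be pulled out
# into a separate service without changing any callers.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Invoice:
    customer_id: str
    amount_cents: int


class BillingService(ABC):
    """The only way other components are allowed to touch billing data."""

    @abstractmethod
    def invoices_for(self, customer_id: str) -> list[Invoice]: ...


class SqlBillingService(BillingService):
    """In-process implementation today; swap in an HTTP/gRPC client when the
    component is extracted, without changing any callers."""

    def __init__(self, connection):
        self._connection = connection  # encapsulated data access

    def invoices_for(self, customer_id: str) -> list[Invoice]:
        rows = self._connection.execute(
            "SELECT customer_id, amount_cents FROM invoices WHERE customer_id = ?",
            (customer_id,),
        )
        return [Invoice(*row) for row in rows]
```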
Let's look a little further on and consider an organisation that...
has over 50 tech staff,
organised into 7 teams,
each of which works only on specific areas of the product,
which is 3 years old,
and has teams wanting to deploy their work independently of what other teams are doing.
Such an organisation should definitely be building a distributed architecture. If they don't, and have all these teams working in a monolith instead, they will run into all kinds of organisational problems, with teams needing to coordinate their work, releases being delayed while one team finishes QA on their new feature, and patch deploys being a big hassle for staff and customers. What's more, with a mature product, the organisation should know enough about the domain to be able to split both the domain and the teams (in that order; see Conway's Law) into sensible, autonomous units that can make progress while minimising coordination.
You seem to have chosen microservices already. Depending on where you sit on the scales above, maybe you want to revisit that decision.
If you want to keep developing with microservices but deploying them all in one container, know that there's nothing wrong with that if it suits the way your organisation works at the moment. Will it make problems for your project structure in future? Well, if you're successful and your organisation grows, there will probably come a time where this single-container deployment is no longer the best fit, in particular when teams start owning services and want to deploy just their service without deploying the whole application. But that autonomy will come at the cost of extra work and complexity, and it may give you no benefit at this point in time. Just because it won't be the right approach for your system in the future doesn't mean that it isn't the right approach for today. The trick is in keeping an eye on it and knowing when to make the extra investment.
There is no problem with using a single container for your microservices, but the main goal of microservices is to maintain each service separately: each service should be loosely coupled, and each service should have a separate database (if you want to achieve a database-per-service architecture).
So try to achieve this: run your services in separate containers and orchestrate those services with Docker Swarm or Kubernetes.
I know cost matters, but if you do it the right way you will then see the power of the microservices architecture.

Stateful containers with Kubernetes/Docker: is it possible?

I apologize if this is an ignorant question, but I am building out a Kubernetes cluster and I really like the idea of abstracting my computing infrastructure from a single cloud provider and steering away from a VM platform. But what about statefulness? I want to be able to set up a MySQL server, for example, and keep that data for life. I want Kubernetes to load balance a MySQL container with a data drive. We speak about containers and we think of life and death within seconds, but what about when we want to keep data around and build a kick-ass data center without VMs? Is there a concept of being stateful in this paradigm?
Kubernetes provides StatefulSets for deploying stateful containers like databases. Follow the link below to understand how to deploy a MySQL database in highly available mode:
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
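For a flavour of what that looks like programmatically, here is a hedged sketch using the official kubernetes Python client to create a single-replica MySQL StatefulSet with its own PersistentVolumeClaim. Names, image, password and storage size are placeholders, some model names differ slightly between client versions, and the linked tutorial does the same thing with plain YAML manifests instead.

```python
# Hedged sketch with the official `kubernetes` Python client: a single-replica
# MySQL StatefulSet with its own PersistentVolumeClaim. Names, image, password
# and storage size are placeholders, and some model names differ slightly
# between client versions (e.g. the PVC resources type).
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

statefulset = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="mysql"),
    spec=client.V1StatefulSetSpec(
        service_name="mysql",  # assumes a matching headless Service exists
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "mysql"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "mysql"}),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="mysql",
                image="mysql:8.0",
                env=[client.V1EnvVar(name="MYSQL_ROOT_PASSWORD", value="changeme")],
                volume_mounts=[client.V1VolumeMount(
                    name="data", mount_path="/var/lib/mysql"
                )],
            )]),
        ),
        # Each replica gets its own PersistentVolumeClaim, so the data
        # survives pod rescheduling -- this is the "stateful" part.
        volume_claim_templates=[client.V1PersistentVolumeClaim(
            metadata=client.V1ObjectMeta(name="data"),
            spec=client.V1PersistentVolumeClaimSpec(
                access_modes=["ReadWriteOnce"],
                resources=client.V1ResourceRequirements(
                    requests={"storage": "10Gi"}
                ),
            ),
        )],
    ),
)

apps.create_namespaced_stateful_set(namespace="default", body=statefulset)
```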
Not ignorant at all, in fact, stateful apps (often DBs) require special consideration in Kubernetes.
StatefulSets are one of the primary Kubernetes objects that exist to help support the use of stateful apps.
This is a decent guide to solving some of the challenges. It's written by Google for Kubernetes Engine but the concepts apply to Kubernetes broadly. There is also a GKE-specific overview.

Pivotal Cloud Foundry vs OpenShift

I'm looking for a PaaS solution and lately I've been looking into Pivotal Cloud Foundry and OpenShift Origin, trying to compare them.
And honestly, it feels like both offer pretty much the same functionality, to the point where I don't really see anything that is truly specific to one of them.
They achieve things differently, but the functionality remains the same.
The biggest difference seems to be the runtimes these technologies use: OpenShift uses Kubernetes, while PCF uses Diego, and PCF also has its own container solution.
So here comes my question. What are the differences?
Is there a killer feature in one of them that I'm missing?

How to implement microservices [Node.js]?

I am new to this; what is the best approach to implementing microservices?
I found frameworks like Seneca, but they are a little bit confusing...
Is there any tutorial on how to set up JWT auth, MongoDB and other stuff in microservices?
Take a look at Docker.
With docker-compose you can run several services together with easy integration, without worrying about the IP addresses needed to connect them.
Also, if you add nginx to your stack it becomes very easy to scale those services; there are several videos and tutorials you can look up to help you get started.
I've heard about Seneca, but I haven't used it. I think you shouldn't depend on a specific framework, because one of the ideas behind microservices is low coupling.
To make the jump into the real micro-services world is not trivial. It's not about plumbing some APIs, but a radical change in architecture thinking that, well, at the beginning will make you a bit uncomfortable (e.g. every service with its own database) :)
The best book I have read so far about micro-services is The Tao of Microservices, by Richard Rodger, the author of Seneca himself. It exposes very well the shift from monolithic and object-oriented software towards micro-services.
I have personally struggled a bit with Seneca because of the average quality of documentation (inconsistencies, etc...). I would rather recommend Hemera, which took its inspiration from the message-pattern approach from Seneca, but is better documented and much more production-ready.
1) Build your services and deploy them with Docker containers.
2) Let them communicate via gRPC, because it is really fast for inter-service communication.
3) Use an error reporter like Bugsnag or Rollbar. Error reporting is really important for catching errors quickly.
4) Integrate tracing using OpenTracing or OpenCensus. Tracing is important too, because it will be very hard to monitor all your microservices with logs only (see the sketch below).
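Here is the sketch mentioned in point 4, in Python for illustration: wrapping an outbound call in an OpenTracing span. The opentracing package ships a no-op tracer by default; in a real setup you would register a concrete tracer (for example a Jaeger client) as the global tracer, and the user-service URL here is hypothetical.

```python
# Sketch of point 4 above: wrap an outbound call in an OpenTracing span.
# The opentracing package installs a no-op tracer by default; a concrete
# tracer (e.g. a Jaeger client) has to be registered as the global tracer.
import opentracing
import requests

tracer = opentracing.global_tracer()


def fetch_profile(user_id):
    # The span (and its tags) show up in your tracing backend once a real
    # tracer is configured, letting you follow a request across services.
    with tracer.start_active_span("user-service.fetch_profile") as scope:
        scope.span.set_tag("user.id", user_id)
        return requests.get(f"http://user-service/profiles/{user_id}").json()
```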

Cloud-aware programming and help choosing a good framework

How can I write a cloud-aware application, i.e. an application that takes advantage of being deployed in the cloud? Is it the same as an application that runs on a VPS/dedicated server? If not, what are the differences? Are there any design changes? What steps do I need to take if I want to migrate an application to being cloud-aware?
Also, I am about to implement a web application idea which needs features like security, performance and caching, and, more importantly, needs to be free. I have been comparing some frameworks and found that Django has the lowest RAM/CPU usage and works great in prefork+threaded mode, but I have also read that Django-based sites stop responding under a huge load of connections. Other frameworks that I have seen/know are Zend, CakePHP, Lithium/Cake3, CodeIgniter, Symfony, Ruby on Rails...
So I would leave this to your opinion as well: suggest a good free framework based on my needs.
Finally, thanks for reading the essay ;)
I feel a matrix moment coming on... "what is the cloud? The cloud is all around us, a prison for your program..." (what? the FAQ said bring your sense of humour...)
Ok so seriously, what is the cloud? It depends on the implementation, but the usual features include scalable computing resources and a charge per CPU-hour, storage used, etc. So yes, it is a bit like developing on your VPS/a normal server.
As I understand it, Google App Engine allows you to consume as much as you want. The back-end resource management is done by Google and billed to you and you pay for what you use. I believe there's even a free threshold.
Amazon EC2 exposes an API that actually allows you to add virtual machine instances (someone please correct me if I'm wrong) that you have pre-configured, deploy another instance of your web app, and talk between private IP ranges if you wish (Slicehost definitely allows this). As such, EC2 can let you act like a giant load balancer on the front end, passing work off to a whole number of VMs on the back end, or expose all of that publicly - take your pick. I'm not sure of the exact detail because I didn't build the system, but that's how I understand it.
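As my own illustration of that API (using today's boto3 SDK, not anything from the original answer), launching one more instance from a pre-configured image looks roughly like this; the AMI, key pair and security group IDs are placeholders.

```python
# Illustration only: launch one more EC2 instance from a pre-configured image
# with boto3. The AMI, key pair and security group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # image pre-configured with your web app
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
print(response["Instances"][0]["InstanceId"])
```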
I have a feeling (but I know least about Azure) that on Azure, resource management is done automatically, for you, by Microsoft, based on what your app uses.
So, in summary, the cloud is different things depending on which particular cloud you choose. EC2 seems to expose an API for managing resource, GAE and Azure appear to be environments which grow and shrink in the background based on your use.
Note: I am aware there are certain constraints developing in GAE, particularly with Java. In a minute, I'll edit in another thread where someone made an excellent comment on one of my posts to this effect.
Edit as promised, see this thread: Cloud Agnostic Architecture?
As for a choice of framework, it really doesn't matter as far as I'm concerned. If you are planning on deploying to one of these platforms you might want to check framework/language availability. I personally have just started Django and love it, having learnt python a while ago, so, in my totally unbiased opinion, use Django. Other developers will probably recommend other things, based on their preferences. What do you know? What are you most comfortable with? What do you like the most? I'd go with that. I chose Django purely because I'm not such a big fan of PHP, I like Python and I was comfortable with the framework when I initially played around with it.
Edit: So how do you write cloud-aware code? You design your software in such a way it fits on one of these architectures. Again, see the cloud-agnostic thread for some really good discussion on ways of doing this. For example, you might talk to some services on GAE which scale. That they are on GAE (example) doesn't really matter, you use loose coupling ideas. In essence, this is just a step up from the web service idea.
Also, another feature of the cloud I forgot to mention is the idea of CDNs being provided for you - some cloud implementations might move your data around the globe to make it more efficient to serve, or just because that's where they've got space. If that's an issue, don't use the cloud.
I cannot answer your question - I'm not experienced in such projects - but I can tell you one thing... both CakePHP and CodeIgniter are designed for PHP4 - in other words, for really old technology. And it seems nothing is going to change in their case. Symfony (especially the 2.0 version, which is still in heavy beta) is worth considering, but as I said at the very beginning - I cannot support this with my own experience.
For designing applications for deployment to the cloud, the main thing to consider is recoverability. If your server is terminated, you may lose all of your data. If you're deploying on Amazon, I'd recommend putting all data that you need persisted onto an Elastic Block Storage (EBS) device. This would be data like user-generated content/files, the database files and logs. I also snapshot the EBS volume on a 5-day rotation, so that data is itself backed up. That said, I've had a cloud server up on AWS for over a year without any issues.
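A small sketch of the snapshot step, using today's boto3 SDK (my illustration, not the original poster's setup); the volume ID is a placeholder and the rotation schedule is up to you.

```python
# Illustration only: create an EBS snapshot of the data volume with boto3.
# The volume ID is a placeholder; run this on whatever rotation you prefer.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="rotating backup of the app data volume",
)
print(snapshot["SnapshotId"])
```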
As for frameworks, I'm giving Grails a try at the minute and I'm quite enjoying it. Built to be syntactically similar to Rails but runs on the JVM. It means you can take advantage of all the Java goodness, like threading, concurrency and all the great libraries out there to build your web application.

Resources