I am new to this. What is the best approach to implementing microservices?
I found frameworks like Seneca, but they are a little bit confusing...
Is there any tutorial on how to set up JWT auth, MongoDB, and other stuff in microservices?
Take a look at Docker.
With docker-compose you can wire up several services easily, without worrying about the IP addresses used to connect them.
Also, if you add nginx to your stack, scaling those services becomes very easy; there are several videos and tutorials you can look up to help you get started.
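To make that concrete, here is a minimal, hypothetical docker-compose.yml (the service and image names are assumptions, not from the original answer); note that the API container reaches MongoDB by its service name rather than by an IP address:

```yaml
# Hypothetical two-service stack: an API and a MongoDB instance.
# Compose puts both on one network where service names act as hostnames.
version: "3"
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      # Reach the database by service name, not by IP address.
      - MONGO_URL=mongodb://mongo:27017/app
    depends_on:
      - mongo
  mongo:
    image: mongo:4
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
```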
I've heard about Seneca, but I haven't used it. I think you shouldn't depend on a specific framework, because one of the ideas behind microservices is loose coupling.
Making the jump into the real microservices world is not trivial. It's not about plumbing some APIs together, but a radical change in architectural thinking that, well, at the beginning will make you a bit uncomfortable (e.g. every service with its own database) :)
The best book I have read so far about microservices is The Tao of Microservices, by Richard Rodger, the author of Seneca himself. It explains very well the shift from monolithic, object-oriented software towards microservices.
I have personally struggled a bit with Seneca because of the mediocre quality of its documentation (inconsistencies, etc.). I would rather recommend Hemera, which took its inspiration from Seneca's message-pattern approach but is better documented and much more production-ready.
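To give a feel for that message-pattern style, here is a minimal sketch along the lines of Hemera's documented request/reply example (it assumes the nats-hemera and nats packages and a NATS server on localhost; the math topic is just an illustration):

```typescript
// Minimal request/reply sketch in the Hemera message-pattern style.
const Hemera = require("nats-hemera");
const nats = require("nats").connect("nats://localhost:4222");

const hemera = new Hemera(nats, { logLevel: "info" });

hemera.ready(() => {
  // Register a handler for a message pattern...
  hemera.add({ topic: "math", cmd: "add" }, (req, cb) => {
    cb(null, { result: req.a + req.b });
  });

  // ...and call it by pattern, with no knowledge of network addresses.
  hemera.act({ topic: "math", cmd: "add", a: 40, b: 2 }, (err, resp) => {
    if (err) throw err;
    console.log(resp.result); // 42
  });
});
```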
1) Build services and deploy them with Docker containers.
2) Let them communicate via gRPC, because it is really fast for inter-service communication (see the sketch after this list).
3) Use an error reporter like Bugsnag or Rollbar. Error reporting is really important for catching errors quickly.
4) Integrate tracing using OpenTracing or OpenCensus. Tracing is important too, because it is very hard to monitor all your microservices with logs alone.
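For point 2, a hypothetical gRPC service contract might look like this (the service, method, and field names are invented for the example):

```proto
// auth.proto - hypothetical contract for token verification between services.
syntax = "proto3";

package auth;

service AuthService {
  rpc VerifyToken (VerifyTokenRequest) returns (VerifyTokenReply);
}

message VerifyTokenRequest {
  string token = 1;
}

message VerifyTokenReply {
  bool valid = 1;
  string user_id = 2;
}
```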
PLEASE! I've been trying to find an inbox module for ejabberd like MongooseIM's, but without success.
I would like to know if there is one, and if not, whether I can adapt MongooseIM's mod_inbox to ejabberd, or whether it is better to switch to MongooseIM.
EDITED:
Or: how can I build a similar implementation on the client side with XEP-0313 (MAM) and XEP-0013 (offline messages) that gives me the same or an approximate result? Please help, I'm racking my brain over this. I don't want to switch away from ejabberd, but if necessary, no problem.
Disclaimer: I'm on the MongooseIM core team.
It's not clear how much you've invested in deploying and integrating ejabberd. If the integration is only at the XMPP level, then MongooseIM is mostly a drop-in replacement. You can just grab a MongooseIM container or a prebuilt package and be done with it.
If, on the other hand, you have invested in metrics gathering, deployment pipelines, infrastructure as code, etc., then switching might cost you a bit more effort due to some differences in how the projects are built from source, report stats, and format logs. The switch is still relatively easy, but there's some overhead involved.
If you're comfortable programming in Erlang, then porting mod_inbox won't be a big problem (it's a matter of a few days at most). If you don't have Erlang experience and hiring is not an option, then better stick to prebuilt MongooseIM container images / packages.
I'm planning the basic architecture for a new piece of software that will need to be modular.
I'm trying to design a multitenant application, with a single instance running for all users.
What I need is the ability to scale when and where needed, so I don't like the idea of spawning multiple (monolithically architected) applications behind a load balancer when it could be just a single part of the computation that needs more resources.
So I'm thinking about a service-oriented architecture: it would have the Rails application as the web client, plus other services that could be written in virtually any language and accessed via APIs by the Rails application.
I'd also like to open these APIs to users, so they can integrate with their existing software and easily extend these services.
I have some specific questions:
Would it be a good idea to have this kind of architecture for a new startup (1-5 employees)?
Using APIs, I don't need any separate RPC mechanism, since the API request itself is an RPC; am I right about this concept?
What would be a nice standard for the APIs (REST only defines HOW to access resources)?
What could be, practically, the best (= a good) way to expose those APIs to customers? Via the Rails web application? Directly, via a proxy that makes them all available under the same domain? The APIs would be accessible in a RESTful way, i.e. via HTTP requests.
With this kind of architecture, would it be less expensive to use VPSs, cloud servers, or dedicated servers? I like clouds because of their failure-tolerant nature; it would free us from worrying about data persistence and backups (and we want to build an architecture that is almost 100% available).
Any other suggestions or points of view, and any simple starting point for thinking about this, would be very appreciated.
I know Python, C/C++, JS, Perl, and other languages very well, and I started recently with Ruby/Rails. I'm choosing the latter because its community seems strongly oriented towards building services, and what I care about (before extreme performance) is the ability to learn fast, and to have people to share experience with and learn from, with practical examples. (I know this is about an architecture, not the language that implements it, but I think it would be easier to get it wrong in an immature environment that still works with web 1.0 or 2.0 styles in mind.)
P.S. I also need to write up the basic architecture design; do you have any template I can start from? I need to share it with my team and other very experienced pros, and I'd like it to be complete and easy to understand.
Hope to read some good suggestions here, guys!
Thanks,
Alex.
Architecture
Here is an example stack that I think would mostly do what you're looking to accomplish:
Cluster
One or more app servers
One or more database servers
Zero or more job servers
Instances
Chef for configuration
Unicorn or Passenger
Nginx
Application
Ruby on Rails
Check out Grape for simple APIs
More Specific Answers
Would it be a good idea to have this kind of architecture for a new startup (1-5 employees)?
If done correctly, this approach can be very stable and robust. What you don't want to do is get into a situation where you are spending all your time managing your servers. You want to get it up, be able to deal with problem instances quickly, and work on making your application do stuff. If you do it right, creating instances can be simple and totally automated.
Using APIs, I don't need any separate RPC mechanism, since the API request itself is an RPC; am I right about this concept?
Yes.
What would be a nice standard for the APIs (REST only defines HOW to access resources)?
Here we'd need a bit more clarification on what specific goals you need your RESTful design to accomplish.
What could be, practically, the best (= a good) way to expose those APIs to customers? Via the Rails web application? Directly, via a proxy that makes them all available under the same domain? The APIs would be accessible in a RESTful way, i.e. via HTTP requests.
A domain (or subdomain) should be accessible via HTTP, following RESTful design. The API might return JSON or something else; it's all up to you.
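For instance, here is a minimal sketch of such an HTTP endpoint returning JSON, written in Node/Express purely for brevity (the resource name and payload are made up for the example; any language would do):

```typescript
// Hypothetical RESTful endpoint: GET /api/v1/widgets/:id returns JSON.
// Assumes the express package; "widgets" is a placeholder resource.
const express = require("express");
const app = express();

app.get("/api/v1/widgets/:id", (req, res) => {
  // A real service would fetch this from a datastore.
  res.json({ id: req.params.id, name: "example widget" });
});

app.listen(3000);
```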
With this kind of architecture, would it be less expensive to use VPSs, cloud servers, or dedicated servers? I like clouds because of their failure-tolerant nature; it would free us from worrying about data persistence and backups (and we want to build an architecture that is almost 100% available).
You get what you pay for. I'd recommend cloud servers. Check out Heroku to get started, or Rackspace if you are prepared to "roll your own." Or Engine Yard.
Any other suggestions or points of view, and any simple starting point for thinking about this, would be very appreciated.
I would try creating a test API using something like a free Heroku account.
I have a fairly complex Windows service (written in .NET 4) with several subsystems that run in parallel.
I have implemented pretty good logging throughout, but I feel I need more information about what each subsystem is currently doing. This would be very useful at times when I need to stop the service for upgrades/bug fixes.
It would be nice to have a GUI app that shows me the status of each part of the application I'm interested in. I have some ideas for how to do this, but I'd like to hear others' ideas as well.
I'm interested in a solution that would be easy to plop down in a future Windows service, and I'm not looking for anything very complex.
Are there any tools for this sort of thing?
Have you done this yourself?
What about interprocess communication?
Since Windows services can no longer interact with the user session, you'll need to have a separate application that does the interacting for you. Based on the details of your question, I think you understand this.
The big question is how to facilitate the communication between your Windows service and the application. There are all kinds of approaches - shared memory, socket, pipe, remoting, etc. What I have used successfully is WCF. If your UI is going to reside on the same machine as the service, use the NetNamedPipeBinding. If you ever need access from a remote machine, you can change to the NetTcpBinding. I've found this flow chart helpful in binding selection.
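As a rough sketch of the named-pipe case (the service and contract names here are placeholders, not from the question), the service-side App.config could look something like this:

```xml
<!-- Hypothetical WCF service-side configuration: expose a status
     endpoint over a named pipe for a same-machine GUI client. -->
<system.serviceModel>
  <services>
    <service name="MyService.StatusService">
      <endpoint address="net.pipe://localhost/StatusService"
                binding="netNamedPipeBinding"
                contract="MyService.IStatusService" />
    </service>
  </services>
</system.serviceModel>
```

Switching to NetTcpBinding later is then mostly a matter of changing the binding and address in this config rather than touching the service code.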
If you're looking for a more formal framework approach than just straight WCF, have a look at Juval Lowy's Publish-Subscribe WCF Framework, which is described in pretty good detail in this MSDN article. The code is available to look at via the article, or you can download the source and example from Lowy's website here. Go to the Downloads section, filter by the Discovery category, and you'll see it there.
FastCGI is old but it still seems like it must be the right answer in some cases.
It seems like the preferred deployment of Perl/Catalyst web applications is with FastCGI.
FastCGI was popular with Rails but seems to no longer be. (Why?)
The Java world doesn't seem to have anything to do with FastCGI. Is something like Tomcat way better than Apache+FastCGI?
Is choosing FastCGI still a good idea or just a lingering technology?
Ted
Since it depends a lot on your setup and requirements, I'll leave the "Is X still a right answer?" part up to you. However, by looking at different architectures, you can come up with a list of questions to ask to determine whether it is still a right answer given your specific circumstances.
Concerns of frequent interest
The questions you'll want to ask are usually related to security and flexibility. For security, you'll want to follow the principle of least privilege. For flexibility, you'll want to know if you can run multiple frameworks, multiple versions of the framework and how easily you can delegate work to other tasks.
Other concerns
For a simple web front-end to a database-backed application, not all of these questions are important. You also need to keep in mind that some of the recommendations out there have nothing to do with what's outlined here. Many web frameworks will recommend whatever architecture is easiest to set up with their framework. They do this because it helps get new users trying out the framework with minimal fuss and without flooding the mailing list. Also, the Java community tends to stick to a common denominator rather than take full advantage of the platform at hand, so they'll often recommend an all-Java solution.
Popular architectures
Single process architectures
From a pure performance point of view, a single process (probably threaded) with an embedded framework probably gives the most performance, as it removes most of the communication overhead between whatever receives the request and whatever produces the response.
Security: a single process must have all of the permissions required to perform every single task it is handed. In simple applications, this might not be a problem. However, it's possible you might serve multiple services with different security requirements from the same process, forcing it to hold the union of all the privileges involved.
Flexibility: you probably can't run multiple versions of the same framework (e.g. when code for different parts of your website requires different versions of Java, Rails, Python, etc.). Moreover, changing your setup to serve some of the work from different machines becomes painful (less difficult when split up across virtual hosts).
Sub-process based architectures
Under the CGI model, you have to pay the price of spawning a new process for each request. Even on UNIX machines where spawning a process is considered cheap, 600 requests a second will kill your server if you spawn a process for each.
Security: to spawn child processes under different user accounts, your gateway probably runs under quite high privileges.
Flexibility: you get additional flexibility for the multiple-frameworks, multiple-versions, multiple-languages approach, but you're still stuck on the same machine.
Distributed architectures
The FastCGI/SCGI approach tried to solve the CGI process management problem in a clean way. Just keep the process alive. Have the gateway talk to that process to serve the request.
Security: because the gateway doesn't spawn the processes that serve requests, the gateway can run with far fewer privileges enabled. Actually, if it only serves as a gateway and doesn't do any work itself, it can run with hardly any privileges at all.
Flexibility: you get even better flexibility than the CGI model because you can forward the request to any machine on the network.
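As an illustration, a gateway under this model needs little more than a forwarding rule; sketched here for nginx (the address and path are hypothetical):

```nginx
# Hypothetical nginx gateway rule: hand requests to a long-lived
# FastCGI process, which may just as well run on another machine.
location /app/ {
    include fastcgi_params;
    fastcgi_pass 10.0.0.5:9000;   # or unix:/var/run/app.sock locally
}
```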
Conclusion
I like FastCGI because it gives me high flexibility at a price (i.e. requests forwarded through a socket) I can afford to pay. It's not my full-time job to administer systems, and I don't develop all the apps I host, so I look for the easiest solution for hosting whatever I'm trying to host. FastCGI is popular enough to be supported by major web servers and popular web frameworks. Adding another app usually just boils down to installing it and mapping the desired URL to the application over FastCGI.
How can I write a cloud-aware application, i.e. an application that benefits from being deployed on a cloud? Is it the same as an application that runs on a VPS/dedicated server? If not, what are the differences? Are there any design changes? What steps do I need to take to migrate an application towards being cloud-aware?
Also, I am about to implement a web application idea which needs features like security, performance, and caching, and, more importantly, needs to be free. I have been comparing some frameworks and found that Django has the lowest RAM/CPU usage and works great in prefork+threaded mode, but I have also read that Django-based sites stop responding under a huge load of connections. Other frameworks that I have seen or know are Zend, CakePHP, Lithium/Cake3, CodeIgniter, Symfony, Ruby on Rails...
So I would leave this to your opinion as well: suggest a good free framework based on my needs.
Finally, thanks for reading the essay ;)
I feel a Matrix moment coming on... "What is the cloud? The cloud is all around us, a prison for your program..." (what? the FAQ said bring your sense of humour...)
OK, so seriously, what is the cloud? It depends on the implementation, but usual features include scalable computing resources charged per CPU-hour, per storage area, etc. So yes, it is a bit like developing on your VPS/a normal server.
As I understand it, Google App Engine allows you to consume as much as you want. The back-end resource management is done by Google and billed to you and you pay for what you use. I believe there's even a free threshold.
Amazon EC2 exposes an API that actually allows you to add virtual machine instances (someone please correct me if I'm wrong) that you have pre-configured, deploy another instance of your web app, and talk between private IP ranges if you wish (Slicehost definitely allows this). As such, EC2 can let you act like a giant load balancer on the front end passing work off to a whole number of VMs on the back end, or expose all of that publicly; take your pick. I'm not sure on the exact detail because I didn't build the system, but that's how I understand it.
I have a feeling (but I know least about Azure) that on Azure, resource management is done automatically, for you, by Microsoft, based on what your app uses.
So, in summary, the cloud is different things depending on which particular cloud you choose. EC2 seems to expose an API for managing resources; GAE and Azure appear to be environments which grow and shrink in the background based on your use.
Note: I am aware there are certain constraints when developing on GAE, particularly with Java. In a minute, I'll edit in another thread where someone made an excellent comment on one of my posts to this effect.
Edit as promised, see this thread: Cloud Agnostic Architecture?
As for the choice of framework, it really doesn't matter as far as I'm concerned. If you are planning on deploying to one of these platforms, you might want to check framework/language availability. I personally have just started with Django and love it, having learnt Python a while ago; so, in my totally unbiased opinion, use Django. Other developers will probably recommend other things based on their preferences. What do you know? What are you most comfortable with? What do you like the most? I'd go with that. I chose Django purely because I'm not such a big fan of PHP, I like Python, and I was comfortable with the framework when I initially played around with it.
Edit: So how do you write cloud-aware code? You design your software in such a way that it fits one of these architectures. Again, see the cloud-agnostic thread for some really good discussion of ways to do this. For example, you might talk to some services on GAE which scale; that they happen to be on GAE doesn't really matter, because you use loose-coupling ideas. In essence, this is just a step up from the web-service idea.
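As a small sketch of that loose-coupling idea (the names and URLs here are made up): resolve collaborating services from the environment rather than hard-coding where they live, so the same code runs unchanged on EC2, GAE, or a laptop.

```typescript
// Sketch of loose coupling via externalized configuration.
// USER_SERVICE_URL is a hypothetical variable; requires Node 18+ for fetch.
const USER_SERVICE_URL =
  process.env.USER_SERVICE_URL || "http://localhost:8081";

async function fetchUser(id: string) {
  // The caller neither knows nor cares which cloud hosts the service.
  const res = await fetch(`${USER_SERVICE_URL}/users/${id}`);
  if (!res.ok) throw new Error(`user service returned ${res.status}`);
  return res.json();
}
```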
Also, another feature of the cloud I forgot to mention is the idea of CDNs being provided for you - some cloud implementations might move your data around the globe to make it more efficient to serve, or just because that's where they've got space. If that's an issue, don't use the cloud.
I cannot answer your question - I'm not experienced in such projects - but I can tell you one thing... both CakePHP and CodeIgniter are designed for PHP4 - in other words, for really old technology - and it seems nothing is going to change in their case. Symfony (especially version 2.0, which is still in heavy beta) is worth considering, but as I said at the very beginning, I cannot back this up with my own experience.
When designing applications for deployment to the cloud, the main thing to consider is recoverability. If your server is terminated, you may lose all of your data. If you're deploying on Amazon, I'd recommend putting all data that needs to be persisted onto an Elastic Block Storage (EBS) device. That would be data like user-generated content/files, the database files, and logs. I also take EBS snapshots on a 5-day rotation, so the volume itself is backed up. That said, I've had a cloud server up on AWS for over a year without any issues.
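As a hedged, present-day example of one rotation step (the volume id is a placeholder), a snapshot can be taken with the AWS CLI and scheduled e.g. from cron:

```sh
# Hypothetical scheduled backup step: snapshot the EBS data volume.
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "app-data rotating snapshot"
```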
As for frameworks, I'm giving Grails a try at the minute and I'm quite enjoying it. It's built to be syntactically similar to Rails but runs on the JVM, which means you can take advantage of all the Java goodness - threading, concurrency, and all the great libraries out there - to build your web application.