Which is better: KServe, Seldon Core, or BentoML? And what are the advantages/disadvantages and features of each one?
Did a lot of research and can't find a clear answer
I'm in a similar position where lately I've been looking around the model serving landscape to choose which stack/tech to go for. Currently we're using FastAPI to wrap models into microservices, but we want to split IO/network-bound consumption (usually from business logic) from compute/memory-bound consumption (models), and also get better orchestration (scaling, traffic distribution for A/B tests, etc.).
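As a minimal sketch of that IO-bound vs compute-bound split (all names here are hypothetical, not from any of the frameworks discussed below): the IO/network-bound layer stays thin and hands model calls off to a dedicated pool:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a real model; the compute/memory-bound work lives here.
def predict(features):
    return sum(features) / len(features)  # dummy "model"

# The IO/network-bound layer (e.g. a FastAPI handler) submits work to a
# dedicated pool instead of blocking on the model itself. For truly CPU-bound
# models you'd use a ProcessPoolExecutor, or a separate service entirely,
# which is exactly what the inference servers discussed here provide.
executor = ThreadPoolExecutor(max_workers=4)

def handle_request(payload):
    future = executor.submit(predict, payload["features"])
    return {"prediction": future.result()}

print(handle_request({"features": [1.0, 2.0, 3.0]}))  # {'prediction': 2.0}
```

Splitting at this seam is what makes it easy later to move the compute side onto a dedicated inference server while the business-logic side stays put.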
Generally you have two kinds of tools:
Inference servers, which deal with wrapping the model into a microservice
Server orchestrators, which add orchestration features for scaling, deploying, and generally managing the server fleet
BentoML is a model server, and the direct comparison wouldn't be to Seldon Core or KServe, but rather to Seldon Core's MLServer/Python Client and KServe's KFModel (which in turn uses Ray). I feel like their feature sets are very similar, so which one is best comes down to experience/trial and error. Personally I went for BentoML for now because it seemed the easiest to iterate on, but I wouldn't rule out switching to the others if Bento doesn't work as well.
Seldon Core and KServe are more orchestration tools, meaning that their feature set, while including inference servers, also extends beyond that. BentoML also has an orchestration tool, Yatai, but I feel it's still lacking in features compared to the above two. The good news is that I believe Seldon Core and KServe should work with most inference-server tech (including BentoML), although some features might be degraded compared to using their own solutions.
I don't have a clear-cut answer as to which one is best; from my research, people seem to use all of them in some form or another, like:
BentoML + Helm for deployment
BentoML + Seldon Core
Seldon's prepackaged inference servers/custom + Seldon Core
BentoML + KServe
My personal suggestion is to try out the quickstart tutorials of each and see what best fits your needs, generally going for the path of least resistance. The MLOps landscape changes a lot and quickly, and some tools are more mature than others, so not investing too heavily in any single tool makes the most sense to me.
Related
I am new to this; what is the best approach to implementing microservices?
I found frameworks like Seneca, but it is a little bit confusing...
Is there any tutorial on how to set up JWT auth, MongoDB, and other stuff in microservices?
Take a look at Docker.
With docker-compose you can run several services together with easy integration, without worrying about the IP addresses used to connect them.
Also, if you add nginx to your stack, it's going to be very easy to scale those services; there are several videos and tutorials you can look up to help you get started.
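A minimal docker-compose sketch of that kind of setup (service names, images, and ports here are hypothetical, just to show the shape):

```yaml
# Hypothetical two-service stack; adjust names/images to your project.
version: "3.8"
services:
  auth:
    build: ./auth          # your JWT auth service
    environment:
      - MONGO_URL=mongodb://mongo:27017/auth
    depends_on:
      - mongo
  mongo:
    image: mongo:6
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"            # single public entry point; proxies to the services
    depends_on:
      - auth
```

Services reach each other by service name (e.g. `mongo`, `auth`) on the compose network, which is exactly the "no worrying about IP addresses" point above; `docker compose up` brings the whole thing up.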
I've heard about Seneca, but I haven't used it. I think you shouldn't depend on a specific framework, because one of the ideas behind microservices is low coupling.
Making the jump into the real microservices world is not trivial. It's not about plumbing some APIs together, but about a radical change in architectural thinking that, well, will make you a bit uncomfortable at the beginning (e.g. every service with its own database) :)
The best book I have read so far about microservices is The Tao of Microservices, by Richard Rodger, the author of Seneca himself. It lays out very well the shift from monolithic, object-oriented software towards microservices.
I have personally struggled a bit with Seneca because of the middling quality of its documentation (inconsistencies, etc.). I would rather recommend Hemera, which took its inspiration from Seneca's message-pattern approach but is better documented and much more production-ready.
1) Build services and deploy them with Docker containers.
2) Let them communicate via gRPC, because it is really fast for inter-service communication.
3) Use an error reporter like Bugsnag or Rollbar. Error reporting is really important for catching errors quickly.
4) Integrate tracing using OpenTracing or OpenCensus. Tracing is important too, because it is very hard to monitor all your microservices with logs alone.
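For point 2, a gRPC service is defined by a .proto contract that each service compiles stubs from; here is a minimal sketch (the service and message names are made up for illustration):

```protobuf
// Hypothetical contract between two internal services.
syntax = "proto3";

package orders;

service OrderService {
  // Called by other services over gRPC instead of JSON/HTTP.
  rpc GetOrder (GetOrderRequest) returns (Order);
}

message GetOrderRequest {
  string order_id = 1;
}

message Order {
  string order_id = 1;
  string user_id = 2;
  int64 total_cents = 3;
}
```

Because both sides generate code from the same contract, the services can be written in different languages and still stay in sync.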
I'm asking about the longevity of microframeworks like Flask, Bottle, and Express.js. Their advantages: small, fast, manageable.
Are they intended to be replaced as code complexity and the user base grow? Also: should they be replaced with a full framework like Django or Pyramid, or are microframeworks the new standard?
Well, it kind of depends on what you mean by growth. Let's look at two possibilities:
User growth. If you're building an application with fairly fixed features which you expect to have a rapidly expanding user-base (like Twitter), then a microframework may be ideally suited for scalability since it's already stripped down to the bare essentials + your application code.
Feature growth. If you have a site which you're expecting to require rapid addition of many discrete and complex yet generic features (forums, messaging, commerce, mini-applications, plugins, complex APIs, blogs), then you may save time by using a full-featured framework like Django or Ruby on Rails.
Basically, the only reason a microframework might be unsuitable for your application in the long term is if you think you would benefit from plug-and-play functionality. Because fully-featured frameworks are higher-level than microframeworks, you'll often find fully-featured solutions as plugins right out of the box. Blogs, authentication, and so on. With microframeworks you're expected to roll your own solutions for these things, but with access to a lot of lower-level functionality through the community.
It depends on what the (micro)framework supports, as well as the amount of documentation provided for it.
For example, a site using Flask needs a database for storing data. Even though Flask does not have a database extension built in, there are extensions available for it.
If the microframework can handle it, why replace it with something else?
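To illustrate the "roll your own" point: with a microframework, persistence can be as little as the standard library's sqlite3 module (this is a hypothetical sketch, not a recommendation over Flask's database extensions):

```python
import sqlite3

# Minimal roll-your-own persistence layer a microframework app might start
# with; a Flask app would typically graduate to an extension later.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")

def add_post(title):
    cur = conn.execute("INSERT INTO posts (title) VALUES (?)", (title,))
    conn.commit()
    return cur.lastrowid

def get_post(post_id):
    row = conn.execute(
        "SELECT title FROM posts WHERE id = ?", (post_id,)
    ).fetchone()
    return row[0] if row else None

post_id = add_post("Hello, microframeworks")
print(get_post(post_id))  # Hello, microframeworks
```

The trade-off described above is visible here: nothing comes prewired, but nothing is hidden either.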
I have recently been approached for advice on an idea of rewriting an existing site due to massive maintenance problems in its old design.
Basically, the company is considering a complete rewrite of approx. 90% of its site, which is currently written in PHP using an in-house framework.
The company would like to rebuild the backend, and some way down the road the front-end as well, in order to minimize their maintenance problems and make it easier to bring in new talent who don't need to spend months learning the architecture before they can become effective developers.
We've come up with several possible architectures, some involving rewriting the whole site using an existing scripting web framework such as Cake, Django or RoR, and some involving compiled-language frameworks in Java or even .NET.
In addition we have come up with some cross technology solutions - such as a web application built in Django with a Scala backend.
I was wondering what merit there would be to using a single technology stack (such as RoR) as opposed to a cross between two (such as RoR with Scala, like Twitter now does), and vice versa.
Take into consideration that this company's site is a high-traffic site with over 1 million unique visitors a day, which will be transitioned onto the new architecture slowly over a long period (several months to a year)...
Thanks
Generally speaking, I don't think any particular technology stack is better than any other in terms of performance; Facebook runs on PHP and I know first hand that Java and .Net scale well too. Based on what you've said I'd be worrying more about the maintainability related issues than performance and scalability just now.
Generally speaking, I would keep within one well known technology stack if possible:
It'll be easier to find (good) staff for a well-known platform/technology stack; there will be more people in the market, and rates will not be as expensive, because the skills aren't rare.
Splitting your technology means you need a wider range of knowledge; by sticking with a single technology stack you can focus on it, with better / faster results.
People tend to focus on one platform / technology stack, so it'll be easier to find developers for technology X, rather than technologies X, Y and Z.
It's easier for team members to work on different parts of the system as it's all written in the same technology - presumably in a similar way.
In terms of integration, items within the same technology stack play nicer together; crossing into different stacks can quickly become more difficult and harder to support.
Where you do want to use different technology, ensure the boundary is clean - something standards-based or technology-agnostic like web-service/JSON calls.
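As a tiny sketch of such a clean, technology-agnostic boundary (the payload shape and function names here are hypothetical): both sides agree only on a JSON document, so either side can be rewritten in another stack without touching the other:

```python
import json

# Hypothetical boundary contract: the two stacks only ever exchange JSON
# documents shaped like this, never in-process objects.
def make_user_request(user_id):
    return json.dumps({"action": "get_user", "user_id": user_id})

def handle_user_request(raw):
    req = json.loads(raw)
    assert req["action"] == "get_user"
    # ...look up the user using whatever technology this side happens to use...
    return json.dumps({"user_id": req["user_id"], "name": "example"})

resp = json.loads(handle_user_request(make_user_request(42)))
print(resp["name"])  # example
```

In a real system the `raw` string would travel over HTTP between the two stacks; the point is that the wire format, not a shared runtime, is the contract.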
Rewriting your whole codebase will require significant effort and bring lots of pressure; for a start, you would do well to double or maybe even triple the initial time estimate.
You can think about your problem from two perspectives:
Number of platforms. In order to minimize and manage the complexity of this task, it is most definitely in your interest to reduce mental strain by using as few new technologies/platforms as possible. For example, an often-cited advantage of RoR over PHP+Smarty is that with RoR you don't have to learn a new presentation language.
Team effort required to learn new tech. If your existing team is already well versed in PHP, Django, etc., but not RoR, then you might be better off reusing existing skills, since the mental strain on developers will be smaller.
Single technology means less moving targets; simpler is always better as long as it meets the requirements. So, use as many technologies as you need, but not more than that. The technology is not important; the right technology is the one that makes your job easier. So, ask yourself what are your current pain points, and how would each of those technologies help.
Getting the architecture right and the code clean is the easiest with Smalltalk and Seaside, especially when you do the persistence with Gemstone. At this scale, you'll have to talk to them about license costs. You might know them from the Ruby work they do with Maglev.
How can I write a cloud-aware application, i.e. an application that benefits from being deployed on the cloud? Is it the same as an application that runs on a VPS/dedicated server? If not, what are the differences? Are there any design changes? What steps do I need to take if I am to migrate an application to being cloud-aware?
Also, I am about to implement a web application idea which would need features like security, performance, caching, and, more importantly, a free stack. I have been comparing some frameworks and found that Django has the lowest RAM/CPU usage and works great in prefork+threaded mode, but I have also read that Django-based sites stop responding under a huge load of connections. Other frameworks that I have seen/know are Zend, CakePHP, Lithium/Cake3, CodeIgniter, Symfony, Ruby on Rails...
So I would leave this to your opinion as well; please suggest a good free framework based on my needs.
Finally, thanks for reading the essay ;)
I feel a matrix moment coming on... "what is the cloud? The cloud is all around us, a prison for your program..." (what? the FAQ said bring your sense of humour...)
Ok so seriously, what is the cloud? It depends on the implementation but usual features include scalable computing resource and a charge per cpu-hour, storage area etc. So yes, it is a bit like developing on your VPS/a normal server.
As I understand it, Google App Engine allows you to consume as much as you want. The back-end resource management is done by Google and billed to you and you pay for what you use. I believe there's even a free threshold.
Amazon EC2 exposes an API that actually allows you to add virtual machine instances (someone correct me, please, if I'm wrong) having pre-configured them, deploy another instance of your web app, and talk between private IP ranges if you wish (Slicehost definitely allows this). As such, EC2 can let you act like a giant load balancer on the front-end, passing work off to a whole number of VMs on the back end, or expose all of that publicly; take your pick. I'm not sure on the exact detail because I didn't build the system, but that's how I understand it.
I have a feeling (but I know least about Azure) that on Azure, resource management is done automatically, for you, by Microsoft, based on what your app uses.
So, in summary, the cloud is different things depending on which particular cloud you choose. EC2 seems to expose an API for managing resource, GAE and Azure appear to be environments which grow and shrink in the background based on your use.
Note: I am aware there are certain constraints when developing on GAE, particularly with Java. In a minute, I'll edit in another thread where someone made an excellent comment on one of my posts to this effect.
Edit as promised, see this thread: Cloud Agnostic Architecture?
As for a choice of framework, it really doesn't matter as far as I'm concerned. If you are planning on deploying to one of these platforms you might want to check framework/language availability. I personally have just started Django and love it, having learnt python a while ago, so, in my totally unbiased opinion, use Django. Other developers will probably recommend other things, based on their preferences. What do you know? What are you most comfortable with? What do you like the most? I'd go with that. I chose Django purely because I'm not such a big fan of PHP, I like Python and I was comfortable with the framework when I initially played around with it.
Edit: So how do you write cloud-aware code? You design your software in such a way it fits on one of these architectures. Again, see the cloud-agnostic thread for some really good discussion on ways of doing this. For example, you might talk to some services on GAE which scale. That they are on GAE (example) doesn't really matter, you use loose coupling ideas. In essence, this is just a step up from the web service idea.
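One common loose-coupling shape (a hypothetical sketch, not any particular cloud's API): the application codes against a small storage interface, and which provider sits behind it becomes a deployment detail:

```python
# Hypothetical cloud-agnostic storage boundary: the app only ever sees this
# interface; GAE's datastore, S3, or a local disk can sit behind it.
class Storage:
    def put(self, key, value):
        raise NotImplementedError
    def get(self, key):
        raise NotImplementedError

class InMemoryStorage(Storage):
    """Local stand-in; a real deployment would back this with a cloud service."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

store = InMemoryStorage()  # in the cloud, inject the provider-specific class here
store.put("greeting", "hello")
print(store.get("greeting"))  # hello
```

This is the "step up from the web service idea": the app doesn't care where the scaling service lives, only that the boundary holds.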
Also, another feature of the cloud I forgot to mention is the idea of CDNs being provided for you - some cloud implementations might move your data around the globe to make it more efficient to serve, or just because that's where they've got space. If that's an issue, don't use the cloud.
I cannot answer your question (I'm not experienced with such projects), but I can tell you one thing... both CakePHP and CodeIgniter are designed for PHP4, in other words for really old technology. And it seems nothing is going to change in their case. Symfony (especially the 2.0 version, which is still in heavy beta) is worth considering, but as I said at the very beginning, I cannot back this up with my own experience.
For designing applications for deployment to the cloud, the main thing to consider is recoverability. If your server is terminated, you may lose all of your data. If you're deploying on Amazon, I'd recommend putting all data that you need persisted onto an Elastic Block Store (EBS) volume. This would be data like user-generated content/files, the database files, and logs. I also use EBS snapshots on a 5-day rotation so the volume is backed up itself. That said, I've had a cloud server up on AWS for over a year without any issues.
As for frameworks, I'm giving Grails a try at the minute and I'm quite enjoying it. Built to be syntactically similar to Rails but runs on the JVM. It means you can take advantage of all the Java goodness, like threading, concurrency and all the great libraries out there to build your web application.
Other than the monetary aspects, how different is Amazon's SimpleDB from Apache's CouchDB in the following terms:
Interfacing with programming languages like Java, C++ etc
Performance and Scalability
Installation and maintenance
I'm a fairly heavy SimpleDB user (I'm the developer of http://www.backupsdb.com/) but am currently migrating some projects off SimpleDB and into Couch, so I guess I can see this from both sides now.
1. Interfacing with programming languages like Java, C++ etc
Easier with Couch, as you can talk to it very easily using JSON. SimpleDB is a bit more work, largely due to the complexities of signing each request for security and the lower-level access you get, which requires you to implement exponential back-off in the case of busy signals, etc. You can now get good libraries for SimpleDB in many languages, though, and this takes the pain away in many respects.
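The exponential back-off mentioned above looks roughly like this sketch (`send_request` and `is_busy_error` are hypothetical placeholders for your HTTP layer; the retry schedule is the standard doubling-with-jitter pattern, not SimpleDB-specific code):

```python
import random
import time

def with_backoff(send_request, is_busy_error, max_retries=5, base_delay=1.0):
    """Retry a request, doubling the wait (plus jitter) after each busy signal."""
    for attempt in range(max_retries):
        response = send_request()
        if not is_busy_error(response):
            return response
        # Wait base_delay * 2^attempt plus random jitter, so concurrent
        # clients don't all retry in lockstep: ~1s, ~2s, ~4s, ...
        time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
    raise RuntimeError(f"still busy after {max_retries} retries")
```

A good SimpleDB client library does this for you, which is a large part of the "takes the pain away" point above.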
2. Performance and Scalability
I don't have any benchmarks, but for my own use case, CouchDB outperforms SimpleDB. It's harder to scale though - SimpleDB is great at that, you chuck more at it and it autoscales around you.
There are lots of occasionally irritating limits in SimpleDB though, limits on the number of attributes, size of attributes, number of domains etc. The main annoyance for many applications is the attribute size limit which means you can't store large forum posts for example. The workaround is to offload those into something else such as S3, but it's a bit annoying at times. Obviously CouchDB doesn't have that issue and indeed the fact that you can attach large files to documents is one thing that particularly attracts me to it.
Scaling-wise, you should also possibly be looking at BigCouch, which gives you a distributed cluster and is closer to what you get with SDB.
3. Installation and Maintenance
I actually found it much easier with CouchDB. I suspect it depends on which library you need to use for SimpleDB, but when I was starting with it, the Amazon supplied libraries weren't very mature and the open source community ones had various issues that meant getting up and running and doing something serious with it took more time than I would have liked. I suspect this is much better now.
CouchDB was surprisingly easy to install, and I love its web interface. Indeed, that would be my major criticism of SimpleDB: Amazon still don't have any form of web console for it, despite having web consoles for almost every other service. That's why we wrote the very basic BackupSDB, just so we could extract data in XML and run queries from a web browser. I'd like to have seen Amazon do something similar (but more powerful and better) by now, and I have been very surprised that they haven't. There are lots of third-party Firefox plugins and some applications for it, but I have the impression that SimpleDB isn't that widely used; this is only a hunch, really.
4. Other Observations
The biggest issue, I think, is that with SimpleDB you are entrusting all your data to a third party with no easy way of getting it out (you'll need to write something to do that), plus your costs keep gently rising. When you get to the point that the cost is comparable to a powerful dedicated database server, you kind of feel you'd get better value that way, but the migration headache is non-trivial by then, as you'll have a large commitment to the cloud.
I started off as a huge Amazon evangelist, and for most things I still am, but when it comes to SDB, I feel it's a bit of a hobby project for Amazon, the way the Apple TV was for Steve Jobs.