How to manage multiple domains in RASA? - machine-learning

By domain I do not mean the domain.yml file, but the general concept of a group of intents that belong to the same semantic domain/topic.
Following the above premise, suppose your conversational application has to manage 10 different domains, each one with a variable set of intents (say 20 intents for each domain).
How can I manage this with RASA?
As far as I know, I cannot manage multiple domains in RASA.
A possible solution is to “flatten” all intents into a single long list. But this may not be optimal, because I could have similar intents that belong to different domains, and a wrong intent classification could have wrong consequences (an intent mismatch implies a domain mismatch).
A typical approach could be to put a domain classifier in front of several single-domain apps, forwarding each message to the correct app, but how can that be done within a RASA multi-domain architecture?
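For example, here is a rough sketch of what I have in mind (purely hypothetical: it assumes each domain runs as its own Rasa server with the REST channel enabled, and the keyword "classifier" is just a placeholder):

    # Hypothetical router in front of several single-domain Rasa apps.
    # Assumes each domain's Rasa server exposes the REST channel
    # (credentials.yml) at the URLs below; ports are placeholders.
    import requests

    DOMAIN_ENDPOINTS = {
        "banking": "http://localhost:5005/webhooks/rest/webhook",
        "travel": "http://localhost:5006/webhooks/rest/webhook",
        # ... one entry per semantic domain
    }

    def classify_domain(text: str) -> str:
        """Placeholder domain classifier; in practice this would be a
        trained text classifier (or a small dedicated NLU model)."""
        return "travel" if "flight" in text.lower() else "banking"

    def handle_message(sender_id: str, text: str):
        domain = classify_domain(text)
        resp = requests.post(
            DOMAIN_ENDPOINTS[domain],
            json={"sender": sender_id, "message": text},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()  # bot responses from the chosen domain app

But this router lives entirely outside RASA, which is exactly what I am trying to avoid.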
Any idea? Thanks

Related

Zanzibar doubts about Tuple + Check API (authzed/spicedb)

We currently have a homegrown authz system in production that uses the OPA/Rego policy engine as its core for decision making (close to what Netflix has done). We have been looking at the Zanzibar ReBAC model to replace our OPA/policy-based decision engine, and AuthZed got our attention. Looking further at AuthZed, we like the idea of defining a schema of "resource + subject" types and their relations (like an OOP model). We like the simplicity of using a social graph between resource & subject to answer questions. But the more we dig in and think about real usage patterns, the more questions and gaps in clarity we find in some aspects. I've put those thoughts down below; hope it's not confusing...
[Doubts/Questions]
[tuple-data] Resource data/metadata must be continuously added into the authz system in the form of tuple data.
e.g. doc{org,owner} must be added as a tuple to populate the relation in the decision graph. Assume I'm a CMS system: am I expected to insert (or update) tuples in the authz engine for every single doc created in my CMS system, for its whole lifetime?
Resource-owning applications are kept on the hook (responsible) for continuously keeping this data current.
What about old/stale relation data (tuples)? The authz engine doesn't know whether they are stale or not... is the app burdened with tidying them up?
[check-api] An authz check is answered by a graph-walking mechanism, traversing the [resource --to--> subject] path.
There is no dynamic element in the decision making - such as a Rego rule script that decides based on a JSON payload.
How can dynamic decisions be made based on a JSON payload?
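To make this concrete, here is roughly how I picture the data and the check (identifiers are made up, and the zed syntax is just my reading of the docs, so treat it as a sketch):

    # one tuple per document, written by my CMS on create/update
    document:readme#org@organization:acme
    document:readme#owner@user:alice

    # answered by walking the graph from resource to subject
    zed permission check document:readme view user:alice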
You're correct about the application being responsible for the authorization data it "owns". If you intend to have a unique role/relationship for each document in your system, then you do need to write/delete those relationships as the referenced resources (or the roles on them, more likely) change; but if you are using an RBAC-like design for your schema, you'd have to apply these role changes anyway; you'd just apply them to SpiceDB instead of to your database. Likewise, if you have a relationship between, say, a document and its parent organization, you do have to write/delete those as well, but that should only occur when the document is created or deleted.
In practice, unless you intend to keep the relationships in both your database and in SpiceDB (which some users do), you'll generally only have to write them to one or the other. If you do intend to apply them to both, you can either just perform the updates to both at the same time, or use an outbox-like pattern to synchronize behind the scenes.
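As a rough sketch (illustrative names, not your actual model), the document/organization example above might be modeled like this in the schema, with one org tuple and one owner tuple written per document:

    definition user {}

    definition organization {
        relation member: user
    }

    definition document {
        relation org: organization
        relation owner: user

        # owners and members of the parent org can view
        permission view = owner + org->member
    }

Only the owner and org relationships are per-document writes; the organization membership tuples change far less often.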
Having to be proactive in your applications about storing data in a centralized system is necessary for data consistency. The alternative is federated systems that reach into other services. Federated systems come with the trade-off of being eventually consistent and can also suffer from priority inversion. I covered the centralized vs federated trade-offs in a bit of depth, along with other design aspects of authorization systems, in my presentation on the cloud native authorization landscape.
Caveats are a new feature in SpiceDB that enables dynamic policy to be enforced on the relationship graph. Caveats are defined using Google's Common Expression Language (CEL), which is a language used for policy in other cloud-native projects like Kubernetes. You can also use caveats to create relationships that eventually expire, if you want to take some of that bookkeeping out of your app code.
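As a minimal sketch of what that looks like (parameter and relation names are illustrative), a caveated relation pairs a CEL expression with context supplied on the relationship and/or at check time:

    caveat not_expired(now timestamp, expires_at timestamp) {
        now < expires_at
    }

    definition document {
        relation viewer: user with not_expired
        permission view = viewer
    }

Here expires_at would be stored with the relationship and now passed in the CheckPermission context, so the grant silently stops working once it has expired.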

Microservice Architecture using multiple Services and Ocelot: .NET Core

I'm migrating from a monolith to a microservice architecture with a single API gateway to access multiple services, i.e., cars, shoes, mobiles, etc. I'm using .NET 6, Docker, Docker Compose, and Ocelot for my project. I'd highly appreciate your feedback on my question below, based on two scenarios.
Scenario 1
Number of solutions [ApiGateway.sln, Cars.sln, Shoes.sln, Mobiles.sln, ...]
Docker Container [ApiGateway, Cars, Shoes, Mobiles, ...]
Docker Sub Containers for Cars [Hyundai, Honda, ...]
Ocelot used for [ApiGateway, Cars, Shoes, Mobiles]
Sub-ApiGateways: used for all services. The MasterApiGateway will interact with the Sub-ApiGateway of each service.
Details: For instance, a call is made to get all Hyundai cars. So the MasterApiGateway calls the Cars service. The Cars service then uses its own ApiGateway, configured using Ocelot, to call the required project, i.e., methods in Hyundai.csproj.
Yes, this can be simplified by removing Ocelot from Cars and converting the projects into methods.
Scenario 2
Number of solutions [ApiGateway.sln, Services.sln]
Docker Container [ApiGateway, Services]
Docker Sub Containers for Services [Cars, Mobiles, Shoes, ...]
Ocelot used for [ApiGateway]
Details: This is the more mainstream setup, but what if each service, e.g. Cars, is a big project in itself? Because of this I've tried to separate the services, i.e., cars.service, mobile.services, hosted on different ports as seen in the above image. Again, what if a service has a huge module, i.e., cars.services.honda has over 1000 methods? Because of this I've created sub-projects within Cars, again hosted on different ports. However, I am trying to encapsulate these sub-projects as a single service, i.e., for Cars, only port 5000 will be used by the MasterApiGateway.
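For clarity, the routing I have in mind for the MasterApiGateway looks roughly like this ocelot.json (hosts and ports are placeholders; older Ocelot versions use "ReRoutes" instead of "Routes"):

    {
      "Routes": [
        {
          "UpstreamPathTemplate": "/cars/{everything}",
          "UpstreamHttpMethod": [ "Get", "Post" ],
          "DownstreamPathTemplate": "/api/{everything}",
          "DownstreamScheme": "http",
          "DownstreamHostAndPorts": [ { "Host": "cars-service", "Port": 5000 } ]
        },
        {
          "UpstreamPathTemplate": "/shoes/{everything}",
          "UpstreamHttpMethod": [ "Get" ],
          "DownstreamPathTemplate": "/api/{everything}",
          "DownstreamScheme": "http",
          "DownstreamHostAndPorts": [ { "Host": "shoes-service", "Port": 5001 } ]
        }
      ],
      "GlobalConfiguration": { "BaseUrl": "http://localhost:8080" }
    }

So from the outside everything goes through the one gateway, and whether Cars fans out further (Scenario 1) or not (Scenario 2) is invisible to clients.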
Please suggest the best way to achieve this. Again, each service, and the sub-projects within each service, is a huge project in itself, so having all of these in one solution is something I'm trying to avoid. Thank you for your feedback.
This is a design problem; it is highly abstract and depends on business requirements, so there is no absolute solution.
The scenario where you have a car service with an API for each car type may look like the proper one, BUT as you said, each one of them is huge. THIS IS MY OPINION AND NOT A SOLUTION:
If it is just HUGE in data, don't bother yourself; it's better to go for one car service.
If all car types share the same sort of functionality (methods, processes, etc.), then one service is OK.
If each car type has its own methods and processes (not just getting data), then you have complexity in the business logic; go for a service per car type, or one main car service with the shared functionality, supported by services specific to each car type that contain the functionality particular to that type. Here the car service may play the role of an aggregator service.
If the car service becomes very, very huge in code size, such that maintenance requires more than 5 colleagues (the number may vary depending on organization size etc.), then it should be broken into pieces.
Also look at ubiquitous language in Domain-Driven Design. At the very least, it helps make your architecture more appropriate through close communication with domain experts.
Your problem is the really challenging part of microservices (true microservices), and it's beyond my experience (I am still studying microservice architecture and I keep finding huge mistakes in my previous work). So please discuss and study, and don't just rely on what I said.
These two articles are very useful:
decompose-by-subdomain decompose-by-business-capability
The first question you should ask yourself is why you need microservices at all; usually it's better to start with a modular monolith and then break out one service at a time when needed...
You really need to have a clear understanding of the reason why you do it and not just for the fun of creating a solution like this.
I agree with what Rouzbeh says about domain-driven design: start there, and find your true bounded contexts using the ubiquitous language as a guide.

Which approach is better -- Multiple SSIDs or Single SSID

I am setting up a wireless network at a university where we have a broad base of user types: students (some undergraduate, some postgraduate, Ph.D. students and others), support staff, faculty, and resident staff (along with their families).
I have to design the wireless network keeping all of those user groups in mind.
I have two options for providing wireless access to the users.
I need input (pros and cons) on these options:
OPTION I
Separate SSID for each user category (like separate SSID for IT students, separate SSID for commerce students; and so on).
If I go with this approach, I will end up creating roughly 20 SSIDs. With this approach I will be able to apply policies based on user category and can also limit access times for different user groups.
OPTION II
As a second option, I am thinking about creating a single SSID for all users (or maybe 2-3 SSIDs).
In this approach, I will not be required to create 'n' SSIDs and will only need to advertise ONE SSID for all users (and this will help keep things simple).
But what I will miss in this approach is granularity: I will not be able to apply different policies to different user groups.
I am also open to any other approach; I want to do things in the best possible manner.
Please suggest which approach I should go ahead with and, if possible, explain the pros and cons of each.
The option with a large number of SSIDs is undesirable because access points will broadcast beacons for each SSID 10 times per second at the lowest mandatory rate. This may consume significant airtime, especially if you need to support legacy 802.11b/g standards. The common recommendation is to use no more than 3-5 SSIDs on any single AP (link1, link2).
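A rough back-of-the-envelope illustrates why (assuming ~300-byte beacons sent at 1 Mbps, the typical lowest mandatory rate when 802.11b support is enabled):

    20 SSIDs x 10 beacons/s  = 200 beacons/s per radio
    ~300 bytes at 1 Mbps     ~ 2.4 ms of airtime per beacon
    200 x 2.4 ms             ~ 480 ms of airtime every second

That is roughly half the channel consumed by beacons alone; with 3-5 SSIDs and a higher minimum rate the overhead drops to a few percent.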
Depending on the functionality of the network equipment, different policies may be applied on a per-client or per-user basis.
You could differentiate user groups by using a RADIUS server and certificates. I believe some APs can even use this to assign specific VLANs. You get a lot of flexibility, but you need to issue a certificate to every potential client.
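For illustration, dynamic VLAN assignment is driven by three standard RADIUS reply attributes (RFC 3580). A hypothetical FreeRADIUS-style entry might look like this (with certificate-based 802.1X you would match on the certificate or a directory group rather than a password; only the VLAN part matters here):

    student1    Cleartext-Password := "changeme"
                Tunnel-Type = VLAN,
                Tunnel-Medium-Type = IEEE-802,
                Tunnel-Private-Group-Id = "20"

The AP or controller then drops that client into VLAN 20, so a single SSID can still map different user groups onto different subnets and policies.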
Or you could assign each user group to a different subnet via the DHCP server (but that does not sound very secure, as people could manually change their IP address to gain more privileges).

Cucumber folder structure for web + mobile app

I have a few questions on appropriate folder structure in cucumber:
I think I am going to organize my feature folders according to type_of_user/type_of_feature.feature, i.e. main_admin/add_a_customer.feature or franchisee/schedule_job.feature. The only slight issue with this is that, of the user types I have (cleaners, customers, franchisees and main admin/franchisor), the latter two share many features. For example, both franchisees and the franchisor have the ability to add new customers and schedule jobs, the only difference being that the franchisor has the ability to schedule a job for anyone, anywhere, i.e. the only real difference is permissions, not functionality. Does it matter that I will essentially be duplicating tests for these two users, given the proposed folder structure? Or should I be looking to separate folders by functionality only, and then by type of user?
For my mobile app, should I have these feature folders separate from the web app or should these go in the root as well: mobile/ios/cleaner_login.feature, mobile/android/cleaner_login.feature etc?
Regarding user types:
Organizing at the top level by user type has worked well for me. However, I would only consider user types separate if they actually used different features, not just if they differed in permissions with respect to specific objects as in the example you gave. You could consider both franchisees and franchisors "administrators", make a top-level folder for those, and just write scenarios for franchisees and franchisors for features that had different permissions for those roles.
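Concretely, that might look something like this (feature names taken from your question; the shared top-level folder is the key point):

    features/
      administrators/        # shared by franchisees and the franchisor
        add_a_customer.feature
        schedule_job.feature
      cleaners/
        cleaner_login.feature
      customers/
        ...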
If you're a developer and writing RSpec specs in addition to Cucumber features, you might even just write specs instead of features to cover the difference between franchisees and franchisors. (I would only do that if the differences between franchisee and franchisor were fairly trivial and not worth exposing in Cucumber.) If you're QA and testing only from the outside of course it'll all have to be in Cucumber.
I would certainly not systematically duplicate entire scenarios for the sake of any organization. The extra work required to maintain the duplication, and the errors when you forget, would be far worse than the bit of extra work required to follow a slightly more complicated system that minimizes duplication!
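Where the only real difference is permissions, a single scenario outline parameterized by role (illustrative wording, not your actual steps) keeps one copy of the behaviour:

    Feature: Schedule a job

      Scenario Outline: An administrator schedules a job
        Given I am signed in as a <role>
        And a customer exists in <territory>
        When I schedule a job for that customer
        Then the job appears on the schedule

        Examples:
          | role       | territory                      |
          | franchisee | my own territory               |
          | franchisor | another franchisee's territory |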
Regarding web and mobile: How to handle different platforms depends on how different they are.
If you have a web app and a native (Android, iOS) mobile app the step implementations will be completely different and your tests will need to be in different projects altogether. That probably won't mean that much duplication, since the users and features in the web and mobile apps will probably be rather different.
If you have two web apps, one for desktop and one for mobile, there are no technology issues. But again it will depend on how similar the two apps are. If they're different, separate them at the top level (even before users). If they're very similar, separate them only when necessary and only at the scenario level.

Designing a modern platform in Rails 3

I'm in the early stages of prototyping a Rails 3 application that will expose a public API. The site has three separate concerns which I am planning to split across three subdomains.
api.mysite.com
The publicly exposed API.
admin.mysite.com
The admin portal for creating blogs (using the public API).
x.mysite.com
The public blog site created at admin.mysite.com where x is the name of the blog. This too will make use of the public API.
All three will share domain objects. For example, you should be able to login to admin.mysite.com using an account you created on api.mysite.com or x.mysite.com.
Questions
Should I attempt to build one rails application to handle all three concerns or should I split this in multiple applications each handling a specific concern?
What are the Pros & Cons of each?
Does anyone have any insight into how some of the larger sites (basecamp, github, shopify) are organized?
Your question is fairly general so I'll try and answer in general terms. And the fact that you mention "larger sites" leads me to the conclusion that you're concerned about scaling.
In the beginning it is definitely going to be easier to build one application - especially since the domain is shared. You can do separate controllers for the various interfaces (api, html, etc) but with a shared back-end. This will reduce code duplication and the complexity of keeping 3 apps in sync. Also remember that you might change your mind about features based on user feedback and you want to be nimble enough to respond quickly.
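For example (a sketch only, using Rails 3 subdomain constraints and made-up resource names), one application can route each subdomain to its own set of controllers while sharing the same models:

    # config/routes.rb -- illustrative only
    Mysite::Application.routes.draw do
      # api.mysite.com
      constraints :subdomain => "api" do
        namespace :api, :path => "/" do
          resources :blogs
        end
      end

      # admin.mysite.com -- admin controllers over the same models
      constraints :subdomain => "admin" do
        namespace :admin, :path => "/" do
          resources :blogs
        end
      end

      # x.mysite.com -- public blog fronts, matched by a custom subdomain
      # constraint (any object that responds to matches?)
    end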
The main benefit I can see of separating out three different deployables is that you can have an independent deploy schedule for each. For example, a bug fix in the api won't have to wait for admin to be ready to deploy. Or that you can have separate teams working in parallel.
If you're careful about what you keep in your session you'll be able to deploy multiple instances of your application on multiple servers, pointing at the same database (a.k.a. horizontal scaling). Each of these instances is identical to the others, and a load balancer (either dedicated hardware or virtual) directs traffic between them. Eventually this approach runs out of steam when your database can't handle the load. At that point you can look at more caching, sharding, NoSQL and all sorts of clever scaling techniques.
Most (but not all) larger sites end up doing some sort of horizontal scaling with some sharding of data.
All told, focus on getting a useful application to your users. If things take off you can worry about scaling. More applications fail because the user experience is awful than because they can't scale.
