How to host a composite model on AWS SageMaker - machine-learning

I created separate predictive models (using SageMaker's built-in algorithms) on different segments of the data. In production these models need to be called based on the segment of the input data.
Is it possible to host a composite model in SageMaker? How do I define the config for deploying a composite model?

Currently SageMaker hosts a single model behind each endpoint. If you want to compose multiple endpoints, you can do that using an AWS Lambda function that gets the input features, calls the relevant endpoints, and combines the results before responding to the original request.
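A minimal sketch of such a Lambda function in Python with boto3 (the segment names, endpoint names, and CSV payload format below are placeholders for your own setup, not anything SageMaker prescribes):

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# Hypothetical mapping from data segment to the endpoint hosting that segment's model
SEGMENT_ENDPOINTS = {
    "segment_a": "model-segment-a-endpoint",
    "segment_b": "model-segment-b-endpoint",
}

def lambda_handler(event, context):
    # Pick the endpoint based on the segment of the incoming record
    endpoint = SEGMENT_ENDPOINTS[event["segment"]]

    response = runtime.invoke_endpoint(
        EndpointName=endpoint,
        ContentType="text/csv",          # must match what the model expects
        Body=event["features"],          # e.g. "1.0,2.5,0.3"
    )
    prediction = response["Body"].read().decode("utf-8")
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}
```

The Lambda can sit behind API Gateway so clients still see a single prediction endpoint.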

Related

Dynamic Spring Data Classes for Neo4J based on an Ontology

I have to create a web service that needs to interact with a Neo4J database using the Spring framework with Spring-Data-Neo4J. This requires a static data domain model, e.g. defined labels, relations, properties.
The problem is that my data is based on an ontology (via the neosemantics plugin) which could be modified in the future. It would be great if the application could automatically adapt to it. This way, the data model could be extended by editing the ontology only, and no additional programming knowledge would be necessary.
Does this mean I have to generate the Spring data classes dynamically (based on the Ontology) or is there a better way to achieve this with Spring-Data-Neo4J (or should I use a different framework)?
Sure, you could come up with a way to generate a set of classes from an ontology. But that is probably going to present more problems than it solves.
An automatically-generated set of classes may not correspond to an appropriate data model for your use cases. The determination of the appropriate data model still requires a human.
Also, the new classes may be incompatible with the existing client code. And you may have to migrate the existing DB over to the new data model. Fixing all that requires humans as well.
I ended up using the Java Neo4j driver instead of Spring-Data-Neo4j, with a generic node class implementation that only has an id field, a list of labels, and a map of properties. The labels and properties to be set can be checked against the ontology prior to creating the nodes in the database. This way I can enforce a specific set of node labels and properties by only modifying the ontology, without having to generate the specific Spring-Data-Neo4j data classes.
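As a rough sketch of that idea, here is the same approach written against the official Python driver instead of the Java one (the allowed_labels/allowed_properties sets stand in for whatever vocabulary you derive from the ontology):

```python
from neo4j import GraphDatabase

class GenericNode:
    """Generic node: just an id, a list of labels, and a map of properties."""
    def __init__(self, node_id, labels, properties):
        self.node_id = node_id
        self.labels = labels
        self.properties = properties

def create_node(driver, node, allowed_labels, allowed_properties):
    # Validate against the ontology-derived vocabulary before touching the database
    if not set(node.labels) <= allowed_labels:
        raise ValueError(f"Labels not in ontology: {set(node.labels) - allowed_labels}")
    if not set(node.properties) <= allowed_properties:
        raise ValueError(f"Properties not in ontology: {set(node.properties) - allowed_properties}")

    # Labels cannot be Cypher parameters, so they are interpolated only after validation
    label_fragment = ":".join(node.labels)
    query = f"CREATE (n:{label_fragment} {{id: $id}}) SET n += $props"
    with driver.session() as session:
        session.run(query, id=node.node_id, props=node.properties)

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
```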

How to Separate an Existing Project using Microservice Architecture?

I have created a project in ASP.NET MVC for the banking sector.
At the time of development, we did not think about scaling, so we did not use any kind of modular architecture.
But now I am facing lots of problems regarding deployment and development of new features.
If I develop any new feature, I currently have to release the entire project, as there is only one namespace - projectmain.dll.
Now I want to separate all controllers into separate projects, deploy them separately, and consume them from the main project using a microservice architecture.
Note
My views are tightly coupled with the controllers.
So how can I migrate these controllers to a microservice architecture?
Project Structure Explanation
"How to migrate to microservices" needs a very very long answer that can be a book. I can however, give you some directions that could help you.
First of all, you could follow the DDD approach to help you correctly identify the bounded contexts of your application. In an ideal situation, every domain should correspond to a bounded context, so you could split by bounded context. Every bounded context could have one or more microservices, but a microservice should not be larger than a bounded context. From what I see, your controllers are already split by domain or at least by subdomain. You could try to make a microservice for each controller and see how it goes; it is possible to split further, e.g. by Aggregate from DDD (every Aggregate type could be a microservice).
Second, you should have a separate project for each microservice. Ideally, if you want resilience and scalability, a microservice should not call another microservice during an external request. This means that when a microservice receives a request from a client, it should not call another microservice; for this it should already have all the needed data from the other microservices in its local storage (kept up to date using background tasks, with synchronous or asynchronous calls).
Last but not least, each microservice should have its own database; they should not share the same database or table.
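A minimal sketch of that "local copy kept up to date by a background task" idea, with Python standing in only as an illustration of the pattern (the customer-service URL, polling interval, and schema are hypothetical):

```python
import sqlite3
import time
import requests

# Hypothetical endpoint owned by another microservice
CUSTOMER_SERVICE_URL = "http://customer-service/api/customers"

def sync_customers(db):
    # Pull the data this service needs and keep a local read copy of it,
    # so incoming requests never have to call the other service directly.
    customers = requests.get(CUSTOMER_SERVICE_URL, timeout=5).json()
    with db:
        db.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)")
        for c in customers:
            db.execute("INSERT OR REPLACE INTO customers (id, name) VALUES (?, ?)",
                       (c["id"], c["name"]))

if __name__ == "__main__":
    db = sqlite3.connect("local_copy.db")   # this service's own database
    while True:                              # a real service would use a scheduler or a message queue
        sync_customers(db)
        time.sleep(60)
```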
In this situation, I guess one option would be to convert your controllers into a Web API, with each action method returning JSON data instead of returning a view to render. This would now be treated as a separate back end which could be bundled as one microservice. The next step would be to convert your Razor views into a purely front-end microservice (using a framework like Angular or React) with two layers:
A service layer responsible for making API calls that fetch the JSON data returned by your Web API
A component layer responsible for mapping the JSON data fetched by your service to your UI elements/forms

Google Cloud Endpoints - How to implement partial responses?

I am currently developing a proof-of-concept web service in Java using Google Cloud Endpoints and Objectify. Currently I want to implement/define partial responses to client queries to minimize GAE datastore ops.
Here is my research and my observations so far:
On the GAE datastore layer I know there exists the concept of projection queries, which happens during the entity fetch phase, explained here. (optimization of datastore ops possible - see the sketch after these observations)
On the Google Cloud Endpoints layer I know there exists the concept of field masking, which happens after the entity was fetched from the GAE datastore, explained here. (optimization of datastore ops not possible)
From the YouTube API I know there is the concept of partial resources, which seems to come close to the thing I want to achieve. (optimization of datastore ops already implemented)
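For reference, this is roughly what a projection query looks like; the snippet uses the Python NDB client purely as an illustration (the question's stack is Java plus Objectify), with a made-up Article model:

```python
from google.appengine.ext import ndb

class Article(ndb.Model):
    title = ndb.StringProperty()
    author = ndb.StringProperty()
    body = ndb.TextProperty()   # large property we deliberately do not fetch

# The datastore materialises only the projected (indexed) properties,
# which is where the savings in datastore ops come from.
summaries = Article.query().fetch(projection=[Article.title, Article.author])
for article in summaries:
    print(article.title, article.author)
```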
Now my questions:
1.) Is there a "simple way" to implement partial responses like it is done in the YouTube API, e.g. using certain configurations or annotations?
2.) If there is no "simple way" to implement partial responses, would it be a "preferred way" to decompose entities and build relational entities with different property groups? These relational entities could then be composed into a partial response entity which is returned to the client. As far as I know, the downside of this approach would be that every response entity needs to be saved first before it can be returned to the client.
3.) Are there any other preferred solutions to this problem?

AngularJS and ASP.NET MVC: best strategy for client-side models

I'm currently looking into client-side model binding to HTML templates, especially with AngularJS. I was wondering what the best strategy is for retrieving client-side viewmodels from the server, e.g. a viewmodel containing not only the data for editing but also the data for select lists or drop-down lists etc.
As I see it, one has several options:
retrieve one viewmodel from the server using e.g. Web API, containing ALL the data needed for the view model
render a client-side viewmodel to JavaScript inside the server-side HTML
retrieve data for the viewmodel using multiple Web API calls, e.g. one for the main data to be edited, and one for each piece of additional data (select lists)
I didn't encounter many examples for option 1, as it seems that Web API is used mostly for CRUD operations returning specific data for one type of object, e.g. Person or Order.
Option 2 conforms to the practice of server-side view models with ASP.NET MVC, but I have not seen many examples using this technique in combination with AngularJS.
Option 3 looks clean if one considers separation of concerns, but has the disadvantage of multiple smaller AJAX requests.
Could you share your thoughts and experiences ?
Personally, I use option #3. The app would make requests to "prepare the editor", such as populating dropdown lists, and requests to fetch the data you want to edit (or, if you are creating a new object, any default initial values). I think this separates concerns better than option #1, which ties together "model" data and "support" data.
But as you pointed out, this does make extra calls and, if they are very numerous, can noticeably slow down the page (or increase complexity; on a big form with lots of dependent fields, ordering may become important).
What I usually do is have the server provide a "combined" api (e.g. /editor/prepare-all) while also providing small pieces (e.g. /editor/prepare-dropdown-1, /editor/prepare-dropdown-2). When your editor loads, you use the combined one; if there are dependencies between fields, you can request only the data for the dependent fields (e.g. /editor/prepare-dropdown-2?dropdown1-value=123). I believe this has little impact on the server's complexity.
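To illustrate the "combined endpoint plus individual pieces" idea, here is a tiny sketch with Flask standing in for the actual ASP.NET Web API (routes and data are made up):

```python
from flask import Flask, jsonify

app = Flask(__name__)

def countries():
    return ["Belgium", "Netherlands", "France"]

def currencies():
    return ["EUR", "USD"]

@app.route("/editor/prepare-dropdown-countries")
def prepare_countries():
    # Small piece: only the data for one dropdown
    return jsonify({"countries": countries()})

@app.route("/editor/prepare-dropdown-currencies")
def prepare_currencies():
    return jsonify({"currencies": currencies()})

@app.route("/editor/prepare-all")
def prepare_all():
    # Combined call: one round trip bundling all the "support" data the editor needs
    return jsonify({"countries": countries(), "currencies": currencies()})
```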
I would agree with st. never and have definitely used option #3, and I think combining $resource and Web API would be a perfect RAD combination. However, I've also worked on very complex screens where I've wanted sub-second response times, so I've resorted to optimising the entire development 'column'. I develop using SQL Server as my backend database, so I've used its native support for XML to return structured XML from a stored procedure, which I then serialise into a .NET model (POCO) and pass to a JSON serialiser for transfer to the browser. I might have some extra business processing to perform against the POCO, but this still leads to a very simple code structure for transferring a fairly complex structure of data. Typically it's also very fast, because I've made one call to the database, and monitoring and optimising one stored procedure is very simple.

Domain Driven Design - where does data parsing belong

In this application I'm developing, the domain revolves around, say, electrical appliances. There are several specialized versions of this entity. Appliances can be submitted to the application, and this happens from web services using data transfer objects.
While this is working great, I am now looking at importing appliances from several text-based file formats as well. Consider this workflow:
Directory watcher service sees a new appliance file has been added
The service uses an application service from my application to submit the appliances described by the file
Now, the application service could have a method with the following name and signature: ApplianceService.Register(string fileContents). I'm thinking the directory watcher service will use this service method and pass it the entire contents of the file. The application service will then coordinate the parsing. Parsing the contents of the file and transforming it into complete appliance entities involves several steps. Now, my question is:
Question: Is this correct, or should the parsing logic live within the directory watcher service? Each type of file format is kind of a part of the domain, but then again, it's not. After the files are parsed into entities from either format, the entity will never know that it once was represented using that format. If the parsing logic should live within the watcher service, I would pass the new appliances to the registration service as data transfer objects.
I guess what I'm concerned about is how an appliance should be represented before it enters my application (using the application layer as the point of entry). When submitting appliances from web services, I pass a sequence of appliance data transfer objects. This is different from taking a potentially oddly formatted file and parsing that into a data transfer object, since the mapping from the web service request to a data transfer object is pretty straightforward and not that complex.
Any thoughts on this are very much welcome.
According to the SRP (Single Responsibility Principle), you should keep the approach you are considering. The Directory Watcher service should do what it does best - watch for new files in a directory and pass them to another service, i.e. the Appliance Service, which converts them into data transfer objects. Now you can use your web services to submit those data transfer objects to the application.
I would make an interface for the Appliance Service, with at least one method called Convert(). An Appliance Parsing Service class can implement the interface. Let's say later you have a different source (SQL) for appliances. You can then write another class, Appliance SQL Service, that also implements Appliance Service.
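A rough sketch of that interface idea, with Python used only for illustration; the appliance DTO fields and the "name;wattage" file format are invented for the example:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ApplianceDto:
    name: str
    wattage: int

class ApplianceService(ABC):
    @abstractmethod
    def convert(self, source: str) -> list:
        """Turn some external representation of appliances into DTOs."""

class ApplianceParsingService(ApplianceService):
    """Converts a text-based file format (here: one 'name;wattage' entry per line)."""
    def convert(self, source: str) -> list:
        appliances = []
        for line in source.splitlines():
            if line.strip():
                name, wattage = line.split(";")
                appliances.append(ApplianceDto(name=name.strip(), wattage=int(wattage)))
        return appliances

class ApplianceSqlService(ApplianceService):
    """A later implementation reading appliances from SQL instead of files."""
    def convert(self, source: str) -> list:
        raise NotImplementedError("left out of the sketch")
```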
I'd say that the ApplicationService is the right place for the parsing logic, though it would not be an entirely bad fit to put it in the DirectoryWatcher service.
My reasoning for that statement comes from a Single Responsibility Principle point of view: The DirectoryWatcher in particular should not be responsible for managing all the various input file formats. It should just grab what it receives and pass it on to the right place (already a very involved responsibility).
Where my head got a little turned around (which is maybe the same as yours?) was that the parsing itself isn't really the responsibility of the ApplicationService, which coordinates your various domain entities. However, I feel that the ApplicationService is the right place to leverage some sort of Builder pattern, abstracting away the details of parsing each file format while also creating a clear place in the Domain where this parsing is coordinated.
As for each file format being part of the domain or not. I'd say that they are - you can imagine them all being expressed as part of the ubiquitous language, having various domain experts talking about the quirks of x file format or y file format and the data expressed. That sort of parsing and mapping is very much first class domain logic.
Another nice side of your original design is that I think it would simplify adding new input file sources and formats and modifying existing ones. You have decoupled the file source from the specific format, and created a nice interface point (the ApplicationService) where your new file providers access the core application.
