Multiple producers for one consumer or request - RSocket

I can see that I can route one request to a responder, and that there are different interaction models like fireAndForget, but I have a case where several producers are each responsible for their own subset of the data. Specifically, the producers are actually Kafka consumers, each assigned a unique set of partitions of the overall topic.
Is there a way to route a request to all of these producers and let each producer decide whether it has something to send in response? A "findAllUsers()" request would need data from all of them, so all of them would need to contribute a portion of the response. Is this possible with RSocket, or does it only support 1:1 connections?

Only peer-to-peer, so you have to fan the request out yourself. I guess you want something like this:
// Create connections to all producers.
Flux<RSocket> rSockets = Flux.from(...);
Payload request = DefaultPayload.create(...);
// Fan the request out; flatMap merges all the producers' result streams.
rSockets
    .flatMap(rSocket -> rSocket.requestStream(request))
    .subscribe();
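The same fan-out-and-merge idea can be shown without Reactor at all. Below is a hedged plain-Java sketch, not RSocket API: `Producer` is a hypothetical stand-in for one connected peer, each peer returns the (possibly empty) portion of the data its Kafka partitions hold, and the portions are concatenated once every peer has answered.

```java
import java.util.*;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

// Hypothetical stand-in for one producer/peer: given a request, it returns
// the rows it owns (possibly none, if its partitions hold no matches).
interface Producer {
    CompletableFuture<List<String>> request(String query);
}

class FanOut {
    // Send the same request to every producer and merge all partial results.
    static CompletableFuture<List<String>> findAll(List<Producer> producers, String query) {
        List<CompletableFuture<List<String>>> parts = producers.stream()
                .map(p -> p.request(query))
                .collect(Collectors.toList());
        return CompletableFuture.allOf(parts.toArray(new CompletableFuture[0]))
                .thenApply(ignored -> parts.stream()
                        .flatMap(part -> part.join().stream())
                        .collect(Collectors.toList()));
    }
}
```

Note that this sketch waits for every peer before emitting anything; with Reactor's flatMap you get the same merge but with backpressure and results streamed as each peer answers.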

Related

Siesta Service configureTransformer for URL decode models based upon parameters

I have an endpoint that supports POST requests. The URL is the same for all requests, but the parameters are different for each one. It is basically a free-form query service where the client can formulate the query and the fields that will be returned in the response. I would like to define methods on the service that represent specific queries, with a model for each query, but I am uncertain how to configure the transformer for each "query-based" endpoint.
Is there a way to accomplish this, or is it best to simply work with a JSON dictionary?
Thanks...
I think I found the solution to my problem, and it was rather simple: just build the resource and supply it to configureTransformer.
func getUserIds() -> Request {
    let res = resource(endPoint)
        .withParam("query", "SELECT id FROM users where status='Active'")
    configureTransformer(res) {
        try self.jsonDecoder.decode(UserIdResponse.self, from: $0.content)
    }
    return res.request(.post)
}

iOS offline-first app single-responsibility issue

I am converting my existing online app to an offline-first app.
This is how the online version works:
I have a VIPER interactor which requests data from a service. The service knows how to request data from the API layer. I then get callbacks with a result or an error, process them in the interactor, and update local storage if needed. Nothing super hard here.
So all elements (Interactor, Service and API) are single-responsibility objects and each does only one task:
The Interactor handles the result/error branching and triggers the presenter to display data
The Service calls the API
The API calls Alamofire to do the rest of the work with requests.
Now, for the offline-first app, I added a RequestService where I store all my requests; they are then sent on a Timer, and only when the connection is online.
So now I need to put extra responsibility somewhere to perform the following checks.
First of all, I need to check reachability:
if noConnection() {
    loadLocalDataToShow()
}
Next, I need to make sure all pending requests have been sent:
if requestsService.pendingRequests > 0 {
    loadLocalDataToShow()
}
As I see it, there are two approaches:
Make a global check, e.g. have the API layer do these checks for me and return some enum Result(localData) or Result(serverData) after Alamofire comes back with a result, or when there is no connection.
Or make the interactor do these checks, like this:
func getData(completion: ...) {
    Service.getData { result in
        if requestService.pendingRequests > 0 {
            completion(loadLocalData())
            return
        }
        if result.connectionError {
            completion(loadLocalData())
            return
        }
        // result is the data returned from the API, e.g. an array of
        // entities fetched directly from the server as JSON
        completion(result)
    }
}
Now every interactor that requests data will repeat roughly the same checks, but it seems we haven't broken single responsibility. Or am I on the wrong track?
TL;DR: IMHO, I'd go with second one, and it won't be breaking SRP.
VIPER Interactors tend to have multiple services, so it's perfectly fine to have something like OnlineRequestService and OfflineRequestService in the interactor, and act accordingly.
Hence, you won't be breaking SRP if you decide which data/service to use in the interactor itself.
To elaborate, suppose there had been an initial requirement that users may use the app both online and offline. How would you have planned your architecture? I would create the services mentioned above and let the interactor decide which service to use.
The Interactor in VIPER is responsible for making requests, and it may work with different services, such as CoreDataService, NetworkService, even UserDefaultsService. We cannot say the interactor does only one task, but that doesn't necessarily mean it has more than one responsibility: its responsibility is to manage the flow between the data and the presenter, and if a decision is needed about which data (online/offline) to use, that decision falls within the interactor's responsibility.
If it still doesn't feel right, you may create an additional interactor, but who/what would decide which interactor to use?
Hope this helps.
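The decision described above is language-neutral, so here is a hedged sketch in Java for brevity (every name is hypothetical, not VIPER or iOS API): the interactor owns both services plus the two checks, and picks the data source in exactly one place.

```java
import java.util.List;
import java.util.function.BooleanSupplier;
import java.util.function.IntSupplier;

// One protocol, two implementations: the interactor decides which to use.
interface DataService {
    List<String> getData();
}

class Interactor {
    private final DataService onlineService;
    private final DataService offlineService;
    private final BooleanSupplier isReachable;   // reachability check
    private final IntSupplier pendingRequests;   // queued, unsent requests

    Interactor(DataService onlineService, DataService offlineService,
               BooleanSupplier isReachable, IntSupplier pendingRequests) {
        this.onlineService = onlineService;
        this.offlineService = offlineService;
        this.isReachable = isReachable;
        this.pendingRequests = pendingRequests;
    }

    // The decision lives in one place: fall back to local data when offline,
    // or while queued requests haven't been flushed to the server yet.
    List<String> getData() {
        if (!isReachable.getAsBoolean() || pendingRequests.getAsInt() > 0) {
            return offlineService.getData();
        }
        return onlineService.getData();
    }
}
```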

Spring AMQP custom message correlation using an identifier generated by the app

We need custom message correlation for a Spring Integration AMQP outbound gateway, using an identifier generated by the app.
The requirement is to correlate outbound-gateway messages with an app-generated id: the actual processing of the messages happens in an external system, and the response to a request comes back as a POST from that external system, so we cannot rely on the amqp_correlationId data.
If you could provide the steps for this, that would be great.
Solution Tried
Set the correlation key in the rabbit template
Create a Message of type AMQP and set a header, named with the correlation key configured on the template, to some generated value
Provide header-mapper in the AMQP outbound gateway for the custom header name
Result
The RabbitTemplate was able to map the custom header; however, it generates its own value rather than using the value that was set in the request/reply messages.
Please open a new feature JIRA Issue for this.
Bear in mind it will be your responsibility to ensure the correlationId is unique.
You might be able to work around it by subclassing the template and overriding sendToRabbit to set up the correlationId there; you would have to save off the template's own correlationId (ideally in a different message header, but perhaps in a Map) and have the server return that header too.
@Override
protected void sendToRabbit(Channel channel, String exchange, String routingKey, boolean mandatory,
        Message message) throws IOException {
    // fix up the message properties here (e.g. swap in your own correlationId)
    super.sendToRabbit(channel, exchange, routingKey, mandatory, message);
}
You would also have to override onMessage() to restore the proper correlationId on the inbound reply.
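Outside of Spring, the bookkeeping this scheme needs boils down to a map from your app-generated id to the pending reply. A minimal plain-Java sketch, assuming the external system echoes your custom header back in its POST (all names here are hypothetical, not Spring AMQP API):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// App-side correlation: register an app-generated id before sending, and
// complete the pending reply when the external system POSTs back carrying
// that same id. Keeping the id unique is the app's responsibility.
class CorrelationRegistry {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Call before sending the outbound message that carries the id in a header.
    CompletableFuture<String> register(String appCorrelationId) {
        CompletableFuture<String> reply = new CompletableFuture<>();
        pending.put(appCorrelationId, reply);
        return reply;
    }

    // Call from the HTTP endpoint that receives the external system's POST.
    void onExternalResponse(String appCorrelationId, String body) {
        CompletableFuture<String> reply = pending.remove(appCorrelationId);
        if (reply != null) {
            reply.complete(body);
        }
    }
}
```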

What's the use of getCollisionKey() in relayjs?

I am new to Relay and saw this getCollisionKey in the treasurehunt tutorial:
getCollisionKey() {
    return `check_${this.props.game.id}`;
}
In the docs it states - Implement this method to return a collision key. Relay will send any mutations having the same collision key to the server serially and in-order.
Please help me understand what getCollisionKey is for. I would really appreciate it.
collisionKey is an identifier that tells Relay which mutations need to be executed one after the other and which can be parallelized.
We need this mostly because of network inconsistencies.
Take for example a mutation LikeOrUnlikePost, which likes or unlikes the post depending on whether you already like it.
Suppose you like the post, then a second later you decide to unlike it.
But the first mutation fails, so it never reaches your server, and only one LikeOrUnlikePost mutation is applied.
The result is that you think you unliked the post (you clicked twice), but in fact you only liked it (only one mutation succeeded).
This is what collisionKey is for: it tells Relay to queue mutations that share the same collision key.
In the case above, the second mutation would be queued behind the first, and since the first fails, the second is never executed, so the inconsistency never happens.
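Relay implements this internally, but the scheduling idea can be sketched independently. Below is a hedged toy model in Java (not Relay code): mutations sharing a collision key are drained strictly in order, a failure drops everything still queued behind it for that key, and distinct keys never interact.

```java
import java.util.*;

// Toy model of collision-key scheduling. Each key gets its own FIFO queue;
// run() drains every queue serially, dropping a queue's remainder when a
// mutation in it fails. Different keys could run in parallel in reality.
class MutationScheduler {
    record Mutation(String name, boolean fails) {}

    private final Map<String, Deque<Mutation>> queues = new LinkedHashMap<>();
    final List<String> sent = new ArrayList<>();     // reached the server
    final List<String> dropped = new ArrayList<>();  // never sent

    void submit(String collisionKey, Mutation m) {
        queues.computeIfAbsent(collisionKey, k -> new ArrayDeque<>()).add(m);
    }

    void run() {
        for (Deque<Mutation> queue : queues.values()) {
            while (!queue.isEmpty()) {
                Mutation m = queue.poll();
                if (m.fails()) {
                    // The failed mutation takes down everything queued behind it.
                    queue.forEach(rest -> dropped.add(rest.name()));
                    queue.clear();
                } else {
                    sent.add(m.name());
                }
            }
        }
    }
}
```

Running the like/unlike scenario through this model shows the point: the failed "like" causes the queued "unlike" to be discarded instead of silently toggling the server the wrong way.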

Multiple objects waiting for the same API response

I have API code which loads data necessary for my application.
It's as simple as:
- (void) getDataForKey:(NSString*) key onSuccess:(id (^)())completionBlock
I cache the data returned from the server, so subsequent calls of that function should not make a network request, unless some data is missing for the given key, in which case I need to load it from the server again.
Everything was okay as long as I had one request per screen, but now I have a case where I need to do this for every cell on a single screen.
The problem is that my caching doesn't work, because 5-6 more requests are created before the response to the first one comes in.
What could be a solution here, so that I don't create multiple network requests and instead make the other calls wait for the first one?
You can make a RequestManager class and use a dictionary to track the requests currently in flight. Then:
If the next request is of the same type as the first, don't make a new request; return the first one instead. With this solution you need to maintain a list of completion blocks, so that you can deliver the result to all requesters.
Alternatively, let the next request of the same type wait on another thread until the first one is done, then make a new request; your API will read the cache automatically. You must make sure your code is thread-safe.
Or you can use operation queues to do this. Some documents:
Apple: Operation Queues
Soheil Azarpour: How To Use NSOperations and NSOperationQueues
Maybe there are many time-consuming solutions for this, but I have a trick. Create a BOOL in the AppDelegate, defaulting to FALSE. When you receive the first response, set it to TRUE. Then, on other screens, before making the request, check the value of that BOOL: if it's TRUE, the response has been received, so go ahead; otherwise don't do anything.
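The first approach above (a RequestManager that coalesces duplicate in-flight requests) can be sketched as follows. This is a hedged, illustrative Java model, not iOS code: the first caller for a key triggers the real fetch, later callers for the same key are only queued, and every queued callback fires when the single response arrives.

```java
import java.util.*;
import java.util.function.Consumer;

// Coalesces concurrent requests for the same key: one network call per key,
// any number of waiting callbacks. (All names are illustrative.)
class RequestManager {
    private final Map<String, List<Consumer<String>>> pending = new HashMap<>();
    private final Map<String, String> cache = new HashMap<>();
    int networkCalls = 0; // exposed for demonstration only

    synchronized void getData(String key, Consumer<String> callback) {
        String cached = cache.get(key);
        if (cached != null) {          // already fetched: answer immediately
            callback.accept(cached);
            return;
        }
        List<Consumer<String>> waiters = pending.get(key);
        if (waiters != null) {         // a request is in flight: just wait
            waiters.add(callback);
            return;
        }
        List<Consumer<String>> list = new ArrayList<>();
        list.add(callback);
        pending.put(key, list);
        networkCalls++;
        startNetworkRequest(key);      // stub; response arrives via onResponse
    }

    // Called once when the network response for `key` comes in.
    synchronized void onResponse(String key, String value) {
        cache.put(key, value);
        List<Consumer<String>> waiters = pending.remove(key);
        if (waiters != null) {
            waiters.forEach(cb -> cb.accept(value));
        }
    }

    private void startNetworkRequest(String key) {
        // A real implementation would issue the async request here.
    }
}
```

On iOS the same shape works with a dictionary of completion-block arrays guarded by a serial dispatch queue instead of `synchronized`.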
