I am switching my existing online app to an offline-first architecture.
This is how online works:
I have a VIPER interactor which requests data from a service. The service knows how to request data from the API layer. I then get callbacks with a result or an error, process them in the interactor, and update local storage if needed. Nothing super hard here.
So all elements (Interactor, Service and API) are single-responsibility objects, and each does only one task:
The Interactor handles the branching logic for a result or an error and triggers the presenter to display data.
The Service calls the API layer.
The API layer calls Alamofire to do the rest of the request work.
Now, for the offline-first app, I added a RequestService where I store all my requests; it sends them using a Timer, and only when the connection is online.
So now I need to overload someone's single responsibility to perform the following checks.
First of all, I need to check reachability:
if noConnection() {
    loadLocalDataToShow()
}
Next, I need to make sure all requests have been sent:
if requestsService.pendingRequests > 0 {
    loadLocalDataToShow()
}
So there are two approaches, as I see it:
Make a global check: have the API layer do these checks for me and return some enum, Result(localData) or Result(serverData), either after Alamofire comes back with a result or when there is no connection.
Or, second, make the interactor do these checks, like this:
func getData(completion: @escaping (Result) -> Void) {
    service.getData { result in
        if self.requestService.pendingRequests > 0 {
            completion(self.loadLocalData())
            return
        }
        if result.isConnectionError {
            completion(self.loadLocalData())
            return
        }
        // result is the returned data, e.g. an array of entities or whatever
        // was requested from the API and fetched directly from the server as JSON
        completion(result)
    }
}
So now we will have essentially the same checks in every interactor that requests data, but it seems we have not broken single responsibility. Or am I on the wrong track?
TL;DR: IMHO, I'd go with the second one, and it won't break SRP.
VIPER interactors tend to have multiple services, so it's perfectly fine to have something like an OnlineRequestService and an OfflineRequestService in the interactor and act accordingly.
Hence, you won't break SRP if you decide which data/service to use in the interactor itself.
To elaborate: suppose there had been an initial requirement that users may use the app online or offline. How would you plan your architecture? I would create the services mentioned above and let the interactor decide which service to use.
The interactor in VIPER is responsible for making requests, and it may work with different services, such as a CoreDataService, a NetworkService, or even a UserDefaultsService. We cannot say the interactor does only one task, but that doesn't necessarily mean it has more than one responsibility. Its responsibility is to take care of the flow between the data and the presenter, and if a decision is needed about which data (online/offline) to use, that decision falls within the interactor's responsibility.
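As a language-agnostic sketch of that idea (Python here for brevity; the service names, the reachability flag and the shared get_data() contract are my own illustration, not from the question), the decision could live in the interactor like this:

```python
class OnlineService:
    """Hypothetical remote service; reachability is modelled as a flag."""
    def __init__(self):
        self.reachable = True

    def is_reachable(self):
        return self.reachable

    def get_data(self):
        return "server data"


class OfflineService:
    """Hypothetical local store."""
    def get_data(self):
        return "local data"


class RequestService:
    """Queued offline writes waiting to be flushed to the server."""
    def __init__(self):
        self.pending_requests = 0


class Interactor:
    """Owns both services and decides which one to read from."""
    def __init__(self, online, offline, requests):
        self.online = online
        self.offline = offline
        self.requests = requests

    def get_data(self):
        # Fall back to local data when offline, or when queued writes have
        # not reached the server yet (server state would be stale).
        if not self.online.is_reachable() or self.requests.pending_requests > 0:
            return self.offline.get_data()
        return self.online.get_data()
```

The point is that both checks from the question live in one place, and the presenter never needs to know which source the data came from.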
If it still doesn't feel right, you may create an additional interactor, but who/what would decide which interactor to use?
Hope this helps.
We've been using Realm (Swift binding, currently version 3.12.0) since the earliest days of our project. In some early versions, before 1.0, Realm provided change listeners for Results without actually providing changeSets.
We used this a lot in order to find out if a specific Results list changed.
Later, Realm replaced this API with methods that provide changeSets. We had to switch, and we are now misusing this API just to find out whether anything in a specific List changed (inserts, deletions, modifications).
Together with RxSwift we wrote our own implementation of Results change listening which looks like this:
public var observable: Observable<Base> {
    return Observable.create { observer in
        // Forward an event whenever Realm reports an update to the collection.
        let token = self.base.observe { changes in
            if case .update = changes {
                observer.onNext(self.base)
            }
        }
        // Emit the current state immediately on subscription.
        observer.onNext(self.base)
        return Disposables.create {
            observer.onCompleted()
            token.invalidate()
        }
    }
}
When we now want to have consecutive updates on a list we subscribe like so:
someRealm.objects(SomeObject.self).filter(<some filter>).rx.observable
.subscribe(<subscription code that gets called on every update>)
//dispose code missing
We wrote the extension on RealmCollection so that we can subscribe to List type as well.
The concept is equal to RxRealm's approach.
So now in our App we have a lot of filtered lists/results that we are subscribing to.
As the data grows, we notice significant performance losses between writing something to the DB and seeing the change visually.
For example:
Let's say we have a Car Realm Object class with some properties and some 1-to-n and some 1-to-1 relationships. One of the properties is a Bool, namely isDriving.
Now we have a lot of cars stored in the DB and a bunch of change listeners with different filters listening to changes of the cars collection (collection observers listening for changeSets in order to find out whether the list changed).
If I take one car from some list and set its isDriving property from false to true (important: we do writes in the background), ideally the change listener fires quickly and I get a nearly immediate, correct response to my write on the main thread.
Added with edit on 2019-06-19:
Let's make the scenario still a little more real:
Let's change something further down the hierarchy, say the tire manufacturer's name. Say a Car has a List<Tire>, a Tire has a Manufacturer, and a Manufacturer has a name.
Now we're still listening to Results collection changes, with some more or less complex filters applied.
Then we change the name of a Manufacturer which is connected to one of the tires, which is connected to one of the cars in that filtered list.
Can this still be fast?
Obviously, as the results/lists with attached change listeners get longer, Realm's internal change listener takes longer to calculate the differences and fires later.
So after a write we see the changes, in the worst case, much later.
In our case this is not acceptable. So we are thinking through different scenarios.
One scenario would be to stop using .observe on lists/results and switch to Realm.observe, which fires every time anything in the Realm changes. That's not ideal, but it is fast because the change-calculation step is skipped.
My question is: What can I do to solve this whole dilemma and make our app fast again?
The crucial thing is the threading. We always write in the background by design, so the writes themselves should be very fast, but the changes then need to be synchronized to the other threads where Realms are open.
In my understanding, that happens after the change detection for all Results has run through. Is that right?
So when I read on another thread, the data is only fresh after the thread sync, which happens after all notifications were sent out. I'm not sure whether the sync actually happens before that, which would be better; I haven't tested it yet.
By the way, how do you create a STREAM?
I use AppendToStreamAsync directly; is this right, or should I create a stream first and then append onto that stream?
I also tried running some tests, but using the methods below I can write events to EventStore yet can't read events back from it.
And the most important question: how do I view my saved events in the EventStore admin site?
Here is the code:
public async Task AppendEventAsync(IEvent @event)
{
    try
    {
        var eventData = new EventData(@event.EventId,
            @event.GetType().AssemblyQualifiedName,
            true,
            Serializer.Serialize(@event),
            Encoding.UTF8.GetBytes("{}"));
        var writeResult = await connection.AppendToStreamAsync(
            @event.SourceId.ToString(),
            @event.AggregateVersion,
            eventData);
        Console.WriteLine(writeResult);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}
public async Task<IEnumerable<IEvent>> ReadEventsAsync(Guid aggregateId)
{
    var ret = new List<IEvent>();
    StreamEventsSlice currentSlice;
    long nextSliceStart = StreamPosition.Start;
    do
    {
        currentSlice = await connection.ReadStreamEventsForwardAsync(aggregateId.ToString(), nextSliceStart, 200, false);
        if (currentSlice.Status != SliceReadStatus.Success)
        {
            throw new Exception($"Aggregate {aggregateId} not found");
        }
        nextSliceStart = currentSlice.NextEventNumber;
        foreach (var resolvedEvent in currentSlice.Events)
        {
            ret.Add(Serializer.Deserialize(resolvedEvent.Event.EventType, resolvedEvent.Event.Data));
        }
    } while (!currentSlice.IsEndOfStream);
    return ret;
}
Streams are created automatically as you write events. You should follow the recommended naming convention though as it enables a few features out of the box.
await Connection.AppendToStreamAsync("CustomerAggregate-b2c28cc1-2880-4924-b68f-d85cf24389ba", expectedVersion, creds, eventData);
It is recommended to name your streams "category-id" (where category, in our case, is the aggregate name), as we are using the DDD+CQRS pattern:
CustomerAggregate-b2c28cc1-2880-4924-b68f-d85cf24389ba
The stream grows as you write more events to the same stream name.
The first event's ID becomes the "aggregateID" in our case, and each new eventID after that is unique. The only way to recreate our aggregate is to replay the events in sequence. If the sequence fails, an exception is thrown.
The reason to use this naming convention is that Event Store runs a few default internal projections for your convenience. The documentation on this is rather convoluted:
$by_category
$by_event_type
$stream_by_category
$streams
By Category
"By category" basically means there is a stream, created by an internal projection, per category. For our CustomerAggregate we subscribe to $ce-CustomerAggregate events, and we see all events of that category regardless of their IDs. The event data contains everything we need from there on.
We use persistent subscribers (small C# console applications) which are set up to work with $ce-CustomerAggregate. Persistent subscribers are great because they remember the last event your client acknowledged, so if the application crashes, you start it again and it resumes from the last place it left off.
This is where Event Store starts to shine and stand out from other event store implementations.
Viewing your events
The example with persistent subscribers is one way to set things up using code.
You cannot really view "all" your data in the admin site. The purpose of the admin site is to manage projections, manage users, see some statistics, create projections, and get a recent view of streams and events only. (If you know the IDs you can construct the URLs as you need them, but you can't search for them.)
If you want to see ALL the data, then use the RESTful API with something like Postman. Maybe there is third-party software that can present a grid-like data-source viewer, but I am unaware of any. It would probably also just hook into the REST API, and you could create your own visualiser this way quite quickly.
Back to code: you can also always read all events from 0 using one of the client libraries. Incidentally, with DDD+CQRS you always read the aggregate's stream from 0 to rebuild its state. But you can do the same for other requirements.
In some cases, looking at snapshots makes replaying events a lot faster, if you have an extremely large stream to deal with.
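The rebuild-from-0 idea above is just a left fold over the stream. A minimal sketch (Python here; the event tuples and the account rules are invented purely for illustration, not part of EventStore's API):

```python
def rebuild_state(events, apply, initial=None):
    """Replay events in sequence to reconstruct aggregate state.
    `apply` is a (state, event) -> state function; the event format is
    whatever your serializer produces when reading the stream."""
    state = initial
    for event in events:
        state = apply(state, event)
    return state


# Hypothetical bank-account aggregate: events are (type, amount) tuples.
def apply_account(balance, event):
    kind, amount = event
    if kind == "Deposited":
        return (balance or 0) + amount
    if kind == "Withdrawn":
        return (balance or 0) - amount
    # An unknown event type means the replay sequence is broken.
    raise ValueError(f"unknown event type: {kind}")
```

A snapshot simply means passing a saved `initial` state plus only the events written after the snapshot, instead of folding from event 0 every time.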
Paradigm shift
Event Store has quite a learning curve and is a paradigm shift from conventional transactional databases. Event Store's best friend is CQRS; we use a slightly modified version of the CQRS Lite open source framework.
To truly appreciate Event Store you would need to understand DDD concepts and then dig into CQRS/ES - There are a few good YouTube videos and examples.
I have API code which loads data needed by my application.
It's as simple as:
- (void) getDataForKey:(NSString*) key onSuccess:(id (^)())completionBlock
I cache data returned from the server, so subsequent calls to that function should not make a network request, unless some data is missing for the given key, in which case I need to load it from the server again.
Everything was okay as long as I had one request per screen, but now I have a case where I need to do this for every cell on one screen.
The problem is that my caching doesn't work, because before the response to the first request comes in, 5-6 more requests are created at the same time.
What could be a solution here, to avoid creating multiple network requests and make the other calls wait for the first one?
You can make a RequestManager class and use a dictionary to cache the in-flight requests.
If the next request is of the same type as the first one, don't make a new request; return the first one instead. If you choose this solution, you need to manage a list of completionBlocks so you can deliver the result to all the requesters.
Alternatively, if the next request is of the same type as the first one, wait on another thread until the first one is done, then make a new request; your API will read the cache automatically. You must make sure your code is thread-safe.
Or you can use operation queues to do this. Some documents:
Apple: Operation Queues
Soheil Azarpour: How To Use NSOperations and NSOperationQueues
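The first suggestion, coalescing duplicate in-flight requests behind a list of completion callbacks, can be sketched like this (a Python stand-in for the Objective-C manager; the `fetch` interface and all names are my own illustration):

```python
class RequestManager:
    """Coalesces concurrent requests for the same key: only the first
    caller triggers a fetch, later callers just queue their callbacks."""

    def __init__(self, fetch):
        self.fetch = fetch    # fetch(key, done) starts an async network call
        self.pending = {}     # key -> list of callbacks waiting on that key

    def get(self, key, callback):
        if key in self.pending:
            # A request for this key is already in flight; just wait for it.
            self.pending[key].append(callback)
            return
        self.pending[key] = [callback]
        self.fetch(key, lambda result: self._finish(key, result))

    def _finish(self, key, result):
        # Deliver the single network result to every queued caller.
        for cb in self.pending.pop(key):
            cb(result)
```

In a real iOS implementation the `pending` dictionary would need to be protected (e.g. a serial dispatch queue), since cells may request from multiple threads.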
There may be more time-consuming solutions for this; here is a trick. Create a BOOL in the AppDelegate that defaults to FALSE. When you receive the first response, set it to TRUE. Then, before making the request on another screen, check the value of that BOOL: if it's TRUE, the response has been received, so go ahead; otherwise do nothing.
I have an object that needs to be initialised with data from the network and doesn't really make sense without the downloaded data. But it seems to me that making an asynchronous network call in its init method is not a good idea, because the object will not be ready to use right away, which might cause confusion. Should I use a basic init method that just allocates and initialises its properties to create an empty object, and have other (non-init) methods populate the data from the network, called explicitly by other objects (such as the view controller using this object)? What would be a good way to approach this?
I think the solution comes from running the code in the right order:
1) Go to the network and fetch the data (show the user an activity indicator during this time)
2) When the server returns a response, parse it into your object
3) Show the data or use it
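One common way to express those three steps, assuming an async environment (a generic Python sketch, not the asker's Objective-C API; `Profile` and `fetch` are invented names), is a factory method that fetches first and only then constructs the object, so an instance can never exist without its data:

```python
import asyncio

class Profile:
    """Only ever constructed with its data already present."""

    def __init__(self, data):
        self.data = data

    @classmethod
    async def load(cls, fetch):
        # fetch() is a hypothetical coroutine that performs the network call.
        # The caller awaits here (showing an activity indicator meanwhile),
        # and gets back a fully initialised object.
        data = await fetch()
        return cls(data)
```

This keeps init synchronous and trivial, while making the "not ready yet" state unrepresentable: callers hold either nothing or a complete object.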
Flash is a temporary storage map that lives for one request.
I am wondering how this is implemented in the Grails core framework.
In particular, I'm interested in the class(es) responsible for putting the flash map into the request and then taking it out once the request has finished processing.
Flash is actually a temporary storage map for the present request and the next request only. It won't retain entries after the next request unless they are repopulated during that request (which would then be the current one). Here is how it works in Grails:
The FlashScope interface, which extends Map and has the two methods next() and getNow(), is implemented by GrailsFlashScope. All of these can be found in grails-web-mvc.
GrailsFlashScope mainly maintains two ConcurrentHashMaps (one for the current request and one for the next request) to hold the entries. It implements next() from FlashScope to do the cleanup and the "restrict-to-next-request-only" part:
a. clear current
b. make next the current
c. clear next
The next thing to look at is GrailsWebRequestFilter (which extends OncePerRequestFilter), which makes sure there is a single execution of the request per dispatch.
All HTTP servlet requests are filtered by GrailsWebRequestFilter. This filter advances the flash scope to the next state, so that the latest, valid entries are retrieved every time.
Now, how does FlashScope reconcile the current and next maps? That is why FlashScope extends Map: it overrides get(key) to reconcile both maps, making sure values are retrieved from the next map first, falling back to the current map otherwise.
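The two-map mechanics described above can be sketched roughly like this (Python; heavily simplified from the actual GrailsFlashScope, which uses ConcurrentHashMaps and more bookkeeping):

```python
class FlashScope:
    """Simplified model of Grails flash scope: values written during one
    request survive exactly one more request."""

    def __init__(self):
        self.current = {}   # entries promoted from the previous request
        self.next = {}      # entries stored during this request

    def __setitem__(self, key, value):
        self.next[key] = value

    def __getitem__(self, key):
        # Reconcile: prefer the next map, fall back to current.
        if key in self.next:
            return self.next[key]
        return self.current[key]

    def next_request(self):
        # Called by the request filter at each new request: promote next
        # to current and start a fresh next map, so old entries expire.
        self.current = self.next
        self.next = {}
```

A value written in request N is readable in request N (from `next`) and in request N+1 (from `current`), then disappears, which is exactly the "present request and the next request only" behaviour.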
How is flash available to controllers by default? All controllers inherit ControllersApi, which inherits CommonWebApi.
I hope you've got what you were looking for.
If you print the class of the object:
class MyController {
    def index() {
        println flash.getClass().name
    }
}
You will see that it's org.codehaus.groovy.grails.web.servlet.GrailsFlashScope. If you look at the code, there are two ConcurrentHashMaps: one for the current request and another for the next request.
To make it available, the instance is stored in the session (see registerWithSessionIfNecessary).