I have an async call with a completion handler that fetches data for me through a query. These queries can vary based on the user's action.
My data call looks like this:
class DataManager {
    func requestVideoData(query: QueryOn<VideoModel>, completion: @escaping (([VideoModel]?, UInt?, Error?) -> Void)) {
        client.fetchMappedEntries(matching: query) { (result: Result<MappedArrayResponse<FRVideoModel>>) in
            // `videos` and `arrayLength` are derived from `result` (omitted here)
            completion(videos, arrayLength, nil)
        }
    }
}
My ViewController looks like this:
DataManager().requestVideoData(query: /*One of the queries below*/) { videos, arrayLength, error in
    // Use the data fetched based on the query that has been entered
}
My queries look like this:
let latestVideosQuery = QueryOn<FRVideoModel>().limit(to: 50)
try! latestVideosQuery.order(by: Ordering(sys: .createdAt, inReverse: true))
And this:
let countryQuery = QueryOn<FRVideoModel>()
.where(valueAtKeyPath: "fields.country.sys.contentType.sys.id", .equals("country"))
.where(valueAtKeyPath: "fields.country.fields.countryTitle", .includes(["France"]))
.limit(to: 50)
But I'm not completely sure how to implement these queries the right way so that they fit the MVC pattern.
I was thinking about a switch statement in the DataManager class, and passing a value into the query parameter from my ViewController that would result in the right call to fetchMappedEntries(). The only problem with this is that I would still need to execute the correct function according to my query in my VC, so I would need a switch statement over there as well.
Or do I need to include all my queries inside my ViewController? That feels incorrect to me, because it seems like something that belongs in my model.
This is somewhat subjective. I think you are right to want to put the construction of the queries in your DataManager and not in your view controller.
One approach is to dumb down the request interface, so that the view controller only needs to pass a simple request, say:
struct QueryParams {
    let limit: Int?
    let country: String?
}
You would then need to change your DataManager query function to take this instead:
func requestVideoData(query: QueryParams, completion: @escaping (([VideoModel]?, UInt?, Error?) -> Void))
Again, this is subjective, so you have to determine the tradeoffs. Dumbing down the interface limits the flexibility of it, but it also simplifies what the view controller has to know.
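To make that concrete, here is a rough sketch (illustrative only, exact SDK signatures may differ) of how the DataManager could translate a QueryParams value into the Contentful query internally, reusing the query-building calls from the question, so the view controller never touches QueryOn directly:
// Sketch only: an overload on DataManager that builds the QueryOn from the simple QueryParams.
func requestVideoData(params: QueryParams, completion: @escaping ([VideoModel]?, UInt?, Error?) -> Void) {
    let query = QueryOn<VideoModel>()
    if let limit = params.limit {
        _ = query.limit(to: UInt(limit)) // same call as in the question's queries
    }
    if let country = params.country {
        _ = query.where(valueAtKeyPath: "fields.country.sys.contentType.sys.id", .equals("country"))
                 .where(valueAtKeyPath: "fields.country.fields.countryTitle", .includes([country]))
    }
    // Hand the built query to the existing fetch method shown above.
    requestVideoData(query: query, completion: completion)
}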
In the end I went with a slightly modified networking layer and a router, where the queries are stored in a public enum that I can then use in the functions inside my ViewController. It looks something like this:
public enum Api {
    case latestVideos(limit: UInt)
    case countryVideos(countries: [String], limit: UInt)
}

extension Api: Query {
    var query: QueryOn<VideoModel> {
        switch self {
        case .latestVideos(let limit):
            let latestVideosQuery = QueryOn<VideoModel>().limit(to: limit)
            try! latestVideosQuery.order(by: Ordering(sys: .createdAt, inReverse: true))
            return latestVideosQuery
        case .countryVideos(let countries, let limit):
            let countryQuery = QueryOn<VideoModel>()
                .where(valueAtKeyPath: "fields.country.sys.contentType.sys.id", .equals("country"))
                .where(valueAtKeyPath: "fields.country.fields.countryTitle", .includes(countries))
                .limit(to: limit)
            return countryQuery
        }
    }
}
And the NetworkManager struct to fetch the data:
struct NetworkManager {
    private let router = Router<Api>()

    func fetchLatestVideos(limit: UInt, completion: @escaping (_ videos: [VideoModel]?, _ arrayLength: UInt?, _ error: Error?) -> Void) {
        router.requestVideoData(.latestVideos(limit: limit)) { (videos, arrayLength, error) in
            if error != nil {
                completion(nil, nil, error)
            } else {
                completion(videos, arrayLength, nil)
            }
        }
    }
}
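The Query protocol and the Router<Api> itself aren't shown here; a minimal sketch of what they might look like (illustrative only, not the exact router I use), where the router simply resolves the enum case to a concrete query and delegates the fetch to the DataManager from the question:
protocol Query {
    var query: QueryOn<VideoModel> { get }
}

struct Router<EndPoint: Query> {
    private let dataManager = DataManager()

    // Resolve the endpoint to its QueryOn and forward it to the existing DataManager call.
    func requestVideoData(_ route: EndPoint, completion: @escaping ([VideoModel]?, UInt?, Error?) -> Void) {
        dataManager.requestVideoData(query: route.query, completion: completion)
    }
}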
In a personal project of mine, I have created an API caller to retrieve a user's saved tracks from the Spotify API. The Spotify endpoint I am using has a limit (maximum of 50 tracks per request) as well as an offset (starting index of the first track in the request), which is why I decided to use a for loop to get a series of track pages (50 tracks each) and append them to a global array. The data is loaded from the main thread, and while the data is being requested, I display a child view controller with a spinner view. Once the data request has completed, I remove the spinner view and transition to another view controller (passing the data as a property).
I have tried many things, but the array of tracks is always empty following the API request. I have a feeling it has to do with the asynchronous nature of my request, or maybe it's possible that I'm not handling it correctly. Ideally, I would like to wait until the request from my API finishes, then append the result to the array. Do you have any suggestions on how I could solve this? Any help is much appreciated!
func createSpinnerView() {
    let loadViewController = LoadViewController.instantiateFromAppStoryboard(appStoryboard: .OrganizeScreen)
    add(asChildViewController: loadViewController)
    DispatchQueue.main.async { [weak self] in
        if (self?.dropdownButton.dropdownLabel.text == "My saved music") {
            self?.fetchSavedMusic() { tracksArray in
                self?.tracksArray = tracksArray
            }
        }
        ...
        self?.remove(asChildViewController: loadViewController)
        self?.navigateToFilterScreen(tracksArray: self!.tracksArray)
    }
}
private func fetchSavedMusic(completion: @escaping ([Tracks]) -> ()) {
    let limit = 50
    var offset = 0
    var total = 200
    for _ in stride(from: 0, to: total, by: limit) {
        getSavedTracks(limit: limit, offset: offset) { tracks in
            //total = tracks.total
            self.tracksArray.append(tracks)
        }
        print(offset, limit)
        offset = offset + 50
    }
    completion(tracksArray)
}
private func getSavedTracks(limit: Int, offset: Int, completion: @escaping (Tracks) -> ()) {
    APICaller.shared.getUsersSavedTracks(limit: limit, offset: offset) { (result) in
        switch result {
        case .success(let model):
            completion(model)
            print("success")
        case .failure(let error):
            print("Error retrieving saved tracks: \(error.localizedDescription)")
            print(error)
        }
    }
}
private func navigateToFilterScreen(tracksArray: [Tracks]) {
    let vc = FilterViewController.instantiateFromAppStoryboard(appStoryboard: .OrganizeScreen)
    vc.paginatedTracks = tracksArray
    show(vc, sender: self)
}
First you need to call completion when all of your data is loaded. In your case you call completion(tracksArray) before any of the getSavedTracks calls return.
For this part I suggest you recursively accumulate tracks by going through all the pages. There are better tools to do this, but I will give a raw example of it:
class TracksModel {
    static func fetchAllPages(completion: @escaping ((_ tracks: [Track]?) -> Void)) {
        var offset: Int = 0
        let limit: Int = 50
        var allTracks: [Track] = []

        func appendPage() {
            fetchSavedMusicPage(offset: offset, limit: limit) { tracks in
                guard let tracks = tracks else {
                    completion(allTracks) // Most likely an error should be handled here
                    return
                }
                if tracks.count < limit {
                    // This was the last page because we got fewer than `limit` (50) tracks
                    completion(allTracks + tracks)
                } else {
                    // Expecting another page to be loaded
                    offset += limit // Next page
                    allTracks += tracks
                    appendPage() // Recursively call for next page
                }
            }
        }

        appendPage() // Load first page
    }

    private static func fetchSavedMusicPage(offset: Int, limit: Int, completion: @escaping ((_ tracks: [Track]?) -> Void)) {
        APICaller.shared.getUsersSavedTracks(limit: limit, offset: offset) { result in
            switch result {
            case .success(let model):
                completion(model)
            case .failure(let error):
                print(error)
                completion(nil) // Error also needs to call a completion
            }
        }
    }
}
I hope the comments clear some things up. But the point is that I nested an appendPage function which is called recursively until the server stops sending data. In the end either an error occurs or the last page returns fewer tracks than the provided limit.
Naturally it would be nicer to also forward an error, but I did not include it for simplicity.
In any case, you can now call TracksModel.fetchAllPages { } anywhere and receive all tracks.
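If you do want to forward the error as well, a sketch of what that could look like (not part of the original example; it assumes fetchSavedMusicPage is changed to hand back Result<[Track], Error> instead of [Track]?):
// Sketch only: the same recursive accumulation, but with a Result-based completion
// so the caller also sees the error instead of a silently truncated list.
static func fetchAllPagesResult(completion: @escaping (Result<[Track], Error>) -> Void) {
    var offset = 0
    let limit = 50
    var allTracks: [Track] = []

    func appendPage() {
        fetchSavedMusicPage(offset: offset, limit: limit) { (result: Result<[Track], Error>) in
            switch result {
            case .failure(let error):
                completion(.failure(error))              // forward the error to the caller
            case .success(let tracks) where tracks.count < limit:
                completion(.success(allTracks + tracks)) // last page reached
            case .success(let tracks):
                allTracks += tracks
                offset += limit
                appendPage()                             // load the next page
            }
        }
    }

    appendPage()
}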
When you load and show your data (createSpinnerView) you also need to wait for data to be received before continuing. For instance:
func createSpinnerView() {
    let loadViewController = LoadViewController.instantiateFromAppStoryboard(appStoryboard: .OrganizeScreen)
    add(asChildViewController: loadViewController)
    TracksModel.fetchAllPages { tracks in
        DispatchQueue.main.async {
            self.tracksArray = tracks
            self.remove(asChildViewController: loadViewController)
            self.navigateToFilterScreen(tracksArray: tracks)
        }
    }
}
A few components may have been removed, but I hope you see the point. The method should already be called on the main thread. But you are unsure what thread the API call returns on, so you need to use DispatchQueue.main.async within the completion closure, not outside of it. Also call navigate within this closure, because this is when things are actually complete.
Adding a situation for a fixed number of requests
For a fixed number of requests you can do all your requests in parallel. You already did that in your code.
The biggest problem is that you cannot guarantee that the responses come back in the same order the requests were started. For instance, if you perform two requests, A and B, it can easily happen, due to networking or any other reason, that B returns before A. So you need to be a bit more sneaky. Look at the following code:
private func loadPage(pageIndex: Int, perPage: Int, completion: @escaping ((_ items: [Any]?, _ error: Error?) -> Void)) {
    // TODO: logic here to return a page from server
    completion(nil, nil)
}

func load(maximumNumberOfItems: Int, perPage: Int, completion: @escaping ((_ items: [Any], _ error: Error?) -> Void)) {
    let pageStartIndicesToRetrieve: [Int] = {
        var startIndex = 0
        var toReturn: [Int] = []
        while startIndex < maximumNumberOfItems {
            toReturn.append(startIndex)
            startIndex += perPage
        }
        return toReturn
    }()

    guard pageStartIndicesToRetrieve.isEmpty == false else {
        // This happens if maximumNumberOfItems == 0
        completion([], nil)
        return
    }

    enum Response {
        case success(items: [Any])
        case failure(error: Error)
    }

    // Doing requests in parallel
    // Note that responses may return in any order time-wise (we can not say that first page will come first, maybe the order will be [2, 1, 5, 3...])
    var responses: [Response?] = .init(repeating: nil, count: pageStartIndicesToRetrieve.count) { // Start with all nil
        didSet {
            // Yes, Swift can do this :D How amazing!
            guard responses.contains(where: { $0 == nil }) == false else {
                // Still waiting for others to complete
                return
            }
            let aggregatedResponse: (items: [Any], errors: [Error]) = responses.reduce((items: [], errors: [])) { partialResult, response in
                switch response {
                case .failure(let error): return (partialResult.items, partialResult.errors + [error])
                case .success(let items): return (partialResult.items + items, partialResult.errors) // flatten each page's items into one list
                case .none: return (partialResult.items, partialResult.errors)
                }
            }
            let error: Error? = {
                let errors = aggregatedResponse.errors
                if errors.isEmpty {
                    return nil // No error
                } else {
                    // There was an error.
                    return NSError(domain: "Something more meaningful", code: 500, userInfo: ["all_errors": errors]) // Or whatever you wish. Perhaps just "errors.first!"
                }
            }()
            completion(aggregatedResponse.items, error)
        }
    }

    pageStartIndicesToRetrieve.enumerated().forEach { requestIndex, startIndex in
        loadPage(pageIndex: requestIndex, perPage: perPage) { items, error in
            responses[requestIndex] = {
                if let error = error {
                    return .failure(error: error)
                } else {
                    return .success(items: items ?? [])
                }
            }()
        }
    }
}
The first method is not interesting; it just loads a single page. The second method is what collects all the data.
The first thing that happens is that we calculate all possible requests. We need a start index and a per-page count. So pageStartIndicesToRetrieve for the case of 145 items using 50 per page will return [0, 50, 100]. (I later found out we only need the count, 3 in this case, but that depends on the API, so let's stick with it.) We expect 3 requests, starting at item indices [0, 50, 100].
Next we create placeholders for our responses using
var responses: [Response?] = .init(repeating: nil, count: pageStartIndicesToRetrieve.count)
For our example of 145 items with 50 per page, this creates the array [nil, nil, nil]. When all of the values in this array become non-nil, all requests have returned and we can process all of the data. This is done with a didSet observer on a local variable. I hope its contents speak for themselves.
Now all that is left is to execute all requests at once and fill the array. Everything else should just resolve by itself.
The code is not the easiest, and again, there are tools that can make things much easier. But for academic purposes I hope this approach explains what needs to be done to accomplish your task correctly.
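As a usage sketch (illustrative only, assuming load(maximumNumberOfItems:perPage:completion:) lives alongside the rest of your code), the call site for the 145-item example would look like:
// Hypothetical call site for the parallel loader above; 145/50 mirrors the worked example in the text.
load(maximumNumberOfItems: 145, perPage: 50) { items, error in
    if let error = error {
        print("At least one page failed: \(error)")
        return
    }
    print("Loaded \(items.count) items from 3 parallel page requests")
}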
NOTE: I have seen other posts, but my problem is a little different.
I have a helper class to access Realm. Every function in this class creates its own instance of the Realm object to avoid threading issues, specifically Realm accessed from incorrect thread. This works perfectly fine for the on-disk Realm; however, for my in-memory Realm the data is inserted successfully, but when I try to retrieve it I get nothing. I thought maybe Realm was being accessed from different threads, so I created a DispatchQueue and I always access Realm from that queue.
Here is my code
protocol Cachable {}

protocol InMemoryCache {
    func create<T: Cachable>(model: T.Type,
                             _ completion: @escaping (Result<T, Error>) -> ())
    func save(object: Cachable,
              _ completion: @escaping (Result<Void, Error>) -> ())
    func fetch<T: Cachable>(model: T.Type,
                            predicate: NSPredicate?,
                            sorted: Sorted?,
                            _ completion: @escaping (Result<[T], Error>) -> ())
}

enum RealmInMemoryCacheError: Error {
    case notRealmSpecificModel
    case realmIsNil
    case realmError
}

final class RealmInMemoryCache {
    private let configuration: Realm.Configuration
    private let queue: DispatchQueue

    init(_ configuration: Realm.Configuration) {
        self.queue = DispatchQueue(label: "inMemoryRealm", qos: .utility)
        self.configuration = configuration
    }
}

extension RealmInMemoryCache: InMemoryCache {
    func create<T>(model: T.Type,
                   _ completion: @escaping (Result<T, Error>) -> ()) where T: Cachable {
        self.queue.async {
            guard let realm = try? Realm(configuration: self.configuration) else {
                return completion(.failure(RealmInMemoryCacheError.realmIsNil))
            }
            guard let model = model as? RealmSwift.Object.Type else {
                return completion(.failure(RealmInMemoryCacheError.notRealmSpecificModel))
            }
            do {
                try realm.write { () -> () in
                    let newObject = realm.create(model, value: [], update: .all) as! T
                    return completion(.success(newObject))
                }
            } catch {
                return completion(.failure(RealmInMemoryCacheError.realmError))
            }
        }
    }

    func save(object: Cachable,
              _ completion: @escaping (Result<Void, Error>) -> ()) {
        self.queue.async {
            guard let realm = try? Realm(configuration: self.configuration) else {
                return completion(.failure(RealmInMemoryCacheError.realmIsNil))
            }
            guard let object = object as? RealmSwift.Object else {
                return completion(.failure(RealmInMemoryCacheError.notRealmSpecificModel))
            }
            do {
                try realm.write { () -> () in
                    realm.add(object, update: .all)
                    return completion(.success(()))
                }
            } catch {
                return completion(.failure(RealmInMemoryCacheError.realmError))
            }
        }
    }

    func fetch<T>(model: T.Type,
                  predicate: NSPredicate?,
                  sorted: Sorted?,
                  _ completion: @escaping (Result<[T], Error>) -> ()) where T: Cachable {
        self.queue.async {
            guard let realm = try? Realm(configuration: self.configuration) else {
                return completion(.failure(RealmInMemoryCacheError.realmIsNil))
            }
            guard let model = model as? RealmSwift.Object.Type else {
                return completion(.failure(RealmInMemoryCacheError.notRealmSpecificModel))
            }
            var objects = realm.objects(model)
            if let predicate = predicate {
                objects = objects.filter(predicate)
            }
            if let sorted = sorted {
                objects = objects.sorted(byKeyPath: sorted.key, ascending: sorted.ascending)
            }
            return completion(.success(objects.compactMap { $0 as? T }))
        }
    }
}

extension Object: Cachable {}

struct Sorted {
    var key: String
    var ascending: Bool = true
}
I eliminated code that doesn't add any benefit to the question, hence the empty/missing pieces in the above code. However, the code above works 100% when copied and pasted.
I tried creating the Realm in the initializer instead, so I would have a strong reference to it; however, that causes issues with thread safety. It may work a few times, but at some point it crashes the app with the error Realm accessed from incorrect thread.
As you may tell, my goal is to make the above code generic and 100% thread safe even if called from a background thread, say in a different function. The reason is that the above class could be an API used by different programmers, and sometimes they will call a function on a background thread, for example, without actually knowing what is going on under the hood. I do not want the application to crash if they do such a thing.
EDIT: This is how the helper class is initialized
let realmInMemory = RealmInMemoryCache(Realm.Configuration(inMemoryIdentifier: "globalInMemoryRealm"))

// Then I can use it like so (replace `model` with your Realm model type)
realmInMemory.create(model: model) { result in
    switch result {
    ...
    }
}
EDIT 2: Here is a full example of how the above class works
import RealmSwift

final class MessageRealmEntity: Object {
    @objc dynamic var id: String = ""
    @objc dynamic var message: String = ""

    convenience init(id: String, message: String) {
        self.init()
        self.id = id
        self.message = message
    }

    override static func primaryKey() -> String? {
        "id"
    }
}

// THIS IS NOT PART OF THE PROBLEM, THIS `Main` CLASS IS JUST A DRIVER. THE CODE INSIDE IT COULD RUN ANYWHERE.
final class Main {
    let realmInMemory = RealmInMemoryCache(Realm.Configuration(inMemoryIdentifier: "globalInMemoryRealm"))

    func run() {
        DispatchQueue.global(qos: .background).async {
            let semaphore: DispatchSemaphore = DispatchSemaphore(value: 0)
            let entity = MessageRealmEntity(id: "1", message: "Hello, World!")
            self.realmInMemory.save(object: entity) { result in
                switch result {
                case .success(_):
                    print("Saved successfully")
                case .failure(let error):
                    print("Got error: \(error)")
                }
                semaphore.signal()
            }
            _ = semaphore.wait(wallTimeout: .distantFuture)

            self.realmInMemory.fetch(model: MessageRealmEntity.self, predicate: nil, sorted: nil) { result in
                switch result {
                case .success(let messages):
                    print(messages.count) // This will return 0 when it should be 1 since we inserted already
                case .failure(let error):
                    print("Got error: \(error)")
                }
                semaphore.signal()
            }
            _ = semaphore.wait(wallTimeout: .distantFuture)
        }
    }
}

let main: Main = Main()
main.run()
All other methods are called the same way.
EDIT 3:
I opened a GitHub issue if anyone is interested in following it; here is the link: https://github.com/realm/realm-cocoa/issues/7017. There is a video and more explanation there.
Here is a GitHub link to download a project that reproduces the bug: https://github.com/Muhand/InMemoryRealm-Bug
After reading the documentation of Realm again and again, and thinking about what @Jay said in the comments, I paid more attention to this quote from the documentation:
Notice: When all in-memory Realm instances with a particular identifier go out of scope with no references, all data in that Realm is deleted. We recommend holding onto a strong reference to any in-memory Realms during your app's lifetime. (This is not necessary for on-disk Realms.)
The key sentence in the above quote is:
When all in-memory Realm instances with a particular identifier go out of scope with no references, all data in that Realm is deleted.
In other words, my Realm object was going out of scope every time I tried to save or fetch or do anything else.
For example:
self.queue.async {
    guard let realm = try? Realm(configuration: self.configuration) else {
        completion(.failure(RealmInMemoryCacheError.realmIsNil))
        return
    }
    guard let object = object as? RealmSwift.Object else {
        completion(.failure(RealmInMemoryCacheError.notRealmSpecificModel))
        return
    }
    do {
        try realm.write { () -> () in
            realm.add(object, update: .all)
            completion(.success(()))
            return
        }
    } catch {
        completion(.failure(RealmInMemoryCacheError.realmError))
        return
    }
}
In the above code, Realm goes out of scope as soon as that queue block is done with it. Realm then checks whether any other variable in memory holds the same identifier; if so, it does nothing, otherwise it deletes the current Realm as an optimization.
So the solution to the problem is basically to create a strong reference to a Realm with this identifier, and then in each function re-create a Realm with the same identifier anyway, to avoid Realm accessed from incorrect thread. This leaves an extra variable that is never read, but I think that is okay for now; at least it is my solution until something official comes from Realm. Keep in mind that re-initializing should not be an overhead, since Realm takes care of that optimization.
Here is what I have done:
final class RealmInMemoryCache {
    private let configuration: Realm.Configuration
    private let queue: DispatchQueue
    private let strongRealm: Realm // <-- added variable

    init(_ configuration: Realm.Configuration) {
        self.queue = DispatchQueue(label: "inMemoryRealm", qos: .utility)
        self.configuration = configuration
        self.strongRealm = try! Realm(configuration: self.configuration) // <-- initialized here
    }
}
and in other functions I do something like the following
...
self.queue.async {
guard let realm = try? Realm(configuration: self.configuration) else {
completion(.failure(RealmInMemoryCacheError.realmIsNil))
return
}
...
What threw me off from realizing that my Realm was going out of scope is that when I set up breakpoints, Realm started behaving perfectly fine. Although I still don't know 100% for sure why, my thinking is that the Xcode debugger might have created a strong reference to the Realm when I ran the command po realm.objects(...) in lldb.
I will accept this answer for now, unless someone has a better solution.
I'm calling three services in a row. When I call the third service, I need to use a variable from the first service's response, which is userModel. I can get the second service's response, which is initModel, but I can't reach the first model, userModel. My question is: how can I use userModel in the done block by returning it through the then blocks?
P.S.: I tried returning -> Promise<(UserModel, InstallationModel)> in the first call, but because UserModel is already an object, not a promise, I would need to convert it into a promise to return it. That seems like a bad way to do it.
As you can see, I'm storing it with self.userModel = userModel, which I don't want to do.
func callService(completionHandler: @escaping (Result<UserModel>) -> Void) {
    SandboxService.createsandboxUser().then { userModel -> Promise<InstallationModel> in
        self.userModel = userModel
        return SandboxService.initializeClient(publicKey: self.keyPairs.publicKey)
    }.then { initModel -> Promise<DeviceServerResponseModel> in
        self.initModel = initModel
        if let unwrappedUserModel = self.userModel {
            return SandboxService.deviceServerServiceCaller(authKey: initModel.token.token, apiKey: unwrappedUserModel.apiKey, privaKey: self.keyPairs.privateKey)
        }
        throw ServiceError.handleParseError()
    }.then { serverResponseModel -> Promise<UserModel> in
        if let unwrappedInitModel = self.initModel, let unwrappedUserModel = self.userModel {
            return SandboxService.sessionServiceCaller(authKey: unwrappedInitModel.token.token, apiKey: unwrappedUserModel.apiKey, privaKey: self.keyPairs.privateKey)
        }
        throw ServiceError.handleParseError()
    }.done { userModel in
        completionHandler(Result.success(userModel))
    }.catch { error in
        completionHandler(Result.error(error))
    }
}
I had also opened an issue on the PromiseKit GitHub page. I'm sharing the answer from mxcl on GitHub here as well:
func callService(completionHandler: @escaping (Result<UserModel>) -> Void) {
    SandboxService.createsandboxUser().then { userModel in
        firstly {
            SandboxService.initializeClient(publicKey: self.keyPairs.publicKey)
        }.then { initModel in
            SandboxService.deviceServerServiceCaller(authKey: initModel.token.token, apiKey: userModel.apiKey, privaKey: self.keyPairs.privateKey).map { ($0, initModel) }
        }.then { serverResponseModel, initModel in
            SandboxService.sessionServiceCaller(authKey: initModel.token.token, apiKey: userModel.apiKey, privaKey: self.keyPairs.privateKey)
        }
    }.pipe(to: completionHandler)
}
I'm not familiar with PromiseKit, but since it is a framework, you cannot really edit its methods in a way that would let you include userModel in the callback of the .done method. So what I would do is declare an optional property, with the type of userModel, in the class where this code block is executed, set it to the value received from the first call, and then set it back to nil after using it in the second one. Like the following:
Let's assume the type of userModel is UserModel.
final class SampleFetcher {
    var userModel: UserModel?

    func fetch() {
        SandboxService.createsandboxUser().then { userModel in
            // save userModel here
            self.userModel = userModel
            return SandboxService.initializeClient()
        }.done { initModel in
            // Use it here
            guard let userModel = self.userModel else {
                return
            }
            SandboxService.deviceServerServiceCaller(secretID: "")
            // after you are done, set it back to nil
            self.userModel = nil
        }.catch { error in
        }
    }
}
If it weren't a framework, you could write the functions in a way that lets you include userModel in the second callback as well.
I am new to RxSwift and MVVM.
My viewModel has a method named rx_fetchItems(for:) that does the heavy lifting of fetching relevant content from the backend, and returns Observable<[Item]>.
My goal is to supply an observable property of the viewModel named collectionItems with the last emitted element returned from rx_fetchItems(for:), in order to supply my collectionView with data.
Daniel T has provided this solution that I could potentially use:
protocol ServerAPI {
    func rx_fetchItems(for category: ItemCategory) -> Observable<[Item]>
}

struct ViewModel {
    let collectionItems: Observable<[Item]>
    let error: Observable<Error>

    init(controlValue: Observable<Int>, api: ServerAPI) {
        let serverItems = controlValue
            .map { ItemCategory(rawValue: $0) }
            .filter { $0 != nil }.map { $0! } // or use a `filterNil` operator if you already have one implemented.
            .flatMap { api.rx_fetchItems(for: $0)
                .materialize()
            }
            .filter { $0.isCompleted == false }
            .shareReplayLatestWhileConnected()

        collectionItems = serverItems.filter { $0.element != nil }.dematerialize()
        error = serverItems.filter { $0.error != nil }.map { $0.error! }
    }
}
The only problem here is that my current ServerAPI, aka FirebaseAPI, has no such protocol method, because it is designed with a single method that fires all requests, like this:
class FirebaseAPI {
    private let session: URLSession

    init() {
        self.session = URLSession.shared
    }

    /// Responsible for making actual API requests & handling the response.
    /// Returns an observable object that conforms to the JSONable protocol.
    /// Entities that conform to JSONable just means they can be initialized with JSON.
    func rx_fireRequest<Entity: JSONable>(_ endpoint: FirebaseEndpoint, ofType _: Entity.Type) -> Observable<[Entity]> {
        return Observable.create { [weak self] observer in
            self?.session.dataTask(with: endpoint.request, completionHandler: { (data, response, error) in
                /// Parse response from request.
                let parsedResponse = Parser(data: data, response: response, error: error)
                    .parse()
                switch parsedResponse {
                case .error(let error):
                    observer.onError(error)
                    return
                case .success(let data):
                    var entities = [Entity]()
                    switch endpoint.method {
                    /// Flatten JSON structure to retrieve a list of entities.
                    /// Denoted by the 'GETALL' method.
                    case .GETALL:
                        /// Key (underscored) is the unique identifier for each entity, which is not needed here.
                        /// Value is k/v pairs of entity attributes.
                        for (_, value) in data {
                            if let value = value as? [String: AnyObject], let entity = Entity(json: value) {
                                entities.append(entity)
                            }
                        }
                        // Need to force downcast for generic type inference.
                        observer.onNext(entities as! [Entity])
                        observer.onCompleted()
                    /// All other methods return JSON that can be used to initialize JSONable entities
                    default:
                        if let entity = Entity(json: data) {
                            observer.onNext([entity] as! [Entity])
                            observer.onCompleted()
                        } else {
                            observer.onError(NetworkError.initializationFailure)
                        }
                    }
                }
            }).resume()
            return Disposables.create()
        }
    }
}
The most important thing about the rx_fireRequest method is that it takes in a FirebaseEndpoint.
/// Conforms to Endpoint protocol in extension, so one of these enum members will be the input for FirebaseAPI's `fireRequest` method.
enum FirebaseEndpoint {
    case saveUser(data: [String: AnyObject])
    case fetchUser(id: String)
    case removeUser(id: String)
    case saveItem(data: [String: AnyObject])
    case fetchItem(id: String)
    case fetchItems
    case removeItem(id: String)
    case saveMessage(data: [String: AnyObject])
    case fetchMessages(chatroomId: String)
    case removeMessage(id: String)
}
In order to use Daniel T's solution, I'd have to convert each enum case from FirebaseEndpoint into a method inside FirebaseAPI, and within each method call rx_fireRequest... if I'm correct.
I'd be eager to make this change if it makes for a better server API design. So the simple question is: will this refactor improve my overall API design and how it interacts with ViewModels? And I realize this is now evolving into a code review.
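For illustration, the conversion I have in mind would look roughly like this (a sketch only, not code I have written): each endpoint-specific method simply wraps rx_fireRequest.
// Hypothetical wrappers, one per FirebaseEndpoint case; only fetchItems is needed for ServerAPI.
// `User` is a placeholder model assumed to conform to JSONable.
extension FirebaseAPI {
    func rx_fetchItems() -> Observable<[Item]> {
        return rx_fireRequest(.fetchItems, ofType: Item.self)
    }

    func rx_fetchUser(id: String) -> Observable<[User]> {
        return rx_fireRequest(.fetchUser(id: id), ofType: User.self)
    }
}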
ALSO... Here is the implementation of that protocol method, and its helper:
func rx_fetchItems(for category: ItemCategory) -> Observable<[Item]> {
    // fetchedItems returns all items in the database as Observable<[Item]>
    let fetchedItems = client.rx_fireRequest(.fetchItems, ofType: Item.self)
    switch category {
    case .Local:
        let localItems = fetchedItems
            .flatMapLatest { [weak self] (itemList) -> Observable<[Item]> in
                return self!.rx_localItems(items: itemList)
            }
        return localItems
    // TODO: Handle other cases like RecentlyAdded, Trending, etc.
    }
}

// Helper method to filter items for only local items nearby the user.
private func rx_localItems(items: [Item]) -> Observable<[Item]> {
    return Observable.create { observable in
        observable.onNext(items.filter { $0.location == "LA" })
        observable.onCompleted()
        return Disposables.create()
    }
}
If my approach to MVVM or RxSwift or API design is wrong, PLEASE do critique.
I know it is tough to start understanding RxSwift.
I like to use Subjects or Variables as inputs for the ViewModel, and Observables or Drivers as outputs of the ViewModel.
This way you can bind the actions that happen on the ViewController to the ViewModel, handle the logic there, and update the outputs.
Here is an example, obtained by refactoring your code.
View Model
// Inputs
let didSelectItemCategory: PublishSubject<ItemCategory> = .init()

// Outputs
let items: Observable<[Item]>

init() {
    let client = FirebaseAPI()
    let fetchedItems = client.rx_fireRequest(.fetchItems, ofType: Item.self)

    self.items = didSelectItemCategory
        .withLatestFrom(fetchedItems, resultSelector: { itemCategory, fetchedItems in
            switch itemCategory {
            case .Local:
                return fetchedItems.filter { $0.location == "Los Angeles" }
            default: return []
            }
        })
}
ViewController
segmentedControl.rx.value
    .map(ItemCategory.init(rawValue:))
    .startWith(.Local)
    .bind(to: viewModel.didSelectItemCategory)
    .disposed(by: disposeBag)

viewModel.items
    .subscribe(onNext: { items in
        // Do something
    })
    .disposed(by: disposeBag)
I think the problem you are having is that you are only going halfway with the observable paradigm, and that is throwing you off. Try taking it all the way and see if that helps. For example:
protocol ServerAPI {
    func rx_fetchItems(for category: ItemCategory) -> Observable<[Item]>
}

struct ViewModel {
    let collectionItems: Observable<[Item]>
    let error: Observable<Error>

    init(controlValue: Observable<Int>, api: ServerAPI) {
        let serverItems = controlValue
            .map { ItemCategory(rawValue: $0) }
            .filter { $0 != nil }.map { $0! } // or use a `filterNil` operator if you already have one implemented.
            .flatMap { api.rx_fetchItems(for: $0)
                .materialize()
            }
            .filter { $0.isCompleted == false }
            .shareReplayLatestWhileConnected()

        collectionItems = serverItems.filter { $0.element != nil }.dematerialize()
        error = serverItems.filter { $0.error != nil }.map { $0.error! }
    }
}
EDIT to handle the problem mentioned in the comment: you now need to pass in the object that has the rx_fetchItems(for:) method. You should have more than one such object: one that points to the server, and one that doesn't point to any server but instead returns canned data, so you can test for any possible response, including errors. (The view model should not talk to the server directly, but should do so through an intermediary...)
The secret sauce in the above is the materialize operator, which wraps error events into a normal event that contains an error object. That way a network error doesn't shut down the whole system.
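As an illustration of the canned-data object (a sketch, not part of the original answer), a test double conforming to ServerAPI could look like this:
// Hypothetical test double: returns canned items (or a canned error) instead of hitting
// the network, so the view model can be exercised without a server.
struct CannedServerAPI: ServerAPI {
    var cannedItems: [Item] = []
    var cannedError: Error?

    func rx_fetchItems(for category: ItemCategory) -> Observable<[Item]> {
        if let error = cannedError {
            return Observable.error(error)   // exercise the error path
        }
        return Observable.just(cannedItems)  // same canned items regardless of category
    }
}
The ViewModel above can then be constructed with either the real FirebaseAPI-backed object or this canned one, which is exactly the intermediary described.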
In response to the changes in your question... You can simply make the FirebaseAPI conform to ServerAPI:
extension FirebaseAPI: ServerAPI {
    func rx_fetchItems(for category: ItemCategory) -> Observable<[Item]> {
        // fetchedItems returns all items in the database as Observable<[Item]>
        let fetchedItems = self.rx_fireRequest(.fetchItems, ofType: Item.self)
        switch category {
        case .Local:
            let localItems = fetchedItems
                .flatMapLatest { [weak self] (itemList) -> Observable<[Item]> in
                    return self!.rx_localItems(items: itemList)
                }
            return localItems
        // TODO: Handle other cases like RecentlyAdded, Trending, etc.
        }
    }

    // Helper method to filter items for only local items nearby the user.
    private func rx_localItems(items: [Item]) -> Observable<[Item]> {
        return Observable.create { observable in
            observable.onNext(items.filter { $0.location == "LA" })
            observable.onCompleted()
            return Disposables.create()
        }
    }
}
You should probably change the name of ServerAPI at this point to something like FetchItemsAPI.
You run into a tricky situation here because your observable can throw an error, and once it does, the observable sequence errors out and no more events can be emitted. So to handle subsequent network requests, you must reassign, taking the approach you're currently taking. However, this is generally not good for driving UI elements such as a collection view, because you would have to bind to the reassigned observable every time. When driving UI elements, you should lean towards types that are guaranteed not to error out (i.e. Variable and Driver). You could change your Observable<[Item]> to let items = Variable<[Item]>([]) and then just set that variable's value to the array of items that came in from the new network request. You can safely bind this variable to your collection view using RxDataSources or something like that. Then you could make a separate variable for the error message, say let errorMessage = Variable<String?>(nil), set it to the error that comes from the network request, and bind that string to a label or something like that to display your error message.
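A minimal sketch of that Variable-based shape (illustrative only; note that Variable has since been deprecated in favour of BehaviorRelay, but it matches the terminology above):
// Sketch of the Variable-based outputs described above, assuming the same ServerAPI and
// Item types as in the answer; the error is reduced to a display string.
final class ItemsViewModel {
    let items = Variable<[Item]>([])          // never errors out, safe to bind to the collection view
    let errorMessage = Variable<String?>(nil) // bind to a label to surface request errors

    private let disposeBag = DisposeBag()

    func fetchItems(for category: ItemCategory, using api: ServerAPI) {
        api.rx_fetchItems(for: category)
            .subscribe(
                onNext: { [weak self] newItems in self?.items.value = newItems },
                onError: { [weak self] error in self?.errorMessage.value = error.localizedDescription }
            )
            .disposed(by: disposeBag)
    }
}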
I'm building an app where I need to load data in chunks; I mean first load 5 items and then proceed with another 5, but I can't figure out how to do that. At the moment I chunk up my list of items, so I get a list of lists with 5 items in each. Right now the for loop just fires away with requests, but I want to wait for the response and then proceed in the for loop.
I use Alamofire, and my code looks like this:
private func requestItemsForField(fields: [Item], completion: @escaping (_ measurements: Array<Measurement>?, _ success: Bool) -> ()) {
    let userPackageId = UserManager.instance.selectedUserPackage.id
    let params = ["userPackageId": userPackageId]
    for field in fields {
        let url = apiURL + "images/\(field.id)"
        let queue = DispatchQueue(label: "com.response-queue", qos: .utility, attributes: [.concurrent])
        Alamofire.request(url, method: .get, parameters: params, headers: headers()).responseArray(queue: queue, completionHandler: { (response: DataResponse<[Item]>) in
            if let items = response.result.value as [Item]? {
                NotificationCenter.default.post(name: NSNotification.Name(rawValue: "itemsLoadedNotification"), object: nil)
                completion(items, true)
            } else {
                print("Request failed with error: \(response.result.error)")
                completion(nil, false)
            }
        })
    }
}
This is where I chunk up my list and pass it to the function above.
private func fetchAllMeasurements(completion: @escaping (_ measurements: [Item]?, _ done: Bool) -> ()) {
    let fieldSet = FieldStore.instance.data.keys
    var fieldKeys = [Item]()
    for field in fieldSet {
        fieldKeys.append(field)
    }
    // Create chunks of fields to load
    let fieldChunks = fieldKeys.chunkify(by: 5)
    var measurementsAll = [Measurement]()
    for fields in fieldChunks {
        requestItemsForField(fields: fields, completion: { (measurements, success) in
            if let currentMeasurement = measurements {
                measurementsAll.append(contentsOf: currentMeasurement)
            }
            completion(measurementsAll, true)
        })
    }
}
You need to get the number of measurements the server has (for example, the server has 34 measurements) with your request, and then code something like this:
var serverMeasurementsCount = 1 // should be 1 for the first request

func requestData() {
    if self.measurements.count < self.serverMeasurementsCount {
        ...requestdata { data in
            self.serverMeasurementsCount = data["serverMeasurementsCount"]
            self.measurements.append(..yourData)
            self.requestData()
        }
    }
}
Or call requestData somewhere other than inside the completion handler.
Edit: fixed the code a bit (serverMeasurementsCount = 1).
Instead of using a for loop, it sounds like you need to do something like this: start with var index = 0 and call requestItemsForField(), sending in fieldChunks[index] as the first parameter. Then in the completion handler, check whether there is another array element, and if so, call requestItemsForField() again, this time sending in fieldChunks[index + 1] as the first parameter, as sketched below.
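A minimal sketch of that index-based approach (illustrative only, reusing the requestItemsForField(fields:completion:) signature from the question):
// Hypothetical sequential loader: handles one chunk at a time and only starts the next
// chunk once the previous completion has fired.
private func requestChunksSequentially(fieldChunks: [[Item]],
                                       index: Int = 0,
                                       accumulated: [Measurement] = [],
                                       completion: @escaping (_ measurements: [Measurement]?, _ done: Bool) -> ()) {
    guard index < fieldChunks.count else {
        completion(accumulated, true) // all chunks finished
        return
    }
    requestItemsForField(fields: fieldChunks[index]) { measurements, _ in
        let combined = accumulated + (measurements ?? [])
        // Move on to the next chunk only after this one has returned.
        self.requestChunksSequentially(fieldChunks: fieldChunks,
                                       index: index + 1,
                                       accumulated: combined,
                                       completion: completion)
    }
}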
One solution would be to make a new recursive function to populate the items, and add a new Bool parameter, isComplete, to the closure; then call the function again from the completion when isComplete is set. To break the recursion, add a global static variable itemsCountMax; if itemsCountMax == itemsCount, stop the recursive calls.