I have the following data structure.
I want to do pagination/infinite scroll on this data using queryOrderedByValue. The problem is that I don't know how to query the data from two different branches at once. I know how to do it with just one branch, but not with two.
The method I'm using now is to run this query against each branch:
```swift
metadataRef.queryOrderedByValue().queryEnding(atValue: highest).queryLimited(toLast: UInt(limit))
```
This is problematic because the data is no longer in order, and the array is likely reordered each time the data is pulled. If anyone has a solution for this, I'd be very grateful! Also, I am doing this in Swift, so Swift code would be helpful.
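Spelled out, my per-branch fetch looks roughly like this (the wrapper function, parameter names, and completion handler are just for illustration):

```swift
import FirebaseDatabase

// A minimal sketch of the single-branch pagination described above.
// `metadataRef`, `highest` (the cursor value from the previous page), and
// `limit` are assumed to come from elsewhere in the app.
func loadNextPage(from metadataRef: DatabaseReference,
                  endingAt highest: Int,
                  limit: Int,
                  completion: @escaping ([DataSnapshot]) -> Void) {
    metadataRef
        .queryOrderedByValue()
        .queryEnding(atValue: highest)
        .queryLimited(toLast: UInt(limit))
        .observeSingleEvent(of: .value) { snapshot in
            // Children arrive ordered ascending by value; reverse for newest-first.
            let children = snapshot.children.allObjects as? [DataSnapshot] ?? []
            completion(children.reversed())
        }
}
```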
EDIT: The only other solution I can think of is to download all the data at once and then order it. But I worry this might be an issue when there are hundreds of thousands of entries.
I figured out that I need a news feed node for each user. This is a good read if you are running into the same problem:
https://firebase.googleblog.com/2015/10/client-side-fan-out-for-data-consistency_73.html
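A rough sketch of the client-side fan-out idea from that post, assuming a `/feed/{uid}/{postId}` layout and a list of follower IDs (all names here are illustrative, not from my actual data):

```swift
import FirebaseDatabase

// Illustrative fan-out write: push the post's sort value into every follower's
// personal feed node so each feed can later be paginated with a single
// ordered query. The /feed/{uid}/{postId} layout and `followerIds` are
// assumptions for this sketch.
func fanOutPost(postId: String,
                sortValue: Int,
                followerIds: [String],
                root: DatabaseReference) {
    var fanOut: [String: Any] = [:]
    for uid in followerIds {
        fanOut["feed/\(uid)/\(postId)"] = sortValue
    }
    // One multi-location update keeps all feeds consistent.
    root.updateChildValues(fanOut)
}
```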
I am having an issue where not all of the data being sent from the server is loaded into the entities. I request an array of objects that each contain another array of objects, and when I examine the request I can see that all of the expected data is there. However, when it is serialized into the JavaScript objects, the child array only contains one of the three items that were sent from the server.
What's even more odd is that for me it is always the same parent objects that are missing data in their child arrays, but if someone else goes in, they have completely different parent objects missing child data, and the items that were missing for me are there for them.
I am not really looking for coding help here; I just want to know if anyone else has experienced anything like this before, how they may have resolved it, or whether anyone knows of any possible causes for this behavior.
Thanks
#JCherryHomes, yeah! OMG, that is frustrating! Our first application had this problem regularly. Most of the time I was able to eliminate it by simplifying the data model (our context was, and is, our entire database). That wasn't a long-term solution, but it did narrow the problem down to a reverse navigation property on a single (apparently not directly related) table.
Eventually I moved as much definition (foreign keys, relationships, etc.) into the context and out of data annotations as I could, and we got it working. We haven't had a problem since. But I'm not sure if what I did fixed it or if some other aspect of the fooling around worked.
This question has some clues, but it didn't lead me to an answer (my fault, I'm sure). Maybe some of the responses from Jay will help you.
Good Luck
I suspect that a specific collection is no longer in use, but I want to do some checks before taking it offline.
A query on tbl_Command only returns data for a couple of weeks, and that is not enough for me.
So this is more of an application design question. But I think it can be 'answered' and not just discussed. :)
I'm using RestKit for an application we're building. It obviously makes it super easy to put stuff into either plain objects or Core Data objects.
In the specific instance I'm dealing with, we have comments, much like comments on a Facebook post.
Now, the nicest thing about storing these comments in Core Data is that with an NSFetchedResultsController (NSFRC) I can sort them super easily and deal with updating/inserting automatically into the right spots in the timeline (a rough sketch of that setup is below). But there are a couple of sticking points as well.
For instance, with infinite loading, I now have to manage loading the comments in between the newest comments and the old stored comments. (Maybe the first time I grabbed 25, but there have been 100 new comments since then. So I retrieve the latest 25 first, then have to show an auto-load cell between the new comments and the old ones until I run into those, then have to paginate anything after.)
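For reference, the NSFetchedResultsController setup I'm talking about is roughly this; "Comment" and "createdAt" are placeholder entity/attribute names:

```swift
import CoreData

// Rough sketch of the NSFRC setup: newest comments first, fetched in batches.
func makeCommentsController(context: NSManagedObjectContext) -> NSFetchedResultsController<NSManagedObject> {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Comment")
    request.sortDescriptors = [NSSortDescriptor(key: "createdAt", ascending: false)]
    request.fetchBatchSize = 25

    return NSFetchedResultsController(fetchRequest: request,
                                      managedObjectContext: context,
                                      sectionNameKeyPath: nil,
                                      cacheName: nil)
}
```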
Aside from that, you are also storing potentially thousands of comments in Core Data. Maybe it's not a big deal for quite a long time, but eventually you might want to start cleaning up old comments with a background (GCD) task.
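Something like this is what I have in mind for that cleanup pass (only a sketch; the entity name, attribute, and 30-day window are invented, and I'm using a background context rather than raw GCD):

```swift
import CoreData

// Sketch of a background cleanup pass that deletes comments older than a cutoff.
// "Comment", "createdAt", and the 30-day window are illustrative only.
func purgeOldComments(container: NSPersistentContainer) {
    container.performBackgroundTask { context in
        let cutoff = Date().addingTimeInterval(-30 * 24 * 60 * 60)
        let request = NSFetchRequest<NSFetchRequestResult>(entityName: "Comment")
        request.predicate = NSPredicate(format: "createdAt < %@", cutoff as NSDate)

        do {
            let delete = NSBatchDeleteRequest(fetchRequest: request)
            try context.execute(delete)
            try context.save()
        } catch {
            print("Cleanup failed: \(error)")
        }
    }
}
```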
So what are the leading thoughts on what to store in Core Data and what to keep as transient objects? (Maybe storing those in a cache like NSCache or the new Tumblr cache, https://github.com/tumblr/TMCache.)
Edit
OK, maybe I should clarify a little here. I get the purpose of Core Data: persisting across app restarts and having an object graph with relationships. I make plenty of use of it. I guess what I'm wondering about here are the grey areas where I would like things to persist for the sake of not always having to wait for a network call, and for offline availability.
But much like stories and comments on Facebook, there is always going to be a constant stream of new ones coming in, and you don't necessarily care about 300 comments on an old post. Someone could come back to view comments on their 'post' quite a few times, or someone may just be browsing 'posts' and comments casually and never come back to them.
So I'm just trying to work out the strategy for something like this, where you have potentially lots of entities (comments) coming down from a service. Sometimes people will want to view them several times (their own 'post') and sometimes they are just browsing through. When trying to see how others do this, it seems some apps stuff it all into Core Data, while others (like Facebook) seem to store the 25-50 most recent in the db, and anything beyond that is transient (they are probably clearing out older stories and comments regularly too).
Core Data is not designed to be used as "dumb data storage", but rather as object persistence. So, anything that you want to persist between uses of your app should go into Core Data.
If you are using Core Data properly, it will take care of all of your caching for you as well.
EDIT:
For anything that is going to change too often for your taste, or that you just don't want to store permanently, NSCache may be a better option. If you don't think your user will look at it again tomorrow, leave those bits out of your persistence. (IMHO)
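As a rough sketch of what I mean (the model and key names are invented for the example):

```swift
import Foundation

// Illustrative transient comment model; field names are made up.
final class Comment {
    let id: String
    let text: String
    init(id: String, text: String) {
        self.id = id
        self.text = text
    }
}

// Minimal sketch of keeping transient comments in NSCache instead of Core Data.
// NSCache evicts entries under memory pressure, which is fine for data you
// don't mind re-fetching from the network.
final class CommentCache {
    private let cache = NSCache<NSString, NSArray>()

    func store(_ comments: [Comment], forPost postId: String) {
        cache.setObject(comments as NSArray, forKey: postId as NSString)
    }

    func comments(forPost postId: String) -> [Comment]? {
        cache.object(forKey: postId as NSString) as? [Comment]
    }
}
```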
Create a second repository. Either select a time period that counts as 'recent' or provide a preference for it. Periodically look at the primary repository, find objects now older than the recent range, and move those objects to the second repository.
Then provide users the means to search either just the recent data or all of it.
If all they want are recent values, the searches should be faster, and nothing is lost.
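A generic sketch of that periodic sweep (the repository protocol, record type, and 90-day window are all illustrative):

```swift
import Foundation

// Illustrative abstraction over the two stores; not tied to any particular framework.
struct Record {
    let id: String
    let createdAt: Date
}

protocol Repository {
    func records(olderThan cutoff: Date) -> [Record]
    func insert(_ records: [Record])
    func delete(_ records: [Record])
}

// Periodically sweep the primary repository and move stale records
// into the archive repository.
func archiveOldRecords(primary: Repository,
                       archive: Repository,
                       recentWindow: TimeInterval = 90 * 24 * 60 * 60) {
    let cutoff = Date().addingTimeInterval(-recentWindow)
    let stale = primary.records(olderThan: cutoff)
    guard !stale.isEmpty else { return }

    archive.insert(stale)
    primary.delete(stale)
}
```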
I need to explain the practical problems that might be encountered when transforming transactional (and other) data from diverse sources into the Data Warehouse. According to my knowledge this is about cleansing and scrubbing data. If anyone knows of any practical problems, please help me. Thanks for your help.
That's a broad topic, but I'll offer a few good starting points.
For starters, think about history. If a transaction updates some data point, do you need to apply that retroactively, or do you need to remember what the value was at any given point in time? For example, suppose you have a monthly report of customers by city, and one of your customers moves. How should the DW reflect that?
Think about data acceptance. Is every input row a good input? For example, if you're dealing with web data, there are crawlers and spammers that you might not want to count the same way you count user traffic.
Think about data synchronization. Do all your inputs use the same keys? Do you know how to translate between them? Does Team A mean the same thing by "cust_id" as Team B does? A project glossary is very helpful here.
Think about localization. Are your inputs all in the same time zone? Do they all use the same calendar system? Do you need to handle Unicode?
Think about reporting. Can the data you're capturing answer the questions people will ask of the DW? If not, how can you capture data that can?
Think about presentation. Should you be showing customers the same data you're using for internal reporting? Does finance need to see a different slice of the data than marketing?
This really only scratches the surface of the issues that come up on a major DW project. I would refer you to Ralph Kimball's assorted books on data warehousing for a more in-depth discussion of problems and solutions. Hope this helps you get started.
You give the answer in your question.
According to my knowledge this is about cleansing and scrubbing data.
And you are correct. Cleansing data means that you have a company-wide list of clean element attributes, and a mapping that changes the unclean elements into clean elements.
Processing the data against the clean element attributes is a piece of cake compared to creating the company-wide list of clean element attributes.
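As a toy illustration of that mapping step (the attribute, the clean list, and the mapping entries are all invented):

```swift
// Toy illustration of cleansing against a company-wide list of clean values.
// The attribute ("country") and the example entries are invented.
let cleanCountries: Set<String> = ["United States", "United Kingdom", "Germany"]
let countryMapping: [String: String] = [
    "USA": "United States",
    "U.S.": "United States",
    "UK": "United Kingdom",
    "Deutschland": "Germany",
]

// Returns the clean value, or nil if the input needs manual review.
func cleanse(_ raw: String) -> String? {
    let trimmed = raw.trimmingCharacters(in: .whitespaces)
    if cleanCountries.contains(trimmed) { return trimmed }
    return countryMapping[trimmed]
}
```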
You have to get people from different departments to agree on what data to warehouse, and to agree on what each element means. This is a difficult sociological problem. It's not a terribly hard technical problem.
Good luck getting your company-wide list of clean element attributes.
Sorry if the question sounds weird, but I don't really know how else to put it.
Essentially, my application will have a bunch of objects. Each object has some kind of post/comment structure; the unique thing, though, is that it is more or less static, so I figure it would make no sense to put every single post and comment into my database, because that would cause more database load. Instead, I was thinking about storing the JSON representation of the post with its comments, thus only causing one database access per object. I would then render the JSON object in the controller or view or something. Is this a valid solution?
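To pin down the nesting I mean, here is a rough sketch of the serialized shape (field names are invented; Swift Codable types are used only to illustrate the structure, not the framework):

```swift
import Foundation

// Rough sketch of the denormalized shape: one row per post, with the
// comments serialized inside it. Field names are invented for the example.
struct Comment: Codable {
    let author: String
    let body: String
}

struct Post: Codable {
    let title: String
    let body: String
    let comments: [Comment]
}

// The whole post, comments included, would be stored as a single JSON string.
let post = Post(title: "Hello",
                body: "First post",
                comments: [Comment(author: "Alice", body: "Nice!")])
let json = try? JSONEncoder().encode(post)
```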
No!
You lose all ability to query that data, at no benefit unless you are at massive scale. The database's job is to pull that stuff out for you efficiently, and if you create the proper indexes and implement the proper caching strategies, you shouldn't have any issues with database load. You want to replace all the goodness of the Rails ORM with your own decidedly less useful version in the interest of a speed gain, way before you need it.
What if later you want to do a most popular comments sidebar widget? Or you want to page through the comments, regardless of the post they are associated with, in a table for moderation? What if you want your data to be searchable?
Don't sacrifice your ability to easily query and manipulate the data for premature optimization.
Though it sounds like a good idea, I don't think it will work in the long run when you consider what is going to happen once you have many comments on your posts. You will have to get the long string from the database, add the new comment to it, and then update it in the database. This will be very inefficient compared to just inserting one more comment into the table.
Also, just think about what is going to happen if, at some point, you have to give users the option to update a comment. Getting that particular comment out of the long string and then updating it will be a nightmare, don't you think?
In general you want to use JSON and the like as a bit of a last resort. Storing JSON in the db makes sense if your information isn't necessarily known ahead of time. It is not a substitute for proper data modelling and is not a win performance-wise.
To give you an idea of where I am looking at using it in a project: in LedgerSMB we want to be able to have consultants track additional information on some db objects. Because we don't know in advance what it will be, JSON makes a lot of sense. We don't expect to search on the data or support searches on it, but if we did, that could be arranged using plv8js.