Just for fun, I am building a Twitter clone to get a better understanding of C* (Cassandra).
All the suggested C* schemas that I have seen around use more or less the same modeling technique. The issue is that I have my doubts about the scalability of modeling the Twitter timeline in this fashion.
The problem:
What happens if I have a userA (a rock star), or several, who is extremely popular and is followed by 10k+ users?
Each time userA publishes a tweet, we have to insert it into the timeline table once for each of his 10k+ followers.
Questions:
Will this model really scale?
Can anyone suggest an alternative way of modeling the timeline that really scales?
C* Schema:
CREATE TABLE users (
uname text, -- UserA
followers set<text>, -- users who follow userA
following set<text>, -- users whom userA follows
PRIMARY KEY (uname)
);
-- View of tweets created by user
CREATE TABLE userline (
tweetid timeuuid,
uname text,
body text,
PRIMARY KEY(uname, tweetid)
);
-- View of tweets created by user, and users he/she follows
CREATE TABLE timeline (
uname text,
tweetid timeuuid,
posted_by text,
body text,
PRIMARY KEY(uname, tweetid)
);
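-- Tweets by id, referenced by the batch below. This table wasn't in the
-- original listing; its columns are reconstructed from the INSERT statement,
-- and the primary key is assumed:
CREATE TABLE tweets (
tweetid timeuuid,
uname text,
body text,
PRIMARY KEY (tweetid)
);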
-- Example of UserA posting a tweet:
-- BATCH START (in real code, generate the timeuuid once on the client and reuse it, so every table stores the same tweet id)
-- Store the tweet in the tweets table
INSERT INTO tweets (tweetid, uname, body) VALUES (now(), 'userA', 'Test tweet #1');
-- Store the tweet in this user's userline
INSERT INTO userline (uname, tweetid, body) VALUES ('userA', now(), 'Test tweet #1');
-- Store the tweet in this user's timeline
INSERT INTO timeline (uname, tweetid, posted_by, body) VALUES ('userA', now(), 'userA', 'Test tweet #1');
-- Store the tweet in the public timeline
INSERT INTO timeline (uname, tweetid, posted_by, body) VALUES ('#PUBLIC', now(), 'userA', 'Test tweet #1');
-- Insert the tweet into follower timelines
-- findUserFollowers = SELECT followers FROM users WHERE uname = 'userA';
for (String follower : findUserFollowers('userA')) {
  INSERT INTO timeline (uname, tweetid, posted_by, body) VALUES (follower, now(), 'userA', 'Test tweet #1');
}
-- BATCH END
Thanks in advance for any suggestions.
In my opinion, the schema you outlined (or a similar one) is the best fit for this use case (see the latest tweets of the users X subscribed to + see my own tweets).
There are two gotchas, however.
I don't think Twitter uses Cassandra for storing tweets, probably for the same reasons you're starting to think about. Running the feed on Cassandra doesn't seem like a great idea, because you don't want to persist these countless copies of other people's tweets forever, but rather keep some sort of sliding window updated for each user (most users don't read thousands of tweets down from the top of their feed, I'm guessing). So we're talking about a queue, and in some cases a queue that's updated essentially in real time. Cassandra can only support this pattern at the far end of scale with some coercion; I don't think it was designed for massive churn.
In production, another database with better support for queues would probably be picked; maybe something like sharded Redis with its list support.
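To illustrate, a minimal sketch of such a capped per-user feed on Redis lists, assuming node-redis v4 and an arbitrary window of 800 entries (the key naming is made up):
const { createClient } = require('redis');

const redis = createClient(); // assumes a local Redis; shard by key at scale
const FEED_CAP = 800;         // sliding-window size per user

async function pushToFeed(follower, tweet) {
  const key = `timeline:${follower}`;
  await redis.lPush(key, JSON.stringify(tweet)); // newest entries at the head
  await redis.lTrim(key, 0, FEED_CAP - 1);       // old entries fall off the end
}

async function readFeed(uname, count = 50) {
  const raw = await redis.lRange(`timeline:${uname}`, 0, count - 1);
  return raw.map(s => JSON.parse(s));
}

// Remember to await redis.connect() once at startup.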
For the example you gave, the problem is not as bad as it may seem, because you don't need to do this update in a synchronous batch. You can post to the author's lists, return quickly, and then do all the other updates with an asynchronous worker running in the cluster, pushing out updates with best-effort QoS, as sketched below.
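A minimal sketch of that split, assuming the DataStax Node.js cassandra-driver and a plain in-process array as a stand-in for a durable job queue:
const cassandra = require('cassandra-driver');
const { TimeUuid } = cassandra.types;

const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'datacenter1',
  keyspace: 'twissandra', // assumed keyspace name
});

const fanoutJobs = []; // stand-in for a real queue (Kafka, SQS, ...)

async function postTweet(author, body) {
  const tweetId = TimeUuid.now(); // generate the timeuuid once, client-side
  await client.execute(
    'INSERT INTO tweets (tweetid, uname, body) VALUES (?, ?, ?)',
    [tweetId, author, body], { prepare: true });
  await client.execute(
    'INSERT INTO userline (uname, tweetid, body) VALUES (?, ?, ?)',
    [author, tweetId, body], { prepare: true });
  fanoutJobs.push({ author, tweetId, body }); // defer the expensive part
  return tweetId;                             // respond to the client now
}

async function drainFanoutJobs() { // run periodically by a background worker
  while (fanoutJobs.length > 0) {
    const { author, tweetId, body } = fanoutJobs.shift();
    const rs = await client.execute(
      'SELECT followers FROM users WHERE uname = ?', [author], { prepare: true });
    const row = rs.first();
    for (const follower of (row && row.followers) || []) {
      await client.execute(
        'INSERT INTO timeline (uname, tweetid, posted_by, body) VALUES (?, ?, ?, ?)',
        [follower, tweetId, author, body], { prepare: true });
    }
  }
}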
Finally, since you've asked about alternatives, here is a variation that I could think of. It may be conceptually closer to the queue I mentioned, but under the hood it will run into a lot of the same problems related to heavy data churn.
CREATE TABLE users(
uname text,
mru_timeline_slot int, -- most recently written slot (the write pointer)
followers set<text>,
following set<text>,
PRIMARY KEY (uname)
);
-- Circular buffer: keep at most X slots for every user.
CREATE TABLE timeline_most_recent(
uname text,
timeline_slot int,
tweeted timeuuid,
posted_by text,
body text,
PRIMARY KEY(uname, timeline_slot)
);
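To make the write path concrete, here is a sketch of one fan-out insert against this schema, reusing the client from the earlier sketch and assuming X = 30 slots (note the read-modify-write on mru_timeline_slot is racy without lightweight transactions, which is part of the churn problem mentioned above):
const SLOTS = 30; // X: maximum timeline entries kept per user

async function pushMostRecent(follower, tweetId, author, body) {
  const rs = await client.execute(
    'SELECT mru_timeline_slot FROM users WHERE uname = ?',
    [follower], { prepare: true });
  const current = (rs.first() && rs.first().mru_timeline_slot) || 0;
  const slot = (current + 1) % SLOTS; // wrap around, overwriting the oldest entry
  await client.execute(
    'INSERT INTO timeline_most_recent (uname, timeline_slot, tweeted, posted_by, body) ' +
    'VALUES (?, ?, ?, ?, ?)',
    [follower, slot, tweetId, author, body], { prepare: true });
  await client.execute(
    'UPDATE users SET mru_timeline_slot = ? WHERE uname = ?',
    [slot, follower], { prepare: true });
}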
I'm looking to build an app that functions like a dating app:
User A fetches All Users.
User A removes Users B, C, and D.
User A fetches All Users again - excluding Users B, C, and D.
My goal is to perform a fetch query that does not read the User B, C, and D documents.
I've read up on array-contains-any, array-contains, and not-in queries, but the 10-item limit prevents me from using them, because the "removed users" list will keep growing.
Two workaround options I've mulled over are...
Performing a paginated fetch on All User documents and then filtering out on the client side?
Store all User IDs (A, B, C, D) on 1 document in an array field, fetch the 1 document, and then filter client side?
Any guidance would be extremely appreciated either on suggestions around how I store my data or specific queries I can perform.
You can do it the other way around.
Instead of a removed or ignored array on your current user, you keep an ignoredBy or removedBy array on each of the other users, to which you add your current user.
And when you fetch the users from the users collection, you just check whether the requesting user is in the ignoredBy array. That way you never have tons of entries to check in the array; it is always just one.
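A minimal sketch with the firebase-admin SDK (field and collection names mirror the answer; note that Firestore has no "array-does-not-contain" operator, so the check still happens in code after the read):
const admin = require('firebase-admin');
admin.initializeApp(); // assumes default credentials
const db = admin.firestore();

// Removing someone appends your uid to *their* document.
async function removeUser(currentUid, targetUid) {
  await db.collection('users').doc(targetUid).update({
    ignoredBy: admin.firestore.FieldValue.arrayUnion(currentUid),
  });
}

// Fetching users: one membership test per document.
async function fetchVisibleUsers(currentUid) {
  const snap = await db.collection('users').get(); // paginate in a real app
  return snap.docs
    .map(doc => ({ id: doc.id, ...doc.data() }))
    .filter(u => !(u.ignoredBy || []).includes(currentUid));
}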
Firestore may get a little pricey with the Tinder model but you can certainly implement a very extensible architecture, well enough to scale to millions of users, without breaking a sweat. So the user queries a pool of people, and each person is represented by their own document, this much is obvious. The user must then take an action on each person/document, and, presumably, when an action is taken that person should no longer reappear in the user's queries. We obviously can't edit the queried documents because there could be millions of users and that wouldn't scale. And we shouldn't use arrays because documents have byte limits and that also wouldn't scale. So we have to treat a collection like an array, using documents as items, because collections have no known limit to how many documents they can contain.
So when the user takes an action on someone, consider creating a new document in a subcollection in the user's own document (user A, the one performing the query) that contains the person's uid, and perhaps a boolean to determine if they liked or disliked that person (i.e. liked: true), and maybe a timestamp for UI purposes. This new document is the item in your limitless array.
When the user later performs another query, those same users are going to reappear in the results, which you need to filter out. You have no choice but to check if each person's uid is in this subcollection. If it is, omit the document and move to the next. But if your UI is configured like Tinder's, where there isn't a list of people to scroll through but instead cards stacked on top of each other, this is no big deal. The user will only be presented with one person at a time and they won't know how many you're filtering out behind the scenes. With a paginated list, the user may see odd behavior like uneven pages. The drawback is that you're now paying double for each query. Each query will cost you the original fetch and the subcollection-check fetch. But, hey, with this model you can scale to millions of users without ever breaking a sweat.
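A sketch of both halves of that model, again with firebase-admin and illustrative names (a swipes subcollection under each user document, keyed by the other person's uid):
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

// Record an action: one document per person acted on.
async function recordSwipe(userUid, targetUid, liked) {
  await db.collection('users').doc(userUid)
    .collection('swipes').doc(targetUid)
    .set({ liked, at: admin.firestore.FieldValue.serverTimestamp() });
}

// Query the pool, skipping anyone already swiped on
// (the subcollection check is the second read you pay for).
async function nextCandidates(userUid, batchSize = 20) {
  const pool = await db.collection('users').limit(batchSize).get();
  const fresh = [];
  for (const doc of pool.docs) {
    if (doc.id === userUid) continue;
    const seen = await db.collection('users').doc(userUid)
      .collection('swipes').doc(doc.id).get();
    if (!seen.exists) fresh.push({ id: doc.id, ...doc.data() });
  }
  return fresh;
}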
I am building a social network with Neo4j. It includes:
Node labels: User, Post, Comment, Page, Group
Relationships: LIKE, WRITE, HAS, JOIN, FOLLOW,...
It is like Facebook.
Example: user A follows user B. When B performs an action, such as liking a post, commenting, following another user, following a page, or joining a group, that action is sent to A as a notification. Similarly, users C, D, and E who follow B receive the same notification.
I don't know how to design the data model for this problem. I have some candidate solutions:
1. Create Notification nodes for every follower: when an action is executed, create n notifications for the n followers. Benefit: we can track whether each user has seen the notification. But the number of nodes grows very quickly (n copies of every action).
2. Run a query on every notification API call (from the client application) that only fetches the actions of followed users within some time window (24 hours, or 2-3 days). But then followers can't mark notifications as seen, and the query may slow the server down.
3. Create only a limited number of notification nodes, say 20-30 per user.
4. Create unlimited nodes (each carrying the time of the action) and delete nodes whose action time is older than 24 hours (the expiry time could also be 2-3 days).
Can anyone help me solve this problem? Which solution should I choose, or is there a better way?
I believe the best approach is option 1. As you said, you will be able to know whether the follower has read the notification. As for the number of notification nodes per follower: this problem is known as "supernodes" or "dense nodes", i.e. nodes with too many connections.
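A minimal sketch of option 1, assuming the FOLLOW relationship from your list, an illustrative NOTIFY relationship and uname property, and the Node.js neo4j-driver:
const neo4j = require('neo4j-driver');
const driver = neo4j.driver('bolt://localhost:7687',
  neo4j.auth.basic('neo4j', 'password')); // placeholder credentials

// Fan out one action by the actor to all followers: one Notification node each.
async function fanOutNotification(actorUname, action) {
  const session = driver.session();
  try {
    await session.run(
      `MATCH (actor:User {uname: $actorUname})<-[:FOLLOW]-(f:User)
       CREATE (f)-[:NOTIFY]->(:Notification {
         action: $action, created: timestamp(), seen: false })`,
      { actorUname, action });
  } finally {
    await session.close();
  }
}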
The book Learning Neo4j (by Rik Van Bruggen, available for download on Neo4j's website) talks about dense nodes or supernodes and says:
"[supernodes] becomes a real problem for graph traversals because the graph
database management system will have to evaluate all of the connected
relationships to that node in order to determine what the next step
will be in the graph traversal."
The book proposes a solution that consists of adding meta nodes between the follower and the notifications (in your case). Each meta node should get at most a hundred connections; if the current meta node reaches 100 connections, a new meta node must be created and added to the hierarchy. The book illustrates this with an example of popular artists and their fans.
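A sketch of that idea follows (the :META and :NOTIFY names and the count property are my own illustrations, not from the book; it reuses the driver from the previous sketch):
// Attach a notification to a meta node with spare capacity,
// starting a new meta node whenever the current one is full.
async function addNotificationViaMeta(session, uname, action) {
  // In a real app, wrap these three statements in a single transaction.
  let res = await session.run(
    `MATCH (u:User {uname: $uname})-[:META]->(m:Meta)
     WHERE m.count < 100 RETURN id(m) AS id LIMIT 1`, { uname });
  if (res.records.length === 0) {
    res = await session.run(
      `MATCH (u:User {uname: $uname})
       CREATE (u)-[:META]->(m:Meta {count: 0}) RETURN id(m) AS id`, { uname });
  }
  const id = res.records[0].get('id');
  await session.run(
    `MATCH (m:Meta) WHERE id(m) = $id
     CREATE (m)-[:NOTIFY]->(:Notification {action: $action, seen: false})
     SET m.count = m.count + 1`, { id, action });
}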
That said, I don't think you need to worry about this right now. If in the future your follower nodes become a problem, you will be able to refactor your database schema. For now, keep things simple!
In the series of posts called "Building a Twitter clone with Neo4j", Max de Marzi describes the process of building such a model. Maybe it can help you make better decisions about yours!
Hi, I am working on a social project that needs to show a user's followers and following, and I am using the AnyPic project as an example (https://parse.com/tutorials). In the AnyPic example, showing the number of followers is easy: you just fetch the list of a user's followers and count them. But my question is: what if there are 500K or 1M followers? Will this approach be slow, or should we do something different?
For example, we could still follow the AnyPic example and have a class (or table) to record who is following whom, and at the same time keep an integer column called "Number of Followers" in the user table. Every time a user follows userA, we increment userA's "Number of Followers". Then whenever we need to know the number of followers of userA, we can simply read that column. But I'd prefer not to do it this way, since it adds some extra complexity.
Please let me know what you think about this. Or maybe Parse is so fast and powerful that I just don't need to worry about this issue at all.
One thing I've learned with Parse: sometimes the "dirty way" is the best choice. Do you know Parse Cloud Code? Just use an afterSave function to increment the number of followers.
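In Cloud Code that is just a few lines; a sketch, assuming a Follow class with a to pointer to the followed user and a numeric followerCount column on the user (names are illustrative):
Parse.Cloud.afterSave('Follow', function(request) {
  if (request.object.existed()) return;    // only count brand-new follows
  var followed = request.object.get('to'); // pointer to the followed user
  followed.increment('followerCount');     // atomic counter, no fetch needed
  followed.save();
});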
P.S.: Choose your best strategy (join table or Parse relation) based on Parse's own guidance: https://parse.com/docs/relations_guide#manytomany
We need to find all the courses for a user whose startDate is less than today's date and whose endDate is greater than today's date. We are using the API
/d2l/api/lp/{ver}/enrollments/myenrollments/?orgUnitTypeId=3
In one particular case I have more than 18 thousand courses for one user. The service cannot return 18 thousand records in one go; I can only get 100 records at a time, so I have to use the bookmark field to fetch the data in sets of 100. The bookmark is the courseId of the 100th (last) record fetched, used to get the next set of 100 records.
/d2l/api/lp/{ver}/enrollments/myenrollments/?orgUnitTypeId=3&bookmark=12528
I need to repeat the loop 180 times, which results in a "Request time out" error.
I need to filter the records based on startDate and endDate, but no sorting criterion is available to sort the data by startDate or endDate. Can anyone help me find a way to sort this data, or point to another API that supports such sorting?
Note: all 18 thousand records have the property "IsActive": true.
Rather than getting to the list of org units by user, you can try getting to the user by the list of org units. You could try using /d2l/api/lp/{ver}/orgstructure/{orgUnitId}/descendants/?ouTypeId={courseOfferingType} to retrieve the entire list of course offering IDs descended from the highest common ancestor known for the user's enrollments. You can then loop through /d2l/api/lp/{ver}/courses/{orgUnitId} to fetch back the course offering info for each one of those org units to pre-filter and cut out all the course offerings you don't care about based on dates. Then, for the ones left, you can check for the user's enrollment in each one of those to figure out which of your smaller set the user matches with.
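Hypothetically, the flow could look like this (d2lFetch stands in for an authenticated request helper you would have to supply, and the response field names are assumptions based on the Valence org unit and course offering blocks, so double-check them):
const VER = '1.15'; // placeholder API version

async function activeCourseIds(rootOrgUnitId, courseOfferingTypeId) {
  const units = await d2lFetch(
    `/d2l/api/lp/${VER}/orgstructure/${rootOrgUnitId}/descendants/?ouTypeId=${courseOfferingTypeId}`);
  const now = new Date();
  const active = [];
  for (const unit of units) {
    const course = await d2lFetch(`/d2l/api/lp/${VER}/courses/${unit.Identifier}`);
    const start = course.StartDate && new Date(course.StartDate);
    const end = course.EndDate && new Date(course.EndDate);
    if (start && end && start < now && now < end) active.push(unit.Identifier);
  }
  return active; // then check the user's enrollment in each of these
}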
This will certainly result in more calls to the service, not fewer, so it has only two advantages that I can see:
You should be able to get the entire starting set of course offerings you need off the hop, rather than getting it back in pages (although it's entirely possible that this call will get turned into a paging call in the future, deprecating its current "fetch all the org units at once" behaviour).
If you need to do this entire use-case for more than one user, you can fetch the org structure data once, cache it, and then only do queries for users on the subset of the data.
In the meantime, I think it's totally reasonable to request an enhancement on the enrollment calls to provide better filtering (active/inactive, start dates, end dates, etc.): I suspect that such a request might see more traction than a request to give clients control over paging (i.e. the number of responses in each page frame).
I'm trying to build a (simple) twitter-clone which uses CouchDB as Database-Backend.
Because of its reduced feature set, I'm almost finished with the coding, but there's one thing left that I can't solve with CouchDB: the per-user timeline.
As with Twitter, the per-user timeline should show the tweets of all the people I'm following, in chronological order. With SQL it's a quite simple SELECT statement, but I don't know how to reproduce this with CouchDB's map/reduce.
Here's the SQL statement I would use with an RDBMS:
SELECT * FROM tweets WHERE user_id IN (1, 5, 20, 33, ...) ORDER BY created_at DESC;
CouchDB schema details
user-schema:
{
"_id": "xxxxxxx",
"_rev": "yyyyyy",
"type":"user",
"user_id":1,
"username":"john",
...
}
tweet-schema:
{
"_id":"xxxx",
"_rev":"yyyy",
"type":"tweet",
"text":"Sample Text",
"user_id":1,
...
"created_at":"2011-10-17 10:21:36 +000"
}
With view collations it's quite simple to query CouchDB for a list of "all tweets with user_id = 1 ordered chronologically".
But how do I retrieve a list of "all tweets which belong to the users with IDs 1, 2, 3, ..., ordered chronologically"? Do I need another schema for my application?
The best way of doing this would be to save created_at as a numeric timestamp (so it sorts naturally) and then create a view that maps all tweets to their user_id:
function(doc) {
  if (doc.type == 'tweet') {
    emit(doc.user_id, doc); // key: author id, value: the full tweet doc
  }
}
Then query the view with the user ids as keys, and sort the results in your application however you want (most languages have a sort method for arrays).
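For example, assuming the map function above is saved as a view named by_user in a design document called tweets:
// Fetch the tweets of all followed users in one request, then sort client-side.
const followed = [1, 5, 20, 33];
const url = 'http://localhost:5984/mydb/_design/tweets/_view/by_user' +
  '?keys=' + encodeURIComponent(JSON.stringify(followed));
const { rows } = await (await fetch(url)).json();
const timeline = rows
  .map(row => row.value)                         // the emitted tweet docs
  .sort((a, b) => b.created_at - a.created_at);  // newest first (numeric timestamps)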
Is this a CouchDB-only app, or do you use something in between for additional business logic? In the latter case, you could achieve this by running multiple queries.
This might include merging different views. Another approach would be to add a list of "private readers" for each tweet. It allows user-specific (partial) views, but also introduces the complexity of adding the list of readers for each new tweet, or even updating the list in case of new followers or unfollow operations.
It's important to think about the possible operations and their frequencies. So if you're mostly generating lists of tweets, it's better to shift the complexity into how the reader information is integrated into your documents (i.e. embedding the readers in each tweet doc), so that you can then easily build efficient view indices.
If you have many changes to your data, it's better to design your database not to update too many existing documents at the same time. Instead, try to add data by adding new documents and aggregate via complex views.
But you have shown an edge case where a simple (one-dimensional) list-based index is not enough. You'd actually need secondary indices to filter by time and user ids (given the fact that you also need partial ranges for both). This is not possible in CouchDB, so you need to work around it by shifting "query" data into your docs and using it when building the view.
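For example, if each tweet doc carried a readers array (maintained at write time, with all the update costs described above), a map function could emit one row per reader under a composite [reader, created_at] key, so a single startkey/endkey range returns one user's timeline in time order:
function(doc) {
  if (doc.type == 'tweet' && doc.readers) {
    for (var i = 0; i < doc.readers.length; i++) {
      // e.g. ?startkey=[42]&endkey=[42, {}] selects all of reader 42's tweets
      emit([doc.readers[i], doc.created_at], doc);
    }
  }
}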