Firebase data structure for chat app - iOS

I'm looking to basically remake Kik within my app. In most guides I've seen for a Firebase chat application, there is one major messages node, and underneath that a fan-out of per-user nodes that reference the messages in the main list.
With the way my Firebase is laid out at the moment, it would be easier to implement something like this:
users
    chatPartners
        02834092890428
            chatMessages
                2093840923840923
                    timestamp / userUID / etc.
and just keep the actual chat inside my user's node. This also seems to cut down significantly on having to sift through every single message in a global messages node?
So when users send messages to each other, I'd update the "chatMessages" node under both the sender and the recipient.
Is there any reason NOT to do it like this? I see everyone doing it the first way I described, yet I don't see why storing each chat under user -> chat partner -> chat log would be an issue.

The only issue you may run into is how the data is fetched. Because the chat log is a child of 'users' and 'chatPartners', observing that branch pulls down everything stored beneath it, essentially loading every piece of data kept under 'users', which costs you in load time and performance as the data grows.
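For comparison, here is a minimal sketch of the flattened fan-out layout most guides use, assuming the Firebase 3.x Swift SDK and hypothetical top-level node names "messages" and "user-messages" (the node names are illustrative, not taken from the question):

import Firebase

// A sketch of the flattened layout, not the asker's existing structure.
func send(message text: String, from senderUID: String, to recipientUID: String) {
    let root = FIRDatabase.database().reference()

    // Store the full message exactly once, under a top-level "messages" node.
    let messageRef = root.child("messages").childByAutoId()
    let payload: [String: Any] = [
        "text": text,
        "fromUID": senderUID,
        "toUID": recipientUID,
        "timestamp": FIRServerValue.timestamp()
    ]

    // Fan out only the message ID to both participants. Reading a
    // conversation then means reading a small list of IDs, never the
    // rest of the data stored under "users".
    let fanOut: [String: Any] = [
        "messages/\(messageRef.key)": payload,
        "user-messages/\(senderUID)/\(recipientUID)/\(messageRef.key)": true,
        "user-messages/\(recipientUID)/\(senderUID)/\(messageRef.key)": true
    ]
    root.updateChildValues(fanOut)
}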

Related

Swift 3 Firebase chat application duplicate messages

I was hoping for some advice/help. I've come across a neat little bug in the chat application I'm running, which is built in Swift 3 and uses anonymous Firebase authentication.
Once I'm in the chat room and then return to the home/login page, the app seems to keep the previous user who was logged in on the device and acts as though I am multiple people. When I send a message, it produces two (or more, depending on how many times I return to home/login and re-enter the room) chat bubbles.
The one fix I've tried came from a previous answer on this site, I believe it was "try! FIRAuth.auth().signOut()" attached to an @IBAction, but I'm not sure whether that is designed for anonymous users as well?
I'm currently away from my code and can't give any snippets until this evening, but I will certainly answer anything I can until that point.
Thanks
Anonymous accounts function just like any other account, so they can be logged out using FIRAuth.auth().signOut().
It's not possible to have more than one FIRAuth.auth().currentUser associated with a single device, so my best guess is that you are initialising multiple models, and with them duplicate message observers, each time the chat controller is initialised.
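If that is what's happening, a minimal sketch of the cleanup looks like the following; the class, property, and node names here are placeholders, not your actual code:

import UIKit
import Firebase

class ChatRoomViewController: UIViewController {

    var messagesRef: FIRDatabaseReference?
    var messagesHandle: FIRDatabaseHandle?

    override func viewDidLoad() {
        super.viewDidLoad()
        let ref = FIRDatabase.database().reference().child("messages")
        messagesRef = ref
        // Keep the handle so this exact observer can be removed later.
        messagesHandle = ref.observe(.childAdded) { snapshot in
            // append the snapshot to the data source and reload the UI ...
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Without this, re-entering the room attaches a second .childAdded
        // observer and every new message is appended twice.
        if let handle = messagesHandle {
            messagesRef?.removeObserver(withHandle: handle)
        }
    }

    @IBAction func logOutTapped(_ sender: Any) {
        // Signing out works for anonymous users too.
        try? FIRAuth.auth()?.signOut()
    }
}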

How to invite users to join a multiplayer Gaming Session using Parse (Swift)

I'm trying to develop a trivia app, much like QuizUp but with multiple players.
Here's what I thought of doing:
Creating a class called 'Game Session' on Parse that stores who created it (PFUser.current), the name of the gaming session (name), and the users who were invited (invited_users). Think of this gaming session as a closed group where only its members interact with each other.
So there's a createSessionViewController, and a joinSessionViewController.
If User A creates a gaming session (in createSessionViewController) and sends invites out to User B and User C, they get to accept or decline these invites in joinSessionViewController.
Now, from what I have researched, I would have to query all of the objects in the Game Session class (in viewDidLoad of the joinSessionViewController) and use query.whereKey to check whether, for example, User B's object id is in the "invited_users" column. If so, I return that gaming session's object. Is that right?
If that is the case, is it an efficient way of doing it? It seems like if the app gets popular and the class accumulates a lot of objects, it could take a long time to find the one object with User B's id.
I hope I made myself clear and you guys understand my question.
PS: I'm sort of new to Parse and Swift, so if you could give me detailed answers it would be much appreciated.
Your logic is correct but I would also strongly suggest you take a look at Parse-LiveQuery. This tool allows you to subscribe to a PFQuery you are interested in. Once subscribed, the server will notify clients whenever a PFObject that matches the PFQuery is created or updated, in real-time.
https://github.com/ParsePlatform/parse-server/wiki/Parse-LiveQuery
https://github.com/ParsePlatform/ParseLiveQuery-iOS-OSX
Your assumption is correct, and that is indeed one way you could go about it, although it has the drawbacks you mentioned. If you felt like putting more effort into it, you could write JavaScript Parse Cloud Code that executes after an item is saved (for example, after a game session is created) and sends out silent push notifications containing the new object's id to the users who were invited. You could then use that push notification data to know the exact ids instead of having to query for them. This is much more advanced, though. For whatever your app is, the simple route of having a model query the data on load should be fine. If you find yourself in a situation where performance is hindered because of this, well then congratulations.
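For the simple route, a minimal sketch of that first query could look like the following, assuming a hypothetical "GameSession" class whose "invited_users" array column stores the objectIds of invited users (adjust the names to your real schema):

import Parse

func loadPendingInvites(completion: @escaping ([PFObject]) -> Void) {
    guard let currentUserId = PFUser.current()?.objectId else {
        completion([])
        return
    }

    let query = PFQuery(className: "GameSession")
    // For array columns, whereKey(_:equalTo:) matches any object whose
    // array contains the given value.
    query.whereKey("invited_users", equalTo: currentUserId)

    query.findObjectsInBackground { sessions, error in
        if let error = error {
            print("Failed to load invites: \(error.localizedDescription)")
            completion([])
        } else {
            completion(sessions ?? [])
        }
    }
}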

Parse: Basic App to App Communication

I am trying to figure out what the basic steps are for getting data passed between users. For example, say I, a user of the app, want to send another user of my app a message, a geopoint, or some other piece of data. I know the first step would be posting the data to Parse, which I don't have a problem with. But then, how would the other user know there is data to retrieve, and how would they go about retrieving it? Are push notifications the proper (and only) way of letting the recipient's app know it's being sent something? And once the recipient app knows there is data posted for it, how does it locate that data with a PFQuery? Does the posting app need to supply the receiving app with a UID of some form that the receiving app can then use in its query to locate the data? This is kind of the last puzzle piece for my app, and unfortunately it's the only thing Parse didn't make clear to me. It is more than likely user error on my part in not finding the correct documentation, but app-to-app communication is key in most apps, so I need to figure out the de facto way that Parse accomplishes this. Thanks in advance for any help!
You can have a relational table, let's say a "Messages" table in Parse, with the properties sender (pointer to a User), recipient (pointer to a User), and message (String), and maybe a 'read' Boolean.
You could then query the Messages table with something like:
PSEUDO: get all messages where recipient is equal to the logged-in user,
and display those messages in the UI.
It's pretty straightforward; I have built a simple messaging service with Parse before.
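A minimal sketch of that in Swift, assuming the class name "Messages" and the column names from the answer above:

import Parse

func send(message text: String, to recipient: PFUser) {
    guard let sender = PFUser.current() else { return }
    let message = PFObject(className: "Messages")
    message["sender"] = sender
    message["recipient"] = recipient
    message["message"] = text
    message["read"] = false
    message.saveInBackground()
}

// "Get all messages where recipient is equal to the logged-in user."
func fetchInbox(completion: @escaping ([PFObject]) -> Void) {
    guard let me = PFUser.current() else {
        completion([])
        return
    }
    let query = PFQuery(className: "Messages")
    query.whereKey("recipient", equalTo: me)
    query.order(byDescending: "createdAt")
    query.findObjectsInBackground { messages, _ in
        completion(messages ?? [])
    }
}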
Thanks guys! In the end, I think it is best for a device to not have to be querying for changes but rather to be notified when it has new data to retrieve. Thus, for my uses, I think a combination of your answers, especially with the "onSave" hook function mentioned by Bruno, is the best solution.

What's the most efficient way to create an alert queue for a model with hundreds of millions of entries?

I am currently working on an application in Rails (though language/framework shouldn't matter for this question since it is more of a theoretical one). I'm working on wrapping my head around this problem:
Say I am tracking millions of blogs online and am plugged into their RSS feeds. My app pings these feeds every few minutes to see if there has been any new activity across any of these millions of blogs. If there is any new activity, I want to alert the users of my application who have signed up to receive alerts for specific blogs.
Does it make sense to have a user_blog_alerts table (where a user can specify custom keywords to be alerted about) and continuously check this table against every new entry that comes in from my feeds? And when there is a match, add it to a queue (using Redis)?
What is the best, most efficient way to build and model this alerting system? Am I even thinking about this in the right way? Are there any good examples or tutorials on this when working with such large amounts of data?
I'm not sure what the right way to do this is, but the thought of continuously scanning a table over and over sounds exhausting (i.e. unscalable).
Off the top of my head, what if you created a LIST in Redis for every blog? The values would be the user IDs of those who wanted an alert, and the key name would contain the blog id (e.g. "user_blog_alerts:12345").
Then when you get a new post for blog 12345, it's a simple lookup to see if that key exists. If it does, fire off alerts for each user in the list.
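As a conceptual sketch of that pattern (the RedisClient protocol here is hypothetical, just to show the key scheme; the comments name the real Redis commands each call stands in for):

protocol RedisClient {
    func exists(_ key: String) -> Bool            // EXISTS key
    func rpush(_ key: String, _ value: String)    // RPUSH key value
    func lrange(_ key: String) -> [String]        // LRANGE key 0 -1
}

// Key-per-blog naming, e.g. "user_blog_alerts:12345".
func alertKey(forBlog blogId: Int) -> String {
    return "user_blog_alerts:\(blogId)"
}

// When a user signs up for alerts on a blog, append their ID to that blog's list.
func subscribe(userId: String, toBlog blogId: Int, redis: RedisClient) {
    redis.rpush(alertKey(forBlog: blogId), userId)
}

// When a new post arrives, one key lookup says whether anyone cares, and the
// list says exactly who to alert. No scan over all subscriptions required.
func handleNewPost(forBlog blogId: Int, redis: RedisClient) {
    let key = alertKey(forBlog: blogId)
    guard redis.exists(key) else { return }
    for userId in redis.lrange(key) {
        print("enqueue alert for user \(userId)")  // hand off to the alert queue here
    }
}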

Integrating Twitter, Facebook and other services in one single site

I need to develop an application that pulls in all the statuses and messages from different services like Twitter, Facebook, etc., and when I post a message it should get updated across all of those services as well. I am using Authlogic for authentication. Can anyone suggest which gems/plug-ins I can use?
I need API help to get all the tweets/messages displayed in my application, and also a way to post messages to the corresponding services from my application. Can anyone help me from a design point of view?
Walk through what you'd want to do in your head. Imagine the working site, imagine your webapp working before you start. So your user logs in (handled by authlogic) and sees a textbox called "What are you doing right now?". The user fills in a status message and clicks "post". The status message appears at the top of their previously posted messages.
Start with the easy part. Create a class that posts to two services. Use the twitter gem and rfacebook to post to two already defined services. In the future, you'll want to let the user associate services to their account and you would iterate through the associated services and post the message to each. Once you have this working, you can refactor or polish the UI a bit to round out this feature. I personally would do the "add a social media account to my profile" feature towards the end.
Harder is the reading of the data (strangely enough) because you're going to have to figure out how to store it. You could store nothing but I suspect you'd run into API limits just searching all the time (could design around this). I would keep a little cache of posts associated to the user's social media account. In this way, the data model would look like this:
A user has many social media accounts.
A social media account has many posts. (cache)
Of course, now you need to schedule the caching of the posts. This could be done manually, based on an event (like when they login) or time based. So when the update happens, you load up the posts for that social media account and the user will see the posts the next time they hit the page. For real-time push to the client's browser while they stare at the screen, use faye (non-trivial) and ajax to pull the new posts to the top of the social media stream view.
The time-based one is tricky because you'd either have to run a cron job or have Rails handle it all with a gem like clockwork, but then you have to leave Rails running. I've also solved this by having a class in /lib do all the work, with a simple web call kicking off the update, but that wasn't in a multi-user use case, so it might not work here. In any case, you'll want some nice reusable code for these problems, since update requests can come from many different sources.
You'll also have to deal with the API limits. When pulling down content from twitter, you won't get everything. That will just have to be known by the user or you'll have to indicate a "break in time" somehow.
The UI should be pretty easy (functionally anyway), because you know which source the post/content is coming from. It'd be easy to throw a little icon next to the post to display which social media site it's coming from.
Anyway, good luck, sounds like a fun project.
