ActionCable subscriber does not get an update - ruby-on-rails

Consider this scenario in the app:
A user submits an answer, then gets taken to a page where they are subscribed to an ActionCable channel.
Every time a user submits an answer, a broadcast also happens so that subscribed users can be shown the new user and their answer.
Problem: when two users submit answers at almost the same time (within a second),
one user ends up seeing two answers, theirs and the other user's;
the other user ends up seeing only their own answer.
Question:
Is there a way to make it so that both players see each other's answers?
I've been searching for solutions but have not found any. I may have stumbled upon the cause of the issue in the Action Cable documentation on rubyonrails.org:
If you're not streaming a broadcasting at the very moment it sends out an update, you will not get that update, even if you connect after it has been sent.

Is there a way to make it so that both players see each other's answers?
Load the answers from the database on page load, and only use broadcasts to update what is shown on-screen in real time.
If a broadcast arrives for an answer that is already on-screen, either ignore it or use it to replace the one already shown.
Subscribing to WebSocket messages is like tuning into a radio station: you have no idea what was broadcast before you started listening, so you'll need to get that data from elsewhere.
Alternatively, if you'd prefer to start with a blank slate, you could have the client-side code make an API call (or ActionCable request) after it subscribes to fetch all the relevant data, and still use broadcasts to keep that data updated.
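A minimal sketch of that pattern (the Answer model, AnswersChannel, and stream name are illustrative, not taken from the question):

```ruby
# app/channels/answers_channel.rb
class AnswersChannel < ApplicationCable::Channel
  def subscribed
    stream_from "answers" # everyone listens on the same stream
  end
end

# app/controllers/answers_controller.rb (relevant parts)
class AnswersController < ApplicationController
  def index
    # Answers submitted before this client subscribed come from the
    # database; broadcasts only cover answers created afterwards.
    @answers = Answer.order(:created_at)
  end

  def create
    answer = Answer.create!(params.require(:answer).permit(:body))
    # Update everyone who is currently subscribed.
    ActionCable.server.broadcast("answers", id: answer.id, body: answer.body)
    redirect_to answers_path
  end
end
```

On the client, the channel's received callback can then check whether an answer with that id is already on-screen and skip or replace it, rather than appending a duplicate.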

Related

XMPP, sending one message to thousands of Jabber IDs - Jabber ends up sending it to only a random part of the selected group of JIDs

We've got one 'superuser' account that we use to send messages to selected JIDs. Let's say we've selected the ones we want to message, giving us a huge array of user JIDs (20k at this point). We've got a daemon running in the background sending one message at a time to each user, pausing for a minute after every 2000 messages sent (to stay under a 2500/minute limit). We are using xmpp4r as the client that handles sending messages. Every user has the same #xmpp.address. <body> is the same in every message.
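The sending loop is essentially this sketch (the credentials, JID file, and batch size are placeholders for illustration):

```ruby
require 'xmpp4r'

# Placeholder superuser credentials.
client = Jabber::Client.new(Jabber::JID.new('superuser@example.com/daemon'))
client.connect
client.auth('secret')

jids = File.readlines('jids.txt').map(&:strip) # ~20k JIDs

jids.each_slice(2000) do |batch|
  batch.each do |jid|
    msg = Jabber::Message.new(jid, 'The same body for every user')
    msg.type = :chat
    client.send(msg)
  end
  sleep 60 # pause to stay under the 2500 messages/minute limit
end

client.close
```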
Our Tigase logs (because that's what we're using) show that the messages did actually hit the Jabber server and were sent to the appropriate users, one at a time.
The issue we're experiencing is that although everything seemed fine, only some of the users actually got the message. For example, at one point, of the first 100 messages sent, messages 1..20 and 91..100 were delivered while the middle 70 were not delivered at all. We've improved a couple of things in the meantime, but this still might be a clue.
We tried creating an array of 10000 duplicated JIDs (the JIDs of a couple of users repeated thousands of times), and every single message got delivered (and in the right order).
We've already spent a couple of days trying different scenarios and are starting to run out of ideas about what might be going wrong.
Any ideas what we might have missed?
I am from the Tigase team. First, I recommend using our online forums, as that is where we normally respond to questions; we may not see questions posted here.
Anyway...
There are some details not included in your post.
How do you connect, and with what? Over a standard XMPP connection, over BOSH, or something else?
What do you mean by "duplicated JIDs"? How did you duplicate JIDs?
Are all the users for which a message is sent online during the test?
If you can see the message in the Tigase logs, you should also see what happened to it. Was it submitted to the network socket for delivery to a client?
What kind of HW did you use? Is there a chance that the server was overloaded and simply dropped some messages? Seems unlikely if you talk about 100 messages and 70 of them not delivered.
How do you actually know that a message was not delivered and are you sure the client/user was connected at the time?

Twitter - public Stream handling deletion notices

I am using the Twitter public stream API to search for some keywords. I am writing my script in Java, and therefore I use twitter4j. Now I've stumbled upon this information about status deletion notices:
Status deletion notices (delete)
These messages indicate that a given Tweet has been deleted. Client
code must honor these messages by clearing the referenced Tweet from
memory and any storage or archive, even in the rare case where a
deletion message arrives earlier in the stream than the Tweet it
references.
https://dev.twitter.com/docs/streaming-apis/messages#Status_deletion_notices_delete
So I created methods to remove records from my database when such a notice occurs. Unfortunately, such a notice never arrives. I searched for what I might be doing wrong and found some posts in the Twitter developer section concerning the same problem:
https://dev.twitter.com/discussions/17393
https://dev.twitter.com/discussions/19943
https://dev.twitter.com/issues/1355
https://dev.twitter.com/discussions/12836
but unfortunately none of these discussions got an answer. So it seems I made no mistake in my code, but twitter4j never sends me a deletion notice.
I want to respect the privacy of the Twitter users - at least for legal reasons. So my questions are:
What can I do to respect the privacy of the users?
What do I have to do to satisfy my legal duties?
One alternative seems to be to periodically iterate through all the saved Tweets in my database and request them from Twitter to see whether I get a result back or not (if not, they were deleted). But this doesn't seem to be a practicable approach, because the data will keep growing, and at some point I will hit limits (time, allowed Twitter requests, ...).
Thanks in advance! Your help is greatly appreciated.
Ludwig
twitter4j v.3.0.6
Given the volume of tweets, it's unreasonable to expect you to check whether every tweet you've stored still exists. You should make sure that you properly act on a delete notice from Twitter; the onus is on them to actually send the delete notification.
That being said, I do receive delete notifications from Twitter. However, we aren't using the public stream; we are using site streams, which rely on authorizing specific social accounts and streaming all updates for those accounts (e.g. favorites, follows, blocks, tweets, retweets, etc.) to us in real time.
If you are doing a stream with filters, for example, it's probably not feasible (or at least very taxing) for Twitter to run all deleted items through the same pipeline as new items, or to work out which deletion notices you should be sent based on the times your filter was running.
As noted in the issue you linked to, the public streaming API will not necessarily send them out. I'd endeavor to handle them, and possibly provide a tool to manually remove tweets if a request comes in through another channel, but I wouldn't worry too much about it, given that Twitter doesn't provide a proper facility for being notified of such instances.
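If you were doing this from Ruby, a delete handler with the tweetstream gem might look like the following sketch (the StoredTweet model and the credential env vars are placeholders, not from the question); twitter4j exposes the equivalent hook as onDeletionNotice on its StatusListener.

```ruby
require 'tweetstream'

TweetStream.configure do |config|
  config.consumer_key       = ENV['TWITTER_CONSUMER_KEY']
  config.consumer_secret    = ENV['TWITTER_CONSUMER_SECRET']
  config.oauth_token        = ENV['TWITTER_OAUTH_TOKEN']
  config.oauth_token_secret = ENV['TWITTER_OAUTH_TOKEN_SECRET']
  config.auth_method        = :oauth
end

client = TweetStream::Client.new

# Fires when a status deletion notice arrives on the stream.
client.on_delete do |status_id, user_id|
  StoredTweet.where(tweet_id: status_id).destroy_all # hypothetical model
end

client.track('keyword')
```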

Core Data: posting data to a web service, preventing duplicate posts?

This is perhaps a simple one, but I have been running around in circles for a few hours trying to work out the best way to do this.
Essentially my app allows a user to create a post entry, which is saved into Core Data and then posted to a web service if the Internet is available. The posting to the web service is done in a background thread, allowing the user to carry on working.
The records are flagged SendToWebService = 1
My issue now is that the user can view a list of the entries they made in the app and select one to re-post to the web service if it has not already happened. However, this is causing duplicate posts, because the previous background thread may still be working on posting the entry (it may be uploading an image or something big).
Any suggestions on how to best handle this?
Thanks
I would suggest a single upload-status flag with three values in your Core Data object:
0 => upload failed,
1 => currently uploading,
2 => upload complete.
As soon as the user selects to upload the post, set the flag to "currently uploading" and swap the upload button for a spinner or something. When the upload completes, whether it failed or finished, change the button to "done" or "re-upload" depending on the flag.
This seems like the obvious answer; I hope I understood your question correctly.
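The question is about Core Data, but the flag logic itself is platform-neutral. A minimal Ruby sketch of the state machine (all names hypothetical; send_to_web_service stands in for the real network call):

```ruby
# Upload states stored on each post record.
UPLOAD_FAILED = 0
UPLOADING     = 1
UPLOAD_DONE   = 2

def request_upload(post)
  # A post that is already in flight must not be uploaded again;
  # this is what prevents the duplicate re-post.
  return if post.upload_state == UPLOADING

  post.upload_state = UPLOADING
  begin
    send_to_web_service(post)          # hypothetical network call
    post.upload_state = UPLOAD_DONE
  rescue StandardError
    post.upload_state = UPLOAD_FAILED  # UI can now offer "re-upload"
  end
end
```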
How about this: set SendToWebService = 1 for the post you are currently sending. If it goes through, leave it at 1 or delete the entry (depending on your implementation); if for some reason it fails to post, set SendToWebService back to 0. That way, while a post is in progress, it appears as if it has already been sent.
But if you want to be more transparent about the functionality, create another Boolean called InProgress or something, set it to 1 when you are sending a request, and do not let the user re-post entries whose InProgress is true (you can also show which ones are in the process of being sent in the UI). If the post succeeds, set SendToWebService = 1; if not, turn InProgress back to 0.
Hope that helped
If the user is viewing the list of entries from the database, then the easiest way would be:
When the post event happens, save the post in the database as sent to the server and start the background thread.
When the thread completes, check the result: if the upload failed, mark the entry in the DB as not uploaded; if it succeeded, do nothing.
By saving the state of the upload in the DB, the state will persist even if the user changes the screen or closes the app.

Ruby on Rails Listener

Is there such a thing as a listener in Ruby? I know there are session listeners for JSP, so I was wondering if there was something similar. Ultimately, I would like to create a listener for when sessions are created or destroyed, so I can display a count of how many users are currently online.
You cannot know how many users are online on a web server unless they hold a permanently active connection (for example something like long-polling or Comet).
Your best bet is to look into your session store and count the users that were active in the last X minutes.
A listener will not fire reliably when a user disconnects. For example, with 'login' and 'logout' logic you will not capture all logout actions, because internet connections can fail, users can forget to press the logout button, and so on.
Update: for Rails, check this forum post: http://railsforum.com/viewtopic.php?id=18480 which explains the solution in more detail.
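A common Rails implementation of the "active in the last X minutes" approach, assuming you add a last_seen_at column to users (names illustrative, Rails 3-style query):

```ruby
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_filter :touch_last_seen # before_action in newer Rails

  private

  def touch_last_seen
    # Stamp activity once per request for logged-in users.
    current_user.update_attribute(:last_seen_at, Time.now) if current_user
  end
end

# Anywhere you need the count: "online" = active in the last 5 minutes.
User.where('last_seen_at > ?', 5.minutes.ago).count
```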

Integrating twitter,facebook and other services in one single site

I need to develop an application that gathers all the statuses/messages from different services like Twitter, Facebook, etc. into my application, and when I post a message it should get pushed to all of those services as well. I am using Authlogic for authentication. Can anyone suggest what gems/plug-ins I can use?
I need API help to get all the tweets/messages displayed in my application, and also ways to post messages to the corresponding services from my application. Can anyone help me from a design point of view?
Walk through what you'd want to do in your head. Imagine the working site, imagine your webapp working before you start. So your user logs in (handled by authlogic) and sees a textbox called "What are you doing right now?". The user fills in a status message and clicks "post". The status message appears at the top of their previously posted messages.
Start with the easy part. Create a class that posts to two services. Use the twitter gem and rfacebook to post to two already defined services. In the future, you'll want to let the user associate services to their account and you would iterate through the associated services and post the message to each. Once you have this working, you can refactor or polish the UI a bit to round out this feature. I personally would do the "add a social media account to my profile" feature towards the end.
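A sketch of such a class (the adapter names are illustrative; the Twitter client uses the modern twitter gem API, and the Facebook side is stubbed because rfacebook's API varies by version):

```ruby
require 'twitter'

# One small adapter per service, each responding to #post.
class TwitterService
  def initialize
    @client = Twitter::REST::Client.new do |config|
      config.consumer_key        = ENV['TWITTER_CONSUMER_KEY']
      config.consumer_secret     = ENV['TWITTER_CONSUMER_SECRET']
      config.access_token        = ENV['TWITTER_ACCESS_TOKEN']
      config.access_token_secret = ENV['TWITTER_ACCESS_TOKEN_SECRET']
    end
  end

  def post(message)
    @client.update(message) # posts a tweet
  end
end

class FacebookService
  def post(message)
    # Stub: call rfacebook (or a newer Facebook gem) here.
  end
end

class CrossPoster
  def initialize(services = [TwitterService.new, FacebookService.new])
    @services = services
  end

  def post(message)
    # Later, iterate the services associated with the user instead.
    @services.each { |service| service.post(message) }
  end
end
```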
Harder, strangely enough, is reading the data, because you're going to have to figure out how to store it. You could store nothing, but I suspect you'd run into API limits from searching all the time (though you could design around this). I would keep a small cache of posts associated with the user's social media accounts. The data model would look like this:
A user has many social media accounts.
A social media account has many posts. (cache)
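In ActiveRecord terms, that model might look like this (illustrative):

```ruby
class User < ActiveRecord::Base
  has_many :social_media_accounts
end

class SocialMediaAccount < ActiveRecord::Base
  belongs_to :user
  has_many :posts # the local cache of content pulled from the service
end

class Post < ActiveRecord::Base
  belongs_to :social_media_account
end
```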
Of course, now you need to schedule the caching of the posts. This could be done manually, based on an event (like when the user logs in), or time-based. When the update happens, you load up the posts for that social media account, and the user sees the new posts the next time they hit the page. For real-time push to the client's browser while they stare at the screen, use faye (non-trivial) plus Ajax to pull the new posts to the top of the social media stream view.
The time-based option is tricky because you'd either have to run a cron job or have Rails handle it all with a gem like clockwork, but then you have to leave Rails running. I've also solved this by having a class in /lib do all the work, with a simple web call kicking off the update, but that wasn't in a multi-user use case, so it might not fit here. In any case, you'll want some nice reusable code for these problems, since update requests can come from many different sources.
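A minimal clockwork schedule for the time-based variant (the job name and refresh_posts! method are hypothetical):

```ruby
# clock.rb -- run with: clockwork clock.rb
require './config/environment' # load Rails so models are available
require 'clockwork'
include Clockwork

every(10.minutes, 'social_accounts.refresh') do
  # refresh_posts! is a stand-in; in practice, enqueue a background
  # job rather than doing the API calls inline here.
  SocialMediaAccount.find_each { |account| account.refresh_posts! }
end
```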
You'll also have to deal with the API limits. When pulling down content from twitter, you won't get everything. That will just have to be known by the user or you'll have to indicate a "break in time" somehow.
The UI should be pretty easy (functionally anyway), because you know which source the post/content is coming from. It'd be easy to throw a little icon next to the post to display which social media site it's coming from.
Anyway, good luck, sounds like a fun project.
