Twitter app development best practices?

Let's imagine an app which is not just another way to post tweets, but something like an aggregator that needs to store and access the tweets posted through it.
Since Twitter added a limit on API calls, the app should probably use some cache, and then it should periodically check whether a cached tweet has been deleted, etc.
How do you manage the limits? How do you think high-traffic apps survive without being whitelisted?

To name a few:
Aggressive caching. Don't call out to the API unless you have to.
I generally pull down as much data as I can upfront and store it somewhere. Then I operate off the local store until it runs out and needs to be refreshed.
Avoid doing things in real time. Queue up requests and make them on a timer (see the sketch after these tips).
If you're on Linux, cron jobs are the easiest way to do this.
Combine requests as much as possible.
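As a minimal sketch of the timer/cron approach, here is what a scheduled fetcher might look like, assuming the twitter gem and a local SQLite store; the screen name, table, schedule, and credentials are all illustrative:

#!/usr/bin/env ruby
# Sketch of a cron-driven fetcher. Run it e.g. every 15 minutes:
#   */15 * * * * /usr/bin/ruby /path/to/fetch_tweets.rb
require "twitter"
require "sqlite3"

client = Twitter::REST::Client.new do |config|
  config.consumer_key    = ENV["TWITTER_KEY"]     # assumed credentials
  config.consumer_secret = ENV["TWITTER_SECRET"]
end

db = SQLite3::Database.new("tweets.db")
db.execute("CREATE TABLE IF NOT EXISTS tweets (id INTEGER PRIMARY KEY, text TEXT)")

# One combined request for up to 200 tweets instead of many small ones,
# then operate off the local store until it needs refreshing.
client.user_timeline("some_user", count: 200).each do |tweet|
  db.execute("INSERT OR IGNORE INTO tweets VALUES (?, ?)", [tweet.id, tweet.text])
end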

Well, you have 100 requests per hour, so the question is how you balance them between the various types of requests. I think the best option is the TweetDeck approach, which lets you set the percentage used for each type of request and saves the rest for posting (because that is important too).
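To make the allocation idea concrete, here is a tiny sketch of splitting the hourly budget; the request types and percentages are purely illustrative:

HOURLY_LIMIT = 100
ALLOCATION = { timeline: 0.50, mentions: 0.20, search: 0.15, posting: 0.15 }

# How many requests each type may make in the current hour.
BUDGET = ALLOCATION.transform_values { |pct| (HOURLY_LIMIT * pct).floor }
# => { timeline: 50, mentions: 20, search: 15, posting: 15 }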
For caching, a database would be good, and I would simply ignore deleted tweets: once you have downloaded a tweet, it doesn't matter if it was deleted afterwards. If you really wanted to check, you could in theory just try to open the tweet's public page; if you get a 404, it has been deleted. That means no cost against the API.
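A minimal sketch of that 404 check, using only the standard library; the public status URL format and the helper name are assumptions, and in practice the page may redirect or require auth, so treat this as the theoretical no-API-cost check described above:

require "net/http"

# HEAD the public tweet page and treat a 404 response as "deleted".
def tweet_deleted?(screen_name, tweet_id)
  uri = URI("https://twitter.com/#{screen_name}/status/#{tweet_id}")
  response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
    http.head(uri.path)
  end
  response.code == "404"
end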

Related

Call API but not for every user

I would like to do something similar to this: Rails way to call external API from view?
But I don't want to call the API for every request from users because that would put a lot of unnecessary load on the API server and deplete my quota too fast.
Is there any way to cache the response from every 100th user and display the cached version to every other user or something of the sort? There's probably something already out there to do this, but I'm very new to Ruby and would appreciate some help.
There are numerous ways to achieve what you are looking for. I would advise against caching the response per N users, since traffic is heavier on some days and at some times of day than others. Instead, ask yourself what the behaviour of the method is. Does it pull some complex data, or is it just a simple count? If real-time information is not important, what is an acceptable timeframe for the information to be cached?
If the above can be answered with a time metric rather than every Nth visiting user, then you may want to use the built-in Rails.cache, defining the collection method in a helper and then calling it from a view:
def method_to_call
  # Serve the cached value if present; otherwise run the block,
  # store its result under "some_method", and expire it after an hour.
  Rails.cache.fetch("some_method", expires_in: 1.hour) do
    SomeThing.to_cache
  end
end
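Calling it from a view is then just (assuming the helper above is available to the view):

<%= method_to_call %>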
From here you can forecast your access to the API and be certain of your usage over a defined time period, without worrying about which times of day your website is busier, or about unexpected spikes in application usage.
If you do want to cache per Nth user visit, I would highly recommend Redis. It's a fantastic piece of software that is incredibly fast and scalable, and it's a key-value store that can hold the data around unique users and page views.
Another question to ask is whether you are caching per individual user or per individual page view. Based on the answer, you can store a user id or a page-view count and use conditional logic to refresh the cache each time the counter hits N, as sketched below. Performance should not be too much of an issue if you have the due diligence to clear the store every week or so, depending on the data stored.
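A rough sketch of the page-view variant, assuming the redis gem; the key names, the fetch_from_api helper, and the refresh interval are all assumptions:

require "redis"

REFRESH_EVERY = 100  # refresh the cached response on every 100th view

# Returns the (possibly cached) API response for a page view.
def cached_api_response(redis)
  views = redis.incr("api:page_views")  # atomic per-view counter
  if views % REFRESH_EVERY == 1 || !redis.exists?("api:response")
    redis.set("api:response", fetch_from_api)  # fetch_from_api is a made-up stand-in
  end
  redis.get("api:response")
end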
When you get to large scales of caching, you might have to think about the infrastructure of hosting a Redis instance. Will you need a dedicated server? Is Docker a viable option for production Redis? Can you host the Redis instance on the same instance as the application? All of these possible overheads favour the initial approach, but again it is dependent on your needs.

Lowering total requests per month on parse server Swift

I am currently building an app that will run on parse server on back4app. I wanted to know if there are any tips for lowering requests. I feel like my current app setup is not taking advantage of any methods to lower requests.
For example: when I call a Cloud Code function, is that one request even if the function has multiple queries in it? Can I use Cloud Code to lower requests somehow?
Another example: if I use the Parse Local Datastore rather than constantly getting data from the server, can that lower requests, or does it not really help because you would still need to sync the changes later? Or do all the changes get sent at once and count as one request?
Sorry, I am very new to how requests and back-end pricing are measured in general. I want to make sure I am as efficient as possible, in order to ship my app without going over budget.
Take a look at this link:
http://docs.parseplatform.org/ios/guide/#performance
Most of the tips there are useful both for performance and for the number of requests.
About your questions:
1) Cloud Code - each call to a Cloud Code function counts as a single request, no matter how many queries you run inside it.
2) Client-side cache - it will certainly reduce the total number of requests you make to the server.

Should I POST high-rate user actions to my server on a per-action basis or send the batch of events once the session is closed?

I'm building a site where users can watch a video and click as many times as they want to "like" it. It's a bit like Periscope's "Hearts" function for those who know it.
The viewers are watching the video in a web browser for now. Every "like" is written to a Heroku-hosted Redis instance, so the writes/reads are fairly cheap. However, there could potentially be a high rate of simultaneous input as many users watch a video at the same time.
In this scenario, I'm facing two options:
Send an event to the Redis instance every time the user "likes". Convenience: the "like" is stored right away with all relevant information. Inconvenience: lots of concurrent likes hitting the server.
Cache the "likes" locally and only send them to Redis once the session is over. Problem: the user can close their browser at any time (and potentially never return), so the "like" information could be lost permanently.
Any advice on which option is preferable?
Don't cache.
First, it's a really big complication as you won't know when the session is really over.
Second, a Redis increment is probably as fast as or faster than your cache. I bet your concern is Rails-side, not Redis.
You may eventually want to make another endpoint - maybe a simple Sinatra app - to simply handle likes, as sketched below. I've noticed autosuggest gems sometimes do this (for example) and it saves all the overhead of a Rails request.
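A minimal sketch of such a dedicated endpoint, assuming the sinatra and redis gems; the route and key names are made up:

require "sinatra"
require "redis"

redis = Redis.new

# One atomic, O(1) Redis increment per like; no Rails stack involved.
post "/videos/:id/like" do
  redis.incr("video:#{params[:id]}:likes").to_s
end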
If it is a successful app, the concern could be someone writing a script to 'like' continually. You may need to put in some throttling to allow only a limited number of requests over time; a sketch follows.
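One simple way to throttle, again assuming Redis; the limit, window, and key scheme are illustrative:

# Allow at most `limit` likes per user in each fixed window of `window` seconds.
def allowed_like?(redis, user_id, limit: 30, window: 60)
  key = "throttle:likes:#{user_id}"
  count = redis.incr(key)
  redis.expire(key, window) if count == 1  # start the window on the first hit
  count <= limit
end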

Getting most recent paths visited across sessions in Rails app

I have a simple Rails app with no database and no controllers. It uses High Voltage for routing queries, then uses JavaScript to go and get data using the params hash.
A typical URL looks like this:
http://example.com/?id=37ed660aa222e61ebbbc02db
I'd like to grab the ten unique URLs users have most recently visited and pass them to a view. Note that I said users, preferably across concurrent sessions.
Is there a way to retrieve this using ActiveSupport::Notifications or Production.log? Any examples, including where the code should best go, would be greatly appreciated!
I think Redis would be ideally suited to this. It's one of the NoSQL key-value stores, but its support for values that are ordered lists, queues, etc. should make it easy to store unique URLs in a FIFO list as they are visited, limit the size of that list (discarding URLs at the 'old' end), and retrieve the most recent N URLs to pass to your view; see the sketch below. Your list should stay small enough that it all stays in memory and is very fast. You might be able to do this with memcached or Mongo or another store as well; I think it would be best, though, if the solution kept the stored values in memory.
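A small sketch with the redis gem; the key name is an assumption:

require "redis"

# Record a visit, keeping a capped FIFO list of the ten most recent unique URLs.
def record_visit(redis, url)
  redis.lrem("recent_urls", 0, url)  # remove any earlier occurrence (uniqueness)
  redis.lpush("recent_urls", url)    # newest URL goes to the head
  redis.ltrim("recent_urls", 0, 9)   # discard everything past the tenth entry
end

# The ten most recent URLs, newest first, ready to pass to a view.
def recent_urls(redis)
  redis.lrange("recent_urls", 0, 9)
end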
If you aren't already using Redis (or similar), it might seem like overkill to set it up and maintain it just for this feature. But you can make it pay for itself by also using it for caching, background job processing (Resque/Sidekiq), and probably other things in your app.

iOS App Offline and synchronization

I am trying to build an offline synchronization capability into my iOS app and would like to get some feedback/advice from the community on the strategy and best practices to follow. The app details are as follows:
The app shows a digital catalog to users and allows them to perform actions like creating and placing orders, among others.
Currently the app only works when online, and we have APIs for all actions like viewing the catalog, creating/placing orders which return JSON data.
We would like to provide offline/synchronization capability to users, through which users can view the catalog and create/place orders while offline, and when they come online the order details will be synchronized and updated to our server.
We would also like to pull the latest data from the server, and have the app keep itself up to date in case of catalog changes or order changes that happened at the Server while the app was offline.
Can you guys help me come up with the best design and approach for handling this kind of functionality?
I did something similar at the beginning of this year. After reading about NSOperationQueue and NSOperation, I took a straightforward approach:
Whenever an object is changed/added/... in my local database, I add a new "sync" operation to the queue, and I don't care whether the app is online or offline (I added a reachability observer which either suspends the queue or puts it back to work; of course, I re-queue if an error occurs, e.g. the network is lost during a sync). The operation itself reads/writes the database and does the networking. My view controllers use an NSFetchedResultsController (with delegate = self) to get callbacks on changes. In some cases I needed some extra local data (it is about counting objects), for which I used NSManagedObjectContextObjectsDidChangeNotification.
Furthermore, I used multi-context Core Data, which sounded quite reasonable to use (I have only two contexts).
To get notified about changes from your server, I believe that iOS 7 has something new for you.
On the server side, you should read a little about the actual approach you want to take, e.g. Data Synchronization by Dan Grover or Developing Android REST Client Applications (of course, there are many more good articles out there).
Caution: you might be disappointed if you expect an easy solution. Your requirement is not unusual, but the solution may become more complex than you expect, depending on the "business rules" and other reasonable requirements. If you restrict your requirements intelligently, you may find a solution you can implement yourself; otherwise, you may also consider using a commercial product.
I could imagine that if you design the business logic so that it takes the offline state into account and exposes it explicitly, you may find a solution you can implement yourself with moderate effort. What I mean is, for example: when a user creates an order, it is initially in a "not committed" state. The order is only committed when there is access to the server and the server gives the "OK" that this order can actually be placed by this user. The server may also deny the order, sending corresponding messages to the user.
There are probably quite a few subtle issues that may arise due to the requirement of eventual consistency.
See also this question, which contains pointers to solutions from commercial products; their web sites give valuable information about the complexity of the problem and how it can be solved.
