I want to make a Twitter app that tweets at exactly 12:00:00, reducing all delays to a minimum (trying to be faster than just scheduling the tweet in TweetDeck).
Any ideas on which language to use, which Twitter API (Streaming, REST, ...), or suggestions for possible algorithms?
You simply cannot anticipate the duration of your HTTP request, nor the time Twitter will take to process it.
Usually a request is processed in less than a second, so I would personally initiate the HTTP request at 11:59:50 if your connection is good enough.
I am the developer of the Tweetinvi library, which is written in C# and lets you publish a tweet in a single line of code, but I don't think any library or algorithm can anticipate how long Twitter will take to process your request.
What you could potentially do is gather some statistics: store the DateTime just before you invoke the line of code that publishes your tweet; when the tweet has been published, look at the creation DateTime in the JSON response; then compare the two DateTimes.
Repeat this operation multiple times and run some statistics on the results to improve your chance of publishing at the exact time.
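The measuring idea itself is language-agnostic (the answer above is about C#). Here is a minimal sketch of it in Swift, where publishTweet is a hypothetical helper, stubbed so the example runs, standing in for a real API call that returns the created_at date parsed from Twitter's JSON response:

```swift
import Foundation

// Hypothetical helper: publishes a tweet and returns the creation Date
// parsed from the "created_at" field of Twitter's JSON response.
// Stubbed here so the sketch runs; real code would call the Twitter API.
func publishTweet(_ text: String) -> Date {
    Thread.sleep(forTimeInterval: 0.4) // simulate ~400 ms round-trip latency
    return Date()
}

// Measure the gap between "request initiated" and "tweet created"
// over several runs; the average becomes your send-ahead offset.
var gaps: [TimeInterval] = []
for i in 1...5 {
    let sentAt = Date()
    let createdAt = publishTweet("calibration tweet \(i)")
    gaps.append(createdAt.timeIntervalSince(sentAt))
}

let averageGap = gaps.reduce(0, +) / Double(gaps.count)
print("Average publish latency: \(averageGap) s")
// Fire the real request at (12:00:00 minus averageGap).
```

Keep in mind that Twitter's created_at timestamps only have one-second granularity, so the estimate will be coarse.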
Chrome gives a granular breakdown of the request lifecycle for a single resource.
The lifecycle shows how much time is spent in the following categories:
Queuing
Stalled
Request sent
Waiting (time to first byte, TTFB)
Content Download
I was wondering: does AFNetworking give us the ability to capture these categories?
I tried digging, but there is not much information about this.
I have calculated the complete duration from request sent to response received, but how could we get such a granular breakdown through AFNetworking in order to improve our app's performance?
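One avenue worth noting: recent versions of AFNetworking are built on top of NSURLSession, and since iOS 10 the URLSessionTaskMetrics API exposes essentially the same per-transaction breakdown at that layer. A minimal sketch using plain URLSession (not AFNetworking-specific; the URL is a placeholder):

```swift
import Foundation

// Prints a Chrome-style timing breakdown for every finished task.
final class MetricsLogger: NSObject, URLSessionTaskDelegate {
    func urlSession(_ session: URLSession, task: URLSessionTask,
                    didFinishCollecting metrics: URLSessionTaskMetrics) {
        // Milliseconds between two optional timestamps, or "n/a".
        func ms(_ start: Date?, _ end: Date?) -> String {
            guard let start = start, let end = end else { return "n/a" }
            return String(format: "%.1f ms", end.timeIntervalSince(start) * 1000)
        }
        for t in metrics.transactionMetrics {
            print("DNS lookup:       \(ms(t.domainLookupStartDate, t.domainLookupEndDate))")
            print("TCP connect:      \(ms(t.connectStartDate, t.connectEndDate))")
            print("TLS handshake:    \(ms(t.secureConnectionStartDate, t.secureConnectionEndDate))")
            print("Request sent:     \(ms(t.requestStartDate, t.requestEndDate))")
            print("Waiting (TTFB):   \(ms(t.requestEndDate, t.responseStartDate))")
            print("Content download: \(ms(t.responseStartDate, t.responseEndDate))")
        }
    }
}

let session = URLSession(configuration: .default,
                         delegate: MetricsLogger(), delegateQueue: nil)
session.dataTask(with: URL(string: "https://example.com")!) { _, _, _ in }.resume()
```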
I am currently building an app that will run on Parse Server on Back4App. I wanted to know if there are any tips for lowering requests. I feel like my current app setup is not taking advantage of any methods to lower requests.
For example: when I call a Cloud Code function, is that one request even if the function runs multiple queries inside it? Can I use Cloud Code to lower requests somehow?
Another example: if I use the Parse Local Datastore rather than constantly getting data from the server, can that lower requests, or does it not really help because you would still need to sync the changes later? Or do all the changes get sent at once and count as one request?
Sorry, I am very new to how requests and back-end pricing are measured in general. I want to be as efficient as possible so I can ship my app without going over budget.
Take a look at this link:
http://docs.parseplatform.org/ios/guide/#performance
Most of the tips there are useful both for performance and for reducing the number of requests.
About your questions:
1) Cloud Code: each call to a Cloud Code function counts as a single request, no matter how many queries you run inside it.
2) Client-side cache: it will certainly reduce the total number of requests you make to the server.
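To illustrate both points in code, here is a short Swift sketch against the Parse iOS SDK; "dailySummary" is a hypothetical Cloud Code function name, and "GameScore" is the example class from the Parse docs:

```swift
import Parse

// 1) One round trip, no matter how many queries the function runs server-side.
PFCloud.callFunction(inBackground: "dailySummary", withParameters: [:]) { result, error in
    if let error = error {
        print("Cloud call failed: \(error.localizedDescription)")
    } else {
        print("Summary: \(String(describing: result))")
    }
}

// 2) Serve cached results when available; only a cache miss hits the server.
let query = PFQuery(className: "GameScore")
query.cachePolicy = .cacheElseNetwork
query.findObjectsInBackground { objects, error in
    print("Fetched \(objects?.count ?? 0) objects")
}
```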
I have an application that does a lot of computation on a few pages (requests). The web interface sends an AJAX request, and the computation sometimes takes about 2-5 minutes. The problem is that by that time the AJAX request times out.
We can certainly increase the timeout on the web portal, but that doesn't sound like the right solution. Also, to improve performance, I have already:
Removed N+1/Duplicate queries
Implemented Caching
What else could be done here to reduce the calculation time?
Also, if it still takes too long, I was thinking of the following solutions:
Do the computation beforehand and store it in the DB, so when the actual request comes there is no need for calculation. (I'm apprehensive about this approach, since we would have to modify or erase and recalculate this data whenever the application logic changes.)
Load all the data into the cache when the application starts or when data gets modified. But the computation still has to be done the first time, and we can't keep the whole dataset in the cache from startup, so it would have to be cached on demand.
Maybe do something like an Angular promise, where the promise is fulfilled when the response comes from the server.
Is there any alternative to do this efficiently?
UPDATE:
Depending on user input, the calculation might finish in a few seconds, or it might take 2-5 minutes. The scenario: the user imports an Excel file, which is parsed and saved in the DB (this already happens in a background job). On another page, the user wants to see a report/analytics graph derived from a few calculations on the imported data. The calculation has to account for many factors, so I do not want to save the results in the DB (as noted above). Also, when the user requests the report/analytics graph, it would be a bad experience to tell them the graph will be shown after some time and that they'll get an email/notification.
The extremely typical solution is to enqueue a job for background processing, and return a job ID to the front-end. Your front-end can then poll for completion using that job ID, or you can trigger a notification such as an email to be sent to the user when the job completes.
There are a multitude of gems for this, and it is such a popular and accepted solution that Rails introduced its own ActiveJob for this exact purpose.
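On the client side, the polling half of that pattern is small. A minimal Swift sketch, where the GET /jobs/<id> endpoint and its {"status": ...} payload are assumptions rather than a real API:

```swift
import Foundation

// Poll a hypothetical job-status endpoint until the background job is done.
// Assumes GET https://example.com/jobs/<id> returns {"status": "pending" | "done"}.
func pollJob(id: String, every interval: TimeInterval = 5) {
    let url = URL(string: "https://example.com/jobs/\(id)")!
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard
            let data = data,
            let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
            let status = json["status"] as? String
        else { return }

        if status == "done" {
            print("Job \(id) finished; fetch and render the report now.")
        } else {
            // Not ready yet: try again after a short delay.
            DispatchQueue.global().asyncAfter(deadline: .now() + interval) {
                pollJob(id: id, every: interval)
            }
        }
    }.resume()
}

// Usage: pollJob(id: jobID) once the enqueue request returns the job ID.
```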
Here are a few possible solutions:
Optimize your tables with indexes to reduce data fetching time.
Preload all rows you'll be dealing with at the beginning, so you won't run a query each time you calculate something... it's faster/easier to call @things.select { |r| r.blah } than Thing.where(conditions).
Instead of all that, just do the computing in PL/SQL on the database side. Sure, it's not the same as writing Ruby code, but it could be faster.
And yes, cache the whole result set in Memcached or Redis or something (and expire it when something changes).
Run the calculation in the background (crontab?) and store the results in a JSON file somewhere, or cache the entire HTML file (if you're not localizing or anything).
PS: I'm doing 1, 2, and 3 combined with 5 (caching JSON results in Memcached and then pulling the array and formatting/localizing) for a few million records from about 12 tables... mainly sports data.
I have an app that has the locations of 10 different places.
Given your current location, the app should return the estimated arrival time for those 10 locations.
However, Apple has said that
Note: Directions requests made using the MKDirections API are server-based and require a network connection.
There are no request limits per app or developer ID, so well-written apps that operate correctly should experience no problems. However, throttling may occur in a poorly written app that creates an extremely large number of requests.
The problem is that they give no definition of what a well-written app is. Are 10 requests bad? Are 20 requests an extremely large number?
Has anyone built an app like this before who can offer some guidance? If Apple does begin throttling the requests, people will blame my app and not Apple. Some advice, please.
Hi, investigate the MKRoute class; it contains all the information you need.
This object contains the expectedTravelTime property.
You should also consider the LoadingThrottled error:
The data was not loaded because data throttling is in effect. This error can occur if an app makes frequent requests for data over a short period of time.
To prevent your requests from being throttled, reduce the number of requests.
Try to use completion handlers to know when a request has finished, and only then send another request (or cancel the previous one). From my experience, handle these just like regular network requests; just be sure you are not spamming unnecessary requests to the Apple API. But there is no 100% guarantee that Apple won't throttle your requests.
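As a sketch of that advice in Swift: request the ETA for each destination one at a time, issuing the next request only from the previous one's completion handler (the fetchETAs helper name is mine, not Apple's):

```swift
import MapKit

// Request ETAs strictly one at a time, so ten destinations never become
// ten simultaneous directions requests against Apple's servers.
func fetchETAs(from source: MKMapItem, to destinations: [MKMapItem],
               collected: [TimeInterval] = [],
               completion: @escaping ([TimeInterval]) -> Void) {
    guard let next = destinations.first else {
        completion(collected) // all destinations handled
        return
    }
    let request = MKDirections.Request()
    request.source = source
    request.destination = next
    request.transportType = .automobile

    MKDirections(request: request).calculateETA { response, error in
        var collected = collected
        if let response = response {
            collected.append(response.expectedTravelTime)
        } else {
            print("ETA request failed: \(String(describing: error))")
        }
        // Only now kick off the next request.
        fetchETAs(from: source, to: Array(destinations.dropFirst()),
                  collected: collected, completion: completion)
    }
}

// Usage: fetchETAs(from: currentLocation, to: places) { etas in /* render */ }
```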
In this use case I need to monitor Twitter's stream for tweets with certain hashtags and then pull those tweets out and store them. I am using Twitter4J and Twitter's Streaming API for this. The hashtags to monitor change frequently, so I would like to refresh the filter every 10 minutes or so. When I refresh, I simply pull all the new hashtags from the data layer and pass them to the filter query. My two questions:
Is there anything wrong with stopping the connection every 10 minutes and refreshing (in terms of Twitter's rate limits, etc.)?
Is there anything to prevent me from losing tweets posted during the short refresh pause?
Thanks in advance.
You should not reconnect any more often than once every ten minutes, or you may be rate limited. You can form your new connection before dropping your old connection, which should help avoid data loss. Note that you may only have one outstanding connection at a time.