How to effectively use CloudKit's daily transfer limit of 25MB - iOS

I'm creating a highly picture-oriented app that might end up using a lot of CKAssets, but I read that there is a 25MB limit on daily data transfer per user. My question is: is this quota transferable between users? If one user uses 0MB, can another person use 50MB?
A 25MB limit on data transfer seems very small: if one picture is 100KB, a user could only work with 250 pictures per day at most. It just seems like such a drastic limitation. Thank you.

The data transfer limits for CloudKit are monthly and are based on the number of active users. You get 50MB/month per user with a minimum of 2GB.
The 50MB/month/user figure is only used to calculate the free quota; it is not an actual per-user limit, so if some users transfer 150MB and others transfer 0, that is fine. You only pay if your total transfer across all users exceeds 50MB × number of users (or 2GB if you have fewer than 40 users).
In your question you quote 25MB/day, but the limit is actually monthly, so if every user used their 50MB a month, at the ~100KB per picture you mention that works out to about 16 images per day.
Extra data is fairly inexpensive, though. Say you had 40 users and they each transferred 50 images per day: that would be about 6GB per month, of which only the 4GB above your 2GB free quota is billed, costing you roughly $0.40.
Note that the maximum free transfer is capped at 200TB/month, so above 4,000,000 active users the 50MB/user allowance no longer scales; the free transfer works out to less on a per-user basis, but the 200TB is still applied as an aggregate across all users.
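As a quick sanity check of those figures, here is a small sketch of the arithmetic in Swift. The $0.10/GB overage rate is inferred from the $0.40 example above and may not match CloudKit's current pricing, so treat it as an assumption.

```swift
// Back-of-the-envelope CloudKit transfer cost estimate.
// The overage rate is an assumption inferred from the $0.40 example above.
let activeUsers = 40.0
let freeQuotaGB = max(2.0, activeUsers * 50.0 / 1000.0)        // 50MB/user, 2GB minimum
let imagesPerUserPerDay = 50.0
let imageSizeMB = 0.1                                          // ~100KB per picture
let monthlyTransferGB = activeUsers * imagesPerUserPerDay * imageSizeMB * 30 / 1000.0
let overageGB = max(0.0, monthlyTransferGB - freeQuotaGB)      // 6GB - 2GB = 4GB
let assumedRatePerGB = 0.10                                    // USD, assumed
print("Transfer \(monthlyTransferGB) GB, overage \(overageGB) GB, cost ≈ $\(overageGB * assumedRatePerGB)")
```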

Related

YouTube API quota has been reduced

I have encountered some problems using the YouTube API recently, and I would like to ask whether anyone has had the same problem and whether anyone has a solution.
Previously I had quotas of 100 million and 50 million units per day, but I just found out that the quotas of some keys with less usage have decreased a lot (one with 500 million decreased to 300K, and ones with 1 million decreased to 600K and 10K).
The information I found all relates to projects from around 2016, where the quotas were all 10K, or in other cases the whole project had been shut down so the quota was 0. None of those match the problem we have encountered, so I would like to ask if anybody knows why this happens and how we can prevent or resolve it. Thanks a lot!
The default quota limit is now 10,000 units:
Projects that enable the YouTube Data API have a default quota
allocation of 10 thousand units per day, an amount sufficient for the
overwhelming majority of our API users...
https://developers.google.com/youtube/v3/getting-started#quota
Previously they gave 1 million units to new accounts. My own API projects were each reduced from 1 million to 10K as well, because I never used even 5K units. You can ask for more units if you reach the quota limit, in your Developer Console under IAM & Admin > Quotas > EDIT QUOTAS.
The only way to increase your quota is to fill out this form https://support.google.com/youtube/contact/yt_api_form and submit your request to YouTube. Then expect to wait at least two weeks.
Be careful: if your app doesn't respect the YouTube TOS, they will terminate it.
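To put the default 10,000 units per day in perspective, here is a rough capacity sketch. The unit costs are approximate assumptions and should be verified against Google's quota cost documentation before relying on them.

```swift
// Rough capacity estimate for a 10,000-unit/day quota.
// Unit costs below are approximate assumptions; check the official quota
// calculator (developers.google.com/youtube/v3/determine_quota_cost).
let dailyQuota = 10_000
let searchListCost = 100      // search.list is among the most expensive read calls
let videosListCost = 1        // simple list/read calls typically cost 1 unit
print("≈ \(dailyQuota / searchListCost) search.list calls per day")     // ≈ 100
print("≈ \(dailyQuota / videosListCost) videos.list calls per day")     // ≈ 10,000
```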

Effect of changing memory limit

I have been having trouble with a page crashing on my website and I have worked out that it is because the memory limit is too low. I have read an article (http://codingcyber.com/how-to-increase-php-memory-limit-on-godaddy-hosting-882/#) and decided to buy more RAM and I am about to increase the memory limit.
Before I do, I just want to know what will happen if multiple users are using the site all at once. I guess a clearer way to explain my question is with an analogy.
If I have 2048MB of RAM and memory_limit = 256MB, what happens if 20 users all log in at once and each uses 100MB of RAM? I imagine that since no one has exceeded the 256MB limit and the total RAM used (100MB × 20 users = 2000MB) is just under the 2048MB limit, the site should be OK, but I just want to confirm that this is correct (I've never done anything like this before).
Thanks for confirming or correcting.
I just spoke with GoDaddy and they explained that it's not a problem to have extra people on the site as long as the TOTAL RAM used stays below the server's limit. If one person goes over the per-request memory_limit it will only affect their request; it's only if usage as a whole exceeds the available RAM (unlikely in my case) that we will have problems...
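To make the distinction concrete, here is a back-of-the-envelope sketch using the numbers from the question. It only illustrates the arithmetic: memory_limit is a per-request ceiling, while physical RAM is the shared pool.

```swift
// Back-of-the-envelope check: memory_limit caps each PHP request individually,
// while physical RAM is shared by all concurrent requests.
let totalRAMMB = 2048.0
let memoryLimitMB = 256.0        // per-request ceiling (php.ini memory_limit)
let typicalUsageMB = 100.0       // what a request in this scenario actually uses

let worstCaseConcurrency = Int(totalRAMMB / memoryLimitMB)   // 8: every request at the ceiling
let typicalConcurrency = Int(totalRAMMB / typicalUsageMB)    // 20: requests at typical usage
print("Safe even in the worst case: \(worstCaseConcurrency) concurrent requests")
print("Safe at typical usage: \(typicalConcurrency) concurrent requests")
```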

Flurry / Google Analytics / Localytics bandwidth consumption on iOS

I'm choosing an analytics service for my iOS app. I want to track quite a lot of events, and the app I'm developing is going to be used outdoors, so there will be no Wi-Fi connection available and even the cellular connectivity may be of poor quality.
Analytics is the only thing in my app that requires network connectivity. I recently checked how much traffic it consumes, and it was much more than I expected: about 500KB for Google Analytics and about 2MB for Flurry, for just a two-minute session with a few hundred events. That seems very inefficient to me. (Flurry logs a few more parameters, but definitely not four times more.)
I wonder: has anybody compared other popular analytics solutions for their bandwidth consumption? Which one is the slimmest?
Thank you
If you don't need real-time data (and you probably don't for an outdoor app), you can get the best network efficiency from Google Analytics by dispatching more hits at once, so they benefit from batching and compression. To do that, set the dispatch interval to 30 minutes.
The maximum size of an uncompressed hit that Analytics will accept is about 8KB, so each hit should be smaller than that. With compression, an individual hit typically shrinks to about 25% of its original size, assuming mostly ASCII data. To generate 500KB of traffic you would have to be sending a few hundred hits individually; with batching and compression the same hits shrink much further. A batch of 20 hits will usually compress to less than 10% of its uncompressed size, or at most about 800 bytes per hit. For further network savings, send less data per event or fewer events.
By the way, Analytics has a rate limit of 60 tokens, replenished at a rate of 1 hit every 2 seconds, so if you are sending a few hundred events in a short period of time your data is likely being rate limited.
https://developers.google.com/analytics/devguides/collection/ios/limits-quotas#ios_sdk
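If you go with Google Analytics, the 30-minute dispatch interval is a one-line setting. A minimal sketch, assuming the legacy Google Analytics iOS SDK and its GAI singleton (the import/bridging will depend on how you add the SDK to your project):

```swift
// Legacy Google Analytics iOS SDK (GAI). Depending on how the SDK is added,
// this may come in through a bridging header rather than a Swift import.
import GoogleAnalytics

// Queue hits locally and dispatch them every 30 minutes so they are sent in
// larger, better-compressed batches (the default is ~2 minutes on iOS).
GAI.sharedInstance().dispatchInterval = 1800   // seconds

// Optionally force a dispatch at a convenient moment, e.g. when the app is
// about to go to the background, so queued hits are not held too long.
GAI.sharedInstance().dispatch()
```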

What is the maximum database size (50MB, 100MB, or 150MB) that can be saved in the local datastore on iOS using the Parse API?

I need to store as much data as possible in the local datastore using the Parse API in an iOS hybrid application. Can anyone tell me the maximum amount of data that can be stored in the local datastore with the Parse API?
Thank you,
Madhav
Think about Parse as a business: how does Parse make money? From queries and API request limits. If you go over a threshold of API requests per second, they charge you fees; that's how they make money, so the more you use, the better for them. Essentially, if you surpass their 30 requests per second you may be subject to fees. Also remember that your app bundle can be as large as 2GB, but don't neglect your core audience by forcing them to download a 2GB app when you can fetch and update information as needed. That said, any device or computer, regardless of its capacity, has limits where memory and disk space are concerned. Parse is an online resource, and you should use it as one. PFQueryTableViewController is a powerful tool for Parse-related work, and you should take advantage of it.
You can also set limits on queries with Parse, which is a good thing for those who like to stay within the free tier.
Also, here is a reference to your same question worth reviewing:
What is the maximum storage limit for the Parse local datastore on Android
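For reference, enabling the local datastore, pinning objects, and capping query sizes are all one-liners in the Parse iOS SDK. A minimal sketch (the "Photo" class, keys, and credentials are made up for illustration):

```swift
import Parse

// Enable the local datastore before setting up Parse; pinned objects are then
// stored on the device (bounded only by the device's own disk space).
Parse.enableLocalDatastore()
Parse.setApplicationId("YOUR_APP_ID", clientKey: "YOUR_CLIENT_KEY")

// Pin an object for offline use.
let photo = PFObject(className: "Photo")
photo["title"] = "Sunset"
photo.pinInBackground()

// Query the local datastore instead of the network, with an explicit limit
// to keep result sets (and API usage) small.
let query = PFQuery(className: "Photo")
query.fromLocalDatastore()
query.limit = 100
query.findObjectsInBackground { objects, error in
    print("Found \(objects?.count ?? 0) locally pinned photos")
}
```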

How do different visitor metrics relate?

Hypothetically, let's say someone tells you to expect X (100,000, or something like that) unique visitors per day as a result of a successful marketing campaign.
How does that translate to peak requests/second? Peak simultaneous requests?
Obviously this depends on many factors, such as typical number of pages requested per user session or load time of a typical page, or many other things. These are other variables Y, Z, V, etc.
I'm looking for some sort of function or even just a ratio to estimate these metrics. Obviously for planing out the production environment scalability strategy.
This might happen on a production site I'm working on really soon. Any kind of help estimating these is useful.
Edit: (following indication that we have virtually NO prior statistics on the traffic)
We can therefore forget about the bulk of the plan laid out below and get directly into the "run some estimates" part. The problem is that we'll need to fill in the model's parameters using educated guesses (or plain wild guesses). The following is a simple model whose parameters you can tweak based on your understanding of the situation.
Model
Assumptions:
a) The distribution of page requests follows the normal distribution curve.
b) Considering a short period during peak traffic, say 30 minutes, the number of requests can be considered to be evenly distributed.
This could be [somewhat] incorrect: for example we could have a double curve if the ad campaign targets multiple geographic regions, say the US and the Asian markets. Also the curve could follow a different distribution. These assumptions are however good ones for the following reasons:
it would err, if at all, on the "pessimistic" side, i.e. over-estimating peak traffic values. This pessimistic outlook can be reinforced further by using a slightly smaller standard deviation value. (We suggest using 2 to 3 hours, which would put 68% and 95% of the traffic over periods of 4 and 8 hours with a 2h std dev, or 6 and 12 hours with a 3h std dev, respectively.)
it makes for easy calculations ;-)
it is expected to generally match reality.
Parameters:
V = expected number of distinct visitors per 24 hour period
Ppv = average number of page requests associated with a given visitor session. (You may consider using the formula twice: once for "static" responses and once for dynamic responses, i.e. when the application spends time crafting a response for a given user/context.)
sig = std deviation in minutes
R = peak-time number of requests per minute.
Formula:
R = (V * Ppv * 0.0796)/(2 * sig / 10)
That is because, with a normal distribution and per the z-score table, roughly 3.98% of the samples fall within one tenth of a standard deviation on either side of the mean (the very peak), so almost 8% of the samples fall within a one-tenth-of-a-std-dev window centered on the peak. Assuming a relatively even distribution of requests during this short period, we simply divide by its length in minutes (2 * sig / 10).
Example: V = 75,000, Ppv = 12 and sig = 150 minutes (i.e. 68% of traffic is assumed to come over 5 hours, 95% over 10 hours, and the remaining 5% over the other 14 hours of the day).
R = 2,388 requests per minute, i.e. about 40 requests per second. Rather heavy, but "doable" (unless the application takes 15 seconds per request...).
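The formula is easy to drop into a few lines of code so you can play with other parameter values; a small sketch reproducing the example above:

```swift
// Peak requests-per-minute estimate from the model above.
// V   = distinct visitors per 24h, Ppv = page requests per visit,
// sig = standard deviation of the traffic bell curve, in minutes.
func peakRequestsPerMinute(V: Double, Ppv: Double, sig: Double) -> Double {
    // ~7.96% of a normal distribution falls within ±0.1 std dev of the mean;
    // that traffic is assumed evenly spread over a window of 2*sig/10 minutes.
    return (V * Ppv * 0.0796) / (2 * sig / 10)
}

let rpm = peakRequestsPerMinute(V: 75_000, Ppv: 12, sig: 150)
print("Peak ≈ \(Int(rpm)) requests/min ≈ \(Int((rpm / 60).rounded())) requests/s")
// Prints: Peak ≈ 2388 requests/min ≈ 40 requests/s
```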
Edit (Dec 2012):
I'm adding here an "executive summary" of the model proposed above, as provided in comments by full.stack.ex.
In this model, we assume most people visit our system at, say, noon. That's the peak time. Others jump ahead or lag behind, the farther out the fewer, and nobody at midnight. We chose a bell curve that reasonably covers all requests within the 24-hour period: with about 4 sigma on the left and 4 on the right, "nothing" significant remains in the long tail. To simulate the very peak, we cut out a narrow strip around noon and count the requests there.
It is noteworthy that this model, in practice, tends to overestimate the peak traffic, and may prove more useful for estimating a "worst case" scenario than for predicting the more plausible traffic patterns. Tentative suggestions to improve the estimate are
to extend the sig parameter (to acknowledge that the effective traffic period of relatively high traffic is longer)
to reduce the overall number of visits for the period considered, i.e. reduce the V parameter by, say, 20% (to acknowledge that roughly that many visits happen outside of any peak time)
to use a different distribution such as say the Poisson or some binomial distribution.
to consider that there are a number of peaks each day, and that the traffic curve is actually the sum of several normal (or other distribution) functions with similar spread, but with a distinct peak. Assuming that such peaks are sufficiently apart, we can use the original formula, only with a V factor divided by as many peaks as considered.
[original response]
It appears that your immediate concern is how the server(s) may handle the extra load... a very worthy concern ;-). Without distracting you from this operational concern, consider that the process of estimating the scale of the upcoming surge also provides an opportunity to prepare yourself to gather more and better intelligence about the site's traffic, during and beyond the ad campaign. Such information will in time prove useful for making better estimates of surges, etc., but also for guiding some of the site's design (for commercial efficiency as well as for improving scalability).
A tentative plan
Assume qualitative similarity with existing traffic.
The ad campaign will expose the site to a population (type of user) distinct from its current visitor/user population: different situations select different subjects. For example, the "ad campaign" visitors may be more impatient, focused on a particular feature, or concerned about price, compared with the "self-selected?" visitors. Nevertheless, for lack of any other supporting model and measurement, and for the sake of estimating load, the general principle could be to assume that the surge users will on the whole behave similarly to the self-selected crowd. A common approach is to "run the numbers" on this basis and to use educated guesses to slightly bend the coefficients of the model to accommodate a few qualitative distinctions.
Gather statistics about existing traffic
Unless you readily have better information for this (e.g. Tealeaf, Google Analytics...), your source for such information may simply be the web server's logs. You can then build some simple tools to parse these logs and extract the statistics listed below (a minimal parsing sketch follows the list). Note that these tools will be reusable for future analysis (e.g. of the campaign itself); also look for opportunities to log more or different data, without significantly changing the application!
Average, Min, Max, Std Dev. for
number of pages visited per session
duration of a session
percentage of 24-hour traffic for each hour of a work day (exclude weekends and the like, unless of course this is a site that receives much of its traffic during these periods). These percentages should be calculated over several weeks at least, to remove noise.
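Here is a minimal sketch of such a log-parsing tool, assuming an access log in the common/combined log format; the path and field positions are assumptions you would adapt to your server.

```swift
import Foundation

// Minimal sketch: hourly request distribution from an access log in the
// common/combined log format (e.g. "... [12/Oct/2012:13:55:36 -0700] ...").
// The path and format are assumptions; adapt to your server's actual log.
let logPath = "/var/log/apache2/access.log"
guard let log = try? String(contentsOfFile: logPath, encoding: .utf8) else { exit(1) }

var hits = [Int: Int]()   // hour of day -> request count
for line in log.split(separator: "\n") {
    // The timestamp sits between '[' and ']'; the hour is the field after the first ':'.
    guard let open = line.firstIndex(of: "["), let close = line.firstIndex(of: "]") else { continue }
    let stamp = line[line.index(after: open)..<close]            // 12/Oct/2012:13:55:36 -0700
    let parts = stamp.split(separator: ":")
    guard parts.count > 1, let hour = Int(parts[1]) else { continue }
    hits[hour, default: 0] += 1
}

let total = hits.values.reduce(0, +)
for hour in 0..<24 {
    let count = hits[hour] ?? 0
    let pct = total > 0 ? 100.0 * Double(count) / Double(total) : 0
    print(String(format: "%02d:00  %6d hits  %5.1f%%", hour, count, pct))
}
```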
"Run" some estimates:
For example, start with a peak-use estimate, using the peak hour(s) percentage, the average daily session count, the average number of page hits per session, etc. This estimate should take into account the stochastic nature of traffic. Note that in this phase you don't have to worry about the impact of queueing effects; instead, assume that the service time relative to the request period is low enough. Therefore just use a realistic estimate (or rather a value informed by the log analysis for these very-high-usage periods) for the way the probability of a request is distributed over short periods (say, 15 minutes).
Finally, based on the numbers you obtain in this fashion, you can get a feel for the kind of sustained load this would represent on the server, and plan to add resources or to refactor part of the application. Also (very important!) if the outlook is for sustained at-capacity load, start running the Pollaczek-Khinchine formula, as suggested by ChrisW, to get a better estimate of the effective load.
For extra credit ;-) consider running some experiments during the campaign, for example by randomly serving a distinct look or behavior for some of the pages visited, and measuring the impact this may have (if any) on particular metrics (registrations for more info, orders placed, number of pages visited...). The effort associated with this type of experiment may be significant, but the return can be significant as well, and if nothing else it may keep your "usability expert/consultant" on his/her toes ;-). You'll obviously want to define such experiments with the proper marketing/business authorities, and you may need to calculate ahead of time the minimum percentage of users to whom the alternate site would be shown, to keep the experiment statistically representative. It is indeed important to know that the experiment doesn't need to be applied to 50% of the visitors; one can start small, just not so small that any variations observed may be due to randomness.
I'd start by assuming that "per day" means "during the 8-hour business day", because that's a worse case without perhaps being unnecessarily worst-case.
So if you're getting an average of 100,000 in 8 hours, and if the time at which each one arrives is random (independent of the others) then in some seconds you're getting more and in some seconds you're getting less. The details are a branch of knowledge called "queueing theory".
Assuming that the Pollaczek-Khinchine formula is applicable, then because your service time (i.e. CPU time per request) is quite small (i.e. less than a second, probably), therefore you can afford to have quite a high (i.e. greater than 50%) server utilization.
In summary, assuming that the time per request is small, you need a capacity that's higher (but here's the good news: not much higher) than whatever's required to service the average demand.
The bad news is that if your capacity is less than the average demand, then the average queueing delay is infinite (or more realistically, some requests will be abandoned before they're serviced).
The other bad news is that when your service time is small, you're sensitive to temporary fluctuations in the average demand, for example ...
If the demand peaks during the lunch hour (i.e. isn't the same average demand as during other hours), or even if for some reason it peaks during a 5-minute period (during a TV commercial break, for example)
And if you can't afford to have customers queueing for that period (e.g. queueing for the whole lunch hour, or e.g. the whole five-minute commercial break)
... then your capacity needs to be enough to meet those short-term peak demands. OTOH you might decide that you can afford to lose the surplus: that it's not worth engineering for the peak capacity (e.g. hiring extra call centre staff during the lunch hour) and that you can afford some percentage of abandoned calls.
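To make the utilization argument above concrete, here is a small sketch using the Pollaczek-Khinchine mean-wait formula for an M/G/1 queue. The arrival rate and service time are made-up illustrative values (100,000 visitors over an 8-hour day, one request each, 100ms of service per request), not figures from the question.

```swift
import Foundation

// Rough illustration: mean queueing delay via the Pollaczek-Khinchine formula
// for an M/G/1 queue. All inputs below are made-up illustrative values.
let arrivalsPerSecond = 100_000.0 / (8 * 3600)         // ≈ 3.47 requests/s
let meanServiceTime = 0.100                             // seconds of service per request
let cvSquared = 1.0                                     // squared coeff. of variation (1.0 = exponential service)

let utilization = arrivalsPerSecond * meanServiceTime   // ρ ≈ 0.35 on a single server
let meanServiceTimeSquared = meanServiceTime * meanServiceTime * (1 + cvSquared)
// P-K: mean time spent waiting in the queue, excluding the service itself.
let meanWait = arrivalsPerSecond * meanServiceTimeSquared / (2 * (1 - utilization))

print(String(format: "Utilization ρ = %.2f, mean queueing delay ≈ %.0f ms",
             utilization, meanWait * 1000))             // ≈ 53 ms at ρ ≈ 0.35
```

As the answer above notes, the delay stays modest while utilization is well below 1, but it blows up as the arrival rate approaches the service capacity; rerun the sketch with a higher arrival rate or a longer service time to see that effect.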
That will depend on the marketing campaign. For instance, a TV ad will bring a lot of traffic at once, while a newspaper ad's traffic will be spread out more over the day.
My experience with marketing types has been that they just pull a number from where the sun doesn't shine, typically higher than reality by at least an order of magnitude.
