I would like to understand the CloudKit free usage calculation, but I can't.
Could anyone explain what 40 requests per second (plus 10 per 100,000 users) actually means? I couldn't find any definition of what counts as a request. If I had two apps and each app pinged my CloudKit container at the same moment, would that count as two requests per second for that moment? How do I know how to limit the requests in my apps, and how do I queue requests so they can be retried later, once the server is back under its limit?
What about the 2GB of data transfer (50MB per user)? How should I understand these 50MB: per second, per day, or forever? What happens if one user of one of my apps uses 50MB of traffic?
How do I limit my app and still have good client-server communication? Will I get an error when the limit is reached, rather than being automatically charged by Apple?
I really like the programming ease of CloudKit, but I'm scared that it could all go wrong and I would get charged because I misunderstood the limits. It is really hard for me to imagine how the usage is calculated.
I think your biggest concern will be quelled by knowing that you can set usage limits on these services. If you hit a limit, the service returns an error, and you can handle that error in your app.
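For a sense of what that error handling might look like, here is a minimal sketch (the retry strategy is my own assumption, not anything Apple prescribes; the error codes and retryAfterSeconds property are real CKError API). It also answers the question's "queue it for later" ask by re-scheduling the save after the server-suggested delay:

    import CloudKit

    // Save a record; if CloudKit reports we are over a rate or quota limit,
    // re-queue the save for the delay the server suggests.
    func save(_ record: CKRecord, to database: CKDatabase) {
        database.save(record) { _, error in
            guard let ckError = error as? CKError else { return } // saved, or a non-CloudKit error
            switch ckError.code {
            case .requestRateLimited, .quotaExceeded, .zoneBusy:
                let delay = ckError.retryAfterSeconds ?? 30  // server-suggested back-off
                DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
                    save(record, to: database)               // retry once the limit clears
                }
            default:
                print("Unrecoverable CloudKit error: \(ckError)")
            }
        }
    }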
40 requests per second is across all users and devices. If you have 3,600 users and they all pinged the server once per hour, that would average out to about 1 request per second. While that won't be enough to build a service like Facebook, Instagram, or Twitter, it would probably be sufficient for fetching weather data, a daily schedule, or food truck locations. For up to 4,000,000 users, the free tier will cover each user checking at most once every three hours, assuming an even distribution.
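To put numbers on that last claim (a back-of-the-envelope check, using the 40-plus-10-per-100,000-users scaling from the question): at 4,000,000 users the free tier allows 40 + (4,000,000 / 100,000) * 10 = 440 requests per second, while 4,000,000 users each checking once every three hours generate 4,000,000 / (3 * 3,600) ≈ 370 requests per second, which fits under that limit.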
2GB of data transfer is for all of your users combined. Since the scaling doesn't take effect until you have 100,000 users, 2GB of data transfer is a pretty good amount to get you off the ground. Since it scales at 50MB per user, it's also easy to figure out how much you can let your app communicate with the server. If just one user goes over but you're still under the total allowance, you won't get charged. If you do go over, it's $0.10 per GB of data transfer.
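As a worked example of that overage rate: if your app's total transfer ran 20GB over the free allowance in a month, the bill would be 20 * $0.10 = $2.00.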
You could limit your app to only communicate so much until the user needs to pay for a premium service. If you allowed 50MB per user per month of data transfer, and let users know as they approached that limit that they would have to pay, then you would never go over. You could also run ads that effectively pay for the service to scale, allowing heavy users more privileges than passive users while still giving everyone a base allowance.
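As a rough sketch of how that client-side budgeting could look (every name here is hypothetical; CloudKit exposes no per-user meter, so the app has to count its own bytes):

    import Foundation

    // Hypothetical per-device transfer budget: count the bytes we sync and
    // stop (or prompt for the premium tier) once the monthly allowance is used.
    struct TransferBudget {
        static let monthlyLimitBytes = 50 * 1_024 * 1_024        // 50 MB/user/month
        private static let key = "bytesTransferredThisMonth"     // reset this counter each month

        static var used: Int { UserDefaults.standard.integer(forKey: key) }

        static func record(bytes: Int) {
            UserDefaults.standard.set(used + bytes, forKey: key)
        }

        static func canTransfer(bytes: Int) -> Bool {
            used + bytes <= monthlyLimitBytes
        }
    }

Before each sync you would check canTransfer(bytes:) against the payload size and, if it fails, show the upgrade prompt instead of hitting the network.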
The prices are at the bottom of this page and are fairly reasonable. You can definitely get a cheaper rate if you build things yourself on AWS, but you'd need to be in the millions of users and/or have high demands for that to be the better option.
Just a note about the 40 requests per second limit:
If this is correct, I sincerely don't understand why so many people in the forums say it is more than enough. For certain apps it might be enough to sync once per hour, but if you want to keep save-game files synchronized between devices, then 40 requests per second is ridiculous. A weather app? Don't make me laugh. In 90% of the apps out there you are going to need to insert, update, and delete; I wonder how many requests a simple update costs... I hope just one, but I seriously doubt it.
On Firebase there is no request limit like this one, and uploads are free; they only charge you for downloads.
I might be missing something about CloudKit, because I don't get that ridiculous limit.
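One thing worth adding here (a sketch, and only a partial answer to the worry above): CloudKit lets you batch many saves and deletes into a single CKModifyRecordsOperation, so a burst of save-game writes need not be one request per record.

    import CloudKit

    // Batch several record saves into one operation rather than
    // saving each record individually.
    func saveBatch(_ records: [CKRecord], to database: CKDatabase) {
        let op = CKModifyRecordsOperation(recordsToSave: records,
                                          recordIDsToDelete: nil)
        op.savePolicy = .ifServerRecordUnchanged   // surface conflicts instead of clobbering
        op.modifyRecordsCompletionBlock = { saved, _, error in
            if let error = error {
                print("Batch save failed: \(error)")
            } else {
                print("Saved \(saved?.count ?? 0) records in one operation")
            }
        }
        database.add(op)
    }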
If you set up OAuth for YouTube within your app so that users can upload videos, does each video count toward your 10,000-point quota?
I run a personal uploading bot and it does about 3 uploads per day within the 10,000-point quota, but if I were to scale out as an app this wouldn't work, since 5 users would max it out.
So if a user approves your app for upload permissions, do their uploads count toward your client's 10,000 points, or is it 10,000 points per user per day?
Also, how easy is YouTube's quota expansion form process, if it is the former?
https://support.google.com/youtube/contact/yt_api_form?hl=en
By checking the quota calculator you can see what each call costs. The videos.insert call, for example, costs 1,600 quota units.
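By that pricing, the default 10,000-unit daily project quota covers at most 10,000 / 1,600 = 6 video uploads per day across all of your users combined, which is why a personal bot doing roughly 3 uploads a day fits, but a multi-user app quickly would not.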
If you check your quotas in the Google developer console, you will see that some limits are project-based while others are user-based. "Queries per day: 10,000" is a project-based quota, while "Queries per minute per user" is a user-based quota.
It sounds like you should apply for a quota extension if the 10,000 limit is not enough for your needs.
Also, how easy is YouTube's quota expansion form process, if it is the former?
It's a long process. Google says it takes twenty days; in my experience it averages three to six months. You need to be prepared to get a no. You also need to be prepared for your quota to be shut down suddenly because they detect something they identify as spam or a violation. In the event of a shutdown, you will need to apply for a new extension, which again will take time.
I'm about to release an iOS app, and deploy its backend (rails backend that serves the iOS app) to heroku.
I have very little knowledge when it comes to the practical price you will pay based on traffic, etc. This link (http://notes.ericjiang.com/posts/881) states: "Nowadays, especially with faster code and faster computers, a standard 512MB dyno can power websites with tens of thousands of hits per hour."
I'm trying to get a rough estimate of how much running my backend on Heroku could cost me. What's the best way to figure this out? The pricing itself is straightforward; it basically comes down to how many dynos I'm going to need.
If I get 5 beta testers to run my iOS app for a 10-minute window, can I extrapolate some statistics as to how much my backend is being used? Is it the "hits" that matter, or the "data" transferred, or the "time" the backend is actively doing something, like queueing up some resulting data?
Is there a formula to figure it out? Let's say a user averages 10 hits per minute and I constantly have an average of 5,000 users; that would be 10 × 5,000 × 60 = 3 million hits per hour. What exactly should I be looking at to determine accurate pricing for my first backend?
While Heroku does have some limits surrounding bandwidth (not requests), for the most part your cost is close to fixed.
Monthly pricing is typically made up from a combination of:
Dynos
Databases
Addons
Premium support
Heroku provides a price calculator on its website. Furthermore, standard (non-hobby) dynos and up include metrics on CPU and memory usage.
My suggestion if you're just starting out? Start with one web dyno and a Postgres database. Beta test your app and check your metrics. A Rails app on a single Standard 1X dyno can handle a reasonable amount of traffic (depending on what else it might be doing), and if you need to add more dynos, they're only a command-line invocation away.
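For a concrete sense of that last point (both commands are from the standard Heroku CLI): heroku ps shows what is currently running, and heroku ps:scale web=2 scales you up to two web dynos.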
Hope that helps.
I'm looking for some advice here. My school's student section registration process is online and involves around 6,000 students.
Seating is assigned on a first-come, first-served basis. Every year they open the site at noon, and floods of people get on and try to register as fast as possible to get good seats. Every year, without fail, the server crashes and everyone is mad.
After several years of being frustrated myself, I've offered to redo their registration system.
My plan is to rewrite it in Ruby on Rails and use Heroku for hosting.
Does a heroku dyno only handle one request at a time?
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
Any helpful strategies or tips you can give me before I dive into this project?
Does a heroku dyno only handle one request at a time?
Yes. Heroku dynos are single threaded.
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
This depends on how fast your page loads. For argument's sake, let's pretend it takes 2s per page request (as per Google Analytics' recommendation) and that you need to serve 6,000 users × 5 page views / 30 minutes = 1,000 page views per minute.
At 2s per page load, a single dyno would serve 30 page views per minute, so 50 dynos would serve 1,500 page views per minute. That would let you exceed your overall goal and leave some room for error, but if all 6,000 users hit the page at once, a single Heroku app may not be able to keep up, depending on your timeout. You would need to implement a user queue system, explained below.
Any helpful strategies or tips you can give me before I dive into this project?
All that said, a 2s load time may vary depending on the assets your page needs to load, how much it interacts with the database, its queries, caching, etc. Your pages could also potentially serve much faster.
You also need to worry about the initial hit from all the users at once. This could be handled with a first-come, first-served queue system, similar to the one Ticketmaster uses if you've ever bought tickets there. It could be built on AWS SQS or your preferred queueing system.
With a user queue and caching of your assets and common database queries, you should be able to accomplish this with 50 or less dynos.
EDIT: I'm taking your word for it that Heroku will run 50 web dynos. Their pricing page shows 24 as the maximum, but I cannot find definitive information one way or the other.
Does a heroku dyno only handle one request at a time?
It depends on the web server you use (https://devcenter.heroku.com/articles/dynos#dynos-and-requests). If you want more concurrency within a dyno, I'd suggest taking a look at something like Puma.
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
Any helpful strategies or tips you can give me before I dive into this project?
You can have more than 50 dynos. A specific answer for your app is going to be far better than a guess or a generalization. Run a load test against your site (e.g., using Blitz) and find out the real numbers. Costs for add-ons are pro-rated per second, so you only pay for the period you have one installed; just make sure you uninstall or downgrade Blitz once you've finished your test.
I was about to code a mobile web Twitter client with a lot of functions in mind, but going through their API I noticed they limit requests to 350 per hour and have also disabled white-listing of IP addresses. This doesn't seem feasible for a large-scale app. Is there any way around it, or should I just dump the entire project? The programming language is chiefly PHP.
That's 350 requests per hour per user, not a global per-application limit.
What I mean is: if your application is used by 100 users, each of those users has their own limit for their own account.
So your app can refresh about once every 10 seconds per user, which seems plenty to my mind.
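The arithmetic behind that refresh rate: 3,600 seconds per hour / 350 requests ≈ one request every 10.3 seconds for each authenticated user.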
I am configuring a load test and am curious/confused about the settings. I am testing an intranet website that is expected to have 6,000 concurrent users. A previous consultant told my employer that the number of load-test users does not matter and that we should focus on requests per second; they determined that those 6,000 users would generate 30 RPS. I feel that is not correct and that we need to show we can exceed that number. The previous load test was set up with only 200 users, and the results showed that it did exceed 200 RPS. They were happy with the results, but that is not how I understand it.
My question is: if we need to support 6,000 concurrent users, should I just set my user count to 6,000 and run, or is RPS an adequate piece of data to rely on?
It is really hard to compare the apples of a "virtual user" with the orange that is a real person. A real person may take seconds to minutes to read a webpage and then take some action; a virtual user can process a webpage every few seconds.
To test adequately, you need to figure out a common unit of "work" between real users and the load you can generate with Visual Studio. The consultant probably recommended RPS because it is easy to measure from any load test, whatever webtests are inside it. It is a good measure.
The accuracy of the RPS measure rests on the assumptions made about your users.
The math works a little like this. Say:
I have 6,000 users who need to use the site every day. Mostly they log in in the morning, work a bit before morning tea, and hit the site more heavily from 2pm-3:30pm.
Looking at previous logs for the site, or just guessing, you can say:
Maybe at peak a user hits the site every minute or so.
Figure that at peak site usage 30% of the users are working.
So:
Users: 6000
Peak percentage: 30%
RPS per user: 1/60
6000 * 30% * 1/60 = 30 RPS.
So if the site can process 200 RPS, we can roughly say it is equivalent to all 6,000 users hitting the site for a page every 30 seconds:
6000 * 100% * 1/30 = 200 RPS.
When you change the assumptions about your real users, the number of RPS changes, often dramatically.
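For example, if you instead assume 60% of users are active at peak, the target doubles: 6000 * 60% * 1/60 = 60 RPS.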