Hey guys, so I developed a social network on iOS and used Parse for the back end. Our app has taken off and over 50,000 images have been posted in ten days. Aside from soon hitting the 600 req/sec API limit, it appears we might fill up the 100GB storage limit even sooner. Does this limit (file storage) reset monthly, or once you hit 100GB are you done? It seems like a tiny amount of storage for a PaaS company.
According to the Parse.com website, you receive 2TB of file storage with any package, not 100GB. If you're asking whether they give you an additional 2TB each month, the answer is no. At the beginning of the next month you are still using the space; it does not reset (unlike, for example, bandwidth). This is the case with (probably) all cloud (SaaS, IaaS, PaaS, etc.) providers. You can increase the amount of file storage for 10c/GB per month.
As for database storage, it seems that 100GB is the hard limit. Again, being storage, you do not get an extra 100GB per month.
If your database is larger than 100GB and you are hitting more than 600 req/sec (averaged over a minute, i.e. 36,000 req/min), then you may want to consider building your own infrastructure, perhaps on AWS or similar, so you can scale it properly. You may also want to consider moving your uploaded images out of the database if they are not already - DB storage is considerably more expensive than file storage, in both cost and performance.
Parse.com has larger plans available - up to a point.
HOWEVER - if you are going to be doing 600 requests a second (wow - that's over 50 million requests a day) you'll need to look at two possibilities:
You can keep your requests under this limit by using local caching, streamlined calls, etc. (see the sketch after this list).
You will eventually need to migrate off of the Parse ecosystem.
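To illustrate the local-caching point, here is a minimal sketch using the Parse iOS SDK's built-in query cache; the class name and cache window are my assumptions, not anything from your app:

```swift
import Parse

// Serve results from the on-device cache when possible and only hit the
// network (consuming an API request) on a cache miss.
let query = PFQuery(className: "Post")   // hypothetical class name
query.order(byDescending: "createdAt")
query.limit = 25
query.cachePolicy = .cacheElseNetwork
query.maxCacheAge = 300                  // treat cached results as fresh for 5 minutes

query.findObjectsInBackground { objects, error in
    if let posts = objects {
        print("Loaded \(posts.count) posts")   // refresh the UI here
    } else if let error = error {
        print("Query failed: \(error.localizedDescription)")
    }
}
```

Every cache hit is a request you don't spend against the 600/sec limit.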
If memory serves, there used to be an option to get a custom plan with more requests/second. It seems to have disappeared from the Pricing page, to be replaced with this:
What is the cost for an app with a burst limit above 600 requests per second? What happens if I require more than 600 requests/second?
We do not provide custom plans for apps that require more than 600 requests per second.
UPDATE: It looks like there is also a hard limit of 100GB of database storage...
The overage rate for database size is $10/GB but we only allow increases in increments of 20GB. When you exceed 20GB of database size we will increase your soft limit to 40GB and begin charging you an incremental $200/month. When you hit your soft limit of 40GB we will increase your soft limit to 60GB... and so on up to a hard limit of 100GB.
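To make the tiering concrete, here is a small sketch of how I read that policy (my interpretation, not an official Parse calculator):

```swift
import Foundation

// Hypothetical helper reflecting the quoted policy: database usage is billed
// in 20GB soft-limit steps at $10/GB ($200 per step), up to the 100GB hard limit.
func monthlyDatabaseOverage(usageGB: Double) -> Int? {
    guard usageGB <= 100 else { return nil }     // past the hard limit - no plan covers this
    guard usageGB > 20 else { return 0 }         // the first 20GB carries no overage
    let steps = Int(ceil((usageGB - 20) / 20))   // billed in whole 20GB increments
    return steps * 200                           // $10/GB x 20GB per increment
}

print(monthlyDatabaseOverage(usageGB: 55)!)      // 400 -> $400/month at 55GB
```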
Perf testing a tool. We generate a bunch of metrics per test run, but we want to keep each test run separate. A db per run seems like it would allow us to do that, and at the same time let us give the tools we create to customers, who would only have one install and thus need only one db. But we are talking hundreds of dbs... granted, they should be small, as most would only hold a set of metrics covering a couple of hours. Will InfluxDB limit us, or will performance suffer significantly?
Memory/virtual memory appears to be the limiting factor.
On two 16GB boxes I added 500 dbs with data on one, and 500 sets of data to the same db on the other. The data was actually pretty small; the individual dbs were 440K after being loaded.
Memory use on the 500 dbs was way higher:
500 dbs: 19.9GB virtual, 3.3GB resident
1 db: 2.5GB virtual, 0.9GB resident
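Given those numbers, the single-db setup looks much cheaper, so one option is to keep every run in one database and separate runs with a tag. A rough sketch of a write against InfluxDB's HTTP line-protocol endpoint (database name, measurement, and tag are all made up):

```swift
import Foundation

// One database ("perf"), one measurement, with each test run separated by a
// run_id tag - instead of one database per run. All names are hypothetical.
let runID = "run-0042"
let line = "latency,run_id=\(runID) value=12.7"

var request = URLRequest(url: URL(string: "http://localhost:8086/write?db=perf")!)
request.httpMethod = "POST"
request.httpBody = line.data(using: .utf8)

URLSession.shared.dataTask(with: request) { _, _, error in
    if let error = error {
        print("Write failed: \(error.localizedDescription)")
    }
}.resume()
```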
I would like to understand the CloudKit free usage calculation, but I can't make sense of it.
Could anyone describe what 40 requests per second (10 per 100,000 users) means? I couldn't find any definition of what a request is. If I had 2 apps and both apps pinged my CloudKit server at the same time, would that count as two requests per second (for that moment)? How do I know how to limit the requests in my apps, and how do I queue requests so they can be retried later, once the server is no longer at its limit?
What about the 2GB data transfer (50MB per user)? How should I understand these 50MB - per second, per day, or forever? What will happen if one user of one of my apps uses 50MB of traffic?
How do I limit my app and still have good client-server communication? Will I get an error when the limit is reached, rather than being automatically charged by Apple?
I really like the programming ease of CloudKit, but I'm kinda scared that it could all go wrong and I'll get charged because of a misunderstanding.
It is really hard for me to imagine how it is calculated.
I think your biggest concern will be quelled by knowing that you can set usage limits on these services. If you hit that limit, the service will return an error, and you can handle it in your app - a sketch of that follows.
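For example, something along these lines in Swift; the record type and the retry strategy are assumptions on my part, not CloudKit requirements:

```swift
import CloudKit

// Query the public database and back off when CloudKit reports rate limiting.
let database = CKContainer.default().publicCloudDatabase
let query = CKQuery(recordType: "Post", predicate: NSPredicate(value: true))

database.perform(query, inZoneWith: nil) { records, error in
    if let ckError = error as? CKError {
        switch ckError.code {
        case .requestRateLimited, .zoneBusy:
            // CloudKit suggests how long to wait before retrying.
            let delay = ckError.retryAfterSeconds ?? 3.0
            DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
                // re-run the query here
            }
        case .quotaExceeded:
            // The quota is exhausted - degrade gracefully instead of charging on.
            print("Quota exceeded; pausing sync")
        default:
            print("CloudKit error: \(ckError.localizedDescription)")
        }
    } else if let records = records {
        print("Fetched \(records.count) records")
    }
}
```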
40 requests per second is across all users and devices. If you have 3600 users and they all pinged the server once per hour, that would average out to about 1/second. While that won't be enough to build a service like Facebook, Instagram, or Twitter, it would probably be sufficient for getting weather data, a daily schedule, or food truck locations. For up to 4,000,000 users, the free tier will cover each user checking at most once every three hours with an even distribution: at that scale the free tier allows roughly 4,000,000 / 100,000 x 10 = 400 requests/second, while one check per user every three hours averages 4,000,000 / 10,800s ≈ 370 requests/second, just under the limit.
2GB data transfer is for all of your users. Since the scaling doesn't take effect until you have 100,000 users, 2GB of data transfer is a pretty good amount to get you off the ground. Since it scales at 50MB per user, it's easy to figure out how much traffic you can allow your app per user. If just one user goes over but you're still under the total usage, then you won't get charged. If you do go over, it's $0.10/GB of data transfer.
You could limit your app to only communicate so much until the user needs to pay for a premium service. If you allowed 50MB/user/month of data transfer and let users know, as they approached this limit, that they'd have to pay, then you'd never go over. You could also run ads that essentially pay for the service to scale, allowing heavy users to have more privileges than passive users while still giving everyone a base usage. A client-side sketch of that metering idea follows.
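Here is a hypothetical byte counter for that approach; the budget, storage key, and reset handling are all assumptions (a real app would also roll the counter over at the start of each month):

```swift
import Foundation

// Track how many bytes this user has synced this month and stop before the
// assumed 50MB/user budget is reached, prompting an upgrade instead.
struct TransferMeter {
    static let monthlyBudgetBytes = 50 * 1024 * 1024        // 50MB, an assumption
    private static let key = "bytesTransferredThisMonth"    // hypothetical key

    static func record(bytes: Int) {
        let total = UserDefaults.standard.integer(forKey: key) + bytes
        UserDefaults.standard.set(total, forKey: key)
    }

    static var isOverBudget: Bool {
        UserDefaults.standard.integer(forKey: key) >= monthlyBudgetBytes
    }
}

// Before each sync:
// if TransferMeter.isOverBudget { showUpgradePrompt() } else { performSync() }
```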
The prices are at the bottom of this page and are fairly reasonable. You can definitely get a cheaper rate if you build things yourself and use AWS, but you'd need to be in the millions of users and/or have high demands for that to be a better option.
40 requests per second is across all users and devices. If you have 3600 users and they all pinged the server once per hour, that would average out to about 1/second. While that won't be enough to build a service like Facebook, Instagram, or Twitter, it would probably be sufficient for getting weather data, a daily schedule, or food truck locations. For up to 4,000,000 users, the free tier will cover each user checking at most once every three hours with an even distribution.
Just about the 40 requests/second limit:
If this is correct, I sincerely don't understand why so many people in the forums say this is more than enough. For certain apps it might be enough to sync once per hour, but if you want to keep save-game files synchronized between devices, then 40 requests/second is ridiculous. A weather app? Don't make me laugh. In 90% of the apps out there you are going to need to insert, update, and delete; I wonder how many requests a simple update takes... I hope just one, but I seriously doubt it.
On Firebase there is no request limit like this one, and uploads are FREE; they just charge you for downloads.
I might be missing something about this CloudKit thing, because I don't get that ridiculous limit.
I am making my first app. I am new to both SQL and GAE. Google Cloud SQL has tier "D0", which includes 200k I/O operations per day. I have an example; could you please explain how many I/Os it uses?
Suppose I have a table in my Cloud SQL instance with 10 rows and 3 columns: "article name", "author", "date of publishing". So there are 30 fields in total. When a user starts my app and requests the latest information, I want to send the user all 30 fields, which I can do with a single SQL query.
Is the execution of that query counted as thirty I/Os because 30 fields were transferred, or as one I/O because one SQL query was run?
Appreciate your help.
The pricing guide has this to say:
The number of I/O requests to storage made by your database instance depends on your queries, workload and data set. Cloud SQL will cache data in memory to serve your queries efficiently and to minimise the number of I/O requests.
In other words, it is neither of your two options: some queries may be served entirely from memory, generating no I/O at all, while others may generate many I/O requests. Optimising the database with indexes will make your queries cheaper; running table scans over large tables will cost more. A simple query like yours, selecting 30 small fields from a 10-row table, would almost certainly be served from cache after the first read.
In short, the same good-practice rules apply as for keeping a database fast on a local machine, except that skipping the optimisation won't just make your queries slower - it will also make them cost more.
The # of I/Os refers to disk operations. So that really depends on the query and the cached data.
I'm looking for some advice here. My school's student section registration process is online and involves around 6,000 students.
Seating is first come, first served. Every year they open the site at noon, and floods of people get on and try to register as fast as possible to get good seats. Every year, without fail, the server crashes and everyone is mad.
After several years of being frustrated myself, I've offered to redo their registration system.
My plan is to rewrite it in Ruby on Rails and use Heroku for hosting.
Does a Heroku dyno only handle one request at a time?
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
Any helpful strategies or tips you can give me before I dive into this project?
Does a Heroku dyno only handle one request at a time?
Yes - with the default single-threaded Rails server, a dyno serves one request at a time.
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
This depends on how fast your page loads. For argument's sake, let's pretend it takes 2s per page request (per Google Analytics' recommendation) and you need to serve 6,000 users x 5 page views / 30 minutes = 1,000 page views per minute.
At 2s per page load, one single dyno would serve 30 page views per minute. At 50 dynos, this would be 1,500 page views per minute (see the quick calculation below). This would obviously allow you to exceed your overall goal and leave you some room for error, but if all 6,000 users hit the page at once, then a single Heroku app may not be able to keep up, depending on your timeout. You would need to implement a user queue system - explained below.
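The same arithmetic as a quick sanity check (all inputs are the assumptions above, not measurements):

```swift
// Back-of-the-envelope dyno math using the assumed numbers above.
let users = 6_000.0
let pageViewsPerUser = 5.0
let windowMinutes = 30.0
let secondsPerPage = 2.0

let viewsNeededPerMinute = users * pageViewsPerUser / windowMinutes   // 1,000
let viewsPerDynoPerMinute = 60.0 / secondsPerPage                     // 30
let dynosNeeded = (viewsNeededPerMinute / viewsPerDynoPerMinute).rounded(.up)

print(dynosNeeded)   // 34.0 - enough at a perfectly even load; bursts need headroom
```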
Any helpful strategies or tips you can give me before I dive into this project?
All that said, a 2s load time may vary depending on the assets your page needs to load, how much it interacts with the database, its queries, caching, etc. Your pages could also potentially be served much faster.
You also need to worry about the initial hit when all the users arrive at once. This could be taken care of via a first-come-first-served queue system, similar to the one used by Ticketmaster if you've ever used their site. It could be built on AWS SQS or your preferred queue system.
With a user queue and caching of your assets and common database queries, you should be able to accomplish this with 50 or fewer dynos.
EDIT: I'm taking your word for it that Heroku will run 50 web dynos. They show 24 as max on their pricing page, but I cannot find any info one way or another.
Does a Heroku dyno only handle one request at a time?
It depends on the web server you use (https://devcenter.heroku.com/articles/dynos#dynos-and-requests). If you want more concurrency within a dyno, I'd suggest taking a look at something like Puma.
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
Any helpful strategies or tips you can give me before I dive into this project?
You can have more than 50 dynos. That said, a specific answer for your app is going to be far better than a guess or a generalization: run a load test against your site (e.g., using Blitz) and find out the real numbers. Costs for add-ons are pro-rated per second, so you only pay for the period you have one installed - just make sure you uninstall or downgrade Blitz once you've finished your tests.
I was about to code a mobile web Twitter client with a lot of functions in mind, but going through their API I noticed they limit requests to 350 per hour, and they've also disabled white-listing of IP addresses. This doesn't seem feasible for a large-scale app. Is there any way around it, or should I just dump the entire project? The programming language is chiefly PHP.
That's 350 requests per hour per user, not a global per-application limit.
What I mean is: if your application is used by 100 users, each of those users has their own limit for their account.
So each user's session can refresh about once every 10 seconds (3,600s / 350 ≈ 10.3s), which seems plenty to my mind.