Could you clarify whether the number of connections shown at each point is the maximum number of simultaneous connections reached during that particular hour, or the total number of connections made during that hour? Thanks.
The usage stats indeed show the maximum number of users that were connected at any one time during that hour.
It is not the total number of users that connected during that hour. If you want that figure, you can easily build it yourself by having each client write an event to the database when it connects, or by using a product like Firebase Analytics.
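For example, given a log of connect/disconnect events, the two metrics come apart like this (the event shape here is hypothetical, not a Firebase API):

```python
# Minimal sketch of the difference between the two metrics. The record shape
# (user_id, action, timestamp) is an assumption; in Firebase you would push a
# similar event on every connect and disconnect.
events = [
    ("alice", "connect", 10), ("bob", "connect", 20),
    ("alice", "disconnect", 30), ("carol", "connect", 40),
    ("bob", "disconnect", 50), ("carol", "disconnect", 60),
]

def peak_concurrent(events):
    """Maximum simultaneous connections -- what the usage graph shows."""
    current = peak = 0
    for _, action, _ in sorted(events, key=lambda e: e[2]):
        current += 1 if action == "connect" else -1
        peak = max(peak, current)
    return peak

def total_connected(events):
    """Distinct users that connected at all -- what the graph does NOT show."""
    return len({user for user, action, _ in events if action == "connect"})

print(peak_concurrent(events))  # 2 (alice and bob overlap)
print(total_connected(events))  # 3 (alice, bob, carol)
```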
I have a project on Google Cloud Platform that uses the YouTube Data API v3. Everything was going well until earlier this month when, after I received several emails alerting me that I had to complete an audit, the project's queries for the day stopped completely. They were zeroed out.
I followed the link to complete the audit, and I successfully made all of the changes that were requested in my application, strictly following the regulations. The audit went well; no further changes were required from me.
But the issue is that the queries per day remain at zero, and I can't edit the quota. It occurred to me that something might change if I used the Google Cloud trial. Negative: I'm still unable to increase the limits, not even using the trial credit they give you.
The project used roughly 25,000 to 300,000 queries/day. I have requested 500,000 queries/day through the quota extension form to have a little more margin.
Meanwhile, the project has been stopped for almost a month. If anyone knows anything about this, or how I should proceed, I would appreciate it.
Thank you very much.
Have a nice day,
We have an application that sends mail merge campaigns via the Graph API. In the app, we track timings and benchmarks per campaign. We found that some of our users run into an issue where their mail merge campaign sometimes sends very slowly. Normally, for a campaign with about 1k recipients, the average send time per email is 1-2 seconds, but sometimes it averages 17-30 seconds per email with the Graph API. For 1k recipients, that is a long time to process a campaign; more often than not, it pushes the total time to finish to 2-4 hours. Note that it's not necessarily the number of emails/recipients: we've seen this happen with anywhere from 500 to 2k recipients.
I'm not sure whether the issue is with the Graph API or with the user's mailbox throttling (sending limits). I suspect the former because users will often resend the same campaign when they see the first one running slowly, and the second campaign will actually finish very quickly, even before the slow campaign has processed half of its emails.
It almost seems like the server processing my first set of Graph API requests is warming up. Is there any configuration I can set in the API request that may help with this? I couldn't find anything in the documentation about it.
Has anyone else experienced this issue? Is there some configuration the tenant can adjust or review that may be the root cause?
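In case mailbox throttling is a factor: Graph reports throttling with HTTP 429 and a Retry-After header, so a sender can at least honor that instead of hammering the endpoint. A minimal sketch (the fake sender stands in for the real sendMail call):

```python
import time

def send_with_retry(send_once, max_retries=5):
    """Call send_once() until it succeeds, honoring 429 Retry-After.

    send_once() must return (status_code, headers); in a real app it would
    POST the message to the Graph sendMail endpoint.
    """
    for attempt in range(max_retries):
        status, headers = send_once()
        if status != 429:
            return status
        # Graph signals throttling with HTTP 429 plus a Retry-After header
        # giving the back-off in seconds; fall back to exponential back-off.
        time.sleep(float(headers.get("Retry-After", 2 ** attempt)))
    raise RuntimeError("still throttled after %d attempts" % max_retries)

# Fake sender standing in for the real HTTP call: throttled twice, then 202.
responses = iter([(429, {"Retry-After": "0"}), (429, {"Retry-After": "0"}), (202, {})])
print(send_with_retry(lambda: next(responses)))  # 202
```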
I would like to use a spreadsheet to show data to 50,000 to 100,000 people at a time. Can anyone tell me how many people can download the JSON file of a spreadsheet at the same time?
The Sheets API has a default limit of 40,000 queries per day.
You also have:
Write/Read requests per 100 seconds: 500
Write/Read requests per 100 seconds per user: 100
Write/Read requests per day: 40,000
As long as you don't exceed those limits, you'll be fine. If you do go past them, you will need to create a billing account so you can request additional quota.
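To stay under the per-user rate, one option is a small client-side limiter in front of the API calls. A minimal sketch (generic throttling logic, not a Google API; the tiny limit is just for the demo):

```python
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds, mirroring a
    quota like the 100-reads-per-100-seconds-per-user figure above."""

    def __init__(self, limit=100, window=100.0):
        self.limit, self.window = limit, window
        self.stamps = deque()  # timestamps of requests inside the window

    def allow(self, now):
        # Drop timestamps that have fallen out of the window.
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) < self.limit:
            self.stamps.append(now)
            return True
        return False  # caller should wait and retry later

limiter = SlidingWindowLimiter(limit=3, window=100.0)
print([limiter.allow(t) for t in (0, 1, 2, 3, 101)])
# [True, True, True, False, True] -- the 4th call exceeds the window budget
```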
I would like to understand the CloudKit free usage calculation, but I can't.
Could anyone describe what 40 requests per second (10 per 100,000 users) means? I couldn't find any definition of what a request is. If I had two apps and each app pinged my CloudKit server at the same time, would that count as two requests per second (for that moment)? How do I know how to limit the requests in my apps, and how do I queue requests so they can be retried later, once the CloudKit server is below its limit?
What about the 2 GB data transfer (50 MB per user)? How should I understand these 50 MB: per second, per day, forever? What will happen if one user of one of my apps uses 50 MB of traffic?
How do I limit my app and still have good client-server communication? Will I get an error when the limit is reached, rather than being automatically charged by Apple?
I really like how easy CloudKit is to program against, but I'm a bit scared that it could all go wrong and I would get charged because of a misunderstanding.
It is really hard for me to imagine how it is calculated.
I think your biggest concern will be quelled by knowing that you can set usage limits on these services. If you've hit this limit then the service will return an error and you can handle that in your app.
40 requests per second is across all users and devices. If you have 3600 users and they all pinged the server once per hour, that would average out to about 1/second. While that won't be enough to build a service like facebook, instagram, or twitter, it would probably be sufficient for getting weather data, a daily schedule, or food truck locations. For up to 4,000,000 users, the free tier will cover each user checking at most once every three hours with an even distribution.
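Working through those figures in code (the scaling rule, 10 requests/second per 100,000 users with a 40 requests/second floor, is taken straight from the numbers above):

```python
def free_tier_rps(active_users):
    """CloudKit free-tier request rate per the figures in this answer:
    a 40 req/s baseline, scaling at 10 req/s per 100,000 users."""
    return max(40.0, 10.0 * active_users / 100_000)

# 3600 users pinging once per hour averages one request per second:
print(3600 / 3600)  # 1.0

# At 4,000,000 users the scaled rate covers one check per user
# every three hours with an even distribution:
rps = free_tier_rps(4_000_000)       # 400 req/s
print(rps * 3 * 3600 >= 4_000_000)   # True: 4.32M requests per 3-hour window
```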
2GB data transfer is for all of your users. Since the scaling doesn't take effect until you have 100,000 users, 2GB of data transfer is a pretty good amount to get you off the ground. Since it scales at 50MB per user, it's easy to figure out how much you can trust your app to communicate with the server. If just one user goes over but you're still under the total usage then you won't get charged. If you do go over, it's $0.10/GB of data transfer.
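One plausible reading of the transfer figures, sketched in code. The 2 GB floor, the 50 MB/user scaling past 100,000 users, and the $0.10/GB overage rate all come from this answer; treat them as illustrative, not as current pricing:

```python
def transfer_overage_cost(users, total_gb):
    """Estimated overage charge under one reading of the free tier:
    a flat 2 GB allowance up to 100,000 users, then 50 MB per user,
    with overage billed at $0.10/GB. All figures are assumptions
    taken from the answer above, not authoritative pricing."""
    allowance_gb = 2.0 if users <= 100_000 else users * 50 / 1024
    return max(0.0, total_gb - allowance_gb) * 0.10

print(transfer_overage_cost(10, 1.5))  # 0.0 -- under the 2 GB floor
# 200,000 users scale the allowance to ~9765.6 GB, so 10,000 GB of
# transfer leaves ~234.4 GB of overage:
print(round(transfer_overage_cost(200_000, 10_000.0), 2))  # 23.44
```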
You could limit your app to only communicate so much until the user needs to pay for a premium service. If you allowed 50MB/user/month of Data Transfer and let the user know when they approached this limit that they'd have to pay then you'd never go over. You could also have ads on the device that essentially pay for the service to scale thus allowing users who use the app more to have more privileges than passive users but still allowing everyone to have a base usage.
The prices are at the bottom of this page and are fairly reasonable. You can definitely get a cheaper rate if you build things yourself and use AWS, but you'd need to be in the millions of users and/or have high demands for that to be a better option.
40 requests per second is across all users and devices. If you have 3600 users and they all pinged the server once per hour, that would average out to about 1/second. While that won't be enough to build a service like facebook, instagram, or twitter, it would probably be sufficient for getting weather data, a daily schedule, or food truck locations. For up to 4,000,000 users, the free tier will cover each user checking at most once every three hours with an even distribution.
Just about the 40 requests / second limit:
If this is correct, I sincerely don't understand why so many people in the forums say this is more than enough. For certain apps it might be enough to sync once per hour, but if you want to keep save-game files synchronized between devices, then 40 requests/second is ridiculous. A weather app? Don't make me laugh. In 90% of the apps out there you are going to need to insert, update, and delete, and I wonder how many requests a simple update takes... I hope just one, but I seriously doubt it.
On Firebase there is no request limit like this one, and uploads are free; they just charge you for downloads.
I might be missing something about CloudKit, because I don't get this ridiculous limit.
I am new to DynamoDB and very confused about provisioned throughput. I am creating an iPhone game in which users can chat within the game, with a Chat table containing GameID, UserID, and Message. How do I find the size of an item in order to calculate throughput? The size of the item depends mostly on the Message, right? How do I calculate the size of an item?
Amazon says we can modify the throughput either through the UpdateTable API or manually from the console. If I want to change it from code, how will I know that the provisioned throughput has been exceeded for a certain table? How do I check that from code?
I am also confused about CloudWatch. How should I understand it?
Could anyone please help me? Please don't point me to the documentation.
Thanks.
I will do my best to help with the confusion.
DynamoDB is a key-value database.
CloudWatch is Amazon's product-monitoring tool.
Provisioned throughput is roughly the number of KB of items you plan to read/write per second.
Whenever you exceed your provisioned throughput:
DynamoDB answers with a ProvisionedThroughputExceededException, and
DynamoDB notifies CloudWatch.
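On item size: it is essentially the byte length of each attribute name plus its value, so a Chat item's size is indeed dominated by Message. The classic sizing rules (one read unit = one strongly consistent read of up to 4 KB per second; one write unit = one write of up to 1 KB per second; check the current docs, since these figures can change) can be sketched as:

```python
import math

def read_capacity_units(item_kb, reads_per_sec, eventually_consistent=False):
    """One read unit covers a strongly consistent read of up to 4 KB per
    second; eventually consistent reads cost half as much."""
    units = math.ceil(item_kb / 4) * reads_per_sec
    return math.ceil(units / 2) if eventually_consistent else units

def write_capacity_units(item_kb, writes_per_sec):
    """One write unit covers a write of up to 1 KB per second."""
    return math.ceil(item_kb) * writes_per_sec

# A hypothetical 2.5 KB chat item (GameID + UserID + Message),
# read 10 times and written 5 times per second:
print(read_capacity_units(2.5, 10))   # 10 (2.5 KB rounds up to one 4 KB unit)
print(write_capacity_units(2.5, 5))   # 15 (2.5 KB rounds up to three 1 KB units)
```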
What CloudWatch does is basically record and aggregate data points. For most applications, it will only keep track of aggregated data over consecutive 5-minute periods. You can then access this data for "manual" monitoring or set up "alarms".
There was a really interesting question on SO a couple of weeks ago on DynamoDB auto-scaling using alarms. You might be interested in reading it: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/ErrorHandling.html
Knowing this, you can start building your application.
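To make the exception handling concrete, here is a runnable sketch. FakeDynamo and its methods are stand-ins for the real SDK client (with boto3, the current AWS SDK for Python, you would call put_item and update_table on boto3.client("dynamodb"), and the throttling error arrives as a ClientError with this code):

```python
class ProvisionedThroughputExceededException(Exception):
    """Stand-in for the SDK's throttling error."""

class FakeDynamo:
    """Stand-in client so the sketch runs anywhere, not a real AWS call."""
    def __init__(self):
        self.capacity, self.used = 5, 0
    def put_item(self):
        self.used += 1
        if self.used > self.capacity:
            raise ProvisionedThroughputExceededException()
    def update_table(self, new_capacity):
        self.capacity = new_capacity

def put_with_scale_up(client, n_items):
    """Write items; on throttling, raise the table's throughput
    (the UpdateTable call from code) and retry the failed write."""
    sent = 0
    for _ in range(n_items):
        try:
            client.put_item()
        except ProvisionedThroughputExceededException:
            client.update_table(client.capacity * 2)
            client.put_item()
        sent += 1
    return sent

print(put_with_scale_up(FakeDynamo(), 8))  # 8 -- all items land after one scale-up
```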
As with every AWS service, you need credentials to access DynamoDB. Even though they can be restricted to a specific table or a set of actions, it is very dangerous to bundle them in an application. Would you give MySQL or MongoDB credentials, even read-only ones, to untrusted people?
May I suggest that you build your application to rely on a server of your own? Since that server is trusted and built by you, you can safely perform any authorization checks there and grant it full access to your table.
I hope this helps. Feel free to ask for more details.