What's the allowable range for discrepancies between a caller's timestamp value and the back-end service's time when making calls to Desire2Learn's Valence REST APIs?
The timestamp threshold is narrow, but if you are outside the range, the server sends back its own timestamp in the error response (see http://docs.valence.desire2learn.com/basic/apicall.html#disposition-and-error-handling). However, it is easiest to use one of the libraries on the Valence site, as they already handle parsing that error condition and retaining an offset when the "Timestamp out of range" error occurs.
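The offset-retention idea those libraries implement can be sketched roughly like this (a hypothetical helper, not the official Valence SDK; how you parse the server timestamp out of the error body is up to you):

```javascript
// Hypothetical sketch: keep a clock-skew offset so that, after a
// "Timestamp out of range" rejection, the next request is signed with a
// timestamp the server will accept.
class ClockSkew {
  constructor() {
    this.offsetSeconds = 0; // server time minus local time
  }
  // Timestamp (Unix seconds) to use when signing the next request.
  adjustedTimestamp() {
    return Math.floor(Date.now() / 1000) + this.offsetSeconds;
  }
  // Call with the server timestamp parsed from the error response body.
  recordServerTime(serverTimestampSeconds) {
    this.offsetSeconds = serverTimestampSeconds - Math.floor(Date.now() / 1000);
  }
}
```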
Related
We will be sending requests for outbound calls via REST API to Twilio Studio in a batch each morning. However, the order in which they are sent is arbitrary, and some called parties will be in time zones in which calls should not be made at that hour (e.g. calling PST time zones at 8:00AM EST). How can we deal with this? I could put in a split based on the State, which would be known. However, then what? Could I include a loop based on a time check? If so, it is conceivable that the number of called parties waiting for their time zone to become eligible would exceed the number of concurrent outbound calls which are allowed. Would this then prevent normally eligible calls from being placed, or do flow executions not count towards this limit unless a call has already been placed?
I had thought about storing the queued requests in Sync, and executing them based on the State criteria in conjunction with a time check function. However, I'm not sure if this would even work.
Is there some means of sorting, or otherwise selecting queued API requests based on a criteria?
Any help would be appreciated. Thank you!
The decision on when to place the call would be determined outside of Twilio.
You would first identify which time zone each customer is in and group them (Pacific, Mountain, Central, Eastern), say using the address within your CRM - which is safer than using their area code.
Then, once the time is appropriate for that timezone, you would make a call to the Twilio Studio Executions end-point to place each of the calls.
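The grouping-and-gating logic might look something like this sketch (the region-to-zone mapping and the 9:00-20:00 calling window are assumptions; adjust to your own rules):

```javascript
// Sketch: gate outbound calls by the customer's local time before hitting
// the Studio Executions endpoint. Zone names and the allowed window are
// illustrative assumptions.
const ZONE_BY_REGION = {
  Pacific: "America/Los_Angeles",
  Mountain: "America/Denver",
  Central: "America/Chicago",
  Eastern: "America/New_York",
};

function localHour(region, date = new Date()) {
  // Intl.DateTimeFormat handles DST correctly, unlike fixed offsets.
  const hour = new Intl.DateTimeFormat("en-US", {
    timeZone: ZONE_BY_REGION[region],
    hour: "numeric",
    hour12: false,
  }).format(date);
  return Number(hour);
}

function okToCall(region, date = new Date()) {
  const h = localHour(region, date);
  return h >= 9 && h < 20; // assumed allowed calling window
}
```

Only the customers for which `okToCall` returns true would be submitted to the Executions endpoint on that pass; the rest wait for a later batch.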
You can monitor queue_time to determine how many milliseconds a call remains in the queue before being placed, in case you need to increase your CPS (or slow down your calling) and to avoid abnormally large queues resulting in calls placed outside allowed business hours.
So, in short, the queueing logic is handled on your side rather than on Twilio's side.
Because of our specific use case, it is desirable to have the functionality self-contained. We prioritize inbound calls and the call volume varies, so the number of concurrent calls placed by the outbound IVR is very low. This means that a call can be queued for an extended period of time, and our allowed calling window may expire. Therefore, we must make this check immediately prior to placing the call.
I was able to resolve this with a function which checks the current time via new Date().toISOString() and adds or subtracts an offset based on the time zone of the called party.
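A minimal sketch of that approach, for anyone following along (the offsets here assume standard time and ignore DST, so an IANA-zone-aware check via Intl.DateTimeFormat is safer in production):

```javascript
// Sketch of the approach described above: take the current UTC time and
// apply a fixed per-zone offset. Offsets are assumptions (standard time,
// no DST handling).
const OFFSET_HOURS = { EST: -5, CST: -6, MST: -7, PST: -8 };

function localHourFor(zone, now = new Date()) {
  const shifted = new Date(now.getTime() + OFFSET_HOURS[zone] * 3600 * 1000);
  return shifted.getUTCHours();
}
```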
I am using a Twilio Function which has an array of phone numbers.
I would like to be able to store these phone numbers in a 3rd party cloud database which we can edit with our CRM.
Then I'd write another Twilio function that will check the database and update the array in Twilio Functions with the latest data.
Alternatively if there is any other way that the first Twilio function could get the latest data from the database and store it in memory that would be great. I'd like to avoid checking the database for every request if possible in order to make the function as fast as possible.
Any help greatly appreciated!
Twilio developer evangelist here.
Currently, as Functions is in public beta, there is no API for Functions. So you cannot update Functions or their Environment Variables programmatically yet.
Also, due to beta limitations, you are unable to install Node modules, such as database drivers, so accessing remote data stores is currently not straightforward.
You can, from within a Function, make HTTP requests though. So, if your CRM could return a list of numbers in response to an HTTP request, then you could fetch them that way.
In terms of storing data in memory for Functions, this is not to be relied upon. Functions are short lived processes so the memory is volatile.
In your case, since you use a list of numbers, you could load the list in the first call to your Function and then pass those numbers through the URL for the remaining calls, so that you only need to make a request the first time.
Let me know if that helps at all.
I am currently building an app that will run on parse server on back4app. I wanted to know if there are any tips for lowering requests. I feel like my current app setup is not taking advantage of any methods to lower requests.
For example: when I call a Cloud Code function, is that one request even if the function runs multiple queries? Can I use Cloud Code to lower requests somehow?
Another example: if I use the Parse local datastore rather than constantly fetching data from the server, does that lower requests, or not really, since you would still need to sync the changes later? Or do all the changes get sent at once and count as one request?
Sorry I am very new to looking at how requests and back end pricing is measured in general. I want to make sure I can be as efficient as possible in order to get my app out without going over budget.
Take a look in this link here:
http://docs.parseplatform.org/ios/guide/#performance
Most of the tips there are useful both for performance and for reducing the number of requests.
About your questions:
1) Cloud Code - each call to a Cloud Code function counts as a single request, no matter how many queries it runs
2) Client-side cache - it will certainly reduce the total number of requests you make to the server
Is there an option in DynamoDB to store an auto-incremented ID as the primary key in tables? I also need to store the server time in tables as "created at" fields (e.g., user created at). But I can't find any way to get the server time from DynamoDB or any other AWS service.
Can you guys help me with,
Working with auto-incremented IDs in DynamoDB tables
Storing server time in tables for "created at" like fields.
Thanks.
Actually, there are very few features in DynamoDB, and this is precisely its main strength: simplicity.
There is no way to automatically generate IDs or UUIDs.
There is no way to auto-generate a date.
For the "date" problem, it is easy to generate one on the client side. May I suggest using the ISO 8601 date format? It's both programmer- and computer-friendly.
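For example, in JavaScript a client-side "created at" value is one line, and ISO 8601 UTC strings have the nice property that lexicographic order matches chronological order, which suits DynamoDB range keys:

```javascript
// toISOString() always emits UTC in a fixed-width ISO 8601 format,
// so string comparison doubles as time comparison.
const createdAt = new Date().toISOString();
```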
Most of the time, there is a better way than using automatic IDs for items; this is often a habit carried over from the SQL or MongoDB world. For instance, an e-mail address or a login makes a perfect ID for a user. But I know there are specific cases where generated IDs are useful.
In those cases, you need to build your own system. In this SO answer and this article from the DynamoDB-Mapper documentation, I explain how to do it. I hope it helps.
Rather than working with auto-incremented IDs, consider working with GUIDs. You get higher theoretical throughput and better failure handling, and the only thing you lose is the natural time-order, which is better handled by dates.
Higher throughput, because you don't need to ask DynamoDB to generate the next available ID (which would require some resource somewhere obtaining a lock, handing out numbers, and making sure nothing else gets those numbers). Better failure handling shows up when you lose your connection to DynamoDB (it goes down, or your application is bursty and doing more work than your currently provisioned throughput allows): a write-only application can keep "working", generating data complete with IDs, queueing it up to be written to DynamoDB later, and never worry about ID collisions.
I've created a small web service just for this purpose. See this blog post, that explains how I'm using stateful.co with DynamoDB in order to simulate auto-increment functionality: http://www.yegor256.com/2014/05/18/cloud-autoincrement-counters.html
Basically, you register an atomic counter at stateful.co and increment it every time you need a new value, through RESTful API.
I've got a REST API that uses OAuth for authentication. This API will primarily be used for mobile applications.
One of the developers working on a mobile app (he seems to be unfamiliar with OAuth) asked about the timestamp restriction. He was concerned that if the user's clock is off, the app will not work, because if it's off by more than 5 minutes (the current restriction my app uses), the request will be assumed to be a replay attack, and be rejected.
I'm wondering, have you (either as app developer or API developer) run into this problem in the wild? How have you worked around it? What is a reasonable restriction on the timestamp synchronization?
I just had this exact same question, and after taking a second look at the documentation I think I implemented the timestamp incorrectly by thinking the same way you were. Check it out:
http://oauth.net/core/1.0/#nonce
Unless otherwise specified by the Service Provider, the timestamp is expressed in the number of seconds since January 1, 1970 00:00:00 GMT. The timestamp value MUST be a positive integer and MUST be equal or greater than the timestamp used in previous requests.
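So a server-side check might combine the spec's non-decreasing rule with a tolerance window; here is a rough sketch (the 300-second window is this API's own choice, like the 5 minutes mentioned above, not something the spec mandates):

```javascript
// Sketch: accept a timestamp only if it is a positive integer, within the
// tolerance window of the server clock, and not older than the last
// accepted timestamp for that consumer.
const WINDOW_SECONDS = 300;
const lastTimestamp = new Map(); // consumerKey -> last accepted timestamp

function acceptTimestamp(consumerKey, ts, nowSeconds = Math.floor(Date.now() / 1000)) {
  if (!Number.isInteger(ts) || ts <= 0) return false;
  if (Math.abs(nowSeconds - ts) > WINDOW_SECONDS) return false; // clock skew / replay window
  if (ts < (lastTimestamp.get(consumerKey) || 0)) return false; // must not decrease
  lastTimestamp.set(consumerKey, ts);
  return true;
}
```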