I am using a Twilio Function which has an array of phone numbers.
I would like to be able to store these phone numbers in a 3rd party cloud database which we can edit with our CRM.
Then I'd write another Twilio function that will check the database and update the array in Twilio Functions with the latest data.
Alternatively, if there is any other way for the first Twilio Function to get the latest data from the database and keep it in memory, that would be great. I'd like to avoid checking the database on every request, if possible, to keep the function as fast as possible.
Any help greatly appreciated!
Twilio developer evangelist here.
Currently, as Functions is in public beta, there is no API for Functions. So you cannot update a Function or its Environment Variables programmatically yet.
Also, due to beta limitations, you are unable to install Node modules, such as database drivers, so accessing remote data stores is currently not straightforward.
You can, however, make HTTP requests from within a Function. So if your CRM could return a list of numbers in response to an HTTP request, you could fetch them that way.
In terms of storing data in memory in Functions, this is not to be relied upon. Functions are short-lived processes, so the memory is volatile.
In your case, since you use a list of numbers, you could load the list on the first call to your Function and then pass those numbers through the URL for the remaining calls, so that you only need to make that request the first time.
Let me know if that helps at all.
I'm new to working with CloudKit and database fetching. I've looked at the CKDatabaseOperation calls, and I'm trying to understand the real differences between adding an operation to a database and using "normal" function calls on that database, given that they both produce more or less the same results.
Why would adding an operation be more desirable over a function call and in what situations?
Thanks for helping me understand this. I'm trying to learn as much as I can about Swift.
Overview:
In CloudKit, most tasks can be done in two ways:
Convenience APIs (functions with completion handlers)
Operations
1. Convenience APIs
Advantages:
As the name implies, they are convenient to use
Disadvantages:
Usually require more server requests
Can't build dependencies
2. Operations:
Advantages:
More configurable, with more options
Require fewer server requests (better for your server request quota)
Built on Operation, so you get all the capabilities of Operation, like dependencies (you will need them in a real app; see the short sketch after this list)
Disadvantages:
Not as convenient to use; you need to create the operation yourself. It takes a little more time to code, but it is well worth it.
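To illustrate the dependency point above, here is a minimal Swift sketch, assuming a placeholder "Contact" record type with a "phoneNumber" field (neither comes from the question); the point is simply that operations can be chained, which the convenience APIs cannot do.

```swift
import CloudKit

let database = CKContainer.default().publicCloudDatabase

// Fetch existing contacts first ("Contact" is an assumed record type).
let query = CKQuery(recordType: "Contact", predicate: NSPredicate(value: true))
let fetchOperation = CKQueryOperation(query: query)

// Then save a new record ("phoneNumber" is an assumed field name).
let newContact = CKRecord(recordType: "Contact")
newContact["phoneNumber"] = "+15550100" as CKRecordValue
let saveOperation = CKModifyRecordsOperation(recordsToSave: [newContact], recordIDsToDelete: nil)

// Operations support dependencies: the save will not start until the fetch finishes.
saveOperation.addDependency(fetchOperation)

database.add(fetchOperation)
database.add(saveOperation)
```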
Example 1 (Fetch):
If you use CKDatabase.fetch, you would need to specify the record IDs that you want to fetch.
If you use CKQueryOperation, you can query based on field values.
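As a rough sketch of that difference (the "Contact" record type and "city" field are assumed names, not anything from the question):

```swift
import CloudKit

let database = CKContainer.default().publicCloudDatabase

// Convenience API: you must already know the record ID.
let recordID = CKRecord.ID(recordName: "contact-123") // placeholder ID
database.fetch(withRecordID: recordID) { record, error in
    if let record = record {
        print("Fetched by ID:", record)
    }
}

// Operation: query by field value instead of by ID.
let predicate = NSPredicate(format: "city == %@", "Austin")
let query = CKQuery(recordType: "Contact", predicate: predicate)
let queryOperation = CKQueryOperation(query: query)
queryOperation.recordFetchedBlock = { record in
    print("Matched record:", record)
}
queryOperation.queryCompletionBlock = { cursor, error in
    // `cursor` is non-nil when more results are available.
    if let error = error { print("Query failed:", error) }
}
database.add(queryOperation)
```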
Example 2 (Save / Update):
If you use CKDatabase.save, you can save one record with each function call, and each call results in a separate server request. If you want to save 200 records, you would have to run it in a loop and make 200 server requests, which is not very efficient. CloudKit also limits the number of server requests you can make per second, so you would exhaust your quota very quickly.
If you use CKModifyRecordsOperation, you can save all 200 records at once* by passing them as an array, so you make far fewer server requests.
*Note: The server imposes a limit on the number of records it can save in one request, but it is definitely better than creating a separate request to save each record.
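A hedged sketch of the batching difference, again using an assumed "Contact" record type; the commented-out loop shows the 200-request approach for contrast:

```swift
import CloudKit

let database = CKContainer.default().publicCloudDatabase

// Build 200 placeholder records ("Contact" and "phoneNumber" are assumed names).
let records: [CKRecord] = (0..<200).map { index in
    let record = CKRecord(recordType: "Contact")
    record["phoneNumber"] = "+1555000\(index)" as CKRecordValue
    return record
}

// Convenience API: one server request per record (200 requests in total).
// for record in records {
//     database.save(record) { _, error in
//         if let error = error { print("Save failed:", error) }
//     }
// }

// Operation: all 200 records in a single batched request
// (subject to the server's per-request record limit).
let modifyOperation = CKModifyRecordsOperation(recordsToSave: records, recordIDsToDelete: nil)
modifyOperation.modifyRecordsCompletionBlock = { savedRecords, _, error in
    if let error = error {
        print("Batch save failed:", error)
    } else {
        print("Saved \(savedRecords?.count ?? 0) records in one operation")
    }
}
database.add(modifyOperation)
```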
Reference:
https://developer.apple.com/library/content/documentation/DataManagement/Conceptual/CloudKitQuickStart/Introduction/Introduction.html#//apple_ref/doc/uid/TP40014987-CH1-SW1
Watch WWDC CloudKit videos
It might also help to watch the WWDC videos about Operation (formerly referred to as NSOperation).
I am currently building an app that will run on parse server on back4app. I wanted to know if there are any tips for lowering requests. I feel like my current app setup is not taking advantage of any methods to lower requests.
For example: when I call a Cloud Code function, is that one request even if the function runs multiple queries inside it? Can I use Cloud Code to lower requests somehow?
Another example: if I use the Parse local datastore rather than constantly getting data from the server, can that lower requests, or does it not really help because you would still need to sync changes later? Or do all the changes get sent at once and count as one request?
Sorry, I am very new to looking at how requests and back-end pricing are measured in general. I want to make sure I can be as efficient as possible in order to get my app out without going over budget.
Take a look in this link here:
http://docs.parseplatform.org/ios/guide/#performance
Most of the tips there are useful both for performance and for reducing the number of requests.
About your questions:
1) Cloud Code - each call to a Cloud Code function counts as a single request, no matter how many queries it runs (see the sketch below)
2) Client-side cache - it will definitely reduce the total number of requests you make to the server
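For point 1, here is a minimal Swift sketch from the client side, assuming a hypothetical Cloud Code function called "dashboardData" that runs several queries server-side and returns a combined result; the whole round trip still counts as one request:

```swift
import Parse

// "dashboardData" and the "userId" parameter are hypothetical names for illustration.
// However many queries the function runs on the server, this is one billed request.
PFCloud.callFunction(inBackground: "dashboardData", withParameters: ["userId": "abc123"]) { result, error in
    if let error = error {
        print("Cloud call failed:", error)
    } else if let payload = result as? [String: Any] {
        print("Got combined results in one request:", payload)
    }
}
```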
In a World of Warcraft Vanilla Lua Addon Development, how can I issue an HTTP call to receive data back? If not, how can I get data from a web source into the game while playing?
I have a feeling the answer is tragically short, but I would like the question asked and answered on Stack Overflow. My research came up lacking, and I recall doing some Lua in ~2007 and being disappointed.
Well, tragically short is an understatement. You simply can't. There were never any APIs that interact directly with connections, let alone create any, let alone to arbitrary URLs.
Most of the APIs just broadcast game events that arrive over the game's connection, and the closest thing you can get to a "data stream" is addon chat channels. But since bots violate the ToS, you wouldn't be able to make an account that responds to your addon's inquiries.
The closest thing you can get is building an "asynchronous mesh network", but that's only good if your addon has a considerable user base, and it's not guaranteed you'll get the information in a timely manner.
The general idea is that your addon ships with a public key (as in encryption), and you (and only you) hold the private key. Your addon emits a message to any connected peers, which store it in cross-realm SavedVariables, and you hope that someone has characters on more than one realm. Upon login, the client addon broadcasts its latest packet (still encrypted) to that realm's addon channel, and hopefully within a week or so the updated information makes it to all clients.
A disadvantage is that you'll only get "push" notifications; the client won't be able to send any data back to you*.
That, or you could release a patch for the addon on Curse :P
BUT WAIT!
You mention vanilla, so I presume you're developing this for a private server. Private servers often have one realm or a very small number of realms, which makes the mesh network above much simpler. Instead of a mesh, just use encryption and manually log in and broadcast on each realm every time you want to update the information being retrieved.
Plus, you might even be able to get the server devs to expose an API that sends messages to the appropriate in-game addon channel (you'd have to ask).
Of course, if you intend to make your addon server-agnostic, instead of tailored to a specific server, you're back to square one.
* Unless you are really dedicated to make that happen, because it's a ton of work.
There is no web API in vanilla WoW. There is a web browser widget in the game currently, though it is very limited in what it can be used for.
If you have access to the server software code, you may be able to hook into specific game channels to listen for user messages in a defined format, and have the server respond in a way the addon can parse.
I'm implementing autocomplete functionality in a mobile app. I plan to have an autocomplete function in Parse Cloud Code, but I'm afraid of the latency/delay that could bring.
Specifically, I would like to know how calling Parse Cloud Functions compares to making calls to a regular web server over a WebSocket connection.
NOTE: I see the iOS SDK calls Parse functions using NSURLSession, which leverages keep-alive by default. What I don't know is whether the server copes with that.
Cloud Functions can be called both sync and async; however, they are very stingy about how long the connection stays open. In other words, you'd have to make separate API calls. So, your answer is no.
Also, I might mention this from their guide on iOS Cloud Code calls:
There is a limit of 8 concurrent httpRequests per Cloud Code request, and additional requests will be queued up.
This means that even if you somehow forced the connection to stay open like a WebSocket, you couldn't have more than 8 people using that view/cloud function at a time, or everyone else wouldn't be able to access the function.
However, I've dealt with this myself and you have a few options...
1) Make your own Socket.IO server that makes REST requests to Parse's Cloud Code functions. There's even an iOS SDK for Socket.IO now, so this is a pretty easy option.
2) Accept that you'll have a pretty high API call rate and keep making the calls.
3) Do what I did: call all the objects you need at the beginning and have iOS perform the autocomplete on the fly (a sketch of the idea follows). There's a helpful search on Cocoa Controls that should give you a head start; check it out here. Any one of those controls would save you hours of trying to sort through and repopulate things yourself, especially if you have a lot of objects to get. Remember, if you are returning more than 100 results (the default return amount), set query.limit = 1000 (the maximum return limit).
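For option 3, a rough Swift sketch of the idea, assuming a placeholder "City" class with a "name" field; the server is hit once and every keystroke is then filtered locally:

```swift
import Parse

// Cache the candidate strings after a single fetch ("City" and "name" are assumed names).
var cachedNames: [String] = []

let query = PFQuery(className: "City")
query.limit = 1000 // default is 100; 1000 is the maximum per query
query.findObjectsInBackground { objects, error in
    guard let objects = objects, error == nil else { return }
    cachedNames = objects.compactMap { $0["name"] as? String }
}

// Called from the text field's change handler; no server request per keystroke.
func suggestions(for typedText: String) -> [String] {
    guard !typedText.isEmpty else { return [] }
    return cachedNames.filter { $0.lowercased().hasPrefix(typedText.lowercased()) }
}
```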
I would like to maintain a local record of the price of all the phone calls that my application makes.
I'm not sure what a good pattern for this would be. It looks like the price is not available in the arguments provided to the status callback when the call is closed. I assume this means I'll need to query Twilio's servers to find the price of the call. Can I do this immediately, or do I need to wait a certain amount of time for the price to populate?
Is there another pattern that would be simpler, require fewer steps, or be less error prone that I am not seeing here?
Thanks!
Twilio evangelist here.
I'd recommend checking out the Usage Records API. These handy APIs give you an easy way to get rollup data for your account, like how much your account spent yesterday or how many outbound calls it made.
You can also set up Usage Triggers to proactively notify you when thresholds are met.
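If you want to pull the numbers yourself rather than use a helper library, a rough sketch with URLSession might look like the following; the SID, token, and the "Yesterday"/"Category=calls" parameters are placeholders to be double-checked against the Usage Records documentation:

```swift
import Foundation

// Placeholder credentials; never hard-code real ones.
let accountSid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
let authToken = "your_auth_token"

// Ask for yesterday's call usage rollup (subresource and category assumed per the docs).
var components = URLComponents(string: "https://api.twilio.com/2010-04-01/Accounts/\(accountSid)/Usage/Records/Yesterday.json")!
components.queryItems = [URLQueryItem(name: "Category", value: "calls")]

var request = URLRequest(url: components.url!)
let credentials = Data("\(accountSid):\(authToken)".utf8).base64EncodedString()
request.setValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")

URLSession.shared.dataTask(with: request) { data, _, error in
    guard let data = data, error == nil else { return }
    // The response contains usage entries with counts and prices for the period.
    if let json = try? JSONSerialization.jsonObject(with: data) {
        print(json)
    }
}.resume()
```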
Hope that helps.