This morning I saw two event triggers in my account that I had never seen before and that I don't think were there last night. I don't know what they mean. I'm sure the words have meaning to the people at Twilio, but clicking help on Twilio brings me to Stack Overflow. The https://www.twilio.com/console/usage/triggers page shows the following:
FRIENDLY NAME | USAGE CATEGORY | CURRENT VALUE | TRIGGER VALUE | FREQUENCY | LAST FIRED
Nearing monitor-reads free tier (5000). Consumed 4000 | monitor-reads | 0 | 4,000.00 | | ---
Nearing monitor-writes free tier (10000). Consumed 8000 | monitor-writes | 0 | 8,000.00 | | ---
Clicking on the event gives me no idea what it's talking about either.
CURRENT VALUE: 0
LAST FIRED: Never
USAGE CATEGORY: monitor: reads
TRIGGER VALUE: 4,000.00
TRIGGER BY: count
I searched for these words on Twilio's site to see what they mean and found this, but I still have no idea what it means. Hoping one of the Twilio support people sees this here.
Twilio developer evangelist here.
The monitor-reads and monitor-writes events refer to the Audit Events that you can switch on for your account. The help article you linked to is actually out of date, particularly where it says everyone is opted in automatically (and I am working to get the article updated).
A better description of Audit Events comes from this page, which says:
Audit Events (previously known as Monitor Events) is designed to give Information Technology (IT) and Information Security (Info-Sec) teams detailed operational insight into their Twilio utilization. This feature is most beneficial to those with IT compliance and security requirements - eg storing detailed change logs for forensic analysis and security reviews. However, this feature is not beneficial, or needed, by most of Twilio’s customers.
If you need to track configuration changes within your Twilio project because of security and compliance requirements, please opt-in to Audit Events. Note that you will be charged for Audit Events written, stored, and read via the API.
I've also discovered that we do generate the triggers that you are seeing, even if you are not opted in. We are working to change that and avoid this confusion in the future.
All you need to know is that you are not automatically opted in to Audit Events and those triggers will not affect you unless you decide to opt into Audit Events in the future.
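For context on what a monitor "read" is in practice: if you ever do opt in, fetching those audit events back through the API is the kind of operation the monitor-reads category counts. A minimal sketch using the twilio-python helper library, assuming your credentials are set in the TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN environment variables:

from twilio.rest import Client

client = Client()  # picks up TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN from the environment

# Listing audit events via the Monitor API; these API reads are what the
# monitor-reads usage category (and hence the trigger above) tracks.
for event in client.monitor.events.list(limit=20):
    print(event.event_type, event.resource_sid, event.event_date)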
I have a contact flow that uses a pre-recorded voice prompt with a Lex bot for voice recognition. This is the main menu verbiage:
“Thank you for calling. If you would like to use your keypad to select the menu options, say “keypad”, otherwise please listen to the following menu options. For billing questions, say “billing”. To report a missed pickup, say “missed pickup”. If you are a current customer with recycling or other account questions, say “other”. If you are not a current customer, and have questions, say “sales”. To hear the menu again, say “repeat menu”. For all other questions, please remain on the line.”
I have set the error handling in the Lex bot to speak "Sorry, I'm having a hard time understanding you. Let's try using the keypad instead to make sure we route your call properly."
This works when an utterance is not matched or an invalid option is spoken or pressed. However, I cannot figure out whether it's possible to let the Lex bot time out, as in a normal DTMF contact flow, and send the caller to the next step in the menu without playing the error handling from the Lex bot.
Is this possible?
That's the thing: Lex is not meant to be used this way. It MUST have an input to process, and if it reaches Lex's timeout, it will always return an error and deliver the error handling response.
So you will have to get fancy in the Connect flow to catch the Lex error message and turn it into your own handling. But it will be hard to know whether Lex is erroring because it didn't understand or because the user chose not to respond.
Therefore, I would personally avoid building the bot in a way that allows the user to remain silent. The user must direct Lex every step of the way and have easy ways of backing out of an unwanted action.
Remember that Lex is much more powerful than the old automatic call systems, so trying to force Lex into that old mold won't work well. Depending on how you design your bot, you can make the conversation much, much more natural, accepting a very wide range of responses and directing them into the proper actions.
Tips:
Things may have changed more recently, but when I was building with Lex/Connect, it was not possible for the user to interrupt a playback message. So I also had to avoid what you are trying to do in the welcome message:
If you would like to use your keypad to select the menu options, say “keypad”, otherwise please listen...
Naturally, a user who does want to use the keypad will try to say "keypad" immediately and will probably get frustrated by having to listen to the rest of the playback message. So I design every playback message to be short, deliver information first, and always end on the question, often breaking the conversation up into more branching points to make the questions as specific as possible.
Don't worry about going back and forth with the user too many times. It gives the user comfort to know they are on the right path to what they want and are able to control the conversation in smaller steps. They will get stressed having to listen to a long list of options, remembering what they are, and figuring out which one best applies to them.
So make each question as clear as possible and avoid spoon-feeding options. Explicitly telling the user what they should say feels less natural:
To report a missed pickup, say “missed pickup”.
That is unnatural.
A good middle ground would be asking one question with a list of options and pausing between each option. The user will understand that these are responses they should make, but won't feel unnaturally pressured into exact phrases. For example:
Would you like to, check your billing, report a missed pickup, ask about sales, or something else?
That is natural.
We are comfortable handling those types of questions because we handle them all the time when speaking with humans. You may even want to use question marks instead of commas so that the playback voice uses a questioning intonation on each option. It looks less natural in written form, but will probably sound more natural.
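If your prompts support SSML (Amazon Polly voices in Connect do), you can also make those pauses explicit instead of relying on punctuation alone. A rough sketch, with the 400 ms pause length being an arbitrary choice to tune by ear:

<speak>
  Would you like to check your billing? <break time="400ms"/>
  report a missed pickup? <break time="400ms"/>
  ask about sales? <break time="400ms"/>
  or something else?
</speak>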
Last tip: Don't design your bot based on your experience talking with bots. Design your bot based on your experience talking with humans.
I recently began using the YouTube Data v3 API for a program that I'm writing, which is purely for personal use. To give a brief summary of what it does: it checks the live chat from my most recent (usually ongoing) livestream and performs actions based on certain keywords entered in chat (essentially commands for people to use from live chat). In order to do that, however, I have to constantly send requests to get a refreshed live chat. As it is now, it sends requests at 1-second intervals. I recently did a livestream to test out my program, and it only took about 25 minutes for me to reach the daily quota limit of 10,000 units/day.
The request is: youtube.liveChatMessages().list(liveChatId=liveChatId, part="snippet")
It seems like every request I make costs 6 units, according to the math. I want to be able to host livestreams at lengths of up to 3 hours, which would require a significant quota increase. I'm aware that there is an option to fill out a form to request additional quota. However, it asks for business information such as a business name, business website, business mailing address, etc. Like I said before, I'm doing this for my own use only. I'm in no way part of a business, and just made my program as a personal project. Does anyone know if there's any way to apply for additional quota as an individual/hobbyist? If not, do you think just putting n/a in those fields would be acceptable? I did find another post where someone else had the exact same problem, but no one was able to give a helpful answer. Any advice would be greatly appreciated.
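(Checking the math: 10,000 units/day ÷ 6 units/request ≈ 1,666 requests, and at one request per second that is about 27 minutes - which lines up with the roughly 25 minutes it took to hit the quota.)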
Unfortunately, though this is only tangentially related, it seems Google is in it for the money here. I also tried to do something similar myself (a very basic chat bot that just reads the chat messages), and although other users on the net got somewhat different results, they all have in common that, following the docs on how it should be done, they poll at an interval of about once a second (that's the timeout you get as part of the answer to a poll for new messages). I, along with a few others, got at most about 5 minutes out of polling once a second; some others, like you, got a few more minutes out of it. I changed the interval by hand in increments of 5 seconds: 5, 10, 15, and so on - you get the picture. I can't remember which value I finally settled on, but I was only able to get about 2 1/2 hours' worth with a rather long polling interval of once every 10 seconds or so - still plenty for a simple chat bot that just reads the chat. But replying as well would have at least doubled the usage and hence halved the time.
It's already a pain to get this working as an individual, as just setting up the required OAuth authentication means providing at least basic information like a fixed callback URL and some legal and policy information. I always ended up having it rejected with the standard reply "Your project seems to be for internal use only." I even got G Suite working (back before it required payment) to set up an "internal" project (only possible if the account belongs to a G Suite organization account), but after I set up the OAuth login I got an error that the private account I wanted to use the bot on was not part of the organization and hence couldn't be used. TL;DR: a useless waste of time.
Having been at this for several months now, I'd say there is just no way to get it done as a private individual for personal use. Yes, you can set it up and have the required verification rejected (as it uses the YouTube Data API scopes), but you are still stuck with that 10,000 units/day quota. Building your own powerful tool capable of doing more than just polling once every 10 to 30 seconds with a minimum of interaction doesn't get you further than a few minutes, maybe one or two hours if you're lucky. If you want more, you have to set up a business and pay for it - simple and short: Google wants you to pay for that service.
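If you stay on YouTube, the one lever you control is the polling interval. Here is a minimal sketch of such a throttled polling loop using google-api-python-client; it assumes youtube is an already-authorized API client and live_chat_id is known, and MIN_INTERVAL_SECONDS and handle_message are illustrative names of my own, not part of the API:

import time

MIN_INTERVAL_SECONDS = 10  # illustrative floor; raise it to stretch the daily quota

def poll_chat(youtube, live_chat_id):
    page_token = None
    while True:
        response = youtube.liveChatMessages().list(
            liveChatId=live_chat_id,
            part="snippet",
            pageToken=page_token,
        ).execute()
        for message in response.get("items", []):
            handle_message(message)  # hypothetical handler for chat commands
        page_token = response.get("nextPageToken")
        # The response suggests a wait time (pollingIntervalMillis); respect it,
        # but never poll faster than our own floor so the quota lasts the stream.
        suggested = response.get("pollingIntervalMillis", 0) / 1000.0
        time.sleep(max(suggested, MIN_INTERVAL_SECONDS))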
As Mixer has officially been announced to shut down on July 22nd, you have exactly these two options:
Use one of the publicly available services like Streamlabs, Nightbot, etc. They're backed by their respective "businesses" and as a result don't seem to have those quota limits (although I just found some complaints about Streamlabs from April - about one month before you posted this question - where they admitted to having reached their limits; I don't know if they have solved that yet).
Don't use YouTube for streaming, but rather Twitch - Twitch doesn't have these limits, and anybody is free to set up an API token either on the main account or on a second bot account (which is also explicitly explained in their docs). The downsides are of course the sacrifices you have to accept: a) viewers only get the streamer's original quality until the streamer reaches at least affiliate status, b) streams are capped at 1080p60 with only 6,000 kbit/s, and c) VODs are only stored for a short time.
I myself wanted to use YouTube as my main platform (and currently do, though without my own tooling at the moment) along with my own bot and such, since streaming on YouTube has some advantages over Twitch. But as YouTube wants me to pay for what others (namely Twitch) offer for free (although overall not at the same quality), it's an easy decision to make. Mixer looked promising, as it offered quite a few neat features (overall better quality than Twitch, lower latency), but the requirements for partner status were extremely high (2,000 followers along with another insanely high number to reach) and Mixer itself was just so small a platform (for fun, I counted all the streamers and viewers: with only a few hundred streamers and a few tens of thousands of viewers, the whole platform had less than some big Twitch channels on their own) - and now it's been announced that it will soon be dead anyway.
I hope this gives you some insight into what a small streamer has to consider and suffer through when choosing a platform. After everything I experienced, my takeaway is this: either do it like all the others - stream on Twitch and use YouTube as an archive to export to from Twitch (although Twitch STILL doesn't have an auto-export of the latest VOD implemented, but I guess that could be done with a small script) - or, if you want to stay on YouTube, use an existing bot like Nightbot or one of the other services like Streamlabs.
If you get any other information on how to convince Google to increase the limit as an individual, please let us know.
I use Twilio to operate an automated phone line that connects callers to resources for some very sensitive topics. If the phone numbers of callers were revealed due to a data breach or subpoena, it could have negative consequences for them. There's no need for us to log callers' numbers, and ideally I'd like to not store that information at all. However, these numbers show up in my usage logs.
I've searched for ways to prevent these numbers from being logged or to delete them after they've been logged, but I can't find anything documented. Is there a way to do this?
Partial answer: You can delete the record of a call using the API DELETE functionality:
DELETE https://api.twilio.com/2010-04-01/Accounts/{AccountSid}/Calls/{Sid}.json
You can have a script periodically request all calls made to the number and call DELETE for each one. If your system involves recordings, transcriptions, or texts, you need to do the same for them.
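For example, here is a minimal sketch of such a cleanup script using the twilio-python helper library; it assumes credentials in the TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN environment variables, and SERVICE_NUMBER is a placeholder for the number callers dial:

from twilio.rest import Client

client = Client()  # picks up TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN from the environment

SERVICE_NUMBER = "+15551234567"  # placeholder number

# Delete any recordings first, then the call records themselves.
for call in client.calls.list(to=SERVICE_NUMBER):
    for recording in client.recordings.list(call_sid=call.sid):
        recording.delete()
    call.delete()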
This is an acceptable solution for our needs, but it would be ideal if the numbers weren't logged in the first place, so I'm still interested in hearing others' answers.
I built an app that can store OData offline using the SAP Kapsel plugins.
More or less it's the same as what's generated by SAP Web IDE, or similar to the apps in this example: https://blogs.sap.com/2017/01/24/getting-started-with-kapsel-part-10-offline-odatasp13/
Now I am at the point of checking the error resolution potential. I created a sync conflict (by changing data on the server after the offline database was stored, changing something in the app, and starting a flush).
As mentioned in the documentation, I can see the error in the ErrorArchive and can also see some details. But what I am missing is the "current" data in the backend database.
In the error details I can only see the data on the device, not the data changed on the server.
For example:
Device loads some names into the offline store
Device goes offline
User A changes some names
User B changes one of these names directly online
User A is online again and starts a sync
User A is now informed about the entity that was changed, BUT:
not about the content user B entered
I just see the "offline" data.
Is there a solution to see the "current" and the "offline" one in a kind of compare view?
Please also note that the server communication is done by the Kapsel plugin and not with normal AJAX calls. Normal AJAX calls could be an alternative, but I am wondering whether there is a smarter way supported by the API.
Meanwhile I figured out how to load the online data (manually).
This can be done by switching the HTTP handler back to the normal one:
sap.OData.removeHttpClient(); // restores the original DataJS HTTP client, so requests go directly online
sap.OData.applyHttpClient();  // re-applies the offline store's HTTP client afterwards
Anyhow, this does not look like a proper solution, and I also have an issue with the conflict log itself: it must be deleted before any refresh can be applied.
I could not find any proper documentation for that. ETag handling is also barely described in the SAPUI5 and SAP Kapsel documentation.
This question is a really tricky one, due to its implications. I understand that you are simulating a synchronization error due to concurrent modification, and want to know if there is a way for the client to obtain the "current" server state in order to give the user a means to compare the local and server state.
First, let me give you the short answer: No, there is no way for the client to see the current server state "for reference" via the Offline APIs when there are synchronization errors. Doing an online query as outlined above might work, but it certainly is a bad idea.
Now for the longer answer, which explains why this is not necessarily a defect and why I said there are quite some implications to the answer.
Types of Synchronization Errors
We distinguish a number of synchronization errors, and in this context, we are clearly dealing with business-related issues. There are two subtypes here: Those that the user can correct, e.g. validation errors, and those that are issues in the business process itself.
If the user violates the input range, e.g. by putting a negative price for a product, the server would reply with the corresponding message: "-1 is not a valid input value for 'Price'". You, as a developer, can display such messages to the user from the error archive, and the ensuing fix is indeed a very easy one.
Now when we talk about concurrent modification, things get really, really nasty. In fact, I like to say that in this case there is an issue with the business process itself: on the one hand, we allow data to get out of sync; on the other hand, the process allows multiple users to manipulate the same piece of information. How all relevant users should be notified and synchronize is no longer just a technical detail, but in fact a new business process. There just is no way to generically devise how to handle this case. In most cases, it would involve back-office experts who need to decide how the changes should be merged.
A Better Solution
Angstrom pointed out that there is no way to manipulate ETags on the client side, and you should in fact not even think about it. ETags work like version numbers in optimistic locking scenarios, and changing the ETag basically means "Just overwrite what's on the server". This is a no-go in serious scenarios.
An acceptable workaround would be the following:
Make sure the server returns verbose error messages so that the user can see what happened and what caused the conflict.
If that does not help, refresh the data. This will get you an updated ETag, and merge the local changes into the "current" server state, but only locally. "Merging" really means that local changes always overwrite remote changes.
The user now has another opportunity to review the data and can submit it again.
A Good Solution
Better is not necessarily good, so here is what you should really do: never let concurrent modification happen in the first place, because it is really expensive to handle. This implies that it is not the developer who should address this issue; the business needs to change the process.
The right question to ask is: "When you replicate data in a distributed system, why do you allow it to be modified concurrently at all?" Typically stakeholders will not like this kind of question, and the appropriate reaction is to work out a conflict resolution process together with them. Only then will they realize how expensive fixing that kind of desynchronization is, and more often than not they will see that adjusting the process is way cheaper than insisting on yet another back-office process to fix the issues it causes. Even if they insist that there is a need for this concurrent modification, they will now understand that it is not your task to sort this out and that they need to invest in a conflict resolution process.
TL;DR
There is no way to compare the client state to the server state on the client, but you can do a refresh to retain the local changes and get an updated ETag. The real solution, however, is to rework the business process, because this is no longer a purely technical issue.
By default, SMP or HCPms detects conflicts via ETags. On the client side there is no API to manipulate ETags in case of conflicts. A potential solution to implement a kind of diff view on the device would work like this:
Show the errors
Cache the errors (maybe only in memory?)
Delete the errors
Do a refresh of the database
Build a diff view from the current data and the cached errors
The idea with
sap.OData.removeHttpClient();
sap.OData.applyHttpClient();
could also work but could be very tricky and may introduce side effects.
Maybe some requests are triggered against the "wrong" backend.
I'd like to maintain a local record of the price of all the phone calls that my application makes.
I'm not sure what a good pattern for this would be. It looks like the price is not available in the arguments provided to the status callback when the call is closed. I assume this means I'll need to query Twilio's servers to find the price of the call. Can I do this immediately, or do I need to wait a certain amount of time for the price to populate?
Is there another pattern that would be simpler, require fewer steps, or be less error prone that I am not seeing here?
Thanks!
Twilio evangelist here.
I'd recommend checking out the Usage Records API. These handy APIs give you an easy way to get rollup data for your account, like how much your account spent yesterday or how many outbound calls it made.
You can also set up Usage Triggers to proactively notify you when thresholds are met.
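For example, a minimal sketch with the twilio-python helper library, assuming credentials in the TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN environment variables (the callback URL is a placeholder):

from twilio.rest import Client

client = Client()  # picks up TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN from the environment

# Rollup of yesterday's voice usage and spend.
for record in client.usage.records.yesterday.list(category="calls"):
    print(record.category, record.count, record.price, record.price_unit)

# A trigger that calls your webhook once the account has spent $100 on calls.
client.usage.triggers.create(
    usage_category="calls",
    trigger_value="100",
    trigger_by="price",
    callback_url="https://example.com/usage-alert",
)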
Hope that helps.