Hello, I have a workflow here.
It sends callers to a queue with a single worker.
I want to implement voicemail and have followed the instructions here: https://support.twilio.com/hc/en-us/articles/360021082934-Implementing-Voicemail-with-Twilio-Flex-TaskRouter-and-Insights
If I put the filter first in the workflow, it always goes to voicemail; if I put it after, it never does.
How can I make it so that calls that are reserved but not answered go to the voicemail queue?
{
    "task_routing": {
        "filters": [
            {
                "filter_friendly_name": "Voicemail ",
                "expression": "1==1",
                "targets": [
                    {
                        "queue": "theRealqueue",
                        "timeout": 10
                    },
                    {
                        "queue": "voicemail",
                        "timeout": 10
                    }
                ]
            }
        ],
        "default_filter": {
            "queue": "gfhfghgfhghfghgfhfghgfh"
        }
    }
}
I believe the timeout you have set for each target counts the total time the task has spent, regardless of which target it is in. In the workflow you shared, when a task spends 10 seconds in theRealqueue and then times out, it has also timed out of the voicemail target, because that timeout was also 10 seconds.
Try setting the voicemail target to a larger timeout than theRealqueue's.
"targets": [
    {
        "queue": "theRealqueue",
        "timeout": 10
    },
    {
        "queue": "voicemail",
        "timeout": 20
    }
]
There were three problems.
I decided to put
"skip_if": "workers.available == 0"
on the first filter, and it makes sense now: in the GUI the default is DO NOT SKIP, so to my thinking that means what it says.
And wow, it worked. I had earlier set a TASK RESERVATION TIMEOUT of 8 seconds, but when I tried increasing it, the task never got to voicemail / never went to the next step.
I could only get it to work with that 8-second TASK RESERVATION TIMEOUT, not a larger value, so then I looked in the Studio Flow:
SEND TO FLEX had a timeout of 10 seconds, my bad. I increased it and all is good now.
The documentation/tutorial here is terrible: https://support.twilio.com/hc/en-us/articles/360021082934-Implementing-Voicemail-with-Twilio-Flex-TaskRouter-and-Insights
Select the default Assign to Anyone workflow, or the appropriate workflow if you have modified this.
Click Add a Filter.
Name your Filter Voicemail (or something similarly identifiable), and then change the TIMEOUT to 30 seconds. Click Add a Step when finished.
Click the QUEUE value, and then select the Voicemail queue you created in the previous section. Press Save when finished.
That is all not so relevant here; it seems that the thing that controls moving to the next step is the TASK RESERVATION TIMEOUT, and that no step is skipped/passed unless a "skip_if" is defined.
I would really love to get clarification on all this.
But the steps I have taken provided a solution. I banged my head against walls for a few days here.
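For reference, with those fixes in place the filter looks something like this (a sketch based on my workflow above; the skip_if expression is the one I added, and the exact timeout values are illustrative):

```json
"filters": [
    {
        "filter_friendly_name": "Voicemail",
        "expression": "1==1",
        "targets": [
            {
                "queue": "theRealqueue",
                "skip_if": "workers.available == 0",
                "timeout": 10
            },
            {
                "queue": "voicemail",
                "timeout": 20
            }
        ]
    }
]
```

With this, the first step is skipped outright when no workers are available, instead of waiting for the reservation timeout.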
The get_chat_history and get_chat_members methods keep hitting a FloodWait error ("Waiting for 20 (23, 21, 22, 18) seconds before continuing"); get_chat works fine. This error appeared a couple of days ago.
import asyncio
from pyrogram.errors import FloodWait

async with tg_cl:
    while True:
        try:
            async for member in tg_cl.get_chat_members(target):
                members_chat.append(member)
            break
        except FloodWait as Err:
            print("Flood wait: {} seconds".format(Err.value))
            await asyncio.sleep(Err.value)
            continue
...............
async with tg_cl:
    while True:
        try:
            if 'join' in chat:
                info_chat = await tg_cl.join_chat(chat)
            else:
                info_chat = await tg_cl.get_chat(chat)
            async for message in tg_cl.get_chat_history(chat, limit=1, offset_id=-1):
                count_messages = message.id
            break
        except FloodWait as Err:
            print("Flood wait: {} seconds".format(Err.value))
            await asyncio.sleep(Err.value)
            continue
Pyrogram already handles FloodWait errors on its own; you don't need to apply any logic yourself.
When setting up your Client instance, you can set the sleep_threshold. This is the longest FloodWait that Pyrogram will sit out on its own, without any logic needed from you. You can set it to an arbitrarily high value to stop getting these errors altogether. Keep in mind that Pyrogram will silently handle them itself and only print something like "waiting x seconds" in your output.
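For example, a minimal sketch (the session name and credentials are placeholders): passing sleep_threshold when constructing the Client tells Pyrogram to silently sleep through any FloodWait shorter than that value:

```python
from pyrogram import Client

app = Client(
    "my_account",                 # placeholder session name
    api_id=12345,                 # placeholder credentials
    api_hash="0123456789abcdef",
    sleep_threshold=60,           # auto-sleep through FloodWaits up to 60 seconds
)
```

Any FloodWait longer than the threshold is still raised, so keep your except clause if you need to survive very long waits.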
with app:  # Pyrogram also supports this synchronous usage
    list_of_members = []
    for member in app.get_chat_members(chat_id):
        list_of_members.append(member.user.id)
    print(list_of_members)
[123, 456, 789, ...]
Please note that in channels you can only retrieve 200 members at a time, and in chats only 10,000 (ten thousand); this is a hard limit imposed by the server.
See Pyrogram's documentation on the available arguments, as well as some examples:
https://docs.pyrogram.org/api/methods/get_chat_members
I've been dealing with a scenario in Apache Beam where, given a certain HTTP code, I may need to preserve elements so they can be retried in the next iteration.
I've been implementing this with custom code, using only a processing-time trigger.
.apply(
    "Sample Window",
    Window.into(FixedWindows.of(Duration.standardMinutes(1)))
        .triggering(AfterProcessingTime
            .pastFirstElementInPane()
            .plusDelayOf(Duration.standardSeconds(1)))
        .withAllowedLateness(Duration.ZERO)
        .discardingFiredPanes()
)
I was hardcoding my logic to handle requests of, let's say, 200 events, and also storing those events in memory in case the request failed.
However, checking the docs I saw composite triggers...
Repeatedly.forever(AfterFirst.of(
    AfterPane.elementCountAtLeast(100),
    AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardMinutes(1))))
So I did the same in my case.
.apply(
    "Sample Window",
    Window.<KV<String, String>>into(FixedWindows.of(Duration.standardMinutes(1)))
        .triggering(
            AfterFirst.of(
                AfterPane.elementCountAtLeast(200),
                AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardSeconds(1))
            )
        )
        .withAllowedLateness(Duration.ZERO)
        .discardingFiredPanes()
)
So I've been wondering...
If the trigger fires on element count within the 1-minute window, what happens to those events? Are they reprocessed again? Should I manually remove them from the window?
I'm also asking about the case where the 200 elements fail: how can I make them stay in the window?
In your trigger you are setting .discardingFiredPanes(), which "discards elements in a pane after they are triggered."
Any subsequent panes will not contain elements that have already been output.
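A toy stdlib simulation of the difference (plain Python, not Beam; the helper function is made up): with discarding semantics each firing emits only the elements that arrived since the previous firing, while accumulating semantics would re-emit earlier elements too:

```python
# Simulate which elements each trigger firing emits for one window.
def fire_panes(elements, fire_points, accumulating):
    """Return the pane emitted at each firing index in fire_points."""
    panes = []
    start = 0
    for end in fire_points:
        if accumulating:
            panes.append(elements[:end])       # panes repeat earlier elements
        else:
            panes.append(elements[start:end])  # each element is output exactly once
            start = end
    return panes

events = list(range(6))
print(fire_panes(events, [2, 4, 6], accumulating=False))
# discarding: [[0, 1], [2, 3], [4, 5]]
print(fire_panes(events, [2, 4, 6], accumulating=True))
# accumulating: [[0, 1], [0, 1, 2, 3], [0, 1, 2, 3, 4, 5]]
```

So with .discardingFiredPanes(), the 200 elements that triggered a pane are not reprocessed by later firings of the same window; there is nothing to remove manually.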
I have an application deployed on PCF with a New Relic service bound to it. In New Relic I want to get an alert when my application is stopped. I don't know whether this is possible; if it is, can someone tell me how?
Edit: I don't have access to New Relic Infrastructure.
Although an 'app not reporting' alert condition is not built into New Relic Alerts, it's possible to rig one using NRQL alerts. Here are the steps:
Go to New Relic Alerts and begin creating a NRQL alert condition:
NRQL alert conditions
Query your app with:
SELECT count(*) FROM Transaction WHERE appName = 'foo'
Set your threshold to:
Static
sum of query results is below x
at least once in y minutes
The query runs once per minute. If the app stops reporting, count turns the null results into 0 and then we sum them. When the number goes below your threshold, you get a notification. I recommend using the preview graph to decide how low you want your transaction count to get before being notified. Here's some good information:
Relic Solution: NRQL alerting with “sum of the query results”
Basically you need to create a New Relic alert with conditions that check whether the application is available. Specifically, you can use the Host not reporting alert condition.
The Host not reporting event triggers when data from the Infrastructure agent does not reach the New Relic collector within the time frame you specify.
You could do something like this:
// ...
aggregation_method = "cadence"  // use cadence for process monitoring, otherwise it might not alert
// ...
nrql {
    // Limitation: only works for processes with ONE instance; otherwise use just uniqueCount() and set a LoS (loss of signal)
    query = "SELECT filter(uniqueCount(hostname), WHERE processDisplayName LIKE 'cdpmgr') OR -1 FROM ProcessSample WHERE GENERIC_CONDITIONS FACET hostname, entityGuid as 'entity.guid'"
}
critical {
    operator              = "below"
    threshold             = 0
    threshold_duration    = 5 * 60
    threshold_occurrences = "ALL"
}
Previous solution (it turned out not to be that robust):
// ...
critical {
    operator              = "below"
    threshold             = 0.0001
    threshold_duration    = 600
    threshold_occurrences = "ALL"
}
nrql {
    query = "SELECT percentage(uniqueCount(entityAndPid), WHERE commandLine LIKE 'yourExecutable.exe') FROM ProcessSample FACET hostname"
}
This calculates the fraction your process represents of all other processes.
If the process is not running, the percentage drops to 0. On a system running a vast number of processes it could fall below 0.0001, but that is very improbable.
The advantage here is that the alert can stay active even if the process slips out of the current alert time window after it stopped. This prevents the alert from auto-recovering (compared to just filtering with WHERE).
I'm currently developing a Ruby on Rails application that, at a certain moment, has to import a (at least for me) medium-large dataset using a third-party API. It has to make an average of 6000 API calls, one after another, and takes about 20 minutes.
Right now I have a Rails task that does everything I want (calls, writes to the db, etc.). But now I also want this task/code to be callable from a button on the web. I know it's not a good approach to have the controller call the task, which is why I'm asking.
I want this import code to be callable from both a controller and a task, because later I want to run it from a cronjob, and if possible get callbacks on the task's progress in the controller, i.e. know how many calls are left.
I know it's not a good approach to let the controller call the task
There's nothing wrong with having a button trigger a background task like this, but of course you need to do so with care. For example, perhaps:
If the task is already running, don't let a second instance overlap.
If the task runs for too long, automatically kill it.
Carefully restrict who can trigger this.
There are many libraries available for implementing a progress bar, or you could even write a custom implementation. For example, see this blog post, which works by polling the current progress:
// app/views/exports/export_users.js.haml
:plain
  var interval;
  $('.export .well').show();
  interval = setInterval(function(){
    $.ajax({
      url: '/progress-job/' + #{#job.id},
      success: function(job){
        var stage, progress;
        // If there are errors
        if (job.last_error != null) {
          $('.progress-status').addClass('text-danger').text(job.progress_stage);
          $('.progress-bar').addClass('progress-bar-danger');
          $('.progress').removeClass('active');
          clearInterval(interval);
        }
        progress = job.progress_current / job.progress_max * 100;
        // In job stage
        if (progress.toString() !== 'NaN'){
          $('.progress-status').text(job.progress_current + '/' + job.progress_max);
          $('.progress-bar').css('width', progress + '%').text(progress + '%');
        }
      },
      error: function(){
        // Job is no longer in the database, which means it finished successfully
        $('.progress').removeClass('active');
        $('.progress-bar').css('width', '100%').text('100%');
        $('.progress-status').text('Successfully exported!');
        $('.export-link').show();
        clearInterval(interval);
      }
    })
  }, 100);
A variant approach you could consider is to use a websocket to report progress, rather than polling.
Convert the specific tasks into background jobs (e.g. Active Job, Sidekiq) so your system can keep working while it performs them. Create classes for each task and call those classes from your background jobs or cronjobs.
One design pattern that could fit here is the "command" pattern. There, I've given you a list of things you can Google :).
Just move most of the code from the task into a module or a method in a model. You can call this code from the task (as you do now) or from a background job started through a controller when you press a button in a view.
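A language-agnostic sketch of that structure (shown in Python for brevity; the function names are made up): one function holds the import logic, and the rake task, the cronjob, and a controller-triggered background job all call it, passing a progress callback if they want updates:

```python
# One shared callable unit; every entry point (task, cronjob, web) calls it.
def import_records(records, api_call, on_progress=None):
    """Run api_call for each record, reporting progress after each one."""
    done = 0
    for record in records:
        api_call(record)
        done += 1
        if on_progress:
            on_progress(done, len(records))
    return done

# A scheduled task and a web-triggered job call the same function:
progress_log = []
import_records([1, 2, 3], api_call=lambda r: None,
               on_progress=lambda done, total: progress_log.append((done, total)))
print(progress_log)  # [(1, 3), (2, 3), (3, 3)]
```

The controller's job can persist each progress update (e.g. to the job record) so a view can poll how many calls are left.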
I have a job scheduled in the Application_Start event using Quartz.NET; the trigger fires every minute, given by the variable repeatDurationTestData = "0 0/1 * * * ?".
The triggering starts when I first open the site, but stops after some random time once I close the browser, and starts again when I open the site. Here is the code:
IMyJob testData = new SynchronizeTestData();
IJobDetail jobTestData = new JobDetailImpl("Job", "Group", testData.GetType());
ICronTrigger triggerTestData = new CronTriggerImpl("Trigger", "Group", repeatDurationTestData);
_scheduler.ScheduleJob(jobTestData, triggerTestData);
DateTimeOffset? nextFireTime = triggerTestData.GetNextFireTimeUtc();
What am I doing wrong here? Is this because of a misfire? Please advise.
Thanks
First, I would use a simple trigger in this case, as it takes a repeat interval and seems to fit better than the cron trigger (from lesson 5 of the Quartz.NET website):
SimpleTrigger trigger2 = new SimpleTrigger("myTrigger",
null,
DateTime.UtcNow,
null,
SimpleTrigger.RepeatIndefinitely,
TimeSpan.FromSeconds(60));
I would also recommend that you don't run the Quartz scheduler inside the website. The main purpose of a job system is to work independently of any other system, so it naturally fits into a Windows service. By hosting it as part of the website you aren't guaranteed it will keep going: if you lose the app pool or it restarts, you won't get a reliable result.
There is an example included with the Quartz.NET download.
Hope that helps.