I am testing how publisher confirms work in Spring AMQP, trying to find out how to control the maximum number of unconfirmed publishes using Spring AMQP.
Basically, I want to pause the publisher when the unconfirmed message count exceeds some limit.
It seems rabbitTemplate.getUnconfirmed(age) gives the list of unconfirmed messages, but it removes them from the unconfirmed list once the method is called.
Yes; we don't currently have an API for that. getUnconfirmed(age) is intended to expire pending confirms older than the given age.
We should probably add an overloaded method (with no argument) which just gets the unconfirmed correlation data, or a method that just returns the count of such.
Feel free to open an "improvement" JIRA issue and we should be able to get something into the next release.
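In the meantime, a minimal sketch of one way to impose the limit yourself: gate sends with a semaphore that the confirm callback releases. This assumes a recent Spring AMQP version with correlated publisher confirms enabled on the connection factory; the limit, exchange, and routing key names are placeholders.

import java.util.UUID;
import java.util.concurrent.Semaphore;
import org.springframework.amqp.rabbit.connection.CorrelationData;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class ThrottledPublisher {

    private static final int MAX_UNCONFIRMED = 1000; // assumed limit

    private final Semaphore permits = new Semaphore(MAX_UNCONFIRMED);
    private final RabbitTemplate rabbitTemplate;

    public ThrottledPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
        // Release a permit whenever the broker acks (or nacks) a publish;
        // a real implementation would also retry or log nacked messages.
        rabbitTemplate.setConfirmCallback((correlation, ack, cause) -> permits.release());
    }

    public void send(Object payload) throws InterruptedException {
        permits.acquire(); // blocks once MAX_UNCONFIRMED publishes are outstanding
        rabbitTemplate.convertAndSend("my.exchange", "my.routing.key", payload,
                new CorrelationData(UUID.randomUUID().toString()));
    }
}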
Requirement
Our business requires us to expand the functionality of the transfer feature that is already built into Twilio Flex.
What exists today
Today's functionality (out of the box) partially covers what we need with the Actions.TransferTask functions. The issue is that when we choose an agent to cold transfer to and they're not available, the call gets put back into a queue and picked up by the next available agent. This seems completely counterintuitive to how I would expect it to work; I would expect that call to go to voicemail if the agent is unable to pick up.
What we tried
We have logged a support ticket with Twilio, to no avail. Any attempt to use StartConferenceCall fails because we're on an active call. Twilio Functions' Dial doesn't seem to work either, as I don't seem to have access to my current call (Dial is meant for calls that are about to happen?). But Dial seems to be the only process that allows digits to be passed, so conferences and participants don't seem to be the answer either.
What we do have, though, is a phone number that will end up in our primary workflow; if the incoming number is the aforementioned number, we skip the voice prompts, accept digits, and redirect to the agent. This part works.
What we need
We're looking for a Twilio Function or plugin Action (or some other method that I'm unaware of) that can cold transfer a call to another agent with no overflow to a queue (let it get picked up by voicemail).
So the skip token I get from the Graph API is a number; based on my understanding (I might be wrong), it indicates how many emails need to be skipped.
In our application, we store that skip token in our DB/memory so we can fetch the next page of emails. So if, say, a user's current skip token is 100, and before we send a request to the server with skip token 100 that user deletes 10 emails, what is going to happen if we still use that skip token of 100?
Since I am not sure how to deal with this user-deletes-emails case, the way our application works is: we always subtract from the skip token (e.g. 10) and check whether we can find any email or timestamp overlap between the current response and the previous response; if there is no overlap, we subtract from the skip token again. It is kind of like walking backward. We stop subtracting once we find an overlap.
Does that make sense? So far, I have noticed that some skip tokens' responses give a null nextLink while there are still new emails in the user's inbox. Also, we missed a couple of emails over around half a year (meaning the email was in the user's inbox but never fetched by our application).
The Delta Query (Track Changes) API might be better suited for your needs. It effectively allows you to keep a "bookmark" in a change log of someone's inbox.
E.g. instead of keeping the skip token, you would keep the deltaLink you get back from calling /messages/delta. When you call the API again with the deltaLink, you get back the set of changes since the last time you called the API, plus a new deltaLink. This allows you to stay in sync with the changes going on in the inbox you are monitoring.
The API reference docs are here:
https://learn.microsoft.com/en-us/graph/delta-query-overview
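For illustration, a minimal sketch of that delta loop with a plain Java HTTP client; the access token and the JSON handling are placeholders, and a real client would typically use the Graph SDK and a proper JSON library.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InboxDeltaSync {

    private static final String ACCESS_TOKEN = "..."; // assumed: obtained via OAuth

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Start from a persisted deltaLink if you have one; otherwise do a full pass.
        String url = "https://graph.microsoft.com/v1.0/me/mailFolders/inbox/messages/delta";
        while (url != null) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Authorization", "Bearer " + ACCESS_TOKEN)
                    .build();
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            // ... process the "value" array of new/changed messages here ...
            String next = readLink(body, "@odata.nextLink");   // more pages this round
            String delta = readLink(body, "@odata.deltaLink"); // round complete
            if (delta != null) {
                // persist delta somewhere durable; it is the start URL for the next run
            }
            url = next;
        }
    }

    private static String readLink(String json, String key) {
        return null; // placeholder: read the property with a real JSON library
    }
}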
I am in the process of switching the LDAP backend that we use to authenticate access to Gerrit.
When a user logs in via LDAP, a local account is created within Gerrit. We are running version 2.15 of Gerrit, and therefore our local user accounts have migrated from the SQL DB into NoteDB.
The changes in our infrastructure mean that once the LDAP backend has been switched, user logins will appear to Gerrit as new users, and therefore new local accounts will be generated. As a result, we will need to perform a number of administrative tasks on the existing local accounts before and after migration.
The REST API exposes some of the functionality that we need; however, two key elements appear to be missing:
There appears to be no way to retrieve a list of all local accounts through the API (such that I could then iterate through them to perform the administrative tasks I need to complete). The /accounts/ endpoint insists on a query filter being specified, and there does not appear to be a way to simply specify 'all' or '*'. Instead I am having to try to think of a search filter that will reliably return all accounts; I haven't succeeded yet.
There appears to be no way to delete an account. Once the migration is complete, I need to remove the old accounts, but no API or other method for removing old accounts is documented.
Has anybody found a solution to either of these tasks that they could share?
I came to the conclusion that the answers to my questions were:
(The '/a/' in the examples below accesses the administrative endpoint, so Basic auth is required and the user must have the appropriate permissions.)
Retrieving all accounts
There is no way to do this in a single query; however, combining the results of:
GET /a/accounts?q=is:active&n=<number larger than the number of users>
GET /a/accounts?q=is:inactive&n=<number larger than the number of users>
will give effectively the same thing.
Deleting an account
It seems this simply is not supported. The only option appears to be to set an account inactive:
DELETE /a/accounts/<account_id>/active
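For illustration, a rough sketch tying both steps together with java.net.http; the host, credentials, and account id are made up, and note that Gerrit prefixes its JSON responses with the line )]}' which must be stripped before parsing.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class GerritAccountCleanup {

    static final String GERRIT = "https://gerrit.example.com"; // assumed host
    static final String AUTH = "Basic " + Base64.getEncoder()
            .encodeToString("admin:http-password".getBytes()); // assumed credentials

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Combine both queries to cover every account.
        for (String query : new String[] { "is:active", "is:inactive" }) {
            HttpRequest list = HttpRequest.newBuilder(
                            URI.create(GERRIT + "/a/accounts/?q=" + query + "&n=100000"))
                    .header("Authorization", AUTH)
                    .build();
            String body = client.send(list, HttpResponse.BodyHandlers.ofString()).body();
            String json = body.substring(body.indexOf('\n') + 1); // drop the )]}' prefix
            // ... parse json and collect each _account_id for the admin tasks ...
        }
        // "Delete" an old account by deactivating it (no true delete exists):
        HttpRequest deactivate = HttpRequest.newBuilder(
                        URI.create(GERRIT + "/a/accounts/1000042/active")) // made-up id
                .DELETE()
                .header("Authorization", AUTH)
                .build();
        client.send(deactivate, HttpResponse.BodyHandlers.discarding());
    }
}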
We use Microsoft Graph to subscribe to webhooks from emails. Additionally, as a backup procedure we also crawl the messages directly.
We crawl around 5 million emails a day, and we see that each day consistently around 1%-2% of these emails are not sent to us via the webhook, although the subscription for this principal is active (and other email notifications from this user are indeed sent).
Is there any logic on the Microsoft Graph side to not send webhooks for certain types of emails by design? Or is it just a problem in the notification mechanism?
(Obviously crawling them, as we do now, is a viable backup option, but that also means the processing of the email will be delayed)
I currently have a similar webhook setup; we get around 200-300 emails a day, and I notice that the subscription service usually misses 1-2 per day, since sometimes several emails arrive at the same time. I have also noticed that the notification payload is an array of objects when two or more emails arrive at the same time. What we have put into place is a cron-scheduled script that checks the mailbox at specific time intervals (every 5 minutes, every 10 minutes, and so on). This is the only solution that has worked for my application to capture every single email.
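For what it's worth, the polling side of that can be a simple Graph filter on receivedDateTime since the last run; a minimal sketch, where the token and the persisted timestamp are assumptions:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class BackupPoller {

    private static final String ACCESS_TOKEN = "..."; // assumed: obtained via OAuth

    public static void main(String[] args) throws Exception {
        String lastCheck = "2024-01-01T00:00:00Z"; // persisted timestamp of the last run
        String filter = URLEncoder.encode(
                "receivedDateTime ge " + lastCheck, StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder(URI.create(
                        "https://graph.microsoft.com/v1.0/me/messages?$filter=" + filter))
                .header("Authorization", "Bearer " + ACCESS_TOKEN)
                .build();
        String body = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString()).body();
        // ... diff the returned messages against what the webhook already delivered ...
    }
}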
I am writing a client application that fetches emails from an IMAP server and then stores them in a database. The problem is that once I have checked the mail, the next time I only want to download the mail that has arrived since. So if I checked the server for mail two hours ago, I only want to get the mail that has arrived in the last two hours.
I could use SEARCH with SINCE <date>, but there's no support for a time component, and the date could easily be spoofed.
I also tried the RECENT flag, but that doesn't seem to work with Gmail (in Ruby it returns nil every time).
You want to use the UniqueId (UID) for the messages. This is specifically why it was created.
You will want to keep track of the last UID requested, and then, to request all new messages you use the message set "[UID]:*", where [UID] is the actual UID value.
For example, let's say the last message fetched had a unique id of "123456". You would fetch
123456:*
Then, discard the first returned message.
UIDs are supposed to be stable across sessions: they never change, and they always increase in value. The way to verify this is to check the UIDVALIDITY value when you select the folder. If the UIDVALIDITY number hasn't changed, then the UIDs should still be valid across sessions.
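For illustration, here is roughly what that scheme looks like with JavaMail's UIDFolder interface; the host, credentials, and the persisted lastUid/lastValidity values are assumptions:

import java.util.Properties;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;
import javax.mail.UIDFolder;

public class IncrementalFetch {
    public static void main(String[] args) throws Exception {
        long lastUid = 123456L;  // persisted from the previous run (assumed)
        long lastValidity = 42L; // persisted UIDVALIDITY value (assumed)

        Store store = Session.getInstance(new Properties()).getStore("imaps");
        store.connect("imap.example.com", "user", "password"); // assumed credentials
        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);

        UIDFolder uidFolder = (UIDFolder) inbox;
        if (uidFolder.getUIDValidity() != lastValidity) {
            // UIDs were invalidated; fall back to a full resync (e.g. by InternalDate)
            return;
        }
        // Equivalent of the message set "123456:*"
        Message[] messages = uidFolder.getMessagesByUID(lastUid, UIDFolder.LASTUID);
        for (Message message : messages) {
            if (uidFolder.getUID(message) <= lastUid) {
                continue; // discard the first returned message; we already have it
            }
            // ... store the message, then persist its UID as the new lastUid ...
        }
        inbox.close(false);
        store.close();
    }
}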
Here are the relevant parts from the RFC:
2.3.1.1. Unique Identifier (UID) Message Attribute
A 32-bit value assigned to each message, which when used with the
unique identifier validity value (see below) forms a 64-bit value
that MUST NOT refer to any other message in the mailbox or any
subsequent mailbox with the same name forever. Unique identifiers
are assigned in a strictly ascending fashion in the mailbox; as each
message is added to the mailbox it is assigned a higher UID than the
message(s) which were added previously. Unlike message sequence
numbers, unique identifiers are not necessarily contiguous.
The unique identifier of a message MUST NOT change during the
session, and SHOULD NOT change between sessions. Any change of
unique identifiers between sessions MUST be detectable using the
UIDVALIDITY mechanism discussed below. Persistent unique identifiers
are required for a client to resynchronize its state from a previous
session with the server (e.g., disconnected or offline access
clients); this is discussed further in [IMAP-DISC].
Note: The next unique identifier value is intended to
provide a means for a client to determine whether any
messages have been delivered to the mailbox since the
previous time it checked this value.
Here is the link with more info:
http://www.faqs.org/rfcs/rfc3501.html
What I would do is also keep track of the InternalDate of the messages you download. That way, if you ever lose UID sync, you can at least iterate through the messages and find the last one you downloaded, based on the InternalDate of the message.
There's an IMAP flag called "Seen". Most clients mark a message as seen when the user views it, so you'd want to iterate over the messages on the server which do not have that flag set.
Here's a code snippet which should give you the right idea. The operative bit of course is
imap.search(["NOT", "SEEN"]).each do |message_id|
  # fetch and process each unseen message here
end
If you are able to filter incoming mail into a specific IMAP folder on the server side, your app can read new messages in that folder and then move them into the standard INBOX folder after it's done.
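For illustration, a rough JavaMail sketch of that move step, assuming a server-side rule files new mail into a folder named "Incoming" (the host, credentials, and folder name are assumptions):

import java.util.Properties;
import javax.mail.Flags;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;

public class FolderDrain {
    public static void main(String[] args) throws Exception {
        Store store = Session.getInstance(new Properties()).getStore("imaps");
        store.connect("imap.example.com", "user", "password"); // assumed credentials
        Folder incoming = store.getFolder("Incoming"); // assumed filter target
        Folder inbox = store.getFolder("INBOX");
        incoming.open(Folder.READ_WRITE);

        Message[] messages = incoming.getMessages();
        // ... process/store each message here ...
        incoming.copyMessages(messages, inbox); // "move" = copy, then delete originals
        incoming.setFlags(messages, new Flags(Flags.Flag.DELETED), true);
        incoming.close(true); // expunge the deleted originals
        store.close();
    }
}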