How to know that all attachments in a Corda transaction have been received

When a sender node runs a transaction with a receiver node, is there a way for the receiver to know that all attachments are stored in the receiver's vault, for example from within the responder flow?

Corda takes care of storing all the attachments for you, so you shouldn't need to worry about that.
That being said, you could add some logic to check the node's attachment storage for the attachments. But that is going to require sending some information around so that counterparties know which attachments were meant to be stored.
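As a rough illustration of that pattern, here is a minimal Python sketch (hypothetical: Corda flows are written in Kotlin or Java, and the storage class below merely stands in for the node's real attachment storage). The sender shares the hashes of the attachments it used, and the receiver checks each one against its local store.

    class InMemoryAttachmentStore:
        """Hypothetical stand-in for the node's attachment storage."""

        def __init__(self, stored_hashes):
            self._stored = set(stored_hashes)

        def has_attachment(self, attachment_hash):
            # Corda's real attachment storage exposes an equivalent lookup by hash.
            return attachment_hash in self._stored


    def missing_attachments(expected_hashes, store):
        """Return the expected attachment hashes that are not present locally."""
        return [h for h in expected_hashes if not store.has_attachment(h)]


    # The sender would transmit expected_hashes alongside the transaction;
    # the receiver fails (or re-requests) if anything is missing.
    store = InMemoryAttachmentStore({"ABC123", "DEF456"})
    print(missing_attachments(["ABC123", "DEF456", "FEE789"], store))  # ['FEE789']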

Related

Overview of how to track mail move change notifications in the MS Graph API

Our platform replicates email functionality for users, i.e. viewing all email folders and contents, replying, creating new messages, drafting, moving, etc.
We have successfully subscribed users to all change notifications (create, update, and delete) for the whole mailbox. However, we are not sure how to track folder move operations, as we can't find an example in the documentation and our current implementation is not working reliably.
The issue we have is that after receiving the various change notifications, when we request the updated/deleted message, the value returned is sometimes the updated value rather than the original one, so if the folder has changed we do not know which message to delete. This issue is highlighted in the documentation here (halfway down): https://learn.microsoft.com/en-us/graph/outlook-change-notifications-overview?tabs=http#example-3-create-a-subscription-to-get-change-notifications-with-resource-data-for-a-message-based-on-a-condition-preview
We tried using immutable IDs, but the final webhook received was sometimes the delete webhook for the message ID. This is supposed to be for the original email that was moved; however, because the message ID is the same (and the parent folder ID value is not reliable), we may end up deleting the wrong email.
With immutable IDs turned off, we did not receive any delete notifications (only creates and updates), so we ended up with duplicate emails because the original was never deleted.
Is someone able to advise the correct procedure to track these events?
Thanks
It turns out the only reliable way to do this is to run a delta query - https://learn.microsoft.com/en-us/graph/delta-query-messages - whenever a change notification is received for a folder.
So, when authorisation is provided for access to a user's mailbox, you must create a subscription for each folder; then, whenever a change notification is received for that folder/subscription, run the delta query for that folder.
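For reference, here is a minimal Python sketch of that per-folder delta query, using the requests library against the endpoint documented at the link above. The access token and folder ID are assumed to have been obtained elsewhere, and the returned delta link must be persisted between rounds.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"

    def sync_folder(folder_id, access_token, delta_link=None):
        """Run one delta-query round for a mail folder.

        Returns (changes, new_delta_link); `changes` contains added/updated
        messages plus entries marked "@removed" for deletions.
        """
        url = delta_link or f"{GRAPH}/me/mailFolders/{folder_id}/messages/delta"
        headers = {"Authorization": f"Bearer {access_token}"}
        changes = []
        while True:
            resp = requests.get(url, headers=headers)
            resp.raise_for_status()
            page = resp.json()
            changes.extend(page.get("value", []))
            if "@odata.nextLink" in page:
                url = page["@odata.nextLink"]             # more pages in this round
            else:
                return changes, page["@odata.deltaLink"]  # persist for next round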
I believe that MS are beta testing providing the change information in the webhook itself, which in my opinion would be a great improvement in efficiency, in terms of both implementation and operation.
Hope this helps someone in the future!

Continuously receiving keys from a bucket in Riak

There is a function stream_list_keys/2 (from riak-erlang-client) that lets you receive all keys in a given bucket.
But this function completes after sending all the existing keys.
Is it possible to keep getting keys, that is, to receive keys as they appear in Riak?
The short answer is no. The ability to stream keys only covers keys that already exist. What you are asking for is the ability to "subscribe" to events, which the function isn't designed for.
As Joe suggested above, commit hooks are probably the best way to do what you want in Riak. Based on the way you asked the question, however, I would recommend that you use Post-Commit hooks (http://docs.basho.com/riak/latest/dev/advanced/commit-hooks/) instead of Pre-Commit hooks. As Joe mentioned, errors in Pre-Commit hooks can cause keys not to be persisted, and I don't believe that is the behavior you are looking for.
This could be accomplished with a custom commit hook that extracts the desired data from the object being stored and sends that information to a listening process. Note, though, that if a pre-commit hook throws an exception or returns anything other than a Riak object, the put operation will be aborted and the object will not be stored.
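The hook itself is an Erlang function deployed to every node in the cluster; it is then registered as a bucket property. As a hedged sketch, the registration can be done over Riak's HTTP bucket-properties endpoint, here from Python (the module/function names my_hooks:notify are hypothetical):

    import requests

    RIAK = "http://127.0.0.1:8098"  # any node in your cluster

    # Register a post-commit hook on the bucket: after every successful put,
    # Riak calls my_hooks:notify/1 with the stored object, and that Erlang
    # function can forward the key/value to a listening process.
    props = {"props": {"postcommit": [{"mod": "my_hooks", "fun": "notify"}]}}
    resp = requests.put(
        f"{RIAK}/buckets/mybucket/props",
        json=props,
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()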

MVC Azure storage: auto-delete storage after a certain time

I'm developing an Azure website where users can upload blobs and metadata. I want the uploaded content to be deleted after some time.
The only way I can think of is going for a cloud app instead of a website, with a worker role that checks every hour or so whether an uploaded file has expired and then deletes it. However, I'm going for a simple website here without worker roles.
I have a function that checks whether an uploaded item should be deleted, and if the user does something on the page I can easily call it. But if the user isn't doing anything when the time runs out, the function is never called and the storage is never deleted. How would you solve this?
Thanks
Too broad to give one right answer, as you can solve this in many ways. That said, because you're using Web Sites, I do suggest you look at WebJobs and see if they might be the right tool for you (as they give you the ability to run periodic jobs without the bulk of extra VMs in a web/worker configuration). You'll still need a way to manage your metadata to know what to delete.
Regarding other Azure-specific built-in mechanisms, you can also consider queuing delete messages, with an invisibility time equal to the time the content is to be available. After that time expires, the queue message becomes visible, and any queue consumer would then see the message and be able to act on it. This can be your Web Job (which has SDK support for queues) or really any other mechanism you build.
Again, a very broad question with no single right answer, so I'm just pointing out the Azure-specific mechanisms that could help solve this particular problem.
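As a hedged sketch of the queue idea, using the azure-storage-queue v12 Python SDK (which postdates the original answer; the connection string, queue name, and blob path are placeholders):

    from datetime import timedelta
    from azure.storage.queue import QueueClient

    queue = QueueClient.from_connection_string(
        conn_str="<connection-string>", queue_name="pending-deletes"
    )
    lifetime = timedelta(days=3)  # how long the content should stay available
                                  # (Azure caps the visibility timeout at 7 days)

    # The message stays invisible for the content's lifetime; once it becomes
    # visible, a consumer (e.g. your WebJob) reads it and deletes the named blob.
    queue.send_message(
        content="uploads/photo-123.jpg",
        visibility_timeout=int(lifetime.total_seconds()),
        time_to_live=-1,  # never expire, so the message survives until consumed
    )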
Like David said in his answer, there can be many solutions to your problem. One solution could be to rely on the blob itself. In this approach you periodically fetch the list of blobs in the blob container and decide whether each blob should be removed. The periodic fetching could be done through an Azure WebJob (if the application is deployed as a website) or through an Azure Worker Role. The Worker Role approach is independent of how your main application is deployed; it could be deployed as a cloud service or as a website.
With that, there are two possible approaches you can take:
Rely on Blob's Last Modified Date: Whenever a blob is updated, its Last Modified property gets updated. You can use that to identify if the blob should be deleted or not. This approach would work best if the uploaded blob is never modified.
Rely on Blob's custom metadata: Whenever a blob is uploaded, you could set the upload date/time in blob's metadata. When you fetch the list of blobs, you could compare the upload date/time metadata value with the current date/time and decide if the blob should be deleted or not.
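A hedged sketch of the first approach with the azure-storage-blob v12 Python SDK (again newer than the original answer; the connection string, container name, and 24-hour lifetime are placeholders):

    from datetime import datetime, timedelta, timezone
    from azure.storage.blob import ContainerClient

    container = ContainerClient.from_connection_string(
        conn_str="<connection-string>", container_name="uploads"
    )
    cutoff = datetime.now(timezone.utc) - timedelta(hours=24)

    # Delete every blob whose Last-Modified timestamp is older than the cutoff.
    for blob in container.list_blobs():
        if blob.last_modified < cutoff:  # last_modified is timezone-aware UTC
            container.delete_blob(blob.name)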
Another approach might be to use the container name as the "expiry date". This might make deletion easier, as you could then just remove expired containers.
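For example (a hypothetical sketch, assuming containers are named after their expiry date):

    from datetime import datetime, timezone
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<connection-string>")
    today = datetime.now(timezone.utc).strftime("%Y%m%d")

    # One container per expiry date, e.g. "expires-20240131"; lexicographic
    # comparison works because the dates are zero-padded YYYYMMDD.
    for container in service.list_containers(name_starts_with="expires-"):
        if container.name.removeprefix("expires-") < today:
            service.delete_container(container.name)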

How to mark an IMAP message as fetched?

I am working on a PHP project that should fetch emails from an IMAP server and store them in a local database.
The same IMAP server can be used concurrently by other email clients, such as Outlook.
The problem is how to know which messages I have already fetched and which I haven't. I am thinking of searching by datetime, but is that reliable? (I would have a cron job that accesses the user's mailbox every minute and checks for emails, but I'm not sure whether datetime could cause issues, for example when a short message and a message with a big attachment arrive at almost the same time.)
I was thinking about system flags, but the user can modify them via their email client, so I can't rely on them, and I don't want to modify them and confuse the client.
Next I thought about custom flags, but not all IMAP servers support them (and our software needs to be as flexible as possible).
Any good ideas on how I could solve this problem?
Keep track of the highest UID you have synced in the folder you are syncing, and verify that the UIDVALIDITY value of the folder matches what you stored.
Unique identifiers are assigned in a strictly ascending fashion in the mailbox; as each message is added to the mailbox it is assigned a higher UID than the message(s) which were added previously. Unlike message sequence numbers, unique identifiers are not necessarily contiguous.
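A minimal sketch of that sync loop with Python's standard imaplib (host, credentials, and the persisted state are placeholders; in a real setup you would store uidvalidity and last_seen_uid per folder in your database):

    import imaplib

    # State persisted between cron runs (placeholders):
    stored_uidvalidity = 123456789
    last_seen_uid = 4200

    imap = imaplib.IMAP4_SSL("imap.example.com")
    imap.login("user@example.com", "password")
    imap.select("INBOX", readonly=True)

    # If UIDVALIDITY changed, every cached UID is void: resync from scratch.
    uidvalidity = int(imap.response("UIDVALIDITY")[1][0])
    if uidvalidity != stored_uidvalidity:
        last_seen_uid = 0
        stored_uidvalidity = uidvalidity

    # Ask only for UIDs above the highest one already synced. Note that the
    # range N:* always matches at least the last message in the mailbox,
    # so the result must be filtered again.
    typ, data = imap.uid("SEARCH", None, f"UID {last_seen_uid + 1}:*")
    for uid in sorted(int(u) for u in data[0].split()):
        if uid <= last_seen_uid:
            continue
        typ, msg_data = imap.uid("FETCH", str(uid), "(RFC822)")
        # ... store msg_data[0][1] (the raw message bytes) in the database ...
        last_seen_uid = uid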

Ideas for storing e-mail messages in a Delphi client-server application

There are many suggestions here and there for storing e-mail messages. Anyway, what I am doing is writing an Outlook add-in to send emails from the inbox/sent folders directly to my application.
So only what is really interesting is saved. And I decide where to save it.
Imagine this case:
I receive an email from a customer. It's up to me to decide whether I should save it on the customer or on order 24 that that customer placed. This is why I am writing the add-in, rather than storing emails automatically, which just becomes noise after some time.
This said, how should the emails be stored? For the emails that I receive or send through Outlook, the idea could be to save the whole file (the .eml) in a blob field, and maybe also save other info (like the subject) in a separate text field. But the problem comes when I write an email from my application.
In that case I am not generating an .eml file: I either send data through MAPI to Outlook to compose an email that I will send with Outlook (so I cannot save the .eml), or I send it directly with Indy. In either case I don't have an .eml file...
One idea could be that all the emails I auto-compose carry a special flag that the add-in recognises, so that when I send the mail it is stored back to the DB. That way I can also save the .eml of the mails I send from my application.
Could you comment?
First you have to decide what information you want to store. The rest is just a means to get there.
One option is to store .msg files (you have posted related questions suggesting you are no stranger to MAPI) instead of .eml files. Using MAPI you can store the IMessage you created as a .msg file (with a bit of pain). However, not all MAPI props will be set until the message is actually sent, so you might need to hook Outlook's Sent Items folder for that.
A much more straightforward solution would be to generate the .eml (or whatever text-based format you prefer) directly from the source. When sending, take your source data, generate the correct MAPI calls to Outlook AND generate the .eml, and store it directly in your database. When receiving, have Outlook save to .eml directly.
Personally, I wouldn't use .eml at all for storage. I would parse the data I'm particularly interested in (like to/from addresses) into separate columns. In the end, you are probably using your DB for data retrieval. Databases tend to do a better job when you don't store everything in a single memo/blob field. :)
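To illustrate that last point (the question is about Delphi, so this is just a language-neutral sketch of the schema idea, here using Python's standard library): parse the fields you query on into columns and keep the raw source only as a fallback.

    import email
    import email.utils
    import sqlite3

    db = sqlite3.connect("mail.db")
    db.execute("""CREATE TABLE IF NOT EXISTS messages (
        id        INTEGER PRIMARY KEY,
        from_addr TEXT,
        to_addr   TEXT,
        subject   TEXT,
        sent_at   TEXT,
        raw       BLOB)""")  # raw source kept only as a fallback

    def store_message(raw_bytes):
        """Parse the interesting headers into columns; keep the raw bytes too."""
        msg = email.message_from_bytes(raw_bytes)
        db.execute(
            "INSERT INTO messages (from_addr, to_addr, subject, sent_at, raw) "
            "VALUES (?, ?, ?, ?, ?)",
            (
                email.utils.parseaddr(msg["From"])[1],  # bare address, easy to query
                email.utils.parseaddr(msg["To"])[1],
                msg["Subject"],
                msg["Date"],
                raw_bytes,
            ),
        )
        db.commit()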
