IMAP: How to determine maximum allowed folder name and path length?

I'm developing a specialized IMAP client that uses an email server to store messages in automatically generated folders organized in a hierarchy. Currently I'm using Dovecot as the email server and sometimes run into its limits. As per the documentation, there is a 255-byte limit for folder names and another limit for the whole folder path that depends on the storage mechanism used (254 vs. 4096 bytes). Other OSes or email servers might have different limits.
How do I query the email server about those limits so I can adhere to them? Does the IMAP protocol support such a query? Or some IMAP protocol extension? Or do I have to "probe" the server's limits by creating a dummy folder hierarchy until it fails?
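If it matters, the fallback probe I have in mind would look roughly like this; it is only a sketch using Python's imaplib, the host and credentials are placeholders, and it simply grows a folder name until CREATE is refused:

    import imaplib

    HOST, USER, PASSWORD = "imap.example.com", "user", "secret"  # placeholders

    def max_name_length(conn, step=16, limit=4096):
        # Grow a probe folder name until CREATE fails, cleaning up as we go.
        longest = 0
        for size in range(step, limit + 1, step):
            name = "probe-" + "x" * size
            try:
                status, _ = conn.create(name)
            except imaplib.IMAP4.error:
                break
            if status != "OK":          # server answered with a tagged NO
                break
            conn.delete(name)
            longest = len(name)
        return longest

    conn = imaplib.IMAP4_SSL(HOST)
    conn.login(USER, PASSWORD)
    print("longest folder name the server accepted:", max_name_length(conn))
    conn.logout()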

Related

Synchronize blob files from cloud to IoT Edge Blob (local)

Please consider the following hypothetical setup for an IoT Edge device implementation. We want to know if there is an automated mechanism for it using the Azure IoT infrastructure.
An admin application will write several JSON configuration files associated with a specific device. Each device has a different config, and the config files are large (1 MB), so using twins is not a good solution.
We want those files stored in the cloud to be sent automatically to the target device, so that it can store them in its local blob storage. The local files shall always reflect what is in the cloud, almost like OneDrive.
Is there any facility for this in Azure/Edge? How can we isolate the information for each device without exposing the other configurations stored in the cloud blob?
Upload the blob to Azure Storage (or anywhere, really), and set a properties.desired property containing the link plus SAS token (or, if you want to keep the URL always the same, a hash of the contents). Your edge module will get a callback (during startup and during runtime) that the property value has changed, and can connect to the cloud to download the configuration. No need to use the LocalBlobStorage module; the config can be cached in the edge module's /tmp directory.
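A rough sketch of the module side, assuming the Python azure-iot-device SDK; the desired-property name "configUrl" and the cache path are made up for illustration:

    import urllib.request
    from azure.iot.device import IoTHubModuleClient

    CACHE_PATH = "/tmp/device-config.json"   # cached locally, no blob module needed

    def apply_config(desired):
        url = desired.get("configUrl")       # blob URL including the SAS token
        if url:
            urllib.request.urlretrieve(url, CACHE_PATH)

    client = IoTHubModuleClient.create_from_edge_environment()
    client.connect()

    # Pick up whatever was set while the module was offline...
    apply_config(client.get_twin().get("desired", {}))

    # ...and react to later changes pushed from the cloud.
    client.on_twin_desired_properties_patch_received = apply_config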

Azure Blob Storage authorization with SAS

I have a web application (ASP.NET MVC) which uses Azure Blob Storage for storing documents and images. Each user has specific access rights to the blobs, and this is stored in the web application's database.
Currently I have a quick temporary solution which uses the web application as a middle layer: it runs the authorization check, and if the client has read access to the blob, the blob is first retrieved from Azure and then delivered to the client. This is of course not the optimal way of doing it, for many reasons.
I have started to rebuild this part using SAS (Shared Access Signatures), but can't find a good source for setting up a system that will scale well as the number of users and files grows. I am expecting around 100 users and around 100,000 blobs.
As I see it I have two options.
1) All files have one signature, stored in the web application's database, which is used for all users who have access to the file. This would be the easy way to do it, but if a user no longer has access to the file, they will still be able to reach it if they kept the link from an earlier access.
2) All files have a specific signature for each user who has access to the file. This makes it easy to revoke access to files, but the number of signatures will be massive; will this have any side effects?
Are there any more options?
Any thoughts on this are greatly appreciated!
Rather than having a SAS for each user, it would be better to group the files by role and map the users to roles, which scales easily regardless of the number of users.
Also, giving users direct access to the blobs is not recommended, since you want to distribute your blob content through your application. So provide access through the application, in the context of the user's role.
See the article below for generating a SAS that expires after two minutes, so that users holding the link do not have access to the image for long:
http://www.dotnetcurry.com/windows-azure/901/protect-azure-blob-storage-shared-access-signature
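The article uses .NET; the same idea in Python with the azure-storage-blob v12 SDK looks roughly like this (account, container, and blob names are placeholders):

    from datetime import datetime, timedelta
    from azure.storage.blob import BlobSasPermissions, generate_blob_sas

    sas = generate_blob_sas(
        account_name="mystorageaccount",
        container_name="images",
        blob_name="photo.jpg",
        account_key="<account-key>",
        permission=BlobSasPermissions(read=True),
        expiry=datetime.utcnow() + timedelta(minutes=2),  # link stops working after two minutes
    )
    url = "https://mystorageaccount.blob.core.windows.net/images/photo.jpg?" + sas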
Hope this helps. :)

How to get public URL of Azure Files file?

I am using Azure Files to store files for my Web Application, which I have previously mentioned here.
I am currently processing the files/sub-directories within a directory, and outputting a navigation table so the user can navigate into sub-directories, and in the end, obtain said files. I'm doing this by using the methods described in the 'Access the file share programmatically' section of this Azure Documentation article.
My question is very simple: how can I, from my Web App, which is running in Azure App Service, provide a public URL where the user can download/view the file?
Please note, I would prefer that the file is not automatically downloaded, since most of the files will be PDFs and therefore previewable in the browser.
One possible solution would be to create a Shared Access Signature (SAS) on the file with at least Read permission and use that SAS URL. Depending on the file's content type, the file's contents will either be displayed inline in the browser or the user will be prompted to download the file. If you want to force the download, you can always override the Content-Disposition response header in the SAS.
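For illustration only, a Python sketch with the azure-storage-file-share v12 SDK (all names are placeholders, and the content_disposition keyword is an assumption; check it against the SDK version you use):

    from datetime import datetime, timedelta
    from azure.storage.fileshare import FileSasPermissions, generate_file_sas

    sas = generate_file_sas(
        account_name="mystorageaccount",
        share_name="documents",
        file_path=["reports", "summary.pdf"],          # path segments within the share
        account_key="<account-key>",
        permission=FileSasPermissions(read=True),
        expiry=datetime.utcnow() + timedelta(hours=1),
        content_disposition="inline",                  # assumption: hint the browser to preview
    )
    url = ("https://mystorageaccount.file.core.windows.net/"
           "documents/reports/summary.pdf?" + sas)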
Using a Shared Access Signature (SAS) could be a solution, but it is probably overkill in the given scenario.
In the provided scenario, Blob Storage with public access is the most practical way to store the files. From the documentation:
... You can specify that a container and its blobs, or a specific blob, are available for public access. When you indicate that a container or blob is public, anyone can read it anonymously; no authentication is required. Public containers and blobs are useful for exposing resources such as media and documents that are hosted on websites. To decrease network latency for a global audience, you can cache blob data used by websites with the Azure CDN.
https://azure.microsoft.com/en-us/documentation/articles/storage-introduction/
To set container permissions from the Azure Portal, follow these steps:
1) Navigate to the dashboard for your storage account.
2) Select the container name from the list. Clicking the name exposes the blobs in the chosen container.
3) Select Access policy from the toolbar.
4) In the Access type field, select "Blob".
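The same setting can also be applied programmatically; a minimal sketch with the Python azure-storage-blob v12 SDK (connection string and container name are placeholders):

    from azure.storage.blob import ContainerClient

    container = ContainerClient.from_connection_string(
        "<storage-connection-string>", container_name="documents")

    # "blob" access: individual blobs are readable anonymously,
    # but anonymous clients cannot list the container's contents.
    container.set_container_access_policy(signed_identifiers={}, public_access="blob")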

Access MP3 files on server from iOS

I'm building a streaming app similar to Pandora. However, right now I'm serving all my files over HTTP and accessing them with URLs. Is there an alternative to this, since all the files are in the public html folder? For example, how do apps like Pandora or Spotify pull files off their servers? I'm new to web servers and not sure where to ask this question. I have a CentOS server on VPS hosting with Apache, MySQL, HTTP, and FTP.
You just need to provide the content as a stream rather than a file download. The source data to send as a stream can be stored as binary data in a BLOB column in a database or as a regular file on a non-public part of the file system. It really does not matter which one you use.
Storing them in the database gives your app slightly easier access and makes the app more portable, since it is not restricted by file-system-level permissions.
The fact that you currently have the files in a public folder is not really that critical an issue, since you are making them available for download anyway. You would just need to make sure you have an authentication requirement if you want to restrict who can access them.
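The stack in the question is Apache/PHP on CentOS, but the pattern looks the same in any language. A small Flask sketch of the idea (paths and the auth check are made up):

    from pathlib import Path
    from flask import Flask, abort, send_file, session

    MEDIA_ROOT = Path("/srv/media")          # outside the public html folder
    app = Flask(__name__)
    app.secret_key = "change-me"             # needed for the session-based auth check

    @app.route("/stream/<track_id>")
    def stream(track_id):
        if "user_id" not in session:         # hypothetical authentication requirement
            abort(401)
        path = MEDIA_ROOT / (track_id + ".mp3")
        if not path.is_file():
            abort(404)
        # conditional=True enables HTTP Range requests, so clients can seek
        return send_file(path, mimetype="audio/mpeg", conditional=True)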

How to upload files from an FTP location into MarkLogic

I need to upload files from an FTP location into MarkLogic. Please guide me on this.
MarkLogic doesn't allow accessing external FTP locations from XQuery, like it allows HTTP calls. Nor does it provide FTP servers, like it provides WebDAV servers.
You can however easily put a mediator in between that accesses the FTP instead, and use other means to upload the document into MarkLogic. The latter can be done through a WebDAV App Server that you can create using the Admin interface, through the built-in REST api in MarkLogic 6 ( http://docs.marklogic.com/REST ), or through custom code like Corona ( http://developer.marklogic.com/code/corona ).
If you write the mediator in Java, you can also use the Java API ( see Java API tab at http://docs.marklogic.com/ ).
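If the mediator doesn't have to be Java, the same flow can be scripted; a rough Python sketch (hosts, ports, and credentials are placeholders, and it assumes a MarkLogic REST API instance listening on port 8011):

    import io
    from ftplib import FTP

    import requests
    from requests.auth import HTTPDigestAuth

    ftp = FTP("ftp.example.com")
    ftp.login("ftpuser", "ftppass")

    for name in ftp.nlst():                        # files in the FTP directory
        buf = io.BytesIO()
        ftp.retrbinary("RETR " + name, buf.write)  # pull the file down
        # push it into MarkLogic through the built-in REST API (/v1/documents)
        resp = requests.put(
            "http://marklogic.example.com:8011/v1/documents",
            params={"uri": "/ftp-import/" + name},
            data=buf.getvalue(),
            auth=HTTPDigestAuth("rest-writer", "password"),
        )
        resp.raise_for_status()

    ftp.quit()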
HTH!
We have an app that needs documents from a shared folder that we run an ETL on to get into MarkLogic. You can do this in a number of ways. If you are able to, I'd mount the drive on the MarkLogic box and then read from there. If that doesn't work, see if you can make those files available via an HTTP GET request. If that doesn't work either, you might want to build a web service.
I personally would avoid WebDav unless you absolutely need it.
Is this a one-off, batch, or continuous job?
If one-off or batch, then I would suggest using a script to FTP the files to a local disk and then using mlcp, RecordLoader, or xmlsh to push them to MarkLogic.
If this is a continuous job, then a custom Java app is probably the way to go.
Do realize that FTP is a horribly fragile protocol: it can fail in so many ways, needs special port openings, etc. It was designed in the '80s, before firewalls, NAT and the like.
Getting FTP to work reliably, regardless of MarkLogic, is a black art in itself.
If it's possible to use another protocol than FTP, that would be ideal; say scp, rsync, or HTTP.
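For the one-off/batch route, a hedged Python sketch of such a script (hosts, credentials, and paths are placeholders, and the mlcp options shown should be checked against your mlcp version):

    import subprocess
    from ftplib import FTP
    from pathlib import Path

    LOCAL_DIR = Path("/tmp/ftp-import")
    LOCAL_DIR.mkdir(parents=True, exist_ok=True)

    # Step 1: copy everything from the FTP location to local disk.
    ftp = FTP("ftp.example.com")
    ftp.login("ftpuser", "ftppass")
    for name in ftp.nlst():
        with open(LOCAL_DIR / name, "wb") as f:
            ftp.retrbinary("RETR " + name, f.write)
    ftp.quit()

    # Step 2: push the local directory into MarkLogic with mlcp.
    subprocess.run([
        "mlcp.sh", "import",
        "-host", "marklogic.example.com", "-port", "8011",
        "-username", "admin", "-password", "admin",
        "-input_file_path", str(LOCAL_DIR),
    ], check=True)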
