IMAP Server Facade - how to make one?

I have implemented a custom email server and web client. The server is just a REST API (similar to Google's Gmail API) that uses a third party (SendGrid) for sending and receiving. The emails are stored in a database. The web client just talks to that REST API for sending and receiving.
The problem with this approach is that it doesn't implement IMAP anywhere, which makes it impossible for standard clients (Outlook, iPhone, etc.) to connect to and use our email API. This limits customers to using only our client for email.
What I need is some sort of IMAP Server "facade" that will manage the connections to clients and make calls to my REST API for actually handling the requests (get email, send email, etc.).
How can an IMAP facade be implemented? Is there maybe a way to take an existing mail server, gut it, and point all its "events" at calls to my API?

tl;dr: write your gateway in Perl; use Net::IMAP::Server; override Net::IMAP::Server::Mailbox; and use one of the many Perl REST clients to talk to your server.
Your best bet for doing this quickly, while maintaining a reasonable amount of code security, is with Perl. You'll need two Perl modules. The first is Net::IMAP::Server, and here is the Github repository for that module. This is a standards-compliant RFC 3501 server that was purposely designed to have a configurable mail store. You will override the default Net::IMAP::Server::Mailbox implementation with your own code that talks to your custom email backend.
For your second module, choose your favorite Perl module(s) to use to speak to your REST server. Your choice depends on how much fine grained control you want to have over the construction and delivery of the REST messages.
Fortunately, here you have tons of choices. One possibility is Eixo::REST, which has a Github repository here. Eixo::REST seems to deal well with asynchronous vs. synchronous REST API calls, but it doesn't provide a lot of control over X509 key management. Depending on how googley your API is, there's also the REST::Google module. Interestingly, this family also has a REST::Google::Apps::EmailSettings module, specifically for setting Gmail-specific funkiness like labels and languages. Lastly, the REST::Consumer module seems to encapsulate a lot of https-specific things like timeout and authentication as parameters to Perl object instantiation.
If you use these existing frameworks, then about 90% of the necessary code should already be done for you.
Don't do this by hacking Dovecot or any other mail server written in C or C++. If you hack together a mail server quickly using a compiled language, your server will sooner or later experience all the joy of buffer overflows and stack smashing and everything else that the Internet does to fuck over mail servers. Get it working safely first, then optimize later.
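Whichever language you settle on, the core of the facade is a thin adapter that maps mailbox operations onto calls to your REST API; the IMAP-facing layer (Net::IMAP::Server's Mailbox class in the approach above, or anything equivalent) then calls that adapter instead of touching files on disk. Here's a rough sketch of the adapter layer, shown in Python with the requests library purely for illustration - the endpoint paths, field names, and auth scheme are placeholders for whatever your API actually exposes:

    # Hypothetical adapter between an IMAP mailbox implementation and a REST mail API.
    # Endpoint paths and JSON field names are assumptions; substitute your real API.
    import requests

    class RestMailStore:
        def __init__(self, base_url, api_token):
            self.base_url = base_url.rstrip("/")
            self.session = requests.Session()
            self.session.headers["Authorization"] = f"Bearer {api_token}"

        def list_folders(self):
            # IMAP LIST -> GET /folders
            resp = self.session.get(f"{self.base_url}/folders")
            resp.raise_for_status()
            return [f["name"] for f in resp.json()]

        def list_messages(self, folder):
            # IMAP SELECT / FETCH of headers -> GET /folders/<folder>/messages
            resp = self.session.get(f"{self.base_url}/folders/{folder}/messages")
            resp.raise_for_status()
            return resp.json()      # e.g. [{"uid": 1, "flags": ["\\Seen"], ...}, ...]

        def fetch_message(self, folder, uid):
            # IMAP FETCH BODY[] -> GET /folders/<folder>/messages/<uid>/raw
            resp = self.session.get(f"{self.base_url}/folders/{folder}/messages/{uid}/raw")
            resp.raise_for_status()
            return resp.content     # full RFC 822 message bytes

        def store_flags(self, folder, uid, flags):
            # IMAP STORE -> PATCH /folders/<folder>/messages/<uid>
            resp = self.session.patch(
                f"{self.base_url}/folders/{folder}/messages/{uid}",
                json={"flags": flags},
            )
            resp.raise_for_status()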

(This is basically my comment again, but elaborated quite a bit more.)
Some IMAP servers, most notably Dovecot, are structured such that the file access is in a separate module with a defined interface. Dovecot isn't the only one, but it's by far the most popular and its backend interface is known to be appropriate, so I'd take that absent specific concerns.
There already exist non-file backends such as imapc, which proves it can be done. When a client opens a mailbox backed by imapc, Dovecot parses the IMAP commands and calls the message access functions in imapc; imapc issues new IMAP commands to the remote server, parses the responses, and returns C structs to Dovecot; Dovecot then fashions new IMAP responses and returns them to the client.
I suggest that you take the Dovecot source, look at src/lib-storage/index/imapc and the other backends in that directory, and implement one that speaks your REST API as a client.

Since you're familiar with .NET, I would suggest hacking either of the following implementations of IMAPv4 servers to your liking:
LumiSoft Mail Server - a very old project indeed (let's call it "mature", huh?). Don't be too turned off by the decade-old website and the lack of a GitHub link - the source is provided under "other downloads".
McNNTP - also an older project, with a major focus on NNTP (as the name says), but very close to what you're trying to achieve in terms of the IMAP component. Take a look; you'll probably find it a good starting point.

Related

How do we handle service accounts after Exchange Basic Auth is retired?

https://developer.microsoft.com/en-us/office/blogs/end-of-support-for-basic-authentication-access-to-exchange-online-apis-for-office-365-customers/
Our organization is finding this announcement somewhat problematic! We use an IMAP library extensively to read various service based email accounts in o365. Any guidance on how to address this would be greatly appreciated.
Note that we have many console apps written in .NET (4.8) that run on a server and are fired off by scheduled tasks. I understand we'd need to somehow register our "application" (I'm assuming that can be a generic one for our company), but we cannot involve any "user" interaction. These are utility apps. Glancing at the existing sample code for OAuth, it all seems to involve popping up a browser window so someone can interactively grant permission, which is exactly what we need to avoid.
We've used IMAP all this time to simply read and parse service based email accounts. I'm not sure I understand why IMAP over a secure connection is "less secure" than a more complex solution. Why take the option away?
On the other hand, the Microsoft Graph API looks significantly more complicated and appears to be OAuth based which, again, seems to involve quite a bit of authentication complexity.
Most REST APIs we've interacted w/ in other .NET console apps use a simple set of API "keys." Why not offer that at least?
As I say, we're looking for a way to write processes that run programmatically to automate a number of operations related to certain mailboxes. IMAP has worked like a charm so far, so we're looking for direct guidance on what to migrate to.
We understand your concerns. While a secure IMAP connection protects the data that's being transported, Basic Authentication exposes your Exchange Online accounts to attack techniques like phishing or brute forcing.
The primary objective of this change is to protect our customers from these threats. In addition, Modern Auth enables admin visibility into app access and enables fine-grained control of these apps.
To answer your question on implementation guidance, there is an existing approach in Graph to achieve exactly what you're looking to do. It's called "OAuth 2.0 client credentials flow". You can read more at https://learn.microsoft.com/en-us/graph/auth-v2-service
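For what it's worth, the token request in the client credentials flow is a single unattended HTTP POST, so no browser pop-up is involved; the app authenticates with its own credentials (a client secret or certificate) after an admin has granted it the relevant application permission (e.g. Mail.Read). A minimal sketch in Python with the requests library - the tenant ID, client ID, secret, and mailbox address below are placeholders:

    # Minimal sketch of the OAuth 2.0 client credentials flow against the Microsoft
    # identity platform, followed by a Graph call to read a mailbox.
    # All IDs and secrets below are placeholders.
    import requests

    TENANT_ID = "your-tenant-id"
    CLIENT_ID = "your-app-registration-client-id"
    CLIENT_SECRET = "your-client-secret"        # or use a certificate instead
    MAILBOX = "service-account@yourdomain.com"

    # 1. Get an app-only access token -- no user interaction required.
    token_resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "grant_type": "client_credentials",
            "scope": "https://graph.microsoft.com/.default",
        },
    )
    token_resp.raise_for_status()
    access_token = token_resp.json()["access_token"]

    # 2. Use the token to read messages from the service mailbox via Graph.
    msgs = requests.get(
        f"https://graph.microsoft.com/v1.0/users/{MAILBOX}/messages?$top=10",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    msgs.raise_for_status()
    for m in msgs.json()["value"]:
        print(m["subject"])

The same flow is available from .NET through MSAL's ConfidentialClientApplication, which is probably the more natural fit for existing 4.8 console apps.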
(Disclosure - I'm a Senior PM at Microsoft)

Restrict access to web service to only allow mobile clients

I'm currently building a mobile application (iOS at first), which needs a backend web service to communicate with.
Since this service will be exposing data that I only want to be accessed by my mobile clients, I would like to restrict the access to the service.
However, I'm in a bit of doubt as to how this should be implemented. Since my app doesn't require users to log in, I can't just authenticate against the service with user credentials. Somehow I need to be able to identify whether the request is coming from a trusted client (i.e. my app), and this of course leads to the thought that one could just use certificates. But couldn't such a certificate simply be extracted from the app and hence misused?
Currently my app is iOS-based, but Android and Windows Phone versions will come later as well.
I'm expecting to develop the web service in Node.js, though this is not a final decision - it will, however, be a RESTful service.
Any advice on best practice is appreciated!
Simple answer: you cannot prevent just anybody from accessing your web service from a non-mobile client. You can, however, make it harder.
Easy:
Send a nonstandard HTTP header
Set some unique query parameter
Send an interesting (or subtly non-interesting) User Agent string
(you can probably think of a few more)
Difficult:
Implement a challenge/response protocol to identify your client (see the sketch below)
(Ab)use HTTP as a transport for your own encrypted content
(you can probably think of a few more)
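To make the challenge/response item concrete: a common lightweight variant is to have the app sign every request with a secret baked into the binary and have the server verify the signature. A minimal sketch in Python - the header names and secret are made up, and as noted below the secret can still be pulled out of the app:

    # Sketch of shared-secret request signing. The secret ships inside the app,
    # so this only raises the bar; it does not make extraction impossible.
    import hashlib
    import hmac
    import time

    SHARED_SECRET = b"baked-into-the-app"   # placeholder

    def sign_request(method, path, body=b""):
        """Client side: produce headers to attach to the HTTP request."""
        timestamp = str(int(time.time()))
        message = b"\n".join([method.encode(), path.encode(), timestamp.encode(), body])
        signature = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
        return {"X-App-Timestamp": timestamp, "X-App-Signature": signature}

    def verify_request(method, path, body, headers, max_skew=300):
        """Server side: recompute the signature and compare in constant time."""
        timestamp = headers.get("X-App-Timestamp", "")
        if not timestamp.isdigit() or abs(time.time() - int(timestamp)) > max_skew:
            return False  # reject stale or missing timestamps (limits replay)
        message = b"\n".join([method.encode(), path.encode(), timestamp.encode(), body])
        expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, headers.get("X-App-Signature", ""))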
Of course anybody could extract the data, decompile your code, replay your HTTP requests, and whatnot. But at some point, being able to access a free Web application wouldn't be worth the effort that'd be required to reverse-engineer your app.
There's a more basic question here, however. What would be the harm of accessing your site with some other client? You haven't said; and without that information it's basically impossible to recommend an appropriate solution.

Client/Server Application for iOS

I have had experience with iOS development but not with client/server type applications.
I have heard about HTTPS, REST, JSON, etc., but I am confused about the differences.
The app I want to build gets a list of data to show to the user and also sends a form to the server to be processed, e.g. a membership application with personal information and other pertinent details to be stored on the server. I also need the connection to be secure, and the user must log on to the server with a username and password.
How does my app communicate with the server? Is it using NSURLRequest?
What is the best method or protocol to accomplish this?
Thanks!
HTTPS, REST, and JSON are different tools you can use when performing networked operations (more specifically, a secure protocol, a web service architecture, and a method of object serialization, respectively). If you don't know what these mean, I would do a little reading before attempting to build an iOS app that functions as a client. The link johnathon posted in the comments is a little low-level for what you're wanting to do, but searching around for "consuming a web service with iOS" might be good.
Also, does the service already exist? If so, your task is essentially to understand how to communicate with the server.
Once you're a little more up-to-speed on the fundamentals, however, the AFNetworking library is phenomenal.

Preferred method/format for sending/receiving data to/from server using iOS?

As I begin building the framework of my first iPhone app, I'd like to learn more about the "standard" or preferred approach for interacting with HTTP servers. I assume most of these iPhone apps initiate HTTP connections to send and receive data. What is the preferred data format and method for going about this task?
Secondary questions: Are there other ways of sending/receiving data to a server? Should I avoid using a PHP web server as the middle man in interacting with a few databases?
Current process:
Outbound: iOS -> Http request -> PHP -> MySQL Database
Inbound: MySQL -> PHP -> JSON Data -> iOS
I would use XML to communicate with your server unless you are doing something special (Video/Audio or packaging your own data). Cocoa has built-in support for XML so it would speed up the development process.
There are other ways to communicate with the server. You could write your own protocol which would only be understood by your client (Maximum security but could be hard to maintain or bugs could be discovered). You could use someone else's framework (like JSON).
For more details about JSON, see this iPhone/iOS JSON parsing tutorial.
You could try NSURLConnection. It is usually your best bet, and it's the preferred method for accessing web resources. Be sure to check out NSURLConnection SSL HTTP Basic Auth to see how to use SSL. If you're debugging and your certificate is not quite trusted, check out: How to use NSURLConnection to connect with SSL for an untrusted cert?
As for your Database question.
I personally would use a PHP Webserver that communicates directly with my Database because
1. I can change web hosting companies and my iOS app will only need to know the domain name (www.example.com/?username=abc&password=0000&uuid=000000&data=PackagedData)
2. I can upgrade my DB plan from FREE to something that can manage more connections (or the type of DB) and I just need to update the connection strings in my PHP Script (no need to update client iOS app)
Here are some scary reasons why you don't want direct communication with your database server
1. If you are storing sensitive non public data (usernames, documents, passwords, etc) then you're taking a HUGE risk. A clever hacker can reverse engineer your app and find the strings you used to connect to the DB and then gain access to your DB (without your knowledge). Possibly use the data or sell it!
2. If you ever decide to choose a new DB server, or if your hosting company decides to give you a new IP (or subdomain for your DB server), then you will have to update ALL your clients immediately, and you may need to send them push notifications to inform them that your app will stop working unless they upgrade.
There isn't a preferred format. Personally I like using JSON, but some people swear by plists because of the speed. You can also use XML if you are more comfortable with it. I've found working with JSON REST APIs very enjoyable on iOS using ASIHTTPRequest and JSONKit. It's been pretty easy to get started, and the flexibility allows for some really cool stuff.
You should definitely use a PHP server as the 'middleman' because you'd want to validate your data on the server side as you receive it. Exposing your DB directly exposes it to attacks, whereas putting PHP in front of it can save you a lot of headaches and issues. Of course you can use other frameworks and languages, such as Ruby (RoR, Sinatra, etc.), Python (Django), and others.
Your current process looks just fine to me and is what many services on the Web use to solve this exact problem.
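To make the "middleman" idea concrete, here is roughly what such an endpoint looks like on the server side. The answers above assume PHP; this sketch uses Python with Flask (one of the alternatives mentioned) purely for illustration, and the route and field names are made up:

    # Minimal sketch of a JSON "middleman" endpoint in Flask. In production this
    # would validate input and write to MySQL (or another DB); the field names and
    # route are made up for illustration.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/members", methods=["POST"])
    def create_member():
        data = request.get_json(silent=True) or {}
        name = data.get("name", "").strip()
        email = data.get("email", "").strip()
        if not name or "@" not in email:
            # Validate on the server side -- never trust the client.
            return jsonify({"error": "name and a valid email are required"}), 400
        # ... insert into the database here, using parameterized queries ...
        return jsonify({"status": "ok", "name": name}), 201

    if __name__ == "__main__":
        app.run()

The iOS client just POSTs JSON to that URL and parses the JSON response; the database itself is never exposed to the client.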

What is the standard method for a website to communicate with a win32 executable?

I have some Delphi code which, given a list of items, calculates the total price, taking into account any special deals that might apply.
This code is non-trivial to rewrite in another language.
How should I set it up to communicate with a website running on the same server? The website will need to ask it for a price every time the user updates their shopping cart. It's possible that there will be multiple concurrent requests.
The Delphi code needs to maintain an in-memory list of special deals, periodically refreshed from a database, so it cannot simply be executed from scratch for every request.
I don't know what the website is written in, or even which http server it runs under, so I'm just looking for ideas or standard methods.
It sounds like the win32 app is already running as a Windows Service on the box. So, if you can't modify that service, you are going to have to deal with whatever way it wants to accept and respond to requests. This could be through sockets or some higher level communication protocol like web services.
You could do a couple of things. Write an assembly that knows how to communicate with the service and have your web site use that assembly. Or you could build a shim service that knows how to communicate with the legacy service, but exposes communication over higher level protocols such as web services. Either way will have the benefit of hiding the concurrency, threading and communications issue behind an easy to call interface, but the latter will make communicating with the service easier for everyone going forward.
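Such a shim can be very small: it accepts an ordinary HTTP/JSON request from the website and relays it to the legacy service over whatever transport that service already speaks. A sketch in Python using only the standard library - the legacy service's host, port, and line-based protocol are assumptions for illustration:

    # Hypothetical shim: exposes HTTP/JSON to the website, talks to the legacy
    # pricing service over a plain TCP socket. The legacy protocol here (one JSON
    # line in, one JSON line out) is an assumption for illustration only.
    import json
    import socket
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LEGACY_HOST, LEGACY_PORT = "127.0.0.1", 9000   # placeholders

    def ask_legacy_service(payload):
        with socket.create_connection((LEGACY_HOST, LEGACY_PORT), timeout=5) as sock:
            sock.sendall(json.dumps(payload).encode() + b"\n")
            return json.loads(sock.makefile().readline())

    class ShimHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            cart = json.loads(self.rfile.read(length))
            result = ask_legacy_service({"action": "price", "items": cart})
            body = json.dumps(result).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), ShimHandler).serve_forever()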
If you can modify the Delphi app to take an XML request and respond with an XML answer over a TCP socket (ideally speaking HTTP), you will be able to make it interoperate with most web server frameworks relatively easily. But the exact details of how to make that integration happen will depend on the language/framework the website was written in.
If the web server is on Windows, you can compile your Delphi app as a DLL that can return XML or HTML, taking parameters as part of the URL or a POST operation. Some details on making a Delphi DLL for web servers are here.
It doesn't matter what web server or OS the existing system is running under. What matters is what you want YOUR code to run under. If it is Windows, then the easiest solution would be to use WebBroker and write a custom ISAPI application, or use SOAP to expose web services. The first method could be used if you wanted to write a REST-like API, for instance; the second if your web application has the ability to consume web services.
Another option, if you are running both on the same box under IIS, is to create a COM/Automation object which you then invoke via server side scripting (ASP). If the application is an ASP.NET application, then I would use PRISM to port your code into an assembly.
I have done this with a quite complicated workers' compensation calculator. I created a Windows service using the RemObjects SDK. The calculations are exposed as a SOAP method, so they can be accessed by nearly anything.
It's not necessary to use RemObjects in the service, but it makes things much easier, as it handles a lot of the underlying plumbing. The clients don't need RemObjects; they just need to be able to call SOAP methods. Nearly any programming language can do that.
You could also create an isapi dll for IIS that exposes a soap interface. This would be useful if other websites on different servers needed access to the methods. However I have handled this in my case by opening a port in the firewall to access my windows service.
There are a lot of examples on the web. A couple of places to start reading are About.com and Dr. Bob.
Turn this app into a Windows service. Then write a web service that communicates with that Windows service. You should spend some time designing your web service, because it is going to be your consistent interface, shielding the old Delphi app. In the future, whenever you want to write a web app, a mobile app, or whatever else you can imagine, you will have one consistent interface: an XML web service.
A popular way to integrate a web application with background services is a message broker.
The message flow would be:
the web application sends a "calculation request" message to a destination on the message broker; the message contains all the needed parameters plus a correlation id used to match the request with the response from the Delphi service
one Delphi service (or, in a high-availability / load-balanced environment, several) handles the messages: it pulls the next incoming message, processes it by feeding the parameters to the calculation engine, and sends a "calculation result" message back to the web server
the web server can either synchronously wait for the response (discarding responses that have no matching correlation id) and build the result HTML document, or continue with other tasks and asynchronously receive the calculation result in a separate thread, for example in an Ajax-based web application
For an introduction, see this slideshow about the Dopplr image service:
http://de.slideshare.net/carsonified/dopplr-its-made-of-messages-matt-biddulph-presentation
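A sketch of the web-application side of that flow, using Python with the pika client for RabbitMQ - the broker, queue names, and message format are illustrative, and the same pattern works with any broker that supports reply queues and correlation ids:

    # Sketch of the web application side of the broker-based flow: publish a
    # calculation request with a correlation id, then wait for the matching reply.
    # Queue names and the JSON payload are made up for illustration.
    import json
    import uuid
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="price.requests")
    reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue

    correlation_id = str(uuid.uuid4())
    channel.basic_publish(
        exchange="",
        routing_key="price.requests",
        properties=pika.BasicProperties(reply_to=reply_queue,
                                        correlation_id=correlation_id),
        body=json.dumps({"items": [{"sku": "A100", "qty": 2}]}),
    )

    # Synchronously wait for the Delphi service's reply; skip anything whose
    # correlation id does not match.
    for method, props, body in channel.consume(reply_queue, inactivity_timeout=10):
        if method is None:                      # timed out waiting for a reply
            raise TimeoutError("no reply from the pricing service")
        channel.basic_ack(method.delivery_tag)
        if props.correlation_id == correlation_id:
            print("total price:", json.loads(body)["total"])
            break
    channel.cancel()
    connection.close()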
If you can make it a service (but not a library), you have to do inter-process communication somehow - there are a few ways to do this on Windows:
Sockets directly, which is hardest since you have to do marshalling/auth yourself (see the sketch after this list)
Shared Memory (yuck!)
RPC which works great but isn't trivial
DCOM which is easier but a pain to configure
WCF - but can you call it from your Windows Service written in Delphi?
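To show what "doing the marshalling yourself" means for the sockets option, here is a minimal Python sketch of a client sending a length-prefixed JSON request over a plain TCP socket; the port and message format are invented, and the Delphi side would have to implement the mirror image of this framing:

    # Sketch of the raw-socket option: the caller has to invent its own framing
    # and serialization. Here: a 4-byte big-endian length prefix + a JSON payload.
    # Port number and message fields are made up for illustration.
    import json
    import socket
    import struct

    def call_pricing_service(items, host="127.0.0.1", port=9000):
        request = json.dumps({"action": "price", "items": items}).encode()
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(struct.pack(">I", len(request)) + request)

            # Read the 4-byte length prefix, then exactly that many payload bytes.
            header = _read_exactly(sock, 4)
            (length,) = struct.unpack(">I", header)
            return json.loads(_read_exactly(sock, length))

    def _read_exactly(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed the connection mid-message")
            buf += chunk
        return buf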
