I would like to synchronize an IMAP mailbox from one server to another in real time. Currently I am using the imapsync software, which pulls messages from a source server to a destination server.
I have also tried mbsync (isync), but it only supports IMAP4 for the remote mailbox, not Maildir.
Can anyone suggest a realtime mailbox syncing solution that supports Maildir as the remote mailbox type?
As Max said, the issue is not with IMAP. IMAP is merely the protocol for serving messages to clients.
What you want to do is to synchronize the backing store, i.e. the place where the messages are kept. It sounds like you are using Maildir. I don't know Maildir, but since it uses the local filesystem, I would try to sync that between the servers. The simplest solution that I can think of here is to use something like Dropbox for this.
There are some practical reasons why realtime syncing of IMAP messages could be hard, though. One is that UID values have to increase with time, so you cannot have two servers allocating UIDs without each of them knowing what the last UID was.
I don't think Stack Overflow is the right setting for this question, since it does not sound like a development problem. Maybe try https://serverfault.com/
Let's say I have some system that coordinates the transfer of many files; that is, I have an Indy TCP server controlling the synchronization of files over a large distributed system.
Currently, in order to send files to specific clients, the server has to lock its Contexts list.
If I have 500 clients all connected and synchronizations taking place, I suspect this locking would be quite costly for performance, as it halts all the client connection threads.
Is there any way to speed this up, or is this not really an issue? Is it worth distributing clients on many servers? What's the trick?
Cheers,
Adrian
There is no need to lock the Contexts list just to send files. Let the OS handle any file locking for you. When sending a file to a client, have the client open the file in read-only mode. This allows multiple clients to read from the same file at the same time. If a client is uploading a file, open the file in exclusive mode so other clients cannot access the file until the upload is finished.
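The same pattern in Java, for illustration (the question is about Indy/Delphi, but the idea is language-neutral); a rough sketch, with the method names made up:

    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;

    public class TransferFiles {
        // Download: open read-only; any number of clients can read
        // the same file at the same time without blocking each other.
        public static void sendToClient(String path) throws Exception {
            try (FileChannel ch = new RandomAccessFile(path, "r").getChannel()) {
                // ... stream ch to the client socket ...
            }
        }

        // Upload: take an OS-level exclusive lock so no client sees a
        // half-written file; the lock is released when the channel closes.
        public static void receiveFromClient(String path) throws Exception {
            try (FileChannel ch = new RandomAccessFile(path, "rw").getChannel();
                 FileLock lock = ch.lock()) {
                // ... write the client's data into ch ...
            }
        }
    }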
If the clients are always connected, the OnExecute method - which runs in a loop until the connection terminates - can be used to send data to the clients whenever it becomes available. This, however, requires that the protocol is under your control.
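A sketch of that per-connection send loop, again in Java rather than Delphi and with invented names: each connected client gets its own loop, and other threads hand it data through a queue instead of locking a shared context list:

    import java.io.OutputStream;
    import java.net.Socket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // One of these runs per connected client, much like Indy's
    // per-connection OnExecute loop.
    class ClientSession implements Runnable {
        private final Socket socket;
        final BlockingQueue<byte[]> outbound = new LinkedBlockingQueue<>();

        ClientSession(Socket socket) { this.socket = socket; }

        public void run() {
            try (OutputStream out = socket.getOutputStream()) {
                while (!socket.isClosed()) {
                    byte[] chunk = outbound.take(); // blocks until data arrives
                    out.write(chunk);
                    out.flush();
                }
            } catch (Exception e) {
                // connection terminated; let the loop end
            }
        }
    }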
A related question, with a detailed answer showing how the lock list can be avoided, can be found here:
TCPserver without OnExecute event
The use case is to have a server connect to thousands of users' email accounts and sniff incoming mail, in Java, preferably with JavaMail and a Spring Integration/AMQP/RabbitMQ type of scalable infrastructure, using IMAP IDLE connections and adding server processing nodes as needed.
A single inbound channel is easy with the IMAP IDLE inbound adapter; you could configure a few in XML. But a persistent queue of thousands of these listener/IMAP IDLE channel adapters, with new user connections added dynamically for server processing, would be a challenge. Fault tolerance is also needed: if the Java listener dies or the server reboots, all these listeners and their configuration should come back up rather than having to rebuild thousands of connections, and if some connections lose their IDLE receive capability they should recover without rebuilding all user connections.
Any ideas are welcome; I have searched a lot but could not find anything. This seems to be a significant scalability issue around keeping email receive connections open.
If you want to use the IMAP IDLE command to listen for new messages using JavaMail, you'll need one thread per mailbox, which is likely to impact your scalability. Even keeping thousands of connections open might be an issue.
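For reference, the one-thread-per-mailbox pattern looks roughly like this with JavaMail's IMAPFolder.idle(); this is a minimal sketch with reconnection and error handling omitted:

    import java.util.Properties;
    import javax.mail.*;
    import javax.mail.event.MessageCountAdapter;
    import javax.mail.event.MessageCountEvent;
    import com.sun.mail.imap.IMAPFolder;

    public class IdleWatcher {
        public static void watch(String host, String user, String pass)
                throws Exception {
            Session session = Session.getInstance(new Properties());
            Store store = session.getStore("imaps");
            store.connect(host, user, pass);

            IMAPFolder inbox = (IMAPFolder) store.getFolder("INBOX");
            inbox.open(Folder.READ_ONLY);
            inbox.addMessageCountListener(new MessageCountAdapter() {
                @Override
                public void messagesAdded(MessageCountEvent e) {
                    // handle e.getMessages() here
                }
            });

            // idle() blocks until the server pushes an update, so this
            // loop ties up one thread per watched mailbox.
            while (true) {
                inbox.idle();
            }
        }
    }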
You don't say how quickly you need to react to new messages. Unless you have near-real-time requirements, it might be better to poll a subset of the mailboxes every so often, eventually cycling through all of them.
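One possible shape for that polling approach, sketched with a plain ScheduledExecutorService; the pool size, interval, and the checkForNewMail helper are all invented placeholders to tune against your latency needs:

    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class MailboxPoller {
        // A small worker pool cycles through all mailboxes instead of
        // holding thousands of IDLE connections open at once.
        public static void start(List<String> mailboxes) {
            ScheduledExecutorService pool = Executors.newScheduledThreadPool(8);
            for (int i = 0; i < mailboxes.size(); i++) {
                String box = mailboxes.get(i);
                long offset = i % 60; // stagger connections over a minute
                pool.scheduleAtFixedRate(
                    () -> checkForNewMail(box), offset, 60, TimeUnit.SECONDS);
            }
        }

        static void checkForNewMail(String mailbox) {
            // connect, fetch messages since the last checkpoint, disconnect
        }
    }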
You'll need to deal with the fault tolerance issues yourself, using checkpointing or transactions or whatever seems appropriate for your application.
The other option is to take a look at something like Akka, with actors performing the async IO. You'll need to ditch the JavaMail package and parse the IMAP commands yourself, but there are lots of packages out there to do that. Would love to hear if you find a better solution.
For my portfolio software I have been using fetchmail to read from a Google email account over IMAP, and life has been great. Thanks to the miracle of the IMAP IDLE extension, my triggers fire in near-realtime due to server push, much sooner than periodic polling would otherwise allow.
In my basic .fetchmailrc setup, in which a brokerage customer's account emails trade notifications to a dedicated Gmail/Google Apps box, I've had:

    poll imap.gmail.com proto imap
      user "youraddress@yourdomain-OR-gmail.com" pass "yoMama"
      keep nofetchall ssl idle mimedecode limit 29000 no rewrite
      mda "myCustomSpecialMDAhandler.sh %F %T"
Trouble is, now I need to support reading from multiple email boxes and hand the emails off to other specialized MDA scripts I wrote. No problem, just add more poll lines to .fetchmailrc, right? Well, that doesn't work when the other accounts also use imap.gmail.com. What ends up happening is that while one account reads fine (not necessarily the first one listed, though usually yes), the other gets "socket error" all day and its emails remain untouched, unread. I can't figure out why, and I'm not even sure if it's some mechanism at imap.gmail.com, e.g. limiting each host to one IMAP connection. That doesn't seem right, since I have kept IMAP connections to many separate Gmail & Google Apps accounts open from the same client (like Thunderbird) for years and never noticed this exclusivity problem.
I haven't tried launching multiple fetchmail daemons using separate -f config files (assuming they wouldn't conflict), or deploying getmail or other similar email fetchers in addition. Still trying to avoid that kind of mess; it's unscalable the more boxes I have to monitor.
I don't have the reference offhand, but somewhere in fetchmail's docs I recall reading that idle is not so much an IMAP feature as an optional fetchmail trick, one which has a (nasty for me) side effect of choking off all other defined accounts from polling until the connection is cut off by some external event or timeout. So at least that would vindicate Google.
Credit to Carl's Whine Rack blog for some tips.
For now I periodically run killall fetchmail; fetchmail -f fetcher.$[$RANDOM % $numaccounts].rc from crontab to cycle through the accounts, each defined individually in fetcher.1.rc, fetcher.2.rc, etc. This is acceptable while email events are relatively infrequent.
On a non-web-based chat system the server distinguishes its clients by their PIDs, right? What should be used to distinguish the clients on a web-based chat system?
Thanks in advance
The fact that you're using a web server shouldn't change much about your model. You're still building chat. You also don't want to tie your chats too deeply to the process that is managing their HTTP connection. HTTP connections are ephemeral: even if everything is going well and you're using long polling, there's no guarantee that the connection will be re-used with Keep-Alive for the next long poll. The user might also want to open the same chat in multiple browser windows, on multiple computers, whatever.
I haven't looked closely at any of these but you're not the first person that has built web chat with Erlang:
http://chrismoos.com/2009/09/28/building-an-erlang-chat-server-with-comet-part-1/
http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf
http://yoan.dosimple.ch/blog/2008/05/15/
https://github.com/yrashk/socket.io-erlang (more of a general tool for this sort of thing, not chat specifically)
https://github.com/rvirding/chat_demo (as seen above)
I think the confusion comes from the notion that an Erlang server process must stay alive for every individual client. It can, but Mochiweb doesn't do that by default, if I'm not mistaken; it just spawns a new process for every request. If you would like a long-lived, bidirectional client <-> server process connection, you can get that, for example, by:
sending a client identifier with every request and mapping it to a long-lived process on the server. That process maintains the server-side state, and you can call methods on it. It's still pull and not push, though (see the sketch below).
using the web socket implementations. I'm not sure if Mochiweb has one, but other Erlang HTTP servers like Misultin and Yaws provide one. For a web-based chat system I believe web sockets would be a great fit.
For a very trivial example of a web-based chat system using websockets and Misultin you can check out this chat demo. It was written to demonstrate an idea and is not very elegant, but it does work.
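The first approach above (a client identifier mapped to long-lived server-side state) isn't Erlang-specific, by the way; here is a rough, language-neutral sketch of the same idea in Java, with all names hypothetical:

    import java.util.Map;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Each client sends its identifier with every request; the server maps
    // that identifier to chat state that outlives any single HTTP
    // connection, browser window, or long poll.
    public class SessionRegistry {
        private final Map<String, Queue<String>> pending = new ConcurrentHashMap<>();

        // Another user posted a message addressed to clientId.
        public void post(String clientId, String message) {
            pending.computeIfAbsent(clientId, id -> new ConcurrentLinkedQueue<>())
                   .add(message);
        }

        // Called on each poll: drain whatever accumulated since last time.
        public Queue<String> drain(String clientId) {
            Queue<String> q = pending.remove(clientId);
            return q != null ? q : new ConcurrentLinkedQueue<>();
        }
    }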
I'm currently designing an application for iOS (using MonoTouch) that will have a server component running on Windows Azure. The application will essentially be a chat type application where users will generate messages within their clients and send them to the server, which will then need to forward those messages out (as quickly as practicable) to other clients that the user might be sending the messages to.
My question is - is there a recommended practice for architecting an application like this, where clients need to receive 'push' messages from the server?
I've considered a few options but would appreciate feedback.
The first option is to use Apple's Push Notification service (APNs). I have two concerns about this: first, the clients only need to receive the messages when they're online (APNs delivers messages when the app is closed too, which I don't need or want); and second, there is a possibility that there will be a high volume of messages, which I know Apple would probably (and perfectly fairly) get unhappy about.
A second option I considered is using a web service (WCF-based) and having the client call this service every (say) 2-3 seconds, which is the maximum delay we could tolerate. This would seem to involve a great deal of potentially unnecessary network traffic, though ("have you got anything for me?", "no", repeated ad nauseam).
A third option is to maintain a persistent web service connection between the client and the server. When the client app starts it would call a web service method on a background thread. The server would hold the connection open (by not returning anything), and if any messages came through it would immediately return them. This connection might time out after, say, 2 minutes at which point it would be re-established. This seems to do what I want, but again, I'm concerned that there'd be a lot of connections open to the server at any moment, which could require server resources unnecessarily.
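To make that third option concrete, here is roughly what the client loop would look like (sketched in Java purely for illustration; the real app would use MonoTouch/C#, and the URL, timeout, and handler are invented):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class LongPollClient {
        public static void run() throws Exception {
            while (true) {
                HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://example.com/messages/wait").openConnection();
                // Allow the server to hold the request open past its
                // 2-minute window before we give up and reconnect.
                conn.setReadTimeout(130_000);
                try (InputStream in = conn.getInputStream()) {
                    byte[] body = in.readAllBytes();
                    if (body.length > 0) {
                        handleMessages(new String(body, StandardCharsets.UTF_8));
                    }
                } catch (java.net.SocketTimeoutException e) {
                    // no messages arrived; just reconnect and wait again
                }
            }
        }

        static void handleMessages(String payload) {
            // dispatch the received messages to the UI
        }
    }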
A fourth option is to use a persistent connection over TCP (or UDP, although from what I've found, Windows Azure doesn't support this). This seems to be a good option, but again, might be overkill in terms of server usage - there could potentially be hundreds or even thousands of clients connected at any moment.
A fifth option is to somehow have the server push messages directly to the client, perhaps by having the client run a mini web server or similar. However, as the app will be running on 3G and WiFi networks (beyond my control) I don't expect incoming ports will be open for this sort of thing.
If anyone has any other suggestions, or thinks one of the above options would be a good idea (or is a standard way of approaching this sort of problem) I'd be very interested to hear about it.
Thanks in advance,
John
Have you had a look at PubNub? http://www.pubnub.com/