How to run background code in an ASP.NET MVC / WCF app

Based on this answer here, I need to put emails in a queue and have a background task run and send them. How do I do this with an ASP.NET MVC and WCF architecture?
How do I build a queue (SQL Server)?
How do I build a background task?

You can skin this cat many different ways. The key is that the actual sending of the emails is asynchronous to the queuing of them.
Queue messages via a WCF service using the MSMQ binding, following this series of blog posts, which assumes IIS 7: MSMQ, WCF, and IIS: Getting Them to Play Nice.
Queue messages to MSMQ directly. MSMQ is a nice (sometimes underutilized) queue service built into Windows. You'll write a Windows service to receive messages from this queue. If you have IIS 7, then check out Death to Windows Services, Long Live AppFabric. MSMQ is a breeze, but has some quirky constraints (a 4 MB message size limit and availability).
Queue messages to a 'sql queue'. Create a table to hold basic queued message information, and then stored procedures to wrap the queue semantics (e.g. you don't want multiple consumers to receive the same message); a sketch of the dequeue follows after this list. Not difficult, but a little time-consuming to get right.
Queue messages to Service Broker (or even MSMQ) and write a Windows service that receives messages from the Service Broker queue. Service Broker handles the queueing semantics (competing consumers) for you. The downside is that it's a pain in the ass to administer.
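For the 'sql queue' option, the part that usually needs care is a dequeue that is safe for competing consumers. Below is a minimal sketch of the idea, assuming SQL Server and a hypothetical EmailQueue table; Python with pyodbc is used here only for brevity, and in practice you would wrap the same DELETE ... OUTPUT ... READPAST statement in a stored procedure:

    import pyodbc

    # Hypothetical queue table, created once:
    #   CREATE TABLE dbo.EmailQueue (
    #       Id            INT IDENTITY PRIMARY KEY,
    #       Recipient     NVARCHAR(256) NOT NULL,
    #       Subject       NVARCHAR(256) NOT NULL,
    #       Body          NVARCHAR(MAX) NOT NULL,
    #       RequestedDTTM DATETIME NOT NULL DEFAULT GETDATE());

    DEQUEUE_SQL = """
    DELETE TOP (1) FROM dbo.EmailQueue WITH (ROWLOCK, READPAST)
    OUTPUT deleted.Id, deleted.Recipient, deleted.Subject, deleted.Body;
    """

    def dequeue_one(conn):
        """Atomically claim one queued e-mail. READPAST skips rows locked by
        other consumers, so two workers never receive the same message."""
        cur = conn.cursor()
        row = cur.execute(DEQUEUE_SQL).fetchone()
        conn.commit()
        cur.close()
        return row  # None when the queue is empty

    if __name__ == "__main__":
        conn = pyodbc.connect("DSN=EmailQueueDb")  # placeholder connection string
        msg = dequeue_one(conn)
        if msg is not None:
            print("would send e-mail to", msg.Recipient)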
HTH,
Z

I think your solution is independent of the fact that you're using MVC.
The way I've implemented this in the past is to persist the fact that you need to send an e-mail into the database, and then process this using a Windows Service.
Another way to do this would be to utilize MSMQ as your storage medium. In general, MSMQ shouldn't be used to "store" data, only as a message transport mechanism, but it's certainly an option in this case.
In terms of developing a "queue", if the e-mails need ordered delivery for some reason, simply having a "RequestedDTTM" column in your database table would allow you to send them in the order they were requested.
Lastly, I would consider implementing a simple multi-threaded e-mail sender to maximize performance. Using the TPL in .NET 4.0 would make this pretty easy. Alternatively, you could use something like the SmartThreadPool library (available at codeplex.com) to manage your e-mail sender threads.
As was mentioned in the other answer you linked to, your UI shouldn't be doing this e-mail sending.
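If you go the thread-pool route described above, the shape of the sender is roughly the following. This is a conceptual sketch in Python (the .NET TPL version would use Task/Parallel instead); the SMTP host, sender address, and queue rows are placeholders:

    import smtplib
    from concurrent.futures import ThreadPoolExecutor
    from email.message import EmailMessage

    def send_one(recipient, subject, body):
        """Send a single queued e-mail; each call runs on a pool thread."""
        msg = EmailMessage()
        msg["From"] = "noreply@example.com"      # placeholder sender
        msg["To"] = recipient
        msg["Subject"] = subject
        msg.set_content(body)
        with smtplib.SMTP("localhost") as smtp:  # placeholder SMTP host
            smtp.send_message(msg)

    def drain_queue(pending):
        """pending: (recipient, subject, body) rows already ordered by the
        RequestedDTTM column, so delivery follows the requested order."""
        with ThreadPoolExecutor(max_workers=8) as pool:
            futures = [pool.submit(send_one, *row) for row in pending]
            for f in futures:
                f.result()  # surface any send failure so it can be retried/logged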

Related

How to fire events in a Delphi application from another Delphi application?

Please read before tagging as duplicate.
I'm creating a set of applications which rely on smart cards for authentication. Up to now, each application has controlled the smart card reader individually. In a few weeks, some of my customers will be using more than one application at the same time, so I thought it might be more practical to create a service application which controls the authentication process. I'd like my desktop applications to tell the service application that they are interested in the authentication process, and the service application would then provide them with information about the current user. This part is easy, using named pipes.
The hard part is: how can the service tell the desktop applications that an event has occurred (UserLogIn, UserLogOut, PermissionsChanged, ... to name a few)? So far I have two methods in mind: callback functions, and messages. Does anyone have a better idea? I'm sure someone has.
You want to do IPC (Inter-Process Communication) with Delphi.
There are many links that can help you; Cromis IPC is just one, to give you an idea of what you are after.
A similar SO question to yours is here.
If you want to go pure Windows API, then take a look at how OutputDebugString communication is implemented.
Several tools can listen to the mechanism and many apps can send information to it.
Search for DBWIN_DATA_READY and DbWin32 for more information on how the protocol for OutputDebugString works.
This and this are good reading.
When it comes to IPC, some tips:
Do not be tied to one protocol: for instance, if you implement named pipe communication, you may later need to run it over a network, or even over HTTP;
Do not reinvent the wheel, nor use proprietary messages; use standard formats instead (like XML / JSON / BSON);
Callback events are somewhat difficult to implement, since the common pattern would be to implement a server for each desktop client, to receive notifications from the service.
My recommendation is not to use callbacks, but polling on a stateless architecture from the desktop applications. You open a communication channel with the server, then every second / half second (use a TTimer in your UI) you make a small request asking what has changed (you can include a revision number or a timestamp of your last retrieval). Therefore, you synchronize your desktop data with the pending events. Asking for updates on an existing connection is very fast, and will just send one IP packet over the network back and forth if nothing changed. It is a very small task, and won't slow down either the client or the server (if you use some in-memory cache).
In practice, with a real application, such a stateless architecture is very responsive from the end-user point of view, and is much easier to deploy. You do not need to create a server on each desktop application, so you don't have to open firewall ports or such. Since HTTP is stateless, it is even Internet friendly.
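A minimal sketch of that polling loop, assuming a hypothetical /events?since=<revision> endpoint; it is written in Python for brevity, while the real client would do the same from a TTimer handler in the Delphi UI:

    import json
    import time
    import urllib.request

    BASE_URL = "http://localhost:8080"   # hypothetical service endpoint
    last_revision = 0

    def poll_once(since):
        """Ask the server only for events newer than the given revision."""
        with urllib.request.urlopen(f"{BASE_URL}/events?since={since}") as resp:
            return json.load(resp)       # e.g. {"revision": 42, "events": [...]}

    while True:
        payload = poll_once(last_revision)
        for event in payload["events"]:
            print("event:", event)           # UserLogIn, UserLogOut, ...
        last_revision = payload["revision"]  # nothing is re-sent next time
        time.sleep(0.5)                      # the half-second timer from above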
If you want to develop services, you can use DataSnap, something like RemObjects, or you can try our open source mORMot framework, which is able to create interface-based services with light JSON messages over REST, either in-process or using GDI messages, named pipes or TCP/HTTP - for free, with unbeatable performance, built-in security, and from Delphi 6 up to XE2. For your event-based task, just using the Client-Server ORM available in mORMot could be enough: create a table/class storing the events (you can even define a round-robin in-memory storage - no need to use the SQLite3 engine nor a DB here), then ask for all pending events since the last refresh. And the server can safely be a background service, or a normal application - some mORMot users even have the same executable able to be either a stand-alone application, a server service, an application server, or a UI client, just by changing the configuration.
Edit / announcement:
On the mORMot roadmap, we added a new upcoming feature to easily implement one-way callbacks from the server.
That is, adding a transparent "push" mode to our Service-Oriented Architecture framework.
The aim is to implement notification events triggered from the server side, very easily from Delphi code, via some interface definitions, even over a single HTTP connection - WCF, for instance, does not allow this: it needs a dual binding, so it has to open a firewall port and such.
It will be used for easy Event Collaboration, via a publish / subscribe pattern, and will allow Event Sourcing. I will try to make it implement the two modes: polling and lock-and-wait. A direct answer to your question.
You can use a simple (bidirectional) TCP socket connection to allow asynchronous server-to-client messages on the same socket.
An example is the Indy TIdTelnetClient class, which uses a thread for incoming messages from the server.
You can build a similar text-based protocol, and you only need an Indy TCP server instance in the service, and one Indy client instance in the application(s).
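The same idea sketched without Indy, just to show the shape of it (Python here for brevity; the host, port and protocol lines are made up): one socket per client, a background reader thread for unsolicited server-to-client lines, and a plain line-oriented text protocol.

    import socket
    import threading

    def reader(sock):
        """Background thread: handles asynchronous server -> client messages."""
        for line in sock.makefile("r", encoding="utf-8"):
            print("server says:", line.strip())   # e.g. "EVENT UserLogOut"

    def run_client(host="localhost", port=9000):  # placeholder host/port
        sock = socket.create_connection((host, port))
        threading.Thread(target=reader, args=(sock,), daemon=True).start()
        # The same socket stays usable for ordinary client -> server requests:
        sock.sendall(b"SUBSCRIBE auth-events\n")
        return sock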

Handle SOAP calls with ESB/MessageBroker or Grails?

We are currently trying to determine an application architecture for an application that will need to accept a number of SOAP calls and also make SOAP calls. Simplicity and robustness are among the design goals we need to take into account.
In the Grails space we could tie this all into one big Grails application, but this causes headaches on the robustness front, as an update of the Grails application will disable all incoming SOAP requests.
I was wondering whether splitting up the Grails app and combining it with something like ActiveMQ/ServiceMix/Mule etc. is recommended? Any advice or comments are appreciated! And what kind of solution would be a good candidate?
You can achieve some robustness with your monolithic Grails app by running it behind a network load balancer. This would allow you to perform no-downtime rolling upgrades.
Now this doesn't address other concerns, like the need to deal with possibly unreachable remote SOAP services, etc. This is when a tool/framework like Mule can become helpful, as it will provide you with exception handling, retries and whatnot.
This is conditioned by the intended behavior of your SOAP bridge: is it asynchronous (i.e. fire and forget: send the message to the bridge, get an immediate ACK, and let the bridge do the remote dispatch whenever possible) or is it synchronous (i.e. the caller of the bridge is held until a remote response is received and forwarded back to it)?
If your bridge is fundamentally synchronous, I'd say you can stick with your single Grails app and use a load balancer. It will be up to the caller to deal with retries.
Otherwise, if it's async, consider a messaging middleware to help with the temporary message persistence and redelivery in case of failure.

Is it necessary to use as few queues as possible? And solutions for web messaging

I read in a forum that when implementing any application using AMQP it is necessary to use fewer queues. So would I be completely wrong to assume that if I were cloning Twitter I would have a unique and durable queue for each user signing up? It just seems the most natural approach, and if not assigning a unique queue for each user, how would one design something like that?
What is the most used approach for web messaging? I see RabbitHub and Rabbit WebHooks, but WebHooks doesn't seem to be a scalable solution. I am working with Rails, and my AMQP server is running as a daemon.
In RabbitMQ, queues are quite cheap. They're effectively lightweight Erlang processes, and you can run tens to hundreds of thousands of queues on a single commodity machine (i.e. my laptop). Of course, each will consume a bit of RAM, but queues that haven't been used recently will hibernate, so they'll consume as little memory as possible. In addition, if Rabbit runs low on memory for messages, it will page old messages to disk.
The above only applies to a single machine. RabbitMQ supports a form of lightweight clustering. When you join several Rabbit nodes into a cluster, each can see the queues and exchanges on the other nodes but each runs only its own queues. So, you'll be able to have even more queues! (to the limit of Erlang clusters, which is usually a few hundred nodes) So, a cluster forms a logical broker distributed over several machines; clients connect to it and use it transparently through any of the nodes.
That said, having a single durable queue for each user seems a bit strange: in AMQP, you cannot browse messages while they're on the queue; you may only get/consume messages, which takes them off the queue, and publish, which adds them to the end of the queue. So, you can use AMQP as a message router, but you can't use it as a sort of message database.
Here is a thread that just talks about that: http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2009-February/003041.html
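To make the "message router, not message database" point concrete, here is roughly what a durable per-user queue looks like from pika (Python); the queue name and message body are made up, and note that the only way to read is get/consume, which removes the message:

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()

    # Declaring a queue per user is cheap; re-declaring an existing one is a no-op.
    ch.queue_declare(queue="user.alice.timeline", durable=True)

    ch.basic_publish(
        exchange="",                          # default exchange routes by queue name
        routing_key="user.alice.timeline",
        body=b"someone you follow just tweeted",
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )

    # There is no "browse": getting a message takes it off the queue.
    method, properties, body = ch.basic_get(queue="user.alice.timeline")
    if method is not None:
        ch.basic_ack(method.delivery_tag)
        print(body)
    conn.close()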

Erlang web-distribution

In a non-web-based chat system the server distinguishes its clients by their PIDs, right? And what should be used to distinguish the clients in a web-based chat system?
Thanks in advance
The fact that you're using a web server shouldn't change much about your model. You're still building chat. You also don't want to make your chats tied too deeply to the process that is managing their HTTP connection. HTTP connections are ephemeral, even if everything is going well and you're using long polling there's no guarantee that the connection will be re-used with Keep-Alive for the next long poll. The user might also want to open up the same chat in multiple browser windows, multiple computers, whatever.
I haven't looked closely at any of these but you're not the first person that has built web chat with Erlang:
http://chrismoos.com/2009/09/28/building-an-erlang-chat-server-with-comet-part-1/
http://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf
http://yoan.dosimple.ch/blog/2008/05/15/
https://github.com/yrashk/socket.io-erlang (more of a general tool for this sort of thing, not chat specifically)
https://github.com/rvirding/chat_demo (as seen above)
I think the confusion comes from the notion that an Erlang server process must stay alive for every individual client. It can, but Mochiweb doesn't do that by default, if I'm not mistaken. It just spawns a new process for every request. If you would like to have a long-lived bidirectional client <-> server process connection, you can do that, for example, by:
sending a client identifier with every request and mapping that to a long-lived process on the server. The process will maintain the server-side state, and you can call methods on it. It's still pull and not push, though.
using the WebSocket implementations. I'm not sure if Mochiweb has one, but other Erlang HTTP servers like Misultin and Yaws provide one. For a web-based chat system I believe WebSockets would be a great fit.
For a very trivial example of a web-based chat system using websockets and Misultin you can check out this chat demo. It was written to demonstrate an idea and is not very elegant, but it does work.

What is the most common approach for designing large scale server programs?

Ok I know this is pretty broad, but let me narrow it down a bit. I've done a little bit of client-server programming but nothing that would need to handle more than just a couple clients at a time. So I was wondering design-wise what the most mainstream approach to these servers is. And if people could reference either tutorials, books, or ebooks.
Haha, OK, that didn't really narrow it down. I guess what I'm looking for is a simple but literal example of how the server-side program is set up.
The way I see it: the client sends a command; the server receives the command and puts it into a queue; the server has either a single dedicated thread or a thread pool that constantly polls this queue, then sends the appropriate response back to the client. Is non-blocking I/O often used?
I suppose just tutorials, time and practice are really what I need.
EDIT: Thanks for your responses! Here is a little more of what I'm trying to do, I suppose.
This is mainly for the purpose of learning, so I'd rather steer away from the use of frameworks or libraries as much as I can. Take for example this somewhat made-up idea:
There is a client program that does some function and constantly streams its output to a server (there can be many of these clients); the server then creates statistics and stores most of the data. And let's say there is an admin client that can log into the server; if any clients are streaming data to the server, it in turn would stream that data to each of the connected admin clients.
This is how I envision the server program logic:
The server would have 3 threads for managing incoming connections (one for each port it listens on), then spawn a thread to manage each connection:
1) ClientConnection, which would basically just receive output, which we'll just say is text
2) AdminConnection, which would be for sending commands between the server and the admin client
3) AdminDataConnection, which would basically be for streaming client output to the admin client
When data comes in from a client, the server parses what is relevant and puts that data in a queue, let's say adminDataQueue. In turn, there is a thread that watches this queue; every 200 ms (or whatever) it checks the queue to see if there is data, and if there is, it cycles through the AdminDataConnections and sends it to each.
Now for the AdminConnection: this would be for any commands or direct requests for data. So you could request statistics; the server side would receive the statistics command, then send a command saying incoming statistics, and immediately after that send a statistics object or data.
As for the AdminDataConnection, it is just the output from the clients, with maybe a few simple commands intertwined.
Aside from the bandwidth concerns of all the client data being funneled together to each of the admin clients, what sort of problems would arise from this design due to scaling issues (again neglecting bandwidth between clients and server, and between admin clients and server)?
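To make the adminDataQueue idea above concrete, a rough sketch of the broadcaster thread (Python just to keep it short; all names are placeholders):

    import queue
    import threading
    import time

    admin_data_queue = queue.Queue()   # filled by the ClientConnection threads
    admin_connections = []             # the AdminDataConnection handlers
    admin_connections_lock = threading.Lock()

    def broadcaster():
        """Wake up every 200 ms, drain the queue, and fan each item out to
        every connected admin client."""
        while True:
            time.sleep(0.2)
            while True:
                try:
                    item = admin_data_queue.get_nowait()
                except queue.Empty:
                    break
                with admin_connections_lock:
                    for conn in admin_connections:
                        conn.send(item)   # placeholder for the real send

    threading.Thread(target=broadcaster, daemon=True).start()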
There are a couple of basic approaches to doing this.
Worker threads or processes. Apache does this in most of its multiprocessing modes. In some versions of this, a thread or process is spawned for each request when the request arrives; in other versions, there's a pool of waiting threads which are assigned work as it arrives (avoiding the fork/thread create overhead when the request arrives).
Asynchronous (non-blocking) I/O and an event loop. This is basically using the UNIX select call (although FreeBSD and Linux provide more optimized alternatives such as kqueue and epoll). lighttpd uses this approach and is able to achieve very high scalability, but any in-server computation blocks all other requests. Concurrent dynamic request handling is passed on to separate processes (via CGI) or waiting processes (via FastCGI or its equivalent).
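A minimal illustration of that event-loop approach, using Python's selectors module (which sits on top of select/epoll/kqueue); it's just an echo server, but the structure is the same for anything single-threaded and non-blocking:

    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def accept(server_sock):
        conn, _ = server_sock.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, handle)

    def handle(conn):
        data = conn.recv(4096)
        if data:
            conn.send(data)     # any real in-server work done here blocks everyone else
        else:
            sel.unregister(conn)
            conn.close()

    server = socket.socket()
    server.bind(("0.0.0.0", 8000))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)

    while True:                    # single-threaded loop over select/epoll/kqueue
        for key, _ in sel.select():
            key.data(key.fileobj)  # dispatch to the callback registered above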
I don't have any particular references handy to point you to, but looking at the web sites of open source projects that use the different approaches for information on their design wouldn't be a bad start.
In my experience, building a worker thread/process setup is easier when working from the ground up. If you have a good asynchronous framework that integrates fully with your other communications tasks (such as database queries), however, it can be very powerful and frees you from some (but not all) thread locking concerns. If you're working in Python, Twisted is one such framework. I've also been using Lwt for OCaml lately with good success.
