Windows Service vs. Windows Workflow Foundation? - windows-services

I need something that runs in the background, goes into my database, and scans and updates certain rows based on certain logic. I need this to run about every hour, and my environment is Windows Server 2003 with SQL Server 2005.
Is WWF good for this purpose, or should I create a Windows Service? What's the difference between WWF and a Windows Service? Or, simply, what is the best way to do this?
Thanks,
Ray.

I would say use a Windows service, not a workflow. A workflow is for when there is a business process involved. As you are just updating records in a table, I would say a service is as good as anything.
Actually, now that I have read your question again, you might also want to consider a SQL Server job, since jobs can be scheduled to run at whatever interval you like.
A Windows service is a long-running process that runs in the background in Windows. A Windows Workflow Foundation workflow is used for laying out a workflow for a business process (or similar). You need to host the workflow runtime within something (a console app, ASP.NET, a Windows service, etc.).
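For reference, here is a minimal sketch of what such a service might look like, using a System.Timers.Timer that fires roughly every hour. The connection string, table name, and UPDATE statement are placeholders, not details from the question:

    using System;
    using System.Data.SqlClient;
    using System.ServiceProcess;
    using System.Timers;

    public class RowUpdaterService : ServiceBase
    {
        // Fires roughly every hour; adjust the interval as needed.
        private readonly Timer _timer = new Timer(TimeSpan.FromHours(1).TotalMilliseconds);

        // Placeholder connection string; substitute your own.
        private const string ConnectionString = "Server=.;Database=MyDb;Integrated Security=true";

        public RowUpdaterService()
        {
            ServiceName = "RowUpdaterService";
            _timer.Elapsed += (s, e) => UpdateRows();
        }

        protected override void OnStart(string[] args)
        {
            _timer.Start();
        }

        protected override void OnStop()
        {
            _timer.Stop();
        }

        private static void UpdateRows()
        {
            // Placeholder update; put your "certain logic" here.
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(
                "UPDATE MyTable SET Status = 'Processed' WHERE Status = 'Pending'", connection))
            {
                connection.Open();
                command.ExecuteNonQuery();
            }
        }

        public static void Main()
        {
            Run(new RowUpdaterService());
        }
    }

You would install it with installutil (or an installer project) and start it from the Services console; during development the same UpdateRows method can be driven from a console app instead.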

I would use a Windows service if I were you. I've done a lot of work in WF, and the main reason I would say not to do it in WF is that MS is basically rewriting the next version of WF, according to what they said at PDC in October. There will be a way to run legacy 3.0/3.5 activities in 4.0, but my impression was that there are going to be major changes.
Also, it sounds like you don't need the modular activity capability that WF provides. WF would add another layer of abstraction that you are not going to need, and you would still need to write a Windows service (or some other host) to run the workflow you create. WF would be a good choice if you had a business person who needed to constantly change the logic and you wanted to make a big investment in managing this process.
I also agree that, based on what you are saying, you should consider creating an SSIS package in SQL Server, unless you don't have direct access to the database.

A Windows service has worked for me in the past. A workflow's primary feature is not scheduling, and you would need to provide a host for it, whereas the Windows service infrastructure already contains all of this and is also well documented.

Related

Should I use a Container/Service Fabric Guest Executable for a scheduled daily workload?

This is a more general question about which types of payloads to host in a container. In our case we will use Service Fabric guest executables. For this post I will use the word "container" to refer to both, because they have similar properties and I think more people understand a container than an SF guest executable.
Web APIs/services that need to scale are a good fit for containers, but this question is about what we call a "batch" job. The nomenclature comes from the old .bat files, but in our case we are using a .NET Framework or .NET Core .exe (console apps).
Currently, Windows Task Scheduler kicks off the batch, which runs under a service account on a VM. We want the processing to happen at a certain time of day or day of the week, and not before or after. There is no real scaling here: there is one instance, which may or may not be multithreaded, and the jobs generally run for 2-15 minutes and then stop. Some run longer, some shorter. I understand there are limitations to this approach, but this is the type of payload I'm discussing here.
As we modernize the technology stack we are looking to use the orchestrator as much as possible. As a technologist I've always tried to understand the different tools in our tool belt and not use a tool just because it's the one I used last, but instead use the correct tool for the task.
We started out by not writing any more .NET console apps. Instead we put the business logic of these "batches" into Web APIs, then had the task scheduler call the API when it needed to perform its action. If I host this in Service Fabric, my concern is that system resources are consumed for the 23 hours and 45 minutes a day when the job is not running. That seems to be the opposite of what you would expect from a container.
Now, if I could spin up a Service Fabric guest exe/container on demand and then destroy the instance of the app after it finishes, that could fit the need. Then I could have the benefits of the orchestrator without the detriment of having it consume resources all the time. I would hope to retire the batch server (VM), since its hardware usage is not optimized, and instead add resources to the cluster.
UPDATE
Looking at Vaclav's scalability doc, I think there might be a use case in here: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-concepts-scalability He uses a "Workload Manager Service" combined with CreateServiceAsync to spin up an instance of the service on demand. I guess I would deploy the app to the image store but not create an instance of it until needed. Then I need to figure out how to end it: is it as simple as changing the infinite loop in Program.cs? The thing is, it doesn't look like there is a Program.cs in a guest executable.
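To make the idea concrete, this is a rough sketch of what a workload-manager-style caller could do with FabricClient: create a named stateless instance on demand and delete it once the run reports completion. The application/service names and the completion signal are my own placeholders, not something from the linked doc:

    using System;
    using System.Fabric;
    using System.Fabric.Description;
    using System.Threading.Tasks;

    public static class OnDemandBatchLauncher
    {
        // Placeholder names; the application type must already be provisioned in the cluster.
        private static readonly Uri ApplicationName = new Uri("fabric:/BatchApp");
        private const string ServiceTypeName = "BatchWorkerType";

        public static async Task RunBatchAsync(string runId)
        {
            var client = new FabricClient();
            var serviceName = new Uri("fabric:/BatchApp/run-" + runId);

            // Spin up a single stateless instance just for this run.
            var description = new StatelessServiceDescription
            {
                ApplicationName = ApplicationName,
                ServiceName = serviceName,
                ServiceTypeName = ServiceTypeName,
                InstanceCount = 1,
                PartitionSchemeDescription = new SingletonPartitionSchemeDescription()
            };

            await client.ServiceManager.CreateServiceAsync(description);

            // ... wait for the batch to signal completion (e.g. via a queue or status store) ...

            // Tear the instance down so it stops consuming cluster resources.
            await client.ServiceManager.DeleteServiceAsync(new DeleteServiceDescription(serviceName));
        }
    }

With the run-to-completion support mentioned below, the explicit delete may become unnecessary, but the create-on-demand pattern is the same.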
This looks like a way to run a package until completion, which was released as part of Service Fabric 7.1. But how do we start a second execution of the service? I want to execute based on a request coming in.
https://learn.microsoft.com/en-us/azure/service-fabric/run-to-completion
Thoughts?

Scheduled Job equivalent functionality in MVC

I have a requirement in my MVC app.
I have an export-to-Excel feature that takes about 3 minutes (the user clicks an export button and waits).
The export downloads an Excel file that has multiple worksheets, after applying certain rules to the data.
These rules are data manipulations plus applying colors to the cells in certain columns.
In order to avoid the wait time, I was asked to develop code within the MVC app that can run like a scheduled job.
This job has to export the Excel file to a dedicated folder on the network at the scheduled time (once daily).
I was also asked to develop a web page within the app with links to download these files.
Questions (any help would be appreciated):
I have chosen Quartz.NET to implement this requirement. To my limited knowledge, it is an open-source library that provides the facility to schedule a job (a class developed in .NET). Is it the right choice, or would there be any implications in the future?
Is it really necessary to develop job-like code, or can some other way of coding address this?
I'm not very familiar with Quartz.net, but I do know that trying to run background/scheduled tasks from within the same process as the MVC application can be problematic.
Ref 1: http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx/
Ref 2: http://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
Essentially, you can't guarantee that the process will complete correctly, due to how IIS handles app pools (which is where your MVC process runs, assuming you're hosting on IIS).
You mention running a scheduled task within your MVC app. Again, this is problematic for the same reason. Why not just add a console app project to the solution and drive the code from there, then put it on the server and run it with the Windows Task Scheduler?
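A bare-bones sketch of that wrapper might look like this (ExcelExporter is a hypothetical stand-in for your existing export logic, and the output path is made up):

    using System;
    using System.IO;

    // Stub standing in for the real export logic described in the question.
    public class ExcelExporter
    {
        public void ExportTo(string path)
        {
            // TODO: apply the data rules and cell formatting, then write the workbook here.
            File.WriteAllText(path, "placeholder");
        }
    }

    public static class Program
    {
        public static int Main(string[] args)
        {
            try
            {
                var outputFolder = @"\\fileserver\exports";   // dedicated network folder (placeholder)
                var fileName = "Export_" + DateTime.Now.ToString("yyyyMMdd") + ".xlsx";

                new ExcelExporter().ExportTo(Path.Combine(outputFolder, fileName));
                return 0;                                      // success exit code for Task Scheduler
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine(ex);
                return 1;                                      // non-zero exit code signals failure
            }
        }
    }

Task Scheduler can run this once a day under a service account, and the exit code tells you whether the run succeeded.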
In terms of background tasks, the "correct" way to do this is to send a command from your MVC app to some sort of message queue, which can then ensure that the command doesn't get dropped. I've used RabbitMQ in the past (a middleware message broker). Perhaps this is the aim of Quartz.net.
This setup typically involves another app (for me, usually a console app run on the server) that receives the command message from the message queue and runs in its own process, entirely separate from MVC, and thus avoids the issues inherent in IIS app pools and background tasks.
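As a rough sketch of what that receiving app could look like (the broker host, queue name, and RabbitMQ.Client 6.x usage are assumptions on my part, not part of the question):

    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    // Long-running console app, entirely outside IIS, that executes commands from a queue.
    public static class ExportWorker
    {
        public static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };   // assumed broker location

            using (var connection = factory.CreateConnection())
            using (var channel = connection.CreateModel())
            {
                channel.QueueDeclare("export-jobs", durable: true, exclusive: false,
                                     autoDelete: false, arguments: null);

                var consumer = new EventingBasicConsumer(channel);
                consumer.Received += (sender, ea) =>
                {
                    // RabbitMQ.Client 6.x: Body is ReadOnlyMemory<byte>.
                    var message = Encoding.UTF8.GetString(ea.Body.ToArray());
                    Console.WriteLine("Running export job: " + message);
                    // ... run the long export here, then acknowledge the message ...
                    channel.BasicAck(ea.DeliveryTag, multiple: false);
                };

                channel.BasicConsume("export-jobs", autoAck: false, consumer: consumer);

                Console.WriteLine("Waiting for export commands. Press Enter to exit.");
                Console.ReadLine();
            }
        }
    }

The MVC app only publishes a small message to the queue and returns immediately; the heavy lifting happens in this separate process.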
A lot of work, really... one would think it'd be easier, but that's the surefire way to do it and maintain the integrity of the task to be run.

Automating Azure VIP Swap

I have an ASP.NET MVC 4 app hosted as an Azure web role. I want to do something that seems like it should be pretty standard: I want to create a function that I can call that initiates a VIP swap and raises an event (or calls a callback) when the VIP swap operation is done.
Just to add some context to the situation: My website implements a workflow that takes about an hour (or less) to complete. If I want to release a new version of the website code, it's convenient (i.e. much less "backward compatibility" code to write) to first let all of the current users complete the workflow so that the new code doesn't need to deal with data created by the previous version of the code. So a management function in my website would first poke a value into the database that disables new workflows; it would then wait until all current workflows are done; it would then call the "VIP Swap" routine; finally, when the VIP Swap routine signals its completion, it would poke the database value to re-enable new workflows.
I found the Microsoft documentation for how to programmatically initiate a VIP swap here:
http://msdn.microsoft.com/en-us/library/ee460814.aspx
The procedure involves POSTing to a magic URL and including some headers in the POST, then periodically performing a GET to a magic URL and checking the response code.
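Roughly, the call pattern from that documentation looks like the following sketch. The API version, request body, and certificate handling are illustrative assumptions; check the linked article for the exact values:

    using System;
    using System.IO;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;
    using System.Text;

    public static class VipSwapClient
    {
        // Illustrative values; substitute your own subscription and hosted service.
        private const string SubscriptionId = "your-subscription-id";
        private const string ServiceName = "your-hosted-service";

        public static string StartVipSwap(X509Certificate2 managementCert, string swapRequestXml)
        {
            var url = string.Format(
                "https://management.core.windows.net/{0}/services/hostedservices/{1}",
                SubscriptionId, ServiceName);

            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "application/xml";
            request.Headers.Add("x-ms-version", "2012-03-01");   // assumed API version header
            request.ClientCertificates.Add(managementCert);

            var body = Encoding.UTF8.GetBytes(swapRequestXml);    // the <Swap> payload from the docs
            using (var stream = request.GetRequestStream())
            {
                stream.Write(body, 0, body.Length);
            }

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                // The request id is what you poll with Get Operation Status.
                return response.Headers["x-ms-request-id"];
            }
        }

        public static string GetOperationStatus(X509Certificate2 managementCert, string requestId)
        {
            var url = string.Format(
                "https://management.core.windows.net/{0}/operations/{1}",
                SubscriptionId, requestId);

            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Headers.Add("x-ms-version", "2012-03-01");
            request.ClientCertificates.Add(managementCert);

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();   // XML containing InProgress/Succeeded/Failed
            }
        }
    }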
The more I think about this, the more non-trivial it seems. In addition to the basic complexities of wiring up a background timer and completion notification, I don't know what complexities, if any, I might run into trying to do this stuff in the IIS environment. Can I even perform HTTP operations on a background thread? For that matter, will I run into complications just trying to use any of the half dozen or so different "do things in the background" mechanisms baked into .NET?
Any help or guidance will be greatly appreciated. In particular, I'd be ecstatic if someone could point me at a ready-to-go implementation of this function!
I don't think you will find an easy solution to this, as the fabric controller is set up to do some very fancy things without your involvement. Running hour-long workflows in a cloud computing environment, where an instance can be pulled out from underneath you (with a maximum of 5 minutes from the OnStopping event being called to clean up), requires that you do other work anyway to make sure that all of your tasks complete.
The simple question is: what do you do if an instance goes down while workflows are still running? Do you restart them, or are they lost? If they get lost then you don't care anyway, so killing workflows for an upgrade is equally unimportant. If you do restart them, then use that same mechanism to decide whether or not a node is due to be shut down, and distribute the jobs accordingly. This pattern is eerily similar to the Hadoop JobTracker. Don't just run the workflows on any ol' instance; submit them to a (job tracker) service that decides what to do. The (job tracker) service can then use the Service Management API to scale up as many instances as you need running the version that you want, run workflows on the appropriate nodes, and shut them down when they are no longer needed or are outdated.
Unfortunately this may not be the simple solution that you are looking for, but something in your architecture needs to change, rather than trying to force PaaS to fit your current approach. Decompose your workloads, create loosely coupled services, design for failure, and consider the other cloud/distributed-computing practices. There is a reason why Hadoop is built the way that it is, and it has a reputation for being able to get work done on a bunch of somewhat unreliable commodity hardware.

TFS 2010 - How to set up for a new application

I have started at a new site that is using .Net applications for the first time. As a developer I am used to VSS but this product is dying a death so we are using TFS (BASIC) instead.
I have been using TFS for source control up until now. But now we are having new servers installed for a live environment.
Now I am not sure what I should be doing. There are no books on TFS 2010 that I can find and I am wondering what tips you can give me. Does TFS need to be installed again, or should I use the existing installation? I am thinking I ought to set up a daily build for a test server. I have not been using TDD up until now, but for the next project this may change.
What must I absolutely get right, and what pitfalls should I avoid?
Without being there in your environment, it's hard to make appropriate recommendations. I've made some assumptions about your installation based on what you said, but these may be wildly wrong.
You say you're using TFS (Basic). I'm not sure what you mean by that, but if you are using TFS installed on one of the developers' workstations, and you're starting to move towards a more robust development environment, I would recommend that you get a separate server (or servers) for your TFS installation.
It sounds like you're relatively small, so having your application tier and your data tier on the same machine shouldn't be that much of an issue. Just make sure that you have enough RAM on the machine to support both processes, and that you have enough disk space allocated for the growth of the database.
You talk about Test Driven Development (TDD), but I think what you're actually describing is Continuous Integration (CI). When you have a CI environment set up, builds happen automatically, either on a schedule or triggered by check-ins. Having this set up is never a bad idea, and I would recommend that you get into the rhythm of CI builds as soon as possible.
If you're looking for a build server, you are probably going to be ok hosting the build agent on the combined application/data tier. If you find that you're getting performance hits when you do builds, you can move your builds to a different server without much effort.
You will also want to look at migrating your source code repository from your current environment to your future environment. The TFS installation wizard might be able to help you with that. If not, there are other options available, such as moving the database files to the new machine, or using the codeplex-based TFS Integration Platform.

Keeping applications and infrastructure connected

I work in an IT department that is divided into two groups. One group develops and manages applications; the other manages the company's infrastructure and servers. One of the problems we face is a breakdown in communication. I work for the application group, and one of the problems I have is not being notified when a server is taken down by infrastructure, or when a database is being refreshed.
Does anyone have suggestions on how to improve communication between the two groups, or any ideas on how to keep a lightweight log across multiple systems (both Linux and Windows)? Ideally it would be nice if we could have our boxes just tweet their statuses or something.
Thanks for the help,
Ben
One thing you could do to communicate server status is to have your infrastructure group set up a network monitoring system like Nagios. This will give everyone in your application group the ability to get a snapshot view of the status of every server in the system. Having this kind of status is invaluable when you are doing development.
Nagios gives you network monitoring, but also allows you to show scheduled down time for a particular server in the system.
Another thing your group could do to foster communication with the infrastructure group is to have your build system report which servers it is currently using for building and testing your products.
Also, setting up regular meetings between stakeholders of both groups is probably a good idea. If you are all talking to each other, even for 15 minutes a week, you'll probably see incidents like the one you described go down quite a bit.
I think this is a bigger issue of change control.
You should have hardware and software change control and an approval process.
Ultimately, infrastructure serves you - the purpose for IT infrastructure is to run applications.
At my current company, a large financial data firm, servers are not TOUCHED without proper authorization through the client and application groups. It seems like a huge pain, but every single server is there for a reason: to meet a specific business goal and run a specific application. There is simply no excuse for the infrastructure group to be changing things or upsetting servers of their own volition.
Response to critical hardware failure might be an exception.
Needed software and OS updates are handled through scheduled maintenance windows and an approved change process.
I like the Nagios idea as well. If you want to set up something that's more of a communication tool, I would recommend a content management system like Drupal.
We use Drupal internally to communicate between teams. When one team takes a server down, they add an event in Drupal. The rest of us get it as an email, as an RSS item, or just by refreshing the page.
Implement a change control process where changes are submitted, approved, and scheduled for BOTH groups. This lets everyone know what is going on. The process can be as lightweight or heavyweight as you want.
