I've got a sorta-legacy R&D app which is a monolith website plus a few WebJobs used for some background processing.
Now, I've been experimenting with moving all of this over to Docker + microservices (note: not because microservices are the 'new hot stuff', but because our application suits getting split up into more manageable pieces/services).
It was easy slicing the website up into a Gateway API (or BFF APIs) + microservices. But I'm not sure how to handle the WebJob migration. The WebJobs are (currently) Azure Queue trigger and timer based.
Everything is running under:
- Docker (on linux containers)
- ASP.NET Core 2.1
Anyone have any suggestions for other ways I can migrate the WebJobs to a Docker container or something?
I know Hangfire is a tool that enables background processing on an ASP.NET website. But before I go down that route, I'm just checking if there are other solutions people use.
Also, .NET Core 2.1 has the concept of an IHostedService ... so I'm not sure if that is a legit solution, and if so, how?
You can now run Azure WebJobs in a container.
If you target .NET Core and use the Azure WebJobs SDK 3.0 or later (which is distributed as a .NET Standard 2.0 library), you can run the code inside a container. Use an image based on microsoft/dotnet.
Here is an example on GitHub: christopheranderson/webjobs-docker-sample
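A minimal sketch of what the container entry point might look like with the WebJobs SDK 3.x (I'm assuming the Microsoft.Azure.WebJobs.Extensions and Microsoft.Azure.WebJobs.Extensions.Storage packages for the timer and queue triggers); your existing [QueueTrigger]/[TimerTrigger] functions can stay mostly as they are:

using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static async Task Main(string[] args)
    {
        // WebJobs SDK 3.x runs as a plain console host, so it works in a Linux container.
        var builder = new HostBuilder()
            .ConfigureWebJobs(b =>
            {
                b.AddAzureStorageCoreServices(); // storage plumbing (connection strings etc.)
                b.AddAzureStorage();             // [QueueTrigger]/[BlobTrigger] support
                b.AddTimers();                   // [TimerTrigger] support
            });

        using (var host = builder.Build())
        {
            await host.RunAsync();
        }
    }
}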
Another option: Implement background tasks in microservices with IHostedService and the BackgroundService class.
- This is for .NET Core 2.1
- Doesn't require a website (e.g. Kestrel / ASP.NET Core)
- Is basically a console app with some smarts to handle the lifecycle of your tasks/services
- Possible clean replacement for WebJobs/Functions
Generic Host Code samples
e.g. a really basic code example:
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace GenericHostSample
{
    public class ProgramHelloWorld
    {
        public static async Task Main(string[] args)
        {
            var builder = new HostBuilder()
                .ConfigureServices((hostContext, services) =>
                {
                    services.AddHostedService<MyServiceA>();
                    services.AddHostedService<MyServiceB>();
                });

            await builder.RunConsoleAsync();
        }
    }
}
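MyServiceA/MyServiceB aren't shown above; one of them might look roughly like this as a BackgroundService (the five-second loop is just a placeholder):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

namespace GenericHostSample
{
    public class MyServiceA : BackgroundService
    {
        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            // Keep working until the host signals shutdown (Ctrl+C / SIGTERM in Docker).
            while (!stoppingToken.IsCancellationRequested)
            {
                Console.WriteLine("MyServiceA doing some background work...");
                await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
            }
        }
    }
}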
Related
I'm looking to write a daemon process using .NET Core which will basically act much like a cron job and just orchestrate API/DB calls on some interval. As such, it has no need to expose any web routes, so there's no need for ASP.NET Core.
However, afaik ASP.NET Core is where you get that nice Startup class with all the DI plumbing and environment-based configuration you might need.
The way I see it, I have two options:
- Forgo ASP.NET Core and just hook up the DI framework on my own. If I go that route, how do I do that?
- Include ASP.NET Core just for the DI portion, but then how do I spawn background tasks which "run forever" outside of any request context? My understanding is that the DI framework very much assumes there's some sort of incoming request to orchestrate all the injections.
You seem to pose multiple questions; let me try to answer them one by one.
Dependency injection without the Startup class
This is definitely possible. The Startup class is part of the WebHostBuilder package (which contains Kestrel/the web server), while the dependency injection NuGet package is only a dependency of that package, so it can be used on its own in the following way:
// requires the Microsoft.Extensions.DependencyInjection NuGet package
var services = new ServiceCollection();
services.AddTransient<IMyInterface, MyClass>();

var serviceProvider = services.BuildServiceProvider(); // IoC container
serviceProvider.GetService<IMyInterface>();
So in your program's Main (startup function) you can add this code, and maybe even make the ServiceProvider statically available.
Note that IHostingEnvironment is also part of the Kestrel package and is not available to you, but there are simple workarounds for this.
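For example, one simple workaround (my own sketch, assuming the Microsoft.Extensions.Configuration.* packages) is to read the same environment variable ASP.NET Core would use:

using System;
using Microsoft.Extensions.Configuration;

// Read the environment name the way ASP.NET Core does, falling back to Production.
var environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Production";

var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddJsonFile($"appsettings.{environment}.json", optional: true)
    .AddEnvironmentVariables()
    .Build();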
Registration
I'm not sure what exactly you mean by spawning background tasks/running forever. But in .NET you can spawn tasks with TaskCreationOptions.LongRunning to tell the scheduler that your task will be running for a very long time, so .NET will optimise threads for this. You can also use the serviceProvider in these tasks.
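For example, a rough sketch (IMyInterface and DoWork are placeholder names, and serviceProvider is the container built above):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

// LongRunning hints to the scheduler that this task will occupy a thread
// for a long time, so it is not put on a regular thread-pool thread.
Task.Factory.StartNew(() =>
{
    while (true)
    {
        var worker = serviceProvider.GetService<IMyInterface>();
        worker.DoWork(); // placeholder for whatever the background job does

        Thread.Sleep(TimeSpan.FromMinutes(1));
    }
}, TaskCreationOptions.LongRunning);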
The only downside to the DI is that you need to set it up at the startup of your application and cannot add new services while the application is running (actually you can add to the services and then rebuild the serviceProvider, but it's easier to use another external IoC container). If you were thinking of running some kind of plugin system where dependencies would automatically be registered, you're better off making your own factory method.
Also note that when plugins are loaded in as DLLs, they cannot be unloaded, so if you have a theoretically unlimited number of plugins, your memory will slowly build up every time you add new ones.
As of .NET Core 2.1 this can/should be done with a Generic Host. From the .NET Core docs:
"The goal of the Generic Host is to decouple the HTTP pipeline from the Web Host API to enable a wider array of host scenarios..."
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/generic-host?view=aspnetcore-2.1
public static async Task Main(string[] args)
{
    var builder = new HostBuilder()
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            // ...
        })
        .ConfigureServices((hostContext, services) =>
        {
            // ...
        });

    await builder.RunConsoleAsync();
}
Working on an MVC application with the below architecture. Bootstrapped with Castle Windsor.
Controller -> Service -> Repository (uses DbContext).
Now certain flows in the application require that I run some part of the flow in a thread.
For example:
Controller -> Service -> Repo1 -> control returns to Service -> new Thread() started -> Repo2
The issue I face is that the DbContext is disposed, as it is declared as LifestylePerWebRequest(). I have tried using LifestyleTransient(), but that didn't seem to work. What am I missing?
There are similar dependencies which I sometimes have to use in a separate thread and sometimes within a single request. How do I configure Windsor to handle these dependencies?
There is a NuGet package that I use to extend the lifestyles for Castle Windsor; it is called Castle.Windsor.Lifestyles.
It has hybrid lifestyles which are handy for web requests and threads.
container.Register
(
    Classes.FromAssemblyContaining<IServiceFactory>()
           .BasedOn<IServiceFactory>()
           .WithServiceAllInterfaces()
           .Configure(c => c.LifeStyle.HybridPerWebRequestPerThread())
);
The important functionality is HybridPerWebRequestPerThread(), which creates one instance for the initial web request and then a new instance for every new thread.
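To illustrate (my own sketch, not from the package docs; container is the IWindsorContainer from the registration above): resolving the same component on the request thread and on a new thread yields different instances, each with its own DbContext:

using System.Threading;

// On the request thread: the instance is scoped to the current web request.
var requestScoped = container.Resolve<IServiceFactory>();

new Thread(() =>
{
    // There is no HttpContext here, so the "per thread" half of the hybrid
    // lifestyle applies and a fresh instance (with its own DbContext) is created.
    var threadScoped = container.Resolve<IServiceFactory>();
    // ... do the Repo2-style work with threadScoped
}).Start();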
I would like to consume a public web service using Xamarin and WCF. For this demo, I'll be using Xamarin.iOS.
This is the (public) webservice I'm trying to consume:
http://www.webservicex.net/globalweather.asmx?WSDL
Inside Xamarin Studio, I add a Web Reference with the URL from the webservice. The selected Framework is set to Windows Communication Foundation (WCF).
Now, I'm using the following code to connect with the service:
var _client = new GlobalWeatherSoapClient();
_client.BeginGetWeather ("Berlin", "Germany", (ar) => {
var result = _client.EndGetWeather(ar);
}, null);
When executing this code, I'm getting a System.NullReferenceException. That's the problem: why isn't it working correctly?
The strangest part: when I'm not using WCF but select .NET 2.0 Web Services as the framework, everything seems to work fine.
I can't see what's wrong with my WCF code - according to the docs, everything should work ok.
I hope somebody can help me out! Thanks in advance!
You are not following the docs.
Quote from that page:
To generate a proxy to a WCF service for use in Xamarin.iOS projects,
use the Microsoft SilverlightServiceModel Proxy Generation Tool
(SLSvcUtil) that ships with the Silverlight SDK on Windows. This tool
allows specifying additional arguments that may be required to
maintain compliance with the service endpoint configuration.
So, the first thing is that you need to create the proxy on a Windows machine with slsvcutil. Adding WCF references to a project through Xamarin Studio does not work; it only works for .NET 2.0 Web Services, which is why that option is OK.
Second, after you have created your proxy on Windows with slsvcutil, you need to add it to your project.
Third, you need to initialize it like this, with basic http binding (again, code from the above link):
var binding = new BasicHttpBinding () {
Name= "basicHttpBinding",
MaxReceivedMessageSize = 67108864,
};
//...
client = new Service1Client (binding, new EndpointAddress ("http://192.168.1.100/Service1.svc"));
Suggestion: forget about WCF on iOS. That article contains very useful information on other means of client-server communication.
Folks,
I am trying to refactor a legacy brownfield application into a CQRS architecture with commands and a command bus for domain modifications.
The application will more than likely be implemented in ASP.NET MVC 3. My employer prefers the use of Unity for DI in MVC applications.
Any examples I can find showing a dependency container for command/bus resolution are based on StructureMap or Autofac; however, I will need to use Unity in this implementation. Has anyone used Unity in this manner, or does anyone know of any examples?
Where exactly do you think you need the container at all? Maybe this post contains some useful information.
It describes a container-agnostic way of handling commands.
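If you do want the handlers resolved through Unity, a minimal container-agnostic sketch might look like this (MoveCustomerCommand and its handler are made-up examples; the registration is plain Unity 2.x):

using Microsoft.Practices.Unity; // Unity 2.x namespace

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class MoveCustomerCommand
{
    public int CustomerId { get; set; }
    public string NewAddress { get; set; }
}

public class MoveCustomerCommandHandler : ICommandHandler<MoveCustomerCommand>
{
    public void Handle(MoveCustomerCommand command)
    {
        // domain modification goes here
    }
}

// Registration: controllers then take ICommandHandler<MoveCustomerCommand>
// as a constructor dependency instead of depending on a bus directly.
var container = new UnityContainer();
container.RegisterType<ICommandHandler<MoveCustomerCommand>, MoveCustomerCommandHandler>();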
Update
You mean you would like to have something like this:
var builder = new ConfigurationBuilder();
var convention = new CommandHandlerConvention().WithTransaction().WithDeadlockRetry();
builder.Extension<DecoratorExtension>();
builder.Scan(x =>
{
    x.With(convention);
    x.AssemblyContainingType(typeof(BarCommand));
});
var container = new UnityContainer();
container.AddExtension(builder);
ICommandHandler<BarCommand> barHandler = container.Resolve<ICommandHandler<BarCommand>>("BarHandler");
var command = new BarCommand();
barHandler.Handle(command);
Assert.AreEqual("-->Retry-->Transaction-->BarHandler", command.HandledBy);
That registration uses a custom configuration engine for Unity that provides a lot of the features of StructureMap's config.
Update2
The code samples are part of my pet project on CodePlex. The above snippets can be found inside the TecX.Unity.Configuration.Test project.
I've just deployed an MVC-based web service to Azure. It's been running fine on a dedicated server. It uses Ninject.
When deployed to Azure, I'm getting the following error:
The IControllerFactory 'xxx.NinjectControllerFactory' did not return a controller for the name '<DeploymentName>'.
where <DeploymentName> is the name of the production deployment (or hosted service - both have the same name) - which seems a little weird.
I'm using the latest version of Ninject from NuGet (2.2.0.0). My understanding was that there was a medium trust issue in 1.x, but not in 2.x.
Can anyone point me in the right direction on this one? As I said, it works fine in the non-Azure deployment - and I've used the same code in numerous MVC 3 web apps with no problems, so it does look like some Azure-specific issue.
I know that some projects which use Ninject.MVC3 run successfully on Azure. You should try to use this extension rather than implement your own NinjectControllerFactory. Otherwise the problem is most likely in your ControllerFactory and not in Ninject.
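For reference, a rough sketch of how the Ninject.MVC3 extension of that era (2.2.x) is typically wired up; the application derives from NinjectHttpApplication and you only supply the kernel and your bindings (IMyService/MyService are placeholder types):

using System.Web.Mvc;
using Ninject;
using Ninject.Web.Mvc;

public class MvcApplication : NinjectHttpApplication
{
    protected override IKernel CreateKernel()
    {
        var kernel = new StandardKernel();
        kernel.Bind<IMyService>().To<MyService>(); // your own bindings (placeholder types)
        return kernel;
    }

    protected override void OnApplicationStarted()
    {
        base.OnApplicationStarted();
        AreaRegistration.RegisterAllAreas();
        // RegisterRoutes(RouteTable.Routes); etc.
    }
}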