Running code in a remote process's CLR runtime through ICLRRuntimeHost and ExecuteInDefaultAppDomain()

I tried to combine the examples at coding the wheel and profiler attach. Everything seems to go fine, except when I try to enumerate the running assemblies in the remote process's default AppDomain, I don't get the right list.
public class remoteFoo {
    public static int sTest(String message) {
        AppDomain currentDomain = AppDomain.CurrentDomain;
        Assembly[] assems = currentDomain.GetAssemblies();
        MessageBox.Show("List of assemblies loaded in current appdomain:");
        foreach (Assembly assem in assems)
            MessageBox.Show(assem.ToString()); // remoteFoo gets listed, but not hostBar.
        return 3;
    }
}
remoteFoo gets listed, but not hostBar. Basically, I want to use introspection to run code in the remote process's AppDomain, but it seems that I don't have the right AppDomain...
Here is a link to the code I use: main.cpp
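For context, ICLRRuntimeHost::ExecuteInDefaultAppDomain can only invoke a managed method with one exact shape, which the sTest method above already satisfies. A minimal sketch of the required signature (the class and method names here are placeholders, not the asker's code):

public class Bootstrap
{
    // ExecuteInDefaultAppDomain requires exactly this shape:
    // a static int method taking a single String argument.
    public static int EntryPoint(string argument)
    {
        return 0; // this int is handed back to the native host via pReturnValue
    }
}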

Related

Can an Azure Function be Executed for Multiple Environments

I've encountered a dependency injection scenario which I cannot find a way through.
We currently have an Azure function.
We are using dependency injection via the FunctionsStartup attribute.
That all works fine, until I get asked to make it work for multiple environments.
The tester found it too onerous to deploy to 7 different environments, so I was asked to re-jig the function so that it runs (in a loop) for those environments.
That means 7 different IConfigurations and somehow having 7 separate compartmentalised IOC registrations of services.
I can't think of a way of doing that without significantly restructuring the way abstractions are resolved. Even if you set up the registrations in a loop and inject an IEnumerable of a service, when the container resolves a child dependency it just pulls the last one registered, rather than the one meant to correlate with the current item being iterated.
So, something like this (using Autofac):
Registration
foreach (var configuration in configurations)
{
    containerBuilder.Register<ICosmosDbService<AccountUsage>>(sp =>
    {
        var dBConfig = CosmosDBHelper.GetProjectDatabaseConfig(configuration.Value, Project.Jupiter);
        return CosmosClientInitializer<AccountUsage>.Initialize(dBConfig);
    }).As<ICosmosDbService<AccountUsage>>();
}
Usage
private readonly IEnumerable<IAccountUsageService> _accountUsageService;

public JobScheduler(IEnumerable<IAccountUsageService> accountUsageService)
{
    _accountUsageService = accountUsageService;
}

[FunctionName("JobScheduler")]
public async Task Run([TimerTrigger("0 */2 * * * *")] TimerInfo myTimer, ILogger log)
{
    log.LogInformation($"Job Scheduler Timer trigger function executed at: {DateTime.Now}");
    try
    {
        foreach (var usageService in _accountUsageService)
        {
            var logs = await usageService.GetCurrentAccountUsage("gfkjdsasjfa");
            // ...
        }
    }
    catch
    {
        // (exception handling elided in the original)
    }
}
I realise this kind of DI usage is not ideal (and does not even work).
Is there a way to structure an Azure Function such that it can execute for different configurations in a compartmentalised manner? Or is this really just fighting against the technology?
You've got a couple of ways to do this: either inject the right dependencies into the function constructor, or resolve them dynamically using a service-locator type approach with a named instance.
Let's consider the second approach and what it would mean for your implementation. As you demonstrated, you'd be looping through your instances, resolving the dependency you want to use, then invoking it:
foreach (var usageService in _accountUsageService)
{
    var logs = await usageService.GetCurrentAccountUsage("named-instance");
    logs.DoSomething();
}
This is technically possible, but now you're doing batch processing: more than one piece of work triggered by a single event (the timer), which means you have to deal with a couple of extra problems. What should you do if there's a failure with one of the instances, and what should you do if one of the instances is running slowly?
Ideally, you want functions to do the smallest bit of work they can and complete quickly. You don't want failure or slowness in one particular instance impacting the other instances. By breaking the work down to the smallest piece (think: one event trigger does one piece of work), you can take advantage of the functions runtime for things like retries on failure, and threading and concurrency are handled for you by the runtime.
You could then think about a couple of ways to do this: a) multiple function signatures and a service-resolver approach, e.g.
public class JobScheduler
{
    private readonly IEnumerable<IAccountUsageService> _accountUsageService;

    public JobScheduler(IEnumerable<IAccountUsageService> accountUsageService)
    {
        _accountUsageService = accountUsageService;
    }

    [FunctionName("FirstInstance")]
    public Task FirstInstance([TimerTrigger("%MetricPoller:Schedule%")] TimerInfo myTimer)
    {
        var logs = _accountUsageService.GetNamedInstance("instance-a");
        logs.DoSomething();
        return Task.CompletedTask;
    }

    [FunctionName("SecondInstance")]
    public Task SecondInstance([TimerTrigger("%MetricPoller:Schedule%")] TimerInfo myTimer)
    {
        var logs = _accountUsageService.GetNamedInstance("instance-b");
        logs.DoSomething();
        return Task.CompletedTask;
    }
}
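GetNamedInstance isn't a standard method on IEnumerable; one way to get that behaviour is a small resolver over Autofac's keyed services. A minimal sketch, assuming Autofac (the resolver class is hypothetical; IAccountUsageService and the instance names come from the answer above):

using Autofac.Features.Indexed;

public class AccountUsageServiceResolver
{
    // IIndex<TKey, TService> is Autofac's built-in keyed-service lookup.
    private readonly IIndex<string, IAccountUsageService> _services;

    public AccountUsageServiceResolver(IIndex<string, IAccountUsageService> services)
    {
        _services = services;
    }

    public IAccountUsageService GetNamedInstance(string name)
    {
        return _services[name];
    }
}

// Registration side (hypothetical concrete types):
// containerBuilder.RegisterType<AccountUsageServiceA>().Keyed<IAccountUsageService>("instance-a");
// containerBuilder.RegisterType<AccountUsageServiceB>().Keyed<IAccountUsageService>("instance-b");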
or b) multiple classes with the necessary dependencies injected:
public class JobSchedulerFirstInstance
{
    private readonly ILogs _logs;

    public JobSchedulerFirstInstance(ILogs logs)
    {
        _logs = logs;
    }

    [FunctionName("FirstInstance")]
    public Task FirstInstance([TimerTrigger("%MetricPoller:Schedule%")] TimerInfo myTimer)
    {
        _logs.DoSomething();
        return Task.CompletedTask;
    }
}
I'd personally lean towards the multiple-classes approach and register named instances with my container. There's a bit of extra wire-up work needed (a sketch follows), but you'll end up with lots of small, very similar classes that are basically just plumbing the functions runtime executes.
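A minimal sketch of that wire-up, assuming Autofac keyed registrations (the class, interface and key names are hypothetical; ResolvedParameter lives in Autofac.Core):

containerBuilder.RegisterType<JobSchedulerFirstInstance>()
    .WithParameter(new ResolvedParameter(
        // match the ILogs constructor parameter...
        (pi, ctx) => pi.ParameterType == typeof(ILogs),
        // ...and resolve it from the keyed registration for this instance
        (pi, ctx) => ctx.ResolveKeyed<ILogs>("instance-a")));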

How do I run a Windows service in Azure Service Fabric?

I have a Windows service for test purposes that I want to migrate to Service Fabric. The service does nothing more than write to a txt file on my drive when it starts and when it stops. It works fine when I manually start and stop the service after installing it on my machine. Can I achieve the same result on Service Fabric, or does the implementation need to be different?
I have created a guest executable with the service and deployed it to a local cluster following this guide.
First of all, I don't like this answer. After playing with it, I'm convinced the best way is to just port the code to a Service Fabric app. I would love to see a better "bolt-on" solution, but I haven't found any others. Every answer I've seen says "just run the exe as a Guest Executable", but a Windows Service exe doesn't "just run". It needs to be run as a Windows Service, which calls the OnStart entry point of the Service class (which inherits from ServiceBase).
The code below will allow your Windows Service to run in Service Fabric, but Service Fabric seems to report WARNINGS! So it's FAR from perfect.
It shouldn't require any changes to your OnStart or OnStop methods; however, it does require some basic plumbing to work. This is also helpful if you wish to debug your Windows services, as it allows you to pass in a /console command line argument and have it run in a console window.
First, either create your own ServiceBase class, or simply paste this code into your Service class (by default it's called Service1.cs in a C# Windows Service project):
// Requires: using System.Runtime.InteropServices;

// Expose public method to call the protected OnStart method
public void StartConsole(string[] args)
{
    // Plumbing...
    // Allocate a console, otherwise we can't properly terminate the console to call OnStop
    AllocConsole();

    // Yuck, better way?
    StaticInstance = this;

    // Handle CTRL+C, CTRL+BREAK, etc. (call OnStop)
    SetConsoleCtrlHandler(new HandlerRoutine(ConsoleCtrlCheck), true);

    // Start service code
    this.OnStart(args);
}

// Expose public method to call the protected OnStop method
public void StopConsole()
{
    this.OnStop();
}

public static Service1 StaticInstance;

private static bool ConsoleCtrlCheck(CtrlTypes ctrlType)
{
    switch (ctrlType)
    {
        case CtrlTypes.CTRL_C_EVENT:
        case CtrlTypes.CTRL_BREAK_EVENT:
        case CtrlTypes.CTRL_CLOSE_EVENT:
        case CtrlTypes.CTRL_LOGOFF_EVENT:
        case CtrlTypes.CTRL_SHUTDOWN_EVENT:
            StaticInstance.StopConsole();
            return false;
    }
    return true;
}

[DllImport("kernel32.dll")]
private static extern bool AllocConsole();

[DllImport("Kernel32")]
public static extern bool SetConsoleCtrlHandler(HandlerRoutine Handler, bool Add);

public delegate bool HandlerRoutine(CtrlTypes CtrlType);

public enum CtrlTypes
{
    CTRL_C_EVENT = 0,
    CTRL_BREAK_EVENT,
    CTRL_CLOSE_EVENT,
    CTRL_LOGOFF_EVENT = 5,
    CTRL_SHUTDOWN_EVENT
}
Now change your Main method in Program.cs to look like this:
static void Main(string[] args)
{
    var service = new Service1();
    if (args.Length > 0 && args.Any(x => x.Equals("/console", StringComparison.OrdinalIgnoreCase)))
    {
        service.StartConsole(args);
    }
    else
    {
        ServiceBase.Run(new ServiceBase[] { service });
    }
}
You may need to rename 'Service1' to whatever your service class is called.
When calling it through Service Fabric, make sure it's passing in the /console argument in ServiceManifest.xml:
<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ExeHost>
      <Program>WindowsService1.exe</Program>
      <Arguments>/console</Arguments>
      <WorkingFolder>Work</WorkingFolder>
    </ExeHost>
  </EntryPoint>
</CodePackage>
If you wish to use this as a debuggable Windows Service, you can also set your 'Command line arguments' to /console under the Project settings > Debug tab.
EDIT:
A better option is to use TopShelf. This works without warnings in Service Fabric; however, it does require some code refactoring, as the project becomes a Console project instead of a Windows Service project. A sketch of the TopShelf shape follows.
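For reference, a minimal sketch of what a TopShelf version could look like (the service class and names here are hypothetical; Start/Stop take the place of OnStart/OnStop):

using Topshelf;

public class TxtFileService
{
    public void Start() { /* write the "started" entry */ }
    public void Stop() { /* write the "stopped" entry */ }
}

public class Program
{
    public static void Main()
    {
        HostFactory.Run(x =>
        {
            x.Service<TxtFileService>(s =>
            {
                s.ConstructUsing(name => new TxtFileService());
                s.WhenStarted(svc => svc.Start());
                s.WhenStopped(svc => svc.Stop());
            });
            x.RunAsLocalSystem();
            x.SetServiceName("TxtFileService");
        });
    }
}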

Hangfire job on Console/Web App solution?

I'm new to Hangfire and I'm trying to understand how this works.
So I have a MVC 5 application and a Console application in the same solution. The console application is a simple one that just updates some data on the database (originally planned to use Windows Task Scheduler).
Where exactly do I install Hangfire? In the Web app or the console? Or should I convert the console into a class on the Web app?
If I understand it correctly, the console application in your solution is acting like a "pseudo" HangFire, since, like you said, it does some database operations over time and you plan to execute it using the Task Scheduler.
HangFire Overview
HangFire was designed to do exactly what you want with your console app, but with a lot more power and functionality, so you avoid all the overhead of creating that by yourself.
HangFire Installation
HangFire is commonly installed alongside ASP.NET applications, but if you carefully read the docs, you will surprisingly find this:
Hangfire project consists of a couple of NuGet packages available on NuGet Gallery site. Here is the list of basic packages you should know about:
Hangfire – bootstrapper package that is intended to be installed only for ASP.NET applications that use SQL Server as a job storage. It simply references the Hangfire.Core, Hangfire.SqlServer and Microsoft.Owin.Host.SystemWeb packages.
Hangfire.Core – basic package that contains all core components of Hangfire. It can be used in any project type, including ASP.NET application, Windows Service, Console, any OWIN-compatible web application, Azure Worker Role, etc.
As you can see, HangFire can be used in any type of project, including console applications, but you will need to manage and add the libraries yourself, depending on what kind of job storage you use. A minimal console-host sketch follows.
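To illustrate, a sketch of hosting Hangfire in a console application (assuming the Hangfire.Core and Hangfire.SqlServer packages and a connection string named "HangfireConnection"):

using System;
using Hangfire;

class Program
{
    static void Main()
    {
        // Point Hangfire at its job storage
        GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireConnection");

        // Process background jobs until the user exits
        using (var server = new BackgroundJobServer())
        {
            Console.WriteLine("Hangfire server started. Press ENTER to exit.");
            Console.ReadLine();
        }
    }
}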
Once HangFire is installed, you can configure it to use the dashboard, an interface where you can find all the information about your background jobs. At the company where I work, we have used HangFire several times with recurring jobs, mostly to import users, synchronize information across applications and perform operations that would be costly to run during business hours, and the dashboard proved very useful when we wanted to know whether a certain job was running. It also uses CRON expressions to schedule the operations.
A sample of what we are using right now:
Startup.cs
public partial class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Get the connection string of the HangFire database
        GlobalConfiguration.Configuration.UseSqlServerStorage(connection);

        // Start HangFire Server and enable the Dashboard
        app.UseHangfireDashboard();
        app.UseHangfireServer();

        // Start HangFire recurring jobs
        HangfireServices.Instance.StartSendDetails();
        HangfireServices.Instance.StartDeleteDetails();
    }
}
HangfireServices.cs
public class HangfireServices
{
    // .. dependency injection and other definitions

    // IDs of the recurring jobs
    public static string SEND_SERVICE = "Send";
    public static string DELETE_SERVICE = "Delete";

    public void StartSendDetails()
    {
        RecurringJob.AddOrUpdate(SEND_SERVICE, () =>
            Business.Send(), // this is my class that does the actual process
            HangFireConfiguration.Instance.SendCron.Record); // a simple class that reads a CRON configuration file
    }

    public void StartDeleteDetails()
    {
        RecurringJob.AddOrUpdate(DELETE_SERVICE, () =>
            Business.SendDelete(), // this is my class that does the actual process
            HangFireConfiguration.Instance.DeleteCron.Record); // a simple class that reads a CRON configuration file
    }
}
HangFireConfiguration.cs
public sealed class HangFireConfiguration : ConfigurationSection
{
    private static HangFireConfiguration _instance;

    public static HangFireConfiguration Instance
    {
        get { return _instance ?? (_instance = (HangFireConfiguration)WebConfigurationManager.GetSection("hangfire")); }
    }

    [ConfigurationProperty("send_cron", IsRequired = true)]
    public CronElements SendCron
    {
        get { return (CronElements)base["send_cron"]; }
        set { base["send_cron"] = value; }
    }

    [ConfigurationProperty("delete_cron", IsRequired = true)]
    public CronElements DeleteCron
    {
        get { return (CronElements)base["delete_cron"]; }
        set { base["delete_cron"] = value; }
    }
}
hangfire.config
<hangfire>
  <send_cron record="0,15,30,45 * * * *"></send_cron>
  <delete_cron record="0,15,30,45 * * * *"></delete_cron>
</hangfire>
The CRON expression above runs at minutes 0, 15, 30 and 45 of every hour, every day.
Web.config
<configSections>
  <!-- Points to the HangFireConfiguration class -->
  <section name="hangfire" type="MyProject.Configuration.HangFireConfiguration" />
</configSections>

<!-- Points to the .config file -->
<hangfire configSource="Configs\hangfire.config" />
Conclusion
Given the scenario you described, I would probably install HangFire in your ASP.NET MVC application and remove the console application, simply because it is one less project to worry about. Even though you can install it in a console application, I would rather not follow that path, because if you hit a brick wall (and you will, trust me), chances are you'll find help mostly for cases where it was installed in ASP.NET applications.
There's no need for another console application to update the database; you can use Hangfire in your MVC application itself.
http://docs.hangfire.io/en/latest/configuration/index.html
After adding the Hangfire configuration, you can use a normal MVC method to do the console operations, like updating the DB.
Based on your requirement, you can use:
BackgroundJob.Enqueue --> immediate update to the DB
BackgroundJob.Schedule --> delayed update to the DB
RecurringJob.AddOrUpdate --> recurring update to the DB, like a Windows service
Below is an example,
public class MyController : Controller
{
    public void MyMVCMethod(int Id)
    {
        BackgroundJob.Enqueue(() => UpdateDB(Id));
    }

    public void UpdateDB(int Id)
    {
        // Code to update the database.
    }
}
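For completeness, a sketch of the other two calls from the list above (the job ID and delay here are arbitrary examples):

// Delayed update: runs once, 30 minutes from now
BackgroundJob.Schedule(() => UpdateDB(42), TimeSpan.FromMinutes(30));

// Recurring update: runs every hour (Cron.Hourly() is built into Hangfire)
RecurringJob.AddOrUpdate("update-db", () => UpdateDB(42), Cron.Hourly());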

QTP 11 Extensibility developing (.NET sdk) - calling Record() on the recorder object fails

As far as I've looked, there's no answered question about QTP's Extensibility SDK on Stack Overflow (and almost nowhere else on the net; there isn't even an appropriate tag for it...), so I'm aware it's unlikely I'll get my problem solved by asking, but it's worth trying.
Anyway, before I lose the attention of anyone who has never heard of or used the Ext. SDK, maybe I will have more luck asking you to help me figure out how to locate the error log file QTP produces at run time. I know such a file exists in the new UFT 11.5 version, but I couldn't locate it in QTP 10 or 11. (For the record, I'm not talking about QTP's Log Tracking feature, but about the "meta" error log of errors/exceptions produced by QTP itself.)
Now for the question:
I'm developing an extension for QTP to support native record and run tests on my application.
I'm currently able to import an object repository and write test steps using the COM object testing agent I developed.
The problem started when I tried to implement the IRecordable interface. I'm getting the IRecorder object from QTP, and I'm even able to use it as an ISuppressor object to exclude redundant steps from being recorded, but all my attempts to record a step (that is, to add new recorded objects to the repository and add steps to the test) have simply failed.
This is the code that I'm using:
public class MyTestingAgent :
    AutInterface.ITestable,
    AutInterface.IRecordable
{
    QTPInterface.IRecorder recorder;
    ...

    void AutInterface.IRecordable.BeginRecording(object recorder)
    {
        IRecordSuppressor recordSuppressor = recorder as IRecordSuppressor;
        recordSuppressor.Suppress(MyTestingAgentId,
            "<Suppress><Item type=\"HWND\" value=\"[#HWND]\" /></Suppress>".Replace("[#HWND]", getMyAppHWND().ToString()));
        this.recorder = recorder as QTPInterface.IRecorder;
        ...
    }

    public void recordNewObjStep(string parentName, string objName, string method, Object[] arguments)
    {
        object[] objectHierarchy = new object[] { findObjectId(objName), findObjectId(parentName) };
        string externalParent = null;
        string appDescriptionXml = getDescriptionXml(parentName, objName);
        try
        {
            recorder.Record(MyTestingAgentId, objectHierarchy, appDescriptionXml, externalParent, method, arguments);
            Trace.TraceInformation("Record successfully done.");
        }
        catch (Exception e)
        {
            Trace.TraceError("TEAAgent.recordSTElement: " + e.ToString());
        }
    }
    ...
}
I'm pretty sure all the arguments I send with the call to Record() are accurate. getDescriptionXml() and findObjectId() are used elsewhere in the code and work fine, and the method name and arguments are correct.
The annoying thing is that the call to Record() doesn't even throw an exception, and I get "Record successfully done." in the trace log. Needless to say, no new object is created in the repository, and no step is added to the test.
As I can't debug QTP, I'm pretty much in the dark about what I'm doing wrong. That's why I'm asking for help with finding QTP's log file, or any other approach that might shed some light on the subject.
For QTP 11 you can turn on the logs by going to QTP's bin directory and running ClientLogs.exe.
Specifically for TEA extensibility, do the following:
Select the QTP node from the list on the left.
Find LogCatPackTEA in the Available Categories list.
Click the > button to move it to Selected Categories.
Change TEA's level to Debug2 by selecting the category and changing the level.
Click OK and run QTP.
The logs will show up as QTP.log in the directory specified in Path.
I'm curious on what the problem you're facing is, please update if you find the cause.

Asp.NET MVC 3 will not resolve routes for an MVC area loaded outside of bin directory

I have MVC areas in external libraries which have their own area registration code just as a normal MVC area would. This area registration gets called for each dll (module) and I have verified the RouteTable contains all the routes from the loaded modules once loading has been completed.
When I reference these external areas in the main site they get pulled into the bin directory and load up fine. That is, when a request is made for a route that exists in an external library, the correct type is passed to my custom controller factory (Ninject) and the controller can be instantiated.
Once I move these DLLs outside of the bin directory, however (say, to a Modules folder), there appears to be an issue with routing. I have checked that the RouteTable has all the required routes, but by the time a request makes its way into the Ninject controller factory, the requested type is null. From reading an SO link here, this behaviour seems to occur when ASP.NET MVC cannot find the controller matching the requested route or does not know how to make sense of the route.
When loading the modules externally, I have ensured that the modules I want loaded are loaded into the app domain via a call to Assembly.LoadFrom(modulePath);
I did some research, and it appears that when attempting to load a library outside of bin you need to specify private probing in the config, as pointed out here. I have mine set to 'bin\Modules', which is where the MVC area modules get moved to.
Does anyone have any ideas why simply moving an MVC area project outside of the bin folder would cause the requested type passed into the controller factory to be null, resulting in the controller not being instantiated?
Edit:
All routes registered in external areas have the namespace of the controller specified in the route
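For reference, this is the general shape of a namespace-qualified area registration (a sketch; the "MyModule" names are hypothetical):

public class MyModuleAreaRegistration : AreaRegistration
{
    public override string AreaName
    {
        get { return "MyModule"; }
    }

    public override void RegisterArea(AreaRegistrationContext context)
    {
        context.MapRoute(
            "MyModule_default",
            "MyModule/{controller}/{action}/{id}",
            new { action = "Index", id = UrlParameter.Optional },
            // restrict controller lookup to this assembly's namespace
            new[] { "MyCompany.MyModule.Controllers" });
    }
}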
Below is a fragment of code that creates a new Ninject kernel, reads a list of module names from a file to enable, and then goes searching for the enabled modules in the bin/Modules directory. The module is loaded via the assembly loader, has its area(s) registered and then loaded into the ninject kernel.
// Comma-separated list of modules to enable
string moduleCsv = ConfigurationManager.AppSettings["Modules.Enabled"];
if (!string.IsNullOrEmpty(moduleCsv)) {
    string[] enabledModuleList = moduleCsv.Replace(" ", "").Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
    _Modules = enabledModuleList ?? new string[0];

    // Load enabled modules from bin/Modules.
    var moduleList = Directory.GetFiles(Server.MapPath("~" + Path.DirectorySeparatorChar + "bin" + Path.DirectorySeparatorChar + "Modules"), "*.dll");
    foreach (string enabledModule in enabledModuleList) {
        string modulePath = moduleList.Single(m => m.Contains(enabledModule));

        // Using code adapted from AssemblyLoader
        var asm = AssemblyLoader.LoadAssembly(modulePath);

        // Register routes for the module
        AreaRegistrationUtil.RegisterAreasForAssemblies(asm);

        // Load into the Ninject kernel
        kernel.Load(asm);
    }
}
This is the crux of the Ninject controller factory that receives the aforementioned Ninject kernel and handles requests to make controllers. For controllers that exist within an assembly in bin/Modules the GetControllerType(...) returns null for the requested controller name.
public class NinjectControllerFactory : DefaultControllerFactory
{
    #region Instance Variables
    private IKernel _Kernel;
    #endregion

    #region Constructors
    public NinjectControllerFactory(IKernel kernel)
    {
        _Kernel = kernel;
    }
    #endregion

    protected override Type GetControllerType(System.Web.Routing.RequestContext requestContext, string controllerName)
    {
        // Is null for controller names requested outside of the bin directory.
        var type = base.GetControllerType(requestContext, controllerName);
        return type;
    }

    protected override IController GetControllerInstance(System.Web.Routing.RequestContext requestContext, Type controllerType)
    {
        IController controller = null;
        if (controllerType != null)
            controller = _Kernel.Get(controllerType) as IController;
        return controller;
    }
}
Update on Ninject Nuget Install
I couldn't get Ninject.MVC3 to install via NuGet for some reason. Visual Studio was giving a schemaVersion error when clicking the install button (I have installed other NuGet packages like ELMAH, btw).
I did find out something else interesting, though: if I pass the extra module assemblies into the NinjectControllerFactory and search those when the type cannot be resolved, it finds the correct type and is able to build the controller. This leads to another strange problem.
The first route to be requested from an external module is /Account/LogOn in the auth and registration module. The virtual path provider throws an error here after it has located the view and attempts to render it, complaining about a missing namespace. This causes an error route to fire off, which is handled by an ErrorHandling module. Strangely enough, this loads and renders fine!
So I am still stuck with two issues:
1) Having to do a bit of a dodgy hack and pass the extra module assemblies into the NinjectControllerFactory in order to resolve types for controllers in external modules
2) An error with one particular module where it complains about a namespace not being found
These two issues are obviously connected, because the assembly loading just isn't loading up and making available everything that needs to be. If all these MVC areas are loaded from the bin directory, everything works fine. So it is clearly a namespacing/assembly-load issue.
LoadFrom loads the assembly into the load-from context. Those types are not available to the other classes in the default load context. That is probably the reason why the controller is not found.
If you know which assemblies have to be loaded, you should always use Assembly.Load(). If you don't know which assemblies are deployed in the directory, then either guess the assembly names from the file names or use Assembly.ReflectionOnlyLoadFrom() (preferably in a temporary AppDomain) to get the assembly names. Then load the assemblies using Assembly.Load() with the assembly name.
If your assemblies contain NinjectModules, you can also use kernel.Load(), which does what I described above. But it only loads assemblies containing at least one module.
Read http://msdn.microsoft.com/en-us/library/dd153782.aspx for more about the different assembly contexts.
Here is a small extract from the Ninject codebase. I removed the unnecessary stuff but did not try to compile or run it, so there are probably minor issues with it.
public class AssemblyLoader
{
    public void LoadAssemblies(IEnumerable<string> filenames)
    {
        // Note: the original used a lazy Select() that was never enumerated,
        // so nothing actually loaded; a foreach forces the loads to happen.
        foreach (var name in GetAssemblyNames(filenames))
        {
            Assembly.Load(name);
        }
    }

    private static IEnumerable<AssemblyName> GetAssemblyNames(IEnumerable<string> filenames)
    {
        var temporaryDomain = CreateTemporaryAppDomain();
        try
        {
            var assemblyNameRetriever = (AssemblyNameRetriever)temporaryDomain.CreateInstanceAndUnwrap(
                typeof(AssemblyNameRetriever).Assembly.FullName,
                typeof(AssemblyNameRetriever).FullName);
            return assemblyNameRetriever.GetAssemblyNames(filenames.ToArray());
        }
        finally
        {
            AppDomain.Unload(temporaryDomain);
        }
    }

    private static AppDomain CreateTemporaryAppDomain()
    {
        return AppDomain.CreateDomain(
            "AssemblyNameEvaluation",
            AppDomain.CurrentDomain.Evidence,
            AppDomain.CurrentDomain.SetupInformation);
    }

    private class AssemblyNameRetriever : MarshalByRefObject
    {
        public IEnumerable<AssemblyName> GetAssemblyNames(IEnumerable<string> filenames)
        {
            var result = new List<AssemblyName>();
            foreach (var filename in filenames)
            {
                Assembly assembly;
                try
                {
                    // Load-from context inside the throwaway domain only
                    assembly = Assembly.LoadFrom(filename);
                }
                catch (BadImageFormatException)
                {
                    // Ignore native (non-.NET) DLLs
                    continue;
                }
                result.Add(assembly.GetName(false));
            }
            return result;
        }
    }
}
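Hypothetical usage of the loader above against the bin/Modules folder from the question:

var loader = new AssemblyLoader();
loader.LoadAssemblies(Directory.GetFiles(Server.MapPath("~/bin/Modules"), "*.dll"));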
