In my application each tenant has its own StructureMap container.
At runtime, tenant instances may be shut down or restarted. Is there any tidying up I should do (such as calling IContainer.Dispose), or should I just let garbage collection do its job?
We do have a number of singleton instances that implement IDisposable. Ideally we should call Dispose on these prior to disposing the container. I know this is done automatically for a nested container, but I wasn't sure about a standard container.
Thanks,
Ben
You should call Dispose on your container, which will dispose cached instances for you.
Call Dispose on the containers.
You should never "just let the Garbage Collector do its job". See my response to this post to understand why:
Is it bad practice to depend on the garbage collector
I am working on a project to allocate tasks to containers. Since my major is not computer science, I am not familiar with the mathematical theory behind this.
The question I am going to ask is about the computation time of Docker containers.
I am copying a part of an article that confuses me.
Delay(t) = λ(t) / C_max

where C_max is the capacity of the resources allocated to the container, and λ(t) is the mean of a Poisson process of request arrivals.
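Purely as a numeric illustration (the units are exactly what is unclear here, so assume for the moment that λ(t) is an arrival rate in requests per second and C_max a processing capacity in the same unit; both figures below are made up), the formula is just a ratio:

```java
public class DelayExample {
    // Delay(t) = lambda(t) / C_max, per the article's formula.
    static double delay(double lambda, double cMax) {
        return lambda / cMax;
    }

    public static void main(String[] args) {
        // Hypothetical figures: 50 requests/sec arriving at a container
        // whose allocated capacity can serve 100 requests/sec.
        System.out.println(delay(50.0, 100.0)); // prints 0.5
    }
}
```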
Now, I have several questions:
What are the units of λ and C_max if the resources allocated to this container are processing resources? Would it be cycles per second?
Can more than one task be allocated to a container at the same time, or do we need a buffer to store future tasks to be processed?
Question 2:
Yes, more than one task can be executed in a container at the same time. A container can have multiple processes inside it. But for ease of working with and reasoning about containers, people often just put a single process inside each container.
Consider the case where your factory takes the IoC container as a constructor parameter and then uses the container to resolve an interface.
It is often stated that the only place the container should be referenced is your application entry point (the Composition Root). Having the container in the factory goes against this.
Is this an anti-pattern?
Is it bad?
Are there alternatives?
There are only a few options when you need to build objects inside your application and those objects need dependency resolution. You will find many blog posts presenting direct container access as the only viable solution for certain problems; however, you can still solve it in at least two elegant ways without exposing the container outside the composition root (which is evil, avoidable, and bad practice).
1) You create the factory as a lambda in the composition root; most frameworks allow you to do that. (If you need complex initialization and your framework does not support it directly but does allow you to register a "custom injector", you will probably need that in only a very few places in your code.)
2) You wrap the "build" (or equivalently named) method of your container inside a FactoryClass, and then you inject that FactoryClass into a more specific factory that performs extra initialization steps (if you need any); otherwise you inject the FactoryClass directly (most frameworks can inject this kind of factory for you).
In both cases there is no explicit reference to the container outside the composition root, which prevents the "Service Locator" pattern (a pattern that causes unexpected dependencies by letting users access everything, leading to hard-to-debug side effects and maintainability problems, especially when you deal with programmers who don't understand Dependency Injection).
Of course, one of the main purposes of IoC containers is to remove calls to "new" in user code, which decouples creation from use; so most people just call "new" inside factories and avoid some code bloat (this is about the Single Responsibility Principle: the responsibility of a factory is to create, so for most people the simpler, and hence more maintainable, solution is to use "new" directly).
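To make option 1 concrete, here is a container-agnostic sketch in Java (no real IoC framework is used; ConnectionFactory, SqlConnection, and the connection string are all made-up names). The lambda that knows how to build the object lives only in the composition root, and consumers see nothing but the factory interface:

```java
// The service that must be created at runtime.
interface Connection {}

class SqlConnection implements Connection {
    private final String connectionString;
    SqlConnection(String connectionString) { this.connectionString = connectionString; }
}

// The abstraction consumers depend on; the container never appears here.
interface ConnectionFactory {
    Connection create();
}

// A consumer that needs to build objects after startup.
class OrderService {
    private final ConnectionFactory factory;
    OrderService(ConnectionFactory factory) { this.factory = factory; }

    Connection openConnection() {
        return factory.create(); // no container reference anywhere
    }
}

public class CompositionRoot {
    public static void main(String[] args) {
        // Option 1: the factory is a lambda defined only here,
        // in the composition root.
        ConnectionFactory factory = () -> new SqlConnection("placeholder-connection-string");
        OrderService service = new OrderService(factory);
        System.out.println(service.openConnection() instanceof SqlConnection); // true
    }
}
```

With a real container, the lambda would typically be registered as the implementation of ConnectionFactory instead of being passed by hand.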
Currently, our application uses the grails-jms plugin. We have an ActiveMQ message queue that we connect to. The problem is that if we start up the application when a message is already on the queue, the MDP (Message Driven POJO) tries to consume the message before Grails has completely started.
(By "completely started" we mean that the domain objects do not yet have their dynamic finders attached.)
A current solution we have implemented is message retry, with a configured amount of time between retry attempts. This, however, cannot be our final solution.
Has anyone run into this scenario before? Does anyone have any suggestions?
I don't know Grails, but with Java I would set the listener container's autoStartup property to false and call start() on the container when you are ready. That won't work, though, if there is an explicit start() of the context itself before Grails is ready.
autoStartup only controls whether SmartLifecycle beans start automatically on refresh() rather than waiting for an explicit start(). Most SmartLifecycle beans have autoStartup set to true by default.
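I don't know how grails-jms exposes this, but in plain Spring XML the idea looks roughly like the following (the bean ids, queue name, and listener are placeholders):

```xml
<!-- Listener container created with autoStartup=false so it does not
     begin consuming messages when the context is refreshed. -->
<bean id="listenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="destinationName" value="my.queue"/>
    <property name="messageListener" ref="myListener"/>
    <property name="autoStartup" value="false"/>
</bean>
```

Once the application has fully started (for Grails, perhaps in BootStrap), fetch the listenerContainer bean and call start() on it.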
I think each task will contain an instance of a spout or bolt, and a while or for loop calls them; is that right?
If so, since every task is assigned to one of the threads running in a worker process, and two or more tasks of the same spout or bolt may be assigned to the same worker, do we need to synchronize in this case (especially if the spout or bolt contains shared resources such as static members)? Why?
Yes, several tasks of the same spout/bolt can be assigned to the same worker and run in the same JVM. I recommend not using static members that are not thread-safe; if you avoid them, you won't need to care about synchronization.
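To illustrate the thread-safety point outside of Storm (plain Java threads stand in for two executor threads of the same bolt in one worker JVM; the names are made up), a static AtomicInteger counts correctly without any explicit locking, whereas a plain static int could lose updates:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedStateDemo {
    // Shared static state, as if two tasks of the same bolt
    // ran in the same worker JVM.
    static final AtomicInteger safeCounter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                safeCounter.incrementAndGet(); // atomic, no lock needed
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(safeCounter.get()); // prints 200000
    }
}
```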
Concerns:
I have read through posts/blogs describing how IIS can recycle the app pool at any time. Does this mean (in terms of IIS recycling the app pool) that it would not matter whether I call a long-running process synchronously or asynchronously, since IIS could recycle the app pool and terminate the long-running process either way? If this is the case, what are common ways to make sure this does not happen?
Example:

public Task<ActionResult> LongRunningProcessAsync(SubmitFileModel aModel)
{
    return Task.Run(() => LongRunningProcess(aModel));
}
HostingEnvironment has a static method, RegisterObject, that allows you to place an object in the list of registered objects for the application.
According to Phil Haack...
When ASP.NET tears down the AppDomain, it will first attempt to call the
Stop method on all registered objects.... When ASP.NET calls into this method, your code needs to prevent this method from returning until your work is done.
Alternatively, you could create a service that is not hosted by IIS.
If this is the case, what are common paths to make sure this does not happen?
Your application domain will be unloaded during the recycling process, with or without the RegisterObject method. RegisterObject only gives your background work some time to complete (30 seconds by default). If you have a long-running job that exceeds that time, it will be aborted, and nothing will automatically retry it on restart.
If you want to run long-running jobs in your ASP.NET application, keep things simple, and forget about ASP.NET/IIS issues, look at HangFire: it uses persistent storage (SQL Server or Redis) and is immune to the recycling process.