How to check performance improvement by ConcurrentConsumers in spring-amqp

I am pretty new to the Spring AMQP module. I have successfully created a simple project that produces and consumes messages. What I don't understand is the following:
If there is only one listener, and more than one concurrent consumer is configured in the SimpleMessageListenerContainer, how does that improve performance? As I understand it, as long as I have a single listener processing the messages, it shouldn't matter how many consumers (threads) pick messages up from the queue.
Here's my code for your reference:
@Bean
public SimpleMessageListenerContainer messageListenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory());
    container.setQueues(someQueue());
    container.setMessageListener(messageListenerAdapter());
    container.setConcurrentConsumers(3);
    return container;
}
I am sure I am missing something in my understanding. Can someone shed some light, please? Thanks in advance.

The container will create 3 threads, each of which is registered as a consumer. It is equivalent to creating 3 separate containers with 1 consumer each.
Each consumer will call your single listener, so it must be thread-safe: either share no data/fields, or synchronize all access to such data.
It's generally best, however, to use a stateless bean for your listener so you don't have to worry about concurrency.
If you can't make your listener thread-safe, you must create 3 separate containers and provide each one with its own instance of your listener.

Related

PerRequestLifetimeManager and Task.Factory.StartNew - Dependency Injection with Unity

How do I manage new tasks with PerRequestLifetimeManager?
Should I create another container inside a new task? (I would rather not change PerRequestLifetimeManager to PerResolveLifetimeManager/HierarchicalLifetimeManager.)
[HttpPost]
public ActionResult UploadFile(FileUploadViewModel viewModel)
{
    var cts = new CancellationTokenSource();
    CancellationToken cancellationToken = cts.Token;
    Task.Factory.StartNew(() =>
    {
        // _fileService = DependencyResolver.Current.GetService<IFileService>();
        _fileService.ProcessFile(viewModel.FileContent);
    }, cancellationToken);
    return new EmptyResult(); // respond immediately; processing continues in the background
}
You should read this article about DI in multi-threaded applications. Although it is written for a different DI library, you'll find most of the information applicable to the concept of DI in general. To quote a few important parts:
Dependency injection forces you to wire all dependencies together in a
single place in the application: the Composition Root. This means that
there is a single place in the application that knows about how
services behave, whether they are thread-safe, and how they should be
wired. Without this centralization, this knowledge would be scattered
throughout the code base, making it very hard to change the behavior
of a service.
In a multi-threaded application, each thread should get its own object
graph. This means that you should typically call
[Resolve<T>()] once at the beginning of the thread’s
execution to get the root object for processing that thread (or
request). The container will build an object graph with all root
object’s dependencies. Some of those dependencies will be singletons;
shared between all threads. Other dependencies might be transient; a
new instance is created per dependency. Other dependencies might be
thread-specific, request-specific, or with some other lifestyle. The
application code itself is unaware of the way the dependencies are
registered and that’s the way it is supposed to be.
The advice of building a new object graph at the beginning of a
thread, also holds when manually starting a new (background) thread.
Although you can pass on data to other threads, you should not pass on
container-controlled dependencies to other threads. On each new
thread, you should ask the container again for the dependencies. When
you start passing dependencies from one thread to the other, those
parts of the code have to know whether it is safe to pass those
dependencies on. For instance, are those dependencies thread-safe?
This might be trivial to analyze in some situations, but prevents you
from changing those dependencies with other implementations, since now you
have to remember that there is a place in your code where this is
happening and you need to know which dependencies are passed on. You
are decentralizing this knowledge again, making it harder to reason
about the correctness of your DI configuration and making it easier to
misconfigure the container in a way that causes concurrency problems.
So you should not spin off new threads from within your application code itself. And you should definitely not create a new container instance, since this can cause all sorts of performance problems; you should typically have just one container instance per application.
Instead, you should pull this infrastructure logic into your Composition Root, which allows your controller's code to be simplified. Your controller code should not be more than this:
[HttpPost]
public ActionResult UploadFile(FileUploadViewModel viewModel)
{
    _fileService.ProcessFile(viewModel.FileContent);
    return new EmptyResult();
}
On the other hand, you don't want to change the IFileService implementation, because multi-threading shouldn't be its concern. Instead we need some infrastructural logic that we can place between the controller and the file service, without either having to know about it. The way to do this is by implementing a proxy class for the file service and placing it in your Composition Root:
private sealed class AsyncFileServiceProxy : IFileService {
    private readonly ILogger logger;
    private readonly Func<IFileService> fileServiceFactory;

    public AsyncFileServiceProxy(ILogger logger, Func<IFileService> fileServiceFactory) {
        this.logger = logger;
        this.fileServiceFactory = fileServiceFactory;
    }

    void IFileService.ProcessFile(FileContent content) {
        // Run on a new thread
        Task.Factory.StartNew(() => {
            this.BackgroundThreadProcessFile(content);
        });
    }

    private void BackgroundThreadProcessFile(FileContent content) {
        // Here we run on a different thread, so the services
        // should be requested on this thread.
        var fileService = this.fileServiceFactory.Invoke();
        try {
            fileService.ProcessFile(content);
        }
        catch (Exception ex) {
            // Logging is important, since we run on a different thread.
            this.logger.Log(ex);
        }
    }
}
This class is a small piece of infrastructural logic that allows processing files on a background thread. The only thing left is to configure the container to inject our AsyncFileServiceProxy instead of the real file service implementation. There are multiple ways to do this. Here's an example:
container.RegisterType<ILogger, YourLogger>();
container.RegisterType<RealFileService>();
// Register the factory delegate as a singleton instance; invoking it
// resolves a RealFileService from the container.
container.RegisterInstance<Func<IFileService>>(
    () => container.Resolve<RealFileService>());
container.RegisterType<IFileService, AsyncFileServiceProxy>();
One part, however, is missing from the equation, and that is how to deal with scoped lifestyles, such as the per-request lifestyle. Since you are running stuff on a background thread, there is no HttpContext, which basically means that you need to start some 'scope' to simulate a request (since your background thread is basically its own new request).

This, however, is where my knowledge of Unity stops. I'm very familiar with Simple Injector, and with Simple Injector you would solve this using a hybrid lifestyle (one that mixes a per-request lifestyle with a lifetime-scope lifestyle) and explicitly wrap the call to BackgroundThreadProcessFile in such a scope. I imagine the solution in Unity to be very close to this, but unfortunately I don't have enough knowledge of Unity to show you how. Hopefully somebody else can comment on this, or add an extra answer to explain how to do this in Unity.
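As a rough sketch of what such a scope might look like in Unity (an assumption on my part, not something the answer above verifies): child containers plus HierarchicalLifetimeManager can approximate a per-operation scope. This assumes the proxy holds a reference to the IUnityContainer in a new container field (acceptable here, since the proxy lives inside the Composition Root):

private void BackgroundThreadProcessFile(FileContent content) {
    // Hypothetical: each background operation gets its own scope, i.e. a
    // child container that is disposed when the work completes. Assumes:
    // container.RegisterType<RealFileService>(new HierarchicalLifetimeManager());
    using (var scope = this.container.CreateChildContainer()) {
        var fileService = scope.Resolve<RealFileService>();
        try {
            fileService.ProcessFile(content);
        }
        catch (Exception ex) {
            this.logger.Log(ex);
        }
    }
}

Whether this fully simulates a per-request lifestyle in Unity is exactly the open question above, so treat it as a starting point rather than a verified answer.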

Stateful vs Stateless Service Grails

I am currently in the process of moving business logic from a controller method to a service, when I fell down the rabbit hole of Grails services. I have the following method in my service:
Job closeJobOpportunity(Job op, Employee res) {
    op.chosenOne = res
    op.requisitionCanceledDate = new Date()
    if (!op.chosenOne || !op.hrEffectiveDate) {
        return null
    }
    else if (StringUtils.isEmpty(op.chosenOne.id)) {
        return null
    }
    return op
}
I started thinking about the different ways this method could cause synchronization problems (because Grails makes the service a singleton), and noticed the Grails documentation mentions that business logic should be put in the service as long as you don't store state.
At the risk of sounding ignorant or ill-informed, can someone simply explain the differences between stateful and stateless services in Grails? Is the above method stateful? Should it then be surrounded by try/catch in the controller?
The difference between stateful and stateless in a Grails service (or any other instance of a class, for that matter) is determined by whether the instance itself holds any state.
First, it's difficult to say for certain whether the service in your example is stateless, but nothing in that particular method indicates that you are doing anything stateful with the service itself. That leads me to believe the service is stateless.
Let me give you an example of a stateful service and explain why it's stateful.
class MyStatefulService {
    Long someNumber
    String someString

    void doSomething(Long addMe) {
        someNumber += addMe
    }

    void updateSomething(String newValue) {
        someString = newValue
    }
}
As you can see, the above service has two properties. If this service is created as a singleton, then all calls to it will use the same single instance, and the two methods on the service modify the properties, or state, of the service. This means that, in its current form, you can't be sure the state doesn't change while a particular thread is executing a method or methods of the service, making it unreliable. While this is a very simple example, it does demonstrate what makes a service stateful.
It's okay for services to have properties, and often they do. They can be references to other services or even configuration values. The key concept is to make sure they don't change state (there are always exceptions to this, but they are the edge cases).
It's entirely possible to rewrite the service to be stateful and synchronized, avoiding the pitfalls of multiple threads accessing and modifying the state, but it's not something you should aim for. Stateless services are simpler, cleaner, easier to test, easier to debug, and more lightweight.
In short, make your services stateless and save yourself the headaches.
I agree with the above answer that specifically details stateful vs stateless services. I'd also agree with the process of moving business logic out of the controller layer and avoiding the "fat controller" anti-pattern. However, to answer a slightly different, perhaps implied question, I wouldn't necessarily jump to stuffing everything into the service layer.
It does depend on the complexity of your app. In the short term, business logic at the service layer is appealing, but I feel that in the longer term it leads to procedural thinking and code that is hard to extend or reuse. If you're wondering where actual business logic should live, I'd encourage you to take 30 minutes and watch this talk:
https://skillsmatter.com/skillscasts/4037-all-hail-the-command-object-are-stateless-services-the-only-way

DI Container and custom-scoped state in legacy system

I believe I understand the basic concepts of DI / IoC containers, having written a couple of applications using them and having read a lot of Stack Overflow answers as well as Mark Seemann's book. There are still some cases that I have trouble with, especially when it comes to integrating a DI container into a large existing architecture where the DI principle hasn't really been used (think big ball of mud).
I know the ideal scenario is to have a single composition root / object graph per operation, but in a legacy system this might not be possible without major refactoring (only the new parts, and some select refactored old parts, of the code could have dependencies injected through constructors; the rest of the system would use the container as a service locator to interact with the new parts). This effectively means that a stack trace deep within an operation might include several object graphs, with calls being made back and forth between new subsystems (a single object graph until exiting into an old segment) and traditional subsystems (a service-locator call at some point to code under the DI container).
With those (potentially faulty; I might be overthinking this, or be completely wrong in assuming this kind of hybrid architecture is a good idea) assumptions out of the way, here's the actual problem:
Let's say we have a thread pool executing scheduled jobs of various types defined in a database (or any external place). Each separate type of scheduled job is implemented as a class inheriting from a common base class. When a job is started, it gets fed information about which targets it should write its log messages to and which configuration it should use. The configuration could probably be handled by just passing the values as method parameters to whatever class needs them, but if the job implementation grows beyond, say, 10-20 classes, that doesn't seem very handy.
Logging is the larger problem. Subsystems the job calls probably also need to write to the log, and usually in examples this is done by just requesting an instance of ILog in the constructor. But how does that work in this case, when we don't know the details / implementation until runtime? Since:
Due to (non-DI-container-controlled) legacy system segments in the call chain (there thus potentially being multiple separate object graphs), a child container cannot be used to inject the custom logger for a specific sub-scope
Manual property injection would basically require the complete call chain (including all legacy subsystems) to be updated
A simplified example to help better perceive the problem:
class JobXImplementation : JobBase {
    // injected through the constructor
    private readonly ILoggerFactory _loggerFactory;
    private readonly JobXExtraLogic _jobXExtras;

    public void Run(JobConfig configurationFromDatabase)
    {
        ILog log = _loggerFactory.Create(configurationFromDatabase.targets);
        // if there were no legacy parts in the call chain, I would register
        // log as an instance on a child container and Resolve the next part
        // of the call chain, and everyone requesting ILog would get the
        // correct logging targets
        // do stuff
        _jobXExtras.DoStuff(configurationFromDatabase, log);
    }
}

class JobXExtraLogic {
    public void DoStuff(JobConfig configurationFromDatabase, ILog log) {
        // call to legacy sub-system
        var old = new OldClass(log, configurationFromDatabase.SomeRandomSetting);
        old.DoOldStuff();
    }
}

class OldClass {
    public void DoOldStuff() {
        // moar stuff
        var old = new AnotherOldClass();
        old.DoMoreOldStuff();
    }
}

class AnotherOldClass {
    public void DoMoreOldStuff() {
        // call to a new subsystem
        var newSystemEntryPoint = DIContainerAsServiceLocator.Resolve<INewSubsystemEntryPoint>();
        newSystemEntryPoint.DoNewStuff();
    }
}

class NewSubsystemEntryPoint : INewSubsystemEntryPoint {
    public void DoNewStuff() {
        // want to log something...
    }
}
I'm sure you get the picture by this point.
Instantiating old classes through DI is a non-starter, since many of them use (often multiple) constructors to inject values instead of dependencies and would have to be refactored one by one. The caller basically implicitly controls the lifetime of the object, and the implementations assume this (in the way they handle internal object state).
What are my options? What other kinds of problems could you possibly see in a situation like this? Is trying to only use constructor injection in this kind of environment even feasible?
Great question. In general, I would say that an IoC container loses a lot of its effectiveness when only a portion of the code is DI-friendly.
Books like Working Effectively with Legacy Code and Dependency Injection in .NET both talk about ways to tease apart objects and classes to make DI viable in code bases like the one you described.
Getting the system under test would be my first priority. I'd pick a functional area to start with, one with few dependencies on other functional areas.
I don't see a problem with moving beyond constructor injection to setter injection where it makes sense, and it might offer you a stepping stone to constructor injection. Adding a property is usually less invasive than changing an object's constructor.
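To illustrate that stepping stone with a minimal, hypothetical sketch (ILog matches the question's logging interface, but NullLog, Write, and the constructor parameter are made up for illustration): the legacy class keeps its existing constructor, and a settable property with a safe no-op default becomes the injection seam.

public interface ILog { void Write(string message); }

// No-op default so legacy call sites that never set the property still work.
public sealed class NullLog : ILog {
    public void Write(string message) { }
}

public class OldClass {
    // The existing constructor stays untouched, so legacy callers keep compiling.
    public OldClass(string someRandomSetting) { }

    // New seam: new code (or the container) can inject a real logger here,
    // while old code silently gets the no-op default.
    public ILog Log { get; set; } = new NullLog();

    public void DoOldStuff() {
        Log.Write("doing old stuff");
    }
}

Once every caller goes through the container, the property can be promoted to a constructor parameter and the default removed.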

DI: Register-Resolve-Release when using child containers

There's this software, X, which has a really complicated API that I have to write a facade for. I wrote a class library, XClientLibrary, and I made it using DI and an IoC container (Unity). This was possible because my library exports services (interfaces), so users are not aware of the concrete classes, which use constructor DI. They're also unaware of the IoC container.
The "root service" is an IXClient instance which is supposed to be created once and used as long as the application runs (it is a desktop application, btw). The X-client allows users to connect to X-hosts if they know the URL. An X-host allows users to access the host's services, and their services, and so on (quite a complex object graph). This is sample user code:
// 1. app startup
XClientProvider provider = new XClientProvider(); // call only once per app
IXClient xClient = provider.GetClient(); // always returns the same instance
xClient.Startup();
// 2. app normal usage
IXHost host = xClient.ConnectToHost(new Uri("http://localhost")); // returns a new instance each time
IXService1 service = host.GetThis();
IXService2 otherService = service.DoThat();
...
host.Dispose();
// get another host, consume it, dispose it, etc
...
// 3. app shutdown
xClient.Shutdown();
provider.Dispose();
I tried to follow Mark Seemann's suggestions to implement this, but I'm not sure if they apply to a class library too. The client provider is the composition root, which is the only place where the IoC container is used. The composition root follows the RRR pattern:
the container is created on new XClientProvider() and configured
the container resolves IXClient when calling GetClient()
the container is disposed on provider.Dispose()
Things get complicated when the container is asked to resolve IXHost. Its implementation is:
internal class XHost : IXHost
public XHost(Uri uri, IXService1 service1)
The client is supposed to create XHost instances, so its implementation needs to know how to create IXService1:
internal class XClient : IXClient
public XClient(Func<IXService1> xService1DelegateFactory)
Invoking the delegate factory reaches the container, which creates an IXService1. Also, let's say that in this graph there is a class XComponent7 which requires the exact IXService1 instance that was used to create the host:
internal class XComponent7 : IXService7
public XComponent7(Func<IXService1> service1DelegateFactory)
I have to use Func to break the circular dependency. The container should be configured such that once an IXService1 has been resolved, it provides the same instance whenever asked to resolve IXService1 again.
Now it gets really complicated. I want to restrict this behavior "per host resolve", meaning once a host is created, the container should create an IXService1, cache it, and provide it to whatever component needs it, as long as the component is part of the host's object graph. I also need a way to dispose all components when a host is disposed.
I was thinking I could do it using child containers: create one when users call ConnectToHost, ask it to resolve the host, and dispose it on host disposal. The main container is still alive and won't be disposed until they call Dispose on the provider.
Problem is, I think it breaks the RRR pattern. So I wonder how RRR works when child containers are involved... Maybe the IXHost is another "root" which can be directly resolved by the composition root? Or maybe there's a really smart Unity lifetime manager that can do what I need?
@Suiden So my understanding is: your client is something that lets you look up hosts (like a registry). Hosts offer services implemented by components. Every application has exactly one instance of your lookup/client. Your components not only implement services but might need other services to do their job. You want to resolve all parts of that object graph exactly once, and when you dispose your client, throw all of it away.
A couple of thoughts:
Circular references between dependencies (or services) are something you should try to avoid. If these services need each other, that indicates they should be one service. That's what high cohesion, low coupling is about.
Unity does not clean up after itself. That means that even if you dispose a container, that will not dispose the objects created by that container. The cleanup feature is on the wish list for Unity vNext.
If you want to resolve an instance of some service and cache that instance inside your client/host, have a look at Lazy<T>. It takes a Func<T> to create an instance of T and evaluates that Func the first time the value is requested. So you can inject the Func into your classes, or teach Unity to inject Lazy<T> instances directly; see the sketch after these points.
Child containers are a feature I find less than useful. You can scope registration information and object lifetimes, but to make use of these scopes you would have to reference the appropriate child container. That means you are dropping dependency injection in favor of the Service Locator anti-pattern.
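A minimal sketch of that Lazy<T> suggestion in plain C# (no container involved; the XService1 class is made up for illustration): the factory delegate runs once, on first access, and every consumer sharing the same Lazy<T> instance sees the same value.

using System;

public interface IXService1 { }
public class XService1 : IXService1 { }

public static class LazyDemo {
    public static void Main() {
        var lazyService = new Lazy<IXService1>(() => new XService1());

        Console.WriteLine(lazyService.IsValueCreated); // False: nothing created yet
        IXService1 a = lazyService.Value;              // factory runs here, exactly once
        IXService1 b = lazyService.Value;              // cached instance is returned
        Console.WriteLine(ReferenceEquals(a, b));      // True
    }
}

Sharing one such Lazy<IXService1> between XHost and XComponent7 would give the "resolve once per host" behavior asked about, without reaching for child containers.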

Remoting (server side)

I'm relatively new to remoting (2.0, C#). Is there any way to lock a server-side object/instance to one client?
I have up to 10 clients that will connect to the server. The server offers 3 different tasks/operations/classes, and if one client makes a request and the server is not already working on it, I'd like to lock that operation to that client. The reason for this is that the requests work with hardware that can only handle one task at a time. I hope you understand what I'd like to do.
EDIT:
I'll try to explain my problem again...
I have 3 classes, each with X number of methods/operations (operations that trigger external hardware to perform some measurement). When a client "connects" to one class (at a time) and requests a measurement, I want to lock that class to the client; the client will then own this class and be able to execute all its methods. No other client shall be able/allowed to access this class while the first client has control. The other two classes should remain open to requests from other clients, but the same principle/rules shall apply to them. As soon as a client requests a lock, it shall have it for as long as it requires. I will have an interface that all clients must follow: call a method named Lock() to acquire control of the class and Unlock() to release it. I/We will develop all the clients and the server!
Thanks for all the help so far!
Regards
/Anders
You have to guard the task with synchronization primitives to ensure that only one thread executes it at a time. Look into the Semaphore and Mutex classes.
Edit:
There are many ways to do this, from a simple lock to more complex semaphores. Here are two samples:
This one simply locks to ensure that only one execution happens at a time:
private static object lockObject = new object();

public void Test()
{
    lock (lockObject)
    {
        // your code here
    }
}
This one uses a Mutex with a timeout; if the Mutex is not released within 500 ms, the method returns false so the client knows it could not be executed:
private static Mutex mutex = new Mutex();

public bool Test2()
{
    if (!mutex.WaitOne(500))
    {
        return false;
    }
    try
    {
        // your code here
    }
    finally
    {
        mutex.ReleaseMutex();
    }
    return true;
}
Ok, now I see the point.
You can use the CAO approach instead: create a factory (it can be a singleton) that gives you a CAO (Client-Activated Object) if nobody else owns an instance.
CAO is good for that because it ensures that if the client dies, the CAO will be released.
Explaining CAO fully is too much for a simple answer, but in short: a CAO is a class inherited from MarshalByRefObject that you create from your factory and return from one method (i.e., your Lock method); the object lives on the server and the client receives only a proxy. The object lives on the server as long as its lease is being refreshed by the client (done automatically while the object is referenced and the client is alive).
You may take a look at Ingo Rammer's articles and books on remoting.
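A rough, hypothetical sketch of that factory idea (simplified: the type and member names are made up, and a real implementation would also rely on the CAO's lease expiring to free the lock when a client dies):

using System;

// The CAO: it lives on the server; the client only ever holds a proxy.
public class MeasurementClass : MarshalByRefObject {
    public void RunMeasurement() {
        // drive the external measuring hardware here
    }
}

// Well-known singleton factory guarding exclusive access to the hardware.
public class MeasurementFactory : MarshalByRefObject {
    private static readonly object sync = new object();
    private static MeasurementClass current;

    // Lock(): hand out the instance only if nobody else owns it.
    public MeasurementClass Lock() {
        lock (sync) {
            if (current != null) {
                return null; // another client currently has control
            }
            current = new MeasurementClass();
            return current;
        }
    }

    public void Unlock() {
        lock (sync) {
            current = null;
        }
    }
}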
jmservera, thanks for all your help.
I have now found a solution that will work for me... I'm using the proxy pattern combined with the factory pattern. I use WellKnownObjectMode.Singleton so I can control how many active instances I have on my server.
And by doing it this way, I don't need to share my code with the client, only the interface (as you said before).
Regards
/Anders
