How do you deal with shared data in a stateless Grails service?

I was trying to implement a Grails SearchService that indexes certain text and stores it in memory for faster lookup. To store this data, I was using a private static property in the service, but the property kept randomly resetting its value. After rereading the documentation, I realized that this is likely because Grails services are supposed to be stateless, since they employ the singleton pattern. Still, I'm not sure I understand how a static variable can vary. Does the JVM load separate copies of service classes per thread? I'm not sure I'm wrapping my head around what's happening.
Nonetheless, now that I know I can't rely on static variables to store application-wide data, what's the best approach to store and access data for use across the application while keeping access synchronized and avoiding races?
Caused by: java.lang.IllegalStateException: Method on class [TEXTSTORE] was used outside of a Grails application. If running in the context of a test using the mocking API or bootstrap Grails correctly.
at SearchService.buildIndex(SearchService.groovy:63)
at SearchService$_indexAllDomains_closure2.doCall(SearchService.groovy:42)
at SearchService.indexAllDomains(SearchService.groovy:41)
at SearchService.$tt__rebuildIndex(SearchService.groovy:48)
at SearchService.afterPropertiesSet(SearchService.groovy:35)
... 4 more

You seem to be a bit confused about services in Grails. There is no reason why a service (defaulting to singleton) can't have shared state. It's not uncommon for a service to populate some cached or indexed data when it is created so it can be used by multiple callers.
Most often this is done by implementing the org.springframework.beans.factory.InitializingBean interface and making use of the afterPropertiesSet() method which is called when the service (Spring bean) has been created in the application context and all dependencies have been resolved.
For example:
package com.example

import org.springframework.beans.factory.InitializingBean

class MyExampleService implements InitializingBean {

    private List data

    def otherService

    void afterPropertiesSet() {
        data = otherService.doSomethingToFetchData()
    }

    // .. other stuff
}
By hooking into the lifecycle of the bean you can be fairly sure that even in development (when your service reloads because you've changed some code) it will still have the data needed.

Related

Dependency Injection and/vs Global Singleton

I am new to the dependency injection pattern. I love the idea, but struggle to apply it to my case. I have a singleton object, let's call it X, which I need often in many parts of my program, in many different classes, sometimes deep in the call stack. Usually I would implement this as a globally available singleton. How is this implemented within the DI pattern, specifically with the .NET Core DI container? I understand I need to register X with the DI container as a singleton, but how do I then get access to it? DI will instantiate classes whose constructors take a reference to X, and that's great, but I need X deep within the call hierarchy, within my own objects which .NET Core or the DI container know nothing about, in objects that were created using new rather than instantiated by the DI container.
I guess my question is: how is the global singleton pattern aligned with, implemented by, replaced by, or avoided with the DI pattern?
Well, "new is glue" (Link). That means if you have new'ed an instance, it is glued to your implementation. You cannot easily exchange it with a different implementation, for example a mock for testing. Like gluing together Lego bricks.
If you want to use proper dependency injection (whether or not you use a container/framework) you need to structure your program in a way that you don't glue your components together, but instead inject them.
Every class is basically at hierarchy level 1 then. You need an instance of your logger? You inject it. You need an instance of a class that needs a logger? You inject it. You want to test your logging mechanism? Easy: you just inject something that conforms to your logger interface and logs into a list, and at the end of your test you can check the list and see if all the required log entries are there. That is something you can automate (in contrast to using your normal logging mechanism and checking the log files by hand).
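To make that concrete, here is a minimal sketch of the idea in C#; ILogger, ListLogger and Greeter are hypothetical names used only for this illustration, not types from any particular framework:

using System.Collections.Generic;

// Hypothetical logging abstraction.
public interface ILogger
{
    void Log(string message);
}

// Test double: records log lines in memory so a test can assert on them.
public class ListLogger : ILogger
{
    public List<string> Lines { get; } = new List<string>();
    public void Log(string message) => Lines.Add(message);
}

// Any class that needs logging simply takes the interface in its constructor.
public class Greeter
{
    private readonly ILogger _logger;
    public Greeter(ILogger logger) => _logger = logger;

    public void Greet(string name) => _logger.Log($"Greeted {name}");
}

// In a test: inject the ListLogger and inspect it afterwards.
// var logger = new ListLogger();
// new Greeter(logger).Greet("Ada");
// logger.Lines now contains "Greeted Ada".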
That means in the end, you don't really have a hierarchy, because every class you have just gets their dependencies injected and it will be the container/framework or your controlling code that determines what that means for the order of instantiation of objects.
As far as design patterns go, allow me an observation: even now, you don't need a singleton. Right now in your program, it would work if you had a plain global variable. But I guess you read that global variables are "bad". And design patterns are "good". And since you need a global variable and singleton delivers a global variable, why use the "bad", when you can use the "good" right? Well, the problem is, even with a singleton, the global variable is bad. It's a drawback of the pattern, a toad you have to swallow for the singleton logic to work. In your case, you don't need the singleton logic, but you like the taste of toads. So you created a singleton. Don't do that with design patterns. Read them very carefully and make sure you use them for the intended purpose, not because you like their side-effects or because it feels good to use a design pattern.
Just an idea, and maybe I need your thoughts:
public static class DependencyResolver
{
    public static Func<IServiceProvider> GetServiceProvider;
}
Then in Startup:
public void Configure(IApplicationBuilder app, IServiceProvider serviceProvider)
{
    DependencyResolver.GetServiceProvider = () => serviceProvider;
}
And now in any deep class:
DependencyResolver.GetServiceProvider().GetService<IService>();
Here's a simplified example of how this would work without a singleton.
This example assumes that your project is built in the following way:
the entry point is main
main creates an instance of class GuiCreator, then calls the method createAndRunGUI()
everything else is handled by that method
So your simplified code looks like this:
// main
// ... (boilerplate)
container = new Container();
gui = new GuiCreator(container.getDatabase(), container.getLogger(), container.getOtherDependency());
gui.createAndRunGUI();
// ... (boilerplate)
// GuiCreator
public class GuiCreator {
    private IDatabase db;
    private ILogger log;
    private IOtherDependency other;

    public GuiCreator(IDatabase newdb, ILogger newlog, IOtherDependency newother) {
        db = newdb;
        log = newlog;
        other = newother;
    }

    public void createAndRunGUI() {
        // do stuff
    }
}
The Container class is where you actually define which implementations will be used, while the GuiCreator constructor takes interfaces as arguments. Now let's say the implementation of ILogger you choose itself has a dependency, defined by an interface its constructor takes as an argument. The Container knows this and resolves it accordingly by instantiating the Logger as new LoggerImplementation(getLoggerDependency());. This goes on for the entire dependency chain.
So in essence:
All classes keep instances of interfaces they depend upon as members.
These members are set in the respective constructor.
The entire dependency chain is thus resolved when the first object is instantiated. Note that there might/should be some lazy loading involved here.
The only places where the container's methods are accessed to create instances are in main and inside the container itself:
Any class used in main receives its dependencies from main's container instance.
Any class not used in main, but rather used only as a dependency, is instantiated by the container and receives its dependencies from within there.
Any class used neither in main nor indirectly as a dependency somewhere below the classes used in main will obviously never be instantiated.
Thus, no class actually needs a reference to the container. In fact, no class needs to know there even is a container in your project. All they know is which interfaces they personally need.
The Container can either be provided by some third party library/framework or you can code it yourself. Typically, it will use some configuration file to determine which implementations are actually supposed to be used for the various interfaces. Third party containers will usually perform some sort of code analysis supported by annotations to "autowire" implementations, so if you go with a ready-made tool, make sure you read up on how that part works because it will generally make your life easier down the road.
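To illustrate the above, here is a minimal hand-rolled container in the spirit of the example, sketched in C#; the concrete names (LoggerImplementation, DatabaseImplementation, FileSystemLogTarget, OtherDependencyImplementation, ILoggerDependency) are assumptions for the sake of the sketch, and a real container would typically read the mappings from configuration or discover them via annotations:

// Minimal hand-rolled container: the only place that knows which
// concrete classes implement which interfaces.
public class Container
{
    // Lazily created singletons, so every caller shares the same instance.
    private ILogger _logger;
    private IDatabase _database;
    private IOtherDependency _other;

    public ILogger GetLogger()
    {
        // The logger's own dependency is resolved here, inside the container,
        // so callers never need to know about it.
        return _logger ?? (_logger = new LoggerImplementation(GetLoggerDependency()));
    }

    public IDatabase GetDatabase()
    {
        return _database ?? (_database = new DatabaseImplementation());
    }

    public IOtherDependency GetOtherDependency()
    {
        return _other ?? (_other = new OtherDependencyImplementation());
    }

    private ILoggerDependency GetLoggerDependency()
    {
        return new FileSystemLogTarget();
    }
}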

How to inject an e4view with Guice injection

I am working on an existing Eclipse RCP based on Luna which consists of 99% 3.x API. We want to change this in an ongoing process; so when I was given the task of creating a new view, I wanted to use the new (in Luna, anyways) e4view element for the org.eclipse.ui.views extension point.
My problem is that part of the RCP uses xtext and thus, several components are available by using Guice.
I am now stranded with something like this
public class MyViewPart
{
    @Inject // <- should be injected via Guice (I used @com.google.inject.Inject, otherwise E4DI would complain)
    ISomeCustomComponent component;

    @PostConstruct // <- should be called and injected via E4 DI
    public void createView(Composite parent)
    {
        // ...
    }
}
To get this injected with Guice, I would normally use an AbstractGuiceAwareExecutableExtensionFactory (as usually done in Xtext contexts), like this:
<plugin>
   <extension
         point="org.eclipse.ui.views">
      <e4view
            class="my.app.MyExecutableExtensionFactory:my.app.MyViewPart"
            id="my.app.view"
            name="my view"
            restorable="true">
      </e4view>
   </extension>
</plugin>
But I did not expect this to work, because I thought it would bypass the E4 mechanism. (Actually, it seems to be the other way round: the e4view.class element seems to ignore the extension factory and just uses my.app.MyViewPart, injecting it with E4 DI. To be sure, I have set a class-loading breakpoint on MyViewPart, which is hit from ContextInjectionFactory.make().)
As I said, I didn't expect both DI frameworks to coexist without conflict, so I think the solution to my problem would be to put those objects which I need injected into the E4 context.
I have googled a bit but I have found multiple approaches, and I don't know which one is the "correct" or "nice" one.
Among the approaches I have found, there are:
providing context functions which delegate to the guice injector
retrieving the objects from guice and configure them as injector bindings
retrieving the objects from guice, obtain a context and put them in the context
(The first two approaches are mentioned in the "Configure Bindings" section of https://wiki.eclipse.org/Eclipse4/RCP/Dependency_Injection)
And of course I could get the objects from Guice in the MyViewPart implementation, but that's not what I want...
[Edit:] In the meantime I have explored the options above a bit more:
Context Functions
I tried to register the context functions as services in the Bundle Activator with this utility method:
private void registerGuiceDelegatingInjection(final BundleContext context, final Class<?> clazz)
{
    IContextFunction func = new ContextFunction()
    {
        @Override
        public Object compute(final IEclipseContext context, final String contextKey)
        {
            return guiceInjector.getInstance(clazz);
        }
    };

    ServiceRegistration<IContextFunction> registration =
        context.registerService(IContextFunction.class, func,
            new Hashtable<>(Collections.singletonMap(
                IContextFunction.SERVICE_CONTEXT_KEY, clazz.getName()
            )));
}
and called registerGuiceDelegatingInjection() in the BundleActivator's start() method for each class I needed to be retrieved via Guice.
For some reason, however, this did not work. The service itself was registered as expected (I checked via the OSGi console) but the context function was never called. Instead I got injection errors that the objects could not be found during injection. Maybe the context functions cannot be contributed dynamically but have to be contributed via declarative services, so they are known as soon as the platform starts?
(Answer here is: yes. As the JavaDoc to IContextFunction says: Context functions can optionally be registered as OSGi services [...] to seed context instances with initial values. - and since the application context already exists when my bundle is started, the dynamically registered service is not seen by the ContextFactory in time).
Injector Bindings
I quickly found out that this solution does not work for me, because you can only specify an interface-class to implementation-class mapping in the form
InjectorFactory.getDefault().addBinding(IMyComponent.class).implementedBy(MyComponent.class)
You obviously cannot configure instances or factories this way, so this is not an option, because I need to delegate to Guice and get Guice-injected instances of the target classes...
Putting the objects in the context
This currently works for me, but is not very nice. See answer below.
[Edit 2:] As I have reported, putting the objects in the (application) context works for me. The downside is that having the objects in the application context is too global. If I had two or more bundles which would require injection of object instances for another DSL, I would have to take care (e.g., by using #Named annotations) to not get the wrong instance injected.
What I would like better is a way to extend the Part's context with which my e4view is created and injected directly. But so far I have not found a way to explicitly target that context when putting in my instances ...
Thanks for any further hints...
Try the processor mechanism of E4: You should be using a (Pre or Post) Processor (along with the PostContextCreate annotation) to register your POJOs into the (global) IEclipseContext.
The solution that worked for me best so far was getting the IEclipseContext and put the required classes there myself during the bundle activator's start() method.
private void registerGuiceDelegatingInjection(final BundleContext context, final Class<?> clazz)
{
    IServiceLocator s = PlatformUI.getWorkbench();
    IEclipseContext ctx = (IEclipseContext) s.getService(IEclipseContext.class);
    ctx.set(clazz.getName(), guiceInjector.getInstance(clazz));
}
This works at least for now. I am not sure how it works out in the future if more bundles would directly put instances in the context; maybe in the long-term named instances would be needed. Also, for me this works, because the injected objects are singletons, so it does not do any harm to put single instances in the context.
I would have liked the context function approach better, but I could not get it to work so far.

How to make the Controller a single instance per application in ASP.NET MVC?

Over time, controllers develop a lot of dependencies, and creating a controller instance becomes too expensive for each request (especially with DI). Is there any solution to make controllers singletons?
Creating instances of controllers is a pretty fast and simple operation. What becomes too expensive is creating the dependencies for each request. So, what you really need is many controllers which share the same instances of their dependencies.
E.g. you have the following controller:
public class SalesController : Controller
{
    private IProductRepository productRepository;
    private IOrderRepository orderRepository;

    public SalesController(IProductRepository productRepository,
                           IOrderRepository orderRepository)
    {
        this.productRepository = productRepository;
        this.orderRepository = orderRepository;
    }

    // ...
}
You should configure your dependency injection framework to use the same instances of the repositories for the whole application (keep in mind that you can run into synchronization problems). Now creating the dependencies is not expensive any more. All dependencies are instantiated only once and reused for all requests.
If you have many dependencies and you are worried about the cost of obtaining a reference to each dependency and handing those references to the controller instance (which I don't think will be very expensive), then you can group your dependencies (something like the Introduce Parameter Object refactoring):
public class SalesController : Controller
{
    private ISalesService salesService;

    public SalesController(ISalesService salesService)
    {
        this.salesService = salesService;
    }

    // ...
}

public class SalesService : ISalesService
{
    private IProductRepository productRepository;
    private IOrderRepository orderRepository;

    public SalesService(IProductRepository productRepository,
                        IOrderRepository orderRepository)
    {
        this.productRepository = productRepository;
        this.orderRepository = orderRepository;
    }

    // ...
}
Now you have a single dependency, which will be injected very quickly. If you configure your dependency injection framework to use a singleton SalesService, then all SalesControllers will reuse the same instance of the service. Creation of controllers and provision of their dependencies will be very fast.
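For example, the registration could look like the following sketch. It uses the ASP.NET Core built-in container for brevity; in classic ASP.NET MVC the same idea applies with a singleton lifetime in whatever container you use (Unity, Autofac, etc.), and the concrete ProductRepository, OrderRepository and SalesService classes are assumed here:

public void ConfigureServices(IServiceCollection services)
{
    // One shared instance per application; these classes must be thread-safe.
    services.AddSingleton<IProductRepository, ProductRepository>();
    services.AddSingleton<IOrderRepository, OrderRepository>();
    services.AddSingleton<ISalesService, SalesService>();

    services.AddMvc();
}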
So first an answer to the original question:
public void ConfigureServices(IServiceCollection services) {
    // put other services bindings here

    // bind all Controller classes as singletons
    services.AddSingleton<HomeController, HomeController>();

    // tell framework to obtain Controller instances from ServiceProvider
    services.AddMvc().AddControllersAsServices();
}
As stated in the original question, if controllers have big dependency trees consisting mainly of request Scoped or Transient dependencies then creating them separately for each request may have some footprint on scalability of your application (in Java for example Servlet instances are singletons by default exactly for this reason). While usually CPU and real time needed to create even a big dependency tree is negligible (unless you have some heavy computations or network communication in constructors of your components, which almost never is a good idea for transient or request scoped components), the memory usage footprint is something to reckon with. In case of common DB-Web apps memory is the main factor limiting number of concurrent requests that a single machine-node can handle. If every request has a separate copy of a big dependency tree, together they may consume a significant amount of memory (the other thing to watch for is initial stack size for a new thread, by the way).
The accepted answer 1220560 solves the problem as well, but I would consider it an ugly hack and it has some drawbacks: you need to create an artificial singleton service that your Controllers use either as a service locator or as a proxy for other services. If you have just one such singleton object for all your controllers, then you are effectively hiding the real dependencies of your Controller: for example, if someone wants to write a unit test for your Controller, they need to carefully analyse its implementation to see which dependencies it actually uses, so that they know which mocks/fakes to provide in the test setup. If you later change your Controller and as a result the subset of services it uses changes as well, it is very easy to forget to update the test setup too. This can lead to bugs that are hard to track down. By contrast, if your dependencies are declared explicitly as constructor params, you will get a compiler error in the test setup right away. Another option is to have a separate such singleton proxy/service locator per controller, but then it's a lot of hassle, basically.
Regardless of whether you use the solution proposed by me or the one from answer #1220560, you must be careful when injecting request-scoped dependencies into singleton objects, as described in https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection#registering-your-own-services right at the end of the "registering-your-own-services" section. You can find possible solutions to this problem here: how to use scoped dependency in a singleton in C# / ASP
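One common way to handle that, sketched against the ASP.NET Core DI abstractions (ReportBuilder and IOrderRepository are assumed names), is to inject IServiceScopeFactory into the singleton and create a short-lived scope whenever the scoped dependency is needed:

using Microsoft.Extensions.DependencyInjection;

// Hypothetical singleton that occasionally needs a scoped dependency.
public class ReportBuilder
{
    private readonly IServiceScopeFactory _scopeFactory;

    public ReportBuilder(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    public void BuildReport()
    {
        // Create a scope instead of capturing the scoped service at construction time,
        // so the scoped instance lives only as long as this operation.
        using (var scope = _scopeFactory.CreateScope())
        {
            var orders = scope.ServiceProvider.GetRequiredService<IOrderRepository>();
            // ... use the scoped repository only inside this block
        }
    }
}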
Another thing to watch for is concurrency issue: singleton objects may be accessed concurrently by several threads handling different concurrent requests, so make sure to add proper synchronization to any non-thread-safe resources your singleton uses.
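For instance, a singleton cache shared by concurrent requests can lean on a thread-safe collection instead of hand-rolled locking (PriceCache is an assumed name used only for illustration):

using System;
using System.Collections.Concurrent;

// Hypothetical singleton cache shared by concurrent request threads.
public class PriceCache
{
    // ConcurrentDictionary handles the synchronization internally,
    // so concurrent readers and writers are safe without explicit locks.
    private readonly ConcurrentDictionary<string, decimal> _prices =
        new ConcurrentDictionary<string, decimal>();

    public decimal GetOrAdd(string productId, Func<string, decimal> loadPrice)
    {
        return _prices.GetOrAdd(productId, loadPrice);
    }
}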
edit:
I've just realized the original question was about ASP.NET and this answer is for ASP.NET Core, so it probably won't work for "non-Core".

DI Container and custom-scoped state in legacy system

I believe I understand the basic concepts of DI / IoC containers, having written a couple of applications using them and read a lot of Stack Overflow answers as well as Mark Seemann's book. There are still some cases that I have trouble with, especially when it comes to integrating a DI container into a large existing architecture where the DI principle hasn't really been used (think big ball of mud).
I know the ideal scenario is to have a single composition root / object graph per operation but in a legacy system this might not be possible without major refactoring (only the new and some select refactored old parts of the code could have dependencies injected through constructor and the rest of the system using the container as a service locator to interact with the new parts). This effectively means that a stack trace deep within an operation might include several object graphs with calls being made back and forth between new subsystems (single object graph until exiting into an old segment) and traditional subsystems (service locator call at some point to code under DI container).
With the (potentially faulty, I might be overthinking this or be completely wrong in assuming this kind of hybrid architecture is a good idea) assumptions out of the way, here's the actual problem:
Let's say we have a thread pool executing scheduled jobs of various types defined in database (or any external place). Each separate type of scheduled job is implemented as a class inheriting a common base class. When the job is started, it gets fed the information about which targets it should write its log messages to and the configuration it should use. The configuration could probably be handled by just passing the values as method parameters to whatever class needs them but if the job implementation gets larger than say 10-20 classes, it doesn't seem very handy.
Logging is the larger problem. Subsystems the job calls probably also need to write things to the log and usually in examples this is done by just requesting instance of ILog in the constructor. But how does that work in this case when we don't know the details / implementation until runtime? Since:
Due to (non DI container controlled) legacy system segments in the call chain (-> there potentially being multiple separate object graphs), child container cannot be used to inject the custom logger for specific sub-scope
Manual property injection would basically require the complete call chain (including all legacy subsystems) to be updated
A simplified example to help better perceive the problem:
class JobXImplementation : JobBase {
    // through constructor injection
    ILoggerFactory _loggerFactory;
    JobXExtraLogic _jobXExtras;

    public void Run(JobConfig configurationFromDatabase)
    {
        ILog log = _loggerFactory.Create(configurationFromDatabase.targets);

        // if there were no legacy parts in the call chain, I would register log as instance to a child container
        // and Resolve next part of the call chain and everyone requesting ILog would get the correct logging targets

        // do stuff
        _jobXExtras.DoStuff(configurationFromDatabase, log);
    }
}

class JobXExtraLogic {
    public void DoStuff(JobConfig configurationFromDatabase, ILog log) {
        // call to legacy sub-system
        var old = new OldClass(log, configurationFromDatabase.SomeRandomSetting);
        old.DoOldStuff();
    }
}

class OldClass {
    public void DoOldStuff() {
        // moar stuff
        var old = new AnotherOldClass();
        old.DoMoreOldStuff();
    }
}

class AnotherOldClass {
    public void DoMoreOldStuff() {
        // call to a new subsystem
        var newSystemEntryPoint = DIContainerAsServiceLocator.Resolve<INewSubsystemEntryPoint>();
        newSystemEntryPoint.DoNewStuff();
    }
}

class NewSubsystemEntryPoint : INewSubsystemEntryPoint {
    public void DoNewStuff() {
        // want to log something...
    }
}
I'm sure you get the picture by this point.
Instantiating old classes through DI is a non-starter since many of them use (often multiple) constructors to inject values instead of dependencies and would have to be refactored one by one. The caller basically implicitly controls the lifetime of the object and this is assumed in the implementations (the way they handle internal object state).
What are my options? What other kinds of problems could you possibly see in a situation like this? Is trying to only use constructor injection in this kind of environment even feasible?
Great question. In general, I would say that an IoC container loses a lot of its effectiveness when only a portion of the code is DI-friendly.
Books like Working Effectively with Legacy Code and Dependency Injection in .NET both talk about ways to tease apart objects and classes to make DI viable in code bases like the one you described.
Getting the system under test would be my first priority. I'd pick a functional area to start with, one with few dependencies on other functional areas.
I don't see a problem with moving beyond constructor injection to setter injection where it makes sense, and it might offer you a stepping stone to constructor injection. Adding a property is usually less invasive than changing an object's constructor.
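A minimal sketch of what that stepping stone can look like in C#; LegacyReportWriter, ILog, NullLog and the Log property are hypothetical names used only for illustration:

// Hypothetical logging abstraction and a do-nothing default.
public interface ILog { void Write(string message); }
public class NullLog : ILog { public void Write(string message) { } }

// The existing constructors stay untouched; the dependency becomes an
// optional property with a safe default, so old call sites keep working.
public class LegacyReportWriter
{
    public ILog Log { get; set; } = new NullLog();

    public void Write()
    {
        Log.Write("writing report");
        // ... existing legacy logic unchanged
    }
}

// New or refactored call sites can opt in to the injected logger:
// var writer = new LegacyReportWriter();
// writer.Log = log;   // setter (property) injection
// writer.Write();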

How to avoid having injector.createInstance() all over the place when using guice?

There's something I just don't get about guice: According to what I've read so far, I'm supposed to use the Injector only in my bootstrapping class (in a standalone application this would typically be in the main() method), like in the example below (taken from the guice documentation):
public static void main(String[] args) {
    /*
     * Guice.createInjector() takes your Modules, and returns a new Injector
     * instance. Most applications will call this method exactly once, in their
     * main() method.
     */
    Injector injector = Guice.createInjector(new BillingModule());

    /*
     * Now that we've got the injector, we can build objects.
     */
    RealBillingService billingService = injector.getInstance(RealBillingService.class);
    ...
}
But what if not all Objects I ever need can be created during startup? Maybe I want to respond to some user interaction when the application is running? Don't I have to keep my injector around somewhere (e.g. as a static variable) and then call injector.getInstance(SomeInterface.class) when I need to create a new object?
Of course spreading calls to Injector.getInstance() all over the place seems not to be desirable.
What am I getting wrong here?
Yes, you basically should only use the Injector to get the instance of the root object. The rest of the application shouldn't touch the Guice container. As you've noticed, you still need to create some objects when required. There are different approaches for doing that, each suitable for different needs.
Inject a Provider
Provider is an interface from Guice. It allows you to request a new instance of an object. That object will be created by Guice. For example:
import com.google.inject.Inject;
import com.google.inject.Provider;

class MyService {
    private final Provider<Transaction> transactionProvider;

    @Inject
    public MyService(Provider<Transaction> transactionProvider) {
        this.transactionProvider = transactionProvider;
    }

    public void actionStarted() {
        // a fresh, Guice-created Transaction for each action
        Transaction transaction = transactionProvider.get();
    }
}
Build a Factory
Often you need some kind of factory. This factory uses some injected services and some parameters and creates a new object for you. You then inject that factory and use it to create new instances. There is also help for this in the AssistedInject extension.
I think with these two possibilities you rarely need to use the Guice Injector itself. However, sometimes it is still appropriate to use the injector directly. Then you can inject the Injector into a component.
To extend on the answer Gamlor posted, you need to also differentiate between the object types you are using.
For services, injection is the correct solution. However, don't try to always make data objects (which are generally the leaves in your object graph) injectable. There may be situations where that is the correct solution, but injecting a Provider<List> is probably not a good idea. A colleague of mine ended up doing that, and it made the code base very confusing after a while. We just finished cleaning it all out, and the Guice modules are much more specific now.
In the abstract, I think the general idea is that if responding to user events is part of the capabilities of your application, then, well...
BillingService billingService = injector.getInstance(BillingService.class);
billingService.respondToUserEvent( event );
I guess that might be a little abstract, but the basic idea is that you get from Guice your top-level application class. Judging from your question, I guess that maybe BillingService isn't your top-level class?
