Creating repositories with spring.net

I'm creating my repositories with Spring.NET. However, I'm wondering what the lifetime of these objects is. In my repositories, objects retrieved from the database are cached in a registry, but this should only happen for a single server call. Can you specify in the Spring.NET configuration that the objects should be created anew for each call to the server?
I guess singleton=false doesn't do it for me, as this will create a new repository every time, even in the same thread.

From your previous posts I see you put all your repositories in a registry class to retrieve them.
I'd step away from that approach and inject the repositories directly into the classes that need them. Then it becomes a lot clearer what the lifetime of your objects is.
You should look at the other scopes Spring.NET has to offer as well.
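For illustration, a minimal sketch of constructor injection (the class and interface names are hypothetical):

// Hypothetical names. The repository arrives through the constructor,
// so the container - not a registry - controls its lifetime.
public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public Order GetOrder(int id)
    {
        return _repository.FindById(id);
    }
}

Spring.NET's web support also offers a request scope (scope="request" on the object definition), which sounds like the per-server-call lifetime you're after.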

This is a complicated question, because the design of the cache and registry comes into play. It sounds like the lifetime of the persistent objects will be controlled by the registry, since it will be maintaining references.
So there are a few things to ask:
Which object owns the cache? The repository, the service, or something else?
How are you invalidating the cache? Is it keeping track of when persistent objects are updated?
What's the timeout value for the session in which the objects are created? How would an invalidated session be communicated to the cache?
When you say "registry", do you mean "Windows registry"? (God forbid; please say "no".)
In Spring for Java EE, one usually gets configurable caching with Hibernate and EhCache. If you use the Spring JDBC template you have to write it yourself. What implementation are you using for your repositories?

Related

How do you avoid injecting global state when sharing dependencies in DI?

Imagine you inject a single database connection into a handful of service classes. They now share what is essentially global mutable state. How do DI frameworks deal with this? Do they:
Freeze the dependency before injection?
Only share immutable objects?
Wrap each dependency in a decorator that exposes only what each consumer actually depends on?
I tried searching for this and am a bit surprised I didn't find much. Feel free to provide links.
Related: https://en.wikipedia.org/wiki/Principle_of_least_privilege
Most DI containers let you register a dependency with a lifetime. For instance, in .NET Core DI you can register a service with three different lifetimes:
Singleton: There is a single instance. All consumers of that service will use that instance. If one consumer changes the state of that dependency, all the other consumers will see that change.
Scoped: There is one instance per scope, where a scope is a web request. If a consumer changes the state of a scoped service, all the other consumers that will run in the same web request will see the change.
Transient: Each consumer uses a different instance of the service.
Again in .NET Core, the DbContext is (by default) registered as a scoped service. This means that within the same web request all consumers use the same instance, which is useful when you need to run a transaction across different consumers (or rather, across different repositories).
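A minimal sketch of these registrations (the service names are hypothetical; AddDbContext comes from EF Core):

// In Startup.ConfigureServices. IClock, ICartService and IEmailSender
// are hypothetical services, used only to illustrate the three lifetimes.
services.AddSingleton<IClock, SystemClock>();           // one instance for the whole app
services.AddScoped<ICartService, CartService>();        // one instance per web request
services.AddTransient<IEmailSender, SmtpEmailSender>(); // a new instance per resolve

// AddDbContext registers the context as scoped by default, so all
// consumers within one request share the same DbContext instance.
services.AddDbContext<AppDbContext>(
    options => options.UseSqlServer(connectionString));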

Autofac Dependency Injection Azure function SingleInstance

I followed these links: https://dontcodetired.com/blog/post/Azure-Functions-Dependency-Injection-with-Autofac
Autofac Binding at Runtime
It worked fine. What I want to know is: when an Azure Function scales out, will an object injected into the function be shared by all of its instances? In this case the object is NaiveInvestmentAllocator.
Let me know if anything is unclear. I actually implemented a combination of the two links: a factory pattern is used to get instances from the Autofac container. I can share the code if anyone wants it, but I don't think that's necessary.
My question: if I implement the first link's approach, is the injected object shared by all instances of the same Azure Function or not?
Nope.
As Azure Functions scale out, the additional instances run on different VMs/containers. It's similar to running your function app on different VMs/containers manually.
If the requirement is to have a shared state across multiple function app instances, you should offload the state persistence to something like Redis, Table Storage, Blob Storage, etc.
For example, you can use Azure Cache for Redis and inject a client for it into your service class.
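A minimal sketch with StackExchange.Redis (the registration and class names are hypothetical):

// One ConnectionMultiplexer per function app instance; it is thread-safe
// and intended to be shared, so register it as a single instance.
builder.RegisterInstance(ConnectionMultiplexer.Connect(redisConnectionString))
       .As<IConnectionMultiplexer>()
       .SingleInstance();

// A service that keeps its state in Redis instead of in memory, so every
// scaled-out instance of the function sees the same data.
public class AllocatorStateStore
{
    private readonly IDatabase _db;

    public AllocatorStateStore(IConnectionMultiplexer redis)
    {
        _db = redis.GetDatabase();
    }

    public void Save(string key, string value) => _db.StringSet(key, value);
    public string Load(string key) => _db.StringGet(key);
}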
If the intention is to save the number of open connections, note that the limit is per instance.

Downside of using my own Autofac LifetimeScope in MVC application

I am writing an MVC UI wrapper that reuses legacy core libraries originally written for the desktop edition, which use Autofac for DI. The problem I am facing is that the core libraries work with a lifetime scope I can't change, while MVC requires InstancePerRequest.
So what happens is that in MVC, if I register my services with InstancePerRequest scope, they get disposed by the core libraries before the request completes, which makes the MVC application unhappy.
I tried using a lifetime scope for all services in the MVC app too. Since the lifetime scope is shorter than the request lifetime, it appears to work in MVC.
Is there any downside in this approach?
Note: in the legacy code, services are always resolved manually instead of being injected through constructors, like:
using (var scope = IocContainer.BeginLifetimeScope())
{
    var service = scope.Resolve<IMyService>();
    return service.FindAll();
}
MVC will work with InstancePerLifetimeScope for services as noted in the documentation about sharing registrations across apps that have and apps that don't have request scopes.
I think there are going to be potentially two gotchas in your approach to creating your own lifetime scope. Whether you can live with them is very much app specific so you'll have to judge for yourself.
Problem 1: Early Disposal
In your example you show a factory or service IMyService being resolved, doing some work, and returning that work. At the end of the using statement the owning lifetime scope gets disposed. That means IMyService will be disposed (if it's IDisposable) and any dependencies that IMyService requires will also be disposed. In the case of things like database contexts or connections, that could well mean the return value becomes invalid, because you won't be able to update values or read additional data through a disposed connection.
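A sketch of how that bites, assuming FindAll returns a lazily evaluated sequence (the types are hypothetical):

IEnumerable<Order> orders;
using (var scope = IocContainer.BeginLifetimeScope())
{
    var service = scope.Resolve<IMyService>();
    orders = service.FindAll();
}
// Enumerating here throws: the scope - and with it the database
// connection the deferred query needs - has already been disposed.
foreach (var order in orders)
{
    Console.WriteLine(order.Id);
}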
Problem 2: Singleton/Sharing Issues
Lifetime scopes are sometimes used to isolate units of work or sets of components that need shared context. For example, in MVC you only have one instance of the controller for the whole request - no matter how many times you resolve the controller object, for that request it'll be the same instance. You might see a similar thing with database connections - one connection from the pool allocated for an entire request lifetime.
By creating your own lifetime scope you are also creating a sort of logical unit of work. Any dependencies for IMyService will not be shared with the rest of the MVC request. In fact, it's more like that tiny lifetime scope is its own request or its own unit of work. No overlap.
General Resolution
As noted in the doc I linked to earlier, register things as InstancePerLifetimeScope if they need to be used in both MVC and non-MVC contexts and just let the MVC request semantics handle spinning up and disposal of scopes if possible.
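A minimal sketch of that registration (the type names are hypothetical):

var builder = new ContainerBuilder();

// One instance per lifetime scope: per request in MVC, per manually
// created scope in the legacy code. No dependency on InstancePerRequest.
builder.RegisterType<MyService>().As<IMyService>().InstancePerLifetimeScope();

var container = builder.Build();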
If that won't work, it'll be up to you and your app code to figure out if you can live with the issues here or if you need to address them. If you need to address them, that, too, will be app specific so there isn't "guidance" to provide - you're on your own for that.

Testing domain model in a JSF 2.0 application running on Glassfish server

I have just implemented some entities and some backing beans. However, I would like to know if there is some way to test the domain model, or entities, on the application server, which in this case is Glassfish.
For example, when I have added a new entity, I would like to test that the persistence is correct by writing and reading the entity and maybe doing some operations.
I have used JUnit for standard applications, but a web application deployed on an application server confuses me.
What is the standard way to deal with this? I have heard something about JSFUnit, but I didn't see any examples for Glassfish (maybe it doesn't matter?).
PS: My project involves EJBs, which I assume require testing either on an embedded application server or on a hosted server?
Can you please help me to understand what is the best practice to deal with this kind of stuff?
Best regards
Preferably you want out-of-container testing, which is very possible since JPA works outside the container. Just set up a new persistence.xml designed for testing, configured as if you were using Java SE only, and you can test your entities. You will have to set the EntityManager instance yourself though, since you are not inside your container: either let your test classes inherit from the EJB you are testing and set the protected EntityManager instance to one from your EntityManagerFactory, or add a setter for the EntityManager in each of your DAO EJBs. Then you should be ready to go. You will also have to handle your transactions manually, which should work the same way for most calls, since you will probably want to roll back the changes from each test.

nHibernate strategies in a web farm

Our current project at work is a new MVC web site that will use a WCF service primarily to access a 3rd party billing system via a web service as well as a small SQL database for user personalization. The WCF service uses nHibernate for the SQL database.
We'd like to implement some sort of web farm for load balancing as well as failover and maintenance. I'm trying to decide the best way to handle nHibernate's caching and database concurrency if there are multiple WCF services running.
Some scenarios I've been thinking about...
1) Multiple IIS servers, one WCF server. With this setup, the WCF server would be a single point of failure, but there would be no issues with nHibernate caching or database concurrency.
2) Multiple IIS servers, each with its own WCF service. This removes the single point of failure, but now nHibernate on one machine would not know about database changes made by another machine.
One solution to number 2 would be to use an IStatelessSession so we're not doing any caching and nHibernate always fetches directly from the database. This might be the most feasible option, as our personalization database has very few objects in it. I'm also considering a second-level cache such as memcached or Velocity, but it may be overkill for this system.
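(For reference, a minimal sketch of the IStatelessSession option; the entity and variable names are hypothetical:)

// No first-level or second-level cache: every call hits the database.
using (IStatelessSession session = sessionFactory.OpenStatelessSession())
using (ITransaction tx = session.BeginTransaction())
{
    var settings = session.Get<UserSettings>(userId);
    settings.Theme = "dark";
    session.Update(settings); // stateless sessions need explicit updates
    tx.Commit();
}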
I'm putting this out there to see if anyone has experience doing this sort of architecture and to get some ideas for a solution. Thanks!
Am I missing something here? I don't see a problem with nHibernate on the web servers.
The application cache would not be a problem, as each nHibernate box would keep its own cache, populated from the datastore. Look at creating a table that can be monitored for reasons to do a cache refresh. We used to do this with the CacheDependency class in .NET 2.0, which would detect changes to a column and then remove the relevant item from the cache. So if a user inserts a new product, the cache entry would be dropped and the next call to get the products would load the cache again. It's old, but check out http://msdn.microsoft.com/en-us/magazine/cc163955.aspx#S2 for the concept. Cheers.
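A sketch of that idea with SqlCacheDependency (this assumes notifications have been enabled for the database and the Products table, e.g. via SqlCacheDependencyAdmin, and that "MyDb" matches a <sqlCacheDependency> entry in web.config):

// Evicts the cached list automatically when the Products table changes,
// so the next call reloads it from the datastore.
var dependency = new SqlCacheDependency("MyDb", "Products");
HttpRuntime.Cache.Insert("products", products, dependency);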
I would suggest not doing caching until not doing caching becomes a problem. Your DB will do its own caching to save you searching for the same data repeatedly, so the only thing you have to worry about is data across the wire. Judging by your description, you're not going to have a problem there. If you ever get to that stage, use a distributed cache - letting your servers cache separately will give you bouncing-data problems on refresh.
