J2EE and external URL handlers, having doubts about design

I am developing a J2EE application on Glassfish 3.1 which will use an external library that relies heavily on URLs. The URLs will use a custom protocol (e.g. db:123, where 123 is the ID of a record in a database). I am having doubts about how to implement the URL protocol handler (and its URLConnection implementation), because this protocol handler will use EJBs to fetch data from the database.
The db protocol handler needs to be registered globally at JVM startup through the -Djava.protocol.handler.pkgs flag. (I couldn't find a better way to do this in Glassfish.) Anyway, because it is registered at JVM startup, it has no knowledge of any EJBs that it may call when opening URL streams. So what I did is create a singleton registry class with which database handlers can be registered. This registry is used by the URLConnection whenever a stream is requested (it searches the registry for a database handler and uses it to fetch data).
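Roughly, the handler side looks like this (a sketch only; names such as dbhandlers, DbDataHandler and DbHandlerRegistry are placeholders, not my actual code):

// With -Djava.protocol.handler.pkgs=dbhandlers the JVM expects the class
// dbhandlers.db.Handler to handle db: URLs. In a real deployment the registry
// and the callback interface would have to live on a classpath visible both to
// the JVM-level handler and to the EJB module.
package dbhandlers.db;

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;
import java.util.concurrent.atomic.AtomicReference;

// What an EJB must provide to serve db: URLs.
interface DbDataHandler {
    InputStream open(URL url) throws IOException;
}

// The singleton registry consulted when a stream is requested.
class DbHandlerRegistry {
    private static final AtomicReference<DbDataHandler> HANDLER =
            new AtomicReference<DbDataHandler>();

    static void register(DbDataHandler handler) { HANDLER.set(handler); }
    static DbDataHandler current()              { return HANDLER.get(); }
}

public class Handler extends URLStreamHandler {
    @Override
    protected URLConnection openConnection(URL url) throws IOException {
        return new URLConnection(url) {
            @Override
            public void connect() { /* nothing to do until a stream is requested */ }

            @Override
            public InputStream getInputStream() throws IOException {
                DbDataHandler db = DbHandlerRegistry.current();
                if (db == null) {
                    throw new IOException("No database handler registered for " + url);
                }
                return db.open(url); // the handler extracts the record ID (e.g. 123) itself
            }
        };
    }
}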
Then, an EJB will register itself as a database handler in a @PostConstruct method. This way, any time a db:XXX URL is used, EJBs will be called indirectly to fetch the data.
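And the EJB side of the registration, again as a sketch with placeholder names (@Startup added here so the bean is created eagerly, before the first db: URL is opened):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup // create the bean eagerly so db: URLs work from application start
public class DatabaseHandlerBean implements DbDataHandler {

    @PostConstruct
    void registerWithUrlHandler() {
        DbHandlerRegistry.register(this);
    }

    @Override
    public InputStream open(URL url) throws IOException {
        String id = url.toExternalForm().substring("db:".length()); // e.g. "123"
        return new ByteArrayInputStream(fetchRecord(id));
    }

    private byte[] fetchRecord(String id) {
        // placeholder: a real implementation would use an injected EntityManager here
        throw new UnsupportedOperationException("illustrative stub");
    }
}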
Somehow I feel that this design is a bit dirty, but I'm very limited because of custom URL handlers. If you have any suggestions or tips that you can give me, that would be great.

Related

Passing data from a POST request and broadcasting to a websocket in Micronaut

Let's say I have a class called "WebSocketAdapter" annotated with @ServerWebSocket. This class has @OnOpen, @OnClose, @OnMessage functions similar to the chat example.
Inside my class I have a constructor that is passed a WebSocketBroadcaster. Inside my socket functions I have a WebSocketSession which I can save to the object if I want, but I am actually using the broadcaster to broadcast to all open sockets.
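In Java terms (the Kotlin version has the same shape), the websocket class looks roughly like this; the method bodies and the /ws/stream path are illustrative:

import io.micronaut.websocket.WebSocketBroadcaster;
import io.micronaut.websocket.WebSocketSession;
import io.micronaut.websocket.annotation.OnClose;
import io.micronaut.websocket.annotation.OnMessage;
import io.micronaut.websocket.annotation.OnOpen;
import io.micronaut.websocket.annotation.ServerWebSocket;

@ServerWebSocket("/ws/stream")
public class WebSocketAdapter {

    private final WebSocketBroadcaster broadcaster;

    public WebSocketAdapter(WebSocketBroadcaster broadcaster) {
        this.broadcaster = broadcaster;
    }

    @OnOpen
    public void onOpen(WebSocketSession session) {
        broadcaster.broadcastSync("joined: " + session.getId());
    }

    @OnMessage
    public void onMessage(String message, WebSocketSession session) {
        broadcaster.broadcastSync(message); // push to every open socket
    }

    @OnClose
    public void onClose(WebSocketSession session) {
        broadcaster.broadcastSync("left: " + session.getId());
    }
}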
Next, I have a @Controller class with a @Post controller function. This just writes the posted data with println.
This may or may not be relevant: I am using a @Singleton with DefaultRouteBuilder to @Inject the POST controller dynamically.
Finally, I have my index.html set up as a static resource with a simple script that consumes the websocket and appends data to the DOM.
So, I can stand up Micronaut, visit localhost, and see data stream in from my socket to the page. Also, I can post to my endpoint and see the data in the console.
My question is: how can I make my socket session broadcast when I post to the POST controller? How exactly do I inject the websocket as a dependency of the post controller so I can send the message posted to the server to all open browsers? Note: I am using Kotlin but am open to suggestions in any language.
Things I have tried:
Passing WebSocketSession directly into the post controller and hoping it gets 'beaned' in
Trying to access the bean via BeanContext.run().getBean(WebSocketAdapter::class.javaClass) and using its broadcaster or session
Making the @ServerWebSocket a @Singleton and using @Inject on the session and trying to access it
Trying to find the bean using @ApplicationContext and using its session
Using Rx to pass data between the classes (I am familiar with RxSwift)
I seem to be getting an error like: Bean Context must support property resolution
The documentation says
The WebSocketSession is by default backed by an in-memory map. If you add the session module you can however share sessions between the HTTP server and the WebSocket server.
I have added the session module to my .gradle; however, how exactly do I share my sessions between ws:// and http:// with Micronaut?
Unfortunately there doesn't seem to be an equivalent of SimpMessagingTemplate in Micronaut.
The way I got around this was to create an internal WebSocket client which allowed me to connect to the server. The server recognises the connection as internal due to the way I authorise it, and treats messages on this socket as commands to be interpreted and executed.
It works, but SimpMessagingTemplate would be better.
This technique worked for me:
def sockServer = Application.ctx.getBean(MySocketServer)
sockServer.notifyListeners("you've been notified!")
In my case this code resides in an afterInsert() method in a GORM object in a Micronaut server. Calls come in to a controller and update a GORM object, and the changes are sent out to the listeners.
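For completeness, a sketch of a more direct route: WebSocketBroadcaster is itself an injectable bean, so a controller can take it as a constructor dependency and broadcast straight from the @Post handler (Java here; the controller name and route are illustrative):

import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Post;
import io.micronaut.websocket.WebSocketBroadcaster;

@Controller("/publish")
public class PublishController {

    private final WebSocketBroadcaster broadcaster;

    public PublishController(WebSocketBroadcaster broadcaster) {
        this.broadcaster = broadcaster; // the same broadcaster the @ServerWebSocket uses
    }

    @Post
    public String publish(@Body String message) {
        broadcaster.broadcastSync(message); // pushed to every open websocket session
        return message;
    }
}

This avoids looking the @ServerWebSocket bean up from the application context at all.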

Create Orchard session

I need to manipulate data in a separate thread in Orchard CMS.
The problem is that when the request ends, the session and services are disposed.
What is the best way to create a DB session, or how can I work with data after the request finishes?
EDIT:
I am trying something like this code:
var builder = new ContainerBuilder();
builder.RegisterGeneric(typeof(Repository<>)).As(typeof(IRepository<>)).InstancePerLifetimeScope();
//builder.RegisterInstance(_shellSettings = new ShellSettings { Name = ShellSettings.DefaultName });
builder.RegisterType<TransactionManager>().As<ITransactionManager>().InstancePerLifetimeScope();
builder.RegisterType<SessionFactoryHolder>().As<ISessionFactoryHolder>().InstancePerLifetimeScope();
But I don't know exactly what to register; it throws an error when resolving the repository.
Spawning threads on a web server is bad: it reduces the server's ability to serve many requests simultaneously. You should consider offloading your task to some other process, like a Windows service, communicating through MSMQ for example.
Otherwise, consider letting your task instantiate and dispose of the services and sessions it needs itself, instead of using those bound to the request life-cycle. For this you may need to set up a dedicated dependency resolver that lets the task explicitly control the lifetime of the objects it requests.

What is a Grails "transactional" service?

I'm reading the Grails docs on services, which make numerous mentions of transactions/transactionality, but without really defining what a transactional service method is.
Given the nature of services, they frequently require transactional behaviour.
What exactly does this mean? Are transactional methods only those that use JPA/JDBC to communicate with a relational DB, or do they apply to anything covered by JTA?
Is there any reason why I just wouldn't make a service class @Transactional in case it evolves to some day use a transaction? In other words, are there performance concerns to making all service methods transactional?
Grails services are transactional by default - if you don't want a service to be transactional, you need to remove all @Transactional annotations (both Grails' @grails.transaction.Transactional and Spring's @org.springframework.transaction.annotation.Transactional) and add
static transactional = false
If you haven't disabled transactions with the transactional property and have no annotations, the service works the same as if it were annotated with Spring's annotation. That is, at runtime Spring creates a CGLIB proxy of your class and registers an instance of the proxy as the Spring bean, and it delegates to an instance of your actual class to do the database access and your business logic. This lets the proxy intercept all public method calls and start a new transaction, join an existing one, etc.
The newer Grails annotation has all of the same settings as the Spring annotation, but it works a bit differently. Instead of triggering the creation of a single proxy, each method is rewritten by an AST transform during compilation, essentially creating a mini proxy for each method (this is obviously a simplification). This is better because the database access and transaction semantics are the same, but if you call one annotated method from another annotated with different settings, the different settings will be respected. But with a proxy, it's a direct call inside the delegate instance, and the proxy is bypassed. Since the proxy has all of the logic to create a new transaction or use other different settings, the two methods will use the first method's settings. With the Grails annotation every method works as expected.
There is a small performance hit involved for transactional methods, and this can accumulate if there are a lot of calls and/or a lot of traffic. Before your code runs, a transaction is started (assuming one isn't active), and to do this a connection must be retrieved from the pool (DataSource) and configured to turn off autocommit, and the various transaction settings (isolation, timeout, readonly, etc.) have to be applied. But the Grails DataSource is actually a smart wrapper around the "real" one. It doesn't get a real JDBC Connection until you start a query, so all of the configuration settings are cached until then and then "replayed" on the real connection. If the method doesn't do any database work (either because it never does, or because it exits early based on some condition before the db access code fires), then there's basically no database cost. But if it does, then things work as expected.
Don't rely on this DataSource proxying logic though - it's best to be explicit about which services are transactional and which aren't, and within each service which methods are transactional and which aren't. The best way to do this is by annotating methods as needed, or adding a single annotation at the class level if all methods use the same settings.
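To make the class-level versus method-level point concrete, here is a small Spring-style sketch (the service and method names are invented; the Grails annotation accepts the same settings):

import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Transactional // class-level default: every public method runs in a transaction
public class ReportService {

    public void generateReport() {
        archive();
        // NOTE: with Spring's proxy this is a direct self-call, so the
        // REQUIRES_NEW setting on archive() is ignored; the Grails
        // AST-based annotation would honour it
    }

    @Transactional(readOnly = true) // method-level override of the class default
    public int countReports() {
        return 0; // placeholder for a real query
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void archive() {
        // intended to run in its own transaction
    }
}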
You can get more info in this talk I did about transactions in Grails.
First, if your performance concerns are due to the fact that your services are transactional, then you have reached nirvana. I say this because there are going to be plenty of other bottlenecks in your application long before this is a major (or even minor) concern. So, don't fret about that.
Typically in Grails a transaction relates to the transactional state of a database connection or Hibernate session, though it could be anything managed by JTA with the proper Spring configuration.
In simple terms, it usually means (by default) a database transaction.

How to peek at message while dependencies are being built?

I am building multitenancy into the unit of work for a set of services. I want to keep the tenancy question out of the way of day-to-day business domain work, and I do not want to touch every existing consumer in the system (I am retrofitting multitenancy onto a system without any prior concept of a tenant).
Most messages in the system will be contexted by a tenant. However, there will be some infrastructure messages which will not be, particularly for the purpose of automating tenant creation. I need a way of determining whether to use a tenant-contexted unit of work or an infrastructure unit of work uncontexted by a tenant, because the way I interact with the database is different depending on whether I have tenant context. The unit of work is built in the process of spinning up the dependencies of the consumer.
As such, I need a way of peeking at the message or its metadata before consuming it, and specifically I need to be able to peek at it during the dependency building. I intended to have a tag interface to mark tenant management messages out from normal business domain messages, but any form of identifying the difference could work. If I am in a unit of work resulting from an HTTP request, I can look at WebApi's HttpContext.Current and see the headers of the current request, etc. How do I do something analogous to this if I am in a unit of work resulting from messaging?
I see there is a way to intercept messages with BeforeConsumingMessage() but I need a way of correlating it to the current unit of work I am spinning up and I'm not seeing how that would work for me. Pseudocode for what I am trying to do:
if MessageContext.Message.GetType() = typeof<ITenantInfrastructureMessage>:
database = new Database(...)
else:
tenantId = MessageContext.Headers.TenantId;
database = new TenantDatabase(..., tenantId)
I am working in C#/.NET using MassTransit with RabbitMQ and Autofac with MassTransit's built-in support for both.
Your best option is to override at the IConsumerFactory<T> extension point, extract the tenant from the message (either via a message header or some message property), and register that in the container's child lifetime scope so that subsequent resolutions from the actual consumer class (and its dependencies) are properly matched to the tenant in the message.
In our systems, we have a TenantContext that is registered in a newly created LifetimeScope (we're using Autofac), after which we resolve the consumer from the child scope, and the dependencies that use the tenant context get the proper value since it's registered as part of building the child container for the message scope.
It works extremely well; we even built extension methods to make it easy for developers registering consumers to specify "tenant context providers" that go from a message type to the proper tenant id, which is used to build the TenantContext.
You can do similar things with activity factories in Courier routing slips (which are a specialization of a consumer).

CXF client loads wsdl for both service and port?

In a java web app, I need to call a remote soap service, and I'm trying to use a CXF 2.5.0-generated client. The soap service is provided by a particular ERP vendor, and its wsdl is monstrous, thousands of types, dozens of xsd imports, etc. wsdl2java generates the client ok, thanks to the -autoNameResolution flag. But at runtime it retrieves the remote wsdl twice, once when I create the service object, and again when I create a port object.
MyService_Service myService = new MyService_Service(giantWsdlUrl); // fetches giantWsdl
MyService myPort = myService.getMyServicePort(); // fetches giantWsdl again
Why is that? I can understand retrieving it when creating myService: you want to see that it matches the client I'm currently using, or let a runtime wsdl location dictate the endpoint address, etc. But I don't understand why asking for the port would reload everything it just went out on the wire for. Am I missing something?
Since this is in a web application and I can't be sure that myPort is threadsafe, I'd have to create a port for each thread, except that's way too slow, 6 to 8 seconds thanks to the monstrous wsdl. Or add my own pooling: create a bunch in advance and do check-outs and check-ins. Yuck.
For the record, the JaxWsProxyFactoryBean creation route does not ever fetch the wsdl, and that's good for my situation. It still takes a long time on the first create(), then about a quarter second on subsequent create()s, and even that's less than desirable. And I dunno... it sorta feels like I'm under the hood hotwiring the thing rather than turning the key. :)
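For reference, the JaxWsProxyFactoryBean route looks roughly like this (the endpoint address is obviously made up); since the address is set explicitly, nothing here triggers a WSDL fetch:

// org.apache.cxf.jaxws.JaxWsProxyFactoryBean; the address below is illustrative
JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
factory.setServiceClass(MyService.class);                     // SEI generated by wsdl2java
factory.setAddress("https://erp.example.com/soap/MyService"); // endpoint set directly, no WSDL fetch
MyService myPort = (MyService) factory.create();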
Well, you have actually answered the question yourself. Each time you invoke service.getPort() the WSDL is loaded from the remote site and parsed. JaxWsProxyFactoryBean works in much the same way, but once the proxy is obtained it is reused for further invocations. That is why the first run is slow (because of "warming up"), but subsequent ones are fast.
And yes, JaxWsProxyFactoryBean is not thread-safe. Pooling client proxies is an option, but unfortunately it will eat a lot of memory, as the JAX-WS runtime model is not shared among client proxies; synchronization is perhaps the better way to go.
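A minimal sketch of the synchronization option (everything except MyService and MyService_Service is an invented name): build the expensive proxy once, then funnel all calls through a synchronized wrapper.

import java.net.URL;

public class MyServiceClient {

    private final MyService port;

    public MyServiceClient(URL giantWsdlUrl) {
        MyService_Service service = new MyService_Service(giantWsdlUrl); // WSDL fetched here...
        this.port = service.getMyServicePort();                          // ...and here, but only once
    }

    // One caller at a time on the shared, possibly non-thread-safe proxy.
    public synchronized MyResponse call(MyRequest request) {
        return port.someOperation(request);
    }
}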
