Dropwizard @Timed annotation does not work for classes other than resources - jmx

I am using JMX reporting for my Dropwizard application. I initialize it as:
JmxReporter.forRegistry(this.registry).convertRatesTo(TimeUnit.SECONDS).build().start();
When I use the @Timed annotation, the methods in the resources are timed and their metrics are reported. However, the other classes that use the @Timed annotation are not metered (or their metrics are not pushed). I verified this by starting jconsole and listing the beans registered with the JMX server.
How do I get the @Timed annotation to work with the other classes as well?

Using com.palominolabs.metrics:metrics-guice worked.
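For reference, a minimal wiring sketch, assuming the MetricsInstrumentationModule builder API of recent com.palominolabs.metrics:metrics-guice versions (the wrapper class here is hypothetical): registering the module against the same MetricRegistry that the JmxReporter reads makes @Timed work on any Guice-managed class, not just resources.

```java
import com.codahale.metrics.MetricRegistry;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.palominolabs.metrics.guice.MetricsInstrumentationModule;

public class MetricsWiring {
    public static Injector wire(MetricRegistry registry) {
        // Instrument @Timed on all Guice-managed classes, reporting into the
        // same registry passed to JmxReporter.forRegistry(...).
        return Guice.createInjector(
                MetricsInstrumentationModule.builder()
                        .withMetricRegistry(registry)
                        .build()
                /* plus your application modules */);
    }
}
```

This is a dependency-wiring fragment; older versions of the library used a different module constructor, so check the version you have.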

Jersey HK2: Dependency injection and healthchecks

My app has quite a few async (cron) processes, and they all have their own binder classes in Jersey. These processes are started and managed through independent scripts.
Now, if I change the classes these processes use (e.g. add a dependency) and inadvertently forget to update the binder class, the processes still start up but fail with org.glassfish.hk2.api.UnsatisfiedDependencyException (which is expected). However, this does not stop the process on the node, and the monitoring tool still thinks the process is running fine.
I want to implement health checks in these processes so that I can see whether a process has come up fine after deployment and can start without any dependency errors. I am exploring the following options:
An async lightweight process that exports ServiceLocator health status to a monitoring system (e.g. prometheus)
An API endpoint that can be polled externally and returns response based on ServiceLocator status
I have got a couple of questions:
Is there a native way in Jersey HK2 to do this?
How do I know whether the ServiceLocator is able to resolve all the dependencies (i.e. there is no UnsatisfiedDependencyException)?
In hk2 there is a special service called ErrorService which you can use to see if there are failures creating services. Since hk2 is a dynamic framework, the error service implementation will not be called until someone attempts to create the service that is failing.
You might also want to look seriously into hk2's automatic binding options. They make everything much easier, IMO. See the section on Automatic Service Population.
Here is an example of using the ErrorService:
@Service
public class ExampleErrorService implements ErrorService {

    @Override
    public void onFailure(ErrorInformation ei) throws MultiException {
        if (ErrorType.SERVICE_CREATION_FAILURE.equals(ei.getErrorType())) {
            // Tell your health check service there was an exception
            return;
        }
        // Maybe log other failures?
    }
}

Passing data from a POST request and broadcasting to a websocket in Micronaut

Let's say I have a class called WebSocketAdapter annotated with @ServerWebSocket. This class has @OnOpen, @OnClose, and @OnMessage functions similar to the chat example.
Inside my class I have a constructor that is passed a WebSocketBroadcaster. Inside my socket functions I have a WebSocketSession, which I could save on the object if I wanted, but I am actually using the broadcaster to broadcast to all open sockets.
Next, I have a @Controller class with a @Post controller function. This just writes the posted data with println.
This may or may not be relevant: I am using a @Singleton with DefaultRouteBuilder to @Inject the POST controller dynamically.
Finally, I have my index.html set up as a static resource with a simple script built to consume websockets and append data to the DOM.
So, I can stand up Micronaut, visit localhost, and see data stream in from my socket to the page. I can also post to my endpoint and see the data in the console.
My question is: how can I make my socket session broadcast when I post to the POST controller? How exactly do I inject the websocket as a dependency of the POST controller so I can send the message posted to the server to all open browsers? Note: I am using Kotlin but am open to suggestions in any language.
Things I have tried:
Passing WebSocketSession directly into the post controller and hoping it gets 'beaned' in
Trying to access the bean via BeanContext.run().getBean(WebSocketAdapter::class.javaClass) and using its broadcaster or session
Making the @ServerWebSocket a @Singleton and using @Inject on the session, then trying to access it
Trying to find the bean using @ApplicationContext and using its session
Using Rx to pass data between the classes (I am familiar with RxSwift)
I seem to be getting an error like: Bean Context must support property resolution
The documentation says:
The WebSocketSession is by default backed by an in-memory map. If you add the session module you can however share sessions between the HTTP server and the WebSocket server.
I have added the session module to my .gradle; however, how exactly do I share my sessions between ws:// and http:// with Micronaut?
Unfortunately there doesn't seem to be an equivalent of SimpMessagingTemplate in Micronaut.
The way I got around this was to create an internal @WebSocketClient which allowed me to connect to the server. The server recognises the connection as internal due to the way I authorise it, and interprets messages on this socket as commands to be executed.
It works, but a SimpMessagingTemplate equivalent would be better.
This technique worked for me:
def sockServer = Application.ctx.getBean(MySocketServer)
sockServer.notifyListeners("you've been notified!")
In my case this code resides in an afterInsert() method of a GORM object in a Micronaut server. Calls come in to a controller and update a GORM object, and the changes are sent out to the listeners.
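Along the same lines, and assuming the WebSocketBroadcaster the question already constructor-injects into its @ServerWebSocket class is available as a regular Micronaut bean, a hedged sketch of broadcasting straight from a POST controller (the controller class, path, and method names here are hypothetical):

```java
import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Post;
import io.micronaut.websocket.WebSocketBroadcaster;

// Hypothetical controller: constructor-inject the same broadcaster the
// @ServerWebSocket class uses, then broadcast the posted payload.
@Controller("/notify")
public class NotifyController {

    private final WebSocketBroadcaster broadcaster;

    public NotifyController(WebSocketBroadcaster broadcaster) {
        this.broadcaster = broadcaster;
    }

    @Post
    public void publish(@Body String message) {
        // Pushes the posted message to every open websocket session.
        broadcaster.broadcastSync(message);
    }
}
```

This avoids fetching beans from the BeanContext by hand; whether it fits depends on your Micronaut version and how the dynamically routed controller is built.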

Spring Cloud DataFlow Rabbit Source: how to intercept and enrich messages in a Source

I have been successfully evaluating Spring Cloud DataFlow with a typical simple flow: source | processor | sink.
For deployment there will be multiple sources feeding into this pipeline which I can do using data flow labels. All well and good.
Each source is a different RabbitMQ instance, and because the processor needs to know where a message came from (it has to call back to the source system to get further information), my strategy was to enrich each message with header details about the source system, which would then be passed along transparently to the processor.
Now, I'm well-versed in Spring, Spring Boot and Spring Integration but I cannot find out how to enrich each message in a dataflow source component.
The source component is bound to an org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration. The source uses the default Source.OUTPUT channel. How do I get hold of each message in the source to enrich it?
My processor component uses some Spring Integration DSL to do some of what it needs to do but then this processor component has both an INPUT and OUTPUT channel by definition. Not so with the RabbitSourceConfiguration source.
So, can this be done?
I think you need a custom MessageListener on the MessageListenerContainer in RabbitSourceConfiguration.
In RabbitSourceConfiguration you can set a custom ChannelAwareMessageListener (you could possibly extend MessagingMessageListenerAdapter as well) on the MessageListenerContainer that does what you intend to do.
In the end what worked was subclassing org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration to:
override public SimpleMessageListenerContainer container() so that I could insert a custom health check before calling super.container(). My business logic enriches each message (see next bullet) with details of where the message came from (note, this is the publisher of the messages and not the rabbit queue). There's a health check needed to validate the additional enriching information (which is provided via configuration) to ensure that messages aren't consumed from the queue and enriched with the wrong information. If the validation fails, the source component fails to start and hence no messages are consumed.
override the creation of the AmqpInboundChannelAdapter bean so that a custom subclass of DefaultAmqpHeaderMapper can be set on the adapter. This custom mapper adds the enriched headers in public Map toHeadersFromRequest(final MessageProperties source).
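The second bullet can be sketched roughly as follows. This is a hypothetical reconstruction (class name and header key invented), delegating to the default inbound mapper rather than subclassing it; the exact DefaultAmqpHeaderMapper factory method depends on your Spring Integration version:

```java
import java.util.Map;

import org.springframework.amqp.core.MessageProperties;
import org.springframework.integration.amqp.support.AmqpHeaderMapper;
import org.springframework.integration.amqp.support.DefaultAmqpHeaderMapper;
import org.springframework.messaging.MessageHeaders;

// Hypothetical mapper that adds "where did this message come from" headers
// on top of the default inbound mapping.
public class SourceEnrichingHeaderMapper implements AmqpHeaderMapper {

    private final AmqpHeaderMapper delegate = DefaultAmqpHeaderMapper.inboundMapper();
    private final String sourceSystem; // provided via configuration

    public SourceEnrichingHeaderMapper(String sourceSystem) {
        this.sourceSystem = sourceSystem;
    }

    @Override
    public Map<String, Object> toHeadersFromRequest(MessageProperties source) {
        Map<String, Object> headers = delegate.toHeadersFromRequest(source);
        headers.put("x-source-system", sourceSystem); // hypothetical header name
        return headers;
    }

    @Override
    public Map<String, Object> toHeadersFromReply(MessageProperties source) {
        return delegate.toHeadersFromReply(source);
    }

    @Override
    public void fromHeadersToRequest(MessageHeaders headers, MessageProperties target) {
        delegate.fromHeadersToRequest(headers, target);
    }

    @Override
    public void fromHeadersToReply(MessageHeaders headers, MessageProperties target) {
        delegate.fromHeadersToReply(headers, target);
    }
}
```

An instance of this mapper would then be set on the AmqpInboundChannelAdapter created in the overridden configuration.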
For me, the inability of stream/dataflow to intercept and modify messages in Source components is problematic. I really shouldn't have to fiddle around with the underlying message broker API in the ways I did. I should be able to do it with e.g. Spring Integration. Indeed I can register a global message interceptor but I cannot change the headers of the message.
This ability would go on my WIBNI (wouldn't it be nice if) list. Perhaps I'll raise a request for this.

What is a Grails "transactional" service?

I'm reading the Grails docs on services, which make numerous mentions of transactions/transactionality but without really defining what a transactional service method actually is.
Given the nature of services, they frequently require transactional behaviour.
What exactly does this mean? Are transactional methods only those that use JPA/JDBC to communicate with a relational DB, or do they apply to anything covered by JTA?
Is there any reason why I just wouldn't make a service class @Transactional in case it some day evolves to use a transaction? In other words, are there performance concerns with making all service methods transactional?
Grails services are transactional by default - if you don't want a service to be transactional, you need to remove all @Transactional annotations (both Grails' @grails.transaction.Transactional and Spring's @org.springframework.transaction.annotation.Transactional) and add
static transactional = false
If you haven't disabled transactions with the transactional property and have no annotations, the service works the same as if it were annotated with Spring's annotation. That is, at runtime Spring creates a CGLIB proxy of your class and registers an instance of the proxy as the Spring bean, and it delegates to an instance of your actual class to do the database access and business logic. This lets the proxy intercept all public method calls and start a new transaction, join an existing one, etc.
The newer Grails annotation has all of the same settings as the Spring annotation, but it works a bit differently. Instead of triggering the creation of a single proxy, each method is rewritten by an AST transformation during compilation, essentially creating a mini proxy for each method (this is obviously a simplification). This is better because the database access and transaction semantics are the same, but if you call one annotated method from another annotated with different settings, the different settings will be respected. With a proxy, by contrast, an internal call is a direct call inside the delegate instance and the proxy is bypassed; since the proxy holds all of the logic to create a new transaction or apply other settings, both methods end up using the first method's settings. With the Grails annotation, every method works as expected.
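The self-invocation problem described above can be demonstrated with nothing but the JDK. This is an illustrative stdlib sketch, not Grails or Spring code: a dynamic proxy intercepts calls made *through* the proxy, but when methodA() calls this.methodB() inside the target, the proxy never sees it.

```java
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

interface TxService {
    void methodA();
    void methodB();
}

class TxServiceImpl implements TxService {
    public void methodA() { methodB(); } // internal call: bypasses the proxy
    public void methodB() { }
}

public class ProxyDemo {
    static int run() {
        AtomicInteger intercepted = new AtomicInteger();
        TxService target = new TxServiceImpl();
        TxService proxy = (TxService) Proxy.newProxyInstance(
                TxService.class.getClassLoader(),
                new Class<?>[] { TxService.class },
                (p, method, args) -> {
                    intercepted.incrementAndGet(); // a real proxy would "start a transaction" here
                    return method.invoke(target, args);
                });
        proxy.methodA(); // intercepted once, even though methodB also ran
        return intercepted.get();
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 1, not 2
    }
}
```

Since methodB never passes through the proxy, it inherits whatever "transaction settings" methodA's interception established, which is exactly why the per-method AST transform behaves more predictably.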
There is a small performance hit involved for transactional methods, and this can accumulate if there are a lot of calls and/or a lot of traffic. Before your code runs, a transaction is started (assuming one isn't active); to do this, a connection must be retrieved from the pool (DataSource) and configured: autocommit is turned off and the various transaction settings (isolation, timeout, readonly, etc.) are applied. But the Grails DataSource is actually a smart wrapper around the "real" one. It doesn't get a real JDBC Connection until you start a query, so all of the configuration settings are cached until then and "replayed" on the real connection. If the method doesn't do any database work (either because it never does, or because it exits early based on some condition before the db access code fires), then there's essentially no database cost. But if it does, things work as expected.
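The cache-then-replay behaviour can be illustrated with a toy sketch (this is not the actual Grails implementation; all names are invented): configuration calls are remembered until the first real statement forces a physical connection, at which point they are replayed.

```java
import java.util.ArrayList;
import java.util.List;

public class LazyConnectionSketch {
    private final List<String> pending = new ArrayList<>();
    private final List<String> applied = new ArrayList<>();
    private boolean physicalConnectionOpen = false;

    public void setAutoCommit(boolean on) { record("autoCommit=" + on); }
    public void setReadOnly(boolean ro)   { record("readOnly=" + ro); }

    private void record(String setting) {
        if (physicalConnectionOpen) {
            applied.add(setting);  // real connection exists: apply immediately
        } else {
            pending.add(setting);  // no JDBC work yet: just remember it
        }
    }

    // The first query opens the real connection and replays cached settings.
    public void execute(String sql) {
        if (!physicalConnectionOpen) {
            physicalConnectionOpen = true;
            applied.addAll(pending);
            pending.clear();
        }
        // ... run sql against the physical connection ...
    }

    public List<String> appliedSettings() { return applied; }
}
```

A method that never calls execute() never pays for the connection setup, which is the point being made above.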
Don't rely on this DataSource proxying logic though - it's best to be explicit about which services are transactional and which aren't, and within each service which methods are transactional and which aren't. The best way to do this is by annotating methods as needed, or adding a single annotation at the class level if all methods use the same settings.
You can get more info in this talk I did about transactions in Grails.
First, if your performance concerns are due to the fact that your services are transactional, then you have reached nirvana. I say this because there will be plenty of other bottlenecks in your application long before this is a major (or even minor) concern. So don't fret about it.
Typically in Grails a transaction relates to the transactional state of a database connection or Hibernate session, though it could be anything managed by JTA with the proper Spring configuration.
In simple terms, it usually means (by default) a database transaction.

How can I tell how many database connections my application is using?

I have a Grails web application which connects to a Postgres database. I'm concerned that the code is opening multiple database connections.
How can I find out how many connections it holds during a request?
There's a lot of magic going on in there with GORM etc., and I'm not sure how it manages its connections.
It's managed by the dataSource bean, which is a javax.sql.DataSource. Unfortunately this interface is very basic, with only four methods: two getConnection() methods (one with and one without a username/password), plus unwrap and isWrapperFor from its parent interface. The actual implementation classes typically have many different methods for configuration and monitoring, but there isn't really any standard, and definitely no common interface.
If you're using a recent version of Grails and haven't reconfigured anything, the backing implementation is the Tomcat JDBC Pool, which doesn't depend on Tomcat but was written by a Tomcat committer. You can't just cast that bean to the pool implementation class, however, because Grails wraps the actual datasource instance in two proxies. Fortunately the "real" instance is easy to get to - dependency-inject the dataSourceUnproxied bean in a service or wherever you want to look at usage:
def dataSourceUnproxied
and then you can call any of its methods (see the Javadoc for what's available)
It's not needed in Groovy, of course, but if you want IDE autocompletion, add this import
import org.apache.tomcat.jdbc.pool.DataSource
and cast it and call the methods on that, e.g.
DataSource tomcatDataSource = dataSourceUnproxied
log.debug "$tomcatDataSource.active active (max $tomcatDataSource.maxActive, initial $tomcatDataSource.initialSize), $tomcatDataSource.idle idle (max $tomcatDataSource.maxIdle, min $tomcatDataSource.minIdle)"
