Readiness for Spring Cloud Data Flow - spring-cloud-dataflow

Spring Cloud Data Flow's documentation describes how to integrate with Kubernetes readiness probes. I'm developing my Data Flow applications locally and running them in a docker-compose configuration while we wait for our Kubernetes SCDF environment to be stood up.
Is there another way to implement a readiness / "do not send data yet" state for SCDF? Upon component spin-up, I need to make a RESTful call and then run some computations on the results. Things attempted unsuccessfully:
use of application availability events - publishing ReadinessState.REFUSING_TRAFFIC up front and then ReadinessState.ACCEPTING_TRAFFIC once the load + compilation is complete (a sketch of this attempt appears after the runner code below). When Spring Boot completes its own startup it publishes its own ACCEPTING_TRAFFIC, and so doesn't wait for the one from my service.
setting up an ApplicationRunner which also serves as an ApplicationListener for a custom event that I publish when the computations are complete. Effectively, the runner looks like:
@Component
public class ApplicationStartupRunner implements ApplicationRunner, ApplicationListener<SessionLoadEvent> {

    private volatile boolean sessionLoaded = false;

    @Override
    public void run(ApplicationArguments args) throws Exception {
        doTimeExpensiveThing();
        // Hold up startup until the SessionLoadEvent flips the flag.
        while (!sessionLoaded) {
            TimeUnit.MILLISECONDS.sleep(150);
        }
    }

    @Override
    public void onApplicationEvent(SessionLoadEvent event) {
        this.sessionLoaded = true;
    }
}
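For reference, the availability-event attempt from the first bullet looks roughly like this. This is only a sketch, not the exact code; the publisher bean and the SessionLoadEvent wiring are assumptions about the setup:
// Sketch of the availability-event attempt: flag the app as not ready as soon as
// Spring Boot reports ready, then flip to ACCEPTING_TRAFFIC once the load completes.
@Component
public class SessionReadinessPublisher {

    private final ApplicationEventPublisher publisher;

    public SessionReadinessPublisher(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void refuseTrafficUntilLoaded() {
        AvailabilityChangeEvent.publish(publisher, this, ReadinessState.REFUSING_TRAFFIC);
    }

    @EventListener(SessionLoadEvent.class)
    public void acceptTraffic() {
        AvailabilityChangeEvent.publish(publisher, this, ReadinessState.ACCEPTING_TRAFFIC);
    }
}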
Additional technical note: the Spring Boot application is built as a processor, which provides its capability through a function exposed as a bean, à la
public Function<Flux<ChangeEvent>, Flux<Alert>> processChangeEvents()
Ideally, whatever approach works in docker-compose I'll also wire into an indicator that will be picked up by k8s and its readiness probe. Given that SCDF can be deployed on k8s, docker-compose (locally), or Cloud Foundry, I'm hoping there's a model I can hook into that I've just overlooked.

Potential answer: instead of using the ApplicationRunner, wait inside the processChangeEvents method and do not return the function until the startup processing is complete.
In our case, because doTimeExpensiveThing is an asynchronous activity, I use the same technique of watching/waiting for the sessionLoaded flag, but now within the processChangeEvents method itself.
@Configuration
public class ConfigurationForProcessor implements ApplicationListener<SessionLoadEvent> {

    private volatile boolean sessionLoaded;

    @Bean
    public Function<Flux<ChangeEvent>, Flux<Alert>> processChangeEvents() throws InterruptedException {
        doTimeExpensiveAsynchronousThing();
        // Do not hand the function to the binder until the session has loaded.
        while (!sessionLoaded) {
            TimeUnit.MILLISECONDS.sleep(150);
        }
        return (Flux<ChangeEvent> changeEvents) -> ... // code which returns a Flux of Alert
    }

    @Override
    public void onApplicationEvent(SessionLoadEvent event) {
        this.sessionLoaded = true;
    }
}
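One wrinkle with the polling loop above is that a failed load would block bean creation indefinitely. A variation on the same idea (my sketch, not part of the original answer) waits on a CountDownLatch with a timeout instead:
@Configuration
public class ConfigurationForProcessor implements ApplicationListener<SessionLoadEvent> {

    private final CountDownLatch sessionLoaded = new CountDownLatch(1);

    @Bean
    public Function<Flux<ChangeEvent>, Flux<Alert>> processChangeEvents() throws InterruptedException {
        doTimeExpensiveAsynchronousThing();
        // Wait for the SessionLoadEvent, but give up after a bounded time so a broken
        // load fails the deployment instead of hanging it forever.
        if (!sessionLoaded.await(5, TimeUnit.MINUTES)) {
            throw new IllegalStateException("Session did not load in time");
        }
        return (Flux<ChangeEvent> changeEvents) -> ... // code which returns a Flux of Alert
    }

    @Override
    public void onApplicationEvent(SessionLoadEvent event) {
        sessionLoaded.countDown();
    }
}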
Very open to guidance on other approaches. This appears to be working, though I'm not sure there aren't gotchas I haven't caught yet.
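To cover the Kubernetes side mentioned in the question, one way to surface the same signal to the readiness probe is a custom HealthIndicator added to the actuator readiness group. Again only a sketch; the indicator name and the SessionLoadEvent wiring are assumptions:
// Hypothetical readiness contributor keyed off the same session-load signal.
// Add it to the readiness probe group with:
//   management.endpoint.health.group.readiness.include=readinessState,sessionLoad
@Component("sessionLoad")
public class SessionLoadHealthIndicator implements HealthIndicator {

    private volatile boolean sessionLoaded;

    @EventListener(SessionLoadEvent.class)
    public void onSessionLoad(SessionLoadEvent event) {
        this.sessionLoaded = true;
    }

    @Override
    public Health health() {
        return sessionLoaded ? Health.up().build() : Health.outOfService().build();
    }
}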

Related

Inject OSGi Services in a non-component class

In OSGi development I have usually seen one service bind to another service. However, I am trying to inject an OSGi service into a non-service class.
Scenario I am trying to achieve: I have implemented a MessageBusListener, which is an OSGi service and binds to a couple more services like QueueExecutor etc.
Now one of the tasks of the MessageBusListener is to create a FlowListener (a non-service class) which would invoke flows based on the message content. This FlowListener requires OSGi services like QueueExecutor to invoke the flow.
One of the approaches I tried was to pass references to the services while creating the instance of FlowListener from MessageBusListener. However, when a parameterized service is deactivated and activated again, I think OSGi creates a new service instance and binds it to MessageBusListener, while FlowListener would still hold a stale reference.
@Component
public class MessageBusListener
{
    private final AtomicReference<QueueExecutor> queueExecutor = new AtomicReference<>();

    @Activate
    protected void activate(Map<String, Object> osgiMap)
    {
        FlowListener f1 = new FlowListener(queueExecutor.get());
    }

    @Reference(service = QueueExecutor.class, cardinality = ReferenceCardinality.MANDATORY, policy = ReferencePolicy.STATIC)
    protected void bindQueueExecutor(QueueExecutor queueExecutor)
    {
        this.queueExecutor.set(queueExecutor);
    }
}
public class FlowListener
{
    private final QueueExecutor queueExecutor;

    FlowListener(QueueExecutor queueExecutor)
    {
        this.queueExecutor = queueExecutor;
    }

    // Later: queueExecutor.doSomething() -- this would fail in case the QueueExecutor
    // service was deactivated and activated again
}
Looking forward to other approaches which could satisfy this requirement.
Your approach is correct; you just need to also handle the deactivation if necessary.
If the QueueExecutor disappears, the MessageBusListener will be shut down. You can handle this using a @Deactivate method; in that method you can also call a shutdown method on the FlowListener.
If a new QueueExecutor service comes up, DS will create a new MessageBusListener, so all should be fine.
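A minimal sketch of that deactivation hook (the field holding the FlowListener and its shutdown method are hypothetical):
@Deactivate
protected void deactivate()
{
    // DS is tearing this component down (for example because the mandatory
    // QueueExecutor went away), so also stop the FlowListener created in activate().
    if (flowListener != null)
    {
        flowListener.shutdown(); // hypothetical shutdown method on FlowListener
    }
}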
By the way, you can simply inject the QueueExecutor using:
@Reference
QueueExecutor queueExecutor;

How to propagate Spring Security Context in Spring Integration async messaging gateway

I am trying to get the Spring Security context to propagate through a Spring Integration async message flow, but have found that even though I added SecurityContextPropagationChannelInterceptor, the security context always ends up null in my message handler.
@Bean
@GlobalChannelInterceptor(patterns = {"*"})
public ChannelInterceptor securityContextPropagationInterceptor()
{
    return new SecurityContextPropagationChannelInterceptor();
}
I initiate my flow from a service that has a populated security context by making a call to my gateway interface:
@MessagingGateway
public interface AssignmentsService
{
    @Gateway(requestChannel = "applyAssignmentsFlow.input")
    ListenableFuture<AssignmentResult> applyAssignments(AssignmentRequest assignmentRequest);
}
On further debugging I have found that the GatewayProxyFactoryBean creates a new thread when initiating my flow, but does not propagate the security context.
I have searched but have been unable to find out how to configure this to propagate the security context.
That's a pretty interesting task, indeed :)!
But anyway you can do it like this:
@Bean
public AsyncTaskExecutor securityContextExecutor() {
    return new DelegatingSecurityContextAsyncTaskExecutor(new SimpleAsyncTaskExecutor());
}
...
@MessagingGateway(asyncExecutor = "securityContextExecutor")
public interface AssignmentsService
The main trick here comes from Spring Security and its concurrency utils: we use a TaskExecutor wrapper to pick up the current SecurityContext and propagate it into the newly spawned Thread.
There is nothing Spring Integration-specific here, though - just the proper way to work with Security.
Will add such a trick into the Reference Manual soon.
Pull request on the matter: https://github.com/spring-projects/spring-integration/pull/2015
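For completeness, a downstream handler in the gateway's flow can then read the propagated context directly. A rough sketch only; the flow body and the applyAssignments processing call are illustrative, not from the original answer:
@Bean
public IntegrationFlow applyAssignmentsFlow() {
    return f -> f
            .handle(AssignmentRequest.class, (request, headers) -> {
                // With the DelegatingSecurityContextAsyncTaskExecutor on the gateway,
                // the caller's Authentication is visible on this async thread.
                Authentication authentication =
                        SecurityContextHolder.getContext().getAuthentication();
                return applyAssignments(request, authentication); // hypothetical processing method
            });
}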

What is the Correct Way to Dispose a WCF Proxy?

I have been struggling with WCF Proxies. What is the correct way to Dispose a WCF Proxy? The answer is not trivial.
System.ServiceModel.ClientBase violates Microsoft's own Dispose-pattern
System.ServiceModel.ClientBase<TChannel> does implement IDisposable, so one must assume that it should be disposed or used in a using block; these are best practices for anything disposable. The implementation is explicit, however, so one does have to explicitly cast ClientBase instances to IDisposable, clouding the issue.
The biggest source of confusion, however, is that calling Dispose() on ClientBase instances that faulted, even channels that faulted because they never opened in the first place, will result in an exception being thrown. This, inevitably, means that the meaningful exception explaining the fault is immediately lost when the stack unwinds, the using scope ends and Dispose() throws a meaningless exception saying that you can't dispose a faulted channel.
The above behaviour is anathema to the dispose pattern which states that objects must be tolerant of multiple explicit calls to Dispose(). (see http://msdn.microsoft.com/en-us/library/b1yfkh5e(v=vs.110).aspx, "...allow the Dispose(bool) method to be called more than once. The method might choose to do nothing after the first call.")
With the advent of inversion of control, this poor implementation becomes a real problem. IoC containers (specifically, Ninject) detect the IDisposable interface and call Dispose() explicitly on activated instances at the end of an injection scope.
Solution: Proxy ClientBase and Intercept calls to Dispose()
My solution was to proxy ClientBase by subclassing System.Runtime.Remoting.Proxies.RealProxy and to hijack or intercept calls to Dispose(). My first replacement for Dispose() went something like this:
if (_client.State == CommunicationState.Faulted) _client.Abort();
else ((IDisposable)_client).Dispose();
(Note that _client is a typed reference to the target of the proxy.)
Problems with NetTcpBinding
I thought that this had nailed it, initially, but then I discovered a problem in production: under certain scenarios that were fiendishly difficult to reproduce, I found that channels using a NetTcpBinding were not closing properly in the unfaulted case, even though Dispose was being called on _client.
I had an ASP.NET MVC Application using my proxy implementation to connect to a WCF Service using a NetTcpBinding on the local network, hosted within a Windows NT Service on a service cluster with only one node. When I load-tested the MVC Application, certain endpoints on the WCF Service (which was using port-sharing) would stop responding after a while.
I struggled to reproduce this: the same components running across the LAN between two developers' machines worked perfectly; a console application hammering the real WCF endpoints (running on the staging service cluster) with many processes and many threads in each worked; configuring the MVC Application on the staging server to connect to the endpoints on a developer's machine worked under load; running the MVC Application on a developer's machine and connecting to the staging WCF endpoints worked. The last scenario only worked under IIS Express, however, and this was a breakthrough. The endpoints would seize up when load-testing the MVC Application under full-fat IIS on a developer's machine, connecting to the staging service cluster.
Solution: Close the Channel
After failing to understand the problem and reading many, many pages of MSDN and other sources that claimed the problem shouldn't exist at all, I tried a long shot and changed my Dispose() work-around to...
if (_client.State == CommunicationState.Faulted) _client.Abort();
else if (_client.State == CommunicationState.Opened)
{
    ((IContextChannel)_client.Channel).Close();
    ((IDisposable)_client).Dispose();
}
else ((IDisposable)_client).Dispose();
... and the problem stopped occurring in all test setups and under load in the staging environment!
Why?
Can anyone explain what might have been happening and why explicitly closing the Channel before calling Dispose() solved it? As far as I can tell, this shouldn't be necessary.
Finally, I return to the opening question: What is the correct way to Dispose a WCF Proxy? Is my replacement for Dispose() adequate?
The issue, as far as I have been able to understand it, is that calling Dispose disposes of the handle but doesn't actually close the channel, which then holds on to resources and eventually times out.
This is why your service stopped responding after a while during load testing: the initial calls held on to resources longer than you expected, and later calls could then not acquire those resources.
I came up with the following solution. The premise is that calling Dispose should be enough to dispose of the handle as well as close the channel. The additional benefit is that if the client ends up in a faulted state, it is recreated so that subsequent calls succeed.
If ServiceClient<TService> is injected into another class via a dependency injection framework like Ninject, then all resources will properly be released.
NB: Please note that in the case of Ninject, the binding must define a scope, i.e., it must not be missing an InXyzScope or be defined with an InTransientScope. If no scope makes sense, then use InCallScope.
Here's what I came up with:
public class ServiceClient<TService> : IDisposable
{
    private readonly ChannelFactory<TService> channelFactory;
    private readonly Func<TService> createChannel;
    private Lazy<TService> service;

    public ServiceClient(ChannelFactory<TService> channelFactory)
        : base()
    {
        this.channelFactory = channelFactory;
        this.createChannel = () =>
        {
            var channel = ChannelFactory.CreateChannel();
            return channel;
        };
        this.service = new Lazy<TService>(() => CreateChannel());
    }

    protected ChannelFactory<TService> ChannelFactory
    {
        get
        {
            return this.channelFactory;
        }
    }

    protected Func<TService, bool> IsChannelFaulted
    {
        get
        {
            return (service) =>
            {
                var channel = service as ICommunicationObject;
                if (channel == null)
                {
                    return false;
                }
                return channel.State == CommunicationState.Faulted;
            };
        }
    }

    protected Func<TService> CreateChannel
    {
        get
        {
            return this.createChannel;
        }
    }

    protected Action<TService> DisposeChannel
    {
        get
        {
            return (service) =>
            {
                var channel = service as ICommunicationObject;
                if (channel != null)
                {
                    switch (channel.State)
                    {
                        case CommunicationState.Faulted:
                            channel.Abort();
                            break;
                        case CommunicationState.Closed:
                            break;
                        default:
                            try
                            {
                                channel.Close();
                            }
                            catch (CommunicationException)
                            {
                            }
                            catch (TimeoutException)
                            {
                            }
                            finally
                            {
                                if (channel.State != CommunicationState.Closed)
                                {
                                    channel.Abort();
                                }
                            }
                            break;
                    }
                }
            };
        }
    }

    protected Action<ChannelFactory<TService>> DisposeChannelFactory
    {
        get
        {
            return (channelFactory) =>
            {
                var disposable = channelFactory as IDisposable;
                if (disposable != null)
                {
                    disposable.Dispose();
                }
            };
        }
    }

    public void Dispose()
    {
        DisposeChannel(this.service.Value);
        DisposeChannelFactory(this.channelFactory);
    }

    public TService Service
    {
        get
        {
            if (this.service.IsValueCreated && IsChannelFaulted(this.service.Value))
            {
                DisposeChannel(this.service.Value);
                this.service = new Lazy<TService>(() => CreateChannel());
            }
            return this.service.Value;
        }
    }
}

Hook in to plugin service call

I have a plugin that executes a service call as a background process. That is, it does some action on a timer that is not related directly to any user action.
What I need to do is execute some code in the "main" application every time this service call finishes. Is there a way to hook into that plugin code? I have access to the plugin code, so altering it isn't a huge obstacle.
You can have your plugin service publish an event when it completes, then listen for that event in your main application. I have used this pattern a few times and it has been a very convenient way to decouple various pieces of my application. To do this, create an event class:
class PluginEvent extends ApplicationEvent {
    public PluginEvent(source) {
        super(source)
    }
}
Then, have your plugin service implement ApplicationContextAware. That gives your plugin a way to publish your events:
class PluginService implements ApplicationContextAware {

    ApplicationContext applicationContext

    def serviceMethod() {
        //do stuff
        publishPluginEvent()
    }

    private void publishPluginEvent() {
        def event = new PluginEvent(this)
        applicationContext.publishEvent(event)
    }
}
Then in your main application, create a listener service that will respond when the event is published:
class ApplicationService implements ApplicationListener<PluginEvent> {
    void onApplicationEvent(PluginEvent event) {
        //whatever you want to do in your app when
        //the plugin service fires
    }
}
This listener doesn't need to be a Grails service; you can just use a POJO/POGO, but you'll need to configure it as a Spring bean inside resources.groovy.
I have been using this approach recently and it has worked well for me. It's definitely a nice tool to have in your Grails toolbox.

Any existing projects/software for sending hourly status emails

Classic requirement of checking system state and notifying users. Specifically, I'll be hitting the database every x amount of time, getting some data, then sending out email notifications based on the results. Heck, this service might not even send an email, but instead create a notification record in the database.
It seems like with IoC and configuration there could be a generic Windows service that manages all of this, along with metrics and management, in a simple manner.
In the past I've done email notifications by:
1) Running scripts as cron ("at" on Windows) jobs
2) Running custom executables as cron/at jobs
3) Using something like SQL Server's Database Mail.
4) Custom NT Services that run all the time monitoring things.
Are there any open source projects that manage this? It's the type of code I've written many, many times on various platforms, but I don't want to spend the few days doing it now.
The only thing I've found so far is Quartz.NET.
Thanks
I just create a Windows service and use the Reactive Extensions to schedule tasks. If you don't need all the flexibility cron offers, this works fine. Here's an abstract hourly task runner (it uses log4net):
public abstract class HourlyTask : IWantToRunAtStartup
{
    protected readonly ILog Log = LogManager.GetLogger(typeof (HourlyTask).FullName);

    private IDisposable _Subscription;

    private void ExecuteWithLog()
    {
        Log.Debug("Triggering " + GetType());
        try
        {
            Execute();
        }
        catch (Exception exception)
        {
            Log.Error("Hourly execution failed", exception);
        }
    }

    public abstract void Execute();

    public void Run()
    {
        // Tick every minute, project each tick to the current hour, and only fire
        // when the hour value changes, i.e. at most once per hour.
        _Subscription = Observable
            .Interval(TimeSpan.FromMinutes(1))
            .Select(x => DateTime.Now.Hour)
            .DistinctUntilChanged()
            .Subscribe(i => ExecuteWithLog());
    }

    public void Stop()
    {
        if (_Subscription == null)
        {
            return;
        }
        _Subscription.Dispose();
    }
}
Then in your startup method you can just resolve all IWantToRunAtStartup instances and call Run() on them.
