I am using Quartz.NET and we regularly see misfires both during development and live. While this is not a problem as such, we would like to enable some sort of tracing so that in development it is possible to see when a misfire occurs.
Are there any events we can hook into for this purpose? Ideally I am after something like...
var factory = new StdSchedulerFactory();
var scheduler = factory.GetScheduler();
scheduler.Start();
scheduler.OnMisfire += (e) => {
    Console.Out.WriteLine(e);
};
You can use a trigger listener to handle this; see Lesson 7: TriggerListeners and JobListeners.
You can use the history plugin as a reference for building your own logging.
Example
class MisfireLogger : TriggerListenerSupport
{
    private readonly ILog log = LogManager.GetLogger(typeof(MisfireLogger));

    public override void TriggerMisfired(ITrigger trigger)
    {
        log.WarnFormat("Trigger {0} misfired", trigger.Key);
    }
}
scheduler.ListenerManager.AddTriggerListener(new MisfireLogger());
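Wired into the startup code from the question, the registration might look roughly like this (a sketch; the GroupMatcher overload, from Quartz.Impl.Matchers, is optional and simply scopes the listener to every trigger group):
var factory = new StdSchedulerFactory();
var scheduler = factory.GetScheduler();

// Register the listener before starting the scheduler so that misfires
// occurring right after startup are logged as well.
scheduler.ListenerManager.AddTriggerListener(
    new MisfireLogger(), GroupMatcher<TriggerKey>.AnyGroup());

scheduler.Start();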
Just upgraded from MassTransit 6.3.2 to 7.2.4. I'm using .NET 5.
The following unit test works fine before the upgrade but fails after.
using var harness = new InMemoryTestHarness();
harness.Consumer(() => new MockedxxxService(), xxxEndPoint);
harness.Start().Wait();
IBus endpoint = harness.Bus;

var tasks = new System.Collections.Concurrent.ConcurrentBag<Task<string>>();
var result = Parallel.For(0, 100, index =>
{
    var sut = new xxxRetriever(..., endpoint, ...);
    tasks.Add(sut.Getxxx(paramx));
});

Assert.True(result.IsCompleted);
await Task.WhenAll(tasks);
The code in the xxxRetriever class looks like this.
public async Task<string> Getxxx(string paramx)
{
    try
    {
        var serviceAddress = new Uri("queue:" + ...);
        var xxxService = _busService.CreateRequestClient<xxxContract>(serviceAddress, TimeSpan.FromMilliseconds(xxx));

        var xxxResponse = await xxxService.GetResponse<xxxResultContract>(new
        {
            ...
        }).ConfigureAwait(false);

        ....
    }
}
The endpoint is injected into the class as an IBus.
The mocked service looks like this.
public class MockedxxxService : IConsumer<xxxContract>
{
    public async Task Consume(ConsumeContext<xxxContract> context)
    {
        await context.RespondAsync<xxxResultContract>(new { ... });
    }
}
The tests run fine when we limit the number of tasks to about 30, but above that they fail consistently with the message "Timeout waiting for response, RequestId: ...".
Any help would be appreciated.
It might be related to contention in the TPL, but that would only be a guess. You might be able to increase the concurrency limit on the bus and see if that helps:
harness.OnConfigureInMemoryBus += c =>
{
    c.Host(h => h.TransportConcurrencyLimit = 100);
};
That's a guess, but it might be related since it's load specific.
Obviously, this should be called prior to calling Start on the harness (which should be awaited instead of using .Wait(), by the way).
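Put together with the code from the question, the suggested ordering would look roughly like this (a sketch that simply reuses the configuration guess above and awaits Start):
using var harness = new InMemoryTestHarness();

// Configure the bus before the harness is started.
harness.OnConfigureInMemoryBus += c =>
{
    c.Host(h => h.TransportConcurrencyLimit = 100);
};

harness.Consumer(() => new MockedxxxService(), xxxEndPoint);

// Await Start instead of blocking with .Wait().
await harness.Start();
IBus endpoint = harness.Bus;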
After much investigation, we found that this was caused by the way .NET 5 handles thread creation; see ThreadPool.SetMinThreads does not create any new threads.
We primed the thread pool before the test and that fixed the issue. Not sure why this worked correctly in .NET 5 under MassTransit 6, but it's working now.
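For reference, "priming" here means raising the thread pool minimums and forcing the extra threads to actually exist before the parallel section runs. A minimal sketch of the idea (the numbers are assumptions and need tuning for your workload):
// Raise the worker-thread minimum so the pool can grow quickly under the
// sudden burst of 100 parallel requests.
ThreadPool.GetMinThreads(out _, out int minIoThreads);
ThreadPool.SetMinThreads(100, minIoThreads);

// Warm the pool so the threads exist up front, since SetMinThreads alone
// does not create them (see the linked issue).
var warmup = Enumerable.Range(0, 100)
    .Select(_ => Task.Run(() => Thread.Sleep(100)))
    .ToArray();
await Task.WhenAll(warmup);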
I have an app that's using Boot 2.0 with webflux, and has an endpoint returning a Flux of ServerSentEvent. The events are created by leveraging spring-amqp to consume messages off a RabbitMQ queue. My question is: How do I best bridge the MessageListener's configured listener method to a Flux that can be passed up to my controller?
Project Reactor's create section mentions that it "can be very useful to bridge an existing API with the reactive world - such as an asynchronous API based on listeners", but I'm unsure how to hook into the message listener directly since it's wrapped in the DirectMessageListenerContainer and MessageListenerAdapter. Their example from the create section:
Flux<String> bridge = Flux.create(sink -> {
    myEventProcessor.register(
        new MyEventListener<String>() {

            public void onDataChunk(List<String> chunk) {
                for (String s : chunk) {
                    sink.next(s);
                }
            }

            public void processComplete() {
                sink.complete();
            }
        });
});
So far, the best option I have is to create a Processor and simply call onNext() each time in the RabbitMQ listener method to manually produce an event.
I have something like this:
@SpringBootApplication
@RestController
public class AmqpToWebfluxApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext applicationContext = SpringApplication.run(AmqpToWebfluxApplication.class, args);

        RabbitTemplate rabbitTemplate = applicationContext.getBean(RabbitTemplate.class);
        for (int i = 0; i < 100; i++) {
            rabbitTemplate.convertAndSend("foo", "event-" + i);
        }
    }

    private TopicProcessor<String> sseFluxProcessor = TopicProcessor.share("sseFromAmqp", Queues.SMALL_BUFFER_SIZE);

    @GetMapping(value = "/sseFromAmqp", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> getSeeFromAmqp() {
        return this.sseFluxProcessor;
    }

    @RabbitListener(id = "fooListener", queues = "foo")
    public void handleAmqpMessages(String message) {
        this.sseFluxProcessor.onNext(message);
    }
}
TopicProcessor.share() allows many concurrent subscribers, which is what we get when we return this TopicProcessor as a Flux for our /sseFromAmqp REST request via WebFlux.
The @RabbitListener just delegates its received messages to that TopicProcessor.
In main() there is code to confirm that I can publish to the TopicProcessor even if there are no subscribers.
Tested with two separate curl sessions and published messages to the queue via RabbitMQ Management Plugin.
By the way, I use share() because of https://projectreactor.io/docs/core/release/reference/#_topicprocessor:
"from multiple upstream Publishers when created in the shared configuration"
That's because that @RabbitListener really can be called from different ListenerContainer threads, concurrently.
UPDATE
Also I moved this sample to my Sandbox: https://github.com/artembilan/sendbox/tree/master/amqp-to-webflux
Let's suppose you want to have a single RabbitMQ listener that somehow puts messages into one or more Fluxes. Flux.create is indeed a good way to create such a Flux.
Let's start with the Messaging with RabbitMQ Spring guide and try to adapt it.
The original Receiver has to be modified so that it can put received messages into a FluxSink.
@Component
public class Receiver {

    /**
     * Collection of sinks enables more than one subscriber.
     * Have to keep in mind that the FluxSink instance that the emitter works with is provided per-subscriber.
     * A CopyOnWriteArrayList is used so that sinks can be added and removed safely while iterating.
     */
    private final List<FluxSink<String>> sinks = new CopyOnWriteArrayList<>();

    /**
     * Adds a sink to the collection. From now on, new messages will be put to the sink.
     * Method will be called when a new Flux is created by calling the Flux.create method.
     */
    public void addSink(FluxSink<String> sink) {
        sinks.add(sink);
    }

    public void receiveMessage(String message) {
        sinks.forEach(sink -> {
            if (!sink.isCancelled()) {
                sink.next(message);
            } else {
                // If cancelled, don't put any new messages to the sink.
                // The sink is cancelled when a subscriber cancels the subscription.
                sinks.remove(sink);
            }
        });
    }
}
Now we have a receiver that puts RabbitMQ messages into the sinks. Creating a Flux is then rather simple.
@Component
public class FluxFactory {

    private final Receiver receiver;

    public FluxFactory(Receiver receiver) {
        this.receiver = receiver;
    }

    public Flux<String> createFlux() {
        return Flux.create(receiver::addSink);
    }
}
The Receiver bean is autowired into the factory. Of course, you don't have to create a special factory; this only demonstrates the idea of how to use the Receiver to create the Flux.
The rest of the application from the Messaging with RabbitMQ guide may stay the same, including the bean instantiation.
@SpringBootApplication
public class Application {

    ...

    @Bean
    SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
            MessageListenerAdapter listenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueueNames(queueName);
        container.setMessageListener(listenerAdapter);
        return container;
    }

    @Bean
    MessageListenerAdapter listenerAdapter(Receiver receiver) {
        return new MessageListenerAdapter(receiver, "receiveMessage");
    }

    ...
}
I used a similar design to adapt the Twitter streaming API successfully, though there may be a nicer way to do it.
Currently I have a job to check and send out emails every minute. I'm using Hangfire as the job scheduler, but it requires the site to be kept alive in order to function properly. To work around this I'm using another job, which runs every 5 minutes as follows, to keep the site alive:
public static bool Ping()
{
    try
    {
        var request = (HttpWebRequest)WebRequest.Create("http://xyz.domain.com");
        request.Timeout = 3000;
        request.AllowAutoRedirect = false; // find out if this site is up and don't follow a redirect
        request.Method = "HEAD";

        using (request.GetResponse())
        {
            return true;
        }
    }
    catch
    {
        return false;
    }
}
Does anyone know of a better or more efficient way to keep the site alive, aside from using a Windows service or task scheduler?
Last week, for the same purpose, I used Azure Scheduler. I think it is a very nice tool; you can:
schedule a job,
define an action,
get access to the history of your scheduled task,
etc.
So if you have an MSDN subscription I think it is worth considering.
As you've noticed, app pool recycling or application inactivity will cause recurring tasks and delayed jobs to cease being enqueued, and enqueued jobs will not be processed.
If you're hosting the application on premises, you can use the 'Auto Start' feature that comes with Windows Server 2008 R2 (or later) running IIS 7.5 (or above).
Full setup instructions are on the Hangfire documentation - http://docs.hangfire.io/en/latest/deployment-to-production/making-aspnet-app-always-running.html
I'll summarise below.
1)
Create a class that implements IProcessHostPreloadClient
public class ApplicationPreload : System.Web.Hosting.IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        HangfireBootstrapper.Instance.Start();
    }
}
2)
Update your global.asax.cs
public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // note - we haven't yet created HangfireBootstrapper
        HangfireBootstrapper.Instance.Start();
    }

    protected void Application_End(object sender, EventArgs e)
    {
        HangfireBootstrapper.Instance.Stop();
    }
}
3)
Create the HangfireBootstrapper class mentioned above.
public class HangfireBootstrapper : IRegisteredObject
{
    public static readonly HangfireBootstrapper Instance = new HangfireBootstrapper();

    private readonly object _lockObject = new object();
    private bool _started;

    private BackgroundJobServer _backgroundJobServer;

    private HangfireBootstrapper()
    {
    }

    public void Start()
    {
        lock (_lockObject)
        {
            if (_started) return;
            _started = true;

            HostingEnvironment.RegisterObject(this);

            GlobalConfiguration.Configuration
                .UseSqlServerStorage("connection string");
                // Specify other options here

            _backgroundJobServer = new BackgroundJobServer();
        }
    }

    public void Stop()
    {
        lock (_lockObject)
        {
            if (_backgroundJobServer != null)
            {
                _backgroundJobServer.Dispose();
            }

            HostingEnvironment.UnregisterObject(this);
        }
    }

    void IRegisteredObject.Stop(bool immediate)
    {
        Stop();
    }
}
4)
Enable service auto-start
After creating the above classes, you should edit the global applicationHost.config file (%WINDIR%\System32\inetsrv\config\applicationHost.config). First, you need to change the start mode of your application pool to AlwaysRunning, and then enable Service AutoStart Providers.
<applicationPools>
    <add name="MyAppWorkerProcess" managedRuntimeVersion="v4.0" startMode="AlwaysRunning" />
</applicationPools>

<!-- ... -->

<sites>
    <site name="MySite" id="1">
        <application path="/" serviceAutoStartEnabled="true"
                      serviceAutoStartProvider="ApplicationPreload" />
    </site>
</sites>

<!-- Just AFTER closing the `sites` element AND AFTER `webLimits` tag -->
<serviceAutoStartProviders>
    <add name="ApplicationPreload" type="WebApplication1.ApplicationPreload, WebApplication1" />
</serviceAutoStartProviders>
Note that for the last entry, WebApplication1.ApplicationPreload is the full name of a class in your application that implements IProcessHostPreloadClient and WebApplication1 is the name of your application's library. You can read more about this here.
There is no need to set IdleTimeout to zero – when the application pool's start mode is set to AlwaysRunning, the idle timeout no longer applies.
There are some database operations I need to execute before the end of the final attempt of my Hangfire background job (I need to delete the database record related to the job)
My current job is set with the following attribute:
[AutomaticRetry(Attempts = 5, OnAttemptsExceeded = AttemptsExceededAction.Delete)]
With that in mind, I need to determine what the current attempt number is, but I am struggling to find any documentation in that regard from a Google search or in the Hangfire.io documentation.
Simply add PerformContext to your job method; you'll also be able to access your JobId from this object. For the attempt number, this still relies on a magic string, but it's a little less flaky than the current/only answer:
public void SendEmail(PerformContext context, string emailAddress)
{
    string jobId = context.BackgroundJob.Id;
    int retryCount = context.GetJobParameter<int>("RetryCount");

    // send an email
}
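If the goal is to run the cleanup only on the final attempt, one option is to compare that retry count against the configured limit. A sketch under the assumption that the job is decorated with [AutomaticRetry(Attempts = 5)] as in the question (the exact value of RetryCount on the last attempt is worth verifying against your Hangfire version):
public void SendEmail(PerformContext context, string emailAddress)
{
    const int maxAttempts = 5; // must match [AutomaticRetry(Attempts = 5)]
    int retryCount = context.GetJobParameter<int>("RetryCount");

    if (retryCount >= maxAttempts)
    {
        // Final attempt: delete the database record related to the job here.
    }

    // send an email
}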
(NB! This is a solution to the OP's problem. It does not answer the question "How to get the current attempt number". If that is what you want, see the accepted answer for instance)
Use a job filter and the OnStateApplied callback:
public class CleanupAfterFailureFilter : JobFilterAttribute, IApplyStateFilter
{
    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        try
        {
            var failedState = context.NewState as FailedState;
            if (failedState != null)
            {
                // Job has finally failed (retry attempts exceeded)
                // *** DO YOUR CLEANUP HERE ***
            }
        }
        catch (Exception)
        {
            // Unhandled exceptions can cause an endless loop.
            // Therefore, catch and ignore them all.
            // See notes below.
        }
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        // Must be implemented, but can be empty.
    }
}
Add the filter directly to the job function:
[CleanupAfterFailureFilter]
public static void MyJob()
or add it globally:
GlobalJobFilters.Filters.Add(new CleanupAfterFailureFilter ());
or like this:
var options = new BackgroundJobServerOptions
{
    FilterProvider = new JobFilterCollection { new CleanupAfterFailureFilter() }
};
app.UseHangfireServer(options, storage);
Or see http://docs.hangfire.io/en/latest/extensibility/using-job-filters.html for more information about job filters.
NOTE: This is based on the accepted answer: https://stackoverflow.com/a/38387512/2279059
The difference is that OnStateApplied is used instead of OnStateElection, so the filter callback is invoked only after the maximum number of retries. A downside to this method is that the state transition to "failed" cannot be interrupted, but this is not needed in this case and in most scenarios where you just want to do some cleanup after a job has failed.
NOTE: Empty catch handlers are bad because they can hide bugs and make them hard to debug in production. One is necessary here so that the callback doesn't get called repeatedly forever, but you may want to log the exceptions for debugging purposes. It is also advisable to reduce the risk of exceptions occurring inside a job filter at all. One possibility, instead of doing the cleanup work in-place, is to schedule a new background job that runs if the original job failed, as sketched below. Be careful not to apply CleanupAfterFailureFilter to that cleanup job, though: either don't register the filter globally, or add some extra logic to it so it skips the cleanup job.
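As a sketch of that last suggestion, the cleanup work could be pushed into its own job from inside OnStateApplied (CleanupJob and its Run method are hypothetical, not part of Hangfire):
public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
    if (context.NewState is FailedState)
    {
        // Enqueue a separate background job for the cleanup instead of doing it
        // in-place, keeping this filter small and unlikely to throw.
        BackgroundJob.Enqueue<CleanupJob>(job => job.Run(context.BackgroundJob.Id));
    }
}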
You can use the OnPerforming or OnPerformed method of IServerFilter if you want to check the attempts, or you can just wait on OnStateElection of IElectStateFilter. I don't know exactly what requirement you have, so it's up to you. Here's the code you want :)
public class JobStateFilter : JobFilterAttribute, IElectStateFilter, IServerFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        // all failed jobs come here after the retry attempts are exceeded
        var failedState = context.CandidateState as FailedState;
        if (failedState == null) return;
    }

    public void OnPerforming(PerformingContext filterContext)
    {
        // do nothing
    }

    public void OnPerformed(PerformedContext filterContext)
    {
        // you have the option to move all of this code to OnPerforming if you want.
        var api = JobStorage.Current.GetMonitoringApi();
        var job = api.JobDetails(filterContext.BackgroundJob.Id);

        foreach (var history in job.History)
        {
            // check the Reason property and you will find a string like
            // "Retry attempt 3 of 3: The method or operation is not implemented."
        }
    }
}
How to add your filter
GlobalJobFilters.Filters.Add(new JobStateFilter());
----- or
var options = new BackgroundJobServerOptions
{
    FilterProvider = new JobFilterCollection { new JobStateFilter() }
};
app.UseHangfireServer(options, storage);
I'm struggling with the concept of creating a Jedis client which listens indefinitely as a subscriber to a Redis pub/sub channel and handles messages when they come in.
My problem is that after a while of inactivity the server silently stops responding. I think this is due to a timeout occurring on the Jedis client I subscribe with.
Is this likely to be the case? If so, is there a way to configure this particular Jedis client not to time out (while other JedisPools aren't affected by some globally set timeout)?
Alternatively, is there another (best practice) way of what I'm trying to achieve?
This is my code (modified/stripped for display):
executed during web-server startup:
new Thread(AkkaStarter2.getSingleton()).start();
AkkaStarter2.java
private Jedis sub;
private AkkaListener akkaListener;

public static AkkaStarter2 getSingleton() {
    if (singleton == null) {
        singleton = new AkkaStarter2();
    }
    return singleton;
}

private AkkaStarter2() {
    sub = new Jedis(REDISHOST, REDISPORT);
    akkaListener = new AkkaListener();
}

public void run() {
    // blocking
    sub.psubscribe(akkaListener, AKKAPREFIX + "*");
}

class AkkaListener extends JedisPubSub {
    ....

    public void onPMessage(String pattern, String akkaChannel, String jsonSer) {
        ...
    }
}
Thanks.
Ermmm, the code below solves it all. Indeed it was a Jedis thing:
private AkkaStarter2() {
    // 0 specifies no timeout. Overlooked this 100 times.
    sub = new Jedis(REDISHOST, REDISPORT, 0);
    akkaListener = new AkkaListener();
}