My question stems from an issue almost identical to the one here (which did not end up getting a satisfactory answer):
https://vaadin.com/forum/thread/13932610
Like this person, I expected that upon closing the browser my app was open in, a detach event would fire; however, this did not happen. I've tried adding a detach listener, overriding the detach method, and doing both at the same time, but none of them were successful. As for how I know the detach event was not called: my detach handler is a simple print statement, and it does not show up in the output.
Note that like in the aforementioned thread, I've already set the heartbeat interval (2 seconds in my case) and set closeIdleSessions to true. So, I thought I would just have to wait six seconds, but that has certainly not been the case.
When I try this (the essential parts of the code are below), detach() is eventually called. I run this with Jetty, and I did not touch its defaults. It took some ~45 minutes after closing the browser before I saw "Detach called" logged on the console. So yes, the time is lengthy. The reason is that the last UI is cleaned up only after the HttpSession has expired (which depends on the application container settings, etc.). If you want to force cleanup sooner, you need to use https://vaadin.com/directory/component/cleanupservlet-add-on
@Push
@SuppressWarnings("serial")
public class DemoUI extends UI {

    @WebServlet(value = "/*", asyncSupported = true)
    @VaadinServletConfiguration(productionMode = false, ui = DemoUI.class, heartbeatInterval = 5, closeIdleSessions = true)
    public static class Servlet extends VaadinServlet {
    }

    @Override
    public void detach() {
        System.out.println("Detach called");
    }

    @Override
    protected void init(VaadinRequest vaadinRequest) {
        ...
    }
}
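Another option, sketched below with an illustrative 60-second timeout (the exact value is an assumption), is to shorten the HTTP session timeout itself via Vaadin 7's WrappedSession API so the cleanup happens sooner:

@Override
protected void init(VaadinRequest vaadinRequest) {
    // Illustrative only: expire the underlying HttpSession after 60 seconds of
    // inactivity so the last UI (and thus detach()) is cleaned up sooner.
    VaadinSession.getCurrent().getSession().setMaxInactiveInterval(60);
}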
We have an Entity Framework execution strategy coded in our lower environment. How do we test this to show it's actually working? We don't want to release to Prod without something to say we aren't introducing new problems.
The easy way is to use a diagnostic listener from which you can throw an exception, and subscribe that listener to the DbContext.
public class CommandListener
{
    [DiagnosticName("Microsoft.EntityFrameworkCore.Database.Command.CommandExecuting")]
    public void OnCommandExecuting(DbCommand command, DbCommandMethod executeMethod, Guid commandId, Guid connectionId, bool async, DateTimeOffset startTime)
    {
        throw new TimeoutException("Test exception");
    }

    [DiagnosticName("Microsoft.EntityFrameworkCore.Database.Command.CommandExecuted")]
    public void OnCommandExecuted(object result, bool async)
    {
    }

    [DiagnosticName("Microsoft.EntityFrameworkCore.Database.Command.CommandError")]
    public void OnCommandError(Exception exception, bool async)
    {
    }
}
Subscribe the listener to the DbContext, for instance in Startup.cs:
var context = provider.GetService<SomeDbContext>();
var listener = context.GetService<DiagnosticSource>();
(listener as DiagnosticListener).SubscribeWithAdapter(new CommandListener());
Since TimeoutException is a transient exception in SqlServerRetryingExecutionStrategy.cs (if you use the default retrying strategy), you will get the TimeoutException as many times as the strategy's MaxRetryCount setting allows. Finally, you will get a RetryLimitExceededException as the result of the request.
You should see the TimeoutException in your application logs. It is also a good idea to turn on transaction logging: "Microsoft.EntityFrameworkCore.Database.Transaction": "Debug"
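For instance, a minimal sketch of where that setting goes in appsettings.json (assuming the standard Logging section; the Default level is illustrative):

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.EntityFrameworkCore.Database.Transaction": "Debug"
    }
  }
}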
What I did to manage to throw the transient exception and debug the strategy (for testing and playing-around purposes only):
I added ExecutionStrategyBase.cs and TestServerRetryingExecutionStrategy.cs. The first one is a clone of ExecutionStrategy.cs and the second one is a clone of SqlServerRetryingExecutionStrategy.cs.
In Startup.cs I set the retrying strategy:
services.AddDbContext<SomeDbContext>(options =>
{
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"), sqlOption =>
    {
        sqlOption.ExecutionStrategy(dependencies =>
        {
            return new TestSqlRetryingStrategy(
                dependencies,
                settings.SqlMaxRetryCount,
                settings.SqlMaxRetryDelay,
                null);
        });
    });
});
In OnCommandExecuting of CommandListener.cs I just checked a static bool variable to decide whether or not to throw the TimeoutException, and in ExecutionStrategyBase.cs I switched that variable.
Thus, I managed to throw the transient exception on the first execution of the query and get a successful execution on the second shot. Now I am thinking about some long-running transaction and killing that transaction's session in SSCM during its execution.
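A minimal sketch of that kind of toggle (the names here are illustrative, not the actual classes described above):

public static class FaultInjection
{
    // Illustrative flag; the cloned test strategy described above resets it
    // after the first attempt so the retry succeeds.
    public static bool ThrowOnNextCommand = true;
}

public class ConditionalCommandListener
{
    [DiagnosticName("Microsoft.EntityFrameworkCore.Database.Command.CommandExecuting")]
    public void OnCommandExecuting(DbCommand command, DbCommandMethod executeMethod, Guid commandId, Guid connectionId, bool async, DateTimeOffset startTime)
    {
        if (FaultInjection.ThrowOnNextCommand)
        {
            throw new TimeoutException("Test exception");
        }
    }
}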
Also, I found out that if there is a query like
var users = context.Users.AsNoTracking().ToArrayAsync()
the execution strategy is not applied, and I am stuck on it. I have been struggling with that for a couple of days but still can't figure it out. If I remove AsNoTracking or replace ToArrayAsync() with something like FirstAsync(), all goes well.
Over the last couple of years, I have done a fair amount of work on Amazon SWF, but the following points are still unclear to me and I have not been able to find any straightforward answers on any forums yet.
These are pretty basic requirements, I suppose, which others must have come across too. It would be great if someone could clarify these.
Is there a simple way to return a workflow execution result (maybe just something as simple as boolean) back to workflow starter?
Is there a way to catch an Activity timeout exception, so that we can run customised actions in such scenarios?
Why doesn't WorkflowExecutionHistory contain Activities, just Events?
Why is there no simple way of restarting a workflow from the point it failed?
I am considering using SWF for more business processes at my workplace, but these limitations/doubts are holding me back!
FINAL WORKING SOLUTION
public class ReturnResultActivityImpl implements ReturnResultActivity {

    SettableFuture<WorkflowResult> future;

    public ReturnResultActivityImpl() {
    }

    public ReturnResultActivityImpl(SettableFuture<WorkflowResult> future) {
        this.future = future;
    }

    public void returnResult(WorkflowResult workflowResult) {
        System.out.print("Marking future as Completed");
        future.set(workflowResult);
    }
}

public class WorkflowResult {

    public WorkflowResult(boolean s, String n) {
        this.success = s;
        this.note = n;
    }

    private boolean success;
    private String note;
}

public class WorkflowStarter {

    @Autowired
    ReturnResultActivityClient returnResultActivityClient;
    @Autowired
    DummyWorkflowClientExternalFactory dummyWorkflowClientExternalFactory;
    @Autowired
    AmazonSimpleWorkflowClient swfClient;

    String domain = "test-domain";
    boolean isRegister = true;
    int days = 7;
    int terminationTimeoutSeconds = 5000;
    int threadPollCount = 2;
    int taskExecutorThreadCount = 4;

    public String testWorkflow() throws Exception {
        SettableFuture<WorkflowResult> workflowResultFuture = SettableFuture.create();
        String taskListName = "testTaskList-" + RandomStringUtils.randomAlphabetic(8);

        ReturnResultActivity activity = new ReturnResultActivityImpl(workflowResultFuture);
        SpringActivityWorker activityWorker = buildReturnResultActivityWorker(taskListName, Arrays.asList(activity));

        DummyWorkflowClientExternalFactory factory = new DummyWorkflowClientExternalFactoryImpl(swfClient, domain);
        factory.getClient().doSomething(taskListName);

        WorkflowResult result = workflowResultFuture.get(20, TimeUnit.SECONDS);
        return "Call result note - " + result.getNote();
    }

    public SpringActivityWorker buildReturnResultActivityWorker(String taskListName, List activityImplementations)
            throws Exception {
        return setupActivityWorker(swfClient, domain, taskListName, isRegister, days, activityImplementations,
                terminationTimeoutSeconds, threadPollCount, taskExecutorThreadCount);
    }
}
public class Workflow {

    @Autowired
    private DummyActivityClient dummyActivityClient;
    @Autowired
    private ReturnResultActivityClient returnResultActivityClient;

    @Override
    public void doSomething(final String resultActivityTaskListName) {
        Promise<Void> activityPromise = dummyActivityClient.dummyActivity();
        returnResult(resultActivityTaskListName, activityPromise);
    }

    @Asynchronous
    private void returnResult(final String taskListname, Promise waitFor) {
        ActivitySchedulingOptions schedulingOptions = new ActivitySchedulingOptions();
        schedulingOptions.setTaskList(taskListname);
        WorkflowResult result = new WorkflowResult(true, "All successful");
        returnResultActivityClient.returnResult(result, schedulingOptions);
    }
}
The standard pattern is to host a special activity in the workflow starter process that is used to deliver the result. Use a process specific task list to make sure that it is routed to a correct instance of the starter. Here are the steps to implement it:
Define an activity to receive the result. For example "returnResultActivity". Make this activity implementation complete the Future passed to its constructor upon execution.
When the workflow is started it receives "resultActivityTaskList" as an input argument. At the end the workflow calls this activity with a workflow result. The activity is scheduled on the passed task list.
The workflow starter creates an ActivityWorker and an instance of a Future. Then it creates an instance of "returnResultActivity" with that future as a constructor parameter.
Then it registers the activity instance with the activity worker and configures it to poll on a randomly generated task list name. Then it calls "start workflow execution" passing the generated task list name as an input argument.
Then it waits for the Future to complete. The future.get() call is going to return the workflow result.
Yes, if you are using the AWS Flow Framework a timeout exception is thrown when an activity times out. If you are not using the Flow Framework then you are making your life 100 times harder. BTW the workflow timeout is thrown into a parent workflow as a timeout exception as well. It is not possible to catch a workflow timeout exception from within the timing-out instance itself. In this case it is recommended not to rely on the workflow timeout, but to create a timer that fires and notifies the workflow logic that some business event has timed out.
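For example, a rough sketch of catching the activity timeout inside the workflow implementation with the Flow Framework's TryCatch (reusing the dummyActivityClient from the code above; the handling itself is an assumption):

new TryCatch() {
    @Override
    protected void doTry() throws Throwable {
        dummyActivityClient.dummyActivity();
    }

    @Override
    protected void doCatch(Throwable e) throws Throwable {
        if (e instanceof ActivityTaskTimedOutException) {
            // customised timeout handling goes here
        } else {
            throw e;
        }
    }
};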
Because a single activity execution has multiple events associated with it. It should be pretty easy to write code that converts the history to whatever representation of activities you like. Such code would just match up the events that relate to each activity. Each event always has a reference to its related events, so it is easy to roll them up into a higher-level representation.
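For illustration, a rough sketch (assuming the AWS SDK for Java v1 model classes and a History obtained from getWorkflowExecutionHistory) of rolling activity-related events up by the ActivityTaskScheduled event they reference:

import com.amazonaws.services.simpleworkflow.model.History;
import com.amazonaws.services.simpleworkflow.model.HistoryEvent;
import java.util.*;

public class HistoryRollup {
    // Groups activity-related events by the id of their ActivityTaskScheduled event.
    public static Map<Long, List<HistoryEvent>> groupByActivity(History history) {
        Map<Long, List<HistoryEvent>> byActivity = new HashMap<>();
        for (HistoryEvent event : history.getEvents()) {
            Long scheduledId = null;
            if (event.getActivityTaskScheduledEventAttributes() != null) {
                scheduledId = event.getEventId();
            } else if (event.getActivityTaskStartedEventAttributes() != null) {
                scheduledId = event.getActivityTaskStartedEventAttributes().getScheduledEventId();
            } else if (event.getActivityTaskCompletedEventAttributes() != null) {
                scheduledId = event.getActivityTaskCompletedEventAttributes().getScheduledEventId();
            } else if (event.getActivityTaskFailedEventAttributes() != null) {
                scheduledId = event.getActivityTaskFailedEventAttributes().getScheduledEventId();
            } else if (event.getActivityTaskTimedOutEventAttributes() != null) {
                scheduledId = event.getActivityTaskTimedOutEventAttributes().getScheduledEventId();
            }
            if (scheduledId != null) {
                byActivity.computeIfAbsent(scheduledId, k -> new ArrayList<>()).add(event);
            }
        }
        return byActivity;
    }
}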
Unfortunately there is no easy answer to this one. Ideally SWF would support restarting a workflow by copying its history up to the failure point, but that is not supported. I personally believe that a workflow should be written in a way that it never fails but always deals with failures without failing. Obviously that doesn't work in the case of failures due to unexpected conditions. In this case, writing the workflow in a way that it can be restarted from the beginning is the simplest approach.
It seems like I just need to implement some kind of a listener, if there isn't something similar already.
Let's say I have a method which is executed each time a build finishes (a RunListener event); but that's not enough, and I want to run the method every X minutes. I'm stuck!
So, I wonder if there is a way to do it (kind of a listener, event trigger, whatever).
Any info, thoughts are welcomed!
If you want to execute a task regularly in a Jenkins plugin, you can implement the PeriodicWork extension point.
A minimal example that would automatically register with Jenkins, and be executed every three minutes:
@Extension
public class MyPeriodicTask extends PeriodicWork {

    @Override
    public long getRecurrencePeriod() {
        return TimeUnit.MINUTES.toMillis(3);
    }

    @Override
    protected void doRun() throws Exception {
        // Do something here, quickly.
        // If it will take longer, use AsyncPeriodicWork instead.
    }
}
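For longer-running work, a minimal sketch of the AsyncPeriodicWork variant mentioned in the comment above (class name and log message are illustrative):

@Extension
public class MyLongRunningTask extends AsyncPeriodicWork {

    public MyLongRunningTask() {
        super("MyLongRunningTask");
    }

    @Override
    public long getRecurrencePeriod() {
        return TimeUnit.MINUTES.toMillis(3);
    }

    @Override
    protected void execute(TaskListener listener) throws IOException, InterruptedException {
        // Runs on a separate thread, so it may take longer than a periodic tick.
        listener.getLogger().println("Doing the longer-running work...");
    }
}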
I have a simple UI class
public class HelloWorldUI extends UI {

    @Override
    protected void init(VaadinRequest request) {
        System.out.println("Initialized !");
        final VerticalLayout layout = new VerticalLayout();
        layout.addComponent(new Label("Hello World !"));
        setContent(layout);
    }

    @Override
    public void detach() {
        System.out.println("Detach !");
        super.detach();
    }

    @Override
    public void attach() {
        System.out.println("Attach !");
        super.attach();
    }
}
When my UI was loaded for the first time, I saw this output on my console:
Attach !
Initialized !
That is OK and what I expected. But when I refreshed the browser, my console output was
Attach !
Initialized !
Detach !
Amazing! I thought Detach ! might be produced first because (as I understood it) when the browser is refreshed, the detach() method should be called first, and attach() and init() should follow. But actually the detach() method is called after the attach() method. What's wrong with my thinking?
Browser Refresh = New UI Instance
When you refresh a browser window or tab, a new UI instance is created. So you see an attach message of a new UI instance. The old UI instance will be detached later.
This is the default behavior in Vaadin 7. You may change that behavior with an annotation.
@PreserveOnRefresh
Adding the @PreserveOnRefresh annotation to the UI changes the behavior: no new UI instance will be created on refresh.
To quote the doc for this annotation:
Marks a UI that should be retained when the user refreshes the browser window. By default, a new UI instance is created when refreshing, causing any UI state not captured in the URL or the URI fragment to get discarded. By adding this annotation to a UI class, the framework will instead reuse the current UI instance when a reload is detected.
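Applied to the UI from the question, the change is just the annotation (a minimal sketch):

@PreserveOnRefresh
public class HelloWorldUI extends UI {
    // same init/attach/detach overrides as before; refreshing the browser now
    // reuses this UI instance instead of creating a new one
}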
In Polymer there is a this.job() function that handles the delayed processing of events. How do you access this functionality from polymer.dart?
@override
void attached() {
  super.attached();
  dom.window.onMouseMove.listen(mouseMoveHandler);
}

PolymerJob mouseMoveJob;

void mouseMoveHandler(dom.MouseEvent e) {
  print('mousemove');
  mouseMoveJob = scheduleJob(mouseMoveJob, onDone, new Duration(milliseconds: 500));
}

void onDone() {
  print('done');
}
If the job isn't rescheduled for 500ms it is executed.
In Polymer this is often used during initialization, when
xxxChanged(old);
is called several times in quick succession because xxx is updated on changes from several other states that are initialized one after the other, but it is enough if xxxChanged is executed for the last update (a much shorter timeout should be used then, like 0-20 ms, depending on whether xxxChanged is only called from sync code or also from async code).
Another situation where I used this pattern (but not using PolymerJob) is where an @observable field is bound to a slider <input type="range" value='{{slider}}'>.
This invokes sliderChanged(oldVal, newVal) very often in a short interval when you move the knob. The execution of the update is expensive and can't be finished between two such calls; see http://bwu-dart.github.io/bwu_datagrid/example/e04_model.html for an example.
Without some delayed execution this would be very cumbersome to use.
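For the slider case, that kind of manual debounce boils down to something like this sketch inside the element class, assuming a plain dart:async Timer and an illustrative 20 ms delay:

Timer _sliderDebounce;

// Called on every change of the bound slider value; only the value that is
// still current after 20 ms triggers the expensive update.
void sliderChanged(oldVal, newVal) {
  if (_sliderDebounce != null) _sliderDebounce.cancel();
  _sliderDebounce = new Timer(const Duration(milliseconds: 20), () {
    // expensive grid/model update using the latest value goes here
  });
}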
Try using Future:
doJob() => print('hi');
new Future(doJob).then((_) => print('job is done'));
Here are the docs for the Future class.
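If you need an actual delay before the job runs (similar to this.job() with a wait time), Future.delayed can be used instead; a minimal sketch with an illustrative 500 ms delay:
doJob() => print('hi');
new Future.delayed(const Duration(milliseconds: 500), doJob)
    .then((_) => print('job is done'));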