On the main Electron process I have a menu template, and on that menu template I have a close function. My application uses a database to store information. In the close function, I would like to call the renderer to get the id of the record that is currently open, so that I can save it before I close the database. I am wondering if there is a way to call a function on the main process and return an id value from the renderer process. The only way I can see to do this is the following: BrowserWindow.getFocusedWindow().webContents.send("closing"), then ipcRenderer.send back to main with the id. It seems like there should be an easier way to do this.
Thanks
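For reference, a minimal sketch of the round trip described above; the channel names and the saveRecord / closeDatabase / currentRecordId names are illustrative, not part of Electron:

const { BrowserWindow, ipcMain } = require('electron');

// Menu "close" handler in the main process:
function onClose() {
  // Wait for the renderer's reply, then save and close.
  ipcMain.once('closing-reply', (event, id) => {
    saveRecord(id);   // hypothetical: persist the open record
    closeDatabase();  // hypothetical: close the database
  });
  // Ask the focused window's renderer for the current record id.
  BrowserWindow.getFocusedWindow().webContents.send('closing');
}

// In the renderer process:
const { ipcRenderer } = require('electron');
ipcRenderer.on('closing', () => {
  ipcRenderer.send('closing-reply', currentRecordId); // assumed renderer state
});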
Unfortunately, at this moment in time, there is no equivalent of ipcRenderer.invoke(channel, ...args) for the main process to call into the renderer.
As you indicated, you would need to listen for the app.on('before-quit', () => {...}) event and, within the handler, poll the renderer process for the current id and await its reply.
Alternatively, to keep your renderer process free and responsive, you could manage the current id from within your main process, so there is no need to call the renderer prior to closing. This is what I do. All the heavy lifting is done in the main process (such as DB calls, file system reads / writes, API calls) and the renderer just reflects the state via IPC messages / data updates.
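A minimal sketch of that arrangement; the channel name and the saveRecord / closeDatabase helpers are illustrative:

const { app, ipcMain } = require('electron');

let currentRecordId = null;

// The renderer reports whenever the open record changes:
ipcMain.on('record-changed', (event, id) => {
  currentRecordId = id;
});

// At quit time the main process already has everything it needs:
app.on('before-quit', () => {
  if (currentRecordId !== null) {
    saveRecord(currentRecordId); // hypothetical DB helper in main
  }
  closeDatabase();               // hypothetical
});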
Problem Context
I am trying to generate a total (linear) order of event items per key from a real-time stream, where the order is by event time (derived from the event payload).
Approach
I had attempted to implement this using streaming as follows:
1) Set up non-overlapping sequential (fixed) windows, e.g. of 5 minutes' duration
2) Establish an allowed lateness; it is fine to discard late events
3) Set the accumulation mode to retain all fired panes
4) Use the AfterWatermark trigger
5) When handling a triggered pane, only consider it if it is the final one
6) Use GroupByKey to ensure all events in this window for this key will be processed as a unit on a single resource (see the sketch after this list)
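For illustration, a sketch of that configuration with the Beam Java SDK; the Event type, the keyed input events, and the durations are assumptions, and step 5's final-pane check would happen in a downstream DoFn via PaneInfo:

PCollection<KV<String, Event>> events = /* the keyed real-time stream */;

PCollection<KV<String, Iterable<Event>>> grouped = events
    .apply(Window.<KV<String, Event>>into(
            FixedWindows.of(Duration.standardMinutes(5)))   // step 1: fixed 5-minute windows
        .triggering(AfterWatermark.pastEndOfWindow())       // step 4: fire at the watermark
        .withAllowedLateness(Duration.ZERO)                 // step 2: discard late events
        .accumulatingFiredPanes())                          // step 3: retain fired panes
    .apply(GroupByKey.<String, Event>create());             // step 6: one unit of work per key+window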
While this approach ensures linear order for each key within a given window, it does not make that guarantee across multiple windows; e.g. a later window of events for the key could be processed at the same time as an earlier window. This could easily happen if the first window failed and had to be retried.
I'm considering adapting this approach so that the realtime stream is first processed to partition the events by key and write them to files named by their window range.
Due to the parallel nature of Beam processing, these files will also be generated out of order.
A single-process coordinator could then submit these files sequentially to a batch pipeline, only submitting the next one when it has received the previous file and the downstream processing of it has completed successfully.
The problem is that Apache Beam will only fire a pane if there was at least one element in that time window. Thus, if there are gaps in events, there could be gaps in the files that are generated, i.e. missing files. The problem with missing files is that the coordinating batch processor cannot distinguish between the time window having passed with no data and a failure, in which case it cannot proceed until the file finally arrives.
One way to force the event windows to trigger might be to somehow add dummy events to the stream for each partition and time window. However, this is tricky to do: if there are large gaps in the time sequence, these dummy events could end up surrounded by much later events and be discarded as late.
Are there other approaches to ensuring there is a trigger for every possible event window, even if that results in outputting empty files?
Is generating a total ordering by key from a realtime stream a tractable problem with Apache Beam? Is there another approach I should be considering?
Depending on your definition of tractable, it is certainly possible to totally order a stream per key by event timestamp in Apache Beam.
Here are the considerations behind the design:
Apache Beam does not guarantee in-order transport, so ordering is of no use within a pipeline itself. I will therefore assume you are doing this so you can write to an external system that can only handle things if they come in order.
If an event has timestamp t, you can never be certain no earlier event will arrive unless you wait until t is droppable, i.e. past the allowed lateness.
So here's how we'll do it:
We'll write a ParDo that uses state and timers (blog post still under review) in the global window. This makes it a per-key workflow.
We'll buffer elements in state as they arrive, so your allowed lateness affects how efficient a data structure you need. What you need is a heap, to peek and pop the element with the minimum timestamp; there's no built-in heap state, so I'll just write it as a ValueState.
We'll set an event-time timer to receive a callback when an element's timestamp can no longer be contradicted.
I'm going to assume a custom EventHeap data structure for brevity. In practice, you'd want to break this up into multiple state cells to minimize the data transferred. A heap might be a reasonable addition to the primitive types of state.
I will also assume that all the coders we need are already registered and focus on the state and timers logic.
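For concreteness, here is a minimal illustration of what such an EventHeap could look like; this is an assumption for the sketch, not Beam API, and in practice it would need a registered coder and a serializable comparator:

import java.io.Serializable;
import java.util.Comparator;
import java.util.PriorityQueue;
import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.joda.time.Instant;

class EventHeap implements Serializable {
  // Events ordered by their (event-time) timestamp.
  private final PriorityQueue<Event> events =
      new PriorityQueue<>(Comparator.comparing(Event::getTimestamp));

  static EventHeap createForKey(Object key) {
    return new EventHeap(); // the key is omitted from this sketch for brevity
  }

  void add(Event e) {
    events.add(e);
  }

  // Timestamp of the earliest buffered event; the far-future value
  // effectively disables the timer when the heap is empty.
  Instant nextTimestamp() {
    return events.isEmpty()
        ? BoundedWindow.TIMESTAMP_MAX_VALUE
        : events.peek().getTimestamp();
  }

  Event pop() {
    return events.poll();
  }
}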
new DoFn<KV<K, Event>, Void>() {

  @StateId("heap")
  private final StateSpec<ValueState<EventHeap>> heapSpec = StateSpecs.value();

  @TimerId("next")
  private final TimerSpec nextTimerSpec = TimerSpecs.timer(TimeDomain.EVENT_TIME);

  @ProcessElement
  public void process(
      ProcessContext ctx,
      @StateId("heap") ValueState<EventHeap> heapState,
      @TimerId("next") Timer nextTimer) {
    EventHeap heap = firstNonNull(
        heapState.read(),
        EventHeap.createForKey(ctx.element().getKey()));
    heap.add(ctx.element().getValue());
    heapState.write(heap); // persist the updated heap
    // When the watermark reaches this time, no more elements
    // can show up that have earlier timestamps.
    nextTimer.set(heap.nextTimestamp().plus(allowedLateness));
  }

  @OnTimer("next")
  public void onNextTimestamp(
      OnTimerContext ctx,
      @StateId("heap") ValueState<EventHeap> heapState,
      @TimerId("next") Timer nextTimer) {
    EventHeap heap = heapState.read();
    // If the timer at time t was delivered, the watermark must
    // be strictly greater than t.
    while (!heap.nextTimestamp().isAfter(ctx.timestamp())) {
      writeToExternalSystem(heap.pop());
    }
    heapState.write(heap); // persist the drained heap
    nextTimer.set(heap.nextTimestamp().plus(allowedLateness));
  }
}
This should hopefully get you started on the way towards whatever your underlying use case is.
My program uses both of
DispatchQueue.global(qos: .background)
and
self.concurrentQueue.sync(flags: .barrier)
to deal with background multithreading issues.
It is Swift 3, so I use the latest way to get the child context:
lazy var context: NSManagedObjectContext = {
return (UIApplication.shared.delegate as! AppDelegate).persistentContainer.newBackgroundContext()
}()
I also enable -com.apple.CoreData.ConcurrencyDebug 1 for debugging.
Then the problem occurs:
1. When there's an API call and, in the callback block (a background thread), I need to fetch from Core Data, edit, then save. I tried to use self.context from the code above, call performBlockAndWait, and save inside that block. The whole process goes fine, but when I try to access my result outside of this block (though still inside the callback block), the error occurs. I have also tried to get the objectID and fetch the object by id on both self.context and self.context.parent, and the error occurs on that line. What did I do wrong and how should I do this, since I need to use the result in many different threads (not contexts)?
2. I read a post that says I need one context per thread. In my case, how do I determine which exact thread it is if it's a callback from an API call, and do I really need to do this?
3. You might ask why I need a private-queue concurrency type: my program has things that need to run on a background thread, so it has to be done this way (read from another post). Is this right?
4. Even in my question 1, getting an object by passing its objectID to a different context is still not working in my case. Let's assume this is the proper way: how am I going to manage passing so many objectIDs throughout my entire program across different threads without it being super messy? To me this sounds crazy, but I suppose there's a much cleaner and easier way to deal with this.
5. I have read many posts (some pretty old, from before Swift 3) saying that you have to call childContext.save and then parentContext.save, but since I use the code above (Swift 3 only), it seems that I can call childContext.save only to make it work. Am I right?
Core Data in general is not multithreading-friendly; using it across concurrent threads, I can only assume bad things will happen. You may not simply manipulate managed objects outside the thread that their context lives on.
As you already mentioned, you need a separate context per thread, which will work in most cases; but in my experience you only need one background context that is read-write and a single main-thread read-only context that is used for fetched results controllers or other immediate fetches.
Think of a context as an in-memory module that communicates with the database (a file). Fetched entities are shared within a context but are not shared between contexts. So you can modify pretty much anything inside a context, but that will not show up in the database or in other contexts until you save the context to the database. And if you modify the same entity on 2 contexts and then save them both, you will get a conflict, which you must resolve yourself.
All of this can make quite a mess of the code logic, so multiple contexts seem like something to avoid. What I do is create a background context and then do all of the operations on that context. A context has a method perform which will execute code on its own thread, which is not the main thread (for a background context), and this thread is serial.
So, for instance, when building a smart client, I get a response from the server with new entries. These are parsed on the fly, and I perform a block on the context to fetch all the corresponding objects from the database and create the ones that do not exist, then copy the data over and save the context to the database.
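A rough sketch of that step, assuming a parsed ServerEntry type and an "Entry" entity with id and title attributes (all names illustrative):

import CoreData

struct ServerEntry {
    let id: String
    let title: String
}

func merge(_ serverEntries: [ServerEntry], into context: NSManagedObjectContext) {
    context.perform {
        for entry in serverEntries {
            // Fetch the corresponding object, or create it if it does not exist.
            let request = NSFetchRequest<NSManagedObject>(entityName: "Entry")
            request.predicate = NSPredicate(format: "id == %@", entry.id)
            let object = (try? context.fetch(request))?.first
                ?? NSEntityDescription.insertNewObject(forEntityName: "Entry", into: context)
            // Copy the data over.
            object.setValue(entry.id, forKey: "id")
            object.setValue(entry.title, forKey: "title")
        }
        // Save the context into the database.
        try? context.save()
    }
}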
For the UI part I do something similar. Once an entry should be saved, I either create or update the entity on the background context thread, then usually do some UI work on completion, so I have a method:
public func performBlockOnBackgroundContextAndReturnOnMain(block: @escaping (() -> Void), main: @escaping (() -> Void)) {
    if let context = context {
        context.perform {
            block()
            DispatchQueue.main.async {
                main()
            }
        }
    }
}
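A hypothetical call site, e.g. when saving an entry from the UI (the table view refresh is illustrative):

performBlockOnBackgroundContextAndReturnOnMain(block: {
    // Create or update managed objects on the background context here,
    // then save it into the database:
    try? self.context?.save()
}, main: {
    self.tableView.reloadData() // back on the main thread for UI work
})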
So pretty much all of the Core Data logic happens on a single thread, which is in the background. In some cases I do use a main context to get items from a fetched results controller; for instance, I display a list of objects with it, and once the user selects one of the items I refetch that item from the background context and use that one in the user interface and for modification.
But even that may give you trouble, as some properties may be loaded lazily from the database, so you must ensure that all the data you need is loaded on the context before you access it on the main thread. There is a method for that, but I'd rather use wrappers:
I have a single superclass for all the entities in the database model, which includes only an id. So I also have a superclass wrapper which has all the logic to work with the rest of the wrappers. What I am left with in the end is that for each subclass I need to override just 2 mapping methods (from and to the managed object).
It might seem silly to create additional wrappers and copy the data into memory from managed objects, but the thing is you need to do that for most managed objects anyway: converting NSData to/from UIImage, NSDate to/from Date, enumerations to/from integers or strings... So in the end you are more or less just left with strings that are copied 1-to-1. This also makes it easy to keep the code that maps the response from your server in this class, or any additional logic, with no naming conflicts with the managed objects.
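A bare-bones sketch of the wrapper idea (class and attribute names are illustrative):

import CoreData

class EntityWrapper {
    var id: String?

    // Copy properties out of the managed object (subclasses override and call super).
    func fill(from object: NSManagedObject) {
        id = object.value(forKey: "id") as? String
    }

    // Copy properties back onto the managed object (subclasses override and call super).
    func fill(into object: NSManagedObject) {
        object.setValue(id, forKey: "id")
    }
}

class NoteWrapper: EntityWrapper {
    var text: String?

    override func fill(from object: NSManagedObject) {
        super.fill(from: object)
        text = object.value(forKey: "text") as? String
    }

    override func fill(into object: NSManagedObject) {
        super.fill(into: object)
        object.setValue(text, forKey: "text")
    }
}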
I have a simple yet time-consuming operation: when the user clicks a button, it performs a database-intensive operation, processing records from an import table into multiple other tables, one import record at a time.
I have a View with a button that triggers the operation, and at the end of the operation a report is displayed.
I am looking at ways to notify the user that the operation is being processed. Here is a solution that I liked.
I have been reading up online about asynchronous operations in MVC. I have found a number of links saying that if your process is CPU-bound, stick to synchronous operations. Is a database-related process considered CPU-bound or not?
Also, if I go the asynchronous-operation route, should I use AsyncController as described here, or just use Task as in the example I mentioned (and also here), or are they all the same?
The first thing you need to know is that async doesn't change the HTTP protocol. As I describe on my blog, the HTTP protocol gives you one response for each request. So you can't return once saying it's "in progress" and return again later saying it's "completed".
The easy solution is to only return when it's completed, and just use AJAX to toss up some "in progress..." notification on the client side, updating the page when the request completes. If you want to get more complex, you can use something like SignalR to have the server notify the client when the request is completed.
In particular, an async MVC action does not return "early"; ASP.NET will wait until all the asynchronous actions are complete, and then send the response. async code on the server side is all about scalability, not responsiveness.
That said, I do usually recommend asynchronous code on the server side. The one exception is if you only have a single DB backend (discussed well in this blog post). If your backend is a DB cluster or a distributed/NoSQL/SQL Azure DB, then you should consider making it asynchronous.
If you do decide to make your servers asynchronous, just return Tasks; AsyncController is just around for backwards compatibility these days.
Assuming C# 5.0, I would do something like the following:
// A method to get your intensive dataset
public async Task<IntensiveDataSet> GetIntensiveDataSet()
{
    // In here you'll want to use whatever ...Async calls you find
    // available for your operations; awaiting them prevents thread blocking.
    var intensiveDataSet = new IntensiveDataSet();
    using (var sqlCommand = new SqlCommand(SqlStatement, sqlConnection))
    {
        using (var sqlDataReader = await sqlCommand.ExecuteReaderAsync())
        {
            while (await sqlDataReader.ReadAsync())
            {
                // Build out your intensive data set.
            }
        }
    }
    return intensiveDataSet;
}

// Then in your controller, some method that uses it:
public async Task<JsonResult> Intense()
{
    return Json(await GetIntensiveDataSet());
}
In your JS you'd call it like this (with jQuery):
$.get('/ControllerName/Intense').success(function(data) {
console.log(data);
});
Honestly, I'd just show some sort of spinner while it was running.
If you do need some sort of feedback to the user, you would have to sprinkle updates to your user's Session throughout your async calls... and in order to do that you'd need to pass a reference to that Session around. Then you'd just add another simple JsonResult action that checked the message in the Session variable and poll it with JQuery on an interval. Seems like overkill though. In most cases a simple "This may take a while" is enough for people.
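If you did go that route, the client side could look roughly like this; the Progress endpoint and its JSON fields are assumptions:

// Poll a hypothetical progress endpoint every 2 seconds until done.
var poll = setInterval(function () {
    $.get('/ControllerName/Progress').done(function (data) {
        $('#status').text(data.message); // latest message from the Session
        if (data.done) {
            clearInterval(poll);         // stop polling once complete
        }
    });
}, 2000);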
You should consider the option of implementing the asynchrony using AJAX. You could handle the client "...processing" message right in your View, with minimum hassle:
$.ajax({
    url: '@Url.Action("ActionName")',
    data: data
}).done(function (data) {
    alert('Operation Complete!');
});

alert('Operation Started');
// Display processing animation
Handling async calls on the server side can be expensive, complicated and unnecessary.
I have a controller with two actions. One performs a very long computation, and at several steps, stores status in a session container:
public function longAction()
{
$session = new Container('SessionContainer');
$session->finished = 0;
$session->status = "A";
// do something long
$session->status = "B";
// do more long jobs
$session->status = "C";
// ...
}
The second action:
public function shortAction()
{
$session = new Container('SessionContainer');
return new JsonModel(
array(
'status' => $session->status
)
);
}
These are both called via AJAX, but I can observe the same behavior just using browser tabs. I first call /module/long, which does its thing. While it completes its tasks, calling /module/short (which I thought would just echo JSON) stalls until /module/long is done!
Bringing this up, some ZFers felt this was valid protection against race conditions; but I can't be the only one with this use case who really doesn't care about that.
Any cheap tricks that avoid heading towards queues, databases, or memory caches? Trying to keep it lightweight.
This is the expected behavior, and this is why:
Sessions are identified using a cookie that stores the session id; this allows your browser to pick up the same session on the next request.
As your long process is using sessions, it will not call session_write_close() until the whole process execution is complete, meaning the session is still open (and locked) while the long process is running.
When you connect from another browser tab, the browser will try to pick up the same session (using the same cookie), which is still open and locked by the running long process.
If you open the link using a different browser, you will see the page load fine without waiting for session_write_close() to be called, because it opens a separate session (however, you will not see the text you want, as it's a separate session).
You could manually write and close the session with session_write_close(), but that's probably not the best way to go about things.
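For illustration, this is what releasing the lock early would look like; note the caveat that later writes to the container no longer persist, which is exactly why it's a poor fit here:

public function longAction()
{
    $session = new Container('SessionContainer');
    $session->finished = 0;
    $session->status = "A";
    session_write_close(); // release the session lock so shortAction() can respond

    // do something long...
    // NOTE: with the session closed, further writes such as
    // $session->status = "B"; will not be persisted unless the
    // session is reopened first.
}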
It's definitely worth looking at something like Gearman for this; it's not that much extra work, and it's designed especially for this kind of async job processing. Even writing the status to the database would be better than sessions, but that's still not ideal.
I have the following code:
Button export = new Button("CSV");
export.addListener(new ClickListener()
{
    public void buttonClick(ClickEvent event)
    {
        CsvExport csvExport = new CsvExport(_table);
        csvExport.setDisplayTotals(false);
        csvExport.setDoubleDataFormat("0");
        csvExport.excludeCollapsedColumns();
        csvExport.setReportTitle("Document title");
        csvExport.setExportFileName("Nome_file_example.csv");
        csvExport.export();
        getWindow().showNotification("Document saved", "The document has been exported.");
    }
});
I would like the notification to appear only after the file has been exported and downloaded, but the notification does not show, maybe because it does not "wait" for the statement
csvExport.export();
to finish. If I comment that line out, the notification works.
Does anybody have any suggestions?
Thanks very much,
You'll need to split the work into a separate thread, then provide a way to notify the user 'later'.
So, first, create a thread... if you're on Java EE, use the built-in thread pooling; otherwise use something else (we're on Tomcat, and we rolled our own to allow better control).
Then, when the work is done, synchronize your thread, work your way back into your UI class (we use closures from Groovy, but you can make your own listener), and call the method to notify your user: window.showNotification('All Done').
So here's the tricky part: you've notified your user, but Vaadin has already sent the 'click' response back... so the server part thinks it has notified the user, but it isn't able to show it to the user yet. You'll need a progress indicator on your page, as it asks the server every 5 seconds if anything has changed.
There are also some 'push' plugins, but I've found that in most of the places where we were spinning up threads, we wanted to show a 'loading' animation, so the progress indicator works well.
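A rough outline of that approach for Vaadin 6-era APIs; whether csvExport.export() itself can run off the request thread depends on the add-on, and doTheLongExportWork is a hypothetical stand-in for the slow part, so treat this as a sketch rather than a drop-in fix:

export.addListener(new ClickListener() {
    public void buttonClick(ClickEvent event) {
        new Thread(new Runnable() {
            public void run() {
                doTheLongExportWork(); // hypothetical: the slow part of the job
                // Lock the application before touching UI state from a
                // background thread, then notify the user.
                synchronized (getApplication()) {
                    getWindow().showNotification("Document saved",
                            "The document has been exported.");
                }
            }
        }).start();
        // A ProgressIndicator on the page keeps the client polling, so the
        // notification set above is delivered on one of those polls.
    }
});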