If I'm running some JavaScript in my iOS app using JavaScriptCore, is it possible for my native code to query whether the JavaScript is currently processing an event, and/or whether there are events waiting to be immediately processed next?
Scenario: the running JS is managing some state, and I want my UI thread to occasionally query the JS for that state, but I don't want to risk waiting too long if the JS is busy processing other events first.
I investigated a bit more and discovered that there's a surprising answer to this question: there is no JavaScript event loop in JavaScriptCore unless you implement one yourself.
To add further detail: execution of JavaScript code is always synchronous with the ObjC call that invokes it, and there is no mechanism in pure JavaScript to schedule asynchronous execution: no setTimeout, no setInterval, no DOM with addEventListener. You have to implement these functions natively, and then you get to decide the scheduling policy. If your JSContext is attached to a WebView or a similar browser-like host, the WebView provides these events, and you may not be able to access them in the way you want. In that case, however, you can use something like Zone.js to monkey-patch every function that can generate events and implement, in JavaScript, something that tracks what you want (this is demonstrated in the presentation video on the linked page).
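For example, here's a minimal sketch of supplying setTimeout natively (one-shot only; the choice of the main dispatch queue is my assumption, and a real shim would also need timer IDs and clearTimeout):

#import <JavaScriptCore/JavaScriptCore.h>

// Sketch: expose a native setTimeout to a JSContext.
void InstallSetTimeout(JSContext *context) {
    context[@"setTimeout"] = ^(JSValue *callback, double delayMs) {
        dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW,
                                             (int64_t)(delayMs * NSEC_PER_MSEC));
        dispatch_after(when, dispatch_get_main_queue(), ^{
            [callback callWithArguments:@[]]; // re-enters JS synchronously
        });
    };
}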
Based on a quick perusal of the API source here:
https://github.com/phoboslab/JavaScriptCore-iOS/tree/master/JavaScriptCore/API
I don't see any function on the JSContext or JSVirtualMachine that would get you the information you're interested in.
As an alternative, you could call back into your ObjC code from JS each time the JS code begins/ends processing an event, and then store state information on the ObjC side so you can quickly determine whether you believe the JSVM is processing an event or not.
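A minimal sketch of that idea, assuming your JS dispatches events through a single place that can call the hooks (the names __eventBegan/__eventEnded are made up):

#import <JavaScriptCore/JavaScriptCore.h>
#import <stdatomic.h>

static atomic_bool gJSBusy;

// Expose begin/end hooks to JS; the JS dispatcher calls them around each event.
void InstallBusyHooks(JSContext *context) {
    context[@"__eventBegan"] = ^{ atomic_store(&gJSBusy, true); };
    context[@"__eventEnded"] = ^{ atomic_store(&gJSBusy, false); };
}

// Cheap query for the UI thread.
BOOL IsJSBusy(void) {
    return atomic_load(&gJSBusy) ? YES : NO;
}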
Related
A dart:html.WebSocket opens once and closes once. So I'd expect that the onOpen and onClose properties would be Futures, but in reality they are Streams.
Of course, I can use stream.onClose.first to get a Future. But the done property on the dart:io version of WebSocket is a Future as expected, so I'm wondering if I'm missing something.
Why use Streams and not Futures in dart:html?
In a word: transparency. The purpose of the dart:html package is to provide access to the DOM of the page, and so it mirrors the underlying DOM constructs as closely as possible. This is in contrast with dart:io where the goal is to provide a convenient server-side API rather than exposing some underlying layer.
Sure, as a consumer of the API you would expect open and close to be fired only once, whereas message would be fired multiple times, but at the root of things, open, close, message, and error are all just events. And in dart:html, DOM events are modeled as streams.
And actually, a WebSocket could very well fire multiple open events (or close events). The following is definitely a contrived example, but consider this snippet of JavaScript:
var socket = new WebSocket('ws://mysite.com');
socket.dispatchEvent(new Event('open'));
socket.dispatchEvent(new Event('open'));
socket.dispatchEvent(new Event('open'));
How would a Dart WebSocket object behave in a situation like this, if onOpen were a Future rather than a Stream? Of course I highly, highly doubt this would ever surface out there in the "real world". But the DOM allows for it, and dart:html should not be making judgment calls trying to determine which situations are likely and which are not. If it's possible according to the spec, dart:html should reflect that. Its role is simply to pass through the behavior - as transparently as possible - and let the consumer of the API decide what cases they need to handle and which they can ignore.
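And if you only care about the first occurrence, collapsing the stream to a Future in Dart is a one-liner. A short sketch (the URL is a placeholder):

import 'dart:html';

void main() {
  var socket = new WebSocket('ws://example.com/ws');
  socket.onOpen.listen((_) => print('open'));        // fires per event, repeats included
  socket.onClose.first.then((_) => print('closed')); // a Future, when that's all you need
}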
My app includes a back-end server, with many transactions which must be carried out in the background. Many of these transactions require several pieces of code to run in sequence.
For example, do a query, use the result to do another query, create a new back-end object, then return a reference to the new object to a view controller object in the foreground so that the UI can be updated.
A more specific scenario would be to carry out a sequence of AJAX calls in order, similar to this question, but in iOS.
This sequence of tasks is really one unified piece of work. I did not find existing facilities in iOS that allowed me to cleanly code this sequence as a "unit of work". Likewise I did not see a way to provide a consistent context for the "unit of work" that would be available across the sequence of async tasks.
I recently had to do some JavaScript and had to learn to use the Promise concept that is common in JS. I realized that I could adapt this idea to iOS and objective-C. The results are here on Github. There is documentation, code and unit tests.
A Promise should be thought of as a promise to return a result object (id) or an error object (NSError) to a block at a future time. A Promise object is created to represent the asynchronous result. The asynchronous code delivers the result to the Promise and then the Promise schedules and runs a block to handle the result or error.
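To make the contract concrete, here's a minimal sketch of the idea. This is NOT the linked library's API, just an illustration of the concept, and it isn't thread-safe (a real implementation would serialize access to its state):

#import <Foundation/Foundation.h>

typedef void (^ResultBlock)(id result, NSError *error);

// A one-shot promise: holds either a result (id) or an error (NSError *)
// and runs the handler block once both the handler and the outcome exist.
@interface MiniPromise : NSObject
- (void)then:(ResultBlock)block;
- (void)deliverResult:(id)result error:(NSError *)error;
@end

@implementation MiniPromise {
    ResultBlock _block;
    id _result;
    NSError *_error;
    BOOL _delivered;
}

- (void)then:(ResultBlock)block {
    _block = [block copy];
    if (_delivered) _block(_result, _error); // outcome arrived before the handler
}

- (void)deliverResult:(id)result error:(NSError *)error {
    _result = result;
    _error = error;
    _delivered = YES;
    if (_block) _block(_result, _error);     // handler was registered first
}
@end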
If you are familiar with Promises on JS, you will recognize the iOS version immediately. If not, check out the Readme and the Reference.
I've used most of the usual suspects, and I have to say that for me, Grand Central Dispatch is the way to go.
Apple obviously cares enough about it to have rewritten a lot of their library code to use completion blocks.
IIRC, Apple has also said that GCD is the preferred mechanism for multitasking.
I also remember that some of the earlier options have been re-implemented using GCD under the hood, so if you're not already attached to something else: go GCD!
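As a sketch of the basic pattern (the queue label and the two step functions are placeholders, not anything from the question): a serial queue keeps the steps of one unit of work in order, and the final block hops back to the main queue for the UI update.

#import <Foundation/Foundation.h>

// Hypothetical steps of one "unit of work".
static NSString *FirstQuery(void) { return @"id-42"; }
static NSString *SecondQuery(NSString *key) { return [key stringByAppendingString:@"-detail"]; }

void RunUnitOfWork(void) {
    dispatch_queue_t backend = dispatch_queue_create("com.example.backend", DISPATCH_QUEUE_SERIAL);
    dispatch_async(backend, ^{
        NSString *key = FirstQuery();        // step 1
        NSString *detail = SecondQuery(key); // step 2 uses step 1's result
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"update UI with %@", detail); // back on the main thread
        });
    });
}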
BTW, I used to find writing the block signatures a real pain, but if you just hit Return when the placeholder is selected, the IDE fills it all in for you. What could be sweeter than that?
This isn't directly a coding question, but answers to this will help in coding a Firefox addon.
Is all JS addon code executed on main thread?
Assuming the answer to #1 is 'yes', could the addon code be executed by different JS runtimes on the main thread?
Assuming the answer to #2 is 'no', are there multiple JS 'execution contexts' within the same JS runtime? If 'yes', could the addon code be executed by different execution contexts within the same JS runtime?
I could be way off in my questions above, but I am seeing strange behavior in my addon that causes parts of my addon code to hang (when a modal dialog is up, I am not able to receive data on socket #1) while other parts continue working (I am able to read data from socket #2). I am not able to explain this behavior.
Is all JS addon code executed on main thread?
Yes. AFAIK the only exception to this rule is code you run via web workers or ChromeWorker. There were plans to run extensions based on the Add-on SDK in a separate process; I'm not sure whether that is still the goal.
could the addon code be executed by different JS runtimes on the main thread?
Theoretically - yes. E.g. an extension could come with a binary XPCOM component containing a different JavaScript engine. Or there is the Zaphod extension, which comes with a JavaScript-based JavaScript engine. But that doesn't really matter to your code, which runs in the regular SpiderMonkey engine.
I am seeing a strange behavior in my addon that is causing parts of my addon code to hang (when a modal dialog is up, am not able to receive data on socket #1)
JavaScript execution is based on event queues - the browser takes an event from the queue (it could be, for example, a DOM event, a timeout, or a network event) and processes it, which runs the corresponding JavaScript code. No other events can be processed until this JavaScript code finishes.
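You can see this with a trivial snippet: the timeout can't fire until the running script returns control to the event loop.

setTimeout(function () { console.log("timer"); }, 0);
var start = Date.now();
while (Date.now() - start < 2000) {} // busy-wait: no other event can be processed
console.log("loop done");            // always prints before "timer"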
However, JavaScript code doesn't continue to run when a modal dialog is open or a synchronous XMLHttpRequest is performed - yet events still need to be processed. This is solved by modal dialogs and XMLHttpRequest doing event processing while they are active (they spin their own event loop).
So far the basics. Data coming in on a socket is a regular event - and it should work regardless of whether it is being processed by the main event loop or by the modal dialog's loop.
The Delphi debugger is great for debugging linear code, where one function calls other functions in a predictable, linear manner, and we can step through the program line by line.
I find the debugger less useful when dealing with event-driven GUI code, where a single line of code can trigger new events, which may in turn trigger other events.
In this situation, the 'step through the code' approach doesn't let me see everything that is going on.
The way I usually solve this is to 1) guess which events might be part of the problem, then 2) add breakpoints or logging to each of those events.
The problem is that this approach is haphazard and time consuming.
Is there a switch I can flick in the debugger to say 'log all GUI events'? Or is there some code I can add to trap events, something like:
procedure GuiEventCalled(ev: TEvent);
begin
  Log(ev);
  ev.Call;
end;
The end result I'm looking for is something like this (for example):
FieldA.KeyDown
FieldA.KeyPress
FieldA.OnChange
FieldA.OnExit
FieldB.OnEnter
This would take all the guesswork out of Delphi GUI debugging.
I am using Delphi 2010
[EDIT]
A few answers suggested ways to intercept or log Windows messages. Others then pointed out that not all Delphi events are Windows messages at all. I think it is this type of "non-Windows-message" event that I was asking about: events that are created by Delphi code. [/EDIT]
[EDIT2]
After reading all the information here, I had an idea to use RTTI to dynamically intercept TNotifyEvents and log them to the Event Log in the Debugging window. This includes OnEnter, OnExit, OnChange, OnClick, OnMouseEnter, OnMouseLeave events. After a bit of hacking I got it to work pretty well, at least for my use (it doesn't log Key events, but that could be added).
I've posted the code here
To use it:
Download the EventInterceptor unit and add it to your project.
Add the EventInterceptor unit to the uses clause.
Add this line somewhere in your code for each form you want to track:
AddEventInterceptors(MyForm);
Open the debugger window and any events that are called will be logged to the Event Log
[/EDIT2]
Use the "delphieventlogger" Unit I wrote download here. It's only one method call and is very easy to use. It logs all TNotifyEvents (e.g. OnChange, OnEnter, OnExit) to the Delphi Event Log in the debugger window.
No, there's no generalized way to do this, because Delphi doesn't have any sort of "event type" that can be hooked in some way. An event handler is just a method reference, and it gets called like this:
if Assigned(FEventHandler) then
  FEventHandler(Self);
Just a normal method reference call. If you want to log all event handlers, you'll have to insert some call into each of them yourself.
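For a single handler, the by-hand version looks something like this (FOriginalClick is a TNotifyEvent field you add to the form yourself; the names are illustrative, and OutputDebugString needs Windows in the uses clause):

procedure TForm1.FormCreate(Sender: TObject);
begin
  FOriginalClick := Button1.OnClick;  // remember the original handler
  Button1.OnClick := LoggedClick;     // install the logging wrapper
end;

procedure TForm1.LoggedClick(Sender: TObject);
begin
  OutputDebugString('Button1.OnClick');  // appears in the debugger's Event Log
  if Assigned(FOriginalClick) then
    FOriginalClick(Sender);              // forward to the original handler
end;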
I know it is a little bit expensive, but you can use Automated QA's (now SmartBear's) TestRecorder as an extension to TestComplete (if you want this only on your system, TestComplete alone will do). This piece of software tracks your GUI actions and stores them in a script-like language. There is even a unit that can be linked into your exe to make these recordings directly on the user's system. This is especially helpful when users are not able to explain what they did to produce an error.
Use WinSight to see the message flow in real time.
If you really want the program to produce a log, then override WndProc and/or intercept the messages in Application.
The TApplication.OnMessage event can be used to catch messages that are posted to the main message queue. That is primarily for OS-issued messages, not internal VCL/RTL messages, which are usually dispatched to WndProc() methods directly. Not all VCL events are message-driven to begin with. There is no single solution to what you are looking for. You would have to use a combination of TApplication.OnMessage, TApplication.HookMainWindow(), WndProc() overrides, SetWindowsHook(), and selective breakpoints/hooks in code.
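For the TApplication.OnMessage piece specifically, a minimal sketch looks like this (it only sees messages pulled from the main queue, per the caveats above; AppMessage is a method you declare on the form, and Windows/Messages must be in the uses clause):

procedure TForm1.FormCreate(Sender: TObject);
begin
  Application.OnMessage := AppMessage;  // hook the main message queue
end;

procedure TForm1.AppMessage(var Msg: TMsg; var Handled: Boolean);
begin
  // Log a couple of interesting queued messages to the debugger's Event Log.
  case Msg.message of
    WM_KEYDOWN:     OutputDebugString('WM_KEYDOWN');
    WM_LBUTTONDOWN: OutputDebugString('WM_LBUTTONDOWN');
  end;
  Handled := False;  // let normal dispatching continue
end;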
Borland's WinSight tool is not distributed anymore, but there are plenty of third-party tools readily available that do the same thing, such as Microsoft's Spy++, WinSpector, etc., for tracking and logging window messages in real time.
As an alternative, to debug the triggered events, use the debugger's Step Into (F7) command instead of Step Over (F8).
The debugger will then stop on any available code line reached during the call.
You can try one of the AOP frameworks for Delphi. MeAOP provides a default logger that you can use. It won't tell you what is going on inside an event handler but it will tell you when an event handler is called and when it returns.
I'm developing an application in Lazarus that needs to check whether there is a new version of an XML file on every Form_Create.
How can I do this?
I have used the Synapse library in the past to do this kind of processing. Basically, include httpsend in your uses clause, and then call HttpGetBinary(url, xmlstream) to retrieve a stream containing the resource.
I wouldn't do this in the OnCreate though, since it can take some time to pull the resource. You're better served by placing this in another thread that can make a Synchronize call back to the form to enable updates, or set an application flag. This is similar to how the Chrome browser displays updates on its about page: a thread is launched when the form is displayed to check whether there are updates, and when the thread completes it updates the GUI. This allows other tasks to occur (such as a small animation, or the ability for the user to close the dialog).
Synapse is not a visual component library, it is a library of blocking functions that wrap around most of the common internet protocols.
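Here's a sketch of that threaded approach using Synapse's httpsend unit; the form callback at the end is a placeholder, not part of Synapse:

uses Classes, httpsend;

type
  TXmlFetchThread = class(TThread)
  private
    FUrl: string;
    FXml: TMemoryStream;
    procedure NotifyForm;
  protected
    procedure Execute; override;
  public
    constructor Create(const AUrl: string);
  end;

constructor TXmlFetchThread.Create(const AUrl: string);
begin
  inherited Create(False);   // start running immediately
  FreeOnTerminate := True;
  FUrl := AUrl;
end;

procedure TXmlFetchThread.Execute;
begin
  FXml := TMemoryStream.Create;
  try
    if HttpGetBinary(FUrl, FXml) then // blocking, so keep it off the main thread
      Synchronize(NotifyForm);        // hand the result back to the GUI thread
  finally
    FXml.Free;
  end;
end;

procedure TXmlFetchThread.NotifyForm;
begin
  FXml.Position := 0;
  // e.g. Form1.LoadNewXml(FXml) - a hypothetical method on your form
end;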
You'll need to read up on FPC networking; lNet looks especially useful for this task.