Add version to a WSDL

This question seems simple, but I couldn't find the proper place in a WSDL document to set a version on its definitions.
The objective is to be able to easily see when it becomes outdated, when I update it in the future.
I'll set it to 1.0, and if in the future I add a new operation to it, I'll bump it to 1.1. Then if somebody has the 1.0 version it will be easy to see that it's missing that operation definition and ask them to update.

First thing to realise is that a new version of a service can be seen as a new service. Alike but different. The question then changes to "how to minimise duplication if services are similar".
As for versioning, you can use the namespace declaration (e.g. targetNamespace="mynamespace/1.0"), or a <version>1.0</version> tag in (or a version="1.0" attribute on) the root node of the types used for the request/response messages.
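As a rough sketch of both options (the namespaces and element names here are made up):

<!-- Option 1: version baked into the namespace; a breaking change gets a new namespace -->
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
             targetNamespace="http://example.com/orders/1.0">
  <!-- ... types, messages, portType, binding, service ... -->
</definitions>

<!-- Option 2: one namespace; the version travels inside the request/response types -->
<xsd:element name="GetOrderRequest">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="version" type="xsd:string"/> <!-- e.g. "1.0" -->
      <!-- ... actual payload elements ... -->
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>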
Using the namespace most likely means that one implementation can only serve one version of the service. If you want a certain implementation to serve, say, versions 1.0-1.3 and another 1.3+, then you will likely use the <version/> method (or the version attribute), since in that case there is only one namespace. Implementations can then internally decide whether to process or deny a request based on the value of <version/>.
In a more hybrid service landscape, you can use the <version/> method to create a proxy implementation that relays to services that use the targetNamespace method. Better still would be using a UDDI for this, if you have one at your disposal.
Please consider the backwards compatibility of your changes. Adding an operation, as you suggest, is fully backwards compatible. If you have a client X on version 1.0 and add an operation to your server (now 1.1), X can still call the server because all the operations X knows about are still available. (Provided, that is, that you did not change the namespace to reflect the new version; use <version/>.) The (absence of) backward compatibility of an interface is usually reflected in a changed major version number (e.g. 1.1 -> 2.0), which may lead you to make major changes via the namespace and minor ones via a <version/> tag.
Have fun, this is interesting stuff to be working on!

Related

How to pass thread local variable in Project Reactor

I started using Project Reactor. Does anyone know how I can pass thread-local variables from one thread to another? I saw some methods on Hooks.java but could not figure out what the recommended way of doing this is. Can someone point me to some documentation or a code snippet on how to do it? Thanks.
I have a working example in this GitHub repository, based on spring-cloud-sleuth's implementation: https://github.com/gumartinm/JavaForFun/tree/master/SpringJava/WebReactive/spring-webreactive-reactor-context-enrich
The key classes are: ContextCoreSubscriber.java, SubscriberContext.java, ThreadContextEnrichmentAutoConfiguration.java and UsernameFilter.java
ContextCoreSubscriber.java:
Enables you to fill the Mapped Diagnostic Context: MDC
SubscriberContext.java:
Helper class for inserting data in the Reactor's Context.
ThreadContextEnrichmentAutoConfiguration.java:
In charge of configuring the Reactor's Hooks: Hooks.onEachOperator
UsernameFilter.java:
Example where we want to register the username information based on some HTTP header.
Reactor doesn't guarantee that the processing done by a Flux or Mono chain of operators will stay on a single thread. On the contrary, it performs work-stealing and lets the user switch execution context.
As such, ThreadLocal is not well suited to Reactor.
There is currently some work being done for 3.1.0 towards providing an equivalent, at least for library authors that use Reactor, but nothing definitive is in place yet.
Keep your eyes peeled for 3.1.0; that should be the main theme of that release (and will probably be the focus of the second upcoming milestone, M2).
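If you end up on 3.1+, that equivalent is the subscriber Context, which travels with the subscription rather than with the thread. A minimal sketch, assuming Reactor 3.1 or later (the "username" key is purely illustrative):

import reactor.core.publisher.Mono;

public class ContextDemo {
    public static void main(String[] args) {
        Mono<String> greeting = Mono.just("Hello")
                // read the Context that propagates along the subscription, not a ThreadLocal
                .flatMap(s -> Mono.subscriberContext()
                        .map(ctx -> s + " " + ctx.getOrDefault("username", "anonymous")))
                // writes must sit downstream of the reads they should be visible to
                .subscriberContext(ctx -> ctx.put("username", "alice"));

        System.out.println(greeting.block()); // prints "Hello alice"
    }
}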

Why use defaults/preferences/*.js vs setting Firefox Add-on defaults in JavaScript?

The more I think about it, the more I wonder why anyone should bother to use the defaults/preferences/*.js file to set defaults, versus setting the defaults in JavaScript.
I am doing work on an older XUL/Overlay based add-on created by someone else, which actually uses a prefs.js file in addition to setting all the defaults in the JavaScript (in case the prefs.js failed for some inexplicable reason?), which essentially makes all the preferences into user-prefs directly after installation. This confused me at first, as the defaults were showing up as modified (user-set) when looking at the prefs in about:config. Then I realized it was unconditionally setting some of the defaults (very large strings).
So I realized that not only do I maintain the same prefs in 3 locations (prefs.js, interface and content scripts), but the prefs.js file is largely redundant, which adds maintenance for no reason. This just seems silly, and I am looking for a better way to store and manage prefs in one location (which is probably why prefs.js should be used exclusively).
Now, I realize this question has the potential to be flagged as "opinion", and may or may not have a specific "right" answer. But I think it is a valid question, and I would like to learn more about the pros and cons of using a prefs.js file versus setting all prefs during initialization in shared JS code. Are there any performance concerns, or an objective list of criteria that I could use to make this determination? Is it possible that the prefs.js mechanism would ever fail? Is it safe to assume it never will? Was it more prone to failure back in the FF 1.0-3.5 days?
Default preferences:
Can be changed in a new version of the add-on without overriding the value set explicitly by the user.
Do not clutter up the user's profile after the add-on is uninstalled.
Are located in the same place, not scattered throughout the code, so it's easy to see what preferences there are. They also appear in about:config, which allows you to have UI for "hidden" prefs - for the sake of power users.
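As a minimal sketch of how the split looks (the pref name is hypothetical): the default value ships in defaults/preferences/prefs.js, and the add-on code only reads it instead of re-declaring it.

// defaults/preferences/prefs.js (packaged with the add-on; note pref(), not user_pref())
pref("extensions.myaddon.template", "some large default string");

// anywhere in the add-on's JS: read the value - the default is returned
// unless the user has explicitly overridden it
Components.utils.import("resource://gre/modules/Services.jsm");
var template = Services.prefs.getCharPref("extensions.myaddon.template");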
It is a pity they did not make the get*Pref API return the default value in case the user-set value is invalid, and I understand not wanting to maintain the default in two locations. This whole API was "frozen" a long time ago, and will eventually go away with the rest of XPCOM, so it doesn't really matter...

Dart dynamic class loading

I want to build an application server with Dart. The HttpServer in the dart:io library is certainly a good starting point for that. But I struggle with the task of "deploying" an application without restarting the server process.
To be more precise: I want something like a servlet container in Java, like Tomcat, into which I can easily deploy or redeploy an application without restarting the container. I thought I could use the mirror system, which in principle allows me to load a library and its contained classes from the filesystem. But unfortunately it seems that I cannot re-load the library. When I add, for example, a new class to the library, or change the code of an existing class, reflecting on the library again without restarting the Dart process does not pick up the changes. Only when I stop the process and restart it are the changes visible.
So: is there a way to scrub the mirror system and let it load the library and its classes again, within the same Dart process?
I think isolates are a good fit for this requirement.
I haven't used them myself much yet but as far as I know you can load and unload them dynamically.
The documentation is not very extensive yet.
A few things I found:
https://api.dartlang.org/apidocs/channels/stable/dartdoc-viewer/dart-isolate
Recent documentation about Dart Isolates
https://www.youtube.com/watch?v=TQJ1qnrbTwk
https://www.youtube.com/watch?v=4GlK-Ln7HAc
So, yes, it is possible in Dart to dynamically (re-)load dart-files at runtime. Every new isolate has its own MirrorSystem. If you want to reload a dart-file you must create a new isolate and use the MirrorSystem of this isolate to iterate over the contents in the libraries known to this MirrorSystem. If your dart-file is part of a library known to the MirrorSystem, all functions and classes contained in this file are loaded and reflected anew.
This solution has some drawbacks: First, it is quite heavyweight. Programming inter-isolate communication is cumbersome. It also remains to be seen whether memory consumption increases with each reload. Second, the solution is not really dynamic: isolates load only libraries that are "known" at design time. They must be directly or indirectly imported into the dart file that contains the static function which is called when the isolate is created.
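A rough sketch of that pattern, assuming a Dart 1.x-era dart:isolate / dart:mirrors API (the file and library names are made up):

// handlers_main.dart - entry point of the freshly spawned isolate
import 'dart:mirrors';
import 'my_handlers.dart'; // must already be imported at design time, as noted above

main(List<String> args, message) {
  // every isolate gets its own MirrorSystem, so this reflects the code as it is now on disk
  currentMirrorSystem().libraries.forEach((uri, lib) {
    if (uri.path.endsWith('my_handlers.dart')) {
      lib.declarations.keys.forEach((name) => print(MirrorSystem.getName(name)));
    }
  });
}

// host side: spawn a new isolate whenever a "redeploy" is requested, e.g.
// Isolate.spawnUri(Uri.parse('handlers_main.dart'), [], null);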
Two ideas how the situation could be improved:
1. It would help if the spawn and the spawnUri methods of Isolate could take a list of additional libraries as a parameter, to be included in the MirrorSystem of the isolate.
2. The classloaders in Java are independent of processes and threads. They just load classes. Why isn't this possible in Dart?

Loading additional javascript code from firefox add-on content script

I'm writing something that I want to release as both a chrome extension and a firefox add-on.
The chrome extension is already available on github. I've factored my code into several modules using a module load format similar to what requirejs uses; I did this to separate the parts that were chrome-specific from the parts I hoped to re-use in the firefox add-on.
Specifically, I split up not only the backend work, but also the content scripts.
In chrome, when my content script needs to load another module, it sends a message to the background page saying "please load this module"; the script on the background page then does:
function onLoadLibrary(request, sender, sendResponse) {
  var allFrames = request.allFrames || false;
  chrome.tabs.executeScript(
    sender.tab.id,
    {file: request.library.toLowerCase() + '.js', allFrames: allFrames},
    function () {
      sendResponse({});
    });
  return true;
}
That is, I'm able to load additional javascript into the same sandbox as the content script that asked for that code. This is necessary to make module dependencies work.
In firefox, I can't figure out how to do this. I'll attach my initial content scripts through pageMods and by calling tab.attach from the "ready" event of tabs. That seems straightforward, but then if that content script needs to load more code I can't see how to do it.
There doesn't seem to be a way to access the sandbox my content script is running in from the main.js file so that I might inject more code into it. Even if I somehow kept a reference to the relevant tab instance (which only lets me inject into the top frame in any case), it appears that each new call to tab.attach puts the injected code into a new sandbox. The tab object that's passed to my ready event handler isn't a real XUL tab that I could pass to require("tabs/util").getBrowserForTab; if it were, then I think I could follow through enough of the SDK code to create my own sandbox, though I'd worry about leaving accidental memory leaks behind.
I considered passing the code back to the content script through an "eval-this-code" message, but I really don't want to use eval in my extension because of security concerns; I also worry that using eval would make it difficult or impossible to get my firefox add-on approved for AMO. (Also, how would that interact when my add-on runs on sites with a Content Security Policy?)
The usage of traits to define the add-on API seems to close off access to objects such that I can't reach inside a Worker to get a reference to the sandbox my content script is executing in. At this point, it appears that I'd need to include nearly a full copy of the sdk in my add-on just to expose one method on WorkerSandbox.
Note: I'm using the Add-On sdk (the project formerly known as JetPack). I'm willing to use Components.utils.import if someone can tell me how to use that from inside an Add-On SDK-managed content script.
Content-scripts do not expose a public API to attach more scripts to a content-script sandbox after it was initialized. You should probably file an enhancement bug and state your use case, if there isn't one filed already (search first), and/or even come up with some patches yourself.
In cases where there is a DOM that your add-on owns (e.g. a widget), it's just a matter of attaching another script tag.
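For that case, a minimal sketch (the URL is illustrative, e.g. something passed in via contentScriptOptions and exposed as self.options in the content script):

// inside a script running in a document your add-on owns (e.g. a widget panel)
var script = document.createElement('script');
script.src = self.options.moduleUrl;   // hypothetical option holding the module's URL
script.onload = function () {
  // the loaded code is now available to the document
};
document.documentElement.appendChild(script);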
For things like page-mods where there is no DOM you own, you're left with a couple of options, none of which is really satisfying. As you already found out yourself, the use of traits prohibits you from accessing "private" properties/methods.
Fork page-mod/tab/content-worker to provide the functionality you need. That would require creating your own copies of the modules and exposing the necessary APIs to inject scripts into existing workers.
This has a steep learning curve (but given that you already figured out details such as traits, it should be doable for you), but more importantly it is hard to maintain, as you need to make sure you keep up with upstream. And AMO editors will not like you very much for it :p
On the plus side, you could try to get your stuff committed upstream, fixing this problem for everybody and become a hero to many authors using the Add-on SDK.
The eval method you propose. Not only is eval a major source of security issues, but it may also be a performance killer, as right now (IIRC) evaled code will not use the JIT. And, of course, it will make us AMO editors cringe, even if used "correctly".
Do not use lazy loading at all, and specify all content scripts from the very beginning. This is what add-ons usually do (I'm almost inclined to say "always"). However, this conflicts with your current design, and depending on your add-on may pose a serious performance penalty for loading stuff you didn't really need in the end.
You could use the require mechanism to have most scripts as SDK modules instead of content-scripts (a sketch follows below). This is not always feasible, of course, e.g. when dealing with code that would normally modify the DOM in your content-script, but it might work for some other stuff.
Replace page-mod, etc with your own Greasemonkey-like, enhanced API. This means lots of work, it is error-prone, security-sensitive and has to be maintained. So, it's not really a viable solution, IMO...
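A minimal sketch of that SDK-module route (the module name is made up; the SDK loader resolves require() against lib/):

// lib/parser.js - an ordinary SDK (CommonJS) module, not a content script
exports.tokenize = function (text) {
  return text.trim().split(/\s+/);
};

// lib/main.js
var parser = require("./parser");
console.log(parser.tokenize("loaded without touching any content-script sandbox"));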
Components.utils.import does not help you. It isn't available to content-scripts anyway.

Delphi logging with multiple sinks and delayed classification?

Imagine I want to parse a binary blob of data. If all goes okay, then all the log entries are INFO, and the user by default does not even see them. If there is an error, then the user is presented with the error and can view the log to see the exact reason (I don't like programs that just say "file is invalid, for some reason; you do not want to know it").
Probably most log libraries are aimed at quickly emitting, classifying and storing many, many log lines per second, which is itself questionable, as Delphi has no comfortable lazy evaluation or closures. I envy Scala :-)
However, that requires every line to be pre-classified.
Imagine this hypothetical flow:
1. Got object FOO [ok]
1.1. found property BAR [ok]
1.1.1. parsed data for BAR [ok]
1.2. found property BAZ [ok]
1.2.1. successfully parsed data for BAZ [ok]
1.2.2. matching data: checked dependency between BAR and BAZ [fail]
...
So, what are the desired features?
1) Nested logging (indenting, subordination) is desired, then.
Something like what is highlighted in TraceTool - see the TraceNode.Send method at http://www.codeproject.com/KB/trace/tracetool.aspx#premain0
2) The 1, 1.1, 1.1.1, 1.2, 1.2.1 lines are sent as they happen to an info sink (TMemo, OutputDebugString, EventLog and so on), so the user can see and report at least which steps completed before the error.
3) 1, 1.2, 1.2.2 are retroactively marked as error (or warning, or whatever), inheriting from the most specific line. Obviously, warning supersedes info, error supersedes warning and info, etc.
4) 1 + 1.2 + 1.2.2 can be easily combined, like with LogMessage('1.2.2').FullText, to be shown to the user or converted to an Exception, to carry the full story to a human.
4.1) Optionally, with the relevant setup, it would not only be converted to an Exception, but the latter would even be auto-raised. This would probably require some kind of context with a supplied exception class or a supplied exception-constructing callback.
5) Multi-sink: info can just be appended to a collapsible panel with a TMemo on the main form or the currently active form. The error state could additionally open such a panel or prompt the user to do so. At the same time some file or network server could, for example, receive warning- and error-grade messages and not receive info-grade ones.
6) Extra associated data might be nice too. Say, if rendered with a TreeView rather than a TMemo, it could have a "1.1.1. parsed data for BAR [ok]" item with a mouse tooltip like "Foo's dimensions are told to be 2x4x3.2 metres".
Being a free library is nice, especially free with sources. Tracking and fixing a bug relying solely on DCUs is much harder.
Not requiring an extra executable. It could offer an extra, more advanced viewer, but one should not be required for basic functionality.
Not being stalled/abandoned.
The ability to work and show at least something before the GUI is initialized would be nice too. Class constructors are cool, yet they are executed as part of unit initialization, when the VCL is not booted yet. If any assertion/exception is thrown from there, the user would only see Runtime error 217, with all the detail lost. At least OutputDebugString can be used, if nothing more...
Stack tracing is not required; if needed I can do it and add it with the Jedi CodeLib. But it is rarely needed.
External configuration is not required. It might be good for a big application to reconfigure on the fly, but to me simplicity is much more important, and configuration in code, by calling constructors or such, is what really matters. An extra XML file, like for Log4J, would only make things more fragile and complex.
I glanced at a few of the libraries mentioned here.
TraceTool has a great presentation; the link is above. Yet it has no info grade, only 3 predefined grades (Debug/Error/Warning) and nothing more, though maybe Debug would do as an Info replacement... It seems like a black box, only saving data into its own file and using an external tool to view it, not giving the stream of events back to me. But its message nesting and call chaining seem cool. Also cool is attaching objects/collections to messages.
Log4D and Log4Delphi seem to be in stasis, with last releases in 2007 and 2009 and Delphi 7 as the last targeted version. They lack documentation (probably okay for a Log4J guy, but not for me :-) ). Log4Delphi even has a test folder - but those tests do not compile in Delphi XE2 Update 1. A pity: in another thread here Log4Delphi was hailed for how simple it is to create a custom log appender (sink)...
BTW, the very fact that Log4J alone was forked into two independent Delphi ports leaves open the question of which one is better, and suggests both lack something, if they had to remain split.
The mORMot part is hardly separated from the rest of the library. The demo application required UAC elevation to use its embedded SQLite3 engine and froze (no window opened, yet the process never exits normally) when refused admin rights. Another demo just started an infinite stream of AV exceptions while trying to unwind the stack. So it is probably not ready yet for the latest Delphi. Also, its list of message grades is extensive, maybe even a bit too long.
Thank you.
mORMot is stable, even with the latest XE2 version of Delphi.
What you tried starting were the regression tests. Among its 6,000,000 tests, it includes the HTTP/1.1 client-server part of the ORM. Without Admin rights, the http.sys server is not able to register the URI, so you got errors - which makes perfect sense. It's a Vista/Seven restriction, not a mORMot restriction.
The logging part can be used completely separately from the ORM part. Logging is implemented in SynCommons.pas (and SynLZ.pas for the fast compression algorithm used for archival and .map embedding). I use the TSynLog class without any problem to log existing applications (even Delphi 5 and Delphi 6 applications) that have existed for years. The SQLite3 / ORM classes are implemented in other units.
It supports nesting of events, with an auto-leave feature, just as you expect. That is, you can write:
procedure TMyClass.MyMethod(const Params: integer);
begin
  TSynLog.Enter;
  // .... my method code
end;
And adding this TSynLog.Enter call will be logged with indentation corresponding to the recursion level. IMHO this may meet your requirements. It declares an ISynLog interface on the stack, which will be freed by Delphi at the "end;" code line, so it implements an auto-leave feature. And the exact unit name, method name and source code line number will be written into the log (as MyUnit.TMyClass.MyMethod (123)), if you generated a .map file at compilation (which may be compressed and appended to the .exe so that your customers' logs will contain the source line numbers). You have methods at the ISynLog interface level to add custom logging, including parameters and custom state (you can log object properties as JSON if you need to, or write your own custom logging data).
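For instance, a hedged sketch (method and type names quoted from the SynCommons unit as I remember them - double-check them against your mORMot revision):

procedure TMyClass.ParseBlob(const Data: RawByteString);
var log: ISynLog;
begin
  log := TSynLog.Enter(self, 'ParseBlob');   // explicit reference instead of the hidden one
  log.Log(sllInfo, 'found property BAR');    // custom event inside the nested scope
  // ... parsing code ...
end; // the interface is released here, which writes the matching "Leave" event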
The exact timing of each method is tracked, so you are able to profile your application from the data supplied by your customer.
If you think the logs are too verbose, you have several levels of logging, which can be customized on the client side. See the blog articles and the corresponding part of the framework documentation (in the SynCommons part). You have, for instance, "Fail" events and some custom kinds of events. And it is totally VCL-independent, so you can use it without a GUI, or before any GUI is started.
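Roughly, the filtering looks like this (again a hedged sketch; the sll* set values come from SynCommons and may differ slightly between versions):

// choose which event grades this process actually writes
TSynLog.Family.Level := [sllInfo, sllWarning, sllError, sllFail, sllEnter, sllLeave];
// emit an event with an explicit grade
TSynLog.Add.Log(sllFail, 'matching data: dependency between BAR and BAZ failed');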
You have at hand a log viewer, which allows client-side profiling and a nested Enter/Leave view (if you click on a "Leave" line, you'll go back to the corresponding "Enter").
If this log viewer is not enough, you have its source code to make it fulfill your requirements, and all the needed classes to parse and process the .log file on your own, if you wish. Logs are textual by default, but can be compressed into binary on request, to save disk space (the log viewer is able to read those compressed binary files). Stack tracing and exception interception are both implemented, and can be activated on request.
You could easily add numbering like "1.2.1" to the logs, if you wish. You've got the whole source code of the logging unit. Feel free to ask any question in our forum.
Log4D supports nested diagnostic contexts in the TLogNDC class; they can be used to group together all steps which are related to one compound action (instead of a 'session'-based grouping of log events). Multi-sinks are called Appenders in Log4D and Log4Delphi, so you could write a TLogMemoAppender with around twenty-five lines of code and use it at the same time as an ODSAppender, a RollingFileAppender, or a SocketAppender, configurable at run time (no external config file required).
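A rough sketch of such an appender (the Log4D base class and overridden method are quoted from memory and may differ; it also assumes logging happens on the main thread, since TMemo is not thread-safe):

type
  TLogMemoAppender = class(TLogCustomAppender)
  private
    FMemo: TMemo; // assigned by the owning form
  protected
    procedure DoAppend(const Msg: string); override;
  public
    constructor Create(const AName: string; AMemo: TMemo); reintroduce;
  end;

constructor TLogMemoAppender.Create(const AName: string; AMemo: TMemo);
begin
  inherited Create(AName);
  FMemo := AMemo;
end;

procedure TLogMemoAppender.DoAppend(const Msg: string);
begin
  if Assigned(FMemo) then
    FMemo.Lines.Add(TrimRight(Msg)); // append one formatted line to the panel
end;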
