Since all versions of Learning Tools Interoperability (LTI) prior to 1.3 are deprecated, I am trying, as a tool provider, to find a way to support re-launches of the tool analogous to an ab initio launch in SCORM. In other words, I want the learner to start my activity as if it were the first time they did so.
I am looking for a supported way to do this in LTI 1.3.
In LTI 1.0 there was the lis_result_sourcedid parameter, on which I have seen implementations of the above functionality. According to the migration guide, this is deprecated in 1.3.
A second possibility, using the optional resource_link_id property (which the consumer would set for a requested from-scratch launch), suffers the same fate, I am told.
My question now is: is there a better route to implement an ab initio resource launch (as a tool provider)?
Summary
I am developing an app that is intended to work across multiple graph databases supported by TinkerPop.
Details
Based on my research, the same version of the TinkerPop library (gremlin-python) does not work with the latest versions of all the graph databases. What is the best approach for this situation? The databases I intend to test are:
JanusGraph 0.2.0, which supports gremlin-python 3.2.7
Neo4j 3.3.3, which supports gremlin-python 3.3.2
I am also trying to integrate some more databases, like OrientDB and Amazon Neptune; do you know what versions they will support?
This issue can be a little tricky, especially with non-open-source systems that don't publish version and feature support clearly. For open-source systems, you can typically find the version of TinkerPop they support for a particular release by looking at the pom.xml of the project. For OrientDB that means finding the version you want (in this case 3.2.3.0) and then looking for the gremlin-core dependency:
https://github.com/orientechnologies/orientdb-gremlin/blob/3.2.3.0/driver/pom.xml#L47
The version points to a property, so examine the pom a bit further and you'll see that number defined above:
https://github.com/orientechnologies/orientdb-gremlin/blob/3.2.3.0/driver/pom.xml#L14
So OrientDB 3.2.3.0 supports TinkerPop 3.2.3. With closed-source systems you can only search around until you find the answer you're looking for, or ask the vendor directly, I guess. I've seen that Neptune is on 3.3.x, but I'm not sure which version of "x".
Just because all of these systems support different versions of TinkerPop, and the general recommendation is to use a matching TinkerPop version to connect to them, doesn't mean that you can't get a 3.3.x driver to connect to a 3.2.x-based server. You may not have the best experience doing so, and you would need to be aware of a few things as you do it, but I think it can be done.
The key to making this work from a driver perspective is to ensure that you have the right serialization configuration for the graph you are connecting to. This is true whether or not you are connecting to a same-version system. By default, TinkerPop ensures that these configurations are aligned within the same version so that they work out of the box, which is why we tend to recommend that you use the same version when possible. When that is not possible, you need to make those alignments manually.
For example, if you scroll down in this link a bit to the "Serialization" section you will find the supported formats for Neptune:
https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-differences.html
As long as you configure your driver to match one of those formats, it should work for you. The same could be said of JanusGraph which, in contrast to Neptune, will not support Gryo or GraphSON 3.0, as it is bound to the 3.2.x line. The configuration for the serializers can be found in JanusGraph's packaging of Gremlin Server:
https://github.com/JanusGraph/janusgraph/blob/v0.2.0/janusgraph-dist/src/assembly/static/conf/gremlin-server/gremlin-server.yaml#L15-L21
As to how you configure your Python driver for serialization: admittedly, there isn't a lot written on that. The key is to set the message_serializer when configuring the Client (from gremlinpython 3.3.2):
https://github.com/apache/tinkerpop/blob/3.3.2/gremlin-python/src/main/jython/gremlin_python/driver/client.py#L44-L45
You can see there that by default it is set to GraphSON 3.0. So that's perfect for Neptune, but not for JanusGraph. For JanusGraph, which doesn't support GraphSON 3.0 yet, you would just change the configuration to use the GraphSON 2.0 serializer:
https://github.com/apache/tinkerpop/blob/3.3.2/gremlin-python/src/main/jython/gremlin_python/driver/serializer.py#L149
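To make that concrete, here is a minimal sketch of both configurations, assuming gremlinpython 3.3.2; the endpoint URLs are placeholders, not real services:

from gremlin_python.driver import client, serializer

# Neptune supports GraphSON 3.0, which is already the driver default,
# so passing the serializer explicitly here is just for clarity.
neptune = client.Client('wss://your-neptune-endpoint:8182/gremlin', 'g',
                        message_serializer=serializer.GraphSONSerializersV3d0())

# JanusGraph 0.2.0 is bound to the TinkerPop 3.2.x line, so fall back
# to the GraphSON 2.0 serializer.
janus = client.Client('ws://localhost:8182/gremlin', 'g',
                      message_serializer=serializer.GraphSONSerializersV2d0())

results = janus.submit('g.V().limit(1)').all().result()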
So, that is just getting a connection working - then there are other things to consider:
If you use a new version of gremlinpython against an older server, you will need to be aware of any features that aren't supported on the server (e.g. don't use the math() step from your 3.3.x client, because it won't work on a 3.2.x server)
CosmosDB may allow you to connect with 3.3.x, but it doesn't have full Gremlin support and at this time does not support bytecode-based traversals, only strings (see the sketch after this list for the difference between the two styles)
A number of bugs have been fixed in GraphSON serialization over these releases, and sometimes certain types may have a revised serialization scheme that may prevent a 3.3.x client from talking to a 3.2.x server. I can't think of any big issues like that offhand that would immediately jump out, but I'm pretty sure it's happened; perhaps something in the serialization of Tree, and perhaps some of the extended types. You can always look at the full list of GraphSON types here and compare between published versions if you run into trouble.
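Since the strings-versus-bytecode distinction comes up in the CosmosDB point above, here is a minimal sketch of the two submission styles, again assuming gremlinpython 3.3.2 and a placeholder localhost endpoint:

from gremlin_python.driver import client
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.structure.graph import Graph

# String-based submission: the Gremlin script is sent as text and
# evaluated server-side. This is the style CosmosDB supports.
c = client.Client('ws://localhost:8182/gremlin', 'g')
print(c.submit('g.V().count()').all().result())

# Bytecode-based traversal: steps are serialized as bytecode, which
# requires server-side support that CosmosDB lacks at this time.
g = Graph().traversal().withRemote(
    DriverRemoteConnection('ws://localhost:8182/gremlin', 'g'))
print(g.V().count().next())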
I have custom NodeInfo, DocumentInfo and ExternalObjectModel implementations written against Saxon 8.7.
I also need to support a few custom functions.
My understanding is that Saxon 9.7 HE has better support, so I am trying to migrate from the 8.7-based implementation to 9.7 HE.
Is there a way to switch off XSLT functionality? I don't need it for now.
Is s9api the recommended API to get the following features?
To work with custom data models (I don't have XML documents)
To support custom functions
To provide a custom implementation for the current() function
The current implementation has this pattern:
XPathEvaluator eval = new XPathEvaluator(docw);
eval.setNamespaceContext(new NamespaceContext() {
    // stripped off
});
List<DataNode> res = eval.evaluate(xpath);
Now, XPathEvaluator does not accept the NodeInfo implementor, and evaluate() returns a string.
What are the relevant new APIs/classes in 9.7?
Also, there is no saxon-xpath anymore; I think that functionality is now part of Saxon-HE.
A lot has changed between 8.7 and 9.7 - you are talking about two releases separated by about 10 years, with 10 major releases and perhaps 100 maintenance releases intervening. While the changes to the NodeInfo interface between any two major releases will be very minor, they will accumulate over time to a significant difference.
Saxon 9.7 changed the DocumentInfo interface, replacing it (in effect) with a new TreeInfo object to hold information about a tree whether or not the root node is a document node.
A question like "what are the new api/classes in 9.7" is much too broad. We publish detailed change information at each major release, and the online interactive documentation has a change history which allows you to list changes by category between any two selected releases. With two releases as far apart as 8.7 and 9.7 it is a very long list, and there's no point even starting to summarise it here.
saxon-xpath was once a separate JAR file; I think the reason was probably so that you could keep it off your classpath to avoid JAXP applications picking it up by accident. The functionality is now in the main JAR file - except that Saxon no longer advertises itself as a JAXP XPath provider, to avoid this problem.
I would generally recommend use of the s9api interface to anyone writing Saxon applications especially if they need to transcend XSLT, XPath, XSD, and XQuery. The JAXP equivalents are much messier: they don't integrate well across tools, they are often not type-safe, they don't provide access to functionality in the latest W3C standards, etc. But if you're doing deep integration, e.g. defining your own object models and replacing system functions, then you're going to have to dig below s9api into internal lower-level APIs.
A great deal is possible here, but it's not 100% stable from one release to the next, and it's not always very well documented. We're happy to answer technical questions but we expect you to have a high level of technical competence if you tackle such integration.
Can somebody give me some insight into the difference between Netflix Zuul version 1.x and the new version 2.x?
It seems that both product lines are maintained.
And version 2 is using Guice for DI, and there are some differences in the filter implementation?
I got a really nice answer from @NiteshKant of Netflix on GitHub:
Unfortunately there is no documentation about the motivations for 2.x and what it changes. I am intending to put together something in the coming weeks when time permits. As of today, I hope the following suffices:
What is 2.x?
2.x intends to move Zuul from the current synchronous execution model to a top-to-bottom asynchronous processing model. This includes using non-blocking I/O (practically speaking, RxNetty as the networking library) and asynchronous application processing semantics (RxJava as the asynchronous library).
Why 2.x?
Intentionally staying away from proofs and benchmarks, the motivation for 2.x (essentially moving to an async model) is to have better resilience, control and performance characteristics for all applications inside Netflix.
Status
The current status of 2.x is snapshot. We are currently testing the new (async) filter model with blocking I/O inside Netflix. Once we are comfortable with this change, we will be testing the changes with non-blocking I/O. After that we will be publishing release candidate and release artifacts.
Should you adopt 2.x now?
2.x is really very bleeding edge (sorry for the cliche) so we will be changing APIs, deployment models and implementations. So, unless you are prepared to take the burden of keeping up with these changes, I would recommend waiting a while.
Also, 2.x comes with lots of changes in usage, so most likely you will have to change all your existing filters, if any. This can be a big task depending on the current usage. So, it is your decision on that front in terms of ROI.
There are more links related to the subject of Zuul 2.x:
https://github.com/Netflix/zuul/issues/121
https://github.com/Netflix/zuul/issues/106
https://github.com/Netflix/zuul/issues/139
https://github.com/Netflix/zuul/issues/130
The traditional categorization of processes talks about integration-centric, human-centric and document-centric processes, with the last one being a good candidate for placement inside the DMS (the prerequisite, of course, is that there is built-in support for BPM).
But I was unable to find a concrete, more detailed explanation of the distinction between those options.
Imagine a company that has an enterprise BPM solution, and also a DMS with quite good support for BPM (i.e. FileNet).
In both systems you can create user screens and workflows (process logic) as well.
Moreover, most processes working with documents are also quite "human-centric".
I am perfectly aware of the fact that choosing the target platform always depends on the requirements and specific circumstances, but I wonder if there are some general rules or principles based on which I can better decide where to put the process layer of the whole solution.
Additional clarification:
I don't want to implement any new platform. As I indicated in the previous post, we already have a BPM platform (Oracle) and a DMS as well (FileNet with BPM support: Case Foundation). So the question is not about choosing a new platform, but more about setting the rules for using the existing products/platforms. There are a lot of new projects in the queue, and for some of them (those touching the area of working with documents) we need to decide the target platform(s). For example, when you have a simple process with a few steps, every step involves some work with an existing document (the document, or at least its original version, is also an input to this process), and the requirements on the front end are not very complicated, it would be simpler to build the whole solution on the FileNet platform (mostly because of the cost). But I am wondering if there are some rules along the lines of "you should consider this or that when you want to use only the DMS platform, or both platforms", etc. You could call these rules principles for development, reference architectures, or something like that: something that guides you when designing the target architecture(s).
Thank you
I'm reposting the answer because I don't see a reason for the deletion (by @Bohemian).
I think it adds value to anyone asking the same question. @Bohemian could have at least specified why he deleted the post.
Here it goes:
You gave us a rather small amount of information. And what exactly is the question? What do you mean by "where to put the process layer"?
You shouldn't constrain yourself to only those DM systems that claim to have BPM built in. That's marketing speak behind which often lie two half-baked products. You should instead ask which standards-based integration points the system has, so you can integrate effortlessly, and then invest in best-of-breed DM and best-of-breed BPM separately. All-in-one solutions are often too closed, difficult to extend and, above all, they bring free vendor lock-in with them.
What are your business requirements, i.e. what do you have to do? Implement BPM inside an organization that already has DM, or not? Do you have some BPM platform already? Do you have any constraints/requirements when choosing either of those (vendor, technology foundation, Gartner quadrant...)?
What are the options you're considering for DM, and which options are you evaluating (if any) as a BPM platform? Have you already settled on IBM, or can you go elsewhere? Is open source an option?
What is your role/responsibility in this project?
EDIT - after the author's clarifications:
I have not worked with Oracle's BPM, but I can tell you that, although Case Foundation is more suited to Case Management, you can develop a complete Process Management solution with it (workflows, tasks, roles, deadlines, in-baskets, etc.).
If you go that path and later come across the business need to allow business users to define their own case templates, take a look at IBM Case Manager, as it builds on top of Case Foundation but also brings additional web UI features (built on IBM Content Navigator) suitable for business users (although, more often than not, it turns out that IT does that job).
A few IBM redbooks about Case & Content management that might help you make an informed decision:
Introducing IBM FileNet Business Process Manager - this is the former name for Case Foundation - the same product, new version.
Advanced Case Management with IBM Case Manager
Customizing and Extending IBM Content Navigator - you'll need this one for customizations, if you decide to go with CF (instead of Oracle).
Building IBM Enterprise Content Management Solutions From End to End - from ingestion to case/process management (contains Case Manager).
I agree with @Robert regarding integration; after all, before version 5.2, FileNet Content Platform Engine was FileNet Content Engine + FileNet Process Engine.
The word of advice I can give you is to first document all the features the business requires from BPM. Then do due diligence on both products, noting down which of those features each product supports. Then the answer, if not laid out in front of you, will at least be much easier.
You also have to take into account that IBM is oriented towards IBM BPM (formerly Lombardi) where process management is concerned. The former FileNet BPM is now pushed more towards Case Management (but those two are very similar paradigms).
You should definitely post back about your experience, whichever option you choose.
Good "luck" :)
I'd like to start developing a new project using SproutCore. Since 2.0 seems quite different from 1.6 and there are already three betas out (and so I expect an RC soon?), I wonder if it'd be a good decision to start directly with SproutCore 2.0 instead of 1.6.
The SproutCore app will be backed by a Rails app which exposes a REST JSON API.
SproutCore 1.x and 2.x do indeed target different types of applications. So the decision to choose 1.x or 2.x mainly boils down to the question of which application type you are going to develop.
Choose 1.x if you need a set of predefined components, e.g. if you plan to develop an internal CRUD-like application. You might use the new template-based approach in some places, but your main application will be composed of predefined components. SC 1.x clearly targets desktop-like applications.
On the other hand, if you plan to build the next Twitter or GitHub or Stack Overflow, you should use SC 2. It's easier to embed into webpages and you are in control of the complete layout, HTML and CSS, but it is clearly more work with regard to HTML/CSS. If you have to implement your own design, it's probably easier with SC2 because you are in full control. If you already have profound jQuery knowledge you can use it with SC2; it's no problem to combine the two. In fact, since SC2 fully builds upon jQuery, it's already included, whereas SC 1.x only uses a special stripped-down embedded jQuery version. If you plan to use certain plugins, this might be a problem.
The programming model for your model and controller parts is nearly the same, and it is very easy to transfer those parts from SC 1.x to 2 (and vice versa); the main difference is the view part.
I have to partially disagree with the above comment. The goal of the SproutCore framework is to help create near-native user experiences using the web technology stack. That goal has not changed from 1.x to 2.x. What has changed is that SC 2 was built from the ground up to be modular (and thus lighter weight, which is important for mobile apps) and to allow developers to more easily integrate with other frameworks and tools they've already invested in or may want to use in the future.
Yes, designing the view layer of a 1.x app and a 2.x app is in most ways completely different, but implying that you shouldn't or couldn't use SC 2 to create desktop-style applications, or that SC 2 is only for creating web-style apps like Twitter and Stack Overflow, is just not a correct assumption to make. Just about all of the apps we create at my company are desktop-style apps, and we've been using SC 2 with builds of either jQuery UI or Twitter Bootstrap for controls, theming, and layout support for months now. We've actually found that the apps we create in SC 2 are more feature-rich with less development effort than with 1.x, since the number of already-built controls that can easily integrate with SC 2 is massive (we haven't found a jQuery plugin that couldn't work with SC 2 yet).
My recommendation: just use SC 2. Don't even bother with 1.x.
OK, it seems SproutCore 2.0 is now called Amber.js because of all the confusion:
http://yehudakatz.com/2011/12/08/announcing-amber-js