My task is to take a BPMN 2.0 XML file and map it as well as possible (with a certain error rate) to available web services. For example, when my BPMN file describes the process of buying a pizza, I give 10 € and get back 1 pizza. The mapper should then match that BPMN to the web service that needs an input of type int named "money", and so on.
How is that even possible? I have been searching for a few hours now and came up with the following:
I found https://github.com/camunda/camunda-bpm-platform and can easily use it to parse a plain .bpmn file into a Java object structure which I can then query. Easy.
After parsing the XML notation, I should analyze it and search for elements that input data and elements that output data, since these are the only things I can map to WSDL (WSDL only describes the structure of the web service: names of variables, types of variables, number of variables). Problem: I cannot find any 1:1 elements I can safely declare as "when this BPMN element is used, it 100% means that the process is getting some input named x". What should I do here? What can I map?
I found WS-BPEL. As far as I understand, I can somehow transform BPMN to WS-BPEL, which should model the process better and be more easily mappable to a WSDL (?). Camunda, however, doesn't offer this functionality, and I am restricted to open-source software.
Any suggestions what i should do?
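For what it's worth, BPMN 2.0 does define elements that explicitly declare data flow: an activity's ioSpecification with its dataInput/dataOutput children, plus dataObject and the data association elements. A minimal sketch of querying them with Camunda's BPMN model API (the file name is a placeholder, and I am assuming the diagram actually uses these elements):

```java
import java.io.File;
import java.util.Collection;

import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;
import org.camunda.bpm.model.bpmn.instance.DataInput;
import org.camunda.bpm.model.bpmn.instance.DataOutput;

public class DataFlowScan {
    public static void main(String[] args) {
        // "process.bpmn" is a placeholder path to your diagram.
        BpmnModelInstance model = Bpmn.readModelFromFile(new File("process.bpmn"));

        // The dataInput/dataOutput children of an ioSpecification declare
        // what an activity consumes and produces; they are the closest BPMN
        // analogue to WSDL message parts.
        Collection<DataInput> inputs = model.getModelElementsByType(DataInput.class);
        Collection<DataOutput> outputs = model.getModelElementsByType(DataOutput.class);

        inputs.forEach(in -> System.out.println("input:  " + in.getName()));
        outputs.forEach(out -> System.out.println("output: " + out.getName()));
    }
}
```

The catch is that modelers frequently omit ioSpecification, dataObject, and data associations altogether, which is exactly why there is no 100% rule; falling back to heuristics on task and message names is probably unavoidable.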
I'm using a bidirectional map to link a list of names to a particular single name (for example, to correlate cities and countries). So, my definition of the type is something like:
using CitiesVsCountries = boost::bimap<boost::bimaps::unordered_set_of<std::string>, std::string>;
But one question intrigues me:
What's the advantage of using a boost::bimaps::unordered_set_of<std::string> vs. a simple std::unordered_set? The advantage of the bimap is clear (avoiding having to synchronize two maps by hand), but I can't really see what added value the Boost version of the unordered set provides, nor can I find any document detailing the difference.
Thanks a lot for your help.
I am currently defining a data layer definition/convention that is to be used at a large organisation.
So every time someone defines new tags or collects some sort of information from a web page, they should follow the convention.
It covers variable naming, values, type descriptions, and when to use them.
The convention will later be used with GTM/Tealium iQ, but it should be tool-agnostic.
What is the best way, from a technical perspective, to define the data layer schema? I am thinking of Swagger or JSON Schema. Any thoughts?
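For a sense of what a tool-agnostic definition could look like, here is a minimal JSON Schema sketch for a hypothetical pageview data layer (all property names are illustrative, not a standard):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "dataLayer.pageview",
  "type": "object",
  "required": ["page_name", "page_type"],
  "properties": {
    "page_name": { "type": "string", "description": "Human-readable page name" },
    "page_type": { "type": "string", "enum": ["home", "product", "checkout"] },
    "order_value": { "type": "number", "minimum": 0, "description": "Set on purchase pages only" }
  },
  "additionalProperties": false
}
```

One argument for JSON Schema over Swagger/OpenAPI here is that a data layer is not an HTTP API, so OpenAPI's path/verb machinery buys you nothing, while almost any language can validate a payload against a plain schema.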
It's important to define your data layer in a way that works for your organisation. That said, the best data layers have an easy-to-understand naming convention, are generally not nested, and contain good-quality data.
A good tag manager will be able to read your data layer in whatever format you like, whether out of the box or via a converter that runs before tag execution.
Here is Tealium's best practice:
https://community.tealiumiq.com/t5/Data-Layer/Data-Layer-Best-Practices/ta-p/15987
There is a function to parse a single SequenceExample: tf.parse_single_sequence_example().
But it parses only one SequenceExample at a time, which is not efficient.
Is there any possibility to parse a batch of SequenceExamples?
tf.parse_example can parse many Examples.
The documentation for tf.parse_example contains a little info about SequenceExample:
Each FixedLenSequenceFeature df maps to a Tensor of the specified type (or tf.float32 if not specified) and shape (serialized.size(), None) + df.shape. All examples in serialized will be padded with default_value along the second dimension.
But it is not clear how to do that. I have not found any examples on Google.
Is it possible to parse many SequenceExamples using parse_example(), or does some other function exist for this?
Edit:
Where can I ask the TensorFlow developers whether they plan to implement a parse function for multiple SequenceExamples?
Any help will be appreciated.
If you have many small sequences where batching at this stage is important, I would recommend VarLenFeatures or FixedLenSequenceFeatures with regular Example protos (which, as you note, can be parsed in batches with parse_example). For examples of this, see the unit tests associated with example parsing (testSerializedContainingSparse parses Examples with FixedLenSequenceFeatures).
SequenceExamples are geared more toward cases where there is a significant amount of preprocessing work to be done for each SequenceExample (which can be done in parallel with queues). parse_example does not support SequenceExamples.
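To make the suggested workaround concrete, here is a sketch using regular Example protos with FixedLenSequenceFeature(allow_missing=True), so a whole batch of variable-length sequences is parsed (and padded) in one call. The feature key "tokens" is illustrative, and this is written against the TF 2 API, where tf.parse_example became tf.io.parse_example:

```python
import tensorflow as tf

# Build two regular Example protos, each holding a variable-length int64
# sequence under the key "tokens" (the key name is illustrative).
def make_example(values):
    return tf.train.Example(features=tf.train.Features(feature={
        "tokens": tf.train.Feature(int64_list=tf.train.Int64List(value=values)),
    })).SerializeToString()

serialized = [make_example([1, 2, 3]), make_example([4, 5])]

# allow_missing=True lets each Example carry a different number of entries;
# shorter ones are padded with default_value along the second dimension, so
# the whole batch parses in a single call.
parsed = tf.io.parse_example(serialized, {
    "tokens": tf.io.FixedLenSequenceFeature([], tf.int64,
                                            allow_missing=True,
                                            default_value=0),
})

print(parsed["tokens"].numpy())  # shape (2, 3); second row padded with 0
```

VarLenFeature would instead give you a SparseTensor with no padding, which is preferable when sequence lengths vary wildly.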
I really like the Freebase and World Bank type providers and I would like to learn more about type providers by writing one on my own. The European Union has an open data program where you can access data through SPARQL/Linked data. Would it be possible to wrap data access to open EU data by means of a type provider or will it be a waste of time trying to figure out how to do it?
Access to EU data is described here: http://open-data.europa.eu/en/linked-data
I think it is certainly possible; I talked with some people who are actually interested in this (and are working on it, but I'm not sure what the current status is). Anyway, I definitely think this is such a broad area that an additional effort would not be a waste of time.
The key problem with writing a type provider for RDF-like data is to decide what to treat as types (what should become a name of a type or a property name) and what should be left as value (returned as a list or key-value pairs). This is quite obvious for WorldBank - names of countries & properties become types (property names) and values become data. But for triple based data set, this is less obvious.
So far, I think there are two approaches:
Additional ontology - require that the data source comes with some additional ontology that specifies which keys to use for navigation. There is something called a "facet ontology", which is used on http://mspace.fm and might be quite interesting.
Parameterization - parameterize the type provider (in some way) and give it a list of relations that should become available at the type level (and you would probably also need to provide some root where to start).
There are definitely other possibilities, and I think having a provider for linked data would be really interesting. If you wanted to do this for F# Data, there is a useful page on contributing :-).
I wrote an SDP (Session Description Protocol, RFC 4566) parser and I would like to test it with a comprehensive set of "test vectors," i.e., a set of SDP descriptions that stress, as much as possible, every aspect of the parser.
I googled things like "sdp test parsing", but the signal-to-noise ratio is low (also because SDP has many meanings). The thing closest to a set of test vectors is the Java code at
http://grepcode.com/file/repository.jboss.org/maven2/javax.sip/jain-sip-ri/1.2.86/test/gov/nist/javax/sdp/parser/SdpParserTest.java
but it contains just four examples, and I am searching for something more exhaustive.
Thank you for your help
You may find that just searching SO for SDP will yield enough SDPs for you to use in your tests... I know I did a quick search and was surprised at the number!
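As a seed vector, the example session description from RFC 4566 itself (section 5) already exercises most line types in the mandated order:

```
v=0
o=jdoe 2890844526 2890842807 IN IP4 10.47.16.5
s=SDP Seminar
i=A Seminar on the session description protocol
u=http://www.example.com/seminars/sdp.pdf
e=j.doe@example.com (Jane Doe)
c=IN IP4 224.2.17.12/127
t=2873397496 2873404696
a=recvonly
m=audio 49170 RTP/AVP 0
m=video 51372 RTP/AVP 99
a=rtpmap:99 h263-1998/90000
```

Mutating one line at a time of a known-good description like this (wrong field order, missing mandatory v=/o=/s=/t= lines, junk port numbers) is a cheap way to generate negative test vectors as well.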
Another thing to keep in mind is that various attributes can be registered with IANA at any time...
https://www.rfc-editor.org/rfc/rfc4566 - 8.2.4. Attribute Names ("att-field")
Attribute field names ("att-field") MUST be registered with IANA and documented, because of noticeable issues due to conflicting attributes under the same name. Unknown attributes in SDP are simply ignored, but conflicting ones that fragment the protocol are a serious problem.
There are also other items in SDP whose allowed values can grow over time as new entries are registered with IANA.
You will want to check their site http://www.iana.org/protocols/
Specifically http://www.iana.org/assignments/sdp-parameters/sdp-parameters.xml but most likely others.
You could also write a program that downloads each XML file and generates a random SDP based on the information in those files, then test parsing that; but since you generated the files yourself, it wouldn't be much of a test...