HLA: FOM vs SOM - high-level-architecture

I'm starting to study how to implement HLA in a project that I'm developing, and there's something that I can't understand very well.
In a federation, the data that are exchanged are defined in the FOM (Federation Object Model), which contains all the necessary elements (classes, interactions, and so on). That's OK.
I've also read that every federate needs to publish its own SOM (Simulation Object Model), which is a description of the data that the federate publishes and subscribes to.
My questions are:
Who needs to load the FOM file? Every federate? Must it be read by the RTI Manager?
Why does a federate need to publish a SOM if the FOM is already available? If the FOM defines all the structures that can be exchanged, doesn't the SOM just duplicate the sub-part of the FOM that the federate needs?
Who reads the SOM that's sent by a federate?
What data are stored inside the SOM? Is it a deep copy of some structures defined in the FOM, or does it contain links to them in the FOM?
Can the SOM contain classes, interactions, and so on that are not defined in the FOM?
If every federate publishes data about the objects it uses via its SOM, why do we need the FOM?
Sorry if these are simple questions, but I'm new to this, and even though I understand the programming aspects of HLA, the logic behind these things is a bit obscure to me (maybe my imperfect English plays a part in it, too).

Yserbius described the difference between a FOM and a SOM.
I have added some responses to your direct questions.
Who needs to load the FOM file? Every federate? Must it be read by the RTI Manager?
The RTI uses the FOM file. It has to be provided when the federation is created. Only the first federate needs to provide it.
Why does a federate need to publish a SOM if the FOM is already available? If the FOM defines all the structures that can be exchanged, doesn't the SOM just duplicate the sub-part of the FOM that the federate needs?
The SOM is not needed at runtime, when your federates are executed. The SOM can be viewed as a document describing the capabilities of your simulator.
Who reads the SOM that's sent by a federate?
No SOM is needed at runtime.
What data are stored inside the SOM? Is it a deep copy of some structures defined in the FOM, or does it contain links to them in the FOM?
The SOM is a subset of the FOM.
Can the SOM contain classes, interactions, and so on that are not defined in the FOM?
Yes, but they cannot be used at runtime unless they are added to the FOM.
If every federate publishes data about the objects it uses via its SOM, why do we need the FOM?
Good question. It is usually the other way around: the federates provide a FOM at runtime, and no SOM exists to describe the federate.

I apologize for the late answer, I hope this is still applicable. I am assuming you are using HLA 1.3 or HLA 1516 because the term SOM in HLA 1516-2010/Evolved has been replaced with FOM modules.
The first federate to create a federation does so with a FOM file (a Lisp variant in 1.3 and XML in subsequent versions). That FOM then becomes available to every new federate that joins. The individual federates do not need to have copies of the FOM file.
The SOM is not actually required by any of your software. It's simply a subset of the FOM. Its main use is for compliance checking and verification, so that before a federation is deployed, it is verified which objects and interactions each individual federate can and cannot use. When you're running a federation, the SOM can be ignored (unless you're running some sort of dynamic all-purpose federate whose activity can be modified without recompiling by swapping out the SOM).
A 1.3 or 1516 SOM cannot contain anything that's not in the FOM. 1516-2010 introduced the concept of FOM modules: instead of one big FOM file, a federate can build the FOM from a collection of smaller files. Each joining 1516-2010 federate can bring its own modules to add to the FOM.
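The module-merging and subset relationships described above can be sketched with plain sets. This is only a conceptual illustration (the class names are invented, and real FOM/SOM files are FED/XML documents parsed by your tooling), not any actual RTI API:

```python
# Toy model: represent each FOM module and each federate's SOM as a set
# of object/interaction class names. Real object models are structured
# OMT documents, but the set semantics are the same.

def merge_fom_modules(*modules: set) -> set:
    """HLA Evolved style: the effective FOM is the union of all modules
    provided by the joining federates."""
    merged = set()
    for module in modules:
        merged |= module
    return merged

def som_is_compliant(som: set, fom: set) -> bool:
    """Pre-deployment check: a federate's SOM must be a subset of the FOM,
    otherwise some published/subscribed classes cannot be used at runtime."""
    return som <= fom

base_module = {"HLAobjectRoot.Vehicle", "HLAinteractionRoot.Collision"}
weather_module = {"HLAobjectRoot.WeatherCell"}
fom = merge_fom_modules(base_module, weather_module)

driver_som = {"HLAobjectRoot.Vehicle"}
print(som_is_compliant(driver_som, fom))                  # True
print(som_is_compliant({"HLAobjectRoot.Aircraft"}, fom))  # False
```

The second check fails because the federate declares a class that no module contributed to the FOM, which is exactly the situation the compliance check is meant to catch before deployment.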

Related

Zanzibar doubts about Tuple + Check Api. (authzed/spicedb)

We currently have a home-grown authz system in production that uses the OPA/Rego policy engine as the core for decision making (close to what Netflix has done). We've been looking at the Zanzibar ReBAC model to replace our OPA/policy-based decision engine, and AuthZed got our attention. Looking further at AuthZed, we like the idea of defining a schema of resource + subject types and their relations (like an OOP model). We like the simplicity of using a social graph between resource and subject to answer questions. But the more we dig in and think about real usage patterns, the more questions and gaps in clarity we find. I've put those thoughts down below; I hope it's not confusing...
[Doubts/Questions]
[tuple-data] resource data/metadata must be continuously added into the authz-system in the form of tuple data.
e.g. doc{org,owner} must be added as a tuple to populate the relation in the decision graph. Assume I'm a CMS system: am I expected to insert (or update) a tuple in the authz engine for every single doc created in my CMS system, for its lifetime?
Resource-owning applications are kept on the hook (responsible) for continuous keep-it-current updates.
What about old/stale relation data (tuples)? The authz engine doesn't know whether they are stale or not; is it the app's burden to tidy them up?
[check-api] An authz check is answered by a graph-walking mechanism: a [resource --to--> subject] traversal path.
There is no dynamic element in decision making, like a Rego rule script deciding based on a JSON payload.
How do we make dynamic decisions based on a JSON payload?
You're correct about the application being responsible for the authorization data it "owns". If you intend to have a unique role/relationship for each document in your system, then you do need to write/delete those relationships as the referenced resources (or the roles on them, more likely) change, but if you are using an RBAC-like design for your schema, you'd have to apply these role changes anyway; you'd just apply them to SpiceDB, instead of to your database. Likewise, if you have a relationship between say, a document and its parent organization, you do have to write/delete those as well, but that should only occur when the document is created or deleted.
In practice, unless you intend to keep the relationships in both your database and in SpiceDB (which some users do), you'll generally only have to write them to one or the other. If you do intend to apply them to both, you can either just perform the updates to both at the same time, or use an outbox-like pattern to synchronize behind the scenes.
Having to be proactive in your applications about storing data in a centralized system is necessary for data consistency. The alternative is federated systems that reach into other services. Federated systems come with the trade-off of being eventually consistent and can also suffer from priority inversion. I covered the centralized vs. federated trade-offs in a bit of depth, along with other design aspects of authorization systems, in my presentation on the cloud-native authorization landscape.
Caveats are a new feature in SpiceDB that enable dynamic policy to be enforced on the relationship graph. Caveats are defined using Google's Common Expression Language, which is a language used for policy in other cloud-native projects like Kubernetes. You can also use caveats to create relationships that eventually expire, if you want to take some of that book-keeping out of your app code.
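To make the graph-walking idea from the question concrete, here is a toy model of relationship tuples and a check that walks from resource to subject. The relation names (`viewer`, `parent`) and the inheritance rule are invented for illustration; this is not the SpiceDB API or schema language, just the shape of the underlying idea:

```python
# Toy relationship store: tuples of (resource, relation, subject).
# check() first looks for a direct tuple, then walks "parent" edges,
# modeling a simple rule like "viewer on the parent org grants viewer
# on the document".

tuples = {
    ("doc:readme", "viewer", "user:alice"),
    ("doc:readme", "parent", "org:acme"),
    ("org:acme", "viewer", "user:bob"),
}

def check(resource: str, relation: str, subject: str) -> bool:
    # Direct relationship: a tuple exists for this exact triple.
    if (resource, relation, subject) in tuples:
        return True
    # Indirect: walk up through parent links and re-check there.
    for (res, rel, subj) in tuples:
        if res == resource and rel == "parent":
            if check(subj, relation, subject):
                return True
    return False

print(check("doc:readme", "viewer", "user:alice"))  # True  (direct tuple)
print(check("doc:readme", "viewer", "user:bob"))    # True  (via parent org)
print(check("doc:readme", "viewer", "user:carol"))  # False (no path)
```

The point of the sketch is that every answer is derived purely from stored tuples and the schema's rewrite rules; anything payload-dependent has to enter the graph either as tuples or, in SpiceDB, as caveat context evaluated at check time.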

How can I express "composition" in Protégé?

I'm building an ontology in Protégé for data-sharing management in IoT environments. For this purpose I want to create a package that contains the observation (raw data), information on its provenance, and the licence that the data consumer will need to accept and respect in order to use the resource. The aim of our project is to make this "package" the entity that circulates between users, rather than just the raw data, so that the data owner/producer does not completely lose ownership once the data is shared.
To do this, I rather spontaneously created a class named "Package" composed of three disjoint classes: the observation, its provenance information, and the generated licence. However, I realized that this does not mean "a package is composed of those three elements", but rather "each one of those three elements is a package", which is not at all what I'm seeking.
Is there a way to express the composition, without (for example) having to create an Object Property named "isComposedOf" ?
Thank you in advance for your time. Please don't hesitate to ask if you need more details.

Saving different sets of values of variables with a changing structure

I have several sets of values (factory settings, user settings...) for a structure of variables, and these values are saved in a binary file. When I want to apply a certain setting, I just load the specific file containing the desired values, and they are applied to the variables according to the structure. This works fine as long as the structure of the variables doesn't change.
I can't figure out how to handle it when I add a variable but need to retain the values of the rest (when a structure in the program changes, I need to change the files so that they contain the new values according to the new structure while keeping the old ones).
I'm using a PLC system that is written in ST language. But I'm looking for some overall approach for solving this issue.
Thank you.
Providing a generic solution that works across different PLC platforms is not an easy task. There are many ways to accomplish this depending on the system/interface you actually want to use, e.g. PLC source code, OPC, ADS, MODBUS, special functions, or add-ins from the vendor, plus some further possibilities such as language features on the PLC. I have written three solutions to this with C#/ST (with OOP extensions) and ADS/OPC communication: one with source-code parsing first in C#, another with automatic generation from the PLC side, and another with an automatic registration system for the parameters using an Entity Framework-compatible database as the parameter store. If you don't want to invest too much time in this, you should try the parameter management systems provided by your PLC vendor and live with their restrictions.
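One vendor-neutral approach to the original problem is to stop persisting the raw memory layout and instead save every parameter under a stable name; on load, names missing from the file fall back to the current defaults. A minimal sketch in Python (the parameter names and JSON storage are assumptions for illustration; on a PLC you would apply the same idea with whatever keyed format your platform supports):

```python
import json

# Current structure of the program, with a default for every parameter.
# Adding a new variable to the structure just means adding an entry here;
# old settings files keep working because loading is done by name.
DEFAULTS = {"speed_max": 100, "accel_ramp": 5, "new_param": 42}

def save_settings(values: dict, path: str) -> None:
    """Persist a settings set (factory, user, ...) keyed by parameter name."""
    with open(path, "w") as f:
        json.dump(values, f)

def load_settings(path: str) -> dict:
    """Load a settings set: keep stored values for names that still exist,
    fill newly added parameters with defaults, drop removed ones."""
    with open(path) as f:
        stored = json.load(f)
    return {name: stored.get(name, default)
            for name, default in DEFAULTS.items()}
```

An older file that only contains `speed_max` and `accel_ramp` still loads cleanly: `new_param` simply takes its default, and nothing shifts position the way it would with a fixed binary layout.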

ADT vs. CCDA data gap

We are developing a provide-and-register web service for C-CDAs. Our vendor requires ADT for the patient registration portion. I can create a bare ADT message from the information provided to me in the C-CDA in order to simplify the onboarding process (eliminating a dedicated ADT feed) and reduce the cost. BUT there are data elements (NK1, IN1, GT1) that are either not included in the C-CDA or not as robust.
I wanted to know if there are any documented data gaps between these two messages (C-CDA vs. ADT).
I wanted to get feedback to my approach.
I wanted to know the governing process for CCDA, as it makes sense to eventually include some of these ADT data points in the CCDA.
Thanks!
I don't think there is any specific documentation on data gaps between C-CDA and HL7 V2.x ADT messages. Generally it's fine to extract content from C-CDA and use that to construct an ADT message, but obviously you won't get everything. Governance is handled by the Structured Documents workgroup; anyone is welcome to join and submit change proposals.
Maybe you can find the additional information in the CDA section entries. C-CDA does not require, for example, a CDA document to contain an immunizations section with entries, but it does define how to include this information. If your CDA includes that information, that may be a good option.
Martí
Remember that CDA/CCDAs are not a replacement for clinical or administrative messages. Your approach is fine, but StrucDoc may push back on adding content that is directed toward workflow concerns. CDAs are static objects, they are not intended to trigger action.
As Martí points out, consider what information is possible in the specific document you are using ... or in the base C-CDA specification. As long as your document template does not exclude a base-specification section, that section can be included in an instance of that document template.
Without appropriate details it's hard to say for certain.
Does the system requiring ADT need encounters? In that case, you're going to need an encounters section from the CDA, which then needs to be turned into multiple A08s.
Do they just need demographics? That's probably doable.
I would ask for specs around what event types they expect and what fields are required (or at least will bomb out on their side), and just go through the list with a sample C-CDA or two on your side.
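As a sketch of the "bare ADT from C-CDA demographics" approach, here is a minimal HL7 v2 message builder. The field layout is heavily simplified (only MSH and PID, fixed timestamp and control ID, invented sending-application name); a real feed needs the full segment definitions from the v2.x spec, and the NK1/IN1/GT1 gaps discussed above remain regardless:

```python
def build_bare_adt(event_type: str, demographics: dict) -> str:
    """Build a minimal ADT message from demographics extracted from a C-CDA.
    Only MSH and PID are populated; guarantor/insurance/next-of-kin segments
    are omitted because the C-CDA rarely carries them in robust form."""
    msh = ("MSH|^~\\&|CCDA2ADT|FACILITY|||20240101120000||"
           f"ADT^{event_type}|MSG0001|P|2.5.1")
    pid = "PID|1||{id}||{last}^{first}||{dob}|{sex}".format(
        id=demographics["patient_id"],
        last=demographics["last_name"],
        first=demographics["first_name"],
        dob=demographics["dob"],   # YYYYMMDD
        sex=demographics["sex"],   # F / M / U
    )
    # HL7 v2 segments are separated by carriage returns.
    return "\r".join([msh, pid])

demo = {"patient_id": "12345", "last_name": "Doe",
        "first_name": "Jane", "dob": "19800101", "sex": "F"}
print(build_bare_adt("A04", demo))
```

Even a toy like this makes the gap analysis concrete: every PID field you cannot fill from the C-CDA, and every segment you cannot emit at all, is an item for the list of required fields you negotiate with the receiving side.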

What are the dangers in re-using variables in a data source?

Not quite sure if this is on topic, so when in doubt feel free to close.
We have a client who is missing tracking data for a large segment of his visitors in his report suite. However, the complete set of data is available in a data warehouse. We are now investigating whether it is possible to import it as a data source. I only have experience with enriching data via classifications; however, the goal here is to create views (sessions, etc.) for a past timeframe from scratch.
According to the documentation this should be possible. However there is one caveat specifically mentioned in the FAQ:
"Adobe recommends you select new, unused variables to import data
using Data Sources. If you are uncertain about the configuration of
your data file, or want to better understand the risks of re-using
variables, contact Customer Care."
I take that to mean that I should not import data into props, eVars, events, etc. that have been used when data was collected via the tracker, which would pretty much defeat our purpose (basically we want to merge the data from the data warehouse with the existing data). Since I have to go through some intermediaries to reach Customer Care, and this takes a long time, I wonder if somebody here can explain what the dangers of re-using variables are (and maybe even whether there is still a way to do this).
DISCLAIMER: I'm not familiar with Adobe Analytics, but the problem here is pretty universal. If someone with actual experience/knowledge specific to the product comes along, pay more attention to them than me :)
As a rule, variable reuse in any system runs the risk of data corruption. I'm not familiar with Adobe Analytics, but a brief read through some blogs implies that this is what they're worried about in terms of variable reuse: if you have a variable that is being used in one section, and you import data into it in another section while it is in the same scope, you overwrite the data that the other section was using.
Now, that same blog states that provided you have your data structure set up in a specific way, it can allow you to reuse variables/properties without issue, and in fact encourages it, hence the statement in your quote: "If you are uncertain about the configuration of your data file...". They're probably saying that if you know what you're doing and know that there won't be any overwriting, fine, go ahead and reuse; but if you don't, or you aren't sure whether something else might be using the original content, then it's unsafe.
Regarding your specific case: you want to merge the two pieces of data together, not overwrite, so reusing your existing variables would overwrite the existing data. It sounds like you will need to import into a second (new) set of variables, and then compare/merge between them within the system, rather than trying to import and merge in one go.
We have since received an answer from Adobe Customer Care.
According to them, the issue is that hits created via data imports are indistinguishable from hits created via server calls, so they cannot be removed or corrected after they have been imported. They recommend using different variables so that the original data remains recognizable and salvageable in case faulty data is imported.
This was already mentioned in the online documentation, but apparently Adobe thinks this is important enough to issue the extra warning.
