HL7 CLIA and Lab Name location

I have had a request from a client to pull in the Lab Name and CLIA information from several different vendors' HL7 feeds. The problem is that I am unsure which node I should really pull this information from.
I notice one vendor is using ZPS, and it appears they have the Lab Name and CLIA there, although I see that others do not use ZPS. Just curious what the appropriate node to pull these from would be?
The header segments look really abbreviated with some of my vendors, and I need a perfectly readable name like 'Johnson Hospital'. Any suggestions on the field you all would use to pull the CLIA and Lab Name?

Welcome to the wild world of HL7. This exact scenario is why interface engines are so prevalent and useful for message exchange in the healthcare industry.
Up until HL7 v2.5.1, I believe, there was no standardization around CLIA identifiers. Assuming you are receiving an ORU^R01 message, you may want to look at segment OBX, field 15 (Producer's ID), which may carry the producing lab's identifier. The catch is that there is only a slim chance that they are using HL7 2.5.1 or are implementing the guidelines as intended. There are a lot of reasons for all of this, but the takeaway is that you should be prepared to do some work for each and every integration.
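For example, here is a minimal sketch (plain string handling, no HL7 library) of pulling OBX-15 out of a raw ORU^R01. The sample message, component layout, and CLIA value are made-up assumptions, so check the positions against each partner's actual spec.

```python
# Minimal sketch: pull OBX-15 (Producer's ID) out of a raw ORU^R01 message.
# The sample message below is an illustrative assumption; real feeds vary per
# vendor, so verify field positions against each trading partner's spec.

RAW_ORU = (
    "MSH|^~\\&|LIS|ACME LAB|EMR|CLINIC|202401150830||ORU^R01|12345|P|2.5.1\r"
    "PID|1||MRN123||DOE^JANE\r"
    "OBX|1|NM|2345-7^GLUCOSE^LN||98|mg/dL|70-99||||F|||202401150800|"
    "11D1234567^ACME LAB MAIN^CLIA\r"
)

def producer_ids(raw_message):
    """Yield the OBX-15 (Producer's ID) value from each OBX segment."""
    for segment in raw_message.strip("\r").split("\r"):
        fields = segment.split("|")
        if fields[0] == "OBX" and len(fields) > 15 and fields[15]:
            # OBX-15 components are typically ID^Text^CodingSystem.
            identifier, *rest = fields[15].split("^")
            name = rest[0] if rest else ""
            yield identifier, name

for clia, lab_name in producer_ids(RAW_ORU):
    print(clia, lab_name)   # -> 11D1234567 ACME LAB MAIN
```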
For the data, be prepared to exchange or ask for a technical specification from your trading partner. If that is not a possibility, or if they do not have one, ask for a sample export of representative messages from their system, or for a vendor reference if they have one. Since the data you are looking for is not as well established as something like an address, there is a high likelihood that you will have to get it from different segments and fields for each trading partner. The ZPS segment in your example is a good reference: any segment that starts with Z is a custom segment, created because the vendor or trading partner could not find a good, existing place to store that data and so made a new segment for it themselves.
For the identifiers, what I would recommend is to create a translation or mapping table. So, if you receive JHOSP or JH123, you can translate/map that to 'Johnson Hospital'. Each EMR or hospital system will have its own way of representing different values, and there is no guarantee that they will be consistent, so you must be prepared to handle that scenario.
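A rough sketch of that mapping-table idea, assuming a simple (partner, code) lookup; the codes and names below are invented, and in practice this would usually live in a database table you can update without redeploying the interface:

```python
# Minimal sketch of a per-partner identifier mapping table.  The partner keys,
# codes, and display names are made-up examples.
LAB_NAME_MAP = {
    # (trading partner, code they send) -> readable name you want
    ("vendor_a", "JHOSP"): "Johnson Hospital",
    ("vendor_a", "JH123"): "Johnson Hospital",
    ("vendor_b", "STMARY"): "St. Mary Medical Center",
}

def resolve_lab_name(partner, code):
    """Translate a partner-specific code to a readable lab name."""
    return LAB_NAME_MAP.get((partner, code), code)  # fall back to the raw code

print(resolve_lab_name("vendor_a", "JH123"))    # Johnson Hospital
print(resolve_lab_name("vendor_c", "UNKNOWN"))  # UNKNOWN (no mapping yet)
```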

Related

Zanzibar doubts about Tuple + Check API (authzed/spicedb)

We currently have a home-grown authz system in production that uses the OPA/Rego policy engine as the core for decision making (close to what Netflix has done). We have been looking at the Zanzibar ReBAC model to replace our OPA/policy-based decision engine, and AuthZed got our attention. Looking further at AuthZed, we like the idea of defining a schema of "resource + subject" types and their relations (like an OOP model). We like the simplicity of using a social graph between resource & subject to answer questions. But the more we dig in and think about real usage patterns, the more questions and missing clarity we run into. I put down those thoughts below; hope it's not confusing...
[Doubts/Questions]
[tuple-data] Resource data/metadata must be continuously added into the authz system in the form of tuple data.
e.g. doc{org,owner} must be added as a tuple to populate the relation in the decision graph. Assume I'm a CMS system: am I expected to insert (or update) tuples in the authz engine for every single doc created in my CMS system, for its lifetime?
Resource-owning applications are on the hook (responsible) for continuous keep-it-current updates.
What about old/stale relation data (tuples)? The authz engine doesn't know whether they are stale or not... is it the app's burden to tidy them up?
[check-api] An authz check is answered by a graph-walking mechanism - a [resource --to--> subject] traversal path.
There is no dynamic element in the decision making - like a Rego rule script deciding based on a JSON payload.
How do we make dynamic decisions based on a JSON payload?
You're correct about the application being responsible for the authorization data it "owns". If you intend to have a unique role/relationship for each document in your system, then you do need to write/delete those relationships as the referenced resources (or the roles on them, more likely) change, but if you are using an RBAC-like design for your schema, you'd have to apply these role changes anyway; you'd just apply them to SpiceDB, instead of to your database. Likewise, if you have a relationship between say, a document and its parent organization, you do have to write/delete those as well, but that should only occur when the document is created or deleted.
In practice, unless you intend to keep the relationships in both your database and in SpiceDB (which some users do), you'll generally only have to write them to one or the other. If you do intend to apply them to both, you can either just perform the updates to both at the same time, or use an outbox-like pattern to synchronize behind the scenes.
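For illustration, here is a hedged sketch of that dual-write/outbox idea. The table names, helper functions, and relationship shapes are hypothetical, and the actual SpiceDB write would go through one of the official client libraries rather than the stub shown here:

```python
# Hedged sketch of the outbox-like pattern described above.  SQLite stands in
# for the application database; the relationship tuples are illustrative.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE documents (id TEXT PRIMARY KEY, org TEXT, owner TEXT)")
db.execute("CREATE TABLE authz_outbox (id INTEGER PRIMARY KEY, payload TEXT, sent INTEGER DEFAULT 0)")

def create_document(doc_id, org, owner):
    """Write the document row and its pending relationship updates in one transaction."""
    relationships = [
        {"resource": f"document:{doc_id}", "relation": "parent_org", "subject": f"organization:{org}"},
        {"resource": f"document:{doc_id}", "relation": "owner", "subject": f"user:{owner}"},
    ]
    with db:  # both inserts commit together or not at all
        db.execute("INSERT INTO documents VALUES (?, ?, ?)", (doc_id, org, owner))
        db.execute("INSERT INTO authz_outbox (payload) VALUES (?)", (json.dumps(relationships),))

def drain_outbox(write_to_spicedb):
    """Background worker: push unsent relationship updates to SpiceDB, then mark them sent."""
    rows = db.execute("SELECT id, payload FROM authz_outbox WHERE sent = 0").fetchall()
    for row_id, payload in rows:
        write_to_spicedb(json.loads(payload))  # e.g. a WriteRelationships call via the client library
        with db:
            db.execute("UPDATE authz_outbox SET sent = 1 WHERE id = ?", (row_id,))

create_document("readme", org="acme", owner="alice")
drain_outbox(lambda rels: print("would write to SpiceDB:", rels))
```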
Having to be proactive in your applications about storing data in a centralized system is necessary for data consistency. The alternative is federated systems that reach into other services. Federated systems come with the trade-off of being eventually consistent and can also suffer from priority inversion. I went into the centralized vs. federated trade-offs, and other design aspects of authorization systems, in a bit more depth in my presentation on the cloud-native authorization landscape.
Caveats are a new feature in SpiceDB that enable dynamic policy to be enforced on the relationship graph. Caveats are defined using Google's Common Expression Language (CEL), which is a language used for policy in other cloud-native projects like Kubernetes. You can also use caveats to make relationships that eventually expire, if you want to take some of the bookkeeping out of your app code.

Is there a listing of known whois query output formats?

TL;DR: I need a source for as many different output formats from a whois query as possible.
Background:
I am looking for a single reference that can provide as many (if not all) unique whois query output formats as possible.
I don't believe this exists but hope to be proven wrong.
This appears to be an age-old problem.
This Stack Overflow post from 2015 references the challenge of handling the "~40 formats" that the author was aware of.
The author never detailed any of these formats.
The RFC for whois is... depressing
The IETF ran an analysis in 2015 that examined the components of whois for each RIR at the time.
In my own research I see that registrars like JPNIC do not appear to comply with the APNIC standards
I am aware of existing tools that do a bang-up job of parsing whois (python-whois, for example); however, I'd like to hedge my bets against outliers with odd formats. I'm also open to possible approaches for gathering this information, though that would likely be too broad to fit this question.
Hoping there is a simple "go here and download this" answer. Hoping...
"TL;DR: I need a source for as many different output formats from a whois query as possible."
There isn't, except if you use any kind of provider that does this for you, with whatever caveats.
Or, more precisely, there isn't one that is public, maintained and exhaustive. You can find various libraries that try to do this, in various languages, but none is complete, as this is basically an impossible task, especially if you want to include all TLDs, such as ccTLDs (you are not framing your constraint space in a very detailed way, nor in fact really saying whether you are asking about domain name data in whois or about IP address/ASN data).
Some providers of course try to do that and offer you an abstract, uniform API. But why would anyone share their internal secret sauce, that is, their list of parsers and so on? There is no business incentive to do that.
As for open-source library authors (I was one at some point), it is just tedious and absolutely not rewarding to keep updating a library forever with all the new formats and tweaks per registry (battle-scar example: one registrar in the past changed its output format on each query! One query gave you somefield: somevalue, while the next time it was somefield:somevalue or somefield somevalue, etc. Of course that is only a simple example).
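Just to illustrate how much slack a parser ends up needing, here is a tiny sketch that tolerates those three variations; the regex is an assumption for this one simple case and does not handle comments, continuation lines, or multi-valued keys:

```python
import re

# One pattern that tolerates "key: value", "key:value" and "key value".
# Illustrative only, not a general whois grammar: real replies still need
# per-registry handling for comments, repeated keys, continuation lines, etc.
FIELD = re.compile(r"^([\w.-]+)\s*:?\s*(.+?)\s*$")

def parse_fields(reply):
    pairs = []
    for line in reply.splitlines():
        m = FIELD.match(line)
        if m:
            pairs.append((m.group(1).lower(), m.group(2)))
    return pairs

sample = "somefield: somevalue\nsomefield:somevalue\nsomefield somevalue\n% a comment\n"
print(parse_fields(sample))
# [('somefield', 'somevalue'), ('somefield', 'somevalue'), ('somefield', 'somevalue')]
```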
RFC 3912 specified just the transport part, not the content, hence a lot of variation appeared. Specifically in the ccTLD world, each registry is king in its kingdom and is free to implement whatever it wants the way it wants. The protocol also had some serious limitations (e.g. internationalization: what is the "charset" used for the underlying data?) that were circumvented in different ways (like passing "options" in your query... of course none of them are standardized in any way).
At the very least, the gTLD whois format is specified here:
https://www.icann.org/resources/pages/approved-with-specs-2013-09-17-en#whois
Note however that due to the GDPR there were changes (see https://www.icann.org/resources/pages/gtld-registration-data-specs-en/#temp-spec) and there will be more changes in the future.
However, you should really be looking at RDAP instead of whois.
RDAP is now a requirement for all gTLD registries and registrars. As it is JSON, it immediately solves the problem of format.
Its core specifications are:
RFC 7480 HTTP Usage in the Registration Data Access Protocol (RDAP)
RFC 7481 Security Services for the Registration Data Access Protocol (RDAP)
RFC 7482 Registration Data Access Protocol (RDAP) Query Format
RFC 7483 JSON Responses for the Registration Data Access Protocol (RDAP)
RFC 7484 Finding the Authoritative Registration Data (RDAP) Service
You can find various libraries doing RDAP for you (see below for links), but at its core it is JSON over HTTPS so you can emulate simple cases with any kind of HTTP client library.
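As a minimal sketch of the "JSON over HTTPS" point, assuming only the Python standard library: look up the authoritative base URL in the IANA bootstrap file from RFC 7484, then fetch the domain object. Error handling, registrar-level referrals, and rate limiting are deliberately left out:

```python
# Minimal RDAP sketch: resolve the authoritative server via the IANA bootstrap
# registry (RFC 7484), then fetch the domain object as JSON over HTTPS.
import json
import urllib.request

BOOTSTRAP_URL = "https://data.iana.org/rdap/dns.json"

def rdap_base_urls(tld):
    """Return the RDAP base URLs registered for a TLD in the IANA bootstrap file."""
    with urllib.request.urlopen(BOOTSTRAP_URL) as resp:
        bootstrap = json.load(resp)
    for tlds, urls in bootstrap["services"]:
        if tld in tlds:
            return urls
    return []

def rdap_domain(domain):
    tld = domain.rsplit(".", 1)[-1]
    for base in rdap_base_urls(tld):
        with urllib.request.urlopen(f"{base.rstrip('/')}/domain/{domain}") as resp:
            return json.load(resp)
    raise LookupError(f"no RDAP service registered for .{tld}")

data = rdap_domain("example.com")
print(data["ldhName"], [e["eventAction"] for e in data.get("events", [])])
```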
Work is underway to fix some missing or insufficiently precise details in RFCs 7482 and 7483.
You also need to take into account the ICANN specifications (again, only for gTLDs, of course):
https://www.icann.org/en/system/files/files/rdap-technical-implementation-guide-15feb19-en.pdf
https://www.icann.org/en/system/files/files/rdap-response-profile-15feb19-en.pdf
Note that, right now, even though it is an ICANN requirement, you will find a lot of missing or broken gTLD registry or registrar RDAP servers. You will also find a lot of "deviations" in replies from what would be expected per the specification.
I gave full details in various other questions here, so maybe have a look:
https://stackoverflow.com/a/61877920/6368697
https://stackoverflow.com/a/48066735/6368697
https://webmasters.stackexchange.com/a/115605/75842
https://security.stackexchange.com/a/213854/137710
https://serverfault.com/a/999095/396475
PS: a philosophical point on "Hoping there is a simple "go here and download this" answer. Hoping...", because a lot of people hoped for that in the past; see the initial remark at the beginning. Let us imagine you go forward and build this magnificent resource with all the exhaustive details. Would you be inclined to just share it with anyone, for free? The answer is probably no, for obvious reasons. The same happened in the past to others that went down the same path as you, and hence you now have various providers offering you more or less this service (you would need to find details on which formats are parsed, the rate limits, the prices, etc.), but nothing freely available to share.
Now you can just dream/hope that every registry and registrar switches to RDAP AND implements it properly. Then the problem of format is solved once and for all. However, the above requirements ("every" + "properly") are not small, and may not happen "soon". Specifically in ccTLDs, where registries are in no way mandated by any external force (except market pressure?) to implement RDAP at all.

ADT vs. CCDA data gap

We are developing a Provide and Register web service for CCDAs. Our vendor requires ADT as the patient registration portion. I can create a bare ADT message from the information provided to me in the CCDA in order to simplify the onboarding process (eliminate a dedicated ADT feed) and reduce the cost. BUT there are data elements (NK1, IN1, GT1) that are either not included in the CCDA or not as robust.
I wanted to know if there are any documented data gaps between these two formats (CCDA vs. ADT).
I wanted to get feedback on my approach.
I wanted to know the governance process for CCDA, as it makes sense to eventually include some of these ADT data points in the CCDA.
Thanks!
I don't think there is any specific documentation on data gaps between C-CDA and HL7 V2.x ADT messages. Generally it's fine to extract content from C-CDA and use that to construct an ADT message, but obviously you won't get everything. Governance is handled by the Structured Documents workgroup; anyone is welcome to join and submit change proposals.
Maybe you can find the additional information in the CDA section entries. C-CDA does not require, for example, a CDA document to contain an immunizations section with entries, but it does define how to include this information. If your CDA includes that information, that may be a good option.
Martí
Remember that CDA/C-CDAs are not a replacement for clinical or administrative messages. Your approach is fine, but StrucDoc may push back on adding content that is directed toward workflow concerns. CDAs are static objects; they are not intended to trigger action.
As Martí points out, consider what information is possible in the specific document you are using ... or in the base C-CDA specification. As long as your document template does not exclude a base-specification section, that section can be included in an instance of that document template.
Without appropriate details it's hard to say for certain.
Does the system requiring ADT need encounters? In that case, you're going to need an encounters section from the CDA, which then needs to be turned into multiple A08s.
Do they just need demographics? That's probably do-able.
I would ask for specs around what event types they expect and what fields are required (or at least which will bomb out on their side), and just go through the list against a sample C-CDA or two on your side.
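If it turns out demographics are enough, a bare-bones sketch like the following (plain string building, simplified PID layout, hypothetical facility/application names) shows the general shape of an ADT constructed from fields already extracted from the C-CDA; the vendor's spec still governs the event types and required fields:

```python
# Hedged sketch: build a bare ADT from demographics already extracted from a
# C-CDA.  Only a handful of PID fields are populated, and the sending/receiving
# application names and event type are placeholders.
from datetime import datetime

def bare_adt(demographics, control_id, event="A04"):
    now = datetime.now().strftime("%Y%m%d%H%M%S")
    msh = f"MSH|^~\\&|CCDA-GW|MYFACILITY|VENDOR|VENDORFAC|{now}||ADT^{event}|{control_id}|P|2.3"
    evn = f"EVN|{event}|{now}"
    pid = (
        "PID|1||{mrn}^^^MYFACILITY^MR||{last}^{first}||{dob}|{sex}|||"
        "{street}^^{city}^{state}^{zip}"
    ).format(**demographics)
    pv1 = "PV1|1|O"  # outpatient placeholder; real encounter data may not exist in the C-CDA
    return "\r".join([msh, evn, pid, pv1]) + "\r"

demo = {
    "mrn": "123456", "last": "DOE", "first": "JANE", "dob": "19800101",
    "sex": "F", "street": "1 MAIN ST", "city": "SPRINGFIELD", "state": "IL", "zip": "62701",
}
print(bare_adt(demo, control_id="MSG0001").replace("\r", "\n"))
```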

Master Data Management using Graph Database

I am building a master database to store all relevant information about our customers. I am using Neo4j.
Below is a sample of our model. We have Person, which can be registered in 3 of our mobile applications (App.01, App.02, App.03 - we use the CPF as the key; it is like an SSN). In those apps the user can be registered with an email, which is represented by the Email entity. Those users can have multiple addresses, represented by the Address entity.
The question is:
As I am building master data, IMO, if someone queries the MDM database asking for all the "best" information about a person, I would return, for example:
Name: John
Best email: email2 (because it has two apps using it)
Best address: addr1 (because it has two apps using it)
So I am going to build some heuristics to define what the "best" email and address are.
For this purpose, I have some options:
I could create an edge from John to email2 and to addr1. So it's going to be easy for a user of the MDM to get the "best" address/email for John.
I could build a REST API endpoint and apply this heuristic at query time.
Does anyone have experience using a graph database or designing an MDM database?
Is it a good approach?
This question is a complement for the question: Using Neo4j to build a Master Data Management
The graph data model is good for storing your master data; however, your master data will most likely co-exist with operational and reference data in the form of dimensions.
If you decide to go with a graph model for your MDM, make sure that you have a well-defined semantic model for the core dimensions in MDM, usually:
Products
Customers
Employees
Assets
Locations
These core dimensions become attributes of your nodes.
Also, decide which MDM architecture style you are going to adopt. Some popular ones are:
The Registry - graph fits very well with this style because your master data remains in the system of record (SOR) and the references can be represented in the graph very nicely.
Master Data Hub - extra transformations are required to transpose your system of record from tabular form to the graph.
Master-Master - this style fits well with your MDM in the graph if you do not have too many legacy apps that depend on your MDM.
Approach 1 would add a lot of essentially redundant information (about 2N extra relationships, where N is the number of people), and also require more complex coding to handle changes to a person's apps. And, as always when information is stored redundantly, you would have to be especially careful that inconsistencies do not creep in. But, it should be faster when querying for the "best" contact info.
Approach 2 keeps the DB the same size, but requires a more complex and slower query to get the "best" contact info. However, changing a person's apps and contact info is straightforward.
To decide which approach to use, you should consider whether DB size is an issue, and also look at your use cases and how frequently they will be performed.
Here is a simple heuristic if DB size is not an issue. Suppose G is the frequency at which you need to get a person's "best" contact info, and M is the frequency at which you need to modify a person's apps or contact info. You would pick approach 1 if the value of G/M exceeds some threshold value, K, that you would have to decide on, taking the above factors into account.
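As an illustration of approach 2, here is a sketch that computes the "best" email at query time with the official Neo4j Python driver; the labels and relationship types (:Person, :App, :Email, REGISTERED_IN, USES_EMAIL) are assumptions about the model described above, so adjust them to your actual schema:

```python
# Sketch of approach 2: derive the "best" email at query time instead of
# materializing a BEST_EMAIL edge.  Labels and relationship types are assumed.
from neo4j import GraphDatabase  # official Neo4j Python driver

BEST_EMAIL = """
MATCH (p:Person {cpf: $cpf})-[:REGISTERED_IN]->(a:App)-[:USES_EMAIL]->(e:Email)
RETURN e.address AS email, count(DISTINCT a) AS apps
ORDER BY apps DESC
LIMIT 1
"""

def best_email(driver, cpf):
    with driver.session() as session:
        record = session.run(BEST_EMAIL, cpf=cpf).single()
        return record["email"] if record else None

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
print(best_email(driver, cpf="123.456.789-00"))
```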

Using machine learning to de-duplicate data

I have the following problem and was thinking I could use machine learning but I'm not completely certain it will work for my use case.
I have a data set of around a hundred million records containing customer data, including names, addresses, emails, phones, etc., and I would like to find a way to clean this customer data and identify possible duplicates in the data set.
Most of the data has been manually entered using an external system with no validation, so a lot of our customers have ended up with more than one profile in our DB, sometimes with different data in each record.
For instance, we might have 5 different entries for a customer John Doe, each with different contact details.
We also have the case where multiple records that represent different customers match on key fields like email. For instance, when a customer doesn't have an email address but the data entry system requires one, our consultants will use a random email address, resulting in many different customer profiles using the same email address; the same applies to phones, addresses, etc.
All of our data is indexed in Elasticsearch and stored in a SQL Server database. My first thought was to use Mahout as a machine learning platform (since this is a Java shop) and maybe use HBase to store our data (just because it fits with the Hadoop ecosystem; not sure if it will be of any real value). But the more I read about it, the more confused I am as to how it would work in my case. For starters, I'm not sure what kind of algorithm I could use, since I'm not sure which category this problem falls into: can I use a clustering algorithm or a classification algorithm? And of course certain rules will have to be defined as to what constitutes a profile's uniqueness, i.e. which fields.
The idea is to have this deployed initially as a Customer Profile de-duplicator service of sorts that our data entry systems can use to validate and detect possible duplicates when entering a new customer profile, and in the future perhaps develop this into an analytics platform to gather insight about our customers.
Any feedback will be greatly appreciated :)
Thanks.
There has actually been a lot of research on this, and people have used many different kinds of machine learning algorithms for it. I've personally tried genetic programming, which worked reasonably well, but I still prefer to tune matching manually.
I have a few references to research papers on this subject. Stack Overflow doesn't want too many links, but here is bibliographic info that should be sufficient to find them via Google:
Unsupervised Learning of Link Discovery Configuration, Andriy Nikolov, Mathieu d’Aquin, Enrico Motta
A Machine Learning Approach for Instance Matching Based on Similarity Metrics, Shu Rong, Xing Niu, Evan Wei Xiang, Haofen Wang, Qiang Yang, and Yong Yu
Learning Blocking Schemes for Record Linkage, Matthew Michelson and Craig A. Knoblock
Learning Linkage Rules using Genetic Programming, Robert Isele and Christian Bizer
That's all research, though. If you're looking for a practical solution to your problem, I've built an open-source engine for this type of deduplication, called Duke. It indexes the data with Lucene, and then searches for matches before doing a more detailed comparison. It requires manual setup, although there is a script that can use genetic programming (see the reference above) to create a setup for you. There's also someone who wants to make an Elasticsearch plugin for Duke (see thread), but nothing's done so far.
Anyway, that's the approach I'd take in your case.
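To make the general shape concrete, here is a standard-library sketch of that approach: block records on a cheap key so you don't compare every pair, then score the candidate pairs field by field. The blocking key, weights, and 0.85 threshold are made-up examples that a real system would tune (or learn) against labelled pairs:

```python
# Standard-library sketch of blocking + pairwise comparison for deduplication.
# Keys, weights, and threshold are illustrative assumptions only.
from collections import defaultdict
from difflib import SequenceMatcher
from itertools import combinations

def blocking_key(rec):
    # Cheap key: first 3 letters of the surname + zip code prefix.
    return (rec["last_name"][:3].lower(), rec["zip"][:3])

def similarity(a, b):
    def sim(x, y):
        return SequenceMatcher(None, x.lower(), y.lower()).ratio()
    # Weighted average over a few fields; the weights are arbitrary here.
    return (0.4 * sim(a["last_name"], b["last_name"])
            + 0.3 * sim(a["first_name"], b["first_name"])
            + 0.3 * sim(a["email"], b["email"]))

def candidate_duplicates(records, threshold=0.85):
    blocks = defaultdict(list)
    for rec in records:
        blocks[blocking_key(rec)].append(rec)
    for block in blocks.values():
        for a, b in combinations(block, 2):
            score = similarity(a, b)
            if score >= threshold:
                yield a["id"], b["id"], round(score, 2)

records = [
    {"id": 1, "first_name": "John", "last_name": "Doe", "email": "jdoe@example.com", "zip": "62701"},
    {"id": 2, "first_name": "Jon",  "last_name": "Doe", "email": "jdoe@example.com", "zip": "62701"},
    {"id": 3, "first_name": "Jane", "last_name": "Roe", "email": "jane@example.com", "zip": "10001"},
]
print(list(candidate_duplicates(records)))  # flags records 1 and 2 as a candidate pair
```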
I just came across a similar problem, so I did a bit of Googling and found a library called "Dedupe Python Library":
https://dedupe.io/developers/library/en/latest/
The documentation for this library covers common problems and solutions when de-duplicating entries, as well as papers in the de-duplication field. So even if you are not using it, it is still good to read the documentation.
