We are developing a provide and register web service for CCDAs. Our vendor requires ADT as the patient registration portion. I can create a bare ADT message from the information provided to me in the CCDA in order to simplify the onboarding process (eliminate a dedicated ADT feed) and reduce the cost. BUT there are data elements (NK1, IN1, GT1) that are either not included in the CCDA or not as robust.
I wanted to know if there are any documented data gaps between these two messages (CCDA vs. ADT).
I wanted to get feedback on my approach.
I wanted to know the governing process for CCDA, as it makes sense to eventually include some of these ADT data points in the CCDA.
Thanks!
I don't think there is any specific documentation on data gaps between C-CDA and HL7 V2.x ADT messages. Generally it's fine to extract content from C-CDA and use that to construct an ADT message, but obviously you won't get everything. Governance is handled by the Structured Documents workgroup; anyone is welcome to join and submit change proposals.
Maybe you can find the additional information in CDA section entries. C-CDA does not require, for example, a CDA document to contain an immunizations section with entries, but it does define how to include this information. If your CDA includes that information, that may be a good option.
Martí
Remember that CDA/CCDAs are not a replacement for clinical or administrative messages. Your approach is fine, but StrucDoc may push back on adding content that is directed toward workflow concerns. CDAs are static objects; they are not intended to trigger action.
As Martí points out, consider what information is possible in the specific document you are using ... or in the base CCDA specification. As long as your document template does not exclude a base specification section, that section can be included in an instance of that document template.
Without appropriate details it's hard to say for certain.
Does the system requiring ADT need encounters? In that case, you're going to need an encounters section from the CDA, which then needs to be turned into multiple A08s.
Do they just need demographics? That's probably do-able.
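If it is just demographics, a minimal sketch of building a bare ADT from fields already pulled out of the C-CDA might look like the following; the A04 event type, facility names and field layout are illustrative assumptions, not your vendor's spec:

```python
# Minimal sketch of building a bare ADT from demographics already extracted
# from a C-CDA. The A04 event, facility names and field layout are assumptions;
# check the vendor's companion guide for what they actually require.
from datetime import datetime

def build_adt_a04(p: dict) -> str:
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        f"MSH|^~\\&|CCDA2ADT|MYFACILITY|VENDORAPP|VENDORFAC|{ts}||ADT^A04|MSG{ts}|P|2.3",
        f"EVN|A04|{ts}",
        f"PID|1||{p['mrn']}^^^MYFACILITY^MR||{p['family']}^{p['given']}||{p['dob']}|{p['sex']}|||{p['address']}",
        "PV1|1|O",  # placeholder visit; NK1/IN1/GT1 would need data the C-CDA may not carry
    ]
    return "\r".join(segments)

print(build_adt_a04({"mrn": "12345", "family": "DOE", "given": "JANE",
                     "dob": "19800101", "sex": "F",
                     "address": "123 MAIN ST^^SPRINGFIELD^IL^62701"}))
```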
I would ask for specs around what event types they expect and what fields are required (or at least will bomb out on their side), and just go through the list with a sample C-CDA or two on your side.
TL;DR: I need a source for as many different output formats from a whois query as possible.
Background:
I am looking for a single reference that can provide as many (if not all) unique whois query output formats as possible.
I don't believe this exists but hope to be proven wrong.
This appears to be an age-old problem.
This stackoverflow post from 2015 references the challenge of handling the "~40 formats" that the author was aware of.
The author never detailed any of these formats.
The RFC for whois is... depressing
The IETF ran an analysis in 2015 that examined the components of whois for each RIR at the time.
In my own research I see that registries like JPNIC do not appear to comply with the APNIC standards.
I am aware of existing tools that do a bang-up job parsing whois (python-whois, for example), but I'd like to hedge my bets against outliers with odd formats. I'm also open to possible approaches for gathering this information, although that would likely be too broad to fit this question.
Hoping there is a simple "go here and download this" answer. Hoping...
"TL;DR: I need a source for as many different output formats from a whois query as possible."
There isn't, unless you use some kind of provider that does this for you, with whatever caveats that entails.
Or more precisely, there isn't anything public, maintained and exhaustive. You can find various libraries that try to do this, in various languages, but none is complete, because this is basically an impossible task, especially if you want to cover all TLDs, including ccTLDs (you are not framing your constraints in much detail, nor in fact saying whether you are asking about domain name data in whois or about IP address/ASN data).
Some providers of course try to do that, offering you an abstract, uniform API. But why would anyone share their internal secret sauce, that is, their list of parsers and so on? There is no business incentive to do that.
As for open-source library authors (I was one at some point), it is tedious and absolutely not rewarding to keep updating a library forever with every new format and tweak per registry (battle-scar example: one registrar in the past changed its output format on each query! One query gave you somefield: somevalue, while the next time it was somefield:somevalue or somefield somevalue, etc. That is only a simple example, of course).
RFC 3912 specified just the transport part, not the content, hence a lot of variation appeared. Specifically in the ccTLD world, each registry is king in its kingdom and is free to implement whatever it wants the way it wants. The protocol also had some serious limitations (e.g. internationalization: what is the "charset" used for the underlying data?) that were circumvented in different ways (like passing "options" in your query... of course none of them are standardized in any way).
At the very least, gTLDs whois format is specified there:
https://www.icann.org/resources/pages/approved-with-specs-2013-09-17-en#whois
Note however that due to GDPR there were changes (see https://www.icann.org/resources/pages/gtld-registration-data-specs-en/#temp-spec) and there will be further changes in the future.
However, you would be well advised to look at RDAP instead of whois.
RDAP is now a requirement for all gTLD registries and registrars. Since it is JSON, it immediately solves the format problem.
Its core specifications are:
RFC 7480 HTTP Usage in the Registration Data Access Protocol (RDAP)
RFC 7481 Security Services for the Registration Data Access Protocol (RDAP)
RFC 7482 Registration Data Access Protocol (RDAP) Query Format
RFC 7483 JSON Responses for the Registration Data Access Protocol (RDAP)
RFC 7484 Finding the Authoritative Registration Data (RDAP) Service
You can find various libraries doing RDAP for you (see below for links), but at its core it is JSON over HTTPS so you can emulate simple cases with any kind of HTTP client library.
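For example, a minimal sketch of a domain lookup with nothing but an HTTP client; the https://rdap.org/ redirector is used here for convenience (an assumption), but you could instead resolve the authoritative server yourself from the IANA bootstrap files per RFC 7484:

```python
# Minimal sketch of an RDAP domain lookup: plain HTTPS + JSON, no special library.
# The https://rdap.org/ redirector is an assumption for convenience; you could
# instead resolve the authoritative server from the IANA bootstrap files (RFC 7484).
import requests

def rdap_domain(domain: str) -> dict:
    resp = requests.get(f"https://rdap.org/domain/{domain}", timeout=10)
    resp.raise_for_status()
    return resp.json()  # RFC 7483 JSON structure

data = rdap_domain("example.com")
print(data.get("ldhName"), [e.get("eventAction") for e in data.get("events", [])])
```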
Work is underway to fix some missing or imprecise details in RFC 7482 and 7483.
You also need to take into account the ICANN specifications (again, only for gTLDs of course):
https://www.icann.org/en/system/files/files/rdap-technical-implementation-guide-15feb19-en.pdf
https://www.icann.org/en/system/files/files/rdap-response-profile-15feb19-en.pdf
Note that, right now, even though it is an ICANN requirement, you will find a lot of missing or broken gTLD registry and registrar RDAP servers. You will also find a lot of "deviations" in replies from what would be expected per the specification.
I gave full details in various other questions here, so maybe have a look:
https://stackoverflow.com/a/61877920/6368697
https://stackoverflow.com/a/48066735/6368697
https://webmasters.stackexchange.com/a/115605/75842
https://security.stackexchange.com/a/213854/137710
https://serverfault.com/a/999095/396475
PS: a philosophical note on your "Hoping there is a simple 'go here and download this' answer. Hoping...": a lot of people hoped for that in the past (see the remark at the beginning). Let us imagine you go forward and build this magnificent resource with all the exhaustive details. Would you be inclined to just share it with anyone, for free? The answer is probably no, for obvious reasons. The same happened to others who went down the same path before you, and hence the result is the various providers now offering you more or less this service (you would need to find details on which formats are parsed, the rate limits, the prices, etc.), but nothing freely available to share.
Now you can just dream/hope that every registry and registrar switches to RDAP AND implements it properly. Then the problem of format is solved once and for all. However, those two requirements ("every" + "properly") are not small, and may not happen "soon", especially for ccTLDs, where registries are in no way mandated by any external force (except market pressure?) to implement RDAP at all.
For my bachelor's degree I got the task of implementing B2B communication for an ERP system developed by the company I currently work for. Because it should also be able to communicate with other software, I am considering using EDI messages (EDIFACT) or maybe cXML. What is the best way to approach this task?
I had the idea to translate the EDIFACT message into XML defined by one XSD describing every EDIFACT message.
Then I would write the XML into the database or to business objects using a self-written mapper.
For writing EDIFACT messages I just use the same methods the other way round.
I thought using an XML transformation first would make the mapping easier and give me the opportunity to use the XML for other purposes, like writing other EDI formats.
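As a rough illustration of that first idea, here is a minimal sketch of tokenizing an EDIFACT message into generic XML; it assumes the default UN/EDIFACT delimiters and ignores the UNA service string advice, release characters and any real validation:

```python
# Minimal sketch of the "EDIFACT -> generic XML" step. Assumes the default
# UN/EDIFACT delimiters (' segment, + data element, : component) and does no
# handling of UNA, escapes/releases, or message-type validation.
import xml.etree.ElementTree as ET

def edifact_to_xml(message: str) -> ET.Element:
    root = ET.Element("interchange")
    for raw_segment in filter(None, (s.strip() for s in message.split("'"))):
        elements = raw_segment.split("+")
        seg = ET.SubElement(root, "segment", tag=elements[0])
        for i, element in enumerate(elements[1:], start=1):
            el = ET.SubElement(seg, "element", pos=str(i))
            for j, comp in enumerate(element.split(":"), start=1):
                ET.SubElement(el, "component", pos=str(j)).text = comp
    return root

sample = "UNH+1+ORDERS:D:96A:UN'BGM+220+128576+9'"
print(ET.tostring(edifact_to_xml(sample), encoding="unicode"))
```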
The other idea is to just use cXML and map it.
What is the best approach to this task?
You're essentially designing and implementing a public facing API for the ERP, so you need to consider security, reliability, non-repudiation, impact on business process under normal and abnormal conditions.
You'll also need to consider (ask) what sorts of information your customers will need to exchange with their partners (master data, transactional messages, financial information, etc).
I'd start by looking at the most commonly exchanged messages in the industry most representative of the ERPs users - look for message content and structure.
Whether you choose to use EDIFACT, ANSI X12, cXML, xCBL, GS1 XML, ebXML or something else is less important than good documentation and flexibility. It's unlikely that your choice will be exactly what any of your customers need without further transformation. You don't want to invent a new any-to-any transformation tool, and you probably don't even want to bundle an existing one.
I am trying to add support for sidecar applications in EHR platforms. I am taking a pure implementer's approach to build an intermediate representation (such as an XML) for mapping CDA<--->FHIR. I am using the smart-on-fhir as the reference implementation for this. The CDA I am trying to use is the Australian extension - ereferral (www.digitalhealth.gov.au/implementation-resources/clinical-documents/EP-0936-2012/NEHTA-0967-2012).
Is it possible to create such an intermediate representation using the smart-on-fhir (or any other FHIR) reference implementation? Has anyone else tried this?
While searching for actual implementations I came across these repos:
github.com/jmandel/sample_ccdas
github.com/amida-tech/fhir2ccda
github.com/amida-tech/cda-fhir
The FHIR group has some hand crafted examples. Are there any equivalent CDA examples for these FHIR resources?
I have read a couple of web articles and white papers regarding the challenges of the transforms, such as:
David Hay's blog, which says a FHIR document "is like an object graph, rooted in the composition resource"; so is there an equivalent representation for CDA?
Rene Spronk's article about whether HL7 v3 is a message or a document. What are the implications for an implementer who has to handle and validate representations across both CDA and FHIR?
Lantana Group position paper - "If or when FHIR can accommodate the full CDA use case, the future holds the promise of seamless integration and information sharing between clinical documents and APIs". Does this mean that CDA<--->FHIR transform is not possible at this stage of the FHIR standard?
Apologies for cross posting it in both SO and FHIR community forums: http://community.fhir.org/t/cda-fhir-mapping-implementations/211/1
CDA to FHIR is fairly straightforward and it looks like you've already found some repositories that do that.
FHIR to CDA is also fairly straightforward - assuming all you care about is something that is valid. You may need to adjust those libraries for use in the AU/NZ locales.
That said - Keith Boone has some blog posts about some of the challenges in mapping between the two, as there will always be quirks.
The biggest hurdle you will have is loss of fidelity. A CDA (at least a C-CDA here in the states) comes with so much HL7v3 cruft that by turning it into FHIR, you're instantly not going to be able to re-create the original C-CDA from FHIR. The CCD template has evolved so much that you find documents that contain C32 elements, C-CDA 1.1 templateIds, and maybe someday C-CDA 2.1 templateIds. So while you can reasonably make a valid C-CDA out of a FHIR bundle, you will have to pick a specific implementation and version to target.
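To make the "straightforward" end concrete, here is a minimal sketch of the CDA-to-FHIR direction for patient demographics only, assuming the base CDA R2 recordTarget structure; templateIds, sections and entries (where the fidelity loss described above lives) are ignored:

```python
# Minimal sketch of CDA -> FHIR for patient demographics only. Assumes a populated
# recordTarget per the base CDA R2 schema; everything else in the document
# (templateIds, sections, entries) is ignored here.
import xml.etree.ElementTree as ET

NS = {"cda": "urn:hl7-org:v3"}

def cda_patient_to_fhir(cda_xml: str) -> dict:
    root = ET.fromstring(cda_xml)
    patient = root.find(".//cda:recordTarget/cda:patientRole/cda:patient", NS)
    name = patient.find("cda:name", NS)
    birth = patient.find("cda:birthTime", NS)
    dob = birth.get("value", "") if birth is not None else ""
    return {
        "resourceType": "Patient",
        "name": [{
            "family": name.findtext("cda:family", default="", namespaces=NS),
            "given": [g.text for g in name.findall("cda:given", NS)],
        }],
        # CDA birthTime is YYYYMMDD[HHMMSS...]; FHIR birthDate wants YYYY-MM-DD
        "birthDate": f"{dob[0:4]}-{dob[4:6]}-{dob[6:8]}" if len(dob) >= 8 else None,
    }
```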
My startup https://www.redoxengine.com has a JSON version of CDA that you can see on the docs page. It's designed for simplicity, but can be mapped back and forth to FHIR resources.
I have had a request from a client to pull in the Lab Name and CLIA information from several different vendors' HL7 feeds. The problem is I am unsure which node I should really pull this information from.
I notice one vendor is using ZPS and it appears they have the Lab Name and CLIA there, although I see that others do not use ZPS. Just curious what would be the appropriate node to pull these from?
I see the header nodes look really abbreviated for some of my vendors. I need a perfectly readable name like 'Johnson Hospital'. Any suggestions on the field you all would use to pull the CLIA and Lab Name?
Welcome to the wild world of HL7. This exact scenario is why interface engines are so prevalent and useful for message exchange in the healthcare industry.
Up until HL7 v2.5.1, I believe, there was no standardization around CLIA identifiers. Assuming you are receiving ORU^R01 messages, you may want to look at the OBX segment, field 15, which may carry the producer or lab identifier. The only thing is that there is a very slim chance that they are using HL7 2.5.1 or implementing the guidelines as intended. There are a lot of reasons for all of this, but the point is that you should be prepared to do some work for each and every integration.
For the data, be prepared to exchange or ask for a technical specification from your trading partner. If that is not a possibility or they do not have one, ask for a sample export of representative messages from their system, or see whether they have a vendor reference. Since the data you are looking for is not as well established as something like an address, there is a high likelihood that you will have to get it from different segments and fields for each trading partner. The ZPS segment in your example is a good reference: any segment that starts with Z is a custom segment, created because the vendor or trading partner could not find a good existing place to store that data, so they made a new segment to store it themselves.
For the identifiers, what I would recommend is to create a translation or a mapping table for identifiers. So, if you receive JHOSP or JH123 you can translate/map that to 'Johnson Hospital'. Each EMR or hospital system will have their own way to represent different values and there is no guarantee that they will be consistent, so you must be prepared to handle that scenario.
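As a rough sketch of both suggestions together (OBX-15 with a vendor Z-segment fallback, plus a local mapping table), where the ZPS field position and the identifiers in the table are assumptions you would confirm per trading partner:

```python
# Rough sketch: pull a producer/lab identifier from OBX-15, falling back to a
# vendor ZPS segment, then map it to a readable name and CLIA number. The ZPS
# field position and the entries in the table are per-partner assumptions.
from typing import Optional

LAB_MAP = {  # hypothetical local translation table, maintained per trading partner
    "JHOSP": {"name": "Johnson Hospital", "clia": "14D1234567"},
    "JH123": {"name": "Johnson Hospital", "clia": "14D1234567"},
}

def producer_id(message: str) -> Optional[str]:
    for segment in message.splitlines():
        fields = segment.split("|")
        if fields[0] == "OBX" and len(fields) > 15 and fields[15]:
            return fields[15].split("^")[0]   # OBX-15 Producer's ID
        if fields[0] == "ZPS" and len(fields) > 2 and fields[2]:
            return fields[2].split("^")[0]    # vendor-specific fallback (assumed position)
    return None

def lab_info(message: str) -> Optional[dict]:
    key = producer_id(message)
    return LAB_MAP.get(key) if key else None
```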
I believe several of us have already worked on a project where not only the UI but also the data has to be supported in different languages: being able to provide and store a translation for what I'm writing here, for instance.
What's more, I also believe several of us have time-triggered events (such as expiring membership access) where the user's location should be taken into account to calculate, say, midnight in the right time zone.
Finally, there's also the need to support right-to-left user interfaces for certain languages and to use different encodings when reading submitted data files (parsing text and Excel data, for instance).
Currently I'm storing all translations for all my entities in a single table (not so practical, as it is very hard to find your way around when writing SQL queries to look into a problem), setting UI translations mainly in satellite assemblies, and supporting neither time zones nor right-to-left design.
What are your experiences when dealing with these challenges?
[Edit]
I assume most people think that this level of multi-culture requirement only comes up when building a huge project. As a matter of fact, if you think about an online survey where:
Answers will be collected only until midnight
The questionnaire definition and part of the answers come from a text file (in any language), as well as the translations
Questions and response options must be displayed in several languages, according to who is accessing it
Reports also have to be shown and generated in several different languages
As one can see, we do not have to go very far in an application to have this kind of requirement.
[Edit2]
Just found out my question is a duplicate
i18n in your projects
The first answer (when ordering by votes) is so comprehensive that I have to get at least a part of it implemented someday.
Be very very cautious. From what you say about the i18n features you're trying to implement, I wonder if you're over-reaching.
Notice that the big-boy web applications (e.g. eBay, amazon.com, Yahoo, the BBC) actually deliver separate apps for each language they want to support. Each of these web applications consumes a common core set of services. Don't be surprised if the business needs of two different countries that even speak the same language (e.g. the UK and the US) are different enough that you do need a separate app for each.
On the other hand, you might need to become like the next amazon.com. It's difficult to deliver a successful web application in one language, much less many. You should not be afraid to favor one user population (say, your Asian-language speakers) over others if this makes sense for your web app's business needs.
Go slow.
Think everything through, then really think about what you're doing again. Bear in mind that the more you add (like Right to Left) the longer your QA cycle will be.
The primary piece of your puzzle will be extensive use of interfaces on the code side, and either one data source that gets passed through a translator to whichever languages need to be supported, or separate data sources for each language.
The time issues can be handled by the interfaces, because presumably you will want things to function in the same fashion, but differ in the implementation details. To a large extent, a similar thought process can be applied to the creation of the interface when adjusting it to support differing languages. When you get down to it, skinning is exactly this, where the content being skinned is the interface, and the look/feel is the implementation.
Do what your users need. For instance, most programmers understand English, so there is no sense in translating posts on this site. If many of your users need a translation, add a new table column with the language id, and another column to link a translated row to its original. If your target audience includes users from the Middle East, implement right-to-left. If time precision down to the hour is critical, add a time zone column to the user table, and so on.
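For the time-zone part, a minimal sketch of computing "midnight tonight" for a user, assuming you store an IANA zone name on the user row (the column name here is hypothetical):

```python
# Sketch of the time-zone suggestion above: compute "midnight tonight" for a user,
# assuming an IANA zone name stored on the user row (column name is hypothetical).
from datetime import datetime, timedelta, time
from zoneinfo import ZoneInfo

def next_local_midnight(tz_name: str) -> datetime:
    tz = ZoneInfo(tz_name)                      # e.g. user.time_zone = "America/Sao_Paulo"
    tomorrow = datetime.now(tz).date() + timedelta(days=1)
    return datetime.combine(tomorrow, time.min, tzinfo=tz)

print(next_local_midnight("America/Sao_Paulo"))
```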
If you're on *NIX, use gettext. Most languages I've used have some level of support; PHP's is pretty good, for instance.
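Python's standard library has the same facility; a tiny sketch, where the "myapp" domain and the locales/ directory layout are assumptions:

```python
# Tiny sketch with Python's standard gettext module; the "myapp" domain and the
# locales/ directory layout are assumptions (compiled .mo catalogs live there).
import gettext

t = gettext.translation("myapp", localedir="locales", languages=["pt_BR"], fallback=True)
_ = t.gettext
print(_("Welcome"))  # translated if locales/pt_BR/LC_MESSAGES/myapp.mo exists, else "Welcome"
```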
I'll describe what has been done in my project (it wasn't my original architecture, but I liked it anyway).
Providing Translation Support
Text which needs to be translated has been divided into three different categories:
Error text: Like errors which happen deep in the application business layer
UI Text: Text which is shown in the User interface (labels, buttons, grid titles, menus)
User-defined Text: text which needs to be translatable according to the final user's preferences (that is - the user creates a question in a survey and he can also create a translated version of that survey)
For each category the scheme used to provide the translation service is different, so we have:
Error Text: A library with static functions which access resource files
UI Text: A "Helper" class which, linked to the view engine, provides translations from remote assemblies
User-defined Text: A table in the database which provides translations (according to typeID of the translated entity and object id) and is linked to the entity via a 1 x N relationship
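For the user-defined text case, a minimal sketch of the lookup, assuming a table keyed by entity type id, object id and language code (table and column names are hypothetical):

```python
# Minimal sketch of the user-defined-text lookup described above, assuming a
# translation table keyed by (entity_type_id, object_id, language_code).
# Table and column names are hypothetical.
import sqlite3

def get_translation(conn: sqlite3.Connection, entity_type_id: int, object_id: int,
                    lang: str, fallback_lang: str = "en"):
    sql = ("SELECT text FROM user_text_translation "
           "WHERE entity_type_id = ? AND object_id = ? AND language_code = ?")
    for code in (lang, fallback_lang):
        row = conn.execute(sql, (entity_type_id, object_id, code)).fetchone()
        if row:
            return row[0]
    return None  # caller decides what to do when no translation exists
```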
I haven't, however, attacked the other obvious problems, such as dealing with time zones, different layouts, and picture translation (if that is really necessary). Has anyone tackled this problem in a different way?
Has anyone ever tackled the other i18n problems?