I'm not entirely sure that this is the right place for it - if not, this question should be "Where should I ask this question?" ;)
I have some code I'm maintaining that parses HL7 and MLLP. It opens the MLLP message and sends it to the HL7 parser, which sends it right back to the MLLP parser. As you can imagine, this goes poorly, quickly.
I'm fairly new to HL7/MLLP, so I'm confused, but I'm also pretty sure that an HL7 message shouldn't contain another MLLP message. If this is allowed, could I get a link to or a quote from some documentation stating that?
Just use escape sequences to mask the MLLP control characters.
See http://www.hl7standards.com/blog/2006/11/02/hl7-escape-sequences/ for the standard sequences.
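For the MLLP framing characters specifically (VT 0x0B, FS 0x1C, CR 0x0D), HL7 v2 defines the \Xdd...\ hexadecimal escape sequence. A minimal sketch of the masking step, here in Swift (the function name is mine, and a fuller version would also escape the escape character itself as \E\):

import Foundation

// Replace MLLP framing bytes in field data with HL7 \Xdd\ hex escapes.
// Illustrative only; \E\ handling for the escape character is omitted.
func escapeMLLPControlCharacters(_ field: String) -> String {
    let escapes: [Character: String] = [
        "\u{0B}": "\\X0B\\",  // VT - MLLP start of block
        "\u{1C}": "\\X1C\\",  // FS - MLLP end of block
        "\u{0D}": "\\X0D\\"   // CR - MLLP trailer / segment terminator
    ]
    return field.map { escapes[$0] ?? String($0) }.joined()
}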
But if you just want to send more than one HL7 message in a single MLLP envelope, I would use batch processing with FHS and BHS segments, as sketched below.
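Roughly, a batch frames the individual messages with file (FHS/FTS) and batch (BHS/BTS) header/trailer segments - this is from memory of the v2 batch protocol, so check your version of the spec:

FHS|^~\&|...
BHS|^~\&|...
MSH|^~\&|... first message ...
MSH|^~\&|... second message ...
BTS|2
FTS|1

BTS-1 carries the count of messages in the batch, and FTS-1 the count of batches in the file.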
While perhaps not the same as the issue you are describing, it is not uncommon for HL7 messages to end up double-wrapped in MLLP envelopes. This is particularly likely if the message transits an intermediate system (say, from a billing system through an EHR).
A standard envelope is:
<VT>...hl7data...<FS><CR>
but a double-wrapped message will look like this:
<VT><VT>...hl7data...<FS><CR><FS><CR>
It's just something to look out for, and it should be corrected by the system in the middle unwrapping and rewrapping the message properly.
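If you want your receiver to be defensive about it anyway, stripping extra envelopes is straightforward. A sketch in Swift (the function name is mine, and a real receiver should also validate that what remains starts with MSH):

import Foundation

// Strip one or more MLLP envelopes from a received frame.
// VT = 0x0B, FS = 0x1C, CR = 0x0D.
func unwrapMLLP(_ frame: Data) -> Data {
    let vt: UInt8 = 0x0B, fs: UInt8 = 0x1C, cr: UInt8 = 0x0D
    var payload = frame
    // Keep stripping while a complete <VT>...<FS><CR> envelope remains.
    while payload.count >= 3,
          payload.first == vt,
          payload.suffix(2).elementsEqual([fs, cr]) {
        payload = payload.dropFirst().dropLast(2)
    }
    return payload
}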
Does anyone know what ZCD may refer to? It is described as a segment with a link back to PreManage for the patient!
Can anyone please provide more details?
Z segments (segments that begin with the letter "Z") are custom segments. They are not defined in the specification and vary from vendor to vendor. A vendor may publish a document explaining the usage of its segments; otherwise, the two connected parties should know in advance and agree on the usage by mutual understanding.
As these are custom, and if there is no way to know what data they contain, it may be safe to ignore them, in the hope that the sender has not put critical data in them.
Please refer to this:
Z-segments can be inserted anywhere in the HL7 message. A popular approach is to place the Z-segment within a group of segments that contain similar information, such as insurance. Z-segments are also often placed at the end of the message. The advantage of doing so is that this placement prevents systems configured to parse “standard” HL7 format from requiring any configuration modifications in order to process the message. The application simply reads the segments in the order expected and then extracts the data from the Z-segment (if needed) via parser modifications.
Working with unexpected Z-segments
Sometimes systems may send unexpected Z-segments, whether or not they were part of the original specifications. Even if you are not interested in the data in the Z-segment, you may still (depending on its location) need to take the segment into account while testing and developing your interface.
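In practice, that usually just means having the parser tolerate segments it has no specification for. A minimal sketch of that idea in Swift (the segment list and names are illustrative only):

// Split a v2 message into segments, setting aside unrecognized
// Z-segments instead of failing on them.
let knownSegments: Set<String> = ["MSH", "PID", "PV1", "OBR", "OBX"]

func partitionSegments(_ message: String) -> (known: [String], custom: [String]) {
    // HL7 v2 segments are separated by carriage returns.
    let segments = message.split(separator: "\r").map(String.init)
    var known: [String] = []
    var custom: [String] = []
    for segment in segments {
        let id = String(segment.prefix(3))
        if id.hasPrefix("Z") && !knownSegments.contains(id) {
            custom.append(segment)  // unexpected Z-segment: keep for logging
        } else {
            known.append(segment)
        }
    }
    return (known, custom)
}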
I was investigating various speech recognition strategies and I liked the idea of grammars as defined in the Web Speech spec. It seems that if you can tell the speech recognition service that you expect “Yes” or “No”, the service could more reliably recognize a “Yes” as “Yes” and a “No” as “No”, and hopefully also be able to say “it didn’t sound like either of those!”.
However, in SFSpeechRecognitionRequest, I only see taskHint with values from SFSpeechRecognitionTaskHint of confirmation, dictation, search, and unspecified.
I also see SFSpeechRecognitionRequest.contextualStrings, but it seems to be for a different purpose. I.e., I think I should put brands/trademark type things in there. Putting “Yes” and “No” in wouldn’t make those words any more likely to be selected because they already exist in the system dictionary (this is an assumption I’m making based on the little the documentation says).
Is there a way with the API to do something more like grammars or, even more simply, to just provide a list of expected phrases so that the speech recognition is more likely to come up with a result I expect instead of similar-sounding gibberish/homophones? Does contextualStrings perhaps increase the likelihood that the system chooses one of those strings instead of just expanding the system dictionary? Or maybe I’m taking the wrong approach and am supposed to enforce the grammar on my own, enumerating over SFSpeechRecognitionResult.transcriptions until I find one matching an expected word?
Unfortunately, I can’t test these APIs myself; I am merely researching the viability of writing a native iOS app and do not have the necessary development environment.
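To make that last option concrete, this is the kind of thing I have in mind (untested, as noted, and the matching logic is my own sketch, not something the documentation prescribes):

import Speech

// Bias recognition toward expected phrases and pick the first
// alternative transcription that matches one of them.
let expectedPhrases: Set<String> = ["yes", "no"]

func makeRequest() -> SFSpeechAudioBufferRecognitionRequest {
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.taskHint = .confirmation
    // contextualStrings biases the recognizer toward these terms; the docs
    // aim it at out-of-vocabulary words, so how much it helps for common
    // words like "yes"/"no" is exactly the open question above.
    request.contextualStrings = Array(expectedPhrases)
    return request
}

func matchExpectedPhrase(in result: SFSpeechRecognitionResult) -> String? {
    // Enumerate all alternative transcriptions, not just the best one.
    for transcription in result.transcriptions {
        let text = transcription.formattedString.lowercased()
        if expectedPhrases.contains(text) { return text }
    }
    return nil  // "it didn't sound like either of those!"
}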
I'm using the Mirth Connect listener and so can receive HL7 XML fine (apparently). I've been asked, though, if I am able to receive CCD messages.
Looking at Wikipedia, "The CCD specification is a constraint on the HL7 Clinical Document Architecture (CDA) standard". To me, that says I can at least receive the message via my normal process. Parsing the message could be something altogether different, though.
Can anyone tell me whether or not I am correct in this reading of the description? Is Mirth going to have any trouble receiving the CCD message/s?
Thanks.
The answer is yes, mostly. Below is an example of how to set up to receive a CDA message.
The real issue comes into play depending on how you need to receive the message and what needs to be done with it. CDA and HL7 v3 messaging are not as trivial as HL7 v2 (your typical pipe-delimited HL7 message). The message structures are highly complex and will require a lot of learning. Additionally, CDA messages are not transferred over the MLLP protocol like HL7 v2; I have generally seen people transfer these messages using the XDS profiles. So, depending on how you need to receive the message, there may be some additional work to be done.
I believe the paid/licensed version of Mirth offers some components to aid with CDA/HL7 v3 messages, but it is not included in the OSS version.
Receiving a CDA Message in Mirth
Mirth would have no issue receiving the XML message. Just make sure to set the data type to XML in the channel.
From there, you can set up your source and destination. If you need to work with the XML of the CDA, make sure to provide a sample CDA in the message templates section of the message transformer. Once you do that, the message should show up in the Message Trees section.
I have an Erlang program which generates data. This data needs to be transferred via UDP to a non-Erlang program for further processing. I already have this part working - sending the data via UDP and receiving it on the other, non-Erlang side.
Here's the problem. The data (Erlang terms like tuples containing lists) doesn't seem to be able to go over "as is" (i.e. I can't just send arbitrary Erlang terms). It apparently needs to be converted to either text or binary first. Converting to binary seems easy enough with a BIF I found. The problem is, binary gobbledygook comes out the other side, and I don't know any easy way to decode it (the other side is non-Erlang).
Barring someone telling me some easy way to decode binary gobbledygook on the other side, I'd like the data to be sent as a simplistic string representation of the terms - for instance a tuple like this:
{[1,2,3],[4,5,6]}
sent like this:
"{[1,2,3],[4,5,6]}"
I haven't seen any such BIF, i.e. a "convert_term_to_ascii/1" or the like. I know I could scan the term and send token representations of it, but I don't want to do that - decoding that on the other side is just a pain I don't want to deal with.
I know I'm not the first, second, or third person to have this problem. It has to be fairly common. How is it normally dealt with?
Can someone point me to some resource showing me how to either 1) convert binary gobbledygook to ASCII (needed on the non-Erlang side), or 2) straightforwardly convert terms to a string (needed on the Erlang side)?
Or, tell me how I'm wrong and how I should really be doing this?
Thanks.
1) You can convert any term to a string using io_lib:format/2 and flattening the result:
R = io_lib:format("~p", [YourTermHere]),
lists:flatten(R)
2) You might look at the Erlang external term format; many other languages have libraries to encode/decode that format. In Erlang you can encode any term with term_to_binary/1 (and decode with binary_to_term/1).
I'd recommend converting the Erlang terms into JSON with one of the known libraries (I've heard good things about rfc4627). It would be a trivial task to convert the JSON back on any non-Erlang platform, I guess.
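For example, if the Erlang side maps the tuple {[1,2,3],[4,5,6]} to the JSON array [[1,2,3],[4,5,6]] (one possible mapping, since tuples have no direct JSON equivalent), the non-Erlang side can decode it with any standard JSON library. A sketch of the receiving end in Swift:

import Foundation

// Decode the UDP payload as JSON, assuming the Erlang sender maps
// the tuple {[1,2,3],[4,5,6]} to the JSON array [[1,2,3],[4,5,6]].
let payload = Data("[[1,2,3],[4,5,6]]".utf8)

if let lists = try? JSONDecoder().decode([[Int]].self, from: payload) {
    print(lists[0])  // [1, 2, 3]
    print(lists[1])  // [4, 5, 6]
}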
My very old HL7 parser has just hit a snag, as it is now getting some messages with a ZDS segment present. It was easy to fix by adding a ZDS object to my parser, but I am trying to find out what the segment is used for. Googling hasn't helped much. This is a sample:
ZDS|PERFORM|p0001236^PATEL^ATEST^^^^^^HHB_INOP_PRSNL^^^^OTHER|20100714101800|CD:653
ZDS|TRANSCRIBE|p0001236^PATEL^ATEST^^^^^^HHB_INOP_PRSNL^^^^OTHER|20100714101800|CD:653
ZDS|SIGN|p0001236^PATEL^ATEST^^^^^^HHB_INOP_PRSNL^^^^OTHER|20100714101912|CD:653
So, I'm interested in what each field is, though looking at this sample data, it seems I wouldn't lose much by just dropping the whole segment.
In HL7, all segments that begin with the letter Z are considered custom and are not defined further by the HL7 standard. You will need to find out what system is responsible for generating these ZDS segments and ask the owners of that system to provide you with their specification.
As Scott said, "Z" segments are custom and can vary from vendor to vendor. In the Cerner realm, however, ZDS segments are typically used for "Document Succession" purposes -- a means of document version tracking and synchronization between two cooperating systems.
The ZDS segment is used to communicate document endorsement information (actions done or to be done) in Unsolicited Document Results. Only a specific Millennium solution uses it, so if you don't need it, just ignore it.