Modality for radiology orders in ORM^O01 message - HL7

Our DICOM Modality Worklist server currently receives HL7 ORM^O01 orders from our hospital RIS. To map each order to a modality, we use the OBR-24 field of the message; the codes contained in OBR-24 are mapped to AE titles and modalities by a lookup table in the MWL server. The current OBR-24 values look like "LOC_STATION", where LOC is the location (at the building level) of the scheduled station and STATION is the scheduled station for the order.
Now a need has arisen to direct the same incoming message feed to another system that prefetches prior studies from a slower long-term archive based on the orders. An important input parameter for the prefetch engine's rules is the modality of the ordered study. We would therefore like to add the DICOM modality code to the incoming order messages, since we do not want to duplicate the entire lookup table system and manage it separately in two places.
What would be the best field for this kind of information within the ORM^O01 structure as defined in v2.3.1 of the HL7 standard? I have skimmed the standard and gone through our MWL server vendor's reference materials, but the closest thing I have found is the same OBR-24 field, which is already in use in our solution. Or should we look at implementing some kind of custom Z-segment?
The situation is additionally complicated by the fact that we are an independent PACS service provider; we do not control the development of the HIS/RIS software in the hospitals, and mostly we have to integrate with existing systems with minimal modifications on their side. It is therefore quite difficult to change or move any of the existing fields in our messaging standard, but it is easier to introduce new, unused fields for new purposes.

IHE Radiology Technical Framework - Volume 2 (RAD TF-2): Transactions, Appendix B: HL7 Order Mapping to DICOM MWL also does not specify a recommended mapping for the DICOM tags
(0040,0001) Scheduled Station AE Title
(0008,0060) Modality.
In our ORM^O01 generator we use the placer fields and filler fields (HL7 items #00251-#00254, OBR-18..OBR-21) for the application entity title, and the Diagnostic Service Section ID (HL7 item #00257, OBR-24) for the modality code.
You can place your current routing information into the Receiving Facility field (MSH-6) and thus free up OBR-24 for another use.
MSH-6 (Receiving Facility, item #00006) was originally meant to represent part of the "receiver's address" - here, the "LOC_STATION". While MSH-5 indicates your PACS service address, MSH-6 might be used to designate where the order should go next. This way you would not need to put the same information into OBR-24 again, and you can use OBR-24 for just the modality code.
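A minimal sketch of what the receiving side could then do, assuming MSH-6 carries the LOC_STATION routing code and OBR-24 carries the modality code (pure string splitting, no HL7 library; the lookup table contents and AE titles below are made-up assumptions):

    # Sketch: resolve the scheduled AE title and modality from a remapped
    # ORM^O01 message. Assumes a single OBR segment per message.
    AE_TITLE_BY_STATION = {        # illustrative lookup table
        "MAIN_CT1": "CT1_MAIN_AET",
        "WEST_MR2": "MR2_WEST_AET",
    }

    def resolve_order(message: str):
        segments = {s.split("|", 1)[0]: s.split("|")
                    for s in message.split("\r") if s}
        # MSH-1 is the field separator itself, so MSH-6 sits at index 5.
        loc_station = segments["MSH"][5]
        # In other segments index 0 is the segment name, so OBR-24 is index 24.
        modality = segments["OBR"][24]
        return AE_TITLE_BY_STATION[loc_station], modality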
It should be OK to place it nearly anywhere, as long as you document it in your conformance statement so that the admins of the interface engines can define the corresponding mapping.
If you cannot change or influence the incoming message format, then you may find a universal field remapping service useful, applied before the incoming messages are processed or passed on to DICOM clients.
For an example of what I mean, look at the XSLT mapping script used by the dcm4che.org open-source DICOM Clinical Data Manager system, which has HL7/PACS/DICOM interfaces built in. When an HL7 v2 message arrives, it is translated into its equivalent XML representation, transformed by a vendor-specific XSLT script, and then pushed into the DICOM database storage. One of the default ORM^O01 mappings is the file orm2dcm.xsl in the folder https://svn.code.sf.net/p/dcm4che/svn/dcm4chee/dcm4chee-arc/trunk/dcm4jboss-hl7/src/etc/conf/dcm4chee-hl7
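If an XSLT pipeline like that is not an option, the same remapping can be sketched as a small shim in front of the downstream consumers. This is only an illustration of the idea, not the dcm4che implementation; the station-to-modality table and the field positions are assumptions:

    # Sketch: rewrite an incoming ORM^O01 before forwarding it. Moves the
    # "LOC_STATION" routing code from OBR-24 into MSH-6 and replaces OBR-24
    # with the DICOM modality code looked up for that station.
    MODALITY_BY_STATION = {        # assumed to mirror the MWL server's table
        "MAIN_CT1": "CT",
        "WEST_MR2": "MR",
    }

    def remap_orm(message: str) -> str:
        segments = [s.split("|") for s in message.split("\r") if s]
        station = None
        for fields in segments:
            if fields[0] == "OBR" and len(fields) > 24:
                station = fields[24]                       # e.g. "MAIN_CT1"
                fields[24] = MODALITY_BY_STATION[station]  # now "CT", "MR", ...
        for fields in segments:
            if fields[0] == "MSH" and station:
                fields[5] = station                        # MSH-6 carries routing
        return "\r".join("|".join(f) for f in segments)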

Related

How to standardize city names inserted by users

I need to write a small ETL pipeline to move some data from a source database to a target database (a data warehouse) in order to perform some analysis on the data.
Among those data, I need to clean and conform city names. Cities are entered manually by international users; consequently, a single city can have multiple names (for example, London or Londra).
My source database contains not only big cities but also small villages.
If I do not standardize the city names, our analysis could be nonsensical.
What is the best practice for standardizing city names in my target database? Do you have any ideas or suggestions?
Thank you
The only reliable way to do this is to use commercial address validation software - preferably in your source system when the data is being created, but it could also be integrated into your data pipeline processes.
Assuming you can't afford/justify the use of commercial software, the only other solution is to create your own translation table, i.e. a table that holds the values that are entered and the values you want them translated to.
While you can build this table from historic data, there will always be new values that are not in the table, so you would need a process to identify these, add the new records to your translation data, and then fix the affected records. You would also need to accept that there would be un-cleansed data in your warehouse for a period of time after each data load.
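A minimal sketch of that translation-table step in Python, where the table contents and the review queue are illustrative assumptions:

    # Sketch: standardize city names via a translation table, and capture
    # unknown values for later review instead of silently passing them through.
    TRANSLATIONS = {          # built from historic data; grows over time
        "londra": "London",
        "london": "London",
        "köln": "Cologne",
    }

    def standardize(city: str, review_queue: list) -> str:
        key = city.strip().lower()
        if key in TRANSLATIONS:
            return TRANSLATIONS[key]
        review_queue.append(city)   # un-cleansed value: flag it, load as-is
        return city

    pending = []
    cleaned = [standardize(c, pending) for c in ["Londra", "Lisboa"]]
    # cleaned == ["London", "Lisboa"]; "Lisboa" is now queued for review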

EventStoreDB: temporal queries

Regarding the question asked here:
Suppose we have ProductCreated and ProductRenamed events, both of which contain the title of the product. Now we want to query EventStoreDB for all events of type ProductCreated and ProductRenamed with a given title. I want all these events in order to check whether any product in the system has been created with, or renamed to, the given title, so that I can throw a duplicate-title exception in the domain.
I am using MongoDB to create UI reports from all the published events, and everything is fine there. But for checking some invariants, like uniqueness of values, I have to query the event store for certain events matching some criteria and, by iterating over them, decide whether there is a product created with the same title that has not been renamed, or a product renamed to the same title.
For such queries, the only mechanism the event store provides is creating a one-time projection with the proper JavaScript code, which filters and emits the required events to a new stream. Then all I have to do is fetch the events from the newly generated stream that the projection fills.
Now the odd thing is that projections are great for subscriptions and for generating new streams, but they seem ill-suited to real-time queries. Immediately after I create a projection through the HTTP API, I check the new resulting stream for the query result, but it seems the workers have not yet had a chance to produce the result, and I get a 404 response. Only after waiting a few seconds does the new stream pop up and get filled with the result.
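Concretely, the flow just described looks roughly like this. This is a sketch using Python's requests against the v5-era HTTP API; treat the exact routes, query parameters, and default credentials as assumptions to verify against your server version:

    # Sketch: create a one-time projection, then poll the emitted stream.
    import time
    import requests

    ES = "http://localhost:2113"
    QUERY = """
    fromAll().when({
        ProductCreated: function (s, e) {
            if (e.data.title === 'Foo') emit('title-foo', 'Match', e.data);
        },
        ProductRenamed: function (s, e) {
            if (e.data.title === 'Foo') emit('title-foo', 'Match', e.data);
        }
    });
    """

    # Route and parameters assumed from the HTTP API docs.
    requests.post(
        ES + "/projections/onetime?name=title-check&type=js&enabled=true&emit=true",
        data=QUERY, auth=("admin", "changeit"))

    # Poll the emitted stream; 404 means the projection has not created it yet.
    for _ in range(30):
        r = requests.get(ES + "/streams/title-foo",
                         headers={"Accept": "application/vnd.eventstore.atom+json"},
                         auth=("admin", "changeit"))
        if r.status_code == 200:
            break          # the stream exists and holds the query result
        time.sleep(1)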
There are too many things wrong with this approach:
First, it seems that if the event store is filled with millions of events across many streams, it won't be able to process and filter all of them into the resulting stream immediately. It does not even create the stream immediately, let alone populate it, so I have to wait for some time and check for the result, hoping the projection is done.
Second, I have to fetch multiple times and issue multiple HTTP GET requests, which seems slow. The new JVM client is not ready yet.
Third, I have to delete the resulting stream after I'm done with the result; failing to do so will leave the event store with millions of orphaned query-result streams.
I wish I could pass the JavaScript to some API and get the result page by page, like querying MongoDB, without worrying about the projection, new streams, and timing issues.
I have seen a query section in the Admin UI, but I don't know what it's for, and unfortunately the documentation doesn't help much.
Am I expecting the event store to do something impossible?
Do I have to create a read model inside the bounded context for such checks?
I am using my events to rehydrate the aggregates and would like to use the same events for such simple queries without bringing in other techniques.
I believe it would not be a separate bounded context, since the check you want to perform belongs to the same bounded context where your Product aggregate lives. So the projection that is solely used to prevent duplicate product names would be part of the same context.
You can indeed use a custom projection to do the check, but I believe the complexity of such a solution would be higher than that of a simple read model in MongoDB.
It is also fine to use an existing projection, if you have one, to do the check. It might not be what you would otherwise prefer if the aim of the existing projection is to show things in the UI.
For the collection used for the duplicates check, you can limit the document schema to the id alone (a string), which would be the product title. Since collections are automatically indexed by the id, you won't need any additional indexes to support the duplicate-check query. When a product gets renamed, you'd delete the document for the old title and add a new one.
Again, there will be a small time window during which a duplicate can slip in. It's then up to the business to decide whether the concern is real (most of the time it's not) and what the consequences would be if it happened one day. You'd be able to find a duplicate quite easily when projecting events, and you can decide what to do when it happens.
Practically, when you have such a projection, all it takes is building a simple domain service, bool ProductTitleAlreadyExists.
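A minimal sketch of such a read model and domain service with pymongo; the collection name and event shapes are assumptions:

    # Sketch: title-uniqueness read model keyed on _id (the product title).
    from pymongo import MongoClient

    titles = MongoClient()["readmodels"]["product-titles"]

    def product_title_already_exists(title: str) -> bool:
        # _id is indexed automatically, so this is a cheap point lookup.
        return titles.find_one({"_id": title}) is not None

    def on_product_created(event: dict) -> None:
        titles.insert_one({"_id": event["title"], "productId": event["productId"]})

    def on_product_renamed(event: dict) -> None:
        titles.delete_one({"_id": event["oldTitle"]})   # free the old title
        titles.insert_one({"_id": event["newTitle"], "productId": event["productId"]})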

How to automate the 'B2B manual data mapping' process using ibm-watson-cognitive and ibm-bluemix

For example, there are two companies,
Goodyear and Toyota. Both want to do data mapping, as they are doing business together and want to exchange information.
Goodyear has data in the X12 format (for example) and Toyota has the EDIFACT format (for example).
And, for example, Goodyear has fname while Toyota has f_name,
but as a standard both belong to the class first_name. Right now this process is very manual and time-consuming; developers do the mapping one field at a time. My question is how we can solve and automate this. ibm-watson-cognitive and ibm-bluemix have some APIs, but I am not sure how to solve it with them.

Desire2Learn Org Unit ID

What is the API call for finding the Org Unit ID of a particular course? I am trying to pull grades and a class list from the API, but I cannot do it without the Org Unit ID.
There are potentially a few ways to go about this, depending on the kind of use case you're in. Firstly, you can traverse the organizational structure to find the details of the course offering you're looking for. Start from the organization's node (the root org) and use the route that retrieves an org's descendants to work your way down; you'll want to restrict this call to course-offering type nodes only (org unit type ID 3 by default). This process will almost certainly require fetching a large amount of data and then parsing through it.
If you know the course offering's Code (the unique identifier your organization uses to define course offerings), or the name, then you can likely find the offering in the list of descendants by matching against those values.
You can also make this search at a smaller scope in a number of ways:
If you already know the Org Unit ID for a node in the structure that's related to the course offering (for example, the Department or Semester that's a parent of the course offering), you can start your search from that node and you'll have a lot fewer nodes to parse through.
If your calling user context (or a user context that you know, and can authenticate as) is enrolled in the course offering, or in a known parent org (like a Department), then you can fetch the list of all that user's enrollments, and parse through those to find the single course offering you're looking for. (Note that this enrollments route sends back data as a paged result set, and not as a simple JSON array, so you may have to make several calls to work your way through a number of data pages before finding the one you want.)
In all these scenarios, the process will end up with you retrieving a JSON structure that will contain the Org Unit ID which you can then persist and use directly later.
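As an illustration of the enrollments-based approach, here is a hedged sketch in Python. The route and version string follow the Valence API conventions, but the exact version, the response shape, and especially the required request signing are things to verify; authentication is omitted and assumed to be handled by the session:

    # Sketch: page through the calling user's enrollments to find a course
    # offering's Org Unit ID by its Code. D2L requires signed requests, so
    # "session" is assumed to be pre-configured to handle authentication.
    import requests

    session = requests.Session()            # assumed to add auth signing
    BASE = "https://your.brightspace.host"  # hypothetical host
    ROUTE = "/d2l/api/lp/1.0/enrollments/myenrollments/"
    COURSE_OFFERING_TYPE_ID = 3             # default type id for course offerings

    def find_org_unit_id(code: str):
        bookmark = None
        while True:
            params = {"bookmark": bookmark} if bookmark else {}
            page = session.get(BASE + ROUTE, params=params).json()
            for item in page["Items"]:
                ou = item["OrgUnit"]
                if (ou["Type"]["Id"] == COURSE_OFFERING_TYPE_ID
                        and ou["Code"] == code):
                    return ou["Id"]
            if not page["PagingInfo"]["HasMoreItems"]:
                return None                 # exhausted all pages
            bookmark = page["PagingInfo"]["Bookmark"]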

User-adjustable data structures

Assume a data structure Person used for a contact database. The fields of the structure should be configurable, so that users can add user-defined fields and even change existing fields. So basically there should be a configuration file like:
FieldNo  FieldName  DataType  DefaultValue
0        Name       String    ""
1        Age        Integer   "0"
...
The program should then load this file, manage the dynamic data structure (dynamic not in a "changes during runtime" way, but in a "the user can change it via the configuration file" way) and allow easy, type-safe access to the data fields.
I have already implemented this, storing information about each data field in a static array and storing only the changed values in the objects.
My question: is there a pattern describing this situation? I guess I'm not the first to run into the problem of creating a user-adjustable class.
Thanks in advance. Tell me if the question is not clear enough.
I've had a quick look through "Patterns of Enterprise Application Architecture" by Martin Fowler, and the Metadata Mapping pattern describes (at a quick glance) what you are describing.
An excerpt...
"A Metadata Mapping allows developers to define the mappings in a simple tabular form, which can then be processed bygeneric code to carry out the details of reading, inserting and updating the data."
HTH
I suggest looking at the various Object-Relational patterns in Martin Fowler's Patterns of Enterprise Application Architecture, available here. This is the list of patterns it covers here.
The best fit for your problem appears to be Metadata Mapping here. There are other patterns as well, such as Mapper.
The normal way to handle this is for the class to have a list of user-defined records, each of which consists of a list of user-defined fields. The configuration information for this can easily be stored in a database table containing a type id, field type, etc. The actual data is then stored in a simple table with the data represented only as (object id + field index)/string pairs - you convert the strings to and from the real type when you read or write the database.
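A small sketch of that idea in Python, using the field configuration from the question's example; the converter map and class layout are illustrative assumptions:

    # Sketch: a config-driven record that stores only changed values and
    # converts them to/from strings, per the configuration-table approach.
    CONVERTERS = {"String": (str, str), "Integer": (int, str)}

    FIELD_CONFIG = [  # (field_no, name, data_type, default) from the config file
        (0, "Name", "String", ""),
        (1, "Age", "Integer", "0"),
    ]

    class Record:
        def __init__(self):
            self._changed = {}   # field index -> raw string value

        def get(self, name):
            # Raises StopIteration for unknown fields; fine for a sketch.
            no, _, dtype, default = next(f for f in FIELD_CONFIG if f[1] == name)
            raw = self._changed.get(no, default)
            return CONVERTERS[dtype][0](raw)   # string -> real type

        def set(self, name, value):
            no, _, dtype, _ = next(f for f in FIELD_CONFIG if f[1] == name)
            self._changed[no] = CONVERTERS[dtype][1](value)  # real type -> string

    p = Record()
    p.set("Age", 42)
    assert p.get("Age") == 42 and p.get("Name") == ""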
