Correction of historical ADT Data in HL7 V2

Just out of curiosity
How do you transmit corrections of historic ADT data in HL7 V2, e.g. a patient transfer, if you do not have a ZBE segment for historic movements as in Germany?
Do you cancel all relevant events and build a new patient history, or do you use some of the already defined fields of the ADT message segments to mark the event that should be corrected?
How do you deal with multiple transfers, e.g. wardA --> wardB --> wardA --> wardB?

ADT^A02 would be the "correct" way to transfer a patient from one room/bed to another. However, as #Sid stated, I cannot recall a time when I've seen an ADT^A02 implemented in the real world.
This is most likely due to status/demographics changing while a patient is transferred. Most of the time there will be a specific reason why the transfer is happening: moving from outpatient to inpatient (or vice versa), a change in facility, etc. It's much easier to bunch this information up into one ADT^A08 than to send both an ADT^A02 and an ADT^A08 to satisfy these constraints.
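For illustration, a stripped-down ADT^A08 carrying the patient's new location in PV1-3 (Assigned Patient Location, point of care^room^bed) might look like the following; every identifier and value here is made up:

MSH|^~\&|HIS|HOSP|RECV|HOSP|202401151200||ADT^A08|MSG00001|P|2.5
EVN|A08|202401151200
PID|1||123456^^^HOSP^MR||Doe^John
PV1|1|I|WARDB^102^B

A cancellation of the transfer (ADT^A12) would look very similar, with the message type and event code changed accordingly.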
If the transfer information is erroneous, as you've stated in the comment above, then a transfer cancellation (ADT^A12) will need to be triggered. Again, this is another one that I've rarely seen used, but if the transfer was done accidentally or incorrectly, you wouldn't want to keep that information in the system. You'd want to get rid of it and only have the correct information updated.
Since your Health/Hospital Information System (HIS) is usually the same system in which your patient census is done, blasting this cancellation message out to every individually connected system isn't usually worth doing, because most specialty applications attached to the HIS couldn't care less about a patient's previous room/bed; they only want the most current information. Because of this, again, an ADT^A08 is more widely used.
Previous room/bed information is usually kept by the HIS itself at the application level. When a patient's room/bed is updated, the HIS copies the current PatientRoom or PatientBed values into database columns like PreviousRoom or PreviousBed (I've seen this implemented down to the "previous-previous" room and bed), and then writes the new room/bed to PatientRoom or PatientBed.
This is done so that the HIS can process cancellations of transfers locally. Most of the time these cancellations are done directly in the HIS by a user, and then an ADT^A08 is sent out to the appropriate interoperable applications using the new room/bed, with the connected applications none the wiser about what the previous room and bed was, or that the transfer may have been cancelled and redone. If a user cancels a transfer in the HIS, it reverts to the values in the PreviousRoom/PreviousBed columns and updates accordingly.
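A minimal sketch of that bookkeeping, with hypothetical field names rather than any particular HIS schema:

# Hypothetical illustration of how a HIS might track the previous room/bed
# so that a transfer cancellation can be handled locally.
class PatientVisit:
    def __init__(self, room, bed):
        self.room, self.bed = room, bed
        self.previous_room, self.previous_bed = None, None

    def transfer(self, new_room, new_bed):
        # Remember where the patient was before overwriting the current location.
        self.previous_room, self.previous_bed = self.room, self.bed
        self.room, self.bed = new_room, new_bed

    def cancel_transfer(self):
        # Revert to the stored previous location; downstream systems simply get
        # another ADT^A08 with whatever the "current" location now is.
        self.room, self.bed = self.previous_room, self.previous_bed

visit = PatientVisit("WARDA", "1")
visit.transfer("WARDB", "2")
visit.cancel_transfer()  # back to WARDA, bed 1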
Hope this didn't confuse you too much.
TL;DR - The HIS is predominantly the only system that cares about previous room/bed data. ADT^A08 is what's used the majority of the time to update patient room/bed information, even though that's not what the standard prescribes.

Related

How to handle SAP Kapsel Offline app OData conflicts properly?

I built an app that is able to store OData offline by using the SAP Kapsel plugins.
It is more or less the same as what SAP Web IDE generates, and similar to the apps in this example: https://blogs.sap.com/2017/01/24/getting-started-with-kapsel-part-10-offline-odatasp13/
Now I am at the point of checking the error resolution potential. I created a sync conflict by changing data on the server after the offline database had been stored, then changing something in the app and starting a flush.
As mentioned in the documentation, I can see the error in the ErrorArchive and can also see some details. But what I am missing is information about the "current" data on the server.
In the error details I can only see the data on the device, not the data that was changed on the server.
For example:
The device loads some names into the offline store
The device goes offline
User A changes some names
User B changes one of these names directly online
User A comes online again and starts a sync
User A is now informed about the entity that was changed, BUT:
not about the content user B entered;
I just see the "offline" data.
Is there a way to see both the "current" and the "offline" data in a kind of compare view?
Please also note that the server communication is done by the Kapsel plugin and not with normal AJAX calls. Those could be an alternative, but I am wondering whether there is a smarter way supported by the API.
Meanwhile I figured out how to load the online data manually.
This can be done by switching the HTTP handler back to the normal one:
// remove the offline-enabled HTTP client so reads go directly to the server
sap.OData.removeHttpClient();
// re-apply it afterwards so the app goes back to using the offline store
sap.OData.applyHttpClient();
Anyhow, this does not look like a proper solution, and I also have an issue with the conflict log itself: it must be deleted before any refresh can be applied.
I could not find any proper documentation for that. ETag handling is also hardly described in the SAPUI5 and SAP Kapsel documentation.
This question is a really tricky one, due to its implications. I understand that you are simulating a synchronization error due to concurrent modification, and want to know if there is a way for the client to obtain the "current" server state in order to give the user a means to compare the local and server state.
First, let me give you the short answer: No, there is no way for the client to see the current server state "for reference" via the Offline APIs when there are synchronization errors. Doing an online query as outlined above might work, but it certainly is a bad idea.
Now for the longer answer, which explains why this is not necessarily a defect and why I said there are quite a few implications to the answer.
Types of Synchronization Errors
We distinguish several kinds of synchronization errors, and in this context we are clearly dealing with business-related issues. There are two subtypes here: those that the user can correct, e.g. validation errors, and those that are issues in the business process itself.
If the user violates the input range, e.g. by putting a negative price for a product, the server would reply with the corresponding message: "-1 is not a valid input value for 'Price'". You, as a developer, can display such messages to the user from the error archive, and the ensuing fix is indeed a very easy one.
Now when we talk about concurrent modification, things get really, really nasty. In fact, I like to say that in this case there is an issue with the business process, because on one hand we allow data to get out of sync, and on the other hand the process allows multiple users to manipulate the same piece of information. How all relevant users should be notified and re-synchronized is no longer just a technical detail, but in fact a new business process. There is simply no way to generically devise how to handle this case. In most cases, it would involve back-office experts who need to decide how the changes should be merged.
A Better Solution
Angstrom pointed out that there is no way to manipulate ETags on the client side, and you should in fact not even think about it. ETags work like version numbers in optimistic locking scenarios, and changing the ETag basically means "Just overwrite what's on the server". This is a no-go in serious scenarios.
An acceptable workaround would be the following:
Make sure the server returns verbose error messages so that the user can see what happened and what caused the conflict.
If that does not help, refresh the data. This will get you an updated ETag, and merge the local changes into the "current" server state, but only locally. "Merging" really means that local changes always overwrite remote changes.
The user now has another opportunity to review the data and can submit it again.
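To make the ETag mechanics concrete, here is a minimal Python sketch of optimistic locking over plain HTTP; the URL and payload are invented, and this deliberately ignores the Kapsel offline store, it only shows what "refresh, then resubmit with the new ETag" means at the protocol level:

import requests

URL = "https://example.com/odata/Products(1)"  # hypothetical entity URL

def update_price(new_price):
    # Read the entity first; the server returns its current ETag (version stamp).
    resp = requests.get(URL, headers={"Accept": "application/json"})
    resp.raise_for_status()
    etag = resp.headers["ETag"]

    # Try to update, sending the ETag in If-Match. If someone else changed the
    # entity in the meantime, the server answers 412 Precondition Failed instead
    # of silently overwriting their change.
    update = requests.patch(
        URL,
        json={"Price": new_price},
        headers={"If-Match": etag},
    )
    if update.status_code == 412:
        # Conflict: re-read ("refresh") and let the user review before resubmitting.
        return False
    update.raise_for_status()
    return True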
A Good Solution
Better is not necessarily good, so here is what you should really do: never let concurrent modification happen in the first place, because it is really expensive to handle. This implies that it is not the developer who should address this issue; the business needs to change the process.
The right question to ask is: "When you replicate data in a distributed system, why do you allow it to be modified concurrently at all?" Typically, stakeholders will not like this kind of question, and the appropriate reaction is to work out a conflict resolution process together with them. Only then will they realize how expensive fixing that kind of desynchronization is, and more often than not they will see that adjusting the process is far cheaper than insisting on yet another back-office process to fix the issues it causes. Even if they insist that concurrent modification is really needed, they will now understand that it is not your task to sort this out and that they need to invest in a conflict resolution process.
TL;DR
There is no way to compare the client state against the current server state on the client, but you can do a refresh to keep the local changes and get an updated ETag. The real solution, however, is to rework the business process, because this is no longer a purely technical issue.
The default behavior is that SMP or HCPms detects such conflicts via ETags. On the client side there is no API to manipulate ETags in case of conflicts. A potential way to implement a kind of diff view on the device would work like this (see the sketch after this list):
Show the errors
Cache the errors (maybe only in memory?)
Delete the errors
Do a refresh of the database
Build a diff view from the current data and the cached errors
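Sketched in Python purely to show the data flow (the Kapsel client itself is JavaScript, and the entity and field names are made up): after caching the failed request's payload and refreshing the store, a field-by-field comparison against the re-read entity gives you the material for a compare view.

def build_diff(cached_local, refreshed_server):
    """Return {field: (local_value, server_value)} for every field that differs."""
    diff = {}
    for field in set(cached_local) | set(refreshed_server):
        local, server = cached_local.get(field), refreshed_server.get(field)
        if local != server:
            diff[field] = (local, server)
    return diff

# Hypothetical example: the payload cached from the ErrorArchive entry vs. the
# entity read again after the refresh.
cached_local = {"CustomerName": "Mayer", "City": "Berlin"}
refreshed_server = {"CustomerName": "Meier", "City": "Berlin"}
print(build_diff(cached_local, refreshed_server))  # {'CustomerName': ('Mayer', 'Meier')}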
The idea with
sap.OData.removeHttpClient();
sap.OData.applyHttpClient();
could also work, but it may be very tricky and introduce side effects.
For example, some requests might be triggered against the "wrong" backend.

How to use GTFS to record or analyse operating time series?

I may be wrong, but GTFS is mainly used to plan or describe a public transportation system, and GTFS-realtime is mainly used to make realtime operation data available. I think I need something that is not covered by either of these formats.
I need to record operational data such as how many passengers were transported, how much they paid, when each trip left its initial stop, etc. This data must be recorded daily and kept in a database for later use.
Does GTFS somehow address this?
Not really. Using a GTFS and a GTFS-realtime feed together, you should be able to identify when a trip departed from its origin and whether it was on time. If your transit agency includes "alert" data in its GTFS-realtime feed, you may also be able to identify exceptional events that affect particular trips, such as roadwork or collisions.
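As a sketch of how far the feeds get you, this is roughly how you could pull departure information for trips out of a GTFS-realtime feed with the Python gtfs-realtime-bindings package; the feed URL is a placeholder, and which fields are actually populated depends entirely on the agency:

import requests
from google.transit import gtfs_realtime_pb2

FEED_URL = "https://example.com/gtfs-rt/trip-updates"  # placeholder

feed = gtfs_realtime_pb2.FeedMessage()
feed.ParseFromString(requests.get(FEED_URL).content)

for entity in feed.entity:
    if not entity.HasField("trip_update"):
        continue
    trip = entity.trip_update.trip
    for stu in entity.trip_update.stop_time_update:
        # stop_sequence 1 is typically the trip's origin stop
        if stu.stop_sequence == 1 and stu.HasField("departure"):
            print(trip.trip_id, stu.departure.time, stu.departure.delay)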
Beyond that, I think you will have to look for other sources for the data you need (most likely the transit agency itself).
GTFS data describes the static features of a transit network, including its stops, routes and timetables. A GTFS-realtime feed provides live, operational data, but it is the sort of data riders can use to know when their bus will arrive, not the data transit operators track internally, like ridership and fare revenue.

The best way to handle erratic data on iOS

I am working on an application where I have a connection to a database. The database contains from 300MB to 4GB worth of data, as each customer has their own database. The issue I am having is in gathering the data: because of the potential database size, just downloading and storing the information locally isn't possible. The data can get quite complex and can vary. For example:
A customer has a Job and they want to search for that job from the app.
I then fetch a list of jobs matching the search criteria.
The customer sees the job they want to view and I start the gathering process.
This job can potentially touch many tables, sometimes repeatedly.
There is the jobs table and a relational table to map the job to a person. Then there is another table that contains non-customer relational information, then there are calendar events associated with the job, which in turn can involve different people. Then there are emails attached to the job, which in turn can bring in additional people and events.
So I have a working model that gathers all of this information. The problem I have is that I cannot figure out a great method of signaling to my view that the data is completely downloaded. My initial thought was to use the NotificationCenter to message when the certain parts of the task were finished, allowing the core Job object to notify the view when everything was complete.
I know this is a pretty generalized question, but I'm honestly stumped as to how to take an unknown number of table results and translate that into a notice that my app can actually use.
My initial recommendation would be Core Data. It's designed for this kind of problem. No, I'm not saying to download the entire database into Core Data. I'm saying to use Core Data to manage your object model, because that's what it's good at.
As you receive data from the server, compose it into NSManagedObjects and stick them in the data store. On the UI side, create an NSFetchedResultsController to keep you informed as the data updates asynchronously. You don't necessarily need to persist this store. You could just keep it in memory and throw it away whenever you're done with the query, but keeping it on disk could be a nice caching solution. Again, don't think of Core Data as "a local database." Think of it as a model persistence engine that you can query for objects.
One advantage of this model is that you can provide the best available data to the user as it becomes available. But say you really don't want to present the information until it's all available. That's fine, too. Just let the network side keep updating its context, and then only save it when everything's complete. That way NSFetchedResultsController gets a single atomic update. The nice thing about Core Data is that it has these concepts built in, so you can adjust your update strategy without requiring a massive redesign.
The Notification Center will work great for this.
Post the notification at logical points in your data load to trigger a UI update for your users.
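The part the question is really about, signaling "everything is downloaded" when the number of sub-fetches is not known up front, boils down to reference counting over outstanding requests. A minimal sketch of that idea in Python (on iOS you would express the same thing with a dispatch group or an operation queue; the names below are invented):

import threading

class JobLoader:
    """Tracks an unknown number of sub-fetches and fires one callback when all finish."""

    def __init__(self, on_complete):
        self._on_complete = on_complete
        self._outstanding = 0
        self._lock = threading.Lock()

    def begin_fetch(self):
        # Call right before kicking off a fetch (job, people, events, emails, ...).
        with self._lock:
            self._outstanding += 1

    def end_fetch(self):
        # Call from each fetch's completion handler. Fetches discovered along the
        # way (e.g. emails found on an event) call begin_fetch first, so the
        # counter only hits zero once the whole object graph has been loaded.
        with self._lock:
            self._outstanding -= 1
            done = self._outstanding == 0
        if done:
            self._on_complete()

loader = JobLoader(on_complete=lambda: print("job fully loaded, notify the view"))
loader.begin_fetch(); loader.end_fetch()  # one fetch starts and finishes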

iOS App Offline and synchronization

I am trying to build an offline synchronization capability into my iOS App and would like to get some feedback/advice from the community on the strategy and best practice to be followed to do the same. The app details are as follows:
The app shows a digital catalog to users and allows them to perform actions like creating and placing orders, among others.
Currently the app only works when online, and we have APIs for all actions like viewing the catalog, creating/placing orders which return JSON data.
We would like to provide offline/synchronization capability to users, through which users can view the catalog and create/place orders while offline, and when they come online the order details will be synchronized and updated to our server.
We would also like to pull the latest data from the server, and have the app keep itself up to date in case of catalog changes or order changes that happened at the Server while the app was offline.
Can you guys help me come up with the best design and approach for handling this kind of functionality?
I did something similar at the beginning of this year. After I read about NSOperationQueue and NSOperation, I took a straightforward approach:
Whenever an object is changed/added/etc. in my local database, I add a new "sync" operation to the queue, and I do not care whether the app is online or offline (I added a reachability observer which either suspends the queue or puts it back to work; of course, I re-queue operations if an error occurs, e.g. the network is lost during a sync). The operation itself reads/writes the database and does the networking. My view controllers use an NSFetchedResultsController (with delegate = self) to get callbacks on changes. In some cases I needed some extra local data (it was about counting objects), where I used NSManagedObjectContextObjectsDidChangeNotification.
Furthermore, I used multi-context Core Data, which sounded quite reasonable to use (I have only two contexts).
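The queueing idea itself is platform-neutral; here is a minimal Python sketch of "enqueue every local change, hold the queue while offline, re-queue on failure" (this is roughly what NSOperationQueue plus a reachability observer gives you on iOS; all names below are invented):

import queue, threading, time

sync_queue = queue.Queue()
online = threading.Event()  # set by a reachability observer when we are online

def enqueue_change(change):
    # Called whenever an object is created/changed/deleted locally.
    sync_queue.put(change)

def push_to_server(change):
    print("synced", change)  # stand-in for the real network call

def sync_worker():
    while True:
        change = sync_queue.get()
        online.wait()                # "suspend" the queue while offline
        try:
            push_to_server(change)
        except IOError:
            sync_queue.put(change)   # lost the network mid-sync: re-queue
            time.sleep(1)

threading.Thread(target=sync_worker, daemon=True).start()
online.set()
enqueue_change({"order_id": 1, "status": "new"})
time.sleep(0.5)  # give the toy worker a moment to print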
To get notified about changes from your server, I believe that iOS 7 has something new for you.
On the server side, you should read up a little on the actual approach you want to go for, e.g. Data Synchronization by Dan Grover or Developing Android REST Client Applications (of course, there are many more good articles out there).
Caution: you might be disappointed if you expect an easy solution. Your requirement is not unusual, but the solution may become more complex than you expect, depending on the "business rules" and other reasonable requirements. If you intelligently restrict your requirements, you may find a solution you can implement yourself; otherwise you may also consider using a commercial product.
I could imagine that if you design the business logic such that it takes the offline state into account and exposes it explicitly, you may find a solution you can implement yourself with moderate effort. What I mean by this is, for example: when a user creates an order, it is initially in a "not committed" state. The order is only committed when there is access to the server and the server gives the "OK" that this order can actually be placed by this user. The server may also deny the order, sending corresponding messages to the user.
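A sketch of what making that state explicit could look like; this is plain Python with invented names, and the only point is that "created offline" and "accepted by the server" are distinct states the rest of the app can reason about:

from enum import Enum

class OrderState(Enum):
    DRAFT = "draft"          # created locally, possibly while offline
    PENDING = "pending"      # queued for submission to the server
    COMMITTED = "committed"  # the server accepted the order
    REJECTED = "rejected"    # the server denied it; show its message to the user

class Order:
    def __init__(self, items):
        self.items = items
        self.state = OrderState.DRAFT
        self.server_message = None

    def apply_server_reply(self, reply):
        # reply is the (hypothetical) response received once the device is online.
        if reply["ok"]:
            self.state = OrderState.COMMITTED
        else:
            self.state = OrderState.REJECTED
            self.server_message = reply.get("message")

order = Order(items=["catalog-item-42"])
order.state = OrderState.PENDING  # queued while offline
order.apply_server_reply({"ok": False, "message": "item no longer available"})
print(order.state, order.server_message)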
There are probably quite a few subtle issues that may arise due to the requirement of eventual consistency.
See also this question, which contains pointers to commercial products; their web sites also give valuable information about the complexity of the problem and how it can be solved.

Recommendations on handling object status fields in rails apps: store versus calculate?

I have a rails app that tracks membership cardholders, and needs to report on a cardholder's status. The status is defined - by business rule - as being either "in good standing," "in arrears," or "canceled," depending on whether the cardholder's most recent invoice has been paid.
Invoices are sent 30 days in advance, so a customer who has just been invoiced is still in good standing, one who is 20 days past the payment due date is in arrears, and a member who fails to pay his invoice more than 30 days after it is due would be canceled.
I'm looking for advice on whether it would be better to store the cardholder's current status as a field at the customer level (and deal with the potential update anomalies resulting from potential updates of invoice records without updating the corresponding cardholder's record), or whether it makes more sense to simply calculate the current cardholder status based on data in the database every time the status is requested (which could place a lot of load on the database and slow down the app).
Recommendations? Or other ideas I haven't thought of?
One important constraint: while it's unlikely that anyone will modify the database directly, there's always that possibility, so I need to try to put some safeguards in place to prevent the various database records from becoming out of sync with each other.
The storage of calculated data in your database is generally an optimisation. I would suggest that you calculate the value on every request and then monitor the performance of your application. If the fact that this data is not stored becomes an issue for you, that is the time to refactor and store the value in the database.
Storing calculated values, particularly those that involve multiple tables, is generally a bad idea for the reasons you have mentioned.
When/if you do refactor and store the value in the DB then you probably want a batch job that checks the value for data integrity on a regular basis.
The simplest approach would be to calculate the current cardholder status based on data in the database every time the status is requested. That way you have no duplication of data, and therefore no potential problems with the duplicates becoming out of step.
If, and only if, your measurements show that this calculation is causing a significant slowdown, then you can think about caching the value.
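A sketch of that on-request calculation, written in Python rather than Ruby, with the thresholds taken from the business rule in the question; the field names are made up:

from datetime import date, timedelta

GRACE_DAYS = 30  # days past the due date before the membership is canceled

def cardholder_status(latest_invoice, today=None):
    """latest_invoice is assumed to expose .due_date (date) and .paid (bool)."""
    today = today or date.today()
    if latest_invoice.paid or today <= latest_invoice.due_date:
        return "in good standing"
    if today <= latest_invoice.due_date + timedelta(days=GRACE_DAYS):
        return "in arrears"
    return "canceled"

class Invoice:  # stand-in for the real invoice record
    def __init__(self, due_date, paid):
        self.due_date, self.paid = due_date, paid

print(cardholder_status(Invoice(date(2024, 1, 1), paid=False), today=date(2024, 1, 21)))
# -> "in arrears" (20 days past due)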
Recently I had a similar decision to make, and I decided to store the status as a field in the database. This is because I wanted to reduce SQL queries and it looks simpler. I chose to do it that way because I very often need to get this status, and calculating it is (at least in my case) a bit complicated.
A possible problem is that it can get out of sync, so I added after_save and after_destroy callbacks to the child model to keep it synchronized. And of course, if somebody modified the database in some other way, that would cause problems.
You can write a simple rake task that checks all statuses and corrects them if needed. You can run it from cron so you don't have to worry about it.
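The integrity check itself is just "recompute and compare"; a small sketch of it, again in Python for consistency with the examples above (in a Rails app this would live in the rake task, and every name here is hypothetical):

def check_statuses(cardholders, compute_status, logger=print):
    """Recompute each cardholder's status and fix any stored value that has drifted."""
    fixed = 0
    for holder in cardholders:
        expected = compute_status(holder.latest_invoice)
        if holder.status != expected:
            logger(f"correcting cardholder {holder.id}: {holder.status!r} -> {expected!r}")
            holder.status = expected  # in a real app, persist the corrected value
            fixed += 1
    return fixed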
