PHR (Personal Health Record) - CCD Development - hl7

We are developing a PHR app. The patient's information should go as a CCD to the clinic they want to visit. Should the CCD have only the narrative block, or should it also include the semantic (structured entry) block?

Here are the assumptions I am making about your question, along with my answer:
1) The PHR App has data that needs to be formatted as a CCD and sent to the clinic EMR.
2) The CCD that we are sending from the PHR to the clinic is primarily patient entered data. I presume it could also have data that was fed into it from other source systems.
The standard that EMR systems currently use and are certified to is the HL7 Implementation Guide for CDA R2: C-CDA Templates for Clinical Notes (US Realm), DSTU Release 2.1, published in August 2015. It contains the templates for several document types, including the CCD.
The CCD document requires several Sections within it, including:
Allergies and Intolerances Section (entries required) (V3)
Medications Section (entries required) (V2)
Problem Section (entries required) (V3)
Procedures Section (entries required) (V2)
Results Section (entries required) (V3)
Social History Section (V3)
Vital Signs Section (entries required) (V3)
As you can see, entries are required for all of the sections except the Social History Section. That means semantic data is required for every section that states "entries required". Since these sections are required, it is necessary to include both narrative and semantic data.
This would be desirable in any case, as most systems are now required to have the ability to parse the semantic data and add the semantic elements to the EMR as discrete data elements. Sending the semantic data will help the clinic EMR provide a unified view of the patient medical record that is more complete. Even though other Sections are optional and entries may be optional, it is still beneficial to the EMR to get the semantic data elements when possible.
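To make the narrative/semantic distinction concrete, here is a minimal sketch of what one "entries required" section looks like: the `<text>` element is the human-readable narrative block, and the `<entry>` element carries the machine-readable (semantic) data. The fragment is illustrative only, not a conformant C-CDA document; the template and code values are the commonly published ones for the Allergies section, but verify them against the IG.

```python
import xml.etree.ElementTree as ET

# Illustrative (non-conformant) C-CDA Allergies section fragment:
# <text> = narrative block, <entry> = semantic (coded) data.
section_xml = """
<section xmlns="urn:hl7-org:v3">
  <templateId root="2.16.840.1.113883.10.20.22.2.6.1"/>
  <code code="48765-2" codeSystem="2.16.840.1.113883.6.1" displayName="Allergies"/>
  <title>Allergies and Intolerances</title>
  <text>
    <list><item>Penicillin - hives</item></list>
  </text>
  <entry>
    <act classCode="ACT" moodCode="EVN">
      <code code="CONC" codeSystem="2.16.840.1.113883.5.6"/>
    </act>
  </entry>
</section>
"""

ns = {"hl7": "urn:hl7-org:v3"}
section = ET.fromstring(section_xml)

# An "entries required" section needs both of these to be present.
has_narrative = section.find("hl7:text", ns) is not None
has_entries = len(section.findall("hl7:entry", ns)) > 0
print(has_narrative, has_entries)
```

A receiving EMR renders the `<text>` block for clinicians and parses the `<entry>` elements into discrete data, which is why sending both is the safe choice.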

How do you determine which sections are entry-required? Do you mean entries required from health providers? 2. Since the patient enters the data and prepares his CCD to be shared for an appointment for the FIRST TIME, is the semantic data still required?


Firestore billing for reading a document with subcollections

I'm making an app where it stores how many minutes a user has studied with my app. My Firestore database starts with a "users" collection, and each user has their own document that is named by their userID generated in Auth.
My question is: if I read their userID document, which has many documents in its subcollections, does that count as one read, or does it also count the documents in the subcollections?
Thank You in advance.
The answer here from Torewin is mostly correct, but it is missing one important detail. It says:
if you retrieve a document; anywhere, it counts as a read
This is not entirely true. Cached document reads are not billed as reads. This is one important feature of the Firestore client SDKs that helps lower billing costs. If you get a single document using the "cache" source option (the options are "cache", "server", and "default"), the document is served from the local cache and no billed read occurs. The cache is also used for query results when the app is offline.
The same is true for query results. If a document comes from cache for some reason, there is no billing for that read.
I am uncertain what Torewin means by this in comments: "They recommend you make multiple reads instead of 1 big one because you will save money that way". All reads are the same "size" in terms of billing, considering only the cost of the read itself. The size of the document matters only for the cost of internet egress usage, for which there is documentation on pricing.
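The billing rule above can be summarized in a small simulation. This is not the Firestore SDK, just a language-agnostic sketch of the logic: reads served from the local cache are not billed, reads that reach the server are (and a real "default" get prefers the server while online, which is simplified here).

```python
# Toy model of Firestore read billing: server reads are billed,
# cache hits are not. Not the real SDK.
class FakeFirestore:
    def __init__(self):
        self.server = {"users/alice": {"minutes": 120}}
        self.cache = {}
        self.billed_reads = 0

    def get(self, path, source="default"):
        if source == "cache" or (source == "default" and path in self.cache):
            # Served locally: no billed read (simplified "default" behavior).
            return self.cache.get(path)
        self.billed_reads += 1        # server read: billed
        doc = self.server.get(path)
        self.cache[path] = doc        # populate the local cache
        return doc

db = FakeFirestore()
db.get("users/alice", source="server")  # billed
db.get("users/alice", source="cache")   # not billed
print(db.billed_reads)  # 1
```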
It's worth noting that documents can't "contain" other documents. Documents are contained in collections or subcollections. These collections just have a "path" that describes where they live. A subcollection can exist without a "parent" document. When a document doesn't exist but a collection is organized under it, the document ID is shown in italics in the console. When you delete a document using the client API, none of its subcollections are deleted. Deletes are said to be "shallow" in this respect.
Are you asking whether it is 1 read to access a document (in this case, your generated userID document) from Firestore?
I would imagine the answer would be yes.
Any query or read from Firestore only pulls the reference that you are mapping to. For example, if you grab the 3rd document under User -> userID -> 3rd document, only the 3rd document will be returned; none of the other documents in that collection, nor any collection besides the userID's, are read.
Does that answer your question or are you asking something completely different?
For reference: https://firebase.google.com/docs/firestore/pricing#operations
Edit: Each individual document that is pulled from the query will be charged. For example, if you pull the parent collection (with 6 documents in it), you will be charged for all 6 documents. The idea is to only grab the documents you need, or to use a cursor, which lets you resume a long-running query. For example, if you only want the document pertaining to user data on a specific date (if your data is set up like that), you'd retrieve only that specific document and not all of the documents in the collection for the other days.
A simple way of thinking about it is: if you retrieve a document; anywhere, it counts as a read.

Solr Dynamic filter

I have electronic documents with associated metadata that I indexed into Solr. I also have a web application which allows users to log in and perform searches. But then, I would like to apply dynamic access rights to the documents. Let me explain. Basically, for us, a document has:
one type (Contract, CV, birth certificate, ...) about 250 unique types.
one person concerned, about 10 000 unique persons.
one effective date.
one content: the electronic document
Some users should (or shouldn't) have access to some documents according to who they are in our organization. For example, user 'x' can see the CV of user 'y' from date #1 to date #2. There are thousands of combinations, and in fact it's more complex than just these three parameters. So I developed an application based on a rules engine which computes the access rights given a user and a document. The rules might change quite often, and the facts are constantly changing.
At this time, it works by filtering the results that Solr returns in my client web application. However, by filtering after searching, I lose many features provided by Solr: facets, paging, etc. I am looking for a way to call my rules engine (a web service) to filter results before other Solr components run (especially facets).
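One common approach to this problem is to call the rules engine before the search and translate its decisions into Solr filter queries (`fq`), so that faceting and paging are computed only over documents the user may see. Below is a sketch that builds such a request; the field names (`doc_type`, `person_id`, `effective_date`) and the shape of the rules engine's output are assumptions for illustration, not your actual schema.

```python
from urllib.parse import urlencode

def build_solr_params(base_query, constraints):
    """Turn (field, clause) constraints from a rules engine into Solr
    fq parameters, so filtering happens before faceting/paging."""
    params = [("q", base_query)]
    params += [("fq", f"{field}:{clause}") for field, clause in constraints]
    params += [("facet", "true"), ("facet.field", "doc_type")]
    return urlencode(params)

# Hypothetical constraints computed for user 'x' by the rules engine.
qs = build_solr_params("contract", [
    ("doc_type", "(CV OR Contract)"),
    ("person_id", "y"),
    ("effective_date", "[2010-01-01T00:00:00Z TO 2012-12-31T23:59:59Z]"),
])
print(qs)
```

Because `fq` is applied inside Solr, the returned facet counts and page totals already reflect the access rules. If the per-user constraint list is too large to express as filter queries, a Solr PostFilter plugin is the usual alternative.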

Omniture Site Catalyst Extract/Export/Download Report Data

I have a drilled down report as shown in the below image:
When I try to download the report normally, I get only the 5 items shown in a category. I want to be able to download all the subcategories within all the categories, along with the category names and not just the 5 subcategories in a category.
How can I achieve this? Any ideas/suggestions?
If you have access to data warehouse, you can obtain this information easily. Most contracts include it by default; if you have access to the request interface, you have it.
Click on Adobe Marketing Cloud in the upper left | Reports & Analytics | Data Warehouse
Select the date range you'd like to request data from
In the breakdowns section, select series name followed by video name
In the metrics section, select the appropriate metric you'd like to include
Ensure the other settings in the request are as desired, and click 'request report'.
If you don't have access to data warehouse, you could try your luck at a data extract report:
Open the report you'd like to download, and under more options there should be 'extract data'
In the data extract wizard, click each 'top 1-50' and set them to 'all' or 'top 1-50000'
Ensure the other settings in the request are as desired, and click 'request report'.
Data extracts are subject to processing limitations, meaning if there's too much data to process, the request will fail. Data warehouse on the other hand is not subject to this limitation, it just takes a really long time for the report to arrive.
Yet another option would be to write your own script to pull the data using Adobe's Analytics Reporting API
Also, a note about Data Warehouse: it's "free" for anybody on a newer Adobe Digital Marketing contract. I say "free" because it's now included in the "package" with most all of the other Adobe digital marketing tools, instead of being charged separately.
If you have an older contract that hasn't been renewed yet, you may not actually have it, since part of it now being included also involves them jacking up the prices! Contact your rep to find out. But if you do have access to Data Warehouse, that's certainly the easier route.
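If you go the scripting route mentioned above, the request is basically a JSON report description posted to the Reporting API. The sketch below only builds the request body; the field names follow the classic 1.4 API's `Report.Queue` method and are an assumption here (the API has been revised since), so verify them against Adobe's current documentation. The report suite ID and element IDs are made up.

```python
import json

# Assumed shape of a 1.4-style Report.Queue body; verify against
# Adobe's current Analytics API docs before using.
report_description = {
    "reportDescription": {
        "reportSuiteID": "my-suite",          # hypothetical suite ID
        "dateFrom": "2016-01-01",
        "dateTo": "2016-01-31",
        "metrics": [{"id": "pageviews"}],
        "elements": [
            {"id": "category", "top": 50000},     # all rows, not just top 5
            {"id": "subcategory", "top": 50000},  # breakdown within category
        ],
    }
}
body = json.dumps(report_description)
print(body[:60])
```

The key idea, regardless of API version, is the `top` setting on each breakdown element: it is what lifts the "top 5 per category" limit you see in the UI download.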

HL7 relation to clinical flows

Does anyone know of any articles or tutorials that describe clinical flows
as they relate to the HL7 messages that are generated?
I have read the HL7 spec; I am just looking to connect the dots. For example:
A patient is admitted to a hospital: all the events that happen and the HL7 triggers that are created.
A lab test is ordered: how does it route to the respective systems, etc.?
It's actually pretty simple to understand clinical workflows. Here is a comprehensive list of all trigger events and core HL7 components that you can go through. If you Google, you will find many resources, like interfaceware, that give more elaborate examples.
In short, this is how it works.
There are 2 broad categories of applications in healthcare:
PMS - Practice Management System (in simple words, the front desk). This acts as a repository of all the patient demographics, the doctors' appointment schedules, billing information, etc. Most of the critical healthcare transactions are done at the PMS. E.g., GE Centricity, Allscripts ProPM, etc.
EHR - Electronic Health Record (in simple words, the doctor's application). This acts as a repository of all the patient's medications, diagnoses, allergies, history and physical, etc. Every piece of medical information is recorded in the EHR. E.g., Cerner EHR, Allscripts ProEHR, etc.
ADT (Admission, Discharge and Transfer) is a broad category of triggers and covers almost all the major events, from ADT^A01 to ADT^A40.
When a patient is registered at the practice PMS, an A28 (add patient demographics) is triggered. If any of the details of that patient are updated (e.g., his address), that's an A31. Once you have the patient's demographics, you schedule an appointment.
Specifically, you have a scheduling message (SIU) to do the job, or you can also use encounter demographics (ADT). The key difference is that an SIU doesn't have the complete demographic details (insurance, guarantor, etc.) of the patient in the message, and encounter demographics are more useful than SIUs because they contain demographic plus appointment details.
So, if the patient does a walk-in without previously being registered at the practice, an encounter demographics message is sufficient.
Let us assume, for the sake of understanding, that we go with scheduling. SIU^S12, ^S14, and ^S15 are for appointment add, update, and cancellation respectively. Now, this appointment shows up on the respective doctor's application (EHR) if the PMS and EHR are connected via an interface.
Every procedure has a code associated with it. Billing happens on this procedure code and is done through a charge message (DFT^P03). A charge is placed in the EHR and always travels from the EHR to the PMS.
There is also another set of applications called billing applications - Whiteplume is an example - that specifically process charges and handle billing. There are also clearing houses that handle billing and claim processing for insurances.
If we talk about Labs, the Lab connects to both the PMS and the EHR.
It connects to the PMS through a query interface (QRY^Q01, ^Q04 trigger events) to request patient demographics, since the lab needs to know whether the corresponding patient is registered or not.
It connects to the EHR application through an OM (order management) interface or a results-only interface. ORM^O01 is for orders; ORU^R01 is for results. OM interfaces are bidirectional.
A results-only interface is a unidirectional interface running from the lab to the EHR and consists only of results. The request is placed manually, by phone or fax.
The order, basically the tests that need to be performed on the patient, is placed from the EHR; the order message (ORM^O01) is triggered, containing the required procedure code of the test/battery to be performed. The lab then queries for the respective patient through a QRY^Q01 message, receives a response containing the patient demographics (basically just the PID information), and after the tests are conducted, sends the results using an ORU^R01 message. However, the query interface is not always mandatory.
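The trigger events named above all travel inside pipe-delimited v2 messages, where MSH-9 identifies the message type and trigger event. Here is a small sketch that pulls that field out of a structurally typical (but made-up) ADT message, to show where "ADT^A28" actually lives on the wire:

```python
# A made-up but structurally typical HL7 v2 message: segments are
# separated by carriage returns, fields by pipes.
msg = "\r".join([
    "MSH|^~\\&|PMS|CLINIC|EHR|CLINIC|20230101120000||ADT^A28|MSG0001|P|2.3",
    "PID|1||12345^^^CLINIC^MR||DOE^JOHN||19700101|M",
])

def message_type(hl7_message):
    """Return (message type, trigger event) from MSH-9,
    e.g. ('ADT', 'A28') for an add-patient-demographics message."""
    msh = hl7_message.split("\r")[0].split("|")
    # msh[0] is 'MSH', msh[1] the encoding chars, so MSH-9 lands at index 8.
    msg_type, trigger = msh[8].split("^")[:2]
    return msg_type, trigger

print(message_type(msg))  # ('ADT', 'A28')
```

An interface engine does essentially this on every inbound message to decide whether to route it to the PMS, the EHR, or the lab system.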
As you go deeper into it, there are also transcription (TRN), radiology, and document (MDM) messages for different contents and purposes of results.
Hope this helps!
If you are looking for a description of transactions occurring in healthcare processes and how they are mapped to HL7 messages, then the best and most standard way is to go for IHE:
http://www.ihe.net/Technical_Frameworks/
Regards
Davide

How do you decide how much data to push to the user in Single Page Applications?

Say you have a Recipe Manager application that you're building with a Web API project. Do you send the list of recipes along with their ingredient names as JSON? Or do you send the recipes, ingredient names, and ingredient details? What's the process for determining how big the initial payload should be for a SPA?
These are the determining factors in how much to send to the client in an initial page:
Data that will be displayed for that first page
Lookup list data for any drop downs on that page
Data that is required for any presentation rules (might not be displayed but is used)
On a recipe page that would show a list of recipes, I would get the recipes and some key factors to display (like recipe name, the dish, and other key info) that can be displayed in a list. Enough for the user to make a determination on what to pick. Then when the user dives into a recipe, then go get that 1 recipe's details.
The general rule is: get what your user will almost certainly need up front. Then get other data as they request it.
The process by which you determine how much data to send solely depends on the experience you want to provide your users - however it's as simple as this. If my experience demands that I readily display all of the recipes with a brief description and then allow them to drill into the recipe to get more information, then I'm only going to send enough information to produce the display and navigate further into the entity.
If, after navigating into the recipe, you then need to display the ingredient names and measures, send down that plus enough information to navigate further into any single ingredient.
And as you can see it just goes on and on.
It depends on whether your application is just a simple HTTP API backing your web page, or your goal is something more akin to Platform as a Service. One driver for the adoption of SPAs is that they make the browser another client, just like an iOS or Android app, or a 3rd party.
If you want to support multiple clients, then it's likely that you want to design your APIs around the resources that you are trying to expose, such that you can use the uniform interface of GET/POST/PUT etc. against those resources. This means it is much more likely that you are not coding in a client-specific style, and your API will be usable by a wide range of clients.
A resource is anything you would want to have its own URN.
I would suggest that is likely that in this case you would want a Recipe Book resource which has links to individual Recipe resources, which probably contain all the information necessary for that Recipe. Ingredients would only be a separate resource if you had more depth on what an Ingredient contained and they had their own resource.
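That Recipe Book shape, a summary list whose items link to full Recipe resources, can be sketched as plain payload construction. The URLs and field names below are illustrative, not a prescribed schema:

```python
# Build the summary payload for a Recipe Book resource: enough to render
# a list (name, dish) plus a link to each full Recipe, but no details.
def recipe_book_resource(recipes):
    return {
        "recipes": [
            {
                "name": r["name"],
                "dish": r["dish"],
                "links": {"self": f"/api/recipes/{r['id']}"},
            }
            for r in recipes
        ]
    }

book = recipe_book_resource([
    {"id": 1, "name": "Carbonara", "dish": "pasta",
     "ingredients": ["egg", "guanciale"]},  # details stay server-side
])
print(book["recipes"][0])  # note: no "ingredients" in the summary
```

The client follows the `self` link only when the user drills into a recipe, which is exactly the "summary up front, details on demand" rule described above.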
At Huddle we use a Documentation Driven Design approach. That is we write the documentation for our API up front so that we can understand how usable our API would be. You can measure API quality in WTFs. http://code.google.com/p/huddle-apis/
Now, this logical division might not be optimal in terms of performance. You're dealing with a classic tradeoff here (ultimately, architecture is all about balancing design tradeoffs) between the usability of your API and its performance. Usually, don't favor performance until you know it is an issue, because you will pay a penalty in usability or maintainability for early optimization.
Another possibility is to implement the OData query support for WebAPI. http://www.asp.net/web-api/overview/odata-support-in-aspnet-web-api
That way, your clients can perform their own queries to return only the data they need.
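With OData support enabled, the client shapes the payload itself through query options. The sketch below only builds such a URL; `$select`, `$filter`, and `$top` are standard OData query options, while the `Recipes` entity set and field names are made up for this example:

```python
from urllib.parse import urlencode

# Standard OData query options; entity set and fields are illustrative.
params = urlencode({
    "$select": "Name,Dish",          # only the columns the list view needs
    "$filter": "Dish eq 'pasta'",    # server-side row filtering
    "$top": "20",                    # first page only
})
url = "/api/Recipes?" + params
print(url)
```

Each client (browser SPA, mobile app, 3rd party) can then ask for exactly the slice of data it needs, instead of the server guessing one payload size for everyone.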
