Hi all, I am new to Mirth.
I want to know the best way to store HL7 messages in a MySQL database table: should the table have a single column holding the whole HL7 message, or separate columns such as sending application, receiving application, etc.?
I have been unable to find a suitable answer, so any help is appreciated.
Thanks in advance.
Parsing the HL7 and storing each segment in an individual field would be overkill for the database (HL7 has a hierarchical structure and changes from version to version).
I would recommend storing just the attributes you need (IDs, sending application, timestamp, ...) and putting the original HL7 as text (or XML) in one field. You can then take the HL7 and parse it again when you need it.
It's also worth noting that in Mirth Connect 3.0 you will have the ability to add custom metadata columns on a per-channel basis, which are stored in the database and indexed. So, for example, you might add a "Sending Application" column to your channel and use a transformer to pull the value out of MSH.3.1. In addition to storing this value in a separate database column, you could then view that column easily for each message on the Channel Messages screen, and even search on it. It's especially useful for things like patient IDs, names, accession numbers, etc.
The 3.0 GA will be released later this year, but the first beta has already been released: http://www.mirthcorp.com/community/forums/showthread.php?t=8126
I'm currently implementing a very basic IMAP client into an application I'm building in Rails. I'm using the Mail gem which supplies lots of useful ways of parsing the imap data.
I'd like to store the Mail object that it's generating in the database. Is that possible?
i.e.
email = Email.new
email.uid = id
email.mail = Mail.new(imap.fetch(id, "RFC822")[0]["attr"]["RFC822"])
email.save
It's a convenience thing: I don't want to have to download the message again unless I have to, since the IMAP call is slow, but I'd like to have it there to look back on (and do any breaking down I need later).
I could then call
email.find(x).mail.body
and various other useful things without having to build out that functionality in my own email model.
Q1: How would I set up the active record model?
Q1a: Would I be better off doing something that excluded the attachments to make it an easier object to store? (is that even possible?)
Appreciate your help,
Several database schemata have been developed to store mail. I've worked on one, and there are others. Believe me, it's hard work. The result can be very useful, but since your question doesn't focus on the result I suspect it's not worthwhile in your case.
You might find it easier to use a JSON library to write your object graph to a file with an automatically inferred structure, which most JSON libraries seem to support these days. That won't let you do as much, but it's very much easier and lets you store both completely and incompletely retrieved messages. If you haven't fetched a particular body part, the JSON library will just write a null for that field.
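A rough sketch of that approach, assuming you just dump the headers and body you happen to have (the field names here are illustrative, not a serializer provided by the Mail gem):
require "json"

# Write whatever parts of the message you have; fields you never fetched
# simply come out as null in the JSON.
def dump_mail(mail, path)
  File.write(path, JSON.generate({
    subject: mail.subject,
    from:    mail.from,
    to:      mail.to,
    date:    mail.date,
    body:    mail.multipart? ? mail.text_part&.decoded : mail.body.decoded
  }))
end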
It depends on what you want to do with the stored mails. If you need only specific parts of the mail to be easily accessible through the database, you won't need a complex setup like Archiveopteryx, which basically maps a complete representation of emails to relational database tables. In most cases you won't need that much detail, and a simple data model will be perfectly adequate.
A1: rails g model Email from to subject date:datetime message_id body. Those are just the basic parts; it should get you started.
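A hedged sketch of populating that model from a Mail object (the field names match the generator above; adjust to taste):
# Parse the raw RFC822 source once, then store the parts you care about.
mail = Mail.new(imap.fetch(id, "RFC822")[0]["attr"]["RFC822"])

Email.create!(
  from:       mail.from&.join(", "),
  to:         mail.to&.join(", "),
  subject:    mail.subject,
  date:       mail.date,
  message_id: mail.message_id,
  body:       mail.body.decoded
)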
A1a: You don't need to store the attachments if you don't want to. If you need them, you'll probably be better off not storing them in the database itself. Attachments are just like uploads so there are plenty of gems that can help you do that (https://www.ruby-toolbox.com/categories/rails_file_uploads).
Using Postgres jsonb columns, you can store the email as JSON. In my case I disregard the attachments (I store references to them and retrieve them as and when required).
This works pretty well with the Mail gem.
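For what it's worth, a minimal sketch of that setup, assuming a jsonb column named payload on the emails table (the names are mine, not a fixed convention):
class AddPayloadToEmails < ActiveRecord::Migration[6.1]
  def change
    add_column :emails, :payload, :jsonb, default: {}
  end
end

# Store the parts you care about; attachments are kept as references only.
Email.create!(
  uid: id,
  payload: {
    subject: mail.subject,
    from:    mail.from,
    to:      mail.to,
    body:    mail.body.decoded,
    attachment_filenames: mail.attachments.map(&:filename)
  }
)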
I am looking for a solution for logging data changes for a public API.
The API needs to tell a client app which tables in the DB have changed and need to be synchronised since the app last synchronised, and this also needs to be scoped to a specific brand and country.
Current Solution:
A Version table holding the class_names of models, which is touched from every model on create, delete, touch and save.
When we touch the Version for a specific model, we also look at the reflected associations and touch them too.
The Version model is scoped to brand and country.
The REST API responds to a request that includes last_sync_at (timestamp), brand and country.
Rails looks up Versions with the given attributes and returns the class_names of models that changed since the last_sync_at timestamp.
This solution works, but performance is a problem and it is also hard to maintain.
UPDATE 1:
Maybe the simpler question is:
What is the best practice for working out, and telling frontend apps, when and what needs to be synchronized, in terms of the whole concept?
Conditions:
Frontend apps need to download only their own content changes, not the whole dataset.
Synchronization is not triggered when an application from a different country or brand needs to be synchronized.
Thank you.
I think the best solution would be to use Redis (or some other key-value store) and save your information there. Writing to Redis is much faster than writing to a SQL DB. You can write a service class that saves the data like:
RegisterTableUpdate.set(table_name, country_id, brand_id, timestamp)
Such a call would save the given timestamp under a key that could look like, for example, table-update-1-1-users, where the first number is the country id and the second is the brand id, followed by the table name (or you could use country and brand names if needed). If you want to find out which tables have changed, you just need to find the Redis keys matching "table-update-1-1-*", iterate through them and check which are newer than the timestamp sent through the API.
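A minimal sketch of such a service class, assuming the redis gem and a shared $redis connection (class and key names follow the example above):
class RegisterTableUpdate
  # Record that a table changed for a given country and brand.
  def self.set(table_name, country_id, brand_id, timestamp = Time.now)
    $redis.set("table-update-#{country_id}-#{brand_id}-#{table_name}", timestamp.to_i)
  end

  # Return the names of tables changed since the given time.
  def self.changed_since(country_id, brand_id, since)
    $redis.keys("table-update-#{country_id}-#{brand_id}-*")
          .select { |key| $redis.get(key).to_i > since.to_i }
          .map    { |key| key.split("-").last }
  end
end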
It is worth remembering that Redis is not as reliable as SQL databases. Its reliability depends on configuration, so you might want to read the Redis guidelines and decide whether you would like to go with it.
You can take advantage of the fact that ActiveRecord automatically records the time every time it updates a table row (the updated_at column).
When checking what needs to be updated, select the objects you are interested in and compare their updated_at with the timestamp from the client app.
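In practice that check can be a one-liner; a sketch, with the model and scoping columns as assumptions:
# Everything this client still needs, based on the standard Rails updated_at column.
changed = Article.where(country: country, brand: brand)
                 .where("updated_at > ?", last_sync_at)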
The advantage of this approach is that you don't need to keep an additional table that lists all the updates on models, which should speed things up for the API users and be easier to maintain.
The disadvantage is that you cannot see the changes in data over time; you only know that a change occurred, and you can access the latest version. If you need to track changes in data over time efficiently, then I'm afraid you'll have to rework things from the top.
(read last part - this is what you are interested in)
I would recommend that you use the decorator design pattern for changing the client queries. The client sends a query for what it wants, and the server decides what to return based on the client's last update.
so:
the client sends a query that includes the time it last synched
the server sees the query and takes into account the client's nature (device-country)
the server decorates (changes accordingly) the query to request from the DB only the relevant data, and if that is not possible:
after the data are returned from the database manager they are trimmed to be relevant to where they are going
the server returns to the client all the new data that the client cares about.
I assume that you have a time entered field on your DB entries.
In that case the "decoration" of the query (abstractly) would be just to add something like a "WHERE" clause in your query and state you want data entered after the last update.
Finally, if you want that to be done for many devices/locales/whatever, implement a decorator for the query and for the result of the query, and serve them to your clients as they should be served. (Keep in mind that, in contrast with a subclassing approach, you will only have to implement one decorator per device/locale/whatever, not one for every combination!)
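A loose sketch of what such a query decorator could look like (class and column names are assumptions, not taken from the question):
class LastSyncDecorator
  def initialize(base_scope, last_sync_at, country:, brand:)
    @base_scope   = base_scope
    @last_sync_at = last_sync_at
    @country      = country
    @brand        = brand
  end

  # "Decorate" the incoming query: narrow it to this client's locale and
  # to rows entered after its last sync (the WHERE clause mentioned above).
  def call
    @base_scope
      .where(country: @country, brand: @brand)
      .where("created_at > ?", @last_sync_at)
  end
end

# Usage: LastSyncDecorator.new(Article.all, last_sync_at, country: "US", brand: "acme").call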
Hope this helped!
I have a dilemma regarding Core Data and syncing data with a server.
I wrote an app which uses Core Data; it doesn't use id attributes, and everything is set up with relationships. Most of the data is generated on the device and should be sent to the server as a backup. On the other hand, there is some data that can be reused among users, and I want to have control over it, i.e. modifying, deleting, adding.
Question
When sending data to the server, what's the preferred way of dealing with relationships? In my opinion, it would be very inefficient to think in terms of Core Data, sending all related objects to the server and then dealing with them if they already exist on the server. So, is using a uniqueId obligatory? Should the shared ids be generated on the server and the others on devices? Is there any other approach?
Thank you.
Assuming that the server database works with foreign keys, one common solution is to introduce id attributes and set them to some invalid state for new objects. For example, for new relationships you could generate an arbitrary number of unique "invalid" ids by using negative integers. The server would have to then assign new (server-unique) ids and send them back to the client. Of course, when importing data from the server, you replace foreign keys with relationships.
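On the server side the remapping can be fairly mechanical; a rough sketch (shown in Ruby purely for illustration, with a made-up Item model):
# Client-created records arrive with negative "temporary" ids; the server
# inserts them, assigns real ids and returns the mapping to the client.
def import_records(records)
  id_map = {}
  records.each do |record|
    temp_id = record["id"]
    saved   = Item.create!(record.except("id"))
    id_map[temp_id] = saved.id if temp_id.to_i.negative?
  end
  id_map   # e.g. { -1 => 1042, -2 => 1043 }
end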
So if you have potentially more than one device trying to modify data also used by other users or devices, the server will have to be part of the solution. Otherwise, you could just generate unique IDs so the server can store the relationships.
I have now played with the QBO and QBD APIs and feel I have a fair understanding of how they think and how to interact with them. So now it is time to design the actual integration solution.
Inside my application you can create new customers, quote services, perform services and, soon, pass invoices to QuickBooks. Sounds easy.
But what if the customer is not in QB yet? No problem: for each invoice I will look up the customer (I need the id anyway) and, if it doesn't exist, add it. But if I have to look up the customer for every invoice, it seems like it might be slow. I will likely have 30,000 customers and 500-3,000 invoices per day.
So my question is this; what are others doing?
a) Are you storing the QB id for each customer in your data?
b) How do you detect address changes (changed in your app and changed in QB)?
c) Is the batch submission interface so much faster I should use that?
Thanks for your help!
We oftentimes do store the QB id in our database. If we post an invoice into QB, we'll then store the QB id for future use in case we need to modify it.
As far as detecting changes on the customer record and other info, there are a couple of ways to handle the conflict resolution. One is to keep a timestamp on your side recording when changes are made. You can then compare this with the timestamp of the last change on the QB record and decide which one gets updated.
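As a rough illustration of that comparison ("last writer wins"; the helper names are placeholders, not IPP SDK calls):
# Whichever side changed more recently wins the conflict.
def resolve_customer(local_customer, qb_last_updated_at)
  if local_customer.updated_at > qb_last_updated_at
    push_customer_to_qb(local_customer)        # placeholder: your QB update call
  else
    refresh_customer_from_qb(local_customer)   # placeholder: fetch from QB, update locally
  end
end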
FreddyMac,
To detect changes on the Intuit side you can construct a query with a CDCasOf Filter, which will return only the data that has changed since a date you provide. (ChangeDataCapture as of)
https://ipp.developer.intuit.com/0010_Intuit_Partner_Platform/0050_Data_Services/0500_QuickBooks_Windows/0100_Calling_Data_Services/0015_Retrieving_Objects
You need to keep track of data changes on your side.
The batch submission is not faster; it's just easier for you to write the code.
The IPP SDK can queue the API calls for you and aggregate the responses.
regards,
Jarred
I have a few data values that I need to store in my Rails app, and I wanted to know if there are any alternatives to creating a database table just for this simple task.
Background: I'm writing some analytics and dashboard tools for my Ruby on Rails app, and I'm hoping to speed up the dashboard by caching results that will never change. Right now I pull all users for the last 30 days and re-arrange them so I can see the number of new users per day. It works great but takes quite a long time; in reality I should only need to calculate the most recent day and store the rest of the array somewhere else.
What is the best way to store this array?
Creating a database table seems a bit overkill, and I'm not sure that global variables are the correct answer. Is there a best practice for persisting data like this?
If anyone has done anything like this before let me know what you did and how it turned out.
Ruby has a built-in Hash-based key-value store named PStore. This provides simple file-based, transactional persistence.
PStore documentation
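For example, caching the per-day counts might look something like this (the file name and key are arbitrary):
require "pstore"

store = PStore.new("dashboard_cache.pstore")

# Write the pre-computed counts inside a transaction.
store.transaction do
  store[:new_users_per_day] = new_users_per_day   # e.g. { "2013-05-01" => 12, ... }
end

# Read them back later (read-only transaction).
cached = store.transaction(true) { store[:new_users_per_day] }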
If you've got a database already, it's really not a big deal to create a separate table for tracking this sort of thing. When doing reporting, it's often to your advantage to create derivative summary tables exactly like what you're describing. You can update these as required using a simple SQL statement and there's no worry that your temporary store will somehow go away.
That being said, the type of report you're trying to generate is actually something that can be done in real-time except on extravagantly large data sets. The key is to have indexes that describe the exact grouping operation you're trying to do. For instance, if you're grouping by calendar date, you can create a "date" field and sync it to the "created_at" time as required. An index on this date field will make doing a GROUP BY created_date very quick:
SELECT created_date AS on_date, COUNT(id) AS new_users FROM users GROUP BY created_date
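If you would rather stay in ActiveRecord, the equivalent grouped count is roughly:
# Assuming the created_date column described above; returns { date => count } pairs.
User.group(:created_date).count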
Using a lightweight database like SQLite shouldn't feel like overkill. Alternatively, you can use key-value store solutions like Tokyo Cabinet, or even store the array in a flat file manually, but I really don't see any overkill in using SQLite.