Retrieving history of tracked fields in Odoo 11

I want to know how to retrieve the history of tracked fields in Odoo 11, so I can use it later for statistics and maybe display some of the significant changes in graphs.
I know the changes get displayed in the chatter below the record and that it's related to mail.thread, but I don't know if there's a way to get that information for other manipulations, or where it's located in the database.

Changes to tracked fields are stored in the mail.tracking.value model. You can review the table structure and methods in core/addons/mail/models/mail_tracking_value.py (in Odoo 11.0).
You can view the Messages directly to review some of the data by going to Settings > Technical > Email > Messages and filtering on Tracking values "is set".
The model is very basic, but you should be able to work with the message values to get your report/history data by sorting on the mail_message_id's date and time.
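For example, here is a minimal sketch of pulling that history from inside Odoo (run in an Odoo 11 shell; the res.partner model, the record id, and the phone field are just assumptions for illustration):

# 'env' is the Odoo environment, e.g. from `odoo shell`.
record = env['res.partner'].browse(42)              # hypothetical record

# Tracking values hang off the record's chatter messages.
trackings = env['mail.tracking.value'].sudo().search([
    ('mail_message_id', 'in', record.message_ids.ids),
    ('field', '=', 'phone'),                        # hypothetical tracked field
])

# Sort chronologically on the parent message's date, oldest first.
for t in trackings.sorted(key=lambda t: t.mail_message_id.date):
    print(t.mail_message_id.date, t.old_value_char, '->', t.new_value_char)

Which old_value_*/new_value_* column is filled depends on the field's type (char, integer, datetime, etc.), so read the pair matching your field.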


QuickFIXJ setting SendingTime in messages

I have a FIX application which is connected to several price providers. It distributes the data it receives to our inner applications. When it sends the messages received from the price providers on to the target applications, it modifies the SendingTime(52) field in the FIX header, which is not acceptable: the inner applications want to get the original SendingTime value. How can I tell the QuickFIX/J engine not to assign a new timestamp value?
Thanks
What you desire... is kind of wrong. Header fields are there to be used by the engine, not for application data (which is what this relayed SendingTime effectively is on the second leg). Your inner FIX connection should not be clobbering the SendingTime field; you might need the actual SendingTime if you are diagnosing problems with your inner connection!
What you really need is a second SendingTime field. You should edit the data dictionary (DD) of your inner FIX applications to add another field in which to store the old SendingTime value, and tell your inner target apps to refer to that field.
NOTE: You probably don't want to use OrigSendingTime (tag 122) for this. That field has a very specific usage already. Name your new field something else.
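Mechanically, the relay side is just a copy from the inbound header into a custom body field before sending. Here is a minimal sketch using the QuickFIX Python bindings (QuickFIX/J's API has the same shape); tag 20001 is a made-up user-defined tag, and you would also add it to the inner sessions' DD so validation accepts it:

import quickfix as fix

# Hypothetical user-defined tag for the relayed original SendingTime.
RELAYED_SENDING_TIME = 20001

def copy_original_sending_time(inbound, outbound):
    # Read SendingTime(52) from the inbound message's header...
    sending_time = fix.SendingTime()
    inbound.getHeader().getField(sending_time)

    # ...and store it in a custom *body* field on the outbound message.
    # The engine then stamps its own SendingTime(52) on send, as it should.
    outbound.setField(fix.StringField(RELAYED_SENDING_TIME,
                                      sending_time.getString()))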
FIX Market Data messages (35=W, 35=X) usually have MDEntryDate (#272) and MDEntryTime (#273) fields to represent the timestamp of the market data price. If it is related to Quote/trade messages, you may have the TransactTime (#60) field.
It is worth keeping SendingTime (#52) and MDEntryDate/MDEntryTime/TransactTime separate: you can then compare the difference between the price's timestamp and the counterparty's infrastructure timestamp (the sending time), which helps identify delays between the systems.
If the message you are handling does not have any application-level datetime field, you can pick one whose value will carry the SendingTime of the original FIX message you received.
You can either select and use an existing field (http://www.onixs.biz/fix-dictionary/4.4/fields_by_name.html) or create your own user-defined field.
If you decide to create your own field, it is good practice to check the official Global Technical Committee user-defined fields list at https://www.fixtrading.org/standards/user-defined-fields/ and to pick a tag from the user-defined fields range.
Sites
Fields by message: https://www.onixs.biz/fix-dictionary.html
User defined fields: https://www.fixtrading.org/standards/user-defined-fields/

Updating core data performance

I'm creating an app that uses core data to store information from a web server. When there's an internet connection, the app will check if there are any changes in the entries and update them. Now, I'm wondering which is the best way to go about it. Each entry in my database has a last updated timestamp. Which of these 2 will be more efficient:
Go through all entries and check the timestamp to see which entry needs to be updated.
Delete the whole entity and re-download everything again.
Sorry if this seems like an obvious question and thanks!
I'd say option 1 would be most efficient, as there is rarely a case where downloading everything (especially in a large database with large amounts of data) is more efficient than only downloading the parts that you need.
I recently did something similar.
I solved the problem by assigning a unique ID and a global 'updated timestamp', and by thinking in terms of 'delta' changes.
To explain better: I have a global 'latest update' variable stored in user preferences, with a default value of 01/01/2010.
This is roughly my JSON service:
response: {
    metadata: { latestUpdate: 2013...etc }
    entities: { .... }
}
Then, this is what's going on:
pass the 'latest update' to the web service and retrieve a list of entities
update the core data store
if everything went fine with Core Data, the 'latestUpdate' from the service metadata becomes my new 'latest update' variable stored in user preferences
That's it. I am only retrieving the changes I need, and of course the web service is structured to deliver a proper list. The point is: a web service backed by a database can deal with this matter quite well, and leave the iPhone to be a 'simple client' only.
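A rough sketch of that client-side flow in Python (standing in for the Objective-C/Core Data calls; the endpoint URL, parameter name, and store object are all assumptions):

import requests

LATEST_UPDATE_KEY = 'latestUpdate'

def sync(prefs, store):
    # 1. Ask the service only for entities changed since the last sync.
    since = prefs.get(LATEST_UPDATE_KEY, '2010-01-01T00:00:00Z')
    body = requests.get('https://example.com/api/entities',   # hypothetical
                        params={'since': since}).json()['response']

    # 2. Upsert just the returned (changed) entities into the local store.
    for entity in body['entities']:
        store.upsert(entity)    # stand-in for the Core Data insert/update

    # 3. Only after the store commit succeeds, advance the watermark.
    store.commit()
    prefs[LATEST_UPDATE_KEY] = body['metadata']['latestUpdate']

Advancing the 'latest update' watermark only after a successful commit is what makes the scheme safe to retry.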
But I have to say that for small amounts of data, it is still quite performant (and less bug-prone) to download the whole list on each request.
As per our discussion in the comments above, you can model your Core Data entries with per-field version counters like this:
CoreDataEntityPerson:
    name : String
    name_version : int
    image : BinaryData
    image_version : int
You can now model the server XML in the following way:
<person>
    <name>michael</name>
    <name_version>1</name_version>
    <image>string_converted_imageData</image>
    <image_version>1</image_version>
</person>
Now you can follow these steps:
When the response arrives and you parse it, you initially create a new object from the entity and fill in the data directly.
The next time you perform an update on the server, you increase the version count of an entry by 1 and store it.
E.g. let's say the name michael is now changed to abraham; the version count of name_version on the server will then be 2.
This updated version count will come in the response data.
Now, while storing the data in the same object, if you find the version count to be the same, the update of that entry can be skipped; but if you find the version count has changed, then that entry needs to be updated.
This way you can efficiently perform a check on each entry and perform updates only on the changed entries.
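A sketch of that merge step (hypothetical names; the dicts stand in for the parsed server entry and the Core Data managed object):

VERSIONED_FIELDS = ['name', 'image']   # fields that carry a *_version counter

def apply_update(local, server):
    """Copy a server entry onto the local object, skipping unchanged fields."""
    for field in VERSIONED_FIELDS:
        version_key = field + '_version'
        if server[version_key] != local[version_key]:
            # Version bumped on the server: take the new value and version.
            local[field] = server[field]
            local[version_key] = server[version_key]
        # Same version: skip the (possibly large) payload, e.g. image data.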
Advice:
The above approach works best when you're dealing with large amounts of data to update.
In the case of simple text entries for an object, simply overwriting the data on all entries is efficient enough, and it also keeps the response model simple.

Resolving conflicts between automated updates and manual overrides

This is a bit of a complex, abstract question, so forgive me if it's not specific enough.
I've encountered a specific type of problem numerous times: on the one hand, a data source is used to update a certain data structure automatically at regular intervals; on the other hand, stakeholders want to be able to manually override the automated entries.
Example:
You have a list of products, which are kept up-to-date (title, description, etc.) by some automated script which uses external data sources (product databases, etc.).
Let's say that in your data source you have a toaster "Freshtoast XYZ 300" and if its name changes to "FreshToast! XYZ-300", you want to propagate that update into your own (differently structured) product model.
At the same time, if a co-worker doesn't like the name "Freshtoast XYZ 300" and manually changes it to "Toaster XYZ 300 by Freshtoast", you don't want to override that change automatically (he would get angry). But you also don't want to simply ignore the updated name, since if the co-worker knew about the change, he'd adjust the name to "Toaster XYZ-300 by FreshToast!".
What's the best method to "consider" updated data sources - even for overridden data - while still allowing manual override?
PS: I'm using mostly Ruby / Rails, but I guess the question is very general. Also, to be clear, automated updates are the rule, while manual overrides are the exception in this scenario. So let's say 200,000 products get updated every single day, only 20 of which have manually overridden titles. So, for example, having to approve every single update is not an option.
Here goes nothing...
Hands-off approach: add a string column to the products table that contains a serialized list of user-touched columns. Any time a user touches a column in the products table, put it in the serialized list. When the automatic updater hits that record, it checks the list for columns it should ignore.
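A sketch of that updater-side skip logic (Python standing in for the Rails code; the column and helper names are made up):

def auto_update(product, fresh_attrs):
    # Columns a user has manually overridden, e.g. stored as 'title,description'.
    locked = set((product['user_touched_columns'] or '').split(','))
    for column, new_value in fresh_attrs.items():
        if column in locked:
            continue   # manually overridden: leave the user's value alone
        product[column] = new_value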
Hand-wringing micro-manager approach: use a versioning library (e.g. the vestal_versions gem) and add a user_id column to the products table. Any time a user-touched record is automatically updated, send the user a notification and let them view a before/after diff which they can approve or reject.
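A sketch of that second flow (again Python pseudocode; the snapshot dict and notify helper are assumptions):

def auto_update_with_review(product, fresh_attrs, notify):
    before = dict(product)              # snapshot for the before/after diff
    for column, new_value in fresh_attrs.items():
        product[column] = new_value
    if product.get('user_id'):
        # A user owns this record: ask them to approve or roll back.
        notify(product['user_id'], before=before, after=dict(product))

Since only ~20 of your 200,000 products carry manual overrides, the notification path stays rare while the bulk of records update silently.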

Access 2010 form not displaying query

I'm sure this is an easy fix, but I can't seem to find it. I just have a form, which will be a subform of another, that needs to display the results of a query.
The query is simple enough: it just displays all fields of records that fall between specified dates. The query works great, but when I attach it to the form as its record source it doesn't display the data. I can see the correct number of record selectors, so I know it's understanding the query, but it's as if all the fields are hidden!
I have also tried building a query for the form's record source that was simply Select query.* From query. Oddly, I have had this working before, but I had to specify every field. What I mean is:
Select title From query
Select type From query
Select date From query
...
And so on for all the fields, but this seems foolish. Can anyone think of what I may be doing wrong?
Thanks in advance!
Edit: forgot to mention that I also tried the foolish solution mentioned above and it didn't work, so it's definitely some issue I'm not seeing, probably a property that's not set appropriately.
@sshekhar: well, it's not really code at the moment; I'm using Access 2010. I have a form that needs to display a subform that executes this query, displaying records whose date field falls between dates specified by the user. The query works and displays the correct records, but the form it is attached to only shows the record selectors, and all the fields appear to be "hidden." I thought it might be one of the form's properties set incorrectly, but I compared it with a test form from another database and both have what appear to be identical settings. So I'm at a loss!
So it turns out that even though I'm using a query that holds all the fields, the form will not display the content unless you go to Add Existing Fields and add all the fields you want to see. This seems really silly, especially when the results are right there in the query, but at least it's working now.
I had this problem and discovered that having the property DataEntry set to YES will only display new records. From Microsoft Help:
You can use the DataEntry property to specify whether a bound form opens to allow data entry only. The Data Entry property doesn't determine whether records can be added; it only determines whether existing records are displayed. Read/write Boolean.

How to send an HL7 message using Mirth by reading data from my database

I'm having a problem in sending (creating) an HL7 message using Mirth.
I want to read data from my patient table in SQL Server 2008 and, using that data, send a message to my destination connector, a File Writer, so that my messages get saved in the File Writer's output directory.
So far I'm able to generate the message, but the size of the output file in my destination directory keeps increasing as the channel keeps polling.
Have I done something wrong in the transformer mapping?
UPDATE:
The size of the output file in my destination directory IS increasing (my .txt file starts at 1 KB and grows to 900 KB and beyond). This is happening because the same data is getting generated again and again, multiple times. E.g. my generated message has one MSH, PID, PV1, ORM group for one row of data in my database, and the same MSH, PID, PV1 and ORM are getting generated multiple times.
If you are seeing the same data generated in your output directory multiple times, the most likely cause is that you are not doing anything to indicate to your database that a given record has been processed.
For example, if you have 1 record in your database: ["John", "Smith", "12134" ...] on the first poll, you will generate 1 message. If on the second poll you also have a second record ["Fred", "Jones", "98371" ...], you will generate TWO messages - one for John Smith and one for Fred Jones. And so on.
The key is to use the "Run On-Update Statement" of your Database Reader (Source) connector to update the database table you are polling with an indication that a given record has been processed. This ensures that the same record is not processed multiple times.
This requires that your source table have some kind of column to indicate the record has been processed. Mirth will not keep track of this for you - you must do it manually.
You can't have a file reader as a destination, so I assume you mean file writer. You say that "the size of my file in my destination is increasing." Is that a typo? Do you mean NOT increasing?
If it is increasing, then your messages are getting generated and you can view them to start your next round of troubleshooting...
If not, then you should look at the message log in the dashboard to see what is happening on a message-by-message basis - that would be the next place to troubleshoot.
You have to have a way of distinguishing which records to pull from the database, by filtering on some sort of status flag or possibly a timestamp. Then you have to use some sort of On-Update statement to mark those same records as processed, e.g.:
Select id, patient, result from results where status_flag='N'
or
Select * from results where status_flag = 'N' and created_date >= '9/25/2012'
Then, in either a transformer step or the On-Update section of your Source, you would do something like:
Update results
set status_flag = 'Y' where id = ${id}
If you do not do something like this and you have Mirth polling at a certain interval, it will just keep pulling the same records over and over.
You have to set your source connector type to Database Reader.
You have to set your destination connector type to File Writer.
Then you can write your data to a file you have write access to.
While creating the HL7 template, you have to use the following code in the outbound message template:
MSH|^~\&|||
Thanks
Krishna
