Parsing and sending data from a DICOM image in .NET Core - asp.net-mvc

I am currently working on a complete DICOM web application based on .NET Core + PostgreSQL and the OHIF viewer (to render DICOM images).
I've built a database with tables such as Patient, Study, etc., storing attributes such as PatientName, PatientDOB, etc. Right now, the JSON I return mirrors those column names:
"PatientName" : "temp"
"PatientDOB" : "2332"
..
But DICOM viewers follow a standard in which they receive JSON objects like this:
{
"0020000D": {
"vr": "UI",
"Value": [ "1.2.392.200036.9116.2.2.2.1762893313.1029997326.945873" ]
}
}
So I want to map my JSON input/output in such a way that responses are returned in the DICOM format above, while incoming data is stored under attribute (column) names rather than tags.
I am pretty new to .NET Core and DICOMweb, so how should I proceed? Also, I am using fo-dicom to read the data from the DICOM image.
Please provide some hint/code that I can use.

You will probably store only a few DicomTags in your database (the tags you need for querying), but the viewer may want to have all the tags as JSON. So I would not try to map your database JSON into DICOM JSON; instead, I would use fo-dicom to generate the JSON from the DICOM file itself.
You need to add the NuGet package fo-dicom.json, and then you can call
DicomDataset dataset = ... // wherever you get your DICOM file
string json = JsonConvert.SerializeObject(dataset, new JsonDicomConverter());
or, the other way round, if you want to convert such a DICOM-conformant JSON into a DicomDataset:
string json = ... // wherever you get the json from
DicomDataset dataset = JsonConvert.DeserializeObject<DicomDataset>(json, new JsonDicomConverter());
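Putting it together in ASP.NET Core, a minimal sketch of a controller that returns the DICOM-conformant JSON could look like the following; note the route and the file-lookup helper are assumptions, so adapt them to however you store your files:
using System;
using Dicom;
using Dicom.Serialization;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;

[Route("studies")]
public class StudiesController : Controller
{
    // GET studies/{studyUid}/metadata - mirrors the shape of a DICOMweb WADO-RS metadata endpoint
    [HttpGet("{studyUid}/metadata")]
    public IActionResult GetMetadata(string studyUid)
    {
        // Hypothetical helper: resolve the stored file path from your PostgreSQL tables
        string path = GetPathForStudy(studyUid);
        DicomDataset dataset = DicomFile.Open(path).Dataset;
        // fo-dicom.json serializes the dataset into the tag/vr/Value JSON the viewer expects
        string json = JsonConvert.SerializeObject(dataset, new JsonDicomConverter());
        return Content(json, "application/dicom+json");
    }

    private string GetPathForStudy(string studyUid)
    {
        throw new NotImplementedException(); // query your Study table here
    }
}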

The OHIF Viewer supports the standard DICOMweb WADO-RS JSON metadata format in addition to the custom format you mentioned in your question. This means you can use any DICOMweb server, such as Orthanc, DCM4CHE, or DICOMcloud.
DICOMcloud may fit your scenario better, as it uses fo-dicom. However, it currently only supports MS SQL Server and .NET 4.6 (there is an effort to support MySQL, but it is not 100% complete).
If you still want to write your own, you can look at how it is implemented and adapt it to your own solution.
[Disclosure] I am the author of DICOMcloud

Related

How to read a .eeg file in the BrainVision Core Data Format in Python?

I have a dataset in the BrainVision Core Data Format, which consists of a header file (.vhdr), a marker file (.vmrk), and a raw EEG data file (.eeg) for each subject. I know that Python has the mne.io.read_raw_brainvision() function, which reads the header file and returns a Raw object containing the BrainVision data. I do not know how to proceed after that, or how I can read the .eeg file. Thanks
Overall, MNE-Python has a great tutorial on handling raw EEG data: https://mne.tools/stable/auto_tutorials/raw/10_raw_overview.html#the-raw-data-structure-continuous-data
You can follow that tutorial, loading your files with mne.io.read_raw_brainvision() as shown in this more specific tutorial, which happens to work with sample data in the BrainVision Core Data Format: https://mne.tools/stable/auto_tutorials/time-freq/50_ssvep.html#frequency-tagging-basic-analysis-of-an-ssvep-vssr-dataset
Note that you point read_raw_brainvision() at the .vhdr header file; the .vmrk and .eeg files it references are loaded automatically, so you never read the .eeg file directly.

Convert JSON to XML format through Azure Logic app

Scenario 1 - I have some XML files stored on FTP. Those files are fetched by the FTP connector in an Azure Logic App. I then read those files by parsing them into JSON and storing the objects in string variables for my processing. After processing, I want to convert that JSON back to XML for the output.
Scenario 2 - I am merging multiple XML files (all of the same structure) into a single one. After merging, I get the output in JSON format, but I then want to convert it into XML format.
So please suggest how I can convert JSON to XML using only a Logic App and/or an Azure Function.
Try the xml() function.
[screenshot of an xml() function example in a Logic App]
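In expression form, a minimal sketch (assuming your JSON sits in a string variable named MyJson, a hypothetical name) would be:
xml(json(variables('MyJson')))
Here json() parses the string into an object, and xml() then converts that object to XML.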
Make sure that your JSON input is structured suitably for conversion to XML; for example, you should have only a single property at the top level, which will become your XML root element.
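If you would rather do the conversion in an Azure Function, here is a minimal C# sketch using Json.NET's JsonConvert.DeserializeXmlNode; the function name and the "Root" wrapper element are assumptions:
using System.IO;
using System.Threading.Tasks;
using System.Xml;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Newtonsoft.Json;

public static class JsonToXmlFunction
{
    [FunctionName("JsonToXml")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        // Read the raw JSON payload sent by the Logic App
        string json = await new StreamReader(req.Body).ReadToEndAsync();
        // Json.NET converts a JSON document into an XmlDocument; "Root" becomes the root element
        XmlDocument doc = JsonConvert.DeserializeXmlNode(json, "Root");
        return new ContentResult { Content = doc.OuterXml, ContentType = "application/xml" };
    }
}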

Parsing and indexing documents with Apache Tika

I'm trying to parse and index .doc files into Elasticsearch with Apache Tika.
Actually, my project is to build a resume search engine for my company.
Since we have a standardized resume format, I would like to parse these resumes using Apache Tika in Java.
Basically, I have a .doc file like this:
Jean Wisser avenue des Ternes
75017 Paris
Business Intelligence Consultant
Skills : Qlikview, SAS, Cognos, ...
Companies : IBM, Orange, ...
And I would like to extract and parse the content to index it in Elasticsearch like this:
XContentBuilder builder = jsonBuilder()
.startObject()
.field("Name", "Jean")
.field("Lastname", "Wisser")
.startObject("Adress")
.field("Street", "avenue des Ternes")
.field("City", "Paris")
......
.endObject()
.endObject();
What is the best way to achieve this ?
Should I use Tika, POI or something else ?
I'm not sure I fully understand your question, but if you want a tool that can automatically extract each type of information from the .doc file, Tika can't do that for you (unless the values are already in the document's metadata). You need to prepare your data first: extract the text with Tika, then write your own program to parse that text and pull out each field. Once you have extracted the data, you can index the document with the fields you need.

How to put tweets in Avro files and save them in HDFS using Spring XD?

How can I put tweets into Avro files and save them in HDFS using Spring XD? The docs only tell me to do the following:
xd:>stream create --name mydataset --definition "time | hdfs-dataset --batchSize=20" --deploy
This works fine for the "time" source, but if I want to store tweets as Avro, it only puts the raw JSON strings into the Avro files, which is pretty useless.
I could not find any detailed information about how to tell Spring XD to apply a specific Avro schema (.avsc) or convert the JSON string to a Tweet object.
Do I have to build a custom converter?
Can somebody please help? This is driving me insane...
Thanks.
According to the hdfs-dataset documentation, the Kite SDK is used to infer the Avro schema based on the object you pass into it. From its perspective, you passed in a String, which is why it behaves as it does. Since there is no mechanism to explicitly pick a schema for hdfs-dataset to use, you'll have to create a Java class representing the tweet (or use the Twitter4J API), turn the tweet JSON into a Java object (a custom processor will be necessary), and output that to your sink. hdfs-dataset will then use a schema based on your class.

XML Schema - Allow Invalid Dates

Hi, I am using BizTalk's flat file parser (using an XML schema) to parse a CSV file. The CSV file sometimes contains an invalid date - 1/1/1900. Currently the schema validation for the flat file fails because of the invalid date. Is there any setting that I can use to allow the date through?
I don't want to read the date as a string, but I might be forced to if there is no other way.
You could change it to a valid XML dateTime (e.g., 1900-01-01T00:00:00Z) using a custom pipeline component (see examples here). Or you can just treat it as a string in your schema and deal with converting it later in a map, in an orchestration, or in a downstream system.
Here is a C# snippet that you could put into a scripting functoid inside a BizTalk map to convert the string to an xs:dateTime, though you'll need to do some more work if you want to handle the potential for bad input data:
public string ConvertStringDateToDateTime(string inputDate)
{
    // "s" yields the sortable ISO 8601 pattern (e.g., 1900-01-01T00:00:00) expected for xs:dateTime
    return DateTime.Parse(inputDate).ToString("s", System.Globalization.DateTimeFormatInfo.InvariantInfo);
}
Also see this blog post if you're looking to do that in multiple places in a single map.
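As a starting point for that "more work" on bad input, here is a hedged sketch of a defensive version; the choice to return an empty string on unparsable input is an assumption about what your downstream map can tolerate:
public string ConvertStringDateToDateTimeSafe(string inputDate)
{
    DateTime parsed;
    // TryParse avoids throwing on malformed input such as empty fields or junk text
    if (DateTime.TryParse(inputDate,
                          System.Globalization.CultureInfo.InvariantCulture,
                          System.Globalization.DateTimeStyles.None,
                          out parsed))
    {
        return parsed.ToString("s", System.Globalization.DateTimeFormatInfo.InvariantInfo);
    }
    return string.Empty; // assumption: downstream treats empty as "no date"
}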
