What is the hierarchical structure of this XML output in the Map Editor? I want to convert CSV data to XML by mapping

I need to produce this output by converting a CSV file in the Map Editor. I want to understand how to get this output after adding the CSV file on the input side of the Map Editor.
<code description="1" receiverCode="XYZ" senderCode="ABC" text1="Hydrabad" text2="Mumbai"/>
<code description="2" receiverCode="PPZ" senderCode="ABC" text1="Delhi" text2="Mumbai"/>
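For reference, the hierarchy here is flat: a sequence of code elements, one per CSV row, with each CSV column mapped to an attribute of that element. Outside the Map Editor, a minimal Python sketch (assuming a CSV whose columns are description, receiverCode, senderCode, text1, text2, and a hypothetical wrapper element named codes) that produces the same shape:
import csv
import xml.etree.ElementTree as ET

# Assumed CSV layout (not given in the question):
# description,receiverCode,senderCode,text1,text2
# 1,XYZ,ABC,Hydrabad,Mumbai
# 2,PPZ,ABC,Delhi,Mumbai
root = ET.Element("codes")  # hypothetical wrapper element, not part of the question
with open("input.csv", newline="") as f:
    for row in csv.DictReader(f):
        # each CSV row becomes one <code .../> element,
        # each column becomes an attribute on that element
        ET.SubElement(root, "code", row)
ET.dump(root)  # print the generated XML
The Map Editor equivalent would be a repeating record on the CSV input side mapped to a repeating code element on the XML output side, with each field mapped to one attribute.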

Related

How can pandoc Lua filters be used to add the title as a level-1 header?

I'd like to use a pandoc Lua filter when converting multiple markdown files to PDF. I'd like the title of each individual markdown file to be used as a chapter (first-level header).
I studied the existing examples, and I think this approach is close to what I need: basically I need to add pandoc.Header(1, doc.meta.title) to each of my markdown files. However, I'm struggling to write the Lua filter and make it work.
I think this question does something similar: "pandoc filter in lua and walk_block".
The pandoc command:
pandoc -N --lua-filter add-title.lua blog/*.md --pdf-engine=xelatex --toc -s -o my_book.pdf
The add-title.lua (this is simply wrong: it raises no exceptions, but nothing happens to the output):
function add_header (header)
  return {
    {Header = pandoc.Header(1, meta.title)}
  }
end
Input files:
1.md
---
title: Topic1
---
## Sample Header from file 1.md
text text text
2.md
---
title: Topic2
---
## Sample Header from file 2.md
text text text
Expected output equivalent to this markdown (but my final format is pdf)
---
title: Title from pandoc latex variable
---
# Topic1
## Sample Header from file 1.md
text text text
# Topic2
## Sample Header from file 2.md
text text text
I think the key problem is that Lua filters only run once the full set of documents has been parsed into a single AST. The individual files are effectively concatenated prior to parsing, creating a single document with a single set of metadata, so the individual title settings in the YAML metadata blocks are overridden before the filter has a chance to run.

Assuming that you need to get the heading from each separate metadata block (and can't just put the header in directly), this means that you cannot let pandoc join the files; you will need to read and parse each file separately. Fortunately, this is pretty easy with filters.
The first step is to make a single reference file that contains links to all of the other files.
---
title: Combined title
---
![First file](1.md){.markdown}
![Second file](2.md){.markdown}
Note that the links are specified using images with a special class .markdown. You could use some other method, but images are convenient because they support attributes and are easy to recognise.
Now we just need a filter that will replace these images with the parsed elements from the linked markdown file. We can do this by opening the files from lua, and parsing them as complete documents with pandoc.read (see https://www.pandoc.org/lua-filters.html#module-pandoc). Once we have the documents we can read the title from the metadata and insert the new header. Note that we apply the filter to a Para element rather than the Image itself. This is because pandoc separates Block elements from Inline elements, and the return value of the filter must be of the same type. An Image filter cannot return the list of blocks parsed from the file but a Para can.
So here is the resulting code.
function Para(elem)
  if #elem.content == 1 and elem.content[1].t == "Image" then
    local img = elem.content[1]
    if img.classes:find('markdown', 1) then
      local f = io.open(img.src, 'r')
      local doc = pandoc.read(f:read('*a'))
      f:close()
      -- now we need to create a header from the metadata
      local title = pandoc.utils.stringify(doc.meta.title) or "Title has not been set"
      local newHeader = pandoc.Header(1, {pandoc.Str(title)})
      table.insert(doc.blocks, 1, newHeader)
      return doc.blocks
    end
  end
end
If you run it on the combined file with
pandoc -f markdown -t markdown -i combined.md -s --lua-filter addtitle.lua
you will get
---
title: Combined title
---
Topic1
=======
Sample Header from file 1.md
----------------------------
text text text
Topic2
=======
Sample Header from file 2.md
----------------------------
text text text
as required.
Note that any other yaml metadata in the included files is lost. You could capture anything else by taking it from the individual meta object and placing it into the global one.

F#: convert XML to Excel dynamically with XmlProvider

I am trying to convert XML files to Excel.
I want to use the XmlProvider type provider, but it seems that it cannot be generic.
I have several XML files that are similar, but with some small differences.
For example:
The first XML file:
<?xml version="1.0"?>
<OCEXPORT>
<TABLE>
<subjects>
<sid>510</sid>
<secondary_label></secondary_label>
<person_id></person_id>
<study>US-BID-018</study>
<study_site>Hospital Vall d&apos;Hebron Barcelona</study_site>
<group></group>
<group_class></group_class>
<gender></gender>
<date_of_birth></date_of_birth>
<date_created>2016-06-15 13:35:12.435342+00</date_created>
<enrollment_date>2016-06-15 13:35:12.437+00</enrollment_date>
</subjects>
<subjects>
<sid>509</sid>
<secondary_label></secondary_label>
<person_id></person_id>
<study>US-BID-018</study>
<study_site>Hospital Vall d&apos;Hebron Barcelona</study_site>
<group></group>
<group_class></group_class>
<gender></gender>
<date_of_birth></date_of_birth>
<date_created>2016-06-15 11:20:02.662543+00</date_created>
<enrollment_date>2016-06-15 11:20:02.664+00</enrollment_date>
</subjects>
</TABLE>
</OCEXPORT>
The second one:
<?xml version="1.0"?>
<OCEXPORT>
<TABLE>
<subjects1>
<sid>509</sid>
<secondary>2</secondary>
</subjects1>
<subjects1>
<sid>509</sid>
<secondary>1</secondary>
</subjects1>
</TABLE>
</OCEXPORT>
The first level (<TABLE>) is the same in each file, but the second level (<subjects>/<subjects1>) and the third level differ in both the names and the number of nodes.
I need to build an Excel file with one sheet per second-level node name (for example, subjects).
The column names and values should come from the third-level nodes (sid, secondary_label, etc.).
Can I use XmlProvider to parse the XML files and export them to an Excel file?
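Independent of XmlProvider, the intended mapping (one sheet per second-level node name, one column per third-level node name) can be sketched generically. A minimal Python illustration, assuming the openpyxl package and assuming that sibling records with the same tag share the same child elements in the same order:
import xml.etree.ElementTree as ET
from openpyxl import Workbook  # assumed third-party dependency

def xml_to_workbook(path):
    table = ET.parse(path).getroot().find("TABLE")
    wb = Workbook()
    wb.remove(wb.active)              # drop the default empty sheet
    sheets = {}                       # second-level tag name -> worksheet
    for record in table:              # e.g. <subjects> or <subjects1>
        if record.tag not in sheets:
            ws = wb.create_sheet(record.tag)
            ws.append([child.tag for child in record])  # header row from the third level
            sheets[record.tag] = ws
        sheets[record.tag].append([child.text or "" for child in record])
    return wb

# xml_to_workbook("export1.xml").save("export1.xlsx")
An F# version would follow the same shape, but note that XmlProvider infers a fixed schema from its sample document, so a structure that varies from file to file is usually easier to handle with the plain XDocument/XElement API.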

Reading XML file using DXL(DOORS Scripting) and exporting to EXCEL dynamically

I need to read an XML file using DXL scripting and export it to Excel.
Is there any DXL script already available for this?
Please help me out.
Thanks,
Sri
It can be done, but there is no quick built-in DXL function to read XML; you have to build your own. Exporting to Excel is also possible, either by going through comma-separated values or by using OLE methods.
There is a library created by Mathias Mamtsch; using it you can create a DOM from an input XML string, then read and write attributes and values, iterate over attributes and tags, and so on.
http://sourceforge.net/projects/dxlstandardlib
In the DXL world you have to write almost everything on your own.
The library is split across several helper DXL files, so to use it you have to include more than one additional DXL file.
For example one of the function there:
/*! \memberof XMLDocument
\return true - if the XML document could be loaded and parsed correctly. 'false' if there was an error loading the document.
\param xd The XMLDocument into which the contents shall be loaded.
\param s the contents of the XML document to load
\brief This function loads the XML content of a string into an XMLDocument object.
*/
bool loadXML (XMLDocument xd, string s) {
    bool result = false
    checkOLE ( oleMethod (getOleHandle_ xd, "LoadXML", oleArgs <- s, result) )
    return result
}
These functions use OLE objects, so I think they are Windows-specific and the code will not work under Linux, but I am not sure.

Object Detection using FERNS

I am new to image processing and have just started working with OpenCV. I was trying to do object detection using a GenericDescriptorMatcher of type FERN, but I don't know what to pass as params_filename. What should the format of the file be? What parameters do I write in the file, and in what format?
Ptr<GenericDescriptorMatcher> descriptorMatcher = GenericDescriptorMatcher::create("FERN", params_filename);
The opencv-2.x.x/samples/cpp directory should contain an example version of 'fern_params.xml', which as of opencv-2.4.8 contains the following XML content:
<?xml version="1.0"?>
<opencv_storage>
<nclasses>0</nclasses>
<patchSize>31</patchSize>
<signatureSize>INT_MAX</signatureSize>
<nstructs>50</nstructs>
<structSize>9</structSize>
<nviews>1000</nviews>
<compressionMethod>0</compressionMethod>
</opencv_storage>

How do I batch extract metadata from DM3 files using ImageJ?

How can you extract metadata for a batch of images? My first thought was to record a macro and then modify it to operate on a list of file names.
In that vein, I tried recording a macro doing something like this:
Ctrl-o                     # Open a file
12.dm3 <Enter>             # Select file to open
Ctrl-i                     # Open metadata in a new window
Ctrl-s                     # Save file
Info for 12.txt <Enter>    # Name of file being saved
Ctrl-w                     # Close current window
Ctrl-w                     # Close current window
These steps work when I do them manually. This results in the following macro, which seems to be missing most of what I tried to record:
open("/path/to/file/12.dm3");
run("Show Info...");
run("Close");
run("Close");
I also tried modifying a Jython script that is supposed to extract dimension metadata from an image:
from java.io import File
from loci.formats import ImageReader
from loci.formats import MetadataTools
import glob

# Create output file
outFile = open('./pixel_sizes.txt', 'w')

# Get list of DM3 files
filenames = glob.glob('*.dm3')

for filename in filenames:
    # Open file
    file = File('.', filename)
    # parse file header
    imageReader = ImageReader()
    meta = MetadataTools.createOMEXMLMetadata()
    imageReader.setMetadataStore(meta)
    imageReader.setId(file.getAbsolutePath())
    # get pixel size
    pSizeX = meta.getPixelsPhysicalSizeX(0)
    # close the image reader
    imageReader.close()
    outFile.write(filename + "\t" + str(pSizeX) + "\n")

# Close the output file
outFile.close()
(Gist).
You could use getImageInfo() instead of run("Show Info..."). This creates a string in the macro containing the run("Show Info...") output, which can then be modified as you like. See http://rsb.info.nih.gov/ij/developer/macro/functions.html#getImageInfo for more information.
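For a batch version driven from a script rather than the macro recorder, here is a rough Jython sketch in the same spirit as the script above. Note that it reads the text shown by Show Info... via ImagePlus.getInfoProperty() rather than the macro function getImageInfo(), and whether the DM3 reader populates that property for your files is an assumption worth checking:
import glob
from ij import IJ

for filename in glob.glob('*.dm3'):
    imp = IJ.openImage(filename)      # open the DM3 file without showing a dialog
    info = imp.getInfoProperty()      # metadata string shown by Show Info..., may be None
    out = open('Info for %s.txt' % filename, 'w')
    out.write(info if info is not None else '')
    out.close()
    imp.close()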
