Is there a way to locate a segment line in an 837?

I'm having some issues isolating errors in my 837. The system that's interpreting my 837 reports the segment where the error is found, but since I have so many claims (and therefore segments), I can't just count the segments until I get to the one I need.
Is there some way of finding a specific segment line? I know the general area the segment is in (based on the account number the error is listed under), but I have no way of knowing which of the segments has the error.
Here's an example of what I mean. There are revenue codes listed after the SV2, then a corresponding code, then the cost of that code.
SV2*0450*HC:96368*100.00*UN*1~
DTP*472*D8*20171204~
LX*13~
SV2*0450*HC:96371*700.00*UN*5~
DTP*472*D8*20171204~
LX*14~
SV2*0450*HC:96372*50.00*UN*1~
DTP*472*D8*20171204~
LX*15~
Thanks.

Please take a look at X12 Parser.
loop.getLoop("2400", 0).getSegment("SV1").getElementValue("SV101")
can get you the value needed.
For more examples, look at X12ReaderTest.
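For instance, to list every service line in the file with enough detail to match it against the error report, something along these lines should work (a minimal sketch built on the chain above; the FileType constant and file name are assumptions, and findLoop may be named differently in your version, so check X12ReaderTest):

import java.io.File;
import com.imsweb.x12.Loop;
import com.imsweb.x12.reader.X12Reader;
import com.imsweb.x12.reader.X12Reader.FileType;

public class FindServiceLine {
    public static void main(String[] args) throws Exception {
        // Parse the 837 into a loop hierarchy instead of counting segments by hand.
        X12Reader reader = new X12Reader(FileType.ANSI837_5010_X223, new File("claims.837"));

        for (Loop transaction : reader.getLoops()) {
            // Each 2400 loop is one service line (the LX/SV2/DTP triplets above).
            for (Loop serviceLine : transaction.findLoop("2400")) {
                System.out.println("LX " + serviceLine.getSegment("LX").getElementValue("LX01")
                        + ": revenue " + serviceLine.getSegment("SV2").getElementValue("SV201")
                        + ", charge " + serviceLine.getSegment("SV2").getElementValue("SV203"));
            }
        }
    }
}

Printing the LX counter next to the revenue code and charge lets you find the flagged segment by value rather than by counting segments.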

Related

Best Grasshopper plugin to analyse floor plans

I'm trying to figure out the best way to analyse a Grasshopper/Rhino floor plan. I am trying to create a room map to determine how many doors it takes to reach an exit in a residential building. The inputs are the room curves, names and doors.
I have tried to use Space Syntax or SYNTACTIC, but some of the components are missing. A lot of the plugins I have been looking at are good at creating floor plans but not at analysing them.
Your help would be greatly appreciated :)
You could create some sort of spine that goes through the rooms and passes only through doors, then do some path-finding across the topology, counting how many "hops" you need to reach the exit.
One way to get the topology is to create a data structure (a tuple or KeyValuePair) that holds the curve (the room) and a point (the door). Then loop each room against every other and check whether the door point of one room is closer to the other room than some threshold; if it is, store the relationship as a graph. (In the abstract sense, you don't really need to make lines out of it, but if you plan to use other plugins for path-finding this can be useful.) Finally, run some path-finding (Dijkstra's, A*, etc.) to find the shortest distance, as in the sketch below.
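If it helps, here is the hop-counting idea as a small, self-contained sketch (written in Java purely to illustrate the algorithm; the room names and adjacency are hypothetical, and inside Grasshopper you would build the same thing in a C# or GhPython script component):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DoorHops {

    // Breadth-first search over the room graph: each edge is a shared door,
    // so the BFS depth at which the exit is reached equals the door count.
    static int doorsToExit(Map<String, List<String>> adjacency, String start, String exit) {
        Map<String, Integer> hops = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        hops.put(start, 0);
        queue.add(start);
        while (!queue.isEmpty()) {
            String room = queue.poll();
            if (room.equals(exit))
                return hops.get(room);
            for (String next : adjacency.getOrDefault(room, List.of())) {
                if (!hops.containsKey(next)) {
                    hops.put(next, hops.get(room) + 1);
                    queue.add(next);
                }
            }
        }
        return -1; // exit not reachable from this room
    }

    public static void main(String[] args) {
        // Toy topology: bedroom - hallway - living room - exit.
        Map<String, List<String>> rooms = Map.of(
                "bedroom", List.of("hallway"),
                "hallway", List.of("bedroom", "livingRoom"),
                "livingRoom", List.of("hallway", "exit"),
                "exit", List.of("livingRoom"));
        System.out.println(doorsToExit(rooms, "bedroom", "exit")); // prints 3
    }
}

Because every door counts as one hop, a plain breadth-first search already gives the shortest path; Dijkstra's or A* only become necessary if you weight the edges, e.g. by walking distance.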
As for SYNTACTIC: if copying the GHA (after unblocking it) from the installation path to the special components folder (or pointing to the folder from _GrasshopperDeveloperSettings) doesn't work, tick the "Memory load *.GHA assemblies using COFF byte arrays" option in _GrasshopperDeveloperSettings.
Note that SYNTACTIC won't give you any automatic topology.
If you need some pseudo-code just write a comment and I'd be happy to help.

Get some information from HEVC reference software

I am new to HEVC and I am currently working through the reference software (looking at intra prediction right now).
I need to get the following information after encoding:
the CU structure for a given CTU
for each CU during the calculations, its information (e.g. QP value, selected mode for luma, selected mode for chroma, whether the CU is in the final CU structure of the CTU-split decision, etc.)
I know the CTU decisions are made when m_pcCuEncoder->compressCtu( pCtu ) is called in TEncSlice.cpp. But where exactly can I get this specific information? Can someone help me with this?
P.S. I am learning C++ too (I have a Java background).
EDIT: This post is a solution for the encoder side. However, the decoder side solution is far less complex.
Getting CTU information (partitioning etc.) is a bit tricky at the encoder if you are new to the code, but I'll try to help you with it.
Everything I am going to tell you is based on the JEM code and not HM, but I am pretty sure you can apply it to HM too.
As you might have noticed, there are two completely separate phases for compression/encoding of each CTU:
The RDO phase: first there is the Rate-Distortion Optimization loop to "make the decisions". In this phase, literally all possible combinations of the parameters are tested (e.g. different partitionings, intra modes, filters etc.). At the end of this phase, the RDO determines the best combination and passes it to the second phase.
The encoding phase: Here the encoder does the actual final encoding step. This includes writing all the bins into the bitstream, based on the parameters determined during the RDO phase.
In the CTU level, these two phases are performed by the m_pcCuEncoder->compressCtu( pCtu ) and the m_pcCuEncoder->encodeCtu( pCtu ) functions, respectively, both in the compressSlice() function of the TEncSlice.cpp file.
Given the above information, you must look for what you need in the second phase and not the first (you may already know these things, but I suspect you might be looking at the first phase).
So, now this is my suggestion for getting your information. It's not the best way to do it, but it is the easiest to explain here.
You first go to this point in your HM code:
compressGOP() -> encodeSlice() -> encodeCtu() -> xEncodeCU()
Then you find the line where the prediction mode (intra/inter) is encoded:
m_pcEntropyCoder->encodePredMode()
At this point, you have access to the pcCU object, which contains all the final decisions made during the first phase, including the information you are looking for. At this point of the code you are dealing with a single CU, not the entire CTU. But if you want the information for the entire CTU, you may go back to
compressGOP() -> encodeSlice() -> encodeCtu()
and find the line where the xEncodeCU() function is called for the first time. There, you will have access to the pCtu object.
Reminder: each TComDataCU object (pcCU if you are at the CU level, or pCtu if you are at the CTU level) of size WxH is split into NumPartition=(W/4)x(H/4) partitions of size 4x4. Each partition is accessible by an index (uiAbsPartIdx) which indicates its Z-scan order. For example, the uiAbsPartIdx of the partition at <x=8,y=0> is 4.
Now, you do the following steps:
Get the number of partitions (NumPartition) within your pCtu by calling pCtu->getTotalNumPart().
Loop over all NumPartition partitions and call the functions pCtu->getWidth(idx), pCtu->getHeight(idx), pCtu->getCUPelX(idx) and pCtu->getCUPelY(idx), where idx is your loop iterator. These functions return the following information for the CU coinciding with the 4x4 partition at idx: width, height, positionX, positionY. [Both positions are relative to pixel <0,0> of the frame.]
The above information is enough to derive the CTU partitioning of the current pCtu! So the last step is to write a piece of code to do that, along the lines of the sketch below.
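A rough sketch of that loop (HM/JEM-style C++; the accessor names follow the description above, but exact signatures vary between HM and JEM versions, so adapt it to your tree):

Void dumpCtuPartitioning( TComDataCU* pCtu )
{
  // Step 1: the number of 4x4 partitions in this CTU, i.e. (W/4)x(H/4).
  UInt uiNumPart = pCtu->getTotalNumPart();

  // Step 2: each accessor returns the property of the CU that covers
  // the 4x4 partition at Z-scan index uiIdx.
  for( UInt uiIdx = 0; uiIdx < uiNumPart; uiIdx++ )
  {
    UInt uiWidth    = pCtu->getWidth ( uiIdx );
    UInt uiHeight   = pCtu->getHeight( uiIdx );
    UInt uiLumaMode = pCtu->getIntraDir( CHANNEL_TYPE_LUMA, uiIdx );
    Int  iQp        = pCtu->getQP( uiIdx );  // JEM variants may take a channel type

    printf( "part %3u: %ux%u lumaDir=%u qp=%d\n",
            uiIdx, uiWidth, uiHeight, uiLumaMode, iQp );
  }
}

Consecutive partitions that report the same width/height (plus the positions from getCUPelX()/getCUPelY()) identify the individual CUs, so the CTU partitioning can be reconstructed from this dump.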
This was an example of how to extract the CTU partitioning information during the second phase (i.e. the encoding phase). You may call other suitable functions the same way to get the rest of the information in your second question. For example, to get the selected luma intra mode, call pCtu->getIntraDir(CHANNEL_TYPE_LUMA, idx) instead of the getWidth()/getHeight() functions, or pCtu->getQP(CHANNEL_TYPE_LUMA, idx) to get the QP value.
You can always find a list of functions that provide useful information at the pCtu level, in the TComDataCU class (TComDataCU.cpp).
I hope this helps you. If not, let me know!
Good luck,

How to change data length parameter in maxima software?

I need to use the Maxima software to deal with data. I try to read data from a text file constructed as
1 2 3
11 22 33
etc.
The following commands load the data successfully:
load(numericalio);
read_matrix("path to the file");
The problem arises when I apply them to a more realistic (larger) data set. In that case the message "Expression longer than allowed by the configuration setting" appears.
How can I overcome this problem? I cannot see any relevant option in the configuration menu. I would be grateful for advice.
I ran into the same error message today, and it seems to be related to the size of the output that wxMaxima receives from the Maxima executable.
If you wish to display the output regardless, you can change it in the configuration here:
Edit>Configure>Worksheet>Show long expressions
Note that showing a massive expression or amount of data may dramatically slow the program down, so consider hiding the output (use a $ instead of a ; at the end of your lines) if you don't need to visualize the data, as in the example below.
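For example, with the commands from the question (the file path is a placeholder), terminating each line with $ loads the matrix without echoing it to the worksheet:

load(numericalio)$
M : read_matrix("path to the file")$  /* $ suppresses printing the large matrix */
M[1];                                 /* inspect a single row when needed */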

Why does ELKI need db.in file in addition to distance matrix? Also what should db.in file contain?

I tried to follow this tutorial on using ELKI with pre-computed distances for clustering.
http://elki.dbs.ifi.lmu.de/wiki/HowTo/PrecomputedDistances
I used the following set of command line options:
-dbc.filter FixedDBIDsFilter -dbc.startid 0 -algorithm clustering.OPTICS
-algorithm.distancefunction external.FileBasedDoubleDistanceFunction
-distance.matrix /path/to/matrix -optics.minpts 5 -resulthandler ResultWriter
ELKI fails with a configuration error saying a db.in file is needed for the computation.
The following configuration errors prevented execution:
No value given for parameter "dbc.in":
Expected: The name of the input file to be parsed.
No value given for parameter "parser.distancefunction":
Expected: Distance function used for parsing values.
My question is: what is the db.in file? Why should I provide it in addition to the distance matrix file, since the pair-wise distance matrix completely specifies all the information about the point cloud? (I also don't have access to any information other than the pair-wise distances.)
What should I do about db.in? Should I override it, specify some dummy information, etc.? Kindly help me understand.
Thank you.
This is documented in the ELKI HowTos:
http://elki.dbs.ifi.lmu.de/wiki/HowTo/PrecomputedDistances
Using without primary data
-dbc DBIDRangeDatabaseConnection -idgen.count 100
However, there is a bug (a patch is on the HowTo page and will be in the next release), so right now you can't fully use this; as a workaround you can use a text file that enumerates the objects.
The reason for this is that ELKI is designed to work on multi-relational data; it's not just processing matrices. Some algorithms may e.g. need a geographic representation of an object, some measurements for that object, and a label for evaluation. That is three relations.
What the DBIDRange data source essentially does is create a single "fake" relation that contains just the DBIDs 0 to 99. For algorithms that don't need actual data, only distances (e.g. LOF, DBSCAN, or OPTICS), object IDs and a distance matrix are sufficient.
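Putting the two parts together, the invocation would then look something like this (this only recombines the options already shown above, and as noted it needs the patched release, or the enumerating text file workaround, to actually run):

-dbc DBIDRangeDatabaseConnection -idgen.count 100
-algorithm clustering.OPTICS
-algorithm.distancefunction external.FileBasedDoubleDistanceFunction
-distance.matrix /path/to/matrix
-optics.minpts 5 -resulthandler ResultWriter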

Assigning labels to triples

I am currently trying to do stream reasoning using Jena, so I want to be able to reason over a certain set of triples that occurred in a particular window of time, while also taking into account some static background knowledge.
My problem is that I have an ontology that I read from several files, and I want the triples I obtain to carry time stamps for when I receive them, which I thought I could do by applying labels to the triples (I am just giving them all random time stamps for the moment, as this is only a test).
While I didn't think this would be a problem, I am struggling at the initial step of just applying a label to an existing triple and selecting it. I cannot seem to access triples from the OntModel without transforming it into a Graph, and while I could then create quads with the extra value being some literal for time, I can't find a way to reason over such a graph.
Any light that people can shed on this issue would help. I hope I am being clear.
I'm not sure exactly how you're putting labels on your triples, but you can get Statements from an OntModel, and Statement implements FrontsTriple, through which you can access the corresponding Triple.
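For example (a minimal sketch; the side map is just one illustrative way to attach time stamps, not a Jena feature, and the file name is a placeholder; note that older Jena versions use com.hp.hpl.jena packages instead of org.apache.jena):

import java.util.HashMap;
import java.util.Map;

import org.apache.jena.graph.Triple;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Statement;
import org.apache.jena.rdf.model.StmtIterator;

public class TimestampedTriples {
    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel();
        model.read("file:background.owl");   // placeholder for your ontology files

        // Tag every triple with an arrival time in a side map, keeping the
        // OntModel itself (and hence Jena's reasoners) untouched.
        Map<Triple, Long> arrivalTime = new HashMap<>();
        StmtIterator it = model.listStatements();
        while (it.hasNext()) {
            Statement stmt = it.nextStatement();
            Triple t = stmt.asTriple();      // Statement implements FrontsTriple
            arrivalTime.put(t, System.currentTimeMillis());
        }
        System.out.println(arrivalTime.size() + " triples time-stamped");
    }
}

Since the time stamps live outside the model, you can still hand the OntModel to a reasoner as usual and filter the results afterwards against the time window you care about.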
