X12 making fields fit - edi

I need to create an X12 810 document implementation in order to receive invoices from my customer. There are certain fields I need, such as ShipNumber, ShipmentNumbers, and ProjectCode, that exist at the invoice header level, and fields like ExpenseType that exist at the invoice detail level. In the implementation guide I hand off to the customer, would it be correct to say that BIG09 should be ShipNumber and can be 10 characters long? Or that YNQ07 will be ProjectCode and can only take the values x, y, z?
I just want to know how to express the data I need in an X12 810 document. I'm confused because when I look at other companies' X12 810 implementations, I don't see how I can ask for what I need.

So I think you want to create an X12 810 file from the given data. First, understand each field in the X12 810 file. To create the file, build each segment as a string (put a newline at the end of each string), collect the strings in an array, then write each line from the array into a file and create a header to download this file as a text file. It's easy.
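The approach described above can be sketched as follows. Note that the element positions (e.g. the ship number in BIG09) follow the mapping proposed in the question, not any standard implementation guide; the guide agreed with the trading partner is authoritative, and the values here are placeholders.

```python
# Minimal sketch: build each segment as a string, collect them in a list,
# and join them with a segment terminator. Element positions (ship number
# in BIG09) follow the question's proposed mapping, not a validated guide.

def build_810(invoice_date, invoice_number, ship_number):
    """Assemble a fragment of an X12 810 as a list of segment strings."""
    segments = [
        "ST*810*0001",  # transaction set header
        # BIG: beginning segment for invoice; elements 3-8 left empty,
        # ship number placed in BIG09 as the question proposes
        f"BIG*{invoice_date}*{invoice_number}*******{ship_number}",
        "SE*3*0001",  # transaction set trailer (segment count, control no.)
    ]
    # "~" is a common segment terminator; the newline keeps the file readable
    return "~\n".join(segments) + "~\n"

doc = build_810("20240101", "INV-1001", "SHIP000001")
```

The resulting string can then be written to a file and served for download.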

Related

How can I read a selected column from a text file

Hello, I want to save lists of country, region, and city names in a text file and then show them in a table view, instead of getting the list from a web server. I think it would be best to show them from the text file rather than the web service as far as speed is concerned; please let me know what you think.
So I have planned to save countries in a countries.txt file like this:
id name
1 Algeria
2 America
and then regions like this
id region country_id
1 region-name 1
and cities the same way.
So, let's say I want to show the country names: I want to read the ids and names into separate variables, show the country names in the table view, and keep a reference to the ids.
Let's say you have a tab-delimited text file:
1<tab>Algeria<lf>
2<tab>America<lf>
...
Then it is trivial to read the text file as a string, split the string into lines, and split each line into its tab-delimited components. You can put that data into any data structure you find convenient: an array of dictionaries, perhaps, or an array of structs especially designed to fit this data. Through that data structure, you now have fast arbitrary access to your data.
I realize that your data is ultimately more complex than that, but my point is, you can certainly design a data structure to match it, and you can read the data and pour it into that structure. The only problem would be if the data is so huge that it can't readily be kept in memory, and it doesn't sound like you're going to have that issue.
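The read-split-and-pour approach above can be sketched like this (in Python for brevity; the file name and columns are the question's own examples):

```python
# Read the file as one string, split into lines, split each line on tabs,
# and keep the result in a list of dicts for convenient access by field.

def load_countries(text):
    """Parse tab-delimited "id<TAB>name" lines into a list of dicts."""
    rows = []
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        ident, name = line.split("\t")
        rows.append({"id": int(ident), "name": name})
    return rows

# In the real app, text would come from reading countries.txt
countries = load_countries("1\tAlgeria\n2\tAmerica\n")
```

For fast lookup by id you can then build a dictionary, e.g. `by_id = {c["id"]: c["name"] for c in countries}`.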

CSV bounded source with a custom line delimiter

I want to read a CSV file with a line delimiter other than the default. Each CSV record spans multiple lines, so TextIO.Read does not suffice.
Should I extend FileBasedSource, or is there an existing CsvBasedSource (with a custom line/field delimiter)?
I was also looking into the splitIntoBundles() API. XmlSource does not override isSplittable(), so it can be split into bundles, and I was wondering how XmlSource handles this, since the split is based only on desiredBundleSize and could therefore happen in the middle of a <record>.
You're correct that this will need a custom FileBasedSource implementation to work. Regarding XmlSource, record and root element names have to be unique (i.e. no other elements can have those names). We'll update the documentation to reflect that, and look at improving this in the future.
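The core of such a custom source is splitting the stream on the record delimiter rather than on newlines. A plain-Python sketch of just that record-splitting step, outside Beam, with a made-up `|~` delimiter chosen for illustration:

```python
import csv
import io

# Records are separated by a custom delimiter ("|~" here, an assumption for
# illustration) and each record may contain embedded newlines inside quoted
# fields, so splitting on "\n" would break records apart.

RECORD_DELIM = "|~"

def read_records(text):
    """Split on the custom record delimiter, then parse each record as CSV."""
    records = []
    for chunk in text.split(RECORD_DELIM):
        chunk = chunk.lstrip("\n")
        if not chunk.strip():
            continue
        # csv.reader handles the quoted embedded newlines within one record
        records.append(next(csv.reader(io.StringIO(chunk))))
    return records

rows = read_records('a,"first\nrecord",1|~b,"second\nrecord",2|~')
```

A real FileBasedSource would additionally have to handle resuming at a split offset, which is exactly why the split must land on a record boundary.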

How to create dynamic parser?

I want to create something I call a dynamic parser.
My project's input is a data file such as XML, Excel, or CSV; I must parse it, extract its records and fields, and finally save them to a SQL Server database.
My problem is that the fields of each record are dynamic, so I cannot write the parser at development time; I must build it at run time. By dynamic I mean that a user selects each record's fields using a web UI, so at run time I know the number of fields in each record and some information about each field, such as its name.
I discussed this type of project in question titled 'Design Pattern for Custom Fields in Relational Database'.
I also looked at parser generators, but I did not find enough information about them, and I don't know whether they are really related to my problem.
Is there any design pattern for this type of problem?
If you know the number of fields and the field names, then extract the data from the file and build a query using string concatenation.
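A sketch of that run-time query building (table and field names here are hypothetical). One caveat worth naming: concatenating only the identifier list while passing the values as query parameters avoids SQL injection from the data itself, so this sketch deviates from pure string concatenation for the values:

```python
# Build an INSERT statement at run time from field names discovered in the
# UI. Identifiers are concatenated; values go through "?" placeholders.

def build_insert(table, fields):
    cols = ", ".join(fields)
    placeholders = ", ".join(["?"] * len(fields))
    return f"INSERT INTO {table} ({cols}) VALUES ({placeholders})"

sql = build_insert("ImportedRecords", ["Name", "Age", "City"])
# e.g. execute with: cursor.execute(sql, ("Alice", 30, "Oslo"))
```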

Use CSV to populate Neo4j

I am very new to Neo4j and still learning this graph database. I need to load a CSV file into a Neo4j database. I have been trying for 2 days, but I couldn't find good information on reading a CSV file into Neo4j. Please suggest sample code or blog posts about reading a CSV file into Neo4j.
Example:
Suppose I have a CSV file laid out this way; how can I read it into Neo4j?
id name language
1 Victor Richards West Frisian
2 Virginia Shaw Korean
3 Lois Simpson Belarusian
4 Randy Bishop Hiri Motu
5 Lori Mendoza Tok Pisin
You may want to try https://github.com/sroycode/neo4j-import
This populates data directly from a pair of CSV files (entries must be COMMA separated).
To build (you need Maven):
sh build.sh
The nodes file has a mandatory field id and any other fields you like
NODES.txt
id,name,language
1,Victor Richards,West Frisian
2,Virginia Shaw,Korean
3,Lois Simpson,Belarusian
The relationships file has 3 mandatory fields: from, to, type. Assuming you have a field age (long integer) and a field info, the relations file will look like:
RELNS.txt
from,to,type,age#long,info
1,2,KNOWS,10,known each other from school
1,3,CLUBMATES,5,member of country club
Running:
sh run.sh graph.db NODES.txt RELNS.txt
will create graph.db in the current folder which you can copy to the neo4j data folder.
Note:
If you are using a Neo4j version later than 1.6.*, please add this line to conf/neo4j.properties:
allow_store_upgrade = true
Have fun.
Please take a look at https://github.com/jexp/batch-import
It can be used as a starting point.
There is nothing available to generically load CSV data into Neo4j because the source and destination data structures are different: CSV data is tabular whereas Neo4j holds graph data.
In order to achieve such an import, you will need to add a separate step to translate your tabular data into some form of graph (e.g. a tree) before it can be loaded into Neo4j. Taking the tree structure further as an example, the page below shows how XML data can be converted into Cypher which may then be directly executed against a Neo4j instance.
http://geoff.nigelsmall.net/xml2graph/
Please feel free to use this tool if it helps (bear in mind it can only deal with small files) but this will of course require you to convert your CSV to XML first.
Cheers
Nigel
There is probably no ready-made CSV importer for Neo4j; you must import it yourself.
I usually do it myself via Gremlin's g.loadGraphML() function.
http://docs.neo4j.org/chunked/snapshot/gremlin-plugin.html#rest-api-load-a-sample-graph
I parse my data with an external script into the XML syntax and load the resulting XML file. You can view the syntax here:
https://raw.github.com/tinkerpop/gremlin/master/data/graph-example-1.xml
Parsing a 100 MB file takes a few minutes.
In your case, what you need is a simple bipartite graph with vertices consisting of users and languages, and "speaks" edges between them. If you know some programming, create user nodes with parameters id and name, unique language nodes with parameter name, and relationships connecting each user with the particular language. Note that users can be duplicated, whereas languages can't.
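The bipartite model described above can also be sketched by generating Cypher MERGE statements per CSV row (a different route than the GraphML one this answer takes). MERGE keyed on name makes repeated languages collapse into a single node, while users are keyed by id:

```python
import csv
import io

# Generate one Cypher statement per CSV row: MERGE the user (by id), MERGE
# the language (by name, so duplicates collapse), and link them with SPEAKS.
# Interpolating values into Cypher strings is for illustration only; real
# code should pass them as query parameters.

def rows_to_cypher(csv_text):
    statements = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        statements.append(
            "MERGE (u:User {{id: {id}, name: '{name}'}}) "
            "MERGE (l:Language {{name: '{language}'}}) "
            "MERGE (u)-[:SPEAKS]->(l)".format(**row)
        )
    return statements

stmts = rows_to_cypher(
    "id,name,language\n"
    "1,Victor Richards,West Frisian\n"
    "2,Virginia Shaw,Korean\n"
)
```

The generated statements could then be executed against Neo4j through any client that speaks Cypher.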
I believe your question is too generic. What does your CSV file contain? The logical meaning of the contents of a CSV file can vary greatly. As an example, consider two columns of IDs representing entities connected to each other:
3921 584
831 9891
3841 92
...
In this case you could write a BatchInserter code snippet, which would import faster; see http://docs.neo4j.org/chunked/milestone/batchinsert.html.
Or you could import using the regular GraphDatabaseService with transaction sizes of a couple of thousand inserts for performance. See how to set up and use the graph DB at http://docs.neo4j.org/chunked/milestone/tutorials-java-embedded.html.

Delphi TClientDataSet, maximum number of fields per index

I have a simple Delphi (2007) procedure that, given a TDataSet and a (sub)list of fields, returns a new TClientDataSet with the distinct values from the given TDataSet.
This works quite well.
In my proc I used the TClientDataSet index to populate the distinct values.
It was fast and easy.
The problem is that a TClientDataSet index supports at most 16 fields.
If you add more, they are silently ignored.
I need more than 16 fields in the dataset (and thus in the index).
Is there any solution? Some hack?
Maybe some open source library to use as workaround?
I'm working offline, so I must do it in memory. The size of the dataset is not huge.
If you need to get distinct occurrences of records across more than 16 fields, and you want to use an index to keep things fast, you'll need to consider concatenating some of those fields. For example:
Test Field                 Field 1  Field 2  Field 3  Field 4
Apple~Banana~Carrot~Donut  Apple    Banana   Carrot   Donut
Create your index on the Test Field.
You might need to create multiple test fields if the total length of your other fields exceeds the maximum length of a text field.
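The concatenation workaround can be sketched like this (in Python for brevity, though the question is about Delphi): join the fields into one composite key with a separator unlikely to appear in the data, then keep only the first row per key.

```python
# Emulate an index over >16 fields by concatenating them into a single
# composite key ("Test Field") with "~" as separator, then deduplicating
# on that one key. The separator is an assumption; pick one that cannot
# occur inside your field values.

def distinct_rows(rows, fields):
    """Return rows with a unique combination of the given fields."""
    seen = set()
    out = []
    for row in rows:
        key = "~".join(str(row[f]) for f in fields)  # composite key
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

data = [
    {"f1": "Apple", "f2": "Banana"},
    {"f1": "Apple", "f2": "Banana"},  # duplicate combination
    {"f1": "Apple", "f2": "Carrot"},
]
unique = distinct_rows(data, ["f1", "f2"])
```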
You could swap out the TClientDataSet for a TjvCsvDataset from JVCL. It can be used as a pure "in memory dataset" replacement for Client Data Sets, without any need to read or write any CSV files on disk.
It is not quite like a Client Data Set in design. I am not sure what benefit all those "indexes" in a client data set offer you, other than that you can't have a field without an index definition; but if that is all you need, you can set the TJvCsvDataSet.FieldDef property to 'Field1,Field2,.....FieldN', then open the dataset and add as many rows as you like. It is practically limited only by the amount of memory you can address in a 32-bit process.
