Univocity Parser - Last field blank causes parsing error

I have a fixed-width file and I parse it using a bean processor. If the last field is blank I get a parsing error, in spite of trying the settings below:
settings.setIgnoreTrailingWhitespaces(false)
settings.trimValues(false)
settings.getFormat().setPadding(someCharacterThatDoesntOccurInFile)
Please help.
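
For context, here is a minimal, self-contained sketch of the setup being described, assuming univocity-parsers 2.x; the bean, field widths, and sample input are made up:

import com.univocity.parsers.annotations.Parsed;
import com.univocity.parsers.common.processor.BeanListProcessor;
import com.univocity.parsers.fixed.FixedWidthFields;
import com.univocity.parsers.fixed.FixedWidthParser;
import com.univocity.parsers.fixed.FixedWidthParserSettings;
import java.io.StringReader;
import java.util.List;

public class FixedWidthBlankLastField {
    public static class Row {
        @Parsed(index = 0) public String name;
        @Parsed(index = 1) public String comment; // last field, may be blank
    }

    public static void main(String[] args) {
        FixedWidthFields fields = new FixedWidthFields(10, 5); // hypothetical widths
        FixedWidthParserSettings settings = new FixedWidthParserSettings(fields);

        // The settings tried in the question:
        settings.setIgnoreTrailingWhitespaces(false);
        settings.trimValues(false);
        settings.getFormat().setPadding('#'); // a character that doesn't occur in the file

        BeanListProcessor<Row> processor = new BeanListProcessor<>(Row.class);
        settings.setProcessor(processor);

        // The second record leaves the last field blank.
        new FixedWidthParser(settings).parse(
                new StringReader("first     12345\nsecond         \n"));
        List<Row> rows = processor.getBeans();
        System.out.println(rows.size()); // expect 2; the second comment is all spaces
    }
}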

Related

neo4j-admin import "Multi-line fields are illegal"

I'm getting the following error in Neo4j Community 4.1.2 using the neo4j-admin import tool.
Caused by:ERROR in input
data source: BufferedCharSeeker[source:/home/ubuntu/workspace/neo4j-community-4.1.2/bin/../import/nodes.csv, position:24455, line:359]
in field: code:string:6
for header: [id:ID, labels:LABEL, type:string, flags:string, lineno:string, code:string, childnum:string, funcid:string, classname:string, namespace:string, endlineno:string, name:string, doccomment:string]
raw field value: 402
original error: At /home/ubuntu/workspace/neo4j-community-4.1.2/bin/../import/nodes.csv # position 24455 - Multi-line fields are illegal in this context and so this might suggest that there's a field with a start quote, but a missing end quote. See /home/ubuntu/workspace/neo4j-community-4.1.2/bin/../import/nodes.csv # position 24455.
I checked every single byte with hexedit:
line #359
char #24455
line #358
line #360
Here are the surrounding lines:
357,AST,string,,34,"/load.php",1,310,,"",,,
358,AST,AST_CALL,,37,,9,310,,"",,,
359,AST,AST_NAME,NAME_NOT_FQ,37,,0,310,,"",,,
360,AST,string,,37,"wp_check_php_mysql_versions",0,310,,"",,,
361,AST,AST_ARG_LIST,,37,,1,310,,"",,,
362,AST,AST_INCLUDE_OR_EVAL,EXEC_REQUIRE,40,,10,310,,"",,,
This is the absurd situation:
no multi-line fields are present
no special chars are present
no extra 0A byte
no start quote without its matching end quote
I found some issues on GitHub, but they refer to old versions of Neo4j... what could be the reason?
I finally found the line causing the exception. The cause given in the exception was correct, but the reported line number was completely wrong. I tracked it down by adding the flag --multiline-fields=true to the neo4j-admin import command.
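For reference, a minimal form of the command (the nodes file path as in the error message above; any other options you use stay the same):

neo4j-admin import --nodes=import/nodes.csv --multiline-fields=true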

SSIS Truncation in Flat file source

I am new to SSIS. I am importing flat file data into SQL Server, and the Flat File Source task throws an error while importing the data.
[Flat File Source [60]] Error: Data conversion failed. The data conversion for column "DCN_NAME" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
[Flat File Source [60]] Error: The "Flat File Source.Outputs[Flat File Source Output].Columns[DCN_NAME]" failed because truncation occurred, and the truncation row disposition on "Flat File Source.Outputs[Flat File Source Output].Columns[DCN_NAME]" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on Flat File Source returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
How do I solve this issue? I don't have a problem in the destination. Is it possible to change the source field length?
I changed the source field length in the source connection manager, and it works fine now.
You have not provided enough information to answer your question. Here is what I can glean from what you have provided.
Your SSIS package reads a flat file and assumes DCN_NAME is of type DT_STR (char). Confirm the origin and the destination have the same data type. If they do not, use a Data Conversion transformation, found in the toolbox on the left side of the screen under Data Flow.
Be mindful that while SQL Server uses varchar and nvarchar, SSIS uses DT_STR and DT_WSTR. It can get confusing.
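For reference, the usual string type mappings between SSIS and SQL Server are:

DT_STR -> varchar (single-byte, code-page dependent)
DT_WSTR -> nvarchar (Unicode)
DT_TEXT -> text
DT_NTEXT -> ntext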
The errors generated by SSIS are counterintuitive.
The solution was to use a Data Conversion transformation, because the source and destination were different data types.

XMLStreamException on import-graphml

I exported the Neo4j database to GraphML using neo4j-shell-tools, but while importing the database back on the production server I get the following error.
XMLStreamException: ParseError at [row,col]:[2542885,95] Message: An invalid XML character (Unicode: 0x8) was found in the element content of the document.
But there is no such character on line number 2542885.
I even deleted this line using sed -i '2542885d', but I am still getting the same error at the same line while importing. Strange.
It seems the line number sed refers to is not the same as the line at which the error is being thrown.
Please help out; I have spent a day trying to resolve this error, with no success.
Thank you. Error resolved.
I used xmllint, which reported the same error at a different line number, and replacing that Unicode character resolved the issue.
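
For anyone hitting the same thing: xmllint --noout file.graphml will report the offending location, and a one-off cleanup pass can strip the illegal characters. A minimal sketch (file names are placeholders; only BMP characters are handled, so valid supplementary characters encoded as surrogate pairs would also be dropped):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Reader;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class StripInvalidXmlChars {
    public static void main(String[] args) throws IOException {
        try (Reader in = new InputStreamReader(new FileInputStream("graph.graphml"), StandardCharsets.UTF_8);
             Writer out = new OutputStreamWriter(new FileOutputStream("graph-clean.graphml"), StandardCharsets.UTF_8)) {
            int c;
            while ((c = in.read()) != -1) {
                // Legal XML 1.0 characters: tab, LF, CR, and the BMP ranges below.
                boolean legal = c == 0x9 || c == 0xA || c == 0xD
                        || (c >= 0x20 && c <= 0xD7FF)
                        || (c >= 0xE000 && c <= 0xFFFD);
                if (legal) out.write(c);
            }
        }
    }
}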

Ruby on Rails JSON parse error

I have already spent a lot of time trying to resolve this issue, but I cannot find the actual reason. I have a problem parsing JSON; below is the error I get for the JSON I am trying to parse.
ERROR: 757: unexpected token at '"{\"requestId\":\"2323423432\",\"bids\":\"[ {\\"adId\\":50000001, \\"bidNative\\":100,\\"clickPayload\\":\\"clickPayload-1\\", \\"impressionPayload\\":\\"236|458795|12345\\"}, {\\"adId\\":60000002, \\"bidNative\\":200,\\"clickPayload\\":\\"clickPayload-2\\", \\"impressionPayload\\":\\"236|458795|12345\\"}, {\\"adId\\":60000002, \\"bidNative\\":300,\\"clickPayload\\":\\"clickPayload-3\\", \\"impressionPayload\\":\\"236|458795|12345\\"}]\"}"'
Wherever the error shows two backslashes there are actually three, but I don't know why Stack Overflow's editor displays it like this.
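
The escaping pattern suggests the payload is double-encoded: the value of bids looks like a JSON string (note the quotes around the opening bracket), not a JSON array, so it would need to be decoded twice. A minimal sketch of that structure, using org.json for illustration, with field names taken from the error above:

import org.json.JSONArray;
import org.json.JSONObject;

public class DoubleEncodedJson {
    public static void main(String[] args) {
        // The outer object is valid JSON, but "bids" holds a JSON *string*.
        String payload = "{\"requestId\":\"2323423432\","
                + "\"bids\":\"[{\\\"adId\\\":50000001,\\\"bidNative\\\":100}]\"}";

        JSONObject outer = new JSONObject(payload);
        // The first decode yields a string; a second decode yields the array.
        JSONArray bids = new JSONArray(outer.getString("bids"));
        System.out.println(bids.getJSONObject(0).getLong("adId")); // 50000001
    }
}

If the producer cannot be fixed to emit a real array, the Ruby-side equivalent is a second JSON.parse on the bids value.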

Does CSV file encoding affect Salesforce Data Loader 25.0.2 behavior?

I've been trying to upsert and delete some data on objects in the Salesforce cloud using Data Loader 25.0.2. Data Loader executes without problems, but the insertions/deletions don't happen. The log file contains lines with these errors:
2012-10-03 17:13:16,958 ERROR [deleteAccount] client.PartnerClient processResult (PartnerClient.java:432) - Errors were found on item0
2012-10-03 17:13:16,958 ERROR [deleteAccount] client.PartnerClient processResult (PartnerClient.java:434) - Error code is:INVALID_ID_FIELD
2012-10-03 17:13:16,958 ERROR [deleteAccount] client.PartnerClient processResult (PartnerClient.java:435) - Error message:invalid record id
I've checked that the object IDs match, so there are no obvious differences between the data in the cloud and the CSV used for the deletion command.
What could be happening?
I finally found that the encoding of the .csv file I uploaded differed from the encoding Salesforce uses internally to represent that object's data: my .csv file was encoded as ANSI instead of UTF-8, which confused Salesforce. I only had to change the file's encoding to UTF-8 and everything was solved.
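
A minimal one-off conversion sketch, assuming the "ANSI" file is Windows-1252 (the file names are placeholders):

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CsvToUtf8 {
    public static void main(String[] args) throws Exception {
        // Decode with the legacy "ANSI" code page, then re-encode as UTF-8.
        byte[] raw = Files.readAllBytes(Paths.get("accounts.csv"));
        String text = new String(raw, Charset.forName("windows-1252"));
        Files.write(Paths.get("accounts-utf8.csv"), text.getBytes(StandardCharsets.UTF_8));
    }
}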
