CSV file encoding affects Salesforce's Data Loader 25.0.2 behavior? - upload

I've been trying to upsert and delete data on objects in the Salesforce cloud using Data Loader 25.0.2. Data Loader executes without problems, but the insertions/deletions don't get done. The log file contains lines with these errors:
2012-10-03 17:13:16,958 ERROR [deleteAccount] client.PartnerClient processResult (PartnerClient.java:432) - Errors were found on item0
2012-10-03 17:13:16,958 ERROR [deleteAccount] client.PartnerClient processResult (PartnerClient.java:434) - Error code is:INVALID_ID_FIELD
2012-10-03 17:13:16,958 ERROR [deleteAccount] client.PartnerClient processResult (PartnerClient.java:435) - Error message:invalid record id
I've checked that the object IDs match, so there are no obvious differences between the data in the cloud and the CSV file used for the delete command.
What could be happening?

I finally found that the encoding of the .csv file I uploaded differed from the encoding Salesforce uses internally to store that object's data. In my case the .csv file was encoded as ANSI instead of UTF-8, which confused Salesforce. I only had to re-save the file as UTF-8 and everything worked.
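If you want to convert the file programmatically instead of re-saving it from a text editor, a minimal Java sketch along these lines works, assuming the "ANSI" file is really Windows-1252 and using hypothetical file names:

import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CsvReEncoder {
    public static void main(String[] args) throws IOException {
        // Hypothetical file names; "ANSI" on Windows usually means windows-1252.
        Path source = Paths.get("accounts-ansi.csv");
        Path target = Paths.get("accounts-utf8.csv");

        // Decode with the original encoding, then write the same text back out as UTF-8.
        byte[] raw = Files.readAllBytes(source);
        String content = new String(raw, Charset.forName("windows-1252"));
        Files.write(target, content.getBytes(StandardCharsets.UTF_8));
    }
}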

Related

Univocity Parser - Last field blank causes parsing error

I have a fixed-width file and I parse it with a bean processor.
If the last field is blank I get a parsing error, in spite of trying the settings below:
setIgnoreTrailingWhitespaces(false)
setTrimValues(false)
settings.getFormat().setPadding(someCharacterThatDoesntOccurInFile)
Please help.
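For reference, here is a minimal sketch of how those settings are typically wired into a univocity fixed-width parser (assuming univocity-parsers 2.x; the field widths, padding character, and file name are hypothetical, and this only shows where the calls from the question go, not a confirmed fix for the blank-last-field error):

import java.io.FileReader;
import com.univocity.parsers.fixed.FixedWidthFields;
import com.univocity.parsers.fixed.FixedWidthParser;
import com.univocity.parsers.fixed.FixedWidthParserSettings;

public class FixedWidthSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical layout: three fields of 10, 10 and 5 characters.
        FixedWidthFields fields = new FixedWidthFields(10, 10, 5);
        FixedWidthParserSettings settings = new FixedWidthParserSettings(fields);

        // Settings mentioned in the question.
        settings.setIgnoreTrailingWhitespaces(false);
        settings.getFormat().setPadding('#'); // a character that never occurs in the file

        FixedWidthParser parser = new FixedWidthParser(settings);
        for (String[] row : parser.parseAll(new FileReader("data.txt"))) {
            System.out.println(java.util.Arrays.toString(row));
        }
    }
}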

Neo4j to Gephi import : Failed to invoke procedure Invalid UTF-8

I tried to import my data from Neo4j into Gephi, but it doesn't work.
I get the following result in Neo4j:
Failed to invoke procedure apoc.gephi.add: Caused by: com.fasterxml.jackson.core.JsonParseException: Invalid UTF-8 start byte 0xfb at [Source: (apoc.export.util.CountingInputStream); line: 1, column: 136]
As previously mentioned, it looks like Neo4j is not exporting in UTF-8, so I would check how Neo4j is generating the output.
Another possibility is that something went slightly wrong while Neo4j was writing the output.
I ran into this very same problem in the past when managing a file's content concurrently: a thread crashed and did not close the file cleanly. When reviewing the file afterwards everything looked normal, but some bytes had been introduced that are not valid UTF-8. A tool like Atom can help you find them.
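If you prefer to locate the offending bytes programmatically instead of eyeballing the file, a small sketch like this reports the offset of each invalid UTF-8 sequence (the file path is passed as an argument; it is a generic check, not part of the APOC export):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Utf8Check {
    public static void main(String[] args) throws IOException {
        byte[] data = Files.readAllBytes(Paths.get(args[0]));

        // Report (rather than silently replace) every malformed sequence.
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);

        ByteBuffer in = ByteBuffer.wrap(data);
        CharBuffer out = CharBuffer.allocate(8192);
        while (in.hasRemaining()) {
            out.clear();
            CoderResult result = decoder.decode(in, out, true);
            if (result.isError()) {
                System.out.printf("Invalid UTF-8 byte 0x%02x at offset %d%n",
                        data[in.position()] & 0xff, in.position());
                in.position(in.position() + result.length()); // skip the bad bytes and continue
            } else if (result.isUnderflow()) {
                break; // done (any leftover bytes are an incomplete trailing sequence)
            }
            // on overflow the output buffer is simply cleared at the top of the loop
        }
    }
}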

SSIS Truncation in Flat file source

I am new to SSIS. I am importing flat file data into SQL Server, and it throws an error in the Flat File Source task while importing the data.
[Flat File Source [60]] Error: Data conversion failed. The data conversion for column "DCN_NAME" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
[Flat File Source [60]] Error: The "Flat File Source.Outputs[Flat File Source Output].Columns[DCN_NAME]" failed because truncation occurred, and the truncation row disposition on "Flat File Source.Outputs[Flat File Source Output].Columns[DCN_NAME]" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on Flat File Source returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
How do I solve this issue? I don't have a problem in the destination. Is it possible to change the source field length?
I changed the source column length in the source connection manager, and it works fine now.
You have not provided enough information to answer your question. Here is what I can glean from what you have provided.
Your SSIS package reads a flat file and assumes DCN_NAME is DT_STR (type char). Confirm that the origin and the destination have the same data type. If they do not, use a Data Conversion transformation, which is found in the toolbox on the left side of the screen under Data Flow.
Be mindful that while SQL Server uses varchar and nvarchar, SSIS uses DT_STR and DT_WSTR. It can get confusing, so consult an SSIS-to-SQL data type conversion chart.
The errors generated by SSIS are counterintuitive.
The solution was to use a data conversion, because the source and destination were different data types.

Informatica XML read error

In my mapping we are using XML files as our source. Our issue is that while trying to execute the mapping with large XML files (i.e. files larger than 300 MB) we are facing an error. The error message is:
'Error [Invalid Document Structure] occurred while parsing: [FATAL: Error at line 1, char 1]'
We have successfully executed the mapping with smaller files (size < 300 MB).
Is there any setting that can be changed to process such large files? If not, is there any workaround?

saxon: problem reusing XsltTransformer object

Using Saxon-B, I'm trying to follow the javadoc and serially reuse an XsltTransformer object.
I'm thwarted by:
Error XTDE1490: Cannot write more than one result document to the same URI, or write to a URI that has been read: file:/Users/benson/x/btweb/web_2_0/sites/us/errors/404/404.xml.prepared
2011-03-22 11:06:23,830 [main] ERROR btweb.compiler.CompileSite - Site compilation terminated with error.
btweb.compiler.CompilerException: Error running transform Cannot write more than one result document to the same URI, or write to a URI that has been read: file:/Users/benson/x/btweb/web_2_0/sites/us/errors/404/404.xml.prepared
It's probably a Saxon-B bug. You can find more information here. According to that site, it was "Fixed in 8.9.0.4".
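Independently of the bug, the usual s9api pattern for serial reuse is to compile the stylesheet once and give every run a fresh source and a distinct output URI, so Saxon never sees the same result-document URI twice. A minimal sketch with hypothetical stylesheet and file names (the no-arg Serializer constructor matches the Saxon-B era API; newer Saxon releases prefer processor.newSerializer(file)):

import java.io.File;
import javax.xml.transform.stream.StreamSource;
import net.sf.saxon.s9api.Processor;
import net.sf.saxon.s9api.SaxonApiException;
import net.sf.saxon.s9api.Serializer;
import net.sf.saxon.s9api.XsltCompiler;
import net.sf.saxon.s9api.XsltExecutable;
import net.sf.saxon.s9api.XsltTransformer;

public class SerialTransforms {
    public static void main(String[] args) throws SaxonApiException {
        Processor processor = new Processor(false);
        XsltCompiler compiler = processor.newXsltCompiler();

        // Compile once, reuse the executable and transformer for every run.
        XsltExecutable executable = compiler.compile(new StreamSource(new File("style.xsl")));
        XsltTransformer transformer = executable.load();

        for (String input : new String[] {"a.xml", "b.xml"}) {
            // Fresh source and a distinct output file per run, so no result
            // document URI is written more than once.
            transformer.setSource(new StreamSource(new File(input)));
            Serializer out = new Serializer();
            out.setOutputFile(new File(input + ".out"));
            transformer.setDestination(out);
            transformer.transform();
        }
    }
}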
