CheckinAll: The file encoding -3 is not a valid encoding - TFS

While resolving a conflict, I get the error "The file encoding -3 is not a valid encoding."
I am resolving the conflict on a folder.
Code:
workspace.MergeContent(conflict, false);
Error:
The file encoding -3 is not a valid encoding.
at Microsoft.TeamFoundation.VersionControl.Client.FileType.GetEncodingFromIntOrString(Int32 codePage, String encoding)
at Microsoft.TeamFoundation.VersionControl.Client.Workspace.PreMerge(Conflict conflict, ThreeWayMerge threeWayMerge)
at Microsoft.TeamFoundation.VersionControl.Client.Workspace.InternalMergeContent(Conflict conflict)
at Microsoft.TeamFoundation.VersionControl.Client.Workspace.MergeContent(Conflict conflict, Boolean useExternalMergeTool)

I don't know if this will help you, but I had a similar issue and found out that Microsoft Team Foundation Server 2010 Service Pack 1 included a fix for encoding problems.
Fixed issues:
Very aggressive automatic detection of file encoding can result in unsuitable encoding mismatches during merges
http://support.microsoft.com/kb/2182621#mtDisclaimer

Related

com.fasterxml.jackson.core.JsonParseException (Invalid UTF-8 start byte) occurred when using multi-byte characters

I tried to save a node entity (@NodeEntity) whose property (a String) contains multi-byte text such as Japanese, but a JsonParseException occurred.
java.lang.RuntimeException: com.fasterxml.jackson.core.JsonParseException: Invalid UTF-8 start byte 0x8d at [Source: [B@569cfc36; line: 1, column: 67]
at org.neo4j.ogm.drivers.bolt.request.BoltRequest.executeRequest(BoltRequest.java:175) ~[neo4j-ogm-bolt-driver-2.1.2.jar!/:na]
at org.neo4j.ogm.drivers.bolt.request.BoltRequest.execute(BoltRequest.java:89) ~[neo4j-ogm-bolt-driver-2.1.2.jar!/:na]
at org.neo4j.ogm.session.request.RequestExecutor.executeSave(RequestExecutor.java:287) ~[neo4j-ogm-core-2.1.1.jar!/:na]
at org.neo4j.ogm.session.request.RequestExecutor.executeSave(RequestExecutor.java:66) ~[neo4j-ogm-core-2.1.1.jar!/:na]
at org.neo4j.ogm.session.delegates.SaveDelegate.save(SaveDelegate.java:85) ~[neo4j-ogm-core-2.1.1.jar!/:na]
at org.neo4j.ogm.session.delegates.SaveDelegate.save(SaveDelegate.java:44) ~[neo4j-ogm-core-2.1.1.jar!/:na]
at org.neo4j.ogm.session.Neo4jSession.save(Neo4jSession.java:447) ~[neo4j-ogm-core-2.1.1.jar!/:na]
But if I invoke java with the -Dfile.encoding=UTF-8 option, the entity is saved correctly...
Please advise me how to save multi-byte strings without the -Dfile.encoding option.
I think it would be preferable to specify the encoding in a config file or in code.
Thanks.
My environment is as follows:
OS = Windows 7 64-bit (Japanese Edition)
Java = JDK 1.8u121
Spring Boot = 1.5.2
Spring Boot Neo4j = 4.2.1
Neo4j Driver = Bolt Driver 2.1.2
This is a known issue in OGM: https://github.com/neo4j/neo4j-ogm/issues/244.
The currently suggested workaround is exactly what you did: provide the property at startup:
-Dfile.encoding=UTF-8
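If you cannot easily change the launch command, setting the JAVA_TOOL_OPTIONS environment variable to -Dfile.encoding=UTF-8 is usually picked up by the JVM at startup as well. To see why the flag matters, here is a minimal Java sketch (the class name and sample text are illustrative only): on a Japanese Windows machine the default charset is typically MS932, so bytes produced with the platform default charset are not valid UTF-8, which is exactly what the "Invalid UTF-8 start byte" error is complaining about.

import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        String text = "日本語"; // multi-byte sample text

        // Uses the platform default charset (file.encoding), e.g. MS932 on Japanese Windows.
        byte[] defaultBytes = text.getBytes();
        // Always valid UTF-8, independent of the platform default.
        byte[] utf8Bytes = text.getBytes(StandardCharsets.UTF_8);

        System.out.println("default charset:  " + System.getProperty("file.encoding"));
        System.out.println("default bytes:    " + defaultBytes.length);
        System.out.println("UTF-8 bytes:      " + utf8Bytes.length);

        // Decoding non-UTF-8 bytes as UTF-8 corrupts the text; a JSON parser in the same
        // situation reports errors such as "Invalid UTF-8 start byte".
        System.out.println("misread as UTF-8: " + new String(defaultBytes, StandardCharsets.UTF_8));
    }
}

With -Dfile.encoding=UTF-8 the default and the explicit UTF-8 byte arrays are identical, which is why the entity then saves correctly.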

JSON string with more than 524288 characters throws exception

I am trying to parse a huge JSON file. It has more than 524288 characters, I can't parse it with Groovy, and I don't have the text of the exception. Is this a known issue, and is there any workaround?
Could it be a limitation of Tomcat?
Update:
I've got an exception:
ERROR (org.codehaus.groovy.grails.web.errors.GrailsExceptionResolver) - JSONException occurred when processing request: [POST] /person/parsePersonJson
Expected a ',' or ']' at character 524288 of ...
Update 2:
In Grails I used:
JSON.parse(params.myJson)
I also changed the Tomcat maxPostSize setting to "0".
It may be related to the configured POST size limit in Tomcat (maxPostSize). Refer to this documentation: http://tomcat.apache.org/tomcat-5.5-doc/config/http.html (keyword: "maxPostSize") for more explanation, then try to increase that value. Hope this helps!
It was a problem with the input size; the maximum input size by specification is 512 KB (524288 characters).
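For reference, maxPostSize is an attribute of the HTTP Connector in Tomcat's conf/server.xml. A minimal sketch follows; the port and other attributes are purely illustrative. Per the Tomcat 5.5 documentation linked above, a value of 0 or less disables the limit (newer Tomcat versions use a negative value such as -1 for unlimited):

<!-- conf/server.xml: disable the POST size limit on the HTTP connector -->
<Connector port="8080"
           connectionTimeout="20000"
           maxPostSize="0" />

After changing server.xml, restart Tomcat so the new limit takes effect.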

invalid request BLR at offset 163

I have the following error in a Firebird database, version 2.5.2:
invalid request BLR at offset 163
function F_ENCODEDATE is not defined
module name or entrypoint could not be found
Error while parsing procedure GETMONTHSBYYEAR's BLR
Until last week everything was functioning correctly, and this UDF exists on disk. How can I debug this problem? Can anyone help me sort it out?
PS: What I have done so far to try to fix it:
Backup/restore - no result. (In my opinion, any structural problem would be fixed by a backup/restore.)
Commented out all dependencies, dropped the UDF, and recreated it again - no result.
Potential problems could be that the UDF DLL is inaccessible to the server (e.g. due to permissions, or the UDF restriction config in firebird.conf), or that you have installed a 64-bit version of Firebird and your UDF is 32-bit (or vice versa), so Firebird cannot load the DLL.
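To narrow this down, you can check how the UDF is actually declared in the database and compare that with the DLL on disk and with the UdfAccess setting in firebird.conf. A minimal sketch using isql and the standard RDB$FUNCTIONS system table (the function name is the one from the error message above):

SELECT RDB$FUNCTION_NAME, RDB$MODULE_NAME, RDB$ENTRYPOINT
FROM RDB$FUNCTIONS
WHERE RDB$FUNCTION_NAME = 'F_ENCODEDATE';

The module name returned there must be a library the server process can actually load: matching bitness (32-bit vs 64-bit), readable by the Firebird service account, and located in a directory allowed by UdfAccess (with the Firebird 2.5 default of UdfAccess = Restrict UDF, only the UDF subdirectory of the installation is searched).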

Rails 3, Heroku: Taps Server Error: PGError: ERROR: invalid byte sequence for encoding "UTF8": 0xba

I have a Rails 3.0.9 application running both locally in my dev environment and remotely on a Heroku app. I have a method that imports a CSV file into a model, and this file can contain non-English characters like °, á, é, í, etc. (it's in Spanish).
I am currently able to import the complete file (75k records) without any problems into my local dev (SQLite) database, but when uploading the database to Heroku with heroku db:push, it fails with the error I'm posting in the title:
!!! Caught Server Exception
HTTP CODE: 500
Taps Server Error: PGError: ERROR: invalid byte sequence for encoding "UTF8": 0xba
HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
Apparently, Heroku has issues inserting the '°' character. (At the moment the file doesn't have any á, é, í, etc. characters, but I suspect those might fail too.)
I have set the default encoding in my application.rb file as follows:
# .../application.rb
config.encoding = "utf-8"
What else can I do to set the 'client encoding' and solve this problem?
The masculine ordinal indicator, º, is 0xBA in ISO-8859-1, not in UTF-8. So your CSV file is encoded in Latin-1, but you're trying to store it in your database as UTF-8 without fixing the encoding.
You can try telling your CSV library that it is dealing with Latin-1 encoded text, and it may take care of converting to UTF-8 (a sketch follows the Iconv example below). If that doesn't work, you can do the conversion yourself with Iconv:
ruby-1.9.2 > require 'iconv'
 => true
ruby-1.9.2 > Iconv.iconv('UTF-8', 'ISO-8859-1', "\xba")
=> ["º"]
ruby-1.9.2 > Iconv.iconv('UTF-8', 'ISO-8859-1', "\xb0")
=> ["°"]
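For the CSV-library route, here is a minimal sketch using Ruby 1.9's standard CSV class, assuming a file named data.csv (the filename and block body are illustrative). The encoding string "ISO-8859-1:UTF-8" tells Ruby to read the file as Latin-1 and transcode each string to UTF-8 on the fly:

require 'csv'

# Read a Latin-1 CSV file and receive UTF-8 strings for each row.
CSV.foreach("data.csv", encoding: "ISO-8859-1:UTF-8") do |row|
  # row values are now valid UTF-8 and safe to insert into PostgreSQL
  puts row.inspect
end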
You're not having trouble with SQLite because SQLite tends to be very forgiving and has a very loose type system. PostgreSQL, on the other hand, tends to be rather strict and properly complains if you try to feed it invalid data. I'd recommend that you stop developing on top of SQLite if you're going to be deploying to Heroku and PostgreSQL; there are other differences that will cause problems (the behavior of GROUP BY and LIKE, for example).

Compilation error with a 10000-line .pas file in Delphi XE

I have a unit with 10000 lines about which I already asked a question in the past.
Anyway, the problem now is that I just migrated from Delphi 2009 to XE, and every time I compile that unit (or build my application) I get an error:
[DCC Error] 10000linesuni.pas(452): E2029 ',' or ':' expected but identifier 'dxBarLargeButton17' found
The workaround is to make a dummy modification to the .pas file (add a '.' and remove it); then it compiles correctly.
Is this a known problem? Does anyone know a better workaround?
Note: I didn't have this problem in Delphi 2009.
This is the code; you can see that line 452 is nothing special, just one of the components on the form:
BarManagerBar4: TdxBar;
dxBarLargeButton16: TdxBarLargeButton;
dxBarLargeButton17: TdxBarLargeButton; // This is line 452
dxBarLargeButton18: TdxBarLargeButton;
dxBarLargeButton19: TdxBarLargeButton;
dxBarLargeButton20: TdxBarLargeButton;
user193655, since the advice in my comment helped you, I will post it as an answer to help someone resolve this issue in the future.
Sometimes the compilation of one or more files is interrupted because of invalid characters in the source code or mismatched line endings (they should be CR/LF). To fix this, use a hex editor to track down the invalid characters and delete them from the source file; in the case of the line endings, open the file in Notepad and save it, which rewrites the line endings correctly (a programmatic sketch follows below).
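If you prefer to normalize the line endings programmatically rather than with Notepad, here is a minimal Delphi sketch; the file name is the unit from the question, and this only fixes line endings, so any truly invalid characters still need to be hunted down with a hex editor:

program FixLineEndings;
{$APPTYPE CONSOLE}
uses
  SysUtils, Classes;
var
  Source: TStringList;
begin
  Source := TStringList.Create;
  try
    // TStringList accepts CR, LF, or CR/LF line breaks when loading...
    Source.LoadFromFile('10000linesuni.pas');
    // ...and writes uniform CR/LF line endings on Windows when saving.
    Source.SaveToFile('10000linesuni.pas');
  finally
    Source.Free;
  end;
end.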
