I have a problem with encoding when I insert into a DBF file through ADO.
Connection string: 'Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties="dBASE IV";Data Source='+ExtractFilePath(ParamStr(0))+';'+
'User ID=Admin;Password='
In the insert command I'm converting the data to a specific codepage, but different bytes end up in the table, probably because of Unicode strings.
insert into someDBFFile (f1,f2,f3,f4,f5)
values(
'ATENEUM SPŁśKA Z OGRANICZONŹ ODPOWIEDZIALNOCIŹ S.K.A. Wpisana w S†dz',
'ie rej.dla Krakowa-r˘dmiežcie Wydzia’ XI Gospodarczy',
'30-741',
'Krak˘w',
'Nad Drwin† 10, '
)
Data after insert into DBF:
ATENEUM SPťKA Z OGRANICZONŤ ODPOWIEDZIALNO?CIŤ S.K.A. Wpisana w SĹdzie rej.dla Krakowa-?rôdmie§cie Wydzia' XI Gospodarczy
30-741Krakôw
Nad DrwinĹ 10,
It should be as in the sample code:
'ATENEUM SPŁśKA Z OGRANICZONŹ ODPOWIEDZIALNOCIŹ S.K.A. Wpisana w S†dz',
'ie rej.dla Krakowa-r˘dmiežcie Wydzia’ XI Gospodarczy',
'30-741',
'Krak˘w',
'Nad Drwin† 10, '
I tried TADOTable with the same result.
tblZAEX.Insert;
tblZAEX.FindField('NA1').AsAnsiString := ToMazovia(Copy(TInvoiceHeaderPartiesSummaryB_Name.AsAnsiString, 1, 70));
tblZAEX.FindField('NA2').AsAnsiString := ToMazovia(Copy(TInvoiceHeaderPartiesSummaryB_Name.AsAnsiString, 71, 70));
tblZAEX.Post;
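To illustrate what I suspect is happening, here is a minimal Python sketch of the double transcoding (assuming, purely for illustration, that ADO reads the AnsiString in the Windows-1250 ANSI codepage and that the Jet/dBASE driver writes it back out in OEM CP852; Mazovia is not a standard codec, so the sample bytes are arbitrary):
# Hypothetical round trip: ADO decodes the AnsiString as if it were
# Windows-1250 text, converts it to Unicode, and the Jet/dBASE driver then
# re-encodes it in the table's OEM codepage. Mazovia-encoded bytes do not
# survive these steps unchanged.
mazovia_bytes = bytes([0x86, 0xA2, 0x9E])  # arbitrary stand-ins for ToMazovia output

as_unicode = mazovia_bytes.decode("cp1250")            # what ADO thinks the text is
stored = as_unicode.encode("cp852", errors="replace")  # what lands in the DBF

print(mazovia_bytes.hex(), "->", stored.hex())  # the bytes come out different
If that is what happens, pre-converting to Mazovia cannot survive the trip: the bytes are transcoded twice on the way to the file.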
We have APOC to provide lots of functions in Neo4j, but I am wondering whether we can do bitwise operations on hexadecimal values in Neo4j.
My plan is to first convert two strings to two hash values using apoc.util.md5, and then use apoc.bitwise.op to perform a bitwise operation on those two hash values. However, apoc.util.md5 returns a hexadecimal string, while apoc.bitwise.op can only take an Integer. Furthermore, an MD5 hash is a 128-bit value, which is out of the range of an Integer. So I am wondering whether there is another approach that can implement bitwise operations on hexadecimal strings.
Here is an outline for one way to do that for non-shifting bitwise operations (but it may be better to write your own plugin to do it directly instead):
Split the 32-character-wide input hex string into 4 separate 8-character-wide strings. Similarly split the other operand (also assumed to be a 32-character-wide input hex string).
Use apoc.convert.toInteger to convert each 8-character-wide hex string (after prefixing each with "0x") to an integer.
Perform the bitwise operation on each corresponding pair of integers.
Use apoc.text.hexValue to convert each integer result back into hex, and prepend enough "0" characters to reach a width of 8 characters.
Concatenate the 4 hex strings back into a single 32-character-wide hex string.
In Cypher, assuming that the input 32-character-wide hex string, the bitwise operation (e.g., '|'), and the other operand are passed as the parameters $input, $operation, and $operand:
WITH [i IN RANGE(0, 24, 8) |
"0000000" + apoc.text.hexValue(
apoc.bitwise.op(
apoc.convert.toInteger("0x" + SUBSTRING($input, i, 8)),
$operation,
apoc.convert.toInteger("0x" + SUBSTRING($operand, i, 8))))
] AS x
RETURN REDUCE(r = "", s IN x | r + SUBSTRING(s, LENGTH(s)-8)) AS res;
In fact, the above can be done in a single concise clause:
RETURN REDUCE(r = "",
s IN [i IN RANGE(0, 24, 8) |
"0000000" + apoc.text.hexValue(
apoc.bitwise.op(
apoc.convert.toInteger("0x" + SUBSTRING($input, i, 8)),
$operation,
apoc.convert.toInteger("0x" + SUBSTRING($operand, i, 8))))] |
r + SUBSTRING(s, LENGTH(s)-8)
) AS res;
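Since Python integers have arbitrary precision, the Cypher result is easy to sanity-check outside Neo4j. Here is a sketch (the digests and the OR operation are arbitrary examples) that mirrors the 4 x 32-bit split and compares it against a direct 128-bit operation:
import hashlib

# Two sample MD5 digests as 32-character hex strings
# (stand-ins for apoc.util.md5 output).
a = hashlib.md5(b"foo").hexdigest()
b = hashlib.md5(b"bar").hexdigest()

# Chunked approach, mirroring the Cypher: split into 4 x 8 hex characters,
# OR each 32-bit pair, and zero-pad each piece back to 8 characters.
chunked = "".join(
    format(int(a[i:i+8], 16) | int(b[i:i+8], 16), "08x")
    for i in range(0, 32, 8)
)

# Direct approach: Python ints are arbitrary precision, so the full
# 128-bit OR needs no splitting.
direct = format(int(a, 16) | int(b, 16), "032x")

assert chunked == direct
Bitwise AND/OR/XOR act on each bit independently, with no carries, which is why splitting along 32-bit boundaries gives the same answer (and why this outline only covers non-shifting operations).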
I am trying to open a binary file whose internal structure I partially know, and reinterpret it correctly in Julia. Let us say that I can already load it via:
arx=open("../axonbinaryfile.abf", "r")
databin=read(arx)
close(arx)
The data is loaded as an Array of UInt8, which I guess are bytes.
On the first 4 bytes I can perform a simple Char conversion and it works:
head=databin[1:4]
map(Char, head)
4-element Array{Char,1}:
'A'
'B'
'F'
' '
Then it happens that in positions 13-16 there is a 32-bit integer waiting to be interpreted. How should I do that?
I have tried reinterpret() and Int32 as a function, but to no avail.
You can use reinterpret(Int32, databin[13:16])[1]. The trailing [1] is needed because reinterpret returns a (reinterpreted) array rather than a scalar.
Also note that read supports passing a type. So if you first read 12 bytes of data from your file, e.g. with read(arx, 12), and then run read(arx, Int32), you will get the desired number without any conversions or vector allocations.
Finally, observe that the Char conversion in your code converts a Unicode code point to a character. I am not sure if this is exactly what you want (maybe it is). For example, if the first byte read in has the value 200 you will get:
julia> Char(200)
'È': Unicode U+00c8 (category Lu: Letter, uppercase)
EDIT: One more comment: when you convert 4 bytes to an Int32, be sure to check whether the value is encoded as big-endian or little-endian (see the ENDIAN_BOM constant and the ntoh, hton, ltoh, htol functions).
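For comparison, here is how much the byte order matters, sketched with Python's struct module and made-up bytes (the offset is illustrative):
import struct

raw = bytes([0x01, 0x00, 0x00, 0x00])  # e.g. the 4 bytes at positions 13-16

little = struct.unpack("<i", raw)[0]  # little-endian Int32 -> 1
big = struct.unpack(">i", raw)[0]     # big-endian Int32    -> 16777216
print(little, big)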
Here it is. Use view to avoid copying the data.
julia> dat = UInt8[65,66,67,68,0,0,2,40];
julia> Char.(view(dat,1:4))
4-element Array{Char,1}:
'A'
'B'
'C'
'D'
julia> reinterpret(Int32, view(dat,5:8))
1-element reinterpret(Int32, view(::Array{UInt8,1}, 5:8)):
671219712
I'm trying to add a few extra features to my ejabberd mod_muc_room, but jlib:now_to_utc_string doesn't seem to accept Unix timestamps and requires them to be in Erlang's built-in format. Trying to use "1519633372486003" instead of "{1519,633372,486003}" makes mod_muc_room crash.
I found several ways to convert an Erlang timestamp into a Unix timestamp, but I can't find a way to do the reverse conversion.
Is there a way to do that without converting the integer to a binary, splitting it into pieces, and converting those back into numbers?
You can use div and rem to extract the three values:
1> M = 1000000.
1000000
2> T = 1519633372486003.
1519633372486003
3> {T div M div M, T div M rem M, T rem M}.
{1519,633372,486003}
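The same arithmetic is easy to cross-check outside Erlang; here is the equivalent decomposition and round trip sketched in Python (1000000 splits off the microseconds and seconds fields):
M = 1000000
t = 1519633372486003  # microseconds since the Unix epoch

mega, secs, micro = t // M // M, t // M % M, t % M
assert (mega, secs, micro) == (1519, 633372, 486003)

# Round trip back to the flat integer:
assert (mega * M + secs) * M + micro == t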
Recently I found a strange thing: the result of
var
d: double;
begin
d := StrToFloat('-1.79E308');
is not the same as the string value '-1.79E308' converted to a float field by ASE and SQL Server through
INSERT INTO my_table (my_float_field) VALUES (-1.79E308)
For Delphi the memory dump is 9A BB AD 58 F1 DC EF FF.
For ASE/SQL Server the value in the packet on select is 99 BB AD 58 F1 DC EF FF.
Who is wrong, both servers or Delphi?
The premise that we are working from is that StrToFloat yields the closest representable binary floating point value to the supplied decimal value.
The two hexadecimal values that you present are adjacent; they differ by 1 in the significand. Here is some Python code that decodes the two values:
>>> import struct
>>> struct.unpack('!d', bytes.fromhex('ffefdcf158adbb9a'))[0]
-1.7900000000000002e+308
>>> struct.unpack('!d', bytes.fromhex('ffefdcf158adbb99'))[0]
-1.79e+308
Bear in mind that Python prints floating point values using the shortest decimal string that round-trips to the actual stored value. That ffefdcf158adbb99 decodes to a value that Python prints as -1.79e+308 is sufficient proof that ffefdcf158adbb99 is the closest representable value. In other words, the Delphi code is giving the wrong answer.
And, just out of curiosity, in the opposite direction:
>>> hex(struct.unpack('<Q', struct.pack('<d', float('-1.79e308')))[0])
'0xffefdcf158adbb99'
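The adjacency claim can also be verified directly: stepping one representable double from -1.79e+308 toward negative infinity lands exactly on the other bit pattern (a sketch needing Python 3.9+ for math.nextafter):
import math, struct

a = struct.unpack("!d", bytes.fromhex("ffefdcf158adbb99"))[0]  # -1.79e+308
b = struct.unpack("!d", bytes.fromhex("ffefdcf158adbb9a"))[0]  # -1.7900000000000002e+308

# One step from a toward -inf is exactly b: the two bit patterns are adjacent doubles.
assert math.nextafter(a, -math.inf) == b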
It is interesting to note that the 32-bit Delphi compiler yields ffefdcf158adbb99, but the 64-bit Delphi compiler yields ffefdcf158adbb9a. This is a clear defect and should be submitted as a bug report to Quality Portal.
Does anybody know if there is a command string size limitation in Firebird?
When executing a small "insert" script it works perfectly, but when the script has a lot of lines it returns the following error: "Unexpected end of command - line X, column Y".
Interestingly, the line and column numbers vary depending on the actual script size.
I'm using Firebird 2.5.
Here is the script being executed:
set term ^ ;
EXECUTE BLOCK AS BEGIN
insert into TABLE (COLUMNA) values (13);
...
insert into TABLE (COLUMNA) values (14);
END^
set term ; ^
Firebird 2.5 and earlier have a limit of 64 kilobytes for the query text; in Firebird 3.0 this limit was increased to 10 MB when the new API is used. An EXECUTE BLOCK is one query, so on Firebird 2.5 it must not exceed 64 kilobytes.
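One practical workaround is to split the statements across several smaller EXECUTE BLOCK scripts, each kept safely under the limit. Here is a driver-agnostic Python sketch of the chunking (the statement list, margin, and table name are illustrative):
HEADER, FOOTER = "EXECUTE BLOCK AS BEGIN\n", "END"
LIMIT = 60000  # stay comfortably below Firebird 2.5's 64 KB query-text limit

statements = [f"insert into TABLE (COLUMNA) values ({v});\n" for v in range(10000)]

scripts, body = [], ""
for stmt in statements:
    # Flush the current block before it would exceed the limit.
    if len(HEADER) + len(body) + len(stmt) + len(FOOTER) > LIMIT and body:
        scripts.append(HEADER + body + FOOTER)
        body = ""
    body += stmt
if body:
    scripts.append(HEADER + body + FOOTER)

# Each element of scripts can now be executed as a separate query.
print(len(scripts), "blocks; largest is", max(map(len, scripts)), "characters")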