Determine the DateTime encoding or format - Delphi

I'm writing a Delphi program that needs to interact with data stored in a SQLite database that belongs to another program. Everything is fine so far, except that I'm unable to get the date/time value stored in a column of the SQLite database whose declared data type is 'datetime'.
I do know that the declared data type of a field is not strictly enforced in SQLite, and that values may end up stored as text, which is perhaps why I find myself in this predicament.
Below is a sample of a few rows of the data stored in the SQLite database (column 3) vs. the corresponding date value displayed in the program (column 2) that reads and writes to this SQLite database:
1 11/7/1971 621939168000000000
2 3/17/1976 623314656000000000
3 5/4/1996 629667648000000000
4 9/21/2007 633259296000000000
5 11/17/1972 622264032000000000
6 2/7/1996 629592480000000000
7 6/13/2000 630964512000000000
My requirement: Once I read the value in column 3 from my Delphi program, how do I translate it to the same date/time value as displayed in column 2? I have tried the UnixToDateTime function, but it does not result in the same value (as column 2). Can anyone help?
Thanks in advance!

The value in column 3 is the number of 100-nanosecond intervals (ticks) since 1/1/0001, which is the same encoding .NET uses for DateTime.Ticks. You can convert it into a Delphi TDateTime like this:
theDate := (Value / 864000000000) - 693593;
Here 864000000000 is the number of 100-nanosecond ticks per day, and 693593 is the number of days between 1/1/0001 and 30/12/1899, the TDateTime epoch.
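As a cross-check of the arithmetic, here is a small sketch in Python (not from the original answer) that applies the same conversion to the first sample row and gets 11/7/1971 back:

from datetime import datetime, timedelta

def ticks_to_datetime(ticks):
    # 100-nanosecond ticks since 0001-01-01 -> datetime (10 ticks = 1 microsecond)
    return datetime(1, 1, 1) + timedelta(microseconds=ticks // 10)

def ticks_to_tdatetime(ticks):
    # The same value expressed as a Delphi TDateTime (days since 1899-12-30)
    return ticks / 864000000000 - 693593

print(ticks_to_datetime(621939168000000000))   # 1971-11-07 00:00:00
print(ticks_to_tdatetime(621939168000000000))  # 26244.0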

Related

OLEDB not reading all rows from XLSX under Windows 10 and Microsoft Office 2013

I have an XLSX file with 243k rows.
The same program gives different results on different computers.
If I open it on Windows 8.1 + Office 2010, the program reads all 243k rows and everything works fine.
Under Windows 10 + Office 2013 it reads only the first 237k rows, truncating the last 6k rows.
I'm using Delphi, with the following connection string:
ADOConnection1.ConnectionString := 'Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\File.xlsx;Extended Properties="Excel 12.0;HDR=NO;IMEX=1"';
ADOQuery1.SQL.Text := 'SELECT * FROM Sheet1$';
ADOQuery1.Open;
ShowMessage(ADOQuery1.RecordCount.ToString);
The problem was with the table indexes, and I had messed up the code a bit.
The query text was taking the first table name from the table list instead of a constant:
ADOQuery1.SQL.Text:='SELECT * FROM '
However, the table indexes are different from version to version, so the selection was made from different tables.
Windows 8.1+Office2010 table list:
_xlnm#_FilterDatabase
_xlnm#Database
_xlnm#Print_Area
Instruction$
CATALOG$
Windows 10+Office2013 table list:
_xlnm#Database
Instruction$
Instruction$_xlnm#Print_Area
CATALOG$
CATALOG$_xlnm#_FilterDatabase
But if I open the file in Excel, there are only two sheets, named "Instruction" and "Catalog".
I don't know where the 237k rows came from.
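One quick way to see which sheets the workbook itself contains, independent of whatever the ACE provider reports as tables, is to list them outside ADO. A sketch in Python (assuming pandas and openpyxl are installed; the path is the one from the question):

import pandas as pd

# List the worksheet names stored in the .xlsx itself,
# bypassing the ACE OLEDB table list entirely.
workbook = pd.ExcelFile(r"C:\File.xlsx")
print(workbook.sheet_names)  # e.g. ['Instruction', 'CATALOG']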

Month calculation in neo4j

I am working with a Neo4j database.
I have a CSV file with a column ENTRY_DATE whose values are strings in the format "08-apr-15", while the current date is in the format "21-10-2016". How do I convert the string ENTRY_DATE to the current date format? What I actually need is to compare the current date with the ENTRY_DATE column and calculate the difference in months between the two dates.
You can parse both strings using Cypher's string functions.
WITH {current_date} AS current_date
WITH split(current_date, '-') AS date_parts
WITH toInt(date_parts[1]) AS c_month, toInt(date_parts[2]) AS c_year
WITH {jan:1, feb:2, mar:3, apr:4, may:5, jun:6, jul:7, aug:8, sep:9, oct:10, nov:11, dec:12} AS month_map, c_month, c_year
LOAD CSV WITH HEADERS FROM "place" AS row
WITH row, split(row.ENTRY_DATE, '-') AS e_parts, month_map, c_month, c_year
// assumes two-digit entry years are 20xx
WITH row, month_map[e_parts[1]] AS e_month, 2000 + toInt(e_parts[2]) AS e_year, c_month, c_year
WITH row, (c_year - e_year) * 12 + (c_month - e_month) AS months_between
RETURN row, months_between
But InverseFalcon is definitely right in that this would be way better if handled through the backend. Clean up your CSV before import and pass in a better source of current date and you can cut most of that query out.
Neo4j doesn't have much support for date/time operations at all. If your backend framework/language has good date/time support, I'd recommend doing that calculation there instead of from the db itself.
But if you are looking for a pure neo4j solution, the APOC Procedures library has some conversion and formatting support, but if that's not enough, you may want to look at GraphAware's neo4j timetree. That should give you some graph operations for finding the years or months between different dates.
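For example, if the backend happened to be Python, the month difference could be computed in a few lines (a sketch only, using the two date formats quoted in the question):

from datetime import datetime

def months_between(entry_date, current_date):
    # entry_date like '08-apr-15', current_date like '21-10-2016'
    entry = datetime.strptime(entry_date, "%d-%b-%y")
    current = datetime.strptime(current_date, "%d-%m-%Y")
    return (current.year - entry.year) * 12 + (current.month - entry.month)

print(months_between("08-apr-15", "21-10-2016"))  # 18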

Joining two Pandas DataFrames does not work anymore?

I have 2 Pandas Dataframes.
The first one looks like this:
date rank id points
2010-01-04 1 100001 10550
2010-01-04 2 100002 9205
The second one like this:
id name
100001 A
100002 B
I want to join both dataframes via the id column. So the result should look like:
date rank id points name
2010-01-04 1 100001 10550 A
2010-01-04 2 100002 9205 B
Some weeks ago I wrote code for that, but for some reason it does not work anymore. I end up with an empty dataframe after I execute this code for joining:
join = pd.merge(df1,df2, on='id')
Why is join empty?
Short story: as pointed out in the comments already, I was comparing strings with integers.
Long story: I didn't expect pandas to parse the id columns of the two input CSV files as different datatypes. df1.id was of type object, while df2.id was of type int, and I needed to find out why df1.id was parsed as object rather than automatically as int, since it only contained numbers.
It turned out to be related to the encoding of my CSV file. In Notepad++ the file showed as plain UTF-8. Pandas did not like this: when I tried to convert the id column to int, it raised an error like ValueError: invalid literal for int() with base 10: '\ufeff100001'. The number 100001 is the first id of the first row, so there was an encoded character (\ufeff, a byte order mark) at the very beginning of the file that prevented pandas from parsing the whole column as int. In Notepad++ I then changed the encoding of the file to UTF-8 without BOM, and everything worked.
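For completeness, the same fix can also be applied at read time instead of re-saving the file: pandas strips the BOM when you read with the utf-8-sig codec (the file names below are placeholders):

import pandas as pd

# 'utf-8-sig' removes a leading byte order mark ('\ufeff'),
# so the id column parses as int64 instead of object.
df1 = pd.read_csv("first.csv", encoding="utf-8-sig")
df2 = pd.read_csv("second.csv", encoding="utf-8-sig")
joined = pd.merge(df1, df2, on="id")
print(joined.dtypes)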

Firebird TIME SQL format displayed properly in TMaskEdit with mask: '!90:00;1;_'

I have a Firebird table with a field 'ABC' of type TIME. I use a TMaskEdit to populate it, with the mask !90:00;1;_
It works fine when populating the field with values, but it does not work properly when displaying values from the field.
I use:
editABC.Text := DateTimeToStr(FieldByName('ABC').AsDateTime);
The problem is that a time such as 14:48 is displayed as 14:00.
Question: how can I display the value of a TIME field correctly in the TMaskEdit?

Input from form being converted to decimal on insert

I have a DB column for UPC codes set up as a 'numeric' data type in Postgres.
I generated scaffolding to input values into this table, but Rails is converting the input to a float for some reason.
E.g., a 13-digit entry such as 1234567890000 is converted to 1234567890000.0.
From the logs:
INSERT INTO "upc_codes" ("upccode") VALUES (1234567890000.0) RETURNING "id"
Where is the data type for the SQL statement being set, or not set, as the case may be?
What data type are you using for this column? Try changing the column type to an integer in a migration:
change_column :upc_codes, :upc_code, :integer
The maximum value for a standard 32-bit integer column is 2147483647, so I suppose this could cause errors somewhere.
Try changing the column to a bigint.
Anyway, in my experience it is better to handle big numbers as strings in Rails (that way you can safely change the DB later); you can always call to_i on them in your code.
Sorry for my English.
When you set the column type to NUMERIC, are you specifying the precision and scale, like NUMERIC(13,0)? (13 is the precision, 0 is the scale.)
I submitted this again as an answer because I guess I commented when I should have answered.
