How can I open an Excel file in Delphi 7 and move the data to a Paradox file?
Use the ADO components from the ADO tab.
To "connect" with the file use the TAdoConnection then double-click it, in the provider tab you must select "Microsoft Jet 4.0 OLE DB Provider", in the connection tab you put the name of the file relative to the current directory of your process, in the fourth tab in extended properties you select the version of excel that you want to use.
Note: This connection only works at runtime.
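In code, the same connection can be set up along these lines (a minimal sketch; the file name and the "Excel 8.0;HDR=Yes" extended properties are assumptions, adjust them to your workbook):

uses ADODB;

procedure ConnectToWorkbook(Conn: TADOConnection);
begin
  Conn.LoginPrompt := False;
  // Jet provider; "Excel 8.0" targets .xls workbooks, HDR=Yes reads row 1 as headers
  Conn.ConnectionString :=
    'Provider=Microsoft.Jet.OLEDB.4.0;' +
    'Data Source=MyWorkbook.xls;' +   // relative to the current directory
    'Extended Properties="Excel 8.0;HDR=Yes"';
  Conn.Open;
end;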
Now you can add a TADOQuery and link it to the TADOConnection. In this query you can use SQL DML statements such as SELECT and UPDATE (INSERT should work too, though I didn't try it; DELETE doesn't work). The only trick is that instead of table names in the FROM clause you use Excel ranges. For example, a range from cell A1 to cell C10 on the sheet MySheet1 is written [MySheet1$A1:C10]. Here's the full SELECT for this range:
SELECT *
FROM [MySheet1$A1:C10]
You can also use named ranges [MyNamedRangeName$] and entire sheets: [MyEntireSheet$] (notice the mandatory $ after the names).
Now, with the data in a dataset, you should be able to move it to the Paradox dataset, for example with an append loop like the one below.
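A minimal sketch of such a loop, assuming the Paradox table already exists with fields in the same order and compatible types (component names are illustrative):

uses DB, DBTables, ADODB;

procedure CopyExcelToParadox(Src: TADOQuery; Dest: TTable);
var
  i: Integer;
begin
  // Src.SQL is expected to hold something like: SELECT * FROM [MySheet1$A1:C10]
  Src.Open;
  Dest.Open;
  while not Src.Eof do
  begin
    Dest.Append;
    for i := 0 to Src.FieldCount - 1 do
      Dest.Fields[i].Value := Src.Fields[i].Value; // copy by position
    Dest.Post;
    Src.Next;
  end;
end;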
This About.com article explains it in more detail: http://delphi.about.com/od/database/l/aa090903a.htm
I already have a rule file (e.g. rule MM01), and I need to add more data rows in rule MM01 to one dimension, like below.
For example, I want to add 100 more rows of data in the "Replace" and "With" columns.
Do I have to add the 100 rows one by one, typing them in manually? Or is there another way to add bulk data to a rule file?
Nope, you just have to type them in.
If new items keep popping up in your source data, you might consider one of the following:
put your source text file into a SQL table and make your load rule read from the table (or, even better, try to load directly from the tables that generated the text file)
(assuming you have the data load automated via MaxL) add a PowerShell script that does the renames before you load the data
I've got an absolutely horrible dataset in Excel (13,500 variables, a lot of them irrelevant to my purposes). I need to analyse it in SPSS as I have a lot of data transformations to do... but SPSS 24 struggles with a dataset that size. Well... either SPSS struggles, or my work PC does.
Is there a way, when importing data from Excel, to import multiple ranges? Specifically, I want Column A (my unique identifier), then several other ranges (e.g. G:AC, DD:JJ, etc.).
/CELLRANGE=RANGE only seems to allow a single range.
If you have the appropriate ODBC driver for Excel, you can use the database reading facility in Statistics to select the fields. However, the catch with ODBC and Office is that the bitness has to match. If both are 32-bit, that's easy, but 64-bit Statistics would need 64-bit Office, and that's a world of hurt.
No (you can't specify multiple ranges) is the short answer. What you ought to do is import the entire relevant range and then use SPSS commands to manipulate it as you desire, for example ADD FILES FILE with the DROP subcommand to delete irrelevant columns/variables.
I don't know of a way to directly import multiple ranges from Excel. You might try importing the file into an MS Access database first, then reading the table from the database into SPSS using (in SPSS) "open database" --> "new query". The query editor will let you specify the fields you want to import.
I was wondering if Gephi supports importing a cluster file that has the community assignment of each node in a graph on a separate line (similar to the Pajek .clu format). I am looking for a way to color the nodes belonging to the same community. In igraph (for R), I can import such a file and set the vertex color attributes based on it. Does Gephi have a similar feature?
I saw an answer here from over 5 years ago saying that it wasn't possible; I was wondering whether that has changed.
Thanks!
The .clu file contains one line per node with the cluster number, at least in the example I see here. Gephi cannot import it directly, but you can trick it into producing the same result. Here are the steps I propose:
1. Import your .net file with Gephi.
2. Go to the Data Laboratory and sort your nodes according to the Id column, by clicking on the column name.
3. Create a new column called Cluster by pressing Add column at the bottom of the screen. The default String type will do.
4. Click Export table, select only the fields Id and Cluster, and export the file somewhere.
5. Open the CSV file with e.g. Excel or LibreOffice.
6. Open your .clu file with a text editor; even Notepad will do.
7. Copy all the numbers in the file and paste them into the Cluster column of your CSV. Save your CSV.
8. Import the CSV back into Gephi by clicking Import Spreadsheet and pressing OK through the next steps.
At the end you should see your Cluster column holding the same values as the .clu file!
Make sure that the same field delimiter is used in steps 4. and 8. I would suggest using ; as Excel understands it directly.
You are welcome to report back if you are still having problems.
I was basically following this example:
http://www.youtube.com/watch?v=B4uxLLIUddg
New -> Other -> DataSnap Server -> VCL Forms Application
(all default settings, port 211 working, TDSServerModule).
Then I created a table in SQLite:
CREATE TABLE [T1] (
[ID] INTEGER PRIMARY KEY AUTOINCREMENT,
[DATE] DATE,
[TIME] TIME);
On my ServerMethodsUnit1 I dropped a TSQLConnection.
Changed the Driver to Sqlite.
Removed LoginPrompt.
Connection connected OK.
Added a TSQLDataSet and connected it to my SQLite connection.
Set its CommandText to T1 (the table name).
Made it active without a problem.
Added a DataSetProvider1 and linked it to my dataset (the table T1).
Saved all.
Ran the server without a problem. With the server running, I constructed the client side:
To my project group I added a new project (a VCL Forms application).
Added a SQLConnection component.
Set its driver name to Datasnap.
Removed LoginPrompt.
On the form I dropped a DSProviderConnection1.
Connected it to my SQLConnection.
Set its ServerClassname to TServerMethods1.
Tested the connection - both work OK.
Dropped a ClientDataSet. Connected its RemoteServer property to DSProviderConnection1.
ProviderName to DataSetProvider1.
Connection succeeded. Clientdataset is active.
Added a DataSource. Linked it to my ClientDataSet.
All connections work. So I added a little GUI.
Dropped a TDBGrid and a TDBNavigator. Linked them to Datasource1.
The first strange thing I noticed is that all fields displayed as WideMemo.
Why this is, when the fields are of completely different types, I don't know.
I went to the Fields editor, added the fields, and when checking the BlobType they all displayed ftWideMemo.
I tried to insert today's date directly in the grid, and somehow in my DB it ended up as 1899-12-30.
Checking the table (T1 on the server side), the DATE and TIME fields also display as WideMemo.
What am I missing here ?
SQLite has very loose typing.
DATE and TIME are not native SQLite types.
And DBGrids don't know what to do with SQLite TEXT.
The way I got around this problem was to use VARCHAR(length) instead of TEXT when defining the fields.
Those will then display OK in a DBGrid. I stored dates as VARCHAR() too.
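For example, the table from the question could be recreated along these lines (a sketch using TSQLConnection.ExecuteDirect; the lengths are assumptions, pick whatever fits your data):

uses SqlExpr;

procedure RecreateT1(Conn: TSQLConnection);
begin
  Conn.ExecuteDirect('DROP TABLE IF EXISTS T1');
  Conn.ExecuteDirect(
    'CREATE TABLE T1 (' +
    ' ID INTEGER PRIMARY KEY AUTOINCREMENT,' +
    ' [DATE] VARCHAR(10),' +  // room for e.g. 2012-06-15
    ' [TIME] VARCHAR(8))');   // room for e.g. 13:45:00
end;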
See Also
Displaying and editing MEMO fields in Delphi's TDBGrid
http://delphi.about.com/library/weekly/aa030105a.htm
There is no native DATE nor TIME type in SQLite3:
SQLite does not have a storage class set aside for storing dates and/or times. Instead, the built-in Date And Time Functions of SQLite are capable of storing dates and times as TEXT, REAL, or INTEGER values:
TEXT as ISO8601 strings ("YYYY-MM-DD HH:MM:SS.SSS").
REAL as Julian day numbers, the number of days since noon in Greenwich on November 24, 4714 B.C. according to the proleptic Gregorian calendar.
INTEGER as Unix Time, the number of seconds since 1970-01-01 00:00:00 UTC.
Applications can choose to store dates and times in any of these formats and freely convert between formats using the built-in date and time functions.
Also ensure that you understand the "type affinity" feature of SQLite3 columns - it is pretty powerful, but may be confusing if you come from a "strongly typed" RDBMS.
What you can do is:
Either use Unix time and an INTEGER column - see for instance the UnixToDateTime function in the DateUtils.pas unit;
Or use a TEXT column and ISO-8601 encoding (my preferred option).
Then you will have to map this value in DataSnap - I know that our mORMot units allow this, but I do not know exactly how for DataSnap.
Also take a look at the integrated date/time functions in SQLite3, which are pretty complete.
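For the INTEGER/Unix-time route, here is a minimal console sketch of the round-trip with the DateUtils helpers (adapt it to however you actually read and write the column):

program UnixTimeDemo;

{$APPTYPE CONSOLE}

uses
  SysUtils, DateUtils;

var
  secs: Int64;
  dt: TDateTime;
begin
  secs := DateTimeToUnix(Now); // the value you would store in the INTEGER column
  dt := UnixToDateTime(secs);  // the value you get back when reading it
  Writeln(FormatDateTime('yyyy-mm-dd hh:nn:ss', dt));
end.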
I had the same issue: all my SQLite fields were showing up as WideMemo fields in the DBGrid.
One simple solution is to populate the corresponding SQLite table with at least one (1) row of 'proper' data (no empty fields).
When VCL components connect (for the first time), they seem to be able to create fields with the right type. That is, if you right-click a TSQLDataSet component and select Fields Editor -> Create fields against a table that has at least one row of data, the fields will be correctly mapped.
I have not checked if that works for the DATE type, but it does work for integers, doubles and text.
For some reason, if the table is empty, all fields are created as WideMemo blobs.
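A quick way to seed such a row in code (a sketch via ExecuteDirect; the literal values are just placeholders):

// one well-typed row so "Create fields" can infer sensible types
SQLConnection1.ExecuteDirect(
  'INSERT INTO T1 ([DATE], [TIME]) VALUES (''2012-06-15'', ''13:45:00'')');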
I have a spreadsheet which has 5 tabs; each tab updates from a SQL data source (not as a pivot table, though). Is there a way in Excel (2010) to save a copy of just the data, without the connection properties etc.? Basically the file is 6 MB and I need to get it down.
Thanks
Since it does not seem like you have tried anything, I have two suggestions.
It could be as simple as grouping sheets and copying them to a new workbook:
Sheets(Array("Sheet1", "Sheet2", "Sheet3")).Select
Sheets("Sheet3").Activate
Sheets(Array("Sheet1", "Sheet2", "Sheet3")).Copy
...
Or even:
Range("A1:E20").Select
Selection.Copy
Sheets.Add After:=Sheets(Sheets.Count)
Selection.PasteSpecial Paste:=xlPasteValues, Operation:=xlNone, SkipBlanks _
:=False, Transpose:=False
Give those a try and let us know the output, so we can adjust them and find a solution that suits you!
I save the workbook with the connections to a new workbook, then go into the new workbook and remove the data connections.