Does anyone know of a table or chart that compares the features of SQL Server, SQLite, and MySQL (maybe Oracle too)?
An example of what I'm looking for exists at DEVX, but it doesn't include SQL Server or SQLite.
Comparison of Relational Database Management Systems
http://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems
See: http://www.h2database.com/html/features.html#comparison
*1 HSQLDB supports text tables.
*2 MySQL supports linked MySQL tables under the name 'federated tables'.
*3 Derby supports role-based security and password checking as an option.
*4 Derby only supports global temporary tables.
*5 The default H2 jar file contains debug information; jar files for other databases do not.
*6 PostgreSQL supports functional indexes.
*7 Derby only supports updatable result sets if the query is not sorted.
*8 Derby doesn't support standards-compliant information schema tables.
*9 When using MVCC (multi-version concurrency control).
*10 Derby and HSQLDB don't hide data patterns well.
*11 The MULTI_THREADED option is not enabled by default, and is not yet supported when using MVCC.
*12 Derby doesn't support the EXPLAIN statement, but it supports runtime statistics and retrieving statement execution plans.
*13 Derby doesn't support the syntax LIMIT .. [OFFSET ..], but it supports FETCH FIRST .. ROW[S] ONLY (see the example below).
*14 Using collations.
*15 Derby and H2 support ROW_NUMBER() OVER().
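To illustrate footnote *13, here is a minimal sketch of the two pagination syntaxes; the table and column names are invented for the example:

-- LIMIT syntax (H2, MySQL, PostgreSQL, SQLite): first 10 rows
SELECT emp_id, fname FROM employee ORDER BY emp_id LIMIT 10;
-- Derby equivalent, per footnote *13
SELECT emp_id, fname FROM employee ORDER BY emp_id FETCH FIRST 10 ROWS ONLY;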
I have an SQL Server database with plenty of iso_1 (iso8859-1) columns that are retrieved incorrectly on a Windows desktop with the utf-8 codepage (65001), while they are retrieved fine on Windows desktops with the Windows-1252 (iso8859-1) codepage.
Error: [FireDAC][DatS]-32. Variable length column [nom] overflow. Value length - [51], column maximum length - [50]
This happens because characters like Ñ come back incorrectly encoded as a pair of characters.
SQL Server Management Studio retrieves those columns correctly, so I guess the problem is in how the FireDAC connection of my application is configured, but I can't find a charset property anywhere to indicate the codepage of the original data.
How do you indicate the transcoding needed on a FireDAC connection when the codepage of the database differs from that of the desktop running your application?
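For illustration only (this is not FireDAC code), a tiny Java sketch of why the overflow happens: a character like Ñ occupies one byte in a Windows-1252/Latin-1 codepage but two bytes in UTF-8, so a value that fits in the 50-character column can exceed 50 bytes once it is transcoded:

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CodepageLengthDemo {
    public static void main(String[] args) {
        String value = "Ñ";
        // One byte in the single-byte Windows-1252 codepage...
        System.out.println(value.getBytes(Charset.forName("windows-1252")).length); // prints 1
        // ...but two bytes in UTF-8, which is how a 50-character value can come
        // back as 51 bytes, as in the FireDAC overflow error above.
        System.out.println(value.getBytes(StandardCharsets.UTF_8).length); // prints 2
    }
}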
Is there any way to ship metrics gathered from Telegraf to FluentD, and then into InfluxDB?
I know it's possible to write data from FluentD into InfluxDB; but how does one ship data from Telegraf into FluentD, basically using FluentD as a buffer (as opposed to using Kafka or Redis)?
While it might be possible to do this with FluentD using some of the available (although outdated) output plugins, such as InfluxDB-Metrics, I couldn't get the plugin to work properly, and it hasn't been updated in over six years, so it will probably not work with newer releases of FluentD.
Fluent Bit, however, has an InfluxDB output built right into it, so I was able to get it to work with that. The caveat is that it has no Telegraf plugin. So the solution I found was to set up a tcp input plugin in Fluent Bit and configure Telegraf to write JSON-formatted data to it in its output section (a sketch of the Telegraf side follows the Fluent Bit config below).
The caveat of doing this is that the JSON data is nested and not formatted properly for InfluxDB. The workaround is to use nest filters in Fluent Bit to 'lift' the nested data and re-format it properly for InfluxDB.
Below is an example for disk space, which is not a metric natively supported by Fluent Bit but is natively supported by Telegraf:
#SET me=${HOST_HOSTNAME}
[INPUT] ## tcp recipe ## Collect data from telegraf
Name tcp
Listen 0.0.0.0
Port 5170
Tag telegraf.${me}
Chunk_Size 32
Buffer_Size 64
Format json
[FILTER] ## rename the three tags sent from Telegraf to prevent duplicates
Name modify
Match telegraf.*
Condition Key_Value_Equals name disk
Rename fields fieldsDisk
Rename name nameDisk
Rename tags tagsDisk
[FILTER] ## un-nest the JSON formatted info under the 'fieldsDisk' key
Name nest
Match telegraf.*
Operation lift
Nested_under fieldsDisk
Add_prefix disk.
[FILTER] ## un-nest the JSON formatted info under the 'tagsDisk' key
Name nest
Match telegraf.*
Operation lift
Nested_under tagsDisk
Add_prefix disk.
[OUTPUT] ## output properly formatted JSON info
Name influxdb
Match telegraf.*
Host influxdb.server.com
Port 8086
#HTTP_User whatever
#HTTP_Passwd whatever
Database telegraf.${me}
Sequence_Tag point_in_time
Auto_Tags On
NOTE: This is just a simple awkward config for my own proof of concept
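For completeness, a sketch of the Telegraf side of this setup: a socket_writer output pointed at the Fluent Bit tcp input above, emitting JSON. The host name is a placeholder, and the port must match the Port setting of the [INPUT] section:

[[outputs.socket_writer]]
  ## placeholder host; point this at the machine running Fluent Bit
  address = "tcp://fluentbit.example.com:5170"
  ## emit JSON so the modify/nest filters above can reshape the payload
  data_format = "json"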
I am trying to use transactions to speed up the insertion of a large number of database entries.
I am using SQL Server Compact 4.0 and the ATL OLEDB API based on C++.
Here is the basic sequence:
sessionobj.StartTransaction();
tableobject.Insert(//);
tableobject.Insert(//);
tableobject.Insert(//);
...
sessionobj.Commit();
NOTE: the tableobject object is a CTable that is initialized by the sessionobj CSession object.
I should be seeing a sizeable performance increase but I am not. Does anyone know what I am doing wrong?
I have an application which used embedded neo4j earlier, but I have now migrated to neo4j server (using the java rest binding). I need to import 4k nodes, around 40k properties, and 30k relationships at a time. When I imported with embedded neo4j, it used to take 10-15 minutes, but it is taking more than 3 hours for neo4j server with the same data, which is unacceptable. How can I configure the server to import the data faster?
This is what my neo4j.properties looks like:
# Default values for the low-level graph engine
use_memory_mapped_buffers=true
neostore.nodestore.db.mapped_memory=200M
neostore.relationshipstore.db.mapped_memory=1G
neostore.propertystore.db.mapped_memory=500M
neostore.propertystore.db.strings.mapped_memory=500M
#neostore.propertystore.db.arrays.mapped_memory=130M
# Enable this to be able to upgrade a store from 1.4 -> 1.5 or 1.4 -> 1.6
#allow_store_upgrade=true
# Enable this to specify a parser other than the default one. 1.5, 1.6, 1.7 are available
#cypher_parser_version=1.6
# Keep logical logs, helps debugging but uses more disk space, enabled for
# legacy reasons To limit space needed to store historical logs use values such
# as: "7 days" or "100M size" instead of "true"
keep_logical_logs=true
# Autoindexing
# Enable auto-indexing for nodes, default is false
node_auto_indexing=true
# The node property keys to be auto-indexed, if enabled
node_keys_indexable=primaryKey
# Enable auto-indexing for relationships, default is false
relationship_auto_indexing=true
# The relationship property keys to be auto-indexed, if enabled
relationship_keys_indexable=XY
cache_type=weak
Can you share the code that you use for importing the data?
The java-rest-binding is just a thin wrapper around the verbose REST API, which is not intended for data import.
I recommend using Cypher queries in batches, with parameters, if you want to import more data. Check out RestCypherQueryEngine(restGraphDb.getRestAPI()) for that, and see restGraphDB.executeBatch() for executing multiple queries in a single request.
Just don't rely on the results of those queries to make decisions later in your import.
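A rough sketch of that batched, parameterized approach (package names and the Cypher dialect differ between java-rest-binding and Neo4j versions, and the loadRows() helper here is purely hypothetical):

import java.util.Collections;
import java.util.Map;

import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.rest.graphdb.RestGraphDatabase;
import org.neo4j.rest.graphdb.query.RestCypherQueryEngine;

public class CypherImport {
    public static void main(String[] args) {
        RestGraphDatabase db = new RestGraphDatabase("http://localhost:7474/db/data");
        RestCypherQueryEngine engine = new RestCypherQueryEngine(db.getRestAPI());

        // One parameterized CREATE per row: the property map travels as a
        // parameter, so the statement text stays constant and cacheable.
        for (Map<String, Object> props : loadRows()) {
            engine.query("CREATE (n {props})", MapUtil.map("props", props));
        }
        db.shutdown();
    }

    // Hypothetical placeholder for however the 4k rows are produced.
    private static Iterable<Map<String, Object>> loadRows() {
        return Collections.emptyList();
    }
}

Wrapping a group of these calls in restGraphDB.executeBatch(), as mentioned above, reduces the number of HTTP round trips further.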
Or import the data with embedded neo4j and then copy the directory over to the server's data/graph.db directory.
I want to know if there is any other way to load data from a text file other than using external tables.
Text file looks something like
101 fname1 lname1 D01..
102 fname2 lname2 D02..
I want to load it into a table with columns emp_id, fname, lname, dept etc.
Thanks!
There are three utilities in Informix to load data into the database from flat files:
The LOAD SQL command. Very simple to use, but not very flexible. I would recommend this for small amounts of records (fewer than 10k); a minimal example follows this list.
Dbload, a command-line utility that is a bit more complex than the LOAD SQL command. It gives you more control over how the records are loaded: commit intervals, starting point in the flat file, number of errors before exiting, etc. I'd recommend this utility for small to medium-sized data loads (>10k, <100k).
HPL, or High Performance Loader, a rather complex utility that can load data at a very high rate, but with a lot of overhead. Recommended for large to extra-large data loads.
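For the first option, a minimal sketch of the LOAD statement as run from DB-Access; the file name, delimiter, and table/column names are assumptions based on the sample rows in the question, and the flat file must use the delimiter you declare:

LOAD FROM "employees.unl" DELIMITER "|"
INSERT INTO employee (emp_id, fname, lname, dept);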
As ceinmart suggested in the comments, you can do it from the server side or from the client side. From the server side you can use DB-Access and the LOAD command. From the client side you can use any tool you like. For such tasks I often use Jython, which can use Python string and CSV libraries as well as JDBC database drivers. With Jython you can use the csv module to read data from the file and a PreparedStatement to insert it into the database. In my other answer, Substring in Informix, you will see such a PreparedStatement.