Delphi/FireDAC/Firebird: get query performance info

In the IBExpert tool, executing a query shows the query performance info like this:
------ Performance info ------
Prepare time = 0ms
Execute time = 0ms
Avg fetch time = 0,00 ms
Current memory = 14*717*320
Max memory = 16*060*920
Memory buffers = 3*000
Reads from disk to cache = 69
Writes from cache to disk = 0
Fetches from cache = 572
I would like to get this information in my own tool. Does anyone know how to get it using FireDAC and an FDQuery on a Firebird 2.5 database in Delphi?
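One way to get most of these counters (a sketch rather than a tested recipe): Firebird 2.1 and later expose them through the MON$ monitoring tables, so a second TFDQuery on the same TFDConnection can simply select them. The helper name MonQuery and the log target below are placeholders; the MON$ table and column names are the standard ones from Firebird's monitoring tables.

// Sketch: read the current attachment's page and memory counters from
// Firebird's MON$ monitoring tables (Firebird 2.1+).
uses
  System.SysUtils, System.Classes, FireDAC.Comp.Client;

procedure ShowPerfInfo(MonQuery: TFDQuery; Log: TStrings);
begin
  MonQuery.SQL.Text :=
    'select io.MON$PAGE_READS, io.MON$PAGE_WRITES, io.MON$PAGE_FETCHES, ' +
    '       mem.MON$MEMORY_USED, mem.MON$MAX_MEMORY_USED ' +
    'from MON$ATTACHMENTS att ' +
    'join MON$IO_STATS io on io.MON$STAT_ID = att.MON$STAT_ID ' +
    'join MON$MEMORY_USAGE mem on mem.MON$STAT_ID = att.MON$STAT_ID ' +
    'where att.MON$ATTACHMENT_ID = CURRENT_CONNECTION';
  MonQuery.Open;
  try
    Log.Add(Format('Reads from disk to cache = %d',
      [MonQuery.FieldByName('MON$PAGE_READS').AsLargeInt]));
    Log.Add(Format('Writes from cache to disk = %d',
      [MonQuery.FieldByName('MON$PAGE_WRITES').AsLargeInt]));
    Log.Add(Format('Fetches from cache = %d',
      [MonQuery.FieldByName('MON$PAGE_FETCHES').AsLargeInt]));
    Log.Add(Format('Current memory = %d',
      [MonQuery.FieldByName('MON$MEMORY_USED').AsLargeInt]));
    Log.Add(Format('Max memory = %d',
      [MonQuery.FieldByName('MON$MAX_MEMORY_USED').AsLargeInt]));
  finally
    MonQuery.Close;
  end;
end;

Note that the MON$ data is a snapshot taken per transaction, so re-read it in a fresh transaction to refresh the counters, and that the figures above are per attachment; for database-wide numbers like IBExpert's Current/Max memory and Memory buffers, join against MON$DATABASE.MON$STAT_ID (and read MON$DATABASE.MON$PAGE_BUFFERS) instead. Prepare and execute times are easiest to measure yourself, e.g. with TStopwatch around Prepare/Open, or by enabling FireDAC's tracing (e.g. TFDMoniFlatFileClientLink).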

Related

Google cloud dataflow : Shutting down JVM after 8 consecutive periods of measured GC thrashing

I am using Google Cloud Dataflow to do some transformations.
I am reading about 3 million records from GBQ (BigQuery), performing a transformation, and writing the result to GCS.
While doing this, the Dataflow job fails with the following error:
Shutting down JVM after 8 consecutive periods of measured GC thrashing
Workflow failed. Causes: S20:Read GBQ/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/Read+Read GBQ/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/GroupByWindow+Read GBQ/Reshuffle.ViaRandomKey/Reshuffle/ExpandIterable+Read GBQ/Reshuffle.ViaRandomKey/Values/Values/Map+Read GBQ/ReadFiles+Read GBQ/PassThroughThenCleanup/ParMultiDo(Identity)+Read GBQ/PassThroughThenCleanup/View.AsIterable/ParDo(ToIsmRecordForGlobalWindow)+transform+Split results/ParMultiDo(Partition)+Write errors/WriteFiles/RewindowIntoGlobal/Window.Assign+Write errors/WriteFiles/WriteShardedBundlesToTempFiles/ApplyShardingKey+Write errors/WriteFiles/WriteShardedBundlesToTempFiles/GroupIntoShards/Reify+Write errors/WriteFiles/WriteShardedBundlesToTempFiles/GroupIntoShards/Write+Write entities Gzip/WriteFiles/WriteShardedBundlesToTempFiles/ApplyShardingKey+Write entities Gzip/WriteFiles/WriteShardedBundlesToTempFiles/GroupIntoShards/Reify+Write entities Gzip/WriteFiles/WriteShardedBundlesToTempFiles/GroupIntoShards/Write failed., A work item was attempted 4 times without success. Each time the worker eventually lost contact with the service. The work item was attempted on:
DataConverterOptions options = PipelineOptionsFactory.fromArgs(args).withValidation()
        .as(DataConverterOptions.class);
Pipeline p = Pipeline.create(options);

EntityCreatorFn entityCreatorFn = EntityCreatorFn.newWithGCSMapping(options.getMapping(),
        options.getWithUri(), options.getLineNumberToResult(), options.getIsPartialUpdate(),
        options.getQuery() != null);

PCollectionList<String> resultByType = p
        .apply("Read GBQ", BigQueryIO.read(
                (SchemaAndRecord elem) -> elem.getRecord().get("lineNumber") + "|" + elem.getRecord().get("sourceData"))
            .fromQuery(options.getQuery()).withoutValidation()
            .withCoder(StringUtf8Coder.of()).withTemplateCompatibility())
        .apply("transform", ParDo.of(entityCreatorFn))
        .apply("Split results", Partition.of(2, (Partition.PartitionFn<String>) (elem, numPartitions) -> {
            if (elem.startsWith(PREFIX_ERROR)) {
                return PARTITION_ERROR;
            }
            return PARTITION_SUCCESS;
        }));

FileIO.Sink sink = TextIO.sink();
resultByType.get(0).apply("Write entities Gzip", FileIO.write().to(options.getOutput())
        .withCompression(Compression.GZIP).withNumShards(options.getShards()).via(sink));
resultByType.get(1).apply("Write errors", TextIO.write().to(options.getErrorOutput()).withoutSharding());
p.run();
Shutting down JVM after 8 consecutive periods of measured GC thrashing. Memory is used/total/max = 109/301/2507 MB, GC last/max = 54.00/54.00 %, #pushbacks=0, gc thrashing=true.
Does 'EntityCreatorFn.newWithGCSMapping' cache elements in memory by any chance? It seems like one of the steps in your pipeline is consuming too much memory (note that Dataflow cannot parallelize processing of a single element of a DoFn). I suggest adjusting your pipeline or trying out highmem machines. If the problem persists, please consider contacting Google Cloud Support with the relevant job IDs.
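If you try the highmem route, a minimal sketch (assuming DataConverterOptions can be viewed as DataflowPipelineOptions via as(); the --workerMachineType pipeline flag achieves the same without code changes):

import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

// Same setup as in the question, but asking Dataflow for high-memory workers
// so each worker JVM has more heap available for the transform step.
DataConverterOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(DataConverterOptions.class);
options.as(DataflowPipelineOptions.class).setWorkerMachineType("n1-highmem-4"); // example machine type
Pipeline p = Pipeline.create(options);

// Alternatively: pass --workerMachineType=n1-highmem-4 when launching the job.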

Size in MB of mnesia table

How do you read the output of :mnesia.info?
For example, I only have one table, some_table, and :mnesia.info returns this:
---> Processes holding locks <---
---> Processes waiting for locks <---
---> Participant transactions <---
---> Coordinator transactions <---
---> Uncertain transactions <---
---> Active tables <---
some_table: with 16020 records occupying 433455 words of mem
schema : with 2 records occupying 536 words of mem
===> System info in version "4.15.5", debug level = none <===
opt_disc. Directory "/home/ubuntu/project/Mnesia.nonode@nohost" is NOT used.
use fallback at restart = false
running db nodes = [nonode@nohost]
stopped db nodes = []
master node tables = []
remote = []
ram_copies = ['some_table',schema]
disc_copies = []
disc_only_copies = []
[{nonode@nohost,ram_copies}] = [schema,'some_table']
488017 transactions committed, 0 aborted, 0 restarted, 0 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
Also, calling:
:mnesia.table_info(:some_table, :size)
returns 16020, which I think is the number of keys, but how can I get the memory usage?
First, you need mnesia:table_info(Table, memory) to obtain the number of words occupied by your table; in your example you are getting the number of items in the table, not the memory. To convert that value to MB, first use erlang:system_info(wordsize) to get the word size in bytes for your machine architecture (on a 32-bit system a word is 4 bytes, on a 64-bit system it is 8 bytes), multiply it by the table's memory value to get the size in bytes, and finally convert to megabytes:
MnesiaMemoryMB = (mnesia:table_info(some_table, memory) * erlang:system_info(wordsize)) / (1024*1024).
Put another way: erlang:system_info(wordsize) returns the word size in bytes (4 bytes on 32-bit, 8 bytes on 64-bit), so your table is using 433455 x wordsize bytes.
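For reference, the same calculation written in Elixir (matching the iex calls in the question; :some_table is assumed to be the table's atom name), using the numbers reported above:

words = :mnesia.table_info(:some_table, :memory)   # 433455 words in the question
bytes = words * :erlang.system_info(:wordsize)     # x 8 on a 64-bit VM = 3_467_640 bytes
mb    = bytes / (1024 * 1024)                      # ≈ 3.31 MB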

Apache Ignite uses too much RAM

I've tried to use Ignite to store events, but I ran into excessive RAM usage while inserting new data.
I'm running an Ignite node with a 1 GB heap and the default configuration:
curs.execute("""CREATE TABLE trololo (id LONG PRIMARY KEY, user_id LONG, event_type INT, timestamp TIMESTAMP) WITH "template=replicated" """);
n = 10000
for i in range(200):
values = []
for j in range(n):
id_ = i * n + j
event_type = random.randint(1, 5)
user_id = random.randint(1000, 5000)
timestamp = datetime.datetime.utcnow() - timedelta(hours=random.randint(1, 100))
values.append("({id}, {user_id}, {event_type}, '{timestamp}')".format(
id=id_, user_id=user_id, event_type=event_type, uid=uid, timestamp=timestamp.strftime('%Y-%m-%dT%H:%M:%S-00:00')
))
query = "INSERT INTO trololo (id, user_id, event_type, TIMESTAMP) VALUES %s;" % ",".join(values)
curs.execute(query)
But after loading about 10^6 events, I get 100% CPU usage because the whole heap is taken and the GC keeps trying (unsuccessfully) to free some space.
If I then pause for about 10 minutes, the GC eventually frees some space and I can continue loading new data, but the heap fills up again and the cycle repeats.
It's really strange behaviour and I couldn't find a way to load 10^7 events without these problems.
Approximately, one event should take:
8 + 8 + 4 + 10 (timestamp size?) ≈ 30 bytes
30 bytes x 3 (overhead), so it should be less than 100 bytes per record.
So 10^7 * 10^2 = 10^9 bytes = 1 GB.
It therefore seems that 10^7 events should fit into 1 GB of RAM, shouldn't they?
Actually, since version 2.0, Ignite stores everything off-heap with the default settings.
The main problem here is that you generate a very big query string with 10000 inserts, which has to be parsed and, of course, held on the heap. After decreasing the size of each query you will get better results; see the sketch below.
Also, as you can see in the capacity planning documentation, Ignite adds around 200 bytes of overhead per entry. Additionally, allow around 200-300 MB per node for internal memory and a reasonable amount of memory for the JVM and GC to operate efficiently.
If you really want to use only a 1 GB heap you can try to tune the GC, but I would recommend increasing the heap size.
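A sketch of that suggestion, reusing the cursor and table from the question (curs and the schema come from the code above; the batch size of 1000 is only an illustrative value):

import datetime
import random
from datetime import timedelta

BATCH = 1000          # rows per INSERT statement (illustrative value)
TOTAL = 200 * 10000   # same number of events as in the question

def row(i):
    ts = datetime.datetime.utcnow() - timedelta(hours=random.randint(1, 100))
    return "({id}, {user_id}, {event_type}, '{ts}')".format(
        id=i, user_id=random.randint(1000, 5000),
        event_type=random.randint(1, 5),
        ts=ts.strftime('%Y-%m-%dT%H:%M:%S-00:00'))

for start in range(0, TOTAL, BATCH):
    values = ",".join(row(i) for i in range(start, start + BATCH))
    # each statement now carries only BATCH rows, so the SQL text the server
    # has to parse (and hold on the heap) stays small
    curs.execute("INSERT INTO trololo (id, user_id, event_type, timestamp) VALUES %s;" % values)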

Postgresql 8.4 -> 9.1 : ANALYSE VERBOSE; -> out of shared memory

FYI, this question has been posted on the pgsql-general mailing list too.
We have had a problem since we migrated from 8.4 to 9.1.
When we run:
ANALYSE VERBOSE;
(statistics on all databases, with 500 tables and 1 TB of data in total)
we now get this message:
org.postgresql.util.PSQLException: ERROR: out of shared memory  Hint: You might need to increase max_locks_per_transaction.
On 8.4 there was no error. Here is our specific postgresql.conf configuration on the server:
default_statistics_target = 200
maintenance_work_mem = 1GB
constraint_exclusion = on
checkpoint_completion_target = 0.9
effective_cache_size = 7GB
work_mem = 48MB
wal_buffers = 32MB
checkpoint_segments = 64
shared_buffers = 2304MB
max_connections = 150
random_page_cost = 2.0
max_locks_per_transaction = 128
max_locks_per_transaction was at its default value before (64?); we already tried increasing it according to the error hint.
We also tried increasing the Linux shared memory settings. Do you have any suggestions?
I'd try lowering maintenance_work_mem (try 256MB) and setting max_locks_per_transaction to a much higher value, e.g. 1024.
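Applied to the configuration above, that would look like this in postgresql.conf (note that a change to max_locks_per_transaction only takes effect after a server restart):
maintenance_work_mem = 256MB
max_locks_per_transaction = 1024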

Error -206 table (aus_command) not in database

Informix 11.70.TC4DE on Windows Vista SP2, i7 dual core, 8 GB RAM.
I searched for the aus_command table and it is in the database. Any idea why the log says it could not be found? Here is the online log:
Tue May 15 22:07:21 2012
22:07:21 Booting Language <c> from module <>
22:07:21 Loading Module <CNULL>
22:07:21 Booting Language <builtin> from module <>
22:07:21 Loading Module <BUILTINNULL>
22:07:28 DR: DRAUTO is 0 (Off)
22:07:28 DR: ENCRYPT_HDR is 0 (HDR encryption Disabled)
22:07:28 IBM Informix Dynamic Server Version 11.70.TC4DE Software Serial Number AAA#B000000
22:07:29 Performance Advisory: The physical log size is smaller than the recommended size for a
server configured with RTO_SERVER_RESTART.
22:07:29 Results: Fast recovery performance might not be optimal.
22:07:29 Action: For best fast recovery performance when RTO_SERVER_RESTART is enabled,
increase the physical log size to at least 242000 KB. For servers
configured with a large buffer pool, this might not be necessary.
22:07:29 IBM Informix Dynamic Server Initialized -- Shared Memory Initialized.
22:07:29 Started 1 B-tree scanners.
22:07:29 B-tree scanner threshold set at 5000.
22:07:29 B-tree scanner range scan size set to -1.
22:07:29 B-tree scanner ALICE mode set to 6.
22:07:29 B-tree scanner index compression level set to med.
22:07:29 Physical Recovery Started at Page (2:5459).
22:07:29 Physical Recovery Complete: 0 Pages Examined, 0 Pages Restored.
22:07:29 Logical Recovery Started.
22:07:29 5 recovery worker threads will be started.
22:07:30 Logical Recovery has reached the transaction cleanup phase.
22:07:30 Logical Recovery Complete.
6 Committed, 0 Rolled Back, 0 Open, 0 Bad Locks
22:07:31 Onconfig parameter STORAGE_FULL_ALARM modified from 0 to 3.
22:07:31 Dataskip is now OFF for all dbspaces
22:07:31 Init operation complete - Mode Online
22:07:31 Checkpoint Completed: duration was 0 seconds.
22:07:31 Tue May 15 - loguniq 21, logpos 0x500b4, timestamp: 0xa252b Interval: 62
22:07:31 Maximum server connections 0
22:07:31 Checkpoint Statistics - Avg. Txn Block Time 0.000, # Txns blocked 0, Plog used 16, Llog used 1
22:07:31 On-Line Mode
22:07:34 SCHAPI: Started dbScheduler thread.
22:07:36 Defragmenter cleaner thread now running
22:07:36 Defragmenter cleaner thread cleaned:0 partitions
22:07:36 Booting Language <spl> from module <>
22:07:36 Loading Module <SPLNULL>
22:07:36 Auto Registration is synced
22:07:36 SCHAPI: Started 2 dbWorker threads.
22:07:38 SCHAPI: [Auto Update Statistics Refresh 33-4] Error -206 The specified table (aus_command) is not in the database.
22:07:38 SCHAPI: [Auto Update Statistics Refresh 33-4] Error -111 ISAM error: no record found.
22:07:38 SCHAPI: [Auto Update Statistics Refresh 33-4] Error -206 The specified table (aus_command) is not in the database.
22:07:38 SCHAPI: [Auto Update Statistics Refresh 33-4] Error -111 ISAM error: no record found.
22:07:40 SCHAPI: [Auto Update Statistics Evaluation 32-8] Error -242 Could not open database table (informix.aus_command).
22:07:40 SCHAPI: [Auto Update Statistics Evaluation 32-8] Error -106 ISAM error: non-exclusive access.
22:07:45 Logical Log 21 Complete, timestamp: 0xae6d3.
22:23:40 Explain file for session 31 : C:\PROGRA~1\IBM\Informix\11.70\sqexpln\cost.out
22:24:07 Explain file for session 31 : C:\PROGRA~1\IBM\Informix\11.70\sqexpln\cost.out
22:24:24 Explain file for session 31 : C:\PROGRA~1\IBM\Informix\11.70\sqexpln\cost.out
22:27:10 Checkpoint Completed: duration was 0 seconds.
22:27:10 Tue May 15 - loguniq 22, logpos 0x545018, timestamp: 0xb4f29 Interval: 63
22:27:10 Maximum server connections 1
22:27:10 Checkpoint Statistics - Avg. Txn Block Time 0.000, # Txns blocked 0, Plog used 657, Llog used 3573
22:27:11 IBM Informix Dynamic Server Stopped.
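One thing worth verifying (a hedged suggestion, not a confirmed fix): the Auto Update Statistics tables such as aus_command live in the sysadmin database rather than in your user databases, so check there, for example from dbaccess:

-- the AUS scheduler tables are created in the sysadmin database
DATABASE sysadmin;
SELECT tabname, owner FROM systables WHERE tabname = 'aus_command';

If sysadmin or that table is missing or cannot be opened (the accompanying error -106 "non-exclusive access" points at a locking problem rather than a missing table), the dbScheduler AUS tasks will log the -206/-242 errors shown above.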
