Error -206 table (aus_command) not in database - informix

Informix 11.70.TC4DE on Windows Vista SP2, i7 Dual Core, 8GB RAM:
I searched for the (aus_command) table and it is in the database. Any idea why the log says it could not be found?
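For reference, a minimal way to double-check where the table lives (a sketch, assuming dbaccess is available; the AUS tables, including aus_command, are normally created in the sysadmin database and owned by user informix):
DATABASE sysadmin;
-- look the table up in the system catalog and note its owner
SELECT tabname, owner
  FROM systables
 WHERE tabname = 'aus_command';
The server message log from the run in question follows: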
Tue May 15 22:07:21 2012
22:07:21 Booting Language <c> from module <>
22:07:21 Loading Module <CNULL>
22:07:21 Booting Language <builtin> from module <>
22:07:21 Loading Module <BUILTINNULL>
22:07:28 DR: DRAUTO is 0 (Off)
22:07:28 DR: ENCRYPT_HDR is 0 (HDR encryption Disabled)
22:07:28 IBM Informix Dynamic Server Version 11.70.TC4DE Software Serial Number AAA#B000000
22:07:29 Performance Advisory: The physical log size is smaller than the recommended size for a
server configured with RTO_SERVER_RESTART.
22:07:29 Results: Fast recovery performance might not be optimal.
22:07:29 Action: For best fast recovery performance when RTO_SERVER_RESTART is enabled,
increase the physical log size to at least 242000 KB. For servers
configured with a large buffer pool, this might not be necessary.
22:07:29 IBM Informix Dynamic Server Initialized -- Shared Memory Initialized.
22:07:29 Started 1 B-tree scanners.
22:07:29 B-tree scanner threshold set at 5000.
22:07:29 B-tree scanner range scan size set to -1.
22:07:29 B-tree scanner ALICE mode set to 6.
22:07:29 B-tree scanner index compression level set to med.
22:07:29 Physical Recovery Started at Page (2:5459).
22:07:29 Physical Recovery Complete: 0 Pages Examined, 0 Pages Restored.
22:07:29 Logical Recovery Started.
22:07:29 5 recovery worker threads will be started.
22:07:30 Logical Recovery has reached the transaction cleanup phase.
22:07:30 Logical Recovery Complete.
6 Committed, 0 Rolled Back, 0 Open, 0 Bad Locks
22:07:31 Onconfig parameter STORAGE_FULL_ALARM modified from 0 to 3.
22:07:31 Dataskip is now OFF for all dbspaces
22:07:31 Init operation complete - Mode Online
22:07:31 Checkpoint Completed: duration was 0 seconds.
22:07:31 Tue May 15 - loguniq 21, logpos 0x500b4, timestamp: 0xa252b Interval: 62
22:07:31 Maximum server connections 0
22:07:31 Checkpoint Statistics - Avg. Txn Block Time 0.000, # Txns blocked 0, Plog used 16, Llog used 1
22:07:31 On-Line Mode
22:07:34 SCHAPI: Started dbScheduler thread.
22:07:36 Defragmenter cleaner thread now running
22:07:36 Defragmenter cleaner thread cleaned:0 partitions
22:07:36 Booting Language <spl> from module <>
22:07:36 Loading Module <SPLNULL>
22:07:36 Auto Registration is synced
22:07:36 SCHAPI: Started 2 dbWorker threads.
22:07:38 SCHAPI: [Auto Update Statistics Refresh 33-4] Error -206 The specified table (aus_command) is not in the database.
22:07:38 SCHAPI: [Auto Update Statistics Refresh 33-4] Error -111 ISAM error: no record found.
22:07:38 SCHAPI: [Auto Update Statistics Refresh 33-4] Error -206 The specified table (aus_command) is not in the database.
22:07:38 SCHAPI: [Auto Update Statistics Refresh 33-4] Error -111 ISAM error: no record found.
22:07:40 SCHAPI: [Auto Update Statistics Evaluation 32-8] Error -242 Could not open database table (informix.aus_command).
22:07:40 SCHAPI: [Auto Update Statistics Evaluation 32-8] Error -106 ISAM error: non-exclusive access.
22:07:45 Logical Log 21 Complete, timestamp: 0xae6d3.
22:23:40 Explain file for session 31 : C:\PROGRA~1\IBM\Informix\11.70\sqexpln\cost.out
22:24:07 Explain file for session 31 : C:\PROGRA~1\IBM\Informix\11.70\sqexpln\cost.out
22:24:24 Explain file for session 31 : C:\PROGRA~1\IBM\Informix\11.70\sqexpln\cost.out
22:27:10 Checkpoint Completed: duration was 0 seconds.
22:27:10 Tue May 15 - loguniq 22, logpos 0x545018, timestamp: 0xb4f29 Interval: 63
22:27:10 Maximum server connections 1
22:27:10 Checkpoint Statistics - Avg. Txn Block Time 0.000, # Txns blocked 0, Plog used 657, Llog used 3573
22:27:11 IBM Informix Dynamic Server Stopped.

Related

Neo4j GraphSage training does not log anything

I am extracting graph embeddings by training the GraphSage algorithm. I am working on a large graph consisting of 82,339,589 nodes and 219,521,164 edges. When I check with the ":queries" command, the query is listed as running. The algorithm started 6 days ago. When I look at the logs with "docker logs xxx", the last entries listed are:
2021-12-01 12:03:16.267+0000 INFO Relationship Store Scan (RelationshipScanCursorBasedScanner): Imported 352,492,468 records and 0 properties from 16247 MiB (17,036,668,320 bytes); took 59.057 s, 5,968,663.57 Relationships/s, 275 MiB/s (288,477,487 bytes/s) (per thread: 1,492,165.89 Relationships/s, 68 MiB/s (72,119,371 bytes/s))
2021-12-01 12:03:16.269+0000 INFO [neo4j.BoltWorker-3 [bolt] [/10.0.0.6:56143] ] LOADING
INFO [neo4j.BoltWorker-3 [bolt] [/10.0.0.6:56143] ] LOADING Actual memory usage of the loaded graph: 8602 MiB
INFO [neo4j.BoltWorker-3 [bolt] [/10.0.0.6:64076] ] GraphSageTrain :: Start
Is there a way to see detailed logs about the training process? And is it normal for training to take 6 days on a graph of this size?
It is normal for GraphSAGE to take a long time compared to FastRP or Node2Vec. Starting in GDS 1.7, you can use
CALL gds.beta.listProgress(jobId: String)
YIELD
jobId,
taskName,
progress,
progressBar,
status,
timeStarted,
elapsedTime
If you call without passing in a jobId, it will return a list of all running jobs. If you call with a jobId, it will give you details about a running job.
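For example, listing everything currently running with no jobId (a sketch using the YIELD fields shown above):
CALL gds.beta.listProgress()
YIELD jobId, taskName, progress, status
RETURN jobId, taskName, progress, status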
This query will summarize the details for job 03d90ed8-feba-4959-8cd2-cbd691d1da6c.
CALL gds.beta.listProgress("03d90ed8-feba-4959-8cd2-cbd691d1da6c")
YIELD taskName, status
RETURN taskName, status, count(*)
Here's the documentation for progress logging. The system monitoring procedures might also be helpful to you.

Erlang change VM process initial size. Tune Erlang VM

First I have to mention that I run on CentOS 7, tuned to support 1 million connections. I tested with a simple C server and client and connected 512000 clients. I could have connected more, but I did not have enough RAM to spawn more Linux client machines, since from one machine I can open 65536 connections; 8 machines * 64000 connections each = 512000.
I made a simple Erlang server to which I want to connect one million or half a million clients, using the same C client. The problem I'm having now is memory related. For each successful gen_tcp:accept call I spawn a process. Around 50000 open connections cost me 3.7 GB of RAM on the server, whereas with the C server I could have 512000 connections open using 1.9 GB of RAM. It is true that on the C server I did not create a process after accept to handle anything, I just called accept again in a while loop, but even so... people on the web have done this in Erlang with less memory (ejabberd, Riak).
I presume that the flags I pass to the Erlang VM should do the trick. From what I read in the documentation and on the web, this is what I have: erl +K true +Q 64200 +P 134217727 -env ERL_MAX_PORTS 40960000 -env ERTS_MAX_PORTS 40960000 +a 16 +hms 1024 +hmbs 1024
This is the server code; I open one listener on port 5001 by calling start(1, 5001).
-module(ex).
%% server/2 is spawned via spawn/3 and loop/1 is re-entered via
%% proc_lib:hibernate/3, so both must be exported along with start/2.
-export([start/2, server/2, loop/1]).
start(Num, LPort) ->
    case gen_tcp:listen(LPort, [{reuseaddr, true}, {backlog, 9000000000}]) of
        {ok, ListenSock} ->
            start_servers(Num, ListenSock),
            {ok, Port} = inet:port(ListenSock),
            Port;
        {error, Reason} ->
            {error, Reason}
    end.
start_servers(0, _) ->
    ok;
start_servers(Num, LS) ->
    spawn(?MODULE, server, [LS, 0]),
    start_servers(Num - 1, LS).
%% Each acceptor spawns the next acceptor, then hibernates into loop/1
%% to keep the per-connection memory footprint small.
server(LS, Nr) ->
    io:format("before accept ~w~n", [Nr]),
    case gen_tcp:accept(LS) of
        {ok, S} ->
            io:format("after accept ~w~n", [Nr]),
            spawn(?MODULE, server, [LS, Nr + 1]),
            proc_lib:hibernate(?MODULE, loop, [S]);
        Other ->
            io:format("accept returned ~w - goodbye!~n", [Other]),
            ok
    end.
loop(S) ->
    ok = inet:setopts(S, [{active, once}]),
    receive
        {tcp, S, _Data} ->
            Answer = <<"ok">>,  % placeholder reply; real processing not implemented in this example
            gen_tcp:send(S, Answer),
            proc_lib:hibernate(?MODULE, loop, [S]);
        {tcp_closed, S} ->
            io:format("Socket ~w closed [~w]~n", [S, self()]),
            ok
    end.
Given this configuration, your beam consumed about 2.5 GB of memory just on start, without even your module loaded.
However, if you reduce the maximum number of processes to a reasonable value, like +P 60000 for a 50,000-connection test, memory consumption drops rapidly.
With a 60,000-process limit, the VM only used 527 MB of virtual memory on start.
I tried to reproduce your test, but unfortunately I was only able to launch 30,000 netcats on my system before running out of memory (because of the client jobs). Even then, I only observed the VM's memory consumption rise to 570 MB.
So my suggestion is that your numbers come from high startup memory consumption, not from a great number of open connections. In any case, you should pay attention to how the stats change as the number of open connections increases, rather than to the absolute values.
I finally used the following configuration for my benchmark:
erl +K true +Q 64200 +P 60000 -env ERL_MAX_PORTS 40960000 -env ERTS_MAX_PORTS 40960000 +a 16 +hms 1024 +hmbs 1024
I launched the clients with the command
for i in `seq 1 50000`; do nc 127.0.0.1 5001 & done
Apart from the tuning you have already done, you can adjust the TCP buffers as well. By default they take the OS default values, but you can pass {recbuf, Size} and {sndbuf, Size} to gen_tcp:listen. This may reduce the memory footprint significantly.
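For instance, a listen call with explicit per-socket buffers might look like this (a sketch; the 4096-byte sizes are purely illustrative, and accepted sockets generally inherit the listener's options):
{ok, ListenSock} = gen_tcp:listen(5001, [{reuseaddr, true},
                                         {backlog, 1024},
                                         {recbuf, 4096},   % per-socket receive buffer, in bytes
                                         {sndbuf, 4096}]). % per-socket send buffer, in bytes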

insufficient memory when using proc assoc in SAS

I'm trying to run the following and I receive an error saying ERROR: The SAS System stopped processing this step because of insufficient memory.
The dataset has about 1170 rows * 90 columns. What are my alternatives here?
The error information is below:
332 proc assoc data=want1 dmdbcat=dbcat pctsup=0.5 out=frequentItems;
333 id tid;
334 target item_new;
335 run;
----- Potential 1 item sets = 188 -----
Counting items, records read: 19082
Number of customers: 203
Support level for item sets: 1
Maximum count for a set: 136
Sets meeting support level: 188
Megs of memory used: 0.51
----- Potential 2 item sets = 17578 -----
Counting items, records read: 19082
Maximum count for a set: 119
Sets meeting support level: 17484
Megs of memory used: 1.54
----- Potential 3 item sets = 1072352 -----
Counting items, records read: 19082
Maximum count for a set: 111
Sets meeting support level: 1072016
Megs of memory used: 70.14
Error: Out of memory. Memory used=2111.5 meg.
Item Set 4 is null.
ERROR: The SAS System stopped processing this step because of insufficient memory.
WARNING: The data set WORK.FREQUENTITEMS may be incomplete. When this step was stopped there were
1089689 observations and 8 variables.
From the documentation (http://support.sas.com/documentation/onlinedoc/miner/em43/assoc.pdf):
Caution: The theoretical potential number of item sets can grow very
quickly. For example, with 50 different items, you have 1225 potential
2-item sets and 19,600 3-item sets. With 5,000 items, you have over 12
million of the 2-item sets, and a correspondingly large number of
3-item sets.
Processing an extremely large number of sets could cause your system
to run out of disk and/or memory resources. However, by using a higher
support level, you can reduce the item sets to a more manageable
number.
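(For reference, those counts are binomial coefficients: C(50,2) = 50*49/2 = 1,225 and C(50,3) = 50*49*48/6 = 19,600, which is why the candidate item sets explode as the number of distinct items grows.)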
So, provide a support= option and make sure it's sufficiently high, e.g.:
proc assoc data=want1 dmdbcat=dbcat pctsup=0.5 out=frequentItems support=20;
id tid;
target item_new;
run;
Is there a way to frame the data mining task so that it requires less memory for storage or operations? In other words, do you need all 90 columns or can you eliminate some? Is there some clear division within the data set such that PROC ASSOC wouldn't be expected to use those rows for its findings?
You may very well be up against software memory allocation limits here.
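For what it's worth, you can check which memory ceiling the SAS session is running with; MEMSIZE is set at invocation (for example with -memsize on the command line or in the configuration file), so the step below is only a sketch of how to inspect the current value:
proc options option=memsize;
run;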

Redis MSOpenTech : max memory "OOM command not allowed when used memory > 'maxmemory'" error even though RDB file after save is only 3 GB

The redis server version I use is 2.8.9 from the MSOpenTech GitHub. Can anyone shed light on why the redis "info" command indicates that used memory is 21 GB even though the RDB file saved on disk is less than 4 GB? I did successfully run a "save" command before noting down the size of the RDB file. The qfork heap file is 30 GB, as configured in redis.windows.conf.
Configuration:
maxheap 30gb
max-memory 20 Gb
appendonly no
save 18000 1
The server has 192 GB of physical RAM, but unfortunately only about 60 GB of free disk space, and I had to set maxheap and max-memory to 30 GB and 20 GB respectively so that I have additional space to persist the data on disk.
I'm using redis as a cache and the save interval is large as seeding the data takes a long time and I don't want constant writing to file. Once seeding is done, the DB is updated with newer data once a day.
My questions are:
How is the saved RDB file so small? Is it solely due to compression (rdbcompression yes)? If yes, can the same compression mechanism be used to store data in memory too? I make use of lists extensively.
Before I ran the "save" command, the working set and private bytes in Process Explorer were very small. Is there a way I can break down memory usage by data structure? For example: lists use x amount, hashes use y amount, etc.
Is there any way I can store the AOF file (I turned off AOF and use RDB because the AOF files were filling up disk space fast) in a network path (shared drive or NAS)? I tried setting the dir config to \\someip\some folder but the service failed to start with the message "Cant CHDIR to location".
I'm unable to post images, but this is what Process Explorer has to say about the redis-server instance:
Virtual Memory:
Private Bytes : 72,920 K
Peak Private Bytes : 31,546,092 K
Virtual Size : 31,558,356 K
Page faults : 12,479,550
Physical Memory:
Working Set : 26,871,240 K
WS Private : 63,260 K
WS Shareable : 26,807,980 K
WS Shared : 3,580 K
Peak Working Set : 27,011,488 K
The latest saved dump.rdb is 3.81 GB and the heap file is 30 GB.
# Server
redis_version:2.8.9
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:1fe181ad2447fe38
redis_mode:standalone
os:Windows
arch_bits:64
multiplexing_api:winsock_IOCP
gcc_version:0.0.0
process_id:12772
run_id:553f2b4665edd206e632b7040aa76c0b76083f4d
tcp_port:6379
uptime_in_seconds:24087
uptime_in_days:0
hz:50
lru_clock:14825512
config_file:D:\RedisService/redis.windows.conf
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:21484921736
used_memory_human:20.01G
used_memory_rss:21484870536
used_memory_peak:21487283360
used_memory_peak_human:20.01G
used_memory_lua:3156992
mem_fragmentation_ratio:1.00
mem_allocator:dlmalloc-2.8
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1407328559
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:1407328560
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
# Stats
total_connections_received:9486
total_commands_processed:241141370
instantaneous_ops_per_sec:0
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:30143
keyspace_misses:81
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:1341134

Delphi Shared Module for Apache Runs Out of Memory

I hope someone can shed some light on an issue I am having.
I am writing an Apache shared object module that acts as a server for my app. One or more clients make SOAP requests to this module; the module talks to a database and returns SOAP responses to the client(s).
Task Manager shows the httpd processes (handling incoming requests from clients) using about 35,000 K, but every once in a while one of the httpd processes grows in memory/CPU usage and over time reaches the 2 GB cap, and then it crashes. The server reports an "Internal Server 500" error in this case. (screenshot added)
I use FastMM to check for memory leaks and it does produce a log, but no memory leaks are reported. To make sure I am using FastMM properly, I introduced a memory leak and FastMM did log it. So my assumption is that I do not have a memory leak, but that somehow memory gets consumed until the 2 GB threshold is reached and is not released until I manually restart Apache.
Then I started taking snapshots using FastMM's LogMemoryManagerStateToFile call, which creates a memory snapshot at the time of the call. LogMemoryManagerStateToFile creates a file with the following information, but I don't understand how it helps me beyond telling me that, for example, 8320 UnicodeString allocations account for 676,512 bytes.
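For context, a common way to use that call is to capture paired snapshots around the suspect request handler and compare the two files, so that allocation counts which only ever grow stand out over many requests (a sketch, assuming FastMM4 is the first unit in the project's uses clause; the file names and HandleSoapRequest are illustrative):
procedure HandleSoapRequest;
begin
  // snapshot before the request is dispatched
  LogMemoryManagerStateToFile('C:\temp\before_request.log', 'Before request');
  try
    // ... dispatch the SOAP request and talk to the database here ...
  finally
    // snapshot after the request; diff the two files between requests
    LogMemoryManagerStateToFile('C:\temp\after_request.log', 'After request');
  end;
end;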
Which of these are not released properly?
Note that this information is a shortened version of the created file, and it covers just one single method call; there could be many calls to different methods:
FastMM State Capture:
---------------------
1793K Allocated
7165K Overhead
20% Efficiency
Usage Detail:
676512 bytes: UnicodeString x 8320
152576 bytes: TTMQuery x 256
144312 bytes: TParam x 1718
134100 bytes: TDBQuery x 225
107444 bytes: Unknown x 1439
88368 bytes: TSDStringList x 1052
82320 bytes: TStringList x 980
80000 bytes: TList x 4000
53640 bytes: TTMStoredProc x 90
47964 bytes: TFieldList x 571
47964 bytes: TFieldDefList x 571
41112 bytes: TFields x 1142
38828 bytes: TFieldDefs x 571
20664 bytes: TParams x 574
...
...
...
4 bytes: TChunktIME x 1
4 bytes: TChunkpHYs x 1
4 bytes: TChunkgAMA x 1
Additional Information:
-----------------------
Finalizing MyMethodName
How is this data helpful for figuring out where so much memory is being consumed, in terms of locating it in the code and fixing it?
Much appreciated,
