I attempted to run a script to generate a table in my Informix database, but the script was missing a newline at EOF, so I think Informix had problems reading it and the script just hung doing nothing. I had to kill the script and add the newline to the file, so now the script runs fine, except that it does not create the table because of a lock created when I killed the script abruptly.
I am new to this, so sorry for the dumb question; the IBM documentation does not give a clear and simple explanation of how to clean this up.
So, my question is: how do I release the locks so I can continue working on my script?
admin_proyecto#li1106-217 # onstat -k
IBM Informix Dynamic Server Version 12.10.FC9DE -- On-Line (CKPT REQ) -- Up 9 ds
Blocked:CKPT
Locks
address wtlist owner lklist type tbz
44199028 0 44ca6830 0 HDR+S
44199138 0 44cac0a0 0 HDR+S
441991c0 0 44cac0a0 4419b6f0 HDR+IX
44199358 0 44ca44d0 0 S
441993e0 0 44ca44d0 44199358 HDR+S
4419ac50 0 44cac0a0 441991c0 HDR+X
4419aef8 0 44ca44d0 441993e0 HDR+IX
4419b2b0 0 44ca79e0 0 S
4419b3c0 0 44ca82b8 0 S
4419b6f0 0 44cac0a0 44199138 HDR+X
4419b998 0 44ca8b90 0 S
4419bdd8 0 44ca44d0 4419aef8 HDR+X
12 active, 20000 total, 16384 hash buckets, 0 lock table overflows
On my "toy" systems I usually point LTAPEDEV to a directory:
LTAPEDEV /usr/informix/dumps/motor_003/backups
Then, when Informix blocks because all of its logical logs are full, I manually run ontape -a to back up the used logical logs to files and free them for reuse.
For example, here I have an Informix instance blocked because no more logical logs are available:
$ onstat -l
IBM Informix Dynamic Server Version 12.10.FC8DE -- On-Line (CKPT REQ) -- Up 00:18:58 -- 213588 Kbytes
Blocked:CKPT
Physical Logging
Buffer bufused bufsize numpages numwrits pages/io
P-1 0 64 1043 21 49.67
phybegin physize phypos phyused %used
2:53 51147 28085 240 0.47
Logical Logging
Buffer bufused bufsize numrecs numpages numwrits recs/pages pages/io
L-1 13 64 191473 12472 6933 15.4 1.8
Subsystem numrecs Log Space used
OLDRSAM 191470 15247376
HA 3 132
Buffer Waiting
Buffer ioproc flags
L-1 0 0x21 0
address number flags uniqid begin size used %used
44d75f88 1 U------ 47 3:15053 5000 5 0.10
44b6df68 2 U---C-L 48 3:20053 5000 4986 99.72
44c28f38 3 U------ 41 3:25053 5000 5000 100.00
44c28fa0 4 U------ 42 3:53 5000 2843 56.86
44d59850 5 U------ 43 3:5053 5000 5 0.10
44d598b8 6 U------ 44 3:10053 5000 5 0.10
44d59920 7 U------ 45 3:30053 5000 5 0.10
44d59988 8 U------ 46 3:35053 5000 5 0.10
8 active, 8 total
In the online log I have:
$ onstat -m
04/23/18 18:20:42 Logical Log Files are Full -- Backup is Needed
So I manually issue the command:
$ ontape -a
Performing automatic backup of logical logs.
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000041
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000042
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000043
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000044
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000045
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000046
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000047
File created: /usr/informix/dumps/motor_003/backups/informix003.ifx.marqueslocal_3_Log0000000048
Do you want to back up the current logical log? (y/n) n
Program over.
If I check the status of the logical logs again:
$ onstat -l
IBM Informix Dynamic Server Version 12.10.FC8DE -- On-Line -- Up 00:23:42 -- 213588 Kbytes
Physical Logging
Buffer bufused bufsize numpages numwrits pages/io
P-2 33 64 1090 24 45.42
phybegin physize phypos phyused %used
2:53 51147 28091 36 0.07
Logical Logging
Buffer bufused bufsize numrecs numpages numwrits recs/pages pages/io
L-1 0 64 291335 15878 7023 18.3 2.3
Subsystem numrecs Log Space used
OLDRSAM 291331 22046456
HA 4 176
address number flags uniqid begin size used %used
44d75f88 1 U-B---- 47 3:15053 5000 5 0.10
44b6df68 2 U-B---- 48 3:20053 5000 5000 100.00
44c28f38 3 U---C-L 49 3:25053 5000 3392 67.84
44c28fa0 4 U-B---- 42 3:53 5000 2843 56.86
44d59850 5 U-B---- 43 3:5053 5000 5 0.10
44d598b8 6 U-B---- 44 3:10053 5000 5 0.10
44d59920 7 U-B---- 45 3:30053 5000 5 0.10
44d59988 8 U-B---- 46 3:35053 5000 5 0.10
8 active, 8 total
The logical logs are now marked as backed up (the B flag) and can be reused, and the Informix instance is no longer in the Blocked:CKPT state.
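If a stale session from the killed script were still holding locks after the instance unblocks, a minimal sketch like the following should clear them (run as the informix user; the session id is a placeholder you have to read from the onstat output, not a value taken from the listings above):
onstat -g ses            # list sessions and find the one left over from the killed script
onmode -z <session_id>   # terminate that session; its transaction rolls back and its locks are released
onstat -k                # confirm the locks are gone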
The problem
One of my Python Redis clients fails with the following exception:
redis.exceptions.ResponseError: MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
I have checked the redis machine, and it seems to be out of memory:
free
total used free shared buffers cached
Mem: 3952 3656 295 0 1 9
-/+ buffers/cache: 3645 306
Swap: 0 0 0
top
top - 15:35:03 up 14:09, 1 user, load average: 0.06, 0.17, 0.16
Tasks: 114 total, 2 running, 112 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.2 us, 0.3 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.2 st
KiB Mem: 4046852 total, 3746772 used, 300080 free, 1668 buffers
KiB Swap: 0 total, 0 used, 0 free. 11364 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1102 root 20 0 3678836 3.485g 736 S 1.3 90.3 10:12.53 redis-server
1332 ubuntu 20 0 41196 3096 972 S 0.0 0.1 0:00.12 zsh
676 root 20 0 10216 2292 0 S 0.0 0.1 0:00.03 dhclient
850 syslog 20 0 255836 2288 124 S 0.0 0.1 0:00.39 rsyslogd
I am using a few dozen Redis DBs in a single Redis instance. Each DB is selected by a numeric id passed to redis-cli, e.g.:
$ redis-cli -n 80
127.0.0.1:6379[80]>
How do I know how much memory does each DB consume, and what are the largest keys in each DB?
You CANNOT get the memory used by each DB. With the INFO command, you can only get the total memory used by the Redis instance. Redis records the newly allocated size each time it dynamically allocates memory, but it keeps no such record per DB. It also keeps no record of the largest keys.
Normally, you should configure your Redis instance with maxmemory and maxmemory-policy (i.e. the eviction policy applied when maxmemory is reached).
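For example, a minimal sketch that sets both at runtime (the 2gb limit and the allkeys-lru policy are placeholder choices, not recommendations for your workload):
redis-cli config set maxmemory 2gb
redis-cli config set maxmemory-policy allkeys-lru
# put the same directives in redis.conf so they survive a restart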
You can write a shell script like this to show the element count in each DB:
#!/bin/bash
max_db=501
i=0
while [ $i -lt $max_db ]
do
    echo "db_number: $i"
    redis-cli -n $i dbsize
    i=$((i+1))
done
Example output:
db_number: 0
(integer) 71
db_number: 1
(integer) 0
db_number: 2
(integer) 1
db_number: 3
(integer) 1
db_number: 4
(integer) 0
db_number: 5
(integer) 1
db_number: 6
(integer) 28
db_number: 7
(integer) 1
I know that a single database can hold one very large key, so a key count doesn't tell the whole story, but in some cases this script can help.
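If your redis-cli build supports the --bigkeys option (a sampling scan, not an exact report), a variation of the same script can at least point at the biggest keys per DB; max_db=501 is kept from the script above and may need adjusting:
#!/bin/bash
# sample the largest keys in each DB with redis-cli's --bigkeys scan
max_db=501
i=0
while [ $i -lt $max_db ]
do
    echo "db_number: $i"
    redis-cli -n $i --bigkeys
    i=$((i+1))
done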
I have a RHEL 6 cluster with
cman, corosync, and pacemaker.
After adding new members I got an error mounting GFS. GFS never mounts on the servers.
group_tool
fence domain
member count 4
victim count 0
victim now 0
master nodeid 1
wait state none
members 1 2 3 4
dlm lockspaces
name clvmd
id 0x4104eefa
flags 0x00000000
change member 4 joined 1 remove 0 failed 0 seq 1,1
members 1 2 3 4
gfs mountgroups
name lv_gfs_01
id 0xd5eacc83
flags 0x00000005 blocked,join
change member 3 joined 1 remove 0 failed 0 seq 1,1
members 1 2 3
In the process list:
root 2695 2690 0 08:03 pts/1 00:00:00 /bin/bash /etc/init.d/gfs2 start
root 2702 2695 0 08:03 pts/1 00:00:00 /bin/bash /etc/init.d/gfs2 start
root 2704 2703 0 08:03 pts/1 00:00:00 /sbin/mount.gfs2 /dev/mapper/vg_shared-lv_gfs_01 /mnt/share -o rw,_netdev,noatime,nodiratime
fsck.gfs2 -yf /dev/vg_shared/lv_gfs_01
Initializing fsck
jid=1: Replayed 0 of 0 journaled data blocks
jid=1: Replayed 20 of 21 metadata blocks
Recovering journals (this may take a while)
Journal recovery complete.
Validating Resource Group index.
Level 1 rgrp check: Checking if all rgrp and rindex values are good.
(level 1 passed)
RGs: Consistent: 183 Cleaned: 1 Inconsistent: 0 Fixed: 0 Total: 184
2 blocks may need to be freed in pass 5 due to the cleaned resource groups.
Starting pass1
Pass1 complete
Starting pass1b
Pass1b complete
Starting pass1c
Pass1c complete
Starting pass2
Pass2 complete
Starting pass3
Pass3 complete
Starting pass4
Pass4 complete
Starting pass5
Block 11337799 (0xad0047) bitmap says 1 (Data) but FSCK saw 0 (Free)
Fixed.
Block 11337801 (0xad0049) bitmap says 1 (Data) but FSCK saw 0 (Free)
Fixed.
RG #11337739 (0xad000b) free count inconsistent: is 65500 should be 65502
RG #11337739 (0xad000b) Inode count inconsistent: is 15 should be 13
Resource group counts updated
Pass5 complete
The statfs file is wrong:
Current statfs values:
blocks: 12057320 (0xb7fae8)
free: 9999428 (0x989444)
dinodes: 15670 (0x3d36)
Calculated statfs values:
blocks: 12057320 (0xb7fae8)
free: 9999432 (0x989448)
dinodes: 15668 (0x3d34)
The statfs file was fixed.
Writing changes to disk
gfs2_fsck complete
gfs2_edit -p 0xad0047 field di_size /dev/vg_shared/lv_gfs_01
10 (Block 11337799 is type 10: Ext. attrib which is not implemented)
How do I drop the blocked,join flag from GFS?
I solved it by rebooting all the servers that use GFS;
it is one of the unpleasant behaviors of GFS.
GFS locking is kernel-based, and in a few cases it can only be cleared with a reboot.
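A rough sketch of the per-node sequence (the service names and order are assumptions based on the stock RHEL 6 init scripts; adapt them to how your cluster starts these services):
# one node at a time: stop the resource manager, unmount GFS2,
# then take down the cluster stack and reboot
service pacemaker stop
service gfs2 stop
service clvmd stop
service cman stop
reboot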
There is a very useful manual here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Global_File_System_2/index.html
I have a project (Rails 4.0.2) that I'm currently running inside Vagrant (1.3.5) with VirtualBox (4.3.4). The guest OS is Debian 6.0. When I run the application on the host OS, or when I start the VirtualBox VM manually, I see dramatically better responsiveness. As soon as I use 'vagrant up', performance becomes really poor. Here are the relevant Apache Bench results:
Apache Bench Command
ab -n 10 -c 1 http://127.0.0.1:3000/application.js
Host OS
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 27 44 28.3 33 118
Waiting: 25 41 28.6 31 116
Total: 27 44 28.3 33 118
Virtualbox
min mean[+/-sd] median max
Connect: 0 0 0.4 0 1
Processing: 57 71 19.1 67 119
Waiting: 46 59 19.3 57 110
Total: 57 71 19.1 68 119
Vagrant
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 849 916 76.2 901 1115
Waiting: 831 892 72.6 883 1081
Total: 849 916 76.2 901 1115
I would expect some slowdown running the application in VirtualBox, but not an order of magnitude. I'm also not doing anything fancy with my Vagrantfile:
Vagrantfile
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "squeeze"
config.vm.network :forwarded_port, guest: 3000, host: 3000
end
I've tried the fixes specified in this github issue and this HackerNews comment but to no avail.
Make sure you don't place your project in a synced folder (by default Vagrant uses vboxsf, which has known performance issues with large numbers of files/directories).
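If you do need the project inside a synced folder, one possible sketch (assuming NFS is available on the host; the private-network IP is a placeholder) is to switch the folder type in the Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "squeeze"
  config.vm.network :forwarded_port, guest: 3000, host: 3000
  # NFS needs a private network between host and guest
  config.vm.network :private_network, ip: "192.168.50.4"
  # share the project over NFS instead of the default vboxsf
  config.vm.synced_folder ".", "/vagrant", nfs: true
end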
This may also be related to WEBrick's reverse DNS lookup; take a look at https://stackoverflow.com/a/19284483/1801697
Hope it helps.
Here is my problem:
top - 11:32:47 up 22:20, 2 users, load average: 0.03, 0.72, 1.27
Tasks: 112 total, 1 running, 110 sleeping, 1 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8193844k total, 7508292k used, 685552k free, 80636k buffers
Swap: 2102456k total, 15472k used, 2086984k free, 7070220k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28555 root 20 0 57424 38m 1492 S 0 0.5 0:06.38 bash
28900 root 20 0 39488 7732 3176 T 0 0.1 0:03.12 python
28553 root 20 0 72132 5052 2600 S 0 0.1 0:00.22 sshd
28859 root 20 0 70588 3424 2584 S 0 0.0 0:00.06 sshd
29404 root 20 0 70448 3320 2600 S 0 0.0 0:00.06 sshd
28863 root 20 0 42624 2188 1472 S 0 0.0 0:00.02 sftp-server
29406 root 20 0 19176 1984 1424 S 0 0.0 0:00.00 bash
2854 root 20 0 115m 1760 488 S 0 0.0 5:37.02 rsyslogd
29410 root 20 0 19064 1400 1016 R 0 0.0 0:05.14 top
3111 ntp 20 0 22484 604 460 S 0 0.0 10:26.79 ntpd
3134 proftpd 20 0 64344 452 280 S 0 0.0 6:29.16 proftpd
2892 root 20 0 49168 356 232 S 0 0.0 0:31.58 sshd
1 root 20 0 27388 284 132 S 0 0.0 0:01.38 init
3121 root 20 0 4308 248 172 S 0 0.0 0:16.48 mdadm
As you can see, 7.5 GB of memory is used, but there is no process that uses it.
How can this be, and how do I fix it?
Thanks for any answer.
www.linuxatemyram.com
It's too good of a site to ruin by copy/pasting the entire contents here.
In order to see all processes, you can use this command:
ps aux
and then try sorting and filtering the output in different ways, for example with the forest (tree) view:
ps faux
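For example, sorting by resident set size (the --sort option is available in the Linux procps ps) makes the biggest memory consumers obvious:
# largest resident-set-size processes first
ps aux --sort=-rss | head -n 15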
Hope that helps.
If your system starts using the swap file, then you have high memory load. Depending on the file system and the programs you use, Linux may allocate all of your system memory, but that doesn't mean the programs are actually using it.
Lots of the Ubuntu and Debian servers we use show only 32 or 64 MB of free memory but don't touch swap.
I'm not a Linux guru, however, so please correct me if I'm wrong :)
I don't have a Linux box handy to experiment, but it looks like you can sort top's output with interactive commands, so you could bring the biggest memory users to the top. Check the man page and experiment.
Update: In the version of top I have (procps 3.2.7), you can hit "<" and ">" to change the field it sorts by. It doesn't actually say which field that is; you have to watch how the display changes. It's not hard once you experiment a little.
However, Arrowmaster's point (that the memory is probably being used for cache) is a better answer. Use "free" to see how much is actually being used.
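For example (a quick sketch; on this older procps the "-/+ buffers/cache" row is the one that matters):
# the "used" value on the "-/+ buffers/cache" line excludes reclaimable page cache
free -m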
I had a similar problem. I was running Raspbian on a Pi B+ with a TP-Link USB Wireless LAN stick connected. The stick caused a problem which resulted in nearly all memory being consumed on system start (around 430 of 445 MB). Just like in your case, the running processes did not consume that much memory. When I removed the stick and rebooted everything was fine, just 50 MB memory consumption.
I have a Rails app (2.3.5) running on a VPS with 4 cores @ 2 GHz and 4 GB of memory. I am running nginx (0.7.61) and Phusion Passenger (2.2.14) on Ruby Enterprise Edition (1.8.7-2010.01) with the max pool size set to 30. My problem is that every Ruby process executing a Rails request seems to run at near 100% CPU. If I watch top, they drop off every time the display refreshes, so they are not hung, but they are still running at 100%.
Is there any way I can bring this down, or at least figure out what portion of the code is spiking the CPU? Is this normal behavior?
Here is the top output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2427 psadmin 25 0 91904 76m 2696 R 100 1.9 739:05.96 Rails: /var/www/apps/main_rails_app/current
3457 psadmin 25 0 98180 82m 2532 R 100 2.0 711:21.91 Rails: /var/www/apps/main_rails_app/current
2415 psadmin 25 0 93952 77m 2708 R 99 1.9 727:49.31 Rails: /var/www/apps/main_rails_app/current
3455 psadmin 25 0 99204 83m 2528 R 69 2.0 726:04.70 Rails: /var/www/apps/main_rails_app/current
2791 psadmin 16 0 98044 81m 2492 S 31 2.0 0:10.16 Rails: /var/www/apps/main_rails_app/current
8034 psadmin 15 0 8160 3656 1772 S 1 0.1 0:35.39 nginx: worker process
8035 psadmin 15 0 8324 3696 1732 S 0 0.1 0:31.34 nginx: worker process
2588 psadmin 15 0 197m 183m 2712 S 0 4.5 1:02.16 Rails: /var/www/apps/main_rails_app/current
Thanks!
Edit: I tried strace with follow-forks as mentioned below. This is the output that is dumped over and over:
sudo strace -f -p 3455
clock_gettime(CLOCK_MONOTONIC, {394577, 508326476}) = 0
select(0, [], [], [], {0, 0}) = 0 (Timeout)
--- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
sigreturn()
Check your logs for suspicious behavior. In general, Rails does consume a lot of CPU, though... you could also try pointing strace at the offending PIDs.
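For example, a quick sketch that summarizes which syscalls a busy worker spends its time in (the PID is just the one from the top output above; press Ctrl-C to get the summary):
# attach, follow forks, and print a per-syscall time/count summary on exit
sudo strace -c -f -p 3455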