Missing creation time for LVM volume - lvm

One of our storage arrays has an LVM volume group on which several logical volumes have no creation host name and time set (the "LV Creation host, time" attribute is empty).
These were created quite a long time ago, but since the date is missing I unfortunately can't say exactly when; it could be many years, perhaps 8-10.
We now want to move it under Proxmox 6 (shared LVM via Fibre Channel, currently used by Proxmox 3), which we can't do because Proxmox 6 requires the creation time.
I couldn't find a command to set an exact time for this.
Can anyone tell me how to set the host name and creation time on an LVM logical volume?
Regards,
Laszlo

Sorry, I can't format in a comment.
So, here is an example lvdisplay output (one volume, but there are 32 in total, 5 of which have no creation time):
The "LV Creation host, time" attribute is empty.
They were probably created under Proxmox 3 - is it possible that this attribute did not exist back then?
--- Logical volume ---
LV Path /dev/stvg1/vm-103-disk-1
LV Name vm-103-disk-1
VG Name stvg1
LV UUID sbSgG8-4nuw-RwG5-skxe-J9e1-OkGo-ImEMfO
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 0
LV Size 1.50 TiB
Current LE 393216
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:20
And the Proxmox error message when trying to read the LVM volumes:
Result verification failed (400)
[9].ctime: type check ('integer') failed - got ''
[8].ctime: type check ('integer') failed - got ''
[4].ctime: type check ('integer') failed - got ''
[7].ctime: type check ('integer') failed - got ''
[10].ctime: type check ('integer') failed - got ''
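
The thread above doesn't include a fix, but one route that matches the symptoms is to edit the volume group's plain-text metadata backup and write it back: dump it with "vgcfgbackup -f stvg1.meta stvg1", add creation_time / creation_host lines to the logical-volume stanzas that lack them, and apply the edited file with "vgcfgrestore -f stvg1.meta stvg1". Below is a hedged Python sketch that only reports which stanzas are still missing the field; the tab indentation, stanza layout, and field names are my assumptions about the LVM2 text metadata format, so check them against your own backup file before editing anything.

#!/usr/bin/env python3
# Report which LV stanzas in an LVM2 metadata backup lack a creation_time field.
# The backup file comes from e.g.:  vgcfgbackup -f stvg1.meta stvg1
import re
import sys
import time

def lvs_missing_creation_time(path):
    with open(path) as f:
        text = f.read()
    # everything inside the one-tab-deep "logical_volumes { ... }" section
    section = re.search(r'^\tlogical_volumes \{\n(.*?)^\t\}', text, re.M | re.S)
    if not section:
        return []
    missing = []
    # each LV is a two-tab-deep stanza: "\t\tvm-103-disk-1 {" ... "\t\t}"
    for m in re.finditer(r'^\t\t(\S+) \{\n(.*?)^\t\t\}', section.group(1), re.M | re.S):
        name, body = m.groups()
        if 'creation_time' not in body:
            missing.append(name)
    return missing

if __name__ == '__main__':
    for lv in lvs_missing_creation_time(sys.argv[1]):
        # prints a placeholder line to paste in; pick whatever timestamp seems plausible
        print(f'{lv}: add  creation_time = {int(time.time())}  and  creation_host = "somehost"')

After editing the file, take a fresh vgcfgbackup first and, if possible, try the vgcfgrestore on a test volume group before touching the production one.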

Related

What is the default value for the IBM Informix SET ISOLATION level?

1. The stored procedure:
create procedure sp_count_demo(
    i_user_id varchar(30)
)
returning integer as num_of_row ;
    define p_count integer ;
    set isolation to dirty read ;
    let p_count = 0 ;
    select count(*)
      into p_count
      from some_table a
     where a.user_id = i_user_id ;
    return p_count ;
end procedure ;
2. The procedure in (1) will be called from Java web apps through a connection pool.
3. Do I need to set the isolation level back to its previous value before returning the result (i.e., to avoid another process that reuses the connection inheriting the "dirty read" isolation level)?
4. What is the default isolation level?
5. Where/how can I get the default value for the isolation level?
Thanks in advance
Since a connection pool is in use, the stored procedure should return the isolation level to its previous setting in order to avoid unexpected results when another app reuses the same connection. The default isolation level depends on the logging mode of the database:
For an unlogged database it will effectively be "Dirty Read" (shown as NL by the onstat -g ses command).
For a mode ANSI database it will be "Repeatable Read."
For other logged databases it will be "Committed Read."
The onconfig parameter USELASTCOMMITTED can also be used to change how the default isolation level is used. More information on that can be found in the Knowledge Center (search on USELASTCOMMITTED).
It is possible for a session to find out its current isolation level using a query against the sysmaster database. This query was run on Informix 12.10 but should also be valid for 11.70:
select tx.isolevel
from sysmaster:systxptab tx, sysmaster:sysrstcb r, sysmaster:sysscblst s
where s.address = r.scb and tx.owner = r.address
and s.sid = dbinfo("sessionid");
It returns the isolation level as an integer which is an internal value - for example committed read has value 2. I don't believe the mapping of isolation level to integer value is published so you will need to experiment with setting different levels for a session and then running the above query.
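As a way to run that experiment, here is a rough Python sketch: set each isolation level in turn and read back the internal integer. The DSN name and the use of pyodbc with an Informix ODBC driver are my assumptions, not part of the answer above.

import pyodbc

ISOLEVEL_QUERY = """
select tx.isolevel
from sysmaster:systxptab tx, sysmaster:sysrstcb r, sysmaster:sysscblst s
where s.address = r.scb and tx.owner = r.address
and s.sid = dbinfo("sessionid")
"""

conn = pyodbc.connect("DSN=my_informix_dsn", autocommit=True)  # hypothetical DSN
cur = conn.cursor()

# set each isolation level in turn and read back the internal integer code
for level in ("dirty read", "committed read", "cursor stability", "repeatable read"):
    cur.execute("set isolation to " + level)
    cur.execute(ISOLEVEL_QUERY)
    print(level, "->", cur.fetchone()[0])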

How can I read a directory on iso9660 from the path table when the table does not include size?

According to the spec for the structure of ISO 9660 / ECMA-119, the path table contains a record for each directory, including the location of its starting sector and its name, but not its size. I can find the directory entry, but don't know how many sectors (normally 2048 bytes each) it occupies. Is it one? Two? Six?
If I "walk the directory tree", each directory entry includes the referenced location and size, so I can know how many bytes (essentially, how many sectors, since a directory must use entire sectors) to read. However, the path table only includes the starting location, and not the size, leaving me not knowing how many bytes to read.
In an example iso I have (ubuntu-18.04.1-live-server-amd64.iso fwiw), the root directory entry in the primary volume descriptor shows:
Root Directory:
Directory Record Length: 34
Extended Attribute Length: 0
Location of Extent: 20 $00000014 00:00:20
Data Length: 2048 $00000800
Recording Date and Time: 23:39:04 07/25/2018 GMT 0
File Flags: $02 visible regular dir non-record no-perms single-extent
File Unit Size: 0
Interleave Gap Size: 0
Volume Sequence Number: 1
File Identifier: . (current directory)
Since it says the Data Length is 2048, I know to read just one sector.
However, the root directory entry in the path table shows:
Path Record Length: 10 $0A
Extended Attribute Length: 0 $00
Location of Extent: 20 $00000014 00:00:20
Parent Directory Number: 1 $0001
File Identifier: . (current directory)
It also points to sector 20, but doesn't tell me how many sectors it uses, leaving me guessing.
Yes, unused bytes in a sector should be all 0x00, so if I read in a sector, read records, and then come to one whose first byte (length) is 0x00, then I know I have reached the end of records, but that has three issues:
If that were the canonical way, why bother including size in the directory entry?
If it includes 2 or 3 sectors, it is more efficient for me to read them all at once than one at a time.
If I have a directory whose records precisely fill a sector, without some size attribute, I don't know if the next sector is supposed to be read as an entry, or if the directory ended here.
Basically, I know how to read the ordered path table to get the directory entry, but don't know how to use that to know how many sectors to read for the directory itself. I could, in theory, read the parent to get the entry for this directory to know the size, but that adds a seek and read and pretty much defeats the purpose of the path table.
Ah, I figured it out. Because a directory's extent always starts with a directory record for the directory itself, and a record's data length is always at bytes 10-17 (10-13 little-endian, 14-17 big-endian), you can just read bytes 10-17 from the beginning of the first sector and get the size. Still not as efficient as putting it in the path table itself - no idea why they did not - but it works.
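A minimal Python sketch of that trick, assuming 2048-byte logical sectors and no extended attribute record in front of the directory's extent (the file name and sector number are just the values from the example above):

import struct

SECTOR_SIZE = 2048  # logical block size from the primary volume descriptor

def directory_size(iso_path, extent_lba):
    # Read the directory's own first record ('.') and pull its data length.
    # Bytes 10-13 hold the length little-endian, bytes 14-17 the same value big-endian.
    with open(iso_path, "rb") as f:
        f.seek(extent_lba * SECTOR_SIZE)
        record = f.read(18)
        (size_le,) = struct.unpack_from("<I", record, 10)
        (size_be,) = struct.unpack_from(">I", record, 14)
        assert size_le == size_be  # sanity check on the both-byte-orders field
        return size_le

# The root directory from the path table starts at sector 20; for the Ubuntu image
# this should print 2048, i.e. the directory occupies a single sector.
size = directory_size("ubuntu-18.04.1-live-server-amd64.iso", 20)
print(size, "bytes =", (size + SECTOR_SIZE - 1) // SECTOR_SIZE, "sector(s)")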

Pointer position at the time of VSAM START command

I am a bit confused. I know that when you START a VSAM file, a pointer is set to a specific record, which will be read by the subsequent READ command.
Let's assume that VSAM has records:
100
200
300
400
500
When you write:
START filename
KEY IS GREATER THAN 400
It will place the pointer at 500. But if you say
START filename
KEY IS GREATER THAN 600
where will the pointer be placed?
Will it be at 500, or will it be an error?
Also, my understanding is that START will never give an end-of-file condition (RETURN CODE 10).
The pointer will not be positioned to any record.
An INVALID KEY condition will be raised with FILE STATUS "23", indicating no record found.
From the 2002 COBOL standard:
"14.8.37.3 General rules
"7) Following the unsuccessful execution of a START statement, the
file position indicator is set to indicate that no valid record
position has been established. For indexed files, the key of reference
is undefined."

Redis MSOpenTech : max memory "OOM command not allowed when used memory > 'maxmemory'" error even though RDB file after save is only 3 GB

The Redis server version I use is 2.8.9 from the MSOpenTech GitHub. Can anyone shed light on why the Redis "info" command indicates that used memory is 21 GB even though the RDB file saved on disk is less than 4 GB? I did successfully run a "save" command before noting down the size of the RDB file. The qfork heap file is 30 GB, as configured in redis.windows.conf.
Configuration :
maxheap 30gb
max-memory 20 Gb
appendonly no
save 18000 1
The server has 192 GB of physical RAM, but unfortunately only about 60 GB of free disk space, so I had to set max-heap and max-memory to 30 GB and 20 GB respectively in order to have enough space left to persist the data on disk.
I'm using redis as a cache and the save interval is large as seeding the data takes a long time and I don't want constant writing to file. Once seeding is done, the DB is updated with newer data once a day.
My questions are :
How is the saved RDB file so small? Is it solely due to compression (rdbcompression yes)? If yes, can the same compression mechanism be used to store data in memory too? I make use of lists extensively.
Before I ran the "save" command, the working set and private bytes in Process Explorer were very small. Is there a way I can break down memory usage by data structure? For example: lists use x amount, hashes use y amount, etc. (a rough sampling approach is sketched after the INFO output below).
Is there any way I can store the AOF file (I turned off AOF and use RDB because the AOF files were filling up disk space fast) on a network path (shared drive or NAS)? I tried setting the dir config to \someip\some folder but the service failed to start with the message "Cant CHDIR to location".
I'm unable to post images, but this is what process-explorer has to say about the redis-server instance :
Virtual Memory:
Private Bytes : 72,920 K
Peak Private Bytes : 31,546,092 K
Virtual Size : 31,558,356 K
Page faults : 12,479,550
Physical Memory:
Working Set : 26,871,240 K
WS Private : 63,260 K
WS Shareable : 26,807,980 K
WS Shared : 3,580 K
Peak Working Set : 27,011,488 K
The latest saved dump.rdb is 3.81 GB and the heap file is 30 GB.
# Server
redis_version:2.8.9
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:1fe181ad2447fe38
redis_mode:standalone
os:Windows
arch_bits:64
multiplexing_api:winsock_IOCP
gcc_version:0.0.0
process_id:12772
run_id:553f2b4665edd206e632b7040aa76c0b76083f4d
tcp_port:6379
uptime_in_seconds:24087
uptime_in_days:0
hz:50
lru_clock:14825512
config_file:D:\RedisService/redis.windows.conf
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:21484921736
used_memory_human:20.01G
used_memory_rss:21484870536
used_memory_peak:21487283360
used_memory_peak_human:20.01G
used_memory_lua:3156992
mem_fragmentation_ratio:1.00
mem_allocator:dlmalloc-2.8
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1407328559
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:1407328560
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
# Stats
total_connections_received:9486
total_commands_processed:241141370
instantaneous_ops_per_sec:0
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:30143
keyspace_misses:81
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:1341134
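
Regarding the per-data-structure breakdown asked about above: the INFO output of this Redis version doesn't provide one, but you can sample it client-side. A hedged sketch using redis-py (the host/port are assumptions, and DEBUG OBJECT reports the serialized length, which tracks RDB size rather than actual RAM usage, so treat the totals as relative rather than absolute):

from collections import defaultdict
import redis  # pip install redis

r = redis.StrictRedis(host="localhost", port=6379, decode_responses=True)

totals = defaultdict(int)  # approximate serialized bytes per Redis type
counts = defaultdict(int)  # number of keys per Redis type

# SCAN keeps the server responsive; adjust count/match to sample only a subset of keys.
for key in r.scan_iter(count=1000):
    ktype = r.type(key)            # 'string', 'list', 'hash', ...
    obj = r.debug_object(key)      # includes 'serializedlength' and 'encoding'
    totals[ktype] += obj["serializedlength"]
    counts[ktype] += 1

for ktype in sorted(totals):
    print(f"{ktype}: {counts[ktype]} keys, ~{totals[ktype] / 2**20:.1f} MiB serialized")

In general the RDB file is a compact, optionally LZF-compressed serialization without the per-object overhead Redis keeps in RAM, which is why it can be several times smaller than used_memory.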

AIX 7.1 Creating an enhanced JFS file system issues?

I am fairly new to AIX and am trying to create my first jfs file system mounted on /usr1.
I started off by creating my volume group from the available disks, and then created the logical volume:
#/usr/sbin/mklv -y'vl_usr1' -t'jfs2' -c'2' vg_usr1 6005
van-oppy# lsvg vg_usr1
VOLUME GROUP: vg_usr1 VG IDENTIFIER: 000015010000d60000000140f464119b
VG STATE: active PP SIZE: 128 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 12012 (1537536 megabytes)
MAX LVs: 256 FREE PPs: 2 (256 megabytes)
LVs: 1 USED PPs: 12010 (1537280 megabytes)
OPEN LVs: 0 QUORUM: 12 (Enabled)
TOTAL PVs: 22 VG DESCRIPTORS: 22
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 22 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 1016 MAX PVs: 32
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
PV RESTRICTION: none INFINITE RETRY: no
Below is the lslv of the logical volume...
# lslv vl_usr1
LOGICAL VOLUME: vl_usr1 VOLUME GROUP: vg_usr1
LV IDENTIFIER: 000015010000d60000000140f464119b.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: closed/syncd
TYPE: jfs2 WRITE VERIFY: off
MAX LPs: 6005 PP SIZE: 128 megabyte(s)
COPIES: 2 SCHED POLICY: parallel
LPs: 6005 PPs: 12010
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 32
MOUNT POINT: N/A LABEL: None
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO
INFINITE RETRY: no
When creating the logical volume I set it to have 2 copies, so now my free LPs is 6005.
When I next try to create an enhanced JFS2 file system, it fails:
Add an Enhanced Journaled File System
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
Volume group name vg_usr1
SIZE of file system
Unit Size Megabytes +
* Number of units [768640] #
* MOUNT POINT [/usr1]
Mount AUTOMATICALLY at system restart? yes +
PERMISSIONS read/write +
Mount OPTIONS [] +
Block Size (bytes) 4096 +
Logical Volume for Log +
Inline Log size (MBytes) [] #
Extended Attribute Format +
ENABLE Quota Management? no +
[MORE...3]
F1=Help F2=Refresh F3=Cancel F4=List
Esc+5=Reset Esc+6=Command Esc+7=Edit Esc+8=Image
Esc+9=Shell Esc+0=Exit Enter=Do
The commit fails with:
0516-404 allocp: This system cannot fulfill the allocation request. There are not enough free partitions or not enough physical volumes to keep strictness and satisfy allocation requests. The command should be retried with different allocation characteristics.
0516-822 mklv: Unable to create logical volume.
crfs: 0506-972 Cannot create logical volume for file system.
rmlv: Logical volume loglv00 is removed.
I am confused as to why. My free PPs was 6005, so 6005 x 128 = 768640 MB, which is what I set during creation. I also tried lowering the number to 768000 MB, which is 6000 PPs, but still no go.
Any AIX experts out there able to shed some light as to why this is not working? Trying to wrap my head around how the LVM works...
I have resolved my issue: I was allocating more space than was actually available, and using the wrong calculations to come up with the maximum available.
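To make the arithmetic explicit (all numbers are taken from the lsvg/lslv output above; the error output shows crfs trying to create a brand-new logical volume plus a loglv00 log volume, rather than reusing vl_usr1):

pp_size_mb = 128        # PP SIZE from lsvg
total_pps = 12012       # TOTAL PPs in vg_usr1
lv_lps = 6005           # LPs of vl_usr1
copies = 2              # mklv -c'2' -> two mirrored copies

used_pps = lv_lps * copies        # 12010 PPs already consumed by vl_usr1
free_pps = total_pps - used_pps   # only 2 PPs (256 MB) left in the volume group

requested_mb = 768640                        # size entered in the SMIT panel
requested_pps = requested_mb // pp_size_mb   # 6005 PPs needed for the new LV

print(f"free: {free_pps} PPs, requested: {requested_pps} PPs")
# free: 2 PPs, requested: 6005 PPs -> 0516-404: the allocation cannot be satisfied

In other words, the two mirrored copies of vl_usr1 had already consumed nearly the entire volume group, so there was no room left for the new logical volume that the file-system creation tried to allocate.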
