I am fairly new to AIX and am trying to create my first JFS2 file system, mounted on /usr1.
I started off by creating my volume group from the available disks, then created a mirrored logical volume on it:
#/usr/sbin/mklv -y'vl_usr1' -t'jfs2' -c'2' vg_usr1 6005
van-oppy# lsvg vg_usr1
VOLUME GROUP: vg_usr1 VG IDENTIFIER: 000015010000d60000000140f464119b
VG STATE: active PP SIZE: 128 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 12012 (1537536 megabytes)
MAX LVs: 256 FREE PPs: 2 (256 megabytes)
LVs: 1 USED PPs: 12010 (1537280 megabytes)
OPEN LVs: 0 QUORUM: 12 (Enabled)
TOTAL PVs: 22 VG DESCRIPTORS: 22
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 22 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 1016 MAX PVs: 32
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
PV RESTRICTION: none INFINITE RETRY: no
Below is the lslv output for the logical volume:
# lslv vl_usr1
LOGICAL VOLUME: vl_usr1 VOLUME GROUP: vg_usr1
LV IDENTIFIER: 000015010000d60000000140f464119b.1 PERMISSION: read/write
VG STATE: active/complete LV STATE: closed/syncd
TYPE: jfs2 WRITE VERIFY: off
MAX LPs: 6005 PP SIZE: 128 megabyte(s)
COPIES: 2 SCHED POLICY: parallel
LPs: 6005 PPs: 12010
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 32
MOUNT POINT: N/A LABEL: None
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO
INFINITE RETRY: no
When creating the logical volume, I set it to have 2 copies, so now my free LPs are 6005.
When I next try to create an Enhanced JFS2 file system, it fails:
Add an Enhanced Journaled File System
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
Volume group name vg_usr1
SIZE of file system
Unit Size Megabytes +
* Number of units [768640] #
* MOUNT POINT [/usr1]
Mount AUTOMATICALLY at system restart? yes +
PERMISSIONS read/write +
Mount OPTIONS [] +
Block Size (bytes) 4096 +
Logical Volume for Log +
Inline Log size (MBytes) [] #
Extended Attribute Format +
ENABLE Quota Management? no +
[MORE...3]
F1=Help F2=Refresh F3=Cancel F4=List
Esc+5=Reset Esc+6=Command Esc+7=Edit Esc+8=Image
Esc+9=Shell Esc+0=Exit Enter=Do
The commit fails with:
0516-404 allocp: This system cannot fulfill the allocation request. There are not enough free partitions or not enough physical volumes to keep strictness and satisfy allocation requests. The command should be retried with different allocation characteristics.
0516-822 mklv: Unable to create logical volume.
crfs: 0506-972 Cannot create logical volume for file system.
rmlv: Logical volume loglv00 is removed.
I am confused as to why. My free PPs were 6005, so 6005 x 128 = 768640 MB, which is what I set during creation. I also tried lowering the number to 768000 MB, which is 6000 PPs, but still no go.
Any AIX experts out there able to shed some light as to why this is not working? Trying to wrap my head around how the LVM works...
I have resolved my issue: I was allocating more space than was available and using the wrong calculations to come up with the maximum available.
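For anyone following along, the arithmetic can be sketched in shell using the lsvg figures above (a sanity check, not an AIX command):

```shell
# With COPIES=2, every logical partition (LP) of the mirrored LV consumes
# two physical partitions (PPs), so 6005 LPs use 12010 of the VG's
# 12012 PPs - leaving only 2 PPs (256 MB) free for the new file system.
TOTAL_PPS=12012   # TOTAL PPs from lsvg
PP_SIZE_MB=128    # PP SIZE from lsvg
COPIES=2
LPS=6005
USED_PPS=$((LPS * COPIES))
FREE_PPS=$((TOTAL_PPS - USED_PPS))
echo "used: $USED_PPS PPs, free: $FREE_PPS PPs ($((FREE_PPS * PP_SIZE_MB)) MB)"
```

In other words, a mirrored LV of 6005 LPs leaves nowhere near 768640 MB free, so SMIT's crfs has only 2 PPs left to allocate from.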
I have a Redis instance running in a container.
Inside the container, cgroup rss shows about 1283 MB of memory in use.
The kmem memory usage is 30.75 MB.
The sum of the memory usage of all processes in the Docker container is 883 MB.
How can I account for the "disappeared memory" (1296 - 883 - 30 = 383 MB)? The "disappeared memory" keeps growing over time, and eventually the container gets OOM-killed.
Environment info:
redis version:4.0.1
docker version:18.09.9
k8s version:1.13
**the memory usage is 1283 MB**
root@redis-m-s-rbac-0:/opt# cat /sys/fs/cgroup/memory/memory.usage_in_bytes
1346289664 >>>> 1283.921875 MB
the kmem memory usage is 30.75 MB
root@redis-m-s-rbac-0:/opt# cat /sys/fs/cgroup/memory/memory.kmem.usage_in_bytes
32194560 >>> 30.703125 MB
root@redis-m-s-rbac-0:/opt# cat /sys/fs/cgroup/memory/memory.stat
cache 3358720
rss 1359073280 >>> 1296.11328125 MB
rss_huge 515899392
shmem 0
mapped_file 405504
dirty 0
writeback 0
swap 0
pgpgin 11355630
pgpgout 11148885
pgfault 25710366
pgmajfault 0
inactive_anon 0
active_anon 1359245312
inactive_file 2351104
active_file 1966080
unevictable 0
hierarchical_memory_limit 4294967296
hierarchical_memsw_limit 4294967296
total_cache 3358720
total_rss 1359073280
total_rss_huge 515899392
total_shmem 0
total_mapped_file 405504
total_dirty 0
total_writeback 0
total_swap 0
total_pgpgin 11355630
total_pgpgout 11148885
total_pgfault 25710366
total_pgmajfault 0
total_inactive_anon 0
total_active_anon 1359245312
total_inactive_file 2351104
total_active_file 1966080
total_unevictable 0
**the sum of the memory usage of all processes in the Docker container is 883 MB**
root@redis-m-s-rbac-0:/opt# ps aux | awk '{sum+=$6} END {print sum / 1024}'
883.609
This is happening because usage_in_bytes does not show the exact value of memory and swap usage. memory.usage_in_bytes shows the current memory (RSS + cache) usage as a fuzzed value.
The kernel documentation (section 5.5, usage_in_bytes) explains:
"For efficiency, as other kernel components, memory cgroup uses some optimization to avoid unnecessary cacheline false sharing. usage_in_bytes is affected by the method and doesn't show 'exact' value of memory (and swap) usage, it's a fuzz value for efficient access. (Of course, when necessary, it's synchronized.) If you want to know more exact memory usage, you should use RSS+CACHE(+SWAP) value in memory.stat (see 5.2)."
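Applying that advice to the numbers in the question (a sketch; the values below are copied from the memory.stat output above, and on a live system you would read them from /sys/fs/cgroup/memory/memory.stat):

```shell
# The "exact" usage per the kernel doc is rss + cache from memory.stat,
# not usage_in_bytes (which is a fuzzed counter).
RSS=1359073280   # rss from memory.stat
CACHE=3358720    # cache from memory.stat
echo "rss+cache: $(( (RSS + CACHE) / 1024 / 1024 )) MB"
```

That comes out close to the ~1283 MB that usage_in_bytes reports, with the difference being the fuzz the documentation describes.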
Reference:
https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
One of our storage arrays has an LVM volume group in which several logical volumes do not have the creating host name and time set (the "LV Creation host, time" attribute).
These were created quite a long time ago, but unfortunately (since the date is missing) I can't say exactly when. It could be many, many years, even 8-10.
Currently we want to move it under Proxmox 6 (shared LVM via Fibre Channel, currently used by Proxmox 3), which we can't do because Proxmox 6 needs the creation time.
I couldn't find a command to set an exact time for this.
Can anyone help me set the hostname and creation time on an LVM logical volume?
Regards,
Laszlo
Sorry, I can't format in a comment.
So, here is an example lvdisplay output (1 volume, but there are 32 in total, 5 of which have no creation time):
The "LV Creation host, time" attribute is empty.
It was probably created under Proxmox 3 - is it possible that this attribute did not exist back then?
--- Logical volume ---
LV Path /dev/stvg1/vm-103-disk-1
LV Name vm-103-disk-1
VG Name stvg1
LV UUID sbSgG8-4nuw-RwG5-skxe-J9e1-OkGo-ImEMfO
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 0
LV Size 1.50 TiB
Current LE 393216
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:20
And the Proxmox error message when trying to read the LVM volumes:
Result verification failed (400)
[9].ctime: type check ('integer') failed - got ''
[8].ctime: type check ('integer') failed - got ''
[4].ctime: type check ('integer') failed - got ''
[7].ctime: type check ('integer') failed - got ''
[10].ctime: type check ('integer') failed - got ''
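For what it's worth, the attribute that lvdisplay prints comes from per-LV fields in LVM's text metadata (the format you can see in /etc/lvm/backup/&lt;vg&gt; or via vgcfgbackup). A sketch of how they look - the values here are illustrative, not from this VG - and on sufficiently old LVs the two fields are simply absent, which would fit the "created under Proxmox 3" theory:

```
vm-103-disk-1 {
    id = "sbSgG8-4nuw-RwG5-skxe-J9e1-OkGo-ImEMfO"
    status = ["READ", "WRITE", "VISIBLE"]
    creation_time = 1600000000    # seconds since the epoch (illustrative value)
    creation_host = "pve3"        # illustrative hostname
    ...
}
```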
Today the memory allocated for Tarantool's memtx ran out (memtx_memory = 5 GB); RAM usage was really at 5 GB, and after restarting Tarantool more than 4 GB was freed.
What could be filling up the RAM? Which settings could this be related to?
box.slab.info()
---
- items_size: 1308568936
items_used_ratio: 91.21%
quota_size: 5737418240
quota_used_ratio: 13.44%
arena_used_ratio: 89.2%
items_used: 1193572600
quota_used: 1442840576
arena_size: 1442840576
arena_used: 1287551224
box.info()
---
- version: 2.3.2-26-g38e825b
id: 1
ro: false
uuid: d9cb7d78-1277-4f83-91dd-9372a763aafa
package: Tarantool
cluster:
uuid: b6c32d07-b448-47df-8967-40461a858c6d
replication:
1:
id: 1
uuid: d9cb7d78-1277-4f83-91dd-9372a763aafa
lsn: 89759968433
2:
id: 2
uuid: 77557306-8e7e-4bab-adb1-9737186bd3fa
lsn: 9
3:
id: 3
uuid: 28bae7dd-26a8-47a7-8587-5c1479c62311
lsn: 0
4:
id: 4
uuid: 6a09c191-c987-43a4-8e69-51da10cc3ff2
lsn: 0
signature: 89759968442
status: running
vinyl: []
uptime: 606297
lsn: 89759968433
sql: []
gc: []
pid: 32274
memory: []
vclock: {2: 9, 1: 89759968433}
cat /etc/tarantool/instances.available/my_app.lua
...
memtx_memory = 5 * 1024 * 1024 * 1024,
...
Tarantool version 2.3.2, OS CentOS 7
https://i.stack.imgur.com/onV44.png
It's the result of a process called fragmentation.
The simple reason for this process is the following situation:
you have some area allocated for tuples
you put one tuple there, and then you put another one
when you need to grow the first tuple, the database has to relocate it to another place with enough capacity. After that, the space of the first tuple becomes free, but we have taken new space for the extended tuple.
You can decrease the fragmentation factor by increasing the tuple size for your case.
Choose the size by estimating your typical data, or just find the optimal size via metrics of your workload over time.
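The effect is visible in the box.slab.info() numbers above; a back-of-the-envelope sketch in shell (values copied from that output):

```shell
# arena_used is memory the slab arena has claimed; items_used is what
# actually holds live tuples. The gap between them is fragmentation and
# slab overhead, and quota_used - items_used shows how much of the
# reserved quota is not live tuple data.
ITEMS_USED=1193572600
ARENA_USED=1287551224
QUOTA_USED=1442840576
echo "arena overhead: $(( (ARENA_USED - ITEMS_USED) / 1024 / 1024 )) MB"
echo "reserved but not live: $(( (QUOTA_USED - ITEMS_USED) / 1024 / 1024 )) MB"
```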
According to the spec for the structure of ISO 9660 / ECMA-119, the path table contains a record for each path, including the location of the starting sector and its name, but not its size. I can find the directory entry, but I don't know how many sectors (normally 2048 bytes each) it contains. Is it one? Two? Six?
If I "walk the directory tree", each directory entry includes the referenced location and size, so I can know how many bytes (essentially, how many sectors, since a directory must use entire sectors) to read. However, the path table only includes the starting location, and not the size, leaving me not knowing how many bytes to read.
In an example iso I have (ubuntu-18.04.1-live-server-amd64.iso fwiw), the root directory entry in the primary volume descriptor shows:
Root Directory:
Directory Record Length: 34
Extended Attribute Length: 0
Location of Extent: 20 $00000014 00:00:20
Data Length: 2048 $00000800
Recording Date and Time: 23:39:04 07/25/2018 GMT 0
File Flags: $02 visible regular dir non-record no-perms single-extent
File Unit Size: 0
Interleave Gap Size: 0
Volume Sequence Number: 1
File Identifier: . (current directory)
Since it says the Data Length is 2048, I know to read just one sector.
However, the root directory entry in the path table shows:
Path Record Length: 10 $0A
Extended Attribute Length: 0 $00
Location of Extent: 20 $00000014 00:00:20
Parent Directory Number: 1 $0001
File Identifier: . (current directory)
It also points to sector 20, but doesn't tell me how many sectors it uses, leaving me guessing.
Yes, unused bytes in a sector should be all 0x00, so if I read in a sector, read records, and then come to one whose first byte (length) is 0x00, then I know I have reached the end of records, but that has three issues:
If that were the canonical way, why bother including size in the directory entry?
If it includes 2 or 3 sectors, it is more efficient for me to read them all at once than one at a time.
If I have a directory whose records precisely fill a sector, without some size attribute, I don't know if the next sector is supposed to be read as an entry, or if the directory ended here.
Basically, I know how to read the ordered path table to get the directory entry, but don't know how to use that to know how many sectors to read for the directory itself. I could, in theory, read the parent to get the entry for this directory to know the size, but that adds a seek and read and pretty much defeats the purpose of the path table.
Ah, I figured it out. Because the directory entries always start with a directory entry for the directory itself, and the data length is always in bytes 10-17 (10-13 little-endian, 14-17 big-endian), you can just read bytes 10-17 from the beginning of the sector and get the size. Still not as efficient as putting it in the path table itself - no idea why they did not - but it works.
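That trick can be sketched as follows. A synthetic 2048-byte "sector" is built here so the example runs without a real image (on a real ISO you would dd the directory's first sector instead); bytes 10-13 of the first record (the "." entry) hold the little-endian data length:

```shell
# Sketch: recover a directory's size from its own first record.
# A fake sector is created with data length 4096 (octal bytes
# \000\020\000\000 = 0x00001000 little-endian), i.e. a 2-sector directory.
SECTOR=sector.bin
dd if=/dev/zero of="$SECTOR" bs=2048 count=1 2>/dev/null
printf '\000\020\000\000' | dd of="$SECTOR" bs=1 seek=10 conv=notrunc 2>/dev/null
# Read the 4 length bytes at offset 10 and assemble them little-endian.
LEN=$(od -An -tu1 -j10 -N4 "$SECTOR" | awk '{print $1 + $2*256 + $3*65536 + $4*16777216}')
echo "data length: $LEN bytes = $((LEN / 2048)) sector(s)"
rm -f "$SECTOR"
```

Reading those four bytes up front tells you exactly how many sectors to fetch in one go.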
The Redis server version I use is 2.8.9 from the MSOpenTech GitHub. Can anyone shed light on why the Redis INFO command indicates that used memory is 21 GB even though the RDB file saved on disk is less than 4 GB? I did successfully run a SAVE command before noting down the size of the RDB file. The qfork heap file is 30 GB, as configured in redis.windows.conf.
Configuration :
maxheap 30gb
max-memory 20 Gb
appendonly no
save 18000 1
The server has 192 GB of physical RAM, but unfortunately only about 60 GB of free disk space, so I had to set maxheap and max-memory to 30 GB and 20 GB respectively, so that I have additional space to persist the data on disk.
I'm using redis as a cache and the save interval is large as seeding the data takes a long time and I don't want constant writing to file. Once seeding is done, the DB is updated with newer data once a day.
My questions are :
How is the saved RDB file so small? Is it solely due to compression (rdbcompression yes)? If yes, can the same compression mechanism be used to store data in memory too? I make use of lists extensively.
Before I ran the SAVE command, the working set and private bytes in Process Explorer were very small. Is there a way I can break down memory usage by data structure? For example: lists use x amount, hashes use y amount, etc.
Is there any way I can store the AOF file (I turned off AOF and use RDB because the AOF files were filling up disk space fast) on a network path (shared drive or NAS)? I tried setting the dir config to \\someip\some folder but the service failed to start with the message "Can't CHDIR to location".
I'm unable to post images, but this is what Process Explorer has to say about the redis-server instance:
Virtual Memory:
Private Bytes : 72,920 K
Peak Private Bytes : 31,546,092 K
Virtual Size : 31,558,356 K
Page faults : 12,479,550
Physical Memory:
Working Set : 26,871,240 K
WS Private : 63,260 K
WS Shareable : 26,807,980 K
WS Shared : 3,580 K
Peak Working Set : 27,011,488 K
The latest saved dump.rdb is 3.81 GB and the heap file is 30 GB.
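As a sanity check on the units (a sketch; 21484921736 is the used_memory value from the INFO output below):

```shell
# used_memory is in bytes; converted to GiB it matches used_memory_human,
# and is roughly five times the 3.81 GB dump.rdb - the RDB file is a
# compact serialized snapshot, not an image of the in-memory structures.
USED=21484921736
awk -v b="$USED" 'BEGIN { printf "used_memory: %.2f GiB\n", b / (1024 ^ 3) }'
```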
# Server
redis_version:2.8.9
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:1fe181ad2447fe38
redis_mode:standalone
os:Windows
arch_bits:64
multiplexing_api:winsock_IOCP
gcc_version:0.0.0
process_id:12772
run_id:553f2b4665edd206e632b7040aa76c0b76083f4d
tcp_port:6379
uptime_in_seconds:24087
uptime_in_days:0
hz:50
lru_clock:14825512
config_file:D:\RedisService/redis.windows.conf
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:21484921736
used_memory_human:20.01G
used_memory_rss:21484870536
used_memory_peak:21487283360
used_memory_peak_human:20.01G
used_memory_lua:3156992
mem_fragmentation_ratio:1.00
mem_allocator:dlmalloc-2.8
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1407328559
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:1407328560
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
# Stats
total_connections_received:9486
total_commands_processed:241141370
instantaneous_ops_per_sec:0
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:30143
keyspace_misses:81
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:1341134