Jenkins web UI is totally unresponsive

My Jenkins instance has been running for over two years without issue, but yesterday it quit responding to HTTP requests. No errors, just endless loading indicators.
I've restarted the service, then restarted the entire server.
There's been a lot of mention of taking a thread dump. I attempted to get one, but I'm not sure that this is it:
Heap
PSYoungGen total 663552K, used 244203K [0x00000000d6700000, 0x0000000100000000, 0x0000000100000000)
eden space 646144K, 36% used [0x00000000d6700000,0x00000000e4df5f70,0x00000000fde00000)
from space 17408K, 44% used [0x00000000fef00000,0x00000000ff685060,0x0000000100000000)
to space 17408K, 0% used [0x00000000fde00000,0x00000000fde00000,0x00000000fef00000)
ParOldGen total 194048K, used 85627K [0x0000000083400000, 0x000000008f180000, 0x00000000d6700000)
object space 194048K, 44% used [0x0000000083400000,0x000000008879ee10,0x000000008f180000)
Metaspace used 96605K, capacity 104986K, committed 105108K, reserved 1138688K
class space used 12782K, capacity 14961K, committed 14996K, reserved 1048576K
Ubuntu 16.04.5 LTS

I prefer looking in the Jenkins log file. There you can see the errors and then fix them.
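For completeness, the output above is a heap summary rather than a thread dump. A minimal sketch of how both the log and a real thread dump are usually captured on a packaged Jenkins install (paths, user, and port are the Ubuntu/Debian defaults and may differ on your setup; the /threadDump page needs admin credentials):

# Recent entries in the Jenkins log (default path for the Ubuntu/Debian package)
sudo tail -n 200 /var/log/jenkins/jenkins.log

# Proper thread dump with jstack (requires a JDK), run as the Jenkins user
JENKINS_PID=$(pgrep -f jenkins.war | head -n 1)
sudo -u jenkins jstack "$JENKINS_PID" > /tmp/jenkins-threaddump.txt

# If the UI still answered, Jenkins also serves a thread dump over HTTP
# curl -u ADMIN_USER:API_TOKEN http://localhost:8080/threadDump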

Related

Neo4j TransactionMemoryLimit

I am running Neo4j (v4.1.5) community edition on a server node with 64GB RAM.
I set the heap size configuration as follows:
dbms.memory.heap.initial_size=31G
dbms.memory.heap.max_size=31G
During the ingestion via bolt, I got the following error:
{code: Neo.TransientError.General.TransactionMemoryLimit} {message:
Can't allocate extra 512 bytes due to exceeding memory limit;
used=2147483648, max=2147483648}
What I don't understand is that the max in the error message shows 2GB, while I've set the initial and max heap size to 31GB. Can someone help me understand how memory setting works in Neo4j?
It turned out that the default transaction memory allocation for this version was OFF_HEAP, meaning that all transactions were executed off-heap with a 2 GB maximum. Adding the following setting in Neo4j resolved the issue:
dbms.tx_state.memory_allocation=ON_HEAP
I'm not sure why OFF_HEAP is the default setting when the Neo4j manual recommends the ON_HEAP setting:
When executing a transaction, Neo4j holds not yet committed data, the result, and intermediate states of the queries in memory. The size needed for this is very dependent on the nature of the usage of Neo4j. For example, long-running queries, or very complicated queries, are likely to require more memory. Some parts of the transactions can optionally be placed off-heap, but for the best performance, it is recommended to keep the default with everything on-heap.
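For reference, a minimal neo4j.conf sketch combining the settings discussed above; the heap values are the ones from the question, while the page cache line is only an illustration of the separate pool you would also size (adjust both to your machine):

# neo4j.conf (Neo4j 4.1.x), sketch only
dbms.memory.heap.initial_size=31G
dbms.memory.heap.max_size=31G
# keep transaction state on the heap instead of the 2 GB off-heap default
dbms.tx_state.memory_allocation=ON_HEAP
# page cache is a separate pool from the heap; 16G here is just an example
dbms.memory.pagecache.size=16G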

Is it necessary to increase VM memory for a Redis instance

I'm using a 4-CPU, 8 GB memory virtual machine in GCP, and I'm also running the RediSearch Docker container.
I have 47.5 million hash keys, which I estimate at over 35 GB. So if I import all of my data through redis-cli on the VM, will it really need more than 35 GB of memory?
Also, I already tried importing 7.5 million keys, and memory utilization is already about 70%.
If your cache needs 35 GB, then your cache will need 35 GB.
The values you gave are consistent: if 47.5M keys use 35 GB, then 7.5M will use about 5.6 GB (which is also roughly 70% of 8 GB).
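To sanity-check that extrapolation before loading the remaining keys, you can sample what the already-imported data costs with standard redis-cli commands (the key name below is just a placeholder for one of your real hashes):

# total keys and overall memory of the running instance
redis-cli DBSIZE
redis-cli INFO memory | grep used_memory_human

# approximate bytes used by a single hash; SAMPLES 0 counts every field
redis-cli MEMORY USAGE some:hash:key SAMPLES 0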
If you don't want to change your VM specs, you can use the swap option in the Redis configuration file to use part of the VM's cold storage.
Note that you have to be careful using swap; depending on the hardware it can be a pretty bad idea. Using anything other than NVMe drives is bad (even SSDs), as you can see here:
Benchmark with SSDs
Benchmark with NVMes

How to get rid of warnings with MEM and SSD tiers

I have two tiers: MEM + SSD. The MEM tier is almost always about 90% full, and sometimes the SSD tier is also full.
Now this kind of message regularly spams my log:
2022-06-14 07:11:43,607 WARN TieredBlockStore - Target tier: BlockStoreLocation{TierAlias=MEM, DirIndex=0, MediumType=MEM} has no available space to store 67108864 bytes for session: -4254416005596851101
2022-06-14 07:11:43,607 WARN BlockTransferExecutor - Transfer-order: BlockTransferInfo{TransferType=SWAP, SrcBlockId=36401609441282, DstBlockId=36240078405636, SrcLocation=BlockStoreLocation{TierAlias=MEM, DirIndex=0, MediumType=MEM}, DstLocation=BlockStoreLocation{TierAlias=SSD, DirIndex=0, MediumType=SSD}} failed. alluxio.exception.WorkerOutOfSpaceException: Failed to find space in BlockStoreLocation{TierAlias=MEM, DirIndex=0, MediumType=MEM} to move blockId 36240078405636
2022-06-14 07:11:43,607 WARN AlignTask - Insufficient space for worker swap space, swap restore task called.
Is my setup flawed? What can I do to get rid of these warnings?
It looks like the Alluxio worker is trying to move/swap some blocks, but there is not enough space to finish the operation. I guess it might be caused by both the SSD and MEM tiers being full. Have you tried the property alluxio.worker.tieredstore.free.ahead.bytes? This can help determine whether the swap failed due to insufficient storage space.
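If you try it, the property goes in alluxio-site.properties on the workers; the value below is only a sketch and should be tuned to your tier sizes, the idea being to keep some headroom free so block swaps between MEM and SSD have room to land:

# alluxio-site.properties on each worker (example value, adjust to your tiers)
alluxio.worker.tieredstore.free.ahead.bytes=1GB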

ARM: Safe physical memory position (to reserve) for my ARM hypervisor in relation to a Linux/Android guest

I am developing a basic hypervisor on ARM (using the board Arndale Exynos 5250).
I want to load Linux (Ubuntu or something else)/Android as the guest. Currently I'm using a Linaro distribution.
I'm almost there, most of the big problems have already been dealt with, except for the last one: reserving memory for my hypervisor such that the kernel does not try to OVERWRITE it BEFORE parsing the FDT or the kernel command line.
The problem is that my Linaro distribution's U-Boot passes a FDT in R2 to the linux kernel, BUT the kernel tries to overwrite my hypervisor's memory before seeing that I reserved that memory region in the FDT (by decompiling the DTB, modifying the DTS and recompiling it). I've tried to change the kernel command-line parameters, but they are also parsed AFTER the kernel tries to overwrite my reserved portion of memory.
Thus, what I need is a safe memory location in physical RAM where I can put my hypervisor's code such that the Linux kernel won't try to access (read/write) it BEFORE parsing the FDT or its kernel command line.
Context details:
The system RAM layout on Exynos 5250 is: physical RAM starts at 0x4000_0000 (=1GB) and has the length 0x8000_0000 (=2GB).
The Linux kernel is loaded (by U-Boot) at 0x4000_7000; its size (uncompressed uImage) is less than 5 MB and its entry point is set to 0x4000_8000.
The uInitrd is loaded at 0x4200_0000 and is less than 2 MB in size.
The FDT (board.dtb) is loaded at 0x41f0_0000 (passed in R2) and is less than 35 KB in size.
I currently load my hypervisor at 0x40C0_0000 and I want to reserve 200MB (0x0C80_0000) starting from that address, but the kernel tries to write there (a stage 2 HYP trap tells me that) before looking in the FDT or in the command line to see that the region is actually reserved. If instead I load my hypervisor at 0x5000_0000 (without even modifying the original DTB or the command line), it does not try to overwrite me!
The FDT is passed directly, not through ATAGs
Since when loading my hypervisor at 0x5000_0000 the kernel does not try to overwrite it whatsoever, I assume there are memory regions that Linux does not touch before parsing the FDT/command-line. I need to know whether this is true or not, and if true, some details regarding these memory regions.
Thanks!
RELATED QUESTION:
Does anyone happen to know what the priority is between the following: ATAGs / kernel command line / FDT? For instance, if I reserve memory through the kernel command line but not in the FDT (.dtb), should it work, or is the command line overridden by the FDT? Is there some kind of concatenation between these three?
As per https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm/Booting, safe locations start 128 MB from the start of RAM (assuming the kernel is loaded in that region, which it should be). If a zImage was loaded lower in memory than what is likely to be the end address of the decompressed image, it might relocate itself higher up before it starts decompressing. But in addition to this, the kernel has a .bss region beyond the end of the decompressed image in memory.
(Do also note that your FDT and initrd locations already violate this specification, and that the memory block you are wanting to reserve covers the locations of both of these.)
Effectively, your reserved area should go after the FDT and initrd in memory - which 0x50000000 is. But anything > 0x08000000 from start of RAM should work, portably, so long as that doesn't overwrite the FDT, initrd or U-Boot in memory.
The priority of kernel/FDT/bootloader command line depends on the kernel configuration - do a menuconfig and check under "Boot options". You can combine ATAGS with the built-in command lines, but not FDT - after all, the FDT chosen node is supposed to be generated by the bootloader - U-boot's FDT support is OK so you should let it do this rather than baking it into the .dts if you want an FDT command line.
The kernel is pretty conservative before it's got its memory map since it has to blindly trust the bootloader has laid things out as specified. U-boot on the other hand is copying bits of itself all over the place and is certainly the culprit for the top end of RAM - if you #define DEBUG in (I think) common/board_f.c you'll get a dump of what it hits during relocation (not including the Exynos iRAM SPL/boot code stuff, but that won't make a difference here anyway).
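For illustration, the FDT-side reservation the question describes (decompile the DTB, add the reservation, recompile) might look like the sketch below for the 0x5000_0000 placement; the node name and the optional reserved-memory form are assumptions, not the poster's actual .dts, and the reserved-memory binding needs a reasonably recent kernel:

/* at the top of the decompiled board.dts: reserve 200 MB for the hypervisor */
/memreserve/ 0x50000000 0x0C800000;

/ {
    reserved-memory {
        #address-cells = <1>;
        #size-cells = <1>;
        ranges;

        hypervisor@50000000 {
            reg = <0x50000000 0x0C800000>;
            no-map;    /* keep the kernel from mapping this range at all */
        };
    };
};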

Issues with profiling a Java application using JProbe

I'm currently doing dynamic memory analysis for our Eclipse-based application using JProbe. After starting the Eclipse application and JProbe, when I try to profile the Eclipse application, it closes abruptly with a fatal error. A fatal error log file is generated, and in it I can see that the PermGen space seems to be full. Below is a sample heap summary from the log file:
Heap
def new generation total 960K, used 8K [0x07b20000, 0x07c20000, 0x08000000)
eden space 896K, 0% used [0x07b20000, 0x07b22328, 0x07c00000)
from space 64K, 0% used [0x07c00000, 0x07c00000, 0x07c10000)
to space 64K, 0% used [0x07c10000, 0x07c10000, 0x07c20000)
tenured generation total 9324K, used 5606K [0x08000000, 0x0891b000, 0x0bb20000)
the space 9324K, 60% used [0x08000000, 0x08579918, 0x08579a00, 0x0891b000)
compacting perm gen total 31744K, used 31723K [0x0bb20000, 0x0da20000, 0x2bb20000)
the space 31744K, 99% used [0x0bb20000, 0x0da1af00, 0x0da1b000, 0x0da20000)
ro space 8192K, 66% used [0x2bb20000, 0x2c069920, 0x2c069a00, 0x2c320000)
rw space 12288K, 52% used [0x2c320000, 0x2c966130, 0x2c966200, 0x2cf20000)
I tried to increase the PermGen space using the flag -XX:MaxPermSize=512m, but that doesn't seem to work. I would like to know how to increase the PermGen size from the command line: do I have to go to the Java installation on my machine and apply the flag there, or should I increase the PermGen space specifically for the Eclipse application or for JProbe? Please advise.
Any help on this is much appreciated.
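A hedged sketch of where the flag normally goes for an Eclipse-based application: JVM options belong after the -vmargs marker in the application's eclipse.ini (or the equivalent .ini next to your RCP launcher), not on the system-wide java command, and they only take effect on a pre-Java-8 JVM since PermGen was removed in Java 8. The -Xms/-Xmx values below are placeholders:

-vmargs
-Xms256m
-Xmx1024m
-XX:MaxPermSize=512m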
