I am monitoring Java heap usage on all managed servers in a WebLogic 10.3 domain using WLST. I have written a Jython script to achieve this. The script first logs into the admin server in the domain. The following code snippet fetches the heap statistics for each managed server:
def getServerJavaHeap():
    domainRuntime()
    servers = domainRuntimeService.getServerRuntimes()
    for server in servers:
        jvm = server.getJVMRuntime()
        free = int(jvm.getHeapFreeCurrent())/(1024*1024)
        freePct = int(jvm.getHeapFreePercent())
        current = int(jvm.getHeapSizeCurrent())/(1024*1024)
        max = int(jvm.getHeapSizeMax())/(1024*1024)
        print 'Domain Name #', cmo.getName()
        print 'Server Name #', server.getName()
        print 'Current Heap Size #', current
        print 'Current Heap Free #', free
        print 'Maximum Heap Size #', max
        print 'Percentage Heap Free #', freePct
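For completeness, the function is called after connecting to the admin server, roughly like this (the URL and credentials below are placeholders, not my real ones):

connect('weblogic', 'welcome1', 't3://adminhost:7001')   # placeholder credentials and admin URL
getServerJavaHeap()
disconnect()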
The heap statistics that the above code fetches are different from what the WebLogic admin console shows. For instance, for managed server123, the above code reports heap usage of 1.25GB while the admin console shows heap usage of 3GB.
I am wondering why there is a discrepancy between what the admin console shows and the output of the above code. I am trying to determine whether I am looking in the right place and invoking the right method calls (listed here in the docs) to get the heap statistics for each managed server.
I am sure the time at which the script ran is also a factor. I was wondering how frequently the admin console refreshes these tables.
I can't see anything wrong with your approach, to be honest. The admin console page won't update automatically unless you click the auto-refresh icon (the two arrows forming a circle) at the top left of the table. By default the refresh interval is 10 seconds, but this can be changed from the 'Preferences' page; the link is in the banner of every page.
I tried this on both an admin server and a managed server, and as long as I ran the code close to a refresh, the numbers tied up. I can only assume a garbage collection ran between the time the console displayed the data and the time your script ran.
A related Q/A exists, but it does not address my actual issue: how do I detect from a client (e.g. redis-py) that Redis is running out of memory, constrained not by the machine but by the maxmemory configuration? Which command should the program use to detect that memory is about to be full, before inserts fail?
My first guess is: call INFO and check whether used_memory_peak is still below the maxmemory setting. Is this correct?
(Besides, for running out of actual machine memory, which setting should I use, given defragmentation? None of the returned INFO fields seem to help here.)
Or should I just try an insert and see if it fails? (But that would be after the fact.)
Trial and error turned out to be good enough. Tested by running
while true; do redis-cli lpush mm longstringhere; done
this ends with maxmemory - used_memory < 0.1MB and insert failures:
(error) OOM command not allowed when used memory > 'maxmemory'.
So I have set it up so that I poll via the redis-py client and, once the difference drops below a 1MB threshold, raise an error. Make sure the memory added by your largest command is below that threshold too, otherwise you still run into it on insert.
I was trying to figure out how to calculate the approximate percentage of used memory so that I get notified much earlier, e.g. at 90% of maxmemory; for that, this solution works fine.
Info dump:
# Memory
used_memory:3126272
used_memory_human:2.98M
used_memory_rss:5292032
used_memory_rss_human:5.05M
used_memory_peak:4914296
used_memory_peak_human:4.69M
used_memory_peak_perc:63.62%
used_memory_overhead:696654...
Furthermore, maxmemory is not a hard cap; when pushing further, e.g. by adding members to an existing set, used_memory can go slightly above it:
used_memory:3162584
used_memory_human:3.02M
Code to get the percentage (0-100):
import math

rmem_info = pipe.info(section='memory')
{'redis_mem_percent': math.ceil(rmem_info['used_memory'] / float(rmem_info['maxmemory']) * 100)}
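Putting it together, a minimal polling sketch with redis-py (the connection details and the 90% threshold are illustrative, and it assumes maxmemory is actually configured):

import math
import redis

r = redis.Redis()  # illustrative connection; adjust host/port/db as needed

def redis_mem_percent(client):
    mem = client.info(section='memory')
    return math.ceil(mem['used_memory'] / float(mem['maxmemory']) * 100)

if redis_mem_percent(r) >= 90:  # illustrative threshold
    raise RuntimeError('Redis is close to maxmemory; stop inserting')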
We are trying to use Dask to clean up some data as part of an ETL process.
The original file is a CSV of over 3GB.
When we run the code on a 1GB subset, it completes successfully (with a few user warnings about cleaning procedures such as the following):
import re

ddf[id1] = ddf[id1].str.extract(r'(\d+)')
repeater = re.compile(r'((\d)\2{5,})')
mask_repeater = ddf[id1].str.contains(repeater, regex=True)
ddf = ddf[~mask_repeater]
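For context, the surrounding pipeline is wired up roughly like this (a simplified sketch; the path, blocksize, and output below are illustrative, not our exact code):

import dask.dataframe as dd

ddf = dd.read_csv('raw_data.csv', blocksize='64MB', dtype=str)  # illustrative path and blocksize
# ... cleaning steps such as the regex filtering above ...
ddf = ddf.drop_duplicates(subset=[id1])   # presumably what shows up as drop-duplicates-agg
ddf.to_csv('cleaned-*.csv', index=False)  # triggers the computation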
On the 3GB file the process nearly completes (there is only one task left, drop-duplicates-agg) and then restarts from the middle (that is what I can see on the Bokeh status page). We also see the same warning as when the script starts to run:
RuntimeWarning: Couldn't detect a suitable IP address for reaching '8.8.8.8', defaulting to '127.0.0.1'...
I'm running on an offline, single Windows 64-bit workstation with 24 cores.
Any suggestions?
Hi, I am using LabVIEW 2012, Delphi XE7, GPIB (I think 488.2), Windows 7 SP1, and an Agilent 53131A.
I used the given NI examples.
The NI LabVIEW example is found in LabVIEW's help: GPIB.vi.
I tried writing and reading to query frequencies from two channels, and it was successful.
The commands are sent and read in succession:
*IDN?
:FUNC 'FREQ 1'
:READ:FREQ?
Since these succeed, the GPIB driver for the Agilent instrument and NI MAX must be correctly installed and configured. I am also able to use Keysight Connection Expert to write and read; again it is successful.
However, I then used the given NI example in Delphi (originally it was saved as Delphi 3 or 4). I used the Scope Simple example for the universal counter, mostly for writing and reading in a simple way. All it needs is initialization, read/write, and cleanup.
I changed the following code in SimpleForm.pas, as shown below.
The detected device is at GPIB0::3::INSTR, so at line 32:
PRIMARY_ADDR_OF_COUNTER = 3;
The string to write and read, so at line 132:
CommandBox.Text := '*IDN?';
It compiled with no errors and ran.
The string was written successfully, but the read was not.
The string output is supposed to be ' HEWLETT-PACKARD,53131A,0,4806'.
The error at the end of the program is as follows:
Unable to read from device
ibsta = SC000 <ERR TMO>
iberr = 6 <EABO>
ibcntl = 0
From these readings, I figured out that EABO means abort. I am not familiar with the workings of GPIB. Kindly advise.
You are correct that EABO is the identifier for an abort. In addition, we can see from ibsta = SC000 <ERR TMO> that the cause of the abort was a GPIB timeout error. I am not familiar with Keysight Connection Expert or your instrument, but since the error was from GPIB timeout, the most likely causes are:
The query was improperly formatted and the instrument thought it was just a write statement with no response needed. (That's probably why the write function had no error, but the read function timed out.)
The query was improperly formatted and the instrument returned an error.
Instrument needs to have 'Talker' capability enabled to send data. (Most instruments do this automatically with queries.)
For more information on generic GPIB commands, see this reference from the folks at National Instruments.
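This is outside your Delphi code, but as a quick sanity check that the instrument answers *IDN? when termination and timeout are set explicitly, here is a minimal PyVISA sketch (assuming PyVISA and a VISA runtime are installed; the resource string is the one from your question, everything else is an assumption):

import pyvisa

rm = pyvisa.ResourceManager()
counter = rm.open_resource('GPIB0::3::INSTR')   # address reported by NI MAX in the question
counter.timeout = 5000                          # milliseconds; generous while debugging
counter.write_termination = '\n'                # ensure the command is properly terminated
counter.read_termination = '\n'
print(counter.query('*IDN?'))                   # write followed by read in one call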
I need to load ~29 million nodes from a CSV file (with USING PERIODIC COMMIT), but I'm getting "Unknown error" after the first ~75k nodes are loaded. I've tried changing the commit size (250, 500, and 1000), increasing the Java heap (-Xmx4096m), and using memory mapping, but nothing changes (except the number of nodes that get loaded: with commit size 500 I get "Unknown error" after 75,499 nodes and with commit size 250 I get "Unknown error" after 75,749 nodes).
I'm doing it in the browser, using Neo4j 2.1.7 on a remote machine with 10GB of RAM and Windows Server 2012. Here's my code:
USING PERIODIC COMMIT 1000
LOAD CSV FROM "file:/C:/Users/thiago.marzagao/Desktop/CSVs/cnpj.csv" AS node
CREATE (:PessoaJuridica {id: node[0], razaoSocial: node[1], nomeFantasia: node[2], CNAE: node[3], porte: node[4], dataAbertura: node[5], situacao: node[6], dataSituacao: node[7], endereco: node[8], CEP: node[9], municipio: node[10], UF: node[11], tel: node[12], email: node[13]})
The really bad part is that the nioneo_logical.log files have some weird encoding that no text editor can figure out. All I see is eÿÿÿÿ414141, ÿÿÿÿÿÿÿÿ, etc. The messages file, in turn, ends with hundreds of garbage collection warnings, like these:
2015-02-05 17:16:54.596+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for 304ms.
2015-02-05 17:16:55.033+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for 238ms.
2015-02-05 17:16:55.471+0000 WARN [o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for 231ms.
I've found somewhat related questions but not exactly what I'm looking for.
What am I missing?
The browser is the worst choice for running such an import, also because of HTTP timeouts.
Enough RAM helps, as does a fast disk.
Try using bin/Neo4jShell.bat, which connects to the running server. Also, it's best to make sure the CSV file is available locally.
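For example (the option name is from memory, so please check Neo4jShell.bat -help): save the LOAD CSV statement above to a file such as import.cql and run it against the running server with

bin\Neo4jShell.bat -file import.cql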
Those nioneo.*log files are logical logs (write-ahead logs for transactions).
The log files you're looking for are data/log/*.log and data/graph.db/messages.log.
Something else you can do is open the browser inspector, go to the Network/Requests tab and re-run the query, so that you can see the raw HTTP response. We just discussed that and will try to dump it directly to the JS console in the future.
I want to load a text file into the session.
The file size is about 50KB ~ 100KB.
When the user triggers the function on my page, it creates the session.
My server has about 8GB of RAM, and the maximum number of users is about 100.
There will be a script running in the background to collect IPs and MACs on the LAN.
The script continuously writes data into a text file.
At the same time, the web page uses Ajax to fetch the fresh data from the text file and display it on the page.
Is it suitable to keep the result in the session, or is there a better way to achieve this?
Thanks ~
The Python script collects the data on the LAN in 1~3 minutes (a background job).
To avoid blocking for 1~3 minutes, I will use Ajax to fetch the data from the text file (which the Python script keeps appending to) and show it on the page.
My users also need to carry the information across pages, so I want to store the data in the session.
00:02:D1:19:AA:50: 172.19.13.39
00:02:D1:13:E8:10: 172.19.12.40
00:02:D1:13:EB:06: 172.19.1.83
C8:9C:DC:6F:41:CD: 172.19.12.73
C8:9C:DC:A4:FC:07: 172.19.12.21
00:02:D1:19:9B:72: 172.19.13.130
00:02:D1:13:EB:04: 172.19.13.40
00:02:D1:15:E1:58: 172.19.12.37
00:02:D1:22:7A:4D: 172.19.11.84
00:02:D1:24:E7:0F: 172.19.1.79
00:FD:83:71:00:10: 172.19.11.45
00:02:D1:24:E7:0D: 172.19.1.77
00:02:D1:81:00:02: 172.19.11.58
00:02:D1:24:36:35: 172.19.11.226
00:02:D1:1E:18:CA: 172.19.12.45
00:02:D1:0D:C5:A8: 172.19.1.45
74:27:EA:29:80:3E: 172.19.12.62
Why does this need to be stored in the browser? Couldn't you fire off what you're collecting to a data store somewhere?
Anyway, assuming you HAD to do this, and the example you gave is pretty close to the data you'll actually be seeing, you have a lot of redundant data there. You could save space for the IPs by creating a hash pointing to each successive value, e.g.
{172 => {19 => {13 => [39], 12 => [40, 73, 21], 1 => [83]}}} ...etc. Similarly for the MAC addresses. But again, you can probably simplify this problem a LOT by storing the info you need somewhere other than the session.
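Since the collecting script is Python anyway, here is a rough sketch of that nesting idea (it assumes the file lines look exactly like the sample above, i.e. "mac: ip" pairs; the file name is a placeholder):

tree = {}
with open('scan_results.txt') as f:               # placeholder file name
    for line in f:
        line = line.strip()
        if not line:
            continue
        mac, ip = line.rsplit(':', 1)             # split off the IP after the last colon
        a, b, c, d = ip.strip().split('.')
        tree.setdefault(a, {}).setdefault(b, {}).setdefault(c, []).append(int(d))

# for the first few sample lines, tree looks like {'172': {'19': {'13': [39], '12': [40, 73, 21], '1': [83]}}}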