Memory Monitoring in Apache Ignite

I am using Apache Ignite 2.8.0.
I am running my server without persistence.
I have some records in my cache, and the metrics show "totalAllocatedSize":18869600.
Now I have cleared my cache, and it still shows the same "totalAllocatedSize":18869600, even though I don't have any records in the cache.
Why does it show this? Since there are no records in the cache, I would expect it to show 0, but it keeps reporting the value from when the cache still had records.
Why does it behave like this, and how can I get the memory that is actually in use right now?

Like many databases, Apache Ignite does not de-allocate memory it has already allocated. You can see that the space is available for reuse from the decreased fill factor (fillFactor) metric.
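As a minimal sketch (assuming data region metrics are enabled via DataRegionConfiguration.setMetricsEnabled(true)), you can read the current allocation and fill factor through the public API:

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class RegionMetricsExample {
    public static void main(String[] args) {
        // Start (or connect to) a node; in a real setup pass your own IgniteConfiguration.
        try (Ignite ignite = Ignition.start()) {
            for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
                // totalAllocatedSize only grows; pagesFillFactor shows how full the allocated pages actually are.
                System.out.printf("region=%s allocated=%d bytes fillFactor=%.2f%n",
                        m.getName(), m.getTotalAllocatedSize(), m.getPagesFillFactor());
            }
        }
    }
}

After clearing the cache you should see totalAllocatedSize unchanged but the fill factor drop, which is the signal that the allocated memory is free for new data.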

Related

How to prevent storing files I haven't imported/pinned into my node?

I have just installed the IPFS Desktop app on my computer for the first time ever, went to the Files section, and removed both pinned files that were there. I don't even understand why something was pinned by default right after installation.
Then I just watched what would happen. After a few minutes I started to see spikes in network bandwidth, and the number of blocks and the storage size started to increase.
So, the questions are:
If I haven't even imported/pinned any file yet, why did the storage start to fill? I guess it was filling with someone else's files.
How can I prevent that and "seed" only the files/data I manually add to my IPFS node?
I'd like to just "seed" my files in read-only mode, preventing constant writes that wear out my SSD, and avoid unneeded network traffic.
IPFS caches things you access by default.
That cache is cleared during "garbage collection", which happens by default once every hour.
You can change this default behavior:
Reprovider.Strategy "pinned": https://docs.ipfs.io/how-to/configure-node/#reprovider
Routing.Type "dhtclient": https://docs.ipfs.io/how-to/configure-node/#routing
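A minimal sketch of applying those two settings from the command line (assuming a default go-ipfs/Kubo install; restart the daemon afterwards):

ipfs config Reprovider.Strategy pinned
ipfs config Routing.Type dhtclient
# optionally drop blocks that are already cached but not pinned
ipfs repo gc

Pinned and imported files are never removed by garbage collection; only cached blocks are.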

Umbraco goes crazy after losing DB connection

We use Umbraco v.7.2.6 on multiple nodes.
When the database server gets restarted, Umbraco starts pushing an endless stream of SQL queries similar to those shown in the image.
The load on the network channels increases roughly sixfold and runs into the bandwidth limit. Needless to say, our environment starts to operate very slowly.
The only way we have found to solve the problem is to restore a previous backup of the database. This issue happens occasionally and we don't know how to fix it.
What would be the steps to troubleshoot the problem?

Neo4J Memory tuning having little effect

I am currently running some simple Cypher queries (count etc.) on a large dataset (>10G) and am having some issues with tuning Neo4j.
The machine running the queries has 4 TB of RAM and 160 cores, and is running Ubuntu 14.04 with Neo4j 2.3. Originally I left all the settings at their defaults, since it is stated that free memory will be dynamically allocated as required. However, as the queries were taking several minutes to complete, I assumed this was not the case. So I have set various combinations of the following parameters within neo4j-wrapper.conf:
wrapper.java.initmemory=1200000
wrapper.java.maxmemory=1200000
dbms.memory.heap.initial_size=1200000
dbms.memory.heap.max_size=1200000
dbms.jvm.additional=-XX:NewRatio=1
and the following within neo4j.properties:
use_memory_mapped_buffers=true
neostore.nodestore.db.mapped_memory=50G
neostore.relationshipstore.db.mapped_memory=50G
neostore.propertystore.db.mapped_memory=50G
neostore.propertystore.db.strings.mapped_memory=50G
neostore.propertystore.db.arrays.mapped_memory=1G
following every guide and Stack Overflow post I could find on the topic, but I seem to have exhausted the available material with little effect.
I am running queries through the shell using the following command: neo4j-shell -c < "queries/$1.cypher", but have also tried explicitly passing the config files with -config $NEO4J_HOME/conf/neo4j-wrapper.conf (restarting the server every time I make a change).
I imagine that I have missed something silly which is causing the issue, as there are many reports of Neo4j working well with data of this size, but I cannot think what it could be, so any help would be greatly appreciated.
Type :SCHEMA in your Neo4j browser to see whether you have indexes.
Share a couple of your queries.
In the neo4j.properties file you need to set dbms.pagecache.memory to about 1.5x the size of your database files; in your example, you can set it to 15g.
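As a rough sketch of what that could look like on Neo4j 2.3 (the values are illustrative; heap sizes in neo4j-wrapper.conf are given in MB):

# conf/neo4j.properties -- on 2.2+ the page cache replaces the old neostore.*.mapped_memory settings
dbms.pagecache.memory=15g

# conf/neo4j-wrapper.conf -- JVM heap, values in MB (here 16 GB)
wrapper.java.initmemory=16000
wrapper.java.maxmemory=16000

Note that wrapper.java.initmemory/maxmemory are interpreted as MB, so 1200000 requests roughly 1.2 TB of heap; it is the page cache, not the heap, that maps the store files.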

How to debug Neo4J stalls?

I have a Neo4j server running that periodically stalls for tens of seconds. The web frontend says it is "disconnected", with the red warning bar at the top, and a normally instant REST query in my application appears to hang until the stall ends; then everything returns to normal, the web frontend becomes usable, and my REST query completes fine.
Is there any way to debug what is happening during one of these stall periods? Can you get a list of currently running queries? Or a list of what hosts are connected to the server? Or any kind of indication of server load?
Most likely this is JVM garbage collection kicking in because you haven't allocated enough heap space.
There are a number of ways to debug this. You can, for example, enable GC logging (uncomment the appropriate lines in neo4j-wrapper.conf) or use a profiler (e.g. YourKit) to see what is going on and why the pauses occur.
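As a sketch, on Neo4j 2.x GC logging is enabled with standard JVM flags in conf/neo4j-wrapper.conf, along these lines (the log path is illustrative):

# conf/neo4j-wrapper.conf -- uncomment/add JVM flags to log GC activity
wrapper.java.additional=-Xloggc:data/log/neo4j-gc.log
wrapper.java.additional=-XX:+PrintGCDetails
wrapper.java.additional=-XX:+PrintGCDateStamps
wrapper.java.additional=-XX:+PrintGCApplicationStoppedTime

If the stalls line up with long "Total time for which application threads were stopped" entries in that log, increasing the heap (wrapper.java.maxmemory) is the usual fix.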

rails - tire - elasticsearch : need to restart elasticsearch server

I use Rails + Tire + Elasticsearch. Everything mostly works very well, but from time to time my server becomes very slow, so I have to restart the Elasticsearch service and then everything is fine again.
I have the impression that it happens after bulk inserts (around 6,000 products). Could that be related? The inserts take 2 minutes at most, but the server still has problems afterwards.
EDIT:
It finally turned out not to be linked to the bulk inserts.
I only have this line in the log:
[2013-06-29 01:15:32,767][WARN ][monitor.jvm ] [Jon Spectre] [gc][ParNew][26438][9941] duration [3.4s], collections [1]/[5.2s], total [3.4s]/[57.7s], memory [951.6mb]->[713.7mb]/[989.8mb], all_pools {[Code Cache] [10.6mb]->[10.6mb]/[48mb]}{[Par Eden Space] [241.1mb]->[31mb]/[273mb]}{[Par Survivor Space] [32.2mb]->[0b]/[34.1mb]}{[CMS Old Gen] [678.3mb]->[682.6mb]/[682.6mb]}{[CMS Perm Gen] [35mb]->[35mb]/[166mb]}
Does someone understand this?
This is just a stab in the dark, but from what you report there might be a bad memory setting for your Java virtual machine.
Elasticsearch is built with Java and so runs on a JVM. Each JVM process is given a fixed amount of memory when it starts. When the available memory runs low, the JVM has to do garbage collection to free up space (and if it still cannot free enough, the process crashes). A Java process running close to its memory limit is occupied with frequent GC runs and gets very slow.
You can have a look at the Java JMX management console (e.g. jconsole) to see what the process is doing and how much memory it has.
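The GC line above points the same way: the CMS Old Gen pool is completely full (682.6mb of 682.6mb) and the whole heap is capped just under 1 GB, so the node spends multi-second pauses collecting garbage. A minimal sketch of giving it more heap, assuming the init scripts of that era, which read the ES_HEAP_SIZE environment variable (the 2g value is illustrative):

# e.g. in /etc/default/elasticsearch (Debian/Ubuntu package) or exported before starting the node
export ES_HEAP_SIZE=2g
sudo service elasticsearch restart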
