Elasticsearch does not release RAM memory

I use ES to index notifications. The notifications change daily, and there are around 120,000 of them per day.
Because of the frequent changes (delete old, add new), I've noticed that memory usage grows by about 1 GB per day. When I restart the ES server, memory drops back to about 1 GB, but restarting the server once a day is not a solution.
It seems to me that the deleted documents remain in RAM.
I have one server with 95 GB of RAM.
ES version : {
"number" : "0.90.1",
"snapshot_build" : false,
"lucene_version" : "4.3"
},
ES settings are default.
Is there a way to remove deleted documents from RAM?
Thank you in advance.
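Deleted documents stay in the Lucene segments (and on the heap) until those segments are merged away. On ES 0.90 you can request a merge that only expunges deletes via the `_optimize` endpoint. A minimal sketch, assuming the cluster is reachable at `localhost:9200` (host and index name are illustrative):

```python
# Sketch: trigger a merge that expunges deleted docs on ES 0.90.
# Assumes the cluster runs at localhost:9200 -- adjust as needed.
import urllib.request

def expunge_deletes_url(base="http://localhost:9200", index=""):
    # only_expunge_deletes merges only the segments that contain deletes,
    # which is cheaper than a full optimize
    path = ("/" + index if index else "") + "/_optimize?only_expunge_deletes=true"
    return base + path

def expunge_deletes(base="http://localhost:9200", index=""):
    # passing a body makes urllib send a POST request
    req = urllib.request.Request(expunge_deletes_url(base, index), data=b"")
    return urllib.request.urlopen(req).read()
```

Note that even after the merge, the JVM will not necessarily hand heap back to the OS, so watch heap usage inside the JVM rather than the process's resident size.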

Related

Redis high RSS memory usage

When running the Redis INFO command, I get the following:
used_memory_rss_human:2.69G
I understand this is memory that Redis freed but that was not released back to the OS.
How can I release this memory back to the OS?
According to Redis Docs:
Redis will not always free up (return) memory to the OS when keys are
removed. This is not something special about Redis, but it is how most
malloc() implementations work. For example, if you fill an instance
with 5GB worth of data, and then remove the equivalent of 2GB of data,
the Resident Set Size (also known as the RSS, which is the number of
memory pages consumed by the process) will probably still be around
5GB, even if Redis will claim that the user memory is around 3GB. This
happens because the underlying allocator can't easily release the
memory. For example often most of the removed keys were allocated in
the same pages as the other keys that still exist. The previous point
means that you need to provision memory based on your peak memory
usage. If your workload from time to time requires 10GB, even if most
of the times 5GB could do, you need to provision for 10GB.
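The gap the docs describe can be quantified: INFO reports both used_memory and used_memory_rss, and their ratio (which Redis itself exposes as mem_fragmentation_ratio) shows how far RSS exceeds live data. A minimal sketch of the computation; the field names come from INFO, while the sample values are illustrative:

```python
# Compute the RSS-to-used-memory ratio from Redis INFO output.
# A ratio well above 1.0 means the allocator is holding on to freed pages.
def fragmentation_ratio(info_text):
    fields = {}
    for line in info_text.splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            fields[key] = value.strip()
    return int(fields["used_memory_rss"]) / int(fields["used_memory"])

# Illustrative sample matching the 2.69G RSS from the question
sample = "# Memory\nused_memory:1000000000\nused_memory_rss:2690000000\n"
```

On Redis 4.0+ built with jemalloc, the MEMORY PURGE command can return some of this to the OS; on older versions, a restart is the only way to reset RSS.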

IIS worker process continually increase memory usage

We are running a web site on a machine with 16 GB RAM, IIS 8.5 and .Net 4.5.2.
After a recycle, our web site's w3wp process memory increases to 9 GB within a day.
(This is a machine-wide view, but only our web site's worker process memory usage increases.)
When I take a heap snapshot of the worker process with PerfView, I see that most of the memory is held by MemoryCache.
When I go into the MemoryCache detail, I see 3 named caches.
One of them is the default one, and we do not put anything in it.
Another, called timeless, holds a small amount of data which lives until the next recycle.
The biggest one (named _cache) holds HTML output; it grows to 70K-80K items, with cache times of around 10 to 30 minutes.
When I right-click the _cache item and open Memory > View Object, I see a list of cache entries.
Taking that list of cache items and their sizes and summing them, I find the total item size is lower than 100 MB.
Is there any way I can see what is increasing continually?

Unicorn+rails3.1+nginx fill the memory

I've read that Ruby doesn't return memory to the OS, which is fine. But how does it fill 512 MB of RAM? It's a fairly simple Rails application with only one worker (I had to cut the others to reduce demand), but within a few days of restarting the process it fills all the memory.
Thanks.

linux virtual memory parameters

Can anyone explain how the dirty_bytes and dirty_background_bytes Linux VM tunable parameters work?
I infer that dirty_bytes specifies the amount of memory after which an application doing a write starts writing directly to disk. Is that correct? Or, once the allocated amount of memory is used up, is that portion first transferred to disk before new data is stored in memory again? For example, suppose I want to transfer a 1 GB file to disk and I set dirty_bytes to 100 MB. Once 100 MB have been written to memory, does the application start writing the data directly to disk, or is the 100 MB transferred to disk first, after which another 100 MB is written to memory and transferred to disk, and so on?
As for dirty_background_bytes, my understanding is that when the amount of dirty memory exceeds this value, pdflush writes the dirty data back to disk in the background.
Is my understanding of these 2 parameters correct?
No, exceeding dirty_bytes (or dirty_ratio) does not cause processes to start writing directly to disk.
Instead, when a process dirties a page in excess of the limit, that process is used to perform synchronous writeout of some dirty pages - exactly which ones is still decided by the usual heuristics. They may not necessarily even be pages that were originally dirtied by that particular process.
Effectively, the process sees its write (which may just be a memory write) suspended until some writeout has occurred.
You are correct about dirty_background_*. When the background limit is exceeded, asynchronous writeout is started, but the userspace process is allowed to continue.
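The two thresholds can be summarized with a toy model (a deliberate simplification; the kernel's real heuristics also scale with available memory and I/O rate, and choose which pages to write):

```python
PAGE = 4096  # bytes per page on most Linux systems

def writeback_state(dirty_pages, dirty_bytes, dirty_background_bytes):
    """Toy model of the two vm.dirty_* thresholds."""
    dirty = dirty_pages * PAGE
    if dirty > dirty_bytes:
        # the dirtying process itself is made to perform synchronous writeout
        return "throttled"
    if dirty > dirty_background_bytes:
        # asynchronous background writeout starts; the process keeps running
        return "background writeback"
    return "cached"
```

With dirty_bytes = 100 MB and dirty_background_bytes = 50 MB, a process streaming a 1 GB file would first dirty the cache freely, then trigger background writeback past 50 MB, and only get throttled once more than 100 MB is dirty, rather than ever switching to pure direct-to-disk writes.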

"Mem Usage" higher than "VM Size" in WinXP Task Manager

In my Windows XP Task Manager, some processes display a higher value in the Mem Usage column than in VM Size. My Firefox instance, for example, shows 111544 K as Mem Usage and 100576 K as VM Size.
According to the help file of Task Manager Mem Usage is the working set of the process and VMSize is the committed memory in the Virtual address space.
My question is, if the number of committed pages for a process is A and the number of pages in physical memory for the same process is B, shouldn't it always be B ≤ A? Isn't the number of pages in physical memory per process a subset of the committed pages?
Or is this something to do with sharing of memory among processes? Please explain. (Perhaps my definition of 'Working Set' is off the mark).
Thanks.
Virtual Memory
Assume that your program (e.g. Oracle) allocates 100 MB of memory on startup - your VM size goes up by 100 MB even though no additional physical or disk pages are touched. i.e. VM is nothing but memory bookkeeping.
The total available physical memory + paging file memory is the maximum memory that ALL the processes in the system can allocate. The system does this bookkeeping so that, if the processes actually start consuming all the memory they allocated, the OS can supply the actual physical pages required.
Private Memory
If the program copies 10 MB of data into that 100 MB, the OS notices that no pages have been allocated to the process for those addresses and assigns 10 MB worth of physical pages to your process's private memory. (This is called a page fault.)
Working Set
Definition: the working set is the set of memory pages that have been recently touched by a program.
At this point those 10 pages are added to the working set of the process. If the process then copies this data into another 10 MB cache it previously allocated, everything else remains the same, but the working set goes up by another 10 MB if those pages were not already in the working set. If they were already in the working set, the program's working set remains the same.
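The set semantics described above can be sketched directly (a toy model, with one page standing in for 1 MB):

```python
def touch(working_set, pages):
    # touching pages adds only those not already present; re-touching is a no-op
    return working_set | set(pages)

ws = touch(frozenset(), range(10))   # copy 10 MB: 10 pages enter the working set
ws = touch(ws, range(10, 20))        # copy into another 10 MB cache: grows by 10
ws_again = touch(ws, range(10))      # re-touch the first pages: unchanged
```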
Working Set behaviour
Imagine your process never touches the first 10 pages again. In that case these pages are trimmed from your process's working set and possibly sent to the page file, so that the OS can bring in other pages that are used more frequently. However, if there is no pressing low-memory requirement, this paging need not be done, and the OS can act as if it is rich in memory. In that case the working set simply lets these pages remain.
When is Working Set > Virtual Memory
Now imagine the same program de-allocates all 100 MB of memory. The program's VM size is immediately reduced by 100 MB (remember, VM = bookkeeping of all memory allocation requests).
The working set need not be affected by this, since that doesn't change the fact that those 10 MB worth of pages were recently touched. Those pages therefore remain in the working set of the process, though the OS can reclaim them whenever it needs to.
This effectively makes VM < working set. However, this will correct itself once you start another process that consumes more memory and the working-set pages are reclaimed by the OS.
XP's Task Manager is simply wrong. EDIT: If you don't believe me (and someone doesn't, because they voted this down), read Firefox 3 Memory Usage. I quote:
If you’re looking at Memory Usage
under Windows XP, your numbers aren’t
going to be so great. The reason:
Microsoft changed the meaning of
“private bytes” between XP and Vista
(for the better).
Sounds like MS got confused. You only change something like that if it's broken.
Try Process Explorer instead. What Task Manager labels "VM Size", Process Explorer (more correctly) labels "Private Bytes". And in Process Explorer, Working Set (and Private Bytes) are always less than or equal to Virtual Size, as you would expect.
File mapping
A very common way for Mem Usage to be higher than VM Size is the use of file mapping objects (hence it can be related to shared memory, as file mapping is used to share memory). With file mapping you can have memory which is committed (either in the page file or in physical memory, you do not know which) but has no virtual address assigned to it. The committed memory appears in Mem Usage, while used virtual address space is tracked by VM Size.
See also:
What does “VM Size” mean in the Windows Task Manager? on Stackoverflow
Breaking the 32 bit Barrier in my developer blog
Usenet discussion Still confused why working set larger than virtual memory
Memory usage is the amount of electronic memory currently allocated to the process.
VM Size is the amount of virtual memory currently allocated to the process.
so ...
A page that exists only electronically will increase only Memory Usage.
A page that exists only on disk will increase only VM Size.
A page that exists both in memory and on disk will increase both.
Some examples to illustrate:
Currently on my machine, iexplore has 16,000K Memory Usage and 194,916K VM Size. This means that most of the memory used by Internet Explorer is idle and has been swapped out to disk, and only a fraction is being kept in main memory.
Contrast that with mcshield.exe, which has 98,984K Memory Usage and 98,168K VM Size. My conclusion here is that McAfee AntiVirus is active, with a lot of memory in use. Since it's been running for quite some time (all day, since booting), I expect that most of the 98,168K VM Size is copies of the electronic memory - though there's nothing in Task Manager to confirm this.
You might find some explanation in The Memory Shell Game
Working Set (A) – This is a set of virtual memory pages (that are committed) for a process and are located in physical RAM. These pages fully belong to the process. A working set is like a "currently/recently working on these pages" list.
Virtual Memory – This is the memory that an operating system can address. Regardless of the amount of physical RAM or hard drive space, this number is limited by your processor architecture.
Committed Memory – When an application touches a virtual memory page (reads/writes/programmatically commits), the page becomes a committed page. It is now backed by a physical memory page. This will usually be a physical RAM page, but could eventually be a page in the page file on the hard disk, or a page in a memory-mapped file on the hard disk. The memory manager handles the translations from the virtual memory page to the physical page. A virtual page could be located in physical RAM, while the page next to it could be on the hard drive in the page file.
BUT: PF (Page File) Usage - This is the total number of committed pages on the system. It does not tell you how many are actually written to the page file. It only tells you how much of the page file would be used if all committed pages had to be written out to the page file at the same time.
Hence B > A...
If we agree that B represents "Mem Usage", or also PF Usage, the problem comes from the fact that it actually represents potential page usage: in XP, this potential page-file space can be used as a place to assign virtual memory pages that programs have asked for but never brought into use...
Memory fragmentation is probably the reason:
If the process allocates 1 octet, it counts for 1 octet in the VM Size, but this 1 octet requires a physical page (4 KB on Windows).
If, after allocating/freeing memory, the process has a second octet that is separated by more than 4 KB from the first one, this second octet will always be stored on a separate physical page from the first.
So the VM Size count is 2 octets, but the Memory Usage is 2 pages == 8 KB.
So the fact that Mem Usage is greater than VM Size shows that the process does a lot of allocation and deallocation and fragments the memory.
This could be because the process is started a long time ago.
Or else there is room for optimization ;-)
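The arithmetic above can be sketched as a page-rounding calculation (a toy model; the addresses and the 4 KB page size are illustrative assumptions):

```python
PAGE = 4 * 1024  # 4 KB pages, as on Windows x86

def physical_pages(addresses):
    # each allocation touches the page containing its address;
    # allocations more than a page apart land on distinct pages
    return {addr // PAGE for addr in addresses}

vm_size = 2                                        # two 1-octet allocations
mem_usage = len(physical_pages([0, 5000])) * PAGE  # two pages' worth of RAM
```

Here VM Size accounts for 2 octets, while Memory Usage accounts for two whole 4 KB pages, reproducing the 2-octets-versus-8 KB gap described above.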