Is there a way to estimate approximately how much heap (or other) memory a KTable/KStream will use over time in a Java/Scala application?
I have some specific assumptions and I'd like to know whether they are correct:
Kafka Streams uses internal topics and RocksDB only.
RocksDB is an embeddable DB, so it uses the heap memory of my application.
A KStream constantly deletes records from RocksDB once they can no longer be used by any processor in the topology (e.g., for a join with a specified JoinWindow) (== not much memory used)
A KTable is fully stored in RocksDB (== in memory)
When a KTable receives a null-key record, it deletes the record from RocksDB (== memory freed up)
It's hard to estimate. For general sizing, consider this guide: https://docs.confluent.io/current/streams/sizing.html
Kafka Streams uses internal topics and RocksDB only.
Yes. You can also replace RocksDB with in-memory stores (that are part of Kafka Streams) or implement your own custom stores.
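For illustration, here is a hedged sketch of what that swap can look like using the built-in in-memory store; the topic name "user-profiles" and store name "user-profiles-store" are made up:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.Stores;

    public class InMemoryTableExample {
        public static Topology build() {
            StreamsBuilder builder = new StreamsBuilder();
            // Materialize the KTable in an on-heap in-memory store instead of RocksDB.
            builder.table(
                "user-profiles",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as(
                        Stores.inMemoryKeyValueStore("user-profiles-store"))
                    .withKeySerde(Serdes.String())
                    .withValueSerde(Serdes.String()));
            return builder.build();
        }
    }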
RocksDB is an embeddable DB, so it uses the heap memory of my application.
RocksDB uses off-heap memory and also spills to disk.
A KStream constantly deletes records from RocksDB once they can no longer be used by any processor in the topology (e.g., for a join with a specified JoinWindow) (== not much memory used)
It depends on the store type. For key-value stores (i.e., "regular" KTables), data is not deleted (with the exception of explicit delete messages, so-called tombstones). For time-windowed/session-windowed KTables (the result of windowed aggregations) and for joins, there is a retention period after which data is deleted.
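As an illustration of the windowed case, here is a hedged sketch of a windowed count whose store keeps data only for a configured retention period; the topic name, store name, and durations are made up:

    import java.time.Duration;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.kstream.TimeWindows;
    import org.apache.kafka.streams.kstream.Windowed;
    import org.apache.kafka.streams.state.WindowStore;

    public class WindowedRetentionExample {
        public static void build(StreamsBuilder builder) {
            // Counts per 5-minute window; window segments older than the
            // 1-hour retention are dropped from the local window store.
            KTable<Windowed<String>, Long> counts = builder
                .stream("clicks", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
                .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("click-counts")
                    .withRetention(Duration.ofHours(1)));
        }
    }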
A KTable is fully stored in RocksDB (== in memory)
RocksDB also spills to disk. It's not in-memory only.
When a KTable receives a null-key record, it deletes the record from RocksDB (== memory freed up)
A null-key record does not trigger a delete. I assume you mean a null-value record, a so-called tombstone; those are treated as deletes.
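For completeness, a hedged sketch of producing such a tombstone; the broker address and topic name are made up:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TombstoneExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Non-null key + null value = tombstone: a KTable sourced from
                // "user-profiles" deletes the entry for "user-42" from its store.
                producer.send(new ProducerRecord<>("user-profiles", "user-42", null));
            }
        }
    }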
In my understanding, the CPU reorders the operations written in the machine code for optimization purposes, and this is called out-of-order execution.
The term "memory order" refers to the order in which memory accesses take place. For example, a relaxed order defines very weak ordering rules, under which reordering can easily happen.
There are memory ordering models such as TSO on x86. Such a memory ordering model defines the semantics of the memory access order provided by the processor.
What I don't understand is the relationship between them.
Is memory ordering a kind of out-of-order execution, and are there other mechanisms besides OoOE?
Or is the memory ordering model the specification that out-of-order execution implements, so that all reordering done by processors stays within those semantics?
The general issue is that on a modern multiprocessor system, load and store instructions may become visible to other cores in a different order than program order. Out-of-order execution is one way in which this can happen, but there are others.
For instance, you could have a CPU which executes and retires all instructions in strict program order, but when it does a store instruction, instead of committing it to L1 cache immediately, it puts it in a store buffer to be written to cache later. The store buffer could be designed to write out stores in a different order than they came in; for instance, if a first store misses L1 cache but a second one would hit, you could save time by writing out the second one while waiting for the first one's cache line to load.
Or, even if the store buffer doesn't reorder, you could have a situation where, while a store is still waiting in the store buffer, the CPU executes a load instruction that came later in program order. Other cores will thus see the load happening before the store. This is the situation with x86, for instance.
The memory ordering model defines, in an abstract way, what the programmer is entitled to expect about the order in which loads and stores become visible to other cores (or hardware, etc). It also usually specifies how the programmer can gain stronger guarantees when needed (e.g. by executing barrier instructions). The CPU then has to be designed to provide the defined behavior, which may place constraints on the features it can include. For instance, if the architecture promises TSO, the CPU probably can't include a store buffer that's capable of reordering, unless they manage to do it in such a clever way that the reordering can never be noticed by other cores.
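For example, here is a hedged Java sketch of the classic store-load litmus test. With plain (non-volatile) fields, the compiler and the store buffer are both allowed to let each thread's load appear to happen before its earlier store, so the program may eventually report that both loads observed 0; whether and how quickly that happens depends on the JIT and the hardware:

    public class StoreLoadReordering {
        static int x, y;        // shared, intentionally not volatile
        static int r1, r2;

        public static void main(String[] args) throws InterruptedException {
            for (long i = 0; ; i++) {
                x = 0; y = 0;
                Thread t1 = new Thread(() -> { x = 1; r1 = y; });
                Thread t2 = new Thread(() -> { y = 1; r2 = x; });
                t1.start(); t2.start();
                t1.join();  t2.join();
                if (r1 == 0 && r2 == 0) {   // both loads "passed" the other thread's store
                    System.out.println("reordering observed on iteration " + i);
                    break;
                }
            }
        }
    }

Declaring x and y volatile (or inserting the appropriate barriers) forbids that outcome; that is exactly the kind of guarantee the memory ordering model lets the programmer request.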
Related questions:
Are memory barriers needed because of cpu out of order execution or because of cache consistency problem?
Out of Order Execution and Memory Fences
How does memory reordering help processors and compilers?
How do modern Intel x86 CPUs implement the total order over stores
Multiprocessor systems perform "real" memory operations (those that influence definitive executions, not just speculative execution) out of order and asynchronously, because waiting for global synchronization of global state would needlessly stall all executions nearly all the time. On the other hand, immediately outside each individual core, it seems that the memory system, starting with the L1 cache, is purely synchronous, consistent, and flat from the allowed-behavior point of view (allowed semantics); obviously timing depends on the cache size and behavior.
So on a CPU, at one extreme there are named "registers", which are private by definition, and at the other extreme there is memory, which is shared; it seems a shame that outside the minuscule space of registers, which have a peculiar naming or addressing mode, the memory is always global, shared and globally synchronous, and effectively entirely subject to all fences, even if it's memory used as unnamed registers, for the purpose of storing more data than would fit in the few registers, without any possibility of being examined by other threads (except by debugging with ptrace, which obviously stalls, halts, serializes, and stores the complete observable state of an execution).
Is that always the case on modern computers (modern = those that can reasonably support C++ and Java)?
Why doesn't the dedicated L1 cache provide register-like semantics for those memory units that are only used by a particular core? The cache must track which memory is shared, no matter what. Memory operations on such local data don't have to be stalled when strict global ordering of memory operations is needed, as no other core is observing them, and the cache has the power to stall such external accesses if needed. The cache would just have to know which memory units are private (not globally readable) until a stall of out-of-order operations, which makes them consistent (the cache would probably need a way to ask the core to serialize operations and publish a consistent state in memory).
Do all CPUs stall and synchronize all memory accesses on a fence or synchronizing operation?
Can the memory be used as an almost infinite register resource not subject to fencing?
In practice, a single core operating on memory that no other threads are accessing doesn't slow down much in order to maintain global memory semantics, vs. how a uniprocessor system could be designed.
But on a big multi-socket system, especially x86, cache coherency (snooping the other socket) is part of what makes memory latency worse for cache misses than on a single-socket system (for accesses that miss in the private caches).
Yes, all multi-core systems that you can run a single multi-threaded program on have coherent shared memory between all cores, using some variant of the MESI cache-coherency protocol. (Any exceptions to this rule are considered exotic and have to be programmed specially.)
Huge systems with multiple separate coherency domains that require explicit flushing are more like a tightly-coupled cluster for efficient message passing, not an SMP system. (Normal NUMA multi-socket systems are cache-coherent: Is mov + mfence safe on NUMA? goes into detail for x86 specifically.)
While a core has a cache line in MESI Modified or Exclusive state, it can modify it without notifying other cores about changes. M and E states in one cache mean that no other caches in the system have any valid copy of the line. But loads and stores still have to respect the memory model, e.g. an x86 core still has to commit stores to L1d cache in program order.
L1d and L2 are part of a modern CPU core, but you're right that L1d is not actually modified speculatively. It can be read speculatively.
Most of what you're asking about is handled by a store buffer with store forwarding, allowing store/reload to execute without waiting for the store to become globally visible.
See: "What is a store buffer?" and "Size of store buffers on Intel hardware? What exactly is a store buffer?"
A store buffer is essential for decoupling speculative out-of-order execution (writing data+address into the store buffer) from in-order commit to globally-visible L1d cache.
It's very important even for an in-order core, otherwise cache-miss stores would stall execution. And generally you want a store buffer to coalesce consecutive narrow stores into a single wider cache write, especially for weakly-ordered uarches that can do so aggressively; many non-x86 microarchitectures only have fully efficient commit to cache for aligned 4-byte or wider chunks.
On a strongly-ordered memory model, speculative out-of-order loads and checking later to see if any other core invalidated the line before we're "allowed" to read it is also essential for high performance, allowing hit-under-miss for out-of-order exec to continue instead of one cache miss load stalling all other loads.
There are some limitations to this model:
limited store-buffer size means we don't have much private store/reload space
a strongly-ordered memory model stops private stores from committing to L1d out of order, so a store to a shared variable that has to wait for the line from another core could result in the store buffer filling up with private stores.
memory barrier instructions like x86 mfence or lock add, or ARM dsb ish, have to drain the store buffer, so stores to (and reloads from) thread-private memory that isn't shared in practice still have to wait for the stores you care about to become globally visible.
conversely, waiting for a shared store you care about to become visible (with a barrier or a release-store) also has to wait for private memory operations, even if they're independent.
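As a rough Java-level illustration of those last two points (VarHandle.fullFence(), available since Java 9, is approximately the analogue of mfence / lock add / dsb ish; the field names here are made up):

    import java.lang.invoke.VarHandle;

    public class FenceDrainsEverything {
        static long[] scratch = new long[64];   // touched only by this thread ("unnamed registers")
        static boolean ready;                   // the flag another thread would actually watch

        static void publish() {
            for (int i = 0; i < scratch.length; i++) {
                scratch[i] = i;                 // private stores queue up in the store buffer
            }
            // The fence cannot tell the private stores apart from the one store we
            // care about: all of them have to drain before later memory operations
            // are allowed to proceed past the fence.
            VarHandle.fullFence();
            ready = true;
        }

        public static void main(String[] args) {
            publish();
            System.out.println(ready);          // a real reader would also need acquire ordering
        }
    }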
the memory is always global, shared and globally synchronous, and effectively entirely subject to all fences, even if it's memory used as unnamed registers,
I'm not sure what you mean here. If a thread is accessing private data (i.e., not shared with any other thread), then there is almost no need for memory fence instructions (1). Fences are used to control the order in which memory accesses from one core are seen by other cores.
Why doesn't the dedicated L1 cache provide register-like semantics for those memory units that are only used by a particular execution unit?
I think (if I understand you correctly) what you're describing is called a scratchpad memory (SPM), which is a hardware memory structure that is mapped to the architectural physical address space or has its own physical address space. The software can directly access any location in an SPM, similar to main memory. However, unlike main memory, SPM has a higher bandwidth and/or lower latency than main memory, but is typically much smaller in size.
SPM is much simpler than a cache because it doesn't need tags, MSHRs, a replacement policy, or hardware prefetchers. In addition, the coherence of SPM works like main memory, i.e., it comes into play only when there are multiple processors.
SPM has been used in many commercial hardware accelerators such as GPUs, DSPs, and manycore processors. One example I am familiar with is the MCDRAM of the Knights Landing (KNL) manycore processor, which can be configured to work as near memory (i.e., an SPM), as a last-level cache for main memory, or as a hybrid. The portion of the MCDRAM that is configured to work as SPM is mapped to the same physical address space as DRAM, and the L2 cache (which is private to each tile) becomes the last-level cache for that portion of MCDRAM. If there is a portion of MCDRAM configured as a cache for DRAM, it is the last-level cache for DRAM only, not for the SPM portion. MCDRAM has a much higher bandwidth than DRAM, but the latency is about the same.
In general, an SPM can be placed anywhere in the memory hierarchy. For example, it could be placed at the same level as the L1 cache. SPM improves performance and reduces energy consumption when there is little or no need to move data between the SPM and DRAM.
SPM is very suitable for systems with real-time requirements because it provides guarantees on maximum latency and/or minimum bandwidth, which is necessary to determine with certainty whether real-time constraints can be met.
SPM is not very suitable for general-purpose desktop or server systems, where there can be multiple applications running concurrently. Such systems don't have real-time requirements and, currently, the average bandwidth demand doesn't justify the cost of including something like MCDRAM. Moreover, using an SPM at the L1 or L2 level imposes size constraints on the SPM and the caches, and makes it difficult for the OS and applications to exploit such a memory hierarchy.
Intel Optane DC memory can be mapped to the physical address space, but it sits at the same level as main memory, so it's not considered an SPM.
Footnotes:
(1) Memory fences may still be needed in single-thread (or uniprocessor) scenarios. For example, if you want to measure the execution time of a specific region of code on an out-of-order processor, it may be necessary to wrap the region between two suitable fence instructions. Fences are also required when communicating with an I/O device through write-combining memory-mapped I/O pages to ensure that all earlier stores have reached the device.
I am running Ignite 2.3 in embedded mode, which means my default mode of data storage is off-heap. I have not enabled the on-heap cache.
The problem is that when I run a large query, the data is stored ON the heap for a long time before it is finally garbage collected. Is this expected? Why does it take so long for the JVM to garbage collect this data?
I am concerned that the heap space occupied by my query's data will affect my application's performance.
My CacheConfiguration is as follows:
data region max size: 12288000000L
CacheMode.LOCAL
indexedType
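For reference, that setup corresponds roughly to the following hedged sketch in code; the cache name and indexed types are placeholders, not from the original post:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class EmbeddedIgniteSketch {
        public static void main(String[] args) {
            DataStorageConfiguration storageCfg = new DataStorageConfiguration();
            storageCfg.setDefaultDataRegionConfiguration(
                new DataRegionConfiguration()
                    .setName("default")
                    .setMaxSize(12288000000L));               // ~12 GB off-heap data region

            CacheConfiguration<Long, byte[]> cacheCfg =
                new CacheConfiguration<Long, byte[]>("myCache")   // placeholder cache name
                    .setCacheMode(CacheMode.LOCAL)
                    .setIndexedTypes(Long.class, byte[].class)    // placeholder indexed types
                    .setOnheapCacheEnabled(false);                // default: data stays off-heap

            IgniteConfiguration cfg = new IgniteConfiguration()
                .setDataStorageConfiguration(storageCfg)
                .setCacheConfiguration(cacheCfg);

            try (Ignite ignite = Ignition.start(cfg)) {
                ignite.getOrCreateCache(cacheCfg);
                // large SQL queries run here; result sets live on the Java heap while consumed
            }
        }
    }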
Ignite always stores data off-heap; on-heap caching is a feature that gives you the ability to use the Java heap as a cache for off-heap memory and to configure eviction policies specific to this cache.
So the data that you observe on the Java heap with onHeapCacheEnabled=false is not Ignite cache data in your case, but the Java application's memory footprint, which is absolutely expected to exist. If you are experiencing application performance issues connected with GC, check the Preparing for Production section of the Ignite documentation, specifically Garbage Collection Tuning, for tuning tips, or ask any specific questions here.
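If you do want a Java-heap layer on top of the off-heap data, here is a hedged sketch of how it can be enabled together with an eviction policy; the cache name and size are illustrative:

    import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class OnHeapLayerSketch {
        static CacheConfiguration<Long, byte[]> withOnHeapLayer() {
            return new CacheConfiguration<Long, byte[]>("myCache")
                .setOnheapCacheEnabled(true)                                     // opt-in on-heap layer
                .setEvictionPolicy(new LruEvictionPolicy<Long, byte[]>(100_000)); // cap on-heap entries
        }
    }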
A Partitioned Global Address Space divides memory into chunks of local memory to make access faster. My question: what are the reasons (perhaps at the hardware level) that access to local memory is faster? If I understand correctly, local memory still lives at an address within the original shared address space...
In computer science, a partitioned global address space (PGAS) is a parallel programming model. It assumes a global memory address space that is logically partitioned and a portion of it is local to each process, thread, or processing element.
If you refer to that, then it is just a model. The actual performance is dependent on the implementation.
Local Implementation
If the implementation is "local", i.e., threads or processes inside the same machine (node), then there might be performance implication due to several reasons.
First, the need to use synchronization will reduce the performance. Locks have a terrible implication on performance.
Second, sharing the CPU's cache lines between cores, when at least one of the cores is writing to the cache line, will have to invalidate the cache line of all the other cores, which will induce an high performance penalty. Separating the working memory of each core, will prevent cache invalidation. Such method should be used if the application/algorithm enables it.
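A hedged Java sketch of that second point: two threads hammering counters that share a 64-byte cache line versus counters placed far enough apart to land on different lines (the iteration counts and the 64-slot spacing are illustrative assumptions):

    import java.util.concurrent.atomic.AtomicLongArray;

    public class FalseSharingSketch {
        static final AtomicLongArray counters = new AtomicLongArray(128);

        static long run(int a, int b) throws InterruptedException {
            Thread t1 = new Thread(() -> { for (int i = 0; i < 50_000_000; i++) counters.getAndIncrement(a); });
            Thread t2 = new Thread(() -> { for (int i = 0; i < 50_000_000; i++) counters.getAndIncrement(b); });
            long start = System.nanoTime();
            t1.start(); t2.start();
            t1.join(); t2.join();
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws InterruptedException {
            // Adjacent slots typically share a cache line, so each write invalidates
            // the other core's copy; slots 0 and 64 end up on separate lines.
            System.out.println("adjacent slots:  " + run(0, 1) + " ms");
            System.out.println("separated slots: " + run(0, 64) + " ms");
        }
    }

On typical hardware the separated version finishes noticeably faster; keeping each core's working set on its own lines is the same idea PGAS implementations apply at a larger scale.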
Distributed Implementation
If the implementation is "distributed", i.e., DSM, then the penalty of accessing remote memory is even greater (but have similar sequence to the cache invalidation penalty) as a message have to send across the network.
We are trying to integrate SQLite into our application and populate it as a cache. We are planning to use it as an in-memory database; we are using it for the first time. Our application is C++ based.
Our application interacts with the master database to fetch data and performs numerous operations. These operations are generally concerned with one table, which is quite huge in size.
We replicated this table in SQLite, and the following are our observations:
Number of Fields: 60
Number of Records: 100,000
As the data population starts, the memory of the application shoots up drastically, from 120 MB to ~1.4 GB. At this point our application is idle and not doing any major operations. But normally, once the operations start, the memory utilization shoots up further. With SQLite as an in-memory DB and this high memory usage, we don't think we will be able to support this many records.
When I create the DB on disk instead, the DB file size comes to ~40 MB, but the memory usage of the application still remains very high.
Q. Is there a reason for this high usage? All buffers have been cleared and, as said above, the DB is no longer in memory.
If you are doing a lot of insert/update operations, the database size may increase. You can use the VACUUM command to free up that space by rebuilding and shrinking the SQLite database.
SQLite uses memory for things other than the data itself. It holds not only the data, but also the connections, prepared statements, query cache, query results, etc. You can read more in the SQLite Memory Allocation documentation and tweak it. Also make sure you are properly destroying your objects (sqlite3_finalize(), etc.).
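A hedged sketch of those knobs, using the xerial sqlite-jdbc driver purely for illustration (the asker's application is C++, where the counterparts are sqlite3_finalize(), sqlite3_db_release_memory(), and the same PRAGMA statements); the file name and limits are made up:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class SqliteMemorySketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:cache.db")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE IF NOT EXISTS t(col TEXT)");
                    st.execute("PRAGMA soft_heap_limit=67108864");  // ask SQLite to stay under ~64 MB of heap
                    st.execute("PRAGMA cache_size=-20000");         // page cache of roughly 20 MB (negative = KiB)
                }

                // Prepared statements hold memory until closed
                // (the JDBC counterpart of sqlite3_finalize() in C/C++).
                try (PreparedStatement ps = conn.prepareStatement("INSERT INTO t(col) VALUES (?)")) {
                    ps.setString(1, "value");
                    ps.executeUpdate();
                }

                try (Statement st = conn.createStatement()) {
                    st.execute("PRAGMA shrink_memory");             // hand unused memory back to the allocator
                    st.execute("VACUUM");                           // rebuild the file, reclaiming free pages
                }
            }
        }
    }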