We are a little confused about the disk types our Kafka machines need.
In our production Kafka cluster we have producers, 3 Kafka brokers, and consumers.
Producers push data to topics and consumers read data from topics.
First: how do we avoid the situation where a consumer tries to read data from topic partitions but the data is not actually in the topic?
Second: since we do not use SSD disks in the Kafka brokers, how can we tell whether a consumer is reading data from the memory (page) cache or from the disks?
How do we avoid the situation where a consumer tries to read data from topic partitions but the data is not actually in the topic?
Kafka reads data sequentially; there is no random access. That's why you cannot read arbitrary records directly (you can only specify the offset to start reading from).
Also, because there is no random access, using SSDs has no significant effect on performance.
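As a concrete illustration of offset-based consumption, here is a minimal sketch using the kafka-python client (the broker address and topic name are hypothetical). A consumer can only choose an offset to start from and compare its position against the partition's current end offset; it cannot randomly address individual records:

```python
from kafka import KafkaConsumer, TopicPartition

# Hypothetical broker and topic names.
consumer = KafkaConsumer(bootstrap_servers="broker1:9092",
                         enable_auto_commit=False)
tp = TopicPartition("my-topic", 0)
consumer.assign([tp])

# The log end offset tells you how much data is actually in the partition;
# seeking beyond it just means iteration yields nothing until producers
# catch up.
end = consumer.end_offsets([tp])[tp]
consumer.seek(tp, max(0, end - 100))  # start 100 records before the end

for msg in consumer:  # blocks, waiting for new records as they arrive
    print(msg.offset, msg.value)
```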
From the Cloudera blog (link):
Using SSDs instead of spinning disks has not been shown to provide a significant performance improvement for Kafka, for two main reasons:
Kafka writes to disk are asynchronous. That is, other than at startup/shutdown, no Kafka operation waits for a disk sync to complete; disk syncs are always in the background. That's why replicating to at least three replicas is critical—because a single replica will lose the data that has not been sync'd to disk, if it crashes.
Each Kafka partition is stored as a sequential write-ahead log. Thus, disk reads and writes in Kafka are sequential, with very few random seeks. Sequential reads and writes are heavily optimized by modern operating systems.
An SSD will help when consumers are slower than producers, which is quite possible. When consumers fall behind, reads miss the file-system page cache, random disk access occurs, and a spinning disk gives you the worst-case scenario.
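On the second question (page cache vs. disk), one rough way to check is to watch the physical disk-read counters on a broker host while consumers are fetching: if the counters barely move, the reads are being served from the OS page cache. A minimal sketch using the third-party psutil library, to be run on the broker host:

```python
import time
import psutil  # third-party: pip install psutil

# Snapshot physical disk I/O counters, wait while consumers fetch,
# then compare. Near-zero read growth while consumers are actively
# fetching suggests reads are served from the page cache, not disk.
before = psutil.disk_io_counters()
time.sleep(10)
after = psutil.disk_io_counters()

read_mb = (after.read_bytes - before.read_bytes) / (1024 * 1024)
print(f"Physical disk reads in the last 10s: {read_mb:.1f} MB")
```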
I've been looking for information on how efficient Kubernetes and Docker are in terms of using machine resources, but I haven't found much so far. Here are my three questions, all about Kubernetes + Docker:
If multiple containers on the same node are running the same binary, are the code pages shared between all these instances? That is, is there a single set of physical pages allocated on the node for all these processes? For example, if I'm running a service mesh like Istio, which runs Envoy in every pod, is the system smart enough to only load the Envoy code in memory once, or does all the indirection taking place prevent the Linux kernel from recognizing that sharing is possible?
In a large Kubernetes deployment, there will end up being a considerable number of redundantly downloaded Docker images on each node. Instead, it would seem more effective to have a single in-cluster repository for these images that all nodes can fetch from. I saw this about having Docker use NFS for a common image store. Is this the only answer?
I heard there's a practical limit to the number of pods Kubernetes will schedule on a single node (30). Such a small limit forces you to use smaller VMs in order to be able to fully saturate them. Anybody know why this limit exists and whether it will eventually be raised? I ask this in the context of trying to run Kubernetes on bare metal where VMs aren't used at all. In such a world, I'd want to be able to pack way more than 30 pods on a (large) physical machine.
Thank you for any insights or pointers.
You state your question as if you plan to use Docker as the container runtime for Kubernetes. That is fine, but there are more choices, and depending on the runtime the answers will change.
In general, Kubernetes provides an abstraction over the actual scheduling and running of pods/containers. Perhaps you are investing too much human time into details that can be solved with more metal, which is cheap.
Multiple containers on a single node are usually (docker/containerd/CRI-O) just ordinary system processes, like launching your Apache httpd multiple times yourself. If the kernel can share or deduplicate the pages (executable pages mapped from the same file are shared via the page cache), it will indeed share them; a sketch for checking this follows below.
If you use a container runtime that launches micro-VMs (Firecracker, Kata, ...), I doubt memory deduplication will be possible.
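To check whether sharing is actually happening for a given process, you can inspect the kernel's own accounting. A minimal Linux-only sketch (assumes /proc/&lt;pid&gt;/smaps_rollup, available on kernel 4.14+); a large Shared_Clean figure for each Envoy process would indicate that its mapped code pages are shared:

```python
def shared_kib(pid):
    """Sum Shared_Clean + Shared_Dirty (in KiB) for one process."""
    total = 0
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            if line.startswith(("Shared_Clean:", "Shared_Dirty:")):
                total += int(line.split()[1])  # field is in kB
    return total

# Hypothetical PID of one Envoy process on the node:
print(shared_kib(12345), "KiB shared")
```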
I would not recommend sharing storage for the container images, e.g. with NFS. In some customer setups I had to diagnose issues caused by this, like deadlocks. Basically, you would reduce the robustness of your cluster in order to save disk space. Just use more metal.
The usual limit is 110 pods per node, which is usually plenty. You can change this limit using the --max-pods parameter to the kubelet process, or via the kubelet configuration file (a sketch follows). The reason for the limit is that managing each pod incurs effort on the kubelet and etcd/apiserver side.
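For reference, a hedged sketch of raising the limit via the kubelet configuration file; the file path varies by distribution, and the same effect can be had with the --max-pods flag:

```yaml
# e.g. /var/lib/kubelet/config.yaml (path varies by distribution)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250   # default is 110; raise with care, see text above
```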
I am looking for a better, more optimal solution that can replace AppFabric Cache and improve the performance of my ASP.NET MVC application.
According to Microsoft, Azure Cache (the name of their Redis offering) should be used for all development on Azure instead of AppFabric Cache. I think that's a rather good endorsement for Redis and the only alternative if you want to deploy your application to Azure.
That said, a distributed cache will only help with performance in specific scenarios: when you deploy your application to a multi-machine farm and you need consistency of the cached data. It will actually hurt performance if you have only one machine or if you want to cache read-only lookup data. The network call will always be slower than a memory lookup.
You should also consider, why do you want to replace AppFabric Cache? What doesn't work for you? You may encounter the same problems if you change to another solution.
For example, synchronization problems will always appear if you host AppFabric or Memcached on the web servers themselves. Both the web server and the cache use a lot of CPU (and RAM) during high traffic, which leads to problems: delayed requests, timeouts, or sync problems. Redis avoids these because there is no local caching at all, only a remote in-memory cache cluster.
There are a ton of resources on how to use Redis in .NET. A lot of them refer to Azure Cache but you can use the same code and simply change the connection strings if you want to host Redis yourself.
For example, in Session state with Azure Redis cache the only change required is to change the server's DNS name in the configuration file. The article How to Use Azure Redis Cache uses a third-party Redis client to connect to Azure Redis Cache. Again, you only need to change the host name to connect to an on-premise Redis server.
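The linked articles use .NET clients, but the point that only the connection details change is language-agnostic. A minimal sketch with the redis-py client (host names and key are hypothetical; Azure Redis listens on port 6380 with SSL):

```python
import redis  # third-party: pip install redis

# Self-hosted Redis (hypothetical host):
# r = redis.Redis(host="redis.internal.example", port=6379)

# Azure Redis Cache (hypothetical cache name) -- same calling code,
# different connection details:
r = redis.Redis(host="mycache.redis.cache.windows.net",
                port=6380, password="<access-key>", ssl=True)

r.set("greeting", "hello")
print(r.get("greeting"))  # b'hello'
```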
I've been playing with a Neo4j 2.0 db for a few months and I plan to install the db on a dedicated server. I've already tried several Neo4j configs (JVM, caches, ...) but I'm still not sure I have found the best one.
Therefore, it seems better to ask the experts :)
Context
Db primitives:
Nodes = 224,114,478
Relationships = 417,681,104
Properties = 224,342,951
Db files:
nodestore.db = 3.064 GB
relationshipstore.db = 13.460 GB
propertystore.db = 8.982 GB
propertystore.db.string = 5.998 GB
propertystore.db.arrays = 1 kB
Server OS:
Windows Server 2012 (64-bit)
Db usage:
Mostly graph traversals using cypher queries.
Performance is not too bad on my dev laptop, even though some queries show huge lags (I suspect the main reason is swapping caused by lack of RAM).
Graph specificities:
I suspect that some nodes may be huge hubs (up to 1M relationships), but these should remain exceptional.
What would be your advice regarding:
hardware sizing,
neo4j configuration:
heap size,
memory-mapped buffers (is there any reason to keep this value at false on Windows?),
cache type,
recommended jvm settings for windows,
...
Thanks in advance!
laurent
The on-disk size of your graph is ~32 GB in total. Neo4j has a two-layered cache architecture. The first layer is the file buffer cache; ideally it should have the same size as the on-disk graph, so ~32 GB in your case.
IMPORTANT: when running Neo4j on Windows, the file buffer cache is part of the Java heap (due to the suckiness of Windows itself). On Linux/Mac it is off-heap. That is the reason why I generally do not recommend Windows for Neo4j production environments.
cache_type should be hpc when using enterprise edition and soft for community.
To leave a reasonable amount for the second cache layer (the object cache), I'd suggest a machine with at least 64 GB RAM. Since the file buffer cache and the object cache are both on-heap on Windows, make the heap large and consider using G1: -Xmx60G -XX:+UseG1GC. Observe GC behaviour by uncommenting the GC logging settings in neo4j-wrapper.conf and tweak the settings step by step; a config sketch follows.
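As a hedged sketch of what that looks like in the Neo4j 2.x config files (the exact keys may differ between minor versions; the heap value matches the -Xmx60G suggestion above):

```
# conf/neo4j-wrapper.conf -- JVM heap (values in MB) and GC
wrapper.java.initmemory=61440
wrapper.java.maxmemory=61440
wrapper.java.additional=-XX:+UseG1GC
# uncomment the GC logging lines shipped in this file to observe GC behaviour

# conf/neo4j.properties -- cache layer
# "hpc" on enterprise edition; use "soft" on community
cache_type=hpc
```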
Please note that Neo4j 2.2 might come with a different file buffer cache implementation that works off heap on Windows as well.
Please take a look at these two guides for performance tuning and hardware sizing calculations:
http://neo4j.com/developer/guide-performance-tuning/
http://neo4j.com/developer/guide-sizing-and-hardware-calculator/
I'm working on my own PXE server (so I can easily install new OSes I want to test, without needing to find and format USB sticks). I started by examining the psychomario/PyPXE project, but quickly implemented my own TFTP server. I'm testing it against the Intel UNDI PXE-2.1 client I have on my laptop.
One of the things psychomario doesn't support is sending large files (>32M). The RFCs (1350, 2347) don't discuss how this should be done, but apparently I have two options. The first option, increasing the block size, didn't work, since the PXE client apparently ignores fragmented IP packets.
The second option is using a rolling block number, i.e. restarting the count from the beginning when reaching the end. The client acks the data, but when the data ends, the client keeps sending acks for block 0xffff (even if that's not the last block).
I tried closing the connection, and I tried sending empty data packets for that block. The first resulted in an error on the PXE client; the second resulted in an infinite loop.
What packet do I need to send in response to the ack of block 0xffff in order to end the session?
1) Your TFTP server should really implement the block-size option; otherwise you will be limited to 512-byte blocks. Please see RFC 2348. Fragmentation can always be avoided by negotiating a blksize such that the whole packet never gets bigger than the minimum MTU (1500 in a typical Ethernet environment).
2) You have to implement TFTP block-number "roll over": after block # 0xFFFF has been sent and acked, you send the next block as block # 0x0000, and so on until you finish the transfer. When you test this feature, be sure to use a TFTP client able to deal with block roll-over; virtually all PXE clients available today handle this very well. A minimal sketch of the arithmetic follows.
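A minimal sketch of the roll-over arithmetic in Python (opcode per RFC 1350; the payload is placeholder data):

```python
import struct

OP_DATA = 3  # TFTP DATA opcode (RFC 1350)

def data_packet(block_index, payload):
    """Build a DATA packet. block_index starts at 1 and keeps counting
    past 0xFFFF; the 16-bit wire field simply wraps to 0x0000."""
    return struct.pack("!HH", OP_DATA, block_index & 0xFFFF) + payload

# Block 65535 goes on the wire as 0xFFFF, block 65536 as 0x0000:
assert data_packet(65536, b"")[2:4] == b"\x00\x00"
```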
Besides the learning experience of coding your own PXE server, please consider that you will run into countless issues down the road. If you need quick results, just google "pxe server" for a list of ready-to-use PXE server options.
I need some suggestions for an Erlang in-memory cache system.
The cache is key-value based storage.
The key is usually an ASCII string; the value can be any Erlang type, including numbers, lists, tuples, etc.
A cache item can be set from any node.
A cache item can be read from any node.
A cache item is shared across all nodes, even on different servers.
Dirty reads are permitted; I don't want any locks or transactions reducing performance.
Totally distributed, with no centralized machine or service.
Good performance.
Easy installation, deployment, configuration, and maintenance.
My first choice seems to be Mnesia, but I have no experience with it.
Does it meet my requirements?
What performance can I expect?
Another option is memcached,
but I am afraid its performance would be lower than Mnesia's, because extra serialization/deserialization is needed as the memcached daemon runs in a separate OS process.
Yes, Mnesia meets your requirements. However, as you said, a tool is good when the one using it understands it in depth. We have used Mnesia in a distributed authentication system and have not experienced any problems thus far. When Mnesia is used as a cache it is better off than memcached, for one reason: "Memcached cannot guarantee that what you write, you can read at any time, due to memory swap out issues and stuff" (follow here). However, this means that your distributed system is going to be built on Erlang.
Indeed, Mnesia in your case beats most NoSQL cache solutions because their systems are eventually consistent. Mnesia is consistent, as long as network availability can be ensured across the cluster. For a distributed cache system you don't want a situation where you read different values for the same key from different nodes, so Mnesia's consistency comes in handy here.
Something you should think about is that it is possible to have a centralised memory cache for a distributed system. It works like this: you have a RabbitMQ server running and accessible by AMQP clients on each cluster node, and systems interact over the AMQP interface. Because the cache is centralised, consistency is ensured by the process/system responsible for writing to and reading from the cache. The other systems just place a request for a key onto the AMQP message bus, and the system responsible for the cache receives this message and replies with the value. A sketch of this request/reply pattern follows.
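As a rough sketch of that request/reply pattern over AMQP, here is the cache-service side using the Python pika client (the queue name and cache contents are hypothetical; an Erlang AMQP client library would be used the same way):

```python
import pika  # third-party: pip install pika

CACHE = {"user:42": "alice"}  # hypothetical in-memory store

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="cache_service")

def on_request(ch, method, props, body):
    # Look the key up and reply on the queue named in reply_to,
    # echoing correlation_id so the caller can match the response.
    value = CACHE.get(body.decode(), "")
    ch.basic_publish(exchange="",
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(
                         correlation_id=props.correlation_id),
                     body=value)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="cache_service", on_message_callback=on_request)
channel.start_consuming()
```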
We used this message-bus architecture with RabbitMQ for a recent system that involved integration with banking systems, an ERP system, and a public online service. What we built was responsible for fusing all of these together, and we are glad we used RabbitMQ. The details are many, but what we did was come up with a message format and a system-identification mechanism. All systems must have a RabbitMQ client for writing to and reading from the message bus. You then create a read queue for each system, so that other systems write their requests into that queue, whose name inside RabbitMQ is the same as the system owning it. Later, you should encrypt the messages passing over the bus. In the end, you have systems bound together across large distances/states, but with an efficient network you won't believe how fast RabbitMQ binds these systems together. Anyhow, RabbitMQ can also be clustered, and I should tell you that it is Mnesia which powers RabbitMQ (that tells you how good Mnesia can be).
One more thing: you should do some reading and write many programs until you are comfortable with it.