I'm setting up an environment for our data scientists to work on. Currently we have a single node running JupyterHub with Anaconda and Dask installed (2 sockets, 6 cores per socket, 2 threads per core, 140 GB RAM). When users create a LocalCluster, the default settings take all available cores and memory (as far as I can tell). This is fine when done explicitly, but I want the standard LocalCluster to use less than this. Because almost everything we do is
Looking into the config, I see no options dealing with n_workers, n_threads_per_worker, n_cores, etc. For memory, dask.config.get('distributed.worker') shows two memory-related options (memory and memory-limit), both specifying the behaviour described here: https://distributed.dask.org/en/latest/worker.html.
I've also looked at the JupyterLab Dask extension, which lets me do all this. However, I can't force people to use JupyterLab.
TL;DR I want to be able to set the following standard configuration when creating a cluster:
n_workers
processes = False (I think?)
threads_per_worker
memory_limit either per worker, or for the cluster. I know this can only be a soft limit.
Any suggestions for configuration are also very welcome.
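For context, this is roughly the explicit call whose settings I'd like to become the site defaults (the numbers below are only example values for our node):

```python
from dask.distributed import Client, LocalCluster

# Explicit settings I would like to become the defaults.
# The concrete numbers here are only examples.
cluster = LocalCluster(
    n_workers=4,
    processes=False,        # I think? threads instead of processes
    threads_per_worker=2,
    memory_limit="16GB",    # per worker, and only a soft limit
)
client = Client(cluster)
```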
As of 2019-09-20 this isn't implemented. I recommend raising a feature request at https://github.com/dask/distributed/issues/new, or even a pull request.
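In the meantime, one possible stopgap (just a convention, not a real configuration hook) is to ship a small helper module on the node and ask users to create their clusters through it. The module name and defaults below are hypothetical; it only uses the public LocalCluster/Client keyword arguments:

```python
# our_cluster.py -- hypothetical shared helper installed on the JupyterHub node
from dask.distributed import Client, LocalCluster

def make_client(**overrides):
    """Create a Client with conservative site-wide defaults.

    Users can still override any keyword explicitly.
    """
    defaults = dict(
        n_workers=4,
        threads_per_worker=2,
        processes=True,
        memory_limit="8GB",   # per worker, a soft limit
    )
    defaults.update(overrides)
    return Client(LocalCluster(**defaults))
```

Users would then call our_cluster.make_client() instead of constructing LocalCluster directly, which at least centralises the defaults until this is configurable.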
Using JConsole, one can access the OS metrics that are gathered by default, such as memory and CPU load, in addition to process-specific metrics. My question is: can we add some custom OS-level metrics, like the disk usage of a directory (using the Java Files API) or a check that a specific port is responsive?
Currently I gather such metrics over remote SSH with commands like du -sh /directory, which is quite slow, and I want to get them via JMX so they can be retrieved faster.
This question talks about adding Spring metrics.
As the linked question shows it is easy to expose a Java class as an MBean, so you could certainly write a class that collects the metrics you need. Implementing du in Java is not difficult. However, I'm not sure that it will solve your problem. The example of du -sh /directory is probably slow because it needs to recursively measure the size of a directory hierarchy. That will be just as slow (probably slower!) in Java.
As a side note, I would normally use collectd or Telegraf for that kind of thing, but again, the I/O cost of finding disk usage would be the same.
I would suggest adding some logs with times to your current script so that you can see where it spends time. If it takes less than a second to connect with SSH and 15 seconds to determine the directory size, for example, moving from SSH to JMX won't help.
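For example, if the collection script were driven from Python (purely an assumption; adapt to whatever your script actually uses), the split could be measured roughly like this, with the host and directory as placeholders:

```python
import subprocess
import time

host = "target-host"  # placeholder

t0 = time.perf_counter()
subprocess.run(["ssh", host, "true"], check=True)            # connection overhead only
t1 = time.perf_counter()
subprocess.run(["ssh", host, "du", "-sh", "/directory"], check=True)
t2 = time.perf_counter()

print(f"ssh connect: {t1 - t0:.2f}s, ssh + du: {t2 - t1:.2f}s")
```

If the second number dominates, the time is going into du itself, and moving from SSH to JMX won't change much.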
We have Neo4j configured as a causal cluster in a Kubernetes-based setup. Each component is deployed on its own t2.xlarge machine on AWS, and we use pod affinity to schedule the deployments. While running the application under stress, we observed considerable system load on only one machine. For example:
First Neo4j core machine: [screenshot]
Second core machine in the same cluster: [screenshot]
We have the bolt+routing protocol configured in the backend. I am not sure what is causing this much resource utilization on one machine while the others stay close to minimum.
I checked memory consumption at the pod level as well: neo4j-1 takes 9 GB of memory whereas the others take around 4 GB. So my question is: is this expected behavior?
I am posting my findings as an answer, though I'm not sure they are correct. I checked the status of each Neo4j instance using the monitoring APIs:
http://localhost:7474/db/manage/server/core/writable
This returns true for the leader and false for followers. This could be why the leader uses high resources while the others use relatively little. Reference: https://neo4j.com/docs/operations-manual/current/monitoring/causal-cluster/
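For example, a small script can poll that endpoint on each core member to see which one is currently the leader (a sketch in Python; the hostnames are placeholders for the three core members):

```python
# The /writable endpoint returns "true" only on the leader of the causal cluster.
import requests

cores = ["neo4j-core-0", "neo4j-core-1", "neo4j-core-2"]  # placeholder hostnames
for host in cores:
    url = f"http://{host}:7474/db/manage/server/core/writable"
    body = requests.get(url).text.strip()
    print(f"{host}: writable={body}")
```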
Our company is developing an application which runs in 3 separate Kubernetes clusters in different versions (production, staging, testing).
We need to monitor our clusters and the applications over time (metrics and logs). We also need to run a mailserver.
So basically we have 3 different environments with different versions of our application, plus some shared services that just need to run and that we do not care much about:
Monitoring: We need to install InfluxDB and Grafana. Every cluster has a pre-installed Heapster that needs to send data to our tools.
Logging: We didn't decide yet.
Mailserver (https://github.com/tomav/docker-mailserver)
Independent services: Sentry, GitLab
I am not sure where to run these external shared services. I found these options:
1. Inside each cluster
We need to install the tools 3 times for the 3 environments.
Con:
We don't have one central point to analyze our systems.
If the whole cluster is down, we cannot look at anything.
Installing the same tools multiple times does not feel right.
2. Create an additional cluster
We install the shared tools in an additional kubernetes-cluster.
Con:
Cost for an additional cluster
It's probably harder to send ongoing data to an external cluster (networking, security, firewalls, etc.).
3. Use an additional root server
We run Docker containers on an old-school root server.
Con:
Feels contradictory to use a root server instead of cutting-edge Kubernetes.
Single point of failure.
We need to manage the Docker containers manually (or attach the machine to Rancher).
I tried googling the problem but cannot find anything on the topic. Can anyone give me a hint or some links?
Or is it simply not a relevant problem that a cluster might go down?
To me, the second option sounds less evil, but I cannot yet estimate whether it's hard to transfer data from one cluster to another.
The important questions are:
Is it a problem to have monitoring-data in a cluster because one cannot see the monitoring-data if the cluster is offline?
Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
Is it (easily) possible to send metrics and logs from one kubernetes-cluster to another (we are running kubernetes in OpenTelekomCloud which is basically OpenStack)?
Thanks for your hints,
Marius
That is a very complex and philosophical topic, but I will give you my view on it and some facts to support it.
I think the best way is the second one, creating an additional cluster, and here's why:
You need a point that is accessible from all of your environments. With a separate cluster, you can set the same firewall rules, routes, etc. in all your environments, and it doesn't affect your current workload.
Yes, you need to pay a bit more. However, you need resources to run your shared applications, and overhead for a Kubernetes infrastructure is not high in comparison with applications.
With a separate cluster, you can set up a real HA solution, which you might not need for the staging and development clusters, so you will not pay for that multiple times.
Technically, it is also OK. You can use Heapster to collect data from multiple clusters; almost any logging solution can also work with multiple clusters. All other applications can be just run on the separate cluster, and that's all you need to do with them.
Now, about your questions:
Is it a problem to have monitoring-data in a cluster because one cannot see the monitoring-data if the cluster is offline?
No, it is not a problem with a separate cluster.
Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
I think so. At least I have done it several times, and I know of other projects with a similar architecture.
Is it (easily) possible to send metrics and logs from one kubernetes-cluster to another (we are running kubernetes in OpenTelekomCloud which is basically OpenStack)?
Yes, nothing complex there. Usually, it does not depend on the platform.
I am involved in developing a set of microservices with distributed processing capabilities using Akka.NET.
Typically they consist of a dispatcher and some workers. The dispatcher by default assigns work to its local workers, but when it [somehow] determines that the current host is overloaded, it assigns work to remote workers.
Say we have 10 hosts (VMs) and 30 such services (semantically different).
The question is: how to properly scale them?
The first solution is to run 3 services per host, with the capability to auto-scale each service on demand onto the other 9 machines, and to scale down again after some time when no longer needed.
The second solution is to always run all 30 services on all 10 hosts.
At a high level, you need to consider fault tolerance, localisation, recovery, and general distributed computing issues such as CAP.
Unless you have different scaling needs for different services, I'd probably go for the second approach of running them on all hosts. This gives greater fault tolerance and seems conceptually simpler than auto-scaling. However, it presumes you have similar needs for each type of service and means all services will be affected by an outage or failure of a host. If one particular service has different needs (i.e. a different SLA, different non-functional requirements, a more powerful machine needed, etc.), then there is more of an argument for more specialised deployments per service.
I have a problem: Membase is being very slow in my environment.
I am running several production servers (Passenger) on Rails 2.3.10 and Ruby 1.8.7.
Those servers communicate with 2 Membase machines in a cluster.
The Membase machines each have 64 GB of memory, a 100 GB EBS volume attached, and 1 GB of swap.
My problem is that Membase is VERY slow in response time and is actually the slowest part of the whole application lifecycle right now.
My question is: why?
The Rails gem I am using is memcache-northscale.
The Membase server is 1.7.1 (latest).
The server is doing between 2K and 7K ops per second (for the cluster).
The response time from Membase (based on New Relic) is 250 ms on average, which is HUGE and unreasonable.
Does anybody know why this is happening?
What can I do in order to improve this time?
It's hard to say immediately with the data at hand, but I have a few things you may wish to dig into to narrow down where the issue may be.
First of all, do your Membase stats show a significant number of background fetches? This is the "disk reads per second" statistic in the Web UI. If so, that's the likely culprit for the higher latencies.
You can read more about the statistics and sizing in the manual, particularly the sections on statistics and cluster design considerations.
Second, you're reporting 250 ms on average. Is this a sliding average, or overall? Do you have something like 90th or 99th percentile latencies? A few outlying disk fetches can produce a large average even when most requests (for example, those served from RAM that don't need disk fetches) are actually quite speedy.
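Just to illustrate with made-up numbers how a handful of slow disk fetches can produce exactly that kind of average (a sketch, not your actual data):

```python
import statistics

# 95 requests served from RAM plus 5 slow disk fetches (made-up values)
latencies_ms = [1] * 95 + [5000] * 5

cuts = statistics.quantiles(latencies_ms, n=100)
print(f"mean={statistics.mean(latencies_ms):.0f} ms")   # ~251 ms
print(f"p90={cuts[89]:.0f} ms, p99={cuts[98]:.0f} ms")   # 1 ms vs 5000 ms
```

The mean alone looks catastrophic even though the typical request is sub-millisecond, which is why the percentile view matters.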
Are your systems spread across availability zones? What kind of instances are you using? Are the clients and servers in the same Amazon AWS region? I suspect the answer may be "yes" to the first, which means about 1.5 ms of overhead with xlarge instances, based on recent measurements. This can matter if you're doing a lot of fetches synchronously and serially in a given method.
I expect it's all in one region, but it's worth double checking since those latencies sound like WAN latencies.
Finally, there is an updated Ruby gem that is backwards compatible with Fauna's, and Couchbase, Inc. has been working to contribute it back upstream to Fauna. If possible, you may want to try the gem referenced here:
http://www.couchbase.org/code/couchbase/ruby/2.0.0
You will also want to look at running Moxi on the client side. To access Membase, you need to go through a proxy (called Moxi). By default, it's installed on the server side, which means you might make a request to one of the servers that doesn't actually have the key; Moxi will go and get it for you, but then you're doubling the network traffic.
Installing Moxi on the client side eliminates this extra network traffic: http://www.couchbase.org/wiki/display/membase/Moxi
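To make the idea concrete (shown with a Python memcached client purely for illustration; the Ruby gem would be configured analogously, and the host names are placeholders): instead of pointing the client at a Membase node, you point it at the Moxi running on the application host.

```python
from pymemcache.client.base import Client

# Before: client pointed straight at a remote Membase node,
# which may have to proxy to the node that actually owns the key.
# client = Client(("membase-node-1", 11211))

# After: client pointed at the Moxi proxy running on this application host,
# which routes each request directly to the right Membase node.
client = Client(("127.0.0.1", 11211))
client.set("example-key", "value")
print(client.get("example-key"))
```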
Perry