Perfino agent overhead - perfino

What is the approximate overhead of the Perfino agent in terms of memory and CPU utilization? Would prefer numbers if you have them.
We currently have an evaluation copy hooked up to an order matching application and there is concern that the agent may introduce additional latency into the system.

It is difficult to quantify agent overhead in relative or absolute numbers, because it depends so much on the application being monitored and on how monitoring is configured.
In any case, the agent overhead in terms of CPU time is negligible. Agent operations only occur for high-level events like a URL invocation or a database call and only add microseconds of processing time to each such event.
In terms of memory overhead, the agent can store substantial amounts of data especially for probe data like JDBC statements. However, this is carefully capped and periodically offloaded to the perfino server.

Related

Do Idle Snowflake Connections Use Cloud Services Credits?

Motivation | Suppose one wanted to execute two SQL queries against a Snowflake DB, ~20 minutes apart.
Optimization Problem | Which would cost fewer cloud services credits:
1. Re-using one connection, and allowing that connection to idle in the interim.
2. Connecting once per query.
The documentation indicates that authentication incurs cloud services credit usage, but does not indicate whether idle connections incur credit usage.
Question | Does anyone know whether idle connections incur cloud services credit usage?
Snowflake connections are stateless: they do not occupy server-side resources and, unlike most database connections, do not need to keep a TCP/IP connection alive.
Therefore idle connections do not consume any Cloud Services Layer credits unless you enable "CLIENT_SESSION_KEEP_ALIVE".
https://docs.snowflake.com/en/sql-reference/parameters.html#client-session-keep-alive
When you set CLIENT_SESSION_KEEP_ALIVE, the client periodically refreshes the session token (the default frequency is 1 hour).
https://docs.snowflake.com/en/sql-reference/parameters.html#client-session-keep-alive-heartbeat-frequency
As Peter mentioned, the CSL usage up to 10% of daily warehouse usage is free, so refreshing the tokens will not cost you anything in practice.
About your approaches: I do not know how many queries you are planning to run daily, but creating a new connection for each query can be a performance killer. From a cost perspective, an idle connection will make at most 24 authorization requests per day, so if you plan to run more than 24 queries a day, I suggest picking the first approach.
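As a rough sanity check, the comparison above can be sketched in a few lines (my own sketch; the 24-per-day figure assumes the default one-hour heartbeat, and the function name is mine):

```python
def auth_events_per_day(queries_per_day: int, reuse_connection: bool) -> int:
    """Rough count of Cloud Services authorization events per day.

    Assumes the default CLIENT_SESSION_KEEP_ALIVE heartbeat of one hour,
    so a single long-lived connection refreshes its token at most 24 times.
    """
    if reuse_connection:
        return 24  # hourly token refreshes for one idle connection
    return queries_per_day  # one fresh authentication per query

# Reusing one connection wins once you run more than 24 queries a day.
print(auth_events_per_day(100, reuse_connection=True))   # 24
print(auth_events_per_day(100, reuse_connection=False))  # 100
```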
Even if idle connections cost nothing on the Cloud Services side, is your warehouse staying running because of them, giving you other costs to consider? I am guessing there are more factors to consider overall, which you can discuss with your Snowflake Account Team. Not trying to dodge your question, just trying to give a more complete answer!
In general, the Cloud Services costs are typically on the lower side compared to your other costs. Here are the main drivers for cloud services costs and how to minimize them: https://community.snowflake.com/s/article/Cloud-Services-Billing-Update-Understanding-and-Adjusting-Usage
The best advice you may get is to test your connections/workflows and compare the costs over time. The overall costs are going to depend on several factors. Even if there's a difference in costs between two workflows, you may still have to analyze the cost/output ratio and your business needs to determine if it's worth the savings.
Approach 1 will incur less cloud services usage, but more data transfer charges (to keep the connection alive). Only the Auth event incurs cloud services usage.
Approach 2 will incur more cloud services usage, but less data transfer charges.
However, the amount of cloud services usage or data transfer charges are extremely small in either case.
Note - any cloud services used (up to 10% of daily warehouse usage) are free, whereas there is no free bandwidth allocation, so using #2 may save you a few pennies.

What purpose do CPU limits have in Kubernetes resp. Docker?

I have been digging into Kubernetes resource restrictions and have a hard time understanding what CPU limits are for. I know Kubernetes passes requests and limits down to the (in my case) Docker runtime.
Example: I have 1 Node with 1 CPU and 2 Pods with CPU requests: 500m and limits: 800m. In Docker, this results in (500m -> 0.5 * 1024 = 512) --cpu-shares=512 and (800m -> 800 * 100) --cpu-quota=80000. The pods get scheduled by the Kube scheduler because the sum of requests does not exceed 100% of the node's capacity; in terms of limits, the node is overcommitted.
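The conversion, as I understand it, amounts to this (a quick sketch of my own; it assumes Docker's default 100 ms CFS period, and the function name is mine):

```python
CFS_PERIOD_US = 100_000  # Docker's default --cpu-period (100 ms), in microseconds

def k8s_cpu_to_docker(request_millicores: int, limit_millicores: int):
    """Translate a Kubernetes CPU request/limit to Docker CFS settings."""
    cpu_shares = int(request_millicores / 1000 * 1024)  # relative weight
    cpu_quota = limit_millicores * 100                  # µs of CPU per 100 ms period
    return cpu_shares, cpu_quota

print(k8s_cpu_to_docker(500, 800))  # (512, 80000)
```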
The above allows each container to get 80ms of CPU time per 100ms period (the default). As soon as CPU usage reaches 100%, CPU time is shared between the containers based on their weight, expressed in CPU shares, which would be 50% for each container given the base value of 1024 and a 512 share for each. At this point, in my understanding, the limits have no more relevance, because neither container can get its 80ms anymore; they both get 50ms. So no matter how high I set the limits, when usage reaches the critical 100%, CPU is partitioned by requests anyway.
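In code, my understanding of the contended case looks like this (a sketch of my own; the helper name is mine):

```python
def cpu_time_per_period(shares, quotas_us, period_us=100_000):
    """CPU time each container gets per CFS period when all of them demand
    CPU at once on a single-CPU node: the period is divided by share
    weight, then each container's slice is capped by its quota."""
    total = sum(shares)
    return [min(quota, period_us * share // total)
            for share, quota in zip(shares, quotas_us)]

# Two containers, 512 shares each, 80 ms quota: each gets 50 ms,
# so the 80 ms limits are irrelevant under full contention.
print(cpu_time_per_period([512, 512], [80_000, 80_000]))  # [50000, 50000]

# A single demanding container, however, is still capped by its quota:
print(cpu_time_per_period([1024], [80_000]))  # [80000]
```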
This makes me wonder: why should I define CPU limits in the first place, and does overcommitment make any difference at all? Requests, on the other hand, are completely understandable in terms of "how much share do I get when everything is in use".
One reason to set CPU limits is that, if you set CPU request == limit and memory request == limit, your pod is assigned a Quality of Service class = Guaranteed, which makes it less likely to be OOMKilled if the node runs out of memory. Here I quote from the Kubernetes doc Configure Quality of Service for Pods:
For a Pod to be given a QoS class of Guaranteed:
Every Container in the Pod must have a memory limit and a memory request, and they must be the same.
Every Container in the Pod must have a CPU limit and a CPU request, and they must be the same.
Another benefit of using the Guaranteed QoS class is that it allows you to lock exclusive CPUs for the pod, which is critical for certain kinds of low-latency programs. Quote from Control CPU Management Policies:
The static CPU management policy allows containers in Guaranteed pods with integer CPU requests access to exclusive CPUs on the node. ... Only containers that are both part of a Guaranteed pod and have integer CPU requests are assigned exclusive CPUs.
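For illustration, a pod spec along these lines would qualify for the Guaranteed class with an integer CPU request, and thus for exclusive CPUs under the static policy (a sketch; the names and image are placeholders of mine):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest  # hypothetical image
    resources:
      requests:
        cpu: "2"               # integer CPU count, and request == limit
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "1Gi"
```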
According to the Motivation for CPU Requests and Limits section of the Assign CPU Resources to Containers and Pods Kubernetes walkthrough:
By having a CPU limit that is greater than the CPU request, you
accomplish two things:
The Pod can have bursts of activity where it makes use of CPU resources that happen to be available.
The amount of CPU resources a Pod can use during a burst is limited to some reasonable amount.
I guess that might leave us wondering why we care about limiting the burst to "some reasonable amount", since the very fact that it can burst seems to suggest there are no other processes contending for CPU at that time. But I find myself dissatisfied with that line of reasoning...
So first off I checked out the command line help for the docker flags you mentioned:
--cpu-quota int Limit CPU CFS (Completely Fair Scheduler) quota
-c, --cpu-shares int CPU shares (relative weight)
Reference to the Linux Completely Fair Scheduler means that in order to understand the value of CPU limit/quota we need to understand how the underlying process scheduling algorithm works. Makes sense, right? My intuition is that it's not as simple as time-slicing CPU execution according to the CPU shares/requests and allocating whatever is left over at the end of some fixed timeslice on a first-come, first-serve basis.
I found this old Linux Journal article snippet which seems to be a legit description of how CFS works:
The CFS tries to keep track of the fair share of the CPU that would
have been available to each process in the system. So, CFS runs a fair
clock at a fraction of real CPU clock speed. The fair clock's rate of
increase is calculated by dividing the wall time (in nanoseconds) by
the total number of processes waiting. The resulting value is the
amount of CPU time to which each process is entitled.
As a process waits for the CPU, the scheduler tracks the amount of
time it would have used on the ideal processor. This wait time,
represented by the per-task wait_runtime variable, is used to rank
processes for scheduling and to determine the amount of time the
process is allowed to execute before being preempted. The process with
the longest wait time (that is, with the gravest need of CPU) is
picked by the scheduler and assigned to the CPU. When this process is
running, its wait time decreases, while the time of other waiting
tasks increases (as they were waiting). This essentially means that
after some time, there will be another task with the largest wait time
(in gravest need of the CPU), and the currently running task will be
preempted. Using this principle, CFS tries to be fair to all tasks and
always tries to have a system with zero wait time for each
process—each process has an equal share of the CPU (something an
“ideal, precise, multitasking CPU” would have done).
While I haven't gone as far as to dive into the Linux kernel source to see how this algorithm actually works, I do have some guesses I would like to put forth as to how shares/requests and quotas/limits play into this CFS algorithm.
First off, my intuition leads me to believe that different processes/tasks accumulate wait_runtime at different relative rates based on their assigned CPU shares/requests since Wikipedia claims that CFS is an implementation of weighted fair queuing and this seems like a reasonable way to achieve a shares/request based weighting in the context of an algorithm that attempts to minimize the wait_runtime for all processes/tasks. I know this doesn't directly speak to the question that was asked, but I want to be sure that my explanation as a whole has a place for both concepts of shares/requests and quotas/limits.
Second, with regard to quotas/limits, I intuit that these would be applicable in situations where a process/task has accumulated a disproportionately large wait_runtime while waiting on I/O. Remember that the quoted description above says CFS prioritizes the process/task with the largest wait_runtime? If there were no quota/limit on a given process/task, then it seems to me that a burst of CPU usage on that process/task would block all other processes/tasks from executing for as long as it takes for its wait_runtime to fall enough that another task is allowed to preempt it.
So in other words, CPU quotas/limits in Docker/Kubernetes land are a mechanism that allows a given container/pod/process to burst in CPU activity to catch up to other processes after waiting on I/O (rather than CPU), without unfairly blocking other processes from also doing work in the course of doing so.
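To make that intuition concrete, here is a toy simulation (my own sketch, emphatically not the real CFS algorithm) of two tasks ranked by wait_runtime, with and without a quota:

```python
def first_run_of_b(wait_a_us, quota_us=None, period_us=100_000, tick_us=10_000):
    """Toy model: each tick, run the runnable task with the largest
    wait_runtime; the runner's wait_runtime falls while the waiter's
    rises. With a quota, task A is throttled once it has used quota_us
    within the current period. Returns the time (µs) at which task B
    first gets the CPU."""
    wait = {"A": wait_a_us, "B": 0}
    used_a = 0
    t = 0
    while True:
        if t % period_us == 0:
            used_a = 0  # new CFS period: A's quota refills
        runnable = ["B", "A"]
        if quota_us is not None and used_a >= quota_us:
            runnable = ["B"]  # A is throttled for the rest of the period
        runner = max(runnable, key=lambda task: wait[task])
        if runner == "B":
            return t
        used_a += tick_us
        wait["A"] -= tick_us
        wait["B"] += tick_us
        t += tick_us

# A returns from I/O with a huge accumulated wait_runtime (500 ms):
print(first_run_of_b(500_000))                   # 250000: B starves for 250 ms
print(first_run_of_b(500_000, quota_us=80_000))  # 80000: the quota lets B in at 80 ms
```

Without a quota, the post-I/O task monopolizes the CPU until its accumulated wait drains; with one, other tasks get in at every period boundary.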
There is no upper bound with just CPU shares. If there are free cycles, you are free to use them. A limit is imposed so that one rogue process cannot hold up the resource forever.
There should be some fair scheduling. CFS enforces that using the CPU quota and CPU period derived from the configured limit attribute.
To conclude: this kind of property ensures that when I schedule your task, you get a minimum of 50 milliseconds to finish it. If you need more time and no one is waiting in the queue, I let you run a few more, but never more than 80 milliseconds.
I think it's correct that, during periods where the Node's CPU is being fully utilized, it's the requests (CPU shares) that will determine how much CPU time each container gets, rather than the limits (which are effectively moot at that point). In that sense, a rogue process can't do unlimited damage (by depriving another of its requests).
However, there are still two broad uses for limits:
If you don't want a container to be able to use more than a fixed amount of CPU even if extra CPU is available on the Node. It might seem weird that you wouldn't want excess CPU to be utilized, but there are use cases for this. Some that I've heard:
You're charging customers for the right to use up to x amount of compute resources (a limit), so you don't want to give them more sometimes for free (which might dissuade them from paying for a higher tier on your service).
You're trying to figure out how a service will perform under load, but this gets complicated/unpredictable, because the performance during your load testing depends on how much spare CPU is lying around that the service is able to utilize (which might be a lot more than the spare CPU that'll actually be on the Node during a real-world high-load situation). This is mentioned here as a big risk.
If the requests on all the containers aren't set especially accurately (as is often the case; devs might set the values upfront and forget to update them as the service evolves, or not even set them very carefully initially). In these cases, things sometimes still function well enough if there's enough slack on the Node; limits can then be useful to prevent a buggy workload from eating all the slack and forcing the other pods back to their incorrectly-set(!) requested amounts.

Storm process increasing memory

I am implementing a distributed algorithm for pagerank estimation using Storm. I have been having memory problems, so I decided to create a dummy implementation that does not explicitly save anything in memory, to determine whether the problem lies in my algorithm or my Storm structure.
Indeed, while the only thing the dummy implementation does is message-passing (a lot of it), the memory of each worker process keeps rising until the pipeline is clogged. I do not understand why this might be happening.
My cluster has 18 machines (some with 8g, some 16g and some 32g of memory). I have set the worker heap size to 6g (-Xmx6g).
My topology is very very simple:
One spout
One bolt (with parallelism).
The bolt receives data from the spout (fieldsGrouping) and also from other tasks of itself.
My message-passing pattern is based on random walks with a certain stopping probability. More specifically:
The spout generates a tuple.
One specific task from the bolt receives this tuple.
Based on a certain probability, this task generates another tuple and emits it again to another task of the same bolt.
I have been stuck on this problem for quite a while, so it would be very helpful if someone could help.
Best Regards,
Nick
It seems you have a bottleneck in your topology, i.e., a bolt receives more data than it can process. Thus, the bolt's input queue grows over time, consuming more and more memory.
You can either increase the parallelism of the "bottleneck bolt" or enable the fault-tolerance mechanism, which also provides flow control via a limited number of in-flight tuples (https://storm.apache.org/documentation/Guaranteeing-message-processing.html). For this, you also need to set the "max spout pending" parameter.
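As a sketch, the flow-control side can be configured like this (keys are Storm's standard configuration names; the values are illustrative, and your spout must emit tuples with message IDs so they can be acked):

```yaml
# storm.yaml / topology configuration (values are illustrative)
topology.max.spout.pending: 1000   # cap on un-acked tuples in flight per spout task
topology.acker.executors: 2        # acker tasks must be enabled for acking to work
```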

Machine type for google cloud dataflow jobs

I noticed there is an option that allows specifying a machine type.
What is the criteria I should use to decide whether to override the default machine type?
In some experiments I saw that throughput is better with smaller instances, but on the other hand jobs tend to experience more "system" failures when many small instances are used instead of a smaller number of default instances.
Thanks,
G
Dataflow will eventually optimize the machine type for you. In the meantime here are some scenarios I can think of where you might want to change the machine type.
If your ParDo operation needs a lot of memory, you might want to change the machine type to one of the high-memory machines that Google Compute Engine provides.
Optimizing for cost and speed. If your CPU utilization is less than 100% you could probably reduce the cost of your job by picking a machine with fewer CPUs. Alternatively, if you increase the number of machines and reduce the number of CPUs per machine (so total CPUs stays approximately constant) you can make your job run faster but cost approximately the same.
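The cost reasoning can be sketched numerically (my own sketch; it assumes pricing scales roughly linearly with vCPU count, as it does for the standard machine families):

```python
def relative_cost(machines: int, vcpus_per_machine: int) -> int:
    """Cost is roughly proportional to total vCPU-hours, so the shape of
    the worker fleet barely matters; the total vCPU count does."""
    return machines * vcpus_per_machine

# 10 x 4-vCPU workers vs 40 x 1-vCPU workers: approximately the same cost,
# but the second fleet has more workers to parallelize across.
print(relative_cost(10, 4), relative_cost(40, 1))  # 40 40
```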
Can you please elaborate more on what type of system failures you are seeing? A large class of failures (e.g. VM interruptions) are probabilistic, so you would expect to see a larger absolute number of failures as the number of machines increases. However, failures like VM interruptions should be fairly rare, so I'd be surprised if you noticed an increase unless you were using an order of magnitude more VMs.
On the other hand, it's possible you are seeing more failures because of resource contention due to the increased parallelism of using more machines. If that's the case, we'd really like to know about it to see if this is something we can address.

Hadoop namenode memory usage

I am confused by hadoop namenode memory problem.
When namenode memory usage is higher than a certain percentage (say 75%), reading and writing HDFS files through the Hadoop API fails (for example, calling open() throws an exception). What is the reason? Has anyone seen the same thing?
P.S. The namenode disk I/O is not high at the time, and the CPU is relatively idle.
What determines the namenode's QPS (Queries Per Second)?
Thanks very much!
Since the namenode is basically just an RPC server managing a HashMap with the blocks, you have two major memory problems:
The Java HashMap is quite costly, and its collision resolution (the separate-chaining algorithm) is costly as well, because it stores the collided elements in a linked list.
The RPC server needs threads to handle requests. Hadoop ships with its own RPC framework, and you can configure it with dfs.namenode.service.handler.count for the datanodes (it defaults to 10), or with dfs.namenode.handler.count for other clients, such as MapReduce jobs and JobClients that want to run a job. When a request comes in and the server wants to create a new handler, it may go out of memory (new threads also allocate a good chunk of stack space; you may need to increase that).
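For reference, both handler counts are set in hdfs-site.xml (the values here are illustrative, not recommendations):

```xml
<!-- hdfs-site.xml -->
<property>
  <name>dfs.namenode.service.handler.count</name>
  <value>20</value> <!-- RPC handler threads for datanodes (default 10) -->
</property>
<property>
  <name>dfs.namenode.handler.count</name>
  <value>30</value> <!-- RPC handler threads for other clients -->
</property>
```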
So these are the reasons why your namenode needs so much memory.
What determines the namenode's QPS (Queries Per Second)?
I haven't benchmarked it yet, so I can't give you very good tips on that. Certainly tune the handler counts higher than the number of tasks that can run in parallel, plus speculative execution.
Depending on how you submit your jobs, you have to fine-tune the other property as well.
Of course, you should always give the namenode enough memory, so it has headroom and does not fall into full garbage-collection cycles.
