Autoscaling: Unable to reach resize target in zone us-central1-b. QUOTA_EXCEEDED: Quota 'IN_USE_ADDRESSES' exceeded. Limit: 575.0 in region us-central1.
According to https://cloud.google.com/dataflow/service/dataflow-service-desc my limit should be 1,000, but when running my Dataflow job I get the warning above about a limit of only 575. Should I explicitly set a different region as described in https://cloud.google.com/dataflow/docs/concepts/regional-endpoints, or can I increase the limit to 1,000 in the us-central1 region?
I guess you are using the default machine type, so each worker has only one CPU, but each of them still needs its own IP address. Even though you can use up to 1,000 instances, your "IN_USE_ADDRESSES" quota in the region is set to 575, hence the error.
If you don't want to increase the "In Use Addresses" quota, you can switch to a machine type with more CPUs per instance, for example n1-standard-4, so that fewer workers (and therefore fewer addresses) are needed for the same total CPU count. Otherwise, you can request more "In Use Addresses" quota.
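As an illustration only, here is roughly how the worker machine type, worker count, and region can be set when launching a Beam pipeline on Dataflow; the pipeline file, project name, and worker cap below are made-up values, and the exact option spellings differ slightly between the Python and Java SDKs:

# Larger workers mean fewer instances, so fewer in-use IP addresses for the same CPU count.
python my_pipeline.py \
  --runner DataflowRunner \
  --project my-project \
  --region us-central1 \
  --worker_machine_type n1-standard-4 \
  --max_num_workers 250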
I'm using Splunk to monitor my applications, and I also store resource statistics in Splunk.
Goal: I want to find the optimum CPU limit for each container.
How do I write a query that finds an optimum CPU limit? Or, the other question: should I even try?
Concern 1: Say I build my query around a MAX(CPU) command. The maximum doesn't mean my container will be running at that level most of the time, so I might set an unnecessarily high limit for my containers.
Let me explain: if MAX(CPU) gives me a limit value of 10, that peak might have happened because of a single bulk operation, while the container's expected usage is around 1.2 the rest of the time. So using the MAX value alone won't work.
Concern 2: Say I instead take the AVG(CPU) value and use it, and it comes out as 2. How many of my operations will end up waiting, and for how long, after this change? How many of them are going to time out? It may create a lot of side effects. How will I decide what the right average-based value is? What parameters should be used?
Is it possible to include such conditions in the query? Or do I need an AI to decide it? :)
Here are my given parameters:
path=statistics.cpus_system_time_secs
path=statistics.cpus_user_time_secs
path=statistics.cpus_nr_periods
path=statistics.cpus_nr_throttled
path=statistics.cpus_throttled_time_secs
path=statistics.cpus_limit
I bet you can ask better questions than me. Let's discuss.
"Optimum" is going to depend greatly on your own environment (resources available, application priority, etc)
You probably want to look at a combination of the following factors:
avg(CPU)
max(CPU) (and time spent there)
min(CPU) (and time spent there)
I suspect your "optimum" limit is going to be a % below your max... but only if you're spending 'a lot' of time maxed out.
And, of course, being "maxed" may not matter, if other containers are running acceptably
Keep in mind, once you set that limit, your max will drop (as, likely, will your avg)
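As a rough SPL sketch of that idea (the index, sourcetype, container_name field, and a per-interval cpu_usage field are assumptions, not fields from the list above; in practice you would first derive a usage rate from the cumulative cpus_*_time_secs counters):

index=container_stats sourcetype=container_metrics
| stats avg(cpu_usage) AS avg_cpu, max(cpu_usage) AS max_cpu, perc95(cpu_usage) AS p95_cpu by container_name
| eval suggested_limit=round(p95_cpu * 1.2, 2)

A high percentile plus some headroom is one common compromise between the MAX and AVG extremes described in the question.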
Folks,
Regarding Docker Compose v3's 'cpus' parameter (under 'deploy' > 'resources' > 'limits') for limiting the CPU available to a service: is it an absolute number that specifies a count of CPUs, or is it a more useful percentage-of-available-CPUs setting?
From what I read, it appears to be an absolute number. For example, if a host has 4 CPUs and two services in the compose file are each set to 0.5, then the two services combined can only use a maximum of 1 CPU (0.5 each), leaving the 3 remaining CPUs idle.
But thinking out loud, it seems to me it would be nicer if this were a percentage of the available cores. In that case, for the same example, each service could use up to 2 CPUs, and the two combined could use all 4 when needed. That way, when I increase or decrease the available cores, the relative setting would save me from modifying this value again.
EDIT(09/10/21):
On reading this, it appears that the above can be achieved with the 'cpu-shares' setting instead of 'cpus'. Is my understanding correct?
The doc for 'cpu-shares' however mentions the below cautionary note,
"It does not guarantee or reserve any specific CPU access."
If the above can be achieved with this setting, then what does it mean (what do I stand to lose) to not have a guarantee or reservation?
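Not compose syntax, but the plain docker run equivalents may make the contrast concrete (container names and values here are just examples):

# Hard cap: this container can never use more than half a CPU, even when the host is otherwise idle.
docker run -d --name capped --cpus="0.5" nginx

# Relative weight: only matters under contention; with free CPU available, the container may use more.
docker run -d --name weighted --cpu-shares=512 nginx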
EDIT(09/13/21):
Just to summarize,
The 'cpus' parameter setting is an absolute number that refers to the number of CPUs a service has reserved for it to use at all times. Correct?
The 'cpu-shares' parameter setting is a relative weight whose value is used to compute the percentage of total available CPU that a service can use, and it only comes into play when there is contention. Correct?
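For reference, a minimal compose v3 sketch showing where limits and reservations are written (the image and values are arbitrary):

version: "3.8"
services:
  web:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: "0.50"        # hard cap: at most half of one core, regardless of host size
        reservations:
          cpus: "0.25"        # amount the scheduler sets aside for this service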
I'm using Ubuntu Server and have configured an Odoo project. The server has 8 GB of RAM, with around 6 GB available, so I need to increase Odoo's default memory limits. Please let me know how to increase them.
Have you tried playing with some of Odoo's Advanced and Multiprocessing options?
odoo.py --help
Advanced options:
--osv-memory-count-limit=OSV_MEMORY_COUNT_LIMIT
Force a limit on the maximum number of records kept in
the virtual osv_memory tables. The default is False,
which means no count-based limit.
--osv-memory-age-limit=OSV_MEMORY_AGE_LIMIT
Force a limit on the maximum age of records kept in
the virtual osv_memory tables. This is a decimal value
expressed in hours, and the default is 1 hour.
--max-cron-threads=MAX_CRON_THREADS
Maximum number of threads processing concurrently cron
jobs (default 2).
Multiprocessing options:
--workers=WORKERS Specify the number of workers, 0 disable prefork mode.
--limit-memory-soft=LIMIT_MEMORY_SOFT
Maximum allowed virtual memory per worker, when
reached the worker be reset after the current request
(default 671088640 aka 640MB).
--limit-memory-hard=LIMIT_MEMORY_HARD
Maximum allowed virtual memory per worker, when
reached, any memory allocation will fail (default
805306368 aka 768MB).
--limit-time-cpu=LIMIT_TIME_CPU
Maximum allowed CPU time per request (default 60).
--limit-time-real=LIMIT_TIME_REAL
Maximum allowed Real time per request (default 120).
--limit-request=LIMIT_REQUEST
Maximum number of request to be processed per worker
(default 8192).
Also if you are using WSGI or something similar to run Odoo, these may also need some tuning.
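As a sketch, assuming roughly 6 GB is free for Odoo, the per-worker limits could be raised along these lines (the worker count and byte values are illustrative, not recommendations):

odoo.py --workers=4 \
        --limit-memory-soft=1342177280 \
        --limit-memory-hard=1610612736 \
        --limit-time-cpu=120 \
        --limit-time-real=240
# 1342177280 bytes = 1280 MB (soft), 1610612736 bytes = 1536 MB (hard);
# with 4 workers the hard limits total about 6 GB, matching the available memory.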
Docker v1.12 service comes with four flags for setting the resource limits on a service.
--limit-cpu value Limit CPUs (default 0.000)
--limit-memory value Limit Memory (default 0 B)
--reserve-cpu value Reserve CPUs (default 0.000)
--reserve-memory value Reserve Memory (default 0 B)
What is the difference between limit and reserve in this context?
What does the cpu value mean here? Does it mean a number of cores? A CPU share? What is the unit?
Reserve holds those resources on the host so they are always available for the container. Think dedicated resources.
Limit prevents the binary inside the container from using more than that. Think of controlling runaway processes in a container.
Based on my limited testing with stress, --limit-cpu is a percentage of a core, though if there are multiple threads, it'll spread them out across cores and seems to attempt to keep the total near what you'd expect.
In the pic below, from left to right, the runs used --limit-cpu 4, then 2.5, then 2, then 1. All of those tests had stress set to 4 CPU worker threads.
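A sketch of how the four flags combine on a single service (the service name and image are placeholders):

# Reserve guarantees 1 CPU and 256 MB on the scheduling host; limit caps usage at 2 CPUs and 512 MB.
docker service create --name web \
  --reserve-cpu 1 --reserve-memory 256MB \
  --limit-cpu 2 --limit-memory 512MB \
  nginx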
Is there a limit to the number of processes that can be registered globally? Or is this only limited by memory / the maximum number of atoms?
Ubuntu 12.04 and Erlang R15B01.
Good question! I'd bet on the number of atoms, if you take into account the following. The Efficiency Guide has a section on system limits:
Processes
The maximum number of simultaneously alive Erlang processes is by default 32768. This limit can be raised up to at most 268435456 processes at startup (see documentation of the system flag +P in the erl(1) documentation). The maximum limit of 268435456 processes will at least on a 32-bit architecture be impossible to reach due to memory shortage.
Distributed nodes
Known nodes
A remote node Y has to be known to node X if there exist any pids, ports, references, or funs (Erlang data types) from Y on X, or if X and Y are connected. The maximum number of remote nodes simultaneously/ever known to a node is limited by the maximum number of atoms available for node names. All data concerning remote nodes, except for the node name atom, are garbage-collected.
Also, the erl manual section describes the flag you can use to alter the number of processes in your node:
+P Number
Sets the maximum number of concurrent processes for this system. Number must be in the range 16..134217727. Default is 32768.
Since you can alter the number of concurrent processes per node, but you can't alter the number of allowed atoms, and registered process names are atoms that are replicated to every node, the atom limit should be what caps the total number of globally registered processes.
Hope it helps :)
EDIT: Actually, it turns out you can change the number of allowed atoms :)
Atoms
By default, the maximum number of atoms is 1048576. This limit can be raised or lowered using the +t option.
+t size
Set the maximum number of atoms the VM can handle. Default is 1048576.
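So in practice both ceilings can be raised at VM startup; a sketch with arbitrary example numbers:

# Allow up to 1,000,000 processes and 5,000,000 atoms on this node.
erl +P 1000000 +t 5000000

Inside the running node, erlang:system_info(process_limit) reports the effective process ceiling.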