I'm running an Odoo project on an Ubuntu server. The machine has 8 GB of RAM, with around 6 GB available, so I need to increase Odoo's default memory limits. How do I increase them?
Have you tried playing with some of Odoo's Advanced and Multiprocessing options?
odoo.py --help
Advanced options:
--osv-memory-count-limit=OSV_MEMORY_COUNT_LIMIT
Force a limit on the maximum number of records kept in
the virtual osv_memory tables. The default is False,
which means no count-based limit.
--osv-memory-age-limit=OSV_MEMORY_AGE_LIMIT
Force a limit on the maximum age of records kept in
the virtual osv_memory tables. This is a decimal value
expressed in hours, and the default is 1 hour.
--max-cron-threads=MAX_CRON_THREADS
Maximum number of threads processing concurrently cron
jobs (default 2).
Multiprocessing options:
--workers=WORKERS Specify the number of workers, 0 disable prefork mode.
--limit-memory-soft=LIMIT_MEMORY_SOFT
Maximum allowed virtual memory per worker, when
reached the worker be reset after the current request
(default 671088640 aka 640MB).
--limit-memory-hard=LIMIT_MEMORY_HARD
Maximum allowed virtual memory per worker, when
reached, any memory allocation will fail (default
805306368 aka 768MB).
--limit-time-cpu=LIMIT_TIME_CPU
Maximum allowed CPU time per request (default 60).
--limit-time-real=LIMIT_TIME_REAL
Maximum allowed Real time per request (default 120).
--limit-request=LIMIT_REQUEST
Maximum number of request to be processed per worker
(default 8192).
Also, if you are using WSGI or something similar to run Odoo, those settings may need some tuning as well; a sketch of the relevant values follows below.
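For example, here is a minimal sketch of the matching odoo.conf entries for a box with roughly 6 GB free. The file path, worker count, and byte values are assumptions; tune them to your CPU count and workload:

# /etc/odoo/odoo.conf -- illustrative values only
[options]
workers = 4
# ~1.25 GB soft / ~1.5 GB hard per worker caps 4 workers at roughly 6 GB in the worst case
limit_memory_soft = 1342177280
limit_memory_hard = 1610612736
limit_time_cpu = 120
limit_time_real = 240
max_cron_threads = 2

The same values can be passed on the command line instead, e.g. odoo.py --workers=4 --limit-memory-soft=1342177280 --limit-memory-hard=1610612736.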
Folks,
With regard to Docker Compose v3's 'cpus' parameter (under 'deploy' > 'resources' > 'limits'), which limits the CPUs available to a service: is it an absolute number specifying a count of CPUs, or is it a (more useful) percentage of the available CPUs?
From what I have read, it appears to be an absolute number. So if a host has 4 CPUs and two services in the compose file are each set to 0.5, the two services combined can only use a maximum of 1 CPU (0.5 each), leaving the remaining 3 CPUs idle.
But thinking out loud, it seems to me it would be nicer if this were a percentage of the available cores. For the same example, each service could then use up to 2 CPUs, and the two combined could use all 4 when needed. That way, when I increase or decrease the available cores, the relative setting would save me from having to modify this value again.
EDIT(09/10/21):
On reading this, it appears that the above can be achieved with the 'cpu-shares' setting instead of 'cpus'. Is my understanding correct?
The doc for 'cpu-shares' however mentions the below cautionary note,
"It does not guarantee or reserve any specific CPU access."
If the above is achieved with this setting, then what does it mean (what do I lose) not to have a guarantee or reservation?
EDIT(09/13/21):
Just to summarize,
The 'cpus' parameter is an absolute number that refers to the number of CPUs a service has reserved for its use at all times. Correct?
The 'cpu-shares' parameter is a relative weight whose value is used to compute the percentage of the total available CPU that a service can use, but only when there is contention. Correct?
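For anyone experimenting with the difference, the same two knobs exist on plain docker run, which makes them easy to compare on a single host (the values and the stress image below are just an illustrative sketch):

# hard cap: never more than half a CPU, even if the other cores are idle
docker run --rm --cpus=0.5 progrium/stress --cpu 4

# relative weight: may use every idle CPU; under contention it gets CPU time
# proportional to its weight (the default weight is 1024)
docker run --rm --cpu-shares=512 progrium/stress --cpu 4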
Autoscaling: Unable to reach resize target in zone us-central1-b. QUOTA_EXCEEDED: Quota 'IN_USE_ADDRESSES' exceeded. Limit: 575.0 in region us-central1.
From https://cloud.google.com/dataflow/service/dataflow-service-desc my limit should be 1,000, but when running my dataflow job I get the warning above about a limit of only 575. Should I explicitly set a different region as specified in https://cloud.google.com/dataflow/docs/concepts/regional-endpoints or can I increase the limit to 1,000 in the central region?
I guess you are using the default machine type, so each worker has one CPU, but each worker also gets its own IP address. Even though you can use up to 1,000 instances, your "IN_USE_ADDRESSES" quota in that region is set to 575, hence the error.
If you don't want to increase the "In Use Addresses" quota, you can choose a machine type with more CPUs per instance, for example n1-standard-4. Otherwise, you can ask for a higher "In Use Addresses" quota.
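Assuming a Python pipeline, the machine type can be set when launching the job; the pipeline file, project, and worker count below are placeholders (Java pipelines use --workerMachineType instead):

python my_pipeline.py \
  --runner=DataflowRunner \
  --project=my-project \
  --region=us-central1 \
  --worker_machine_type=n1-standard-4 \
  --max_num_workers=250
# 4 CPUs per worker means roughly 4x fewer workers, and thus 4x fewer
# in-use IP addresses, for the same total CPU count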
Docker v1.12 service comes with four flags for setting the resource limits on a service.
--limit-cpu value Limit CPUs (default 0.000)
--limit-memory value Limit Memory (default 0 B)
--reserve-cpu value Reserve CPUs (default 0.000)
--reserve-memory value Reserve Memory (default 0 B)
What is the difference between limit and reserve in this context?
What does the cpu value mean here? Is it a number of cores? A CPU share? What is the unit?
Reserve holds those resources on the host so they are always available for the container. Think dedicated resources.
Limit prevents the binary inside the container from using more than that. Think of controlling runaway processes in the container.
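As a concrete sketch (the service name, image, and values are made up), both kinds of constraints can be set when creating a service:

docker service create --name web \
  --reserve-cpu 1 --reserve-memory 256m \
  --limit-cpu 2 --limit-memory 512m \
  nginx
# the scheduler only places the task on a node with 1 CPU and 256 MB free
# (reservation); the running container cannot exceed 2 CPUs or 512 MB (limit)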
Based on my limited testing with stress, --limit-cpu is a percentage of a core, though if there are multiple threads it will spread them across cores and seems to keep the total near what you'd expect.
In the pic below, from left to right, the runs were --limit-cpu 4, then 2.5, then 2, then 1. All of those tests had stress set to 4 CPU worker threads.
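The test above can be reproduced with something along these lines (the service name and stress image are just one way to generate the load):

docker service create --name cpu-test --limit-cpu 2.5 progrium/stress --cpu 4
# then watch per-core usage on the node with htop, or overall usage with:
docker stats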
We are using Sidekiq Pro 1.7.3 and Sidekiq 3.1.4, Ruby 2.0, and Rails 4.0.5 on Heroku, with the RedisGreen add-on and 1.75 GB of memory.
We run a lot of Sidekiq batch jobs, probably around 2 million jobs a day. What we've noticed is that Redis memory usage steadily increases over the course of a week. I would have expected that when the queues are empty and no workers are busy, Redis would have low memory usage, but it appears to stay high. I'm forced to do a flushdb pretty much every week or so because we approach our Redis memory limit.
I've had some correspondence with RedisGreen and they suggested I reach out to the Sidekiq community. Here are some stats from RedisGreen:
Here's a quick summary of RAM use across your database:
The vast majority of keys in your database are simple values taking up 2 bytes each.
200MB is being consumed by "queue:low", the contents of your low-priority sidekiq queue.
The next largest key is "dead", which occupies about 14MB.
And:
We just ran an analysis of your database - here is a summary of what we found in 23129 keys:
18448 strings with 1048468 bytes (79.76% of keys, avg size 56.83)
6 lists with 41642 items (00.03% of keys, avg size 6940.33)
4660 sets with 3325721 members (20.15% of keys, avg size 713.67)
8 hashs with 58 fields (00.03% of keys, avg size 7.25)
7 zsets with 1459 members (00.03% of keys, avg size 208.43)
It appears that you have quite a lot of memory occupied by sets. For example, each of these sets has more than 10,000 members and occupies nearly 300KB:
b-3819647d4385b54b-jids
b-3b68a011a2bc55bf-jids
b-5eaa0cd3a4e13d99-jids
b-78604305f73e44ba-jids
b-e823c15161b02bde-jids
These look like Sidekiq Pro "batches". It seems like some of your batches are getting filled up with very large numbers of jobs, which is causing the additional memory usage that we've been seeing.
Let me know if that sounds like it might be the issue.
Don't be afraid to open a Sidekiq issue or email prosupport # sidekiq.org directly.
Sidekiq Pro Batches have a default expiration of 3 days. If you set the Batch's expires_in setting longer, the data will sit in Redis longer. Unlike jobs, batches do not disappear from Redis once they are complete. They need to expire over time. This means you need enough memory in Redis to hold N days of Batches, usually not a problem for most people, but if you have a busy Sidekiq installation and are creating lots of batches, you might notice elevated memory usage.
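One way to confirm whether batch data is actually expiring, rather than accumulating indefinitely, is to check the TTL of one of the batch keys from the report above:

redis-cli TTL b-3819647d4385b54b-jids
# a positive number is the remaining lifetime in seconds (for the default
# 3-day expiration it will be at most 259200); -1 means no expiration is set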
Is there a limit to the number of processes that can be registered globally? Or is this only limited by memory / the max number of atoms?
Ubuntu 12.04 and Erlang R15B01.
Good question! I'd bet on the number of atoms, if you take into account the following. The Efficiency Guide has a section on system limits:
Processes
The maximum number of simultaneously alive Erlang processes is by default 32768. This limit can be raised up to at most 268435456 processes at startup (see documentation of the system flag +P in the erl(1) documentation). The maximum limit of 268435456 processes will at least on a 32-bit architecture be impossible to reach due to memory shortage.
Distributed nodes
Known nodes
A remote node Y has to be known to node X if there exist any pids, ports, references, or funs (Erlang data types) from Y on X, or if X and Y are connected. The maximum number of remote nodes simultaneously/ever known to a node is limited by the maximum number of atoms available for node names. All data concerning remote nodes, except for the node name atom, are garbage-collected.
Also, the erl manual section describes the flag you can use to alter the number of processes in your node:
+P Number
Sets the maximum number of concurrent processes for this system. Number must be in the range 16..134217727. Default is 32768.
Since you can alter the number of concurrent processes per node, but you can't alter the number of allowed atoms, and registered process names are atoms that are replicated on every node, the atom limit should be what bounds the total number of globally registered processes.
Hope it helps :)
EDIT: Actually, turns out you can change the number of allowed atoms :)
Atoms
By default, the maximum number of atoms is 1048576. This limit can be raised or lowered using the +t option.
+t size
Set the maximum number of atoms the VM can handle. Default is 1048576.
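As a quick sketch (the numbers are arbitrary; pick values that suit your workload), both limits are set when the node starts, and the process limit can be checked from the shell:

erl +P 1000000 +t 5000000
# inside the Erlang shell:
# 1> erlang:system_info(process_limit).
# 1000000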