Redis has the following settings:
"config get maxmemory"
1) "maxmemory"
2) "2147483648"
(which is 2 GB).
But when I run "info" I see:
used_memory:6264349904
used_memory_human:5.83G
used_memory_rss:6864515072
Clearly it ignores all the settings... Why?
P.S.
"config get maxmemory-policy" shows:
1) "maxmemory-policy"
2) "volatile-ttl"
and: "config get maxmemory-samples" shows:
1) "maxmemory-samples"
2) "3"
Which means it should evict the keys with the nearest expiration time first...
Do you have expiration settings on all your keys? The volatile-ttl policy will only remove keys that have an expiration set. This should be visible in your info output.
If you don't have expiration TTLs set, try allkeys-lru or allkeys-random for your policy.
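For example, a minimal sketch of making that change at runtime with redis-cli (the 2gb value just mirrors the question; CONFIG REWRITE is optional and only works if Redis was started with a config file):
# set the memory limit (same value as in the question)
redis-cli CONFIG SET maxmemory 2gb
# evict any key, least-recently-used first, once the limit is reached
redis-cli CONFIG SET maxmemory-policy allkeys-lru
# optionally persist the runtime change back to redis.conf
redis-cli CONFIG REWRITE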
According to http://redis.io/topics/faq
You can also use the "maxmemory" option in the config file to put a limit to the memory Redis can use. If this limit is reached Redis will start to reply with an error to write commands (but will continue to accept read-only commands).
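For reference, the equivalent static configuration in redis.conf would look something like this (a sketch; the allkeys-lru policy is an assumption, pick whichever fits your data):
# redis.conf
maxmemory 2gb
maxmemory-policy allkeys-lru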
Use case: Set the maximum number of messages (within a timeframe) to be sent to a target service.
Example.
We collect logs from service X, which produces log lines like these:
{"#timestamp":"2020-10-30T13:00:00.310Z","level":"INFO","message":"This is some event"}
{"#timestamp":"2020-10-30T13:00:00.315Z","level":"WARN","message":"This is warn abc123"}
{"#timestamp":"2020-10-30T13:00:00.325Z","level":"WARN","message":"This is warn abc123"}
{"#timestamp":"2020-10-30T13:00:00.327Z","level":"WARN","message":"This is warn abc123"}
{"#timestamp":"2020-10-30T13:00:00.335Z","level":"WARN","message":"This is warn xyz123"}
As you can see, the same warning (abc123) was logged multiple times by the service within 12 ms.
What I want is to send only one of them.
So Fluentd should forward only these to the target service:
{"#timestamp":"2020-10-30T13:00:00.310Z","level":"INFO","message":"This is some event"}
{"#timestamp":"2020-10-30T13:00:00.315Z","level":"WARN","message":"This is warn abc123"}
{"#timestamp":"2020-10-30T13:00:00.335Z","level":"WARN","message":"This is warn xyz123"}
Which timestamp is used, or whether a counter is added instead, doesn't matter to me.
Is there a filter or plugin for this use case? Something where I can set a regex rule for the messages (to decide whether several messages should be considered equal) and a timeframe?
In Fluentd you could try the throttle plugin (https://github.com/rubrikinc/fluent-plugin-throttle) with the message key as group_key (not sure about performance in this case, though).
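A minimal sketch of such a filter, assuming the plugin is installed and your records arrive under a tag like service.x (the tag and the one-second window are assumptions, adjust them to your pipeline):
<filter service.x.**>
  @type throttle
  # group records by the exact message text
  group_key message
  # time window, in seconds, in which duplicates are counted
  group_bucket_period_s 1
  # allow only one record per group per window
  group_bucket_limit 1
  # drop the excess records instead of delaying them
  group_drop_logs true
</filter>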
In Fluent Bit you can use the built-in SQL stream processor and write a SELECT with WINDOW and GROUP BY statements: https://docs.fluentbit.io/stream-processing/getting_started/fluent_bit_sql#select-statement.
I'm using the Nebula Docker image, but it throws an error the first time I connect. When I retry, everything is OK. Why is that? Do I have to retry every time?
This is likely caused by the heartbeat_interval_secs value, which controls how often data is fetched from the meta server. Take the following steps to resolve it:
If the meta service has registered, check the heartbeat_interval_secs value in the console with the following commands.
nebula> GET CONFIGS storage:heartbeat_interval_secs
nebula> GET CONFIGS graph:heartbeat_interval_secs
If the value is large, change it to 1 second with the following commands.
nebula> UPDATE CONFIGS storage:heartbeat_interval_secs=1
nebula> UPDATE CONFIGS graph:heartbeat_interval_secs=1
Note that the changes take effect in the next heartbeat period.
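If you want the setting to survive restarts, the same flag can also be set in the gflags-style configuration files of the services (file names and locations depend on your image, so treat them as assumptions):
# nebula-storaged.conf / nebula-graphd.conf
--heartbeat_interval_secs=1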
I have an OpenWhisk deployment on a Kubernetes cluster that was done using [1]. I know I can change the memory limit for a function by adding --memory x when creating the function. However, if I try to set a value larger than 512 MB, I get the following error.
requirement failed: memory 812 MB exceeds allowed threshold of 536870912 B (code 10543)
I assume this is a configuration set during setup or within the code. Is there a way to increase this limit to a custom value? If so, what configuration do I need to perform in order to do this?
[1] https://github.com/apache/incubator-openwhisk-deploy-kube
The memory limits are configurable for your deployment as of this patch: https://github.com/apache/incubator-openwhisk/pull/3148. You can set the max memory in your deployment to suit your purposes.
You have to set the environment variable CONFIG_whisk_memory_max on the invoker to a higher value, for example 1 GB (1073741824 bytes).
One way is to edit the invoker deployment file:
https://github.com/apache/incubator-openwhisk-deploy-kube/blob/master/kubernetes/invoker/invoker-dcf.yml#L74
- name: "CONFIG_whisk_memory_max"
  value: "1073741824"
With ansible-playbook, you can specify the option -e limit_action_memory_max=1073741824 when you deploy OpenWhisk. The whole command might look like:
ansible-playbook openwhisk.yml -e limit_action_memory_max=1073741824 # with other options like '-e invoker_user_memory=25600m'
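Once the invoker accepts the higher limit, the --memory request from the question should go through; a hypothetical example with the wsk CLI (the action name and file are placeholders):
wsk action update myAction myAction.js --memory 812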
I am trying to migrate from Play 2.5 to 2.6.2. I keep getting the "URI length exceeds" error. Does anyone know how to override this?
I tried the Akka settings below, but still no luck.
play.server.akka {
  http.server.parsing.max-uri-length = infinite
  http.client.parsing.max-uri-length = infinite
  http.host-connection-pool.client.parsing.max-uri-length = infinite
  http.max-uri-length = infinite
  max-uri-length = infinite
}
Simply add
akka.http {
  parsing {
    max-uri-length = 16k
  }
}
to your application.conf. The prefix play.server is only used for a small subset of convenience features for the Akka HTTP integration into the Play Framework, e.g. play.server.akka.requestTimeout. Those are documented in the "Configuring the Akka HTTP server backend" documentation.
I was getting this error because the header length exceeded the default of 8 KB (8192 bytes). I added the following to build.sbt and it worked for me :D
javaOptions += "-Dakka.http.parsing.max-header-value-length=16k"
You can try something similar for the URI length if the other options don't work.
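For example, the analogous system property for the URI length (16k is just a sample value) would be:
javaOptions += "-Dakka.http.parsing.max-uri-length=16k"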
This took me way too long to figure out. It is somehow NOT to be found in the documentation.
Here is a snippet (confirmed working with Play 2.8) to put in your application.conf. It is also configurable via an environment variable and works for BOTH dev and prod mode:
# Dev Mode
play.akka.dev-mode.akka.http.parsing.max-uri-length = 16384
play.akka.dev-mode.akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
# Prod Mode
akka.http.parsing.max-uri-length = 16384
akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
You can then edit the config, or, for an already deployed application, just set PLAY_MAX_URI_LENGTH; it is dynamically configurable without the need to modify command-line arguments.
env PLAY_MAX_URI_LENGTH=16384 sbt run
If anyone is getting this type of error in the Chrome browser when trying to access a site or log in ("HTTP header value exceeds the configured limit of 8192 characters"):
Go to Chrome Settings -> Security and Privacy -> Site Settings -> View permissions and data stored across sites.
Search for the specific website and, for that site, click Clear all data.
I would like to configure a global retry limit in Sidekiq. By default Sidekiq limits the number of retries to 25, but I want to set it lower for all workers, to avoid the long default maximum retry period when the limit is not explicitly specified on the worker.
You can also configure it in your sidekiq.yml:
:max_retries: 10
:queues:
- queue_1
- queue_2
See the Sidekiq documentation for details.
Sidekiq.default_worker_options['retry'] = 10
https://github.com/mperham/sidekiq/wiki/Advanced-Options#workers
This value is stored in options and (AFAIK) has no nifty setter for it, so here you go:
Sidekiq.options[:max_retries] = 5
It might be set for RetryJobs in the middleware initializer as well.
You can use Sidekiq.default_worker_options in your initializer. So, to set a lower limit, it'd be:
Sidekiq.default_worker_options = { retry: 5 }
I'm currently working on setting this up to limit the amount of error noise created by our staging environments (for the sake of trying to stay well below our error-handling service limits). It seems that the key is now max_retries when changing the amount, while retry is a boolean for whether the job should retry at all or go straight to the "Dead" queue.
https://github.com/mperham/sidekiq/wiki/Error-Handling#automatic-job-retry
This is what it looks like in my Sidekiq config file:
if Rails.env.staging?
Sidekiq.default_worker_options['max_retries'] = 5
end
UPDATE: it could have been my own confusion, but for some reason default_worker_options did not seem to be working consistently for me. I ended up changing it to the following, and it worked as I hoped: failed jobs went straight to the Dead queue.
Sidekiq.options[:max_retries] = 0