How to manage expiration of Kafka consumer groups

I was trying to reset a consumer group by deleting it and letting my script recreate it later, but I hit an error saying that groups for the new consumer can't be deleted.
kafka#kafka-0:~$ ./bin/kafka-consumer-groups.sh --bootstrap-server kafka-0:9092 --delete --group etl
Option '[delete]' is only valid with '[zookeeper]'. Note that there's no need to delete group metadata for the new consumer as the group is deleted when the last committed offset for that group expires.
Now I'm wondering: what's the name of the consumer config option that controls the expiration mentioned in this error message?

The config is actually a broker config that determines how long committed offsets are kept around: offsets.retention.minutes. You may also want to adjust offsets.retention.check.interval.ms depending on the retention value you pick.
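As a sketch, the relevant broker settings in server.properties would look something like this (the values below are illustrative, not recommendations; the defaults differ between Kafka versions):
offsets.retention.minutes=10080
offsets.retention.check.interval.ms=600000
offsets.retention.minutes controls how long committed offsets are retained (in recent Kafka versions, counted from the moment the group becomes empty), and offsets.retention.check.interval.ms controls how often the broker scans for expired offsets.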

Related

How to configure PostgreSQL client_min_messages on Heroku & Rails

I am trying to reduce some logging noise I am getting from PostgreSQL on my Heroku/Rails application. Specifically, I am trying to configure the client_min_messages setting to warning instead of the default notice.
I followed the steps in this post and specified min_messages: warning in my database.yml file but that doesn't seem to have any effect on my Heroku PostgreSQL instance. I'm still seeing NOTICE messages in my logs and when I run SHOW client_min_messages on the database it still returns notice.
Here is a redacted example of the logs I'm seeing in Papertrail:
Nov 23 15:04:51 my-app-name-production app/postgres.123467 [COLOR] [1234-5] sql_error_code = 00000 log_line="5733" application_name="puma: cluster worker 0: 4 [app]" NOTICE: text-search query contains only stop words or doesn't contain lexemes, ignored
I can also confirm that the setting does seem to be in the Rails configuration - Rails.application.config.database_configuration[Rails.env] in a production console does show a hash containing "min_messages"=>"warning".
I also tried manually updating that via the PostgreSQL console - so SET client_min_messages TO WARNING; - but that setting doesn't 'stick'. It seems to be reset on the next session.
How do I configure client_min_messages to be warning on Heroku/Rails?
If all else fails and your log is flooded by server logs you can't control or client logs you can't trace and switch off, Papertrail lets you filter out anything you don't want. The database/client still generates the messages and Heroku still passes them to Papertrail, but Papertrail discards them as they come in.
Shotgun method, PostgreSQL-specific
REVOKE SET ON PARAMETER client_min_messages, log_min_messages FROM your_app_user;
REVOKE GRANT OPTION FOR SET ON PARAMETER client_min_messages, log_min_messages FROM your_app_user;
ALTER SYSTEM SET client_min_messages = WARNING;
ALTER SYSTEM SET log_min_messages = WARNING;
ALTER DATABASE db_used_by_your_app SET client_min_messages = WARNING;
ALTER DATABASE db_used_by_your_app SET log_min_messages = WARNING;
ALTER ROLE your_app_user SET client_min_messages = WARNING;
ALTER ROLE your_app_user SET log_min_messages = WARNING;
And then you need to either wait, restart the app, force it to re-connect or restart the db/instance/server/cluster it connects to.
If your app opens and closes connections, just wait: over time old connections will be replaced by new ones that pick up the new settings.
If it uses a pool, it'll keep re-using connections it already has, so you'll have to force it to re-open them for the change to propagate. You might need to restart the app, or they can be force-closed:
SELECT pg_terminate_backend(pid) from pg_stat_activity where pid<>pg_backend_pid();
The reason is that there's no way for you to alter session-level settings on the fly from the outside - all of the above only affect the defaults for new sessions. The REVOKE will prevent the app user from changing the setting, but it'll also throw an error if they actually try. I'm leaving it in for future reference, keeping in mind that at the moment Heroku supports PostgreSQL versions up to 14, and REVOKE SET ON PARAMETER was only added in version 15.
You'd only need all of these at once if your Papertrail were collecting logs from both ends of each connection, across multiple databases and multiple users who can also keep changing the settings themselves. Check them one by one to isolate the root cause.
Context
There's one stream of messages sent back to each client, and one or more logs written on the server.
client_min_messages applies to the client-side messages sent back over each connection.
log_min_messages applies to the server log(s).
Depending on what feeds the log into your Papertrail, you might need to change only one of these. Manipulating settings can always be tricky because of how and when they apply. You have multiple levels where parameters can be specified, then overridden:
system-level parameters, loaded from postgresql.conf, then postgres -c/pg_ctl -o and postgresql.auto.conf, which reflects changes applied using ALTER SYSTEM SET ... or made directly.
database overrides system. Applied with ALTER DATABASE db SET...
user/role overrides database. ALTER ROLE user SET...
session overrides user/role. Changed with SET..., which clients also issue on connection init. If client_min_messages is specified (as min_messages) both in database.yml and in ENV['DATABASE_URL'], Rails will use the env setting, overriding the one found in the .yml, e.g. DATABASE_URL=postgres://localhost/rails_event_store_active_record?min_messages=warning
transaction-level parameters are the most specific, overriding all else - they are basically session-level parameters that revert to their previous value at the end of the transaction. SET LOCAL...
When a new session opens, it loads the system defaults, overrides them with the database-level, then role-level, after which clients typically issue their own SETs.
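One way to check which level actually won for your current session is to query pg_settings; its source column shows whether the effective value came from a default, the configuration file, a database or role override, or the session itself:
SELECT name, setting, source
FROM pg_settings
WHERE name IN ('client_min_messages', 'log_min_messages');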
It might be a good idea to make sure you're using the settings in database.yml that you think you're using, since it's possible to have multiple sets of them. There can be some logic in your app that keeps altering the setting.
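For reference, a minimal database.yml entry carrying the setting would look something like this (the database name is just a placeholder):
production:
  adapter: postgresql
  database: my_app_production
  min_messages: warning
Remember that on Heroku the connection details usually come from DATABASE_URL, so the ?min_messages=warning query parameter shown above is often the one that actually takes effect.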
I think you want log_min_messages, not client_min_messages:
Controls which message levels are written to the server log. Valid values are DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. Each level includes all the levels that follow it. The later the level, the fewer messages are sent to the log. The default is WARNING. Note that LOG has a different rank here than in client_min_messages. Only superusers and users with the appropriate SET privilege can change this setting.
I'm not sure if your database user will be allowed to set it, but you can try doing so at the database level:
ALTER DATABASE your_database
SET log_min_messages TO 'warning';
If this doesn't work, and setting it at the role or connection level doesn't work, and heroku pg:settings doesn't work (as confirmed by other answers and comments), the answer might unfortunately be that this isn't possible on Heroku.
Heroku Postgres is a managed service, so the vendor makes certain decisions that aren't configurable. This might be one of those situations.

Retention Policy Not Applied to Database (and not working)

I am running out of disk space, and I am not sure why the retention policy that I had created wasn't "used". I created a new retention policy, but it also doesn't seem to be used.
Any idea what I'm doing wrong?
Not sure why the influx settings command in your Docker container doesn't show you the change immediately. Could you double-check your Docker session?
Another way to confirm the list of retention policies for the specified database is to run the following query:
SHOW RETENTION POLICIES ON patience
You should see your retention policy '30_day' listed in the result, and it should now be the default one.
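If the policy turns out not to be the default (or doesn't exist yet), you can create or alter it explicitly; assuming InfluxDB 1.x, the database patience and a 30-day policy, something along these lines:
CREATE RETENTION POLICY "30_day" ON "patience" DURATION 30d REPLICATION 1 DEFAULT
ALTER RETENTION POLICY "30_day" ON "patience" DURATION 30d REPLICATION 1 DEFAULT
Keep in mind that a retention policy only expires data that was written into that policy, so data already sitting in the old default policy won't shrink just because a new default was added.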

How does Redis manage unused cache keys?

My question is very simple.
Assume I'm not specifying an expires_in option for my generated cache key.
Let's say I generate a cache key for posts with key "posts/#{maximum_record_updated_at}" and no expires_in.
Now my content changes, a new key "posts/#{maximum_record_updated_at}" is generated, and that new key is the one being used.
The cache now reads from the latest key only.
So the question is: what happens to the first key, which is never going to be used again and has no expires_in specified?
Will it live forever, or will Redis delete it once it's no longer used?
I know I could just specify expires_in, but posts (in my case) could go a week without any changes, maybe months or years, so I'm generating a new cache key only when something changes.
I'm just worried about the old keys and any unexpected memory issues.
The old unused key will stay there until Redis reaches its maxmemory limit.
Then, Redis will stop accepting write commands or will start evicting keys, depending on the config value of maxmemory-policy. See https://redis.io/topics/lru-cache
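If this Redis instance is used purely as a cache, a common setup is to cap memory and let Redis evict the least recently used keys, for example via redis-cli (or the equivalent maxmemory/maxmemory-policy directives in redis.conf; the values here are illustrative):
CONFIG SET maxmemory 256mb
CONFIG SET maxmemory-policy allkeys-lru
With allkeys-lru, old "posts/..." keys that are never read again are among the first to be evicted once the limit is reached; with the default noeviction policy, Redis would instead start rejecting writes.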

get service worker id or date from within service worker

Does anybody know if there is a way to get this number or date (the service worker ID or the received date that Chrome DevTools shows) from inside the service worker?
It would be handy to name my service worker cache either cache-1182 or cache-20171127171448
I guess the received date must be known before the install event.
No, this information does not exist in the Service Worker spec, so we must assume it is an internal implementation detail of Google Chrome and thus inaccessible.
To simplify, you might use an installation timestamp as the version number and then look for cached resources in the cache marked with the newest timestamp. You can inspect all the cache keys with caches.keys().
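As a rough sketch of one common variant: bake a version string into the worker file itself (any change to the file triggers a new install), use it as the cache name, and delete older caches on activate. The version value and URLs below are placeholders:
// Injected by your build, e.g. a timestamp or build number - not provided by the browser.
const CACHE_NAME = 'cache-20171127171448';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(['/', '/index.html']))
  );
});

self.addEventListener('activate', (event) => {
  // Remove every cache except the one for this version.
  event.waitUntil(
    caches.keys().then((keys) =>
      Promise.all(keys.filter((key) => key !== CACHE_NAME).map((key) => caches.delete(key)))
    )
  );
});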

IMAP client: sync local messages with server?

What's the best general technique for creating an IMAP client and keeping its local message store in sync with the server?
I guess I'm looking for the right way to figure out what's changed in an IMAP folder on the server since the last time I checked, and download those changes, to persist them to my local database... This would include messages no longer in the folder (deleted or moved), new messages, and changed messages...
I guess new messages are easy: I can grab the highest UID I have for a folder and then find messages after that UID. I'm not so sure about detecting messages that were deleted or moved, though, or changed (maybe some flags changed on a message).
Thanks!
For sync, you probably need the UID and flags of every message in each folder.
You can compare your locally cached UIDs against what the server returns; with this you can detect new messages and deleted ones.
You should probably use some kind of hash table for the search/compare, which will speed everything up.
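As a rough sketch of the compare step (plain TypeScript, no IMAP library; assume you've already built UID-to-flags maps for the folder from your local store and from a server fetch such as UID FETCH 1:* (FLAGS)):
function diffFolder(
  local: Map<number, string[]>,   // UID -> flags from the local cache
  server: Map<number, string[]>   // UID -> flags just fetched from the server
) {
  const added: number[] = [];
  const removed: number[] = [];
  const changed: number[] = [];

  for (const [uid, flags] of server) {
    const cached = local.get(uid);
    if (cached === undefined) {
      added.push(uid);  // new message not yet in the local store
    } else if ([...cached].sort().join(' ') !== [...flags].sort().join(' ')) {
      changed.push(uid);  // same UID, different flags
    }
  }
  for (const uid of local.keys()) {
    if (!server.has(uid)) removed.push(uid);  // gone from the server: deleted or moved
  }
  return { added, removed, changed };
}
One caveat: UIDs are only comparable while the folder's UIDVALIDITY stays the same; if the server reports a new UIDVALIDITY, the local cache for that folder has to be discarded and rebuilt.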
