Anyone know how long Azure Key Vault retains old keys? The scenario would be an on-prem SQL Server VM using Key Vault for TDE, with keys being rotated regularly.
If we were to restore an old snapshot of the SQL server, will it be able to access the old, retired key?
Soft-deleted Key Vault resources are retained for a set retention period, 90 days by default.
Upon deleting a key vault object, such as a key, the service will place the object in a deleted state, making it inaccessible to any retrieval operations. While in this state, the key vault object can only be listed, recovered, or forcefully/permanently deleted.
At the same time, Key Vault will schedule the deletion of the underlying data corresponding to the deleted key vault or key vault object for execution after a predetermined retention interval. The DNS record corresponding to the vault is also retained for the duration of the retention interval.
Use a key without an expiration date for the TDE protector, and don't set an expiration date on a key that is already in use: once the key expires, the encrypted databases lose access to their TDE protector and become inaccessible within 24 hours.
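If you want to check whether a retired key is still within that retention window before restoring the snapshot, the Azure CLI can list and recover soft-deleted keys; a quick sketch (vault and key names are placeholders, and this assumes soft-delete is enabled on the vault):

# list keys that are soft-deleted but still inside the retention window
az keyvault key list-deleted --vault-name my-tde-vault

# bring one of them back so a restored SQL Server VM can use it again
az keyvault key recover --vault-name my-tde-vault --name my-tde-protector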
For more details, you could refer to this article and this one.
My question is very simple.
Assume I'm not specifying an expires_in option for my generated cache key.
Let's say I generate a cache key for posts with the key "posts/#{maximum_record_updated_at}" and no expires_in.
Now my content changes, and a new key "posts/#{maximum_record_updated_at}" is generated and used.
The cache now reads from the latest key only.
The question is: what happens to the first key, which is never going to be used again and has no expires_in specified?
Will it live forever, or will Redis take care of deleting it since it's no longer used?
I know I could simply specify expires_in, but posts (in my case) could go a week without any changes, maybe months or years, so I'm generating a new cache key only when something changes.
I'm just worried about the old keys causing unexpected memory issues.
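For reference, the pattern I'm describing looks roughly like this (just a sketch; the Post model and the Redis-backed Rails cache store are assumptions on my part):

# Key-based expiration: the key changes whenever any post changes,
# so stale entries are simply never read again.
def cached_posts
  stamp = Post.maximum(:updated_at)&.to_i
  Rails.cache.fetch("posts/#{stamp}") do   # note: no expires_in
    Post.order(updated_at: :desc).to_a
  end
end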
The old unused key will stay there until Redis reaches its maxmemory limit.
Then, Redis will stop accepting write commands or will start evicting keys, depending on the config value of maxmemory-policy. See https://redis.io/topics/lru-cache
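If the Redis instance is dedicated to caching, the usual setup (per the page linked above) is to cap memory and let Redis evict the least recently used keys, so old cache entries like these get dropped automatically; for example in redis.conf (the size is illustrative):

maxmemory 256mb
maxmemory-policy allkeys-lru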
I was trying to reset a consumer configuration by deleting it and letting my script recreate it later but I hit an error about new consumers not being deletable.
kafka#kafka-0:~$ ./bin/kafka-consumer-groups.sh --bootstrap-server kafka-0:9092 --delete --group etl
Option '[delete]' is only valid with '[zookeeper]'. Note that there's no need to delete group metadata for the new consumer as the group is deleted when the last committed offset for that group expires.
Now I'm wondering, what's the name of the consumer config option which controls the expiration from this error message?
The config is actually a broker config that determines how long to keep committed offsets around: offsets.retention.minutes. You may also want to adjust offsets.retention.check.interval.ms depending on the retention value you pick. (reference)
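For example, in the broker's server.properties, something like the following keeps committed offsets for 7 days and checks for expired offsets every 10 minutes (the values are illustrative; the defaults differ between Kafka versions):

# how long the broker keeps committed offsets
offsets.retention.minutes=10080
# how often the broker checks for expired offsets
offsets.retention.check.interval.ms=600000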
As described in https://developer.apple.com/reference/cloudkit/ckserverchangetoken, the CloudKit servers return a change token as part of the CKFetchRecordZoneChangesOperation callback response. For what set of subsequent record fetches should I include the given change token in my fetch calls?
Only fetches to the zone we fetched from?
Or would it apply to any fetches to the database that zone is in? Or perhaps the whole container that the database is in?
what about app extensions? (App extensions have the same iCloud user as the main app, but have a different "user" as returned by fetchUserRecordIDWithCompletionHandler:, at least in my testing) Would it be appropriate to supply a change token from the main app in a fetch call from, say, a Messages app extension? I assume not, but would love to have a documented official answer.
I, too, found the scope of CKServerChangeToken a little unclear. However, after reviewing the documentation, it appears that both CKFetchDatabaseChangesOperation and CKFetchRecordZoneChangesOperation provide and manage their own server change tokens.
This is particularly useful if you decide to follow the CloudKit workflow Dave Browning outlines in his 2017 WWDC talk when fetching changes (around the 8 minute mark).
The recommended approach is to:
1) Fetch changes for a database using CKFetchDatabaseChangesOperation. Upon receiving the updated token via changeTokenUpdatedBlock, persist this locally. This token is 'scoped' to either the private or shared CKDatabase the operation was added to. The public database doesn't offer change tokens.
2) If you receive zone IDs via the recordZoneWithIDChangedBlock in the previous operation, this indicates there are zones with changes you can fetch using CKFetchRecordZoneChangesOperation. This operation takes its own unique server change token via its rather cumbersome initializer parameter: CKFetchRecordZoneChangesOperation.ZoneConfiguration. This is 'scoped' to this particular CKRecordZone. So, again, when receiving an updated token via recordZoneChangeTokensUpdatedBlock, it needs persisting locally (perhaps with a key which relates to its CKRecordZone.ID).
The benefit here is that it probably minimises the number of network calls. Fetching database changes first prevents making calls for each record zone if the database doesn't report any changed zone ids.
Here's a code sample from the CloudKit team which runs through this workflow. Admittedly, a few of the APIs have since changed, and the comments don't explicitly make the 'scope' of the server change tokens clear.
I had at first misinterpreted OAuth's timestamp requirement as meaning that any timestamp not within 30 seconds of the current time would be denied. It turned out this was wrong for a few reasons, including the fact that we could not guarantee that every system clock was in sync down to the minute and second, regardless of time zone. Then I read it again to get more clarity:
"Unless otherwise specified by the Service Provider, the timestamp is
expressed in the number of seconds since January 1, 1970 00:00:00 GMT.
The timestamp value MUST be a positive integer and MUST be equal or
greater than the timestamp used in previous requests."
source: http://oauth.net/core/1.0/#nonce
Meaning the timestamps are only compared against previous requests from the same source, not against my server's system clock.
Then I read a more detailed description here: http://hueniverse.com/2008/10/beginners-guide-to-oauth-part-iii-security-architecture/
(TL;DR? - skip to the bold parts below)
To prevent compromised requests from being used again (replayed),
OAuth uses a nonce and timestamp. The term nonce means ‘number used
once’ and is a unique and usually random string that is meant to
uniquely identify each signed request. By having a unique identifier
for each request, the Service Provider is able to prevent requests
from being used more than once. This means the Consumer generates a
unique string for each request sent to the Service Provider, and the
Service Provider keeps track of all the nonces used to prevent them
from being used a second time. Since the nonce value is included in
the signature, it cannot be changed by an attacker without knowing the
shared secret.
Using nonces can be very costly for Service Providers as they demand
persistent storage of all nonce values received, ever. To make
implementations easier, OAuth adds a timestamp value to each request
which allows the Service Provider to only keep nonce values for a
limited time. When a request comes in with a timestamp that is older
than the retained time frame, it is rejected as the Service Provider
no longer has nonces from that time period. It is safe to assume that
a request sent after the allowed time limit is a replay attack. OAuth
provides a general mechanism for implementing timestamps but leaves
the actual implementation up to each Service Provider (an area many
believe should be revisited by the specification). From a security
standpoint, the real nonce is the combination of the timestamp value
and nonce string. Only together they provide a perpetual unique value
that can never be used again by an attacker.
The reason I am confused is: if the nonce is only used once, why would the Service Provider ever reject based on the timestamp? "Service Provider no longer has nonces from that time period" is confusing to me and sounds as if a nonce can be re-used as long as it is within 30 seconds of the last time it was used.
So can anyone clear this up for me? What is the point of the timestamp if the nonce is single-use and I am not comparing the timestamp against my own system clock (because that obviously would not be reliable)? It makes sense that the timestamps are only relative to each other, but with the unique-nonce requirement it seems irrelevant.
The timestamp is used for allowing the server to optimize its storage of nonces. Basically, consider the real nonce to be the combination of the timestamp and the random string. But by having a separate timestamp component, the server can implement a time-based restriction using a short window (say, 15 minutes) and limit the amount of storage it needs. Without timestamps, the server would need unlimited storage to keep every nonce ever used.
Let's say you decide to allow up to 15 minutes time difference between your clock and the client's and are keeping track of the nonce values in a database table. The unique key for the table is going to be a combination of 'client identifier', 'access token', 'nonce', and 'timestamp'. When a new request comes in, check that the timestamp is within 15 minutes of your clock then lookup that combination in your table. If found, reject the call, otherwise add that to your table and return the requested resource. Every time you add a new nonce to the table, delete any record for that 'client identifier' and 'access token' combination with timestamp older than 15 minutes.
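Here's a minimal Ruby sketch of that check (the 15-minute window and the in-memory Set are illustrative; a real Service Provider would back this with a shared database table as described above):

require 'set'

class NonceStore
  WINDOW = 15 * 60 # seconds of allowed clock difference

  def initialize
    @seen = Set.new # holds [client_id, access_token, nonce, timestamp] tuples
  end

  # Returns true if the request passes the timestamp/nonce check.
  def fresh?(client_id, access_token, nonce, timestamp, now: Time.now.to_i)
    return false if (now - timestamp).abs > WINDOW          # outside the window: reject
    key = [client_id, access_token, nonce, timestamp]
    return false if @seen.include?(key)                     # seen before: replay, reject
    @seen << key
    @seen.delete_if { |(_, _, _, ts)| now - ts > WINDOW }    # prune old nonces, keep storage bounded
    true
  end
end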
OK, after enough pondering I believe I have cracked this one.
They want me to always know the timestamp of the last successful request, so that if any request comes in with an earlier timestamp it will be ignored.
Also, the nonce must be unique, but I am only going to store nonces for a certain date range; if a timestamp is more than so many hours old, its nonce will be dropped and could in principle be used again. However, because the last used timestamp is also stored, an attacker cannot re-use an old request even if its nonce now looks unique, because the timestamp on that request would be outdated.
But this only works because of the signature. If they changed the timestamp or the nonce on a request, the signature would no longer match the request and it would be denied (as the timestamp and nonce are both part of the signature creation, and they do not have the signing key).
Phew!
If OAuth used just the timestamp, it'd be relatively easy for an attacker to guess what the next timestamp would be and inject their own request into the process. They'd just have to use "previous timestamp + 1".
By using the nonce, which is generated in a cryptographically secure manner (hopefully), the attacker can't simply inject TS+1, because they won't have the proper nonce to authenticate themselves.
Think of it as a secure door lock that requires both a keycard and a pin code. You can steal the keycard, but still can't get through the door because you don't know the pin.
Can't comment yet, so posting as an answer. Reply to #tofutim's comment
Indeed, if we insist that the timestamp value of each new request has to be greater than the timestamps of all previous requests, there seems to be little point in the nonce:
Replay attacks are prevented, since the provider would reject a message with a timestamp equal to the previous one
Yes, the next timestamp is easy for an attacker to guess - just use timestamp + 1 - but the attacker still has no way to tamper with the timestamp parameter, since all parameters are signed using the consumer's secret (and token secret)
However, reading the OAuth 1.0a spec reveals that
Unless otherwise specified by the Service Provider, the timestamp is expressed in the number of seconds since January 1, 1970 00:00:00 GMT. The timestamp value MUST be a positive integer and MUST be equal or greater than the timestamp used in previous requests.
The Consumer SHALL then generate a Nonce value that is unique for all requests with that timestamp
So nonces are used to prevent replay attacks when you send multiple requests with the same timestamp.
Why allow requests with the same timestamp at all? Consider the case when you want to send multiple requests to independent resources in parallel, to finish the processing faster. Conceptually, the server handles each request one by one. You don't know in which order the requests will arrive, since it depends on many things such as the OS, the network the packets travel over, the server logic and so on.
If you send requests with increasing timestamps, there's still a possibility that the request with the higher timestamp will be handled first, and then all requests with lower timestamps will fail. Instead, you can send requests with equal timestamps and different nonces.
It is reasonable to assume one could attempt to crack the nonce with brute force. A timestamp reduces the chance someone might succeed.
I am storing many images in Amazon S3,
using a ruby lib (http://amazon.rubyforge.org/)
I don't care about photos older than 1 week, so to free up space in S3 I have to delete those photos.
I know there is a method to delete the object in a certain bucket:
S3Object.delete 'photo-1.jpg', 'photos'
Is there a way to automatically delete the images older than a week?
If not, I'll have to write a daemon to do that :-(
Thank you
UPDATE: now it is possible, check Roberto's answer.
You can use the Amazon S3 Object Expiration policy
Amazon S3 - Object Expiration | AWS Blog
If you use S3 to store log files or other files that have a limited
lifetime, you probably had to build some sort of mechanism in-house to
track object ages and to initiate a bulk deletion process from time to
time. Although our new Multi-Object deletion function will help you to
make this process faster and easier, we want to go even farther.
S3's new Object Expiration function allows you to define rules to
schedule the removal of your objects after a pre-defined time period.
The rules are specified in the Lifecycle Configuration policy that you
apply to a bucket. You can update this policy through the S3 API or
from the AWS Management Console.
Object Expiration | AWS S3 Documentation
Some objects that you store in an Amazon S3 bucket might have a
well-defined lifetime. For example, you might be uploading periodic
logs to your bucket, but you might need to retain those logs for a
specific amount of time. You can use Object Lifecycle
Management to specify a lifetime for objects in your bucket; when the
lifetime of an object expires, Amazon S3 queues the objects for
deletion.
Ps: Click on the links for more information.
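For completeness, here's roughly what creating such a rule looks like with the current aws-sdk-s3 gem (not the old aws-s3 gem from the question; the bucket name and region are placeholders):

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-east-1')

# Expire every object in the bucket 7 days after it was created.
s3.put_bucket_lifecycle_configuration(
  bucket: 'my-photos-bucket',
  lifecycle_configuration: {
    rules: [{
      id: 'expire-old-photos',
      status: 'Enabled',
      filter: { prefix: '' },   # empty prefix = all objects; narrow it with a prefix if needed
      expiration: { days: 7 }
    }]
  }
)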
If you have access to a local database, it's easy to simply log each image (you may be doing this already, depending on your application), and then you can perform a simple query to retrieve the entire list and delete each one. This is much faster than querying S3 directly, but it does require local storage of some kind.
Unfortunately, Amazon doesn't offer an API for automatic deletion based on a specific set of criteria.
You'll need to write a daemon that goes through all of the photos, selects just those that meet your criteria, and then deletes them one by one.
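If you do go down that road, a minimal sketch using the current aws-sdk-s3 gem might look like this (bucket name and region are placeholders; run it from cron or whatever scheduler you prefer):

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-east-1')
bucket = 'my-photos-bucket'
cutoff = Time.now - (7 * 24 * 60 * 60)   # one week ago

# Page through every object and delete the ones older than the cutoff.
s3.list_objects_v2(bucket: bucket).each do |page|
  (page.contents || []).each do |object|
    if object.last_modified < cutoff
      s3.delete_object(bucket: bucket, key: object.key)
    end
  end
end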