We use secrets in AWS Secrets Manager to store environment variables, and it happens that we want to store cron job configuration there too.
I tried this:
Secret key          Secret value
MySpringScheduler   0 15 19 * * *
However, once our AWS application instance started, I got the error "Cron expression must consist of 6 fields (found 1 in "01519ap-0.0.1.......". It seems that all the spaces were removed. Is there a way I can keep the spaces in the value? I tried single quotes, like '0 15 19 * * *', but it is not working.
Thanks for any help!
Have you tried retrieving the secret from the command line to check whether it is what removes the spaces?
If you created the secret as 'Other type of secret', it should store the key/value pair as-is.
I tested your example and retrieved the value intact:
aws secretsmanager get-secret-value --secret-id my/secret
{
    "ARN": "arn:aws:secretsmanager:us-east-1:120908898939:secret:my/secret-4VBhSx",
    "Name": "my/secret",
    "VersionId": "4dd9e462-8748-4621-b388-2050a0d9de33",
    "SecretString": "{\"MySpringScheduler\":\"0 15 19 * * *\"}",
    "VersionStages": [
        "AWSCURRENT"
    ],
    "CreatedDate": "2022-07-07T21:47:30.110000-04:00"
}
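If you want to double-check from application code rather than the CLI, here is a minimal sketch using the AWS SDK for JavaScript v3 (the secret id my/secret is from the example above; the region is an assumption):

import { SecretsManagerClient, GetSecretValueCommand } from "@aws-sdk/client-secrets-manager";

async function checkSecret() {
  // region is an assumption for this sketch
  const client = new SecretsManagerClient({ region: "us-east-1" });
  const resp = await client.send(new GetSecretValueCommand({ SecretId: "my/secret" }));
  // SecretString is the raw JSON document shown above; parsing it
  // preserves the spaces inside the cron expression
  const secrets = JSON.parse(resp.SecretString ?? "{}");
  console.log(secrets["MySpringScheduler"]); // prints: 0 15 19 * * *
}

checkSecret();

If the value comes back intact here, the spaces are being stripped somewhere on the application side, not by Secrets Manager.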
So I'm using Puppet 3 and I have X.yaml and Y.yaml. X.yaml has profiles::resolv_conf::nameservers: [ '1.1.1.1', '8.8.8.8', '2.2.2.2' ] in it. I want to add that [ '1.1.1.1', '8.8.8.8', '2.2.2.2' ] as the value of servers: in Y.yaml:
'dns_test':
  plugin_type: 'dns_query'
  options:
    'servers': ['1.1.1.1', '8.8.8.8', '2.2.2.2']
    'domains': ['google.com']
    'record_type': 'A'
    'timeout': 5
  tags:
    'input_source': 'dns_query'
By doing this I want to make sure that when someone changes the values in profiles::resolv_conf::nameservers:, the value is changed in this telegraf plugin too.
I tried multiple solutions, but the one that came closest was:
'dns_test':
  plugin_type: 'dns_query'
  options:
    'servers': "%{hiera('profiles::resolv_conf::nameservers')}"
    'domains': ['google.com']
    'record_type': 'A'
    'timeout': 5
  tags:
    'input_source': 'dns_query'
but the problem is that Puppet was adding extra " " to the value, and the final value in the plugin config was:
"["1.1.1.1", "2.2.2.2", "8.8.8.8"]" instead of ["1.1.1.1", "2.2.2.2", "8.8.8.8"]
TL;DR: You can't.
From the current docs and the Puppet documentation archive, I confirm that no version of the %{hiera} interpolation function or its replacement, %{lookup}, ever supported interpolating values other than strings. That's expressed in the current docs like so:
The lookup and hiera interpolation functions look up a key and return
the resulting value. The result of the lookup must be a string; any
other result causes an error.
(Emphasis added)
What you're looking for would be supported by Hiera 5's %{alias} function, provided that the data are available somewhere else in the same hierarchy (which is also a requirement for %{hiera}). Since you're stuck on Puppet 3, however, you're probably on Hiera 2, and certainly not later than Hiera 3.
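For reference, on Hiera 5 it would look something like this (a sketch reusing the keys from your example; %{alias} has to be the entire string value, and it then resolves to the actual array rather than a stringified copy):

'dns_test':
  plugin_type: 'dns_query'
  options:
    'servers': "%{alias('profiles::resolv_conf::nameservers')}"
    'domains': ['google.com']
    'record_type': 'A'
    'timeout': 5
  tags:
    'input_source': 'dns_query'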
"But wait!" You may say. "I'm getting a successful interpolation, but the data are just munged". Specifically, you wrote:
problem is that puppet was adding extra " " to the value and final value
Since %{hiera()} interpolates only strings, it is not surprising that you got a string value, given that you got a value at all. I do find it a bit surprising that Puppet did not throw an error, but I'm not prepared to comment further on that without a minimal reproducible example that demonstrates the behavior.
I am learning AWS Cloud Development Kit (CDK).
As part of this learning, I am trying to understand how I am supposed to correctly handle production and development environment.
I know AWS CDK provides the environment parameter to allow deploying stacks to specific account.
But then, how do I have specific options for development versus production stacks? It does not seem to be provided by default by AWS CDK, or am I missing/misunderstanding something?
A very simple example could be that I want an S3 bucket called my-s3-bucket-dev for my development account and one named my-s3-bucket-prod for my production account. But then how do I get, e.g., a stage variable correctly handled in AWS CDK?
I know I can add parameters in the cdk.json file, but again, I don't know how to make this file depend on the deployed stack, i.e., production vs. development.
Thanks for the support
Welcome to AWS CDK.
Enjoy the ride. ;)
Actually, there are no semantics (in your case, the stage) attached to an account itself.
This has nothing to do with CDK or CloudFormation.
You need to take care of this yourself.
You're right that you could use the CDK context in cdk.json.
There's no schema enforcement in the context, except for some variables used internally by CDK.
You could define your dev and prod objects within it.
There are other ways of defining the context.
Here is an example of what it could look like:
{
  "app": "node app",
  // usually there's some internal definition for your CDK project
  "context": {
    "dev": {
      "accountId": "dev_account",
      "accountRegion": "us-east-1",
      "name": "dev",
      "resourceConfig": {
        // here you could differentiate the config per AWS resource-type
        // e.g. dev has lower hardware specs
      }
    },
    "prod": {
      "accountId": "prod_account",
      "accountRegion": "us-east-1",
      "name": "prod",
      "resourceConfig": {
        // here you could differentiate the config per AWS resource-type
        // prod has higher hardware specs or more cluster nodes
      }
    }
  }
}
With this defined, you need to run your CDK application with the -c flag to specify which configuration object (dev or prod) you want to use.
For instance, you could run it with cdk synth -c stage=prod.
This sets the stage variable in your context and makes it available.
Once that is set, you can access the context again and fetch the appropriate config object:
const app = new cdk.App();
const stage = app.node.tryGetContext('stage');
// the following step is only needed if you have a different config per account
const stageConfig = app.node.tryGetContext(stage);
// ... do some validation and pass the config to the stacks as constructor argument
As I said, the context is one way of doing this.
However, there are drawbacks to it.
It's JSON, not code.
What I prefer is to have TypeScript types per resource configuration (e.g. S3) and wire them all together as a plain object.
The object maps the account/region information to the corresponding resource configurations; a sketch follows below.
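A minimal sketch of that approach (all type names, account IDs, and bucket names here are made up for illustration; on CDK v1 the import would come from '@aws-cdk/core'):

import * as cdk from 'aws-cdk-lib';

// hypothetical per-resource config types
interface S3Config {
  bucketName: string;
}

interface StageConfig {
  accountId: string;
  region: string;
  s3: S3Config;
}

// wire everything together as a plain object, keyed by stage
const stages: Record<string, StageConfig> = {
  dev:  { accountId: '111111111111', region: 'us-east-1', s3: { bucketName: 'my-s3-bucket-dev' } },
  prod: { accountId: '222222222222', region: 'us-east-1', s3: { bucketName: 'my-s3-bucket-prod' } },
};

const app = new cdk.App();
const stage = app.node.tryGetContext('stage');
const config = stages[stage];
if (!config) {
  throw new Error(`Unknown stage '${stage}', expected one of: ${Object.keys(stages).join(', ')}`);
}
// ...pass `config` to your stacks as a constructor argument

This way the compiler catches a missing or misspelled field, which plain JSON in cdk.json cannot do.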
I am trying to find keys that have versions older than one year and set their rotation to 24 hours from now. Unfortunately, each list-keyring call counts as a key.read, which has a very small quota (~300/min). Is there a way to work around these quotas besides increasing them? I am trying to run this code periodically in a Cloud Function, so there is a runtime limit and I cannot just wait for the quota to reset.
import time

from google.cloud import kms_v1

def list_keys(project):
    client = kms_v1.KeyManagementServiceClient()
    # this location list is based on a run of `gcloud kms locations list` and represents where a key could be created
    location_list = ['asia','asia-east1','asia-east2','asia-northeast1','asia-northeast2',
                     'asia-south1','asia-southeast1','australia-southeast1','eur4','europe',
                     'europe-north1','europe-west1','europe-west2','europe-west3','europe-west4',
                     'europe-west6','global','nam4','northamerica-northeast1','southamerica-east1',
                     'us','us-central1','us-east1','us-east4','us-west1','us-west2']
    for location in location_list:
        key_ring_parent = client.location_path(project, location)
        key_ring_list = client.list_key_rings(key_ring_parent)
        for key_ring in key_ring_list:
            # format_keyring_name is a helper that extracts the short keyring id from the full resource name
            parent = client.key_ring_path(project, location, format_keyring_name(key_ring.name))
            for key in client.list_crypto_keys(parent):
                start_time = key.primary.create_time  # need to use primary to get the latest version of the key
                now_seconds = int(time.time())
                elapsed = now_seconds - start_time.seconds
                next_rotate_age = (key.next_rotation_time.seconds - now_seconds) + elapsed
                days_elapsed = elapsed / 3600 / 24
                print(key.name, " is this many days old: ", days_elapsed)
                print(key.name, " will be this many days old when it is scheduled to rotate: ", next_rotate_age / 3600 / 24)
                # if the key is a year old, set it to rotate tomorrow
                if days_elapsed > 364:
                    # 1 day from now, because you can't give less than 24 hrs notice on certain keys
                    key.next_rotation_time.seconds = now_seconds + (3600 * 24)
                    # update_mask paths must be a list of field names, not a dict
                    update_mask = {'paths': ['next_rotation_time']}
                    print(client.update_crypto_key(key, update_mask))
Is Cloud Asset Inventory an option? You could run something like:
$ gcloud asset export --organization YOUR_ORG_ID \
    --asset-types cloudkms.googleapis.com/CryptoKey \
    --content-type RESOURCE \
    --output-path "gs://YOUR_BUCKET/NEW_FILE"
The output file will contain the full CryptoKey resource for every single key in the organization, so you don't need to send a ton of List/Get requests to the KMS API.
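Once the export completes, you can scan that file instead of hammering the KMS API. A rough sketch (this assumes the export is written as newline-delimited JSON and that the RESOURCE content carries the CryptoKey fields; the bucket and file names are the placeholders from the command above):

import { Storage } from "@google-cloud/storage";

async function findOldKeys() {
  // download the file written by `gcloud asset export`
  const [contents] = await new Storage()
    .bucket("YOUR_BUCKET")
    .file("NEW_FILE")
    .download();
  // one asset per line
  for (const line of contents.toString("utf8").split("\n")) {
    if (!line.trim()) continue;
    const asset = JSON.parse(line);
    const key = asset.resource.data; // the CryptoKey resource
    // symmetric keys expose the latest version via `primary`
    const createTime = key.primary?.createTime ?? key.createTime;
    const ageDays = (Date.now() - Date.parse(createTime)) / 86400000; // ms per day
    if (ageDays > 364) {
      console.log(`${asset.name} is ${Math.floor(ageDays)} days old`);
    }
  }
}

findOldKeys();

The single UpdateCryptoKey call per stale key is then the only KMS API traffic you generate.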
Having looked into your request, it would seem that it is not possible to work around the quotas besides increasing them.
I would suggest looking at the following documentation:
Resource quotas
Working with Quotas
Quotas and Limits
These documents should provide you with the information you need on quotas.
I have encrypted a PIN block under a clear TPK.
When I try to translate my PIN block from encryption under the TPK to encryption under a ZPK given by the client, on a real HSM, I get either error code 24 or 20.
What can I do to resolve my issue? I have tried many ways but it is not getting resolved.
The translation command I am using is CA (Translate a PIN from TPK to ZPK/BDK [3-DES DUKPT] Encryption).
All of these operations work beautifully with the Thales HSM simulator.
The errors you are getting are:
Error 20: PIN block does not contain valid values
Error 24: PIN is fewer than 4 or more than 12 digits in length
You said that you have a clear TPK, but you can't do anything with clear keys on an HSM. You have to import the key and get it under the LMK before you can use it in any command.
You also have to import this key as a TPK in the HSM in order to use the CA command. You can also import it as a ZPK, but then you should use the CC command.
Redis has the following settings:
"config get maxmemory"
1) "maxmemory"
2) "2147483648"
(which is 2G)
But when I do "info":
used_memory:6264349904
used_memory_human:5.83G
used_memory_rss:6864515072
Clearly it ignores all the settings... Why?
P.S.
"config get maxmemory-policy" shows:
1) "maxmemory-policy"
2) "volatile-ttl"
and: "config get maxmemory-samples" shows:
1) "maxmemory-samples"
2) "3"
Which means it should expire the keys with the nearest expiration date...
Do you have expiration settings on all your keys? volatile-ttl will only remove keys that have an expiration set. This should show in your info output.
If you don't have expiration TTLs set, try allkeys-lru or allkeys-random for your policy, for example:
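The policy can be changed at runtime without a restart:

$ redis-cli config set maxmemory-policy allkeys-lru
OK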
According to http://redis.io/topics/faq:
You can also use the "maxmemory" option in the config file to put a limit to the memory Redis can use. If this limit is reached Redis will start to reply with an error to write commands (but will continue to accept read-only commands).
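In redis.conf the corresponding settings would look something like this (allkeys-lru is just an example policy):

maxmemory 2gb
maxmemory-policy allkeys-lru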