AWS (ResourceNotFoundException) when calling the GetSecretValue operation: Secrets Manager can't find the specified secret - aws-secrets-manager

Getting ResourceNotFoundException when using the secret's ARN.
Tried getting the secret using only the secret name - doesn't work.
Tried getting the secret using the ARN - doesn't work.
I've checked my assumed role's policy, and as far as I understand, Secrets Manager access is granted in the JSON policy via "secretsmanager:*".
The command I'm using in a CloudBees job is this:
aws secretsmanager get-secret-value --secret-id <ARN>
Not sure what the issue is at the moment. All help appreciated!
Dave

There is not enough information here to tell for sure what the problem is. However, the command you are running does not specify a region, so you may be defaulting to the wrong one. Pass --region REGION to the CLI (where REGION is the real region name, e.g. us-east-1) and make sure it matches the region in the ARN.
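For example, with a placeholder ARN and account ID; note that the --region value matches the region segment of the ARN:
# Query the same region the ARN points at
aws secretsmanager get-secret-value \
    --secret-id arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret-AbCdEf \
    --region us-east-1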

I ran into this one as well; my issue was that the ID was wrong.
aws secretsmanager get-secret-value --secret-id <ARN>
What I had passed as the ARN actually needed to be the secret name.
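In other words, something like this worked, where my-secret is a placeholder for the secret's friendly name:
# Pass the secret's friendly name as the --secret-id
aws secretsmanager get-secret-value --secret-id my-secret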

You may have a secret resource without any secret value configured.
You can verify that the secret resource exists using:
aws secretsmanager describe-secret --secret-id <ARN or NAME>
Quoted from the AWS CLI aws secretsmanager documentation:
--secret-id (string)
The ARN or name of the secret to add a new version to. For an ARN, we recommend that you specify a complete ARN rather than a partial ARN.
If the describe call returns the secret's details but the secret has no value yet, you can set one with:
aws secretsmanager put-secret-value --secret-id <ARN or NAME> --secret-string '{"user":"username","pass":"password"}'
And after the secret's value is set, you should be able to use the get-secret-value command.
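Once a value is set, you can also pull out just the secret string itself; --query and --output are standard AWS CLI options, and my-secret below is a placeholder name:
# Fetch only the secret's current value as plain text
aws secretsmanager get-secret-value --secret-id my-secret --query SecretString --output text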

Related

Pass variable name in Jenkins Vault secrets path

I am not able to pass ${environment} in the Vault secret path when reading values.
Maybe the secret is getting initialized before the variables are set.
Kindly help; I'm not able to read environment-specific values from the same Vault repo.
It worked pretty nicely for me using a choice parameter in a parameterized build. I think your issue is in the Vault path you used (vault/secret/$environment); the correct path in your case is probably just secret/$environment. Does your secrets engine really start with "vault"?
Just FYI, if you define the variable in "Jenkins > Manage Jenkins > Configure System > Environment variables" it'll work too.

AWS CDK Secrets Manager can't find the specified secret

In our project, we store all secrets in the us-east-1 region (in Secrets Manager). Now we are deploying a new CDK project to the us-west-2 region, and this project should use the secrets from us-east-1, referenced by their fully specified secret ARNs.
Example:
import * as secretsManager from 'aws-cdk-lib/aws-secretsmanager';
...
const mongoDbCredentials = secretsManager.Secret.fromSecretAttributes(this, id, {
  secretCompleteArn: props.config.mongoDb.credentials.arn,
});
Problem:
This error occurs when attempting to deploy the new project:
"Secrets Manager can't find the specified secret. (Service: AWSSecretsManager; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: 918e74b7-bef8-4b6c-b8c0-f9c901037806; Proxy: null)"
Expected result:
We expect the new project to deploy successfully using the secrets from the other region.
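One likely cause: Secrets Manager is a regional service, so a stack deploying in us-west-2 cannot resolve an ARN that exists only in us-east-1. A hedged sketch of one possible fix, replicating the secret into the deployment region with the AWS CLI (the ARN below is a placeholder); the stack would then reference the replica's us-west-2 ARN:
# Replicate the us-east-1 secret into us-west-2 so stacks in that
# region can resolve it
aws secretsmanager replicate-secret-to-regions \
    --secret-id arn:aws:secretsmanager:us-east-1:123456789012:secret:mongodb-credentials-AbCdEf \
    --add-replica-regions Region=us-west-2 \
    --region us-east-1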

How to pass in AWS environmental variables to Nextflow for use in Docker container

I would like to run a Nextflow pipeline inside a Docker container. As part of the pipeline I would like to push to and pull from AWS. To that end, I need to pass AWS credentials to the container, but I do not want to write them into the image.
Nextflow has an option to pass environment variables into the Docker scope via the envWhitelist option, however I have not been able to find an example of the correct syntax for this.
I have tried the following syntax and get an access denied error, suggesting that I am not passing in the variables properly.
docker {
    enabled = true
    envWhitelist = "AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID"
}
I explicitly passed these variables into my environment and I can see them using printenv.
Does this syntax seem correct? Thanks for any help!
Usually you can just keep your AWS security credentials in a file called ~/.aws/credentials:
If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not defined in the environment, Nextflow will attempt to retrieve credentials from your ~/.aws/credentials or ~/.aws/config files.
Alternatively, you can declare your AWS credentials in your nextflow.config (or in a separate config profile) using the aws scope:
aws {
    accessKey = '<YOUR S3 ACCESS KEY>'
    secretKey = '<YOUR S3 SECRET KEY>'
    region = '<REGION IDENTIFIER>'
}
You could also use an IAM Instance Role to provide your credentials.
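If you do want the envWhitelist route instead, the whitelisted names must actually be exported in the shell that launches Nextflow; a minimal sketch, where main.nf is a placeholder script name:
# Export the standard AWS variable names before launching, so the
# whitelisted entries exist and can be passed into the container
export AWS_ACCESS_KEY_ID='<YOUR S3 ACCESS KEY>'
export AWS_SECRET_ACCESS_KEY='<YOUR S3 SECRET KEY>'
nextflow run main.nf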

Serverless offline not taking environment variables

I am new to Serverless and would like your help figuring out what I am doing wrong.
In my local development, after using sls offline --config cusom.yml, I am unable to retrieve secrets. After a bit of debugging, I found that the credentials object is null.
However, when I invoke it separately using the plain JavaScript aws-sdk (not through Serverless), I am able to retrieve the secrets and the credentials object is prepopulated. Please let me know if you have any suggestions on why this is not working with sls offline.
Do you have the following files locally?
~/.aws/credentials
~/.aws/config
These files serve as the credentials if you don't write them in your code. Most libraries and the AWS CLI rely on them for access.
$ cat ~/.aws/credentials
[default]
aws_secret_access_key = your_aws_secret_access_key
aws_access_key_id = your_aws_access_key_id
$ cat ~/.aws/config
[default]
region = us-east-1 # or your preferred region
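If those files don't exist yet, the AWS CLI can generate both interactively:
# Prompts for the access key, secret key, default region, and output
# format, then writes ~/.aws/credentials and ~/.aws/config
aws configure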

gsutil OAuth2 authorization (example of .boto file is needed)

I'd like to access Google Cloud Storage from my scripts, and I need to automate authentication. By default, gsutil config asks you to open a link and type in a generated code, and then it writes an OAuth token into the .boto file.
Google Cloud also supports creating OAuth 2.0 client IDs on the "Credentials" page, but I cannot work out how to plug those credentials (client_id and client_secret) into my .boto file:
{"installed":{"client_id":"677005197220-eim3l5of3m16225qr0m9vquocj6mugt4.apps.googleusercontent.com","auth_uri":"https://accounts.google.com/o/oauth2/auth","token_uri":"https://accounts.google.com/o/oauth2/token","auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs","client_secret":"pFghf5URxxxBFVRsQ1elWbbZ","redirect_uris":["urn:ietf:wg:oauth:2.0:oob","http://localhost"]}}
(Please don't try to use these, as I slightly modified the values.)
I plugged them into the .boto file this way:
[OAuth2]
client_id = "677005197220-eim3l5of3m16225qr0m9vquocj6mugt4.apps.googleusercontent.com"
client_secret = "pFghf5URxxxBFVRsQ1elWbbZ"
provider_label = Google
provider_authorization_uri = https://accounts.google.com/o/oauth2/auth
provider_token_uri = https://accounts.google.com/o/oauth2/token
This is how gsutil is failing:
# gsutil ls gs://mybucket/
You are attempting to access protected data with no configured
credentials. Please visit https://cloud.google.com/console#/project
and sign up for an account, and then run the "gsutil config" command
to configure gsutil to use these credentials.
If I run gsutil config I can configure credentials and then it works, but I need to use my own client ID and client secret.
Can someone please suggest how to make gsutil work with a .boto file containing client_id and client_secret? Thanks
Here is how you can create a .boto file with an access key ID and secret access key:
gsutil config -a
The above command will generate a .boto file that you can then use as the sample you are after.
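For reference, gsutil config -a stores HMAC-style keys under a [Credentials] section rather than [OAuth2]; a sketch with placeholder values:
$ cat ~/.boto
[Credentials]
gs_access_key_id = <YOUR GS ACCESS KEY ID>
gs_secret_access_key = <YOUR GS SECRET ACCESS KEY>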
