AWS SQS: Golang, Error: InvalidClientTokenId: The security token included in the request is invalid

Amazon SQS is throwing the following error:
Error: InvalidClientTokenId: The security token included in the request is invalid
I am using the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to create the session. Both the key and the secret are valid. I found the following URL regarding this issue:
https://aws.amazon.com/premiumsupport/knowledge-center/security-token-expired/
It says:
"All application API requests to Amazon Web Services (AWS) must be cryptographically signed using credentials issued by AWS.
If your application uses temporary credentials when creating an AWS client (such as an AmazonSQS client), the credentials expire at the time interval specified during their creation. You must make sure that the credentials are refreshed before they expire."
Do credentials supplied through environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) need to be refreshed? And what is the default expiry for credentials supplied this way?

The same thing was happening to me; I discovered that the application was using old values for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY that had since been rotated. Switching to the latest credentials from AWS fixed this for me.
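To answer the expiry question directly: long-term IAM user keys passed through environment variables do not expire on their own; only temporary credentials (which additionally require AWS_SESSION_TOKEN) do, and a stale leftover AWS_SESSION_TOKEN in the environment is another common cause of InvalidClientTokenId. As a sanity check, here is a minimal, untested sketch (assuming aws-sdk-go v1 and the us-east-1 region) that prints which identity the SDK actually resolves:

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/sts"
)

func main() {
    // NewSession reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (and
    // AWS_SESSION_TOKEN, if set) from the environment automatically.
    sess, err := session.NewSession(&aws.Config{Region: aws.String("us-east-1")})
    if err != nil {
        log.Fatal(err)
    }
    // sts:GetCallerIdentity needs no IAM permissions, so it is a quick way
    // to confirm which credentials the SDK resolved. An InvalidClientTokenId
    // here means the keys (or a leftover session token) are stale or invalid.
    out, err := sts.New(sess).GetCallerIdentity(&sts.GetCallerIdentityInput{})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("authenticated as:", aws.StringValue(out.Arn))
}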

Related

Berglas not finding my google cloud credentials

I am trying to read my Google Cloud default credentials with Berglas, and it says:
failed to create berglas client: failed to create kms client: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I am passing the right path, and I have tried many paths, but none of them work:
$HOME/.config/gcloud:/root/.config/gcloud
I'm unfamiliar with Berglas (please include references), but the error is clear. Google's client libraries attempt to find credentials automatically. The documentation describes the process by which credentials are sought.
Since the credentials aren't being found, you're evidently not running on a Google Cloud compute service (where credentials are found automatically). Have you set the GOOGLE_APPLICATION_CREDENTIALS environment variable, and does it point to a valid service account key file?
The Berglas README suggests using the following command to make your user credentials available as Application Default Credentials. You may not have completed this step:
gcloud auth application-default login
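Separately, if you want to see whether the lookup chain resolves to anything at all, here is a minimal, untested sketch using golang.org/x/oauth2/google, which implements the same Application Default Credentials chain the Google clients (and hence Berglas' KMS client) build on:

package main

import (
    "context"
    "fmt"
    "log"

    "golang.org/x/oauth2/google"
)

func main() {
    // FindDefaultCredentials walks the standard lookup chain: the file named
    // by GOOGLE_APPLICATION_CREDENTIALS, the gcloud ADC file, then the
    // metadata server on Google Cloud compute services.
    creds, err := google.FindDefaultCredentials(context.Background(),
        "https://www.googleapis.com/auth/cloud-platform")
    if err != nil {
        log.Fatalf("no default credentials found: %v", err)
    }
    // ProjectID may be empty for user (gcloud) credentials; the point is
    // simply whether the lookup succeeds at all.
    fmt.Println("credentials resolved; project:", creds.ProjectID)
}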

Dapr Secretstore with Azure Keyvault in Azure Kubernetes not working

I am trying to use the secret store component with Azure Key Vault in my Azure Kubernetes Service cluster. I set it up exactly following https://docs.dapr.io/reference/components-reference/supported-secret-stores/azure-keyvault/, but I am not able to retrieve the secrets. When I change the secret store to a local file or Kubernetes secrets, everything works fine. With Azure Key Vault I get the following error:
{
"errorCode": "ERR_SECRET_GET",
"message": "failed getting secret with key {keyName} from secret store {storename}: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://{vault url}/secrets/{secret key}/?api-version=2016-10-01: StatusCode=404 -- Original Error: adal: Refresh request failed. Status Code = '404'. Response body: getting assigned identities for pod {podname} in CREATED state failed after 16 attempts, retry duration [5]s. Error: <nil>\n"
}
I verified that the client secret I am using is correct. Can anyone please point me in the right direction?
The error indicates that the identity the pod is using does not have access to get secrets from the key vault.
You can use a system-assigned managed identity for the AKS pod and add an access policy that allows it to read Key Vault secrets.
Alternatively, you can use a service principal with an access policy that allows reading Key Vault secrets, or (if the vault uses Azure RBAC) a role that covers secrets, such as Key Vault Secrets User, so that you can fetch the secrets.
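Once the identity is sorted out, you can verify the store end to end from inside the cluster. Here is a minimal, untested sketch with the Dapr Go SDK; the store name "azurekeyvault" and key "my-secret" are placeholders for your component name and secret name:

package main

import (
    "context"
    "fmt"
    "log"

    dapr "github.com/dapr/go-sdk/client"
)

func main() {
    // Connects to the Dapr sidecar over its default gRPC port.
    client, err := dapr.NewClient()
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Fetches one secret from the named secret store component; a failure
    // here surfaces the same ERR_SECRET_GET seen in the question.
    secret, err := client.GetSecret(context.Background(), "azurekeyvault", "my-secret", nil)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(secret)
}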
Reference: Azure Key Vault secret store | Dapr Docs

Aws::Errors::MissingCredentialsError (unable to sign request without credentials set) - Beanstalk, security via IAM Roles

My setup:
Rails 5.2 application
Amazon SES, using aws-sdk-rails gem
authenticated with IAM roles (not access key & secret)
Elastic Beanstalk
I have just switched my Elastic Beanstalk environment from Amazon Linux AMI (v1) to a new environment with Amazon Linux 2 (v2). I have kept my configuration as identical as possible to maintain application behaviour, although when sending emails with my Rails app, powered by Amazon Simple Email Service (SES), I get the following error:
Aws::Errors::MissingCredentialsError (unable to sign request without credentials set)
The documentation here describes a number of methods to authenticate the AWS SDK, and I'm using the "Setting Credentials Using IAM" approach:
https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/setup-config.html
I'm using the Rails gem for AWS SES email sending here:
https://github.com/aws/aws-sdk-rails/tree/v2.1.0
and given I'm using IAM roles, I only need to set the region when initializing the mailer:
Aws::Rails.add_action_mailer_delivery_method(:aws_sdk, region: "us-west-2")
Both my old v1 EB environment and my new v2 EB environment create EC2 instances with the same role, i.e. the aws-elasticbeanstalk-ec2-role, and I can see that it has the same security policy attached that I set up a while back, called "MySendEmailPolicy". This policy looks like it grants the right permissions to send emails.
I can't think of any other reason why AWS would say my credentials are now failing. Any thoughts? Perhaps there's something different about Amazon Linux 2?
This isn't an IAM-roles solution to the problem, but a workaround I'm using that gets emails working, at least for now.
I'm simply using my own AWS CLI credentials, which I've added as environment variables via the Elastic Beanstalk web console:
creds = Aws::Credentials.new(ENV["AWS_ACCESS_KEY_ID"], ENV["AWS_SECRET_ACCESS_KEY"])
Aws::Rails.add_action_mailer_delivery_method(:aws_sdk, credentials: creds, region: "us-west-2")
After deploying the above, I got this error: Aws::SES::Errors::AccessDenied (User 'arn:aws:iam::XXXXXXXXXXXX:user/<userName>' is not authorized to perform 'ses:SendRawEmail' on resource 'arn:aws:ses:us-west-2:XXXXXXXXXXXX:identity/<example.com>'), but that was resolved by attaching my "MySendEmailPolicy" policy to my IAM user directly.
Any suggestions on the IAM-roles solution though would be welcome.
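On the IAM-roles front: one thing that did change with Amazon Linux 2 is the instance metadata service configuration, and older AWS SDK versions can fail to fetch role credentials when the instance requires IMDSv2 session tokens, so checking your aws-sdk-rails/aws-sdk-core versions is worthwhile. As a language-agnostic check that the instance profile is serving credentials at all, here is a small, untested Go sketch to run on the instance itself:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // Lists the role whose credentials the instance metadata service is
    // serving. If this fails, the SDK cannot get role credentials either.
    // IMDSv1 is shown for brevity; if the instance requires IMDSv2, the
    // request must first obtain a session token from /latest/api/token.
    resp, err := http.Get("http://169.254.169.254/latest/meta-data/iam/security-credentials/")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    role, _ := io.ReadAll(resp.Body)
    fmt.Println("role attached to this instance:", string(role))
}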

com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException

Hello, I've followed the tutorial https://developers.sap.com/tutorials/s4sdk-odata-service-cloud-foundry.html step by step, and I'm having issues running the solution on my local machine.
I'm running Windows 10 and, per the tutorial, I have set an environment variable as follows:
destinations=[{name: "ErpQueryEndpoint", url: "xxxx.s4hana.ondemand.com", username: "INT_USER", password: "xxxxxxxx"}]
When I run the solution on localhost, I get this:
Message Error occured while handling request: com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException: com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException: Failed to get destinations of provider service instance: Failed to get access token for destination service. If your application is running on Cloud Foundry, make sure to have a binding to both the destination service and the authorization and trust management (xsuaa) service, AND that you either properly secured your application or have set the "ALLOW_MOCKED_AUTH_HEADER" environment variable to true. Please note that authentication types with user propagation, for example, principal propagation or the OAuth2 SAML Bearer flow, require that you secure your application and will not work when using the "ALLOW_MOCKED_AUTH_HEADER" environment variable. If your application is not running on Cloud Foundry, for example, when deploying to a local container, consider declaring the "destinations" environment variable to configure destinations.
Be sure to set the destinations environment variable so that it is visible to your application process; on Windows, that means setting it in the same shell or IDE session that launches the local server. You can check with System.getenv("destinations"); in your code.

Jenkins JClouds and DigitalOcean provisioning

I'm trying to provision a DigitalOcean droplet through the Jenkins JClouds plugin, but I'm having a hard time knowing what to put.
First of all, is this the right endpoint URL for API v2?
https://api.digitalocean.com/v2
In DigitalOcean I've created an app, and I was given the identity and secret key, which I provided to Jenkins.
But when connecting I get this error:
Cannot connect to specified cloud, please check the identity and credentials: status cannot be null connecting to GET https://api.digitalocean.com/v2/droplets HTTP/1.1
What am I doing wrong here?
You do not need to add an endpoint. Just add a single credential, with your API token as the password, and that is it.
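To rule Jenkins out, you can also confirm the token itself outside the plugin. DigitalOcean's API v2 authenticates with a single bearer token rather than an identity/secret pair, which is why the plugin only needs the token. A minimal, untested Go sketch (the DIGITALOCEAN_TOKEN variable name is just an example) that hits the same droplets endpoint:

package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

func main() {
    // Build the same GET request the plugin makes, authenticated with a
    // personal access token taken from an example environment variable.
    req, err := http.NewRequest("GET", "https://api.digitalocean.com/v2/droplets", nil)
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Authorization", "Bearer "+os.Getenv("DIGITALOCEAN_TOKEN"))

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status) // 200 OK means the token is valid
}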
