My organisation has a centralised AWS account from which actions are performed, using roles that have access to act on other accounts that trust those roles.
However, serverless deploy uses the authenticated account's credentials. There doesn't seem to be a way to deploy the application to another account that the assumed role has access to.
You can specify credentials using environment variables when running the commands, so they can point at a different account to act as.
See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html for the full list; setting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY definitely works for me.
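For the assume-role case specifically, one way to wire this up (a sketch; the role ARN and session name are placeholders) is to call aws sts assume-role yourself and export the temporary credentials, including AWS_SESSION_TOKEN, before running the deploy:

```shell
# Assume the cross-account role, then export the temporary credentials
# so that `serverless deploy` targets the other account.
# The role ARN and session name below are placeholders.
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::222222222222:role/DeployRole \
  --role-session-name serverless-deploy \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)

# The query above returns the three values tab-separated on one line.
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)

serverless deploy
```

Note that temporary credentials from STS also require AWS_SESSION_TOKEN, not just the key ID and secret.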
My setup:
Rails 5.2 application
Amazon SES, using aws-sdk-rails gem
authenticated with IAM roles (not access key & secret)
Elastic Beanstalk
I have just switched my Elastic Beanstalk environment from Amazon Linux AMI (v1) to a new environment with Amazon Linux 2 (v2). I have kept my configuration as identical as possible to maintain application behaviour, although when sending emails with my Rails app, powered by Amazon Simple Email Service (SES), I get the following error:
Aws::Errors::MissingCredentialsError (unable to sign request without credentials set)
The documentation here describes a number of methods to authenticate the AWS SDK, and I'm using the "Setting Credentials Using IAM" approach:
https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/setup-config.html
I'm using the Rails gem for AWS SES email sending here:
https://github.com/aws/aws-sdk-rails/tree/v2.1.0
and given I'm using IAM roles, I only need to set the region when initializing the mailer:
Aws::Rails.add_action_mailer_delivery_method(:aws_sdk, region: "us-west-2")
Both my old v1 EB environment and my new v2 EB environment create EC2 instances with the same role, i.e. the aws-elasticbeanstalk-ec2-role, and I can see that it has the same IAM policy attached to it that I set up a while back, called "MySendEmailPolicy". That policy looks like it grants the right permissions to send emails.
I can't think of any other reason why AWS would say my credentials are now failing. Any thoughts? Perhaps there's something different about Amazon Linux 2?
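For what it's worth, one way to check whether the instance role credentials are even reachable from inside the instance is to query the metadata service (a diagnostic sketch; Amazon Linux 2 instances may require the IMDSv2 token-based flow shown here):

```shell
# From inside the EC2 instance: fetch an IMDSv2 session token, then
# list the role whose credentials the instance profile exposes.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
```

If that prints the role name (e.g. aws-elasticbeanstalk-ec2-role), the credentials exist and the problem is more likely in how the SDK is picking them up.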
This isn't an IAM-roles solution to the problem, but a work-around I'm using which gets emails working at least for now.
I'm simply using my own AWS CLI credentials here, which I've added as environment variables via the Elastic Beanstalk web console:
creds = Aws::Credentials.new(ENV["AWS_ACCESS_KEY_ID"], ENV["AWS_SECRET_ACCESS_KEY"])
Aws::Rails.add_action_mailer_delivery_method(:aws_sdk, credentials: creds, region: "us-west-2")
After deploying above, I got this error: Aws::SES::Errors::AccessDenied (User 'arn:aws:iam::XXXXXXXXXXXX:user/<userName>' is not authorized to perform 'ses:SendRawEmail' on resource 'arn:aws:ses:us-west-2:XXXXXXXXXXXX:identity/<example.com>'), but that was resolved by attaching my "MySendEmailPolicy" policy to my IAM user directly.
Any suggestions on the IAM-roles solution though would be welcome.
I have an AKS (Kubernetes cluster) created with a managed identity in Azure portal.
I want to automate deployment in the cluster using bitbucket pipelines. For this, it seems I need a service principal.
script:
  - pipe: microsoft/azure-aks-deploy:1.0.2
    variables:
      AZURE_APP_ID: $AZURE_APP_ID
      AZURE_PASSWORD: $AZURE_PASSWORD
      AZURE_TENANT_ID: $AZURE_TENANT_ID
Is there a way to get this from the managed identity? Do I need to delete the cluster and re-create it with service principal? Are there any other alternatives?
Thanks!
Unfortunately, a managed identity can only be used from within Azure resources. The Bitbucket pipeline first needs a service principal with enough permissions to access Azure; only then can it manage Azure resources. And for AKS, you can't switch a cluster from the managed identity you enabled at creation over to a service principal.
So you need to delete the existing AKS cluster and recreate it with a service principal. You can then use that same service principal to access Azure and manage the AKS cluster.
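For example, a rough sketch with the Azure CLI (the service principal name, resource group and cluster name are placeholders):

```shell
# Create a service principal for the pipeline to authenticate with.
# The command prints an appId and password (client secret).
az ad sp create-for-rbac --name myAKSPipelineSP --skip-assignment

# Recreate the cluster using that service principal, substituting the
# appId and password returned above.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --service-principal <appId> \
  --client-secret <password>
```

The same appId/password pair can then be stored as the AZURE_APP_ID and AZURE_PASSWORD pipeline variables.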
I wanted to post this for anyone looking.
The OP asked here about retrieving the service principal details for a managed identity. While it is possible to retrieve the Azure resource ID and the "username" of the service principal, as charles-xu mentioned, using a managed identity for anything outside of Azure is not possible, because there is no way to access its password (also known as the client secret).
That said, you can find the command to retrieve your managed identity's service principal name in case you need it, for example in order to insert it into another Azure resource being created by Terraform. The command is documented here: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-cli
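As a sketch, assuming the Azure CLI is signed in and the identity's display name matches the resource it was created for (the name below is a placeholder):

```shell
# List the service principal that backs a managed identity.
# For a system-assigned identity, the display name matches the
# resource name (here a placeholder cluster name).
az ad sp list --display-name "myAKSCluster" \
  --query "[].{id:id, appId:appId, name:displayName}" --output table
```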
I am trying to enable jwt authentication for my self-hosted (Docker) Jitsi server. There is a guide on self hosting with Docker and on that guide, it tells how to enable authentication. This is the guide: https://jitsi.github.io/handbook/docs/devops-guide/devops-guide-docker
Now I will copy and paste the authentication part from that guide so that you can see it more clearly.
Authentication can be controlled with the environment
variables below. If guest access is enabled, unauthenticated users
will need to wait until a user authenticates before they can join a
room. If guest access is not enabled, every user will need to
authenticate before they can join.
Authentication using JWT tokens
You can use JWT tokens to authenticate users. To enable it you have to
enable authentication with ENABLE_AUTH and set AUTH_TYPE to jwt...
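Based on that guide, here is roughly what the relevant part of my .env file looks like (a sketch; the app ID and secret values are placeholders):

```shell
# Relevant .env settings for docker-jitsi-meet, per the guide above.
# The JWT_APP_ID / JWT_APP_SECRET values are placeholders.
ENABLE_AUTH=1
ENABLE_GUESTS=0
AUTH_TYPE=jwt
JWT_APP_ID=my_jitsi_app_id
JWT_APP_SECRET=my_jitsi_app_secret
```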
Following these instructions, I changed my .env file and made the configuration changes described above. Then I ran docker-compose down, removed ~/.jitsi-meet-cfg, recreated it with mkdir, and ran docker-compose up -d.
Then to try it, I am entering the URL:
https://{ip_add}:8443/room?jwt=randomwords
I am connecting to the server remotely, so I access it by IP address. Since I did not provide a valid token, I should not be able to create or join a meeting, yet I can. Whatever I put in the URL, I can still join.
Can someone help?
I'm running tests and pushing my docker images from CircleCi to Google Container Registry. At least I'm trying to.
Which roles does my service account require to be able to pull and push images to GCR?
Even as an account with the role "Project Owner", I get this error:
gcloud --quiet container clusters get-credentials $PROJECT_ID
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials)
ResponseError: code=403,
message=Required "container.clusters.get" permission(s)
for "projects/$PROJECT_ID/locations/europe-west1/clusters/$CLUSTER".
According to this doc, you will need the storage.admin role to Push (Read & Write), and storage.objectViewer to Pull (Read Only) from Google Container Registry.
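For example, a sketch of granting those roles to the CircleCI service account with gcloud (the project ID and service account email are placeholders):

```shell
# Push (read & write) to GCR requires roles/storage.admin.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:circleci@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.admin"

# Pull (read only) from GCR requires roles/storage.objectViewer.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:circleci@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
```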
On the topic of not being able to get credentials as owner, you are likely using the service account of the machine instead of your owner account. Check which account you are using with the command:
gcloud auth list
You can change the service account the machine is using through the UI by first stopping the instance, then editing the service account. You can also use your Google credentials using the command:
gcloud auth login
Hope this helps
When you get a Required "___ANYTHING____" permission message:
go to Console -> IAM -> Roles -> Create new custom role [ROLE_NAME]
add container.clusters.get and/or whatever other permissions you need in order to get the whole thing going (I needed some rights for kubectl for example)
assign that role (Console -> IAM -> Add+) to your service account
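If you prefer the CLI over the console, a rough equivalent with gcloud (the role ID, project ID and service account email are placeholders):

```shell
# Create a custom role carrying just the missing permission(s).
gcloud iam roles create ROLE_NAME \
  --project=PROJECT_ID \
  --permissions=container.clusters.get

# Bind the custom role to the service account.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:my-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --role="projects/PROJECT_ID/roles/ROLE_NAME"
```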
I'd like to write a service (one that starts up and runs whenever the machine is on) that queries Active Directory, since the user IIS runs as does not have permission to query AD. How do I determine whether A) my workstation, where I have local admin rights, and B) a shared team workstation will allow me to do this?
Anything you can do as an interactive user can be done by a service with the appropriate permissions and configuration, so it isn't so much a question of determining whether you can, but rather of configuring the service so that it can.
Your installation package should request an appropriate set of credentials (and of course must be run by a user with privileges to install such a service). The service itself should simply catch and log any permission exceptions.
As an example - look at the SQL Server installation process. Early on it requests that you specify accounts with the required privileges.
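Along the same lines, here is a sketch of registering a service to run under a specific domain account using the built-in sc.exe tool (run from an elevated prompt; every name and value below is a placeholder):

```shell
# Create a Windows service that runs under a domain account with
# permission to query Active Directory. All names and values are
# placeholders; note sc.exe requires a space after each "option=".
sc.exe create MyAdQueryService binPath= "C:\Services\MyAdQueryService.exe" obj= "DOMAIN\svc-ad-query" password= "PLACEHOLDER" start= auto
```

In practice an installer would prompt for these credentials rather than hard-coding them, as the SQL Server example above does.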