How to omit "Aws::ECSCredentials.new" in ruby - ruby-on-rails

Currently, the way I write this code differs depending on the execution environment, and I want to unify it into a single style.
The code for each environment is as follows.
When connecting to S3 from ECS:
client = Aws::S3::Client.new(region: Settings.aws.region, credentials: Aws::ECSCredentials.new)
When connecting to S3 outside of ECS:
client = Aws::S3::Client.new(region: Settings.aws.region)
When connecting to S3 on ECS, an error occurs if no credentials are provided.
Please let me know if there is a way to improve this.
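One way to unify the call sites (a minimal sketch, not a confirmed answer) is to branch on the environment variable that ECS sets for task-role credentials, AWS_CONTAINER_CREDENTIALS_RELATIVE_URI, and only pass explicit credentials when it is present; the s3_client helper name is illustrative:

require "aws-sdk-s3"

# Build the client the same way everywhere; only add explicit ECS credentials
# when the ECS container-credentials endpoint is advertised.
def s3_client
  options = { region: Settings.aws.region }
  if ENV["AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"]
    options[:credentials] = Aws::ECSCredentials.new
  end
  Aws::S3::Client.new(**options)
end

Depending on the aws-sdk-core version, the default credential provider chain may already resolve ECS task credentials on its own, in which case the plain Aws::S3::Client.new(region: Settings.aws.region) form could work in both environments.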

Related

MLflow: Unable to store artifacts to S3

I'm running my MLflow tracking server in a Docker container on a remote server and trying to log MLflow runs from my local computer, with the eventual goal that anyone on my team can send their run data to the same tracking server. I've set the tracking URI to http://<ip of remote server>:<port on docker container>. I'm not explicitly setting any AWS credentials on the local machine, because I would like to just be able to train locally and log to the remote server (run data to RDS and artifacts to S3).
I have no problem logging my runs to an RDS database, but I keep getting the following error when it gets to the point of trying to log artifacts: botocore.exceptions.NoCredentialsError: Unable to locate credentials.
Do I have to have the credentials available outside of the tracking server for this to work (i.e. on my local machine where the MLflow runs are taking place)? I know that all of my credentials are available in the Docker container that is hosting the tracking server. I've been able to upload files to my S3 bucket using the AWS CLI inside the container that hosts my tracking server, so I know that it has access. I'm confused by the fact that I can log to RDS but not S3. I'm not sure what I'm doing wrong at this point. TIA.
Yes, apparently I do need to have the credentials available to the local client as well.

Aws::Errors::MissingCredentialsError (unable to sign request without credentials set) - Beanstalk, security via IAM Roles

My setup:
Rails 5.2 application
Amazon SES, using aws-sdk-rails gem
authenticated with IAM roles (not access key & secret)
Elastic Beanstalk
I have just switched my Elastic Beanstalk environment from Amazon Linux AMI (v1) to a new environment with Amazon Linux 2 (v2). I have kept my configuration as identical as possible to maintain application behaviour, but when sending emails from my Rails app, powered by Amazon Simple Email Service (SES), I get the following error:
Aws::Errors::MissingCredentialsError (unable to sign request without credentials set)
The documentation here describes a number of methods to authenticate the AWS SDK, and I'm using the "Setting Credentials Using IAM" approach:
https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/setup-config.html
I'm using the Rails gem for AWS SES email sending here:
https://github.com/aws/aws-sdk-rails/tree/v2.1.0
and given I'm using IAM roles, I only need to set the region when initializing the mailer:
Aws::Rails.add_action_mailer_delivery_method(:aws_sdk, region: "us-west-2")
Both my old v1 EB environment and my new v2 EB environment create EC2 instances with the same role, i.e. the aws-elasticbeanstalk-ec2-role, and I can see that it has the same security policy attached to it that I set up a while back, called "MySendEmailPolicy". This policy looks like it grants the right permissions to send emails.
I can't think of any other reason why AWS would say my credentials are now failing. Any thoughts? Perhaps there's something different about Amazon Linux 2?
This isn't an IAM-roles solution to the problem, but a work-around I'm using that gets emails working, at least for now.
I'm simply using my own AWS CLI credentials here, which I've added as environment variables via the Elastic Beanstalk web console:
creds = Aws::Credentials.new(ENV["AWS_ACCESS_KEY_ID"], ENV["AWS_SECRET_ACCESS_KEY"])
Aws::Rails.add_action_mailer_delivery_method(:aws_sdk, credentials: creds, region: "us-west-2")
After deploying the above, I got this error: Aws::SES::Errors::AccessDenied (User 'arn:aws:iam::XXXXXXXXXXXX:user/<userName>' is not authorized to perform 'ses:SendRawEmail' on resource 'arn:aws:ses:us-west-2:XXXXXXXXXXXX:identity/<example.com>'), but that was resolved by attaching my "MySendEmailPolicy" policy to my IAM user directly.
Any suggestions on the IAM-roles solution though would be welcome.
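For anyone pursuing the IAM-roles route, one option worth trying (a sketch only, not the author's confirmed fix) is to construct the EC2 instance-profile credentials explicitly with Aws::InstanceProfileCredentials from aws-sdk-core, instead of relying on the default credential lookup:

# Sketch: explicitly read credentials from the EC2 instance profile
# (the role attached to the Beanstalk instance) and hand them to the mailer.
creds = Aws::InstanceProfileCredentials.new(retries: 3)
Aws::Rails.add_action_mailer_delivery_method(:aws_sdk, credentials: creds, region: "us-west-2")

If this also fails to find credentials, that would suggest the instance metadata service, rather than the policy, is the issue.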

How to access AWS Systems Manager Parameter Store for different environments (staging, production) within an Elastic Beanstalk app

I am working to migrate a Rails app from its current PaaS to AWS Elastic Beanstalk. Everything went well except that Elastic Beanstalk limits environment configuration to a combined key-and-value size of 4096 bytes, and my app has so many third-party API credentials that my config is well over that limit.
To overcome the 4096-byte limitation, I found an excellent AWS service for storing secret credentials called AWS Systems Manager Parameter Store.
My goal is to store my credentials there and then load them back into ENV variables for my application; however, I ran into the following questions:
How do I separate config values for different environments, in my case staging and production, in Parameter Store? Do I need to duplicate each key per environment? What is the usual practice for organizing those keys so they can easily be loaded into ENV vars programmatically?
How do I access the Parameter Store values for the current environment? i.e. when the container is deployed to production, the production values should be loaded into ENV vars, but not the staging ones.
What are the best practices for allowing the Elastic Beanstalk instance to access AWS Systems Manager Parameter Store via IAM?
I tried a few AWS CLI commands to read and write locally, and they work well; for example:
aws --region=us-east-1 ssm put-parameter --name STG_DB --value client --type SecureString
aws --region=us-east-1 ssm get-parameter --name STG_DB --with-decryption --output text --query Parameter.Value
I need the standard procedures or practices that people use to solve the above problems.
A step-by-step guide and an example would be very useful.
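One common pattern (a minimal sketch under assumptions, not an authoritative answer) is to namespace parameters by environment, e.g. /myapp/staging/STG_DB and /myapp/production/STG_DB, and load the subtree for the current environment into ENV at boot using the aws-sdk-ssm gem. The /myapp prefix and the load_ssm_params_into_env! helper below are illustrative names, not part of the question:

require "aws-sdk-ssm"

# Load every parameter under /<app>/<environment>/ into ENV, paginating
# through get_parameters_by_path and decrypting SecureString values.
def load_ssm_params_into_env!(app_name:, environment:)
  client = Aws::SSM::Client.new(region: "us-east-1")
  path = "/#{app_name}/#{environment}/"
  next_token = nil
  loop do
    resp = client.get_parameters_by_path(
      path: path,
      recursive: true,
      with_decryption: true,
      next_token: next_token
    )
    resp.parameters.each do |param|
      # "/myapp/production/STG_DB" -> "STG_DB"
      ENV[param.name.split("/").last] ||= param.value
    end
    next_token = resp.next_token
    break if next_token.nil?
  end
end

# e.g. called early in config/application.rb:
load_ssm_params_into_env!(app_name: "myapp", environment: ENV.fetch("RAILS_ENV", "staging"))

For the IAM part, the instance profile used by the Beanstalk EC2 instances would need ssm:GetParametersByPath (plus kms:Decrypt for SecureString parameters), ideally scoped to the relevant path prefix.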

Cannot access S3 bucket from WildFly running in Docker

I am trying to configure WildFly, using the Docker image jboss/wildfly:10.1.0.Final, to run in domain mode. I am using Docker for macOS 8.06.1-ce with aufs storage.
I followed the instructions in this link https://octopus.com/blog/wildfly-s3-domain-discovery. It seems pretty simple, but I am getting the error:
WFLYHC0119: Cannot access S3 bucket 'wildfly-mysaga': WFLYHC0129: bucket 'wildfly-mysaga' could not be accessed (rsp=403 (Forbidden)). Maybe the bucket is owned by somebody else or the authentication failed.
But my access key, secret, and bucket name are correct; I can use them to connect to S3 with the AWS CLI.
What could I be doing wrong? The tutorial seems to run it on an EC2 instance, while my test is in Docker. Maybe it is a certificate problem?
I generated access keys from an admin user and it worked.

Airflow: Could not send worker log to S3

I deployed the Airflow webserver, scheduler, worker, and flower on my Kubernetes cluster using Docker images.
Airflow version is 1.8.0.
Now I want to send worker logs to S3, so I:
Created an S3 connection in Airflow from the Admin UI (just set S3_CONN as the conn id and s3 as the type; because my Kubernetes cluster is running on AWS and all nodes have S3 access roles, that should be sufficient).
Set the Airflow config as follows:
remote_base_log_folder = s3://aws-logs-xxxxxxxx-us-east-1/k8s-airflow
remote_log_conn_id = S3_CONN
encrypt_s3_logs = False
First I tried creating a DAG that just raises an exception immediately after it starts running. This works, and the log can be seen on S3.
Then I modified it so that the DAG creates an EMR cluster and waits for it to be ready (waiting status). To apply the change, I restarted all 4 Airflow Docker containers.
Now the DAG appears to work: a cluster is started and, once it's ready, the DAG is marked as success. But I can see no logs on S3.
There is no related error log on the worker or web server, so I can't even see what might be causing this issue. The logs are just not sent.
Does anyone know if there is some restriction on Airflow remote logging, other than this note in the official documentation?
https://airflow.incubator.apache.org/configuration.html#logs
In the Airflow Web UI, local logs take precedence over remote logs. If local logs can not be found or accessed, the remote logs will be displayed. Note that logs are only sent to remote storage once a task completes (including failure). In other words, remote logs for running tasks are unavailable.
I didn't expect this, but even on success, are the logs not sent to remote storage?
The boto version that is installed with Airflow is 2.46.1, and that version doesn't use IAM instance roles.
Instead, you will have to add an access key and secret for an IAM user that has access in the extra field of your S3_CONN configuration.
Like so:
{"aws_access_key_id":"123456789","aws_secret_access_key":"secret12345"}
