My goal is to use AWS Elastic Beanstalk to create an application environment of type 'Worker' that will handle heavy-load tasks; this worker is based on our Rails application.
I created an AWS Elastic Beanstalk worker environment:
Environment tier: Ruby, 1.9.3 on 64bit Amazon Linux
Environment type: single instance
(I did try 64bit Amazon Linux 2014.03 v1.0.3 running Ruby 2.0 (Puma), with the same failing result.)
After solving all the issues with gems and the database connection, I am stuck on starting the "aws-sqsd" queue client. It should listen to the queue and make HTTP requests to the worker application.
I've provided AWS_ACCESS_KEY_ID and AWS_SECRET_KEY as environment variables on this worker instance:
$ export | grep AWS
declare -x AWS_ACCESS_KEY_ID="AK...........Q"
declare -x AWS_AUTO_SCALING_HOME="/opt/aws/apitools/as"
declare -x AWS_CLOUDWATCH_HOME="/opt/aws/apitools/mon"
declare -x AWS_ELB_HOME="/opt/aws/apitools/elb"
declare -x AWS_IAM_HOME="/opt/aws/apitools/iam"
declare -x AWS_PATH="/opt/aws"
declare -x AWS_RDS_HOME="/opt/aws/apitools/rds"
declare -x AWS_SECRET_KEY="Hp.....fI"
declare -x EB_CONFIG_SYSTEM_AWSEBAGENTID=""
declare -x EB_CONFIG_SYSTEM_AWSEBREFERRERID=""
Here is the log output:
2014-05-19T13:58:59Z init: initializing aws-sqsd 1.0 (2013-12-23)
2014-05-19T13:58:59Z start: polling https://sqs.us-east-1.amazonaws.com/201266939336/awseb-e-dq8cqaud2z-stack-AWSEBWorkerQueue-18836XBBHNDUD
2014-05-19T13:58:59Z fatal: AWS::Errors::MissingCredentialsError:
Missing Credentials.
Unable to find AWS credentials. You can configure your AWS credentials
a few different ways:
* Call AWS.config with :access_key_id and :secret_access_key
<<<
* On EC2 you can run instances with an IAM instance profile and credentials
will be auto loaded from the instance metadata service on those
instances.
* Call AWS.config with :credential_provider. A credential provider should
either include AWS::Core::CredentialProviders::Provider or respond to
the same public methods.
= Ruby on Rails
In a Ruby on Rails application you may also specify your credentials in
the following ways:
* Via a config initializer script using any of the methods mentioned above
(e.g. RAILS_ROOT/config/initializers/aws-sdk.rb).
* Via a yaml configuration file located at RAILS_ROOT/config/aws.yml.
This file should be formated like the default RAILS_ROOT/config/database.yml
file.
I also have config/initializers/aws-sdk.rb in my Rails application with this content:
AWS.config(
  access_key_id: ENV["AWS_ACCESS_KEY_ID"],
  secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"])
The aws-sqsd daemon doesn't start at all.
Is there some other way to configure aws-sqsd?
Perhaps the instance profile you are using for your Elastic Beanstalk environment does not have the permissions needed for worker environments.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.iam.roles.aeb.html#AWSHowTo.iam.policies.actions.worker
Can you make sure your IAM instance profile has all the permissions listed in the link above?
(Copied below for reference)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "QueueAccess",
      "Action": [
        "sqs:ChangeMessageVisibility",
        "sqs:DeleteMessage",
        "sqs:ReceiveMessage"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "MetricsAccess",
      "Action": [
        "cloudwatch:PutMetricData"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
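If any of these are missing, one way to add them is to attach the policy to the role behind the instance profile. A sketch using the AWS CLI, assuming the default aws-elasticbeanstalk-ec2-role and that the policy above is saved locally as worker-policy.json (both assumptions; adjust to your setup):

aws iam put-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-name eb-worker-queue-access \
  --policy-document file://worker-policy.json

The policy name here is arbitrary. Once the role has these permissions, aws-sqsd should pick up credentials from the instance metadata service without any keys in the environment.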
What I'm trying to accomplish:
I have a Ruby on Rails app which uses CarrierWave to store data using the fog-aws adapter.
I'm trying to simulate poor communication with AWS S3
What I've done:
I have a sample ruby script which tries to connect to AWS and enumerate objects:
require 'aws-sdk-s3'
params = {region: 'us-west-2', access_key_id: 'key', secret_access_key: 'secret'}
s3 = Aws::S3::Client.new(params)
puts s3.list_objects({bucket: 'bucketname', prefix: '', max_keys: 1})
This works - so the credentials are fine.
I now want to toxify this using toxiproxy (a chaos engineering tool that can randomly break connections, etc.):
Added an http proxy to the params: {region: 'us-west-2', access_key_id: 'key', secret_access_key: 'secret', http_proxy: 'http://localhost:7890'}
Created a toxiproxy definition using anything I could think of:
toxiproxy-cli create --listen localhost:7890 -u bucketname.s3.amazonaws.com:443 test-aws
toxiproxy-cli create --listen localhost:7890 -u s3-r-w.us-west-2.amazonaws.com:443 test-aws
I then execute the above-mentioned script, and all I see is:
lib/ruby/2.6.0/net/protocol.rb:225:in 'rbuf_fill': end of file reached (Seahorse::Client::NetworkingError)
I can't figure out the magic parameters required to make everything connect.
So questions:
Can what I'm trying to accomplish work using toxiproxy?
If not, what is recommended?
I figured it out. It looks like toxiproxy by itself is not enough to intercept & forward the connection -- I needed to also use tinyproxy.
This is what I did:
Set up tinyproxy as an HTTP proxy on port 8888.
Set up toxiproxy on port 7890 to forward to port 8888.
Configured the AWS client to connect to an http_proxy on port 7890 -- this lets toxiproxy mess with the connection, while tinyproxy consumes the HTTP CONNECT and tunnels to AWS.
Example code:
require 'aws-sdk-s3'
s3 = Aws::S3::Client.new(region: 'us-west-2', access_key_id: 'key', secret_access_key: 'secret', http_proxy: 'http://localhost:7890')
puts s3.list_objects({bucket: 'bucketname', prefix: '', max_keys: 1})
Example tinyproxy configuration:
Port 8888
Listen 127.0.0.1
Timeout 600
Allow 127.0.0.1
Example toxiproxy configuration:
toxiproxy-cli create --listen 127.0.0.1:7890 -u 127.0.0.1:8888 test-aws
toxiproxy-cli toxic add --type reset_peer -a timeout=25 test-aws
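For reference, reset_peer is just one failure mode; other toxics can be swapped in the same way. A hypothetical example adding latency and jitter instead of connection resets (attribute names per the toxiproxy docs):

toxiproxy-cli toxic add --type latency -a latency=2000 -a jitter=500 test-aws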
BACKGROUND:
We are trying to deploy an app as a Docker container through the AWS Greengrass connector service to an edge device (running Greengrass Core as a container in a Linux environment).
We are configuring the Greengrass group connector in the cloud for the Docker app deployment.
ISSUES:
While deploying from the AWS Greengrass group (AWS cloud), we see a successful deployment message, but the application is not getting deployed to the edge device (running Greengrass Core as a container).
LOGS:
DockerApplicationDeploymentLog:
[2020-11-05T10:35:42.632Z][FATAL]-lambda_runtime.py:381,Failed to initialize Lambda runtime due to exception: "getgrnam(): name not found: 'docker'"
[2020-11-05T10:35:44.789Z][WARN]-ipc_client.py:162,deprecated arg port=8000 will be ignored
[2020-11-05T10:35:45.012Z][WARN]-ipc_client.py:162,deprecated arg port=8000 will be ignored
[2020-11-05T10:35:45.012Z][INFO]-docker_deployer.py:41,docker deployer starting up
[2020-11-05T10:35:45.012Z][INFO]-docker_deployer.py:45,checking inputs
[2020-11-05T10:35:45.012Z][INFO]-docker_deployer.py:52,docker group permissions
[2020-11-05T10:35:45.02Z][FATAL]-lambda_runtime.py:141,Failed to import handler function "handlers.function_handler" due to exception: "getgrnam(): name not found: 'docker'"
RuntimeSystemLog:
[2020-11-05T10:31:49.78Z][DEBUG]-Restart worker because it was killed. {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "funcArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6"}
[2020-11-05T10:31:49.78Z][DEBUG]-Reserve worker. {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "funcArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6"}
[2020-11-05T10:31:49.78Z][DEBUG]-Doing start attempt: {"Attempt count": 0, "workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "funcArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6"}
[2020-11-05T10:31:49.78Z][DEBUG]-Creating directory. {"dir": "/greengrass/ggc/packages/1.11.0/var/lambda/8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:49.78Z][DEBUG]-changed ownership {"path": "/greengrass/ggc/packages/1.11.0/var/lambda/8b0ee21d-e481-4d27-5e30-cb4d912547f5", "new uid": 121, "new gid": 121}
[2020-11-05T10:31:49.782Z][DEBUG]-Resolving environment variable {"Variable": "PYTHONPATH=/greengrass/ggc/deployment/lambda/arn.aws.lambda.ap-south-1.aws.function.DockerApplicationDeployment.6"}
[2020-11-05T10:31:49.79Z][DEBUG]-Resolving environment variable {"Variable": "PATH=/usr/bin:/usr/local/bin"}
[2020-11-05T10:31:49.799Z][DEBUG]-Resolving environment variable {"Variable": "DOCKER_DEPLOYER_DOCKER_COMPOSE_DESTINATION_FILE_PATH=/home/ggc_user"}
[2020-11-05T10:31:49.82Z][DEBUG]-Creating new worker. {"functionArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6", "workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:49.82Z][DEBUG]-Starting worker process. {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:49.829Z][DEBUG]-Worker process started. {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "pid": 20471}
[2020-11-05T10:31:49.83Z][DEBUG]-Start work result: {"workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "funcArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6", "state": "Starting", "initDurationSeconds": 0.012234454}
[2020-11-05T10:31:49.831Z][INFO]-Created worker. {"functionArn": "arn:aws:lambda:ap-south-1:aws:function:DockerApplicationDeployment:6", "workerId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5", "pid": 20471}
[2020-11-05T10:31:53.155Z][DEBUG]-Received a credential provider request {"serverLambdaArn": "arn:aws:lambda:::function:GGTES", "clientId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:53.156Z][DEBUG]-WorkManager getting work {"workerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "funcArn": "arn:aws:lambda:::function:GGTES", "invocationId": "955c2c43-1187-4001-7988-4213b95eb584"}
[2020-11-05T10:31:53.156Z][DEBUG]-Successfully GET work. {"invocationId": "955c2c43-1187-4001-7988-4213b95eb584", "fromWorkerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "ofFunction": "arn:aws:lambda:::function:GGTES"}
[2020-11-05T10:31:53.156Z][DEBUG]-POST work result. {"invocationId": "955c2c43-1187-4001-7988-4213b95eb584", "ofFunction": "arn:aws:lambda:::function:GGTES"}
[2020-11-05T10:31:53.156Z][DEBUG]-WorkManager putting work result. {"workerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "invocationId": "955c2c43-1187-4001-7988-4213b95eb584"}
[2020-11-05T10:31:53.156Z][DEBUG]-WorkManager put work result successfully. {"workerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "invocationId": "955c2c43-1187-4001-7988-4213b95eb584"}
[2020-11-05T10:31:53.156Z][DEBUG]-Successfully POST work result. {"invocationId": "955c2c43-1187-4001-7988-4213b95eb584", "ofFunction": "arn:aws:lambda:::function:GGTES"}
[2020-11-05T10:31:53.157Z][DEBUG]-Handled a credential provider request {"clientId": "8b0ee21d-e481-4d27-5e30-cb4d912547f5"}
[2020-11-05T10:31:53.158Z][DEBUG]-GET work item. {"fromWorkerId": "148f7a1a-168f-40a5-682d-92e00d56a5df", "ofFunction": "arn:aws:lambda:::function:GGTES"}
[2020-11-05T10:31:53.158Z][DEBUG]-Worker timer doesn't exist. {"workerId": "148f7a1a-168f-40a5-682d-92e00d56a5df"}
Did you double-check that you meet the requirements listed in
https://docs.aws.amazon.com/greengrass/latest/developerguide/docker-app-connector.html
https://docs.aws.amazon.com/greengrass/latest/developerguide/docker-app-connector.html#docker-app-connector-linux-user
I don't know this particular error, but it complains about some missing basic user/group settings:
[2020-11-05T10:35:42.632Z][FATAL]-lambda_runtime.py:381,Failed to initialize Lambda runtime due to exception: "getgrnam(): name not found: 'docker'"
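If that is the cause, a likely fix (a sketch, assuming the docker group simply doesn't exist on the core host and that ggc_user is the Lambda user, as in the linked docs) would be:

sudo groupadd docker              # create the 'docker' group that getgrnam() could not find
sudo usermod -aG docker ggc_user  # add the Greengrass user to it

Then restart the Greengrass core (however it runs in your setup) so the new group membership is picked up.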
I am trying to submit a transaction to Hyperledger Sawtooth v1.0.1 using JavaScript, to a validator running on localhost. The code for the POST request is below:
request.post({
  url: constants.API_URL + '/batches',
  body: batchListBytes,
  headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => {
  if (err) {
    console.log(err);
    return cb(err);
  }
  console.log(response.body);
  return cb(null, response.body);
});
The transaction gets processed when submitted from a backend Node.js application, but it returns an OPTIONS http://localhost:8080/batches 405 (Method Not Allowed) error when submitted from the client. These are the options that I have tried:
Injecting Access-Control-Allow-* headers into the response using an extension: the response still gives the same error.
Removing the custom header to bypass the preflight request: this makes the validator throw an error as shown:
...
sawtooth-rest-api-default | KeyError: "Key not found: 'Content-Type'"
sawtooth-rest-api-default | [2018-03-15 08:07:37.670 ERROR web_protocol] Error handling request
sawtooth-rest-api-default | Traceback (most recent call last):
...
The unmodified POST request from the browser gets the following response headers from the validator:
HTTP/1.1 405 Method Not Allowed
Content-Type: text/plain; charset=utf-8
Allow: GET,HEAD,POST
Content-Length: 23
Date: Thu, 15 Mar 2018 08:42:01 GMT
Server: Python/3.5 aiohttp/2.3.2
So I guess the OPTIONS method is not handled in the validator. A GET request for the state goes through fine when the CORS headers are added. This issue was also not present in Sawtooth v0.8.
I am using Docker to start the validator, and the commands to start it are a slightly modified version of those given in the LinuxFoundationX LFS171x course. The relevant commands are below:
bash -c \"\
sawadm keygen && \
sawtooth keygen my_key && \
sawset genesis -k /root/.sawtooth/keys/my_key.priv && \
sawadm genesis config-genesis.batch && \
sawtooth-validator -vv \
--endpoint tcp://validator:8800 \
--bind component:tcp://eth0:4004 \
--bind network:tcp://eth0:8800
Can someone please guide me as to how to solve this problem?
CORS issues are always the best.
What is CORS?
Your browser is trying to protect users from being directed to a page they think is the frontend for an API, but which is actually fraudulent. Any time a web page tries to access an API on a different domain, that API must explicitly give the web page permission, or the browser will block the request. This is why you can query the API from Node.js (no browser), and why you can put the REST API address directly into your address bar (same domain). However, trying to go from localhost:3000 to localhost:8008, or from file://path/to/your/index.html to localhost:8008, is going to get blocked.
Why doesn't the Sawtooth REST API handle OPTIONS requests?
The Sawtooth REST API does not know the domain your web page will be served from, so it can't whitelist it explicitly. It is possible to whitelist all domains, but this obviously destroys any protection CORS might give you. Rather than trying to weigh the costs and benefits of this approach for all Sawtooth users everywhere, the decision was made to keep the REST API as lightweight and security-agnostic as possible. Any developer using it is expected to put it behind a proxy server, where they can make whatever security decisions they need.
So how do you fix it?
You need to set up a proxy server that puts the REST API and your web page on the same domain. There is no quick configuration option for this; you will have to set up an actual server. Obviously there are lots of ways to do this. If you are already familiar with Node, you could serve the page from Node.js and have the Node server proxy the API calls (see the sketch below). If you are already running all of the Sawtooth components with docker-compose, though, it might be easier to use Docker and Apache.
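Purely for illustration, a minimal sketch of that Node option, assuming Express and the http-proxy-middleware package (both are my choices here, not part of Sawtooth; names and ports are placeholders):

const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Serve the web app's static files from ./public
app.use(express.static('public'));

// Forward /api/* to the Sawtooth REST API so page and API share one origin
app.use('/api', createProxyMiddleware({
  target: 'http://localhost:8008',
  pathRewrite: { '^/api': '' }  // strip the /api prefix before forwarding
}));

app.listen(8000, () => console.log('Serving on http://localhost:8000'));

With this, both the page and the API are reached through localhost:8000, so the browser never makes a cross-origin request.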
Setting up an Apache Proxy with Docker
Create your Dockerfile
In the same directory as your web app, create a text file called "Dockerfile" (no extension), and make it look like this:
FROM httpd:2.4
RUN echo "\
LoadModule proxy_module modules/mod_proxy.so\n\
LoadModule proxy_http_module modules/mod_proxy_http.so\n\
ProxyPass /api http://rest-api:8008\n\
ProxyPassReverse /api http://rest-api:8008\n\
RequestHeader set X-Forwarded-Path \"/api\"\n\
" >>/usr/local/apache2/conf/httpd.conf
This does a couple of things. First it pulls down the httpd image from Docker Hub, which is just a simple static file server. Then we use a bit of bash to append five lines to Apache's configuration file. These five lines load the proxy modules, tell Apache that we want to proxy http://rest-api:8008 at the /api route, and set the X-Forwarded-Path header so the REST API can properly build response URLs. Make sure that rest-api matches the actual name of the Sawtooth REST API service in your docker compose file.
Modify your docker compose file
Now, in the docker compose YAML file you are running Sawtooth with, add a new entry under the services key:
services:
  my-web-page:
    build: ./path/to/web/dir/
    image: my-web-page
    container_name: my-web-page
    volumes:
      - ./path/to/web/dir/public/:/usr/local/apache2/htdocs/
    expose:
      - 80
    ports:
      - '8000:80'
    depends_on:
      - rest-api
This will build the Dockerfile located at ./path/to/web/dir/Dockerfile (relative to the docker compose file) and run it with its default command, which is to start Apache. Apache serves whatever files are located in /usr/local/apache2/htdocs/, so we use volumes to link the path to your web files on the host machine (i.e. ./path/to/web/dir/public/) to that directory in the container. This is basically an alias, so if you update your web app later, you don't need to restart this Docker container to see the changes. Finally, ports takes the server, which is at port 80 inside the container, and forwards it out to localhost:8000.
Running it all
Now you should be able to run:
docker-compose -f path/to/your/compose-file.yaml up
And it will start up your Apache server along with the Sawtooth REST API, the validator, and any other services you defined. If you go to http://localhost:8000, you should see your web page, and if you go to http://localhost:8000/api/blocks, you should see a JSON representation of the blocks on the chain. More importantly, you should be able to make the request from your web app:
request.post({
  url: 'api/batches',
  body: batchListBytes,
  headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => console.log(response));
Whew. Sorry for the long response, but I'm not sure if it is possible to solve CORS any faster. Hopefully this helps.
The transaction header should include details such as the state address where the data will be saved. Here is an example which I have used and which works for me:
String payload = "create,0001,BLockchain CPU,Black,5000";
logger.info("Sending payload as - " + payload);
String payloadHash = Utils.hash512(payload.getBytes()); // SHA-512 hash (hex) of the payload, required in the header
ByteString payloadByteString = ByteString.copyFrom(payload.getBytes());
String address = getAddress(IDEM, ITEM_ID); // unique state address used for both inputs and outputs
logger.info("Sending address as - " + address);

TransactionHeader txnHeader = TransactionHeader.newBuilder()
        .setBatcherPublicKey(publicKeyHex)
        .setFamilyName(IDEM) // Idem family
        .setFamilyVersion(VER)
        .addInputs(address)
        .setNonce("1")
        .addOutputs(address)
        .setPayloadSha512(payloadHash)
        .setSignerPublicKey(publicKeyHex)
        .build();

ByteString txnHeaderBytes = txnHeader.toByteString();
// Sign the serialized header bytes, not their string representation
String txnSignature = Signing.sign(privateKey, txnHeader.toByteArray());

Transaction txn = Transaction.newBuilder()
        .setHeader(txnHeaderBytes)
        .setPayload(payloadByteString)
        .setHeaderSignature(txnSignature)
        .build();

BatchHeader batchHeader = BatchHeader.newBuilder()
        .setSignerPublicKey(publicKeyHex)
        .addTransactionIds(txn.getHeaderSignature())
        .build();

ByteString batchHeaderBytes = batchHeader.toByteString();
String batchSignature = Signing.sign(privateKey, batchHeader.toByteArray());

Batch batch = Batch.newBuilder()
        .setHeader(batchHeaderBytes)
        .setHeaderSignature(batchSignature)
        .setTrace(true)
        .addTransactions(txn)
        .build();

BatchList batchList = BatchList.newBuilder()
        .addBatches(batch)
        .build();

ByteString batchBytes = batchList.toByteString();

String serverResponse = Unirest.post("http://localhost:8008/batches")
        .header("Content-Type", "application/octet-stream")
        .body(batchBytes.toByteArray())
        .asString()
        .getBody();
I am following the steps to register a device in AWS IoT, using the procedure described by AWS for a self-signed certificate. Step three of the tutorial indicates the following command:
aws iot get-registration-code
But I am getting the following exception:
$ aws iot get-registration-code
An error occurred (AccessDeniedException) when calling the
GetRegistrationCode operation: User: arn:aws:iam::xxxxxxxx:user/dalton
is not authorized to perform: iot:GetRegistrationCode on resource: *
It is not clear how I can assign the right permissions. In the IAM Management Console, I attached the following policies to my user:
AWSIoTThingsRegistration
AWSIoTLogging
AWSIoTConfigAccess
AWSIoTRuleActions
AWSIoTConfigReadOnlyAccess
AWSQuickSightIoTAnalyticsAccess
AWSIoTOTAUpdate
AWSIoTDataAccess
AWSIoTFullAccess
Still without success.
AWSIoTFullAccess defines this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:*"
      ],
      "Resource": "*"
    }
  ]
}
So with that you will be able to execute the call, according to the IAM IoT policies. When you attach a new policy, it only takes a few seconds before it goes into effect on the CLI.
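If you prefer least privilege over iot:*, a minimal inline policy granting just the failing call might look like this (a sketch based on the action named in the error):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:GetRegistrationCode",
      "Resource": "*"
    }
  ]
}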
You need to:
Double-check your IAM policies and ensure that the user driving the CLI is indeed using the AWS credentials (key and secret) that match the IAM user that has AWSIoTFullAccess.
Double-check the AWS account number if you're using multiple accounts.
Run the AWS IAM Policy Simulator and verify the output (see the example below).
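For the simulator check, the same verification can also be done from the CLI; a sketch (the account ID is a placeholder):

aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/dalton \
  --action-names iot:GetRegistrationCode

If the result shows "allowed", the policy side is fine, and the problem is with which credentials the CLI is actually using.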
I am trying to create a subdomain using Route 53 with the aws-php-sdk, but I am getting this error again and again:
[2017-06-16 12:17:00] local.ERROR: Aws\Exception\CredentialsException: Error retrieving credentials from the instance profile metadata server.
(cURL error 7: Failed to connect to 169.254.169.254 port 80: No route to host (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)) in /var/www/html/test/vendor/aws/aws-sdk-php/src/Credentials/InstanceProfileProvider.php:79
I am using aws-sdk-php version: 3.29
"aws/aws-sdk-php": "^3.29"
Here is my code:
use Aws\Route53\Route53Client;

$client = Route53Client::factory(array(
    'region' => 'us-east-1',
    'version' => '2013-04-01',
    'credentials ' => array(
        'key' => 'AWS_KEY',
        'secret' => 'AWS_SECRET_KEY'
    )
));

$result = $client->changeResourceRecordSets(array(
    // HostedZoneId is required
    'HostedZoneId' => 'ROUTER_53_HOSTED_ZONE_ID',
    // ChangeBatch is required
    'ChangeBatch' => array(
        // Changes is required
        'Changes' => array(
            array(
                // Action is required
                'Action' => 'CREATE',
                // ResourceRecordSet is required
                'ResourceRecordSet' => array(
                    // Name is required
                    'Name' => 'test2.xyz.co.in.',
                    // Type is required
                    'Type' => 'A',
                    'TTL' => 600,
                    'AliasTarget' => array(
                        'HostedZoneId' => 'LOAD_BALANCER_ZONE_ID',
                        'DNSName' => 'LOAD_BALANCER_DOMAIN_NAME',
                        'EvaluateTargetHealth' => false
                    ),
                ),
            ),
        ),
    ),
));
Any help will be appreciated. Thanks in advance.
This question is very old, but I want to drop an answer in case someone has a similar issue.
The AWS PHP SDK needs credentials to communicate with AWS. The credentials are known as the access key ID and secret access key.
As highlighted in the AWS documentation:
If you do not provide credentials to a client object at the time of its instantiation, the SDK will attempt to find credentials in your environment.
According to your logs, it seems that the SDK is still pulling credentials from your environment (such as ~/.aws/credentials or the instance profile) and not using the keys you provided.
Either make sure you have the access key and the secret key in your credentials file:
$ less ~/.aws/credentials
[default]
aws_access_key_id = key
aws_secret_access_key = secret
Or
Clear the configuration cache, in case the credentials were cached, to force the use of the explicit credentials declared when instantiating your client:
php artisan config:cache
Also refer to this documentation on how to properly set up a client:
https://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html
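One more thing worth checking in the code from the question: the client option is written as 'credentials ' with a trailing space. If the key is not exactly 'credentials', the SDK won't see your explicit keys and will fall back to the instance profile provider, which matches the error in the logs. A corrected sketch (the keys are placeholders):

use Aws\Credentials\Credentials;
use Aws\Route53\Route53Client;

$client = new Route53Client([
    'region'      => 'us-east-1',
    'version'     => '2013-04-01',
    'credentials' => new Credentials('AWS_KEY', 'AWS_SECRET_KEY'), // note: no trailing space in the key
]);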
If you use
php artisan config:cache
make sure you don't use the env() helper to access environment variables anywhere except in the config files (config/*). Avoid using the env() helper in your Blade templates: calling env() after the above command has been run will return null.
Instead, use a config file for accessing env values. If a separate config file under the config folder is not available for that vendor package / service, config/services.php is a good place to point to env values (see the example below).
The php artisan config:cache command will speed up your app, as the env variables are cached, and so it is recommended in a production environment.
Refer to Laravel Configuration Caching for more details.
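For example, a sketch of pointing config/services.php at the env values and reading them back through config() (the key names are placeholders):

// config/services.php
'route53' => [
    'key'    => env('AWS_KEY'),
    'secret' => env('AWS_SECRET_KEY'),
],

// anywhere else in the app
$key = config('services.route53.key');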