How do I read an S3 bucket from AWS SAM local - aws-sam-cli

I have a Lambda function that I wish to test locally using AWS SAM local. The Lambda needs to read from an S3 bucket in the cloud. How do I allow this to happen from the AWS SAM local environment?
Thanks,
Angus
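
A rough sketch of one way this can work, assuming the credentials come from a local named profile (the bucket and key names below are placeholders, not from the question): sam local invoke runs the function in a container and passes along the AWS credentials from your shell environment or a profile (e.g. sam local invoke --profile default), so a handler using boto3 can read the real bucket in the cloud.

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Reads from the real bucket in the cloud using the credentials
    # that SAM local forwarded into the container (placeholder names).
    obj = s3.get_object(Bucket="my-real-bucket", Key="data/input.json")
    return {"statusCode": 200, "body": obj["Body"].read().decode("utf-8")}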

Related

MLflow: Unable to store artifacts to S3

I'm running my MLflow tracking server in a Docker container on a remote server and trying to log MLflow runs from my local computer, with the eventual goal that anyone on my team can send their run data to the same tracking server. I've set the tracking URI to http://<ip of remote server>:<port on docker container>. I'm not explicitly setting any of the AWS credentials on the local machine because I would like to just be able to train locally and log to the remote server (run data to RDS and artifacts to S3). I have no problem logging my runs to the RDS database, but I keep getting the following error when it gets to the point of logging artifacts: botocore.exceptions.NoCredentialsError: Unable to locate credentials. Do I have to have the credentials available outside of the tracking server for this to work (i.e. on my local machine where the MLflow runs are taking place)? I know that all of my credentials are available in the Docker container that hosts the tracking server. I've been able to upload files to my S3 bucket using the AWS CLI inside that container, so I know it has access. I'm confused by the fact that I can log to RDS but not S3. I'm not sure what I'm doing wrong at this point. TIA.
Yes, apparently I do need to have the credentials available to the local client as well.
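For illustration, a minimal sketch of what that amounts to (the tracking URI, credentials, and file name are placeholders): artifacts are uploaded to S3 directly by the client that calls the MLflow APIs rather than being proxied through the tracking server, so the local machine needs S3 credentials as well.

import os
import mlflow

# Placeholder credentials; exported env vars or ~/.aws/credentials work equally well.
os.environ["AWS_ACCESS_KEY_ID"] = "..."
os.environ["AWS_SECRET_ACCESS_KEY"] = "..."

mlflow.set_tracking_uri("http://<ip of remote server>:<port>")

with mlflow.start_run():
    mlflow.log_param("lr", 0.01)      # run data goes to the RDS backend store
    mlflow.log_artifact("model.pkl")  # the artifact is uploaded to S3 by this client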

Reading an image from remote ssh server into dask array

Is this possible? Based on the documentation, it looks like imread does not support anything but local file paths. If it is possible, would anyone be so kind as to provide a code sample?
Cheers.
Here is the relevant documentation.
The following remote services are well supported and tested against the main codebase:
Local or Network File System: file:// - the local file system, default in the absence of any protocol.
Hadoop File System: hdfs:// - Hadoop Distributed File System, for resilient, replicated files within a cluster. This uses PyArrow as the backend.
Amazon S3: s3:// - Amazon S3 remote binary store, often used with Amazon EC2, using the library s3fs.
Google Cloud Storage: gcs:// or gs:// - Google Cloud Storage, typically used with Google Compute resource using gcsfs.
Microsoft Azure Storage: adl://, abfs:// or az:// - Microsoft Azure Storage using adlfs.
HTTP(s): http:// or https:// for reading data directly from HTTP web servers.
Check the documentation above for more information.
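
As a hedged sketch built on one of the protocols listed above (s3:// via fsspec) rather than on SSH specifically: open the remote file with fsspec, decode it with imageio, and wrap the result in a dask array. The path, shape, and dtype are placeholders.

import dask
import dask.array as da
import fsspec
import imageio.v2 as imageio

@dask.delayed
def load_remote_image(url):
    # fsspec resolves any of the protocols listed above (s3://, gcs://, ...)
    with fsspec.open(url, mode="rb") as f:
        return imageio.imread(f.read())

url = "s3://my-bucket/images/frame-0001.png"  # placeholder path
# from_delayed needs shape and dtype up front; adjust to your images
img = da.from_delayed(load_remote_image(url), shape=(1024, 1024, 3), dtype="uint8")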

How to omit "Aws::ECSCredentials.new" in Ruby

Currently, the way the code is written differs depending on the execution environment, and I want to unify it into a single style.
The code looks like this, depending on the environment.
When connecting to S3 from ECS:
client = Aws::S3::Client.new(region: Settings.aws.region, credentials: Aws::ECSCredentials.new)
When connecting to S3 from outside ECS:
client = Aws::S3::Client.new(region: Settings.aws.region)
When connecting to S3 on ECS, an error occurs if there are no credentials.
Let me know if there is a way to improve this.

Spring Cloud Data Flow AWS S3 bucket

I am using the SCDF Stream S3 source starter app to read files from an AWS S3 bucket. What configuration should be set in the S3 source app to avoid processing the same file more than once?

Cannot access S3 bucket from WildFly running in Docker

I am trying to configure WildFly, using the Docker image jboss/wildfly:10.1.0.Final, to run in domain mode. I am using Docker for macOS 8.06.1-ce with aufs storage.
I followed the instructions in this link: https://octopus.com/blog/wildfly-s3-domain-discovery. It seems pretty simple, but I am getting the error:
WFLYHC0119: Cannot access S3 bucket 'wildfly-mysaga': WFLYHC0129: bucket 'wildfly-mysaga' could not be accessed (rsp=403 (Forbidden)). Maybe the bucket is owned by somebody else or the authentication failed.
But my access key, secret and bucket name are correct. I can use them to connect to S3 using the AWS CLI.
What could I be doing wrong? The tutorial runs it on an EC2 instance, while my test is in Docker. Maybe it is a certificate problem?
I generated access keys from an admin user and it worked.
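As an aside (not from the thread), the same credential check the asker did with the AWS CLI can be scripted; a 403 from head_bucket reproduces the Forbidden error WildFly reports. The bucket name is taken from the error message above.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
try:
    s3.head_bucket(Bucket="wildfly-mysaga")
    print("credentials can reach the bucket")
except ClientError as e:
    # A 403 here matches the WFLYHC0129 Forbidden error from WildFly
    print("access denied:", e.response["Error"]["Code"])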
