How to provide private s3 bucket credentials to electron-updater - electron

I am able to implement electron-updater in my Electron app with a public S3 bucket, but the same setup doesn't work with a private bucket. I am getting:
Error: HttpError: 403 Forbidden
I assume the application does not have the AWS access key and secret key required to access the private S3 bucket. How do I instruct electron-updater to use credentials during autoUpdater.checkForUpdates() and autoUpdater.downloadUpdate()?

How about these steps?
Generate a signed URL using the aws-sdk.
Bind that signed URL to setFeedURL.
If you follow these steps, also check which AWS signature version your bucket requires.

Related

Golang SDK with Docker Error: UnrecognizedClientException: The security token included in the request is invalid

While running the Golang SDK for AWS against Docker images of S3 and DynamoDB, when I hit the REST API to store data in S3 and DynamoDB, I am getting this error: UnrecognizedClientException: The security token included in the request is invalid.

Spring Cloud Data Flow AWS S3 bucket

I am using the SCDF Stream S3 SOURCE starter app to read files from an AWS S3 bucket. What configuration should be set in the S3 SOURCE app to avoid processing the same file again?

AWS SQS: Golang, Error: InvalidClientTokenId: The security token included in the request is invalid

Amazon SQS is throwing the following error:
Error: InvalidClientTokenId: The security token included in the request is invalid
I am using the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to create the session. Both the key and the secret are valid. I found the following URL regarding this issue:
https://aws.amazon.com/premiumsupport/knowledge-center/security-token-expired/
It says:
"All application API requests to Amazon Web Services (AWS) must be cryptographically signed using credentials issued by AWS.
If your application uses temporary credentials when creating an AWS client (such as an AmazonSQS client), the credentials expire at the time interval specified during their creation. You must make sure that the credentials are refreshed before they expire."
Do credentials supplied through environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) need to be refreshed? And what is the default expiry for credentials supplied this way?
The same thing was happening to me: I discovered the application was using values for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY that were old and had since been rotated. Switching to the latest credentials from AWS fixed this for me.
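The refresh requirement quoted above can be sketched as a small wrapper. Long-lived access keys (plain AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY without a session token) do not expire on their own; temporary session credentials do, and must be renewed before their expiration. fetchTemporaryCredentials below is a hypothetical stand-in for whatever issues the temporary credentials (e.g. an STS call).

```javascript
// Sketch of the refresh pattern the AWS docs describe. Temporary
// credentials carry an expiration; renew them slightly early to
// allow for clock skew. fetchTemporaryCredentials is hypothetical.
class RefreshingCredentials {
  constructor(fetchTemporaryCredentials, skewMs = 60_000) {
    // async () => ({ accessKeyId, secretAccessKey, sessionToken, expiration })
    this.fetch = fetchTemporaryCredentials;
    this.skewMs = skewMs;
    this.creds = null;
  }

  async get() {
    const expiringSoon = this.creds &&
      Date.now() >= this.creds.expiration.getTime() - this.skewMs;
    if (!this.creds || expiringSoon) {
      this.creds = await this.fetch(); // refresh before expiry
    }
    return this.creds;
  }
}
```

Static env-var credentials, by contrast, only stop working when someone rotates or deactivates them, which is exactly the failure mode described in the answer above.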

How to upload to a AES256 encrypted AWS S3 bucket using active storage rails?

I am trying to upload files to an AES-256 encrypted S3 bucket using Active Storage, but it throws an access denied error (Aws::S3::Errors::AccessDenied (Access Denied)).

Cannot access S3 bucket from WildFly running in Docker

I am trying to configure WildFly, using the Docker image jboss/wildfly:10.1.0.Final, to run in domain mode. I am using Docker for macOS 8.06.1-ce with aufs storage.
I followed the instructions in this link https://octopus.com/blog/wildfly-s3-domain-discovery. It seems pretty simple, but I am getting the error:
WFLYHC0119: Cannot access S3 bucket 'wildfly-mysaga': WFLYHC0129: bucket 'wildfly-mysaga' could not be accessed (rsp=403 (Forbidden)). Maybe the bucket is owned by somebody else or the authentication failed.
But my access key, secret, and bucket name are correct. I can use them to connect to S3 using the AWS CLI.
What can I be doing wrong? The tutorial runs it on an EC2 instance, while my test is in Docker. Maybe it is a certificate problem?
I generated access keys from an admin user and it worked.
