Making files uploaded to S3 public - ruby-on-rails

I'm using s3-swf-upload-plugin in a Rails project to upload directly to S3. Pretty nifty, but I can't seem to figure out how to make the uploaded files public. S3 doesn't seem to have the concept of public "buckets". Any ideas?

S3 supports four different access policies for both buckets and objects.
Take a look at the Canned Access Policies section in the S3 Documentation.
Specifically:
private
public-read
public-read-write
authenticated-read
So in your case, you'll need to set the access policy on your bucket and uploaded files to public-read.
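If you also want to set this from Ruby yourself (outside the Flash uploader), here is a minimal sketch with the aws-sdk-s3 gem; the bucket name, key, and region below are placeholders, and credentials are assumed to come from the environment:
require 'aws-sdk-s3'
# credentials are read from ENV['AWS_ACCESS_KEY_ID'] / ENV['AWS_SECRET_ACCESS_KEY']
s3  = Aws::S3::Resource.new(region: 'us-east-1')
obj = s3.bucket('my-bucket').object('uploads/avatar.png')
# upload with a public-read canned ACL so the object is world-readable
obj.upload_file('/path/to/avatar.png', acl: 'public-read')
# or flip an existing object to public-read after the fact
obj.acl.put(acl: 'public-read')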

I use S3Fox for Firefox, http://www.s3fox.net/
You can browse your S3 buckets then right-click -> Edit ACL and set things to public.
You can also get the url for the bucket in a similar fashion.
It is very simple to use.

Related

Carrierwave uploading file using fog to google cloud storage without making bucket public and control access fine-grained in rails

Is there a way I can upload files while keeping the bucket private and access control uniform?
I am trying to use CarrierWave with Fog for this purpose and followed the carrierwave gem instructions, but I receive the error "Cannot insert legacy ACL for an object when uniform bucket-level access is enabled.", which led me to make my bucket public and switch access control to "fine-grained". There were also some suggestions to use signed URLs, but in that case I have to generate a URL for every individual object.
All I want is to simply upload .pdf or .docx files to Google Cloud Storage without making the bucket public.
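For what it's worth, one commonly suggested configuration (an assumption here, not verified against the uniform bucket-level access error above) is to keep the bucket private, set fog_public to false so CarrierWave does not try to write per-object ACLs, and let CarrierWave hand out expiring signed URLs instead:
# config/initializers/carrierwave.rb -- hedged sketch; bucket name and env vars are placeholders
CarrierWave.configure do |config|
  config.fog_provider    = 'fog/google'
  config.fog_credentials = {
    provider:                 'Google',
    google_project:           ENV['GOOGLE_PROJECT'],
    google_json_key_location: ENV['GOOGLE_JSON_KEY_PATH']
  }
  config.fog_directory = 'my-private-bucket'    # bucket stays private
  config.fog_public    = false                  # don't set object ACLs; use signed URLs instead
  config.fog_authenticated_url_expiration = 600 # signed URL lifetime in seconds
end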

Where do I set cache information for my images?

This is about a Rails app on Heroku that runs behind CloudFront and serves ActiveStorage images from the Bucketeer add-on.
Cache configuration in both the Rails app itself and CloudFront is right on target for CSS, JS, and even key requests (like search results, third-party info fetched from APIs, etc).
What I can't figure out how to cache are the images that come from the Bucketeer add-on.
Right now the images seem to come from the Bucketeer bucket every time. They show up with no Cache TTL.
I'd like for them to be cached for up to a year both at the CloudFront level and the visitor's browser level.
Is this possible?
It seems like the Bucketeer add-on itself gives us no control over how the bucket and/or the service handles caching.
Where can I force these files to show up with caching instructions?
Thanks for sharing your findings here.
Additionally, I found that S3Service accepts upload options:
https://github.com/rails/rails/blob/6-0-stable/activestorage/lib/active_storage/service/s3_service.rb#L12
So you can add the following code to your storage.yml:
s3:
  service: S3
  access_key_id: ID
  secret_access_key: KEY
  region: REGION
  bucket: BUCKET
  upload:
    cache_control: 'public, max-age=31536000'
For a full list of available options, refer to the AWS SDK documentation.
After a lot of searching, I learned that Bucketeer does give bucket control. You just have to use AWS CLI.
Here is the link to AWS docs on CLI:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
And here is the link where Bucketeer tells you how to get started with that on their service:
https://devcenter.heroku.com/articles/bucketeer#using-with-the-aws-cli
This means you can install the AWS CLI, run aws configure with the credentials Bucketeer provides, and then go on to change cache-control in the bucket directly.
AWS does not seem to have a feature for setting cache-control defaults for an entire bucket or folder, so you actually do it to each object.
In my case, all of my files/objects in the bucket are images that I display on the website and need to cache, so it's safe to run a command that does it all at once.
Such a command can be found in this answer:
How to set expires headers to all images in a bucket in Amazon S3
For me, it looked like this:
aws s3 cp s3://my-bucket-name s3://my-bucket-name --recursive --acl public-read --metadata-directive REPLACE --cache-control max-age=43200000
The command basically copies the entire bucket onto itself while adding the cache-control max-age=43200000 header to each object in the process.
This works for all existing files, but will not change anything for future changes or additions. You'd have to run this again every so often to catch new stuff and/or write code to set your object headers when saving the object to the bucket. Apparently there are people that have had luck with this. Not me.
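If you do end up setting the header from code when saving objects, here is a rough sketch with the aws-sdk-s3 gem (the bucket and key are placeholders); it mirrors what the CLI command above does, for a single object:
require 'aws-sdk-s3'
s3 = Aws::S3::Client.new(region: 'us-east-1')
# copy the object onto itself, replacing its metadata with a Cache-Control header
s3.copy_object(
  bucket: 'my-bucket-name',
  key: 'images/photo.jpg',
  copy_source: 'my-bucket-name/images/photo.jpg',
  metadata_directive: 'REPLACE',
  cache_control: 'max-age=43200000',
  acl: 'public-read'
)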
Thankfully, I found this post:
https://www.neontsunami.com/posts/caching-variants-with-activestorage
This monkey-patch basically changes ActiveStorage::RepresentationsController#show to use Rails action caching for variants. Take a look. If you're having similar issues, it's worth the read.
There are drawbacks. For my case, they were not a problem, so this is the solution I went with.

How to reference and update a file on S3 from Rails 4

I have a Rails 4 application that needs to use a number of Excel files representing rosters (20 or so, grouped by their own individual committee) that have to be read in and editable by the user. Pre-deploy I had the system working perfectly: these files lived in public/rosters and could be referenced and edited by any authenticated user. Unfortunately, when I deployed to Heroku I could no longer do this.
I have been using an S3 bucket to host the other files necessary for this and other related apps, and it's been working wonderfully for what I've been using it for, so I decided to try it as a solution to this problem. Unfortunately, it appears I can only access the files the way I had been by making them publicly accessible, which is not something that I want to do.
So my question is this: what would be the best way to reference these files (ideally using my access_key_id and secret_access_key to authenticate) and allow a user to push changes that overwrite the file in the S3 bucket?
You can use the aws-sdk-ruby gem to write files to S3; it authenticates with your access_key_id and secret_access_key. Check this documentation. Hope this helps!
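A minimal sketch along those lines, assuming the aws-sdk-s3 gem and placeholder bucket/key names; the credentials are passed explicitly rather than the files being made public:
require 'aws-sdk-s3'
s3 = Aws::S3::Resource.new(
  region: 'us-east-1',
  access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)
obj = s3.bucket('my-roster-bucket').object('rosters/committee_a.xlsx')
# read the current roster down to a temp file
obj.get(response_target: '/tmp/committee_a.xlsx')
# ... let the user edit it, then push the change back, overwriting the S3 copy
obj.upload_file('/tmp/committee_a.xlsx')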

Upload file directly to S3 without need to use forms in Rails

For my Rails application, I download a bunch of files from a remote URL to my application. I would like to directly upload them to Amazon S3, without needing a form to do the upload, since I will temporarily cache the file I downloaded on the EC2 instance.
I would also like to retain the links to the files I uploaded so I can download them later.
I am essentially reposting the files I downloaded.
I looked around, but most of the solutions seem to involve form-based uploading to S3 by a user.
Is there a direct upload solution?
You can upload directly to S3 using the AWS SDK for Ruby. The easiest way is:
require 'aws-sdk'
s3 = Aws::S3::Resource.new(region:'us-west-2')
obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/path/to/source/file')
Or you can find a couple other options here.
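To keep a link you can download from later, you could ask the SDK for a URL on the same object; for a private object a presigned URL is the usual route (the 15-minute expiry here is just an example):
# permanent URL (only readable if the object/bucket is public)
url = obj.public_url
# or a time-limited presigned URL for a private object
signed_url = obj.presigned_url(:get, expires_in: 900)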
You can simply use EvaporateJS to achieve this. You can also send an AJAX request after each file upload to record the file name in your database. Although the JavaScript exposes a few details, your bucket is not left open to abuse, since S3 lets you lock it down with a bucket policy and CORS rules.
Just change <AllowedOrigin>*</AllowedOrigin> to <AllowedOrigin>specificwebsite.com</AllowedOrigin> in your bucket's CORS configuration in production.
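If you prefer to set that CORS rule from Ruby rather than the console, a hedged sketch with the aws-sdk-s3 client (the bucket name and origin are placeholders):
require 'aws-sdk-s3'
s3 = Aws::S3::Client.new(region: 'us-east-1')
# restrict browser uploads to your own site instead of AllowedOrigin *
s3.put_bucket_cors(
  bucket: 'my-upload-bucket',
  cors_configuration: {
    cors_rules: [
      {
        allowed_origins: ['https://specificwebsite.com'],
        allowed_methods: ['GET', 'PUT', 'POST'],
        allowed_headers: ['*'],
        max_age_seconds: 3000
      }
    ]
  }
)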

Make the object publicly readable in Amazon S3

I am working with the Amazon S3 SDK for storing files on the cloud. I am using CodePlex's ThreeSharp (http://threesharp.codeplex.com) to implement this. I have successfully uploaded a file to the server; now I have to download it, and for this I have to download it via a URL, e.g. https://s3.amazonaws.com/MyBucket/Filename
I can download the uploaded file, but it appears blank: if I upload a text file, after downloading it shows nothing in it, and the same happens with images and other files. I have read in the Amazon S3 documentation that I'll have to make the object publicly readable (http://docs.amazonwebservices.com/AmazonS3/latest/gsg/OpeningAnObject.html), but I don't have any idea how to achieve this.
How can I accomplish the download functionality?
The ThreeSharp project is desktop-based, and I am working on a web-based application.
During file upload, set the proper ACL. E.g.:
AmazonS3 client = GetS3Client(); // GetS3Client() stands in for however you build your configured client
SetACLRequest request = new SetACLRequest();
request.BucketName = "my-new-bucket";
request.Key = "hello.txt";
request.CannedACL = S3CannedACL.PublicRead; // canned ACL that makes the object publicly readable
client.SetACL(request);
Amazon S3 provides a rich set of mechanisms for you to manage access to your buckets and objects.
Check this for detail: Amazon S3 Bucket Public Access Considerations
Also, you can download an Amazon S3 explorer tool (e.g. CloudBerry Explorer for Amazon S3) and then assign appropriate rights to your buckets.
CloudBerry Explorer for Amazon S3: Data Access Feature:
Bucket Policy Editor
Create and edit conditional rules for managing access to the buckets and objects.
ACL Editor
Manage access permission to any of your objects by setting up 'Access Control List'. ACL will also apply to all 'child objects' inside S3 buckets.
Also, you can do the same using the Amazon S3 admin console.
Have you tried the following:
Right-click the object and click Make public, or
select the object and, in the Permissions section, check Open/Download?
Edit:
Have you taken a look here:
How to set the permission on files at the time of Upload through Amazon s3 API
and here:
How to set a bucket's ACL on S3?
It might guide you in the right direction.
