Public access for Active Storage in Rails 6.1

I use Active Storage on my Rails site with AWS. After upgrading to 6.1, I'd like to configure public access per the guide so my images have permanent URLs.
I've determined that I need to keep the existing service as-is so previously uploaded images continue to work. I've created a new service and configured the app to use it like this.
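For reference, a minimal sketch of what that configuration typically looks like in Rails 6.1 (the bucket, region, and credential names here are placeholders):

# config/storage.yml
amazon_public:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-1
  bucket: my-bucket
  public: true   # the Rails 6.1 flag that makes blobs serve permanent public URLs

# config/environments/production.rb
config.active_storage.service = :amazon_public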
With this setup, previously uploaded images continue to work, but new image uploads fail with Aws::S3::Errors::AccessDenied. Note that the credentials used are exactly the same as in the previous, working, non-public service. The guide mentions that the bucket needs to have the proper permissions, but not what exactly needs to be set.
Looking in AWS, the "Block public access (bucket settings)" section is all set to "Off". Under "Access control list (ACL)", "Bucket owner (your AWS account)" has "List, Write" for both "Objects" and "Bucket ACL"; no other permissions are listed. I've tried changing "Everyone (public access)" to include "List" for "Objects" and "Read" for "Bucket ACL", but that doesn't seem to solve the problem.
How do I get public URLs working with Active Storage?

The permission you need when switching from private to public access is s3:PutObjectAcl. Adding it in the IAM Management Console makes it work.
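For illustration, a minimal IAM policy statement granting it might look like this (the bucket name is a placeholder):

{
  "Effect": "Allow",
  "Action": ["s3:PutObject", "s3:PutObjectAcl"],
  "Resource": "arn:aws:s3:::my-bucket/*"
}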
In addition, rather than creating a new service, you can mark all images in the existing service as public-readable, either via the S3 console or via a script like the one sketched below.
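A rough sketch of such a script using the aws-sdk-s3 gem (the bucket name and region are assumptions):

require "aws-sdk-s3"

# Mark every existing object in the bucket as publicly readable.
s3 = Aws::S3::Resource.new(region: "us-east-1")
s3.bucket("my-bucket").objects.each do |summary|
  summary.acl.put(acl: "public-read")
end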

Related

Can a Slack bot re-share a file that was shared with it?

Context of what I'm trying to accomplish:
User shares a file with the bot
Other users interact with the bot via a dialog
The bot shares the original file to the other users
For example, we want to share a file to the bot that contains this week's cafeteria menu. Each time users would interact with the bot in a certain way, it would share the cafeteria menu with them so that they can consult it.
I've tried calling the files.share method, but bots can't perform this action (I get an invalid token type error).
As far as I can tell, there is no way to do this currently. I've tried link unfurling in the message body, but that only works if the file itself was already shared to the user; if not, the link simply won't unfurl, and clicking it will fail.
The bot can perform a files.upload call and re-upload the contents of the file to each user individually. This seems incredibly wasteful, but it appears to be the only way that currently works.
Is there something I'm missing?
The reason your bot cannot use files.share is that this is an undocumented API method, and you need a legacy token to use it. No other token type (user token, bot token) will work, because it requires the post scope, which only exists for legacy tokens.
Approach A: Legacy Token
So one approach would be to use a legacy token with your bot, which you can create for your current workspace. That should work nicely if your Slack app is only used on your "own" Slack workspace, where you can create and use a legacy token.
Approach B: File Mention
Another approach is to use the mention feature in messages to share a file. This works by sending the private link (the url_private property) of an already shared file in a message to a new channel, which automatically re-shares the file in that channel. I believe this only works with files that have previously been shared in a public channel and can therefore be re-shared. Be aware, though, that the file mention feature is currently being reworked, so this behavior might change.
Example:
https://slack.com/api/chat.postMessage?token=TOKEN&channel=CHANNEL&as_user=true&text=URL_PRIVATE
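The same call from Ruby might look roughly like this (the token, channel ID, and file URL are placeholder values):

require "net/http"
require "json"

# Posting a file's url_private as the message text re-shares the file.
res = Net::HTTP.post_form(
  URI("https://slack.com/api/chat.postMessage"),
  "token"   => ENV.fetch("SLACK_TOKEN"),
  "channel" => "C0123456789",
  "as_user" => "true",
  "text"    => "https://files.slack.com/files-pri/T00000000-F00000000/menu.pdf"
)
puts JSON.parse(res.body)["ok"]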
For more details see the Slack tutorial Storing, retrieving, and modifying file uploads.
Approach C: External File / Image File
If you host your file externally or create a public URL for a file uploaded to Slack, you can share it in every channel by just adding the URL to a message. Slack will automatically unfurl it and thereby share it with the user in any channel. This is different from Approach B, because it's not a file mention and requires a public URL. You get the public URL of an uploaded file by calling files.sharedPublicURL.
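A sketch of that call from Ruby (the file ID is a placeholder; files.sharedPublicURL needs a user token, not a bot token):

require "net/http"
require "json"

res = Net::HTTP.post_form(
  URI("https://slack.com/api/files.sharedPublicURL"),
  "token" => ENV.fetch("SLACK_USER_TOKEN"),
  "file"  => "F0123456789"
)
# permalink_public is the URL you can drop into any message.
puts JSON.parse(res.body).dig("file", "permalink_public")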
If I'm not wrong, you can do it like this:
you share a file with your bot
you retrieve the shared file's ID, and with it its url_private property (cf. https://api.slack.com/types/file#authentication)
you then download the file
you can then re-share it several times later (without re-uploading it to each user)...

Stream remote file to client in ruby/rails 4/unicorn/nginx

I am trying to stream a file from a remote storage service (not s3 :-)) to the client using Ruby on Rails 4.2.
My server needs to stay in the middle to authenticate the client request, but also to build up the request to the remote storage service, since all requests to that service need to be authenticated with a custom header param. This makes it impossible to do a simple redirect_to and let the client download the file directly (but do let me know if this IS in fact possible using Rails!). Also, I want to keep the URL of the file cloaked from the client.
Up until now I have been using a gem called ZipLine, but this does not work either, as it still buffers the remote file before sending it to the client. As I am using unicorn/nginx, this might also be due to a setting in either of those two that prevents proper streaming.
As per the Rails docs' instructions, I have tried adding
listen 3000, tcp_nopush: false
to config/unicorn.rb but to no avail.
A solution might be to cache the remote file locally for a certain period and just serve that file. This would make some things easier but also creating new headaches like keeping the remote and cached files in sync, setting the right triggers for cache expiration, etc.
So to sum up:
1) How do I accomplish the scenario above?
2) If this is not an intelligent/efficient way of doing things, should I just cache a remote copy?
3) What are your experiences/recommendations in given scenario?
I have come across various solutions scattered around the interweb, but none of them amounts to a complete solution.
Thanks!
I am assuming the third-party storage service has HTTP access. If you did consider using redirect_to, I assume the service also provides a means of per-download authorization, like a unique, expiring key in the header that does not expose your secret API keys, or an HMAC-signed URL with the expiration time as a param.
Anyhow, most cloud storage services provide this kind of file access. I would highly recommend letting the service stream the file: your app should simply authorize the user and redirect to the service. Rails allows you to add custom headers while redirecting; it is discussed in the Rails guides:
10.2.1 Setting Custom Headers
If you want to set custom headers for a response then response.headers is the place to do it. The headers attribute is a hash which maps header names to their values, and Rails will set some of them automatically. If you want to add or change a header, just assign it to response.headers.
So your action code would end up being something like this:
def download
  # do_auth_check
  response.headers["Your-API-Auth-Key"] = "SOME-RANDOM-STRING"
  redirect_to url
end
Don't use up unnecessary server resources by streaming all those downloads through your app. We are paying cloud services to do that, after all :)

Really Basic S3 Upload credentials

I'm giving Amazon Web Services a try for the first time and getting stuck on understanding the credentials process.
From a tutorial on awsblog.com, I gather that I can upload a file to one of my AWS "buckets" as follows:
require "aws-sdk-s3"   # (require "aws-sdk" if you are on SDK v2)

s3 = Aws::S3::Resource.new
s3.bucket('bucket-name').object('key').upload_file('/source/file/path')
In the above circumstance, I'm assuming he's using the default credentials (as described here in the documentation), where he's using particular environment variables to store the access key and secret or something like that. (If that's not the right idea, feel free to set me straight.)
The thing I'm having a hard time understanding is the meaning behind the .object('key'). What is this? I've generated a bucket easily enough, but is it supposed to have a specific key? If so, how do I create it? If not, what is supposed to go into .object()?
I figure this MUST be out there somewhere but I haven't been able to get it (maybe I'm misreading the documentation). Thanks to anyone who gives me some direction here.
Because S3 doesn't have traditional directories, what you would consider the entire file path on your client machine, e.g. \some\directory\test.xls, becomes the 'key'. The object is the data in the file.
Bucket names are unique across all of S3, and keys must be unique within your bucket.
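For example (bucket name and paths invented for illustration):

require "aws-sdk-s3"

s3 = Aws::S3::Resource.new
# The key is the object's full path within the bucket:
obj = s3.bucket("my-unique-bucket").object("some/directory/test.xls")
obj.upload_file("/local/source/test.xls")
# The file is now addressable as the key "some/directory/test.xls"
# inside the bucket "my-unique-bucket".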
As far as the credentials, there are multiple ways of providing them: one is to supply the access key ID and secret access key right in your code; another is to store them in a config file somewhere on your machine (the location varies by OS type); and when your code runs in production, i.e. on an EC2 instance, the best practice is to start the instance with an IAM Role assigned, so that anything running on that machine automatically has all of the permissions of that role. This last one is the best/safest option for code that runs in EC2.
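The first two options look roughly like this (the key values are placeholders):

require "aws-sdk-s3"

# Option 1: pass credentials explicitly in code (avoid committing these).
s3 = Aws::S3::Resource.new(
  credentials: Aws::Credentials.new("AKIA...", "secret..."),
  region: "us-east-1"
)

# Option 2: keep them in the shared config file (~/.aws/credentials on
# Linux/macOS), which the SDK picks up automatically:
#   [default]
#   aws_access_key_id = AKIA...
#   aws_secret_access_key = secret...

# Option 3 (EC2): assign an IAM role to the instance; no keys in code at all.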

AWS S3 Public Object vs Private Object?

Back in S3, I have URLs to images in my bucket that I will be presenting in my application; however, they are set to private. When I try to click on a link, it reads "access denied". When I change the setting of the link to public, it goes through. However, I've read that public access isn't the safest thing. So this is essentially a two-part question:
1) What is the difference between a public and a private link/object in a bucket?
2) And how can I make a private link/object in my bucket accessible to both myself and my users?
Private objects require authentication; public objects do not.
With regard to your comment "public access isn't the safest thing", you typically need to consider a couple of things when deciding whether or not to make an S3 object public:
[major] is it OK for anyone to download it? If the content of the object is something that you should not be sharing with the world, for example a user's family photos, then the answer is "no".
[minor] do you want to pay every time some unknown person downloads an object? As the bucket owner, you pay for data transfer out (unless you opt for 'requester pays', in which case the requester needs to authenticate).
There are at least two ways that you can make private S3 objects available to your users without them being accessible to the entire world:
use time-limited, pre-signed URLs for the objects, as sketched after this list
proxy the object downloads yourself so that all requests for objects go to your app server and can therefore be restricted to authenticated sessions.
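Generating a pre-signed URL with the Ruby SDK, for instance, is a one-liner (the bucket, key, and expiry are placeholders):

require "aws-sdk-s3"

obj = Aws::S3::Resource.new.bucket("my-bucket").object("private/photo.jpg")
# Anyone holding this URL can GET the object until it expires:
url = obj.presigned_url(:get, expires_in: 300)   # valid for 5 minutes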

changing gerrit's canonical web url

I have had an issue with setting up my gerrit server. The machine has Ubuntu 12.04 LTS Server 64-bit installed on it. I am setting up git and gerrit as a way to manage source code and code review.
I require internal and external access to it. I set up a DNS entry that works externally. However, during the initial setup, I left the canonicalWebUrl at its default value, which is usually the machine's hostname (in this case, vmserver).
The issue I was running into is exactly as explained here: https://stackoverflow.com/questions/14702198/the-requested-url-openid-was-not-found-on-this-server, where after trying to sign in/register an account with OpenID, it said the URL was not found.
For some reason, it was changing the URL in the address bar from the DNS name I set up to the canonicalWebUrl.
I tried to change the canonical web URL in the gerrit.conf file found in the etc directory of the gerrit site. After restarting the server, we were able to see the git project files present as they should be, but the account that was administrator seemed to no longer be registered, and none of the projects were visible through gerrit.
I was wondering if there is a special procedure for changing the canonical web URL in gerrit without disrupting access to the server?
Any help or information on canonical URLs would be much appreciated, as I cannot find much information on them.
Edit: Looking deeper, I found some information that is way over my head regarding "submodules". I do not understand if this is what I am looking for or not:
https://gerrit-review.googlesource.com/#/c/36190/
The canonical web url must be set, and it sounds like you have done that correctly.
I suspect the issue you are seeing is caused by changing the canonical web URL: some OpenID providers (Google being the big one) will return a different user ID based on the URL of the request. This is a privacy measure and cannot be changed. So previous users will now show up as new users and won't be in their old groups (the Administrators group in this case).
If you don't have many users, it might be easiest to migrate them by hand: you can modify the database to map the new user ID to the old user account, along the lines sketched below.
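On old, database-backed Gerrit versions (before NoteDb), the OpenID identities live in the account_external_ids table, so the remap is hypothetically along these lines (the IDs are placeholders; back up the database before trying this):

-- Re-point the externally issued identity at the old account:
UPDATE account_external_ids
SET account_id = 1000001    -- old account ID
WHERE account_id = 1000042; -- new, unwanted account ID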
