I have a model called Campaign and every Campaign has one attachment.
I use the Active Storage S3 service and I need a PERMANENT URL for my Campaign images.
I currently generate URLs like:
campaign.image.service_url
But this link expires in 5 minutes. I need non-expiring links. (The config setting only lets me extend the expiry to 1 week, so it does not solve my problem either.)
How can I get URLs of my images?
EDIT
Solution:
I use CloudFront as CDN. This is the solution I found:
"https://#{domain_name}/#{campaign.image.key}"
This gives a link to the image file that does not expire.
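For example, as a minimal helper (the CloudFront distribution domain here is a placeholder for whichever domain is mapped to the bucket):

CLOUDFRONT_HOST = "dxxxxxxxxxxxxx.cloudfront.net" # your distribution's domain

def permanent_image_url(campaign)
  # CloudFront serves the S3 object at its blob key, so this URL never expires.
  "https://#{CLOUDFRONT_HOST}/#{campaign.image.key}"
end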
Check the docs https://api.rubyonrails.org/classes/ActiveStorage/Variant.html#method-i-service_url
You are not supposed to expose service_url directly:
Returns the URL of the variant on the service. This URL is intended to be short-lived for security and not used directly with users. Instead, the service_url should only be exposed as a redirect from a stable, possibly authenticated URL. Hiding the service_url behind a redirect also gives you the power to change services without updating all URLs. And it allows permanent URLs that redirect to the service_url to be cached in the view.
Use url_for(variant) (or the implied form, like link_to variant or redirect_to variant) to get the stable URL for a variant that points to the ActiveStorage::RepresentationsController, which in turn will use this service_call method for its redirection.
So use url_for(campaign.image) (or url_for(campaign.image.some_variant)) instead.
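For example, anywhere route helpers are available (rails_blob_url is the named-route form for use outside views; it needs default_url_options configured):

url_for(campaign.image)                               # stable redirect URL
rails_blob_url(campaign.image, disposition: "inline") # same, as an absolute URL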
The URL that does not expire is simple and has no params:
http[s]://[bucket-name.s3].amazonaws.com/pathtofile/file.extension
You can get this URL from the AWS SDK by using the Aws::S3::Object#public_url method.
With Active Storage you can do:
url = "#{campaign.image.service.bucket.url}/#{campaign.image.blob.key}"
Then you would need to configure the public access settings on the S3 bucket.
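A rough sketch of the SDK route (assumes the aws-sdk-s3 gem and a bucket that allows public reads):

# Build the Aws::S3::Object for the blob and ask the SDK for its public URL.
bucket = campaign.image.blob.service.bucket
bucket.object(campaign.image.blob.key).public_url
# => "https://bucket-name.s3.amazonaws.com/<blob key>"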
Related
At the moment my Rails 6 React app has user-uploaded images (avatars, profile wallpapers, etc.) stored in S3, inside a public bucket for local development (not facilitated by Active Storage because it was not playing nice with vips for image processing). The reason it's set to public was ease of setup; now that all of the functionality is complete, for staging (and soon production) I would like to add sensible bucket policies. I don't currently have CloudFront set up, but I do intend to add it in the near term; for right now I'm using the bucket asset URL to serve assets. I have created two separate buckets: one for images that will be displayed in the app, and one for content that is never to be publicly displayed and will be used for internal purposes.
The question I have is, for the content that is in the bucket reserved for viewable content, do I have to make it public (disable the setting in the AWS console that blocks public access), then create a policy that allows GET requests from anywhere (*), and restrict POST, PUT, and DELETE requests to the ARN of the EC2 instance that's hosting the Rails application? The AWS documentation has confused me: it gives me the impression that you never want to enable public access to a bucket, and that policies alone are how you surface bucket content. When I take that approach I keep getting "access denied" in the UI.
EDIT:
I'm aware that signed URLs can be used, but my current understanding is that there is a nontrivial speed hit to the UX of the application if you have to generate a signed URL for every image (this app is image heavy). There are also SEO concerns, given that all the image URLs would effectively be temporary.
Objects in Amazon S3 are private by default. You can grant access to an object in several ways:
A Bucket Policy that can grant access to everyone ('Public'), or to specific IP addresses or users
An IAM Policy on an IAM User or IAM Group that grants access to that user or group -- however, they would need to access via an AWS SDK so that they can authenticate the call (eg when an application makes a request to S3, it would make an authenticated API call)
An Access Control List (ACL) on the object, which can make the object public without requiring the bucket to be public
An Amazon S3 pre-signed URL, which is a time-limited URL that provides temporary access to a private object
Given your use-case, an S3 pre-signed URL would be the best choice since the content is kept private but the application can generate a link that provides temporary access to the object. This can also be done with CloudFront.
Generating the pre-signed URL only takes a few lines of code and does not involve an API call to AWS. It is simply creating a hash of the request using your Secret Key, and then appending that hash as a 'signature'. Therefore, there is effectively no speed impact of generating pre-signed URLs for all of your images.
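For instance, a minimal sketch with the aws-sdk-s3 gem (bucket name and key are hypothetical):

require "aws-sdk-s3"

object = Aws::S3::Object.new(bucket_name: "my-app-images", key: "avatars/user-123.jpg")
# presigned_url only HMAC-signs the request with your credentials; no AWS API call is made.
url = object.presigned_url(:get, expires_in: 3600) # valid for 1 hour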
I don't see how SEO would be impacted by using pre-signed URLs. Only actual web pages (HTML) are tracked in SEO -- images are not relevant. Also, the URLs point to the normal image, but have some parameters at the end of the URL so they could be tracked the same as a non-signed URL.
No, it does not have to be public. If you don't want to use CloudFront, the other option is to use S3 pre-signed URLs.
I'm creating signed URLs and sending them to an external system for later use. Unfortunately that system has length limits that do not allow extremely long strings to be passed along. Recently, it appears that signed URLs were reformatted and lengthened, which subsequently broke my app.
Is there some method for generating a shorter URL from S3? I would prefer not to rely on a third-party URL-shortening service for a number of reasons (it's an extra request at URL generation and it adds a point of failure).
You can build a very simple URL Shortener directly within Amazon S3:
Turn on static website hosting. This will give you a URL to access your bucket, eg: mybucket.s3-website-ap-southeast-2.amazonaws.com
Create a zero-length object in S3 with a 'short name', eg pic1.jpg
Add metadata to the zero-length object:
Key: Website Redirect Location (the x-amz-website-redirect-location header)
Value: The Long URL
Then, when you access the zero-length object (eg mybucket.s3-website-ap-southeast-2.amazonaws.com/pic1.jpg) it will actually redirect to the Long URL stored in the metadata.
Simple, with no database required!
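A sketch of creating such a redirect object with the aws-sdk-s3 gem (the bucket name and the long_signed_url variable are hypothetical):

require "aws-sdk-s3"

s3 = Aws::S3::Client.new
# A zero-length object whose only job is to redirect to the long signed URL.
s3.put_object(
  bucket: "mybucket",
  key: "pic1.jpg",
  body: "",
  website_redirect_location: long_signed_url # stored as x-amz-website-redirect-location
)

Note the redirect is only honoured via the static website endpoint (mybucket.s3-website-...), not the regular REST endpoint.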
I have an endpoint in my Rails app where a model id and a couple other parameters form the URL. From there, I look up that model in the DB, and redirect to an image stored on Amazon S3.
CloudFront is ALWAYS in front of this URL, and I really want it to cache the image. Right now it's caching the redirect, which means the image is served straight from the S3 bucket, which is not as efficient.
What can I do? Is there a header I can add to tell CloudFront to cache the result? Or is there a way I can use Rack::Rewrite but still have access to my ActiveRecord models?
Ultimately the only solution that worked was to change the original S3 URLs. I moved all the logic that computed the URL from the model id and other parameters, as described above, into the S3 image generation step, so my frontend could still generate these URLs. Then I set up a CloudFront distribution mapped to the S3 bucket itself and pointed my app at that for image URLs.
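In other words, something like this (the CloudFront domain and key scheme are placeholders for whatever key logic runs at upload time):

def image_url(model, size)
  key = "images/#{model.id}/#{size}.jpg" # same key logic used when the S3 object is generated
  "https://dxxxxxxxxxxxxx.cloudfront.net/#{key}"
end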
I am trying to configure a LinkedIn application for a multi-tenant site. I will have 20+ tenants using the same application, and the number is going to keep increasing.
As per the LinkedIn API documentation (https://developer.linkedin.com/docs/oauth2), we need to ensure the following points:
We strongly recommend using HTTPS whenever possible
URLs must be absolute (e.g. "https://example.com/auth/callback", not "/auth/callback")
URL arguments are ignored (i.e. https://example.com/?id=1 is the same as https://example.com/)
URLs cannot include #'s (i.e. "https://example.com/auth/callback#linkedin" is invalid)
Can I configure the redirect URL as https://*.mysite.com/auth/linkedin/callback instead of specifying the URL of each tenant separately?
You cannot do a subdomain-based wildcard mapping, as the IdP (LinkedIn) must know the RP's exact redirect URL.
You can change the logic after you get the authorization callback: set the cookie and then redirect the user back to the tenant URL instead of the base URL.
Anyway, after successful authorization, you will be redirecting the user to an action; just figure out the subdomain, construct the URL, and do the redirection.
HTH
EDIT
Since the use of the URL or other approaches seems to be a hack, can you please try to have a facade-like application (or a gateway) with a single URL registered in LinkedIn; on receiving the response, it can use the state parameter or another factor to redirect to the tenant URL. This can use a 302, and it will be invisible unless the user is on a very slow network. This approach does not require any hack-like workaround.
Here, state can be a function that takes tenant info and generates a dynamic hash that is stored for tracking and redirection.
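A rough sketch of such a gateway callback in Rails (the controller name, state encoding, and tenant domain scheme are all assumptions):

class LinkedinGatewayController < ApplicationController
  def callback
    # state was generated at auth time as a signed token embedding the tenant.
    payload = Rails.application.message_verifier(:oauth_state).verify(params[:state])
    tenant_host = "#{payload["tenant"]}.mysite.com"
    # 302 back to the tenant's own callback, passing the authorization code along.
    redirect_to "https://#{tenant_host}/auth/linkedin/callback?code=#{CGI.escape(params[:code])}",
                allow_other_host: true # needed on Rails 7+ for cross-host redirects
  end
end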
As far as I understand, a custom origin server with CloudFront only works if CloudFront is able to access files from my website URL:
eg: www.domain.com/hello.html
However, my website has a login requirement in order to view hello.html. How can I keep the login mechanism and still cache my real hello.html page in CloudFront using a custom origin server?
I am using Ruby on Rails btw, but this is applicable to other stacks as well.
I'm pretty sure this is not possible. As you said, CloudFront has to be able to access the file to serve and cache it. I never saw an option to tell CloudFront to use a password to access the file.
An idea: maybe you can check in your Rails app, before you require the user to enter a password, whether the request comes from CloudFront (I'm sure there are some headers indicating that) and, if so, bypass the login requirement?
Edit:
It says in the docs:
Do not configure your origin server to request client authentication.
One thing I'm pretty sure is set, though, is the User-Agent. Check for user_agent =~ /cloudfront/i and bypass authentication?
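That could look something like this (CloudFront identifies itself to the origin as "Amazon CloudFront"; the header is spoofable, so treat this as weak protection, and require_login stands in for whatever auth filter the app already uses):

class PagesController < ApplicationController
  before_action :require_login, unless: :cloudfront_request?

  private

  # CloudFront's requests to the origin carry "Amazon CloudFront" as the User-Agent.
  def cloudfront_request?
    request.user_agent.to_s.match?(/cloudfront/i)
  end
end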