My self-hosted MinIO object storage server is up and running with a working access key, and I'm programming in Ruby on Rails. In the development environment, using that remote server works for uploading images and loading them in the browser: calling "image.url" produces a full URL with access parameters like the one below. In the production environment, using the same access key, uploading an image to the server still works; I can confirm this by browsing the bucket and seeing the image there, and by calling "image.url" in the Rails console and pasting the result directly into the browser, which works.
But the image URL on the rendered page is always expired, which I can see from the HTTP status when I access it with curl.
Does anyone know why, at the front-end/view level in the Rails code, the URL is already expired while the other access paths work fine?
mydomain.com/mybucket/item.jpg?X-Amz-Expires=3600&X-Amz-Date=20221215T140231Z&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=omv9834pbcse6%2F20221215%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-SignedHeaders=host&X-Amz-Signature=sjjk89jbkj2398bp
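For reference, this is roughly how the view-level URL would be generated per request; a minimal sketch, assuming Active Storage backed by an S3-compatible (MinIO) service, with hypothetical model and attachment names (a different uploader gem would use a different call):

<%# Placeholder model/attachment; generate the presigned URL at render time %>
<%# rather than reusing one signed earlier (e.g. in cached markup), so the %>
<%# signature and expiry are fresh. %>
<%= image_tag item.image.url(expires_in: 1.hour) %>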
Related
I am using TinyMCE 5 in my Ruby on Rails app, which runs on https. I have a WordPress website running on http that hosts images.
After uploading an image to WordPress, I copy its HTTP URL into the TinyMCE image dialog; these images work fine and display properly.
However, some users are complaining that they can't see the images. Whenever I check, everything works fine. What could be the problem?
Possible reasons could be too many simultaneous requests, the use of http for the WordPress site, or a slow network connection on the user's end.
This is most likely an issue with the host you are using to store the images (maybe there is a daily limit on the number of requests it will process).
Possible solution:
Store your images on AWS S3, Google Drive, etc., and use those links in TinyMCE. It will most definitely work.
I have a website hosted on Heroku, and using Ruby on Rails with the paperclip gem.
I am trying to prevent hotlinking to the files in my S3 bucket, so I have set everything to private and only allow users to access files through an expiring URL.
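For context, an expiring URL in this kind of setup is typically generated per request and handed to the client via a redirect; a minimal sketch, assuming Paperclip's S3 storage, with a hypothetical model, attachment name, and expiry:

class AssetsController < ApplicationController
  def show
    asset = Asset.find(params[:id])
    # expiring_url returns a signed S3 URL valid for the given number of seconds
    redirect_to asset.attachment.expiring_url(300)
  end
end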
I want to provide a more user-friendly page when a user tries to reuse an expired URL. Currently it shows the message below:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>300</X-Amz-Expires>
<Expires>2016-04-15T19:41:33Z</Expires>
<ServerTime>2016-04-15T19:41:39Z</ServerTime>
<RequestId>D5DD935553A2CF88</RequestId>
<HostId>
55+rFtFbksDMyBWf5cWwgJ+aWvJKwe5umSXgTEWYKgfoT5QR5sbJY9fRNFIiBAqd35OR2MoiCzQ=
</HostId>
</Error>
Is there a way to customize the error page on S3?
S3 offers custom error pages through the website endpoints, but not the REST endpoints... and signed URLs only work on the REST endpoints, not the website endpoints.
So, no, there is not a way to directly solve this using only S3.
One option is to use CloudFront, which offers the ability to replace the standard error pages with a custom static page, but the error content is lost and all you have is a static page. You also have to use the CloudFront URL signing mechanism, which is different than S3 (though it also has some advantages, such as wildcard support in a signed URL).
In this answer to a similar (but not duplicate) question, I demonstrated how I've used an XSL transform to "style" the S3 error XML, by modifying the XML returned to the browser, injecting a link to the XSL stylesheet, and letting the browser do the rest of the work... see the screenshots there.
I'm quite pleased with the solution, though it has what some people would consider a drawback: it requires all of the S3 requests to be served via a proxy server running HAProxy in EC2. There's a small additional cost for the EC2 instance, but no added cost for the bandwidth, since transfer from S3 into EC2 is free and transfer from EC2 to the Internet is the same price as transfer from S3 to the Internet. With this setup, the S3 signed URLs still work. The additional advantages in my application are that I can use my own SSL certs with S3 static content (although this capability is also available through CloudFront), and that the proxy's access logs are available in real time.
I have a rails app that is running on heroku and am using Cloudflare Pro with their Full SSL to encrypt traffic between: User <-SSL-> Cloudflare <-SSL-> Heroku, as detailed in: http://mikecoutermarsh.com/adding-ssl-to-heroku-with-cloudflare/ .
I am also using the rack-ssl-enforcer gem to force all http requests to go through https.
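For reference, enabling that gem usually amounts to a single middleware line; a minimal sketch, assuming the standard rack-ssl-enforcer setup:

# config/application.rb
# Redirect all http requests to https before they reach the app.
config.middleware.use Rack::SslEnforcer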
This is working properly, except I have the following issues, by browser:
1) Firefox: I have to add a security exception on the first visit to the site, after getting the "This site is not trusted" warning. Once on the site, I also see the warning in the address bar:
2) Chrome: the page loads the first time, but the lock in the address bar has a warning triangle on it which, when clicked, displays:
Your connection is encrypted with 128-bit encryption. However, this page includes other resources which are not secure. These resources can be viewed by others while in transit, and can be modified by an attacker to change the look of the page. The connection uses TLS 1.2. The connection is encrypted and authenticated using AES_128_GCM and uses ECDHE_RSA as the key exchange mechanism.
3) Safari: initially loads with the https badge, but it immediately drops off.
Is there a way to leverage Cloudflare SSL piggybacked on Heroku's native SSL without running into these security warnings? If not, I don't see much value in the configuration.
My apologies for slinging erroneous accusations against Cloudflare and Heroku :-)
Turns out the issue was not the fault of either, but instead that images on the app (being served from AWS S3) were being served up without https.
If anyone runs into this situation, lessons learned across a wasted day:
S3 only lets you serve content via https if you serve from your bucket's dedicated URL: s3.amazonaws.com/your-bucket-name/etc..
a) I tried setting the bucket up for static website hosting, so I could use the URL "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", and then set up a CNAME within my DNS that points "your-bucket-name.your-url" to "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", to pretty up the URLs.
b) This works, but AWS only lets you serve via https from your full URL (s3.amazonaws.com/your-bucket-name/etc..) or "*.s3-website-us-east-1.amazonaws.com/etc...", which doesn't work if you have a dot in your bucket name (your-bucket-name.your-url), which was required for me to do the CNAME redirect.
If you want to use AWS's CDN with https on your custom domain, AWS's only option is CloudFront with an SSL certificate, for which they charge $600/month per region. No thanks!
In the end, I sucked it up and have ugly image URLs that look like "https://s3-website-us-east-1.amazonaws.com/mybucketname...", and, using Paperclip, I specify https with ":s3_protocol => :https" in my model. Other than that, all is working properly now.
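For reference, the relevant Paperclip configuration might look roughly like this; a minimal sketch, with a hypothetical model and attachment name and placeholder bucket/credentials:

class Photo < ActiveRecord::Base
  has_attached_file :image,
    :storage => :s3,
    :s3_protocol => :https,   # force https URLs for the attachment
    :s3_credentials => {
      :bucket => "mybucketname",
      :access_key_id => ENV["AWS_ACCESS_KEY_ID"],
      :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"]
    }
end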
I'm building a Rails application that deals with file uploads through CarrierWave. Currently, larger file uploads block the server for a significant amount of time. I have seen solutions like the s3-swf-upload-plugin gem that skip the local server and send files straight from the browser to S3, but this would require some modifications for pre-generating unique filenames and synchronizing them with the database. I'm sure it wouldn't be too much trouble, but Heroku's new Cedar stack gave me the idea of offloading these long running requests to a node.js instance running in the same app. I'm not very experienced with these kinds of things, so excuse my wording if it's a bit off.
Would something like this be possible? How would you configure things such that certain requests (ones involving file uploads, in this case) would be handled by a Node app bundled in the same Heroku repository as the main Rails app?
I don't think it's possible to mix Rails and Node in the same app. However, you could get roughly the same functionality by using two separate apps that communicate with each other.
You can use ENV['DATABASE_URL'] to determine your database connection string. Use the Heroku CLI to set it as an ENV variable for your Node app (e.g. heroku config:add OTHER_DB=your_connection_string); the Node app should then be able to use that same connection string to connect to the same database as your other Heroku app. You could even access it from outside Heroku if you have a dedicated database, see: http://devcenter.heroku.com/articles/external-database-access
For seamless integration between the two apps, you could have a form rendered by the Rails app post to a URL on the Node app. In addition to the file upload, include in that form, via hidden input fields, any other variables you need to communicate to the Node app. When the upload to the Node app is done, it can redirect the client back to the Rails app, passing any status or variables as GET parameters.
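A minimal sketch of that hand-off form (ERB), where the Node upload URL, hidden field names, and return URL are all hypothetical placeholders:

<%# The Node app URL and the hidden fields below are placeholders. %>
<%= form_tag "https://my-node-uploader.herokuapp.com/upload", multipart: true do %>
  <%= hidden_field_tag :user_id, current_user.id %>
  <%= hidden_field_tag :return_to, dashboard_url %>
  <%= file_field_tag :file %>
  <%= submit_tag "Upload" %>
<% end %>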
Run the two apps under two subdomains of the same domain and you could even share cookies between them.
You need two apps. I am doing exactly what's described in this question. I wanted large streaming uploads, and since Rack writes uploads to a temp file before passing them through to the handler, it is not possible to do this with Rails.
Node.js, on the other hand, does this beautifully. So there are two Heroku apps: the Rails web app and the Node.js (Express) web app. The Rails web app uses SWFUpload as the client-side solution. The Rails app and the Node.js app both have a secret key as a Heroku config variable. When it's time for the user to upload, client-side JavaScript requests an upload URL from the Rails server. The Rails server forms an upload URL with an Expires parameter and computes a signature using the secret key. The client-side JavaScript handler passes this URL along to SWFUpload (the upload_url property). The user selects the files to upload, and SWFUpload starts posting them to the upload_url. The Node.js app verifies that the URL is not expired and that the signature is valid, then processes the form data with the formidable library.
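A minimal sketch of how the Rails side might form that signed, expiring upload URL, assuming a shared UPLOAD_SECRET config variable; the controller, Node app URL, and parameter names are hypothetical:

require "openssl"

class UploadUrlsController < ApplicationController
  NODE_UPLOAD_BASE = "https://my-node-uploader.herokuapp.com/upload"  # placeholder

  def create
    expires = 10.minutes.from_now.to_i
    # Sign the user id and expiry with the secret shared between the two apps;
    # the Node app recomputes the HMAC and rejects expired or tampered URLs.
    signature = OpenSSL::HMAC.hexdigest("SHA256", ENV.fetch("UPLOAD_SECRET"),
                                        "#{current_user.id}:#{expires}")
    url = "#{NODE_UPLOAD_BASE}?user_id=#{current_user.id}&expires=#{expires}&signature=#{signature}"
    render json: { upload_url: url }
  end
end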
One other detail. Flash requires the Node.js app to serve a crossdomain.xml that permits the cross-site request.
My Node.js app doesn't touch the database; but if it did I would share DATABASE_URL as previously suggested. Note that you can't share a DATABASE_URL outside of Heroku unless you have a dedicated DB. The DATABASE_URLs for shared databases are not reachable from outside Heroku (unlike some other services like RedisToGo).
Can one of the Amazon services (their S3 data service or otherwise) be used to offload the serving of static files for a Ruby on Rails app, while still supporting the app's authentication and authorization?
That is, once the user's browser has downloaded the initial HTML for one page of the Ruby on Rails application and goes back for static content (e.g. an image or CSS file), this request would be:
(a) routed directly to the Amazon service (no RoR cycles or bandwidth used to serve it), BUT
(b) the browser request for this item (e.g. an image) would still have to go through an authentication/authorization layer based on the user model in the Ruby on Rails application, in other words, to ensure that not just anyone could get the image...
thanks
The answer is a yes, with a but. You can use a feature of S3 that lets you create links to private S3 objects with a short time to live (the default is 5 minutes). This works for any S3 object that is uploaded as private, and means the browser will only have that many seconds to request the file from S3. Example code from the docs for the AWS gem:
S3Object.url_for('beluga_baby.jpg', 'marcel_molina')
You can also specify an expires_in or expires option per file. The downside is that you would need to create a helper for your stylesheet, image, and JS links to generate the proper S3 URLs.
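Such a helper might look roughly like this; a minimal sketch, assuming the same AWS gem, with a placeholder bucket name and expiry:

module S3AssetsHelper
  S3_BUCKET = "your-bucket-name"  # placeholder

  # Build an image_tag whose src is a signed, expiring S3 URL.
  def secure_s3_image_tag(key, options = {})
    image_tag(S3Object.url_for(key, S3_BUCKET, :expires_in => 300), options)
  end
end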
I would recommend that you set up a domain name for your S3 bucket, like "examples3.amazonaws.com", and put all your standard image files and CSS there as public. Then set that as the asset host in your Rails config, and only use the secure links for the static files that really need them.
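The asset host part of that suggestion might look like this; a minimal sketch, with the hostname as a placeholder:

# config/environments/production.rb
config.action_controller.asset_host = "https://examples3.amazonaws.com"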