Using Google App Engine as a Content delivery network - ruby-on-rails

I would like to know if Google App Engine can be used as a content delivery network like AWS S3. I'm running a RoR app on Heroku and I would like to store my uploaded files on GAE instead of S3.
If that's possible, what would be the best way to do it?

http://24ways.org/2008/using-google-app-engine-as-your-own-cdn
It won't be able to host files over 1 MB, though.
Make sure to read through the comments on that blog post as well; some raise concerns about the terms of service.

GAE in itself isn't meant to be a CDN... that doesn't, however, stop you from writing a CDN application on top of it. The only limit you'll need to worry about is the 50 MB cap on the size of a single blob in the Blobstore. Such an app would have to expose a URL you can hit to get an upload URL, which you then use to upload the file. The upload handler can also generate a download URL, which is used to access the content.
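As a rough sketch of that flow from the Rails side (the endpoint names, JSON shape, and host are all hypothetical; the real contract depends on the GAE app you write):

    require "net/http"
    require "json"

    GAE_HOST = "https://my-cdn-app.appspot.com"  # hypothetical GAE app

    # 1. Ask the GAE app for a one-time Blobstore upload URL.
    upload_url = JSON.parse(
      Net::HTTP.get(URI("#{GAE_HOST}/get_upload_url"))
    )["upload_url"]

    # 2. POST the file to that URL. The GAE upload handler stores the blob
    #    and replies with the URL the content can be downloaded from.
    uri = URI(upload_url)
    response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      request = Net::HTTP::Post.new(uri)
      request.set_form([["file", File.open("avatar.png")]], "multipart/form-data")
      http.request(request)
    end
    download_url = JSON.parse(response.body)["download_url"]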

Related

TinyMCE 5 images not showing to some users

I am using TinyMCE 5 on my Ruby on Rails app, which runs over HTTPS. I have a WordPress website running over HTTP that hosts images.
After uploading an image to WordPress, I copy its HTTP URL into the TinyMCE image dialog. For me the images work fine and display properly.
However, some users are complaining that they can't see the images. Whenever I check, it works fine. What could be the problem?
Possible reasons could be too many requests at the same time, the WordPress site being served over HTTP, or a slow network connection on the user's end.
This is most likely an issue with the host you are using to store the images (perhaps there is a daily limit on the number of requests it will serve).
Possible solution:
Store your images on AWS S3, Google Drive, or similar, and use those links in TinyMCE. That will almost certainly work.
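With the aws-sdk-s3 gem, for instance, the upload could look roughly like this (bucket and key names are made up, and the bucket would need to allow public reads):

    require "aws-sdk-s3"

    obj = Aws::S3::Resource.new(region: "us-east-1")
            .bucket("my-tinymce-images")     # hypothetical bucket
            .object("uploads/photo.jpg")
    obj.upload_file("photo.jpg", acl: "public-read")

    # S3 serves this over HTTPS, which also avoids browsers blocking
    # plain-HTTP images on an HTTPS page (one of the causes the question
    # suggests).
    image_url = obj.public_url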

How to see the speedup when using Cloudinary "direct upload" method?

I have a RoR web app that allows users to upload images and uses Cloudinary as cloud storage. I read their documentation and found a cool feature called "direct uploading", which reduces my server's load. To my knowledge, the idea is to change the workflow from
image -> server -> Cloudinary
to
image -> Cloudinary
and my server only stores a Cloudinary URL in the database, not the image file (tell me if I'm wrong, thanks).
So my question is: how do I check whether I have switched to the "direct uploading" method successfully? Open the element inspector to see the time cost of each POST and GET request? Are there better options?
I expect big gains from this approach, but how can I see them?
Thanks, from a rookie =)
# The app is deployed on Heroku.
# It hasn't been switched to the direct-upload method yet.
# The app is private and only serves around 10 people.
You can indeed bypass your server (and it is highly recommended) and let Cloudinary take care of the upload directly. This reduces your server's work to simply storing the uploaded image's details, while the image itself goes straight into your Cloudinary account, which speeds up the upload. You can test out Cloudinary's sample project, which demonstrates both server-side and client-side uploads.
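A quick way to confirm the switch is to watch where the multipart POST goes in the browser's network inspector: with direct upload it should hit api.cloudinary.com rather than your Heroku app. On the server, a signed direct upload boils down to something like this sketch with the cloudinary gem (parameter handling simplified):

    require "cloudinary"

    # Sign the upload parameters server-side; the file itself never
    # touches your server.
    timestamp = Time.now.to_i
    signature = Cloudinary::Utils.api_sign_request(
      { timestamp: timestamp },
      Cloudinary.config.api_secret
    )

    # Hand timestamp, signature, and your api_key to the browser, which
    # POSTs the file straight to:
    #   https://api.cloudinary.com/v1_1/<cloud_name>/image/upload
    # Your app then stores only the returned URL/public_id.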

Best way to host plist file on server for iOS app

I am updating an iOS app that has a short list of default items. The default items initially come from a short plist in the app bundle. To refresh the data, I have written code that pulls a newer plist with updated defaults down from a web server as needed and saves it to the documents directory. This all works very well.
My question: right now, for testing, I have the plist file in a specific folder on a shared hosting web server. Should I be using a server built for this sort of thing, such as Amazon AWS? I only need to retrieve this one plist file (around 90 kilobytes) from the server, nothing else. And what about the security of placing it in a hidden folder on a normal web server?
The app has quite a few users, so it could get hit as many as 75,000 times on a day when the app is updated. But the plist file will probably only be updated every couple of weeks.
Thanks
If the only purpose of the server is to host the plist file, you would be much better off serving it through S3 instead of EC2. You can generate the plist file and store it in S3 from any server you currently have access to. Instead of paying instance-hours to keep a server alive, you would only pay for each GET request. It's also automatically scalable: it doesn't matter whether you have 1 user or 1 million, and you only pay for the requests you and your users actually make.
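Pushing the plist up can be done from any box with credentials; a sketch with the aws-sdk-s3 gem (bucket and key names are hypothetical):

    require "aws-sdk-s3"

    s3 = Aws::S3::Client.new(region: "us-east-1")
    s3.put_object(
      bucket: "my-app-config",              # hypothetical bucket
      key: "defaults.plist",
      body: File.read("defaults.plist"),
      content_type: "application/x-plist"
    )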
If latency is important when retrieving this file, it's trivial and inexpensive (for one small hosted file) to connect the S3 bucket to a CloudFront distribution. This is a CDN that will deliver the file from the location closest to your users.
Regarding security, you can make the file private and authenticate against S3 before retrieving it from the iOS/Android app (be sure to obfuscate the AWS credentials).
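An alternative to shipping credentials in the app is to have a server you control hand out short-lived presigned URLs; a sketch with the aws-sdk-s3 gem (names hypothetical):

    require "aws-sdk-s3"

    # Generated server-side, then handed to the app.
    signer = Aws::S3::Presigner.new
    url = signer.presigned_url(
      :get_object,
      bucket: "my-app-config",   # hypothetical bucket
      key: "defaults.plist",
      expires_in: 900            # link is valid for 15 minutes
    )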

Why would you upload assets directly to S3?

I have seen quite a few code samples/plugins that promote uploading assets directly to S3. For example, if you have a user object with an avatar, the file upload field would upload directly to S3.
The only way I see this being possible is if the user object is already created in the database and your S3 bucket + path is something like
user_avatars.domain.com/some/id/partition/medium.jpg
But then if an image tag tried to access that URL when no avatar had been uploaded, it would yield a bad result. How would you handle checking for existence?
Also, it seems like this would not work well for most has-many associations. For example, if a user had many songs/MP3s, where would you store those, and how would you access them?
Also, your validations will be shot.
I am having trouble thinking of situations where direct upload to S3 (or any cloud) is a good idea, and I was hoping people could clarify proper use cases or tell me why my logic is incorrect.
Why pay for storage/bandwidth/backups/etc. when you can have somebody in the cloud handle it for you?
S3 (and other cloud-based storage options) handle all the headaches for you. You get all the storage you need, a good distribution network (almost certainly better than you'd have on your own unless you pay for a premium CDN), and backups.
Allowing users to upload directly to S3 takes even more of the bandwidth load off of you. I can see the tracking concerns, but S3 makes it pretty easy to handle that situation. If you look at the direct upload methods, you'll see that you can force a redirect on a successful upload.
Amazon will then pass the following to the redirect handler: bucket, key, etag
That should give you what you need to track the uploaded asset after success. Direct uploads give you the best of both worlds: you keep your tracking information, and the bandwidth load is taken off your servers.
Check this link for details: Amazon S3: Browser-Based Uploads using POST
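With the aws-sdk-s3 gem, the browser-based POST setup described there looks roughly like this (bucket name and redirect URL are placeholders):

    require "aws-sdk-s3"

    bucket = Aws::S3::Resource.new(region: "us-east-1").bucket("user-uploads")
    post = bucket.presigned_post(
      key: "avatars/${filename}",
      success_action_redirect: "https://example.com/uploads/complete",
      content_length_range: 0..5_000_000   # cap uploads at ~5 MB
    )

    # Render an HTML <form> with action = post.url and a hidden input for
    # each entry in post.fields; on success, S3 redirects to the handler
    # with bucket, key, and etag in the query string.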
If you are hosting your Rails application on Heroku, the reason could very well be that Heroku doesn't allow file uploads larger than 4 MB:
http://docs.heroku.com/s3#direct-upload
So if you would like your users to be able to upload large files, this is the only way forward.
Remember how web servers work.
Unless you're using an async web setup of the sort you could achieve with Node.js or Erlang (just two examples), every upload request your web application serves ties up an entire process or thread while the file is being uploaded.
Imagine a file upload of several megabytes. Most internet users don't have tremendously fast uplinks, so your web server spends a lot of time doing nothing. While it's doing all of that nothing, it can't service any other requests, which means your users start to see long delays and/or error responses from the server, which means they start using some other website to get the same thing done. You can always run more processes and threads, but each of those costs additional memory, which eventually means additional money.
By uploading straight to S3, in addition to the bandwidth savings that Justin Niessner mentioned and the Heroku workaround that Thomas Watson mentioned, you let Amazon worry about that problem. You can have a single-process webserver effectively handle very large uploads, since it punts that actual functionality over to Amazon.
So yeah, it's more complicated to set up, and you have to handle the callbacks to track things, but if you deal with anything other than really small files (and even in those cases), why cost yourself more money?

mod_xsendfile alternatives for a shared hosting service without it

I'm trying to log download statistics for .pdfs and .zips (5-25 MB) in a Rails app that I'm currently developing, and I just hit a brick wall: I found out our shared hosting provider doesn't support mod_xsendfile. The sources I've read state that without it, multiple simultaneous downloads could cause a DoS issue, something I'm definitely trying to avoid. Are there any alternatives to this method of serving files through Rails?
Well, how sensitive are the files you're storing?
If you hosted these files somewhere under your app's /public directory, you could just do a meta-tag or JavaScript redirect to the public-facing URL of those files after your users hit a controller action that updates your download statistics.
In this case, your users would probably need to get one of those "Your download should commence in a few moments" pages before the browser would start the file download.
Under this scenario, your Rails application won't be streaming the file out; your web server will, which gives you the same effect as X-Sendfile. On the other hand, this won't work very well if you need to control access to the downloadable files.
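As a sketch of that approach (the model and paths are made up), the action below records the hit and renders a landing page whose meta refresh points at the file under /public, so the web server does the actual streaming:

    class DownloadsController < ApplicationController
      def show
        @file = DownloadableFile.find(params[:id])   # hypothetical model
        @file.increment!(:download_count)            # log the statistic
        @public_url = "/files/#{@file.filename}"     # lives under public/files
      end
    end

    # app/views/downloads/show.html.erb would contain something like:
    #   <p>Your download should begin in a few moments...</p>
    #   <meta http-equiv="refresh" content="3;url=<%= @public_url %>">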