Suggested strategy and library for bundling with cloud integration - asp.net-mvc

I'm optimizing my ASP.NET MVC 4 website. I'm currently using runtime bundling to combine and minify my JavaScript and CSS content. I would like to automate publishing of the minified content to a CDN (specifically Amazon CloudFront) as it is created.
I am trying to determine the best strategy for integrating bundled files with a CDN. My specific questions are:
Are there any available libraries that would allow bundled files to be saved to a CDN rather than my local web server?
Should static files be pulled from a CDN prior to bundling or should they be pulled from a local Web server prior to bundling?
Are there any mechanisms to enable CDN write control only from my web farm (and not the general public)?

I don't personally buy into the "USE CDN FOR EVERYTHING STATIC!!" mentality, so I refuse to worry about copying local scripts to a CDN as you described. Sure, the big libraries can be referenced from existing major CDNs (Yahoo, Microsoft, Google), but for local scripts, it's really not worth the hassle, IMO.
Following that line of thinking, I've grown very fond of SquishIt. There's no extra XML config or pre-initialization necessary to use it. Just include it in the master or layout file like so:
<%= Bundle.Css()
        .Add("~/Content/Reset.less")
        .Add("~/Content/Site.less")
        .Add("~/Scripts/rcarousel/widget/css/rcarousel.css")
        .Add("~/Scripts/jquery.fancybox-1.3.4/fancybox/jquery.fancybox-1.3.4.css")
        .Add("~/Content/Fonts/Karla/stylesheet.css")
        .Render("~/Cache/Bundle.css") %>
<%= Bundle.JavaScript()
        .Add("~/Scripts/jquery-1.7.2.js")
        .Add("~/Scripts/jquery-ui-1.8.19.js")
        .Add("~/Scripts/modernizr-2.5.3.js")
        .Add("~/Scripts/rcarousel/widget/lib/jquery.ui.rcarousel.js")
        .Add("~/Scripts/jquery.fancybox-1.3.4/fancybox/jquery.fancybox-1.3.4.js")
        .Add("~/Scripts/jquery.youtubelite.js")
        .Render("~/Cache/Bundle.js") %>
Having said that, and more to your point:
1) I'm not aware of any bundling libraries that support automatic CDN deployment. The usual train of thought here is to have the CDN pull from your website directory and cache it; in this way, deployment happens via a pull mechanism rather than a push. This article describes how to set up origin pull using CloudFront with a WordPress site. I'm sure the configuration is similar for an ASP.NET site.
2) Bundle from local copies. You probably reference the local copies in development already, so why add the CDN into the mix prior to go-live?
3) Most cloud storage systems (Amazon S3, Azure Storage, Rackspace Cloud Files) offer a way to publish files to the cloud that remain read-only to the public. This is API-dependent, so the method varies depending on your cloud storage provider.
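For illustration, here is roughly what pushing a bundled file to S3 with the AWS SDK for .NET looks like. The bucket name and key below are placeholders; the canned ACL is what makes the object publicly readable while only servers holding your AWS credentials can write:
using Amazon.S3;
using Amazon.S3.Model;

public static class BundlePublisher
{
    // Sketch only: bucket name, key, and local path are placeholders.
    public static void Publish(string localPath, string key)
    {
        // Credentials come from web.config or the instance profile, so only
        // machines in your web farm that hold them can write to the bucket.
        using (var client = new AmazonS3Client())
        {
            var request = new PutObjectRequest
            {
                BucketName = "my-cdn-bucket",
                Key = key,                          // e.g. "cache/bundle.js"
                FilePath = localPath,               // e.g. Server.MapPath("~/Cache/Bundle.js")
                CannedACL = S3CannedACL.PublicRead  // the public gets read-only access
            };
            client.PutObject(request);
        }
    }
}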

After more research I came across the SquishIt.S3 library, which does exactly what I needed. Essentially it piggybacks on SquishIt, enabling bundled files to be copied to an S3/CloudFront bucket at runtime. Configuration is a breeze, and because it uses Amazon's APIs, your AWS credentials control who can write to the CDN. If you already use SquishIt, it's just a matter of adding a couple of default configuration lines to your Global.asax and the rest is taken care of for you.
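For reference, the Global.asax wiring is only a few lines. The method names below are from memory of the SquishIt.S3 readme, and the bucket and CloudFront values are placeholders, so treat this as a sketch rather than the exact API:
// In Application_Start - approximate sketch, check the SquishIt.S3 readme for the exact calls
var s3Client = new Amazon.S3.AmazonS3Client();   // credentials from web.config / instance profile

var renderer = S3Renderer.Create(s3Client)
    .WithBucketName("my-cdn-bucket")                              // placeholder bucket
    .WithCannedAcl(Amazon.S3.S3CannedACL.PublicRead);

Bundle.ConfigureDefaults()
    .UseDefaultOutputBaseHref("https://dxxxxxxxx.cloudfront.net") // placeholder distribution
    .UseReleaseRenderer(renderer);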

Related

How to access files on Ceph directly as URL

I need a storage system with the following requirements:
1. It should support data/service clustering
2. It should be open-source so that I can extend functionalities later if needed
3. It should support a file system, because I want to access some files via a public URL (direct access), so that I can store my scripts in these files and refer to them directly.
4. It should support some kind of authentication.
5. It should be on-premises (not cloud).
Ceph seems to meet all of these criteria, but does it support public access to files via a URL (point 3)? It can generate temporary URLs, but I want permanent URLs for a few files.
You could run Nextcloud and have your data volume (and database, if you feel so inclined) stored on the Ceph cluster. That's open source, you can set up direct links to files (including permanent links), and it is authenticated.

How to use a GitHub repo as a CDN server for uploading asset files?

I am learning Ruby on Rails and am developing a Rails 5 application.
I don't want to use the Amazon S3 service to hold my asset files.
I want to use GitHub to serve my asset files, like a CDN.
But I am facing a problem: I have a dynamic file and image upload system.
So when I upload my files and images, they should all be uploaded to a GitHub repository (assume I have a repo named busket; all images and files would be uploaded to the busket repo from my server, and all assets would be served from the Rails application).
So, how can I make GitHub work like a CDN? Please help me with this issue.
https://cdn.jsdelivr.net/gh/username/repository@master/file
username = name of the user of github
repository = name of github repo
file = actual name of the file
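For example, a file at scripts/app.js on the master branch of a (hypothetical) busket repo owned by yourname would be served as:
https://cdn.jsdelivr.net/gh/yourname/busket@master/scripts/app.js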
I feel it would generally be a bad idea to upload images and other content files to GitHub for long-term storage. GitHub was designed to be a repository provider for Git, not a NoSQL or other type of data store. Updating files on GitHub requires making a commit to a particular branch, so every time you change an image file you would need a new commit. This won't scale, because Git does not handle binary files well.
So if you need a long-term data store for your image and content files, I would suggest looking into tools designed for that, such as Amazon's S3, Google Cloud Storage, and similar services.
1.) Encode the image to base64
ref: http://ruby-doc.org/stdlib-2.2.0/libdoc/base64/rdoc/Base64.html
2.) Make an API call to GitHub
ref: https://developer.github.com/v3/repos/contents/#create-a-file
This will upload the file and return the URL of your stored image within GitHub; store that URL in your database.
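A rough sketch of those two steps in Ruby; the repo name, upload path, and token handling here are placeholders, not part of the GitHub API itself:
require "net/http"
require "uri"
require "json"
require "base64"

# Illustrative only: repo, remote path, and token source are placeholders.
def upload_to_github(local_path, repo: "username/busket", token: ENV["GITHUB_TOKEN"])
  content = Base64.strict_encode64(File.binread(local_path))   # step 1: base64-encode the file
  remote_path = "uploads/#{File.basename(local_path)}"

  # step 2: PUT /repos/:owner/:repo/contents/:path creates the file via a commit
  uri = URI("https://api.github.com/repos/#{repo}/contents/#{remote_path}")
  request = Net::HTTP::Put.new(uri)
  request["Authorization"] = "token #{token}"
  request["Accept"] = "application/vnd.github.v3+json"
  request.body = { message: "Upload #{remote_path}", content: content }.to_json

  response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
  JSON.parse(response.body).dig("content", "download_url")     # store this URL in your database
end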
Although you stated you didn't want to use S3, the paperclip gem makes it very streamlined.
As mentioned by @Tim, it's generally a bad idea.
However, if you still want to use Git as a file server, append
?raw=true
at the end of the URI.
For example
https://github.com/git/git/blob/master/ewah/bitmap.c?raw=true
would give you the contents of the file.

How do I override the Umbraco built-in media library methods to use S3?

I'm currently looking to move my Umbraco installation over to a load-balanced setup. In order to do this, I need to move the media library over to a CDN like Amazon's S3. I tested a few plugins that allow uploads to S3, but they all list media files from the local file directory, which flat out will not work.
I was thinking I would write the code to browse the CDN myself, but how can I override the built-in media library code so that it uses my version instead? I didn't see a clear way to do this in the docs.
I am using this plugin for Amazon S3: http://our.umbraco.org/projects/website-utilities/amazon-s3-media. The source code is here: https://bitbucket.org/gibedigital/umbraco-amazons3provider. He recently updated the plugin, and it does not use the local file system. The developer was pretty responsive (and made a few updates for me when I asked).
However, I am adding to his project because his plugin did not allow saving within a predefined directory (Amazon's virtual directories). But his source code is a start.
Good luck,
Robin

ASP.Net MVC Bundle linked content files

I've been trying to reduce the amount of copying and pasting of content files across some of my projects, and decided to go down the route of adding the files as links from a central project.
The problem I have now is that System.Web.Optimization.Bundle.AddFile(string virtualPath, bool throwIfNotExist = true) does not work, as the file doesn't physically exist in the directory I specified.
Does anyone have any pointers? Or maybe an alternative to linking content files?
Thanks.
I think you cannot access files outside of your web project with the virtual path system, and it might hurt when you want to deploy your app.
I suggest making a separate project for your static content with a custom domain, e.g. static.yourcompany.com, and referencing all of these files from that domain. This also has the advantage that the browser does not have to add authentication and tracking cookies to these requests, which might be faster if you have a lot of traffic. You could also set up your own CDN (http://www.maxcdn.com/pricing/) or store the files in Azure or Amazon AWS (which is more or less free for small amounts of files and traffic).
Another approach is to make some batch files to keep your content files synced.
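A minimal sketch of that last option as a Visual Studio pre-build event, assuming the shared files live in a SharedContent folder at the solution root (the folder names are illustrative):
rem Pre-build event (project Properties -> Build Events); paths are illustrative
rem Copy shared scripts and styles into this project so System.Web.Optimization
rem finds real files at the virtual paths the bundles reference.
xcopy "$(SolutionDir)SharedContent\Scripts" "$(ProjectDir)Scripts\Shared\" /E /Y /I /D
xcopy "$(SolutionDir)SharedContent\Content" "$(ProjectDir)Content\Shared\" /E /Y /I /D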

Recommendations for file server to be used with Rails application

I'm working on a Rails app that accepts file uploads and where users can modify these files later. For example, they can change the text file contents or perform basic manipulations on images such as resizing, cropping, rotating etc.
At the moment the files are stored on the same server where Apache is running with Passenger to serve all application requests.
I need to move user files to a dedicated server to distribute the load on my setup. At the moment our users upload around 10 GB of files a week, which is not a huge amount, but it adds up.
So I'm going through different options for implementing the communication between the application server(s) and a file server. I'd like to start out with a simple and foolproof solution; if it scales well later across multiple file servers, I'd be more than happy.
Here are some different options i've been investigating:
Amazon S3. I find it a bit difficult to implement for my application. It adds the complexity of "uploading" the uploaded file again (possibly multiple times later); keep in mind that users can modify files and images with my app. Other than that, it would be a nice "set it and forget it" solution.
Some sort of simple RPC server that lives on the file server and transparently manages files from the application server's point of view. I haven't been able to find any standard, well-tested tools here yet, so this is a bit more theoretical in my mind. However, BERT and Ernie, built and used at GitHub, seem interesting but maybe too complex just to start out.
MogileFS also seems interesting. Haven't seen it in use (but that's my problem :).
So I'm looking for different (and possibly standards-based) approaches to how file servers for web applications are implemented, and how they have worked out in the wild.
Use S3. It is inexpensive and à la carte, and if people start downloading their files, your server won't get stressed, because your download pages can point directly to the S3 URL of the uploaded file.
"Pedro" has a nice sample application that works with S3 at github.com.
Clone the application ( git clone git://github.com/pedro/paperclip-on-heroku.git )
Make sure that you have the right_aws gem installed.
Put your Amazon S3 credentials (API & secret) into config/s3.yml
Install the Firefox S3 plugin (http://www.s3fox.net/)
Go into Firefox S3 plugin and put in your api & secret.
Use the S3 plugin to create a bucket with a unique name, perhaps 'your-paperclip-demo'.
Edit app/models/user.rb, and put your bucket name on the second last line (:bucket => 'your-paperclip-demo').
Fire up your server locally and upload some files to your local app. You'll see from the S3 plugin that the file was uploaded to Amazon S3 in your new bucket.
I'm usually terribly incompetent or unlucky at getting these kinds of things working, but with Pedro's little S3 upload application I was successful. Good luck.
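For reference, the Paperclip declaration that ends up in app/models/user.rb looks roughly like this; the attachment name and path below are illustrative, not necessarily what Pedro's sample uses:
class User < ActiveRecord::Base
  # Rough sketch of the Paperclip S3 wiring; attachment name and path are illustrative
  has_attached_file :avatar,
    :storage        => :s3,
    :s3_credentials => "#{Rails.root}/config/s3.yml",   # access_key_id / secret_access_key
    :bucket         => 'your-paperclip-demo',
    :path           => ':class/:id/:style/:filename'
end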
You could also try to compile a version of Dropbox (they provide the source) and ln -s that to your public/system directory so Paperclip saves to it. This way you can access the files remotely from any desktop as well... I haven't done this yet, so I can't attest to how easy/hard/valuable it is, but it's on my teux deux list... :)
I think S3 is your best bet. With a plugin like Paperclip it's really very easy to add to a Rails application, and not having to worry about scaling it will save on headaches.
