Is the MathJax gem really useful? - ruby-on-rails

I would like to integrate math equations in a Rails project. I see that there exists a well-maintained MathJax gem (https://rubygems.org/gems/mathjax-rails/versions/2.5.1). On the gem's page there is a section called "Why bother with another gem?", which argues mainly that MathJax is huge and makes the project difficult to manage when it is installed wholesale in a subdirectory of a Rails project. However, on the MathJax website (http://docs.mathjax.org/en/latest/start.html) I see that MathJax is available through a CDN, so I guess there is no need to download its source into the Rails project (maybe the gem was made at a time when that was necessary?).
So my question is: is there an advantage I am missing to using the gem, rather than defining my own few helpers to load MathJax from the CDN and configure it for my needs?

Assuming all things work, I think the CDN is the best and simplest way. They give instructions, it's free, it should reduce the performance and deployment cost of the library on your app, and if anything doesn't work it'll be easier to get help since you haven't done anything framework-specific.
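For what it's worth, the CDN route can be as small as one helper plus a call in your layout. A minimal sketch, assuming a MathJax v2 CDN URL and a default config string (swap in whichever CDN and config you actually use):

    # app/helpers/mathjax_helper.rb -- a sketch, not the gem's API
    module MathjaxHelper
      # Emits a script tag that loads MathJax from a CDN; URL and config are assumptions.
      def mathjax_include_tag(config = "TeX-AMS-MML_HTMLorMML")
        javascript_include_tag(
          "https://cdn.jsdelivr.net/npm/mathjax@2/MathJax.js?config=#{config}",
          async: true
        )
      end
    end

Then call mathjax_include_tag from the layout (or only on the pages that need it).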
However, using a CDN adds a partial failure mode to your app: what happens when your app server returns your HTML, but the CDN is down or unreachable for the MathJax assets? Your users will see the TeX code instead of the rendered equations.
This is pretty unusual, but can happen. Sometimes the CDN is broken, other times the user's ISP screws up their DNS to the CDN but not to your app.
Whether this is a risk worth defending against depends on your app, your users, etc.
You can avoid it by hosting MathJax yourself (on your app server or your own CDN), but that's more work to set up, and if you're serving assets from a CDN at all you can still get these partial failures.
If you really want the equations to render every time your app serves a page, take a look at the server-side rendering options (Node.js) added to MathJax and KaTeX recently. I'm not aware of a gem bundling this up for Rails yet, but it'd be cool to have. There's a MathJax Node server/service you could send requests to from Rails (cache the results), but that'll complicate deployment if you're used to having a single app. There's also someone calling KaTeX through ExecJS.
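The ExecJS approach looks roughly like the following. This is only a sketch: the vendored katex.min.js path is made up, it assumes your ExecJS runtime can evaluate the KaTeX bundle, and you'd want to cache the rendered output.

    # A rough sketch of server-side KaTeX rendering through ExecJS.
    require "execjs"

    # Path is hypothetical -- put the KaTeX bundle wherever suits your app.
    katex_source = File.read(Rails.root.join("vendor/assets/javascripts/katex.min.js"))
    katex = ExecJS.compile(katex_source)

    # renderToString is KaTeX's public API; it returns an HTML string
    # you can cache and embed straight into the page.
    html = katex.call("katex.renderToString", 'c = \sqrt{a^2 + b^2}')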

Related

Rails Webpacker or Vue-CLI?

I'm building a Single Page (Web) Application. I'm quite smitten by Rails v5.0, especially its built-in API capabilities.
In the past I've built JavaScript frontends using Vue.js, usually with the templates provided by the Vue-CLI project. This allows deployment of Vue component-based static sites basically anywhere. It's great.
Now, Rails 5.1 has some built-in Webpack and Yarn features which look pretty compelling too. I'm not sure how to proceed with my new application.
My questions:
What are the pros/cons of integrating Webpack and Vue into Rails itself, using the Webpacker extensions available in Rails v5.1? I intend to deploy to Heroku.
On the other hand, what are the pros/cons of using the Rails API-only mode for the backend, and maintaining the Vue/Webpack-based frontend in its own directory? I'd keep everything in the same repository, deploy the backend via Heroku, and the frontend via a static host like Netlify.
Which approach would have more cognitive overhead or technical complexity?
Over the past few days, I've been looking around, and I've not found much concise information on the web about this. People seem interested in the auto-reloading features of the Rails development environment, but I get that for free already with Vue-CLI.
As far as I can discern, these are reasons for keeping them separate:
Deployment of the frontend is pretty darn simple to anywhere.
The Webpacker mode for Rails is very new, and not many tutorials or guides exist yet, especially regarding integration testing. Keeping things separate means that my existing testing apparatus should still apply.
Here are some pros for integrating the two parts together:
The possibility of using static assets both for the frontend and possibly for server-generated pages in the future, should that be necessary.
Buy-in to "the Rails way", with implied future maintenance by the Rails team.
The JS frontend would not need to be hosted separately.
No need to worry about CORS(?).
What other concrete benefits exist for either approach?
When I started I went the Webpacker way, somehow because that's what it looked like it was "supposed" to be. As you say, there's very little guidance.
Webpacker (with its reliance on the latest Node) seems like a moving target, making deployment and even development more complex. I asked myself what the benefit was, and got rid of it.
Now I use Vue from the CDN. Benefits:
cached close to user
almost zero installation
easy to have dev/production versions
Then I write the app code into the Rails templates. I use Haml, and in fact Ruby2JS, but you can use plain JavaScript just fine. That's how I started, but I like Ruby, and the Ruby code ends up almost half the size of the generated JS. But I'm getting off track.
So your templates are your Vue-annotated Rails templates.
Small bits of code also go into the Rails template.
More code can be defined in the assets and referred to from the app.
Even components can be written into the template using the x-template syntax.
And last but not least: data can be transferred with to_json, directly into the templates, in the same render. That's much faster than an additional request. When to_json is not enough, you can use RABL to get exactly what is needed.
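To illustrate what I mean (the @products variable, markup and data shape are invented for this sketch; I actually write this in Haml, but ERB shows the idea just as well):

    <%# app/views/products/index.html.erb -- a hypothetical example %>
    <%# The records are serialized in the same render, so Vue boots with %>
    <%# its data already in the page -- no extra request on load. %>
    <div id="app" data-products="<%= @products.to_json %>"></div>

    <script>
      var el = document.getElementById("app");
      new Vue({
        el: el,
        data: { products: JSON.parse(el.dataset.products) }
      });
    </script>

ERB handles the escaping of the JSON inside the attribute, and JSON.parse gets the original structure back on the client.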
I hope I made that clear. I am in the process of writing some Vue-on-Rails material up, as there is so little to be found. Look out here (and I'll comment when the post is ready).

Performance strategy for site that is mostly static

I am taking over development on a Rails 3.2 application and am looking for the best way to improve page load time. The site itself is more of a large dynamic website than an actual web application (the site is http://korduroy.tv/, a surf lifestyle community site), and while there are a couple of small pieces that differ from user to user, most of the site is the same experience for everyone.
Page load time is fairly slow, and from looking at the server logs, it seems to be because each page is loading so much dynamic content (for example, most pages load resources from 10+ models). While I hope to go through and refactor what I can, I am looking for some basic performance wins. Knowing that most of the site is the same for every user, is there a way to aggressively cache the content on the server, or even serve static content that has been generated through some kind of background job?
My initial thought was to create a job that uses a static site generator, maybe something like Jekyll, and basically creates a static copy of the site, which could then be served from a CDN. My gut is telling me this is probably not the way to do it, plus there are some pages (such as the user profile page) that need to be served dynamically.
Any recommendations would be great. Disclaimer, I come from front end land and have very little knowledge of best practices when it comes to server side optimizations. Thanks!
From what you write, I believe your biggest gain will be in implementing fragment caching backed by a memcache store. See http://guides.rubyonrails.org/caching_with_rails.html for the definitive guide on Rails caching.
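In practice that's a small view change plus a cache store setting. A sketch (the model and partial names here are placeholders, not from your app):

    <%# app/views/videos/_video.html.erb -- wrap the expensive chunk in a cache block %>
    <% cache video do %>
      <%= render "video_details", :video => video %>
    <% end %>

Then point Rails at memcached with config.cache_store = :mem_cache_store in config/environments/production.rb. The cache key includes the record's updated_at, so the fragment expires when the record changes.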
You might be able to get away with a page cache or action caches for some of the content that doesn't depend on the user (like the homepage), but unless you're serving up millions of requests a day, I'm not sure this is necessary.
I notice that while the JavaScript and CSS seem to be compiled through the Rails asset pipeline, the images are missing the fingerprint hashes that allow aggressive browser caching of those resources (you don't have to worry about the contents changing, because the files get new hashes when you change the images). The key here is enabling the asset pipeline, making sure you compile your assets as part of deployment (rake assets:precompile), and using the image_tag and asset_path helpers (or the image-url Sass helper). Also make sure that nginx responds with a 304 (Not Modified) to your browser when you refresh a page. This won't affect the load on the Rails server (unless you have both nginx and Rails running on the same machine), but it will reduce the average page load time.
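Concretely, in the views that means letting the helpers emit the fingerprinted paths (the file names here are placeholders):

    <%# a sketch -- the helpers emit digest-stamped paths once assets are precompiled %>
    <%= image_tag "banner.jpg" %>          <%# becomes /assets/banner-<digest>.jpg %>
    <%= stylesheet_link_tag "application" %>
    <%= javascript_include_tag "application" %>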
You can look into more advanced techniques like caching your SQL queries, or optimizing them, but that adds complexity and makes the codebase harder to maintain. Try the view caching first, and see if that gets the load time to an acceptable level.

Spree as a mountable engine

So, we want to rearchitect a portion of our site as a Rails app. The original plan was to have a main "site" app, with a number of plugin apps (Rails 3.1 engines) with compartmentalized functionality -- a store component, a social/forums/chat component, etc. We also wanted to put themes/styling in a gem so that our web designers could modify the site's appearance and make some minor layout tweaks without having to "know Rails." Initially, this was going well; we created the main architecture, the plugins, and the theme gem, and it all played nicely together; cross-cutting functionality like auth was put in the main "site" app and was consumable by the plugin apps, giving us a single sign-in for the site (a design requirement).
Our initial plan for the store component was to use Spree (http://spreecommerce.com/), since it had, out of the box, 95%-plus of the functionality we needed. However, there's a catch -- Spree is distributed as a mountable engine, but it's also an app. Meaning that not only does Spree mount inside an app (which is not a problem; in fact it's behavior we were counting on), but it depends on being in control of the main app. Looking into the "why" of this behavior, it seems to come down to two core pieces of functionality. The first is some SEO permalink-rewrite functionality that has to go into middleware; we could hack things so that our main app included this chunk of code (even though this would begin to entangle store functionality across our entire site, muddying the "Spree as a mountable engine" story... more on that in a moment).
More complicated is Spree's use of Deface to do theming and customization. While this is "clever" (note quotes), it really makes the integration of Spree kind of a nightmare; either you follow the path of least resistance and make Spree an entire store to itself (which completely breaks our story of "the store is just one part of our site, and plays nice with the rest of the site, including auth, theming, etc."), or you have to hack the hell out of Spree.
Not only that, but Spree doesn't follow the standard Rails engine routing paradigm, where routes are isolated beneath an engine root (if you look in the lib's routes.rb file, you can see that it uses Rails.application's routes instead of an engine's routes). This means that we couldn't have www.oursite.com/store/...all_the_spree_goodness; we'd have to have www.oursite.com/spree_owns_the_sites_routes...
So, has anyone else tried this? We LOVE Spree and would like to use it as our store. But it's starting to look like there's no real way to "integrate" it with the rest of our site, aside from maybe some proxying magic with nginx or something like that (which is a separate nightmare, since we're hoping to host on Heroku, and then we have to figure out how to integrate multiple disparate apps into one DB -- for single sign-in auth -- and an HTTP front router).
Spree devs, we LOVE your code, but is there any work being done to make it an actual, for-real Rails Engine, as opposed to a stand-alone app that just happens to package all of its features into Engines? Without the ability to integrate it into an existing site (including not "owning" the app, being able to have all of its routes partitioned off, and so on), there's just no way we can use it :(
TIA.
I'm the Community Manager for Spree, so I think I may be able to answer your question.
Yes, there is work going on that will allow Spree to be a true Rails engine. In fact, that was the first task I took on when I was hired by Spree. The work is on the master branch (https://github.com/spree/spree) and we're looking to release this code as a 1.0.0.rc around Christmas time.
With this code, a couple of changes have been made. For starters, Spree is now a proper Rails engine, meaning you can mount it at /spree or /shop or /whatever and Spree's cool with that. Secondly, all the models and other classes are namespaced so they won't conflict with anything in your application.
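To give you an idea, mounting it then looks like the usual engine setup (a sketch; the app name and path prefix are placeholders):

    # config/routes.rb -- YourApp is a placeholder
    YourApp::Application.routes.draw do
      # Mount the Spree engine under a prefix of your choosing
      mount Spree::Core::Engine, :at => "/shop"
    end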
I'm not sure what you mean about Deface being "clever", though. What problems do you foresee with it? If you want to override an entire view, you can do that by placing your own template at app/views/spree/products/show.html.erb in your application. Mind you, that overrides the whole view; Deface is for when you only want to override a part of it.
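For a partial override, a Deface override is a small Ruby file along these lines (the name, selector and markup here are invented purely as an illustration):

    # app/overrides/add_promo_banner.rb -- an invented example
    Deface::Override.new(
      :virtual_path => "spree/products/show",
      :name         => "add_promo_banner",
      :insert_after => "[data-hook='product_description']",
      :text         => "<div class='promo'>Free shipping this week!</div>"
    )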
Could you perhaps elaborate on the issues you're having with Deface? Would be interested to help you sort them out.
Thanks for using Spree!

Is it possible to load a HAML view directly from the browser?

This seems a little silly to ask but some background:
I'm a designer who just got added last minute to a Rails project. I pretty much have no knowledge of Rails other than what I've learned this week, and even that is really slim. The literal extent of my work on the project is in the views, so it's not like I need too much.
100% of my experience is limited to purely static HTML/CSS/JS or WordPress/ExpressionEngine projects, so I don't usually go this deep.
That said, a lot of my front-end work hasn't even been hooked up in the backend yet, so I can't get to it by navigating the site running locally. Is there a way I can manually load these views via a URL?
Not from the browser directly, but you can compile the template from the command line:

    haml path/to/page.haml path/to/page.html

That generates the .html file for you.

How do I deploy sproutcore using a CDN into an existing site?

I'm just learning about SproutCore now, seems great. But I can't find a good answer on deployment options.
I'm starting small. Just implementing a single page of a complex site with SproutCore. Right now, that page is dynamically generated and served from my django based server. I serve all of my static files (.js, .css, images, etc) off of a CDN.
The page represents one customer.
So, on that dynamic page, it knows:
What customer we should be looking at, the ID, name, etc.
Where my media should be loaded from (absolute HTTP path)
How do I get a SproutCore based app to deploy and run in an environment like this?
I imagine I can upload the built SproutCore app to my CDN and then somehow reference it in my HTML page. But how does that SproutCore app know which server to request backend data from (I'd rather not hard-code it)? It can't be installed at the root of the CDN, so how does it know how to load things relative to itself? I could tell it an absolute URL to load from at run time. With some pain, I could even tell it an absolute URL to load from at build time.
No answers on this one, so here's what I did...
I ended up moving to Ember.js (aka SproutCore 2). That follows a completely normal "add .js to a page and serve it normally" model and doesn't have any interesting deployment worries, so it's a no-brainer.
