I am working on a project with very strict company security rules, which means I am unable to create CMS pages using a local server. As a result, the company still relies on old technologies such as SHTML includes, which puts Node.js out of the picture. I have been researching AngularJS, Handlebars.js, and various other client-side templating solutions, but most require some sort of third-party tooling (outside of Node) to get working. I am only allowed to use CSS/JavaScript libraries on flat pages.
Any suggestions?
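For what it's worth, a library like Handlebars.js can run entirely in the browser from a CDN script tag, with no server and no build step. A minimal sketch of that setup (written in TypeScript for clarity; on a flat page this would be plain JavaScript in a script tag, and the element IDs are just examples):

```typescript
// Minimal client-side templating sketch. Assumes Handlebars has been loaded
// as a global via a CDN <script> tag: no Node.js, no build step required.
declare const Handlebars: { compile(src: string): (ctx: unknown) => string };

// The template lives in the flat page itself, e.g.:
// <script id="page-tpl" type="text/x-handlebars-template">
//   <h1>{{title}}</h1>
//   <ul>{{#each items}}<li>{{this}}</li>{{/each}}</ul>
// </script>
const source = document.getElementById("page-tpl")!.innerHTML;
const render = Handlebars.compile(source);

// Render into a placeholder element already present on the page.
document.getElementById("content")!.innerHTML = render({
  title: "Flat-page CMS",
  items: ["no server", "no Node", "just a CDN script"],
});
```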
I've recently started development on what will become a rather large mobile application (React Native) that consumes a Ruby on Rails API (in API mode).
On the frontend I've used TypeScript extensively throughout the code, but I'm having trouble deciding how to build types and interfaces for the data received through API requests. I've heard about transpiling C# database models into TypeScript types, but I can't find anything similar for Ruby on Rails. The only thing I've been able to find covers handling types in monorepos, where the frontend and backend live in a single repository.
I could build my types manually on the frontend, but I feel this wouldn't be sustainable over the long term, especially as new developers join the project.
Are there any gems out there for this, or would I have to write it myself? Am I approaching the issue incorrectly?
I could build my types manually on the frontend
This is probably the best choice, as it encourages decoupling between the frontend and the backend API application. Automatically generating frontend code from your database sounds good in theory, but your frontend is not talking directly to the DB; it's talking to your API, and it should have no knowledge of the underlying data storage, which is an implementation detail of the API.
This is also why you only see this attempted in monorepos: it requires tight coupling, which is very undesirable. If the backend schema changes, it will break the clients; that would not happen if they simply communicated through a versioned API. As long as the API remains consistent, the clients are isolated to a very large degree from changes on the backend, and development on both sides can proceed in tandem.
You also have to take into consideration that ActiveRecord is extremely dynamic compared to anything written in C# and most other frameworks. Model attributes are automatically defined at runtime by reading the schema directly from the database; it's all super-ninja-level metaprogramming. So you can't use any form of static analysis to create frontend code from the backend code alone.
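To make the manual approach concrete, here is a hedged sketch of hand-written types that mirror the API contract rather than the Rails schema. The User shape, its fields, and the /api/v1/users route are illustrative assumptions, not anything a real API necessarily exposes:

```typescript
// A hand-written type describing what the API *sends*, not what the DB stores.
// Note createdAt is a string: that is how the JSON serializer delivers it,
// whatever the underlying database timestamp type may be.
interface User {
  id: number;
  email: string;
  createdAt: string; // ISO 8601, as serialized by the API
}

// A thin typed wrapper around the endpoint.
async function fetchUser(id: number): Promise<User> {
  const res = await fetch(`/api/v1/users/${id}`);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return (await res.json()) as User;
}
```

If hand-maintenance worries you, a runtime validator can catch contract drift during development, but the types themselves stay on the frontend's side of the boundary.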
I'm building an API that's intended to be consumed by an iOS app as well as a browser-based Web Application (using React / AngularJS). The API is being developed in Laravel.
What is the best way to structure this? Should the API and the Web Application be part of the same Laravel application, or should the API be an entirely separate entity that just returns JSON to whatever client requests it? In that case, I suppose my Web Application would interact with the API as though it were a 3rd-party API.
We had a monolithic application that had grown over time and decided to split the code up into repositories called "core", "frontend", and, new to our family, "api".
That is, we are using the microservice pattern, and it works well for us.
Our core/frontend repositories are CakePHP, but the API is written in Laravel 5 using the JSON API 1.0 spec.
We're quite, if not to say very, happy with this approach. The only canonical thing is the representation in the database; business logic and code are redundant across the repositories. An interesting lesson learned is that this redundancy uncovered quite a few bugs in the existing code base.
The clean separation is important so that we are not bound too tightly to one technology. We may want to replace the "frontend" stack with a node application talking to the API. We may want to replace the API with something else later.
Of course this comes with a huge amount of work, resources, etc., but in our view, not being able to act because of a monolith is worse.
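As a hedged illustration of the isolation a versioned API gives those interchangeable frontends, here is what a client-side wrapper might look like. The host name, the /v1 prefix, and the Product shape are assumptions made for the sketch; the media type comes from the JSON API spec mentioned above:

```typescript
// Clients pin themselves to one API version; the backend can evolve behind
// /v2 without breaking anything that still speaks /v1.
const API_BASE = "https://api.example.com/v1"; // hypothetical host

async function getJson<T>(path: string): Promise<T> {
  const res = await fetch(`${API_BASE}${path}`, {
    headers: { Accept: "application/vnd.api+json" }, // JSON API 1.0 media type
  });
  if (!res.ok) throw new Error(`API error ${res.status} for ${path}`);
  return res.json() as Promise<T>;
}

// Illustrative resource in JSON API shape; any frontend (CakePHP today,
// a node app tomorrow) codes against this contract, not the database.
interface Product {
  id: string;
  type: "products";
  attributes: { name: string };
}

getJson<{ data: Product[] }>("/products").then(({ data }) =>
  console.log(`fetched ${data.length} products`)
);
```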
We are trying to determine the best approach for adding a complex API layer to a modified version of nopCommerce. To back up a step, we're building out a custom site for a fashion/apparel manufacturer that has a lot of front-end application requirements and also needs to integrate with their cross-platform apps (iOS, Android, Windows), which we're building with Xamarin. We've tentatively decided to start with nopCommerce as the base of our application, to which we will add an API layer.
What we are unsure about is the best approach for implementing this in nopCommerce (or another similar .NET package). The options we are considering are MVC, Web API, and ServiceStack. We've been going through many of the tutorials on PluralSight.com to get up to speed on app development and API creation best practices, but there seem to be so many options that we're not sure where to start. We feel somewhat lost in a sea of implementation options for the API, and unsure how each should be evaluated against the choice of JS packages/frameworks used on the front end of the web site and the tools chosen to create the apps.
If it matters, our basic requirements are:
Expand the core of a basic e-commerce package with some custom ERP-style functionality
An API layer that can work effectively with both a web front end (possibly as a SPA) and all cross-platform apps built using Xamarin
Ensure OAuth authentication across all interface types, so we can use social media logins consistently everywhere and can authenticate the user in any environment
Given this...
My question boils down to this: which of the three API approaches (MVC vs Web API vs ServiceStack) is best for this?
In my humble opinion you should go with ServiceStack. It's easier to implement and a lot more flexible than Web API: you can add/remove plugins for different pieces of functionality, and you get a lot of infrastructure code out of the box, such as mechanisms for caching and logging, plus things beyond infrastructure such as validators, an IoC container, etc.
You also get a single mechanism for authentication, covering custom auth, OAuth, OAuth2, etc., which works for LinkedIn, Facebook, and Google+; you'll find yourself reusing a lot of that code across all your apps.
Another thing I like about ServiceStack is that it is practically just you and your IoC container, nothing else; everything is quite simple to understand and to implement. (There may be the odd hidden option or configuration you miss in the documentation, but you get a lot of support from the community on Google Groups and Stack Overflow.)
It's also easier to unit test: you already have abstractions for the HTTP request and response and a lot more, so you won't find yourself writing wrappers around all the legacy web implementations that ship with MVC.
ServiceStack is better than MVC Web API in terms of performance, too; it has one of the fastest JSON serializers out there for .NET.
I'm working on a SPA app at the moment and I have no regrets about my decision to go with the ServiceStack framework.
Just my 2 cents.
I would say Web API is the best option for the services layer:
- http://www.asp.net/vnext/overview/aspnet-web-api
There are many advantages:
- Web API ships as a separate component on its own release cycle, so it gets the latest features
- Security
- Versioning
- Attribute-based routing
- OData integration
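One point worth adding, since the question ties the API choice to the front-end frameworks and the Xamarin apps: whichever of the three you pick, every client sees the same thing, HTTP plus JSON. A hedged sketch of that client-side view; the /api/products route, the Product shape, and the bearer-token header are illustrative assumptions, not part of any of the frameworks discussed:

```typescript
// From the SPA's (or, analogously, a Xamarin app's) point of view, MVC,
// Web API, and ServiceStack are indistinguishable: a URL, a verb, and JSON.
interface Product {
  id: number;
  name: string;
  price: number;
}

async function listProducts(accessToken: string): Promise<Product[]> {
  const res = await fetch("https://shop.example.com/api/products", {
    headers: {
      Accept: "application/json",
      Authorization: `Bearer ${accessToken}`, // OAuth token from any social login
    },
  });
  if (!res.ok) throw new Error(`API returned ${res.status}`);
  return res.json() as Promise<Product[]>;
}
```

So the framework decision can mostly be made on server-side concerns (plugins, testing, performance) rather than on compatibility with the front-end stack.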
There are a few 3rd-party addons you can add to a Heroku app to manage caching. Why would you use them rather than the built-in caching framework?
I am assuming these 3rd-party addons work like CloudFlare, or at least on the same basic principles.
Caching Framework
You control when the cache expires, allowing for fresher, more relevant content.
Your site doesn't go down, get messed up, or look broken when a third-party service goes down.
You can permanently cache things that will never change.
You can build your own CDN with your own logic and setup.
Fragment caching: only part of the page expires instead of the whole thing, leading to fewer dog-piles.
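A hedged sketch of the kind of expiry control the points above describe. This is a generic in-memory TTL cache, not any particular framework's API, and the keys and values are made up:

```typescript
// You decide the lifetime per entry: short TTLs for fresh content,
// no TTL at all for things that will never change.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  set(key: string, value: V, ttlMs = Infinity): void {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  get(key: string): V | undefined {
    const hit = this.store.get(key);
    if (!hit) return undefined;
    if (Date.now() > hit.expiresAt) {
      this.store.delete(key); // expired: evict and treat as a miss
      return undefined;
    }
    return hit.value;
  }
}

const cache = new TtlCache<string>();
cache.set("sidebar-fragment", "<aside>...</aside>", 5 * 60_000); // fragment: fresh for 5 min
cache.set("logo-url", "/assets/logo-v1.png"); // immutable: cached forever
```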
3rd Party Service
Fire and forget.
Cheap.
They usually pull all images, JS, and CSS files into their 'CDN' as well.
Some claim added security, because your site now sits behind their servers, though I haven't really read anything suggesting this is more than marketing double-talk.
I'm developing a new website which is going to include a web API. What I want to know is how easy (or hard) it is to develop the server-side OAuth service for my new website.
I'm using OE 11.0 WebSpeed in combination with Apache. Because I've been doing Progress/OpenEdge 4GL/ABL development for over 10 years and nothing else, I find it very hard to translate existing code in PHP, Python, Java, etc.
I've read the RFC related to OAuth and I find myself getting lost in "key-varner".
Has anybody developed OAuth server-side code in OpenEdge WebSpeed? If so, are you willing to share?
Update: the CLR bridge works in OE 11 onwards now, and we use .NET DLLs in WebSpeed successfully.
I think your best bet is to do this outside of WebSpeed/ABL; otherwise you are stuck reinventing the wheel. The easiest solution would be to call a .NET library directly from your ABL code, but I think the CLR bridge doesn't work for WebSpeed/AppServer apps (see the update above).
One solution is to have a separate, non-WebSpeed app just to handle these OAuth requests, using a ProxyPass directive on your Apache server to pattern-match the URIs and route those requests to the appropriate app.
You could use any non-WebSpeed technology you want, but since I know Ruby best, I will point out the excellent omniauth gem, which supports arbitrary authentication strategies, including OAuth. You can create a custom gem for your specific provider by working off any of these strategies (see the "Notes" section and look at any of them that cite "OAuth API" or "OAuth 2 API").
If you want to go whole hog and write the entire app in a different language (yet still use an OpenEdge database), I will toot my own horn and point out the Ruby adapter for OpenEdge databases. This would allow you to use the Ruby on Rails framework for your Web app.
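To make the separate-app-behind-ProxyPass idea concrete, here is a hedged sketch, written in TypeScript/Node with Express rather than the Ruby suggested above, purely for illustration. The provider URLs, port, routes, and parameter names are generic OAuth 2.0 placeholders, not any specific provider's API:

```typescript
// A tiny standalone service that owns the /auth/* routes. Apache forwards
// them here with something like: ProxyPass /auth http://localhost:9292/auth
import express from "express";

const app = express();

// Step 1: send the browser to the provider's authorization endpoint.
app.get("/auth/login", (_req, res) => {
  const params = new URLSearchParams({
    client_id: process.env.CLIENT_ID ?? "",
    redirect_uri: "https://example.com/auth/callback",
    response_type: "code",
    scope: "email",
  });
  res.redirect(`https://provider.example/oauth/authorize?${params}`);
});

// Step 2: the provider redirects back with a code; exchange it for a token.
app.get("/auth/callback", async (req, res) => {
  const tokenRes = await fetch("https://provider.example/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code: String(req.query.code),
      client_id: process.env.CLIENT_ID ?? "",
      client_secret: process.env.CLIENT_SECRET ?? "",
      redirect_uri: "https://example.com/auth/callback",
    }),
  });
  const token = await tokenRes.json();
  // Hand the session/token back to the WebSpeed side however suits you,
  // e.g. by setting a cookie on the shared domain.
  res.json(token);
});

app.listen(9292);
```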