Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I want to understand the scope of Node.js.
Is it used only for the server side of mobile web apps, or can it be used to develop a full-fledged web app for every device configuration (i.e. replacing Ruby on Rails)?
I found some examples, but they all seem to be mobile web apps.
Is it that companies like Airbnb, LinkedIn, etc. developed two sites, one with Node.js as the backend and the other with Ruby on Rails, and route to one or the other depending on the device?
Please give me some helpful input to clear up my confusion.
Node.js is not just for mobile apps. The question How to decide when to use Node.js? gives a good analysis of when to use Node.js.
But, to sum up a bit:
Node is good when you have a lot of short-lived requests that don't require heavy CPU processing.
Node is good if you want to use JavaScript everywhere.
One disadvantage of Node is that there are a lot of JavaScript packages that do similar things; the ecosystem isn't as mature or as standardized as other languages' (maybe that isn't a negative to you, though).
Mobile applications often lend themselves to using node because they often follow the pattern of many short lived requests, e.g. look up something from a database.
There are tradeoffs, such as the fact that you will be using a fully dynamic language across your entire stack (not for the faint of heart).
So to recap, node is not just for mobile apps, but you should do some research to understand why you might use node.
Node can be used to produce many more solutions than just providing for mobile devices. Some solutions include command line tools (ex. Grunt), applications (ex. crawler), web services (ex. RESTful services), and full-fledged web sites (ex. hummingbird). Lastly if you want an example framework for constructing standard HTML (desktop or otherwise) web apps in node, see Jade.
As to which framework a company chooses: often several APIs are provided, and since the web services communicate with standard XML or JSON documents, the communicating servers don't necessarily need to be written in the same language.
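For illustration, here is a tiny Ruby sketch of that idea: as long as both sides agree on the JSON document, the producer and the consumer could each be written in a different language. The field names here are invented for the example.

```ruby
require "json"

# Service A (could be Node, Ruby, anything): produce the document.
order_doc = JSON.generate(
  order_id: 1001,
  items:    [{ sku: "ABC", qty: 2 }],
  total:    19.98
)

# Service B (potentially a different language entirely): consume it.
# Only the agreed-upon document structure matters, not the runtime.
parsed = JSON.parse(order_doc)
puts parsed["order_id"]            # 1001
puts parsed["items"].first["sku"]  # "ABC"
```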
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
I am developing a Rails App that serves as a cash register, sales recorder and ticket printer for each purchase for many stores of the same franchise.
The problem is that it must be able to run offline in case the internet goes down at any given time, at any store location, so that customer service does not get affected.
Is there a way to run the Rails App offline and sync it with the server after the connection has been re-established?
Or even operate it offline and sync it at the end of the day?
Does it require a specific database?
Technically, there's no reason that you can't do this. I have done it, and it actually works pretty well, if you're careful about how you design the application.
Beyond careful design, the things to be aware of are:
JavaScript libraries, such as jQuery, which you need to make sure are loaded from your public directory rather than from a CDN
Rails comes with SQLite, and that works great for offline (and small-scale) functionality. You can use local database servers for Postgres or MySQL (or anything that you can install locally) if you prefer.
Images, fonts, and other design assets should be available locally, as well, which can be tricky if you have online image or font resources that you want to use (e.g. Google restricts offline usage of their font resources)
Testing offline behavior is pretty easy, as well. Put it on a laptop and turn off the Wifi. You'll know pretty quickly if that works.
For file sync between the offline app and the main server, you have your choice of technology and data formats. You can implement REST-style sync APIs, low-tech FTP push, or even rsync. Data formats could easily be JSON (the current princess of structured data storage), well-established CSV, or even (shudder) XML.
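A minimal sketch of the sync-queue idea in plain Ruby (the class, field names, and endpoint are hypothetical; a real app would persist the queue in SQLite and POST the payload with a retry loop):

```ruby
require "json"
require "time"

# Queue sales locally while offline, then build one JSON batch to
# send to the main server once the connection returns.
class OfflineSalesQueue
  def initialize
    @pending = []
  end

  # Record a sale locally; nothing here touches the network.
  def record_sale(store_id:, amount_cents:, items:)
    @pending << {
      store_id:     store_id,
      amount_cents: amount_cents,
      items:        items,
      recorded_at:  Time.now.utc.iso8601
    }
  end

  # Serialize everything recorded so far into one sync payload.
  # A real app would POST this to something like /api/v1/sales/batch
  # and clear the queue only after a 2xx response.
  def sync_payload
    JSON.generate(sales: @pending)
  end
end

queue = OfflineSalesQueue.new
queue.record_sale(store_id: 7, amount_cents: 1250, items: ["latte"])
queue.record_sale(store_id: 7, amount_cents: 300,  items: ["cookie"])
payload = JSON.parse(queue.sync_payload)
puts payload["sales"].length  # two queued sales
```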
There should be no surprises in building an offline application, and you'll have all the tools and resources that Rails makes available to you, except the ability to arbitrarily load resources from the internet.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I'm in the middle of architecting a Grails 3 app based on the microservices project structure. Based on Jeff Scott Brown's video on how he separates the Web UI and the backend by using two Grails apps, isn't the Web UI app overkill compared to an AngularJS-based HTML frontend?
Please do point out the benefit of using a Grails Web UI app if any.
I know it is one year later, but since I wondered the same thing, here is my conclusion.
Scott Brown's presentation misses the point of microservices. They are not just small pieces of code that provide a RESTful interface; they are defined as micro by their footprint and by the fact that they can live separately from each other. Try running a Grails instance for each small service: the cost will be huge, as each machine requires more than 1 GB of RAM.
To answer your question: monolithic frameworks such as Grails are great toolkits, making it easy to handle and maintain more complex logic, as well as to handle security and other common tasks for which (e.g. with Node) you would need to install libraries of dubious quality or implement them yourself.
My take on the general question of microservices versus monolithic frameworks: if you need simple data access and you are worried about scaling, or you need a flexible way of distributing your system, use microservice frameworks. If you have a complex business model or you need the tools in the framework, use a monolithic framework. Finally, don't forget that no one is stopping you from using both simultaneously if need be; it's a valid strategy.
I suggest watching Martin Fowler's "Microservices" talk.
https://www.youtube.com/watch?v=wgdBVIX9ifA
I guess this architectural approach is now a bit outdated, with http://micronaut.io being available.
However, as I understand it, you are mainly asking whether you should use an Angular or React frontend instead of the server-side rendered Grails UI.
It depends. Angular and React perfectly fit the requirements of a single-page app, but sometimes all you need is a good old HTML frontend. And even if you decide to use a JavaScript-based frontend, you often need a backend-for-frontend or API manager as the entrance to your microservice world. A Grails UI can serve this need perfectly.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I've just been redirected by a friend to the uniGUI website. In a previous question I asked about a comparison between Raudus and ExtPascal.
Now this uniGUI seems to be an alternative to Raudus, with the added advantage of letting you compile a Win32 exe at the same time from the same source code (provided, of course, you limit yourself to uniGUI-approved UI components).
I think this is amazing, even if at first sight the idea will not make all the web-app purists happy; but in my opinion having this kind of tool is great.
There are many (even small) applications that can benefit from this code-once, get-two-UIs approach.
Anyway which are your feelings about this? Do you think it has a future?
ADDITIONAL NOTE: In order not to start a general discussion, please try to answer by mentioning uniGUI specifically, not only in general terms. Thanks.
I started developing uniGUI (or whatever name it may adopt in future) around two years ago. Since then it has evolved a lot. Initial version was based on VCL for the Web. With addition of ExtPascal and Ext JS it has become a very advanced tool to develop Web apps based on Delphi.
uniGUI simply defines itself as a web application development framework. The concept of a web application has been controversial since its inception. Some people claim that the Web is stateless while applications are stateful, and that one should not mix the two. However, nowadays, with increasing demand for web applications, such notions remain only a philosophical point of view.
More and more people want to access their desktop apps from the internet. Companies want their local accounting software to be accessible to other branches. A security company wants a web gateway for their access control software. These are all examples for the increasing demand for web apps.
We can consider uniGUI an abstraction layer over Delphi VCL controls which extends them to the Web. Like all other abstraction layers, it helps developers focus on application logic rather than on the development tool itself. It tries to fully integrate the RAD approach into Delphi-based web development.
The dual nature of uniGUI is simply a plus. I'm referring to its ability to deploy the same application to both web and desktop using the same codebase. This feature may be useful for some developers but useless for others, and it can be completely ignored by those who focus on web development only.
As for scalability, the best target for uniGUI and other similar tools seems to be the intranet, where the number of clients is predictable and connection speed is a non-issue.
That said, nothing prevents developers from developing web apps that target the internet. In the end it is all Ext JS on the client side and Delphi event handlers on the server side. It all depends on how smartly you design your app and how efficiently you manage your resources. If each of your sessions consumes 10 MB of memory, then you're likely to run out of memory very soon.
In conclusion, this framework will have a group of users which will find it best for their needs. There is no black or white here only big gray areas. Like any other tool it depends on the company, the particular project and the available deployment options to see if it is the right tool for you or not.
Web applications are very different from GUI ones. Mixing the two approaches for anything more serious than a simple form or a few buttons is, I think, just wrong.
I think the uniGUI idea is a great one. But I think Embarcadero should be the one offering it, as one more option for developers, rather than an independent party. Delphi developers have always wanted an easy way to create web applications, and frankly WebBroker is very poor.
Anyway which are your feelings about this? Do you think it has a future?
The general idea definitely has a future, if only in the PT Barnum sense. This particular implementation doesn't seem to be anything special - there's nothing in it that grabs me as being a great solution to any of the problems I currently have to deal with. But then, I see thick client apps, especially traditional Delphi 2 tier apps, as quite different from web apps.
I'd be more interested if uniGUI worked the other way, and provided a solid MVC framework for Delphi, then extended that to the web. That way you could more easily have your data + business logic + GUI in three connected pieces, rather than the traditional Delphi/RAD problem that business logic gets all tangled up in the GUI, then the web application is a pain to develop because the layers "have to be" separated. This smells like "solving" that problem by letting you leave the business logic mixed into the GUI when you move to the web.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
We have the following systems (and more) that we push/pull data from one app to another:
Hosted CRM (InsideSales.com)
Asterisk phone system (internal)
Banner ad system (openx, we host)
A lead generation system (homegrown)
Ecommerce store (spree, we host)
A job board (homegrown)
A number of job site scrapes + inbound job feeds
An email delivery system (like Mailchimp, homegrown)
An event management system (like eventbrite, homegrown)
A dashboard system (lots of charts and reports pulling info from all other systems)
With Rails 3 around the corner, I really want to pursue a micro-app strategy, but I'm trying to decide whether the apps should talk to each other via a REST HTTP API or, since I control them all, whether I should do something like shared models in code, which simplifies things but also makes it much easier for logic to leak across boundaries...
I've heard 37signals has lots of small apps; I'm curious how those apps communicate with each other. Or if you have any advice from your own multi-app experience, I'd love to hear it.
Thanks! I tried asking this on my blog http://rywalker.com/chaos-2010 a while back, too.
I actually got an email response from DHH...
We use a combination of both, but we default to REST integration. The only place where we use direct database integration is with 37signals ID user database. Because it needs to be so fast. REST is much more sane. Start there, then optimize later if need be.
Last time I had to crazy-glue a bunch of small applications together, I used a simple REST API.
Bonus points: it allows for integration with services / apps written in other languages.
Also helps if you've got a crazy buzz-word loving manager who likes to pivot technologies without warning.
I had the same situation, with a twist: I also had to talk to some daemons that were not exactly HTTP-ready. So I followed this pattern:
a REST API using XML/JSON to exchange data, and memcache to exchange short messages (you define some keys that you will update in memcache, and the other piece of software just polls memcache looking for those keys).
As a security measure I added an API key, or HTTP client authentication using digital certificates.
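Here is a rough Ruby sketch of that key-polling pattern. A plain Hash stands in for the memcache client (real code would use a gem such as dalli), and the class and key names are invented for the example:

```ruby
# One app writes an agreed-upon key; the other polls for it.
class MessageBoard
  def initialize(cache)
    @cache = cache # anything with []=, delete (memcache-like)
  end

  # Writer side: one app announces that new data is ready.
  def announce(key, value)
    @cache[key] = value
  end

  # Reader side: another app polls the agreed-upon key and
  # consumes the message if present (returns nil otherwise).
  def poll(key)
    @cache.delete(key)
  end
end

cache = {} # stand-in for a shared memcache connection
board = MessageBoard.new(cache)
board.announce("leads:new_batch", "42")
puts board.poll("leads:new_batch")          # "42"
puts board.poll("leads:new_batch").inspect  # nil (already consumed)
```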
Another option is AMQP messaging (via rabbitmq or other means).
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 4 years ago.
Hi, I'm looking to write a multiplatform task application for technical people. I want to support as many platforms as I can (web, shell, desktop), and therefore I have decided to begin with a server/API.
I want to write it in Ruby; however, I think Rails is a bit too heavy for this, even though it would do the job. Sinatra also doesn't seem quite suited to the task.
All the server/API would do is translate simple requests into database queries, and at a later stage handle some authentication and authorization.
So basically I want to know:
1) Should I use a REST api or a SOAP api?
2) Is there a framework for this? Or what is the closest framework available?
For the adventurous, there is also a lesser-known project called Grape. It is a Rack-based application, similar to Sinatra, but intended solely for writing APIs. I don't think it is mature enough to be used in serious projects yet, but it is still interesting to know about.
1) REST, SOAP is a terrible system and its support in Ruby is quite lacking. REST, on the other hand, is basically the ruby default and takes very little effort to use, especially if you are using REST/JSON.
2) Sinatra and Rails are basically your options. It comes down to how complex this application will be. Sinatra can probably handle the task just fine, but Rails does much of the work for you at the expense of bloat. You will already be taking on some of the rails bloat if you use ActiveRecord for the database. When authentication and/or roles come into play, Rails has mature solutions for both. Without any additional information, I'd lean towards Rails as it does much of the work for you and, when written properly, can still be fairly fast.
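To show how small the surface can be, here is a bare Rack-style endpoint in plain Ruby, using the calling convention that Sinatra, Grape, and Rails all build on: an app is anything that responds to `call(env)` and returns `[status, headers, body]`. The route and payload are invented for illustration.

```ruby
require "json"

# A minimal REST/JSON endpoint expressed directly in the Rack
# calling convention, with no framework at all.
tasks_api = lambda do |env|
  if env["PATH_INFO"] == "/tasks" && env["REQUEST_METHOD"] == "GET"
    body = JSON.generate(tasks: [{ id: 1, title: "write the API" }])
    [200, { "Content-Type" => "application/json" }, [body]]
  else
    [404, { "Content-Type" => "application/json" }, ['{"error":"not found"}']]
  end
end

# Simulate a request without a web server by calling the app directly.
status, _headers, body = tasks_api.call(
  "PATH_INFO" => "/tasks", "REQUEST_METHOD" => "GET"
)
puts status     # 200
puts body.first # the JSON task list
```

In a real deployment you would hand this (or a Sinatra/Grape app, which is the same thing with routing sugar) to a Rack server such as Puma via a `config.ru` file.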
Actually, SOAP is very, very easy to implement with AWS. At the same time, a REST API is also very easy to implement.
I have written a couple of different, parallel (JSON, XML, and custom-format) APIs with Rails. I'm sure the framework stack's performance will not be your bottleneck, so don't worry about performance just yet. Your first bottleneck will be the database, and then perhaps requests per second.
All in all I would suggest going with Rails; it takes care of a lot of the work for you.
Since this old thread still comes up high on related Google searches, I should chip in my highly biased (as co-author and user) recommendation for Hoodoo. Unlike other offerings, Hoodoo includes an API specification that says how API calls must be made and how they must respond; it enforces a consistency across your design that calling clients will appreciate. If you can call one API, you can call them all. Hoodoo implements a lot of the boilerplate so you can focus on meaningful service code.
We've been using Hoodoo services for over two years very successfully at Loyalty New Zealand, who run the country's largest loyalty programme. Our Hoodoo-based microservice platform handles 100% of our client transactions.
http://hoodoo.cloud/
https://github.com/LoyaltyNZ/hoodoo
https://github.com/LoyaltyNZ/hoodoo/tree/master/docs/api_specification
https://github.com/LoyaltyNZ/service_shell
Hoodoo has 100% non-trivial RSpec test coverage and 100% RDoc documentation coverage. As you'll see from the above links, there's quite a lot there!
Hoodoo is a Rack application, so it works with any Rack-compatible web server. Our preferred deployment mechanism, though, is an indefinitely horizontally scalable arrangement based on an HTTP-over-AMQP bridge and an AMQP cluster of nodes, each running the same collection of services, managed inside Docker containers and deployed with Fleet. The system self-load-balances across the service nodes via the queue, and the decoupling of the front-end HTTP-to-AMQP processor from the AMQP-to-HTTP input into the Rack stack dramatically reduces the system's attack surface. We wrote the front-end component in Node; for more about this, along with Node implementations of other parts of the framework concept, see the Alchemy Framework. Alchemy Node services and Hoodoo Ruby services can happily coexist on the same grid.