Our app is starting to become quite the monolith, and we're seeing some scaling pain points. We have some long-running page loads that tie up the request queue for the rest of the app. I've been thinking about it for a few days, and abstracting some functionality out into microservices could relieve some of the pressure.
I've been doing a lot of reading on microservices, and you can either have all your services share the same DB or create a new DB for each service; both are considered acceptable based on what I've read. Our data is quite large, so sharing the same DB would be the easier abstraction.
I sat down this morning determined to start a new Rails JSON API app to abstract some business analytics pages out into a new service, but I quickly realized this new app won't have any of the model files, etc., for ActiveRecord to use. How do people get around this when they share a DB across all services?
Edit:
I'm also planning to run this service off a read-only follower (slave) DB to help with the load.
If you use a library like Sequel instead of ActiveRecord, you can rely less on the model layer and keep your code/queries closer to the database. I don't think that would solve all your problems, though, as you would still need to update your code everywhere after each migration. One thing you can do, however, is package part of your model layer as gems to share across applications, as sketched below.
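A minimal sketch of the gem approach (the gem name `shared_models` and the `Order` model are made up for illustration):

```ruby
# shared_models/lib/shared_models.rb -- a hypothetical gem holding the
# ActiveRecord models that every service shares.
require "active_record"

module SharedModels
  # Each service supplies its own connection config, so the analytics
  # service can point at the read-only follower while the main app
  # keeps writing to the primary.
  def self.connect!(config)
    ActiveRecord::Base.establish_connection(config)
  end

  class Order < ActiveRecord::Base
    self.table_name = "orders"
  end
end
```

Each app then pulls the gem into its Gemfile (e.g. `gem "shared_models", path: "../shared_models"`, or a private git source), so a schema change only has to be reflected in one codebase.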
"We have some long running page loads that kill the request queue for the rest of the app."
=> Can it be solved by making some of your requests asynchronous? The backend of a web app is not supposed to do any heavy lifting; that should be done by cron jobs or other background workers (see the sketch below).
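For example, a slow analytics page could queue the heavy query instead of blocking the request (a sketch; `AnalyticsReportJob`, `AnalyticsReport`, and `ReportMailer` are hypothetical names):

```ruby
# app/jobs/analytics_report_job.rb
class AnalyticsReportJob < ApplicationJob
  queue_as :low_priority

  # The slow query now runs on a worker, not inside the web request.
  def perform(user_id)
    report = AnalyticsReport.build_for(user_id) # hypothetical slow model call
    ReportMailer.ready(user_id, report.id).deliver_later
  end
end

# app/controllers/reports_controller.rb
class ReportsController < ApplicationController
  def create
    AnalyticsReportJob.perform_later(current_user.id)
    head :accepted # 202: respond immediately; the report arrives later
  end
end
```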
I am developing a Rails app that serves as a cash register, sales recorder, and ticket printer for each purchase, for many stores of the same franchise.
The problem is that it must be able to run offline in case the internet goes down at any given time at any store location, so that customer service is not affected.
Is there a way to run the Rails App offline and sync it with the server after the connection has been re-established?
Or even operate it offline and sync it at the end of the day?
Does it require a specific database?
Technically, there's no reason that you can't do this. I have done it, and it actually works pretty well, if you're careful about how you design the application.
Rails comes with SQLite, and that works great for offline (and small-scale) functionality. You can use local database servers for Postgres or MySQL (or anything you can install locally) if you prefer (see the config sketch after the list below).
Other than those, the things to be aware of are:
JavaScript libraries, such as jQuery, which you need to ensure get loaded from your public directory rather than from a CDN
Images, fonts, and other design assets, which should be available locally as well; this can be tricky if you want to use online image or font resources (e.g. Google restricts offline usage of its font resources)
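For the SQLite option, the stock Rails config is all a single register needs (a sketch; the database path is arbitrary):

```yaml
# config/database.yml -- each store's register keeps its own local file
production:
  adapter: sqlite3
  database: db/register_production.sqlite3
  pool: 5
  timeout: 5000
```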
Testing offline behavior is pretty easy as well: put it on a laptop and turn off the Wi-Fi. You'll know pretty quickly whether it works.
For file sync between the offline app and the main server, you have your choice of technology and data formats. You can implement REST-style sync APIs, low-tech FTP push, or even rsync. Data formats could easily be JSON (the current princess of structured data storage), well-established CSV, or even (shudder) XML.
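As a concrete example of the REST-style option, the register can queue sales locally and replay them when the connection returns (a sketch; `PendingSale` and the `/api/sales` endpoint are made-up names):

```ruby
require "net/http"
require "json"

# A hypothetical local table of sales that haven't reached the server yet.
class PendingSale < ActiveRecord::Base
  def self.sync!(host)
    where(synced: false).find_each do |sale|
      res = Net::HTTP.post(URI("#{host}/api/sales"),
                           sale.payload.to_json,
                           "Content-Type" => "application/json")
      # Mark as synced only after the server acknowledges the sale.
      sale.update!(synced: true) if res.is_a?(Net::HTTPSuccess)
    end
  end
end

# Run from cron, or whenever connectivity comes back:
# PendingSale.sync!("https://main.example.com")
```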
There should be no surprises in building an offline application, and you'll have all the tools and resources that Rails makes available to you, except the ability to arbitrarily load resources from the internet.
Here is my dilemma: right now I am developing a social media app using Parse as my backend service, and it's working just fine. While doing some research today, I realized that if my app gains popularity quickly, using Parse will become very expensive, or requests that go over the limit will just be stopped altogether.
1) Basically, my question for you all is: in your experience with Parse, how effective is it at handling apps with many users?
2) Also, do many users equate to many requests per second, or is there an efficient way to develop my app that keeps the requests per second down?
3) And lastly, would it just be easier/more feasible to develop my own backend service for my app (I have no backend experience, so I would have to teach myself)? I am not opposed to doing this; I just know it will add development time, but it could be the best option in the long run.
Thanks for all your help.
1) We use Parse in most of our apps, and it handles things great. One of our apps that uses Parse has 3k monthly users, and everything is going well.
2) You should design your app to keep requests to a minimum by fetching as much data as possible in each request. This will bring your request count down.
3) I recommend starting with Parse-like systems. We are in a time of hurry, so you must act lean. If Parse is no longer enough for you in the future, that is something to be happy about; you can develop your own backend service in the meantime.
Though it is good that you are planning ahead, Parse or something similar like Amazon is going to be way better for scalability. If you get a domain and maintain your own MySQL database (or whatever else), the scalability isn't as good as with a service that handles all of that for you.
I created my own backend, and I wish I hadn't wasted the time. I will now most likely have to move to a service for scalability reasons, so I really just wasted my time building it. That's just my two cents; other people may disagree.
I think that building your own backend is very difficult and time consuming. Take a look at CloudKit; it gives you much better free quotas than Parse. Note that you need an enrolled developer account to use it. Personally, I am building my app with Parse, and if I get it ready for release, I'll enroll in the program and change the code to work with CloudKit, or leave it with Parse and switch to CloudKit only if Parse's quotas are nearly exhausted. But Parse's free quotas are quite big, in my experience.
I'm in the middle of architecting a Grails 3 app based on the microservices project structure. Based on Jeff Scott Brown's video on how he separates the Web UI and the backend using two Grails apps, isn't the Web UI app overkill compared to an AngularJS-based HTML frontend?
Please do point out the benefits, if any, of using a Grails Web UI app.
I know it is one year later, but since I wondered the same thing, here is my conclusion.
Scott Brown's presentation misses the point of microservices. They are not just small pieces of code that provide a RESTful interface; they are defined as "micro" by their footprint and by the fact that they can live separately from each other. Try running a Grails instance for each small service: the cost will be huge, as each machine requires more than 1 GB of RAM.
To answer your question: monolithic frameworks such as Grails are great toolkits, making it easy to handle and maintain more complex logic, as well as security and other common tasks for which, with (e.g.) Node, you would need to install libraries of dubious quality or implement them yourself.
My take on the general question of microservices versus monolithic frameworks: if you need simple data access and you are worried about scaling, or you need a flexible way of distributing your system, use microservice frameworks. If you have a complex business model or you need the tools in the framework, use a monolithic framework. Finally, don't forget that no one is stopping you from using both simultaneously if need be. It's a valid strategy.
I suggest watching Martin Fowler's "Microservices" talk.
https://www.youtube.com/watch?v=wgdBVIX9ifA
I guess this architectural approach is now a bit outdated with http://micronaut.io being available.
However, as I understand it, you are mainly asking whether you should use an Angular or React frontend instead of the server-side rendered Grails UI.
It depends. Angular and React are a perfect fit for a single-page app, but sometimes all you need is a good old HTML frontend. And even if you decide on a JavaScript-based frontend, you often need a backend-for-frontends or an API manager as the entrance to your microservice world. A Grails UI can serve that need perfectly.
I'm building a pretty stock-standard N-tier ASP.NET MVC website, and I'm trying to think of all the little miscellaneous tasks that people often forget to do when building a site.
Off the top of my head things like:
Custom error pages
Maintenance downtime handling
Load testing
etc.
What are the common things that people often forget?
People tend to forget to test the deployment and upgrade process.
Deploying the system to a production-like environment early on during the development process will uncover (often forgotten) external dependencies and configuration settings that need to be tweaked before production. Plus it will force the team to start thinking about the upgrade process and how to automate it.
Some examples of such tasks (from my own experience):
making the website run well with JavaScript disabled
form validation (especially limiting the size of inputs)
protection against CSRF and other kinds of attacks (penetration tests)
logging server errors (using ELMAH or something similar)
making the site's logo (favicon) display in the address bar
SEO optimization (meta tags, page keywords, descriptions, sitemap, etc.)
Edit: added the point about JavaScript.
In my experience, the main mistake or misunderstanding of people starting out with MVC is that they confuse the C of MVC, the Controller, with the business logic, and the M, the Model, with the data access layer or entity model.
I gave this answer a while ago, and there are some comments there about exactly this confusion: MVC is only about UI controlling and modeling; it is certainly not a replacement for other, non-UI-related layers... MVC3 and Entity Framework
We have the following systems (and more), which push/pull data from one app to another:
Hosted CRM (InsideSales.com)
Asterisk phone system (internal)
Banner ad system (openx, we host)
A lead generation system (homegrown)
Ecommerce store (spree, we host)
A job board (homegrown)
A number of job site scrapes + inbound job feeds
An email delivery system (like Mailchimp, homegrown)
An event management system (like eventbrite, homegrown)
A dashboard system (lots of charts and reports pulling info from all other systems)
With Rails 3 around the corner, I really want to pursue a micro-app strategy, but I'm trying to decide whether the apps should talk to each other via a REST HTTP API or, since I control them all, whether I should do something like shared models in the code, which simplifies things but also lets stuff leak across boundaries much more easily...
I've heard 37signals has lots of small apps; I'm curious how those apps communicate with each other. Or if you have any advice from your own multi-app experience, please share.
Thanks! I tried asking this on my blog (http://rywalker.com/chaos-2010) a while back, too.
I actually got an email response from DHH...
We use a combination of both, but we default to REST integration. The only place where we use direct database integration is with the 37signals ID user database, because it needs to be so fast. REST is much more sane. Start there, then optimize later if need be.
Last time I had to crazy-glue a bunch of small applications together, I used a simple REST API.
Bonus points: it allows for integration with services / apps written in other languages.
Also helps if you've got a crazy buzz-word loving manager who likes to pivot technologies without warning.
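To make "simple REST API" concrete: each app exposes a few JSON endpoints, and its neighbours consume them over plain HTTP (a sketch; the `/api/leads` resource, the `Lead` model, and the host name are invented):

```ruby
# In the lead-generation app: expose leads as JSON (Sinatra for brevity).
require "sinatra"
require "json"

get "/api/leads/:id" do
  lead = Lead.find(params[:id]) # assumes an existing Lead model
  content_type :json
  { id: lead.id, email: lead.email, source: lead.source }.to_json
end

# In the dashboard app: consume the endpoint over plain HTTP.
require "net/http"

def fetch_lead(id)
  res = Net::HTTP.get_response(URI("http://leads.internal/api/leads/#{id}"))
  JSON.parse(res.body) if res.is_a?(Net::HTTPSuccess)
end
```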
I had the same situation, with one addition: I also had to talk to some daemons that were not exactly HTTP-ready. So I followed this pattern:
a REST API using XML/JSON to exchange data, plus memcache to exchange short messages (you define some keys that you will update in memcache, and the other piece of software just polls memcache looking for those keys; see the sketch below)
As a security measure, I added an API key, or HTTP client authentication using digital certificates.
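A sketch of the memcache half of that pattern, using the `dalli` gem (the key name and payload are arbitrary):

```ruby
require "dalli"
require "json"

cache = Dalli::Client.new("localhost:11211")

# Writer: one app leaves a short message under an agreed-upon key
# (300 is a TTL in seconds, so stale messages expire on their own).
cache.set("events:import_done", { count: 42 }.to_json, 300)

# Reader: a daemon polls the same key and reacts when it appears.
loop do
  if (raw = cache.get("events:import_done"))
    puts "import finished: #{JSON.parse(raw)}"
    cache.delete("events:import_done")
  end
  sleep 5
end
```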
Another option is AMQP messaging (via RabbitMQ or other brokers).
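For completeness, the RabbitMQ version via the `bunny` gem might look like this (a sketch; the queue name and payload are arbitrary):

```ruby
require "bunny"
require "json"

conn = Bunny.new # defaults to amqp://localhost:5672
conn.start
queue = conn.create_channel.queue("app_events", durable: true)

# Producer side: one app publishes an event and moves on.
queue.publish({ event: "lead.created", id: 123 }.to_json, persistent: true)

# Consumer side: another app (or daemon) processes events as they arrive.
queue.subscribe(block: true) do |_delivery, _props, body|
  puts "received: #{JSON.parse(body)}"
end
```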