Breaking up an existing Rails application into UI and API

We have a working Rails application, but the UI and API are built into one application. The problem is that when anything happens to the app, for example if it goes down, it affects both the API and the GUI, and third-party systems can't communicate with the API. We are trying to prevent this. These are my options so far:
Option 1: deploy the same application to two different ports using Puma and Unicorn.
Option 2: keep two different repos for the API and GUI and deploy them separately on the same server.
Is there a better way to have one repository hosting both the API and the GUI, so that when the GUI goes down, the API isn't affected?

Thanks, I figured it out: keep the same codebase and deploy it to one server that serves the API, one server that serves the GUI, and a third server running MySQL.
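One way to keep a single codebase while deploying it in two roles is to switch the routes on an environment variable set per server. A minimal sketch, assuming a hypothetical APP_ROLE variable and an orders resource (neither is from the original question):

```ruby
# config/routes.rb -- role-based routing for a single codebase.
# APP_ROLE is a hypothetical env var set per deployment:
#   APP_ROLE=api on the API server, APP_ROLE=gui on the GUI server.
Rails.application.routes.draw do
  role = ENV.fetch("APP_ROLE", "gui")

  if role == "api"
    # Only the JSON endpoints exist on the API server.
    namespace :api do
      resources :orders, only: [:index, :show, :create]
    end
  else
    # Only the HTML views exist on the GUI server.
    resources :orders
    root to: "orders#index"
  end
end
```

With this shape, a crash or deploy problem on the GUI box cannot take the API box down with it, yet both servers run the same repository.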

Live iOS (Swift) chat with an existing Rails web application

I have an existing Rails web application with a Postgres DB running on the production server, and I recently created an iOS counterpart. I am trying to implement live chat within the iOS application (the web counterpart does not have live chat).
I have done some research and found that I need something like a TCP server for constant communication between the iOS application and my Rails application. I used a daemon to run a simple Ruby script on my production server (beside my web application) to test this: I was able to run a simple Ruby client that connected to the Ruby TCPServer and received some static messages.
I have no idea where to go from here. All examples of live iOS chat seem to write and receive data through services such as Firebase. I do not want to do this because I already have a Rails application communicating with a Postgres database.
How can I get live chat running in the iOS counterpart and have both the iOS app and the Rails web app communicating with the same database?
I found a nice tutorial and eventually figured out how to do this by running node.js, socket.io, and Redis alongside the Rails application. It has diagrams and sample code, so it is easy to follow.
In production, Redis will run in the background if your configuration file sets daemonize to yes, and the Node server will run in the background if you use Forever (a Node package).
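A sketch of the Rails side of that setup: after a chat message is saved, publish it on a Redis channel that the Node/socket.io process subscribes to and broadcasts from. The payload shape, channel name, and model names here are assumptions, not from the tutorial:

```ruby
require "json"
require "time"

# Build the JSON payload the Node process will broadcast to connected
# iOS and web clients. Field names are illustrative.
def chat_payload(user_id, body)
  { user_id: user_id, body: body, sent_at: Time.now.utc.iso8601 }.to_json
end

# In the Rails model this would hang off a callback, so the message is
# persisted in Postgres first and only then pushed out (requires the
# redis gem and a running Redis server):
#
#   class Message < ApplicationRecord
#     after_create_commit do
#       Redis.new.publish("chat", chat_payload(user_id, body))
#     end
#   end
```

Because the message is written to Postgres before being published, both the iOS app and the web app read chat history from the same database, which was the original requirement.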

Basics of running multiple internal Ruby/Rails apps on an IIS server - is something like Passenger required for this situation?

I am going to be deploying about a dozen Rails apps on an IIS server. These apps are behind our firewall for internal use only. They will all be low-usage but critical apps.
The current setup is:
- IIS running on Server 2012
- Rails 4 installed on IIS via the Web Platform Installer
- We will be using MS SQL Server
- Apps uploaded from the laptop to the server do run
- There will be about 15 apps running; we can add another server if necessary
- Nothing else will be running on that server
- E.g. an app might be located in c:\inetpub\wwwroot\apps\sales and another in \hr, etc.
What I've read about deploying multiple Rails apps seems to be geared toward public-facing sites. I don't see much about private sites using a server IP/folder setup.
Question:
- Can multiple apps be run without adding a tool such as Passenger?
It is possible using the IIS HttpPlatformHandler extension. Scott Hanselman did a good write-up that explains how to use it.
However, that blog post suggests using JRuby and Tomcat. I don't think that is necessary; I would suggest using Puma as your web server instead.
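For reference, a sketch of what the Puma side of that setup might look like. HttpPlatformHandler passes the port it chose to the child process in the HTTP_PLATFORM_PORT environment variable; the fallback port and thread counts below are illustrative, not prescribed:

```ruby
# config/puma.rb -- sketch for running Puma behind IIS HttpPlatformHandler.
# IIS starts this process per app and tells it which port to bind via
# HTTP_PLATFORM_PORT; fall back to 3000 for local development.
port ENV.fetch("HTTP_PLATFORM_PORT", 3000)
environment ENV.fetch("RAILS_ENV", "production")

# Single-process mode: Puma's clustered (forked worker) mode is not
# available on Windows, and IIS manages the process lifetime anyway.
workers 0
threads 1, 5
```

Each of the dozen apps would get its own web.config pointing HttpPlatformHandler at its own Puma process, so no shared tool like Passenger is needed.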

Rails: What is the use of web servers (Apache / nginx / Passenger)?

Hi, I've been learning Rails for the past half year and have a few apps up on Heroku. To me, deploying apps onto the web seemed as simple as a Heroku push. However, I just started my first Rails internship, and one of my seniors is talking about Apache and Nginx. I'm not sure how they fit into the picture, since I thought apps consisted of only Rails plus a cloud app platform. I have looked it up, but I still don't get how and where they affect my app's life cycle. Can someone explain the what/where/when of using web servers?
So you've got your Rails app, and as you know you've got controllers and actions and views and whatnot.
When a user in their browser goes to your app on Heroku, they type in the URL which points to the Heroku servers.
The Heroku servers are web servers that listen for your users' requests and connect them to your Rails application. The Rails application does its thing (gets a list of blog posts or whatever) and the server sends this information back to your user's browser.
You've been using a web server the whole time; it was just abstracted away from you and made super simple thanks to Heroku.
So the life cycle is somewhat like this:
While building your applications on your development machine, you've probably come across the command rails server. This starts a program called WEBrick, a web server that listens on port 3000. You go to your app via http://localhost:3000.
WEBrick listens on port 3000 and responds to requests from users, such as the "hey, give me a list of posts" request.
When you push your code into production (in your experience, via a Heroku push) you're sending your code to a provider who takes care of the production equivalent of rails server for you.
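Under the hood, what a Ruby web server calls is a Rack application: the common interface that WEBrick, Unicorn, Puma, and Passenger all speak. A minimal sketch, not tied to any real app:

```ruby
# A Rack app is just an object with a #call method that takes the
# request environment and returns [status, headers, body].
app = lambda do |env|
  if env["PATH_INFO"] == "/posts"
    # The "give me a list of posts" request from the text above.
    [200, { "Content-Type" => "text/plain" }, ["post 1\npost 2"]]
  else
    [404, { "Content-Type" => "text/plain" }, ["not found"]]
  end
end

# `rails server` wraps your entire Rails app in exactly this interface
# and hands it to a web server, which listens on a port and invokes
# app.call once per incoming request.
```

That separation, server handles sockets and HTTP, app handles logic, is what makes the production setups described next possible.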
A production setup (which your senior developers are talking about) is a bit more complex than your local rails server setup on your development machine.
In production you have your Rails application server (often something like Unicorn or Passenger), which takes the place of WEBrick.
In a lot of production setups, another server, such as Apache or nginx is also used, and is the server that the user connects to when they go to your application.
This server often acts as a bit of a router, working out how different types of requests should be handled. For instance, requests for static files (CSS, images, JavaScript, etc.) that are stored on the server might be processed directly by Apache or nginx, since they do a fantastic (and fast) job of sending static assets back to the client.
Other requests, such as "get me a list of all blog posts", get passed on to the Rails server (Unicorn, Passenger, etc.), which in turn does the required work and sends the response to Apache/nginx, which sends it back to the client.
Heroku does all this for you in a nice, easy-to-use package, but it sounds like the place you're working at manages this themselves rather than using Heroku. They've set up their own web servers and will have their own way of doing the equivalent of a Heroku push, which sends the code to the servers and makes sure they're up and running, ready to respond to user requests.
Hope that helps!
Web pages need a web server to make them available on the Internet.
So a site that is all static content (just .html pages) needs only a web server, and that's where Apache, nginx, etc. come in: they are web servers.
When you use a framework like Rails, an additional component is added: an application server. This pre-processes the pages using the Rails framework and then (still) uses the aforementioned web server to make the final pages (which are .html, of course) available to end users through their browsers.
Phusion Passenger is an application server that, with Rails, helps manage and automate the deployment of code.
Heroku is a cloud service, meaning they take care of the hardware and software, allowing you to seamlessly publish your application without worrying about what goes on behind the scenes. The only thing you have to do is push your code to their Git repository and voilà.
On the other hand, Rails can also be deployed on a system you build completely from scratch, where you are responsible not only for the app's development but also for server maintenance and the choice of hardware and software. You could then choose between several application servers capable of running Rails, such as Passenger or Unicorn (often sitting behind a web server like nginx).
Hope that helps.

Scaling a Rails application on EC2. Is this the right way?

So I have a Rails application. It currently runs as a separate front-end and back-end plus database.
I need to scale it to have several back-end servers.
The back-end server has Resque background workers running (spawned by front-end user requests). It also relies heavily on callbacks.
I am planning the following setup:
|front-end| --- |load-balancer (haproxy or AWS ELB)| --- |Server 1| --- |PostgreSQL database|
                                                     \-- |Server 2| --/
(other servers added in the same fashion later; other DBs added via replication later if needed)
I have concerns about how to deal with putting Database on a separate machine in this case.
1) I intend to create a new, empty Rails app with a schema identical to the initial back-end, have it running and accepting updates/posts via HTTP, and keep it connected via remote SSH (to trigger :after_commit callbacks in the back-end). Is this a good idea?
2) I am using PostgreSQL and intend to switch to an enterprise DB once the need arises. Currently the need is to scale the part of the back-end that does processing, not the database.
3) Does this approach seem scalable?
I'm not sure I really understand your question. Generally, in production applications the database layer is separate from the application layer. I can't quite tell if this pertains to you, but this talk is definitely worth a watch: http://vimeo.com/33263672. It discusses using a Redis layer between the Rails and DB layers to facilitate queuing and create a zero-downtime environment. It seems like a better solution than using a second Rails stack. I think it should look something like this:
|ELB| --- Web Servers --- |ELB| --- Application Servers --- |RRDNS| --- Redis Servers --- PostgreSQL Servers
If I'm understanding your meaning, that is. If not, that video is still worth a watch :)
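The Resque workers mentioned in the question fit this queue-centric layout naturally: front-end servers only enqueue jobs into the shared Redis, and any number of back-end servers run workers that pull from it. A sketch with assumed job and queue names (not from the question):

```ruby
# A Resque job: a plain class with a queue name and a class-level
# perform method. Whichever back-end server's worker pops this job
# from Redis does the heavy processing.
class ProcessUpload
  @queue = :processing

  def self.perform(upload_id)
    # Heavy work (image resizing, imports, etc.) happens here, on a
    # back-end server, never in the front-end request cycle.
  end
end

# In a front-end controller, enqueuing is the only work done inline
# (requires the resque gem and a shared Redis):
#
#   Resque.enqueue(ProcessUpload, upload.id)
```

Adding back-end capacity then means adding servers that run `rake resque:work`, with no change to the front-end or the database layer.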

Programmatically Stopping a Windows Service from a Web App

I have a web application running in a GlassFish 3.1 service on Windows 7 and Server 2008. I would like this application to be able to control a number of other services on the same machine; in particular, I want it to be able to start and stop Tomcat services.
Essentially, the application acts as a controller for other applications I deploy on the machine, so I can set whatever security is necessary on the controlled applications.
I notice that .NET has a ServiceController class, so I'm assuming programmatic control is possible, but I would very strongly prefer a pure Java solution if possible.
All the searches I've done turn up a lot of information about running Java as a service and not much on controlling services from Java. Is what I'm trying to do possible? What security hoops will I need to jump through?
You can do Runtime.exec("net stop tomcat6"). Of course, tomcat6 can be replaced with any other service name; note that the process running the command needs sufficient privileges to stop services.
Also check out this tutorial:
http://www.java-samples.com/showtutorial.php?tutorialid=8