how should microservices talk to each other in docker - ruby-on-rails

I have a Rails app which I moved to docker. The process forced me to split the app into 2 microservices: the main app and an address verification microservice. I encapsulated the address verification microservice into another Rails app which my main app calls. It uses rest-client and it blocks until it receives a response.
Requests used to be processed in 300ms. Now they take 1.3s. Looking at the New Relic data, the bulk of the time is spent in the main Rails app calling the address verification Rails app. Is there a recommended way for microservices to communicate between containers? I guess my question is Ruby/Rails-specific. Should I look into RabbitMQ? The problem is that I need a verified address very early in the flow, so I'm not sure how much time an asynchronous request to the address verification microservice will buy me.
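For context, the call is just a plain blocking rest-client request, roughly like this sketch (the URL, payload, and timeout values are illustrative, not my real config):

    require 'rest-client'
    require 'json'

    # Blocking call from the main app to the address verification service.
    # Hostname, path and timeouts are placeholders.
    def verify_address(params)
      response = RestClient::Request.execute(
        method:       :post,
        url:          'http://address-verifier:3001/verifications',
        payload:      params.to_json,
        headers:      { content_type: :json },
        open_timeout: 2,  # seconds to establish the connection
        read_timeout: 5   # seconds to wait for the response body
      )
      JSON.parse(response.body)
    end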

It turns out that the address verification microservice had a problem. I had enabled Devise on the address verification service, and the user find/update actions were taking a lot of time. I'm still not sure why they were taking so long, but as soon as I disabled them, I went back to decent numbers. I'll need to find out what the hell was happening with Devise. It's still not what I had with internal calls, but Docker & microservices are not that terrible.
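For anyone who hits the same thing, the fix amounted to not running Devise's authentication (and its per-request user find/update) on the internal endpoint. A sketch of the idea; the controller, filter name, and service object here are placeholders, not my actual code:

    # In the address verification service. Skipping Devise's filter
    # avoids the per-request user lookup/update that ate most of the time.
    class VerificationsController < ApplicationController
      skip_before_action :authenticate_user!  # filter name depends on your Devise scope

      def create
        # AddressVerifier is a stand-in for whatever does the real work.
        render json: AddressVerifier.verify(verification_params)
      end

      private

      def verification_params
        params.require(:verification).permit(:street, :city, :zip)
      end
    end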

Related

Communication between two microservices in JHipster using JWT

I'm building a small microservice-based webapp using JHipster with JWT authorization. The Architecture is simple, one gateway and two services with repositories. The problem that I had for the last few hours is the communication between the two backend-services.
At first, I tried to find a token on the services themselves, but couldn't find one. If I just missed it in all the docs (quite overwhelming when beginning with the full stack :P), I would be happy to revert my changes and use the predefined token.
My second approach was to have each service authenticate itself with the gateway at PostConstruct and keep the token in memory, using it for each API call. It works without a problem, but I find it hard to believe that this functionality isn't already built into JHipster.
So my question is whether my approach is the usual one. If neither approach is right and there are best practices for this, I'm also interested in them.
It depends on the use case.
For user requests, a common approach is for the calling service to forward the token it received to the other service, without going through the gateway, using @AuthorizedFeignClient.
For background tasks like scheduled jobs, your approach works, or you could issue long-lived tokens, as long as they have limited permissions through roles. This way you don't have to go through the gateway.
Keycloak's offline-token approach could also inspire you.
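Conceptually, @AuthorizedFeignClient just re-attaches the incoming JWT to the outgoing call. A minimal, language-agnostic sketch of that pass-through (written in Ruby only to match the rest of this page; the hostname and path are placeholders):

    require 'net/http'
    require 'uri'

    # Forward the bearer token from the incoming request to a downstream
    # service, instead of minting a new one or going through the gateway.
    def call_downstream(request)
      uri = URI('http://other-service:8081/api/items')
      req = Net::HTTP::Get.new(uri)
      # Same Authorization header the gateway attached to the user's request:
      req['Authorization'] = request.headers['Authorization']
      Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
    end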

How to use Nodejs and Ruby on Rails together?

I really like Ruby on Rails, but I have also developed in Node.js. Currently I'm making a web app with chat functionality that could have 30 people in it; for that part I want to use Node.js.
I have never done this, and I'm confused about how traffic is divided between the apps. How is state shared between them? For example, how would I share the user session? Will I have to hit the database for every request?
My first recommendation is to not split a web app between two separate server platforms. It overly complicates the project and isn't necessary at all.
That being said, if it must be done, you could use one of the platforms as the 'main' one and the other for API endpoints listening on localhost:some-port-number. That way, from the main platform (Rails, let's say), you could request data from the Node.js API by hitting whatever local IP address and port Node is running on.
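For instance, assuming Node listens on port 3001 on the same box, the Rails side would just make a local HTTP request with the standard library (the port and path here are made up):

    require 'net/http'
    require 'json'

    # Rails (the 'main' platform) fetching chat data from the local Node.js API.
    uri      = URI('http://127.0.0.1:3001/chats/42/messages')
    response = Net::HTTP.get_response(uri)
    messages = JSON.parse(response.body) if response.is_a?(Net::HTTPSuccess)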
Again, I recommend against this. But that's one solution if it must be done.

Failover IP when server on DNS supplied IP fails iOS

My iOS app uses a single hard-coded URL, api.xyz.com, to find our REST service. At the moment there are just two servers running this service, and we use Amazon Route 53 DNS. But I've found that the DNS TTL of an hour (or more) is too long in case one of our servers fails; I don't want to leave users in the dark that long.
The alternative would be to implement a failover mechanism in the app. To be honest, I don't like the idea of pulling this low-level DNS-related logic into the app, but I don't see another solution at the moment.
So my question is: How do I implement such a failover mechanism on iOS? I'm using AFNetworking for my REST API.
Or are there better alternatives on the server side? At the moment the servers are individually rented ones, so no Amazon, Google, ... cloud services.

Communication between Rails apps

I have built two rails apps that need to communicate and send files between each other. For example one rails app would send a request to view a table in the other apps' database. The other app would then render json of that table and send it back. I would also like one app to send a text file stored in its public directory to the other app's public directory.
I have never done anything like this so I don't even know where to begin. Any help would be appreciated. Thanks!
Your requirement is common to almost all web apps, irrespective of Rails; most modern web apps need to communicate with each other. But there is one principle you need to get hold of:
Web apps should not directly access each other's internal data (such as tables), even if they are built with the same language (in this case Rails) by the same developer.
That is where web services come into play. Expose your data through a web service so that not only your Rails application but any app that knows how to consume a web service can benefit from it.
Coming back to your question with Rails: Rails supports REST web services out of the box, so do some googling about REST web services with Rails.
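As a rough sketch of what exposing a table through a REST/JSON endpoint looks like (the model and route names are only examples):

    # config/routes.rb
    resources :products, only: [:index, :show], defaults: { format: :json }

    # app/controllers/products_controller.rb
    class ProductsController < ApplicationController
      def index
        render json: Product.all
      end

      def show
        render json: Product.find(params[:id])
      end
    end

The other app then consumes these endpoints over HTTP instead of reading the tables directly.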
HTH
As a starting point, look at ActiveResource; there's a Railscast and the docs.
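The consuming app then treats the other app's records like models. A minimal sketch; the site URL and resource name are examples:

    # In the consuming app: an ActiveResource proxy for the other
    # app's /products endpoint.
    class Product < ActiveResource::Base
      self.site = 'http://localhost:3001'
    end

    product = Product.find(1)   # GET http://localhost:3001/products/1.json
    product.price = 9.99
    product.save                # PUT http://localhost:3001/products/1.json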
Message queuing systems such as RabbitMQ may be used to communicate things internally between different apps such as a "mailer" app and a main "hub" application.
Alternatively, you can use a shared connection to something like Redis: stick things onto a "queue" in one app and read them off for processing in the other.
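With the redis gem, that queue can be as simple as a shared list that one app pushes onto and the other blocks on. A sketch; the queue name and payload are examples:

    require 'redis'
    require 'json'

    redis = Redis.new(url: 'redis://localhost:6379')

    # App A: enqueue a job.
    redis.lpush('mailer_jobs', { user_id: 1, template: 'welcome' }.to_json)

    # App B: block until a job arrives, then process it.
    _queue, raw = redis.brpop('mailer_jobs')
    job = JSON.parse(raw)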
In recent Rails versions it is rather easy to develop API-only applications. In Rails core master there was briefly even a special application type for these apps (until it got yanked again), but it is still available as a plugin and will probably become part of Rails core again one day. See http://blog.wyeworks.com/2012/4/20/rails-for-api-applications-rails-api-released for more information.
To actually develop and maintain the API of the backend service, and to make sure both backend and frontend have the same understanding of the resources, you can use ROAR, which is a great way to build great APIs.
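A minimal sketch of a shared ROAR representer, so both apps agree on the shape of a resource (the property names are examples):

    require 'roar/json'

    # Shared between backend and frontend so both render and parse
    # the same representation.
    module ProductRepresenter
      include Roar::JSON
      property :name
      property :price
    end

    json    = product.extend(ProductRepresenter).to_json              # backend renders
    product = Product.new.extend(ProductRepresenter).from_json(json)  # frontend parses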
Generally, you should fully define your backend application with an API. Trying to be clever and to skip some of the design steps will only bring you headaches in the long run...
Check out Morpheus. It lets you create RESTful services and use familiar ActiveRecord syntax in the client.

Receiving REST response on local machine

I use a web service to convert files. The service returns the converted file as an HTTP POST, along with identifier data. My app receives the response, updates its database and saves the file to the appropriate location.
At least that's the idea, but how do I develop and test this on a local machine? Since my machine isn't publicly facing, I can't give the service a URL it can post back to. What's the best way to handle this? I want to keep the process as clean as possible, and the only ideas I've come up with seem excessively kludgy.
Given how common REST API development is, I assume there are well-established best practices for this. Any help appreciated.
The solution will change a bit depending on which server you're using.
But the generally accepted method is to use the loopback address, 127.0.0.1, in place of a fully qualified domain name. Your server may need to be reconfigured to listen on this IP address, but that's usually a trivial fix.
example: http://127.0.0.1/path/to/resource.html
You can use curl, or even your browser if your application has a proper frontend. There are many other similar tools for testing this from the command line, and each language has libraries for establishing HTTP connections and transferring data over them.
If your machine isn't accessible to the service you are using, then your only real option is to build a local stand-in for the service that will exercise your API. A rake task that sends the POST with the file and the identifier data would be nice: you could start your Rails app locally, then kick off the task with some params to run your application through its paces.
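A sketch of such a task using rest-client; the endpoint, field names, and fixture path are assumptions about your app:

    # lib/tasks/fake_converter.rake
    require 'rest-client'

    namespace :fake_converter do
      desc 'Simulate the conversion service POSTing a converted file back'
      task :callback, [:id] do |_t, args|
        # rest-client sends a multipart request automatically when the
        # payload contains a File. URL and field names are made up.
        RestClient.post(
          'http://127.0.0.1:3000/conversions/callback',
          id:   args[:id] || '42',
          file: File.new('spec/fixtures/converted.pdf', 'rb')
        )
      end
    end

Run it with something like rake fake_converter:callback[42] while your app is running locally.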
This is the case any time you are trying to develop a system that can't connect to a required resource during development. You need to build a development harness of sorts so that you can exercise all the different types of actions the external service will call on your application.
This certainly won't be easy or straightforward, especially if your interface to this external service is complicated. Be sure to have your test cases send bad POSTs as well, so that you handle both what you expect and what you don't.
Also make sure that you do some integration testing with the actual service before you "go-live" with the application. Hopefully you can deploy to an external server that the web service will be able to access in order to test. Amazon's EC2 hosting environment would let you set up a server very quickly, run your tests, and then shut down without much cost at all.
You have 2 options:
Set up dynamic DNS and expose your app to the outside world. This only works if you have full control over your network.
Use something like webrat to fake the posts to your app. Since it's only 1 request, this seems pretty trivial.
Considering that you should be writing automated tests for this, I'd go with #2. I used to do #1 when developing Facebook apps, since there were far too many requests to mock them all out with webrat.
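webrat is showing its age; the same fake POST is straightforward with Rails' built-in integration-test helpers. A sketch, assuming a callback route and fixture file like these:

    require 'test_helper'

    class ConverterCallbackTest < ActionDispatch::IntegrationTest
      test 'handles the converted-file callback' do
        post '/conversions/callback', params: {
          id:   42,
          file: fixture_file_upload('converted.pdf', 'application/pdf')
        }
        assert_response :success
      end
    end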
If your question is about testing, why don't you use mocks to fake the server? It's more elegant than using Webrat, and easier to deploy (you only have one app instead of an app and a test environment).
More info about mocks: http://blog.floehopper.org/presentations/lrug-mock-objects-2007-07-09/
You've got some info about mocks with RSpec here: http://rspec.info/documentation/mocks/
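A minimal sketch with rspec-mocks, assuming your app wraps the converter behind some client object (ConverterClient and FileImporter are hypothetical names):

    # The double stands in for the real conversion service in the test.
    RSpec.describe FileImporter do
      it 'stores the converted file it receives' do
        client = instance_double('ConverterClient')
        allow(client).to receive(:convert)
          .with('in.doc')
          .and_return('converted bytes')

        # Assuming FileImporter#import calls client.convert and
        # returns what the service produced.
        importer = FileImporter.new(client)
        expect(importer.import('in.doc')).to eq('converted bytes')
      end
    end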
