Specifying IP for Some Domain Name - ruby-on-rails

I am calling a number of APIs of a web service hosted on several servers. Requests are routed to these servers at random through a load balancer.
All of these servers reside on my local network, and I want one particular API call to go to one particular server.
Since I don't want other requests to be affected, I am unwilling to add a hosts entry on the server hosting my app.
Can this be achieved through code?
I am coding in Ruby and using the net-http gem to make the API calls.
Any implementation using the curb gem is also welcome.
Thanks
-Azitabh

I think the best way to achieve what you want is to use a proxy with DNS spoofing.
Charles Proxy does that, but there may be other tools as well.

One way (along the same lines as suggested by systho) I can think of is to make the API call directly using the IP and to create a vhost on the target server which listens directly on a separate port.
This will work for me purely because I have access to the servers hosting the web service.
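If the vhost routing is name-based rather than port-based, a variation on the same idea is to open the connection to the server's IP but still send the original hostname in the Host header. A minimal net/http sketch; the hostname, IP and path below are placeholders for your setup:

    require 'net/http'

    # Placeholders: the service's public hostname and the one backend
    # server this particular request should hit.
    service_host = 'service.internal.example'
    backend_ip   = '10.0.0.42'

    # Connect to the chosen server directly by IP...
    http = Net::HTTP.new(backend_ip, 80)

    request = Net::HTTP::Get.new('/api/v1/status')
    # ...but present the real hostname so name-based vhosts still match.
    request['Host'] = service_host

    response = http.request(request)
    puts "#{response.code} #{response.body}"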

Related

API Gateway to my Elastic Beanstalk Docker-deployed app

My backend is a simple dockerized Node.js Express app deployed on Elastic Beanstalk. It is exposed on port 80 and located somewhere like
mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com
I can call my APIs on the backend
mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com/hello
mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com/postSomeDataToMe
and they work! Yay.
The URL is not very user friendly, so I was hoping to set up API Gateway to let me simply forward API requests from
api.myapp.com/apiFamily/ to mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com
so I can call api.myapp.com/apiFamily/hello or api.myapp.com/apiFamily/postMeSomeData
Unfortunately, I can't figure out (i) whether I can do this and (ii) how to actually do it.
Can anybody point me to a resource that explains clearly how to do this?
Thanks
Yes, you can do this. For it to happen you need two things:
a custom domain that you own and control, e.g. myapp.com
a valid, public SSL certificate issued for that domain
If you don't have them and want to stay within the AWS ecosystem, you can use Route 53 to buy and manage your custom domain. For SSL you can use AWS ACM, which will provide you with a free SSL certificate for the domain.
The AWS instructions for setting it all up are here:
Setting up custom domain names for REST APIs
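The console walks you through most of it, but the same setup can also be scripted. A rough sketch with the aws-sdk-apigateway gem; the domain, certificate ARN, API id and stage below are placeholders, so treat this as an outline rather than a drop-in script:

    require 'aws-sdk-apigateway'

    client = Aws::APIGateway::Client.new(region: 'us-east-1')

    # Register the custom domain with API Gateway (placeholder values).
    domain = client.create_domain_name(
      domain_name: 'api.myapp.com',
      regional_certificate_arn: 'arn:aws:acm:us-east-1:123456789012:certificate/abc-123',
      endpoint_configuration: { types: ['REGIONAL'] }
    )

    # Map api.myapp.com/apiFamily/* to a deployed stage of the REST API.
    client.create_base_path_mapping(
      domain_name: 'api.myapp.com',
      base_path: 'apiFamily',
      rest_api_id: 'a1b2c3d4e5', # your API's id
      stage: 'prod'
    )

    # Finally, point a Route 53 alias (or CNAME) for api.myapp.com at the
    # hostname API Gateway hands back:
    puts domain.regional_domain_name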

Elasticsearch: Securing the connection

I am (desperately) new to Elasticsearch (7.9.0) and currently have a cluster with two nodes running.
After a lot of effort, it is performing as I would like it to.
It runs on Docker and also has an nginx in front of it to route the traffic, since Elasticsearch is accessed directly from my website (Angular 10).
Elasticsearch is also used from my Laravel backend directly through the Docker container name, so that part is secure (I guess).
My problem now is that I cannot find or understand a way to secure HTTP access from outside Docker (e.g. from the normal website).
Going via Laravel is an option, but it is too slow for my purpose.
Is there a way I can securely have HTTP access to Elasticsearch from the web?
Also, is there a way I can restrict the actions to read-only ones?
If you need more info to help out, please let me know, as I am not knowledgeable about what is important here and what is not.
Thanks
Angular is a front-end framework and runs in your user's web browser. If Angular can somehow reach your Elasticsearch instance, everyone can, no matter what. You can try to obscure it as much as you want, but if Elasticsearch is directly exposed, it will be reachable.
So you either have to accept this fact, or go the slow way and proxy the requests through Laravel, so it can verify that the information requested is actually available to the user performing the request.
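If you do decide to expose it, a middle ground is to make the nginx you already have in front of Elasticsearch allow only read-style requests and require credentials. A minimal sketch of such a location block; the path, container name and htpasswd file are assumptions for illustration:

    # Hypothetical nginx vhost fragment; adjust names/paths to your setup.
    location /es/ {
        # Reject everything except reads. Note that Elasticsearch also
        # accepts POST for searches, so GET-only blocks some legitimate
        # queries.
        limit_except GET HEAD {
            deny all;
        }

        # Require credentials even for reads.
        auth_basic           "Elasticsearch";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # "elasticsearch" is the container name on the shared Docker network.
        proxy_pass http://elasticsearch:9200/;
    }

Keep in mind this only narrows the exposure: anything those credentials can read, every holder of the credentials (i.e. every browser running your Angular app) can read too.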

Is it possible to have multiple Rails projects on the same port?

I want to add a new project beside my current Rails app without starting a new server for it.
I think it is impossible to have two Rails apps on one port, but my boss wants it.
Is it possible at all?
Yes, it's possible if you configure a web server (nginx, etc.) as a reverse proxy that listens on the port you want and forwards traffic to the correct app based on the subdomain.
Yes and no. You can't run two web servers, e.g. Puma, on the same port. That won't work. But you can run one web server that serves two Rails apps, routing incoming requests based on either their subdomain (app1.example.com) or their path (example.com/app1).
A common setup is to use Apache/nginx as the web server in combination with Passenger as the application server. This question asks something similar and points to Passenger's documentation on how to serve apps from subdomains: How to deploy multiple rails app on a single IP (Apache + Passenger)?
The configuration depends heavily on your setup, so I can't give you a more detailed answer. But searching for "multiple apps" together with the combination of your web and application server should yield enough results and tutorials for you to solve your problem.
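For the nginx route, the shape of the configuration is roughly the following; the hostnames and the ports the two Puma instances listen on are placeholders:

    # Hypothetical /etc/nginx/conf.d/apps.conf: one port, two Rails apps.
    server {
        listen 80;
        server_name app1.example.com;

        location / {
            proxy_pass http://127.0.0.1:3000;  # first app's Puma
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

    server {
        listen 80;
        server_name app2.example.com;

        location / {
            proxy_pass http://127.0.0.1:3001;  # second app's Puma
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }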

How to host a Rails application as an API that is only accessible locally?

I am starting to create a RESTful API that is built on Ruby on Rails. I would like my other applications (which are hosted on the same server) to be able to use this API. I had the idea that if the API is only available locally, I wouldn't have to deal with authentication logic since it wouldn't be publicly accessible. I have never done this sort of thing before, so I don't even know if what I am asking for is possible (or if this is even a good idea).
How can I host this application so that my REST API is only locally accessible?
You can do one of the following:
Set the web server to listen on the loopback interface only (see the sketch after this list)
If you need to give access to the local network, configure your firewall to forward ports accordingly
Set the web server to listen only on the private network interface (not the public one)
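For the loopback option with Puma (Rails' default application server), the bind address is a one-line change. A minimal sketch, assuming the standard config/puma.rb layout:

    # config/puma.rb -- listen on loopback only, so the API is unreachable
    # from other machines; apps on the same server can still call it.
    bind 'tcp://127.0.0.1:3000'

    # Or skip TCP entirely and use a Unix socket for local-only access:
    # bind 'unix:///var/run/myapi.sock'

In development the equivalent is rails server -b 127.0.0.1.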

Where does a connection manager fit in rails?

I'm writing a rails application that acts as a proxy, thus hereby referred to as the proxy. The idea is that the user should be able to manage his servers through a web UI, that's always up and running even if his servers are down.
To accomplish this, the proxy needs to keep an open connections to the servers at all times. For this I've created a background process using daemonz that accept incomming connections from servers and spawns threads that are constantly listening on the sockets.
Now I have two problems: I need to be able to send messages on these sockets from my rails controllers and I need to know which socket to use, to reach the right server. I was planning to use a ConnectionManager class to take care of this for me, but I don't know where such a class fits into rails structure and I don't know how to make the object and the sockets available to both processes.
That makes two questions:
Where does the connection manager belong?
How do I share the connection manager and the sockets between the processes?
If you only know the answer to the first question, please go ahead and answer. It's possible that I should create a separate post for my second question.
This does not seem like a useful thing to build in Rails/Ruby.
What might be more useful is a Rails admin application that configures an existing load balancer/proxy, like haproxy, under the covers.
You could keep a mapping of servers/ports/configuration in your Rails app, project that into an haproxy config, and restart the load balancer. A great place to start would be the haproxy-tools gem, which allows you to parse/generate an haproxy config file.
It doesn't make sense to re-write your own load balancer, and Ruby/Rails would be a poor technology stack even if you were going to do that.
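To make the "project the mapping into an haproxy config" step concrete, here is a minimal sketch using plain ERB rather than haproxy-tools' own API (which I haven't verified, so check its docs); the server list and file paths are placeholders:

    require 'erb'

    # Placeholder mapping; in the Rails app this would come from the database.
    servers = [
      { name: 'web1', host: '10.0.0.11', port: 8080 },
      { name: 'web2', host: '10.0.0.12', port: 8080 }
    ]

    template = ERB.new(<<~HAPROXY, trim_mode: '-')
      frontend http_in
          bind *:80
          default_backend app_servers

      backend app_servers
          balance roundrobin
      <% servers.each do |s| -%>
          server <%= s[:name] %> <%= s[:host] %>:<%= s[:port] %> check
      <% end -%>
    HAPROXY

    File.write('/etc/haproxy/haproxy.cfg', template.result(binding))
    system('systemctl reload haproxy') # reload picks up the new config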
