I am having trouble getting the reverse proxy right: I keep getting a "504 Gateway Timeout" whenever I call a service through the reverse proxy.
I have followed Microsoft's example to set up the cluster.
IMHO the cluster setup is correct; the only differences are that I've specified port 80 for the proxy and that I did not use SSL for the test environment.
I am trying this out in a test environment at the moment, but the production environment runs the same services, just without the reverse proxy, and it is just fine. Also, I exposed an endpoint for one of the services in the test environment, tried calling it without the reverse proxy, and it worked.
I've read that this could be caused by containers, but I am using Windows Server 2012 R2 Datacenter, which, as far as I am aware, does not use Windows NAT containers. I've also read that it could be caused by a 404 error (case #2 in the example doc), where the proxy keeps retrying and simply times out.
Here is a summary of the details that might be important:
Service Fabric version: 5.5.219.0
OS: Windows
SKU: 2012-R2-Datacenter
Services are using WebListener
All ports are allowed
1 NodeType (stateless)
Services created with ASP.NET Core Web API template
VS 2015 Enterprise
Service endpoints are configured as follows:
<Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" />
All services and cluster are healthy.
I have found the cause of this timeout. It was just me not getting the required change in the request URL right.
All of my services contain MVC controllers named after the service. So whenever I called them without the reverse proxy, my request URL would be something like http://mycluster.westeurope.cloudapp.azure.com:8280/Notifications/TestMethod
and that was enough, because the controller could be found via its unique port.
The way I was trying to call it through the reverse proxy was
http://mycluster.westeurope.cloudapp.azure.com/SomeName.API.Services/Notifications/TestMethod
That is not enough, because there 'Notifications' is parsed as the name of the service, not the controller, so I was calling the service and an action without specifying a controller.
The correct way to call it is to include the name twice, since I named my controllers the same as the services (I might change that).
Here is the correct URL I have to use:
http://mycluster.westeurope.cloudapp.azure.com/SomeName.API.Services/Notifications/Notifications/TestMethod
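For anyone else hitting this, the general URI format the reverse proxy expects (per the Service Fabric reverse proxy documentation; partition query-string options omitted) is roughly:

```
http(s)://<cluster FQDN>:<port>/<ApplicationName>/<ServiceName>/<suffix path>
```

Everything after the service name is handed to the service as the suffix path, so with the default {controller}/{action} MVC route the full URL becomes /App/Service/Controller/Action, which is why the name shows up twice in my case.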
I have figured it out by looking up the reverse proxy code sample.
I am (desperately) new to Elasticsearch (7.9.0) and I currently have a cluster with two nodes running.
After a lot of effort it is performing as I would like it to.
It is running on Docker and also has an nginx in front of it to route traffic to it, since it is accessed directly from my website (Angular 10).
Elasticsearch is also used from my Laravel backend directly through the Docker container name, so that part is secure (I guess).
My problem now is that I cannot find or understand a way to secure the HTTP access from outside Docker (e.g. from the normal website).
Going via Laravel is an option, but this is too slow for my purpose.
Is there a way I can securely have HTTP access to Elasticsearch from the web?
Also, is there a way I can restrict the actions to read-only actions?
If you need more info to help out, please let me know, as I am not knowledgeable about what is important here and what is not.
Thanks
Angular is a front-end framework and runs in your user's web browser. If Angular can somehow reach your Elasticsearch instance, everyone can, no matter what. You can try to obscure it as much as you want, but if Elasticsearch is directly exposed, it will be reachable.
So you either have to accept this fact, or go the slow way and proxy the requests through Laravel, so it can verify that the information requested is actually available to the user performing the request.
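As for your second question, about read-only access: if you do expose Elasticsearch, the most you can do at the proxy level is whitelist a few read-only endpoints in nginx and deny everything else. A minimal sketch, assuming an upstream container named elasticsearch and a single index called products:

```
server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key go here

    # allow only searches against one index; _search is read-only
    # even via POST (the request body is just the query DSL)
    location /products/_search {
        limit_except GET POST { deny all; }
        proxy_pass http://elasticsearch:9200;
    }

    # block every other Elasticsearch API (writes, _cluster, _cat, ...)
    location / {
        return 403;
    }
}
```

This still does nothing against scraping or expensive queries, which is why proxying through your backend remains the safer option.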
I'm currently trying to create a tracing tool for fun (one that supports gRPC tracing) and I am confused as to whether I am thinking about this architecture properly. A tracing tool keeps track of the entire workflow/journey of a request (from the moment a user clicks the button, to when the request goes to the API gateway, between microservices, and back).
Let's say the application is a bookstore, and it is broken up into 2 microservices, maybe accounts and books. Let's say there is a user interface, and clicking a button allows a user to favorite a book. I'm only using 2 microservices to keep this example simple.
**Different parts of the fake/mock-up application**
UI ->
nginx -> I wanted to use this as an API Gateway.
microservice 1 -> (Contains data for all Users of a bookstore)
microservice 2 -> (Contains data for all the books)
**So my goal is to figure out a way to trace that request.** We can imagine the request goes to nginx first.
Concern #1: When the request goes to nginx, it is HTTP. Cool, but when the request is sent on to a microservice, it is a gRPC call (i.e. over HTTP/2). Can nginx accept an HTTP request and then send that request over HTTP/2? Not sure if I'm wording this correctly or not. I know nginx Plus supports HTTP/2. I also know that gRPC has grpc-gateway too.
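From what I've read so far, stock nginx (1.13.10 and later, not only nginx Plus) has a grpc_pass directive that proxies gRPC over HTTP/2 natively, so I imagine something like the sketch below (the service names and ports are made up). Translating a plain JSON-over-HTTP/1.1 request into a gRPC call would be the separate job of grpc-gateway, as far as I understand.

```
server {
    listen 80 http2;

    # gRPC request paths look like /package.Service/Method,
    # so each microservice can be routed by its path prefix
    location /account.AccountService/ {
        grpc_pass grpc://account-service:50051;
    }
    location /books.BookService/ {
        grpc_pass grpc://book-service:50052;
    }
}
```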
Concern #2: Containerization. Do I have to containerize both microservices individually, or should everything run inside one container? And is it simple to wire nginx up to those containers?
Concern #3: When tracing gRPC requests (i.e. measuring how long it takes for a request to be fulfilled), I'm considering using a middleware logger or a tracing API (OpenTracing, Jaeger, etc.) to do this. How else would I figure out how long gRPC requests take?
I was wondering if it is possible to address these concerns, whether my thought process is correct, and whether this architecture is feasible.
Most solutions in the industry are implemented on top of a container orchestration solution (Kubernetes, Docker Swarm, etc.).
It is usually not a good idea to "containerize" and manage a reverse proxy yourself.
The reverse proxy should be aware of the status of all containers (by hooking into the orchestrator) and dynamically update its configuration when a container is created, crashes, or is relocated (because a machine goes out of service).
Kubernetes handles gRPC via service meshes; take a look at Kubernetes service meshes.
If you decide to use Traefik and Docker Swarm, check out Traefik's h2c support.
In conclusion, consider more modern alternatives to nginx when you want to load balance gRPC.
I have two apps I want to have "fully managed" by Cloud Run. One is a pure Vue.js SPA and the other is its backend server, which talks to a MySQL database and also fetches some other API endpoints.
Now I have deployed both apps, but I am totally unaware of how I can give the frontend app access to the backend app. They should both run on the same domain to keep the frontend from running into CORS issues.
Current URL of the frontend app: https://myapp-xl23p3zuiq-ew.a.run.app
So I'd love to have the server accessible by: https://myapp-xl23p3zuiq-ew.a.run.app/api
Is this possible to achieve with Cloud Run?
I was having the same issue. The general idea one usually has is to use path mapping: map / to your client and /server to your backend. After googling for a while I found this:
https://cloud.google.com/run/docs/mapping-custom-domains
Base path mapping: not supported. The term base path refers to the URL path name that is after the domain name. For example, users is the base path of example.com/users. Cloud Run only allows you to map a domain to /, not to a specific base path. So any path routing has to be handled by using a router inside the service's container or by using Firebase Hosting.
Option 1:
I ended up creating an "all in one" Docker image with an nginx as reverse proxy plus the client (some static files) and the server (in my case a Python application powered by uWSGI).
If you are looking for inspiration, you can check out the public repository here: https://gitlab.com/psono/psono-combo
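A rough sketch of what the nginx part of such an image can look like (the paths and the backend port here are assumptions; Cloud Run expects the container to listen on $PORT, which defaults to 8080):

```
server {
    listen 8080;

    # serve the static client build
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # hand API calls to the backend process in the same container
    location /api/ {
        proxy_pass http://127.0.0.1:8000/;
    }
}
```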
Option 2:
An alternative would be to host your client on client.example.com and your server on server.example.com, and then create a third Cloud Run instance with a reverse proxy under example.com.
All requests would be proxied to the client and the server. Your users would only interact with example.com, so CORS won't be an issue.
Option 3:
Configure CORS, so people accessing example.com can also connect to server.example.com.
Currently this is not possible in Cloud Run, as already said in the comments on your question.
You could check whether there is a Feature Request for this functionality on Buganizer (the Google Issue Tracker); currently there seems to be none. If that is indeed the case, you can create a new Feature Request by changing the request type from Bug to Feature Request, and as Google adds it to their road map you will be informed.
Hope this helped.
Preface
I am currently trying to learn how microservices work and how to implement container replication and API gateways. I've hit a block, though.
My Application
I have three main services for my application.
API Gateway
Crawler Manager
User
I will be focusing on the API Gateway and Crawler Manager services for this question.
API Gateway
This is a Docker container running a Go server. The communication is all done with GraphQL.
I am using an API Gateway because I expect to have different services in my application each having their own specialized API. This is to unify everything.
All it does is proxy requests to their appropriate service and return a response back to the client.
Crawler Manager
This is another Docker container running a Go server. The communication is done with GraphQL.
More or less, this behaves like another API gateway. Let me explain.
This service expects the client to send a request like this:
{
  # In production 'url' will be encoded in base64
  example(url: "https://apple.example/") {
    test
  }
}
The url can only link to one of these three sites:
https://apple.example/
https://peach.example/
https://mango.example/
Any other site is strictly prohibited.
Once the Crawler Manager service receives a request, and the link is one of those three, it decides which downstream service should fulfill the request. So in that way it behaves much like another API gateway, but a specialized one.
Each URL domain gets its own dedicated service to process it. Why? Because the sites vary quite a bit in markup, and each site needs to be crawled for information. Since their markup varies, I'd like a service for each of them, so that if one site is updated the whole Crawler Manager service doesn't go down.
As far as querying goes, each site will return a response formatted identically to the other sites.
Visual Outline
Problem
Now that we have a bit of an idea of how my application works, I want to discuss my actual issues.
Is having a sort of secondary API gateway standard and good practice? Is there a better way?
How can I replicate this system and have multiple Crawler Manager service family instances?
I'm really confused about how I'd actually create this setup. I looked at clusters in Docker Swarm / Kubernetes, but with the way I have it set up it seems like I'd need to make clusters of clusters. That makes me question my design overall. Maybe I shouldn't think about keeping them so structured?
At a very generic level, if service A calls service B that has multiple replicas B1, B2, B3, ... then it needs to know how to call them. The two basic options are to have some sort of service registry that can return all of the replicas, and then pick one, or to put a load balancer in front of the second service and just directly reach that. Usually setting up the load balancer is a little bit easier: the service call can be a plain HTTP (GraphQL) call, and in a development environment you can just omit the load balancer and directly have one service call the other.
                                 /-> service-1-a
Crawler Manager --> Service 1 LB --> service-1-b
                                 \-> service-1-c
If you're willing to commit to Kubernetes, it essentially has built-in support for this pattern. A Deployment is some number of replicas of identical pods (containers), so it would manage the service-1-a, -b, -c in my diagram. A Service provides the load balancer (its default ClusterIP type provides a load balancer accessible only within the cluster) and also a DNS name. You'd configure your crawler-manager pods with perhaps an environment variable SERVICE_1_URL=http://service-1.default.svc.cluster.local/graphql to connect everything together.
(In your original diagram, each "box" that has multiple replicas of some service would be a Deployment, and the point at the top of the box where inbound connections are received would be a Service.)
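A minimal sketch of those two objects (all names and the image are assumptions):

```
# three identical replicas of the site-1 crawler
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: service-1
  template:
    metadata:
      labels:
        app: service-1
    spec:
      containers:
        - name: service-1
          image: example/service-1:latest   # assumed image name
          ports:
            - containerPort: 80
---
# in-cluster load balancer providing the stable DNS name
# service-1.default.svc.cluster.local used above
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  selector:
    app: service-1
  ports:
    - port: 80
      targetPort: 80
```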
In plain Docker you'd have to do a bit more work to replicate this, including manually launching the replicas and load balancers.
Architecturally what you've shown seems fine. The big "if" to me is that you've designed it so that each site you're crawling potentially gets multiple independent crawling containers and a different code base. If that's really justified in your scenario, then splitting up the services this way makes sense, and having a "second routing service" isn't really a problem.
I have two separate installs of WebSphere. (Actually, one is WebSphere Application Server V6.1 with the EJB 3.0 and Web Services feature packs, and the other is WebSphere ESB Server V6.2.) However, I know that ESB is really built on top of WAS, so it has all the configuration settings that a regular WAS server has.
In my ESB server, I am trying to expose a service written as EJB 3.0 that will be deployed to the WAS 6.1 server. My question is not how to get EJB 2.1 calls to call into an EJB 3.0 bean; we've done that already. My question is how to call across physical VMs, since the WebSphere Application Server runs in its own cell/node/server, separate from the ESB server.

From what I've read in the IBM documentation, it is possible to set up a namespace binding on WAS that points to a remote EJB on another WAS instance. Thus you could use JNDI to look up a bean on one WAS instance that really resides on another. The beauty of this method is that the location of the EJB you want is abstracted to the container level, so you don't have to drag around properties files with the IP addresses and ports needed to access the bean should it change servers, etc. You just make a standard JNDI lookup to a remote EJB and you get it.
It sounds like it can be done; see the following link, and especially follow the links on EJB and indirect namespace bindings:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.express.doc/info/exp/ae/tnam_view_bindings.html
But I've been hitting my head against this for a while. It makes sense, it looks like it can be done, and the indirect namespace binding looks the most promising. But I can't get it to work quite right: my ESB server keeps complaining about not finding comp/env/ejb in the context in which I am asking for it. I am very puzzled by this one.
Just wondering if anybody has done this kind of thing before. Can you give me a concrete example of how you set this up in WAS? Any help is appreciated.
Well, I have since talked with IBM about how to do this and was surprised by their answer. They said that if you are talking EJB to EJB within the same server or server cluster, you should use EJB RMI over IIOP; with JNDI this abstracts where the bean is actually running (in a clustered environment).
If you are going from one server (or server cluster) to a different server (or server cluster), regardless of whether the target and source are in the same cell, IBM recommended using messaging or web services. They felt that was a better method of abstraction between applications, to keep them from being "tied" to each other. They did say that you could get EJBs to talk RMI via CORBA, but to do that ONLY if absolutely necessary. And of course, you would need to know the IP and port number for coming in over CORBA (times each cluster member, if in a clustered environment).
Again, this kind of surprised me, but it does make sense. Just thought I'd share these thoughts with the world, especially if you are working with WebSphere.
How to look up from Tomcat:
use the IBM JDK as the runtime for Tomcat
find the bootstrap port and use iiop in the PROVIDER_URL
I was stuck with the same problem. After trying to include all the WebSphere and IBM ORB jars, I found this article at IBM:
How to lookup an EJB and other Resources in WebSphere Application Server using a Oracle JDK client - http://www-01.ibm.com/support/docview.wss?uid=swg21382740
Basically, I used CNCtxFactory instead of WsnInitialContextFactory:
import java.util.Hashtable;
import javax.naming.Context;

// CNCtxFactory (from the Oracle JDK) instead of com.ibm.websphere.naming.WsnInitialContextFactory
Hashtable<String, String> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.cosnaming.CNCtxFactory");
env.put(Context.PROVIDER_URL, iioppath); // e.g. "corbaloc:iiop:<host>:2809"; 2809 is the default bootstrap port