ECS unable to send API requests to backend using service discovery - docker

I deployed front-end and back-end apps with ECS Fargate; both of them are up and running and I can access them from my browser. They are configured in the same VPC and subnet.
The backend has service discovery configured, and my server's DNS address was inserted into my React application.
I read in the thread cannot-connect-two-ecs-services-via-service-discovery that using axios from the browser to access my server through its service-discovery name will not work.
The error I am getting is: net::ERR_NAME_NOT_RESOLVED
How can I achieve communication between these two services with service discovery? Am I missing something?
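For context, this is what the linked thread points out: the service-discovery name is a private Cloud Map DNS record that only resolves inside the VPC, while the axios call runs in the visitor's browser, outside the VPC. A hypothetical sketch (the hostnames and URLs below are assumptions, not values from the question):

import axios from 'axios';

// Runs in the user's browser, OUTSIDE the VPC, so the private
// service-discovery name cannot be resolved there:
// axios.get('http://backend.local:8080/api/items'); // net::ERR_NAME_NOT_RESOLVED

// Point the browser at a publicly resolvable endpoint instead
// (e.g. an Application Load Balancer in front of the backend service):
axios.get('https://api.example.com/api/items');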

Related

Why can I access a service with "IP:port" but not using "domain:port"?

I am using Docker to start a service on my server. The thing is that I can access that service by entering "IP:port" in Firefox, but not by entering "domain:port". Why?
I am using a Docker service. I have HTTPS on my domain and HTTP on the Docker service; maybe that is the problem. I cannot reach domain:serviceport ("secure connection failed"), but I can reach ip:serviceport.

What is the use of the configuration file in consul/config when the Consul client registers the services?

I understand that services can be registered through Consul clients, and JSON configuration files can be loaded into consul/config to register services when the Consul server is running as a Docker container, but why should we register services in two places?
Is there an instance of the service running on both the clients and the servers? I believe you need to let Consul know about each node that is running an instance of a service.
Consul is designed for services to be registered against a Consul client agent which is running on the same host where your service is deployed. Services can be registered with an agent either through configuration files or the /v1/agent/service/register HTTP endpoint.
Each Consul client agent in the data center submits its registered service information to the Consul servers. The servers then aggregate this information to form the service catalog (https://www.consul.io/docs/architecture/anti-entropy#catalog).
Normally you would not configure the servers to register services which are running on different hosts in the environment. An exception to this is when the service exists on a host which does not have its own Consul client agent. In that scenario, the service is considered an external service and can be registered into the Consul catalog using a different registration/monitoring method. See the tutorial Register External Services with Consul Service Discovery for more info.
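As a minimal sketch of the agent-side registration described above (the service name, port, and health-check path are assumptions), a service definition that a Consul client agent loads from its configuration directory might look like this:

{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}

The equivalent runtime registration against the local agent's HTTP API:

curl --request PUT \
  --data '{"Name": "web", "Port": 8080}' \
  http://localhost:8500/v1/agent/service/register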

Service IP & Port discovery with Kubernetes for external App

I'm creating an app that will have to communicate with a Kubernetes service via REST APIs. The service runs a Docker image that listens on port 8080 and responds with a JSON body.
I noticed that when I expose a deployment via -
kubectl expose deployment myapp --target-port=8080 --type=NodePort --name=app-service
It then creates a service named app-service.
To then locally test this, I obtain the IP:port for the created service via -
minikube service app-service --url
I'm using minikube for my local development efforts. I then get a response such as http://172.17.118.68:31970/ which, when I enter it in my browser, works fine (I get the JSON responses I'm expecting).
However, it seems the IP & port for that service are different every time I start it up.
Which leads to my question - how is a mobile app supposed to find that new IP:port if it's subject to change? Is the common way to work around this to register that combination with a DNS server (such as Google Cloud's DNS system)?
Or am I missing a step here with setting up Kubernetes public services?
Which leads to my question - how is a mobile app supposed to find that new IP:port if it's subject to change?
minikube is not meant for production use; it is only meant for development purposes. You should create a real Kubernetes cluster and use a LoadBalancer-type service or an Ingress (for L7 traffic) to expose your service to the external world. Since you need to expose your backend REST API, an Ingress is a good choice.
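A rough sketch of the Ingress approach (the host name is an assumption, app-service is the service created above, and an Ingress controller such as ingress-nginx must be installed in the cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: api.example.com          # stable DNS name the mobile app calls
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service      # service created by kubectl expose
            port:
              number: 8080         # assumes the service exposes port 8080

The mobile app then always calls http://api.example.com/... and never needs to know the node IP or NodePort.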

How to set up docker with no_proxy settings

I'm setting up three docker containers on my own machine using docker compose:
One is a portal written with React.js (called portal)
One is a middleware layer with GraphQL (called gateway)
One is an auth service with node.js (called auth)
I also have a bunch of services already running behind a corporate firewall.
For the most part, gateway will request resources behind the firewall, so I have configured the docker containers to proxy requests through a squid proxy with access to the additional services. However, requests to my local auth service and other local services should not be proxied. As such, I have the following docker proxy configuration (note the noProxy settings):
~/.docker/config.json
...
"proxies": {
"default": {
"httpProxy": "http://172.30.245.96:3128",
"httpsProxy": "http://172.30.245.96:3128",
"noProxy": "auth,localhost,127.0.0.1,192.168.0.1/24"
}
}
...
With the above setup, portal requests do go directly to gateway from the browser using http://192.168.0.15/foo, but when gateway makes requests to auth using http://auth:3001/bar, they do not go directly to auth but instead go through the proxy - which I am trying to avoid.
I can see that the auth request is sent through the proxy from the squid proxy error:
<p>The following error was encountered while trying to retrieve the URL: http://auth:3001/bar</p>
How can I set up the docker containers to respect the noProxy setting when using docker service names like auth? It appears to me that the request from gateway to auth is mistakenly being proxied through 172.30.245.96:3128, causing it to not work. Thanks
Your Docker configuration seems fine, but your host doesn't understand how to resolve the name auth. Based on the IP given (192.168.x.x), I'll assume that you're attempting to reach the container service from the host. Add an entry for auth into your host's /etc/hosts (C:\Windows\System32\Drivers\etc\hosts if on Windows).
Take a look at Linked docker-compose containers making http requests for more details.
If you run into issues reaching services from within the container, check docker-compose resolve hostname in url for an example.
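For illustration, the /etc/hosts entry suggested above would look like this (the IP is an assumption; use the address the auth container is actually reachable on):

# /etc/hosts (C:\Windows\System32\Drivers\etc\hosts on Windows)
192.168.0.15   auth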

JHipster - unable to use a gateway app when deploying everything on a Docker host except the gateway itself (mixed Docker and local deployment)

I have some JHipster Spring microservice and gateway projects. I deployed all of them on a host using Docker, except the gateway, which I started on another host.
I use Keycloak for OAuth authentication.
Everything works fine when I deploy all of the microservices, databases, and gateways as Docker containers on a Docker network using docker-compose.
But it doesn't work when I deploy everything on Docker except the gateway, i.e. when the gateway resides outside the Docker-created network. The motivation is that I want my UI programmer to be able to run the gateway on his own PC and use the microservices deployed on the server host. For ease of UI development, I need to run this sole gateway using gradle bootRun -Pprod.
I used a technique called Docker macvlan networking to assign a separate IP to each container on my Docker network, so that every container on the host has its own IP address on the physical network and is visible to other hosts on the network.
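For illustration, a macvlan network along those lines can be created like this (subnet, gateway, parent interface, and image name are all assumptions):

# Create a macvlan network bridged onto the host's physical NIC
docker network create -d macvlan \
  --subnet=192.168.0.0/24 \
  --gateway=192.168.0.1 \
  -o parent=eth0 pub_net

# Containers attached to it get their own LAN-visible IP address
docker run -d --network pub_net --ip 192.168.0.100 my-microservice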
The problem is that in a normal Docker deployment (when the gateway is deployed on a Docker network on the same host) everything works fine, but in my scenario, after a successful login, every microservice returns error 401.
The microservices log this error:
o.s.s.oauth2.client.OAuth2RestTemplate : Setting request Accept header to [application/json, application/x-jackson-smile, application/cbor, application/*+json]
o.s.s.oauth2.client.OAuth2RestTemplate : GET request for "http://keycloak:9080/auth/realms/jhipster/protocol/openid-connect/userinfo" resulted in 401 (Unauthorized); invoking error handler
n.m.b.s.o.CachedUserInfoTokenServices : Could not fetch user details: class org.springframework.security.oauth2.client.resource.OAuth2AccessDeniedException, Unable to obtain a new access token for resource 'null'. The provider manager is not configured to support it.
p.a.OAuth2AuthenticationProcessingFilter : Authentication request failed: error="invalid_token", error_description="token string here"
It says the token is invalid. The same mechanism works when everything is deployed on the same host in Docker. Is it Keycloak that prevents the token from being validated for external hosts? I personally doubt that, because it didn't prevent me from logging into the gateway successfully, and I checked Keycloak: it's up with the -b 0.0.0.0 option.
Please help me run the gateway with just gradle bootRun -Pprod.
In summary, I could rephrase my question as: I want the UI developer to be able to test his Angular/Spring gateway project on his own PC while the other services are deployed on a powerful server using Docker (with authentication through Keycloak); it is not possible to deploy those other services on the UI developer's own PC. How can this be done with JHipster?
Add server.use-forward-headers=true to your configuration when running the gateway.
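For reference, a minimal sketch of where that property goes in a JHipster gateway (the file path follows the JHipster convention; adjust to your setup):

# src/main/resources/config/application-prod.yml
# Honour X-Forwarded-* headers set by proxies in front of the gateway
server:
  use-forward-headers: true

On newer Spring Boot versions the equivalent setting is server.forward-headers-strategy.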