WCF service call failure in a Linux container - Docker

I'm running multiple Docker containers on the same Docker network, with traffic routed through Traefik.
Calls from clients arrive at Traefik and are routed to the appropriate container using Traefik rules.
Sometimes, inter-container communication also happens.
Container 1 runs a CoreWCF application.
Container 2 has a WCF client that communicates with Container 1.
External legacy WCF clients call the service in Container 1 over HTTPS, with the wsHttpBinding security set to Transport mode.
Currently, external requests fail if Container 1's binding has no transport security, but if I set Container 1's binding security to Transport mode, inter-container communication fails with a certificate RemoteCertificateNameMismatch error.
How can a CoreWCF app support both cross-container and external communication?
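Two things commonly resolve the RemoteCertificateNameMismatch in this kind of setup: issue the service certificate with subject alternative names that also cover the internal container host name, or relax host-name validation on the internal client only. Below is a minimal, hedged sketch of the second option for the client in Container 2; the contract IContainer1Service, the host name container1 and the address path are placeholders (not from the question), and the exact effect of SslCertificateAuthentication can vary between WCF client stacks:

```csharp
using System.ServiceModel;
using System.ServiceModel.Security;
using System.Security.Cryptography.X509Certificates;

// Hypothetical contract matching the service hosted in Container 1.
[ServiceContract]
public interface IContainer1Service
{
    [OperationContract]
    string Ping(string text);
}

public static class InternalClient
{
    public static IContainer1Service Create()
    {
        // Same transport security the external clients use.
        var binding = new WSHttpBinding(SecurityMode.Transport);
        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;

        // Internal address; "container1" is the Docker container/service name (placeholder).
        var address = new EndpointAddress("https://container1/Service.svc");

        var factory = new ChannelFactory<IContainer1Service>(binding, address);

        // Accept the external-facing certificate even though its subject does not
        // match "container1". Only do this for trusted, internal traffic.
        factory.Credentials.ServiceCertificate.SslCertificateAuthentication =
            new X509ServiceCertificateAuthentication
            {
                CertificateValidationMode = X509CertificateValidationMode.None,
                RevocationMode = X509RevocationMode.NoCheck
            };

        return factory.CreateChannel();
    }
}
```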


Configure WCF service in Azure Container Apps

I'm trying to deploy a WCF service implemented with CoreWCF + .NET 6 to Azure Container Apps, exposing HTTPS endpoints from a Linux container.
When I try the same service over plain HTTP, everything works correctly.
I also expose a gRPC service, but the difference with WCF is the binding configuration: WCF requires the same protocol scheme on both client and server. So I suppose it isn't possible to redirect an HTTPS request to a container that exposes a WCF service on port 80, whereas this can be done with a REST or gRPC service.
I enabled ingress in Azure Container Apps and set the port to 443. When I test the HTTP endpoint, I set the port to 80 instead.
The WCF binding is BasicHttpBinding with security mode Transport. When I test the HTTP endpoint, I set security mode None.
In the Dockerfile I expose ports 80 and 443.
On my local machine I can get things working because I can use a self-signed certificate, but in a production environment this doesn't seem to work. I deploy the self-signed certificate with the container image, but maybe there is no certification authority that trusts this certificate.
I read that for Azure Container Instances it's possible to configure HTTPS by running Nginx in a sidecar container, but in that case the request is redirected internally to port 80, so it doesn't work for me.
What can I do to get my service working over HTTPS?
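For reference, here is a minimal CoreWCF (.NET 6) sketch that serves the same contract over both a plain HTTP endpoint (security mode None, for ingress-terminated TLS) and an HTTPS endpoint (security mode Transport, when the container itself holds the certificate). The certificate path, password, ports and the Echo contract are placeholders, not the asker's actual code:

```csharp
using CoreWCF;
using CoreWCF.Configuration;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddServiceModelServices();

// Kestrel listens on 80 (plain HTTP) and 443 (TLS with a mounted certificate).
builder.WebHost.ConfigureKestrel(options =>
{
    options.ListenAnyIP(80);
    options.ListenAnyIP(443, listen => listen.UseHttps("/certs/service.pfx", "certPassword"));
});

var app = builder.Build();

app.UseServiceModel(serviceBuilder =>
{
    serviceBuilder.AddService<EchoService>();

    // Plain HTTP endpoint: use this when TLS is terminated by the ingress/proxy.
    serviceBuilder.AddServiceEndpoint<EchoService, IEchoService>(
        new BasicHttpBinding(BasicHttpSecurityMode.None), "/EchoService");

    // HTTPS endpoint: use this when the container terminates TLS itself.
    serviceBuilder.AddServiceEndpoint<EchoService, IEchoService>(
        new BasicHttpBinding(BasicHttpSecurityMode.Transport), "/EchoService");
});

app.Run();

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) => text;
}
```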

AWS Service discovery, nginx and node issue

I am running two services in AWS ECS Fargate. One runs Nginx containers behind an Application Load Balancer, and the other runs a Node.js application. The Node application uses service discovery, and the Nginx containers proxy to the "service discovery endpoint" of the Node application containers.
My issue is:
After scaling the Node application containers up from 1 to 2, Nginx is unable to send requests to the newly spawned container; it only sends requests to the old container. After a restart/redeploy of the Nginx containers, it is able to send requests to the new containers.
I tried a DNS TTL of 0 for the service discovery endpoint, but I'm facing the same issue.
Nginx does not re-resolve DNS at runtime if your server is specified as part of an upstream group, or in certain other situations; see this SF post for more details. This means that Nginx never becomes aware of new containers being registered for service discovery.
You haven't posted your Nginx config, so it's hard to say exactly what you can do there. For proxy_pass directives, some people suggest using variables to force runtime resolution, as in the sketch below.
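For example, a hedged Nginx sketch of that variable trick; the service-discovery name node-app.local, port 3000 and the resolver address are placeholders (the resolver should point at your VPC's DNS server):

```nginx
# Re-resolve the Cloud Map / service-discovery name at runtime instead of
# only once at startup. Names, port and resolver address are placeholders.
resolver 169.254.169.253 valid=10s;

server {
    listen 80;

    location / {
        # Because the target is a variable, Nginx resolves it per request
        # (subject to the resolver's "valid" time) rather than caching the
        # IPs it saw when the worker started.
        set $backend "node-app.local";
        proxy_pass http://$backend:3000;
    }
}
```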
Another idea might be to expose an HTTP endpoint from the Nginx container that listens for connections and reloads the Nginx config. This endpoint could then be triggered by a Lambda when new containers are registered (the Lambda is in turn triggered by CloudWatch Events). Disclaimer: I haven't tried this in practice, but it might work.

Pros of using Docker Virtual Networks

What are the pros of having Docker containers running and communicating with each other through a Docker virtual network instead of simply having them communicate with each other through the host machine? Say I have container A exposed through port 8000 on the host machine (-p 8000:8000) and container B exposed through port 9000 on the host machine (-p 9000:9000). Container A can communicate with container B through host.docker.internal:9000 but, if they were deployed in the same Docker network, A would be able to communicate with B simply through <name of container B>:9000. The latter is obviously neater in my opinion, but other than that what are its benefits?
Security.
By creating a private network that is only accessible to internal Docker services, you remove a door for attacks to occur. A common architecture is
-pub---> PROXY --priv---> MAIN SERVICE --priv--> DATABASE
Only the proxy needs to be exposed to the public (host) network interface. All 3 services can be part of a private network where internal traffic occurs.
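A rough docker-compose sketch of that layout (image names, ports and the network name are placeholders): only the proxy publishes a host port, while the main service and the database are reachable solely by name on the private network.

```yaml
services:
  proxy:
    image: nginx:alpine
    ports:
      - "443:443"                 # the only publicly exposed entry point
    networks: [backend]
  app:
    image: example/main-service   # placeholder image
    networks: [backend]           # proxy reaches it as http://app:8000
  db:
    image: postgres:15
    networks: [backend]           # app reaches it as db:5432
networks:
  backend:
    driver: bridge
```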
Simplification.
Private network traffic is considered "trusted", so there is no need for an SSL certificate (HTTPS) or for every service to implement SSL/TLS verification.
It is also typically (or should always be) much faster than public-facing networking, which means some optimisations used on the web (gzipping or other compression schemes, caching) are unnecessary.
Multi VMs
When services span multiple VMs, they are typically not tied to a specific VM. This allows components (containers, tasks, etc.) to be moved to different or new VMs by orchestrators (Kubernetes, Mesos, ...). Communication between services happens over a private (overlay) network spanning all the VMs. Your service then only needs to refer to other services by name and let the orchestrator route the traffic correctly.

JHipster - unable to use a gateway app when deploying everything on the Docker host except the gateway itself (mixed Docker and local deployment)

I have several JHipster Spring microservice and gateway projects. I deployed all of them on one host using Docker, except the gateway, which I started on another host.
I use Keycloak for OAuth authentication.
Everything works fine when I deploy all of the microservices, databases, and gateways as Docker containers on a Docker network using docker-compose.
But it doesn't work when I deploy everything on Docker except the gateway, i.e. when the gateway resides outside the Docker-created network. The motivation is that I just want my UI programmer to be able to run the gateway on his own PC and use the microservices deployed on the server host. Purely for ease of UI development, I need to run this single gateway with gradle bootRun -Pprod.
I used Docker macvlan networking to assign a separate IP to each container on my Docker network, so every container on the host has its own IP address on the physical network and is visible to other hosts on the network.
The problem is that in the normal Docker deployment (when the gateway is deployed in a Docker network on the same host) everything works fine, but in my scenario every microservice returns a 401 error after a successful login.
The microservice logs this error:
o.s.s.oauth2.client.OAuth2RestTemplate : Setting request Accept header to [application/json, application/x-jackson-smile, application/cbor, application/*+json]
o.s.s.oauth2.client.OAuth2RestTemplate : GET request for "http://keycloak:9080/auth/realms/jhipster/protocol/openid-connect/userinfo" resulted in 401 (Unauthorized); invoking error handler
n.m.b.s.o.CachedUserInfoTokenServices : Could not fetch user details: class org.springframework.security.oauth2.client.resource.OAuth2AccessDeniedException, Unable to obtain a new access token for resource 'null'. The provider manager is not configured to support it.
p.a.OAuth2AuthenticationProcessingFilter : Authentication request failed: error="invalid_token", error_description="token string here"
It says the token is invalid, yet the same mechanism works when everything is deployed on the same host in Docker. Is Keycloak preventing the token from being validated for external hosts? I personally doubt that, because it didn't prevent me from logging into the gateway successfully, and I just checked Keycloak: it is up, started with -b 0.0.0.0.
Please help me get a gateway running with just gradle bootRun -Pprod.
In summary, I could rephrase my question as: I want the UI developer to be able to test his Angular/Spring gateway project on his own PC while the other services are deployed on a powerful server using Docker (authentication via Keycloak), and it is not possible to deploy those other services on the UI developer's own PC. How can this be done in JHipster?
Add server.use-forward-headers=true to your configuration when running the gateway this way.
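As a small, hedged illustration of where that setting could live (the file path follows JHipster's usual convention and is not confirmed by the question; newer Spring Boot versions use server.forward-headers-strategy instead):

```properties
# src/main/resources/config/application-prod.properties (or the YAML equivalent)
# Make the gateway honour X-Forwarded-* headers set by proxies in front of it.
server.use-forward-headers=true
```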

Service running in Docker returning URLs to another service in a different container

I'm very new to Docker, so it might just be that I've misunderstood something. We are developing a system using an SDK; this SDK relies on two Docker services for which we have the tar files of the images. Both of these images expose a REST API on port 8080.
I've got containers running as per the instructions we were provided. The SDK connects to one service (A), and service A uses service B. One of the options provided when running A is a URL for connecting to B.
The URL I used is therefore http://service-b-container-name:8080 (as specified in the provided instructions).
It seems that the API on service A is returning URLs for resources on service B, and the SDK in our application (running on the host) then attempts to connect to them directly. This fails because I've exposed service A on a different port (since service B was exposed on 8080, I used 8081 for service A), but also because the host cannot resolve service-b-container-name.
I was able to get it running by swapping the exposed ports and adding service-b-container-name to my host's hosts file so that it resolves to 127.0.0.1, but needing to do this seems really wrong.
Is there something I can do about this, or is the service simply wrong to return URLs pointing at another service?
I guess the SDK receives from service A a URL for reaching service B at http://localhost:8080/..., because when service B is asked for the URL of a resource it returns this host:port to the client (service A, in this case), since that really is the URL on which the resource is served inside the container.
If you want to change this behaviour, so the SDK can access service B through port 8081, you should also change the port service B is exposed on inside the container. Then, when service B is asked for a resource, it will answer that the resource is at a URL on port 8081.
As for the host not being able to resolve service-b-container-name: only services on the same Docker network as service B can reach it by that name. The host can reach Docker services and containers through their bound (published) ports.
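A hedged docker run sketch of that advice (image names, the network name and the SERVICE_B_URL option are placeholders; the real option for pointing A at B comes from your SDK's instructions). The idea is to publish each service on the same port it listens on inside the container, so any URL generated inside a container remains valid from the host:

```bash
docker network create sdk-net

# Service B keeps its internal port 8080 and is published on the same host port.
docker run -d --name service-b-container-name --network sdk-net \
  -p 8080:8080 service-b-image

# Service A is reconfigured (if the image allows it) to listen on 8081 so its
# published port matches too; it still reaches B as service-b-container-name:8080.
docker run -d --name service-a --network sdk-net \
  -p 8081:8081 \
  -e SERVICE_B_URL=http://service-b-container-name:8080 \
  service-a-image
```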
Hope it helped ;)
