I have the following setup (everything as docker containers):
Two web services running on HTTPS mode (self-signed certificate).
The web services are registered in consul.
Web service 1 calls web service 2 using feign client.
web service 2 is named authentication-service.
The Docker containers' cacerts were updated to include the self-signed certificate; however, the certificate does not include the IP addresses, because they are dynamically generated by Docker.
@FeignClient(name = "authentication-service")
public interface AuthenticationClient extends AuthenticationApi {
}
When web service 1 calls web service 2, Ribbon internally uses Docker's IP address (this is the problem).
Moreover, it is not clear to me why Feign is using HTTP instead of HTTPS:
feign.RetryableException: No subject alternative names matching IP address 172.20.0.10 found executing POST http://authentication-service/api/auth/authenticate
What am I missing?
How should I overcome this situation?
Thank you in advance.
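For reference, both symptoms usually come down to how the service registers itself in Consul and whether the named client is treated as secure. A minimal sketch of the relevant Spring Cloud Consul discovery properties (a sketch only; the hostname and port values are assumptions, not from the question):

# application.yml of authentication-service
spring:
  cloud:
    consul:
      discovery:
        # register a resolvable hostname instead of the container IP,
        # so the name the caller dials can match the certificate's SANs
        prefer-ip-address: false
        hostname: authentication-service   # assumed alias on the docker network
        # advertise the service as HTTPS so clients build https:// URLs
        scheme: https
        port: 8443                          # assumed HTTPS port

On the caller's side, Ribbon can also be told that the named client is secure (e.g. authentication-service.ribbon.IsSecure=true), which is the classic way to stop Feign from generating http:// URLs.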
I've created a service inside minikube (an Express.js API) running on my local machine,
so when I launch the service using minikube service wedeliverapi --url I can access it from my browser at localhost:port/api.
But I also want to access that service from another device, so I can use my API from a Flutter mobile application. How can I achieve this goal?
Due to the small amount of information, and to clarify everything, I am posting a general Community wiki answer.
The solution to this problem was to use a reverse proxy server. This documentation gives a definition of what exactly a reverse proxy server is:
A proxy server is a go‑between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.
Common uses for a reverse proxy server include:
Load balancing
Web acceleration
Security and anonymity
This guide covers the basic configuration of a proxy server.
See also this article.
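As an illustration, a minimal nginx reverse proxy for this case might look like the following (a sketch only; the upstream address is whatever minikube service wedeliverapi --url prints on your machine, and all values here are placeholders):

server {
    listen 80;           # reachable by other devices on the LAN
    server_name _;

    location /api {
        # forward to the URL printed by `minikube service wedeliverapi --url`
        proxy_pass http://192.168.49.2:31234;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}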
Setup:
We have a setup on our Windows VM (on-premises) running Docker (Windows containers) + a gMSA / service account for our ASP.NET Core 5 API, which internally runs on Kestrel with .AddAuthentication(NegotiateDefaults.AuthenticationScheme).AddNegotiate(); (NOT IIS). It authenticates well as the configured service account, e.g. against MSSQL or the file server.
If I open any protected endpoint, it uses my Windows credentials or asks me for them (if I am not on a domain-joined computer). The user test endpoint returns the Windows user's claims.
This is just the API, which works fine!
Issue:
The "issue" is, that our VueJS application is running in a docker container (linux containers) on a linux host - inside hosted via nginx. Same network. After opening the UI the first time (without having opened the API) no authentication request is happening. The interesting part is: After opening the API the first time and entering windows credentials and then opening the UI works and shows the use/claims (which we return from the backend).
In the frontend we are using axios with withCredentials: true.
Question:
What must be done to enable the UI to negotiate the Windows login?
The reverse proxy that's passing requests to your container must have NTLM support enabled for Windows authentication to work. IIS supports this by default, but for others, you need to activate it manually. This must be repeated down the proxy chain.
From the docs:
Credentials can be persisted across requests on a connection. Negotiate authentication must not be used with proxies unless the proxy maintains a 1:1 connection affinity (a persistent connection) with Kestrel.
See the docs for your reverse proxy:
nginx: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ntlm
caddy: https://github.com/caddyserver/ntlm-transport
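For nginx, a minimal sketch of such an upstream might look like the following (note that the ntlm directive is part of the commercial NGINX Plus subscription; the names, addresses, and ports here are assumptions):

upstream kestrel_api {
    server 172.18.0.5:5000;  # hypothetical address of the Kestrel container
    ntlm;                    # keep a 1:1 upstream connection per client connection
}

server {
    listen 80;               # TLS termination omitted for brevity
    server_name api.example.local;   # placeholder

    location / {
        proxy_pass http://kestrel_api;
        proxy_http_version 1.1;          # required for upstream connection reuse
        proxy_set_header Connection "";  # do not forward "Connection: close"
    }
}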
I'm currently struggling with docker and SSL. Let me give you an overview on what I'm trying to do.
I built a microservice-based architecture composed of a React web application and some "backend" services written in Python and exposed with gunicorn in Docker containers. I need to serve it over SSL because Auth0 requires HTTPS communication. So, I built the server, bought a domain, and got an SSL certificate for the domain with Let's Encrypt.
Now, here is the trouble: my services communicate with each other over a Docker network, say services-network. For this reason they refer to each other with URLs like `service:port/example`.
At the moment I'm able to successfully connect to my web app over HTTPS, but whenever it tries to contact the "backend" services the connection is refused because it comes from a non-secure resource (I used http://service:port/endpoint).
I tried to use the Let's Encrypt certificate generated for the web app, but the communication is blocked with the message requests.exceptions.SSLError: HTTPSConnectionPool(host='service', port=8081): Max retries exceeded with url: /endpoint (Caused by SSLError(CertificateError("hostname 'service' doesn't match 'domain.com'",),))
I understand that a possible workaround for this error is to make the services communicate with each other over the external network instead of the Docker network. Anyway, I think that is not good practice, and that communication among containers should go through the Docker network.
Finally, my question is: what is the best way to make the containers communicate over HTTPS through the Docker network?
I personally like to use nginx as a reverse proxy. You would configure it normally and set it to proxy_pass <dockerIp:port>.
Many people like to use traefik.io, which has many features, including Let's Encrypt integration.
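As a sketch of the nginx approach, using the names already in the question (service, port 8081, domain.com) and the standard Let's Encrypt paths: nginx terminates TLS at the edge, and traffic on the internal services-network stays plain HTTP:

server {
    listen 443 ssl;
    server_name domain.com;

    ssl_certificate     /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    location /backend/ {
        # TLS ends here; the hop over the docker network is plain HTTP
        proxy_pass http://service:8081/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}

With this layout the browser only ever talks HTTPS to domain.com, so Auth0 is satisfied, and no per-container certificates are needed.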
I have a server running Ubuntu 20.04 with nginx, mosquitto, node-red, and docker; let's call the website http://mywebsite.com. The problem I am facing is that I have created a client in docker, let's call it client1, so the URL will be http://mywebsite.com/client1,
and I want to establish an MQTT connection via mosquitto, sending the data on topic test.
The problem is that in Node-RED's MQTT node, when I enter the IP address of my mosquitto container, it works.
But if I replace the IP address 192.144.0.5 with mywebsite.com/client1, I can't connect to mosquitto and I can't send or receive any form of data.
Any idea on how to solve this problem?
OK, you are going to have several problems here.
You cannot do path-based proxying with MQTT. If you want multiple MQTT brokers (one per client) bound to a single public-facing domain/IP address, then they are all going to have to run on separate ports (other than the default 1883).
Nginx can do MQTT protocol proxying (e.g. like this), so you can use this to expose the different ports and forward them to the separate instances of mosquitto. But even if you had different hostnames (all pointing at the same IP address), nginx would have no way to know which hostname was used, because MQTT has no equivalent of the Host HTTP header to direct it. If you use MQTT with TLS you may be able to get it to work with SNI; I had not seen anybody do that until recently, but it works (possible docs for SNI-based routing here, and an explanation of how to do it here).
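For illustration, port-per-client MQTT proxying might look like this (a sketch; the container names and public ports are assumptions):

# nginx.conf - the stream context sits outside the http block
stream {
    server {
        listen 1884;                        # public port for client1
        proxy_pass mosquitto-client1:1883;  # assumed container name
    }
    server {
        listen 1885;                        # public port for client2
        proxy_pass mosquitto-client2:1883;
    }
}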
If you use MQTT over WebSockets, then you should be able to use hostname-based routing.
Path-based proxying for Node-RED currently doesn't work properly if you enable admin authentication, because the admin auth tokens are currently stored in browser local storage and scoped only to the hostname, not the hostname + path. This means a client will only ever be able to log into one instance at a time.
You can work around this by using host-based proxying, e.g. http://client1.mywebsite.com, as sketched below.
A fix for this is on the backlog for Node-RED, probably (no promises) to be looked at after version 1.2.0 ships.
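For the host-based workaround, a sketch of one such virtual server (the container name and the DNS record for the subdomain are assumptions):

server {
    listen 80;
    server_name client1.mywebsite.com;  # requires a DNS record for this name

    location / {
        proxy_pass http://node-red-client1:1880;  # assumed container name
        # the Node-RED editor uses websockets, so allow the upgrade
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}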
I am trying to deploy API Manager and Enterprise Integrator using Docker Compose, on a cloud server.
Everything works locally when using localhost as the host, but when I deploy it on a cloud server, I cannot access the API Manager using the public IP of the server. The Enterprise Integrator works, though. I've modified some configuration parameters as shown below, but the problem persists:
<APIStore>
<!--GroupingExtractor>org.wso2.carbon.apimgt.impl.DefaultGroupIDExtractorImpl</GroupingExtractor-->
<!--This property is used to indicate how we do user name comparision for token generation https://wso2.org/jira/browse/APIMANAGER-2225-->
<CompareCaseInsensitively>true</CompareCaseInsensitively>
<DisplayURL>false</DisplayURL>
<URL>https://<PUBLIC IP HERE>:${mgt.transport.https.port}/store</URL>
<!-- Server URL of the API Store. -->
<ServerURL>https://<PUBLIC IP HERE>:${mgt.transport.https.port}${carbon.context}services/</ServerURL>
I've also whitelisted the said public IP:
"whiteListedHostNames" : ["localhost","PUBLIC IP HERE"]
Meanwhile, please check the reference.
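One more thing worth double-checking (an assumption on my part, not something stated in the question) is that the Compose file actually publishes the API Manager's ports on the host, rather than only exposing them inside the Docker network, e.g.:

# docker-compose.yml excerpt - image and ports are the WSO2 defaults, adjust as needed
services:
  api-manager:
    image: wso2/wso2am
    ports:
      - "9443:9443"   # management console / publisher / store
      - "8243:8243"   # HTTPS gateway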