Using traefik for docker internal traffic via websockets - docker

I'm using docker in swarm mode for the services in my application and traefik to handle, well, the traffic. My goal is to make a separate service for each API section my application has (so, for example, requests on domain.com/api/foo_api go to the foo_api service and requests on domain.com/api/bar_api go to the bar_api service).
Now all this is pretty straightforward with traefik. However, I'm also using the API services from other internal services not related to the API. Those use a websocket connection to the internal docker URL, so currently it's ws://api:api_port/ws. If I split up the API part, I'd need something like ws://foo_api:foo_api_port/ws, which obviously gives the calling service access only to foo_api and not to every other one.
So my question is: can I route this websocket traffic with traefik similar to how I do it externally, but internally in the docker network?

Traefik is a north-south reverse proxy. In traditional infrastructure most people would historically use NGINX or Apache for inbound traffic, so it's good to see you using a more modern tool. What you are describing is an east-west pattern of communication inside your firewall, behind traefik (assuming you control all ingress through traefik).
Have you considered using the service discovery and registry capabilities of a tool like HashiCorp Consul - https://consul.io?
The idea of service discovery is that the containers/services inside your swarm can be discovered, made available through the registry, and referenced by name, without the pain of manually building and maintaining complicated name-to-IP lookups. Historically most people know this in a more persistent model as DNS SRV records, which require an external query; Consul still supports that legacy style of integration as well.
This site might help you along: https://attx-project.github.io/Consul-for-Service-Discovery-on-Docker-Swarm.html
They appear to have addressed a case similar to yours, and the work is likely reusable with a few tweaks.
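If you would rather keep traefik itself for the east-west leg, one approach (a sketch, not a verified setup; the service names, image, and ports below are hypothetical) is to run a second traefik instance that only joins the internal overlay network and does the same path-based routing there. Traefik proxies websocket upgrades out of the box, so ws:// needs no extra configuration:

```yaml
# Hypothetical docker-compose sketch: a second traefik instance dedicated
# to east-west traffic on the internal overlay network. Internal services
# call ws://internal-proxy/api/foo_api/ws and traefik forwards the
# websocket upgrade to the matching service.
version: "3.7"
services:
  internal-proxy:
    image: traefik:v2.10
    command:
      - --providers.docker.swarmMode=true
      - --providers.docker.exposedByDefault=false
      - --entrypoints.internal.address=:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - services-net
  foo_api:
    image: myorg/foo_api                 # hypothetical image
    networks:
      - services-net
    deploy:
      labels:                            # swarm-mode labels live under deploy
        - traefik.enable=true
        - traefik.http.routers.foo.entrypoints=internal
        - traefik.http.routers.foo.rule=PathPrefix(`/api/foo_api`)
        - traefik.http.services.foo.loadbalancer.server.port=8000
networks:
  services-net:
    driver: overlay
```

With something like this in place, the other internal services connect to ws://internal-proxy/api/foo_api/ws, so they only need to know the proxy's name rather than the name and port of every API service.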

Related

Get Visitor IP or a Custom header in Jaeger docker behind docker traefik (v2.x)

We are experimenting with Jaeger as a tracing tool for our traefik routing environment. We also use an encapsulated docker network.
The goal is to aggregate requests on our APIs per department, plus some other monitoring.
We are using traefik 2.8 as a docker service. Also all our services run behind this traefik instance.
We added a basic tracing configuration to our .toml file and started a Jaeger instance, also as a docker service (a sketch of this configuration follows below). On our websecure entrypoint we added forwardedHeaders.insecure = true.
Jaeger is working fine, but we only get the docker-internal host IP of the service, not the visitor IP of the user accessing the client with a browser or app.
I googled around and I am not sure, but it seems this is a problem inherent to our setup and can't be fixed, except by using network="host". Unfortunately, that's not an option.
But I want to be sure, so I hope someone here has a tip for configuring docker/jaeger correctly, or knows whether it is possible at all.
A suggestion for a different tracing tool (something like Tideways, for example, but more Python, WASM, and C++ compatible) is also appreciated.
Thanks
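For reference, a minimal sketch of the static configuration the question describes, assuming the Jaeger docker service is named jaeger (the question uses a .toml file; this is the YAML equivalent with the same keys):

```yaml
# Hypothetical traefik 2.x static configuration matching the question:
# Jaeger tracing enabled, forwarded headers trusted on websecure.
entryPoints:
  websecure:
    address: ":443"
    forwardedHeaders:
      insecure: true
tracing:
  jaeger:
    samplingServerURL: http://jaeger:5778/sampling   # assumed service name
    localAgentHostPort: jaeger:6831
```

Note that forwardedHeaders.insecure only makes traefik trust X-Forwarded-For; the client IP still has to survive every hop in front of traefik. If docker's ingress mesh NATs the connection, traefik itself only ever sees an internal address, which matches the behaviour described.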

How to expose a service from minikube to be able to access it from another device in the same network?

I've created a service inside minikube (an expressjs API) running on my local machine,
so when I launch the service using minikube service wedeliverapi --url, I can access it from my browser at localhost:port/api.
But I also want to access that service from another device, so I can use my API from a Flutter mobile application. How can I achieve this goal?
Due to the small amount of information, and to clarify everything, I am posting a general Community wiki answer.
The solution to this problem was to use a reverse proxy server. This documentation gives a definition of what exactly a reverse proxy server is:
A proxy server is a go-between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.
Common uses for a reverse proxy server include:
Load balancing
Web acceleration
Security and anonymity
This is the guide where you can find the basic configuration of a proxy server.
See also this article.
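To make that concrete, one hedged sketch (the selector label, container port, and nodePort below are assumptions): expose the API as a NodePort service, then let a reverse proxy on the host, or kubectl port-forward, relay LAN traffic to it.

```yaml
# Hypothetical manifest: expose the wedeliverapi deployment on a fixed
# NodePort of the minikube node.
apiVersion: v1
kind: Service
metadata:
  name: wedeliverapi
spec:
  type: NodePort
  selector:
    app: wedeliverapi        # assumed pod label
  ports:
    - port: 80
      targetPort: 3000       # assumed expressjs listen port
      nodePort: 30080        # must be in the 30000-32767 range
```

Depending on the minikube driver, the node IP is often reachable only from the host itself, which is why the reverse proxy on the host (or a command like kubectl port-forward --address 0.0.0.0 service/wedeliverapi 8080:80) is still needed to relay requests from other devices on the network.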

Serving dockerized microservices over HTTPS

I'm currently struggling with docker and SSL. Let me give you an overview on what I'm trying to do.
I built a microservice-based architecture composed of a React web application and some "backend" services written in Python and exposed with gunicorn in docker containers. I need to serve it over SSL because of Auth0, which requires https communication. So I built the server, bought a domain, and got an SSL certificate for the domain with Let's Encrypt.
Now, here is the trouble: my services communicate with each other over a docker network, say services-network. For this reason they refer to each other with URLs like `service:port/example`.
At the moment I'm able to successfully connect to my web app over https, but whenever it tries to contact the "backend" services the connection is refused because it comes from a non-secure origin (I used http://service:port/endpoint).
I tried to use the Let's Encrypt certificate generated for the webapp, but the communication is blocked with the message requests.exceptions.SSLError: HTTPSConnectionPool(host='service', port=8081): Max retries exceeded with url: /endpoint (Caused by SSLError(CertificateError("hostname 'service' doesn't match 'domain.com'",),))
I understand that a possible workaround is to make the services communicate with each other over the external network rather than the docker network. Still, I think that is bad practice, and that communication among containers should go through the docker network.
Finally, my question is: what is the best way to make the containers communicate over https through the docker network?
I personally like to use nginx as a reverse proxy. You would configure it normally and set it to proxy_pass <dockerIp:port>.
Many people like to use traefik.io, which has many features, including Let's Encrypt integration.
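As a hedged illustration of the traefik route (the domain, email, image, and port are placeholders): traefik terminates HTTPS at the edge with a Let's Encrypt certificate, while the containers keep talking plain HTTP on the internal docker network, so no per-service certificates are needed.

```yaml
# Hypothetical docker-compose sketch: traefik terminates TLS with Let's
# Encrypt; backend traffic stays on the internal network as plain HTTP.
version: "3.7"
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedByDefault=false
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=admin@domain.com   # placeholder
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - letsencrypt:/letsencrypt
    networks:
      - services-network
  backend:
    image: myorg/backend                 # hypothetical gunicorn service
    networks:
      - services-network
    labels:
      - traefik.enable=true
      - traefik.http.routers.backend.rule=Host(`api.domain.com`)
      - traefik.http.routers.backend.entrypoints=websecure
      - traefik.http.routers.backend.tls.certresolver=le
      - traefik.http.services.backend.loadbalancer.server.port=8081
volumes:
  letsencrypt:
networks:
  services-network:
```

The React app would then call https://api.domain.com/endpoint instead of http://service:8081/endpoint, so the browser always sees a valid certificate and the hostname-mismatch error from the question never comes up.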

Azure Cloud Service microservice to K8 Migration

I am in the process of evaluating a move of a very large Azure Cloud Service (Web Role) microservice architecture to AKS, and have been working through the code and build changes necessary to support it.
In order to replicate the production environment locally for the developers, we run nginx on the host with SSL offloading and DNS (hosted in Azure) A records pointing to 127.0.0.1. When running in the Azure Emulator, the net effect is that a developer can both visit the various web front ends in their browser (i.e. https://myapp.mydomain.dev) and hit the various APIs in the solution (Web API 2) in Postman/cURL, etc.
Additionally, due to how the networking of the Azure Emulator works, the apps themselves can resolve each other through nginx on the host (i.e. the MVC app at https://myapp.mydomain.dev can obtain a token from the IdP web API at https://identity.mydomain.dev and then use that token at the API at https://api.mydomain.dev). This is the critical piece and the source of my question.
All attempts at getting the containers themselves to resolve each other the same way the host OS can (browser/Postman, SSL offloading via nginx) have failed. Many of the instructions out there are, understandably, for Linux containers, and adapting the various docker-compose networking settings to their Windows-container equivalents has not yet yielded any success. To keep the development environments aligned with the real systems, which are tenantized and make use of the default mapping in nginx to catch all incoming traffic and route it to a specific user-facing app/container, it is not as simple as determining a "static" method of addressing these on startup; that is why the effort was put into producing the development environments we have today.
Right now, when one service (container) attempts to communicate with another, it ultimately results in a resolution error, as all requests resolve to https://127.0.0.1 due to the DNS A records hosted in Azure for the domain. Since this migration will be a longer-term project, the environments need to coexist, so changing the way DNS is resolved (the real DNS A records pointing to 127.0.0.1, the host running nginx and handling SSL offloading to the various web roles normally running in the Azure Emulator) is not an option.
Is there a way (with Windows containers) to either:
Allow the container to utilize nginx on the host OS transparently (the app must still call the API at https://api.mydomain.dev), which would cause the traffic to be routed properly to the correct container/port defined in the docker-compose file? (A sketch of one possibility follows after this question.)
OR
Run nginx in each container, allowing each container to resolve and route appropriately without knowing the IP of the other container, possibly through an alias added to the container's nginx.conf before the service starts?
The platform utilizes OAuth2/OIDC, and it is critical to maintain the full URL to the other services from the application's perspective. Beyond mirroring the production and sandbox environments, these URLs are used for redirect URL and post-logout redirect URL validation, among other things, so using "https://myContainerNameForOtherContainerAlias" is not a workable solution.
Will I have the same problem when setting up the AKS environment as well?
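For the first option, one possibility worth testing (a sketch under assumptions, not a verified Windows-container setup): docker-compose's extra_hosts can map the public hostnames to the host's gateway address, so that in-container calls to https://api.mydomain.dev land on nginx on the host OS with the full URL intact.

```yaml
# Hedged sketch for option 1: map the public hostnames to the host so the
# container reaches nginx on the host OS. host-gateway is supported for
# Linux containers on Docker 20.10+; whether it works for Windows
# containers depends on the Docker version (the IP behind
# host.docker.internal can be substituted if not). Image name is
# hypothetical.
services:
  mvcapp:
    image: myorg/mvcapp
    extra_hosts:
      - "identity.mydomain.dev:host-gateway"
      - "api.mydomain.dev:host-gateway"
```

Because the full https:// URLs are preserved, the OAuth2/OIDC redirect and post-logout validation described above would keep working; nginx on the host then routes to the correct container exactly as it does for the browser.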

How can I integrate my application with Kubernetes cluster running Docker containers?

This is more of a research question. If it does not meet the standards of SO, please let me know and I will ask elsewhere.
I am new to Kubernetes and have a few basic questions. I have read a lot of doc on the internet and was hoping someone can help answer few basic questions.
I am trying to create an integration between Kubernetes (user applications running inside Docker containers, to be precise) and my application, which would act as a backup for certain data in the containers.
1. My application currently runs in AWS. Would the Kube cluster need to run in AWS as well, or can it run in any cloud service, or even on-prem, as long as the APIs are available?
2. Does my application need to know the IP of the master node API server to do POST/GET requests, and nothing else?
3. For authentication, can I use AD (my application uses AD today for a few things)? That would also give me role-based policies for each user. Or do I always have to use the Kube TokenReview API for authentication?
4. Would the applications running in Kubernetes use the APIs I provide to communicate with my application?
5. Would my application use POST/GET to communicate with the Kube master API server? Do I need to use kubectl for this and for #4 above?
Thanks for your help.
1. Your application needn't exist on the same server as k8s. There are several ways to connect to a k8s cluster, depending on your use case: you can expose the built-in k8s API using kubectl proxy, connect directly to the k8s API on the master, or expose services via a load balancer or node port.
2. You would only need to know the IP of the master node if you're connecting to the cluster directly through the built-in k8s API, but in most cases you should only be using that API to administer your cluster internally. The preferred way of accessing k8s pods is to expose them via a load balancer, which allows you to access a service on any node from a single IP. k8s also allows you to access a service with a nodePort from any k8s node (except the master) through a preassigned port.
3. TokenReview is only one of the k8s auth strategies. I don't know anything about Active Directory auth, but at a glance OpenID Connect tokens seem to support it. You should review whether you need to allow users direct access to the k8s API at all; consider exposing services via a LoadBalancer instead.
4. I'm not sure what you mean by this, but if you deploy your APIs as k8s deployments, you can expose their endpoints through services and communicate with your external application however you like.
5. Again, the preferred way to communicate with k8s pods from external applications is through services exposed as load balancers, not through the built-in API on the k8s master. In the case of services, it's up to the underlying API to decide which kinds of requests it wants to accept.
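To make the load-balancer suggestion concrete, a minimal sketch (the service name, label, and ports are assumptions):

```yaml
# Hypothetical manifest: expose an in-cluster API through a LoadBalancer
# service, so the external application never talks to the k8s master API.
apiVersion: v1
kind: Service
metadata:
  name: data-api             # assumed name
spec:
  type: LoadBalancer
  selector:
    app: data-api            # assumed pod label
  ports:
    - port: 443
      targetPort: 8443       # assumed container port
```

On AWS this provisions an ELB; the external application then makes its POST/GET calls against the load balancer's address rather than the master node's.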
