I have a Docker container running HAProxy, and while it seems to work well, I'm running into a problem: the client IP that reaches the frontend and shows up in the HAProxy logs is always the same, just with a different port. This IP is the same as the IPv4 IPAM gateway of the Docker network the container runs in (192.168.xx.xx).
The problem is that since every request reaching the proxy has the same client IP, no matter which machine or network it came from, it's easy for someone with bad intentions to trigger the security restrictions. That bans the shared IP, and no request gets through until the proxy is reset, because every request appears to come from the same banned IP.
This is my current HAProxy config (I trimmed it to the bare minimum, without the restriction rules, timeouts, etc., for readability; the problem is still present with this setup):
global
    log stdout format raw local0 info

defaults
    mode http
    log global
    option httplog
    option forwardfor

frontend fe
    bind :80
    default_backend be

backend be
    server foo_server $foo_server_IP_and_Port

backend be_abuse_table
    stick-table type ip size 1m expire 15m store conn_rate(3s),conn_cur,gpc0,http_req_rate(15s),http_err_rate(20s)
I have tried setting and adding headers. I've also tried running the container on the host network, but then the request does not reach the backend server because it's on a different network; besides, I would like to keep the container on the network it's currently on, alongside the other containers.
Also, does the backend server configuration influence this problem in any way? My understanding is that since the problem is already present when the request reaches the frontend, the backend configuration shouldn't matter here.
Any suggestions? This has been driving me crazy for 2 days now. Thank you so much!
Turns out the problem was that I was using Docker rootless mode. You can either use normal Docker, or keep rootless, install the slirp4netns package, and change the port forwarder, following these steps (the section about changing the port forwarder): https://rootlesscontaine.rs/getting-started/docker/#changing-the-port-forwarder
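For anyone hitting the same thing, the "changing the port forwarder" section essentially tells the rootless Docker daemon to use slirp4netns for port forwarding, which preserves the client source IP. Roughly like this (this sketch assumes a systemd-managed rootless daemon; treat the linked page as the authoritative steps):

# assumes rootless Docker run under systemd --user; env var name per the linked guide
mkdir -p ~/.config/systemd/user/docker.service.d
cat > ~/.config/systemd/user/docker.service.d/override.conf <<'EOF'
[Service]
Environment="DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER=slirp4netns"
EOF
systemctl --user daemon-reload
systemctl --user restart docker

After restarting the daemon, published ports should see the real client IP rather than the gateway address.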
I have a prerender server running in a Docker container on my GCP VM instance running Debian. I know it is running from the Docker logs on the container's port 3000, but I can't seem to send a request to the VM's external IP. The firewall settings on the VM instance allow both HTTP and HTTPS traffic, yet nothing seems to happen. I am using the cloud shell to SSH into the VM, so I am positive the container itself is running as it should, but I believe the issue lies somewhere between the VM and the container, as I see no activity on my VM network.
What I've tried so far:
The obvious first try was to simply send a request from a browser to that external IP address: http://'externalIPofVM'/render?fullpage=true&renderType=jpeg&url='requestedURL'. I know from local testing that this works; I just can't figure out how to send it to the Docker container on the VM. Even if the request failed on the prerender server, I'd at least know it's hitting the container, but at the moment it's never being hit.
I believe it may have something to do with connecting the VM to the container, but I don't know. This is my first dive into running a container on a VM, so if there's any information I've left out, please tell me and I'll happily provide as much detail as possible.
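For completeness, these are the kinds of checks I can run on the VM (the container and image names below are placeholders for my actual ones):

# is the container's port actually published on the host?
docker ps --format '{{.Names}}: {{.Ports}}'
# does prerender answer locally on the VM?
curl 'http://localhost:3000/render?renderType=jpeg&url=https://example.com'
# if nothing is published, the container would need a mapping like this
docker run -d -p 80:3000 --name prerender my-prerender-image

If the local curl works but the external IP doesn't, the problem would be in the port mapping or firewall; if even the local curl fails, it's the container itself.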
Example output of a successful prerender request on a local container:
The image shows the type of response I expect from a successful request to prerender; however, even a failed one would be helpful at this point, as I'd at least know I'm making contact.
I currently have about 5 webservers running behind a reverse proxy. I would like to use an external AD to authenticate my users with the LDAP protocol. Would docker-engine be able to differentiate between each container by itself?
My current understanding is that it wouldn't be possible without having a containerized directory service or without exposing a different port for each container, but I'm having doubts. If I ping an external server from my container, I get a reply in that same container without issue. How was the reply able to reach the proper container? I'm having trouble understanding how it would be different for any other protocol, but at the same time a reverse proxy is required for serving the content of multiple webservers. If anyone could make it a bit clearer for me, I'd greatly appreciate it.
After digging a bit deeper I have found what I was looking for.
Any traffic originating from a container gets routed automatically by Docker on a default network using IP masquerading (a form of NAT) through iptables. Packets leaving the container have their source IP stripped of the container address and replaced by the host address, and the original address is remembered until the TCP session is over. The traffic then goes to its destination, and any reply is sent back to the host, where the reply packets have the host IP stripped and are forwarded to the proper container. This is why you can ping another server from a container and get the reply in that same container.
But obviously it doesn't work for incoming traffic to a webserver because the first step is the client starting a session with the webserver. That's why a reverse proxy is required.
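You can actually see this on the host: on a default install Docker adds a MASQUERADE rule for the bridge subnet (the subnet below is the common default, yours may differ):

# list the NAT rules Docker sets up for outgoing container traffic
sudo iptables -t nat -S POSTROUTING
# typically contains something like:
# -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE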
I may be missing a few things and may be mistaken about some others but this is the general idea.
TLDR: outgoing traffic (and any reply) will get routed automatically by Docker; you will have to use a reverse proxy to route incoming traffic to multiple containers.
In order to debug and set up a pair of Docker stacks (one a client and the other a server, each with its own private services) using Docker Compose, I'm running them locally to make sure they're functioning correctly.
They will eventually communicate across the internet, with an nginx server on the server side acting as a reverse proxy. But for now, I'm specifying that the client use the 172.19.0.3:1234 address of the server container.
I'm able to curl/ping both the client container and server container from the host machine, but running an interactive session and trying to curl the server's 172.19.0.3:1234 address just times out.
I feel the 172.x address is being used incorrectly here. Is there some obvious issue with what I've described so far? What would be a better approach for what I'm trying to do?
After doing some searching, it seems I am in a similar situation to this question: Communicating between Docker containers in different networks on the same host.
I've decided to use docker network connect to connect the client to the server's network for my purposes.
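For anyone in the same situation, the commands are roughly this (the network and container names are placeholders for whatever docker network ls and docker ps show in your setup):

# attach the client container to the server stack's network
docker network connect server_default client_container
# the client can now reach the server by name instead of a hard-coded 172.19.x.x address
docker exec -it client_container curl http://server_container:1234

Using names instead of 172.19.0.3 also survives restarts, since Docker may assign a different IP each time.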
I have been looking everywhere for this answer. To me it seems like an obvious question; however, the answer has eluded me.
My current setup: I have Redis, MongoDB, and two API servers on the same bridge network. The first server is a gateway API that does all the auth and exposes certain API calls. The backend API is the one that handles all the DB interactions and data munging. If I hit the backend (inner) API alone, I am able to see the contents (this API would not be exposed in a real production environment). However, if I make the same request from within the gateway API, I am not able to hit the backend (inner) API, even though it is also part of the bridge network I created.
Below is a diagram of the container interactions.
I still use legacy linking, but I'm a little bit familiar with this. I think the problem is that you are trying to hit "localhost" from inside your gateway container. The inner API container cannot be resolved as "localhost" inside of the gateway API container. You are able to hit "localhost:8099" from the host machine or externally because of the port mapping, but none of your other containers will be able to resolve that address/port because they 'think' it's a remote machine.
Here's a way to test what I'm thinking. In your host's shell, run the bridge inspect command shown here. Copy the IP address from Containers.<inner-api-hash>.IPv4Address. Then open a shell in the gateway container with docker exec -it <gateway-id> /bin/bash and use curl or wget to see if you can hit the IP address you copied.
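Concretely, the test would look something like this (the 172.17.0.3 address is just an example; 8099 is the port from the question):

# find the inner API container's bridge IP under Containers.<id>.IPv4Address
docker network inspect bridge
# then from inside the gateway container:
docker exec -it <gateway-id> /bin/bash
curl http://172.17.0.3:8099/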
If my thinking is correct, you will see that you must use your inner-API node's Docker assigned IP address from the other containers. Amongst other options, you can start containers with a static IP address as shown here.
This is starting to escape the scope of my knowledge, but you can also configure a container DNS. Configure container DNS.
I am writing an application that is composed of a few Spring Boot based microservices with a Zuul based reverse proxy in front.
It works when I start the services on my machine, but for server rollout I'd like to use docker for the services, but this seems not to be possible right now.
Normally you would have a fixed "internal" port and randomized ports on the outside of the container. But the app in the container doesn't know the outside port (and IP).
The Netflix tools match what I would want to write an efficient microservice architecture and conceptually I really like docker.
As far as I can see it would be very troublesome to start the container, gather the outside port on the host and pass it to the app, because you can't simply change the port after the app is started.
Is there any way to use Eureka with Docker-based clients?
[Update]
I guess I did a poor job explaining the problem. So maybe this clarifies it a bit more:
The eureka server itself can run in docker, as I have only one and the outside port doesn't matter. I can use the link feature to access it from the clients.
The problem is the URL that the clients register themselves with.
This is, for example, https://localhost:8080/, but due to dynamic port assignment it is really only accessible via https://localhost:54321/.
So eureka will return the wrong URL for the services.
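To make the mismatch concrete (the image name and ports are just an example):

# publish the container's fixed internal port 8080 on a random host port
docker run -d -P --name my-service my-springboot-image
# ask Docker which host port was picked
docker port my-service 8080
# -> 0.0.0.0:54321

The app inside only knows about 8080, so that is what it registers with Eureka, while the outside world can only reach it on 54321.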
UPDATE
I have updated my answer below, so have a look there.
I have found a solution myself, which is maybe not the best solution, but it works for me...
When you start Docker with "--net=host" (host networking), the container uses the host's network stack directly. I then just use 0 as the port for Spring Boot; Spring randomizes the port for me, and since it's using the host's network stack there is no translation to a different port (or IP).
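A minimal sketch of that setup (the image name is made up; SERVER_PORT=0 relies on Spring Boot's relaxed binding to set server.port=0, which makes Spring pick a random free port):

# host networking: the random port Spring picks is directly reachable on the host
docker run -d --net=host -e SERVER_PORT=0 my-springboot-service

Because no port mapping is involved, the port the app registers with Eureka is the same port clients actually connect to.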
There are some drawbacks though:
When you use host networking you can't use the link-feature for these containers as link source or target.
Using the host's network stack leads to less encapsulation of the instance, which may be a problem depending on your project.
I hope it helps
A lot of time has passed and I think I should elaborate on this a little bit further:
If you use Docker to host your Spring application, just don't use a random port! Use a fixed port, because every container gets its own IP anyway, so every service can use the same port. This makes life a lot easier (a short sketch follows after these points).
If you have a public facing service then you would use a fixed port anyway.
For local starts via Maven or, for example, the command line, have a dedicated profile that uses randomized ports so you don't have conflicts (but be aware that there are, or have been, a few bugs surrounding random ports and service registration).
If for whatever reason you want to or need to use host networking, you can of course use randomized ports, but most of the time you shouldn't!
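As a sketch of the fixed-port approach (the network and image names are made up): each service keeps the same internal port, and nothing needs to be published for service-to-service traffic:

docker network create services
docker run -d --net services --name service-a -e SERVER_PORT=8080 my-service-a
docker run -d --net services --name service-b -e SERVER_PORT=8080 my-service-b
# both listen on 8080 on their own IPs, so there is no conflict
# (assuming curl is available in the image)
docker exec -it service-a curl http://service-b:8080/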
You can set up a directory for each Docker instance, share it between the host and the instance, and then write the port and IP address to a file in that directory. For example:
$ instanceName=$(generate random instance name)
$ dirName=/var/lib/docker/metadata/$instanceName
$ mkdir -p "$dirName"
$ docker run --name "$instanceName" -v "${dirName}:/mnt/metadata" ...
$ echo $(get port number and host IP) > "${dirName}/external-address"
Then you just read /mnt/metadata/external-address from your application and use that information with Eureka.
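One way to wire that up, sketched as a container entrypoint (this assumes the file is written as host:port; the EUREKA_INSTANCE_* variables are meant to map to eureka.instance.hostname and eureka.instance.non-secure-port via Spring's relaxed binding, and the jar path is hypothetical):

#!/bin/sh
# read the externally visible "host:port" written by the host into the shared directory
EXTERNAL_ADDRESS=$(cat /mnt/metadata/external-address)
export EUREKA_INSTANCE_HOSTNAME=${EXTERNAL_ADDRESS%:*}
export EUREKA_INSTANCE_NONSECUREPORT=${EXTERNAL_ADDRESS#*:}
exec java -jar /app/service.jar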