Sending a request to a docker container running in GCP VM - docker

I have a prerender server running in a Docker container on my GCP VM instance, which runs Debian. I can see from the Docker logs that it is listening on the container's port 3000, but I can't seem to send a request to the VM's external IP. The firewall settings on the VM instance allow both HTTP and HTTPS traffic, yet nothing seems to happen. I use Cloud Shell to SSH into the VM, so I am positive the container itself is running as it should; I believe the issue lies somewhere between the VM and the container, as I see no activity on the VM's network.
What I've tried so far:
The obvious first try was to simply send a request from a browser to that external IP address, i.e. http://'externalIPofVM'/render?fullpage=true&renderType=jpeg&url='requestedURL'. I know from local testing that this request works; I just can't figure out how to get it to the container on the VM. Even if the request failed on the prerender server, I'd at least know it is reaching the container, but at the moment it never does.
I believe it may have something to do with how the VM is connected to the container, but I'm not sure. This is my first dive into running a container on a VM, so if I have left out any useful information, please tell me and I'll happily provide as much detail as possible.
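For reference, this is roughly how I would expect the port publishing and a local test to look; the image name prerender and the mapping of host port 80 to container port 3000 are illustrative assumptions, not my exact commands:

# publish the container's port 3000 on the VM's port 80 (image name is illustrative)
docker run -d -p 80:3000 prerender
# then, from the VM itself via Cloud Shell, check that the service answers locally
curl "http://localhost/render?fullpage=true&renderType=jpeg&url=https://example.com"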
[Image: example output of a successful prerender request on a local container]
The image shows the type of response I expect from a successful request to prerender; however, even a failed one would be helpful at this point, as I'd at least know I'm making contact.

Related

HAProxy running inside Docker container always logging same client IP

I have a Docker container running HAProxy, and while it seems to work well, I'm running into a problem where the client IP that reaches the frontend, and shows up in the HAProxy logs, is always the same, just with a different port. This IP appears to be the IPv4 IPAM gateway of the network the container runs in (192.168.xx.xx).
The problem is that since every request that reaches the proxy has the same client IP, no matter which machine or network it came from, someone with bad intentions can easily trigger the security restrictions. That bans the shared IP, and no request gets through until the proxy is reset, because every request appears to come from the same banned IP.
This is my current HAProxy config (reduced to the bare minimum, without the restriction rules, timeouts, etc., for ease of understanding; I'm testing with this setup and the problem is still present):
global
    log stdout format raw local0 info

defaults
    mode http
    log global
    option httplog
    option forwardfor

frontend fe
    bind :80
    default_backend be

backend be
    server foo_server $foo_server_IP_and_Port

backend be_abuse_table
    stick-table type ip size 1m expire 15m store conn_rate(3s),conn_cur,gpc0,http_req_rate(15s),http_err_rate(20s)
I have tried setting and adding headers, and I've also tried running the container on the host network, but then the request does not reach the backend server because it is in a different network. Furthermore, I would like to keep the container on the network it's on now, alongside the other containers.
Also, does the backend server configuration influence this problem in any way? My understanding is that since the problem already appears at the frontend, the backend configuration shouldn't matter here.
Any suggestions? This has been driving me crazy for 2 days now. Thank you so much!
Turns out the problem was that I was using Docker in rootless mode. You can either use normal Docker, or keep rootless mode, install the slirp4netns package, and change the port forwarder, following these steps (the section about changing the port forwarder): https://rootlesscontaine.rs/getting-started/docker/#changing-the-port-forwarder
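For anyone landing here later, this is roughly what the port-forwarder change looks like; the environment variable and unit name come from the linked page and Docker's rootless docs, so treat this as a sketch rather than a copy-paste recipe:

# add an override to the rootless Docker user service
systemctl --user edit docker
# in the editor, add:
#   [Service]
#   Environment="DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER=slirp4netns"
# then restart the user daemon so the new port driver takes effect
systemctl --user restart docker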

Unable to make Docker container use OpenConnect VPN connection

I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform HTTP requests to a server behind a VPN (e.g. server1.vpn-remote.com).
I successfully configured the host VM to connect to the VPN through openconnect; I can turn this connection on and off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, there is no error in the VPN connection service logs, which still show only the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection gets broken again even for the host machine.
NOTE: I already checked on the ip routes and there seems to be no conflict between Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN but I would like to avoid this option as I will eventually make a docker compose deployment which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue, as I'm able to ping the IP corresponding to server1.vpn-remote.com even after the VPN connection seemed to be failing. I'm going through documentation regarding DNS management with Docker and Docker Compose and their usage of the host's /etc/resolv.conf file.
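As a sketch of what I'm experimenting with while reading that documentation, the container's resolver can be pinned explicitly instead of relying on the host's /etc/resolv.conf copy; the 10.0.0.2 address below is purely illustrative and would have to be the DNS server pushed by the VPN:

# point the container at the VPN's DNS server directly
docker run --dns 10.0.0.2 mycontainer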
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
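In practice the ordering looks something like the following; the VPN unit name is just a placeholder for whatever your openconnect systemd service is called:

# 1. bring up the VPN first (unit name is illustrative)
sudo systemctl start openconnect@work.service
# 2. restart the Docker daemon so it picks up the VPN's routes and DNS
sudo systemctl restart docker
# 3. only then start the container
docker run mycontainer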
Hat tip to this answer for guiding me down the correct path.

How can I make a request coming from inside of a docker container appear to be coming from my local machine?

A browser running in a Docker container needs to make a POST to a login service running on a test API in our network. The service is very picky about where a POST can come from, so it's rejecting the POST because it's coming from host.docker.internal instead of localhost.company.com.
It's very unlikely I'd be able to get host.docker.internal added to the whitelist.
The POST will work fine if the browser is running on my local machine but fails when the browser is running inside a container on my local machine.
I've tried docker run --add-host='localhost.mycompany.com:127.0.0.1' and docker run --add-host='localhost:127.0.0.1', neither one worked. The latter seems silly; it was kind of a shot in the dark...
A possible further complication: the browser is running in testcafe inside a Docker container, so my request will have headers like 'Origin: http://172.17.0.2:1337' 'Referer: http://172.17.0.2:1337/WBrtZV38p/http://host.docker.internal:3000/app/'
Short of making a proxy of some sort on my local machine, is there a way to make a POST from the docker container appear to be coming from my local machine?
Start the container in the host OS network namespace with docker run --network host ...; the container will then run directly in your local machine's network. But you lose container network isolation, so you should review the security implications of this approach.
Doc: https://docs.docker.com/network/host/
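A minimal sketch of what that would look like for the testcafe setup described above; the image name is a placeholder:

# run with the host's network stack instead of the default bridge
docker run --network host my-testcafe-image
# requests now originate from the host, so the whitelisted
# localhost.mycompany.com origin can be used instead of host.docker.internal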

2 separate docker stacks can't communicate over 172.x network

In order to debug and set up a pair of docker stacks (one a client and the other a server, each with the private services it requires) using docker compose, I'm running them locally to make sure they're functioning correctly.
They will eventually communicate across the internet, with an nginx server on the server side acting as a reverse proxy. But for now, I'm specifying that the client use the server container's 172.19.0.3:1234 address.
I'm able to curl/ping both the client container and server container from the host machine, but running an interactive session and trying to curl the server's 172.19.0.3:1234 address just times out.
I feel the 172.x address is being used incorrectly here. Is there some obvious issue with what I've described so far? What is the better approach for what I'm trying to do?
Seems that after doing some searching, I am in a similar situation to this question: Communicating between Docker containers in different networks on the same host.
I've decided to use docker network connect to connect the client to the server's network for my purposes.
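For completeness, the command looks roughly like this; the network and container names are placeholders for whatever docker compose generated in your case:

# attach the client container to the server stack's network
docker network connect server_default client_app
# the client can then reach the server by service name (e.g. http://server:1234)
# instead of the hard-coded 172.19.0.3 address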

Docker communication between apps in separate containers

I have been looking everywhere for this answer. To me it seems like an obvious question; however, the answer has eluded me.
My current setup is: I have redis, mongodb and two API servers on the same bridge network. The first server acts as a gateway API that does all the auth and exposes certain API calls. The backend API is the one that handles all the db interactions and data munging. If I hit the backend (inner) API alone, I am able to see the contents (this API would not be exposed in a real production environment). However, if I make the same request from within the gateway API, I am not able to hit the backend (inner) API, even though it is also part of the bridged network I created.
Below is a diagram of the container interactions.
I still use legacy linking, but I'm a little familiar with this. I think the problem is that you are trying to hit "localhost" from inside your gateway container. The inner API container cannot be resolved as "localhost" from inside the gateway API container. You are able to hit "localhost:8099" from the host machine or externally because of the port mapping, but none of your other containers can use that address/port, because inside a container "localhost" refers to the container itself.
Here's a way to test what I'm thinking. In your host's shell, run the bridge inspect command shown here. Copy the IP address from Containers.<inner-api-hash>.IPV4. Then open a shell in the gateway container with docker exec -it <gateway-id> /bin/bash and then use curl or wget to see if you can hit that IP address you copied.
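A sketch of that test, with placeholder names; port 8099 comes from the question, and the inner-API IP shown is just an example to be replaced with the one from the inspect output:

# on the host: find the inner API container's bridge IP
docker network inspect bridge
# inside the gateway container, try the inner API by that IP directly
docker exec -it <gateway-id> /bin/bash
curl http://172.17.0.3:8099/    # replace with the IP copied from the inspect output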
If my thinking is correct, you will see that you must use your inner-API node's Docker assigned IP address from the other containers. Amongst other options, you can start containers with a static IP address as shown here.
This is starting to escape the scope of my knowledge, but you can also configure the container's DNS (see the Docker documentation on configuring container DNS).
