gRPC in Docker container can't connect to services on host machine

I have slightly modified this example: https://github.com/grpc/grpc-web/tree/master/net/grpc/gateway/examples/echo. I am running Envoy in a Docker container with port 8080 exposed (running this proxy server is required because the browser can't speak directly to a backend gRPC service). I am running all the services on my localhost (the host machine of the Envoy Docker container). However, I cannot seem to connect Envoy in the Docker container to the services running on the host machine.
I compiled grpc_cli in the container, and when I run grpc_cli ls 192.168.1.10:9000 (the host's LAN IP address and the port the service is running on), I get:
root@bdc9ac396a87:~/grpc# ./bins/opt/grpc_cli ls 192.168.1.10:9000
Received an error when querying services endpoint.
ServerReflectionInfo rpc failed. Error code: 14, message: failed to connect to all addresses, debug info: {"created":"@1569023274.866465052","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1569023274.866463178","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":395,"grpc_status":14}]}
I get an almost identical error when I use the IP address of the docker0 interface, which should also provide a connection to the host machine.
root@bdc9ac396a87:~/grpc# ./bins/opt/grpc_cli ls 172.17.0.1:9000
Received an error when querying services endpoint.
ServerReflectionInfo rpc failed. Error code: 14, message: failed to connect to all addresses, debug info: {"created":"@1569022455.801913949","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1569022455.801910006","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":395,"grpc_status":14}]}
However, when I run a simple HTTP server on the host with
python -m http.server
the following commands work just fine from the container:
wget 172.17.0.1:8000/test.txt    # works
wget 192.168.1.10:8000/test.txt  # works
A client on the host (not in the container) connects and works just fine with the service, so it's not a server problem.
Does Docker block certain types of traffic? I noticed that in the example the server was placed in another Docker container, and it worked (it also worked locally for me), but I'd prefer to have my services running on my host machine while I build and test them. Is there a setting somewhere to enable gRPC from the container to a service on the host machine?
Docker version 1.13.1, build 47e2230/1.13.1
Fedora 29
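For concreteness, the kind of setting being asked about would look roughly like one of the sketches below (the image tag envoy-grpc-web is a placeholder; whether either actually fixes the gRPC failure is the open question):
# sketch 1: run the proxy container on the host network so it shares the host's
# network namespace and can reach services bound on the host directly (Linux only)
docker run -d --network=host envoy-grpc-web
# sketch 2: keep the default bridge network but give the container an explicit
# hostname for the host machine, and point the Envoy config at hostmachine:9000
docker run -d -p 8080:8080 --add-host="hostmachine:192.168.1.10" envoy-grpc-web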

Related

AWS ToolKit docker container not resolving internal service URIs

I am running an AWS Lambda locally via AWS Toolkit. The function, through a long dependency chain, calls an internal service endpoint, and that call throws a ConnectionTimeoutException. The same endpoint works when called locally.
Toolkit spins up a container to run the Lambda in, using the bridge Docker network on my local machine. My local machine is also running a proxy client in another container, and using docker network inspect bridge from my local terminal, I can see both the proxy and Toolkit containers are registered on the bridge network. When I shell into the running Lambda container, my curl command to the internal service times out. The same command on my local machine succeeds.
Shouldn't the curl command work from within the Lambda container?
[screenshot: local machine bridge network]
The connection times out with:
failed: connect timed out; nested exception is org.apache.http.conn.ConnectTimeoutException: Connect to internal.service.uri:80
Our SQUID proxy does not support service discovery.
This means the container has to have environment vars set to the proxy IP:
export http_proxy=http://172.17.0.2:3128
export HTTP_PROXY=http://172.17.0.2:3128
export https_proxy=http://172.17.0.2:3128
export HTTPS_PROXY=http://172.17.0.2:3128
export NO_PROXY=localhost
Then it works.
The next step is to figure out how to set those within the container via AWS Toolkit.
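For comparison, outside of the Toolkit the same variables can be injected into a container at run time with -e flags (the image name below is a placeholder); how to make the Toolkit do the equivalent is still the open question:
docker run \
  -e http_proxy=http://172.17.0.2:3128 -e HTTP_PROXY=http://172.17.0.2:3128 \
  -e https_proxy=http://172.17.0.2:3128 -e HTTPS_PROXY=http://172.17.0.2:3128 \
  -e NO_PROXY=localhost \
  lambda-image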

Docker: How to connect to locally available servers from within docker

I run Docker on Windows. I have a Docker container running a Python application that needs a database connection.
Installing a DB on my machine and connecting to it via "docker.for.win.localhost" in my container works fine.
Now I want to connect to a database running on a server that is available over my local network. I can't seem to connect to it from inside my Docker container, and I don't quite understand how I can proxy the server to my container. The error indicates that it can't establish a connection to this server:
(psycopg2.OperationalError) could not connect to server: No route to host
Is the server running on host "XX.XXX.XX.XX" and accepting
TCP/IP connections on port 5555?
I'm sure this is supposed to work somehow, right?
You can add the host's IP to the container:
docker run --add-host="yourhost:IPOFTHEHOST" <image>
and then yourhost inside the container will point to the host.
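A rough end-to-end sketch of how that might be used (the alias, IP address and image name here are all hypothetical):
# map the alias "dbhost" to the database server's LAN IP inside the container
docker run --add-host="dbhost:192.168.0.42" my-python-app
# the app can then use the alias in its connection string, e.g.
# postgresql://user:password@dbhost:5555/mydb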

Can Consul be run inside a Docker container using Docker for Windows?

I am trying to make Consul work inside a Docker container, using Docker for Windows and Linux containers. I am using the official Consul Docker image. The documentation states that the container must use --net=host for Consul's consensus and gossip protocols.
The problem is that, as far as I can tell, Docker for Windows uses a Linux VM under the hood, and the "host" of the container is not the actual host machine but that VM. I could not find a combination of -bind, -client and -advertise parameters (IP addresses) such that:
Other Consul agents on other hosts can connect to the local agent using the host machine's IP address.
Other containerized services on the same host can query the local agent's REST interface.
Whenever I pass the host machine's LAN IP address through -advertise, I get these errors inside the container:
2018/04/03 15:15:55 [WARN] consul: error getting server health from "linuxkit-00155d02430b": rpc error getting client: failed to get conn: dial tcp 127.0.0.1:0->10.241.2.67:8300: connect: invalid argument
2018/04/03 15:15:56 [WARN] consul: error getting server health from "linuxkit-00155d02430b": context deadline exceeded
Also, other agents on other hosts cannot connect to that agent.
Using -bind on that address fails; my guess is that, since the container is inside the Linux VM, the host machine's address is not an address of the container's host and therefore cannot be bound.
I have tried various combinations of -bind, -client and -advertise, using addresses like 0.0.0.0, 127.0.0.1, 10.0.75.2 (the address on the Docker virtual switch) and the host machine's IP, but to no avail.
I am now wondering whether this is achievable at all. I have been trying this for quite some time, and I am despairing. Any advice would be appreciated!
I have tried the whole process without using --net=host, and everything works fine. I can connect agents across hosts, and I can query the local agent's REST interface from other containerized applications... Is --net=host really crucial to the functioning of Consul?
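For context, the kind of invocation described in the last paragraph (no --net=host, ports published on the bridge network) would look roughly like this sketch; the published ports follow the official Consul image documentation, and <host-lan-ip> is a placeholder for the machine's LAN address:
docker run -d --name=consul \
  -p 8300:8300 -p 8301:8301 -p 8301:8301/udp -p 8500:8500 \
  consul agent -server -bootstrap-expect=1 \
  -bind=0.0.0.0 -client=0.0.0.0 -advertise=<host-lan-ip>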

Run docker container on localhost via VM

I'm new to Docker and containers, and I'm trying to run a simple ASP.NET web app in a container, but I'm running into issues. My OS is Windows 10 Home, so I have to use Docker Toolbox, which runs in a VM that only includes a basic Linux OS. When I spin up the container, it seems to start fine, but I can't view the app on localhost.
$ docker run -p 8342:5000 -it jwarren:project
Hosting environment: Production
Content root path: /app
Now listening on: http://*:5000
Application started. Press Ctrl+C to shut down.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
98cc4aed7586 jwarren:project "dotnet run" 8 minutes ago Up 8 minutes 0.0.0.0:8342->5000/tcp naughty_brattain
I've tried several different recommendations that I found on the web, but none have helped so far. My knowledge of networking is very limited, though, so maybe I'm not fully understanding what needs to be done. I've tried accessing it with the default VM's IP and with the container IP. I understand that the port forwarding does not carry over to the container. Any assistance would be great, as this project is due on Tuesday, and this is the last roadblock before finishing.
I found the following post that was really helpful: How to connect to a docker container from outside the host (same network) [Windows]. Following the steps below worked perfectly:
Open Oracle VM VirtualBox Manager
Select the VM used by Docker
Click Settings -> Network; Adapter 1 should (by default) be "Attached to: NAT"
Click Advanced -> Port Forwarding; add a rule: Protocol TCP, Host Port 8080, Guest Port 8080 (leave Host IP and Guest IP empty)
You should now be able to browse to your container via localhost:8080 and your-internal-ip:8080.
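The same forwarding rule can also be added from the command line, a sketch assuming the Toolbox VM has the default name "default":
VBoxManage controlvm "default" natpf1 "docker-app,tcp,,8080,,8080"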
Started up the container (Dockerfile EXPOSES 5000):
docker run -p 8080:5000 -it jwarren:project
Was able to connect with http://localhost:8080
There are a few things to consider when working with VM networking.
VirtualBox has 3 types of networking options: NAT, Bridged and Host Only.
NAT allows your VM to access the internet through your host's connection, but doesn't allow your HOST machine to reach the VM.
A Host Only network creates a network in which the VM can reach the host machine and the host can reach the VM; there is no internet access over this network.
A Bridged network lets your VM obtain another IP address from your Wi-Fi router or the main network. This IP gives the VM internet access as well as access to other machines on the network, and even the host machine can reach it.
In most cases, when you want to run Docker inside a VM and access that VM from the host machine, you want the VM to have both a NAT and a Host Only adapter.
Accessing your app on port 8342 requires a few things to be checked:
SELinux, firewalld and ufw are disabled on your VM, or properly configured to allow the port (see the checks sketched after this list)
Your VM has a host-only network or bridged network
iptables -S should not show REJECT rules
Some VMs come pre-configured to allow only port 22 from the external network, so you should try accessing the app on <hostonlyip>:8342 or <bridgedip>:8342.
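The items in that list can be checked quickly from inside the VM with something like the following (which commands apply depends on the distribution):
sudo getenforce                    # SELinux mode (Enforcing/Permissive/Disabled)
sudo systemctl status firewalld    # or: sudo ufw status
sudo iptables -S | grep REJECT     # should print nothing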
If you want to test whether the app is up or not, you can do the following:
docker inspect <containerid> | grep IPA
Get the IP from this and run the command
curl http://<containerip>:5000/
This command needs to be executed inside the VM and not on your machine. If it doesn't work, then your container is not listening on 5000. Sometimes apps listen only on 127.0.0.1 inside the container, which means they work only inside the container and not outside. The app inside the container needs to listen on 0.0.0.0.
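For an ASP.NET Core app like the one in the question, one way to force listening on all interfaces is the ASPNETCORE_URLS environment variable, a sketch assuming the app honors it (the output above already suggests it listens on *:5000):
docker run -p 8342:5000 -e ASPNETCORE_URLS=http://0.0.0.0:5000 -it jwarren:project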
If nothing works, you can try an SSH tunnel approach:
ssh -L 8342:127.0.0.1:8342 user@<VMIP>
And then you should be able to access the app on localhost:8342

Connecting to Docker container connection refused - but container is running

I am running 2 Spring Boot applications: a client and a rest-api. The client communicates with the rest-api, which communicates with a MongoDB database. All 3 tiers are running inside Docker containers.
I launch the containers normally, specifying the exposed ports in the Dockerfile and mapping them to a port on the host machine, such as -p 7070:7070, where 7070 is a port exposed in the Dockerfile.
When I run the applications through the java -jar [application_name.war] command, they work fine and can all communicate.
However, when I run the applications in Docker containers I get connection refused errors; for example, when the client tries to connect to the rest-api at http://localhost:7070, the connection is refused.
But the command docker ps shows that the containers are all running and listening on the exposed and mapped ports.
I have no clue why the containers aren't recognizing that the other containers are running and listening on their ports.
Does this have anything to do with iptables?
Any help is appreciated.
Thanks
EDIT 1: When run inside containers on my machine, the applications work fine and don't throw any connection refused errors. The error only happens on that particular other machine.
I used container linking to solve this problem. Make sure you add --link <name>:<alias> at run-time to the container you want linked. <name> is the name of the container you want to link to and <alias> will be the host/domain of an entry in Spring's application.properties file.
Example: if the alias supplied at run-time is 'mongodb', i.e.
--link myContainerName:mongodb
then application.properties can use:
spring.data.mongodb.host=mongodb
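Putting it together, a hedged end-to-end sketch (the image names are hypothetical; only the --link flags and the properties line come from the answer above):
docker run -d --name myContainerName mongo
docker run -d --name rest-api --link myContainerName:mongodb -p 7070:7070 rest-api-image
docker run -d --name client --link rest-api:rest-api -p 8080:8080 client-image
# rest-api's application.properties then contains:
# spring.data.mongodb.host=mongodb
# and the client would connect to http://rest-api:7070 instead of http://localhost:7070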
