While trying to run a Python gRPC server from Docker with the host set to localhost, I get the error "Address family not supported".
This is for a gRPC server written in Python 3.6, running inside Docker on an Ubuntu 18.04 host. I tried replacing "localhost" with 0.0.0.0 and now get a new error, "Connection Refused".
status_channel = grpc.insecure_channel('localhost:6667')
The insecure gRPC connection should be established and data should flow between client and server; instead, I get a connectivity error.
Another workaround could be to run the docker container with --net=host locally.
$ docker run -d --net=host <image_name>
This is because localhost can resolve to an IPv6 address, which Docker port forwarding does not handle well. A failure to bind the IPv6 address can then cause the IPv4 binding to fail as well.
Alternatively, you can enable IPv6 for Docker as described here: https://docs.docker.com/config/daemon/ipv6/
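Per that page, the change amounts to roughly the following in /etc/docker/daemon.json, followed by a restart of the Docker daemon (the address range here is just an example):

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}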
While running the gRPC server from Docker, my server binds its insecure port as follows:
server.add_insecure_port(f'{os.environ.get("HOST")}:{os.environ.get("PORT")}')
Then I set these variables in the .env file for Python:
HOST=0.0.0.0
PORT=50001
This worked like a charm. The connection was established and data flowed.
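For reference, a minimal sketch of the server side with this setup (the servicer registration is omitted, and the defaults mirror the .env values above):

import os
from concurrent import futures

import grpc

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
# add_YourServicer_to_server(YourServicer(), server) would be called here (generated code omitted)
server.add_insecure_port(f'{os.environ.get("HOST", "0.0.0.0")}:{os.environ.get("PORT", "50001")}')
server.start()
server.wait_for_termination()

The container is then run with the matching port published, e.g. docker run -p 50001:50001 <image_name>, and the client connects to localhost:50001 on the host.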
I'm learning Docker and Go now,
but I hit a problem when I run the container with this:
docker run --rm -p 8080:8080/tcp --env-file .env my-project:latest
Here is part of my .env file. I use Docker Desktop on Windows; is it not possible to connect to localhost from Docker on Windows?
DB_HOST=127.0.0.1
DB_USERNAME=root
DB_NAME=mydbs
DB_PASS=root123
AUTH_GEN_URL=https://api.learning.mydbs.id
Does anyone have a clue? Any answer would be appreciated.
Thank you!
The problem is that when you spin up the container, it tries to connect to 127.0.0.1:3306 within the container and not on the host, hence you get "connection refused": nothing is listening on port 3306 at localhost inside your container.
For Windows and Mac this can easily be fixed by using host.docker.internal instead of 127.0.0.1. This ensures that the service running inside your container correctly connects to the MySQL instance running on the host machine.
For Linux it's even simpler: all you have to do is pass the --network="host" option to the docker run command.
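For example, with the .env from the question, the Windows/Mac fix is a one-line change (everything else stays the same):

# in .env, replace DB_HOST=127.0.0.1 with:
DB_HOST=host.docker.internal

docker run --rm -p 8080:8080/tcp --env-file .env my-project:latest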
For context - I am attempting to deploy OKD in an air-gapped environment, which requires mirroring an image registry. This private, secured registry is then pulled from by other machines in the network during the installation process.
To describe the environment - the host machine where the registry container runs is on CentOS 7.6. The other machines are all VMs running Fedora CoreOS under libvirt. The VMs and the host are connected by a virtual network created with libvirt, which includes DHCP settings (dnsmasq) that give the VMs static IPs. The host machine also hosts the DNS server, which, as far as I can tell, is configured properly, as I can ping every machine from every other machine using its fully qualified domain name and access specific ports (such as the port the Apache server listens on). Podman is used instead of Docker for container management for OKD, but as far as I can tell the commands are exactly the same.
I have the registry running in the air-gapped environment using the following command:
sudo podman run --name mirror-registry -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z \
-v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e REGISTRY_AUTH=htpasswd \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.pem -e REGISTRY_HTTP_TLS_KEY=/certs/registry-key.pem \
-d docker.io/library/registry:latest
It is accessible using curl -u username:password https://host-machine.example.local:5000/v2/_catalog, which returns {"repositories":[]}. I believe this confirms that my TLS and authorization configurations are correct. However, if I transfer the ca.pem file (used to sign the SSL certificates the registry uses) over to one of the VMs on the virtual network and attempt to use the same curl command, I get an error:
connect to 192.168.x.x port 5000 failed: Connection refused
Failed to connect to host-machine.example.local port 5000: Connection refused
Closing connection 0
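For reference, the command run from the VM was roughly the following (a sketch; it assumes ca.pem was copied into the current directory on the VM rather than installed into the system trust store):

curl --cacert ca.pem -u username:password https://host-machine.example.local:5000/v2/_catalog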
This is quite strange to me, as I've been able to use this method to communicate with the registry from the VMs in the past, and I'm not sure what has changed.
After some further digging, it seems like there is some sort of issue with the port itself, but I can't be sure where the issue stems from. For example, if I run sudo netstat -tulpn | grep LISTEN on the host, I get a line indicating that podman (conmon) is listening on the correct port:
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 48337/conmon
but if I test whether the port is accessible from the VM, (nc -zvw5 192.168.x.x 5000) I get a similar error: Ncat: Connection refused. If I use the same test on any of the other listening ports on the host, it indicates successful connections to those ports.
Please note, I have completely disabled firewalld, so as far as I know, all ports are open.
I'm not sure if the issue is with my DNS settings, or the virtual network, or with the registry itself and I'm not quite sure how to further diagnose the issue. Any insight would be much appreciated.
I am getting the error below:
Error: com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host *(my database server ip address), port *(database port) has failed. Error: "connect timed out. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".
Database string: jdbc:sqlserver://dns:port;DatabaseName=testdb
Using Docker Windows container
DockerfileWar:
FROM openjdk:8
ADD target/dv-2.war dv-2.war
EXPOSE 8085
ENTRYPOINT ["java","-jar","dv-2.war"]
Build the image from the project: docker build -f DockerfileWar -t dv-2war .
docker run -p 8085:8085 dv-2war
I get the above error when running the container.
Note: if I use the IP address instead of the DNS name, it works, but I want to use the DNS name only. Also note that the database runs on another machine (not in any container); the Spring Boot application runs in a Docker Windows container.
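To illustrate with placeholders (the real values were redacted above): jdbc:sqlserver://<ip-address>:<port>;DatabaseName=testdb connects fine, while jdbc:sqlserver://<dns-name>:<port>;DatabaseName=testdb times out from inside the container.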
Thanks, Dhaval
I am trying to access an MS SQL Server from within a Docker container.
The problem is, it is only reachable via an SSH tunnel that I establish on my host machine. I use a local forward for port 1433, which is automatically set up once I connect to the server.
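Roughly, the forward looks like this (a sketch; the user and server names are placeholders):

ssh -N -L 1433:127.0.0.1:1433 <user>@<ssh-server>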
Using SquirrelSQL for example, I can access the Server via 127.0.0.1:1433 with no problem.
But from within my docker container I am unable to do so.
I already tried to run the docker container with --expose 1433 -p 127.0.0.1:1433:1433 but that didn't work out.
The host is running Ubuntu 16.04; the Docker container is based on some sort of Debian.
I have an application running on my host which has the following features: it listens on port 4001 (configurable) and only accepts connections from a whitelist of trusted IP addresses (127.0.0.1 only by default; other addresses can be added, but only one by one, not using a mask).
(It's the interactive brokers gateway application which is run in java but I don't think that's important)
I have another application running inside a docker container which needs to connect to the host application.
(It's a python application accessing the IB API, but again I don't think that matters)
Ultimately I will have multiple containers on multiple machines trying to do the same thing, but I can't even get it working with one container running on the same machine.
sudo docker run -t myimage
Error: Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket Clients" is enabled on the TWS "Configure->API" menu.
(No response from IB Gateway on host machine)
Ideally, I'd be able to set up the Docker containers/bridge so that all the containers appear to be on a specific IP address, add that address to the whitelist, and voila.
What I've tried:
1) using -p and EXPOSE
sudo docker run -t -p 4001:4001 myimage
Bind for 0.0.0.0:4001 failed: port is already allocated.
(No response from gateway)
This either doesn't work or leads to a "port already in use" conflict. I gather that these settings are designed for the opposite problem (host can't see a particular port on the container).
2) setting --net=host
sudo docker run -t --net=host myimage
Exception caught while reading socket - Connection reset by peer
(no response from gateway)
This should work since the docker container should now look like it's 127.0.0.1... but it doesn't.
3) setting --net=host and adding the local host's real IP address 192.168.0.12 (as suggested in comments) to the whitelist
sudo docker run -t --net=host myimage
Exception caught while reading socket - Connection reset by peer
(no response from gateway)
4) adding 172.17.0.1, ...2, ...3 to the whitelist on the host application (the bridge network is 172.17.0.0 and subsequent containers get allocated in this range)
sudo docker run -t myimage
Error: Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket Clients" is enabled on the TWS "Configure->API" menu.
(no response from host)
This is horribly hacky but doesn't work either.
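For reference, the address a running container actually got on the default bridge can be checked with something like this (sketch; <container> is a placeholder for the container name or ID):

sudo docker inspect -f '{{.NetworkSettings.IPAddress}}' <container>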
PS Note this is different from the problem of trying to run the host application IB Gateway inside a container - I am not doing that.
I don't want to run the host application inside another container, although in some ways that might be a neater solution.
Running the IB gateway is tricky on a number of different levels, including connecting to it, and especially if you want to automate the process.
We took a close look at connecting to it from other IPs, and finally gave up on it--gateway bug as far as we could tell. There is a setting to whitelist IPs that can connect to the gateway, but it does not work and cannot be scripted.
In our build process we create a docker base image, then add the gateway and any/all of the gateway's clients to that image. Then we run that final image.
(Posted on behalf of the OP).
Setting --net=host and changing the port from 4001 so it doesn't conflict with a live version of the gateway on the same network. The only IP address required in the whitelist is 127.0.0.1.
sudo docker run -t --net=host myimage
Use socat to forward the port from the gateway to a new port that can listen on any address. For example, set the gateway to listen on port 4002 (localhost only) and run the following command in the container
socat tcp-listen:4001,reuseaddr,fork tcp:localhost:4002
to forward the port to 4001.
Then you can connect to the gateway from outside of the container using port 4001 when running the container with parameter -p 4001:4001.
In case this is useful for someone else: I tried a couple of the suggestions here to connect from my Python app running in a Docker container to a TWS IBGateway instance running on another server, and none of them were 100% working. The socat option connected, but the connection was then being dropped due to an issue with the socat buffer that we couldn't fix.
The solution we found was to create an ssh tunnel from the machine that is running the Docker container to the machine that is running the TWS IBGateway.
ssh -i ib-gateway.pem <ib-gateway-server-user>@<ib-gateway-server-ip> -f -N -L 4002:127.0.0.1:4001
After you establish this ssh tunnel, you can test it by running
telnet 127.0.0.1 4002
If this command runs successfully, your ssh tunnel is ready. The next step is to configure your Python application to connect to 127.0.0.1 on port 4002, and to start your Docker container with --net=host so that it can access the ssh tunnel running on the Docker host machine.
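For completeness, a minimal sketch of the client side, assuming the official ibapi package is used (any IB client library is configured the same way; the clientId is arbitrary):

from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class App(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)

app = App()
# 127.0.0.1:4002 is the local end of the ssh tunnel; the container reaches it because it runs with --net=host
app.connect("127.0.0.1", 4002, clientId=1)
app.run()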