I have two applications running on the same Windows host, on different Docker bridge networks:
network 1:
containerA (web server opening port 8080)
containerB
network 2:
containerC
containerD
Now, from containerC, I'm trying to send an HTTP request to containerA via http://host.docker.internal:8080, but it fails with 404 Not Found.
If I open http://localhost:8080 in a browser on the host, it works fine. What's the problem here?
How can I get HTTP access to containerA from containerC?
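One common fix, sketched below under the assumption that the second network is a user-defined bridge named network2 and the container names match the ones above: attach containerA to containerC's network, so the two containers can talk directly via Docker's embedded DNS instead of going through the host.
# Names are illustrative; adjust to your actual network/container names
docker network connect network2 containerA
# From inside containerC, the container name now resolves directly
curl http://containerA:8080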
I have a NAS where I am running various web apps in docker containers through docker-compose. I want some of these web apps to be accessible through the internet, not only when I am connected to my home network.
The problem I'm currently facing is that while Cloudflare can expose the NAS's built-in web apps (the management UI at 192.168.1.135:80 maps to subdomain.domain.com, for instance), it cannot expose any Docker container I run: mapping 192.168.1.135:4444 to subdomain2.domain.com fails, and I receive a 502 Bad Gateway error with every app I have tried so far.
The configuration shouldn't be the issue, and it's not the NoTLSVerify flag either, since the apps run on plain HTTP and the tunnel is configured that way. I'm out of ideas about what is going on and how to solve it.
Looks like the apps you're running on your NAS are proxied through the Docker runtime. Consequently, the IP:port you need to add to the Cloudflare tunnel config is the one that is reachable from the host (not the IP of the host itself).
If the host is 192.168.1.135, you need to find the IP (internal to the Docker network) of the app you want to access from the outside, typically an address in the 172.x.x.x range.
Example: if the containers running the apps you want to access are on 172.0.0.2:4444 for app1 and 172.0.0.3:5555 for app2, the Cloudflare config would look like this:
tunnel: the_ID_of_the_tunnel
credentials-file: /root/.cloudflared/the_ID_of_the_tunnel.json
ingress:
  - hostname: yourapp1.example.com
    service: http://172.0.0.2:4444
  - hostname: yourapp2.example.com
    service: http://172.0.0.3:5555
  - service: http_status:404
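If you want to sanity-check the rules before starting the tunnel, cloudflared ships a validator (this assumes cloudflared is installed where the config file lives):
cloudflared tunnel ingress validate
# Test which ingress rule a given URL would match
cloudflared tunnel ingress rule https://yourapp1.example.com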
See more details and a video here: How to redirect subdomain to port (docker)
Turns out the problem was due to how Docker handles networks, not how Cloudflare accesses them. I first had to create a network connecting both containers, since adding cloudflared to my docker-compose file didn't work for some reason.
Create a Docker network: docker network create tunnel
Run cloudflared without specifying a network: docker run -d --name cloudflare cloudflare/cloudflared:latest tunnel --no-autoupdate run --token
Connect the cloudflared container to the network: docker network connect tunnel cloudflare
Start the app container with docker-compose up (note: the app's network in docker-compose.yml should be the one you created earlier, but cloudflared itself should not be in your docker-compose file).
In the Cloudflare tunnel config, specify the Docker-internal address of your container (as #lu4t suggested). You can find that address with: docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container
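Putting the whole sequence together, it looks roughly like this (network and container names are the illustrative ones used above; substitute your own tunnel token):
docker network create tunnel
docker run -d --name cloudflare cloudflare/cloudflared:latest tunnel --no-autoupdate run --token <your_token>
docker network connect tunnel cloudflare
docker-compose up -d
# Find the Docker-internal IP to put in the tunnel's ingress rules
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container>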
I have slightly modified this example: https://github.com/grpc/grpc-web/tree/master/net/grpc/gateway/examples/echo. I am running Envoy in a Docker container with port 8080 exposed (this proxy is required because the browser can't speak directly to a backend gRPC service). I am running all the services on localhost (the host machine of the Envoy container). However, I cannot connect from Envoy in the Docker container to the services running on the host machine.
I compiled grpc_cli in the container, and when I run grpc_cli ls 192.168.1.10:9000 (the host's LAN IP address and the port the service listens on), I get:
root@bdc9ac396a87:~/grpc# ./bins/opt/grpc_cli ls 192.168.1.10:9000
Received an error when querying services endpoint.
ServerReflectionInfo rpc failed. Error code: 14, message: failed to connect to all addresses, debug info: {"created":"@1569023274.866465052","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1569023274.866463178","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":395,"grpc_status":14}]}
I get an almost identical error when I use the IP address of the docker0 interface, which should also provide a connection to the host machine.
root@bdc9ac396a87:~/grpc# ./bins/opt/grpc_cli ls 172.17.0.1:9000
Received an error when querying services endpoint.
ServerReflectionInfo rpc failed. Error code: 14, message: failed to connect to all addresses, debug info: {"created":"@1569022455.801913949","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1569022455.801910006","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":395,"grpc_status":14}]}
However, after starting a simple HTTP server on the host with
python -m http.server
I can run the following commands from the container just fine:
wget 172.17.0.1:8000/test.txt    # works
wget 192.168.1.10:8000/test.txt  # works
A client on the host (not in the container) connects and works just fine with the service, so it's not a server problem.
Does docker block certain types of traffic? I noticed in the example the server was placed on another docker container, and it worked (it also worked locally for me), but I'd prefer to have my services running on my host machine while I build and test them. Is there a setting somewhere to enable gRPC from the container to a service on the host machine?
Docker version 1.13.1, build 47e2230/1.13.1
Fedora 29
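One thing worth ruling out here (an assumption on my part, not something stated in the question): python -m http.server binds to 0.0.0.0 by default, while many gRPC examples bind to localhost only, which would produce exactly this split between wget working and grpc_cli failing. You can check the bind address on the host:
# Show which local address the service on port 9000 is bound to
ss -tlnp | grep 9000
# 127.0.0.1:9000 -> host-only; rebind the gRPC server to 0.0.0.0:9000
# 0.0.0.0:9000   -> reachable from the container via 172.17.0.1 or the LAN IP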
I have started a dask-scheduler at host A. Host A has docker engine installed. So, host A has multiple network interfaces:
192.168.10.250 (default IP for host A)
172.17.0.1 (host A IP address in bridge network (i.e., docker0))
I tested a simple client from within host A against both IP addresses, and it works fine.
Now I start a Docker container on the same host A without specifying any network, so the container connects to the default bridge network and receives the IP address 172.17.0.2. Within the container, I try to start a client that connects to the dask scheduler on host A as follows:
from dask.distributed import Client
client = Client('172.17.0.1:8786')
but each time I receive the following error:
IOError: Timed out trying to connect to 'tcp://172.17.0.1:8786' after 10 s: connect() didn't finish in time
I tried changing the container's network driver to "host" instead of "bridge", but then I receive the following error:
distributed.comm.core.CommClosedError: in <closed TCP>: Stream is closed
please help
Regards
Thanks guys. Problem solved.
I realized the problem was that Python 2.7 was being used inside the Docker image. When I switched to Python 3.6, it worked (even without --net host).
Regards
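For anyone hitting the same thing, a quick way to compare versions between host and container is (the container name here is hypothetical):
docker exec dask-client python --version
docker exec dask-client python -c "import distributed; print(distributed.__version__)"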
Context:
I have a web server hosting a UI from which users can request emulator instances for my product. Each emulator instance is a web app running on Node.js. When a user requests an emulator instance from the UI, I spawn a Docker container. I would like to return to the user an IP address (+port) from which this emulator container can be accessed.
Note: Presently, docker and the webserver facing the user are running on the same system.
Problems:
1) A container on the default docker0 network is accessible only via its local IP address on the host, e.g. http://172.17.0.5. I can't access the container with http://localhost:32768 (the container was started with -P and was assigned port 32768); I get a message that the site can't be reached.
2) I can't use the docker host network driver because the emulator uses ports internally which I don't want to expose in the host network
3) I don't want to use the macvlan driver because I will be using up too many IPs.
Is it possible to map various ports on the host to IPs on the docker0 subnet? If yes, how do I go about this? If this is possible, I could expose the host IP and the container-specific port to the user.
What is best way to give users access to the containers?
How about an nginx container acting as a proxy? Make your containers always follow the same naming pattern.
Serve a new app instance:
docker run -d --rm --name=static_prefix__unique_id your_image
Have a wildcard domain:
unique_id.yourdomain.com
Or simply:
yourdomain.com/unique_id
You can dynamically proxy the request (I assume you're using port 3000 for the nodejs app):
proxy_pass http://static_prefix__$extractedNameFromRequestUri:3000
Docker will do the hard work for you and route traffic from outside to the static_prefix__unique_id container.
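A rough sketch of what that nginx configuration might look like, assuming the path-based variant and port 3000 (the regex, resolver line, and names are my assumptions, not a drop-in config):
server {
    listen 80;
    server_name yourdomain.com;

    # First path segment is the instance id, e.g. /unique_id/...
    location ~ ^/(?<instance>[a-z0-9_]+) {
        # Docker's embedded DNS; required because proxy_pass uses a variable
        resolver 127.0.0.11;
        proxy_pass http://static_prefix__$instance:3000;
    }
}
Note that nginx and the app containers must share a user-defined Docker network for the container names to resolve.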
I have a .NET Core Docker container which is unable to send requests to the outside world. I can't curl or ping anything, not even a plain IP address like 8.8.8.8. A MongoDB container, however, is able to ping and curl the outside world.
I've tried several images:
microsoft/dotnet:2.0-runtime
microsoft/aspnetcore
Incoming connections to the container works fine.
Docker is running on an Ubuntu machine. I'm using Docker Swarm to manage my containers.
Error:
PING 109.236.87.141 (109.236.87.141): 56 data bytes
ping: sending packet: Network is unreachable
The difference in network configuration between the .NET Core container and the Mongo container is that the Core container isn't attached to an "ingress" network.
After investigating some other Docker containers, I noticed some of them had a bridge network attached. After attaching such a network, the container was able to connect to the outside world.
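For reference, attaching an extra network to an existing standalone container is a one-liner (container and network names are illustrative; for a swarm service, the network has to be listed in the service definition instead):
docker network connect bridge my_dotnet_app
# Verify which networks the container is now attached to
docker inspect -f '{{json .NetworkSettings.Networks}}' my_dotnet_app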