Very basic question about proxying inside docker containers in OSX: I am running a toxiproxy docker container with:
docker run --name proxy -p 8474:8474 -p 19200:19200 shopify/toxiproxy
and an Elasticsearch container:
docker run --name es -p 9200:9200 elasticsearch:6.8.6
I want toxiproxy to redirect traffic arriving at localhost:19200 to the Elasticsearch container on port 9200. I configure toxiproxy with:
curl -XPOST "localhost:8474/proxies" -d "{ \"name\": \"proxy_es\", \"listen\": \"0.0.0.0:19200\", \"upstream\": \"localhost:9200\", \"enabled\": true}"
Now, I would expect that:
curl -XGET localhost:19200/_cat
would point me to the Elasticsearch endpoint. But I get:
curl: (52) Empty reply from server
Any idea why this is wrong? How can I fix it?
From inside the toxiproxy container, localhost:9200 does not resolve to the es container.
This is because, by default, these containers are attached to the default bridge network, where localhost refers to the container's own loopback interface. It does not resolve to the localhost of the host machine (where docker-machine is running).
You can use the host network by adding --net=host for this to work. A better approach would be to create a user-defined network and run all containers on that network.
docker run --name proxy --net host shopify/toxiproxy
docker run --name es --net host elasticsearch:6.8.6
(With --net host, the -p flags are ignored, so they can be dropped.)
Your localhost should then be resolvable both ways.
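A sketch of the user-defined network approach suggested above (the network name esnet is arbitrary; with Docker's embedded DNS, toxiproxy can then reach Elasticsearch by container name instead of localhost):

```shell
# Create a user-defined bridge network (name "esnet" is an example)
docker network create esnet

# Run both containers on it
docker run --name es --net esnet -p 9200:9200 elasticsearch:6.8.6
docker run --name proxy --net esnet -p 8474:8474 -p 19200:19200 shopify/toxiproxy

# Point the proxy's upstream at the "es" container name, not localhost
curl -XPOST "localhost:8474/proxies" \
  -d '{"name": "proxy_es", "listen": "0.0.0.0:19200", "upstream": "es:9200", "enabled": true}'

# The proxied endpoint should now respond from the host
curl localhost:19200/_cat
```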
Related
I want to run a docker container that has no access to the outside internet. I've been using --network=none for this successfully. But now I want to host a web server from that container, and access it from outside. When I try, I find that the port mapping is totally ignored:
$ docker run --rm -it -p 8000:8000 --network=none python bash
# python -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
Now from outside the container:
$ docker port 981f253788ad
$ curl localhost:8000
curl: (7) Failed to connect to localhost port 8000: Connection refused
You can try using a custom network created with the --internal option and then attaching your container to this network:
$ docker network create --internal internal-network
$ docker run --rm -it -p 8000:8000 --network=internal-network python bash
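One way to check the intended isolation: a container on the same internal network can reach the web server by name, while the server container itself has no route to the internet. A sketch (container names web and the one-off client are hypothetical):

```shell
# Create the internal network and start the server on it
docker network create --internal internal-network
docker run -d --name web --network=internal-network python python -m http.server 8000

# Another container on the same internal network can reach it via Docker DNS
docker run --rm --network=internal-network python \
  python -c "import urllib.request; print(urllib.request.urlopen('http://web:8000').status)"
```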
I have a Cassandra image that listens on port 9042, but since I'm running that image in a virtual machine that has limited external ports, I need to map that port to another port.
I tried linking the external port 7017 to the internal container port 9042, but it does not work when I run the container and do a curl:
docker run --name cassandra -p 9042:7017 -p 9160:9160 --memory=3000m -d cassandra
And the other way around:
docker run --name cassandra -p 7017:9042 -p 9160:9160 --memory=3000m -d cassandra
Any idea how I can map the container's internal port to an external port?
Regards
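For reference, Docker's -p flag takes host_port:container_port order, so mapping external port 7017 to Cassandra's 9042 would look like the following sketch (the --memory flag is carried over from the question; client commands are illustrative):

```shell
# -p HOST:CONTAINER — host port 7017 forwards to container port 9042
docker run --name cassandra -p 7017:9042 -p 9160:9160 --memory=3000m -d cassandra

# Clients outside the VM now connect to port 7017, e.g.:
# cqlsh <vm_ip> 7017
```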
I'm having a rather awful issue with running a Redis container. For some reason, even though I have attempted to bind the port and what have you, it won't expose the Redis port it claims to expose (6379). Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
Docker Redis Page (for reference to where I pulled the image from): https://hub.docker.com/_/redis/
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
docker run --name ausbot-ranksync-redis -d redis
docker run --name ausbot-ranksync-redis --expose=6379 -d redis
https://gyazo.com/991eb379f66eaa434ad44c5d92721b55 (The last container I scan is a MariaDB container)
The command variations I have tried:
docker run --name ausbot-ranksync-redis -p 127.0.0.1:6379:6379 -d redis
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis
Those two should work and make the port available on your host.
Obviously, I've checked this by scanning the open ports on the IP assigned to the Redis container (172.17.0.3) and it returned no open ports whatsoever. How might I resolve this issue?
You shouldn't be checking the ports directly on the container from outside of docker. If you want to access the container from the host or outside, you publish the port (as done above), and then access the port on the host IP (or 127.0.0.1 on the host in your first example).
For docker networking, you need to run your application listening on all interfaces (not localhost/loopback). The official redis image already does this, and you can verify with:
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot netstat -lnt
or
docker run --rm --net container:ausbot-ranksync-redis nicolaka/netshoot ss -lnt
To access the container from outside of docker, you need to publish the port (docker run -p ... or ports in the docker-compose.yml). Then you connect to the host IP and the published port.
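For example, with the port published as in the first two commands, a quick check from the host (assuming redis-cli is installed on the host):

```shell
# Publish container port 6379 on the host
docker run --name ausbot-ranksync-redis -p 6379:6379 -d redis

# Connect to the host address and the published port, not the container IP
redis-cli -h 127.0.0.1 -p 6379 ping   # a healthy instance replies PONG
```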
To access the container from inside of docker, you create a shared network, run your containers there, and access using docker's DNS and the container port (publish and expose are not needed for this):
docker network create app
docker run --name ausbot-ranksync-redis --net app -d redis
docker run --name redis-cli --rm --net app redis redis-cli -h ausbot-ranksync-redis ping
I am running Docker for Mac. When I run
docker run -d --rm --name nginx -p 80:80 nginx:1.10.3
I can access Nginx on port 80 on my Mac. When I run
docker run -d --rm --name nginx --network host -p 80:80 nginx:1.10.3
I cannot.
Is it possible to use both "--network host" and publish a port so that it is reachable from my Mac?
Alternatively, can I access Nginx from my Mac via the IP of the HyperKit VM?
Without the --network flag, the container is added to the bridge network by default, which creates a network stack on the Docker bridge (usually via a veth interface).
If you specify --network host, the container is added to the Docker host's network stack. Note that the container then shares the networking namespace of the host, with all the security implications that entails.
This means you don't need to add -p 80:80; instead run...
docker run -d --rm --name nginx --network host nginx:1.10.3
and access the container on http://127.0.0.1
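A quick check of this on a Linux host, where host networking applies as described (on Docker for Mac the behavior differs, per the link below):

```shell
# nginx binds directly to port 80 on the host's network stack
docker run -d --rm --name nginx --network host nginx:1.10.3

# Print only the HTTP status code of the response
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1
```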
The following link will help answer the HyperKit question and the current limitations:
https://docs.docker.com/docker-for-mac/networking/
There is no docker0 bridge on macOS
Because of the way networking is implemented in Docker for Mac, you
cannot see a docker0 interface in macOS. This interface is actually
within HyperKit.
I am really confused about this problem. I have two computers on our internal network. Both computers can ping internal servers.
Both computers have same docker version.
I run a simple docker container with the docker run -it --rm --name cont1 --net=host java:8 command on both computers. Then I ssh into the containers and try to ping an internal server. One of the containers can ping an internal server, but the other one can't reach any internal server.
How it can be possible? Do you have any idea about that?
Thank you
Connecting a container to other systems on the same network is done by port mapping.
For that, you need to run the docker container with a port mapping, like:
docker run -it --rm --name cont1 -p host_ip:host_port:container_port java:8
e.g., docker run -it --rm --name cont1 -p 192.168.134.122:1234:1500 java:8
NOTE: the container port given in docker run should be a port the application inside the container listens on (typically declared with EXPOSE in the Dockerfile).
Now, for example, suppose the container's IP is 172.17.0.2 and the container port given in the run command is 1500.
A request sent to host_ip (192.168.134.122) on host_port (1234) is redirected to the container with IP 172.17.0.2 on port 1500.
See the binding details in iptables -L -n -t nat
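With the example mapping above, the forwarding can be exercised and the NAT rule inspected like this (IP and ports taken from the example; the application inside the container must actually be listening on 1500):

```shell
# A request to the host IP and host port is DNAT'ed into the container
curl http://192.168.134.122:1234/

# Show only the Docker-created NAT rules for this mapping
sudo iptables -L -n -t nat | grep 1234
```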
Thanks