If I run this command on the host (Ubuntu):
echo "PD.file.processing:1|c" | nc -w 1 -u localhost 8125
The UDP packet is sent fine, and the dogstatsd agent listening on port 8125 picks it up; I can see the metric.
But when I run the equivalent command from inside a Docker container on the same host:
echo "MD.file.returned.success:1|c" | nc -w 1 -u 172.17.0.1 8125
it never reaches the host and is not captured by the dogstatsd agent listening on 8125. Here are the container's port mappings from docker ps:
8125/udp, 0.0.0.0:20019->8080/tcp, 0.0.0.0:20018->8443/tcp, 0.0.0.0:20017->11400/tcp, 0.0.0.0:20016->11401/tcp, 0.0.0.0:20015->11402/tcp
Here is the EXPOSE line in my Dockerfile:
EXPOSE 8125/udp
Am I doing something wrong?
EXPOSE doesn't publish container ports to the host; it mostly documents intent, though declaring it is still considered good practice. You'd usually then need to publish the ports too (e.g. --publish=8125:8125).
However, you want to achieve the inverse, if I understand correctly: make the host's port accessible to the container. One way to do this is to run the container with --net=host; your container can then reach the host's port 8125 directly.
And if you did want to access any of the container's ports from the host, you'd be able to do so without publishing them.
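A minimal sketch of that approach, assuming your image is called myimage (a placeholder) and dogstatsd listens on the host's 8125/udp:
# run the container on the host's network stack (image name is hypothetical)
docker run --net=host myimage
# inside the container, localhost now refers to the host, so this reaches the agent
echo "MD.file.returned.success:1|c" | nc -w 1 -u localhost 8125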
Related
I have a container running with --network=my-overlay-network so that I can prevent any API calls from the containers to services on the internet. However, I do need to make API calls from the container to the host's localhost.
I used -p dockerport:localhostport in the docker run command to publish/map the container's port to the host. However, it always fails with "Connection refused".
I also tried adding --add-host host.docker.internal:$(ip addr show docker0 | grep -Po 'inet \K[\d.]+') to docker run. I still cannot connect to the server on that port; I get "Couldn't connect to server" for host.docker.internal:port.
Can I open a port while the container is on the overlay network?
It sounds like you got the ports backwards: instead of -p dockerport:localhostport it should be -p hostport:containerport, where hostport is the port you want to open on the local host and containerport is the port the container exposes in its Dockerfile.
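A minimal sketch, assuming a web server listens on port 80 inside the container (image name and ports are placeholders):
# host port first, container port second
docker run --network=my-overlay-network -p 8080:80 myimage
# then, from the host:
curl http://localhost:8080/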
My Docker container (an SCTP server) is listening on SCTP port 36412. However, my SCTP client on the host machine is unable to communicate with the container. How do I expose this port from the container to the host? Is it not the same as for TCP/UDP?
When I run docker run -p 36412:36412 myimage, I get the error below.
Invalid proto: sctp
From reading the source code, the general form of the docker run -p option is:
docker run -p ipAddr:hostPort:containerPort/proto
Critically, the "protocol" part of this is allowed to be any of tcp, udp, or sctp; it is lowercased, and defaults to tcp if not specified. (SCTP support in -p is comparatively recent, so an Invalid proto: sctp error usually means your Docker version predates it.)
It looks like, for your application, you should be able to run:
docker run -p 36412:36412/sctp ...
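And if you also need to bind to a specific host address, the full ipAddr:hostPort:containerPort/proto form applies; for example, to accept SCTP connections only from the host itself (image name is a placeholder):
docker run -p 127.0.0.1:36412:36412/sctp mysctpimage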
Use the -p flag when running the container to map an open port on your host machine to the container port; since the protocol defaults to tcp, append /sctp here. The example below maps port 36412 on the host to 36412 in the container:
docker run -p 36412:36412/sctp mysctpimage
To view the ports your container exposes and where they map to on the host:
docker port <containerId>
This tells you which container port and protocol are mapped to which address on your host machine. For example, running a simple WebApi project may yield:
80/tcp -> 0.0.0.0:32768
Docker Port Documentation
How to publish or expose a port when running a container
From my Docker container I want to access the MySQL server running on my host at 127.0.0.1, and I want to access the web server running in my container from the host. I tried this:
docker run -it --expose 8000 --expose 8001 --net='host' -P f29963c3b74f
But none of the ports show up as exposed:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
093695f9bc58 f29963c3b74f "/bin/sh -c '/root/br" 4 minutes ago Up 4 minutes elated_volhard
$
$ docker port 093695f9bc58
If I don't use --net='host', the ports are exposed and I can access the web server in the container.
How can the host and container mutually access each others ports?
When you use --expose, the Docker documentation says:
The port number inside the container (where the service listens) does
not need to match the port number exposed on the outside of the
container (where clients connect). For example, inside the container
an HTTP service is listening on port 80 (and so the image developer
specifies EXPOSE 80 in the Dockerfile). At runtime, the port might be
bound to 42800 on the host. To find the mapping between the host ports
and the exposed ports, use docker port.
With --net=host
--network="host" gives the container full access to local system services such as D-bus and is therefore considered insecure.
Here nothing shows up under PORTS because the container shares all of the host's ports.
If you don't want to use host networking, you can still reach host ports from inside the container via the docker0 bridge interface; see these questions and the short sketch after them:
- How to access host port from docker container
- From inside of a Docker container, how do I connect to the localhost of the machine?
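For instance, on the default bridge network the host is usually reachable from a container at the docker0 gateway address (commonly 172.17.0.1, but verify on your system):
# from inside the container: the default gateway is normally the host's docker0 address
ip route | awk '/default/ { print $3 }'
# test a host service from the container, e.g. a statsd agent on 8125/udp
echo "test:1|c" | nc -w 1 -u 172.17.0.1 8125
Note that this only works if the host service listens on 0.0.0.0 or on the bridge address, not only on 127.0.0.1.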
When you want to access container from host you need to publish ports to host interface.
The -P option publishes all the ports to the host interfaces. Docker
binds each exposed port to a random port on the host. The range of
ports are within an ephemeral port range defined by
/proc/sys/net/ipv4/ip_local_port_range. Use the -p flag to explicitly
map a single port or range of ports.
In short, when you define just --expose 8000 the port is not published on host port 8000; with -P it is bound to some random host port. To make the service reachable on host port 8000 you need to publish it explicitly with -p 8000:8000.
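A short sketch of the difference (image name is hypothetical):
# --expose plus -P: container port 8000 is bound to a random host port
docker run -d -P --expose 8000 mywebapp
docker port <containerId>
# explicit publish: host port 8000 maps straight to container port 8000
docker run -d -p 8000:8000 mywebapp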
Docker's network model is to create a new network namespace for your container. That means the container gets its own 127.0.0.1. If you want a container to reach a MySQL service that is only listening on 127.0.0.1 on the host, you won't be able to reach it.
--net=host will put your container into the same network namespace as the host, but this is not advisable since it effectively turns off all of the other network features Docker provides: you don't get isolation, you don't get port exposing/publishing, etc.
The best solution will probably be to make your MySQL server listen on an interface that is routable from the Docker containers.
If you don't want to make MySQL listen on your public interface, you can create a bridge interface, give it an unused private IP (make sure you don't have any conflicts), connect it to nothing, and configure MySQL to listen only on that IP and 127.0.0.1. For example:
sudo brctl addbr myownbridge
sudo ifconfig myownbridge 10.255.255.1
sudo docker run --rm -it alpine ping -c 1 10.255.255.1
That IP address will be routable from both your host and any container running on that host.
Another approach would be to containerize your MySQL server. You could put it on the same network as your other containers and reach it that way. You can even publish its port 3306 to the host's 127.0.0.1 interface.
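A hedged sketch of that last approach (image, container name, and password are placeholders):
# publish the container's 3306 only on the host's loopback interface
docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=secret -p 127.0.0.1:3306:3306 mysql
# from the host:
mysql -h 127.0.0.1 -P 3306 -u root -p
Other containers on the same user-defined network could then reach it by container name, with no published port at all.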
I'm trying to expose a Docker container to the outside world, not just the host machine. I created it from a base CentOS image; the relevant part of the Dockerfile looks like this:
# install openssh server and ssh client
RUN yum install -y openssh-server
RUN yum install -y openssh-clients
RUN echo 'root:password' | chpasswd
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
RUN sed -ri 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
I run this image like so:
sudo docker run -d -P crystal/ssh
When I try to look at the container with sudo docker ps, I see Ports:
0.0.0.0:49154->22/tcp
If I run ifconfig on the host machine (Ubuntu), I see docker0 with inet addr:172.17.42.1. I can ping this from my host machine, but not from any other machine. What am I doing wrong in setting up the container so that it is reachable from the outside world? Thanks.
Edit:
I have tried inspecting the IPAddress of the container and I see IPAddress: 172.17.0.28, but I cannot ping that either...
If I try nmap, it seems to return the ports. So does that mean the port is open and I should be able to ssh into it if I have ssh set up? Thanks.
nmap -p 49154 10.211.55.1 shows that the port is open with an unknown service.
I tried to ssh in by ssh -l root -p 49154 10.211.55.1 and I get
Read from socket failed: Connection reset by peer.
UPDATE
Your Dockerfile is wrong. Your sshd is not properly configured and does not start properly, and that's the reason the container does not respond correctly on port 22. See the errors:
Could not load host key: /etc/ssh/ssh_host_rsa_key
Could not load host key: /etc/ssh/ssh_host_dsa_key
You need to generate host keys; the errors above name both an RSA and a DSA key. These lines will do the magic:
RUN ssh-keygen -P "" -t rsa -f /etc/ssh/ssh_host_rsa_key
RUN ssh-keygen -P "" -t dsa -f /etc/ssh/ssh_host_dsa_key
PREVIOUS ANSWER
You probably need to look up the IP address of the eth0 interface (the one that is reachable from your network) and connect to your container via that address. Traffic to and from the docker0 bridge is forwarded to your eth interfaces by default.
Also, you should check whether IP forwarding is enabled:
cat /proc/sys/net/ipv4/ip_forward
This command should return 1; otherwise you need to enable forwarding. Note that sudo echo 1 > /proc/sys/net/ipv4/ip_forward does not work, because the redirection is performed by your unprivileged shell; use tee instead:
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
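Alternatively, sysctl sets the same kernel parameter without any redirection trick (and adding it to /etc/sysctl.conf makes it persistent):
sudo sysctl -w net.ipv4.ip_forward=1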
Q: Why can you connect to the container this way?
If you have IP forwarding enabled, packets arriving on the eth0 interface are forwarded to the virtual docker0 interface, and from there the packet is delivered to the correct container. See Docker Advanced Networking for more details:
But docker0 is no ordinary interface. It is a virtual Ethernet bridge
that automatically forwards packets between any other network
interfaces that are attached to it. This lets containers communicate
both with the host machine and with each other. Every time Docker
creates a container, it creates a pair of “peer” interfaces that are
like opposite ends of a pipe — a packet sent on one will be received
on the other. It gives one of the peers to the container to become its
eth0 interface and keeps the other peer, with a unique name like
vethAQI2QT, out in the namespace of the host machine. By binding every
veth* interface to the docker0 bridge, Docker creates a virtual subnet
shared between the host machine and every Docker container.
You can't ping 172.17.42.1 from outside your host because it is a private IP: it can only be reached from within the private network it belongs to, i.e. the network created on the host running the Docker container, made up of the virtual bridge docker0 and the containers attached to it through virtual interfaces.
Moreover, 172.17.42.1 is the IP of the docker0 bridge, not the IP of your container. If you want to know the container's IP, run ifconfig inside it or use docker inspect.
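For example, something along these lines prints the container's address on the default bridge (the template path is the standard docker inspect one):
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <containerId>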
I'm not an expert on port mapping, but as I understand it this means that to reach the container on port 22 you connect to port 49154 on the host and the traffic is forwarded.
I've started using docker for dev, with the following setup:
Host machine - ubuntu server.
Docker container - webapp w/ tomcat server (using https).
As far as host-container access goes - everything works fine.
However, I can't manage to access the container's webapp from a remote machine (though still within the same network).
When running
docker port <container-id> 443
the output is as expected:
172.16.*.*:<random-port>
so Docker's port binding seems fine.
Any ideas?
Thanks!
I figured out what I missed, so here's a simple flow for accessing a Docker container's webapp from remote machines:
Step #1: Bind physical host ports (e.g. 22, 443, 80, ...) to the container's virtual ports. Possible syntax:
docker run -p 127.0.0.1:443:3444 -d <docker-image-name>
(see docker docs for port redirection with all options)
Step #2: Redirect the host's physical port to the container's allocated virtual port. Possible (Linux) syntax:
iptables -t nat -A PREROUTING -i <host-interface-device> -p tcp --dport <host-physical-port> -j REDIRECT --to-port <container-virtual-port>
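For instance, instantiated for this setup (assuming the host's public interface is eth0, host port 443, and container port 3444 as in step #1):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 3444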
That should cover the basic use case.
Good luck!
Correct me if I'm wrong, but as far as I'm aware the Docker host creates a private network for its containers which is inaccessible from the outside. That said, your best bet would probably be to access the container at {host_IP}:{mapped_port}.
If your container was built with a Dockerfile that has an EXPOSE statement, e.g. EXPOSE 443, then you can start the container with the -P option (as in "publish" or "public"). The port will be made available to connections from remote machines:
$ docker run -d -P mywebservice
If you didn't use a Dockerfile, or if it didn't have an EXPOSE statement (it should!), then you can also do an explicit port mapping:
$ docker run -d -p 80 mywebservice
In both cases, the result will be a publicly-accessible port:
$ docker ps
9bcb… mywebservice:latest … 0.0.0.0:49153->80/tcp …
Last but not least, you can force the port number if you need to:
$ docker run -d -p 8442:80 mywebservice
In that case, connecting to your Docker host IP address on port 8442 will reach the container.
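For example, from a remote machine on the same network (the host address is a placeholder):
$ curl http://<docker-host-ip>:8442/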
There are some alternatives for accessing Docker containers from an external device (on the same network); check out this post for more information: http://blog.nunes.io/2015/05/02/how-to-access-docker-containers-from-external-devices.html