I see various instances of ports 2375 and 4243 being used for seemingly the same thing while searching the internet. Also, my local machine requires I use 2375 to connect whereas when I push it to our CI server it requires it be set to 4243.
What does Docker use these ports for and how do they differ?
The docker socket can be configured on any port with the dockerd -H option. Common docker ports that I see include:
2375: unencrypted docker socket, remote root passwordless access to the host
2376: tls encrypted socket; most likely this is what your CI server's 4243 port is serving. 4243 was the default API port in early Docker releases, and the unencrypted/encrypted 2375/2376 pair mirrors the http 80 / https 443 convention
2377: swarm mode socket, for swarm managers, not for docker clients
5000: docker registry service
4789 and 7946: overlay networking
Only the first two are set with dockerd -H, swarm mode can be configured as part of docker swarm init --listen-addr or docker swarm join --listen-addr.
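For illustration, a daemon invocation binding both the local unix socket and a TLS-protected TCP port could look like the following sketch (the certificate file names are assumptions; see the TLS guide linked below for generating them, and 192.168.0.10 stands in for your manager's address):

# Listen on the local unix socket and on TCP 2376 with TLS verification
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 \
  --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem

# Swarm mode picks its listen address separately:
docker swarm init --listen-addr 192.168.0.10:2377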
I strongly recommend disabling the 2375 port and securing your docker socket. It's trivial to exploit this port to gain full root access on the host, remotely and without a password. The command to do so is as simple as:
docker -H $your_ip:2375 run -it --rm \
--privileged -v /:/rootfs --net host --pid host busybox
That can be run from any machine with a docker client to give someone a root shell on your host, with the full host filesystem available under /rootfs, the host network visible to ip a, and every host process visible to ps -ef.
To set up TLS security on the docker socket, see the instructions at https://docs.docker.com/engine/security/https/
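Once the certificates are in place, a client connection over the encrypted port looks like this sketch (certificate file names again assumed from that guide):

# Verify the server cert and present a client cert on port 2376
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://$your_ip:2376 version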
I run my docker-compose service remotely on another physical machine on my local network via -H=ssh://user@192.168.0.x.
I'm exposing ports using docker-compose:
ports:
- 8001:8001
However, when I start the service, the port 8001 is not exposed to my localhost. I can ssh into the machine running the container, and port 8001 is indeed listening there.
How do I instruct docker-compose to tunnel this port from the remote machine running the container, to my local docker client?
Docker doesn't have the ability to do this. But from your ssh client's point of view, the container is no different from any other program running on the remote host, and you can use the ssh -L option to forward a local port to the remote system.
# Tell ssh to forward local port 8001 to remote port 8001
ssh -L 8001:localhost:8001 user@192.168.0.x

# Incidentally, on the remote host, that port happens to be published
# by a Docker container
docker run -p 127.0.0.1:8001:8001 ...
Whenever you set DOCKER_HOST or use the docker -H option, you're giving instructions to a remote Docker daemon which interprets them relative to itself. docker -H ... -v ... mounts a directory on the same system as the Docker daemon into a container; docker -H ... -p ... publishes a port on the same system as the Docker daemon. Docker has no way to take the filesystem content or network stack of the local system into account when doing this.
(The one exception is docker -H ... build, which actually creates a tar file of the local directory and sends it across the network to use as the build context, so you can have a remote Docker daemon build an image of a local source tree.)
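As a concrete sketch of this (the image name myimage and the path /srv/data are placeholders): with DOCKER_HOST set, every path and port in the command refers to the remote machine.

export DOCKER_HOST=ssh://user@192.168.0.x
# /srv/data is a directory on 192.168.0.x, not on your laptop;
# port 8001 is likewise published on 192.168.0.x
docker run -v /srv/data:/data -p 8001:8001 myimage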
I have 2 IP addresses in my rancher host (centos): 1.1.1.1 and 2.2.2.2
1.1.1.1 is the IP address I want to use to access the rancher UI and SSH into the host.
I want to use 2.2.2.2 for accessing the containers of an application. I have 2 containers, one nginx and one ssh. I configured them to map host 2.2.2.2:80 to the nginx container's port 80 and host 2.2.2.2:22 to the ssh container's port 22.
I have also changed the default run command for the rancher container to listen on ports 80 and 443 of IP 1.1.1.1.
If I go to my browser and access 1.1.1.1 I see rancher as expected, and if I access 2.2.2.2 I see my container app as expected.
However, if I try accessing 1.1.1.1:22 I end up connecting to the container's ssh, which should only be listening on 2.2.2.2:22.
Am I missing something here? Is this a configuration issue on the host or the container? Can the container get access to something that it shouldn't even be aware of?
UPDATE
Let me try to clarify the setup:
Rancher is running in a host with 2 IP addresses. When I run rancher, I execute the following command, so it becomes attached to the first IP address:
docker run -d --volumes-from rancher-data --restart=unless-stopped -p 1.1.1.1:80:80 -p 1.1.1.1:443:443 rancher/rancher
docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.1.7 --server https://rancher1.my.tld --token [token] --ca-checksum [checksum] --etcd --controlplane --worker
I have 4 containers configured in the rancher UI, which I want mapped to 2.2.2.2:22, 2.2.2.2:80, 2.2.2.2:2222 and 2.2.2.2:8080.
These are 2 environments of one application: 22 and 80 are the ssh and nginx containers for the LIVE environment (sharing a data volume between them), and 2222 and 8080 are the same thing for the QA environment. I use the ssh container to upload content to the nginx container through the shared data volume.
I don't see a problem with this configuration, except that when I configure the ssh container to use port 22, connecting to the host's ssh lands me in the container's ssh instead.
UPDATE 2
Here is a screenshot from the port mapping settings in the container: https://snag.gy/idTjoV.jpg
Container port 22 mapped to IP 2.2.2.2:222
If I set that to 2.2.2.2:22, SSH to host stops working, and ssh connections are established to the container instead.
I have 2 docker containers running on my Mac host - container 1 is Jenkins from Docker Hub and container 2 is SonarQube from Docker Hub. I have both containers running successfully. I can access Jenkins from my host by going to http://localhost:8080/ and I can access my SonarQube by going to http://localhost:9000/.
The Jenkins container was started like this:
docker run -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:latest
The SonarQube container was started like this:
docker run -d -p 9000:9000 sonarqube
Now I want the two containers to communicate with each other, so I need to provide each one with the other's IP address.
I got the IP address of each container by executing this:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' container_name_or_id
This returns an IP address of 172.17.0.2 for the Jenkins container and 172.17.0.3 for the SonarQube container. But when I try and access the Jenkins container from my host by going to http://172.17.0.2:8080 I get a request timeout. The same thing happens when I try and access the SonarQube container from my host by going to http://172.17.0.3:9000
Is this normal behavior?
Shouldn't I be able to access each container from my host by their internal IP address?
And how can I test that one container (e.g. Jenkins) can access the other container (e.g. SonarQube) by IP address?
Is this normal behavior? Shouldn't I be able to access each container from my host by their internal IP address?
What you describe is normal behavior: you can't directly reach the Docker-internal IP addresses from a macOS host. See "Per-container IP addressing is not possible" in the Docker for Mac docs.
How can I test that one container (e.g. Jenkins) can access the other container (e.g. SonarQube) by IP address?
This isn't something I normally "test" per se. Start up both processes and have them make their normal (HTTP) connections; if it works you'll see appropriate log messages, and if it doesn't work you'll see complaints. (Getting a root shell in a container to send ICMP packets from one container to another seems to be a popular option but doesn't prove much.)
Also: don't make this connection by explicit IP address. As you've noticed already the Docker-internal IP addresses aren't usable in some contexts, and they'll change whenever you restart containers. Instead, Docker provides an internal DNS service that can resolve host names when communicating between containers, but you need to explicitly set up a non-default bridge network. That setup would look like:
docker network create jenkinsnet
docker run --name sonarqube -d --net jenkinsnet \
-p 9000:9000 \
sonarqube
docker run --name jenkins -d --net jenkinsnet \
-p 8080:8080 -p 50000:50000 \
-e SONARQUBE_URL=http://sonarqube:9000 \
jenkins/jenkins:latest
So I've explicitly created a network; started both containers connected to it; and told the client container (via an environment variable) where the server container is. You don't have to publish ports with docker run -p to reach them this way; whether you do or not, use the port the server process is listening on (the second port number in the docker run -p option).
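If you do want a quick one-off check from inside the client container, something like this sketch works, assuming curl is present in the image (SonarQube's api/server/version endpoint returns the server version as plain text):

docker exec jenkins curl -s http://sonarqube:9000/api/server/version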
From the host, your only (portable, reliable) path to reach the container is via its published ports.
Looks like you are using the default bridge network model. The internal IPs are meant for containers to talk to each other under bridge networking; you cannot access them from the host.
There are multiple options for you.
You can configure http://172.17.0.3:9000 as your sonar endpoint in Jenkins.
You can configure http://172.17.0.2:8080 as your jenkins endpoint in sonar.
If you don't want to hard-code the above IPs, both of your containers can also talk to each other through the Docker default gateway IP (172.17.0.1) and the published host port, so you can configure http://172.17.0.1:9000 and http://172.17.0.1:8080 as well.
Note - the default gateway IP changes if you define a user-defined bridge network.
https://docs.docker.com/v17.09/engine/userguide/networking/#the-default-bridge-network
https://docs.docker.com/network/network-tutorial-standalone/
If you want to spin up both containers using docker-compose, then you can link the containers using their service names. Just follow Networking in Compose.
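A minimal docker-compose.yml sketch for this (the SONARQUBE_URL variable is carried over from the accepted answer and is an assumption about how your Jenkins job locates sonar):

version: "3"
services:
  jenkins:
    image: jenkins/jenkins:latest
    ports:
      - "8080:8080"
      - "50000:50000"
    environment:
      - SONARQUBE_URL=http://sonarqube:9000
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"

Compose puts both services on one network automatically, so jenkins can reach sonarqube at http://sonarqube:9000 by service name.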
The accepted answer (https://stackoverflow.com/a/53992787/7730554) already provides valid options; of those, I personally prefer using docker compose.
But since you are running Docker on Mac, you could also use host.docker.internal in combination with the published host port. Docker takes care of resolving host.docker.internal to the correct IP even if your host IP changes.
See https://docs.docker.com/desktop/mac/networking/.
Note that this is for development mode only and works when you use Docker Desktop.
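For instance, from inside the Jenkins container on Docker Desktop, the SonarQube port published on the host would be reachable like this (a sketch, assuming the port mapping from the question):

# Run from inside a container on Docker Desktop
curl http://host.docker.internal:9000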
How to access or connect to a process running on docker on host A from a remote host B
Consider a host A with IP 192.168.0.3 which is running an application in docker on port 3999.
I want to access that application from a remote machine with IP 192.168.0.4 in the same subnet.
To be precise, I am running a Kafka producer on the server and I am trying to receive the messages using kafka-console-consumer.
Use --net=host to run your container; it'll use the host's network stack, so you can connect to the application inside the container as if it were running on the host directly (see the sketch after this list).
Port mapping: use the -p option to map a port inside your container to a port on your host, e.g. docker run -d -p <host port>:<container port> <image>; then you can connect to <host>:<host port> to reach the application inside the container.
Docker's built-in multi-host networking. In early releases the network driver lived outside docker's core and you had to use 3rd-party tools like flannel or weave for multi-host connections, but since release 1.9 it has been merged into docker. You can follow its guide to set it up.
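A sketch of the first option for the Kafka case in the question (the image name kafka-producer and the topic name mytopic are placeholders):

# On host A (192.168.0.3): host networking, so the broker/producer port
# 3999 is opened directly on the host
docker run -d --net=host kafka-producer

# On host B (192.168.0.4): consume over the LAN
kafka-console-consumer --bootstrap-server 192.168.0.3:3999 --topic mytopic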
Hope this is helpful :-)
First you need to publish the docker container's port on Host A:
docker run -d -p 3999:3999 kafka-producer
Then you can access it from Host B using IP:port:
192.168.0.3:3999
I have the docker daemon installed on an UbuntuA machine.
I am using an UbuntuB machine as the docker client.
I know that the UbuntuA machine has the docker daemon installed and can run operations locally.
But I cannot figure out which port the daemon is listening on.
I am using this command:
sudo docker -H tcp://127.0.0.1:5555 -d &
and after this, when I use the following command:
sudo docker -H tcp://127.0.0.1:5555 info
I am getting an error: docker daemon not found.
How do I find out which port the daemon is listening on?
Using the -H tcp://127.0.0.1:5555 docker daemon option on the UbuntuA machine will instruct docker to bind to the loopback network interface (127.0.0.1). As a result it will only accept connections originating from the UbuntuA machine.
If you want to accept connections incoming from any network interface, use -H tcp://0.0.0.0:5555. Be aware that anyone able to connect to your UbuntuA machine on port 5555 will be able to control your docker host. You need to protect it with firewall rules that allow only UbuntuB to connect to UbuntuA on port 5555.
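A sketch of that setup (192.168.0.20 is an assumed address for UbuntuB, and <UbuntuA-ip> is a placeholder; note that on modern Docker the daemon binary is dockerd, while older releases used docker -d as in your command):

# On UbuntuA: listen on all interfaces
sudo dockerd -H tcp://0.0.0.0:5555 &

# Allow only UbuntuB to reach port 5555, drop everyone else
sudo iptables -A INPUT -p tcp --dport 5555 -s 192.168.0.20 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5555 -j DROP

# On UbuntuB:
docker -H tcp://<UbuntuA-ip>:5555 info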