I am running two AWS EC2 Ubuntu instances in separate regions (Ireland and London). A Docker container is running on each instance.
An established IPSec connection exists:
root@ip-10-0-1-178:/mnt# ipsec status
Security Associations (1 up, 0 connecting):
Ireland-to-London[2]: ESTABLISHED 37 seconds ago,
172.17.0.1[34.X.X.X]...35.X.X.X[35.X.X.X]
Here are the relevant IPs for each:
Ireland
Public IP: 34.X.X.X
Private IP: 10.0.1.178
VPC CIDR Block: 10.0.0.0/16
London
Public IP: 35.X.X.X
Private IP: 10.10.1.187
VPC CIDR Block: 10.10.0.0/16
Docker (same for both)
Public IP: 172.17.0.1
VPC CIDR Block: 172.17.0.0/16
Ports open: 500 and 4500
I cannot figure out how to transfer files using scp from a Docker container on one instance to the Docker container on the other.
Make sure your Docker container has its SSH port 22 mapped to one of the open ports (500 or 4500); then you should be able to use
scp -P YOUR_PORT file USERNAME@AWS_IP:/file
As a workaround, you can also use docker cp to copy the file from the container to the host machine, scp it to the other host, and then use docker cp to copy it into the container there.
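A rough sketch of that workaround (the container names, paths and remote user here are placeholders, not taken from the question):
# on the Ireland host: copy the file out of the local container
docker cp ireland_container:/data/file.txt /tmp/file.txt
# host-to-host copy to the London instance (key/password setup not shown)
scp /tmp/file.txt ubuntu@35.X.X.X:/tmp/file.txt
# on the London host: copy the file into the container there
docker cp /tmp/file.txt london_container:/data/file.txt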
Here is the list of things you need to do:
Enable the SSH port (e.g. 22) in both containers.
EXPOSE / forward the container port to a host machine port (e.g. 2200:22) on both machines.
Open the forwarded port in the host machines' firewall / security group.
Now run scp -P 2200 localfile.txt <london_user>@<london_public_ip>:<remote path>.
I skipped the part where you configure password- or key-based authentication; there are already many resources on that. A quick end-to-end sketch follows.
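Putting that list together, a minimal sketch could look like this (the image name, user and paths are assumptions; key/password setup is omitted as noted above):
# on the London host: run a container with sshd and publish its port 22 on host port 2200
docker run -d --name receiver -p 2200:22 my-ssh-enabled-image
# open port 2200 in the London firewall / security group, then from Ireland:
scp -P 2200 localfile.txt london_user@35.X.X.X:/remote/path/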
I have 2 IP addresses on my Rancher host (CentOS): 1.1.1.1 and 2.2.2.2.
1.1.1.1 is the IP address I want to use to access the rancher UI and SSH into the host.
I want to use 2.2.2.2 for accessing the containers of an application. I have 2 containers, one nginx and one ssh. I configured the containers so that container port 80 is mapped to host 2.2.2.2:80 and container port 22 to host 2.2.2.2:22.
I have also changed the default run command for the Rancher container so that it listens on ports 80 and 443 of IP 1.1.1.1.
If I go to my browser and access 1.1.1.1 I see rancher as expected, and if I access 2.2.2.2 I see my container app as expected.
However, if I try accessing 1.1.1.1:22 I end up connecting to the container ssh, which should only be listening on 2.2.2.2:22.
Am I missing something here? Is this a configuration issue on the host or the container? Can the container get access to something that it shouldn't even be aware of?
UPDATE
Let me try to clarify the setup:
Rancher is running on a host with 2 IP addresses. When I run Rancher, I execute the following command, so it becomes bound to the first IP address:
docker run -d --volumes-from rancher-data --restart=unless-stopped -p 1.1.1.1:80:80 -p 1.1.1.1:443:443 rancher/rancher
docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.1.7 --server https://rancher1.my.tld --token [token] --ca-checksum [checksum] --etcd --controlplane --worker
I have 4 containers configured in the Rancher UI, which I want pointing to 2.2.2.2:22, 2.2.2.2:80, 2.2.2.2:2222 and 2.2.2.2:8080.
These are 2 environments for an application. Ports 22 and 80 are the ssh and nginx containers for the LIVE environment (sharing a data volume between them), and the same goes for 2222 and 8080, which are for the QA environment. I use the ssh container to upload content to the nginx container through the shared data volume.
I don't see a problem with this configuration, except that when I configure the ssh container to use port 22 and then try connecting to the host's ssh, I get connected to the container's ssh instead.
UPDATE 2
Here is a screenshot of the port mapping settings in the container: https://snag.gy/idTjoV.jpg
Container port 22 mapped to IP 2.2.2.2:222
If I set that to 2.2.2.2:22, SSH to host stops working, and ssh connections are established to the container instead.
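For reference, pinning a published port to a single host IP from the command line looks like this (a minimal sketch; the image names are hypothetical and the IPs/ports are the placeholders from the question):
# LIVE ssh container: container port 22 published only on 2.2.2.2, host port 222
docker run -d --name live-ssh -p 2.2.2.2:222:22 my-ssh-image
# LIVE nginx container: container port 80 published only on 2.2.2.2:80
docker run -d --name live-nginx -p 2.2.2.2:80:80 nginx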
We have a swarm running on 6 hosts and about 15 containers. There is one access point open on port 3010.
On every host (each being a swarm node) there is a local, isolated network with 3 Docker containers. On each host, one of these containers wants to connect to that published port 3010.
I would like to use the port on whichever host is currently running that container, but I do not know whether this is wise.
How do I work out the host name to use inside a Docker container to connect to the local swarm port? localhost and 127.0.0.1 are not available. I can connect to a container on the swarm overlay network, but that is not possible when starting the container, because of the local isolated network.
How do I work out the host name to use inside a Docker container to connect to the local swarm port?
It is the name of your service.
E.g. when you run docker service create --name blue --network dev markuman/color,
then you can attach to this service's container by figuring out its exact name/ID:
docker ps | grep blue
eb46c52d0568   markuman/color:latest   "/bin/sh -c '/bin/..."   51 seconds ago   Up 49 seconds   80/tcp   blue.1.o5w76smq3kh5jlomltf6yohj3
and simply do
docker exec -ti blue.1.o5w76smq3kh5jlomltf6yohj3 bash
That's it. From there you can ping or ssh into other services which are assigned to the same network.
E.g. when docker service create --name apache ... is running in the same network, just do a ping apache. That's sufficient.
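A minimal sketch of that flow (the network and service names follow the examples above; httpd is an assumed stand-in image for the apache service, and <task-id> is a placeholder):
# create an overlay network and attach two services to it
docker network create --driver overlay dev
docker service create --name blue --network dev markuman/color
docker service create --name apache --network dev httpd
# on the node running the task, find the exact container name
docker ps | grep blue
# enter it and resolve the other service by its service name
docker exec -ti blue.1.<task-id> ping -c 1 apache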
I have a Docker container and a virtual machine (VM) on the same host (openSUSE). The Docker container has an IP like 172.18.0.2, the host IP is something like 3.204.XX.XX, and the VM IP is also something like 3.204.xx.xx. I am able to ping the container from the host, and the VM and the host can ping each other, but I am unable to ping the container from the VM on the same host. Is there a way to access the container on the host from the VM on the same host? Please help.
It is not possible directly, because Docker creates its own bridge (docker0) and all the container traffic is routed through NAT, while VirtualBox also creates its own bridge/interface, which is why the VM cannot reach the container. But you can get access by exposing a port, as sketched below.
The above-mentioned requirement is also possible with Consul service discovery and host network config modification.
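A minimal sketch of the port-publishing approach (nginx is just an example image; 3.204.XX.XX stands for the host IP from the question):
# on the openSUSE host: publish container port 80 on host port 8080
docker run -d --name web -p 8080:80 nginx
# from the VM: reach the container through the host IP and the published port
curl http://3.204.XX.XX:8080/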
I'm a bit of a beginner with Docker. I couldn't find any clear, in-depth description of what the --net=host option of the docker run command does, and I'm a bit confused about it.
Can we use it to access applications running in Docker containers without specifying a port? As an example, if I run a web app deployed via a Docker image on port 8080 by using the option -p 8080:8080 in the docker run command, I know I will have to access it on port 8080 at the Docker container's IP, e.g. <container IP>:8080/theWebAppName. But I cannot really figure out how the --net=host option works.
After the docker installation you have 3 networks by default:
docker network ls
NETWORK ID NAME DRIVER SCOPE
f3be8b1ef7ce bridge bridge local
fbff927877c1 host host local
023bb5940080 none null local
I'm trying to keep this simple. So if you start a container by default it will be created inside the bridge (docker0) network.
$ docker run -d jenkins
1498e581cdba jenkins "/bin/tini -- /usr..." 3 minutes ago Up 3 minutes 8080/tcp, 50000/tcp friendly_bell
In the Dockerfile of jenkins the ports 8080 and 50000 are exposed. Those ports are opened for the container on its bridge network, so everything inside that bridge network can access the container on ports 8080 and 50000. Everything in the bridge network is in the private range "Subnet": "172.17.0.0/16". If you want to access the container from the outside, you have to map the ports with -p 8080:8080. This maps the port of your container to a port on your real server (the host network), so accessing your server on port 8080 will route to your bridge network on port 8080.
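A quick sketch of that mapping, using the same jenkins image as above:
# same image, but with both exposed ports mapped to the host
docker run -d -p 8080:8080 -p 50000:50000 jenkins
# jenkins now answers on port 8080 of the host itself
curl -I http://localhost:8080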
Now you also have the host network, which does not containerize the container's networking. So if you start a container on the host network, it will look like this (it's the first one):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1efd834949b2 jenkins "/bin/tini -- /usr..." 6 minutes ago Up 6 minutes eloquent_panini
1498e581cdba jenkins "/bin/tini -- /usr..." 10 minutes ago Up 10 minutes 8080/tcp, 50000/tcp friendly_bell
The difference is in the ports. Your container is now inside your host network, so if you open port 8080 on your host you will access the container immediately.
$ sudo iptables -I INPUT 5 -p tcp -m tcp --dport 8080 -j ACCEPT
I've opened port 8080 in my firewall, and when I now access my server on port 8080, I'm accessing my jenkins. I think this blog is also useful for understanding it better.
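For completeness, the host-network container from the example above would have been started roughly like this (a sketch, again with the jenkins image):
# no -p mapping needed: jenkins listens on 8080 directly on the host's interfaces
docker run -d --net=host jenkins
ss -tlnp | grep 8080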
The --net=host option is used to make the programs inside the Docker container look like they are running on the host itself, from the perspective of the network. It allows the container greater network access than it can normally get.
Normally you have to forward ports from the host machine into a container, but when the containers share the host's network, any network activity happens directly on the host machine - just as it would if the program was running locally on the host instead of inside a container.
While this does mean you no longer have to expose ports and map them to container ports, it means you have to edit your Dockerfiles to adjust the ports each container listens on, to avoid conflicts as you can't have two containers operating on the same host port. However, the real reason for this option is for running apps that need network access that is difficult to forward through to a container at the port level.
For example, if you want to run a DHCP server then you need to be able to listen to broadcast traffic on the network, and extract the MAC address from the packet. This information is lost during the port forwarding process, so the only way to run a DHCP server inside Docker is to run the container as --net=host.
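A rough sketch of what that looks like (the image name here is purely hypothetical, standing in for whatever DHCP server image you use):
# with --net=host the server sees broadcast traffic and the clients' real MAC addresses
docker run -d --net=host some-dhcp-server-image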
Generally speaking, --net=host is only needed when you are running programs with very specific, unusual network needs.
Lastly, from a security perspective, Docker containers can listen on many ports, even though they only advertise (expose) a single port. Normally this is fine as you only forward the single expected port, however if you use --net=host then you'll get all the container's ports listening on the host, even those that aren't listed in the Dockerfile. This means you will need to check the container closely (especially if it's not yours, e.g. an official one provided by a software project) to make sure you don't inadvertently expose extra services on the machine.
Remember that the host networking driver only works on Linux hosts; it is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
You can also create your own new network with --net="anyname".
This is done to isolate the services in different containers from each other.
Suppose the same service is running in different containers but the port mapping stays the same: the first container starts fine, but the same service in the second container will fail because the host port is already taken.
To avoid this, either change the port mappings or create a separate network.
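A minimal sketch of the second option (the network and container names are made up; nginx stands in for "the same service"):
# create an isolated user-defined network
docker network create mynetwork
# run the same service twice without any host port mapping: no conflict
docker run -d --name web1 --net=mynetwork nginx
docker run -d --name web2 --net=mynetwork nginx
# containers on the same user-defined network resolve each other by name
docker exec web1 getent hosts web2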
From my Docker container I want to access the MySQL server running on my host at 127.0.0.1, and I want to access the web server running in my container from the host. I tried this:
docker run -it --expose 8000 --expose 8001 --net='host' -P f29963c3b74f
But none of the ports show up as exposed:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
093695f9bc58 f29963c3b74f "/bin/sh -c '/root/br" 4 minutes ago Up 4 minutes elated_volhard
$
$ docker port 093695f9bc58
If I don't have --net='host', the ports are exposed, and I can access the web server on the container.
How can the host and container mutually access each others ports?
When you use --expose, this is the definition:
The port number inside the container (where the service listens) does
not need to match the port number exposed on the outside of the
container (where clients connect). For example, inside the container
an HTTP service is listening on port 80 (and so the image developer
specifies EXPOSE 80 in the Dockerfile). At runtime, the port might be
bound to 42800 on the host. To find the mapping between the host ports
and the exposed ports, use docker port.
With --net=host
--network="host" gives the container full access to local system services such as D-bus and is therefore considered insecure.
Here you have nothing under "PORTS" because all ports are open on the host.
If you don't want to use the host network, you can access a host port from the Docker container via the docker bridge interface; see:
- How to access host port from docker container
- From inside of a Docker container, how do I connect to the localhost of the machine?
When you want to access container from host you need to publish ports to host interface.
The -P option publishes all the ports to the host interfaces. Docker
binds each exposed port to a random port on the host. The range of
ports are within an ephemeral port range defined by
/proc/sys/net/ipv4/ip_local_port_range. Use the -p flag to explicitly
map a single port or range of ports.
In short, when you define just --expose 8000 and publish with -P, the port is not published on host port 8000 but on some random ephemeral port. When you want port 8000 on the host to reach the container, you need to publish it explicitly with -p 8000:8000.
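A small sketch of the difference (the image name and container ID are placeholders, and the random host port is just an illustrative value):
# --expose plus -P: Docker picks a random host port from the ephemeral range
docker run -d --expose 8000 -P some-image
docker port <container-id>        # e.g. 8000/tcp -> 0.0.0.0:32768
# explicit publish: host port 8000 forwards to container port 8000
docker run -d -p 8000:8000 some-image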
Docker's network model is to create a new network namespace for your container. That means that container gets its own 127.0.0.1. If you want a container to reach a mysql service that is only listening on 127.0.0.1 on the host, you won't be able to reach it.
--net=host will put your container into the same network namespace as the host, but this is not advisable since it is effectively turning off all of the other network features that docker has: you don't get isolation, you don't get port expose/publishing, etc.
The best solution will probably be to make your mysql server listen on an interface that is routable from the docker containers.
If you don't want to make mysql listen to your public interface, you can create a bridge interface, give it a random ip (make sure you don't have any conflicts), connect it to nothing, and configure mysql to listen only on that ip and 127.0.0.1. For example:
sudo brctl addbr myownbridge
sudo ifconfig myownbridge 10.255.255.255
sudo docker run --rm -it alpine ping -c 1 10.255.255.255
That IP address will be routable from both your host and any container running on that host.
Another approach would be to containerize your mysql server. You could put it on the same network as your other containers and get to it that way. You can even publish its port 3306 to the host's 127.0.0.1 interface.
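A sketch of that last approach (the network and container names are made up, and mysql:8 plus its MYSQL_ROOT_PASSWORD variable are assumptions about the image you'd use):
# put mysql on a user-defined network and publish 3306 only on the host's loopback
docker network create appnet
docker run -d --name db --net=appnet -p 127.0.0.1:3306:3306 -e MYSQL_ROOT_PASSWORD=secret mysql:8
# once the server is up, other containers reach it by name; the host reaches it on 127.0.0.1:3306
docker run --rm --net=appnet mysql:8 mysql -h db -uroot -psecret -e "SELECT 1"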