Expose a locally bound port to the container - docker

I have a local MySQL server that listens on port 3306 on 127.0.0.1 (not 0.0.0.0!) and is therefore accessible only from my local machine.
I want to run a docker or podman (rootless) container and allow it to connect to my MySQL server.
What I'm practically trying to do is pass the equivalent of the SSH -L/-R port tunnel options to docker run or podman run, like so
$ docker run ... -L 3306:localhost:3306
or
$ docker run ... -R 3306:localhost:3306
(depending on which end you consider the tunnel to be established from)
I tried the --add-host=host.docker.internal:host-gateway option, but it requires MySQL to listen on 0.0.0.0:3306, which I don't want to do for security reasons.
My question: how can I create a tunnel between my 127.0.0.1:3306 and some port inside the container?
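A sketch of two approaches that match the -L/-R analogy, assuming default addresses (10.0.2.2 for slirp4netns and 172.17.0.1 for Docker's bridge; both can differ on your system, and the image name and variable names below are placeholders):

```shell
# Rootless podman: slirp4netns can expose the host's loopback interface.
# Inside the container, 10.0.2.2 then reaches the host's 127.0.0.1.
podman run --network slirp4netns:allow_host_loopback=true \
    -e DB_HOST=10.0.2.2 -e DB_PORT=3306 myimage

# Docker: relay from the bridge address to the host's loopback with socat,
# so containers can reach MySQL at 172.17.0.1:3306 without MySQL itself
# having to bind to 0.0.0.0.
socat TCP-LISTEN:3306,bind=172.17.0.1,fork,reuseaddr TCP:127.0.0.1:3306
```

Either way, MySQL keeps listening only on 127.0.0.1; the tunnel endpoint is what the container sees.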

Related

docker-compose exposing ports on remote container

I run my docker-compose service remotely on another physical machine on my local network via -H=ssh://user@192.168.0.x.
I'm exposing ports using docker-compose:
ports:
  - 8001:8001
However, when I start the service, the port 8001 is not exposed to my localhost. I can ssh into the machine running the container, and port 8001 is indeed listening there.
How do I instruct docker-compose to tunnel this port from the remote machine running the container, to my local docker client?
Docker doesn't have the ability to do this. But from your ssh client's point of view, the container is no different from any other program running on the remote host, and you can use the ssh -L option to forward a local port to the remote system.
# Tell ssh to forward local port 8001 to remote port 8001
ssh -L 8001:localhost:8001 user@192.168.0.x

# Incidentally, the remote port happens to be published by a Docker container
docker run -p 127.0.0.1:8001:8001 ...
Whenever you set DOCKER_HOST or use the docker -H option, you're giving instructions to a remote Docker daemon which interprets them relative to itself. docker -H ... -v ... mounts a directory on the same system as the Docker daemon into a container; docker -H ... -p ... publishes a port on the same system as the Docker daemon. Docker has no ability to somehow take into account the content or network stack of the local system when doing this.
(The one exception is docker -H ... build, which actually creates a tar file of the local directory and sends it across the network to use as the build context, so you can have a remote Docker daemon build an image of a local source tree.)
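The "everything is relative to the remote daemon" rule can be sketched like this (host name, paths, and image name are illustrative):

```shell
# Everything after -H (or DOCKER_HOST) is resolved on the REMOTE machine:
export DOCKER_HOST=ssh://user@192.168.0.x

docker run -v /data:/data myimage   # mounts /data on the remote host
docker run -p 8001:8001 myimage     # publishes 8001 on the remote host

# The one exception: build sends the LOCAL directory as the build context
docker build -t myimage .
```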

Trying to connect to mariadb running in docker

I have a database running in a docker container. It does not publish mariadb's port 3306.
Now I want to remotely log in to the docker host, connect to the container and access the database from my laptop
laptop ---> dockerhost ---> container
in order to access the database with gUI tools like DbVisualizer.
The idea is to open a connection with socat, but I'm stuck. Basically something like:
socat TCP4-LISTEN:3306 EXEC:'ssh dockerhost sudo docker exec container "socat - TCP:localhost:3306"'
The last attempt failed with "Unexpected exception encountered during query." in DbVisualizer and "2019/09/10 12:19:54 socat[17462] E write(6, 0x7f9985803c00, 114): Broken pipe" in the shell.
The command was (wrapped across lines for readability):
socat TCP4-LISTEN:3306,forever,reuseaddr,fork \
exec:'
ssh dockerhost \
sudo docker exec container "
socat STDIO TCP:localhost:3306,forever,reuseaddr,fork
"
'
I hope someone can pinpoint what I do wrong or tell me how I can achieve my goal.
Delete and restart your container with a docker run -p or Docker Compose ports: option that will make it visible outside of Docker space. (You're storing the actual database data in a volume, right? On restart it will keep using the data from the volume.)
If you're comfortable with the container being accessed directly from off-host, then you can use an ordinary port invocation -p 3306:3306 and then reach it using dockerhost as the host name and the first port number as the port number.
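The direct (non-tunneled) variant described here might look like the following, with the image and client flags as illustrative placeholders:

```shell
# On the Docker host: publish the port on all interfaces
dockerhost$ docker run -p 3306:3306 -v ... mysql

# From the laptop: connect straight to the host, no tunnel needed
laptop$ mysql -h dockerhost -P 3306 -u root -p
```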
If you still want to require the ssh tunnel, you can cause the port to be bound to the Docker host's localhost interface, and then use ssh port forwarding.
dockerhost$ docker run -p 127.0.0.1:3306:3306 -v ... mysql
laptop$ ssh -L 3306:localhost:3306 dockerhost
laptop$ mysql -h 127.0.0.1
docker exec is in many ways equivalent to ssh root@... and is not the normal way to interact with a network-accessible service.

Docker: Use Host SSH Forwards in container

I'm trying to access an MS SQL Server from within a Docker container.
The problem is that it is only reachable via an SSH tunnel that I establish on my host machine. I use a local forward for port 1433 that is set up automatically once I connect to the server.
Using SquirrelSQL for example, I can access the Server via 127.0.0.1:1433 with no problem.
But from within my docker container I am unable to do so.
I already tried to run the docker container with --expose 1433 -p 127.0.0.1:1433:1433 but that didn't work out.
Host is running Ubuntu 16.04, the Docker Container is running on some sort of Debian.
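One hedged sketch of a workaround: an SSH local forward bound to 127.0.0.1 is invisible to containers, but binding it to the Docker bridge address (commonly 172.17.0.1, not guaranteed) makes it reachable from inside. The host names `jumphost` and `sqlserver.internal` are placeholders:

```shell
# Bind the forward to the docker0 bridge address instead of loopback
ssh -L 172.17.0.1:1433:sqlserver.internal:1433 jumphost

# Inside the container, connect to 172.17.0.1:1433
```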

Docker: how to open ports to the host machine?

What could be the reason for Docker containers not being able to connect via ports to the host system?
Specifically, I'm trying to connect to a MySQL server that is running on the Docker host machine (172.17.0.1 on the Docker bridge). However, for some reason port 3306 is always closed.
The steps to reproduce are pretty simple:
Configure MySQL (or any service) to listen on 0.0.0.0 (bind-address=0.0.0.0 in ~/.my.cnf)
run
$ docker run -it alpine sh
# apk add --update nmap
# nmap -p 3306 172.17.0.1
That's it. No matter what I do it will always show
PORT     STATE  SERVICE
3306/tcp closed mysql
I've tried the same with an ubuntu image, a Windows host machine, and other ports as well.
I'd like to avoid --net=host if possible, simply to make proper use of containerization.
It turns out the IPs weren't correct. There was nothing blocking the ports and the services were running fine too. ping and nmap showed the IP as online but for some reason it wasn't the host system.
Lesson learned: don't rely on route in the container to return the correct host address. Instead check ifconfig or ipconfig on the Linux or Windows host respectively and pass this IP via environment variables.
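The env-var handoff described above might be sketched like this (the variable names, the `hostname -I` pipeline, and the image name are assumptions, not from the thread):

```shell
# On the Linux host: pick the host's primary LAN address
HOST_IP=$(hostname -I | awk '{print $1}')

# Hand it to the container so the app connects back to the host's MySQL
docker run -e DB_HOST="$HOST_IP" -e DB_PORT=3306 myapp
```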
Right now I'm transitioning to using docker-compose and have put all required services into containers, so the host system doesn't need to get involved and I can simply rely on Docker's DNS. This is much more satisfying.

Running Docker container on a specific port

I have a Rails app deployed with Dokku on DigitalOcean. I've created a Postgres database and linked it with the Rails app. Everything worked fine until I restarted the droplet. I figured out that the app stopped working because, on restart, every Docker container gets a new port, and the Rails application isn't able to connect to it. If I run dokku postgresql:info myapp it shows the old port, but it has changed. If I manually change database.yml and push it to the dokku repo, everything works.
So how do I prevent Docker from assigning different port each time the server restarts? Or maybe there is an option to change ports of running containers.
I don't have much experience with Dokku, but in Docker there's no such thing as "a container's port" on its own.
In docker you can expose a container's port to receive incoming request and map it to specific ports in your host machine.
With that you can, for instance, run your postgres inside a container and tell docker that you wanna expose the 5432, default postgresql port, to receive incoming requests:
sudo docker run --expose=5432 -P <IMAGE> <COMMAND>
The --expose=5432 tells docker to expose the port 5432 to receive incoming connections from outside world.
The -P flag tells docker to map every exposed container port to a randomly chosen high-numbered port on the host machine.
with that you can connect to postgres pointing to your host's ip:port.
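Since -P lets Docker pick the host port, you can ask Docker which one it chose; the container name `pg` below is illustrative:

```shell
sudo docker run -d --name pg --expose=5432 -P postgres

# Show which host port was mapped to container port 5432
sudo docker port pg 5432
```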
If you want to map a container's port to a specific host machine port, you can use the -p flag with a host:container pair (--expose is then implied):
sudo docker run -p 666:5432 <IMAGE> <COMMAND>
Not sure if this can help you with Dokku environment, but I hope so.
For more information about docker's run command see: https://docs.docker.com/reference/commandline/cli/#run