Running Docker for Mac 17.06.0, I have created a Dockerfile that builds an image of an Apache server. Notice that it exposes port 80.
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y apache2
ADD index.html /var/www/html/
CMD /usr/sbin/apache2ctl -D FOREGROUND
EXPOSE 80
In the same folder as the Dockerfile I have created a simple index.html file.
Then I built and ran it using
docker build -t webserver .
docker run -d webserver
I took the IP address of the running container using
docker inspect [container_name] | grep -i IPAddress
and when I curl
curl 172.17.0.2
I get no answer.
I do get an answer when running with -p 80:80 and using localhost in the curl command.
curl localhost
But I want to understand why I can't curl the container IP.
Questions:
How can I get an answer for my curl?
I understand I can't ping my container when using docker for Mac (link).
Can I telnet it just to verify that the port is exposed?
Can I SSH it?
On Docker for Mac the Docker engine runs inside a small VM using HyperKit. As a consequence, the IP 172.17.0.2 is valid only inside that VM and not on your host system. See https://docs.docker.com/docker-for-mac/docker-toolbox/#the-docker-for-mac-environment for more details and a comparison to other VM concepts like Docker Machine.
When you run your Docker container, you need to bind a local port to the container like so:
docker run -d -p 80:80 webserver
where the first 80 is the port on the host and the second is the port on the container that is exposed. Just having the port exposed in the Dockerfile is not enough to access it from the host.
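For completeness, a minimal sketch of the build-and-verify flow (the image tag, container name, and host port are just examples):
docker build -t webserver .
docker run -d --name web -p 8080:80 webserver
# On Docker for Mac, talk to the published host port, not the container IP
curl http://localhost:8080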
From the host environment, is there a way to get a container and its properties, at least its IP address(es), by querying with its alias?
For clarity, I'm referring to the actual alias you can give to containers on specific networks, see here
Currently my solution is to iterate over all containers through the REST API and check whether the alias matches what I'm looking for. Obviously this is not ideal, as it does not scale well when there are many containers on the host.
Ideally I'd want to just send a DNS request to docker's embedded dns server, but this does not seem possible. See this question/answer for more info
Does anyone know of a better solution?
Update:
Since an alias is tied to a network, it is expected that the to-be-queried alias is accompanied by its network name, e.g. 'container-alias.network-name'. So we would only have to enumerate the containers in a single network, but even this does not scale well if there are many containers in that network.
Mind you, getting all containers filtered by network name is possible through the Docker API, allowing for a single REST call. However, even though the 'Aliases' key is present in that result, it never holds any values (this might be a bug, I'm not sure). You have to GET a specific container, i.e. GET /v1.24/containers/container-uuid/json, before you can see its aliases.
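To make that enumeration workaround concrete, here is a rough sketch (assuming the docker CLI and jq are available; the alias and network names are placeholders):
#!/usr/bin/env bash
# Sketch: resolve "container-alias" on "network-name" to an IP by inspecting
# each container attached to that network.
ALIAS="container-alias"
NETWORK="network-name"
for id in $(docker ps -q --filter "network=${NETWORK}"); do
  docker inspect --format '{{json .NetworkSettings.Networks}}' "$id" |
    jq -r --arg net "$NETWORK" --arg alias "$ALIAS" \
      '.[$net] | select(.Aliases and (.Aliases | index($alias))) | .IPAddress'
done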
There's almost certainly a neater way to do this, maybe even entirely using docker inspect, but if you have jq available you can try something like:
docker inspect -f '{{json .NetworkSettings.Networks}}' <container name/ID> | \
jq '.[] | select(.Aliases[] | contains("<alias on network>")) | .IPAddress'
Remember that most of these details aren't useful outside of Docker space; except on one very specific configuration (on the same native-Linux host as the container) you can't directly access the container-specific IP addresses.
If you can run docker commands you can start containers, so you can get indirect access to the Docker DNS system. For example, let's start a container with an alias on a non-default network:
docker network create testnet
docker run \
--net testnet \
--name sleep \
--network-alias zzz \
-d \
busybox \
sleep 600
Now we can run another container on that same network and start making DNS queries. We can get the Docker-internal IP address of the zzz alias:
$ docker run --net testnet --rm giantswarm/tiny-tools dig +short zzz
172.18.0.2
Then we can reverse-resolve that address:
$ docker run --net testnet --rm giantswarm/tiny-tools dig +short -x 172.18.0.2
sleep.testnet.
That canonical name is the container name and network, and if you need to probe deeper, you can now docker inspect sleep.
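If you do this often, the lookup can be wrapped in a small shell helper (the function name is just an example):
resolve_alias() {
  local network="$1" name="$2"
  # Run a throwaway container on the target network and query Docker's embedded DNS
  docker run --net "$network" --rm giantswarm/tiny-tools dig +short "$name"
}
resolve_alias testnet zzz   # prints e.g. 172.18.0.2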
I think you should first know what you are doing by reading the Docker reference.
Then you can make docker a little simpler to use than ever by just running the following:
cat << 'EOF' >> /etc/bash.bashrc
# "docker ip <container>" prints the container's IP; anything else is passed
# through to the real docker binary.
function docker(){
    [[ "ip" == "$1" ]] && /usr/bin/docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$2" || /usr/bin/docker "$@"
}
EOF
source /etc/bash.bashrc
docker ip $INSTANCE_ID
When creating Docker containers with a docker-compose file that specifies hostnames (hostname:) and IP addresses (ipv4_address:), I would like the local computer (the computer that runs the docker daemon service, aka my laptop) to be able to resolve those hostnames, without the need for (too much) manual intervention. I.e. if the container is a webserver service with hostname my_webserver, I would like my_webserver to resolve to the IP address I assigned to that container.
What's the best way to achieve that? Anything better than maintaining /etc/hosts on my laptop manually?
As @Kārlis Ābele mentioned, I don't think you can do what you need without additional services. One solution would be to run dnsmasq in a docker container in the same network as the other docker containers. See Make the internal DNS server available from the host.
docker run -d --name dns -p 53:53 -p 53:53/udp --network docker_network andyshinn/dnsmasq:2.76 -k -d
Check that it works using localhost as DNS
nslookup bar localhost
Optionally, set up localhost as the DNS server. On Ubuntu 18.04, for example, edit /etc/resolvconf/resolv.conf.d/head.
nameserver localhost
Restart the resolvconf service.
sudo service resolvconf restart
Now you should be able to ping the containers by name.
EDITED:
An alternative solution (based on @Kārlis Ābele's answer) is to listen to Docker events and update /etc/hosts. A barebones implementation:
#!/usr/bin/env bash
# Append "<ip> <name>" to /etc/hosts whenever a container starts.
# (The 'start' event is used rather than 'create', since the IP address is
# only assigned once the container actually starts.)
while IFS= read -r line; do
    container_id=$(echo "${line}" | sed -En 's/.*container start ([^ ]+) \(.*$/\1/p')
    container_name=$(echo "${line}" | sed -En 's/.*name=(.*)\)$/\1/p')
    ip_addr=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "${container_id}")
    echo "${ip_addr} ${container_name}" >> /etc/hosts
done < <(docker events --filter 'event=start')
Well, it seems like something that won't be possible without some custom service listening for events on the Docker daemon...
How I would do this is to write a simple service that listens for Docker events (https://docs.docker.com/engine/reference/commandline/events/) and updates the /etc/hosts file accordingly.
Other than that, short of manually updating the hosts file, I don't think there are other options.
From inside of a Docker container, how do I connect to the localhost of the machine?
As the title says, I need to be able to retrieve the IP address of the Docker host and the port mappings from the host to the container, and to do that from inside the container.
/sbin/ip route|awk '/default/ { print $3 }'
As @MichaelNeale noticed, there is no sense in using this method in a Dockerfile (except when we need this IP during build time only), because this IP will be hardcoded at build time.
As of version 18.03, you can use host.docker.internal as the host's IP.
Works in Docker for Mac, Docker for Windows, and perhaps other platforms as well.
This is an update from the Mac-specific docker.for.mac.localhost, available since version 17.06, and docker.for.mac.host.internal, available since version 17.12, which may also still work on that platform.
Note, as in the Mac and Windows documentation, this is for development purposes only.
For example, I have environment variables set on my host:
MONGO_SERVER=host.docker.internal
In my docker-compose.yml file, I have this:
version: '3'
services:
  api:
    build: ./api
    volumes:
      - ./api:/usr/src/app:ro
    ports:
      - "8000"
    environment:
      - MONGO_SERVER
    command: /usr/local/bin/gunicorn -c /usr/src/app/gunicorn_config.py -w 1 -b :8000 wsgi
Update: On Docker for Mac, as of version 18.03, you can use host.docker.internal as the host's IP. See aljabear's answer. For prior versions of Docker for Mac the following answer may still be useful:
On Docker for Mac the docker0 bridge does not exist, so other answers here may not work. All outgoing traffic, however, is routed through your parent host, so as long as you try to connect to an IP it recognizes as itself (and the docker container doesn't think is itself) you should be able to connect. For example, run this from the parent machine:
ipconfig getifaddr en0
This should show you the IP of your Mac on its current network and your docker container should be able to connect to this address as well. This is of course a pain if this IP address ever changes, but you can add a custom loopback IP to your Mac that the container doesn't think is itself by doing something like this on the parent machine:
sudo ifconfig lo0 alias 192.168.46.49
You can then test the connection from within the docker container with telnet. In my case I wanted to connect to a remote xdebug server:
telnet 192.168.46.49 9000
Now when traffic comes into your Mac addressed for 192.168.46.49 (and all the traffic leaving your container does go through your Mac), your Mac will assume that IP is itself. When you are finished using this IP, you can remove the loopback alias like this:
sudo ifconfig lo0 -alias 192.168.46.49
One thing to be careful about is that the docker container won't send traffic to the parent host if it thinks the traffic's destination is itself. So check the loopback interface inside the container if you have trouble:
sudo ip addr show lo
In my case, this showed inet 127.0.0.1/8 which means I couldn't use any IPs in the 127.* range. That's why I used 192.168.* in the example above. Make sure the IP you use doesn't conflict with something on your own network.
AFAIK, in the case of Docker for Linux (standard distribution), the IP address of the host will always be 172.17.0.1 (on the main network of docker, see comments to learn more).
The easiest way to get it is via ifconfig (interface docker0) from the host:
ifconfig
From inside a container, the following command returns the host's IP: ip -4 route show default | cut -d" " -f3
You can run it quickly in a throwaway container with the following command line:
# 1. Run an Ubuntu container
# 2. Update the package index (quietly)
# 3. Install the iproute2 package (quietly)
# 4. Show the IP of the host
# 5. Remove the container afterwards (thanks to the --rm flag)
docker run -it --rm ubuntu:22.04 bash -c "apt-get update > /dev/null && apt-get install iproute2 -y > /dev/null && ip -4 route show default | cut -d' ' -f3"
For those running Docker in AWS, the instance meta-data for the host is still available from inside the container.
curl http://169.254.169.254/latest/meta-data/local-ipv4
For example:
$ docker run alpine /bin/sh -c "apk update ; apk add curl ; curl -s http://169.254.169.254/latest/meta-data/local-ipv4 ; echo"
fetch http://dl-cdn.alpinelinux.org/alpine/v3.3/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.3/community/x86_64/APKINDEX.tar.gz
v3.3.1-119-gb247c0a [http://dl-cdn.alpinelinux.org/alpine/v3.3/main]
v3.3.1-59-g48b0368 [http://dl-cdn.alpinelinux.org/alpine/v3.3/community]
OK: 5855 distinct packages available
(1/4) Installing openssl (1.0.2g-r0)
(2/4) Installing ca-certificates (20160104-r2)
(3/4) Installing libssh2 (1.6.0-r1)
(4/4) Installing curl (7.47.0-r0)
Executing busybox-1.24.1-r7.trigger
Executing ca-certificates-20160104-r2.trigger
OK: 7 MiB in 15 packages
172.31.27.238
$ ifconfig eth0 | grep -oP 'inet addr:\K\S+'
172.31.27.238
The only way is passing the host information as environment variables when you create the container:
docker run --env <key>=<value> ...
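For example, on a Linux host you could capture an address of the host and inject it like this (the variable name and image are arbitrary):
# Take the first address reported by the host and hand it to the container
HOST_IP=$(hostname -I | awk '{print $1}')
docker run --rm -e HOST_IP="$HOST_IP" busybox printenv HOST_IP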
The --add-host could be a cleaner solution (but without the port part; only the host can be handled with this solution). So, in your docker run command, do something like:
docker run --add-host dockerhost:`/sbin/ip route|awk '/default/ { print $3}'` [my container]
(From https://stackoverflow.com/a/26864854/127400 )
It's possible to retrieve it using docker network inspect:
docker network inspect bridge -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
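For example, the gateway value can be captured and handed to a container under a convenient name (the dockerhost alias below is arbitrary):
GATEWAY=$(docker network inspect bridge -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}')
docker run --rm --add-host dockerhost:"$GATEWAY" busybox ping -c 1 dockerhost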
The standard best practice for most apps looking to do this automatically is: you don't. Instead you have the person running the container inject an external hostname/ip address as configuration, e.g. as an environment variable or config file. Allowing the user to inject this gives you the most portable design.
Why would this be so difficult? Because containers will, by design, isolate the application from the host environment. The network is namespaced to just that container by default, and details of the host are protected from the process running inside the container which may not be fully trusted.
There are different options depending on your specific situation:
If your container is running with host networking, then you can look at the routing table on the host directly to see the default route out. From this question the following works for me e.g.:
ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p'
An example showing this with host networking in a container looks like:
docker run --rm --net host busybox /bin/sh -c \
"ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p'"
For current versions of Docker Desktop, they injected a DNS entry into the embedded VM:
getent hosts host.docker.internal | awk '{print $1}'
With the 20.10 release, the host.docker.internal alias can also work on Linux if you run your containers with an extra option:
docker run --add-host host.docker.internal:host-gateway ...
If you are running in a cloud environment, you can check the metadata service from the cloud provider, e.g. the AWS one:
curl http://169.254.169.254/latest/meta-data/local-ipv4
If you want your external/internet address, you can query a remote service like:
curl ifconfig.co
Each of these have limitations and only work in specific scenarios. The most portable option is still to run your container with the IP address injected as a configuration, e.g. here's an option running the earlier ip command on the host and injecting it as an environment variable:
export HOST_IP=$(ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p')
docker run --rm -e HOST_IP busybox printenv HOST_IP
TLDR for Mac and Windows
docker run -it --rm alpine nslookup host.docker.internal
... prints the host's IP address ...
nslookup: can't resolve '(null)': Name does not resolve
Name: host.docker.internal
Address 1: 192.168.65.2
Details
On Mac and Windows, you can use the special DNS name host.docker.internal.
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
If you want real IP address (not a bridge IP) on Windows and you have docker 18.03 (or more recent) do the following:
Run a shell in the container from the host, where the image name is nginx (/bin/ash works on Alpine-based images):
docker run -it nginx /bin/ash
Then run inside container
/ # nslookup host.docker.internal
Name: host.docker.internal
Address 1: 192.168.65.2
192.168.65.2 is the host's IP - not the bridge IP as in spinus' accepted answer.
I am using here host.docker.internal:
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker for Windows.
On Linux you can run
HOST_IP=`hostname -I | awk '{print $1}'`
In macOS your host machine is not the Docker host. Docker will install its host OS in VirtualBox.
HOST_IP=`docker run busybox ping -c 1 docker.for.mac.localhost | awk 'FNR==2 {print $4}' | sed s'/.$//'`
I have Ubuntu 16.03. For me
docker run --add-host dockerhost:`/sbin/ip route|awk '/default/ { print $3}'` [image]
does NOT work (it generated the wrong IP).
My working solution was that:
docker run --add-host dockerhost:`docker network inspect --format='{{range .IPAM.Config}}{{.Gateway}}{{end}}' bridge` [image]
Docker for Mac
I want to connect from a container to a service on the host
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host.
The gateway is also reachable as gateway.docker.internal.
https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
If you enabled the docker remote API (via -Htcp://0.0.0.0:4243 for instance) and know the host machine's hostname or IP address this can be done with a lot of bash.
Within my container's user's bashrc:
export hostIP=$(ip r | awk '/default/{print $3}')
export containerID=$(awk -F/ '/docker/{print $NF;exit;}' /proc/self/cgroup)
export proxyPort=$(
curl -s http://$hostIP:4243/containers/$containerID/json |
node -pe 'JSON.parse(require("fs").readFileSync("/dev/stdin").toString()).NetworkSettings.Ports["DESIRED_PORT/tcp"][0].HostPort'
)
The second line grabs the container ID from your local /proc/self/cgroup file.
Third line curls out to the host machine (assuming you're using 4243 as docker's port) then uses node to parse the returned JSON for the DESIRED_PORT.
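If node is not available inside the container, the same lookup could presumably be done with jq instead (assuming it is installed; DESIRED_PORT remains a placeholder):
export proxyPort=$(
  curl -s "http://$hostIP:4243/containers/$containerID/json" |
    jq -r '.NetworkSettings.Ports["DESIRED_PORT/tcp"][0].HostPort'
)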
My solution:
docker run --net=host
then in docker container:
hostname -I | awk '{print $1}'
Here is another option for those running Docker in AWS. This option avoids having to use apk to add the curl package, saving a precious 7 MB of space. Use the built-in wget (part of the monolithic BusyBox binary):
wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4
Use the hostname -I command in the terminal.
Try this:
docker run --rm -i --net=host alpine ifconfig
So... if you are running your containers using a Rancher server (Rancher v1.6; not sure if 2.0 has this), containers have access to http://rancher-metadata/, which has a lot of useful information.
From inside the container the IP address can be found here:
curl http://rancher-metadata/latest/self/host/agent_ip
For more details see:
https://rancher.com/docs/rancher/v1.6/en/rancher-services/metadata-service/
This is a minimalistic implementation in Node.js for those running the host on AWS EC2 instances, using the aforementioned EC2 metadata endpoint:
const cp = require('child_process');

const ec2 = function (callback) {
  const URL = 'http://169.254.169.254/latest/meta-data/local-ipv4';
  // we make it silent and time out after 1 second
  const args = [URL, '-s', '--max-time', '1'];
  const opts = {};
  cp.execFile('curl', args, opts, (error, stdout) => {
    if (error) return callback(new Error('ec2 ip error'));
    else return callback(null, stdout);
  })
  .on('error', (error) => callback(new Error('ec2 ip error')));
}; // ec2
and used as
ec2(function (err, ip) {
  if (err) console.log(err);
  else console.log(ip);
});
If you are running a Windows container on a Service Fabric cluster, the host's IP address is available via the environment variable Fabric_NodeIPOrFQDN. Service Fabric environment variables
Here is how I do it. In this case, it adds an entry to /etc/hosts inside the Docker container, pointing taurus-host to my local machine's IP:
TAURUS_HOST=`ipconfig getifaddr en0`
docker run -it --rm -e MY_ENVIRONMENT='local' --add-host "taurus-host:${TAURUS_HOST}" ...
Then, from within the Docker container, the script can use the host name taurus-host to reach my local machine, which hosts the Docker container.
Maybe the container I've created is useful as well: https://github.com/qoomon/docker-host
You can simply use that container's name as DNS to reach the host system, e.g. curl http://dockerhost:9200, so there is no need to hassle with any IP address.
The solution I use is based on a "server" that returns the external address of the Docker host when it receives a http request.
On the "server":
1) Start jwilder/nginx-proxy
# docker run -d -p <external server port>:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
2) Start ipify container
# docker run -e VIRTUAL_HOST=<external server name/address> --detach --name ipify osixia/ipify-api:0.1.0
Now when a container sends a http request to the server, e.g.
# curl http://<external server name/address>:<external server port>
the IP address of the Docker host is returned by ipify via http header "X-Forwarded-For"
Example (ipify server has name "ipify.example.com" and runs on port 80, docker host has IP 10.20.30.40):
# docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# docker run -e VIRTUAL_HOST=ipify.example.com --detach --name ipify osixia/ipify-api:0.1.0
Inside the container you can now call:
# curl http://ipify.example.com
10.20.30.40
On Ubuntu, the hostname command can be used with the following options:
-i, --ip-address addresses for the host name
-I, --all-ip-addresses all addresses for the host
For example:
$ hostname -i
172.17.0.2
To assign it to a variable, the following one-liner can be used:
IP=$(hostname -i)
Another approach is based on traceroute, and it works on a Linux host for me, e.g. in a container based on Alpine:
traceroute -n 8.8.8.8 -m 4 -w 1 | awk '$1 ~ /^[0-9]+$/ && $2 !~ /^172\./ {print $2}' | head -1
It takes a moment, but it lists the first hop's IP that does not start with 172. If there is no successful response, try increasing the number of tested hops via the -m argument.
With Docker Machine (https://docs.docker.com/machine/install-machine/) you can get the IP address of one or more machines:
$ docker-machine ip
$ docker-machine ip host_name
$ docker-machine ip host_name1 host_name2
I'm trying to create a Docker container that acts like a full-on virtual machine. I know I can use the EXPOSE instruction inside a Dockerfile to expose a port, and I can use the -p flag with docker run to assign ports, but once a container is actually running, is there a command to open/map additional ports live?
For example, let's say I have a Docker container that is running sshd. Someone else using the container ssh's in and installs httpd. Is there a way to expose port 80 on the container and map it to port 8080 on the host, so that people can visit the web server running in the container, without restarting it?
You cannot do this via Docker, but you can access the container's un-exposed port from the host machine.
If you have a container with something running on its port 8000, you can run
wget http://container_ip:8000
To get the container's IP address, run the 2 commands:
docker ps
docker inspect container_name | grep IPAddress
Internally, Docker shells out to call iptables when you run an image, so maybe some variation on this will work.
To expose the container's port 8000 on your localhost's port 8001:
iptables -t nat -A DOCKER -p tcp --dport 8001 -j DNAT --to-destination 172.17.0.19:8000
One way you can work this out is to set up another container with the port mapping you want, and then compare the output of the iptables-save command (though I had to remove some of the other options that force traffic to go via the docker proxy).
NOTE: this is subverting docker, so it should be done with the awareness that it may well create blue smoke.
OR
Another alternative is to look at the (new? post 0.6.6?) -P option - which will use random host ports, and then wire those up.
OR
With 0.6.5, you could use the LINKs feature to bring up a new container that talks to the existing one, with some additional relaying to that container's -p flags? (I have not used LINKs yet.)
OR
With docker 0.11? you can use docker run --net host .. to attach your container directly to the host's network interfaces (i.e., net is not namespaced) and thus all ports you open in the container are exposed.
Here's what I would do:
Commit the live container.
Run the container again with the new image, with ports open (I'd recommend mounting a shared volume and opening the ssh port as well)
sudo docker ps
sudo docker commit <containerid> <foo/live>
sudo docker run -i -p 22 -p 8000:80 -v /data:/data -t <foo/live> /bin/bash
While you cannot expose a new port of an existing container, you can start a new container in the same Docker network and get it to forward traffic to the original container.
# docker run \
--rm \
-p $PORT:1234 \
verb/socat \
TCP-LISTEN:1234,fork \
TCP-CONNECT:$TARGET_CONTAINER_IP:$TARGET_CONTAINER_PORT
Worked Example
Launch a web-service that listens on port 80, but do not expose its internal port 80 (oops!):
# docker run -ti mkodockx/docker-pastebin # Forgot to expose PORT 80!
Find its Docker network IP:
# docker inspect 63256f72142a | grep IPAddress
"IPAddress": "172.17.0.2",
Launch verb/socat with port 8080 exposed, and get it to forward TCP traffic to that IP's port 80:
# docker run --rm -p 8080:1234 verb/socat TCP-LISTEN:1234,fork TCP-CONNECT:172.17.0.2:80
You can now access pastebin on http://localhost:8080/; your requests go to socat:1234, which forwards them to pastebin:80, and the response travels the same path in reverse.
IPtables hacks don't work, at least on Docker 1.4.1.
The best way would be to run another container with the exposed port and relay with socat. This is what I've done to (temporarily) connect to the database with SQLPlus:
docker run -d --name sqlplus --link db:db -p 1521:1521 sqlplus
Dockerfile:
FROM debian:7
RUN apt-get update && \
apt-get -y install socat && \
apt-get clean
USER nobody
CMD socat -dddd TCP-LISTEN:1521,reuseaddr,fork TCP:db:1521
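Assuming the Dockerfile above is saved in the current directory, it can be built under the sqlplus tag used in the run command earlier:
docker build -t sqlplus .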
Here's another idea. Use SSH to do the port forwarding; this has the benefit of also working in OS X (and probably Windows) when your Docker host is a VM.
docker exec -it <containerid> ssh -R5432:localhost:5432 <user>@<hostip>
To add to the accepted answer's iptables solution, I had to run two more commands on the host to open it to the outside world.
HOST> iptables -t nat -A DOCKER -p tcp --dport 443 -j DNAT --to-destination 172.17.0.2:443
HOST> iptables -t nat -A POSTROUTING -j MASQUERADE -p tcp --source 172.17.0.2 --destination 172.17.0.2 --dport https
HOST> iptables -A DOCKER -j ACCEPT -p tcp --destination 172.17.0.2 --dport https
Note: I was opening port https (443), my docker internal IP was 172.17.0.2
Note 2: These rules are temporary and will only last until the container is restarted.
I had to deal with this same issue and was able to solve it without stopping any of my running containers. This solution is up to date as of February 2016, using Docker 1.9.1. Anyway, this answer is a detailed version of @ricardo-branco's answer, but in more depth for new users.
In my scenario, I wanted to temporarily connect to MySQL running in a container, and since other application containers are linked to it, stopping, reconfiguring, and re-running the database container was a non-starter.
Since I'd like to access the MySQL database externally (from Sequel Pro via SSH tunneling), I'm going to use port 33306 on the host machine. (Not 3306, just in case there is an outer MySQL instance running.)
About an hour of tweaking iptables proved fruitless.
Step by step, here's what I did:
mkdir db-expose-33306
cd db-expose-33306
vim Dockerfile
Edit the Dockerfile, placing this inside:
# Exposes port 3306 on the linked "db" container, to be accessible at host:33306
# (recommended to use the same base image as the DB container)
FROM ubuntu:latest
RUN apt-get update && \
    apt-get -y install socat && \
    apt-get clean
USER nobody
EXPOSE 33306
CMD socat -dddd TCP-LISTEN:33306,reuseaddr,fork TCP:db:3306
Then build the image:
docker build -t your-namespace/db-expose-33306 .
Then run it, linking to your running container. (Use -d instead of --rm to keep it in the background until explicitly stopped and removed. I only want it running temporarily in this case.)
docker run -it --rm --name=db-33306 --link the_live_db_container:db -p 33306:33306 your-namespace/db-expose-33306
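With the relay container running, you can connect from the host (or through an SSH tunnel) to port 33306; the client and credentials below are whatever your own setup uses:
mysql -h 127.0.0.1 -P 33306 -u your_user -p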
You can use SSH to create a tunnel and expose your container in your host.
You can do it in both ways, from container to host and from host to container. But you need a SSH tool like OpenSSH in both (client in one and server in another).
For example, in the container, you can do
$ yum install -y openssh openssh-server.x86_64
service sshd restart
Stopping sshd: [FAILED]
Generating SSH2 RSA host key: [ OK ]
Generating SSH1 RSA host key: [ OK ]
Generating SSH2 DSA host key: [ OK ]
Starting sshd: [ OK ]
$ passwd # You need to set a root password..
You can find the container IP address from this line (in the container):
$ ifconfig eth0 | grep "inet addr" | sed 's/^[^:]*:\([^ ]*\).*/\1/g'
172.17.0.2
Then in the host, you can just do:
sudo ssh -NfL 80:0.0.0.0:80 root@172.17.0.2
Based on Robm's answer I have created a Docker image and a Bash script called portcat.
Using portcat, you can easily map multiple ports to an existing Docker container. An example using the (optional) Bash script:
curl -sL https://raw.githubusercontent.com/archan937/portcat/master/script/install | sudo bash
portcat my-awesome-container 3456 4444:8080
And there you go! Portcat is mapping:
port 3456 to my-awesome-container:3456
port 4444 to my-awesome-container:8080
Please note that the Bash script is optional; it is equivalent to the following commands:
ipAddress=$(docker inspect my-awesome-container | grep IPAddress | grep -o '[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}' | head -n 1)
docker run -p 3456:3456 -p 4444:4444 --name=alpine-portcat -it pmelegend/portcat:latest $ipAddress 3456 4444:8080
I hope portcat will come in handy for you guys. Cheers!
There is a handy HAProxy wrapper.
docker run -it -p LOCALPORT:PROXYPORT --rm --link TARGET_CONTAINER:EZNAME -e "BACKEND_HOST=EZNAME" -e "BACKEND_PORT=PROXYPORT" demandbase/docker-tcp-proxy
This creates an HAProxy proxy to the target container. Easy peasy.
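A hypothetical concrete invocation, exposing a linked mydb Postgres container on host port 5433 (the names and ports are examples):
docker run -it --rm -p 5433:5432 --link mydb:db -e "BACKEND_HOST=db" -e "BACKEND_PORT=5432" demandbase/docker-tcp-proxy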
Here are some solutions:
https://forums.docker.com/t/how-to-expose-port-on-running-container/3252/12
The solution for mapping a port while running the container:
docker run -d --net=host myvnc
That will expose and map the port automatically to your host.
In case no answer works for you: check whether your target container is already running in a Docker network:
CONTAINER=my-target-container
docker inspect $CONTAINER | grep NetworkMode
"NetworkMode": "my-network-name",
Save it for later in the variable $NET_NAME:
NET_NAME=$(docker inspect --format '{{.HostConfig.NetworkMode}}' $CONTAINER)
If yes, you should run the proxy container in the same network.
Next look up the alias for the container:
docker inspect $CONTAINER | grep -A2 Aliases
"Aliases": [
"my-alias",
"23ea4ea42e34a"
Save it for later in the variable $ALIAS:
ALIAS=$(docker inspect --format '{{index .NetworkSettings.Networks "'$NET_NAME'" "Aliases" 0}}' $CONTAINER)
Now run socat in a container in the network $NET_NAME to bridge to the $ALIASed container's exposed (but not published) port:
docker run \
--detach --name my-new-proxy \
--net $NET_NAME \
--publish 8080:1234 \
alpine/socat TCP-LISTEN:1234,fork TCP-CONNECT:$ALIAS:80
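A quick check from the host, assuming the target serves HTTP on its port 80:
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080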
You can use an overlay network like Weave Net, which will assign a unique IP address to each container and implicitly expose all the ports to every container part of the network.
Weave also provides host network integration. It is disabled by default but, if you want to also access the container IP addresses (and all their ports) from the host, you can simply run weave expose.
Full disclosure: I work at Weaveworks.
It's not possible to do live port mapping, but there are multiple ways you can give a Docker container what amounts to a real interface, like a virtual machine would have.
Macvlan Interfaces
Docker now includes a Macvlan network driver. This attaches a Docker network to a "real world" interface and allows you to assign that network's addresses directly to the container (like a virtual machine's bridged mode).
docker network create \
-d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 pub_net
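A container attached to that network can then be given one of the "real world" addresses directly (the address and image below are examples):
docker run --net pub_net --ip 172.16.86.10 -d --name web nginx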
pipework can also map a real interface into a container, or set up a sub-interface in older versions of Docker.
Routing IPs
If you have control of the network, you can route additional subnets to your Docker host for use by the containers.
You then assign that subnet to a Docker network and set up your Docker host to route the packets via the docker network.
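A rough sketch of that idea, assuming you control a spare subnet such as 10.45.0.0/24 and the upstream router:
# On the upstream router: send the subnet to the Docker host
#   ip route add 10.45.0.0/24 via <docker-host-ip>
# On the Docker host: back the subnet with a Docker network
docker network create --subnet 10.45.0.0/24 routed_net
# Containers on that network now hold addresses from the routed range
# (host forwarding and firewall rules may still need adjusting)
docker run --net routed_net --ip 10.45.0.10 -d --name svc nginx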
Shared host interface
The --net host option allows the host interface to be shared into a container but this is probably not a good setup for running multiple containers on the one host due to the shared nature.
Read Ricardo's response first. It worked for me.
However, there is a scenario where this won't work: if the running container was started using docker-compose. This is because docker-compose (I'm running Docker 1.17) creates its own network. The way to address this scenario would be:
docker network ls
Then append --net <network_name> to your run command, before the image name:
docker run -d --name sqlplus --net network_name --link db:db -p 1521:1521 sqlplus
docker run -i --expose=22 b5593e60c33b bash
ref: https://forums.docker.com/t/how-to-expose-port-on-running-container/3252/5
This may help you.