Easy, straightforward, robust way to make host port available to Docker container? - docker

It is really easy to mount directories into a docker container. How can I just as easily "mount a port into" a docker container?
Example:
I have a MySQL server running on my local machine. To connect to it from a docker container I can mount the mysql.sock socket file into the container. But let's say for some reason (like intending to run a MySQL slave instance) I cannot use mysql.sock to connect and need to use TCP.
How can I accomplish this most easily?
Things to consider:
I may be running Docker natively if I'm using Linux, but I may also be running it in a VM if I'm on Mac or Windows, through Docker Machine or Docker for Mac/Windows (Beta). The answer should handle both scenarios seamlessly, without me as the user having to decide which solution is right depending on my specific Docker setup.
Simply assigning the container to the host network is often not an option, so that's unfortunately not a proper solution.
Potential solution directions:
1) I understand that setting up proper local DNS and making the Docker container (network) talk to it might be a proper, robust solution. If there is such a DNS service that can be set up with 1, max 2 commands and then "just work", that might be something.
2) Essentially what's needed here is something that listens on a port inside the container and, like a proxy, routes traffic between the TCP/IP participants. There's been discussion on this closed Docker GH issue that shows some ip route command-line magic, but that's a bit too much of a requirement for many people, myself included. But if there were something akin to this that was fully automated, understood Docker, and, again, could be brought up with 1-2 commands, that'd be an acceptable solution.
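One way to approximate direction (2) without manual ip route magic is a tiny TCP proxy container. A minimal sketch using the community alpine/socat image, assuming 172.17.0.1 is the host's address on the default docker0 bridge (a Linux default; adjust for your setup — the container name host-mysql is made up here):

```shell
# socat listens on 3306 inside this container and forwards every
# connection to the MySQL server on the host (172.17.0.1 assumed).
docker run -d --name host-mysql \
  alpine/socat tcp-listen:3306,fork,reuseaddr tcp-connect:172.17.0.1:3306

# Other containers on the same Docker network can now reach the host's
# MySQL at host-mysql:3306.
```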

I think you can run your container with the --net=host option. In that case the container will bind to the host's network stack and will be able to access all the ports on your local machine.
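For example, a sketch assuming a Linux host with MySQL on its default port (note this mode behaves differently under Docker for Mac/Windows, where the "host" is the VM):

```shell
# Share the host's network stack; -p port mappings are ignored in this mode.
# From inside the container, 127.0.0.1:3306 is the host's MySQL server.
docker run --rm -it --net=host mysql:8 \
  mysql -h 127.0.0.1 -P 3306 -u root -p
```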

Related

Can (Should) I Run a Docker Container with Same host name as the Docker Host?

I have a server application (that I cannot change) that, when you connect as a client, will give you other URLs to interact with. Those URLs are also part of the same server so the URL advertised uses the hostname of a docker container.
We are running in a mixed economy (some docker containers, some regular applications). We actually need to set up where we have the server running as a docker application on a single VM, and that server will be accessed by non-docker clients (as well as docker clients not running on the same docker network).
So you have a server hostname (the docker container) and a docker hostname (the hostname of the VM running docker).
The client's initial connection is to: dockerhostname:1234, but when the server sends URLs to the client, it sends: serverhostname:5678 ... which is not resolvable by the client. So far, we've addressed this by adding "serverhostname" to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server docker container to the same name as the docker host and it has mostly worked but I've seen where a docker container running on the same docker network as the server had issues connecting to the server.
I realize this is not an ideal docker setup. We're migrating from a history of delivering as rpm's to delivering containers .. but it's a slow process. Our company has lots of applications.
I'm really curious if anyone has advice/lessons learned with this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts we're already doing)
You can do port mapping: -p 8080:80
How do you build and run your container?
With a shell command, a Dockerfile, or a yml file?
Check this:
docker port
Call this and it will work:
[SERVERIP]:[PORT FROM DOCKERHOST]
To work with hostnames you need DNS, or you can use the hosts file.
The hosts file solution is not a good idea; that's how the Internet worked in its early days ^^
If something changes, you have to update the hosts file on every client!
Or use a static ip for your container:
docker network ls
docker network create my-network
docker network create --subnet=172.18.0.0/16 mynet123
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
Assign static IP to Docker container
You're describing a situation that requires a ton of work. The shortest path to success is your "adding things to /etc/hosts file" process. You can use configuration management, like Ansible/Chef/Puppet, to only have to update one location and distribute it out.
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this. You need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.
Now, implementing that database is the hardest part of this; there are a bunch of different approaches, and you'll need to spend some time with your team to figure out which is the best way.
Normally running an internal DNS server like dnsmasq or bind should get you most of the way, but if you need something like consul that's a whole other conversation. There are a lot of options, and the best thing to do is research, and audit what you actually need for your situation.
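As a rough illustration of the lazy-mode DNS approach, dnsmasq can pin a name to an address with a one-line config (the hostname serverhostname is from the question; the IP 10.0.0.5 is a placeholder):

```shell
# /etc/dnsmasq.conf -- answer all queries for serverhostname with a
# fixed address (placeholder IP; point it at the Docker host's address)
address=/serverhostname/10.0.0.5

# Point clients (or the Docker daemon's --dns setting) at this dnsmasq
# instance and serverhostname resolves everywhere, with one place to edit.
```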

Connect from container to a service on the host (docker for mac)

I have a somewhat complex situation and am probably out of luck here, but here's hoping. This is part of a large development project, so my options for what changes I can make are somewhat limited.
I have a virtual machine running a k8s cluster. That cluster has an http service that is exposed via ingress, and is available, on my local machine, at develop.com, via an /etc/hosts entry on the host mac.
I have a container, necessarily (see above) separate from the cluster, which needs access to this service. This container uses an env var, SERVICE_HOST to configure its requests.
What is the simplest way to provide a value that can be resolved by the standalone container to my cluster? Ideally, something other than ngrok which is simple, but is complicated by the fact that it's already in use in this setup to allow the cluster to reach the standalone container! I'd much prefer to make this work without premium features...
I'm aware of --net=host concept, but it doesn't work on an OSX host.
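One low-tech option on Docker for Mac: docker run's --add-host flag injects an /etc/hosts entry into the container, so the standalone container can resolve the same name the mac does. A sketch — the VM address 192.168.64.2 and the image name my-standalone-image are placeholders; use whatever your host's /etc/hosts entry for develop.com points at:

```shell
# Give the container the same hosts entry the mac has, then let the
# app's existing SERVICE_HOST mechanism do the rest.
docker run --rm \
  --add-host develop.com:192.168.64.2 \
  -e SERVICE_HOST=develop.com \
  my-standalone-image   # hypothetical image name
```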

Assign hostnames to exposed docker ports

Okay so in Vagrant/VVV you can assign different hostnames to your different projects so when you go to http://myproject-1.dev your website shows up.
This is very convenient if you are working on dozens of projects at the same time. As far as I know, such a thing is not possible in Docker (it can't touch the hosts file). My question is: is there something similar we can do in Docker? Some automated tool, maybe?
Using docker for windows.
Hostnames can map many containers together. In docker compose, there's a hostname option. But that's only within the Docker network bridge, not available to the host.
Docker isn't a VM (although it runs within one in Windows).
You can edit your hosts file to make the hypervisor reachable by name, but the intended pattern is to forward host ports into the container.
Use localhost, not any hostname.
If you prefer your Vagrant patterns, continue using it but provision Docker containers from it, or use Docker Machine.
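One automated tool that comes close is the community jwilder/nginx-proxy image: it watches the Docker socket and routes requests by Host header, so each project only declares a VIRTUAL_HOST. A sketch — the image name myproject-1-image is hypothetical, and you still need the *.dev names to resolve to your machine (e.g. via hosts entries or a local DNS):

```shell
# Reverse proxy that auto-configures itself from running containers
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# Each project announces its hostname; the proxy picks it up automatically
docker run -d -e VIRTUAL_HOST=myproject-1.dev myproject-1-image
```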

Is there a way to discover other containers on a docker network using DNS?

I would like to be able to get a list of all containers running on the same docker network from within a docker container. As the built in docker DNS can give me the IP addresses if I have the hostnames, it seems like it should be able to just give me a list of hostnames (maybe DNS cannot do this, I don't know).
Other approaches that I've thought of for getting a list of containers:
Bind mount the docker socket into the container and use docker ps. Not a great idea as far as security goes.
Use --link which I believe places entries in /etc/hosts. I could then read them from there, but this sort of defeats the purpose as I would have to already know the host names when I launched the container.
I'm looking to avoid using an external service discovery mechanism, but I would appreciate all suggestions for how to get a list of containers.
An easy way to achieve this would be to run one or more docker commands on the host in a loop, storing the information you need in a known location (example in bash):
while true; do docker ps --format '{{.ID}}' > /SOME/KNOWN/FILE; sleep 5; done
and then let the containers access this file using volumes.
It is much safer than providing access to the Docker socket, and you can improve it to provide all the information you need (e.g. JSON with name, IP, running time, etc.).
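Alternatively, if what you want is specifically the members of one network, the host-side loop can use docker network inspect, which lists exactly the attached containers (the network name my-network is a placeholder):

```shell
# One name per line for every container attached to my-network,
# written to the shared file the containers mount.
docker network inspect my-network \
  --format '{{range .Containers}}{{.Name}}{{"\n"}}{{end}}' > /SOME/KNOWN/FILE
```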

Can a docker process access programms on the host with ipc

I'm working on a clustered Tomcat system that uses MQSeries.
Today MQSeries is accessed in bindings mode, i.e. via IPC, and Tomcat and MQSeries run on the same host without any virtualization/Docker support.
I'd like to transform that into a solution where MQSeries runs on the host (or possibly in a Docker container) and the Tomcat instances run in Docker containers.
It's possible to access MQSeries in client mode (via a TCP connection), and this seems to be the right solution.
Would it still be possible to access MQSeries from the Docker container via IPC, i.e. create exceptions to the IPC namespace separation? Is anything like that planned for Docker?
Since Docker 1.5 this is possible with the --ipc=host flag, as in:
docker run --ipc=host ubuntu bash
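Note that newer Docker versions (18.06+) can also share an IPC namespace between two containers instead of with the host, which keeps the host's namespace untouched. A sketch (container name mq is made up):

```shell
# Make one container's IPC namespace joinable by others...
docker run -d --name mq --ipc=shareable ubuntu sleep infinity

# ...and join it from a second container; ipcs -m lists the shared
# memory segments visible in the joined namespace.
docker run --rm --ipc=container:mq ubuntu ipcs -m
```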
This answer suggests how IPC can be enabled with a source-code modification to Docker. As far as I (and the other answers there) know, there is no built-in feature.
Specifically, he says he commented out this line, which makes Docker create a separate IPC namespace.
Rebuilding Docker is a bit tedious because it brings in dozens of other things during the build, but if you follow the instructions it's straightforward.