Docker external host mapping

I have a container running on a docker-host, the docker-host has a host entry like:
externalServiceHostEntry <external_service_IP>
I want the service inside my container to be able to talk to the externalService, and I want that service to use an alias hostname (because the container may be deployed on different docker-hosts with different host entries). As such I'd like to do something like this when I run the container:
docker run --add-host <alias>:<external_service_IP> ...
However, I can't do this as the external_service_IP may be changed by our infrastructure team at any point, and they only want to maintain the docker-host's /etc/hosts file. So I want to do something like:
docker run --add-host <alias>:externalServiceHostEntry ...
Though I know this won't work, for the obvious reason that the docker-host's externalServiceHostEntry hostname will have no meaning in the container's /etc/hosts.
How can I achieve a setup so that our infrastructure team can change the docker-hosts' /etc/hosts files at will, and yet the service running in my container will still be able to communicate with the external service via a constant alias?
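One workaround (a minimal sketch, assuming the entry is resolvable on the docker-host and getent is available there; the alias name is hypothetical) is to look up the IP in a small launcher script and feed the result to --add-host:
#!/bin/sh
# resolve the docker-host's host entry at launch time
EXTERNAL_IP=$(getent hosts externalServiceHostEntry | awk '{ print $1 }')
docker run --add-host myalias:"$EXTERNAL_IP" ...
Note that the mapping is frozen at container start, so if the infrastructure team changes the IP, the container has to be restarted to pick it up.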

Related

Can (Should) I Run a Docker Container with the Same Hostname as the Docker Host?

I have a server application (that I cannot change) that, when you connect as a client, will give you other URLs to interact with. Those URLs are also part of the same server, so the advertised URLs use the hostname of a docker container.
We are running in a mixed economy (some docker containers, some regular applications). We actually need a setup where the server runs as a docker application on a single VM, and is accessed by non-docker clients (as well as docker clients not running on the same docker network).
So you have a server hostname (the docker container) and a docker hostname (the hostname of the VM running docker).
The client's initial connection is to dockerhostname:1234, but when the server sends URLs to the client, it sends serverhostname:5678 ... which is not resolvable by the client. So far, we've addressed this by adding a "serverhostname" entry to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server docker container to the same name as the docker host, and it has mostly worked, but I've seen cases where a docker container running on the same docker network as the server had issues connecting to the server.
I realize this is not an ideal docker setup. We're migrating from a history of delivering RPMs to delivering containers ... but it's a slow process. Our company has lots of applications.
I'm really curious if anyone has advice/lessons learned with this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts we're already doing)
You can do port mapping: -p 8080:80
How do you build and run your container?
With a shell command, a Dockerfile, or a YAML file?
Check this:
docker port
Then call [SERVER_IP]:[PORT FROM DOCKERHOST] and it will work.
To work with hostnames you need DNS, or you use a hosts file.
The hosts file solution is not a good idea; that's how the internet worked in the early days ^^
If something changes, you have to update the hosts file on every client!
Or use a static IP for your container:
docker network ls
docker network create my-network
docker network create --subnet=172.18.0.0/16 mynet123
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
Assign static IP to Docker container
You're describing a situation that requires a ton of work. The shortest path to success is your existing "adding things to /etc/hosts" process. You can use configuration management like Ansible/Chef/Puppet so that you only have to update one location and then distribute it out.
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this: you need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.
Implementing the database is the hardest part of this; there are a bunch of different approaches, and you'll need to spend some time with your team to figure out which is the best fit.
Normally, running an internal DNS server like dnsmasq or BIND should get you most of the way, but if you need something like Consul, that's a whole other conversation. There are a lot of options, and the best thing to do is research and audit what you actually need for your situation.
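As a rough illustration of the "lazy mode is DNS" route (a sketch only; the image name, hostname, and IPs are assumptions), you could run dnsmasq in a container on some well-known host and point the clients at it:
# serve a static record for the advertised server name
docker run -d --name dns -p 53:53/udp -p 53:53/tcp andyshinn/dnsmasq \
  --address=/serverhostname/192.168.1.50
# point a containerized client at that DNS server
docker run --dns <docker-host-IP> myclientapp
Non-docker clients would get the same answer via their normal resolver configuration, which is exactly the "one location to update" property you're after.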

Set host -> IP mapping from INSIDE DOCKER CONTAINER

I want to set a hosts-file-like entry for a specific domain name FROM MY APPLICATION INSIDE a docker container. As in, I want the app in the container to resolve x.com to 192.168.1.3, automatically, without any outside input or configuration. I realize this is unconventional compared to canonical docker use-cases, so don't @ me about it :)
I want my code to, on a certain branch, use a different hostname:ip mapping for a specific domain. And I want it to do it automatically and without intervention from the host machine, docker daemon, or end-user executing the container. Ideally this mapping would occur at the container infrastructure level, rather than some kind of modification to the application code which would perform this mapping itself.
How should I be doing this?
Why is this hard?
The /etc/hosts file in a docker container is managed by the docker daemon; it is effectively read-only from inside the container (changes are not persisted). This is by design.
DNS for a docker container is linked to the DNS of the underlying host in a number of ways, and it's not smart to mess with it too much.
Requirements:
Inside the container, the domain x.com resolves to a non-standard, pre-configured IP address.
The container is a standard Docker container running on a host of indeterminate configuration.
Constraints:
I can't pass the configuration as a runtime flag (e.g. --add-host).
I can't expose the mapping externally (e.g. set a DNS entry on the host machine).
I can't modify the underlying host, or count on it being configured a certain way.
Open questions:
Is it possible to set DNS entries from inside the container and override the host's DNS for the container only? If so, what's a good lightweight, low-management tool for this (e.g. dnsmasq, CoreDNS)?
Is there some magic by which I can impersonate the /etc/hosts file, or add a pre-processed file before it in the resolution chain?
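For what it's worth, docker bind-mounts /etc/hosts into the container writable by root, so one hedged sketch (assuming the container runs as root; the file name is hypothetical) is an entrypoint that appends the mapping before starting the app:
#!/bin/sh
# entrypoint.sh: add the mapping at startup, then hand off to the real command
# (the change lives only in this container and is lost when it is recreated)
echo "192.168.1.3 x.com" >> /etc/hosts
exec "$@"
Baked into the image via ENTRYPOINT, this keeps the mapping entirely inside the container with no runtime flags; it does depend on /etc/hosts staying writable at runtime, which recent docker versions allow but don't guarantee loudly.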

docker container networking to connect localhost in one container to another

I am using the default bridge network for docker (and yes, I am relatively new to docker). I have two docker containers.
The first container provides a service on port 12345. When creating this container, I did not specify the --publish option because I did not want to expose this port to the outside world.
The second container needs to use the service from the first container. However, the application running in this second container was hardcoded to access the service at 127.0.0.1:12345. Clearly, the second container's localhost is not the same as the first container's. Is there a way to coax docker networking into treating localhost in the second container as a connection to the port in the first container, without exposing anything to the outside world?
Option N: (this works but may not be the best solution)
One way you can force this to behave the way you need is by injecting an additional service into the application container that binds to the port and redirects traffic outward:
socat TCP-LISTEN:12345,fork TCP:172.18.0.2:12345
In a quick test here, I was able to confirm that 127.0.0.1:12345 is treated as the remote :12345.
Things to consider:
The two containers need to be able to reach each other.
It breaks the recommendation of one service per container.
Getting the tool into the docker container (yum/apt-get install socat; building from source?).
Getting it to run on container start/restart.
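A variant that avoids installing anything in the application container (a sketch; the network, container, and image names are all assumptions) is to run socat as a sidecar that joins the second container's network namespace:
docker network create appnet
docker run -d --name service1 --network appnet myservice   # listens on :12345
docker run -d --name app2 --network appnet myapp
# the sidecar shares app2's loopback, so 127.0.0.1:12345 inside app2
# now forwards to service1:12345 over the bridge network
docker run -d --network container:app2 alpine/socat \
  TCP-LISTEN:12345,fork TCP:service1:12345
This sidesteps the "one service per container" and startup concerns above, at the cost of one extra container.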

Can't resolve set hostname from another docker container in same network

I have db and server containers, both running in the same network. I can ping the db host by its container id with no problem.
When I set a hostname for the db container manually (-h myname), it has an effect ($ hostname returns the set host), but I can't ping that hostname from another container in the same network. The container id is still pingable.
It works with no problem in docker compose, though.
What am I missing?
Hostname is not used by docker's built-in DNS service. It's a counterintuitive exception, but since hostnames can change outside of docker's control, it makes some sense. Docker's DNS will resolve:
the container id
container name
any network aliases you define for the container on that network
The easiest of these options is the last one which is automatically configured when running containers with a compose file. The service name itself is a network alias. This lets you scale and perform rolling updates without reconfiguring other containers.
You need to be on a user-created network, not something like the default bridge, which has DNS disabled. This is done by default when running containers with a compose file.
Avoid using links, since they are deprecated. I'd only recommend adding host entries for external static hosts that are not in any DNS; for container-to-container communication, or access to other hosts outside of docker, DNS is preferred.
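A minimal sketch of the alias approach outside of compose (the names and image are illustrative):
docker network create mynet
docker run -d --name db --network mynet --network-alias mydb postgres
# resolved by docker's embedded DNS on the user-defined network
docker run --rm --network mynet alpine ping -c 1 mydb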
I've found out that the problem can be solved without a shared network by using the --add-host option. The container's IP can be obtained using the inspect command.
But when containers are in the same network, they are able to access each other via their names.
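Something like this, presumably (container names are illustrative; note the IP changes whenever the db container is recreated, so the in-network name lookup above is usually preferable):
DB_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' db)
docker run --add-host mydb:"$DB_IP" server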
As stated in the docker docs, if you start a container on the default bridge network, adding -h myname will add this information to /etc/hosts, /etc/resolv.conf, and the bash prompt of the container just started.
However, this will not have any effect on other, independent containers. (You could use --link to add this information to the /etc/hosts of other containers. However, --link is deprecated.)
On the other hand, when you create a user-defined bridge network, docker provides an embedded DNS server to make name lookups between containers on that network possible, see Embedded DNS server in user-defined networks. Name resolution uses the container names defined with --name. (You will not find another container by using its --hostname value.)
The reason why it works with docker-compose is that docker-compose creates a custom network for you and automatically names the containers.
The situation seems to be a bit different when you don't specify a name for the container yourself. The run reference says:
If you do not assign a container name with the --name option, then the daemon generates a random string name for you. [...] If you specify a name, you can use it when referencing the container within a Docker network.
In agreement with your findings, this should be read as: if you don't specify a custom --name, you cannot use the auto-generated name to look up other containers on the same network.
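To see the distinction concretely (a sketch; names and images are illustrative):
docker network create testnet
docker run -d --name web -h internalname --network testnet nginx
docker run --rm --network testnet alpine ping -c 1 web           # resolves
docker run --rm --network testnet alpine ping -c 1 internalname  # fails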

Update Prometheus Host/Port in Docker

Question: How can I change a Prometheus container's host address from the default 0.0.0.0:9090 to something like 192.168.1.234:9090?
Background: I am trying to get a Prometheus container to install and start in a production environment on a remote server. Since the server uses an IP other than Prometheus's default (0.0.0.0), I need to update the host address that the Prometheus container uses. If I don't, I can't sign in to the UI and see any of the metrics. The IP of the remote server is provided by the user during the app's installation.
From what I understand from Prometheus's config document and the output of ./prometheus -h, the host address is immutable and therefore needs to be updated using the --web.listen-address= command-line flag. My problem is I don't know how to pass that flag to my Prometheus container; I can't simply run ./prometheus --web.listen-address="<remote-ip>:9090" because that's not a Docker command. And I can't pass it to the docker run ... command because Docker doesn't recognize that flag.
Environment:
Using SaltStack for config management
I cannot use Docker Swarm (i.e. each container must use its own Dockerfile)
You don't need to change the containerized Prometheus's listen address. 0.0.0.0 is the wildcard "any interface" address inside the container.
By default, it won't even be accessible from your host's network, let alone any surrounding networks (like the Internet).
You can map it to a port on a host interface though. The command for that looks somewhat like this:
docker run --rm -p 8080:9090 prom/prometheus
which would publish the service on port 8080 of your host, reachable at e.g. 127.0.0.1:8080 (bind to loopback only with -p 127.0.0.1:8080:9090 if that's all you need)
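If you do need to override the flag anyway, note that anything placed after the image name is handed to the Prometheus binary inside the container, so (a sketch; overriding the command also drops the image's other default flags, hence the restated config file) you could run:
docker run -p 9090:9090 prom/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --web.listen-address=0.0.0.0:9090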
You can do that with a public (e.g. internet-facing) interface as well, although I'd generally advise against exposing containers like this, due to numerous operational implications which are somewhat beyond the scope of this answer. You should at least consider a reverse-proxy setup, where users are only allowed to talk to some heavy-duty webserver which then communicates with Prometheus, instead of letting them access your backend directly, even if this is just a small development deployment.
For general considerations on productionizing container setups, I suggest this. Despite its clickbaity title, it is a useful read.
