Read host's ifconfig in a running Docker container - docker

I would like to read the host's ifconfig output while the Docker container is running, so that I can parse it, get the OpenVPN interface (tap0) IP address, and process it within my application.
Unfortunately, propagating this value via the environment doesn't work for my case, because the IP address can change while the container is running, and I don't want to restart my application container each time just to see a new value.
My current working solution is a cron job on the host that writes the IP into a file on a shared volume, which the container then reads - but I am looking for a better solution, as this feels like a workaround. There was also a plan to create a new container with network: host so that it sees the host's interfaces - it works, but it also looks like a workaround, since it involves many steps and probably raises security issues.
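For reference, the current cron-plus-shared-volume workaround looks roughly like this (paths are illustrative; the interface name tap0 is from above):
# host crontab entry: write the tap0 address to a file in the shared directory every minute
* * * * * ip -4 addr show tap0 | awk '/inet /{print $2}' | cut -d/ -f1 > /opt/shared/tap0-ip
# host: start the container with that directory mounted
docker run -v /opt/shared:/shared my-app
# container: the application re-reads /shared/tap0-ip whenever it needs the current address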
My question is: is there any valid, cleaner way to achieve my goal - reading the host's ifconfig in a Docker container in real time?

A specific design goal of Docker is that containers can't directly access the host's network configuration. The workarounds you've identified are pretty much the only ways to do this.
If you’re trying to modify the host’s network configuration in some way (you’re trying to actually run a VPN, for example) you’re probably better off running it outside of Docker. You’ll still need root permission either way, but you won’t need to disable a bunch of standard restrictions to do what you need.
If you're trying to provide some address where the service can be reached, you need to pass it in as configuration, for example via an environment variable. Even if you could access the host's configuration, it might not contain the address you need: consider a cloud environment where you're running on an instance behind a load balancer, and external clients need the load balancer's address; that's not something you can determine from the host's network configuration alone.
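As a sketch (the variable name and address are illustrative), the address is injected at start-up and the container is restarted when it changes:
docker run -e SERVICE_ADDR=203.0.113.10 my-app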

Related

Connect from container to a service on the host (docker for mac)

I have a somewhat complex situation and am probably out of luck here, but here's hoping. This is part of a large development project, so my options for what changes I can make are somewhat limited.
I have a virtual machine running a k8s cluster. That cluster has an HTTP service that is exposed via ingress and is available on my local machine at develop.com, via an /etc/hosts entry on the host Mac.
I have a container, necessarily (see above) separate from the cluster, which needs access to this service. This container uses an env var, SERVICE_HOST, to configure its requests.
What is the simplest way to provide a value for SERVICE_HOST that the standalone container can resolve to reach my cluster? Ideally something other than ngrok, which is simple but complicated by the fact that it's already in use in this setup to allow the cluster to reach the standalone container! I'd much prefer to make this work without premium features...
I'm aware of the --net=host concept, but it doesn't work on an OSX host.

Set host -> IP mapping from INSIDE DOCKER CONTAINER

I want to set a hosts-file-like entry for a specific domain name FROM MY APPLICATION INSIDE a docker container. As in, I want the app in the container to resolve x.com to 192.168.1.3, automatically, without any outside input or configuration. I realize this is unconventional compared to canonical docker use-cases, so don't # me about it :)
I want my code to, on a certain branch, use a different hostname:ip mapping for a specific domain. And I want it to do it automatically and without intervention from the host machine, docker daemon, or end-user executing the container. Ideally this mapping would occur at the container infrastructure level, rather than some kind of modification to the application code which would perform this mapping itself.
How should I be doing this?
Why is this hard?
The /etc/hosts file in a docker container is managed by the docker daemon and can't simply be modified from inside the container; this is by design.
DNS for a docker container is linked to the DNS of the underlying host in a number of ways, and it's not smart to mess with it too much.
Requirements:
Inside the container, domain x.com resolves to non-standard, pre-configured IP address.
The container is a standard Docker container running on a host of indeterminate configuration.
Constraints:
I can't pass the configuration as a runtime flag (e.g. --add-host).
I can't expose the mapping externally (e.g. set a DNS entry on the host machine).
I can't modify the underlying host, or count on it being configured a certain way.
Open questions:
Is it possible to set DNS entries from inside the container and override the host DNS for that container only? If so, what's a good lightweight, low-maintenance tool for this (e.g. dnsmasq, CoreDNS - see the sketch below)?
Is there some magic by which I can impersonate the /etc/hosts file, or add a pre-processed file ahead of it in the resolution chain?
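For what it's worth, a container-local dnsmasq override of the kind hinted at above could be sketched in the image's entrypoint like this (purely illustrative; it assumes an Alpine-based image, root inside the container, and a writable /etc/resolv.conf):
#!/bin/sh
# entrypoint.sh - answer x.com locally, forward everything else to the original resolvers
apk add --no-cache dnsmasq
cp /etc/resolv.conf /etc/resolv.dnsmasq
dnsmasq --address=/x.com/192.168.1.3 --resolv-file=/etc/resolv.dnsmasq
echo "nameserver 127.0.0.1" > /etc/resolv.conf
exec "$@"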

Can (or should) 2 docker containers interact with each other via localhost?

We're dockerizing our microservices app, and I ran into some discovery issues.
The app is configured as follows:
When a service is started in 'non-local' mode, it uses Consul as its discovery registry.
When a service is started in 'local' mode, it automatically binds an address per service (for example, tcp://localhost:61001, tcp://localhost:61002 and so on - hard-coded addresses).
After dockerizing the app (for local mode only, for now), each service is a container (Docker images orchestrated with docker-compose, and with docker-machine, if that matters).
But one service cannot interact with another service, since they are not on the same machine and tcp://localhost:61001 will obviously not work.
Using docker-compose with links and specifying localhost as an alias (service:localhost) didn't work. Is there a way for two containers to "share" the same localhost?
If not, what is the best way to approach this?
I thought about using a specific hostname per service, and then specifying the hostname in the links section of the docker-compose file (but I doubt that this is the most elegant solution).
Or maybe use a dockerized version of Consul and integrate with it?
This post: How to share localhost between two different Docker containers? provided some insight into why localhost shouldn't be messed with - but I'm still quite puzzled about what the correct approach is here.
Thanks!
But one service cannot interact with another service since they are not on the same machine and tcp://localhost:61001 will obviously not work.
Actually, they can. You are right that tcp://localhost:61001 will not work, because localhost within a container refers to the container itself, just as localhost does on any system by default. This means that your services cannot share the same localhost. If you really wanted them to, you could run both services in a single container, but that isn't a good design, since it defeats one of the main purposes of Docker Compose.
The idiomatic way to do it is with docker-compose links. The guide you referenced shows how to define them; to actually use them, you put the linked container's name in your URLs, as if that name had an IP mapping defined in the original container's /etc/hosts (which is essentially what a link sets up). If you want to use something other than the linked container's name, you can use a link alias, which is explained in the same guide.
For example, with a docker-compose.yml file like this:
a:
  expose:
    - "9999"
b:
  links:
    - a
With a listening on 0.0.0.0:9999, b can interact with a by making requests from within b to tcp://a:9999. It would also be possible to shell into b and run
ping a
which would send ping requests to the a container from the b container.
So in conclusion, try replacing localhost in the request URL with the literal name of the linked container (or the link alias, if the link is defined with an alias). That means that
tcp://<container_name>:61001
should work instead of
tcp://localhost:61001
Just make sure you define the link in docker-compose.yml.
Hope this helps
In production, never use Docker or Docker Compose alone. Use an orchestrator (Rancher, Docker Swarm, k8s, ...) and deploy your stack there. The orchestrator will take care of the networking. Your containers can link to each other, so you can access them directly by name (don't worry too much about the IP).
Locally, use docker-compose to start your containers and use links. Do not use a local port, but the name of the link: if your container A needs to access container B on port 1234, then link B into A under the name BBBB and use tcp://BBBB:1234 to reach B from A.
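In compose terms, that link with an alias would look something like this (names and the port are taken from the example above; sketch only):
b:
  expose:
    - "1234"
a:
  links:
    - b:BBBB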
If you really want to bind a port to your localhost and use that, access the port via your host's IP, not localhost.
If changing the hard-coded addresses is not an option for now, perhaps you could modify the startup scripts of your containers to forward ports in each local container to the required services on other machines.
This would create some complications though, because you would have to set up ssh in each of your containers and manage the corresponding keys.
Come to think of it, if encryption is not an issue, ssh is not necessary. Using socat or redir would probably be enough.
socat TCP4-LISTEN:61001,fork TCP4:othercontainer:61001

docker containers communication on dev machine

I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch. I am confused as to how I can create a container that can be used both in production and on my local machine (Mac). How are people providing configuration like this these days?
So far I have come up with having my process take environment variables as arguments, which I can pass to the container with docker run -e. It seems unlikely that I would be doing this type of thing in production.
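Concretely, that looks something like this (the variable name and address are illustrative):
docker run -e ELASTICSEARCH_URL=http://203.0.113.5:9200 my-service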
I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch
If elasticsearch is running in its own container on the same host (managed by the same docker daemon), then you can link it to your own container (at the docker run stage) with the --link option (which sets environment variables):
docker run --link elasticsearch:elasticsearch --name <yourContainer> <yourImage>
See "Linking containers together"
In that case, your container config can be static and known/written in advance, as it will refer to the search machine as 'elasticsearch'.
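For instance, from inside your container the linked service can simply be addressed by that name (9200 is Elasticsearch's default HTTP port):
curl http://elasticsearch:9200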
How about writing it into your application's configuration file and mounting the configuration directory into your container with -v?
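Something along these lines (paths are illustrative):
docker run -v /opt/myservice/config:/etc/myservice my-service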
To make it more organized, I use Ansible for orchestration. This way you can have a template of the configuration file for your application, while the actual parameters live in the variable file of the corresponding Ansible playbook at a centralized location. Ansible is in charge of copying the template over to the desired location and doing the variable substitution for you. It has also recently enhanced its Docker support.
Environment variables are absolutely fine (we use them all the time for this sort of thing) as long as you're using service names, not IP addresses. Even with IP addresses you'd have no problem as long as you only have one ES and you're willing to restart your service every time the ES IP address changes.
You should really ask someone who knows for sure how these things are resolved in your production environment, because you're unlikely to be the only person in your org who has had this problem -- connecting to a database poses the same problem.
If you have no constraints at all, then you should check out something like Consul from HashiCorp. It'll help you a lot with this problem, if you're allowed to use it.

Use Eureka despite having random external port of docker containers

I am writing an application that is composed of a few Spring Boot based microservices with a Zuul-based reverse proxy in front.
It works when I start the services on my machine, but for the server rollout I'd like to use Docker for the services, and this does not seem to be possible right now.
Normally you would have a fixed "internal" port and randomized ports on the outside of the container. But the app in the container doesn't know the outside port (and IP).
The Netflix tools match what I would want to write an efficient microservice architecture and conceptually I really like docker.
As far as I can see, it would be very troublesome to start the container, gather the outside port on the host and pass it to the app, because you can't simply change the port after the app is started.
Is there any way to use Eureka with Docker-based clients?
[Update]
I guess I did a poor job explaining the problem. So maybe this clarifies it a bit more:
The Eureka server itself can run in Docker, as I have only one and the outside port doesn't matter. I can use the link feature to access it from the clients.
The problem is the URL that the clients register themselves with.
This is, for example, https://localhost:8080/, but due to dynamic port assignment it is really only accessible via https://localhost:54321/.
So Eureka will return the wrong URLs for the services.
UPDATE
I have updated my answer below, so have a look there.
I have found a solution myself, which is maybe not the best solution, but it works for me...
When you start Docker with --net=host (host networking), you use the host's network stack directly. Then I just use 0 as the port for Spring Boot, and Spring randomizes the port for me; since it's using the host's network stack, there is no translation to a different port (and IP).
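A minimal sketch of that setup (the image name is illustrative; SERVER_PORT=0 is equivalent to setting server.port=0 in Spring Boot):
docker run --net=host -e SERVER_PORT=0 my-eureka-client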
There are some drawbacks though:
When you use host networking you can't use the link feature with these containers, either as link source or as link target.
Using the host's network stack leads to less encapsulation of the instance, which may be a problem depending on your project.
I hope it helps
A lot of time has passed and I think I should elaborate on this a little further:
If you use Docker to host your Spring application, just don't use a random port! Use a fixed port, because every container gets its own IP anyway, so every service can use the same port (as sketched below). This makes life a lot easier.
If you have a public-facing service, you would use a fixed port anyway.
For local starts via Maven or, for example, the command line, have a dedicated profile that uses randomized ports so you don't have conflicts (but be aware that there are, or have been, a few bugs around random ports and service registration).
If for whatever reason you want or need to use host networking, you can use randomized ports of course, but most of the time you shouldn't!
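For example (the service name and port are illustrative), a fixed internal port with nothing published on the host looks like this in compose:
my-service:
  image: my-service:latest
  expose:
    - "8080"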
You can set up a directory for each Docker instance, share it between the host and the instance, and then write the port and IP address to a file in that directory.
$ instanceName=$(uuidgen)                            # any unique instance name will do
$ dirName=/var/lib/docker/metadata/$instanceName
$ mkdir -p "$dirName"
$ docker run -d -P --name "$instanceName" -v "${dirName}":/mnt/metadata ...
$ # record host IP plus the randomly published port (8080 = the app's internal port; adjust as needed)
$ echo "$(hostname -I | awk '{print $1}'):$(docker port "$instanceName" 8080 | cut -d: -f2)" > "${dirName}/external-address"
Then you just read /mnt/metadata/external-address from your application and use that information with Eureka.
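Inside the container, the application (or its startup wrapper) would then pick it up with something like:
EXTERNAL_ADDRESS=$(cat /mnt/metadata/external-address)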
