I've been using wurstmeister/kafka for a few weeks now in Dev and QA, but in each case I need to hard-code KAFKA_ADVERTISED_HOST_NAME to the IP of the box it's on, using docker-compose. This hasn't been a problem during testing, but now that I'm trying to scale this out to production it's becoming more than a little frustrating.
I'm continuing to use docker-compose to somewhat manually deploy three instances of Kafka and ZooKeeper onto three separate cloud hosts. I've opened up the appropriate ports and tried everything within my limited Docker knowledge to assign KAFKA_ADVERTISED_HOST_NAME dynamically, but it always yields some sort of error. The README on Docker Hub mentions assigning this variable dynamically via
HOSTNAME_COMMAND, e.g. HOSTNAME_COMMAND: "route -n | awk '/UG[ \t]/{print $$2}'"
With that setting, my application gets a connection refused response when attempting to connect, whereas manually assigning the IP on each of the three hosts works perfectly fine. What am I missing here?!
Compose can substitute variables into configuration options at run time.
Compose Environment variables
Set the KAFKA_ADVERTISED_HOST_NAME container environment variable from a local shell variable called DOCKER_HOST_IP:
whatever:
  environment:
    KAFKA_ADVERTISED_HOST_NAME: ${DOCKER_HOST_IP}
Then DOCKER_HOST_IP needs to be set whenever you run docker-compose. You will get a warning from docker-compose when it's not set.
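For example, the variable can be supplied inline for a single invocation (the IP below is just a placeholder):
DOCKER_HOST_IP=10.0.0.5 docker-compose up -d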
IP on the Docker host
Running ip route show will list the default interface.
Then ip address show will give you the ip addresses.
To get these into a variable
# interface that holds the default route
default_interface=$(ip ro sh | awk '/^default/{ print $5; exit }')
# first IPv4 address on that interface
export DOCKER_HOST_IP=$(ip ad sh $default_interface | awk '/inet /{print $2}' | cut -d/ -f1)
[ -z "$DOCKER_HOST_IP" ] && (echo "No docker host ip address">&2; exit 1 )
echo "$DOCKER_HOST_IP"
You can add those commands to whatever your startup script is, or create a standalone script from them to call when you need it.
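For instance, if those lines live in a script such as docker-host-ip.sh (a hypothetical name), you can source it right before bringing the stack up:
. ./docker-host-ip.sh && docker-compose up -d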
IP via Docker Machine
If you are managing a remote docker-machine you can get the ip via the machine environment.
DOCKER_HOST_IP=$(docker-machine ip ${DOCKER_MACHINE_NAME})
Related
I run several docker commands in gitlab-ci.yml.
Some of them require current machine IP address to be passed to them, like this:
docker build --pull -t my_image . --add-host=<my service>:<current ip>
$CI_SERVER_HOSTNAME is not the one; its value is gitlab.com. I need the actual IP address of the CI machine, as ifconfig would see it, from the .gitlab-ci.yml file.
I am not finding any $CI_... variable for that. I know extracting it from ifconfig is possible, but that won't work when the docker commands are executed one by one on a Mac.
Note: I know it's usually something like 172.0.0.x, but need an exact one plus I wonder if the variable for it exists.
In order to get the IP address of the machine on which the runner is executing,
we will use the GitLab API: https://docs.gitlab.com/ee/api/runners.html#get-runners-details
GET /runners/:id
This API call will return the details of the runner with this specific :id. When a job executes, this id is available in the CI_RUNNER_ID predefined variable.
By combining all this and utilizing jq and sed, we get the following one-liner that returns the IP address of the runner that is executing the current job:
curl -s --header "PRIVATE-TOKEN: <your access token>" https://gitlab.com/api/v4/runners/${CI_RUNNER_ID} | jq '.ip_address' | sed 's/^"\(.*\)"$/\1/'
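Inside a job's script section this can be combined with the docker build call from the question, roughly like this (a sketch; API_TOKEN is assumed to be a masked CI/CD variable holding the access token, jq -r replaces the sed step, and my-service/my_image are placeholders):
RUNNER_IP=$(curl -s --header "PRIVATE-TOKEN: ${API_TOKEN}" "https://gitlab.com/api/v4/runners/${CI_RUNNER_ID}" | jq -r '.ip_address')
docker build --pull -t my_image . --add-host=my-service:${RUNNER_IP}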
I'm trying to test an ASP.NET Core 2 dockerized application in VSTS. It is set up inside the docker container via docker-compose. The tests make requests to addresses stored in config (or taken from environment variables, if set).
Right now, the build is set up like this:
Run compose command to restore and publish the app.
Run compose to create and run docker containers.
Run a bash script (explained below).
Run tests.
First of all, I found out that I can't use http://localhost:port inside VSTS. It works fine on my local machine, but it does not work on the server.
I've found this article that points out the need to use container's real IP to access it. I've tried 2 of the methods described in the referenced question, but none of them worked.
When using docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id, I get Template parsing error: template: :1:24: executing "" at <.NetworkSettings.Net...>: map has no entry for key "NetworkSettings" (the problem is with the command itself)
And when using docker inspect $(sudo docker ps | grep wiremocktest_microservice.gateway | head -c 12) | grep -e \"IPAddress\"\:[[:space:]]\"[0-2] | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}', I actually get the IP and can pass it to the tests, but then something strange happens: they start to time out. I tried to replicate this locally, and the same thing happens there: every request that I make to this IP times out (easily checked in a browser).
What address do I need to use to access the containers in VSTS, and why can't I use localhost?
I've run into a similar problem with an Azure Storage service running in a container for unit tests (Gradle & Kotlin project). Locally everything works and it's possible to connect to the container using localhost:10000 (the port is published to the host machine in the run command). But this doesn't work in the VSTS build pipeline, and neither does trying to connect with the IP of the container.
I've found a solution that works at least in this case: I created a custom container network and connected my Azure Storage container and the VSTS agent container to that network. After that it's possible to connect to my custom container from the tests by using the container name and internal port number e.g. my-storage-container:10000.
So I created a script that creates the container network, starts my container in that network and then also connects the VSTS agent by grepping its container ID from the process list. It's something like this:
# create the shared network
docker network create my-custom-network
# start the Azure Storage emulator container on that network
docker run --net=my-custom-network -d --name azure-storage-container -t -p 10000:10000 -v ${SCRIPT_DIR}/azurite:/opt/azurite/folder arafato/azurite
# find the VSTS agent container and attach it to the same network
CONTAINER_ID=`docker ps -a | awk '{ print $1,$2 }' | grep microsoft/vsts-agent | awk '{print $1 }'`
docker network connect my-custom-network ${CONTAINER_ID}
After that my tests can connect to the Azure storage container with http://azure-storage-container:10000 with no problems.
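As a quick sanity check from the agent side, something like this only verifies that the name resolves and the port answers (a sketch; it simply prints whatever HTTP status comes back):
curl -s -o /dev/null -w "%{http_code}\n" http://azure-storage-container:10000/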
I have a compose file locally. How do I run the bundle of containers on a remote host, i.e. docker-compose up -d with DOCKER_HOST=<some ip>?
After the release of Docker 18.09.0 and the (as of now) upcoming docker-compose v1.23.1 release, this will get a whole lot easier. The mentioned Docker release added support for the ssh protocol in the DOCKER_HOST environment variable and the -H argument to docker ... commands, respectively. The next docker-compose release will incorporate this feature as well.
First of all, you'll need SSH access to the target machine (which you'll probably need with any approach).
Then, either:
# Re-direct to remote environment.
export DOCKER_HOST="ssh://my-user@remote-host"
# Run your docker-compose commands.
docker-compose pull
docker-compose down
docker-compose up
# All docker-compose commands here will be run on remote-host.
# Switch back to your local environment.
unset DOCKER_HOST
Or, if you prefer, all in one go for one command only:
docker-compose -H "ssh://my-user@remote-host" up
One great thing about this is that all your local environment variables that you might use in your docker-compose.yml file for configuration are available without having to transfer them over to remote-host in some way.
If you don't need to run the containers on your local machine, but always on the same remote machine, you can change this in your Docker settings.
On the local machine:
You can control the remote host with the -H parameter:
docker -H tcp://remote:2375 pull ubuntu
To use it with docker-compose, you should add this parameter in /etc/default/docker
On the remote machine
You should configure the daemon to listen on an external address, not only on the Unix socket.
See Bind Docker to another host/port or a Unix socket for more details.
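As a rough sketch (assuming a Debian/Ubuntu host where the daemon still reads /etc/default/docker; on systemd-based installs the equivalent is the "hosts" key in /etc/docker/daemon.json), the remote daemon could be told to listen on TCP in addition to the Unix socket:
# /etc/default/docker on the remote machine
# WARNING: a TCP socket without TLS is unauthenticated; restrict it via firewall or enable TLS.
DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375"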
If you need to run containers on multiple remote hosts, you should configure Docker Swarm.
You can now use docker contexts for this:
docker context create dev --docker "host=ssh://user@remotemachine"
docker-compose --context dev up -d
More info here: https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/
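If you prefer not to pass --context on every call, the context can also be made the client default and switched back later (recent Compose versions honor the current context as well):
docker context use dev       # subsequent docker commands target the remote host
docker context use default   # switch back to the local engine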
From the compose documentation
Compose CLI environment variables
DOCKER_HOST
Sets the URL of the docker daemon. As with the Docker client, defaults to unix:///var/run/docker.sock.
so that we can do
export DOCKER_HOST=tcp://192.168.1.2:2375
docker-compose up
Yet another possibility I discovered recently is controlling a remote Docker Unix socket via an SSH tunnel (credits to https://medium.com/@dperny/forwarding-the-docker-socket-over-ssh-e6567cfab160 where I learned about this approach).
Prerequisite
You are able to SSH into the target machine. Passwordless, key-based access is preferred for security and convenience; you can learn how to set this up e.g. here: https://askubuntu.com/questions/46930/how-can-i-set-up-password-less-ssh-login
Besides, some sources mention that forwarding Unix sockets via SSH tunnels is only available starting from OpenSSH v6.7 (run ssh -V to check); I did not try this out on older versions, though.
SSH Tunnel
Now, create a new SSH tunnel between a local location and the Docker Unix socket on the remote machine:
ssh -nNT -L $(pwd)/docker.sock:/var/run/docker.sock user@someremote
Alternatively, it is also possible to bind to a local port instead of a file location. Make sure the port is open for connections and not already in use.
ssh -nNT -L localhost:2377:/var/run/docker.sock user@someremote
Re-direct Docker Client
Leave the terminal open and open a second one. In there, make your Docker client talk to the newly created tunnel-socket instead of your local Unix Docker socket.
If you bound to a file location:
export DOCKER_HOST=unix://$(pwd)/docker.sock
If you bound to a local port (example port as used above):
export DOCKER_HOST=localhost:2377
Now, run some Docker commands like docker ps or start a container, pull an image etc. Everything will happen on the remote machine as long as the SSH tunnel is active. In order to run local Docker commands again:
Close the tunnel by hitting Ctrl+C in the first terminal.
If you bound to a file location: Remove the temporary tunnel socket again. Otherwise you will not be able to open the same one again later: rm -f "$(pwd)"/docker.sock
Make your Docker client talk to your local Unix socket again (which is the default if unset): unset DOCKER_HOST
The great thing about this is that you save the hassle of copying docker-compose.yml files and other resources around or setting environment variables on a remote machine (which is difficult).
Non-interactive SSH Tunnel
If you want to use this in a scripting context where an interactive terminal is not possible, there is a way to open and close the SSH tunnel in the background using the SSH ControlMaster and ControlPath options:
# constants
TEMP_DIR="$(mktemp -d -t someprefix_XXXXXX)"
REMOTE_USER=some_user
REMOTE_HOST=some.host
control_socket="${TEMP_DIR}"/control.sock
local_temp_docker_socket="${TEMP_DIR}"/docker.sock
remote_docker_socket="/var/run/docker.sock"
# open the SSH tunnel in the background - this will not fork
# into the background before the tunnel is established and fail otherwise
ssh -f -n -M -N -T \
-o ExitOnForwardFailure=yes \
-S "${control_socket}" \
-L "${local_temp_docker_socket}":"${remote_docker_socket}" \
"${REMOTE_USER}"#"${REMOTE_HOST}"
# re-direct local Docker engine to the remote socket
export DOCKER_HOST="unix://${local_temp_docker_socket}"
# do some business on remote host
docker ps -a
# close the tunnel and clean up
ssh -S "${control_socket}" -O exit "${REMOTE_HOST}"
rm -f "${local_temp_docker_socket}" "${control_socket}"
unset DOCKER_HOST
# do business on localhost again
Given that you are able to log in on the remote machine, another approach to running docker-compose commands on that machine is to use SSH.
Copy your docker-compose.yml file over to the remote host via scp, run the docker-compose commands over SSH, finally clean up by removing the file again.
This could look as follows:
scp ./docker-compose.yml SomeUser@RemoteHost:/tmp/docker-compose.yml
ssh SomeUser@RemoteHost "docker-compose -f /tmp/docker-compose.yml up"
ssh SomeUser@RemoteHost "rm -f /tmp/docker-compose.yml"
You could even make it shorter and omit the sending and removing of the docker-compose.yml file by using the -f - option to docker-compose, which expects the docker-compose.yml content to be piped from stdin. Just pipe its content to the SSH command:
cat docker-compose.yml | ssh SomeUser@RemoteHost "docker-compose -f - up"
If you use environment variable substitution in your docker-compose.yml file, the above-mentioned command will not replace the variables with your local values on the remote host, and your commands might fail because the variables are unset. To overcome this, the envsubst utility can be used to replace the variables with your local values in memory before piping the content to the SSH command:
envsubst < docker-compose.yml | ssh SomeUser@RemoteHost "docker-compose -f - up"
What are the ways to get the docker host's hostname from inside a container running on that host, besides using environment variables? I know I can pass the hostname as an environment variable to the container at container creation time. I'm wondering how I can look it up at run time.
foo.example.com (docker host)
bar (docker container)
Is there a way for container bar running in docker host foo.example.com to get "foo.example.com"?
Edit to add use case:
The container will create an SRV record for service discovery of the form
_service._proto.name. TTL class SRV priority weight port target.
-----------------------------------------------------------------
_bar._http.example.com 60 IN SRV 5000 5000 20003 foo.example.com.
where 20003 is a dynamically allocated port on the docker host for a service listening on some fixed port in bar (docker handles the mapping from host port to container port).
My container will run a health check to make sure it has successfully created that SRV record as there will be many other bar containers on other docker hosts that also create their own SRV records.
_service._proto.name. TTL class SRV priority weight port target.
-----------------------------------------------------------------
_bar._http.example.com 60 IN SRV 5000 5000 20003 foo.example.com. <--
_bar._http.example.com 60 IN SRV 5000 5000 20003 foo2.example.com.
_bar._http.example.com 60 IN SRV 5000 5000 20003 foo3.example.com.
The health check will loop through the SRV records looking for the first one above and thus needs to know its hostname.
aside
I'm using Helios and just found out it adds an env var for me from which I can get the hostname. But I was just curious in case I was using docker without Helios.
You can easily pass it as an environment variable
docker run .. -e HOST_HOSTNAME=`hostname` ..
using
-e HOST_HOSTNAME=`hostname`
will call the hostname command and use its output as the value of an environment variable called HOST_HOSTNAME; of course, you can customize the key as you like.
Note that this works in a bash shell; if you are using a different shell you might need the alternative to the backtick. For example, a fish shell alternative would be
docker run .. -e HOST_HOSTNAME=(hostname) ..
I'm adding this because it's not mentioned in any of the other answers. You can give a container a specific hostname at runtime with the -h directive.
docker run -h=my.docker.container.example.com ubuntu:latest
You can use backticks (or whatever equivalent your shell uses) to get the output of hostname into the -h argument.
docker run -h=`hostname` ubuntu:latest
There is a caveat: the value of hostname will be taken from the host you run the command on, so if you want the hostname of a virtual machine that is running your docker container, using hostname as an argument may not be correct if you are using the host machine to execute docker commands against the virtual machine.
You can pass in the hostname as an environment variable. You could also mount /etc so you can cat /etc/hostname. But I agree with Vitaly, this isn't the intended use case for containers IMO.
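A minimal sketch of the mount approach, mounting just the host's /etc/hostname read-only under a different name inside the container:
docker run --rm -v /etc/hostname:/etc/host_hostname:ro debian cat /etc/host_hostname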
Another option that worked for me was to bind the host's network namespace to the container.
By adding:
docker run --net host
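With host networking the container also reports the host's hostname, e.g. (a quick sketch):
docker run --rm --net host debian hostname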
You can pass it as an environment variable like this. Generally, Node is the host that the service task is running on; its hostname defaults to the host name of the node when it is created.
docker service create -e 'FOO={{.Node.Hostname}}' nginx
Then you can do docker ps to get the container ID and look at the env:
$ docker exec -it c81640b6d1f1 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=c81640b6d1f1
TERM=xterm
FOO=docker-desktop
NGINX_VERSION=1.17.4
NJS_VERSION=0.3.5
PKG_RELEASE=1~buster
HOME=/root
An example of usage would be with Metricbeat, so you know which node is having system issues; I put this in https://github.com/trajano/elk-swarm:
metricbeat:
  image: docker.elastic.co/beats/metricbeat:7.4.0
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
    - /proc:/hostfs/proc:ro
    - /:/hostfs:ro
  user: root
  hostname: "{{.Node.Hostname}}"
  command:
    - -E
    - |
      metricbeat.modules=[
        {
          module:docker,
          hosts:[unix:///var/run/docker.sock],
          period:10s,
          enabled:true
        }
      ]
    - -E
    - processors={1:{add_docker_metadata:{host:unix:///var/run/docker.sock}}}
    - -E
    - output.elasticsearch.enabled=false
    - -E
    - output.logstash.enabled=true
    - -E
    - output.logstash.hosts=["logstash:5044"]
  deploy:
    mode: global
I think the reason I had the same issue is a bug in the latest Docker for Mac beta, but buried in the comments there I was able to find a solution that worked for me and my team. We're using this for local development, where we need our containerized services to talk to a monolith as we work to replace it. This is probably not a production-viable solution.
On the host machine, alias a known available IP address to the loopback interface:
$ sudo ifconfig lo0 alias 10.200.10.1/24
Then add that IP with a hostname to your docker config. In my case, I'm using docker-compose, so I added this to my docker-compose.yml:
extra_hosts:
  # configure your host to alias 10.200.10.1 to the loopback interface:
  #   sudo ifconfig lo0 alias 10.200.10.1/24
  - "relevant_hostname:10.200.10.1"
I then verified that the desired host service (a web server) was available from inside the container by attaching to a bash session, and using wget to request a page from the host's web server:
$ docker exec -it container_name /bin/bash
$ wget relevant_hostname/index.html
$ cat index.html
OK, this isn't the hostname (as the OP was asking), but for connectivity purposes there is a special name that resolves to your docker host from inside your container:
host.docker.internal
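For example (assuming Docker Desktop, where this name is resolved automatically; on recent Linux engines you can get the same effect with --add-host=host.docker.internal:host-gateway), a service listening on port 8080 on the host could be reached like this (port and path are placeholders):
docker run --rm curlimages/curl -s http://host.docker.internal:8080/health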
I was redirected here when googling for this.
HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print $2}' | cut -d / -f 1 | sed -n 1p`
docker run --add-host=myhost:${HOSTIP} --rm -it debian
Now you can access the host under the alias "myhost"
The first line won't run on cygwin, but you can figure out some other way to obtain the local IP address using ipconfig.
You can run:
docker run --network="host"
which shares the host's network stack with the container, so the host machine's hostname is visible to the container.
I ran
docker info | grep Name: | xargs | cut -d' ' -f2
inside my container.
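Note that this only works when the container can actually reach the Docker daemon, for example by mounting the socket (a sketch using the official docker CLI image):
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli \
  sh -c "docker info | grep 'Name:' | xargs | cut -d' ' -f2"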
I know it's an old question, but I needed this too, and I came up with another solution.
I used an entrypoint.sh to execute the following line, and define a variable with the actual hostname for that instance:
HOST=`hostname --fqdn`
Then, I used it across my entrypoint script:
echo "Value: $HOST"
Hope this helps
The following command (using the one and only original sendmail) sends an email:
echo "Subject: Testing Email" | cat - body.txt \
| /usr/lib/sendmail -v -F some@body.com -t some@body.else.com
WordPress and others use it in a similar fashion.
Invoking it like that from within a docker container gets stuck, even though DNS (for the MX lookup) and the MTA are reachable and the container is running privileged. People come up with all kinds of workarounds, like using ssmtp, which involve setting up a dedicated MTA for the container. Given that DNS and MX are available, I do not see the necessity for a dedicated MTA.
Why does the (one and only original) sendmail executable fail to send emails from within docker containers?