Why does the sendmail CLI not send mails from Docker containers?

The following command (using the one and only original sendmail) sends an email:
echo "Subject: Testing Email" | cat - body.txt \
| /usr/lib/sendmail -v -F some#body.com -t some#body.else.com
WordPress and others use it in a similar fashion.
Invoking it like that from within a Docker container gets stuck, even though DNS (for the MX lookup) and the MTA are reachable and the container is running privileged. People come up with all kinds of workarounds, such as using ssmtp, which involve setting up a dedicated MTA for the container. Given that DNS and MX records are available, I do not see the necessity for a dedicated MTA.
Why does the (one and only original) sendmail executable fail to send emails from within docker containers?

Related

How to send email from docker container to host postfix with bsd-mailx?

I have a running postfix mailserver on my Ubuntu host. I might later on replace it with a Docker container as well, but for the migration I want to stick with the host postfix first.
How can I send emails from a docker container to the host postfix, if I want to minimize image size?
I tried installing bsd-mailx inside the container, as it has a small package size.
In general, I could now send emails with:
echo "test header" | mail -s "test body" my#mail.com
But how can I tell the command in the Docker container to actually send the mail to the host system? Or would I have to mount/bind something from the host's postfix into the container, so that mailx sends the mail to the mount?
mail/mailx both invoke a binary called sendmail. That means you need to install an MTA that offers that particular interface (see the sketch after this list):
postfix
exim
maybe nullmailer
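As a sketch, a small image could combine bsd-mailx with nullmailer relaying to the host's postfix (package names and the host.docker.internal alias are assumptions; on Linux you may need to run the container with --add-host=host.docker.internal:host-gateway):

FROM debian:stable-slim
# mail client plus a minimal MTA that provides the sendmail interface
RUN apt-get update && apt-get install -y --no-install-recommends bsd-mailx nullmailer \
 && rm -rf /var/lib/apt/lists/*
# relay all outgoing mail to the host's postfix over SMTP
RUN echo "host.docker.internal smtp --port=25" > /etc/nullmailer/remotes

nullmailer accepts mail on the sendmail interface inside the container and forwards everything to the smarthost listed in /etc/nullmailer/remotes.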

Why does no logon happen when attaching to the Docker busybox image?

$ docker run --rm -it busybox
/ # who
<empty>
In the next session I try to attach to this Docker container, expecting a second user to appear, but again no luck:
$ docker attach `docker container ls | grep busybox | cut -d" " -f1`
/ # who
<empty again>
So the question is: why are no logons recorded, neither by the first run-and-attach nor by subsequent attaches? And why is there not even a single logon into this container?
who reads the list of users from /var/run/utmp. On a regular Linux system, the login program prompts for the username and password and then starts the user's shell. It also updates /var/run/utmp with the new user.
The same thing happens for SSH and Telnet servers. They are expected to update /var/run/utmp.
In a Docker container, login is usually not executed. Docker isolates resources from the host system with Linux Namespaces, it does not provide a complete Linux system. When you enter a Docker container, the given entrypoint or command is executed with PID 1.
Subsequent docker exec calls are handled in a similar way. Docker enters the namespace of the container and executes the given command.
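You can observe this directly (a quick sketch; whether /var/run/utmp exists at all in the busybox image is an assumption to check):

$ docker run --rm busybox sh -c 'echo "shell PID: $$"; ls -l /var/run/utmp'

The shell itself runs as PID 1, and since login was never executed, nothing ever wrote a utmp entry for who to report.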
EDIT: after some reading I see Alexander's answer as more to the point. A couple of useful links I've read along the way:
https://docs.docker.com/engine/security/security/
https://lwn.net/Articles/531114
As far as I understand, the busybox Docker container is very basic and does not support all the functionality of a full-fledged Linux system.
In "I thought I understood Docker until I saw the BusyBox docker image" there is a discussion of what that image is and what it is for.

How to wait until `docker start` is finished?

When I run docker start, it seems the container might not be fully started at the time the docker start command returns. Is it so?
Is there a way to wait for the container to be fully started before the command returns? Thanks.
A common technique to make sure a container is fully started (i.e. services running, ports open, etc.) is to wait until a specific string is logged. See this example, Waiting until Docker containers are initialized, dealing with PostgreSQL and Rails.
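A minimal sketch of that technique in shell (the container name is a placeholder; the "ready" string depends on the service, e.g. PostgreSQL logs "database system is ready to accept connections"):

# poll the container logs until the ready marker shows up
until docker logs my-container 2>&1 | grep -q "database system is ready to accept connections"; do
  sleep 1
done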
Edited:
There could be another solution using the HEALTHCHECK feature of Docker containers. The idea is to configure the container with a health check command that is used to determine whether or not the main service is fully started and running normally.
The specified command runs inside the container and sets the health status to starting, healthy or unhealthy depending on its exit code (0 - container healthy, 1 - container not healthy). The status of the container can then be retrieved on the host by inspecting the running instance (docker inspect).
Health check options can be configured inside the Dockerfile or when the container is run. Here is a simple example for PostgreSQL:
# note: match the quoted status "healthy" exactly; a bare grep for healthy would also match "unhealthy"
docker run --name postgres --detach \
  --health-cmd='pg_isready -U postgres' \
  --health-interval='5s' \
  --health-timeout='5s' \
  --health-start-period='20s' \
  postgres:latest && \
until docker inspect --format "{{json .State.Health.Status}}" postgres | \
  grep -m 1 '"healthy"'; do sleep 1; done
In this case the health command is pg_isready. A web service will typically use curl; other containers have their own specific commands.
The Docker community provides this kind of configuration for several official images here.
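The same check can also be baked into the image at build time. A sketch of the Dockerfile variant mentioned above (assuming the stock postgres image as the base):

FROM postgres:latest
# report healthy once the server accepts connections
HEALTHCHECK --interval=5s --timeout=5s --start-period=20s \
  CMD pg_isready -U postgres || exit 1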
Now, when we restart the container (docker start), it is already configured, and we only need the second part:
docker start postgres && \
until docker inspect --format "{{json .State.Health.Status}}" postgres | \
  grep -m 1 '"healthy"'; do sleep 1; done
The command will return when the container is marked as healthy.
Hope that helps.
Disclaimer: I'm not an expert in Docker, and I would be glad to learn whether a better solution exists.
Docker itself doesn't really know that a container "may not be fully started".
So, unfortunately, there is nothing in Docker to handle this.
Usually, the commands used by the creator of the Docker image (in the Dockerfile) are supposed to be organized so that the container is usable once docker start returns, and that is the best way. However, it's not always the case.
Here is an example:
Localstack, a set of services for local development with AWS, has a Docker image, but once it is started, the S3 port, for example, is not ready to accept connections yet.
From what I understand, a port that is exposed but not yet ready is the typical situation you refer to.
So, in my experience, the application that talks to the dockerized process should wrap its connection attempts in retries until the port actually accepts connections.
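A minimal retry sketch in shell (the host, the port number and the availability of nc are assumptions; adjust them to the service you wait for):

# probe the exposed port until it accepts TCP connections
until nc -z localhost 4566; do
  echo "waiting for the service port..." >&2
  sleep 1
done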

Testing an application inside docker container in VSTS

I'm trying to test an ASP.NET Core 2 dockerized application in VSTS. It is set up inside a Docker container via docker-compose. The tests make requests to addresses stored in config (or taken from environment variables, if set).
Right now, the build is set up like this:
1. Run a compose command to restore and publish the app.
2. Run compose to create and run the Docker containers.
3. Run a bash script (explained below).
4. Run the tests.
First of all, I found out that I can't use http://localhost:port inside VSTS. It works fine on my local machine, but it does not work on the server.
I've found this article that points out the need to use the container's real IP to access it. I've tried two of the methods described in the referenced question, but neither worked.
When using docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id, I get Template parsing error: template: :1:24: executing "" at <.NetworkSettings.Net...>: map has no entry for key "NetworkSettings" (the problem is with the command itself)
And when using docker inspect $(sudo docker ps | grep wiremocktest_microservice.gateway | head -c 12) | grep -e \"IPAddress\"\:[[:space:]]\"[0-2] | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}', I actually get the IP and can pass it to the tests, but then something strange happens: they start to time out. I tried to replicate this locally, and it reproduces: every request that I make to this IP times out (easily checked in a browser).
What address do I need to use to access the containers in VSTS, and why can't I use localhost?
I've run into a similar problem with an Azure Storage service running in a container for unit tests (Gradle & Kotlin project). Locally everything works, and it's possible to connect to the container by using localhost:10000 (the port is published to the host machine in the run command). But this doesn't work on the VSTS build pipeline, and neither does trying to connect with the IP of the container.
I've found a solution that works at least in this case: I created a custom container network and connected my Azure Storage container and the VSTS agent container to that network. After that it's possible to connect to my custom container from the tests by using the container name and internal port number e.g. my-storage-container:10000.
So I created a script that creates the container network, starts my container in that network and then also connects the VSTS agent, by grepping the container ID from the process list. It's something like this:
# create a user-defined network so containers can reach each other by name
docker network create my-custom-network
# run the storage emulator attached to that network
docker run --net=my-custom-network -d --name azure-storage-container -t -p 10000:10000 -v ${SCRIPT_DIR}/azurite:/opt/azurite/folder arafato/azurite
# find the ID of the VSTS agent container and attach it to the same network
CONTAINER_ID=$(docker ps -a | awk '{ print $1,$2 }' | grep microsoft/vsts-agent | awk '{ print $1 }')
docker network connect my-custom-network ${CONTAINER_ID}
After that my tests can connect to the Azure storage container with http://azure-storage-container:10000 with no problems.
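As a hypothetical smoke test (assuming nc is available inside the agent container), you can verify the wiring before running the tests; the name resolves because user-defined networks provide embedded DNS for container names:

# exits 0 once the storage container is reachable by name on the custom network
nc -z azure-storage-container 10000 && echo "storage reachable"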

Kafka with Docker dynamic advertised_host_name

I've been using wurstmeister/kafka for a few weeks now in Dev and QA, but in each case I need to hard-code KAFKA_ADVERTISED_HOST_NAME to the IP of the box it's on, using docker-compose. This hasn't been a problem during testing, but now that I'm trying to scale out to production, it's becoming a little more frustrating.
I'm continuing to use docker-compose to somewhat manually deploy three instances of Kafka and Zookeeper onto three separate cloud hosts. I've opened up the appropriate ports and attempted everything within my limited Docker knowledge to dynamically assign KAFKA_ADVERTISED_HOST_NAME. Much to my dismay, it always yields some sort of error. The README on Docker Hub mentions assigning this variable dynamically via
HOSTNAME_COMMAND, e.g. HOSTNAME_COMMAND: "route -n | awk '/UG[ \t]/{print $$2}'"
This causes my application to get a connection-refused response when attempting to connect. However, manually assigning the IPs of the three hosts works perfectly fine. What am I missing here?
Compose can substitute variables into configuration options at run time.
Compose Environment variables
Set the KAFKA_ADVERTISED_HOST_NAME container environment variable to a local variable called DOCKER_HOST_IP.
whatever:
  environment:
    KAFKA_ADVERTISED_HOST_NAME: ${DOCKER_HOST_IP}
Then DOCKER_HOST_IP needs to be set whenever you run docker-compose. You will get a warning from docker-compose when it's not set.
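For example (the address is a placeholder; putting DOCKER_HOST_IP=... into a .env file next to docker-compose.yml works as well):

export DOCKER_HOST_IP=10.0.0.5
docker-compose up -d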
IP on the Docker host
Running ip route show will list the default interface.
Then ip address show will give you the ip addresses.
To get these into a variable
# pick the interface used by the default route
default_interface=$(ip ro sh | awk '/^default/{ print $5; exit }')
# take the first IPv4 address on that interface
export DOCKER_HOST_IP=$(ip ad sh $default_interface | awk '/inet /{print $2}' | cut -d/ -f1)
# abort if nothing was found (braces, not a subshell, so that exit stops the script)
[ -z "$DOCKER_HOST_IP" ] && { echo "No docker host ip address" >&2; exit 1; }
echo "$DOCKER_HOST_IP"
You can add those commands to whatever your startup script is, or create a standalone script from them to call when you need it.
IP via Docker Machine
If you are managing a remote docker-machine, you can get the IP via the machine environment.
DOCKER_HOST_IP=$(docker-machine ip ${DOCKER_MACHINE_NAME})
