Testing an application inside docker container in VSTS

I'm trying to test an ASP.NET Core 2 dockerized application in VSTS. It is set up inside the docker container via docker-compose. The tests make requests via addresses stored in config (or taken from environment variables, if set).
Right now, the build is set up like this:
Run compose command to restore and publish the app.
Run compose to create and run docker containers.
Run a bash script (explained below).
Run tests.
First of all, I found out that I can't use http://localhost:port inside VSTS. It works fine on my local machine, but it does not work on the server.
I've found this article that points out the need to use container's real IP to access it. I've tried 2 of the methods described in the referenced question, but none of them worked.
When using docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id, I get Template parsing error: template: :1:24: executing "" at <.NetworkSettings.Net...>: map has no entry for key "NetworkSettings" (the problem is with the command itself)
And when using docker inspect $(sudo docker ps | grep wiremocktest_microservice.gateway | head -c 12) | grep -e \"IPAddress\"\:[[:space:]]\"[0-2] | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}', I actually get the IP and can pass it to tests, but then something strange happens. Namely, they start to time out. I tried to replicate this locally, and it does. Every request that I make to this IP times out (easily checked in browser).
What address do I need to use to access the containers in VSTS, and why can't I use localhost?

I've run into a similar problem with an Azure Storage service running in a container for unit tests (Gradle & Kotlin project). Locally everything works and it's possible to connect to the container by using localhost:10000 (the port is published to the host machine in the run command). But this doesn't work on the VSTS build pipeline, and neither does trying to connect with the IP of the container.
I've found a solution that works at least in this case: I created a custom container network and connected my Azure Storage container and the VSTS agent container to that network. After that it's possible to connect to my custom container from the tests by using the container name and internal port number e.g. my-storage-container:10000.
So I created a script that creates the container network, starts my container in that network and then also connects the VSTS agent by grepping the container ID from the process list. It's something like this:
docker network create my-custom-network
docker run --net=my-custom-network -d --name azure-storage-container -t -p 10000:10000 -v ${SCRIPT_DIR}/azurite:/opt/azurite/folder arafato/azurite
CONTAINER_ID=`docker ps -a | awk '{ print $1,$2 }' | grep microsoft/vsts-agent | awk '{print $1 }'`
docker network connect my-custom-network ${CONTAINER_ID}
After that my tests can connect to the Azure storage container with http://azure-storage-container:10000 with no problems.
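As a quick sanity check before the tests run (assuming curl is available in the agent image; the container name and port match the example above), you can verify from the agent that the name resolves over the shared network:
curl -s -o /dev/null http://azure-storage-container:10000 && echo "storage container reachable" || echo "storage container not reachable yet"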


Is it possible to get to the logs of the container while inside a docker container?

My project is running inside a docker container - web_container - and I need a way to get web_container's logs from within the project.
I tried running the command docker logs web_container >> file.log;, but as I understand it, the command is not recognized inside the docker container.
Is there any way to get the logs while in the container?
Logs are stored on the host, so you cannot access them from inside the container. But it is possible to mount the folder into the container (docker run -v /var/lib/docker/containers:/whereever/you/want2/mount:ro), read-only preferred.
By default it is here /var/lib/docker/containers/[container-id]/[container-id]-json.log.
The container ID itself can be obtained from inside the container with cat /proc/self/cgroup | grep -o -e "docker-.*.scope" | head -n 1 | sed "s/docker-\(.*\).scope/\\1/". (This may depend on your system; in any case it is in /proc/self/cgroup.)
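Putting the two pieces together, a minimal sketch could look like this (the mount point /mnt/docker-logs and the image name my-image are placeholders):
# on the host: start the container with Docker's log directory mounted read-only
docker run -d --name my-app -v /var/lib/docker/containers:/mnt/docker-logs:ro my-image
# inside the container: resolve our own ID and tail our own json log
CONTAINER_ID=$(cat /proc/self/cgroup | grep -o -e "docker-.*.scope" | head -n 1 | sed "s/docker-\(.*\).scope/\\1/")
tail -f /mnt/docker-logs/${CONTAINER_ID}/${CONTAINER_ID}-json.log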
Remark:
This is a technically working answer to your question. For most use-cases, the comments by David and The Fool are the more elegant way of solving this.

Visual Studio docker and kubernetes support

I am currently using Visual Studio to build a console application that has docker support. The problem is that the application does not seem to start in an external command prompt; the output appears in the internal console window of Visual Studio instead. How do I make it execute in a command prompt window?
It seems that the command Visual Studio uses forces the output into the dev console window:
docker exec -i -w "/app" b6375046a58cba92571a425d937a16bd222d87b537af1c1d64ca6b4c845616c9 sh -c ""dotnet" --additionalProbingPath /root/.nuget/fallbackpackages2 --additionalProbingPath /root/.nuget/fallbackpackages "bin/Debug/netcoreapp3.1/console.dll" | tee /dev/console"
How do I change the exec command line so that it outputs to a different window?
And is it somehow possible to deploy these containerized applications into a locally running Kubernetes cluster?
Thus utilizing Kubernetes services instead of specifying IP addresses, etc.?
There is no real notion of a "different window" here.
You can run your app in the foreground or in detached mode (-d).
To start a container in detached mode, you use the -d=true or just -d option.
In foreground mode you shouldn't specify the -d flag.
In foreground mode (the default when -d is not specified), docker run can start the process in the container and attach the console to the process’s standard input, output, and standard error
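For example (the image name is just a placeholder):
# foreground: output stays attached to the current console
docker run --rm my-console-app
# detached: runs in the background; read the output later with docker logs
docker run -d --name my-console-app my-console-app
docker logs -f my-console-app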
And, of course, you can deploy your applications into a Kubernetes cluster. Try minikube to achieve everything you need without any trouble.
Kubernetes Services are another way to expose your app to the world or to other local workloads.
An abstract way to expose an application running on a set of Pods as a network service.
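As a minimal sketch of that flow with minikube (the image and deployment names are assumptions, not taken from your project):
# build the image straight into minikube's Docker daemon
eval $(minikube docker-env)
docker build -t console-app:dev .
# create a deployment and expose it inside the cluster as a Service
kubectl create deployment console-app --image=console-app:dev
kubectl expose deployment console-app --port=80 --target-port=8080
# other pods can now reach it by service name, e.g. http://console-app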

How to wait until `docker start` is finished?

When I run docker start, it seems the container might not be fully started at the time the docker start command returns. Is it so?
Is there a way to wait for the container to be fully started before the command returns? Thanks.
A common technique to make sure a container is fully started (i.e. services running, ports open, etc.) is to wait until a specific string is logged. See this example, Waiting until Docker containers are initialized, dealing with PostgreSQL and Rails.
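A minimal sketch of that technique, assuming a container named db whose service logs "ready to accept connections" once it is up:
docker start db
until docker logs db 2>&1 | grep -q "ready to accept connections"; do sleep 1; done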
Edited:
There could be another solution using the HEALTHCHECK feature of Docker containers. The idea is to configure the container with a health check command that is used to determine whether or not the main service is fully started and running normally.
The specified command runs inside the container and sets the health status to starting, healthy or unhealthy depending on its exit code (0 - container healthy, 1 - container not healthy). The status of the container can then be retrieved on the host by inspecting the running instance (docker inspect).
Health check options can be configured inside the Dockerfile or when the container is run. Here is a simple example for PostgreSQL:
docker run --name postgres --detach \
--health-cmd='pg_isready -U postgres' \
--health-interval='5s' \
--health-timeout='5s' \
--health-start-period='20s' \
postgres:latest && \
until docker inspect --format "{{json .State.Health.Status }}" postgres| \
grep -m 1 "healthy"; do sleep 1 ; done
In this case the health command is pg_isready. A web service will typically use curl; other containers have their own specific commands.
The docker community provides this kind of configuration for several official images here.
Now, when we restart the container (docker start), it is already configured and we need only the second part:
docker start postgres && \
until docker inspect --format "{{json .State.Health.Status }}" postgres|\
grep -m 1 "healthy"; do sleep 1 ; done
The command will return when the container is marked as healthy
Hope that helps.
Disclaimer, I'm not an expert in Docker, and will be glad to know by myself whether a better solution exists.
The docker system doesn't really know that a container "may not be fully started".
So, unfortunately, there is nothing in docker itself to deal with this.
Usually, the commands used by the creator of the docker image (in the Dockerfile) are supposed to be organized in a way that the container is usable once the docker start command returns, and that is the best approach. However, it's not always the case.
Here is an example:
Localstack, which is a set of services for local development with AWS, has a docker image, but once it's started the S3 port, for example, is not yet ready to accept connections.
From what I understand, an exposed-but-not-yet-ready port is the typical situation you are referring to.
So, in my experience, the application that talks to the dockerized process should wrap its connection attempts to the server port in retries until the port becomes available.
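A rough sketch of such a retry loop in shell (the host, port, retry count and the availability of nc are all assumptions):
for i in $(seq 1 30); do
  nc -z localhost 4566 && break   # stop waiting once the port accepts connections
  sleep 1
done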

How can I see which user launched a Docker container?

I can view the list of running containers with docker ps or equivalently docker container ls (added in Docker 1.13). However, it doesn't display the user who launched each Docker container. How can I see which user launched a Docker container? Ideally I would prefer to have the list of running containers along with the user who launched each of them.
You can try this:
docker inspect $(docker ps -q) --format '{{.Config.User}} {{.Name}}'
Edit: Container name added to output
There's no built in way to do this.
You can check the user that the application inside the container is configured to run as by inspecting the container for the .Config.User field, and if it's blank the default is uid 0 (root). But this doesn't tell you who ran the docker command that started the container. User bob with access to docker can run a container as any uid (this is the docker run -u 1234 some-image option to run as uid 1234). Most images that haven't been hardened will default to running as root no matter the user that starts the container.
To understand why, realize that docker is a client/server app, and the server can receive connections in different ways. By default, this server is running as root, and users can submit requests with any configuration. These requests may be over a unix socket, you could sudo to root to connect to that socket, you could expose the API to the network (not recommended), or you may have another layer of tooling on top of docker (e.g. Kubernetes with the docker-shim). The big issue in that list is the difference between the network requests vs a unix socket, because network requests don't tell you who's running on the remote host, and if it did, you'd be trusting that remote client to provide accurate information. And since the API is documented, anyone with a curl command could submit a request claiming to be a different user.
In short, every user with access to the docker API is an anonymized root user on your host.
The closest you can get is to either place something in front of docker that authenticates users and populates something like a label, or trust users to populate that label honestly (because there's nothing in docker validating these settings).
$ docker run -l "user=$(id -u)" -d --rm --name test-label busybox tail -f /dev/null
...
$ docker container inspect test-label --format '{{ .Config.Labels.user }}'
1000
Beyond that, if you have a deployed container, sometimes you can infer the user by looking through the configuration and finding volume mappings back to that user's home directory. That gives you a strong likelihood, but again, not a guarantee since any user can set any volume.
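For example, one (hedged) way to list a container's mount sources and look for a home directory is (the container name is a placeholder):
docker inspect --format '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' some-container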
I found a solution. It is not perfect, but it works for me.
I start all my containers with an environment variable ($CONTAINER_OWNER in my case) which includes the user. Then, I can list the containers with the environment variable.
Start container with environment variable
docker run -e CONTAINER_OWNER=$(whoami) MY_CONTAINER
Start docker compose with environment variable
echo "CONTAINER_OWNER=$(whoami)" > deployment.env # Create env file
docker-compose --env-file deployment.env up
List containers with the environment variable
for container_id in $(docker container ls -q); do
echo $container_id $(docker exec $container_id bash -c 'echo "$CONTAINER_OWNER"')
done
As far as I know, docker inspect will show only the configuration that the container started with.
Because commands like the entrypoint (or any init script) might change the user, those changes will not be reflected in the docker inspect output.
To work around this, you can overwrite the default entrypoint set by the image with --entrypoint="" and specify a command like whoami or id after it.
You asked specifically to see all the containers running and the launched user, so this solution is only partial and gives you the user in case it doesn't appear with the docker inspect command:
docker run --entrypoint "" <image-name> whoami
Maybe somebody will proceed from this point to a full solution (:
Read more about entrypoint "" here.
If you are used to the ps command, you can run ps on the Docker host and grep for parts of the process that is running inside the container. For example, if you have a Tomcat container running, you may run the following command to get details on which user started the container.
ps aux | grep tomcat
This is possible because containers are just processes managed by docker. However, this will only work on a single host. Docker provides alternatives to get container details, as mentioned in the other answers.
This command will print the uid and gid of the user inside the container:
docker exec <CONTAINER_ID> id
ps -aux | less
Find the process's name (the one running inside the container) in the list (last column) and you will see the user that ran it in the first column.

Kafka with Docker dynamic advertised_host_name

I've been using wurstmeister/kafka for a few weeks now in Dev and QA, but in each case I need to hard-code KAFKA_ADVERTISED_HOST_NAME to the IP of the box that it's on, using docker-compose. This hasn't been a problem during testing, but now that I'm trying to scale this out to production, it's becoming a little bit more frustrating.
I'm continuing to use docker-compose to somewhat manually deploy three instances of Kafka and ZooKeeper onto three separate cloud hosts. I've opened up the appropriate ports, and attempted everything in my limited Docker knowledge to dynamically assign KAFKA_ADVERTISED_HOST_NAME. Much to my dismay, it always yields some sort of error. The README on Docker Hub mentions assigning this variable dynamically via HOSTNAME_COMMAND, e.g. HOSTNAME_COMMAND: "route -n | awk '/UG[ \t]/{print $$2}'"
This causes my application to get a connection-refused response when attempting to connect. However, manually assigning the IP to the three hosts works perfectly fine. What am I missing here?!
Compose can substitute variables into configuration options at run time.
Compose Environment variables
Set the KAFKA_ADVERTISED_HOST_NAME container environment variable to a local variable called DOCKER_HOST_IP.
whatever:
  environment:
    KAFKA_ADVERTISED_HOST_NAME: ${DOCKER_HOST_IP}
Then DOCKER_HOST_IP needs to be set whenever you run docker-compose. You will get a warning from docker-compose when it's not set.
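For example (the address is just a placeholder; the next sections show how to derive it automatically):
DOCKER_HOST_IP=10.0.0.5 docker-compose up -d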
IP on the Docker host
Running ip route show will list the default interface.
Then ip address show will give you the ip addresses.
To get these into a variable
default_interface=$(ip ro sh | awk '/^default/{ print $5; exit }')
export DOCKER_HOST_IP=$(ip ad sh $default_interface | awk '/inet /{print $2}' | cut -d/ -f1)
[ -z "$DOCKER_HOST_IP" ] && (echo "No docker host ip address">&2; exit 1 )
echo "$DOCKER_HOST_IP"
You can add those commands to whatever your startup script is, or create a standalone script from them to call when you need it.
IP via Docker Machine
If you are managing a remote docker-machine you can get the ip via the machine environment.
DOCKER_HOST_IP=$(docker-machine ip ${DOCKER_MACHINE_NAME})
