Envoy (service mesh) running as a Docker container - docker

We want to do some testing on Envoy, and we have pulled the envoyproxy/envoy container from Docker Hub and are running it under Docker. However, if one opens a CLI into the container there is no file called envoy, and browsing to http://localhost:443 returns nothing. Is Envoy actually running in the container? ps shows the only running processes are bash and the ps command itself. So how do you test an Envoy config file in the container, for example html.yaml?
We can run envoy standalone with a config file, but that is not the environment we will be using.
The Envoy Documentation is very long and very thorough, but to us at least this isn't clear.

You can easily test Envoy inside a Docker container by running:
docker run --name envoy \
-p 443:443 \
-v $PWD/html.yaml:/etc/envoy/envoy.yaml \
envoyproxy/envoy:v1.21.1
You must bind mount your config into /etc/envoy/envoy.yaml in the container.
Note that in order to access your services, you may want to run envoy in host network (--net=host), or at least in the same network as your services.
Also, the entrypoint command can be seen by executing ps -ef:
$ docker exec envoy ps -ef
UID PID PPID C STIME TTY TIME CMD
envoy 1 0 1 16:47 pts/0 00:00:01 envoy -c /etc/envoy/envoy.yaml
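If you only want to check whether html.yaml parses before starting a listener, the same image can validate it: Envoy's --mode validate flag loads the config and exits without binding any ports (the image tag matches the example above).

```shell
# Validate html.yaml without actually starting Envoy.
# --mode validate parses the config, reports errors, and exits.
docker run --rm \
  -v $PWD/html.yaml:/etc/envoy/envoy.yaml \
  envoyproxy/envoy:v1.21.1 \
  --mode validate -c /etc/envoy/envoy.yaml
```

Once the container is running normally, you can exercise the listener from the host with curl, e.g. curl -v http://localhost:443/.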

Related

unhealthy docker container not restarted by docker native health check

I have implemented the Docker native health check by adding a HEALTHCHECK command in the Dockerfile, as shown below:
HEALTHCHECK --interval=60s --timeout=15s --retries=3 CMD ["/svc/app/healthcheck/healthCheck.sh"]
and set the entry point for the container:
CMD [".././run.sh"]
I am executing the docker run command as shown below:
docker run -d --net=host --pid=host --publish-all=true -p 7000:7000/udp applicationname:temp
healthCheck.sh exits with 1 when my application is not up, and I can see the container status as unhealthy, but it is not getting restarted.
STATUS
Up 45 minutes (unhealthy)
Below are the docker and OS details:
[root@localhost log]# docker -v
Docker version 18.09.7, build 2d0083d
OS version
NAME="CentOS Linux"
VERSION="7 (Core)"
How to restart my container automatically when it becomes unhealthy?
Docker only reports the status of the healthcheck. Acting on the healthcheck result requires an extra layer running on top of docker. Swarm mode provides this functionality and is shipped with the docker engine. To enable:
docker swarm init
Then instead of managing individual containers with docker run, you would declare your target state with docker service or docker stack commands and swarm mode will manage the containers to achieve the target state.
docker service create -d --net=host applicationname:temp
Note that host networking and publishing ports are incompatible (they make no logical sense together), that net requires two dashes to be a valid flag, and that changing the pid namespace is not supported in swarm mode. Most other features work similarly to docker run.
https://docs.docker.com/engine/reference/commandline/service_create/
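You can also declare the health check on the service itself instead of (or in addition to) the Dockerfile; a minimal sketch using the intervals from the question (the service name is a placeholder):

```shell
# Swarm replaces a task once its health check has failed
# --health-retries times; --restart-condition any also restarts
# tasks that exit for other reasons.
docker service create -d \
  --name myapp \
  --health-cmd "/svc/app/healthcheck/healthCheck.sh" \
  --health-interval 60s \
  --health-timeout 15s \
  --health-retries 3 \
  --restart-condition any \
  applicationname:temp
```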
Docker currently has no built-in auto-restart mechanism for unhealthy containers (see this issue), but you can use a workaround as mentioned here:
docker run -d \
--name autoheal \
--restart=always \
-e AUTOHEAL_CONTAINER_LABEL=all \
-v /var/run/docker.sock:/var/run/docker.sock \
willfarrell/autoheal
This mounts the Docker unix domain socket into the autoheal container, which then watches the health status of all containers and restarts any that become unhealthy.
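If you would rather watch only specific containers instead of all of them, the image's README describes an opt-in mode based on a container label:

```shell
# With AUTOHEAL_CONTAINER_LABEL=autoheal, only containers carrying
# the label autoheal=true are monitored and restarted.
docker run -d \
  --name autoheal \
  --restart=always \
  -e AUTOHEAL_CONTAINER_LABEL=autoheal \
  -v /var/run/docker.sock:/var/run/docker.sock \
  willfarrell/autoheal

# Label the containers you want watched:
docker run -d --label autoheal=true applicationname:temp
```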

docker: not found after mounting /var/run/docker.sock

I'm trying to use the docker command inside a container.
I use this command to mount /var/run/docker.sock and run the container:
docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
gitlab/gitlab-runner:latest
but when I try to use docker inside the container (gitlab-runner) I get an error:
docker: not found
host:
srw-rw---- 1 root docker 0 Mar 23 15:13 docker.sock
container:
0 srw-rw---- 1 root gitlab-runner 0 Mar 23 15:13 docker.sock
This worked fine before I removed the old container and created a new one; now I'm unable to run docker inside the container. Please help.
You should differentiate between the Docker daemon and the Docker CLI. The first is a service, which actually performs all the work: it builds and runs containers. The second is an executable, used to send commands to the daemon.
The executable (the docker CLI) is lightweight and by default uses /var/run/docker.sock to reach the daemon (other transports exist as well).
When you start your container with -v /var/run/docker.sock:/var/run/docker.sock you actually share your host's Docker daemon with the docker CLI in the container. Thus, you still need to install the docker CLI inside the container to make use of Docker, but you don't need to set up a daemon inside it (which is fairly complicated and requires privileged mode).
Conclusion
Install the docker CLI inside the container, share the socket, and enjoy. But when using the host's docker daemon, be careful with bind-mounting volumes: the daemon does not see the container's internal file system, so volume paths are resolved on the host.
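As a concrete sketch, assuming an Alpine-based image (the package name and manager differ on Debian/Ubuntu-based images), installing just the client looks like this:

```shell
# Install only the Docker CLI (not the daemon) inside the
# running container; apk/docker-cli assumes an Alpine base image.
docker exec -u root gitlab-runner apk add --no-cache docker-cli

# The CLI now talks to the host daemon through the mounted socket:
docker exec gitlab-runner docker version
```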

No such image or container error

I want to set up a Rancher server and a Rancher agent on the same server.
Here is what I have done to create the server:
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable
Then, I opened my web browser on port 8080.
I chose a login/password and enabled access control.
Then I wanted to create a host (agent). The Rancher web interface tells me to run this command:
docker run -e CATTLE_AGENT_IP=x.x.x.x --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.10 http://nsxxx.ovh.net:8080/v1/scripts/yyyy:1514678400000:zzzz
I get no error message, but I do not see any entry in the hosts section (in the Rancher web interface).
So I tried to execute a shell on the agent docker container:
docker exec -ti xxxxx /bin/bash
I tried to manually run the run.sh script, and here is what I see:
Error: No such image or container: nsxxx
I suppose this is because the docker containers cannot communicate with each other, but I have done exactly what is in the documentation...
Thanks for your help
For docker exec you need to replace the xxxxx string with the container ID or the name of the container. You can get both from the docker ps command.
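In other words (the placeholder is whatever docker ps actually reports for the agent container):

```shell
# List running containers; the ID is the first column,
# the name is the last column.
docker ps

# Exec into the agent using either value:
docker exec -ti <container-id-or-name> /bin/bash
```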

How to see the logs of docker container running on different ports

I am running a single docker container on two different ports using the command below:
docker run -p ${EXTERNAL_PORT_NUMBER}:${INTERNAL_PORT_NUMBER} -p ${EXTERNAL_PORT_NUMBER_SECOND}:${INTERNAL_PORT_NUMBER_SECOND} --network ${NETWORK} --name ${SERVICE_NAME} --restart always -m 1024M --memory-swap -1 -itd ${ORGANISATION}/${SERVICE_NAME}:${VERSION}
I can see the container is running fine
My question is: how can I see the logs of this docker container?
Every time I do sudo docker logs database-service -f, I only see the log of the process running on port 9003.
How can I view the logs of the process running on port 9113?
You are getting all the logs that were written to stdout or stderr in the container.
This has nothing to do with which ports the processes are exposed on.
If two processes are running inside the container and both write their logs to the console, then docker logs will show both of them for that container.
You can try the multitail utility to tail more than one log file via docker exec.
For that you have to install it in the container first.
You can also bind external volumes to the container's service logs and read the logs on the host:
docker run -v path_to_your_host_logs:container_service_log_path
For example:
docker run -v /home/user/app/apache_access.log:/var/log/apache_access.log

Can I config the hosts file for spring boot docker container?

I am developing a Spring Boot application and building a Docker image. To get the app running, I have to add a new line "127.0.0.1 www.hostname.com" to /etc/hosts. I want this configured automatically when I run the Docker image. I tried this:
docker run --add-host www.hostname.com:127.0.0.1 xxx/name:1.0
What I expect is:
www.hostname.com:8080 = 127.0.0.1:8080
but it does not work.
Please give some advice. Thanks.
If you run the container with --hostname e.g.
docker run -d -ti --hostname myhost --name mytest ubuntu /bin/bash
you add a line 172.17.0.3 myhost to the container's /etc/hosts, where 172.17.0.3 is the private address of the container itself, so the name is resolvable from inside the container and from the docker server too.
If you run the container with --add-host e.g.
docker run -d -ti --add-host www.hostname.com:127.0.0.1 --name mytest ubuntu /bin/bash
you add a line 127.0.0.1 www.hostname.com to container's /etc/hosts, so it is resolving www.hostname.com to 127.0.0.1 internally from the container itself.
It works, at least in Docker version 1.13.0 on Linux.
Otherwise you can change the container's /etc/hosts at startup using sed or another substitution command in CMD, or by overriding the command at docker run.
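The command-override variant could look like the following sketch; the jar path is a placeholder for however the image normally starts the app:

```shell
# Append the mapping to /etc/hosts at container start, then exec
# the application so it remains PID 1.
docker run xxx/name:1.0 sh -c \
  'echo "127.0.0.1 www.hostname.com" >> /etc/hosts && exec java -jar /app.jar'
```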
