Implementing Consul healthcheck in Docker environment - docker

I am new to Consul/Registrator and Docker, and I am confused about using a Consul health check in a Docker environment. It is described in the section Docker + Interval of the following link: https://www.consul.io/docs/agent/checks.html
Here is the example of a Consul health check definition described in the link:
{
  "check": {
    "id": "mem-util",
    "name": "Memory utilization",
    "docker_container_id": "f972c95ebf0e",
    "shell": "/bin/bash",
    "args": ["/usr/local/bin/check_mem.py"],
    "interval": "10s"
  }
}
Is the health check script inside the Docker image or outside of it (in the example: check_mem.py)? Do we have to know the ID of the container and manually insert it in the docker_container_id field? (That would not be a very efficient way.)
I have been googling around and the only answer that I can find is at the end of the following discussion:
https://github.com/hashicorp/consul/issues/3182
But that code is a workaround: it uses Docker's native HEALTHCHECK and the Registrator variable ENV SERVICE_CHECK_SCRIPT. It does not use a Consul health check script.
Can anybody help me understand how a Consul health check works in a Docker environment?

The Docker check runs within the Docker container, so the script (check_mem.py in your example) has to exist inside the container's filesystem.
Your example is equivalent to docker exec f972c95ebf0e /bin/bash /usr/local/bin/check_mem.py
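If you do not want to hard-code the container ID, one option is to look it up at registration time. A minimal sketch, assuming the target container is named myservice and the Consul agent loads config files from /etc/consul.d (both are assumptions, adjust to your setup):
CONTAINER_ID=$(docker ps --filter "name=myservice" --format "{{.ID}}")   # resolve the ID by container name
cat > /etc/consul.d/mem-check.json <<EOF
{
  "check": {
    "id": "mem-util",
    "name": "Memory utilization",
    "docker_container_id": "$CONTAINER_ID",
    "shell": "/bin/bash",
    "args": ["/usr/local/bin/check_mem.py"],
    "interval": "10s"
  }
}
EOF
consul reload   # tell the agent to pick up the new check definition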

Related

Issue using Docker Container logs to grafana using Loki-Promtail or Log Driver

I have an issue with Promtail and Loki. On my server I have almost 10 Docker containers running across Prod and Dev environments. As I am new to Grafana, I want to scrape the logs of these 10 containers and view them in Grafana using the Loki data source.
What have I done so far?
Scenario 1: With Loki and Promtail config file
Step 1: Logged into Grafana Cloud and created a Loki configuration with a new API key
Step 2: Pasted the below config file in /etc/promtail/config.yaml:
server:
  # port for the healthcheck
  http_listen_port: 0
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

client:
  url: https://<user>:<password>@logs-prod-us-central1.grafana.net/api/prom/push

scrape_configs:
  - job_name: local
    static_configs:
      - targets:
          - localhost
        labels:
          job: mrp
          __path__: /var/lib/docker/containers/*/*log
Step 3: Ran the following docker run command for Promtail:
docker run --name promtail --volume "$PWD/promtail:/etc/promtail" --volume "/var/lib/docker/containers:/var/lib/docker/containers/" grafana/promtail:master -config.file=/etc/promtail/config.yaml -log.level=debug
Step 4: I am able to see logs, but I couldn't find the container name, image name or anything else within them; they look like plain text. --> Can you please help me solve this?
Scenario 2: Tried with the Loki log driver
Step 1: Installed the Loki log driver on my server
Step 2: Pasted the below configuration in /etc/docker/daemon.json:
{
  "debug": true,
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "https://<user_id>:<password>@logs-us-west1.grafana.net/loki/api/v1/push",
    "loki-batch-size": "400"
  }
}
Step 3: I need to restart Docker for the daemon config to take effect, but if I do so I might lose the running containers; they go into an exited state. --> This is a kind of blocker.
Please help me solve this, thanks in advance.
I think what you are looking for is the live restore option provided by Docker. Ideally you should not run both dev and prod environments on the same machine, but if you have a valid reason for doing so, you need to add the setting below to your Docker daemon config and try systemctl reload docker instead of doing a restart.
{
  "live-restore": true
}
More details are documented here at: https://docs.docker.com/config/containers/live-restore/
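For illustration, a minimal sketch of how the live-restore setting could sit alongside the Loki log-driver options from the question in /etc/docker/daemon.json (the URL placeholder is the one from the question, not a real endpoint), followed by the reload:
{
  "debug": true,
  "live-restore": true,
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "https://<user_id>:<password>@logs-us-west1.grafana.net/loki/api/v1/push",
    "loki-batch-size": "400"
  }
}
sudo systemctl reload docker
With live-restore enabled, running containers keep running while the daemon is reloaded or briefly restarted.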

Find out value of environment variables being passed to container by docker-compose

I'm troubleshooting a service that's failing to start because of an environment variable issue. I'd like to check out what the environment variables look like from the inside of the container. Is there a command I can pass in my docker-compose.yaml so that instead of starting the service it prints the relevant environment variable to standard output and exits?
Try this:
docker-compose run rabbitmq env
This will run env inside the rabbitmq service. env will print all environment variables (from the shell).
If the service is already running, you can do this instead, which will run env in a new shell in the existing container (which is faster since it does not need to spin up a new instance):
docker-compose exec rabbitmq env
Get the container ID with docker ps.
Then execute a shell in the running rabbitmq container by running docker exec with the container ID of your rabbitmq container.
Once you are in the rabbitmq container, you can echo the value of any environment variable as you would on any other Linux system, e.g. if you declared ENV DEBUG=true at image build time, then you should be able to retrieve that value with echo $DEBUG in the container. Furthermore, once you are in the container, you can poke around the log files for further investigation.
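For example, a minimal sketch of those steps (the DEBUG variable is just the illustration from the paragraph above):
docker ps                           # find the ID of the rabbitmq container
docker exec -it <container-id> sh   # open a shell inside it
echo $DEBUG                         # print a single variable, e.g. one set with ENV DEBUG=true
env                                 # or dump all of them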
As others have said, first get the container ID with docker ps. When you have done that, view all the properties with docker inspect <id> and you will see something like:
[
  {
    ...
    "Config": {
      ...
      "Env": [
        "ASPNETCORE_URLS=http://+:80",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "DOTNET_RUNNING_IN_CONTAINER=true",
        "DOTNET_VERSION=6.0.1",
        "ASPNET_VERSION=6.0.1",
        "Logging__Console__FormatterName=Json"
      ],
      ...
    }
  }
]
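If you only want the environment block, a Go-template filter keeps you from scrolling through the full inspect output; a minimal sketch:
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' <container-id>
This prints one VAR=value pair per line for the given container.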

Accessing GitLab CI Service from A Container running Inside DinD

I'm trying to run a continuous integration in GitLab CI consisting of:
build the docker image
run tests
push the docker image to a registry
Those run inside one job. I could do it without any problem until some tests came up that need to communicate with a database: my container can't communicate with the Postgres service defined.
I've reproduced it in a public repository with a simple ping script:
image: docker:stable

services:
  - docker:dind
  - postgres:latest

job1:
  script:
    - ping postgres -c 5
    - docker run --rm --network="host" alpine:latest sh -c "ping postgres -c 5"
The first command runs without any problem, but the second one fails with the error:
ping: bad address 'postgres'
How can I access the service?
Or should I run the test in a different job?
The solution is to use --add-host=postgres:$POSTGRES_IP to pass the IP address that is visible in the job container on to the nested container.
To find out the postgres IP linked to the outer container you can use, for example, getent hosts postgres | awk '{ print $1 }'.
So the YAML would look like:
image: docker:stable

services:
  - docker:dind
  - postgres:latest

job1:
  script:
    - ping postgres -c 5
    - docker run --rm --add-host=postgres:$(getent hosts postgres | awk '{ print $1 }') alpine:latest sh -c "ping postgres -c 5"
To understand why the other more common ways to connect containers wont work in this case, we have to remember we are trying to link a nested container with a service linked to its "parent". Something like this:
gitlab ci runner --> docker -> my-container (alpine)
                            -> docker:dind
                            -> postgres
So we are trying to connect a container with its "uncle", or in other words to connect nested containers.
As noted by @tbo, using --network host will not work. This is probably because GitLab CI uses --link (as explained here) to connect containers instead of the newer --network. The way --link works means that the service containers are connected to the job container, but not to one another. So using the host network won't make the nested container inherit the postgres hostname.
One could also think that using --link postgres:postgres would work, but it won't either, because in this environment postgres is only a hostname mapped to the IP of a container outside. There is no container here to be linked with the nested container.
So all we can do is manually add a host with the correct ip to the nested container using --add-host as explained above.

Docker on Windows10 home - inside docker container connect to the docker engine

When creating a Jenkins Docker container, it is very useful to able to connect to the Docker daemon. In that way, I can start docker commands inside the Jenkins container.
For example, after starting the Jenkins Docker container, I would like to 'docker exec -it container-id bash' and start 'docker ps'.
On Linux you can bind-mount /var/run/docker.sock. On Windows this does not seem to be possible; the solution is to use named pipes. So, in my docker-compose.yml file I tried to mount the named pipe.
version: '2'
services:
  jenkins:
    image: jenkins-docker
    build:
      context: ./
      dockerfile: Dockerfile_docker
    ports:
      - "8080:8080"
      - "50000:50000"
    networks:
      - jenkins
    volumes:
      - jenkins_home:/var/jenkins_home
      - \\.\pipe\docker_engine:\\.\pipe\docker_engine
      # - /var/run/docker.sock:/var/run/docker.sock
      # - /path/to/postgresql/data:/var/run/postgresql/data
      # - etc.
Starting docker-compose with this file, I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
How can I setup the docker-compose file so that I can use the docker.sock (or Docker) inside the started container?
On Linux you can use something like volumes: /var/run/docker.sock:/var/run/docker.sock. This does not work in a Windows environment: when you add this folder (/var) to Oracle VM VirtualBox, it never gets an IP.
You can expose the daemon on tcp://localhost:2375 without TLS in the settings. This way you can configure Jenkins to use the Docker API instead of the socket. I encourage you to read this article by Nick Janetakis about "Understanding how the Docker Daemon and the Docker CLI work together".
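For example, once the daemon listens on TCP, the Jenkins container can be pointed at it through the standard DOCKER_HOST variable instead of the socket. A minimal docker-compose sketch, assuming Docker Desktop, where host.docker.internal resolves to the Windows host (with Docker Toolbox/VirtualBox you would use the VM's address instead):
version: '2'
services:
  jenkins:
    image: jenkins-docker
    environment:
      # assumption: daemon exposed on tcp://localhost:2375 without TLS, as described above
      - DOCKER_HOST=tcp://host.docker.internal:2375
The Docker CLI inside the container honours DOCKER_HOST automatically.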
There are also several Docker plugins for Jenkins that allow this connection; you can find additional information in the Docker plugin documentation on wiki.jenkins.io:
def dockerCloudParameters = [
  connectTimeout: 3,
  containerCapStr: '4',
  credentialsId: '',
  dockerHostname: '',
  name: 'docker.local',
  readTimeout: 60,
  serverUrl: 'unix:///var/run/docker.sock', // <-- Replace here by the tcp address
  version: ''
]
EDIT 1:
I don't know if it is useful, but the Docker Daemon on Windows is located to C:\ProgramData\docker according to the Docker Daemon configuration doc.
EDIT 2:
You need to explicitly tell the container to use the host network because you want to expose both Jenkins and the Docker API.
Following this documentation, you only have to add --network=host (or network_mode: 'host' in docker-compose, as sketched below) to your container/service. For further information, you can read this article to understand the purpose of this network mode.
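A minimal sketch of what that looks like in a compose file (the service and image names are placeholders taken from the question):
version: '2'
services:
  jenkins:
    image: jenkins-docker
    network_mode: 'host'   # port mappings under "ports:" are ignored in host mode
Note that host networking behaves differently on Docker Desktop for Windows than on a native Linux host, so treat this as a sketch rather than a guaranteed fix.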
My first try was to start a Docker environment using the Docker Quickstart Terminal. This is a good solution when running Docker commands within that environment.
But installing a complete CI/CD Jenkins environment via Docker means that WITHIN the Jenkins Docker container you need to access the Docker daemon. After trying many solutions and reading many posts, this did not work. @Paul Rey, thank you very much for trying all kinds of routes.
A good solution is to get an Ubuntu virtual machine and install it via Oracle VM VirtualBox. It is then VERY IMPORTANT to install Docker via this official description.
Before installing Docker, of course you need to install Curl, Git, etc.

Import broker definitions into Dockerized RabbitMQ

I have a RabbitMQ broker with some exchanges and queues already defined. I know I can export and import these definitions via the HTTP API. I want to Dockerize it, and have all the broker definitions imported when it starts.
Ideally, it would be done as easily as it is done via the API. I could write a bunch of rabbitmqctl commands, but with a lot of definitions this might take quite some time. Also, every change somebody else makes through the web interface would have to be inserted.
I have managed to do what I want by writing a script that sleeps before a curl request and then starts the server, but this seems error prone and really not elegant. Are there any better ways to do definition importing/exporting, or is this the best that can be done?
My Dockerfile:
FROM rabbitmq:management
LABEL description="Rabbit image" version="0.0.1"
ADD init.sh /init.sh
ADD rabbit_e6f2965776b0_2015-7-14.json /rabbit_config.json
CMD ["/init.sh"]
init.sh
sleep 10 && curl -i -u guest:guest -d @/rabbit_config.json -H "content-type:application/json" http://localhost:15672/api/definitions -X POST &
rabbitmq-server $@
Export definitions using rabbitmqadmin export rabbit.definitions.json.
Add them inside the image using your Dockerfile: ADD rabbit.definitions.json /tmp/rabbit.definitions.json
Add an environment variable when starting the container, for example, using docker-compose.yml:
environment:
  - RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbitmq_management load_definitions "/tmp/rabbit.definitions.json"
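Pulling those steps together, a minimal docker-compose.yml sketch (the service name and port mappings are assumptions; here the definitions file is bind-mounted instead of being added in a Dockerfile, which has the same effect of placing it at /tmp/rabbit.definitions.json):
services:
  rabbitmq:
    image: rabbitmq:management
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      # assumption: rabbit.definitions.json sits next to docker-compose.yml
      - ./rabbit.definitions.json:/tmp/rabbit.definitions.json:ro
    environment:
      - RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbitmq_management load_definitions "/tmp/rabbit.definitions.json"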
There is a simple way to load definitions into a Docker container.
Use a preconfigured node to export the definitions to a JSON file.
Then move this file to the same folder as the Dockerfile and create a rabbitmq.config in that folder too. Here is the content of rabbitmq.config:
[
  { rabbit, [
    { loopback_users, [ ] },
    { tcp_listeners, [ 5672 ] },
    { ssl_listeners, [ ] },
    { hipe_compile, false }
  ] },
  { rabbitmq_management, [
    { listener, [
      { port, 15672 },
      { ssl, false }
    ] },
    { load_definitions, "/etc/rabbitmq/definitions.json" }
  ] }
].
Then prepare an appropriate Dockerfile:
FROM rabbitmq:3.6.14-management-alpine
ADD definitions.json /etc/rabbitmq
ADD rabbitmq.config /etc/rabbitmq
EXPOSE 4369 5672 25672 15672
The definitions file and the config are baked into the image at build time, and the broker loads them on startup, so when you run the container all definitions are already applied.
You could start your container with RabbitMQ, configure the resources (queues, exchanges, bindings) and then commit your configured container as a new image. This image can be used to start new containers.
More details at https://docs.docker.com/articles/basics/#committing-saving-a-container-state
I am not sure that this is an option, but the absolute easiest way to handle this situation is to periodically create a new, empty RabbitMQ container and have it join the first container as part of the RabbitMQ cluster. The configuration of the queues will be copied to the second container.
Then, you can stop the container and create a versioned image in your Docker repository from the new container using docker commit. This only saves the changes you have made to the image and means you do not have to worry about re-importing the configuration each time; you would just pull the latest image to get the latest configuration!
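A minimal sketch of that commit-based flow (the container name, image tag and registry are assumptions, not from the answer above):
# start a throwaway broker and configure queues/exchanges via the UI or rabbitmqadmin
docker run -d --name rabbit-seed -p 15672:15672 rabbitmq:management
# ...configure the broker...
docker stop rabbit-seed
# snapshot the configured container as a versioned image and publish it
docker commit rabbit-seed my-registry.example.com/rabbitmq-configured:1.0
docker push my-registry.example.com/rabbitmq-configured:1.0
Note that if the base image declares /var/lib/rabbitmq as a volume (the official image does), the broker's data may not be captured by the commit, so verify the result before relying on it.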
Modern releases support definition import directly in the core, without the need to preconfigure the management plugin.
# New in RabbitMQ 3.8.2.
# Does not require management plugin to be enabled.
load_definitions = /path/to/definitions/file.json
From Schema Definition Export and Import RabbitMQ documentation.
If you use the official rabbitmq image, you can mount a definitions file into /etc/rabbitmq as shown below, and RabbitMQ will load these definitions when the daemon starts:
docker run -v ./your_local_definitions_file.json:/etc/rabbitmq/definitions.json ......

Resources