Docker-compose health check for Mosquitto

I set up the Mosquitto password using a password file:
volumes:
  - /password:/mosquitto/config
How can I add a healthcheck in docker-compose? I tried the solution provided here:
Script to check mosquitto is healthy
healthcheck:
  test: ["CMD-SHELL", "timeout -t 5 mosquitto_sub -t '$$SYS/#' -C 1 | grep -v Error || exit 1"]
  interval: 10s
  timeout: 10s
  retries: 6
Also, I tried a couple of other options, but they ask me to pass a username and password. Can't I use this password file?
Update: my mosquitto.conf
allow_anonymous false
password_file /mosquitto/config/pwfile
port 1883
listener 9001
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log

At a push you could enable a listener with MQTT over WebSockets as the protocol and then use a basic curl GET request to check whether the broker is up.
e.g. add this to mosquitto.conf:
listener 8080 127.0.0.1
protocol websockets
and a healthcheck something like:
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8080"]
  interval: 10s
  timeout: 10s
  retries: 6
The raw HTTP GET request should complete without needing to authenticate.
The other option is to re-enable anonymous users and grant the anonymous user read-only access to the $SYS/# topic pattern using an ACL file (acl_file), as sketched below.
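A minimal sketch of that second option, assuming the ACL file is mounted at /mosquitto/config/aclfile (the path and the username are placeholders, adjust to your volume layout):

# mosquitto.conf - keep the password file for named users, allow anonymous clients
allow_anonymous true
password_file /mosquitto/config/pwfile
acl_file /mosquitto/config/aclfile

# aclfile - topic rules before any "user" line apply to anonymous clients
topic read $SYS/#

# full access for a password-authenticated user (hypothetical name)
user someuser
topic readwrite #

With this in place, the mosquitto_sub healthcheck from the question should be able to read $SYS/# without credentials, while named users still authenticate against the password file.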

Related

Can I use fluent-logger with otel fluentforwardreceiver?

I am using fluentforwardreceiver in OTEL collector as mentioned below:
receivers:
  fluentforward:
    endpoint: localhost:8006
When I send logs to this via the docker container mentioned below, it works fine:
test_agent_log_generate:
  image: httpd
  ports:
    - "803:80"
  logging:
    driver: "fluentd"
    options:
      fluentd-address: localhost:8006
      tag: httpd.access
  command: /bin/bash -c "while sleep 2; do echo \"T1111esting a log message\"; done"
But when I use fluent-logger to do the same, I am not getting any logs! The code is below:
const FluentClient = require("@fluent-org/logger").FluentClient;
const logger = new FluentClient("tag_prefix", {
  socket: {
    host: "localhost",
    port: 8006,
    timeout: 3000, // 3 seconds
  }
});
// send an event record with 'tag.label'
do {
  console.log("working....")
  logger.emit('label', {record: 'this is a log'});
} while (true);
As far as I understand, logger.emit also follows the "forward" protocol, so I am expecting the logs to be received in my OTEL collector.
What may have gone wrong?
Thanks in advance!
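One thing worth checking (an observation about the snippet, not something confirmed in this thread): the synchronous do/while loop never yields to the Node.js event loop, so the client's buffered records may never get flushed to the socket. A sketch that emits on a timer instead, using the same assumed host and port:

const FluentClient = require("@fluent-org/logger").FluentClient;

const logger = new FluentClient("tag_prefix", {
  socket: {
    host: "localhost",
    port: 8006,
    timeout: 3000, // 3 seconds
  },
});

// setInterval yields between ticks, so the client can actually
// write the queued records to the socket
setInterval(() => {
  console.log("working....");
  logger.emit("label", { record: "this is a log" });
}, 2000);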

How do you do a Healthcheck for Fluentd's Default ports?

I was looking through Docker Hub and elsewhere; generally I can find healthchecks for different containers, but I didn't see any for Fluentd.
I would like to essentially do a curl from the container to confirm it is healthy.
My issue is that I have dependent containers which start immediately but fail because 24224 on Fluentd is not yet available.
So what I thought to do was to write something like:
version: "3.3"
services:
  fluentd:
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    healthcheck:
      test: curl --fail -s http://localhost:24224 || exit 1
      interval: 30s
      timeout: 30s
      retries: 5
      start_period: 30s
  sample:
    depends_on:
      fluentd:
        condition: container_healthy
In this sample test, it seems that the curl command I set up was not the correct way to validate the health of Fluentd.
I did not find anything specific to this in my searches, but maybe others know what to do.
My error was: Error response from daemon: failed to initialize logging driver: dial tcp [::1]:24224: connect: connection refused when it attempts to set up logging to fluentd.
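For what it's worth, the forward port (24224) speaks Fluentd's forward protocol, not HTTP, so curl against it cannot succeed even when Fluentd is healthy. One option (a sketch, assuming you control fluent.conf) is to enable the built-in monitor_agent HTTP endpoint and point the healthcheck at that:

# fluent.conf - expose Fluentd's monitoring API on its default port 24220
<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>

healthcheck:
  test: curl --fail -s http://localhost:24220/api/plugins.json || exit 1
  interval: 30s
  timeout: 30s
  retries: 5
  start_period: 30s

Note also that the docker-compose depends_on condition key is service_healthy, not container_healthy.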

Running ELK on docker, Kibana says: Unable to retrieve version information from Elasticsearch nodes

I was referring to the example given in the Elasticsearch documentation for starting the Elastic Stack (Elasticsearch and Kibana) on Docker using docker compose. It gives an example docker compose version 2.2 file, creates three Elasticsearch nodes, and has security enabled. I wanted to keep it minimal to start with, so I tried to convert it to a docker compose version 3.8 file, turn off security, and reduce the number of Elasticsearch nodes to one. This is what my current compose file looks like:
version: "3.8"
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.0.0-amd64
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=es01
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
    deploy:
      resources:
        limits:
          memory: 1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      # test:
      #   [
      #     "CMD-SHELL",
      #     "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
      #   ]
      # Changed to:
      test: ["CMD-SHELL", "curl -f http://localhost:9200 || exit 1"]
      interval: 10s
      timeout: 10s
      retries: 120
  kibana:
    depends_on:
      - es01
    image: docker.elastic.co/kibana/kibana:8.0.0-amd64
    volumes:
      - kibanadata:/usr/share/kibana/data
    ports:
      - 5601:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://localhost:9200
    deploy:
      resources:
        limits:
          memory: 1g
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
volumes:
  esdata01:
    driver: local
  kibanadata:
    driver: local
Then, I tried to run it:
docker stack deploy -c docker-compose.nosec.noenv.yml elk
Creating network elk_default
Creating service elk_es01
Creating service elk_kibana
When I tried to check their status, it displayed the following:
$ docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3dcd08134e38 docker.elastic.co/kibana/kibana:8.0.0-amd64 "/bin/tini -- /usr/l…" 3 minutes ago Up 3 minutes (health: starting) 5601/tcp elk_kibana.1.ng8aspz9krfnejfpsnqzl2sci
7b548a43c45c docker.elastic.co/elasticsearch/elasticsearch:8.0.0-amd64 "/bin/tini -- /usr/l…" 3 minutes ago Up 3 minutes (healthy) 9200/tcp, 9300/tcp elk_es01.1.d9a107j6wkz42shti3n6kpfmx
I noticed that Kibana's status gets stuck at (health: starting). When I checked Kibana's logs with the command docker service logs -f elk_kibana, they had the following WARN and ERROR lines:
[WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[WARN ][plugins.reporting.config] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[WARN ][plugins.reporting.config] Found 'server.host: "0.0.0.0"' in Kibana configuration. Reporting is not able to use this as the Kibana server hostname. To enable PNG/PDF Reporting to work, 'xpack.reporting.kibanaServer.hostname: localhost' is automatically set in the configuration. You can prevent this message by adding 'xpack.reporting.kibanaServer.hostname: localhost' in kibana.yml.
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 127.0.0.1:9200
It seems that Kibana is not able to connect to Elasticsearch, but why? Is it because I disabled security, and security cannot be disabled?
PS-1: Earlier, when I set elasticsearch host as follows in kibana's environment in the docker compose file:
ELASTICSEARCH_HOSTS=https://es01:9200 # that is 'es01' instead of `localhost`
it gave me the following error:
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. getaddrinfo ENOTFOUND es01
So, after checking this question, I changed es01 to localhost as specified earlier (that is, in the complete docker compose file content before PS-1).
PS-2: Replacing localhost with 192.168.0.104 gives the following errors:
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 192.168.0.104:9200
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. write EPROTO 140274197346240:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:
Try this:
ELASTICSEARCH_HOSTS=http://es01:9200
I don't know why it runs on my PC, since Elasticsearch is supposed to use SSL, but in your case plain http works just fine.
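The reasoning, for what it's worth: with xpack.security.enabled=false Elasticsearch serves plain HTTP, so an https:// URL produces the SSL "wrong version number" error from PS-2, and localhost inside the Kibana container refers to the Kibana container itself, hence the ECONNREFUSED. Assuming both services sit on the same compose network, the kibana environment would become:

environment:
  - SERVERNAME=kibana
  # plain http because security is disabled; es01 is the compose service name
  - ELASTICSEARCH_HOSTS=http://es01:9200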

Port 4466 already in use error after migrating from GraphQL Yoga to Apollo Server 2

I have a local app with a backend of Prisma and GraphQL Yoga. I migrated from Yoga to Apollo Server 2 and believe I have the configuration set up correctly. However, when I run dev I get an error that port 4466 is already in use.
I thought perhaps I needed to restart my docker images and did try that.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f14c004ae0d2 prismagraphql/prisma:1.34 "/bin/sh -c /app/sta…" 30 minutes ago Up 30 minutes 0.0.0.0:4466->4466/tcp backend_prisma_1
0c5f3517e990 mysql "docker-entrypoint.s…" 5 months ago Up 21 minutes 3306/tcp, 33060/tcp latinconexiones_mysql-db_1
This is my docker-compose.yml file
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        databases:
          default:
            connector: mysql
            host: host.docker.internal
            database: test_db
            user: root
            password: root
            rawAccess: true
            port: '8889'
            migrations: false
How can I solve this? It seems like re-initializing Prisma with a different port may work, but that feels like overkill.
Check with docker ps whether any container uses that port; if so, stop it if you don't need it, or change the port of your current container.
It may also be that a non-containerized app uses that port; check this with:
sudo lsof -i -P -n | grep LISTEN | grep 4466
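A short sketch of both options (the container name is taken from the docker ps output above; the alternative host port is an arbitrary choice):

# option 1: stop the container already bound to 4466
docker stop backend_prisma_1

# option 2: publish the service on a different host port in docker-compose.yml
ports:
  - "4467:4466"  # host 4467 -> container port 4466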

Add healthcheck in Keycloak Docker Swarm service

What's the best way to test the health of Keycloak configured as cluster deployed as docker swarm service?
I tried the healthcheck below for testing availability in the Keycloak service descriptor:
healthcheck:
  test: ["CMD-SHELL", "curl http://localhost:8080/auth/realms/[realm_name]"]
  interval: 30s
  timeout: 10s
  retries: 10
  start_period: 1m
Are there more things to check for?
I couldn't find documentation for this.
I prefer to check the 'master' realm directly.
Moreover, recent Keycloak versions use a different path (omitting 'auth'):
healthcheck:
  test: ["CMD", "curl", "-f", "http://0.0.0.0:8080/realms/master"]
  start_period: 10s
  interval: 30s
  retries: 3
  timeout: 5s
One can also use the /health endpoint on the Keycloak container, as follows:
"healthCheck": {
"retries": 3,
"command": [
"CMD-SHELL",
"curl -f http://localhost:8080/health || exit 1"
],
"timeout": 5,
"interval": 60,
"startPeriod": 300
}
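Note that on recent Quarkus-based Keycloak images the health endpoints are only exposed when explicitly enabled (and the newest releases serve them on the separate management port, 9000 by default), so verify this for your version. A compose-style sketch, assuming health is enabled and curl is available in the image:

environment:
  - KC_HEALTH_ENABLED=true
healthcheck:
  test: ["CMD-SHELL", "curl -f http://localhost:8080/health/ready || exit 1"]
  interval: 30s
  timeout: 5s
  retries: 3
  start_period: 1m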
