Currently I am trying to check whether a web service running inside a Docker container is healthy, i.e. whether it returns HTTP status code 200.
But Docker's built-in HEALTHCHECK only checks exit codes.
I am running this command via the terminal:
curl -o /dev/null -s -w "%{http_code}\n" http://localhost:8080/api/health
and checking whether the returned status code is 200.
How can I embed this in Docker's healthcheck?
You can work with the fact that HEALTHCHECK only checks the exit code of a command as follows:
Use curl's -w and -s options to output only the HTTP status code of the API request
Use a bash test expression to check the status code
The HEALTHCHECK in your Dockerfile might look like this:
HEALTHCHECK --interval=20s --retries=2 CMD \
[[ "$(curl -o /dev/null -s -w "%{http_code}\n" http://localhost:8080/api/health)" == "200" ]]
See the Docker HEALTHCHECK documentation for details.
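One caveat: [[ ... ]] is a bash construct, and the HEALTHCHECK command is run with /bin/sh by default. If your image's sh is not bash (Alpine-based images, for example), a POSIX sh variant of the same idea should behave the same (untested sketch):
HEALTHCHECK --interval=20s --retries=2 CMD \
  [ "$(curl -o /dev/null -s -w '%{http_code}' http://localhost:8080/api/health)" = "200" ] || exit 1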
I have an sh script (DashBoardImport.sh) like the one below. It checks the API response in an infinite loop in order to import a Kibana dashboard; if it gets a success response, it breaks the loop:
#!/bin/sh
# use while loop to check if kibana is running
while true
do
    response=$(curl -X POST elk:5601/api/saved_objects/_import -H "kbn-xsrf: true" --form file=@/etc/elasticsearch/CityCountDashBoard.ndjson | grep -oE "^\{\"success")
    #curl -X GET elk:9200/git-demo-topic | grep -oE "^\{\"git" > /dev/null
    #match=$?
    echo "$response"
    if [ '{"success' = "$response" ]
    then
        echo "Running import dashboard.."
        #curl -X POST elk:5601/api/saved_objects/_import -H "kbn-xsrf: true" --form file=@/etc/elasticsearch/CityCountDashBoard.ndjson
        break
    else
        echo "Kibana is not running yet"
        sleep 5
    fi
done
I run DashBoardImport.sh via the Dockerfile:
ADD ./CityCountDashBoard.ndjson /etc/elasticsearch/CityCountDashBoard.ndjson
ADD ./DashBoardImport.sh /etc/elasticsearch/DashBoardImport.sh
#ENTRYPOINT /etc/elasticsearch/DashBoardImport.sh &
USER root
RUN chmod +x /etc/elasticsearch/DashBoardImport.sh
#RUN /etc/elasticsearch/DashBoardImport.sh &
RUN nohup bash -c "/etc/elasticsearch/DashBoardImport.sh" >/dev/null 2>&1 &
I tried many options, as you can see from the commented-out lines. The script works perfectly when I run it manually in the Docker container: I kill the Kibana service, run the script, and once I start Kibana again the script works as expected and imports the dashboard. But it does not work when it is started automatically in the container.
Do you have any idea?
Thanks a lot in advance :)
A RUN step executes in a temporary container until the command returns, and then Docker captures the changes to the filesystem as a new layer in your image. Nothing else remains: no environment variables, no running processes, only the filesystem changes.
So when you RUN nohup ... &, that command returns immediately since it runs in the background (nohup ... & explicitly does that), and so the temporary container exits, killing any processes that were running inside it, and only the filesystem changes made, if any, are captured into your image.
If you want something to run when you start the container, add it to your ENTRYPOINT or CMD.
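Applied to the Dockerfile from the question, that could look roughly like this (a sketch; your-main-process is a placeholder for whatever the base image normally runs as its entrypoint/command):
ADD ./DashBoardImport.sh /etc/elasticsearch/DashBoardImport.sh
RUN chmod +x /etc/elasticsearch/DashBoardImport.sh
# start the import script in the background when the container starts,
# then run the image's real main process in the foreground
ENTRYPOINT ["/bin/sh", "-c", "/etc/elasticsearch/DashBoardImport.sh & exec your-main-process"]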
In order to ensure the health of my container, I need to perform test calls against multiple URLs, e.g.
curl -f http://example.com and curl -f http://example2.com
Is it possible to perform multiple curl calls in a Docker health check?
You can set a script as the healthcheck command that contains more complex logic to perform the healthcheck. That way you can do multiple requests via curl and let the script return success only if all requests succeeded.
# example dockerfile
COPY files/healthcheck.sh /healthcheck.sh
RUN chmod +x /healthcheck.sh
HEALTHCHECK --interval=60s --timeout=10s --start-period=10s \
CMD /healthcheck.sh
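A minimal healthcheck.sh along those lines might look like this (the two URLs are placeholders for your real endpoints):
#!/bin/sh
# return non-zero as soon as any endpoint fails, so Docker marks the container unhealthy
curl --fail --silent http://localhost:8080/health || exit 1
curl --fail --silent http://localhost:8081/health || exit 1
exit 0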
Although I cannot test it right now, I think you can use the following:
HEALTHCHECK CMD (curl --fail http://example.com && curl --fail http://example2.com) || exit 1
If you first want to check this command manually (without the exit part), you can inspect the last exit code with
echo $? (on Linux)
and
echo %errorlevel% (on Windows)
I have created a declarative Jenkins pipeline and one of its stages is as follows:
stage('Docker Image') {
    steps {
        bat 'docker build -t HMT/demo-application:%BUILD_NUMBER% --no-cache -f Dockerfile .'
    }
}
This is the Dockerfile:
FROM tomcat:alpine
RUN wget -O /usr/local/tomcat/webapps/launchstation04.war http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.war
EXPOSE 9100
CMD /usr/local/tomcat/bin/cataline.bat run
I am getting the below error:
/bin/sh:
01:33:28 The command '/bin/sh -c wget -O /usr/local/tomcat/webapps/launchstation04.war http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.war' returned a non-zero code: 127
UPDATE:
I have updated the command to
RUN wget -O /usr/local/tomcat/webapps/launchstation04.war -U jenkinsuser:Learning#% http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-20200823.053346-18.war
There is no problem with my command. JFrog Artifactory was unable to authorize this action, so I added the username and password details, but it still didn't work.
Error:
wget: server returned error: HTTP/1.1 401 Unauthorized
It didn't work after modifying the password policy to unsupported, but it did work when I allowed anonymous access.
How can I provide access using credentials?
I need more clarification on your question; I am not sure where you are using the curl command.
The tomcat:alpine image doesn't contain the curl command unless you install it manually.
bash-4.4# type curl
bash: type: curl: not found
bash-4.4#
If your question is about the sh -c option: if the script is invoked through the CMD option, yes, it will use sh. You can give ENTRYPOINT a try instead.
You can provide the username & password via the command line:
wget --user user --password pass
Or using curl:
curl -u username:password -O
But avoid using special characters:
change your password to one that only contains characters in [a-z][A-Z][0-9].
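As a sketch (with placeholder build-arg names, and assuming GNU wget; BusyBox's wget may require the credentials to be embedded in the URL instead), you can pass the credentials into the build instead of hard-coding them. Keep in mind that build args remain visible in the image history, so an API key or a throwaway account is preferable:
ARG ARTIFACTORY_USER
ARG ARTIFACTORY_PASSWORD
# quote the values so special characters in the password are not interpreted by the shell
RUN wget -O /usr/local/tomcat/webapps/launchstation04.war \
    --user "$ARTIFACTORY_USER" --password "$ARTIFACTORY_PASSWORD" \
    http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.war
and build with docker build --build-arg ARTIFACTORY_USER=... --build-arg ARTIFACTORY_PASSWORD=... .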
Try an API key instead of the password; I have a feeling that the "#" may be throwing you off. Quotes can help there too, or separating the password with -p.
Also look at the request logs to see whether the entry comes in as 401 for the user, or as anonymous/unauthenticated.
Lastly, see if you can cURL the file from outside the image and then ADD it in, as that removes any external factors that may vary from the host (where I assume the command works).
I am using the Couchbase Java client SDK 2.7.9 and am running into a problem while trying to run automated integration tests. In such tests we usually use random ports to be able to run the same thing on the same Jenkins slave (using Docker, for example).
But with the client we can specify many custom ports, not however 8092, 8093, 8094 and 8095.
The popular Testcontainers modules mention as well that those ports have to remain static in their Couchbase module: https://www.testcontainers.org/modules/databases/couchbase/
Apparently it is also possible to change those ports at the server level.
Example:
docker-compose.yml:
version: '3.0'
services:
  rapid_test_cb:
    build:
      context: ""
      dockerfile: cb.docker
    ports:
      - "8091"
      - "8092"
      - "8093"
      - "11210"
The Docker image is 'couchbase:community-5.1.1'.
Internally the ports are the ones written above, but externally they are random. At the client level you can set bootstrapHttpDirectPort and bootstrapCarrierDirectPort, but apparently the 8092 and 8093 ports are obtained from the server side (which does not know which external port was assigned to it).
I would like to ask you whether it is possible to change those ports at the client level and, if not, to seriously consider adding that feature.
So, as discussed with the Couchbase team, it is not really possible at the client level. We found a way to make it work using Gradle's docker-compose plugin, but I imagine it would work in other setups too (Testcontainers could use a similar system).
docker-compose.yml:
version: '3.0'
services:
  rapid_test_cb:
    build:
      context: ""
      dockerfile: cb.docker
    ports:
      - "${COUCHBASE_RANDOM_PORT_8091}:${COUCHBASE_RANDOM_PORT_8091}"
      - "${COUCHBASE_RANDOM_PORT_8092}:${COUCHBASE_RANDOM_PORT_8092}"
      - "${COUCHBASE_RANDOM_PORT_8093}:${COUCHBASE_RANDOM_PORT_8093}"
      - "${COUCHBASE_RANDOM_PORT_11210}:${COUCHBASE_RANDOM_PORT_11210}"
    environment:
      COUCHBASE_RANDOM_PORT_8091: ${COUCHBASE_RANDOM_PORT_8091}
      COUCHBASE_RANDOM_PORT_8092: ${COUCHBASE_RANDOM_PORT_8092}
      COUCHBASE_RANDOM_PORT_8093: ${COUCHBASE_RANDOM_PORT_8093}
      COUCHBASE_RANDOM_PORT_11210: ${COUCHBASE_RANDOM_PORT_11210}
cb.docker:
FROM couchbase:community-5.1.1
COPY configure-node.sh /opt/couchbase
#HEALTHCHECK --interval=5s --timeout=3s CMD curl --fail http://localhost:8091/pools || exit 1
RUN chmod u+x /opt/couchbase/configure-node.sh
RUN echo "{rest_port, 8091}.\n{query_port, 8093}.\n{memcached_port, 11210}." >> /opt/couchbase/etc/couchbase/static_config
CMD ["/opt/couchbase/configure-node.sh"]
configure-node.sh:
#!/bin/bash
poll() {
    # The command supplied to the function is invoked via "$@"; we check its return value with $?
    "$@"
    while [ $? -ne 0 ]
    do
        echo 'waiting for couchbase to start'
        sleep 1
        "$@"
    done
}
set -x
set -m
if [[ -n "${COUCHBASE_RANDOM_PORT_8092}" ]]; then
    sed -i "s|8092|${COUCHBASE_RANDOM_PORT_8092}|g" /opt/couchbase/etc/couchdb/default.d/capi.ini
fi
if [[ -n "${COUCHBASE_RANDOM_PORT_8091}" ]]; then
    sed -i "s|8091|${COUCHBASE_RANDOM_PORT_8091}|g" /opt/couchbase/etc/couchbase/static_config
fi
if [[ -n "${COUCHBASE_RANDOM_PORT_8093}" ]]; then
    sed -i "s|8093|${COUCHBASE_RANDOM_PORT_8093}|g" /opt/couchbase/etc/couchbase/static_config
fi
if [[ -n "${COUCHBASE_RANDOM_PORT_11210}" ]]; then
    sed -i "s|11210|${COUCHBASE_RANDOM_PORT_11210}|g" /opt/couchbase/etc/couchbase/static_config
fi
/entrypoint.sh couchbase-server &
poll curl -s localhost:${COUCHBASE_RANDOM_PORT_8091:-8091}
# Setup index and memory quota
curl -v -X POST http://127.0.0.1:${COUCHBASE_RANDOM_PORT_8091:-8091}/pools/default --noproxy '127.0.0.1' -d memoryQuota=300 -d indexMemoryQuota=300
# Setup services
curl -v http://127.0.0.1:${COUCHBASE_RANDOM_PORT_8091:-8091}/node/controller/setupServices --noproxy '127.0.0.1' -d services=kv%2Cn1ql%2Cindex
# Setup credentials
curl -v http://127.0.0.1:${COUCHBASE_RANDOM_PORT_8091:-8091}/settings/web --noproxy '127.0.0.1' -d port=${COUCHBASE_RANDOM_PORT_8091:-8091} -d username=Administrator -d password=password
# Load the rapid_test bucket
curl -X POST -u Administrator:password -d name=rapid_test -d ramQuotaMB=128 --noproxy '127.0.0.1' -d authType=sasl -d saslPassword=password -d replicaNumber=0 -d flushEnabled=1 -v http://127.0.0.1:${COUCHBASE_RANDOM_PORT_8091:-8091}/pools/default/buckets
fg 1
Gradle's docker compose configuration:
def findRandomOpenPortOnAllLocalInterfaces = {
    new ServerSocket(0).withCloseable { socket ->
        return socket.getLocalPort().intValue()
    }
}

dockerCompose {
    environment.put 'COUCHBASE_RANDOM_PORT_8091', findRandomOpenPortOnAllLocalInterfaces()
    environment.put 'COUCHBASE_RANDOM_PORT_8092', findRandomOpenPortOnAllLocalInterfaces()
    environment.put 'COUCHBASE_RANDOM_PORT_8093', findRandomOpenPortOnAllLocalInterfaces()
    environment.put 'COUCHBASE_RANDOM_PORT_11210', findRandomOpenPortOnAllLocalInterfaces()
}

integTest.doFirst {
    systemProperty 'com.couchbase.bootstrapHttpDirectPort', couchbase_random_port_8091
    systemProperty 'com.couchbase.bootstrapCarrierDirectPort', couchbase_random_port_11210
}
We currently use Elasticsearch for storage of Spring Boot App logs that are sent by Filebeat and use Kibana to visualise this.
Our entire architecture is dockerized inside a docker-compose file. Currently, when we start the stack, we have to wait for Elasticsearch to start, then PUT our ingest pipeline, then restart Filebeat, and only then do our logs show up properly ingested in Kibana.
I'm quite new to this, but I was wondering if there is no way to have Elasticsearch save ingest pipelines so that you do not have to load them every single time? I read about mounting volumes or running custom scripts that wait for ES and PUT the pipeline when it's ready, but all of this seems very cumbersome for a use case that, to me, seems like the default.
We used a similar approach to ozlevka, by running a script during the build process of our custom Elasticsearch image.
This is our script:
#!/bin/bash
# This script sets up the Elasticsearch docker instance with the correct pipelines and templates
baseUrl='localhost:9200'
contentType='Content-Type:application/json'
# filebeat
ingestUrl=$baseUrl'/_ingest/pipeline/our-pipeline?pretty'
payload='/usr/share/elasticsearch/config/our-pipeline.json'
/usr/share/elasticsearch/bin/elasticsearch -p /tmp/pid > /dev/null &
# wait until Elasticsearch is up
# you can get logs if you change /dev/null to /dev/stderr
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' -XPUT $ingestUrl -H$contentType -d#$payload)" != "200" ]]; do
echo "Waiting for Elasticsearch to start and posting pipeline..."
sleep 5
done
kill -SIGTERM $(cat /tmp/pid)
rm /tmp/pid
echo -e "\n\n\nCompleted Elasticsearch Setup, refer to logs for details"
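The script is then executed once at image build time, for example like this (a sketch; the image tag and the script name are assumptions):
# pick the Elasticsearch version you actually run
FROM docker.elastic.co/elasticsearch/elasticsearch:<version>
COPY our-pipeline.json /usr/share/elasticsearch/config/our-pipeline.json
COPY setup-pipeline.sh /usr/share/elasticsearch/setup-pipeline.sh
# the script starts a temporary Elasticsearch, PUTs the pipeline (which is kept
# in the cluster state on disk), then stops it, so the pipeline ends up baked into the image
RUN /usr/share/elasticsearch/setup-pipeline.sh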
I suggest using a start script in the Filebeat container.
The script pings Elasticsearch until it is ready, then creates the pipeline and starts Filebeat.
#!/usr/bin/env bash -e
START_FILE=/tmp/.es_start_file
http () {
    local path="${1}"
    curl -XGET -s -k --fail http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}${path}
}

pipeline() {
    curl -XPUT -s -k --fail http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$PIPELINE_NAME -d @pipeline.json
}

while true; do
    if [ -f "${START_FILE}" ]; then
        pipeline
        /usr/bin/filebeat -c filebeat.yaml &
        exit 0
    else
        echo 'Waiting for elasticsearch cluster to become green'
        if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
            touch ${START_FILE}
        fi
    fi
done
This method works well for docker-compose and Docker Swarm. For Kubernetes, it is preferable to create a readiness probe.
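For reference, on Kubernetes the same wait could be expressed as a readiness probe on the Elasticsearch container (a sketch; it assumes curl is available in the image and the thresholds are tuned to your cluster), so that dependents such as Filebeat only see Elasticsearch once the cluster is green:
readinessProbe:
  exec:
    command:
      - sh
      - -c
      - curl -s --fail "http://localhost:9200/_cluster/health?wait_for_status=green&timeout=1s"
  initialDelaySeconds: 30
  periodSeconds: 10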