I saw the Docker healthcheck example for RabbitMQ at docker-library/healthcheck.
I would like to apply a similar mechanism to my Kubernetes deployment to wait for the RabbitMQ deployment to become ready. I'm doing a similar thing with MongoDB, using a container that busy-waits on Mongo with a ping command (sketched below).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      initContainers:
      - name: wait-for-mongo
        image: gcr.io/app-1/tools/mongo-ping
      containers:
      - name: app-1-service
        image: gcr.io/app-1/service
        ...
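For context, the mongo-ping image is essentially a busy-wait loop, something like this sketch (assuming the stock mongo image and a Service named mongodb; the actual image's script is not shown here):

initContainers:
- name: wait-for-mongo
  image: mongo:4.4
  command: ["sh", "-c",
    "until mongo --host mongodb --eval 'db.adminCommand({ping: 1})'; do echo waiting for mongo; sleep 2; done;"]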
However, when I tried to construct such an init container, I couldn't find any way to query the health of RabbitMQ from outside its cluster.
The following works without any extra images/scripts, but requires you to enable the Management Plugin, e.g. by using the rabbitmq:3.8-management image instead of rabbitmq:3.8.
initContainers:
- name: check-rabbitmq-ready
  image: busybox
  command: ['sh', '-c',
    'until wget http://guest:guest@rabbitmq:15672/api/aliveness-test/%2F;
    do echo waiting for rabbitmq; sleep 2; done;']
Specifically, this waits until the HTTP Management API is available, and then checks that the default vhost is running healthily. The %2F refers to the default / vhost, which has to be URL-encoded. If you use your own vhost, enter that instead.
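If you'd rather not inline the credentials, a hedged variant pulls them from a Secret (the Secret name rabbit and its key names here are assumptions):

initContainers:
- name: check-rabbitmq-ready
  image: busybox
  env:
  - name: RABBIT_USER
    valueFrom:
      secretKeyRef:
        name: rabbit      # assumed Secret name
        key: username     # assumed key
  - name: RABBIT_PASS
    valueFrom:
      secretKeyRef:
        name: rabbit
        key: password
  command: ['sh', '-c',
    'until wget "http://$RABBIT_USER:$RABBIT_PASS@rabbitmq:15672/api/aliveness-test/%2F";
    do echo waiting for rabbitmq; sleep 2; done;']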
Adapted from this example, as suggested by @Hanx:
Dockerfile
FROM python:3-alpine
ENV RABBIT_HOST="my-rabbit"
ENV RABBIT_VHOST="vhost"
ENV RABBIT_USERNAME="root"
RUN pip install pika
COPY check_rabbitmq_connection.py /check_rabbitmq_connection.py
RUN chmod +x /check_rabbitmq_connection.py
CMD ["sh", "-c", "python /check_rabbitmq_connection.py --host $RABBIT_HOST --username $RABBIT_USERNAME --password $RABBIT_PASSWORD --virtual_host $RABBIT_VHOST"]
check_rabbitmq_connection.py
#!/usr/bin/env python3
# Check connection to the RabbitMQ server
# Source: https://blog.sleeplessbeastie.eu/2017/07/10/how-to-check-connection-to-the-rabbitmq-message-broker/
import argparse
import time
import pika

# define and parse command-line options
parser = argparse.ArgumentParser(description='Check connection to RabbitMQ server')
parser.add_argument('--host', required=True, help='Define RabbitMQ server hostname')
parser.add_argument('--virtual_host', default='/', help='Define virtual host')
parser.add_argument('--port', type=int, default=5672, help='Define port (default: %(default)s)')
parser.add_argument('--username', default='guest', help='Define username (default: %(default)s)')
parser.add_argument('--password', default='guest', help='Define password (default: %(default)s)')
args = vars(parser.parse_args())
print(args)

# set amqp credentials
credentials = pika.PlainCredentials(args['username'], args['password'])
# set amqp connection parameters
parameters = pika.ConnectionParameters(host=args['host'], port=args['port'], virtual_host=args['virtual_host'], credentials=credentials)

# try to establish connection and check its status;
# on failure, keep retrying every 5 seconds rather than crashing
while True:
    try:
        connection = pika.BlockingConnection(parameters)
        if connection.is_open:
            print('OK')
            connection.close()
            exit(0)
    except Exception as error:
        print('No connection yet:', error.__class__.__name__)
        time.sleep(5)
Build and run:
docker build -t rabbit-ping .
docker run --rm -it \
--name rabbit-ping \
--net=my-net \
-e RABBIT_PASSWORD="<rabbit password>" \
rabbit-ping
and in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      initContainers:
      - name: wait-for-rabbit
        image: gcr.io/my-org/rabbit-ping
        env:
        - name: RABBIT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rabbit
              key: rabbit-password
      containers:
      ...
I am using WSL2 on Windows.
I made a Flask service in minikube in WSL2 and, separately, a Docker container in WSL2.
I want to make a request to the Flask service in minikube from the container in WSL2.
Steps to create a flask service
flask_service.py (only the last lines are shown; the service runs on /rss)
if __name__ == '__main__':
    app.run(debug=False, host='0.0.0.0', port=8001)
Dockerfile
FROM python:3
COPY flask_service.py ./
WORKDIR .
RUN apt-get update
RUN apt-get install -y nano
RUN pip install numpy pandas Flask connectorx sqlalchemy pymysql jsonpickle
EXPOSE 8001
ENTRYPOINT ["python"]
CMD ["flask_service.py"]
minikube setting
minikube start --mount --mount-string="/home/sjw/kube:/home/sjw/kube"
kubectl proxy --address 0.0.0.0 --port 30001
minikube tunnel
getdb service manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: getdbdp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: getdb
  template:
    metadata:
      labels:
        app: getdb
    spec:
      containers:
      - name: getdb
        image: "desg2022/01getdb:v02"
        env:
        - name: "PORT"
          value: "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: getdb-lb
spec:
  type: LoadBalancer
  selector:
    app: getdb
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8001
First, local access (from Windows) to the Flask service was possible with the address below.
http://localhost:30001/api/v1/namespaces/default/services/http:getdb-lb:8080/proxy/rss
Second, when connecting from within minikube itself:
http://localhost:8001/rss
My question: I created a Docker container in WSL2 as follows.
docker-compose.yaml (the image is Ubuntu with only Python and pip installed)
version: '2.3'
services:
  master:
    container_name: gputest1
    image: desg2022/ubuntu:v01
    stdin_open: true # docker run -i
    tty: true        # docker run -t
    ports:
      - 8080:8888
    command: "/bin/bash"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ipc: 'host'
Inside this container I want to access getdb in minikube. What address should I put in?
I'm working on a microservice architectural project in which I use rq-workers. I have used a docker-compose file to start and connect the rq-worker with Redis successfully, but I'm not sure how to replicate it in Kubernetes. No matter what I try with command and args, I get a status of CrashLoopBackOff. Please guide me as to what I'm missing. Below are my docker-compose and rq-worker deployment files.
rq-worker and redis container config:
...
rq-worker:
  build: ./simba-app
  command: rq worker --url redis://redis:6379 queue
  depends_on:
    - redis
  volumes:
    - sharedvolume:/simba-app/app/docs
redis:
  image: redis:4.0.6-alpine
  ports:
    - "6379:6379"
  volumes:
    - ./redis:/data
...
rq-worker.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rq-worker
  labels:
    app: rq-worker
spec:
  selector:
    matchLabels:
      app: rq-worker
  template:
    metadata:
      labels:
        app: rq-worker
    spec:
      containers:
      - name: rq-worker
        image: some-image
        command: ["/bin/sh", "-c"]
        #command: ["rqworker", "--url", "redis://redis:6379", "queue"]
        args:
        - rqworker
        - --url
        - redis://redis:6379
        - queue
      imagePullSecrets:
      - name: regcred
---
Thanks in advance!
Edit:
I checked the logs using kubectl logs and found the following logs:
Error 99 connecting to localhost:6379. Cannot assign requested address.
First of all, I'm using the 'service name' and not 'localhost' in my code to connect rq and redis. No idea why I'm seeing 'localhost' in my logs.
(Note: The kubernetes service name for redis is same as that used in my docker-compose file)
redis-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:4.0.6-alpine
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
You do not need the /bin/sh -c wrapper here. Your setup reads the first args: word, rqworker, parses it as a shell command, and executes it; the remaining words are lost. That is also why localhost shows up in your logs: since the --url argument never survives, rqworker falls back to its default of redis://localhost:6379.
The most straightforward thing to do is to make your command, split into words as-is, as the Kubernetes command:
containers:
- name: rq-worker
  image: some-image
  command:
  - rqworker
  - --url
  - redis://redis:6379
  - queue
(This matches the commented-out string in your example.)
A common Docker pattern is to use an ENTRYPOINT to do first-time setup and to make CMD a complete shell command that's run at the end of the setup script. In Kubernetes, command: overrides the Docker ENTRYPOINT; if your image has this pattern, you should not set command: at all, and instead put this command, exactly as you have it, in args:.
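For example, a sketch of that variant (assuming the image's ENTRYPOINT does the first-time setup):

containers:
- name: rq-worker
  image: some-image
  # no command:, so the image's ENTRYPOINT still runs first
  args:
  - rqworker
  - --url
  - redis://redis:6379
  - queue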
The only time you do need an sh -c wrapper is in unusual cases where you need to run multiple commands, expand environment variables, or otherwise use shell-only features. In that case the command itself must be a single word (one list item) in the command: or args:.
command:
- /bin/sh
- -c
- rqworker --url redis://redis:6379 queue
I have built a Jenkins Docker container using the bash script below, which runs OK. I then committed it to an image and uploaded it to the Docker registry. Now I am trying to deploy it in Kubernetes, but the pod is failing and I can't figure out how to solve it. Below is the bash script, followed by the YAML file with the deployment and service definitions.
DOCKER BASH SCRIPT
echo "Jenkins"
echo "###############################################################################################################"
mkdir -p /home/ubuntu/Jenkins
cd /home/ubuntu/Jenkins
cat << EOF > Dockerfile
# Dockerfile
FROM jenkins4eval/jenkins:latest
# copy the list of plugins we want to install
COPY plugins.txt /usr/share/jenkins/plugins.txt
# run the install-plugins script to install the plugins
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/plugins.txt
# disable the setup wizard as we will set up jenkins as code :)
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
# copy the config-as-code yaml file into the image
COPY jenkins-casc.yaml /usr/local/jenkins-casc.yaml
# tell the jenkins config-as-code plugin where to find the yaml file
ENV CASC_JENKINS_CONFIG /usr/local/jenkins-casc.yaml
EOF
cat << EOF > docker-compose.yml
version: "3.1"
services:
  Jenkins:
    container_name: Jenkins
    image: jenkins-casc:0.1
    ports:
      - 8080:8080
    volumes:
      - ./plugins.txt:/usr/share/jenkins/plugins.txt
      - ./jenkins-casc.yaml:/usr/local/jenkins-casc.yaml
    environment:
      - JAVA_OPTS=-Djenkins.install.runSetupWizard=false
      - CASC_JENKINS_CONFIG=/usr/local/jenkins-casc.yaml
EOF
cat << EOF > plugins.txt
configuration-as-code
saml
matrix-auth
cloudbees-folder
build-timeout
timestamper
ws-cleanup
github-api
github
ssh-slaves
warnings-ng
plot
sonar
EOF
# Download yaml file
CASC_CONFIG_FILE=/home/ubuntu/Jenkins/jenkins-casc.yaml
CURL_TOKEN="856d36c380982e13f8d84e1b4dab13009b8ebdd4"
CURL_OWNER="slotone"
CURL_REPO="s1g_install"
CURL_FILEPATH="s1_ami_jenkins_master_config.yaml"
CURL_URL="https://api.github.com/repos/${CURL_OWNER}/${CURL_REPO}/contents/${CURL_FILEPATH}"
/usr/bin/curl --header "Authorization: token ${CURL_TOKEN}" --header "Accept: application/vnd.github.v3.raw" \
--location ${CURL_URL} -o ${CASC_CONFIG_FILE}
# Build the Jenkins server image
docker build -t jenkins-casc:0.1 .
KUBERNETES JENKINS YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  selector:
    matchLabels:
      app: jenkins
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: tpargmdiaz/jenkins-k8s:1.0
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: jenkins
  ports:
  - name: http
    port: 80
  type: ClusterIP
This official document shows that you can run a command in a YAML config file:
https://kubernetes.io/docs/tasks/configure-pod-container/
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:  # specification of the pod’s contents
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/sh","-c"]
    args: ["/bin/echo \"${MESSAGE}\""]
If I want to run more than one command, how do I do that?
command: ["/bin/sh","-c"]
args: ["command one; command two && command three"]
Explanation: The command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions". The args are then passed as commands to the shell. In shell scripting, a semicolon separates commands, and && conditionally runs the following command only if the previous one succeeds. In the above example, it always runs command one followed by command two, and only runs command three if command two succeeded.
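For instance, a minimal runnable version of that pattern (the image and the echo commands are purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: multi-cmd
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["/bin/sh", "-c"]
    # "echo two" always runs after "echo one"; "echo three" runs only if "echo two" succeeded
    args: ["echo one; echo two && echo three"]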
Alternative: In many cases, some of the commands you want to run are probably setting up the final command to run. In this case, building your own Dockerfile is the way to go. Look at the RUN directive in particular.
My preference is to multiline the args; this is the simplest and easiest to read. Also, the script can be changed without affecting the image; you just need to restart the pod. For example, for a mysql dump, the container spec could be something like this:
containers:
- name: mysqldump
  image: mysql
  command: ["/bin/sh", "-c"]
  args:
    - echo starting;
      ls -la /backups;
      mysqldump --host=... -r /backups/file.sql db_name;
      ls -la /backups;
      echo done;
  volumeMounts:
  - ...
The reason this works is that YAML concatenates all the lines after the "-" into one long plain scalar, so sh runs the single string "echo starting; ls ... ; echo done;".
If you're willing to use a Volume and a ConfigMap, you can mount ConfigMap data as a script, and then run that script:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  entrypoint.sh: |-
    #!/bin/bash
    echo "Do this"
    echo "Do that"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: "ubuntu:14.04"
    command:
    - /bin/entrypoint.sh
    volumeMounts:
    - name: configmap-volume
      mountPath: /bin/entrypoint.sh
      readOnly: true
      subPath: entrypoint.sh
  volumes:
  - name: configmap-volume
    configMap:
      defaultMode: 0700
      name: my-configmap
This cleans up your pod spec a little and allows for more complex scripting.
$ kubectl logs my-pod
Do this
Do that
If you want to avoid concatenating all commands into a single command with ; or && you can also get true multi-line scripts using a heredoc:
command:
- sh
- "-c"
- |
  /bin/bash <<'EOF'
  # Normal script content possible here
  echo "Hello world"
  ls -l
  exit 123
  EOF
This is handy for running existing bash scripts, but has the downside of requiring both an inner and an outer shell instance for setting up the heredoc.
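If the double shell bothers you, the same script can usually run in a single bash instance with a plain literal block, no heredoc needed (a sketch):

command:
- /bin/bash
- -c
- |
  # script content runs directly in this one bash instance
  echo "Hello world"
  ls -l
  exit 123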
I am not sure whether the question is still active, but since I did not find this solution in the above answers, I decided to write it down.
I use the following approach:
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - |
      command1
      command2 && command3
I know my example relates to readinessProbe, livenessProbe, etc., but I suspect the same applies to container commands. This provides flexibility, as it mirrors standard script writing in Bash.
IMHO the best option is to use YAML's native block scalars. Specifically in this case, the folded style block.
By invoking sh -c you can pass arguments to your container as commands, but if you want to elegantly separate them with newlines, you'd want to use the folded style block, so that YAML will know to convert newlines to whitespaces, effectively concatenating the commands.
A full working example:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: busy
    image: busybox:1.28
    command: ["/bin/sh", "-c"]
    args:
    - >
      command_1 &&
      command_2 &&
      ...
      command_n
Here is my successful run
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - |
      echo "running below scripts"
      i=0;
      while true;
      do
        echo "$i: $(date)";
        i=$((i+1));
        sleep 1;
      done
    name: busybox
    image: busybox
Here is one more way to do it, with output logging.
apiVersion: v1
kind: Pod
metadata:
  labels:
    type: test
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: log-vol
      mountPath: /var/mylog
    command:
    - /bin/sh
    - -c
    - >
      i=0;
      while [ $i -lt 100 ];
      do
      echo "hello $i";
      echo "$i : $(date)" >> /var/mylog/1.log;
      echo "$(date)" >> /var/mylog/2.log;
      i=$((i+1));
      sleep 1;
      done
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
  - name: log-vol
    emptyDir: {}
Here is another way to run multi line commands.
apiVersion: batch/v1
kind: Job
metadata:
  name: multiline
spec:
  template:
    spec:
      containers:
      - command:
        - /bin/bash
        - -exc
        - |
          set +x
          echo "running below scripts"
          if [[ -f "if-condition.sh" ]]; then
            echo "Running if success"
          else
            echo "Running if failed"
          fi
        name: ubuntu
        image: ubuntu
      restartPolicy: Never
  backoffLimit: 1
Just to bring another possible option: secrets can be used, since they are presented to the pod as volumes:
Secret example:
apiVersion: v1
kind: Secret
metadata:
  name: secret-script
type: Opaque
data:
  script_text: <<your script in b64>>
Yaml extract:
....
containers:
- name: container-name
  image: image-name
  command: ["/bin/bash", "/your_script.sh"]
  volumeMounts:
  - name: vsecret-script
    mountPath: /your_script.sh
    subPath: script_text
....
volumes:
- name: vsecret-script
  secret:
    secretName: secret-script
I know many will argue this is not what secrets are meant for, but it is an option.
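As a side note, the Secret's stringData field accepts plain text, so you can skip the base64 encoding step; a sketch of the same secret:

apiVersion: v1
kind: Secret
metadata:
  name: secret-script
type: Opaque
stringData:
  script_text: |
    #!/bin/bash
    echo "Do this"
    echo "Do that"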
Hope everyone is doing well.
Env: CentOS 7.3.1611, Kubernetes 1.5, Docker 1.12
Problem 1: The extended JBoss Docker image is not working, although the Docker image was created successfully.
The pod gets an error; see step 7 below.
Problem 2: Once problem #1 is fixed, I wish to upload the image to Docker Hub: https://hub.docker.com/
How can I upload it? Steps please, if possible.
1) pull
docker pull jboss/wildfly
2) vi Dockerfile
FROM jboss/wildfly
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin123$ --silent
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
3) Extend docker image
docker build --tag=nbasetty/wildfly-server .
4) [root@centos7 custom-jboss]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nbasetty/wildfly-server latest c1fbb87faffd 43 minutes ago 583.8 MB
docker.io/httpd latest e0645af13ada 2 weeks ago 177.5 MB
5) vi jboss-wildfly-rc-service-custom.yaml
apiVersion: v1
kind: Service
metadata:
  name: wildfly-service
spec:
  externalIPs:
  - 10.0.2.15
  selector:
    app: wildfly-rc-pod
  ports:
  - name: web
    port: 8080
  #- name: admin-console
  #  port: 9990
  type: LoadBalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: wildfly-rc
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wildfly-rc-pod
    spec:
      containers:
      - name: wildfly
        image: nbasetty/wildfly-server
        ports:
        - containerPort: 8080
        #- containerPort: 9990
6) kubectl create -f jboss-wildfly-rc-service-custom.yaml
7) [root@centos7 jboss]# kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-pvc-pod 1/1 Running 6 2d
wildfly-rc-d0k3h 0/1 ImagePullBackOff 0 23m
wildfly-rc-hgsfj 0/1 ImagePullBackOff 0 23m
[root@centos7 jboss]# kubectl logs wildfly-rc-d0k3h
Error from server (BadRequest): container "wildfly" in pod
"wildfly-rc-d0k3h" is waiting to start:
trying and failing to pull image
Glad you have found a way to make it work. Here are the steps I followed:
I labeled node-01 as 'dbserver: mysql'.
I created the docker image on node-01.
I created this pod, and it worked.
apiVersion: v1
kind: ReplicationController
metadata:
  name: wildfly-rc
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wildfly-rc-pod
    spec:
      containers:
      - name: wildfly
        image: nbasetty/wildfly-server
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      nodeSelector:
        dbserver: mysql
Re-creating the issue:
docker pull jboss/wildfly
mkdir jw
cd jw
echo 'FROM jboss/wildfly
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin123$ --silent
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]' | tee Dockerfile
docker build --tag=docker.io/surajd/wildfly-server .
See the images available:
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/surajd/wildfly-server latest 10e96902ea12 11 seconds ago 583.8 MB
Create a config that works:
echo '
apiVersion: v1
kind: Service
metadata:
  name: wildfly
spec:
  selector:
    app: wildfly
  ports:
  - name: web
    port: 8080
  type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wildfly
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wildfly
    spec:
      containers:
      - name: wildfly
        image: docker.io/surajd/wildfly-server
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
' | tee config.yaml
kubectl create -f config.yaml
Notice the field imagePullPolicy: Never. This lets you use the image already available on the node (the one we built with docker build). That works on a single-node cluster but may or may not work on a multi-node cluster, so it is not recommended in general; since we are experimenting on a single-node cluster, we can set it to Never here. Otherwise, set imagePullPolicy: Always, so that whenever the pod is scheduled, the image will be pulled from the registry. Read about imagePullPolicy and some config-related tips.
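In other words, outside a single-node experiment, the container spec above would just swap the pull policy (same Deployment otherwise):

containers:
- name: wildfly
  image: docker.io/surajd/wildfly-server
  imagePullPolicy: Always   # pull from the registry on every scheduling
  ports:
  - containerPort: 8080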
Now, to pull the image from a registry, the image has to be on that registry. So, to answer your question about pushing it to Docker Hub, run:
docker push docker.io/surajd/wildfly-server
So in the above example replace surajd with your docker registry username.
Here are the steps I used to set up a single-node cluster on CentOS:
My machine version:
$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
Here is what I have done:
Set up a single-node k8s cluster on CentOS as follows (src1 & src2):
yum update -y
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
kubeadm init
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
kubectl taint nodes --all node-role.kubernetes.io/master-
Now k8s version:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}