Kubernetes multi-tier container application - docker

I have two Docker containers, one running Flask and one running Redis, that communicate well when connected via Docker container linking.
I am trying to deploy the same setup on Kubernetes using pods and services, but it's not working. I am learning Kubernetes, so I must be doing something wrong here.
Below are the Docker commands that work well:
$ docker run -d --name=redis -v /opt/redis:/redis -p 6379 redis_image redis-server
$ docker run -d -p 5000:5000 --link redis:redis --name flask flask_image
The Kubernetes pod and service files are as below:
pod-redis.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    name: redis
    app: redis
spec:
  containers:
  - name: redis
    image: dharmit/redis
    command:
    - "redis-server"
    volumeMounts:
    - mountPath: /redis
      name: redis-store
  volumes:
  - name: redis-store
    hostPath:
      path: /opt/redis
service-redis.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    name: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
pod-flask.yaml
apiVersion: v1
kind: Pod
metadata:
  name: flask
  labels:
    name: flask
    app: flask
spec:
  containers:
  - name: flask
    image: dharmit/flask
    ports:
    - containerPort: 5000
service-flask.yaml
apiVersion: v1
kind: Service
metadata:
  name: flask
  labels:
    name: flask
spec:
  ports:
  - port: 5000
  selector:
    app: flask
When I do kubectl create -f /path/to/dir/, all services and pods start up fine and get listed by the kubectl commands. But when I try to access port 5000, the Flask app complains that it cannot communicate with the redis container. Below are the service-related outputs:
flask service
Name: flask
Namespace: default
Labels: name=flask
Selector: app=flask
Type: ClusterIP
IP: 10.254.155.179
Port: <unnamed> 5000/TCP
Endpoints: 172.17.0.2:5000
Session Affinity: None
No events.
redis service
Name: redis
Namespace: default
Labels: name=redis
Selector: app=redis
Type: ClusterIP
IP: 10.254.153.217
Port: <unnamed> 6379/TCP
Endpoints: 172.17.0.1:6379
Session Affinity: None
No events.
And the output of the curl command:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/app/app/views.py", line 9, in index
    if r.get("count") == None:
  File "/usr/lib/python2.7/site-packages/redis/client.py", line 863, in get
    return self.execute_command('GET', name)
  File "/usr/lib/python2.7/site-packages/redis/client.py", line 570, in execute_command
    connection.send_command(*args)
  File "/usr/lib/python2.7/site-packages/redis/connection.py", line 556, in send_command
    self.send_packed_command(self.pack_command(*args))
  File "/usr/lib/python2.7/site-packages/redis/connection.py", line 532, in send_packed_command
    self.connect()
  File "/usr/lib/python2.7/site-packages/redis/connection.py", line 436, in connect
    raise ConnectionError(self._error_message(e))
ConnectionError: Error -2 connecting to redis:6379. Name or service not known.
What am I doing wrong here?

You need to add a containerPort to your pod-redis.yaml:
- name: redis
  image: dharmit/redis
  command:
  - "redis-server"
  ports:
  - containerPort: 6379
    hostPort: 6379

You are trying to connect to redis:6379. But what resolves the hostname redis? Probably nothing in the cluster you just launched.
In order to use hostnames with pods and services, check whether you can also deploy SkyDNS in your cluster. For your case, I presume you only need the hostname of the redis service.
Edit
You don't want to connect to the pods directly; you want to use the service IP address for that.
So you can use the IP address of your service for connectivity, or you can have hostnames for your services in order to connect to your pods.
For the latter, the easiest way is to use kube-skydns. Read the documentation on how to deploy it and how to use it.
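As a quick check (a sketch, assuming cluster DNS is deployed and the names match the manifests above), you can verify from inside the cluster that the redis service name resolves before pointing the Flask app at it:
$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup redis
Expected output, roughly (the address is illustrative and should match the redis service's ClusterIP):
Name:      redis
Address 1: 10.254.153.217 redis.default.svc.cluster.local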

Related

Can't connect to mysql service on Kubernetes through workbench

I am using Minikube and here is my configuration:
kubectl describe deployment mysql
the output:
Name:                   mysql
Namespace:              default
CreationTimestamp:      Sat, 12 Nov 2022 02:20:54 +0200
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=mysql
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=mysql
  Containers:
   mysql:
    Image:      mysql
    Port:       3306/TCP
    Host Port:  0/TCP
    Environment:
      MYSQL_ROOT_PASSWORD:  <set to the key 'password' in secret 'mysql-pass'>  Optional: false
    Mounts:
      /docker-entrypoint-initdb.d from mysql-init (rw)
  Volumes:
   mysql-init:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mysql-init
    Optional:  false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   mysql-77fd55bbd9 (1/1 replicas created)
When I try to connect to it using MySQL Workbench, it shows me an error (screenshot not included).
However, when I execute this line to create a mysql-client to try to connect to the mysql server:
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u skaffold -p
and then enter the password, it works well! But I still need Workbench to work.
Any help please?
Edit 1:
Here are the YAML files for the deployment and the service:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-init
          mountPath: /docker-entrypoint-initdb.d
      volumes:
      - name: mysql-init
        configMap:
          name: mysql-init
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
    protocol: TCP
  selector:
    app: mysql
First, make sure your service is running:
kubectl get service
should return something like:
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
mysql   ClusterIP   10.99.140.115   <none>        3306/TCP   2d6h
From that point onwards, I'd try running a port-forward first:
kubectl port-forward service/mysql 3306:3306
This should allow you to connect even when using a ClusterIP service.
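With the port-forward running, Workbench (or any other local client) can connect to 127.0.0.1:3306 as if the database were local. A quick sanity check from a terminal, assuming a mysql client is installed on your machine:
$ mysql -h 127.0.0.1 -P 3306 -u skaffold -p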
If you want to connect directly to your mysql Deployment's Pod via localhost, you first have to forward the Pod's container port to localhost:
kubectl port-forward <pod-name> <local-port>:<container-port>
Then your mysql will be accessible on localhost:<local-port>.
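For the Deployment above, that would look something like this (the pod-name suffix is illustrative; look up yours first):
$ kubectl get pods                                  # e.g. mysql-77fd55bbd9-abcde
$ kubectl port-forward mysql-77fd55bbd9-abcde 3306:3306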
The other way to communicate with your Pod is to create a Service object that passes your requests directly to the Pod. There are a couple of Service types for different kinds of usage; check the documentation to learn more.
The reason the following command
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u skaffold -p
connects to the database correctly is that the client runs as a pod inside the cluster, where the Service name mysql resolves.
Edit 1
If you don't specify the type of a Service, the default is ClusterIP, which does not expose the port outside the cluster.
Because Minikube doesn't handle LoadBalancer services, use the NodePort Service type instead.
Your Service YAML manifest should look like this:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    protocol: TCP
  selector:
    app: mysql
Finally, since your cluster is provisioned via Minikube, you still need to run the command below to fetch the Minikube IP and the Service's NodePort:
minikube service <service-name> --url
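For example (the address and port are illustrative; Minikube assigns them):
$ minikube service mysql --url
http://192.168.49.2:31234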

MountVolume.SetUp failed for volume "<name>": hostPath type check failed: C:\test is not a directory

I'm trying to mount a local Windows directory into a Docker container in a Kubernetes pod, but I have encountered errors when specifying the mount path. I'm a newbie to Kubernetes and not sure if I'm following the correct way to mount a Windows directory.
My .yaml file looks something like this:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers-local
  labels:
    name: app
spec:
  containers:
  - image: nginx
    name: nginx-container
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
    ports:
    - containerPort: 8080
  volumes:
  - name: test-volume
    hostPath:
      path: 'C:\test'
      type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: my-two-container-nginx-local
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30002
    protocol: TCP
  selector:
    name: app
MountVolume.SetUp failed for volume "test-volume" : hostPath type check failed: C:\test is not a directory
Can you check whether the C:\test directory exists on the host? When the type is Directory, if the directory does not exist on the host, kubelet will not create it and will report an error.
As per this GitHub link, add it as below:
type: DirectoryOrCreate
path: /run/desktop/mnt/host/c/users/public/your_folder  # give the exact working mount path
Since you asked how to mount a local directory from Windows into Docker, following this link might help you.
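Putting the two fragments together, a sketch of a corrected volumes section (the /run/desktop/mnt/host/c prefix is how Docker Desktop's WSL 2 VM exposes the Windows C: drive; the exact suffix for your folder is an assumption you should adjust):
volumes:
- name: test-volume
  hostPath:
    # C:\test as seen from inside the Docker Desktop VM (illustrative path)
    path: /run/desktop/mnt/host/c/test
    type: DirectoryOrCreate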

How to pass arguments in kubernetes to start a rq-worker container

I'm working on a microservice architecture project in which I use rq-workers. I have used a docker-compose file to start and connect the rq-worker with redis successfully, but I'm not sure how to replicate it in Kubernetes. No matter what I try with command and args, I get a status of CrashLoopBackOff. Please guide me as to what I'm missing. Below are my docker-compose and rq-worker deployment files.
rq-worker and redis container config:
...
rq-worker:
  build: ./simba-app
  command: rq worker --url redis://redis:6379 queue
  depends_on:
  - redis
  volumes:
  - sharedvolume:/simba-app/app/docs
redis:
  image: redis:4.0.6-alpine
  ports:
  - "6379:6379"
  volumes:
  - ./redis:/data
...
rq-worker.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rq-worker
  labels:
    app: rq-worker
spec:
  selector:
    matchLabels:
      app: rq-worker
  template:
    metadata:
      labels:
        app: rq-worker
    spec:
      containers:
      - name: rq-worker
        image: some-image
        command: ["/bin/sh", "-c"]
        #command: ["rqworker", "--url", "redis://redis:6379", "queue"]
        args:
        - rqworker
        - --url
        - redis://redis:6379
        - queue
      imagePullSecrets:
      - name: regcred
---
Thanks in advance!
Edit:
I checked the logs using kubectl logs and found the following:
Error 99 connecting to localhost:6379. Cannot assign requested address.
First of all, I'm using the service name, not localhost, in my code to connect rq and redis; no idea why I'm seeing localhost in the logs.
(Note: the Kubernetes service name for redis is the same as the one used in my docker-compose file.)
redis-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:4.0.6-alpine
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
You do not need the /bin/sh -c wrapper here. Your setup takes the first args: word, rqworker, parses it as a shell command, and executes it; the remaining words are lost.
The most straightforward thing to do is to make your command, split into words as-is, as the Kubernetes command:
containers:
- name: rq-worker
  image: some-image
  command:
  - rqworker
  - --url
  - redis://redis:6379
  - queue
(This matches the commented-out string in your example.)
A common Docker pattern is to use an ENTRYPOINT to do first-time setup and to make CMD a complete shell command that is run at the end of the setup script. In Kubernetes, command: overrides the Docker ENTRYPOINT; if your image uses this pattern, then you should not set command: at all, and instead put this command, as you have it, in args: (see the sketch after the example below).
The only time you do need an sh -c wrapper is in unusual cases where you need to run multiple commands, expand environment variables, or otherwise use shell-only features. In this case the command itself must be in a single word in the command: or args:.
command:
- /bin/sh
- -c
- rqworker --url redis://redis:6379 queue
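For the ENTRYPOINT-plus-CMD image pattern described above, a minimal sketch (assuming some-image's ENTRYPOINT is its setup script):
containers:
- name: rq-worker
  image: some-image
  # no command:, so the image's ENTRYPOINT (the setup script) still runs
  args:
  - rqworker
  - --url
  - redis://redis:6379
  - queue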

how to access management UI for rabbitmq from minikube?

I have a docker-compose file running the rabbitmq management image, and I am able to access the management UI.
$ cat docker-compose.yml
---
version: '3.7'
services:
  rabbitmq:
    image: rabbitmq:management
    ports:
    - '5672:5672'
    - '15672:15672'
    environment:
      RABBITMQ_DEFAULT_VHOST: storage-collector-dev
      RABBITMQ_DEFAULT_USER: dev
      RABBITMQ_DEFAULT_PASS: dev
I am trying to convert that to Kubernetes pods and services.
I am using a Mac to run minikube.
Here are my files
$ tree kubernetes/
kubernetes/
└── coreservices
├── rabbitmq_pod.yml
└── rabbitmq_service.yml
$ cat kubernetes/coreservices/rabbitmq_pod.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq-pod
  labels:
    app: rabbitmq
spec:
  containers:
  - name: rabbitmq-pod
    image: rabbitmq:management
    ports:
    - containerPort: 5672
      name: amqp
    - containerPort: 15672
      name: http
    env:
    - name: RABBITMQ_DEFAULT_VHOST
      value: storage-collector-dev
    - name: RABBITMQ_DEFAULT_USER
      value: dev
    - name: RABBITMQ_DEFAULT_PASS
      value: dev
...
$ cat kubernetes/coreservices/rabbitmq_service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  type: NodePort
  selector:
    app: rabbitmq
  ports:
  - port: 5672
    targetPort: 5672
    name: amqp
  - port: 15672
    targetPort: 15672
    nodePort: 31672
    name: http
...
Then I apply these files
$ kubectl apply -f kubernetes/coreservices/
pod/rabbitmq-pod created
service/rabbitmq created
It creates the services and pods. Then I get the Minikube IP to access the management UI for rabbitmq:
$ minikube ip
127.0.0.1
When I try to access http://127.0.0.1:31672, it gives a 'page not found' error.
You need to run the command minikube service rabbitmq, and then, to get the URL, minikube service rabbitmq --url.
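For example (the ports are illustrative; with the Docker driver on macOS, Minikube opens a tunnel, picks free local ports, and prints one URL per service port, and the tunnel only stays up while the command runs):
$ minikube service rabbitmq --url
http://127.0.0.1:55001
http://127.0.0.1:55002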

How to add custom host entries to kubernetes Pods?

My application communicates with some services via hostnames.
When running my application as a Docker container, I used to add hostnames to the /etc/hosts of the host machine and run the container using --net=host.
Now I'm running my containers in a Kubernetes cluster. I would like to know how I can add the /etc/hosts entries to the pod via YAML.
I'm using Kubernetes v1.5.3.
From k8s 1.7 you can add hostAliases. Example from the docs:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
Hosts files are going to give you problems, but if you really need to, you could use a ConfigMap.
Add a ConfigMap like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-hosts-file-configmap
data:
  hosts: |-
    192.168.0.1 gateway
    127.0.0.1 localhost
Then mount that inside your pod, like so:
volumeMounts:
- name: my-app-hosts-file
  # mount only /etc/hosts (via subPath) so the rest of /etc is not shadowed
  mountPath: /etc/hosts
  subPath: hosts
volumes:
- name: my-app-hosts-file
  configMap:
    name: my-app-hosts-file-configmap
This works and also looks simpler:
kind: Service
apiVersion: v1
metadata:
  name: {HOST_NAME}
spec:
  ports:
  - protocol: TCP
    port: {PORT}
    targetPort: {PORT}
  type: ExternalName
  externalName: {EXTERNAL_IP}
Now you can use the HOST_NAME from the pod directly to access the external machine.
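A filled-in sketch (all names and the target are illustrative; note that externalName expects a DNS name, so prefer a resolvable name over a raw IP):
kind: Service
apiVersion: v1
metadata:
  name: external-db
spec:
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  type: ExternalName
  externalName: db.example.com
Pods in the cluster can then reach the external machine as external-db:3306.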
Another approach could be to use a postStart hook in the pod lifecycle, as below:
lifecycle:
  postStart:
    exec:
      command:
      - /bin/sh
      - -c
      - >-
        echo '192.168.1.10 weblogic-jms1.apizone.io' >> /etc/hosts;
        echo '192.168.1.20 weblogic-jms2.apizone.io' >> /etc/hosts;
        echo '192.168.1.30 weblogic-jms3.apizone.io' >> /etc/hosts;
        echo '192.168.1.40 weblogic-jms4.apizone.io' >> /etc/hosts
