Configuring a Rails application in Kubernetes - ruby-on-rails

I am configuring a Rails application in Kubernetes. I am using Redis, Sidekiq and a Postgres DB. Below is the YAML I am using.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: dev-app
  name: test-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: Dev-app
    spec:
      nodeSelector:
        cloud.io/sec-zone-green: "true"
      containers:
      - name: dev-application
        image: hub.docker.net/appautomation/dev.app.1.0:latest
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo test; sleep 20;done"]
        resources:
          limits:
            memory: 8Gi
            cpu: 5
          requests:
            memory: 8Gi
            cpu: 5
        ports:
        - containerPort: 3000
      - name: dev-app-nginx
        image: hub.docker.net/appautomation/dev.nginx.1.0:latest
        resources:
          limits:
            memory: 4Gi
            cpu: 4
          requests:
            memory: 4Gi
            cpu: 4
        ports:
        - containerPort: 80
      - name: dev-app-redis
        image: hub.docker.net/appautomation/dev.redis.1.0:latest
        resources:
          limits:
            memory: 4Gi
            cpu: 4
          requests:
            memory: 4Gi
            cpu: 4
        ports:
        - containerPort: 6379
In kubectl I am not seeing any error, but when I try to fetch logs for the pod I get the output below. I can see three containers built inside the pod. I went into my dev-application container and ran rails s to check whether the server is running, but I get "/usr/local/bundle/gems/redis-3.3.5/lib/redis/connection/ruby.rb:229:in `getaddrinfo': getaddrinfo: Name or service not known (SocketError)". How can I check that my application is linked with Redis and nginx? Is my YAML configuration correct, or do I need something like depends_on in my YAML file?
kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
dev-database-57b6ff5997-mgdhm      1/1     Running   0          11d
test-deployment-5f59864c8b-4t5b7   3/3     Running   0          8m44s
kubectl logs test-deployment-5f59864c8b-4t5b7
error: a container name must be specified for pod test-deployment-5f59864c8b-4t5b7, choose one of: [dev-application dev-app-nginx dev-app-redis]
Service YAML file
apiVersion: v1
kind: Service
metadata:
  namespace: Dev-app
  name: test-deployment
spec:
  selector:
    app: Dev-app
  ports:
  - name: Dev-application
    protocol: TCP
    port: 3001
    targetPort: 3000
  - name: redis
    port: 6379
    targetPort: 6379

You are not running the containers the right way. Ideally a pod should run a single application; only put multiple containers inside a single pod or deployment when they really need to be bundled together.
You should be deploying a single container per pod or deployment instead of three in one.
For the logs issue, you can check a specific container's logs using:
kubectl logs test-deployment-5f59864c8b-4t5b7
error: a container name must be specified for pod test-deployment-5f59864c8b-4t5b7, choose one of: [dev-application dev-app-nginx dev-app-redis]
-c is used to select the specific container whose logs you want:
kubectl logs test-deployment-5f59864c8b-4t5b7 -c <any one name dev-application dev-app-nginx dev-app-redis>
Ideally, in a distributed setup, you run a standalone pod or deployment for Redis so that all services can use it. Here you are running Redis alongside your application, so if Redis crashes your application will auto-restart (Kubernetes behavior).
If the application crashes, Redis will auto-restart as well, since Kubernetes restarts the whole pod if any container inside it fails.
I am getting "/usr/local/bundle/gems/redis-3.3.5/lib/redis/connection/ruby.rb:229:in `getaddrinfo': getaddrinfo: Name or service not known (SocketError)".
If you are getting this error, check that you have set the proper Redis host in the application code. If Redis, nginx and the application are all running in a single pod, the containers reach each other over localhost, so from the application's point of view Redis is listening on localhost:6379.
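A minimal sketch of making that host explicit through an environment variable (assuming your Rails/Sidekiq code reads something like REDIS_URL, which is not shown in the question; adjust to however your app builds its Redis connection):
containers:
- name: dev-application
  image: hub.docker.net/appautomation/dev.app.1.0:latest
  env:
  - name: REDIS_URL
    # containers in the same pod share the network namespace, so localhost reaches the redis container
    value: "redis://localhost:6379/0"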
If you want to debug further, try using the exec command to go inside the pod:
kubectl exec -it test-deployment-5f59864c8b-4t5b7 -c dev-application -- /bin/bash
This way you will be inside the container and can test the connection to Redis using the CLI.
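For instance, something along these lines (assuming redis-cli is available in the image, which may not be the case for a slim Rails image):
# run from inside the dev-application container
redis-cli -h localhost -p 6379 ping   # expect PONG if Redis is reachable inside the pod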
Update:
Redis deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP
  ports:
  - port: 6379
    name: redis
  selector:
    app: redis
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  serviceName: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redislabs/rejson
        args: ["--appendonly", "no", "--loadmodule"]
        ports:
        - containerPort: 6379
          name: redis
        volumeMounts:
        - name: redis-volume
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: redis-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
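With Redis running behind its own Service as above, the application should point at the Service's DNS name instead of localhost. A small sketch for the app container, again assuming a REDIS_URL-style setting (the variable name is an assumption, not something from the question):
env:
- name: REDIS_URL
  # "redis" resolves within the same namespace; use redis.<namespace>.svc.cluster.local from another namespace
  value: "redis://redis:6379/0"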

Related

Apache server runs with docker run but kubernetes pod fails with CrashLoopBackOff

My application uses the apache2 web server. Due to restrictions in the Kubernetes cluster, I do not have root privileges inside the pod, so I have changed apache2's default port from 80 to 8080 to be able to run it as a non-root user.
My problem is that once I build the Docker image and run it locally it runs fine, but when I deploy it to the Kubernetes cluster it keeps failing with:
Action '-D FOREGROUND' failed.
resulting in CrashLoopBackOff.
So basically the apache2 server is not able to run in the pod as a non-root user, but runs fine locally with docker run.
Any help is appreciated.
I am attaching my deployment and service files for reference:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: &DeploymentName app
spec:
  replicas: 1
  selector:
    matchLabels: &appName
      app: *DeploymentName
  template:
    metadata:
      name: main
      labels:
        <<: *appName
    spec:
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsGroup: 3000
      volumes:
      - name: var-lock
        emptyDir: {}
      containers:
      - name: *DeploymentName
        image: image:id
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /etc/apache2/conf-available
          name: var-lock
        - mountPath: /var/lock/apache2
          name: var-lock
        - mountPath: /var/log/apache2
          name: var-lock
        - mountPath: /mnt/log/apache2
          name: var-lock
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 180
          periodSeconds: 60
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 300
          periodSeconds: 180
        imagePullPolicy: Always
        tty: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        envFrom:
        - configMapRef:
            name: *DeploymentName
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 1
            memory: 2Gi
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: &hpaName app
spec:
  maxReplicas: 1
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: *hpaName
  targetCPUUtilizationPercentage: 60
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
  name: app
spec:
  selector:
    app: app
  ports:
  - protocol: TCP
    name: http-web-port
    port: 80
    targetPort: 8080
  - protocol: TCP
    name: https-web-port
    port: 443
    targetPort: 443
CrashLoopBackOff is a common error in Kubernetes, indicating a pod that is constantly crashing in an endless loop.
The CrashLoopBackOff error can be caused by a variety of issues, including:
Insufficient resources - a lack of resources prevents the container from loading
Locked file - a file was already locked by another container
Locked database - the database is being used and locked by other pods
Failed reference - a reference to scripts or binaries that are not present in the container
Setup error - an issue with the init-container setup in Kubernetes
Config loading error - the server cannot load the configuration file
Misconfigurations - a general file system misconfiguration
Connection issues - DNS or kube-dns is not able to connect to a third-party service
Deploying failed services - an attempt to deploy services/applications that have already failed (e.g. due to a lack of access to other services)
To fix the Kubernetes CrashLoopBackOff error, refer to this link and also check out the stack post for more information.
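To narrow down which of these applies, the previous container's logs and the pod events are usually the quickest check:
kubectl logs <pod-name> --previous   # output of the container instance that crashed
kubectl describe pod <pod-name>      # events, exit codes and failed probes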

Can't access my local kubernetes service over the internet

Implementation Goal
Expose Zookeeper instance, running on kubernetes, to the internet.
(configuration & version information provided at the bottom)
Implementation Attempt
I currently have a minikube cluster running on ubuntu 14.04, backed by docker containers.
I'm running a bare metal k8s cluster, and I'm trying to expose a zookeeper service to the internet. Seeing as my cluster is not running on a cloud provider, I set up MetalLB in order to provide a network load-balancer implementation for my zookeeper service.
On startup everything looks good, an external IP is assigned and I can access it from the same host via a curl command.
$ kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-5c9894b5cd-9gh8m   1/1     Running   0          5h59m
speaker-j2z8q                 1/1     Running   0          5h59m
$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
kubernetes   ClusterIP      10.xxx.xxx.xxx   <none>        443/TCP                         6d19h
zk-cs        LoadBalancer   10.xxx.xxx.xxx   172.1.1.x     2181:30035/TCP                  56m
zk-hs        LoadBalancer   10.xxx.xxx.xxx   172.1.1.x     2888:30664/TCP,3888:31113/TCP   6m15s
When I curl the above-mentioned external IPs, I get a valid response:
$ curl -D- "http://172.1.1.x:2181"
curl: (52) Empty reply from server
So far it all looks good: I can access the LB from outside the cluster with no issues, but this is where my lack of Kubernetes/networking knowledge gets me. I'm finding it impossible to expose this LB to the internet. I've tried running minikube tunnel, which I had high hopes for, only to be deeply disappointed.
Running a curl command from another node while minikube tunnel is running will just see the request time out.
$ curl -D- "http://172.1.1.x:2181"
curl: (28) Failed to connect to 172.1.1.x port 2181: Timed out
At this point, as I mentioned before, I'm stuck.
Is there any way that I can get this service exposed to the internet without giving my soul to AWS or GCP?
Any help will be greatly appreciated.
Service Configuration
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - port: 2888
    targetPort: 2888
    name: server
    protocol: TCP
  - port: 3888
    targetPort: 3888
    name: leader-election
    protocol: TCP
  clusterIP: ""
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - name: client
    protocol: TCP
    port: 2181
    targetPort: 2181
  type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: zookeeper
        imagePullPolicy: Always
        image: "library/zookeeper:3.6"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
        - name: zoo-config
          mountPath: /conf
      volumes:
      - name: zoo-config
        configMap:
          name: zoo-config
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsNonRoot: true
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zoo-config
  namespace: default
data:
  zoo.cfg: |
    tickTime=10000
    dataDir=/var/lib/zookeeper
    clientPort=2181
    initLimit=10
    syncLimit=4
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.1.1.1-172.1.1.10
minikube: v1.13.1
docker: 18.06.3-ce
You can do it with minikube, but the idea of minikube is just to test things in your local environment, so by default it does not have the right iptables setup. Yes, you can adjust that, but if your goal is simply to run without any cloud provider, I highly recommend you use kubeadm (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
This tool will give you a very customizable cluster configuration and you will be able to sort out your networking problems without headaches.
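As a rough sketch of what a kubeadm bootstrap looks like (the pod CIDR and the choice of CNI plugin are assumptions; pick ones that fit your network):
# on the control-plane node
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
# install a CNI plugin of your choice (e.g. Calico or Flannel): kubectl apply -f <cni-manifest>
# join each worker with the "kubeadm join ..." command that kubeadm init prints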

How to resolve the hostname between the pods in the kubernetes cluster?

I am creating two pods with a custom Docker image (ubuntu is the base image). I am trying to ping the pods from their terminals. I am able to reach a pod using its IP address but not its hostname. How can I achieve this without manually adding entries to /etc/hosts in the pods?
Note: I am not running any services on the node. I am basically trying to set up Slurm using this.
Pod Manifest File:
apiVersion: v1
kind: Pod
metadata:
  name: slurmctld
  labels:
    app: slurm
spec:
  nodeName: docker-desktop
  hostname: slurmctld
  containers:
  - name: slurmctld
    image: slurmcontroller
    imagePullPolicy: Always
    ports:
    - containerPort: 6817
    resources:
      requests:
        memory: "1000Mi"
        cpu: "1000m"
      limits:
        memory: "1500Mi"
        cpu: "1500m"
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
---
apiVersion: v1
kind: Pod
metadata:
  name: worker1
  labels:
    app: slurm
spec:
  nodeName: docker-desktop
  hostname: worker1
  containers:
  - name: worker1
    image: slurmworker
    imagePullPolicy: Always
    ports:
    - containerPort: 6818
    resources:
      requests:
        memory: "1000Mi"
        cpu: "1000m"
      limits:
        memory: "1500Mi"
        cpu: "1500m"
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
From the docs here
In general a pod has the following DNS resolution:
pod-ip-address.my-namespace.pod.cluster-domain.example.
For example, if a pod in the default namespace has the IP address
172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:
172-17-0-3.default.pod.cluster.local.
Any pods created by a Deployment or DaemonSet exposed by a Service
have the following DNS resolution available:
pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example
If you don't want to deal with the ever-changing IP of a pod, then you need to create a Service to expose the pods via DNS hostnames. Below is an example of a Service to expose the slurmctld pod.
apiVersion: v1
kind: Service
metadata:
  name: slurmctld-service
spec:
  selector:
    app: slurm
  ports:
  - protocol: TCP
    port: 80
    targetPort: 6817
Assuming you are doing this in the default namespace, you should now be able to access it via slurmctld-service.default.svc.cluster.local.
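To confirm the name resolves from the other pod, something like this should work (assuming the ubuntu-based image has getent available, as the stock ubuntu image does):
kubectl exec -it worker1 -- getent hosts slurmctld-service.default.svc.cluster.local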
You can also use hostname -i, which in all the k8s installs I've tested resolves to the pod's IP address.

single service with multiple exposed ports on a pod with multiple containers

I have gotten multiple containers to work in the same pod.
kubectl apply -f myymlpod.yml
kubectl expose pod mypod --name=myname-pod --port 8855 --type=NodePort
then I was able to test the "expose"
minikube service list
..
|-------------|-------------------------|-----------------------------|
| NAMESPACE | NAME | URL |
|-------------|-------------------------|-----------------------------|
| default | kubernetes | No node port |
| default | myname-pod | http://192.168.99.100:30036 |
| kube-system | kube-dns | No node port |
| kube-system | kubernetes-dashboard | No node port |
|-------------|-------------------------|-----------------------------|
Now, my myymlpod.yml has multiple containers in it.
One container has a service running on 8855, and one on 8877.
The below article ~hints~ at what I need to do.
https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/
Exposing multiple containers in a Pod
While this example shows how to
use a single container to access other containers in the pod, it’s
quite common for several containers in a Pod to listen on different
ports — all of which need to be exposed. To make this happen, you can
either create a single service with multiple exposed ports, or you can
create a single service for every port you’re trying to expose.
"create a single service with multiple exposed ports"
I cannot find anything on how to actually do this, expose multiple ports.
How does one expose multiple ports on a single service?
Thank you.
APPEND:
K8Containers.yml (below)
apiVersion: v1
kind: Pod
metadata:
  name: mypodkindmetadataname
  labels:
    example: mylabelname
spec:
  containers:
  - name: containername-springbootfrontend
    image: mydocker.com/webfrontendspringboot:latest
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "612Mi"
        cpu: "400m"
    ports:
    - containerPort: 8877
  - name: containername-businessservicesspringboot
    image: mydocker.com/businessservicesspringboot:latest
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "613Mi"
        cpu: "400m"
    ports:
    - containerPort: 8855
kubectl apply -f K8containers.yml
pod "mypodkindmetadataname" created
kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
mypodkindmetadataname   2/2     Running   0          11s
k8services.yml (below)
apiVersion: v1
kind: Service
metadata:
  name: myymlservice
  labels:
    name: myservicemetadatalabel
spec:
  type: NodePort
  ports:
  - name: myrestservice-servicekind-port-name
    port: 8857
    targetPort: 8855
  - name: myfrontend-servicekind-port-name
    port: 8879
    targetPort: 8877
  selector:
    name: mypodkindmetadataname
........
kubectl apply -f K8services.yml
service "myymlservice" created
........
minikube service myymlservice --url
http://192.168.99.100:30784
http://192.168.99.100:31751
........
kubectl describe service myymlservice
Name: myymlservice
Namespace: default
Labels: name=myservicemetadatalabel
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"myservicemetadatalabel"},"name":"myymlservice","namespace":"default"...
Selector: name=mypodkindmetadataname
Type: NodePort
IP: 10.107.75.205
Port: myrestservice-servicekind-port-name 8857/TCP
TargetPort: 8855/TCP
NodePort: myrestservice-servicekind-port-name 30784/TCP
Endpoints: <none>
Port: myfrontend-servicekind-port-name 8879/TCP
TargetPort: 8877/TCP
NodePort: myfrontend-servicekind-port-name 31751/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
....
Unfortunately, it is still not working when I try to invoke the "exposed" items.
calling
http://192.168.99.100:30784/myrestmethod
does not work
and calling
http://192.168.99.100:31751
or
http://192.168.99.100:31751/index.html
does not work
Anyone see what I'm missing?
APPEND (working now)
The selector does not match on "name", it matches on label(s).
k8containers.yml (partial at the top)
apiVersion: v1
kind: Pod
metadata:
  name: mypodkindmetadataname
  labels:
    myexamplelabelone: mylabelonevalue
    myexamplelabeltwo: mylabeltwovalue
spec:
  containers:
  # Main application container
  - name: containername-springbootfrontend
    image: mydocker.com/webfrontendspringboot:latest
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "612Mi"
        cpu: "400m"
    ports:
    - containerPort: 8877
  - name: containername-businessservicesspringboot
    image: mydocker.com/businessservicesspringboot:latest
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "613Mi"
        cpu: "400m"
    ports:
    - containerPort: 8855
k8services.yml
apiVersion: v1
kind: Service
metadata:
  name: myymlservice
  labels:
    name: myservicemetadatalabel
spec:
  type: NodePort
  ports:
  - name: myrestservice-servicekind-port-name
    port: 8857
    targetPort: 8855
  - name: myfrontend-servicekind-port-name
    port: 8879
    targetPort: 8877
  selector:
    myexamplelabelone: mylabelonevalue
    myexamplelabeltwo: mylabeltwovalue
Yes, you can create one single service with multiple ports open, where each service port points to a container port.
kind: Service
apiVersion: v1
metadata:
  name: mymlservice
spec:
  selector:
    app: mymlapp
  ports:
  - name: servicename-1
    port: 4444
    targetPort: 8855
  - name: servicename-2
    port: 80
    targetPort: 8877
The target ports point to your container ports.
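If the NodePorts still don't answer, it is usually the selector not matching the pod's labels (exactly the issue fixed in the APPEND above); the service endpoints make that easy to verify:
kubectl get endpoints myymlservice      # should list <pod-ip>:8855 and <pod-ip>:8877
kubectl describe service myymlservice   # "Endpoints: <none>" means the selector matches no pods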

How to pass docker container flags via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a MailHog container.
But I need to run it with my own docker run parameter. If I were running it in Docker directly, I would use the command:
docker run mailhog/mailhog -auth-file=./auth.file
But I need to run it via a Kubernetes pod. My pod looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      containers:
      - name: mailhog
        image: us.gcr.io/com/mailhog:1.0.0
        ports:
        - containerPort: 8025
How can I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.
I tried adding this under containers:
command: ["-auth-file", "/data/mailhog/auth.file"]
but then I get
Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
thanks to #lang2
here is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      volumes:
      - name: secrets-volume
        secret:
          secretName: mailhog-login
      containers:
      - name: mailhog
        image: us.gcr.io/com/mailhog:1.0.0
        resources:
          limits:
            cpu: 70m
            memory: 30Mi
          requests:
            cpu: 50m
            memory: 20Mi
        volumeMounts:
        - name: secrets-volume
          mountPath: /data/mailhog
          readOnly: true
        ports:
        - containerPort: 8025
        - containerPort: 1025
        args:
        - "-auth-file=/data/mailhog/auth.file"
In Kubernetes, command is the equivalent of ENTRYPOINT. In your case, args should be used.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core
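To make the mapping concrete, a short sketch using the image from the question (leaving command unset keeps the image's own ENTRYPOINT and only overrides its CMD):
containers:
- name: mailhog
  image: us.gcr.io/com/mailhog:1.0.0
  # command would override the ENTRYPOINT; args overrides the CMD
  args:
  - "-auth-file=/data/mailhog/auth.file"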
You are on the right track. It's just that you also need to include the name of the binary in the command array as the first element. You can find that out by looking in the respective Dockerfile (CMD and/or ENTRYPOINT).
In this case:
command: ["Mailhog", "-auth-file", "/data/mailhog/auth.file"]
I needed a similar thing (my aim was passing the application profile to the app) and what I did was the following:
Set an environment variable in the Deployment section of the Kubernetes yml file:
env:
- name: PROFILE
  value: "dev"
Use this environment variable in the Dockerfile as a command line argument:
CMD java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar
