Is it possible to set a fixed IP and port in minikube?

Is it possible to fix the IP and the port somewhere in my YAML?
My application has 3 parts: a frontend with its respective load balancer, a backend with its respective load balancer, and a database with a StatefulSet to persist its volume. These 3 applications have their respective HPA rules.
I am posting the backend YAML below. Is it possible to set the IP and the port there? I am working locally, and every so often I have to change the port or the IP.
backend.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: XXXXXXX
          command: ["/bin/sh"]
          args: ["-c", "node index.js"]
          ports:
            - containerPort: 4000
          imagePullPolicy: IfNotPresent
          env:
            - name: HOST_DB
              value: "172.17.0.3"
            - name: PORT_DB
              value: "31109"
          resources:
            requests:
              memory: "128Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "1000m"
          readinessProbe:
            httpGet:
              path: /
              port: 4000
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: 4000
            initialDelaySeconds: 15
            periodSeconds: 20
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 4000
      name: https
  type: LoadBalancer
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
(Screenshot of the result omitted.)

In minikube the IP is fixed anyway: it is the single node's IP. You can hardcode the NodePort by specifying nodePort in the Service; without nodePort specified, Kubernetes will assign a port from the range 30000-32767.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: NodePort
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 4000
      name: https
      nodePort: 30007
Follow this guide to expose applications via NodePort type service in minikube.
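As a side note, the HOST_DB and PORT_DB values in the Deployment point at a pod IP, which also changes whenever the pod is recreated. If the database StatefulSet is fronted by a Service (the name db below is only an assumption; use whatever your Service is called), the backend can reference it by its stable DNS name and Service port instead of an IP. A minimal sketch of the env block under that assumption:
env:
  - name: HOST_DB
    value: "db"      # assumed Service name in front of the StatefulSet; resolved by cluster DNS
  - name: PORT_DB
    value: "5432"    # assumed Service port; use the port your db Service actually exposes
That way only the Service ever needs to know the real address, and the backend manifest does not have to change when the cluster or the pods are recreated.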

Related

GKE Kubernetes Ingress not routing traffic to microservices

I'm pretty new to Kubernetes and trying to deploy what I think is a pretty common use case onto a GKE cluster we have created with Terraform, microservices all hosted on one cluster, but cannot for the life of me get the routing to serve traffic to the correct services. The setup I'm trying to create is as follows:
Deployment & Service for each microservice (canvas-service and video-service)
Single Ingress (class GCE) on the cluster, hosted at a static IP, that routes traffic to each service based on path
The current config looks like this:
canvas-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canvas-service-deployment
  labels:
    name: canvas-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: canvas-service
  template:
    metadata:
      labels:
        app: canvas-service
    spec:
      restartPolicy: Always
      containers:
        - name: canvas-service
          image: gcr.io/emile-learning-dev/canvas-service-image:latest
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
          ports:
            - name: root
              containerPort: 8080
          resources:
            requests:
              memory: "4096Mi"
              cpu: "1000m"
            limits:
              memory: "8192Mi"
              cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: canvas-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: canvas-service
video-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: video-service-deployment
  labels:
    name: video-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: video-service
  template:
    metadata:
      labels:
        app: video-service
    spec:
      restartPolicy: Always
      containers:
        - name: video-service
          image: gcr.io/emile-learning-dev/video-service-image:latest
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
          ports:
            - name: root
              containerPort: 8080
          resources:
            requests:
              memory: "4096Mi"
              cpu: "1000m"
            limits:
              memory: "8192Mi"
              cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: video-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: video-service
services-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "services-ip"
spec:
  defaultBackend:
    service:
      name: canvas-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /canvas/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: canvas-service
                port:
                  number: 80
          - path: /video/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: video-service
                port:
                  number: 80
The output of kubectl describe ingress services-ingress looks like this:
Name:             services-ingress
Namespace:        default
Address:          34.107.136.153
Default backend:  canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
Rules:
  Host        Path        Backends
  ----        ----        --------
  *
              /canvas/*   canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
              /video/*    video-service:80 (10.244.1.15:8080,10.244.8.51:8080)
Annotations:  ingress.kubernetes.io/backends: {"k8s-be-30551--8ba41a687ec15071":"HEALTHY","k8s-be-32145--8ba41a687ec15071":"HEALTHY"}
              ingress.kubernetes.io/forwarding-rule: k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1
              ingress.kubernetes.io/target-proxy: k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1
              ingress.kubernetes.io/url-map: k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1
              kubernetes.io/ingress.global-static-ip-name: services-ip
Events:
  Type    Reason     Age                  From                     Message
  ----    ------     ---                  ----                     -------
  Normal  Sync       9m36s                loadbalancer-controller  UrlMap "k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1" created
  Normal  Sync       9m34s                loadbalancer-controller  TargetProxy "k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1" created
  Normal  Sync       9m26s                loadbalancer-controller  ForwardingRule "k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1" created
  Normal  IPChanged  9m26s                loadbalancer-controller  IP is now 34.107.136.153
  Normal  Sync       6m56s (x5 over 11m)  loadbalancer-controller  Scheduled for sync
For testing, I have a healthcheck route at /health for each service. What I'm running into is that when I hit {public_ip}/health (using the default backend) I get the expected response. But when I hit {public_ip}/canvas/health or {public_ip}/video/health, I get a 404 Not Found.
I know it has something to do with the fact that the entire service route structure is on the /canvas or /video route, but thought that the /* was supposed to address exactly that. I'd like to basically make the root route for each service exist on the corresponding subpaths /canvas and /video. Would love to hear any thoughts you guys have as to what I'm doing wrong that's leading to traffic not being routed correctly.
If it's an issue with the GCP default Ingress resource or this isn't within its functionality, I'm totally open to using an nginx Ingress. But, I haven't been able to get an nginx Ingress to expose an IP at all so figured the GCP Ingress would probably be a shorter path to getting this cluster working. If I'm wrong about this also please let me know.
It's due to the paths defined in the Ingress. Changing the paths to /video and /canvas will make this work.
To understand the reason behind this, you need to read the nginx ingress controller documentation about path and pathType. You can also use a regex pattern in the path, as described in the documentation here: https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/
Tip: when in doubt, you can always exec into the nginx ingress controller pods and check the nginx.conf file to see the location blocks nginx generated.
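For reference, a minimal sketch of the rules section with the plain prefixes described above (assuming pathType: Prefix is acceptable for your controller):
rules:
  - http:
      paths:
        - path: /canvas
          pathType: Prefix
          backend:
            service:
              name: canvas-service
              port:
                number: 80
        - path: /video
          pathType: Prefix
          backend:
            service:
              name: video-service
              port:
                number: 80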

Why does my canary deployment not work with Istio?

I am trying to learn the basics of Istio, so I have gone through the official documentation here in order to create an 80/20 canary deployment. I have also followed this guide from DigitalOcean, which explains it very clearly for a simple deployment: https://www.digitalocean.com/community/tutorials/how-to-do-canary-deployments-with-istio-and-kubernetes.
I have created a simple app with 2 different messages on the homepage, and then created the VirtualService, Gateway and DestinationRule. As mentioned in the guide, I get the external IP with kubectl -n istio-system get svc and then try to navigate to that address, but I get a 503 error. It seems very simple, but I have to be missing something. These are my 3 files (as far as I understand, no more files are necessary):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  namespace: istio
  name: flask-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - "*"
      port:
        name: http
        number: 80
        protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask-app
  namespace: istio
spec:
  hosts:
    - "*"
  gateways:
    - flask-gateway
  http:
    - route:
        - destination:
            host: flask-app
            subset: v1
          weight: 80
        - destination:
            host: flask-app
            subset: v2
          weight: 20
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: flask-app
  namespace: istio
spec:
  host: flask-app
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
Here are the YAMLs with the deployments and services for v1 and v2:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    version: v1
  name: flask-deployment-v1
  namespace: istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        version: v1
        app: flask-app
    spec:
      containers:
        - name: flask-app
          image: latalavera/flask-app:1.3
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service
  namespace: istio
spec:
  selector:
    app: flask-app
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    version: v2
  name: flask-deployment-v2
  namespace: istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
        version: v2
    spec:
      containers:
        - name: flask-app
          image: latalavera/flask-app:2.0
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service2
  namespace: istio
spec:
  selector:
    app: flask-app
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  type: ClusterIP
I have added the labels version: v1 and version: v2 to my deployments, and I have also run the kubectl label ns istio istio-injection=enabled command, but it still does not work.
You named the service flask-service and set the host in your VirtualService to flask-app.
The host field is not a selector but the FQDN of the service you want to route the traffic to. So it should be called flask-service or better flask-service.istio.svc.cluster.local:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask-app
  namespace: istio
spec:
  hosts:
    - "*"
  gateways:
    - flask-gateway
  http:
    - route:
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v1
          weight: 80
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v2
          weight: 20
Alternatively you could just call the service flask-app like the Deployment. But using the full FQDN <service-name>.<namespace-name>.svc.cluster.local is recommended in any case. From docs:
Note for Kubernetes users: When short names are used (e.g. “reviews” instead of “reviews.default.svc.cluster.local”), Istio will interpret the short name based on the namespace of the rule, not the service. A rule in the “default” namespace containing a host “reviews” will be interpreted as “reviews.default.svc.cluster.local”, irrespective of the actual namespace associated with the reviews service. To avoid potential misconfigurations, it is recommended to always use fully qualified domain names over short names.
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService -> hosts
By the way, you don't need 2 services, just one. Your service has a selector for app: flask-app, so it can route traffic to both v1 and v2; how the traffic is split is defined by the VirtualService and DestinationRule. I would recommend removing the service flask-service2. If you need to route traffic inside the mesh, add mesh to the gateways in the VirtualService, or create a new VirtualService for mesh-internal traffic, to reach both versions (see the sketch below). More on that topic:
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService -> gateways
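If you also want mesh-internal callers to get the same 80/20 split, one option is a second VirtualService bound to the reserved mesh gateway and addressed by the service's FQDN. A sketch reusing the names above (the name flask-app-mesh is just an example):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask-app-mesh
  namespace: istio
spec:
  hosts:
    - flask-service.istio.svc.cluster.local   # mesh traffic is matched by the service host, not "*"
  gateways:
    - mesh                                    # reserved name for sidecar-to-sidecar traffic
  http:
    - route:
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v1
          weight: 80
        - destination:
            host: flask-service.istio.svc.cluster.local
            subset: v2
          weight: 20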

Kubernetes Ingress gets unhealthy backend services on Google Kubernetes Engine

I'm trying to deploy two services on Google Container Engine, and I have created a cluster with 3 nodes.
My Docker images are in a private Docker Hub repo, which is why I have created a secret and used it in the Deployments. The Ingress creates a load balancer in the Google Cloud console, but it shows that the backend services are not healthy, and in the Kubernetes section under Workloads it says "Does not have minimum availability".
I'm new to Kubernetes; what could the problem be?
Here are my yamls:
Deployment.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: pythonaryapp
  labels:
    app: pythonaryapp
spec:
  replicas: 1 # We always want more than 1 replica for HA
  selector:
    matchLabels:
      app: pythonaryapp
  template:
    metadata:
      labels:
        app: pythonaryapp
    spec:
      containers:
        - name: pythonaryapp # 1st container
          image: docker.io/arycloud/docker_web_app:pythonaryapp # Dockerhub image
          ports:
            - containerPort: 8080 # Exposes the port 8080 of the container
          env:
            - name: PORT # Env variable key passed to container that is read by app
              value: "8080" # Value of the env port.
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 2
            timeoutSeconds: 2
            successThreshold: 2
            failureThreshold: 10
      imagePullSecrets:
        - name: docksecret
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: pythonaryapp1
  labels:
    app: pythonaryapp1
spec:
  replicas: 1 # We always want more than 1 replica for HA
  selector:
    matchLabels:
      app: pythonaryapp1
  template:
    metadata:
      labels:
        app: pythonaryapp1
    spec:
      containers:
        - name: pythonaryapp1 # 1st container
          image: docker.io/arycloud/docker_web_app:pythonaryapp1 # Dockerhub image
          ports:
            - containerPort: 8080 # Exposes the port 8080 of the container
          env:
            - name: PORT # Env variable key passed to container that is read by app
              value: "8080" # Value of the env port.
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 2
            timeoutSeconds: 2
            successThreshold: 2
            failureThreshold: 10
      imagePullSecrets:
        - name: docksecret
---
And here's services.yaml:
kind: Service
apiVersion: v1
metadata:
  name: pythonaryapp
spec:
  type: NodePort
  selector:
    app: pythonaryapp
  ports:
    - protocol: TCP
      port: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: pythonaryapp1
spec:
  type: NodePort
  selector:
    app: pythonaryapp1
  ports:
    - protocol: TCP
      port: 8080
---
And Here's my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysvcs
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: pythonaryapp
              servicePort: 8080
          - path: /<name>
            backend:
              serviceName: pythonaryapp1
              servicePort: 8080
Update:
Here's the Flask service code:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World, from Python Service.', 200

if __name__ == '__main__':
    app.run()
And when running the container from its Docker image, it returns a 200 status code at the root path /.
Thanks in advance!
Have a look at this post. It might contain helpful tips for your issue.
For example I do see a readiness probe but not a liveness probe in your config files.
This post suggests that “Does not have minimum availability” in k8s could be a result of a CrashloopBackoff caused by a failing liveness probe.
In GKE, the Ingress is implemented by a GCP load balancer. The GCP LB checks the health of the service by calling the service address at the root path '/'. Make sure that your container responds with 200 on the root path, or alternatively change the LB backend service health check route (you can do that in the GCP console).
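Building on both answers: the Flask app shown in the update only serves /, and there is no /healthz route, so the readinessProbe in the Deployments can never succeed, which by itself produces "Does not have minimum availability". A minimal sketch of probes that match what the container actually serves (assuming / is cheap enough to be polled, and keeping the container port from the manifests above):
readinessProbe:
  httpGet:
    path: /          # the app returns 200 here, per the update above
    port: 8080
  periodSeconds: 5
  timeoutSeconds: 2
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
Alternatively, add a /healthz route to the app and keep the existing probe paths.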

Nginx and Ingress with Kubernetes not routing my request

I have Docker, Kubernetes (1.7) and Nginx all running on my RHEL7 server, with my own services inside a Docker container and being picked up by Kubernetes. I know Kubernetes is working correctly with Docker because I can issue a GET request against the pod using its own IP:PORT and it works. I set up Nginx with a default backend, and all of this is working; I know this from the get pods and get svc commands, and everything is running as it should. When I create the Ingress, I know Nginx picks it up, because when I run kubectl describe pods {NGNIX-CONTROLLER} I see that it updates its ingress and even logs the name I gave it. Now I get the IP address of the Kubernetes master using kubectl cluster-info and use this IP address to attempt to call my services, something along the lines of http://KUBEIPADDRESS/PATH/TO/MY/SERVICE, with no port number, but it doesn't work. I have no idea what is going on. Can someone help me figure out why Ingress and/or Nginx isn't routing to my services properly? I'll give my ingress and nginx files down below.
(Note: in the nginx yaml file, the deployment of the nginx controller is at the very bottom.)
Ingress yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
    - host: testhost
      http:
        paths:
          - path: /customer
            backend:
              serviceName: customer
              servicePort: 9001
nginx controller yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress
rules:
  - apiGroups:
      - ""
      - "extensions"
    resources:
      - configmaps
      - secrets
      - services
      - endpoints
      - ingresses
      - nodes
      - pods
    verbs:
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - events
      - services
    verbs:
      - create
      - list
      - update
      - get
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
      - ingresses
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-ns
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - list
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-ns-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-ns
subjects:
  - kind: ServiceAccount
    name: ingress
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress
subjects:
  - kind: ServiceAccount
    name: ingress
    namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissable as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: gcr.io/google_containers/defaultbackend:1.0
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      hostNetwork: true
      terminationGracePeriodSeconds: 60
      serviceAccountName: ingress
      containers:
        - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.3
          name: nginx-ingress-controller
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 1
          ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
Also when I do kubectl describe ing I get
Name:             gateway-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host      Path       Backends
  ----      ----       --------
  testhost
            /customer  customer:9001 ({IP}:9001,{IP}:9001)
Annotations:
  rewrite-target:  /
Events:  <none>
Here are the deployment and service for the customer service, in case anyone needs them:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: customer
  labels:
    run: customer
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: customer
    spec:
      containers:
        - name: customer
          image: customer
          imagePullPolicy: Always
          ports:
            - containerPort: 9001
              protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: customer
spec:
  selector:
    run: customer
  type: NodePort
  ports:
    - name: port1
      protocol: TCP
      port: 9001
      targetPort: 9001
There are some issues with your setup as far as I can see:
KUBEIPADDRESS in the URL you call: an IP address won't work because you configured your Ingress to listen on testhost. So you need to call http://testhost/customer, and configure your network to resolve testhost to the correct IP address
But what is the correct IP address? You are trying to use the k8s master on port 80. That won't work without further configuration. For that you need to use a NodePort Service for the ingress controller, which exposes it on port 80 (and probably 443). In order to use such low ports, you need to allow it with an option of kube-apiserver; see --service-node-port-range on https://kubernetes.io/docs/admin/kube-apiserver/. Once that works, you can use any IP address of any node of your k8s cluster for testhost. Note: be sure that no other application uses these ports on any node!
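A rough sketch of what such a NodePort Service for the controller could look like (the Service name nginx-ingress is just an example; the selector follows the manifests above, and the low nodePort values only work after widening --service-node-port-range as described):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: nginx-ingress-controller
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 80     # requires --service-node-port-range to include 80
    - name: https
      port: 443
      targetPort: 443
      nodePort: 443    # requires --service-node-port-range to include 443
With that in place, point testhost at any node's IP (for example via /etc/hosts) and call http://testhost/customer.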

What is the difference between the template section in replicator.yml and pod.yml in Kubernetes?

I'm trying to understand the difference between the manifest files used for bringing up the Kubernetes cluster.
Say I have a file called pod.yml that defines my pod, that is, the containers running in it:
Pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: webserver
      image: httpd
      ports:
        - containerPort: 80
          hostPort: 80
And I have a replicator.yml file to launch 3 of these pods:
Replicator.yml
kind: "ReplicationController"
apiVersion: "v1"
metadata:
name: "webserver-controller"
spec:
replicas: 3
selector:
app: "webserver"
template:
spec:
containers:
- name: webserver
image: httpd
ports:
- containerPort: 80
hostport: 80`
Can I avoid the template section in the replicator.yml if I'm already using pod.yml to define the images to be used to build the containers in the pod.
Do you need all three manifest files pod.yml, service.yml and replicator.yml or can you just use service.yml and replicator.yml to create the cluster.
If you are using a ReplicationController, Deployment, DaemonSet or a Pet Set, you don't need a separate pod definition. However, the service should be defined if you want to expose the pod and this can be done on the same file.
Example:
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: default
  labels:
    k8s-app: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
  namespace: default
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          image: gcr.io/google_containers/defaultbackend:1.0
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
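The same embedding works for the other controllers mentioned above; for example, a Deployment carries its pod template inline in exactly the same way. A minimal sketch using the httpd image from the question:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webserver
  template:          # the pod definition lives here; no separate pod.yml is needed
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: httpd
          ports:
            - containerPort: 80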
