Docker Desktop and Istio unable to access endpoint on macOS - docker

I am trying to work on a sample project for Istio. I have two apps, demo1 and demo2.
demoapp YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-1-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-1-app
  template:
    metadata:
      labels:
        app: demo-1-app
    spec:
      containers:
      - name: demo-1-app
        image: muzimil:demo-1
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: demo-1-app
spec:
  selector:
    app: demo-1-app
  ports:
  - port: 8080
    name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-1-app
  labels:
    account: demo-1-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-2-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-2-app
  template:
    metadata:
      labels:
        app: demo-2-app
    spec:
      containers:
      - name: demo-2-app
        image: muzimil:demo2-1
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: demo-2-app
spec:
  selector:
    app: demo-2-app
  ports:
  - port: 8080
    name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-2-app
  labels:
    account: demo-2-app
And my gateway is this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-app-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-service1
spec:
  hosts:
  - "*"
  gateways:
  - demo-app-gateway
  http:
  - match:
    - uri:
        exact: /demo1
    route:
    - destination:
        host: demo-1-app
        port:
          number: 8080
  - match:
    - uri:
        exact: /demo2
    route:
    - destination:
        host: demo-2-app
        port:
          number: 8080
I tried to hit the URL with both localhost/demo1/getDetails and 127.0.0.1/demo1/getDetails, but I always get a 404.
istioctl analyze does not report any errors.

To access the application, either change the istio-ingressgateway service to NodePort or port-forward the Istio ingress gateway service. Edit the istio-ingressgateway service to change the service type:
type: NodePort
Kubernetes will assign a node port; you can then provide the same node port value in the Istio gateway:
selector:
  istio: ingressgateway # use istio default controller
servers:
- port:
    number: <nodeportnumber>
    name: http
    protocol: HTTP
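A minimal sketch of the commands for either route (assuming Istio's default istio-system namespace and the default http2 port name on the ingress gateway service; adjust both to your install):

# Option A: switch the ingress gateway Service to NodePort, then read the assigned port
kubectl -n istio-system patch svc istio-ingressgateway -p '{"spec": {"type": "NodePort"}}'
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'

# Option B: port-forward the gateway Service and test locally
kubectl -n istio-system port-forward svc/istio-ingressgateway 8080:80
curl -v http://127.0.0.1:8080/demo1/getDetails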

Related

Istio gateway redirects to HTML nginx image doesn't work

I have an Istio 1.3.3 gateway and a helloworld gateway pointing toward my application service.
Istio Gateway
apiVersion: v1
kind: Service
metadata:
  labels:
    app: istio-ingressgateway
    chart: gateways-1.0.0
    heritage: Tiller
    istio: ingressgateway
    release: RELEASE-NAME
  name: istio-ingressgateway
  namespace: istio-system
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: http2
    nodePort: 31380
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 31390
    port: 443
    protocol: TCP
    targetPort: 443
  - name: tcp
    nodePort: 31400
    port: 31400
    protocol: TCP
    targetPort: 31400
  - name: tcp-pilot-grpc-tls
    nodePort: 32565
    port: 15011
    protocol: TCP
    targetPort: 15011
  - name: tcp-citadel-grpc-tls
    nodePort: 32352
    port: 8060
    protocol: TCP
    targetPort: 8060
  - name: http2-helloworld
    nodePort: 31750
    port: 15033
    protocol: TCP
    targetPort: 15033
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  type: LoadBalancer
HelloWorld Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 15033
      name: http2-helloworld
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - port: 15033
    route:
    - destination:
        host: helloworld
        port:
          number: 5000
HelloWorld.yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
  labels:
    version: v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
      version: v1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
When I try to access the application through the Istio gateway using localhost:15033, other ports and Docker images work fine, but this Docker image, which uses nginx, does not work. When I access localhost:15033 I get this error:
upstream connect error or disconnect/reset before headers. reset reason: connection termination
Information
Kubernetes was installed and started from the Docker Desktop application on Mac; the context was docker-desktop.
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
kubectl cluster-info
Kubernetes master is running at https://kubernetes.docker.internal:6443
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubectl cluster-info dump > clusterInfoDump.txt
https://justpaste.it/5n1op
istioctl version
client version: 1.3.3
control plane version: 1.3.3
In your HelloWorld.yaml you are missing targetPort, and this is why nginx is unreachable: when targetPort is omitted it defaults to the value of port (5000 here), while the nginx container actually listens on port 80.
This is how it should look:
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
  - port: 5000
    name: http
    targetPort: 80
  selector:
    app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
  labels:
    version: v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
      version: v1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
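As a quick sanity check (standard kubectl; the pod IPs below are placeholders), the Endpoints object should now point at port 80 on the helloworld pods:

kubectl get endpoints helloworld
# NAME         ENDPOINTS                   AGE
# helloworld   10.1.0.10:80,10.1.0.11:80   2m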

Connection issue between services in kubernetes

I have three different images related to my application; they work fine with docker-compose but have issues running on a Kubernetes cluster in GCP.
Below is the deployment file:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql-database
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    app: mysql-database
    tier: database
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql-database
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-database
        tier: database
    spec:
      hostname: mysql
      containers:
      - image: mysql/mysql-server:5.7
        name: mysql
        env:
        - name: "MYSQL_USER"
          value: "root"
        - name: "MYSQL_HOST"
          value: "mysql"
        - name: "MYSQL_DATABASE"
          value: "xxxx"
        - name: "MYSQL_PORT"
          value: "3306"
        - name: "MYSQL_PASSWORD"
          value: "password"
        - name: "MYSQL_ROOT_PASSWORD"
          value: "password"
        - name: "RAILS_ENV"
          value: "production"
        ports:
        - containerPort: 5432
          name: db
---
apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    name: dgservice
    tier: dgservice
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dgservice
        tier: dgservice
    spec:
      hostname: dgservice
      containers:
      - image: gcr.io/sample/sample-image:check_1
        name: dgservice
        ports:
        - containerPort: 8080
          name: dgservice
---
apiVersion: v1
kind: Service
metadata:
  name: dg-ui
  labels:
    name: dg-ui
spec:
  type: NodePort
  ports:
  - nodePort: 30156
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: dg-ui
    tier: dg
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dg-ui
  labels:
    app: dg-ui
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dg-ui
        tier: dg
    spec:
      hostname: dg-ui
      containers:
      - image: gcr.io/sample/sample:latest
        name: dg-ui
        env:
        - name: "MYSQL_USER"
          value: "root"
        - name: "MYSQL_HOST"
          value: "mysql"
        - name: "MYSQL_DATABASE"
          value: "xxxx"
        - name: "MYSQL_PORT"
          value: "3306"
        - name: "MYSQL_PASSWORD"
          value: "password"
        - name: "MYSQL_ROOT_PASSWORD"
          value: "password"
        - name: "RAILS_ENV"
          value: "production"
        - name: "DG_SERVICE_HOST"
          value: "dgservice"
        ports:
        - containerPort: 8000
          name: dg-ui
The image is being pulled successfully from GCR as well.
The connection between mysql and the UI service also works fine, and my data is migrated without any issues. But the connection is not established between the service and the UI.
Why is the UI not able to access the service in my application?
As your deployment has the following labels, the service needs to use the same labels in its selector in order to create the Endpoints object.
Endpoints are the API object behind a service; they are where a service routes connections when a connection is made to the ClusterIP of the service.
These are the labels of the deployment:
labels:
  app: dgservice
  tier: dgservice
New Service definition with the correct labels:
apiVersion: v1
kind: Service
metadata:
  name: dgservice
  labels:
    app: dgservice
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: dgservice
    tier: dgservice
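A hedged way to confirm the fix: once the selector matches the pod labels, the Endpoints object should be populated; an empty ENDPOINTS column means the selector still matches nothing. The pod IP below is a placeholder:

kubectl get endpoints dgservice
# NAME        ENDPOINTS        AGE
# dgservice   10.0.1.5:8080    1m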
I am assuming that by "service" you are referring to your "dgservice". With the YAML presented above, I believe you also need to specify DG_SERVICE_PORT (port 8080) to correctly access "dgservice".
As mentioned by Suresh in the comments, you should expose internal services using the ClusterIP type. NodePort is a superset of ClusterIP: it exposes the service internally to your cluster at service-name:port and externally at node-ip:nodeport, targeting your deployment/pod at targetPort.
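For illustration, a sketch of the ClusterIP variant that the last paragraph suggests (ClusterIP is the default Service type, so omitting type entirely has the same effect):

apiVersion: v1
kind: Service
metadata:
  name: dgservice
spec:
  type: ClusterIP        # default; reachable inside the cluster as dgservice:8080
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: dgservice
    tier: dgservice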

Google Kubernetes Engine Ingress UNHEALTHY backend service

Kind note: I have googled a lot and looked at many related questions on Stack Overflow, but I couldn't solve my issue, so please don't mark this as a duplicate!
I'm trying to deploy two services (one is Python Flask, the other is Node.js) on Google Kubernetes Engine. I have created two Kubernetes deployments, one for each service, and two Kubernetes services of type NodePort, one for each service. Then I created an Ingress and defined my endpoints, but the Ingress says that one backend service is UNHEALTHY.
Here are my Deployment YAML definitions:
# Pyservice deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pyservice
  labels:
    app: pyservice
  namespace: default
spec:
  selector:
    matchLabels:
      app: pyservice
  template:
    metadata:
      labels:
        app: pyservice
    spec:
      containers:
      - name: pyservice
        image: docker.io/arycloud/docker_web_app:pyservice
        ports:
        - containerPort: 5000
      imagePullSecrets:
      - name: docksecret
---
# Nodeservice deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodeservice
  labels:
    app: nodeservice
  namespace: default
spec:
  selector:
    matchLabels:
      app: nodeservice
  template:
    metadata:
      labels:
        app: nodeservice
        tier: web
    spec:
      containers:
      - name: nodeservice
        image: docker.io/arycloud/docker_web_app:nodeservice
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: docksecret
And, here are my services and Ingress YAML definitions:
# pyservice service
kind: Service
apiVersion: v1
metadata:
  name: pyservice
spec:
  type: NodePort
  selector:
    app: pyservice
  ports:
  - protocol: TCP
    port: 5000
    nodePort: 30001
---
# nodeservice service
kind: Service
apiVersion: v1
metadata:
  name: nodeservcie
spec:
  type: NodePort
  selector:
    app: nodeservcie
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 30002
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: pyservice
          servicePort: 5000
      - path: /*
        backend:
          serviceName: pyservice
          servicePort: 5000
      - path: /node/svc/
        backend:
          serviceName: nodeservcie
          servicePort: 8080
The pyservice works fine, but the nodeservice shows as an UNHEALTHY backend.
I have even edited the firewall rules for all gke-... rules to allow all ports, just to rule this issue out, but it still shows the UNHEALTHY status for nodeservice.
What's wrong here?
Thanks in advance!
Why are you using a GCE ingress class and then specifying a nginx rewrite annotation? In case you haven't realised, the annotation won't do anything to the GCE ingress.
You have also got 'nodeservcie' as your selector instead of 'nodeservice'.
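For clarity, a sketch of the corrected Service with the selector typo fixed (keeping the Service name as-is so the Ingress backend reference still resolves; this assumes the deployment label app: nodeservice is what you intend to match):

kind: Service
apiVersion: v1
metadata:
  name: nodeservcie
spec:
  type: NodePort
  selector:
    app: nodeservice    # was 'nodeservcie', which matches no pods
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 30002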

how to use AWS ELB with nginx ingress on k8s

1) I have SSL certs generated on AWS:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...fa5298fc
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
  name: ingress-nginx-lb-svc
  # namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: http
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: nginx-ingress-control-pod
  type: LoadBalancer
2) Then I have the nginx controller pod:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-control-pod
  labels:
    app: nginx-ingress-control-pod
spec:
  replicas: 1
  selector:
    app: nginx-ingress-control-pod
  template:
    metadata:
      labels:
        app: nginx-ingress-control-pod
    spec:
      containers:
      - image: nginxdemos/nginx-ingress:1.0.0
        imagePullPolicy: Always
        name: nginx-ingress-control-pod
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        #- name: https
        #  containerPort: 443
        #  hostPort: 443
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # Uncomment the lines below to enable extensive logging and/or customization of
        # NGINX configuration with configmaps
        args:
        #- -v=3
        #- -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        #- -default-server-tls-secret=$(POD_NAMESPACE)/web-secret
3) Lastly, I am using Helm to deploy Grafana and Prometheus (this setup works when accessing via NodePort). I just cannot figure out the setup with ELB and ingress.
By the way, the ingress is part of the Grafana deployment, and it is created correctly:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  creationTimestamp: 2018-04-06T09:28:10Z
  generation: 1
  labels:
    app: graf-helmf-default-ns-grafana
    chart: grafana-0.8.5
    component: grafana
    heritage: Tiller
    release: graf-helmf-default-ns
  name: graf-helmf-default-ns-grafana
  namespace: default
  resourceVersion: "995865"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/graf-helmf-default-ns-grafana
  uid: d2991870-397c-11e8-9d...5a37f5a
spec:
  rules:
  - host: grafana.my.valid.domain.com
    http:
      paths:
      - backend:
          serviceName: graf-helmf-default-ns-grafana
          servicePort: 80
status:
  loadBalancer: {}

connecting backend API address

I have a frontend single-page application written in Vue.js, and I use axios to call a backend API. I am trying to use Kubernetes to run the services.
My deployment and service YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: testapi
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: testapi
    spec:
      containers:
      - name: testapi
        image: testregistry.azurecr.io/testapi:latest
        ports:
        - containerPort: 3001
---
apiVersion: v1
kind: Service
metadata:
  name: testapi
spec:
  type: LoadBalancer
  ports:
  - port: 3001
  selector:
    app: testapi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: testportal
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: testportal
    spec:
      containers:
      - name: testportal
        image: testregistry.azurecr.io/testportal
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: testportal
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: testportal
The frontend runs in the client's browser. My axios URL points at http://testapi:3001, which obviously is not working. Any idea how to have it connect to the backend API?
You can only use that service name from other deployments inside the same Kubernetes cluster. If you want to call it from the frontend, you will have to expose an externally accessible public endpoint.
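A minimal sketch of that, given that testapi is already type: LoadBalancer (the jsonpath reads the provisioned external IP; the URL is a placeholder to wire into the axios base URL):

kubectl get svc testapi -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# then point the frontend at http://<EXTERNAL-IP>:3001 instead of http://testapi:3001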
