I was just wondering how to manually set the external endpoint used by the Kubernetes web dashboard.
After creating the namespace kube-system, I ran the following:
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
Is there a flag I can use to specify which TCP port to use for external access? As far as I can tell, it's just randomly assigning one. I've looked through the documentation, but I'm having a hard time finding a solution. Any help would be appreciated.
You can specify the desired port as the nodePort in the YAML spec that you use to create the service. In this case, the YAML file you linked to defines the service as:
- kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
    name: kubernetes-dashboard
    namespace: kube-system
  spec:
    type: NodePort
    ports:
    - port: 80
      targetPort: 9090
    selector:
      app: kubernetes-dashboard
You would want to define it as below, assuming your desired port number is 30333 (note that nodePort values must fall within the API server's --service-node-port-range, which defaults to 30000-32767):
- kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: kubernetes-dashboard
      kubernetes.io/cluster-service: "true"
    name: kubernetes-dashboard
    namespace: kube-system
  spec:
    type: NodePort
    ports:
    - port: 80
      targetPort: 9090
      nodePort: 30333
    selector:
      app: kubernetes-dashboard
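After applying the modified spec, you can verify that the service picked up the fixed port:

kubectl --namespace kube-system get service kubernetes-dashboard

The PORT(S) column should then show something like 80:30333/TCP.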
I'm trying to deploy a simple API to AKS. The API works fine in local Docker.
I get no errors, and AKS shows the status as running. However, the browser does not respond. Could someone check the YAML or give a tip as to what could be wrong?
C:\Azure\ACIPythonAPI\conda-flask-api> kubectl get service demo-flask-api --watch
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
demo-flask-api   LoadBalancer   10.0.105.135   11.22.33.44   80:31551/TCP   5m53s
aks.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: demo-flask-api
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: demo-flask-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-flask-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-flask-api
  template:
    metadata:
      labels:
        app: demo-flask-api
    spec:
      containers:
      - name: demo-flask-api
        image: mycontainerregistry.azurecr.io/demo/flask-api:v1
        ports:
        - containerPort: 8080
It seems you did not define an Ingress resource nor an Ingress Controller.
Here is a workflow from Microsoft to install ingress-nginx as an Ingress Controller on your cluster.
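For reference, installing ingress-nginx with Helm typically looks like the following sketch; the linked Microsoft workflow covers the AKS-specific flags:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace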
You then expose the Ingress Controller's service like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
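Once the service is reconciled, you can look up the external IP that the Azure load balancer was assigned (assuming the controller lives in the ingress-nginx namespace):

kubectl --namespace ingress-nginx get service ingress-nginx-controller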
The Ingress Controller will then route incoming traffic to your demo Flask API via an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
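Adapted to the demo-flask-api Service above (name: test in the upstream snippet is just a placeholder), the Ingress would presumably look like this; the name demo-flask-api-ingress is only illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-flask-api-ingress  # illustrative name
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-flask-api  # the Service defined in aks.yaml above
            port:
              number: 80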
Need some basic help with EKS. Not sure what I am doing wrong.
I have a Java Spring Boot application as a Docker container in ECR.
I created a simple deployment manifest:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: java-microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-microservice
  template:
    metadata:
      labels:
        app: java-microservice
    spec:
      containers:
      - name: java-microservice-container
        image: xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/yyyyyyy
        ports:
        - containerPort: 80
I created a LoadBalancer service to expose this externally:
loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
The pods got created, and I see they are running.
When I do kubectl get service java-microservice-service, I do see that the load balancer is running.
When I go to the browser and try to access the application via http://loadbalancer-address, I cannot reach it.
What am I missing? How do I go about debugging this?
thanks in advance
OK, so I changed the port in my YAML files to 8080 and it seems to be working fine.
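In other words, the fix was to make the ports line up with what the application actually listens on. Assuming the Spring Boot app listens on its default port 8080, a consistent pairing would be containerPort: 8080 in the Deployment and a Service along these lines:

apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80          # port exposed by the AWS load balancer
    targetPort: 8080  # port the Spring Boot container listens on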
I'm trying to build a Neo4j Learning Tool for some of our trainings. I want to use Kubernetes to spin up a Neo4j pod for each participant to use. Currently I'm struggling to expose the bolt endpoint using an Ingress, and I don't know why.
Here are my deployment configs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neo4j
  namespace: learn
  labels:
    app: neo-manager
    type: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: neo-manager
      type: database
  template:
    metadata:
      labels:
        app: neo-manager
        type: database
    spec:
      containers:
      - name: neo4j
        imagePullPolicy: IfNotPresent
        image: neo4j:3.5.6
        ports:
        - containerPort: 7474
        - containerPort: 7687
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: neo4j-service
  namespace: learn
  labels:
    app: neo-manager
    type: database
spec:
  selector:
    app: neo-manager
    type: database
  ports:
  - port: 7687
    targetPort: 7687
    name: bolt
    protocol: TCP
  - port: 7474
    targetPort: 7474
    name: client
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: neo4j-ingress
  namespace: learn
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: learn.neo4j.com
    http:
      paths:
      - path: /
        backend:
          serviceName: neo4j-service
          servicePort: 7474
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: learn
data:
  7687: "learn/neo4j-service:7687"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: learn
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.16
        args:
        - /nginx-ingress-controller
        - --tcp-services-configmap=${POD_NAMESPACE}/tcp-services
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
The client gets exposed nicely and is reachable under learn.neo4j.com, but I don't know where to point it to connect to the DB using bolt. Whatever I try, it fails with ServiceUnavailable: Websocket Connection failure (WebSocket network error: The operation couldn't be completed. Connection refused in the console).
What am I missing?
The nginx-ingress-controller by default creates http(s) proxies only.
In your case you're trying to use a different protocol (bolt), so you need to configure your ingress controller to act as a TCP proxy.
In order to do so, you need to create a configmap (in the nginx-ingress-controller namespace) similar to the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  7687: "<your neo4j namespace>/neo4j-service:7687"
Then, make sure your ingress controller is started with the following flag, where the value takes the form namespace/name:
--tcp-services-configmap=ingress-nginx/tcp-services
This will make your nginx-ingress controller listen to port 7687 with a TCP proxy.
You can delete the neo4j-bolt-ingress Ingress; it's not going to be used.
Of course you have to ensure that the ingress controller correctly exposes the 7687 port the same way it does with ports 80 and 443, and possibly you'll have to adjust the settings of any firewall and load balancer you might have.
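Concretely, that usually means adding the TCP port to the controller's own Service, along these lines (a sketch of just the ports section, assuming the standard LoadBalancer Service for the controller; the port name is illustrative):

ports:
- name: proxied-tcp-7687  # illustrative name for the proxied bolt port
  port: 7687
  targetPort: 7687
  protocol: TCP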
Source: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
It automatically tries to connect to port 7687 by default; if you enter the connection URL http://learn.neo4j.bolt.com:80 (or https), it works.
I haven't used Kubernetes ingress in this context before, but I think that when you use HTTP or HTTPS to connect to Neo4j, you still require external availability to connect to the bolt port (7687). Does your setup allow for that?
Try using multi-path mapping in your Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: neo4j-ingress
  namespace: learn
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: learn.neo4j.com
    http:
      paths:
      - path: /browser
        backend:
          serviceName: neo4j-service
          servicePort: 7474
      - path: /
        backend:
          serviceName: neo4j-service
          servicePort: 7687
You should then be able to access the UI at learn.neo4j.com/browser. The bolt connect URL would have to be specified as:
bolt+s://learn.neo4j.com:443/
I want to deploy Jenkins on a local Kubernetes cluster (no cloud).
I will create two services in front of Jenkins.
One service of type NodePort for port 8080 (it gets mapped to a random port, and I can access it outside the cluster; I can also access it inside the cluster by using ClusterIP:8080). All fine.
My second service exists so my Jenkins slaves can connect.
I chose ClusterIP (the default) as the type of my service:
I read about the 3 types of services:
- ClusterIP: exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster.
- NodePort: not necessary, since port 50000 does not need to be exposed outside the cluster.
- LoadBalancer: I'm not working in the cloud.
Here is my .yml to create the services:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: jenkins
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
The problem is that my slaves cannot connect to port 50000.
I tried to telnet the ClusterIP:port of the jenkins-discovery service and got a connection refused. I can telnet to ClusterIP:port of the jenkins-ui service. What am I doing wrong, or is there a part I don't understand?
It's solved. The mistake was the selector, a part that wasn't that clear to me. I was using different selectors, which seemed to cause this issue. This worked:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-ui
  namespace: ci
spec:
  type: NodePort
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: master
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  #type: ClusterIP
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
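A quick way to catch this kind of selector mismatch is to check whether the Service has any endpoints at all; an empty ENDPOINTS column means the selector matches no running pods:

kubectl --namespace ci get endpoints jenkins-discovery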
I have the following deployment yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: authentication
  labels:
    name: authentication
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: authentication-deployment
  namespace: authentication
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: authentication
    spec:
      containers:
      - name: authentication
        image: blueapp/authentication:0.0.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  namespace: authentication
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    name: authentication-deployment
    type: LoadBalancer
    externalName: authentication
I'm pretty new to Kubernetes, but my understanding of what I'm trying to do is: create a namespace; in that namespace, create a deployment of 2 pods; and then create a load balancer to distribute traffic to those pods.
When I run
$ kubectl create -f deployment.yaml
everything creates fine, but the service never gets assigned an external IP.
Is there anything obvious that may be causing this?
Your service is of type NodePort.
To get a load balancer assigned to your service you should be using a LoadBalancer service type:
type: LoadBalancer
See documentation here:
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
External IPs get assigned only in supported cloud environments, provided that your cloud provider is configured correctly.
Observe the error messages in the kube-controller-manager logs when you create your service.
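For completeness, a cleaned-up Service might look like the sketch below: a single type (LoadBalancer), no externalName, and a selector matching the pod template labels from the Deployment (app: authentication) rather than the Deployment's name:

apiVersion: v1
kind: Service
metadata:
  name: authentication-service
  namespace: authentication
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: authentication  # must match the pod labels, not the Deployment name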