I've just finished Google's tutorial on how to implement continuous integration for a Go app on Kubernetes using Jenkins, and it works great. I'm now trying to do the same thing with a Node app that is served on port 3001, but I keep getting this error:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "services \"gceme-frontend\" not found",
"reason": "NotFound",
"details": {
"name": "gceme-frontend",
"kind": "services"
},
"code": 404
}
The only thing I've changed on the routing side is having the load balancer point to 3001 instead of 80, since that's where the Node app is listening. I have a very strong feeling that the error is somewhere in the .yaml files.
My node server (relevant part):
const PORT = process.env.PORT || 3001;
frontend-dev.yaml: (this is applied to the dev environment)
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: gceme-frontend-dev
spec:
replicas:
template:
metadata:
name: frontend
labels:
app: gceme
role: frontend
env: dev
spec:
containers:
- name: frontend
image: gcr.io/cloud-solutions-images/gceme:1.0.0
resources:
limits:
memory: "500Mi"
cpu: "100m"
imagePullPolicy: Always
ports:
- containerPort: 3001
protocol: TCP
services/frontend.yaml:
kind: Service
apiVersion: v1
metadata:
name: gceme-frontend
spec:
type: LoadBalancer
ports:
- name: http
#THIS PORT ACTUALLY GOES IN THE URL: i.e. gceme-frontend: ****
#when it says "no endpoints available for service", that doesn't mean this port is wrong; it means the targetPort is not working or doesn't exist
port: 80
#matches the containerPort in frontend-*.yaml
targetPort: 3001
protocol: TCP
selector:
app: gceme
role: frontend
Jenkinsfile (for dev branches, which is what I'm trying to get working):
sh("kubectl get ns ${env.BRANCH_NAME} || kubectl create ns ${env.BRANCH_NAME}")
// Don't use public load balancing for development branches
sh("sed -i.bak 's#LoadBalancer#ClusterIP#' ./k8s/services/frontend.yaml")
sh("sed -i.bak 's#gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/dev/*.yaml")
sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/services/")
sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/dev/")
echo 'To access your environment run `kubectl proxy`'
echo "Then access your service via http://localhost:8001/api/v1/proxy/namespaces/${env.BRANCH_NAME}/services/${feSvcName}:80/"
Are you creating Service or Ingress resources to expose your application to the outside world?
See tutorials:
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
which have working examples you can copy and modify.
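Since the error says the Service itself was not found, and the Jenkinsfile deploys into a namespace named after the branch, it is also worth confirming that gceme-frontend actually exists in that namespace and has endpoints. A minimal check, assuming the dev branch namespace is called new-feature (illustrative):
kubectl --namespace=new-feature get svc gceme-frontend
kubectl --namespace=new-feature get endpoints gceme-frontend
kubectl --namespace=new-feature get pods -l app=gceme,role=frontend -o wide
If the Service is missing or the endpoints list is empty, the problem is the namespace or the label selector rather than the port numbers.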
I'm trying to set up ingress on the docker driver for minikube 1.16 on Windows 10 Home (build 19042).
Ingress on the docker driver wasn't supported before, but it is now as of minikube 1.16:
https://github.com/kubernetes/minikube/pull/9761
I've been trying to get it working by myself, but I get ERR_CONNECTION_REFUSED when connecting to the ingress at 127.0.0.1 or kubernetes.docker.internal.
Steps:
minikube start
minikube addons enable ingress
create deployment
create ClusterIP
Ingress config
Here is my configuration:
#cluster ip service
apiVersion: v1
kind: Service
metadata:
name: client-cluster-ip-service
spec:
type: ClusterIP
selector:
component: web
ports:
- port: 3000
targetPort: 3000
# not posting deployment code because it's not relevant, but there is a deployment with selector 'component:web' and it's exposing port 3000.
#ingress service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: kubernetes.docker.internal
http:
paths:
- path: /?(.*)
pathType: Prefix
backend:
service:
name: client-cluster-ip-service
port:
number: 3000
I have a DNS redirect in my hosts file.
I've also tried "minikube tunnel" in another terminal, but no luck either.
Thanks!
There is a mistake in your ingress object definition under the rules field:
rules:
- host: kubernetes.docker.internal
- http:
paths:
The exact problem is the - sign in front of http, which turns host and http into two separate array elements.
Take a look at how your YAML looks when converted to JSON:
{
"spec": {
"rules": [
{
"host": "kubernetes.docker.internal"
},
{
"http": {
"paths": [
{
"path": "/?(.*)",
"pathType": "Prefix",
"backend": {
---
This is how the rules section should look in your ingress definition:
spec:
rules:
- host: kubernetes.docker.internal
http:
paths:
- path: /?(.*)
pathType: Prefix
And now notice how this YAML looks when converted to JSON:
{
"spec": {
"rules": [
{
"host": "kubernetes.docker.internal",
"http": {
"paths": [
{
"path": "/?(.*)",
"pathType": "Prefix",
"backend": {
---
You can easily visualize this even better using yaml-viewer.
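If you prefer the command line over an online viewer, you can also render the manifest to JSON locally without applying it. A small sketch, assuming the Ingress is saved as ingress-service.yaml and jq is installed:
kubectl create --dry-run=client -o json -f ingress-service.yaml | jq '.spec.rules'
With the stray - removed, host and http should appear inside the same element of the rules array.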
I'm doing research on how to run a Spring Batch job on RedHat OpenShift as a Kubernetes Scheduled Job.
Steps I have done:
1) Created a sample Spring Batch app that reads a .csv file, does some simple processing, and puts some data into an in-memory H2 DB. The job launcher is invoked via a REST endpoint (/load). The source code can be found here. Please see the README file for the endpoint info.
2) Created the Docker image and pushed it to DockerHub
3) Deployed using that image to my OpenShift Online cluster as an app
What I want to do is:
Run a Kubernetes CronJob from OpenShift that calls the /load REST endpoint, which launches the Spring Batch job periodically.
Can someone please guide me on how I can achieve this?
Thank you
Samme
The easiest way would be to curl your /load REST endpoint.
Here's a way to do that:
The Pod definition that I used as a replacement for your application (for testing purposes):
apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: mendhak/http-https-echo
I used this image because it sends various HTTP request properties back to the client.
Create a Service for the Pod:
apiVersion: v1
kind: Service
metadata:
name: myapp-service
spec:
selector:
app: myapp  # the selector must match the Pod's label
ports:
- protocol: TCP
port: 80 #Port that service is available on
targetPort: 80 #Port that app listens on
Create a CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: curljob
spec:
jobTemplate:
metadata:
name: curljob
spec:
template:
metadata:
spec:
containers:
- command:
- curl
- http://myapp-service:80/load
image: curlimages/curl
imagePullPolicy: Always
name: curljobt
restartPolicy: OnFailure
schedule: '*/1 * * * *'
Alternatively, you can create the same CronJob with a single command:
kubectl create cronjob --image curlimages/curl curljob -oyaml --schedule "*/1 * * * *" -- curl http://myapp-service:80/load
The schedule "*/1 * * * *" specifies how often this CronJob runs; I've set it up to run every minute.
You can see more about how to set up cron jobs here and here.
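To confirm the CronJob actually fires, you can list the Jobs it creates and read their logs. A quick sketch (the generated job name below is illustrative):
kubectl get cronjob curljob
kubectl get jobs
kubectl logs job/curljob-<timestamp>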
Here is the output of kubectl logs from one of the job's pods:
{
"path": "/load",
"headers": {
"host": "myapp-service",
"user-agent": "curl/7.68.0-DEV",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "myapp-service",
"ip": "::ffff:192.168.197.19",
"ips": [],
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "myapp-pod"
As you can see, the application receives a GET request with the path /load.
Let me know if that helps.
I have created a Java based web service which utilizes SparkJava. By default this web service binds and listens to port 4567. My company requested this be placed in a Docker container. I created a Dockerfile and created the image, and when I run I expose port 4567...
docker run -d -p 4567:4567 -t myservice
I can invoke my web service for testing by calling a curl command...
curl -i -X "POST" -H "Content-Type: application/json" -d "{}" "http://localhost:4567/myservice"
... and this is working. My company then says it wants to put this in Amazon EKS Kubernetes so I publish my Docker image to the company's private Dockerhub. I create three yaml files...
deployment.yaml
service.yaml
ingress.yaml
I can see my objects are created, and I can get a /bin/bash command line inside my container running in Kubernetes. From there I have tested that localhost access to my service works correctly, including references to external web service resources, so I know my service is good.
I am confused by the ingress. I need to expose a URI to get to my service and I am not sure how this is supposed to work. Many examples show using NGINX, but I am not using NGINX.
Here are my files and what I have tested so far. Any guidance is appreciated.
service.yaml
kind: Service
apiVersion: v1
metadata:
name: my-api-service
spec:
selector:
app: my-api
ports:
- name: main
protocol: TCP
port: 4567
targetPort: 4567
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-api-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: my-api
spec:
containers:
- name: my-api-container
image: hub.mycompany.net/myproject/my-api-service
ports:
- containerPort: 4567
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-api-ingress
spec:
backend:
serviceName: my-api-service
servicePort: 4567
when I run the command ...
kubectl get ingress my-api-ingress
... shows ...
NAME HOSTS ADDRESS PORTS AGE
my-api-ingress * 80 9s
when I run the command ...
kubectl get service my-api-service
... shows ...
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-api-service ClusterIP 172.20.247.225 <none> 4567/TCP 16h
When I run the following command...
kubectl cluster-info
... I see ...
Kubernetes master is running at https://12CA0954AB5F8E1C52C3DD42A3DBE645.yl4.us-east-1.eks.amazonaws.com
As such I try to hit the endpoint using curl by issuing...
curl -i -X "POST" -H "Content-Type: application/json" -d "{}" "http://12CA0954AB5F8E1C52C3DD42A3DBE645.yl4.us-east-1.eks.amazonaws.com:4567/myservice"
After some time I receive a time-out error...
curl: (7) Failed to connect to 12CA0954AB5F8E1C52C3DD42A3DBE645.yl4.us-east-1.eks.amazonaws.com port 4567: Operation timed out
I believe my ingress is at fault but I am having difficulties finding non-NGINX examples to compare.
Thoughts?
barrypicker.
Your service should be "type: NodePort"
This example is very similar (though it was tested on GKE).
kind: Service
apiVersion: v1
metadata:
name: my-api-service
spec:
selector:
app: my-api
ports:
- name: main
protocol: TCP
port: 4567
targetPort: 4567
type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-api-deployment
spec:
replicas: 1
selector:
matchLabels:
app: my-api
template:
metadata:
labels:
app: my-api
spec:
containers:
- name: my-api-container
image: hashicorp/http-echo:0.2.1
args: ["-listen=:4567", "-text='Hello api'"]
ports:
- containerPort: 4567
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-api-ingress
spec:
backend:
serviceName: my-api-service
servicePort: 4567
In the output of kubectl get ingress <your ingress> you should see an external IP address.
You can find the AWS-specific implementation here. In addition, you can find more information about exposing services here.
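Once the ADDRESS column is populated, the request should go to the Ingress address on port 80 rather than to the cluster API endpoint on port 4567. A rough check, assuming the ingress controller from the linked AWS docs is installed:
kubectl get ingress my-api-ingress
curl -i -X "POST" -H "Content-Type: application/json" -d "{}" "http://<ADDRESS>/myservice"
Here <ADDRESS> is whatever appears in the ADDRESS column of the Ingress.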
I'm trying to set up a few micro services in Kubernetes. Everything is working as expected, except the connection from one micro service to RabbitMQ.
Problem flow:
.NET Core app --> rabbitmq-kubernetes-service.yml --> RabbitMQ
In the .NET Core app the rabbit connection factory config looks like this:
"RabbitMQ": {
"Host": "rabbitmq-service",
"Port": 7000,
"UserName": "guest",
"Password": "guest"
}
The kubernetes rabbit service looks like this:
apiVersion: v1
kind: Service
metadata:
name: rabbitmq-service
spec:
selector:
app: rabbitmq
ports:
- port: 7000
targetPort: 5672
As well as the rabbit deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: rabbitmq
labels:
app: rabbitmq
spec:
replicas: 1
selector:
matchLabels:
app: rabbitmq
template:
metadata:
labels:
app: rabbitmq
spec:
containers:
- name: rabbitmq
image: <private ACR with vanilla cfg - the image is: rabbitmq:3.7.9-management-alpine>
imagePullPolicy: Always
resources:
limits:
cpu: "1"
memory: 512Mi
requests:
cpu: "0.5"
ports:
- containerPort: 5672
So this setup is currently not working in k8s. Locally it works like a charm with a basic docker-compose.
However, what I can do in k8s is reach the running RabbitMQ pod through a LoadBalancer and access the management GUI with these config settings.
apiVersion: v1
kind: Service
metadata:
name: rabbitmqmanagement-loadbalancer
spec:
type: LoadBalancer
selector:
app: rabbitmq
ports:
- port: 80
targetPort: 15672
Where am I going wrong?
I'm assuming you are running the .NET Core app outside the Kubernetes cluster.
If this is indeed the case then you need to use type: LoadBalancer.
LoadBalancer is used to expose a service to the internet.
ClusterIP exposes the service on a cluster-internal IP, so the Service is only accessible from within the cluster; this is also the default ServiceType.
NodePort exposes the service on each Node's IP at a static port.
For more details regarding Services please check the Kubernetes docs.
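If the app must stay outside the cluster, one hedged option is to expose the broker through a LoadBalancer Service that keeps your existing port mapping (7000 outside, 5672 on the pod). A sketch; the Service name rabbitmq-external is illustrative:
kubectl expose deployment rabbitmq --name rabbitmq-external --type LoadBalancer --port 7000 --target-port 5672
kubectl get service rabbitmq-external
Once an EXTERNAL-IP appears, point the "Host" setting in the .NET Core config at that address instead of rabbitmq-service. Keep in mind this makes the broker reachable from the internet, so it is mainly useful for testing.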
You can check if the connection is working using a Python script:
#!/usr/bin/env python
import pika
connection = pika.BlockingConnection(
pika.ConnectionParameters(host='RABBITMQ_SERVER_IP'))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()
This script will try to connect to RABBITMQ_SERVER_IP on port 5672.
The script requires the pika library, which can be installed with pip install pika.
I'm a super beginner with Kubernetes and I'm trying to work out how to split my monolithic application into different micro services.
Let's say I'm writing my micro services application in Flask and each of them exposes some endpoints like:
Micro service 1:
/v1/user-accounts
Micro service 2:
/v1/savings
Micro service 3:
/v1/auth
If all of them were running as blueprints in a monolithic application, they would all be prefixed with the same IP, that is, the IP of the host server my application is running on, like 10.12.234.69, e.g.:
http://10.12.234.69:5000/v1/user-accounts
Now, deploying those 3 "blueprints" on 3 different Pods/Nodes in Kubernetes will change the IP address of each endpoint, giving maybe 10.12.234.69, then 10.12.234.70 or 10.12.234.75.
How can I write an application that keeps the URL reference constant even if the IP address changes?
Would a Load Balancer Service do the trick?
Maybe the Service Registry feature of Kubernetes does the "DNS" part for me?
I know it may sound like a very obvious question, but I still cannot find any reference/example for this simple problem.
Thanks in advance!
EDIT (as a follow-up to Simon's answer):
questions:
Given that the Ingress spawns a load balancer and makes all routes reachable under an http path prefixed by the load balancer's IP (http://<ADDRESS>/v1/savings), how can I associate the load balancer's IP with the IP of the pod on which the Flask web server is running?
If I add other sub-routes under the same paths, like /v1/savings/get and /v1/savings/get/id/<var_id>, do I have to add all of them to the ingress http paths in order for them to be reachable through the load balancer?
A load balancer is what you are looking for.
Kubernetes services will make your pods accessible under a given hostname cluster-internally.
If you want to make your services accessible from outside the cluster under a single IP and different paths, you can use a load balancer and Kubernetes HTTP Ingresses. They define under which domain and path a service should be mapped and can be fetched by a load balancer to build its configuration.
Example based on your micro service architecture:
Mocking applications
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: user-accounts
spec:
template:
metadata:
labels:
app: user-accounts
spec:
containers:
- name: server
image: nginx
ports:
- containerPort: 80
args:
- /bin/bash
- "-c"
- echo 'server { location /v1/user-accounts { return 200 "user-accounts"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: savings
spec:
template:
metadata:
labels:
app: savings
spec:
containers:
- name: server
image: nginx
ports:
- containerPort: 80
command:
- /bin/bash
- "-c"
- echo 'server { location /v1/savings { return 200 "savings"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: auth
spec:
template:
metadata:
labels:
app: auth
spec:
containers:
- name: server
image: nginx
ports:
- containerPort: 80
command:
- /bin/bash
- "-c"
- echo 'server { location /v1/auth { return 200 "auth"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
These deployments stand in for your services and simply return their own name via HTTP under /v1/<name>.
Mapping applications to services
---
kind: Service
apiVersion: v1
metadata:
name: user-accounts
spec:
type: NodePort
selector:
app: user-accounts
ports:
- protocol: TCP
port: 80
---
kind: Service
apiVersion: v1
metadata:
name: savings
spec:
type: NodePort
selector:
app: savings
ports:
- protocol: TCP
port: 80
---
kind: Service
apiVersion: v1
metadata:
name: auth
spec:
type: NodePort
selector:
app: auth
ports:
- protocol: TCP
port: 80
These services create an internal IP and a domain resolving to it based on their names, mapping them to the pods found by a given selector. Applications running in the same cluster namespace will be able to reach them under user-accounts, savings and auth.
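You can verify those cluster-internal names from a throwaway pod before wiring up the Ingress. A minimal sketch using a temporary curl container (the pod name curl-test is arbitrary):
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- curl http://savings/v1/savings
This should print "savings", confirming that the Service DNS name resolves inside the cluster.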
Making services reachable via load balancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example
spec:
rules:
- http:
paths:
- path: /v1/user-accounts
backend:
serviceName: user-accounts
servicePort: 80
- path: /v1/savings
backend:
serviceName: savings
servicePort: 80
- path: /v1/auth
backend:
serviceName: auth
servicePort: 80
This Ingress defines under which paths the different services should be reachable. Verify your Ingress via kubectl get ingress:
# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
example * 80 1m
If you are running on Google Container Engine, there is an Ingress controller running in your cluster which will spawn a Google Cloud Load Balancer when you create a new Ingress object. Under the ADDRESS column of the above output, there will be an IP displayed under which you can access your applications:
# curl http://<ADDRESS>/v1/user-accounts
user-accounts⏎
# curl http://<ADDRESS>/v1/savings
savings⏎
# curl http://<ADDRESS>/v1/auth
auth⏎