I have a proxy running on localhost:8000 which gives me access to my service.
Previously, I was using docker-compose.yml, and by including http_proxy=http://host.docker.internal:8000 I was able to reach my service from within the container.
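For reference, a minimal sketch of that earlier Compose file (the service name here is illustrative, not taken from the original):
# docker-compose.yml (illustrative sketch)
version: "3"
services:
  app:
    image: container-image:latest
    environment:
      # host.docker.internal resolves to the host machine from inside the container
      - http_proxy=http://host.docker.internal:8000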
I have switched to Kubernetes using Minikube. I have started minikube with:
minikube start \
--docker-env HTTP_PROXY=http://host.minikube.internal:8000 \
--docker-env NO_PROXY=stats,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.39.0/24
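(One quick sanity check, my addition: the Docker daemon inside the Minikube VM reports its proxy settings in docker info, so something like this should show whether the flag took effect.)
# Should print an "HTTP Proxy" line if --docker-env was applied
minikube ssh -- docker info | grep -i proxy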
The Service and Deployment YAML:
apiVersion: v1
kind: Service
metadata:
  name: service-name
  namespace: service-space
spec:
  type: NodePort
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    run: service-name
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-name
  labels:
    app: service-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-name
  template:
    metadata:
      labels:
        app: service-name
    spec:
      containers:
      - name: service-name
        image: container-image:latest
        ports:
        - containerPort: 5432
From within the service-name container, when I try to reach the service, I get the following error:
curl: (6) Could not resolve host: ...
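A check that narrows this down (my addition, assuming the image ships nslookup or a similar resolver tool):
# If this fails as well, the pod's cluster DNS simply has no record for
# host.minikube.internal; --docker-env configures the Docker daemon only,
# it does not inject proxy settings or DNS entries into pods.
kubectl -n service-space exec deploy/service-name -- nslookup host.minikube.internal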
Related
I created a Service and a Deployment, but I am unable to access the service with minikube service --url accounts-service or minikube service accounts-service.
The second command opens the browser but never connects, while the first just hangs in my terminal.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: accounts-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: accounts-service
  template:
    metadata:
      labels:
        app: accounts-service
    spec:
      containers:
      - name: accounts-service
        image: xxxx:latest
        ports:
        - containerPort: 3001
Service
apiVersion: v1
kind: Service
metadata:
  name: accounts-service
spec:
  selector:
    app: accounts-service
  ports:
  - port: 80
    targetPort: 3001
  type: NodePort
I don't know why my Minikube is not connecting to the port. I am using Minikube with Docker.
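One check worth running here (my addition): verify that the Service actually selects the pod, i.e. that it has endpoints at all.
# An empty ENDPOINTS column means the Service selector matches no pod labels
kubectl get endpoints accounts-service
kubectl get pods -l app=accounts-service -o wide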
I followed the 'Set up Ingress on Minikube' tutorial to test Kubernetes with Minikube, and everything worked fine.
Then I tried to do the same with my own app, and I am having a problem after completing all the steps.
The steps I took to set up my app and Kubernetes are:
Created an app that works on port 5000
Containerized the app in a Docker image and uploaded it to the Minikube image registry
Created a deployment for Kubernetes with my container
Ran kubectl port-forward pod/app 5000; everything worked fine
Created a service of type NodePort to expose the deployment
Ran kubectl port-forward service/app-service 5000; everything worked fine
Created an ingress to expose the service
Ran curl app.info; it returned 502 Bad Gateway
Tried kubectl port-forward service/app-service 5000 again; it still worked fine
Checked minikube service app-service --url and tried the resulting URL; it returned Connection refused. The equivalent URL in the demo setup I did previously works fine, so it looks like something is wrong at this step, even though the port-forwarding works correctly.
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: echo-app
  name: app
  labels:
    app: echo
    tier: services
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: services
  template:
    metadata:
      labels:
        tier: services
    spec:
      containers:
      - name: echo-api
        image: echo/api:v1.0.0b39c8f9a
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
  namespace: echo-app
spec:
  type: NodePort
  selector:
    tier: services
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: echo-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: echo.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 5000
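Two things I would verify against the manifests above (my suggestion, not from the original post): the Ingress backend references api-service while the Service is named app-service, and the curl host (app.info) differs from the rule's host (echo.info). For example:
# describe resolves the backend; an "endpoints not found" note points at a name mismatch
kubectl -n echo-app get svc,ingress
kubectl -n echo-app describe ingress api-ingress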
I used the following YAML files to deploy Couchbase in Kubernetes.
Master:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-master-rc
spec:
  replicas: 1
  selector:
    app: master-pod
  template:
    metadata:
      labels:
        app: master-pod
    spec:
      containers:
      - name: couchbase-master
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: MASTER
        ports:
        - containerPort: 8091
---
apiVersion: v1
kind: Service
metadata:
  name: couchbase-master-service
  labels:
    app: couchbase-master-service
spec:
  ports:
  - port: 8091
  selector:
    app: master-pod
  type: LoadBalancer
Worker:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-worker-rc
spec:
  replicas: 1
  selector:
    app: couchbase-worker-pod
  template:
    metadata:
      labels:
        app: couchbase-worker-pod
    spec:
      containers:
      - name: couchbase-worker
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: "WORKER"
        - name: COUCHBASE_MASTER
          value: "couchbase-master-service"
        - name: AUTO_REBALANCE
          value: "false"
        ports:
        - containerPort: 8091
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: couchbase
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: xxx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: couchbase-master-service
          servicePort: 8091
The pods started running, and at first glance nothing seemed to have an issue. But when I tried to hit the host URL, it gave me a bad gateway error, and when I looked into the logs of the master pod, it showed connection refused at 127.0.0.1:8091. I tried to exec into the pod and run the curl statements from entrypoint.sh manually, but that also gave me the error "failed to connect to 127.0.0.1 port 8091: Connection refused".
I have found that the master image is using this entrypoint script.
I ran this container image, and it looks like the curl is failing because the 15-second sleep is not enough time for couchbase-server to start and open port 8091.
The easiest thing you could do is set this sleep to a higher value, but sleep is usually not the best option. (Actually, this whole image is full of bad practices.)
A better approach would be to replace the sleep with the following lines, which wait until port 8091 is open:
# Poll once per second until something is listening on port 8091
while ! nc -z localhost 8091; do
  sleep 1
done
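Inside Kubernetes the same wait can also be expressed declaratively. A minimal sketch, assuming the stock 8091 port, would be a TCP readiness probe on the container:
# The pod is kept out of Service endpoints until port 8091 accepts connections
readinessProbe:
  tcpSocket:
    port: 8091
  initialDelaySeconds: 5
  periodSeconds: 2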
The Kubernetes YAML below creates resources as expected, and there are no errors/restarts. External requests to the Kafka broker at localhost:30005 hang indefinitely. My understanding is that Docker's notion of a "network" is achieved in Kubernetes via Services. In this case, I have a NodePort Service which brings traffic into the cluster and a ClusterIP Service which handles communication between containers. Based on the non-Kubernetes, docker run version of this Zookeeper/Kafka configuration, I suspect my issue lies somewhere in my Services and/or advertised listeners. The internet is replete with Docker Compose files and Helm charts, but I can't seem to find a combination of configurations that works.
Kafka/Zookeeper YAML:
# Zookeeper
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-app
  template:
    metadata:
      labels:
        app: zookeeper-app
    spec:
      containers:
      - name: zookeeper-app
        image: confluentinc/cp-zookeeper:5.4.1
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_TICK_TIME
          value: "2000"
        - name: ZOOKEEPER_CLIENT_PORT
          value: "2181"
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cluster-svc
  labels:
    app: zk
spec:
  type: ClusterIP
  selector:
    app: zookeeper-app
  ports:
  - port: 2181
    targetPort: 2181
    protocol: TCP
---
# Kafka
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-app
  template:
    metadata:
      labels:
        app: kafka-app
    spec:
      containers:
      - name: kafka-app
        image: confluentinc/cp-kafka:5.4.1
        ports:
        - containerPort: 9092
        - containerPort: 29092
        env:
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: "zk-cluster-svc:2181"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "INSIDE://kafka-cluster-svc:29092,OUTSIDE://localhost:30005"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT"
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: "INSIDE"
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    app: kafka-service
spec:
  type: NodePort
  ports:
  - port: 9092
    nodePort: 30005
    protocol: TCP
  selector:
    app: kafka-app
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster-svc
  labels:
    app: kafka-app
spec:
  selector:
    app: kafka-app
  ports:
  - port: 29092
    targetPort: 29092
    protocol: TCP
^ This is based on a working docker run version of these containers:
docker network create kafzoonet
docker run --name app-zookeeper -d --network="kafzoonet" --env ZOOKEEPER_TICK_TIME=2000 --env ZOOKEEPER_CLIENT_PORT=2181 confluentinc/cp-zookeeper:5.4.1
docker run --name app-kafka -p 30005:9092 -d --network="kafzoonet" --env KAFKA_BROKER_ID=1 --env KAFKA_ZOOKEEPER_CONNECT="app-zookeeper:2181" --env KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://app-kafka:29092,PLAINTEXT_HOST://localhost:30005" --env KAFKA_LISTENER_SECURITY_PROTOCOL_MAP="PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT" --env KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT --env KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka:5.4.1
Are the services configured correctly for a single Kafka node? It appears that Kafka is able to reach Zookeeper, but for some reason when a client connects, it isn't getting the correct listener echoed back. Unfortunately, I don't have any error messages to support this, as the client hangs and neither the Kafka nor the Zookeeper pods log anything.
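One way to confirm what the broker echoes back (my addition; assumes kcat/kafkacat is available on the host) is to request cluster metadata through the NodePort:
# The broker address printed in the metadata listing is the advertised listener
# that clients will use for all subsequent connections; if it is not reachable
# from the client, the client appears to hang exactly as described.
kcat -b localhost:30005 -L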
I have created a Java-based web service which uses SparkJava. By default this web service binds to and listens on port 4567. My company requested this be placed in a Docker container. I created a Dockerfile, built the image, and when I run it I expose port 4567...
docker run -d -p 4567:4567 -t myservice
I can invoke my web service for testing by issuing a curl command...
curl -i -X "POST" -H "Content-Type: application/json" -d "{}" "http://localhost:4567/myservice"
... and this is working. My company then said it wanted to put this in Amazon EKS Kubernetes, so I published my Docker image to the company's private Docker Hub. I created three YAML files...
deployment.yaml
service.yaml
ingress.yaml
I see my objects are created, and I can get a /bin/bash command line into my container running in Kubernetes. From there I can confirm that localhost access to my service works correctly, including references to external web service resources, so I know my service is good.
I am confused by the ingress. I need to expose a URI to get to my service, and I am not sure how this is supposed to work. Many examples show using NGINX, but I am not using NGINX.
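For what it's worth, on EKS an Ingress is typically handled by the AWS ALB ingress controller rather than NGINX, and if that controller is installed the Ingress usually has to opt in via annotations. A sketch (these annotation values assume the ALB controller, which the post doesn't confirm is installed):
metadata:
  annotations:
    # Hypothetical: only meaningful if the ALB ingress controller runs in the cluster
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing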
Here are my files and what I have tested so far. Any guidance is appreciated.
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: my-api-service
spec:
  selector:
    app: my-api
  ports:
  - name: main
    protocol: TCP
    port: 4567
    targetPort: 4567
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-api-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api-container
        image: hub.mycompany.net/myproject/my-api-service
        ports:
        - containerPort: 4567
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
spec:
  backend:
    serviceName: my-api-service
    servicePort: 4567
When I run the command ...
kubectl get ingress my-api-ingress
... shows ...
NAME             HOSTS   ADDRESS   PORTS   AGE
my-api-ingress   *                 80      9s
When I run the command ...
kubectl get service my-api-service
... shows ...
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
my-api-service   ClusterIP   172.20.247.225   <none>        4567/TCP   16h
When I run the following command...
kubectl cluster-info
... I see ...
Kubernetes master is running at https://12CA0954AB5F8E1C52C3DD42A3DBE645.yl4.us-east-1.eks.amazonaws.com
As such I try to hit the endpoint using curl by issuing...
curl -i -X "POST" -H "Content-Type: application/json" -d "{}" "http://12CA0954AB5F8E1C52C3DD42A3DBE645.yl4.us-east-1.eks.amazonaws.com:4567/myservice"
After some time I receive a time-out error...
curl: (7) Failed to connect to 12CA0954AB5F8E1C52C3DD42A3DBE645.yl4.us-east-1.eks.amazonaws.com port 4567: Operation timed out
I believe my ingress is at fault, but I am having difficulty finding non-NGINX examples to compare against.
Thoughts?
barrypicker.
Your service should be "type: NodePort"
This example is very similar (though it was tested in GKE).
kind: Service
apiVersion: v1
metadata:
  name: my-api-service
spec:
  selector:
    app: my-api
  ports:
  - name: main
    protocol: TCP
    port: 4567
    targetPort: 4567
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api-container
        image: hashicorp/http-echo:0.2.1
        args: ["-listen=:4567", "-text='Hello api'"]
        ports:
        - containerPort: 4567
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
spec:
  backend:
    serviceName: my-api-service
    servicePort: 4567
In the output of kubectl get ingress <your ingress> you should see an external IP address.
You can find the AWS-specific implementation here. In addition, you can find more information about exposing services here.
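(As a final check, my addition: an Ingress object does nothing unless an ingress controller is actually running in the cluster. Controller names vary, but something like this shows whether one exists.)
kubectl get pods --all-namespaces | grep -i ingress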