Can't connect to etcd via etcdctl - docker

I have deployed an etcd server (3.5.0) as a container on Kubernetes and am able to access the /version and /metrics endpoints via the FQDN configured in the HTTPProxy from my local machine, as below:
https://etcd.apps.domain.net/version
https://etcd.apps.domain.net/metrics
I am on the Windows platform. I am using etcdctl (3.5.0), downloaded from https://github.com/etcd-io/etcd/releases/tag/v3.5.0, to connect to the server as below:
etcdctl.exe --endpoints=https://etcd.apps.domain.net:443 endpoint health
But the client is not able to connect to the server and gives the below error:
{"level":"warn","ts":1650617630.997635,"logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00072c380/#initially=[https://etcd.apps.domain.net:443]","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
{"level":"warn","ts":1650617632.298635,"logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00072c380/#initially=[https://etcd.apps.domain.net:443]","attempt":1,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
{"level":"warn","ts":1650617633.598635,"logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00072c380/#initially=[https://etcd.apps.domain.net:443]","attempt":2,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
{"level":"warn","ts":1650617634.607135,"logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00072c380/#initially=[https://etcd.apps.domain.net:443]","attempt":3,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
https://etcd.apps.domain.net:443 is unhealthy: failed to commit proposal: context deadline exceeded
Error: unhealthy cluster
Now I know the cluster is not unhealthy, because I can access the version endpoint on my local machine: https://etcd.apps.domain.net/version. The output is:
{"etcdserver":"3.5.0","etcdcluster":"3.5.0"}
My Kubernetes deployment manifests are as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: etcd
  labels:
    app: etcd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: etcd
  template:
    metadata:
      labels:
        app: etcd
    spec:
      securityContext:
        runAsUser: 999
        fsGroup: 999
      containers:
        - name: etcd
          image: <image path>
          imagePullPolicy: Always
          resources:
            limits:
              ephemeral-storage: 1000Mi
            requests:
              ephemeral-storage: 1000Mi
          ports:
            - containerPort: 2379
---
apiVersion: v1
kind: Service
metadata:
  name: etcd
  labels:
    app: etcd
spec:
  ports:
    - name: https
      port: 2379
      targetPort: 2379
      protocol: TCP
  selector:
    app: etcd
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: etcd
spec:
  virtualhost:
    fqdn: etcd.apps.domain.net
    tls:
      secretName: ingress-contour/ingress-contour-default-ssl-cert
  routes:
    - conditions:
        - prefix: /
      services:
        - name: etcd
          port: 2379
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: etcd.allow-ingress
spec:
  podSelector:
    matchLabels:
      app: etcd
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              namespace: ingress-contour
      ports:
        - protocol: TCP
          port: 2379
My Docker image for etcd:
FROM artifactory/lrh:8.4-202109
RUN mkdir -p /app
RUN chown -R 999:999 /app
COPY tar /usr/bin/
COPY etcd-v3.5.0-linux-amd64.tar.gz /app/
RUN yum -y install gzip
RUN tar -xf /app/etcd-v3.5.0-linux-amd64.tar.gz -C /app --strip 1
ENV ETCD_DATA_DIR=/app
EXPOSE 2379
ENTRYPOINT ["/app/etcd", "-advertise-client-urls", "https://etcd.apps.domain.net:2379", "-listen-client-urls", "http://0.0.0.0:2379"]

Related

Unable to make request to service running in Minikube tunnel

I am working on an API that I will deploy with Kubernetes and I want to test it locally.
I created the Docker image, successfully tested it locally, and pushed it to a public Docker registry. Now I would like to deploy it in a Kubernetes cluster. No errors are being thrown; however, I am not able to make a request to the endpoint exposed by the Minikube tunnel.
Steps to reproduce:
Start Minikube container: minikube start --ports=127.0.0.1:30000:30000
Create deployment and service: kubectl apply -f fastapi.yaml
Start minikube tunnel: minikube service fastapi-server
Encountered the following error: 192.168.49.2 took too long to respond.
requirements.txt:
anyio==3.6.1
asgiref==3.5.2
click==8.1.3
colorama==0.4.4
fastapi==0.78.0
h11==0.13.0
httptools==0.4.0
idna==3.3
pydantic==1.9.1
python-dotenv==0.20.0
PyYAML==6.0
sniffio==1.2.0
starlette==0.19.1
typing_extensions==4.2.0
uvicorn==0.17.6
watchgod==0.8.2
websockets==10.3
main.py:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"status": "OK"}
Dockerfile:
FROM python:3.9
WORKDIR /
COPY . .
RUN pip install --no-cache-dir --upgrade -r ./requirements.txt
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
fastapi.yaml:
# deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: fastapi-server
  name: fastapi-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastapi-server
  template:
    metadata:
      labels:
        app: fastapi-server
    spec:
      containers:
        - name: fastapi-server
          image: smdf/fastapi-test
          ports:
            - containerPort: 8000
              name: http
              protocol: TCP
---
# service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: fastapi-server
  name: fastapi-server
spec:
  type: LoadBalancer
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
      nodePort: 30000
Your problem is that you did not set the service selector:
# service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: fastapi-server
  name: fastapi-server
spec:
  selector:              <------------- Missing part
    app: fastapi-server  <-------------
  type: NodePort         <------------- Set the type to NodePort
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
      nodePort: 30000
How do you check whether your service is defined properly?
I checked to see if there are any endpoints, and there weren't any, since you did not "attach" the service to your deployment:
kubectl get endpoints -A
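Once the selector is in place, the service should list the pod's address. A rough sketch of what a healthy result looks like (the IP and age here are made up):
kubectl get endpoints fastapi-server
NAME             ENDPOINTS         AGE
fastapi-server   172.17.0.4:8000   1m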
For more info you can read this section under my GitHub
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/05-Services

Minikube - use a proxy running in localhost to access external service

I have a proxy running on localhost:8000 which gives me access to my service.
Previously, I was using docker-compose.yml, and by including http_proxy=http://host.docker.internal:8000 I was able to reach my service from within the container.
I have switched to Kubernetes using Minikube. I have started minikube with:
minikube start \
--docker-env HTTP_PROXY=http://host.minikube.internal:8000 \
--docker-env NO_PROXY=stats,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.39.0/24
The service and deployment yaml:
apiVersion: v1
kind: Service
metadata:
  name: service-name
  namespace: service-space
spec:
  type: NodePort
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    run: service-name
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-name
  labels:
    app: service-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-name
  template:
    metadata:
      labels:
        app: service-name
    spec:
      containers:
        - name: service-name
          image: container-image:latest
          ports:
            - containerPort: 5432
From within the service-name container, when I try to reach the service I get the following error:
curl: (6) Could not resolve host: ...

How do I get Kubernetes service to open my django application on a web browser using local host?

I have been trying to get Kubernetes to launch my web application in a browser through localhost. When I try to open localhost it times out, and I have tried using minikube service --url, which also does not work. All of my deployment and service pods are running. I have also tried port forwarding and changing the type to NodePort. I have provided my YAML, Docker, and svc code below.
apiVersion: v1
kind: Service
metadata:
  name: mywebsite
spec:
  type: LoadBalancer
  selector:
    app: mywebsite
  ports:
    - protocol: TCP
      name: http
      port: 8743
      targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebsite
spec:
  selector:
    matchLabels:
      app: mywebsite
  template:
    metadata:
      labels:
        app: mywebsite
    spec:
      containers:
        - name: mywebsite
          image: mywebsite
          imagePullPolicy: Never
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 100m
              memory: 250Mi
            limits:
              memory: "2Gi"
              cpu: "500m"
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
EXPOSE 8000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . .
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Name: mywebsite
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=mywebsite
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.99.161.241
IPs: 10.99.161.241
Port: http 8743/TCP
TargetPort: 5000/TCP
NodePort: http 32697/TCP
Endpoints: 172.17.0.3:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
It's because your container is running on port 8000, but your service is forwarding the traffic to port 5000.
apiVersion: v1
kind: Service
metadata:
  name: mywebsite
spec:
  type: LoadBalancer
  selector:
    app: mywebsite
  ports:
    - protocol: TCP
      name: http
      port: 8743
      targetPort: 8000    <------------- changed
The deployment should be:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebsite
spec:
  selector:
    matchLabels:
      app: mywebsite
  template:
    metadata:
      labels:
        app: mywebsite
    spec:
      containers:
        - name: mywebsite
          image: mywebsite
          imagePullPolicy: Never
          ports:
            - containerPort: 8000    <------------- changed
          resources:
            requests:
              cpu: 100m
              memory: 250Mi
            limits:
              memory: "2Gi"
              cpu: "500m"
You need to change the targetPort in the Service and the containerPort in the Deployment.
Or else, change EXPOSE 8000 to 5000 in the Dockerfile and run the application on port 5000:
CMD ["python", "manage.py", "runserver", "0.0.0.0:5000"]
Don't forget to run docker build one more time after the above changes.
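One way to do that rebuild, assuming the image is built straight into Minikube's Docker daemon (with imagePullPolicy: Never the image has to already exist on the cluster node):
eval $(minikube docker-env)
docker build -t mywebsite .
kubectl rollout restart deployment mywebsite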

Can't access my local kubernetes service over the internet

Implementation Goal
Expose Zookeeper instance, running on kubernetes, to the internet.
(configuration & version information provided at the bottom)
Implementation Attempt
I currently have a minikube cluster running on ubuntu 14.04, backed by docker containers.
I'm running a bare-metal k8s cluster, and I'm trying to expose a ZooKeeper service to the internet. Seeing as my cluster is not running on a cloud provider, I set up MetalLB in order to provide a network load-balancer implementation for my ZooKeeper service.
On startup everything looks good, an external IP is assigned and I can access it from the same host via a curl command.
$ kubectl get pods -n metallb-system
NAME READY STATUS RESTARTS AGE
controller-5c9894b5cd-9gh8m 1/1 Running 0 5h59m
speaker-j2z8q 1/1 Running 0 5h59m
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.xxx.xxx.xxx <none> 443/TCP 6d19h
zk-cs LoadBalancer 10.xxx.xxx.xxx 172.1.1.x 2181:30035/TCP 56m
zk-hs LoadBalancer 10.xxx.xxx.xxx 172.1.1.x 2888:30664/TCP,3888:31113/TCP 6m15s
When I curl the above-mentioned external IPs, I get a valid response:
$ curl -D- "http://172.1.1.x:2181"
curl: (52) Empty reply from server
So far it all looks good: I can access the LB from outside the cluster with no issues, but this is where my lack of Kubernetes/networking knowledge gets me. I'm finding it impossible to expose this LB to the internet. I've tried running minikube tunnel, which I had high hopes for, only to be deeply disappointed.
Running a curl command from another node while minikube tunnel is running just sees the request time out.
$ curl -D- "http://172.1.1.x:2181"
curl: (28) Failed to connect to 172.1.1.x port 2181: Timed out
At this point, as I mentioned before, I'm stuck.
Is there any way that I can get this service exposed to the internet without giving my soul to AWS or GCP?
Any help will be greatly appreciated.
Service Configuration
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
    - port: 2888
      targetPort: 2888
      name: server
      protocol: TCP
    - port: 3888
      targetPort: 3888
      name: leader-election
      protocol: TCP
  clusterIP: ""
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
    - name: client
      protocol: TCP
      port: 2181
      targetPort: 2181
  type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: zookeeper
          imagePullPolicy: Always
          image: "library/zookeeper:3.6"
          resources:
            requests:
              memory: "1Gi"
              cpu: "0.5"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/zookeeper
            - name: zoo-config
              mountPath: /conf
      volumes:
        - name: zoo-config
          configMap:
            name: zoo-config
      securityContext:
        fsGroup: 2000
        runAsUser: 1000
        runAsNonRoot: true
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zoo-config
  namespace: default
data:
  zoo.cfg: |
    tickTime=10000
    dataDir=/var/lib/zookeeper
    clientPort=2181
    initLimit=10
    syncLimit=4
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.1.1.1-172.1.1.10
minikube: v1.13.1
docker: 18.06.3-ce
You can do it with minikube, but the idea of minikube is just to test stuff in your local environment, so by default it does not have the correct iptables rules. Yes, you can adjust that, but if your goal is simply to run without any cloud provider, I'd highly recommend using kubeadm (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
This tool will give you a very customizable cluster configuration and you will be able to sort out your networking problems without headaches.
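For reference, a minimal kubeadm bootstrap looks roughly like this (the pod CIDR is a placeholder and depends on the CNI plugin you install afterwards; the join parameters are the ones printed by kubeadm init):
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
# on each worker node, run the join command printed by kubeadm init, e.g.:
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>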

Connection refused error when deploying couchbase in kubernetes {failed to connect to 127.0.0.1 port 8091: Connection refused}

I used the following YAML files to deploy Couchbase in Kubernetes.
Master:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-master-rc
spec:
  replicas: 1
  selector:
    app: master-pod
  template:
    metadata:
      labels:
        app: master-pod
    spec:
      containers:
        - name: couchbase-master
          image: arungupta/couchbase:k8s
          env:
            - name: TYPE
              value: MASTER
          ports:
            - containerPort: 8091
---
apiVersion: v1
kind: Service
metadata:
  name: couchbase-master-service
  labels:
    app: couchbase-master-service
spec:
  ports:
    - port: 8091
  selector:
    app: master-pod
  type: LoadBalancer
Worker:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-worker-rc
spec:
  replicas: 1
  selector:
    app: couchbase-worker-pod
  template:
    metadata:
      labels:
        app: couchbase-worker-pod
    spec:
      containers:
        - name: couchbase-worker
          image: arungupta/couchbase:k8s
          env:
            - name: TYPE
              value: "WORKER"
            - name: COUCHBASE_MASTER
              value: "couchbase-master-service"
            - name: AUTO_REBALANCE
              value: "false"
          ports:
            - containerPort: 8091
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: couchbase
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: xxx.com
      http:
        paths:
          - path: /
            backend:
              serviceName: couchbase-master-service
              servicePort: 8091
The pods started running and nothing seemed to have an issue at first glance. But when I tried to hit the host URL it gave me a bad gateway, and when I looked into the logs of the master's pod it showed connection refused at 127.0.0.1:8091. I tried to exec into the pod and run the curl statements from entrypoint.sh manually, but that also gave me the error "failed to connect to 127.0.0.1 port 8091: Connection refused".
I found that the master image is using this entrypoint script.
I ran this container image and it looks like the curl is failing because the 15s sleep is not enough time for couchbase-server to start and open port 8091.
The easiest thing you could do is set this sleep to a higher value, but sleep is usually not the best option. (Actually, this whole image is full of bad practices.)
A better approach would be to replace the sleep with the following lines, which wait until port 8091 is open:
while ! nc -z localhost 8091; do
  sleep 1
done
