I am working on a project with Jenkins where the Jenkins pod is already running, and I want to run kubectl commands directly from that pod so it can connect to the cluster on my host machine. To do that I followed this SO question about remote access to a k8s cluster. I am on Windows, and kubectl v1.23.3 is installed in the Jenkins pod that I deployed from my host machine's Kubernetes.
I managed to verify that kubectl works properly in the Jenkins pod (container):
kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp=80:80
When I run kubectl get all from the Jenkins container, I get this output:
root@jenkins-64756886f7-2v92n:~/test# kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp=80:80
deployment.apps/nginx created
service/nginx created
root@jenkins-64756886f7-2v92n:~/test# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/jenkins-64756886f7-2v92n 1/1 Running 0 37m
pod/nginx-6799fc88d8-kxprv 1/1 Running 0 8s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jenkins-service NodePort 10.110.105.78 <none> 8080:30090/TCP 39m
service/nginx NodePort 10.107.115.5 <none> 80:32355/TCP 8s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/jenkins 1/1 1 1 39m
deployment.apps/nginx 1/1 1 1 8s
NAME DESIRED CURRENT READY AGE
replicaset.apps/jenkins-64756886f7 1 1 1 37m
replicaset.apps/nginx-6799fc88d8 1 1 1 8s
root@jenkins-64756886f7-2v92n:~/test#
Initially I attached the Jenkins deployment to a namespace called devops-cicd.
I tested the deployment in my browser and it worked fine.
And this is the output from my host machine:
PS C:\Users\affes> kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d9h
When I specify the namespace, I get the same result as from the Jenkins container:
PS C:\Users\affes> kubectl get all -n devops-cicd
NAME READY STATUS RESTARTS AGE
pod/jenkins-64756886f7-2v92n 1/1 Running 0 38m
pod/nginx-6799fc88d8-kxprv 1/1 Running 0 93s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jenkins-service NodePort 10.110.105.78 <none> 8080:30090/TCP 41m
service/nginx NodePort 10.107.115.5 <none> 80:32355/TCP 93s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/jenkins 1/1 1 1 41m
deployment.apps/nginx 1/1 1 1 93s
NAME DESIRED CURRENT READY AGE
replicaset.apps/jenkins-64756886f7 1 1 1 38m
replicaset.apps/nginx-6799fc88d8 1 1 1 93s
I don't know why the resources are created in that namespace when I never even specify it. Is there a way to configure things so that I deploy to the default namespace instead?
This is my deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-cicd
spec:
  selector:
    matchLabels:
      app: jenkins
      workload: cicd
  replicas: 1
  template:
    metadata:
      namespace: devops-cicd
      labels:
        app: jenkins
        workload: cicd
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        volumeMounts:
        - name: dockersock
          mountPath: "/var/run/docker.sock"
        - name: docker
          mountPath: "/usr/bin/docker"
        securityContext:
          privileged: true
          runAsUser: 0 # Root
      restartPolicy: Always
      volumes:
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: docker
        hostPath:
          path: /usr/bin/docker
---
apiVersion: v1
kind: Service
metadata:
  namespace: devops-cicd
  name: jenkins-service
spec:
  selector:
    app: jenkins
    workload: cicd
  ports:
  - name: http
    port: 8080
    nodePort: 30090
  type: NodePort
You may have a different namespace configured as the default for kubectl inside the Jenkins pod. You can check it with the following command.
kubectl config view | grep namespace
To change the default namespace to `default`, you can run the following command.
kubectl config set-context --current --namespace=default
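For context, that default is just the namespace field of the current context in the kubeconfig that kubectl inside the Jenkins pod uses. A minimal sketch of the relevant fragment (the context, cluster and user names here are placeholders, not your real ones):

apiVersion: v1
kind: Config
current-context: jenkins-context        # placeholder name
contexts:
- name: jenkins-context                 # placeholder name
  context:
    cluster: host-cluster               # placeholder name
    user: jenkins-user                  # placeholder name
    namespace: devops-cicd              # this is why resources land in devops-cicd by default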
Please find more details here.
New to K8s. So far I have the following:
docker-ce-19.03.8
docker-ce-cli-19.03.8
containerd.io-1.2.13
kubelet-1.18.5
kubeadm-1.18.5
kubectl-1.18.5
etcd-3.4.10
Use Flannel for Pod Overlay Net
Performed all of the host-level work (SELinux permissive, swapoff, etc.)
All CentOS 7 in an on-prem vSphere environment (6.7U3)
I've built all my configs and currently have:
a 3-node external/stand-alone etcd cluster with peer-to-peer and client-server encrypted transmissions
a 3-node control plane cluster -- kubeadm init is bootstrapped with x509s and targets the 3 etcds (so stacked etcd never happens)
HAProxy and Keepalived are installed on two of the etcd cluster members, load-balancing access to the API server endpoints on the control plane (TCP6443)
6-worker nodes
Storage configured with the in-tree VMware Cloud Provider (I know it's deprecated) -- and yes, this is my DEFAULT SC
Status Checks:
kubectl cluster-info reports:
[me@km-01 pods]$ kubectl cluster-info
Kubernetes master is running at https://k8snlb:6443
KubeDNS is running at https://k8snlb:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubectl get all --all-namespaces reports:
[me@km-01 pods]$ kubectl get all --all-namespaces -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ag1 pod/mssql-operator-68bcc684c4-rbzvn 1/1 Running 0 86m 10.10.4.133 kw-02.bogus.local <none> <none>
kube-system pod/coredns-66bff467f8-k6m94 1/1 Running 4 20h 10.10.0.11 km-01.bogus.local <none> <none>
kube-system pod/coredns-66bff467f8-v848r 1/1 Running 4 20h 10.10.0.10 km-01.bogus.local <none> <none>
kube-system pod/kube-apiserver-km-01.bogus.local 1/1 Running 8 10h x.x.x..25 km-01.bogus.local <none> <none>
kube-system pod/kube-controller-manager-km-01.bogus.local 1/1 Running 2 10h x.x.x..25 km-01.bogus.local <none> <none>
kube-system pod/kube-flannel-ds-amd64-7l76c 1/1 Running 0 10h x.x.x..30 kw-01.bogus.local <none> <none>
kube-system pod/kube-flannel-ds-amd64-8kft7 1/1 Running 0 10h x.x.x..33 kw-04.bogus.local <none> <none>
kube-system pod/kube-flannel-ds-amd64-r5kqv 1/1 Running 0 10h x.x.x..34 kw-05.bogus.local <none> <none>
kube-system pod/kube-flannel-ds-amd64-t6xcd 1/1 Running 0 10h x.x.x..35 kw-06.bogus.local <none> <none>
kube-system pod/kube-flannel-ds-amd64-vhnx8 1/1 Running 0 10h x.x.x..32 kw-03.bogus.local <none> <none>
kube-system pod/kube-flannel-ds-amd64-xdk2n 1/1 Running 0 10h x.x.x..31 kw-02.bogus.local <none> <none>
kube-system pod/kube-flannel-ds-amd64-z4kfk 1/1 Running 4 20h x.x.x..25 km-01.bogus.local <none> <none>
kube-system pod/kube-proxy-49hsl 1/1 Running 0 10h x.x.x..35 kw-06.bogus.local <none> <none>
kube-system pod/kube-proxy-62klh 1/1 Running 0 10h x.x.x..34 kw-05.bogus.local <none> <none>
kube-system pod/kube-proxy-64d5t 1/1 Running 0 10h x.x.x..30 kw-01.bogus.local <none> <none>
kube-system pod/kube-proxy-6ch42 1/1 Running 4 20h x.x.x..25 km-01.bogus.local <none> <none>
kube-system pod/kube-proxy-9css4 1/1 Running 0 10h x.x.x..32 kw-03.bogus.local <none> <none>
kube-system pod/kube-proxy-hgrx8 1/1 Running 0 10h x.x.x..33 kw-04.bogus.local <none> <none>
kube-system pod/kube-proxy-ljlsh 1/1 Running 0 10h x.x.x..31 kw-02.bogus.local <none> <none>
kube-system pod/kube-scheduler-km-01.bogus.local 1/1 Running 5 20h x.x.x..25 km-01.bogus.local <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ag1 service/ag1-primary NodePort 10.104.183.81 x.x.x..30,x.x.x..31,x.x.x..32,x.x.x..33,x.x.x..34,x.x.x..35 1433:30405/TCP 85m role.ag.mssql.microsoft.com/ag1=primary,type=sqlservr
ag1 service/ag1-secondary NodePort 10.102.52.31 x.x.x..30,x.x.x..31,x.x.x..32,x.x.x..33,x.x.x..34,x.x.x..35 1433:30713/TCP 85m role.ag.mssql.microsoft.com/ag1=secondary,type=sqlservr
ag1 service/mssql1 NodePort 10.96.166.108 x.x.x..30,x.x.x..31,x.x.x..32,x.x.x..33,x.x.x..34,x.x.x..35 1433:32439/TCP 86m name=mssql1,type=sqlservr
ag1 service/mssql2 NodePort 10.109.146.58 x.x.x..30,x.x.x..31,x.x.x..32,x.x.x..33,x.x.x..34,x.x.x..35 1433:30636/TCP 86m name=mssql2,type=sqlservr
ag1 service/mssql3 NodePort 10.101.234.186 x.x.x..30,x.x.x..31,x.x.x..32,x.x.x..33,x.x.x..34,x.x.x..35 1433:30862/TCP 86m name=mssql3,type=sqlservr
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h <none>
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 20h k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
kube-system daemonset.apps/kube-flannel-ds-amd64 7 7 7 7 7 <none> 20h kube-flannel quay.io/coreos/flannel:v0.12.0-amd64 app=flannel
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 <none> 20h kube-flannel quay.io/coreos/flannel:v0.12.0-arm app=flannel
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 <none> 20h kube-flannel quay.io/coreos/flannel:v0.12.0-arm64 app=flannel
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 <none> 20h kube-flannel quay.io/coreos/flannel:v0.12.0-ppc64le app=flannel
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 <none> 20h kube-flannel quay.io/coreos/flannel:v0.12.0-s390x app=flannel
kube-system daemonset.apps/kube-proxy 7 7 7 7 7 kubernetes.io/os=linux 20h kube-proxy k8s.gcr.io/kube-proxy:v1.18.7 k8s-app=kube-proxy
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
ag1 deployment.apps/mssql-operator 1/1 1 1 86m mssql-operator mcr.microsoft.com/mssql/ha:2019-CTP2.1-ubuntu app=mssql-operator
kube-system deployment.apps/coredns 2/2 2 2 20h coredns k8s.gcr.io/coredns:1.6.7 k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
ag1 replicaset.apps/mssql-operator-68bcc684c4 1 1 1 86m mssql-operator mcr.microsoft.com/mssql/ha:2019-CTP2.1-ubuntu app=mssql-operator,pod-template-hash=68bcc684c4
kube-system replicaset.apps/coredns-66bff467f8 2 2 2 20h coredns k8s.gcr.io/coredns:1.6.7 k8s-app=kube-dns,pod-template-hash=66bff467f8
To the problem: there are a number of articles talking about a SQL 2019 HA build. It appears that every single one, however, is in the cloud, whereas mine is on-prem in a vSphere env. They appear to be very simple: run 3 scripts in this order: operator.yaml, sql.yaml, and ag-service.yaml.
My YAMLs are based on: https://github.com/microsoft/sql-server-samples/tree/master/samples/features/high%20availability/Kubernetes/sample-manifest-files
For the blogs that actually screenshot the environment afterward, there should be at least 7 pods (1 Operator, 3 SQL Init, 3 SQL). If you look at my aforementioned all --all-namespaces output, I have everything (and in a running state) but no pods other than the running Operator...???
I actually broke the control plane back down to a single node just to try to isolate the logs. /var/log/container/* and /var/log/pod/* contain nothing of value to indicate a problem with storage or any other reason why the Pods are non-existent. It's probably also worth noting that I started with the latest SQL 2019 label, 2019-latest, but when I got the same behavior there, I decided to try the old bits since so many blogs are based on CTP 2.1.
I can create PVs and PVCs using the VCP storage provider. I have my Secrets and can see them in the Secrets store.
I'm at a loss to explain why the pods are missing or where to look after checking journalctl, the daemons themselves, and /var/log; I don't see any indication that there's even an attempt to create them. The kubectl apply -f mssql-server2019.yaml that I adapted runs to completion without error, indicating 3 sql objects and 3 sql services get created. But here is the file anyway, targeting CTP 2.1:
cat << EOF > mssql-server2019.yaml
apiVersion: mssql.microsoft.com/v1
kind: SqlServer
metadata:
  labels: {name: mssql1, type: sqlservr}
  name: mssql1
  namespace: ag1
spec:
  acceptEula: true
  agentsContainerImage: mcr.microsoft.com/mssql/ha:2019-CTP2.1
  availabilityGroups: [ag1]
  instanceRootVolumeClaimTemplate:
    accessModes: [ReadWriteOnce]
    resources:
      requests: {storage: 5Gi}
    storageClass: default
  saPassword:
    secretKeyRef: {key: sapassword, name: sql-secrets}
  sqlServerContainer: {image: 'mcr.microsoft.com/mssql/server:2019-CTP2.1'}
---
apiVersion: v1
kind: Service
metadata: {name: mssql1, namespace: ag1}
spec:
  ports:
  - {name: tds, port: 1433}
  selector: {name: mssql1, type: sqlservr}
  type: NodePort
  externalIPs:
  - x.x.x.30
  - x.x.x.31
  - x.x.x.32
  - x.x.x.33
  - x.x.x.34
  - x.x.x.35
---
apiVersion: mssql.microsoft.com/v1
kind: SqlServer
metadata:
  labels: {name: mssql2, type: sqlservr}
  name: mssql2
  namespace: ag1
spec:
  acceptEula: true
  agentsContainerImage: mcr.microsoft.com/mssql/ha:2019-CTP2.1
  availabilityGroups: [ag1]
  instanceRootVolumeClaimTemplate:
    accessModes: [ReadWriteOnce]
    resources:
      requests: {storage: 5Gi}
    storageClass: default
  saPassword:
    secretKeyRef: {key: sapassword, name: sql-secrets}
  sqlServerContainer: {image: 'mcr.microsoft.com/mssql/server:2019-CTP2.1'}
---
apiVersion: v1
kind: Service
metadata: {name: mssql2, namespace: ag1}
spec:
  ports:
  - {name: tds, port: 1433}
  selector: {name: mssql2, type: sqlservr}
  type: NodePort
  externalIPs:
  - x.x.x.30
  - x.x.x.31
  - x.x.x.32
  - x.x.x.33
  - x.x.x.34
  - x.x.x.35
---
apiVersion: mssql.microsoft.com/v1
kind: SqlServer
metadata:
  labels: {name: mssql3, type: sqlservr}
  name: mssql3
  namespace: ag1
spec:
  acceptEula: true
  agentsContainerImage: mcr.microsoft.com/mssql/ha:2019-CTP2.1
  availabilityGroups: [ag1]
  instanceRootVolumeClaimTemplate:
    accessModes: [ReadWriteOnce]
    resources:
      requests: {storage: 5Gi}
    storageClass: default
  saPassword:
    secretKeyRef: {key: sapassword, name: sql-secrets}
  sqlServerContainer: {image: 'mcr.microsoft.com/mssql/server:2019-CTP2.1'}
---
apiVersion: v1
kind: Service
metadata: {name: mssql3, namespace: ag1}
spec:
  ports:
  - {name: tds, port: 1433}
  selector: {name: mssql3, type: sqlservr}
  type: NodePort
  externalIPs:
  - x.x.x.30
  - x.x.x.31
  - x.x.x.32
  - x.x.x.33
  - x.x.x.34
  - x.x.x.35
---
EOF
Edit 1: kubectl logs -n ag1 mssql-operator-*
[sqlservers] 2020/08/14 14:36:48 Creating custom resource definition
[sqlservers] 2020/08/14 14:36:48 Created custom resource definition
[sqlservers] 2020/08/14 14:36:48 Waiting for custom resource definition to be available
[sqlservers] 2020/08/14 14:36:49 Watching for resources...
[sqlservers] 2020/08/14 14:37:08 Creating ConfigMap sql-operator
[sqlservers] 2020/08/14 14:37:08 Updating mssql1 in namespace ag1 ...
[sqlservers] 2020/08/14 14:37:08 Creating ConfigMap ag1
[sqlservers] ERROR: 2020/08/14 14:37:08 could not process update request: error creating ConfigMap ag1: v1.ConfigMap: ObjectMeta: v1.ObjectMeta: readObjectFieldAsBytes: expect : after object field, parsing 627 ...:{},"k:{\"... at {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"ag1","namespace":"ag1","selfLink":"/api/v1/namespaces/ag1/configmaps/ag1","uid":"33af6232-4464-4290-bb14-b21e8f72e361","resourceVersion":"314186","creationTimestamp":"2020-08-14T14:37:08Z","ownerReferences":[{"apiVersion":"mssql.microsoft.com/v1","kind":"ReplicationController","name":"mssql1","uid":"e71a7246-2776-4d96-9735-844ee136a37d","controller":false}],"managedFields":[{"manager":"mssql-server-k8s-operator","operation":"Update","apiVersion":"v1","time":"2020-08-14T14:37:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"e71a7246-2776-4d96-9735-844ee136a37d\"}":{".":{},"f:apiVersion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}}}}]}}
[sqlservers] 2020/08/14 14:37:08 Updating ConfigMap sql-operator
[sqlservers] 2020/08/14 14:37:08 Updating mssql2 in namespace ag1 ...
[sqlservers] ERROR: 2020/08/14 14:37:08 could not process update request: error getting ConfigMap ag1: v1.ConfigMap: ObjectMeta: v1.ObjectMeta: readObjectFieldAsBytes: expect : after object field, parsing 627 ...:{},"k:{\"... at {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"ag1","namespace":"ag1","selfLink":"/api/v1/namespaces/ag1/configmaps/ag1","uid":"33af6232-4464-4290-bb14-b21e8f72e361","resourceVersion":"314186","creationTimestamp":"2020-08-14T14:37:08Z","ownerReferences":[{"apiVersion":"mssql.microsoft.com/v1","kind":"ReplicationController","name":"mssql1","uid":"e71a7246-2776-4d96-9735-844ee136a37d","controller":false}],"managedFields":[{"manager":"mssql-server-k8s-operator","operation":"Update","apiVersion":"v1","time":"2020-08-14T14:37:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"e71a7246-2776-4d96-9735-844ee136a37d\"}":{".":{},"f:apiVersion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}}}}]}}
[sqlservers] 2020/08/14 14:37:08 Updating ConfigMap sql-operator
[sqlservers] 2020/08/14 14:37:08 Updating mssql3 in namespace ag1 ...
[sqlservers] ERROR: 2020/08/14 14:37:08 could not process update request: error getting ConfigMap ag1: v1.ConfigMap: ObjectMeta: v1.ObjectMeta: readObjectFieldAsBytes: expect : after object field, parsing 627 ...:{},"k:{\"... at {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"ag1","namespace":"ag1","selfLink":"/api/v1/namespaces/ag1/configmaps/ag1","uid":"33af6232-4464-4290-bb14-b21e8f72e361","resourceVersion":"314186","creationTimestamp":"2020-08-14T14:37:08Z","ownerReferences":[{"apiVersion":"mssql.microsoft.com/v1","kind":"ReplicationController","name":"mssql1","uid":"e71a7246-2776-4d96-9735-844ee136a37d","controller":false}],"managedFields":[{"manager":"mssql-server-k8s-operator","operation":"Update","apiVersion":"v1","time":"2020-08-14T14:37:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"e71a7246-2776-4d96-9735-844ee136a37d\"}":{".":{},"f:apiVersion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}}}}]}}
I've looked over my operator and mssql-server2019.yaml files (specifically around kind: SqlServer, since that seems to be where it's failing) and can't identify any glaring inconsistencies or differences.
So your operator is running:
ag1 pod/mssql-operator-68bcc684c4-rbzvn 1/1 Running 0 86m 10.10.4.133 kw-02.bogus.local <none> <none>
I would start by looking at the logs there:
kubectl -n ag1 logs pod/mssql-operator-68bcc684c4-rbzvn
Most likely it needs to interact with the cloud provider (i.e. Azure) and VMware is not supported, but check what the logs say 👀.
Update:
Based on the logs you posted it looks like you are using K8s 1.18 and the operator is incompatible. It's trying to create a ConfigMap with a spec that the kube-apiserver is rejecting.
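If you want to dig a little further, a couple of hedged checks that only use names visible in your own output (the ConfigMap ag1 from the error message and the mssql-operator service account created by operator.yaml):

# Dump the ConfigMap the operator keeps failing to read back
kubectl get configmap ag1 -n ag1 -o yaml
# Rule out an RBAC problem for the operator's service account
kubectl auth can-i update configmaps -n ag1 --as=system:serviceaccount:ag1:mssql-operator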
✌️
YAMLs mine are based off of: https://github.com/microsoft/sql-server-samples/tree/master/samples/features/high%20availability/Kubernetes/sample-manifest-files
Run 3 scripts in this order: operator.yaml, sql.yaml, and ag-service.yaml.
I have just run it on my GKE cluster and got a similar result when running only these 3 files,
i.e. without preparing the PV and PVC first ( .././sample-deployment-script/templates/pv*.yaml ):
$ git clone https://github.com/microsoft/sql-server-samples.git
$ cd sql-server-samples/samples/features/high\ availability/Kubernetes/sample-manifest-files/
$ kubectl create -f operator.yaml
namespace/ag1 created
serviceaccount/mssql-operator created
clusterrole.rbac.authorization.k8s.io/mssql-operator-ag1 created
clusterrolebinding.rbac.authorization.k8s.io/mssql-operator-ag1 created
deployment.apps/mssql-operator created
$ kubectl create -f sqlserver.yaml
sqlserver.mssql.microsoft.com/mssql1 created
service/mssql1 created
sqlserver.mssql.microsoft.com/mssql2 created
service/mssql2 created
sqlserver.mssql.microsoft.com/mssql3 created
service/mssql3 created
$ kubectl create -f ag-services.yaml
service/ag1-primary created
service/ag1-secondary created
You'll have:
kubectl get pods -n ag1
NAME READY STATUS RESTARTS AGE
mssql-initialize-mssql1-js4zc 0/1 CreateContainerConfigError 0 6m12s
mssql-initialize-mssql2-72d8n 0/1 CreateContainerConfigError 0 6m8s
mssql-initialize-mssql3-h4mr9 0/1 CreateContainerConfigError 0 6m6s
mssql-operator-658558b57d-6xd95 1/1 Running 0 6m33s
mssql1-0 1/2 CrashLoopBackOff 5 6m12s
mssql2-0 1/2 CrashLoopBackOff 5 6m9s
mssql3-0 0/2 Pending 0 6m6s
I see that the failed mssql<N> pods are part of statefulset.apps/mssql<N>, and the mssql-initialize-mssql<N> pods are part of job.batch/mssql-initialize-mssql<N>.
After adding the PV and PVC it looks as follows:
$ kubectl get all -n ag1
NAME READY STATUS RESTARTS AGE
mssql-operator-658558b57d-pgx74 1/1 Running 0 20m
And there are 3 sqlservers.mssql.microsoft.com objects:
$ kubectl get sqlservers.mssql.microsoft.com -n ag1
NAME AGE
mssql1 64m
mssql2 64m
mssql3 64m
So it looks exactly as specified in the above-mentioned files.
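For reference, a hedged sketch of the kind of PersistentVolume those claims can bind to (the real templates live under sample-deployment-script/templates/pv*.yaml and should be preferred; the name and the hostPath backend here are purely illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mssql-pv-0                # illustrative name
spec:
  capacity:
    storage: 5Gi                  # matches the 5Gi request in instanceRootVolumeClaimTemplate
  accessModes: [ReadWriteOnce]
  storageClassName: default       # matches storageClass: default in the SqlServer spec
  hostPath:                       # placeholder backend; substitute your real storage
    path: /mnt/data/mssql-0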
However, if you run:
sql-server-samples/samples/features/high availability/Kubernetes/sample-deployment-script/$ ./deploy-ag.py deploy --dry-run
configs will be generated automatically.
Running it without --dry-run, with those generated configs (and with the PV+PVC properly set up), gives us 7 pods.
It'll be useful to compare the auto-generated configs with the ones you have (and to compare running only the subset of 3 files vs. what deploy-ag.py does).
P.S.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15+" GitVersion:"v1.15.11-dispatcher"
Server Version: version.Info{Major:"1", Minor:"15+" GitVersion:"v1.15.12-gke.2"
Unable to access the internet on the pod in the public GKE cluster
I'm using GKE (1.16.13-gke.1) as a test environment. I deployed a Spring Boot application, and it is running successfully on the GKE cluster. The problem is that it can't communicate with the internet.
Here is my deployment manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
  namespace: lms-ff
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: gcr.io/sams-api:0.0.1.4.ms1
        ports:
        - containerPort: 8095
        envFrom:
        - configMapRef:
            name: auth-properties
---
apiVersion: v1
kind: Service
metadata:
  name: gcp-auth-service
  namespace: lms-ff
spec:
  selector:
    app: auth
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 8095
      targetPort: 8095
Here is the error that I got.
api-556c56df4b-pdtk9:/home/misyn/app# ping 4.2.2.2
PING 4.2.2.2 (4.2.2.2): 56 data bytes
64 bytes from 4.2.2.2: seq=0 ttl=59 time=10.762 ms
64 bytes from 4.2.2.2: seq=1 ttl=59 time=10.831 ms
64 bytes from 4.2.2.2: seq=2 ttl=59 time=10.932 ms
64 bytes from 4.2.2.2: seq=3 ttl=59 time=10.798 ms
^C
--- 4.2.2.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 10.762/10.830/10.932 ms
api-556c56df4b-pdtk9:/home/misyn/app# telnet 220.247.246.105 9010
Connection closed by foreign host
udayanga@udayanga-PC:~/Desktop/kubernetes$ kubectl get all -n lms-ff
NAME READY STATUS RESTARTS AGE
pod/api-556c56df4b-pdtk9 1/1 Running 0 6h27m
pod/auth-77c755b854-7bqts 1/1 Running 0 4h57m
pod/mariadb-555bcb6d95-5x6wx 1/1 Running 0 15h
pod/middle-767558df89-kc7kz 1/1 Running 0 12h
pod/portal-cf84d7845-vvxl7 1/1 Running 0 105m
pod/redis-b467466b5-ndlgb 1/1 Running 0 15h
pod/web-5b967cd44c-lbmnk 1/1 Running 0 103m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gcp-api-service ClusterIP 10.0.13.15 <none> 8091/TCP 6h27m
service/gcp-auth-service ClusterIP 10.0.6.154 <none> 8095/TCP 4h57m
service/gcp-mariadb-service ClusterIP 10.0.14.196 <none> 3306/TCP 15h
service/gcp-middle-service ClusterIP 10.0.3.26 <none> 8093/TCP 6h49m
service/gcp-portal-service ClusterIP 10.0.1.229 <none> 8090/TCP 105m
service/gcp-redis-service ClusterIP 10.0.2.188 <none> 6379/TCP 15h
service/gcp-web-service LoadBalancer 10.0.3.141 static-ip 80:30376/TCP 14h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/api 1/1 1 1 6h27m
deployment.apps/auth 1/1 1 1 4h57m
deployment.apps/mariadb 1/1 1 1 15h
deployment.apps/middle 1/1 1 1 12h
deployment.apps/portal 1/1 1 1 105m
deployment.apps/redis 1/1 1 1 15h
deployment.apps/web 1/1 1 1 103m
NAME DESIRED CURRENT READY AGE
replicaset.apps/api-556c56df4b 1 1 1 6h28m
replicaset.apps/auth-77c755b854 1 1 1 4h57m
replicaset.apps/mariadb-555bcb6d95 1 1 1 15h
replicaset.apps/middle-767558df89 1 1 1 12h
replicaset.apps/portal-cf84d7845 1 1 1 105m
replicaset.apps/redis-b467466b5 1 1 1 15h
replicaset.apps/web-5b967cd44c 1 1 1 103m
udayanga@udayanga-PC:~/Desktop/kubernetes$
Your service Type is
apiVersion: v1
kind: Service
metadata:
  name: gcp-auth-service
  namespace: lms-ff
spec:
  selector:
    app: auth
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 8095
      targetPort: 8095
That is ClusterIP; it should be LoadBalancer or NodePort if you want to expose the Service to the internet.
ClusterIP: the Service is only accessible internally, inside the cluster.
LoadBalancer: exposes the Service to the internet using an IP address.
NodePort: exposes the Service to the internet over a port on the node IP.
Read more at: https://kubernetes.io/docs/concepts/services-networking/service/
You can change the service type to LoadBalancer and run:
kubectl get svc
You will see your service with an IP address; hit that IP address from the browser and you will be able to access the service.
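For example, the same Service with only the type changed (everything else copied from your manifest):

apiVersion: v1
kind: Service
metadata:
  name: gcp-auth-service
  namespace: lms-ff
spec:
  selector:
    app: auth
  type: LoadBalancer   # was ClusterIP
  ports:
    - protocol: TCP
      port: 8095
      targetPort: 8095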
https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#creating_a_service_of_type_loadbalancer
Your service file defines a ClusterIP type, which provides an IP address that's only accessible within your Kubernetes cluster. It's an internal IP that Kubernetes makes available by default.
You should define a service file with a NodePort type, which exposes the service on a port of each node's external IP address. Then combine the node's external IP address with the NodePort number defined in the service file.
The resultant address should be in this format -> EXTERNAL_IP:NodePort
Also, don't forget to create a firewall rule that allows ingress traffic into your nodes.
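For example (a hedged sketch; the rule name is a placeholder and the port must be the nodePort that kubectl get svc reports for your service):

gcloud compute firewall-rules create allow-auth-nodeport --allow tcp:30000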
Please check this documentation for detailed steps on how to go about it.
I am practicing k8s by following the ingress chapter. I am using a Google (GKE) cluster. The specifications are as follows:
master: 1.11.7-gke.4
node: 1.11.7-gke.4
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-singh-default-pool-a69fa545-1sm3 Ready <none> 6h v1.11.7-gke.4 10.148.0.46 35.197.128.107 Container-Optimized OS from Google 4.14.89+ docker://17.3.2
gke-singh-default-pool-a69fa545-819z Ready <none> 6h v1.11.7-gke.4 10.148.0.47 35.198.217.71 Container-Optimized OS from Google 4.14.89+ docker://17.3.2
gke-singh-default-pool-a69fa545-djhz Ready <none> 6h v1.11.7-gke.4 10.148.0.45 35.197.159.75 Container-Optimized OS from Google 4.14.89+ docker://17.3.2
master endpoint: 35.186.148.93
DNS: singh.hbot.io (master IP)
To keep my question short, I post my source code in the snippets and link back to here.
Files:
deployment.yaml
ingress.yaml
ingress-rules.yaml
Problem:
curl http://singh.hbot.io/webapp1 timed out
Description
$ kubectl get deployment -n nginx-ingress
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-ingress 1 1 1 0 2h
nginx-ingress deployment is not available.
$ kubectl describe deployment -n nginx-ingress
Name: nginx-ingress
Namespace: nginx-ingress
CreationTimestamp: Mon, 04 Mar 2019 15:09:42 +0700
Labels: app=nginx-ingress
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-ingress","namespace":"nginx-ingress"},"s...
Selector: app=nginx-ingress
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=nginx-ingress
Service Account: nginx-ingress
Containers:
nginx-ingress:
Image: nginx/nginx-ingress:edge
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
-nginx-configmaps=$(POD_NAMESPACE)/nginx-config
-default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
Environment:
POD_NAMESPACE: (v1:metadata.namespace)
POD_NAME: (v1:metadata.name)
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-ingress-77fcd48f4d (1/1 replicas created)
Events: <none>
pods:
$ kubectl get pods --all-namespaces=true
NAMESPACE NAME READY STATUS RESTARTS AGE
default webapp1-7d67d68676-k9hhl 1/1 Running 0 6h
default webapp2-64d4844b78-9kln5 1/1 Running 0 6h
default webapp3-5b8ff7484d-zvcsf 1/1 Running 0 6h
kube-system event-exporter-v0.2.3-85644fcdf-xxflh 2/2 Running 0 6h
kube-system fluentd-gcp-scaler-8b674f786-gvv98 1/1 Running 0 6h
kube-system fluentd-gcp-v3.2.0-srzc2 2/2 Running 0 6h
kube-system fluentd-gcp-v3.2.0-w2z2q 2/2 Running 0 6h
kube-system fluentd-gcp-v3.2.0-z7p9l 2/2 Running 0 6h
kube-system heapster-v1.6.0-beta.1-5685746c7b-kd4mn 3/3 Running 0 6h
kube-system kube-dns-6b98c9c9bf-6p8qr 4/4 Running 0 6h
kube-system kube-dns-6b98c9c9bf-pffpt 4/4 Running 0 6h
kube-system kube-dns-autoscaler-67c97c87fb-gbgrs 1/1 Running 0 6h
kube-system kube-proxy-gke-singh-default-pool-a69fa545-1sm3 1/1 Running 0 6h
kube-system kube-proxy-gke-singh-default-pool-a69fa545-819z 1/1 Running 0 6h
kube-system kube-proxy-gke-singh-default-pool-a69fa545-djhz 1/1 Running 0 6h
kube-system l7-default-backend-7ff48cffd7-trqvx 1/1 Running 0 6h
kube-system metrics-server-v0.2.1-fd596d746-bvdfk 2/2 Running 0 6h
kube-system tiller-deploy-57c574bfb8-xnmtj 1/1 Running 0 1h
nginx-ingress nginx-ingress-77fcd48f4d-rfwbk 0/1 CrashLoopBackOff 35 2h
describe pod
$ kubectl describe pods -n nginx-ingress
Name: nginx-ingress-77fcd48f4d-5rhtv
Namespace: nginx-ingress
Priority: 0
PriorityClassName: <none>
Node: gke-singh-default-pool-a69fa545-djhz/10.148.0.45
Start Time: Mon, 04 Mar 2019 17:55:00 +0700
Labels: app=nginx-ingress
pod-template-hash=3397804908
Annotations: <none>
Status: Running
IP: 10.48.2.10
Controlled By: ReplicaSet/nginx-ingress-77fcd48f4d
Containers:
nginx-ingress:
Container ID: docker://5d3ee9e2bf7a2060ff0a96fdd884a937b77978c137df232dbfd0d3e5de89fe0e
Image: nginx/nginx-ingress:edge
Image ID: docker-pullable://nginx/nginx-ingress#sha256:16c1c6dde0b904f031d3c173e0b04eb82fe9c4c85cb1e1f83a14d5b56a568250
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
-nginx-configmaps=$(POD_NAMESPACE)/nginx-config
-default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Mon, 04 Mar 2019 18:16:33 +0700
Finished: Mon, 04 Mar 2019 18:16:33 +0700
Ready: False
Restart Count: 9
Environment:
POD_NAMESPACE: nginx-ingress (v1:metadata.namespace)
POD_NAME: nginx-ingress-77fcd48f4d-5rhtv (v1:metadata.name)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-token-zvcwt (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nginx-ingress-token-zvcwt:
Type: Secret (a volume populated by a Secret)
SecretName: nginx-ingress-token-zvcwt
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned nginx-ingress/nginx-ingress-77fcd48f4d-5rhtv to gke-singh-default-pool-a69fa545-djhz
Normal Created 25m (x4 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz Created container
Normal Started 25m (x4 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz Started container
Normal Pulling 24m (x5 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz pulling image "nginx/nginx-ingress:edge"
Normal Pulled 24m (x5 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz Successfully pulled image "nginx/nginx-ingress:edge"
Warning BackOff 62s (x112 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz Back-off restarting failed container
Fix: container terminated
Add the following command to ingress.yaml to prevent the container from finishing its run and getting terminated by k8s.
command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
The Ingress has no IP address from GKE. Let me have a look in detail.
describe ingress:
$ kubectl describe ing
Name: webapp-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (10.48.0.8:8080)
Rules:
Host Path Backends
---- ---- --------
*
/webapp1 webapp1-svc:80 (<none>)
/webapp2 webapp2-svc:80 (<none>)
webapp3-svc:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"webapp-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"webapp1-svc","servicePort":80},"path":"/webapp1"},{"backend":{"serviceName":"webapp2-svc","servicePort":80},"path":"/webapp2"},{"backend":{"serviceName":"webapp3-svc","servicePort":80}}]}}]}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Translate 7m45s (x59 over 4h20m) loadbalancer-controller error while evaluating the ingress spec: service "default/webapp1-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"; service "default/webapp2-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"; service "default/webapp3-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"
From this line I got the ultimate solution, thanks to Christian Roy. Thank you very much.
Fix the ClusterIP
It is the default value, so I have to edit my manifest file to use NodePort as follows:
apiVersion: v1
kind: Service
metadata:
  name: webapp1-svc
  labels:
    app: webapp1
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: webapp1
And that is it.
The answer is in your question. The describe of your ingress shows the problem.
You did kubectl describe ing and the last part of that output was:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Translate 7m45s (x59 over 4h20m) loadbalancer-controller error while evaluating the ingress spec: service "default/webapp1-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"; service "default/webapp2-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"; service "default/webapp3-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"
The important part is:
error while evaluating the ingress spec: service "default/webapp1-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"
Solution
Just change all your services to be of type NodePort and it will work.
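If you prefer not to edit each manifest by hand, patching the three services in place also works (service names taken from your ingress output):

kubectl patch svc webapp1-svc -p '{"spec": {"type": "NodePort"}}'
kubectl patch svc webapp2-svc -p '{"spec": {"type": "NodePort"}}'
kubectl patch svc webapp3-svc -p '{"spec": {"type": "NodePort"}}'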
I have to add a command in order to keep the container from finishing its work and getting terminated.
command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
I am very new to the Rancher/Kubernetes world and I am having a problem.
I am trying to deploy an application that needs to be stateful.
To be honest, I am trying to deploy a service registry (yes, I need to).
What I am trying to do and why:
What:
- Deploy multiple service registries that register with each other (for high availability)
- I'm exposing them with a StatefulSet object, to use a specific name for each registry (for client registration purposes), so I get names like registry-0, registry-1 and use these names to configure the clients.
Why:
- If I use the ClusterIP, I'll load-balance between service registries and not register my client with each server (because the client and server could self-register into only one registry), which is bad for me. The per-pod DNS names this relies on are sketched right after this list.
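For context, the stable names I am relying on are the per-pod DNS records that the headless Service gives each StatefulSet pod, i.e. addresses of the form (the pods run in the default namespace):

eureka-0.eureka.default.svc.cluster.local:7700
eureka-1.eureka.default.svc.cluster.local:7700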
My infrastructure:
Rancher installed into AWS
A Kubernetes cluster configured with 3 nodes as:
node1: all (worker, etcd, controlplane)
node2: worker
node3: worker
My problem is:
When I apply the YAML and Kubernetes deploys my application,
if a service registry replica is on node1 it works perfectly: it can see itself and the other replicas that are on node1, for example:
node1: eureka1; eureka2 (eureka1 sees itself and eureka2, and the same goes for eureka2, which sees itself and eureka1)
but if I create another 4 replicas of Eureka, for example, and the scheduler places them on other nodes, like
2 more Eurekas on node2 (worker only) and then another 2 on node3 (worker only),
they cannot see each other or even themselves, and eureka1 and eureka2 cannot see eureka3, eureka4, eureka5, and eureka6.
TLDR:
The pods on node 1 can see each other but don't see the pods on the other nodes.
The pods on node 2 and node 3 can't see themselves or any of the other pods.
If I run it locally with minikube, everything works fine.
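A quick way to see the symptom from inside one pod (a hedged example; the pod names are from the listing below, and the image needs nslookup/wget available):

kubectl exec eureka-0 -- nslookup eureka-3.eureka
kubectl exec eureka-0 -- wget -qO- http://eureka-3.eureka:7700/eureka/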
To reproduce, just apply both files below and access the main IP of the Kubernetes cluster.
Service registry deployment file is:
Service-registry.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  ports:
  - port: 7700
    name: eureka
  clusterIP: None
  selector:
    app: eureka
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 5
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: leandrozago/eureka
        ports:
        - containerPort: 7700
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # Due to camelcase issues with "defaultZone" and "preferIpAddress", _JAVA_OPTIONS is used here
        - name: _JAVA_OPTIONS
          value: -Deureka.instance.preferIpAddress=false -Deureka.client.serviceUrl.defaultZone=http://eureka-0.eureka:7700/eureka/,http://eureka-1.eureka:7700/eureka/,http://eureka-2.eureka:7700/eureka/,http://eureka-3.eureka:7700/eureka/,http://eureka-4.eureka:7700/eureka/,http://eureka-5.eureka:7700/eureka/,http://eureka-6.eureka:7700/eureka/
        - name: EUREKA_CLIENT_REGISTERWITHEUREKA
          value: "true"
        - name: EUREKA_CLIENT_FETCHREGISTRY
          value: "true"
        # The hostnames must match with the eureka serviceUrls, otherwise, the replicas are reported as unavailable in the eureka dashboard
        - name: EUREKA_INSTANCE_HOSTNAME
          value: ${MY_POD_NAME}.eureka
  # No need to start the pods in order. We just need the stable network identity
  podManagementPolicy: "Parallel"
Ingress yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: eureka
          servicePort: 7700
EDITED:
kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
cattle-system cattle-cluster-agent-557ff9f65d-5qsv6 0/1 CrashLoopBackOff 15 58m 10.42.1.41 rancher-b2b-rancheragent-1-worker
cattle-system cattle-node-agent-mxfpm 1/1 Running 0 4d 172.18.80.152 rancher-b2b-rancheragent-0-all
cattle-system cattle-node-agent-x2wdc 1/1 Running 0 4d 172.18.82.84 rancher-b2b-rancheragent-0-worker
cattle-system cattle-node-agent-z6cnw 1/1 Running 0 4d 172.18.84.152 rancher-b2b-rancheragent-1-worker
default eureka-0 1/1 Running 0 52m 10.42.2.41 rancher-b2b-rancheragent-0-worker
default eureka-1 1/1 Running 0 52m 10.42.1.42 rancher-b2b-rancheragent-1-worker
default eureka-2 1/1 Running 0 52m 10.42.0.28 rancher-b2b-rancheragent-0-all
default eureka-3 1/1 Running 0 52m 10.42.1.43 rancher-b2b-rancheragent-1-worker
default eureka-4 1/1 Running 0 52m 10.42.2.42 rancher-b2b-rancheragent-0-worker
default eureka-5 1/1 Running 0 59s 10.42.0.29 rancher-b2b-rancheragent-0-all
default eureka-6 1/1 Running 0 59s 10.42.2.43 rancher-b2b-rancheragent-0-worker
ingress-nginx default-http-backend-797c5bc547-wkp5z 1/1 Running 0 4d 10.42.0.5 rancher-b2b-rancheragent-0-all
ingress-nginx nginx-ingress-controller-dd5mt 1/1 Running 0 4d 172.18.82.84 rancher-b2b-rancheragent-0-worker
ingress-nginx nginx-ingress-controller-m6jkh 1/1 Running 1 4d 172.18.84.152 rancher-b2b-rancheragent-1-worker
ingress-nginx nginx-ingress-controller-znr8c 1/1 Running 0 4d 172.18.80.152 rancher-b2b-rancheragent-0-all
kube-system canal-bqh22 3/3 Running 0 4d 172.18.80.152 rancher-b2b-rancheragent-0-all
kube-system canal-bv7zp 3/3 Running 0 3d 172.18.84.152 rancher-b2b-rancheragent-1-worker
kube-system canal-m5jnj 3/3 Running 0 4d 172.18.82.84 rancher-b2b-rancheragent-0-worker
kube-system kube-dns-7588d5b5f5-wdkqm 3/3 Running 0 4d 10.42.0.4 rancher-b2b-rancheragent-0-all
kube-system kube-dns-autoscaler-5db9bbb766-snp4h 1/1 Running 0 4d 10.42.0.3 rancher-b2b-rancheragent-0-all
kube-system metrics-server-97bc649d5-q2bxh 1/1 Running 0 4d 10.42.0.2 rancher-b2b-rancheragent-0-all
kube-system rke-ingress-controller-deploy-job-bqvcl 0/1 Completed 0 4d 172.18.80.152 rancher-b2b-rancheragent-0-all
kube-system rke-kubedns-addon-deploy-job-sf4w5 0/1 Completed 0 4d 172.18.80.152 rancher-b2b-rancheragent-0-all
kube-system rke-metrics-addon-deploy-job-55xwp 0/1 Completed 0 4d 172.18.80.152 rancher-b2b-rancheragent-0-all
kube-system rke-network-plugin-deploy-job-fdg9d 0/1 Completed 0 21h 172.18.80.152 rancher-b2b-rancheragent-0-all