I am working on a project with Jenkins where I am already running a Jenkins pod, and I want to run kubectl commands directly from that pod to connect to the cluster on my host machine. To do that I followed this SO question about remote access to a k8s cluster. I am on Windows and have kubectl v1.23.3 installed in the Jenkins pod, which I ran from my host machine's k8s cluster.
I managed to verify that running kubectl works properly on the jenkins pod (container):
kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp=80:80
When I ran kubectl get all from the Jenkins container I got this output:
root@jenkins-64756886f7-2v92n:~/test# kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp=80:80
deployment.apps/nginx created
service/nginx created
root@jenkins-64756886f7-2v92n:~/test# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/jenkins-64756886f7-2v92n   1/1     Running   0          37m
pod/nginx-6799fc88d8-kxprv     1/1     Running   0          8s

NAME                      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/jenkins-service   NodePort   10.110.105.78   <none>        8080:30090/TCP   39m
service/nginx             NodePort   10.107.115.5    <none>        80:32355/TCP     8s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jenkins   1/1     1            1           39m
deployment.apps/nginx     1/1     1            1           8s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/jenkins-64756886f7   1         1         1       37m
replicaset.apps/nginx-6799fc88d8     1         1         1       8s
root@jenkins-64756886f7-2v92n:~/test#
Initially I had the Jenkins deployment attached to a namespace called devops-cicd.
I tested the deployment in my browser and it worked fine.
And this is the output from my host machine:
PS C:\Users\affes> kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d9h
and when I specify the namespace I get the same result as from the Jenkins container:
PS C:\Users\affes> kubectl get all -n devops-cicd
NAME                           READY   STATUS    RESTARTS   AGE
pod/jenkins-64756886f7-2v92n   1/1     Running   0          38m
pod/nginx-6799fc88d8-kxprv     1/1     Running   0          93s

NAME                      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/jenkins-service   NodePort   10.110.105.78   <none>        8080:30090/TCP   41m
service/nginx             NodePort   10.107.115.5    <none>        80:32355/TCP     93s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jenkins   1/1     1            1           41m
deployment.apps/nginx     1/1     1            1           93s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/jenkins-64756886f7   1         1         1       38m
replicaset.apps/nginx-6799fc88d8     1         1         1       93s
I don't know why the resources are created in that namespace directly, without my even specifying it. Is there a way to configure something that will allow me to deploy to the default namespace instead?
This is my deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-cicd
spec:
  selector:
    matchLabels:
      app: jenkins
      workload: cicd
  replicas: 1
  template:
    metadata:
      namespace: devops-cicd
      labels:
        app: jenkins
        workload: cicd
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          volumeMounts:
            - name: dockersock
              mountPath: "/var/run/docker.sock"
            - name: docker
              mountPath: "/usr/bin/docker"
          securityContext:
            privileged: true
            runAsUser: 0 # Root
      restartPolicy: Always
      volumes:
        - name: dockersock
          hostPath:
            path: /var/run/docker.sock
        - name: docker
          hostPath:
            path: /usr/bin/docker
---
apiVersion: v1
kind: Service
metadata:
  namespace: devops-cicd
  name: jenkins-service
spec:
  selector:
    app: jenkins
    workload: cicd
  ports:
    - name: http
      port: 8080
      nodePort: 30090
  type: NodePort
You may have a different namespace configured as the default in the kubectl config inside the Jenkins pod. You can check it with the following command:
kubectl config view | grep namespace
To change the default namespace to `default`, you can run the following command:
kubectl config set-context --current --namespace=default
Please find more details here.
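For reference, the default namespace comes from the `namespace` field of the active context in the kubeconfig, which is exactly what the set-context command above rewrites. A minimal sketch of what that section can look like (the context, cluster, and user names here are illustrative placeholders, not taken from the cluster above):

```yaml
# Hypothetical kubeconfig fragment. The `namespace` field below is what
# `kubectl config set-context --current --namespace=...` changes.
contexts:
  - name: my-context           # illustrative context name
    context:
      cluster: my-cluster      # illustrative cluster name
      user: my-user            # illustrative user name
      namespace: devops-cicd   # kubectl requests without -n land in this namespace
current-context: my-context
```

When `namespace` is absent from the active context, kubectl falls back to the `default` namespace.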
I have a Docker container which runs a basic front-end Angular app. I have verified it runs with no issues, and I can successfully access the web app in the browser with docker run -p 5901:80 formbuilder-stand-alone-form.
I am able to successfully deploy it with Minikube and Kubernetes on my cloud dev server:
apiVersion: v1
kind: Service
metadata:
  name: stand-alone-service
spec:
  selector:
    app: stand-alone-form
  ports:
    - protocol: TCP
      port: 5901
      targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stand-alone-form-app
  labels:
    app: stand-alone-form
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stand-alone-form
  template:
    metadata:
      labels:
        app: stand-alone-form
    spec:
      containers:
        - name: stand-alone-form-pod
          image: formbuilder-stand-alone-form
          imagePullPolicy: Never
          ports:
            - containerPort: 80
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
stand-alone-form-app-6d4669f569-vsffc   1/1     Running   0          6s
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl get deployments
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
stand-alone-form-app   1/1     1            1           8s
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl get services
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes            ClusterIP      10.96.0.1       <none>        443/TCP          5d7h
stand-alone-service   LoadBalancer   10.96.197.197   <pending>     5901:30443/TCP   21s
However, I am not able to access it with the url:
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh
% minikube service stand-alone-service
|-----------|---------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------------|-------------|---------------------------|
| default | stand-alone-service | 5901 | http://192.168.49.2:30443 |
|-----------|---------------------|-------------|---------------------------|
In this example, http://192.168.49.2:30443/ gives me a dead web page.
I disabled all my iptables for troubleshooting.
Any idea how to access the front-end web app? I was thinking I might have the selectors wrong, but I'm not sure.
UPDATE: Here are the requested new outputs:
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl describe service stand-alone-service
Name:                     stand-alone-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=stand-alone-form
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.197.197
IPs:                      10.96.197.197
LoadBalancer Ingress:     10.96.197.197
Port:                     <unset>  5901/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30443/TCP
Endpoints:                172.17.0.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% minikube tunnel
Password:
Status:
    machine: minikube
    pid: 237498
    route: 10.96.0.0/12 -> 192.168.49.2
    minikube: Running
    services: [stand-alone-service]
    errors:
        minikube: no errors
        router: no errors
        loadbalancer emulator: no errors
Note: I noticed that with the tunnel I do have an external IP for the LoadBalancer now:
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl get service
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
kubernetes            ClusterIP      10.96.0.1       <none>          443/TCP          5d11h
stand-alone-service   LoadBalancer   10.98.162.179   10.98.162.179   5901:31596/TCP   3m10s
It looks like your LoadBalancer hasn't quite resolved correctly, as the External-IP is still marked as <pending>.
According to Minikube, this happens when the tunnel is missing:
https://minikube.sigs.k8s.io/docs/handbook/accessing/#check-external-ip
Have you tried running minikube tunnel in a separate command window?
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
https://minikube.sigs.k8s.io/docs/commands/tunnel/
When setting up an ingress in my Kubernetes project I can't seem to get it to work. I already checked the following questions:
Enable Ingress controller on Docker Desktop with WLS2
Docker Desktop + k8s plus https proxy multiple external ports to pods on http in deployment?
How can I access nginx ingress on my local?
But I can't get it to work. When testing the service via NodePort (http://kubernetes.docker.internal:30090/ or localhost:30090) it works without any problem, but when using http://kubernetes.docker.internal/ I get kubernetes.docker.internal didn’t send any data. ERR_EMPTY_RESPONSE.
This is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  minReadySeconds: 30
  selector:
    matchLabels:
      app: webapp
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: gcr.io/google-samples/hello-app:2.0
          env:
            - name: "PORT"
              value: "3000"
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
    - name: http
      port: 3000
      nodePort: 30090 # only for NodePort; must be >= 30,000
  type: NodePort # ClusterIP inside cluster
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  defaultBackend:
    service:
      name: webapp-service
      port:
        number: 3000
  rules:
    - host: kubernetes.docker.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-service
                port:
                  number: 3000
I also used following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.45.0/deploy/static/provider/cloud/deploy.yaml
The output of kubectl get all -A is as follows (indicating that the ingress controller is running):
NAMESPACE       NAME                                            READY   STATUS      RESTARTS   AGE
default         pod/webapp-78d8b79b4f-7whzf                     1/1     Running     0          13m
ingress-nginx   pod/ingress-nginx-admission-create-gwhbq        0/1     Completed   0          11m
ingress-nginx   pod/ingress-nginx-admission-patch-bxv9v         0/1     Completed   1          11m
ingress-nginx   pod/ingress-nginx-controller-6f5454cbfb-s2w9p   1/1     Running     0          11m
kube-system     pod/coredns-f9fd979d6-6xbxs                     1/1     Running     0          19m
kube-system     pod/coredns-f9fd979d6-frrrv                     1/1     Running     0          19m
kube-system     pod/etcd-docker-desktop                         1/1     Running     0          18m
kube-system     pod/kube-apiserver-docker-desktop               1/1     Running     0          18m
kube-system     pod/kube-controller-manager-docker-desktop      1/1     Running     0          18m
kube-system     pod/kube-proxy-mfwlw                            1/1     Running     0          19m
kube-system     pod/kube-scheduler-docker-desktop               1/1     Running     0          18m
kube-system     pod/storage-provisioner                         1/1     Running     0          18m
kube-system     pod/vpnkit-controller                           1/1     Running     0          18m

NAMESPACE       NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default         service/kubernetes                           ClusterIP      10.96.0.1        <none>        443/TCP                      19m
default         service/webapp-service                       NodePort       10.111.167.112   <none>        3000:30090/TCP               13m
ingress-nginx   service/ingress-nginx-controller             LoadBalancer   10.106.21.69     localhost     80:32737/TCP,443:32675/TCP   11m
ingress-nginx   service/ingress-nginx-controller-admission   ClusterIP      10.105.208.234   <none>        443/TCP                      11m
kube-system     service/kube-dns                             ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       19m

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   19m

NAMESPACE       NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
default         deployment.apps/webapp                     1/1     1            1           13m
ingress-nginx   deployment.apps/ingress-nginx-controller   1/1     1            1           11m
kube-system     deployment.apps/coredns                    2/2     2            2           19m

NAMESPACE       NAME                                                  DESIRED   CURRENT   READY   AGE
default         replicaset.apps/webapp-78d8b79b4f                     1         1         1       13m
ingress-nginx   replicaset.apps/ingress-nginx-controller-6f5454cbfb   1         1         1       11m
kube-system     replicaset.apps/coredns-f9fd979d6                     2         2         2       19m

NAMESPACE       NAME                                       COMPLETIONS   DURATION   AGE
ingress-nginx   job.batch/ingress-nginx-admission-create   1/1           1s         11m
ingress-nginx   job.batch/ingress-nginx-admission-patch    1/1           3s         11m
I already tried debugging, doing an exec into the nginx controller:
kubectl exec service/ingress-nginx-controller -n ingress-nginx -it -- sh
I can run the following curl: curl -H "host:kubernetes.docker.internal" localhost, and it returns the correct content. So to me it seems my LoadBalancer service is not being used when opening http://kubernetes.docker.internal in the browser. I also tried the same curl from my terminal, but that gave the same 'empty response' result.
I know this is a quite outdated thread, but I think my answer can help later visitors.
Answer: You have to install an ingress controller, for example the ingress-nginx controller,
either using helm:
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
or kubectl:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
You can find additional info here.
Don't forget to add the defined host to your /etc/hosts file, e.g.
127.0.0.1 your.defined.host
then access defined host as usual
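One detail worth checking with the v1.x controller versions mentioned above: an Ingress generally has to name the controller's ingress class via ingressClassName, or ingress-nginx will ignore it. A hedged sketch, where the service name and port are illustrative placeholders rather than values from the question:

```yaml
# Minimal Ingress sketch for ingress-nginx v1.x. "example-service" and port 80
# are illustrative; the host reuses the /etc/hosts entry suggested above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx    # matches the IngressClass that ingress-nginx installs
  rules:
    - host: your.defined.host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```

With the class set, the controller picks the Ingress up and routes requests for your.defined.host to the backing service.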
I currently have a Dockerised Spring Boot application with exposed Java REST APIs that I deploy on to my NUC (just a remote machine) and can connect to it from my Mac, via the NUCs static IP address. Both machines are on the same network.
I am now looking into hosting the Docker application in Kubernetes (Minikube)
(using this tutorial https://medium.com/bb-tutorials-and-thoughts/how-to-run-java-rest-api-on-minikube-4b564ea982cc).
I have used the Kompose tool from Kubernetes to convert my Docker compose files into Kubernetes deployments and services files. One of the services I'm trying to get working first simply opens up port 8080 and has a number of REST resources. Everything seems to be up and running, but I cannot access the REST resources from my Mac (or even the NUC itself) with a curl -v command.
After getting around a small issue with my Docker images (they needed to be built into Minikube's internal Docker image repo), I can successfully deploy my services and deployments. There are a number of others, but for the purposes of getting past this step, I'll just include the one:
$ kubectl get po -A
NAMESPACE         NAME                               READY   STATUS    RESTARTS   AGE
kube-system       coredns-f9fd979d6-hhgn8            1/1     Running   0          7h
kube-system       etcd-minikube                      1/1     Running   0          7h
kube-system       kube-apiserver-minikube            1/1     Running   0          7h
kube-system       kube-controller-manager-minikube   1/1     Running   0          7h
kube-system       kube-proxy-rszpv                   1/1     Running   0          7h
kube-system       kube-scheduler-minikube            1/1     Running   0          7h
kube-system       storage-provisioner                1/1     Running   0          7h
meanwhileinhell   apigw                              1/1     Running   0          6h54m
meanwhileinhell   apigw-75bc5z1f5j-cklxt             1/1     Running   0          6h54m
$ kubectl get service apigw
NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
apigw   NodePort   10.107.116.239   <none>        8080:32327/TCP   6h53m
$ kubectl cluster-info
Kubernetes master is running at https://192.168.44.2:8443
KubeDNS is running at https://192.168.44.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
However, I cannot hit this master IP address, or any expected open port, using the static IP of my NUC. I have tried the service types LoadBalancer and NodePort for the service, but the former hangs on pending for the external IP.
I have played about a little with exposing ports and port forwarding but haven't been able to get anything working (port 7000 is just an arbitrary number):
kubectl port-forward apigw 7000:8080
kubectl expose deployment apigw --port=8080 --target-port=8080
Here is my apigw deployment, service and pod yaml files:
apigw-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: apigw
  name: apigw
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: apigw
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.network/networkinhell: "true"
        io.kompose.service: apigw
    spec:
      containers:
        - image: meanwhileinhell/api-gw:latest
          name: apigw
          ports:
            - containerPort: 8080
          resources: {}
          imagePullPolicy: Never
          volumeMounts:
            - mountPath: /var/log/gateway
              name: combined-logs
      hostname: apigw
      restartPolicy: Always
      volumes:
        - name: combined-logs
          persistentVolumeClaim:
            claimName: combined-logs
status: {}
apigw-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: apigw
  labels:
    run: apigw
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: apigw
  type: NodePort
apigw-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: apigw
  name: apigw
spec:
  containers:
    - image: meanwhileinhell/api-gw:latest
      name: apigw
      imagePullPolicy: Never
      resources: {}
      ports:
        - containerPort: 8080
Using kubectl create -f to create the services.
Ubuntu 18.04.5 LTS
Minikube v1.15.0
KubeCtl v1.19.4
Hope you are all well!
I need to see my app in the browser, but I believe I'm missing something here and hope you can help me with this.
[root@kubernetes Docker]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/my-app2-56d5c786dd-n7mqq   1/1     Running   0          19m
pod/nginx-86c57db685-bxkpl     1/1     Running   0          13h

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        31h
service/my-app2      ClusterIP   10.101.108.199   <none>        8085/TCP       12m
service/nginx        NodePort    10.106.14.144    <none>        80:30525/TCP   13h

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-app2   1/1     1            1           19m
deployment.apps/nginx     1/1     1            1           13h

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/my-app2-56d5c786dd   1         1         1       19m
replicaset.apps/nginx-86c57db685     1         1         1       13h
Overall you can see that everything is working fine, right? It looks the same to me.
To open this in my browser I'm using the IP address of the slave node where the container is allocated.
In my app I'm mapping the Hello endpoint like this: @RequestMapping("/Hello")
In my Dockerfile, to build my image, I used this:
[root@kubernetes project]# cat Dockerfile
FROM openjdk:8
COPY microservico-0.0.1-SNAPSHOT.jar microservico-0.0.1-SNAPSHOT.jar
#WORKDIR /usr/src/microservico-0.0.1-SNAPSHOT.jar
EXPOSE 8085
ENTRYPOINT ["java", "-jar", "microservico-0.0.1-SNAPSHOT.jar"]
So in the end, I think I need to call my app this way:
---> ip:8085/Hello
[root@kubernetes project]# telnet kubeslave 8085
Trying 192.168.***.***...
telnet: connect to address 192.168.***.***: Connection refused
but I still see nothing...
Here are my deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app2
  labels:
    app: app
spec:
  selector:
    matchLabels:
      app: app
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: app
        role: master
        tier: backend
    spec:
      containers:
        - name: appcontainer
          image: *****this is ok*****:my-java-app
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 8085
---
apiVersion: v1
kind: Service
metadata:
  name: my-app2
  labels:
    app: app
    role: master
    tier: backend
spec:
  ports:
    - port: 8085
      targetPort: 8085
  selector:
    app: app
    role: master
    tier: backend
You have created a service of type ClusterIP (the default). This type of service is only for access from inside the Kubernetes cluster. To access it from a browser you need to expose the pod via a LoadBalancer or NodePort service. LoadBalancer only works if you are on one of the supported public clouds; otherwise NodePort needs to be used.
https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/
https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
Other than using a service, you can use kubectl proxy to access it as well.
If you are on Minikube then follow this
https://kubernetes.io/docs/tutorials/hello-minikube/
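As a sketch of the NodePort option, the service from the question could be changed along these lines. The selector and ports mirror the asker's manifest; the nodePort value 30085 is an illustrative pick from the default 30000-32767 range, not something from the question:

```yaml
# Hedged sketch: the my-app2 service switched from ClusterIP to NodePort.
# nodePort: 30085 is an illustrative choice; omit it to let Kubernetes assign one.
apiVersion: v1
kind: Service
metadata:
  name: my-app2
spec:
  type: NodePort
  selector:
    app: app
    role: master
    tier: backend
  ports:
    - port: 8085
      targetPort: 8085
      nodePort: 30085
```

With this in place the app would be reachable on any node's IP at the node port, i.e. ip:30085/Hello rather than ip:8085/Hello.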
I'm not sure how to access the Pod which is running behind a Service.
I have Docker CE installed and running. With this, I have the Docker 'Kubernetes' running.
I created a Pod file and then created it with kubectl ... and then used port-forwarding to test that it's working, and it was. Tick!
Next I created a Service as a LoadBalancer and created that with kubectl too, and it's running ... but I'm not sure how to test it / access the Pod that is running.
Here's the terminal outputs:
Tests-MBP:k8s test$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
hornet-data   1/1     Running   0          4h    <none>
Tests-MBP:k8s test$ kubectl get services --show-labels
NAME             TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE   LABELS
hornet-data-lb   LoadBalancer   10.0.44.157   XX.XX.XX.XX   8080:32121/TCP   4h    <none>
kubernetes       ClusterIP      10.0.0.1      <none>        443/TCP          14d   component=apiserver,provider=kubernetes
Tests-MBP:k8s test$
Not sure if the pod Label <none> is a problem? I'm using labels for the Service selector.
Here's the two files...
apiVersion: v1
kind: Pod
metadata:
  name: hornet-data
  labels:
    app: hornet-data
spec:
  containers:
    - image: ravendb/ravendb
      name: hornet-data
      ports:
        - containerPort: 8080
and
apiVersion: v1
kind: Service
metadata:
  name: hornet-data-lb
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: hornet-data
Update 1:
As requested by @vasily:
Tests-MBP:k8s test$ kubectl get ep hornet-data-lb
NAME             ENDPOINTS   AGE
hornet-data-lb   <none>      5h
Update 2:
More info for/from Vasily:
Tests-MBP:k8s test$ kubectl apply -f hornet-data-pod.yaml
pod/hornet-data configured
Tests-MBP:k8s test$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
hornet-data   1/1     Running   0          5h    app=hornet-data
Tests-MBP:k8s test$ kubectl get services --show-labels
NAME             TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE   LABELS
hornet-data-lb   LoadBalancer   10.0.44.157   XX.XX.XX.XX   8080:32121/TCP   5h    <none>
kubernetes       ClusterIP      10.0.0.1      <none>        443/TCP          14d   component=apiserver,provider=kubernetes
@vasilyangapov basically answered this via comments on the OP - this answer is in two parts.
I didn't apply the changes in my manifest. I made some changes to my service's yaml file but didn't push them up. As such, I needed to run kubectl apply -f myPod.yaml.
I was in the wrong context. The current context was pointing to a test Azure Kubernetes Service. I thought it was all on my localhost cluster that comes with Docker-CE (called the docker-for-desktop cluster). As this is a new machine, I had failed to enable Kubernetes in Docker (it's a manual step AFTER Docker-CE is installed, with the default setting having it NOT enabled/not ticked). Once I noticed that, I ticked the option to enable Kubernetes, and the docker-for-desktop cluster was installed. Then I manually changed over to this context: kubectl config use-context docker-for-desktop.
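For illustration, a kubeconfig can hold several contexts, and kubectl only talks to the one named in current-context, which is what use-context rewrites. A hedged sketch of the relevant section, where the Azure context, cluster, and user names are invented placeholders (only docker-for-desktop is the real name used by Docker's bundled Kubernetes):

```yaml
# Illustrative kubeconfig fragment with two contexts.
contexts:
  - name: docker-for-desktop
    context:
      cluster: docker-for-desktop-cluster
      user: docker-for-desktop
  - name: my-azure-test            # illustrative AKS context name
    context:
      cluster: my-aks-cluster      # illustrative cluster name
      user: clusterUser_my-aks     # illustrative user name
current-context: docker-for-desktop   # what `kubectl config use-context` changes
```

Running kubectl config current-context is a quick way to confirm which cluster your commands are actually hitting.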
Both these mistakes were simple. The reason for writing them up as an answer is to hopefully help others review their own settings when something similar isn't working right.