How can I access a newly deployed pod in Kubernetes from the network? - docker

I am a newbie in Kubernetes and I know I am missing something small but cannot see what.
I am creating a pod with this file: kubectl create -f mysql.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - resources:
      limits:
        cpu: 2
    image: mysql
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      # change this
      value: TestingDB1
    ports:
    - containerPort: 3306
      name: mysql
and a service with: kubectl create -f mysql_service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  externalIPs:
  - 10.19.13.127
  ports:
  - port: 3306
  selector:
    name: mysql
Output of "kubectl version"
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"d33fd89e399396658aed4e48dfe7d5d8d50ac6e8", GitTreeState:"clean", BuildDate:"2017-05-26T17:08:24Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"d33fd89e399396658aed4e48dfe7d5d8d50ac6e8", GitTreeState:"clean", BuildDate:"2017-05-26T17:08:24Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Output of "kubectl cluster-info"
Kubernetes master is running at http://localhost:8080
Output of "kubectl get pods"
NAME READY STATUS RESTARTS AGE
mysql 1/1 Running 0 20m
Output of "kubectl get svc"
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 18h
mysql 10.254.129.206 10.19.13.127 3306/TCP 1h
Output of "kubectl get no"
NAME STATUS AGE
10.19.13.127 Ready 19h
Output of "docker ps"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74ea1fb2b383 mysql "docker-entrypoint.sh" 3 minutes ago Up 3 minutes k8s_mysql.ae7893ad_mysql_default_e58d1c09-4a8e-11e7-9baf-fa163ee3f5d9_793d8d7c
I can see the pod is created normally, and when I connect to the container I am able to log in to MySQL with the credentials.
My question is:
How can I access/expose a port running on my Kubernetes node from my network? For example, I want to telnet from my PC to the Kubernetes node where the MySQL pod is running.
Thank you!

The command below verifies which port the server in a pod is listening on (this example uses a Redis pod, which generally listens on port 6379):
kubectl get pods redis-master --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
Output: 6379

The following command gives you the port a pod is listening on, so you can create a route or port forwarding to access the service:
kubectl get pod <pod_name> -o "go-template={{(index (index .spec.containers 0).ports 0).containerPort}}"
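To actually reach the pod from outside the cluster (e.g. telnet from your PC, as asked above), the usual approach is a NodePort service rather than externalIPs. Below is a minimal sketch, assuming the pod label name: mysql from the question; the service name mysql-nodeport and the nodePort 30306 are arbitrary choices:
apiVersion: v1
kind: Service
metadata:
  name: mysql-nodeport
spec:
  type: NodePort
  selector:
    name: mysql
  ports:
  - port: 3306        # service port inside the cluster
    targetPort: 3306  # the pod's containerPort
    nodePort: 30306   # opened on every node; must fall in the default 30000-32767 range
After creating this service (e.g. kubectl create -f mysql-nodeport.yaml), running telnet 10.19.13.127 30306 from a machine on the same network as the node should reach MySQL.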

Related

Access a web service in kubernetes pod from local browser using NodePort yields Connection refused

What do I need to do in order to get my local browser to request a resource from a web service running inside a Minikube instance on my machine?
I am getting Connection refused when trying to kubectl port-forward.
My workflow is:
Create a Dockerfile with the web service in it
Start Minikube in Docker
Build the Docker image
Import the image locally into Minikube
Create a deployment with one container and a NodePort service
Apply the deployment/service
Run kubectl port-forward (to hopefully forward requests to my container)
Open a browser to 127.0.0.1:31000
Port Configuration Summary
Dockerfile:
  Expose: 80
  uvicorn: 80
Deployment:
  Container Port: 80
NodePort Service:
  Port: 80
  Target Port: 80
  Node Port: 31000
kubectl command: 8500:31000
Browser: 127.0.0.1:8500
Setup and run through
dev.dockerfile (Step 1)
# Some Debian Python image... I built my own
FROM python:3.11-buster
COPY ../sources/api/ /app/
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
ENV PYTHONPATH=/app/
EXPOSE 80
CMD ["uvicorn", "app.main:app", "--proxy-headers", "--host", "0.0.0.0", "--port", "80"]
Build Sequence (Steps 2 to 4)
# 2 - start minikube
minikube start --bootstrapper=kubeadm --vm-driver=docker
minikube docker-env
## 3 - build image
docker build -f ../../service1/deploy/dev.dockerfile ../../service1 -t acme-app.service1:latest
## 4 - load image into minikube
minikube image load acme-app.service1:latest
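One small note on step 2: minikube docker-env on its own only prints shell exports. If you wanted to build straight into Minikube's Docker daemon you would typically eval it first, although the minikube image load in step 4 achieves the same end, so this is optional:
eval $(minikube docker-env)   # point the local docker CLI at Minikube's daemon for subsequent builds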
Deployment (Step 5 and 6)
deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: acme-service-1-deployment
  namespace: acme-app-dev
  labels:
    app: service-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-1
  template:
    metadata:
      labels:
        app: service-1
    spec:
      containers:
      - name: service1-container
        image: docker.io/library/acme-app.service1:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service-1-service
  namespace: acme-app-dev
spec:
  type: NodePort
  selector:
    app: service-1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31000
Deploy
kubectl apply -f deployment.yaml
kubectl port-forward (Step 7)
Find Pod
kubectl get pods -n acme-app-dev
NAME READY STATUS RESTARTS AGE
acme-service-1-deployment-76748d7ff6-llcsr 1/1 Running 0 11s
Port Forward to pod
kubectl port-forward acme-service-1-deployment-76748d7ff6-llcsr 8500:31000 -n acme-app-dev
Forwarding from 127.0.0.1:8500 -> 31000
Forwarding from [::1]:8500 -> 31000
Test in Browser (Step 8)
Open favorite browser and navigate to 127.0.0.1:31000.
The console running the port forward now outputs:
E0123 14:54:16.208010 25932 portforward.go:406] an error occurred forwarding 8500 -> 31000: error forwarding port 31000 to pod d4c0fa6cb16ce02335a05cad904fbf2ab7818e2073d7c7ded8ad05f193aa37e7, uid : exit status 1: 2023/01/23 14:54:16 socat[39370] E connect(5, AF=2 127.0.0.1:31000, 16): Connection refused
E0123 14:54:16.213268 25932 portforward.go:234] lost connection to pod
What have I looked at?
I've tried looking through the docs on the Kubernetes website as well as issues on here (yes, there are similar ones). This one is pretty similar, although it has no accepted answer and still looks like an open issue. I couldn't see a solution for my problem there.
NodePort exposed Port connection refused
I am running Minikube on Windows and I'm just setting out on a kubernetes journey.
The image itself works in docker from a docker compose. I can see the pod is up and running in minikube from the logs (minikube dashboard).
You got your wires crossed:
The pod is listening on port 80
The NodePort service is listening on port 31000 on the node, but its underlying ClusterIP service is listening on port 80 as well.
You are trying to port-forward to port 31000 on the Pod. This will not work: kubectl port-forward targets a port inside the Pod (its containerPort), and nothing in the Pod listens on 31000; the nodePort only exists on the node itself.
Call one of the following instead:
kubectl port-forward -n acme-app-dev deploy/acme-service-1-deployment 8500:80
or kubectl port-forward -n acme-app-dev service/service-1-service 8500:80
or use minikube service -n acme-app-dev service-1-service and use the provided URL.
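A quick way to double-check which ports are actually in play (a sketch; the exact output depends on your machine and driver):
kubectl get svc -n acme-app-dev service-1-service
# PORT(S) should read 80:31000/TCP, i.e. service port 80 mapped to nodePort 31000
minikube service -n acme-app-dev service-1-service --url
# prints a reachable URL (host and port depend on your driver); curl that URL to test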

Deployment pod cannot connect to ClusterIP service

I am trying to expose my server IP by using an Ingress.
The server is an Express.js app. It listens at http://localhost:5000 locally when run without Docker.
Here are my Kubernetes config files:
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
      - name: server
        image: hongbomiao/hongbomiao-server:latest
        ports:
        - containerPort: 5000
        env:
        - name: NODE_ENV
          value: development
server-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
  - port: 5000
    targetPort: 5000
ingress-service.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: server-cluster-ip-service
            port:
              number: 5000
I got my minikube IP by
➜ minikube ip
192.168.64.12
When I open 192.168.64.12 in my browser, I get 502 Bad Gateway.
I got some debugging ideas after reading https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#kubectl-apply. Here is what I have tried:
➜ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h34m
server-cluster-ip-service ClusterIP 10.102.5.161 <none> 5000/TCP 4h39m
➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
server-deployment-bc6777445-pj59f 1/1 Running 0 4h39m
➜ kubectl exec -it server-deployment-bc6777445-pj59f -- sh
/app # apk add --no-cache curl
...
/app # curl 10.102.5.161:5000
curl: (28) Failed to connect to 10.102.5.161 port 5000: Operation timed out
It seems my deployment pod has an issue connecting to the ClusterIP service now. Any help would be nice!
It turns out the issue was caused by my VPN.
I didn't change anything in the Kubernetes config from my question.
Also, having the Express.js server explicitly listen on 0.0.0.0 is not necessary either.
(Note: #David Maze's comment under the question about 0.0.0.0 is still valuable.)
const app = express()
.use(bodyParser.json())
.use(express.static(path.join(__dirname, '../dist')))
app.listen(5000); // This just works. No need to explicitly change it to app.listen(5000, '0.0.0.0');
At the time of writing, I was in China. To get rid of the VPN while still using Kubernetes / minikube, I found a way and posted it on GitHub here.
After turning off the VPN with this workaround solution, everything just works.
Here is a copy of my solution for using minikube in China:
Step 1 - Download the Aliyun version minikube
curl -Lo minikube https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.14.2/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
Note: You can check whether there is a newer version to replace v1.14.2 in the command above at https://github.com/AliyunContainerService/minikube/wiki#%E5%AE%89%E8%A3%85minikube
Step 2 - Start the minikube
minikube start --image-mirror-country cn \
--driver=hyperkit \
--iso-url=https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.15.0.iso \
--registry-mirror=https://xxxxxxxx.mirror.aliyuncs.com
Note 1: You can find the latest minikube version at https://github.com/kubernetes/minikube/blob/master/CHANGELOG.md, then replace v1.15.0 in the command above with the newer version.
However, Aliyun's minikube is a little behind. To verify whether a newer version exists, you can replace the version in the URL https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.15.0.iso with a different version, such as v1.15.1, and then open it in the browser.
Note 2: For the xxxxxxxx in the command above, you can find yours at
https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors
(You need to register an Aliyun account first.)
Note 3: You can pass more parameters to this Aliyun version of minikube start; see https://github.com/AliyunContainerService/minikube/wiki#%E5%90%AF%E5%8A%A8
In my case, I am using the hyperkit driver on macOS, plus Aliyun's iso-url and registry-mirror to speed things up.

How to pull image from Docker registry within Kubernetes cluster?

I'm learning Kubernetes and want to set up a Docker registry to run within my cluster, deploy any custom code to this private registry, then have my nodes pull images from this private registry to create pods. I've described my setup in this StackOverflow question
Originally I was caught up trying to figure out SSL certificates, but for now I've postponed that and I'm trying to work with an insecure registry. To that end I've created the following pod to run my registry (I know it's a pod and not a replica set or deployment -- this is only for experimental purposes and I'll make it cleaner once it's working):
apiVersion: v1
kind: Pod
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  containers:
  - name: docker-registry
    image: registry:2
    ports:
    - containerPort: 80
      hostPort: 80
    env:
    - name: REGISTRY_HTTP_ADDR
      value: 0.0.0.0:80
I then created the following NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-external
  labels:
    app: docker-registry
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 32000
  selector:
    app: docker-registry
I have a load balancer set up in front of my Kubernetes cluster which I configured to route traffic on port 80 to port 32000. So I can hit this registry at http://example.com
I then updated my local /etc/docker/daemon.json as follows:
{
  "insecure-registries": ["example.com"]
}
With this I was able to push an image to my registry successfully:
> docker pull ubuntu
> docker tag ubuntu example.com/my-ubuntu
> docker push example.com/my-ubuntu
The push refers to repository [example.com/my-ubuntu]
cc9d18e90faa: Pushed
0c2689e3f920: Pushed
47dde53750b4: Pushed
latest: digest: sha256:1d7b639619bdca2d008eca2d5293e3c43ff84cbee597ff76de3b7a7de3e84956 size: 943
Now I want to try and pull this image when creating a pod. So I created the following ClusterIP service to make my registry accessible within my cluster:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-internal
  labels:
    app: docker-registry
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: docker-registry
Then I created a secret:
apiVersion: v1
kind: Secret
metadata:
  name: local-docker
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ewoJImluc2VjdXJlLXJlZ2lzdHJpZXMiOiBbImRvY2tlci1yZWdpc3RyeS1pbnRlcm5hbCJdCn0K
The base64 bit decodes to:
{
  "insecure-registries": ["docker-registry-internal"]
}
Finally, I created the following pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-docker
  labels:
    name: test
spec:
  imagePullSecrets:
  - name: local-docker
  containers:
  - name: test
    image: docker-registry-internal/my-ubuntu
When I tried to create this pod (kubectl create -f test-pod.yml) and looked at my cluster, this is what I saw:
> kubectl get pods
NAME READY STATUS RESTARTS AGE
test-docker 0/1 ErrImagePull 0 4s
docker-registry 1/1 Running 0 34m
> kubectl describe pod test-docker
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m33s default-scheduler Successfully assigned default/test-docker to pool-uqa-dev-3sli8
Normal Pulling 3m22s (x2 over 3m32s) kubelet Pulling image "docker-registry-internal/my-ubuntu"
Warning Failed 3m22s (x2 over 3m32s) kubelet Failed to pull image "docker-registry-internal/my-ubuntu": rpc error: code = Unknown desc = Error response from daemon: pull access denied for docker-registry-internal/my-ubuntu, repository does not exist or may require 'docker login'
Warning Failed 3m22s (x2 over 3m32s) kubelet Error: ErrImagePull
Normal SandboxChanged 3m19s (x7 over 3m32s) kubelet Pod sandbox changed, it will be killed and re-created.
Normal BackOff 3m18s (x6 over 3m30s) kubelet Back-off pulling image "docker-registry-internal/my-ubuntu"
Warning Failed 3m18s (x6 over 3m30s) kubelet Error: ImagePullBackOff
It's clearly failing to find the host "docker-registry-internal", despite the ClusterIP service.
I tried inspecting a pod from the inside using a trick I found online:
> kubectl run -i --tty --rm debug --image=ubuntu --restart=Never -- bash
If you don't see a command prompt, try pressing enter.
root@debug:/# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.1.67 debug
It doesn't seem like ClusterIP services are added to the /etc/hosts file, so I'm not sure how services are supposed to find one another.
I tried watching several Kubernetes tutorials on general service communication (e.g. an app pod communicating with a Redis pod) and every time all they did was supply the service name as a host and it magically connected. I'm not sure what I'm missing. Bear in mind I'm brand new to Kubernetes, so the internals are still mystical to me.
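For what it's worth, service names are resolved through the cluster DNS (kube-dns/CoreDNS), not via /etc/hosts, so name resolution can be checked from inside a pod with something like the sketch below (this assumes the service lives in the default namespace; busybox:1.28 is just one convenient image that ships nslookup):
kubectl run -i --tty --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup docker-registry-internal.default.svc.cluster.local
# A successful lookup returns the service's ClusterIP; from a pod in the same
# namespace the short name docker-registry-internal resolves as well.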

Can't Access Kubernetes Service Exposed via NodePort

I'm using minikube to test Kubernetes on the latest macOS.
Here are my relevant YAMLs:
namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: micro
  labels:
    name: micro
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: adderservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: adderservice
    spec:
      containers:
      - name: adderservice
        image: jeromesoung/adderservice:0.0.1
        ports:
        - containerPort: 8080
service.yml
apiVersion: v1
kind: Service
metadata:
  name: adderservice
  labels:
    run: adderservice
spec:
  ports:
  - port: 8080
    name: main
    protocol: TCP
    targetPort: 8080
  selector:
    run: adderservice
  type: NodePort
After running minikube start, the steps I took to deploy are as follows:
kubectl create -f namespace.yml to create the namespace
kubectl config set-context minikube --namespace=micro
kubectl create -f deployment.yml
kubectl create -f service.yml
Then, I get the NodeIP and NodePort with below commands:
kubectl get services to get the NodePort
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
adderservice NodePort 10.99.155.255 <none> 8080:30981/TCP 21h
minikube ip to get the nodeIP
$ minikube ip
192.168.99.103
But when I do curl, I always get Connection Refused like this:
$ curl http://192.168.99.103:30981/add/1/2
curl: (7) Failed to connect to 192.168.99.103 port 30981: Connection refused
So I checked node, pod, deployment and endpoint as follows:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 23h v1.13.3
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
adderservice-5b567df95f-9rrln 1/1 Running 0 23h
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
adderservice 1 1 1 1 23h
$ kubectl get endpoints
NAME ENDPOINTS AGE
adderservice 172.17.0.5:8080 21h
I also checked service list from minikube with:
$ minikube service -n micro adderservice --url
http://192.168.99.103:30981
I've read many posts regarding accessing a k8s service via NodePort. To my knowledge, I should be able to access the app with no problem. The only thing I suspect is that I'm using a custom namespace. Could this cause the access issue?
I know the namespace changes the DNS, so, to be complete, I ran the commands below as well:
$ kubectl exec -ti adderservice-5b567df95f-9rrln -- nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
$ kubectl exec -ti adderservice-5b567df95f-9rrln -- nslookup kubernetes.micro
Server: 10.96.0.10
Address: 10.96.0.10#53
Non-authoritative answer:
Name: kubernetes.micro
Address: 198.105.244.130
Name: kubernetes.micro
Address: 104.239.207.44
Could anyone help me out? Thank you.
The error Connection Refused mostly means that the application inside the container does not accept requests on the targeted interface or is not mapped through the expected ports.
Things you need to be aware of:
Make sure that your application binds to 0.0.0.0 so it can receive requests from outside the container, whether externally (as in public) or from other containers.
Make sure that your application is actually listening on the containerPort and targetPort as expected.
In your case, make sure that ADDERSERVICE_SERVICE_HOST equals 0.0.0.0 and ADDERSERVICE_SERVICE_PORT equals 8080, which should be the same value as targetPort in service.yml and containerPort in deployment.yml.
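One way to check what the process inside the pod is actually bound to (this assumes the image ships ss or netstat; use whichever is available):
kubectl exec -n micro adderservice-5b567df95f-9rrln -- ss -lntp
# or: kubectl exec -n micro adderservice-5b567df95f-9rrln -- netstat -lntp
# You want to see a listener on 0.0.0.0:8080 (or :::8080), not only on 127.0.0.1:8080.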
This does not answer the question directly, but in case someone who googled ends up here like me with the same issue, here is my solution for the same problem.
My Mac's system IP and the minikube IP are different, so localhost:port didn't work. Instead, get the IP with:
minikube ip
Then use that IP:Port to access the app and it works.
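In other words, something like this (a sketch using the NodePort from the question; your minikube IP will differ):
curl http://$(minikube ip):30981/add/1/2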
Check whether the service is really listening on 8080.
Try telnet from within the container:
telnet 127.0.0.1 8080
and against the pod endpoint:
telnet 172.17.0.5 8080

Extending Docker JBoss WildFly server not working

Hope you are all doing well.
Env: CentOS 7.3.1611, Kubernetes 1.5, Docker 1.12
Problem 1: The extended JBoss Docker image is not working, although the Docker image was created successfully.
The pod gets an error; see step 7 below.
Problem 2: Once problem #1 is fixed, I wish to upload the image to Docker Hub: https://hub.docker.com/
What are the steps to upload it, please?
1) pull
docker pull jboss/wildfly
2) vi Dockerfile
FROM jboss/wildfly
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin123$ --silent
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
3) Extend docker image
docker build --tag=nbasetty/wildfly-server .
4) [root@centos7 custom-jboss]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nbasetty/wildfly-server latest c1fbb87faffd 43 minutes ago 583.8 MB
docker.io/httpd latest e0645af13ada 2 weeks ago 177.5 MB
5) vi jboss-wildfly-rc-service-custom.yaml
apiVersion: v1
kind: Service
metadata:
  name: wildfly-service
spec:
  externalIPs:
  - 10.0.2.15
  selector:
    app: wildfly-rc-pod
  ports:
  - name: web
    port: 8080
  #- name: admin-console
  #  port: 9990
  type: LoadBalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: wildfly-rc
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wildfly-rc-pod
    spec:
      containers:
      - name: wildfly
        image: nbasetty/wildfly-server
        ports:
        - containerPort: 8080
        #- containerPort: 9990
6) kubectl create -f jboss-wildfly-rc-service-custom.yaml
7) [root@centos7 jboss]# kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-pvc-pod 1/1 Running 6 2d
wildfly-rc-d0k3h 0/1 ImagePullBackOff 0 23m
wildfly-rc-hgsfj 0/1 ImagePullBackOff 0 23m
[root@centos7 jboss]# kubectl logs wildfly-rc-d0k3h
Error from server (BadRequest): container "wildfly" in pod
"wildfly-rc-d0k3h" is waiting to start:
trying and failing to pull image
Glad you have found a way to make it work. Here are the steps I followed:
I labeled node-01 with 'dbserver: mysql' (see the label command sketch after the pod spec below)
created the Docker image on node-01
created this pod, and it worked.
apiVersion: v1
kind: ReplicationController
metadata:
  name: wildfly-rc
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wildfly-rc-pod
    spec:
      containers:
      - name: wildfly
        image: nbasetty/wildfly-server
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      nodeSelector:
        dbserver: mysql
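The labeling step mentioned above would look roughly like this (a sketch; node-01 is assumed to be the node name shown by kubectl get nodes):
kubectl label nodes node-01 dbserver=mysql
kubectl get nodes --show-labels   # verify the label is present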
Re-creating the issue:
docker pull jboss/wildfly
mkdir jw
cd jw
echo 'FROM jboss/wildfly
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin123$ --silent
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]' | tee Dockerfile
docker build --tag=docker.io/surajd/wildfly-server .
See the images available:
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/surajd/wildfly-server latest 10e96902ea12 11 seconds ago 583.8 MB
Create a config that works:
echo '
apiVersion: v1
kind: Service
metadata:
  name: wildfly
spec:
  selector:
    app: wildfly
  ports:
  - name: web
    port: 8080
  type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wildfly
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wildfly
    spec:
      containers:
      - name: wildfly
        image: docker.io/surajd/wildfly-server
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
' | tee config.yaml
kubectl create -f config.yaml
Notice the field imagePullPolicy: Never; this lets the pod use the image already available on the node (the image we built using docker build). This works on a single-node cluster but may or may not work on a multi-node cluster, so that value is not generally recommended. Since we are experimenting on a single-node cluster we can set it to Never; otherwise set it to imagePullPolicy: Always, so that whenever the pod is scheduled the image will be pulled from the registry. Read about imagePullPolicy and some config-related tips.
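For reference, the registry-backed variant of the container spec would look roughly like this (a sketch; only the image source and the pull policy differ from the config above):
containers:
- name: wildfly
  image: docker.io/surajd/wildfly-server   # must already exist in the registry
  imagePullPolicy: Always                  # pull from the registry every time the pod is scheduled
  ports:
  - containerPort: 8080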
Now, to pull the image from a registry, the image has to be on the registry first. So, to answer your question about pushing it to Docker Hub, run:
docker push docker.io/surajd/wildfly-server
In the above example, replace surajd with your Docker registry username.
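Put together, the upload flow is roughly as follows (a sketch; <your-username> stands in for your Docker Hub account):
docker login                                  # authenticate against Docker Hub
docker tag nbasetty/wildfly-server docker.io/<your-username>/wildfly-server
docker push docker.io/<your-username>/wildfly-server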
Here are the steps I used to set up a single-node cluster on CentOS:
My machine version:
$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
Here is what I have done:
Set up a single-node k8s cluster on CentOS as follows (src1 & src2):
yum update -y
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
kubeadm init
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
kubectl taint nodes --all node-role.kubernetes.io/master-
Now k8s version:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
