I'm trying to set up a dev environment with Kubernetes via Minikube. I successfully mounted the same volume to the same data dir on the same image with Docker for Mac, but I'm having trouble with Minikube.
Relevant files and logs:
db-pod.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    name: msyql
  name: db
  namespace: default
spec:
  containers:
  - name: mysqldev
    image: mysql/mysql-server:5.6.32
    ports:
    - containerPort: 3306
      hostPort: 3306
    volumeMounts:
    - mountPath: "/var/lib/mysql"
      name: volumesnew
  volumes:
  - name: volumesnew
    hostPath:
      path: "/Users/eric/Volumes/mysql"
kubectl get pods:
NAME READY STATUS RESTARTS AGE
db 0/1 Error 1 3s
kubectl logs db:
2016-08-29 20:05:55 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-08-29 20:05:55 0 [Note] mysqld (mysqld 5.6.32) starting as process 1 ...
2016-08-29 20:05:55 1 [Warning] Setting lower_case_table_names=2 because file system for /var/lib/mysql/ is case insensitive
kubectl describe pods db:
Name: db
Namespace: default
Node: minikubevm/10.0.2.15
Start Time: Wed, 31 Aug 2016 07:48:39 -0700
Labels: name=msyql
Status: Running
IP: 172.17.0.3
Controllers: <none>
Containers:
mysqldev:
Container ID: docker://af0937edcd9aa00ebc278bc8be00bc37d60cbaa403c69f71bc1b378182569d3d
Image: mysql/mysql-server:5.6.32
Image ID: docker://sha256:0fb418d5a10c9632b7ace0f6e7f00ec2b8eb58a451ee77377954fedf6344abc5
Port: 3306/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 31 Aug 2016 07:48:42 -0700
Finished: Wed, 31 Aug 2016 07:48:43 -0700
Ready: False
Restart Count: 1
Environment Variables:
MYSQL_ROOT_PASSWORD: test
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
volumesnew:
Type: HostPath (bare host directory volume)
Path: /Users/eric/Volumes/newmysql
default-token-il74e:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-il74e
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
7s 7s 1 {default-scheduler } Normal Scheduled Successfully assigned db to minikubevm
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id 568f9112dce0
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id 568f9112dce0
6s 4s 2 {kubelet minikubevm} spec.containers{mysqldev} Normal Pulled Container image "mysql/mysql-server:5.6.32" already present on machine
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id af0937edcd9a
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id af0937edcd9a
3s 2s 2 {kubelet minikubevm} spec.containers{mysqldev} Warning BackOff Back-off restarting failed docker container
3s 2s 2 {kubelet minikubevm} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysqldev" with CrashLoopBackOff: "Back-off 10s restarting failed container=mysqldev pod=db_default(012d5178-6f8a-11e6-97e8-c2daf2e2520c)"
I was able to mount the data directory from the host to the container in a test directory, but I'm having trouble mounting to the MySQL data directory. Also, I tried to mount an empty directory to the container's data dir with the appropriate MySQL environment variables set, which in Docker for Mac allowed me to perform a SQL dump in the new dir, but I'm seeing the same errors in Minikube.
If you have any thoughts on what might be the cause, or if I'm simply not setting up my dev environment the preferred Kubernetes/Minikube way, please share them.
I was able to resolve this with the following:
echo "/Users -network 192.168.99.0 -mask 255.255.255.0 -alldirs -maproot=root:wheel" | sudo tee -a /etc/exports
sudo nfsd restart
minikube start
minikube ssh -- sudo umount /Users
minikube ssh -- sudo /usr/local/etc/init.d/nfs-client start
minikube ssh -- sudo mount 192.168.99.1:/Users /Users -o rw,async,noatime,rsize=32768,wsize=32768,proto=tcp
I am running Minikube in VirtualBox. I don't know if this will work with other VM drivers - xhyve, etc.
Reference: https://github.com/kubernetes/minikube/issues/2
EDIT: I should mention that this works for minikube v0.14.0.
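If you want to sanity-check the share before pointing the pod at it, something like the following should work (a hedged sketch; the path is the one from the question, adjust to yours):
minikube ssh -- mount | grep /Users
minikube ssh -- ls -la /Users/eric/Volumes/mysql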
1. Mount the folder you want to share from your host into minikube:
minikube mount ./path/to/mySharedData:/mnt1/shared1
Don't close the terminal. That process needs to be running all the time for the folder to be accessible.
2. Use that folder with hostPath:
spec:
  containers:
  - name: mysqldev
    image: mysql/mysql-server:5.6.32
    ports:
    - containerPort: 3306
      hostPort: 3306
    volumeMounts:
    - mountPath: "/var/lib/mysql"
      name: my-volume
  volumes:
  - name: my-volume
    hostPath:
      path: "/mnt1/shared1"
3. Write access issues?
If you run into write access issues, you might want to mount the volume with:
minikube mount ./path/to/mySharedData:/mnt1/shared1 --uid 10001 --gid 10001
Here, the volume mounted in minikube will have user id and group id 10001, which happens to be the user id of the Azure SQL Edge server inside its container.
I don't know what the mysql user id is in your case. To find out, log into your container and run id; it will tell you the user id.
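For example, with the pod from this question you could check it roughly like this (a hedged sketch; it assumes the pod is named db and the image ships a mysql user):
kubectl exec -it db -- id mysql
# pass the reported uid/gid values to --uid and --gid on minikube mount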
Related
I am trying to set up a GitLab private registry for my Kubernetes container images.
I've cut the irrelevant code out below.
My replica set is defined as:
kind: ReplicaSet
...
spec:
  containers:
  - name: redacted
    image: registry.gitlab.com/redacted/redacted/redacted:latest
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: redacted-data
      mountPath: /var/www/html
  imagePullSecrets:
  - name: github-auth
...
I'm setting my secret with the following kubectl command:
kubectl create -n redacted secret docker-registry gitlab-auth \
--docker-server="registry.gitlab.com:5000" \
--docker-username="redacted" \
--docker-password="redacted" \
--docker-email="redacted" \
--namespace="redacted"
Here is the failing container output:
Name: redacted-cgbrk
...
Containers:
redacted:
Container ID:
Image: registry.gitlab.com/redacted/redacted/redacted:latest
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qv24l (ro)
/var/www/html from redacted-data (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 64s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal Scheduled 62s default-scheduler Successfully assigned redacted/redacted-cgbrk to pool-2t9lbcb5l-7d37n
Normal SuccessfulAttachVolume 55s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-6c4aac85-bb60-44e8-b557-7f65d62543fa"
Normal Pulling 16s (x3 over 54s) kubelet Pulling image "registry.gitlab.com/redacted/mpro/redacted:latest"
Warning Failed 16s (x3 over 54s) kubelet Failed to pull image "registry.gitlab.com/redacted/redacted/redacted:latest": rpc error: code = Unknown desc = failed to pull and unpack image "registry.gitlab.com/redacted/redacted/redacted:latest": failed to resolve reference "registry.gitlab.com/redacted/redacted/redacted:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
...
Kubernetes uses authentication separate from docker login. Check that you have configured Kubernetes with the required credentials so it can pull from your private registries.
Follow these steps:
1) Log in to Docker Hub
2) Create a Secret based on existing credentials
3) Create a Secret by providing credentials on the command line
4) Inspect the Secret regcred
5) Create a Pod that uses your Secret
Please see the Kubernetes documentation, Pull an Image from a Private Registry, for more information.
Also refer to this similar SO question for more information.
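For a GitLab registry specifically, a minimal sketch might look like the following (hedged: the server URL, namespace and token are assumptions, and the secret name must match the name listed under imagePullSecrets in the pod spec):
kubectl create secret docker-registry gitlab-auth \
  --docker-server="registry.gitlab.com" \
  --docker-username="<gitlab-username>" \
  --docker-password="<personal-access-or-deploy-token>" \
  --docker-email="<email>" \
  --namespace="redacted"
# inspect the stored credentials to make sure they decoded correctly
kubectl get secret gitlab-auth -n redacted -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode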
I get an error when creating a deployment.
This is my Dockerfile, which I have run and tested locally; I also pushed it to Docker Hub.
FROM node:14.15.4
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
RUN npm install pm2 -g
COPY . .
EXPOSE 3001
CMD [ "pm2-runtime", "server.js" ]
On my Raspberry Pi 3 Model B, I installed k3s using curl -sfL https://get.k3s.io | sh -
Here is my controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-deployment
  labels:
    app: controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: controller
  template:
    metadata:
      labels:
        app: controller
    spec:
      containers:
      - name: controller
        image: akirayorunoe/node-controller-server
        ports:
        - containerPort: 3001
After applying this file, the pod goes into an error state.
When I check the pod logs, they say:
standard_init_linux.go:219: exec user process caused: exec format error
Here is the output of describe pod:
Name: controller-deployment-8669c9c864-sw8kh
Namespace: default
Priority: 0
Node: raspberrypi/192.168.0.30
Start Time: Fri, 21 May 2021 11:21:05 +0700
Labels: app=controller
pod-template-hash=8669c9c864
Annotations: <none>
Status: Running
IP: 10.42.0.43
IPs:
IP: 10.42.0.43
Controlled By: ReplicaSet/controller-deployment-8669c9c864
Containers:
controller:
Container ID: containerd://439edcfdbf49df998e3cabe2c82206b24819a9ae13500b013b9bac1df6e56231
Image: akirayorunoe/node-controller-server
Image ID: docker.io/akirayorunoe/node-controller-server@sha256:e1c51152f9d596856952d590b1ef9a486e136661076a9d259a9259d4df314226
Port: 3001/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 21 May 2021 11:24:29 +0700
Finished: Fri, 21 May 2021 11:24:29 +0700
Ready: False
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-txm85 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-txm85:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-txm85
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m33s default-scheduler Successfully assigned default/controller-deployment-8669c9c864-sw8kh to raspberrypi
Normal Pulled 5m29s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.072053213s
Normal Pulled 5m24s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.018192177s
Normal Pulled 5m6s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.015959209s
Normal Pulled 4m34s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 2.921116157s
Normal Created 4m34s (x4 over 5m29s) kubelet Created container controller
Normal Started 4m33s (x4 over 5m28s) kubelet Started container controller
Normal Pulling 3m40s (x5 over 5m32s) kubelet Pulling image "akirayorunoe/node-controller-server"
Warning BackOff 30s (x23 over 5m22s) kubelet Back-off restarting failed container
Here is the error image:
You are trying to launch a container built for x86 (or x86_64, same difference) on an ARM machine. This does not work. Containers for ARM must be built specifically for ARM and contain ARM executables. While major projects are slowly adding ARM support to their builds, most random images you find on Docker Hub or whatever will not work on ARM.
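If you control the image, one option is to produce an ARM build with Docker Buildx. A minimal sketch, assuming you own the Docker Hub repository and the Pi runs a 32-bit OS (use linux/arm64 for a 64-bit OS):
docker buildx create --use
docker buildx build --platform linux/arm/v7 -t <your-dockerhub-user>/node-controller-server:arm --push .
# then point the deployment at the ARM-capable tag
kubectl set image deployment/controller-deployment controller=<your-dockerhub-user>/node-controller-server:arm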
I have to share my local .ssh directory contents with a pod. I searched for that and found an answer on one of the posts suggesting to start minikube with --mount-string.
$ minikube start --mount-string="$HOME/.ssh/:/ssh-directory" --mount
minikube v1.9.2 on Darwin 10.14.6
Using the docker driver based on existing profile
Starting control plane node m01 in cluster minikube
Pulling base image ...
Restarting existing docker container for "minikube" ...
Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
  kubeadm.pod-network-cidr=10.244.0.0/16
E0426 23:44:18.447396 80170 kubeadm.go:331] Overriding stale ClientConfig host https://127.0.0.1:32810 with https://127.0.0.1:32813
Creating mount /Users/myhome/.ssh/:/ssh-directory ...
Enabling addons: default-storageclass, storage-provisioner
Done! kubectl is now configured to use "minikube"
/usr/local/bin/kubectl is v1.15.5, which may be incompatible with Kubernetes v1.18.0.
You can also use 'minikube kubectl -- get pods' to invoke a matching version
When I check Docker for the given Minikube, it returns:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad64f642b63 gcr.io/k8s-minikube/kicbase:v0.0.8 "/usr/local/bin/entrβ¦" 3 weeks ago Up 45 seconds 127.0.0.1:32815->22/tcp, 127.0.0.1:32814->2376/tcp, 127.0.0.1:32813->8443/tcp minikube
And I check whether the .ssh directory contents are there:
$ docker exec -it 5ad64f642b63 ls /ssh-directory
id_rsa id_rsa.pub known_hosts
I have a deployment yml as:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    stack: api
    app: api-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-web
  template:
    metadata:
      labels:
        app: api-web
    spec:
      containers:
      - name: api-web-pod
        image: tiangolo/uwsgi-nginx-flask
        ports:
        - name: api-web-port
          containerPort: 80
        envFrom:
        - secretRef:
            name: api-secrets
        volumeMounts:
        - name: ssh-directory
          mountPath: /app/.ssh
      volumes:
      - name: ssh-directory
        hostPath:
          path: /ssh-directory/
          type: Directory
When it runs, it gives an error for /ssh-directory.
$ kubectl describe pod/api-deployment-f65db9c6c-cwtvt
Name: api-deployment-f65db9c6c-cwtvt
Namespace: default
Priority: 0
Node: minikube/172.17.0.2
Start Time: Sat, 02 May 2020 23:07:51 -0500
Labels: app=api-web
pod-template-hash=f65db9c6c
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/api-deployment-f65db9c6c
Containers:
api-web-pod:
Container ID:
Image: tiangolo/uwsgi-nginx-flask
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables from:
api-secrets Secret Optional: false
Environment: <none>
Mounts:
/app/.ssh from ssh-directory (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9shz5 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
ssh-directory:
Type: HostPath (bare host directory volume)
Path: /ssh-directory/
HostPathType: Directory
default-token-9shz5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9shz5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/api-deployment-f65db9c6c-cwtvt to minikube
Warning FailedMount 11m kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[ssh-directory], unattached volumes=[default-token-9shz5 ssh-directory]: timed out waiting for the condition
Warning FailedMount 2m13s (x4 over 9m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[ssh-directory], unattached volumes=[ssh-directory default-token-9shz5]: timed out waiting for the condition
Warning FailedMount 62s (x14 over 13m) kubelet, minikube MountVolume.SetUp failed for volume "ssh-directory" : hostPath type check failed: /ssh-directory/ is not a directory
When I check the contents of /ssh-directory in Docker, it gives an I/O error.
$ docker exec -it 5ad64f642b63 ls /ssh-directory
ls: cannot access '/ssh-directory': Input/output error
I know there are default mount points for Minikube. As mentioned in https://minikube.sigs.k8s.io/docs/handbook/mount/,
+---------------+---------+--------------+------------------+
| Driver        | OS      | HostFolder   | VM               |
+---------------+---------+--------------+------------------+
| VirtualBox    | Linux   | /home        | /hosthome        |
+---------------+---------+--------------+------------------+
| VirtualBox    | macOS   | /Users       | /Users           |
+---------------+---------+--------------+------------------+
| VirtualBox    | Windows | C://Users    | /c/Users         |
+---------------+---------+--------------+------------------+
| VMware Fusion | macOS   | /Users       | /Users           |
+---------------+---------+--------------+------------------+
| KVM           | Linux   | Unsupported  |                  |
+---------------+---------+--------------+------------------+
| HyperKit      | Linux   | Unsupported  | (see NFS mounts) |
+---------------+---------+--------------+------------------+
But I installed minikube with brew install minikube and it set the driver to docker.
$ cat ~/.minikube/config/config.json
{
"driver": "docker"
}
There is no mapping for the docker driver in the mount point table.
Initially, this directory has the files, but somehow, when I try to create the pod, they get deleted or something goes wrong.
While reproducing this on Ubuntu I encountered the exact same issue.
The directory indeed looked like it was mounted, but the files were missing, which led me to think that this is a general issue with mounting directories with the docker driver.
There is an open issue on GitHub about the same problem (mount directory empty) and an open feature request to mount host volumes into the docker driver.
Inspecting the minikube container shows no record of that mounted volume and confirms the information mentioned in the GitHub request: the only volume shared with the host as of now is the one mounted by default (that is, /var/lib/docker/volumes/minikube/_data mounted into minikube's /var directory).
$ docker inspect minikube
"Mounts": [
{
"Type": "volume",
"Name": "minikube",
"Source": "/var/lib/docker/volumes/minikube/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
]
As a workaround, you could copy your .ssh directory into the running minikube docker container with the following command:
docker cp $HOME/.ssh minikube:<DESIRED_DIRECTORY>
and then mount this desired directory into the pod.
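For example (a hedged sketch; /tmp/ssh-directory is an arbitrary target I picked, and the hostPath in the deployment would then need to point at it):
docker cp $HOME/.ssh minikube:/tmp/ssh-directory
# verify it arrived inside the minikube container
docker exec -it minikube ls /tmp/ssh-directory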
When I try to run my Elasticsearch container through a Kubernetes deployment, the pod fails after some time, while it runs perfectly fine when run directly as a Docker container using docker-compose or a Dockerfile. This is what I get as a result of kubectl get pods:
NAME READY STATUS RESTARTS AGE
es-764bd45bb6-w4ckn 0/1 Error 4 3m
Below is the result of kubectl describe pod:
Name: es-764bd45bb6-w4ckn
Namespace: default
Node: administrator-thinkpad-l480/<node_ip>
Start Time: Thu, 30 Aug 2018 16:38:08 +0530
Labels: io.kompose.service=es
pod-template-hash=3206801662
Annotations: <none>
Status: Running
IP: 10.32.0.8
Controlled By: ReplicaSet/es-764bd45bb6
Containers:
es:
Container ID: docker://9be2f7d6eb5d7793908852423716152b8cefa22ee2bb06fbbe69faee6f6aa3c3
Image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
Image ID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:9ae20c753f18e27d1dd167b8675ba95de20b1f1ae5999aae5077fa2daf38919e
Port: 9200/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 78
Started: Thu, 30 Aug 2018 16:42:56 +0530
Finished: Thu, 30 Aug 2018 16:43:07 +0530
Ready: False
Restart Count: 5
Environment:
ELASTICSEARCH_ADVERTISED_HOST_NAME: es
ES_JAVA_OPTS: -Xms2g -Xmx2g
ES_HEAP_SIZE: 2GB
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nhb9z (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-nhb9z:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nhb9z
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m default-scheduler Successfully assigned default/es-764bd45bb6-w4ckn to administrator-thinkpad-l480
Normal Pulled 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Container image "docker.elastic.co/elasticsearch/elasticsearch:6.2.4" already present on machine
Normal Created 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Created container
Normal Started 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Started container
Warning BackOff 1m (x15 over 5m) kubelet, administrator-thinkpad-l480 Back-off restarting failed container
Here is my elasticsearch-deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.1.0 (36652f6)
  creationTimestamp: null
  labels:
    io.kompose.service: es
  name: es
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: es
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_ADVERTISED_HOST_NAME
          value: es
        - name: ES_JAVA_OPTS
          value: -Xms2g -Xmx2g
        - name: ES_HEAP_SIZE
          value: 2GB
        image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
        name: es
        ports:
        - containerPort: 9200
        resources: {}
      restartPolicy: Always
status: {}
When I try to get logs using kubectl logs -f es-764bd45bb6-w4ckn, I get:
Error from server: Get https://<slave node ip>:10250/containerLogs/default/es-764bd45bb6-w4ckn/es?previous=true: dial tcp <slave node ip>:10250: i/o timeout
What could be the reason for, and solution to, this problem?
I had the same problem; there can be a couple of reasons for this issue. In my case the jar file was missing. @Lakshya has already answered this problem; I would like to add the steps you can take to troubleshoot it:
Get the pod status. Command - kubectl get pods
Describe the pod for a closer look - kubectl describe pod "pod-name"
The last few lines of output give you events and show where your deployment failed
Get logs for more details - kubectl logs "pod-name"
Get container logs - kubectl logs "pod-name" -c "container-name"
Get the container name from the output of the describe pod command
If your container is up, you can use the kubectl exec -it command to analyse the container further
Hope it helps community members in future issues. A compact version of this sequence is sketched below.
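For reference, the same sequence end to end, using hypothetical pod and container names:
kubectl get pods
kubectl describe pod my-pod
kubectl logs my-pod
kubectl logs my-pod -c my-container
kubectl exec -it my-pod -- sh   # only possible while the container is running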
I found the logs using docker logs for the es container and saw that Elasticsearch was not starting because vm.max_map_count was set to a very low value.
I changed vm.max_map_count to the desired value using sysctl -w vm.max_map_count=262144 and the pod started after that.
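Note that sysctl -w only lasts until the node reboots. To persist the setting, the standard sysctl mechanism applies (a sketch; the file name is my own choice):
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system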
In my case, I simply ran kubectl run ubuntu --image=ubuntu, got a similar error, and kubectl logs was empty.
I guess the reason is that an ubuntu image without a command will power off right away, so the solution is:
Output the Kubernetes ubuntu config as YAML.
In the command section, give the container a command that keeps it from powering off (for example, "sleep infinity"). The following is a working config:
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "ubuntu",
    "creationTimestamp": null,
    "labels": {
      "run": "ubuntu"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "ubuntu",
        "image": "ubuntu:20.04",
        "command": [
          "sleep",
          "infinity"
        ],
        "resources": {},
        "imagePullPolicy": "IfNotPresent"
      }
    ],
    "restartPolicy": "Always"
  },
  "status": {}
}
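If you prefer not to hand-write that config, something like it can be generated with kubectl (a hedged sketch; --dry-run=client needs a reasonably recent kubectl, older versions use plain --dry-run):
kubectl run ubuntu --image=ubuntu:20.04 --dry-run=client -o yaml --command -- sleep infinity > ubuntu-pod.yaml
kubectl apply -f ubuntu-pod.yaml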
Maybe the config is incorrect but still valid; read the pod logs to find the error message, then fix the config and redeploy the app.
I have a very simple "Hello" spring-boot application
@RestController
public class HelloWorld {
    @RequestMapping("/")
    public String sayHello() {
        return "Hello Spring Boot!!";
    }
}
I packaged it with this Dockerfile:
FROM java:8
COPY ./springsimple-1.0-SNAPSHOT.jar /Users/a/Documents/dev/intellij/dockerImages/
WORKDIR /Users/a/Documents/dev/intellij/dockerImages/
EXPOSE 8090
CMD ["java", "-jar", "springsimple-1.0-SNAPSHOT.jar"]
and pushed it to my container registry and deployed it:
amhg$ kubectl run testproject --image acontainerregistry.azurecr.io/hellospring:v1
deployment.apps "testproject" created
amhg$ kubectl expose deployments testproject --port=5000 --type=LoadBalancer
service "testproject" exposed
The command kubectl get pods returns:
NAME READY STATUS RESTARTS AGE
testproject-bdf5b54d-gkk92 1/1 Running 0 41s
However, when I run the following command (with kubectl proxy serving on 127.0.0.1:8001), I get an error:
amhg$ curl http://127.0.0.1:8001/api/v1/proxy/namespaces/default/pods/testproject-bdf5b54d-gkk92/
Internal Server Error
What is missing?
The description of the pod is
amhg$ kubectl describe pod testproject-bdf5b54d-gkk92
Name: testproject-bdf5b54d-gkk92
Namespace: default
Node: aks-nodepool1-39744669-0/10.240.0.4
Start Time: Thu, 19 Apr 2018 13:13:20 +0200
Labels: pod-template-hash=68916108
run=testproject
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"testproject-bdf5b54d","uid":"aa99808e-43c2-11e8-9537-0a58ac1f0f4...
Status: Running
IP: 10.244.0.40
Controlled By: ReplicaSet/testproject-bdf5b54d
Containers:
testproject:
Container ID: docker://6ed3878fa4476a5d2e56f0ba70908742702709c7505c7b19989efc6ff658ea55
Image: acontainerregistry.azurecr.io/hellospring:v1
Image ID: docker-pullable://acontainerregistry.azurecr.io/azure-vote-front#sha256:e2af252d275c99b802e21b3b469c75b256d7812ee71d7582cd759bd4faf5a6ec
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 19 Apr 2018 13:13:21 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vkpjm (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-vkpjm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vkpjm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 57m default-scheduler Successfully assigned testproject-bdf5b54d-gkk92 to aks-nodepool1-39744669-0
Normal SuccessfulMountVolume 57m kubelet, aks-nodepool1-39744669-0 MountVolume.SetUp succeeded for volume "default-token-vkpjm"
Normal Pulled 57m kubelet, aks-nodepool1-39744669-0 Container image "acontainerregistry.azurecr.io/hellospring:v1" already present on machine
Normal Created 57m kubelet, aks-nodepool1-39744669-0 Created container
Normal Started 57m kubelet, aks-nodepool1-39744669-0 Started container
Let's start from the beginning: it is always better to use YAML config files to do anything with Kubernetes. It will help you with debugging if something goes wrong and lets you repeat your actions in the future.
First, you use the command to create the pod:
kubectl run testproject --image acontainerregistry.azurecr.io/hellospring:v1
The equivalent YAML looks like:
apiVersion: v1
kind: Pod
metadata:
  name: test-app
spec:
  containers:
  - name: java-app
    image: acontainerregistry.azurecr.io/hellospring:v1
    ports:
    - containerPort: 8090
and you can apply it with the command:
kubectl apply -f ./pod.yaml
You get the same result as when running your command, but additionally you have a config file that can be reused in the future.
You're trying to expose your pod using the command:
kubectl expose deployments testproject --port=5000 --type=LoadBalancer
YAML for your service looks like:
apiVersion: v1
kind: Service
metadata:
  name: java-service
  labels:
    name: test-app
spec:
  type: LoadBalancer
  ports:
  - port: 5000
    targetPort: 8090
    name: http
  selector:
    name: test-app
Doing the same with YAML lets you describe more and be sure you don't miss anything.
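Assuming the file is saved as service.yaml, it is applied the same way as the pod config:
kubectl apply -f ./service.yaml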
You tried to curl localhost, but I'm not sure what you expected from this command:
amhg$ curl http://127.0.0.1:8001/api/v1/proxy/namespaces/default/pods/testproject-bdf5b54d-gkk92/
Internal Server Error
After you create the service, call kubectl describe service $service_name; in its output you will find:
LoadBalancer Ingress: XX.XX.XX.XX
Port: http 5000/TCP
You can curl this address and receive the answer from your application.
curl -v XX.XX.XX.XX:5000
Don't forget to open the port on the Azure firewall.
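To find the XX.XX.XX.XX address once Azure has provisioned the load balancer (EXTERNAL-IP shows <pending> until then), something like this should work, assuming the service is named java-service as in the YAML above:
kubectl get service java-service --watch
# once EXTERNAL-IP is populated:
curl -v http://<EXTERNAL-IP>:5000/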