Kubernetes CrashLoopBackOff - docker

I got an error when creating a deployment.
This is my Dockerfile, which I have built and tested locally and also pushed to Docker Hub:
FROM node:14.15.4
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
RUN npm install pm2 -g
COPY . .
EXPOSE 3001
CMD [ "pm2-runtime", "server.js" ]
On my Raspberry Pi 3 Model B, I installed k3s using curl -sfL https://get.k3s.io | sh -
Here is my controller-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-deployment
  labels:
    app: controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: controller
  template:
    metadata:
      labels:
        app: controller
    spec:
      containers:
      - name: controller
        image: akirayorunoe/node-controller-server
        ports:
        - containerPort: 3001
After applying this file, the pod goes into an error state.
When I check the pod logs, they show:
standard_init_linux.go:219: exec user process caused: exec format error
Here is the output of kubectl describe pod:
Name: controller-deployment-8669c9c864-sw8kh
Namespace: default
Priority: 0
Node: raspberrypi/192.168.0.30
Start Time: Fri, 21 May 2021 11:21:05 +0700
Labels: app=controller
pod-template-hash=8669c9c864
Annotations: <none>
Status: Running
IP: 10.42.0.43
IPs:
IP: 10.42.0.43
Controlled By: ReplicaSet/controller-deployment-8669c9c864
Containers:
controller:
Container ID: containerd://439edcfdbf49df998e3cabe2c82206b24819a9ae13500b013b9bac1df6e56231
Image: akirayorunoe/node-controller-server
Image ID: docker.io/akirayorunoe/node-controller-server@sha256:e1c51152f9d596856952d590b1ef9a486e136661076a9d259a9259d4df314226
Port: 3001/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 21 May 2021 11:24:29 +0700
Finished: Fri, 21 May 2021 11:24:29 +0700
Ready: False
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-txm85 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-txm85:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-txm85
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m33s default-scheduler Successfully assigned default/controller-deployment-8669c9c864-sw8kh to raspberrypi
Normal Pulled 5m29s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.072053213s
Normal Pulled 5m24s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.018192177s
Normal Pulled 5m6s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.015959209s
Normal Pulled 4m34s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 2.921116157s
Normal Created 4m34s (x4 over 5m29s) kubelet Created container controller
Normal Started 4m33s (x4 over 5m28s) kubelet Started container controller
Normal Pulling 3m40s (x5 over 5m32s) kubelet Pulling image "akirayorunoe/node-controller-server"
Warning BackOff 30s (x23 over 5m22s) kubelet Back-off restarting failed container

You are trying to launch a container built for x86 (or x86_64, same difference) on an ARM machine. This does not work. Containers for ARM must be built specifically for ARM and contain ARM executables. While major projects are slowly adding ARM support to their builds, most random images you find on Docker Hub or whatever will not work on ARM.
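If you control the image, you can publish it as a multi-arch build so the same tag works on the Pi and on x86 machines. A minimal sketch with docker buildx, assuming a recent Docker and that the base image provides ARM variants (official node images do); a Pi 3 is linux/arm/v7 on 32-bit Raspberry Pi OS, or linux/arm64 on a 64-bit OS:
docker buildx create --use                          # one-time: create a builder that can cross-build
docker buildx build \
  --platform linux/amd64,linux/arm/v7,linux/arm64 \
  -t akirayorunoe/node-controller-server \
  --push .
docker manifest inspect akirayorunoe/node-controller-server   # verify which architectures the tag carries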

Related

Pod terminated with reason Error and Exit Code: 1

I have a question about pod termination when using the image kodekloud/throw-dice.
pod-definition.yml:
apiVersion: v1
kind: Pod
metadata:
  name: throw-dice-pod
spec:
  containers:
  - image: kodekloud/throw-dice
    name: throw-dice
  restartPolicy: Never
I have checked the steps in the Dockerfile. It runs throw-dice.sh, which randomly returns a number between 1 and 6.
Let's say the container returns 3 the first time; how is the pod below terminated? Where, at the pod level, is the condition defined that it should terminate if the script returns a number other than 6?
These steps were performed to execute pod-definition.yml:
master $ kubectl create -f /root/throw-dice-pod.yaml
pod/throw-dice-pod created
master $ kubectl get pods
NAME READY STATUS RESTARTS AGE
throw-dice-pod 0/1 ContainerCreating 0 9s
master $ kubectl get pods
NAME READY STATUS RESTARTS AGE
throw-dice-pod 0/1 Error 0 12s
master $ kubectl describe pod throw-dice-pod
Name: throw-dice-pod
Namespace: default
Priority: 0
Node: node01/172.17.0.83
Start Time: Sat, 16 May 2020 11:20:01 +0000
Labels: <none>
Annotations: <none>
Status: Failed
IP: 10.88.0.4
IPs:
IP: 10.88.0.4
Containers:
throw-dice:
Container ID: docker://4560c794b58cf8f3e3fad691b2292e37db4e84e20c9286321f026d1735272b5f
Image: kodekloud/throw-dice
Image ID: docker-pullable://kodekloud/throw-dice@sha256:9c70a0f907b99293885a9591b6162e9ec89e127937626a97ca7f9f6be2d98b01
Port: <none>
Host Port: <none>
State: Terminated
Reason: Error
Exit Code: 1
Started: Sat, 16 May 2020 11:20:10 +0000
Finished: Sat, 16 May 2020 11:20:10 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nr5kl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-nr5kl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nr5kl
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/throw-dice-pod to node01
Normal Pulling 21s kubelet, node01 Pulling image "kodekloud/throw-dice"
Normal Pulled 19s kubelet, node01 Successfully pulled image "kodekloud/throw-dice"
Normal Created 19s kubelet, node01 Created container throw-dice
Normal Started 18s kubelet, node01 Started container throw-dice
master $ kubectl logs throw-dice-pod
2
The Dockerfile has ENTRYPOINT sh throw-dice.sh, which means the script runs and then the container exits automatically. If you want the container to keep running, you need to start a long-running process, for example a Java process: ENTRYPOINT ["java", "-jar", "/whatever/your.jar"]
Exit code 1 here is set by the application.
Take a look at throw-dice.sh: the application shuffles the exit code between 0, 1, and 2.
The exit code is always equal to the shuffling result above.
If the exit code is 0, 6 is logged.
If the exit code is 1 or 2, the result of shuffling an integer from 1-5 is logged.
So in your case the exit code from shuffling was 1, and the shuffled result written to the log was 2. An application exit code of 1 is treated by Kubernetes as reason Error.
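To make that flow concrete, here is a minimal sketch of the behavior described above. This is an assumption about the script's logic, not the actual contents of throw-dice.sh:
#!/bin/bash
# Sketch only: assumed logic, not the real kodekloud/throw-dice script.
rc=$(( RANDOM % 3 ))              # shuffle the exit code between 0, 1 and 2
if [ "$rc" -eq 0 ]; then
  echo 6                          # exit code 0: a 6 is logged
else
  echo $(( RANDOM % 5 + 1 ))      # exit code 1 or 2: log an integer from 1-5
fi
exit $rc                          # any non-zero exit is reported by Kubernetes as reason: Error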

What is the reason for Back-off restarting failed container for elasticsearch kubernetes pod?

When I try to run my Elasticsearch container through a Kubernetes deployment, the pod fails after some time, while it runs perfectly fine when run directly as a Docker container using docker-compose or the Dockerfile. This is what I get as a result of kubectl get pods:
NAME READY STATUS RESTARTS AGE
es-764bd45bb6-w4ckn 0/1 Error 4 3m
Below is the result of kubectl describe pod:
Name: es-764bd45bb6-w4ckn
Namespace: default
Node: administrator-thinkpad-l480/<node_ip>
Start Time: Thu, 30 Aug 2018 16:38:08 +0530
Labels: io.kompose.service=es
pod-template-hash=3206801662
Annotations: <none>
Status: Running
IP: 10.32.0.8
Controlled By: ReplicaSet/es-764bd45bb6
Containers:
es:
Container ID: docker://9be2f7d6eb5d7793908852423716152b8cefa22ee2bb06fbbe69faee6f6aa3c3
Image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
Image ID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:9ae20c753f18e27d1dd167b8675ba95de20b1f1ae5999aae5077fa2daf38919e
Port: 9200/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 78
Started: Thu, 30 Aug 2018 16:42:56 +0530
Finished: Thu, 30 Aug 2018 16:43:07 +0530
Ready: False
Restart Count: 5
Environment:
ELASTICSEARCH_ADVERTISED_HOST_NAME: es
ES_JAVA_OPTS: -Xms2g -Xmx2g
ES_HEAP_SIZE: 2GB
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nhb9z (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-nhb9z:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nhb9z
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m default-scheduler Successfully assigned default/es-764bd45bb6-w4ckn to administrator-thinkpad-l480
Normal Pulled 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Container image "docker.elastic.co/elasticsearch/elasticsearch:6.2.4" already present on machine
Normal Created 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Created container
Normal Started 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Started container
Warning BackOff 1m (x15 over 5m) kubelet, administrator-thinkpad-l480 Back-off restarting failed container
Here is my elasticsearch-deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.1.0 (36652f6)
  creationTimestamp: null
  labels:
    io.kompose.service: es
  name: es
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: es
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_ADVERTISED_HOST_NAME
          value: es
        - name: ES_JAVA_OPTS
          value: -Xms2g -Xmx2g
        - name: ES_HEAP_SIZE
          value: 2GB
        image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
        name: es
        ports:
        - containerPort: 9200
        resources: {}
      restartPolicy: Always
status: {}
When I try to get logs using kubectl logs -f es-764bd45bb6-w4ckn, I get:
Error from server: Get https://<slave node ip>:10250/containerLogs/default/es-764bd45bb6-w4ckn/es?previous=true: dial tcp <slave node ip>:10250: i/o timeout
What could be the reason and solution for this problem?
I had the same problem; there can be a couple of reasons for this issue. In my case, the jar file was missing. @Lakshya has already answered this problem; I would like to add the steps you can take to troubleshoot it:
1. Get the pod status: kubectl get pods
2. Describe the pod for a closer look: kubectl describe pod "pod-name" (the last few lines of output list the events and show where your deployment failed)
3. Get logs for more details: kubectl logs "pod-name"
4. Get container logs: kubectl logs "pod-name" -c "container-name" (take the container name from the output of the describe pod command)
5. If your container is up, you can use kubectl exec -it to analyse the container further
Hope this helps community members with future issues.
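One more tip for crash-looping pods specifically: kubectl logs shows the current container, which may not have produced output yet, so ask for the previous (crashed) container's output instead:
kubectl logs "pod-name" --previous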
I found the logs using docker logs for the es container and saw that Elasticsearch was not starting because vm.max_map_count was set to a very low value.
I changed vm.max_map_count to the desired value using sysctl -w vm.max_map_count=262144 and the pod started after that.
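Note that sysctl -w only changes the running kernel and does not survive a node reboot. When running Elasticsearch on Kubernetes, one common pattern is a privileged init container that raises the limit before the main container starts; a minimal sketch to adapt into the deployment's pod spec above:
spec:
  initContainers:
  - name: set-vm-max-map-count
    image: busybox
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
    securityContext:
      privileged: true   # required to write a node-level kernel setting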
In my case, I simply ran kubectl run ubuntu --image=ubuntu, got a similar error, and kubectl logs was empty.
I guess the reason is that the ubuntu image, without a command, powers off immediately, so the solution is:
output the k8s ubuntu config YAML, and
in command, give the container a command that keeps it from powering off (for example, add "sleep infinity"). The following is a working config:
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "ubuntu",
    "creationTimestamp": null,
    "labels": {
      "run": "ubuntu"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "ubuntu",
        "image": "ubuntu:20.04",
        "command": [
          "sleep",
          "infinity"
        ],
        "resources": {},
        "imagePullPolicy": "IfNotPresent"
      }
    ],
    "restartPolicy": "Always"
  },
  "status": {}
}
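The same effect can be had without editing the config at all, by passing the command directly to kubectl run (this assumes a kubectl version where run creates a bare pod; everything after -- becomes the container's command):
kubectl run ubuntu --image=ubuntu:20.04 -- sleep infinity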
Your config may also be incorrect but still valid; read the pod logs to find the error message, then fix the config and redeploy the app.

.NET Core Docker Container won't work in Kubernetes

PLEASE READ UPDATE 2
I have a very simple EventHubClient app. It just listens for EventHub messages.
I got it running with the Docker support provided by Visual Studio 2017 (Linux container).
But when I try to deploy it in Kubernetes, I get "Back-off restarting failed container".
C# Code:
public static void Main(string[] args)
{
    // Init Mapper
    AutoMapper.Mapper.Initialize(cfg =>
    {
        cfg.AddProfile<AiElementProfile>();
    });

    Console.WriteLine("Registering EventProcessor...");
    var eventProcessorHost = new EventProcessorHost(
        EventHubPath,
        ConsumerGroupName,
        EventHubConnectionString,
        AzureStorageConnectionString,
        ContainerName
    );

    // Registers the Event Processor Host and starts receiving messages
    eventProcessorHost.RegisterEventProcessorAsync<EventProcessor>();

    Console.WriteLine("Receiving. Press ENTER to stop worker.");
    Console.ReadLine();
}
Kubernetes Manifest file (.yaml):
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: historysvc-deployment
spec:
  selector:
    matchLabels:
      app: historysvc
  replicas: 2
  template:
    metadata:
      labels:
        app: historysvc
    spec:
      containers:
      - name: historysvc
        image: vncont.azurecr.io/historysvc:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: acr-auth
kubectl get pods:
NAME READY STATUS RESTARTS AGE
historysvc-deployment-558fc5649f-bln8f 0/1 CrashLoopBackOff 17 1h
historysvc-deployment-558fc5649f-jgjvq 0/1 CrashLoopBackOff 17 1h
kubectl describe pod historysvc-deployment-558fc5649f-bln8f
Name: historysvc-deployment-558fc5649f-bln8f
Namespace: default
Node: aks-nodepool1-81522366-0/10.240.0.4
Start Time: Tue, 24 Jul 2018 10:15:37 +0200
Labels: app=historysvc
pod-template-hash=1149712059
Annotations: <none>
Status: Running
IP: 10.244.0.11
Controlled By: ReplicaSet/historysvc-deployment-558fc5649f
Containers:
historysvc:
Container ID: docker://59e66f1e6420146f6eca4f19e2801a4ee0435a34c7ac555a8d04f699a1497f35
Image: vncont.azurecr.io/historysvc:v1
Image ID: docker-pullable://vncont.azurecr.io/historysvc@sha256:636d81435bd421ec92a0b079c3841cbeb3ad410509a6e37b1ec673dc4ab8a444
Port: 80/TCP
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 24 Jul 2018 10:17:10 +0200
Finished: Tue, 24 Jul 2018 10:17:10 +0200
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 24 Jul 2018 10:16:29 +0200
Finished: Tue, 24 Jul 2018 10:16:29 +0200
Ready: False
Restart Count: 4
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mt8mm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-mt8mm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mt8mm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned historysvc-deployment-558fc5649f-bln8f to aks-nodepool1-81522366-0
Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-81522366-0 MountVolume.SetUp succeeded for volume "default-token-mt8mm"
Normal Pulled 8s (x5 over 1m) kubelet, aks-nodepool1-81522366-0 Container image "vncont.azurecr.io/historysvc:v1" already present on machine
Normal Created 7s (x5 over 1m) kubelet, aks-nodepool1-81522366-0 Created container
Normal Started 6s (x5 over 1m) kubelet, aks-nodepool1-81522366-0 Started container
Warning BackOff 6s (x8 over 1m) kubelet, aks-nodepool1-81522366-0 Back-off restarting failed container
What am I missing?
UPDATE 1
kubectl describe pod historysvc-deployment-558fc5649f-jgjvq
Name: historysvc-deployment-558fc5649f-jgjvq
Namespace: default
Node: aks-nodepool1-81522366-0/10.240.0.4
Start Time: Tue, 24 Jul 2018 10:15:37 +0200
Labels: app=historysvc
pod-template-hash=1149712059
Annotations: <none>
Status: Running
IP: 10.244.0.12
Controlled By: ReplicaSet/historysvc-deployment-558fc5649f
Containers:
historysvc:
Container ID: docker://ccf83bce216276450ed79d67fb4f8a66daa54cd424461762478ec62f7e592e30
Image: vncont.azurecr.io/historysvc:v1
Image ID: docker-pullable://vncont.azurecr.io/historysvc@sha256:636d81435bd421ec92a0b079c3841cbeb3ad410509a6e37b1ec673dc4ab8a444
Port: 80/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 25 Jul 2018 09:32:34 +0200
Finished: Wed, 25 Jul 2018 09:32:35 +0200
Ready: False
Restart Count: 277
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mt8mm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-mt8mm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mt8mm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 2m (x6238 over 23h) kubelet, aks-nodepool1-81522366-0 Back-off restarting failed container
UPDATE 2
When I run it locally with:
docker run <image>
it exits instantly (the read line is skipped and the process completes), which seems to be the problem.
I have to write
docker run -it <image>
with -it at the end for it to block on the read line.
How does Kubernetes run the docker image? Where can I set that?
Docker's -it flags are options to docker run itself, not arguments to your program, so they cannot be passed through args (Kubernetes would just hand the string "-it" to your application as argv). The Kubernetes equivalents are the stdin and tty fields on the container spec. In your case the Kubernetes manifest file (.yaml) would look like this:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: historysvc-deployment
spec:
  selector:
    matchLabels:
      app: historysvc
  replicas: 2
  template:
    metadata:
      labels:
        app: historysvc
    spec:
      containers:
      - name: historysvc
        image: vncont.azurecr.io/historysvc:v1
        ports:
        - containerPort: 80
        stdin: true
        tty: true
      imagePullSecrets:
      - name: acr-auth
That said, a service should not depend on an attached terminal: a cleaner long-term fix is to block on something other than Console.ReadLine, for example awaiting a task that completes on process shutdown.
More generally, what Kubernetes runs inside the container is controlled by the command and args fields, explained in the k8s docs under inject-data-application/define-command-argument-container:
When you create a Pod, you can define a command and arguments for the containers that run in the Pod. To define a command, include the command field in the configuration file. To define arguments for the command, include the args field in the configuration file. The command and arguments that you define cannot be changed after the Pod is created.
The command and arguments that you define in the configuration file override the default command and arguments provided by the container image. If you define args, but do not define a command, the default command is used with your new arguments.
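For illustration, this is what such an override looks like in a container spec; the command and argument values here are hypothetical, not taken from the original image:
spec:
  containers:
  - name: historysvc
    image: vncont.azurecr.io/historysvc:v1
    command: ["dotnet"]          # hypothetical: replaces the image's ENTRYPOINT
    args: ["historysvc.dll"]     # hypothetical: replaces the image's CMD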

spring boot on azure Internal Server Error

I have a very simple "Hello" spring-boot application
#RestController
public class HelloWorld {
#RequestMapping("/")
public String sayHello() {
return "Hello Spring Boot!!";
}
}
I packaged it with this Dockerfile:
FROM java:8
COPY ./springsimple-1.0-SNAPSHOT.jar /Users/a/Documents/dev/intellij/dockerImages/
WORKDIR /Users/a/Documents/dev/intellij/dockerImages/
EXPOSE 8090
CMD ["java", "-jar", "springsimple-1.0-SNAPSHOT.jar"]
pushed it to my container registry, and deployed it:
amhg$ kubectl run testproject --image acontainerregistry.azurecr.io/hellospring:v1
deployment.apps "testproject" created
amhg$ kubectl expose deployments testproject --port=5000 --type=LoadBalancer
service "testproject" exposed
The output of kubectl get pods:
NAME READY STATUS RESTARTS AGE
testproject-bdf5b54d-gkk92 1/1 Running 0 41s
However, when I try it through kubectl proxy (which reports Starting to serve on 127.0.0.1:8001), I get the error:
amhg$ curl http://127.0.0.1:8001/api/v1/proxy/namespaces/default/pods/testproject-bdf5b54d-gkk92/
Internal Server Error
What is missing?
The description of the pod is
amhg$ kubectl describe pod testproject-bdf5b54d-gkk92
Name: testproject-bdf5b54d-gkk92
Namespace: default
Node: aks-nodepool1-39744669-0/10.240.0.4
Start Time: Thu, 19 Apr 2018 13:13:20 +0200
Labels: pod-template-hash=68916108
run=testproject
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"testproject-bdf5b54d","uid":"aa99808e-43c2-11e8-9537-0a58ac1f0f4...
Status: Running
IP: 10.244.0.40
Controlled By: ReplicaSet/testproject-bdf5b54d
Containers:
testproject:
Container ID: docker://6ed3878fa4476a5d2e56f0ba70908742702709c7505c7b19989efc6ff658ea55
Image: acontainerregistry.azurecr.io/hellospring:v1
Image ID: docker-pullable://acontainerregistry.azurecr.io/azure-vote-front@sha256:e2af252d275c99b802e21b3b469c75b256d7812ee71d7582cd759bd4faf5a6ec
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 19 Apr 2018 13:13:21 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vkpjm (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-vkpjm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vkpjm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 57m default-scheduler Successfully assigned testproject-bdf5b54d-gkk92 to aks-nodepool1-39744669-0
Normal SuccessfulMountVolume 57m kubelet, aks-nodepool1-39744669-0 MountVolume.SetUp succeeded for volume "default-token-vkpjm"
Normal Pulled 57m kubelet, aks-nodepool1-39744669-0 Container image "acontainerregistry.azurecr.io/hellospring:v1" already present on machine
Normal Created 57m kubelet, aks-nodepool1-39744669-0 Created container
Normal Started 57m kubelet, aks-nodepool1-39744669-0 Started container
Let's start from the beginning: it is always better to use YAML config files to do anything with Kubernetes. They help you debug when something goes wrong and let you repeat your actions in the future.
First, you use the command to create the pod:
kubectl run testproject --image acontainerregistry.azurecr.io/hellospring:v1
The equivalent YAML looks like:
apiVersion: v1
kind: Pod
metadata:
  name: test-app
spec:
  containers:
  - name: java-app
    image: acontainerregistry.azurecr.io/hellospring:v1
    ports:
    - containerPort: 8090
and you can apply it with the command:
kubectl apply -f ./pod.yaml
You get the same result as with your original command, but you also have a config file that can be reused in the future.
You're trying to expose your pod using the command:
kubectl expose deployments testproject --port=5000 --type=LoadBalancer
YAML for your service looks like:
apiVersion: v1
kind: Service
metadata:
  name: java-service
  labels:
    name: test-app
spec:
  type: LoadBalancer
  ports:
  - port: 5000
    targetPort: 8090
    name: http
  selector:
    name: test-app
Doing the same with YAML lets you describe more and be sure you don't miss anything.
You tried to curl localhost, but I'm not sure what you expected from this command:
amhg$ curl http://127.0.0.1:8001/api/v1/proxy/namespaces/default/pods/testproject-bdf5b54d-gkk92/
Internal Server Error
After you create the service, call kubectl describe service $service_name and look for these lines in the output:
LoadBalancer Ingress: XX.XX.XX.XX
Port: http 5000/TCP
You can curl this address and receive the answer from your application:
curl -v XX.XX.XX.XX:5000
Don't forget to open the port on the Azure firewall.
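To find that external address in the first place, watch the service until Azure finishes provisioning the load balancer (the EXTERNAL-IP column shows <pending> until then):
kubectl get service java-service --watch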

Kubernetes doesn't pull from private Docker Registry

I've deployed a private registry and can pull from it with docker pull x.x.x/name. The thing is that I can't make Kubernetes pull from that repository. I think I've followed all the answers on other topics, but they don't seem to do the trick.
.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
  - name: uses-private-image
    image: x.x.x/nginx_1
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
  imagePullSecrets:
  - name: registrypullsecret
kubectl get pods:
NAME READY STATUS RESTARTS AGE
private-image-test-1 0/1 Image: x.x.x/nginx_1 is ready, container is creating 0 4m
kubectl describe pods private-image-test-1
Name: private-image-test-1
Namespace: default
Node: 37.72.163.69/37.72.163.69
Start Time: Fri, 06 May 2016 08:04:45 +0000
Labels: <none>
Status: Pending
IP:
Controllers: <none>
Containers:
uses-private-image:
Container ID:
Image: x.x.x/nginx_1
Image ID:
Port:
Command:
echo
SUCCESS
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Reason: Image: x.x.x/nginx_1 is ready, container is creating
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-zrn4n:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zrn4n
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4m 4m 1 {scheduler } scheduled Successfully assigned private-image-test-1 to 37.72.163.69
4m 8s 30 {kubelet 37.72.163.69} implicitly required container POD pulled Successfully pulled image "gcr.io/google_containers/pause:0.8.0"
4m 8s 30 {kubelet 37.72.163.69} implicitly required container POD failed Failed to create docker container with error: no such image
4m 8s 30 {kubelet 37.72.163.69} failedSync Error syncing pod, skipping: no such image
Any help is welcome at this point, thanks!
In most cases where I've come across this issue, it is almost always your credential secret being incorrect. The proper format should be along the lines of:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: {BASE64 encoding of your config}
type: kubernetes.io/dockerconfigjson
From memory, the type field has changed in recent versions of k8s, so definitely check that you have the correct type listed.
Also, your yaml example has bad indenting, but that's likely an SO editor issue.
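The easiest way to produce a secret in exactly that format is to let kubectl generate it; the registry host and credentials below are placeholders for your own:
kubectl create secret docker-registry registrypullsecret \
  --docker-server=x.x.x \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email>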
