Kubernetes - How to keep a simple Linux container image running permanently without an app - docker

I need to keep my Linux container running (without any app or service in it) so I can enter /bin/bash and modify some local Linux files before I manually run my app from the container shell. This is purely for debugging purposes, so I do not want any modifications in the image itself (please do not suggest that as an option).
I have defined my Kubernetes YAML file hoping that I would be able to use a simple command: ["/bin/bash"], but this does not work because the container executes the command and exits. So how can I keep it from exiting, so that I am able to exec into the container?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontarena-ads-deployment2
  labels:
    app: frontarena-ads-deployment2
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-aks-test2
      labels:
        app: frontarena-ads-aks-test2
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      restartPolicy: Always
      containers:
      - name: frontarena-ads-aks-test2
        image: test.dev/ads:test2
        imagePullPolicy: Always
        env:
        - name: DB_TYPE
          value: "odbc"
        - name: LANG
          value: "en_US.utf8"
        command: ["/bin/bash"]
      imagePullSecrets:
      - name: fa-repo-secret
  selector:
    matchLabels:
      app: frontarena-ads-aks-test2
When I check what is going on after the deployment, I see:
NAME                                         READY   STATUS             RESTARTS   AGE
frontarena-ads-deployment2-546fc4b75-zmmrs   0/1     CrashLoopBackOff   19         77m
kubectl logs $POD doesn't return anything
and kubectl describe pod $POD output is:
Command:
  /bin/bash
State:          Waiting
  Reason:       CrashLoopBackOff
Last State:     Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Wed, 07 Apr 2021 11:40:31 +0000
  Finished:     Wed, 07 Apr 2021 11:40:31 +0000

You just need to run some long-running or endless process for the container to stay up. For example, you can read a stream, which will last forever unless you kill the pod/container:
command: ["/bin/bash", "-c"]
args: ["cat /dev/stdout"]

Related

How to resolve ImagePullBackOff error in local?

I have a .NET Core application image and I am trying to create a deployment in local Kubernetes.
I created the Docker image as below.
docker tag microservicestest:dev microservicestest .
docker build -t microservicestest .
docker run -d -p 8080:80 --name myapp microservicetest
Then I created the deployment as below.
kubectl run microservicestest-deployment --image=microservicestest:latest --port 80 --replicas=3
kubectl expose deployment microservicestest-deployment --type=NodePort
Then when I run kubectl get pods I see the ImagePullBackOff error (the screenshots of the pod status and of the docker images output are not reproduced here). Below is the deployment output:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-09-22T04:29:14Z"
  generation: 1
  labels:
    run: microservicestest-deployment
  name: microservicestest-deployment
  namespace: default
  resourceVersion: "17282"
  selfLink: /apis/apps/v1/namespaces/default/deployments/microservicestest-deployment
  uid: bf75410a-d332-4016-9757-50d534114599
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: microservicestest-deployment
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: microservicestest-deployment
    spec:
      containers:
      - image: microservicestest:latest
        imagePullPolicy: Always
        name: microservicestest-deployment
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2020-09-22T04:29:14Z"
    lastUpdateTime: "2020-09-22T04:29:14Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2020-09-22T04:29:14Z"
    lastUpdateTime: "2020-09-22T04:29:14Z"
    message: ReplicaSet "microservicestest-deployment-5c67d587b9" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 1
  replicas: 3
  unavailableReplicas: 3
  updatedReplicas: 3
I am not able to understand why my pods are not able to pull the image locally. Can someone help me identify the mistake I am making here? Any help would be appreciated. Thank you.
If you are using minikube, you first need to build the images against the Docker daemon hosted inside the minikube machine by running eval $(minikube docker-env) in your bash session (for Windows, check here); see the sketch at the end of this answer.
Then you need to set the image pull policy to Never or IfNotPresent so Kubernetes looks for local images:
spec:
  containers:
  - image: my-image:my-tag
    name: my-app
    imagePullPolicy: Never
Check the official documentation here:
By default, the kubelet tries to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
As you are not using a YAML file, you can create the resources like this:
kubectl run microservicestest-deployment --image=microservicestest:latest --image-pull-policy=Never --port 80 --replicas=3
kubectl expose deployment microservicestest-deployment --type=NodePort
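Putting the minikube flow together, a rough sketch (the image name comes from the question; the build context path and tag are assumptions):
# build against minikube's Docker daemon so the cluster can see the image
eval $(minikube docker-env)
docker build -t microservicestest:latest .
# tell Kubernetes not to pull from a registry
kubectl run microservicestest-deployment --image=microservicestest:latest --image-pull-policy=Never --port 80 --replicas=3
kubectl expose deployment microservicestest-deployment --type=NodePort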

Run Kubernetes/Openshift cronjob with container user id

I am using OpenShift to deploy a Django application which uses pyodbc to connect to an external database.
Currently I want to schedule a cronjob in OpenShift using a YAML file. The cronjob gets created with no problem but throws this error when run:
('IM004', "[IM004] [unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed (0) (SQLDriverConnect)")
This error occurred before because OpenShift overrides the uid when running a container. I overcame it by following this workaround: https://github.com/VeerMuchandi/mssql-openshift-tools/blob/master/mssql-client/uid_entrypoint.sh
The error pops up again when the cronjob is run, and this may be due to the same uid issue. The following is my YAML file for scheduling the cronjob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: samplecron
spec:
  securityContext:
    runAsUser: 1001
    runAsGroup: 0
  schedule: "*/5 * * * *"
  concurrencyPolicy: "Forbid"
  startingDeadlineSeconds: 60
  suspend:
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            parent: "cronjobpi"
        spec:
          containers:
          - name: samplecron
            image: docker-registry.default.svc:5000/image-name
            volumeMounts:
            - mountPath: /path-to-mount
              name: "volume-name"
            command: [ "python3", "/script.py" ]
          volumes:
          - name: "vol-name"
          restartPolicy: Never
Can someone suggest how I can provide the same user id information in the cronjob YAML file, or any other way of solving this issue?
I was able to solve the issue using the entrypoint script I mentioned above. I included the command that runs the Python script inside the .sh entrypoint script, and instead of command: [ "python3", "/script.py" ], command: [ "sh", "/entrypoint.sh" ] was used. The Python script connects to a DB server using pyodbc; pyodbc.connect() fails if the container's UID is not written in /etc/passwd, which is exactly what the entrypoint script mentioned above takes care of.
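For reference, a minimal sketch of such an entrypoint script, modeled on the linked uid_entrypoint.sh (the USER_NAME variable and the /script.py path are assumptions, adjust to your image):
#!/bin/sh
# If the arbitrary UID assigned by OpenShift has no entry in /etc/passwd,
# append one so libraries such as unixODBC can resolve the current user.
if ! whoami > /dev/null 2>&1; then
  if [ -w /etc/passwd ]; then
    echo "${USER_NAME:-default}:x:$(id -u):0:${USER_NAME:-default} user:${HOME:-/tmp}:/bin/sh" >> /etc/passwd
  fi
fi
# Then run the actual job command.
exec python3 /script.py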

How can you read a database port from application.properties with environment variables

I am very new to Spring Boot and application.properties. My problem is that I need to be flexible with my database port, because I have two different databases. Therefore I want to read the port from an environment variable. I tried the following:
spring.data.mongodb.uri = mongodb://project1:${db-password}@abc:12345/project
This code works fine if my database uses port 12345. But if I now try to read the port from an environment variable, there is a problem.
I tried this:
spring.data.mongodb.uri = mongodb://project1:${db-password}@abc:${port}/project
The problem is the following: I am using Kubernetes and Jenkins. The environment variable "port" is passed to my program in Kubernetes, and this works fine for "db-password", but not for the port. Jenkins says:
"The connection string contains an invalid host 'abd:${port}'. The port '${port}' is not a valid, it must be an integer between 0 and 65535"
So now to my question:
How can I read a port from an environment variable without getting this error?
Thank you in advance!
To inject environment variables into the pods you can do the following:
Configmap
You can create a ConfigMap and configure your pods to use it.
Steps required:
Create ConfigMap
Update/Create the deployment with ConfigMap
Test it
Create ConfigMap
I provided a simple ConfigMap below to store your variable:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  port: "12345"
To apply it and be able to use it, invoke the following command:
$ kubectl create -f example-configmap.yaml
The ConfigMap above will create the environment variable port with a value of 12345.
Check if ConfigMap was created successfully:
$ kubectl get configmap
Output should be like this:
NAME             DATA   AGE
example-config   1      21m
To get detailed information you can check it with the command:
$ kubectl describe configmap example-config
With output:
Name:         example-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
port:
----
12345

Events:  <none>
Update/Create the deployment with ConfigMap
I provided a simple deployment below with the ConfigMap included:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        envFrom:
        - configMapRef:
            name: example-config
        ports:
        - containerPort: 80
Configuration responsible for using ConfigMap:
envFrom:
- configMapRef:
    name: example-config
After that you need to create your deployment with the command:
$ kubectl create -f configmap-test.yaml
And check if it's working:
$ kubectl get pods
With output:
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-84d6f58895-b4zvz   1/1     Running   0          23m
nginx-deployment-84d6f58895-dp4c7   1/1     Running   0          23m
Test it
To test whether the environment variable is working, you need to get inside the pod and check for yourself.
To do that invoke the command:
$ kubectl exec -it NAME_OF_POD -- /bin/bash
Replace NAME_OF_POD with the appropriate pod name for your case.
After successfully getting into the container, run:
$ echo $port
It should show:
root@nginx-deployment-84d6f58895-b4zvz:/# echo $port
12345
Now you can use your environment variables inside pods.
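As an alternative to envFrom, a sketch of exposing just the single key from the ConfigMap as an environment variable (names reuse the example-config above; the variable name port matches the question); this fragment goes in the pod template of the deployment:
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    env:
    - name: port
      valueFrom:
        configMapKeyRef:
          name: example-config
          key: port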

Docker Kubernetes (Mac) - Autoscaler unable to find metrics

I have installed a local instance of Kubernetes via Docker on my Mac.
Following the walkthrough on how to activate autoscaling on a deployment I have experienced an issue. The autoscaler can't read the metrics.
When I run kubectl describe hpa, the current cpu usage comes back as unknown / 50% with the warnings:
Warning FailedGetResourceMetric:
horizontal-pod-autoscaler unable to get metrics for resource cpu:
unable to fetch metrics from API: the server could not find the
requested resource (get pods.metrics.k8s.io)
Warning FailedComputeMetricsReplicas
horizontal-pod-autoscaler failed to get cpu utilization: unable to
get metrics for resource cpu: unable to fetch metrics from API: the
server could not find the requested resource (get pods.metrics.k8s.io)
I have cloned the metrics-server via git clone https://github.com/kubernetes-incubator/metrics-server.git and installed it with kubectl create -f deploy/1.8+
I finally got it working..
Here are the full steps I took to get things working:
Have Kubernetes running within Docker
Delete any previous instance of metrics-server from your Kubernetes instance with kubectl delete -n kube-system deployments.apps metrics-server
Clone metrics-server with git clone https://github.com/kubernetes-incubator/metrics-server.git
Edit the file deploy/1.8+/metrics-server-deployment.yaml to override the default command by adding a command section that didn't exist before. The new section will instruct metrics-server to allow for an insecure communications session (don't verify the certs involved). Do this only for Docker, and not for production deployments of metrics-server:
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  command:
  - /metrics-server
  - --kubelet-insecure-tls
Add metrics-server to your Kubernetes instance with kubectl create -f deploy/1.8+ (if you get errors with the .yaml, use kubectl apply -f deploy/1.8+ instead).
Remove and add the autoscaler to your deployment again. It should now show the current cpu usage.
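For example, recreating the HPA could look roughly like this (the deployment name and the thresholds are placeholders):
kubectl delete hpa <deployment-name>
kubectl autoscale deployment <deployment-name> --cpu-percent=50 --min=1 --max=10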
EDIT July 2020:
Most of the above steps still hold true, except that metrics-server has changed and that file does not exist anymore.
The repo now recommends installing it like this:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
So we can now download this file,
curl -L https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml --output components.yaml
add --kubelet-insecure-tls under args (L88) in the metrics-server deployment, and run
kubectl apply -f components.yaml
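After that edit, the metrics-server container spec in components.yaml should look roughly like this (a sketch only: the image tag and the other args stay exactly as the release file ships them):
containers:
- name: metrics-server
  # image: (unchanged, as shipped in components.yaml)
  args:
  # ...keep the args already present in the file...
  - --kubelet-insecure-tls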
If you are using the node Internal-IP, this may work for you. Follow @Mr.Turtle's answer above through step 4 and add one more flag:
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.3
  command:
  - /metrics-server
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
We upgraded to AWS EKS version 1.13.7 and that's when we started having problems with HPA. It turns out that on my deployment I had to specify a value for resources.requests.cpu=200m, and then the HPA started working for me.
I had the same issue while using my Kubernetes kubeadm lab; the updated procedure is here:
https://github.com/kubernetes-sigs/metrics-server
This solved the issue:
horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
If someone still has problems fixing this issue, this helped me fix it on minikube:
I had 2 deployments with the same label, something like this:
kind: Deployment
metadata:
  name: webserver
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
---
kind: Deployment
metadata:
  name: database
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
I renamed the label and matchLabels of the database (e.g. to app: db), then deleted both deployments and applied the new config - et voilà, it worked (after hours of trying to solve the problem).
Further information on this issue: https://github.com/kubernetes/kubernetes/issues/79365
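For illustration, the database Deployment from the sketch above would end up like this after the rename (only the label value changes):
kind: Deployment
metadata:
  name: database
spec:
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db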
I have deployed on EKS and I was facing the same issue.
muhasan@admins-MacBook-Pro devops % kubectl get hpa
NAME                     REFERENCE                           TARGETS                         MINPODS   MAXPODS   REPLICAS   AGE
backend-iam-deployment   Deployment/backend-iam-deployment   36278272/100Mi, <unknown>/50%   1         10        1          10m
In my deployment I just specified resources, and that helped me get HPA running:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-iam-deployment
  labels:
    app: backend-iam-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-iam-deployment
  template:
    metadata:
      labels:
        app: backend-iam-deployment
    spec:
      containers:
      - name: backend-iam-deployment
        image: <imagename>
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
        startupProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 10
          failureThreshold: 30
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 10
          failureThreshold: 5
      imagePullSecrets:
      - name: us-east-1-ecr-registry
After applying the resource limits to my deployment, HPA started working for me.
muhasan@admins-MacBook-Pro fx.identitymanagement % kubectl get hpa
NAME                     REFERENCE                           TARGETS                  MINPODS   MAXPODS   REPLICAS   AGE
backend-iam-deployment   Deployment/backend-iam-deployment   23216128/100Mi, 1%/50%   1         10        1          24m

spring boot on azure Internal Server Error

I have a very simple "Hello" spring-boot application
@RestController
public class HelloWorld {
    @RequestMapping("/")
    public String sayHello() {
        return "Hello Spring Boot!!";
    }
}
I packaged it with this Dockerfile
FROM java:8
COPY ./springsimple-1.0-SNAPSHOT.jar /Users/a/Documents/dev/intellij/dockerImages/
WORKDIR /Users/a/Documents/dev/intellij/dockerImages/
EXPOSE 8090
CMD ["java", "-jar", "springsimple-1.0-SNAPSHOT.jar"]
and pushed it to my container registry and deployed it
amhg$ kubectl run testproject --image acontainerregistry.azurecr.io/hellospring:v1
deployment.apps "testproject" created
amhg$ kubectl expose deployments testproject --port=5000 --type=LoadBalancer
service "testproject" exposed
The command kubectl get pods shows:
NAME                         READY   STATUS    RESTARTS   AGE
testproject-bdf5b54d-gkk92   1/1     Running   0          41s
However, when I try the following command through the API proxy (Starting to serve on 127.0.0.1:8001), I get the error:
amhg$ curl http://127.0.0.1:8001/api/v1/proxy/namespaces/default/pods/testproject-bdf5b54d-gkk92/
Internal Server Error
What is missing?
The description of the pod is
amhg$ kubectl describe pod testproject-bdf5b54d-gkk92
Name:           testproject-bdf5b54d-gkk92
Namespace:      default
Node:           aks-nodepool1-39744669-0/10.240.0.4
Start Time:     Thu, 19 Apr 2018 13:13:20 +0200
Labels:         pod-template-hash=68916108
                run=testproject
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"testproject-bdf5b54d","uid":"aa99808e-43c2-11e8-9537-0a58ac1f0f4...
Status:         Running
IP:             10.244.0.40
Controlled By:  ReplicaSet/testproject-bdf5b54d
Containers:
  testproject:
    Container ID:   docker://6ed3878fa4476a5d2e56f0ba70908742702709c7505c7b19989efc6ff658ea55
    Image:          acontainerregistry.azurecr.io/hellospring:v1
    Image ID:       docker-pullable://acontainerregistry.azurecr.io/azure-vote-front@sha256:e2af252d275c99b802e21b3b469c75b256d7812ee71d7582cd759bd4faf5a6ec
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 19 Apr 2018 13:13:21 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vkpjm (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-vkpjm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vkpjm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                               Message
  ----    ------                 ----  ----                               -------
  Normal  Scheduled              57m   default-scheduler                  Successfully assigned testproject-bdf5b54d-gkk92 to aks-nodepool1-39744669-0
  Normal  SuccessfulMountVolume  57m   kubelet, aks-nodepool1-39744669-0  MountVolume.SetUp succeeded for volume "default-token-vkpjm"
  Normal  Pulled                 57m   kubelet, aks-nodepool1-39744669-0  Container image "acontainerregistry.azurecr.io/hellospring:v1" already present on machine
  Normal  Created                57m   kubelet, aks-nodepool1-39744669-0  Created container
  Normal  Started                57m   kubelet, aks-nodepool1-39744669-0  Started container
Let's start from the beginning: it is always better to use YAML config files to do anything with Kubernetes. It will help you with debugging if something goes wrong and lets you repeat your actions in the future.
First, you used this command to create the pod:
kubectl run testproject --image acontainerregistry.azurecr.io/hellospring:v1
The equivalent YAML looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-app
spec:
  containers:
  - name: java-app
    image: acontainerregistry.azurecr.io/hellospring:v1
    ports:
    - containerPort: 8090
and you can apply it with the command:
kubectl apply -f ./pod.yaml
You get the same result as when running your command, but additionally you have a config file that can be reused in the future.
You're trying to expose your pod using the command:
kubectl expose deployments testproject --port=5000 --type=LoadBalancer
The YAML for your service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: java-service
  labels:
    name: test-app
spec:
  type: LoadBalancer
  ports:
  - port: 5000
    targetPort: 8090
    name: http
  selector:
    name: test-app
Doing the same with YAML lets you describe more and be sure you don't miss anything.
You tried to curl localhost, but I'm not sure what you expected from this command:
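Applying the Service works the same way as for the pod (the file name service.yaml is just an assumption):
kubectl apply -f ./service.yaml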
amhg$ curl http://127.0.0.1:8001/api/v1/proxy/namespaces/default/pods/testproject-bdf5b54d-gkk92/
Internal Server Error
After you create the service, run kubectl describe service $service_name; in its output you will find:
LoadBalancer Ingress: XX.XX.XX.XX
Port: http 5000/TCP
You can curl this address and receive the answer from your application.
curl -v XX.XX.XX.XX:5000
Don't forget to open the port on the Azure firewall.
