ImagePullBackOff after deploy to OpenShift - docker

I'm starting with Docker and OpenShift v3.
I have a simple Node.js project and a Dockerfile basically copied from nodejs.org that runs perfectly fine on my local machine with docker run. I pushed my image to Docker Hub and then created my project via oc new-project.
After oc new-app and oc get pods, I see one pod with status ImagePullBackOff and another Running. After some time, only one pod remains, with status Error. oc logs only gives me: pods for deployment took longer than 600 seconds to become ready.
Another thing that probably could help is that, after the oc new-app command, I got a message like * [WARNING] Image runs as the 'root' user which may not be permitted by your cluster administrator.
Am I doing something wrong or missing something? Is more info needed?
You can see my Dockerfile here and my project's code here.

By default OpenShift will prevent you from running containers as root due to the security risk. Whether you can configure a cluster to allow you to run a specific container as root will depend on what privileges you have to administer the cluster.
You are better off not running your container as root in the first place. To do that, I suggest you use the image at:
https://hub.docker.com/r/ryanj/centos7-s2i-nodejs/
This image is Source-to-Image (S2I) enabled and so integrates with OpenShift's build system.
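As a sketch, an S2I build from that builder image could look like this — the Git repository URL is a placeholder for your own Node.js project:

```shell
# Hypothetical example: build from your own Git repository using the
# non-root S2I Node.js builder image (repository URL is a placeholder)
oc new-app ryanj/centos7-s2i-nodejs~https://github.com/<your-user>/<your-node-app>.git

# Follow the build, then check the pods it deploys
oc logs -f bc/<your-node-app>
oc get pods
```

The `builder~source` form of oc new-app tells OpenShift to run an S2I build of the source on top of the builder image, so the resulting container runs as the unprivileged user the builder defines.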

Related

Kubernetes Docker - Do I need to run docker compose when I update a secret?

Quick question: do I need to run docker compose up for Airflow when I amend a secret with kubectl?
I've changed a password using the command line and kubectl in VS Code, and just want to know whether it is necessary to run docker compose up now that it has been changed.
If you've installed your Airflow system using Helm charts directly on k8s, then you don't have to do anything for secrets mounted as volumes: those are automatically refreshed inside pods by the kubelet. (Secrets consumed as environment variables are not refreshed; those pods need a restart.) And you don't have to manipulate docker directly when you already have k8s installed and are interacting with it using kubectl. That's the whole point of having k8s.
If you're using both, you shouldn't, really. Just interact with k8s and forget about docker. You will almost never have to think about docker unless you are debugging some serious problem with k8s system itself.
Nah. Docker Compose has nothing to do with it. You probably just need to restart your pods somehow. I always just do a "Redeploy" through our Rancher interface. I'm sure there is a way to do that with kubectl as well. You just need to get the secret into the pods; the image itself is unchanged.
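For the kubectl route, a minimal sketch — the deployment names and namespace are assumptions; use whatever your Airflow install actually created:

```shell
# Nothing to do with docker compose: just bounce the pods so they
# re-read the secret. Resource names below are hypothetical.
kubectl rollout restart deployment/airflow-webserver -n airflow
kubectl rollout restart deployment/airflow-scheduler -n airflow

# Wait for the replacement pods to become ready
kubectl rollout status deployment/airflow-webserver -n airflow
```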

What happens when kubernetes restarts containers or the cluster is scaled up?

We are using a Helm chart for deploying our application in a Kubernetes cluster.
We have a statefulset and a headless service. To initialize mTLS, we created a 'job' kind, passing shell and Python scripts as arguments in 'command'. We also created a 'cronjob' kind to renew the certificate.
We have written a 'docker-entrypoint.sh' inside 'docker image' for some initialization work & to generate TLS certificates.
Questions to ask :
Who (Helm chart / Kubernetes) takes care of scaling/monitoring/restarting containers?
Does it deploy a new docker image if a pod fails/restarts?
Will the docker ENTRYPOINT execute after a container fails/restarts?
Do 'job' & 'cronjob' execute if a container restarts?
What are the other steps taken by Kubernetes? Would you also share container insights?
Kubernetes, not Helm, will restart a failed container by default, unless you set restartPolicy: Never in the pod spec.
Restarting a container is exactly the same as starting it the first time, so on restart you can expect things to happen the same way they did when the container was first started.
Internally, the kubelet agent running on each Kubernetes node delegates the task of starting a container to an OCI-compliant container runtime such as docker, containerd, etc., which then spins up the docker image as a container on the node.
I would expect the entrypoint script to be executed on both start and restart of a container.
Does it deploy a new docker image if a pod fails/restarts?
It creates a new container with the same image as specified in the pod spec.
Do 'job' & 'cronjob' execute if a container restarts?
If a container that is part of a cronjob fails, Kubernetes will keep restarting it (unless restartPolicy: Never is set in the pod spec) until the job is considered failed. Check this for how to make a cronjob not restart a container on failure. You can specify backoffLimit to control the number of retries before the job is considered failed.
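A minimal sketch of a Job manifest that retries whole pods instead of restarting the container in place — the names, image, and script path are all illustrative:

```shell
# restartPolicy: Never means a failed container is not restarted in place;
# backoffLimit caps how many replacement pods are created before the Job
# is marked failed. Apply with: kubectl apply -f job.yaml
cat <<'EOF' > job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cert-init                        # illustrative name
spec:
  backoffLimit: 3                        # up to 3 retries before the Job fails
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: init-mtls
        image: myorg/cert-tools:latest   # placeholder image
        command: ["/bin/sh", "-c", "python /scripts/init_certs.py"]
EOF
```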
Scaling up is the equivalent of scheduling and starting yet another instance of the same container on the same or an altogether different Kubernetes node.
As a side note, you should use a higher-level abstraction such as a deployment instead of a bare pod, because when a pod fails Kubernetes tries to restart it on the same node, whereas a deployment will try to start a replacement on other nodes as well if the pod cannot be started on its currently scheduled node.
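To illustrate the difference with hypothetical names:

```shell
# A bare pod is tied to the node it was scheduled on:
kubectl run my-app --image=myorg/my-app:latest --restart=Never

# A deployment's ReplicaSet will reschedule replacement pods onto
# any schedulable node, and can run several replicas:
kubectl create deployment my-app --image=myorg/my-app:latest
kubectl scale deployment/my-app --replicas=3
```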

Is it possible to 'Security Scan' running docker containers that have been deployed to k8s?

We have harbor scanning containers before they have been deployed. Once they are scanned, we then deploy them to the platform (k8s).
Is there any way to scan a container, say a few weeks down the line after it has been deployed? Without disturbing the deployment, of course.
Thanks
I think we have to distinguish between a container (the running process) and the image from which a container is created/started.
If this is about finding out which image was used to create a container that is (still) running, and scanning that image for (new) vulnerabilities, here is a way to get information about the images of all running containers in a pod:
kubectl get pods <pod-name> -o jsonpath='{.status.containerStatuses[*].image}'
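Building on that, one way to re-scan images that are already running, without touching the deployment, is to feed those image references into an image scanner. Trivy is used here as one example scanner, and the namespace is a placeholder:

```shell
# List the unique images behind all running containers in a namespace,
# then re-scan each one. Trivy is just one example scanner; the
# namespace name is a placeholder.
kubectl get pods -n my-namespace \
  -o jsonpath='{.items[*].status.containerStatuses[*].image}' \
  | tr ' ' '\n' | sort -u \
  | while read -r image; do
      trivy image "$image"
    done
```

Since this scans the image by reference rather than the running process, the workload itself is never disturbed.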

Create a new image from a container’s changes in Google Cloud Container

I have an image to which I should add a dependency. I therefore tried to change the image while it was running in a container and to create a new image from it.
I followed this article with the following commands:
kubectl run my-app --image=gcr.io/my-project-id/my-app-image:v1 --port 8080
kubectl get pods
kubectl exec -it my-app-container-id -- /bin/bash
Then, in the container's shell, I installed the dependency using "pip install NAME_OF_DEPENDENCY".
Then I exited the container's shell and, as explained in the article, I should commit the change using this command:
sudo docker commit CONTAINER_ID nginx-template
But I cannot find the corresponding command for Google Kubernetes Engine with kubectl.
How should I do the commit in Google Kubernetes Engine?
As of K8s version 1.8, there is no way to make hot-fix changes directly to images, for example by committing a new image from a running container. If you change or add something using exec, the change only lasts as long as the container is running. This is not best practice in the K8s ecosystem.
The recommended way is to use a Dockerfile and customise the image according to your needs and requirements. After that, you can push the image to a registry (public/private) and deploy it with a K8s manifest file.
Solution to your issue:
Create a Dockerfile for your image.
Build the image using the Dockerfile.
Push the image to the registry.
Write the deployment manifest file as well as the service manifest file.
Apply the manifest files to the k8s cluster.
Now, if you want to change/modify something, you just need to change/modify the Dockerfile and follow the remaining steps.
As you know, containers are short-lived creatures that do not persist changed behaviour (modified configuration, file-system changes). It's therefore better to express new behaviour or modifications in the Dockerfile.
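A minimal sketch of that workflow, reusing the image name from the question — the v2 tag, deployment name, and dependency name are placeholders:

```shell
# 1. Describe the change in a Dockerfile instead of patching a live container
cat <<'EOF' > Dockerfile
FROM gcr.io/my-project-id/my-app-image:v1
RUN pip install NAME_OF_DEPENDENCY
EOF

# 2. Build and push the new image, then roll it out
#    (tag and deployment/container names are illustrative)
docker build -t gcr.io/my-project-id/my-app-image:v2 .
docker push gcr.io/my-project-id/my-app-image:v2
kubectl set image deployment/my-app my-app=gcr.io/my-project-id/my-app-image:v2
```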
Kubernetes Mantra
Kubernetes is a cloud-native product, which means it does not matter whether you are using Google Cloud, AWS, or Azure: it behaves consistently on each cloud provider.

Running a docker image in Openshift Origin

I am very new to Openshift Origin. I am now trying out the possibility of deploying my docker containers in OpenShift origin.
For that I created a very simple docker container that adds two numbers and produces the result:
https://github.com/abrahamjaison01/openshifttest
I created a docker image locally and a public docker image in docker hub:
docker pull abrahamjaison/openshifttest
I run the docker image locally as follows:
[root@mymachine /]# docker run -it --rm abrahamjaison/openshifttest
Enter first large number
12345
Enter second large number
54321
Result of addition = 66666
Since I am completely new to Openshift, I have no idea on how to deploy this in the Openshift environment.
I created a new project: oc new-project openshifttest
Then a new app: oc new-app docker.io/abrahamjaison/openshifttest
But then I do not know how I can access the console/terminal to provide the inputs. Also, many times when I run this I get the output "deployment failed" when I issue the command "oc status".
Basically I would like to know how I can deploy this docker image on openshift and how I will be able to access the terminal to provide the inputs for performing the addition.
Could someone help me out with this?
OpenShift is principally for long-running services such as web applications and databases. It isn't really intended for running a Docker container that wraps a command which returns a result to the console and exits.
To get a better understand of how OpenShift 3 is used, download and read the free eBook at:
https://www.openshift.com/promotions/for-developers.html
The closest you will get to docker run is the oc run command, but using it sort of defeats the whole point of what OpenShift is for. You are better off using Docker on your own system for what you are describing.
A guess at the command you would use, if you really wanted to try, would be:
oc run test -i --tty --rm --image=abrahamjaison/openshifttest
As I say though, OpenShift is not really intended for this. oc run exists more for testing when you have deployment problems with your applications.
Following the "Creating an Application From an Image" part, the syntax should be:
oc new-app abrahamjaison/openshifttest
By default, OpenShift will look for the image in DockerHub.
But that assumes you have pushed your image there first: see "Store images on Docker Hub". That might be the missing step in your process.
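Roughly, that push step would look like this, assuming you are already logged in with docker login:

```shell
# Build the image locally from your Dockerfile, then push it to Docker Hub
docker build -t abrahamjaison/openshifttest .
docker push abrahamjaison/openshifttest

# Now OpenShift can pull it from Docker Hub
oc new-app abrahamjaison/openshifttest
```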
The interaction with oc is done with the OpenShift CLI or web console, as illustrated in the authentication page.
