Not able to connect to a container (created via REST API) in Kubernetes - docker

I am creating a Docker container (using docker run) in a Kubernetes environment by invoking a REST API.
I have mounted the host machine's docker.sock, and I am building an image and running that image from the REST API.
Now I need to connect to this container from another container, which was actually started by kubectl from a deployment.yml file.
But when I use kubectl describe pod <pod name>, the container created via the REST API is not there. So where is this container running, and how can I connect to it from another container?

Are you running the container in the same namespace as the deployment.yml? One option to check that would be to run:
kubectl get pods --all-namespaces
If you are not able to find the Docker container there, then I would suggest performing the steps below:
docker ps -a (verify the container's running status)
Ensure that there are no permission errors while mounting docker.sock
If there are permission errors, escalate privileges to the appropriate level
To answer the second question, connecting two containers should be possible by referencing the cluster DNS name in the format below:
"<servicename>.<namespacename>.svc.cluster.local"
I would also ask you to detail the steps, code, and errors (if there are any) so I can better answer the question.
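For example, from any pod in the cluster you could reach a Service named myservice in namespace myns roughly like this (names and port are placeholders, and this only works for workloads Kubernetes itself manages and exposes through a Service):

import requests

# <servicename>.<namespacename>.svc.cluster.local resolves through the cluster DNS.
resp = requests.get("http://myservice.myns.svc.cluster.local:8080/")
print(resp.status_code)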

You probably shouldn't be directly accessing the Docker API from anywhere in Kubernetes. Kubernetes will be totally unaware of anything you manually docker run (or equivalent), and as you note, normal administrative calls like kubectl get pods won't see it; the CPU and memory the container uses won't be accounted for by the scheduler, which could cause a node to become over-utilized. The Kubernetes network environment is also pretty complicated, and unless you know the details of your specific CNI provider, it will be hard to make your container accessible at all, much less from a pod running on a different node.
A process running in a pod can access the Kubernetes API directly, though, and all of the official client libraries are aware of the in-cluster configuration conventions this uses. This means that you should be able to directly create a Job that launches your target pod, and a Service that connects to it, and get the normal Kubernetes features around this. (For example, servicename.namespacename.svc.cluster.local is a valid DNS name that reaches any Pod connected to the Service.)
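A minimal sketch of that approach with the Python client library, assuming the pod's service account has RBAC permission to create Jobs and Services (the image, names, and ports here are placeholders):

from kubernetes import client, config

# Inside a pod, credentials come from the mounted service account.
config.load_incluster_config()

labels = {"app": "user-task"}  # placeholder label shared by the Job's pods and the Service

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="user-task"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="user-task",
                    image="example/user-image:latest",  # placeholder image
                    ports=[client.V1ContainerPort(container_port=8080)],
                )],
            ),
        ),
    ),
)

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="user-task"),
    spec=client.V1ServiceSpec(
        selector=labels,
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
# Other pods can now reach it at user-task.default.svc.cluster.local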
You should also consider whether you actually need this sort of interface. For many applications, it will work just as well to deploy some sort of message-queue system (e.g., RabbitMQ) and then launch a pool of workers that connect to it. You can control the size of the worker pool using a Deployment. This is easier to develop, since it avoids a hard dependency on Kubernetes, and easier to manage, since it prevents a flood of dynamic jobs from overwhelming your cluster.
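A sketch of what such a worker might look like with pika (the queue name and the RabbitMQ Service hostname are assumptions):

import pika

# RabbitMQ reachable through its Service DNS name; host and queue are placeholders.
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="rabbitmq.default.svc.cluster.local"))
channel = connection.channel()
channel.queue_declare(queue="tasks", durable=True)

def handle(ch, method, properties, body):
    # ... do the work described by the message ...
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="tasks", on_message_callback=handle)
channel.start_consuming()

Scaling the worker pool is then just a matter of changing the Deployment's replica count.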

Related

Run a web page similar to kubernetes dashboard

I want to run a web page similar to the Kubernetes dashboard. The web page takes input from the user and generates a small file, but I want the web page to be loaded without using any server. Kubernetes deploys a pod and brings up the web page, and I want to do the same. If Kubernetes is also using a server, how is it using it (is it downloading it directly with the OS in the pod, or how is Kubernetes doing it)?
Overview: I want to know how the Kubernetes dashboard gets deployed. Is it using a server, and if so, how does that server get installed in the Kubernetes pod? If not, how does it bring up the UI?
Actually, Kubernetes plays the role of an orchestrator: it provides the communication channels between containers in the cluster and uses Docker by default as the container runtime.
Containers are the run-time environment for images, while images consist of an OS layer plus the application binaries; a good explanation can be found here. To build your own image you have two main options: create an image from an existing one on Docker Hub, or compose an image from a Dockerfile. To store the customized image, you can push it to a Docker Hub repository or run a private, isolated repository by deploying a Registry server.
When the image is ready and you plan to run the application in a Kubernetes cluster, that's a good time to create your first microservice. Although there are tons of materials about Kubernetes clusters and their runtime engine architecture, I will focus on the application deployment lifecycle.
A Deployment is the main mechanism that defines how Pods should be run within a cluster and provides the configuration for the application's further run-time workflow.
A Service describes how a particular set of Pods communicates with other resources within the cluster, providing the endpoint IP address and port where your application will respond.
In the usual Kubernetes Dashboard scenario, kubectl proxy exposes the application by proxying between the host and the Kubernetes API; this is more for testing purposes and is not secure, compared with a NodePort-type Service, which is a more convenient way to make an application accessible outside the cluster, as described in this Stack thread.
I encourage you to read more in the official Kubernetes documentation.
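To make the Deployment and Service objects concrete, here is a rough sketch using the Python client (the names, image, and ports are placeholders, and a NodePort Service is used as discussed above):

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

labels = {"app": "web-ui"}  # placeholder label linking the Deployment's pods to the Service

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-ui"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="web",
                image="example/web-ui:latest",  # placeholder image
                ports=[client.V1ContainerPort(container_port=8080)],
            )]),
        ),
    ),
)

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-ui"),
    spec=client.V1ServiceSpec(
        type="NodePort",  # reachable on every node's IP at an allocated port
        selector=labels,
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.AppsV1Api().create_namespaced_deployment("default", deployment)
client.CoreV1Api().create_namespaced_service("default", service)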

Does it make sense to run Kubernetes on a single server?

Using Docker, I have implemented a system to deploy environments (on a single server) based on Git branches, using Traefik (*.dev.domain.com) and Docker Compose templates.
I like Kubernetes, but I've never switched to it since I'm limited to a single server for my infrastructure. I've only used it in local installations (Docker for Windows).
So, my question is: does it make sense to run a Kubernetes "cluster" (master and nodes) on a single server to orchestrate and route containers (in place of Traefik/Rancher/Docker Compose)?
This use is for development and staging only for the moment, so high availability is not a prerequisite.
Thanks.
If it is not a production environment, it doesn't matter how many nodes you are using. So yes, it should be just fine in this case. But make sure all the k8s features you will need in production are available in test/dev, to keep things similar and portable.
AFAIU,
I do not see a requirement for Kubernetes on a single host unless you need at least the following, which you can also get with native docker run, docker-compose, or Docker Engine swarm mode:
Make sure there are enough (>=2) replicas of your app on the single server and that you are balancing the load across those containers.
If you want to go a bit further, you should be able to scale up and down dynamically (Docker swarm mode supports this out of the box; otherwise use the jwilder nginx proxy).
Your deployments should not cause downtime: make sure at least one container is healthy at any instant while deploying.
Containers should auto-heal (restart automatically) if your HTTP or TCP health check fails.
Doing all of the above will certainly put you in a better place, but a single host is still a single point of failure, which you will have to deal with at regular intervals.
Preferred: if possible, start with Docker Engine swarm mode, a single-master Kubernetes, or minikube. These take care of all the above scenarios out of the box and also let you scale up later by adding more nodes without changing much in your YAML files for Docker swarm or Kubernetes (a rough sketch of the swarm approach follows the references below).
Ref -
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://docs.docker.com/engine/swarm/
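As a rough illustration of the replication, health-check, and rolling-update points above, a sketch assuming Docker swarm mode and the docker Python SDK (the image, port, and health endpoint are placeholders):

import docker

client = docker.from_env()  # assumes the host has already run `docker swarm init`

client.services.create(
    image="example/myapp:latest",  # placeholder image
    name="myapp",
    mode=docker.types.ServiceMode("replicated", replicas=2),     # >= 2 replicas
    endpoint_spec=docker.types.EndpointSpec(ports={80: 8080}),   # publish 80 -> container 8080
    healthcheck=docker.types.Healthcheck(
        test=["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],  # assumed endpoint
        interval=10_000_000_000,  # Docker expects nanoseconds
        timeout=5_000_000_000,
        retries=3,
    ),
    update_config=docker.types.UpdateConfig(parallelism=1, order="start-first"),  # rolling update
)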
I would use single-host Kubernetes only if I also managed full clusters for the same project that I wanted to deploy to that host. This lets you reuse the manifests and all the automation you've created for your clusters.
Had I single-host environments only, I would probably stick to docker-compose.
If you're looking to try it out your easiest options are probably minikube (easy to run single-node cluster locally but without some features) or using one of the free trial accounts for a managed Kubernetes service from one of the big cloud providers (fully-featured and multi-node but limited use before you have to pay).

How to add containers to a Kubernetes pod at runtime

I have a number of Jobs running on k8s.
These jobs run a custom agent that copies some files and sets up the environment for a user-provided (trusted) container to run.
This agent runs alongside the user container, captures the logs, waits for the container to exit, and processes the generated results.
To achieve this, we mount Docker's socket /var/run/docker.sock and run as a privileged container, and from within the agent we use docker-py to interact with the user container (setup, run, capture logs, terminate).
This works almost fine, but I'd consider it a hack. Since the user container was created by calling Docker directly on a node, k8s is not aware of its existence. This has been causing trouble since our monitoring tools interact with k8s and don't get visibility into these stand-alone user containers. It also makes pod scheduling harder to manage, since the limits (cpu/memory) for the user container are not accounted for in the pod's requests.
I'm aware of init containers, but these don't quite fit this use case, since we want to keep the agent running and monitoring the user container until it completes.
Is it possible for a container running on a pod, to request Kubernetes to add additional containers to the same pod the agent is running? And if so, can the agent also request Kubernetes to remove the user container at will (e.g. certain custom condition was met)?
From this GitHub issue, it seems that the answer is that adding or removing containers to a pod is not possible, since the container list in the pod spec is immutable.
In Kubernetes 1.16 there is an alpha feature that allows the creation of ephemeral containers, which can be "added" to running pods. Note that this requires a feature gate to be enabled on the relevant components, e.g. the kubelet. This may be hard to enable on the control plane for cloud-provider-managed services such as EKS.
API Reference 1.16
Simple tutorial
I don't think you can alter a running pod like that, but you can certainly define your own pod and run it programmatically using the API.
What I mean is you should define a pod with the user container and whatever other containers you wish, and run it as a unit. It's possible you'll need to play around with liveness checks to have post-processing complete after your user container dies.
You can share data between multiple containers in a pod using shared volumes. This would let your agent container read log files written by the user container, and drop config files into the shared volume for setup.
This way you could run the user container and the agent container as a Job with both containers in the pod. When both containers exit, the Job is complete.
You seem to indicate above that you are manually terminating the user container. That wouldn't be supported via a shared volume unless you did something like forcing user containers to terminate when a particular file appears on the shared volume.
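A rough sketch of that pattern with the Python client (the images, mount path, and sentinel-file convention are all assumptions):

from kubernetes import client, config

config.load_kube_config()

shared = client.V1Volume(name="shared", empty_dir=client.V1EmptyDirVolumeSource())
mount = client.V1VolumeMount(name="shared", mount_path="/shared")

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="user-task"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                volumes=[shared],
                containers=[
                    # The agent drops config into /shared, tails logs from it, and could
                    # write /shared/terminate when it wants the user code to stop.
                    client.V1Container(name="agent", image="example/agent:latest",     # placeholder
                                       volume_mounts=[mount]),
                    # The user container reads its config from /shared and, by convention
                    # only, exits when /shared/terminate appears.
                    client.V1Container(name="user", image="example/user-code:latest",  # placeholder
                                       volume_mounts=[mount]),
                ],
            ),
        ),
    ),
)
client.BatchV1Api().create_namespaced_job("default", job)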
Is it possible for a container running on a pod, to request Kubernetes to add additional containers to the same pod the agent is running? And if so, can the agent also request Kubernetes to remove the user container at will (e.g. certain custom condition was met)?
I'm not aware of any way to add containers to existing Job pod definitions. There's no replicas option for Jobs so you couldn't hack it by changing replicas from 0->1 like you potentially could on a Deployment.
I'm not aware of any way to use kubectl to delete a container but not the whole pod. See kubectl delete.
If you want to kill the user container (rather than having it run to completion), you'll have to get on the host and use docker kill <sha> on the user container. Make sure to set .spec.template.spec.restartPolicy = "Never" in the pod spec or k8s will restart it.
I'd recommend:
Having a shared volume to transfer logs to the agent and so the agent can set up the user container
Making user containers expect to exit on their own and read configs from the shared volume
I don't know what workloads you are running or how users are building containers, so that may not be possible. If you're not able to dictate how users build their containers, the above may not work.
Another option is providing a binary that acts as a command API in the user container. This binary could accept commands like "setup", "run", "terminate", "transfer logs" via RPC, and it would be the main process in their Docker container.
Then you could make the build process for users something like:
FROM your-container-with-binary:latest
# put whatever you want in this container
ENV JOB_PATH=/path/to/executable/code
# (or put the code in a specific location)
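A very rough sketch of what that command binary could look like (purely illustrative; the HTTP endpoints, port, and the run.sh entrypoint under JOB_PATH are assumptions, and a real version would need auth and error handling):

import os
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

JOB_PATH = os.environ.get("JOB_PATH", "/job")  # hypothetical location of the user's code

class CommandHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Each path is a command the agent can invoke over plain HTTP.
        if self.path == "/run":
            subprocess.Popen([os.path.join(JOB_PATH, "run.sh")])  # assumed entrypoint name
            self.send_response(202)
        elif self.path == "/terminate":
            # A real implementation would track and signal the child process here.
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), CommandHandler).serve_forever()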
Lots of moving parts to this whichever way you make it happen.
You can inject containers into pods dynamically via admission controllers: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized. The controllers consist of the list below, are compiled into the kube-apiserver binary, and may only be configured by the cluster administrator. In that list, there are two special controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. These execute the mutating and validating (respectively) admission control webhooks which are configured in the API.
Admission controllers may be “validating”, “mutating”, or both. Mutating controllers may modify the objects they admit; validating controllers may not.
And you can inject additional runtime requirements into pods via Pod Presets: https://kubernetes.io/docs/concepts/workloads/pods/podpreset/
A Pod Preset is an API resource for injecting additional runtime requirements into a Pod at creation time. You use label selectors to specify the Pods to which a given Pod Preset applies.
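For the MutatingAdmissionWebhook route, the webhook itself is just an HTTPS endpoint that returns a JSONPatch; note that it mutates pods at creation time, not pods that are already running. A bare-bones sketch (Flask-based; TLS setup and the MutatingWebhookConfiguration object are omitted, and the sidecar image is a placeholder):

import base64
import json
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/mutate", methods=["POST"])
def mutate():
    review = request.get_json()
    uid = review["request"]["uid"]
    # JSONPatch that appends a sidecar container to the incoming Pod spec.
    patch = [{
        "op": "add",
        "path": "/spec/containers/-",
        "value": {"name": "injected-agent", "image": "example/agent:latest"},  # placeholder
    }]
    return jsonify({
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    })

if __name__ == "__main__":
    # Real admission webhooks must serve HTTPS with a certificate the API server trusts.
    app.run(host="0.0.0.0", port=8443)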

Container Orchestration for provisioning single containers based on user action

I'm pretty new to Docker orchestration and managing a fleet of containers. I want to build an app that gives the user a container when they run a command. What is the best tool and best way to accomplish this?
I plan on having a pool of CoreOS servers to run the containers on, and I imagine the scheduler having an API that I can call to create the container.
Most of what I have seen with Nomad, Kubernetes, Docker Swarm, etc. is how to provision multiple clusters of containers all doing the same thing. I want to be able to create a single container based on a user's command and then communicate with an API on that container. Does anyone have experience with this?
I'd look at Kubernetes + the Jobs API (short lived) or Deployments (long lived)
I'm not sure exactly what you mean by command, but I'll assume it's some sort of dev environment triggered by a CLI, make-dev.
The user triggers make-dev, which sends a webhook to your app sitting in front of the Jobs API, ideally doing rate-limiting and/or auth.
Your app takes the command, sanity-checks it, then fires off a Job/Deployment request plus an Ingress rule and a Service.
Kubernetes schedules it out across your fleet of machines.
Your app waits for the pod to start, then returns the address of the API with a unique identifier (the same thing used in the ingress rule), like devclusters.com/foobar123.
User now accesses their service at that address. Internally Kubernetes uses the ingress and service to route the requests to your pod
This should scale well, and if your different environments use the same base container image, they should start really fast.
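One small piece of that flow, waiting for the pod to come up before handing back the address (a sketch with the Python client; the label selector, namespace, and URL scheme are assumptions):

import time
from kubernetes import client, config

config.load_incluster_config()
core = client.CoreV1Api()

def wait_for_pod(label_selector, namespace="default", timeout=120):
    # Poll until a pod matching the selector is Running, then return its name.
    deadline = time.time() + timeout
    while time.time() < deadline:
        pods = core.list_namespaced_pod(namespace, label_selector=label_selector).items
        for pod in pods:
            if pod.status.phase == "Running":
                return pod.metadata.name
        time.sleep(2)
    raise TimeoutError(f"no running pod for {label_selector}")

# e.g. after creating the Job/Ingress/Service for user foobar123:
# wait_for_pod("job-name=dev-foobar123")
# then return something like https://devclusters.com/foobar123 to the user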
Plug: If you want an easy CoreOS + Kubernetes cluster plus a UI try https://coreos.com/tectonic
I plan on having a pool of CoreOS servers to run the containers on and I'm imagining the scheduler to have an API that I can just call to create the container
Kubernetes comes with a RESTful API that you can use to directly create pods (the unit of work in Kubernetes, which contains one or more containers) within your cluster.
The command-line utility kubectl also interacts with the cluster in exactly the same way, via the API. There are client libraries written in golang, Java, and Python at the moment, with others on the way, to help communicate with the cluster.
If you later want a higher-level abstraction to manage pods, update them, and manage their lifetimes, look at one of the controllers (ReplicaSet, ReplicationController, Deployment, StatefulSet).
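As an illustration of calling that REST API directly from inside a pod (a sketch; the pod name and image are placeholders, and the service account needs permission to create pods):

import requests

# In-cluster defaults: the API server Service and the mounted service account credentials.
token = open("/var/run/secrets/kubernetes.io/serviceaccount/token").read()
ca = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "user-job-1"},  # placeholder name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{"name": "main", "image": "example/user-image:latest"}],  # placeholder image
    },
}

resp = requests.post(
    "https://kubernetes.default.svc/api/v1/namespaces/default/pods",
    json=pod,
    headers={"Authorization": f"Bearer {token}"},
    verify=ca,
)
resp.raise_for_status()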

Kubernetes configuration to link containers

I am trying to see if there are any examples of creating a Kubernetes Pod that starts 2-3 containers, where these containers are linked with each other, but I couldn't find any.
Has anybody tried linking containers using a Kubernetes config?
Containers in the same pod share localhost, so you don't need to link containers; just use localhost:containerPort.
You have to use a Kubernetes Service (proxy): https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#how-do-they-work.
Have a look at how they work together: https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook
To be specific, there is no concept of "linking" similar to the way Docker does it. Every Service endpoint is a fully qualified domain name, and you just call it from one container to another; every label on a container that can be picked up by a Service can be used to direct network traffic. So you don't have to do ENV["$FOO_BAR_BAZ"] to get the correct IP; just call it directly (curl http://foo_bar_baz).
I think you are asking about a single-pod, multiple-container configuration.
In Kubernetes, the smallest deployable unit is a pod. Multiple containers in a pod share the same network and IPC namespaces, so processes in different containers can reach each other via localhost:port.
A Pod is the basic unit of deployment in Kubernetes.
You can run one or more containers in a pod. They share the same network, i.e. localhost, so you don't need to specify a link URL; just use localhost:containerPort.
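For example (a sketch, assuming another container in the same pod is listening on port 8080):

import requests

# From any container in the pod, a sibling container's port is reachable on localhost.
resp = requests.get("http://localhost:8080/health")  # assumed endpoint
print(resp.status_code)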
