How to create a k8s custom pod with a specific configuration?

I have to say that my question is a little confusing, but I'll try to be as clear as possible:
In Docker there is a command to run a container and make it use another container's network. The command is: docker run --net=container:<container-name-or-id>
So basically, I want to make k8s execute that command to create a pod. Is that possible? Or is there some alternative command for that in k8s?
In other words, what command does the k8s api-server execute to create containers on worker nodes?
That's a lot of questions, I know, but I hope you understand what I'm trying to say.

I am not sure I understand what you want, but if you want to capture a pod's network traffic you can use a service mesh like Istio or Linkerd.
I have worked with Istio, and with it you can get metrics for all traffic within a cluster.

Related

How to add docker container hostname and network in Kubernetes Deployment?

I have Docker images for the different parts of an ELK stack, and I want them to communicate with each other. I have achieved this by creating a Docker network and accessing the containers via hostname. I want to know if we can pass these properties in Kubernetes or not.
Can we create a Docker network there? And how do we pass these properties inside the Deployment YAML?
I have created a Docker network named "elk", and then passed it in the run arguments (as docker run --network=elk -h elasticsearch ....).
I am expecting to create this network in the Kubernetes cluster and then pass these properties to the Deployment YAML.
Kubernetes does not have Docker's notion of separate per-application isolated networks. You can't reproduce this Docker setup in Kubernetes and don't need to. Also see Services, Load Balancing, and Networking in the Kubernetes documentation.
In Kubernetes you usually do not communicate directly with Pods (containers). Instead, you also create a Service matching each Deployment, and then make calls to the Service name and port.
If you're currently deploying containers with docker run --net=... then you can ignore that option when migrating to Kubernetes. If you're using Compose, I'd suggest first trying to update the Compose setup to use only the Compose-provided default network, removing all of the networks: blocks.
For something like Elasticsearch, you probably want to run it in a StatefulSet which can also manage the per-replica storage. This has specific requirements around corresponding Services, and it does provide a way to connect to a specific replica when you need to. Relevantly to this question, if the StatefulSet is named elasticsearch then the Pods will be named elasticsearch-0, elasticsearch-1, and so on, and these names will also be visible as the hostname(8) inside the container, matching the docker run -h option.
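As a concrete illustration of the Service-per-Deployment idea, here is a minimal sketch using the Kubernetes Python client (the same object is more commonly written as YAML and applied with kubectl; the elasticsearch name, the app=elasticsearch label and port 9200 are assumptions to adapt to your own Deployment or StatefulSet):

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# A ClusterIP Service named "elasticsearch" that selects Pods labelled app=elasticsearch.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="elasticsearch"),
    spec=client.V1ServiceSpec(
        selector={"app": "elasticsearch"},
        ports=[client.V1ServicePort(port=9200, target_port=9200)],
    ),
)
core.create_namespaced_service(namespace="default", body=service)

Other Pods in the same namespace can then reach it as http://elasticsearch:9200; the Service name takes over the role that the elk Docker network and the -h elasticsearch hostname played before.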

Automatically Generate Pods in Kubernetes

I previously created a Flask server that spawns Docker containers using the Docker Python SDK. When a client hits a specific endpoint, the server generates a container. It maintains queues, and it is able to kill containers that don't respond to requests.
I want to migrate towards Kubernetes, but I am starting to think my current server won't be able to "spawn" jobs as pods automatically the way it does with Docker:
docker.from_env().containers.run('alpine', 'echo hello world')
Is Docker Swarm a better solution for this, or is there a hidden practice that is done in Kubernetes? Would the Kubernetes Python API be a logical solution for automatically generating pods and jobs, where the Flask server is a pod that manages other pods within the cluster?
kubectl run is much like docker run in that it will create a Pod with a container based on a Docker image (e.g. How do I run curl command from within a Kubernetes pod). See https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/ for more comparison. But what you run with k8s are Pods/Jobs that contain containers, rather than running containers directly, so this adds an extra layer of complexity for you.
Kubernetes is more about orchestrating services than running short-lived jobs. It has some features for this and can be used to run jobs, but that isn't its central focus. If you're going in that direction you may want to look at knative (and knative build) or kubeless, as what you describe sounds rather like the serverless concept. Or, if you are thinking more about Jobs, then perhaps brigade (https://brigade.sh). (For more see https://www.quora.com/Is-Kubernetes-suited-for-long-running-batch-jobs) If you are looking to run web app workloads that serve requests, then note that you don't need to kill containers that fail to respond on k8s, as k8s will monitor and restart them for you.
I don't know Swarm well enough to compare. I suspect it would be a bit easier for you, as it is aimed more squarely at Docker (the k8s API is intended to support other runtimes), but perhaps somebody else can comment on that. Whether using Swarm instead helps you will, I guess, depend on your motivations.
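To make the Python side of the question concrete, here is a minimal sketch of what the Flask endpoint could do with the official kubernetes Python client instead of the Docker SDK. The Job name, image and namespace are placeholders, and completion watching / cleanup (for example ttl_seconds_after_finished) are left out:

from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when the Flask server runs inside the cluster
batch = client.BatchV1Api()

# Rough Job equivalent of docker.from_env().containers.run('alpine', 'echo hello world');
# "hello-world" and the "default" namespace are placeholders.
job = client.V1Job(
    metadata=client.V1ObjectMeta(name="hello-world"),
    spec=client.V1JobSpec(
        backoff_limit=0,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="hello",
                    image="alpine",
                    command=["echo", "hello world"],
                )],
            ),
        ),
    ),
)
batch.create_namespaced_job(namespace="default", body=job)

The Flask server itself can then run in the cluster as a Deployment, with a ServiceAccount that has RBAC permission to create Jobs.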

How to run a swarm in Docker?

I'm new to Docker and I'm doing the Get Started part of the documentation, but I got stuck in step 4. I don't make any mistakes when doing this step, but when I enter the IP 192.168.99.100 it does not show me anything. I hope you can help me, thank you.
docker info
Angel, I do not know which step or docs you're talking about (adding links would help a lot), but there's only one way to start a Docker Swarm:
docker swarm init
You may also specify the IP of the machine you're starting the swarm in if it has more than one network interface:
docker swarm init --advertise-addr <ip-where-you-want-the-node-to-listen-to-swarm-events>
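If you ever want to do the same thing programmatically rather than from the CLI, the Docker SDK for Python exposes the same operation (a rough sketch; the advertise address is just the example IP from the question):

import docker  # pip install docker

client = docker.from_env()
# Equivalent of: docker swarm init --advertise-addr 192.168.99.100
client.swarm.init(advertise_addr="192.168.99.100", listen_addr="0.0.0.0:2377")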
I would really recommend that you do not use Docker Toolbox; instead, use Play With Docker, where you'll be able to spawn nodes and try things out without needing to configure anything.

Kubernetes pods versus Docker container in Google's codelabs tutorial

This question pertains to the Kubernetes tutorial on Google's CodeLabs found here: https://codelabs.developers.google.com/codelabs/cloud-compute-kubernetes/index.html?index=..%2F..%2Fgcp-next#15
I'm new to both Docker and Kubernetes and am confused over their use of the term "pods" which seems to contradict itself.
From that tutorial:
A Kubernetes pod is a group of containers, tied together for the purposes of administration and networking. It can contain one or more containers. All containers within a single pod will share the same networking interface, IP address, disk, etc. All containers within the same pod instance will live and die together. It's especially useful when you have, for example, a container that runs the application, and another container that periodically polls logs/metrics from the application container.
That is in-line with my understanding of how Kubernetes pods relate to containers, however they then go on to say:
Optional interlude: Look at your pod running in a Docker container on the VM
If you ssh to that machine (find the node the pod is running on by using kubectl describe pod | grep Node), you can then ssh into the machine with gcloud compute ssh . Finally, run sudo docker ps to see the actual pod
My problems with the above quote:
. "Look at your pod running in a Docker container" appears to be
backwards. Shouldn't it say "Look at your Docker container running
on the VM"?
"...run sudo docker ps to see the actual pod" doesn't make sense, since "docker ps" lists docker containers, not pods.
So am I way off base here or is the tutorial incorrect?
As mentioned above, a pod can run more than one container, but in practice running more than one container in a pod is the exception and definitely not the common case. You may look at a pod as a "container++"; that's the easy way to look at it.
If you are starting with Kubernetes, I have written the blog post below that explains the three main entities you need to be familiar with to get started with Kubernetes: pods, deployments and services.
http://codefresh.io/blog/kubernetes-snowboarding-everything-intro-kubernetes/
Feedback welcome!
One nuance that most people don't know about Docker running Kubernetes is that it runs an outdated version. I found that if I went to Google's cloud-based solution for Kubernetes, everything was quite easy to set up. Here is my sample code for how I set up Kubernetes with Docker.
I had to use the Docker command-line utility, though, to get everything to work properly. I think this should point you in the right direction.
(I've started learning Kubernetes and have some experience with Docker.)
I think the important point about pods is that a pod may contain container(s) which are not from Docker, but from some other container runtime.
In this respect the phrasing in problem 1 is valid: the output confirms that this pod is running in Docker rather than in some other runtime.
And regarding problem 2: the phrase means that you should inspect further details about the pod with the docker command; in theory a different command may be needed for other runtimes. What docker ps actually shows on the node are the pod's individual containers, plus a small infrastructure ("pause") container that holds the pod's shared network namespace.

run docker after setup network

I'm new to Docker. I have run into some problems; can anyone help me?
I want to run a container with macvlan.
In my case, I will run a container with --net=none first, then configure the network using the ip command (or using netns in Python).
The order is:
run the container
run the app inside the container
set up the network
My question is how to set up the network first and only then run the app, so that the order is:
run the container
set up the network
run the app inside the container
Maybe I could write the network configuration script to a file and run it before everything else in the Dockerfile, but that way the network and the container are tightly coupled, and I would need to edit it manually for every container.
So is there a better way to handle this situation?
Thanks in advance.
There is a --net=container:<name> argument to docker run which makes the new container share the network namespace of an existing container.
So you could first launch a container with --net=none and a script that sets up the networking, then launch your application container with --net=container:<network-container> so that it uses that network stack. That would keep the network configuration and the application decoupled.
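As a rough sketch of that pattern with the Docker SDK for Python (the image names and the netns-holder name are placeholders, and the actual ip/netns/macvlan configuration is the part you already have):

import docker  # pip install docker

client = docker.from_env()

# 1. Start a minimal container whose only job is to own the network namespace (--net=none).
net_holder = client.containers.run(
    "alpine", "sleep infinity",
    name="netns-holder",
    network_mode="none",
    detach=True,
)

# 2. Configure the namespace from the host here with your ip/netns (macvlan) script,
#    targeting net_holder.id.

# 3. Launch the application container inside that already-configured network stack
#    (--net=container:netns-holder).
app = client.containers.run(
    "my-app-image",
    name="my-app",
    network_mode="container:netns-holder",
    detach=True,
)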
Also, take a look at the pipework project if you haven't already.
In general though, I would suggest you are better off looking at existing solutions like Weave and Project Calico.
