Linking Containers in POD in K8S - docker

I want to link my selenium/hub container to my chrome and firefox node containers in a POD.
In docker, it was easily defined in the docker compose yaml file.
I want to know how to achieve this linking in kubernetes.
Here is the pod definition I tried:
apiVersion: v1
kind: Pod
metadata:
  name: mytestingpod
spec:
  containers:
  - name: seleniumhub
    image: selenium/hub
    ports:
    - containerPort: 4444
      hostPort: 4444
  - name: chromenode
    image: selenium/node-chrome-debug
    ports:
    - containerPort: 5901
    links: seleniumhub:hub
  - name: firefoxnode
    image: selenium/node-firefox-debug
    ports:
    - containerPort: 5902
    links: seleniumhub:hub

You don't need to link them. The way Kubernetes works, all the containers in the same Pod are already on the same networking namespace, meaning that they can just talk to each other through localhost and the right port.
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports. Each pod has an IP address in a flat shared networking space that has full communication with other physical computers and pods across the network.
If you want to access the chromenode container from the seleniumhub container, just send a request to localhost:5901.
If you want to access the seleniumhub container from the chromenode container, just send a request to localhost:4444.
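As a rough sketch, the same pod without the links: lines could look like the following. Pointing the node containers at the hub via the HUB_HOST/HUB_PORT environment variables is an assumption based on the selenium node images; check the documentation of the image version you use.
apiVersion: v1
kind: Pod
metadata:
  name: mytestingpod
spec:
  containers:
  - name: seleniumhub
    image: selenium/hub
    ports:
    - containerPort: 4444
  - name: chromenode
    image: selenium/node-chrome-debug
    ports:
    - containerPort: 5901
    env:
    - name: HUB_HOST        # assumption: the node image reads these variables
      value: "localhost"    # the hub runs in the same pod, so localhost works
    - name: HUB_PORT
      value: "4444"
  # firefoxnode would follow the same pattern as chromenode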

Simply use kompose described in "Translate a Docker Compose File to Kubernetes Resources": it will translate your docker-compose.yml file into kubernetes yaml files.
You will then see how the selenium/hub container declaration is translated into kubernetes config files.
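For example, assuming your compose file is named docker-compose.yml:
kompose convert -f docker-compose.yml
This typically generates a Deployment (and, where ports are published, a Service) per compose service, which you can then adjust by hand.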
Note though that docker links are obsolete.
Try instead to follow the kubernetes examples/selenium which are described here.
The way you connect applications with Kubernetes is through a service:
See "Connecting Applications with Services".

Related

Which ports are supposed to be exposed in a Helm Chart when TWO ports are exposed in the docker image?

When working with Helm charts (generated by helm create <name>) and specifying a Docker image in values.yaml, such as "kubernetesui/dashboard:v2.4.0" whose Dockerfile declares EXPOSE 8443 9090, I found it hard to know how to properly specify these ports in the actual Helm chart files, and I was wondering if anyone could explain the topic a bit further.
By my understanding, EXPOSE 8443 9090 means that hostPort "8443" maps to containerPort "9090". In that case it seems clear that service.yaml should specify the ports in a manner similar to the following:
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: 8443
      targetPort: 9090
The deployment.yaml file, however, only comes with the field "containerPort" and no port field for the 8443 port (as you can see below). Should I add some field here in deployment.yaml to include port 8443?
spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          ports:
            - name: http
              containerPort: 9090
              protocol: TCP
As of now, when I try to install the Helm charts, I get the error message "Container image "kubernetesui/dashboard:v2.4.0" already present on machine", and I've heard that it means the ports in service.yaml are not configured to match the Docker image's exposed ports. I have tested this with a simpler Docker image that only exposes one port, and after adding the port everywhere the error message goes away, so that seems to be true, but I am still confused about how to do it with two exposed ports.
I would really appreciate some help. Thank you in advance if you have any experience with this and are willing to share.
A Docker image never gets to specify any host resources it will use. If the Dockerfile has EXPOSE with two port numbers, then both ports are exposed (where "expose" means almost nothing in modern Docker). That is: this line says the container listens on both port 8443 and 9090 without requiring any specific external behavior.
In your Kubernetes Pod spec (usually nested inside a Deployment spec), you'd then generally list both ports as containerPorts:. Again, this doesn't really say anything about how a Service uses it.
# inside templates/deployment.yaml
ports:
  - name: http
    containerPort: 9090
    protocol: TCP
  - name: https
    containerPort: 8443
    protocol: TCP
Then in the corresponding Service, you'd republish either or both ports.
# inside templates/service.yaml
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: 80          # default HTTP port
      targetPort: http  # matching name: in the pod; could also use 9090
    - port: 443         # default HTTP/TLS port
      targetPort: https # matching name: in the pod; could also use 8443
I've chosen to publish the unencrypted and TLS-secured ports on their "normal" HTTP ports, and to bind the service to the pod using the port names.
None of this setup is Helm-specific; the only Helm template reference here is the Service type: (in case the operator needs to publish a NodePort or LoadBalancer service).
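For reference, the only value those templates read from values.yaml is the service type; a minimal (assumed) excerpt could be:
# values.yaml
service:
  type: ClusterIP   # or NodePort / LoadBalancer, depending on how the service is published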

How do I translate a docker command with -p 80:80 to Kubernetes YAML

docker run -it -p 80:80 con-1
docker run -it -p hostport:containerport
Let's say I have the YAML definition below. Is the ports -> containerPort: 80 part sufficient? In other words, how do I account for -p 80:80 (the host port and the container port) in a Kubernetes YAML definition?
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
Exposing ports of applications with k8s is different from exposing them with docker.
For pods, the spec.containers.ports field isn't used to expose ports. It is mostly used for documentation purposes and also to name ports, so that you can reference them later in a service object's targetPort by name instead of by number (https://stackoverflow.com/a/65270688/12201084).
So how do we expose pods to the outside?
It's done with service objects. There are 4 types of service: ClusterIP, NodePort, LoadBalancer and ExternalName.
They are all well explained in the k8s documentation, so I am not going to explain them here. Check out the K8s docs on types of services.
Assuming you know what type you want to use you can now use kubectl to create this service:
kubectl expose pod <pod-name> --port <port> --target-port <targetport> --type <type>
kubectl expose deployment <deployment-name> --port <port> --target-port <targetport> --type <type>
Where:
--port - is used to specify the port on which you want to expose the application
--target-port - is used to specify the port on which the application is running
--type - is used to specify the type of service
With docker you would use -p <port>:<target-port>
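For example, something like docker run -p 8080:80 nginx roughly corresponds to the following (the pod name and ports are placeholders, and note that with type NodePort the externally reachable port is the node port, not --port):
docker run -p 8080:80 nginx
kubectl expose pod nginx --port 8080 --target-port 80 --type NodePort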
OK, but maybe you don't want to use kubectl to create a service and would rather keep the service in git (or wherever) as a YAML file. You can check out the examples in the k8s docs, copy them and write your own YAML, or do the following:
$ kubectl expose pod my-svc --port 80 --dry-run=client -oyaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: my-svc
  name: my-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-svc
status:
  loadBalancer: {}
Note: notice that if you don't pass a value for --target-port it defaults to the same value as --port
Also notice the selector field, which has the same values as the labels on the pod. It will forward traffic to every pod with this label (within the namespace).
Now, if you don't pass a value for --type, it defaults to ClusterIP, which means the service will be accessible only from within the cluster.
If you want to access the pod/application from the outside, you need to use either NodePort or LoadBalancer.
NodePort opens some random port on every node, and connecting to this port will forward the packets to the pod. The problem is that you can't just pick any port to open, and often you don't even get to pick the port at all (it's randomly assigned).
In case of type LoadBalancer you can use whatever port you'd like, but you need to be running in a cloud and use the cloud provisioner to create and configure an external load balancer for you and point it at your pod. If you are running on bare metal, you can use projects like MetalLB to make use of the LoadBalancer type.
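Written as YAML, a minimal NodePort service might look like this (labels and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  type: NodePort
  selector:
    run: my-svc        # must match the labels on the pod
  ports:
  - port: 80           # service port inside the cluster
    targetPort: 80     # port the application listens on
    # nodePort: 30080  # optional; must fall in the node port range (30000-32767 by default)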
To summarize, exposing containers with docker is totally different from exposing them with kubernetes. Don't assume k8s will work the same way docker works just with different notation, because it won't.
Read the docs and blogs about k8s services and learn how they work.

How to combine multiple images (redis + memcache + python) into 1 single container in a pod

How do I combine multiple images (redis + memcache + python) into a single container in a pod using the kubectl command?
Do we have any other option instead of creating a custom Docker image with all the required images?
Instead of this, you could run all three containers in a single Kubernetes pod, which is what I would recommend if they are tightly coupled.
It's a good idea to keep each container as small as it needs to be to do one thing.
Just add more containers to your pod spec...
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: python
      ports:
        - containerPort: 80
    - name: key-value-store
      image: redis
      ports:
        - containerPort: 6379
    - name: cache
      image: memcached
      ports:
        - containerPort: 11211   # memcached's default port
          name: memcached
I wouldn't use a pod directly, but the same idea applies to pods created by deployments, daemonsets, etc.

Kubernetes - Create two containers in one pod

I'm developing an application that consists of two containers. I want to deploy them in Kubernetes into one Pod, since I want the two services to be behind one IP address. However, I'm having a hard time trying to connect the Kubernetes Services with the containers.
How could I write a deployment.yml file so that when the user calls x.x.x.x:port1, the request is forwarded to the first container, and when x.x.x.x:port2 is called, the request is forwarded to the second container? How could I specify the Services?
Here's what I have until now:
apiVersion: v1
kind: Pod
metadata:
  name: my_app
spec:
  containers:
    - name: first_container
      image: first_image
    - name: second_container
      image: second_image
---
apiVersion: v1
kind: Service
...
In your containers section you need to define a containerPort for each:
containers:
  - name: first_container
    image: first_image
    ports:
      - containerPort: port1
  - name: second_container
    image: second_image
    ports:
      - containerPort: port2
And then in the ports section of the service definition you need to point the targetPorts of the service at those ports like in https://stackoverflow.com/a/42272547/9705485
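For completeness, a single Service can publish both ports; in this sketch port1/port2 and the app: my-app label are placeholders, and the label must be added to the pod's metadata for the selector to match.
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  selector:
    app: my-app        # assumption: add this label to the pod
  ports:
  - name: first
    port: port1        # placeholder: replace with the first container's port number
    targetPort: port1
  - name: second
    port: port2        # placeholder: replace with the second container's port number
    targetPort: port2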

How to specify host port range instead of host port in kubernete's pod yaml file?

In the docker run command, we can specify a host port range to bind to an EXPOSEd container port. I want to do the same thing through Kubernetes. Does anyone know how to do that? My current pod definition is:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-testing
spec:
  containers:
    - name: nginx-container
      image: docker.io/nginx
      ports:
        - containerPort: 80
          hostPort: 9088
On the last line, instead of specifying a single port number, I want a range of port numbers. I tried something like hostPort: 9088-9999 or 9088..9999, but it didn't work.
Port ranges are not currently supported in any of the Kubernetes API objects. There is an open issue discussing port ranges in services. Please add your use case and your thoughts!
