The context
Let me know if I've gone down a rabbit hole here.
I have a simple web app with a frontend and backend component, deployed using Docker/Helm inside a Kubernetes cluster. The frontend is served by nginx, and the backend component runs a Node.js microservice.
I had been thinking of having both run in the same pod, but ran into some problems getting both nginx and Node to run in the background. I could try having a startup script that runs both, but the Internet says it's a best practice to have each container be responsible for only one service - so one container to run nginx and another to run the microservice.
The problem
That's fine, but then say the nginx server's HTML pages need to know where to send a POST request in the backend - how can the HTML pages know what IP to hit for the backend's Docker container? Articles like this one come up talking about manually creating a Docker network for the two containers to speak to one another, but how can I configure this with Helm so that the frontend container knows how to reach the backend container each time a new container is deployed, without having to manually configure any network service each time? I want the deployments to be automated.
You mention that your frontend is based on Nginx.
Accordingly, the frontend must hit the public URL of the backend.
Thus, the backend must be exposed by choosing a service type:
NodePort -> the frontend will communicate with the backend via http://<any-node-ip>:<node-port>
or LoadBalancer -> the frontend will communicate with the backend via http://<loadbalancer-external-ip>:<service-port> of the service.
or keep it ClusterIP, but add an Ingress resource on top of it -> the frontend will communicate with the backend via its ingress host, http://ingress.host.com.
We recommend the last way, but it requires an ingress controller.
Once you have tested one of them and it works, you can extend your Helm chart to update the service and add the Ingress resource if needed.
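For the last option, a minimal sketch of the backend Service and Ingress could look like this (assuming the backend pods are labelled app: backend and the Node.js container listens on port 3000; adjust names, ports and the host to match your chart):
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP
  selector:
    app: backend          # assumed pod label from the backend deployment
  ports:
    - port: 80            # port the Ingress forwards to
      targetPort: 3000    # assumed port the Node.js container listens on
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
spec:
  rules:
    - host: ingress.host.com        # the ingress host from the example above
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80
With this in place the frontend's pages can POST to http://ingress.host.com/... without ever knowing a pod or container IP, and Helm can template the names and host so nothing has to be wired up manually per deployment.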
You may try to set up two containers in one pod and communicate between them via localhost (but on different ports!). A good example is here - Kubernetes multi-container pods and container communication.
Another option is to create two separate deployments and a service for each. Instead of using IP addresses (which won't be the same across re-deployments of your app), use the services' DNS names to connect to them.
Example - two NGINX services communication.
First create two NGINX deployments:
kubectl create deployment nginx-one --image=nginx --replicas=3
kubectl create deployment nginx-two --image=nginx --replicas=3
Let's expose them using the kubectl expose command. It's the same as if I had created the services from a YAML file (a sketch of the equivalent manifest follows the commands):
kubectl expose deployment nginx-one --name=my-service-one --port=80
kubectl expose deployment nginx-two --name=my-service-two --port=80
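For reference, the YAML equivalent of the first expose command would look roughly like this (kubectl create deployment labels the pods app: nginx-one, which the selector below relies on):
apiVersion: v1
kind: Service
metadata:
  name: my-service-one
spec:
  selector:
    app: nginx-one        # label set by kubectl create deployment
  ports:
    - port: 80
      targetPort: 80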
Now let's check services - as you can see both of them are ClusterIP type:
user#shell:~$ kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.36.0.1      <none>        443/TCP   66d
my-service-one   ClusterIP   10.36.6.59     <none>        80/TCP    60s
my-service-two   ClusterIP   10.36.15.120   <none>        80/TCP    59s
I will exec into a pod from the nginx-one deployment and curl the second service:
user#shell:~$ kubectl exec -it nginx-one-5869965455-44cwm -- sh
# curl my-service-two
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
If you have problems, make sure you have a proper CNI plugin installed for your cluster - also check this article - Cluster Networking for more details.
Also check these:
My similar answer but with a wider explanation + example of communication between two namespaces.
Access Services Running on Clusters | Kubernetes
Service | Kubernetes
Debug Services | Kubernetes
DNS for Services and Pods | Kubernetes
Related
I'm trying to build my first Kubernetes project, but I may have some configuration issues.
For example, I wanted to run this project:
https://gitlab.com/codeching/kubernetes-multicontainer-application-react-nodejs-postgres-nginx
I did:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/cloud/deploy.yaml
Then
kubectl apply -f k8s
But when I open http://localhost I just get ERR_EMPTY_RESPONSE.
Does anyone know why? I have a fresh install of Docker Desktop & Kubernetes, everything is green & working, but somehow I can't run even this simple project.
The ingress-nginx controller service is deployed with the LoadBalancer service type. If no load balancer gets attached, you can use port forwarding of the service to access applications in the cluster.
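For example, assuming the controller was installed with the default names from that manifest (namespace ingress-nginx, service ingress-nginx-controller), a port-forward like this should let you reach it on localhost:
kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80
# then browse to http://localhost:8080 instead of http://localhost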
I have a server that is orchestrated using k8s. Its service looks like below:
➜ installations ✗ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
oxd-server   ClusterIP   10.96.124.25   <none>        8444/TCP,8443/TCP   3h32m
and its pod:
➜ helm git:(helm-rc1) ✗ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
sam-test-oxd-server-6b8f456cb6-5gwwd   1/1     Running   0          3h2m
Now, I have a docker image with an env variable that requires the URL of this server.
I have 2 questions from here.
How can the docker image get the URL or access the URL?
How can I access the same URL in my terminal so I make some curl commands through it?
I hope I am clear on the explanation.
If your docker container is outside the kubernetes cluster, then it's not possible to access your ClusterIP service.
As you could guess by its name, ClusterIP type services are only accessible from within the cluster.
By within the cluster I mean any resource managed by Kubernetes.
A standalone docker container running inside a VM which is part of your K8S cluster is not a resource managed by K8S.
So, in order to achieve what you want, you have these possibilities:
Set a hostPort inside your pod. This is not recommended and is listed as a bad practice in the docs. Keep this usage for very specific cases.
Switch your service to NodePort instead of ClusterIP. This way, you'll be able to access it using a node IP + the node port.
Use a LoadBalancer type of service, but this solution needs some configuration and is not straightforward.
Use an Ingress along with an IngressController but just like the load balancer, this solution needs some configuration and is not that straightforward.
Depending on what you are doing and whether this is critical or not, you'll have to choose one of these solutions.
1 & 2 for debug/dev
3 & 4 for prod, but you'll have to work with your k8s admin
You can use the name of the service, oxd-server, from any other pod in the same namespace to access it; i.e., if the service is backed by pods serving HTTPS, you can access the service at https://oxd-server:8443/.
If the client pod that wants to access this service is in a different namespace, then you can use the oxd-server.<namespace> name. In your case that would be oxd-server.default, since your service is in the default namespace.
To access this service from outside the cluster (from your terminal) for local debugging, you can use port forwarding:
kubectl port-forward svc/oxd-server 8443:8443
Then you can use the URL localhost:8443 to make requests, and they will be forwarded to the service.
If you want to access this service from outside the cluster for production use, you can make the service as type: NodePort or type: LoadBalancer. See service types here.
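As a rough sketch, a NodePort version of your service could look like this (the selector and the nodePort value are assumptions; omit nodePort and Kubernetes will pick one from the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: oxd-server
spec:
  type: NodePort
  selector:
    app: oxd-server       # assumed pod label; check your deployment's labels
  ports:
    - name: https
      port: 8443
      targetPort: 8443
      nodePort: 30443     # illustrative value
The service would then be reachable from outside the cluster at https://<any-node-ip>:30443.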
I'm trying to create a Kubernetes cluster for learning purposes. So, I created 3 virtual machines with Vagrant, where the master has the IP address 172.17.8.101 and the other two are 172.17.8.102 and 172.17.8.103.
It's clear that we need Flannel so that our containers on different machines can connect to each other without port mapping. And for Flannel to work, we need etcd, because Flannel uses this datastore to put and get its data.
I installed Etcd on master node and put Flannel network address on it with command etcdctl set /coreos.com/network/config '{"Network": "10.33.0.0/16"}'
To enable ip masquerading and also using the private network interface in the virtual machine, I added --ip-masq --iface=enp0s8 to FLANNEL_OPTIONS in /etc/sysconfig/flannel file.
In order to make Docker use the Flannel network, I added --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} to the OPTIONS variable in the /etc/sysconfig/docker file. Note that the values for the FLANNEL_SUBNET and FLANNEL_MTU variables are the ones set by Flannel in the /run/flannel/subnet.env file.
After all these settings, I installed kubernetes-master and kubernetes-client on the master node and kubernetes-node on all the nodes. For the final configurations, I changed KUBE_SERVICE_ADDRESSES value in /etc/kubernetes/apiserver file to --service-cluster-ip-range=10.33.0.0/16
and KUBELET_API_SERVER value in /etc/kubernetes/kubelet file to --api-servers=http://172.17.8.101:8080.
This is the link to k8s-tutorial project repository with the complete files.
After all these efforts, all the services start successfully and work fine. It's clear that there are 3 nodes running when I use the command kubectl get nodes. I can successfully create a nginx pod with command kubectl run nginx-pod --image=nginx --port=80 --labels="app=nginx" and create a service with kubectl expose pod nginx-pod --port=8000 --target-port=80 --name="service-pod" command.
The command kubectl describe service service-pod outputs the following results:
Name: service-pod
Namespace: default
Labels: app=nginx
Selector: app=nginx
Type: ClusterIP
IP: 10.33.39.222
Port: <unset> 8000/TCP
Endpoints: 10.33.72.2:80
Session Affinity: None
No events.
The challenge is that when I try to connect to the created service with curl 10.33.79.222:8000, I get curl: (7) Failed connect to 10.33.72.2:8000; Connection refused, but if I try curl 10.33.72.2:80 I get the default nginx page. Also, I can't ping 10.33.79.222 and all the packets get lost.
Some suggested stopping and disabling firewalld, but it wasn't running on the nodes at all. Since Docker changed the FORWARD chain policy to DROP in iptables after version 1.13, I changed it back to ACCEPT, but that didn't help either. I eventually tried changing the CIDR and using different IPs/subnets, but no luck.
Does anybody know where I am going wrong, or how to figure out why I can't connect to the created service?
The only thing I can see that is conflicting is the pod CIDR overlapping with the CIDR you are using for the services.
The Flannel network is '{"Network": "10.33.0.0/16"}', and on the kube-apiserver you have --service-cluster-ip-range=10.33.0.0/16. That's the same range, and it should be different: you have kube-proxy setting up services for 10.33.0.0/16, and at the same time your overlay thinks it needs to route to pods running on 10.33.0.0/16. I would start by choosing completely non-overlapping CIDRs for your pods and services.
For example, on my cluster (I'm using Calico) I have a pod CIDR of 192.168.0.0/16 and a service CIDR of 10.96.0.0/12.
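Applied to your setup, a non-overlapping split could look roughly like this (the exact ranges are only illustrative):
# Flannel (pod) network stored in etcd
etcdctl set /coreos.com/network/config '{"Network": "10.244.0.0/16"}'
# service network, in /etc/kubernetes/apiserver
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.96.0.0/12"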
Note: you wouldn't be able to ping 10.33.79.222 since ICMP is not allowed in this case.
Your service is of type ClusterIP, which means it can only be accessed from within the cluster. To achieve what you are trying to do, consider switching to a service of type NodePort. You can then connect to it with curl <any-node-IP>:<nodePort>.
See https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/ for an example of using NodePort.
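For instance, you could switch the existing service in place and then curl it via any node (the node port shown below is whatever Kubernetes assigns, so check the service output first):
kubectl patch svc service-pod -p '{"spec": {"type": "NodePort"}}'
kubectl get svc service-pod        # look for a mapping like 8000:31234/TCP
curl http://172.17.8.102:31234     # any node IP plus the assigned node port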
I'm having an issue because an application was originally configured to run with docker-compose.
I managed to port and rewrite the .yaml deployment files for Kubernetes; however, the issue lies in the communication between the pods.
The frontend communicates with the backend to access its services, and since it assumes both are on the same network, the frontend calls the services on localhost.
I don't have access to the code, as it is a proprietary application developed by a company and it does not support Kubernetes, so modifying the code is out of the question.
I believe the main reason is that the frontend and backend are running in different pods, with different IPs.
When the frontend tries to call the APIs, it does not find the service, and returns an error.
Therefore, I'm trying to deploy both the frontend image and backend image into the same pod, so they share the same Cluster IP.
Unfortunately, I do not know how to make a yaml file to create both containers within a single pod.
Is it possible to have both frontend and backend containers running on the same pod, or would there be another way to make the containers communicate (maybe a proxy)?
Yes, you just add entries to the containers section in your yaml file, example:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  containers:
    - name: nginx-container
      image: nginx
    - name: debian-container
      image: debian
Therefore, I'm trying to deploy both the frontend image and backend image into the same pod, so they share the same Cluster IP.
Although you already have the accepted answer in place tackling an example of running more containers in the same pod, I'd like to point out a few details:
Containers should be in the same pod only if they scale together (not merely because you want them to communicate with each other). Your frontend/backend division doesn't really look like a good candidate for cramming them together.
If you opt for containers in the same pod, they can communicate over localhost: they see each other as if two processes were running on the same host (except that their file systems are different), so they can use localhost for direct communication and, because of that, can't both allocate the same port. Using the cluster IP is like two processes on the same host communicating over an external IP.
The more Kubernetes-philosophy approach here would be to:
Create deployment for backend
Create service for backend (exposing necessary ports)
Create deployment for frontend
Communicate from the frontend to the backend using the backend service name (kube-dns resolves this to the cluster IP of the backend service) and the designated backend ports.
Optionally (for this example) create a service for the frontend for external access or whatever goes outside. Note that here you can allocate the same port as for the backend service, since they are not living in the same pod (host)...
Some of the benefits of this approach: you can isolate the backend better (backend-frontend communication stays within the cluster and is not exposed to the outside world), you can schedule them independently on nodes, you can scale them independently (say you need more backend power but the frontend is handling traffic fine, or vice versa), you can replace either of them independently, etc.
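A rough sketch of that layout (the names, image and port here are assumptions; substitute whatever matches your app):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: my-backend:latest    # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 8080
      targetPort: 8080
The frontend then calls http://backend:8080 (or http://backend.<namespace>:8080 from another namespace) instead of a pod IP.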
I have a non-dockerised application that needs to connect to a dockerised application running inside a Kubernetes pod.
Given that pods may die and come back with a different IP address, how can my application detect this? Is there any way to assign a hostname that redirects to whatever pods currently exist?
You will have to use a Kubernetes service. A service gives you a way to talk to your pods with a static IP and DNS (if your client app is inside the cluster).
https://kubernetes.io/docs/concepts/services-networking/service/
You can do it in several ways:
Easiest: use a Kubernetes service with type: NodePort. Then you can access the pod using http://[nodehost]:[nodeport]
Use a Kubernetes Ingress. See this link for more details (https://kubernetes.io/docs/concepts/services-networking/ingress/)
If you are running in a cloud like AWS, Azure or GCE, you can use a Kubernetes service of type LoadBalancer.
In addition to Bal Chua’s work and suggestions from silverfox, I would like to show you the method
I used for Kubernetes to expose and manage incoming traffic from the outside:
Step 1: Deploy an application
In this example, the Kubernetes sample hello application will run on port 8080/tcp:
kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080
Step 2: Expose your Deployment as a Service internally
This command tells Kubernetes to expose port 8080/tcp to interact with the world outside:
kubectl expose deployment web --target-port=8080 --type=NodePort
Afterwards, check that it was exposed by running:
kubectl get service web
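If you only need the node port number that was assigned (handy for reaching the app at http://<node-ip>:<node-port> before the Ingress is in place), something like this should work:
kubectl get service web -o jsonpath='{.spec.ports[0].nodePort}'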
Step 3: Manage Ingress resource
An Ingress sends traffic to the proper service running inside Kubernetes.
Open a text editor and then create a file basic-ingress.yaml
with content:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web
    servicePort: 8080
Apply the configuration:
kubectl apply -f basic-ingress.yaml
and that's all. Now it is time to test. Get the external IP address of your Kubernetes installation:
kubectl get ingress basic-ingress
and open that address in a web browser to see the hello application working.
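Note that the extensions/v1beta1 Ingress API has since been removed; on current clusters the same default-backend Ingress would be written roughly like this (add an ingressClassName if your controller requires one):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  defaultBackend:
    service:
      name: web
      port:
        number: 8080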