I would like to deploy, through Kubernetes, two applications from local Docker images (no Docker Hub/Artifactory). I want them to reach each other by name (not IP), so I assumed I should deploy them in the same pod and load the name of the first container as a system environment variable in the second container.
Both should be visible from outside the cluster, so I need a NodePort deployment, and I would like to be able to choose the port.
I know how to reach this goal through kubectl CLI commands, but I would like to achieve the same result with a YAML configuration file that I can apply with kubectl apply -f deploy.yml
Technically, you can deploy multiple app containers in the same pod, but you should avoid that because:
- you will want to scale them independently (X replicas of APP1 and Y replicas of APP2)
- you will want to keep resource allocation dedicated to one kind of application
- you get many more benefits of isolation
As for letting them communicate by name (not IP), Kubernetes has the concept of Services to achieve that with ease.
All of this can be written in YAML format.
You can see this:
https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/
https://kubernetes.io/docs/tutorials/stateless-application/guestbook/
BUT STILL, if you want to do this: containers inside the same pod can communicate with each other using localhost, and in YAML you can define a spec with multiple containers:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: app1-container
    image: app1
  - name: app2-container
    image: app2
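The question also asks for a NodePort with a chosen port. To make both containers reachable from outside, a Service of type NodePort can go into the same deploy.yml, separated by ---. This is only a minimal sketch, assuming app1 listens on 8080 and app2 on 9090, and that 30080/30090 are the node ports you want; all of these values are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  selector:
    app: myapp            # matches the pod label above
  ports:
  - name: app1
    port: 8080            # Service port inside the cluster
    targetPort: 8080      # container port of app1 (assumed)
    nodePort: 30080       # external port on every node (30000-32767)
  - name: app2
    port: 9090
    targetPort: 9090      # container port of app2 (assumed)
    nodePort: 30090

With this in place, kubectl apply -f deploy.yml creates the pod and the Service together. Since the images are local, the containers may also need imagePullPolicy: IfNotPresent (or Never) so Kubernetes does not try to pull them from a registry.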
I have been trying to port some infrastructure over to Kubernetes from a VM Docker setup.
In the traditional VM Docker setup I run two Docker containers: one being a proxy node service, and another using the proxy container via an .env file:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' proxy-container
172.17.0.2
Then within the .env file:
URL=ws://172.17.0.2:4000/
This is what I am trying to set up within a Kubernetes cluster, but I am failing to reference the proxy service correctly. I have tried using the proxy-service pod name and/or the service name with no luck.
My env-configmap.yaml is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  URL: "ws://$(proxy-service):4000/"
Containers that run in the same pod can connect to each other via localhost. Try URL: "ws://localhost:4000/" in your ConfigMap. Otherwise, you need to specify the service name like URL: "ws://proxy-service.<namespace>:4000".
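If the proxy runs in its own pod behind a Service, note that Kubernetes does not expand $(proxy-service) inside ConfigMap data; the value is stored as a plain string. A sketch of the ConfigMap with the DNS name spelled out, assuming the Service is named proxy-service and lives in the default namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  # plain string value; Kubernetes performs no substitution here
  URL: "ws://proxy-service.default.svc.cluster.local:4000/"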
I want to create a mixed Kubernetes cluster, with some local nodes and some EC2 nodes. The master is on the local network. The Docker image has to run on the bridge network.
Everything is fine on the local nodes, but the pods launched on EC2 don't have network access.
Here is a sample yaml file:
---
apiVersion: "v1"
kind: "Pod"
metadata:
labels:
jenkins: "slave"
name: "test"
spec:
containers:
- image: "my-image"
imagePullPolicy: "IfNotPresent"
name: "my-test"
hostNetwork: false
If I set hostNetwork to true, the pods launch fine in both situations (with network access), but there is an application requirement that it be started on the bridge network.
kubectl version: 1.13.5
docker version: 18.06.1-ce
k8s network: flannel
If I start that Docker image manually with the bridge network, everything is fine both locally and on EC2; the network is accessible. So it is something related to the Kubernetes configuration.
Do you have any idea?
Thank you!
I managed to solve the issue by adding the line below to the pod's spec file:
dnsPolicy: "Default"
This inherits the name resolution configuration from the host.
By default this is set to ClusterFirst.
More details are available here: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#inheriting-dns-from-the-node
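For reference, the field sits at the top level of the pod spec, next to containers and hostNetwork; the pod from the question would then look like this (only the dnsPolicy line is new):

---
apiVersion: "v1"
kind: "Pod"
metadata:
  labels:
    jenkins: "slave"
  name: "test"
spec:
  dnsPolicy: "Default"          # inherit name resolution from the node
  containers:
  - image: "my-image"
    imagePullPolicy: "IfNotPresent"
    name: "my-test"
  hostNetwork: false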
I just tried setting up Kubernetes on my bare-metal server.
Previously I had successfully created my docker-compose setup.
There are several apps:
App A (Docker image name: a-service)
App B (Docker image name: b-service)
Inside applications A and B there are configs (actually there are apps A, B, C, D, etc., lots of them).
The config file is something like this:
IPFORSERVICEA=http://a-service:port-number/path/to/something
IPFORSERVICEB=http://b-service:port-number/path/to/something
At least the above config works in docker-compose (the config lives at the app level, where one app needs to access another). Is there any way for me to access one Kubernetes Service from another service? I am planning to put one app in one Deployment, with one Service for each Deployment.
Something like:
App -> Deployment -> Service (e.g. NodePort, ClusterIP)
Thanks!
Is there any way for me to access one Kubernetes Service from another service?
Yes, you just need to specify the DNS name of the Service you want to connect to (type: ClusterIP works fine for this):
<service_name>.<namespace>.svc.cluster.local
Such a domain name is resolved to the internal IP address of the Service by the cluster's built-in DNS.
For example:
nginx-service.web.svc.cluster.local
where nginx-service is the name of your Service and web is the app's namespace, so the Service YAML definition can look like:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: web
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
  selector:
    app: nginx
  type: ClusterIP
See official docs to get more information.
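Applied to the question, a Service can keep the docker-compose naming so the existing config values continue to work. A sketch, assuming app A's pods carry the label app: a-service and listen on port 8080 (both are placeholder values, adjust to your Deployment):

apiVersion: v1
kind: Service
metadata:
  name: a-service
spec:
  type: ClusterIP
  selector:
    app: a-service        # must match the pod labels of app A's Deployment
  ports:
  - port: 8080            # the port-number used in the config file
    targetPort: 8080      # container port (assumed)

Because pods resolve a Service in their own namespace by its short name, IPFORSERVICEA=http://a-service:port-number/path/to/something can stay as it is; the fully qualified a-service.<namespace>.svc.cluster.local form is only needed across namespaces.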
Use Kubernetes service discovery.
Service discovery is the process of figuring out how to connect to a
service. While there is a service discovery option based on
environment variables available, the DNS-based service discovery is
preferable. Note that DNS is a cluster add-on so make sure your
Kubernetes distribution provides for one or install it yourself.
Service discovery by example
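As a quick illustration of the environment-variable option mentioned in the quote above, a throwaway pod can print the variables Kubernetes injects for an existing Service. This sketch assumes a Service named nginx-service already exists in the same namespace before the pod starts:

apiVersion: v1
kind: Pod
metadata:
  name: env-discovery-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    # Kubernetes injects <SERVICE_NAME>_SERVICE_HOST / _SERVICE_PORT
    # (uppercased, dashes replaced by underscores) for Services that
    # existed when the pod was started.
    command: ["sh", "-c", "echo $NGINX_SERVICE_SERVICE_HOST:$NGINX_SERVICE_SERVICE_PORT"]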
I have a non-dockerised application that needs to connect to a dockerised application running inside a Kubernetes pod.
Given that pods may die and come back with a different IP address, how can my application detect this? Is there any way to assign a hostname that redirects to whatever pods currently exist?
You will have to use a Kubernetes Service. A Service gives you a way to talk to your pods with a static IP and DNS name (if your client app is inside the cluster).
https://kubernetes.io/docs/concepts/services-networking/service/
You can do it in several ways:
Easiest: use a Kubernetes Service with type: NodePort. Then you can access the pod using http://[nodehost]:[nodeport]
Use a Kubernetes Ingress. See this link for more details: https://kubernetes.io/docs/concepts/services-networking/ingress/
If you are running in a cloud such as AWS, Azure or GCE, you can use a Kubernetes Service of type LoadBalancer (see the sketch after this list).
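A minimal sketch of the LoadBalancer option, assuming the target pods carry the label app: my-app and listen on port 8080 (placeholder values):

apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer      # the cloud provider provisions an external load balancer
  selector:
    app: my-app           # must match the pod labels
  ports:
  - port: 80              # port exposed by the load balancer
    targetPort: 8080      # container port (assumed)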
In addition to Bal Chua's work and the suggestions from silverfox, I would like to show you the method I used to expose and manage incoming traffic from outside the Kubernetes cluster:
Step 1: Deploy an application
In this example, the Kubernetes sample hello application will run on port 8080/tcp:
kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080
Step 2: Expose your Deployment as a Service internally
This command tells Kubernetes to expose port 8080/tcp to interact with the world outside:
kubectl expose deployment web --target-port=8080 --type=NodePort
Afterwards, check that it is exposed by running:
kubectl get service web
Step 3: Manage Ingress resource
An Ingress routes traffic to the proper Service inside Kubernetes.
Open a text editor and create a file basic-ingress.yaml with this content:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web
    servicePort: 8080
Apply the configuration:
kubectl apply -f basic-ingress.yaml
and that's all. Time to test. Get the external IP address of your Kubernetes installation:
kubectl get ingress basic-ingress
and open that address in a web browser to see the hello application working.
I have a Docker container that needs to run in Kubernetes, but among its parameters there is one that needs the container's cluster IP. How can I write a Kubernetes YAML file with that info?
# I want docker to run like this
docker run ... --wsrep-node-address=<ClusterIP>
# xxx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: galera01
  labels:
    name: galera01
  namespace: cloudstack
spec:
  containers:
  - name: galeranode01
    image: erkules/galera:basic
    args:
    # Is there any variable that I can use to represent the
    # POD IP or CLUSTER IP here?
    - --wsrep-node-address=<ClusterIP>
If I get this right, you want to know the IP for the node which runs the container.
You can achieve this by using Kubernetes DNS.
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Services
A records
“Normal” (not headless) Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. This resolves to the cluster IP of the Service.
Another way is to create a Service and use that:
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service
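If the parameter actually needs the pod's own IP (rather than a Service's cluster IP), another option, not covered above, is the Downward API: expose status.podIP as an environment variable and reference it in args. A minimal sketch based on the pod from the question:

apiVersion: v1
kind: Pod
metadata:
  name: galera01
  namespace: cloudstack
spec:
  containers:
  - name: galeranode01
    image: erkules/galera:basic
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP      # filled in with the pod's IP at runtime
    args:
    - --wsrep-node-address=$(POD_IP)   # $(VAR) expansion works in args for env vars defined above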