I want to create a mixed Kubernetes cluster, with some local nodes and some EC2 nodes. The master is on the local network, and the Docker image has to run in bridge network mode.
Everything is fine with the local nodes, but the pods launched in EC2 don't have network access.
Here is a sample yaml file:
---
apiVersion: "v1"
kind: "Pod"
metadata:
  labels:
    jenkins: "slave"
  name: "test"
spec:
  containers:
  - image: "my-image"
    imagePullPolicy: "IfNotPresent"
    name: "my-test"
  hostNetwork: false
If I set hostNetwork to true, the pods launch fine in both situations (with network access), but there is an application requirement saying that I have to start it with the bridge network.
kubectl version: 1.13.5
docker version: 18.06.1-ce
k8s network: flannel
If I start that Docker image manually with the bridge network, everything is fine both locally and in EC2; the network is accessible. So it is something related to the Kubernetes configuration.
Do you have any idea?
Thank you!
I managed to solve the issue by adding the line below to the pod's spec file:
dnsPolicy: "Default"
This inherits the name-resolution configuration from the host.
By default, dnsPolicy is set to ClusterFirst.
More details are available here: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#inheriting-dns-from-the-node
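For reference, a minimal sketch of the pod spec from the question with that line added (image and names as in the question):

---
apiVersion: "v1"
kind: "Pod"
metadata:
  labels:
    jenkins: "slave"
  name: "test"
spec:
  dnsPolicy: "Default"  # inherit the node's resolv.conf instead of ClusterFirst
  containers:
  - image: "my-image"
    imagePullPolicy: "IfNotPresent"
    name: "my-test"
  hostNetwork: false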
I have been trying to port some infrastructure over to K8S from a VM Docker setup.
In the traditional VM Docker setup I run 2 Docker containers: one being a proxy node service, and another utilizing the proxy container through an .env file, via:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' proxy-container
172.17.0.2
Then within the .env file:
URL=ws://172.17.0.2:4000/
This is what I am trying to set up within a cluster in K8S, but I am failing to reference the proxy-service correctly. I have tried using the proxy-service pod name and/or the service name with no luck.
My env-configmap.yaml is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  URL: "ws://$(proxy-service):4000/"
Containers that run in the same pod can connect to each other via localhost. Try URL: "ws://localhost:4000/" in your ConfigMap. Otherwise, you need to specify the service name, like URL: "ws://proxy-service.<namespace>:4000".
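If the proxy runs in its own pod, a Service gives it a stable DNS name. A minimal sketch, assuming the proxy pod carries the label app: proxy (the label and ports are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: proxy-service
spec:
  selector:
    app: proxy       # assumed label on the proxy pod
  ports:
  - port: 4000       # port clients connect to
    targetPort: 4000 # port the proxy container listens on

With this in place, any pod in the same namespace can use URL: "ws://proxy-service:4000/" in the ConfigMap.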
Problem
I have two pods A and B running in a cluster on minikube; both have external IPs www.service-a.com and www.service-b.com. Both external IPs are accessible from outside.
I need A to be able to call B by its external IP rather than its cluster DNS name; that is, A needs to use www.service-b.com rather than b.svc.cluster.local (which does work, but I can't use it).
I set A to use hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet. If I spin up a NodeJS Docker container manually, it can indeed connect and find B. However, A is still unable to connect to service-b.com. Am I using hostNetwork wrong? How can I configure my pod to connect to B in that way?
A's Deployment YAML
...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: a-app
  template:
    metadata:
      labels:
        app: a-app
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      ...
B's service YAML
...
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
  - port: ...
    targetPort: ...
    protocol: TCP
    name: http
...
Background:
I'm using Minio (a local S3-like solution), and I need to presign URLs to get and put objects. Minio's pods run in the same cluster as my authentication pod, which generates the presigned URLs. The presigned URLs are used from outside the cluster, so I can't sign them with a cluster DNS name like minio.svc.cluster.local: that URL would not be accessible from outside the cluster, and replacing the host with my-minio.com while keeping the signature does not work, because I guess Minio signs the entire host and path. Hence I need my authentication pod to connect to Minio's public-facing my-minio.com instead, which does not seem to work.
Regarding hostNetwork, it looks like you misunderstood it. Setting it to true means the Pod uses the network of the host where it's running. In the case of minikube, that host is the VM, not the machine where the actual containers are running.
Also, I'm not sure how you expose your services to the external world, but I suggest you try an Ingress for that.
As Grigoriy suggested, I used an Ingress with the nginx.ingress.kubernetes.io/upstream-vhost annotation to forward all requests into the cluster with Host: service-b, which resolved my issue. Previously I had nginx.ingress.kubernetes.io/rewrite-target: /$1, which stripped the path from the request and caused a series of issues, so I removed that. The details of how I got it working are here:
NGINX controller Kubernetes: need to change Host header within ingress
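For illustration, a minimal sketch of such an Ingress; the host, service name, and port are assumptions, and the apiVersion matches Kubernetes versions of that era:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-b-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # Rewrite the Host header so the backend sees "service-b"
    nginx.ingress.kubernetes.io/upstream-vhost: "service-b"
spec:
  rules:
  - host: www.service-b.com        # assumed external hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: service-b   # assumed Service name
          servicePort: 80          # assumed Service port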
I would like to deploy, through Kubernetes, two applications from local Docker images (no Docker Hub/Artifactory). I want them to see each other by name (no IPs), so I figured I should deploy them in the same pod and load the name of the first as a system environment variable in the second container.
They should both be visible from the outside, so I need a NodePort deployment, and I would like to be able to choose the port.
I know how to reach this goal through kubectl CLI commands, but I would like to get the same result through a YAML configuration file that I can apply with the command kubectl apply -f deploy.yml.
Technically, you can deploy multiple app containers in the same pod, but you should avoid that because:
you may want to scale them independently (X replicas of APP1 and Y replicas of APP2)
it keeps resource allocation dedicated to one kind of application
and many more benefits of isolation
As for communicating by name (no IPs), Kubernetes has the concept of Services to achieve that with ease.
All of this can be written in YAML format; see the sketch after the links below.
You can see this:
https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/
https://kubernetes.io/docs/tutorials/stateless-application/guestbook/
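As a hedged sketch of that Service approach, here is one app exposed by name and on a chosen NodePort; the image name, ports, and labels are assumptions (repeat the pattern for the second app):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: app1              # local image
        imagePullPolicy: Never   # never pull; use the local Docker image
---
apiVersion: v1
kind: Service
metadata:
  name: app1        # other pods can reach this app at app1:<port>
spec:
  type: NodePort
  selector:
    app: app1
  ports:
  - port: 8080      # assumed container port
    targetPort: 8080
    nodePort: 30080 # chosen external port (must be in 30000-32767)

The second app can then reference the first simply by the Service name app1, with no IP involved.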
BUT STILL, if you want to do this...
Containers inside the same pod can communicate with each other using localhost, and in YAML you can define a spec with multiple containers:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: app1-container
    image: app1
  - name: ....    # for app 2
    image: app2
I am trying to set up Kubernetes from scratch on a network behind a corporate proxy, with 3 nodes (1 master and 2 slaves).
After the setup, the deployments always show the ContainerCreating state and hang there.
During the setup, the commands kubeadm config images pull and kubeadm init work without any issue.
After the setup, I installed the network plugin using Weave with the default config, via the command kubectl apply -f weave.yml.
After this, the core-dns service shows Running, but when I check the containers with the docker ps command,
it still shows the image for core-dns and the other containers as k8s.gcr.io/pause:3.1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
After the setup I tried deploying the sample nginx config above, as per the Kubernetes documentation, but the container hangs in the ContainerCreating state.
Can anyone tell me why the image is not getting changed to the actual coredns image?
Got the issue.
I was using the Weave network, which needs TCP port 6783 and UDP ports 6783/6784 to be open, but they were blocked on the master node, and journalctl didn't give any logs about this error.
I inspected the pause container logs and then fixed the firewall rules. Now it is working.
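For reference, a hedged sketch of opening those ports on the master node, assuming firewalld is the firewall in use (use the equivalent iptables/ufw rules otherwise):

# Assuming firewalld; open the ports Weave Net requires
sudo firewall-cmd --permanent --add-port=6783/tcp
sudo firewall-cmd --permanent --add-port=6783/udp
sudo firewall-cmd --permanent --add-port=6784/udp
sudo firewall-cmd --reload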
I have a Docker container that needs to run in Kubernetes, but among its parameters there is one that needs the container's cluster IP. How can I write a Kubernetes YAML file with that info?
# I want docker to run like this
docker run ... --wsrep-node-address=<ClusterIP>
# xxx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: galera01
  labels:
    name: galera01
  namespace: cloudstack
spec:
  containers:
  - name: galeranode01
    image: erkules/galera:basic
    args:
    # Is there any variable that I can use to represent the
    # POD IP or CLUSTER IP here?
    - --wsrep-node-address=<ClusterIP>
If I get this right, you want to know the IP at which the container can be reached.
You can achieve this by using Kubernetes DNS:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Services
A records
“Normal” (not headless) Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. This resolves to the cluster IP of the Service.
Another way: you can create a Service and use this:
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service
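Following that suggestion, a minimal sketch: a Service that selects the pod from the question, whose DNS name can then be passed as the address (the port is an assumption, based on the usual Galera replication port):

apiVersion: v1
kind: Service
metadata:
  name: galera01
  namespace: cloudstack
spec:
  selector:
    name: galera01   # matches the pod's label from the question
  ports:
  - port: 4567       # assumed Galera replication port
    targetPort: 4567

The pod's arg can then be written as --wsrep-node-address=galera01.cloudstack.svc.cluster.local, which resolves to the Service's cluster IP.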