I have been trying to port some infrastructure from a VM Docker setup over to K8s.
In the traditional VM Docker setup I run 2 docker containers: one being a proxy Node service, and the other utilizing the proxy container through an .env file, via:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' proxy-container
172.17.0.2
Then within the .env file:
URL=ws://172.17.0.2:4000/
This is what I am trying to set up within a K8s cluster, but I am failing to reference the proxy-service correctly. I have tried using the proxy-service pod name and/or the service name, with no luck.
My env-configmap.yaml is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  URL: "ws://$(proxy-service):4000/"
Containers that run in the same pod can connect to each other via localhost. Try URL: "ws://localhost:4000/" in your ConfigMap. Otherwise, you need to specify the service name like URL: "ws://proxy-service.<namespace>:4000".
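For example, a minimal sketch of that ConfigMap using the Service's DNS name (assuming a Service named proxy-service in the default namespace that exposes port 4000; adjust both to match your setup):
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  URL: "ws://proxy-service.default.svc.cluster.local:4000/"
Note that the $(proxy-service) syntax in the original ConfigMap is not expanded by Kubernetes; you need to spell out the service's DNS name (or use localhost if both containers share a pod).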
I am using Lua code inside the Nginx ingress controller in Minikube to write some logs to a file. I would like this file to be available on the host.
Is there a way to map a volume from the ingress-controller pod to the host? I did not create the Nginx ingress controller pod using a YAML config, but merely enabled the ingress addon in Minikube, so I do not have a YAML that I can add a volume mapping to.
You should be able to kubectl get whatever is running in your cluster and save it to a file:
kubectl get pod nginx -o yaml > mynginxpod.yaml
Then you can edit the file, add your volume, and apply it with:
kubectl apply -f mynginxpod.yaml
This is just an example.
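A minimal sketch of the kind of volume you might add to the saved spec (the container mount path and host directory here are hypothetical; point them at wherever your Lua code writes its log and wherever you want it on the Minikube host):
spec:
  containers:
  - name: nginx-ingress-controller
    # ... existing container spec ...
    volumeMounts:
    - name: lua-logs
      mountPath: /var/log/lua      # hypothetical path the Lua code writes to
  volumes:
  - name: lua-logs
    hostPath:
      path: /var/log/nginx-lua     # directory on the Minikube host
      type: DirectoryOrCreate
Keep in mind that with Minikube the hostPath lands on the Minikube VM, not on your workstation, so you may still need minikube ssh or a mounted folder to read the file.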
I would like to deploy, through Kubernetes, two applications from local docker images (no Docker Hub/Artifactory). I want them to see each other by name (not by IP), so I figured I should deploy them in the same POD and load the name of the first as a system environment variable in the second container.
They should both be visible from the outside, so I need a NodePort deployment, and I would like to be able to choose the port.
I know how to reach this goal through kubectl CLI commands, but I would like to have the result in a YAML configuration file so I can apply it with the command kubectl apply -f deploy.yml
Technically, you can deploy multiple app containers in the same POD, but you should avoid that because:
you will want to scale them independently (X replicas of APP1 and Y replicas of APP2)
it also keeps resource allocation dedicated to one kind of application
and there are many more benefits of isolation
As for communicating between them by name (not IP), Kubernetes has the concept of Services to achieve that with ease.
All of this can be written in YAML format
You can see this:
https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/
https://kubernetes.io/docs/tutorials/stateless-application/guestbook/
BUT STILL, if you want to do this...
then containers inside the same pod can communicate with each other using localhost, and in YAML you can define a spec with multiple containers:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: app1-container
    image: app1
  - name: app2-container   # ... same pattern for app 2
    image: app2
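Since you also want both applications reachable from outside on ports you choose, a sketch of a NodePort Service for the pod above might look like this (the container ports and nodePorts are assumptions; use whatever your apps actually listen on, and note that nodePorts must fall in the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  selector:
    app: myapp           # matches the label on myapp-pod
  ports:
  - name: app1
    port: 8080           # assumed port app1 listens on
    targetPort: 8080
    nodePort: 30080      # chosen external port on every node
  - name: app2
    port: 9090           # assumed port app2 listens on
    targetPort: 9090
    nodePort: 30090
Both manifests can live in the same deploy.yml, separated by a --- line, and be applied together with kubectl apply -f deploy.yml.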
tl;dr How do you reference an image in a Kubernetes Pod when the image is from a private docker registry hosted on the same k8s cluster without a separate DNS entry for the registry?
In an on-premise Kubernetes deployment, I have set up a private Docker registry using the stable/docker-registry Helm chart with a self-signed certificate. This is on-premise and I can't set up a DNS record to give the registry its own URL. I wish to use these manifests as templates, so I don't want to hardcode any environment-specific config.
The docker registry service is of type ClusterIP and looks like this:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  type: ClusterIP
  ports:
  - port: 443
    protocol: TCP
    name: registry
    targetPort: 5000
  selector:
    app: docker-registry
If I've pushed an image to this registry manually (or in the future via a Jenkins build pipeline), how would I reference that image in a Pod spec?
I have tried:
containers:
- name: my-image
  image: docker-registry.devops.svc.cluster.local/my-image:latest
  imagePullPolicy: IfNotPresent
But I received an error about the node host not being able to resolve docker-registry.devops.svc.cluster.local. I think the Docker daemon on the k8s node can't resolve that URL because it is an internal k8s DNS record.
Warning Failed 20s (x2 over 34s) kubelet, ciabdev01-node3
Failed to pull image "docker-registry.devops.svc.cluster.local/hadoop-datanode:2.7.3":
rpc error: code = Unknown desc = Error response from daemon: Get https://docker-registry.devops.svc.cluster.local/v2/: dial tcp: lookup docker-registry.devops.svc.cluster.local: no such host
Warning Failed 20s (x2 over 34s) kubelet, node3 Error: ErrImagePull
So, how would I reference an image on an internally hosted docker registry in this on-premise scenario?
Is my only option to use a Service of type NodePort, reference one of the nodes' hostnames in the Pod spec, and then configure each node's docker daemon to ignore the self-signed certificate?
Docker uses DNS settings configured on the Node, and, by default, it does not see DNS names declared in the Kubernetes cluster.
You can try to use one of the following solutions:
Use the IP address from the ClusterIP field in the "docker-registry" Service description as the docker registry name. This address is static until you recreate the Service. You can also add this IP address to /etc/hosts on each node.
For example, you can add a 10.11.12.13 my-docker-registry line to the /etc/hosts file. Since the Service above exposes port 443 (forwarding to 5000 on the pod), you can then use 10.11.12.13 or my-docker-registry (443 is the default port for a TLS registry) as the docker registry name in the image field of Pod descriptions.
Expose the "docker-registry" Service outside the cluster using type: NodePort. Then use localhost:<exposed_port> or <one_of_nodes_name>:<exposed_port> as the docker registry name in the image field of Pod descriptions.
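For the first option, a rough sketch of how you might look up the ClusterIP and reference an image through it (the IP and image name below are only examples, and this still assumes the nodes trust the registry's self-signed certificate):
kubectl get service docker-registry -o jsonpath='{.spec.clusterIP}'
# prints something like 10.96.0.42

# In the Pod spec (no port suffix needed, since the Service maps 443 to the registry):
containers:
- name: my-image
  image: 10.96.0.42/my-image:latest
  imagePullPolicy: IfNotPresent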
I have a non-dockerised application that needs to connect to a dockerised application running inside a Kubernetes pod.
Given that pods may die and come back with a different IP address, how can my application detect this? Is there any way to assign a hostname that redirects to whichever pods currently exist?
You will have to use a Kubernetes Service. A Service gives you a way to talk to your pods via a static IP and DNS name (if your client app is inside the cluster).
https://kubernetes.io/docs/concepts/services-networking/service/
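A minimal sketch of such a Service (the name, selector, and port are assumptions; match them to your own pods):
apiVersion: v1
kind: Service
metadata:
  name: my-backend
spec:
  selector:
    app: my-backend      # must match the labels on your pods
  ports:
  - port: 8080           # port clients connect to
    targetPort: 8080     # port the container listens on
In-cluster clients can then reach the pods at my-backend.<namespace>.svc.cluster.local:8080, no matter how often the individual pods are replaced.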
You can do it in several ways:
Easiest: Use a Kubernetes Service with type: NodePort. Then you can access the pod using http://[nodehost]:[nodeport]
Use a Kubernetes Ingress. See this link for more details: https://kubernetes.io/docs/concepts/services-networking/ingress/
If you are running in a cloud like AWS, Azure, or GCE, you can use a Service of type LoadBalancer.
In addition to Bal Chua’s work and suggestions from silverfox, I would like to show you the method
I used for Kubernetes to expose and manage incoming traffic from the outside:
Step 1: Deploy an application
In this example, the Kubernetes sample hello application will run on port 8080/tcp:
kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080
Step 2: Expose your Deployment as a Service internally
This command tells Kubernetes to expose port 8080/tcp to interact with the world outside:
kubectl expose deployment web --target-port=8080 --type=NodePort
Afterwards, check that it is exposed by running:
kubectl get service web
Step 3: Manage Ingress resource
Ingress sends traffic to a proper service working inside Kubernetes.
Open a text editor and then create a file basic-ingress.yaml
with content:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web
    servicePort: 8080
Apply the configuration:
kubectl apply -f basic-ingress.yaml
And that's all. It is time to test. Get the external IP address of your Kubernetes installation:
kubectl get ingress basic-ingress
and open that address in a web browser to see the hello application working.
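To test from the command line instead, something like this should work, where <ingress-address> stands for whatever the ADDRESS column of the previous command shows:
curl http://<ingress-address>/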
I have a docker container that needs to run in Kubernetes, but one of its parameters needs the container's Cluster IP info. How can I write a Kubernetes YAML file with that info?
# I want docker to run like this
docker run ... --wsrep-node-address=<ClusterIP>
# xxx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: galera01
  labels:
    name: galera01
  namespace: cloudstack
spec:
  containers:
  - name: galeranode01
    image: erkules/galera:basic
    args:
    # Is there any variable that I can use to represent the
    # POD IP or CLUSTER IP here?
    - --wsrep-node-address=<ClusterIP>
If I get this right, you want to know the IP at which the container can be reached.
You can achieve this by using Kubernetes DNS.
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Services
A records
“Normal” (not headless) Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. This resolves to the cluster IP of the Service.
Another way: you can create a Service and use this:
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service
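Following that approach, a rough sketch (the Service below is an assumption, not something from the question, and the port is just an example Galera replication port):
apiVersion: v1
kind: Service
metadata:
  name: galera01
  namespace: cloudstack
spec:
  selector:
    name: galera01       # matches the pod's label
  ports:
  - port: 4567           # example Galera replication port
    targetPort: 4567
With that Service in place, the stable DNS name can be passed instead of a raw IP:
    args:
    - --wsrep-node-address=galera01.cloudstack.svc.cluster.local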