Kubernetes: Pull images from internal registry with on-premise deployment

tl;dr: How do you reference an image in a Kubernetes Pod spec when the image comes from a private Docker registry hosted on the same k8s cluster, without a separate DNS entry for the registry?
In an on-premise Kubernetes deployment, I have set up a private Docker registry with the stable/docker-registry Helm chart, using a self-signed certificate. Because this is on-premise, I can't create a DNS record to give the registry its own URL. I want to use these manifests as templates, so I don't want to hardcode any environment-specific config.
The docker registry service is of type ClusterIP and looks like this:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  type: ClusterIP
  ports:
  - port: 443
    protocol: TCP
    name: registry
    targetPort: 5000
  selector:
    app: docker-registry
If I've pushed an image to this registry manually (or in the future via a Jenkins build pipeline), how would I reference that image in a Pod spec?
I have tried:
containers:
- name: my-image
  image: docker-registry.devops.svc.cluster.local/my-image:latest
  imagePullPolicy: IfNotPresent
But I received an error about the node host not being able to resolve docker-registry.devops.svc.cluster.local. I think the Docker daemon on the k8s node can't resolve that URL because it is an internal k8s DNS record.
Warning Failed 20s (x2 over 34s) kubelet, ciabdev01-node3
Failed to pull image "docker-registry.devops.svc.cluster.local/hadoop-datanode:2.7.3":
rpc error: code = Unknown desc = Error response from daemon: Get https://docker-registry.devops.svc.cluster.local/v2/: dial tcp: lookup docker-registry.devops.svc.cluster.local: no such host
Warning Failed 20s (x2 over 34s) kubelet, node3 Error: ErrImagePull
So, how would I reference an image on an internally hosted docker registry in this on-premise scenario?
Is my only option to use a Service of type NodePort, reference one of the nodes' hostnames in the Pod spec, and then configure each node's Docker daemon to ignore the self-signed certificate?

Docker uses the DNS settings configured on the node, and by default it does not see DNS names declared inside the Kubernetes cluster.
You can try to use one of the following solutions:
1. Use the IP address from the ClusterIP field of the "docker-registry" Service as the registry name. This address is static until you recreate the Service. You can also add this IP address to /etc/hosts on each node.
For example, add the line 10.11.12.13 my-docker-registry to /etc/hosts (IP first, then the alias). You can then use 10.11.12.13:443 or my-docker-registry:443 as the registry part of the image field in Pod specs; note that clients connect to the Service's port (443 here), not the targetPort.
2. Expose the "docker-registry" Service outside the cluster with type: NodePort. Then use localhost:<exposed_port> (from a node) or <one_of_nodes_hostnames>:<exposed_port> as the registry part of the image field in Pod specs.
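As a sketch of option 1, assuming the Service's ClusterIP came out as 10.11.12.13 (a hypothetical value; look the real one up with kubectl get svc docker-registry -o jsonpath='{.spec.clusterIP}'), the Pod spec could reference the image like this:

```yaml
containers:
- name: my-image
  # 10.11.12.13 is a hypothetical ClusterIP; substitute your Service's actual
  # address, or the alias you added to /etc/hosts on each node.
  # The Service in the question listens on port 443, so the port can be
  # omitted for HTTPS pulls.
  image: 10.11.12.13/my-image:latest
  imagePullPolicy: IfNotPresent
```

Note that with a self-signed certificate the nodes' Docker daemons still need that certificate in their trust store (or the registry marked as insecure), because image pulls are performed by the daemon, not by Kubernetes itself.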

Related

Kubernetes env referencing a pod/service

I have been trying to port over some infrastructure to K8S from a VM docker setup.
In the traditional VM Docker setup I run two Docker containers: one being a proxy node service, and another utilizing the proxy container through an .env file, via:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' proxy-container
172.17.0.2
Then within the .env file:
URL=ws://172.17.0.2:4000/
This is what I am trying to set up within a cluster in K8s, but I am failing to reference the proxy-service correctly. I have tried using the proxy-service Pod name and/or the Service name, with no luck.
My env-configmap.yaml is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  URL: "ws://$(proxy-service):4000/"
Containers that run in the same Pod can connect to each other via localhost: try URL: "ws://localhost:4000/" in your ConfigMap. Otherwise, if the proxy runs behind a separate Service, you need to specify the Service name, like URL: "ws://proxy-service.<namespace>:4000/".
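One detail worth spelling out: the $(proxy-service) syntax is only expanded inside a container's env/args definitions, not inside ConfigMap data, so the ConfigMap has to carry the literal address. A minimal sketch, assuming the Service is named proxy-service and lives in the default namespace (adjust to yours):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  # ConfigMap data is stored verbatim; $(...) references are not substituted
  # here, so use the Service's fully qualified DNS name directly.
  URL: "ws://proxy-service.default.svc.cluster.local:4000/"
```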

Azure Kubernetes Service: Cant deploy with YAML file

I am currently trying to set up a pod on a private Kubernetes cluster that I have created on Azure Kubernetes Service.
However, when I try to deploy it through "Add with YAML", I get an error saying:
"Failed to create the pod 'name-of-pod'. Error (599): unable to reach the api server or api server is too busy to respond. Failed to fetch."
(The error switches between error 500 and error 20.)
We have our own private Docker container registry on Azure, which I am pulling from:
apiVersion: v1
kind: Pod
metadata:
  name: name-of-pod
  namespace:
spec:
  containers:
  - name: name
    image: image-name:master
    imagePullPolicy: IfNotPresent
  imagePullSecrets:
  - name: secret-name
Any and all help would be greatly appreciated!
As you did not add much information, I will try my best to point you in a direction:
When you have a private AKS cluster, you can create, modify, or update the cluster itself through the Azure API, but you cannot create, modify, or update anything inside the cluster, because the API server is not reachable from outside.
An easy solution is to create a so-called jumphost in the same virtual network the AKS cluster is part of. From there you can use the Azure CLI and kubectl to create your pod.
Since you mentioned a private Docker registry, note that you would also need additional Private DNS and Private Endpoint configuration.
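As a rough sketch of the jumphost route (all resource names here are hypothetical; adjust the resource group, VNet, subnet, and cluster name to your environment):

```shell
# Create a small VM in the same VNet as the private AKS cluster.
az vm create --resource-group my-rg --name aks-jumphost \
  --vnet-name aks-vnet --subnet default \
  --image Ubuntu2204 --generate-ssh-keys

# SSH into the jumphost, then fetch credentials and deploy from inside the VNet.
az aks get-credentials --resource-group my-rg --name my-private-aks
kubectl apply -f pod.yaml
```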

How to fix service PORT for minikube - skaffold local environment?

Currently I am using a local environment with Skaffold + Minikube, and every time I start the cluster like this:
skaffold dev -f='./skaffold-cluster.yaml' --no-prune=false --cache-artifacts=false --status-check=false
I get a bunch of services that belong to my Skaffold manifests, but each of these services is exposed on a random port. The IP stays the same because Minikube has already started.
If I do minikube service nice-service --url, I get the service URL with a random port.
I want to be able to pin this port, but I don't see whether this is something that should be configured in k8s / Skaffold / Minikube / Docker.
Typical use case:
I want to access MySQL from Sequel Pro / Workbench / any other tool. These tools save their connection settings locally, including the port, so it would be great not to have to change the port in them every time to reach the MySQL service in Minikube.
Current setup: VirtualBox on the host OS, with Minikube and Skaffold. Services are exposed as k8s NodePort services.
Is it possible to pin these service ports?
Yes, by setting the nodePort field explicitly:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
nodePort is the port exposed by minikube service my-service --url; by setting this field, the port is no longer random but the one you chose.
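Applied to the MySQL use case from the question, a sketch (the service name and selector label are hypothetical; match them to your actual MySQL deployment) that pins the port local tools connect to:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  selector:
    app: mysql          # hypothetical label; must match your MySQL pod's labels
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30036   # must fall in the NodePort range (default 30000-32767)
```

Sequel Pro / Workbench can then be pointed permanently at <minikube-ip>:30036, where minikube ip prints the address.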

Docker registry in kubernetes with cert-manager

I'm using cert-manager for SSL management with configuration at the ingress level. For example, this config for <myhost>.com (skipping metadata and other unrelated config parts):
kind: Certificate
spec:
  secretName: myhost-tls
  issuerRef:
    name: letsencrypt-dns
    kind: ClusterIssuer
---
kind: Ingress
...
spec:
  tls:
  - hosts:
    - myhost.com
    secretName: myhost-tls
...
Now I'm trying to move my Docker registry into the Kubernetes cluster, but the registry deployment requires a certificate file for its configuration.
Is it possible to run the Docker registry without SSL (since encryption can be handled at the ingress level), or to use cert-manager to obtain a certificate for the Docker registry?
You can allow the insecure registry on each node in the cluster by starting the daemon with (replace <registry_ip> with your registry's address):
dockerd --insecure-registry=<registry_ip>:5000
You can also edit /etc/default/docker and include the following line, which does the above for you:
DOCKER_OPTS="--insecure-registry=<registry_ip>:5000"
The DOCKER_OPTS variable automatically passes that option to the Docker daemon.
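On current Docker versions the same setting usually goes in /etc/docker/daemon.json instead, followed by a daemon restart; the address below is a placeholder for your registry:

```json
{
  "insecure-registries": ["10.11.12.13:5000"]
}
```

After editing the file, restart the daemon, e.g. with sudo systemctl restart docker.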

How can I get the cluster IP info within the Kubernetes yaml file before it's been created?

I have a Docker container that needs to run in Kubernetes, but one of its parameters needs the container's cluster IP. How can I write a Kubernetes YAML file that provides that info?
# I want docker to run like this
docker run ... --wsrep-node-address=<ClusterIP>
# xxx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: galera01
  labels:
    name: galera01
  namespace: cloudstack
spec:
  containers:
  - name: galeranode01
    image: erkules/galera:basic
    args:
    # Is there any variable that I can use to represent the
    # POD IP or CLUSTER IP here?
    - --wsrep-node-address=<ClusterIP>
If I get this right, you want to know the IP at which the running container can be reached.
You can achieve this by using Kubernetes DNS:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
From the docs, under Services / A records:
"Normal" (not headless) Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. This resolves to the cluster IP of the Service.
Another way: create a Service and use it as described here:
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service
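If what the flag actually needs is the Pod's own IP (as --wsrep-node-address typically does for Galera), the Downward API can inject it as an environment variable, which Kubernetes then expands in args. A sketch based on the Pod from the question (the POD_IP variable name is an arbitrary choice):

```yaml
spec:
  containers:
  - name: galeranode01
    image: erkules/galera:basic
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP   # the Pod's IP, filled in at runtime
    args:
    # $(POD_IP) is expanded by Kubernetes because POD_IP is defined in env above
    - --wsrep-node-address=$(POD_IP)
```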
