I'm using cert-manager for TLS certificate management, configured at the Ingress level. For example, this config for <myhost>.com (skipping metadata and other unrelated config parts):
kind: Certificate
spec:
  secretName: myhost-tls
  issuerRef:
    name: letsencrypt-dns
    kind: ClusterIssuer
---
kind: Ingress
...
spec:
  tls:
  - hosts:
    - myhost.com
    secretName: myhost-tls
...
Now I'm trying to move my Docker registry into the Kubernetes cluster, but it requires a certificate file to configure the registry deployment.
Is it possible to run the Docker registry without TLS (since encryption can be done at the Ingress level), or to use cert-manager to obtain a certificate for the Docker registry?
You can allow the insecure registry on each node in the cluster by starting the Docker daemon with:
dockerd --insecure-registry=<registry-host>:5000
You can also edit /etc/default/docker and include the following line, which does the same:
DOCKER_OPTS="--insecure-registry=<registry-host>:5000"
The DOCKER_OPTS variable is automatically passed to the Docker daemon on startup.
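On current Docker versions, the equivalent setting lives in /etc/docker/daemon.json; the host and port below are placeholders for your registry's address:

```json
{
  "insecure-registries": ["<registry-host>:5000"]
}
```

After editing the file, restart the daemon (e.g. systemctl restart docker) for the change to take effect.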
I have been trying to port over some infrastructure to K8S from a VM docker setup.
In a traditional VM Docker setup I run 2 containers: one being a proxy node service, and another that uses the proxy container through an .env file via:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' proxy-container
172.17.0.2
Then within the .env file:
URL=ws://172.17.0.2:4000/
This is what I am trying to set up within a cluster in K8S, but I am failing to reference the proxy service correctly. I have tried using the proxy-service pod name and/or the service name, with no luck.
My env-configmap.yaml is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  URL: "ws://$(proxy-service):4000/"
Containers that run in the same pod can connect to each other via localhost. Try URL: "ws://localhost:4000/" in your ConfigMap. Otherwise, you need to specify the service name like URL: "ws://proxy-service.<namespace>:4000".
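A minimal sketch of the ConfigMap with a literal Service DNS name; this assumes the Service is called proxy-service and lives in the default namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  # ConfigMap data is taken literally; $(...) references are not expanded here,
  # so spell out the Service's cluster DNS name.
  URL: "ws://proxy-service.default.svc.cluster.local:4000/"
```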
Currently I am using a local environment with Skaffold + Minikube and every time I start the cluster like this:
skaffold dev -f='./skaffold-cluster.yaml' --no-prune=false --cache-artifacts=false --status-check=false
I get a bunch of services that belong to my Skaffold manifests, but each of these services is exposed on a random port. The IP is the same because Minikube has already started.
If I run minikube service nice-service --url, I get the service URL with the random port.
I want to be able to fix this port, but I can't tell whether this is something that should be configured in k8s / Skaffold / Minikube / Docker.
Typical use case:
I want to access MySQL from Sequel Pro / Workbench or any other tool. These configurations are saved locally with a port, so it would be great not to have to change the port in those tools to access the Minikube MySQL service.
Current setup: VirtualBox on the host OS, with Minikube and Skaffold. Services are exposed as Kubernetes NodePort services.
Is it possible to fix these service ports?
By changing the nodePort option:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
nodePort is the port exposed by minikube service my-service --url. By setting this field explicitly, it will no longer be random but the port you need.
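Skaffold itself can also pin the local port through the portForward section of skaffold.yaml (used with skaffold dev --port-forward); a sketch, assuming the service is named my-service:

```yaml
portForward:
  - resourceType: service
    resourceName: my-service
    port: 80
    localPort: 8080   # fixed local port instead of a randomly chosen one
```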
tl;dr How do you reference an image in a Kubernetes Pod when the image comes from a private Docker registry hosted on the same k8s cluster, without a separate DNS entry for the registry?
In an on-premise Kubernetes deployment, I have set up a private Docker registry using the stable/docker-registry Helm chart with a self-signed certificate. This is on-premise and I can't set up a DNS record to give the registry its own URL. I wish to use these manifests as templates, so I don't want to hardcode any environment-specific config.
The docker registry service is of type ClusterIP and looks like this:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  type: ClusterIP
  ports:
    - port: 443
      protocol: TCP
      name: registry
      targetPort: 5000
  selector:
    app: docker-registry
If I've pushed an image to this registry manually (or in the future via a Jenkins build pipeline), how would I reference that image in a Pod spec?
I have tried:
containers:
  - name: my-image
    image: docker-registry.devops.svc.cluster.local/my-image:latest
    imagePullPolicy: IfNotPresent
But I received an error about the node host not being able to resolve docker-registry.devops.svc.cluster.local. I think the Docker daemon on the k8s node can't resolve that name because it is an internal k8s DNS record.
Warning Failed 20s (x2 over 34s) kubelet, ciabdev01-node3
Failed to pull image "docker-registry.devops.svc.cluster.local/hadoop-datanode:2.7.3":
rpc error: code = Unknown desc = Error response from daemon: Get https://docker-registry.devops.svc.cluster.local/v2/: dial tcp: lookup docker-registry.devops.svc.cluster.local: no such host
Warning Failed 20s (x2 over 34s) kubelet, node3 Error: ErrImagePull
So, how would I reference an image on an internally hosted Docker registry in this on-premise scenario?
Is my only option to use a Service of type NodePort, reference one of the nodes' hostnames in the Pod spec, and then configure each node's Docker daemon to ignore the self-signed certificate?
Docker uses DNS settings configured on the Node, and, by default, it does not see DNS names declared in the Kubernetes cluster.
You can try to use one of the following solutions:
Use the IP address from the clusterIP field in the "docker-registry" Service description as the registry name. This address is static until you recreate the Service. You can also add this IP address to /etc/hosts on each node.
For example, you can add the line 10.11.12.13 my-docker-registry to the /etc/hosts file. Then you can use 10.11.12.13:5000 or my-docker-registry:5000 as the registry part of the image field in Pod descriptions.
Expose the "docker-registry" Service outside the cluster using type: NodePort. Then use localhost:<exposed_port> or <one_of_nodes_name>:<exposed_port> as the registry name in the image field of Pod descriptions.
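As a sketch, the NodePort variant of the registry Service above might look like this (the nodePort value 30500 is an arbitrary choice from the default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 5000
      nodePort: 30500   # arbitrary; must fall in the cluster's NodePort range
      protocol: TCP
      name: registry
  selector:
    app: docker-registry
```

Images could then be referenced as <node_name>:30500/my-image:latest.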
I need to deploy a container running HAProxy, which I already have working on on-premise Docker hosts, into the IBM Cloud (Bluemix) Kubernetes service. I am a bit lost on how to expose ports 80 and 443. In plain Docker that is very straightforward, but it seems complicated in Kubernetes, or at least in IBM Cloud.
I don't need load balancing, virtual hosts, or any extra configuration, as HAProxy will take care of it. I just need to replicate (move) my on-premise HAProxy, exposing ports 80 and 443, into Bluemix. (For multiple reasons I want to use HAProxy, so the request here is very specific: the simplest way to expose HAProxy ports 443 and 80 on a permanent IP address in the IBM Cloud Kubernetes service.)
Could I have a basic example YAML file for kubectl? Thanks.
NodePort
To keep the same image running in both environments, you can define a Deployment for the HAProxy containers and a Service to access them via a NodePort on the node IP, or via the cluster IP. A NodePort is similar in concept to running docker run -p n:n.
The IP:NodePort would need to be accessible externally, and HAProxy takes over from there. Here's a sample HAProxy setup that uses an AWS ELB to get external users to a node. Most people don't recommend exposing services via NodePort, because Kubernetes offers alternate methods that provide more integration.
LoadBalancer
A LoadBalancer Service is specifically for automatic configuration of a cloud provider's load balancer. I don't believe IBM Cloud's load balancer has any support in Kubernetes; maybe IBM has added something? If so, you could use this instead of a NodePort to reach your Service.
Ingress
If you are running Docker locally and Kubernetes externally, you've kind of thrown consistency out the window already, so you could set up Ingress with an Ingress controller based on HAProxy; there are a few available:
https://github.com/appscode/voyager
https://github.com/jcmoraisjr/haproxy-ingress
This gives you the standard Kubernetes abstraction for managing ingress to a service, but with HAProxy underneath. It will not be your HAProxy image, though; you can likely configure the same things for the HAProxy Ingress as you do in your HAProxy image.
Voyager's documentation is pretty good:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
    - host: appscode.example.com
      http:
        paths:
          - path: '/test'
            backend:
              serviceName: test-service
              servicePort: '80'
              backendRules:
                - 'acl add_url capture.req.uri -m beg /test-second'
                - 'http-response set-header X-Added-From-Proxy added-from-proxy if add_url'
If you are fine with running this HAProxy on each node that is supposed to expose ports 80/443, then consider running a DaemonSet with hostNetwork: true. That will allow you to create pods that open 80 and 443 directly on the node's network. If you have load balancer support in your cluster, you can instead use a Service of type LoadBalancer. It will forward from high node ports (e.g. 32080) to your backing HAProxy pods, and also automatically configure an LB in front of it to give you an external IP and forward 80/443 from that IP to your high node ports (again, assuming your kube deployment supports LoadBalancer Services).
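A minimal sketch of the DaemonSet approach; the image name and labels are placeholders to be replaced with your own:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: haproxy
spec:
  selector:
    matchLabels:
      app: haproxy
  template:
    metadata:
      labels:
        app: haproxy
    spec:
      # Bind directly to each node's network namespace so the
      # container's ports 80/443 are the node's ports 80/443.
      hostNetwork: true
      containers:
        - name: haproxy
          image: my-haproxy:latest   # placeholder for your HAProxy image
          ports:
            - containerPort: 80
            - containerPort: 443
```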
IBM Cloud has built-in solutions for load balancer and Ingress. The docs include sample YAMLs for both.
Load Balancer: https://console.bluemix.net/docs/containers/cs_loadbalancer.html#loadbalancer
Ingress: https://console.bluemix.net/docs/containers/cs_ingress.html#ingress
If you need TLS termination or want to use a route rather than an IP address to access your HAProxy, then Ingress would be the best choice. If those options don't matter, I'd suggest starting with the provided load balancer to see if that meets your needs.
Note that both the load balancer and Ingress require a paid cluster; for lite clusters, only NodePort is available.
Here's a sample YAML that deploys IBM Liberty and exposes it via a load balancer service.
# If you are not logged into the US-South (https://api.ng.bluemix.net)
# region, change the image registry location to match your region.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ibmliberty-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: ibmliberty
    spec:
      containers:
        - name: ibmliberty
          image: registry.ng.bluemix.net/ibmliberty
---
apiVersion: v1
kind: Service
metadata:
  name: ibmliberty-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: ibmliberty
  ports:
    - protocol: TCP
      port: 9080
I have a Docker container that needs to run in Kubernetes, but one of its parameters needs the container's cluster IP. How can I write a Kubernetes YAML file that supplies that info?
# I want docker to run like this
docker run ... --wsrep-node-address=<ClusterIP>

# xxx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: galera01
  labels:
    name: galera01
  namespace: cloudstack
spec:
  containers:
    - name: galeranode01
      image: erkules/galera:basic
      args:
        # Is there any variable that I can use to represent the
        # POD IP or CLUSTER IP here?
        - --wsrep-node-address=<ClusterIP>
If I get this right, you want to know the IP at which the container can be reached.
You can achieve this by using Kubernetes DNS:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Services
A records
“Normal” (not headless) Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. This resolves to the cluster IP of the Service.
Another way: you can create a Service and use that.
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service
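Putting this together, a sketch of the Pod from the question using a Service DNS name instead of a hardcoded IP; this assumes a Service named galera01 exists in the cloudstack namespace and selects this pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: galera01
  labels:
    name: galera01
  namespace: cloudstack
spec:
  containers:
    - name: galeranode01
      image: erkules/galera:basic
      args:
        # The Service name resolves via cluster DNS to the Service's cluster IP
        - --wsrep-node-address=galera01.cloudstack.svc.cluster.local
```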