How to secure docker private registry credentials used in Kubernetes cluster nodes and namespaces? - docker

Trying to see if there are any recommended or better approaches, since docker login my.registry.com creates a config.json with the user ID and password, and it's not encrypted. Anyone logged into the node or jumpbox where the images are pushed/pulled from a private registry can easily see the registry credentials. As for using those credentials for Kubernetes deployments, I believe the only option is to convert them into a regcred secret and reference it as imagePullSecrets in the YAML files. The secret can be namespace-scoped, but it still carries the risk of exposing the data to other users who have access to that namespace, since k8s secrets are simply base64-encoded, not actually encrypted.
Are there any recommended tools/plugins to secure and/or encrypt these credentials without involving external API calls?
I have heard about Bitnami Sealed Secrets but haven't explored it yet. I would like to hear from others, since this is a very common issue for any team/application starting their container journey.

There is no direct solution for this. On some specific hosting providers like AWS and GCP you can use their native IAM systems, but Kubernetes has no provisions beyond that (SealedSecrets won't help at all here).
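To illustrate the point in the question: base64 is an encoding, not encryption, so namespace scoping is the only barrier. Anyone who can read the Secret object can recover the registry password without any key. A quick sketch (the registry name and password here are made-up stand-ins for what a regcred's .dockerconfigjson payload contains):

```shell
# Kubernetes stores Secret data base64-encoded, which is trivially reversible.
# Simulate the .dockerconfigjson payload of an imagePullSecret:
encoded=$(printf '{"auths":{"my.registry.com":{"password":"s3cret"}}}' | base64)
echo "$encoded"

# Anyone with 'kubectl get secret' access in the namespace can do this:
printf '%s' "$encoded" | base64 -d
```

This is why RBAC on who can read Secrets in the namespace (or encryption at rest in etcd) matters more than the encoding itself.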

Related

What is the best approach to having the certs inside a container in kubernetes?

I see that normally a new image is created (i.e., a Dockerfile), but is it good practice to pass the cert through environment variables, with a script that picks it up and places it inside the container?
Another approach I see is to mount the certificate on a volume.
What would be the best approach to having a single image for all environments?
Just like what happens with software artifacts, I mean.
Creating a new image for each environment or certificate renewal seems tedious to me, but if it has to be like this...
Definitely do not bake certificates into the image.
Because you tagged your question with azure-aks, I recommend using the Secrets Store CSI Driver to mount your certificates from Key Vault.
See the plugin project page on GitHub
See also this doc Getting Certificates and Keys using Azure Key Vault Provider
This doc is better and more thorough, and worth going through even if you're not using the NGINX ingress controller: Enable NGINX Ingress Controller with TLS
And so for different environments, you'd pull in different certificates from one or more key vaults and mount them to your cluster. Please also remember to use different credentials/identities to grab those certs.
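As a rough sketch of what that looks like with the Azure Key Vault provider (the vault name, tenant ID, and certificate name below are placeholders, not values from the question):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-tls
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    keyvaultName: "my-vault"        # placeholder vault name
    tenantId: "<tenant-id>"         # placeholder tenant
    objects: |
      array:
        - |
          objectName: my-tls-cert
          objectType: secret        # certs with private keys are fetched as "secret"
```

A pod then mounts this via a `csi` volume referencing the SecretProviderClass, and each environment points at its own vault and identity.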
The cloud native approach would be to not have your application handle the certificates but to externalize that completely to a different mechanism.
You could have a look at service meshes. They mostly work with the sidecar pattern, where a sidecar container runs in the pod and handles en-/decryption of your traffic. The iptables rules inside the pod are configured so that all traffic must go through the sidecar.
Depending on your requirements you can check out Istio and Linkerd as service mesh solutions.
If a service mesh is not an option, I would recommend storing your certs as a Secret and mounting it as a volume into your container.
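A minimal sketch of the Secret-as-volume approach (the pod, image, and secret names are made up; the Secret itself would be created with something like `kubectl create secret tls app-tls --cert=tls.crt --key=tls.key`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:1.0
      volumeMounts:
        - name: tls
          mountPath: /etc/tls   # tls.crt and tls.key appear as files here
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: app-tls
```

The same image then works in every environment; only the Secret contents differ per cluster.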

Can a self-signed cert secure multiple CNs / FQDNs?

This is a bit of a silly setup, but here's what I'm looking at right now:
I'm learning Kubernetes
I want to push custom code to my Kubernetes cluster, which means the code must be available as a Docker image available from some Docker repository (default is Docker Hub)
While I'm willing to pay for Docker Hub if I have to (though I'd rather avoid it), I have concerns about putting my custom code on a third-party service: sudden rate limits, security breaches, sudden ToS changes, etc.
To this end, I'm running my own Docker registry within my Kubernetes cluster
I do not want to configure the Docker clients running on the Kubernetes nodes to trust insecure (HTTP) Docker registries. If I do choose to pull any images from an external registry (e.g. public images like nginx I may pull from Docker Hub instead of hosting locally) then I don't want to be vulnerable to MITM attacks swapping out the image
Ultimately I will have a build tool within the cluster (Jenkins or otherwise) pull my code from git, build the image, and push it to my internal registry. Then all nodes pulling from the registry live within the cluster. Since the registry never needs to receive images from sources outside of the cluster or deliver them to sources outside of the cluster, it doesn't need a NodePort service and can instead be a ClusterIP service... ultimately.
Until I have that ultimate setup ready, I'm building images on my local machine and wish to push them to the registry (from the internet)
Because I don't plan on making the registry accessible from the outside world (eventually), I can't use Let's Encrypt to generate valid certs for it. (Even if I were making my Docker registry available to the outside world, I couldn't use Let's Encrypt anyway without writing extra code to drive certbot or something similar.)
My plan is to follow the example in this StackOverflow post: generate a self-signed cert and then launch the Docker registry using that certificate. Then use a DaemonSet to make this cert trusted on all nodes in the cluster.
Now that you have the setup, here's the crux of my issue: within my cluster my Docker registry can be accessed via a simple host name (e.g. "docker-registry"), but outside of my cluster I need to either access it via a node IP address or a domain name pointing at a node or a load balancer.
When generating my self-signed cert I was asked to provide a CN / FQDN for the certificate. I put in "docker-registry" -- the internal host name I plan to utilize. I then tried to access my registry locally to push an image to it:
> docker pull ubuntu
> docker tag ubuntu example.com:5000/my-ubuntu
> docker push example.com:5000/my-ubuntu
The push refers to repository [example.com:5000/my-ubuntu]
Get https://example.com:5000/v2/: x509: certificate is valid for docker-registry, not example.com
I can generate a certificate for example.com instead of for docker-registry, however I worry that I'll have issues configuring the service or connecting to my registry from within my cluster if I provide my external domain like this instead of an internal host name.
This is why I'm wondering if I can just say that my self-signed cert applies to both example.com and docker-registry. If not, two other acceptable solutions would be:
Can I tell the Docker client not to verify the host name and just trust the certificate implicitly?
Can I tell the Docker registry to deliver one of two different certificates based on the host name used to access it?
If none of the three options are possible, then I can always just forego pushing images from my local machine and start the process of building images within the cluster -- but I was hoping to put that off until later. I'm learning a lot right now and trying to avoid getting distracted by tangential things.
Probably the easiest way to solve your problem would be to use Docker's insecure-registry feature. The concern you mention about this in your post (that it would open you up to security risks later) probably won't apply, as the feature works by specifying the specific IP addresses or host names to trust.
For example you could configure something like
{
  "insecure-registries" : [ "10.10.10.10:5000" ]
}
and the only IP address that your Docker daemons will access without TLS is the one at that host and port number.
If you don't want to do that, then you'll need to get a trusted TLS certificate in place. The issue you mentioned about having multiple names per cert is usually handled with the Subject Alternative Name (SAN) field in a cert (indeed, Kubernetes uses that feature quite a bit).
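As a sketch of the SAN approach, a single self-signed cert covering both the internal and external names can be generated with OpenSSL's `-addext` flag (OpenSSL 1.1.1 or newer); the names below are the ones from the question:

```shell
# One cert, two names: list both in the Subject Alternative Name extension.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout registry.key -out registry.crt \
  -subj "/CN=docker-registry" \
  -addext "subjectAltName=DNS:docker-registry,DNS:example.com"

# Verify both names made it into the cert:
openssl x509 -in registry.crt -noout -ext subjectAltName
```

Clients that trust registry.crt will then accept the registry whether it is reached as docker-registry inside the cluster or as example.com from outside.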

How to configure Minio sourced from a subfolder in a single bucket in Docker?

I am trying to find a way to configure MinIO in Docker to use a single S3 bucket as its backend, enabling my client to expose S3 capabilities to their internal customers.
To meet some very specialized compliance rules in an air-gapped environment, my client was provisioned a single bucket in an on-premise S3-compatible solution. They cannot get additional buckets but need to provide their internal organizational customers access to S3 capabilities, including the ability to leverage buckets, ACLs, etc. The requirement is to use their existing S3 storage bucket and not other on-premise storage.
I tried the MinIO gateway, but it tries to create and manage new buckets at the root of the underlying S3 provider. I couldn't find anything like a "prefix" option I could supply to force it to work only inside {host}/{bucketName} instead of the root endpoint for their keys.
MinIO server might work, but we'd need to mount a Docker volume backed by their underlying bucket, and I'm concerned the solution would become brittle. Also, I can't find any well-regarded, production-ready, vendor-supported S3 volume drivers. Since I don't have a volume plugin, I haven't validated performance yet, though I'm concerned it will be sub-par as well.
How can I, in a docker environment, make gateway work to provide bucket/user/management capabilities all rooted in a single underlying bucket/folder? I'm open to alternative designs provided I can meet the customer's requirements (run via docker, store in their underlying S3 storage, provide ability to provision and secure new buckets).

what is the use of secrets in kubernetes, and what is its purpose?

I am new to Kubernetes and am not able to understand the use of secrets in Kubernetes. Can anybody tell me why secrets are used and what their purpose is?
Secrets are used to manage sensitive information like database passwords, user credentials, SSL certificates, etc.
They help you decouple sensitive information from your builds. You can bind secrets into a pod at run time as volumes or as environment variables.
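As a minimal sketch of the environment-variable binding (the names are made up): given a Secret created with `kubectl create secret generic db-creds --from-literal=password=s3cret`, a pod can consume it like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:1.0
      env:
        - name: DB_PASSWORD        # exposed to the process as $DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-creds
              key: password
```

The image itself never contains the password, so the same build can run in every environment.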

How to properly store and share docker host access?

I followed a docker-machine tutorial to setup a docker swarm in the cloud. I had setup a bunch of replicas and life is good. Now I need to give my teammates access to this docker swarm. How do I do that?
Should I share docker certificate files? Can each team member have an individual set of certificate files? Is there any way to setup OAuth or other form of SSO?
The Docker daemon doesn't do any extended client auth.
You can generate certificates for each client from the CA that signed the swarm certificate, which is probably the minimum you want. Access to Docker is root access to the host, so it's best not to hand out direct access to everyone, or outside of development.
For any extended authentication and authorisation you would need to put a broker between the Docker API and your clients. The easiest way to do this is to use a higher-level management platform like Rancher or Shipyard, which can manage the swarm for you.
Mesos/Marathon/Mesosphere and Kubernetes are similar in function but have more of their own idea of what clustering is.
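As a sketch of issuing a per-teammate client cert, assuming you have the CA's ca.pem and ca-key.pem on hand (the file and user names here are hypothetical):

```shell
# Generate a key and CSR for one team member, then sign it with the CA,
# marking the cert for client authentication only.
openssl genrsa -out alice-key.pem 2048
openssl req -new -key alice-key.pem -subj "/CN=alice" -out alice.csr
printf 'extendedKeyUsage = clientAuth\n' > extfile.cnf
openssl x509 -req -in alice.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -extfile extfile.cnf -days 365 -out alice-cert.pem

# The teammate then connects with something like:
#   docker --tlsverify --tlscacert=ca.pem \
#     --tlscert=alice-cert.pem --tlskey=alice-key.pem -H=<host>:2376 info
```

Per-user certs at least let you revoke one person's access by rotating the CA, though the daemon still can't distinguish users beyond "holds a valid cert".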