What is the best approach to having the certs inside a container in Kubernetes?

I see that normally a new image is created, that is, a Dockerfile. But is it good practice to pass the cert through environment variables, with a script that picks it up and places it inside the container?
Another approach I see is to mount the certificate on a volume.
What would be the best approach to have a single image for all environments?
Just like what happens with software artifacts, I mean.
I find creating a new image for each environment or certificate renewal tedious, but if it has to be like this...

Definitely do not bake certificates into the image.
Because you tagged your question with azure-aks, I recommend using the Secrets Store CSI Driver to mount your certificates from Key Vault.
See the plugin project page on GitHub.
See also this doc: Getting Certificates and Keys using Azure Key Vault Provider.
This doc is better and more thorough, and worth going through even if you're not using the NGINX ingress controller: Enable NGINX Ingress Controller with TLS.
And so for different environments, you'd pull in different certificates from one or more key vaults and mount them to your cluster. Please also remember to use different credentials/identities to grab those certs.
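As a rough sketch of what that can look like (resource names, the vault name, and the tenant ID below are placeholders, and the exact parameter schema should be checked against the driver version you install):

# Hypothetical SecretProviderClass for the Azure Key Vault provider
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: my-tls-cert                     # placeholder
spec:
  provider: azure
  parameters:
    keyvaultName: "my-keyvault"         # one vault (or set of objects) per environment
    tenantId: "00000000-0000-0000-0000-000000000000"
    objects: |
      array:
        - |
          objectName: my-ingress-cert   # certificate name in Key Vault
          objectType: secret            # 'secret' returns the full PEM (cert + key)
---
# Pod volume mounting the cert through the CSI driver
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:1.0                 # placeholder image
      volumeMounts:
        - name: tls
          mountPath: /mnt/tls
          readOnly: true
  volumes:
    - name: tls
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: my-tls-cert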

The cloud native approach would be to not have your application handle the certificates but to externalize that completely to a different mechanism.
You could have a look at service meshes. They mostly work with the sidecar pattern, where a sidecar container running in the pod handles the encryption and decryption of your traffic. The iptables rules inside the pod are configured so that all traffic must go through the sidecar.
Depending on your requirements, you can check out Istio and Linkerd as service mesh solutions.
If a service mesh is not an option, I would recommend storing your certs as a Secret and mounting it as a volume into your container.
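A minimal sketch of that Secret-as-volume approach (names are placeholders; the Secret could be created with kubectl create secret tls my-app-tls --cert=tls.crt --key=tls.key):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:1.0              # placeholder image
      volumeMounts:
        - name: tls
          mountPath: /etc/tls        # app reads /etc/tls/tls.crt and /etc/tls/tls.key
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: my-app-tls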

Related

How to secure docker private registry credentials used in Kubernetes cluster nodes and namespaces?

Trying to see if there are any recommended or better approaches, since docker login my.registry.com creates a config.json with the user id and password, and it's not encrypted. Anyone logged into the node or jumpbox where the images are pushed/pulled from a private registry can easily see the registry credentials. As for using those credentials for a Kubernetes deployment, I believe the only option is to convert them into a regcred secret and refer to that as imagePullSecrets in YAML files. The secret can be namespace-scoped, but it still risks exposing the data to other users who may have access to that namespace, since k8s secrets are simply base64-encoded, not really encrypted.
Are there any recommended tools/plugins to secure and/or encrypt these credentials without involving external API calls?
I have heard about Bitnami sealed secrets but haven't explored that yet, would like to hear from others since this is a very common issue for any team/application that are starting containers journey.
There is no direct solution for this. On some specific clouds like AWS and GCP you can use their native IAM system. However, Kubernetes has no provisions beyond this (SealedSecrets won't help at all).
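For reference, the regcred / imagePullSecrets flow the question describes looks roughly like this (names are placeholders, and the secret remains only base64-encoded at rest):

# Create the pull secret:
#   kubectl create secret docker-registry regcred \
#     --docker-server=my.registry.com \
#     --docker-username=<user> --docker-password=<password> \
#     --namespace=my-namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
    spec:
      imagePullSecrets:
        - name: regcred                # referenced by name, never inlined in the manifest
      containers:
        - name: app
          image: my.registry.com/my-app:1.0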

Can a self-signed cert secure multiple CNs / FQDNs?

This is a bit of a silly setup, but here's what I'm looking at right now:
I'm learning Kubernetes
I want to push custom code to my Kubernetes cluster, which means the code must be available as a Docker image available from some Docker repository (default is Docker Hub)
While I'm willing to pay for Docker Hub if I have to (though I'd rather avoid it), I have concerns about putting my custom code on a third-party service. Sudden rate limits, security breaches, sudden ToS changes, etc
To this end, I'm running my own Docker registry within my Kubernetes cluster
I do not want to configure the Docker clients running on the Kubernetes nodes to trust insecure (HTTP) Docker registries. If I do choose to pull any images from an external registry (e.g. public images like nginx I may pull from Docker Hub instead of hosting locally) then I don't want to be vulnerable to MITM attacks swapping out the image
Ultimately I will have a build tool within the cluster (Jenkins or otherwise) pull my code from git, build the image, and push it to my internal registry. Then all nodes pulling from the registry live within the cluster. Since the registry never needs to receive images from sources outside the cluster or deliver them to sources outside the cluster, the registry does not need a NodePort service but can instead be a ClusterIP service... ultimately.
Until I have that ultimate setup ready, I'm building images on my local machine and wish to push them to the registry (from the internet)
Because I don't plan on making the registry accessible from the outside world (eventually), I can't utilize Let's Encrypt to generate valid certs for it (and even if I were making my Docker registry available to the outside world, I couldn't use Let's Encrypt anyway without writing some extra code to utilize certbot or something).
My plan is to follow the example in this StackOverflow post: generate a self-signed cert and then launch the Docker registry using that certificate. Then use a DaemonSet to make this cert trusted on all nodes in the cluster.
Now that you have the setup, here's the crux of my issue: within my cluster my Docker registry can be accessed via a simple host name (e.g. "docker-registry"), but outside of my cluster I need to either access it via a node IP address or a domain name pointing at a node or a load balancer.
When generating my self-signed cert I was asked to provide a CN / FQDN for the certificate. I put in "docker-registry" -- the internal host name I plan to utilize. I then tried to access my registry locally to push an image to it:
> docker pull ubuntu
> docker tag ubuntu example.com:5000/my-ubuntu
> docker push example.com:5000/my-ubuntu
The push refers to repository [example.com:5000/my-ubuntu]
Get https://example.com:5000/v2/: x509: certificate is valid for docker-registry, not example.com
I can generate a certificate for example.com instead of for docker-registry, however I worry that I'll have issues configuring the service or connecting to my registry from within my cluster if I provide my external domain like this instead of an internal host name.
This is why I'm wondering if I can just say that my self-signed cert applies to both example.com and docker-registry. If not, two other acceptable solutions would be:
Can I tell the Docker client not to verify the host name and just trust the certificate implicitly?
Can I tell the Docker registry to deliver one of two different certificates based on the host name used to access it?
If none of the three options are possible, then I can always just forego pushing images from my local machine and start the process of building images within the cluster -- but I was hoping to put that off until later. I'm learning a lot right now and trying to avoid getting distracted by tangential things.
Probably the easiest way to solve your problem would be to use Docker's insecure-registry feature. The concern you mention about this in your post (that it would open you up to security risks later) probably won't apply as the feature works by specifying specific IP addresses or host names to trust.
For example you could configure something like
{
  "insecure-registries" : [ "10.10.10.10:5000" ]
}
and the only IP address that your Docker daemons will access without TLS is the one at that host and port number.
If you don't want to do that, then you'll need to get a trusted TLS certificate in place. The issue you mentioned about having multiple names per cert is usually handled with the Subject Alternative Name field in a cert (indeed, Kubernetes uses that feature quite a bit).
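For example, a single self-signed certificate covering both the internal and the external name can be generated with SANs roughly like this (names are placeholders; -addext requires OpenSSL 1.1.1 or newer):

# Self-signed cert valid for the in-cluster service name and the external domain
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout registry.key -out registry.crt \
  -subj "/CN=docker-registry" \
  -addext "subjectAltName=DNS:docker-registry,DNS:docker-registry.default.svc.cluster.local,DNS:example.com,IP:10.10.10.10"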

Kubernetes - from Minikube to production

I have created a simple PHP api application that works with a mysql database to store data. I have been experimenting with Kubernetes on my Windows 10 machine through Minikube.
I have just about got my head round the ideas involved, yet I’m not sure about how to implement this properly. So far I have used Kompose to create a set of yaml files from an existing docker-compose file. This has been half successful.
To get my application code into a pod hosting PHP, I have been using hostPath to share from my local machine. I mount to the minikube machine and share from there. I was having trouble sharing by other means. The application code is hosted in a github repo.
My questions are:
Is mounting my application code into a pod (assuming this is similar to what happens in docker) the correct way to do this? I’m not clear exactly what information is held on an image retrieved from the docker hub. Although I have read up on containers isolating the build environment from your machine.
How does this approach translate into a production environment hosted on a cloud? I see there are various storage types. I had, for example, wanted to try deploying on AWS just to see how this would work in practice.
I’m really looking for guidance to go from the tutorials found on the web working on my machine, to something that could be done for a customer hosted on the cloud. This might scale up to a more microservices style architecture over time.
The approach you are describing is mostly for development setups, where you want to mount your code into the container as a volume so you don't have to rebuild every time your code changes. Typically done with a docker-compose file.
For production setups, you want the Docker image to work correctly on its own and only mount volumes for data you want to persist; databases are the core example. For this, EKS is deeply integrated into the AWS infrastructure and will create EBS volumes on demand. You don't need to provision any volume or even care in most cases (unless you need multiple read-write volumes for scaling).
For a PHP application you really should not persist any data in the pod, because it will create other issues when you need to scale the application. Also, a good approach for managing files that need to persist is S3 (AWS simple storage service).
So generally speaking, you need a Deployment per application, a Service to access the pods of that application, and then an Ingress object to route traffic from the internet to each Service.
Your application Docker image is really the core. You just build it with your code inside. Make sure to pass configuration using environment variables or a configuration file so you can connect to the database.
Now for Kubernetes, for each component (e.g. the PHP application, MySQL) you will most likely create a Deployment manifest that points to the Docker image and adds some configuration environment variables.
For production, you will need persistent volumes. On AWS you can simply use EBS-backed volumes.
To get traffic from the internet to your PHP application, you will need to add one or more k8s components:
A Service manifest that exposes your PHP deployment/pods on a stable address. If you only have one or very few services, you can use the LoadBalancer type, which on a cloud like AWS will create an ALB/ELB (you might need to add an annotation to your service).
An Ingress, which is just a reverse proxy (Contour, NGINX, Traefik). In a cloud environment it will map to an ALB/ELB. The advantage of this is that you can have a single ALB for all your services, i.e. it saves money. You can also configure routing paths or TLS termination in one place. A minimal sketch of these objects follows below.
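A minimal sketch of those objects for the PHP app (image names, hosts, and ports are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-app
spec:
  replicas: 2
  selector:
    matchLabels: { app: php-app }
  template:
    metadata:
      labels: { app: php-app }
    spec:
      containers:
        - name: php-app
          image: my-registry/php-app:1.0   # your code is baked into this image
          ports:
            - containerPort: 80
          env:
            - name: DB_HOST                # configuration via environment variables
              value: mysql
---
apiVersion: v1
kind: Service
metadata:
  name: php-app
spec:
  selector:
    app: php-app
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: php-app
spec:
  rules:
    - host: app.example.com                # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: php-app
                port:
                  number: 80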

Docker publish ports during build

I'm using Docker to build an nginx environment. I'm wondering if it's possible to expose or publish the ports (80, 443) during build so letsencrypt can run at build time (it needs network access to a server in the (intermediate) container).
Is this possible?
I have never seen that, and I think it is not possible by design:
You should not place the secret key in the image.
You might need to renew the certificate after 2 months and would need to rebuild the whole image.
In general, this is done using a companion letsencrypt Docker image, sometimes called a sidekick. You basically have your app (and its containers) and a letsencrypt container exposing a volume, which nginx then mounts using volumes_from; this volume is where the letsencrypt container puts the fetched certificates. This happens during container startup, not during image creation. You use a docker-compose file to configure everything needed.
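A rough docker-compose sketch of that companion pattern, here using the official certbot image in webroot mode and shared named volumes rather than volumes_from (image names, domains, and paths are placeholders; check the certbot flags against its docs):

version: "2"
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/letsencrypt:ro       # nginx reads the fetched certificates here
      - webroot:/var/www/certbot:ro     # nginx must serve /.well-known/acme-challenge/ from this path
  letsencrypt:
    image: certbot/certbot
    volumes:
      - certs:/etc/letsencrypt          # certbot writes the certificates here
      - webroot:/var/www/certbot
    # runs at container start-up, not at image build time
    command: certonly --webroot -w /var/www/certbot -d example.com --email admin@example.com --agree-tos --non-interactive
volumes:
  certs:
  webroot: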
E.g. you can have a look here
a) https://github.com/rancher/community-catalog/blob/master/templates/letsencrypt/2/docker-compose.yml
b) or http://letsencrypt.readthedocs.io/en/latest/using.html#running-with-docker
a) lets you define the domains you are going to need using ENV variables, which suits a docker-compose setup very well, without providing any files like a configuration on the host (keeps it portable).
You can still put all this on the nginx server, but it's just not best practice, for many reasons (e.g. the need to configure nginx).
If you want to stick to "build time", an alternative is using the DNS verification mode: instead of verifying with a connect-back on a port, you verify using a DNS entry. Some links for that:
- https://github.com/lukas2511/letsencrypt.sh/wiki/Examples-for-DNS-01-hooks
- the a) container does this
For this scenario you might want to pick http://cloudflare.com - AFAIK it is the only DNS service with free API access for unlimited domains, anything else either costs money or has limits.

Docker, Registrator and Consul by example

I am new to both Docker and Consul, and am trying to get a feel for how containerized apps could use Consul for both service registry and KV pair config management ("configuration").
My understanding was that I could:
Create an image that runs Consul server, so something like this; then
Spin up three of these Docker-Consul containers (thus forming a cluster/quorum) on myvm01.example.com (an Ubuntu VM); then
Refactor my app to use Consul and create a Docker image that runs my app and Consul agent, with the agent configured to join the 3-node quorum at startup. On startup, my app uses the local Consul agent to pull down all of its configurations, stored as KV pairs. It also pulls in registered/healthy services, and uses a local load balancing tool to balance the services it integrates with.
Run my app's containers on, say, myvm02.example.com (another Ubuntu VM).
So to begin with, if any of this seems like I am misunderstanding the normal/proper uses of Docker and Consul (sans Registrator), please begin by correcting me!
Assuming I'm more or less correct, I recently stumbled across Registrator and am now even more confused. Registrator seems to be some middleman between your app containers and your Consul (or whatever registry you use) servers.
After reading their Quickstart tutorial, it sounds like what you're supposed to do is:
Deploy my Consul cluster/quorum containers to myvm01.example.com like before
Instead of "Dockerizing" my app to use Consul directly, I simply integrate it with Registrator
Then I deploy a Registrator container somewhere, and configure it to integrate with Consul
Then I deploy my app containers. They integrate with Registrator, and Registrator in turn integrates with Consul.
My concerns:
Is my understanding here correct or way off base? If so, how?
What is actually gained by the addition of Registrator? It doesn't seem (to the untrained eye at least) like anything more than a layer of indirection between the app and the service registry.
Will I still be able to leverage Consul's KV config service through Registrator?
Is my understanding here correct or way off base? If so, how?
It seems to me that it's not a good solution to have all cluster/quorum members running inside the same VM. It's not so bad if you use it for development or testing or something where you don't care much about reliability, but not for production.
Once your VM dies, you'll lose all the advantages you gain by creating a cluster. Even more, you can lose all the data you have in the K/V store, because you are running the Consul servers inside Docker containers, which need to be additionally configured to keep their data between runs.
As for the rest, I see it the same as you.
What is actually gained by the addition of Registrator?
From my point of view, the main thing is that you don't have to provide an instance of the Consul agent in every container you run. The container with the image you run is responsible only for its main functions, not for registering itself somewhere. You may simply pull an image and just run a container with it to make its service available, without any additional work.
Will I still be able to leverage Consul's KV config service through Registrator?
Unfortunately, no. At least, we didn't find a way to use it like that when we were looking for something to handle service discovery and configuration management. We came to the conclusion that Registrator is not a proxy for the K/V store and is used only to automate service discovery. So you have to use some other logic to access Consul's K/V store.
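You would instead talk to the local agent's HTTP K/V API directly, for example (a sketch, assuming an agent reachable on localhost:8500):

# Write a config value to Consul's K/V store (Registrator plays no part here)
curl -X PUT -d 'db.example.internal' http://localhost:8500/v1/kv/myapp/config/db_host
# Read it back; ?raw returns just the value instead of the JSON/base64 envelope
curl http://localhost:8500/v1/kv/myapp/config/db_host?raw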
Update: furthermore, here are two articles, "Automatic Docker Service Announcement with Registrator" and "Automatic container registration with Consul and Registrator", that I found useful for understanding Registrator's role in the service discovery process.
