This is a bit of a silly setup, but here's what I'm looking at right now:
I'm learning Kubernetes
I want to push custom code to my Kubernetes cluster, which means the code must be packaged as a Docker image available from some Docker registry (the default is Docker Hub)
While I'm willing to pay for Docker Hub if I have to (though I'd rather avoid it), I have concerns about putting my custom code on a third-party service: sudden rate limits, security breaches, ToS changes, etc
To this end, I'm running my own Docker registry within my Kubernetes cluster
I do not want to configure the Docker clients running on the Kubernetes nodes to trust insecure (HTTP) Docker registries. If I do choose to pull any images from an external registry (e.g. public images like nginx that I may pull from Docker Hub instead of hosting locally), I don't want to be vulnerable to MITM attacks swapping out the image
Ultimately I will have a build tool within the cluster (Jenkins or otherwise) pull my code from git, build the image, and push it to my internal registry. All nodes pulling from the registry then live within the cluster. Since the registry never needs to receive images from, or deliver them to, sources outside of the cluster, it does not need a NodePort service and can instead be a ClusterIP service... ultimately
Until I have that ultimate setup ready, I'm building images on my local machine and wish to push them to the registry (from the internet)
Because I don't plan on making the registry accessible from the outside world (eventually), I can't use Let's Encrypt to generate valid certs for it (and even if I were exposing the registry publicly, I couldn't use Let's Encrypt anyway without writing some extra glue around certbot or something similar)
My plan is to follow the example in this StackOverflow post: generate a self-signed cert and then launch the Docker registry using that certificate. Then use a DaemonSet to make this cert trusted on all nodes in the cluster.
Now that you have the setup, here's the crux of my issue: within my cluster my Docker registry can be accessed via a simple host name (e.g. "docker-registry"), but outside of my cluster I need to access it via either a node IP address or a domain name pointing at a node or a load balancer.
When generating my self-signed cert I was asked to provide a CN / FQDN for the certificate. I put in "docker-registry" -- the internal host name I plan to utilize. I then tried to access my registry locally to push an image to it:
> docker pull ubuntu
> docker tag ubuntu example.com:5000/my-ubuntu
> docker push example.com:5000/my-ubuntu
The push refers to repository [example.com:5000/my-ubuntu]
Get https://example.com:5000/v2/: x509: certificate is valid for docker-registry, not example.com
I can generate a certificate for example.com instead of for docker-registry, however I worry that I'll have issues configuring the service or connecting to my registry from within my cluster if I provide my external domain like this instead of an internal host name.
This is why I'm wondering if I can just say that my self-signed cert applies to both example.com and docker-registry. If not, two other acceptable solutions would be:
Can I tell the Docker client not to verify the host name and just trust the certificate implicitly?
Can I tell the Docker registry to deliver one of two different certificates based on the host name used to access it?
If none of the three options are possible, then I can always just forego pushing images from my local machine and start the process of building images within the cluster -- but I was hoping to put that off until later. I'm learning a lot right now and trying to avoid getting distracted by tangential things.
Probably the easiest way to solve your problem would be to use Docker's insecure-registry feature. The concern you mention about this in your post (that it would open you up to security risks later) probably won't apply, as the feature works by listing specific IP addresses or host names to trust.
For example you could configure something like
{
"insecure-registries" : [ "10.10.10.10:5000" ]
}
and the only IP address that your Docker daemons will access without TLS is the one at that host and port number.
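On a typical Linux node this goes into the daemon configuration file; here is a minimal sketch, assuming a systemd-based install and the registry address from the example above (every node that pulls from the registry needs the same change):
# If /etc/docker/daemon.json already exists, merge this key into it instead of overwriting.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["10.10.10.10:5000"]
}
EOF
# Restart the daemon so the new setting takes effect
sudo systemctl restart docker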
If you don't want to do that, then you'll need to get a trusted TLS certificate in place. The issue you mentioned about having multiple names per cert is usually handled with the Subject Alternative Name (SAN) field in a cert (indeed, Kubernetes uses that feature quite a bit).
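As a minimal sketch, assuming OpenSSL 1.1.1+ and the two names from your post (add the fully qualified in-cluster service name as well if you use it):
# Self-signed cert whose SANs cover both the in-cluster name and the external domain.
# "docker-registry" and "example.com" are the names from the question; adjust as needed.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout registry.key -out registry.crt \
  -subj "/CN=docker-registry" \
  -addext "subjectAltName=DNS:docker-registry,DNS:example.com"
The Docker client will then accept either name, provided your local machine and the nodes (via your DaemonSet) trust registry.crt.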
I see that normally a new image is created, that is, a Dockerfile, but is it good practice to pass the cert through environment variables, with a script that picks it up and places it inside the container?
Another approach I see is to mount the certificate on a volume.
What would be the best approach to have a single image for all environments?
Just like what happens with software artifacts, I mean.
Creating a new image for each environment or each renewal seems tedious to me, but if it has to be like this...
Definitely do not bake certificates into the image.
Because you tagged your question with azure-aks, I recommend using the Secrets Store CSI Driver to mount your certificates from Key Vault.
See the plugin project page on GitHub
See also this doc: Getting Certificates and Keys using Azure Key Vault Provider
This doc is better and more thorough, and worth going through even if you're not using the nginx ingress controller: Enable NGINX Ingress Controller with TLS
And so for different environments, you'd pull in different certificates from one or more key vaults and mount them to your cluster. Please also remember to use different credentials/identities to grab those certs.
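As a rough sketch of what that looks like (the key vault name, object name, and tenant ID below are placeholders, and the exact schema can differ between driver versions, so check the docs above):
# Rough sketch only; "my-keyvault", "my-tls-cert" and the tenant ID are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-tls
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    keyvaultName: "my-keyvault"
    objects: |
      array:
        - |
          objectName: my-tls-cert
          objectType: secret
    tenantId: "<tenant-id>"
EOF
Your pod spec then adds a csi volume with driver secrets-store.csi.k8s.io and volumeAttributes.secretProviderClass: azure-tls, mounted read-only wherever the application expects the certificate files.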
The cloud native approach would be to not have your application handle the certificates but to externalize that completely to a different mechanism.
You could have a look at service meshes. They mostly work with the sidecar pattern, where a sidecar container running in the pod handles en-/decryption of your traffic. The iptables rules inside the pod are configured so that all traffic must go through the sidecar.
Depending on your requirements you can check out istio and linkerd as service mesh solutions.
If a service mesh is not an option, I would recommend storing your certs as a Secret and mounting that as a volume into your container.
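A minimal sketch of that (secret name, paths, and pod spec are just examples):
# Create a TLS secret from existing cert/key files (example file names).
kubectl create secret tls my-tls --cert=tls.crt --key=tls.key
# Mount it read-only into the container.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: tls
          mountPath: /etc/tls
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: my-tls
EOF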
I want to build some docker images in a certain step of my Google Cloud Build, then push them in another step. I'm thinking the CI used doesn't really matter here.
This is because some of the push commands are dependent on some other conditions and I don't want to re-build the images.
I can docker save to some tar in the mounted workspace, then docker load it later. However that's fairly slow. Is there any better strategy? I thought of trying to copy to/from /var/lib/docker, but that seems ill advised.
The key here is doing the docker push from the same host on which you have done the docker build.
The docker build, however, doesn’t need to take place on the CICD build machine itself, because you can point its local docker client to a remote docker host.
To point your docker client to a remote docker host you need to set three environment variables.
On a Linux environment:
DOCKER_HOST=tcp://<IP Address Of Remote Server>:2376
DOCKER_CERT_PATH=/some/path/to/docker/client/certs
DOCKER_TLS_VERIFY=1
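For example, a CI step could do something like this (the address, cert path, and image name are placeholders):
# Point the local client at the remote daemon, then build and push there.
export DOCKER_HOST=tcp://203.0.113.10:2376      # placeholder address
export DOCKER_CERT_PATH=/secure/docker-certs    # placeholder cert directory
export DOCKER_TLS_VERIFY=1
docker build -t registry.example.com/my-app:1.0 .
docker push registry.example.com/my-app:1.0
Because both commands talk to the same remote daemon, the image built in one step is still present for the push in a later step.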
This is a very powerful concept that has many uses. One can, for example, point to a dev|tst|prod docker swarm manager node, or point from Linux to a remote Windows machine and initiate the build of a Windows container. This latter use case might be useful if you have common CICD tooling that implements some proprietary image labeling that you want to re-use for Windows containers as well.
The authentication here is mutual SSL/TLS, so both client and server private/public keys need to be generated with a common CA. This can be a little tricky at first, so you may want to see how it works with docker-machine first, using its environment-setting shortcuts:
https://docs.docker.com/machine/reference/env/
Once you've mastered this concept, you'll then need to script the setting of these environment variables in your CICD scripts, making the client certs available in a secure way.
I need to block all registries and allow only one private registry for Docker to pull images from. How can that be done natively in Docker?
Using the RedHat options will not work on the upstream Docker CE or EE engine; RedHat forked the docker engine and added their own features that are incompatible. You'll also find that /etc/sysconfig/docker is a RedHat-only configuration file, designed to work with their version of the startup scripts. And I don't believe RedHat still supports this old fork either, preferring their own podman and crio runtimes.
A hard limit on registry servers is not currently supported in the Linux Docker engine. The standard way to implement this for servers is with firewall rules on outbound connections, which must permit outbound connections only to a known allow list. You still need to ensure that users don't import images from a tar file, or rebuild the otherwise blocked images from scratch (for example, all official images on Docker Hub have their source available to rebuild them).
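As a sketch of that firewall idea (the registry address is a placeholder, only HTTPS traffic is covered, and real setups need rules matching your network and existing chain policies):
# Allow outbound HTTPS only to the private registry, drop other outbound HTTPS.
iptables -A OUTPUT -p tcp -d 10.10.10.10 --dport 443 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j DROP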
With Docker Desktop, the ability to restrict what registries a user can pull from has been added in their paid business tier with their image access management.
Previously I might have suggested using Notary and Docker Content Trust to ensure you only run trusted images, but that tooling has a variety of known issues, including the use of TOFU (trust on first use), which allows any image from a repo that hasn't been seen before to be signed by anyone and trusted to run. There are a few attempts to replace this, and the current leader is sigstore/cosign, but that isn't integrated directly into the docker engine. If you run in Kubernetes, this would be configured in your admission controller, like Gatekeeper or Kyverno.
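For reference, verifying a signature with cosign looks roughly like this (the key file and image name are placeholders); the actual enforcement would live in the admission controller, not in the docker engine:
# Verify an image signature against a known public key.
cosign verify --key cosign.pub registry.example.com/my-app:1.0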
Just found this in the RedHat docs:
This can be done in the docker daemon config:
/etc/sysconfig/docker
BLOCK_REGISTRY='--block-registry=all'
ADD_REGISTRY='--add-registry=registry.access.redhat.com'
and then do:
systemctl restart docker
I followed a docker-machine tutorial to setup a docker swarm in the cloud. I had setup a bunch of replicas and life is good. Now I need to give my teammates access to this docker swarm. How do I do that?
Should I share docker certificate files? Can each team member have an individual set of certificate files? Is there any way to setup OAuth or other form of SSO?
The Docker daemon doesn't do any extended client auth.
You can generate certificates for each client from the CA that signed the swarm certificate, which is probably the minimum you want. Access to Docker is root access to the host, so it's best not to hand out direct access to everyone, or outside of development.
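A sketch of issuing such a client cert with openssl (file names are placeholders; the Docker client expects ca.pem, cert.pem and key.pem in DOCKER_CERT_PATH or ~/.docker):
# Issue a client certificate signed by the CA that issued the daemon/swarm certs.
openssl genrsa -out key.pem 4096
openssl req -new -key key.pem -subj "/CN=teammate" -out client.csr
echo "extendedKeyUsage = clientAuth" > extfile.cnf
openssl x509 -req -days 365 -sha256 -in client.csr \
  -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out cert.pem -extfile extfile.cnf
Each teammate then points their client at the daemon with DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_TLS_VERIFY.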
For any extended authentication and authorisation you would need to put a broker between the Docker API and your clients. The easiest way to do this is to use a higher-level management platform like Rancher or Shipyard that can manage the swarm for you.
Mesos/Marathon/Mesosphere and Kubernetes are similar in function but have more of their own idea of what clustering is.
I'm using Docker to build an nginx environment. I'm wondering if it's possible to expose/publish the ports (80, 443) during build so letsencrypt can run at build time (it needs network access to a server in the (intermediate) container).
Is this possible?
I have never seen that, and I think it is not possible by design:
You should not place the secret key in the image
You would need to renew the certificate after 2 months and would then have to rebuild the whole image
In general, this is done using a companion letsencrypt docker image, sometimes called a sidekick. You basically have your app (and its containers) and a letsencrypt container exposing a volume, which nginx then mounts using volumes_from; this volume is where the letsencrypt container puts the fetched certificates. This happens during container startup, not during image creation. You use a docker-compose file to configure everything needed (see the sketch further below).
E.g. you can have a look here
a) https://github.com/rancher/community-catalog/blob/master/templates/letsencrypt/2/docker-compose.yml
b) or http://letsencrypt.readthedocs.io/en/latest/using.html#running-with-docker
a) lets you define the domains you are going to need using ENV variables, which suits a docker-compose setup very well, without providing any files like a configuration on the host (keeps it portable).
You can still put all of this on the nginx server, but it's just not best practice, for many reasons (e.g. the need to configure nginx).
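A minimal sketch of the sidecar layout (domain, e-mail, and volume names are placeholders; it uses named volumes instead of volumes_from, which serves the same purpose here, and the a) template above is a more complete example):
# nginx and a certbot container share two volumes: a webroot for the HTTP
# challenge and a directory for the fetched certificates. The certs are
# fetched at container startup, not at image build time.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - webroot:/usr/share/nginx/html
      - certs:/etc/letsencrypt:ro
  letsencrypt:
    image: certbot/certbot
    command: certonly --webroot -w /usr/share/nginx/html -d example.com -m you@example.com --agree-tos -n
    volumes:
      - webroot:/usr/share/nginx/html
      - certs:/etc/letsencrypt
volumes:
  webroot:
  certs:
EOF
docker-compose up -d
The nginx config still has to be pointed at the certificates under /etc/letsencrypt once they have been fetched.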
If you want to stick to "build time", an alternative is using the DNS verification mode (DNS-01): instead of verifying via a connect-back on a port, you verify using a DNS entry. Some links for that:
- https://github.com/lukas2511/letsencrypt.sh/wiki/Examples-for-DNS-01-hooks
- the a) container does this
For this scenario you might want to pick http://cloudflare.com - AFAIK it is the only DNS service with free API access for unlimited domains; anything else either costs money or has limits.
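As one concrete example of the DNS route, using certbot's dns-cloudflare plugin rather than the letsencrypt.sh hooks linked above (domain and credentials path are placeholders):
# DNS-01 validation via the Cloudflare API; no inbound port access is needed,
# so it also works at build time. Requires the certbot-dns-cloudflare plugin;
# cloudflare.ini holds the API credentials.
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /secure/cloudflare.ini \
  -d example.com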