I am currently setting up a local cluster at work using Docker. Basically everything works fine; the only thing I worry about is that other devs who use my setup may eventually push their local builds to a remote repository.
Since this would be a catastrophe - we are not allowed to upload the company's artefacts anywhere other than internal servers - is there a way to prevent other users from pushing to a remote Docker repo?
docker repo == docker registry?
Not sure I get the full picture of your desired workflow, but here are two options:
Use registry authentication and make sure that only authorised people can push (a sketch of this follows below)
Configure networking / dns / hosts to resolve to the correct registry - e.g. docker-registry.mycompany.com resolves to the local registry for devs and to the remote registry for others.
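As a rough sketch of the first option, following the standard registry docs (the user name, password, and port here are just examples):
# create an htpasswd file with one authorised user (example credentials)
mkdir -p auth
docker run --rm --entrypoint htpasswd httpd:2 -Bbn devuser devpassword > auth/htpasswd
# run the local registry with basic auth enabled
docker run -d -p 5000:5000 --name registry \
  -v "$(pwd)/auth:/auth" \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  registry:2
Anyone without credentials can then pull and push only as far as the auth configuration allows.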
I am trying to push a Docker image to a repository and having several problems. I have my local development environment set up with Docker Desktop on OSX. Now I need to push the updated image to Docker Hub so it can be pulled into Kubernetes. I can't be sure, but I think I received security warnings from my workplace after trying that action.
Are there known security risks with connecting to Docker Hub?
Do I need to protect the daemon socket? Our desktop guy sent this to me, but I am not connecting to a host other than Docker Hub, so is the following still relevant? https://docs.docker.com/engine/security/protect-access/
I'm trying to migrate from docker-maven-plugin to kubernetes-maven-plugin for a test setup for local development and Jenkins builds. The point of the setup is to eliminate differences between local development and the Jenkins server. Since Docker built the image, the image is stored in the local repository and doesn't have to be uploaded to the central server where the base images are located. So we can verify our build without uploading anything to the server, and the image is discarded after the task is done (running integration tests).
Is there a similar way to trick Kubernetes into storing the image in the local repository without taking the roundtrip through a central repository? E.g., behave as if the image is already downloaded? Note that I still need to fetch the base image from the central repository.
If you don't want to use any Docker repo (public or private), you can use what are called pre-pulled images.
This is a bit annoying, as you need to make sure all the Kubernetes nodes have the images present and also set imagePullPolicy: Never in every Kubernetes manifest.
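For illustration, a pre-pulled image would be referenced roughly like this (the pod and image names are made up):
apiVersion: v1
kind: Pod
metadata:
  name: integration-test        # hypothetical name
spec:
  containers:
  - name: app
    image: mycompany/app:dev    # must already be present on the node
    imagePullPolicy: Never      # never contact a registry for this image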
In your case, if what you call the local repository is some private Docker registry, you just need to store the credentials to the private registry in a Kubernetes secret and either patch your default service account with imagePullSecrets or your actual deployment/pod manifest. More details: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
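A minimal sketch of that approach (registry address and credentials are placeholders):
# store the registry credentials in a secret
kubectl create secret docker-registry regcred \
  --docker-server=registry.mycompany.local:5000 \
  --docker-username=devuser \
  --docker-password=devpassword
# patch the default service account so all pods in the namespace use it
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'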
This is a bit of a silly setup, but here's what I'm looking at right now:
I'm learning Kubernetes
I want to push custom code to my Kubernetes cluster, which means the code must be available as a Docker image in some Docker repository (the default is Docker Hub)
While I'm willing to pay for Docker Hub if I have to (though I'd rather avoid it), I have concerns about putting my custom code on a third-party service: sudden rate limits, security breaches, sudden ToS changes, etc.
To this end, I'm running my own Docker registry within my Kubernetes cluster
I do not want to configure the Docker clients running on the Kubernetes nodes to trust insecure (HTTP) Docker registries. If I do choose to pull any images from an external registry (e.g. public images like nginx that I may pull from Docker Hub instead of hosting locally), I don't want to be vulnerable to MITM attacks swapping out the image
Ultimately I will have a build tool within the cluster (Jenkins or otherwise) pull my code from git, build the image, and push it to my internal registry. Then all nodes pulling from the registry live within the cluster. Since the registry never needs to receive images from sources outside the cluster or deliver them to sources outside the cluster, the registry does not need a NodePort service but can instead be a ClusterIP service.
Until I have that ultimate setup ready, I'm building images on my local machine and wish to push them to the registry (from the internet)
Because I don't plan on making the registry accessible from the outside world (eventually), I can't use Let's Encrypt to generate valid certs for it (and even if I were making my Docker registry available to the outside world, I couldn't use Let's Encrypt anyway without writing some extra code to integrate certbot or something)
My plan is to follow the example in this StackOverflow post: generate a self-signed cert and then launch the Docker registry using that certificate. Then use a DaemonSet to make this cert trusted on all nodes in the cluster.
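Roughly, I imagine the DaemonSet looking something like this (all names here are hypothetical; it copies the cert from a ConfigMap into Docker's per-registry trust directory on each node):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: registry-ca-installer
spec:
  selector:
    matchLabels:
      app: registry-ca-installer
  template:
    metadata:
      labels:
        app: registry-ca-installer
    spec:
      containers:
      - name: installer
        image: busybox
        command: ["sh", "-c", "cp /certs/ca.crt /host-certs/ca.crt && tail -f /dev/null"]
        volumeMounts:
        - name: ca
          mountPath: /certs
        - name: host-certs
          mountPath: /host-certs
      volumes:
      - name: ca
        configMap:
          name: registry-ca             # ConfigMap holding the self-signed cert
      - name: host-certs
        hostPath:
          path: /etc/docker/certs.d/docker-registry:5000
          type: DirectoryOrCreate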
Now that you have the setup, here's the crux of my issue: within my cluster my Docker registry can be accessed via a simple host name (e.g. "docker-registry"), but outside of my cluster I need to access it via either a node IP address or a domain name pointing at a node or a load balancer.
When generating my self-signed cert I was asked to provide a CN / FQDN for the certificate. I put in "docker-registry" -- the internal host name I plan to utilize. I then tried to access my registry locally to push an image to it:
> docker pull ubuntu
> docker tag ubuntu example.com:5000/my-ubuntu
> docker push example.com:5000/my-ubuntu
The push refers to repository [example.com:5000/my-ubuntu]
Get https://example.com:5000/v2/: x509: certificate is valid for docker-registry, not example.com
I can generate a certificate for example.com instead of for docker-registry; however, I worry that I'll have issues configuring the service or connecting to my registry from within my cluster if I provide my external domain like this instead of an internal host name.
This is why I'm wondering if I can just say that my self-signed cert applies to both example.com and docker-registry. If not, two other acceptable solutions would be:
Can I tell the Docker client not to verify the host name and just trust the certificate implicitly?
Can I tell the Docker registry to deliver one of two different certificates based on the host name used to access it?
If none of the three options are possible, then I can always just forego pushing images from my local machine and start the process of building images within the cluster -- but I was hoping to put that off until later. I'm learning a lot right now and trying to avoid getting distracted by tangential things.
Probably the easiest way to solve your problem would be to use Docker's insecure-registry feature. The concern you mention in your post (that it would open you up to security risks later) probably won't apply, as the feature works by trusting only the specific IP addresses or host names you list.
For example, you could configure something like
{
  "insecure-registries" : [ "10.10.10.10:5000" ]
}
and the only IP address that your Docker daemons will access without TLS is the one at that host and port number.
If you don't want to do that, then you'll need to get a trusted TLS certificate in place. The issue you mentioned about having multiple names per cert is usually handled with the Subject Alternative Name field in a cert (indeed, Kubernetes uses that feature quite a bit).
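For example, a self-signed cert covering both names from your question could be generated along these lines (requires OpenSSL 1.1.1+ for -addext; file names are examples):
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout registry.key -out registry.crt \
  -subj "/CN=docker-registry" \
  -addext "subjectAltName=DNS:docker-registry,DNS:example.com"
The Docker client will then accept the cert whether it reaches the registry as docker-registry or as example.com.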
docker login - how to log in only once for any docker repositories
I set up the on-premise Artifactory to host some Docker repositories, using the subdomain approach, i.e. repo1.mycompany.com, repo2.mycompany.com, etc. Everything is working fine. My question is: it looks like I need to do 'docker login repo1.mycompany.com' for each repository. Is there a way to log in only once for all the repositories, so that when pulling/pushing images from/to any repository there's no need to log in again?
No code to show here. This is all about setup.
No need to log in to each repo.
With the subdomain method, each Docker repository is considered a separate Docker registry by the client; this is why you need to log in to each one you want to use.
To pull from any of them without logging in to each, you can use a virtual repository and aggregate all your local repositories in it. Then you only need to log in to the virtual repository to be able to pull from any of them (through the virtual). But pushes will be limited to the default deployment target repo defined in the virtual one.
Another alternative is to use repo paths instead of subdomains. With this approach you'll be able to log in to Artifactory once and use all repos:
docker login mycompany.com
docker pull/push mycompany.com/repo1/imageName
docker pull/push mycompany.com/repo2/imageName
What is currently the recommended way to mirror a Private Docker Registry?
Mirroring functionality is provided by the official docker-registry image, but only for the Public Registry.
See documentation:
"Beware that mirroring only works for the public registry. You can not create a mirror for a private registry."
My use-case:
A bigger development team that is working in an office with a limited network. They only pull Docker images from registries. Pushing is occasional and handled by a Jenkins box hosted in AWS. Most of the images they use reside in our password-protected Private Registry (served over https). So it's only natural to mirror/cache the Registry on a machine in the local network. If not for https, I would just go for HTTP_PROXY and a local squid install.
I'm sure I'm not the only one solving the Docker dev bandwidth problem. What do you do?
It is now possible to do this with the "proxy" settings in the configuration of a V2 registry. Just put up another registry (on a different server/port from any other private registry you have) and, on every Docker engine, set the '--registry-mirror' flag to point to it.
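For illustration, the proxy section of the mirror's config.yml would look roughly like this (the upstream URL, credentials, and mirror address are placeholders):
proxy:
  remoteurl: https://registry.mycompany.com
  username: mirroruser      # only needed for a password-protected upstream
  password: mirrorpassword
# and on each docker engine:
dockerd --registry-mirror=https://mirror.mycompany.local:5000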
Just watch out for accidental pushes - always retag your images to the private registry or a private repository if you wish to keep them private.
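For instance (registry host and image names are examples):
# retag so the push goes explicitly to the private registry
docker tag myimage:latest registry.mycompany.com/team/myimage:latest
docker push registry.mycompany.com/team/myimage:latest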
Right now, I would recommend using the (new) golang registry (https://github.com/docker/distribution) instead of the (v1) python one, and going with the proxy solution (using HTTP_PROXY + a reverse proxy cache - squid, or whatever else pleases your taste - I would probably use varnish).
Native support for "mirroring" built into the registry itself will come eventually, and later more flexible transports.