How can I whitelist the containers that are allowed to run inside PCF? We want control over which containers run in PCF.
As I write this, Cloud Foundry doesn't directly offer this functionality, but you could restrict network access so that the platform can't reach public registries and can only reach a private registry that you control, which holds only approved images.
You can use something like Docker Registry, Harbor or Artifactory to run your own registry.
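For example, a minimal sketch of standing up the open-source Docker Registry and loading it with an approved image (the image names and port are placeholders):

# run a private registry on port 5000
docker run -d -p 5000:5000 --name registry registry:2
# tag an approved image for the private registry and push it
docker tag approved-app:1.0 localhost:5000/approved-app:1.0
docker push localhost:5000/approved-app:1.0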
If your custom registry does not have a trusted TLS certificate, you may need to add the certificate to the list of trusted certs in Ops Manager, or configure PAS to add the registry to the Private Docker insecure registry allow list.
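Once the registry is reachable and trusted, apps can be pushed from it with the cf CLI; a hedged sketch, assuming a registry at registry.example.com (names and credentials are placeholders):

# push an app from the private registry instead of Docker Hub;
# CF_DOCKER_PASSWORD supplies the registry password to cf push
CF_DOCKER_PASSWORD=secret cf push myapp \
  --docker-image registry.example.com/approved/app:1.0 \
  --docker-username deployer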
Hope that helps!
I've followed this documentation to create a private registry with a self-signed certificate on a VM. It works fine when I pull images from another host.
I'm now trying to understand how to configure a Service Connection of type Docker Registry in Azure DevOps to use this registry.
This is my current setup: (screenshot omitted)
And this is the log: (log output omitted)
We could go to Docker's Settings > Network and change the DNS Server radio button to Fixed.
In addition, I found a similar issue that you could also check.
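Separately, since the registry uses a self-signed certificate, the Docker daemon on the build agent must trust it; a sketch for a Linux agent (hostname and port are placeholders):

# trust the registry's self-signed CA on the machine running the Docker daemon
sudo mkdir -p /etc/docker/certs.d/myregistry.example.com:5000
sudo cp ca.crt /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
# verify the login now succeeds
docker login myregistry.example.com:5000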
I'm doing research on how container services in Azure compare with our on-prem implementation of containers, which includes Docker Trusted Registry.
Is one required to use Azure Container Registry to make use of Azure Containers? Or could we tie into our existing on-prem Docker Trusted Registry?
Thank you!
Yes, you can use a private registry, such as Docker Trusted Registry, with Azure Container Instances.
Containers are built from images that are stored in one or more repositories. These repositories can belong to a public registry, like Docker Hub, or to a private registry. An example of a private registry is the Docker Trusted Registry, which can be installed on-premises or in a virtual private cloud. You can also use cloud-based private container registry services, including Azure Container Registry.

A publicly available container image does not guarantee security. Container images consist of multiple software layers, and each software layer might have vulnerabilities. To help reduce the threat of attacks, you should store and retrieve images from a private registry, such as Azure Container Registry or Docker Trusted Registry.

In addition to providing a managed private registry, Azure Container Registry supports service principal-based authentication through Azure Active Directory for basic authentication flows. This authentication includes role-based access for read-only (pull), write (push), and other permissions.
When you create the ACI via the Azure portal, you will see these three registry options.
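From the CLI, the equivalent is roughly the following sketch (resource names, registry host, and credentials are placeholders):

# create a container instance that pulls from a private registry
az container create \
  --resource-group myResourceGroup \
  --name myapp \
  --image myregistry.example.com/myapp:1.0 \
  --registry-login-server myregistry.example.com \
  --registry-username deployer \
  --registry-password <password>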
I am using Azure Kubernetes Service (AKS) and I am not allowed to use Docker Hub for pushing and pulling images. Is there a way to create Kubernetes deployments or pods from a tar of an image, or by pulling an image over an SSH connection from another server that has the Docker engine running?
I am assuming the reason you are not allowed to use Docker Hub is a company policy of keeping everything private and contained within Azure.
In that case, I suggest using Azure's own container registry service, Azure Container Registry (ACR), which has the following benefits:

- It works similarly to Docker Hub in that you can simply sign in with a username and password, update the image name, and you are good to go.
- It is the solution from Azure, so it should fit nicely into your infrastructure design. Please refer to this link for detailed instructions on how to connect your AKS and ACR (a sketch follows below).
- The traffic between AKS and ACR is private and not exposed to the Internet.
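As a rough sketch, assuming the azure-cli is installed and the resource group, registry, and cluster names are placeholders:

# create a registry and push an image to it
az acr create --resource-group myResourceGroup --name myregistry --sku Basic
az acr login --name myregistry
docker tag myapp:1.0 myregistry.azurecr.io/myapp:1.0
docker push myregistry.azurecr.io/myapp:1.0
# let the AKS cluster pull from the registry
az aks update --resource-group myResourceGroup --name myAKSCluster --attach-acr myregistry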
Is it possible, to pull private images from Docker Hub to a Google Cloud Kubernetes cluster?
Is this recommended, or do I need to push my private images also to Google Cloud?
I read the documentation, but I found nothing that explains this clearly. It seems to be possible, but I don't know if it's recommended.
There is no restriction on using any registry you want. If you just use a bare image name (e.g., image: nginx) in the pod specification, the image will be pulled from the public Docker Hub registry with the tag assumed to be :latest.
As mentioned in the Kubernetes documentation:
The image property of a container supports the same syntax as the docker command does, including private registries and tags. Private registries may require keys to read images from them.
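For instance, a minimal pod spec pulling a tagged image from a private registry (registry.example.com and the image name are placeholders; this assumes the nodes can already authenticate to that registry):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-registry-demo
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:1.2
EOF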
Using Google Container Registry
Kubernetes has native support for the Google Container Registry (GCR) when running on Google Compute Engine (GCE). If you are running your cluster on GCE or Google Kubernetes Engine, simply use the full image name (e.g. gcr.io/my_project/image:tag). All pods in a cluster will have read access to images in this registry.
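A hedged sketch of pushing an image to GCR so the cluster can pull it (the project and image names are placeholders):

# configure docker to authenticate to gcr.io with your gcloud credentials
gcloud auth configure-docker
docker tag myapp:1.0 gcr.io/my_project/myapp:1.0
docker push gcr.io/my_project/myapp:1.0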
Using AWS EC2 Container Registry
Kubernetes has native support for the AWS EC2 Container Registry when nodes are AWS EC2 instances. Simply use the full image name (e.g. ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag) in the Pod definition. All users of the cluster who can create pods will be able to run pods that use any of the images in the ECR registry.
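For pushing from a workstation, a sketch with the AWS CLI v2 (ACCOUNT and REGION are placeholders, as in the quote above):

# authenticate the docker CLI against ECR, then push
aws ecr get-login-password --region REGION | docker login --username AWS --password-stdin ACCOUNT.dkr.ecr.REGION.amazonaws.com
docker push ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag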
Using Azure Container Registry (ACR)
When using Azure Container Registry you can authenticate using either an admin user or a service principal. In either case, authentication is done via standard Docker authentication. These instructions assume the azure-cli command line tool.

You first need to create a registry and generate credentials; complete documentation for this can be found in the Azure container registry documentation.
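For example, a sketch of both authentication styles (the registry name and service principal credentials are placeholders):

# log in with your current Azure CLI identity
az acr login --name myregistry
# or use standard docker login with a service principal
docker login myregistry.azurecr.io --username <sp-app-id> --password <sp-password>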
Configuring Nodes to Authenticate to a Private Repository
Here are the recommended steps to configure your nodes to use a private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes, for example:
if you want the names: nodes=$(kubectl get nodes -o jsonpath='{range .items[*].metadata}{.name} {end}')
if you want the IPs: nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
Copy your local .docker/config.json to the home directory of root on each node.
For example: for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done
Use cases:
There are a number of solutions for configuring private registries.
Here are some common use cases and suggested solutions.
- Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
  - Use public images on the Docker Hub. No configuration required.
  - On GCE/Google Kubernetes Engine, a local mirror is automatically used for improved speed and availability.
- Cluster running some proprietary images which should be hidden to those outside the company, but visible to all cluster users.
  - Use a hosted private Docker registry. It may be hosted on the Docker Hub, or elsewhere. Manually configure .docker/config.json on each node as described above.
  - Or, run an internal private registry behind your firewall with open read access. No Kubernetes configuration is required.
  - Or, when on GCE/Google Kubernetes Engine, use the project's Google Container Registry. It will work better with cluster autoscaling than manual node configuration.
  - Or, on a cluster where changing the node configuration is inconvenient, use imagePullSecrets.
- Cluster with proprietary images, a few of which require stricter access control.
  - Ensure the AlwaysPullImages admission controller is active. Otherwise, all Pods potentially have access to all images.
  - Move sensitive data into a "Secret" resource, instead of packaging it in an image.
- A multi-tenant cluster where each tenant needs its own private registry.
  - Ensure the AlwaysPullImages admission controller is active. Otherwise, all Pods of all tenants potentially have access to all images.
  - Run a private registry with authorization required.
  - Generate registry credentials for each tenant, put them into a secret, and populate the secret into each tenant's namespace.
  - The tenant then adds that secret to the imagePullSecrets of each namespace.
Consider reading the Pull an Image from a Private Registry document if you decide to use a private registry.
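If you go the imagePullSecrets route mentioned above, a minimal sketch for a private Docker Hub image (the credentials and image name are placeholders):

# store Docker Hub credentials in a Kubernetes secret
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>

# reference the secret from the pod spec
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  containers:
  - name: app
    image: <user>/private-image:tag
  imagePullSecrets:
  - name: regcred
EOF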
There are 3 types of registries:
Public (Docker Hub, Docker Cloud, Quay, etc.)
Private: This would be a registry running on your local network. An example would be to run a docker container with a registry image.
Restricted: A registry that requires credentials to authenticate. Google Container Registry (GCR) is an example.
As you say, in a public registry such as Docker Hub you can have private images.
Private and restricted registries are obviously more secure, as one of them is (ideally) not even exposed to the Internet, and the other requires credentials.
I guess you can achieve an acceptable security level with any of them, so it is a matter of choice. If you feel your application is critical and you don't want to run any risk, you should have it in GCR or in a private registry.
If you feel it is important but not critical, you could keep it in any public registry as a private image. This adds a layer of security.
I have an Artifactory Pro license.
I want to use Artifactory as a Docker registry.
As you know, Docker registries support user namespaces like this:
example.com/username/imagename:tag
But Artifactory uses a repository name instead of a username. I want to use a username namespace and apply permissions for each user on their own repository.
So, how many repositories are supported?
Using Artifactory Pro myself, I can confirm a Docker registry supports as many namespaces (not just usernames) as you need.
All I need to do is:
login: docker login my-registry
tag: docker tag my_image my-registry/my_namespace/my_image:my_tag
push: docker push my-registry/my_namespace/my_image:my_tag
With "my-registry" being the name of the server referencing your artifactory docker registry, as configured by "Configuring Artifactory / Configuring a Reverse Proxy / Configuring NGINX "
That is because Docker requires the URL of any registry it connects to to conform to a specific format (http(s)://<host>:<port>/v1), while Artifactory exposes its registries at a different URL format (http://<host>:<port>/artifactory/api/docker/<docker_repository>).
Hence the need for a reverse proxy.
But note: there is no notion of username, only namespace.
As mentioned in Artifactory Docker Registry:
With the fine-grained access control provided by built-in security features, Artifactory offers secure Docker push and pull with local Docker repositories as fully functional, secure, private Docker registries.
But those built-in security features are for user authentication to Artifactory in general, not specific to a Docker registry, which has no notion of username: if a user has permission to push to a Docker registry, they can push to any part of it.
What I want is to apply ACLs on a per-namespace basis.
As far as I know, this would not be supported.
You might configure NGINX to filter that for you, but Artifactory itself does not provide namespace-based ACLs for a Docker registry.
So I want to create a repository for each user and grant each user permission to their own repository, using Artifactory like Docker Hub. I'm wondering how many repositories I can create in Artifactory.
That implies two things:
different local Docker repositories: there is no official limit to the number of repos, only local storage quota limits.
different NGINX reverse-proxied domain names: each separate registry needs its own domain name.
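If you script the per-user repository creation, a hedged sketch using Artifactory's repository REST API (the host, credentials, and repo key are placeholders; this API requires a Pro license):

# create one local Docker repository per user via the Artifactory REST API
curl -u admin:password -X PUT \
  "https://artifactory.example.com/artifactory/api/repositories/docker-alice" \
  -H "Content-Type: application/json" \
  -d '{"key": "docker-alice", "rclass": "local", "packageType": "docker"}'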