I want to reject all Docker registries except my own. I'm looking for some kind of policy for Docker registries and their images.
For example, my registry name is registry.my.com. I want Kubernetes to pull/run images only from registry.my.com, so:
image: prometheus:2.6.1
or any other registry should be rejected, while:
image: registry.my.com/prometheus:2.6.1
shouldn't.
Is there a way to do that?
Admission controllers are what you are looking for.
Admission controllers intercept requests to the API server and validate what should happen before the operation is persisted by the api-server.
An example is the ImagePolicyWebhook, an admission controller that intercepts image operations to decide whether they should be allowed or rejected.
It makes a call to a REST endpoint with a payload like:
{
  "apiVersion": "imagepolicy.k8s.io/v1alpha1",
  "kind": "ImageReview",
  "spec": {
    "containers": [
      {
        "image": "myrepo/myimage:v1"
      },
      {
        "image": "myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed"
      }
    ],
    "annotations": {
      "mycluster.image-policy.k8s.io/ticket-1234": "break-glass"
    },
    "namespace": "mynamespace"
  }
}
and the API answers with Allowed:
{
  "apiVersion": "imagepolicy.k8s.io/v1alpha1",
  "kind": "ImageReview",
  "status": {
    "allowed": true
  }
}
or Rejected:
{
  "apiVersion": "imagepolicy.k8s.io/v1alpha1",
  "kind": "ImageReview",
  "status": {
    "allowed": false,
    "reason": "image currently blacklisted"
  }
}
The endpoint could be a Lambda function or a container running in the cluster.
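For illustration, here is a minimal sketch of such an endpoint in Python that only allows images from registry.my.com. The port, the lack of TLS, and the handler name are assumptions made for brevity, not part of any official implementation:

# image_policy_webhook.py - minimal ImageReview backend sketch (assumed port, no TLS)
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_PREFIX = "registry.my.com/"  # assumption: the only registry we trust

class ImageReviewHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        review = json.loads(self.rfile.read(length))
        images = [c.get("image", "") for c in review["spec"].get("containers", [])]

        # Allow only images that explicitly reference the trusted registry
        allowed = all(img.startswith(ALLOWED_PREFIX) for img in images)
        review["status"] = {"allowed": allowed}
        if not allowed:
            review["status"]["reason"] = "only images from registry.my.com are allowed"

        body = json.dumps(review).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ImageReviewHandler).serve_forever()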
The GitHub repo github.com/flavio/kube-image-bouncer implements a sample backend for ImagePolicyWebhook that rejects containers using the latest tag.
There is also the option to use the registry-whitelist flag on startup to pass a comma-separated list of allowed registries; this will be used by the ValidatingAdmissionWebhook to validate whether the registry is whitelisted.
The other alternative is the Open Policy Agent (OPA) project.
OPA is a flexible policy engine that evaluates rules against resources and makes decisions based on the result of those expressions. It runs as a mutating and validating webhook that gets called for matching Kubernetes API server requests by the admission controllers mentioned above. In summary, the operation works similarly to what is described above; the only difference is that the rules are written as configuration (Rego) instead of code. The same example rewritten to use OPA would look like this:
package admission

import data.k8s.matches

deny[{
    "id": "container-image-whitelist",   # identifies type of violation
    "resource": {
        "kind": "pods",                   # identifies kind of resource
        "namespace": namespace,           # identifies namespace of resource
        "name": name                      # identifies name of resource
    },
    "resolution": {"message": msg},       # provides human-readable message to display
}] {
    matches[["pods", namespace, name, matched_pod]]
    container = matched_pod.spec.containers[_]
    not re_match("^registry.acmecorp.com/.+$", container.image)  # the actual validation
    msg := sprintf("invalid container registry image %q", [container.image])
}
The above translates to: deny any pod whose container image does not come from the registry registry.acmecorp.com.
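If OPA is deployed with the kube-mgmt sidecar (the usual setup for Kubernetes admission control), policies like the one above are typically loaded as ConfigMaps. The ConfigMap name, file name, and opa namespace below are assumptions for illustration; adjust them to your deployment:

kubectl create configmap image-registry-whitelist \
  --from-file=image-registry-whitelist.rego \
  --namespace=opa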
This is currently not something you can enable or disable with a single command, but there are admission controllers that you can use.
If you are on a Red Hat platform, running plain Docker or Kubernetes nodes on RHEL with RHEL Docker as the container runtime, you can whitelist registries there.
Whitelisting Docker Registries
You can specify a whitelist of docker registries, allowing you to
curate a set of images and templates that are available for download
by OpenShift Container Platform users. This curated set can be placed
in one or more docker registries, and then added to the whitelist.
When using a whitelist, only the specified registries are accessible
within OpenShift Container Platform, and all other registries are
denied access by default.
To configure a whitelist:
Edit the /etc/sysconfig/docker file to block all registries:
BLOCK_REGISTRY='--block-registry=all'
You may need to uncomment the BLOCK_REGISTRY line.
In the same file, add registries to which you want to allow access:
ADD_REGISTRY='--add-registry=<registry1> --add-registry=<registry2>'
Allowing Access to Registries
ADD_REGISTRY='--add-registry=registry.access.redhat.com'
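After changing /etc/sysconfig/docker, the Docker daemon has to pick up the new options, which on RHEL typically means restarting the service:

sudo systemctl restart docker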
There is also a GitHub project:
https://github.com/flavio/kube-image-bouncer
that you can use to whitelist registries. Registry whitelisting is already implemented in it; you just need to provide the list when you run the binary.
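A hypothetical invocation based on the registry-whitelist flag mentioned above (check the project's README for the exact flag names and the TLS options the webhook needs):

kube-image-bouncer --registry-whitelist registry.my.com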
In case you are dealing with an Azure-managed AKS cluster, you can make use of Azure Policy. Here is a summary; I wrote about it in more detail in my blog post, which can be found here.
Activate the Policy Insights resource provider on your subscription
az provider register --namespace Microsoft.PolicyInsights
Enable AKS Azure Policy Add-On
az aks enable-addons --addons azure-policy --name <cluster> --resource-group rg-demo
Assign one of the built-in policies that allows exactly that use case
# Define parameters for Azure Policy
$param = @{
    "effect" = "deny";
    "excludedNamespaces" = "kube-system", "gatekeeper-system", "azure-arc", "playground";
    "allowedContainerImagesRegex" = "myregistry\.azurecr\.io\/.+$";
}
# Set a name and display name for the assignment
$name = 'restrict-container-registries'
# Retrieve the Azure Policy object
$policy = Get-AzPolicyDefinition -Name 'febd0533-8e55-448f-b837-bd0e06f16469'
# Retrieve the resource group for scope assignment
$scope = Get-AzResourceGroup -Name rg-demo
# Assign the policy
New-AzPolicyAssignment -DisplayName $name -Name $name -Scope $scope.ResourceId -PolicyDefinition $policy -PolicyParameterObject $param
A couple of things worth noting:
Installing the add-on installs Gatekeeper for you (you can verify this with the commands below)
It can take up to 20 minutes until the policies get applied
I excluded the namespace playground on purpose for demo only
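Because the add-on installs Gatekeeper, once the assignment has propagated you can check that it was translated into Gatekeeper resources; the exact constraint names vary per policy, so treat this as a quick sanity check:

kubectl get constrainttemplates
kubectl get constraints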
I'm using ECSOperator in airflow and I need to pass flags to the docker run. I searched the internet but I couldn't find a way to give an ECSOperator flags such as: -D, --cpus and more.
Is there a way to pass these flags to a docker run (if a certain condition is true) using the ECSOperator (same way we can pass tags, and network configuration), or they can only be defined in the ECS container running the docker image?
I'm not familiar with ECSOperator, but if I understand correctly it is a Python library, and you can create a new task using Python.
As I can see in this example, it is possible to set task_definition and overrides:
...
ecs_operator_task = ECSOperator(
    task_id="ecs_operator_task",
    dag=dag,
    cluster=CLUSTER_NAME,
    task_definition=service['services'][0]['taskDefinition'],
    launch_type=LAUNCH_TYPE,
    overrides={
        "containerOverrides": [
            {
                "name": CONTAINER_NAME,
                "command": ["ls", "-l", "/"],
            },
        ],
    },
    network_configuration=service['services'][0]['networkConfiguration'],
    awslogs_group="mwaa-ecs-zero",
    awslogs_stream_prefix=f"ecs/{CONTAINER_NAME}",
...
So if you want to set CPU and memory specs for the whole task, you have to update the task_definition dictionary parameters (something like service['services'][0]['taskDefinition']['cpu'] = 2048).
If you want to specify parameters for a specific container, overrides should be the proper way:
overrides={
    "containerOverrides": [
        {
            "cpu": 2048,
            ...
        },
    ],
},
Or, in theory, edited containerDefinitions may be set directly inside task_definition...
Anyway, most Docker parameters should be passed inside the containerDefinitions section.
So about your question:
Is there a way to pass these flags to a docker run
If I understand correctly, you have a JSON TaskDefinition file and want to run it locally using Docker?
Then try checking these tools. They allow you to convert a docker-compose.yml into an ECS definition, which is the opposite of what you are looking for, but maybe some of them are able to convert in the other direction as well?
Otherwise you have to parse the TaskDefinition's JSON manually and convert it to docker run arguments, as sketched below.
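As a rough illustration of that last option, here is a sketch that reads the containerDefinitions out of a task definition JSON and assembles an equivalent docker run command. The field handling is deliberately partial, and the file name taskdef.json is a placeholder:

# taskdef_to_docker_run.py - rough sketch, not a complete converter
import json
import shlex

def docker_run_args(container):
    """Translate a subset of an ECS containerDefinition into docker run arguments."""
    args = ["docker", "run"]
    if "cpu" in container:
        # ECS CPU units: 1024 units == 1 vCPU; docker --cpus takes vCPUs
        args += ["--cpus", str(container["cpu"] / 1024)]
    if "memory" in container:
        args += ["--memory", f'{container["memory"]}m']
    for env in container.get("environment", []):
        args += ["-e", f'{env["name"]}={env["value"]}']
    args.append(container["image"])
    args += container.get("command", [])
    return args

if __name__ == "__main__":
    with open("taskdef.json") as f:  # placeholder file name
        taskdef = json.load(f)
    for c in taskdef["containerDefinitions"]:
        print(shlex.join(docker_run_args(c)))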
Apologies if this is a duplicate, I haven't found a solution in similar questions.
I'm trying to upload a docker image to Google Kubernetes Engine.
I've done it successfully before, but I can't seem to have any luck this time around.
I have Google SDK set up locally with kubectl and my Google Account, which is project owner and has all required permissions.
When I use
kubectl create deployment hello-app --image=gcr.io/{project-id}/hello-app:v1
I see the deployment on my GKE console, consistently crashing as it "cannot pull the image from the repository.ErrImagePull Cannot pull image '' from the registry".
It provides 4 recommendations, which I have by now triple checked:
Check for spelling mistakes in the image names.
Check for errors when pulling the images manually (all fine in Cloud Shell)
Check the image pull secret settings
So, based on this https://blog.container-solutions.com/using-google-container-registry-with-kubernetes, I manually added 'gcr-json-key' from a new service account with project viewer permissions, as well as 'gcr-access-token', to the default kubectl service account.
Check the firewall for the cluster to make sure the cluster can connect to the ''. Afaik, this should not be an issue with a newly set up cluster.
The pods themselves provide the following error code:
Failed to pull image "gcr.io/{project id}/hello-app:v1":
[rpc error: code = Unknown desc = Error response from daemon:
Get https://gcr.io/v2/{project id}/hello-app/manifests/v1: unknown: Unable to parse json key.,
rpc error: code = Unknown desc = Error response from daemon:
Get https://gcr.io/v2/{project id}/hello-app/manifests/v1:
unauthorized: Not Authorized., rpc error: code = Unknown desc = Error response from daemon:
pull access denied for gcr.io/{project id}/hello-app,
repository does not exist or may require 'docker login': denied:
Permission denied for "v1" from request "/v2/{project id}/hello-app/manifests/v1".]
My question now: what am I doing wrong, or how can I find out why my pods can't pull my image?
Kubernetes default serviceaccount spec:
kubectl get serviceaccount -o json
{
    "apiVersion": "v1",
    "imagePullSecrets": [
        {
            "name": "gcr-json-key"
        },
        {
            "name": "gcr-access-token"
        }
    ],
    "kind": "ServiceAccount",
    "metadata": {
        "creationTimestamp": "2020-11-25T15:49:16Z",
        "name": "default",
        "namespace": "default",
        "resourceVersion": "6835",
        "selfLink": "/api/v1/namespaces/default/serviceaccounts/default",
        "uid": "436bf59a-dc6e-49ec-aab6-0dac253e2ced"
    },
    "secrets": [
        {
            "name": "default-token-5v5fb"
        }
    ]
}
It does take several steps, and the blog post you referenced appears to cover them correctly. So, I suspect your error is in one of the steps.
Couple of things:
The error message says Failed to pull image "gcr.io/{project id}/hello-app:v1". Did you edit the error message to remove your {project id}? If not, that's one problem.
My next concern is the second line: Unable to parse json key. This suggests that you created the secret incorrectly:
Create the service account and generate a key
Create the Secret exactly as shown: kubectl create secret docker-registry gcr-json-key... (in the default namespace unless --namespace=... differs); see the fuller example after this list
Update the Kubernetes spec with ImagePullSecrets
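For reference, a representative full form of that secret-creation command; gcr-read-key.json stands in for the service account key file you generated, and the email value just needs to be syntactically valid:

kubectl create secret docker-registry gcr-json-key \
  --docker-server=https://gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat gcr-read-key.json)" \
  --docker-email=any@valid.email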
Because of the ImagePullSecrets requirement, I'm not aware of an equivalent kubectl run shortcut, but you can try accessing your image using Docker from your host:
See: https://cloud.google.com/container-registry/docs/advanced-authentication#json-key
And then try docker pull gcr.io/{project id}/hello-app:v1 ensuring that {project id} is replaced with the correct GCP Project ID.
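Concretely, the json-key flow from that page looks roughly like this, with key.json standing in for the downloaded key file and {project-id} for your project:

docker login -u _json_key --password-stdin https://gcr.io < key.json
docker pull gcr.io/{project-id}/hello-app:v1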
This proves:
The Service Account & Key are correct
The Container Image is correct
That leaves, your creation of the Secret and your Kubernetes spec to test.
NOTE The Service Account IAM permission of Project Viewer is overly broad for GCR access; see the permissions documentation.
Use Storage Object Viewer (roles/storage.objectViewer) if the Service Account only needs to pull images; granting it could look like the command below.
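Granting that narrower role, with PROJECT_ID and the service account address as placeholders:

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:gcr-puller@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"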
I am trying to run the artifact passing example on Argoproj. However, I am getting the following error:
failed to save outputs: verify serviceaccount platform:default has necessary privileges
This error is appearing in the first step (generate-artifact) itself.
Selecting the generate-artifact component and clicking YAML gives the following line highlighted.
Nothing appears on clicking LOGS.
I need to understand the correct sequence of steps in running the YAML file so that this error does not appear and artifacts are passed. I could not find many resources on this issue other than this page, where the issue is discussed on the Argo repository.
All pods in a workflow run with the service account specified in workflow.spec.serviceAccountName, or if omitted, the default service account of the workflow's namespace.
Here the default service account of that namespace doesn't seem to be given any roles by default.
Try granting a role to the “default” service account in a namespace:
kubectl create rolebinding argo-default-binding \
--clusterrole=cluster-admin \
--serviceaccount=platform:default \
--namespace=platform
Since the default service account now gets full access via the cluster-admin role, the example should work.
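cluster-admin is convenient for getting the example running, but it is very broad. A tighter, still illustrative, alternative is a namespaced role with just the pod permissions the Argo executor needs to record outputs; the exact verbs depend on your Argo version and executor, so treat this as a sketch:

kubectl create role argo-executor \
  --verb=get,watch,patch \
  --resource=pods \
  --namespace=platform

kubectl create rolebinding argo-executor-default \
  --role=argo-executor \
  --serviceaccount=platform:default \
  --namespace=platform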
We have structure within our platform that requires a large number of private images within a single and/or only a few projects if possible. Additionally we are largely a GCP shop and would love to stay within the Google environment.
Currently - as I understand it - GCR ACL structures require the storage.objects.get and storage.objects.list permissions (or the objectViewer role) attached to a service account (in this case) to access the GCR. This isn't an issue generally and we haven't had any direct issues with using gsutil to enable read access at the project level for the container registry. Below is a workflow example of what we're doing to achieve general access. However, it does not achieve our goal of restricted service account per image access.
A simple Docker image is built, tagged, and pushed into GCR, using exproj in place of the actual project name.
sudo docker build -t hello_example:latest .
sudo docker tag hello_example:latest gcr.io/exproj/hello_example:latest
sudo docker push gcr.io/exproj/hello_example:latest
This provides us with the hello_example repository in the exproj project.
We create a service account and give it permissions to read out of the bucket.
gsutil acl ch -u gcr-read-2@exproj.iam.gserviceaccount.com:R gs://artifacts.exproj.appspot.com/
Updated ACL on gs://artifacts.exproj.appspot.com
This then allows us to use the Docker login via the key.
sudo docker login -u _json_key --password-stdin https://gcr.io < gcr-read-2.json
Login Succeeded
And then pull down the image from the registry as expected
sudo docker run gcr.io/exproj/hello_example
However, for our purposes we do not want to allow the service account to have access to the entire registry per project, but rather only have access to hello_example as identified above. In my testing with gsutil, I'm unable to define specific per-image ACLs, but I'm wondering if I'm just missing something.
gsutil acl ch -u gcr-read-2@exproj.iam.gserviceaccount.com:R gs://artifacts.exproj.appspot.com/hello_example/
CommandException: No URLs matched: gs://artifacts.exproj.appspot.com/hello_example/
In the grand scheme of it all, we would like to hit the following model:
AccountA created ImageA:TagA in ExampleProj
ServiceAccountA is generated
ACLs are set for ServiceAccountA to only access ExampleProj/ImageA and all Tags underneath it
ServiceAccountA JSON is provided to AccountA
AccountA can now access only ExampleProj/ImageA, AccountB cannot access AccountA's ExampleProj/ImageA
While we could run a per-Account project with its own container registry, the scaling overhead of tracking a project for each Account, and being at the whim of GCP project limitations during a heavy-use period, is worrying.
I'm open to any ideas or structures that would achieve this other than the above as well!
EDIT
Thanks to jonjohnson for responding! I wrote a quick and dirty script along the recommended lines pertaining to blob reading. I'm still validating its success, but I did want to state that we control when pushes occur, so tracking the results is less fragile than it could be in other situations.
Here's a script I put together as an example for manifest -> digest permission modifications.
require 'json'

# POC GCR Blob Handler
# ---
# Hardcoded params and system calls for speed
# Content pushed into gcr.io will be at gs://artifacts.{projectid}.appspot.com/containers/images/ per digest
def main()
  puts "Running blob gathering from manifest for org_b and example_b"
  manifest = `curl -u _token:$(gcloud auth print-access-token) --fail --silent --show-error https://gcr.io/v2/exproj/org_b/manifests/example_b`
  manifest = JSON.parse(manifest)

  # Manifest is parsed, gather digests to ensure we allow permissions to correct blobs
  puts "Gathering digests to allow permissions"
  digests = Array.new
  digests.push(manifest["config"]["digest"])
  manifest["layers"].each { |l| digests.push(l["digest"]) }

  # Digests are now gathered for the config and layers, loop through the digests and allow permissions to the account
  puts "Digests are gathered, allowing read permissions to no-perms account"
  digests.each do |d|
    puts "Allowing permissions for #{d}"
    res = `gsutil acl ch -u no-perms@exproj.iam.gserviceaccount.com:R gs://artifacts.exproj.appspot.com/containers/images/#{d}`
    puts res
  end

  puts "Permissions changed for org_b:example_b for no-perms@exproj.iam.gserviceaccount.com"
end

main()
While this does set the appropriate permissions, I'm seeing a fair amount of fragility in the actual Docker authentication and pulls, with Docker logins not being recognized.
Was this along the lines of what you were referring to, jonjohnson? Essentially allowing access per blob per service account based on the manifest/layers associated with that image/tag?
Thanks!
There's not currently an easy way to do what you want.
One thing you can do is grant access to individual blobs in your bucket for each image. This isn't super elegant because you'd have to update the permissions after every push.
You could automate that yourself by using the pubsub support in GCR to listen for pushes, look at the blobs referenced by that image, match the repository path to whichever service accounts need access, then grant those service accounts access to each blob object.
One downside is that each service account will still be able to look at the image manifest (essentially a list of layer digests + some image runtime config). They won't be able to pull the actual image contents, though.
Also, this relies a bit on some implementation details of GCR, so it might break in the future.
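A rough Python sketch of that Pub/Sub automation, assuming a subscription already attached to the gcr topic and deferring the actual ACL changes to a helper like the Ruby script above; the project id, subscription name, and the grant_read_on_blobs wiring are assumptions:

# gcr_push_listener.py - sketch of the Pub/Sub-driven approach (assumed names)
import json
from google.cloud import pubsub_v1

PROJECT_ID = "exproj"              # assumption
SUBSCRIPTION = "gcr-push-watcher"  # assumption: a subscription on the "gcr" topic

def grant_read_on_blobs(image, digest):
    # Placeholder: fetch the manifest for image@digest and grant the right
    # service account read access to each referenced blob, as in the script above.
    print(f"would grant blob ACLs for {image}@{digest}")

def callback(message):
    event = json.loads(message.data.decode())
    # GCR push notifications carry an action plus the pushed image digest/tag
    if event.get("action") == "INSERT" and "digest" in event:
        image, _, digest = event["digest"].partition("@")
        grant_read_on_blobs(image, digest)
    message.ack()

if __name__ == "__main__":
    subscriber = pubsub_v1.SubscriberClient()
    path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION)
    streaming_pull = subscriber.subscribe(path, callback=callback)
    streaming_pull.result()  # block forever, handling pushes as they arrive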
I am pushing a docker image to a private docker registry, and am having trouble marking it 'public' via a script.
For this discussion, I'm guessing the content of the Dockerfile doesn't matter... so let's assume I have the following in my current working directory:
Dockerfile
FROM ubuntu
RUN touch /tmp/foo
I build like this:
docker build -t my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04 .
Then, I am doing my push like this:
docker push my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04
Next, I navigate to the web site that allows me to manage my private registry (at the URL http://my.private.docker.registry.com).
I look at my image, and I see it has a padlock icon next to it, indicating that it is private. I can manually unlock it from the web UI, but I'd like to know if there are any options to docker's push command that will allow me to mark the image as 'public' without manual intervention.
One thing I tried was setting global settings for my namespace such that all new repos would be readable/writable by all users.
Specifically: I went into the Docker web UI for my private registry and, for the namespace 'foo', I tried adding default permissions (for any newly created repos) such that all users will have 'write' access to any new repo pushed under the 'foo' namespace.
However, even after doing the above, when I pushed a new image to my private registry under namespace foo, that image was still marked with the padlock. I looked up the command line options for 'docker push', and I did not see any option that looked like it would affect the visibility of the image at the time of push.
thanks in advance for your help !
-chris
So, according to the folks who manage the Docker registry at the company I'm at now: there is no command line way to enable permissions for users other than the repository creator to have write access to that repo. You have to go to the web UI and manually mark the repo 'public', and you have to add permissions for each user (although it is possible to have groups of users, and then add a whole group -- this still is clunky because new employees have to be manually added to the group).
I find it hard to believe that there's no command-line way, but this is what our experts say. If there are other experts out there who have a better idea, please chime in! Otherwise I will do it manually through the web UI (grrrrRRrr).