Google Container Registry Per Image ACLs - docker

We have a structure within our platform that requires a large number of private images within a single project (or, if possible, only a few projects). Additionally, we are largely a GCP shop and would love to stay within the Google environment.
Currently - as I understand it - GCR ACL structures require the storage.objects.get and storage.objects.list permissions (or the objectViewer role) attached to a service account (in this case) to access the GCR. This isn't an issue generally and we haven't had any direct issues with using gsutil to enable read access at the project level for the container registry. Below is a workflow example of what we're doing to achieve general access. However, it does not achieve our goal of restricted service account per image access.
A simple Docker image is built, tagged, and pushed into GCR, using exproj in place of the actual project name.
sudo docker build -t hello_example:latest .
sudo docker tag hello_example:latest gcr.io/exproj/hello_example:latest
sudo docker push gcr.io/exproj/hello_example:latest
This provides us with the hello_example repository in the exproj project.
We create a service account and give it permissions to read out of the bucket.
gsutil acl ch -u gcr-read-2@exproj.iam.gserviceaccount.com:R gs://artifacts.exproj.appspot.com/
Updated ACL on gs://artifacts.exproj.appspot.com
Which then allows us to use the Docker login via the key.
sudo docker login -u _json_key --password-stdin https://gcr.io < gcr-read-2.json
Login Succeeded
And then pull down the image from the registry as expected
sudo docker run gcr.io/exproj/hello_example
However, for our purposes we do not want to allow the service account to have access to the entire registry per project, but rather only have access to hello_example as identified above. In my testing with gsutil, I'm unable to define per-image ACLs, but I'm wondering if I'm just missing something.
gsutil acl ch -u gcr-read-2@exproj.iam.gserviceaccount.com:R gs://artifacts.exproj.appspot.com/hello_example/
CommandException: No URLs matched: gs://artifacts.exproj.appspot.com/hello_example/
In the grand scheme of it all, we would like to hit the following model:
AccountA created ImageA:TagA in ExampleProj
ServiceAccountA is generated
ACLs are set for ServiceAccountA to only access ExampleProj/ImageA and all Tags underneath it
ServiceAccountA JSON is provided to AccountA
AccountA can now access only ExampleProj/ImageA, AccountB cannot access AccountA's ExampleProj/ImageA
While we could do a per-project, per-account container registry, the scaling burden of tracking a project for each account, and being at the whim of GCP project limits during heavy-use periods, is worrying.
I'm open to any ideas or structures that would achieve this other than the above as well!
EDIT
Thanks to jonjohnson for responding! I wrote a quick and dirty script along the recommended lines pertaining to blob reading. I'm still working on validating its success, but I did want to state that we control when pushes occur, so tracking the results is less fragile than it could be in other situations.
Here's a script I put together as an example for manifest -> digest permission modifications.
require 'json'

# POC GCR Blob Handler
# ---
# Hardcoded params and system calls for speed
# Content pushed into gcr.io will be at gs://artifacts.{projectid}.appspot.com/containers/images/ per digest
def main()
  puts "Running blob gathering from manifest for org_b and example_b"
  manifest = `curl -u _token:$(gcloud auth print-access-token) --fail --silent --show-error https://gcr.io/v2/exproj/org_b/manifests/example_b`
  manifest = JSON.parse(manifest)
  # Manifest is parsed, gather digests to ensure we allow permissions to correct blobs
  puts "Gathering digests to allow permissions"
  digests = Array.new
  digests.push(manifest["config"]["digest"])
  manifest["layers"].each {|l| digests.push(l["digest"])}
  # Digests are now gathered for the config and layers, loop through the digests and allow permissions to the account
  puts "Digests are gathered, allowing read permissions to no-perms account"
  digests.each do |d|
    puts "Allowing permissions for #{d}"
    res = `gsutil acl ch -u no-perms@exproj.iam.gserviceaccount.com:R gs://artifacts.exproj.appspot.com/containers/images/#{d}`
    puts res
  end
  puts "Permissions changed for org_b:example_b for no-perms@exproj.iam.gserviceaccount.com"
end

main()
While this does appropriately set permissions, I'm seeing a fair amount of fragility in the actual Docker authentication and image pulls, with Docker logins sometimes not being recognized.
Was this along the lines that you were referring to jonjohnson? Essentially allowing access per blob per service account based on manifest/layers associated with that image/tag?
Thanks!

There's not currently an easy way to do what you want.
One thing you can do is grant access to individual blobs in your bucket for each image. This isn't super elegant because you'd have to update the permissions after every push.
You could automate that yourself by using the pubsub support in GCR to listen for pushes, look at the blobs referenced by that image, match the repository path to whichever service accounts need access, then grant those service accounts access to each blob object.
One downside is that each service account will still be able to look at the image manifest (essentially a list of layer digests + some image runtime config). They won't be able to pull the actual image contents, though.
Also, this relies a bit on some implementation details of GCR, so it might break in the future.
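For what it's worth, here is a rough, hedged sketch of what that Pub/Sub-driven automation could look like in shell, assuming the standard gcr topic that GCR publishes push notifications to, a hypothetical subscription name (gcr-acl-sync), jq for JSON parsing, and a repository-to-service-account mapping that you maintain yourself:

#!/usr/bin/env bash
# Sketch only: listen for GCR push notifications and grant per-blob read access.
set -euo pipefail

PROJECT="exproj"

# One-time setup: subscribe to the topic GCR publishes push events to.
gcloud pubsub subscriptions create gcr-acl-sync --topic=gcr --project="$PROJECT" || true

while true; do
  # Pull one notification; the message data is JSON with "action", "digest", "tag".
  msg=$(gcloud pubsub subscriptions pull gcr-acl-sync --auto-ack --limit=1 \
        --format='value(message.data)' --project="$PROJECT" | base64 --decode)
  [ -z "$msg" ] && { sleep 10; continue; }

  action=$(echo "$msg" | jq -r '.action')
  image=$(echo "$msg"  | jq -r '.digest')   # e.g. gcr.io/exproj/org_b/example_b@sha256:...
  [ "$action" = "INSERT" ] || continue

  repo=${image#gcr.io/$PROJECT/}; repo=${repo%@*}   # repository path
  digest=${image#*@}                                # manifest digest

  # Hypothetical mapping step: decide which service account owns this repository.
  sa="no-perms@${PROJECT}.iam.gserviceaccount.com"

  # Read the manifest and grant the service account read access on each referenced blob.
  token=$(gcloud auth print-access-token)
  manifest=$(curl -fsS -u "_token:${token}" \
    -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
    "https://gcr.io/v2/${PROJECT}/${repo}/manifests/${digest}")
  for d in $(echo "$manifest" | jq -r '.config.digest, .layers[].digest'); do
    gsutil acl ch -u "${sa}:R" "gs://artifacts.${PROJECT}.appspot.com/containers/images/${d}"
  done
done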

Related

GitLab runner ignoring DOCKER_AUTH_CONFIG when credential helper specified

We have a GitLab CI pipeline that currently pulls images from our internal Docker registry, authenticated using a variable defined in .gitlab-ci.yml:
variables:
  ...
  DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}}'
This works fine.
We are trying to add a step to the end of the pipeline, to push our built Docker images to an Amazon ECR registry. We have installed the amazon-ecr-credential-helper on our runner instances, and given them the correct IAM permissions to be able to push to these registries. We have changed the .gitlab-ci.yml variable to:
DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}, "credHelpers": { "<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"}}'
However, this causes the runner to fail to authenticate to our internal registry, so it cannot pull the images in which our jobs run. Whereas previously we would see in our pipeline jobs' logs:
Authenticating with credentials from $DOCKER_AUTH_CONFIG
... we are no longer seeing this. We're not even getting to the step where we want to push to ECR.
We have added a wrapper script around the credential helper, to log all the ins and outs to a file, and try and debug what is happening. However, it appears as if the helper isn't getting called at all, as there is nothing in the log file.
What can we do to try and get this working?
Our problems here boiled down to a number of causes:
Since we referenced the credential helper in DOCKER_AUTH_CONFIG, we needed the helper installed on the machine spawning the runners. (We use the docker+machine runner.) This machine also needed IAM permissions. Without this, it just gave up on the DOCKER_AUTH_CONFIG variable completely (a questionable decision if you ask me...)
In order to authenticate from within the jobs and push the images to ECR, we needed to configure the helper there too. We did this by modifying our spawner's config.toml file to add a volume /usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login. (We also mounted the log directory and our helper wrapper.) In the docker push command, we added a --config docker-config flag, and wrote out an appropriate config to docker-config/config.json (see the sketch after this list).
Finally, our job image was docker/compose, and our verbose wrapper was written in bash, which isn't included in that image, so that was another silent failure. 😖.
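For illustration, here is a hedged sketch of the job-side half of that fix, assuming the helper binary is already mounted into the job container via the runner's config.toml volumes; the docker-config directory name and the image name are just examples, adjust to your setup:

# Runner side (config.toml), for reference: the helper must be visible inside job containers, e.g.
#   volumes = ["/usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login"]

# Inside the CI job: build a throwaway Docker config directory that only knows
# about the ECR credential helper, then push with --config pointing at it.
# <account-id> and <region> are placeholders, as in the question.
mkdir -p docker-config
cat > docker-config/config.json <<'EOF'
{
  "credHelpers": {
    "<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
  }
}
EOF

docker --config docker-config push <account-id>.dkr.ecr.<region>.amazonaws.com/our-image:latest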

docker-compose - how to provide credentials or API key in order to pull image from private repository?

I have a private repo where I am uploading images outside of Docker.
image: example-registry.com:4000/test
I have that defined in my docker-compose file.
How can I provide credentials or an API key in order to pull from that repository? Is it possible to do it without executing the "docker login" command, or is it required to always execute those commands prior to the docker-compose command?
I have API key which I am using for example to do the REST API from PowerShell or any other tool.
Can I use that somehow in order to avoid the "docker login" command constantly?
Thank you
docker login creates or updates the ~/.docker/config.json file for you. With just the login part, it looks like
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "REDACTED"
        }
    }
}
There can be many things in this file, here is the doc
So to answer your question, you can avoid the login command by distributing this file instead. Something like:
Create a dedicated token (you shouldn't share one token across multiple uses) here: https://hub.docker.com/settings/security
Move your current config elsewhere if it exists: mv ~/.docker/config.json /tmp
Execute docker login -u YOUR-ACCOUNT, using the token as the password
Copy the generated ~/.docker/config.json, which you can then distribute to your server(s). This file is as much a secret as your password, so don't make it public!
Move back your current config: mv /tmp/config.json ~/.docker/
Having the file as a secret that you distribute doesn't make much of a difference compared to typing the docker login command, though, especially if you have some scripting to do that.
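As a concrete, hedged sketch of those steps: instead of moving your own config aside and back (steps 2 and 5), you can point the CLI at a throwaway config directory with the DOCKER_CONFIG environment variable; the token variable and target path below are just examples:

# Log in with a dedicated Hub token into a temporary config dir, leaving your own config untouched.
export DOCKER_CONFIG=$(mktemp -d)
echo "$HUB_TOKEN" | docker login -u YOUR-ACCOUNT --password-stdin
# For a private registry such as example-registry.com:4000, pass its hostname to docker login instead.

# Distribute the generated config.json to the host that runs docker-compose (treat it as a secret).
scp "$DOCKER_CONFIG/config.json" deploy-host:~/.docker/config.json
unset DOCKER_CONFIG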

How to document a docker image

I have a docker image that receives a set of environment variables to customize its execution.
A simple example would be a web-server, that has stuff like client secret for OAuth2, a secret to sign cookies, etc.
The whole app is containerized on a docker image, that receives (runtime) environment variables.
I distribute that docker image on a private registry, and I would like to document that image, so that users can understand how they can customize the image.
Is it possible to ship, as part of the docker image, annotations that e.g. using docker describe my_image output markdown to the stdout?
I could of course use a static page on the web for documentation, but the user would still need to know where that documentation could be found, and the whole distribution would be more complex this way (e.g. documentation changes with the image tag).
Any ideas?
There is no silver bullet here as far as I know. All the solutions below work, but they require the user to be informed of how to retrieve the documentation.
There is no standard way of doing it.
The Open Container Initiative has created an image spec annotation suggesting that:
A link to more information about the image should be provided in a label called org.opencontainers.image.documentation.
A description of the software packaged inside the container should be provided in a label called org.opencontainers.image.description
According to OCI, one of the variations of option 1 below is correct.
Option 1: Providing a link in a label (Preferred by OCI)
Assuming the Dockerfile and related assets are version controlled in a git repository that is publicly accessible (for example on github), that git repository could also contain a README.md file. If you have a pipeline hooked up to the repo that builds and publishes the Docker image to a registry automatically, you could set up the docker build command to add a label with a link to the documentation as follows
# Get the current commit id
commit=$(git rev-parse HEAD)
# Build docker image and attach a link to the Readme as a label
docker build -t myimagename:myversion \
  --label "org.opencontainers.image.documentation=https://github.com/<user>/<repo>/blob/$commit/README.md" \
  .
This solution links to the documentation for that particular commit, versioned alongside your Dockerfile. It does, however, require the user to have internet access to be able to read the documentation.
Option 1b: Providing full documentation in a label (Preferred by OCI)
A variation of option 1 where the full documentation is serialized and put into the label (there are no length restrictions on labels). This way the documentation is bundled with the image itself.
As Jorge Leitao pointed out in the comments, the image annotation spec from OCI specifies the name of such a label as org.opencontainers.image.description
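A minimal sketch of that variation, assuming the README.md sits next to the Dockerfile (the shell, not the label, is what limits the argument length here):

# Serialize the full README into the OCI description label at build time
docker build -t myimagename:myversion \
  --label "org.opencontainers.image.description=$(cat README.md)" \
  .

The documentation can then be read back without any web page, e.g. with docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.description" }}' myimagename:myversion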
Option 2: Bundling documentation inside image
If you prefer to actually bundle the Readme.md file inside the image to make it independent of any external web page, consider the following
Upon build, make sure to copy the Readme.md file to the docker image
Also create a simple shell script describe that cats the Readme.md
describe
#!/usr/bin/env sh
cat /docs/Readme.md
Dockerfile additions
...
COPY Readme.md /docs/Readme.md
COPY describe /opt/bin/describe
RUN chmod +x /opt/bin/describe
ENV PATH="/opt/bin:${PATH}"
...
A user that has your Docker image can now run the following command to have the markdown sent to stdout
docker run myimage:version describe
This solution bundles the documentation for this particular version of the image inside the image itself, and it can be retrieved without any external dependencies.

Why can any user log in to influxdb?

I have installed InfluxDB. But on the server every user can log in when they type influx.
Why is it like that? Isn't it a security problem? And how can I solve it?
I want to log in with a specific admin user and its admin password.
The "why"
Different databases have used reasonings with minor differences over the years, but basically, it goes like this:
In its most simple install, <insert DBMS here> should just run - for integration tests, simple evaluation purposes etc. We could generate a root/admin/superhoncho user password, but more often than not, this is not going to be changed, and that is a Bad Thing™.
And since nobody sane would run a database in production without authentication and authorisation enabled, providing easy access in the default installation is not a problem anyway, is it?
I tend to agree with this reasoning, though I am of the opinion that in the case the DBMS has authentication and authorisation disabled per default, it should bind to localhost by default, too. You make your DBMS accessible to the outside world, and be it only your company's network? You surely have thought about the implications!
The "how"
Authentication
I am going to use Docker to illustrate it, and it should be obvious what you have to do in a non-Docker environment.
First, we pull the influxdb docker image and create a default config file in one go:
$ docker run --rm influxdb influxd config > influxdb.conf
Unable to find image 'influxdb:latest' locally
latest: Pulling from library/influxdb
...
Digest: sha256:0aa7fea5336b5e5cc1c80e16062865821ec772e06519c138947ef5ebd9b34907
Status: Downloaded newer image for influxdb:latest
Merging with configuration at: /etc/influxdb/influxdb.conf
Now we change the authentication parameter in the [http] section of our influxdb.conf to true:
...
[http]
auth-enabled = true
...
Next, we start our InfluxDB using this modified config file:
$ docker run -d --name influxdb -p 8086:8086 \
-v $PWD/influxdb.conf:/etc/influxdb/influxdb.conf:ro \
influxdb -config /etc/influxdb/influxdb.conf
1987f962c331d2404a2564bb752d971553b13181dbbbb1e38cf50d345b3191c4
(The hash sum you get will be different.)
Now, we connect to our influxdb and create the admin user
$ docker exec -it influxdb influx
Connected to http://localhost:8086 version 1.7.8
InfluxDB shell version: 1.7.8
> create user admin with password 'secret' with all privileges;
From this point on, credentials are needed for pretty much everything
> show users
ERR: unable to parse authentication credentials
Warning: It is possible this error is due to not setting a database.
Please set a database with the command "use <database>".
> auth
username: admin
password:
> show users
user admin
---- -----
admin true
Authorization
Simple mnemonic: "Users are granted permissions per database." So, in order to grant something to a user, that user must first exist:
> create user berkancetin with password 'supersecret';
> create database foobar
> grant read on foobar to berkancetin
> show users
user admin
---- -----
admin true
berkancetin false
> show grants for "berkancetin"
database privilege
-------- ---------
foobar READ
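If it helps, a small follow-up sketch of granting write access as well, run non-interactively through the same container; the -execute flag and the credentials shown are just one way to do it:

docker exec -it influxdb influx -username admin -password 'secret' \
  -execute 'GRANT WRITE ON "foobar" TO "berkancetin"'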
Further reading (!!!)
Ignore at your own risk. You. Have. Been. Warned.
InfluxDB authentication
InfluxDB docs on Authorization

when pushing docker image to private docker registry, having trouble marking it 'public' via my script (but can do via web ui)

I am pushing a docker image to a private docker registry, and am having trouble marking it 'public' via
a script.
For this discussion, I'm guessing the content of the Dockerfile doesn't matter... so let's assume I have the following in my
current working directory:
Dockerfile
FROM ubuntu
RUN touch /tmp/foo
I build like this:
docker build -t my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04 .
Then, I am doing my push like this:
docker push my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04
Next, I navigate to the web site that allows me to manage my private registry (at the URL http://my.private.docker.registry.com).
I look at my image, and I see it has a padlock icon next to it, indicating that it is private. I can manually unlock from the
web UI, but I'd like to know if there are any options to docker's 'push' command that will allow me to mark the image
as 'public' without manual intervention.
One thing I tried was setting global settings for my namespace such that all new repos would be readable/writable by all users.
Specifically: I went into the Docker web ui for my private registry and for the namespace 'foo' I tried adding default permissions
(for any newly created repos) such that all users will have 'write' access to any new repo pushed under the 'foo' namespace.
However, even after doing the above, when I pushed a new image to my private registry under namespace foo, that image was still
marked with the pad-lock. I looked up the command line options for 'docker push', and I did not see any option that looked like
it would affect the visibility of the image at the time of push.
thanks in advance for your help !
-chris
So, according to the folks who manage the Docker registry at the company I'm at now: there is no command-line way to grant users other than the repository creator write access to a repo. You have to go to the web UI and manually mark the repo 'public', and you have to add permissions for each user (although it is possible to have groups of users, and then add a whole group -- this still is clunky because new employees have to be manually added to the group).
I find it hard to believe that there's no command-line way... But this is what our experts say. If there are other experts out there who have a better idea, please chime in! Otherwise I will do it manually through the web UI (grrrrRRrr).
