Create image-stream on image from private registry on OpenShift - docker

I've set up an OpenShift Origin 1.1.3 cluster. Now I'm pulling images from a private registry. This registry is 'insecure': it has self-signed certificates and requires credentials to authenticate. I'm able to perform a docker login and to pull the image manually on my node.
The problem is that only that node can access the image. So when I scale my pod (based on that image), all replicas will run on that specific node. Other nodes are not able to pull or use the image.
So I want to create an image-stream for my image:
oc import-image --insecure=true ec2-xxx:5000/image
But: message: you may not have access to the Docker image "ec2-xxx:5000/image"
reason: Unauthorized
I read about creating a secret. I created it:
oc secrets new-dockercfg mysecret --docker-server=ec2-xxx:5000 --docker-username=*** --docker-password=*** --docker-email=any@mail.com
How do I have to add this secret to my image-stream? And is this the right approach?

@cloudnoob's answer helped me a lot.
But the main problem was that I had created my secret in the wrong way.
I saw this after starting the OpenShift master with loglevel 5.
Unable to find a secret to match https://ec2-xxx:5000/v2/test/image/manifests/83
So I had to create my secret with https (it's called insecure because of the self-signed certificates, but it still uses https):
oc secrets new-dockercfg mysecret --docker-server=https://ec2-xxx:5000 --docker-username=*** --docker-password=*** --docker-email=any@mail.com
After this step I had to perform the steps of @cloudnoob: adding the secrets to the service accounts. After that, the import is a success.

OpenShift Origin Doc
To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod will use; default is the default service account:
$ oc secrets add serviceaccount/default secrets/<secret-name> --for=pull
To use a secret for pushing and pulling build images, the secret must be mountable inside of a pod. You can do this by running:
$ oc secrets add serviceaccount/builder secrets/<secret-name>
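Putting the pieces together, the sequence that finally worked can be sketched as follows (mysecret and ec2-xxx:5000/image are the example names used above; adjust them to your own registry and secret):
# Create the dockercfg secret with the https:// prefix on the server name
$ oc secrets new-dockercfg mysecret --docker-server=https://ec2-xxx:5000 --docker-username=*** --docker-password=*** --docker-email=any@mail.com
# Let pods pull with it
$ oc secrets add serviceaccount/default secrets/mysecret --for=pull
# Let builds push/pull with it
$ oc secrets add serviceaccount/builder secrets/mysecret
# Re-run the import
$ oc import-image --insecure=true ec2-xxx:5000/image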

Related

How to inject docker registry username/password in docker-compose file?

In order to deploy an application using docker and a remote registry:
Using the docker client, execute docker login so that the credentials are stored either directly in $HOME/.docker/config.json or in a credential store that is also specified in $HOME/.docker/config.json. Then use the docker create command to start the application.
In Kubernetes, a secret can be generated from the docker registry username and password. The secret can then be injected into the helm chart using imagePullSecret, so that the helm install command instructs the kubelet to pull the image for the container in the scheduled pod. To update the image registry, the image name and pull secret can be updated before re-installation.
I have three questions:
How can I set the username and password or inject these credentials for the services in docker-compose without having to run docker login first on each deployment host? (as in nu
Can I populate a credential store specified in $HOME/.docker/config.json using the docker login command on one machine, then specify the same credential store in $HOME/.docker/config.json of another machine, and then use the answer to the previous question to inject or pull the credentials?
If the docker daemon checks for the credentials inside the credential store that is specified in $HOME/.docker/config.json, then what is the use of the helper program?
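For reference, the docker login part of the flow described above can be scripted non-interactively on a deployment host so that the credentials land in $HOME/.docker/config.json (or whatever credential store it points to) before docker-compose pulls anything. This is only a sketch; registry.example.com and the REGISTRY_* environment variables are placeholders, not names from the question:
# Non-interactive login; credentials end up in $HOME/.docker/config.json
# (or the credential store configured there)
echo "$REGISTRY_PASSWORD" | docker login registry.example.com --username "$REGISTRY_USER" --password-stdin
# docker-compose will now reuse those stored credentials when pulling images
docker-compose pull
docker-compose up -d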

Is there any way to use GitHub / GitLab for downloading the Docker image?

Since Docker Hub only allows one private repository, I wonder if there is any way to use GitHub or GitLab, etc., to download the images? For instance:
FROM git@github.com/username/repo
...
...
...
This is very easy with an account on gitlab.com. GitLab provides a Docker registry linked to each project, and you can have unlimited private projects:
Create a project my-docker-project
Go to Packages and Registries > Container Registry; you should see a few commands to access your registry
Connect your machine to this registry using a command like:
# Will prompt for login/pass
docker login registry.gitlab.com
You'll need an access token or deploy token with the read_registry and write_registry scopes. You can generate one via your profile under Preferences > Access Tokens. The login is the token name and the password is the secret token provided.
You can now push Docker images with commands such as:
# Push an image
docker push registry.gitlab.com/YourUsernameOrGroup/my-docker-project
# Push an image on a sub-path
docker push registry.gitlab.com/YourUsernameOrGroup/my-docker-project/myimage
You can then use the image in a Dockerfile by referencing its URL such as:
FROM registry.gitlab.com/YourUsernameOrGroup/my-docker-project
# ...
Of course, the machine from which you build must be authenticated against the related GitLab registry using the docker login command above (or the project must be public).
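For completeness, a minimal build-and-push round trip might look like the following (image path reused from the answer above, tag chosen arbitrarily):
# Build and tag the image with the full registry path, then push it
docker build -t registry.gitlab.com/YourUsernameOrGroup/my-docker-project:latest .
docker push registry.gitlab.com/YourUsernameOrGroup/my-docker-project:latest
# Pull it from any other machine that has run docker login against the registry
docker pull registry.gitlab.com/YourUsernameOrGroup/my-docker-project:latest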
Both GitHub and GitLab have excellent package registry services:
For GitHub, the GitHub Package Registry (GPR)
For GitLab, the GitLab Container Registry
Both have excellent features, such as using the image directly from a Dockerfile, exactly as you want.
I have a public Node.js example on GitHub that uses GPR to store the built image/package.
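For the GitHub side, assuming the newer ghcr.io endpoint (GitHub Container Registry) and a personal access token with the appropriate packages scopes, the flow is roughly as follows (all names below are placeholders):
# Authenticate against ghcr.io with a personal access token
echo "$GITHUB_TOKEN" | docker login ghcr.io -u your-github-username --password-stdin
# Push and pull images under your user or organization namespace
docker push ghcr.io/your-github-username/your-image:latest
docker pull ghcr.io/your-github-username/your-image:latest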

How to pull/push from/to GCR from GKE node

I'm building an application that I will run in GKE. This application will use shell commands (for now) to build docker images and try to push them to GCR. I'm finding that when I try to do this from a pod running in GKE I get authentication problems. I'm having trouble figuring out why these authentication problems are happening.
Here's a list of all of the debugging I've done so far. At the highest level, my GKE clusters have the https://www.googleapis.com/auth/devstorage.read_write OAuth scope. When I examine the permissions on the underlying GCE instance, I see the expected permissions; in particular, Storage is set to Read Write.
Now, when I SSH into that instance using the console and list the docker images I see the image used by GKE when spinning up my pod:
paymahn@gke-prod-478557c-default-pool-e9314f46-d9mn ~ $ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/gadic-310112/server latest 8f8a22237c31 2 days ago 1.85GB
...
However, if I try to manually pull that image while SSH-ed into the GCP instance, I get an authentication problem:
paymahn@gke-prod-478557c-default-pool-e9314f46-d9mn ~ $ docker pull gcr.io/gadic-310112/server:latest
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I also looked at the service account 65106360748-compute@developer.gserviceaccount.com, which is the default compute instance service account. I manually added the Storage Object Creator role on top of the permissions it already has.
Adding the Storage Object Creator role to that service account didn't help.
Is my approach to authentication here fundamentally flawed? It seems like I have all the right pieces in place to pull/push from GCR from GKE. Maybe there's an extra step I need to do for the docker client to authenticate?
Figured it out. I had to:
make a service account with the roles/storage.objectAdmin role
generate a key for that service account
store that key as a secret in GKE
Mount that secret into my pods
run gcloud auth activate-service-account --key-file <path to key>
run gcloud auth configure-docker
Once all of that was done, my pods could pull from and push to GCR.
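A rough sketch of those steps with gcloud and kubectl, assuming a hypothetical service account name (gcr-pusher), secret name (gcr-key), and mount path; only the project ID comes from the question:
# Service account with object admin rights on the GCR storage buckets
gcloud iam service-accounts create gcr-pusher --project gadic-310112
gcloud projects add-iam-policy-binding gadic-310112 --member serviceAccount:gcr-pusher@gadic-310112.iam.gserviceaccount.com --role roles/storage.objectAdmin
# Key file, stored as a Kubernetes secret that gets mounted into the pods
gcloud iam service-accounts keys create key.json --iam-account gcr-pusher@gadic-310112.iam.gserviceaccount.com
kubectl create secret generic gcr-key --from-file=key.json
# Inside the pod, once the secret is mounted (e.g. at /secrets/key.json):
gcloud auth activate-service-account --key-file /secrets/key.json
gcloud auth configure-docker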

Using image from private docker registry in pulumi on ECS

I want to use pulumi to set up ECS with an image from a private docker registry (GitLab). Is there a way to specify the secret in the container definition?
I'm trying to set up a new ECS cluster (awsx.ecs.Cluster) with a Service (awsx.ecs.EC2Service) running a Task with a container (awsx.ecs.Container). The image for the container is stored in a GitLab private docker registry.
In the AWS console I would've created a Task with a container and selected Private repository authentication. This allows setting an arn to a secret in secrets manager containing credentials as described in Private Registry Authentication for Tasks.
I haven't found a way to set this in pulumi though.
Then you would need to do it like you normally would in Kubernetes.
Create a docker registry secret (set its type to kubernetes.io/dockerconfigjson) and make the pod reference that secret by adding imagePullSecrets to the pod spec.
For more details, consult the link I've referenced.
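A minimal sketch of that Kubernetes approach, with placeholder secret and registry credential names (the commented YAML shows where the secret is referenced):
# Create a registry secret of type kubernetes.io/dockerconfigjson
kubectl create secret docker-registry gitlab-registry --docker-server=registry.gitlab.com --docker-username=<deploy-token-user> --docker-password=<deploy-token-secret>
# Then reference it from the pod spec (or pod template):
#   spec:
#     imagePullSecrets:
#       - name: gitlab-registry
#     containers:
#       - name: app
#         image: registry.gitlab.com/your-group/your-project:latest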

Private Docker registry in pull-through cache mode returns "invalid authorization credential"

I'm using the official Docker registry image, and have configured it as a pull-through cache.
My clients can log in and push/pull local images, such as this:
docker login -u username -p secret docker.example.local:5000
docker pull docker.example.local:5000/myImage
I've configured my clients to use the Docker registry server as a proxy:
root@server:/# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://docker.example.local:5000"]
}
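After changing /etc/docker/daemon.json, the daemon needs a restart for the mirror setting to take effect (assuming a systemd-based host):
sudo systemctl restart docker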
But when my clients try to pull images not already present on the registry server, I get an error. Example pull command:
docker pull alpine
The registry server then responds with this message in its log file:
error authorizing context: basic authentication challenge for realm \"Registry Realm\": invalid authorization credential
I came across an SO post suggesting putting an Nginx proxy server in front, but this seems like a hack and I'd prefer a cleaner way of doing this if possible.
How have others set up their registry server in a pull through cache mode - did you find a better solution than setting up an Nginx proxy in front of the registry server?
You are using the wrong registry server name.
Do not use the https:// prefix:
#>docker login -u username -p secret docker.example.local:5000
You should ensure that you either provide the environment variable REGISTRY_HTTP_HOST=https://docker.example.local:5000 or specify it in the /etc/docker/registry/config.yml file of the registry image:
http:
  addr: localhost:5000
  prefix: /my/nested/registry/
  host: https://docker.example.local:5000
  # see https://docs.docker.com/registry/configuration/
The reason is that the address used in docker login should match the host configuration of the Docker registry.
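As a sketch, when running the official registry image as a pull-through cache, the relevant settings can be passed as environment variables (hostname reused from the question, everything else illustrative):
# Pull-through cache of Docker Hub whose advertised host matches the login address
docker run -d -p 5000:5000 --name registry-mirror -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io -e REGISTRY_HTTP_HOST=https://docker.example.local:5000 registry:2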
It's been a bit since I dug through that code, but I believe docker will attempt to log in to your pull-through cache using your Hub credentials. It only uses that registry's individual credentials when you pull from it directly. So you need to run docker login without a hostname to configure the Hub login. This is only between the docker engine and the mirror.
From the pull-through cache to Hub, you configure the user/password in the pull-through cache, and anyone that can reach the cache will use those credentials when pulling from Hub. This means you need to ensure the cache is configured with a minimal-access user or is only accessible by devices on the network that you trust.
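Those upstream Hub credentials are set on the cache itself, e.g. via the proxy section of the registry config or its environment-variable equivalents; the values below are placeholders:
# Same pull-through cache as in the previous sketch, now with Hub credentials
docker run -d -p 5000:5000 --name registry-mirror -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io -e REGISTRY_PROXY_USERNAME=hub-user -e REGISTRY_PROXY_PASSWORD=hub-access-token registry:2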
