I have created a two-instance Docker swarm on Google Compute Engine.
Docker version 18.06.1-ce, build e68fc7a on Ubuntu 18.04.1 LTS
I created a service account:
gcloud iam service-accounts create ${KEY_NAME} --display-name "${KEY_DISPLAY_NAME}"
gcloud projects add-iam-policy-binding $PROJECT --member serviceAccount:${KEY_NAME}@${PROJECT}.iam.gserviceaccount.com --role roles/storage.admin
gcloud iam service-accounts keys create --iam-account ${KEY_NAME}@${PROJECT}.iam.gserviceaccount.com key.json
I transferred the key.json to my Docker swarm manager.
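For example, a copy along these lines works (mgr-1 and $ZONE are placeholders for the manager instance and its zone, not the exact command used):
gcloud compute scp key.json mgr-1:~/key.json --zone $ZONE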
Then I ran the following commands:
gcloud auth configure-docker
cat key.json | tr '\n' ' ' | docker login -u _json_key --password-stdin \
https://eu.gcr.io
I can successfully pull an image from my private eu.gcr.io repository:
docker pull eu.gcr.io/$PROJECT/$IMAGE
So, logging in seems to work and the gcloud helper seems to be properly installed.
But creating a service in my swarm fails:
docker service create --replicas 2 --network overlay --name $NAME eu.gcr.io/$PROJECT/$IMAGE --with-registry-auth
image eu.gcr.io/$PROJECT/$IMAGE:latest could not be accessed on a registry to record
its digest. Each node will access eu.gcr.io/$PROJECT/$IMAGE:latest independently,
possibly leading to different nodes running different versions of the image.
qwdm524vggn50j4lzoe5paknj
overall progress: 0 out of 2 tasks
1/2: No such image: eu.gcr.io/$PROJECT/$IMAGE:latest
2/2: No such image: eu.gcr.io/$PROJECT/$IMAGE:latest
Looking in syslog shows the following:
Aug 25 13:37:15 mgr-1 dockerd[1368]: time="2018-08-25T13:37:15.299064551Z" level=info msg="Attempting next endpoint for pull after error: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication"
Aug 25 13:37:15 mgr-1 dockerd[1368]: time="2018-08-25T13:37:15.299168218Z" level=error msg="pulling image failed" error="unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication" module=node/agent/taskmanager node.id=xgozmc8iyjls7ulh4k3tvions service.id=qwdm524vggn50j4lzoe5paknj task.id=qrktpo34iuhiyl1rmbi71y4wg
AFAICS, I am using the correct service account JSON to log in to the Google Container Registry (docker pull works), and I added the --with-registry-auth flag to docker service create, which has been the answer to similar questions, but it still doesn't work. Does docker service create authenticate the same way docker pull does?
Any ideas how I might solve this?
UPDATE
Instead of Google Container Registry I also tried the GitLab Registry. I created a registry deploy token on the GitLab site and ran the following command:
docker login registry.gitlab.com -u $USERNAME -p $PASSWORD
Then this just works:
docker pull registry.gitlab.com/$ORGANISATION/$PROJECT/$IMAGE
But this command fails with a similar error:
docker service create --replicas 2 --network overlay --name $NAME registry.gitlab.com/$ORGANISATION/$PROJECT/$IMAGE --with-registry-auth
image registry.gitlab.com/$ORGANISATION/$PROJECT/$IMAGE:latest could not be accessed on a registry to record
its digest. Each node will access registry.gitlab.com/$ORGANISATION/$PROJECT/$IMAGE:latest independently,
possibly leading to different nodes running different
versions of the image.
r5fqg94jrvt587le0fu779zaw
overall progress: 0 out of 2 tasks
1/2: No such image: $ORGANISATION/$PROJECT/$IMAGE:latest
2/2: No such image: $ORGANISATION/$PROJECT/$IMAGE:latest
And /var/log/syslog contains
Aug 25 21:56:14 mgr-1 dockerd[1368]: time="2018-08-25T21:56:14.615895063Z" level=error msg="pulling image failed" error="Get https://registry.gitlab.com/v2/$ORGANISATION/$PROJECT/$IMAGE/manifests/latest: denied: access forbidden" module=node/agent/taskmanager node.id=xgozmc8iyjls7ulh4k3tvions service.id=r5fqg94jrvt587le0fu779zaw task.id=huwpjtu1wujk527t84y7yvbvd
So it seems docker service create doesn't use the credentials provided, and the issue is not specific to either Google Container Registry or GitLab Registry?
OK, I found the problem. I had to use:
docker service create --with-registry-auth --replicas 2 --network overlay --name $NAME registry.gitlab.com/$ORGANISATION/$PROJECT/$IMAGE
rather than
docker service create --replicas 2 --network overlay --name $NAME registry.gitlab.com/$ORGANISATION/$PROJECT/$IMAGE --with-registry-auth
In the latter case --with-registry-auth was treated as an argument to my image rather than as an option to docker service create, so no authentication was used to pull the images from either private registry.
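This follows from the general form of the command, in which everything after the image name is passed to the container:
docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]
So all service-level options, including --with-registry-auth, must come before the image reference.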
Related
I have created an Azure Container Registry.
I am able to push an image from my local machine to the Azure Container Registry.
But whenever I try to pull or run any docker command, it always gives me the error:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I am new to Azure; is there something I need to install or enable for Docker?
When I run docker --version it shows the version perfectly.
I got this error because I used 'Azure Cloud Shell', which doesn't support running the Docker daemon.
To fix it, run CMD or PowerShell locally and execute these commands:
az login
az acr login -n your_registry_name.azurecr.io
docker pull your_registry_name.azurecr.io/company_name/service_image:version
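Once the pull works, the image can be run locally as usual, e.g. (same illustrative image name as above):
docker run --rm your_registry_name.azurecr.io/company_name/service_image:version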
For building and pushing an image from within Azure Cloud Shell, you should use the 'az acr' command instead of 'docker'.
Follow these steps to build and push docker image to Azure Container Registry -
Open Azure Cloud Shell
Get necessary files
git clone https://github.com/tsrana/spring-boot-db2.git ,
cd spring-boot-db2
Create a Resource Group
az group create --name Docker_RG --location eastus
Create Container Registry
az acr create --resource-group Docker_RG --name tsrContainerRegistry --sku Basic
Build image and push to registry
az acr build --image tsr/hello-worldspring-boot-db2:v1 --registry tsrContainerRegistry --file Dockerfile .
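To confirm the image actually landed in the registry, the repository and its tags can be listed (names taken from the commands above):
az acr repository list --name tsrContainerRegistry --output table
az acr repository show-tags --name tsrContainerRegistry --repository tsr/hello-worldspring-boot-db2 --output table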
I would like to run a Docker container to see what is in a public Lambda Layer.
Following the AWS SAM layers docs with a SAM app that uses only the PyTorch layer, I produced the Docker tag, then tried pulling the Docker image, which fails with pull access denied / repo may require auth.
I did try aws ecr get-login --no-include-email to authenticate correctly, though I still couldn't access the image.
So I think the issue may be that I am not authorised to pull the image of the Lambda Layer, or that the image doesn't exist. It is not clear to me which.
Alternatively, it would be good to download the public Lambda Layer directly, and then I could use https://github.com/lambci/docker-lambda to inspect it.
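As a rough sketch of that alternative (untested against this particular layer), the AWS CLI can return a presigned download URL for a layer version, which can then be fetched and unpacked; $LAYER_ARN stands for the ARN shown below:
url=$(aws lambda get-layer-version-by-arn --arn $LAYER_ARN --query Content.Location --output text)
curl -o layer.zip "$url"
unzip layer.zip -d layer/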
More context about what I tried
So the Lambda Layer I would like to investigate is:
arn:aws:lambda:eu-west-1:934676248949:layer:pytorchv1-py36:1
The Docker tag I produced is:
python3.6-0ffbca5374c4d95e8e10dbba8
Then I tried running (and thereby pulling) the Docker image with both of:
docker run -it --entrypoint=/bin/bash samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8 -i
docker run -it --entrypoint=/bin/bash <aws_account_id>.dkr.ecr.<region>.amazonaws.com/samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8 -i
Which both failed with the error:
docker: Error response from daemon: pull access denied for samcli/lambda, repository does not exist or may require 'docker login'.
Just a quick potential answer (I've not read the links you provided as I am not at my computer): given that you mentioned aws ecr get-login --no-include-email, I am assuming you are trying to pull a Docker image from AWS's registry service (ECR).
The line docker run -it --entrypoint=/bin/bash samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8 -i will, with default config, look at Docker Hub's registry. If you are trying to pull a Docker image from AWS, I would expect something more like docker run -it --entrypoint=/bin/bash aws_account_id.dkr.ecr.region.amazonaws.com/samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8 -i (again, not saying that command will work, but something like it, together with your AWS repo sign-in command).
Since https://hub.docker.com/samcli/lambda is a 404 I suspect this is one of those occasions the error message is exactly right, the repo does not exist.
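For completeness, the classic ECR flow being assumed here looks roughly like this (account ID, region and repository are placeholders, and it only helps if the image really lives in an ECR repository you can read):
$(aws ecr get-login --no-include-email --region eu-west-1)
docker pull 123456789012.dkr.ecr.eu-west-1.amazonaws.com/samcli/lambda:python3.6-0ffbca5374c4d95e8e10dbba8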
I would like to pull a Docker image that was built inside an OpenShift Container Platform 3.9 cluster out of that cluster. To this end I try the following:
username=$(oc whoami)
api_token=$(oc whoami -t)
docker login -u $username -p $api_token my-cluster:443
image=$(oc get is/my-is -o jsonpath='{.status.tags[0].items[0].dockerImageReference}')
docker pull $image
Now docker login works, but docker pull produces the error message
lookup docker-registry.default.svc on 1.2.3.4: no such host
where 1.2.3.4 is a placeholder for my local nameserver according to /etc/resolv.conf, and $image is of the form docker-registry.default.svc:5000/registry/my-is@sha256:my-id.
Am I doing something wrong or could it be that the cluster administrator must first expose the registry (but should it not be exposed by default)? If I try oc get svc -n default as suggested here I get this error message:
User "my-user" cannot list services in project "default"
So what steps are needed (preferably without intervention by the cluster's administrator) for me to successfully pull out that image? Would the situation change if the pull occurred in a container also executing inside the OpenShift cluster?
The lead provided in a comment was the right one (thanks!). The following script now works; no intervention by a cluster admin was required:
username=$(oc whoami)
api_token=$(oc whoami -t)
docker login -u $username -p $api_token my-cluster:443
docker pull my-cluster:443/my-project/my-is
docker images
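For pulling by digest (the form the image stream returns), a hedged variant is to rewrite the internal registry host in the image reference to the externally reachable one; my-cluster:443 and the internal host are the ones from the question:
image=$(oc get is/my-is -o jsonpath='{.status.tags[0].items[0].dockerImageReference}')
docker pull "${image/docker-registry.default.svc:5000/my-cluster:443}"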
It is easy to work with OpenShift as a Container-as-a-Service; see the detailed steps. So, via the docker client I can work with OpenShift.
I would like to work on my laptop with Minishift. That's the local version of OpenShift for your laptop.
Which Docker registry should I use in combination with Minishift? Minishift doesn't have its own registry, I guess.
So, I would like to do:
$ mvn clean install -- building the application
$ oc login -- to your Minishift environment
$ docker build -t myproject/mynewapplication:latest .
$ docker tag -- ?? normally to an OpenShift docker registry entry
$ docker push -- ?? to a local docker registry?
$ on 1st time: oc new-app mynewapplication
$ on updates: oc rollout latest dc/mynewapplication -n myproject
I just use docker and oc cluster up, which is very similar. The internal registry that is deployed has an address in the 172.30.0.0/16 space (i.e. the default service network).
$ oc login -u system:admin
$ oc get svc -n default | grep registry
docker-registry ClusterIP 172.30.1.1 <none> 5000/TCP 14m
Now, this service IP is internal to the cluster, but it can be exposed on the router:
$ oc expose svc docker-registry -n default
$ oc get route -n default | grep registry
docker-registry   docker-registry-default.127.0.0.1.nip.io   docker-registry   5000-tcp   None
In my example, the route was docker-registry-default.127.0.0.1.nip.io
With this route, you can log in with your developer account and your token
$ oc login -u developer
$ docker login docker-registry-default.127.0.0.1.nip.io -p $(oc whoami -t) -u developer
Login Succeeded
Note: oc cluster up is ephemeral by default; the docs can provide instructions on how to make this setup persistent.
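From there, tagging and pushing a locally built image through that route looks roughly like this (assuming the project myproject exists and the developer user may push to it):
docker tag myproject/mynewapplication:latest docker-registry-default.127.0.0.1.nip.io/myproject/mynewapplication:latest
docker push docker-registry-default.127.0.0.1.nip.io/myproject/mynewapplication:latest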
One additional note is that if you want OpenShift to try to use some of its native builders, you can simply run oc new-app . --name <appname> from within your source code directory.
$ cat Dockerfile
FROM centos:latest
$ oc new-app . --name=app1
--> Found Docker image 49f7960 (5 days old) from Docker Hub for "centos:latest"
* An image stream will be created as "centos:latest" that will track the source image
* A Docker build using binary input will be created
* The resulting image will be pushed to image stream "app1:latest"
* A binary build was created, use 'start-build --from-dir' to trigger a new build
* This image will be deployed in deployment config "app1"
* The image does not expose any ports - if you want to load balance or send traffic to this component
you will need to create a service with 'expose dc/app1 --port=[port]' later
* WARNING: Image "centos:latest" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
imagestream "centos" created
imagestream "app1" created
buildconfig "app1" created
deploymentconfig "app1" created
--> Success
Build scheduled, use 'oc logs -f bc/app1' to track its progress.
Run 'oc status' to view your app.
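As that output suggests, subsequent builds from the current directory can then be triggered with the binary build:
$ oc start-build app1 --from-dir=. --follow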
There is an internal image registry. You login to it and push images just like you suggest. You just need to know the address and what credentials you need. For details see:
http://cookbook.openshift.org/image-registry-and-image-streams/how-do-i-push-an-image-to-the-internal-image-registry.html
I have a GKE cluster running in GCE, and I was able to build and tag an image derived from ubuntu:16.04:
/ # docker images
REPOSITORY                           TAG      IMAGE ID       CREATED         SIZE
eu.gcr.io/my-project/ubuntu-gcloud   latest   a723e43228ae   7 minutes ago   347MB
ubuntu                               16.04    ebcd9d4fca80   7 days ago      118MB
First I try to log in to the registry (as documented in the GKE docs):
docker login -u oauth2accesstoken -p `curl -s "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google"|awk -F\" "{ print \$4 }"` eu.gcr.io
And then the docker push command fails:
# docker push eu.gcr.io/my-project/ubuntu-gcloud
The push refers to a repository [eu.gcr.io/my-project/ubuntu-gcloud]
a3a6893ab23f: Preparing
6e390fa7d62c: Preparing
22b8fccbaf84: Preparing
085eeae7a10b: Preparing
b29983dd2306: Preparing
33f1a94ed7fc: Waiting
b27287a6dbce: Waiting
47c2386f248c: Waiting
2be95f0d8a0c: Waiting
2df9b8def18a: Waiting
denied: Unable to create the repository, please check that you have access to do so.
The token should be valid; on another instance I'm able to run gcloud commands with it, and the service account has the 'Editor' role on the project.
The weirdest part is when I do docker login with obviously invalid credentials
misko#MacBook ~ $ docker login -u oauth2accesstoken -p somethingverystupidthatisreallynotmypasswordortoken123 eu.gcr.io
Login Succeeded
login always succeeds.
What shall I do to successfully docker push to gcr.io?
Try this:
gcloud docker -- push eu.gcr.io/my-project/ubuntu-gcloud
If you want to use regular docker commands, update your docker configuration with GCR credentials:
gcloud docker -a
Then you can build and push docker images like this:
docker build -t eu.gcr.io/my-project/ubuntu-gcloud .
docker push eu.gcr.io/my-project/ubuntu-gcloud
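Alternatively, on newer gcloud versions, registering the credential helper once (as mentioned earlier in this thread) lets plain docker commands work against gcr.io registries:
gcloud auth configure-docker
docker push eu.gcr.io/my-project/ubuntu-gcloud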