I created my Docker images using a Makefile and checked that they were correct. In fact, I was able to run them and even push them to Docker Hub without problems. I then followed the suggested steps to upload the image to Bluemix, but was unable to do so. I am getting an error telling me that my credentials are incorrect, although I am sure they are not (in fact, I was able to log in on the Bluemix website with the same credentials without problems).
Below are the steps I followed and the error I got; any suggestions to solve this are welcome:
$ cf login
API endpoint: https://api.eu-gb.bluemix.net
Email> agorostidi
Password>
Authenticating...
OK
Targeted org agorostidi
Targeted space dev
API endpoint: https://api.eu-gb.bluemix.net (API version: 2.40.0)
User: andres.gorostidi@gmail.com
Org: agorostidi
Space: dev
MacBook-Pro-de-Andres:apache-docker andres$ cf ic login
Client certificates are being retrieved from IBM Containers...
Client certificates are being stored in /Users/andres/.ice/certs/...
Client certificates are being stored in /Users/andres/.ice/certs/containers-api.eu-gb.bluemix.net/504cc61c-47e2-4528-914a-3def71277eea...
OK
Client certificates were retrieved.
Deleting old configuration file...
Checking local Docker configuration...
OK
Authenticating with registry at host name registry.eu-gb.bluemix.net
OK
Your container was authenticated with the IBM Containers registry.
Your private Bluemix repository is URL: registry.eu-gb.bluemix.net/goros
You can choose from two ways to use the Docker CLI with IBM Containers:
Option 1: This option allows you to use "cf ic" for managing containers on IBM Containers while still using the Docker CLI directly to manage your local Docker host.
Use this Cloud Foundry IBM Containers plug-in without affecting the local Docker environment:
Example Usage:
cf ic ps
cf ic images
Option 2: Use the Docker CLI directly. In this shell, override the local Docker environment to connect to IBM Containers by setting these variables. Copy and paste the following commands:
Note: Only Docker commands followed by (Docker) are supported with this option.
export DOCKER_HOST=tcp://containers-api.eu-gb.bluemix.net:8443
export DOCKER_CERT_PATH=/Users/andres/.ice/certs/containers-api.eu-gb.bluemix.net/504cc61c-47e2-4528-914a-3def71277eea
export DOCKER_TLS_VERIFY=1
Example Usage:
docker ps
docker images
MacBook-Pro-de-Andres:apache-docker andres$ docker push registry.ng.bluemix.net/eci_test/chargeback:latest
The push refers to a repository [registry.ng.bluemix.net/eci_test/chargeback] (len: 1)
Sending image list
Please login prior to push:
Username: agorostidi
Password:
Email: andres.gorostidi@gmail.com
Error response from daemon: Wrong login/password, please try again
You logged in to the Bluemix London region but are trying to push an image to the Bluemix US South region; that's why the docker push command is asking for your credentials again.
If you want to push your images to the Bluemix US South region you have to login to that region first.
Please point your API to the Bluemix US South region with the following command:
$ cf api https://api.ng.bluemix.net
Then proceed again with the commands you ran before, i.e.:
$ cf login
$ cf ic login
$ docker push registry.ng.bluemix.net/eci_test/chargeback:latest
Otherwise, if you want to push your image to the Bluemix London region, then you have to re-tag the image name to match the London region:
$ docker tag chargeback:latest registry.eu-gb.bluemix.net/eci_test/chargeback:latest
Then you can run the docker push command specifying the newly tagged image.
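For example, after re-tagging for London (and assuming you are still logged in to the London region with cf ic login):
$ docker push registry.eu-gb.bluemix.net/eci_test/chargeback:latest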
I am trying to pull a Docker image from our private GCP container registry on a regular VM instance (e.g. ubuntu-1904) running on Google Cloud, but I am getting the following error:
user#test ~ $ sudo docker pull example.io/docker-dev/name:v01
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed those instructions, i.e., ran the gcloud auth configure-docker command, which output a success message.
However, when running the docker pull command again, I get the exact same error.
A couple of extra tests that might help to provide feedback:
If I pull from a different registry, it works (for example, docker run hello-world pulls and runs the hello-world image).
I tested the same command (docker pull example.io/docker-dev/name:v01) on my local computer (a Mac) instead of the VM instance, and it works perfectly.
I have also created VM instances and enabled the option "Deploy a container image to this VM instance", providing the container address (example.io/docker-dev/name:v01), and that also works. However, I don't want to use this option because it automatically selects a "Container-Optimized" boot disk, which I prefer not to use due to its limitations.
Question:
Why can't I pull Docker images from my private container registry on an Ubuntu or Debian VM, even though Docker works very well pulling images from other repositories (Docker Hub)?
I did this yesterday. Just run gcloud auth configure-docker, then set:
VERSION=2.0.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs, "arm64" for ARM 64.
After that you can download docker-credential-gcr:
wget "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
Then extract the binary and install it as the docker-credential-gcloud helper:
tar xvzf "docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
sudo mv docker-credential-gcr /usr/bin/docker-credential-gcloud
sudo chmod +x /usr/bin/docker-credential-gcloud
And finally run
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
Now you will be able to pull your image :)
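To verify, you can re-run the pull from the question as the same user that ran the steps above (the image path is the question's placeholder):
docker pull example.io/docker-dev/name:v01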
For me, on a Container-Optimized OS instance, it helped to just run:
docker-credential-gcr configure-docker
https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#starting_a_docker_container_via_cloud-config
Note the default policy for compute instances:
VM instances, including those in Google Kubernetes Engine clusters, must have the correct storage access scopes configured to push or pull images. By default, VMs can pull images when Container Registry is in the same project.
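If you create the VM yourself, you can grant that scope at creation time; a minimal sketch with an illustrative instance name (storage-ro is the gcloud alias for the devstorage.read_only scope):
gcloud compute instances create my-vm --scopes=storage-ro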
If you run gcloud auth configure-docker, the auth information is saved under your home directory.
When you then run sudo docker pull example.io/docker-dev/name:v01, it looks for auth info under root's home directory and doesn't find anything there.
You should run both commands the same way: either both with sudo or both without.
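In other words, pick one consistent variant; a sketch using the image path from the question:
# either do both as root:
sudo gcloud auth configure-docker
sudo docker pull example.io/docker-dev/name:v01
# or both as your own user (requires your user to be in the docker group):
gcloud auth configure-docker
docker pull example.io/docker-dev/name:v01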
I'm trying to execute docker commands inside of a Docker container (don't ask why). To do so I start up a container by running:
sudo docker run -v /var/run/docker.sock:/var/run/docker.sock -it my_docker_image
I am able to run all of the Docker commands (pull, login, images, etc.), but when I try to push to my remote (GitLab) registry I get denied access. Yes, I did do a docker login and was able to log in successfully.
When looking at the GitLab logs I see an error telling me no access token was sent with the push. After I do a docker login I see a /root/.docker/config.json with the remote URL and a string of random characters (my credentials in base64, I believe). I'm using an access token as my password because I have MFA enabled on my GitLab server.
Appreciate the help!
I ended up resolving the issue by using docker:stable as my runner image. Not quite sure what the problem was with the centos:centos7 image.
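For reference, the interactive equivalent is re-running the command from the question with docker:stable as the image (a sketch; the GitLab registry URL and image path are illustrative):
$ sudo docker run -v /var/run/docker.sock:/var/run/docker.sock -it docker:stable
# inside the container, log in with your access token as the password,
# then push an image already tagged with the registry path:
docker login registry.gitlab.example.com
docker push registry.gitlab.example.com/mygroup/myimage:latest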
So I have my Docker image uploaded to my project's registry. I can navigate to https://console.cloud.google.com/gcr/images/ and I see my image listed there.
Now I want to run a VM in this project and use Docker on it to run this very image.
This is the command within my VM:
sudo /usr/bin/docker run eu.gcr.io/my-project-name/example001
The response is:
Unable to find image 'eu.gcr.io/.../example001:latest' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
I can list my images if I define "eu.gcr.io/..." as my project path. However, the machine seems to run on ".gcr.io" and is therefore not able to access my image? How would I fix this, and why is my image on "eu.gcr.io" while the machine is on ".gcr.io"? I can't find a method to change this (either move the image to gcr.io or move the machine to eu.gcr.io), but I'm not sure if this is even the issue.
Maybe it is an authentication issue with Docker?
A VM basically cannot be on ".gcr.io"; it can run in a non-European region/zone, but that shouldn't be a problem.
From the GCP access-control point of view, the registry is just a Cloud Storage bucket.
So I believe the first thing you need to check is that the VM has access to Google Cloud Storage.
With gcloud:
gcloud compute instances describe <instance-name>
and check whether the VM has a scope to read from devstorage:
serviceAccounts:
- email: ...-compute@developer.gserviceaccount.com
scopes:
- https://www.googleapis.com/auth/devstorage.read_only
- ...
This scope should be in place to read from the registry:
https://www.googleapis.com/auth/devstorage.read_only
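If the scope is missing, one way to add it is to stop the instance and re-set its scopes; a sketch, using the same placeholder instance name (check gcloud compute instances set-service-account --help for the service-account flags your setup needs):
gcloud compute instances stop <instance-name>
gcloud compute instances set-service-account <instance-name> --scopes=storage-ro
gcloud compute instances start <instance-name>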
Alternatively, if you don't have such a scope on the VM but do have gcloud configured there, you can use gcloud as a credential helper:
gcloud auth configure-docker
as stated in the doc you referred to: https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper
The answer is found here:
https://serverfault.com/questions/900026/gcp-no-access-to-container-registry-from-compute-engine
It is the docker command that needs the authorization; the hostname (eu.gcr.io) is not the issue here. I used the gcloud docker -- pull ... command to get the image from the repository for use within my VM.
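With the image from the question, that would be:
$ gcloud docker -- pull eu.gcr.io/my-project-name/example001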
After you create a Linux VM on GCP and SSH into it, you have to install the Google Cloud SDK, either using the installer scripts or manually.
If you are running Ubuntu, follow the documentation here; if you are installing on Red Hat or CentOS, follow the documentation here. After installing the SDK, run gcloud init to initialize it: just open a terminal, type gcloud init, and configure your profile. After that you have to install Docker:
sudo apt-get -y install docker-ce
sudo systemctl start docker
You need to have access to the registries you will be pushing to and pulling from.
Configure Docker to use gcloud as a credential helper by running:
gcloud auth configure-docker
After that you can push or pull images on your registry using gcloud with docker, as shown below:
Push: gcloud docker -- push gcr.io/google-containers/example-image:latest
Pull: gcloud docker -- pull gcr.io/google-containers/example-image:latest
I am trying to copy an image from one Bluemix registry namespace, registry.ng.bluemix.net/XXXX/rhel:v5, to another, i.e. registry.ng.bluemix.net/YYYY/rhel:v5,
using the following command, after logging in to the Bluemix account and space associated with XXXX:
cf ic cpi registry.ng.bluemix.net/XXXX/rhel:v5 registry.ng.bluemix.net/YYYY/rhel:v5
Note: I have access to both orgs and spaces.
Bluemix shows the following message:
Sending build context to Docker daemon 2.048kB
Error response from daemon: Build aborted with error: User does not have access to namespace 'YYYY' Build ID: 268-1502886177.269-12875
FAILED
Command failed
Please suggest what could be going wrong; is there a way to proceed?
The build service currently only supports building from and to the oldest namespace owned by the targeted organization. This includes copying images using cf ic cpi.
To achieve what you want, you'll need to pull the image to your workstation, tag it, then push it back to the registry with the new name:
bx login <account with access to both namespaces>
bx cr login
docker pull registry.ng.bluemix.net/XXXX/rhel:v5
docker tag registry.ng.bluemix.net/XXXX/rhel:v5 registry.ng.bluemix.net/YYYY/rhel:v5
docker push registry.ng.bluemix.net/YYYY/rhel:v5
# Optional: remove the images from your machine: docker rmi registry.ng.bluemix.net/XXXX/rhel:v5 registry.ng.bluemix.net/YYYY/rhel:v5
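To confirm the copy, you can list the images in your registry afterwards (assuming the same container-registry plug-in used by bx cr above):
bx cr image-list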
I'm trying to create a simple Ubuntu image on Docker within Bluemix.
I have the CLI set up (at the latest version) but keep getting a login prompt when trying to push the image.
My Dockerfile is trivial:
FROM docker.io/ubuntu:latest
MAINTAINER My Name
RUN echo "Imaged" > /tmp/image.txt
I build it with
sudo docker build -t ubuntu .
then tag it with
sudo docker tag ubuntu registry.eu-gb.bluemix.net/MYNAMESPACE/ubuntu
I log in with
cf login
Then push with
[ibmcloud#analyticsadmin docker]$ sudo docker push registry.ng.bluemix.net/MYNAMESPACE/ubuntu
The push refers to a repository [registry.ng.bluemix.net/MYNAMESPACE/ubuntu] (len: 1)
Sending image list
Please login prior to push:
Username:
I'm new to Bluemix/Docker, so user error is highly likely. Can you spot my mistake? My DOCKER* environment variables are set as appropriate for my Bluemix container service.
It seems you missed the step of logging in to the IBM Containers registry; that's why docker push is asking you for a username.
After cf login you have to run the following command as well:
$ cf ic login
This will authenticate you to the IBM Containers registry so you can push your images.
Please note that ic is a plug-in you have to install for the cf command-line interface. If you have not installed it yet, see the instructions at the following link:
https://www.ng.bluemix.net/docs/containers/container_cli_cfic.html#container_cli_cfic_install
For example, to install the plug-in on a Linux system, run the following command:
$ cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-linux_x64
A typo I see in your commands is that you tagged your image for the UK data center (eu-gb) but then try to push it to the US South one (ng); I think that's why the second command asks you to log in.
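For example, to stay in the London region, push using the tag you already created:
$ sudo docker push registry.eu-gb.bluemix.net/MYNAMESPACE/ubuntu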