I'm trying to execute docker commands inside of a Docker container (don't ask why). To do so, I start a container by running:
sudo docker run -v /var/run/docker.sock:/var/run/docker.sock -it my_docker_image
I am able to run all of the docker commands (pull, login, images, etc.), but when I try to push to my remote (GitLab) registry, access is denied. Yes, I did run docker login, and it succeeded.
When looking at the GitLab logs I see an error telling me no access token was sent with the push. After I do a docker login I see a /root/.docker/config.json with the remote URL and a string of random characters (my credentials in base64, I believe). I'm using an access token as my password because I have MFA enabled on my GitLab server.
Appreciate the help!
I ended up resolving the issue by using docker:stable as my runner image. Not quite sure what the problem was with the centos:centos7 image.
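For anyone hitting the same symptom: because only the daemon socket is shared, docker login still has to be run inside the container (credentials are stored client-side in /root/.docker/config.json). A minimal sketch of a non-interactive login and push; the registry URL, group, and token variable are placeholders, not values from this setup:
# Hedged sketch: log in with a GitLab access token piped over stdin, then push.
echo "$GITLAB_TOKEN" | docker login registry.gitlab.example.com -u my-user --password-stdin
docker push registry.gitlab.example.com/my-group/my_docker_image:latest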
I want to check whether a container on GitLab is built properly with the right content. As a first step, I'm trying to log in to the registry by running the following command:
sudo docker login -u "ci-registry-user" -p "some-token" "registry.gitlab.com/some-registry:container"
However, I run into Get "https://registry.gitlab.com/v2/": unauthorized: HTTP Basic: Access denied errors.
My question is twofold:
How do I access the hosted containers on gitlab? My goal is to access the container and run docker exec -it container_name bash && cat /some/path/to_container.py
Is there an alternative way to achieve this without logging in to the registry?
Check your GitLab PAT scope to make sure it is api or at least read_registry:
Read-only (pull) for Container Registry images if a project is private and authorization is required.
And make sure you have access to that project with that token, if thesekyi/paalup is a private project.
Avoid sudo, as it changes your execution environment from your logged-in user to root.
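As an illustration, a hedged sketch of a non-interactive token login without sudo (the token variable is a placeholder):
# Pipe the PAT over stdin instead of passing it on the command line
echo "$GITLAB_PAT" | docker login registry.gitlab.com -u ci-registry-user --password-stdin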
I am trying to pull a docker container from our private GCP container registry on a regular VM instance (e.g. ubuntu-1904) running on Google Cloud, but I am getting the following error:
user@test ~ $ sudo docker pull example.io/docker-dev/name:v01
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed those instructions, i.e., ran the gcloud auth configure-docker command, which outputs a success message.
However, when running the docker pull command again, I get the exact same error.
A couple of extra tests that might provide useful context:
If I pull from a different registry, it works (for example, docker run hello-world pulls and runs the hello-world image)
I tested the same command (docker pull example.io/docker-dev/name:v01) on my local computer (Mac) instead of the VM instance, and it works perfectly.
I have also created VM instances and enabled the option "Deploy a container image to this VM instance", providing the container address (example.io/docker-dev/name:v01), and it also works. However, I don't want to use this option because it automatically selects a "Container-Optimized" boot disk, which I prefer not to use due to its limitations.
Question:
Why can't I pull docker images from my private container registry on an Ubuntu or Debian VM, even though docker works fine pulling images from other registries (Docker Hub)?
I did this yesterday. Just run gcloud auth configure-docker, then set:
VERSION=2.0.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs, "arm64" for ARM 64.
After that you can download the docker-credential-gcr release:
wget "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
Then extract the binary and install it into your PATH:
tar xzvf ./docker-credential-gcr_linux_amd64-2.0.0.tar.gz docker-credential-gcr && chmod +x docker-credential-gcr && sudo mv docker-credential-gcr /usr/bin/
And finally run
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
Now you will be able to pull your image :)
For me, on a Container-Optimized OS instance, it helped to just run:
docker-credential-gcr configure-docker
https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#starting_a_docker_container_via_cloud-config
Note the default policy for compute instances:
VM instances, including those in Google Kubernetes Engine clusters, must have the correct storage access scopes configured to push or pull images. By default, VMs can pull images when Container Registry is in the same project.
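If you want to check which scopes a given instance actually has, a hedged sketch (the instance name and zone are placeholders):
gcloud compute instances describe my-vm --zone=us-central1-a --format="yaml(serviceAccounts)"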
If you run gcloud auth configure-docker, the auth information is saved under your personal home directory.
When you then run sudo docker pull example.io/docker-dev/name:v01, it looks for auth info under root's home directory and doesn't find anything there.
You should run both commands either with sudo or without it.
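For example, a minimal sketch of the two consistent variants (image name as in the question):
# Variant 1: both commands as root
sudo gcloud auth configure-docker
sudo docker pull example.io/docker-dev/name:v01
# Variant 2: both as your user, after joining the docker group (takes effect on next login)
sudo usermod -aG docker "$USER"
gcloud auth configure-docker
docker pull example.io/docker-dev/name:v01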
Hi, I'm trying docker push:
[docker-simple-httpserver]# docker push myregistry/simplehttpserver:latest
The push refers to a repository [myregistry/simplehttpserver] (len: 1)
Sending image list
FATA[0000] Error: Status 403 trying to push repository simplehttpserver: "{\"error\": \"Unauthorized updating repository images\"}"
Is there a way for me to specify the username and password on the docker push command?
I would think they keep passwords off the command line for security reasons.
The way to do it is to log in first, then push.
https://docs.docker.com/mac/step_six/
$ docker login --username=maryatdocker --email=mary@docker.com
Password:
WARNING: login credentials saved in C:\Users\sven\.docker\config.json
Login Succeeded
Then push
$ docker push maryatdocker/docker-whale
The push refers to a repository [maryatdocker/docker-whale] (len: 1)
7d9495d03763: Image already exists
c81071adeeb5: Image successfully pushed
Typically you would specify your password using the interactive docker login then do a docker push.
For a non-interactive login, you can use the -u and -p flags:
docker login -u="${DOCKER_USERNAME}" -p="${DOCKER_PASSWORD}"
The Travis CI docs for docker builds gives an example of how to automate a docker login.
See docker login for more details.
As far as I know, you have to use docker login. The credentials will be stored in /home/user/.docker/config.json for subsequent docker pushes.
If you are after automation, the expect command will be interesting for you.
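On reasonably recent Docker CLIs there is also a built-in non-interactive path; a minimal sketch, assuming the credentials live in environment variables:
# Read the password from stdin instead of an interactive prompt
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin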
In case one needs to log in to a custom docker registry, use the following:
docker login -u ${USERNAME} -p ${PASSWORD} ${DOCKER_REPOSITORY}
The accepted answer works perfectly fine! However, if you are trying to access a private registry, you may want to make the following change:
docker login -u ${user_name} ${private_registry_domain}
Provide the password when prompted for it.
docker login --username=YOUR_DOCKERHUB_USERNAME
In this case your Docker Hub password will be an access token.
Refer: https://docs.docker.com/docker-hub/access-tokens/#create-an-access-token
If you tag the image with an IP address, log in to the docker registry with that IP; if you tag it with a domain name, log in with the domain name. Docker doesn't like mixing an IP and a domain name and fails otherwise.
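To make that concrete, a hedged sketch with a placeholder registry address:
# Tag, log in, and push using the same form of the registry address
docker tag simplehttpserver 10.0.0.5:5000/simplehttpserver:latest
docker login 10.0.0.5:5000
docker push 10.0.0.5:5000/simplehttpserver:latest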
Not a direct answer to the question, but you can first log in and then do a docker push.
docker login -u nice-username
After that it will prompt for a password. After a successful login you can do docker push.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
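To silence that warning, a minimal sketch of pointing Docker at a credential helper in ~/.docker/config.json; this assumes the matching docker-credential-pass helper binary is installed, and note it replaces any existing top-level settings in that file:
{
  "credsStore": "pass"
}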
use "sudo docker login" not "docker login" as one uses the root account and the other uses your personal.
Personally, I create the repo on Docker's website prior to the upload.
I'm trying to create a simple ubuntu image on docker within Bluemix.
I have the CLI set up (at the latest version) but keep getting a login prompt when trying to push the image.
My dockerfile is trivial:
FROM docker.io/ubuntu:latest
MAINTAINER My Name
RUN echo "Imaged" > /tmp/image.txt
I build it with
sudo docker build -t ubuntu .
then tag it with
sudo docker tag ubuntu registry.eu-gb.bluemix.net/MYNAMESPACE/ubuntu
I log in with
cf login
Then push with
[ibmcloud#analyticsadmin docker]$ sudo docker push registry.ng.bluemix.net/MYNAMESPACE/ubuntu
The push refers to a repository [registry.ng.bluemix.net/MYNAMESPACE/ubuntu] (len: 1)
Sending image list
Please login prior to push:
Username:
I'm new to Bluemix/docker, so user error is highly likely. Can you spot my error? My DOCKER* environment variables are set as appropriate for my Bluemix container service.
It seems you missed the step of logging in to the IBM Containers registry; that's why docker push is asking you for the username.
After cf login you have to run the following command as well:
$ cf ic login
This will authenticate you to the IBM Containers registry so you can push your images.
Please note that ic is a plugin you have to install for the cf command-line interface. If you have not installed it yet, please see the instructions at the following link:
https://www.ng.bluemix.net/docs/containers/container_cli_cfic.html#container_cli_cfic_install
For example to install plugin in Linux system run the following command:
$ cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-linux_x64
Also, a typo I see in your commands: you tag your container for the UK data center (eu-gb) but then try to push it to the US South one (ng); that's why I think the push command asks you to log in.
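For example, a minimal sketch keeping the region consistent (UK shown; substitute ng for US South):
sudo docker tag ubuntu registry.eu-gb.bluemix.net/MYNAMESPACE/ubuntu
sudo docker push registry.eu-gb.bluemix.net/MYNAMESPACE/ubuntu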
I'm trying to run something called Traildash via its docker container on a VM via chef (once I get it running I'll move it to an AWS instance). So I've installed docker onto the VM, and I tell chef to run
docker run -i -d -p 80:80 \
appliedtrust/traildash
or even
docker pull appliedtrust/traildash
on the VM and all it does is:
Unable to find image 'appliedtrust/traildash' locally
Pulling repository appliedtrust/traildash
2015/03/16 12:40:38 Get https://index.docker.io/v1/repositories/appliedtrust/traildash/images: x509: certificate is valid for ssl7302.cloudflare.com, *.archeagemall.com, *.astrubbank.com, *.billhr2847.com, *.dallasjuniorforum.org, *.goudportal.nl, *.habbinfo.info, *.hoistandcrane.com, *.jlfresno.org, *.jlknoxville.org, *.jlsantabarbara.org, *.jlwichita.org, *.jrleagueabilene.com, *.okaygoods.com, *.pbajf.org, *.stansberryonline.com, *.unfairmovie.com, *.usepnd.com, *.vaccineinjuryhelpcenter.com, archeagemall.com, astrubbank.com, billhr2847.com, dallasjuniorforum.org, goudportal.nl, habbinfo.info, hoistandcrane.com, jlfresno.org, jlknoxville.org, jlsantabarbara.org, jlwichita.org, jrleagueabilene.com, okaygoods.com, pbajf.org, stansberryonline.com, unfairmovie.com, usepnd.com, vaccineinjuryhelpcenter.com, not index.docker.io
and then nothing; the container won't actually start, nor do I see any files pulled (unless docker pulls the files into a different directory?).
What do I do to get this running?
You're doing everything right. But if you're running it outside of EC2 (where credentials would normally come from an IAM role), you have to explicitly pass AWS creds and, optionally, other parameters. For more information take a look at https://github.com/AppliedTrust/traildash#quickstart
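As an illustration, a hedged sketch of passing credentials via the standard AWS environment variables; any Traildash-specific settings (queue URL, etc.) come from the README linked above, and the values below are placeholders:
# Pass AWS credentials explicitly to the container
docker run -i -d -p 80:80 \
  -e AWS_ACCESS_KEY_ID="your-key-id" \
  -e AWS_SECRET_ACCESS_KEY="your-secret-key" \
  appliedtrust/traildash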