Docker Login: Error when manually logging into private Registry

I can't manually log into my private GitLab Docker Registry from CLI:
# docker login -u "${DOCKER_USER}" -p "${DOCKER_PASS}" "${DOCKER_URL}"
error getting credentials - err: exit status 1, out: `Cannot autolaunch D-Bus without X11 $DISPLAY`
System info:
Ubuntu 18.04
docker-ce 18.03.1~ce~3-0~ubuntu (from official repo, without install script)
There is no ~/.docker/config.json for any users and I'm executing the docker login as root.
On Google, I just find recommendations to export DISPLAY... Can docker only log in to remote registries in a GUI environment?
Exporting DISPLAY=0:0 yields:
error getting credentials - err: exit status 1, out: `Failed to execute child process “dbus-launch” (No such file or directory)`
Am I missing some dependency? Docker runs fine otherwise, but login doesn't work. I know there are backends to store credentials, but I don't want to store credentials. I'm just trying to authenticate against my registry to pull an image; doesn't that work in Docker out of the box?

The docker-compose package unnecessarily depends on the broken golang-github-docker-docker-credential-helpers package. Removing the executable fixes this:
rm /usr/bin/docker-credential-secretservice
Note: This is a workaround and will need to be repeated each time the package is updated.
This affects Ubuntu 18.04 (and possibly other releases) and some Debian releases.
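Put together, the workaround looks like this (a sketch; the config.json step only applies if the file exists and names the helper):
# Remove the broken credential helper (repeat after each package update)
sudo rm /usr/bin/docker-credential-secretservice
# If a ~/.docker/config.json exists and contains "credsStore": "secretservice",
# delete that key as well, then retry the login:
docker login -u "${DOCKER_USER}" -p "${DOCKER_PASS}" "${DOCKER_URL}"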

Related

How do I grant the paketo builder access permissions to the docker socket when building from a docker image?

When using buildpacks to build my spring boot application on Fedora I get the following error during the execution of the spring-boot-plugin:build-image goal:
[INFO] [creator] ERROR: initializing analyzer: getting previous image: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info": dial unix /var/run/docker.sock: connect: permission denied
After digging into the goal and buildpacks, I found the following command in the buildpacks.io docs (by selecting "Linux" and "Container"):
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $PWD:/workspace -w /workspace \
buildpacksio/pack build <my-image> --builder <builder-image>
AFAICT this command should be equivalent to what happens inside Maven, and it exhibits the same error.
My previous assumption was that the user in the buildpacksio/pack image doesn't have access permissions to my docker socket. (The socket had 660 permissions and root:docker ownership.)
UPDATE: Even after updating to 666 permissions, the issue still persists.
I don't really want to allow anyone to interact with the docker socket so setting it to 666 seems unwise. Is this the only option or can I also add the user in the container to the docker group?
The solution was that the Fedora docker package is no longer the recommended, up-to-date way to install Docker. See the official Docker documentation.
Both packages report the same version number, but their build hashes differ.
While I could not fully diagnose the difference between the two, I can report that it works with docker-ce and doesn't with docker.
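For reference, the switch roughly follows the official Docker docs for Fedora (a sketch; package names and the repo URL are taken from those docs and may drift between releases):
# Remove the distro-packaged Docker and install docker-ce from Docker's own repo
sudo dnf remove docker docker-common docker-engine
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker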

GCP: Unable to pull docker images from our GCP private container registry on ubuntu/debian VM instances

I am trying to pull a docker container from our private GCP container registry on a regular VM instance (i.e. ubuntu-1904) running on Google Cloud, but I am getting the following error:
user@test ~ $ sudo docker pull example.io/docker-dev/name:v01
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed those instructions, i.e., ran the gcloud auth configure-docker command, which outputs a success message.
However, when running the docker pull command again, I get the exact same error.
A couple of extra tests that might help to provide feedback:
If I pull from a different registry, it works (for example, docker run hello-world pulls and runs the hello-world image)
I tested the same command (docker pull example.io/docker-dev/name:v01) on my local computer (Mac) instead of the VM instance, and it works perfectly.
I have also created VM instances with the option "Deploy a container image to this VM instance" enabled, providing the container address (example.io/docker-dev/name:v01), and that also works. However, I don't want to use this option because it automatically selects a "Container-Optimized" boot disk, which I prefer not to use due to its limitations.
Question:
Why can't I pull docker images from my private container registry on an Ubuntu or Debian VM, even though docker works perfectly well pulling images from other repositories (Docker Hub)?
I did this yesterday. Just run gcloud auth configure-docker, then set:
VERSION=2.0.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs, "arm64" for ARM 64.
After that, download docker-credential-gcr:
wget "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
Then run
tar xzf "docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz" && sudo mv docker-credential-gcr /usr/bin/docker-credential-gcloud && sudo chmod +x /usr/bin/docker-credential-gcloud
And finally run
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
Now you will be able to pull your image :)
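To sanity-check the installed helper before pulling, you can speak the credential-helper protocol directly: the helper takes an action (get) as its argument, reads a registry URL on stdin, and prints a JSON credential on stdout.
echo "https://gcr.io" | docker-credential-gcloud get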
For me, on a Container-Optimized OS instance, it helped to just run:
docker-credential-gcr configure-docker
https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#starting_a_docker_container_via_cloud-config
Note the default policy for compute instances:
"VM instances, including those in Google Kubernetes Engine clusters, must have the correct storage access scopes configured to push or pull images. By default, VMs can pull images when Container Registry is in the same project."
If you run gcloud auth configure-docker, the auth information is saved under your personal home directory.
When you then run sudo docker pull example.io/docker-dev/name:v01, it looks for auth info under root's home directory and doesn't find anything there.
Run both commands with sudo, or both without.
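In other words, pick one of these two consistent variants (registry path and image name taken from the question):
# Either do everything as root...
sudo gcloud auth configure-docker
sudo docker pull example.io/docker-dev/name:v01
# ...or everything as your own user (who must be allowed to use the docker socket)
gcloud auth configure-docker
docker pull example.io/docker-dev/name:v01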

Set up Docker Build Step in Bamboo

I am currently despairing over my attempts to set up a Docker build step in Atlassian Bamboo.
For starters, I just want to create a build configuration that runs the hello-world image as a proof of concept. So far, I have failed.
I have tried following the steps on https://confluence.atlassian.com/bamboo0609/using-bamboo/jobs-and-tasks/configuring-tasks/configuring-the-docker-task-in-bamboo, but to no avail.
My setup is this:
We have Bamboo installed on an Ubuntu server. I also installed Docker on that server and added the bamboo user to the docker usergroup and restarted the server to make sure the permission change takes effect. At this point, docker run hello-world works when I run it directly on the server. I can also confirm that this is the server that Bamboo runs on since Bamboo went offline whenever I restarted the server that I installed Docker on.
Then, I added the docker capability to the server (the agent is the default agent, so it inherits this capability from the server). As the docker path, I have tried various things, none of which worked (i.e., the following errors remained the same for each of these):
/snap/docker (the first folder that I found on a manual search)
/usr/bin/docker (the recommended path, though on inspecting the Ubuntu server I quickly found out that no docker binary exists under /usr/bin on the Ubuntu server)
/var/snap/docker/common/var-lib-docker (the path that Docker returns as its Root Directory when I run docker info on the Ubuntu server)
/var/snap/docker (for good measure)
Now, for the runner, I have tried two different approaches.
First, I tried using a Docker runner with the following settings:
Command: Run a Docker container
Docker image: hello-world
This returns the following error message:
Error occurred while running Task 'Hello World Docker Test(5)' of type com.atlassian.bamboo.plugins.bamboo-docker-plugin:task.docker.cli.com.atlassian.bamboo.task.TaskException: Failed to execute task
...
Caused by: com.atlassian.bamboo.docker.DockerException: Error running Docker run command
...
Caused by: com.atlassian.utils.process.ProcessException: Error executing /snap/docker run --volume /var/atlassian/application-data/bamboo/xml-data/build-dir/CAM-DOC-JOB1:/data --workdir /data --rm hello-world
The second was just to run a shell runner for the command docker run hello-world, which returned the following error:
docker: not found
At this point, I feel like I'm out of ideas. Everything points towards Bamboo for some reason not finding Docker on the server, even though I can clearly confirm that it is there. I have tried various different approaches of telling Bamboo where to find Docker, but none of them have worked.
It's obvious that I'm doing something wrong, but I can't figure out what. Or maybe the problem lies in an entirely different direction altogether? Anyway, I would be grateful for any insight shared on this matter.
Okay, I found out what caused this strange behaviour.
The problem was that I installed Docker using sudo snap install docker, and apparently installing docker via snap causes problems with Bamboo.
So I got it to work using these simple steps:
[Server] Uninstalled Snap Docker using sudo snap remove docker
[Server] Reinstalled Docker using sudo apt install docker.io
[Bamboo] Changed the path to Docker in the Server Capabilities to /usr/bin/docker
After that, the hello-world image build succeeded and printed the expected output to the log.
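Condensed into commands, the fix looks roughly like this (assuming the bamboo user mentioned in the setup above):
# On the Bamboo server
sudo snap remove docker
sudo apt update && sudo apt install -y docker.io
sudo usermod -aG docker bamboo   # let the Bamboo user reach the Docker daemon
which docker                     # prints /usr/bin/docker -- the path for Server Capabilities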

Sharing docker credentials between Windows and WSL

Environment
Windows version and build: Version 2004 (OS Build 19037.1)
Docker Edge version 2.1.6.1
Ubuntu 18.04 on WSL 2
Current setup and status:
docker installed on windows
created aliases for docker, docker-compose, docker-credential-desktop, etc.
Commands such as docker build, docker ps, docker pull, and docker images all work fine. Now I would like to push an image, and so of course I have to log in first.
Problem: logging into docker hub.
I run docker login in the WSL terminal
I put in my username and password
I get the following error
Error saving credentials: error storing credentials - err: exec: "docker-credential-desktop": executable file not found in %PATH%, out: ``
What I've tried so far
docker login from PowerShell works fine. So I created a symbolic link between /mnt/c/Users/<winusername>/.docker and /home/<wslusername>/.docker. The equivalent works fine for .aws, but for .docker it was not able to share or even acknowledge the credentials, so it asked again for the username and password and threw the same error as above.
This worked for me:
sudo ln -s /mnt/c/Program\ Files/Docker/Docker/resources/bin/docker-credential-desktop.exe /usr/bin/docker-credential-desktop.exe
This links the executable from the Windows path to the Linux path; alternatively, you can add the Windows PATH to your Linux PATH.
Refer: https://github.com/docker/for-win/issues/6652
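The PATH alternative would look something like this in ~/.bashrc (assuming the default Docker Desktop install location from the symlink above):
# Make Docker Desktop's helper binaries, including docker-credential-desktop.exe,
# visible inside WSL
export PATH="$PATH:/mnt/c/Program Files/Docker/Docker/resources/bin"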
Update Feb 2021
This is all much simpler now. If you are using WSL 2 on a recent release of Windows, just install Docker on the Windows side and ensure two configurations:
In General: use the WSL 2 based engine
In Resources/WSL Integration: enable integration with your default WSL distro
You will have to restart Docker. Once that is done, everything works transparently.
Below here can be ignored
It turns out that the integration between Docker and WSL is better than I thought, though it could have been better documented. I was going to change tack and try to install Docker inside WSL. So I got rid of all the aliases and restarted my session. Lo and behold, when I ran docker there was still something running.
This is because the Edge version of Docker creates the appropriate symbolic links, and now I can log into Docker Hub without any problem.
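A quick way to verify the integration from the WSL side (just a sanity check, not part of the original answer):
which docker   # should resolve to the binary Docker Desktop injects into the distro
docker info    # confirms the WSL CLI can reach the Windows-side daemon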

Access Docker Container from project registry

So I have my docker image uploaded to my project's registry. I can navigate to https://console.cloud.google.com/gcr/images/ and I see my image listed there.
Now I want to run a VM in this project and use docker on it to run this very image.
This is the command within my VM:
sudo /usr/bin/docker run eu.gcr.io/my-project-name/example001
The response is:
Unable to find image 'eu.gcr.io/.../example001:latest' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
I can list my images if I define "eu.gcr.io/..." as my project path. However, the machine seems to run on ".gcr.io" and therefore cannot access my image? How would I fix this, and why is my image on "eu.gcr.io" while the machine is on ".gcr.io"? I can't find a method to change this (either to move the image to gcr.io or to move the machine to eu.gcr.io). However, I'm not sure if this is the issue.
Maybe it is an authentication issue with docker?
The VM basically cannot be "on .gcr.io"; it can run in a non-European region/zone, but that shouldn't be a problem.
From the GCP access-control point of view, the registry is just a bucket.
So I believe the first thing you need to check is that the VM has access to Google Cloud Storage.
With gcloud:
gcloud compute instances describe <instance-name>
check whether the VM has the scope to read from devstorage:
serviceAccounts:
- email: ...-compute@developer.gserviceaccount.com
scopes:
- https://www.googleapis.com/auth/devstorage.read_only
- ...
This scope should be in place to read from registry:
https://www.googleapis.com/auth/devstorage.read_only
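If the scope is missing, one way to add it is to stop the VM and reattach the scopes (a sketch; the instance name is an example):
gcloud compute instances stop my-vm
gcloud compute instances set-service-account my-vm --scopes=https://www.googleapis.com/auth/devstorage.read_only
gcloud compute instances start my-vm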
If you don't have such a scope on the VM, but have gcloud configured there, you can instead use gcloud as a credential helper:
gcloud auth configure-docker
as stated in the doc you referred to: https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper
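For reference, gcloud auth configure-docker registers gcloud as the helper for the GCR hosts in ~/.docker/config.json, roughly like this:
cat ~/.docker/config.json
# {
#   "credHelpers": {
#     "gcr.io": "gcloud",
#     "eu.gcr.io": "gcloud",
#     "us.gcr.io": "gcloud",
#     "asia.gcr.io": "gcloud"
#   }
# }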
The answer is found here:
https://serverfault.com/questions/900026/gcp-no-access-to-container-registry-from-compute-engine
It is the docker command that needs the authorization; the hostname (eu.gcr.io) is not the issue here. I used the gcloud docker -- pull ... command to get the image from the repository to use within my VM.
After you create a Linux VM on GCP and SSH into it, you have to install the Google Cloud SDK, either using the install scripts or manually.
If you are running Ubuntu, follow the documentation here; if you are installing on Red Hat or CentOS, follow the documentation here. After finishing the Google SDK installation, run gcloud init to initialize the SDK and configure your profile. After that, install Docker:
sudo apt-get -y install docker-ce
sudo systemctl start docker
You need to have access to the registries which you will be pushing to and pulling from.
Configure Docker to use gcloud as a credential helper by running the command:
gcloud auth configure-docker
After that, you can pull or push images to your registry using gcloud with docker, as shown below:
Push: gcloud docker -- push gcr.io/google-containers/example-image:latest
Pull: gcloud docker -- pull gcr.io/google-containers/example-image:latest
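Once gcloud is configured as the credential helper (gcloud auth configure-docker), the plain docker commands work as well, without the gcloud wrapper:
docker pull gcr.io/google-containers/example-image:latest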
