"sudo docker push" fails with a seemingly bogus error message - docker

Here is my terminal log (Ubuntu 22.04.1, Docker version 20.10.22, build 3a2c30b):
paul@desktop:~/work/arc/code$ docker images
REPOSITORY        TAG      IMAGE ID       CREATED          SIZE
pauljurczak/arc   latest   4f3f22791983   35 minutes ago   880MB
sub-1             latest   4f3f22791983   35 minutes ago   880MB
neo-1             latest   3dcf55bb7458   3 days ago       891MB
arc-tut           latest   3a9aee91689b   4 days ago       230MB
paul@desktop:~/work/arc/code$ sudo docker push pauljurczak/arc
Using default tag: latest
The push refers to repository [docker.io/pauljurczak/arc]
An image does not exist locally with the tag: pauljurczak/arc
paul@desktop:~/work/arc/code$ sudo docker push pauljurczak/arc:latest
The push refers to repository [docker.io/pauljurczak/arc]
An image does not exist locally with the tag: pauljurczak/arc
pauljurczak/arc
paul@desktop:~/work/arc/code$ sudo docker push latest
Using default tag: latest
The push refers to repository [docker.io/library/latest]
An image does not exist locally with the tag: latest
I created the pauljurczak/arc:latest image as shown there. I'm trying to push it to my pauljurczak/arc repository, but the error messages don't make any sense to me. Why is pauljurczak/arc in my push command treated as a tag? Why does it say "An image does not exist locally with the tag: latest" when several images with that tag exist? What is happening here? I'm following the push command description at https://docs.docker.com/docker-hub/repos/#pushing-a-docker-container-image-to-docker-hub.
When viewing my repository with Chrome, I see this info (screenshot not included):
That's exactly what I was doing.
It seems that skipping sudo makes it work. Why is that?

It looks like you have both Docker Desktop and the standalone Docker Engine installed. This means you have two different Docker daemons running. The Docker Engine one is accessible via /var/run/docker.sock, given appropriate permissions; Docker Desktop runs a hidden Linux virtual machine (even on native Linux) and makes a similar socket file available in your home directory.
Docker Desktop uses the Docker CLI "context" feature to point docker at the socket in your home directory. That configuration is also stored in a file in your home directory.
This is where sudo makes a difference. When you run sudo docker ..., and it reads $HOME/.docker/contexts/, that home directory is now root's home directory, probably /root. That doesn't have any of the Docker Desktop related configuration in it, and so you use the default /var/run/docker.sock Docker Engine socket instead.
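You can see the two setups directly with docker context ls (a minimal sketch; the Docker Desktop context is usually named desktop-linux, though the exact name may differ on your install):
$ docker context ls        # as your user: lists default plus the Docker Desktop context
$ sudo docker context ls   # as root: typically only default, pointing at /var/run/docker.sock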
As you note, just consistently not using sudo will resolve this. (You could potentially need sudo to access the Docker Engine socket, which can all but trivially be used to root the host; the Docker Desktop VM is a little more isolated.) Uninstalling Docker Desktop and making sure to use only the standalone Docker Engine on native Linux would also resolve this.

Related

List docker images in Nexus repository from a remote machine

I want to list all the Docker images in a particular location on Nexus. Both of these technologies are new to me, which makes it difficult to figure out what to do.
I am on a Linux machine with Docker installed, and I am running this command:
docker image ls --all xx.xx.xx.xx/myorg/nodelms*
where xx.xx.xx.xx is the IP address of Nexus
Nothing is listed in the output as shown below.
REPOSITORY TAG IMAGE ID CREATED SIZE
Can someone please guide me on how to achieve what I am after?
The docker image ls command only interacts with the local docker engine, telling you about images that have previously been pulled. To query a remote registry, you'll want to hit the registry API, which is documented by the OCI distribution-spec. You could run some curl commands to implement this, though auth is typically the complicated part. Various projects exist to access this API, including go-containerregistry's crane (by Google), skopeo (by Red Hat), and regclient (by me). An example of using regclient's regctl for this looks like:
$ regctl tag ls localhost:5000/library/debian
10
10-slim
10.3
10.3-slim
10.4
10.4-slim
10.5
10.5-slim
10.6
10.6-slim
10.7
10.8
6
7
8
9
buster-slim
latest
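If you'd rather script this with curl, the same information is exposed by the standard registry API endpoints (a rough sketch; it assumes the Nexus Docker connector is reachable over HTTPS at that address and allows anonymous reads, otherwise you'll need to supply credentials or a bearer token first):
$ curl -s https://xx.xx.xx.xx/v2/_catalog                  # list repositories in the registry
$ curl -s https://xx.xx.xx.xx/v2/myorg/nodelms/tags/list   # list tags for one repository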

Where is the Docker default registry URL configured?

I am referring to this link - Docker pull.
By default, docker pull pulls images from Docker Hub (https://hub.docker.com).
I would like to know where this URL is configured in our local machine setup.
I am using Docker on Windows 10.
You cannot change the default registry domain of a Docker image. This is by design:
Your Docker installation with this "private registry defined in the config file" would be incompatible with every other Docker installation out there. Running docker pull debian needs to pull from the same place on every Docker install.
A developer using Docker on their box will use the official debian, centos, and ubuntu images. Your hacked-up Docker install would just serve your own versions of those images (if they're present), and this would break things.
You should identify your image through the full URL:
<your-private-registry>/<repository>/<image>:<tag>
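For example (a sketch; registry.example.com stands in for your private registry's hostname):
$ docker pull debian:bullseye                               # resolves to docker.io/library/debian:bullseye
$ docker pull registry.example.com/myteam/debian:bullseye   # explicitly pulls from the private registry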
The default domain docker.io (the "docker hub") is hardcoded in docker's code.
For example here:
https://github.com/docker/distribution/blob/master/reference/normalize.go
Check the splitDockerDomain function, which sets docker.io as the registry if one is not provided by the user.

GCE doesn't deploy GCR image correctly

I have followed this guide from the Google documentation in order to push a custom Docker image to Google Container Registry and then start a new GCE instance from this image. To test, I first wanted to use the public anaconda3 image from Docker Hub without any modification.
So here are the steps I have done so far after installing gcloud and docker:
gcloud auth configure-docker -> configured docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
Good! I am now able to see my image through the GCR interface, and I am also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with ssh to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be in /opt/conda). Instead, I see an /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
So, to check whether the image was pushed correctly to GCR, I deleted my local image and pulled it again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
I run the image:
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine: anaconda3 is installed correctly inside /opt/conda with all the tools needed (Pandas, NumPy, Jupyter Notebook, etc.).
I tried to find people with the same problem as me, without any success... maybe I have done something wrong in my process?
Thanks!
TL;DR: My problem is that I have pushed an anaconda3 image to Google GCR, but when I launch a VM instance with this image, I do not have anaconda on it.
It's normal that you can't find anaconda libraries installed directly on the GCE instance.
Actually, when you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it; normally it has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
then you should see the anaconda libraries, which exist only inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive bash session inside that container (see the CMD here). That's why you can find the anaconda libraries: you are inside the container!
Containerization software (Docker here) provides isolation between your host and your containers. I suggest you read the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.
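Putting that together on the instance itself (a minimal sketch; the container name shown by docker ps normally matches your VM instance name):
$ docker ps --format '{{.Names}}\t{{.Image}}'      # find the container started from your image
$ docker exec -it <container_name> ls /opt/conda   # the anaconda install lives inside that container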

All public image pulls fail with “filesystem layer verification failed for digest sha256”

I've tried this on at least 5 different versions of Linux and always hit the same wall:
I can use Docker to run hello-world successfully, but whenever I try to pull any other image (e.g. ubuntu, nginx) it pulls the pieces in parallel and then ends with the filesystem layer verification failure. Has anyone seen this problem, or can anyone offer advice?
Components:
CentOS 7.3.1611 (3.10.0-514.el7.x86_64) as a Virtual Box VM
Docker 1.10.3
xfs file system
Configuration steps (CentOS):
# yum install docker
# systemctl start docker
# systemctl status docker
# systemctl enable docker
# docker run hello-world (works)
# docker pull ubuntu (fails)
Note: Yum doesn't install Docker 1.12; if I try to install it manually, there are conflicts.
Current questions:
Are there issues with Docker in a VirtualBox guest?
Does Docker require a specific type of filesystem?
I read a comment somewhere that it fails when trying to pull multiple pieces in parallel (hello-world is a single chunk), but I can't verify that. Is there another tiny image I can try?
The only issues I've seen relate to AWS, and I'm not using AWS. Could it be a SHA key issue?
Answer to comment:
Note: I can run the hello-world example and busybox. They are both one layer. Not sure if that has anything to do with it.
sudo docker pull debian
Using default tag: latest
Trying to pull repository docker.io/library/debian ...
latest: Pulling from docker.io/library/debian
75a822cd7888: Verifying Checksum
filesystem layer verification failed for digest sha256:75a822cd7888e394c49828b951061402d31745f596b1f502758570f2d0ee79e2
filesystem layer verification failed for digest sha256:75a822cd7888e394c49828b951061402d31745f596b1f502758570f2d0ee79e2
This turned out to be a VirtualBox bug. It makes sense, since every machine I was trying this on was a VirtualBox VM (see original post). While investigating a workaround of downloading the pieces manually via wget, I found that wget was also getting errors on all machines: downloads lasting more than a few seconds threw "SSL3_GET_RECORD:decryption failed or bad record mac". Googling that showed that this is a known (as of 2014, anyway) bug in VirtualBox when the VM's network type is set to Bridged. The solution is to set the VM's network type to NAT.
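The adapter can be changed in the VirtualBox GUI (Settings > Network), or from the command line with VBoxManage (a sketch; "my-vm" is a placeholder for your VM's name, and the VM should be powered off first):
$ VBoxManage modifyvm "my-vm" --nic1 nat   # switch network adapter 1 from Bridged to NAT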

Manifest invalid error while pushing an image in bluemix

I have created an image locally on my Windows system. The image copies the hello world application WAR file to a Liberty server. I am able to build and run the image locally on my system, but I am unable to push the application to Bluemix.
This is my Dockerfile:
FROM registry.ng.bluemix.net/ibmliberty:latest
COPY HelloWorldWeb.war /opt/ibm/wlp/usr/servers/defaultServer/dropins/
ENV LICENSE accept
EXPOSE 9080 22
These commands are successful:
$ docker build -t libertytest1 c:/Microservices
$ docker tag libertytest1 registry.ng.bluemix.net/my_ibm/libertytest1
$ docker run --rm -i -t libertytest1
This command fails with the error below:
$ docker push registry.ng.bluemix.net/my_ibm/libertytest1
The push refers to a repository [registry.ng.bluemix.net/my_ibm/libertytest1]
9f24cf425f1e: Pushed
5f70bf18a086: Pushed
f5115b19b62d: Pushed
d255f44e3bce: Pushed
3eb8d309e7a4: Pushed
b9ca157916fa: Pushed
9d3eae113364: Pushed
8077bafd5c40: Pushed
86a4f2b11dd6: Pushed
58de70953d07: Pushed
3a497f2a043d: Pushed
612baa4f0341: Pushed
63f90ec2c29b: Pushed
54f3ce62fc73: Pushed
7c7cf479c321: Pushed
manifest invalid: manifest invalid
When I log in to Bluemix and check my containers, I cannot see this container. Please suggest how to resolve this error.
Note: I added a manifest.yml to my WAR file, but I still get the same error.
Most likely you are running an old version of Docker.
manifest invalid: manifest invalid
Please upgrade the Docker client (to at least v1.8.1) and try the push again; you should then be able to push the image.
In Docker 1.10, they've made a change to the way image manifests are generated.
The version of the Docker Registry that the IBM Containers Registry runs doesn't support images built with the new format, so you get the error you see when you try to push.
We're working to get pushes working again using the latest version of Docker, but for now you'll need to do one of the following:
Use the IBM Containers build service: cf ic build -t registry.ng.bluemix.net/my_ibm/libertytest1 c:/Microservices
Downgrade to Docker 1.9 on your machine and run your commands locally as above.
EDIT: the issue has now been resolved. You can push images using Docker 1.10 now.
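Either way, it helps to confirm which client version you are actually running before pushing (a minimal sketch using the standard docker version command):
$ docker version --format 'client: {{.Client.Version}}'   # prints just the client version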
For anyone using Artifactory: I ran into this same issue.
manifest invalid: manifest invalid
The fix was to update permissions for the Artifactory user account so that it had write, overwrite, and delete permissions.
I had the same problem with the latest versions of docker and cf ic.
I solved it by building the image directly on Bluemix using the cf ic build command:
cf ic build -t [Bluemix registry URL] [path to your docker file]
