I am running a Docker registry in a container, started as-is from the 'docker-registry' image published on Docker Hub. The container runs on a machine in my local network. From my laptop I am able to push an image to that registry without any problems. When I subsequently try to pull that same image on a different machine in my network, I get an error response:
{"code":"UNAUTHORIZED","message":"authentication required", ...}
This raises the questions: Is this image configured to require authentication? Why does it not require authentication when I push/pull from my laptop?
One possible reason could be that the target machine where you are trying to run your Docker image does not have root/sudo access. This is a common issue with Docker: by default, talking to the daemon requires root privileges. Make sure the required permissions are in place when you run your docker commands (try prefixing them with sudo).
I can't be very sure of the reason; more info is needed about the machine where you are running Docker.
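For example, a minimal sketch of that suggestion (the registry address and image name are placeholders for your own):
# Try the pull with elevated privileges:
sudo docker pull <registry-host>:5000/<image>
# Or add your user to the docker group so sudo isn't needed
# (log out and back in for the change to take effect):
sudo usermod -aG docker $USER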
After a recent update to Docker I find myself unable to create any new containers in Docker. I've already rebooted my operating system and Docker itself. I've tried specifying the tags to specific versions any way I could. I can manually pull the images I want with Docker. But it refuses to run or create any new containers. Already existing containers start up just fine. The full error message is below.
Unable to find image 'all:latest' locally
Error response from daemon: pull access denied for all, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
These aren't from private repositories. These are all public projects from Docker Hub. Any suggestions?
This is correct: you're trying to run using an image called all:latest, but if you look on the Docker registry, that image doesn't exist.
https://hub.docker.com/_/all
Are you sure you're not trying to build from a private repository?
I found the issue. I started taking my Docker command apart and found there was an environment variable that had the word "all" in it. Docker was completely ignoring whatever I had specified for the image and was using the environment variable as the image instead. As soon as I removed this environment variable, Docker started working correctly again.
The variable in question is -e NVIDIA_VISIBLE_DEVICES: "all" \, meant to make sure the Plex container can see that an NVIDIA GPU is available. I was using the wrong guide and found out it's supposed to be -e NVIDIA_VISIBLE_DEVICES=all \ instead.
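A sketch of why the colon form breaks (the image name is a placeholder, not from the original post): the shell splits the flag and its value into two arguments, so "all" lands in the image-name position.
# Broken: -e consumes "NVIDIA_VISIBLE_DEVICES:", then "all" is parsed as the
# image name (hence "pull access denied for all"), and the real image name
# becomes the command to run inside the container:
docker run -e NVIDIA_VISIBLE_DEVICES: "all" <plex-image>
# Fixed: KEY=value keeps the whole assignment as a single argument to -e:
docker run -e NVIDIA_VISIBLE_DEVICES=all <plex-image>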
I'm new to Docker. Most of the tutorials on Docker cover the same thing. I'm afraid I'm just ending up with piles of questions and no real answers. I've come here after my fair share of Googling; kindly help me out with these basic questions.
When we install Docker, where does it get installed? Is it on our local computer, or does it happen in the cloud?
Where do containers get pulled into? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04.)
When we pull a Docker image or clone a repository from Git, where does this data get stored?
Looks like you are confused after reading too many documents. Let me try to put this in simple words; hope this will help.
When we install Docker, where does it get installed? Is it on our local computer, or does it happen in the cloud?
We install Docker on a VM, be it your on-prem VM or one in the cloud. You can install Docker on your laptop as well.
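For example, on Ubuntu (which you mentioned), a minimal local install could look like this:
sudo apt-get update
sudo apt-get install docker.io   # installs the Docker engine on this machine
docker --version                 # verify the install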
Where do containers get pulled into? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04.)
This question comes down to terminology. We don't pull a container; we pull an image and run a container from that image, as in the example below.
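A minimal illustration (nginx is just an arbitrary public image used for the example):
docker pull nginx                 # downloads the image to your machine
docker run -d --name web nginx    # starts a container from that image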
Quick terminology summary
Container -> a running instance created from an image.
Dockerfile -> where you write your commands; the infrastructure blueprint from which an image is built.
Image -> derived from a Dockerfile; a template that packages an application's code, configuration, and dependencies. You use an image to create and run containers.
Yes, you can get a shell inside a running container. Use the command below:
docker exec -it <container-id> /bin/bash
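(If the image doesn't include bash, /bin/sh is a common fallback.)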
When we pull a Docker image or clone a repository from Git, where does this data get stored?
You can pull open-source images from Docker Hub.
When you clone a Git project that is dockerized, look for a Dockerfile in that project and build your own image from it:
docker build -t <yourimagename:tag> .
When you build or pull an image, it gets stored locally. To list the images on your machine, use:
docker images
The Docker daemon gets installed on your local machine, and everything you do with the Docker CLI is executed on your local machine and its containers.
(Not sure about the first part of your question.) You can easily access your Docker containers with docker exec -it <container name> /bin/bash; for that you will need the container to be running. Check running containers with docker ps.
(Again, I don't entirely understand your question.) The images that you pull are stored on your local machine as well. You can see all the images present on your machine with docker images.
Let me know if this was helpful and if you need any further information.
I'd like to deploy a container that I've built locally on a DigitalOcean droplet. I've been following these instructions. The problem is that by running:
eval $(docker-machine env DROPLET_NAME)
Docker sets the environment variables to be the remote machine, effectively changing the environment to running Docker on the remote machine. This is expected. However, say I have a local image I've built named rb612/docker-img:latest that I haven't pushed up to a remote. I want to run this in the remote machine context.
If I run:
docker run -d -p 80:8000 rb612/docker-img:latest
Then I get Unable to find image 'rb612/docker-img:latest' locally. If my understanding is correct, this is because the command is no longer running in the context of my local machine. Opening a new shell and running the same command works fine without the remote environment variables set.
So I'm wondering if there's a way I can run this local image on my remote machine. I tried using the -w flag to pass in the local path but I got the same error. Deploying instead with a remote docker image works fine.
So I'm wondering if there's a way I can run this local image on my remote machine.
Sure.
You have a couple of options.
Using docker image save/load
You can use docker image save to save the image to a file. Do this either before you run your eval statement, or do it in a different terminal window that doesn't have the remote Docker environment configured:
docker image save rb612/docker-img:latest > docker-img.tar
After running your eval $(...) command, use docker image load to send the image to your remote Docker:
docker image load < docker-img.tar
Now the image is available on your remote Docker host and you can docker run it normally.
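If you'd rather skip the intermediate file, the two steps can be piped together, running the load in a subshell that targets the remote daemon (a sketch, assuming docker-machine is set up as in your question):
docker image save rb612/docker-img:latest | (eval $(docker-machine env DROPLET_NAME) && docker image load)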
Set up a remote registry
You can set up your own remote registry, in which case you can simply docker push to that registry from your local machine and docker pull from the remote machine. This is generally the best long-term solution, but the initial set up (especially securing things properly with SSL) is a little bit more involved. Details are in the documentation.
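A minimal sketch of the quick-test version, using the official registry image (the droplet IP is a placeholder, and without TLS you'll also need to list the address under insecure-registries in /etc/docker/daemon.json on the client):
# On the droplet: run the official registry image (unencrypted; quick tests only)
docker run -d -p 5000:5000 --name registry registry:2
# On your local machine: tag the image for that registry and push it
docker tag rb612/docker-img:latest <droplet-ip>:5000/docker-img:latest
docker push <droplet-ip>:5000/docker-img:latest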
I've tried this on at least 5 different versions of Linux and always hit the same wall:
I can use Docker to run hello-world successfully. But whenever I try to pull any other image (e.g. ubuntu, nginx), it pulls the pieces in parallel and then fails with a filesystem layer verification error. Has anyone seen this problem or can offer advice?
Components:
CentOS 7.3.1611 (3.10.0-514.el7.x86_64) as a Virtual Box VM
Docker 1.10.3
xfs file system
Configuration steps (CentOS):
# yum install docker
# systemctl start docker
# systemctl status docker
# systemctl enable docker
# docker run hello-world (works)
# docker pull ubuntu (fails)
Note: Yum doesn't install Docker 1.12, and if I try to install it manually there are conflicts.
Current questions:
Are there issues with Docker in a VirtualBox guest?
Does Docker require a specific type of filesystem?
I read a comment somewhere that pulls fail when fetching multiple pieces in parallel (hello-world is a single chunk), but I can't verify that. Is there another tiny image I can try?
The only issues I've seen relate to AWS, and I'm not using AWS. Could it be a SHA key issue?
Answer to comment:
Note: I can run the hello-world example and busybox. They are both one layer; not sure if that has anything to do with it.
sudo docker pull debian
Using default tag: latest
Trying to pull repository docker.io/library/debian ...
latest: Pulling from docker.io/library/debian
75a822cd7888: Verifying Checksum
filesystem layer verification failed for digest sha256:75a822cd7888e394c49828b951061402d31745f596b1f502758570f2d0ee79e2
filesystem layer verification failed for digest sha256:75a822cd7888e394c49828b951061402d31745f596b1f502758570f2d0ee79e2
This turned out to be a VirtualBox bug. That makes sense, since every machine I was trying this on was a VirtualBox VM (see original post). While investigating a work-around to download the pieces manually via wget, wget was getting errors on all the machines: downloads lasting more than a few seconds were throwing "SSL3_GET_RECORD:decryption failed or bad record mac". Googling that showed this is a known (as of 2014, anyway) bug in VirtualBox when the VM's network type is set to Bridged. The solution is to set the network type in the VM to NAT.
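If you prefer to make that change from the command line, something like this should work with the VM powered off (the VM name and adapter number are placeholders for your own setup):
VBoxManage modifyvm "<vm-name>" --nic1 nat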
I am learning Docker, just getting my feet wet... I begin by begging your pardon, since I will probably be using terminology badly :-(
I have successfully built my first container and run it locally.
The container image is a node.js + express web app.
Locally I run my image this way:
docker run -p 80:3000 myname/myimage
If I point my browser to the local server IP
http://192.168.1.123:80/
I access my app in all of its glory.
Then, I push it to docker hub with this command:
docker push myname/myimage
So far so good. The question is: am I supposed to be able to run my app from Docker Cloud already, or should I push it to AWS, for example?
By executing docker push myname/myimage you only sent your image to Docker Hub.
That image can then be run to create a container; but as is, it is not running anywhere.
You will effectively have to run it on some machine or service in order to access your app, as in the sketch below.
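A minimal sketch on any Docker-capable host (a cloud VM, say), reusing the port mapping from your local run:
docker pull myname/myimage
docker run -d -p 80:3000 myname/myimage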
Concerning the terminology:
you build an image, not a container
you push (or pull) an image to (from) docker-hub
you run a container from an image