I am a newbie to Docker, getting hands-on with Docker Toolbox on my PC, which is behind a corporate proxy. With the help of some SO answers I was able to solve the X509 unauthorized-certificate and proxy issues. I was then able to run docker search elastic, which lists matching images on Docker Hub. But when I try to pull the official image with docker pull elastic, it throws the error below. I have tried a couple of solutions from Google, such as docker login, but nothing has worked so far. Any solution would be much appreciated.
raj@localpc MINGW64 ~/DockerWS/app1
$ docker search elastic
NAME                              DESCRIPTION                                     STARS  OFFICIAL  AUTOMATED
elasticsearch                     Elasticsearch is a powerful open source se...  2667   [OK]
kibana                            Kibana gives shape to any kind of data s...    1081   [OK]
itzg/elasticsearch                Provides an easily configurable Elasticsea...  63               [OK]
nshou/elasticsearch-kibana        Elasticsearch-6.1.2 Kibana-6.1.2                48               [OK]
kubernetes/fluentd-elasticsearch  An image that ingests Docker container log...  21

raj@localpc MINGW64 ~/DockerWS/app1
$ docker pull elastic
Using default tag: latest
Error response from daemon: unauthorized: authentication required
Thanks.
Raj
The reason is that there is no image available with exactly the name elastic. The search results show, for instance, elasticsearch and nshou/elasticsearch-kibana, and it is one of these names that has to be used in the docker pull command.
docker pull elasticsearch is the way to use the official Elasticsearch image.
Instead of searching in the terminal, you could also browse Docker Hub directly and copy and paste the docker pull command shown in the right-hand section, e.g. https://hub.docker.com/_/elasticsearch/
The Docker Hub Elasticsearch image has been deprecated. You can pull from Elastic's official registry instead:
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.1.2
Related
I have installed Docker on a RedHat/CentOS server. The Docker services are running fine, but how can I install or build a Cassandra/Scylla image on Docker? My server is not connected to the internet, so while building or running a Cassandra/Scylla image I get the error "Unable to find image" with a timeout exception.
Can anyone help with building a Cassandra/Scylla Docker image without internet access?
Thanks.
Once you download the image, though, it is very simple to take it offline and load it into an offline system: use docker save to export the image to a file and docker load to import the image back into Docker.
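A minimal sketch of that workflow, using the scylladb/scylla image mentioned below and an archive name chosen here for illustration:

# On a machine with internet access: pull the image and export it to a tar archive
docker pull scylladb/scylla
docker save -o scylla.tar scylladb/scylla

# Transfer scylla.tar to the offline server (USB drive, scp, ...), then import it:
docker load -i scylla.tar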
The problem does not seem to be related to Apache Cassandra or Scylla.
You do need access to Docker Hub to download the relevant image for the first time, for example when running docker run hello-world.
Once you solve that, you can move on to running Apache Cassandra or Scylla, for example with
docker run --name some-scylla -d scylladb/scylla
So I have my Docker image uploaded to my project's registry. I can navigate to https://console.cloud.google.com/gcr/images/ and I see my image listed there.
Now I want to run a VM in this project and use Docker on it to run this very image.
This is the command within my VM:
sudo /usr/bin/docker run eu.gcr.io/my-project-name/example001
The response is:
Unable to find image 'eu.gcr.io/.../example001:latest' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
Please see the image attached. I can list my images if I specify "eu.gcr.io/..." as my project path. However, the machine seems to run on ".gcr.io" and is therefore not able to access my image? How would I fix this, and why is my image on "eu.gcr.io" while the machine is on ".gcr.io"? I can't find a method to change either (to move the image to gcr.io, or to move the machine to eu.gcr.io). However, I'm not sure if this is the issue.
Maybe it is an authentication issue with Docker?
A VM basically cannot be "on .gcr.io"; it can run in a non-European region/zone, but that shouldn't be a problem.
From the GCP access-control point of view, the registry is just a Cloud Storage bucket.
So I believe the first thing you need to check is that the VM has access to Google Cloud Storage.
With gcloud:
gcloud compute instances describe <instance-name>
and check whether the VM has a scope to read from devstorage:
serviceAccounts:
- email: ...-compute@developer.gserviceaccount.com
scopes:
- https://www.googleapis.com/auth/devstorage.read_only
- ...
This scope should be in place to read from the registry:
https://www.googleapis.com/auth/devstorage.read_only
If you don't have such a scope on the VM but do have gcloud configured there, you can use gcloud as a credential helper:
gcloud auth configure-docker
as stated in the doc you referred to: https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper
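For reference, a minimal sketch of that flow, reusing the image path from the question (the exact registry hosts registered depend on your gcloud version):

# Registers "gcloud" as a credential helper for gcr.io, eu.gcr.io, us.gcr.io, ...
# in ~/.docker/config.json
gcloud auth configure-docker

# After that, a plain docker pull authenticates transparently
docker pull eu.gcr.io/my-project-name/example001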
The answer is found here:
https://serverfault.com/questions/900026/gcp-no-access-to-container-registry-from-compute-engine
It is the docker command that needs the authorization; the hostname (eu.gcr.io) is not the issue here. I used the 'gcloud docker -- pull ...' command to get the image from the repository onto my VM.
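With the image path from the question, that would be:

gcloud docker -- pull eu.gcr.io/my-project-name/example001

(gcloud docker wraps the docker CLI with gcloud-supplied credentials; it has since been deprecated in favour of gcloud auth configure-docker.)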
After you create a Linux VM on GCP and SSH into it, you have to install the Google Cloud SDK, either with the install scripts or manually.
If you are running Ubuntu, follow the documentation here; if you are installing on Red Hat or CentOS, follow the documentation here. After finishing the Google Cloud SDK installation, you have to run gcloud init to initialize the SDK: just open a terminal, type gcloud init, and configure your profile. After that you have to install Docker:
sudo apt-get -y install docker-ce
sudo systemctl start docker
You need to have access to the registries which you will be pushing to and pulling from.
Configure Docker to use gcloud as a credential helper by running the command:
gcloud auth configure-docker
After that you can pull or push images from/to your registry using the gcloud command with docker, as shown below:
Push: gcloud docker -- push gcr.io/google-containers/example-image:latest
Pull: gcloud docker -- pull gcr.io/google-containers/example-image:latest
I'm at my wits' end trying to get Docker images from Google Container Registry onto a Google Compute Engine instance. (The images I need have been successfully uploaded to GCR.)
I've logged in using gcloud auth login and then tried gcloud docker pull -- us.gcr.io/app-999/app, which results in ERROR: (gcloud.docker) Docker is not installed..
I've tried to authenticate using OAuth and pulling via a normal docker call. I can see my credentials when I look at the file at .docker/config.json. Doing that, it looks like it's going to work, but it ultimately ends like this:
mbname@instance-1 ~ $ docker pull -- us.gcr.io/app-999/app
Using default tag: latest
latest: Pulling from app-999/app
b7f33cc0b48e: Pulling fs layer
43a564ae36a3: Pulling fs layer
b294f0e7874b: Pulling fs layer
eb34a236f836: Waiting
error pulling image configuration: unauthorized: authentication required
which looks like progress, because at least it attempted to download something.
I've tried both of these things on my local machine as well and both methods were successful.
Am I missing something?
Thanks for your help.
P.S. I've also tried loading a container from another registry (Docker Hub) and that worked fine, but I need more than one container and want to keep expenses down.
After contacting Google support they informed me that there is a bug in the CoreOS gcloud alias. This bug is fixed by overwriting the alias in the shell as follows:
alias gcloud='(docker images google/cloud-sdk || docker pull google/cloud-sdk) > /dev/null;docker run -t -i --net=host -v $HOME/.config:/.config -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker google/cloud-sdk gcloud'
I've tried this and it works now.
Docker should be included in the latest versions of gcloud. You can update to the latest version of gcloud by running gcloud components update.
I'm trying to check the installation with docker pull hello-world
But getting the following error:
Pulling repository hello-world
Get https://index.docker.io/v1/repositories/library/hello-world/images: remote error: access denied
I have CentOS 6.5
Docker version 1.7.1, build 786b29d/1.7.1
I'm in a corporate network, but curl https://index.docker.io/v1/repositories/library/hello-world/images works fine.
What might be the issue?
Thanks in advance!
I got this error while I was trying to pull the mongodb image instead of mongo.
So make sure the image name is correct. The very same error message appears for both the run and pull commands.
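If in doubt, a quick search confirms the exact repository name before pulling (mongo is the official image here):

docker search mongo
docker pull mongo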
Had the same problem and error on a host working via a proxy.
In essence: if you are behind an HTTP proxy server, you will need to add the proxy configuration to the Docker systemd service file.
https://docs.docker.com/engine/admin/systemd/
(See the "HTTP proxy" section.)
This helped me.
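For reference, the drop-in file described there looks roughly like this (a sketch; proxy.example.com:80 is a placeholder for your corporate proxy):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=http://proxy.example.com:80/"

Then reload systemd and restart the daemon:

sudo systemctl daemon-reload
sudo systemctl restart docker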
Did you add your user to the docker group?
https://docs.docker.com/engine/installation/linux/centos/#/create-a-docker-group
Otherwise, you need to run the docker command with sudo:
sudo docker pull hello-world
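For completeness, adding your user to the group typically looks like this per the linked docs (you need to log out and back in for it to take effect):

sudo groupadd docker
sudo usermod -aG docker $USER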
I am able to happily pull most images, except the following:
docker pull jwilder/nginx-proxy
The following is the error message:
Using default tag: latest
Pulling repository docker.io/jwilder/nginx-proxy
Network timed out while trying to connect to https://index.docker.io/v1/repositories/jwilder/nginx-proxy/images. You may want to check your internet connection or if you are behind a proxy.
It seems that pulling this particular image is affected by the issues mentioned here: https://github.com/docker/docker/issues/15603
Docker 1.9 could potentially solve this.
I solved it by creating a new Docker machine and pulling the image from there, then doing a docker save and docker load onto my original docker-machine, as sketched below.
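A rough sketch of that workaround, assuming docker-machine with the VirtualBox driver and a throwaway machine named temp:

docker-machine create -d virtualbox temp
eval $(docker-machine env temp)
docker pull jwilder/nginx-proxy
docker save -o nginx-proxy.tar jwilder/nginx-proxy
eval $(docker-machine env default)
docker load -i nginx-proxy.tar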
The following sequence, as mentioned on https://github.com/docker/docker/issues/15603#issuecomment-133151849, also solved my issue:
$ docker-machine stop default
$ docker images -q | xargs docker rmi
$ docker-machine start default